% Source: https://arxiv.org/abs/2206.14911
\title{Minimum Weight Euclidean $(1+\varepsilon)$-Spanners}

\begin{abstract}
Given a set $S$ of $n$ points in the plane and a parameter $\varepsilon>0$, a Euclidean $(1+\varepsilon)$-spanner is a geometric graph $G=(S,E)$ that contains, for all $p,q\in S$, a $pq$-path of weight at most $(1+\varepsilon)\|pq\|$. We show that the minimum weight of a Euclidean $(1+\varepsilon)$-spanner for $n$ points in the unit square $[0,1]^2$ is $O(\varepsilon^{-3/2}\,\sqrt{n})$, and this bound is the best possible. The upper bound is based on a new spanner algorithm in the plane. It improves upon the baseline $O(\varepsilon^{-2}\sqrt{n})$, obtained by combining a tight bound for the weight of a Euclidean minimum spanning tree (MST) on $n$ points in $[0,1]^2$, and a tight bound for the lightness of Euclidean $(1+\varepsilon)$-spanners, which is the ratio of the spanner weight to the weight of the MST. Our result generalizes to Euclidean $d$-space for every constant dimension $d\in \mathbb{N}$: The minimum weight of a Euclidean $(1+\varepsilon)$-spanner for $n$ points in the unit cube $[0,1]^d$ is $O_d(\varepsilon^{(1-d^2)/d}n^{(d-1)/d})$, and this bound is the best possible. For the $n\times n$ section of the integer lattice in the plane, we show that the minimum weight of a Euclidean $(1+\varepsilon)$-spanner is between $\Omega(\varepsilon^{-3/4}\cdot n^2)$ and $O(\varepsilon^{-1}\log(\varepsilon^{-1})\cdot n^2)$. These bounds become $\Omega(\varepsilon^{-3/4}\cdot \sqrt{n})$ and $O(\varepsilon^{-1}\log(\varepsilon^{-1})\cdot \sqrt{n})$ when scaled to a grid of $n$ points in the unit square. In particular, this shows that the integer grid is \emph{not} an extremal configuration for minimum weight Euclidean $(1+\varepsilon)$-spanners.
\end{abstract}

\section{Introduction} \label{sec:intro}
For a set $S$ of $n$ points in a metric space, a graph $G=(S,E)$ is a \emph{$t$-spanner} if $G$ contains, between any two points $p,q\in S$, a $pq$-path of weight at most $t\cdot \|pq\|$, where $t\geq 1$ is the \emph{stretch factor} of the spanner. In other words, a $t$-spanner approximates the true distances between the ${n\choose 2}$ pairs of points up to a factor $t$ distortion. Several optimization criteria have been developed for $t$-spanners for a given parameter $t\geq 1$. Natural parameters are the \emph{size} (number of edges), the \emph{weight} (sum of edge weights), the \emph{maximum degree}, and the \emph{hop-diameter}. Specifically, the \emph{sparsity} of a spanner is the ratio $|E|/|S|$ between the number of edges and vertices; and the \emph{lightness} is the ratio between the weight of a spanner and the weight of an MST on $S$.
In the geometric setting, $S$ is a set of $n$ points in Euclidean $d$-space in constant dimension $d\in \N$.
For every $\eps>0$, there exist $(1+\eps)$-spanners with $O_d(\eps^{1-d})$ sparsity and $O_d(\eps^{-d})$ lightness, and both bounds are the best possible~\cite{LeS19}. In particular, the $\Theta$-graphs, Yao-graphs~\cite{ruppert1991approximating}, gap-greedy and path-greedy spanners provide $(1+\eps)$-spanners of sparsity $O_d(\eps^{1-d})$. For lightness, Das et al.~\cite{DasHN93,DasNS95,NS-book} were the first to construct $(1+\eps)$-spanners of lightness $\eps^{-O(d)}$. Gottlieb~\cite{Gottlieb15} generalized this result to metric spaces with doubling dimension $d$; see also~\cite{BorradaileLW19,FiltserS20}. Recently, Le and Solomon~\cite{LeS19} showed that the greedy $(1+\eps)$-spanner in $\R^d$ has lightness $O(\eps^{-d})$; and so it simultaneously achieves the best possible bounds for both lightness and sparsity. The greedy $(1+\eps)$-spanner algorithm~\cite{althofer1993sparse} generalizes Kruskal's MST algorithm: It sorts the ${n\choose 2}$ edges of $K_n$ by nondecreasing weight, and incrementally constructs a spanner $H$: it adds an edge $uv$ if $H$ does not contain a $uv$-path of weight at most $(1+\eps)\|uv\|$.
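To make the greedy algorithm concrete, here is a minimal Python sketch (our own illustration, not an implementation analyzed in this paper); it tests for a $(1+\eps)$-path with a cutoff-truncated Dijkstra search before inserting each candidate edge.

```python
import heapq
from itertools import combinations
from math import dist

def greedy_spanner(points, eps):
    """Path-greedy (1+eps)-spanner: process candidate edges by nondecreasing
    weight; keep an edge only if the graph so far has no (1+eps)-path."""
    n = len(points)
    adj = {i: {} for i in range(n)}  # adjacency map: neighbor -> edge weight

    def shortest_path(s, t, cutoff):
        # Dijkstra from s, abandoning partial paths heavier than the cutoff.
        best = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > best.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in adj[u].items():
                nd = d + w
                if nd <= cutoff and nd < best.get(v, float("inf")):
                    best[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    E = []
    for u, v in sorted(combinations(range(n), 2),
                       key=lambda e: dist(points[e[0]], points[e[1]])):
        w = dist(points[u], points[v])
        if shortest_path(u, v, (1 + eps) * w) > (1 + eps) * w:
            adj[u][v] = adj[v][u] = w
            E.append((u, v))
    return E
```

On three collinear points, for instance, the two short edges are kept and the long edge is pruned, since the two-hop path already achieves stretch $1$.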
\subparagraph{Lightness versus Minimum Weight.}
Lightness is a convenient optimization parameter, as it is invariant under scaling.
It also provides an approximation ratio for the minimum weight $(1+\eps)$-spanner, as the weight of a Euclidean MST (for short, EMST) is a trivial lower bound on the spanner weight. However, minimizing the lightness is not equivalent to minimizing the spanner weight for a given input instance, as the EMST is highly sensitive to the distribution of the points in $S$. Given that worst-case tight bounds are now available for the lightness, it is time to revisit the problem of approximating the \emph{minimum weight} of a Euclidean $(1+\eps)$-spanner, without using the EMST as an intermediary.
\subparagraph{Euclidean Minimum Spanning Trees.}
For $n$ points in the unit cube $[0,1]^d$, the weight of the EMST is $O_d(n^{1-1/d})$, and this bound is also best possible~\cite{Fe55,SteeleS89}. In particular, a suitably scaled section of the integer lattice attains these bounds up to constant factors. Supowit et al.~\cite{SupowitRP83} proved similar bounds for the minimum weight of other popular graphs, such as spanning cycles and perfect matchings on $n$ points in the unit cube $[0,1]^d$.
\subparagraph{Extremal Configurations for Euclidean $(1+\eps)$-Spanners.}
The tight $O_d(\eps^{-d})$ bound on lightness~\cite{LeS19} implies that for every set of $n$ points in $[0,1]^d$,
there is a Euclidean $(1+\eps)$-spanner of weight $O(\eps^{-d} n^{1-1/d})$.
However, the combination of two tight bounds need not be tight; and it is unclear which $n$-point configurations require the heaviest $(1+\eps)$-spanners. We show that this bound can be improved to $O(\eps^{-3/2}\sqrt{n})$ in the plane. Furthermore, the extremal point configurations are not an integer grid, but an asymmetric grid.
\subparagraph{Contributions.}
We obtain a tight upper bound on the minimum weight of a Euclidean $(1+\eps)$-spanner for $n$ points in $[0,1]^d$.
\begin{theorem}\label{thm:cube}
For constant $d\geq 2$, every set of $n$ points in the unit cube $[0,1]^d$ admits a Euclidean $(1+\eps)$-spanner
of weight $O_d(\eps^{(1-d^2)/d}n^{(d-1)/d})$, and this bound is the best possible.
\end{theorem}
The upper bound is established by a new spanner algorithm, \textsc{SparseYao}, that sparsifies the classical Yao-graph using novel geometric insight (Section~\ref{sec:alg}). The weight analysis is based on a charging scheme that charges the weight of the spanner to empty regions (Section~\ref{sec:square}).
The lower bound construction is the scaled lattice with basis vectors of length $\sqrt{\eps}$ and $\frac{1}{\sqrt{\eps}}$ (Section~\ref{sec:lower}); and not (scaled copies of) the integer lattice $\Z^d$.
We analyze the minimum weight of Euclidean $(1+\eps)$-spanners for the integer grid in the plane.
\begin{theorem}\label{thm:grid}
For every $n\in \N$, the minimum weight of a $(1+\eps)$-spanner for the $n\times n$ section of the integer lattice is between $\Omega(\eps^{-3/4}n^2)$ and $O(\eps^{-1}\log(\eps^{-1})\cdot n^2)$.
\end{theorem}
When scaled to $n$ points in $[0,1]^2$, the upper bound confirms that a scaled section of the integer lattice does not maximize the weight of Euclidean $(1+\eps)$-spanners.
\begin{corollary}\label{cor:grid}
For every $n\in \N$, the minimum weight of a $(1+\eps)$-spanner for $n$ points in a scaled section of the integer grid in $[0,1]^2$ is
between $\Omega(\eps^{-3/4}\sqrt{n})$ and $O(\eps^{-1}\log(\eps^{-1})\sqrt{n})$.
\end{corollary}
The lower bound is derived from two elementary criteria (the empty ellipse condition and the empty slab condition) for an edge to be present in every $(1+\eps)$-spanner (Section~\ref{sec:lower}). The upper bound is based on analyzing the \textsc{SparseYao} algorithm from Section~\ref{sec:alg}, combined with results from number theory on Farey sequences (Section~\ref{sec:grid}). Closing the gap between the lower and upper bounds in Theorem~\ref{thm:grid} remains an open problem. Higher dimensional generalizations are also left for future work. In particular, multidimensional variants of Farey sequences are currently not well understood.
\subparagraph{Further Related Previous Work.}
Many algorithms have been developed for constructing $(1+\eps)$-spanners for $n$ points in $\R^d$~\cite{ABUAFFASH2022101807,ChanHJ20,DasHN93,DasNS95,ElkinS15,GudmundssonLN02,LeSolomon-unified1,LevcopoulosNS02,RaoS98}, designed for one or more optimization criteria (lightness, sparsity, hop diameter, maximum degree, and running time). A comprehensive survey up to 2007 is in the book by Narasimhan and Smid~\cite{NS-book}. We briefly review previous constructions pertaining to the \emph{minimum weight} for $n$ points in the unit square (i.e., $d=2$). As noted above, the recent worst-case tight bound on the lightness~\cite{LeS19} implies that the greedy algorithm returns a $(1+\eps)$-spanner of weight $O(\eps^{-2}\|\MST\|)=O(\eps^{-2}\sqrt{n})$.
A classical method for constructing $(1+\eps)$-spanners uses \emph{well-separated pair decompositions} (\emph{WSPD}) with a hierarchical clustering (e.g., quadtrees); see~\cite[Chap.~3]{Sariel}. Due to a hierarchy of depth $O(\log n)$, this technique has been adapted broadly to dynamic, kinetic, and reliable spanners~\cite{BuchinHO20,ChanHJ20,gao2006deformable,Roditty12}. However, the weight of the resulting $(1+\eps)$-spanner for $n$ points in $[0,1]^2$ is $O(\eps^{-3}\sqrt{n}\cdot\log n)$~\cite{gao2006deformable}. The $O(\log n)$ factor is due to the depth of the hierarchy; and it cannot be removed for any spanner with hop-diameter $O(\log n)$~\cite{AgarwalWY05,DinitzES10,SolomonE14}.
Yao-graphs and $\Theta$-graphs are geometric proximity graphs, defined as follows. For a constant $k\geq 3$, consider $k$ cones of aperture $2\pi/k$ around each point $p\in S$; in each cone, connect $p$ to the ``closest'' point $q\in S$. For Yao-graphs, $q$ minimizes the Euclidean distance $\|pq\|$, and for $\Theta$-graphs $q$ is the point that minimizes the length of the orthogonal projection of $pq$ to the angle bisector of the cone. It is known that both $\Theta$- and Yao-graphs are $(1+\eps)$-spanners for a parameter $k\in \Theta(\eps^{-1})$, and this bound is the best possible~\cite{NS-book}. However, if we place $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ equally spaced points on opposite sides of the unit square, then the weight of both graphs with parameter $k=\Theta(\eps^{-1})$ will be $\Theta(\eps^{-1}\, n)$.
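For illustration, a plain Yao-graph can be computed directly from this definition; the following Python sketch (our own, with simple angle bucketing for the cones) uses brute force rather than an optimized construction.

```python
from math import atan2, dist, floor, pi

def yao_graph(points, k):
    """Yao-graph Y_k: around each point p, partition the plane into k cones of
    aperture 2*pi/k and connect p to a nearest other point in each cone."""
    edges = set()
    for i, p in enumerate(points):
        nearest = {}  # cone index -> (distance, index of nearest point)
        for j, q in enumerate(points):
            if i == j:
                continue
            # bucket the direction of pq into one of k half-open cones
            angle = atan2(q[1] - p[1], q[0] - p[0]) % (2 * pi)
            cone = floor(angle / (2 * pi / k)) % k
            d = dist(p, q)
            if cone not in nearest or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        for d, j in nearest.values():
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)
```

With few cones the graph degenerates toward a nearest-neighbor graph; with $k=\Theta(\eps^{-1})$ cones it is a $(1+\eps)$-spanner, as noted above.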
\subparagraph{Organization.}
We start with lower bound constructions in the plane (Section~\ref{sec:lower}) as a warm-up exercise. The two elementary geometric criteria build intuition and highlight the significance of $\sqrt{\eps}$ as the ratio between the two axes of the ellipse that contains all paths of stretch at most $1+\eps$ between its foci. Section~\ref{sec:alg} presents Algorithm \textsc{SparseYao} and its stretch analysis in the plane. Its weight analysis for $n$ points in $[0,1]^2$ is in Section~\ref{sec:square}. The generalization of the algorithm and its analysis to higher dimensions is fairly straightforward, and is sketched in Section~\ref{sec:d-space}. We analyze the performance of Algorithm \textsc{SparseYao} for the $n\times n$ grid, after a brief review of Farey sequences, in Section~\ref{sec:grid}.
We conclude with a selection of open problems in Section~\ref{sec:con}.
\section{Lower Bounds in the Plane}
\label{sec:lower}
We present lower bounds for the minimum weight of a $(1+\eps)$-spanner for the $n\times n$ section of the integer lattice (Section~\ref{ssec:gridLB}); and for $n$ points in a unit square $[0,1]^2$ (Section~\ref{ssec:squareLB}).
Let $S\subset \R^2$ be a finite point set. We observe two elementary conditions that guarantee that an edge $ab$ is present \emph{in every} $(1+\eps)$-spanner for $S$. Two points, $a,b\in S$, determine a (closed) line segment $ab=\conv \{a,b\}$; the relative interior of $ab$ is denoted by $\mathrm{int}(ab)=ab\setminus \{a,b\}$.
Let $\mathcal{E}_{ab}$ denote the ellipse with foci $a$ and $b$, and major axis of length $(1+\eps)\|ab\|$; and let $\mathcal{L}_{ab}$ denote the slab bounded by the two tangent lines of $\mathcal{E}_{ab}$ that are parallel to $ab$; see Fig.~\ref{fig:ellipse}.
Note that the width of $\mathcal{L}_{ab}$ equals the minor axis of $\mathcal{E}_{ab}$,
which is $((1+\eps)^2-1^2)^{1/2}\|ab\|=(2\eps+\eps^2)^{1/2}\|ab\|>\sqrt{2\eps}\|ab\|$.
\begin{itemize}\itemsep 0pt
\item \textbf{Empty ellipse condition}: $S\cap \mathcal{E}_{ab}=\{a,b\}$.
\item \textbf{Empty slab condition}: $S\cap \mathrm{int}(ab)=\emptyset$ and all points in $S\cap \mathcal{L}_{ab}$ are on the line $ab$.
\end{itemize}
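Both conditions are straightforward to test computationally; the following sketch (our own, with an explicit numerical tolerance for collinearity) checks them directly from the definitions above, using the slab width $\sqrt{2\eps+\eps^2}\,\|ab\|$.

```python
from math import dist, sqrt

def empty_ellipse(S, a, b, eps):
    """True iff no third point x of S satisfies |ax| + |xb| <= (1+eps)|ab|."""
    return all(x in (a, b) or dist(a, x) + dist(x, b) > (1 + eps) * dist(a, b)
               for x in S)

def empty_slab(S, a, b, eps):
    """True iff the open segment ab avoids S, and every point of S inside the
    slab of width sqrt(2*eps + eps**2)*|ab| lies on the line ab."""
    ax, ay = a
    bx, by = b
    L = dist(a, b)
    half_width = sqrt(2 * eps + eps ** 2) * L / 2
    for px, py in S:
        # signed distance from (px, py) to line ab, via the cross product
        d = ((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / L
        # parameter of the orthogonal projection onto segment ab
        t = ((bx - ax) * (px - ax) + (by - ay) * (py - ay)) / L ** 2
        on_line = abs(d) < 1e-12
        if on_line and 0 < t < 1 and (px, py) not in (a, b):
            return False  # interior of ab is not empty
        if not on_line and abs(d) <= half_width:
            return False  # off-line point inside the slab
    return True
```

For example, a point slightly off the segment $ab$ violates both conditions, while a far-away point violates neither.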
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{ellipse}
\end{center}
\caption{The ellipse $\mathcal{E}_{ab}$ with foci $a$ and $b$, and major axis of length $(1+\eps)\|ab\|$.}
\label{fig:ellipse}
\end{figure}
\begin{observation}\label{obs:elementary}
Let $S\subset \R^2$, $G=(S,E)$ a $(1+\eps)$-spanner for $S$, and $a,b\in S$.
\begin{enumerate}
\item If $ab$ meets the empty ellipse condition, then $ab\in E$.
\item If $S$ is a section of $\Z^2$, $\eps<1$, and $ab$ meets the empty slab condition, then $ab\in E$.
\end{enumerate}
\end{observation}
\begin{proof}
The ellipse $\mathcal{E}_{ab}$ contains all points $p\in \R^2$ satisfying $\|ap\|+\|pb\|\leq (1+\eps)\|ab\|$. Thus, by the triangle inequality, $\mathcal{E}_{ab}$ contains every $ab$-path of weight at most $(1+\eps)\|ab\|$. The empty ellipse condition implies that such a path cannot have interior vertices.
If $S$ is the integer lattice, then $S\cap \mathrm{int}(ab)=\emptyset$ implies that $\overrightarrow{ab}$ is a primitive vector (i.e., the $x$- and $y$-coordinates of $\overrightarrow{ab}$ are relatively prime), hence the distance between any two lattice points along the line $ab$ is at least $\|ab\|$. Given that $\mathcal{E}_{ab}\subset \mathcal{L}_{ab}$, the empty slab condition now implies the empty ellipse condition.
\end{proof}
\subsection{Lower Bounds for the Grid}
\label{ssec:gridLB}
\begin{lemma}\label{lem:LBgrid}
For every $n\in \N$ with $n\geq 2\,\eps^{-1/4}$, the weight of every $(1+\eps)$-spanner for the $n\times n$ section of the integer lattice is $\Omega(\eps^{-3/4}n^2)$.
\end{lemma}
\begin{proof}
Let $S=\{(s_1,s_2)\in \Z^2: 0\leq s_1,s_2<n\}$ and $A=\{(a_1,a_2)\in \Z^2: 0\leq a_1,a_2<\lceil \eps^{-1/4}\rceil/2\}$.
Denote the origin by $o=(0,0)$. For every grid point $a\in A$, we have $\|oa\|\leq \eps^{-1/4}/\sqrt{2}$.
A vector $\overrightarrow{oa}$ is \emph{primitive} if $a=(a_1,a_2)$ and $\mathrm{gcd}(a_1,a_2)=1$. We show that every primitive vector $\overrightarrow{oa}$ with $a\in A$ satisfies the empty slab condition. It is clear that $S\cap \mathrm{int}(oa)=\emptyset$. Suppose that $s\in S$ but it is not on the line spanned by $oa$. By Pick's theorem, $\area(\Delta(oas))\geq \frac12$. Consequently, the distance between $s$ and the line $oa$ is at least $\|oa\|^{-1}\geq \sqrt{2}\cdot \eps^{1/4} \geq 2\, \eps^{1/2}\, \|oa\|$;
and so $s\notin \mathcal{L}_{oa}$, as claimed.
By elementary number theory, $\overrightarrow{oa}$ is primitive for $\Theta(|A|)$ points $a\in A$: the density of primitive vectors among all lattice points is $6/\pi^2+o(1)$. Since a constant fraction of these vectors have length $\Theta(\eps^{-1/4})$, the total weight of the primitive vectors $\overrightarrow{oa}$, $a\in A$, is $\Theta(|A|\cdot \eps^{-1/4})=\Theta(\eps^{-3/4})$.
The primitive edges $oa$, $a\in A$, form a star centered at the origin. The translates of this star to other points $s\in S$, with $0\leq s_1,s_2\leq \frac{n}{2}\leq n - \lceil \eps^{-1/4}\rceil$ are also present in every $(1+\eps)$-spanner for $S$. As every edge is part of at most two such stars, summation over $\Theta(n^2)$ stars yields a lower bound of $\Omega(\eps^{-3/4}n^2)$.
\end{proof}
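The counting step in the proof can be verified numerically; the sketch below (our own) sums the lengths of the primitive vectors $\overrightarrow{oa}$ with $a\in A$ and exhibits the $\Theta(\eps^{-3/4})$ growth: shrinking $\eps$ by a factor of $10^4$ should multiply the total by roughly $(10^4)^{3/4}=1000$.

```python
from math import ceil, gcd, hypot

def primitive_star_weight(eps):
    """Total length of the primitive vectors oa, where a ranges over the grid
    A = { (a1, a2) : 0 <= a1, a2 < ceil(eps**(-1/4)) / 2 }, as in the proof."""
    bound = ceil(eps ** (-1 / 4)) / 2
    total = 0.0
    for a1 in range(ceil(bound)):
        for a2 in range(ceil(bound)):
            # (a1, a2) primitive iff gcd(a1, a2) = 1; skip the origin
            if (a1, a2) != (0, 0) and gcd(a1, a2) == 1:
                total += hypot(a1, a2)
    return total
```

The assertion below only tests the order of growth, with generous constants, since the asymptotics are rough at small grid sizes.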
\begin{remark}
The lower bound in Lemma~\ref{lem:LBgrid} derives from the total weight of primitive vectors $\overrightarrow{oa}$ with $\|oa\|\leq O(\eps^{-1/4})$, which satisfy the empty slab condition. There are additional primitive vectors that satisfy the empty ellipse condition (e.g., $\overrightarrow{oa}$ with $a=(1,a_2)$ for all $|a_2|<\eps^{-1/3}$). However, it is unclear how to account for all vectors satisfying the empty ellipse condition, and whether their overall weight would improve the lower bound in Lemma~\ref{lem:LBgrid}.
\end{remark}
\begin{remark}
The empty ellipse and empty slab conditions each imply that an edge \emph{must} be present in every $(1+\eps)$-spanner for $S$. It is unclear how the total weight of such ``must have'' edges compares to the minimum weight of a $(1+\eps)$-spanner.
\end{remark}
\subsection{Lower Bounds in the Unit Square}
\label{ssec:squareLB}
\begin{lemma}\label{lem:squareLB}
For every $n\in \N$ and $\eps\in (0,1]$, there exists a set $S$ of $n$ points in $[0,1]^2$ such that every $(1+\eps)$-spanner for $S$ has weight $\Omega(\eps^{-3/2}\sqrt{n})$.
\end{lemma}
\begin{proof}
First, let $S_0$ be a set of $2m$ points, where $m=\lfloor \eps^{-1}/2\rfloor$, with $m$ equally spaced points on each of two opposite sides of a unit square. By the empty ellipse condition, every $(1+\eps)$-spanner for $S_0$ contains the complete bipartite graph $K_{m,m}$. The weight of each edge of $K_{m,m}$ is between $1$ and $\sqrt{2}$, and so the weight of every $(1+\eps)$-spanner for $S_0$ is $\Omega(\eps^{-2})$.
For $n\geq \eps^{-1}$, consider an $\lfloor \sqrt{\eps n}\rfloor\times \lfloor \sqrt{\eps n}\rfloor$ grid of unit squares, and insert a translated copy of $S_0$ in each unit square. Let $S$ be the union of these $\Theta(\eps n)$ copies of $S_0$; and note that $|S|=\Theta(n)$. A $(1+\eps)$-spanner for each copy of $S_0$ still requires a complete bipartite graph of weight $\Omega(\eps^{-2})$. Overall, the weight of every $(1+\eps)$-spanner for $S$ is $\Omega(\eps^{-1}n)$.
Finally, scale $S$ down by a factor of $\lfloor \sqrt{\eps n}\rfloor$ so that it fits in a unit square. The weight of every edge scales by the same factor, and the weight of a $(1+\eps)$-spanner for the resulting $n$ points in $[0,1]^2$ is $\Omega(\eps^{-3/2}\, \sqrt{n})$, as claimed.
\end{proof}
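The construction in the proof is easy to generate explicitly; the following sketch (our own; the function and parameter names are ours) produces the scaled point set of the lemma.

```python
from math import floor, sqrt

def lower_bound_points(n, eps):
    """Point set of the lower-bound construction: a g x g grid of unit squares
    (g = floor(sqrt(eps*n))), each carrying m = floor(1/(2*eps)) equally
    spaced points on its left and its right side, scaled into [0,1]^2."""
    m = max(1, floor(1 / (2 * eps)))
    g = max(1, floor(sqrt(eps * n)))
    pts = []
    for i in range(g):        # column of the unit square
        for j in range(g):    # row of the unit square
            for t in range(m):
                y = (j + (t + 0.5) / m) / g
                pts.append((i / g, y))        # left side of square (i, j)
                pts.append(((i + 1) / g, y))  # right side of square (i, j)
    return pts
```

The set has $2m g^2=\Theta(n)$ points (adjacent squares may share side coordinates), all inside the unit square.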
\begin{remark}
The points in the lower bound construction above lie on $O(\sqrt{\eps n})$ axis-parallel lines in $[0,1]^2$, and so the weight of their MST is $O(\sqrt{\eps n})$. Recall that the lightness of the greedy $(1+\eps)$-spanner is $O(\eps^{-d}\log \eps^{-1})$~\cite{LeS19}. For $d=2$, it yields a $(1+\eps)$-spanner of weight $O(\eps^{-2}\log \eps^{-1})\cdot \|\MST(S)\|=O(\eps^{-3/2}\log (\eps^{-1}) \sqrt{n} )$.
\end{remark}
\section{Spanner Algorithm: Sparse Yao-Graphs}
\label{sec:alg}
Let $S$ be a set of $n$ points in the plane and $\eps\in (0,\frac19)$. As noted above, the Yao-graph $Y_k(S)$ with $k=\Theta(\eps^{-1})$ cones per vertex is a $(1+\eps)$-spanner for $S$.
We describe a new algorithm, \textsc{SparseYao}$(S,\eps)$, that computes a subgraph of a Yao-graph $Y_k(S)$ (Section~\ref{ssec:alg}); and show that it returns a $(1+\eps)$-spanner for $S$ (Section~\ref{ssec:stretch}). Later, we use this algorithm for $n$ points in the unit square (Section~\ref{sec:square}); and for an $n\times n$ section of the integer lattice (Section~\ref{sec:grid}).
Our algorithm starts with a Yao-graph that is a $(1+\frac{\eps}{2})$-spanner, in order to leave room for minor loss in the stretch factor due to sparsification. The basic idea is that instead of cones of aperture $2\pi/k=\Theta(\eps)$, cones of much larger aperture $\Theta(\sqrt{\eps})$ suffice in some cases.
(This idea is fleshed out in Section~\ref{ssec:stretch}.)
The angle $\sqrt{\eps}$ then allows us to charge the weight of the resulting spanner to the area of empty regions (specifically, to an empty section of a cone) in Section~\ref{sec:square}.
\subsection{Sparse Yao-Graph Algorithm}
\label{ssec:alg}
We present an algorithm that computes a subgraph of a Yao-graph for $S$. It starts with cones of aperture $\Theta(\sqrt{\eps})$, and refines them to cones of aperture $\Theta(\eps)$. We connect each point
$p\in S$ to the closest points in the larger cones, and use the smaller cones only when ``necessary.''
To specify when exactly the smaller cones are used, we define two geometric regions that will also play crucial roles in the stretch and weight analyses.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.7\textwidth]{wedge2}
\end{center}
\caption{Wedges $W_1$ and $W_2$, line segment $A(p,q)$,
and regions $\widehat{A}(p,q)$, $B(p,q)$, and $\widehat{B}(p,q)$ for $p,q\in S$.}
\label{fig:wedge}
\end{figure}
\subparagraph{Definitions.}
Let $p, q\in S$ be distinct points; refer to Fig.~\ref{fig:wedge}. Let $A(p,q)$ be the line segment of length $\frac{\sqrt{\eps}}{2}\,\|pq\|$ on the line $pq$ with one endpoint at $p$ but interior-disjoint from the ray $\overrightarrow{pq}$; and $\widehat{A}(p,q)$ the set of points in $\R^2$ within distance $\frac{\eps}{16}\,\|pq\|$ from $A(p,q)$.
Let $W_1$ be the cone with apex $p$, aperture $\frac12\cdot \sqrt{\eps}$, and symmetry axis $\overrightarrow{pq}$; and let $W_2$ be the cone with apex $q$, aperture $\sqrt{\eps}$, and the same symmetry axis $\overrightarrow{pq}$. Let $B(p,q)=W_1\cap W_2$.
Finally, let $\widehat{B}(p,q)$ be the set of points in $\R^2$ within distance at most $\frac{\eps}{8}\,\|pq\|$ from $B(p,q)$.
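The regions defined above admit simple membership tests; the following Python sketch (our own helper names) implements $\widehat{A}(p,q)$ and $B(p,q)$ directly from the definitions.

```python
from math import acos, dist, hypot, sqrt

def _point_segment_distance(x, s, t):
    """Distance from point x to the closed segment st."""
    dx, dy = t[0] - s[0], t[1] - s[1]
    L2 = dx * dx + dy * dy
    u = 0.0 if L2 == 0 else max(0.0, min(1.0, ((x[0] - s[0]) * dx + (x[1] - s[1]) * dy) / L2))
    return hypot(x[0] - (s[0] + u * dx), x[1] - (s[1] + u * dy))

def in_A_hat(x, p, q, eps):
    """Is x within eps/16*|pq| of A(p,q), the segment of length
    (sqrt(eps)/2)*|pq| on line pq starting at p and pointing away from q?"""
    L = dist(p, q)
    ux, uy = (p[0] - q[0]) / L, (p[1] - q[1]) / L  # unit vector from q towards p
    a_end = (p[0] + ux * sqrt(eps) / 2 * L, p[1] + uy * sqrt(eps) / 2 * L)
    return _point_segment_distance(x, p, a_end) <= eps / 16 * L

def _in_cone(x, apex, axis, half_aperture):
    """Is x inside the cone with the given apex, unit axis, and half-aperture?"""
    vx, vy = x[0] - apex[0], x[1] - apex[1]
    n = hypot(vx, vy)
    if n == 0:
        return True
    c = max(-1.0, min(1.0, (vx * axis[0] + vy * axis[1]) / n))
    return acos(c) <= half_aperture

def in_B(x, p, q, eps):
    """Is x in B(p,q), the intersection of W1 (apex p, aperture sqrt(eps)/2)
    and W2 (apex q, aperture sqrt(eps)), both with symmetry axis pq?"""
    L = dist(p, q)
    axis = ((q[0] - p[0]) / L, (q[1] - p[1]) / L)
    return (_in_cone(x, p, axis, sqrt(eps) / 4) and
            _in_cone(x, q, axis, sqrt(eps) / 2))
```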
We show below (cf.~Lemma~\ref{lem:technical+}) that if we add edge $pq$ to the spanner, then we do not need any of the edges $ab$ with $a\in \widehat{A}(p,q)$ and $b\in \widehat{B}(p,q)$. We can now present our algorithm.
\subparagraph{Algorithm \textsc{SparseYao}$(S,\eps)$.}
Input: a set $S\subset \R^2$ of $n$ points, and $\eps\in (0,\frac19)$.
\noindent\textbf{Preprocessing Phase: Yao-graphs.}
Subdivide $\R^2$ into $k:=\lceil 16\,\pi/\sqrt{\eps}\rceil$ congruent cones of aperture $2\pi/k\leq \frac18\cdot \sqrt{\eps}$ with apex at the origin, denoted $C_1,\ldots ,C_k$. For $i\in \{1,\ldots ,k\}$, let $\overrightarrow{r}_i$ be the symmetry axis of $C_i$, directed from the origin towards the interior of $C_i$.
For each $i\in \{1,\ldots , k\}$, subdivide $C_i$ into $k$ congruent cones of aperture $2\pi/k^2\leq \eps/8$, denoted $C_{i,1},\ldots , C_{i,k}$; see Fig.~\ref{fig:Yao}. For each point $s\in S$, let $C_i(s)$ and $C_{i,j}(s)$, resp., be the translates of cones $C_i$ and $C_{i,j}$ to apex $s$.
For all $s\in S$ and $i\in \{1,\ldots ,k\}$, let $q_i(s)$ be a closest point to $s$ in $C_i(s)\cap (S\setminus \{s\})$; and for all $j\in \{1,\ldots , k\}$, let $q_{i,j}(s)$ be a closest point in $C_{i,j}(s)\cap (S\setminus \{s\})$; if such points exist.
For each $i\in\{1,\ldots , k\}$, let $L_i$ be the list of all ordered pairs $(s,q_i(s))$ sorted in decreasing order of the orthogonal projection of $s$ to the directed line $\overrightarrow{r}_i$; ties are broken arbitrarily.
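The preprocessing phase can be sketched as follows (a brute-force quadratic-time illustration of ours, using only the coarse cones $C_i$, rather than the range-searching implementation discussed at the end of this section).

```python
from math import atan2, ceil, cos, dist, pi, sin, sqrt

def yao_preprocess(points, eps):
    """Preprocessing of SparseYao (brute force): k = ceil(16*pi/sqrt(eps))
    cones per point; record a nearest point q_i(s) in each nonempty cone
    C_i(s), then sort each list L_i by decreasing projection onto axis r_i."""
    k = ceil(16 * pi / sqrt(eps))
    width = 2 * pi / k
    nearest = {}  # (index of s, cone index i) -> index of q_i(s)
    for a, s in enumerate(points):
        for b, t in enumerate(points):
            if a == b:
                continue
            i = int((atan2(t[1] - s[1], t[0] - s[0]) % (2 * pi)) / width) % k
            if (a, i) not in nearest or dist(s, t) < dist(s, points[nearest[a, i]]):
                nearest[a, i] = b
    lists = {i: [] for i in range(k)}
    for (a, i), b in nearest.items():
        lists[i].append((a, b))
    for i in range(k):
        # axis r_i of cone C_i; sort by decreasing projection of s onto r_i
        rx, ry = cos((i + 0.5) * width), sin((i + 0.5) * width)
        lists[i].sort(key=lambda ab: points[ab[0]][0] * rx + points[ab[0]][1] * ry,
                      reverse=True)
    return k, lists
```

Each point contributes one pair $(s,q_i(s))$ per nonempty cone, so the lists hold at most one pair per point and cone.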
\noindent\textbf{Main Phase: Computing a Spanner.}
Initialize an empty graph $G=(S,E)$ with $E:=\emptyset$.
\begin{enumerate}
\item For all $i\in \{1,\ldots , k\}$, do:
\begin{itemize}
\item While the list $L_i$ is nonempty, do:
\begin{enumerate}
\item Let $(p,q)$ be the first ordered pair in $L_i$.
\item Add (the unordered edge) $pq$ to $E$.
\item For all $i'\in \{i-2,\ldots , i+2\}$ and $j\in \{1,\ldots ,k\}$, do:\\
If $\|pq_i(p)\|\leq \|pq_{i',j}(p)\|$ and $q_{i',j}(p)\notin B(p,q)$,
then add $p q_{i',j}(p)$ to $E$.
\item For all $s\in \widehat{A}(p,q)$, including $s=p$, delete the pair $(s,q_i(s))$ from $L_i$.
\end{enumerate}
\end{itemize}
\item Return $G=(S,E)$.
\end{enumerate}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{Yao}
\end{center}
\caption{Cones $C_i(s)$ and $C_{i,j}(s)$ for a point $s\in S$, with $k=6$.}
\label{fig:Yao}
\end{figure}
It is clear that the runtime of Algorithm \textsc{SparseYao} is polynomial in $n$ in the RAM model of computation.
In particular, the runtime is dominated by the preprocessing phase that constructs the Yao-graph with $O(\eps^{-1}n)$ edges: finding the closest points $q_i(s)$ and $q_{i,j}(s)$ is supported by standard range searching data structures~\cite{Aga17a}. The main phase of the algorithm computes a subgraph of $Y_{k^2}(S)$ in $O(\eps^{-1}n)$ time. Optimizing the runtime, however, is beyond the scope of this paper.
\subsection{Stretch Analysis}
\label{ssec:stretch}
In this section, we show that $G=\textsc{SparseYao}(S,\eps)$ is a $(1+\eps)$-spanner for $S$. In the preprocessing phase, Algorithm \textsc{SparseYao} computes a Yao-graph with $k^2=\Theta(\eps^{-1})$ cones. The following four lemmas justify that we can omit some of the edges $sq_{i,j}$ from $G$.
The first two technical lemmas show that \emph{if} $G$ already contains $(1+\eps)$-paths from $a$ to $p$ and from $q$ to $b$, then we can concatenate them with the edge $pq$ to obtain a $(1+\eps)$-path from $a$ to $b$. In Lemma~\ref{lem:technical}, we assume that $a\in A(p,q)$ and $b\in B(p,q)$; and Lemma~\ref{lem:technical+} handles the general case where $a\in \widehat{A}(p,q)$ and $b\in \widehat{B}(p,q)$. In inequality~\eqref{eq:technical} below, we use $\left(1+\frac{\eps}{3}\right)\|pq\|$ instead of $\|pq\|$ to absorb further error terms in the general case.
\begin{lemma}\label{lem:technical}
For all $a\in A(p,q)$ and $b\in B(p,q)$, we have
\begin{equation}\label{eq:technical}
(1+\eps)\|ap\|+\left(1+\frac{\eps}{3}\right)\|pq\|+(1+\eps)\|qb\|\leq (1+\eps)\|ab\|.
\end{equation}
\end{lemma}
\begin{proof}
We start with three simplifying assumptions.
\noindent (i) We may assume that $p$ is the origin, $q$ is on the positive $x$-axis, and $b$ is on or above the $x$-axis, by applying a suitable congruence, if necessary. In particular, this implies that $A(p,q)$ is a line segment on the nonpositive $x$-axis; see Fig.~\ref{fig:wedge}.
\noindent (ii) We may assume w.l.o.g.\ that $b$ is in the boundary $\partial B(p,q)$ of $B(p,q)$, since if we rotate the segment $qb$ around $q$, then the left hand side of \eqref{eq:technical} does not change, but the right hand side is minimized for $b\in\partial B(p,q)$.
\noindent (iii) Furthermore, we may assume w.l.o.g.\ that $a=p$ if we establish
the following slightly stronger inequality for $a=p$ and $b\in B(p,q)$:
\begin{equation}\label{eq:technical-}
\left(1+\frac{\eps}{2}\right)\|pq\|+(1+\eps)\|qb\|\leq (1+\eps)\|pb\|.
\end{equation}
Indeed, if $a\neq p$, we easily show that \eqref{eq:technical-} implies Lemma~\ref{lem:technical}.
Note that $a\in A(p,q)$ implies $B(p,q)\subseteq B(a,q)$ since $a\in A(p,q)$ lies on the symmetry axis of $B(p,q)$, as well as the wedges $W_1$ and $W_2$. Then \eqref{eq:technical-} becomes
\begin{align}\label{eq:reduction}
\left(1+\frac{\eps}{2}\right)\|aq\|+(1+\eps)\|qb\| &\leq (1+\eps)\|ab\| \nonumber\\
\left(1+\frac{\eps}{2}\right)\left(\|ap\| + \|pq\|\right)+(1+\eps)\|qb\|&\leq (1+\eps)\|ab\| \nonumber\\
\left(\|ap\|+\frac{\eps}{2}\, \|ap\| + \frac{\eps}{6}\, \|pq\|\right)
+ \left(1+\frac{\eps}{3}\right)\|pq\|+(1+\eps)\|qb\|&\leq (1+\eps)\|ab\|\nonumber\\
(1+\eps)\|ap\|+\left(1+\frac{\eps}{3}\right)\|pq\|+(1+\eps)\|qb\|&\leq (1+\eps)\|ab\|,
\end{align}
as $\|ap\|\leq \frac{\sqrt{\eps}}{2}\, \|pq\| \leq \frac16\,\|pq\|$ for $\eps<\frac19$.
Let us review the Taylor estimates of some of the trigonometric functions.
For the secant, we use the upper bound
$\sec x = \frac{1}{\cos x}= 1+\frac{x^2}{2}+\frac{5x^4}{24}+\ldots < 1+x^2$ for $0<x<\frac12$.
For the tangent, we use both upper and lower bounds $x\leq \tan x\leq x+\frac{x^3}{3}+\frac{2x^5}{15}+\ldots < x+\frac{x^3}{2}$.
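These estimates are easy to sanity-check numerically over the relevant range $0<x<\frac12$ (a check of ours, not part of the proof):

```python
from math import cos, tan

def taylor_bounds_hold(x):
    """Check sec x < 1 + x^2 and x <= tan x < x + x^3/2 at a given
    x in (0, 1/2), i.e., the two estimates used in the stretch analysis."""
    return (1 / cos(x) < 1 + x * x) and (x <= tan(x) < x + x ** 3 / 2)
```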
To prove~\eqref{eq:technical-},
we distinguish between two cases based on whether $b\in \partial W_1$ or $b\in \partial W_2$.
Let $c$ be the intersection point of $\partial W_1$ and $\partial W_2$ above the $x$-axis.
Since $\angle pcq = \angle qpc = \sqrt{\eps}/4$, the triangle $\Delta{pqc}$ is isosceles
with $\|pq\|=\|qc\|$.
\subparagraph{Case~1: $b\in qc$ (Fig.~\ref{fig:cases}(left)).}
Note that $0\leq \angle qpb\leq \sqrt{\eps}/4$. Assume
$\angle qpb = t\cdot \sqrt{\eps}/4$ for some $t\in [0,1]$.
Since the interior angles of triangle $\Delta{pqb}$ add up to $\pi$,
then $\angle qbp = (2-t)\sqrt{\eps}/4$.
Let $q^\perp$ be the orthogonal projection of $q$ to $pb$. Then $\|pb\|=\|pq^\perp\|+\|q^\perp b\|$.
Since $\angle qpb \leq \angle qbp$ implies $\angle qpq^\perp \leq \angle qb q^\perp$,
we have $\|q^\perp b\|\leq \|p q^\perp\|$,
and so $\|q^\perp b\|\leq \frac12\, \|pb\|$.
We are now ready to prove \eqref{eq:technical-} in Case~1:
\begin{align*}
\left(1+\frac{\eps}{2}\right)\|pq\|+(1+\eps)\|qb\|
&= \left(1+\frac{\eps}{2}\right) \|pq^\perp \| \sec \angle qpb
+(1+\eps) \|q^\perp b\| \sec \angle qbp \\
&= \left(1+\frac{\eps}{2}\right) \|p q^\perp\| \sec \frac{t\,\sqrt{\eps}}{4}
+(1+\eps) \|q^\perp b\| \sec \frac{(2-t)\sqrt{\eps}}{4}\\
&< \left(1+\frac{\eps}{2}\right) \left(1+\frac{t^2\eps}{16}\right)\|pq^\perp\|
+(1+\eps)\left(1+ \frac{(2-t)^2 \eps}{16}\right)\|q^\perp b\| \\
&< \left(1+\frac{(t^2+9)\eps}{16}\right)\|pq^\perp\|
+\left(1+\frac{(t^2-4t+21)\eps}{16}\right)\|q^\perp b\| \\
&= \left(1+\frac{(t^2+9)\eps}{16}\right) \left(\|p q^\perp\|+\|q^\perp b\|\right)
+ \frac{(-4t+12)\eps}{16}\, \|q^\perp b\| \\
&\leq \left(1+\frac{(t^2+9)\eps}{16}\right) \|p b\| + \frac{(-2t+6)\eps}{8}\cdot \frac{\|pb\|}{2}\\
&\leq \left(1+ \eps\cdot \frac{t(t-2)+15}{16}\right) \|pb\| \\
&< \left(1+ \eps \right) \|pb\|,
\end{align*}
as required.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{cases}
\end{center}
\caption{Left: Case~1 where $b\in qc$.
Right: Case~2, where $b$ lies to the right of $c$ on the boundary of $B(p,q)$.}
\label{fig:cases}
\end{figure}
\subparagraph{Case~2: $b\in \partial B(p,q)\cap \partial W_1$ (Fig.~\ref{fig:cases}(right)).}
In this case, $\angle qpb=\sqrt{\eps}/4$ is fixed.
Note that $0\leq \angle pbq \leq \angle pcq\leq \sqrt{\eps}/4$.
Assume $\angle pbq = t\cdot \sqrt{\eps}/4$ for some $t\in [0,1]$.
Since $\angle qpb \geq \angle qbp$ implies $\angle qp q^\perp \geq \angle qb q^\perp$,
we get $\|p q^\perp\|\leq \|q^\perp b\|$, hence $\|p q^\perp\|\leq \frac12\, \|pb\|$.
Furthermore, the right triangles $\Delta{pq q^\perp}$ and $\Delta{bq q^\perp}$ yield
\[
\|q q^\perp\|=\|p q^\perp\| \tan \angle qpb = \|q^\perp b\| \tan \angle qbp.
\]
This further implies
\[
\|q^\perp b\|
=\|p q^\perp\| \frac{\tan \angle qpb}{\tan \angle qbp}
= \|p q^\perp\| \frac{\tan \left(\frac{\sqrt{\eps}}{4}\right) }{\tan\left(t\cdot \frac{\sqrt{\eps}}{4}\right)}
\leq \|pq^\perp \| \frac{\frac{\sqrt{\eps}}{4}+\frac12 \left(\frac{\sqrt{\eps}}{4}\right)^3}{t\cdot \frac{\sqrt{\eps}}{4}}
\leq \|p q^\perp \| \frac{1+\eps/32}{t}.
\]
We are now ready to prove \eqref{eq:technical-} in Case~2:
\begin{align*}
\left(1+\frac{\eps}{2}\right)\|pq\|+(1+\eps)\|qb\|
&= \left(1+\frac{\eps}{2}\right) \|pq^\perp\| \sec \angle qpb +(1+\eps) \|q^\perp b\|\sec \angle qbp \\
&= \left(1+\frac{\eps}{2}\right) \|p q^\perp \|\sec \frac{\sqrt{\eps}}{4} +(1+\eps)\|q^\perp b\| \sec \frac{t\,\sqrt{\eps}}{4}\\
&< \left(1+\frac{\eps}{2}\right)\left(1+\frac{\eps}{16}\right) \|p q^\perp \|
+(1+\eps)\left(1+\frac{t^2\eps}{16}\right)\|q^\perp b\| \\
&< \left(1+\frac{10\eps}{16}\right) \|p q^\perp \|
+\left(1+\eps+\frac{t^2\eps(1+\eps)}{16}\right)\|q^\perp b\| \\
&=(1+\eps) \left(\|p q^\perp\|+\|q^\perp b\|\right) +
\frac{\eps}{16}\left(t^2(1+\eps)\|q^\perp b\|-6 \|p q^\perp \| \right)\\
&\leq (1+\eps) \|pb\| + \frac{\eps}{16}\left(t(1+\eps)\left(1+\frac{\eps}{32}\right) -6 \right) \|p q^\perp \|\\
&< (1+\eps) \|pb\|,
\end{align*}
as required, since $0<t\leq 1$ and $0<\eps<1/9$.
We have confirmed \eqref{eq:technical-} in both cases. This completes the proof of Lemma~\ref{lem:technical}.
\end{proof}
In the general case, we have $a\in \widehat{A}(p,q)$ and $b\in \widehat{B}(p,q)$.
However, for technical reasons, we use a slightly larger neighborhood instead of $\widehat{A}(p,q)$.
Let $\tilde{A}(p,q)$ be the set of points in $\R^2$ within distance at most $\frac{\eps}{5}\,\|pq\|$ from $A(p,q)$.
\begin{lemma}\label{lem:technical+}
For all $a\in \tilde{A}(p,q)$ and $b\in \widehat{B}(p,q)$, we have
\begin{equation}\label{eq:technical+}
(1+\eps)\|ap\|+\|pq\|+(1+\eps)\|qb\|\leq (1+\eps)\|ab\|.
\end{equation}
\end{lemma}
\begin{proof}
Since $a\in \tilde{A}(p,q)$ and $b\in \widehat{B}(p,q)$, there exist
$a'\in A(p,q)$ and $b'\in B(p,q)$ with $\|aa'\|\leq \frac{\eps}{5}\,\|pq\|$ and $\|bb'\|\leq \frac{\eps}{8}\,\|pq\|$.
By the triangle inequality, we have $\|ap\|\leq \|aa'\|+\|a'p\|\leq \|a'p\|+\frac{\eps}{5}\,\|pq\|$ and
$\|qb\|\leq \|qb'\|+\|b'b\|\leq \|qb'\|+\frac{\eps}{8}\,\|pq\|$.
Combining these inequalities with Lemma~\ref{lem:technical} for points $a'$ and $b'$, we obtain
\begin{align*}
(1+\eps)\|ap\|+\|pq\|+(1+\eps)\|qb\|
&\leq \Big((1+\eps)\|a'p\|+\|pq\|+(1+\eps)\|qb'\|\Big) + (1+\eps)\Big(\|aa'\|+\|bb'\|\Big)\\
&\leq \left( (1+\eps)\|a'b'\| - \frac{3\eps}{4}\, \|pq\|\right) + (1+\eps)\frac{13\eps}{40}\, \|pq\| \\
&\leq (1+\eps)\Big( \|a'a\|+\|ab\| +\|bb'\|\Big) + \left((1+\eps)\frac{13}{40}-\frac34\right)\eps\,\|pq\| \\
&\leq (1+\eps)\|ab\| + \left(2(1+\eps)\frac{13}{40}-\frac34\right)\eps\,\|pq\| \\
&< (1+\eps)\|ab\|,
\end{align*}
for $0<\eps<1/9$, as claimed.
\end{proof}
\subparagraph{Relation between $B(p,q)=W_1\cap W_2$ and $W_1\setminus W_2$.}
The following two lemmas help analyze step~2c of Algorithm \textsc{SparseYao}
that adds some of the edges $pq_{i',j}$ to the spanner.
\begin{lemma}\label{lem:WWW}
For points $p,q\in S$, recall that $B(p,q)=W_1\cap W_2$, where $W_1$ and $W_2$ are wedges of aperture $\frac12\cdot\sqrt{\eps}$ and $\sqrt{\eps}$, resp.; see Fig.~\ref{fig:wedge}.
For every point $q'\in W_1\setminus W_2$, we have $\|pq'\|\leq 2\,\|pq\|$.
\end{lemma}
\begin{proof}
The line segment $pq$ decomposes $W_1\setminus W_2$ into two isosceles triangles. By the triangle inequality, the diameter of each isosceles triangle is less than $2\,\|pq\|$. This implies that for any point $q'\in W_1\setminus W_2$, we have $\|pq' \|< 2\, \|pq\|$.
\end{proof}
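A small numerical illustration of Lemma~\ref{lem:WWW}, under assumed coordinates that the lemma does not fix: $p=(0,0)$, $q=(1,0)$, $W_1$ the wedge at $p$ of half-aperture $\frac{\sqrt{\eps}}{4}$ around the direction of $q$, and $W_2$ the wedge at $q$ of half-aperture $\frac{\sqrt{\eps}}{2}$ around the continuation of $pq$. The apex $v$ of one isosceles triangle of $W_1\setminus W_2$ is where the two upper boundary rays meet:

```python
import math

# Illustration (assumed coordinates, not fixed by the paper): p=(0,0), q=(1,0).
# Upper boundary ray of W1 at p makes angle a with pq; upper boundary ray of
# W2 at q makes angle b with the continuation of pq.  Their intersection v is
# the apex of one triangle of W1 \ W2.
eps = 0.1
a = math.sqrt(eps) / 4
b = math.sqrt(eps) / 2
t = math.sin(b) / math.sin(b - a)       # |pv|, by the sine rule in triangle pqv
v = (t * math.cos(a), t * math.sin(a))
qv = math.hypot(v[0] - 1.0, v[1])

assert abs(qv - 1.0) < 1e-9             # |qv| = |pq|: the triangle is isosceles
assert t < 2.0                          # its diameter is less than 2|pq|
```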
The following lemma justifies the role of the regions $\widehat{B}(p,q_i)$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{wedge25}
\caption{If $q_{i,j}\in B(p,q_i)$ but $q\notin B(p,q_i)$, then $q\in \widehat{B}(p,q_i)$.}
\label{fig:wedge25}
\end{figure}
\begin{lemma}\label{lem:bbb}
Let $p,q\in S$, and assume that $q\in C_{i',j}(p)$ for some $i,j\in \{1,\ldots , k\}$ and $i'\in \{i-1,i,i+1\}$,
where $q_{i',j}=q_{i',j}(p)$ is a closest point to $p$ in $C_{i',j}(p)$.
If $q\notin {B}(p,q_i)$ but $q_{i',j}\in B(p,q_i)$, then $q\in \widehat{B}(p,q_i)$.
\end{lemma}
\begin{proof}
Since $q\in C_{i-1}(p)\cup C_i(p)\cup C_{i+1}(p)$, $q_i\in C_i(p)$, and the aperture of each cone is
$2\pi/k\leq \frac18\cdot \sqrt{\eps}$, we have $\angle qpq_i\leq \frac14\cdot \sqrt{\eps}$, hence $q\in W_1$.
As $q\notin B(p,q_i)=W_1\cap W_2$, it follows that $q\in W_1\setminus W_2$. Lemma~\ref{lem:WWW} yields $\|pq\|\leq 2\, \|pq_i\|$.
Consider the circle of radius $\|pq\|$ centered at $p$; see Fig.~\ref{fig:wedge25}.
Since $\|pq\|\geq \|pq_{i',j}\|$ and $q_{i',j}\in B(p,q_i)$, this circle intersects
$\partial B(p,q_i)$ in the cone $C_{i',j}(p)$. Denote by $q'$ the intersection point.
Now $q,q'\in C_{i',j}(p)$ implies that $\angle qpq'\leq 2\pi/k^2\leq \eps/128$.
The distance $\|qq'\|$ is bounded above by the length of the circular arc between them:
$\|qq'\|\leq \|pq\|\, \angle qpq'
\leq \frac{\eps}{128}\,\|pq\|
\leq \frac{\eps}{128}\cdot 2\,\|pq_i\|
= \frac{\eps}{64}\,\|pq_i\|$.
As $q$ is at distance less than $\frac{\eps}{8}\,\|pq_i\|$ from $q'\in B(p,q_i)$,
we conclude that $q\in \widehat{B}(p,q_i)$, as required.
\end{proof}
We can also clarify the relation between $\widehat{A}(p,q)$ and $\tilde{A}(p,q)$ in the setting used in the stretch analysis.
\begin{lemma}\label{lem:tilde}
Let $p,q\in S$, and assume that $q\in C_{i',j}(p)$ for some $i,j\in \{1,\ldots , k\}$ and $i'\in \{i-1,i,i+1\}$,
where $q_{i',j}=q_{i',j}(p)$ is a closest point to $p$ in $C_{i',j}(p)$.
Then $\widehat{A}(p,q_i)\subset \tilde{A}(p,q_{i',j})$.
\end{lemma}
\begin{proof}
Since the aperture of $C_i(p)$ is $\frac18 \cdot \sqrt{\eps}$ and $q_i\in C_i(p)$, we have $\angle q_ipq_{i',j}\leq \frac14 \, \sqrt{\eps}$. Since $\|pq_i\|\leq \|pq_{i',j}\|$, we also have $\|A(p,q_i)\|\leq \|A(p,q_{i',j})\|$.
Consequently, every point in $A(p,q_i)$ is within distance at most $\|A(p,q_i)\|\sin \angle q_ipq_{i',j}\leq \frac{\sqrt{\eps}}{2}\, \|pq_i\|\cdot \frac14 \, \sqrt{\eps} \leq \frac{\eps}{8}\, \|pq_i\|$ from $A(p,q_{i',j})$. By the triangle inequality, the $(\frac{\eps}{16}\, \|pq_i\|)$-neighborhood of $A(p,q_i)$ is within distance at most $(\frac{\eps}{8}+\frac{\eps}{16})\,\|pq_i\|< \frac{\eps}{5}\,\|pq_i\|$ from $A(p,q_{i',j})$. As $\|pq_i\|\leq \|pq_{i',j}\|$, this yields $\widehat{A}(p,q_i)\subset \tilde{A}(p,q_{i',j})$.
\end{proof}
\subparagraph{Completing the Stretch Analysis.}
We are now ready to present the stretch analysis for \textsc{SparseYao}$(S,\eps)$.
\begin{theorem}\label{thm:twostage}
For every finite point set $S\subset \R^2$ and $\eps\in (0,\frac19)$, the graph $G=\textsc{SparseYao}(S,\eps)$ is a $(1+\eps)$-spanner.
\end{theorem}
\begin{proof}
Let $S$ be a set of $n$ points in the plane.
Let $L_0$ be the list of all $\binom{n}{2}$ edges of the complete graph on $S$, sorted by Euclidean weight (ties broken arbitrarily). For $\ell=1,\ldots , \binom{n}{2}$, let $e_\ell$ be the $\ell$-th edge in $L_0$, and let $E(\ell)=\{e_1,\ldots ,e_\ell\}$. We prove, by induction on $\ell$, that the following claim holds for every $\ell=1,\ldots , \binom{n}{2}$:
\begin{claim}\label{cl:induction}
For every edge $ab\in E(\ell)$, $G=(S,E)$ contains an $ab$-path of weight at most $(1+\eps)\|ab\|$.
\end{claim}
For $\ell=1$, the claim clearly holds: the shortest edge $pq$ is necessarily a shortest edge in the cones $C_i(p)$ and $C_{i'}(q)$ that contain it, and so the algorithm adds $pq$ to $E$.
Assume that $1<\ell\leq \binom{n}{2}$ and the claim holds for $\ell-1$.
If the algorithm added edge $e_\ell$ to $E$, then the claim trivially holds for $\ell$.
Suppose that $e_\ell\notin E$. Let $e_\ell=pq$, where $q\in C_{i,j}(p)$ for some $i,j\in \{1,\ldots , k\}$.
Recall that $q_i=q_i(p)$ is a closest point to $p$ in the cone $C_i$; and $q_{i,j}=q_{i,j}(p)$ is a closest point to $p$ in the cone $C_{i,j}(p)$.
We distinguish between two cases.
\smallskip\noindent\textbf{(1) The algorithm added the edge $pq_i$ to $E$.}
Note that $\|q_i q\|< \|pq\|$ and $\|q_{i,j} q\|<\|pq\|$.
By the induction hypothesis, $G$ contains a $q_iq$-path $P_i$ of weight at most $(1+\eps)\|q_iq\|$ and a $q_{i,j}q$-path $P_{i,j}$ of weight at most $(1+\eps)\|q_{i,j} q\|$.
If $q\in \widehat{B}(p,q_i)$, then $pq_i+P_i$ is a $pq$-path of weight at most $(1+\eps)\|pq\|$ by Lemma~\ref{lem:technical+}.
Otherwise, $q\notin \widehat{B}(p,q_i)$.
In this case, $q_{i,j}\notin B(p,q_i)$ by Lemma~\ref{lem:bbb}.
This means that the algorithm added the edge $pq_{i,j}$ to $E$.
We have $q\in \widehat{B}(p,q_{i,j})$ by Lemma~\ref{lem:bbb}, and so
$pq_{i,j}+P_{i,j}$ is a $pq$-path of weight at most $(1+\eps)\|pq\|$ by Lemma~\ref{lem:technical+}.
\smallskip\noindent\textbf{(2) The algorithm did not add the edge $pq_i$ to $E$.}
Then the algorithm deleted $(p,q_i)$ from the list $L_i$ in a step in which
it added another edge $p'q_i'$ to $E$.
This means that $p\in \widehat{A}(p',q_i')$, where $q_i'$ is the closest point to $p'$ in the cone $C_i(p')$.
As $\diam(\widehat{A}(p',q_i')) < (\frac{\sqrt{\eps}}{2}+2\cdot \frac{\eps}{16})\,\|p'q_i'\|< \frac14\, \|p'q_i'\|$ for $\eps\in (0,\frac19)$, the containment $p\in \widehat{A}(p',q_i')$ implies $\|pp'\|\leq \frac14\, \|p'q_i'\|$.
Since $L_i$ is sorted by weight, then $\|p'q_i'\|\leq \|pq_i\|$.
Although we have $q\in C_i(p)$, the point $q$ need not be in the cone $C_i(p')$; see Fig.~\ref{fig:wedge6}.
We claim that $q$ lies in the union of three consecutive cones: $q\in C_{i-1}(p')\cup C_i(p')\cup C_{i+1}(p')$.
Let $D_i(p')$ be part of the cone $C_i(p')$ outside of the circle of radius $\|p' q_i'\|$ centered at $p'$. Since $q\in C_i(p)$ and $\|p'q_i'\|<\|pq\|$, then $q$ lies in the translate
$D_i(p')+\overrightarrow{p'p}$ of $D_i(p')$. Consider the union of translates:
\[
D =D_i(p')+\{\overrightarrow{p'a}: a\in \widehat{A}(p',q_i')\},
\]
and note that $q\in D$. We have $\diam(\widehat{A}(p',q_i'))\leq (\frac{\sqrt{\eps}}{2}+2\cdot \frac{\eps}{16})\|p'q_i'\| < \frac14\, \|p'q_i'\|$ for $\eps\in (0,\frac19)$;
and recall that the aperture of $C_i(p')$ is $\gamma:=2\pi/k\leq \frac{1}{8}\cdot \sqrt{\eps}$.
We can now approximate $\angle qp'q_i'$ as follows; refer to Fig.~\ref{fig:wedge6}:
$\tan \angle qp'q_i' \leq \|p'q_i'\| \tan \gamma \,/\, \big(\|p'q_i'\| - 2\diam(\widehat{A}(p',q_i'))\big)\leq 2\, \tan\gamma$.
Consequently, $\angle qp'q_i' < 2\gamma$, and so $q\in C_{i-1}(p')\cup C_i(p')\cup C_{i+1}(p')$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{wedge4}
\caption{The relative position of $pq$ and $p' q_i'$. Specifically, $p\in \widehat{A}(p',q_i')$ and $q\in C_i(p)$.
Left: the region $D_i(p')$ and translates of $\widehat{A}(p',q_i')$ to two critical points of $D_i(p')$.
Right: $q\in C_{i-1}(p')\cup C_i(p')\cup C_{i+1}(p')$ and the region $B(p',q_i')$.}
\label{fig:wedge6}
\end{figure}
We distinguish between two subcases:
\smallskip\noindent\textbf{(2a) $q\in \widehat{B}(p',q_i')$.}
By induction, $G$ contains $(1+\eps)$-paths between $p$ and $p'$, and between $q$ and $q_i'$.
By Lemma~\ref{lem:technical+} (with $a=p$ and $b=q$),
the concatenation of these paths and the edge $p'q_i'$ is a $pq$-path of weight at most $(1+\eps)\|pq\|$.
\smallskip\noindent\textbf{(2b) $q\notin \widehat{B}(p',q_i')$.}
Then $q\in C_{i',j'}(p')$ for some $i'\in \{i-1,i,i+1\}$ and $j'\in \{1,\ldots ,k\}$.
By Lemma~\ref{lem:bbb}, we have $q_{i',j'}\notin B(p',q_i')$,
and so the algorithm added the edge $p'q_{i',j'}$, where $q_{i',j'}$ is the closest point to $p'$
in the cone $C_{i',j'}(p')$.
We have $p\in \widehat{A}(p',q_i')\subset \tilde{A}(p',q_{i',j'})$ by Lemma~\ref{lem:tilde},
and $q\in \widehat{B}(p',q_{i',j'})$ by Lemma~\ref{lem:bbb}.
By induction, $G$ contains $(1+\eps)$-paths between $p$ and $p'$, and between $q_{i',j'}$ and $q$.
The concatenation of these paths and the edge $p'q_{i',j'}$ is a $pq$-path of weight at most $(1+\eps)\|pq\|$ by Lemma~\ref{lem:technical+}.
\end{proof}
\section{Spanners in the Unit Square}
\label{sec:square}
In this section, we show that for a set $S\subset [0,1]^2$ of $n$ points in the unit square and $\eps\in (0,\frac19)$, Algorithm~\textsc{SparseYao} returns a $(1+\eps)$-spanner of weight $O(\eps^{-3/2}\sqrt{n})$ (cf.~Theorem~\ref{thm:UBsqaure}).
The spanner \textsc{SparseYao}$(S,\eps)$ is a subgraph of the Yao-graph with cones of aperture $2\pi/k^2=O(\eps)$, and so it has $O(\eps^{-1}n)$ edges. Recall that for all $p\in S$ and all $i\in \{1,\ldots ,k\}$, there is at most one edge $pq_i(p)$ in $G$, where $q_i(p)$ is the closest point to $p$ in the cone $C_i(p)$ of aperture $\frac18\,\sqrt{\eps}$.
Let
\[F=\big\{p q_i(p)\in E(G): p\in S, i\in \{1,\ldots , k\}\big\}.\]
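For intuition, the rule generating the edges $pq_i(p)$ — every point keeps an edge to a closest neighbor in each cone — is the classical Yao selection rule. A minimal illustrative sketch follows (hypothetical code, not Algorithm~\textsc{SparseYao}; the cone count \texttt{K} and the point set are arbitrary choices):

```python
import math
import random

def yao_edges(points, K):
    """For each point p, keep an edge to a closest other point in each of
    K cones of aperture 2*pi/K around p (the basic Yao rule)."""
    edges = set()
    for p in points:
        best = {}                                  # cone index -> (dist, point)
        for q in points:
            if q == p:
                continue
            ang = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
            cone = int(ang / (2 * math.pi / K)) % K
            d = math.dist(p, q)
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, q)
        for _, q in best.values():
            edges.add(frozenset((p, q)))
    return edges

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(50)]
E = yao_edges(pts, K=8)
assert len(E) <= 8 * len(pts)     # at most K edges are charged to each point
```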
We first show that the weight of the edges in $F$ approximates the weight of all other edges.
\begin{lemma}\label{lem:deltoid}
If Algorithm~\textsc{SparseYao} adds $pq_i(p)$ and $pq_{i',j}(p)$ to $G$ in the same iteration, then $\|pq_{i',j}(p)\|< 2\,\|pq_i(p)\|$.
\end{lemma}
\begin{proof}
For short, we write $q_i=q_i(p)$ and $q_{i',j}=q_{i',j}(p)$, where $i'\in \{i-1,i,i+1\}$. Since \textsc{SparseYao} added $pq_{i',j}$ to $G$, we have $q_{i',j}\notin B(p,q_i)$. Recall (cf.\ Fig.~\ref{fig:wedge}) that $B(p,q_i) = W_1\cap W_2$, where $W_1$ and $W_2$ are cones centered at $p$ and $q_i$, resp., with apertures $\frac12\, \sqrt{\eps}$ and $\sqrt{\eps}$. Since the aperture of the cone $C_i(p)$ is $\frac18\,\sqrt{\eps}$, we have $C_{i-1}(p)\cup C_i(p)\cup C_{i+1}(p)\subset W_1$.
Consequently, $\left(C_{i-1}(p)\cup C_i(p)\cup C_{i+1}(p)\right)\setminus B(p,q_i)\subset W_1\setminus W_2$.
Lemma~\ref{lem:WWW} gives $\|pq_{i',j}(p)\|< 2\,\|pq_i(p)\|$, as claimed.
\end{proof}
\begin{lemma}\label{lem:factor}
For $G=\textsc{SparseYao}(S,\eps)$, we have $\|G\|=O(\eps^{-1/2}) \cdot \|F\|$.
\end{lemma}
\begin{proof}
Fix $p$ and $i\in \{1,\ldots , k\}$, let $q_i=q_i(p)$ for short, and suppose $pq_i\in E(G)$.
Consider one step of the algorithm that adds the edge $pq_i$ to $G$, together with
up to $3k=\Theta(\eps^{-1/2})$ edges of type $p q_{i',j}$, where $q_{i',j}\notin B(p,q_i)$ and $i'\in \{i-1,i,i+1\}$.
By Lemma~\ref{lem:deltoid}, $\|pq_{i',j}\|< 2 \|pq_i\|$.
The total weight of the edges added to the spanner in this step is at most
\[
\|p q_i\|+\sum_{i'=i-1}^{i+1}\sum_{j=1}^k \|pq_{i',j}\|
\leq \|pq_i\|+3k\cdot 2\, \|pq_i\|
= O(k\,\|pq_i\|)
= O(\eps^{-1/2})\,\|pq_i\|.
\]
Summation over all edges in $F$ yields $\|G\|=O(\eps^{-1/2}) \cdot \|F\|$.
\end{proof}
It remains to show that $\|F\|\leq O(\eps^{-1}\sqrt{n})$. For $i=1,\ldots , k$, let
\[
F_i=\{pq_i(p)\in E(G): p\in S\},
\]
that is, the set of edges in $G$ between points $p$ and the closest point $q_i(p)$ in the cone $C_i(p)$ of aperture $\frac18\,\sqrt{\eps}$. We prove that $\|F_i\|\leq O(\eps^{-1/2}\, \sqrt{n})$ (Lemma~\ref{lem:Fi} below). Since $k=\Theta(\eps^{-1/2})$, this immediately implies $\|F\|=\sum_{i=1}^k \|F_i\| =O(k\eps^{-1/2} \sqrt{n})=O(\eps^{-1}\sqrt{n})$.
\subparagraph{Charging Scheme.}
Let $i\in \{1,\ldots , k\}$ be fixed. Assume w.l.o.g.\ that the symmetry axis of the cone $C_i$ is horizontal, and the apex is the leftmost point of $C_i$. Refer to Fig.~\ref{fig:drops}(left).
For each edge $pq_i(p)\in F_i$, let $R_i(p)$ be the intersection of cone $C_i(p)$ and the disk of radius $\|pq_i(p)\|$ centered at $p$. Note that $R_i(p)$ is a sector of the disk, and the sectors $R_i(p)$ are pairwise homothetic.
The sector $R_i(p)$ has three vertices: Its leftmost vertex is $p$, and the other two vertices are the endpoints of a circular arc, which have the same $x$-coordinate (by symmetry). As $q_i(p)$ is the closest point to $p$ in $C_i(p)$, then $S\cap \mathrm{int}(R_i(p))=\emptyset$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{drops}
\end{center}
\caption{Left: a sector $R_i(p)$. Right: two intersecting sectors $R_i(p)$ and $R_i(p')$.}
\label{fig:drops}
\end{figure}
Let $\mathcal{R}_i$ be the set of sectors for all edges $pq_i(p)$ in $G$. These sectors are not necessarily disjoint;
but we can still give a lower bound on the area of their union. We first study their intersection pattern.
\begin{lemma}\label{lem:intersect}
Assume that $R_i(p),R_i(p')\in \mathcal{R}_i$ and $R_i(p)\cap R_i(p')\neq \emptyset$.
Then $p$ and $p'$ have smaller $x$-coordinates than any other vertices of the two sectors.
\end{lemma}
\begin{proof}
Denote the vertices of $R_i(p)$ and $R_i(p')$ by $a,b,p$ and $a',b',p'$, resp.; see Fig.~\ref{fig:drops}(right).
Point $p'$ is not in the interior of $R_i(p)$, since $S\cap \mathrm{int}(R_i(p))=\emptyset$.
Now $R_i(p)\cap R_i(p')\neq \emptyset$ implies that the boundaries of $R_i(p)$ and $R_i(p')$ intersect.
Suppose, to the contrary, that $p'$ lies to the right of the vertical line $ab$. Since $p'$ is the leftmost point of $R_i(p')$, all intersection points in $\partial R_i(p)\cap \partial R_i(p')$ lie on the circular arc $ab$.
If $a'p'$ or $b'p'$ intersects the circular arc $ab$, then $p'\in \mathrm{int}(R_i(p))$, a contradiction.
Otherwise only the circular arcs $ab$ and $a'b'$ intersect: then both $p$ and $p'$ lie on the perpendicular bisector
of the segment between the two intersection points, and we again arrive at the contradiction $p'\in \mathrm{int}(R_i(p))$.
\end{proof}
We partition the sectors $R_i(p)$ according to their sizes:
For all $j\in \N$, let $\mathcal{R}_{i,j}$ be the set of sectors $R_i(p)$ such that
$2^{-j}\leq \|p q_i(p)\| < 2^{1-j}$.
We show that the sectors $\mathcal{R}_{i,j}$ do not overlap too heavily.
\begin{lemma}\label{lem:density}
For all $j\in \N$, any point $g\in [0,1]^2$ is contained in $O(\eps^{-1/2})$ sectors in $\mathcal{R}_{i,j}$.
\end{lemma}
\begin{proof}
Let $\mathcal{R}_{i,j}(g)=\{R\in \mathcal{R}_{i,j}: g\in R\}$ be the set of sectors that contain $g$; these sectors pairwise intersect. By Lemma~\ref{lem:intersect}, the leftmost vertices of the sectors have smaller $x$-coordinates than any other vertices (i.e., endpoints of circular arcs). Let $\ell$ be a vertical line that separates the leftmost vertices of these sectors from all other vertices; and let $\ell^-$ be the left halfplane bounded by $\ell$; see Fig.~\ref{fig:terrain}(left).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{terrain}
\end{center}
\caption{Left: the set of sectors that contains point $g$; and the vertical line $\ell$.
Right: regions $A(p,q_i(p))$ and $\widehat{A}(p,q_i(p))$ for a vertex $p\in \gamma$.}
\label{fig:terrain}
\end{figure}
Recall that for every sector $R_i(p)\in \mathcal{R}_{i,j}(g)$, we have $2^{-j}\leq \|p q_i(p)\| < 2^{1-j}$. As the aperture of $C_i$ is at most $\frac18\, \sqrt{\eps}$, the length of the vertical segment $\ell\cap R_i(p)$ is at most
$\|\ell\cap R_i(p)\| \leq 2^{1-j} \cdot 2\sin \left(\frac{1}{16}\, \sqrt{\eps}\right) = O(2^{-j} \sqrt{\eps})$.
For every sector $R_i(p)\in \mathcal{R}_{i,j}(g)$, the set $R_i(p)\cap \ell^-$ is an isosceles triangle with two legs of slopes $\pm \tan (\frac{1}{16}\sqrt{\eps}) =\pm \Theta(\sqrt{\eps})$. Consider the union of these isosceles triangles, $(\bigcup_{R\in \mathcal{R}_{i,j}(g)} R)\cap \ell^-$. Its right boundary is a vertical segment of length $O(2^{-j}\sqrt{\eps})$; and its left boundary is a $y$-monotone curve, which we denote by $\gamma$.
The local $x$-minima of $\gamma$ are the leftmost vertices of the sectors $R_i(p)\in \mathcal{R}_{i,j}(g)$.
Suppose $p,p'\in S$ are two consecutive local $x$-minima along $\gamma$; see Fig.~\ref{fig:terrain}(right).
We claim that the $y$-coordinates of $p$ and $p'$ differ by at least $\frac{\eps}{32}\cdot 2^{-j}$.
Suppose, to the contrary, that $|y(p)-y(p')|< \frac{\eps}{32}\cdot 2^{-j}$. Due to the slopes of $\gamma$,
this implies
\[
|x(p)-x(p')|
<\frac{\eps}{32}\cdot 2^{-j} / \tan \left(\frac{1}{16}\sqrt{\eps}\right)
\leq \frac{\sqrt{\eps}}{2}\,2^{-j}.
\]
Assume w.l.o.g.\ that Algorithm~\textsc{SparseYao} added edge $pq_i(p)$ before edge $p' q_i(p')$.
Then $p'$ is to the left of $p$ (i.e., $p'$ has smaller $x$-coordinate than $p$).
When the algorithm added the edge $p q_i(p)$ to $G$, it deleted the pairs $(s, q_i(s))$ from $L_i$ for all $s\in \widehat{A}(p,q_i(p))$. Since $\|p q_i(p)\|\geq 2^{-j}$ and $pq_i(p)\subset C_i(p)$,
the region $A(p,q_i(p))$ is a line segment of length at least $\frac{\sqrt{\eps}}{2}\cdot 2^{-j}$ and slope at most $\tan(\frac{1}{16}\,\sqrt{\eps})$. This implies that $p'$ lies in the horizontal slab spanned by
$A(p,q_i(p))$. Furthermore, the vertical distance between $p'$ and $A(p,q_i(p))$ is
at most $2\, |y(p)-y(p')|< \frac{\eps}{16}\cdot 2^{-j}$. Recall that region $\widehat{A}(p,q_i(p))$
contains every point in an $(\frac{\eps}{16}\cdot 2^{-j})$-neighborhood of $A(p,q_i(p))$.
Consequently, $p'\in \widehat{A}(p,q_i(p))$, and the algorithm deleted the pair $(p',q_i(p'))$ from $L_i$.
This contradicts the assumption $p'q_i(p')\in E(G)$, and proves the claim.
As the height of $\gamma$ is $O(2^{-j} \sqrt{\eps})$, it can accommodate at most $O(2^{-j}\sqrt{\eps})/ (\frac{\eps}{32}\cdot 2^{-j}) =O(\eps^{-1/2})$ local $x$-minima. This implies that
$|\mathcal{R}_{i,j}(g)|\leq O(\eps^{-1/2})$, as claimed.
\end{proof}
In order to obtain disjoint regions, we define the \emph{core} of a sector $R_i(p)$, denoted $\widehat{R}_i(p)$; see Fig.~\ref{fig:core}. Label the vertices of $R_i(p)$ by $a$, $b$, and $p$, where $\|pa\|=\|pb\|=\|pq_i(p)\|$. Let $\mathbf{v}$ be the vector along the angle bisector of $\angle apb$ of length $\|\mathbf{v}\|=\frac23\, \|p q_i(p)\|$. Now let $\widehat{R}_i(p)=R_i(p)\cap (R_i(p)+\mathbf{v})$. Note that $\area(\widehat{R}_i(p))\geq \frac19\,\area(R_i(p))$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{core}
\end{center}
\caption{Left: the core $\widehat{R}_i(p)$ of a sector $R_i(p)$.
Right: The cores $\widehat{R}_i(p)$ and $\widehat{R}_i(p')$ are disjoint.}
\label{fig:core}
\end{figure}
\begin{lemma}\label{lem:disjoint}
If $j+2\leq j'$, then any two sectors in $\mathcal{R}_{i,j}$ and $\mathcal{R}_{i,j'}$ have disjoint cores.
\end{lemma}
\begin{proof}
Let $R_i(p)\in \mathcal{R}_{i,j}$ and $R_i(p')\in \mathcal{R}_{i,j'}$ with $j+2\leq j'$. Label their vertices
by $a,b,p$ and $a',b',p'$, respectively. Note that
\[
\|a'p'\| < 2^{1-j'} \leq 2^{-(j+1)} \leq \frac12\,\|ap\|.
\]
Suppose, for the sake of contradiction, that $\widehat{R}_i(p)\cap \widehat{R}_i(p')\neq \emptyset$. Then $R_i(p)\cap R_i(p')\neq \emptyset$. By Lemma~\ref{lem:intersect}, $p'$ lies to the left of the vertical line $ab$.
To maximize the intersection $R_i(p)\cap R_i(p')$, we may assume that $R_i(p')$ has maximal size (that is,
$\|a'p'\| = \frac12\,\|ap\|$) and that $p'\in ap\cup bp$. In this extremal case, however, $R_i(p')$ is still disjoint from the core $\widehat{R}_i(p)$; cf.~Fig.~\ref{fig:core}. This contradicts the assumption $\widehat{R}_i(p)\cap \widehat{R}_i(p')\neq \emptyset$.
\end{proof}
The combination of Lemmas~\ref{lem:density} and~\ref{lem:disjoint} gives an upper bound on the total area of all sectors in $\bigcup_{j\in \N}\mathcal{R}_{i,j}$.
\begin{corollary}\label{cor:volume}
For every $i$, we have $\sum_{R\in \mathcal{R}_i}\area(R)=\sum_{j\in \N} \sum_{R\in \mathcal{R}_{i,j}} \area(R)\leq O(\eps^{-1/2})$.
\end{corollary}
\begin{proof}
For every sector $R\in \mathcal{R}_i$, we have $\area(R)=\Theta(\area(\widehat{R}))$.
For all $j\in \N$, define the function $f_{i,j}:[0,1]^2\rightarrow \N$ such that for all $g\in [0,1]^2$,
$f_{i,j}(g)$ is the number of cores $\widehat{R}$, $R\in \mathcal{R}_{i,j}$, that contain $g$.
Then $\sum_{R\in \mathcal{R}_{i,j}} \area(\widehat{R}) =\int_{[0,1]^2} f_{i,j}(g)\,dg$.
By Lemma~\ref{lem:density}, we have $f_{i,j}(g)\leq O(\eps^{-1/2})$ for all $g\in [0,1]^2$.
By Lemma~\ref{lem:disjoint}, $\sum_{j\in\N} f_{i,3j+\ell}(g) \leq O(\eps^{-1/2})$
for all $\ell\in \{0,1,2\}$ and $g\in [0,1]^2$. Consequently,
\begin{align*}
\sum_{R\in \mathcal{R}_i}\area(R)
&\leq O\left(\sum_{R\in \mathcal{R}_i}\area(\widehat{R}) \right)
= O\left(\sum_{\ell=0}^2 \sum_{j\in \N} \sum_{R\in \mathcal{R}_{i,3j+\ell}} \area(\widehat{R}) \right)\\
&= O\left(\sum_{\ell=0}^2 \sum_{j\in \N} \int_{[0,1]^2} f_{i,3j+\ell}(g)\,dg \right)
= O\left(\sum_{\ell=0}^2 \int_{[0,1]^2} \sum_{j\in \N} f_{i,3j+\ell}(g)\,dg \right)\\
&\leq O\left(\sum_{\ell=0}^2 \int_{[0,1]^2} \eps^{-1/2}\,dg \right)
= O\left(\sum_{\ell=0}^2 \eps^{-1/2}\area([0,1]^2)\right) =O(\eps^{-1/2}),
\end{align*}
as claimed.
\end{proof}
\begin{lemma}\label{lem:Fi}
For every $i\in \{1,\ldots, k\}$, we have $\|F_i\|\leq O(\eps^{-1/2}\sqrt{n})$.
\end{lemma}
\begin{proof}
For every $R_i(p)\in \bigcup_{j\in \N}\mathcal{R}_{i,j}$, we have
\[
\area(\widehat{R}_i(p))
\geq \Omega(\area(R_i(p)))
\geq \Omega(\|p q_i(p)\|^2 \sqrt{\eps}).
\]
In particular, summation over all $e\in F_i$ and Jensen's inequality gives
\[
\sum_{j\in \N} \sum_{R\in \mathcal{R}_{i,j}} \area(R)
\geq \Omega\left(\sqrt{\eps}\, \sum_{e\in F_i} \|e\|^2\right)
\geq \Omega\left(\sqrt{\eps}\cdot |F_i|\left(\frac{1}{|F_i|}\sum_{e\in F_i} \|e\|\right)^2\right)
\geq \Omega\left(\sqrt{\eps}\cdot |F_i| \cdot w_i^2\right),
\]
where $w_i=\frac{1}{|F_i|}\sum_{e\in F_i} \|e\| = \|F_i\|/|F_i|$ is the average weight of an edge in $F_i$.
Combined with Corollary~\ref{cor:volume}, we obtain
\begin{equation}\label{eq:key}
\sqrt{\eps}\cdot |F_i|\cdot w_i^2 \leq O(\eps^{-1/2})
\hspace{3mm}\Rightarrow\hspace{3mm}
w_i^2 \leq O\left(\frac{1}{\eps\, |F_i|}\right)
\hspace{3mm}\Rightarrow\hspace{3mm}
w_i \leq O\left(\frac{1}{\sqrt{\eps\, |F_i|}}\right).
\end{equation}
Finally, $\|F_i\|= |F_i|\cdot w_i
\leq O(|F_i| /\sqrt{\eps\, |F_i|})
= O(\eps^{-1/2}\sqrt{|F_i|})
\leq O(\eps^{-1/2}\sqrt{n})$, as required.
\end{proof}
\begin{theorem}\label{thm:UBsqaure}
For every set of $n$ points in $[0,1]^2$ and every $\eps\in (0,\frac19)$, Algorithm \textsc{SparseYao} returns a Euclidean $(1+\eps)$-spanner of weight $O(\eps^{-3/2}\,\sqrt{n})$.
\end{theorem}
\begin{proof}
Let $G=\textsc{SparseYao}(S,\eps)$, and define $F\subset E(G)$ and $F_1,\ldots, F_k$ as above.
By Lemma~\ref{lem:Fi}, $\|F\|=\sum_{i=1}^k \|F_i\|= O(k\,\eps^{-1/2}\,\sqrt{n}) =O(\eps^{-1}\,\sqrt{n})$.
By Lemma~\ref{lem:factor}, $\|G\|\leq O(\eps^{-1/2})\cdot \|F\|\leq O(\eps^{-3/2}\,\sqrt{n})$,
as claimed.
\end{proof}
\section{Spanners for the Integer Grid}
\label{sec:grid}
We briefly review known results from analytic number theory in Section~\ref{ssec:Farey}, and derive upper bounds on the minimum weight of a $(1+\eps)$-spanner for the $n\times n$ grid: we first analyze the weight of Yao-graphs (Section~\ref{ssec:gridUB}), and then refine the analysis for sparse Yao-graphs (Section~\ref{ssec:next}).
\subsection{Preliminaries: Farey Sequences}
\label{ssec:Farey}
Two points in the integer lattice $p,q\in \Z^2$ are \emph{visible} if the line segment $pq$ does not pass through any lattice point. An integer point $(i,j)\in \Z^2$ is visible from the origin $(0,0)$ if $i$ and $j$ are relatively prime, that is, $\mathrm{gcd}(i,j)=1$. The \emph{slope} of a segment between $(0,0)$ and $(i,j)$ is $j/i$. For every $n\in \N$, the \emph{Farey set of order $n$},
\[ F_n=\left\{\frac{a}{b} : 0\leq a\leq b\leq n,\ b\geq 1\right\}, \]
is the set of slopes of the lines spanned by the origin and lattice points $(b,a)\in [0,n]^2$ with $a\leq b$.
The \emph{Farey sequence} is the sequence of elements in $F_n$ in increasing order.
Note that $F_n\subset [0,1]$. Farey sets and sequences have fascinating properties,
and the distribution of $F_n$ as $n\rightarrow \infty$ is not fully understood.
It is known that
\[
|F_n|=1+\sum_{1\leq i\leq n} \varphi(i) = \frac{3n^2}{\pi^2}+O(n \log n),
\]
where $\varphi(i)$ is Euler's totient function (i.e., $\varphi(i)$ is the number of integers $j$ with $1\leq j\leq i$ and $\mathrm{gcd}(i,j)=1$). Furthermore, if $\frac{p_1}{q_1}$ and $\frac{p_2}{q_2}$ are consecutive terms of the Farey sequence in reduced form (i.e., $\mathrm{gcd}(p_1,q_1)=1$ and $\mathrm{gcd}(p_2,q_2)=1$), then $|p_1q_2-p_2q_1|=1$ \cite{HW79}. The Farey sequence is uniformly distributed on $[0,1]$ in the sense that for any fixed subinterval $[\alpha,\beta]\subset [0,1]$, the frequency of the Farey set in $[\alpha,\beta]$ is known to converge as $n$ tends to infinity~\cite{Dress99}:
\[ \frac{|F_n\cap [\alpha,\beta]|}{|F_n|} = \beta-\alpha +o(1). \]
The error term is bounded by $O(n^{-1+\delta})$ for some $\delta>0$. However, determining the precise rate of convergence is known to be equivalent to the Riemann hypothesis~\cite{Franel24,Landau24}; see also~\cite{Ledoan18} and references therein.
The key result we use is a bound on the average distance to a Farey set $F_n$.
For every $x\in [0,1]$, let
\[\rho_n(x)= \min_{\frac{p}{q}\in F_n} \left| \frac{p}{q} - x\right| \]
denote the distance between $x$ and the Farey set $F_n$.
Kargaev and Zhigljavsky~\cite{KZ96} proved that
\begin{equation}\label{eq:KZ}
\int_0^1 \rho_n(x) \, dx = \frac{3}{\pi^2} \, \frac{\ln n}{n^2} + O\left(\frac{1}{n^2}\right), \hspace{5mm}\textrm{as} \hspace{5mm} n\rightarrow \infty.
\end{equation}
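The Farey-set facts quoted above are easy to verify computationally for small $n$. The following sketch (an illustration only, not used in the proofs) builds $F_n$ by brute force and checks the cardinality formula and the neighbor identity:

```python
from fractions import Fraction
from math import gcd

def farey(n):
    """The Farey set F_n: all reduced fractions a/b in [0,1] with 1 <= b <= n."""
    return sorted({Fraction(a, b) for b in range(1, n + 1) for a in range(b + 1)})

n = 50
F = farey(n)
# |F_n| = 1 + phi(1) + ... + phi(n)
phi_sum = sum(sum(1 for j in range(1, i + 1) if gcd(i, j) == 1)
              for i in range(1, n + 1))
assert len(F) == 1 + phi_sum
# consecutive terms p1/q1 and p2/q2 satisfy |p1*q2 - p2*q1| = 1
for x, y in zip(F, F[1:]):
    assert abs(x.numerator * y.denominator - y.numerator * x.denominator) == 1
```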
\subsection{The Weight of Yao-Graphs for the Grid}
\label{ssec:gridUB}
\begin{lemma}\label{lem:num}
For a positive integer $k\in \N$, consider the subdivision of the unit interval $[0,1]$ into $k$ subintervals,
$[0,1]=\bigcup_{i=1}^k [\frac{i-1}{k},\frac{i}{k}]$. For every $i=1,\ldots , k$, let $q_i$ be the
smallest positive integer such that $\frac{i-1}{k}\leq \frac{p_i}{q_i}\leq \frac{i}{k}$ for some integer $p_i$.
Then $\sum_{i=1}^k q_i =O(k^{3/2}\log^{1/2} k)$.
\end{lemma}
\begin{proof}
Let $[\alpha,\beta]\subset [0,1]$ be an interval of length $\beta-\alpha=\frac{1}{k}$.
If $[\alpha,\beta]$ contains a point $\frac{p_i}{q_i}\in F_n$, then $q_i\leq n$.
Otherwise, $[\alpha,\beta]$ is disjoint from the Farey set $F_n$, and
then $\rho_n(x)\geq \min\{|x-\alpha|,|x-\beta|\}$ for all $x\in [\alpha,\beta]$.
In this case,
\begin{equation}\label{eq:blip}
\int_{[\alpha,\beta]} \rho_n(x) \, dx
\geq \int_{[\alpha,\beta]}\min\{|x-\alpha|,|x-\beta|\} \, dx
= \frac{(\beta-\alpha)^2}{4}
= \frac{1}{4k^2}.
\end{equation}
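The value of the integral in \eqref{eq:blip} can be confirmed by a direct midpoint Riemann sum (a numerical sanity check; the values of $\alpha$ and $k$ below are arbitrary):

```python
# Numerical check: the integral of min(|x - alpha|, |x - beta|) over
# [alpha, beta] equals (beta - alpha)^2 / 4, i.e. 1/(4k^2) when beta - alpha = 1/k.
k = 10
alpha = 0.3
beta = alpha + 1.0 / k
N = 100_000
h = (beta - alpha) / N
total = 0.0
for i in range(N):
    x = alpha + (i + 0.5) * h            # midpoint rule
    total += min(x - alpha, beta - x) * h
assert abs(total - 1.0 / (4 * k * k)) < 1e-6
```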
For every positive integer $n$, let $a(n)$ be the number of intervals in $\{[\frac{i-1}{k},\frac{i}{k}]: i=1,\ldots , k\}$ that are disjoint from $F_n$. Note that $a(n)=0$ for $n\geq k$, since all $k$ (closed) intervals contain a rational of the form $\frac{p}{k}$. In particular, we have $q_i\leq k$ for all $i=1,\ldots , k$.
The combination of \eqref{eq:KZ} and \eqref{eq:blip} yields $a(n)\leq O(k^2 \log n / n^2)$.
For a constant $c\geq 2$, set $n=\sqrt{ck\log k}$. Then
\[ a \left(\sqrt{ck\log k}\right)
\leq O \left( \frac{k^2 \log\left(\sqrt{ck \log k}\right)}{ck\log k} \right)
= O \left( \frac{k}{c} \cdot \frac{\log(ck\log k)}{2\log k} \right)
= O \left( \frac{k}{c} \right)
\leq \frac{k}{2},
\]
where the last inequality holds if the constant $c$ is sufficiently large.
That is, at most $\frac{k}{2}$ of the $k$ intervals are disjoint from $F_n$. For the remaining at least $\frac{k}{2}$ intervals, we have $q_i\leq n\leq O(\sqrt{k\log k})$, and the sum of these $q_i$s is $O(k^{3/2}\log^{1/2}k)$. It remains to bound the sum of the $q_i$s in the intervals disjoint from $F_n$. We use a standard dyadic partition. Let $A=\sqrt{ck\log k}$. Then
\begin{align*}
\sum_{i=1}^k q_i
&=O(k^{3/2}\log^{1/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} \big(a(2^{j-1}) - a(2^{j})\big)2^j \\
&\leq O(k^{3/2}\log^{1/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} a(2^{j-1})\, 2^{j}\\
&\leq O(k^{3/2}\log^{1/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} O\left(\frac{k^2\, j}{2^{2j}}\right) 2^{j}\\
&=O\left(k^{3/2}\log^{1/2} k+ k^2 \sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} \frac{j}{2^j}\right)\\
&=O\left(k^{3/2}\log^{1/2} k + k^2 \frac{\log A}{A} \right)\\
&=O\left(k^{3/2}\log^{1/2} k + k^2 \frac{\log k}{\sqrt{k \log k}} \right)\\
&= O(k^{3/2}\log^{1/2} k),
\end{align*}
as claimed.
\end{proof}
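Lemma~\ref{lem:num} can be probed by brute force for moderate $k$. The sketch below (illustrative only; the value of $k$ and the constant $8$ in the final check are loose, arbitrary choices) computes each $q_i$ directly:

```python
from math import log, sqrt

def smallest_denominator(i, k):
    """Smallest q >= 1 such that some fraction p/q lies in [(i-1)/k, i/k]."""
    for q in range(1, k + 1):
        # an integer p with (i-1)*q/k <= p <= i*q/k exists iff ceil <= floor
        if -((-(i - 1) * q) // k) <= (i * q) // k:
            return q

k = 64
qs = [smallest_denominator(i, k) for i in range(1, k + 1)]
assert all(q is not None and q <= k for q in qs)   # q_i <= k (take p/k itself)
assert qs[0] == 1                                  # [0, 1/k] contains 0/1
# crude numerical consistency with the O(k^{3/2} log^{1/2} k) bound:
assert sum(qs) <= 8 * k * sqrt(k * log(k))
```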
\begin{lemma}\label{lem:num2}
In the setting of Lemma~\ref{lem:num}, we have
$\sum_{i=1}^k q_i^3 =O(k^3\log k)$.
\end{lemma}
\begin{proof}
If we set $n=\sqrt{ck\log k}$ for a sufficiently large constant $c\geq 2$, then $a(n)\leq \frac{k}{2}$.
That is, at most $\frac{k}{2}$ of the $k$ intervals are disjoint from $F_n$. For the remaining at least $\frac{k}{2}$ intervals, we have $q_i\leq n\leq O(\sqrt{k\log k})$, and the sum of $q_i^3$ over these intervals is $O(k^{5/2}\log^{3/2}k)$. It remains to bound the sum of the $q_i^3$s over the intervals disjoint from $F_n$. Let $A=\sqrt{ck\log k}$. Then
\begin{align*}
\sum_{i=1}^k q_i^3
&=O(k^{5/2}\log^{3/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} \big(a(2^{j-1}) - a(2^{j})\big) 2^{3j}\\
&\leq O(k^{5/2}\log^{3/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} a(2^{j-1})\, 2^{3j}\\
&\leq O(k^{5/2}\log^{3/2} k)+\sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} O\left(\frac{k^2\, j}{2^{2j}}\right) 2^{3j}\\
&=O\left(k^{5/2}\log^{3/2} k+ k^2 \sum_{j=\lfloor \log A\rfloor}^{\lceil \log k\rceil} j\cdot 2^{j}\right)\\
&=O\left(k^{5/2}\log^{3/2} k + k^2 \cdot k\log k \right)\\
&= O(k^3\log k),
\end{align*}
as claimed.
\end{proof}
\begin{lemma}\label{lem:grid}
For a positive integer $k\in \N$, consider the subdivision of the plane into $k$ cones, each with apex at the origin $o$, and aperture $2\pi/k$. For each $i=1,\ldots , k$, let $p_i\in \Z^2\setminus \{0\}$ be a point in the $i$th cone that lies closest to the origin. Then $\sum_{i=1}^k \|op_i\| =O(k^{3/2}\log^{1/2} k)$.
\end{lemma}
\begin{proof}
The function $\tan(x)$ is a monotone increasing bijection from the interval $[0,\frac{\pi}{4}]$ to $[0,1]$. Its derivative is bounded by $1\leq \tan'(x)\leq 2$. Consequently, it distorts the length of any interval by a factor of at most 2. That is, it maps any interval $[\alpha,\beta]\subset [0,\frac{\pi}{4}]$ to an interval $[\tan(\alpha), \tan(\beta)]\subset [0,1]$ with $\beta-\alpha\leq \tan(\beta)-\tan(\alpha)\leq 2(\beta-\alpha)$.
Recall that Algorithm \textsc{SparseYao} operates on $k=\Theta(\eps^{-1/2})$ cones $C_1,\ldots ,C_k$.
The two coordinate axes and the lines $y=\pm x$ subdivide the plane into eight cones with aperture $\pi/4$.
These four lines contain a set $R$ of eight rays emanating from the origin. A cone $C_i$ that contains a ray in $R$ necessarily contains a lattice point $p_i$ with $\|op_i\|\leq \sqrt{2}$. Consider the cones $C_i$ between two consecutive rays in $R$; there are $O(k)$ such cones. Reflecting these cones across the coordinate axes and the line $y=x$, as needed, we may assume that they correspond to slopes in the interval $[0,1]$. These reflections are symmetries of $\Z^2$, hence isometries on $\Z^2$.
Each cone $C_i$ is an interval of angles $[\alpha_i,\beta_i]\subset [0,\frac{\pi}{4}]$ with $\beta_i-\alpha_i=\frac{2\pi}{k}$. As noted above, this corresponds to an interval of slopes $[\tan(\alpha_i),\tan(\beta_i)]\subset [0,1]$ with
$\frac{2\pi}{k}\leq \tan \beta_i-\tan\alpha_i\leq \frac{4\pi}{k}$.
Subdivide the interval $[0,1]$ into $k$ subintervals of length $\frac{1}{k}$.
Since $\tan\beta_i-\tan\alpha_i\geq \frac{2\pi}{k}>\frac{2}{k}$, each interval $[\tan(\alpha_i),\tan(\beta_i)]\subset [0,1]$
contains at least one interval of this subdivision.
Let $a_i$ be the smallest positive integer such that there exists a
rational $\frac{b_i}{a_i}$ in the $i$th interval.
Then the lattice point $(a_i,b_i)$ lies in $C_i$.
Since $\frac{b_i}{a_i}\leq 1$, we have $b_i\leq a_i$,
and so the distance between $(a_i,b_i)$ and the origin is at most $\sqrt{2}\cdot a_i$.
Let $p_i$ be a point in $C_i$ closest to the origin,
hence $\|op_i\|\leq \sqrt{2}\cdot a_i$.
By Lemma~\ref{lem:num}, we conclude that
$\sum_{i} \|op_i\| \leq \sqrt{2} \sum_i a_i = O(k^{3/2}\log^{1/2} k)$,
where the summation is over cones between two consecutive rays in $R$.
Summation over all octants readily yields
$\sum_{i=1}^k \|op_i\| \leq O(k^{3/2}\log^{1/2} k)$.
\end{proof}
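Lemma~\ref{lem:grid} can also be checked numerically for moderate $k$. The brute-force search below (an illustrative sketch; the search radius and the epsilon used for cone boundaries are ad-hoc choices) finds, for each of $k$ cones of aperture $2\pi/k$, the nonzero lattice point closest to the origin:

```python
from math import atan2, hypot, pi

def nearest_in_cones(k, radius=64):
    # for each of k cones of aperture 2*pi/k with apex at the origin,
    # find (by brute force) the distance to the closest nonzero lattice point
    best = [None] * k
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            if (x, y) == (0, 0):
                continue
            # half-open cones [2*pi*i/k, 2*pi*(i+1)/k); the tiny epsilon
            # pushes exact boundary angles into the upper cone consistently
            i = int((atan2(y, x) % (2 * pi)) / (2 * pi / k) + 1e-9) % k
            d = hypot(x, y)
            if best[i] is None or d < best[i]:
                best[i] = d
    return best

for k in (8, 32, 128):
    dists = nearest_in_cones(k)
    assert all(d is not None for d in dists)
    print(k, round(sum(dists), 2))
```

The printed sums should grow subquadratically in $k$, consistent with the $O(k^{3/2}\log^{1/2}k)$ bound.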
\begin{corollary}\label{cor:Yao}
For positive integers $k$ and $n$, let $G_{k,n}$ be the Yao-graph with $k$ cones on the $n^2$ points
in the $n\times n$ section of the integer lattice.
Then $\|G_{k,n}\| \leq O(n^2k^{3/2}\log^{1/2} k)$.
\end{corollary}
\begin{proof}
Recall that the Yao-graph is defined as a union of $n^2$ stars, one for each vertex.
For each vertex $p$, the star $S_p$ is obtained by subdividing the plane into
$k$ cones with apex $p$ and aperture $2\pi/k$, and connecting $p$
to the closest point in each cone (if such a point exists).
By Lemma~\ref{lem:grid}, $\|S_p\|\leq O(k^{3/2}\log^{1/2} k)$ for any $p\in \Z^2$.
Summation over the $n^2$ vertices yields $\|G_{k,n}\| \leq O(n^2k^{3/2}\log^{1/2} k)$.
\end{proof}
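For intuition about Corollary~\ref{cor:Yao}, the Yao-graph on a small grid can be built directly. The sketch below is an illustrative brute-force implementation (not the paper's algorithm; cone boundaries are handled with a small epsilon, as before) that computes $\|G_{k,n}\|$:

```python
from math import atan2, hypot, pi

def yao_grid_weight(n, k):
    # total weight of the Yao-graph G_{k,n}: the union over all grid points p
    # of stars connecting p to a closest grid point in each of k cones
    pts = [(x, y) for x in range(n) for y in range(n)]
    edges = set()
    for p in pts:
        best = [None] * k  # per cone: (distance, endpoint)
        for q in pts:
            if q == p:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            i = int((atan2(dy, dx) % (2 * pi)) / (2 * pi / k) + 1e-9) % k
            d = hypot(dx, dy)
            if best[i] is None or d < best[i][0]:
                best[i] = (d, q)
        for b in best:
            if b is not None:
                edges.add((min(p, b[1]), max(p, b[1])))
    return sum(hypot(u[0] - v[0], u[1] - v[1]) for u, v in edges)

for n in (4, 8):
    print(n, round(yao_grid_weight(n, 8), 2))
```

Each star is stored as a set of undirected edges, so edges chosen by both endpoints are counted once.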
For every finite point set $S\subset \R^2$, the Yao-graph $Y_k(S)$ with $k=\Theta(\eps^{-1})$ cones per vertex
is a $(1+\eps)$-spanner. Corollary~\ref{cor:Yao} readily implies that the $n\times n$ section of the integer lattice admits a $(1+\eps)$-spanner of weight $O(\eps^{-3/2}\log^{1/2}(\eps^{-1})\cdot n^2)$.
\subsection{The Weight of Sparse Yao-Graphs for the Grid}
\label{ssec:next}
\begin{theorem}\label{thm:UBgrid}
Let $S$ be the $n\times n$ section of the integer lattice for some positive integer $n$.
Then the graph $G=$\textsc{SparseYao}$(S,\eps)$ has weight $O(\eps^{-1}\log(\eps^{-1})\cdot n^2)$.
\end{theorem}
\begin{proof}
The edges $p q_i(p)$ in Algorithm \textsc{SparseYao} form a Yao-graph $Y_k(S)$ with $k=\Theta(\eps^{-1/2})$ cones per vertex. By Corollary~\ref{cor:Yao}, we have $\|Y_k(S)\|\leq O(\eps^{-3/4}\log^{1/2}(\eps^{-1})\cdot n^2)$.
Note, though, that Algorithm \textsc{SparseYao} does not necessarily add all these edges to the spanner.
Algorithm \textsc{SparseYao} refines each cone $C_i(p)$ with apex $p$ and aperture $2\pi/k$ into $k$ subcones $C_{i,j}(p)$, and adds some of the edges between $p$ and a closest point $q_{i,j}(p)\in C_{i,j}(p)$.
It remains to bound the weight of the edges $pq_{i,j}$ added to the spanner. Fix $p\in S$ and $i\in \{1,\ldots , k\}$. Suppose that the algorithm adds $pq_i(p)$ together with $m$ edges of the form $pq_{i',j}$ for $i'\in \{i-1,i,i+1\}$ and $j\in \{1,\ldots , k\}$.
We may assume w.l.o.g.\ that $m'\geq \lfloor m/2\rfloor$ of these points lie on the left side of the ray $\overrightarrow{pq_i}$. Label these $m'$ points in ccw order about $p$ as $t_1,\ldots ,t_{m'}$, and let $t_0=q_i$. Then the triangles $\Delta(p,t_{h-1},t_h)$, for $h=1,\ldots , m'$, are interior-disjoint, contained in $W_1\setminus W_2$, and spanned by lattice points of $\Z^2$. By Pick's theorem, $\area(\Delta(p,t_{h-1},t_h))\geq \frac12$ for all $h=1,\ldots , m'$. Consequently, $\area(W_1\setminus W_2)\geq m'/2\geq \Omega(m)$.
We also derive an upper bound on $\area(W_1\setminus W_2)$. Recall (cf.\ Fig.~\ref{fig:wedge}) that $B(p,q_i) = W_1\cap W_2$, where $W_1$ and $W_2$ are cones centered at $p$ and $q_i$, with apertures $\frac12\, \sqrt{\eps}$ and $\sqrt{\eps}$, respectively. Note that $pq_i$ partitions $W_1\setminus W_2$ into two congruent isosceles triangles, each with two sides of length $\|pq_i\|$, and so $\area(W_1\setminus W_2)\leq 2\,\|pq_i\|^2\sin (\sqrt{\eps}/2) = O(\eps^{1/2}\,\|pq_i\|^2 )$.
By contrasting the lower and upper bounds for $\area(W_1\setminus W_2)$, we obtain
\[\Omega(m)\leq \area(W_1\setminus W_2)\leq O(\eps^{1/2}\,\|pq_i\|^2 )
\,\,\, \Rightarrow \,\,\, m\leq O(\eps^{1/2} \|pq_i\|^2).
\]
By Lemma~\ref{lem:deltoid}, if an edge $pq_{i',j}(p)$ was added in the same iteration as $pq_i(p)$, then $\|pq_{i',j}(p)\|< 2\,\|pq_i(p)\|$. Overall, the total weight of all edges added together with $pq_i$ is $O(m\, \|pq_i\|)\leq O(\eps^{1/2}\, \|pq_i\|^3)$. Summation over $i=1,\ldots ,k$ yields $O(\eps^{1/2}\, \sum_{i=1}^k \|pq_i\|^3)$.
Using Lemma~\ref{lem:num2} with $k=\Theta(\eps^{-1/2})$, the total weight of the edges added for a vertex $p$ is
$O(\eps^{1/2}\cdot k^3\log k) = O(\eps^{-1}\log \eps^{-1})$. Summation over all $n^2$ vertices $p\in S$
gives an overall weight of $\|G\|=O(\eps^{-1}\log(\eps^{-1})\cdot n^2)$, as claimed.
\end{proof}
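The area bound invoked from Pick's theorem in the proof above, that every non-degenerate triangle spanned by lattice points has area at least $\frac12$, can be checked exhaustively on a small window of $\Z^2$ via the shoelace formula (an illustrative sanity check only):

```python
from itertools import product

def twice_area(p, q, r):
    # twice the signed area of triangle pqr (shoelace formula);
    # an integer whenever p, q, r are lattice points
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

pts = list(product(range(-2, 3), repeat=2))
areas = [abs(twice_area(p, q, r)) / 2
         for p in pts for q in pts for r in pts
         if twice_area(p, q, r) != 0]
print(min(areas))  # 0.5: the minimum area of a non-degenerate lattice triangle
```

Since twice the area is a nonzero integer for any non-degenerate lattice triangle, the minimum $\frac12$ is forced, and it is attained, e.g., by $(0,0),(1,0),(0,1)$.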
The combination of Lemma~\ref{lem:LBgrid} and Theorem~\ref{thm:UBgrid} (lower and upper bounds) establishes Theorem~\ref{thm:grid}. We restate Theorem~\ref{thm:UBgrid} for $n$ points in the unit square $[0,1]^2$.
\begin{corollary}
Let $S$ be the $\lfloor \sqrt{n}\rfloor \times \lfloor\sqrt{n}\rfloor$ section of the scaled lattice $\frac{1}{\lfloor \sqrt{n}\rfloor}\,\Z^2$, contained in $[0,1]^2$.
Then $G=\textsc{SparseYao}(S,\eps)$ has weight $O(\eps^{-1}\log(\eps^{-1})\cdot \sqrt{n})$.
\end{corollary}
\begin{proof}
Apply Theorem~\ref{thm:UBgrid} for the $\lfloor \sqrt{n}\rfloor \times \lfloor\sqrt{n}\rfloor$ section of the integer lattice $\Z^2$, and then scale down all weights by a factor of $\frac{1}{\lfloor \sqrt{n}\rfloor}$.
\end{proof}
\section{Generalization to Higher Dimensions}
\label{sec:d-space}
\later{
Algorithm \textsc{SparseYao} and its analysis generalize to $\R^d$ for constant $d\geq 2$.
}
\subparagraph{Upper Bound.}
We sketch the necessary adjustments for a point set $S\subset [0,1]^d$.
In the plane, we partitioned $\R^2$ into $k=\Theta(\eps^{-1/2})$ cones $C_1,\ldots , C_k$, of aperture $2\pi/k=\frac18\,\sqrt{\eps}$. In $d$ dimensions, we can \emph{cover} $\R^d$ with $k=\Theta(\eps^{(1-d)/2})$ cones of aperture $\frac18\,\sqrt{\eps}$. With these cones, Algorithm \textsc{SparseYao} and its stretch analysis go through almost verbatim.
A standard volume argument shows that every cone of aperture $\frac18\,\sqrt{\eps}$ is covered by $O(\eps^{(1-d)/2})$ cones of aperture $\Theta(\eps)$. Therefore, the generalization of Lemma~\ref{lem:factor}
yields $\|G\|=O_d(\eps^{(1-d)/2})\cdot \|F\|$. We can partition $F$ into $k=\Theta(\eps^{(1-d)/2})$ subsets $F=\bigcup_{i=1}^k F_i$.
In the generalization of Lemma~\ref{lem:density}, every generic point $g\in [0,1]^d$ is contained in $O(\eps^{(1-d)/2})$ regions $R_i(p)$: In the proof, the $y$-monotone curve $\gamma$ is replaced by a $(d-1)$-dimensional surface (terrain).
In the weight analysis for $\|F_i\|$, we need to charge the weight of each edge $e\in F_i$ to an empty region $\widehat{R}_i$, which is the intersection of a cone of aperture $\Theta(\sqrt{\eps})$ and a ball of radius $\|e\|$. The volume of such a region is $\Theta_d(\|e\|^d\cdot \eps^{(d-1)/2})$. Finally, in the proof of Lemma~\ref{lem:Fi}, Jensen's inequality is used for the function $x\mapsto x^d$. In particular, for the average weight $w_i=\|F_i\|/ |F_i|$ of an edge in $F_i$, inequality~\eqref{eq:key} becomes
\begin{align}
\eps^{(d-1)/2}\cdot |F_i|\cdot w_i^d &\leq O(\eps^{(1-d)/2})\label{eq:key+d}\\
w_i^d &\leq O\left(\frac{1}{\eps^{d-1} |F_i|}\right)\nonumber \\
w_i &\leq O\left(\frac{1}{\eps^{1-1/d}\, |F_i|^{1/d}}\right), \nonumber
\end{align}
and $\|F_i\| =|F_i|\cdot w_i \leq O\big( \eps^{1/d-1}\,|F_i|^{1-1/d}\big) \leq O\big( \eps^{1/d-1}\, n^{1-1/d}\big)$. Overall,
\[
\|G\|
\leq \eps^{(1-d)/2}\sum_i \|F_i\|
\leq O(\eps^{(1-d)/2}\cdot \eps^{(1-d)/2}\cdot \eps^{1/d-1} n^{1-1/d})
\leq O(\eps^{-d+1/d} n^{1-1/d}).
\]
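The exponent bookkeeping in the last display can be double-checked symbolically. The snippet below is a sanity check only (not part of the argument): it verifies, with exact rational arithmetic, that $\frac{1-d}{2}+\frac{1-d}{2}+\big(\frac{1}{d}-1\big)=\frac{1-d^2}{d}=\frac{1}{d}-d$:

```python
from fractions import Fraction

for dd in range(2, 8):
    d = Fraction(dd)
    # eps-exponents: (1-d)/2 from the cone covering factor, (1-d)/2 from the
    # number of subsets F_i, and 1/d - 1 from the bound on each ||F_i||
    eps_exp = (1 - d) / 2 + (1 - d) / 2 + (1 / d - 1)
    assert eps_exp == (1 - d * d) / d == 1 / d - d
    print(dd, eps_exp)
```

For $d=2$ the exponent is $-3/2$, recovering the planar bound $O(\eps^{-3/2}\sqrt{n})$.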
\subparagraph{Lower Bound.}
The lower bound construction readily generalizes to every dimension $d\geq 2$. Let $S_0$ be a set of $2m$ points, where $m=\lfloor (\eps/d)^{1-d}\rfloor$, with $m$ points arranged in a grid on each of two opposite faces of the unit cube $[0,1]^d$.
Every $(1+\eps)$-spanner for $S_0$ contains a complete bipartite graph $K_{m,m}$, of weight $\Omega_d(\eps^{2(1-d)})$.
By arranging $\Theta_d(\eps^{d-1}n)$ translated copies of $S_0$ in a $(\eps^{d-1}n)^{1/d}\times \ldots \times (\eps^{d-1}n)^{1/d}$ grid, we obtain a set $S$ of $\Theta(n)$ points, and a lower bound of $\Omega_d(\eps^{1-d} n)$. Scaling down by a factor of $(\eps^{d-1}n)^{1/d}$ yields a set of $\Theta(n)$ points in $[0,1]^d$ for which any $(1+\eps)$-spanner has weight $\Omega_d(\eps^{-d+1/d}\, n^{1-1/d})$.
\section{Outlook}
\label{sec:con}
Our \textsc{SparseYao} algorithm combines features of Yao-graphs and greedy spanners. It remains an open problem whether the celebrated greedy algorithm~\cite{althofer1993sparse} always returns a $(1+\eps)$-spanner of weight $O(\eps^{-3/2}\sqrt{n})$ for $n$ points in the unit square (and $O(\eps^{(1-d^2)/d}n^{(d-1)/d})$ for $n$ points in $[0,1]^d$). The analysis of the greedy algorithm is known to be notoriously difficult~\cite{FiltserS20,LeS19}.
It is also an open problem whether \textsc{SparseYao} or the greedy algorithm achieves an approximation ratio better than the tight lightness bound of $O(\eps^{-d})$ for $n$ points in $\R^d$ (where the approximation ratio compares the weight of the output with the instance-optimal weight of a $(1+\eps)$-spanner).
All results in this paper pertain to the Euclidean distance (i.e., $L_2$-norm). Generalizations to $L_p$-norms for $p\geq 1$ (or Minkowski norms with respect to a centrally symmetric convex body in $\R^d$) would be of interest.
It is unclear whether some or all of the machinery developed here generalizes to other norms.
Finally, we note that Steiner points can substantially improve the weight of a $(1+\eps)$-spanner in Euclidean space~\cite{BhoreT21b,LeS19,LeS20}. It is left for future work to study the minimum weight of a Euclidean Steiner $(1+\eps)$-spanner for $n$ points in the unit square $[0,1]^2$ or unit cube $[0,1]^d$; and for an $n\times n$ section of the integer lattice.
\bibliographystyle{plainurl}
% arXiv:1905.12560 -- On the equivalence between graph isomorphism testing and function approximation with GNNs
\section{Conclusions}
In this work we address the important question of organizing the fast-growing zoo
of GNN architectures in terms of what functions they can and cannot represent.
We follow the approach via the graph isomorphism test, and show that it is equivalent
to the other perspective via function approximation.
We leverage our graph isomorphism reduction to augment order-$2$ G-invariant nets
with the ring of operators associated with matrix multiplication, which gives provable gains in expressive power with complexity $O(n^3)$, and is amenable to efficiency gains by leveraging sparsity in the graphs.
Our general framework leaves many interesting questions unresolved. First, a more comprehensive analysis is needed of which elements of the
algebra are really required, depending on the application. Next, our current GNN taxonomy is still incomplete, and in particular we believe
it is important to further discern the abilities between spectral and neighborhood-aggregation-based architectures.
Finally, and most importantly, our current notion of invariance (based on permutation symmetry) defines a topology in the space of graphs that is too strong; in other words, two graphs are either considered equal (if they are isomorphic) or not. Extending the theory of symmetric universal
approximation to take into account a weaker metric in the space of graphs, such as the Gromov-Hausdorff distance, is a natural next step, that will better reflect the stability requirements of powerful graph representations to small graph perturbations in real-world applications.
\paragraph{Acknowledgements} We would like to thank Haggai Maron for fruitful discussions and for pointing us towards $G$-invariant networks as powerful models to study representational power in graphs.
This work was partially supported by NSF grant RI-IIS 1816753, NSF CAREER CIF 1845360, the Alfred P. Sloan Fellowship, Samsung GRP and Samsung Electronics.
SV was partially funded by EOARD FA9550-18-1-7007 and the Simons Collaboration Algorithms and Geometry.
\section{Graph G-invariant Networks with maximum tensor order 2} \label{app.Ginvariant}
In this section we prove Theorem \ref{prop.Ginvariant} that says that graph G-invariant Networks with tensor order 2 cannot distinguish between non-isomorphic regular graphs with the same degree.
First, we need to state our definition of the order-2 Graph $G$-invariant Networks. In general, given $G \in \mathbb{R}^{n \times n}$, we let $A^{(0)} = G$, $d^{(0)} = 1$, and
\[
A^{(t+1)} = \sigma(L^{(t)}(A^{(t)}))
\]
and outputs $(m \circ h)(A^{(L)})$, where each $L^{(t)}$ is an equivariant linear layer from $\mathbb{R}^{n \times n \times d^{(t)}}$ to $\mathbb{R}^{n \times n \times d^{(t+1)}}$, $\sigma$ is a point-wise activation function, $h$ is an invariant linear layer from $\mathbb{R}^{n \times n}$ to $\mathbb{R}$, and $m$ is an MLP.
$d^{(t)}$ is the feature dimension in layer $t$, interpreted as the dimension of the hidden state attached to each pair of nodes. For simplicity of notations, in the following proof we assume that $d^{(t)} = 1, \forall t = 1, ..., L$, and thus each $A^{(t)}$ is essentially a matrix.
The following results can be extended to the cases where $d^{(t)} > 1$, by adding more subscripts in the proof.
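For concreteness, the sketch below implements a toy order-2 equivariant linear layer using only four of the fifteen basis operations of \cite{maron2018invariant} (identity, transpose, row-sum broadcast, total-sum broadcast); this is a simplified, hypothetical subset, not the full layer. It verifies the equivariance property $L(PAP^\top)=P\,L(A)\,P^\top$ numerically:

```python
import random

def toy_equivariant_layer(A, w):
    # w = (w_id, w_T, w_row, w_all): weights for identity, transpose,
    # row-sum broadcast and total-sum broadcast (4 of the 15 basis maps)
    n = len(A)
    row = [sum(A[i]) for i in range(n)]
    tot = sum(row)
    return [[w[0] * A[i][j] + w[1] * A[j][i] + w[2] * row[i] + w[3] * tot
             for j in range(n)] for i in range(n)]

def relabel(A, perm):
    # computes P A P^T, i.e. node i of A becomes node perm[i]
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            B[perm[i]][perm[j]] = A[i][j]
    return B

random.seed(0)
n = 5
A = [[random.randint(0, 3) for _ in range(n)] for _ in range(n)]
perm = list(range(n))
random.shuffle(perm)
w = (1.0, 0.5, -0.25, 0.125)  # exact dyadic weights keep the check exact
assert toy_equivariant_layer(relabel(A, perm), w) == relabel(toy_equivariant_layer(A, w), perm)
```

Each of the four operations commutes with simultaneous row/column relabeling, so any linear combination of them does as well.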
Given an unweighted graph $G$, let $E \subseteq [n]^2$ be the edge set of $G$, i.e., $(u, v) \in E$ if $u \neq v$ and $G_{uv} = 1$; set $S \subseteq [n]^2$ to be $\{ (u, u) \}_{u \in [n]}$; and let $N = [n]^2 \setminus (E \cup S)$. Thus, $E \cup N \cup S = [n]^2$.
\begin{lemma}
Let $G, G'$ be the adjacency matrices of two unweighted regular graphs with the same degree $d$, and let $A^{(t)}, E, N, S$ and $A'^{(t)}, E', N', S'$ be defined as above for $G$ and $G'$, respectively. Then $\forall t \leq L, \exists \xi_1^{(t)}, \xi_2^{(t)}, \xi_3^{(t)} \in \mathbb{R}$ such that $A^{(t)}_{uv} = \xi_1^{(t)} \mathds{1}_{(u, v) \in E} + \xi_2^{(t)} \mathds{1}_{(u, v) \in N} + \xi_3^{(t)} \mathds{1}_{(u, v) \in S}$, and $A'^{(t)}_{uv} = \xi_1^{(t)} \mathds{1}_{(u, v) \in E'} + \xi_2^{(t)} \mathds{1}_{(u, v) \in N'} + \xi_3^{(t)} \mathds{1}_{(u, v) \in S'}$.
\end{lemma}
\begin{proof}
We prove this lemma by induction. For $t=0$, $A^{(0)} = G$ and $A'^{(0)} = G'$. Since the graph is unweighted, $G_{uv} = 1$ if $u \neq v$ and $(u, v) \in E$, and $0$ otherwise. Similar is true for $G'$. Therefore, we can set $\xi_1^{(0)} = 1$ and $\xi_2^{(0)} = \xi_3^{(0)} = 0$.
Next, we consider the inductive steps. Assume that the conditions in the lemma are satisfied for layer $t-1$. To simplify the notation, we use $A, A'$ to stand for $A^{(t-1)}, A'^{(t-1)}$, and we assume they satisfy the inductive hypothesis with $\xi_1, \xi_2$ and $\xi_3$. We thus want to show that if $L$ is any equivariant linear layer, then $\sigma(L(A)), \sigma(L(A'))$ also satisfy the inductive hypothesis. Also, in the following, we use $p_1, p_2, q_1, q_2$ to refer to nodes, $a, b$ to refer to pairs of nodes, $\lambda$ to refer to any equivalence class of 2-tuples (i.e., pairs) of nodes, and $\mu$ to refer to any equivalence class of 4-tuples of nodes.
$\forall a = (p_1, p_2), b = (q_1, q_2) \in [n]^2$, let $\mathcal{E}(a, b)$ denote the equivalence class of 4-tuples containing $(p_1, p_2, q_1, q_2)$, and let $\mathcal{E}(b)$ represent the equivalence class of 2-tuples containing $(q_1, q_2)$.
Two 4-tuples $(u, v, w, x), (u', v', w', x')$ are considered equivalent if $\exists \pi \in S_n$ such that $\pi(u) = u', \pi(v) = v', \pi(w) = w', \pi(x) = x'$. Equivalence between 2-tuples is defined similarly. By equation 9(b) in \cite{maron2018invariant}, using the notations of $T, B, C, w, \beta$ defined there, $L$ is described by the following, given $A$ as input and $b$ as the subscript index on the output:
\begin{equation}
\begin{split}
L(A)_{b} &= \sum_{a = (p_1, p_2) = (1, 1)}^{(n, n)} T_{a, b} A_a + Y_b \\
&= \sum_{a, \mu} w_{\mu} B_{a, b}^{\mu} A_a + \sum_\lambda \beta_\lambda C_b^\lambda\\
&= \sum_\mu (\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu} A_a) w_\mu + \beta_{\mathcal{E}(b)}
\end{split}
\end{equation}
First, let
\[S_{\mu}^b = \mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu} A_a\]
By the inductive hypothesis,
\begin{equation}
\begin{split}
S_{\mu}^b &= \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in E} A_a + \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in N} A_a + \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in S} A_a \\
&= \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in E} \xi_1 + \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in N} \xi_2 + \mathop{\mathop{\sum_{a \in [n]^2}}_{(a, b) \in \mu}}_{a \in S} \xi_3 \\
&= m_E(b, \mu) \xi_1 + m_N(b, \mu) \xi_2 + m_S(b, \mu) \xi_3
\end{split}
\end{equation}
where $m_E(b, \mu)$ is defined as the total number of distinct $a \in [n]^2$ that satisfies $(a, b) \in \mu$ and $a \in E$, and similarly for $m_N(b, \mu)$ and $m_S(b, \mu)$. Formally, for example, $m_E(b, \mu) = card\{ a \in [n]^2 : (a, b) \in \mu, a \in E\}$.
Since $E \cup N \cup S = [n]^2$, $b$ belongs to one of $E, N$ and $S$. Thus, let $\tau(b) = E$ if $b \in E$, $\tau(b) = N$ if $b \in N$ and $\tau(b) = S$ if $b \in S$. It turns out that if the underlying graph is an undirected regular graph with degree $d$, then $m_E(b, \mu), m_N(b, \mu), m_S(b, \mu)$ can instead be written (with an abuse of notation) as $m_E(\tau(b), \mu), m_N(\tau(b), \mu), m_S(\tau(b), \mu)$, meaning that for a fixed $\mu$, the values of $m_E, m_N$ and $m_S$ only depend on which of the three sets ($E, N$ or $S$) $b$ is in, and changing $b$ to a different member of the set $\tau(b)$ does not change the three numbers. In fact, for each $\tau(b)$ and $\mu$, the three numbers can be computed as functions of $n$ and $d$ using simple combinatorics, and their values are given in Tables \ref{table:mE}, \ref{table:mN} and \ref{table:mS}. An illustration of these numbers is given in Figure \ref{coloredreg}.
\begin{figure}
\label{coloredreg}
\centering
\includegraphics[width=0.25\textwidth,trim={6cm 4cm 20cm 7cm},clip]{skl2color2}
\includegraphics[width=0.25\textwidth,trim={6cm 4cm 20cm 7cm},clip]{skl3color2}
\caption{$m_E(E, \mathcal{E}(1, 2, 3, 4))$, $m_E(E, \mathcal{E}(1, 2, 3, 2))$, $m_E(E, \mathcal{E}(1, 2, 3, 1))$, $m_E(E, \mathcal{E}(1, 2, 2, 3))$ and $m_E(E, \mathcal{E}(1, 2, 1, 3))$ of $G_{8, 2} $ and $G_{8, 3}$. In either graph, twice the total number of black edges equals $m_E(E, \mathcal{E}(1, 2, 3, 4)) = 18$ (it is twice because each undirected edge corresponds to two pairs $(p_1, p_2)$ and $(p_2, p_1)$, which combined with $(q_1, q_2)$ both belong to $\mathcal{E}(1, 2, 3, 4)$); the total number of red edges, $3$, equals both $m_E(E, \mathcal{E}(1, 2, 2, 3))$ and $m_E(E, \mathcal{E}(1, 2, 1, 3))$; the total number of green edges, also $3$, equals both $m_E(E, \mathcal{E}(1, 2, 3, 2))$ and $m_E(E, \mathcal{E}(1, 2, 3, 1))$.}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{llll}
\multicolumn{1}{l|}{$\mu$} & $m_E(E, \mu)$ & $m_E(N, \mu)$ & $m_E(S, \mu)$ \\ \hline
\multicolumn{1}{l|}{(1, 2, 3, 4)} & $(n-4)d+2$ & $(n-4)d$ & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 3)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 3)} & $d-1$ & $d$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 3)} & $d-1$ & $d$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 2)} & $d-1$ & $d$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 1)} & $d-1$ & $d$ & 0 \\
\multicolumn{1}{l|}{(1, 1, 1, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 1)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 2)} & 1 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 1)} & 1 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 3)} & 0 & 0 & $(n-2)d$ \\
\multicolumn{1}{l|}{(1, 1, 2, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 2)} & 0 & 0 & $d$ \\
\multicolumn{1}{l|}{(1, 2, 1, 1)} & 0 & 0 & $d$ \\
\multicolumn{1}{l|}{(1, 1, 1, 1)} & 0 & 0 & 0 \\ \hline
\multicolumn{1}{l|}{Total} & $nd$ & $nd$ & $nd$
\end{tabular}
\caption{$m_E$}
\label{table:mE}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{llll}
\multicolumn{1}{l|}{$\mu$} & $m_N(E, \mu)$ & $m_N(N, \mu)$ & $m_N(S, \mu)$ \\ \hline
\multicolumn{1}{l|}{(1, 2, 3, 4)} & $(n-4)(n-d-1)$ & $(n-4)(n-d-1) + 2$ & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 3)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 3)} & $n-d-1$ &$ n-d-2$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 3)} & $n-d-1$ & $n-d-2$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 2)} & $n-d-1$ & $n-d-2$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 1)} & $n-d-1$ & $n-d-2$ & 0 \\
\multicolumn{1}{l|}{(1, 1, 1, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 1)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 2)} & 0 & 1 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 1)} & 0 & 1 & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 3)} & 0 & 0 & $(n-2)(n-d-1)$ \\
\multicolumn{1}{l|}{(1, 1, 2, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 2)} & 0 & 0 & $n-d-1$ \\
\multicolumn{1}{l|}{(1, 2, 1, 1)} & 0 & 0 & $n-d-1$ \\
\multicolumn{1}{l|}{(1, 1, 1, 1)} & 0 & 0 & 0 \\ \hline
\multicolumn{1}{l|}{Total} & $n(n-d-1)$ & $n(n-d-1)$ & $n(n-d-1)$
\end{tabular}
\caption{$m_N$}
\label{table:mN}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{llll}
\multicolumn{1}{l|}{$\mu$} & $m_S(E, \mu)$ & $m_S(N, \mu)$ & $m_S(S, \mu)$ \\ \hline
\multicolumn{1}{l|}{(1, 2, 3, 4)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 3)} & $n-2$ & $n-2$ & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 3)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 3)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 1)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 1, 2)} & 1 & 1 & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 1)} & 1 & 1 & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 2, 1)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 3, 3)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 2, 2)} & 0 & 0 & $n-1$ \\
\multicolumn{1}{l|}{(1, 2, 2, 2)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 2, 1, 1)} & 0 & 0 & 0 \\
\multicolumn{1}{l|}{(1, 1, 1, 1)} & 0 & 0 & 1 \\ \hline
\multicolumn{1}{l|}{Total} & $n$ & $n$ & $n $
\end{tabular}
\caption{$m_S$}
\label{table:mS}
\end{table}
Therefore, we have $L(A)_b = \sum_\mu w_\mu (m_E(\tau(b), \mu) + m_N(\tau(b), \mu) + m_S(\tau(b), \mu)) + \beta_{\mathcal{E}(b)}$. Moreover, notice that $\tau(b)$ determines $\mathcal{E}(b)$: if $\tau(b) = E$ or $N$, then $\mathcal{E}(b) = \mathcal{E}(1, 2); $ if $\tau(b) = S$, then $\mathcal{E}(b) = \mathcal{E}(1, 1)$. Hence, we can write $\beta_{\tau(b)}$ instead of $\beta_{\mathcal{E}(b)}$ without loss of generality. Then in particular, this means that $L(A)_b = L(A)_{b'}$ if $\tau(b) = \tau(b')$. Therefore, $L(A)_b = \overline{\xi}_1 \mathds{1}_{b \in E} + \overline{\xi}_2 \mathds{1}_{b \in N} + \overline{\xi}_3 \mathds{1}_{b \in S}$, where $\overline{\xi}_1 = \sum_\mu w_\mu (m_E(E, \mu) + m_N(E, \mu) + m_S(E, \mu)) + \beta_E$, $\overline{\xi}_2 = \sum_\mu w_\mu (m_E(N, \mu) + m_N(N, \mu) + m_S(N, \mu)) + \beta_N$, and $\overline{\xi}_3 = \sum_\mu w_\mu (m_E(S, \mu) + m_N(S, \mu) + m_S(S, \mu)) + \beta_S$.
Similarly, $L(A')_b = \overline{\xi'}_1 \mathds{1}_{b \in E'} + \overline{\xi'}_2 \mathds{1}_{b \in N'} + \overline{\xi'}_3 \mathds{1}_{b \in S'}$. But importantly, $\forall$ equivalence class of 4-tuples, $\mu$, and $\forall \lambda_1, \lambda_2 \in \{E, N, S \}, m_{\lambda_1}(\lambda_2, \mu) = m'_{\lambda_1}(\lambda_2, \mu)$, as both of them can be obtained from the same entry of the same table. Therefore, $\overline{\xi}_1 = \overline{\xi'}_1, \overline{\xi}_2 = \overline{\xi'}_2$, $\overline{\xi}_3 = \overline{\xi'}_3$.
Finally, let $\xi^*_1 = \sigma(\overline{\xi}_1), \xi^*_2 = \sigma(\overline{\xi}_2)$, and $\xi^*_3 = \sigma(\overline{\xi}_3)$. Then, there is $\sigma(L(A))_b = \xi^*_1 \mathds{1}_{b \in E} + \xi^*_2 \mathds{1}_{b \in N} + \xi^*_3 \mathds{1}_{b \in S}$, and $\sigma(L(A'))_b = \xi^*_1 \mathds{1}_{b \in E'} + \xi^*_2 \mathds{1}_{b \in N'} + \xi^*_3 \mathds{1}_{b \in S'}$, as desired.
\end{proof}
Since $h$ is an invariant function, $h$ acting on $A^{(L)}$ essentially computes the sum of all the diagonal terms (i.e., for $b \in S$) and the sum of all the off-diagonal terms (i.e., for $b \in E \cup N$) of $A^{(L)}$ separately and then adds the two sums with two weights. If $G, G'$ are regular graphs with the same degree, then $|E| = |E'|, |S| = |S'|$ and $|N| = |N'|$. Therefore, by the lemma, there is $h(A^{(L)}) = h(A'^{(L)})$, and as a consequence $m(h(A^{(L)})) = m(h(A'^{(L)}))$.
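The table entries can be spot-checked by direct enumeration. The sketch below is illustrative only; it assumes $G_{n,k}$ denotes the circulant graph on $n$ nodes with cycle edges and skip-$k$ edges, matching the $G_{8,2}, G_{8,3}$ examples above, and verifies the entry $m_E(E, \mathcal{E}(1,2,3,4)) = (n-4)d + 2$:

```python
def circulant(n, skips):
    # adjacency matrix of the circulant graph with edges {i, (i+s) mod n}
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for s in skips:
            A[i][(i + s) % n] = A[(i + s) % n][i] = 1
    return A

def m_E_distinct(A, q1, q2):
    # number of ordered edges a = (p1, p2) with (a, (q1, q2)) in the
    # equivalence class of (1, 2, 3, 4), i.e. all four nodes distinct
    n = len(A)
    return sum(A[p1][p2] for p1 in range(n) for p2 in range(n)
               if len({p1, p2, q1, q2}) == 4)

for skips in ((1, 2), (1, 3)):       # G_{8,2} and G_{8,3}
    A = circulant(8, skips)
    n, d = 8, sum(A[0])              # both graphs are d-regular with d = 4
    assert m_E_distinct(A, 0, 1) == (n - 4) * d + 2 == 18
```

The count $18$ agrees with the black-edge count reported in the caption of Figure \ref{coloredreg}.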
\section{Universal approximation, graph isomorphism and motif counting}
\section{Introduction}
Graph structured data naturally occur in many areas of knowledge, including computational biology, chemistry and social sciences. Graph neural networks, in all their forms, yield useful representations of graph data partly because they take into consideration the intrinsic symmetries of graphs, such as invariance and equivariance with respect to a relabeling of the nodes \cite{scarselli2008graph, duvenaud2015convolutional, kipf2016semi, gilmer2017neural, hamilton2017representation, velivckovic2017graph, bronstein2017geometric}.
All these different architectures are proposed with different purposes (see \cite{wu2019comprehensive} for a survey and references therein), and a priori it is not obvious how to compare their power. The recent work \cite{xu2018powerful} proposes to study the representation power of GNNs via their performance on graph isomorphism tests. They developed the Graph Isomorphism Networks (GINs) that are as powerful as the one-dimensional Weisfeiler-Lehman (1-WL or just WL) test for graph isomorphism \cite{weisfeiler1968reduction}, and showed that no other neighborhood-aggregating (or message passing) GNN can be more powerful than the 1-WL test. Variants of message passing GNNs include \cite{scarselli2008graph, hamilton2017inductive}.
On the other hand, for feed-forward neural networks, many results have been obtained regarding their ability to approximate continuous functions, commonly known as the universal approximation theorems, such as the seminal works of \cite{cybenko1989approximation, hornik1991hornik}. Following this line of work, it is natural to study the expressivity of graph neural networks in terms of function approximation. Since many if not most functions on graphs that we are interested in are invariant or equivariant to permutations of the nodes, GNNs are usually designed to be invariant or equivariant, and therefore the natural question is whether certain classes of GNNs can approximate any continuous invariant or equivariant function. Recent work \cite{maron2019universality} showed the universal approximation of $G$-invariant networks, constructed based on the linear invariant and equivariant layers studied in \cite{maron2018invariant}, if the order of the tensors involved in the networks can grow as the graph gets larger. Such a dependence on the graph size has been theoretically overcome by the very recent work \cite{keriven2019universal}, though there is no known upper bound on the order of the tensors involved.
With potentially very-high-order tensors, these models that are guaranteed universal approximation are not quite feasible in practice.
The foundational part of this work aims at building the bridge between graph isomorphism testing and invariant function approximation, the two main perspectives for studying the expressive power of graph neural networks. We demonstrate an equivalence between the ability of a class of GNNs to distinguish between any pair of non-isomorphic graphs and its power of approximating any (continuous) invariant function, for both the case with finite feature space and the case with continuous feature space. Furthermore, we argue that the concept of sigma-algebras on the space of graphs is a natural description of the power of graph neural networks, allowing us to build a taxonomy of GNNs based on how their respective sigma-algebras interact.
Building on this theoretical framework, we identify an opportunity to increase the expressive power of order-$2$ $G$-invariant networks with computational tractability, by considering a ring of invariant matrices under addition and multiplication. We show that the resulting model, which we refer to as \emph{Ring-GNN}, is able to distinguish between non-isomorphic regular graphs where order-$2$ $G$-invariant networks provably fail. We illustrate these gains numerically in synthetic and real graph classification tasks.
Summary of main contributions:
\begin{itemize}
\item We show the equivalence between graph isomorphism testing and approximation of permutation-invariant functions for analyzing the expressive power of graph neural networks.
\item We introduce the language of sigma-algebras for studying the representation power of graph neural networks, which unifies the graph isomorphism testing and function approximation perspectives, and use this framework to compare the power of some GNNs and other methods.
\item We propose Ring-GNN, a tractable extension of order-2 Graph $G$-invariant Networks that uses the ring of matrix addition and multiplication. We show this extension is necessary and sufficient to distinguish Circular Skip Links graphs.
\end{itemize}
\section{Related work}
\paragraph{Graph Neural Networks and graph isomorphism.} Graph isomorphism is a fundamental problem in theoretical computer science. It amounts to deciding, given two graphs $A,B$, whether there exists a permutation $\pi$ such that $\pi A = B\pi$. No polynomial-time algorithm for it is known, but in a recent breakthrough Babai showed that it can be solved in quasi-polynomial time \cite{babai2016graph}.
\cite{xu2018powerful} introduced graph isomorphism tests as a characterization of the power of graph neural networks. They show that if a GNN follows a neighborhood aggregation scheme, then it cannot distinguish pairs of non-isomorphic graphs that the 1-WL test fails to distinguish. Therefore this class of GNNs is at most as powerful as the 1-WL test. They further propose the Graph Isomorphism Networks (GINs), based on approximating injective set functions by multi-layer perceptrons (MLPs), which can be as powerful as the 1-WL test. Based on the $k$-WL tests \cite{cai1992optimal}, \cite{morris2019higher} proposes $k$-GNNs, which can take higher-order interactions among nodes into account. Concurrently to this work, \cite{maron2019provably} proves that order-$k$ invariant graph networks are at least as powerful as the $k$-WL tests, and, similarly to us, augments order-2 networks with matrix multiplication, showing that the resulting model achieves at least the power of the 3-WL test. \cite{murphy2019relational} proposes relational pooling (RP), an approach that combines \textit{permutation-sensitive} functions under all permutations to obtain a permutation-invariant function. If RP is combined with permutation-sensitive functions that are sufficiently expressive, it can be shown to be a universal approximator. A combination of RP and GINs is able to distinguish certain non-isomorphic regular graphs on which GIN alone would fail. A drawback of RP is that its full version is computationally intractable, so it needs to be approximated by averaging over randomly sampled permutations, in which case the resulting function is no longer guaranteed to be permutation-invariant.
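The 1-WL (color refinement) test referenced throughout this discussion can be sketched in a few lines. The toy implementation below (names like \texttt{wl\_colors} are ours, added for illustration, not from the cited works) shows it failing on a classic pair of non-isomorphic 2-regular graphs: a single 6-cycle versus two disjoint triangles.

```python
import numpy as np

def wl_colors(adj, iters=10):
    """Toy 1-WL color refinement; returns the sorted multiset of final colors."""
    n = len(adj)
    colors = [0] * n  # every node starts with the same color
    for _ in range(iters):
        # a node's new color encodes its old color and its neighbors' colors
        sigs = [(colors[i], tuple(sorted(colors[j] for j in range(n) if adj[i][j])))
                for i in range(n)]
        palette = {s: c for c, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
    return sorted(colors)

def cycle(length, nodes):
    """6x6 adjacency matrix containing a cycle through the given nodes."""
    a = np.zeros((6, 6), dtype=int)
    for t in range(length):
        i, j = nodes[t], nodes[(t + 1) % length]
        a[i, j] = a[j, i] = 1
    return a

c6 = cycle(6, [0, 1, 2, 3, 4, 5])                   # one 6-cycle
two_c3 = cycle(3, [0, 1, 2]) + cycle(3, [3, 4, 5])  # two disjoint triangles

# Non-isomorphic (triangle counts 0 vs 2), yet 1-WL sees identical colors:
print(wl_colors(c6) == wl_colors(two_c3))  # True
```

Since all nodes in a regular graph share the same degree, every refinement round produces a single color class in both graphs, so the test terminates without distinguishing them.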
\paragraph{Universal approximation of functions with symmetry.} Many works have discussed the function approximation capabilities of neural networks that satisfy certain symmetries.
\cite{bloemreddy2019probabilistic} studies the symmetry in neural networks from the perspective of probabilistic symmetry and characterizes the deterministic and stochastic neural networks that satisfy certain symmetry. \cite{ravanbakhsh2017sharing} shows that equivariance of a neural network corresponds to symmetries in its parameter-sharing scheme. \cite{yarotsky2018universal} proposes a neural network architecture with polynomial layers that is able to achieve universal approximation of invariant or equivariant functions.
\cite{maron2018invariant} studies the spaces of all invariant and equivariant linear functions and obtains bases for such spaces. Building upon this work, \cite{maron2019universality} proposes the $G$-invariant network for a symmetry group $G$, which achieves universal approximation of $G$-invariant functions if the maximal tensor order involved in the network is allowed to grow as $\frac{n (n-1)}{2}$; but such high-order tensors are prohibitive in practice. Upper bounds on the approximation power of $G$-invariant networks with limited tensor order remain open except for $G = A_n$ \cite{maron2019universality}. The very recent work \cite{keriven2019universal} extends the result to the equivariant case, although it suffers from the same problem of possibly requiring high-order tensors. Within the computer vision literature this problem has also been addressed; in particular, \cite{herzig2018mapping} proposes an architecture that can potentially express all equivariant functions.
To the best of our knowledge, this is the first work that shows an explicit connection between the two aforementioned perspectives on the representation power of graph neural networks: graph isomorphism testing and universal approximation. Our main theoretical contribution lies in showing an equivalence between them, for both the finite and the continuous feature space case, with a natural generalization of the notion of graph isomorphism testing to the latter. We then focus on the Graph $G$-invariant network based on \cite{maron2018invariant,maron2019universality}, and show that when the maximum tensor order is restricted to be 2, it cannot distinguish between non-isomorphic regular graphs with equal degrees. As a corollary, such networks are not universal. Note that our result gives an upper bound on the power of order-2 $G$-invariant networks, whereas, concurrently to us, \cite{maron2019provably} provides a lower bound by relating them to $k$-WL tests. Concurrently to \cite{maron2019provably}, we propose a modified version of order-2 graph networks that captures higher-order interactions among nodes without computing higher-order tensors.
\section{Graph isomorphism testing and universal approximation}
In this section we show that there exists a very close connection between the universal approximation of permutation-invariant functions by a class of functions and its ability to perform graph isomorphism tests. We consider graphs with nodes and edges labeled by elements of a compact set $\mathcal{X}\subset \mathbb R$. We represent a graph with $n$ nodes by an $n \times n$ matrix $G \in \mathcal{X}^{n \times n}$, where the diagonal entry $G_{ii}$ represents the label of the $i$th node, and the off-diagonal entry $G_{ij}$ represents the label of the edge from the $i$th node to the $j$th node. An undirected graph is then represented by a symmetric $G$.
Thus, we focus on analyzing a collection $\mathcal{C}$ of functions from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$. We are especially interested in collections of \textit{permutation-invariant functions}, defined so that $f(\pi^\intercal G \pi) = f(G)$, for all $G \in \mathcal{X}^{n \times n}$, and all $\pi \in S_n$, where $S_n$ is the permutation group of $n$ elements. For classes of functions, we define the property of being able to discriminate non-isomorphic graphs, which we call \textit{GIso-discriminating}, which as we will see generalizes naturally to the continuous case.
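Before formalizing these notions, a quick numerical sanity check of the invariance condition $f(\pi^\intercal G \pi) = f(G)$ may help: in the matrix representation above, relabelling the nodes amounts to conjugating $G$ by a permutation matrix. The NumPy sketch below (our illustration, not from the paper) verifies invariance for two simple aggregate functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.random((n, n))               # a graph with continuous node/edge labels
P = np.eye(n)[rng.permutation(n)]    # a random permutation matrix pi
G_relabel = P.T @ G @ P              # the same graph with nodes relabelled

# Two simple permutation-invariant functions of G:
f1 = lambda X: X.sum()                 # total label mass
f2 = lambda X: np.sort(X.sum(axis=1))  # sorted multiset of row sums

print(np.isclose(f1(G), f1(G_relabel)))   # True
print(np.allclose(f2(G), f2(G_relabel)))  # True
```

Conjugation permutes the rows and columns consistently, so any function of the multiset of entries, rows, or columns is automatically invariant.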
\begin{definition}
\label{pd}
Let $\mathcal{C}$ be a collection of permutation-invariant functions from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$. We say $\mathcal C$ is \textbf{GIso-discriminating} if for all non-isomorphic $G_1, G_2 \in \mathcal X^{n\times n}$ (denoted $G_1 \not\simeq G_2$), there exists a function $h \in \mathcal{C}$ such that $h(G_1) \neq h(G_2)$. This definition is illustrated by Figure 2 in the appendix.
\end{definition}
\begin{definition}
Let $\mathcal{C}$ be a collection of permutation-invariant functions from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$. We say $\mathcal C$ is \textbf{universally approximating} if for every permutation-invariant function $f$ from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$ and every $\epsilon > 0$, there exists $h_{f,\epsilon} \in \mathcal{C}$ such that $\| f - h_{f,\epsilon} \|_{\infty} := \sup_{G \in \mathcal X^{n\times n}} |f(G) - h_{f,\epsilon}(G)| < \epsilon$.
\end{definition}
\subsection{Finite feature space}
As a warm-up we first consider the space of graphs with a finite set of possible features for nodes and edges, $\mathcal X=\{1,\ldots, M\}$.
\begin{theorem}
\label{UA2PD}
Universally approximating classes of functions are also GIso-discriminating.
\end{theorem}
\vspace{-1.0em}
\begin{proof}
Given non-isomorphic $G_1, G_2 \in \mathcal X^{n\times n}$, we consider the permutation-invariant function $\mathds{1}_{\simeq G_1}:\mathcal X^{n\times n}\to \mathbb R$ such that $\mathds{1}_{\simeq G_1}(G)=1$ if $G$ is isomorphic to $G_1$ and 0 otherwise. Since $\mathcal{C}$ is universally approximating, this function can be approximated with $\epsilon = 0.1$ by some $h \in \mathcal{C}$, so that $h(G_1) > 0.9$ while $h(G_2) < 0.1$. Then $h$ distinguishes $G_1$ from $G_2$, as in Definition~\ref{pd}. Hence $\mathcal{C}$ is GIso-discriminating.
\end{proof}
To obtain a result on the reverse direction, we first introduce the concept of an augmented collection of functions, which is especially natural when $\mathcal{C}$ is a collection of neural networks.
\begin{definition}
Given $\mathcal{C}$, a collection of functions from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$, we consider an augmented collection of functions, also from $\mathcal{X}^{n \times n}$ to $\mathbb{R}$, consisting of functions that map an input graph $G$ to $\mathcal{NN}([h_1(G), ..., h_d(G)])$ for some finite $d$, where $\mathcal{NN}$ is a feed-forward neural network / multi-layer perceptron, and $h_1, ..., h_d \in \mathcal{C}$. When $\mathcal{NN}$ is restricted to have $L$ layers, we denote this augmented collection by $\mathcal{C}^{+L}$. In this work, we consider ReLU as the nonlinear activation function in the neural networks.
\end{definition}
\begin{remark}
If $\mathcal{C}_{L_0}$ is the collection of feed-forward neural networks with $L_0$ layers, then $\mathcal{C}_{L_0}^{+L}$ represents the collection of feed-forward neural networks with $L_0 + L$ layers.
\end{remark}
\begin{remark}
If $\mathcal{C}$ is a collection of permutation-invariant functions, so is $\mathcal{C}^{+L}$.
\end{remark}
\begin{theorem}
\label{PD2UAfin}
If $\mathcal{C}$ is GIso-discriminating, then $\mathcal{C}^{+2}$ is universally approximating.
\end{theorem}
The proof is simple: it is a consequence of the following two lemmas, which we prove in Appendix~\ref{app.universal}.
\begin{lemma} \label{lemma1}
If $\mathcal{C}$ is GIso-discriminating, then for all $G \in \mathcal{X}^{n \times n}$, there exists a function $\tilde{h}_G \in \mathcal{C}^{+1}$ such that for all $G', \tilde{h}_G(G') = 0$ if and only if $G \simeq G'$.
\end{lemma}
\begin{lemma} \label{lemma2}
Let $\mathcal C$ be a class of permutation-invariant functions from $\mathcal X^{n\times n}$ to $\mathbb R$ satisfying the conclusion of Lemma~\ref{lemma1}.
Then $\mathcal{C}^{+1}$ is universally approximating.
\end{lemma}
\subsection{Extension to the case of continuous (Euclidean) feature space}
Graph isomorphism is an inherently discrete problem, whereas universal approximation is usually more interesting when the input space is continuous. With Definition~\ref{pd} of \textit{GIso-discriminating}, we can achieve a natural generalization of the above results to the scenario of a continuous input space. All proofs for this section can be found in Appendix~\ref{app.universal}.
Let $\mathcal{X}$ be a compact subset of $\mathbb{R}$, and we consider graphs with $n$ nodes represented by $G \in K = \mathcal{X}^{n \times n}$; that is, the node features are $\{G_{ii}\}_{i = 1, \ldots, n}$ and the edge features are $\{G_{ij}\}_{i, j = 1,\ldots, n; i \neq j}$.
\begin{theorem}\label{ua2pdinf}
If $\mathcal{C}$ is universally approximating, then it is also GIso-discriminating.
\end{theorem}
The essence of the proof is similar to that of Theorem~\ref{UA2PD}. The other direction, showing that pairwise discrimination can lead to universal approximation, is less straightforward. As an intermediate step, we make the following definition:
\begin{definition}
\label{locate}
Let $\mathcal{C}$ be a class of functions $K \to \mathbb{R}$. We say it is able to \textbf{locate every isomorphism class} if for all $G \in K$ and for all $\epsilon > 0$ there exists $h_G \in \mathcal{C}$ such that:
\begin{itemize}
\item for all $G' \in K, h_G(G') \geq 0$;
\item for all $G' \in K$, if $G' \simeq G$, then $h_G(G') = 0$; and
\item there exists $\delta_G > 0$ such that if $h_G(G') < \delta_G$, then there exists $\pi \in S_n$ such that $d(\pi(G'), G) < \epsilon$, where $d$ is the Euclidean distance on $\mathbb{R}^{n \times n}$.
\end{itemize}
\end{definition}
\begin{lemma} \label{lemma.C+1}
If $\mathcal{C}$, a collection of continuous permutation-invariant functions from $K$ to $\mathbb{R}$, is GIso-discriminating, then $\mathcal{C}^{+1}$ is able to locate every isomorphism class.
\end{lemma}
Heuristically, we can think of the $h_G$ in the definition above as a ``loss function'' that penalizes the deviation of $G'$ from the equivalence class of $G$. In particular, the third condition says that if the loss value is small enough, then $G'$ has to be close to the equivalence class of $G$.
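One concrete toy function with the three properties of Definition~\ref{locate} (though not of the form constructed in the proof of Lemma~\ref{lemma.C+1}) is the distance from $G'$ to the isomorphism class of $G$, minimized over all relabellings. The sketch below (our illustration; the brute-force minimization is only feasible for very small $n$) checks its behavior numerically.

```python
import numpy as np
from itertools import permutations

def h_loc(G, Gp):
    """Distance from Gp to the isomorphism class of G, minimized over all
    node relabellings by brute force (only feasible for very small n)."""
    n = len(G)
    best = float("inf")
    for p in permutations(range(n)):
        P = np.eye(n)[list(p)]
        best = min(best, np.linalg.norm(P.T @ Gp @ P - G))
    return best

rng = np.random.default_rng(1)
G = rng.random((4, 4))
P = np.eye(4)[rng.permutation(4)]

print(np.isclose(h_loc(G, P.T @ G @ P), 0.0))  # isomorphic copy: loss is 0
print(h_loc(G, G + 0.5) > 0.0)                 # perturbed graph: loss > 0
```

This function is non-negative, vanishes exactly on the isomorphism class of $G$, and small values force $G'$ to be close to that class, matching the intuition of a loss that penalizes deviation from the class.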
\begin{lemma} \label{lemma.locate.approx}
Let $\mathcal{C}$ be a class of permutation-invariant functions $K \to \mathbb{R}$.
If $\mathcal{C}$ is able to locate every isomorphism class, then $\mathcal{C}^{+2}$ is universally approximating.
\end{lemma}
Combining the two lemmas above, we arrive at the following theorem:
\begin{theorem}
If $\mathcal{C}$, a collection of continuous permutation-invariant functions from $K$ to $\mathbb{R}$, is GIso-discriminating, then $\mathcal{C}^{+3}$ is universally approximating.
\end{theorem}
\section{A framework of representation power based on sigma-algebra}
\label{sec.sigma}
\subsection{Introducing sigma-algebra to this context}
Let $K = \mathcal{X}^{n \times n}$ be a finite input space. Let $Q_K:=K/_{\simeq}$ be the set of isomorphism classes under the equivalence relation of graph isomorphism. That is, for all $\tau \in Q_K$, $\tau = \{ \pi^\intercal G \pi : \pi \in S_n \}$ for some $G \in K$.
Intuitively, a maximally expressive collection of permutation-invariant functions, $\mathcal{C}$, will allow us to know exactly which isomorphism class $\tau$ a given graph $G$ belongs to, by looking at the outputs of certain functions in the collection applied to $G$. Heuristically, we can consider each function in $\mathcal{C}$ as a ``measurement'', which partitions that graph space $K$ according to the function value at each point. If $\mathcal{C}$ is powerful enough, then as a collection it will partition $K$ to be as fine as $Q_K$. If not, it is going to be coarser than $Q_K$. These intuitions motivate us to introduce the language of sigma-algebra.
Recall that an algebra on a set $K$ is a collection of subsets of $K$ that includes $K$ itself, is closed under complement, and is closed under finite union. Because $K$ is finite, we have that an algebra on $K$ is also a sigma-algebra on $K$, where a sigma-algebra further satisfies the condition of being closed under countable unions. Since $Q_K$ is a set of (non-intersecting) subsets of $K$, we can obtain the algebra generated by $Q_K$, defined as the smallest algebra that contains $Q_K$, and use $\sigma(Q_K)$ to denote the algebra (and sigma-algebra) generated by $Q_K$.
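For intuition, the atoms of $\sigma(Q_K)$ can be enumerated explicitly in a toy case. The sketch below (our illustration, not from the paper) takes $n = 2$ with binary labels, so $|K| = 16$, and groups the graphs into isomorphism classes, i.e., the orbits of $K$ under conjugation by $S_2$.

```python
from itertools import product

n, labels = 2, (0, 1)
perms = [(0, 1), (1, 0)]  # the two elements of S_2

def conj(G, p):
    """Relabel graph G (a tuple-of-tuples matrix) by permutation p."""
    return tuple(tuple(G[p[i]][p[j]] for j in range(n)) for i in range(n))

# K = X^{n x n}: all 16 labelled graphs on 2 nodes with binary labels
K = [((a, b), (c, d)) for a, b, c, d in product(labels, repeat=4)]

# Q_K: the orbits of K under S_2, i.e. the isomorphism classes
Q_K = set(frozenset(conj(G, p) for p in perms) for G in K)
print(len(K), len(Q_K))  # 16 10
```

The 10 classes partition $K$, and $\sigma(Q_K)$ consists of all unions of these classes: the finest events a permutation-invariant function can distinguish.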
\begin{observation}
If $f : \mathcal{X}^{n \times n} \to \mathbb{R}$ is a permutation-invariant function, then $f$ is measurable with respect to $\sigma(Q_K)$, which we denote by $f \in \mathcal{M}[\sigma(Q_K)]$.
\end{observation}
Now consider a class of functions $\mathcal{C}$ that is permutation-invariant. Then for all $f \in \mathcal{C}, f \in \mathcal{M}[\sigma(Q_K)]$. We define the sigma-algebra generated by $f$ as the set of all the pre-images of Borel sets on $\mathbb{R}$ under $f$, and denote it by $\sigma(f)$. It is the smallest sigma-algebra on $K$ that makes $f$ measurable. For a class of functions $\mathcal{C}$, $\sigma(\mathcal{C})$ is defined as the smallest sigma-algebra on $K$ that makes all functions in $\mathcal{C}$ measurable. Because here we assume $K$ is finite, it does not matter whether $\mathcal{C}$ is a countable collection.
\subsection{Reformulating graph isomorphism testing and universal approximation with sigma-algebra}
\label{sec.reformulating}
We restrict our attention to the case of finite feature space. Given a graph $G \in \mathcal{X}^{n \times n}$, we use $\mathcal{E}(G)$ to denote its isomorphism class, $\{ G' \in \mathcal{X}^{n \times n}: G' \simeq G \}$. The following results are proven in Section~\ref{sec.proofs.reformulating}
\begin{theorem}\label{teo5}
If $\mathcal{C}$ is a class of permutation-invariant functions on $\mathcal{X}^{n \times n}$ and $\mathcal{C}$ is GIso-discriminating, then $\sigma(\mathcal{C}) = \sigma(Q_K)$.
\end{theorem}
Together with Theorem \ref{UA2PD}, the following is an immediate consequence:
\begin{corollary}
If $\mathcal{C}$ is a class of permutation-invariant functions on $\mathcal{X}^{n \times n}$ and $\mathcal{C}$ achieves universal approximation, then $\sigma(\mathcal{C}) = \sigma(Q_K)$.
\end{corollary}
\begin{theorem} \label{teo6}
Let $\mathcal{C}$ be a class of permutation-invariant functions on $\mathcal{X}^{n \times n}$ with $\sigma(\mathcal{C}) = \sigma(Q_K)$. Then $\mathcal{C}$ is GIso-discriminating.
\end{theorem}
Thus, this sigma-algebra language is a natural notion for characterizing the power of graph neural networks, because as shown above, generating the finest sigma-algebra $\sigma(Q_K)$ is equivalent to being GIso-discriminating, and therefore to universal approximation.
Moreover, when $\mathcal{C}$ is not GIso-discriminating or universal, we can evaluate its representation power by studying $\sigma(\mathcal{C})$, which gives a measure for comparing the power of different GNN families.
Given two classes of functions $\mathcal{C}_1, \mathcal{C}_2$, we have $\sigma(\mathcal{C}_1) \subseteq \sigma(\mathcal{C}_2)$ if and only if $\mathcal{M}[\sigma(\mathcal{C}_1)] \subseteq \mathcal{M}[\sigma(\mathcal{C}_2)]$, i.e., if and only if $\mathcal{C}_1$ is at most as powerful as $\mathcal{C}_2$ in terms of representation power.
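To make this comparison concrete, the toy computation below (our illustration) contrasts $\sigma(Q_K)$ with the sigma-algebra generated by the single invariant function $f(G) = \sum_{i,j} G_{ij}$, for $n = 2$ with binary labels: the atoms of $\sigma(\{f\})$ are the preimages $f^{-1}(v)$, which merge distinct isomorphism classes, so $\{f\}$ is strictly coarser and hence not GIso-discriminating.

```python
from itertools import product

n, labels = 2, (0, 1)
perms = [(0, 1), (1, 0)]  # the two elements of S_2
conj = lambda G, p: tuple(tuple(G[p[i]][p[j]] for j in range(n)) for i in range(n))

K = [((a, b), (c, d)) for a, b, c, d in product(labels, repeat=4)]

# atoms of sigma(Q_K): the isomorphism classes themselves
iso_atoms = set(frozenset(conj(G, p) for p in perms) for G in K)

# atoms of sigma({f}) for the invariant f(G) = sum of all labels:
# the preimages f^{-1}(v), which merge distinct isomorphism classes
f = lambda G: sum(map(sum, G))
f_atoms = set(frozenset(G for G in K if f(G) == v) for v in set(map(f, K)))

print(len(iso_atoms), len(f_atoms))  # 10 5: sigma({f}) is strictly coarser
```

A class generating a strictly coarser sigma-algebra cannot separate two classes lying in the same atom, which is exactly the failure mode of Theorem~\ref{teo6} in reverse.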
In Appendix \ref{app.comparison} we use this notion to compare the expressive power of different families of GNNs as well as other algorithms such as 1-WL, linear programming and semidefinite programming, in terms of their ability to distinguish non-isomorphic graphs. We summarize our findings in Figure~\ref{fig.diagram}.
\begin{figure}[ht]
\small
\centering
\begin{tikzcd}
& \text{sGNN}(I,A) \arrow[d, hook] \arrow[ddr, hook]& \\
& LP\equiv 1-WL \equiv GIN \arrow[d, hook] \arrow[dl, hook] & \\
\text{SDP}\arrow[dd,hook]& \text{MPNN}^* \arrow[d,hook] & \text{sGNN}(I,D,A,\{\min\{A^t,1\}\}_{t=1}^T) \arrow[ddl,hook]\\
& \text{order 2 $G$-invariant networks}^* \arrow[d,hook] & \text{spectral methods}\arrow[dl,hook] \\
\text{SoS hierarchy}& \text{Ring-GNN}&
\end{tikzcd}
\caption{\small Relative comparison of function classes in terms of their ability to solve graph isomorphism.
\newline $^*$Note that, on the one hand, GIN is defined by \cite{xu2018powerful} as a form of message passing neural network, justifying the inclusion GIN $\hookrightarrow$ MPNN. On the other hand, \cite{maron2018invariant} shows that message passing neural networks can be expressed as a modified form of order-2 $G$-invariant networks (which may not coincide with the definition we consider in this paper).
Therefore the inclusion GIN $\hookrightarrow$ order-2 $G$-invariant networks has yet to be established rigorously.
\vspace{-15pt} }
\label{fig.diagram}
\end{figure}
\section{Ring-GNN: a GNN defined on the ring of equivariant functions}
We now investigate the $G$-invariant network framework proposed in \cite{maron2019universality} (see Appendix~\ref{app.Ginvariant} for its definition and a description of an adapted version that works on graph-structured inputs, which we call the \textit{Graph $G$-invariant Networks}). The architecture of $G$-invariant networks is built by interleaving compositions of equivariant linear layers between tensors of potentially different orders and point-wise nonlinear activation functions. It is a powerful framework that can achieve universal approximation
if the order of the tensors can grow as $\frac{n(n-1)}{2}$, where $n$ is the number of nodes in the graph, but less is known about its approximation power when the tensor order is restricted. One particularly interesting subclass of $G$-invariant networks is that with maximum tensor order 2, because \cite{maron2018invariant} shows that it can approximate any Message Passing Neural Network \cite{gilmer2017neural}. Moreover, it is both mathematically cumbersome and computationally expensive to include equivariant linear layers involving tensors of order higher than 2.
Our following result shows that the order-2 Graph $G$-invariant Networks subclass of functions is quite restrictive. The proof is given in Appendix \ref{app.Ginvariant}.
\begin{theorem} \label{prop.Ginvariant}
Order-2 Graph $G$-invariant Networks cannot distinguish between non-isomorphic regular graphs with the same degree.
\end{theorem}
Motivated by this limitation, we propose a GNN architecture that extends the family of order-2 Graph $G$-invariant Networks without going into higher order tensors. In particular, we want the new family to include GNNs that can distinguish some pairs of non-isomorphic regular graphs with the same degree. For instance, take
the pair of Circular Skip Link graphs $G_{8, 2}$ and $G_{8,3}$,
illustrated in Figure \ref{cslfig}.
Roughly speaking, if all the nodes in both graphs have the same node feature, then because they all have the same degree, the updates of node states in both graph neural networks based on neighborhood aggregation and the WL test will fail to distinguish the nodes. However, the \textit{power graphs}\footnote{If $A$ is the adjacency matrix of a graph, its power graph has adjacency matrix $\min(A^2, 1)$. The matrix $\min(A^2, 1)$ has been used in \cite{chen2019cdsbm} in graph neural networks for community detection and in \cite{nowak2017note} for the quadratic assignment problem.} of $G_{8, 2}$ and $G_{8,3}$ have different degrees.
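This can be checked directly: the sketch below (our illustration; \texttt{csl} builds the adjacency matrix of $G_{n,k}$) confirms that $G_{8,2}$ and $G_{8,3}$ have identical degree sequences, while their power graphs $\min(A^2, 1)$ do not.

```python
import numpy as np

def csl(n, k):
    """Adjacency matrix of the Circular Skip Link graph G_{n,k}."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for step in (1, k):
            A[i, (i + step) % n] = A[(i + step) % n, i] = 1
    return A

A2, A3 = csl(8, 2), csl(8, 3)

deg = lambda A: sorted(A.sum(axis=1))                      # degree sequence
pdeg = lambda A: sorted(np.minimum(A @ A, 1).sum(axis=1))  # power-graph degrees

print(deg(A2) == deg(A3))    # True: both graphs are 4-regular
print(pdeg(A2) == pdeg(A3))  # False: the power graphs differ
```

Intuitively, skip lengths $\{1,2\}$ let two-step walks reach every node, while skip lengths $\{1,3\}$ (both odd, with $n = 8$ even) confine two-step walks to nodes of the same parity, so the two power graphs have different degrees.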
Another important example comes from spectral methods that operate on \emph{normalized} operators, such as the normalized Laplacian $\Delta = I - D^{-1/2} A D^{-1/2}$, where $D$ is the diagonal degree operator. Such normalization preserves the permutation symmetries and in many clustering applications leads to dramatic improvements \cite{von2007tutorial}.
This motivates us to consider a polynomial ring generated by the matrices that are the outputs of permutation-equivariant linear layers, rather than just the linear space of those outputs. Together with point-wise nonlinear activation functions such as ReLU, power graph adjacency matrices like $\min(A^2, 1)$ can be expressed with suitable choices of parameters. We call the resulting architecture the \textit{Ring-GNN} \footnote{We call it Ring-GNN since the main object we consider is the ring of matrices, but technically we can express an associative algebra since our model includes scalar multiplications.}.
\begin{definition}[Ring-GNN] Given a graph on $n$ nodes with both node and edge features in $\mathbb R^d$, we represent it with a tensor $A\in \mathbb R^{n\times n\times d}$.
\cite{maron2018invariant}
shows that all linear equivariant layers from $\mathbb{R}^{n \times n}$ to $\mathbb{R}^{n \times n}$ can be expressed as $L_\theta(A)=\sum_{i=1}^{15} \theta_i L_i(A) + \sum_{i=16}^{17} \theta_i \overline{L}_i$, where $\{L_i\}_{i = 1, \ldots, 15}$ is a basis
of the space of all linear equivariant functions from $\mathbb{R}^{n \times n}$ to $\mathbb{R}^{n \times n}$, $\overline{L}_{16}$ and $\overline{L}_{17}$ are the basis for the bias terms, and $\theta \in \mathbb{R}^{17}$ are the parameters that determine $L_\theta$. Generalizing to an equivariant linear layer from $\mathbb{R}^{n \times n \times d}$ to $\mathbb{R}^{n \times n \times d'}$, we set $L_{\theta}(A)_{\cdot, \cdot, k'} = \sum_{k=1}^d \sum_{i=1}^{15} \theta_{k, k', i} L_i(A_{\cdot, \cdot, k}) + \sum_{i=16}^{17} \theta_{k, k', i} \overline{L}_i$, with $\theta \in \mathbb{R}^{d \times d' \times 17}$.
With this formulation, we now define a Ring-GNN with $T$ layers. First, set $A^{(0)}=A$. In the $t^{th}$ layer, let
\begin{eqnarray*}
B_1^{(t)}&=& \rho(L_{\alpha^{(t)}}(A^{(t)})) \\
B_2^{(t)}&=& \rho(L_{\beta^{(t)}}(A^{(t)})\cdot L_{\gamma^{(t)}}(A^{(t)})) \\
A^{(t+1)}&=& k_1^{(t)} B_1^{(t)} + k_2^{(t)} B_2^{(t)}
\end{eqnarray*}
where $k_1^{(t)}, k_2^{(t)} \in \mathbb{R}$ and $\alpha^{(t)}, \beta^{(t)}, \gamma^{(t)} \in \mathbb{ R}^{d^{(t)} \times {d'}^{(t)} \times 17}$ are learnable parameters. If a scalar output is desired, then in the general form we set the output to be $\theta_S \sum_{i,j} A_{ij}^{(T)} + \theta_D \sum_{i} A_{ii}^{(T)} + \sum_{i} \theta_{i} \lambda_{i}(A^{(T)})$, where $\theta_S, \theta_D, \theta_1, \ldots, \theta_n \in \mathbb{R}$ are trainable parameters, and $\lambda_{i}(A^{(T)})$ is the $i$th eigenvalue of $A^{(T)}$.
\end{definition}
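The following simplified sketch (our illustration; it uses $d = 1$ channels and only 5 of the 15 equivariant basis operations, so it is not a faithful implementation of the full layer) shows the structure of one Ring-GNN layer, and how choosing $k_1^{(t)} = 0$ with $\beta, \gamma$ selecting the identity operation recovers $A^2$, from which $\min(A^2, 1)$ follows by one point-wise clipping step.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def basis_ops(A):
    """5 of the 15 linear equivariant ops R^{n x n} -> R^{n x n} from
    Maron et al.; enough for this sketch, not the complete basis."""
    n = len(A)
    ones = np.ones((n, n))
    return [
        A,                                        # identity
        A.T,                                      # transpose
        np.diag(np.diag(A)),                      # keep only the diagonal
        A.sum(axis=1, keepdims=True) * ones / n,  # broadcast row sums
        A.sum() * ones / n ** 2,                  # broadcast total sum
    ]

def L(theta, A):
    """A linear equivariant layer: learned combination of the basis ops."""
    return sum(t * op for t, op in zip(theta, basis_ops(A)))

def ring_gnn_layer(A, alpha, beta, gamma, k1, k2):
    B1 = relu(L(alpha, A))
    B2 = relu(L(beta, A) @ L(gamma, A))  # the "ring" part: a matrix product
    return k1 * B1 + k2 * B2

# A random symmetric 0/1 adjacency matrix
rng = np.random.default_rng(0)
A = np.triu((rng.random((6, 6)) < 0.4).astype(float), 1)
A = A + A.T

# With k1 = 0 and beta, gamma selecting the identity op, the layer outputs
# A @ A, so min(A^2, 1) follows by one point-wise clipping step:
e = [1, 0, 0, 0, 0]
out = ring_gnn_layer(A, alpha=e, beta=e, gamma=e, k1=0.0, k2=1.0)
print(np.array_equal(np.minimum(out, 1), np.minimum(A @ A, 1)))  # True
```

Each basis operation commutes with node relabelling, and matrix multiplication preserves equivariance, so the layer as a whole remains equivariant while now expressing products of equivariant terms.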
Note that each layer is equivariant, and the map from $A$ to the final scalar output is invariant. A Ring-GNN reduces to an order-2 Graph $G$-invariant Network if $k_2^{(t)} = 0$ for all $t$. With $J+1$ layers and suitable choices of the parameters, it is possible to obtain $\min(A^{2^{J}}, 1)$ in the $(J+1)^{th}$ layer. Therefore, we expect it to succeed in distinguishing certain pairs of regular graphs on which order-2 Graph $G$-invariant Networks fail, such as the Circular Skip Link graphs. Indeed, this is verified in the synthetic experiment presented in the next section. The normalized Laplacian can also be obtained, since the degree matrix can be inverted by taking reciprocals on the diagonal, and entry-wise inversion and square root on the diagonal can be approximated by MLPs.
The terms in the output layer involving eigenvalues are optional, depending on the task. For example, in community detection spectral information is commonly used \cite{krzakala2013spectral}. We could also take a fixed number of eigenvalues instead of the full spectrum. In the experiments, Ring-GNN-SVD includes the eigenvalue terms while Ring-GNN does not, as explained in appendix \ref{archi}. Computationally, the complexity of running the forward model grows as $O(n^3)$, dominated by matrix multiplications and possibly singular value decomposition for computing the eigenvalues.
We note also that Ring-GNN can be augmented with matrix inverses or more generally with functional calculus on the spectrum of any of the intermediate representations \footnote{When $A=A^{(0)}$ is an undirected graph, one easily verifies that $A^{(t)}$ contains only symmetric matrices for each $t$.} while keeping $O(n^3)$ computational complexity.
Finally, note that a Graph $G$-invariant Network with maximal tensor order $d$ has complexity at least $O(n^d)$. Therefore, the Ring-GNN explores higher-order interactions in the graph that order-2 Graph $G$-invariant Networks neglect, while remaining computationally tractable.
\begin{figure}
\centering
\includegraphics[width=0.15\textwidth,trim={6cm 4.8cm 20cm 7.8cm},clip]{skl2black}
\includegraphics[width=0.15\textwidth,trim={6cm 4.8cm 20cm 7.8cm},clip]{skl3black}
\caption{The Circular Skip Link graphs $G_{n,k}$ are undirected graphs on $n$ nodes $q_0,\ldots, q_{n-1}$ such that $(i,j)\in E$ if and only if $|i-j|\equiv 1 \text{ or } k \pmod n$. We depict $G_{8,2}$ (left) and $G_{8,3}$ (right). It is easy to check that $G_{n,k}$ and $G_{n',k'}$ are not isomorphic unless $n=n'$ and $k\equiv \pm k' \pmod n$. Both the 1-WL test and order-2 $G$-invariant networks fail to distinguish them. \vspace{-10pt}}
\label{cslfig}
\label{fig.skiplength}
\end{figure}
\section{Experiments}
\label{experiments}
The different models and the detailed setup of the experiments are discussed in Appendix \ref{archi}.
\subsection{Classifying Circular Skip Links (CSL) graphs}
\label{cslexp}
The following experiment on synthetic data demonstrates the connection between function fitting and graph isomorphism testing. The Circular Skip Links graphs are undirected regular graphs with node degree 4 \cite{murphy2019relational}, as illustrated in Figure \ref{cslfig}. Note that two CSL graphs $G_{n,k}$ and $G_{n',k'}$ are not isomorphic unless $n=n'$ and $k\equiv \pm k' \pmod n$. In the experiment, which has the same setup as in \cite{murphy2019relational}, we fix $n=41$, and set $k \in \{2, 3, 4, 5, 6, 9, 11, 12, 13, 16 \}$, and each $k$ corresponds to a distinct isomorphism class. The task is then to classify a graph $G_{n, k}$ by its skip length $k$.
Note that since the 10 classes have the same size, a naive uniform classifier would obtain $0.1$ accuracy. As we see from Table \ref{table.synthetic}, neither GIN nor the order-2 $G$-invariant network outperforms the naive classifier. Their failure in this task is unsurprising: WL tests are proven to fall short of distinguishing such pairs of non-isomorphic regular graphs \cite{cai1992optimal}, and hence so does GIN \cite{xu2018powerful}; by the theoretical results from the previous section, order-2 Graph $G$-invariant networks are unable to distinguish them either. Therefore, their failure as graph isomorphism tests is consistent with their failure in this classification task, which can be understood as approximating the function that maps each graph to its class label.
It should be noted that, since graph isomorphism tests are not entirely well-posed as classification tasks, the performance of GNN models can vary due to randomness. But the fact that Ring-GNNs achieve a relatively high maximum accuracy (compared to RP, for example) demonstrates that, as a class of GNNs, it is rich enough to contain functions that distinguish the CSL graphs to a large extent.
\begin{table}[ht]
\centering
\begin{tabular}{l|lll||ll|ll}
\hline
& \multicolumn{3}{|c||}{Circular Skip Links} & \multicolumn{2}{c|}{IMDBB} & \multicolumn{2}{c}{IMDBM} \\
GNN architecture & max & min & std & mean & std & mean & std \\
\hline \hline
RP-GIN $\dagger$ & 53.3 & 10 & 12.9 & - & - & - & - \\
GIN $\dagger$ & 10 & 10 & 0 & 75.1 & 5.1 & 52.3 & 2.8 \\
Order 2 G-invariant $\dagger$ & 10 & 10 & 0 & 71.27 & 4.5 & 48.55 & 3.9 \\
sGNN-5 & 80 & 80 & 0 & 72.8 & 3.8 & 49.4 & 3.2 \\
sGNN-2 & 30 & 30 & 0 & 73.1 & 5.2 & 49.0 & 2.1 \\
sGNN-1 & 10 & 10 & 0 & 72.7 & 4.9 & 49.0 & 2.1 \\
LGNN \cite{chen2019cdsbm} & 30 & 30 & 0 & 74.1 & 4.6 & 50.9 & 3.0 \\
Ring-GNN & 80 & 10 & 15.7 & 73.0 & 5.4 & 48.2 & 2.7 \\
Ring-GNN-SVD & 100 & 100 & 0 & 73.1 & 3.3 & 49.6 & 3.0 \\
\hline
\end{tabular}
\vspace{5pt}
\caption{\textbf{(left)} Accuracy of different GNNs at classifying CSL graphs (see Section \ref{cslexp}). We report the best and worst performance among 10 experiments.
\textbf{(right)} Accuracy of different GNNs on the IMDB datasets (see Section \ref{sec.imbdb}). We report the best performance among all epochs on a 10-fold cross validation dataset, as was done in \cite{xu2018powerful}.
$\dagger$: Performance reported by \cite{murphy2019relational}, \cite{xu2018powerful} and \cite{maron2018invariant}.
}
\vspace{-1.5em}
\label{table.synthetic}
\end{table}
\subsection{IMDB datasets} \label{sec.imbdb}
We use the two IMDB datasets (IMDBBINARY, IMDBMULTI) to test different models in real-world scenarios. Since our focus is on distinguishing graph structures, these datasets are suitable as they do not contain node features, and hence the adjacency matrix contains all the input data. The IMDBBINARY dataset has 1000 graphs with 2 classes and an average of 19.8 nodes per graph; it is randomly partitioned into 900/100 for training/validation. The IMDBMULTI dataset has 1500 graphs with 3 classes and an average of 13.0 nodes per graph; it is randomly partitioned into 1350/150 for training/validation. All models are evaluated via 10-fold cross validation, and the best accuracy is calculated by averaging across folds followed by maximizing along epochs~\cite{xu2018powerful}. Importantly, the architecture hyper-parameters of the Ring-GNN we use are close to those provided in \cite{maron2018invariant}, to show that the order-2 $G$-invariant Network is included in the model family we propose. The results show that Ring-GNN models achieve higher accuracy than order-2 $G$-invariant networks on both datasets.
Admittedly, its accuracy does not reach that of the state of the art.
However, the main goal of this part of our work is not to engineer the best-performing GNN through hyperparameter optimization, but rather to propose Ring-GNN as an augmented version of the order-2 graph $G$-invariant network and to present experimental results that support the theory.
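The fold-then-epoch selection protocol used above can be sketched in a few lines (a minimal illustration of the metric only; the function name and list-of-lists layout are ours, not from any released code):

```python
def cross_val_best_accuracy(acc):
    """acc[f][e] holds the validation accuracy of fold f at epoch e.

    Following the protocol of Xu et al.: average the accuracy across
    folds at each epoch, then report the best epoch-averaged value."""
    n_folds, n_epochs = len(acc), len(acc[0])
    per_epoch = [sum(fold[e] for fold in acc) / n_folds
                 for e in range(n_epochs)]
    return max(per_epoch)
```

Note that this differs from taking the best epoch within each fold and then averaging, which would give a more optimistic estimate.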
\input{conclusions.tex}
\bibliographystyle{plain}
\section{Introduction.}
One of the first exotic mathematical objects encountered by the
post-calculus student is the Cantor set
\begin{equation}\label{Steinhaus}
C = \left\{\sum_{k= 1}^\infty \alpha_k3^{-k}, \alpha_k \in \{0,2\}\right\}.
\end{equation}
(See {Section 2} for
several equivalent definitions of $C$.) One of its most beautiful
properties is that
\begin{equation}\label{C+C}
C+C := \{x+y: x,y \in C\}
\end{equation}
is equal to $[0,2]$. (The whole interval is produced by adding dust to itself!)
The first published proof of \eqref{C+C} was by Hugo Steinhaus
\cite{steinhaus:sums} in 1917. The result was later rediscovered by
John Randolph in 1940 \cite{ran:cantor}.
We remind the reader of the beautiful constructive proof of
\eqref{C+C}. It is enough to prove the containment $C+C \supset
[0,2]$. Given $u \in [0,2]$, consider the ternary representation for
$u/2$:
\begin{equation}
\frac u2= \sum_{k= 1}^{\infty} \frac{\epsilon_k}{3^k}, \quad \epsilon_k \in \{0,1,2\}.
\end{equation}
Define pairs $(\alpha_k,\beta_k)$ to be $(0,0), (2,0), (2,2)$ according to
whether $\epsilon_k = 0, 1, 2$, respectively, and define elements $x, y \in
C$ by
\begin{equation}
x = \sum_{k= 1}^{\infty} \frac{\alpha_k}{3^k}, \qquad
y = \sum_{k= 1}^{\infty} \frac{\beta_k}{3^k}.
\end{equation}
Since $\alpha_k + \beta_k = 2\epsilon_k$, $x+y = 2\cdot\frac u2 = u$.
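The digit rule above is effective: truncating the ternary expansion of $u/2$ yields the splitting to any desired accuracy, and the split is exact when $u/2$ is a ternary rational. A sketch (the function name and the truncation parameter `n_digits` are ours):

```python
from fractions import Fraction

def steinhaus_split(u, n_digits=40):
    """Split u in [0, 2] as u = x + y with all ternary digits of x and
    y in {0, 2} (so x, y lie in the Cantor set), via the digit rule
    applied to u/2:  0 -> (0, 0),  1 -> (2, 0),  2 -> (2, 2).

    Exact when u/2 has a terminating ternary expansion; otherwise
    x + y approximates u to within 2 * 3**(-n_digits)."""
    half = Fraction(u) / 2
    if half == 1:                  # u = 2: use 1 = 0.222..._3
        return Fraction(1), Fraction(1)
    x, y = Fraction(0), Fraction(0)
    rule = {0: (0, 0), 1: (2, 0), 2: (2, 2)}
    for k in range(1, n_digits + 1):
        half *= 3
        eps = int(half)            # k-th ternary digit of u/2
        half -= eps
        a, b = rule[eps]
        x += Fraction(a, 3 ** k)
        y += Fraction(b, 3 ** k)
    return x, y
```

For instance, $u=\frac43$ splits exactly as $\frac23+\frac23$, while $u=\frac12$ (whose half, $\frac14=0.0202\ldots_3$, is not a ternary rational) is recovered only up to the truncation error.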
While presenting this proof in a class, one of the authors
(BR)
wondered what would happen if addition were replaced by the other arithmetic
operations. Another author
(JT)
immediately pointed out that subtraction is easy, because of a symmetry of $C$:
\begin{equation*}
x = \sum_{k=1}^\infty \frac{\epsilon_k}{3^k} \in C \iff 1-x = \sum_{k=1}^\infty
\frac{2-\epsilon_k}{3^k} \in C.
\end{equation*}
Thus
\begin{equation*}
C-C := \{x-y: x,y \in C\} = \{x-(1-z): x,z \in C\} = C+C -1 = [-1,1].
\end{equation*}
More generally, to understand the structure of linear combinations
$aC+bC$, $a,b\in{\mathbb R}$, it suffices to consider the case $a=1$ and $0\le
b\le 1$. (If $a>b>0$, then $aC+bC=a(C+(b/a)C)$; the remaining cases
are left to the reader.) The precise structure of the linear
combination set
$$
aC+bC
$$
was obtained by Paw{\l}owicz \cite{paw:cantor}, who extended an
earlier result by Utz \cite{utz:cantor}, published in 1951. Utz's
result states that
\begin{equation}\label{eq:utz}
C + b C = [0,1+b] \qquad \mbox{for every $\tfrac13 \le b \le 3$.}
\end{equation}
Multiplication is trickier. The inclusion $C \subset [0,\frac13] \cup
[\frac23, 1]$
implies that any element of the interval $(\frac 13,\frac49)$ cannot
be written as a product
of two elements from $C$. { Thus the measure of the product of $C$
with itself is at most $\tfrac89$.} This paper grew out of a study
of multiplication on $C$.
In this paper we will prove the following results.
\begin{theorem}
\label{maintheorem}
\
\begin{enumerate}
\item Every $u \in [0,1]$ can be written as $u = x^2y$ for some $x,y \in C$.
\item The set of quotients from $C$ can be described as follows:
\begin{equation}\label{E:quot}
\left\{ \frac xy: x, y \in C, y\neq 0\right\} = \bigcup_{m = -\infty}^\infty
\left[\frac 23 \cdot 3^m , \frac 32 \cdot 3^m\right].
\end{equation}
\item The set $\{xy:x,y \in C\}$ is a closed set with Lebesgue measure
{ strictly greater than}
$\frac{17}{21}$.
\end{enumerate}
\end{theorem}
In particular, part (1) of Theorem \ref{maintheorem} implies that
every real number in the interval $[0,1]$ is the product of three
elements of $C$.
In words, \eqref{E:quot} says that a positive real number $u$ is a quotient of two elements of
the Cantor set if and only if either the left-most nonzero digit in
the ternary representation of $u$ is ``2,'' or the left-most nonzero digit
is ``1'' but the first subsequent non-``1'' digit is ``0,'' not ``2.''
This paper is organized as follows. We begin in section \ref{sec:tools} with
several different descriptions of the Cantor set, and then the key tools, all of which
are accessible to { students in} a good undergraduate analysis class. As a warmup, we
use these tools to give a short proof of Utz's result \eqref{eq:utz}
in Section \ref{subsec:plus-and-minus}. Sections \ref{subsec:times},
\ref{subsec:divide} and \ref{subsec:times-again} are devoted to the
proofs of {parts (1), (2) and (3) of} Theorem~\ref{maintheorem},
respectively. Sprinkled throughout are relevant open questions.
This article began as a standard research paper, but we realized that
many of our main results might be of interest to a wider audience. In
Section \ref{sec:final-remarks}, we discuss some of our other results,
which will be
published elsewhere, with different combinations of co-authors.
\section{Tools.}\label{sec:tools}
We begin by recalling several equivalent and well-known definitions of the Cantor
set. See Fleron \cite{fl:cantor} and the references within for an excellent overview of the history of the Cantor set and the context in which several of these definitions first arose.
The standard ternary representation of a real number $x$ in $[0,1]$ is
\begin{equation}\label{E:ternrep}
x = \sum_{k=1}^{\infty} \frac {\alpha_k(x)}{3^k},\qquad \alpha_k(x) \in \{0,1,2\}.
\end{equation}
This representation is unique, except for the {\it ternary rationals},
$\{ \frac m{3^n}, m,n \in \mathbb N\}$, which have two
ternary representations. Supposing $\alpha_n > 0$ and $m \not\equiv 0
\pmod 3$, so that $\alpha_n \in \{1,2\}$ below, we have
\begin{equation}\label{E:ternexc}
\begin{split}
\frac m{3^n}
&= \sum_{k=1}^{n-1} \frac {\alpha_k}{3^k} + \frac{\alpha_n}{3^n} + \sum_{k=n+1}^{\infty} \frac {0}{3^k}\\
&= \sum_{k=1}^{n-1} \frac {\alpha_k}{3^k} + \frac{\alpha_n-1}{3^n} + \sum_{k=n+1}^{\infty} \frac {2}{3^k}.
\end{split}
\end{equation}
The {\it Cantor set} $C$ consists of those $x \in [0,1]$ admitting a ternary representation as in \eqref{E:ternrep} with
$\alpha_k(x) \in \{0,2\}$ for all $k$. Note that $C$ also contains those ternary rationals as in \eqref{E:ternexc} whose final digit is ``1.'' These may be
transformed as above into a representation in which $\alpha_n(x) = 0$ and $\alpha_k(x) = 2$ for $k > n$.
As noted earlier, $x \in C$ if and only if $1-x \in C$. Further, if $k \in \mathbb N$, then
\begin{equation}
x \in C \implies 3^{-k}x \in C.
\end{equation}
This definition arises in dynamical systems, as the Cantor set $C$ can be viewed as an invariant set for the map $x \mapsto 3x \bmod 1$, or equivalently, as the image of an invariant set $C'$ for the one-sided shift map $\sigma$ acting on $\Omega = \{0,1,2\}^{\mathbb N}$. Given a sequence $\omega = (\omega_n)_{n=1}^\infty = (\omega_1,\omega_2,\ldots)$ in $\Omega$,
$$
\sigma\omega = (\omega_2,\omega_3,\ldots)\,.
$$
Letting $C' = \{0,2\}^{\mathbb N} \subset \Omega$, we realize the Cantor set $C$ as the image of $C'$ under the coding map $T:\Omega \to [0,1]$ given by
$$
T(\omega) = \sum_{n=1}^\infty \omega_n 3^{-n}.
$$
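This dynamical description yields a finite membership test for rationals: $x \in C$ if and only if the orbit of $x$ under $x \mapsto 3x \bmod 1$ never enters the open middle third $(\frac13,\frac23)$, and a rational orbit is eventually periodic. A sketch (the function name is ours):

```python
from fractions import Fraction

def in_cantor(x):
    """Exact membership test for a rational x: iterate T(x) = 3x mod 1
    and reject as soon as the orbit enters the open middle third.
    Rational orbits are eventually periodic (the denominator never
    grows), so the loop always terminates."""
    x = Fraction(x)
    if not 0 <= x <= 1:
        return False
    seen = set()
    while x not in seen:
        if Fraction(1, 3) < x < Fraction(2, 3):
            return False
        seen.add(x)
        x = (3 * x) % 1
    return True
```

For example, $\frac14 = 0.0202\ldots_3$ and $\frac3{10} = 0.\overline{0220}_3$ are in $C$, while $\frac12 = 0.111\ldots_3$ is not.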
We now present the usual ``middle-third'' definition of the Cantor
set: Define $C_n = \{x: \alpha_k(x) \in \{0,2\}, 1 \le k \le n\}$, which is
a union of $2^n$ closed intervals of length $3^{-n}$, written as
\begin{equation}
C_n = \bigcup_{i=1}^{2^n} I_{n,i}.
\end{equation}
The left-hand endpoints of the $I_{n,i}$'s comprise the set
\begin{equation}
\left \{\sum_{k=1}^{n} \frac{\epsilon_k}{3^k}: \epsilon_k \in \{0,2\}\right\}.
\end{equation}
The right-hand endpoints have ``1'' as their final nonzero
ternary digit when written as a finite ternary expansion.
The more direct definition of $C$ is as a nested intersection of
closed sets:
\begin{equation}
C = \bigcap_{n=1}^\infty C_n;\qquad C_1 \supset C_2 \supset C_3 \supset \cdots.
\end{equation}
This definition is standard in fractal geometry, where the Cantor set $C$ is seen as the invariant set for the pair of contractive linear mappings $f_1(x) = \tfrac13 x$ and $f_2(x) = \tfrac13 x + \tfrac23$ acting on the real line. That is, $C$ is the unique nonempty compact set that is fully invariant under $f_1$ and $f_2$:
$$
C = f_1(C) \cup f_2(C).
$$
Observe that each ``parent'' interval $I_{n,i} = [a,a+\frac 1{3^n}]$ in $C_n$ has two ``child'' intervals
\begin{equation}
I_{n+1,2i-1} = \left[ a,a+\tfrac 1{3^{n+1}}\right],\qquad
I_{n+1,2i} = \left[a+\tfrac2{3^{n+1}},a+\tfrac3{3^{n+1}}\right]
\end{equation}
in $C_{n+1}$, and $C_{n+1}$ is the union of all children
intervals whose parents are in $C_n$.
It is useful to introduce the following notation to
represent the omission of the middle third:
\begin{equation}
I = [a,a+3t] \implies \ddot I = [a,a+t] \cup [a+2t,a+3t].
\end{equation}
Using this notation,
\begin{equation}
C_{n+1} = \bigcup_{i=1}^{2^{n+1}} I_{n+1,i} = \bigcup_{i=1}^{2^n} \ddot I_{n,i}.
\end{equation}
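The $\ddot I$ recursion is easy to run: a short sketch that generates the $2^n$ intervals of $C_n$ with exact rational endpoints (function names are ours):

```python
from fractions import Fraction

def ddot(iv):
    """Remove the open middle third of a closed interval [a, a+3t],
    returning the two children [a, a+t] and [a+2t, a+3t]."""
    a, b = iv
    t = (b - a) / 3
    return [(a, a + t), (b - t, b)]

def cantor_stage(n):
    """The 2**n closed intervals of C_n, each of length 3**(-n)."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        intervals = [child for iv in intervals for child in ddot(iv)]
    return intervals
```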
It will also be useful, for studying products and quotients, to give a third
definition. Let $\widetilde C$ = $C \cap [1/2,1] = C \cap
[2/3,1] = \frac 23 + \frac 13\cdot C$. Then by examining
the smallest $k$ for which $\epsilon_k = 2$, we see that
\begin{equation}
C = \{0\} \cup \bigcup_{k=0}^{\infty} 3^{-k}\widetilde C.
\end{equation}
Similarly, let $\widetilde C_n$ = $C_n \cap [1/2,1]$, consisting of
$2^{n-1}$ closed intervals of length $3^{-n}$:
\begin{equation}
\widetilde C_n = \bigcup_{i=2^{n-1}+1}^{2^n} I_{n,i}.
\end{equation}
Then
\begin{equation}
\widetilde C = \bigcap_{n=1}^\infty \widetilde C_n.
\end{equation}
And, analogously,
\begin{equation}
\widetilde C_{n+1} = \bigcup_{i=2^{n}+1}^{2^{n+1}} I_{n+1,i} =
\bigcup_{i=2^{n-1}+1}^{2^n} \ddot I_{n,i}.
\end{equation}
By looking at the left-hand endpoints, we see that
each interval $I_{n,i} \neq [0,\frac 1{3^n}]$ can be written as
$\frac 1{3^{n-k}} I_{k,j}$ for some $I_{k,j} \in \widetilde C_k$; hence
\begin{equation}
\begin{gathered}
C_n = \left[0,\frac 1{3^n}\right] \cup \bigcup_{k=1}^{n} \frac
1{3^{n-k}} \widetilde C_k.
\end{gathered}
\end{equation}
The keys to our method lie in two lemmas which might appear on a
first serious analysis exam. (Please do not send such exams to the authors!)
\begin{lemma}\label{lemma1}
Suppose $K_1 \supseteq K_2 \supseteq K_3 \supseteq \cdots$ are nonempty
compact subsets of $\mathbb R$, and let $K = \bigcap_i K_i$.
\noindent (i) If $(x_j) \to x$, $x_j \in K_j$, then $x \in K$.
\noindent (ii) If $F:\mathbb R^m
\to \mathbb R$ is continuous, then $F(K^{m}) = \cap F(K_i^{m})$.
\end{lemma}
\begin{proof}
(i). If $x \notin K$, then $x \notin K_r$ for some
$r$. Since $K_r^c$ is
open, there exists $\epsilon > 0$ so that $(x - \epsilon, x+ \epsilon) \subseteq K_r^c
\subseteq K_{r+1}^c \cdots$ and hence
$|x_j - x| \ge \epsilon$ for $j \ge r$, a contradiction to $x_j \to x$.
(ii). Since $K \subseteq K_i$, { we have} $F(K^{m}) \subseteq \cap F(K_i^{m})$. Conversely, suppose $u \in \cap F(K_i^{m})$. We need to find
$x \in K^{m}$ such that $F(x) = u$. For each $i$, choose $x_i = (x_{i,1}, \dots, x_{i,m}) \in K_i^m$ so that $F(x_i) = u$. Since $K_1^{m}$ is compact,
the Bolzano--Weierstrass theorem implies { that} the sequence $(x_i)$ has a convergent subsequence
$x_{r_j} = (x_{r_j,1}, \dots x_{r_j,m}) \to y = (y_1,\dots, y_m)$.
Applying (i) to the subsequence $K_{r_1} \supseteq K_{r_2} \supseteq
K_{r_3} \supseteq \cdots$, we see that each $y_k \in K$ and
since $F$ is continuous, $F(y) = u$, as desired.
\end{proof}
If we perform the middle-third construction with an initial
interval of $[a,b]$, it is easy to see that the limiting object is a
translate of the Cantor set, specifically $C_{a,b}:= a +(b-a)C$.
\begin{lemma}\label{lemma2}
Suppose $F: \mathbb R^m \to \mathbb R$ is continuous, and suppose
that for every choice of disjoint or identical subintervals $I_k
\subset [a,b]$ of common length,
\begin{equation}
F(I_1,\dots,I_m) = F(\ddot I_1,\dots, \ddot I_m).
\end{equation}
Then $F(C_{a,b}^{m}) = F([a,b]^{m})$.
\end{lemma}
\begin{proof}
We prove the result for $[a,b] = [0,1]$; the result follows generally by
composing $F$ with an appropriate linear function. Let
\begin{equation}
C_k = \bigcup_{j=1}^{2^k} I_{k,j},
\end{equation}
where each interval $I_{k,j}$ has length $3^{-k}$.
It follows that
\begin{equation}
F(C_k^{m}) = \bigcup_{1\le j_1,\dots,j_m\le 2^k} F(I_{k,j_1},\dots, I_{k,j_m}),
\end{equation}
where for each pair $(\ell,\ell')$, $I_{k,j_\ell}$ and $I_{k,j_\ell'}$
are either identical or disjoint.
Since
\begin{equation}
C_{k+1} = \bigcup_{j=1}^{2^k} \ddot I_{k,j},
\end{equation}
the hypothesis implies that $F(C_k^{m}) = F(C_{k+1}^{m})$, and
the result then follows from Lemma \ref{lemma1}(ii).
\end{proof}
We apply Lemma \ref{lemma2} only in the case $m=2$, with $[a,b] =
[0,1]$ or $[a,b] = [\frac23,1]$. (In the latter case, when $F(x,y) = xy$ or $x/y$,
it is helpful to have control of the ratio $x/y$.)
\section{Arithmetic on the Cantor set.}
\subsection{Addition and subtraction}\label{subsec:plus-and-minus}
Sums and differences of Cantor sets have been widely studied in
connection with dynamical systems. In this section we give a brief
proof of the following result of Utz \cite{utz:cantor}. Further
information about sums of Cantor sets can be found in \cite{chm:sums} and
\cite{chm:sums2}.
\begin{theorem}[Utz]
If $\lambda \in [\frac 13,3]$, then every element $u$ in $[0,1+\lambda]$ can be written in the form $u = x + \lambda y$ for $x,y \in C$.
\end{theorem}
We include this proof in order to introduce the main ideas in the proof of Theorem \ref{maintheorem} in a simpler context. The key tool is Lemma \ref{lemma2}.
Let $f_{\lambda}(x,y) = x + \lambda y$; we wish to show that $f_{\lambda}(C^{2}) = [0,1+\lambda]$. Observe that $C + \lambda C = \lambda(C + \lambda^{-1}C)$ for $\lambda \neq 0$, so it suffices to consider $\tfrac13 \le \lambda \le 1$.
\begin{proof}
We apply Lemma \ref{lemma2} and show that for any two closed
intervals $I_1,I_2$ of the same length in $[a,b]$, $f_{\lambda}(I_1,I_2) =
f_{\lambda}(\ddot I_1,\ddot I_2)$. For clarity, we write $I_1 =
[r,r+3t]$, $I_2 = [s,s+3t]$, and $w = r + \lambda s$, so that
$f_{\lambda}(I_1,I_2) = [w, w+ 3(1+\lambda)t]$.
Observe that $\ddot I_1 = [r,r+t]\cup[r+2t,r+3t]$
and $\ddot I_2 = [s,s+t]\cup[s+2t,s+3t]$, so
\begin{equation}\begin{split}
f_{\lambda}(\ddot I_1,\ddot I_2)
&= \left(\ [w,w+(1+\lambda)t] \cup [w+2\lambda t,w+(1+3\lambda)t]\ \right) \\
&\cup \left(\ [w+2t,w+(3+\lambda)t] \cup [w+(2+2\lambda)t,w+(3+3\lambda)t]\ \right) \,.
\end{split}\end{equation}
Since $\lambda \le 1$ implies $1+\lambda \ge 2\lambda$ and $3+\lambda \ge 2+ 2\lambda$, the pairs of intervals coalesce into
\begin{equation}
f_{\lambda}(\ddot I_1,\ddot I_2) = [w,w+(1+3\lambda)t] \cup [w+2t,w+(3+3\lambda)t].
\end{equation}
Since $\lambda \ge \frac 13$, { we have} $2 \le 1+3\lambda$. { Hence} $f_{\lambda}(\ddot I_1,\ddot I_2) =f_{\lambda}(I_1,I_2)$, completing the proof.
\end{proof}
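Because the proof shows $f_{\lambda}(C_{n+1}^2) = f_{\lambda}(C_n^2)$ at every stage, the identity $f_{\lambda}(C_n^2) = [0,1+\lambda]$ can be checked directly at any finite stage by interval arithmetic. A sketch (function names are ours; exact rational endpoints avoid rounding issues):

```python
from fractions import Fraction

def cantor_stage(n):
    """The 2**n closed intervals of C_n."""
    ivs = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        ivs = [iv for a, b in ivs
               for iv in [(a, a + (b - a) / 3), (b - (b - a) / 3, b)]]
    return ivs

def union(ivs):
    """Coalesce closed intervals (touching intervals merge)."""
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for a, b in ivs[1:]:
        if a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(iv) for iv in out]

def f_lambda_image(lam, n):
    """f_lambda(C_n^2) = {x + lam*y : x, y in C_n} as a union of
    intervals, using f_lambda([a,b] x [c,d]) = [a+lam*c, b+lam*d]."""
    ivs = cantor_stage(n)
    return union([(a + lam * c, b + lam * d)
                  for a, b in ivs for c, d in ivs])
```

For $\lambda \in [\frac13,1]$ the image is all of $[0,1+\lambda]$ at every stage, while for $\lambda < \frac13$ a gap appears already at stage 1 and persists.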
\begin{remark}
Unlike the folklore proof for $\lambda = 1$, there seems to be no obvious
algorithmic proof, save for $\lambda = \frac 13$. In this case, suppose
$u \in [0,\frac 43]$. If $u = \frac 43$,
then $u = 1 + \frac 13\cdot 1 \in C + \frac 13 C$. If $u < \frac 43$,
then we can write $u = x + y$ with $x, y \in C$, and we may assume $x\ge
y$. Since $y \in C$ and $y \le \frac u2 <
\frac 23$, we have $y \le \frac 13$; hence $y = \frac 13 z$ for some $z \in C$,
and $u = x + \frac 13 z$ is the desired representation.
\end{remark}
The case of subtraction, {that is, the case of} $f_{\lambda}$ when $\lambda < 0$, is easily handled.
\begin{theorem}
If $\beta = -\lambda <0$, then
\begin{equation}
f_{\beta}(C^{2}) = -\lambda + f_{\lambda}(C^{2}).
\end{equation}
\end{theorem}
\begin{proof}
If $\beta < 0$, $x,y \in C$, we have
\begin{equation}
x + \beta y = x + \beta(1-z) = -\lambda + x + \lambda z
\end{equation}
for $x$ and $z$ in $C$.
\end{proof}
\begin{remark}\label{rem:sums}
Arithmetic sums of Cantor sets and more general compact sets have been studied intensively. For instance, Mendes and Oliveira \cite{mo:topological} discuss the topological structure of sums of Cantor sets, while Schmeling and Shmerkin \cite{SS:dimension} characterize those nondecreasing sequences $0\le d_1 \le d_2\le d_3\le \cdots \le 1$ { that} can arise as the sequence of Hausdorff dimensions of iterated sumsets $A,A+A,A+A+A,\ldots$ for a compact subset $A$ of ${\mathbb R}$. Recent work of Gorodetski and Northrup \cite{gn:sums} involves the Lebesgue measure of sumsets of Cantor sets and other compact subsets of the real line.
We refer the interested reader to these papers and the references therein for more information.
\end{remark}
\subsection{Multiplication}\label{subsec:times}
We let $f(x,y) = x^2 y$ and shall show that $f(C^2) = [0,1]$. We begin
by showing that it suffices to consider $f(\widetilde C^2)$.
\begin{lemma}
If $f(\widetilde C^2) = [\frac 8{27},1]$, then $f(C^2) = [0,1]$.
\end{lemma}
\begin{proof}
Suppose $u \in [0,1]$. If $u = 0$, then $u = 0^2\cdot 0$. If $u > 0$,
then there exists a
unique integer $r \ge 0$ so that $u = 3^{-r}v$, where $v \in (\frac
13,1]$. Since $\frac 8{27} < \frac 13$, $v = x^2y$ for $x,y\in
\widetilde C \subset C$, and since $x, 3^{-r}y \in C$,
$u = x^2 (3^{-r}y)$ is the desired representation.
\end{proof}
Accordingly, we confine our attention to $\widetilde C$.
\begin{lemma}
If $I = [a,a+3t]$ and $J = [b,b+3t]$ are in
$[\frac 23, 1]$, then $f(I,J) = f(\ddot I, \ddot J)$.
\end{lemma}
\begin{proof}
We first define
\begin{equation}
\begin{gathered}
\
[a^2b, (a+t)^2 (b + t)]=: [u_1,v_1]; \\
[a^2(b+2t), (a+t)^2 (b + 3t)]=: [u_2,v_2]; \\
[(a+2t)^2b, (a+3t)^2 (b + t)]=: [u_3,v_3]; \\
[(a+2t)^2(b+2t), (a+3t)^2 (b +3 t)]=: [u_4,v_4].
\end{gathered}
\end{equation}
Evidently, $f(I,J) = [u_1,v_4]$, and also, $u_1 < u_2, v_1 < v_2$, and $u_3
< u_4, v_3 < v_4$. If we can first show that $v_1 > u_2$ and $v_3 > u_4$,
then $[u_1,v_1] \cup [u_2,v_2] = [u_1, v_2]$ and
$[u_3,v_3] \cup [u_4,v_4] = [u_3, v_4]$. Second, since $u_1 < u_3$ and $v_2 <
v_4$, if we can show that $v_2 > u_3$, then $[u_1,v_2] \cup [u_3,v_4]
= [u_1, v_4]$, and the proof will be complete.
We compute
\begin{equation}\label{eq:36}
\begin{gathered}
v_1 - u_2 = a(2b-a)t + (2a + b) t^2 + t^3; \\
v_2 - u_3 = a(3a - 2b)t + (6a - 3b)t^2 + 3t^3; \\
v_3 - u_4 = a(2b - a)t + (5b - 2a) t^2 + t^3.
\end{gathered}
\end{equation}
Since $2b - a \ge 2\cdot \frac 23 - 1 > 0$, $3a-2b \ge 3\cdot \frac
23 - 2 \ge 0$, $6a - 3b \ge 6 \cdot \frac 23 -3 > 0$, and $5b-2a \ge
5\cdot \frac 23 - 2 > 0$, each of the quantities in \eqref{eq:36} is positive and
the proof is complete.
\end{proof}
\begin{theorem}\label{mult}
\begin{equation}
f(\widetilde C^2) = [\tfrac 8{27},1].
\end{equation}
\end{theorem}
\begin{proof}
Apply Lemma \ref{lemma2}, noting that $f([\frac 23,1]^2) =
[\tfrac 8{27},1]$.
\end{proof}
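As with Lemma \ref{lemma2}, the identity $f(\widetilde C_n^2) = [\frac 8{27},1]$ holds at every finite stage and can be verified by interval arithmetic: since all endpoints are positive, $f(x,y)=x^2y$ maps $[a,b]\times[c,d]$ onto $[a^2c,\,b^2d]$. A sketch (function names are ours):

```python
from fractions import Fraction

def tilde_stage(n):
    """The 2**(n-1) intervals of C~_n = C_n intersected with [2/3, 1]."""
    ivs = [(Fraction(2, 3), Fraction(1))]
    for _ in range(n - 1):
        ivs = [iv for a, b in ivs
               for iv in [(a, a + (b - a) / 3), (b - (b - a) / 3, b)]]
    return ivs

def union(ivs):
    """Coalesce closed intervals (touching intervals merge)."""
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for a, b in ivs[1:]:
        if a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(iv) for iv in out]

def x2y_image(n):
    """f(C~_n^2) for f(x, y) = x**2 * y, as a union of intervals."""
    ivs = tilde_stage(n)
    return union([(a * a * c, b * b * d)
                  for a, b in ivs for c, d in ivs])
```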
\begin{remark}
Observe that $u \in [0,1]$, written as $x^2y$, where $x,y \in C$, is also
$x\cdot x \cdot y$, so this implies that every element in
$[0,1]$ is a product of three elements from the Cantor set. (This can
also be proved in a more ungainly way, by taking $m=3$ in Lemma
\ref{lemma2} with $f(x_1,x_2,x_3) = x_1x_2x_3$.)
\end{remark}
\begin{remark}
We do not know an algorithm for expressing $u \in [0,1]$ in the form
of $x^2y$ or $x_1x_2x_3$ as a product of elements of the Cantor set.
\end{remark}
\begin{remark}
More generally, one can look at $f_{a,b}(C^2)$ where $f_{a,b}(x,y) :=
x^ay^b$. Since $u = x^ay^b$ if and only if $u^{1/a} = x y^{b/a}$, for $u \in
(0,1)$, it suffices to consider $a=1$. By looking at $C_1 =
[0,\frac 13] \cup [\frac23, 1]$, it is not hard to see that if $f(x,y)
= x y^t$, $t \ge 1$, and $(\frac 23)^{1+t} > \frac 13$, then
$f(C_1^2)$ is already missing an interval from $[0,1]$. This condition
occurs when $t < \frac{\log 2}{\log 3/2} \approx 1.7095$.
\end{remark}
\begin{remark}
By taking logarithms, we can convert the question about products of
elements of $C$ into a question about sums. (We can omit the point $0$
since its multiplicative behavior is trivial.) Of course, the
underlying set is no longer the standard self-similar Cantor set but
is a more general (``non-linear'') closed subset of ${\mathbb R}$. Some
conclusions about the number of factors needed to recover all of
$[0,1]$ can be obtained from the general results in the papers of
Cabrelli--Hare--Molter \cite{chm:sums}, \cite{chm:sums2}, but it does
not appear that one obtains the precise conclusion of part 2 of Theorem \ref{maintheorem} in this fashion.
\end{remark}
\subsection{Division}\label{subsec:divide}
In this section, we complete our arithmetic discussion by considering quotients.
\begin{theorem}\label{div}
\begin{equation}\label{E:div}
\left\{ \frac uv: u, v \in C,\, v \neq 0\right\} = \bigcup_{m = -\infty}^{\infty}
\left[\frac 23 \cdot 3^m , \frac 32 \cdot 3^m\right].
\end{equation}
\end{theorem}
\begin{proof}
As with multiplication, it suffices to consider $\widetilde C$.
\begin{lemma}
Theorem \ref{div} is implied by the identity
\begin{equation}\label{E:div-c-tilde}
\left\{ \frac uv: u, v \in \widetilde C\right\} = \left[\frac 23 ,
\frac 32 \right]\,.
\end{equation}
\end{lemma}
\begin{proof}
Write nonzero $u, v \in C$ as $u = 3^{-c}\tilde u$, $v = 3^{-d}\tilde v$ for
integers $c,d \ge 0$ and $\tilde u, \tilde v \in \widetilde C$. Then
$u/v = 3^{d-c}\tilde u/\tilde v$, where $m=d-c$ can attain any integer value.
\end{proof}
We now prove \eqref{E:div-c-tilde}. Consider $\widetilde C_1 = [\frac 23,1]$ and apply Lemma
\ref{lemma2}. Clearly, $\{\frac uv: u, v \in \widetilde C_1\} = [\frac 23, \frac 32]$.
Consider two intervals in $\widetilde C_n$, $I_1 = [a,a+3t]$ and
$I_2=[b,b+3t]$. { These intervals are either identical or disjoint.} Since $x = \frac uv$ implies $\frac 1x = \frac vu$, there is no harm in assuming $a \le
b$, and either $I_1 = I_2$ and $a=b$, or the intervals are disjoint and $a + 3t
\le b$. The quotients from these intervals will lie in
\[
J_0:= \left[ \frac a{b+3t}, \frac {a+3t}b \right]:= [r_0,s_0].
\]
Since $\ddot I_1 = [a,a+t] \cup [a+2t,a+3t]$
and $\ddot I_2 = [b,b+t] \cup [b+2t,b+3t]$, we obtain four subintervals
\[
\begin{gathered}
J_1 =\left[ \frac a{b+3t}, \frac {a+t}{b+2t} \right] = [r_1,s_1], \\
J_2 =\left[ \frac a{b+t}, \frac {a+t}{b} \right] = [r_2,s_2], \\
J_3 =\left[ \frac {a+2t}{b+3t}, \frac {a+3t}{b+2t} \right] = [r_3,s_3], \\
J_4 =\left[ \frac {a+2t}{b+t}, \frac {a+3t}{b} \right] = [r_4,s_4]. \\
\end{gathered}
\]
We need to show that $J_0 = J_1 \cup J_2 \cup J_3 \cup J_4$. There are
two cases, depending on whether $a=b$ or $a<b$.
We first record some algebraic relations. We have
$r_1 = r_0$ and $s_4 = s_0$, and, evidently,
$r_1 < r_2$, $s_1 < s_2$, $r_3 < r_4$, $s_3 < s_4$. Further,
\[
\begin{gathered}
r_3 - r_2 = \frac {a+2t}{b+3t} - \frac a{b+t} =
\frac{2t(b-a+t)}{(b+t)(b+3t)}, \\
s_3 - s_2 = \frac {a+3t}{b+2t} - \frac {a+t}{b} =
\frac{2t(b-a-t)}{b(b+2t)}, \\
s_1 - r_2 = \frac {a+t}{b+2t} - \frac a{b+t} =
\frac{t(b-a+t)}{(b+t)(b+2t)}, \\
s_2 - r_3 = \frac {a+t}{b} - \frac {a+2t}{b+3t} = \frac{t(3a + 3t -
b)}{b(b+3t)} \ge \frac {t(3\cdot\frac23 + 0 - 1)}{b(b+3t)} > 0, \\
s_3 - r_4 = \frac {a+3t}{b+2t} - \frac {a+2t}{b+t} =
\frac{t(b-a-t)}{(b+t)(b+2t)}.
\end{gathered}
\]
Suppose first that $a < b$, so $a+3t \le b$. Then each of the
differences above is positive, so $r_1 < r_2 < r_3 < r_4$ and
$s_1 < s_2 < s_3 < s_4$; further, the intervals overlap: $s_1 > r_2$,
$s_2 > r_3$ and $s_3 > r_4$. Thus $J_0 = J_1 \cup J_2 \cup J_3 \cup
J_4$.
If $a = b$, then $J_3 =\left[ \frac {a+2t}{a+3t}, \frac {a+3t}{a+2t}
\right] \subset \left[ \frac a{a+t}, \frac {a+t}{a} \right] = J_2$, so
we may drop $J_3$ from consideration. We have $r_1 < r_2 < r_4$ and $s_1
< s_2 < s_4$ and need only show that $s_1 > r_2$ and $s_2 > r_4$. The
first is clear, and for the second,
\begin{equation}
s_2 - r_4 = \frac {a+t}{a} - \frac {a+2t}{a+t} = \frac{t^2}{a(a+t)}
> 0,
\end{equation}
so { $J_0 = J_1 \cup J_2 \cup J_4$,} and we are done.
\end{proof}
\begin{remark}
We do not know an algorithm for expressing a feasible $u$ as a
quotient of elements { in} $C$.
\end{remark}
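Although no explicit algorithm is known, the stage-by-stage invariance in the proof can be observed computationally: for intervals $[a,b],[c,d]\subset[\frac23,1]$ the quotient $x/y$ ranges over $[\frac ad,\frac bc]$, and the union over all pairs of stage-$n$ intervals is exactly $[\frac23,\frac32]$ for every $n$. A sketch (function names are ours):

```python
from fractions import Fraction

def tilde_stage(n):
    """The 2**(n-1) intervals of C~_n = C_n intersected with [2/3, 1]."""
    ivs = [(Fraction(2, 3), Fraction(1))]
    for _ in range(n - 1):
        ivs = [iv for a, b in ivs
               for iv in [(a, a + (b - a) / 3), (b - (b - a) / 3, b)]]
    return ivs

def union(ivs):
    """Coalesce closed intervals (touching intervals merge)."""
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for a, b in ivs[1:]:
        if a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(iv) for iv in out]

def quotient_image(n):
    """{x/y : x, y in C~_n} as a union of intervals."""
    ivs = tilde_stage(n)
    return union([(a / d, b / c) for a, b in ivs for c, d in ivs])
```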
\subsection{Multiplication, revisited}\label{subsec:times-again}
Let $g(x,y) = xy$. As noted earlier, $g(C^2)$ is not the full interval
$[0,1]$, though $g(C^2) = \cap g(C_i^2)$ is the intersection of a
descending chain of closed sets and so is closed. In order to gain
some information about $g(C^2)$, we look carefully at how Lemma
\ref{lemma2} fails.
\begin{lemma}\label{middle}
Let $I = [a,a+3t]$ and $J = [b,b+3t]$, with { $\tfrac23 \le a \le b \le 1$,} be either
identical or disjoint intervals. Then
\begin{equation*}
\begin{gathered}
a < b \implies g(\ddot I, \ddot J) = g(I, J)\,; \\
a = b \implies g(\ddot I, \ddot I) = g(I,I) \setminus
((a+2t)^2-t^2,(a+2t)^2).
\end{gathered}
\end{equation*}
\end{lemma}
{
\begin{proof}
We have
$$
g([a,a+3t],[b,b+3t]) = [ab, ab + 3(a+b)t + 9t^2]
$$
and
\begin{equation}\label{eq:gab}
\begin{gathered}
g([a,a+t]\cup[a+2t,a+3t],[b,b+t]\cup[b+2t,b+3t]) = \\
[ab, ab + (a + b)t + t^2] \cup [ab + 2at, ab + (3a + b)t + 3t^2] \\
\cup [ab + 2b t, ab + (a + 3b)t + 3t^2] \\ \cup[ ab + (2a+2b)t +
4t^2, ab + 3(a+b)t + 9t^2].
\end{gathered}
\end{equation}
Since $a \le b$, it follows that $ab + 2at \le ab + (a + b)t + t^2$,
and the first two intervals coalesce into $[ab, ab + (3a + b)t + 3t^2]$.
Suppose that $a < b$, and recall that we have assumed $\tfrac23 \le a < b \le 1$.
Since $a+t \le b$, it follows that $ab+(2a+2b)t+4t^2 \le ab + (a+3b)t +3t^2$
and the last two intervals coalesce into $[ab+2bt,ab+3(a+b)t+9t^2]$. Thus the right-hand side of \eqref{eq:gab} reduces to
\begin{equation}\label{eq:gab2}
[ab, ab + (3a + b)t + 3t^2] \cup [ ab + 2bt, ab + 3(a+b)t + 9t^2].
\end{equation}
Moreover,
\begin{equation*}
ab+ (3a+b)t + 3t^2 - (ab + 2bt) = t(3a+3t - b) \ge t (3 \cdot \tfrac23 + 0 - 1) = t \ge 0,
\end{equation*}
which shows that the pair of intervals in \eqref{eq:gab2} coalesces to a single interval. This proves the first statement.
If $a = b$, then the middle two intervals in \eqref{eq:gab} are the same, and
\begin{equation*}\begin{split}
&g(([a,a+t]\cup[a+2t,a+3t])^2) \\
&\quad = [a^2, a^2+4at + 3t^2] \cup [a^2 + 4at + 4t^2, a^2 + 6at + 9t^2] \\
&\quad = [a^2,(a+3t)^2] \setminus ((a+2t)^2-t^2,(a+2t)^2).
\end{split}\end{equation*}
\end{proof}
}
This leads to the following estimate for the Lebesgue measure of $g(C^2)$.
\begin{theorem}\label{meas}
\[
\mu(g(C^2)) \ge \frac {17}{21}.
\]
\end{theorem}
\begin{proof}
First note that $g(\widetilde C^2) \subset
g(\widetilde C_1^2) = [\frac 49,1]$.
It follows as before that
\begin{equation}
g(C^2) = \{0\} \cup \bigcup_{k=0}^\infty 3^{-k}g(\widetilde C^2),
\end{equation}
and since $\frac 13 < \frac 49$, the sets $3^{-k}g(\widetilde C^2)$
are disjoint. Therefore,
\begin{equation}
\mu(g(C^2)) = \sum_{k=0}^\infty 3^{-k}\mu(g(\widetilde C^2)) =
\tfrac 32 \mu(g(\widetilde C^2)).
\end{equation}
Since $\widetilde C_n$ consists of $2^{n-1}$ intervals of length
$3^{-n}$, it follows from Lemma \ref{middle} that
\begin{equation}\begin{split}
&\mu(g(\widetilde C_{n+1}^2)) \ge \mu(g(\widetilde C_{n}^2)) -
\frac {2^{n-1}}{3^{2n+2}} \\
&\implies \mu(g(\widetilde C^2)) \ge \left(1-\frac 49\right) - \sum_{n=1}^{\infty} \frac {2^{n-1}}{3^{2n+2}} =
\frac{34}{63} \\
&\implies \mu(g(C^2)) \ge \frac 32\cdot \frac{34}{63} = \frac {17}{21}.
\end{split}\end{equation}
\end{proof}
\begin{remark}
This argument shows that for all $m$,
\begin{equation}
\begin{gathered}
\mu(g(\widetilde C_m^2)) \ge \mu(g(\widetilde C^2))
\ge\mu(g(\widetilde C_m^2)) - \sum_{n=m}^{\infty} \frac {2^{n-1}}{3^{2n+2}}.
\end{gathered}
\end{equation}
\end{remark}
The reason that Theorem \ref{meas} gives only an estimate is that an
interval missing from $g(\ddot I^2)$ may be covered by products from
other pairs of intervals. The first instance in which this occurs is for $n=4$: one
of the intervals in $\widetilde C_4$ is $I_0 = [\frac {62}{81},\frac
{63}{81}]$ = $[.2022_3,.21_3]$. By Lemma \ref{middle}, the interval
\begin{equation}
\left(\tfrac{188^2-1}{243^2}, \tfrac{188^2}{243^2}\right) =
\left(\tfrac{35343}{59049},\tfrac{35344}{59049}\right)
\approx (.5985368,.5985537)
\end{equation}
is missing from $g(\ddot I_0^2)$.
However, $\widetilde C_5$ contains the intervals
\begin{equation}
J_1 = [\tfrac {162}{243}, \tfrac {163}{243}] = [.2_3,.20001_3], \quad
J_2 = [\tfrac {216}{243}, \tfrac {217}{243}] = [.22_3,.22001_3]
\end{equation}
and
\[
J_1J_2 = \left[\tfrac{34992}{59049},\tfrac{35371}{59049}\right] \approx
[.5925926,.5990110]
\]
covers the otherwise-missing interval.
A more detailed {\it Mathematica} computation, using $m=11$, gives the first
eight decimal digits for $\mu(g(C^2))$:
\begin{equation}
\mu(g(C^2)) = .80955358\dots \approx \frac{17}{21} + 2.97 \times 10^{-5}.
\end{equation}
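The computation above is easy to reproduce exactly at small $m$ with rational interval arithmetic; the per-stage losses $2^{n-1}/3^{2n+2}$ from the proof of Theorem \ref{meas} sum to a geometric tail with ratio $\frac29$. A sketch (function names are ours), which also confirms that $J_1J_2$ covers the interval missing from $g(\ddot I_0^2)$:

```python
from fractions import Fraction

def tilde_stage(n):
    """The 2**(n-1) intervals of C~_n = C_n intersected with [2/3, 1]."""
    ivs = [(Fraction(2, 3), Fraction(1))]
    for _ in range(n - 1):
        ivs = [iv for a, b in ivs
               for iv in [(a, a + (b - a) / 3), (b - (b - a) / 3, b)]]
    return ivs

def union(ivs):
    """Coalesce closed intervals (touching intervals merge)."""
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for a, b in ivs[1:]:
        if a <= out[-1][1]:
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(iv) for iv in out]

def product_image(n):
    """g(C~_n^2), g(x, y) = x*y, as a union of intervals."""
    ivs = tilde_stage(n)
    return union([(a * c, b * d) for a, b in ivs for c, d in ivs])

def product_measure(n):
    """Exact Lebesgue measure of g(C~_n^2)."""
    return sum(b - a for a, b in product_image(n))

def loss_tail(m):
    """Geometric tail sum_{n >= m} 2**(n-1) / 3**(2n+2)."""
    return Fraction(9 * 2 ** (m - 1), 7 * 3 ** (2 * m + 2))
```

With $m=1$ this recovers the bound $\frac32(\frac59 - \frac1{63}) = \frac{17}{21}$ exactly; larger $m$ can only improve it.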
\section{Final remarks.}\label{sec:final-remarks}
As mentioned in the introduction, this paper is part of a larger project. { We discuss a few results from this project whose proofs will appear elsewhere, written by various combinations of the authors and their students.}
\subsection{Self-similar Cantor sets}
The Cantor set easily generalizes to sets defined with different
``middle-fractions'' removed.
Consider the self-similar Cantor set $D^{(t)}$ obtained as the
invariant set for the pair of contractive mappings
$$
f_1(x) = tx, \qquad f_2(x) = tx+(1-t)
$$
acting on the real line. Thus
$$
D^{(t)} = \bigcap_{n\ge 0} D_n^{(t)},
$$
where, for each $n\ge 0$, $D_n^{(t)}$ is a union of $2^n$ intervals,
each of length $t^n$, contained in $[0,1]$.
For instance,
$$
D_1^{(t)} = [0,t] \cup [1-t,1],
$$
$$
D_2^{(t)} = [0,t^2]\cup[t(1-t),t]\cup[1-t,1-t+t^2]\cup[1-t^2,1]\,.
$$
For an integer $m\ge 2$, let $t_m$ be the unique solution to
$(1-t)^m=t$ in $[0,1]$. Then
$$
\cdots < t_4 < t_3 < t_2 < \frac12
$$
and $t_m \to 0$ as $m\to\infty$. Numerical values are
\begin{eqnarray*}
&t_2 = \tfrac{3-\sqrt5}2 \approx 0.381966 \dots \\
&t_3 \approx 0.317672 \dots \\
&t_4 \approx 0.275508 \dots \\
&t_5 \approx 0.245122 \dots \\
&t_6 \approx 0.22191 \dots \\
&t_7 \approx 0.203456 \dots
\end{eqnarray*}
If we choose $t$ such that $t_m \le t < t_{m-1}$ (that is, $(1-t)^m \le t < (1-t)^{m-1}$) and let $g_n(x_1,\dots,x_n) = x_1x_2\cdots x_n$, then
\[
g_m((D^{(t)})^m) = [0,1],
\]
but $g_{m-1}((D^{(t)})^{m-1})$ has Lebesgue measure strictly less than one.
In particular, if $D$ is a Cantor set from which a middle fraction
$\lambda \le 1 - 2t_2 = \sqrt 5 - 2 \approx .23607$ is removed at each
stage, then every element of $[0,1]$ can be written as a product of two
elements of $D$.
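The numerical values $t_2,\dots,t_7$ listed above can be reproduced by bisection; the sketch below (function name and tolerance are illustrative choices) solves $(1-t)^m = t$:

```python
import math

def t_m(m, tol=1e-12):
    """Unique root of (1-t)^m = t in [0,1], found by bisection:
    h(t) = (1-t)^m - t is strictly decreasing with h(0) = 1, h(1) = -1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (1 - mid) ** m > mid:   # h(mid) > 0: the root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# t_2 has the closed form (3 - sqrt(5))/2, and the sequence decreases.
assert abs(t_m(2) - (3 - math.sqrt(5)) / 2) < 1e-9
vals = [t_m(m) for m in range(2, 8)]
assert all(x > y for x, y in zip(vals, vals[1:]))
assert abs(vals[1] - 0.317672) < 1e-5   # t_3
assert abs(vals[5] - 0.203456) < 1e-5   # t_7
```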
\subsection{Number Theory}
There is much more to be said about the representation of specific numbers in $C/C$. For example, if $v = 2u$ for $u,v \in C$, then
\begin{equation*}
u = \frac 1{3^n},\quad v = \frac 2{3^n},\qquad n \ge 1;
\end{equation*}
if $v = 11u$ for $u,v \in C$, then
\begin{equation*}
u = \frac 1{4\cdot 3^n},\quad v = \frac {11}{4\cdot 3^n},\qquad n \ge 1.
\end{equation*}
By contrast, if $v = 4u$ for $u,v \in C$, then there exists a finite or infinite sequence of integers $(n_k)$ with $n_1 \ge 2$ and $n_{k+1}-n_k \ge 2$, such that
\begin{equation*}
u = \sum_k \frac 2{3^{n_k}},\qquad v = \sum_k\left(\frac 2{3^{n_k-1}} + \frac 2{3^{n_k}}\right).
\end{equation*}
The proof of the second result is trickier than those of the other two.
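A truncated instance of the $v = 4u$ structure can be checked directly in exact arithmetic. The sketch below takes $(n_1, n_2) = (2, 4)$ as an illustrative finite sequence, with a ternary-digit membership test that is sufficient but not necessary for points of $C$:

```python
from fractions import Fraction

def finite_ternary_digits(q, k):
    """Base-3 digits of q down to 3**-k; assumes q in [0, 1) and that
    q has an exact finite expansion of length k."""
    n = q * 3**k
    assert n.denominator == 1
    n = n.numerator
    digits = []
    for _ in range(k):
        n, d = divmod(n, 3)
        digits.append(d)
    return digits[::-1]

def in_cantor_finite(q, k):
    """Sufficient (not necessary) test for q in C: a finite expansion
    using only digits 0 and 2 certifies membership; endpoints such as
    1/3 = .0222..._3 would need the infinite expansion instead."""
    return all(d in (0, 2) for d in finite_ternary_digits(q, k))

# Truncated instance of the v = 4u structure with (n_1, n_2) = (2, 4).
u = Fraction(2, 9) + Fraction(2, 81)
v = 4 * u
assert v == Fraction(2, 3) + Fraction(2, 9) + Fraction(2, 27) + Fraction(2, 81)
assert in_cantor_finite(u, 4) and in_cantor_finite(v, 4)
```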
We also mention a conjecture for which there is strong numerical evidence.
\begin{conjecture}
Every $u \in [0,1]$ can be written as $x_1^2 + x_2^2 + x_3^2 + x_4^2$,
$x_i \in C$.
\end{conjecture}
We need a minimum of four squares: every element of $C$ lies in
$[0,\frac 13] \cup [\frac 23, 1]$, and $(\frac 13)^2 + (\frac
13)^2+(\frac 13)^2 = \frac 13 < \frac 49 = (\frac 23)^2$, so the open
interval $(\frac 13, \frac 49)$ is missing from the set of sums of three
squares of elements of $C$.
\section{Acknowledgments}
The authors wish to thank the referees and editors for their rapid, sympathetic
and extremely useful suggestions for improving the manuscript. BR
wants to thank Prof. W. A. J. Luxemburg's Math 108 at Caltech in 1970--1971
for introducing him to the beauties of analysis.
| {
"timestamp": "2017-11-27T02:09:00",
"yymm": "1711",
"arxiv_id": "1711.08791",
"language": "en",
"url": "https://arxiv.org/abs/1711.08791",
"abstract": "Every element $u$ of $[0,1]$ can be written in the form $u=x^2y$, where $x,y$ are elements of the Cantor set $C$. In particular, every real number between zero and one is the product of three elements of the Cantor set. On the other hand the set of real numbers $v$ that can be written in the form $v=xy$ with $x$ and $y$ in $C$ is a closed subset of $[0,1]$ with Lebesgue measure strictly between $\\tfrac{17}{21}$ and $\\tfrac89$. We also describe the structure of the quotient of $C$ by itself, that is, the image of $C\\times (C \\setminus \\{0\\})$ under the function $f(x,y) = x/y$.",
"subjects": "Metric Geometry (math.MG); Number Theory (math.NT)",
"title": "Cantor set arithmetic",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9886682471364099,
"lm_q2_score": 0.8104789109591832,
"lm_q1q2_score": 0.8012947642390421
} |
https://arxiv.org/abs/2112.04654 | Unimodular triangulations of sufficiently large dilations | An integral polytope is a polytope whose vertices have integer coordinates. A unimodular triangulation of an integral polytope in $\mathbb{R}^d$ is a triangulation in which all simplices are integral with volume $1/d!$. A classic result of Knudsen, Mumford, and Waterman states that for every integral polytope $P$, there exists a positive integer $c$ such that $cP$ has a unimodular triangulation. We strengthen this result by showing that for every integral polytope $P$, there exists $c$ such that for every positive integer $c' \ge c$, $c'P$ admits a unimodular triangulation. This answers a longstanding question in the area. | \section{Introduction}
Unimodular triangulations are elementary objects which arise naturally in algebra and combinatorics. We refer to the paper by Haase et al.\ \cite{HPPS} for an extensive survey on the subject. In this paper we answer a longstanding question on the existence of certain unimodular triangulations.
An integral polytope is a polytope whose vertices have integer coordinates. Let $P$ be a $d$-dimensional integral polytope in $\mathbb R^d$. A unimodular triangulation of $P$ is a triangulation of $P$ into simplices each of which has the minimum possible volume $1/d!$. For $d \ge 3$, not every integral polytope has a unimodular triangulation. For example, the simplex with vertices $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(1,1,99)$ contains no lattice points other than its vertices, so it does not have any nontrivial triangulation; since its volume is $99/6 > 1/6$, it has no unimodular triangulation. On the other hand, every polytope has a \emph{dilation} which admits a unimodular triangulation, as described below.
\begin{thm}[\cite{KKMS}, 1973] \label{thm:KMW}
For every integral polytope $P$, there is a positive integer $c$ such that $cP$ admits a unimodular triangulation.
\end{thm}
This theorem is known as the Kempf--Knudsen--Mumford--Saint-Donat theorem (KKMS) or the Knudsen--Mumford--Waterman theorem (KMW).\footnote{The result appears in the book \cite{KKMS} which is authored by Kempf, Knudsen, Mumford, and Saint-Donat. The individual chapter in which it appears is authored by Knudsen, who also credits Mumford and Waterman. As a result, both naming conventions have appeared in the literature. In this paper we use the convention ``KMW theorem''.} It is one of the earliest considerations of unimodular triangulations, and was proved in the context of algebraic geometry in order to prove semistable reduction for families of varieties over a curve. A more general version of semistable reduction was conjectured by Abramovich and Karu in 2000 \cite{AK00} and was reduced to proving the existence of certain unimodular triangulations of maps. The conjecture was recently proven by Adiprasito, Temkin, and the author in \cite{ALT18}.
Understanding what values of $c$ work in Theorem~\ref{thm:KMW} is an old and difficult problem. The answer is almost completely known in dimensions $\le 3$ \cite{SZ13}, and a general upper bound for the smallest possible $c$ is known in terms of the dimension and volume of the polytope \cite{HPPS}. In this paper, we prove the following:
\begin{thm} \label{thm:main}
For every integral polytope $P$, there is a positive integer $c$ such that for all $c' \ge c$, $c'P$ admits a unimodular triangulation.
\end{thm}
The result may be a bit surprising, as there are known to be polytopes $P$ and integers $c$ such that $cP$ has a unimodular triangulation but $(c+1)P$ does not.
We do not provide an explicit value for the $c$ in Theorem~\ref{thm:main}, but we note that the upper bound should be doubly exponential in the dimension and volume of $P$ using the arguments from \cite{HPPS}. We also note that the unimodular triangulation can be made regular, but to keep the paper simpler we do not prove this.
The idea of the proof is as follows. The argument from \cite{KKMS} in fact shows that if $cP$ has a unimodular triangulation, then $c'P$ has a unimodular triangulation whenever $c'$ is a multiple of $c$. For our result, we prove the following: There exist relatively prime positive integers $a$ and $b$ such that for any nonnegative integers $r$ and $s$, $(ra + sb)P$ has a unimodular triangulation. Since $a$ and $b$ are relatively prime, every integer greater than $ab - a - b$ can be written as $ra + sb$ with $r, s \ge 0$, so Theorem~\ref{thm:main} follows.
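The number-theoretic step here is the classical fact that two coprime positive integers represent every sufficiently large integer as a nonnegative combination. A quick sanity check (the pair $a=4$, $b=7$ is an arbitrary illustrative choice):

```python
def representable(n, a, b):
    """Is n = r*a + s*b for some nonnegative integers r and s?"""
    return any((n - r * a) % b == 0 for r in range(n // a + 1))

# Illustrative coprime pair (any a, b with gcd(a, b) == 1 works).
a, b = 4, 7
frobenius = a * b - a - b  # largest non-representable integer; here 17

assert not representable(frobenius, a, b)
assert all(representable(n, a, b) for n in range(frobenius + 1, frobenius + 200))
```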
In order to prove this, we extend the results of \cite{ALT18} to \emph{mixed subdivisions}, which can be thought of as coupled subdivisions of two polytopes. This ends up being more difficult than one might expect. While the constructions from \cite{ALT18} have natural mixed analogues, these natural analogues do not lead to a proof of the theorem; see the discussion at the beginning of Section~\ref{sec:main}. Therefore, we have to create some more complicated analogues as well as make some new constructions.
Another feature of this proof is that it makes heavy use of an idea we call \emph{canonical subdivisions}. Canonical subdivisions are present in a more specialized form in \cite{KKMS} and \cite{HPPS}, and are expanded in a very general way in \cite{ALT18}. In \cite{ALT18}, the idea was explained using the language of categories and functors. In this paper we rework these ideas in terms of posets and poset maps, which turns out to be more flexible and general for this situation.
Essentially, a canonical subdivision is a rule to subdivide every polytope within a family of polytopes so that this rule is compatible with the operation of restricting to a face of a polytope. The importance of this is that canonical subdivisions can be used to further subdivide arbitrary polytopal complexes in a consistent way. This allows us to iteratively construct a unimodular triangulation through many intermediate canonical subdivisions. This idea is implicit in Knudsen et al.'s original proof of the KMW theorem, and in fact they proved that any polytopal \emph{complex} $X$ of integral polytopes has a constant $c$ such that $cX$ has a unimodular triangulation. Our main result also extends to polytopal complexes in this way, which is immediate from the proof.
The cornerstone of our canonical subdivision method is Theorem~\ref{thm:confluence}, which may be of independent interest. This theorem gives conditions under which a family of non-canonical subdivisions can be used to recursively construct a canonical subdivision. This theorem is important because our desired canonical subdivision is very complicated and canonicity is difficult to check. The theorem allows us to instead construct simpler non-canonical subdivisions, after which the criteria of the theorem are straightforward to check. To demonstrate Theorem~\ref{thm:confluence}, we have also included in Section~\ref{sec:canonicalexamples} some examples of well-known subdivisions in the literature that can be constructed using these methods.
Finally, we would like to mention a few open problems. Despite the method of proof used in this paper, it is unknown whether $c_1P$ and $c_2P$ having unimodular triangulations implies that $(c_1+c_2)P$ has one as well. In addition, it is unknown whether for every dimension $d$ there exists an integer $c_d$ such that $c_d P$ has a unimodular triangulation for every $d$-dimensional polytope $P$. Lastly, the question of whether specific classes of polytopes have unimodular triangulations has attracted significant attention; classes of interest include smooth polytopes, matroid polytopes, and parallelepipeds.
The paper is organized as follows. In Section~\ref{sec:prelim} we introduce the language of polytopes, posets, and canonical subdivisions. This section is very abstract, but the author believes the initial investment makes the main argument much easier to follow. In Section~\ref{sec:Cayleypolytopes}, we introduce Cayley polytopes which are the main building blocks of our constructions. In Section~\ref{sec:boxpoints} we introduce box points, which allow us to modify triangulations to produce triangulations with simplices of smaller volume. Our main constructions and proof are in Section~\ref{sec:main}.
\section{Preliminaries} \label{sec:prelim}
\subsection{Polytopes} \label{sec:polytopes}
Throughout the paper, we work in $\mathbb R^d$ with some fixed $d$ unless otherwise specified. In this paper, a \emph{polytope} is a nonempty convex hull of finitely many points in $\mathbb R^d$. Given a polytope $P$ and a linear functional $\phi \in (\mathbb R^d)^\ast$, the \emph{face} of $P$ with respect to $\phi$ is the set of all points in $P$ at which $\phi$ reaches its maximum on $P$. We \textbf{do not} consider the empty set to be a face. A \emph{simplex} is the convex hull of a nonempty, affinely independent set of points.
For any polytope $P \subset \mathbb R^d$, we define $V(P)$ to be the real span of the set $\{ u -v : u,v \text{ are vertices of } P \}$. We say that polytopes $P_1$, \dots, $P_n$ are \emph{independent} if $V(P_1)$, \dots, $V(P_n)$ are linearly independent vector subspaces, i.e. $\dim(V(P_1) + \dots + V(P_n)) = \dim V(P_1) + \dots + \dim V(P_n)$. A polytope of the form $\sum_{j=1}^n S_j$ where $S_1$, \dots, $S_n$ are independent simplices is called a \emph{polysimplex} or \emph{product of simplices}.
In this paper, a \emph{lattice} is an additive subgroup of $\mathbb Z^d$. We define the \emph{normalized index}, or just \emph{index}, of a lattice $L$ to be the group index $[ \Span_{\mathbb R}(L) \cap \mathbb Z^d, L ]$. This index is always finite. We denote the index by $\ind(L)$.
An \emph{integral polytope} is a polytope whose vertices have integer coordinates. Given an integral polytope $P$, we define $L(P)$ to be the lattice generated over $\mathbb Z$ by the set $\{ u -v : u,v \text{ are vertices of } P \}$. We define $N(P)$ to be the lattice $V(P) \cap \mathbb Z^d$. The \emph{index} of $P$ is the normalized index of $L(P)$, which equals $[N(P), L(P)]$. A \emph{unimodular simplex} is an integral simplex of index 1. For $d$-dimensional simplices in $\mathbb R^d$ this is equivalent to the definition given in the introduction; the current definition extends this to simplices with dimension lower than the ambient space.
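For a full-dimensional integral simplex in $\mathbb R^3$, the index reduces to the absolute determinant of the edge vectors at one vertex, which is easy to compute. A small sketch (restricted to $d=3$ for concreteness) checks the unit simplex and the simplex from the introduction:

```python
def det3(m):
    """3x3 integer determinant by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def index_full_dim(simplex):
    """Index of a full-dimensional integral simplex in R^3: here
    L(P) is generated by the edge vectors at one vertex and
    N(P) = Z^3, so the index is the absolute edge determinant."""
    v0, *rest = simplex
    edges = [tuple(u - w for u, w in zip(v, v0)) for v in rest]
    return abs(det3(edges))

unit = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert index_full_dim(unit) == 1  # a unimodular simplex

reeve = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 99)]
assert index_full_dim(reeve) == 99  # index 99, hence not unimodular
```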
An \emph{ordered polytope} is a polytope along with a total ordering on its vertices. An \emph{ordered face}, or \emph{face}, of an ordered polytope is a face of the underlying polytope along with the vertex order induced by the original polytope. Any translation or positive dilation of an ordered polytope is also an ordered polytope, with the obvious ordering.
\subsection{Posets and subdivisions}
\subsubsection{Relative posets}
Recall that a \emph{poset} is a set $\mathcal A$ along with a binary relation $\le_{\mathcal A}$ on $\mathcal A$ which is reflexive, antisymmetric, and transitive. We will always denote a poset by its set of elements, and if there is no risk of confusion we will use the symbol ``$\le$'' in place of $\le_{\mathcal A}$.
Given two posets $\mathcal A$ and $\mathcal B$, a \emph{poset map} is a function $f : \mathcal A \to \mathcal B$ such that $f(x) \le_{\mathcal B} f(y)$ whenever $x \le_{\mathcal A} y$. A \emph{poset isomorphism} is a poset map which has an inverse which is a poset map.
Let $\mathcal B$ be a poset. We define a \emph{$\mathcal B$-poset} to be a pair $(\mathcal A, p)$ where $\mathcal A$ is a poset and $p : \mathcal A \to \mathcal B$ is a poset map. If there is no risk of confusion, we denote $(\mathcal A, p)$ by just $\mathcal A$. Clearly, if $(\mathcal A, p)$ is a $\mathcal B$-poset and $(\mathcal A', p')$ is an $\mathcal A$-poset, then $(\mathcal A', p \circ p')$ is a $\mathcal B$-poset.
Given two $\mathcal B$-posets $(\mathcal A, p)$ and $(\mathcal A', p')$, we define a \emph{poset map over $\mathcal B$} to be a poset map $f : \mathcal A \to \mathcal A'$ such that $p = p' \circ f$.
Given a poset $\mathcal A$ and a subset $X \subset \mathcal A$, we let $\langle X \rangle_{\mathcal A}$ or $\langle X \rangle$ denote the set $X$ along with all members of $\mathcal A$ below $X$ in $\mathcal A$. If $x$ is a single element of $\mathcal A$, we use $\langle x \rangle$ as shorthand for $\langle \{x\} \rangle$. We let $\Max_{\mathcal A} X$ or $\Max X$ denote the maximal elements of $X$ with respect to $\le_{\mathcal A}$.
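The operations $\langle X \rangle$ and $\Max X$ are the workhorses of the later constructions; a minimal sketch over a toy divisibility poset (all names here are illustrative) shows their behavior:

```python
def downset(elements, leq, X):
    """<X>: X together with every element of the poset below some x in X."""
    return {a for a in elements if any(leq(a, x) for x in X)}

def maximal(leq, X):
    """Max X: elements of X with nothing in X strictly above them."""
    return {x for x in X if not any(leq(x, y) and x != y for y in X)}

# Toy poset: divisors of 12 ordered by divisibility.
divisors = {1, 2, 3, 4, 6, 12}
leq = lambda a, b: b % a == 0

assert downset(divisors, leq, {4, 6}) == {1, 2, 3, 4, 6}
assert maximal(leq, {1, 2, 3, 4, 6}) == {4, 6}
```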
\subsubsection{The poset of polytopes and subdivisions}
Let $\mathcal P$ be the poset whose elements are all polytopes in $\mathbb R^d$, and with the partial order $F \le_{\mathcal P} P$ if $F$ is a face of $P$. The main class of posets we will work with in this paper are $\mathcal P$-posets.
\begin{exam}
Trivially, $(\mathcal P, \id)$ is a $\mathcal P$-poset, where $\id$ is the identity map.
\end{exam}
\begin{exam}
Let $\mathcal O$ be the poset whose elements are ordered polytopes and with the partial order $F \le_{\mathcal O} P$ if $F$ is an ordered face of $P$. Then $(\mathcal O, \Forget)$, where $\Forget$ is the map $\mathcal O \to \mathcal P$ which forgets the vertex ordering, is a $\mathcal P$-poset.
\end{exam}
\begin{exam} \label{exam:pointsets}
Let $\mathcal S$ be the poset whose elements are totally ordered finite subsets of $\mathbb R^d$, with the partial order $B \le_{\mathcal S} A$ if $B = A \cap F$, where $F$ is any face of $\conv(A)$, and the order on $B$ is the order induced by $A$. Then $(\mathcal S, \conv)$ is a $\mathcal P$-poset.
\end{exam}
A \emph{polytopal complex}, or \emph{$\mathcal P$-complex}, is a finite subset $X$ of $\mathcal P$ satisfying the following.
\begin{enumerate}[label=(\alph*)]
\item If $F$, $P \in \mathcal P$ such that $P \in X$ and $F \le P$, then $F \in X$.
\item If $P$, $Q \in X$ are different, then the relative interiors of $P$ and $Q$ are disjoint.
\end{enumerate}
We define the \emph{support} of a polytopal complex $X$ to be
\[
\abs{X} := \bigcup_{x \in X} x.
\]
If $\abs{X}$ is a polytope $Q$, we say that $X$ is a \emph{subdivision} of $Q$. A \emph{triangulation} is a subdivision all of whose elements are simplices.
More generally, let $(\mathcal A, p)$ be a $\mathcal P$-poset. An \emph{$(\mathcal A, p)$-complex} or \emph{$\mathcal A$-complex} is a subset $X$ of $\mathcal A$ such that $p(X)$ is a polytopal complex and the map $p : X \to p(X)$ is a poset isomorphism from the subposet of $\mathcal A$ induced on $X$ to the subposet of $\mathcal P$ induced on $p(X)$. A \emph{subcomplex} of $X$ is a subset of $X$ which is also an $\mathcal A$-complex. The \emph{support} of $X$ is defined to be $\abs{p(X)}$. We denote the support by simply $\abs{X}$. If the support is a polytope $Q$, we say that $X$ is an \emph{$(\mathcal A,p)$-subdivision} or \emph{$\mathcal A$-subdivision} of $Q$.
For any $\mathcal P$-poset $(\mathcal A,p)$, let $\Subd(\mathcal A)$ be the poset whose elements are $\mathcal A$-subdivisions of polytopes, and with partial order $X' \le_{\Subd(\mathcal A)} X$ if $X'$ is a subcomplex of $X$ such that $\abs{X'}$ is a face of $\abs{X}$. Then the map $\abs{\cdot} : \Subd(\mathcal A) \to \mathcal P$ which sends $X$ to $\abs{X}$ is a poset map, and $(\Subd(\mathcal A), \abs{\cdot})$ is a $\mathcal P$-poset.
The following proposition is easy to verify.
\begin{prop} \label{prop:mapofsubdivisions}
Let $(\mathcal A, p)$ and $(\mathcal A', p')$ be $\mathcal P$-posets and let $f : \mathcal A \to \mathcal A'$ be a poset map over $\mathcal P$. Then the map $f : \Subd(\mathcal A) \to \Subd(\mathcal A')$ which sends $X$ to $f(X)$ is a poset map over $\mathcal P$.
\end{prop}
\subsubsection{Trivial subdivisions and perfect posets}
For any polytope $P$, we define the \emph{trivial subdivision} of $P$ to be $\triv(P) := \langle P \rangle_{\mathcal P}$.
Let $(\mathcal A, p)$ be a $\mathcal P$-poset. If, for all $x \in \mathcal A$, the set $\langle x \rangle_{\mathcal A}$ is an $\mathcal A$-complex, then we call $(\mathcal A,p)$ a \emph{perfect} $\mathcal P$-poset. In this situation we define $\triv_{\mathcal A}(x) := \langle x \rangle_{\mathcal A}$. We necessarily have that $p(\triv_{\mathcal A}(x))$ is the trivial subdivision of $p(x)$.
Every example we have given so far is a perfect $\mathcal P$-poset. In particular, $(\Subd(\mathcal A), \abs{\cdot})$ is a perfect $\mathcal P$-poset for any $\mathcal P$-poset $\mathcal A$.
\subsection{Mixed subdivisions} \label{sec:mixed}
Let $n$ be a positive integer. Define $n\mathcal P$ to be the poset whose set of elements is
\[
\underbrace{\mathcal P \times \dots \times \mathcal P}_{n \text{ times}}
\]
and with $(F_1, \dots, F_n) \le_{n \mathcal P} (P_1, \dots, P_n)$ if and only if $F_1$, \dots, $F_n$ are faces of $P_1$, \dots, $P_n$, respectively, such that $F_1+\dots+F_n$ is a face of $P_1+\dots+P_n$. Equivalently, $(F_1, \dots, F_n) \le_{n \mathcal P} (P_1, \dots, P_n)$ if and only if there exists a linear functional such that $F_1$, \dots, $F_n$ are the faces of $P_1$, \dots, $P_n$, respectively, with respect to this linear functional. In the future, whenever we write $F \le P$ for two $n$-tuples of polytopes $F$, $P$, we mean that $F \le_{n \mathcal P} P$.
Let $\plus : n\mathcal P \to \mathcal P$ be the map sending $(P_1,\dots,P_n)$ to $P_1+\dots+P_n$. Then by the definition of $n\mathcal P$ this is a poset map. Furthermore, $(n\mathcal P, \plus)$ is a perfect $\mathcal P$-poset.
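The ordering on $n\mathcal P$ can be checked computationally for a toy pair: a single linear functional selects a face of each summand, and the sum of those faces is the corresponding face of the Minkowski sum. The sketch below (polytopes given as vertex lists; all names are illustrative) verifies this for a square and a segment:

```python
from itertools import product

def face(vertices, phi):
    """Vertex set of the face of conv(vertices) maximizing the linear
    functional phi = (phi_1, ..., phi_d)."""
    vertices = list(vertices)
    vals = [sum(c * x for c, x in zip(phi, v)) for v in vertices]
    m = max(vals)
    return {v for v, val in zip(vertices, vals) if val == m}

def minkowski(A, B):
    """All pairwise sums a + b; maximizing phi over these is the same
    as maximizing over the vertices of the Minkowski sum."""
    return {tuple(x + y for x, y in zip(a, b)) for a, b in product(A, B)}

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
segment = [(0, 0), (2, 1)]
phi = (1, 1)

F1, F2 = face(square, phi), face(segment, phi)
# (F1, F2) <= (square, segment) in 2P: one functional selects both,
# and F1 + F2 is exactly the phi-face of square + segment.
assert face(minkowski(square, segment), phi) == minkowski(F1, F2)
```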
Let $X$ be an $n\mathcal P$-complex. Define the \emph{$n$-support} of $X$ to be
\[
\abs{X}_n := (Q_1,\dots,Q_n)
\]
where
\[
Q_k = \bigcup_{(P_1,\dots,P_n) \in X} P_k.
\]
It is well-known \cite{San05} that if $\abs{X}$ is a polytope, then $Q_1$, \dots, $Q_n$ are polytopes and $\abs{X} = Q_1 + \dots + Q_n$. The set $X$ is known as a \emph{mixed subdivision} of $(Q_1,\dots,Q_n)$. In addition, the map $\abs{\cdot}_n : \Subd(n\mathcal P) \to n\mathcal P$ which sends $X$ to $\abs{X}_n$ is a poset map over $\mathcal P$. In particular, it is a poset map, so $(\Subd(n\mathcal P), \abs{\cdot}_n)$ is an $n\mathcal P$-poset.
Now suppose $(\mathcal A, p)$ is an $n\mathcal P$-poset. Then $(\mathcal A, \plus \circ p)$ is a $\mathcal P$-poset. For the rest of the paper, whenever we define an $n \mathcal P$-poset $(\mathcal A, p)$, we will also implicitly treat $\mathcal A$ as a $\mathcal P$-poset as above. In particular, we can define $\Subd(\mathcal A)$ as the poset of $(\mathcal A, \plus \circ p)$-subdivisions.
Let $(\mathcal A, p)$ be an $n\mathcal P$-poset, and let $X$ be an $\mathcal A$-complex. Then $p(X)$ is an $n\mathcal P$-complex. We call $\abs{p(X)}_n$ the \emph{$n$-support} of $X$, and for convenience we denote it by $\abs{X}_n$. Suppose additionally that $X$ is an $\mathcal A$-subdivision.
By Proposition~\ref{prop:mapofsubdivisions}, the map $p : \Subd(\mathcal A) \to \Subd(n\mathcal P)$ given by $Y \mapsto p(Y)$, is a poset map over $\mathcal P$. As noted previously, $\abs{\cdot}_n : \Subd(n\mathcal P) \to n\mathcal P$ is also a poset map over $\mathcal P$. Hence, the map $\abs{\cdot}_n : \Subd(\mathcal A) \to n\mathcal P$ given by $X \mapsto \abs{p(X)}_n = \abs{X}_n$ is a poset map over $\mathcal P$. It follows that $\abs{X} = \Sum \abs{X}_n$. In addition, $(\Subd(\mathcal A), \abs{\cdot}_n)$ is an $n\mathcal P$-poset.
We note the following fact for later.
\begin{prop}\label{prop:nsupporttrivial}
Let $(\mathcal A,p)$ be an $n \mathcal P$-poset and let $X$ be an $\mathcal A$-subdivision. If $X$ has a single maximal element $x$, then $\abs{X}_n = p(x)$.
\end{prop}
\begin{proof}
If $X$ has a single maximal element $x$, then $p(X)$ has a single maximal element $p(x)$. Therefore,
\begin{align*}
\abs{X}_n &= \left( \bigcup_{(P_1,\dots,P_n)\in p(X)} P_k \right)_{k=1}^n \\
&= \left( p(x)_k \right)_{k=1}^n \\
&= p(x).
\end{align*}
\end{proof}
\subsection{Canonical subdivisions}
Let $(\mathcal A, p)$, $(\mathcal A', p')$ be $n\mathcal P$-posets. We define a \emph{canonical subdivision} over $n \mathcal P$ to be any poset map over $n \mathcal P$ from $(\mathcal A,p)$ to $(\Subd(\mathcal A'),\abs{\cdot}_n)$. The importance of these maps lies in the following proposition.
\begin{prop} \label{prop:canonicalrefinement}
Let $\Sigma : \mathcal A \to \Subd(\mathcal A')$ be a canonical subdivision over $n\mathcal P$. For $X$ an $\mathcal A$-complex, define
\[
\Sigma^\ast(X) := \bigcup_{x \in X} \Sigma(x).
\]
Then $\Sigma^\ast(X)$ is an $\mathcal A'$-complex with $n$-support $\abs{X}_n$. Moreover, the map $\Sigma^\ast : \Subd(\mathcal A) \to \Subd(\mathcal A')$ given by $X \mapsto \Sigma^\ast(X)$ is a canonical subdivision over $n\mathcal P$.
\end{prop}
Before proving this, we set some notation. Let $(\mathcal A, p)$, $(\mathcal A', p')$ be $n\mathcal P$-posets. Let $\mathcal A_0$ be any subset of $\mathcal A$. A \emph{subdivision} over $n\mathcal P$ is any map of sets $\Sigma : \mathcal A_0 \to \Subd(\mathcal A')$ such that $p(x) = \abs{\Sigma(x)}_n$ for all $x \in \mathcal A_0$. A canonical subdivision is therefore a subdivision $\Sigma : \mathcal A \to \Subd(\mathcal A')$ which is a poset map.
Let $\Sigma : \mathcal A_0 \to \Subd(\mathcal A')$ be a subdivision over $n\mathcal P$, and let $x \in \mathcal A_0$ and $y \in \mathcal A$ with $y \le x$. The \emph{restriction} of $\Sigma$ from $x$ to $y$, denoted by $\Sigma(x|y)$, is the unique $Y \in \Subd(\mathcal A')$ such that $Y \le \Sigma(x)$ and $\abs{Y}_n = p(y)$. Equivalently, this is the unique $Y \in \Subd(\mathcal A')$ such that $Y \le \Sigma(x)$ and $\abs{Y} = (\Sum \circ p)(y)$. We note the following:
\begin{prop} \label{prop:canonicalcriterion}
A subdivision $\Sigma : \mathcal A \to \Subd(\mathcal A')$ over $n\mathcal P$ is a poset map (and hence a canonical subdivision over $n\mathcal P$) if and only if $\Sigma(x|y) = \Sigma(y)$ for all $x$, $y \in \mathcal A$ such that $y \le x$.
\end{prop}
\begin{proof}
If $\Sigma$ is a poset map and $x$, $y \in \mathcal A$ such that $y \le x$, then $\Sigma(y) \le \Sigma(x)$. Since $\abs{\Sigma(y)}_n = p(y)$ by the definition of a subdivision over $n\mathcal P$, we must have $\Sigma(x|y) = \Sigma(y)$ by the definition of $\Sigma(x|y)$. Conversely, if $\Sigma(x|y) = \Sigma(y)$ for all $x$, $y \in \mathcal A$ such that $y \le x$, then in particular $\Sigma(y) = \Sigma(x|y) \le \Sigma(x)$, so $\Sigma$ is a poset map.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:canonicalrefinement}]
Let $x$, $y \in X$ such that $(\Sum \circ p)(x)$ and $(\Sum \circ p)(y)$ share a face. Let $z$ be the element of $X$ such that $p(z)$ is this face, so $z \le x$ and $z \le y$. Since $\Sigma$ is a canonical subdivision, by Proposition~\ref{prop:canonicalcriterion} we have $\Sigma(x|z) = \Sigma(y|z) = \Sigma(z)$. Thus, the polytopal complexes $(\Sum \circ p')(\Sigma(x))$ and $(\Sum \circ p')(\Sigma(y))$ have the subdivision $(\Sum \circ p')(\Sigma(z))$ as their common intersection. This implies that $\Sigma^\ast(X)$ is indeed an $\mathcal A'$-complex.
We next show that $\abs{\Sigma^\ast(X)}_n = \abs{X}_n$. We have
\[
\abs{\Sigma^\ast(X)}_n = (Q_1,\dots,Q_n)
\]
where
\begin{align*}
Q_k &= \bigcup_{(P_1,\dots,P_n) \in p(\Sigma^\ast(X))} P_k \\
&= \bigcup_{x \in X} \bigcup_{(P_1,\dots,P_n) \in p (\Sigma(x))} P_k \\
&= \bigcup_{x \in X} (k^\text{th} \text{ entry of } \abs{\Sigma(x)}_n) \\
&= \bigcup_{x \in X} (k^\text{th} \text{ entry of } p(x)) \\
&= \bigcup_{(P_1,\dots,P_n) \in p(X)} P_k \\
&= k^\text{th} \text{ entry of } \abs{X}_n
\end{align*}
where we have used in the fourth line that $p(x) = \abs{\Sigma(x)}_n$ since $\Sigma$ is a poset map over $n\mathcal P$. Hence $\abs{\Sigma^\ast(X)}_n = \abs{X}_n$, as desired.
It follows that we have a map $\Sigma^\ast : \Subd(\mathcal A) \to \Subd(\mathcal A')$ that is a subdivision over $n \mathcal P$. To show that it is canonical, we need to show it is a poset map. Let $X$, $Y \in \Subd(\mathcal A)$ such that $Y \le X$. Thus $\abs{Y}$ is a face of $\abs{X}$, and hence $\abs{\Sigma^\ast(Y)}$ is a face of $\abs{\Sigma^\ast(X)}$, since $\abs{\Sigma^\ast(X)} = \abs{X}$ for all $X \in \Subd(\mathcal A)$. Moreover, since $Y$ is a subcomplex of $X$, it is easy to see from the definition of $\Sigma^\ast$ that $\Sigma^\ast(Y)$ is a subcomplex of $\Sigma^\ast(X)$. Thus $\Sigma^\ast(Y) \le \Sigma^\ast(X)$, as desired.
\end{proof}
\begin{exam}
For any perfect $n \mathcal P$-poset $(\mathcal A,p)$, recall the trivial subdivision $\triv_{\mathcal A} : \mathcal A \to \Subd(\mathcal A)$ given by $\triv_{\mathcal A}(x) = \langle x \rangle_{\mathcal A}$. By Proposition~\ref{prop:nsupporttrivial}, we have $\abs{\triv_{\mathcal A}(x)}_n = p(x)$, hence $\triv_{\mathcal A}$ is a subdivision over $n \mathcal P$. It is also clear that if $x$, $y \in \mathcal A$ with $y \le x$ then $\triv_{\mathcal A}(x|y) = \triv_{\mathcal A}(y)$, so $\triv_{\mathcal A}$ is a canonical subdivision over $n \mathcal P$.
\end{exam}
\subsection{Confluent subdivisions} \label{sec:confluent}
In this paper we will construct very complicated subdivisions recursively from smaller, simpler subdivisions. The purpose of this section is to give a systematic way to prove that the final subdivisions are well-defined and canonical. The key idea is the notion of confluence and Newman's lemma.
Let $(\mathcal A, p)$ be an $n\mathcal P$-poset. Let $\{\mathcal A_\alpha\}$ be a (possibly infinite) collection of subsets of $\mathcal A$. For each $\alpha$, let $\sigma_\alpha : \mathcal A_\alpha \to \Subd(\mathcal A)$ be a subdivision over $n\mathcal P$.
Let $X$ be any finite subset of $\mathcal A$. Suppose $x$ is an element of $X$ and $x \in \mathcal A_\alpha$ for some $\alpha$. We define a \emph{$\sigma_\alpha$-move} on $X$ at $x$ to be the act of transforming $X$ into the set
\[
X' := X \setminus \{x\} \cup \Max_{\mathcal A}\sigma_\alpha(x).
\]
We write this move as $X \xrightarrow{x,\sigma_\alpha} X'$, or simply $X \to X'$ if we do not need to specify $(x,\sigma_\alpha)$. We call a move $X \to X'$ \emph{non-trivial} if $X \neq X'$.
Given another finite set $Y \subset \mathcal A$, we write $X \xrightarrow{\ast} Y$ if there exists a sequence of moves $X \to X_1 \to X_2 \to \dots \to Y$. We allow this sequence to contain only one term; in other words, we always have $X \xrightarrow{\ast} X$.
Suppose that $X$, $Y \subset \mathcal A$ are finite. We say that $X$ and $Y$ are \emph{joinable} if there exists $Z \subset \mathcal A$ such that $X \xrightarrow{\ast} Z$ and $Y \xrightarrow{\ast} Z$.
We say that the family of subdivisions $\{\sigma_\alpha\}$ is \emph{locally confluent} if for any finite $X \subset \mathcal A$ and any two moves $X \to Y_1$, $X \to Y_2$, we have that $Y_1$ and $Y_2$ are joinable. Note that this is equivalent to saying that for all $x \in \mathcal A$ and $\alpha$, $\beta$ such that $x \in \mathcal A_\alpha \cap \mathcal A_\beta$, we have that $\Max \sigma_\alpha(x)$ and $\Max \sigma_\beta(x)$ are joinable.
We call a set $X \subset \mathcal A$ \emph{terminal} if there are no non-trivial moves from $X$. We call an element $x \in \mathcal A$ \emph{terminal} if $\{x\}$ is terminal. An element $x$ is terminal if and only if for every $\alpha$ such that $x \in \mathcal A_\alpha$, we have $x \in \sigma_\alpha(x)$. (This is because $\sigma_\alpha(x)$ is an $\mathcal A$-subdivision of $(\Sum \circ p)(x)$, so if $x \in \sigma_\alpha(x)$, then $x$ must be the only maximal element of $\sigma_\alpha(x)$.) From this, it follows that a set is terminal if and only if all its elements are terminal.
We say that $\{\sigma_\alpha\}$ is \emph{terminating} if there is no infinite sequence $X_1 \to X_2 \to X_3 \to \dots$ of non-trivial moves.
Finally, we say that $\{\sigma_\alpha\}$ is \emph{facially compatible} if the following two properties hold:
\begin{itemize}
\item If $x$, $y \in \mathcal A$ such that $y \le x$ and $x \in \mathcal A_\alpha$, then $\Max\sigma_\alpha(x|y)$ and $\{y\}$ are joinable.
\item If $x$, $y \in \mathcal A$ such that $y \le x$ and $x$ is terminal, then $y$ is terminal.
\end{itemize}
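The move system above is an abstract rewriting system in the sense of Newman's lemma. The following toy sketch (splitting integers into divisor pairs stands in for the subdivisions $\sigma_\alpha$; all names are illustrative) shows how local confluence plus termination forces a unique terminal set:

```python
from math import isqrt

def proper_splits(n):
    """All moves available at n: replace a composite n by {d, n // d}."""
    return [{d, n // d} for d in range(2, isqrt(n) + 1) if n % d == 0]

def apply_moves(X, choose):
    """Apply moves X -> (X \\ {x}) | split until terminal (all primes);
    `choose` selects which split to use, modeling different sequences."""
    X = set(X)
    while True:
        movable = [x for x in X if proper_splits(x)]
        if not movable:
            return X  # terminal: no non-trivial moves remain
        x = max(movable)
        X = (X - {x}) | choose(proper_splits(x))

first, last = (lambda s: s[0]), (lambda s: s[-1])
# Local confluence + termination (Newman's lemma): every maximal run of
# moves from {360} reaches the same terminal set, the prime divisors.
assert apply_moves({360}, first) == apply_moves({360}, last) == {2, 3, 5}
```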
Our main result is the following.
\begin{thm} \label{thm:confluence}
Let $(\mathcal A, p)$ be a perfect $n\mathcal P$-poset and let $\{\sigma_\alpha : \mathcal A_\alpha \to \Subd(\mathcal A)\}$ be a family of locally confluent, facially compatible, and terminating subdivisions. Then for any $x \in \mathcal A$, there is a unique terminal set $S(x) \subset \mathcal A$ such that $\{x\} \xrightarrow{\ast} S(x)$. Moreover, $\Sigma(x) := \langle S(x) \rangle_{\mathcal A}$ is an $\mathcal A$-subdivision, and the map $\Sigma : \mathcal A \to \Subd(\mathcal A)$ given by $x \mapsto \Sigma(x)$ is a canonical subdivision over $n \mathcal P$.
\end{thm}
\begin{proof}
The fact that $S(x)$ exists and is unique follows directly from Newman's diamond lemma, which states that any locally confluent and terminating binary relation is globally confluent \cite{New42}. (In this case, the binary relation is non-trivial moves ``$\to$''.)
It remains to show that $\Sigma(x)$ is an $\mathcal A$-subdivision, and the map $\Sigma : \mathcal A \to \Subd(\mathcal A)$ is a canonical subdivision over $n \mathcal P$. Define a binary relation $\prec$ on $\mathcal A$ by $y \prec x$ if there is a non-trivial move $\{x\} \to Y$ such that $y \in Y$. We prove the following:
{\bf Claim:} $\prec$ is a well-founded relation on $\mathcal A$.
\begin{proof}[Proof of claim]
Suppose that $x_1 \succ x_2 \succ x_3 \succ \dots$ is an infinite descending sequence. Define a sequence of non-trivial moves $X_1 \to X_2 \to \dots$, inductively as follows. Define $X_1 = \{x_1\}$. Next fix $k \ge 2$, and assume by induction that we have constructed $X_1 \to \dots \to X_{k-1}$ such that $x_j \in X_j$ for all $1 \le j \le k-1$. By definition of $\prec$, there exists a non-trivial move $\{x_{k-1}\} \xrightarrow{(x_{k-1},\sigma_\alpha)} Y_k$ such that $x_k \in Y_k$. Define $X_k$ by $X_{k-1} \xrightarrow{(x_{k-1},\sigma_\alpha)} X_k = X_{k-1} \setminus \{x_{k-1}\} \cup Y_k$. Since the move $\{x_{k-1}\} \to Y_k$ is non-trivial, we have $x_{k-1} \notin Y_k$. Thus $x_{k-1} \notin X_k$, so $X_{k-1} \neq X_k$. Hence we have a non-trivial move $X_{k-1} \to X_k$ with $x_k \in X_k$, completing the induction. However, this contradicts the terminating property, proving the claim.
\end{proof}
We now prove by induction on $\prec$ that
\begin{enumerate}[label=(\alph*)]
\item $\Sigma(x)$ is an $\mathcal A$-subdivision
\item $\abs{\Sigma(x)}_n = p(x)$
\item $\Sigma(x|y) = \Sigma(y)$ for all $y \le x$.
\end{enumerate}
By Proposition~\ref{prop:canonicalcriterion}, this will complete the proof.
For the base case, assume that $x$ is terminal. Thus $S(x) = \{x\}$, so $\Sigma(x) = \langle x \rangle_{\mathcal A} = \triv_{\mathcal A}(x)$, since $\mathcal A$ is perfect. Hence $\Sigma(x)$ is an $\mathcal A$-subdivision, proving (a). Proposition~\ref{prop:nsupporttrivial} implies (b). Suppose $y \le x$. By facial compatibility, $y$ is also terminal, so $\Sigma(y) = \triv_{\mathcal A}(y)$. Since $\triv_{\mathcal A}(x|y) = \triv_{\mathcal A}(y)$, (c) is proved.
For the inductive step, assume that $x$ is not terminal. Let $\{x\} \xrightarrow{(x,\sigma_\alpha)} Y$ be a non-trivial move. In particular, we have $x \notin Y$. Then $S(x) = \bigcup_{y \in Y} S(y)$, and hence $\Sigma(x) = \bigcup_{y \in Y} \Sigma(y)$. By induction, we have proved (a)-(c) for all $y \in Y$. Thus, the proof of Proposition~\ref{prop:canonicalrefinement} implies (a) and (b) for $x$. For (c), let $y \le x$. By facial compatibility, we have that $\Max\sigma_\alpha(x|y)$ and $\{y\}$ are joinable. It follows that
\[
S(y) = \bigcup_{z \in \Max\sigma_\alpha(x|y)} S(z)
\]
and hence
\begin{equation} \label{eq:Sigmay}
\Sigma(y) = \bigcup_{z \in \Max\sigma_\alpha(x|y)} \Sigma(z).
\end{equation}
For every $z \in \Max\sigma_\alpha(x|y)$, there is $z' \in \Max \sigma_\alpha(x) = Y$ such that $z \le z'$. Since $z' \neq x$ by the assumption that $\{x\} \to Y$ is non-trivial, we have $z' \prec x$, and therefore by induction $\Sigma(z) = \Sigma(z'|z)$. Hence $\Sigma(y)$ is a union of elements of the form $\Sigma(z'|z)$ with $z' \in Y$. Since $\Sigma(x) = \bigcup_{z' \in Y} \Sigma(z')$ and $\Sigma(z'|z)$ is a subcomplex of $\Sigma(z')$, it follows that $\Sigma(y)$ is a subcomplex of $\Sigma(x)$. On the other hand, the formula \eqref{eq:Sigmay} implies that the support of $\Sigma(y)$ is $\abs{y}$. So we must have $\Sigma(x|y) = \Sigma(y)$, completing the proof.
\end{proof}
\subsection{Examples of canonical subdivisions} \label{sec:canonicalexamples}
In this section we give two examples of well-known subdivisions which can be expressed as canonical subdivisions in our sense. This section can be skipped without logically affecting the main proof, but the ideas may be useful for understanding the later arguments.
\subsubsection{Pulling triangulations}
Let $(\mathcal S, \conv)$ be the $\mathcal P$-poset from Example~\ref{exam:pointsets}. Let $A \in \mathcal S$. A \emph{covector} of $A$ is a point $x \in A$ such that $\dim(A \setminus \{x\}) < \dim(A)$. (Here, $\dim(A)$ denotes the dimension of the smallest affine subspace containing $A$.) Every element of $A$ is a covector if and only if $A$ is affinely independent, i.e. $A$ is the set of vertices of a simplex.
Let $\mathcal S^\ast$ be the set of elements of $\mathcal S$ which are not affinely independent. We define a subdivision $\pull : \mathcal S^\ast \to \Subd(\mathcal S)$ as follows. Let $A \in \mathcal S^\ast$, and let $x$ be the smallest element of $A$ (according to the order on $A$) which is not a covector. We define $\pull(A)$ to be the set of all $B$ and $\{x\} \cup B$ such that $B \le_{\mathcal S} A$ and $x \notin B$. Then $\pull(A)$ is an $\mathcal S$-subdivision of $\abs{A}$, so we have a subdivision $\pull : \mathcal S^\ast \to \Subd(\mathcal S)$ over $\mathcal P$.
We now claim that the family $\{\pull\}$ consisting of a single subdivision is locally confluent, terminating, and facially compatible. Local confluence is trivial since the family has only one subdivision. If $A \in \mathcal S^\ast$ and $B \in \Max \pull(A)$, then $B \subsetneq A$, which proves termination. Finally, suppose $A \in \mathcal S^\ast$ and $B \le_{\mathcal S} A$. If $x \notin B$, then $\pull(A|B) = \triv_{\mathcal S}(B)$. If $x \in B$, then we can check that
\[
\pull(A|B) = \begin{dcases*}
\pull(B) & if $x$ is not a covector of $B$ \\
\triv_{\mathcal S}(B) & otherwise.
\end{dcases*}
\]
In all cases, $\Max\pull(A|B)$ and $\{B\}$ are joinable, proving the first condition of facial compatibility. For the second property, we note that $A$ is terminal if and only if $A \notin \mathcal S^\ast$, i.e. $A$ is affinely independent. Clearly if this holds for $A$ then it holds for $B$, completing the proof.
Thus, by Theorem~\ref{thm:confluence}, we have a canonical subdivision $\Pull : \mathcal S \to \Subd(\mathcal S)$ such that if $A \in \mathcal S$ and $B \in \Pull(A)$, then $B$ is terminal, i.e. affinely independent. This subdivision is known in the literature as the \emph{pulling triangulation}.
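For concreteness (an illustrative example), let $A = (v_1, v_2, v_3, v_4)$ be the ordered vertex set of a convex quadrilateral in $\mathbb R^2$, say $v_1 = (0,0)$, $v_2 = (1,0)$, $v_3 = (1,1)$, $v_4 = (0,1)$. No element of $A$ is a covector, since removing any single point leaves a set of dimension $2$; hence $\pull$ pulls at the smallest element $v_1$. The maximal elements of $\pull(A)$ are $\{v_1, v_2, v_3\}$ and $\{v_1, v_3, v_4\}$, obtained by joining $v_1$ to the two edges not containing it; both are affinely independent, hence terminal. Thus $\Pull(A)$ is the triangulation of the quadrilateral by the diagonal $v_1 v_3$.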
\subsubsection{Dicing}
Let $\mathscr H$ be a finite set of hyperplanes in $\mathbb R^d$. For each $H \in \mathscr H$, let $\mathcal P_H$ be the set of polytopes in $\mathbb R^d$ whose relative interior intersects $H$. We define a subdivision $\dice_H : \mathcal P_H \to \Subd(\mathcal P)$ as follows. Let $H_1$, $H_2$ be the two closed half-spaces of $\mathbb R^d$ cut out by $H$. For $P \in \mathcal P_H$, we define
\[
\dice_H(P) = \triv(P \cap H_1) \cup \triv(P \cap H_2)
\]
Then $\dice_H(P)$ is a polytopal subdivision of $P$ and $\dice_H : \mathcal P_H \to \Subd(\mathcal P)$ is a subdivision over $\mathcal P$.
We claim that the family $\{ \dice_H \}_{H \in \mathscr H}$ is locally confluent, terminating, and facially compatible. We start with termination. If $A \in \mathcal P_H$ and $B \in \Max\dice_H(A)$, then $B \notin \mathcal P_H$. Since $\mathscr H$ is finite, this implies $\{ \dice_H \}_{H \in \mathscr H}$ is terminating. We next show facial compatibility. Let $A \in \mathcal P_H$ and let $B \le_{\mathcal P} A$. If $B \in \mathcal P_H$, then $\dice_H(A|B) = \dice_H(B)$. Otherwise, $\dice_H(A|B) = \triv(B)$. Either way, $\Max \dice_H(A|B)$ and $\{B\}$ are joinable, proving the first condition of facial compatibility. Next, note that $A$ is terminal if and only if its relative interior does not intersect any $H \in \mathscr H$. If this holds for $A$ then it clearly holds for $B$, proving facial compatibility.
Finally, we prove local confluence. Suppose $P \in \mathcal P_{G} \cap \mathcal P_{H}$ for distinct $G$, $H \in \mathscr H$. Consider $\Max \dice_G(P)$ and $\Max \dice_H(P)$. If we apply a $\dice_H$-move at each element of $\Max \dice_G(P)$ whose relative interior intersects $H$, then we obtain the same result as when we apply a $\dice_G$-move at each element of $\Max \dice_H(P)$ whose relative interior intersects $G$: in both cases, the pieces of $P$ cut out by $G$ and $H$ together. Thus $\Max \dice_G(P)$ and $\Max \dice_H(P)$ are joinable, so $\{ \dice_H \}_{H \in \mathscr H}$ is locally confluent.
Thus, by Theorem~\ref{thm:confluence}, we have a canonical subdivision $\Dice : \mathcal P \to \Subd(\mathcal P)$ such that for all $P \in \mathcal P$ and $Q \in \Dice(P)$, the relative interior of $Q$ does not intersect any $H \in \mathscr H$. This is of course the subdivision obtained by intersecting $P$ with each of the closed regions of $\mathbb R^d$ cut out by $\mathscr H$.
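For example (an illustrative instance), let $P = [0,1]^2$ and $\mathscr H = \{G, H\}$, where $G$ is the line $x = \tfrac12$ and $H$ is the line $y = \tfrac12$. A $\dice_G$-move splits $P$ into the rectangles $[0,\tfrac12] \times [0,1]$ and $[\tfrac12,1] \times [0,1]$, and applying a $\dice_H$-move to each of these yields the four squares of side $\tfrac12$; performing the moves in the opposite order yields the same terminal set, as guaranteed by local confluence. Hence $\Dice(P)$ is the subdivision of $[0,1]^2$ into these four squares.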
\section{Cayley polytopes} \label{sec:Cayleypolytopes}
\subsection{Notation}
Before proceeding, we set some notation regarding tuples and matrices. Let $a = (a_1, \dots, a_m)$ be an $m$-tuple (entry type unspecified). We allow $m=0$, in which case the tuple has no entries. We define $\abs{a} := m$. For $I \subset [m]$, we use $a_I$ to denote the tuple $(a_{i_1}, \dots, a_{i_k})$ where $I = \{i_1,\dots,i_k\}$ and $i_1 < \dots < i_k$. We use $a_{\setminus j}$ to denote $a_{[m] \setminus \{j\}}$.
Similarly, let $a = (a_{ij})_{i,j=1}^{m,n}$ be an $m \times n$ matrix and $I \subset [m]$, $J \subset [n]$. We allow $m=0$ or $n=0$, in which case the matrix has no entries. We use $a_{I \times \bullet}$ to denote the matrix obtained by restricting $a$ to the rows indexed by $I$, preserving the order of the rows. We similarly define $a_{\bullet \times J}$ and $a_{I \times J}$. We use $a_{\setminus i \times \bullet}$ to denote $a_{([m] \setminus \{i\}) \times \bullet}$, and similarly define $a_{\bullet \times \setminus j}$ and $a_{\setminus i \times \setminus j}$.
\subsection{Cayley sums} \label{sec:Cayley}
Let $P = (P_1,\dots,P_m)$ be an $m$-tuple of polytopes in $\mathbb R^d$, where $m \ge 1$. We say that $P$ is in \emph{Cayley position} if there exists a linear map $\pi: \mathbb R^d \to \mathbb R^d$ such that $\pi(P_i)$ is a point for all $i$ and the sequence of points $(\pi(P_i))_{i \in [m]}$ is affinely independent. In this situation, we define the \emph{Cayley sum} $\Cay(P)$ to be the convex hull of the entries of $P$.
The faces of a Cayley sum $\Cay(P)$ are precisely Cayley sums of the form $\Cay(F_I)$, where $I$ is a nonempty subset of $[m]$ and $F = (F_1, \dots, F_m)$ where $F \le P$. (Recall that $F \le P$ means that there exists a linear functional such that $F_1$, \dots, $F_m$ are the faces of $P_1$, \dots, $P_m$, respectively, with respect to this linear functional.)
Let $S = (S_1,\dots,S_n)$ be an $n$-tuple of independent simplices (recall the definition of independence from Section~\ref{sec:polytopes}), and let $a = (a_{ij})_{i,j=1}^{m,n}$ be an $m \times n$ matrix of nonnegative integers, where $m \ge 1$ and $n \ge 0$. We consider Cayley sums of the form $\Cay(P)$, where $P = (P_1,\dots,P_m)$ is an $m$-tuple of polytopes in Cayley position, and for each $i \in [m]$ we have
\[
P_i = p_i + \sum_{j = 1}^n a_{ij} S_j
\]
for some $p_i \in \mathbb R^d$. The faces of $\Cay(P)$ are as follows. Let $S' = (S'_1,\dots,S'_n)$ be any $n$-tuple of polytopes such that $S'_j \le S_j$ for all $j$. Since the entries of $S$ are independent, this is equivalent to $S' \le S$. Let $P' = (P_1',\dots,P_m')$ be the tuple with
\[
P_i' = p_i + \sum_{j=1}^n a_{ij} S_j'.
\]
Then $P' \le P$. Thus, for any nonempty $I \subset [m]$, $\Cay(P'_I)$ is a face of $\Cay(P)$. All faces of $\Cay(P)$ arise this way.
\subsection{The poset $\mathcal C$}
Let $\mathcal C_0$ be the poset defined as follows. Its elements are all tuples $(p,S,a)$
where
\begin{itemize}
\item $p = (p_1,\dots,p_m)$ is a tuple of points in $\mathbb R^d$, for some positive integer $m$.
\item $S = (S_1,\dots,S_n)$ is a tuple of independent ordered integral simplices in $\mathbb R^d$, for some nonnegative integer $n$.
\item $a = (a_{ij})_{i,j=1}^{m,n}$ is an $m \times n$ matrix of nonnegative integers.
\item The polytopes $P_1$, \dots, $P_m$ are in Cayley position, where
\[
P_i := p_i + \sum_{j = 1}^n a_{ij} S_j.
\]
\end{itemize}
We equip $\mathcal C_0$ with the partial order $\le _{\mathcal C_0}$, where
\[
(p_I,S',a_{I \times \bullet}) \le_{\mathcal C_0} (p,S,a)
\]
if $I$ is a nonempty subset of $[\abs{p}]$ and $S' \le S$. It is easy to see that this is a partial order.
Let $\Cay : \mathcal C_0 \to \mathcal P$ be the map defined by
\[
\Cay(p,S,a) = \Cay \left(p_i + \sum_{j = 1}^n a_{ij} S_j \right)_{i \in [\abs{p}]}
\]
By the discussion from the previous subsection, this is a poset map. Thus $(\mathcal C_0, \Cay)$ is a $\mathcal P$-poset.
We now define an equivalence relation $\sim$ on $\mathcal C_0$ as follows. If $j \in [\abs{S}]$ is such that $S_j$ is a point or $a_{ij} = 0$ for all $i \in [\abs{p}]$, then we set
\[
(p,S,a) \sim ((p_i+a_{ij}S_j)_{i \in [\abs{p}]},S_{\setminus j},a_{\bullet \times \setminus j}).
\]
We define $\mathcal C$ to be $\mathcal C_0 / \sim$.
For $A$, $B \in \mathcal C$, we let $A \le_{\mathcal C} B$ if there are representatives $A_0$, $B_0$ of $A$ and $B$, respectively, in $\mathcal C_0$ such that $A_0 \le_{\mathcal C_0} B_0$. It is straightforward to check that this defines a partial order on $\mathcal C$ (for example, using the standard form defined in the next subsection). Furthermore, $\Cay : \mathcal C_0 \to \mathcal P$ is constant on equivalence classes of $\sim$. Thus there is a well-defined map $\Cay : \mathcal C \to \mathcal P$; this is a poset map, and $(\mathcal C, \Cay)$ is a $\mathcal P$-poset.
If there is no risk of confusion, we will abuse notation and denote elements in $\mathcal C$ using their representatives in $\mathcal C_0$.
\subsubsection{Standard form and $L(A)$}
Given an object $A \in \mathcal C$, there is a unique representative $(p,S,a)$ of $A$ in $\mathcal C_0$ such that no entry of $S$ is a point and, for all $j$, the $j$-th column of $a$ is not all zeroes. We call this the \emph{standard form} of $A$. It is easy to check that if $A_0$ is the standard form of $A \in \mathcal C$, then $B \le_{\mathcal C} A$ if and only if $B$ has a representative $B_0 \in \mathcal C_0$ such that $B_0 \le_{\mathcal C_0} A_0$. (In fact, this is true if $A_0$ is any representative of $A$, which can be proved from the previous sentence.)
Using standard form, it is not hard to see that if $A \in \mathcal C$ and we have two different elements $B$, $C \in \mathcal C$ such that $B \le_{\mathcal C} A$ and $C \le_{\mathcal C} A$, then $\Cay(B)$ and $\Cay(C)$ are different faces of $\Cay(A)$. Moreover, for every face $F$ of $\Cay(A)$, there is a face $B$ of $A$ such that $\Cay(B) = F$. It follows that $\mathcal C$ is a perfect $\mathcal P$-poset.
Let $A \in \mathcal C$ with standard form $(p,S,a)$.
For $j \in [\abs{S}]$, let $v_j$ be the first vertex of $S_j$. Define
\[
S_0(A) := \conv \left( p_i + \sum_{j=1}^{\abs{S}} a_{ij} v_j \right)_{i \in [\abs{p}]}.
\]
In other words, $S_0(A)$ is the face of $\Cay(A)$ whose vertices are the first vertices of each Cayley summand of $\Cay(A)$.
Note that $S_0(A)$, $S_1$, \dots, $S_{\abs{S}}$ are independent simplices. We define
\begin{align*}
L(A) &:= L(S_0(A) + S_1 + \dots + S_{\abs{S}}) \\
&= L(S_0(A)) \oplus L(S_1) \oplus \dots \oplus L(S_{\abs{S}}).
\end{align*}
\subsection{$\gamma_T$ subdivisions} \label{sec:T}
\subsubsection{Definition of $\gamma_T$}
Let $T$ be an ordered integral simplex of dimension at least 1. Let $\mathcal C_T$ be the set of elements of $\mathcal C$ whose standard form $(p,S,a)$ has the property that $T$ is an entry of $S$. In this section we construct a subdivision $\gamma_T : \mathcal C_T \to \Subd(\mathcal C)$ over $\mathcal P$ such that the family $\{\gamma_T\}$ of all such subdivisions satisfies the conditions of Theorem~\ref{thm:confluence}. These subdivisions will be a main building block for future subdivisions.
Let $A \in \mathcal C_T$ with standard form $(p,S,a)$. Let $j$ be the unique number such that $S_j = T$, and let $i$ be the smallest number such that $a_{ij} = \max_{i'} a_{i'j}$. Let $v$ be the first vertex of $T$, and let $f$ be the facet of $T$ opposite $v$.
Note that for any positive integer $c$, the dilated simplex $cT$ can be written as the union of the two polytopes
\[
v + (c-1)T, \quad \Cay(v + (c-1)f, cf).
\]
(Here, we are using the ordinary Cayley sum as defined in Section~\ref{sec:Cayley}.) This can be extended to a $\mathcal C$-subdivision of $\Cay(A)$ as follows. Recall that the standard form of $A$ is $(p,S,a)$. Let $m := \abs{p}$ and $n := \abs{S}$.
We define
\begin{equation} \label{eq:A'A''}
\begin{aligned}
A' &= (p',S,a') \in \mathcal C \\
A'' &= (p'', F, a'') \in \mathcal C
\end{aligned}
\end{equation}
where
\begin{itemize}
\item $p'$ is the $m$-tuple obtained by replacing the $i$-th entry of $p$ with $p_{i} + v$.
\item $a'$ is the $m \times n$ matrix obtained by subtracting 1 from the $(i,j)$ entry of $a$.
\item $p''$ is the $(m+1)$-tuple obtained by inserting $p_{i} + v$ directly before the $i$-th entry of $p$.
\item $F$ is the $n$-tuple obtained by replacing the $j$-th entry of $S$ with $f$.
\item $a''$ is the $(m+1) \times n$ matrix obtained by inserting the $i$-th row of $a'$ directly above the $i$-th row of $a$.
\end{itemize}
We define
\[
\gamma_T(A) := \triv_{\mathcal C}(A') \cup \triv_{\mathcal C}(A'').
\]
Then $\gamma_T(A)$ is a $\mathcal C$-subdivision of $\Cay(A)$. If $a_{ij} = 1$ and $a_{i'j} = 0$ for all $i' \neq i$, then this subdivision has a unique maximal element $A''$. Otherwise, the maximal elements are $A'$ and $A''$.
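As a concrete example, let $T = \conv\{0, e_1, e_2\} \subset \mathbb R^2$, ordered so that $v = 0$ and $f = \conv\{e_1, e_2\}$, and let $A \in \mathcal C_T$ have standard form $((0), (T), (2))$, so $\Cay(A) = 2T$. Then $A' = ((0), (T), (1))$ with $\Cay(A') = T$, while $A'' = ((0,0), (f), \begin{psmallmatrix} 1 \\ 2 \end{psmallmatrix})$ with $\Cay(A'') = \Cay(f, 2f)$, the trapezoid $\{(x,y) : x,y \ge 0,\ 1 \le x + y \le 2\}$. Thus $\gamma_T(A)$ subdivides $2T$ into the small triangle $T$ and this trapezoid; since $a_{11} = 2$, both $A'$ and $A''$ are maximal. One can check directly that $L(A'') = \mathbb Z e_1 \oplus \mathbb Z \langle e_2 - e_1 \rangle = L(T) = L(A)$, illustrating Proposition~\ref{prop:LgammaT} below.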
\subsubsection{Properties of $\gamma_T$}
We first prove the following:
\begin{prop} \label{prop:LgammaT}
Let $A \in \mathcal C_T$ and $B \in \Max \gamma_T(A)$. Then $L(A) = L(B)$.
\end{prop}
\begin{proof}
We have either $B = A'$ or $B = A''$. In the former case, we have $S_0(A) = S_0(B)$ and $A$ and $B$ have the same second entry, so $L(A) = L(B)$ as desired.
Assume $B = A''$. Without loss of generality, assume $i = j = 1$. For convenience, we will reuse the variables $i$ and $j$ in the proof below. Let $w$ be the second vertex of $T$. We have
\begin{align*}
S_0(B) &= \conv \left( \left\{ p_i + a_{i1} w + \sum_{j=2}^{\abs{S}} a_{ij} v_j \right\}_{i \in [\abs{p}]} \cup \right. \\
&\qquad\qquad \left. \left\{ p_{1} + v + (a_{11}-1)w + \sum_{j=2}^{\abs{S}} a_{1j}v_j \right\}
\right) \\
&= \conv \left( \left\{ p_i + a_{i1} (w-v) + \sum_{j=1}^{\abs{S}} a_{ij} v_j \right\}_{i \in [\abs{p}]} \cup \right. \\
&\qquad\qquad \left. \left\{ p_{1} + (a_{11}-1)(w-v) + \sum_{j=1}^{\abs{S}} a_{1j}v_j \right\}
\right).
\end{align*}
Subtracting the last entry in the above convex hull from the first entry, we get $w-v$. Hence $w-v \in L(S_0(B))$. It is then not hard to see from the above expression that $L(S_0(B)) = L(S_0(A)) \oplus \mathbb Z\langle w-v \rangle$. Thus,
\begin{align*}
L(B) &= L(S_0(A)) \oplus \mathbb Z\langle w-v \rangle \oplus L(f) \oplus L(S_2) \oplus \dots \oplus L(S_{\abs{S}}) \\
&= L(S_0(A)) \oplus L(T) \oplus L(S_2) \oplus \dots \oplus L(S_{\abs{S}}) \\
&= L(A)
\end{align*}
as desired.
\end{proof}
\begin{prop} \label{prop:facegammaT}
Let $A \in \mathcal C_T$ and suppose $B \in \mathcal C$ is such that $B \le_{\mathcal C} A$. Then either $\gamma_T(A|B) = \gamma_{T'}(B)$ for some $T'$ or $\gamma_T(A|B) = \triv_{\mathcal C}(B)$.
\end{prop}
\begin{proof}
Let the standard form of $A$ be $(p,S,a)$. Let $B = (p_I,S',a_{I \times \bullet})$ with $S' \le S$ and $I$ a nonempty subset of $[\abs{p}]$. Assume that $T = S_j$, and let $i$ be the smallest number such that $a_{ij} = \max_{i'} a_{i'j}$. If $i \in I$ and $S_j'$ contains the first vertex of $T$ and another vertex, then $\gamma_T(A|B) = \gamma_{S_j'}(B)$. Otherwise, we have $\gamma_T(A|B) = \triv_{\mathcal C}(B)$.
\end{proof}
We now prove the following.
\begin{prop} \label{prop:confgammaT}
The family of subdivisions $\{ \gamma_T \}$, where $T$ ranges over all ordered integral simplices of dimension at least 1, is locally confluent, terminating, and facially compatible.
\end{prop}
\begin{proof}
We first prove local confluence. Let $A \in \mathcal C$ with standard form $(p,S,a)$, and let $T_1$, $T_2$ be distinct entries of $S$. We need to show that $\Max \gamma_{T_1}(A)$ and $\Max \gamma_{T_2}(A)$ are joinable. Without loss of generality, assume $T_1 = S_1$ and $T_2 = S_2$.
Let $i_1$, $i_2$ be the smallest numbers such that $a_{i_1 1} = \max_i a_{i1}$ and $a_{i_22} = \max_i a_{i2}$, respectively. First suppose that $i_1 \neq i_2$. Then applying a $\gamma_{T_1}$-move on $\Max \gamma_{T_2}(A)$ at each of its elements yields the same result as applying a $\gamma_{T_2}$-move on $\Max \gamma_{T_1}(A)$ at each of its elements. Hence $\Max \gamma_{T_1}(A)$ and $\Max \gamma_{T_2}(A)$ are joinable, as desired.
Now assume $i_1 = i_2$. Let $A_1'$, $A_1'' \in \gamma_{T_1}(A)$ be as in \eqref{eq:A'A''} and define $A_2'$, $A_2'' \in \gamma_{T_2}(A)$ analogously. Consider the following sequence of moves starting from $\Max \gamma_{T_1}(A)$. First, if $A_1' \in \Max \gamma_{T_1}(A)$, then apply a $\gamma_{T_2}$ move at $A_1'$. Next, apply a $\gamma_{T_2}$ move at $A_1''$. Since $A_1''$ has at least two entries greater than 0 in the second column, $\gamma_{T_2}(A_1'')$ has two maximal elements $(A_1'')'$, $(A_1'')''$, defined analogously to \eqref{eq:A'A''}. Finally, apply a $\gamma_{T_2}$ move at $(A_1'')'$.
If we perform an analogous sequence of moves starting from $\Max \gamma_{T_2}(A)$ (with $\gamma_{T_1}$ moves replacing $\gamma_{T_2}$ moves), then we obtain the same set as with the above sequence. Hence, $\Max \gamma_{T_1}(A)$ and $\Max \gamma_{T_2}(A)$ are joinable, as desired.
We next show that $\{ \gamma_T \}$ is terminating. Let $A \in \mathcal C_T$ with standard form $(p,S,a)$, and let $B \in \Max \gamma_T(A)$ have standard form $(p',S',a')$. Then either $\sum_j \dim S'_j < \sum_j \dim S_j$, or $S' = S$ and $\sum_{i,j} a'_{ij} < \sum_{i,j} a_{ij}$. Since the quantities $\sum_j \dim S_j$ and $\sum_{i,j} a_{ij}$ are both nonnegative integers, it follows that $\{ \gamma_T \}$ is terminating.
Finally, we show that $\{ \gamma_T \}$ is facially compatible. Let $A \in \mathcal C_T$ and suppose $B \in \mathcal C$ is such that $B \le_{\mathcal C} A$. Proposition~\ref{prop:facegammaT} implies $\Max \gamma_T(A|B)$ and $\{B\}$ are joinable, as desired.
Next, suppose $A \in \mathcal C$. From the definition of $\gamma_T$, $A$ is terminal with respect to $\{ \gamma_T \}$ if and only if $A \notin \mathcal C_T$ for any $T$. Thus, if $(p,S,a)$ is the standard form of $A$, then $A$ is terminal if and only if $\abs{S} = 0$. Clearly if this holds for $A$, it holds for any $B \le A$. Hence $\{ \gamma_T \}$ is facially compatible, completing the proof.
\end{proof}
From the previous two propositions we immediately have the following from Theorem~\ref{thm:confluence}.
\begin{thm} \label{thm:Gamma}
There is a canonical subdivision $\Gamma : \mathcal C \to \Subd(\mathcal C)$ over $\mathcal P$ such that for any $A \in \mathcal C$ and any $B \in \Max \Gamma(A)$, where $(p,S,a)$ is the standard form of $B$, we have $\abs{S} = 0$ and $L(A) = L(B)$.
\end{thm}
When restricted to elements $(p,S,a)$ of $\mathcal C$ where $\abs{p} = 1$ and $\abs{S} = 1$, $\Gamma$ is known as the ``canonical triangulation'' of a dilated ordered simplex \cite{HPPS}.
\section{Box points and index-lowering} \label{sec:boxpoints}
\subsection{Box points}
In this section we describe the standard way of reducing the indices of integral polytopes, first defined in \cite{KKMS} as \emph{Waterman points}. We use the terminology of \cite{HPPS} and call them \emph{box points}.
Let $S = (S_1,\dots,S_n)$ be a tuple of $n$ independent integral simplices in $\mathbb R^d$. Consider the lattices
\begin{align*}
L(S) &:= L(S_1) + \dots + L(S_n) \\
N(S) &:= N(S_1) + \dots + N(S_n).
\end{align*}
We define a \emph{box point} of $S$ to be any nonzero element of the quotient group $G(S) := N(S) / L(S)$. Let $v_1$, \dots, $v_n$ be any vertices of $S_1$, \dots, $S_n$, respectively. For any box point $x$ of $S$, there is a unique tuple $(c_1,\dots,c_n)$ of nonnegative integers such that the polysimplex $\sum_{j=1}^n c_j(S_j-v_j)$ contains exactly one representative of $x$ in $N(S)$.\footnote{Explicitly, $(c_1,\dots,c_n)$ can be described as follows. For each $1 \le j \le n$, let $\{u_1,\dots,u_{k_j}\}$ be the vertices of $S_j$ other than $v_j$, and define $e_j^i := u_i - v_j$. Then the $e_j^i$ form a basis for $L(S)$. Every $x \in G(S)$ has a unique representative in $N(S)$ of the form $\sum_{i,j} x_j^i e_j^i$, where each $0 \le x_j^i < 1$. Then $c_j = \lceil \sum_i x_j^i \rceil$.}
The tuple $(c_1,\dots,c_n)$ does not depend on the choice of $v_1$, \dots, $v_n$. We denote the tuple $(c_1,\dots,c_n)$ by $c(S,x)$. We have $0 \le c_j \le d$ for all $j$. We define the \emph{support} of $c$ to be $\supp c := \{ j \in [n] : c_j \neq 0 \}$.
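As an example, take $n = 1$ and let $S_1 = \conv\{0, e_1, e_2, (1,1,2)\} \subset \mathbb R^3$ be the Reeve simplex, with first vertex $v_1 = 0$. Then $N(S) = \mathbb Z^3$ and $L(S) = \mathbb Z\langle e_1, e_2, (1,1,2) \rangle$ has index $2$, so $G(S) \cong \mathbb Z/2\mathbb Z$ and there is a unique box point $x$, represented by $(0,0,1)$. In the basis of the footnote, the reduced representative of $x$ is $\tfrac12 e_1 + \tfrac12 e_2 + \tfrac12 (1,1,2) = (1,1,1)$, so $c(S,x) = (2)$ with $c_1 = \lceil \tfrac32 \rceil = 2$: the simplex $S_1$ contains no representative of $x$, while $2 S_1$ contains exactly one, namely $(1,1,1)$.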
Let $S$ be as above, and let $S' \le S$. Then we have a natural inclusion $G(S') \hookrightarrow G(S)$. Thus, any box point of $S'$ can be regarded as a box point of $S$. Suppose $x$ is a box point of $S'$, and thus a box point of $S$. Then $c(S',x) = c(S,x)$.
Now, let $L$ be a $d$-dimensional sublattice of $\mathbb Z^d$. Let $x \in \mathbb Z^d / L$ be nonzero. For any tuple $S = (S_1,\dots,S_n)$ of independent integral simplices in $\mathbb Z^d$, we say that $x$ is a \emph{box point} of $S$ if there is $S' \le S$ such that $L(S') = L \cap N(S')$ and $N(S')$ contains a representative of $x$. In this situation, clearly $x$ can be naturally identified with a unique nonzero element of $G(S')$, and hence it can be identified with a unique nonzero element of $G(S)$.
Let $x$ be as above. If $A \in \mathcal C$ has standard form $(p,S,a)$, we say that $x$ is a box point of $A$ if $x$ is a box point of $S$.
\subsection{A proof of the KMW theorem} \label{sec:KMW}
We now use box points to give a short proof of the KMW theorem. The fundamental construction is the same as in the original proof \cite{KKMS}, but the argument structure is simplified due to the use of canonical subdivisions. This section is optional and is not needed in the proof of the main theorem of the paper.
Let $S$ be an ordered integral simplex. Let $x$ be a box point of $S$. Let $c$ be the single entry of $c(S,x)$. Let $x_0$ be the unique representative of $x + cv$ in $cS$, where $v$ is the first vertex of $S$. Let $\mathfrak F = \mathfrak F(S)$ be the set of all facets $F$ of $S$ such that $cF$ does not contain $x_0$. Consider the collection of simplices
\[
\{ \conv(x_0,cF) : F \in \mathfrak F \}.
\]
These simplices are the maximal simplices of a triangulation of $cS$, called the \emph{stellar subdivision} of $cS$ at $x_0$.
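For instance, for the Reeve simplex $S = \conv\{0, e_1, e_2, (1,1,2)\}$ and its unique box point $x$ (represented by $(0,0,1)$), we have $c = 2$ and $x_0 = (1,1,1)$. The point $x_0$ lies in the interior of $2S$ (its barycentric coordinates in $2S$ are all $\tfrac14$), so $\mathfrak F(S)$ consists of all four facets of $S$, and the stellar subdivision of $2S$ at $(1,1,1)$ has the four maximal simplices $\conv((1,1,1), 2F)$.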
We can make this stellar subdivision a $\mathcal C$-subdivision by considering all elements in $\mathcal C$ with standard form
\begin{equation} \label{eq:starsimplex}
\left( (x_0,0), (F), \begin{pmatrix} 0 \\ c \end{pmatrix} \right)
\end{equation}
where $F \in \mathfrak F$. These elements map via $\Cay$ to the maximal elements of the above stellar subdivision. Thus, they are the maximal elements of a $\mathcal C$-subdivision whose image under $\Cay$ is the stellar subdivision.
Crucially, for any $F \in \mathfrak F$, the lattice distance $h'$ between $cF$ and $x_0$ is strictly smaller than the lattice distance $h$ between $F$ and the opposite vertex of $S$. Thus, if $A \in \mathcal C$ is of the form \eqref{eq:starsimplex}, we have
\[
\ind L(A) = h' \ind L(F) < h \ind L(F) = \ind L(S)
\]
Now consider the polytope $NcS$, where $N$ is a positive integer. Consider the collection of polytopes
\begin{multline*}
\{ \conv(x_0+(N-1)cF,NcF) : F \in \mathfrak F \} \cup \dotsb \\
\cup \{ \conv((N-1)x_0+cF,(N-2)x_0+2cF) : F \in \mathfrak F \} \\
\cup \{ \conv(Nx_0,(N-1)x_0+cF) : F \in \mathfrak F \}.
\end{multline*}
These are the maximal elements of a polytopal subdivision of $NcS$. We can make this a $\mathcal C$-subdivision by considering all elements in $\mathcal C$ with standard form
\begin{equation} \label{eq:concentricpolytope}
\left( \left( rx_0, (r-1)x_0 \right),(F), \begin{pmatrix} (N-r)c \\ (N-r+1)c \end{pmatrix} \right)
\end{equation}
where $F \in \mathfrak F$ and $1 \le r \le N$ is an integer. These are the maximal elements of a $\mathcal C$-subdivision whose image under $\Cay$ is the previous polytopal subdivision. For any $A \in \mathcal C$ of the form \eqref{eq:concentricpolytope}, we have $\ind L(A) < \ind L(S)$.
We can now proceed with proving the KMW theorem. Let $L \subset \mathbb Z^d$ be a $d$-dimensional lattice, and fix some nonzero element $x \in \mathbb Z^d/L$. Let $\mathcal C_x := \mathcal C_x^\circ \cup \mathcal C_x^\bullet$, where
\begin{itemize}
\item $\mathcal C_x^\circ$ is the set of all elements in $\mathcal C$ which do not have $x$ as a box point.
\item $\mathcal C_x^\bullet$ is the set of all elements in $\mathcal C$ whose standard form $(p,S,a)$ is such that $x$ is a box point of $S$, $\abs{p} = 1$, $\abs{S} = 1$, and the entry of $a$ is $d!$.
\end{itemize}
We let $\mathcal C_x$ inherit a poset structure and poset map $\Cay : \mathcal C_x \to \mathcal P$ from $\mathcal C$. We can check that $(\mathcal C_x, \Cay)$ is a perfect $\mathcal P$-poset.
Let $\gamma_T^\circ$ be the restriction of $\gamma_T$ to $\mathcal C_x^\circ \cap \mathcal C_T$. It is easy to see that the image of $\gamma_T^\circ$ is contained in $\Subd(\mathcal C_x^\circ)$. Thus we have a well-defined subdivision $\gamma_T^\circ : \mathcal C_x^\circ \cap \mathcal C_T \to \Subd(\mathcal C_x)$ over $\mathcal P$.
Now, define a subdivision $\stell_x : \mathcal C_x^\bullet \to \Subd(\mathcal C_x)$ as follows. Let $A \in \mathcal C_x^\bullet$ with standard form $(p,S,a)$. We will abuse notation and identify $p$ and $S$ with their single entries. Let $c := c(S,x)$ and let $x_0$ be the unique representative of $x + cv$ in $cS$, where $v$ is the first vertex of $S$. Using \eqref{eq:concentricpolytope}, we define $\stell_x(A)$ to be the $\mathcal C$-subdivision whose maximal elements are
\[
\left( \left( p+rx_0, p+(r-1)x_0 \right),(F), \begin{pmatrix} (N-r)c \\ (N-r+1)c \end{pmatrix} \right)
\]
where $N = d!/c$, $1 \le r \le N$, and $F \in \mathfrak F(S)$. As in \eqref{eq:concentricpolytope}, this is a $\mathcal C$-subdivision of $\Cay(A)$, and since $x$ is not a box point of $F$ by definition of $\mathfrak F$, this is a $\mathcal C_x$-subdivision, as claimed.
We leave the following as an exercise:
\begin{prop}
The family of subdivisions $\{\gamma_T^\circ\}_T \cup \{\stell_x\}$ is locally confluent, terminating, and facially compatible (as $\mathcal C_x$-subdivisions).
\end{prop}
From this and the previous discussions we have the following.
\begin{thm} \label{thm:Gammax}
There is a canonical subdivision $\Gamma_x : \mathcal C_x \to \Subd(\mathcal C_x)$ over $\mathcal P$ such that for all $A \in \mathcal C_x$, we have the following:
\begin{enumerate}[label=(\alph*)]
\item If $A \in \mathcal C_x^\circ$, then $\Gamma_x(A) = \Gamma(A)$.
\item If $A \in \mathcal C_x^\bullet$ and $B \in \Max(\Gamma_x(A))$, where $(p,S,a)$ is the standard form of $B$, then $\abs{S} = 0$ and $\ind(L(B)) < \ind(L(A))$.
\end{enumerate}
\end{thm}
We can now give a proof of Theorem~\ref{thm:KMW}.
\begin{proof}[Proof of Theorem~\ref{thm:KMW}]
For any triangulation $X$, we will consider the collection of lattices
\[
\mathscr L(X) := \{ L(S) : S \in \Max X \}.
\]
Start with a $d$-dimensional triangulation $X$ in $\mathbb R^d$. Dilate the triangulation by $d!$ to form $d!X$. We can view $d!X$ as a $\mathcal C$-subdivision by replacing each simplex $d!S \in d!X$ by the element $((0), (S), (d!)) \in \mathcal C$. Call this $\mathcal C$-subdivision $Y$.
Now suppose there is a $d$-dimensional simplex $S \in X$ such that $\ind L(S) > 1$. Choose any nonzero $x \in \mathbb Z^d / L(S)$. Note that $Y \in \Subd(\mathcal C_x)$. Let $X' = \Cay \Gamma_x^\ast(Y)$, where $\Gamma_x$ is from Theorem~\ref{thm:Gammax} and the $\ast$ construction is from Proposition~\ref{prop:canonicalrefinement}. For every $T' \in X'$, we have that $T'$ is a simplex, and either
\begin{itemize}
\item $\ind L(T') = \ind L(T)$ for some $T \in X$ where $x$ is not a box point of $T$, or
\item $\ind L(T') < \ind L(T)$ for some $T \in X$ where $x$ is a box point of $T$.
\end{itemize}
Hence, comparing $\mathscr L(X)$ and $\mathscr L(X')$, we have that $\mathscr L(X')$ is obtained by replacing at least one lattice of $\mathscr L(X)$ with lattices of lower index, while keeping the other lattices the same. Thus, if we repeat the above process on $X'$, and so on, we will eventually obtain a triangulation $Z$ of $(d!)^N X$, for some $N$, such that $\mathscr L(Z)$ contains only the lattice $\mathbb Z^d$. Letting $X$ be any triangulation of an integral polytope $P$, we obtain the theorem.
\end{proof}
\subsection{The structure of box points} \label{sec:structboxpoints}
In this section we investigate the distribution of representatives of box points in dilated polysimplices.
Let $S = (S_1,\dots,S_n)$ be a tuple of ordered independent integral simplices and let $x$ be a box point of $S$. Let $c = c(S,x)$. By definition, the polysimplex $\sum_{j=1}^n c_j S_j$ contains a unique representative of $x + \sum_{j=1}^n c_j v_j$, where $v_j$ is the first vertex of $S_j$. Let $x_0$ be this representative. We call $x_0$ the \emph{focus} of $(S,x)$.
Let $\mathfrak F = \mathfrak F(S,x)$ be the set of all tuples $F = (F_1,\dots,F_n)$ such that $F \le S$, $\sum F_j$ is a facet of $\sum S_j$, and $x$ is not a box point of $F$. Thus, if $F \in \mathfrak F$, the point $x_0$ is not contained in the facet $\sum_{j=1}^n c_j F_j$ of $\sum_{j=1}^n c_j S_j$. Moreover, the lattice distance from $x_0$ to $\sum_{j=1}^n c_j F_j$ is smaller than the lattice height of $\sum_{j=1}^n S_j$ with respect to the facet $\sum_{j=1}^n F_j$.
As in Section~\ref{sec:KMW}, the elements of the form
\[
\left((x_0,0),F, \begin{pmatrix} 0 & \cdots & 0 \\ c_1 & \cdots & c_n \end{pmatrix} \right)
\]
where $F \in \mathfrak F$, are the maximal elements of a $\mathcal C$-subdivision of $\sum_{j=1}^n c_j S_j$. If $B$ is such an element, then $\ind L(B) < \ind (L(S_1) + \dots + L(S_n))$.
Now, consider the polysimplex $P = \sum_{j=1}^n a_j S_j$, where the $a_j$ are integers satisfying $a_j \ge c_j$ for all $j$. Since
\[
P = \sum_{j=1}^n c_j S_j + \sum_{j=1}^n (a_j - c_j) S_j\]
it follows that $P$ contains the polytope $P' := x_0 + \sum_{j=1}^n (a_j - c_j) S_j$. Moreover, if $F \in \mathfrak F$, then the lattice distance between the facet $x_0 + \sum_{j=1}^n (a_j - c_j) F_j$ of $P'$ and the facet $\sum_{j=1}^n a_j F_j$ of $P$ is equal to the lattice distance from $x_0$ to $\sum_{j=1}^n c_j F_j$.
Consider the elements of $\mathcal C$ of the form
\[
\left( (x_0,0),F, \begin{pmatrix} a_1-c_1 & \cdots & a_n-c_n \\ a_1 & \cdots & a_n \end{pmatrix} \right),
\]
where $F$ ranges over $\mathfrak F$. These elements are the maximal elements of a $\mathcal C$-complex whose support is $\overline{P \setminus P'}$, the closure of $P \setminus P'$.
Note that if there is some $j \in \supp c$ such that $a_j = c_j$, then $P'$ has smaller dimension than $P$, and so in this case this complex is a $\mathcal C$-subdivision of $P$.
If $B \in \mathcal C$ is an element of the above form, then $\ind L(B) < \ind (L(S_1) + \dots + L(S_n))$.
If $a_j \ge 2c_j$ for all $j$, then repeating the above argument on $P'$, we have that the elements of $\mathcal C$ of the form
\[
\left( \left(2x_0,x_0\right),F, \begin{pmatrix} a_1-2c_1 & \cdots & a_n-2c_n \\ a_1-c_1 & \cdots & a_n-c_n \end{pmatrix} \right)
\]
are the maximal elements of a $\mathcal C$-complex whose support is $\overline{P' \setminus P''}$, where $P'' := 2x_0 + \sum_{j=1}^n (a_j-2c_j) S_j$. In general, if $N$ is a positive integer such that $a_j \ge Nc_j$ for all $j$, then the elements of $\mathcal C$ of the form
\[
\left( \left(rx_0,(r-1)x_0 \right),F,\begin{pmatrix} a_1-rc_1 & \cdots & a_n-rc_n \\ a_1-(r-1)c_1 & \cdots & a_n-(r-1)c_n \end{pmatrix} \right),
\]
where $r = 1$, \dots, $N$ and $F \in \mathfrak F$, are the maximal elements of a $\mathcal C$-complex whose support is $\overline{P \setminus P^N}$, where $P^N := Nx_0 + \sum_{j=1}^n (a_j - Nc_j) S_j$. If in addition $a_j = Nc_j$ for some $j \in \supp c$, then this complex is a $\mathcal C$-subdivision of $P$. As before, if $B \in \mathcal C$ is a maximal element of this complex, then $\ind L(B) < \ind (L(S_1) + \dots + L(S_n))$.
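In summary, setting $P^r := r x_0 + \sum_{j=1}^n (a_j - r c_j) S_j$ for $r = 0, \dots, N$ (so that $P^0 = P$ and $P^1 = P'$), the construction above produces a chain of nested polytopes
\[
P = P^0 \supseteq P^1 \supseteq \cdots \supseteq P^N,
\]
in which each layer $\overline{P^{r-1} \setminus P^r}$ is covered by the maximal elements indexed by $F \in \mathfrak F$ at step $r$.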
\subsection{$\kappa_{T,x}$-subdivisions} \label{sec:kappaTx}
We will need one final geometric construction for the main proof. Like the previous constructions in this section, this construction uses box points to lower the indices of lattices, but in a way more analogous to $\gamma_T$. This is where our constructions begin to fundamentally diverge from the original proof of the KMW theorem.
Let $L \subset \mathbb Z^d$ be a $d$-dimensional lattice and let $x \in \mathbb Z^d / L$ be nonzero. Let $T$ be an ordered integral simplex with dimension at least one. Let $\mathcal K_{T,x}$ be the set of elements of $\mathcal C$ whose standard form $(p,S,a)$ satisfies the following:
\begin{itemize}
\item $T$ is an entry of $S$, say the $j_0$-th entry.
\item $x$ is a box point of $S$.
\item $a_{ij} \ge c(S,x)_{j}$ for all $i$ and all $j$.
\item $a_{ij_0} > c(S,x)_{j_0}$ for some $i$.
\end{itemize}
We now construct a subdivision $\kappa_{T,x} : \mathcal K_{T,x} \to \Subd(\mathcal C)$ over $\mathcal P$. Let $A \in \mathcal K_{T,x}$ with standard form $(p,S,a)$. Let $c = c(S,x)$. Let $j_0$ be the unique number such that $S_{j_0} = T$. Let $i_0$ be the smallest number such that $a_{i_0j_0} = \max_{i} a_{ij_0}$. Hence $a_{i_0j_0} > c_{j_0}$. To make the notation easier to read, we will assume from now on that $i_0 = j_0 = 1$; the construction below is easily adjusted to allow for other values of $i_0$ and $j_0$.
Let $v$ be the first vertex of $T$, and let $f$ be the facet of $T$ opposite $v$.
Let $m = \abs{p}$ and $n = \abs{S}$. Let $F = (F_1,\dots,F_n)$ where $F_1 = f$ and $F_i = S_i$ for all $i > 1$.
Let $A' = (p',S,a')$ and $A'' = (p'',F,a'')$ be as in Section~\ref{sec:T}, using $i = i_0 = 1$ and $j = j_0 = 1$.
We consider two cases:
\textbf{Case 1:} $x$ is a box point of $F$.
In this case, we define $\kappa_{T,x}(A) = \gamma_T(A)$, as defined in Section~\ref{sec:T}.
\textbf{Case 2:} $x$ is not a box point of $F$.
Let $\mathfrak F = \mathfrak F(S,x)$ be as in Section~\ref{sec:structboxpoints}, so $F \in \mathfrak F$.
Let $x_0$ be the focus of $(S,x)$. Recall from Section~\ref{sec:structboxpoints} that the polytope $x_0 + \sum_{j=1}^n (a_{1j}-c_j)F_j$ is contained in $\sum_{j=1}^n a_{1j} S_j$. Moreover, the lattice distance between $x_0 + \sum_{j=1}^n (a_{1j}-c_j)F_j$ and $\sum_{j=1}^n a_{1j}F_j$ is less than the lattice height of $\sum_{j=1}^n S_j$ with respect to its facet $\sum_{j=1}^n F_j$. Hence, $x_0 + \sum_{j=1}^n (a_{1j}-c_j)F_j$ is contained in the polytope
\[
\conv \left( v + (a_{11}-1)F_1 + \sum_{j=2}^n a_{1j} F_j, \sum_{j=1}^n a_{1j}F_j \right).
\]
Consider the tuple $(p''', F, a''')$, where $p'''$ is obtained from $p$ by replacing the first entry with $p_1 + x_0$, and $a'''$ is obtained from $a$ by replacing the first row with $(a_{1j}-c_j)_{j=1}^n$. It follows from the above discussion that $\Cay(p''',F,a''') \subset \Cay(A'')$. The polytope $\Cay(p''',F,a''')$ is parallel to the facets $\Cay(p,F,a)$ and $\Cay(p',F,a')$ of $\Cay(A'')$.
We now define
\begin{equation} \label{eq:sharpflat}
\begin{aligned}
A^\sharp &= (p^\sharp,F,a^\sharp) \in \mathcal C \\
A^\flat &= (p^\flat,F,a^\flat) \in \mathcal C
\end{aligned}
\end{equation}
where
\begin{itemize}
\item $p^\sharp$ is the $(m+1)$-tuple obtained by inserting $p_1 + x_0$ directly before the $1$st entry of $p$.
\item $a^\sharp$ is the $(m+1) \times n$ matrix obtained by inserting the row $(a_{1j}-c_j)_{j=1}^n$ directly above the $1$st row of $a$.
\item $p^\flat$ is the $(m+1)$-tuple obtained by inserting $p_1 + x_0$ directly before the $1$st entry of $p'$.
\item $a^\flat$ is the $(m+1) \times n$ matrix obtained by inserting the row $(a_{1j}-c_j)_{j=1}^n$ directly above the $1$st row of $a'$.
\end{itemize}
We have that
\begin{align*}
\Cay(A^\sharp) &= \conv( \Cay(p''',F,a'''), \Cay(p,F,a)) \\
\Cay(A^\flat) &= \conv( \Cay(p''',F,a'''), \Cay(p',F,a')).
\end{align*}
Now, let $\mathfrak G$ be the set of all tuples $G = (G_1,\dots,G_n)$ such that $G \le F$, $\sum_{j=1}^n G_j$ is a facet of $\sum_{j=1}^n F_j$, and $G \le F'$ for some $F' \in \mathfrak F$ with $F' \neq F$. Let
\begin{equation} \label{eq:AG}
A^G = (p^G, G, a^G) \in \mathcal C
\end{equation}
where
\begin{itemize}
\item $p^G$ is the $(m+2)$-tuple obtained by inserting $p_1+x_0$ directly before the 1st entry of $p''$.
\item $a^G$ is the $(m+2)\times n$ matrix obtained by inserting the row $(a_{1j}-c_j)_{j=1}^n$ directly above the $1$st row of $a''$.
\end{itemize}
Then the elements $A^\sharp$, $A^\flat$, and $A^G$ over all $G \in \mathfrak G$ are the maximal elements of a $\mathcal C$-subdivision of $\Cay(A'')$. Thus, the elements $A'$, $A^\sharp$, $A^\flat$, and $A^G$ over all $G \in \mathfrak G$ are the maximal elements of a $\mathcal C$-subdivision of $\Cay(A)$. We define $\kappa_{T,x}(A)$ to be this subdivision.
We prove the following two properties of $\kappa_{T,x}$.
\begin{prop} \label{prop:kappaTx}
Let $A \in \mathcal K_{T,x}$ and $B \in \Max \kappa_{T,x}(A)$. We have the following:
\begin{itemize}
\item If $x$ is a box point of $B$, then $L(A) = L(B)$.
\item If $x$ is not a box point of $B$, then $\ind(L(B)) < \ind(L(A))$.
\end{itemize}
\end{prop}
\begin{proof}
If we are in Case 1, then $x$ is a box point of $B$ and $L(A) = L(B)$ by Proposition~\ref{prop:LgammaT}. Assume we are in Case 2. If $B = A'$, then $x$ is a box point of $B$ and $L(A) = L(B)$ by Proposition~\ref{prop:LgammaT}. So assume $B = A^\sharp$, $A^\flat$, or $A^G$ for some $G \in \mathfrak G$. Thus $x$ is not a box point of $B$.
Let $F$, $\mathfrak F$ and $x_0$ be as above. For each $E \in \mathfrak F$, let $h_E$ be the lattice height of $\sum S_j$ with respect to its facet $\sum E_j$. Let $h_E'$ be the lattice distance between $x_0$ and $\sum E_j$. Hence $0 < h_E' < h_E$ for all $E \in \mathfrak F$.
If $B = A^\sharp$, then we have
\[
\ind(L(B)) = h_F' \ind(L(F)) < h_F \ind(L(F)) = \ind(L(A)).
\]
If $B = A^\flat$, then we have
\[
\ind(L(B)) = (h_F-h_F') \ind(L(F)) < h_F \ind(L(F)) = \ind(L(A)).
\]
Finally, suppose $B = A^G$ for some $G \in \mathfrak G$. Let $E \in \mathfrak F$ such that $G \le E$ and $E \neq F$. Then
\begin{multline*}
\ind(L(B)) = h_E' \ind(L(p'',G,a'')) = h_E' \ind(L(E)) \\ < h_E \ind(L(E)) = \ind(L(A)).
\end{multline*}
In all cases $\ind(L(B)) < \ind(L(A))$, completing the proof.
\end{proof}
\begin{prop} \label{prop:facekappaTx}
Let $A \in \mathcal K_{T,x}$ and suppose $B \in \mathcal C$ is such that $B \le_{\mathcal C} A$. Then one of the following holds:
\begin{itemize}
\item $\kappa_{T,x}(A|B) = \kappa_{T',x}(B)$ for some $T'$
\item $\kappa_{T,x}(A|B) = \gamma_{T'}(B)$ for some $T'$
\item $\kappa_{T,x}(A|B) = \triv_{\mathcal C}(B)$.
\end{itemize}
\end{prop}
\begin{proof}
Let the standard form of $A$ be $(p,S,a)$. Let $B = (p_I,S',a_{I \times \bullet})$ with $S' \le S$ and $I$ a nonempty subset of $[\abs{p}]$. Assume that $T = S_j$, and let $i$ be the smallest number such that $a_{ij} = \max_{i'} a_{i'j}$. If $i \in I$ and $S_j'$ contains the first vertex of $T$ and another vertex, then
\[
\kappa_{T,x}(A|B) = \begin{dcases*}
\kappa_{S_j',x}(B) & if $x$ is a box point of $S'$ \\
\gamma_{S_j'}(B) & if $x$ is not a box point of $S'$
\end{dcases*}
\]
Otherwise, we have $\kappa_{T,x}(A|B) = \triv_{\mathcal C}(B)$, as desired.
\end{proof}
\section{Main proof} \label{sec:main}
Let $d$ be a positive integer and let $c$ be any integer greater than or equal to $d!+d$. In this section we prove the following.
\begin{thm} \label{thm:main2}
For any $d$-dimensional integral polytope $P$ in $\mathbb R^d$, there exists a positive integer $N$ such that for all nonnegative integers $r$ and $s$, $(rc^N + s(d!)^N)P$ has a unimodular triangulation.
\end{thm}
Taking $c$ to be relatively prime to $d!$, this theorem will imply Theorem~\ref{thm:main}, as desired.
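The numerical fact behind this reduction is classical: since $c^N$ and $(d!)^N$ are coprime, Sylvester's bound on the Frobenius number of two coprime integers gives
\[
n \ge \bigl(c^N - 1\bigr)\bigl((d!)^N - 1\bigr) \implies n = r c^N + s (d!)^N \quad \text{for some integers } r, s \ge 0,
\]
so Theorem~\ref{thm:main2} applies to every sufficiently large dilation of $P$.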
The overall strategy of the proof is similar to the proof of the KMW theorem in Section~\ref{sec:KMW}. We will fix a box point $x$, and then construct two types of subdivisions: one that preserves lattices that do not have $x$ as a box point, and one that replaces lattices which do have $x$ as a box point with lattices of smaller index. There are two main differences between the current proof and that of the KMW theorem. The first is that we work with intermediate dilations by $c$, rather than just $d!$. To deal with this we will use the subdivisions $\kappa_{T,x}$ defined earlier. The second difference is that we work over $2\mathcal P$. Roughly, this raises the following problem: We might produce an element $(P,Q) \in 2 \mathcal P$ where $P$ and $Q$ are independent and $\ind(P+Q) > \ind(P)\ind(Q)$. Such an element can never have a mixed subdivision into elements of index 1, even after dilation. Thus, we must avoid such elements. We do so by working within a carefully defined $2\mathcal P$-poset $\mathcal D$ that excludes such elements, and by constructing subdivisions that always remain in $\mathcal D$. These restrictions are the reason why this proof is much more complicated than the proof of the KMW theorem or the arguments in \cite{ALT18}.
\subsection{The poset $\mathcal D$} \label{sec:posetD}
\subsubsection{The poset $\mathcal M$}
We first define a set $\mathcal M_0$ as follows. Its elements are all elements
\[
(p,S,a) \times (q,S,b) \times k \in \mathcal C_0 \times \mathcal C_0 \times \mathbb Z,
\]
satisfying the following properties.
\begin{enumerate}[label=(\Roman*)]
\item $\abs{p} = \abs{q}$.
\item $0 \le k \le \abs{S}$.
\item $p_i-q_i$ is constant over all $i$.
\item \label{samedifference} For each $j \le k$, we have $a_{ij} = b_{ij}$ for all $i$.
\item For each $j > k$, $a_{ij} = 1$ for all $i$, and either $b_{ij} = 0$ for all $i$ or $b_{ij} = 1$ for all $i$.
\end{enumerate}
We define an equivalence relation on $\mathcal M_0$ as follows. If $j$ is such that $j \le k$ and either $S_j$ is a point or $a_{ij} = 0$ for all $i$, we set
\begin{multline*}
(p,S,a) \times (q,S,b) \times k \sim \\
((p_i+a_{ij}S_j)_{i \in [\abs{p}]}, S_{\setminus j}, a_{\bullet \times \setminus j}) \times ((q_i+b_{ij}S_j)_{i \in [\abs{q}]}, S_{\setminus j}, b_{\bullet \times \setminus j}) \times (k-1).
\end{multline*}
Also, if $j$ is such that $j > k$ and $S_j$ is a point, then we set
\begin{multline*}
(p,S,a) \times (q,S,b) \times k \sim \\
((p_i+a_{ij}S_j)_{i \in [\abs{p}]}, S_{\setminus j}, a_{\bullet \times \setminus j}) \times ((q_i+b_{ij}S_j)_{i \in [\abs{q}]}, S_{\setminus j}, b_{\bullet \times \setminus j}) \times k.
\end{multline*}
(Note that if $j > k$, then by definition $a_{ij} \neq 0$ for any $i$.) We define $\mathcal M := \mathcal M_0 / \sim$. We define the \emph{standard form} of an element $A \in \mathcal M$ to be the unique representative $(p,S,a) \times (q,S,b) \times k \in \mathcal M_0$ of $A$ such that $(p,S,a)$ is in standard form.
For $A$, $B \in \mathcal M$ we let $B \le_{\mathcal M} A$ if the standard form of $A$ is $(p,S,a) \times (q,S,b) \times k$ and $B$ has a representative of the form
\[
(p_I,S',a_{I \times \bullet}) \times (q_I,S',b_{I \times \bullet}) \times k
\]
where $I$ is a nonempty subset of $[\abs{p}]$ and $S' \le S$. Then $\le_{\mathcal M}$ is a partial order on $\mathcal M$.
Let $\Cay : \mathcal M \to 2\mathcal P$ be the map $\Cay(U \times V \times k) = (\Cay(U),\Cay(V))$. This is a well-defined poset map. Hence $(\mathcal M,\Cay)$ is a $2\mathcal P$-poset. Furthermore, $(\mathcal M,\Sum \circ \Cay)$ is a perfect $\mathcal P$-poset.
\subsubsection{The poset $\mathcal N$}
We similarly define a set $\mathcal N_0$ as follows. Its elements are all elements
\[
(p,S,a) \times (q,S,b) \times k \in \mathcal C_0 \times \mathcal C_0 \times \mathbb Z,
\]
satisfying the following properties.
\begin{enumerate}[label=(\Roman*), resume]
\item $\abs{q} = 1$.
\item $0 \le k \le \abs{S}$.
\item $a_{ij} \ge b_{1j}$ for all $i$, $j$.
\item For each $j > k$, $a_{ij} = 1$ for all $i$, and $b_{1j} = 0$ or 1.
\end{enumerate}
We define an equivalence relation $\sim$ on $\mathcal N_0$ analogously to the previous section and define $\mathcal N = \mathcal N_0 / \sim$. As before, we define the \emph{standard form} of an element $A \in \mathcal N$ to be the unique representative $(p,S,a) \times (q,S,b) \times k \in \mathcal N_0$ of $A$ such that $(p,S,a)$ is in standard form.
For $A$, $B \in \mathcal N$ we let $B \le_{\mathcal N} A$ if the standard form of $A$ is $(p,S,a) \times (q,S,b) \times k$ and $B$ has a representative of the form
\[
(p_I,S',a_{I \times \bullet}) \times (q,S',b) \times k
\]
where $I$ is a nonempty subset of $[\abs{p}]$ and $S' \le S$. Then $\le_{\mathcal N}$ is a partial order on $\mathcal N$.
As before, let $\Cay : \mathcal N \to 2\mathcal P$ be the map $\Cay(U \times V \times k) = (\Cay(U),\Cay(V))$. This is a well-defined poset map. Hence $(\mathcal N,\Cay)$ is a $2\mathcal P$-poset. Furthermore, $(\mathcal N,\Sum \circ \Cay)$ is a perfect $\mathcal P$-poset.
\subsubsection{The poset $\mathcal D$}
The set $\mathcal M \cap \mathcal N$ is a poset ideal of both $\mathcal M$ and $\mathcal N$; specifically, it is the set of elements $(p,S,a) \times (q,S,b) \times k$ in $\mathcal M$ or $\mathcal N$ such that $\abs{p} = 1$ and \ref{samedifference} holds. Moreover, if $A$, $B \in \mathcal M \cap \mathcal N$, then $A \le_{\mathcal M} B$ if and only if $A \le_{\mathcal N} B$. Hence, we can define a poset $\mathcal D$ on the set $\mathcal M \cup \mathcal N$ where $A \le_{\mathcal D} B$ if and only if $A \le_{\mathcal M} B$ or $A \le_{\mathcal N} B$.
As above, we define a map $\Cay : \mathcal D \to 2 \mathcal P$ by $\Cay(U \times V \times k) = (\Cay(U),\Cay(V))$. Then $(\mathcal D,\Cay)$ is a $2\mathcal P$-poset, and $(\mathcal D,\Sum \circ \Cay)$ is a perfect $\mathcal P$-poset.
For $A \in \mathcal D$ with standard form $(p,S,a) \times (q,S,b) \times k$, we define $L(A) := L(p,S,a)$.
We say that $x$ is a box point of $A$ if $x$ is a box point of $S_{[k]}$.
\subsection{$\mu_T$, $\nu_T$, and $\epsilon$ subdivisions}
In this section we define three subdivisions on subposets of $\mathcal D$. These subdivisions play the role of $\gamma_T$ in that they preserve lattices.
\subsubsection{$\mu_T$ subdivisions}
Let $T$ be an ordered integral simplex of dimension at least 1. Let $\mathcal M_T$ be the set of elements of $\mathcal M$ whose standard form $(p,S,a) \times (q,S,b) \times k$ has the property that $T$ is an entry of $S_{[k]}$.
We construct a subdivision $\mu_T : \mathcal M_T \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal M_T$ with standard form $U \times V \times k = (p,S,a) \times (q,S,b) \times k$. Assume $T$ is the $j$-th entry of $S$, where $j \le k$. We have that $U \in \mathcal C_T$, where $\mathcal C_T$ is as in Section~\ref{sec:T}. Also, by \ref{samedifference}, we have that the $j$-th column of $b$ is not all 0, so $V \in \mathcal C_T$.
Following the construction of $\gamma_T(U)$ and $\gamma_T(V)$, let $U'$, $U''$, $V'$, and $V''$ be as in \eqref{eq:A'A''}. Then $U' \times V' \times k$, $U'' \times V'' \times k \in \mathcal M$. We define
\[
\mu_T(A) = \triv_{\mathcal D}(U' \times V' \times k) \cup \triv_{\mathcal D}(U'' \times V'' \times k).
\]
Then $\mu_T(A)$ is a $\mathcal D$-subdivision with 2-support $\Cay(U) \times \Cay(V) = \Cay(A)$. Hence, $\mu_T : \mathcal M_T \to \Subd(\mathcal D)$ is a subdivision over $2 \mathcal P$.
From Proposition~\ref{prop:LgammaT}, we have the following.
\begin{prop} \label{prop:LmuT}
Let $A \in \mathcal M_T$ and $B \in \Max \mu_T(A)$. Then $L(A) = L(B)$.
\end{prop}
From the proof of Proposition~\ref{prop:facegammaT}, we have the following.
\begin{prop} \label{prop:facemuT}
Let $A \in \mathcal M_T$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then either $\mu_T(A|B) = \mu_{T'}(B)$ for some $T'$ or $\mu_T(A|B) = \triv_{\mathcal D}(B)$.
\end{prop}
\subsubsection{$\nu_T$ subdivisions}
Let $T$ be an ordered integral simplex of dimension at least 1. Let $\mathcal N_T$ be the set of elements of $\mathcal N$ whose standard form $(p,S,a) \times (q,S,b) \times k$ has the property that
\begin{enumerate}[label=(\alph*)]
\item $T$ is an entry of $S_{[k]}$, say the $j$-th entry.
\item There is some $i$ such that $a_{ij} > b_{1j}$.
\end{enumerate}
We construct a subdivision $\nu_T : \mathcal N_T \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal N_T$ with standard form $U \times V \times k = (p,S,a) \times (q,S,b) \times k$. Following the construction of $\gamma_T(U)$, let $U'$, $U''$ be as in \eqref{eq:A'A''}. Let $V_F := (q,F,b)$, where $F$ is as in \eqref{eq:A'A''}. Then $U' \times V \times k$, $U'' \times V_F \times k \in \mathcal N$. We define
\[
\nu_T(A) = \triv_{\mathcal D}(U' \times V \times k) \cup \triv_{\mathcal D}(U'' \times V_F \times k).
\]
Then $\nu_T(A)$ is a $\mathcal D$-subdivision with 2-support $\Cay(U) \times \Cay(V) = \Cay(A)$. Hence, $\nu_T : \mathcal N_T \to \Subd(\mathcal D)$ is a subdivision over $2 \mathcal P$.
From Proposition~\ref{prop:LgammaT}, we have the following.
\begin{prop} \label{prop:LnuT}
Let $A \in \mathcal N_T$ and $B \in \Max \nu_T(A)$. Then $L(A) = L(B)$.
\end{prop}
From the proof of Proposition~\ref{prop:facegammaT}, we have the following.
\begin{prop} \label{prop:facenuT}
Let $A \in \mathcal N_T$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then either $\nu_T(A|B) = \nu_{T'}(B)$ for some $T'$ or $\nu_T(A|B) = \triv_{\mathcal D}(B)$.
\end{prop}
\subsubsection{$\epsilon$ subdivisions} \label{sec:epsilon}
Finally, let $\mathcal E$ be the set of elements $A \in \mathcal D$ whose standard form $(p,S,a) \times (q,S,b) \times k$ satisfies the following: $\abs{p} > 1$, and one of the following holds:
\begin{enumerate}[label=(\roman*)]
\item $A \in \mathcal M$, and $k = 0$.
\item $A \in \mathcal N$, and for all $j \le k$ we have $a_{ij} = b_{1j}$ for all $i$.
\end{enumerate}
Note that in either case, all the rows of $a$ are equal to each other and all the rows of $b$ are equal to each other.
We construct a subdivision $\epsilon : \mathcal E \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal E$ with standard form $(p,S,a) \times (q,S,b) \times k$. Note that the entries of $p$ are affinely independent, because by definition of $\mathcal C_0$ the polytopes $p_i + \sum_{j=1}^n a_{ij}S_j$ are in Cayley position and $\sum_{j=1}^n a_{ij}S_j$ is constant over all $i$. Let $T$ be the ordered simplex with vertices given by the entries of $p$, in that order.
Given a tuple $w$ and an object $z$, let $w \cup z$ denote the tuple obtained by appending $z$ to the end of $w$. Let $a_1$ denote the first row of $a$ and define $b_1$ similarly. If $A \in \mathcal M$, we define
\[
A^T := ( (0), S \cup T, a_1 \cup 1 ) \times ((q_1-p_1), S \cup T, b_1 \cup 1) \times k \in \mathcal D.
\]
If $A \in \mathcal N$, we define
\[
A^T := ( (0), S \cup T, a_1 \cup 1 ) \times (q, S \cup T, b \cup 0) \times k \in \mathcal D.
\]
Either way, we have $\Cay(A^T) = \Cay(A)$. We define $\epsilon(A) = \triv_{\mathcal D}(A^T)$. We have that $\epsilon : \mathcal E \to \Subd(\mathcal D)$ is a subdivision over $2 \mathcal P$.
We have the following two properties of $\epsilon$.
\begin{prop} \label{prop:Lepsilon}
Let $A \in \mathcal E$ and $B \in \Max \epsilon(A)$. Then $L(A) = L(B)$.
\end{prop}
\begin{proof}
This follows from the observation that $T$ is a translation of $S_0(p,S,a)$.
\end{proof}
From the proof of Proposition~\ref{prop:facegammaT}, we have the following.
\begin{prop} \label{prop:faceepsilon}
Let $A \in \mathcal E$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then either $\epsilon(A|B) = \epsilon(B)$ or $\epsilon(A|B) = \triv_{\mathcal D}(B)$.
\end{prop}
\begin{proof}
Let the standard form of $A$ be $(p,S,a) \times (q,S,b) \times k$ and let $B = U \times V \times k$, where $U = (p_I,S',a_{I \times \bullet})$ for some nonempty $I \subseteq [\abs{p}]$ and $S' \le S$. If $\abs{I} > 1$, then $B \in \mathcal E$ and $\epsilon(A|B) = \epsilon(B)$. Otherwise, $\epsilon(A|B) = \triv_{\mathcal D}(B)$.
\end{proof}
\subsection{$\tau_x$, $\sigma_x$, and $\rho_x$ subdivisions}
The final subdivisions we will construct are analogues of the $\stell_x$ and $\kappa_{T,x}$ subdivisions from earlier. These will allow us to lower indices of lattices. The order they are presented here is ``backwards'', in the sense that in practice, one would apply $\rho_x$ subdivisions first, then $\sigma_x$ subdivisions, then $\tau_x$.
Throughout this section, we fix a $d$-dimensional lattice $L \subseteq \mathbb Z^d$ and a nonzero element $x \in \mathbb Z^d / L$.
\subsubsection{$\tau_x$ subdivisions}
Let $\mathcal T_x$ be the set of elements of $\mathcal D$ whose standard form $(p,S,a) \times (q,S,b) \times k$ satisfies the following:
\begin{enumerate}[label=(\alph*)]
\item $x$ is a box point of $S_{[k]}$.
\item $\abs{p} = 1$.
\item For all $j \le k$, $a_{1j} = b_{1j} = d!$.
\end{enumerate}
We construct a subdivision $\tau_x : \mathcal T_x \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal T_x$ with standard form $(p,S,a) \times (q,S,b) \times k$. Let $n = \abs{S}$. Let $c$ be the $n$-tuple of integers where
\[
c_j = \begin{dcases*}
c(S_{[k]},x)_j & for $1 \le j \le k$ \\
0 & for $k+1 \le j \le n$
\end{dcases*}
\]
Now, let $x_0$ be the focus of $(S,x)$, and let $\mathfrak F = \mathfrak F(S,x)$.
Let $N = d!/\max_j c_j$. For all $F \in \mathfrak F$ and $r = 1$, \dots, $N$, define
\begin{align*}
U_{F,r} &:= \left( \left(p_1+rx_0,p_1+(r-1)x_0 \right),F,\right. \\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left.
\begin{pmatrix} a_{11}-rc_1 & \cdots & a_{1n}-rc_n \\ a_{11}-(r-1)c_1 & \cdots & a_{1n}-(r-1)c_n \end{pmatrix} \right) \\
V_{F,r} &:= \left( \left(q_1+rx_0,q_1+(r-1)x_0 \right),F, \right. \\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \begin{pmatrix} b_{11}-rc_1 & \cdots & b_{1n}-rc_n \\ b_{11}-(r-1)c_1 & \cdots & b_{1n}-(r-1)c_n \end{pmatrix} \right).
\end{align*}
We have that $U_{F,r} \times V_{F,r} \times k \in \mathcal M$.
From Section~\ref{sec:structboxpoints}, the set of all $U_{F,r} \times V_{F,r} \times k$ over $F \in \mathfrak F$ and $r \in [N]$ is the set of maximal elements of a $\mathcal D$-subdivision with 2-support $\Cay(p,S,a) \times \Cay(q,S,b) = \Cay(A)$. We define $\tau_x(A)$ to be this $\mathcal D$-subdivision.
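To see that this complex covers all of $\Cay(A)$, note that $N$ is a positive integer provided $\max_j c_j$ divides $d!$ (which is implicit in the definition of $N$), and for any index $j^\ast$ attaining $\max_j c_j$ we have $j^\ast \le k$, so $a_{1j^\ast} = d!$ by condition (c) and
\[
a_{1 j^\ast} - N c_{j^\ast} = d! - \frac{d!}{c_{j^\ast}} \cdot c_{j^\ast} = 0;
\]
hence the subdivision criterion from Section~\ref{sec:structboxpoints} ($a_j = N c_j$ for some $j \in \supp c$) is satisfied.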
Hence, we have constructed a subdivision $\tau_x :\mathcal T_x \to \Subd(\mathcal D)$ over $2 \mathcal P$. From Section~\ref{sec:structboxpoints}, we have the following:
\begin{prop} \label{prop:Ltaux}
Let $A \in \mathcal T_x$ and $B \in \Max \tau_x(A)$. Then $\ind(L(B)) < \ind(L(A))$.
\end{prop}
We also have the following.
\begin{prop} \label{prop:facetaux}
Let $A \in \mathcal T_x$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then either $\tau_x(A|B) = \tau_x(B)$ or $\tau_x(A|B) = \triv_{\mathcal D}(B)$.
\end{prop}
\begin{proof}
If $x$ is a box point of $B$, then $\tau_x(A|B) = \tau_x(B)$. Otherwise, $\tau_x(A|B) = \triv_{\mathcal D}(B)$.
\end{proof}
\subsubsection{$\sigma_x$ subdivisions}
Let $\mathcal S_x$ be the set of elements of $\mathcal D$ whose standard form $(p,S,a) \times (q,S,b) \times k$ satisfies the following:
\begin{enumerate}[label=(\alph*)]
\item $x$ is a box point of $S_{[k]}$.
\item $(p,S,a) \times (q,S,b) \times k \in \mathcal N$.
\item For all $j \le k$, $b_{1j} = 0$ or $d!$.
\item For all $i$, either $a_{ij} = b_{1j}$ for all $j \le k$ or $a_{ij} = b_{1j} + c(S_{[k]},x)_j$ for all $j \le k$.
\item There is some $i$ such that $a_{ij} = b_{1j} + c(S_{[k]},x)_j$ for all $j \le k$.
\end{enumerate}
We construct a subdivision $\sigma_x : \mathcal S_x \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal S_x$ with standard form $U \times V \times k = (p,S,a) \times (q,S,b) \times k$. Let $m = \abs{p}$ and $n = \abs{S}$. Let $c$ be the $n$-tuple of integers where $c_j = c(S_{[k]},x)_j$ for $j \le k$ and $c_j = 0$ for $j > k$, as before.
Let $x_0$ be the focus of $(S,x)$, and let $\mathfrak F = \mathfrak F(S,x)$.
Let $i$ be the smallest number such that $a_{ij} = b_{1j} + c_j$ for all $j \le k$. For all $F \in \mathfrak F$, we define
\[
A^{\sharp,F} := U^{\sharp,F} \times V_F \times k \in \mathcal N
\]
where $V_F = (q,F,b)$, and $U^{\sharp,F} = (p^{\sharp},F,a^{\sharp})$ is defined analogously to $A^\sharp$ in \eqref{eq:sharpflat}; in other words,
\begin{itemize}
\item $p^{\sharp}$ is the $(m+1)$-tuple obtained by inserting $p_i + x_0$ directly before the $i$th entry of $p$.
\item $a^{\sharp}$ is the $(m+1) \times n$ matrix obtained by inserting the row $(a_{ij}-c_j)_{j=1}^n$ directly above the $i$th row of $a$.
\end{itemize}
In addition, we define
\[
A^\star := (p''', S, a''') \times V \times k \in \mathcal N
\]
where $p'''$ and $a'''$ are defined right before \eqref{eq:sharpflat}; that is,
\begin{itemize}
\item $p'''$ is the $m$-tuple obtained by replacing the $i$th entry of $p$ with $p_i + x_0$.
\item $a'''$ is the $m \times n$ matrix obtained by replacing the $i$th row with $(a_{ij}-c_j)_{j=1}^n$.
\end{itemize}
From the discussion in Section~\ref{sec:structboxpoints}, $\{A^\star\} \cup \{A^{\sharp,F} : F \in \mathfrak F\}$ is the set of maximal elements of a $\mathcal D$-subdivision with 2-support $\Cay(U) \times \Cay(V) = \Cay(A)$. We define $\sigma_x(A)$ to be this $\mathcal D$-subdivision.
Hence, we have constructed a subdivision $\sigma_x :\mathcal S_x \to \Subd(\mathcal D)$ over $2 \mathcal P$. Using arguments similar to those in the previous section, we have the following:
\begin{prop} \label{prop:Lsigmax}
Let $A \in \mathcal S_x$ and $B \in \Max \sigma_x(A)$. Then $\ind(L(B)) < \ind(L(A))$.
\end{prop}
\begin{prop} \label{prop:facesigmax}
Let $A \in \mathcal S_x$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then either $\sigma_x(A|B) = \sigma_x(B)$ or $\sigma_x(A|B) = \triv_{\mathcal D}(B)$.
\end{prop}
\subsubsection{$\rho_x$ subdivisions}
Let $\mathcal R_x$ be the set of elements of $\mathcal D$ whose standard form $(p,S,a) \times (q,S,b) \times k$ satisfies the following.
\begin{enumerate}[label=(\alph*)]
\item $x$ is a box point of $S$.
\item $(p,S,a) \times (q,S,b) \times k \in \mathcal N$.
\item For all $j \le k$, either $b_{1j} = 0$ or $b_{1j} = d!$.
\item For all $i$ and all $j \le k$, we have $a_{ij} \ge b_{1j} + c(S,x)_j$.
\item There exists some $i$ and some $j \le k$ satisfying $a_{ij} > b_{1j} + c(S,x)_j$.
\end{enumerate}
We construct a subdivision $\rho_x : \mathcal R_x \to \Subd(\mathcal D)$ as follows. Let $A \in \mathcal R_x$ with standard form $U \times V \times k = (p,S,a) \times (q,S,b) \times k$. Let $j \le k$ be the smallest number such that there exists $i$ satisfying $a_{ij} > b_{1j} + c(S,x)_j$. Let $T = S_j$. Let $f$ be the facet of $T$ opposite the first vertex of $T$, and let $F$ be the tuple obtained from $S$ by replacing $T$ with $f$. We consider two cases, in parallel to Section~\ref{sec:kappaTx}.
\textbf{Case 1:} $x$ is a box point of $F$.
In this case, we define $\rho_x(A) = \nu_T(A)$.
\textbf{Case 2:} $x$ is not a box point of $F$.
Following the construction of $\kappa_{T,x}$, define $U'$ as in \eqref{eq:A'A''}, and define $U^\sharp$, $U^\flat$ as in \eqref{eq:sharpflat}. In addition, define $\mathfrak G$ as in Section~\ref{sec:kappaTx}, and for each $G \in \mathfrak G$ define $U^G$ as in \eqref{eq:AG}. For any $R \le S$, let $V_R := (q,R,b)$. Then the set of elements
\[
\{ U' \times V \times k, U^\sharp \times V_F \times k, U^\flat \times V_F \times k \} \cup \{ U^G \times V_G \times k : G \in \mathfrak G \} \subset \mathcal N
\]
is the set of maximal elements of a $\mathcal D$-subdivision with 2-support $\Cay(A)$. We define $\rho_x(A)$ to be this $\mathcal D$-subdivision. Hence we have defined a subdivision $\rho_x : \mathcal R_x \to \Subd(\mathcal D)$ over $2 \mathcal P$.
From Proposition~\ref{prop:kappaTx}, we have the following.
\begin{prop} \label{prop:Lrhox}
Let $A \in \mathcal R_x$ and $B \in \Max \rho_x(A)$. We have the following.
\begin{itemize}
\item If $x$ is a box point of $B$, then $L(A) = L(B)$.
\item If $x$ is not a box point of $B$, then $\ind(L(B)) < \ind(L(A))$.
\end{itemize}
\end{prop}
Finally, we have the following, with an analogous proof to Proposition~\ref{prop:facekappaTx}.
\begin{prop} \label{prop:facerhox}
Let $A \in \mathcal R_x$ and suppose $B \in \mathcal D$ is such that $B \le_{\mathcal D} A$. Then one of the following holds:
\begin{itemize}
\item $\rho_x(A|B) = \rho_x(B)$
\item $\rho_x(A|B) = \nu_T(B)$ for some $T$
\item $\rho_x(A|B) = \triv_{\mathcal D}(B)$.
\end{itemize}
\end{prop}
\subsubsection{The set $\mathcal D^\bullet_x$}
Let $\mathcal D^\bullet_x$ be the set of elements of $\mathcal D$ whose standard form $(p,S,a) \times (q,S,b) \times k$ satisfies the following:
\begin{enumerate}[label=(\alph*)]
\item $x$ is a box point of $S_{[k]}$.
\item $(p,S,a) \times (q,S,b) \times k \in \mathcal N$.
\item For all $j \le k$, $b_{1j} = 0$ or $d!$.
\item For all $i$, either $a_{ij} = b_{1j}$ for all $j \le k$ or $a_{ij} \ge b_{1j} + c(S_{[k]},x)_j$ for all $j \le k$.
\end{enumerate}
Observe that the sets $\mathcal R_x$, $\mathcal S_x$, $\mathcal T_x$, and $\mathcal E \cap \mathcal D^\bullet_x$ are pairwise disjoint and partition $\mathcal D^\bullet_x$.
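In symbols, this observation reads
\[
\mathcal D^\bullet_x \;=\; \mathcal R_x \,\sqcup\, \mathcal S_x \,\sqcup\, \mathcal T_x \,\sqcup\, \bigl(\mathcal E \cap \mathcal D^\bullet_x\bigr).
\]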
\subsection{Proof of Theorem~\ref{thm:main2}}
With our constructions completed, we are now ready to prove Theorem~\ref{thm:main2}. The proof mirrors our proof of the KMW theorem from Section~\ref{sec:KMW}.
Let $L \subseteq \mathbb Z^d$ be a $d$-dimensional lattice, and fix some nonzero element $x \in \mathbb Z^d / L$. Define
\[
\mathcal D_x := \mathcal D_x^\circ \cup \mathcal D_x^\bullet,
\]
where $\mathcal D_x^\circ$ is the set of all elements of $\mathcal D$ which do not have $x$ as a box point. We let $\mathcal D_x$ inherit a poset structure and poset map $\Cay : \mathcal D_x \to 2\mathcal P$ from $\mathcal D$. We can check that $(\mathcal D_x,\Cay)$ is a perfect $\mathcal P$-poset.
Let $\mu_T^\circ$ and $\nu_T^\circ$ be the restrictions of $\mu_T$ and $\nu_T$, respectively, to
\begin{align*}
\mathcal M_T^\circ &:= \mathcal M_T \cap \mathcal D_x^\circ \\
\mathcal N_T^\circ &:= \mathcal N_T \cap \mathcal D_x^\circ
\end{align*}
Let $\epsilon_x$ be the restriction of $\epsilon$ to $\mathcal E_x := \mathcal E \cap \mathcal D_x$.
We can check that for each of the subdivisions $\mu_T^\circ$, $\nu_T^\circ$, $\epsilon_x$, $\rho_x$, $\sigma_x$, and $\tau_x$, the output is always a $\mathcal D_x$-subdivision. Thus we have well defined subdivisions
\begin{align*}
\mu_T^\circ &: \mathcal M_T^\circ \to \Subd(\mathcal D_x) \\
\nu_T^\circ &: \mathcal N_T^\circ \to \Subd(\mathcal D_x) \\
\epsilon_x &: \mathcal E_x \to \Subd(\mathcal D_x) \\
\rho_x &: \mathcal R_x \to \Subd(\mathcal D_x) \\
\sigma_x &: \mathcal S_x \to \Subd(\mathcal D_x) \\
\tau_x &: \mathcal T_x \to \Subd(\mathcal D_x)
\end{align*}
over $2\mathcal P$. We now have the following.
\begin{prop} \label{prop:S}
The family of subdivisions
\[
\mathscr S_x := \{\mu_T^\circ\}_T \cup \{\nu_T^\circ\}_T \cup \{\epsilon_x\} \cup \{\rho_x\} \cup \{\sigma_x\} \cup \{\tau_x\}
\]
is locally confluent, terminating, and facially compatible (as $\mathcal D_x$-subdivisions).
\end{prop}
\begin{proof}
We first check local confluence. Note that for any simplices $T_1$, $T_2$, the sets $\mathcal M_{T_1}^\circ$, $\mathcal N_{T_2}^\circ$, $\mathcal E_x$, $\mathcal R_x$, $\mathcal S_x$, and $\mathcal T_x$ are pairwise disjoint. Thus, if $A \in \mathcal D_x$ and there are two distinct moves from $\{A\}$, then these moves must be either $\mu_{T_1}$, $\mu_{T_2}$ for some $T_1$, $T_2$ or $\nu_{T_1}$, $\nu_{T_2}$ for some $T_1$, $T_2$. The same argument from Proposition~\ref{prop:confgammaT} shows that the results of these moves are joinable.
We next show the terminating property. Let $A \in \mathcal D_x$ with standard form $(p,S,a) \times (q,S,b) \times k$, and suppose we have a subdivision $f(A)$ where $f \in \mathscr S_x$. Let $B \in \Max f(A)$ with standard form $(p',S',a') \times (q',S',b') \times k'$. Then one of the following holds:
\begin{enumerate}
\item $\sum_{j \le k'} \dim S'_j < \sum_{j \le k} \dim S_j$.
\item The above inequality holds with equality, and
\[
\sum_{\substack{i \\ j \le k'}} a'_{ij} < \sum_{\substack{i \\ j \le k}} a_{ij}.
\]
\item The above two inequalities hold with equality, and $\abs{p'} < \abs{p}$. (This can only occur if $f = \epsilon_x$.)
\end{enumerate}
It follows that $\mathscr S_x$ is terminating.
Finally, we note that Propositions~\ref{prop:facemuT}, \ref{prop:facenuT}, \ref{prop:faceepsilon}, \ref{prop:facetaux}, \ref{prop:facesigmax}, and \ref{prop:facerhox} imply the first criterion of facial compatibility. For the second criterion, let $A \in \mathcal D_x$ with standard form $(p,S,a) \times (q,S,b) \times k$. We have that $A$ is terminal if and only if $A$ is not in any of the sets $\mathcal M_T^\circ$, $\mathcal N_T^\circ$ (for any $T$), $\mathcal E_x$, $\mathcal R_x$, $\mathcal S_x$, $\mathcal T_x$. This occurs if and only if $\abs{p} = 1$ and $k = 0$. Clearly if this property holds for $A$, then it holds for any $B \le_{\mathcal D} A$. Hence $\mathscr S_x$ is facially compatible.
\end{proof}
From this we can conclude the following.
\begin{thm} \label{thm:Deltax}
There is a canonical subdivision $\Delta_x : \mathcal D_x \to \Subd(\mathcal D_x)$ over $2\mathcal P$ such that for all $A \in \mathcal D_x$ and $B \in \Max(\Delta_x(A))$, where $(p,S,a) \times (q,S,b) \times k$ is the standard form of $B$, we have the following:
\begin{enumerate}[label=(\alph*)]
\item $\abs{p} = 1$ and $k = 0$.
\item If $A \in \mathcal D_x^\circ$, then $L(B) = L(A)$.
\item If $A \in \mathcal D_x^\bullet$, then $L(B) < L(A)$.
\end{enumerate}
\end{thm}
\begin{proof}
Construct $\Delta_x$ from $\mathscr S_x$ and Theorem~\ref{thm:confluence}. Then, (a) follows from the proof of Proposition~\ref{prop:S}. (b) follows from Propositions~\ref{prop:LmuT}, \ref{prop:LnuT}, and \ref{prop:Lepsilon}. For (c), note that if $B$ satisfies (a), then $x$ is not a box point of $B$. Thus, (c) follows from Propositions~\ref{prop:LmuT}, \ref{prop:LnuT}, \ref{prop:Lepsilon}, \ref{prop:Ltaux}, \ref{prop:Lsigmax}, and \ref{prop:Lrhox}.
\end{proof}
We are now ready for the final proof. As in Theorem~\ref{thm:main2}, let $d$ be the dimension of the space we are working in and let $c$ be an integer $ \ge d!+d$.
\begin{proof}[Proof of Theorem~\ref{thm:main2}]
For any $\mathcal D$-complex $X$ with $\dim \abs{X} = d$, we consider the collection of lattices
\[
\mathscr L(X) := \{ L(A) : A \in \Max X \}.
\]
Start with a $\mathcal D$-subdivision $X$ with $\dim \abs{X} = d$ and 2-support $(P,Q)$. Assume that every element of $X$ is terminal with respect to $\mathscr S_x$. Consider a map $\theta : X \to \mathcal D$ defined as follows: If $A \in X$ has standard form $(p,S,a) \times (q,S,b) \times 0$, then
\[
\theta(A) := (cp, S, ca) \times (d!q, S, d!b) \times \abs{S}.
\]
Then $\theta(X)$ is a $\mathcal D$-subdivision with 2-support $(cP, d!Q)$.
Choose some lattice $L \in \mathscr L(X)$ and some nonzero $x \in \mathbb Z^d / L$. Note that $\theta(X)$ is a $\mathcal D_x$-complex; indeed, for all $j \le \abs{S}$, we have $b_{1j} = 0$ or $d!$ and $a_{1j} = c \ge d! + d \ge b_{1j} + d$. Let $X' := \Delta_x^\ast(\theta(X))$, where $\Delta_x$ is from Theorem~\ref{thm:Deltax} and the $\ast$ construction is from Proposition~\ref{prop:canonicalrefinement}. Then $X'$ is a $\mathcal D$-subdivision with 2-support $(cP, d!Q)$ where every element is terminal with respect to $\mathscr S_x$. Comparing $\mathscr L(X)$ and $\mathscr L(X')$, by Theorem~\ref{thm:Deltax}, we have that $\mathscr L(X')$ is obtained by replacing at least one lattice of $\mathscr L(X)$ with lattices of lower index, while keeping the other lattices the same.
Thus, if we repeat the above process on $X'$ instead of $X$, and so on, we will eventually obtain a $\mathcal D$-subdivision $Y$ with 2-support $(c^N P, (d!)^N Q)$ for some $N$, such that every element of $Y$ is terminal and $\mathscr L(Y) = \{ \mathbb Z^d \}$.
Let $r$, $s$ be nonnegative integers. Consider the map $\omega_{r,s} : Y \to \mathcal C$ given by
\[
\omega_{r,s}((p,S,a) \times (q,S,b) \times 0) = (rp+sq,S,ra+sb).
\]
Then $\omega_{r,s}(Y)$ is a $\mathcal C$-subdivision with support $rc^N P + s(d!)^N Q$. Let
\[
Z := \Gamma^\ast(\omega_{r,s}(Y)),
\]
where $\Gamma$ is from Theorem~\ref{thm:Gamma}. Then $Z$ is a $\mathcal C$-subdivision with support $rc^N P + s(d!)^N Q$ such that for all $(p,S,a) \in Z$ in standard form, we have $\abs{S} = 0$ and $L(p,S,a) = \mathbb Z^d$. Thus $\Cay(Z)$ is a unimodular triangulation of $rc^N P + s(d!)^N Q$.
Now, let $P$ be a $d$-dimensional integral polytope in $\mathbb R^d$, and let $X_0$ be any triangulation of $P$ into integral simplices. Let
\[
X := \{ ((0),(T),(1)) \times ((0),(T),(1)) \times 0 \in \mathcal D : T \in X_0 \}
\]
Then $X$ is a $\mathcal D$-subdivision with 2-support $(P,P)$, all of whose elements are terminal. Hence, applying the above argument to $X$ gives the result.
\end{proof}
https://arxiv.org/abs/1603.00677 | Karhunen-Loeve expansions of Levy processes | Karhunen-Loeve expansions (KLE) of stochastic processes are important tools in mathematics, the sciences, economics, and engineering. However, the KLE is primarily useful for those processes for which we can identify the necessary components, i.e., a set of basis functions, and the distribution of an associated set of stochastic coefficients. Our ability to derive these components explicitly is limited to a handful of processes. In this paper we derive all the necessary elements to implement the KLE for a square-integrable Levy process. We show that the eigenfunctions are sine functions, identical to those found in the expansion of a Wiener process. Further, we show that the stochastic coefficients have a jointly infinitely divisible distribution, and we derive the generating triple of the first d coefficients. We also show that, in contrast to the case of the Wiener process, the coefficients are not independent unless the process has no jumps. Despite this, we develop a series representation of the coefficients which allows for simulation of any process with a strictly positive Levy density. We implement our theoretical results by simulating the KLE of a variance gamma process. |
\section{Introduction}
Fourier series are powerful tools in mathematics and many other fields. The Karhunen-Lo{\`e}ve theorem (KLT) allows us to create generalized Fourier series from stochastic processes in an, in some sense, optimal way. Arguably the most famous application of the KLT is to derive the classic sine series expansion of a Wiener process $W$ on $[0,1]$. Specifically,
\begin{align}\label{eq:wklt}
W_t = \sqrt{2}\sum_{k \geq 1}Z_k \frac{\sin\left(\pi\left(k-\frac{1}{2}\right)t\right)}{\pi\left(k - \frac{1}{2}\right)}
\end{align}
where convergence of the series is in $L^2(\Omega,{\mathbb P})$ and uniform in $t \in [0,1]$, and the $\{Z_k\}_{k\geq 1}$ are i.i.d. standard normal random variables. The main result of this paper shows that a square integrable L\'evy process admits a similar representation as a series of sine functions; the key difference is that the stochastic coefficients are neither normal nor independent.\\ \\
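As a quick numerical aside (not part of the original text), one can sanity-check the expansion \eqref{eq:wklt}: summing $\lambda_k e_k(t)^2$ over the first $K$ terms should converge to $\textnormal{Var}(W_t)=t$. The following Python sketch, with an arbitrary choice of $t$, verifies this.

```python
import math

def truncated_var(t, K):
    # Variance of the K-term truncation of the Wiener KLE at time t:
    # sum_k lambda_k * e_k(t)^2, which converges to Var(W_t) = t.
    return sum(
        2 * math.sin((k - 0.5) * math.pi * t) ** 2 / ((k - 0.5) * math.pi) ** 2
        for k in range(1, K + 1)
    )

approx = truncated_var(0.3, 50_000)  # close to 0.3
```

The tail of the series is $O(1/K)$, so $K = 50{,}000$ terms already reproduce the variance to several decimal places.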
The KLT applies much more generally and is thus an important tool in many fields. For example, we see applications of the KLT and Principal Component Analysis, its discrete time counterpart, in physics and engineering \cite{Ghanesh,Phoon}, \cite[Chapter 10]{deep}, in signal and image processing \cite{unser1998wavelets}, \cite[Chapter 1]{sigbook}, in finance and economics \cite{Benko,cont2002dynamics,gun1} and other areas. For interesting recent theses on the KLT from three different points of view see also \cite{shijin} (probability and time series), \cite[Chapter 7]{Luo} (stochastic partial differential equations), and \cite{wang} (statistics).\\ \\
Deriving the Karhunen--Lo\`{e}ve expansion (KLE) of the type \eqref{eq:wklt} for a square integrable stochastic process $X$ on $[a,b]$ requires two steps: first, one must solve a Fredholm integral equation to obtain the basis functions $\{e_k\}_{k\geq1}$ (cf. the sine functions in Equation \ref{eq:wklt}). Second, one must identify the distribution of the stochastic coefficients
\begin{align}\label{eq:stocoef}
Z_k := \int_a^bX_te_k(t){\textnormal d} t,\quad k\in \mathbb{N}.
\end{align}
In general, obtaining both the basis functions and the distribution of the stochastic coefficients is not an easy task, and we have full knowledge in only a few specific cases. Besides the Wiener process, the Brownian Bridge process, the Anderson--Darling process, and spherical fractional Brownian Motion (see \cite{Istas20061578} for the latter) are some examples. For further examples with derivation see \cite[Chapter 1]{shijin}. Non-Gaussian processes pose an additional challenge and the problem of deriving the KLE is usually left to numerical means (see e.g., \cite{Phoon}). \\ \\
In this paper we derive all the elements of the KLE for a square integrable L\'{e}vy process on the interval $[0,T]$. The result is timely since in many of the fields mentioned above, especially in finance, but recently also in the area of image/signal processing (see e.g., \cite{bouya}), L\'{e}vy models are becoming increasingly popular. In Section \ref{sec:klt} we show that the basis functions are sine functions, identical to those in \eqref{eq:wklt}, and that the first $d$ stochastic coefficients are jointly distributed like an infinitely divisible (ID) random vector. We identify the generating triple of this vector from which it follows that the coefficients are independent only when the process has no jumps, i.e., when the process is a scaled Wiener process with drift. Although simulating dependent multivariate random variables from a characteristic function is generally difficult, in Section \ref{sec:shotnoise} we derive a shot-noise (series) representation for
\begin{align}\label{eq:vecstoco}
Z^{(d)} := (Z_1,Z_2,\ldots,Z_d)^{\text{\textbf{T}}},\quad d \in \mathbb{N},
\end{align}
for those processes which admit a strictly positive L\'{e}vy density. This result, in theory, allows us to simulate the truncated KLE for a large class of L\'{e}vy models. We conclude by generating some paths of a $d$-term KLE approximation of a variance gamma process.\\ \\
To begin, we recall the necessary facts from the theory of L\'{e}vy processes and ID random vectors.
\section{Facts from the theory of L\'{e}vy processes}\label{sec:introlev}
The L\'{e}vy-Khintchine theorem states that every $d$-dimensional ID random vector $\xi$ has a Fourier transform of the form
\begin{align*}
{\mathbb E}[e^{\i\langle \mathbf{z},\xi\rangle}] = e^{-\Psi(\mathbf{z})},\quad \mathbf{z} \in {\mathbb R}^d,
\end{align*}
where
\begin{align}\label{eq:char}
\Psi(\mathbf{z}) = \frac{1}{2}\mathbf{z}^{\textbf{T}}Q\mathbf{z} - \i\langle \mathbf{a}, \mathbf{z} \rangle - \int_{{\mathbb R}^d\backslash\{\mathbf{0}\}}\left(e^{\i\langle \mathbf{z}, \mathbf{x} \rangle} - 1 - \i\langle \mathbf{z},\mathbf{x} \rangle h(\mathbf{x})\right)\nu({\textnormal d} \mathbf{x}),
\end{align}
and where $\mathbf{a} \in {\mathbb R}^d$, $Q$ is a positive semi-definite matrix, and $\nu({\textnormal d} \mathbf{x})$ is a measure on ${\mathbb R}^d\backslash\{\mathbf{0}\}$ satisfying
\begin{align}\label{eq:int_cond}
\int_{{\mathbb R}^d\backslash\{\mathbf{0}\}}\min(1,\vert \mathbf{x} \vert^2)\nu({\textnormal d} \mathbf{x} ) < \infty.
\end{align}
The function $h$ is known as the cut-off function; in general, we need such a function to ensure convergence of the integral. An important fact is that up to a choice of $h$, the generating triple $(\mathbf{a},Q,\nu)$ uniquely identifies the distribution of $\xi$. The L\'{e}vy-Khintchine theorem for L\'{e}vy processes gives us an analogously powerful result, specifically, for any $d$-dimensional L\'{e}vy process $X$ we have
\begin{align*}
{\mathbb E}[e^{i\langle \mathbf{z},X_t\rangle}] = e^{-t\Psi(\mathbf{z})},\quad \mathbf{z} \in {\mathbb R}^d,\,t \geq 0,
\end{align*}
where $\Psi$ is as in \eqref{eq:char} and $X$ is uniquely determined, up to identity in distribution, by the triple $(\mathbf{a},Q,\nu)$. Following convention, we will refer to the function $\Psi$ as the characteristic exponent of $\xi$ (resp. $X$) and will write $\Psi_{\xi}$ (resp. $\Psi_{X}$) if there is the potential for ambiguity. In one dimension we will write $(a,\sigma^2,\nu)$ for the generating triple; the measure $\nu$ will always be referred to as the L\'{e}vy measure. When $\nu({\textnormal d} x) = \pi(x){\textnormal d} x$ for some density function $\pi$, we will write $(a,\sigma^2,\pi)$ and refer to $\pi$ as the L\'{e}vy density. If we wish to be specific regarding the cut-off function we will write $(\mathbf{a},Q,\nu)_{h\equiv \cdot}$ or $(a,\sigma^2,\nu)_{h\equiv\cdot}$ for the generating triples. \\ \\
\noindent In this article we will work primarily with one dimensional L\'{e}vy processes having zero mean and finite second moment; by this we mean that ${\mathbb E}[X_t] = 0$ and ${\mathbb E}[X_t^2] < \infty$ for every $t \geq 0$. We will denote the set of all such L\'{e}vy processes by $\mathcal{K}$. One may show that the latter condition implies that $\Psi$ is twice differentiable. Thus, when we work with a process $X \in \mathcal{K}$, we can express the variance of $X_t$ as
\begin{align*}
\textnormal{Var}(X_t) = {\mathbb E}[X_t^{2}] = \Psi''(0)t,
\end{align*}
and the covariance of $X_t$ and $X_s$ as
\begin{align*}
\textnormal{Cov}(X_s,X_t) = {\mathbb E}[X_sX_t] = \Psi''(0)\min(s,t).
\end{align*}
For notational convenience we will set $\alpha := \Psi''(0)$. \\ \\
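As an illustrative aside (not part of the paper), the identity $\Psi''(0) = \sigma^2 + \int x^2 \nu({\textnormal d} x)$ for a zero-mean process can be checked numerically. The Python sketch below uses a hypothetical generating triple with $\sigma^2 = 0.5$ and a two-atom jump measure, and approximates $\Psi''(0)$ by a central finite difference.

```python
import cmath

# Hypothetical example triple: sigma^2 = 0.5, nu = 2*delta_{0.5} + 3*delta_{-1}.
sigma2 = 0.5
jumps = [(0.5, 2.0), (-1.0, 3.0)]  # (jump size x, mass nu({x}))

def Psi(z):
    # Characteristic exponent with cut-off h == 1 (zero-mean normalization).
    return 0.5 * sigma2 * z * z - sum(
        w * (cmath.exp(1j * z * x) - 1 - 1j * z * x) for x, w in jumps
    )

h = 1e-4
alpha = ((Psi(h) - 2 * Psi(0.0) + Psi(-h)) / h ** 2).real   # numerical Psi''(0)
second_moment = sigma2 + sum(w * x * x for x, w in jumps)   # sigma^2 + int x^2 nu(dx)
```

Here `alpha` and `second_moment` agree to within the finite-difference error, consistent with $\alpha = \Psi''(0)$.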
\noindent The existence of moments for both L\'{e}vy processes and ID random vectors can be equivalently expressed in terms of the L\'{e}vy measure. An ID random vector $\xi$ or L\'{e}vy process $X$ with associated L\'{e}vy measure $\nu$ has a finite second moment (meaning the component-wise moments) if, and only if,
\begin{align}\label{eq:conditionA}
\int_{\vert \mathbf{x}\vert > 1} \vert \mathbf{x}\vert^2\nu({\textnormal d} \mathbf{x}) < \infty \tag{Condition A}.
\end{align}
We will denote the class of ID random vectors with zero first moment and finite second moment by $\mathcal{C}$. The subset of $\mathcal{C}$ which also satisfies
\begin{align}\label{eq:conditionB}
\int_{\vert \mathbf{x}\vert \leq 1} \vert \mathbf{x}\vert\nu({\textnormal d} \mathbf{x}) < \infty. \tag{Condition B}
\end{align}
will be denoted $\mathcal{CB}$ and $\mathcal{KB}$ will denote the analogous subset of $\mathcal{K}$. We remark that any $\xi \in \mathcal{C}$ (resp. $X \in \mathcal{K}$) necessarily has a representation of the form $(\mathbf{0},Q,\nu)_{h\equiv 1}$ (resp. $(0,\sigma^2,\nu)_{h\equiv 1}$). Additionally, any $d$-dimensional $\xi \in \mathcal{CB}$ necessarily has representation $(\mathbf{a},Q,\nu)_{h\equiv 0}$ where $\mathbf{a}$ has entries
\begin{align*}
-\int_{\mathbb{R}^d\backslash\{\mathbf{0}\}}P_k(\mathbf{x})\nu({\textnormal d} \mathbf{x}),\quad k \in \{1,2,\ldots,d\}
\end{align*}
and $P_k$ is the projection onto the $k$-th component. Analogously, if $X \in \mathcal{KB}$ then we have representation $(a,\sigma^2,\nu)_{h\equiv 0}$ where $a = -\int_{\mathbb{R}\backslash\{0\}}x\nu({\textnormal d} x)$.
\section{The Karhunen--Lo\`{e}ve theorem}\label{sec:klt}
Given a real valued continuous time stochastic process $X$ defined on an interval $[a,b]$ and an orthonormal basis $\{\phi_k\}_{k \geq 1}$ for $L^2([a,b])$ we might try to express $X$ as a generalized Fourier series
\begin{align}\label{eq:eigen}
X_t = \sum_{k=1}^\infty Y_k\phi_k(t),\quad\text{ where }\quad Y_k := \int_a^bX_t\phi_k(t){\textnormal d} t.
\end{align}
In this section, our chosen basis will be derived from the eigenfunctions corresponding to the non-zero eigenvalues $\{\lambda_k\}_{k \geq 1}$ of the integral operator $K:L^{2}([a,b]) \rightarrow L^{2}([a,b])$,
\begin{align*}
(Kf)(s):=\int_a^b\textnormal{Cov}(X_{s},X_t)f(t){\textnormal d} t.
\end{align*}
When the covariance satisfies a continuity condition it is known (see, for example, \cite{Ghanesh}, Section 2.3.3) that the normalized set of eigenfunctions $\{e_k\}_{k \geq 1}$ of $K$ is countable and forms a basis for $L^{2}([a,b])$. When we choose this basis in \eqref{eq:eigen} we adopt the special notation $\{Z_k\}_{k \geq 1}$ for the stochastic coefficients. In this case, the expansion is optimal in a number of ways. Specifically, we have:
\begin{theorem}[The Karhunen-Lo\`{e}ve Theorem]\label{theo:klt}
Let $X$ be a real valued continuous time stochastic process on $[a,b]$ such that $0 \leq a \leq b < \infty$ and let ${\mathbb E}[X_t] = 0$ and ${\mathbb E}[X^2_t] < \infty$ for each $t \in [a,b]$. Further, suppose $\textnormal{Cov}(X_s,X_t)$ is continuous on $[a,b]\times[a,b]$.
\begin{enumerate}[(i)]
\item Then,
\begin{align*}
{\mathbb E}\left[\left( X_t - \sum_{k=1}^{d}Z_ke_k(t)\right)^2\right] \rightarrow 0,\quad\text{ as }\quad d \rightarrow \infty
\end{align*}
uniformly for $t \in [a,b]$. Additionally, the random variables $\{Z_k\}_{k \geq 1}$ are uncorrelated and satisfy ${\mathbb E}[Z_k] = 0$ and ${\mathbb E}[Z_k^2] = \lambda_k$.
\item For any other basis $\{\phi_k\}_{k \geq 1}$ of $L^2([a,b])$, with corresponding stochastic coefficients $\{Y_k\}_{k \geq 1}$, and any $d \in \mathbb{N}$, we have
\begin{align*}
\int_{a}^{b}{\mathbb E}\left[\left(\varepsilon_d(t)\right)^2\right]{\textnormal d} t \leq \int_{a}^{b}{\mathbb E}\left[\left(\tilde\varepsilon_d(t)\right)^2\right]{\textnormal d} t,
\end{align*}
where $\varepsilon_d$ and $\tilde\varepsilon_d$ are the remainders $\varepsilon_d(t) :=\sum_{k=d+1}^{\infty}Z_ke_k(t)$ and $\tilde\varepsilon_d(t) :=\sum_{k=d+1}^{\infty}Y_k\phi_k(t)$.
\end{enumerate}
\end{theorem}
\noindent Going forward we assume the order of the eigenvalues, eigenfunctions, and the stochastic coefficients is determined according to $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \cdots$. \\ \\
\noindent According to Ghanem and Spanos \cite{Ghanesh} the Karhunen-Lo\`{e}ve theorem was proposed independently by Karhunen \cite{Karhun}, Lo\`{e}ve \cite{love}, and Kac and Siegert \cite{Katchy}. Modern proofs of the first part of the theorem can be found in \cite{Ashy} and \cite{Ghanesh} and the second part -- the optimality of the truncated approximation -- is also proven in \cite{Ghanesh}. A concise and readable overview of this theory is given in \cite[Chapter 7.1]{Luo}.\\ \\
\noindent We see that although the KLT is quite general, it is best applied in practice when we can determine the three components necessary for a Karhunen-Lo\`{e}ve expansion: the eigenfunctions $\{e_k\}_{k \geq 1}$; the eigenvalues $\{\lambda_k\}_{k \geq 1}$; and the distribution of the stochastic coefficients $\{Z_k\}_{k \geq 1}$. If we wish to use the KLE for simulation then we need even more: We also need to know how to simulate the random vector $Z^{(d)} = (Z_1,Z_2,\ldots,Z_d)$ which, in general, has uncorrelated but not necessarily independent components. \\ \\
For Gaussian processes, the second obstacle is removed, since one can show that the $\{Z_k\}_{k \geq 1}$ are again Gaussian, and therefore independent. There are, of course, many ways to simulate a vector of independent Gaussian random variables. For a process $X \in \mathcal{K}$, the matter is slightly more complicated as we establish in Theorem \ref{theo:main1}. However, since the covariance function of a process $X \in \mathcal{K}$ differs from that of a Wiener process only by the scaling factor $\alpha$, the method for determining the eigenfunctions and the eigenvalues for a L\'{e}vy process is identical to that employed for a Wiener process. Therefore, we omit the proof of the following proposition, and direct the reader to \cite[pg. 41]{Ashy} where the proof for the Wiener process is given.
\begin{proposition}\label{prop:joe}
The eigenvalues and associated eigenfunctions of the operator $K$ defined on $L^{2}([0,T])$ with respect to $X \in \mathcal{K}$ are given by
\begin{align}\label{eq:efuncval}
\lambda_k= \frac{\alpha T^2}{\pi^2\left(k - \frac{1}{2}\right)^2},\quad\text{ and }\quad e_k(t)= \sqrt{\frac{2}{T}}\sin\left(\frac{\pi}{T}\left(k-\frac{1}{2}\right)t\right),\quad k\in \mathbb{N},\,t\in[0,T].
\end{align}
\end{proposition}
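As a numerical aside (not in the original text), Proposition \ref{prop:joe} can be checked by discretizing the operator $K$ on a uniform grid and comparing its leading eigenvalues with \eqref{eq:efuncval}. The sketch below uses arbitrary example values $\alpha = 1$ and $T = 2$.

```python
import numpy as np

alpha, T, n = 1.0, 2.0, 1000
t = (np.arange(n) + 0.5) * T / n                   # midpoint grid on [0, T]
# Nystrom discretization of (Kf)(s) = int_0^T alpha*min(s,t) f(t) dt
C = alpha * np.minimum.outer(t, t) * (T / n)
num_eigs = np.sort(np.linalg.eigvalsh(C))[::-1]    # leading eigenvalues, descending

def lam(k):
    # Eigenvalues predicted by the proposition
    return alpha * T ** 2 / (np.pi ** 2 * (k - 0.5) ** 2)
```

With $n = 1000$ grid points the first few numerical eigenvalues match $\lambda_k$ to well under one percent.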
\noindent A nice consequence of Proposition \ref{prop:joe} and Theorem \ref{theo:klt} is that together they allow us to estimate the amount of the total variance $v(T) := \int_0^T\textnormal{Var}(X_t){\textnormal d} t = \int_0^T{\mathbb E}[X^2_t]{\textnormal d} t = \alpha T^2/2$ that we capture when we represent our process by a truncated KLE. Using the orthogonality of the $\{e_k\}_{k \geq 1}$, and the fact that ${\mathbb E}[Z_k^2] = \lambda_k$ for each $k$, it is straightforward to show that the total variance satisfies $v(T) = \sum_{k \geq 1}\lambda_k$. Therefore, the total variance explained by a $d$-term approximation is
\begin{align*}
\frac{\sum_{k=1}^{d}\lambda_k}{v(T)} = \frac{2}{\pi^2}\sum_{k=1}^{d}\frac{1}{\left(k-\frac{1}{2}\right)^2}.
\end{align*}
By simply computing the quantity on the right we find that the first 2, 5 and 21 terms already explain $90\%$, $95\%$, and $99\%$ of the total variance of the process. Additionally, we see that this estimate holds for all $X \in \mathcal{K}$ independently of $\alpha$ or $T$.\\ \\
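The variance fractions quoted above are easy to reproduce; a minimal sketch (not from the paper):

```python
import math

def explained(d):
    # Fraction of total variance captured by the first d KLE terms,
    # valid for every X in K, independently of alpha and T.
    return (2 / math.pi ** 2) * sum(1 / (k - 0.5) ** 2 for k in range(1, d + 1))
```

For example, `explained(2)`, `explained(5)`, and `explained(21)` exceed $0.90$, $0.95$, and $0.99$ respectively, while `explained(1)`, `explained(4)`, and `explained(20)` fall just short.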
\noindent The following lemma is the important first step in identifying the joint distribution of the stochastic coefficients of the KLE for $X \in \mathcal{K}$. The reader should note, however, that the lemma applies to more general L\'{e}vy processes, and is not just restricted to the set $\mathcal{K}$.
\begin{lemma}\label{lem:bert}
Let $X$ be a L\'evy process and let $\{f_k\}_{k = 1}^d$ be a collection of functions which are in $L^1([0,T])$. Then the vector $\mathbf{\xi}$ consisting of elements
\begin{align*}
\xi_k = \int_0^{T}X_sf_k(s){\textnormal d} s, \quad k \in \{1,2,\ldots, d\},
\end{align*}
has an ID distribution with characteristic exponent
\begin{align}\label{eq:bert}
\Psi_{\mathbf{\xi}}(\mathbf{z}) = \int_0^T\Psi_X\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle\right){\textnormal d} t,\quad \mathbf{z} \in {\mathbb R}^d,
\end{align}
where $\mathbf{u}:[0,T]\rightarrow {\mathbb R}^{d}$ is the function with $k$-th component $u_{k}(t) :=\int_t^T f_{k}(s){\textnormal d} s$, $k \in \{1,2,\ldots,d\}$.
\end{lemma}
\begin{remark}
A similar identity to \eqref{eq:bert} is known, see pg. 128 in \cite{handbook}. In the proof of Lemma \ref{lem:bert}, we borrow some ideas from there. Since the proof is rather lengthy we relegate it to the Appendix.
\end{remark}
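The pathwise mechanism behind Lemma \ref{lem:bert} can be illustrated directly: for a piecewise-constant path with finitely many jumps, integration by parts gives $\int_0^T X_s e_k(s)\,{\textnormal d} s = \sum_j J_j u_k(T_j)$, where $T_j$, $J_j$ are the jump times and sizes. The following Python sketch (with arbitrary, hard-coded jumps, not data from the paper) confirms this against a Riemann sum.

```python
import math

T = 1.0
jumps = [(0.2, 1.0), (0.55, -0.5), (0.8, 2.0)]  # hypothetical (time, size) pairs

def X(s):
    # Piecewise-constant pure-jump sample path
    return sum(J for tj, J in jumps if tj <= s)

def e(k, t):
    # Sine eigenfunctions on [0, T]
    return math.sqrt(2 / T) * math.sin(math.pi / T * (k - 0.5) * t)

def u(k, t):
    # u_k(t) = int_t^T e_k(s) ds
    return math.sqrt(2 * T) * math.cos(math.pi / T * (k - 0.5) * t) / (math.pi * (k - 0.5))

def coeff_riemann(k, n=50_000):
    # Midpoint Riemann sum of int_0^T X_s e_k(s) ds
    return sum(X((i + 0.5) * T / n) * e(k, (i + 0.5) * T / n) for i in range(n)) * T / n

def coeff_closed(k):
    # sum_j J_j * u_k(T_j)
    return sum(J * u(k, tj) for tj, J in jumps)
```

The two computations agree up to the quadrature error, which is exactly the identity that underlies the pushforward form of the L\'{e}vy measure $\Pi$ in Theorem \ref{theo:main1}.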
\noindent With Lemma \ref{lem:bert} and Proposition \ref{prop:joe} in hand, we come to our first main result. In the following theorem we identify the generating triple of the vector $Z^{(d)}$ containing the first $d$ stochastic coefficients of the KLE for a process $X \in \mathcal{K}$. Although it follows that $Z^{(d)}$ has dependent entries (see Corollary \ref{cor:main2}), Theorem \ref{theo:main1}, and in particular the form of the L\'{e}vy measure $\Pi$, will also be the key to simulating $Z^{(d)}$. Going forward we use the notation $\mathcal{B}_{S}$ for the Borel sigma algebra on the topological space $S$.
\begin{theorem}\label{theo:main1}
If $X \in \mathcal{K}$ with generating triple $(0,\sigma^2,\nu)_{h \equiv 1}$ then $Z^{(d)} \in \mathcal{C}$ with generating triple \\ $(\mathbf{0},\mathcal{Q},\Pi)_{h\equiv 1}$ where
$\mathcal{Q}$ is a diagonal $d\times d$ matrix with entries
\begin{align}\label{eq:themat}
q_{k,k} := \frac{\sigma^2T^2}{\pi^2\left(k - \frac{1}{2}\right)^2},\quad k \in \{1,2,\ldots,d\},
\end{align}
and $\Pi$ is the measure,
\begin{align}\label{eq:levymeas}
\Pi(B) := \int_{({\mathbb R}\backslash\{0\})\times[0,T]}{\mathbb I}(f(\mathbf{v}) \in B)\,(\nu\times\lambda)({\textnormal d} \mathbf{v}),\quad B \in \mathcal{B}_{\mathbb{R}^d\backslash\{\mathbf{0}\}},
\end{align}
where $\lambda$ is the Lebesgue measure on $[0,T]$ and $f:{\mathbb R}\times[0,T] \rightarrow {\mathbb R}^d$ is the function
\begin{align}\label{eq:theff}
(x,t) \mapsto \frac{\sqrt{2T}x}{\pi}\left(
\frac{\cos\left(\frac{\pi}{T}\left(1 - \frac{1}{2}\right) t \right)}{\left(1 - \frac{1}{2}\right)},\frac{\cos\left(\frac{\pi}{T}\left(2 - \frac{1}{2}\right) t \right)}{\left(2 - \frac{1}{2}\right)},\ldots,\frac{\cos\left(\frac{\pi}{T}\left(d - \frac{1}{2}\right) t \right)}{\left(d - \frac{1}{2}\right)}\right)^{\textnormal{\textbf{T}}}.
\end{align}
\end{theorem}
\begin{proof}
We substitute the formula for the characteristic exponent (Formula \ref{eq:char} with $a=0$ and $h\equiv 1$) and the eigenfunctions (Formula \ref{eq:efuncval}) into \eqref{eq:bert} and carry out the integration. Then \eqref{eq:themat} follows from the fact that
\begin{align*}
u_k(t) = \int_t^Te_k(s){\textnormal d} s = \sqrt{\frac{2}{T}}\int_t^T\sin\left(\frac{\pi}{T}\left(k - \frac{1}{2}\right)s\right){\textnormal d} s = \sqrt{2T}\frac{\cos\left(\frac{\pi}{T}\left(k - \frac{1}{2}\right)t\right)}{\pi(k-\frac{1}{2})}, \quad k \in \mathbb{N}
\end{align*}
and that the $\{u_k\}_{k \geq 1}$ are therefore also orthogonal on $[0,T]$. \\ \\
\noindent Next we note that $f$ is a continuous function from ${\mathbb R}\backslash\{0\}\times[0,T]$ to ${\mathbb R}^d$ and is therefore $\left(\mathcal{B}_{{\mathbb R}\backslash\{0\}\times[0,T]},\mathcal{B}_{{\mathbb R}^d\backslash\{\mathbf{0}\}}\right)$ measurable. Therefore, $\Pi$ is nothing other than the push forward measure obtained from $(\nu\times\lambda)$ and $f$; in particular, it is a well-defined measure on $\mathcal{B}_{{\mathbb R}^d\backslash\{\mathbf{0}\}}$. It is also a L\'{e}vy measure that satisfies \ref{eq:conditionA} since
\begin{align}\label{eq:prolev}
\int_{\vert \mathbf{x} \vert > 1}\vert \mathbf{x} \vert^2\Pi(d \mathbf{x}) \leq \int_{{\mathbb R}^{d}\backslash\{\mathbf{0}\}}\vert \mathbf{x}\vert^2\Pi({\textnormal d} \mathbf{x})
= \int_0^T\left(\sum_{k=1}^du_k^2(t)\right){\textnormal d} t\int_{{\mathbb R}\backslash\{0\}}x^2\nu({\textnormal d} x) < \infty,
\end{align}
where the final inequality follows from the fact that $X \in \mathcal{K}$. Applying Fubini's theorem and a change of variables, i.e.,
\begin{align*}
\int_0^T\int_{{\mathbb R}\backslash\{0\}}\left(e^{\i x\langle \mathbf{z}, \mathbf{u}(t) \rangle} - 1 - \i x\langle \mathbf{z},\mathbf{u}(t) \rangle\right) \nu({\textnormal d} x){\textnormal d} t &= \int_{({\mathbb R}\backslash\{0\})\times[0,T]}\left(e^{\i \langle \mathbf{z}, f(\mathbf{v}) \rangle} - 1 - \i \langle \mathbf{z},f(\mathbf{v}) \rangle\right) (\nu\times\lambda)({\textnormal d} \mathbf{v})\\ &= \int_{{\mathbb R}^d\backslash\{\mathbf{0}\}}\left(e^{\i\langle \mathbf{z}, \mathbf{x} \rangle} - 1 - \i\langle \mathbf{z},\mathbf{x} \rangle\right) \Pi({\textnormal d} \mathbf{x}),
\end{align*}
concludes the proof of infinite divisibility. Finally, noting that
\begin{align*}
{\mathbb E}[Z_k] = {\mathbb E}\left[\int_0^TX_te_k(t){\textnormal d} t\right] = \int_0^T{\mathbb E}[X_t]e_k(t){\textnormal d} t = 0,\quad k \in \{1,2,\ldots,d\},
\end{align*}
shows that $Z^{(d)} \in \mathcal{C}$.
\end{proof}
\begin{remark}
Note that if we set $\sigma=1$, $\nu \equiv 0$, and $T=1$, we may easily recover the KLE of the Wiener process, i.e., \eqref{eq:wklt}, from Theorem \ref{theo:main1}.
\end{remark}
\noindent We gather some fairly obvious but important consequences of Theorem \ref{theo:main1} in the following corollary.
\begin{corollary}\label{cor:main1} Suppose $X \in \mathcal{K}$, then:
\begin{enumerate}[(i)]
\item $X \in \mathcal{KB}$ with generating triple $(a,\sigma^2,\nu)_{h\equiv 0}$ if, and only if, $Z^{(d)} \in \mathcal{CB}$ with generating triple $(\mathbf{a},\mathcal{Q},\Pi)_{h\equiv 0}$, where $\mathcal{Q}$ and $\Pi$ are as defined in \eqref{eq:themat} and \eqref{eq:levymeas} and $\mathbf{a}$ is the vector with entries
\begin{align}\label{eq:drift}
a_k := a\frac{(-1)^{k+1}\sqrt{2}T^{\frac{3}{2}}}{\pi^2\left(k - \frac{1}{2}\right)^2},\quad k \in \{1,2,\ldots,d\}.
\end{align}
\item $X$ has finite L\'{e}vy measure $\nu$ if, and only if, $Z^{(d)}$ has finite L\'{e}vy measure $\Pi$.
\end{enumerate}
\end{corollary}
\begin{proof}\ \\
\emph{(i)} Since
\begin{align*}
\int_{\mathbb{R}^d\backslash\{\mathbf{0}\}}\vert \mathbf{x} \vert \Pi({\textnormal d} \mathbf{x}) = \int_{0}^{T}\left(\sum_{k=1}^{d}u_k^2(t)\right)^{1/2}{\textnormal d} t\int_{\mathbb{R}\backslash\{0\}}\vert x \vert \nu({\textnormal d} x)
\end{align*}
and \ref{eq:conditionA} is satisfied by both $\nu$ and $\Pi$ it follows that \ref{eq:conditionB} is satisfied for $\nu$ if, and only if, it is satisfied for $\Pi$. Formula \ref{eq:drift} then follows from the fact that
\begin{align*}
-\int_{\mathbb{R}^d\backslash\{\mathbf{0}\}}P_k(\mathbf{x}) \Pi({\textnormal d} \mathbf{x}) = -\int_{\mathbb{R}\backslash\{0\}}x\nu({\textnormal d} x)\frac{\sqrt{2T}}{\pi}\int_0^T\frac{\cos\left(\frac{\pi}{T}\left(k - \frac{1}{2}\right) t \right)}{\left(k - \frac{1}{2}\right)}{\textnormal d} t = a\frac{(-1)^{k+1}\sqrt{2}T^{\frac{3}{2}}}{\pi^2\left(k - \frac{1}{2}\right)^2}.
\end{align*}
\noindent \emph{(ii)} Straightforward from the definition of $\Pi$ in Theorem \ref{theo:main1}.
\end{proof}
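The drift entries \eqref{eq:drift} are fully explicit and easy to tabulate. The following sketch (the helper name \texttt{drift\_entries} is ours, not from the paper) lists $a_1,\ldots,a_d$:

```python
import math

def drift_entries(a, T, d):
    # a_k = a * (-1)^(k+1) * sqrt(2) * T^(3/2) / (pi^2 * (k - 1/2)^2), k = 1..d
    return [a * (-1) ** (k + 1) * math.sqrt(2) * T ** 1.5
            / (math.pi ** 2 * (k - 0.5) ** 2)
            for k in range(1, d + 1)]
```

The entries alternate in sign and decay like $k^{-2}$.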
\noindent Also intuitively obvious, but slightly more difficult to establish rigorously, is the fact that the entries of $Z^{(d)}$ are dependent unless $\nu \equiv 0$.
\begin{corollary}\label{cor:main2}
If $X \in \mathcal{K}$ then $Z^{(d)}$ has independent entries if, and only if, $\nu$ is the zero measure.
\end{corollary}
\noindent To prove Corollary \ref{cor:main2} we use the fact that a $d$-dimensional ID random vector with generating triple $(\mathbf{a},Q,\nu)$ has independent entries if, and only if, $\nu$ is supported on the union of the coordinate axes and $Q$ is diagonal (see E 12.10 on page 67 in \cite{sato}). For this purpose we define, for a vector $\mathbf{x} = (x_1,x_2,\ldots,x_d)^{\textbf{T}} \in {\mathbb R}^d$ such that $\,x_k > 0,\,k\in\{1,2,\ldots,d\}$, the sets
\begin{align*}
\mathcal{I^+}(\mathbf{x}) := \Pi_{k=1}^{d}(x_k,\infty),\quad\text{and}\quad\mathcal{I^-}(\mathbf{x}) := \Pi_{k=1}^{d}(-\infty,-x_k),
\end{align*}
where we caution the reader that the symbol $\Pi$ indicates the Cartesian product and not the L\'{e}vy measure of $Z^{(d)}$. \\ \\
\noindent In the proof below, and throughout the remainder of the paper, $f$ will always refer to the function defined in \eqref{eq:theff}, and $f_k$ to the $k$-th coordinate of $f$.
\begin{proof}[Proof of Corollary \ref{cor:main2}]\ \\
\noindent ($\Leftarrow$) The assumption $\nu \equiv 0$ implies our process is a scaled Wiener process, in which case it is well established that $Z^{(d)}$ has independent entries. Alternatively, this follows directly from the fact that the matrix $\mathcal{Q}$ in Theorem \ref{theo:main1} is diagonal.\\ \\
\noindent ($\Rightarrow$) We assume that $\nu$ is not identically zero and show that there exists $\mathbf{x}$ such that either $\Pi(\mathcal{I}^+(\mathbf{x}))$ or $\Pi(\mathcal{I}^-(\mathbf{x}))$ is strictly greater than zero.\\ \\
\noindent Since $\nu(\mathbb{R}\backslash\{0\}) > 0$ there must exist $\delta > 0$ such that one of $\nu((-\infty,-\delta))$ and $\nu((\delta,\infty))$ is strictly greater than zero; we will initially assume the latter. We observe that for $d \in \mathbb{N}$, $d \geq 2$, the zeros of the function $h_d:[0,T]\rightarrow{\mathbb R}$ defined by
\begin{align*}
t \mapsto \frac{\cos\left(\frac{\pi}{T}\left(d - \frac{1}{2}\right) t \right)}{\left(d - \frac{1}{2}\right)},
\end{align*}
occur at the points $nT/(2d-1)$ for odd $n \in \{1,3,\ldots,2d-1\}$, and therefore the smallest zero is $t_d := T/(2d-1)$. From the fact that the cosine function is positive and decreasing on $[0,\pi/2]$ we may conclude that
\begin{align*}
\frac{\cos\left(\frac{\pi}{T}\left(k - \frac{1}{2}\right) t \right)}{\left(k - \frac{1}{2}\right)} \geq \epsilon,\quad k\in\{1,2,\ldots,d\},\quad t \in [0,t_d/2],
\end{align*}
where $\epsilon = h_d(t_d/2) > 0$. Now, let $\mathbf{x}$ be the vector with entries $x_k = \delta\epsilon\sqrt{2T}/\pi$ for $k \in \{1,2,\ldots,d\}$. Then,
\begin{align*}
(\delta,\infty) \times [0,t_d/2] \subset f^{-1}\left(\mathcal{I}^+\left(\mathbf{x}\right)\right),
\end{align*}
since for $(x,t)\in (\delta,\infty) \times [0,t_d/2]$ we have
\begin{align*}
f_k(x,t) = \frac{\sqrt{2T}}{\pi}x\frac{\cos\left(\frac{\pi}{T}\left(k - \frac{1}{2}\right) t \right)}{\left(k - \frac{1}{2}\right)} > \delta\epsilon\frac{\sqrt{2T}}{\pi} = x_k,\quad k \in \{1,2,\ldots,d\}.
\end{align*}
But then,
\begin{align}
\Pi(\mathcal{I}^+(\mathbf{x})) \geq (\nu\times\lambda)\big((\delta,\infty)\times[0,t_d/2]\big) = \nu((\delta,\infty))\cdot\lambda([0,t_d/2]) > 0.
\end{align}
If we had initially assumed that $\nu((-\infty,-\delta)) > 0$ we would have reached the same conclusion by using the interval $(-\infty,-\delta)$ and $\mathcal{I}^-(\mathbf{x})$. We conclude that $\Pi$ is not supported on the union of the coordinate axes, and so $Z^{(d)}$ does not have independent entries.
\end{proof}
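The quantity $\epsilon = h_d(t_d/2)$ used in the proof is explicit: the cosine is evaluated at exactly $\pi/4$, so $\epsilon = \sqrt{2}/(2d-1)$. A quick numerical check of the lower bound (illustrative only):

```python
import math

def h(k, t, T):
    # h_k(t) = cos((pi/T)(k - 1/2) t) / (k - 1/2)
    return math.cos(math.pi / T * (k - 0.5) * t) / (k - 0.5)

T, d = 1.0, 5
t_d = T / (2 * d - 1)        # smallest zero of h_d
eps = h(d, t_d / 2, T)       # = cos(pi/4)/(d - 1/2) = sqrt(2)/(2d - 1)
grid = [t_d / 2 * i / 200 for i in range(201)]
all_above = all(h(k, t, T) >= eps - 1e-12
                for k in range(1, d + 1) for t in grid)
```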
\section{Shot-noise representation of $Z^{(d)}$}\label{sec:shotnoise}
Although we have characterized the distribution of our stochastic coefficients $Z^{(d)}$, we are faced with the problem of simulating a random vector with dependent entries given only its characteristic function. In general, this seems to be a difficult problem; even generating one-dimensional random variables from a characteristic function is not straightforward (see for example \cite{fromchar}). In our case, thanks to Theorem \ref{theo:main1}, we know that $Z^{(d)}$ is infinitely divisible and that the L\'{e}vy measure $\Pi$ has a special disintegrated form. This allows us to build a connection with the so-called shot-noise representation of our vector $Z^{(d)}$. The goal is to represent $Z^{(d)}$ as an almost surely convergent series of random vectors. \\ \\
\noindent To explain this theory -- nicely developed and explained in \cite{rosy,theoapp} -- we assume that we have two random sequences $\{V_i\}_{i \geq 1}$ and $\{\Gamma_i\}_{i \geq 1}$ which are independent of each other and defined on a common probability space. We assume that each ${\Gamma_i}$ is distributed like a sum of $i$ independent exponential random variables with mean 1, and that the $\{V_i\}_{i \geq 1}$ take values in a measurable space $D$, and are i.i.d. with common distribution $F$. Further, we assume we have a measurable function $H:(0,\infty)\times D \rightarrow {\mathbb R}^{d}$ which we use to define the random sum
\begin{align}\label{eq:shot}
S_n:= \sum_{i = 1}^{n}H(\Gamma_i,V_i),\quad n \in \mathbb{N},
\end{align}
and the measure
\begin{align}\label{eq:meas}
\mu(B) := \int_0^{\infty}\int_{D}{\mathbb I}(H(r,v) \in B)F({\textnormal d} v){\textnormal d} r,\quad B \in \mathcal{B}_{\mathbb{R}^d\backslash\{\mathbf{0}\}}.
\end{align}
The function $C:(0,\infty)\rightarrow {\mathbb R}^{d}$ is defined by
\begin{align}\label{eq:funA}
C_k(s) := \int_0^{s}\int_DP_k(H(r,v))F({\textnormal d} v){\textnormal d} r,\quad k \in \{1,2,\ldots,d\},
\end{align}
where, as before, $P_k$ is the projection onto the $k$-th component.
The connection between \eqref{eq:shot} and ID random vectors is then explained in the following theorem whose results can be obtained by restricting Theorems 3.1, 3.2, and 3.4 in \cite{rosy} from a general Banach space setting to ${\mathbb R}^d$.
\begin{theorem}[Theorems 3.1, 3.2, and 3.4 in \cite{rosy}]\label{theo:ros}
Suppose $\mu$ is a L\'evy measure, then:
\begin{enumerate}[(i)]
\item If \ref{eq:conditionB} holds then $S_n$ converges almost surely to an ID random vector with generating triple $(\mathbf{0},\mathbf{0},\mu)_{h\equiv 0}$ as $n \rightarrow \infty$.
\item If \ref{eq:conditionA} holds, and for each $v \in D$ the function $r \mapsto \vert H(r,v) \vert$ is non-increasing, then
\begin{align}\label{eq:compsum}
M_n := S_n - C(n), \quad n\in \mathbb{N}
\end{align}
converges almost surely to an ID random vector with generating triple $(\mathbf{0},\mathbf{0},\mu)_{h\equiv 1}$.
\end{enumerate}
\end{theorem}
\noindent The name ``shot-noise representation" comes from the idea that $\vert H \vert$ can be interpreted as a model for the volume of the noise of a shot $V_i$ that occurred $\Gamma_i$ seconds ago. If $\vert H \vert $ is non-increasing in the first variable, as we assume in case (ii) of Theorem \ref{theo:ros}, then the volume decreases as the elapsed time grows. The series $\lim_{n\rightarrow \infty}S_n$ can be interpreted as the total noise at the present time of all previous shots.\\ \\
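As a toy illustration of \eqref{eq:shot}, the sketch below generates the $\Gamma_i$ as the arrival times of a unit-rate Poisson process and accumulates $H(\Gamma_i,V_i)$ for a hypothetical decreasing kernel $H$; all names and the choice of $H$ are ours, not from the paper:

```python
import math, random

def shot_noise_partial_sum(n, H, sample_V, rng):
    # S_n = sum_{i<=n} H(Gamma_i, V_i); the Gamma_i are the arrival times of a
    # unit-rate Poisson process (sums of i i.i.d. mean-1 exponentials)
    gamma, total = 0.0, None
    for _ in range(n):
        gamma += rng.expovariate(1.0)
        term = H(gamma, sample_V(rng))
        total = term if total is None else [a + b for a, b in zip(total, term)]
    return total

rng = random.Random(7)
# toy kernel, decreasing in the first argument: H(r, v) = (e^{-r} v, e^{-r} v^2)
H = lambda r, v: [math.exp(-r) * v, math.exp(-r) * v * v]
S = shot_noise_partial_sum(500, H, lambda rng: rng.uniform(0.0, 1.0), rng)
```

Since the shots decay like $e^{-\Gamma_i}$, the partial sums converge rapidly here.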
\noindent The goal is to show that for any process in $\mathcal{K}$ whose L\'{e}vy measure admits a strictly positive density $\pi$, the vector $Z^{(d)}$ has a shot-noise representation of the form \eqref{eq:shot} or \eqref{eq:compsum}. To simplify notation we make some elementary but necessary observations/assumptions: First, we assume that $X$ has no Gaussian component $\sigma^2$. There is no loss of generality in this assumption, since if $X$ does have a Gaussian component then $Z^{(d)}$ changes only by the addition of a vector of independent Gaussian random variables, which poses no issue from a simulation standpoint. Second, from \eqref{eq:char} we see that any L\'{e}vy process $X$ with representation $(0,0,\pi)_{h\equiv j}$, $j \in \{0,1\}$, can be decomposed into the difference of two independent L\'{e}vy processes, each having only positive jumps. Indeed, splitting the integral and making a change of variable $x \mapsto -x$ gives
\begin{align}\label{eq:split}
\Psi_{X}(z) &= -\int_{\mathbb{R}\backslash\{0\}} \left(e^{\i zx} - 1 - \i zxj\right)\pi(x){\textnormal d} x \nonumber \\
&= -\int_{0}^{\infty} \left(e^{\i zx} - 1 - \i zxj\right)\pi(x){\textnormal d} x -\int_{0}^{\infty}\left(e^{\i z(-x)} - 1 - \i z(-x)j\right)\pi(-x){\textnormal d} x \nonumber \\
&= \Psi_{X^+}(z) + \Psi_{-X^-}(z)
\end{align}
where $X^+$ (resp. $X^-$) has L\'{e}vy density $\pi(\cdot)$ (resp. $\pi(-\cdot)$) restricted to $(0,\infty)$. In light of this observation, the results of Theorem \ref{theo:main2} are limited to L\'{e}vy processes with positive jumps. It should be understood that for a general process we can obtain $Z^{(d)}$ by simulating $Z^{(d)}_+$ and $Z^{(d)}_-$ -- corresponding to $X^+$ and $X^-$ respectively -- and then subtracting the second from the first to obtain a realization of $Z^{(d)}$.\\ \\
\noindent Last, for a L\'{e}vy process with positive jumps and strictly positive L\'{e}vy density $\pi$, we define the function
\begin{align}\label{eq:thefung}
g(x) := \int_x^{\infty}\pi(s){\textnormal d} s,
\end{align}
which is just the tail integral of the L\'{e}vy measure. We see that $g$ is strictly monotonically decreasing to zero, and so admits a strictly monotonically decreasing inverse $g^{-1}$ on the domain $(0,g(0))$.
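For intuition, with the hypothetical toy density $\pi(s)=e^{-s}$ both the tail integral and its inverse are available in closed form:

```python
import math

# hypothetical toy Levy density pi(s) = e^{-s} on (0, inf): finite measure
def g(x):
    # tail integral g(x) = int_x^inf e^{-s} ds = e^{-x}; here g(0) = 1
    return math.exp(-x)

def g_inv(y):
    # strictly decreasing inverse of g on (0, g(0)) = (0, 1)
    return -math.log(y)
```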
\begin{theorem}\label{theo:main2} Let $\pi$ be a strictly positive L\'{e}vy density on $(0,\infty)$ and identically zero elsewhere.
\begin{enumerate}[(i)]
\item If $X \in \mathcal{KB}$ with generating triple $(a,0,\pi)_{h\equiv 0}$, then $Z^{(d)}$ has a shot noise representation
\begin{align}\label{eq:shotnoise}
Z^{(d)} \,{\buildrel d \over =}\ \mathbf{a} + \sum_{i \geq 1}H(\Gamma_i,U_i)
\end{align}
where $f$ and $\mathbf{a}$ are defined in \eqref{eq:theff} and \eqref{eq:drift} respectively, $\{U_i\}_{i \geq 1}$ is an i.i.d. sequence of uniform random variables on $[0,1]$, and
\begin{align}\label{eq:H}
H(r,v) := f(g^{-1}(r/T){\mathbb I}( 0 < r < Tg(0)),Tv).
\end{align}
\item If $X \in \mathcal{K}$ with generating triple $(0,0,\pi)_{h\equiv 1}$, then $Z^{(d)}$ has a shot noise representation
\begin{align}\label{eq:const1}
Z^{(d)} \,{\buildrel d \over =}\ \lim_{n\rightarrow\infty}\sum_{i = 1}^{n}H(\Gamma_i,U_i) - C(n),
\end{align}
where $H$ and $\{U_i\}_{i\geq 1}$ are as in Part $(i)$ and $C$ is defined as in \eqref{eq:funA}.
\end{enumerate}
\end{theorem}
\begin{proof}
Rewriting \eqref{eq:levymeas} to suit our assumptions and making a change of variables $t = Tv$ gives, for any $B \in \mathcal{B}_{\mathbb{R}^d\backslash\{\mathbf{0}\}}$
\begin{align*}
\Pi(B) &= \int_0^T\int_0^{\infty}{\mathbb I}(f(x,t) \in B)\pi(x){\textnormal d} x{\textnormal d} t = \int_0^1\int_0^{\infty}{\mathbb I}(f(x,Tv) \in B)T\pi(x){\textnormal d} x{\textnormal d} v.
\end{align*}
Making a further change of variables $r = Tg(x)$ gives
\begin{align*}
\Pi(B) &= \int_0^1\int_0^{Tg(0)}{\mathbb I}(f(g^{-1}(r/T),Tv) \in B){\textnormal d} r{\textnormal d} v.
\end{align*}
Since $\mathbf{0} \notin B$, we have ${\mathbb I}(\mathbf{0} \in B) = 0$, and we may conclude that
\begin{align*}
\Pi(B) = \int_0^{\infty}\int_{0}^1{\mathbb I}(f(g^{-1}(r/T){\mathbb I}( 0 < r < Tg(0)),Tv) \in B){\textnormal d} v{\textnormal d} r.
\end{align*}
From the definition of the function $f$ (Formula \ref{eq:theff}), and that of $g^{-1}$, it is clear that
\begin{align}\label{eq:myh}
(r,v) \mapsto f(g^{-1}(r/T){\mathbb I}( 0 < r < Tg(0)),Tv)
\end{align}
is measurable and non-increasing in absolute value for any fixed $v$. Therefore, we may identify \eqref{eq:myh} with the function $H$, the uniform distribution on $[0,1]$ with $F$, and $\Pi$ with $\mu$. The results then follow from Theorems \ref{theo:main1} and \ref{theo:ros} and Corollary \ref{cor:main1}.
\end{proof}
\noindent Going forward we will write simply $H(r,v) = f(g^{-1}(r/T),Tv)$ where it is understood that $g^{-1}$ vanishes outside the interval $(0,g(0))$.
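A minimal end-to-end sketch of case (i) of Theorem \ref{theo:main2}, using the toy density $\pi(s)=e^{-s}$ (finite L\'evy measure, $g(0)=1$; the drift vector is omitted for brevity), could look as follows. The series terminates almost surely once $\Gamma_i \geq Tg(0)$, since all later shots are zero vectors:

```python
import math, random

def f(x, t, T, d):
    # f_k(x, t) = (sqrt(2T)/pi) * x * cos((pi/T)(k - 1/2) t) / (k - 1/2)
    c = math.sqrt(2 * T) / math.pi
    return [c * x * math.cos(math.pi / T * (k - 0.5) * t) / (k - 0.5)
            for k in range(1, d + 1)]

def H(r, v, T, d, g_inv, g0):
    # H(r, v) = f(g^{-1}(r/T) * 1{0 < r < T g(0)}, T v)
    x = g_inv(r / T) if 0.0 < r < T * g0 else 0.0
    return f(x, T * v, T, d)

# toy finite-activity density pi(s) = e^{-s}: g(x) = e^{-x}, g(0) = 1
g_inv, T, d, g0 = (lambda y: -math.log(y)), 1.0, 4, 1.0
rng = random.Random(1)
Z, gamma = [0.0] * d, 0.0
while gamma < T * g0:          # later shots contribute zero vectors
    gamma += rng.expovariate(1.0)
    term = H(gamma, rng.uniform(0.0, 1.0), T, d, g_inv, g0)
    Z = [a + b for a, b in zip(Z, term)]
```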
\subsubsection*{Discussion}
There are two fairly obvious difficulties with the series representations of Theorem \ref{theo:main2}. The first -- a common problem for all series representations of ID random variables when the L\'evy measure is not finite -- is that we have to truncate the series when $g(0) =\infty$ (equivalently $\nu(\mathbb{R}\backslash\{0\}) = \infty$). Besides the fact that in these cases our method fails to be exact, computation time may become an issue if the series converges too slowly. The second issue is that $g^{-1}$ is generally not known in closed form. Thus, in order to apply the method we need a function $g$ that is amenable to accurate and fast numerical inversion. In the survey \cite{rosy} Rosi{\'n}ski reviews several methods, which depend on various properties of the L\'evy measure (for example, absolute continuity with respect to a probability distribution), that avoid this inversion. In a subsequent paper \cite{rosinski2007tempering} he develops special methods for the family of tempered $\alpha$-stable distributions that also do not require inversion of the tail of the L\'evy measure. We have made no attempt to adapt these techniques here, as they fall outside the scope of this paper. However, this seems to be a promising area for further research.\\ \\
A nice feature of simulating a $d$-dimensional KLE of a L\'{e}vy process $X \in \mathcal{K}$ via Theorem \ref{theo:main2} is that we may increase the dimension incrementally. That is, having simulated a path of the $d$-term KLE approximation of $X$,
\begin{align}\label{eq:partsum}
S_{t}^{(d)} := \sum_{k=1}^{d}Z_ke_k(t),\quad t\in[0,T],
\end{align}
we may derive a path of $S^{(d+1)}$ directly from $S^{(d)}$ as opposed to starting a fresh simulation. We observe that a realization $z_k$ of $Z_k$ can be computed individually once we have the realizations $\{\gamma_i,u_i\}_{i\geq 1}$ of $\{\Gamma_i,\,U_i\}_{i\geq 1}$. Specifically,
\begin{align*}
z_k = a_k + \sum_{i \geq 1}\frac{\sqrt{2T}g^{-1}(\gamma_i/T)}{\pi}\frac{\cos\left(\pi\left(k - \frac{1}{2}\right)u_i\right)}{\left(k-\frac{1}{2}\right)},
\end{align*}
when \ref{eq:conditionB} holds, with an analogous expression when it does not. Thus, if $s^{(d)}_t$ is our realization of $S^{(d)}_t$ we get a realization of $S^{(d+1)}_t$ via $s^{(d+1)}_t = s^{(d)}_t + z_{d+1}e_{d+1}(t)$.\\ \\
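In code, the incremental update could be sketched as below; we assume the Wiener-type eigenfunctions $e_k(t)=\sqrt{2/T}\,\sin\bigl(\tfrac{\pi}{T}(k-\tfrac12)t\bigr)$ and again use the toy inverse tail $g^{-1}(y)=-\log y$ (both choices are illustrative, not prescriptions):

```python
import math

def z_coeff(k, gammas, us, g_inv, T, g0, a_k=0.0):
    # z_k = a_k + sum_i (sqrt(2T)/pi) g^{-1}(gamma_i/T)
    #             * cos(pi (k - 1/2) u_i) / (k - 1/2)
    c = math.sqrt(2 * T) / math.pi
    total = a_k
    for gam, u in zip(gammas, us):
        if 0.0 < gam < T * g0:
            total += c * g_inv(gam / T) * math.cos(math.pi * (k - 0.5) * u) / (k - 0.5)
    return total

def e(k, t, T):
    # assumed Wiener-type KLE eigenfunctions: e_k(t) = sqrt(2/T) sin((pi/T)(k - 1/2) t)
    return math.sqrt(2.0 / T) * math.sin(math.pi / T * (k - 0.5) * t)

def upgrade(s_d, t, d, gammas, us, g_inv, T, g0):
    # turn a stored d-term path value s_d at time t into the (d+1)-term value
    return s_d + z_coeff(d + 1, gammas, us, g_inv, T, g0) * e(d + 1, t, T)
```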
\noindent It is also worthwhile to compare the series representations for L\'{e}vy processes found in \cite{rosy} with the proposed method. As an example, suppose we have a subordinator $X$ with a strictly positive L\'{e}vy density $\pi$. Then, it is also true that
\begin{align}\label{eq:regseries}
\{X_t:t\in[0,T]\} \,{\buildrel d \over =}\ \left\{\sum_{i\geq 1}g^{-1}(\Gamma_i/T){\mathbb I}(TU_i < t):t\in[0,T]\right\}.
\end{align}
The key difference between the approaches is that the series in \eqref{eq:regseries} depends on $t$, whereas the series representation of $Z^{(d)}$ is independent of $t$. Therefore, in \eqref{eq:regseries} we have to recalculate the series for each $t$, adding those summands for which $U_iT < t$. Of course, the random variables $\{\Gamma_i, U_i\}_{i\geq 1}$ need to be generated only once. On the other hand, while we have to simulate $Z^{(d)}$ only once for all $t$, each summand requires the evaluation of $d$ cosine functions, and for each $t$ we have to evaluate $d$ sine functions when we form the KLE. However, since there is no more randomness once we have generated $Z^{(d)}$, the second computation can be done in advance.
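For comparison, the path-wise series \eqref{eq:regseries} is equally short to sketch; with the toy tail $g(x)=e^{-x}$, shots with $\Gamma_i \geq Tg(0)$ contribute zero, so the sum is finite (all names and the density choice are illustrative):

```python
import math, random

def path_via_series(ts, gammas, us, g_inv, T, g0):
    # X_t = sum_i g^{-1}(Gamma_i / T) * 1{T U_i < t}; by convention g^{-1}
    # vanishes outside (0, g(0)), so shots with Gamma_i >= T g(0) drop out
    out = []
    for t in ts:
        out.append(sum(g_inv(gam / T)
                       for gam, u in zip(gammas, us)
                       if 0.0 < gam < T * g0 and T * u < t))
    return out

g_inv, T, g0 = (lambda y: -math.log(y)), 1.0, 1.0   # toy g(x) = e^{-x}
rng = random.Random(3)
gammas, gam = [], 0.0
while gam < T * g0:
    gam += rng.expovariate(1.0)
    gammas.append(gam)
us = [rng.uniform(0.0, 1.0) for _ in gammas]
X = path_via_series([0.0, 0.25, 0.5, 0.75, 1.0], gammas, us, g_inv, T, g0)
```

The resulting path is non-decreasing, as it must be for a subordinator.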
\subsubsection*{Example} Consider the Variance Gamma (VG) process which was first introduced in \cite{initvg} and has since become a popular model in finance. The process can be constructed as the difference of two independent Gamma processes, i.e., processes with L\'{e}vy measures of the form
\begin{align}\label{eq:gammasub}
\nu({\textnormal d} x) = c\frac{e^{-\rho x}}{x}{\textnormal d} x,\quad x > 0,
\end{align}
where $c,\,\rho > 0$. For this example we use a Gamma process $X^+$ with parameters $c=1$ and $\rho=1$ and subtract a Gamma process $X^-$ with parameters $c=1$ and $\rho = 2$ to yield a VG process $X$. Assuming no Gaussian component or additional linear drift, it can be shown (see Proposition 4.2 in \cite{contan}) that the characteristic exponent of $X$ is then
\begin{align*}
\Psi_X(z) = -\left(\int_{0}^{\infty}(e^{\i zx} -1)\frac{e^{-x}}{x}{\textnormal d} x + \int_0^{\infty}(e^{-\i zx} -1)\frac{e^{-2x}}{x}{\textnormal d} x \right) = \log\left(1-\i z\right) + \log\left(1 + \frac{\i z}{2}\right).
\end{align*}
We observe that $X^+,\,X^{-} \notin \mathcal{K}$ since
\begin{align*}
{\mathbb E}[X^+_t] = \i t\Psi'_{X^+}(0) = t \neq 0\quad\text{ and }\quad{\mathbb E}[X^-_t] = \i t\Psi'_{X^-}(0) = \frac{t}{2} \neq 0.
\end{align*}
However, this is not a problem, since we can always construct processes $\tilde{X}^+,\,\tilde{X}^- \in \mathcal{K}$ by subtracting $t$ and $t/2$ from $X^+$ and $X^-$ respectively. We then generate the KLE of $\tilde{X}^+$ and add back $t$ to the result, and apply the analogous procedure for $X^{-}$. This is true generally as well, i.e., for a square integrable L\'{e}vy process with expectation ${\mathbb E}[X_t] = \i t\Psi_X'(0) \neq 0$ we can always construct a process $\tilde{X} \in \mathcal{K}$ by simply subtracting the expectation $\i t\Psi_X'(0)$.\\ \\
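The closed form of $\Psi_X$ above can be checked against the defining integrals by crude numerical quadrature (a Simpson-rule sketch; the cutoffs and step counts below are ad hoc choices of ours):

```python
import cmath

def gamma_component(z, rho, lo=1e-6, hi=60.0, n=60_000):
    # numerically integrate (e^{izx} - 1) e^{-rho x} / x over [lo, hi]
    # with the composite Simpson rule (n even)
    h = (hi - lo) / n
    def fn(x):
        return (cmath.exp(1j * z * x) - 1) * cmath.exp(-rho * x) / x
    s = fn(lo) + fn(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(lo + i * h)
    return s * h / 3

z = 0.5
# Psi_X(z) = -(integral for X^+ with rho = 1) - (integral for X^- with rho = 2)
psi = -(gamma_component(z, 1.0) + gamma_component(-z, 2.0))
target = cmath.log(1 - 1j * z) + cmath.log(1 + 1j * z / 2)
```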
\noindent From \eqref{eq:gammasub} we see that the function $g$ will have the form
\begin{align*}
g(x) = c\int_x^{\infty}\frac{e^{-\rho s}}{s}{\textnormal d} s = cE_1(\rho x),
\end{align*}
where $E_1(x) := \int_x^{\infty}s^{-1}e^{-s}{\textnormal d} s$ is the exponential integral function. Therefore,
\begin{align*}
g^{-1}(T^{-1}r) = \frac{1}{\rho}E_1^{-1}\left(\frac{r}{Tc}\right).
\end{align*}
There are many routines available to compute $E_1$; we choose a Fortran implementation to create a lookup table for $E^{-1}_1$ with domain $[6.226\times10^{-22},45.47]$. We discretize this domain into $200000$ unevenly spaced points, such that the distance between two adjacent points is no more than 0.00231. Then we use polynomial interpolation between points.\\ \\
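As a self-contained (and much cruder) alternative to the Fortran lookup table, one could integrate the tail numerically for $E_1$ and invert it by bisection; the tolerances and cutoffs below are ad hoc:

```python
import math

def E1(x, n=20_000, cut=50.0):
    # E_1(x) = int_x^inf e^{-s}/s ds = int_0^inf e^{-(x+u)}/(x+u) du,
    # truncated at u = cut and integrated with the composite Simpson rule
    h = cut / n
    fn = lambda u: math.exp(-(x + u)) / (x + u)
    s = fn(0.0) + fn(cut)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(i * h)
    return s * h / 3

def E1_inv(y, lo=1e-9, hi=50.0, iters=50):
    # bisection on the strictly decreasing function E_1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if E1(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```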
\noindent When simulating $Z^{(d)}_+$ we truncate the series \eqref{eq:shotnoise} when $(Tc)^{-1}\Gamma_i > 45.47$; at this point we have $g^{-1}(T^{-1}\Gamma_i) < \rho^{-1}10^{-19}$. Using the fact that the $\{\Gamma_i\}_{i\geq 1}$ are nothing other than the arrival times of a Poisson process with intensity one, we estimate that we need to generate on average $45Tc$ random variables to simulate $Z^{(d)}_+$, and similarly for $Z^{(d)}_-$. We remark that for the chosen process both the decay and computation of $g^{-1}$ are manageable.\\ \\
\noindent We simulate sample paths of $S^{(d)}$ for $d \in \{5,10,15,20,25,100,3000\}$ using the described approach. We also compute a Monte Carlo (MC) approximation of the expectation of $X$ by averaging over $10^6$ sample paths of the $d$-term approximation. Some sample paths and the results of the MC simulation are depicted in Figure \ref{fig1}, where the colors black, grey, red, green, blue, cyan, and magenta correspond to $d$ equal to 5, 10, 15, 20, 25, 100, and 3000 respectively.\\ \\
\noindent In Figure \ref{figa} we show the sample paths resulting from a simulation of $S^{(d)}$. We notice that the numerical results correspond with the discussion of Section \ref{sec:klt}: the large movements of the sample path are already captured by the 5-term approximation. We also notice peaks resulting from rapid oscillations before the bigger ``jumps" in the higher term approximations. This behaviour is magnified for the 3000-term approximation in Figure \ref{figb}. In classical Fourier analysis this is referred to as the Gibbs phenomenon; the solution in that setting is to replace the partial sums by Ces\`{a}ro sums. We can employ the same technique here, replacing $S^{(d)}$ with $C^{(d)}$, which is defined by
\begin{align*}
C_t^{(d)} :=\frac{1}{d}\sum_{k=1}^dS_t^{(k)}.
\end{align*}
It is relatively straightforward to show that $C^{(d)}$ converges to $X$ in the same manner as $S^{(d)}$ (as described in Theorem \ref{theo:klt} (i)). In Figure \ref{figc} we show the effect of replacing $S^{(d)}$ with $C^{(d)}$ on all sample paths, and in Figure \ref{figd} we show the $C^{(3000)}$ approximation -- now the Gibbs phenomenon is no longer apparent. \\ \\
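The smoothing effect of the Ces\`{a}ro averages is the same as for classical Fourier series. The standalone illustration below (a square wave, not the paper's process) exhibits the Gibbs overshoot of the partial sums and shows that the Ces\`{a}ro (Fej\'er) means stay within the range of the target function:

```python
import math

def T_sum(m, t):
    # Fourier partial sum of the square wave sign(sin t), harmonics up to m
    return sum(4 * math.sin(k * t) / (math.pi * k)
               for k in range(1, m + 1, 2))

def fejer(N, t):
    # Cesaro average of the partial sums T_0, ..., T_{N-1}
    return sum(T_sum(m, t) for m in range(N)) / N

grid = [math.pi * i / 400 for i in range(1, 400)]
overshoot = max(T_sum(79, t) for t in grid)   # Gibbs: exceeds the level 1
smoothed = max(fejer(80, t) for t in grid)    # Fejer mean stays within [-1, 1]
```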
In Figure \ref{fige} we show the MC simulation of ${\mathbb E}[S^{(5)}_t]$ (black +) plotted together with ${\mathbb E}[X_t] = t/2$ (green $\circ$). We see the 5-term KLE already gives a very good approximation. In Figure \ref{figf} we also show the errors ${\mathbb E}[S^{(d)}_t] - {\mathbb E}[X_t]$ for $d=5$ (black +), $d=25$ (blue $\circ$), and $d=3000$ (magenta $\square$). Again we have agreement with the discussion in Section \ref{sec:klt}: little is gained in our MC approximation of ${\mathbb E}[X_t]$ by choosing a KLE with more than 25 terms. Recall that a KLE with 25 terms already captures more than 99\% of the total variance of the given process.
\begin{figure}
\centering
\subfloat[]
{\label{figa}\includegraphics[height =5.5cm]{all}}
\subfloat[]
{\label{figb}\includegraphics[height =5.5cm]{close}}\\
\subfloat[]
{\label{figc}\includegraphics[height =5.5cm]{ces_all}}
\subfloat[]
{\label{figd}\includegraphics[height =5.5cm]{ces_close}}\\
\subfloat[]
{\label{fige}\includegraphics[height =5.5cm]{exp_5}}
\subfloat[]
{\label{figf}\includegraphics[height =5.4cm]{exp_err}}
\caption{(a) KLE sample paths (b) Example of Gibbs phenomenon (c) KLE with Ces\`{a}ro sums \\(d) Mitigated Gibbs phen. (e) ${\mathbb E}[X_t]=t/2$ and MC sim. of ${\mathbb E}[S^{(5)}_t]$ (f) MC Err. ${\mathbb E}[S^{(d)}_t] - t/2$}
\label{fig1}
\end{figure}
\section*{Author's acknowledgements} My work is supported by the Austrian Science Fund (FWF) under the project F5508-N26, which is part of the Special Research Program ``Quasi-Monte Carlo Methods: Theory and Applications". I would like to thank Jean Bertoin for explaining his results in \cite{handbook} to me. This helped me extend identity \eqref{eq:bert} of Lemma \ref{lem:bert} from $C^1$ functions to $L^1$ functions. Further I would like to thank Alexey Kuznetsov and Gunther Leobacher for reading a draft of this paper and offering helpful suggestions.
\newpage
\begin{appendices}
\section{Additional proof}\label{app:A}
\begin{proof}[Proof of Lemma \ref{lem:bert}]
We give a proof for continuously differentiable $\{f_k\}_{k = 1}^d$ first and then prove the general case. Accordingly, we fix $\mathbf{z} \in {\mathbb R}^d$, a collection of continuously differentiable $\{f_k\}_{k = 1}^d$ defined on $[0,T]$, and a L\'{e}vy process $X$ with state space ${\mathbb R}$. Instead of proving identity \eqref{eq:bert} directly for $X$ we will prove that
\begin{align}\label{eq:withb}
\Psi_{\xi^{(b)}}(\mathbf{z}) = \int_0^T\Psi_{X^{(b)}}\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle\right){\textnormal d} t = b\int_0^T\Psi_X\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle\right){\textnormal d} t,\quad \mathbf{z} \in {\mathbb R}^d,\quad b > 0,
\end{align}
where $X^{(b)}$ is the process defined by $X^{(b)}_t := X_{bt}$ and $\xi^{(b)}$ is the vector with entries
\begin{align*}
\xi^{(b)}_k := \int_{0}^{T}X_{bt}f_k(t){\textnormal d} t,\quad k \in \{1,2,\ldots, d\}.
\end{align*}
It is clear that $X^{(b)}$ is a L\'{e}vy process, that $\Psi_{X^{(b)}} = b\Psi_X$, and that \eqref{eq:bert} corresponds to the special case $b=1$. We focus on this more general result because it will lead directly to a proof of infinite divisibility. We begin by defining
\begin{align}\label{eq:arr}
{R^{(k)}_N} := \frac{T}{N}\sum_{n=0}^{N-1}f_k\left(\frac{(n+1)T}{N}\right)X_{\frac{b(n+1)T}{N}},\quad k \in \{1,2,\ldots, d\},\, N \in \mathbb{N},
\end{align}
which are $N$-point, right-endpoint Riemann sum approximations of the random variables $\xi^{(b)}_k$. By the usual telescoping sum technique for L\'evy processes we can write
\begin{align*}
X_{\frac{b(n+1)T}{N}} &= \left(X_{\frac{b(n+1)T}{N}} - X_{\frac{bnT}{N}}\right) + \left(X_{\frac{bnT}{N}} - X_{\frac{b(n-1)T}{N}}\right) + \ldots + \left(X_{\frac{b2T}{N}} - X_{\frac{bT}{N}}\right) + X_{\frac{bT}{N}} \\
&\,{\buildrel d \over =}\ X^{(1)} + X^{(2)} + \ldots +X^{(n+1)},
\end{align*}
where the random variables $X^{(i)}$ are independent and each distributed like $X_{bT/N}$. This allows us to rearrange the sum $R^{(k)}_N$ according to the random variables $X^{(i)}$, gathering together those with the same index. Therefore, we have
\begin{align*}
R^{(k)}_N \,{\buildrel d \over =}\ \sum_{n=0}^{N-1}X^{(n+1)}\left(\frac{T}{N}\sum_{j=n}^{N-1}f_k\left(\frac{(j+1)T}{N}\right)\right).
\end{align*}
We notice that the term in brackets on the right-hand side is an $(N - n)$-point, right-endpoint Riemann sum approximation for the integral of $f_k$ over the interval $[nT/N,T]$. Let us therefore define
\begin{align}\label{eq:esstee}
t^{(k)}_{n,N} := \frac{T}{N}\sum_{j=n}^{N-1}f_k\left(\frac{(j+1)T}{N}\right), \quad \text{ and } \quad s^{(k)}_{n,N} := \int_{\frac{nT}{N}}^{T}f_k(s){\textnormal d} s,
\end{align}
as well as the $d$-dimensional vectors $\mathbf{t}_{n,N}$ and $\mathbf{s}_{n,N}$ consisting of entries $t^{(k)}_{n,N}$ and $s^{(k)}_{n,N}$ respectively. We observe that
\begin{align}\label{eq:convlev}
{\mathbb E}[\exp(\i\langle \mathbf{z}, \xi^{(b)}\rangle)] =\lim_{N \rightarrow \infty}{\mathbb E}\left[\exp\left(\sum_{n=0}^{N-1}\i X^{(n+1)}\langle \mathbf{z}, \mathbf{t}_{n,N}\rangle\right)\right]
=\lim_{N \rightarrow \infty}\exp\left(-\frac{bT}{N}\sum_{n=0}^{N-1}\Psi_X(\langle \mathbf{z}, \mathbf{t}_{n,\,N}\rangle)\right),
\end{align}
where we have used the dominated convergence theorem to obtain the first equality, and the independence of the $X^{(i)}$ to obtain the final equality. Further, we get
\begin{align*}
\exp\left(-\int_0^T\Psi_{X^{(b)}}\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle \right){\textnormal d} t\right) = \lim_{N \rightarrow \infty}\exp\left(-\frac{bT}{N}\sum_{n=0}^{N-1}\Psi_X(\langle \mathbf{z}, \mathbf{s}_{n,\,N}\rangle) \right),
\end{align*}
by using the left-endpoint Riemann sums. We note that $\vert \langle \mathbf{z}, \mathbf{t}_{n,\,N} \rangle - \langle \mathbf{z}, \mathbf{s}_{n,\,N} \rangle \vert \rightarrow 0$ uniformly in $n$ since
\begin{align}\label{eq:bnd}
\vert \langle \mathbf{z}, \mathbf{t}_{n,\,N} \rangle - \langle \mathbf{z}, \mathbf{s}_{n,\,N} \rangle\vert \leq \sum_{k=1}^{d}\vert z_k \vert\left \vert t^{(k)}_{n,N} - s^{(k)}_{n,N} \right \vert \leq \frac{d T^2}{N} \max_{1\leq k \leq d}\left\{\vert z_k\vert\sup_{x \in [0,T]}\vert f'_k(x) \vert \right\},
\end{align}
where the last estimate follows from the well-known error bound $((c-a)^2\sup_{x \in [a,c]}\vert g'(x) \vert)/N$ for the absolute difference between an $N$-point, right-endpoint Riemann sum and the integral of a $C^1$ function $g$ over $[a,c]$. Then, by the continuity of $\Psi_X$, for any $\epsilon > 0$ we may choose an appropriately large $N$ such that
\begin{align*}
\left \vert \frac{1}{N}\sum_{n=0}^{N-1}\Psi_X(\langle \mathbf{z}, \mathbf{t}_{n,\,N} \rangle ) - \frac{1}{N}\sum_{n=0}^{N-1}\Psi_X(\langle \mathbf{z}, \mathbf{s}_{n,\,N} \rangle )\right\vert &\leq \frac{1}{N}\sum_{n=0}^{N-1}\vert\Psi_X(\langle \mathbf{z}, \mathbf{t}_{n,\,N} \rangle ) - \Psi_X(\langle \mathbf{z}, \mathbf{s}_{n,\,N} \rangle )\vert \leq \epsilon.
\end{align*}
This proves \eqref{eq:withb} and therefore also \eqref{eq:bert} for $C^{1}$ functions. \\ \\
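The Riemann-sum error bound invoked above is easy to verify numerically; a toy check with $g=\sin$ on $[0,1]$ (not part of the proof):

```python
import math

def right_riemann(g, a, c, N):
    # N-point right-endpoint Riemann sum of g over [a, c]
    h = (c - a) / N
    return h * sum(g(a + (n + 1) * h) for n in range(N))

# claimed bound: |sum - integral| <= (c - a)^2 * sup_{[a,c]} |g'| / N
N = 100
approx = right_riemann(math.sin, 0.0, 1.0, N)
exact = 1.0 - math.cos(1.0)
bound = (1.0 - 0.0) ** 2 * 1.0 / N    # sup |cos| = 1 on [0, 1]
```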
To establish the infinite divisibility of $\xi$ we note that \eqref{eq:withb} shows that $\Psi_{\xi^{(b)}} = b\Psi_{\xi^{(1)}} = b\Psi_{\xi}$ and that $e^{-b\Psi_{\xi}}$ is therefore a positive definite function for every $b$ since it is the characteristic function of the random vector $\xi^{(b)}$. Positive definiteness follows from Bochner's Theorem (see for example Theorem 2.13 in \cite{levtype}). Also, we clearly have $\Psi_{\xi}(\mathbf{0}) = 0$ since $\Psi_{X}(0) = 0$. By Theorem 2.15 in \cite{levtype} these two points combined show that $\Psi_{\xi}$ is the characteristic exponent of an ID probability distribution, and hence $\xi$ is an ID random vector.\\ \\
Now one can extend the lemma to $L^{1}$ functions $\{f_k\}_{k = 1}^d$ by exploiting the density of $C^{1}([0,T])$ in $L^{1}([0,T])$. In particular, for each $k$ we can find a sequence of $C^{1}$ functions $\{f_{n,k}\}_{n\geq 1}$ which converges in $L^1$ to $f_k$. Then,
\begin{align*}
\vert u_k(t) - u_{n,k}(t) \vert = \left\vert \int_t^T f_k(s){\textnormal d} s - \int_t^T f_{n,k}(s){\textnormal d} s\right\vert \leq \int_0^T\left\vert f_k(s) - f_{n,k}(s)\right\vert{\textnormal d} s,
\end{align*}
showing that $u_{n,k} \rightarrow u_k$ uniformly in $t$. This shows that for each $\mathbf{z}$ the functions $\{\Psi_{X}(\langle \mathbf{z},\mathbf{u}_n(\cdot)\rangle)\}_{n\geq 1}$, with $\mathbf{u}_n := (u_{n,1},\cdots,u_{n,d})^{\textnormal{\textbf{T}}}$, are uniformly bounded on $[0,T]$, so that the dominated convergence theorem applies and we have
\begin{align}\label{eq:rhs}
\lim_{n \rightarrow \infty}\exp\left(-\int_0^T\Psi_X\left(\langle \mathbf{z}, \mathbf{u}_n(t) \rangle\right){\textnormal d} t\right) = \exp\left(-\int_0^T\Psi_X\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle\right){\textnormal d} t\right).
\end{align}
On the other hand, $X$ is a.s. bounded on $[0,T]$, so that
\begin{align*}
\lim_{n\rightarrow\infty}\vert \xi_k - \xi_{n,k} \vert = \lim_{n\rightarrow\infty}\left\vert \int_0^{T}X_tf_{k}(t){\textnormal d} t - \int_0^{T}X_t f_{n,k}(t){\textnormal d} t \right \vert \leq\left(\sup_{t\in[0,T]}\vert X_t \vert\right) \lim_{n\rightarrow\infty}\int_0^{T}\vert f_{k}(t) - f_{n,k}(t) \vert{\textnormal d} t = 0,
\end{align*}
a.s. Therefore $\Xi_n := (\xi_{n,1},\cdots,\xi_{n,d})^{\textnormal{\textbf{T}}}$ converges a.s., and consequently also in distribution, to $\xi$. Together with \eqref{eq:rhs}, this implies that for each $\mathbf{z}$
\begin{align}\label{eq:final}
\lim_{n\rightarrow\infty}{\mathbb E}[e^{\i\langle \mathbf{z},\Xi_n \rangle}] = {\mathbb E}[e^{\i\langle \mathbf{z},\xi \rangle}] = \exp\left(-\int_0^T\Psi_X\left(\langle \mathbf{z}, \mathbf{u}(t) \rangle\right){\textnormal d} t\right).
\end{align}
Therefore, \eqref{eq:bert} is also proven for functions in $L^1$. Since each $\Xi_n$ has an ID distribution, Lemma 3.1.6 in \cite{Messer} guarantees that $\xi$ is also an ID random vector.
\end{proof}
\end{appendices}
\bibliographystyle{plain}
https://arxiv.org/abs/2008.06577 | Cycles of a given length in tournaments | We study the asymptotic behavior of the maximum number of directed cycles of a given length in a tournament: let $c(\ell)$ be the limit of the ratio of the maximum number of cycles of length $\ell$ in an $n$-vertex tournament and the expected number of cycles of length $\ell$ in the random $n$-vertex tournament, when $n$ tends to infinity. It is well-known that $c(3)=1$ and $c(4)=4/3$. We show that $c(\ell)=1$ if and only if $\ell$ is not divisible by four, which settles a conjecture of Bartley and Day. If $\ell$ is divisible by four, we show that $1+2\cdot\left(2/\pi\right)^{\ell}\le c(\ell)\le 1+\left(2/\pi+o(1)\right)^{\ell}$ and determine the value $c(\ell)$ exactly for $\ell = 8$. We also give a full description of the asymptotic structure of tournaments with the maximum number of cycles of length $\ell$ when $\ell$ is not divisible by four or $\ell\in\{4,8\}$. | \section{Introduction}
\label{sec:intro}
In this paper, we address one of the most natural extremal problems concerning tournaments:
\emph{What is the maximum number of cycles of a given length that can be contained in an $n$-vertex tournament?}
The cases of cycles of length three and four are well-understood.
An $n$-vertex tournament has at most $\frac{n(n^2-1)}{24}$ cycles of length three (cyclic triangles) if $n$ is odd, and
at most $\frac{n(n^2-4)}{24}$ if $n$ is even; both bounds are the best possible.
This result can be traced back to 1940 to the work of Kendall and Babington Smith~\cite{KenB40} and of Szele~\cite{Sze43}, also see~\cite{Moo15}, and
it is well-known that the number of cycles of length three is determined by the degree sequence of a tournament~\cite{Goo59}.
Beineke and Harary~\cite{BeiH65} and Colombo~\cite{Col64} proved in the 1960s the best possible bounds on the number of cycles of length four:
$\frac{n(n^2-1)(n-3)}{48}$ when $n$ is odd and $\frac{n(n^2-4)(n-3)}{48}$ when $n$ is even.
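As a sanity check of these exact bounds (illustrative only, not part of the original arguments), the following brute-force search over all $2^{10}$ tournaments on $n=5$ vertices confirms that both formulas evaluate to $5$ and are attained:

```python
from itertools import combinations, permutations, product

n = 5
pairs = list(combinations(range(n), 2))  # the 10 unordered pairs of vertices

def count_cycles(adj, length):
    # Each directed cycle of length L appears once per starting vertex,
    # i.e., exactly L times among the tuples enumerated below.
    total = 0
    for tup in permutations(range(n), length):
        if all(adj[tup[i]][tup[(i + 1) % length]] for i in range(length)):
            total += 1
    return total // length

max3 = max4 = 0
for orientation in product([0, 1], repeat=len(pairs)):
    adj = [[0] * n for _ in range(n)]
    for (u, v), bit in zip(pairs, orientation):
        if bit:
            adj[u][v] = 1
        else:
            adj[v][u] = 1
    max3 = max(max3, count_cycles(adj, 3))
    max4 = max(max4, count_cycles(adj, 4))

# For odd n = 5: n(n^2-1)/24 = 5 and n(n^2-1)(n-3)/48 = 5.
print(max3, max4)
```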
The asymptotics of the case of cycles of length five was determined only recently by Komarov and Mackey~\cite{KomM17}
who showed that the number of cycles of length five is asymptotically maximized if and only if the tournament is almost regular,
i.e., the behavior in this case is completely analogous to that of cycles of length three;
exact results on cycles of length five for regular tournaments and tournaments of odd order
were obtained by Savchenko in~\cite{Sav16,SavXX}.
In this paper,
we employ algebraic techniques
to provide asymptotically optimal results on the maximum number of cycles for each cycle length not divisible by four, and
determine the limit behavior in the case of cycle lengths divisible by four.
To state our results precisely, we need to fix some notation.
Let $C(n,\ell)$ be the maximum number of cycles of length $\ell$ in an $n$-vertex tournament.
We compare this quantity to the expected number of cycles of length $\ell$ in a random $n$-vertex tournament,
which is $R(n,\ell)=\frac{(\ell-1)!}{2^{\ell}}\binom{n}{\ell}$, and define
\[c(\ell)=\lim_{n\to\infty}\frac{C(n,\ell)}{R(n,\ell)}.\]
In particular, the results that we have mentioned earlier imply that $c(3)=c(5)=1$ and $c(4)=4/3$.
Bartley and Day conjectured the following.
\begin{conjecture}[{Bartley~\cite[Conjecture 104]{Bar18} and Day~\cite[Conjecture 40]{Day17}}]
\label{conj:equiv}
For $\ell\ge 3$, it holds that $c(\ell)=1$ if and only if $\ell$ is not divisible by four.
\end{conjecture}
We remark that the statement of Conjecture~\ref{conj:equiv} has been proven for \emph{regular} tournaments
by Savchenko~\cite{Sav16}, also see~\cite{Sav17}, and also jointly by Bartley and Day~\cite{Day17,Bar18},
and for $\ell\le 8$ by Bartley~\cite[Theorem 109]{Bar18}.
Since it is known that $c(\ell)>1$ for all $\ell$ divisible by four,
the following theorem, which is implied by Theorems~\ref{thm:ck1} and~\ref{thm:ck2}, settles the conjecture.
\begin{theorem}
\label{thm:ck1+2}
Let $\ell\ge 3$. If $\ell$ is not divisible by four, then $c(\ell)=1$.
\end{theorem}
If $\ell$ is divisible by four, we establish a lower bound and an asymptotically matching upper bound (Theorem~\ref{thm:ck4}):
\[1+2\cdot\left(2/\pi\right)^{\ell}\le c(\ell)\le 1+\left(2/\pi+o(1)\right)^{\ell}.\]
Our asymptotic result on $c(\ell)$ for $\ell$ divisible by four
provides strong evidence for the following conjecture on the value of $c(\ell)$ for such $\ell$ (we remark that Conjecture~\ref{conj:div4} is stated in~\cite{Day17} by giving an extremal construction,
which we mention at the end of Section~\ref{sec:prelim});
the conjecture is an extension of an earlier problem posed by Savchenko~\cite{Sav16} for regular tournaments.
\begin{conjecture}[{Bartley~\cite[Conjecture 106]{Bar18} and Day~\cite[Conjecture 45]{Day17}}]
\label{conj:div4}
If $\ell$ is divisible by four, then
\[c(\ell)=1+2\cdot\sum_{i=1}^{\infty}\left(\frac{2}{(2i-1)\pi}\right)^{\ell}.\]
\end{conjecture}
Our asymptotic result agrees with the conjecture on the dominant term of the sum.
We also show that $c(8)=332/315$ (Theorem~\ref{thm:c8}) and classify the extremal constructions (Theorem~\ref{thm:c48});
note that the value of $c(8)$ is the one given in Conjecture~\ref{conj:div4}.
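The value $c(8)=332/315$ is consistent with Conjecture~\ref{conj:div4}: using $\sum_{i\ge 1}(2i-1)^{-8}=(1-2^{-8})\zeta(8)$ and $\zeta(8)=\pi^8/9450$, the series for $\ell=8$ sums to $2\cdot\frac{2^8}{\pi^8}\cdot\frac{255}{256}\cdot\frac{\pi^8}{9450}=\frac{17}{315}$, so the conjectured value is $1+17/315=332/315$. A quick numerical check of this computation (ours, for illustration):

```python
from math import pi

# Partial sum of the series in the conjecture for l = 8:
#   c(8) = 1 + 2 * sum_{i >= 1} (2 / ((2i - 1) * pi))^8.
s = 1 + 2 * sum((2 / ((2 * i - 1) * pi)) ** 8 for i in range(1, 200))
print(s)  # should agree with 332/315 up to truncation/rounding error
```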
In addition to the results on the value of $c(\ell)$,
we have also been able to determine the asymptotic structure of extremal tournaments when $\ell$ is not divisible by four.
If $\ell$ is odd,
then tournaments achieving the maximum number of cycles of length $\ell$ are exactly those that are almost regular (Theorem~\ref{thm:ck1}), and
if $\ell$ is even but not divisible by four,
then a tournament achieves the maximum number of cycles of length $\ell$ if and only if it is quasirandom (Theorem~\ref{thm:ck2}).
In particular, maximizing the density of cycles of length $4k+2$ is a \emph{quasirandom-forcing} property,
i.e., a property that a tournament has if and only if it is quasirandom.
We remark that in the induced setting,
in addition to the density of transitive tournaments with four or more vertices,
which is known to be quasirandom-forcing, see~\cite{CorR17} and \cite[Exercise 10.44]{Lov93},
there is only one additional tournament such that its density is quasirandom-forcing~\cite{BucLSS21,CorPS19,HanKKMPSV19},
which is the unique $5$-vertex strongly connected tournament with diameter four.
\section{Preliminaries}
\label{sec:prelim}
In this section, we fix the notation used throughout the paper and present analytic and algebraic tools needed for our arguments.
The set of the first $n$ positive integers is denoted by $[n]$.
A \emph{tournament} is an orientation of a complete graph, and
the \emph{random tournament} is an orientation of a complete graph
where each edge is directed with probability $1/2$ in each of the two possible directions independently of the other edges.
Let $C(T,\ell)$ be the ratio of the number of cycles of length $\ell$ in an $n$-vertex tournament $T$ and
the expected number of cycles of length $\ell$ in the random $n$-vertex tournament.
In particular, the maximum value of $C(T,\ell)$,
where the maximum is taken over all $n$-vertex tournaments,
is $C(n,\ell)/R(n,\ell)$.
The \emph{adjacency matrix} $A$ of a tournament $T$ is the zero-one matrix
with rows and columns indexed by vertices of $T$ such that
$A_{ij}=1$ iff $T$ contains an edge from the $i$-th vertex to the $j$-th vertex.
The \emph{tournament matrix} of an $n$-vertex tournament $T$
is the matrix obtained from the adjacency matrix of $T$ by setting its diagonal entries to be equal to $1/2$ and
then dividing each entry of the matrix by $n$.
We say that a real square matrix $A$ of order $n$
is \emph{skew-symmetric} if $A=-A^T$, i.e., $A_{ij}=-A_{ji}$ for all $i,j\in [n]$, and
$A$ is \emph{complementary} if $A$ is non-negative and $A_{ij}+A_{ji}=1/n$ for all $i,j\in [n]$.
In particular, the tournament matrix of a tournament is complementary.
Finally,
if $A$ is an $n\times n$ matrix, then its \emph{Frobenius norm}, which is denoted by $\|A\|_F$, is
\[\|A\|_F=\sqrt{\sum_{i,j\in [n]}A_{ij}^2}.\]
We recall that $\|Av\|\le\|A\|_F\cdot\|v\|$ for every vector $v\in\RR^n$.
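For concreteness, this standard inequality can be checked on a random rational matrix (an illustration of ours; comparing squared norms keeps the arithmetic exact):

```python
from fractions import Fraction
import random

rng = random.Random(1)
n = 4
A = [[Fraction(rng.randint(-9, 9), 10) for _ in range(n)] for _ in range(n)]
v = [Fraction(rng.randint(-9, 9), 10) for _ in range(n)]

Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
lhs_sq = sum(x * x for x in Av)                    # ||A v||^2
frob_sq = sum(A[i][j] ** 2 for i in range(n) for j in range(n))
rhs_sq = frob_sq * sum(x * x for x in v)           # ||A||_F^2 * ||v||^2
print(lhs_sq <= rhs_sq)
```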
Since the trace of the $\ell$-th power of the adjacency matrix of $T$
is the number of closed walks of length $\ell$, we obtain the following;
note that we state the next proposition for tournament matrices rather than adjacency matrices.
\begin{proposition}
\label{prop:eigen}
Let $A$ be the tournament matrix of an $n$-vertex tournament $T$,
$\lambda_1,\ldots,\lambda_n$ be its eigenvalues, and $\ell\ge 3$ an integer.
It holds that
\[C(T,\ell)=2^{\ell}\cdot\sum_{i=1}^n \lambda_i^{\ell}+O(n^{-1}).\]
\end{proposition}
We next recall some basic properties of tournament matrices, and more generally complementary matrices, used in~\cite{ChaGKN19};
we remark that similar results were also used earlier by Brauer and Gentry~\cite{BraG68}.
\begin{proposition}
\label{prop:matrix}
Let $A$ be a complementary matrix.
There is a positive real number $\rho$ such that $\rho$ is a real eigenvalue of $A$, and
the absolute value of each eigenvalue of $A$ is at most $\rho$.
In addition,
each eigenvalue of $A$ has non-negative real part, and the sum of the eigenvalues is equal to $1/2$.
\end{proposition}
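Propositions~\ref{prop:eigen} and~\ref{prop:matrix} can be illustrated with exact rational arithmetic on a small example of our own: for the rotational tournament on $9$ vertices in which vertex $i$ beats $i+1,\ldots,i+4$ (modulo $9$), the trace of the tournament matrix $M$, i.e., the sum of its eigenvalues, equals $1/2$, and $2^3\cdot\Trace M^3$ equals exactly $1$, in line with the fact that regular tournaments asymptotically maximize the number of cycles of length three.

```python
from fractions import Fraction

n = 9  # rotational tournament: vertex i beats i+1, ..., i+4 (mod 9)
M = [[Fraction(0)] * n for _ in range(n)]
for i in range(n):
    M[i][i] = Fraction(1, 2 * n)            # diagonal entries 1/(2n)
    for d in range(1, (n - 1) // 2 + 1):
        M[i][(i + d) % n] = Fraction(1, n)  # edge entries 1/n

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(n))

M3 = matmul(matmul(M, M), M)
print(trace(M), 8 * trace(M3))
```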
\subsection{Tournament limits}
\label{subsec:limit}
We now define tournament limits,
which are analogous to graph limits described in detail in the monograph by Lov\'asz~\cite{Lov12} and
which were used earlier in~\cite{ChaGKN19,Tho18,ZhaZ20}, also see~\cite{DiaJ08,LovS10i} for related concepts.
Most of the results translate readily from the setting of graphs to that of tournaments.
However,
the results on the convergence of spectra seem to be an exception as we point out further.
A \emph{tournamenton} is a measurable function $W:[0,1]^2\to [0,1]$ such that $W(x,y)+W(y,x)=1$ for all $(x,y)\in[0,1]^2$.
The \emph{density} of a tournament $T$ in a tournament $T_0$, which is denoted by $d(T,T_0)$,
is the probability that a uniformly randomly chosen subset of $|T|$ vertices of $T_0$ induces a tournament isomorphic to $T$;
if $|T|>|T_0|$, we set $d(T,T_0)=0$.
The \emph{density} of an $n$-vertex tournament $T$ in a tournamenton $W$ is defined as
\[d(T,W)=\frac{|T|!}{|\Aut(T)|}\int_{x_1,\ldots,x_n\in [0,1]}\prod_{i\to j}W(x_i,x_j)\;\mathrm{d} x_1\cdots\mathrm{d} x_n,\]
where the product is taken over all $i$ and $j$ such that the $i$-th vertex is joined by an edge to the $j$-th vertex.
Two tournamentons $W$ and $W'$ are \emph{weakly isomorphic} if $d(T,W)=d(T,W')$ for every tournament $T$.
We say that a sequence $(T_n)_{n\in\NN}$ of tournaments is \emph{convergent}
if $|T_n|$ tends to infinity and the sequence $d(T,T_n)$ converges for every tournament $T$.
A tournamenton $W$ is a \emph{limit} of a convergent sequence $(T_n)_{n\in\NN}$ of tournaments
if $d(T,W)$ is equal to the limit of $d(T,T_n)$ for every tournament $T$.
For example,
the tournamenton equal to $1/2$ everywhere
is the limit of the sequence of random $n$-vertex tournaments with probability one.
A tournamenton $W$ is called \emph{regular} if
\[\int_{[0,1]}W(x,y)\;\mathrm{d} y=1/2\]
for almost every $x\in [0,1]$;
such tournamentons are limits of tournaments
where the in-degrees and out-degrees of most of the vertices are asymptotically equal to half of the total number of vertices.
An analogous line of arguments as in the graph case yields that
every convergent sequence of tournaments has a limit and
every tournamenton is a limit of a convergent sequence of tournaments.
In particular, a tournamenton $W$ is a limit of $W$-random tournaments that we next define.
An $n$-vertex \emph{$W$-random tournament} is obtained as follows:
sample $n$ points $x_1,\ldots,x_n$ uniformly and independently in $[0,1]$ and
orient the edge between the $i$-th and $j$-th vertex from the $i$-th vertex to the $j$-th vertex with probability $W(x_i,x_j)$.
Note that the expected density of a tournament $T$ in a $W$-random tournament is equal to $d(T,W)$.
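The sampling procedure just described can be sketched as follows (the function name and the seed are our illustrative choices):

```python
import random

def sample_w_random_tournament(W, n, rng=random.Random(0)):
    """Sample an n-vertex W-random tournament; returns an adjacency matrix."""
    xs = [rng.random() for _ in range(n)]
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # Orient i -> j with probability W(x_i, x_j), else j -> i.
            if rng.random() < W(xs[i], xs[j]):
                adj[i][j] = 1
            else:
                adj[j][i] = 1
    return adj

# The constant-1/2 tournamenton yields the uniformly random tournament.
adj = sample_w_random_tournament(lambda x, y: 0.5, 50)
# Sanity check: exactly one direction between every pair of vertices.
ok = all(adj[i][j] + adj[j][i] == 1
         for i in range(50) for j in range(50) if i != j)
print(ok)
```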
We next introduce the quantity $C(W,\ell)$, which is the limit analogue of $C(T,\ell)$ defined earlier:
\[C(W,\ell)=2^{\ell}\int_{x_1,\ldots,x_\ell\in [0,1]}W(x_1,x_2)W(x_2,x_3)\cdots W(x_{\ell-1},x_{\ell})W(x_{\ell},x_1)\;\mathrm{d} x_1\cdots x_{\ell}.\]
It follows that $c(\ell)$ is the maximum of $C(W,\ell)$ where the maximum is taken over all tournamentons $W$ (and
it can be shown that the maximum is indeed attained).
Note that the expected value of $C(T,\ell)$ for an $n$-vertex $W$-random tournament $T$, $n\ge\ell$, is equal to $C(W,\ell)$.
If $W$ and $W'$ are two tournamentons, then the \emph{cut distance} between $W$ and $W'$,
which is denoted by $\cut{W}{W'}$, is defined as
\[\cut{W}{W'}=\sup_{X,Y\subseteq [0,1]}\left|\int_{X\times Y}\left(W(x,y)-W'(x,y)\right)\;\mathrm{d} x\;\mathrm{d} y\right|,\]
where the supremum is taken over all measurable subsets $X$ and $Y$ of $[0,1]$.
As in the graph case, it can be shown that
\begin{equation}
\left|d(T,W)-d(T,W')\right|\le |T|^2\cut{W}{W'}\label{eq:cut}
\end{equation}
for every tournament $T$ and all tournamentons $W$ and $W'$.
Every complementary matrix $A$ of order $k$ can be associated with a tournamenton $W$ as follows:
the interval $[0,1]$ is split into $k$ disjoint measurable sets $U_1,\ldots,U_k$ each of measure $1/k$ and
$W(x,y)=k\cdot A_{ij}$ if $x\in U_i$ and $y\in U_j$;
tournamentons that can be obtained in this way are called \emph{step tournamentons}.
A \emph{step approximation} of a tournamenton $W$ is a complementary matrix $A$ such that
there exists a partition of the interval $[0,1]$ into $k$ disjoint measurable sets $U_1,\ldots,U_k$, each of measure $1/k$, such that
\[A_{ij}=k\int_{U_i\times U_j}W(x,y)\;\mathrm{d} x\;\mathrm{d} y\]
for every $i,j\in [k]$.
The step tournamenton associated with a step approximation $A$ and the sets $U_1,\ldots,U_k$ used to define $A$ is denoted by $W[A]$.
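As a small sanity check (with an explicit toy tournamenton of our own choosing), every step approximation is indeed a complementary matrix; for a tournamenton that is affine in each variable, the midpoint rule computes the defining integrals exactly:

```python
from fractions import Fraction

def W(x, y):
    # A toy tournamenton (our choice): W(x, y) = (1 + x - y) / 2,
    # which satisfies W(x, y) + W(y, x) = 1.
    return (1 + x - y) / 2

k = 6
mid = [Fraction(2 * i + 1, 2 * k) for i in range(k)]  # midpoints of U_1..U_k
# W is affine in each variable, so the midpoint rule is exact here:
# A_ij = k * integral of W over U_i x U_j = W(mid_i, mid_j) / k.
A = [[W(mid[i], mid[j]) / k for j in range(k)] for i in range(k)]

complementary = all(A[i][j] + A[j][i] == Fraction(1, k)
                    for i in range(k) for j in range(k))
print(complementary)
```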
We say that a sequence of step approximations $(A_n)_{n\in\NN}$ of a tournamenton $W$ is \emph{convergent}
if $\cut{W}{W[A_n]}$ converges to zero.
The Regularity Lemma yields that for every $\varepsilon>0$ and every tournamenton $W$,
there exists a step approximation $A$ of the tournamenton $W$ such that $\cut{W}{W[A]}\le\varepsilon$.
In particular, every tournamenton has a convergent sequence of step approximations and
we obtain using \eqref{eq:cut} the following.
\begin{proposition}
\label{prop:cut}
If $W$ is a tournamenton and $(A_n)_{n\in\NN}$ is a convergent sequence of its step approximations,
then
\[d(T,W)=\lim_{n\to\infty}d(T,W[A_n])\]
for every tournament $T$.
In particular, it holds that
\[C(W,\ell)=2^\ell\lim_{n\to\infty}\Trace A_n^\ell\]
for every $\ell\ge 3$.
\end{proposition}
We finish this subsection by describing a tournamenton that
is believed to be extremal for Conjecture~\ref{conj:div4} for every $\ell$ divisible by four.
For $x,y\in[0,1]$,
define $W_C(x,x)=1/2$, $W_C(x,y)=1$ if $y\in (x-1,x-1/2)\cup (x,x+1/2]$, and $W_C(x,y)=0$ otherwise.
The tournamenton $W_C$, which we refer to as the carousel tournamenton, is depicted in Figure~\ref{fig:carouselle}, and
it is the limit of the following tournaments described in~\cite{Day17} in relation to Conjecture~\ref{conj:div4}:
take vertices $0,\ldots,2n$ and join a vertex $i$ to the vertices $i+1,\ldots,i+n$ (computations modulo $2n+1$).
These tournaments are called \emph{carousel tournaments}.
The value of $C(W_C,\ell)$ for every positive integer $\ell$ divisible by four
is equal to the value of $c(\ell)$ given in Conjecture~\ref{conj:div4}.
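A quick check (ours, for illustration) that the carousel tournaments are indeed tournaments and that they are regular, with every out-degree equal to $n$:

```python
m = 4          # the carousel tournament on 2m+1 = 9 vertices
N = 2 * m + 1
# Vertex i beats i+1, ..., i+m (indices modulo 2m+1).
adj = [[1 if (j - i) % N in range(1, m + 1) else 0 for j in range(N)]
       for i in range(N)]

# Exactly one direction between distinct vertices, no loops.
is_tournament = all((adj[i][j] + adj[j][i] == 1) == (i != j)
                    for i in range(N) for j in range(N))
out_degrees = [sum(row) for row in adj]
print(is_tournament, out_degrees)
```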
\begin{figure}
\begin{center}
\epsfysize 3cm
\epsfbox{tourn-ck-7.mps}
\hskip 2cm
\epsfbox{tourn-ck-1.mps}
\end{center}
\caption{The tournament matrix of the $9$-vertex carousel tournament and the carousel tournamenton.
The origin of the coordinate system is in the top left corner, the $x$-axis is vertical and the $y$-axis is horizontal.
The black color represents the value $1$ and the white color the value $0$.}
\label{fig:carouselle}
\end{figure}
\subsection{Spectral properties of tournaments and their limits}
\label{subsec:spectrum}
A tournamenton $W$ can be viewed as a linear operator from $L_2[0,1]$ to $L_2[0,1]$
defined as
\[(Wf)(x)=\int_{[0,1]}W(x,y)f(y)\;\mathrm{d} y.\]
Since this operator is compact (as all Hilbert-Schmidt integral operators are),
its spectrum $\sigma(W)$ is either finite or countably infinite,
the only possible accumulation point of $\sigma(W)$ is zero, and
every non-zero element of $\sigma(W)$ is an eigenvalue of $W$.
Moreover, for every non-zero $\lambda\in\sigma(W)$, there exists $k_{\lambda}\in\NN$ such that
the kernels of $(W-\lambda)^{k_{\lambda}}$ and $(W-\lambda)^{k_{\lambda}+1}$ are the same and
their dimension is finite.
For example, the spectrum of the carousel tournamenton $W_C$ defined at the end of Subsection~\ref{subsec:limit}
consists of $1/2$, $\pm\unit/((2k-1)\pi)$ for $k\in\NN$, and $0$.
It is plausible that
if $(T_n)_{n\in\NN}$ is a convergent sequence of tournaments and $W$ is a limit tournamenton,
then the normalized spectra of the tournament matrices of $T_n$ converge to the spectrum of $W$
in the sense used in the graph setting in~\cite[Section 6]{BorCLSV12}, also see~\cite[Chapter 11]{Lov12}.
However, the equality between the density of cycles of length $\ell$ and
the trace of $W^\ell$ for $\ell\ge 3$ (note that $W^\ell$ is a trace-class operator for $\ell\ge 2$),
which forms the core of the argument in~\cite[Section 6]{BorCLSV12} and is straightforward in the case of graphons,
is not obvious in the case of tournamentons.
While we establish a close analogy of this equality as \eqref{eq:limitpower} in Proposition~\ref{prop:limitset},
it does not seem to be strong enough to give results completely analogous to those on the convergence of spectra of graph limits.
To proceed with our exposition, we need to define a notion of convergence for multisets of complex numbers.
If $z$ is a complex number,
we write $N_{\varepsilon}(z)$ for the set of all complex numbers $z'$ with $|z-z'|\le\varepsilon$.
We say that a sequence $(X_i)_{i\in\NN}$ of multisets of complex numbers \emph{converges as multisets}
to a multiset $X$ if the following holds:
\begin{itemize}
\item if $z$ is an element of $X$ with a finite multiplicity,
then there exists $\varepsilon_0>0$ such that for every $\varepsilon\in(0,\varepsilon_0)$,
there exists $i_0$ such that
both $|X_i\cap N_{\varepsilon}(z)|$ and $|X\cap N_{\varepsilon}(z)|$ are finite and equal for every $i\ge i_0$, and
\item if $z$ is an element of $X$ with infinite multiplicity,
then for every $\varepsilon>0$ and every $k\in\NN$,
there exists $i_0$ such that
$|X_i\cap N_{\varepsilon}(z)|\ge k$ for every $i\ge i_0$.
\end{itemize}
We say that a sequence $(A_n)_{n\in\NN}$ of step approximations of a tournamenton $W$ is \emph{strongly convergent}
if it is convergent and the spectra of $A_n$ converge as multisets.
Later, we will show that every convergent sequence of step approximations is also strongly convergent
but we treat the two notions as distinct until we establish their equivalence.
We summarize properties of the limit multiset of a strongly convergent sequence of step approximations in the next proposition.
\begin{proposition}
\label{prop:limitset}
Let $W$ be a tournamenton and
let $(A_n)_{n\in\NN}$ be a strongly convergent sequence of step approximations of $W$.
The limit multiset $X$ of the spectra of $A_n$ satisfies the following.
\begin{itemize}
\item The set $X$ contains zero and its multiplicity is infinite.
\item Every non-zero element of $X$ has finite multiplicity.
\item Every element of $X$ has a non-negative real part.
\item The real parts of the elements of $X$ sum to at most $1/2$ (taking their multiplicities into account).
\item If $x$ is an element of $X$ that is not real, then $X$ contains the complex conjugate of $x$ and
the multiplicities of $x$ and its complex conjugate are the same.
\item If $X$ contains a non-zero element,
then it contains a positive real $\rho$ such that the absolute value of all elements of $X$ is at most $\rho$.
\item The sum of the $\ell$-th powers of the elements of $X$ is absolutely convergent for every $\ell\ge 2$.
\item It holds that
\begin{equation}
C(W,\ell)=2^{\ell}\cdot\sum_{x\in X}x^\ell\label{eq:limitpower}
\end{equation}
for every $\ell\ge 3$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $X_n$ be the spectrum of $A_n$.
By Proposition~\ref{prop:matrix},
all the elements of $X_n$ have non-negative real parts and the sum of their real parts is $1/2$.
Hence, all the elements of $X$ have non-negative real parts and their real parts sum to at most $1/2$.
Since the matrix $A_n$ is real,
every non-real eigenvalue of $A_n$ comes in a pair with a complex conjugate eigenvalue of the same multiplicity.
Hence, the pairs of complex conjugate non-real elements of $X$ must have the same multiplicity.
Let $\rho_n$ be the largest real eigenvalue of $A_n$.
Note that the absolute value of all the elements of $X_n$ is at most $\rho_n$ by Proposition~\ref{prop:matrix}.
The sequence $\rho_n$ converges (because the sets $X_n$ converge as multisets) and its limit $\rho$ belongs to $X$.
If $\rho=0$, then $X$ has no non-zero elements.
If $\rho>0$, then the absolute value of all elements of $X$ is at most $\rho$.
We next show that the sum of the $\ell$-th powers of the elements of $X$ is absolutely convergent for every $\ell\ge 2$.
Let $X^+$ and $X^-$ be the multisets of the non-zero elements $x$ of $X$ such that $\Real x\ge|x|/2$ and $\Real x\le|x|/2$, respectively.
Note that $\Real x^2 \le - |x^2|/2 < 0$ for every $x\in X^-$.
First observe that
\begin{equation}
\sum_{x\in X^+}|x|\le 2\sum_{x\in X^+}\Real x\le 1.\label{eq:X+}
\end{equation}
Since $\Real x^2\le (\Real x)^2$ for every $x\in X_n$ and
the sum of real parts of the elements of $X_n$ is $1/2$,
it follows that
\[\sum_{x\in X_n,\Real x^2>0}\Real x^2\le\frac{1}{4}.\]
Since the trace of $A_n^2$ is non-negative,
we obtain that
\[-\sum_{x\in X_n,\Real x^2<0}\Real x^2\le\frac{1}{4}.\]
In particular, it holds that
\[
-\sum_{x\in X^-}\Real x^2\le\frac{1}{4}.
\]
As $-\Real x^2\ge |x^2|/2$ for every $x\in X^-$, we obtain that
\begin{equation}
\sum_{x\in X^-}|x|^2\le -2\sum_{x\in X^-}\Real x^2\le\frac{1}{2}.\label{eq:X-}
\end{equation}
The inequalities \eqref{eq:X+} and \eqref{eq:X-} yield that
all elements of the multiset $X$ except for $0$ have finite multiplicity and
the sum of the $\ell$-th powers of the elements of $X$ is absolutely convergent for every $\ell\ge 2$.
Since the sizes of the multisets $X_n$ tend to infinity and $|x|\le 1/2$ for all their elements,
the multiset $X$ must contain an element with infinite multiplicity;
since such an element can be only $0$, the multiset $X$ contains $0$ with infinite multiplicity.
It remains to establish \eqref{eq:limitpower}. Fix $\varepsilon\in (0,1)$.
Similarly to the previous paragraph,
define $X_n^+$ to be the multiset of the elements $x$ of $X_n$ such that $\Real x\ge|x|/2$ and
$X_n^-$ to be the multiset of the elements $x$ of $X_n$ such that $\Real x\le|x|/2$.
Along the lines leading to \eqref{eq:X+} and \eqref{eq:X-}, we obtain that
\[\sum_{x\in X_n^+}|x|^2\le\frac{1}{2}\sum_{x\in X_n^+}|x|\le\frac{1}{2}\qquad\mbox{and}\qquad\sum_{x\in X_n^-}|x|^2\le\frac{1}{2}.\]
Hence, we obtain for $\ell \geq 3$ that
\begin{equation}
\left|\Trace A_n^\ell-\sum_{x\in X_n,|x|>\varepsilon}x^\ell\right|
\le\sum_{x\in X_n,|x|\le\varepsilon}|x|^\ell
\le\varepsilon\sum_{x\in X_n,|x|\le\varepsilon}|x|^2\le\varepsilon.\label{eq:Xneps}
\end{equation}
Similarly, we obtain that
\begin{equation}
\left|\sum_{x\in X,|x|\le\varepsilon}x^\ell\right|
\le\sum_{x\in X,|x|\le\varepsilon}|x|^\ell
\le\varepsilon\sum_{x\in X,|x|\le\varepsilon}|x|^2\le\varepsilon.\label{eq:Xeps}
\end{equation}
Consequently, since the multisets $X_n$ converge to $X$, it follows from \eqref{eq:Xneps} and \eqref{eq:Xeps} that
\begin{equation}
\lim_{n\to\infty}\left|\Trace A_n^\ell-\sum_{x\in X}x^\ell\right|\le 2\varepsilon.\label{eq:Xnlim}
\end{equation}
Since the estimate \eqref{eq:Xnlim} holds for every $\varepsilon\in (0,1)$,
it follows that
\[\lim_{n\to\infty}\Trace A_n^\ell=\sum_{x\in X}x^\ell.\]
The identity \eqref{eq:limitpower} now follows from Proposition~\ref{prop:cut}.
\end{proof}
By compactness, every convergent sequence of step approximations of $W$ has a strongly convergent subsequence.
The limit multisets of two such strongly convergent subsequences must be the same
since no two different multisets can be absolutely convergent and satisfy \eqref{eq:limitpower} for every $\ell\ge 3$.
Hence, every convergent sequence of step approximations of $W$ is also strongly convergent,
which we state as a corollary.
\begin{corollary}
\label{cor:strongconv}
Every convergent sequence $(A_n)_{n\in\NN}$ of step approximations of a tournamenton $W$ is strongly convergent.
\end{corollary}
Corollary~\ref{cor:strongconv} implies that
the limit multiset of every convergent sequence of step approximations of a tournamenton $W$ is the same;
we will write $\wsigma(W)$ for this limit multiset after removing $0$.
Proposition~\ref{prop:limitset} now yields that
\begin{equation}
C(W,\ell)=2^{\ell}\cdot\sum_{\lambda\in\wsigma(W)}\lambda^\ell\label{eq:lambda}
\end{equation}
for every $\ell\ge 3$.
We conclude with two propositions relating structural properties of a tournamenton $W$ and $\wsigma(W)$.
\begin{proposition}
\label{prop:reg}
A tournamenton $W$ is regular if and only if $1/2\in\wsigma(W)$.
\end{proposition}
\begin{proof}
Fix a tournamenton $W$.
Let $(A_n)_{n\in\NN}$ be a convergent sequence of step approximations of $W$.
Let $k_n$ be the order of $A_n$,
$\rho_n$ the largest real eigenvalue of $A_n$, and
$v_n$ the corresponding eigenvector with norm one.
Further let $\JJ_n$ be the $k_n\times k_n$ matrix with all entries equal to $k_n^{-1}$ and
$j_n$ the $k_n$-dimensional vector with all entries equal to $k_n^{-1/2}$.
Note that $j_n=\JJ_n j_n$ and $1$ is the only non-zero eigenvalue of $\JJ_n$.
Finally, let $j_0$ be the function $[0,1]\to [0,1]$ such that $j_0(x)=1$ for all $x\in [0,1]$.
If $W$ is a regular tournamenton, then $A_nj_n=j_n/2$.
It follows that $1/2$ is an eigenvalue of $A_n$ for every $n\in\NN$ and so $1/2\in\wsigma(W)$.
We next assume that $1/2\in\wsigma(W)$ and show that the tournamenton $W$ is regular.
Since the matrix $A_n$ cannot have a real eigenvalue larger than $1/2$ by Proposition~\ref{prop:matrix},
it follows that the values of $\rho_n$ converge to $1/2$.
Observe that
\[v_n^T\JJ_nv_n=v_n^T(A_n^T+A_n)v_n=v_n^TA_n^Tv_n+v_n^TA_nv_n=2\rho_n.\]
It follows that $\langle v_n,j_n\rangle^2=2\rho_n$ and, choosing the sign of $v_n$ so that $\langle v_n,j_n\rangle\ge 0$, that
\[\|v_n-j_n\|^2=2-2\langle v_n,j_n\rangle=2-2\sqrt{2\rho_n}.\]
Since $\|A_n\|_F\le 1$, we obtain that
\begin{align*}
\|j_n-2A_nj_n\| &\le\|j_n-v_n\|+\|v_n-2A_nv_n\|+2\|A_nv_n-A_nj_n\|\\
&\le 3\|j_n-v_n\|+1-2\rho_n = 3\sqrt{2-2\sqrt{2\rho_n}}+1-2\rho_n.
\end{align*}
Since $\rho_n$ converges to $1/2$, it follows that
\[\lim_{n\to\infty}\|j_n-2A_nj_n\|=0.\]
Since the step tournamentons $W[A_n]$ converge to the tournamenton $W$ in the cut distance,
we obtain that $\|j_0-2Wj_0\|_2=0$
where $Wj_0$ is the function resulting from applying the linear operator given by $W$ to the function $j_0$.
It follows that the tournamenton $W$ is regular.
\end{proof}
\begin{proposition}
\label{prop:qrand}
A tournamenton $W$ is equal to $1/2$ almost everywhere if and only if $\wsigma(W)=\{1/2\}$.
\end{proposition}
\begin{proof}
Fix a tournamenton $W$.
Let $(A_n)_{n\in\NN}$ be a convergent sequence of step approximations of $W$, and
let $k_n$ be the order of $A_n$.
Further let $\JJ_n$ be the $k_n\times k_n$ matrix with all entries equal to $(2k_n)^{-1}$ and
$j_n$ the $k_n$-dimensional vector with all entries equal to $k_n^{-1/2}$.
Note that the definition of the matrix $\JJ_n$ differs from that in the proof of Proposition~\ref{prop:reg}.
In particular, it holds that $\JJ_nj_n=j_n/2$.
If the tournamenton $W$ is equal to $1/2$ almost everywhere,
then $A_n=\JJ_n$ for every $n\in\NN$ and the only non-zero eigenvalue of $A_n$ is $1/2$.
It follows that $\wsigma(W)=\{1/2\}$.
We next assume that $\wsigma(W)=\{1/2\}$ and show that $W$ is equal to $1/2$ almost everywhere.
Since $1/2\in\wsigma(W)$, the tournamenton $W$ is regular and it follows that $A_nj_n=j_n/2$.
Define $B_n=A_n-\JJ_n$ and observe that $B_n$ is a skew-symmetric matrix and that $B_nj_n$ is the zero vector,
i.e., the vector $j_n$ belongs to the kernel of~$B_n$.
Hence, the non-zero eigenvalues of $A_n$ are $1/2$ and the non-zero eigenvalues of~$B_n$,
which are square roots of the (real and negative) eigenvalues of the symmetric matrix $B_n^2$;
in particular, they all are purely imaginary.
Suppose that $W$ is not equal to $1/2$ almost everywhere,
in particular, the cut distance of $W$ and the tournamenton equal to $1/2$ everywhere is positive.
Since the tournamentons $W[A_n]$ converge to $W$ in the cut distance,
there exists a sequence of vectors $v_n\in\RR^{k_n}$ with $\|v_n\|=1$ and a real $\delta>0$ such that
\[\lim_{n\to\infty}\|B_nv_n\|\ge\delta.\]
In particular,
\[\lim_{n\to\infty} |v_n^TB_n^2v_n|\ge\delta^2,\]
which implies that the smallest eigenvalue of $B_n^2$ is at most $-\delta^2/2$ for all sufficiently large $n$.
Hence, all but finitely many of the matrices $B_n$ have a purely imaginary eigenvalue with absolute value at least $\delta/\sqrt{2}$.
However, this is impossible since $\wsigma(W)=\{1/2\}$.
We conclude that $W$ is equal to $1/2$ almost everywhere.
\end{proof}
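The decomposition $A_n=\JJ_n+B_n$ used in the proof above can be illustrated on the tournament matrix of the $9$-vertex carousel tournament (an example of ours): the matrix $B$ is skew-symmetric, the all-equal vector lies in its kernel, and $\Trace B^2\le 0$, as its non-zero eigenvalues are purely imaginary.

```python
from fractions import Fraction

n = 9
# Tournament matrix of the 9-vertex carousel tournament
# (vertex i beats i+1, ..., i+4 modulo 9); a regular tournament.
M = [[Fraction(1, 2 * n) if i == j else
      Fraction(1, n) if (j - i) % n <= 4 else Fraction(0)
      for j in range(n)] for i in range(n)]
J = [[Fraction(1, 2 * n)] * n for _ in range(n)]   # all entries 1/(2n)
B = [[M[i][j] - J[i][j] for j in range(n)] for i in range(n)]

skew = all(B[i][j] == -B[j][i] for i in range(n) for j in range(n))
row_sums_zero = all(sum(row) == 0 for row in B)    # B j_n = 0
trace_B2 = sum(B[i][j] * B[j][i] for i in range(n) for j in range(n))
print(skew, row_sums_zero, trace_B2)
```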
\section{Cycles of length not divisible by four}
\label{sec:ck12}
In this section, we compute $c(\ell)$ when $\ell$ is not divisible by four and
we characterize tournamentons that are extremal.
The proofs of both Theorems~\ref{thm:ck1} and~\ref{thm:ck2}
are based on analyzing the spectra of the linear operators associated with tournamentons;
however, the arguments apply equally well in the setting of tournament matrices.
\begin{theorem}
\label{thm:ck1}
If $\ell\ge 3$ is odd,
then $C(W,\ell)\le 1$ for every tournamenton $W$, and equality holds if and only if $W$ is regular.
In particular, $c(\ell)=1$.
\end{theorem}
\begin{proof}
Fix $\ell\ge 3$ and a tournamenton $W$.
Let $\rho$ be the largest positive real contained in $\wsigma(W)$.
We start with establishing the following.
If $z$ is a complex number with $|z|\le\rho$ and $\Real z\ge 0$,
then
\begin{equation}
\Real z^{\ell}\le \ell \rho^{\ell-1} \Real z, \label{eq:ck1}
\end{equation}
and equality holds if and only if $\Real z=0$.
Consider such $z$ and
let $\alpha$ be such that $\Real z=|z|\cdot\cos\alpha$ and $\Imag z=|z|\cdot\sin\alpha$.
If $\Real z=0$, the estimate \eqref{eq:ck1} holds with equality.
Hence, we can assume that $\alpha\in [0,\pi/2)$ (considering the complex conjugate of $z$ if needed).
If $\ell$ is one modulo four, we set $\beta=\pi/2-\alpha$ and obtain the following:
\[
\Real z^{\ell}=|z|^{\ell}\cos\ell\alpha
=|z|^{\ell}\sin\ell\beta
<\ell|z|^{\ell}\sin\beta
=\ell|z|^{\ell}\cos\alpha
=\ell|z|^{\ell-1}\Real z.
\]
If $\ell$ is three modulo four, we set $\beta=-\pi/2+\alpha$ and obtain the following:
\[
\Real z^{\ell}=|z|^{\ell}\cos\ell\alpha
=|z|^{\ell}\sin\ell\beta
<-\ell|z|^{\ell}\sin\beta
=\ell|z|^{\ell}\cos\alpha
=\ell|z|^{\ell-1}\Real z.
\]
We now obtain the estimate \eqref{eq:ck1} using $|z|\le\rho$.
We next bound the sum of the $\ell$-th powers of the elements of $\wsigma(W)$
by applying~\eqref{eq:ck1} to every element of $\wsigma(W)$ except for $\rho$.
Note that we treat $\wsigma(W)$ as a multiset,
i.e., if the multiplicity of $\rho$ in $\wsigma(W)$ is larger than one,
then $\wsigma(W)\setminus\{\rho\}$ contains~$\rho$.
\begin{equation}
\sum_{\lambda\in\wsigma(W)}\lambda^{\ell}
=\sum_{\lambda\in\wsigma(W)}\Real\lambda^{\ell}
\le\rho^\ell+\;\; \sum_{\mathclap{\lambda\in\wsigma(W)\setminus\{\rho\}}}\;\; \ell \rho^{\ell-1}\Real\lambda
\le\left(\rho+\;\; \sum_{\mathclap{\lambda\in\wsigma(W)\setminus\{\rho\}}}\;\; \Real\lambda\right)^\ell
\label{eq:ck1s}
\end{equation}
The second inequality in \eqref{eq:ck1s} follows from the binomial theorem, as all the real parts involved are non-negative.
Since the sum of the real parts of the elements of $\wsigma(W)$ is at most $1/2$ by Proposition~\ref{prop:limitset},
we obtain that
\[\sum_{\lambda\in\wsigma(W)}\lambda^{\ell}\le\frac{1}{2^{\ell}}.\]
The identity \eqref{eq:lambda} now yields that $C(W,\ell)\le 1$ and
since the choice of $W$ was arbitrary, it follows that $c(\ell)\le 1$.
Moreover, if $C(W,\ell)=1$, the sum of real parts of the elements of $\wsigma(W)$ is $1/2$ and
the estimate \eqref{eq:ck1} holds with equality for every element of $\wsigma(W)\setminus\{\rho\}$.
In particular, the real part of every element of $\wsigma(W)\setminus\{\rho\}$ is zero.
Hence, if $C(W,\ell)=1$, then $\rho=1/2$ and the tournamenton $W$ is regular by Proposition~\ref{prop:reg}.
\end{proof}
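The elementary estimate \eqref{eq:ck1} is easy to sanity-check numerically. The following Python sketch (illustrative only, not part of the proof) samples complex numbers $z$ with $|z|\le\rho$ and $\Real z\ge 0$ and verifies the inequality for several odd values of $\ell$:

```python
import cmath
import random

def check_ck1(ell, rho=0.5, samples=10000, seed=7):
    # check Re(z^ell) <= ell * rho^(ell-1) * Re(z) for |z| <= rho, Re(z) >= 0
    rng = random.Random(seed)
    for _ in range(samples):
        r = rho * rng.random()
        alpha = rng.uniform(-cmath.pi / 2, cmath.pi / 2)  # guarantees Re(z) >= 0
        z = cmath.rect(r, alpha)
        if (z ** ell).real > ell * rho ** (ell - 1) * z.real + 1e-12:
            return False
    return True

print(all(check_ck1(ell) for ell in (3, 5, 7, 9)))  # True
```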
We next focus on the case when $\ell$ is even but not divisible by four.
\begin{theorem}
\label{thm:ck2}
If $\ell\ge 6$ is even but not divisible by four,
then $C(W,\ell)\le 1$ for every tournamenton $W$ and equality holds if and only if $W$ is equal to $1/2$ almost everywhere.
In particular, $c(\ell)=1$.
\end{theorem}
\begin{proof}
Fix an even $\ell\ge 6$ that is not divisible by four and a tournamenton $W$, and
let $\rho$ be the largest positive real contained in $\wsigma(W)$.
We start by establishing the following claim.
If $z$ is a complex number with $|z|\le\rho$ and $\Real z\ge 0$,
then
\begin{equation}
\Real z^{\ell}\le \ell \rho^{\ell-1} \Real z, \label{eq:ck2}
\end{equation}
and equality holds only if $z=0$.
Consider such $z\not=0$ and
let $\alpha$ be such that $\Real z=|z|\cdot\cos\alpha$ and $\Imag z=|z|\cdot\sin\alpha$.
By symmetry, we can assume that $\alpha\in [0,\pi/2]$.
Let $\beta=\pi/2-\alpha$.
We first show that
\begin{equation}
-\cos\ell\beta<\ell\sin\beta.\label{eq:ck2sin}
\end{equation}
If $0\le\beta<\frac{\pi}{2\ell}$, then $\cos\ell\beta>0$, and
the inequality in \eqref{eq:ck2sin} holds since its left side is negative while the right side is non-negative.
If $\beta\ge\frac{\pi}{2\ell}$, then $\sin\beta>\frac{1}{\ell}$, and
the inequality in \eqref{eq:ck2sin} holds since its right side is larger than one while its left side is at most one.
We now apply \eqref{eq:ck2sin} as follows.
\[
\Real z^{\ell}=|z|^{\ell}\cos\ell\alpha
=-|z|^{\ell}\cos\ell\beta
<\ell|z|^{\ell}\sin\beta
=\ell|z|^{\ell}\cos\alpha
=\ell|z|^{\ell-1}\Real z.
\]
We now obtain the estimate \eqref{eq:ck2} using $|z|\le\rho$.
Similarly to the proof of the previous theorem,
we bound the sum of the $\ell$-th powers of the elements of $\wsigma(W)$ using \eqref{eq:ck2}
as follows.
\[
\sum_{\lambda\in\wsigma(W)}\lambda^{\ell}
=\sum_{\lambda\in\wsigma(W)}\Real\lambda^{\ell}
\le\rho^\ell+\sum_{\lambda\in\wsigma(W)\setminus\{\rho\}}\ell \rho^{\ell-1}\Real\lambda
\le\left(\rho+\sum_{\lambda\in\wsigma(W)\setminus\{\rho\}}\Real\lambda\right)^\ell
\]
Note that the first inequality is strict unless every element of $\wsigma(W)\setminus\{\rho\}$ is zero, i.e., unless $\rho$ is the only non-zero element of $\wsigma(W)$.
Since the sum of the real parts of the elements of $\wsigma(W)$ is at most $1/2$,
we obtain using \eqref{eq:lambda} that
\[C(W,\ell)=2^\ell\sum_{\lambda\in\wsigma(W)}\lambda^{\ell}\le 1.\]
Since the choice of $W$ was arbitrary, it follows that $c(\ell)\le 1$.
Moreover, if $C(W,\ell)=1$, then $\rho$ is the only non-zero element of $\wsigma(W)$ and $\rho=1/2$.
Consequently, if $C(W,\ell)=1$, then $W$ is equal to $1/2$ almost everywhere by Proposition~\ref{prop:qrand}.
\end{proof}
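The trigonometric inequality \eqref{eq:ck2sin} can likewise be sanity-checked numerically, and the same sketch illustrates why the argument must fail when $\ell$ is divisible by four, the case treated in the next section. The following Python snippet is illustrative only and not part of the proof:

```python
import math

def check_ck2sin(ell, steps=100000):
    # check -cos(ell * beta) < ell * sin(beta) for beta in (0, pi/2]
    for i in range(1, steps + 1):
        beta = (math.pi / 2) * i / steps
        if -math.cos(ell * beta) >= ell * math.sin(beta):
            return False
    return True

print(all(check_ck2sin(ell) for ell in (6, 10, 14)))  # True

# For ell divisible by four, the analogue of the estimate fails at z = rho * i:
# Re(z^ell) = rho^ell > 0 = ell * rho^(ell-1) * Re(z).
rho = 0.5
z = complex(0, rho)
print((z ** 8).real > 8 * rho ** 7 * z.real)  # True
```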
\section{Cycles of length divisible by four}
\label{sec:ck4}
The proof of the main result of this section
requires bounding the spectral radius of skew-symmetric matrices with all entries between $-1$ and $+1$.
To get the tight bound on the spectral radius of such matrices,
we need the following auxiliary lemma.
\begin{lemma}
\label{lm:sumsq}
Let $s_1,\ldots,s_k$ be positive reals.
Further let $x_1,\ldots,x_k$ be non-negative reals such that
$x_1\ge x_2\ge \cdots \ge x_k\ge 0$ and
\[
\begin{array}{ccccccccccccccccc}
x_1 & & & & & & & \le & s_1 & + & s_2 & + & s_3 & + & \cdots & + & s_k\\
x_1 & + & x_2 & & & & & \le & s_1 & + & 2s_2 & + & 2s_3 & + & \cdots & + & 2s_k\\
x_1 & + & x_2 & + & x_3 & & & \le & s_1 & + & 2s_2 & + & 3s_3 & + & \cdots & + & 3s_k\\
\vdots && \vdots && & \ddots && & \vdots && \vdots & & \vdots & & & & \vdots\\
x_1 & + & x_2 & + & \cdots & + & x_k & \le & s_1 & + & 2s_2 & + & 3s_3 & + & \cdots & + & ks_k,
\end{array}
\]
i.e., the $m$-th inequality, $m\in [k]$, is
\[\sum_{i=1}^m x_i \le \sum_{i=1}^k \min\{i,m\}\cdot s_i.\]
It then holds that
\[\sum_{i=1}^k x_i^2\le\sum_{i=1}^k\left(\sum_{j=i}^k s_j\right)^2\]
and equality holds if and only if
$x_i=s_i+s_{i+1}+\cdots+s_k$ for all $i\in [k]$.
\end{lemma}
\begin{proof}
Let $S_i$, $i\in [k]$, be the right side of the $i$-th inequality listed in the statement of the lemma, and set $S_0=0$.
Observe that \[S_1-S_0>S_2-S_1>\cdots>S_k-S_{k-1}=s_k.\]
Since the set of reals $x_1\ge x_2\ge \cdots \ge x_k\ge 0$ that
satisfy the $k$ inequalities $x_1+\cdots+x_i\le S_i$, $i\in [k]$, is compact,
there exists a $k$-tuple $x_1,\ldots,x_k$ that maximizes the sum $x_1^2+\cdots+x_k^2$
subject to $x_1\ge x_2\ge \cdots \ge x_k\ge 0$ and the $k$ inequalities.
Fix such a $k$-tuple. Observe that $x_k\ge s_k>0$.
We first establish that $x_1>x_2>\cdots>x_k$.
Suppose that there exists $i$ such that $x_i=x_{i+1}$. Choose the smallest such $i$,
and let $i'$ be the largest index such that $x_{i'}=x_i$.
Next suppose that $x_1+\cdots+x_j=S_j$ for some $j\in\{i,\ldots,i'-1\}$.
Since $x_1+\cdots+x_{j-1}\le S_{j-1}$, it follows that $x_j\ge S_j-S_{j-1}$.
Using that $x_j=x_{j+1}$, we next obtain the following:
\[x_1+\cdots+x_j+x_{j+1}=x_1+\cdots+x_j+x_j\ge S_j+(S_j-S_{j-1})>S_j+(S_{j+1}-S_j)=S_{j+1},\]
which violates the $(j+1)$-th inequality listed in the statement of the lemma.
Hence, it holds that $x_1+\cdots+x_j<S_j$ for every $j=i,\ldots,i'-1$.
Choose $\varepsilon>0$ such that
\begin{itemize}
\item $x_1+\cdots+x_j+\varepsilon\le S_j$ for every $j=i,\ldots,i'-1$,
\item if $i\ge 2$, then $x_i+\varepsilon\le x_{i-1}$,
\item if $i'\le k-1$, then $x_{i'}-\varepsilon\ge x_{i'+1}$, and
\item if $i'=k$, then $x_{i'}-\varepsilon\ge 0$.
\end{itemize}
Consider $x'_1,\ldots,x'_k$ such that
\[x'_j=\begin{cases}
x_j+\varepsilon & \mbox{if $j=i$,} \\
x_j-\varepsilon & \mbox{if $j=i'$, and} \\
x_j & \mbox{otherwise.}
\end{cases}\]
Observe that $x'_1\ge x'_2\ge\cdots\ge x'_k\ge 0$ and $x'_1+\cdots+x'_j\le S_j$ for every $j\in [k]$.
Since the sum of the squares of $x'_1,\ldots,x'_k$ is larger than the sum of the squares of $x_1,\ldots,x_k$,
we obtain that the $k$-tuple $x_1,\ldots,x_k$ does not maximize the sum of the squares subject
to $x_1\ge x_2\ge \cdots \ge x_k\ge 0$ and the $k$ inequalities listed in the statement of the lemma.
This contradicts the choice of $x_1,\ldots,x_k$.
Hence, we have established that $x_1>x_2>\cdots>x_k$.
We next show that $x_1+\cdots+x_i=S_i$ for every $i\in [k]$.
Suppose the opposite, that $x_1+\cdots+x_i<S_i$ for some $i$, and choose $\varepsilon>0$ such that
\begin{itemize}
\item $x_1+\cdots+x_i+\varepsilon\le S_i$,
\item if $i\ge 2$, then $x_i+\varepsilon\le x_{i-1}$,
\item if $i\le k-2$, then $x_{i+1}-\varepsilon\ge x_{i+2}$, and
\item if $i=k-1$, then $x_{i+1}-\varepsilon\ge 0$.
\end{itemize}
Consider $x'_1,\ldots,x'_k$ such that
\[x'_j=\begin{cases}
x_j+\varepsilon & \mbox{if $j=i$,} \\
x_j-\varepsilon & \mbox{if $j=i+1\le k$, and} \\
x_j & \mbox{otherwise.}
\end{cases}\]
Observe that $x'_1\ge x'_2\ge\cdots\ge x'_k\ge 0$ and $x'_1+\cdots+x'_j\le S_j$ for every $j\in [k]$.
Since the sum of the squares of $x'_1,\ldots,x'_k$ is larger than the sum of the squares of $x_1,\ldots,x_k$,
we obtain that the $k$-tuple $x_1,\ldots,x_k$ does not maximize the sum of the squares subject
to $x_1\ge x_2\ge \cdots \ge x_k\ge 0$ and the $k$ inequalities listed in the statement of the lemma.
This contradicts the choice of $x_1,\ldots,x_k$.
Hence, we conclude that $x_1+\cdots+x_i=S_i$ for every $i\in [k]$,
which implies that $x_i=S_i-S_{i-1}$ for every $i\in [k]$.
Since it holds that $S_i-S_{i-1}=s_i+\cdots+s_k$, the statement of the lemma now follows.
\end{proof}
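Lemma~\ref{lm:sumsq} can be stress-tested on random instances. The Python sketch below (illustrative only) draws random positive $s_1,\ldots,s_k$, forms the extremal solution $x^*_i=s_i+\cdots+s_k$, checks that it satisfies every constraint with equality, and verifies the claimed bound for random feasible solutions dominated by $x^*$:

```python
import random

def check_sumsq(k=8, trials=200, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        s = [rng.uniform(0.1, 1.0) for _ in range(k)]
        xstar = [sum(s[i:]) for i in range(k)]  # extremal solution, strictly decreasing
        for m in range(1, k + 1):
            rhs = sum(min(i + 1, m) * s[i] for i in range(k))
            # x* satisfies the m-th constraint with equality
            assert abs(sum(xstar[:m]) - rhs) < 1e-9
        bound = sum(x * x for x in xstar)
        # any non-increasing 0 <= x_i <= x*_i is feasible, since its partial
        # sums are dominated by the partial sums of x*
        x, prev = [], float("inf")
        for i in range(k):
            prev = min(prev, rng.uniform(0, xstar[i]))
            x.append(prev)
        if sum(v * v for v in x) > bound + 1e-9:
            return False
    return True

print(check_sumsq())  # True
```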
We next bound the spectral radius of skew-symmetric matrices with entries between $-1$ and $+1$.
For $n\in\NN$, define $D_n$ to be the skew-symmetric matrix
with all entries above the diagonal equal to $+1$ and all entries below the diagonal equal to $-1$.
The next lemma asserts that the matrix $D_n$ has the largest possible spectral radius
among all skew-symmetric matrices with entries between $-1$ and~$+1$.
\begin{lemma}
\label{lm:antisym}
For every $n\in\NN$,
the spectral radius of a skew-symmetric matrix $A\in [-1,1]^{n\times n}$
is at most the spectral radius of $D_n$.
\end{lemma}
\begin{proof}
We fix $n$ and write $D$ for $D_n$ throughout the proof.
We establish that
for every vector $v\in\RR^n$ with $\|v\|=1$, there exists a vector $w$ that can be obtained from $v$
by permuting the entries of $v$ and changing their signs such that $\|Av\|\le\|Dw\|$.
Since skew-symmetric matrices are normal, their spectral radius equals their operator norm, and so this implies that the spectral radius of $A$ does not exceed that of $D$.
First observe that it is enough to prove the inequality for vectors $v\in\RR^n$ with non-zero entries (if the statement fails
for a vector $v$, it also fails for every unit vector obtained by a sufficiently small perturbation of $v$, and some such vector has non-zero entries).
Next observe that if $A'$ is obtained by changing the signs of all entries in the $i$-th row and the $i$-th column and
$v'$ is obtained from $v$ by changing the sign of its $i$-th entry,
then $(Av)_i=-(A'v')_i$ and $(Av)_j=(A'v')_j$ for all $j\not=i$; in particular, $\|Av\|=\|A'v'\|$.
Hence, we will assume without loss of generality that all entries of $v$ are positive.
By permuting rows and columns of $A$ symmetrically and applying the same permutation to $v$,
we can assume that $(Av)_1\ge (Av)_2\ge \cdots \ge (Av)_n$.
Let $k$ be the largest index such that $(Av)_k\ge 0$, and
let $w$ be the vector obtained from $v$ by permuting its first $k$ entries and its remaining $n-k$ entries separately in a way that
$w_1\le w_2\le\dots\le w_k$ and $w_{k+1}\ge w_{k+2}\ge\dots\ge w_n$.
We will show that
\begin{equation}
\sum_{i=1}^k(Av)_i^2\le\sum_{i=1}^k(Dw)_i^2\quad\mbox{and}\quad\sum_{i=k+1}^n(Av)_i^2\le\sum_{i=k+1}^n(Dw)_i^2.\label{eq:antisym1}
\end{equation}
The arguments for the two cases are symmetric and so we focus on establishing that
\begin{equation}
\sum_{i=1}^k(Av)_i^2\le\sum_{i=1}^k(Dw)_i^2.\label{eq:antisym2}
\end{equation}
Observe that the following holds for every $m\in [k]$ (we use that $A_{ij}+A_{ji}=0$ and $|A_{ij}|\le 1$ for all $i,j\in [n]$):
\begin{align*}
\sum_{i=1}^m (Av)_i & = \sum_{i=1}^m\sum_{j=1}^n A_{ij}v_j \\
& \le \sum_{1\le i<j\le m}\max\{v_i-v_j,v_j-v_i\}+\sum_{i=1}^m\sum_{j=m+1}^n v_j \\
& = \sum_{1\le i<j\le m} \left(v_i+v_j-2\min\{v_i,v_j\}\right)+m\sum_{j=m+1}^n v_j \\
& = -\sum_{1\le i,j\le m} \min\{v_i,v_j\}+m\sum_{j=1}^n v_j \\
& \le -\sum_{1\le i,j\le m} \min\{w_i,w_j\}+m\sum_{j=1}^n w_j \\
& = \sum_{1\le i<j\le m} \left(w_j-w_i\right)+m\sum_{j=m+1}^n w_j \\
& = \sum_{i=1}^m\sum_{j=1}^n D_{ij}w_j = \sum_{i=1}^m (Dw)_i.
\end{align*}
Let $k'$ be the largest index such that $k'\le k$ and $(Dw)_{k'}>0$.
We choose $\varepsilon>0$ and apply Lemma~\ref{lm:sumsq} with the following parameters:
$x_i=(Av)_i$, $i\in [k]$,
$s_i=(Dw)_i-(Dw)_{i+1}$ for $i\in [k'-1]$, $s_{k'}=(Dw)_{k'}$ and $s_i=\varepsilon$ for $i\in [k]\setminus [k']$.
Observe that $x_1,\ldots,x_{k}$ and $s_1,\ldots,s_{k}$ satisfy the assumptions of Lemma~\ref{lm:sumsq}.
Hence,
Lemma~\ref{lm:sumsq} implies that
\[\sum_{i=1}^{k} x_i^2\le\sum_{i=1}^{k'}\left((Dw)_i+(k-k')\varepsilon\right)^2+\sum_{i=k'+1}^{k}\left((k-i+1)\varepsilon\right)^2.\]
Since this inequality holds for every $\varepsilon>0$,
we obtain that
\[\sum_{i=1}^{k} x_i^2\le\sum_{i=1}^{k'}(Dw)_i^2\le\sum_{i=1}^{k}(Dw)_i^2.\]
This establishes \eqref{eq:antisym2}.
The other inequality in \eqref{eq:antisym1} can be proven analogously.
Hence, we conclude that $\|Av\|\le\|Dw\|$ as desired.
\end{proof}
We next use Lemma~\ref{lm:antisym} to bound the elements of $\wsigma(W)$ for a regular tournamenton~$W$.
\begin{lemma}
\label{lm:ck4}
Let $W$ be a tournamenton.
If $1/2$ is contained in $\wsigma(W)$,
then each other element of $\wsigma(W)$ has absolute value at most $1/\pi$.
\end{lemma}
\begin{proof}
Let $A_n$, $n\in\NN$, be a convergent sequence of step approximations of $W$, and
let $k_n$ be the order of $A_n$.
Since $W$ is regular by Proposition~\ref{prop:reg},
the sum of each row of $A_n$ is $1/2$.
This yields that $1/2$ is an eigenvalue of $A_n$ and the associated eigenvector is $(1,\ldots,1)$.
Let $J_{k_n}$ be the square matrix of order $k_n$ with all entries equal to $1$ and
let $B_n=J_{k_n}-2k_n\cdot A_n$.
Note that the matrix $B_n$ is skew-symmetric and all its entries are between $-1$ and $+1$.
Also note that the vector $(1,\ldots,1)$ is an eigenvector of $A_n$ associated with the eigenvalue $1/2$, and
it is also an eigenvector of $B_n$ associated with the eigenvalue $0$.
Since $B_n$ is skew-symmetric,
all its non-zero eigenvalues are purely imaginary and
there exists an orthonormal basis of~$\CC^{k_n}$ formed by eigenvectors of the matrix $B_n$
(see the beginning of Section~\ref{sec:c8} for a more detailed exposition on properties of skew-symmetric matrices).
This means that we can assume that every eigenvector of $B_n$ associated with a non-zero eigenvalue
is orthogonal to the vector $(1,\ldots,1)$ in the space $\CC^{k_n}$.
Since every eigenvector of~$B_n$ associated with a non-zero eigenvalue is orthogonal to $(1,\ldots,1)$ and hence lies in the kernel of $J_{k_n}$,
every such eigenvector is also an eigenvector of $A_n=\frac{J_{k_n}-B_n}{2k_n}$.
Thus, if $\lambda$ is an eigenvalue of $A_n$,
then either $\lambda$ is equal to $1/2$ (for the vector $(1,\ldots,1)$) or $-2k_n\lambda$ is an eigenvalue of~$B_n$.
Since the spectral radius of $B_n$ is at most the spectral radius of the matrix~$D_{k_n}$ by Lemma~\ref{lm:antisym} and
the spectral radii of the matrices $D_{k_n}$ divided by $k_n$ converge to $2/\pi$,
it follows that the limit of the maximum absolute value of an eigenvalue of $A_n$ different from $1/2$ is at most $1/\pi$.
The statement of the lemma now follows.
\end{proof}
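The proof above relies on the fact that the spectral radius of $D_n$ divided by $n$ converges to $2/\pi$. The Python sketch below (a numerical illustration only, using nothing beyond the definition of $D_n$) estimates the spectral radius by power iteration on the positive semidefinite matrix $-D_n^2$:

```python
import math

def spectral_radius_D(n, iters=300):
    # D_n has +1 above and -1 below the diagonal; since D_n is skew-symmetric,
    # the largest eigenvalue of -D_n^2 is its squared spectral radius
    D = [[(j > i) - (j < i) for j in range(n)] for i in range(n)]

    def matvec(v):
        return [sum(D[i][j] * v[j] for j in range(n)) for i in range(n)]

    def neg_D2(v):  # v -> -D(Dv)
        return [-x for x in matvec(matvec(v))]

    v = [math.sin(i + 1) for i in range(n)]  # generic starting vector
    for _ in range(iters):
        w = neg_D2(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    rayleigh = sum(vi * wi for vi, wi in zip(v, neg_D2(v)))
    return math.sqrt(rayleigh)

print(abs(spectral_radius_D(64) / 64 - 2 / math.pi) < 1e-3)  # True
```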
We are now ready to asymptotically determine $c(\ell)$ for $\ell$ divisible by four.
Since the multiset $\wsigma(W_C)$ for the carousel tournamenton $W_C$,
which we described at the end of Section~\ref{sec:prelim},
consists of $1/2$ and $\pm\unit/((2i-1)\pi)$ for $i\in\NN$,
we obtain using \eqref{eq:lambda} that
\[c(\ell)\ge 1+2\cdot\sum_{i=1}^{\infty}\left(\frac{2}{(2i-1)\pi}\right)^{\ell}\]
for every $\ell$ divisible by four.
The next theorem of this section provides an asymptotically matching upper bound.
\begin{theorem}
\label{thm:ck4}
For every $\varepsilon>0$, there exists $\ell_0$ such that
the following holds for every $\ell\ge\ell_0$ divisible by four:
\[c(\ell)\le 1+\left(\frac{2}{\pi}+\varepsilon\right)^{\ell}.\]
\end{theorem}
\begin{proof}
Let $W_k$ be a tournamenton that maximizes the sum of the $(4k)$-th powers of the elements of $\wsigma(W_k)$.
Suppose that the statement of the theorem is false,
i.e., there exists $\varepsilon>0$ and a sequence $(k_i)_{i\in\NN}$ such that
\begin{equation}\label{eq:ck4c}
C(W_{k_i},4k_i)>1+\left(\frac{2}{\pi}+\varepsilon\right)^{4k_i}
\end{equation}
for every $i\in\NN$.
Without loss of generality,
we may assume that the sequence $(W_{k_i})_{i\in\NN}$ is convergent in the cut distance;
let $W$ be the tournamenton that is its limit.
Since partitions of $[0,1]$ corresponding to fine enough step approximations of $W_{k_i}$
yield step approximations close to $W$ in the cut distance and
the step approximations of $W_{k_i}$ and $W$ are also close in the cut distance,
the multisets $\left(\wsigma(W_{k_i})\cup\{0^\infty\}\right)_{i\in\NN}$
converge to the multiset $\wsigma(W)\cup\{0^\infty\}$.
Let $\rho$ be the largest positive real contained in $\wsigma(W)$.
If $\rho$ is smaller than $1/2$, then it holds that
\[\lim_{i\to\infty}C(W_{k_i},4k_i)=0.\]
Since this contradicts the choice of $(k_i)_{i\in\NN}$,
we can assume that $\rho=1/2$, i.e., $\wsigma(W)$ contains $1/2$.
Hence,
all other non-zero elements of $\wsigma(W)$ are purely imaginary and
the absolute value of every such element is at most $1/\pi$ by Lemma~\ref{lm:ck4}.
Let $m$ be the number of elements of $\wsigma(W)$ with absolute value equal to $1/\pi$ (note that $m$ can be zero).
We can assume that $\varepsilon$ is small enough that
the absolute value of any element of $\wsigma(W)$ with absolute value smaller than $1/\pi$ is at most $1/\pi-\varepsilon$.
It follows that
\begin{equation}
\lim_{i\to\infty}\frac{C(W_{k_i},4k_i)-2^{4k_i}\sum_{\lambda\in\wsigma(W_{k_i}),|\lambda|\ge 1/\pi-\varepsilon/2}\lambda^{4k_i}}{(2/\pi)^{4k_i}}=0.\label{eq:ck4a}
\end{equation}
The choice of $m$ implies that there exists $i_0$ such that
\begin{equation}
\sum_{\lambda\in\wsigma(W_{k_i}),|\lambda|\ge 1/\pi-\varepsilon/2}\lambda^{4k_i}\le\frac{1}{2^{4k_i}}+m\left(\frac{1}{\pi}+\frac{\varepsilon}{4}\right)^{4k_i}\label{eq:ck4b}
\end{equation}
for every $i\ge i_0$.
Combining \eqref{eq:ck4c}, \eqref{eq:ck4a} and \eqref{eq:ck4b}, we get a contradiction:
the estimates \eqref{eq:ck4a} and \eqref{eq:ck4b} yield that
$C(W_{k_i},4k_i)\le 1+(m+o(1))\left(\frac{2}{\pi}+\frac{\varepsilon}{2}\right)^{4k_i}$,
which is smaller than the right side of \eqref{eq:ck4c} for every sufficiently large $i$.
\end{proof}
\section{Cycles of length eight}
\label{sec:c8}
In this section, we present our results on cycles of length eight.
We also present the analogous arguments in the (simpler) case of cycles of length four to make the exposition more accessible,
although the presented results for cycles of length four have been previously proven.
To be able to present our arguments, we need to recall some results on matrices and particularly on skew-symmetric matrices.
If $A$ is a square matrix of order $n$ and $X\subseteq [n]$,
then $A[X]$ is the square matrix formed by entries in the rows and the columns indexed by the elements of $X$.
Throughout this section, $J_n$ denotes the square matrix of order $n$ with all entries equal to $1$.
Recall that $D_n$ is the skew-symmetric $n\times n$ matrix with all entries above the diagonal equal to $+1$ and all entries below the diagonal equal to $-1$.
We say that two skew-symmetric matrices are \emph{sign-equivalent}
if one can be obtained from the other by permuting the rows and columns symmetrically and
multiplying some of the rows and the symmetric set of columns by $-1$.
It is well-known that for every real skew-symmetric matrix~$A$,
there exists an orthogonal (real) matrix $Q$
(i.e. a square matrix $Q$ such that $Q^TQ$ is the identity matrix) such that
the matrix $Q^TAQ$ is a block diagonal matrix with two kinds of blocks:
blocks of size two of the form $\begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}$ and
blocks of size one equal to the zero matrix.
In particular, there exists an orthogonal basis formed by eigenvectors of the matrix $A^2$ and
the values $-a^2$ from the blocks of the matrix $Q^TAQ$ are the eigenvalues of $A^2$ (each with multiplicity two).
In addition, if $v$ and $v'$ are the two rows of $Q$ associated with a block of $Q^TAQ$ with a value $a$,
i.e., $av=Av'$ and $-av'=Av$, the complex vectors $v+\unit v'$ and $v-\unit v'$
are eigenvectors of $A$ associated with the eigenvalues $a\unit$ and $-a\unit$, respectively, and
the vectors $v+\unit v'$ and $v-\unit v'$ are orthogonal in the space $\CC^n$.
We apply the just reviewed results on skew-symmetric matrices to get the following upper bounds on
the traces of the fourth and eighth powers of the sum of the all-one matrix and a skew-symmetric matrix.
\begin{lemma}
\label{lm:midterms}
Let $B$ be a skew-symmetric matrix of order $n$ with entries between $-1$ and $+1$.
It holds that
\begin{align*}
\Trace (J_n+B)^4 &=\Trace J_n^4+\Trace B^4-4n||Bj||^2 \quad\mbox{and}\\
\Trace (J_n+B)^8 &\le\Trace J_n^8+\Trace B^8-2n^5||Bj||^2,
\end{align*}
where $j$ is the vector with all entries equal to one.
In particular, it holds that
\[\Trace (J_n+B)^4\le\Trace J_n^4+\Trace B^4\quad\mbox{and}\quad \Trace (J_n+B)^8\le\Trace J_n^8+\Trace B^8,\]
and equality holds if and only if the sum of each row of $B$ is zero.
\end{lemma}
\begin{proof}
We start with the trace of the $4$-th power of $J_n+B$.
We obtain the following by expanding $(J_n+B)^4$ and using that $\Trace XY=\Trace YX$:
\[\Trace (J_n+B)^4=\Trace J_n^4+4\Trace J_n^3B+4\Trace J_n^2B^2+2\Trace J_nBJ_nB+4\Trace J_nB^3+\Trace B^4.\]
Since $B$ is skew-symmetric, any odd power of $B$ is also skew-symmetric.
In particular, $J_nB^{2k-1}J_n$ is the zero matrix and $\Trace J_n B^{2k-1}=0$ for every $k\in\NN$.
It follows that
\begin{equation}
\Trace (J_n+B)^4=\Trace J_n^4+4\Trace J_n^2B^2+\Trace B^4.\label{eq:mid1}
\end{equation}
Let $Q$ be the orthogonal matrix such that $Q^TBQ$ has the block structure described before the statement of this lemma,
let $k$ be the number of blocks of size two, and
let $a_1,\ldots,a_k$ be the (non-zero) numbers associated with these blocks.
Since the trace of $B^2$ is equal to $-2(a_1^2+\cdots+a_k^2)$ and also to $-\sum_{i,j\in [n]} B_{ij}^2\ge -n^2$, it follows that
\begin{equation}
a_1^2+\cdots+a_k^2\le \frac{n^2}{2}.\label{eq:mida}
\end{equation}
Further, let $q_i$ and $q'_i$ be the two rows of $Q$ corresponding to the block with $a_i$, $i\in [k]$, and
let $\alpha_i\in [0,\pi/2]$ be the angle between the vector $j$ and the plane generated by $q_i$ and $q'_i$, $i\in [k]$.
Note that the $(n-2k)$-dimensional subspace orthogonal to the space generated by $q_1,\ldots,q_k$ and $q'_1,\ldots,q'_k$
is the kernel of $B$ (as it is generated by the rows of $Q$ corresponding to the blocks of size one).
Since the rows of $Q$ form an orthogonal basis, it follows that
\begin{equation}
\sum_{i=1}^k\cos^2\alpha_i\le 1.\label{eq:mid2}
\end{equation}
Observe that the following identities hold.
\begin{align*}
\Trace J_n^4 & = n^4 \\
\Trace J_n^2B^2 & = -n^2\sum_{i=1}^k a_i^2\cos^2\alpha_i \\
\Trace B^4 & = 2\sum_{i=1}^k a_i^4
\end{align*}
In particular, the second term in \eqref{eq:mid1} is non-positive and equal to $-4n||Bj||^2$.
Hence, $\Trace (J_n+B)^4\le \Trace J_n^4+\Trace B^4$ and
equality holds if and only if the vector $j$ is in the kernel of $B$.
The latter holds if and only if the sum of each row of $B$ is zero.
This establishes the statement of the lemma concerning the trace of the $4$-th power of $J_n+B$.
We next analyze the trace of the $8$-th power of $J_n+B$.
As in the case of the $4$-th power, the trace of every term in the expansion of $(J_n+B)^8$ that contains an odd power of $B$ delimited by occurrences of $J_n$ is zero, and
we obtain the following.
\begin{align}
\Trace (J_n+B)^8 & = \Trace J_n^8+8\Trace J_n^6B^2+8\Trace J_n^4B^4+8\Trace J_n^3B^2J_nB^2\nonumber\\
& + 4\Trace J_n^2B^2J_n^2B^2+8\Trace J_n^2B^6+8\Trace J_nB^2J_nB^4+\Trace B^8\label{eq:mid3}
\end{align}
As in the previous case, we can express some of the terms in \eqref{eq:mid3} using $a_i$ and $\alpha_i$, $i\in [k]$.
\begin{align*}
\Trace J_n^8 & = n^8 \\
\Trace J_n^6B^2 & = -n^6\sum_{i=1}^k a_i^2\cos^2\alpha_i \\
\Trace J_n^4B^4 & = n^4\sum_{i=1}^k a_i^4\cos^2\alpha_i \\
\Trace J_n^3B^2J_nB^2 & = \Trace J_n^2B^2J_n^2B^2 = n^4\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)^2 \\
\Trace J_n^2B^6 & = -n^2\sum_{i=1}^k a_i^6\cos^2\alpha_i \\
\Trace J_nB^2J_nB^4 & = -n^2\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)\left(\sum_{i=1}^k a_i^4\cos^2\alpha_i\right)\\
\Trace B^8 & = 2\sum_{i=1}^k a_i^8
\end{align*}
We derive using \eqref{eq:mid2} that
\begin{align}
&2\Trace J_n^6B^2+8\Trace J_n^3B^2J_nB^2+8\Trace J_nB^2J_nB^4\nonumber\\
&=-2n^2\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)\left(n^4-4n^2\sum_{i=1}^k a_i^2\cos^2\alpha_i+4\sum_{i=1}^k a_i^4\cos^2\alpha_i\right)\nonumber\\
&\le-2n^2\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)\left(n^4\sum_{i=1}^k\cos^2\alpha_i-4n^2\sum_{i=1}^k a_i^2\cos^2\alpha_i+4\sum_{i=1}^k a_i^4\cos^2\alpha_i\right)\nonumber\\
&=-2n^2\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)\left(\sum_{i=1}^k\left(n^2-2a_i^2\right)^2\cos^2\alpha_i\right)\le 0.\label{eq:mid4+}
\end{align}
Using \eqref{eq:mida} and \eqref{eq:mid2}, we obtain that
\[\sum_{i=1}^k a_i^2\cos^2\alpha_i\le\frac{n^2}{2},\]
which yields that
\begin{align}
2\Trace J_n^6B^2+4\Trace J_n^2B^2J_n^2B^2&=\nonumber\\
2n^4\left(\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)\left(-n^2+2\sum_{i=1}^k a_i^2\cos^2\alpha_i\right)&\le 0.\label{eq:mid4}
\end{align}
We finally bound a portion of the second term, and the third and sixth terms in \eqref{eq:mid3} as follows.
\begin{align}
2\Trace J_n^6B^2+8\Trace J_n^4B^4+8\Trace J_n^2B^6&=\nonumber\\
-2n^2\sum_{i=1}^k\left(n^4-4n^2a_i^2+4a_i^4\right)a_i^2\cos^2\alpha_i&=\nonumber\\
-2n^2\sum_{i=1}^k\left(n^2-2a_i^2\right)^2a_i^2\cos^2\alpha_i&\le 0\label{eq:mid5}
\end{align}
Using \eqref{eq:mid4+}, \eqref{eq:mid4} and \eqref{eq:mid5},
we obtain the following estimate on $\Trace (J_n+B)^8$ using the expansion in \eqref{eq:mid3}.
\begin{equation}
\Trace (J_n+B)^8 \le \Trace J_n^8+\Trace B^8+2\Trace J_n^6B^2\label{eq:mid6}
\end{equation}
Since it holds that $\Trace J_n^6B^2=-n^5||Bj||^2$,
the second inequality from the statement of the lemma now follows, and
since $||Bj||^2\ge 0$, it also holds that $\Trace (J_n+B)^8\le \Trace J_n^8+\Trace B^8$.
Moreover, since the inequalities \eqref{eq:mid4+}, \eqref{eq:mid4} and \eqref{eq:mid5}
hold with equality if $\cos\alpha_i=0$ for every $i\in [k]$, which holds if $j$ is in the kernel of $B$,
we can conclude that
$\Trace (J_n+B)^8=\Trace J_n^8+\Trace B^8$ if and only if the sum of each row of $B$ is zero.
\end{proof}
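The exact identity for $\Trace (J_n+B)^4$ in Lemma~\ref{lm:midterms} can be verified directly on random skew-symmetric matrices. The following Python sketch is an illustrative check only:

```python
import random

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace_power(M, p):
    P = M
    for _ in range(p - 1):
        P = matmul(P, M)
    return sum(P[i][i] for i in range(len(M)))

def check_trace4(n=6, trials=20, seed=3):
    # check Trace (J+B)^4 == Trace J^4 + Trace B^4 - 4n * ||Bj||^2
    rng = random.Random(seed)
    for _ in range(trials):
        B = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                B[i][j] = rng.uniform(-1, 1)
                B[j][i] = -B[i][j]
        JB = [[1 + B[i][j] for j in range(n)] for i in range(n)]
        J = [[1.0] * n for _ in range(n)]
        Bj_sq = sum(sum(row) ** 2 for row in B)  # ||Bj||^2, j = all-one vector
        lhs = trace_power(JB, 4)
        rhs = trace_power(J, 4) + trace_power(B, 4) - 4 * n * Bj_sq
        if abs(lhs - rhs) > 1e-6:
            return False
    return True

print(check_trace4())  # True
```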
The following two lemmas will be important
to establish an upper bound on the last term, $\Trace B^{\ell}$, in the upper bounds given in Lemma~\ref{lm:midterms}.
To state the lemmas, we need the following definition.
For a square matrix $A$ of order $n$ we define the \emph{cyclic index} of $A$ as
\[\Cycl A=\sum_{\pi\in S_n}\prod_{i=1}^nA_{\pi(i)\pi(i+1)},\]
where the computation with indices is modulo $n$, i.e., $\pi(n+1)=\pi(1)$.
\begin{lemma}
\label{lm:trace4}
Let $B$ be a skew-symmetric matrix of order $4$ such that
each off-diagonal entry of $B$ is $+1$ or $-1$.
The cyclic index of $B$ is at most the cyclic index of $D_4$ and
equality holds if and only if $B$ is sign-equivalent to $D_4$.
\end{lemma}
\begin{proof}
Since the cyclic index of sign-equivalent matrices is the same,
we may assume without loss of generality that all off-diagonal entries in the first row of $B$ are equal to $+1$.
Hence, after a symmetric permutation of the remaining rows and columns, we need to consider the following two matrices only, according to whether the principal submatrix on the last three indices corresponds to a transitive or a cyclic tournament:
\[
\begin{pmatrix}
0 & +1 & +1 & +1 \\
-1 & 0 & +1 & +1 \\
-1 & -1 & 0 & +1 \\
-1 & -1 & -1 & 0 \\
\end{pmatrix}
\quad\mbox{and}\quad
\begin{pmatrix}
0 & +1 & +1 & +1 \\
-1 & 0 & +1 & -1 \\
-1 & -1 & 0 & +1 \\
-1 & +1 & -1 & 0 \\
\end{pmatrix}
\]
The cyclic index of the left matrix is $8$ and
the cyclic index of the right matrix is $-24$.
The statement of the lemma follows.
\end{proof}
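The case analysis in the proof of Lemma~\ref{lm:trace4} is small enough to redo by brute force. The following Python sketch (illustrative only) fixes the off-diagonal entries of the first row to $+1$ and enumerates the remaining $2^3$ skew-symmetric matrices:

```python
from itertools import permutations, product

def cyclic_index(B):
    # sum over all permutations of the product of entries along the closed walk
    n = len(B)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= B[pi[i]][pi[(i + 1) % n]]
        total += prod
    return total

def skew4(upper):
    # 4x4 skew-symmetric matrix from its upper-triangular entries
    # ordered as (B12, B13, B14, B23, B24, B34)
    B = [[0] * 4 for _ in range(4)]
    idx = 0
    for i in range(4):
        for j in range(i + 1, 4):
            B[i][j] = upper[idx]
            B[j][i] = -upper[idx]
            idx += 1
    return B

values = [cyclic_index(skew4((1, 1, 1) + rest)) for rest in product([1, -1], repeat=3)]
print(sorted(set(values)))  # [-24, 8]: only the two values from the proof occur
print(values.count(8))      # 6: the matrices sign-equivalent to D_4
```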
To state the next lemma, we need to introduce another skew-symmetric matrix.
\[D'_8=
\begin{pmatrix}
0 & +1 & +1 & +1 & +1 & +1 & +1 & +1 \\
-1 & 0 & +1 & -1 & +1 & +1 & +1 & +1 \\
-1 & -1 & 0 & -1 & +1 & +1 & +1 & +1 \\
-1 & +1 & +1 & 0 & -1 & -1 & -1 & -1 \\
-1 & -1 & -1 & +1 & 0 & +1 & +1 & -1 \\
-1 & -1 & -1 & +1 & -1 & 0 & +1 & +1 \\
-1 & -1 & -1 & +1 & -1 & -1 & 0 & +1 \\
-1 & -1 & -1 & +1 & +1 & -1 & -1 & 0
\end{pmatrix}
\]
\begin{lemma}
\label{lm:trace8}
Let $B$ be a skew-symmetric matrix of order $8$ such that
each off-diagonal entry of $B$ is $+1$ or $-1$.
The cyclic index of $B$ is at most the cyclic index of $D_8$ and
equality holds if and only if $B$ is sign-equivalent to $D_8$ or to $D'_8$.
\end{lemma}
The proof of Lemma~\ref{lm:trace8} proceeds by a computer-assisted inspection of all skew-symmetric $8 \times 8$ matrices
where the off-diagonal entries in the first row are $+1$, those in the first column are $-1$, and
all other off-diagonal entries are either $+1$ or $-1$.
We have independently prepared a C program and a C++ program to verify Lemma~\ref{lm:trace8},
i.e., to check that the cyclic index of every skew-symmetric $8 \times 8$ matrix of the above form is at most $2\,176$ and
that equality holds if and only if the matrix is sign-equivalent to $D_8$ or to $D'_8$;
the code of the C program and its output are available as ancillary files on arXiv.
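The two extremal cyclic indices themselves are quick to recompute from the definition. The following Python sketch is an illustrative check, independent of the C and C++ programs mentioned above (the value $2\,176$ is the one reported there):

```python
from itertools import permutations

def cyclic_index(B):
    n = len(B)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= B[pi[i]][pi[(i + 1) % n]]
        total += prod
    return total

# D_8: +1 above the diagonal, -1 below
D8 = [[(j > i) - (j < i) for j in range(8)] for i in range(8)]

# upper-triangular entries of D'_8, row by row, as displayed above
Dp8_upper = [
    [+1, +1, +1, +1, +1, +1, +1],
    [+1, -1, +1, +1, +1, +1],
    [-1, +1, +1, +1, +1],
    [-1, -1, -1, -1],
    [+1, +1, -1],
    [+1, +1],
    [+1],
]
Dp8 = [[0] * 8 for _ in range(8)]
for i, row in enumerate(Dp8_upper):
    for offset, val in enumerate(row):
        j = i + 1 + offset
        Dp8[i][j] = val
        Dp8[j][i] = -val

print(cyclic_index(D8), cyclic_index(Dp8))  # 2176 2176
```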
We are now ready to compute the values of $c(4)$ and $c(8)$.
\begin{theorem}
\label{thm:c8}
It holds that $c(4)=4/3$ and $c(8)=332/315$.
\end{theorem}
\begin{proof}
Fix $\ell\in\{4,8\}$ and
let $A$ be the tournament matrix of an $n$-vertex tournament $T$.
Proposition~\ref{prop:eigen} yields that
\[C(T,\ell)=\frac{2^\ell}{n^\ell}\Trace A^\ell+O(n^{-1}).\]
Let $B=J_n-2A$ and note that $A=\frac{J_n+B}{2}$.
By Lemma~\ref{lm:midterms}, we obtain that
\[\Trace A^\ell\le\frac{1}{2^\ell}\left(\Trace J_n^\ell+\Trace B^\ell\right).\]
The trace of $B^\ell$ can be combinatorially interpreted as a sum over all closed walks of length $\ell$ in $T$,
where each walk with an even number of forward edges contributes $+1$ and
each walk with an odd number of forward edges contributes $-1$.
Walks that are not cycles contribute only $O(n^{\ell-1})$ to the sum in absolute value, and
those that are cycles can be counted via the cyclic indices of the square submatrices
with rows and columns indexed by the vertices of the cycle.
Hence, we obtain the following.
\[\Trace B^\ell=\sum_{X\in\binom{[n]}{\ell}}\Cycl B[X]+O(n^{\ell-1}).\]
By Lemmas~\ref{lm:trace4} and~\ref{lm:trace8},
it holds that $\Cycl B[X]\le\Cycl D_\ell$ for every $X\in\binom{[n]}{\ell}$,
which yields that
\[\Trace B^\ell\le\Trace D_n^\ell+O(n^{\ell-1}).\]
Hence, we obtain that
\[C(T,\ell)\le\frac{1}{n^\ell}\left(n^\ell+\Trace D_n^\ell+O(n^{\ell-1})\right).\]
We next proceed separately for $\ell=4$ and $\ell=8$.
Analyzing the spectrum of the matrix $D_n$ yields that
\[\lim_{n\to\infty}\frac{\Trace D_n^4}{n^4}=2\sum_{i=1}^{\infty}\left(\frac{2}{(2i-1)\pi}\right)^4=\frac{1}{3},\]
which implies that
\[C(T,4)\le\frac{4}{3}+O(n^{-1}).\]
Similarly, we obtain that
\[\lim_{n\to\infty}\frac{\Trace D_n^8}{n^8}=2\sum_{i=1}^{\infty}\left(\frac{2}{(2i-1)\pi}\right)^8=\frac{17}{315},\]
which implies that
\[C(T,8)\le\frac{332}{315}+O(n^{-1}).\]
Since it holds that $C(W_C,4)=4/3$ and $C(W_C,8)=332/315$ for the carousel tournamenton $W_C$,
the statement of the theorem now follows.
\end{proof}
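The two series evaluations in the proof follow from the classical identities $\sum_{i\ge 1}(2i-1)^{-4}=\pi^4/96$ and $\sum_{i\ge 1}(2i-1)^{-8}=17\pi^8/161280$. The following Python snippet is a quick numerical sanity check (illustrative only):

```python
import math

def carousel_sum(ell, terms=100000):
    # 2 * sum_{i >= 1} (2 / ((2i - 1) * pi))^ell, the non-trivial part of C(W_C, ell)
    return 2 * sum((2 / ((2 * i - 1) * math.pi)) ** ell for i in range(1, terms + 1))

print(abs(carousel_sum(4) - 1 / 3) < 1e-9)     # True: c(4) = 1 + 1/3 = 4/3
print(abs(carousel_sum(8) - 17 / 315) < 1e-9)  # True: c(8) = 1 + 17/315 = 332/315
```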
The methods used to prove Theorem~\ref{thm:c8} actually provide a characterization of the extremal tournamentons.
We fix some additional notation:
$T^4$ is the $4$-vertex transitive tournament,
$C^4$ is the unique $4$-vertex hamiltonian tournament,
$L^4$ is the unique $4$-vertex non-transitive tournament with a sink, and
$W^4$ is the unique $4$-vertex non-transitive tournament with a source.
The four tournaments are depicted in Figure~\ref{fig:CLTW}.
It is also interesting to note that the matrix $D'_8$ is sign-equivalent to the following matrix $D''_8$:
\[D''_8=
\begin{pmatrix}
0 & +1 & +1 & -1 & +1 & +1 & +1 & +1 \\
-1 & 0 & +1 & +1 & +1 & +1 & +1 & +1 \\
-1 & -1 & 0 & +1 & +1 & +1 & +1 & +1 \\
+1 & -1 & -1 & 0 & +1 & +1 & +1 & +1 \\
-1 & -1 & -1 & -1 & 0 & +1 & +1 & -1 \\
-1 & -1 & -1 & -1 & -1 & 0 & +1 & +1 \\
-1 & -1 & -1 & -1 & -1 & -1 & 0 & +1 \\
-1 & -1 & -1 & -1 & +1 & -1 & -1 & 0
\end{pmatrix}
.
\]
The $8$-vertex tournament with the tournament matrix $\frac{J_8+D''_8}{2}$
is the tournament obtained from two copies of $C^4$ by adding edges directed from the first copy to the second;
this tournament is depicted in Figure~\ref{fig:C4C4}.
\begin{figure}
\begin{center}
\epsfbox{tourn-ck-4.mps}
\hskip 1cm
\epsfbox{tourn-ck-2.mps}
\hskip 1cm
\epsfbox{tourn-ck-5.mps}
\hskip 1cm
\epsfbox{tourn-ck-3.mps}
\end{center}
\caption{The tournaments $T^4$, $C^4$, $L^4$ and $W^4$.}
\label{fig:CLTW}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{tourn-ck-6.mps}
\end{center}
\caption{The $8$-vertex tournament with the tournament matrix $\frac{J_8+D''_8}{2}$.
The edges between the two copies of $C^4$ are drawn in gray to better display the structure of the tournament.}
\label{fig:C4C4}
\end{figure}
\begin{theorem}
\label{thm:c48}
Let $W$ be a tournamenton.
It holds that $C(W,4)=c(4)=4/3$ if and only if $W$ is weakly isomorphic to the carousel tournamenton $W_C$, and
similarly $C(W,8)=c(8)=332/315$ if and only if $W$ is weakly isomorphic to $W_C$.
\end{theorem}
\begin{proof}
If $W$ is weakly isomorphic to the carousel tournamenton $W_C$, then $C(W,4)=C(W_C,4)=c(4)$ and $C(W,8)=C(W_C,8)=c(8)$.
Hence, we focus on proving the converse implications and start with the one concerning cycles of length four.
Let $W$ be a tournamenton such that $C(W,4)=c(4)$.
Let $T_n$, $n\in\NN$, be an $n$-vertex $W$-random tournament, and
let $B_n$ be the skew-symmetric matrix such that $\frac{J_n+B_n}{2}$ is the tournament matrix of $T_n$.
Note that the limit of $C(T_n,4)$ is $C(W,4)$ with probability one.
Similarly to the proof of Theorem~\ref{thm:c8}, we obtain using Lemma~\ref{lm:midterms} that
\[C(T_n,4)\le 1+\frac{1}{n^4}\Trace D_n^4-\frac{4}{n^3}\|B_nj\|^2+O(n^{-1}),\]
where $j$ is the vector with all entries equal to one.
It follows that
\[C(W,4)=\lim_{n\to\infty} C(T_n,4)\le 1+\lim_{n\to\infty}\frac{1}{n^4}\Trace D_n^4=c(4).\]
Furthermore, equality can hold only if
\begin{equation}
\lim_{n\to\infty}\frac{\|B_nj\|^2}{n^3}=0\label{eq:Bnj}
\end{equation}
and the proportion of principal submatrices of $B_n$ of order four that are sign-equivalent to $D_4$ tends to $1$.
The latter implies that $d(T^4,W)+d(C^4,W)=1$ and $d(L^4,W)+d(W^4,W)=0$,
i.e., the only $4$-vertex tournaments with positive density in $W$ are $T^4$ and $C^4$.
We conclude that all $4$-vertex subtournaments of $T_n$ are $T^4$ or $C^4$ (with probability one);
in particular, the in-neighborhood and the out-neighborhood of every vertex of $T_n$ are transitive.
Since \eqref{eq:Bnj} holds, we have $1/2\in\wsigma(W)$, and the tournamenton $W$ is regular by Proposition~\ref{prop:reg}.
Hence, the in-degree of every vertex of $T_n$ is close to $n/2$ with high probability,
which implies that $W$ is a limit of the carousel tournaments described at the end of Section~\ref{sec:prelim}.
Since the tournamenton $W_C$ is also a limit of the carousel tournaments,
the tournamentons $W$ and $W_C$ are weakly isomorphic.
We now deal with the case of cycles of length eight.
Let $W$ be a tournamenton such that $C(W,8)=c(8)$.
As in the previous case, we conclude that $W$ is regular and
the only $8$-vertex tournaments with positive density in $W$ are those
whose tournament matrix $A$ satisfies that $2A-J_8$ is sign-equivalent to $D_8$ or $D'_8$.
A~straightforward case analysis yields that every skew-symmetric matrix $B$ of order nine such that
every principal submatrix of order eight of $B$ is sign-equivalent to $D_8$ or $D'_8$
satisfies that $B$ is sign-equivalent to $D_9$.
It follows that the only $9$-vertex tournaments with positive density in $W$ are those
whose tournament matrix $A$ satisfies that $2A-J_9$ is sign-equivalent to $D_9$.
Consequently, the only $4$-vertex tournaments with positive density in $W$ are those
whose tournament matrix $A$ satisfies that $2A-J_4$ is sign-equivalent to $D_4$,
i.e., the tournaments $T^4$ and $C^4$.
Analogously to the previous case, we now conclude that the tournamentons $W$ and $W_C$ are weakly isomorphic.
\end{proof}
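As a numerical aside (not part of the proof), the extremal statement can be illustrated on the finite carousel tournaments $R_n$ ($n$ odd, vertex $i$ beating the next $(n-1)/2$ vertices modulo $n$), which converge to the carousel tournamenton $W_C$. The sketch below (our own helper names) counts $4$-cycles as $\frac14\Trace A^4$, which is valid because a tournament has no $2$-cycles, so every closed $4$-walk is a directed $4$-cycle; the normalized count tends to $c(4)=4/3$.

```python
import numpy as np

def carousel(n):
    """Adjacency matrix of the carousel tournament R_n for odd n:
    vertex i beats i+1, ..., i+(n-1)/2 (indices mod n)."""
    k = (n - 1) // 2
    i, j = np.indices((n, n))
    d = (j - i) % n
    return ((d >= 1) & (d <= k)).astype(np.int64)

def count_c4(A):
    """Number of directed 4-cycles of a tournament.  Tournaments have no
    2-cycles, so every closed 4-walk visits 4 distinct vertices; each
    4-cycle is counted once per starting vertex, hence the division by 4."""
    return int(np.trace(np.linalg.matrix_power(A, 4))) // 4

def normalized(n):
    """C(R_n, 4): the 4-cycle count divided by its expectation
    (3/8) * binom(n, 4) in the uniformly random n-vertex tournament."""
    expected = 3 * n * (n - 1) * (n - 2) * (n - 3) / (8 * 24)
    return count_c4(carousel(n)) / expected

# The normalized count decreases toward c(4) = 4/3 as n grows.
print([round(normalized(n), 4) for n in (5, 21, 101, 401)])
```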
\bibliographystyle{bibstyle}
\title{Discriminant coamoebas through homology}
% arXiv:1201.6649
\begin{abstract}
Understanding the complement of the coamoeba of a (reduced) $A$-discriminant is one approach to studying the monodromy of solutions to the corresponding system of $A$-hypergeometric differential equations. Nilsson and Passare described the structure of the coamoeba and its complement (a zonotope) when the reduced $A$-discriminant is a function of two variables. Their main result was that the coamoeba and zonotope form a cycle which is equal to the fundamental cycle of the torus, multiplied by the normalized volume of the set $A$ of integer vectors. That proof only worked in dimension two. Here, we use simple ideas from topology to give a new proof of this result in dimension two, one which can be generalized to all dimensions.
\end{abstract}
\section*{Introduction}
$A$-hypergeometric functions, which are solutions to $A$-hypergeometric systems of
differential equations~\cite{GKZ89,GKZ94,SST}, enjoy two complementary analytical formulae
which together give an approach to studying the monodromy of the
solutions~\cite{Beukers} at non-resonant parameters.
One formula is as explicit power series whose convergence domains in
${\bf{C}}^{N+1}$ have an action of the group ${\bf{T}}^{N+1}$ of phases.
These power series form a basis of solutions, with known local monodromy around loops from
${\bf{T}}^{N+1}$.
Another formula is as $A$-hypergeometric Mellin-Barnes
integrals~\cite{Nilsson} evaluated at phases $\theta\in{\bf{T}}^{N+1}$.
When the Mellin-Barnes integrals give a basis of solutions, they may be
used to glue together the local monodromy groups and determine a subgroup
of the monodromy group, which may sometimes be the full monodromy group.
Here, $A\subset{\bf{Z}}^n$ consists of $N{+}1$ integer vectors that generate ${\bf{Z}}^n$.
Considering ${\bf{Z}}^n\subset{\bf{Z}}^{n+1}$ as the vectors with first coordinate 1, we regard $A$ as a
collection of $N{+}1$ vectors in ${\bf{Z}}^{n+1}$.
The $A$-discriminant is a multihomogeneous polynomial in $N{+}1$ variables with $n{+}1$
homogeneities corresponding to $A$.
Removing these homogeneities gives the reduced $A$-discriminant, $D_B$, which is a hypersurface in
${\bf{C}}^d$ ($d:=N{-}n$) that depends upon a vector configuration $B\subset{\bf{Z}}^d$ Gale dual to
$A$.
This reduction corresponds to a homomorphism $\beta\colon({\bf{C}}^*)^{N+1}\to({\bf{C}}^*)^d$ and
induces a corresponding map $\Arg(\beta)$ on phases.
The Mellin-Barnes integrals at $\theta\in{\bf{T}}^{N+1}$ give a basis of solutions when
$\Arg(\beta)(\theta)$ has a neighborhood in ${\bf{T}}^{d}$ with the property that no point of
$D_B$ has a phase lying in that neighborhood~\cite{Nilsson}.
By results in~\cite{Johansson,NS}, this means that $\Arg(\beta)(\theta)$ lies in the
complement of the closure of the coamoeba ${\mathcal{A}}_B$ of $D_B$.
When $d=2$, the closure of ${\mathcal{A}}_B$ and its complement were described
in~\cite{NP10} as topological chains in ${\bf{T}}^2$ (induced from natural chains in its
universal cover ${\bf{R}}^2$, where ${\bf{T}}^2=({\bf{R}}/2\pi{\bf{Z}})^2$).
The closure of the coamoeba is an explicit chain depending on
$B$.
Its edges coincide with the edges of the zonotope $Z_B$ generated by $B$.
The main result of~\cite{NP10} is the following theorem.
\begin{itheorem}\label{T:NP}
The sum of the coamoeba chain $\overline{{\mathcal{A}}_B}$ and the zonotope $Z_B$ forms a two-dimensional
cycle in ${\bf{T}}^2$ that is equal to $n!\vol(A)$ times the fundamental cycle.
\end{itheorem}
Here, $n!\vol(A)$ is the normalized volume of the convex hull of $A$, which is the dimension
of the space of solutions to the (non-resonant) $A$-hypergeometric system.
The zonotope $Z_B$ gives points in the complement of ${\mathcal{A}}_B$, by Theorem~\ref{T:NP}.
Its proof in~\cite{NP10} only works when $d=2$ and it is not clear
how to generalize it to $d>2$.
However, any such generalization would be important: Mellin-Barnes
integrals at a set of phases $\theta$ for which the points $\Arg(\beta)(\theta)$ are distinct points of
$Z_B$ with the same image in ${\bf{T}}^d$ are linearly independent.
We give a proof of Theorem~\ref{T:NP} which explains the occurrence of the
zonotope and can be generalized to higher dimensions.
This proof uses the Horn-Kapranov parametrization of the $A$-discriminant~\cite{K91},
which implies that the discriminant coamoeba is the image of the coamoeba of a line $\ell_B$ in
$\P^N$ under the map $\Arg(\beta)$.
We construct a piecewise linear \demph{zonotope chain} in ${\bf{T}}^N$ (the quotient of
${\bf{T}}^{N+1}$ by the diagonal torus) which is a cone over the
boundary of the coamoeba of $\ell_B$, and compute the homology class of the sum of the coamoeba
and this zonotope chain.
This gives a formula for the image of this cycle under $\Arg(\beta)$, which
we show is $n!\vol(A)$ times the fundamental cycle of ${\bf{T}}^2$.
Theorem~\ref{T:NP} follows as the map $\Arg(\beta)$ sends the coamoeba of $\ell$ to the
coamoeba ${\mathcal{A}}_B$ of $D_B$ and sends the zonotope chain to $Z_B$.
While for $A$-discriminants, the set $A$ consists of distinct integer vectors and
consequently its Gale dual $B$ generates ${\bf{Z}}^2$ and has no
two vectors parallel, we establish Theorem~\ref{T:NP}
in the greater generality of any finite multiset $B$ of
integer vectors in ${\bf{Z}}^2$ with sum ${\bf 0}$ that spans ${\bf{R}}^2$.
This generality is useful in our primary application to hypergeometric systems,
for example the classical systems of Appell~\cite{appell} and Lauricella~\cite{lauricella}
may be expressed as $A$-hypergeometric systems with repeated vectors in the Gale dual $B$.
In this setting, we replace the reduced $A$-discriminant by the Horn-Kapranov
parametrization given by the vectors $B$, and study the coamoeba ${\mathcal{A}}_B$ of the
image, which is also written $D_B$.
The normalized volume $n!\vol(A)$ of the configuration $A$ is replaced by a quantity $d_B$
that depends upon the vectors in $B$.
We collect some preliminaries in Section~\ref{S:one}.
In Section~\ref{S:realline} we study the coamoeba of a line in $\P^N$ defined over the real
numbers and define its associated zonotope chain.
Our main result is a computation of the homology class of the cycle formed by these two
chains.
In Section~\ref{S:three} we show that under the map $\Arg(\beta)$ the coamoeba and zonotope
chains map to the coamoeba ${\mathcal{A}}_B$ and the zonotope $Z_B$,
and a simple application of the result in Section~\ref{S:realline} shows that the homology
class of $\overline{{\mathcal{A}}_B}+Z_B$ is $d_B$ times the fundamental cycle of ${\bf{T}}^2$.\medskip
\noindent{\bf Remark.}
This approach to reduced $A$-discriminant coamoebas and their complements was developed during the
Winter 2011 semester at the Institut Mittag-Leffler, with the main result obtained in August
2011, along with a sketch of a program to extend it to $d\geq 2$.
With the tragic death of Mikael Passare on 15 September 2011, the task of completing this
paper fell to the second author, and the program extending these results is being carried out
in collaboration with Mounir Nisse.
\section{Coamoebas and cohomology of tori}\label{S:one}
Throughout $N$ will be an integer strictly greater than 1.
Let $\defcolor{\P^N}$ be $N$-dimensional complex projective space, which will
always have a preferred set of coordinates $[x_1:\dotsb:x_N:x_{N+1}]$ (up to reordering).
Similarly, $\defcolor{{\bf{C}}^N}$, $\defcolor{({\bf{C}}^*)^N}$, $\defcolor{{\bf{R}}^N}$, and $\defcolor{{\bf{Z}}^N}$
are $N$-tuples of complex numbers, non-zero complex numbers, real numbers, and integers, all
with corresponding preferred coordinates.
We will write ${\bf e}_i$ for the $i$th basis vector in a corresponding ordered basis.
The argument map ${\bf{C}}^*\ni z=re^{\sqrt{-1}\theta}\mapsto\theta\in{\bf{T}}:={\bf{R}}/2\pi{\bf{Z}}$ induces an
argument map $\defcolor{\Arg}\colon({\bf{C}}^*)^N\to{\bf{T}}^N$.
To a subvariety $X\subset\P^N$ (or ${\bf{C}}^N$ or $({\bf{C}}^*)^N$) we associate its \demph{coamoeba}
$\defcolor{{\mathcal{A}}(X)}\subset{\bf{T}}^N$ which is the image of $X\cap({\bf{C}}^*)^N$ under $\Arg$.
The closure of the coamoeba ${\mathcal{A}}(X)$ was studied in~\cite{Johansson,NS}.
This closure contains ${\mathcal{A}}(X)$, together with all limits of arguments of unbounded
sequences in $X\cap({\bf{C}}^*)^N$, which constitute the
\demph{phase limit set of $X$, ${\mathcal{P}}^\infty(X)$}.
The main result of~\cite{NS} (proven when $X$ is a complete intersection in~\cite{Johansson})
is that ${\mathcal{P}}^\infty(X)$ is the union of the coamoebas of all initial degenerations of
$X\cap({\bf{C}}^*)^N$.
Lines in ${\bf{C}}^3$ were studied in~\cite{NS}, and the arguments there imply some basic facts about
coamoebas of lines.
When $X=\defcolor{\ell}\subset{\bf{C}}^N$ is a line which is not parallel to a sum of coordinate
directions
(${\bf e}_{i_1}+\dotsb+{\bf e}_{i_s}$ for some subset $\{i_1,\dotsc,i_s\}$ of $\{1,\dotsc,N\}$), its
coamoeba is two-dimensional and its phase limit set is a union of at most $N{+}1$
one-dimensional subtori of ${\bf{T}}^N$, one for each point of $\ell$ at infinity, whose directions are
parallel to sums of coordinate directions.
If $\ell'\subset{\bf{C}}^M$ ($M<N$) is the image of $\ell$ under a coordinate projection, then the
coamoeba ${\mathcal{A}}(\ell')$ is the image of ${\mathcal{A}}(\ell)$ under the induced projection.
If $\ell'$ is not parallel to a sum of coordinate directions, then the map
$\overline{{\mathcal{A}}(\ell)}\to\overline{{\mathcal{A}}(\ell')}$ is an injection except for those components
of the phase limit set which are collapsed to points. \medskip
The integral cohomology of the compact torus ${\bf{T}}^N$ is the exterior algebra $\wedge^*{\bf{Z}}^N$.
Under the natural identification of homology with the linear dual of cohomology
(which is again $\wedge^*{\bf{Z}}^N$), we will write ${\bf e}_i$ for the fundamental $1$-cycle
$[{\bf{T}}_i]$ of the coordinate circle ${\bf{T}}_i:= 0^{i-1}\times {\bf{T}}\times 0^{N-i}$, and ${\bf e}_i\wedge {\bf e}_j$
for the fundamental cycle $[{\bf{T}}_{i,j}]$ of the coordinate 2-torus ${\bf{T}}_{i,j}\simeq{\bf{T}}^2$ in the
directions $i$ and $j$ with the implied orientation.
Given a continuous map $\rho\colon{\bf{T}}^N\to{\bf{T}}^2$, the induced map in homology is
$\rho_*\colon H_*({\bf{T}}^N,{\bf{Z}})\to H_*({\bf{T}}^2,{\bf{Z}})$ with $\rho_*({\bf e}_i)=[\rho({\bf{T}}_i)]$; here
we interpret $[\rho({\bf{T}}_i)]$ as a cycle---the set of points in
$\rho({\bf{T}}_i)$ over which $\rho$ has degree $n$ appears in $[\rho({\bf{T}}_i)]$ with coefficient
$n$.
By the identification of $H_*({\bf{T}}^N,{\bf{Z}})$ with $\wedge^*{\bf{Z}}^N$,
such a map is determined by its action on $H_1({\bf{T}}^N,{\bf{Z}})$, where it is an integer linear map
${\bf{Z}}^N\to{\bf{Z}}^2$.
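A concrete instance of this determination: when $\rho$ is induced by an integer $2\times N$ matrix $R$ acting on $H_1$ as ${\bf{Z}}^N\to{\bf{Z}}^2$, functoriality of the exterior algebra sends ${\bf e}_i\wedge{\bf e}_j$ to the $2\times 2$ minor of $R$ on columns $i,j$, times the fundamental class $[{\bf{T}}^2]={\bf e}_1\wedge{\bf e}_2$. The sketch below illustrates this with a hypothetical matrix $R$ chosen only for the computation.

```python
import numpy as np
from itertools import combinations

# A hypothetical integer matrix R inducing rho: T^4 -> T^2 on H_1.
R = np.array([[1, 0, 2, -1],
              [0, 1, 1,  3]])

def push_h2(R, i, j):
    """Coefficient of e_1 ^ e_2 = [T^2] in rho_*(e_i ^ e_j):
    the 2x2 minor of R on columns i and j."""
    return int(round(np.linalg.det(R[:, [i, j]].astype(float))))

# Antisymmetry of the wedge, and agreement with (R e_i) ^ (R e_j):
for i, j in combinations(range(4), 2):
    ci, cj = R[:, i], R[:, j]
    wedge = int(ci[0] * cj[1] - ci[1] * cj[0])  # coefficient of e_1 ^ e_2
    assert push_h2(R, i, j) == wedge == -push_h2(R, j, i)
```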
\section{The coamoeba and zonotope chains of a real line}\label{S:realline}
We study the coamoeba ${\mathcal{A}}(\ell)$ of a line $\ell$ in $\P^N$ defined by real
equations.
Its closure $\overline{{\mathcal{A}}(\ell)}$ is a two-dimensional chain in ${\bf{T}}^N$ whose boundary consists
of at most $N{+}1$ one-dimensional subtori parallel to sums of coordinate directions.
We describe a piecewise linear two-dimensional chain---the \demph{zonotope chain} of
$\ell$---which has the same boundary as the coamoeba, but with opposite orientation.
The union of the coamoeba and the zonotope chain forms a cycle whose
homology class we compute.
The line $\ell$ has a parametrization
\[
\Phi\ \colon\ \P^1\ni z\ \longmapsto\
[b_1(z)\,:\, b_2(z)\,:\, \dotsb\,:\,b_{N+1}(z)]\ \in\ \P^N\,,
\]
where $b_1,\dotsc,b_{N+1}$ are real linear forms with zeroes
$\xi_1,\dotsc,\xi_{N+1}\in{\bf{R}}\P^1$.
The formulation and statement of our results about the coamoeba of $\ell$ will be with
respect to particular orderings of the forms $b_i$, which we now describe.
\begin{definition}\label{D:conventions}
Suppose that these zeroes are in a weakly increasing cyclic order on ${\bf{R}}\P^1$,
\begin{equation}\label{Eq:A}
\xi_1\ \leq\ \xi_2\ \leq\ \dotsb\ \leq\ \xi_{N+1}\,.
\end{equation}
Next, identify $\P^1\smallsetminus\{\xi_{N+1}\}$ with
${\bf{C}}$, so that $\xi_{N+1}$ is the point $\infty$ at infinity, and suppose that the
distinct zeroes are
\begin{equation}\label{Eq:B}
\zeta_1\ <\ \zeta_2\ <\ \dotsb\ <\ \zeta_M\ <\ \zeta_{M{+}1}\ =\ \infty\,.
\end{equation}
(Note that $M\leq N$.)
Let ${\bf{R}}={\bf{R}}\P^1\smallsetminus\{\infty\}$ and consider the forms $b_i$ as affine functions on
${\bf{R}}$.
Fix a scaling of these functions so that $b_{N+1}=1$.
On the interval $(-\infty,\zeta_1)$ the sign of each function $b_i$ is constant.
Define $\defcolor{\sgn_i}\in\{\pm 1\}$ to be this sign.
By~\eqref{Eq:A} and~\eqref{Eq:B}, there exist numbers
$1=m_1<\dotsb<m_{M+1}<m_{M+2}=N{+}2$ such that $b_i(\zeta_j)=0$ if and only if
$i\in[m_j,m_{j+1})$.
We further suppose that on each of these intervals $[m_j,m_{j+1})$ the signs $\sgn_i$
are weakly ordered.
Specifically, there are integers $n_1,\dotsc,n_{M{+}1}$ with $m_j< n_j \leq m_{j+1}$ such that
one of the following holds
\begin{eqnarray}
\sgn_{m_j}=\sgn_{m_j+1}=\dotsb=\sgn_{n_j-1}=-1 &<
&1=\sgn_{n_j}=\dotsb=\sgn_{m_{j+1}-1}\,,\makebox[.1in][l]{\qquad or}\label{Eq:inc}\\
\sgn_{m_j}=\sgn_{m_j+1}=\dotsb=\sgn_{n_j-1}=1 &>
&-1=\sgn_{n_j}=\dotsb=\sgn_{m_{j+1}-1}\,,\label{Eq:dec}
\end{eqnarray}
for $j=1,\dotsc,M{+}1$.
If $n_j=m_{j+1}$, then all the signs are the same; otherwise both
signs occur.
Since $b_{N+1}=1$, either~\eqref{Eq:inc} occurs with $n_{M+1}\leq N{+}1$
or~\eqref{Eq:dec} occurs with $n_{M+1}=N{+}1$.
\hfill\includegraphics[height=10pt]{figures/QED.eps}
\end{definition}
The point $\Arg(b_1(z),\dotsc,b_N(z))\in{\bf{T}}^N$ is constant for $z$ in
each interval of ${\bf{R}}\smallsetminus\{\zeta_1,\dotsc,\zeta_M\}$.
Let $\defcolor{p_1}:=(\arg(\sgn_i)\mid i=1,\dotsc,N)$ be the point coming from the
interval $(-\infty,\zeta_1)$, and for each $j=1,\dotsc,M$, let \defcolor{$p_{j+1}$} be the
point coming from the interval $(\zeta_j,\zeta_{j+1})$.
These $M{+}1$ points $p_1,\dotsc,p_{M+1}$ of ${\bf{T}}^N$ are the vertices of the coamoeba ${\mathcal{A}}(\ell)$
of $\ell$.
To understand the rest of the coamoeba, note that when $M\geq 2$ the map $\Arg\circ\Phi$ is
injective on $\P^1\smallsetminus{\bf{R}}\P^1$ (see~\cite[\S~2]{NS}).
(When $M=1$, $\ell$ is parallel to a sum of coordinate directions and ${\mathcal{A}}(\ell)$ is a
translate of the corresponding one-dimensional subtorus of ${\bf{T}}^N$.)
It suffices to consider the image of the upper half plane, as the image of the lower half
plane is obtained by multiplying by $-1$ (induced by complex conjugation).
For the upper half plane, consider $\Arg\circ\Phi(z)$ for $z$ lying on a contour $C$
as shown in Figure~\ref{F:contour}
\begin{figure}[htb]
\begin{picture}(223,77)
\put(0,10){\includegraphics{figures/contour.eps}}
\put(40,0){$\zeta_1$} \put(85,0){$\zeta_2$} \put(185,0){$\zeta_M$}
\put(228,18){${\bf{R}}$} \put(71,50){$C$}
\end{picture}
\caption{Contour in upper half plane}
\label{F:contour}
\end{figure}
that contains semicircles of
radius $\epsilon$ centered at each root $\zeta_j$ and a semicircle of radius $1/\epsilon$
centered at 0, but otherwise lies along the real axis, for $\epsilon$ a sufficiently small
positive number.
As $z$ moves along $C$, $\Arg\circ\Phi(z)$ takes on values $p_1,\dotsc,p_{M+1}$, for
$z\in C\cap{\bf{R}}$.
On the semicircular arc around $\zeta_j$, it traces a curve from $p_{j}$ to $p_{j+1}$ in
which nearly every component is constant, except for those $i$ where $b_i(\zeta_j)=0$,
each of which decreases by $\pi$.
In the limit as $\epsilon\to 0$, this becomes the line segment between $p_{j}$ and $p_{j+1}$
with direction $-{\bf f}_j$, where
\[
\defcolor{{\bf f}_j}\ :=\ \sum_{i\colon b_i(\zeta_j)=0} {\bf e}_i
\ =\ \sum_{i=m_j}^{m_{j+1}-1} {\bf e}_i\,,
\]
and where we set ${\bf e}_{N+1}:=-({\bf e}_1+\dotsb+{\bf e}_N)$.
This is because we are really working in the torus for $\P^N$, which is the quotient
${\bf{T}}^{N+1}/\Delta({\bf{T}})$ of ${\bf{T}}^{N+1}$ modulo the diagonal torus, and
${\bf e}_i\in{\bf{T}}^{N+1}/\Delta({\bf{T}})$ is the image of the standard basis element in ${\bf{T}}^{N+1}$.
Thus ${\bf e}_1+\dotsb+{\bf e}_{N+1}=0$.
Along the arc near infinity, $\Arg\circ\Phi(z)$ approaches the line segment between $p_{M+1}$ and
$p_1$ which has direction $-{\bf f}_{M+1}$, where
\begin{equation}\label{Eq:bbf_M+1}
{\bf f}_{M{+}1}\ =\ - \sum_{i\colon b_i(\infty)\neq0} {\bf e}_i\ =\ -({\bf f}_1+\dotsb+{\bf f}_M)\,.
\end{equation}
This polygonal path connecting $p_1,\dotsc,p_{M+1}$ in cyclic order forms the
boundary of the image of the upper half plane under $\Arg\circ\Phi$, which is a
two-dimensional membrane in ${\bf{T}}^N$.
The boundary of the image of the lower half plane is also a piecewise linear path
connecting $p_1,\dotsc,p_{M+1}$ in cyclic order, but the edge directions are
${\bf f}_1,\dotsc,{\bf f}_{M{+}1}$.
\begin{example}\label{Ex:tc}
Let $N=3$ and suppose that the affine functions $b_i$ are $z$, $1{-}2z$, $z{-}2$,
and $1$.
Then $M=N$, $\xi_i=\zeta_i$, $\zeta_1=0$, $\zeta_2=1/2$, $\zeta_3=2$,
and ${\bf f}_i={\bf e}_i$.
The vertices of ${\mathcal{A}}(\ell)$ are
\[
p_1\ =\ (\pi,0,\pi)\,,\quad
p_2\ =\ (0,0,\pi)\,,\quad
p_3\ =\ (0,-\pi,\pi)\,,\quad \mbox{and}\quad
p_4\ =\ (0,-\pi,0)\,.
\]
Figure~\ref{F:linecoamoeba} shows two views of ${\mathcal{A}}(\ell)$ in the fundamental domain
$[-\pi,\pi]^3\subset{\bf{R}}^3$ of ${\bf{T}}^3$, where the opposite faces of the cube are identified
to form ${\bf{T}}^3$.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
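The vertices listed in this example are easy to confirm numerically. The following sketch (our own helper names; \texttt{numpy.angle} returns phases in $(-\pi,\pi]$, so comparisons are made modulo $2\pi$) evaluates $\Arg\circ\Phi$ at one sample point per interval of ${\bf{R}}\smallsetminus\{0,1/2,2\}$, and also samples a small upper semicircle around $\zeta_1=0$, where only the first coordinate moves.

```python
import numpy as np

pi = np.pi

def phase(z):
    """Arg(b_1(z), b_2(z), b_3(z)) for the line of the example,
    with the projective scaling fixed by b_4 = 1."""
    b = np.array([z, 1 - 2 * z, z - 2], dtype=complex)
    return np.angle(b)

def eq_mod_2pi(u, v, tol=1e-6):
    """Componentwise equality in T^3, i.e. modulo 2*pi."""
    d = np.mod(np.asarray(u, float) - np.asarray(v, float), 2 * pi)
    return bool(np.all(np.minimum(d, 2 * pi - d) < tol))

# One sample z per real interval recovers the vertices p_1, ..., p_4.
samples = (-1.0, 0.25, 1.0, 3.0)
listed = [(pi, 0, pi), (0, 0, pi), (0, -pi, pi), (0, -pi, 0)]
assert all(eq_mod_2pi(phase(z), p) for z, p in zip(samples, listed))

# On a small upper semicircle around zeta_1 = 0, only the coordinate of the
# vanishing form b_1 moves, interpolating between its values at p_1 and p_2.
mid = phase(1e-8 * np.exp(1j * pi / 2))
assert abs(mid[0] - pi / 2) < 1e-6 and eq_mod_2pi(mid[1:], (0, pi))
```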
\begin{figure}[htb]
\begin{picture}(133,115)
\put(0,0){\includegraphics[height=115pt]{figures/three_1.eps}}
\put(35, 78){$p_1$} \put(53,104){$p_2$}
\put(13,110){$p_3$} \put(10,62){$p_4$}
\put(75,33){$p_1$} \put( 55, 7){$p_2$}
\put(97, 2){$p_3$} \put( 97,50){$p_4$}
\end{picture}
\qquad\qquad
\begin{picture}(142,115)(-23,0)
\put(0,0){\includegraphics[height=115pt]{figures/three_2.eps}}
\put( 43,104){$p_1$} \put( 32, 84){$p_2$}
\put(110, 83){$p_3$} \put(108,86){\vector(-1,0){20}}
\put(110, 50){$p_4$} \put(108,53){\vector(-1,0){20}}
\put( 30, -2){$p_1$} \put( 36, 31){$p_2$}
\put(-23, 28){$p_3$} \put(-12,30){\vector(1,0){20}}
\put(-23, 61){$p_4$} \put(-12,63){\vector(1,0){20}}
\end{picture}
\caption{Two views of ${\mathcal{A}}(\ell)$}
\label{F:linecoamoeba}
\end{figure}
\begin{example}\label{Ex:CoArepeat}
We consider three examples when $N=3$ in which the affine functions have repeated zeroes.
For the first, suppose that the affine functions $b_i$ are
$-1{-}z,-1{-}z,2z$, and $2$.
These have zeroes $-1\leq-1<0<\infty$ and the vertices of the coamoeba ${\mathcal{A}}(\ell)$ are
\[
(0,0,\pi)\,,\quad (-\pi,-\pi,\pi)\,,\quad \mbox{and}\quad
(-\pi,-\pi,0)\,.
\]
So ${\mathcal{A}}(\ell)$ consists of two triangles with edges parallel to ${\bf e}_1{+}{\bf e}_2$,
${\bf e}_3$, and ${\bf e}_1{+}{\bf e}_2{+}{\bf e}_3$.
It lies in the plane $\theta_1=\theta_2$.
For a second example, suppose that the affine functions $b_i$ are
$\frac{1}{2}+z,\frac{1}{2}-z,-2$, and $1$.
These have zeroes $-1,1,\infty$, and $\infty$.
The vertices of the coamoeba ${\mathcal{A}}(\ell)$ are
\[
(\pi,0,\pi)\,,\quad (0,0,\pi)\,,\quad \mbox{and}\quad
(0,-\pi,\pi)\,.
\]
So ${\mathcal{A}}(\ell)$ consists of two triangles with edges parallel to ${\bf e}_1$,
${\bf e}_2$, and ${\bf e}_1{+}{\bf e}_2$.
It lies in the plane $\theta_3=\pi$.
Finally, suppose that the affine functions $b_i$ are
$-z,1-z,2z-2$, and $1$.
These have zeroes $0,1,1$, and $\infty$.
The vertices of the coamoeba ${\mathcal{A}}(\ell)$ are
\[
(0,0,\pi)\,,\quad (-\pi,0,\pi)\,,\quad \mbox{and}\quad
(-\pi,-\pi,0)\,.
\]
So ${\mathcal{A}}(\ell)$ consists of two triangles with edges parallel to ${\bf e}_1$,
${\bf e}_2{+}{\bf e}_3$, and ${\bf e}_1{+}{\bf e}_2{+}{\bf e}_3$.
It lies in the plane $\theta_3=\theta_2+\pi$.
We display all three coamoebas in Figure~\ref{F:repeat}.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
The \demph{coamoeba chain $\overline{{\mathcal{A}}(\ell)}$} of $\ell$ is the closure of the
coamoeba of $\ell$ in which the image of each half plane (under $\Arg\circ\Phi(\cdot)$)
is oriented so that its boundary
is an oriented polygonal path connecting $p_1,\dotsc,p_{M+1},p_1$.
On the upper half plane this agrees with the orientation induced by the parametrization
$\P^1\smallsetminus{\bf{R}}\P^1\to{\mathcal{A}}(\ell)$, but it has the opposite orientation on the lower
half plane.
The boundary of $\overline{{\mathcal{A}}(\ell)}$ consists of $M{+}1$ circles in which $p_j$ and
$p_{j+1}$ are antipodal points on the $j$th circle and both semicircles (each is the
boundary of the image of a half plane) are oriented to point from $p_j$ to $p_{j+1}$.
This coamoeba chain is not a closed chain, as it has nonempty oriented boundary, but there
is a natural zonotope chain $Z(\ell)$ such that $\overline{{\mathcal{A}}(\ell)}+Z(\ell)$ is closed.
Intuitively, $Z(\ell)$ is the cone over the boundary of $\overline{{\mathcal{A}}(\ell)}$ with vertex
the origin $\defcolor{{\bf 0}}:=(0,\dotsc,0)$.
Unfortunately, there is no notion of a cone in ${\bf{T}}^N$ and the zonotope chain may be more than
just this cone.
We instead define a chain in ${\bf{R}}^N$ as the cone over an oriented
polygon $P(\ell)$ with vertex the origin and set $Z(\ell)$ to be the image of this chain in
${\bf{T}}^N$.
\begin{definition}\label{Def:Zonotope_chain}
Recall that the affine functions $b_1,\dotsc,b_N,b_{N{+}1}=1$ are ordered in the
following way.
Their zeroes are $\zeta_1<\dotsb<\zeta_M<\zeta_{M{+}1}=\infty$ and there are
integers $1=m_1<\dotsb<m_{M+1}\leq N{+}1$ and $n_1,\dotsc,n_{M{+}1}$ with
$m_j< n_j\leq m_{j+1}$ such that one
of~\eqref{Eq:inc} or~\eqref{Eq:dec} holds, where $\sgn_i$ is the sign of $b_i$ on
$(-\infty,\zeta_1)$.
We had defined ${\bf f}_j:=\sum_{i=m_j}^{m_{j+1}-1}{\bf e}_i$.
We will need the following vectors
\[
\defcolor{{\bf g}_j}\ :=\ \sum_{i=m_j}^{n_j-1}{\bf e}_i
\qquad\mbox{and}\qquad
\defcolor{{\bf h}_j}\ :=\ \sum_{i=m_j}^{m_{j+1}-1}\sgn_i {\bf e}_i\ =\
\sgn_{m_j}(2{\bf g}_j-{\bf f}_j)\ \,.
\]
We first define a sequence of points
$\widetilde{p}\hspace{1.2pt}_1,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1,\dotsc,\widetilde{p}\hspace{1.2pt}_{2M+2},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{2M+2}\in(\pi{\bf{Z}})^N$ with the property that
$\widetilde{p}\hspace{1.2pt}_i,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_i,\widetilde{p}\hspace{1.2pt}_{M{+}1{+}i},$ and ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{M{+}1{+}i}$ all map to $p_i\in{\bf{T}}^N$.
To begin, set $\widetilde{p}\hspace{1.2pt}_1$ to be the unique point in $\{0,\pi\}^N\subset{\bf{R}}^N$ which maps to
$p_1\in{\bf{T}}^N$,
\begin{equation}\label{Eq:ptilde}
\widetilde{p}\hspace{1.2pt}_{1,i}\ =\ \arg(\sgn_i)\ =\ \left\{
\begin{array}{rcl}\pi&\ &\mbox{if }\sgn_i=-1\\
0&&\mbox{if }\sgn_i=1\end{array}\right.\ .
\end{equation}
For each $j=1,\dotsc,M{+}1$, set $\defcolor{\widetilde{p}\hspace{1.2pt}_{j+1}}:=\widetilde{p}\hspace{1.2pt}_j+\pi{\bf h}_j$.
Since ${\bf h}_j=\sgn_{m_j}(2{\bf g}_j-{\bf f}_j)$, we have that $\widetilde{p}\hspace{1.2pt}_{j+1}$ maps to $p_{j+1}$, as
$p_{j+1}=p_j-\pi{\bf f}_j\mod (2\pi{\bf{Z}})^N$.
For the remainder of the points, if $n_j<m_{j+1}$, so that both signs occur, set
$\defcolor{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j} := \widetilde{p}\hspace{1.2pt}_j+2\pi\sgn_{m_j}{\bf g}_j$, and otherwise set
$\defcolor{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j} := \widetilde{p}\hspace{1.2pt}_j$.
Observe that ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ maps to $p_j$ and that
in every case, $\widetilde{p}\hspace{1.2pt}_{j+1}={\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j-\pi\sgn_{m_j}{\bf f}_j$.
We claim that $\widetilde{p}\hspace{1.2pt}_{M{+}2}=-\widetilde{p}\hspace{1.2pt}_1$.
Since $\widetilde{p}\hspace{1.2pt}_{M{+}2}=\widetilde{p}\hspace{1.2pt}_1+\pi({\bf h}_1+\dotsb+{\bf h}_{M{+}1})$, we need to show that
$\pi({\bf h}_1+\dotsb+{\bf h}_{M{+}1})=-2\widetilde{p}\hspace{1.2pt}_1$.
By definition,
\[
{\bf h}_1+\dotsb+{\bf h}_{M+1}\ =\ \sum_{i=1}^{N+1} \sgn_i{\bf e}_i\,.
\]
We have $\sgn_{N+1}=1$ as $b_{N{+}1}=1$.
Since we defined ${\bf e}_{N+1}$ to be $-({\bf e}_1+\dotsb+{\bf e}_N)$, we see that
\[
{\bf h}_1+\dotsb+{\bf h}_{M+1}\ =\ \sum_{i=1}^N (\sgn_i-1){\bf e}_i\,.
\]
The $i$th component of this sum is $-2$ if $\sgn_i=-1$ and $0$ if
$\sgn_i=1$.
Since $\widetilde{p}\hspace{1.2pt}_{1,i}=\arg(\sgn_i)$, this proves the claim.
Finally, for each $M{+}2\leq j\leq 2M{+}2$, set
\[
\defcolor{\widetilde{p}\hspace{1.2pt}_j}\ :=\ -\widetilde{p}\hspace{1.2pt}_{j-(M{+}1)}\qquad\mbox{and}\qquad
\defcolor{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j}\ :=\ -{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{j-(M{+}1)}\,,
\]
and let $P(\ell)$ be the cyclically oriented path obtained by connecting
\[
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{2M+2}\,,\,\widetilde{p}\hspace{1.2pt}_{2M+2}\,,\,
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{2M+1}\,,\,\widetilde{p}\hspace{1.2pt}_{2M+1}\,,\, \dotsc,
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2\,,\,\widetilde{p}\hspace{1.2pt}_2\,,\,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1\,,\, \widetilde{p}\hspace{1.2pt}_1
\]
in cyclic order.
The cone over $P(\ell)$ with vertex the origin is the union of possibly degenerate triangles of
the form
\[
\conv({\bf 0},\widetilde{p}\hspace{1.2pt}_{i+1},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{i})
\qquad\mbox{and}\qquad
\conv({\bf 0},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{i},\widetilde{p}\hspace{1.2pt}_{i})
\qquad\mbox{for}\qquad i=2M{+}2,\dotsc,2,1\,,
\]
where $\widetilde{p}\hspace{1.2pt}_{2M+3}:=\widetilde{p}\hspace{1.2pt}_1$.
Each triangle is oriented so its three vertices occur in positive order along its boundary.
If a point $\widetilde{p}\hspace{1.2pt}_i$ or ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_i$ is ${\bf 0}$, then the triangles involving it degenerate into line
segments, as do triangles $\conv({\bf 0},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{i},\widetilde{p}\hspace{1.2pt}_{i})$ when ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{i}=\widetilde{p}\hspace{1.2pt}_{i}$.
Let $\widetilde{Z(\ell)}$ be the union of these oriented triangles, which is a chain in
${\bf{R}}^N$.
Define the \demph{zonotope chain $Z(\ell)$} to be the image in ${\bf{T}}^N$ of
$\widetilde{Z(\ell)}$.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{definition}
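To make the recursion in Definition~\ref{Def:Zonotope_chain} concrete, the following sketch (our own variable names) carries out $\widetilde{p}\hspace{1.2pt}_{j+1}=\widetilde{p}\hspace{1.2pt}_j+\pi{\bf h}_j$ for the line of Example~\ref{Ex:tc}, where all zeroes are simple, so ${\bf f}_j={\bf e}_j$, every ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j=\widetilde{p}\hspace{1.2pt}_j$, and ${\bf h}_j=\sgn_j{\bf e}_j$. It checks that each lift reduces to the corresponding vertex $p_j$ modulo $2\pi$ and that $\widetilde{p}\hspace{1.2pt}_{M+2}=-\widetilde{p}\hspace{1.2pt}_1$.

```python
import numpy as np

pi = np.pi
# Example: N = M = 3, all zeroes simple, so m_j = j, f_j = e_j, and
# h_j = sgn_j * e_j, where e_4 := -(e_1 + e_2 + e_3).
e = np.vstack([np.eye(3), -np.ones((1, 3))])
sgn = np.array([-1, 1, -1, 1])        # signs of b_1, ..., b_4 on (-inf, zeta_1)
h = sgn[:, None] * e

p = [np.where(sgn[:3] < 0, pi, 0.0)]  # the lift of p_1, as in (Eq:ptilde)
for j in range(4):                    # p~_{j+1} = p~_j + pi * h_j
    p.append(p[-1] + pi * h[j])

# Each lift reduces to the corresponding vertex of A(ell) modulo 2*pi ...
listed = [(pi, 0, pi), (0, 0, pi), (0, -pi, pi), (0, -pi, 0)]
for q, v in zip(p, listed):
    d = np.mod(q - np.array(v), 2 * pi)
    assert np.all(np.minimum(d, 2 * pi - d) < 1e-9)

# ... and the closing-up claim p~_{M+2} = -p~_1 holds.
assert np.allclose(p[4], -p[0])
```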
\begin{example}\label{Ex:CoZrepeat}
Figure~\ref{F:ZC} shows two views of the zonotope chain with the coamoeba chain of
Figure~\ref{F:linecoamoeba}.
\begin{figure}[htb]
\begin{picture}(153,150)
\put(0,0){\includegraphics[height=150pt]{figures/ZC_1.eps}}
\put( 44,101){$\widetilde{p}\hspace{1.2pt}_1$} \put(69,134){$\widetilde{p}\hspace{1.2pt}_2$}
\put(120,126){$\widetilde{p}\hspace{1.2pt}_3$} \put(127,64){$\widetilde{p}\hspace{1.2pt}_4$}
\put(101, 46){$\widetilde{p}\hspace{1.2pt}_5$} \put(73,10){$\widetilde{p}\hspace{1.2pt}_6$}
\put( 18, 32){$\widetilde{p}\hspace{1.2pt}_7$} \put(18,82){$\widetilde{p}\hspace{1.2pt}_8$}
\put(70.5,60){${\bf 0}$}
\end{picture}
\qquad\qquad
\begin{picture}(170,150)(-20,0)
\put(0,0){\includegraphics[height=150pt]{figures/ZC_2.eps}}
\put( 59,135){$\widetilde{p}\hspace{1.2pt}_1$} \put( 46,110){$\widetilde{p}\hspace{1.2pt}_2$}
\put( 0,126){$\widetilde{p}\hspace{1.2pt}_3$}
\put(-20, 78){$\widetilde{p}\hspace{1.2pt}_4$} \put(-8,82){\vector(1,0){18}}
\put( 55, 9){$\widetilde{p}\hspace{1.2pt}_5$} \put( 65, 35){$\widetilde{p}\hspace{1.2pt}_6$}
\put(113, 21){$\widetilde{p}\hspace{1.2pt}_7$}
\put(133, 66){$\widetilde{p}\hspace{1.2pt}_8$} \put(131,69){\vector(-1,0){18}}
\put(53,66){${\bf 0}$}
\end{picture}
\caption{Two views of the coamoeba and zonotope chains}
\label{F:ZC}
\end{figure}
Now consider the zonotope chains for the three lines of Example~\ref{Ex:CoArepeat}.
When $\ell$ is defined by $z\mapsto[-1-z,-1-z,2z,2]$, the points $\widetilde{p}\hspace{1.2pt}_1,\dotsc,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_6$
(omitting repeated points) are
\[
(0,0, \pi)\,,\ ( \pi, \pi, \pi)\,,\ ( \pi, \pi,0)\,,\
(0,0,-\pi)\,,\ (-\pi,-\pi,-\pi)\,,\quad\mbox{ and }\quad (-\pi,-\pi,0)\,.
\]
We display the coamoeba chain and the zonotope chain of $\ell$ at the left of
Figure~\ref{F:repeat}.
\begin{figure}[htb]
\begin{picture}(102,150)(0,-20)
\put(0,0){\includegraphics[height=110pt]{figures/equal.eps}}
\put( 51, 95){$\widetilde{p}\hspace{1.2pt}_1$} \put(79, 72){$\widetilde{p}\hspace{1.2pt}_2$}
\put( 79, 35){$\widetilde{p}\hspace{1.2pt}_3$}
\put( 39, 10){$\widetilde{p}\hspace{1.2pt}_4$} \put( 11, 35){$\widetilde{p}\hspace{1.2pt}_5$}
\put( 11, 70){$\widetilde{p}\hspace{1.2pt}_6$}
\thicklines
\put( 12,118){\White{\vector(2,-3){21}}}
\put( 90, -8){\White{\vector(-2,3){21}}}
\thinlines
\put(0 ,121){${\mathcal{A}}(\ell)$}\put( 12,118){\vector(2,-3){20}}
\put(80,-18){${\mathcal{A}}(\ell)$}\put( 90, -8){\vector(-2,3){20}}
\end{picture}
\qquad
\begin{picture}(116,150)(-5,-20)
\put(0,0){\includegraphics[height=110pt]{figures/parallelI.eps}}
\put(12, 75){$\widetilde{p}\hspace{1.2pt}_1$} \put( 50,97){$\widetilde{p}\hspace{1.2pt}_2$}
\put(79, 71){$\widetilde{p}\hspace{1.2pt}_3$} \put( 72,-5){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3$}
\put(88, 30){$\widetilde{p}\hspace{1.2pt}_4$}
\put( 6, 4){$\widetilde{p}\hspace{1.2pt}_5$}
\put( 25,18){$\widetilde{p}\hspace{1.2pt}_6$} \put( 25,107){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_6$}
\put(-5,122){${\mathcal{A}}(\ell)$}
\put( 28, -19){${\mathcal{A}}(\ell)$}
\thicklines
\put(16, 9){\White{\vector(4,1){36}}}
\put(8,118){\White{\vector(1,-1){28}}}
\put(42,-9){\White{\vector(1, 1){28}}}
\thinlines
\put(16,9){\vector(4,1){35}}
\put(8,118){\vector(1,-1){27}}
\put(42,-9){\vector(1, 1){27}}
\end{picture}
\qquad
\begin{picture}(181,150)(-5,-20)
\put(0,0){\includegraphics[height=110pt]{figures/parallelII.eps}}
\put( 83, 97){$\widetilde{p}\hspace{1.2pt}_1$} \put(104, 89){$\widetilde{p}\hspace{1.2pt}_2$}
\put(166, 101){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2$} \put(144, 47){$\widetilde{p}\hspace{1.2pt}_3$}
\put( 81, 7){$\widetilde{p}\hspace{1.2pt}_4$}
\put( 60, 19){$\widetilde{p}\hspace{1.2pt}_5$}
\put( -5, 3){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_5$} \put( 20, 64){$\widetilde{p}\hspace{1.2pt}_6$}
\put( 0,120){${\mathcal{A}}(\ell)$}
\put(145, -10){${\mathcal{A}}(\ell)$}
\thicklines
\put( 18,114){\White{\vector(2,-1){49}}}
\put(153, 0){\White{\vector(-2,1){49}}}
\thinlines
\put( 18,114){\vector(2,-1){48}}
\put(153, 0){\vector(-2,1){48}}
\end{picture}
\caption{Coamoeba and zonotope chains}
\label{F:repeat}
\end{figure}
When $\ell$ is defined by $z\mapsto [\frac{1}{2}+z,\frac{1}{2}-z,-2,1]$, the points
$\widetilde{p}\hspace{1.2pt}_1,\dotsc,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_6$ are
\begin{eqnarray*}
&\widetilde{p}\hspace{1.2pt}_1\ =\ (\pi,0,\pi)\,,\
\widetilde{p}\hspace{1.2pt}_2\ =\ (0,0,\pi)\,,\
\widetilde{p}\hspace{1.2pt}_3\ =\ (0,\pi,\pi)\,,\
\Magenta{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3\ =\ (0,\pi,-\pi)}\,,&\\
&\widetilde{p}\hspace{1.2pt}_4\ =\ (-\pi,0,-\pi)\,,\
\widetilde{p}\hspace{1.2pt}_5\ =\ (0,0,-\pi)\,,\
\widetilde{p}\hspace{1.2pt}_6\ =\ (0,-\pi,-\pi)\,,\
\Magenta{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_6\ =\ (0,-\pi,\pi)}\,.&
\end{eqnarray*}
We display the coamoeba and zonotope chains of $\ell$ in the middle of Figure~\ref{F:repeat}.
When $\ell$ is defined by $z\mapsto [-z,1-z,2z-2,1]$, the
points $\widetilde{p}\hspace{1.2pt}_1,\dotsc,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_6$ are
\begin{eqnarray*}
&\widetilde{p}\hspace{1.2pt}_1\ =\ (0,0,\pi)\,,\
\widetilde{p}\hspace{1.2pt}_2\ =\ (\pi,0,\pi)\,,\
\Magenta{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2\ =\ (\pi,2\pi,\pi)}\,,\
\widetilde{p}\hspace{1.2pt}_3\ =\ (\pi,\pi,0)\,,&\\
&\widetilde{p}\hspace{1.2pt}_4\ =\ (0,0,-\pi)\,,\
\widetilde{p}\hspace{1.2pt}_5\ =\ (-\pi,0,-\pi)\,,\
\Magenta{{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_5\ =\ (-\pi,-2\pi,-\pi)}\,,\
\widetilde{p}\hspace{1.2pt}_6\ =\ (-\pi,-\pi,0)\,.&
\end{eqnarray*}
We display the coamoeba and zonotope chains of $\ell$ on the right of Figure~\ref{F:repeat}.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
We state the main result of this section.
\begin{theorem}\label{Th:homology}
The sum, $\overline{{\mathcal{A}}(\ell)}+Z(\ell)$, of the coamoeba chain and the zonotope chain forms a
cycle in ${\bf{T}}^N$ whose homology class is
\[
[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]\ =\
\sum_{\substack{1\leq i<j\leq N\\(\widetilde{p}\hspace{1.2pt}_{1,i},\widetilde{p}\hspace{1.2pt}_{1,j})=(0,\pi)}}
{\bf e}_i\wedge{\bf e}_j\,.
\]
\end{theorem}
\begin{example}\label{Ex:homology_class}
For the line of Example~\ref{Ex:tc}, $\widetilde{p}\hspace{1.2pt}_1=(\pi,0,\pi)$, and the only indices $i<j$ with $0$ in
position $i$ and $\pi$ in position $j$ are $i=2$ and $j=3$, and so
\[
[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]\ =\ {\bf e}_2\wedge{\bf e}_3\,.
\]
For the first line of Example~\ref{Ex:CoArepeat}, $\widetilde{p}\hspace{1.2pt}_1=(0,0,\pi)$, and so
\[
[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]\ =\ {\bf e}_1\wedge{\bf e}_3 + {\bf e}_2\wedge{\bf e}_3\,.
\]
For the second line of Example~\ref{Ex:CoArepeat}, $\widetilde{p}\hspace{1.2pt}_1=(\pi,0,\pi)$, so
that $[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]={\bf e}_2\wedge{\bf e}_3$.
For the third line of Example~\ref{Ex:CoArepeat}, $\widetilde{p}\hspace{1.2pt}_1=(0,0,\pi)$, and
$[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]={\bf e}_1\wedge{\bf e}_3 + {\bf e}_2\wedge{\bf e}_3$.
These homology classes are apparent from Figures~\ref{F:ZC} and~\ref{F:repeat}.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
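The bookkeeping in Theorem~\ref{Th:homology} is purely combinatorial: the class is read off from the coordinates of $\widetilde{p}\hspace{1.2pt}_1$ alone. As a quick sanity check (a sketch of ours; the function name is not from the text), the pairs $(i,j)$ contributing ${\bf e}_i\wedge{\bf e}_j$ can be enumerated mechanically:

```python
from itertools import combinations
from math import pi

def homology_pairs(p1):
    """Index pairs (i, j), i < j (1-based), with p1[i] = 0 and p1[j] = pi,
    which contribute e_i ^ e_j in Theorem Th:homology."""
    return [(i + 1, j + 1)
            for i, j in combinations(range(len(p1)), 2)
            if p1[i] == 0 and p1[j] == pi]

# Example Ex:tc, and the first line of Example Ex:CoArepeat.
print(homology_pairs((pi, 0, pi)))   # → [(2, 3)]
print(homology_pairs((0, 0, pi)))    # → [(1, 3), (2, 3)]
```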
\begin{example}\label{Ex:plane}
Our proof of Theorem~\ref{Th:homology} rests on the case of $N=2$.
Suppose first that $M=2$.
Up to positive rescaling and translation in the domain ${\bf{R}}\P^1$, there are four lines.
\[
\begin{picture}(65,62)(-7.5,-12)
\put(0,0){\includegraphics{figures/lineI.eps}}
\put(-4.5,-12){$[z:z{-}1:1]$}
\end{picture}
\qquad
\begin{picture}(65,62)(-7.5,-12)
\put(0,0){\includegraphics{figures/lineIV.eps}}
\put(-4.5,-12){$[z:1{-}z:1]$}
\end{picture}
\qquad
\begin{picture}(65,62)(-7.5,-12)
\put(0,0){\includegraphics{figures/lineIII.eps}}
\put(-8.5,-12){$[-z:1{-}z:1]$}
\end{picture}
\qquad
\begin{picture}(65,62)(-7.5,-12)
\put(0,0){\includegraphics{figures/lineII.eps}}
\put(-8.5,-12){$[-z:z{-}1:1]$}
\end{picture}
\]
For these, the initial point
$p_1$ is $(\pi,\pi)$, $(\pi,0)$, $(0,0)$, and $(0,\pi)$, respectively.
The four coamoeba chains are, in the fundamental domain $[-\pi,\pi]^2$,
\[
\includegraphics{figures/2dcoamoebaI.eps}
\qquad
\includegraphics{figures/2dcoamoebaIV.eps}
\qquad
\includegraphics{figures/2dcoamoebaIII.eps}
\qquad
\includegraphics{figures/2dcoamoebaII.eps}
\]
and the corresponding zonotope chains are as follows.
\[
\includegraphics{figures/2dzonotopeI.eps}
\qquad
\includegraphics{figures/2dzonotopeIV.eps}
\qquad
\includegraphics{figures/2dzonotopeIII.eps}
\qquad
\includegraphics{figures/2dzonotopeII.eps}
\]
For each, the sum $\overline{{\mathcal{A}}(\ell)}+Z(\ell)$ of chains is a
cycle.
This cycle is homologous to zero for the first three, and it forms the
fundamental cycle ${\bf e}_1\wedge{\bf e}_2$ of ${\bf{T}}^2$ for the fourth.
Now suppose that $M=1$.
We may assume that $\xi_1=0$.
Up to positive rescaling there are eight
possibilities for the parametrization of $\ell$,
\begin{eqnarray*}
&[-z:-z:1]\,,\
[ z: z:1]\,,\
[-z: 1:1]\,,\
[ z: 1:1]\,,&\\
&[ z:-z:1]\,,\
[ z:-1:1]\,,\
[-z:-1:1]\,,\
[-z: z:1]\,.&
\end{eqnarray*}
For all of these, the coamoeba is one-dimensional.
In the first four, the zonotope chain is one-dimensional.
Table~\ref{T:paths} gives the parametrization, the vertices of the coamoeba of the upper half plane, and
the path $P(\ell)=\widetilde{p}\hspace{1.2pt}_4,\widetilde{p}\hspace{1.2pt}_3,\widetilde{p}\hspace{1.2pt}_2,\widetilde{p}\hspace{1.2pt}_1$ for these four.
\begin{table}[htb]
\caption{Coamoeba and zonotope chains.}\label{T:paths}
\begin{tabular}{|r||l|l|}\hline
\multicolumn{1}{|c||}{$\ell$} &\multicolumn{1}{|c|}{${\mathcal{A}}(\ell)$}&\multicolumn{1}{|c|}{$P(\ell)$}\\\hline
$[-z:-z:1]$&$(0,0)\,,\,(-\pi,-\pi)$&$(-\pi,-\pi)\,,\,(0,0)\,,\,(\pi,\pi)\,,\,(0,0)$\\\hline
$[ z: z:1]$&$(\pi,\pi)\,,\,(0,0) $&$(0,0)\,,\,(-\pi,-\pi)\,,\,(0,0)\,,\,(\pi,\pi)$\\\hline
$[-z: 1:1]$&$(0,0)\,,\,(-\pi,0) $&$(-\pi,0)\,,\,(0,0)\,,\,(\pi,0)\,,\,(0,0)$\\\hline
$[ z: 1:1]$&$(\pi,0)\,,\,(0,0) $&$(0,0)\,,\,(-\pi,0)\,,\,(0,0)\,,\,(\pi,0)$\\\hline
\end{tabular}
\end{table}
The remaining parametrizations are more interesting.
When $\ell$ is given by $z\mapsto[z:-z:1]$, we have $p_1=(\pi,0)$ and $p_2=(0,-\pi)$, and
$P(\ell)$ is
\[
\widetilde{p}\hspace{1.2pt}_4=(0,-\pi)\,,\ {\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3=(\pi,0)\,,\ \widetilde{p}\hspace{1.2pt}_3=(-\pi,0)\,,\
\widetilde{p}\hspace{1.2pt}_2=(0,\pi)\,,\ {\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1=(-\pi,0)\,,\quad\mbox{and}\quad\widetilde{p}\hspace{1.2pt}_1=(\pi,0)\,,
\]
and the zonotope chain is shown on the left in Figure~\ref{F:M=1}.
The path $\widetilde{p}\hspace{1.2pt}_4{-}{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3{-}\widetilde{p}\hspace{1.2pt}_3{-}\widetilde{p}\hspace{1.2pt}_2{-}{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1{-}\widetilde{p}\hspace{1.2pt}_1{-}\widetilde{p}\hspace{1.2pt}_4$ zig-zags over itself, once
in each direction, and consequently each triangle is covered twice, once with each
orientation, and therefore $[Z(\ell)]=0$ in homology.
\begin{figure}[htb]
\begin{picture}(90,89)(-23,-24)
\put(0,0){\includegraphics{figures/2dzonotopeV.eps}}
\put( 42,33){$\widetilde{p}\hspace{1.2pt}_1={\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3$}
\put(-23,15){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1=\widetilde{p}\hspace{1.2pt}_3$}
\put(23,-9){$\widetilde{p}\hspace{1.2pt}_4$} \put(23,58){$\widetilde{p}\hspace{1.2pt}_2$}
\put(0,-24){$[ z:-z:1]$}
\end{picture}
\qquad
\begin{picture}(81,89)(-15,-24)
\put(0,0){\includegraphics{figures/2dzonotopeVI.eps}}
\put(-11,-7){$\widetilde{p}\hspace{1.2pt}_3$} \put(57,52){$\widetilde{p}\hspace{1.2pt}_1$}
\put(21,-9){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2{\,=\,}\widetilde{p}\hspace{1.2pt}_4$}
\put(-2,58){$\widetilde{p}\hspace{1.2pt}_2{\,=\,}{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_4$}
\put(0,-24){$[ z:-1:1]$}
\end{picture}
\qquad
\begin{picture}(81,89)(-15,-24)
\put(0,0){\includegraphics{figures/2dzonotopeVII.eps}}
\put(-11,55){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_4$} \put(-11,-7){$\widetilde{p}\hspace{1.2pt}_4$}
\put(22,-9){$\widetilde{p}\hspace{1.2pt}_3$} \put(57,-7){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2$}
\put(57,52){$\widetilde{p}\hspace{1.2pt}_2$} \put(22,58){$\widetilde{p}\hspace{1.2pt}_1$}
\put(0,-24){$[-z:-1:1]$}
\end{picture}
\qquad
\begin{picture}(120,89)(-32,-24)
\put(0,0){\includegraphics{figures/2dzonotopeVIII.eps}}
\put(-11,30){$\widetilde{p}\hspace{1.2pt}_4$} \put(-36,-3){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3$}
\put(23,-9){$\widetilde{p}\hspace{1.2pt}_3$} \put(56,19){$\widetilde{p}\hspace{1.2pt}_2$}
\put(81,52){${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1$} \put(22,58){$\widetilde{p}\hspace{1.2pt}_1$}
\put(0,-24){$[-z: z:1]$}
\end{picture}
\caption{Four more zonotope chains.}\label{F:M=1}
\end{figure}
When $\ell$ is given by $[z:-1:1]$, we have $p_1=(\pi,\pi)$ and $p_2=(0,\pi)$, and
$P(\ell)$ is
\[
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_4=(0,\pi)\,,\ \widetilde{p}\hspace{1.2pt}_4=(0,-\pi)\,,\ \widetilde{p}\hspace{1.2pt}_3=(-\pi,-\pi)\,,
\ {\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2=(0,-\pi)\,,\ \widetilde{p}\hspace{1.2pt}_2=(0,\pi)\,,\quad\mbox{and}\quad\widetilde{p}\hspace{1.2pt}_1=(\pi,\pi)\,,
\]
and the zonotope chain is shown on the left center of Figure~\ref{F:M=1}.
As before, each triangle is covered twice, once with each orientation, and therefore
$[Z(\ell)]=0$ in homology.
When $\ell$ is given by $[-z:-1:1]$, we have $p_1=(0,\pi)$ and
$p_2=(-\pi,\pi)$, and $P(\ell)$ is
\[
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_4=(-\pi,\pi)\,,\ \widetilde{p}\hspace{1.2pt}_4=(-\pi,-\pi)\,,\ \widetilde{p}\hspace{1.2pt}_3=(0,-\pi)\,,
{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2=(\pi,-\pi)\,,\ \widetilde{p}\hspace{1.2pt}_2=(\pi,\pi)\,,\quad\mbox{and}\quad \widetilde{p}\hspace{1.2pt}_1=(0,\pi)\,,
\]
and the zonotope chain is shown on the right center of Figure~\ref{F:M=1}.
The triangles $\conv({\bf 0},\widetilde{p}\hspace{1.2pt}_2,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_2)$ and $\conv({\bf 0},\widetilde{p}\hspace{1.2pt}_4,{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_4)$ are shaded
differently.
The zonotope chain is equal to the fundamental cycle of ${\bf{T}}^2$, with the standard
positive orientation.
Thus $[Z(\ell)]={\bf e}_1\wedge{\bf e}_2$ in homology.
Finally, when $\ell$ is given by $[-z: z:1]$, we have $p_1=(0,\pi)$ and
$p_2=(-\pi,0)$, and $P(\ell)$ is
\[
\widetilde{p}\hspace{1.2pt}_4=(-\pi,0)\,,\ {\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_3=(-2\pi,-\pi)\,,\ \widetilde{p}\hspace{1.2pt}_3=(0,-\pi)\,,
\widetilde{p}\hspace{1.2pt}_2=( \pi,0)\,,\ {\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_1=( 2\pi, \pi)\,,\quad\mbox{and}\quad \widetilde{p}\hspace{1.2pt}_1=(0,\pi)\,,
\]
and the zonotope chain is shown on the right of Figure~\ref{F:M=1}.
Again, $[Z(\ell)]=[{\bf{T}}^2]$.
Observe that $\overline{{\mathcal{A}}(\ell)}+Z(\ell)$ forms a cycle, which is homologous to zero
unless $\widetilde{p}\hspace{1.2pt}_1=(0,\pi)$, in which case it equals the fundamental cycle ${\bf e}_1\wedge{\bf e}_2$ of ${\bf{T}}^2$.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
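The initial points $p_1$ listed in the $M=2$ case above can be recovered as the coordinatewise arguments of the parametrization at a real point below all of its zeroes. A minimal sketch, assuming this reading of the construction (names are of our choosing):

```python
from cmath import phase

def initial_point(b1, b2, w=-10.0):
    """Coordinatewise arguments of a real line z -> [b1(z) : b2(z) : 1]
    at a real point w chosen below all zeroes of b1 and b2."""
    return (phase(complex(b1(w))), phase(complex(b2(w))))

# The four lines of the M = 2 case above (zeroes at 0 and 1).
lines = [
    (lambda z: z,  lambda z: z - 1),   # [z : z-1 : 1]
    (lambda z: z,  lambda z: 1 - z),   # [z : 1-z : 1]
    (lambda z: -z, lambda z: 1 - z),   # [-z : 1-z : 1]
    (lambda z: -z, lambda z: z - 1),   # [-z : z-1 : 1]
]
# Gives (pi,pi), (pi,0), (0,0), (0,pi), matching the initial points above.
print([initial_point(f, g) for f, g in lines])
```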
\begin{proof}[Proof of Theorem~$\ref{Th:homology}$]
We show that the two chains $\overline{{\mathcal{A}}(\ell)}$ and $Z(\ell)$ have
the same boundary, but with opposite orientation, which implies that their sum is a cycle.
We observed that the boundary of ${\mathcal{A}}(\ell)$ lies along $M{+}1$ circles, where
the $j$th circle contains $p_j$ and $p_{j+1}$ (with $p_{M+2}=p_1$) and has direction
parallel to ${\bf f}_j$.
On this $j$th circle the boundary of ${\mathcal{A}}(\ell)$ consists of the two semicircles
oriented from $p_j$ to $p_{j+1}$.
There are two types of edges forming the boundary of the zonotope cycle $Z(\ell)$.
The first comes from the edges of $P(\ell)$ with direction $\pm{\bf f}_j$ connecting
$\widetilde{p}\hspace{1.2pt}_{j+1}$ to ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ and $\widetilde{p}\hspace{1.2pt}_{M+1+j+1}$ to ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{M+1+j}$, and the second
comes from edges connecting ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ to $\widetilde{p}\hspace{1.2pt}_j$, when ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j\neq\widetilde{p}\hspace{1.2pt}_j$.
The first type of edge gives a part of the boundary of $Z(\ell)$ which is equal to the
boundary of ${\mathcal{A}}(\ell)$, but with opposite orientation.
(The edges point from $p_{j+1}$ to $p_j$.)
The edges of the second type come in pairs which cancel each other.
Indeed, when $\widetilde{p}\hspace{1.2pt}_j\neq{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$, then the edge from ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ to $\widetilde{p}\hspace{1.2pt}_j$ is the directed
circle connecting $p_j$ with itself and having direction $\pm{\bf g}_j$, which is equal to, but
opposite from, the edge connecting ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_{M+1+j}$ to $\widetilde{p}\hspace{1.2pt}_{M+1+j}$.
Thus $\overline{{\mathcal{A}}(\ell)}+Z(\ell)$ forms a cycle in homology.
We determine the homology class $[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]$ by computing its pushforward
to each two-dimensional coordinate projection of ${\bf{T}}^N$.
Let $1\leq i<j\leq N$ be two coordinate directions and consider
the projection onto the plane of the coordinates $i$
and $j$, which is a map ${\it pr}\colon{\bf{T}}^N\to{\bf{T}}^2$.
The image of $\ell$ under ${\it pr}$ is parametrized by
\begin{equation}\label{Eq:proj_param}
z\ \longmapsto\
[b_i(z)\,:\,b_j(z)\,:\,b_{N+1}(z)]\,.
\end{equation}
If $b_i$, $b_j$, and $b_{N{+}1}=1$ all vanish at $\xi_{N{+}1}=\infty$, then the image of
$\ell$ under ${\it pr}$ is a point, the image of $Z(\ell)$ is either a point or
one-dimensional, and so ${\it pr}_*[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]=0$.
In this case $(\widetilde{p}\hspace{1.2pt}_{1,i},\widetilde{p}\hspace{1.2pt}_{1,j})$ is either $(0,0)$, $(\pi,0)$, or $(\pi,\pi)$,
by~\eqref{Eq:inc} and~\eqref{Eq:ptilde}.
Otherwise, the image of $\ell$ under the
projection of $\P^N$ to the $(i,j)$-coordinate plane is the line
$\ell'$ parameterized by~\eqref{Eq:proj_param}.
It is immediate from the definitions that
\[
{\it pr}(\overline{{\mathcal{A}}(\ell)})\ =\ \overline{{\mathcal{A}}(\ell')}
\qquad\mbox{and}\qquad
{\it pr}(Z(\ell))\ =\ Z(\ell')\,.
\]
When $b_i$ and $b_j$ have distinct (finite) zeroes, say $\zeta_a$ and $\zeta_b$, then ${\it pr}$ is
injective on the interior of $\overline{{\mathcal{A}}(\ell)}$ and on the edges with directions
$\pm{\bf f}_a$, $\pm{\bf f}_b$, and $\pm{\bf f}_{M+1}$ (sending them to edges with directions
$\pm{\bf e}_1$, $\pm{\bf e}_2$, and $\pm({\bf e}_1{+}{\bf e}_2)$) and collapsing the others
to points.
In the other cases, ${\mathcal{A}}(\ell')$ is a circle.
However, in all cases ${\it pr}$ is one-to-one over the interiors of each triangle in the
image zonotope cycle $Z(\ell')$, collapsing the other triangles to line segments or to points.
Thus
\[
{\it pr}_*[\overline{{\mathcal{A}}(\ell)}+Z(\ell)]\ =\
[\overline{{\mathcal{A}}(\ell')}+Z(\ell')]\,.
\]
Since the last vertex of the path $P(\ell')$ is $(\widetilde{p}\hspace{1.2pt}_{1,i},\widetilde{p}\hspace{1.2pt}_{1,j})$, the theorem
follows from the computation of Example~\ref{Ex:plane}.
\end{proof}
\section{Structure of discriminant coamoebas in dimension two}\label{S:three}
Suppose now that $B\subset{\bf{Z}}^2$ is a multiset of $N{+}1$ vectors which span ${\bf{R}}^2$ and
have sum ${\bf 0}=(0,0)$.
We use $B=\{{\bf b}_1,\dotsc,{\bf b}_{N{+}1}\}$ to define a rational map ${\bf{C}}^2\dashrightarrow{\bf{C}}^2$
\begin{equation}\label{Eq:HK}
z\ \longmapsto\
\Bigl( \prod_{i=1}^{N+1} \langle {\bf b}_i,z\rangle^{{\bf b}_{i,1}}\,,\,
\prod_{i=1}^{N+1} \langle {\bf b}_i,z\rangle^{{\bf b}_{i,2}}\Bigr)\,.
\end{equation}
Since $\sum_i {\bf b}_i={\bf 0}$, each coordinate is homogeneous of degree $0$, and
so~\eqref{Eq:HK} induces a rational map $\Psi_B\colon\P^1\to\P^2$ (where the image has
distinguished coordinates).
Define \defcolor{$D_B$} to be the image of this map~\eqref{Eq:HK}.
When $B$ consists of distinct vectors that span ${\bf{Z}}^2$, then it is Gale dual to a
set of vectors of the form $(1,{\bf a})$ for ${\bf a}\in A\subset{\bf{Z}}^{N-2}$.
In this case,~\eqref{Eq:HK} is the Horn-Kapranov parametrization~\cite{K91} of the reduced
$A$-discriminant.
We use Theorem~\ref{Th:homology} to study the coamoeba \defcolor{${\mathcal{A}}_B$} of $D_B$ and
its complement, for any multiset $B$.
The results of Section~\ref{S:realline} are applicable because the map~\eqref{Eq:HK} factors,
\[
\begin{array}{rcrcl}
{\bf{C}}^2\ni z&\longmapsto&(\langle{\bf b}_1,z\rangle,\langle{\bf b}_2,z\rangle,
\dotsc,\langle{\bf b}_{N+1},z\rangle)\in{\bf{C}}^{N+1}\\
&&{\bf{C}}^{N+1}\ni (x_1,x_2,\dotsc,x_{N+1})&\longmapsto&
{\displaystyle\Bigl( \prod_{i=1}^{N+1} x_i^{{\bf b}_{i,1}}\,,\,
\prod_{i=1}^{N+1} x_i^{{\bf b}_{i,2}}\Bigr)\in{\bf{C}}^2}
\end{array}
\]
The first map, $\Phi_B$, is linear and the second, $\beta$, is a monomial map.
They induce maps $\P^1\to\P^N\dashrightarrow\P^2$, with the second a rational map.
Let \defcolor{$\ell_B$} be the image of $\Phi_B$ in $\P^N$, which is a real line as in
Section~\ref{S:realline}.
The map $\Arg(\beta)$ is the homomorphism ${\bf{T}}^N\to{\bf{T}}^2$ induced by the linear map
on the universal covers (also written $\Arg(\beta)$),
\[
\Arg(\beta)\ \colon\ {\bf{R}}^N\ \ni\ {\bf e}_i\ \longmapsto\ {\bf b}_i\ \in\ {\bf{R}}^2\,,
\]
and the following is immediate.
\begin{lemma}\label{L:coamoeba_structure}
The coamoeba ${\mathcal{A}}_B$ is the image of the coamoeba ${\mathcal{A}}(\ell_B)$ under
the map $\Arg(\beta)$.
\end{lemma}
\begin{example}\label{Ex:rational_cubic}
Let $B$ be the vector configuration $\{(1,0), (-2,1), (1,-2), (0,1)\}$.
Observe that ${\bf b}_1+{\bf b}_2+{\bf b}_3+{\bf b}_4=0$ and $3{\bf b}_1+2{\bf b}_2+{\bf b}_3=0$, thus $B$ is Gale dual
to the vector configuration $\{(1,3),(1,2),(1,1),(1,0)\}\subset\{1\}\times{\bf{Z}}$.
So $A$ is simply $\{0,1,2,3\}$ if we identify ${\bf{Z}}$ with $\{1\}\times{\bf{Z}}$.
We show these two configurations.
\[
\begin{picture}(63,58)(-10,-4)
\put(0,0){\includegraphics{figures/vectorsB.eps}}
\put( 27,47){${\bf b}_4$} \put(47,26){${\bf b}_1$}
\put(-3,50){${\bf b}_2$} \put(47,-4){${\bf b}_3$}
\put(5,5){$B$}
\end{picture}
\qquad\qquad
\begin{picture}(68,59)(-2,0)
\put(0,30){\includegraphics{figures/vectorsA.eps}}
\put(-1.5,23.4){$0$} \put(18.8,23.4){$1$}
\put(38.8,23.4){$2$} \put(58.8,23.4){$3$}
\put(29,44){$A$}
\end{picture}
\]
Observe that the convex hull of $A$ has normalized volume $d_B=3$.
The map~\eqref{Eq:HK} becomes
\[
(x,y)\ \longmapsto\ \Bigl( \frac{x(x-2y)}{(y-2x)^2}\,,\,
\frac{y(y-2x)}{(x-2y)^2}\Bigr)\,,
\]
whose image is the curve below.
\[
\begin{picture}(81,100)
\put(0,0){\includegraphics[height=100pt]{figures/Adiscr.eps}}
\put(21,76.5){$-2$} \put(78,29){$-2$}
\end{picture}
\]
The line $\ell_B$ is the line of Example~\ref{Ex:tc} and so ${\mathcal{A}}_B$ is the image of the
coamoeba of Figure~\ref{F:linecoamoeba} under the map
\[
\Arg(\beta)\ \colon\ (\theta_1,\theta_2,\theta_3)\ \longmapsto\
(\theta_1{-}2\theta_2{+}\theta_3, \theta_2{-}2\theta_3)\,.
\]
We display this image below, first in the fundamental domain $[-\pi,\pi]^2$ of ${\bf{T}}^2$, and
then in the universal cover ${\bf{R}}^2$ of ${\bf{T}}^2$ (each square is one fundamental domain).
\begin{equation}\label{Eq:A-discr}
\raisebox{-40pt}{\begin{picture}(60,90)
\put(0,0){\includegraphics{figures/AdiscrCo.eps}}
\put(23,2){${\mathcal{A}}_B$}
\end{picture}
\qquad\qquad
\begin{picture}(165,90)(5,0)
\put(37,0){\includegraphics{figures/AdiscrCoUC.eps}}
\put(0,2){${\mathcal{A}}_B$}
\put(20,5){\vector(1,0){92}}
\put(11,13){\vector(1,2){31}}
\put(84,46){$0$}
\put(105,75){$Z_B$} \put(104,78){\vector(-2,-1){30}}
\end{picture}}
\end{equation}
In the picture on the left, the darker shaded regions are where the argument map is
two-to-one.
The octagon on the right is the zonotope $Z_B$ generated by $B$ and it is the image of the
zonotope chain of Figure~\ref{F:ZC} under the map $\Arg(\beta)$.
Observe that the union of the coamoeba and the zonotope covers the fundamental domain $d_B=3$
times.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
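The arithmetic of Example~\ref{Ex:rational_cubic} is easy to verify by machine. The following sketch (variable names are ours, not the paper's) checks the two linear relations on $B$, the degree-$0$ homogeneity of the map~\eqref{Eq:HK}, and the matrix of $\Arg(\beta)$:

```python
# Sanity checks for Example Ex:rational_cubic (variable names are ours).
b1, b2, b3, b4 = (1, 0), (-2, 1), (1, -2), (0, 1)

def add(*vs):
    return tuple(map(sum, zip(*vs)))

def scale(c, v):
    return tuple(c * x for x in v)

# The two linear relations used to identify the Gale dual A.
assert add(b1, b2, b3, b4) == (0, 0)
assert add(scale(3, b1), scale(2, b2), b3) == (0, 0)

def psi(x, y):
    """The map (Eq:HK) for this B, as written out in the example."""
    return (x * (x - 2*y) / (y - 2*x)**2, y * (y - 2*x) / (x - 2*y)**2)

# Each coordinate is homogeneous of degree 0, so psi(t*x, t*y) = psi(x, y).
p, q = psi(3.0, 5.0), psi(21.0, 35.0)      # rescaled by t = 7
assert all(abs(u - v) < 1e-12 for u, v in zip(p, q))

def arg_beta(t1, t2, t3):
    """Arg(beta): e_i -> b_i, written out as in the example."""
    return (t1 - 2*t2 + t3, t2 - 2*t3)

# The linear map sends e_1, e_2, e_3 to b_1, b_2, b_3.
assert arg_beta(1, 0, 0) == b1
assert arg_beta(0, 1, 0) == b2
assert arg_beta(0, 0, 1) == b3
```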
What we observe in this example is in fact quite general.
We first use Lemma~\ref{L:coamoeba_structure} to describe the coamoeba ${\mathcal{A}}_B$
more explicitly, then study the zonotope $Z_B$ generated by $B$, before making an
important definition and giving our proof of Theorem~\ref{T:NP}.
The line $\ell_B$ is parametrized by the forms
$z\mapsto\langle {\bf b}_i,z\rangle$, for $i=1,\dotsc,N{+}1$.
Let $\xi_i\in{\bf{R}}\P^1$ be the zero of the $i$th form,
and suppose these are in a weakly increasing cyclic order on ${\bf{R}}\P^1$,
\[
\xi_1\ \leq\ \xi_2\ \leq\ \dotsb\ \leq\ \xi_{N+1}\,.
\]
Next, identify $\P^1\smallsetminus\{\xi_{N+1}\}$ with
${\bf{C}}$, so that $\xi_{N+1}$ is the point $\infty$ at infinity, and suppose that the
distinct zeroes are
\[
\zeta_1\ <\ \zeta_2\ <\ \dotsb\ <\ \zeta_M\ <\ \zeta_{M{+}1}\ =\ \infty\,.
\]
By the description of the coamoeba ${\mathcal{A}}(\ell_B)$ of
Section~\ref{S:realline} and Lemma~\ref{L:coamoeba_structure}, we see that the coamoeba
${\mathcal{A}}_B$ is composed of two components, each bounded by polygonal paths that are the
images of the boundary of ${\mathcal{A}}(\ell_B)$ under the map $\Arg(\beta)$.
For each $j=1,\dotsc,M{+}1$, set
\[
\defcolor{{\bf c}_j}\ :=\ \Arg(\beta)({\bf f}_j)\ =\ \sum_{i\colon \langle{\bf b}_i,\zeta_j\rangle=0} {\bf b}_i\,.
\]
The components of ${\mathcal{A}}_B$ correspond to the half planes of $\P^1$, and the boundary
along each is the polygonal path with edges $\pm\pi{\bf c}_1,\dotsc,\pm\pi{\bf c}_{M+1}$ with the
$+$ signs for the upper half plane and $-$ signs for the lower half plane.
The complete description requires the following proposition, which is explained
in~\cite[\S~2]{NP10}.
\begin{proposition}\label{P:unrammified}
Suppose that $M>1$.
Then the composition
\[
\P^1\smallsetminus\{\zeta_1,\dotsc,\zeta_{M+1}\}
\xrightarrow{\,\Psi_B\,}D_B\xrightarrow{\,\Arg\,}{\mathcal{A}}_B\ \subset\ {\bf{T}}^2
\]
is an immersion when restricted to $\P^1\smallsetminus{\bf{R}}\P^1$ (in fact it is locally a
covering map).
\end{proposition}
The edges $\pm\pi{\bf c}_1,\dotsc,\pm\pi{\bf c}_{M+1}$ decompose ${\bf{T}}^2$ into polygonal regions.
Over each polygonal region the map of Proposition~\ref{P:unrammified} has a constant number of
preimages.
This number of preimages equals the winding number of the polygonal path around that region.
Then the pushforward $\Arg(\beta)_*(\overline{{\mathcal{A}}(\ell_B)})$ of the coamoeba chain of
the line $\ell_B$ is the chain in ${\bf{T}}^2$ in which the multiplicity of a region is this
number of preimages, that is, the winding number.
This equals the coamoeba chain of $D_B$.
We will write \defcolor{$\overline{{\mathcal{A}}_B}$} for this chain
$\Arg(\beta)_*(\overline{{\mathcal{A}}(\ell_B)})$, as our arguments use the pushforward.
There is another natural chain we may define from the vector configuration $B$.
Let $\defcolor{\overline{{\bf 0},\pi{\bf b}_i}}$ be the directed line segment in ${\bf{R}}^2$ connecting the
origin to the endpoint of the vector $\pi{\bf b}_i$.
Let $\defcolor{Z_B}\subset{\bf{R}}^2$ be the Minkowski sum of the line segments
$\overline{{\bf 0},\pi{\bf b}_i}$ for ${\bf b}_i\in B$.
This is a centrally symmetric zonotope as $\sum_i {\bf b}_i={\bf 0}$.
We will also write $Z_B$ for its image in ${\bf{T}}^2$, considered now as a chain.
For any ${\bf v}\in{\bf{R}}^2$, the points
\[
q\ :=\ \sum_{\langle{\bf b}_i,{\bf v}\rangle>0} {\bf b}_i \qquad\mbox{and}\qquad
q'\ :=\ \sum_{\langle{\bf b}_i,{\bf v}\rangle\geq0} {\bf b}_i
\]
are vertices of $Z_B$ which are extreme in the direction of ${\bf v}$.
These differ only if the line ${\bf{R}}{\bf v}$ represents a zero $\zeta_j$ of one of the
forms, and then the edge between them is $\pi{\bf d}_j$, where
\begin{equation}\label{Eq:bdj}
{\bf d}_j\ :=\ \sum_{i\colon \langle{\bf b}_i,{\bf v}\rangle=0}
\sign(\langle{\bf b}_i,{\bf w}\rangle)\; {\bf b}_i\,,
\end{equation}
where ${\bf w}$ is a vector such that $\langle -{\bf w},q\rangle>\langle -{\bf w},q'\rangle$ and
$\sign(x)\in\{\pm1\}$ is the sign of the real number $x$.
Thus ${\bf d}_j$ is the vector parallel to any ${\bf b}_i$ with $\langle{\bf b}_i,\zeta_j\rangle=0$
whose length is the sum of the lengths of these vectors and its direction is such that
$\langle{\bf d}_j,{\bf w}\rangle>0$.
Starting at a vertex of $Z_B$ and moving, say clockwise, the successive edge vectors will be
the vectors $\{\pm\pi{\bf d}_1,\dotsc,\pm\pi{\bf d}_M,\pm\pi{\bf d}_{M+1}\}$ occurring in a cyclic
clockwise order.
This may be seen on the right in~\eqref{Eq:A-discr}, where $Z_B$ is the octagon.
Its southeastern-most vertex is $\pi{\bf b}_1+\pi{\bf b}_3$ (corresponding to the vector ${\bf v}_1=-{\bf b}_2$),
and the edges encountered from there in clockwise order are
$-\pi{\bf b}_1,\pi{\bf b}_2,-\pi{\bf b}_3, \pi{\bf b}_4, \pi{\bf b}_1,-\pi{\bf b}_2, \pi{\bf b}_3,-\pi{\bf b}_4$.
(Here, ${\bf d}_j={\bf b}_j$.)
Before giving our proof of Theorem~\ref{T:NP}, we make an important definition.
Let $B=\{{\bf b}_1,\dotsc,{\bf b}_{N+1}\}$ be a multiset of vectors in ${\bf{Z}}^2$ that span
${\bf{R}}^2$ and whose sum is ${\bf 0}$.
Write \demph{$\cone({\bf b}_i,{\bf b}_j)$} for the cone generated by the vectors ${\bf b}_i, {\bf b}_j$.
Suppose that ${\bf v}$ is any vector in ${\bf{R}}^2$ not pointing in the direction of a vector in $B$,
and set
\begin{equation}\label{Eq:dBv}
\defcolor{d_{B,{\bf v}}}\ :=\ \sum_{\substack{i<j\\ {\bf v}\in\cone({\bf b}_i,{\bf b}_j)}} |{\bf b}_i\wedge {\bf b}_j|\ .
\end{equation}
Here $|{\bf b}_i\wedge {\bf b}_j|$ is the absolute value of the determinant of the matrix whose columns are
the two vectors, which is the area of the parallelogram generated by ${\bf b}_i$ and ${\bf b}_j$.
\begin{lemma}\label{L:independent}
The sum~$\eqref{Eq:dBv}$ is independent of the choice of ${\bf v}$.
\end{lemma}
\begin{proof}
The rays generated by elements of $B$ divide ${\bf{R}}^2$ into regions.
The sum~\eqref{Eq:dBv} depends only upon the region containing ${\bf v}$---it is a sum over all
cones containing the given region.
To show its independence of region, let ${\bf v},{\bf v}'$ lie in adjacent regions with
$\defcolor{{\bf u}}$ a vector generating the ray separating the regions.
Suppose that the vectors in $B$ are indexed so that
${\bf b}_\kappa,{\bf b}_{\kappa+1},\dotsc,{\bf b}_{\mu-1}$ are the vectors with direction $-{\bf u}$ and
${\bf b}_\mu,{\bf b}_{\mu+1},\dotsc,{\bf b}_\lambda$ are the vectors with direction ${\bf u}$.
Then the sums for $d_{B,{\bf v}}$ and $d_{B,{\bf v}'}$ both include the sum over all cones whose relative
interior contains ${\bf u}$, but have different terms involving cones with one generator among
${\bf b}_\mu,\dotsc,{\bf b}_\lambda$.
All such cones appear, and up to a sign, the difference $d_{B,{\bf v}}-d_{B,{\bf v}'}$ is equal to
\begin{multline*}
\bigl({\bf b}_\mu +\dotsb+ {\bf b}_\lambda\bigr) \wedge \bigl({\bf b}_1 +\dotsb+ {\bf b}_{\kappa-1}\ +\
{\bf b}_{\lambda+1}+\dotsb+ {\bf b}_{N+1}\bigr)\\
\ =\ \bigl({\bf b}_\mu +\dotsb +{\bf b}_\lambda\bigr) \wedge ({\bf b}_1+\dotsb+{\bf b}_{N+1})\ =\ 0\,,
\end{multline*}
which proves the lemma.
\end{proof}
\begin{remark}
The sum~\eqref{Eq:dBv} is known to coincide with the normalized volume of the convex hull of the
vector configuration $A$ that is Gale dual to $B$ (see~\cite{DS02}), so Lemma~\ref{L:independent}
also follows from this fact.
We will henceforth write \demph{$d_B$} for this volume/sum.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{remark}
\begin{example}
Consider the sum~\eqref{Eq:dBv} for the vector configuration $B$ of
Example~\ref{Ex:rational_cubic}.
There are four choices for the vector ${\bf v}$ as indicated below
\begin{equation}\label{Eq:B-and-v}
\raisebox{-46pt}{\begin{picture}(100,93)(-15,-3)
\put(0,0){\includegraphics{figures/vectorsB-v.eps}}
\put( 36,63){${\bf b}_4$} \put(62,37){${\bf b}_1$}
\put(-13,57){${\bf b}_2$} \put(62, -3){${\bf b}_3$}
\put( 36,12){${\bf v}_1$} \put(81,21){${\bf v}_2$}
\put( 61,61){${\bf v}_3$} \put( 7,79){${\bf v}_4$}
\end{picture}}
\end{equation}
The vector ${\bf v}_1$ lies only in $\cone({\bf b}_2,{\bf b}_3)$, and we have
${\bf b}_2\wedge {\bf b}_3=|\begin{smallmatrix}-2&1\\1&-2\end{smallmatrix}|=3$.
The vector ${\bf v}_2$ lies in $\cone({\bf b}_3,{\bf b}_1)$ and $\cone({\bf b}_3,{\bf b}_4)$, and
we have ${\bf b}_3\wedge {\bf b}_1+{\bf b}_3\wedge {\bf b}_4=
|\begin{smallmatrix}1&-2\\1&0\end{smallmatrix}|+
|\begin{smallmatrix}1&-2\\0&1\end{smallmatrix}|=2+1=3$.
Similarly, ${\bf v}_3$ lies in $\cone({\bf b}_3,{\bf b}_4)$, $\cone({\bf b}_1,{\bf b}_4)$, and $\cone({\bf b}_1,{\bf b}_2)$,
and ${\bf b}_3\wedge {\bf b}_4+{\bf b}_1\wedge {\bf b}_4+{\bf b}_1\wedge {\bf b}_2=1+1+1=3$, and the calculation for
${\bf v}_4$ is the mirror-image of that for ${\bf v}_2$.
In every case, $d_{B,{\bf v}_i}=3$, and so $d_B=3$.
\hfill$\includegraphics[height=10pt]{figures/QED.eps}$
\end{example}
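The invariance asserted by Lemma~\ref{L:independent} can also be checked numerically for this configuration. In the sketch below (our names, not the paper's), $d_{B,{\bf v}}$ is computed directly from~\eqref{Eq:dBv} for one representative direction in each of the four regions cut out by the rays through the vectors of $B$:

```python
from itertools import combinations

# B from Example Ex:rational_cubic; all other names are ours.
b = [(1, 0), (-2, 1), (1, -2), (0, 1)]

def wedge(u, v):
    """Signed area b_i ^ b_j, i.e. the 2x2 determinant."""
    return u[0] * v[1] - u[1] * v[0]

def in_cone(v, u1, u2):
    """True when v lies in the open cone generated by u1 and u2."""
    d = wedge(u1, u2)
    if d == 0:              # u1, u2 parallel: no two-dimensional cone
        return False
    s = wedge(v, u2) / d    # solve v = s*u1 + t*u2 by Cramer's rule
    t = wedge(u1, v) / d
    return s > 0 and t > 0

def d_B(v):
    """The sum (Eq:dBv) over unordered pairs of vectors of B."""
    return sum(abs(wedge(bi, bj))
               for bi, bj in combinations(b, 2) if in_cone(v, bi, bj))

# One representative direction in each of the four regions.
print([d_B(v) for v in [(0, -1), (2, -1), (1, 1), (-1, 2)]])  # → [3, 3, 3, 3]
```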
\begin{theorem}\label{Th:Cycle}
The sum, $\overline{{\mathcal{A}}_B}+Z_B$, of the coamoeba chain of $D_B$ and the
$B$-zonotope chain is a cycle in ${\bf{T}}^2$ which equals $d_B[{\bf{T}}^2]$.
\end{theorem}
\begin{proof}
We will show that $\Arg(\beta)_*[Z(\ell_B)]=[Z_B]$, which implies that
\[
[\overline{{\mathcal{A}}_B}+Z_B]\ =\ \Arg(\beta)_*[\overline{{\mathcal{A}}(\ell_B)} + Z(\ell_B)]
\]
is a cycle, as $\Arg(\beta)_*[\overline{{\mathcal{A}}(\ell_B)}]=[\overline{{\mathcal{A}}_B}]$.
Since $\Arg(\beta)_*({\bf e}_i\wedge{\bf e}_j)=({\bf b}_i\wedge{\bf b}_j)\cdot[{\bf{T}}^2]$, the formula of
Theorem~\ref{Th:homology} will give us the homology class of $[\overline{{\mathcal{A}}_B}+Z_B]$.
We will use~\eqref{Eq:dBv} and Lemma~\ref{L:independent} to show that it equals
$d_B[{\bf{T}}^2]$.
This will imply the theorem as we will show that there is an ordering
of the vectors $B$ such that the map $\Arg(\beta)\colon Z(\ell_B)\to Z_B$ in the
universal covers ${\bf{R}}^N\to{\bf{R}}^2$ is injective.
Recall that $\xi_1,\dotsc,\xi_{N+1}$ are points of ${\bf{R}}\P^1$ with $\langle {\bf b}_i,\xi_i\rangle=0$
and $\zeta_1,\dotsc,\zeta_{M+1}$ are the distinct points among them.
Let ${\bf 0}\neq\defcolor{{\bf v}}\in{\bf{R}}^2$ represent $\xi_{N+1}=\zeta_{M+1}$
(so that $\langle{\bf b}_{N+1},{\bf v}\rangle=0$)
and choose $x\in{\bf{R}}^2$ to be a point with $\langle{\bf b}_{N+1},x\rangle=1$.
Then $t\mapsto x+t{\bf v}$ gives a parametrization of ${\bf{R}}\P^1$ with $\infty=\zeta_{M+1}$, and
identifies ${\bf{R}}$ with ${\bf{R}}\P^1\smallsetminus\{\infty\}$.
To agree with Definition~\ref{D:conventions}, we suppose that the points
of $B$ are ordered so that~\eqref{Eq:A} and~\eqref{Eq:B} hold.
Thus there are integers
$1=m_1<\dotsb<m_{M+1}<m_{M+2}=N{+}2$ such that
\[
\langle {\bf b}_i,\zeta_j\rangle=0\
\Longleftrightarrow\ m_j\leq i<m_{j+1}\,.
\]
We further suppose that $B$ is ordered so that one of~\eqref{Eq:inc}
or~\eqref{Eq:dec} holds for every $j=1,\dotsc,M{+}1$.
Specifically, let ${\bf w}:=x+\tau {\bf v}$ for some fixed $\tau<\zeta_1$.
Then there exist integers $n_1,\dotsc,n_{M+1}$ such that for each $j=1,\dotsc,M{+}1$
we have $m_j< n_j\leq m_{j+1}$ and either
\begin{eqnarray*}
\langle{\bf b}_{m_j},{\bf w}\rangle\,,\ \dotsc\,,\ \langle{\bf b}_{n_j-1},{\bf w}\rangle
&<\ 0\ <&
\langle{\bf b}_{n_j},{\bf w}\rangle\,,\ \dotsc\,,\ \langle{\bf b}_{m_{j+1}-1},{\bf w}\rangle\,,
\makebox[.1in][l]{\qquad or}\\
\langle{\bf b}_{m_j},{\bf w}\rangle\,,\ \dotsc\,,\ \langle{\bf b}_{n_j-1},{\bf w}\rangle
&>\ 0\ >&
\langle{\bf b}_{n_j},{\bf w}\rangle\,,\ \dotsc\,,\ \langle{\bf b}_{m_{j+1}-1},{\bf w}\rangle\,.
\end{eqnarray*}
For $i=1,\dotsc,N{+}1$, let $\sgn_i\in\{\pm 1\}$ be the sign of $\langle{\bf b}_i,{\bf w}\rangle$.
Note that $\sgn_{N+1}=1$.
Define ${\bf f}_j,{\bf g}_j,{\bf h}_j$ as in Definition~\ref{D:conventions},
\[
{\bf f}_j\ :=\ \sum_{i=m_j}^{m_{j+1}-1} {\bf e}_i\,,\qquad
{\bf g}_j\ :=\ \sum_{i=m_j}^{n_{j}-1} {\bf e}_i\,,\qquad\mbox{and}\qquad
{\bf h}_j\ :=\ \sum_{i=m_j}^{m_{j+1}-1} \sgn_i{\bf e}_i\,.
\]
Consider now the following affine parametrization of $\ell_B\subset\P^N$,
\[
\Phi_B\ \colon\ t\ \longmapsto\
[\langle{\bf b}_1,x+t{\bf v}\rangle\ \colon
\dotsb\ \colon\
\langle{\bf b}_N,x+t{\bf v}\rangle\ \colon\ \langle{\bf b}_{N+1},x+t{\bf v}\rangle=1]\,.
\]
Let $\widetilde{p}\hspace{1.2pt}_1\in\{0,\pi\}^N\in{\bf{R}}^N$ be the point whose $i$th coordinate is
$\arg(\sgn_i)$.
Its image $p_1\in{\bf{T}}^N$ is the point on the coamoeba of $\ell_B$ coming from
the real points $\Phi_B(-\infty,\zeta_1)$.
We describe $\Arg(\beta)(Z(\ell_B))$ in the universal cover ${\bf{R}}^2$ of ${\bf{T}}^2$.
For each $j=1,\dotsc,2M{+}2$, set $\defcolor{\widetilde{q}_j}:=\Arg(\beta)(\widetilde{p}\hspace{1.2pt}_j)$ and
$\defcolor{{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j}:=\Arg(\beta)({\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j)$.
Since
\begin{equation}\label{Eq:def_p1tilde}
\widetilde{p}\hspace{1.2pt}_{1,i}\ =\
\left\{ \begin{array}{rcl} \pi &&\mbox{if }\langle{\bf b}_i,{\bf w}\rangle<0\\
0 &&\mbox{if }\langle{\bf b}_i,{\bf w}\rangle>0
\end{array}\right.\ ,
\end{equation}
we have
\[
\widetilde{q}_1\ =\ \pi\cdot\sum_{\langle{\bf b}_i,{\bf w}\rangle<0}{\bf b}_i\ ,
\]
and so $\widetilde{q}_1$ is a vertex of $Z_B$ which is extreme in the direction of $-{\bf w}$.
The zonotope chain $Z(\ell_B)$ is a union of the triangles
\begin{equation}\label{Eq:ZC_triangles}
\conv({\bf 0},\widetilde{p}\hspace{1.2pt}_{j+1},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j)
\qquad\mbox{and}\qquad
\conv({\bf 0},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{p}\hspace{1.2pt}_j)
\qquad\mbox{for}\ j=M{+}2,\dotsc,1\,,
\end{equation}
where the second is degenerate if $\widetilde{p}\hspace{1.2pt}_j={\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j$.
Thus $\Arg(\beta)(Z(\ell_B))$ will be the union of the (possibly degenerate) triangles
\begin{equation}\label{Eq:Zone_triangles}
\conv({\bf 0},\widetilde{q}_{j+1},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j)
\qquad\mbox{and}\qquad
\conv({\bf 0},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{q}_j)
\qquad\mbox{for}\ j=M{+}2,\dotsc,1\,.
\end{equation}
For $j\leq M{+}1$, $\widetilde{p}\hspace{1.2pt}_{j+1}=\widetilde{p}\hspace{1.2pt}_j+\pi{\bf h}_j$, so
\[
\widetilde{q}_{j+1}\ =\ \widetilde{q}_j+\pi\Arg(\beta)({\bf h}_j)\ =\
\widetilde{q}_j+\pi{\bf d}_j\,,
\]
which we see by~\eqref{Eq:bdj} (with the vector ${\bf w}=x+\tau{\bf v}$) and our definition of
$\sgn_i$.
If we fix the orientation so that ${\bf v}$ is clockwise of ${\bf b}_{N+1}$, then
by our choice of ordering of the zeroes $\zeta_j$, the lines ${\bf{R}}{\bf d}_1, \dotsc,{\bf{R}}{\bf d}_{M+1}$
occur in clockwise order.
Since $\langle{\bf d}_j,{\bf w}\rangle>0$ and $\widetilde{q}_1$ is extreme in the direction of $-{\bf w}$,
the vectors $\pi{\bf d}_1,\dotsc,\pi{\bf d}_{M{+}1}$ will form the edges of the zonotope
starting at $\widetilde{q}_1$ and moving clockwise.
It follows from the discussion following~\eqref{Eq:bdj} that $\widetilde{q}_1,\dotsc,\widetilde{q}_{2M+2}$ form
the vertices of the zonotope $Z_B$.
This implies that no $\widetilde{q}_j$ coincides with the origin ${\bf 0}$.
All that remains is to understand the two triangles~\eqref{Eq:Zone_triangles}
for those $j$ when ${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j\neq\widetilde{q}_j$.
In this case, ${\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j=\widetilde{p}\hspace{1.2pt}_j+2\pi\sgn_{m_j}{\bf g}_j$, and so
\[
{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j\ =\ \widetilde{p}\hspace{1.2pt}_j\ +\ 2\pi\sgn_{m_j}\sum_{i=m_j}^{n_j-1} {\bf b}_i\ =\
\widetilde{p}\hspace{1.2pt}_j\ +\ 2\pi\sum_{i=m_j}^{n_j-1} \sgn_i{\bf b}_i\,.
\]
Since ${\bf b}_{m_j},\dotsc,{\bf b}_{m_{j+1}-1}$ are parallel, $\widetilde{q}_j,{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j$, and
$\widetilde{q}_{j+1}$ are collinear.
This implies that
\[
\Arg(\beta)_* [ \conv({\bf 0},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{p}\hspace{1.2pt}_j) + \conv({\bf 0},\widetilde{p}\hspace{1.2pt}_{j+1},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j)]\ =\
[\conv({\bf 0},\widetilde{q}_{j+1},\widetilde{q}_j)]\,,
\]
which shows that $\Arg(\beta)_*[Z(\ell_B)]=[Z_B]$.
Indeed, if ${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ lies between $\widetilde{q}_j$ and $\widetilde{q}_{j+1}$ then $\Arg(\beta)$
preserves the orientation of the triangles~\eqref{Eq:ZC_triangles} and is therefore
injective over their images, whose union is $\conv({\bf 0},\widetilde{q}_{j+1},\widetilde{q}_j)$.
Otherwise, the two triangles~\eqref{Eq:Zone_triangles} have opposite orientations and
\[
\conv({\bf 0},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{q}_j)\ \supset\ \conv({\bf 0},\widetilde{q}_{j+1},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j)\,,
\]
so that $\Arg(\beta)_* [ \conv({\bf 0},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{p}\hspace{1.2pt}_j) + \conv({\bf 0},\widetilde{p}\hspace{1.2pt}_{j+1},{\widetilde{p}\hspace{1.6pt}'\hspace{-3.4pt}}_j)]$
equals
\[
[ \conv({\bf 0},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j,\widetilde{q}_j)]-[ \conv({\bf 0},\widetilde{q}_{j+1},{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j)]
\ =\ [\conv({\bf 0},\widetilde{q}_{j+1},\widetilde{q}_j)]\,.
\]
Theorem~\ref{Th:homology}, Equation~\eqref{Eq:def_p1tilde}, and
$\Arg(\beta)_*({\bf e}_i\wedge {\bf e}_j)={\bf b}_i\wedge {\bf b}_j\cdot[{\bf{T}}^2]$, show that
\[
\Arg(\beta)_*[\overline{{\mathcal{A}}(\ell_B)}+Z(\ell_B)]\ =\ [{\bf{T}}^2]\cdot
\sum_{\substack{1\leq i<j\leq N\\\langle{\bf b}_i,{\bf w}\rangle>0>\langle{\bf b}_j,{\bf w}\rangle}}
{\bf b}_i\wedge {\bf b}_j\,.
\]
We will show that this equals $d_B[{\bf{T}}^2]$.
Observe that if ${\bf b}_i$ and ${\bf b}_j$ are parallel, then ${\bf b}_i\wedge{\bf b}_j=0$ and they do not
contribute to the sum.
We will consider the sum with the restriction that the vectors ${\bf b}_i$ and ${\bf b}_j$ are not parallel.
Set $\defcolor{{\bf w}^\perp}:=-{\bf b}_{N+1}+{\bf w}/\langle{\bf w},{\bf w}\rangle$, which is orthogonal to
${\bf w}$.
Suppose that ${\bf v}$ is clockwise of ${\bf b}_{N+1}$, as below.
\[
\begin{picture}(93,104)(-13,-20)
\put(-10,-10){\includegraphics{figures/bij.eps}}
\put(-1,-12){${\bf b}_i$} \put(58,-13){${\bf b}_j$}
\put(25,78){${\bf b}_{N+1}$} \put(61,33){${\bf v}$}
\put(-13,32){${\bf w}$} \put(16,-18){${\bf w}^\perp$}
\end{picture}
\]
By our choice of ${\bf w}$, the lines ${\bf{R}}{\bf w}^\perp,{\bf{R}}{\bf b}_1,\dotsc,{\bf{R}}{\bf b}_{N+1}$ occur in weak
clockwise order with ${\bf{R}}{\bf w}^\perp$ distinct from the rest.
Suppose now that $1\leq i<j\leq N$ where
\begin{equation}\label{Eq:condition}
\langle{\bf b}_i,{\bf w}\rangle\ >\ 0\ >\ \langle{\bf b}_j,{\bf w}\rangle\,,
\end{equation}
and ${\bf b}_i$ and ${\bf b}_j$ are not parallel.
The cone spanned by ${\bf b}_i$ and ${\bf b}_j$ meets a half ray of ${\bf{R}}{\bf w}^\perp$, with ${\bf b}_i$
to the left of ${\bf{R}}{\bf w}^\perp$ and ${\bf b}_j$ to the right of ${\bf{R}}{\bf w}^\perp$, by~\eqref{Eq:condition}.
Since ${\bf{R}}{\bf w}^\perp,{\bf{R}}{\bf b}_i$, and ${\bf{R}}{\bf b}_j$ occur in clockwise order, we must have that
${\bf w}^\perp\in\cone({\bf b}_i,{\bf b}_j)$, which shows that
\[
\sum_{\substack{1\leq i<j\leq N\\\langle{\bf b}_i,{\bf w}\rangle>0>\langle{\bf b}_j,{\bf w}\rangle}}
{\bf b}_i\wedge {\bf b}_j
\ =\
\sum_{\substack{1\leq i<j\leq N\\{\bf w}^\perp\in\cone({\bf b}_i,{\bf b}_j)}}
{\bf b}_i\wedge {\bf b}_j\ =\ d_{B,{\bf w}^\perp}\ =\ d_B\,.
\]
The sum equals $d_{B,{\bf w}^\perp}$ because, by~\eqref{Eq:condition} and the condition
that ${\bf w}^\perp\in\cone({\bf b}_i,{\bf b}_j)$ with $i<j$, the vector ${\bf b}_j$ is counterclockwise
from ${\bf b}_i$, so that ${\bf b}_i\wedge{\bf b}_j>0$.
We complete the proof by noting that ${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_j$ will lie between $\widetilde{q}_j$ and $\widetilde{q}_{j+1}$
if either $n_j=m_{j+1}$, so that ${\bf g}_j={\bf f}_j$, or if
\[
\|{\bf g}_j\|\ =\ \|\sum_{i=m_j}^{n_j-1}{\bf b}_i\|
\ =\ \sum_{i=m_j}^{n_j-1}\|{\bf b}_i\|\ \leq\
\sum_{i=n_j}^{m_{j+1}-1}\|{\bf b}_i\|\ =\ \|{\bf f}_j-{\bf g}_j\|\,,
\]
as ${\bf b}_{m_j},\dotsc,{\bf b}_{n_j-1}$ have the same direction which is opposite to the (common)
direction of ${\bf b}_{n_j},\dotsc,{\bf b}_{m_{j+1}-1}$.
If this does not occur for our given order, then we simply reverse the vectors
${\bf b}_{m_j},\dotsc,{\bf b}_{m_{j+1}-1}$, replacing ${\bf g}_j$ with ${\bf f}_j-{\bf g}_j$.
\end{proof}
\begin{example}\label{ex:parallel}
The last point in the proof about the injectivity of
\[
\Arg(\beta)\ \colon\ Z(\ell_B)\ \longrightarrow\ Z_B
\]
(and more generally the arguments when $B$ has parallel vectors)
is geometrically subtle.
We expose this subtlety in the following two examples.
Suppose that $B$ consists of the vectors $(1,0),(0,1),(-2,-2)$, and $(1,1)$,
\[
\includegraphics{figures/vectorsparallel.eps}
\]
With ${\bf v}=(1,-1)$ and $x=(\frac{1}{2},\frac{1}{2})$, the line $\ell_B$ has the parametrization
\begin{equation}\label{Eq:first_P}
z\ \longmapsto\ [ \tfrac{1}{2}+z\;:\; \tfrac{1}{2}-z\;:\; -2\;:\; 1]\,,
\end{equation}
which is the second line in our running Examples~\ref{Ex:CoArepeat},~\ref{Ex:CoZrepeat},
and~\ref{Ex:homology_class}.
In this case the image $\Arg(\beta)(Z(\ell_B))$ is shown on the left of Figure~\ref{F:images}.
It is superimposed over a fundamental domain and dashed lines $\theta_1,\theta_2=n\pi$ for
$n\in{\bf{Z}}$.
The segments $\widetilde{q}_3,{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_2$ and $\widetilde{q}_6,{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_5$ are covered in both directions as
$\Arg(\beta)(Z(\ell_B))$ backtracks over these segments.
In fact, the triangles
\[
\conv({\bf 0},\widetilde{q}_3,{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_2)\qquad\mbox{and}\qquad
\conv({\bf 0},\widetilde{q}_6,{\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_5)
\]
have orientation opposite of the other triangles.
The medium shaded parts (near ${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_2$ and ${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_5$) are covered twice and the darker shaded
parts near ${\bf 0}$ are covered thrice.
\begin{figure}[htb]
\begin{picture}(177,126)(-11,-11)
\put(0,0){\includegraphics{figures/imageParallelI.eps}}
\put( 71, 52){${\bf 0}$}
\put( 23,-10){$\widetilde{q}_1$} \put( 48,-10){$\widetilde{q}_2$}
\put(156,102){${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_2$} \put(129, 67){$\widetilde{q}_3$}
\put(129,107){$\widetilde{q}_4$} \put( 93,107){$\widetilde{q}_5$}
\put(-11, -3){${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_5$} \put( 16, 32){$\widetilde{q}_6$}
\end{picture}
\qquad
\begin{picture}(177,126)(-11,-11)
\put(0,0){\includegraphics{figures/imageParallelII.eps}}
\put( 71, 54){${\bf 0}$}
\put( 23,-10){$\widetilde{q}_1$} \put( 48,-10){$\widetilde{q}_2$}
\put(104, 42){${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_2$} \put(129, 67){$\widetilde{q}_3$}
\put(129,107){$\widetilde{q}_4$} \put( 93,107){$\widetilde{q}_5$}
\put( 39, 57){${\widetilde{q}\hspace{1.6pt}'\hspace{-3.4pt}}_5$} \put( 16, 32){$\widetilde{q}_6$}
\thicklines
\put(130,11){\White{\vector(-1,0){114}}} \put(130,11){\White{\vector(-1,0){115}}}
\put(138,21){\White{\vector(0,1){78}}} \put(138,22){\White{\vector(0,1){78}}}
\thinlines
\put(130,11){\vector(-1,0){113}}
\put(138,21){\vector(0,1){77}}
\put(132,9){${\mathcal{A}}_B$}
\end{picture}
\caption{Images of $\Arg(\beta)(Z(\ell_B))$}
\label{F:images}
\end{figure}
Now suppose that the vectors in $B$ are in the order $(1,0)$, $(1,1)$, $(-2,-2)$, and $(0,1)$,
and that ${\bf v}=(-1,0)$ and $x=(0,1)$.
Then $\ell_B$ is parametrized by
\[
z\ \longmapsto\ [-z\;:\; 1-z \;:\; 2z-2 \;:\; 1]\,.
\]
In this case the image $\Arg(\beta)(Z(\ell_B))$ is equal to the zonotope
$Z_B$, and is shown on the right of Figure~\ref{F:images}, together with the
coamoeba ${\mathcal{A}}_B$.
As explained in the proof of Theorem~\ref{Th:Cycle}, the image equals the zonotope because
in the pair of parallel vectors $(1,1)$ and $(-2,-2)$, the shorter comes first in this
case, while in the previous case, the shorter one came second.
In both cases (which are just different parametrizations of the same line)
$\Arg(\beta)_*[Z(\ell_B)]=[Z_B]$ as shown in the proof of
Theorem~\ref{Th:Cycle}, and the coamoebas coincide.
Furthermore, $[\overline{{\mathcal{A}}_B}+Z_B]=2[{\bf{T}}^2]$ for both, as $d_B=2$.
\hfill\includegraphics[height=10pt]{figures/QED.eps}
\end{example}
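As a sanity check (our own illustration, not part of the original argument), the invariant $d_B$ can be computed mechanically from its definition: $d_{B,{\bf v}}$ is the sum of ${\bf b}_i\wedge{\bf b}_j$ over the pairs whose open cone contains a generic direction ${\bf v}$ (absolute values account for the ordering convention), and it is independent of ${\bf v}$. The Python sketch below (function names are ours) does this for the configuration $B=\{(1,0),(0,1),(-2,-2),(1,1)\}$ of this example; every generic ${\bf v}$ gives $d_B=2$, as claimed above.

```python
from fractions import Fraction

def wedge(a, b):
    # 2x2 determinant a ∧ b
    return a[0] * b[1] - a[1] * b[0]

def in_open_cone(v, bi, bj):
    # v lies strictly inside cone(bi, bj) iff v = a*bi + b*bj with a, b > 0
    d = wedge(bi, bj)
    if d == 0:
        return False            # a parallel pair spans no 2-dimensional cone
    a = Fraction(wedge(v, bj), d)   # Cramer's rule for the coefficients
    b = Fraction(wedge(bi, v), d)
    return a > 0 and b > 0

def d_Bv(B, v):
    # sum of |b_i ∧ b_j| over pairs whose open cone contains v
    total = 0
    for i in range(len(B)):
        for j in range(i + 1, len(B)):
            if in_open_cone(v, B[i], B[j]):
                total += abs(wedge(B[i], B[j]))
    return total

B = [(1, 0), (0, 1), (-2, -2), (1, 1)]
for v in [(2, 1), (-1, 3), (-3, -1), (1, -3)]:   # generic directions
    print(v, d_Bv(B, v))        # each prints 2
```

The same routine applied to the configuration of the earlier example (where $d_B=3$) would serve as an analogous check.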
% End of arXiv:1201.6649 [math.AG], ``Discriminant coamoebas through homology''
% https://arxiv.org/abs/1201.6649
% arXiv:1404.2710, ``On Collatz' Words, Sequences and Trees''
% https://arxiv.org/abs/1404.2710
% Abstract: Motivated by a recent work of Tr\"umper we consider the general Collatz
% word (up-down pattern) and the sequences following this pattern. The recurrences for
% the first and last sequence entries are given, obtained from repeated application of
% the general solution of a binary linear inhomogeneous Diophantine equation. These
% recurrences are then solved. The Collatz tree is also discussed.
\section{Introduction}
The Collatz map $C$ for natural numbers maps an odd number $m$ to $3\,m\,+\, 1$ and an even number to \dstyle{\frac{m}{2}}. The {\sl Collatz} conjecture \cite{Lagarias}, \cite{WikiC}, \cite{WeissteinC} is the claim that every natural number $n$ ends up, after sufficiently many iterations of the map $C$, in the trivial cycle $(4,\, 2,\, 1)$. Motivated by the work of {\sl Tr\"umper}~\cite{Truemper} we consider a general finite {\sl Collatz} word on the alphabet $\{u,\,d\}$, where $u$ (for `up') indicates application of the map $C$ to an odd number, and $d$ (for `down') application of the map $C$ to an even number. The task is to find all sequences which follow this word pattern (to be read from left to right). These sequences will be called $CS$ (for {\sl Collatz} sequence, also used for the plural) realizing the $CW$ (for {\sl Collatz} word, also for the plural) under consideration. This problem was solved by {\sl Tr\"umper} \cite{Truemper} under the restriction that the first and last sequence entries are odd. Here we shall not use this restriction. The solution will be given in terms of recurrence relations for the first and last entries of the $CS$ for a given $CW$. This involves a repeated application of the general solution in positive integers of the linear inhomogeneous {\sl Diophant}ine equation $a\,x\,+\, b\,y\,=\, c$, with $a\,=\, 3^m$ and $b\,=\, 2^n$ and given integer $c$. Because $gcd(3,2)\,=\, 1$ one always has a countably infinite number of integer solutions. This general solution depends on a non-negative integer parameter $k$. We believe that our solution is more straightforward than the one given in \cite{Truemper}.
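For illustration (our own sketch, not part of the original text), the map $C$ and the descent of a small start value into the trivial cycle take only a few lines of Python:

```python
def C(m):
    # the Collatz map: odd m -> 3m + 1, even m -> m // 2
    return 3 * m + 1 if m % 2 else m // 2

# iterate from 7 until the trivial cycle (4, 2, 1) is entered at 1
orbit = [7]
while orbit[-1] != 1:
    orbit.append(C(orbit[-1]))
print(orbit)
# [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```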
\section{Collatz words, sequences and the Collatz tree}\label{section2}
The {\sl Collatz} map $C:\ \mathbb{N} \to \mathbb{N},\ m \mapsto 3\,m\,+\, 1$ if $m$ is odd, \dstyle{m \mapsto \frac{m}{2}} if $m$ is even, leads to an increase $u$ (for `up') or decrease $d$ (for `down'), respectively. Finite {\sl Collatz} words over the alphabet $\{u,\,d\}$ are considered with the restriction that, except for the one letter word $u$, every $u$ is followed by a $d$, because $2\,m\,+\, 1 \,\mapsto\, 2\,(3\,m+1)$. This is the reason for introducing (with \cite{Truemper}) also $s\, :=\, ud$. Thus $s$ stands for $2\,m\,+\, 1\,\mapsto\, 3\,m\,+\, 1$. The general finite word is encoded by an $(S+1)-$tuple $\vec n_{S}\,=\, [n_0,\,n_1,\,...,\,n_S ]$ with $S\,\in\, \mathbb N$.
\begin{eqnarray} \label{CW}
CW(\vec n_{S}) &\,=\,& d^{n_0}\,s\,d^{n_1-1}\,s\, \cdots\, s\,d ^{n_S-1} \\
&\,=\, & (d^{n_0}\,s)\,(d^{n_1-1}\,s)\, \cdots\, \,(d ^{n_{S-1}-1}\,s)\, d ^{n_S-1}\ , \nonumber
\end{eqnarray}
with $n_0\,\in\, {\mathbb N}_0\, :=\, {\mathbb N}\, \cup\, \{0\} $, $n_i\,\in\, \mathbb N$, for $i\,=\, 1,\,2,\,...,\, S$. The number of $u$ (that is of $s\,=\, ud$) letters in the word $CW(\vec n_{S})$, or $CW(S)$, for short, is $S$ (which is why we have used $\vec n_S$, not $\vec n_{S+1}$, for the $(S{+}1)$-tuple), and the number of $d$ is \dstyle{D(S)\, :=\, \sum_{j=0}^{S}\,n_j }. In \cite{Truemper} $n_0\,=\, 0$ (start with an odd number), $y\,=\, S$ and $x\,=\, D(S)$.\par\smallskip\noindent
Some special words are not covered by this notation: first the one letter word $u$ with the {\sl Collatz} sequence ($CS$) of length two $CS(u;\,k)\,=\, [2\,k\,+\, 1,\, 2\,(3\,k\,+\, 2)]$, and $CW([n_0])\,=\, d^{n_0}$ with the family of sequences $CS([n_0];k) \,=\, [2^{n_0}\,k,\,2^{n_0 - 1}\,k,\,...,\,2\,k,\,k] $ with $k\,\in\, {\mathbb N}_0$. \par\smallskip\noindent
A {\sl Collatz} sequence $CS$ realizing a word $CW(\vec n_{S})$ is of length $L\,=\, D\,+\, S\,+\, 1$ and follows the word pattern from left to right: $CS(\vec n_{S})\,=\, [c_1,\, c_2,\,...,\,c_L]$. For example, $CW([1,2,1])\,=\, dsds$ with $S\,=\, 2$, $D(2)\,=\, 4$, and length $L\,=\, 7$; here $CS_0(\vec n_S) \,=\, [2, 1, 4, 2, 1, 4, 2]$ is the first of these sequences (over the non-negative integers), the one with smallest start number $c_1$. In order to conform with the notation used in \cite{Truemper} we shall write $M$ for the start number $c_1$ and $N$ for the last number $c_L$. However, in \cite{Truemper} $M$ and $N$ are restricted to be odd, which will not be the case here. Later one can get the words with odd start number $M$ by choosing $n_0\,=\, 0$. In order to have also $N$ odd one has to pick from the $CS_k([0,n_1,...,n_S])$, $k\,\in\, \mathbb N_0$, only the members with odd last entry.\par\smallskip\noindent
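The correspondence between a word $CW(\vec n_S)$ and its realizing sequences can be checked mechanically. The Python sketch below (helper names are ours, not from the paper) expands a tuple $[n_0,\dots,n_S]$ into its up-down pattern and reads off the pattern of a given sequence; both give $duddud$, {\it i.e.},\, $dsds$, for the example $\vec n_2\,=\,[1,2,1]$.

```python
def C(m):
    # the Collatz map
    return 3 * m + 1 if m % 2 else m // 2

def word_of(cs):
    # up-down pattern of a Collatz sequence: 'u' at odd entries, 'd' at even ones
    assert all(C(a) == b for a, b in zip(cs, cs[1:]))   # cs must be a C-orbit
    return ''.join('u' if a % 2 else 'd' for a in cs[:-1])

def expand(n_vec):
    # CW([n_0, ..., n_S]) = d^{n_0} s d^{n_1 - 1} s ... s d^{n_S - 1}, with s = 'ud'
    w = 'd' * n_vec[0]
    for n in n_vec[1:]:
        w += 'ud' + 'd' * (n - 1)
    return w

print(expand([1, 2, 1]))                # duddud, i.e. dsds
print(word_of([2, 1, 4, 2, 1, 4, 2]))   # duddud, the same pattern
```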
In \cite{Truemper} the monoid of Collatz words, with the unit element $e\,=\,$ {\it empty word} is treated. This will not be considered in this work. Also the connection to the $3\,m\,-\, 1$ problem will not be pursued here.\par\smallskip\noindent
The {\sl Collatz} tree CT is an infinite (incomplete) ternary tree, starting with the root, the number $8$ on top at level $l=0$. Three branches, labeled $L$, $V$ and $R$ can be present:
If a node (vertex) has label $n\,\equiv\, 4\,(mod\,6)$ the out-degree is $2$, with a left edge (branch) labeled $L$ ending in a node with label \dstyle{\frac{n-1}{3}} and a right edge (label $R$) ending in the node labeled $2\,n$. In the other cases, $n\,\equiv\, 0,\,1,\,2,\,3,\,5\,(mod\,6)$, with out-degree $1$, a vertical edge (label $V$) ends in the node labeled $2\,n$. The root labeled $8$ stands for the trivial cycle $8$\, repeat$(4,\,2,\,1)$. See Figure 1 for $CT_7$ with only the first eight levels. It may seem that this tree is left-right symmetric (disregarding the node labels) but this is no longer the case starting at level $l\,=\, 12$. At level $l=10$ the $mod\, 6$ structure of the left and right part of $CT$, also taking into account the node labels, is broken for the first time, but the node labels $4\ (mod\, 6)$ are still symmetric. At the next level $l\,=\, 11$ the left-right symmetry concerning the labels $4\ (mod\, 6)$ is also broken, leading at level $l\,=\, 12$ to a symmetry breaking in the branch structure of the left and right part of $CT$. Thus at level $l\,=\, 12$ the number of nodes becomes odd for the first time: $15$ nodes on the left side versus $14$ nodes on the right one. See rows $ l \,+\, 3$ of \seqnum{A127824} for the node labels of the first levels, and \seqnum{A005186}$(l+3)$ for the number of nodes. The number of $4\, (mod\, 6)$ nodes at level $l$ is given in \seqnum{A176866}$(l+4)$. \par\smallskip\noindent
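The level-by-level growth of $CT$ just described is easy to reproduce computationally. The following Python sketch (our own illustration; the paper contains no code) builds the tree top-down from the root $8$ and records the node counts of the first levels, which follow the pattern $1,1,2,2,4,4,6,6,8,10,14,18,\dots$ quoted above via \seqnum{A005186}.

```python
def children(n):
    # in CT the node n always has the child 2n; if n ≡ 4 (mod 6) it also has
    # the left child (n - 1)/3, except for n = 4 (hidden in the trivial cycle)
    ch = [2 * n]
    if n % 6 == 4 and n != 4:
        ch.append((n - 1) // 3)
    return ch

level = [8]                # root of CT at level l = 0
counts = []
for l in range(12):
    counts.append(len(level))
    level = [c for n in level for c in children(n)]
print(counts)
# [1, 1, 2, 2, 4, 4, 6, 6, 8, 10, 14, 18]
```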
A $CS$ is determined uniquely from its start number $M$. Therefore no number can appear twice in $CT$, except for the numbers $1,\,2,\,4$ of the (hidden) trivial cycle. The Collatz conjecture is that every natural number appears in CT at some level ($1, 2,$ and $4$ are hidden in the root $8$). A formula for $l\,=\, l(n)$ would prove the conjecture.
\par\smallskip\noindent
Reading $CT$ from bottom to top, beginning with some number $M$ at a certain level $l$, recording the edge labels up to level $l=0$, leads to a certain $L,V,R$-sequence. E.g., $M\,=\, 40$ at level $l\,=\, 5$ generates the length $5$ sequence $[V,R,V,L,V]$. This is related to the CS starting with $M\,=\, 40$, namely $[40,20,10,5,16,8]$, one of the realizations of the CW $d,d,d,u,d\,=\, d^3s$, with $S\,=\, 1$ and $\vec n_1\,=\, [3,1]$. (Later it will be seen that this is the realization with the third smallest start number; the smaller ones are $8$ and $24$.) One has to map $V$ and $R$ to $d$ and $L$ to $u$. This shows that the map from an $L,V,R$-sequence to a CW is not one-to-one. The numbers $n\,\equiv\, 4\, (mod\ 6)$ except $4$ (see \seqnum{A016957}) appear exactly in two distinct CS. For example, $64 \,\equiv\, 4\,(mod\, 6)$ shows up in all $CS$ starting at any vertex which descends from the bifurcation at $64$, {\it e.g.},\, $21,\,128$; $42,\,256$; $84,\,85,\,512$; \, {\it etc.}\, \par\bigskip\noindent
\par\smallskip\noindent
\parbox{16cm}{\begin{center}
{\includegraphics[height=10cm,width=.8\linewidth]{CollatzTree}}
\end{center}
}
\par\smallskip\noindent
\hskip 6cm {\bf Figure: Collatz Tree $\bf CT_7$}
\section{Solution of a certain linear inhomogeneous Diophantine equation}\label{section3}
For the derivation of the recurrence relations for the start and end numbers $M$ and $N$ of {\sl Collatz} sequences ($CS$) with prescribed up-down pattern (realizing a given $CW$) we shall need the general solution of the following linear inhomogeneous {\sl Diophant}ine equation.
\begin{equation}\label{Diophant}
D(m,n;c):\hskip 2cm 3^m\,x\,-\, 2^n\,y\,=\, c(m,n),\ \ m\,\in\, \mathbb N_0, \ n\,\in\, \mathbb N_0,\ c(m,n) \,\in\, \mathbb Z\ .
\end{equation}
It is well known \cite{NZM}, pp. 212-214, how to solve the equation $a\,x\,+\, b\,y\,=\, c$
in integers $x$ and $y$, for integers $a,\, b$ (not $0$) and $c$, provided $g\,=\, gcd(a,\,b)$ divides $c$ (otherwise there is no solution). One finds a sequence of solutions parameterized by $t\,\in\, \mathbb Z$. Then one has to restrict the $t$ range to obtain all positive solutions. The procedure is to find first a special solution $(x_0,\, y_0)$ of the equation with $c\,=\, g$. Then the general solution is $(x\,=\, \frac{c}{g}\, x_0\,+\, \frac{b}{g}\, t,\, y\,=\, \frac{c}{g}\, y_0\,-\, \frac{a}{g}\, t)$ with $t\,\in\, \mathbb Z$. The proof is found in \cite{NZM}. For our problem $g\,=\, gcd(3^m,\, 2^n)\,=\, 1$ for non-negative $m,\, n$, which divides any $c(m,\, n)$. \par\bigskip\noindent
\begin{lemma}\label{SolDiophant} {\bf Solution of $\bf D(m,n;c)$}\par\smallskip\noindent
{\bf a)} A special positive integer solution of D(m,n;1) is\par\smallskip\noindent
\begin{eqnarray}\label{x0y0}
y_0(m,n) &\,=\,& \left(\frac{3^m\,+\, 1}{2}\right)^{n\,+\, 3^{m-1}}\, (mod\, 3^m)\ , \nonumber\\
x_0(m,n) &\,=\,& \frac{1\,+\, 2^n\,y_0(m,n)}{3^m}\ .
\end{eqnarray}
{\bf b)} The general solution with positive $x$ and $y$ is\par\smallskip\noindent
\begin{eqnarray}\label{Solxy}
x(m,n) &\,=\,& c(m,n)\,x_0(m,n)\,+\, 2^n\,t_{min}(m,n;sign(c)) \,+\, 2^n\,k \ ,\nonumber \\
y(m,n) &\,=\,& c(m,n)\,y_0(m,n)\,+\, 3^m\,t_{min}(m,n;sign(c)) \,+\, 3^m\,k \ ,
\end{eqnarray}
with $k\,\in\, \mathbb N_0$, and
\begin{equation}\label{tmin}
t_{min}(m,n;sign(c))\,=\, {\Caseszwei{\ceil{\abs{c(m,n)}\,\frac{x_0(m,n)}{2^n}}}{${\rm if} \ c\,<\, 0 \ ,$} {\ceil{-c(m,n)\,\frac{y_0(m,n)}{3^m}}} {${\rm if} \ c\,\geq\, 0\ .$}}
\end{equation}
\end{lemma}
\noindent
For the proof we shall use the following {\sl Lemma}:
\begin{lemma}\label{Anm}
$ A(n,m)\, :=\, {\binomial{n-1}{m-1}}\, \frac{gcd(m,n)}{m}$\, is a positive integer for $ m\,=\, 1,\,2\, ...\, n,\ n\,\in\, \mathbb N$\, .
\end{lemma}
\begin{proof}\par\smallskip\noindent
[due to {\sl Peter Bala}, see \seqnum{A107711}, history, Feb 28 2014]:\par\noindent
This is the triangle \seqnum{A107711} with A(0,0) =1.
By a rearrangement of factors one also has \dstyle{A(n,m)\,=\, {\binomial{n}{m}}\,\frac{gcd(n,m)}{n}}. Use $gcd(n,m)\,lcm(m,n)\,=\, n\, m$ ({\it e.g.},\, \cite{FR}, theorem 2.2.2., pp. 15-16, where also the uniqueness of the $lcm$ is shown). \dstyle{A(n,m)\,=\, \frac{a(n,m)}{lcm(n,m)}} with \dstyle{a(n,m)\,=\, {\binomial{n}{m}}\,m}, a positive integer because the binomial is a combinatorial number. $m\,|\, a(n,m)$ and $n\,|\, a(n,m)$ because \dstyle{a(n,m) \,=\, n\,{\binomial{n-1}{m-1}}} by a rearrangement. Hence $a(n,m)\,=\, k_1\,m\,=\, k_2\,n$, {\it i.e.},\, $a(n,m)$ is a common multiple of $n$ and $m$ (call it $cm(n,m)$). $lcm(n,m)\,|\, a(n,m)$ because $lcm(n,m)$ is the (unique) lowest $cm(n,m)$. Therefore \dstyle{\frac{a(n,m)}{lcm(n,m)}\,\in\, \mathbb N}, since only natural numbers are in the game.
\end{proof}\noindent
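Lemma 2 can be spot-checked numerically. The following Python fragment (ours, not from the paper) verifies the integrality of $A(n,m)$ for a range of $n$:

```python
from math import comb, gcd

def A(n, m):
    # A(n, m) = binom(n-1, m-1) * gcd(m, n) / m, as in Lemma 2
    num = comb(n - 1, m - 1) * gcd(m, n)
    assert num % m == 0        # the content of the lemma: m divides the numerator
    return num // m

# integrality check over a range of n (raises AssertionError on any failure)
for n in range(1, 40):
    for m in range(1, n + 1):
        A(n, m)
```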
Now to the proof of {\sl Lemma\ 1}.\par\noindent
\begin{proof} \par\smallskip\noindent
{\bf a)} \dstyle{x_0(m,n) \,=\, \frac{1\,+\, 2^n\,y_0(m,n)}{3^m}} is a solution of $D(m,n;1)$ for any $y_0(m,n)$. The given $y_0(m,n)$ is a positive integer $\,\in\, \{1,\,2,\,...,\, 3^{m}\,-\, 1\},\ m\,\in\, \mathbb N$ and $y_0(0,n)\,=\, 1$ for $n\,\in\, \mathbb N_0$. One has to prove that $x_0(m,n)$ is a positive integer. This can be done by showing that $1\,+\, 2^n\,y_0(m,n)\,\equiv\, 0\,(mod\,3^m)$ for $m\,\in\, \mathbb N$. One first observes that \dstyle{\frac{3^m\,+\, 1}{2}\,\equiv\, \frac{1}{2}\,(mod\, 3^m)}, because obviously \dstyle{2\,\frac{3^m\,+\, 1}{2} \,\equiv\, 1\, (mod\, 3^m)} ($2$ is a unit in the ring $\mathbb Z_{3^m}$). For $m\,=\, 0$ one has $x_0(0,n) \,=\, 1\,+\, 2^n$, $n\,\in\, \mathbb N_0$, which is positive. In the following $m\,\in\, \mathbb N$.
\begin{equation}
1\,+\, 2^n\,\left(\frac{3^m\,+\, 1}{2}\right)^{n+3^{m-1}} \,\equiv\, 1 + 2^n\, \left(\frac{1}{2}\right)^{n+3^{m-1}} \,\equiv\, 1\,+\, \left(\frac{1}{2}\right)^{3^{m-1}}\, (mod\, 3^m)\ .
\end{equation}
Now we show that \dstyle{L(m)\, :=\, \left(\frac{3^m\,+\, 1}{2}\right)^{3^{m-1}}\,\equiv\, -1\, (mod\, 3^m)} by using \dstyle{\frac{3^m\,+\, 1}{2} \,=\, 3\, k(m)\,-\, 1} with \dstyle{k(m)\, :=\, \frac{3^{m-1}\,+\, 1}{2}}, a positive integer. The binomial theorem leads with $ a(m)\,=\, 3^{m\,-\, 1}$ (which is odd, so that the term with $j\,=\, a(m)$ equals $(-1)^{a(m)}\,=\, -1$ and has been moved to the left-hand side) to
\begin{eqnarray}
(3\,k(m)\,-\, 1)^{a(m)}\,+\, 1 &\,=\,& \sum_{j=0}^{a(m)-1}\,{\binomial{a(m)}{j}}\, (-1)^j\, (3\,k(m))^{a(m)-j} \nonumber \\
&\,=\,& 3^m\,\Sigma_1(m) \,+\, \Sigma_2(m), {\rm with} \\
\Sigma_1(m) &\,=\,& \sum_{j=0}^{a(m)-m}\, (-1)^j\,{\binomial{a(m)}{j}}\,k(m)^{a(m)-j}\,3^{a(m)-m-j}, \ {\rm and} \\
\Sigma_2(m) &\,=\,& \sum_{j=a(m)-m+1}^{a(m)-1}\, (-1)^j\,{\binomial{a(m)}{j}}\,(3\, k(m))^{a(m)-j}\ .
\end{eqnarray}
$\Sigma_1(m)$ is an integer because of $a(m)-j\,\geq\, a(m)-m-j\,\geq\, 0$ and the integer binomial, hence $L(m)\,+\, 1\,\equiv\, \Sigma_2(m) \,(mod\, 3^m)$. Rewriting $\Sigma_2$ with $j'\,=\, j-a(m)+m-1$, using also the symmetry of the binomial, one has
\begin{eqnarray}
\Sigma_2(m)&\,=\,& \sum_{j=0}^{m-2}\,{\binomial{a(m)}{j}}\, (-1)^{j-m}\,(3\,k(m))^{m-1-j}\nonumber \\
&\,=\,& \sum_{j=1}^{m-1}\,(-1)^{j+1}\, {\binomial{a(m)}{j}}\, (3\,k(m))^j \,=\, 3^m\,\widehat\Sigma_2(m)\, \ {\rm with}\\
\widehat\Sigma_2(m)&\,=\,& \sum_{j=1}^{m-1}\,(-1)^{1+j}\,{\binomial{a(m)}{j}}\,k(m)^j\,3^{j-m}\nonumber \\
&\,=\,& \sum_{j=1}^{m-1}\,(-1)^{1+j}\,k(m)^j\,{\binomial{a(m)-1}{j-1}}\,\frac{1}{j}\,3^{j-1}\ .
\end{eqnarray}
In the last step a rearrangement of the binomial has been applied, remembering that $a(m)\,=\, 3^{m-1}$. It remains to be shown that \dstyle{A_{m,j}\, :=\, 3^{j-1}\,{\binomial{3^{m-1}-1}{j-1}}\,\frac{1}{j}} is a (positive) integer for $j\,=\, 1,\,2,\,...\,m-1$. Here {\sl Lemma 2} comes to help. Consider there $A(3^{m-1},j)$ for $j\,=\, 1,\,2,\,...,\,m-1$ ($m=0$ has been treated separately above), which is a positive integer. If $3\, \not|\, j$ then $3^{j-1}\,A(3^{m-1},j) \,=\, A_{m,j}$, hence a positive integer. If $j\,=\, 3^k\,J$, with $k\,\in\, \mathbb N$ the largest power of $3$ dividing $j$, then $gcd(3,J)\,=\, 1$, and $j\,=\, 3^k\,J \,\leq\, m-1\,<\, 3^{m-1}$, so that $gcd(3^{m-1},3^k\,J)\,=\, 3^k$. Hence \dstyle{A(3^{m-1},j)\,=\, {\binomial{3^{m-1}-1}{j-1}}\,\frac{3^k}{j}}, and since $j\,\geq\, 3^k\,\geq\, k\,+\, 1$, {\it i.e.},\, $j-1\,\geq\, k$, we get $A_{m,j}\,=\, 3^{j-1-k}\,A(3^{m-1},j)$, again a positive integer. Thus $\widehat\Sigma_2(m)$ is an integer, $\Sigma_2(m)\,\equiv\, 0\,(mod\, 3^m)$, and therefore $L(m)\,\equiv\, -1\,(mod\, 3^m)$, which proves that $x_0(m,n)$ is a positive integer.
\end{proof}\noindent
\begin{proof} \par\smallskip\noindent
{\bf b)} The general integer solution of eq.~\ref{Diophant} is then (see \cite{NZM}, pp. 212-214; note that there $b>0$, here $b<0$, and we have changed $t\mapsto -t$)
\begin{eqnarray}
x\,=\, \hat x(m,n;t)&\,=\,& c(m,n)\,x_0(m,n)\,+\, 2^n\,t\, ,\nonumber \\
y\,=\, \hat y(m,n;t)&\,=\,& c(m,n)\,y_0(m,n)\,+\, 3^m\,t\, , \ t\,\in\, \mathbb Z\,.
\end{eqnarray}
In order to find all positive solutions for $x$ and $y$ one has to restrict the $t$ range, depending on the sign of $c$. If $c(m,n)\,\geq\, 0$ then, because $x_0$ and $y_0$ are positive and \dstyle{\frac{x_0(m,n)}{2^n}\,=\, \frac{y_0(m,n)}{3^m}\,+\, \frac{1}{2^n\,3^m}},\ \dstyle{t\,>\, -\frac{c(m,n)\,x_0(m,n)}{2^n}} and \dstyle{t\,>\, -\frac{c(m,n)\,y_0(m,n)}{3^m}}, {\it i.e.},\, \par\noindent
\dstyle{t\,\geq\, \ceil{max\left( - \frac{c(m,n)\,x_0(m,n)}{2^n},\, -\frac{c(m,n)\,y_0(m,n)}{3^m}\right)}} \dstyle{\,=\, \ceil{-c(m,n)\,min\left(\frac{x_0(m,n)}{2^n},\, \frac{y_0(m,n)}{3^m} \right)}} \par\noindent
\dstyle{\,=\, \ceil{-c(m,n)\, \frac{y_0(m,n)}{3^m} }\,=\, t_{min}(m,n;+)}. \par\noindent
If $c(m,n)\,<\, 0$ then \dstyle{t\,\geq\, \ceil{ \abs{c(m,n)}\, max\left(\frac{x_0(m,n)}{2^n},\, \frac{y_0(m,n)}{3^m}\right)}\,=\, \ceil{\abs{c(m,n)}\, \frac{x_0(m,n)}{2^n}}} $\,=\, t_{min}(m,n;-)$. Thus, with $t\,=\, t_{min}(m,n;sign(c)) \,+\, k$, $k\,\in\, \mathbb N_0$, one has the desired result. Note that $(x_0(m,n),\, y_0(m,n))$ is the smallest positive solution of the equation $D(m,n;1)$, eq.~\ref{Diophant}, because, for $c(m,n)\,=\, 1$, $t_{min}(m,n;+) \,=\, \ceil{-\frac{y_0(m,n)}{3^m}}$, but with $y_0(m,n)\,\in\, \{1,\,2,\,...,\,3^m-1\}$ this is $0$.
\end{proof} \noindent
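The smallest positive solution $(x_0(m,n),\,y_0(m,n))$ of $3^m\,x - 2^n\,y = 1$ and the shift $t_{min}$ can be computed directly; a minimal sketch (function names ours), using the characterization $y_0 \equiv -(2^n)^{-1}\,(mod\, 3^m)$ implicit in the lemma:

```python
def ceil_div(a, b):
    # ceiling of a/b for positive b, with exact integer arithmetic
    return -((-a) // b)

def xy0(m, n):
    """Smallest positive solution (x0, y0) of 3^m * x - 2^n * y = 1, for m >= 1."""
    M3, M2 = 3**m, 2**n
    y0 = (-pow(M2 % M3, -1, M3)) % M3    # y0 in {1, ..., 3^m - 1}
    x0 = (1 + M2 * y0) // M3
    assert M3 * x0 - M2 * y0 == 1
    return x0, y0

def t_min(m, n, c):
    # shift making x = c*x0 + 2^n*t and y = c*y0 + 3^m*t positive
    x0, y0 = xy0(m, n)
    if c < 0:
        return ceil_div(-c * x0, 2**n)
    return ceil_div(-c * y0, 3**m)

# spot-check positivity and minimality of t_min (away from boundary cases
# where c*x0 or c*y0 is divisible by the respective modulus)
for m in range(1, 5):
    for n in range(1, 5):
        x0, y0 = xy0(m, n)
        for c in (-11, -7, -1, 1, 2, 5):
            t0 = t_min(m, n, c)
            x, y = c * x0 + 2**n * t0, c * y0 + 3**m * t0
            assert 3**m * x - 2**n * y == c
            if c * y0 % 3**m and c * x0 % 2**n:
                assert x > 0 and y > 0
                assert min(x - 2**n, y - 3**m) <= 0   # t0 - 1 loses positivity
```
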
A proposition on the periodicity of the solution $y_0(m,n)$ follows.\par\smallskip\noindent
\begin{proposition} \label{y0Periodicity}
{\bf Periodicity of $\bf y_0(m,n)$ in $\bf n$}\par\smallskip\noindent
{\bf a)} The sequence $y_0(m,n)$ is periodic in $n$ with primitive period length $L_0\,=\, \varphi(3^m)$, for $m\,\in\, \mathbb N_0\, $ with {\sl Euler}'s totient function $\varphi(n)\,=\,$\seqnum{A000010}$(n)$, where $\varphi(1)\, :=\, 1$.
\par\smallskip\noindent
{\bf b)} The sequence $x_0(m,n)$ satisfies $x_0(m,n\,+\, L_0(m))\,=\, q(m)\,x_0(m,n)\,-\, r(m)$, for $m\,\in\, \mathbb N_0$, with $q(m)\, :=\, 2^{\varphi(3^m)}$ and \dstyle{r(m) \, :=\, \frac{2^{\varphi(3^m)}\,-\, 1}{3^m}}. See \seqnum{A152007}.\par\smallskip\noindent
{\bf c)} The set $Y_0(m)\, :=\, \{y_0(m,n) \,|\, n\,=\, 0,\,1,\,...,\, \varphi(3^m)\,-\, 1\} $, is, for $m\,\in\, \mathbb N_0$, a representation of the set $RRS(3^m)$, the smallest positive restricted residue system modulo $3^m$. See \cite{TApostol} for the definition. The multiplicative group modulo $3^m$, called \dstyle{\mathbb Z_{3^m}^{\times}\,=\, (\mathbb Z/3^m\,\mathbb Z)^\times}, is isomorphic to the cyclic group $C_{\varphi(3^m)}$. See, e.g., \cite{WikiRRS}.\par\smallskip\noindent
\end{proposition}
\begin{proof}\par\smallskip\noindent
{\bf a)} By {\sl Euler}'s theorem ({\it e.g.},\, \cite{FR}, theorem 2.4.4.3 on p. 32) $a^{\varphi(n)} \,\equiv\, 1\, (mod\, n)$, provided $gcd(a,n)\,=\, 1$. Now \dstyle{gcd\left(\frac{3^m+1}{2},3^m\right) \,=\, gcd\left(\frac{3^m+1}{2},3\right)\,=\, 1} because \dstyle{\frac{3^m+1}{2} \,\equiv\, \frac{1}{2}\, (mod\, 3^m)} (see above) and hence \dstyle{\frac{3^m+1}{2} \,\not\equiv \, 0\, (mod\, 3^m)}. This shows that $L_0(m)$ is a period length, but we have to show that it is in fact the length of the primitive period, {\it i.e.},\, we have to prove that the order of \dstyle{\frac{3^m+1}{2}} modulo $3^m$ is $L_0(m)$. (See {\it e.g.},\, \cite{FR}, Definition 2.4.4.1 on p. 31, for the order definition.) In other words, we want to show that \dstyle{\frac{3^m+1}{2}} is a primitive root (of $1$) modulo $3^m$. Assume that $k(m)$ is this order (the existence is certain due to {\sl Euler}'s theorem), hence $(\frac{1}{2})^{k(m)} \,\equiv\, 1\, (mod\, 3^m)$ and $k(m)\,|\, L_0(m)$. It is known that the modulus $3^m$ possesses primitive roots, and the theorem on the primitive roots says that there are precisely $\varphi(\varphi(3^m))$ incongruent ones ({\it e.g.},\, \cite{NZM}, pp. 205, 207, or \cite{Nagell}, theorem 62, 3., p. 104 and theorem 65, p. 107). In our case this number is $\varphi(2\cdot 3^{m-1}) \,=\, 2\cdot 3^{m-2}$ if $m \,\geq\, 2$. The important point, proven in \cite{Nagell}, theorem 65.3 on p. 107, is that if we have a primitive root $r$ modulo an odd prime, here $3$, then, if $r^{3-1}\,-\, 1$ is not divisible by $3^2$, it follows that $r$ is in fact a primitive root for any modulus $3^q$, with $q\,\in\, \mathbb N_0 $. One of the primitive roots modulo $3$ is $2$, because $2^2\,=\, 4\,\equiv\, 1\, (mod\, 3)$ and $2^1\,\not\equiv \, 1\, (mod\, 3)$. Also $2^{3-1}\,-\, 1 \,=\, 3$ is not divisible by $3^2$, hence $2$ is a primitive root of any modulus $3^q$ for $q\,\in\, \mathbb N_0$. 
From this we prove that \dstyle{\frac{3^m+1}{2}\,\equiv\, \frac{1}{2}\,(mod\, 3^m)} is a primitive root modulo $3^m$. Consider \dstyle{\left(\frac{3^m+1}{2}\right)^k\,\equiv\, \frac{1}{2^k}\,(mod\, 3^m)} for $k\,=\, 1,\,2,\,...,\, \varphi(3^m)$. In order to have \dstyle{\left(\frac{1}{2}\right )^k \,\equiv\, 1\, (mod\, 3^m)} one needs $2^k\,\equiv\, 1\, (mod\, 3^m)$. But due to \cite{Nagell} theorem 65.3. p. 107, for $p\,=\, 3$, a primitive root modulo $3^m$ is $2$, and the smallest positive $k$ is therefore $\varphi(3^m)$, hence \dstyle{\frac{3^m+1}{2}} is a primitive root (of $1$) modulo $3^m$. \par\smallskip\noindent
{\bf b)} \dstyle{x_0(m,n\,+\, \varphi(3^m))\,=\, \frac{1\,+\, 2^n\,2^{\varphi(3^m)}\,y_0(m,n)}{3^m}} from the periodicity of $y_0$. Rewritten as \dstyle{\frac{2^{\varphi(3^m)}\,\left( ( 2^{-\varphi(3^m)} \,-\, 1) \,+\, (1\,+\, 2^n\,y_0(m,n)) \right)} {3^m}\,=\,} \dstyle{-\frac{1}{3^m}\,(2^{\varphi(3^m)}\,-\, 1) \,+\, 2^{\varphi(3^m)}\,x_0(m,n)\,=\, } \dstyle{ q(m)\, x_0(m,n) \,-\, r(m)} with the values given in the {\sl Proposition}.\par\smallskip\noindent
{\bf c)} This follows from the reduced residue system modulo $3^m$ for $m\,\in\, \mathbb N_0$, \par\noindent
\dstyle{\left\{ \left(\frac{1}{2}\right)^0,\,\left(\frac{1}{2}\right)^1,\, ...,\, \left(\frac{1}{2}\right)^{\varphi(3^m)\sspm1} \right\}}, because \dstyle{\frac{1}{2}} is a primitive root modulo $3^m$ (from part {\bf a)}). With $a(m)\, :=\, \frac{3^m\,+\, 1}{2}$ one has $1\,=\, gcd(a(m),3)\,=\, gcd(a(m),3^m)\,=\, gcd(a(m)^{b(m)},3^m)$ with $b(m)\, :=\, 3^{m\,-\, 1}$, also\par\noindent
\dstyle{\left\{a(m)^{b(m)}\, \left(\frac{1}{2}\right)^0,\,a(m)^{b(m)}\, \left(\frac{1}{2}\right)^1,\,...,\,a(m)^{b(m)}\, \left(\frac{1}{2}\right)^{\varphi(3^m)\sspm1}\right\}} is a reduced residue system modulo $3^m$ (see \cite{TApostol}, theorem 5.16, p. 113). Thus \par\noindent
\dstyle{Y_0(m)\,\equiv\, \{a(m)^{b(m)}\,1,\, a(m)^{b(m)+1},\,...,\, a(m)^{b(m) \,+\, \varphi(3^m)\,-\, 1}\}} is a reduced residue system modulo $3^m$. Therefore this gives a permutation of the reduced residue system modulo $3^m$ with the smallest positive integers sorted increasingly.
\end{proof} \par\smallskip\noindent
\begin{example} For $m\,=\, 3$, $\varphi(3^3)\,=\, 2\cdot 3^2\,=\, 18\,=\, L_0(3)$, \par\noindent
$\{y_0(3,n)\}_{n=0}^{17}\,=\, \{26,\, 13,\, 20, \,10,\, 5,\, 16,\, 8,\, 4,\, 2, \,1,\, 14,\, 7,\, 17,\, 22,\, 11,\, 19,\, 23,\, 25\}$ a permutation of the standard reduced residue system modulo $27$, obtained by resorting the found system increasingly. See \seqnum{A239125}. For $m=1,\, 2$ and $ 4$ see \seqnum{A007583},\ \seqnum{A234038} and \seqnum{A239130} for the solutions $(x_0(m,n),\, y_0(m,n))$.
\end{example}
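The stated list for $m\,=\, 3$ can be reproduced in a few lines; \texttt{y0} below is our implementation of the $y_0$-component of the smallest positive solution of $3^m\,x - 2^n\,y = 1$:

```python
def y0(m, n):
    """y0(m, n): smallest positive y with 2^n * y == -1 (mod 3^m)."""
    M3 = 3**m
    return (-pow(2**n % M3, -1, M3)) % M3

seq = [y0(3, n) for n in range(18)]
assert seq == [26, 13, 20, 10, 5, 16, 8, 4, 2, 1, 14, 7, 17, 22, 11, 19, 23, 25]
# periodicity with primitive period phi(27) = 18
assert y0(3, 18) == y0(3, 0)
# the values form a permutation of the reduced residue system modulo 27
assert sorted(seq) == [a for a in range(1, 27) if a % 3 != 0]
```
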
\par\smallskip\noindent
\section{Recurrences and their solution}\label{section4}
After these preparations it is straightforward to derive the recurrence for the start and end numbers $M$ and $N$ for any given $CW(\vec n_S)$, for $S\,\in\, \mathbb N$. \par\smallskip\noindent
{\bf A)} We first consider the case of words with $n_S\,=\, 1$, {\it i.e.},\, $\vec n_S\,=\, [n_0,\,n_1\,,...\,,n_{S-1},\,1]$. This is the word $CW(\vec n_S )\,=\, d^{n_0}\,s\,\rprod_{j=1}^{S-1}\, (d^{n_j-1}\, s)$ (with an ordered product, beginning with $j=1$ at the left-hand side). In order to simplify the notation we use $M(S)$, $N(S)$, $y_0(S)$, $x_0(S)$, and $c(S)$ for $ M(\vec n_S )$, $N(\vec n_S)$ , $y_0(S,n_S)$, $x_0(S,n_S)$ and $c(S,n_S)$, respectively.
For $S=1$, the input for the recurrence, one has
\begin{equation}
M(1;k) \,=\, 2^{n_0}\,(2\,k+1)\ \text{and}\ N(1;k) \,=\, 3\,k\,+\, 2,\ \text{for}\ k \,\in\, \mathbb N_0\ ,
\end{equation}
because there are $n_0$ factors of $2$ from $d^{n_0}$, and then an odd number $2\,k\,+\, 1$ leads after application of $s$ to $3\,k\,+\, 2$. Thus $M(1)\,=\, 2^{n_0}$ and $N(1)\,=\, 2$.\par\smallskip\noindent
\begin{proposition} {\bf Recurrences for $\bf M(S)$ and $\bf N(S)$ with $\bf n_S\,=\, 1$} \label{Rec}\par\smallskip\noindent
{\bf a)} The coupled recurrences for $M(S,t)$ and $N(S,t)$, the first and last entry of the {\sl Collatz} sequences $CS(\vec n_S;t)$ for the word $ CW(\vec n_S)$ with $\vec n_S\,=\, [n_0,\,n_1,\,...\, n_{S-1},1]$ ($n_S\,=\, 1$) are \par\smallskip\noindent
\begin{eqnarray}
M(S,t)\,=\, M(S)&\,+\,& 2^{\hat D(S)}\, t\ , \nonumber \\
N(S,t)\,=\, N(S)&\,+\,& 3^S\,t\, ,\ {\rm with}\ t\,\in\, \mathbb Z\ ,
\end{eqnarray}
where \dstyle{\hat D(S)\, :=\, 1\,+\, \sum_{j=0}^{S-1}\,n_j} (we prefer to use a new symbol for the $n_S\,=\, 1$ case), and the recurrences for $M(S)$ and $\widetilde N(S)\,=\, N(S)\,-\, 2$ are
\begin{eqnarray}
M(S) \,=\, M(S-1) &\,+\,& 2^{\hat D(S-1)}\,c(S-1)\, x_0(S-1) \ , \nonumber \\
\widetilde N(S) &\,=\,& 3\,y_0(S-1)\, c(S-1)\,
\end{eqnarray}
with
\begin{equation}
c(S-1) \,=\, 2\,(2^{n_{S-1}-2}\,-\, 1) \,-\, \widetilde N(S-1) \,=:\, A(S-1) \,-\, \widetilde N(S-1)\ .
\end{equation}
The recurrence for $c(S)$ is\par\smallskip\noindent
\begin{equation}
c(S) \,=\, -3\,y_0(S-1)\,c(S-1) \,+\, A(S)\ , S\,\geq\, 2,
\end{equation}
and the input is $M(1)\,=\, 2^{n_0}$, $\widetilde N(1)\,=\, 0$ and $c(1) = A(1)$.\par\smallskip\noindent
{\bf b)} The general positive integer solution is
\begin{eqnarray}
M(S;k) &\,=\, & M(S) \,+\, 2^{\hat D(S)}\,t_{min}(S-1) \,+\, 2^{\hat D(S)}\,k\,,\nonumber \\
N(S;k) &\,=\, & 2\,+\, \widetilde N(S) \,+\, 3^S\,t_{min}(S-1) \,+\, 3^S\,k, \ k\,\in\, \mathbb N_0 \, ,
\end{eqnarray}
where
\begin{equation}
t_{min}(S) \,=\, t_{min}(S, n_S,sign(c(S))) \,=\, {\Caseszwei{\ceil{\abs{c(S)}\,\frac{x_0(S)}{2^{n_S}}}}{${\rm if} \ c(S)\,<\, 0 \ ,$} {\ceil{-c(S)\,\frac{y_0(S)}{3^S}}} {${\rm if} \ c(S)\,\geq\, 0\ .$}}
\end{equation}
\end{proposition}
\begin{corollary}
\begin{eqnarray}
M(S;k) &\,\equiv\,& M(S)\, (mod\, 2^{\hat D(S)})\, , \nonumber \\
N(S;k) &\,\equiv\,& 2\,+\, \widetilde N(S) \,=\, N(S)\, (mod\, 3^S)\, .
\end{eqnarray}
\end{corollary} \par\smallskip\noindent
In Terras' article \cite{Terras} the first congruence corresponds to {\sl theorem 1.2}, where the encoding vector $E_k(n)$ refers to the modified {\sl Collatz} tree using only $d$ and $s$ operations.
\begin{proof} \par\smallskip\noindent
{\bf a)} By induction over $S$. For $S\,=\, 1$ the input $M(1)\,=\, 2^{n_0}$, $N(1)\,=\, 2$ or $\widetilde N(1)\,=\, 0$ provides the start of the induction. Assume that part {\bf a)} of the proposition is true for $S$ values $1,\,2,\,...,\,S-1$. To find $M(S)$ one has to make sure that $d^{n_{S-1}-1}\,s$ can be applied to $N(S-1;t)$, the end number of the step $S-1$ sequence $CS(\vec n_{S-1};t)$, which is $N_{int}(S-1,t)\,=\, N(S-1) \,+\, 3^{S-1}\, t$, with integer $t$, by the induction hypothesis. This number has to be of the form $2^{n_{S-1}-1}\,(2\, m\sspp1)$ (one has to have an odd number after $n_{S-1}-1$ $d$-steps such that $s$ can be applied). Thus \dstyle{3^{S-1}\,t \,-\, 2^{n_{S-1}}\,m \,=\, 2^{n_{S-1}-1}\,-\, N(S-1) \,=\, A(S-1)\,-\, \widetilde N(S-1)\,=:\, c(S-1)}, where $\widetilde N(S-1)\,=\, N(S-1) \,-\, 2$ and $ A(S-1)\,=\, 2\,(2^{n_{S-1}-2}\,-\, 1)$. Due to {\sl Lemma}~\ref{SolDiophant} the general solution, with $t\,\to\, x(S-1,n_{S-1};t)\, \hat =\, x(S-1;t)$, $m\,\to\, y(S-1,n_{S-1};t)\, \hat =\, y(S-1;t)$, to shorten the notation, is
\begin{eqnarray}
t\,\to\, x(S;t) &\,=\,& c(S-1)\,x_0(S-1) \,+\, 2^{n_{S-1}}\,t \, , \nonumber \\
m\,\to\, y(S;t) &\,=\,& c(S-1)\,y_0(S-1) \,+\, 3^{S-1}\,t\, , \ t\,\in\, \mathbb Z\ .
\end{eqnarray}
Therefore the first entry of the sequence $CS(\vec n_S;t)$ is \dstyle{M(S;t)\,=\, M(S-1;\,x(S-1;t))} which is
\begin{equation}
M_{int}(S;t)\,=\, M(S-1) \,+\, 2^{\hat D(S-1)}\,c(S-1)\,x_0(S-1) \,+\, 2^{\hat D(S)}\, t\, ,
\end{equation}
hence \dstyle{M(S)\,=\, M(S-1)\,+\, 2^{\hat D(S-1)}\,c(S-1)\, x_0(S-1)}, the claimed recurrence for $M(S)$.\par\noindent
The last member of $CS(\vec n_{S-1};t)$ is $3\,m+2$ (after applying $s$ on $2\,m\,+\, 1$ from above). Thus \dstyle{N_{int}(S;t) \,=\, 3\,y(S;t)\,+\, 2}, or \dstyle{N_{int}(S;t)\,-\, 2\,=\, 3\,c(S-1)\,y_0(S-1)\,+\, 3^S\, t}. Therefore, $\widetilde N(S) \,=\, N(S)\,-\, 2 \,=\, 3\,c(S-1)\,y_0(S-1)$ the claim for the $\widetilde N$ recurrence. Note that the remainder structure of eqs. $(20)$ and $(21)$, expressed also in the {\sl Corollary}, has also been verified by this inductive proof. The recurrence for $c(S)\,=\, A(S) \,-\, \widetilde N(S)$ follows from the one for $\widetilde N(S)$. \par\smallskip\noindent
{\bf b)} Positive integer solutions from $M_{int}(S;t)$ and $N_{int}(S;t)$ of part {\bf a)} are found from the second part of {\sl Lemma} ~\ref{SolDiophant} applied to the equation $3^{S-1}\, x \,-\, 2^{n_{S-1}}\,y\,=\, c(S-1)$, determining $t_{min}(S-1)$ as claimed. This leads finally to the formulae for $M(S;k)$ and $N(S;k)$ with $k\,\in\, \mathbb N_0$.
\end{proof}
\begin{example} {\bf $\bf (sd)^{S-1}\,s$ Collatz sequences}\par\smallskip\noindent
Here $n_0 \,=\, 0$, $n_S\,=\, 1$, and $n_{j}\,=\, 2$ for $j\,=\, 1,\,2,\, ...,\,S-1$.
The first entries $M(S;k)$ and the last entries $N(S;k)$ of the {\sl Collatz} sequence $CS([0,2,...,2];k)$ (with $S-1$ times a $2$), whose length is $3\,S$, are $M(S;k)\,=\, 1\,+\, 2^{2\,S-1}\,k$ and $N(S;k)\,=\, 2 \,+\, 3^S\,k$. For $S=3$ a complete {\sl Collatz} sequence $CS([0,2,2];3)$ of length $9$ is $[97, 292, 146, 73, 220, 110, 55, 166, 83]$ which is a special realization of the word $sdsds$ with start number $M(3;3)\,=\, 97$ ending in $N(3;3)\,=\, 83$. Note that for this $u-d$ pattern the start and end numbers have remainders $ M(S;0) = M(1;0) \,=\, 1$ and $N(S;0)\,=\, N(1;0) \,=\, 2$. See the tables \seqnum{A240222} and \seqnum{A240223}.
\end{example}\par\smallskip\noindent
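For a quick numerical check (ours, not from the paper), the realization above can be reproduced with the plain Collatz step, writing $u\colon n \mapsto 3n+1$ on odd $n$ and $d\colon n \mapsto n/2$ on even $n$, so that $s\,=\, ud$ contributes two sequence entries:

```python
def traj(start, steps):
    """Collatz trajectory: 3n+1 on odd entries, n/2 on even entries."""
    seq = [start]
    for _ in range(steps):
        n = seq[-1]
        seq.append(3 * n + 1 if n % 2 else n // 2)
    return seq

# the complete sequence CS([0,2,2];3) of the example, word sdsds
assert traj(97, 8) == [97, 292, 146, 73, 220, 110, 55, 166, 83]

# first/last formulas M(S;k) = 1 + 2^(2S-1) k and N(S;k) = 2 + 3^S k at S = 3, k = 3
S, k = 3, 3
assert 1 + 2**(2 * S - 1) * k == 97
assert 2 + 3**S * k == 83
```
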
The recurrences for $M(S)\, \hat =\, M(\vec n_{S})$, $\widetilde N(S)\, \hat =\, \widetilde N(\vec n_{S})$ or $N(S)\, \hat =\, N(\vec n_{S})$ and $c(S)\, \hat =\, c(\vec n_{S})$ are solved by iteration with the given inputs $M(1)\,=\, 2^{n_0}$, $\widetilde N(1)\,=\, 0$ and $c(1)\,=\, A(1)\,=\, 2\,(2^{n_1-2}\,-\, 1)$. \par\smallskip\noindent
\begin{proposition} {\bf Solution of the recurrences for $\bf n_s\,=\, 1$} \label{SolRec}\par\smallskip\noindent
The solution of the recurrences of {\sl Proposition}~\ref{Rec} with the given inputs are, for $S\,\in\, \mathbb N$:\par\smallskip\noindent
\begin{eqnarray}
c(S)&\,=\,& A(S) \,+\, \sum_{j=1}^{S-1} \,(-3)^j\,A(S-j)\,\prod_{l=1}^j\,y_0(S-l) \, , \nonumber \\
\widetilde N(S)&\,=\,& A(S) \,-\, c(S) \,=\, -\sum_{j=1}^{S-1}\,(-3)^j\,A(S-j)\, \prod_{l=1}^{j}\,y_0(S-l)\, , \nonumber \\
N(S)&\,=\,& \widetilde N(S)\,+\, 2\, , \nonumber \\
M(S) &\,=\,& 2^{n_0}\,+\, \sum_{j=1}^{S-1}\,R(S-j)\, ,
\end{eqnarray}
with \dstyle{\hat D(S)\, :=\, 1\,+\, \sum_{j=0}^{S-1}\, n_j},\ \dstyle{A(S)\, :=\, 2\,(2^{n_S-2}\,-\, 1)},\ \dstyle{R(S)\, :=\, 2^{\hat D(S)} \,x_0(S)\,c(S)}\ and $y_0(S)\, \hat =\, y_0(S,n_S)$, $x_0(S)\, \hat =\, x_0(S,n_S)$, given in {\sl Lemma} ~\ref{SolDiophant}.
\end{proposition}
\begin{proof}\par\smallskip\noindent
This follows by straightforward iteration of the recurrences of {\sl Proposition}~\ref{Rec} with the given inputs.
\end {proof} \par\noindent
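Since the iteration is only sketched, the closed forms can be checked against the recurrences numerically; a sketch (function names ours), assuming $(x_0(m,n),\,y_0(m,n))$ is the smallest positive solution of $3^m\,x - 2^n\,y = 1$:

```python
def xy0(m, n):
    # smallest positive solution (x0, y0) of 3^m x - 2^n y = 1
    M3, M2 = 3**m, 2**n
    y0 = (-pow(M2 % M3, -1, M3)) % M3
    x0 = (1 + M2 * y0) // M3
    return x0, y0

def check(ns):
    """Compare recurrence and closed form for the vector ns + [1] (case n_S = 1)."""
    full = ns + [1]                            # append n_S = 1
    S = len(ns)
    A = lambda j: 2**(full[j] - 1) - 2         # A(j) = 2 (2^(n_j - 2) - 1)
    Dhat = lambda j: 1 + sum(full[:j])         # \hat D(j) = 1 + sum_{i<j} n_i
    c, M = {1: A(1)}, {1: 2**full[0]}
    for T in range(2, S + 1):                  # the coupled recurrences
        x0, y0 = xy0(T - 1, full[T - 1])
        M[T] = M[T - 1] + 2**Dhat(T - 1) * c[T - 1] * x0
        c[T] = A(T) - 3 * y0 * c[T - 1]
    # closed forms of the proposition
    cS, prod = A(S), 1
    for j in range(1, S):
        prod *= xy0(S - j, full[S - j])[1]
        cS += (-3)**j * A(S - j) * prod
    MS = 2**full[0] + sum(2**Dhat(T) * xy0(T, full[T])[0] * c[T]
                          for T in range(1, S))
    assert cS == c[S] and MS == M[S]
    return c[S], M[S]

for ns in ([0, 2], [0, 2, 2], [1, 3, 2], [2, 1, 4, 3]):
    check(ns)
```
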
{\bf B)} The general case $n_S \,\geq\, 1$ can now be found by appending the operation $d^{n_S-1}$ to the above result. This leads to the following {\sl theorem}.\par\smallskip\noindent
\begin{theorem} \label{GenSol} {\bf The general case $\bf {\overrightarrow n}_S$} \par\smallskip\noindent
For the Collatz word $CW(\vec n_S)\,=\, d^{n_0}\,\rprod_{j=1}^{S}\,(s\,d^{n_j-1}) \,=\, d^{n_0}\, s\,\rprod_{j=1}^{S-1}\,(d^{n_j-1}\, s)\,d^{n_S-1}$ (the ordered product begins with $j=1$ on the left-hand side) with $n_0\,\in\, \mathbb N_0$, $n_j\,\in\, \mathbb N$ for $j\,=\, 1,\,...,\,S$, the first and last entries of the corresponding Collatz sequences $\{CS(\vec n_S;k)\}$, of length $L(S) \,=\, 1\,+\, n_0 \,+\, S\,+\, \sum_{j=1}^S n_j$, for $k\,\in\, \mathbb N_0$, are\par\smallskip\noindent
\begin{eqnarray}
M(\vec n_S;k) &\,=\,& M(S) \,-\, 2^{\hat D(S)}\,N(S)\,x_0(S,n_S-1)\,+\, 2^{D(S)}\,t_{min}(S,n_S-1,sign(c_{new}(S))) \nonumber\\
&& \,+\, 2^{D(S)}\,k\, , \nonumber \\
N(\vec n_S;k) &\,=\,& c_{new}(S)\,y_0(S,n_S-1)\,+\, 3^S\, t_{min}(S,n_S-1,sign(c_{new}(S)))\,+\, 3^S\, k\ ,
\end{eqnarray}
\end{theorem}\noindent
with $c_{new}(S)\, :=\, -N(S)$, $\hat D(S) \,=\, 1\,+\, \sum_{j=0}^{S-1}\,n_j$, $D(S)\,=\, \sum_{j=0}^S\, n_j$.
\begin {proof}\par\smallskip\noindent
In order to be able to apply to the {\sl Collatz} sequences $CS([n_0,n_1,...,n_{S-1},1])$ (with the results from part A) above) the final $d^{n_S-1}$ operation one needs for the last entries $N_{int}(S;t) \,=\, N(S) \,+\, 3^S\,t \,=\, 2^{n_S-1}\,m$ with some (even or odd) integer $m$. The new last entries of $CS([n_0,n_1,...,n_S];t)$ will then be $m$. The general solution of $3^S\,t\,-\, 2^{n_S-1}\,m\,=\, -N(S)\,=:\, c_{new}(S)$ is, according to {\sl Lemma}~\ref{SolDiophant}, \par\smallskip\noindent
\begin{eqnarray}
t\,\to\, x(S;t)&\,=\,& c_{new}(S)\, x_0(S,n_S-1)\,+\, 2^{n_S-1}\,t,\, \nonumber \\
m\,\to\, y(S;t)&\,=\,& c_{new}(S)\, y_0(S,n_S-1)\,+\, 3^S\,t,\,\ t\,\in\, \mathbb Z\, .
\end{eqnarray}
This leads to positive integer solutions after the shift $t\,\to\, t_{min}\,+\, k$, with \par\noindent
$t_{min} \,=\, t_{min}(S,n_S-1,sign(c_{new}(S)))$ to the claimed result $N(S;k)$ for the new last number of $CS(\vec n;k)$, with $k\,\in\, \mathbb N_0$. The new start value $M(S;k)$ is obtained by replacing $t\,\to\, x(S;t)$ in the old $M_{int}(S;t)$ (with $n_S\,=\, 1$). $M(S;k)\,=\, M_{int}(S,x(S;t))$ with $t\,\to\, t_{min}\,+\, k$, also leading to the claimed formula.
\end{proof} \par\smallskip\noindent
The remainder structure modulo $2^{D(S)}$ for $M(\vec n_S,k)$ and modulo $3^S$ for $N(\vec n_S,k)$ is manifest.\par\smallskip\noindent
The explicit sum versions of the results for the case $n_S\,=\, 1$, given in {\sl Proposition}~\ref{SolRec}, can be inserted here.
\begin{example} {$\bf ud^{m}\,=\, sd^{m-1}$}\par\smallskip\noindent
For $m \,=\, 1,\,2,\, 3$ and $ k\,=\, 0,\,1,\,...,\,10$ one finds for $N([0,m],k)$:\par\noindent $[2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32]$,\ $[1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31]$,\ \par\noindent
$ [2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32]$, and for $M([0,m],k)$:\par\noindent
$[1,3,5,7,9,11,13,15,17,19,21]$,\ $[1,5,9,13,17,21,25,29,33,37,41]$,\ \par\noindent
$[5,13,21,29,37,45,53,61,69,77,85]$. Only the odd members of $N([0,m],k)$, and the corresponding $M([0,m],k)$, appear in \cite{Truemper}, example 2.1. See \seqnum{A238475} for $M([0,2\,n],k)$ and \seqnum{A238476} for $M([0,2\,n\,-\, 1],k)$. The odd $N([0,2\,n],k)$ values are the same for all $n$, namely $1\,+\, 6\,k$, and the odd $N([0,2\,n-1],k)$ values are $5\,+\, 6\,k$ for all $n\,\in\, \mathbb N$.
\end{example}
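The listed start and end numbers can be regenerated by brute force (a check of ours, not from the paper): $M$ runs over the odd numbers with $2^m \,|\, 3M+1$, and $N\,=\, (3M+1)/2^m$:

```python
def first_last(m, count):
    """Start numbers M (odd, with 2^m | 3M+1) and end numbers N = (3M+1)/2^m
    for the word u d^m = s d^(m-1)."""
    Ms, Ns = [], []
    M = 1
    while len(Ms) < count:
        if (3 * M + 1) % 2**m == 0:
            Ms.append(M)
            Ns.append((3 * M + 1) // 2**m)
        M += 2
    return Ms, Ns

assert first_last(1, 11) == ([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21],
                             [2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32])
assert first_last(2, 11) == ([1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41],
                             [1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31])
assert first_last(3, 11) == ([5, 13, 21, 29, 37, 45, 53, 61, 69, 77, 85],
                             [2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32])
```
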
\begin{example} {$\bf (ud)^{S}\,=\, s^{S},\ S\,\in\, \mathbb N$}\par\smallskip\noindent
$\vec n_S\,=\, [0,1,...,1]$ with $S$ times a $1$. For $S \,=\, 1,\,2,\, 3$ and $ k\,=\, 0,\,1,\,...,\,9$ one finds for $N(\vec n_S,k)$:\ \par\noindent
$[5,\,8,\,11,\,14,\,17,\,20,\,23,\,26,\,29,\,32]$,\ $[17,\,26,\,35,\,44,\,53,\,62,\,71,\,80,\,89,\,98]$,\ \par\noindent
$[53,\,80,\,107,\,134,\,161,\,188,\,215,\,242,\,269,\,296]$, and for $M(\vec n_S,k)$:\par\noindent
$[3,\, 5,\, 7, \,9,\, 11,\, 13,\, 15,\, 17,\, 19, \,21]$,\ $ [7,\, 11,\, 15,\, 19, \,23,\, 27, \,31, \,35,\, 39,\, 43]$,\ \par\noindent
$[15, \,23,\, 31, \,39, \,47, \,55, \,63, \,71, \,79, \,87]$. For the odd $N$ entries, and the corresponding $M$ entries, this is \cite{Truemper}, example 2.1. See \seqnum{A239126} for these $M$ values, and \seqnum{A239127} for these $N$ values, which are here $S$ dependent.
\end{example} \par\smallskip\noindent
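These lists can be reproduced by iterating $s\colon 2k+1 \mapsto 3k+2$, i.e. $n \mapsto (3n+1)/2$ on odd $n$; a quick check of ours:

```python
def s(n):
    # s: 2k+1 -> 3k+2, i.e. n -> (3n+1)/2, defined on odd n only
    assert n % 2 == 1
    return (3 * n + 1) // 2

def s_pow(n, S):
    for _ in range(S):
        n = s(n)
    return n

data = {
    1: ([3, 5, 7, 9, 11, 13, 15, 17, 19, 21],
        [5, 8, 11, 14, 17, 20, 23, 26, 29, 32]),
    2: ([7, 11, 15, 19, 23, 27, 31, 35, 39, 43],
        [17, 26, 35, 44, 53, 62, 71, 80, 89, 98]),
    3: ([15, 23, 31, 39, 47, 55, 63, 71, 79, 87],
        [53, 80, 107, 134, 161, 188, 215, 242, 269, 296]),
}
for S, (Ms, Ns) in data.items():
    assert [s_pow(M, S) for M in Ms] == Ns
```
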
In conclusion, the author does not think that the knowledge of all {\sl Collatz} sequences with a given up-down pattern (a given {\sl Collatz} word) will help to prove the {\sl Collatz} conjecture. Nevertheless the problem considered in this paper is a nice application of a simple {\sl Diophantine} equation.\par\bigskip\noindent
{\bf Acknowledgement}\par\smallskip\noindent
Thanks go to {\sl Peter Bala} who answered the author's question for a proof that all triangle \seqnum{A107711} entries are non-negative integers. See the history there, Feb 28 2014.
\par\bigskip\noindent
{\small arXiv:1404.2710 (math.NT), ``On Collatz' Words, Sequences and Trees''. Abstract: Motivated by a recent work of Trümper we consider the general Collatz word (up-down pattern) and the sequences following this pattern. The recurrences for the first and last sequence entries are given, obtained from repeated application of the general solution of a binary linear inhomogeneous Diophantine equation. These recurrences are then solved. The Collatz tree is also discussed.}
\par\bigskip\noindent
{\small arXiv:2301.02645, ``The Generalized Kauffman-Harary Conjecture is True''. Abstract: For a reduced alternating diagram of a knot with a prime determinant $p$, the Kauffman-Harary conjecture states that every non-trivial Fox $p$-coloring of the knot assigns different colors to its arcs. In this paper, we prove a generalization of the conjecture stated nineteen years ago by Asaeda, Przytycki, and Sikora: for every pair of distinct arcs in the reduced alternating diagram of a prime link with determinant $\delta$, there exists a Fox $\delta$-coloring that distinguishes them.}
\section{History of the alternation conjecture}
In 1998, Louis H. Kauffman and Frank Harary formulated the following conjecture \cite{HK}:
\begin{conjecture*}
Let $D$ be a reduced, alternating diagram of a knot $K$ having determinant $p$, where $p$ is prime. Then every non-trivial $p$-coloring of $D$ assigns different colors to different arcs.
\end{conjecture*}
This conjecture is now known as the Kauffman-Harary conjecture. It was proved for rational knots \cite{KLa, PDDGS}, Montesinos knots \cite{APS}, some Turk's head knots \cite{DMMS}, and for algebraic knots \cite{DS}. In 2009, Thomas W. Mattman and Pablo Solis proved this conjecture using the notion of pseudo colorings \cite{MS}. A generalization of this conjecture, known as the generalized Kauffman-Harary (GKH) conjecture, was formulated by Marta M. Asaeda, Adam S. Sikora, and the fifth author in 2004 \cite{APS}. They proved this conjecture for Montesinos links in the same paper. In this paper, we prove it in full generality.
\
The paper is structured as follows. In the next section we introduce the GKH conjecture and we prove it in Section \ref{proofconjecture}. In Section \ref{nonprimealt}, we reformulate and prove the conjecture for non-prime alternating links. We illustrate the results with some examples in Section \ref{exfoxsection}. In the last section, we discuss pseudo colorings followed by some open questions.
\section{Preliminaries}
In this section, we state the original and alternate versions of the GKH conjecture. The difference between the original and generalized versions of the conjecture is that the former is about links with prime determinant, while the generalized version is about links with determinant not necessarily prime. It is important to note that the only link whose determinant is prime is the Hopf link.
\begin{conj*}
If $D$ is a reduced alternating diagram of a prime link $L$, then different arcs of $D$ represent different elements of $H_1(M^{(2)}_L,\mathbb Z)$, where $M^{(2)}_L$ denotes the double branched cover of $S^3$ branched along $L$.
\end{conj*}
The GKH conjecture was formulated in \cite{APS} using the homology of the double branched cover of $S^3$ branched along $L$. In this paper we use a diagrammatic version of this conjecture by using the universal\footnote{Analogous to the fundamental group and the fundamental quandle, this group is often called the fundamental group of Fox colorings.} group of Fox colorings $Col(D)$ for a prime link $L$ with diagram $D$.
\begin{definition}
The group $\boldsymbol{Col(D)}$ is the abelian group whose generators are indexed by the arcs of $D$, denoted by $arcs(D)$, and whose relations, one for each crossing of $D$, are of the form
$2b-a-c=0$. More precisely, $$Col(D) = \displaystyle \bigg\{ \text {arcs}(D) \ | \ \ \vcenter {\hbox{
\begin{overpic}[scale = .08]{CrossingMatrixRelation.jpg}
\put(26, 26.5){\tiny{$b$}}
\put(0, -1){\tiny{$b$}}
\put(14, -1){\tiny{$c=2b-a$}}
\put(0, 26.5){\tiny{$a$}}
\end{overpic} }} \ \ \ \ \ \ \bigg\}.$$
\end{definition}
It is known that $Col(D)= \mathbb Z \oplus H_1(M^{(2)}_L,\mathbb Z)$ (see, for example, \cite{Prz1}).
\begin{definition}\label{coltrivialdefi}
Let $Col^{trivial}(D) \cong \mathbb Z$ be the group of trivial colorings of $D$. This group is embedded in $Col(D)$ and the quotient group $\displaystyle \frac{Col(D)}{Col^{trivial}(D)}$ is called the \textbf{reduced group of Fox colorings}. We denote it by $Col^{red}(D)$.
\end{definition}
Notice that, for a diagram $D$ of a link $L$, $Col^{red}(D)=H_1\big(M^{(2)}_L,\mathbb Z\big)$ and, for non-split alternating links, this group is finite, since the determinant of such a link is non-zero.
\
The first two statements of the following conjecture are equivalent to the original GKH conjecture, while part $(c)$ offers an extension.
\begin{conjecture}[Alternate forms of the generalized Kauffman-Harary conjecture]\label{ConAPS}
\
Let $D$ be a reduced alternating diagram of an alternating prime link and let $\delta(D)$ denote the absolute value of its determinant.
\begin{enumerate}
\item [\namedlabel{a}{(a)}] Let $\mathbb{Z}^{|arcs|}$ denote the free abelian group $\mathbb{Z}^{|arcs|} = \{arcs(D) \mid \emptyset \}$. Consider the map
$\mathbb{Z}^{|arcs|} \xrightarrow{\beta} Col(D).$
Then $\beta$ is injective on the arcs of $D$, that is, $\beta(a_i) \neq \beta(a_j)$ for $i \neq j$.
\item [\namedlabel{b}{(b)}] The diagram $D$ has $t$ Fox $\delta(D)$-colorings $y_1, y_2, \hdots, y_t$, such that for every pair of distinct arcs $a_i, a_j$, there exists $y_k$ such that $y_k(a_i) \neq y_k(a_j)$.
\item [\namedlabel{c}{(c)}] If $Col^{red}(D) = \mathbb Z_{n_1} \oplus \mathbb Z_{n_2} \oplus \cdots \oplus \mathbb Z_{n_s}$ with $n_{i+1} | n_i$, then there are $s$ Fox $n_1$-colorings that distinguish all the arcs of $D$. Note that $s$ is strictly less than the number of crossings of $D$.
\end{enumerate}
\end{conjecture}
\begin{remark}
Parts \ref{a} and \ref{b} of Conjecture \ref{ConAPS} are equivalent to each other, since for a finite group $G$, we have $G \cong Hom(G, \mathbb{Z}_{n_1})\cong Hom(G, \mathbb{Z}_{\delta(D)})$, where $G=\mathbb{Z}_{n_1}\oplus \mathbb Z_{n_2} \oplus \cdots \oplus \mathbb{Z}_{n_s}$, with $n_{i+1} | n_i$ and $\delta(D) =n_{1}n_{2}\cdots n_{s}$. In particular, $Hom(Col^{red}(D), \mathbb{Z}_{\delta(D)}) \cong Hom(Col^{red}(D), \mathbb{Z}_{n_1}) \cong Col^{red}(D)$. Thus, we can work with a group or its dual. To distinguish elements in the group we often analyze its homomorphisms (dual elements) into the given ring. See \cite{lang}, for example.
\end{remark}
\section{Proof of the generalized Kauffman-Harary conjecture}\label{proofconjecture}
The proof of the GKH conjecture is organized as follows. First, we define the crossing matrix $C'(D)$ and coloring matrix $L(D)$ of a link diagram $D$. Following \cite{MS} we prove that every column of the coloring matrix represents a non-trivial Fox $\delta(D)$-coloring. Then using the fact that the coloring matrix of the mirror image of $D$ is the transpose of $L$, we prove part \ref{b}, and equivalently, part \ref{a} of Conjecture \ref{ConAPS}. Additionally, we show that the columns of the coloring matrix generate the group $Col^{red}(D)$ and use this fact to prove part \ref{c} of Conjecture \ref{ConAPS}.
\begin{definition}
A \textbf{Fox} $\boldsymbol{k}$\textbf{-coloring} of a diagram $D$ is a function $f: \mathit{arcs}(D) \to \mathbb{Z}_{k}$, satisfying the property that every arc is colored by an element of $\mathbb{Z}_{k}=\left\lbrace 0, 1, 2, 3, \dots, k-1\right\rbrace $ in such a way that at each crossing the sum of the colors of the undercrossings is equal to twice the color of the overcrossing modulo $k$. That is, if at a crossing $v$ the overcrossing is colored by $b$, and the undercrossings are colored by $a$ and $c$, then $2b-a-c \equiv0$ modulo $k$. See Figure \ref{CrossingMatrixRelation} for an illustration. The group of Fox $k$-colorings of a diagram $D$ is denoted by $\mathit{Col}_{k}(D)$ and the number of Fox $k$-colorings is denoted by $\mathit{col}_{k}(D)$. Analogous to Definition \ref{coltrivialdefi}, we divide the group $Col_{k}(D)$ by the group of trivial colorings and denote the quotient group by $Col^{red}_{k}(D)$.
\end{definition}
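The defining condition can be checked mechanically; a small sketch of ours (the trefoil crossing data and arc labels below are our illustration, not from the paper):

```python
# Crossing data as triples (over-arc b, under-arc a, under-arc c); at each
# crossing a Fox k-coloring must satisfy 2b - a - c = 0 (mod k).
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def is_fox_coloring(crossings, colors, k):
    return all((2 * colors[b] - colors[a] - colors[c]) % k == 0
               for b, a, c in crossings)

assert is_fox_coloring(trefoil, {0: 0, 1: 1, 2: 2}, 3)   # non-trivial 3-coloring
assert is_fox_coloring(trefoil, {0: 5, 1: 5, 2: 5}, 7)   # trivial coloring, any k
assert not is_fox_coloring(trefoil, {0: 0, 1: 1, 2: 2}, 5)
```
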
The matrix describing the space of colorings $Col(D)$ is referred to, by Mattman and Solis, as the crossing matrix for a fixed arbitrary ordering of the crossings \cite{MS}. Here we do not assume that the diagram is alternating.
\begin{definition}
Fix an ordering of the crossings of a reduced link diagram $D$. Then the set of arcs inherits the order of the set of crossings. In this way, the over-arc has the same index as the crossing. The \textbf{crossing matrix}\footnote{The alternative, more descriptive, name could be {\it unreduced fundamental Fox colorings matrix.}} of $D$, denoted by $C'(D)$, is an $n \times n$ matrix such that each row corresponds to a crossing that gives the relation $2b-a-c=0$ (see Figure \ref{CrossingMatrixRelation}). The entries of the matrix are defined as follows\footnote{It is possible that two under-arcs at a crossing are not distinct. Then the relation $2b-a-c=0$ becomes $2b-2a=0$. For instance, this may occur for the Hopf link.}:
\begin{minipage}{0.5\textwidth}
$$ C_{ij}' = \left\{ \begin{array}{rl}
2 & \text{if } j=i, \text{ that is, } a_j \
\text{is the over-arc at } c_i, \\
-1 & \text{if } a_j \text{ is an under-arc at } c_i \text{ }(j\neq i), \text{ and} \\
0 & \text{otherwise}.
\end{array}
\right.$$
\end{minipage}
\begin{minipage}{0.47\textwidth}
\centering
\includegraphics[scale=0.45]{FoxrelationOverpic.png}
\captionof{figure}{Fox coloring relation at crossing $v$.}
\label{CrossingMatrixRelation}
\end{minipage}
\end{definition}
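For a concrete illustration (ours, not from the paper), the crossing matrix of the standard trefoil diagram, with the over-arc at $c_i$ labeled $a_i$:

```python
def crossing_matrix(crossings):
    """C'(D): row i encodes the relation 2*a_i - a_j - a_k = 0 at crossing c_i,
    where crossings[i] = (over-arc, under-arc, under-arc)."""
    n = len(crossings)
    C = [[0] * n for _ in range(n)]
    for i, (b, a, c) in enumerate(crossings):
        C[i][b] += 2          # over-arc
        C[i][a] -= 1          # under-arcs (the += / -= form also covers the
        C[i][c] -= 1          # case of a coincident under-arc, 2b - 2a = 0)
    return C

trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
C = crossing_matrix(trefoil)
assert C == [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
# each row sums to zero, so the all-ones (trivial) coloring lies in the kernel
assert all(sum(row) == 0 for row in C)
```
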
\begin{figure}[ht]
\centering
$$\vcenter{\hbox{
\begin{overpic}[scale = .44]{Crossingchange.png}
\put(80, 125){$c_k$}
\put(80, 83){$c_i$}
\put(80, 40){$c_j$}
\put(92, 70){$a_i$}
\put(65, 135){$a_k$}
\put(65, 50){$a_j$}
\put(385, 125){$c_k$}
\put(360, 125) {$\overline{a_k}$}
\put(385, 83){$c_i$}
\put(369, 95){$\overline{a_i}$}
\put(385, 40){$c_j$}
\put(360, 40){$\overline{a_j}$}
\end{overpic} }}$$
\caption{Neighborhood of the crossing $c_i$ in $D$ (on the left) and $\overline{D}$.}
\label{mirrorcrossing}
\end{figure}
The following lemma holds only for alternating links and plays an important role in the proof of the GKH conjecture.
\begin{lemma}\label{Mirror}
Let $D$ be a reduced alternating link diagram with crossing matrix $C'(D)$ and let $\overline{D}$ be its mirror image. Then the matrix $C'^{T}$ is a crossing matrix for $\overline{D}$.
\end{lemma}
\begin{proof}
Denote the crossings of the diagram $D$ by $c_1, \dots,c_n$ and let the over-arc at the crossing $c_i$ be denoted by $a_i$. Notice that, in the matrix $C'(D)$ all entries on the diagonal are $2$. We obtain $\overline{D}$ by crossing-change operations and we keep the ordering and names of the crossings. Now, let $\overline{a_i}$ denote the over-arc at the crossing $c_i$ in the diagram $\overline{D}$. In the row corresponding to the crossing $c_i$, suppose the columns corresponding to the arcs $a_j$ and $a_k$ have $-1$ as entries. Then in the matrix $C'(\overline{D})$, the column corresponding to $\overline{a_i}$ must have entries $-1$ in the rows corresponding to the crossings $c_j$ and $c_k$; see Figure \ref{mirrorcrossing}.
\end{proof}
Recall that if $\delta(D)\neq0$, then $Col^{red}(D)$ is a finite group whose invariant factor decomposition is $Col^{red}(D) = \mathbb Z_{n_1}\oplus \mathbb Z_{n_2}\oplus \cdots \oplus \mathbb Z_{n_s}$, with $n_{i+1} | n_i$ for all $i$. Notice that $s$ is the minimum number of generators of this group and $n_{1}$ is the annihilator of the group. Let $C(D)$ denote the reduced crossing matrix of $D$, which is the matrix obtained from $C'(D)$ by removing its last row and last column. We call the arc corresponding to the last column of $C'(D)$ the \textbf{base arc}. This matrix describes the group $Col^{red}(D)$. Since $\delta(D)\neq 0$, the matrix $C(D)$ is invertible and $C^{-1}(D)$ has rational entries. However, $n_1C^{-1}(D)$ is an integral matrix, which we denote by $L_{n_1}(D)$. Observe that the columns of $L_{n_1}(D)$ modulo $n_1$ represent Fox $n_1$-colorings of the diagram $D$ after coloring the base arc by color $0$.
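For the trefoil (our example, not from the paper) this construction can be carried out explicitly: here $n_1 = \delta = 3$, and the columns of $L_{n_1}(D)$, extended by $0$ on the base arc, are indeed Fox $n_1$-colorings:

```python
from fractions import Fraction

# Reduced crossing matrix of the trefoil (last row and column of C' removed);
# arcs 0, 1 remain, arc 2 is the base arc. Here n_1 = det C = 3.
C = [[2, -1], [-1, 2]]
n1 = 3

# invert the 2x2 matrix exactly and form L = n1 * C^{-1}
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
inv = [[Fraction(C[1][1], det), Fraction(-C[0][1], det)],
       [Fraction(-C[1][0], det), Fraction(C[0][0], det)]]
L = [[int(n1 * inv[i][j]) for j in range(2)] for i in range(2)]
assert det == n1 and L == [[2, 1], [1, 2]]

# each column of L (mod n1), with the base arc colored 0, is a Fox n1-coloring
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # (over-arc, under-arc, under-arc)
for j in range(2):
    colors = {0: L[0][j] % n1, 1: L[1][j] % n1, 2: 0}
    assert all((2 * colors[b] - colors[a] - colors[c]) % n1 == 0
               for b, a, c in trefoil)
```
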
\
The following result also holds for reduced non-alternating links.
\begin{theorem}\label{generatorlemma}
Let $D$ be a reduced diagram of a link with non-zero determinant. Then the columns of $L_{n_1}(D)$ modulo $n_1$ generate the space of Fox $n_1$-colorings of $D$.
\end{theorem}
\begin{proof}
Let $C(D)$ be the reduced crossing matrix of $D$ and let $Col^{red}(D) = \mathbb Z_{n_1}\oplus \mathbb Z_{n_2}\oplus \cdots \oplus \mathbb Z_{n_s}$, with $n_{i+1} | n_i$ for all $i$. After row and column operations, $C(D)$ can be reduced to its Smith normal form, denoted by $C_{SNF}(D)$, given below.
\begin{equation*}
C_{SNF}(D) =
\begin{pmatrix}
n_{1} & & && \\
&n_{2} & &&&\text{\Huge0}& \\
& & \ddots&&& \\
& &&n_s && \\
&&&&1 && \\
&\text{\Huge0}&&&&\ddots &&\\
& & &&&& 1
\end{pmatrix}
\end{equation*}
Its inverse matrix, $C^{-1}_{SNF}(D)$, with entries in $\mathbb{Q}$ has the following form.
\begin{equation*}
C^{-1}_{SNF}(D) =
\begin{pmatrix}
1/n_{1} & & && \\
&1/n_{2} & &&&\text{\Huge0}& \\
& & \ddots&&& \\
& &&1/n_s && \\
&&&&1 && \\
&\text{\Huge0}&&&&\ddots &&\\
& & &&&& 1
\end{pmatrix}
\end{equation*}
Thus, we obtain the following integral matrix $L^{SNF}_{n_1}(D)$.
\begin{equation*}
L^{SNF}_{n_1}(D)=n_1 C^{-1}_{SNF}(D)=
\begin{pmatrix}
n_1/n_{1} & & && \\
&n_1/n_{2} & &&&\text{\Huge0}& \\
& & \ddots&&& \\
& &&n_1/n_s && \\
&&&&n_1 && \\
&\text{\Huge0}&&&&\ddots &&\\
& & &&&& n_1
\end{pmatrix}
\end{equation*}
\
Now, for $i \le s$, the $i^{\mathit{th}}$ column $(0, 0, \dots, n_{1}/n_{i}, \dots, 0)^T$ of $L^{SNF}_{n_1}(D)$ modulo $n_1$ generates the subgroup $\mathbb{Z}_{n_i}$ of $\mathbb{Z}_{n_1}$. Since $Col^{red}_{n_1}(D) = Hom(\mathbb Z_{n_1}\oplus \mathbb Z_{n_2}\oplus \cdots \oplus \mathbb Z_{n_s}, \mathbb Z_{n_1})$, the columns of $L^{SNF}_{n_1}$ generate the group $Col^{red}_{n_1}(D)$, as desired.
\end{proof}
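The invariant factors used in the proof can also be computed without row reduction, via the classical identity $f_k = d_k/d_{k-1}$, where $d_k$ is the gcd of all $k\times k$ minors (and $d_0 = 1$). The following sketch is illustrative only; the matrix is the reduced crossing matrix of the square knot that appears later in Figure \ref{FoxSquareknot}, and the computation recovers $Col^{red}(D)=\mathbb{Z}_3\oplus\mathbb{Z}_3$:

```python
from math import gcd
from itertools import combinations

def det(M):
    """Exact integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    """Invariant factors f_k = d_k / d_{k-1}, where d_k is the gcd of all
    k x k minors (d_0 = 1).  Slow but transparent."""
    n = len(M)
    d = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det([[M[r][c] for c in cols] for r in rows])))
        d.append(g)
    return [d[k] // d[k - 1] for k in range(1, n + 1)]

# Reduced crossing matrix of the square knot (see Figure with C(D) below).
C = [[ 2, -1,  0,  0,  0],
     [-1,  2, -1,  0,  0],
     [-1, -1,  2,  0,  0],
     [ 0,  0, -1,  2, -1],
     [ 0,  0,  0, -1,  2]]

factors = sorted(invariant_factors(C))
# Nontrivial factors give Col^red(D) = Z_3 (+) Z_3, so n_1 = 3.
assert [f for f in factors if f != 1] == [3, 3]
```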
For alternating diagrams we can prove the following stronger result, which proves part \ref{b}, and equivalently, part \ref{a} of Conjecture \ref{ConAPS}.
\begin{theorem}\label{mainlemma}
Let $D$ be a reduced alternating diagram of a prime link. For any two arcs $a_i$ and $a_j$, there exists a column of $L_{n_1}(D)$ which distinguishes them.
\end{theorem}
\begin{proof}
Suppose the arcs (indexing rows) of the coloring matrix $n_{1}C^{-1}(D) = L_{n_1}(D)$ are given by $a_1$, $a_2$, \dots, $a_{n-1}$ as shown below. $$L_{n_1}(D)=n_{1}C^{-1}(D) = \begin{pNiceMatrix}[first-row,last-row,first-col,last-col]
\ \ \ \ \ & & & & \\
\textcolor{blue}{a_1:} \ \ \ \ \ \ & c_{1,1} & c_{1,2} & \cdots & c_{1,n-1} & \ \ \\
\textcolor{blue}{a_2:} \ \ \ \ \ \ & c_{2,1} & c_{2,2} & \cdots & c_{2,n-1} & \ \ \\
\textcolor{blue}{\vdots \ \ \ } \ \ \ \ \ \ & \vdots & \vdots & \vdots & \vdots & \ \ \\
\textcolor{blue}{a_{n-1}:}\ \ \ \ \ \ & c_{n-1,1} & c_{n-1,2} & \cdots & c_{n-1,n-1} & \ \ \\
\textcolor{blue}{a_n:} \ \ \ \ \ \ & 0 & 0 & \cdots & 0 & \ \ \\
\end{pNiceMatrix}$$
Recall that the reduced crossing matrix $C(D)$ is obtained from $C'(D)$ by removing its last row and last column. Now, each column of $L_{n_1}(D)$ colors the remaining first $n-1$ arcs of the diagram. For a complete Fox $n_1$-coloring of $D$ we color the last (base) arc $a_n$ by color $0$. If $c_{i,1}=0 \mod n_1$ for some $i < n$, then the column $C_1$ modulo $n_1$ cannot distinguish between the arcs $a_n$ and $a_i$. On the other hand, if the rows corresponding to $a_i$ and $a_j$ are not identical modulo $n_1$, then the two arcs are distinguished by any column in which they differ.
\
\textbf{Step 1:} If there is no column $C_j$ of $L_{n_1}(D)$ such that $c_{i,j} \neq 0$ mod $n_1$, then every entry in the $i^{\mathit{th}}$ row is $0$ mod $n_1$. It follows that in the transpose matrix $L_{n_1}^{T}(D)$, the column $C_{i}^{T}$ is the zero column modulo $n_1$. This would result in the existence of a pseudo coloring of $\overline{D}$ (see Definition \ref{pseudodef} and \cite{MS}), which is a contradiction. Thus, the base arc $a_n$ can be distinguished from any other arc by some column in $L_{n_1}(D)$ modulo $n_1$.
\
\textbf{Step 2:} Furthermore, if there are two arcs $a_i$ and $a_j$ that receive the same color in every column of $L_{n_1}(D)$, then we choose the arc $a_j$ as the base arc. This implies that the colors of $a_i$ are all equal to zero, and we are back in the situation of Step 1.
\end{proof}
The next theorem proves part \ref{c} of Conjecture \ref{ConAPS}, which is a more general version of Theorem \ref{mainlemma}.
\begin{theorem}\label{mainlemma2}
If $Col^{red}(D) = \mathbb Z_{n_1} \oplus \mathbb Z_{n_2} \oplus \cdots \oplus \mathbb Z_{n_s}$, with $n_{i+1} | n_i$, then there are $s$ Fox $n_1$-colorings (not necessarily corresponding to the columns of the coloring matrix) which distinguish all arcs. That is, for every pair of arcs of $D$, one of these $n_{1}$-colorings distinguishes them.
\end{theorem}
\begin{proof}
Denote the generators of the group $Col^{red}(D)$ by $a_1$, $a_2$, \dots, $a_s$. Every generator $a_i$ is a linear combination of some columns of the coloring matrix $L_{n_1}(D)$ modulo $n_1$ (see Theorem \ref{generatorlemma}), so each generator corresponds to a coloring of the diagram $D$. Conversely, every column of $L_{n_1}(D)$ modulo $n_1$ is a linear combination of these $s$ colorings. By Theorem \ref{mainlemma}, every pair of arcs is distinguished by some column of $L_{n_1}(D)$ modulo $n_1$; since that column is a linear combination of the $s$ colorings, at least one of these colorings must already distinguish the pair.
\end{proof}
\begin{corollary}\label{Coro}
If $Col^{red}(D)$ is the cyclic group $\mathbb{Z}_{n_1}$,\hfill
\begin{itemize}
\item [(a)] then there exists a non-trivial Fox $n_1$-coloring that distinguishes all arcs.
\item [(b)] Additionally, if $n_1$ is a prime number, then the original Kauffman-Harary conjecture holds. That is, every non-trivial Fox $n_1$-coloring distinguishes all arcs.
\end{itemize}
\end{corollary}
\begin{proof}
Part (a) follows directly from Theorem \ref{mainlemma2}, with $s=1$. Part (b) follows because, for prime $n_1$, every non-zero element of $\mathbb{Z}_{n_1}$ is a generator.
\end{proof}
\section{Non-prime alternating links}\label{nonprimealt}
Theorems \ref{mainlemma} and \ref{mainlemma2} do not hold as stated for the connected sum of alternating links\footnote{The connected sum of alternating links is an alternating link; see, for example, \cite{PBIMW}.} (see part \ref{(a)} of Lemma \ref{nonprimelemma}). In Theorem \ref{connectedsumnonprime}, we present a version of the GKH conjecture which holds for non-prime alternating links.
\begin{lemma}\cite{Prz1}
\label{nonprimelemma}
Let $D= D_1 \ \# \ D_2$ be the connected sum of two link diagrams. Then,
\begin{itemize}
\item [\namedlabel{(a)}{(a)}] the arcs connecting the two components represent the same element in $Col(D)$, and
\item[(b)] $Col^{red}(D_1 \ \# \ D_2) \cong Col^{red}(D_1)\oplus Col^{red}(D_2) $.
\end{itemize}
\end{lemma}
\begin{theorem}\label{connectedsumnonprime}
Let $D= D_1 \ \# \ D_2 \ \# \ \cdots \ \# \ D_n$, where $D_i$ is a reduced alternating diagram of a prime link $L_i$, for $i=1, 2, \dots, n$. Then,
\begin{itemize}
\item [(a)] for any pair of arcs different from arcs joining $D_i$ with $D_{i+1}$, there exists a Fox $n_1$-coloring which distinguishes them, and
\item [(b)] there are $t$ ($t \leq s$) Fox $n_1$-colorings such that any pair of arcs different from the ones joining $D_i$ with $D_{i+1}$ is distinguished by one of them.
\end{itemize}
\end{theorem}
\begin{proof}
This result follows from Theorems \ref{generatorlemma} and \ref{mainlemma}, and Lemma \ref{nonprimelemma}.
\end{proof}
\begin{remark}
Theorem \ref{connectedsumnonprime} was formulated for connected sums of diagrams. However, from William W. Menasco's result (see \cite{Men,Hos}), it follows that if an alternating diagram represents the connected sum of alternating links, then it is already a connected sum of diagrams.
\end{remark}
\begin{example}
Let $D$ be an alternating diagram of the square knot, that is $D= \overline{3}_1 \ \# \ 3_1$, with reduced crossing matrix $C(D)$ (see Figure \ref{FoxSquareknot}). Then
$Col^{red}(\overline{3}_1 \ \# \ 3_1)=
\mathbb Z_3 \oplus \mathbb Z_3$. Observe that columns 3 and 5 of $L_3(D)$ modulo $3$ (Figure \ref{matricesSquareKnot}) distinguish all pairs of arcs except the ones connecting $\overline 3_1$ with $3_1.$ Also, the third row (corresponding to the third crossing in the chosen ordering and, therefore, to the third arc) has all zero entries. That is, the third arc cannot be distinguished from the base arc.
\begin{figure}[H]
\begin{subfigure}{.4\textwidth}
$ C(D) =
\begin{pmatrix*}[r]
2 & -1 & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & 0\\ -1 & -1 & 2 & 0 & 0\\ 0 & 0 & -1 & 2 &
-1 \\ 0 & 0 & 0 & -1 & 2
\end{pmatrix*}$
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
$$\vcenter{\hbox{
\begin{overpic}[scale = 1.7]{squareknot}
\put(55, 60){$(1,0)$}
\put(29, 52){$(2,0)$}
\put(37, 135){$(0,0)$}
\put(133, 65){$(0,1)$}
\put(105, 54){$(0,2)$}
\put(30, 5){$(0,0)$}
\end{overpic} }}$$
\end{subfigure}
\caption{The reduced crossing matrix for the square knot (on the left). The square knot $\overline{3}_1 \ \# \ 3_1$ with two Fox $3$-colorings distinguishing every pair of arcs (on the right).} \label{FoxSquareknot}
\end{figure}
\begin{figure}[h]
$ L_3(D) = 3 C^{-1}(D) =
\begin{pmatrix}
3 & 2 & 1 & 0 & 0\\ 3 & 4 & 2 & 0 & 0\\ 3 & 3 & 3 & 0 & 0\\ 2 & 2 & 2 & 2 & 1\\
1 & 1 & 1 & 1 & 2
\end{pmatrix} \ \ \ \ \
L_3(D) \ \text{mod 3}=
\begin{pmatrix}
0 & 2 & 1 & 0 & 0\\ 0 & 1 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 2 & 2 & 2 & 2 & 1\\
1 & 1 & 1 & 1 & 2
\end{pmatrix} $
\caption{Matrices $L_3(D)$ and $L_3(D)$ modulo $3$ for the square knot.}\label{matricesSquareKnot}
\end{figure}
\end{example}
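The claim about columns 3 and 5 can be checked mechanically. In the sketch below (the matrix $L_3(D)$ modulo $3$ is transcribed from Figure \ref{matricesSquareKnot}, with the base arc appended as a row of zeros), the only pair of arcs the two columns fail to separate is the third arc and the base arc:

```python
# L_3(D) mod 3 for the square knot, with a zero row appended for the base arc a_6.
L3 = [[0, 2, 1, 0, 0],
      [0, 1, 2, 0, 0],
      [0, 0, 0, 0, 0],
      [2, 2, 2, 2, 1],
      [1, 1, 1, 1, 2],
      [0, 0, 0, 0, 0]]   # base arc

def undistinguished(matrix, cols):
    """Pairs of arcs (1-indexed) receiving equal colors in every chosen column."""
    n = len(matrix)
    return {(i + 1, j + 1)
            for i in range(n) for j in range(i + 1, n)
            if all(matrix[i][c] == matrix[j][c] for c in cols)}

# Columns 3 and 5 (0-indexed: 2 and 4) separate every pair of arcs except
# arc 3 and the base arc a_6 -- the arcs joining the two summands.
assert undistinguished(L3, [2, 4]) == {(3, 6)}
```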
\section{Examples of Fox colorings}\label{exfoxsection}
In this section we study examples of alternating link diagrams and their Fox colorings. For the structure of the group $Col^{red}(D)=H_{1}(M_{D}^{(2)}, \mathbb{Z})$ for knots up to 10 crossings, see Appendix C in \cite{BZ}.
\begin{example}
Kauffman and Harary
showed that the knot $7_7$ is a counterexample to their conjecture for a knot with non-prime determinant \cite{HK}. We have $\det(7_7)= 21$ and $Col^{red}(7_7)=\mathbb{Z}_{21}$.\footnote{It was noticed in \cite{KLa} that the Kauffman-Harary conjecture holds for any rational (2-bridge) knot without restrictions on the determinant of the knot. However, as they note, the formulation of the conjecture needs to be changed from ``every non-trivial Fox $D$-coloring" to ``there exists a Fox $D$-coloring." See Corollary \ref{Coro}.} See Figure \ref{7_7 figure} for a Fox $21$-coloring distinguishing all arcs.
\begin{figure}[h]
$L(7_7)=\begin{pmatrix}
24 & 20 & 12 & 10 & 16 & 11 \\
12 & 24 & 6 & 12 & 15 & 9 \\
15 & 16 & 18 & 8 & 17 & 13 \\
6 & 5 & 3 & 13 & 4 & 8 \\
18 & 22 & 9 & 11 & 26 & 10 \\
12 & 10 & 6 & 5 & 8 & 16 \\
\end{pmatrix} \ \ \ \ \
L(7_7) \ \text{mod} \ 21=\begin{pmatrix}
3 & 20 & 12 & 10 & 16 & 11 \\
12 & 3 & 6 & 12 & 15 & 9 \\
15 & 16 & 18 & 8 & 17 & 13 \\
6 & 5 & 3 & 13 & 4 & 8 \\
18 & 1 & 9 & 11 & 5 & 10 \\
12 & 10 & 6 & 5 & 8 & 16 \\
\end{pmatrix} $
\caption{Matrices $L(7_7)$ and $L(7_7)$ modulo $21$. Some non-trivial Fox $21$-colorings of $7_7$ do not distinguish all arcs; for example, the colorings given by columns 1 and 3 of $L(7_7)$ modulo $21$. However, columns $2$, $4$, $5$, and $6$ distinguish all arcs.}
\end{figure}
\begin{figure}[h]
$$\vcenter{\hbox{
\begin{overpic}[scale = 0.8]{7_7.png}
\put(170, 130){$1$}
\put(142, 100){$2$}
\put(53, 138){$4$}
\put(75, 100){$7$}
\put(2, 20){$12$}
\put(70, 40){$20$}
\put(195, 45){$0$}
\end{overpic} }}$$
\caption{The knot $7_7$ with a Fox $21$-coloring which distinguishes all arcs.}\label{7_7 figure}
\end{figure}
\end{example}
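The caption's claim is easy to verify: a column of $L(7_7)$ modulo $21$ distinguishes all arcs exactly when its entries, together with the color $0$ of the base arc, are pairwise distinct. A small sketch, with the matrix transcribed from the figure above:

```python
# L(7_7) mod 21, with a zero row appended for the base arc.
L77_mod21 = [[ 3, 20, 12, 10, 16, 11],
             [12,  3,  6, 12, 15,  9],
             [15, 16, 18,  8, 17, 13],
             [ 6,  5,  3, 13,  4,  8],
             [18,  1,  9, 11,  5, 10],
             [12, 10,  6,  5,  8, 16],
             [ 0,  0,  0,  0,  0,  0]]   # base arc colored 0

# A column distinguishes all arcs iff its 7 entries are pairwise distinct mod 21.
good = [j + 1 for j in range(6)
        if len({row[j] for row in L77_mod21}) == 7]
assert good == [2, 4, 5, 6]   # columns 1 and 3 repeat a color (12 and 6, resp.)
```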
\begin{example}
Consider the family of links obtained by closing the braids $(\sigma_1\sigma_2^{-1})^{n}$. These links are sometimes called Turk's head links and can also be obtained by drawing the Tait diagrams of the wheel graphs $W_n$. A closed formula for the determinant of $D(W_n)$ is given in \cite{Prz-Goeritz}. Examples for $n=5$ and $n=6$ are drawn in Figure \ref{W5andW6} and their reduced groups of Fox colorings are as follows:
\begin{itemize}
\item [(a)] For $n=5$, $D(W_5)$ is $10_{123}$ in Rolfsen's table \cite{Rol}. $Col^{red}(D(W_5)) = \mathbb Z_{11} \oplus \mathbb Z_{11} $.
\item [(b)] For $n=6$, $D(W_6)$ is the link $12^{3}_{474}$ in Thistlethwaite's tables \cite{Prz-Goeritz, Thi}.
$Col^{red}(D(W_6)) = \mathbb Z_{40} \oplus \mathbb Z_{8}$.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{W_5.png}
\caption{The knot $D(W_5)$ with two Fox colorings distinguishing all arcs (on the left), and the link $D(W_6)$ with two Fox colorings distinguishing all arcs (on the right).}
\label{W5andW6}
\end{figure}
\end{example}
\begin{example}
The group $Col^{red}$ for pretzel links is given in Proposition $7$ in \cite{APS} and its generalization to Montesinos links is given in Proposition $8$ in \cite{APS}. Here we show two examples together with their coloring matrices modulo $n_1$.
\begin{itemize}
\item[(a)] Let $P(3,3,3,3,3)$ be a pretzel knot with $15$ crossings. Its group $Col^{red}$ is equal to $\mathbb{Z}_{15} \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_3$. See its coloring matrix, $L(P(3,3,3,3,3))$ modulo $15$ in Figure \ref{coloringpretzel1}.
\item[(b)] Let $P(3,3,3,6)$ be a pretzel knot with $15$ crossings. Its group $Col^{red}$ is equal to $\mathbb{Z}_{21} \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_3$. See its coloring matrix, $L(P(3,3,3,6))$ modulo $21$ in Figure \ref{coloringpretzel2}.
\end{itemize}
\end{example}
\begin{figure}
$L(P(3,3,3,3,3))\ mod \ 15=\left(
\begin{array}{cccccccccccccc}
5 & 1 & 1 & 1 & 12 & 12 & 7 & 8 & 3 & 3 & 14 & 14 & 14 & 10 \\
10 & 5 & 10 & 10 & 10 & 10 & 5 & 10 & 0 & 10 & 10 & 10 & 10 & 10 \\
10 & 1 & 11 & 1 & 12 & 12 & 7 & 8 & 13 & 3 & 4 & 14 & 14 & 10 \\
10 & 12 & 12 & 7 & 14 & 14 & 9 & 6 & 11 & 11 & 13 & 3 & 3 & 10 \\
10 & 1 & 1 & 1 & 7 & 12 & 7 & 8 & 13 & 13 & 4 & 4 & 14 & 10 \\
10 & 12 & 12 & 12 & 14 & 9 & 9 & 6 & 11 & 11 & 13 & 13 & 3 & 0 \\
10 & 8 & 8 & 8 & 6 & 6 & 11 & 4 & 9 & 9 & 7 & 7 & 7 & 5 \\
5 & 7 & 7 & 7 & 9 & 9 & 4 & 11 & 6 & 6 & 8 & 8 & 8 & 10 \\
0 & 3 & 13 & 13 & 11 & 11 & 6 & 9 & 9 & 14 & 12 & 12 & 12 & 10 \\
10 & 14 & 4 & 4 & 13 & 13 & 8 & 7 & 12 & 7 & 1 & 1 & 1 & 10 \\
10 & 3 & 3 & 13 & 11 & 11 & 6 & 9 & 14 & 14 & 7 & 12 & 12 & 10 \\
10 & 14 & 14 & 4 & 3 & 13 & 8 & 7 & 12 & 12 & 1 & 11 & 1 & 10 \\
10 & 10 & 10 & 10 & 10 & 0 & 10 & 5 & 10 & 10 & 10 & 10 & 5 & 10 \\
10 & 14 & 14 & 14 & 3 & 3 & 8 & 7 & 12 & 12 & 1 & 1 & 1 & 5 \\
\end{array}
\right)$
\caption{$L(P(3,3,3,3,3))$ modulo $15$. The colorings given by the columns 3, 4, and 10 distinguish all arcs.}\label{coloringpretzel1}
\end{figure}
\begin{figure}
$L(P(3,3,3,6)) \ mod \ 21=\left(
\begin{array}{cccccccccccccc}
9 & 5 & 1 & 10 & 11 & 18 & 18 & 5 & 7 & 3 & 20 & 1 & 1 & 14 \\
1 & 6 & 11 & 12 & 9 & 16 & 16 & 20 & 14 & 19 & 3 & 18 & 18 & 14 \\
14 & 7 & 0 & 14 & 7 & 14 & 14 & 14 & 0 & 14 & 7 & 14 & 14 & 14 \\
6 & 8 & 10 & 16 & 5 & 12 & 12 & 8 & 7 & 9 & 11 & 10 & 10 & 14 \\
15 & 13 & 11 & 5 & 16 & 9 & 9 & 13 & 14 & 12 & 10 & 11 & 11 & 7 \\
13 & 15 & 17 & 9 & 12 & 12 & 19 & 15 & 14 & 16 & 18 & 17 & 3 & 0 \\
11 & 17 & 2 & 13 & 8 & 15 & 8 & 17 & 14 & 20 & 5 & 2 & 16 & 14 \\
13 & 15 & 17 & 9 & 12 & 19 & 19 & 8 & 14 & 16 & 18 & 3 & 3 & 14 \\
5 & 16 & 6 & 11 & 10 & 17 & 17 & 2 & 0 & 11 & 1 & 20 & 20 & 14 \\
18 & 17 & 16 & 13 & 8 & 15 & 15 & 17 & 7 & 6 & 5 & 16 & 16 & 14 \\
10 & 18 & 5 & 15 & 6 & 13 & 13 & 11 & 14 & 1 & 9 & 12 & 12 & 14 \\
12 & 16 & 20 & 11 & 10 & 17 & 3 & 2 & 14 & 18 & 1 & 13 & 20 & 14 \\
14 & 14 & 14 & 7 & 14 & 0 & 14 & 14 & 14 & 14 & 14 & 14 & 7 & 14 \\
12 & 16 & 20 & 11 & 10 & 3 & 3 & 16 & 14 & 18 & 1 & 20 & 20 & 7 \\
\end{array}
\right)$
\caption{$L(P(3,3,3,6))$ modulo $21$. The colorings given by columns 1 and 6 distinguish all arcs.}\label{coloringpretzel2}
\end{figure}
\section{Odds and ends}\label{odds}
\subsection{Pseudo colorings}
An important tool in our proof of Theorem \ref{mainlemma} is the idea of pseudo colorings. In \cite{MS} and in this paper, it is shown that no pseudo colorings exist for reduced, prime, alternating link diagrams. However, the existence of pseudo colorings can be used to see how far a diagram is from being an alternating link diagram. In this section, we briefly explore this concept. In \cite{MS}, Proposition 3.2 depends on the fact that for reduced alternating diagrams the rows of the crossing matrix add to zero. This does not hold for non-alternating diagrams, as we illustrate in the following examples.
\begin{definition}\label{pseudodef}
Let $D$ be a link diagram and $\epsilon \in \{-1,+1\}$. Following Mattman and Solis \cite{MS}, we define an $\boldsymbol{\epsilon}$\textbf{-pseudo coloring} of $D$ as a coloring of the arcs of $D$ such that the Fox coloring convention $2b - a - c = 0$ is satisfied at all but two crossings. We denote these two crossings by $c_{+1}$ and $c_{\epsilon},$ where the coloring conventions are $2b - a - c = +1$ and $2b - a - c = \epsilon,$ respectively. To obtain the pseudo colorings as defined in \cite{MS}, put $\epsilon=-1$.
\end{definition}
For an alternating link diagram $D$, our convention was to order the crossings first; the set of arcs then inherits the order of the set of crossings (compare Definition \ref{CrossingMatrixRelation}). The reason for this choice is that $C'(\overline{D})$ is the same as $C'(D)^{T}$. This does not work for non-alternating link diagrams.
\
In general, we can arbitrarily order crossings and arcs. In Figure \ref{orderarcs} we give an example of ordering crossings and arcs for the knot $8_{19}$. We first choose a base point and an orientation (shown by an arrow on the left-hand side of Figure \ref{orderarcs}). Starting at this base point, we move along the knot and order crossings. Next, arcs can be ordered arbitrarily with the base arc always being the last one. In Figure \ref{orderarcs} the first coordinate gives the number of the crossing and the second one gives the number of the arc.
\begin{figure}[ht]
\centering
\begin{subfigure}{.5\textwidth}
$$\vcenter{\hbox{
\begin{overpic}[scale = 1.7]{8_19aspretzel}
\put(103, 72){$1,5$}
\put(103, 45){$2,1$}
\put(25, 30){$3,7$}
\put(25, 58){$4,4$}
\put(25, 85){$5,6$}
\put(62, 85){$6,3$}
\put(65, 58){$7,8$}
\put(65, 31){$8,2$}
\end{overpic} }}$$
\end{subfigure}%
\begin{subfigure}{.5\textwidth} $$\vcenter{\hbox{
\begin{overpic}[scale = 1.7]{8_19aspretzel}
\put(70, 105){$0$}
\put(70, 13){$-1$}
\put(27, 27){$0$}
\put(-3, 40){$0$}
\put(-3, 75){$0$}
\put(65, 124){$0$}
\put(69, 70){$0$}
\put(110, 31){$0$}
\put(47, 20){$c_{+1}$}
\put(103, 45){$c_{+1}$}
\end{overpic} }}$$
\end{subfigure}
\caption{The torus knot $T(3,4)$ ($8_{19}$ in Rolfsen's table \cite{Rol}) depicted as the pretzel knot $P(3,3,-2)$ showing ordering of crossings and arcs (on the left). On the right, there is a pseudo coloring given by the second column of $C^{-1}(D)$; compare Remark \ref{remarkpseudo}.} \label{orderarcs}
\end{figure}
\
In the following example, we analyze non-split, non-prime alternating diagrams.
\begin{example}\label{Ex52}
Let $D = D_1 \ \# \ D_2$ be a non-split, non-prime alternating link diagram. $D$ always has a $-1$-pseudo coloring using color $1$ on $D_1$ and color $0$ on $D_2$. We illustrate this idea for the square knot $\overline{3}_1 \ \# \ 3_1$ in Figure \ref{PseudoSquare}.
\begin{figure}[h]
\centering
$$\vcenter{\hbox{
\begin{overpic}[scale = 1.7]{squareknot}
\put(57, 70){$1$}
\put(32, 78){$1$}
\put(40, 132){$0$}
\put(5,103){$c_{+1}$}
\put(108, 48){$0$}
\put(134, 60){$0$}
\put(32, 4){$1$}
\put(85,23){$c_{-1}$}
\end{overpic} }}$$
\caption{$-1$-pseudo coloring of the square knot with the $+1$-crossing denoted by $c_{+1}$ and the $-1$-crossing denoted by $c_{-1}$.}\label{PseudoSquare}
\end{figure}
\end{example}
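The pseudo coloring of Example \ref{Ex52} can be verified directly from the crossing data. In the sketch below, the crossing list is one consistent reading of the rows of the full crossing matrix $C'(D)$ of the square knot (each row has a $2$ at the over-arc and $-1$ at the two under-arcs); coloring the three arcs of one summand by $1$ and the rest by $0$ produces exactly one $+1$-crossing and one $-1$-crossing:

```python
# Crossings of a square-knot diagram as (over-arc, under-arc, under-arc),
# 0-indexed, read off one consistent full crossing matrix C'(D).
crossings = [(0, 1, 5), (1, 0, 2), (2, 0, 1),
             (3, 2, 4), (4, 3, 5), (5, 3, 4)]

# Color the three arcs of one summand by 1 and the rest by 0.
colors = [1, 1, 1, 0, 0, 0]
values = [2 * colors[b] - colors[a] - colors[c] for (b, a, c) in crossings]
# Exactly one crossing evaluates to +1 and one to -1: a -1-pseudo coloring.
assert sorted(values) == [-1, 0, 0, 0, 0, 1]
```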
On the other hand, non-alternating link diagrams often have $-1$-pseudo colorings and $+1$-pseudo colorings. See Examples \ref{Ex53} and \ref{Ex819}. If the determinant of a knot with diagram $D$ is equal to $1$, we have $L(D) = C^{-1}(D)$ and every column of $C^{-1}(D)$ colors the first $n-1$ arcs of the diagram. Then for a complete $\epsilon$-pseudo coloring of $D$, we color the last (base) arc $a_n$ by color $0$.
\begin{example}\label{Ex53}
Consider the braid word $\sigma^{3}_{2}\sigma^{}_{1}\sigma^{-1}_{3}\sigma^{-2}_{2}\sigma^{}_{1}\sigma^{-1}_{2}\sigma^{}_{1}\sigma^{-1}_{3}$ whose closure is the Conway knot. The determinant of this knot is $1$ and its crossing matrix $C'(D)$ is given in Figure \ref{CMCK}. The $+1$-pseudo coloring given by column $4$ and the $-1$-pseudo coloring given by column $1$ in the matrix shown in Figure \ref{pseudoConwayKnot} are illustrated in Figure \ref{CKpic} on the left and on the right, respectively.
\end{example}
\begin{figure}[ht]
\centering
\begin{subfigure}{.5\textwidth}
$$\vcenter{\hbox{
\begin{overpic}[scale = 0.9]{CK}
\put(55, 150){$2$}
\put(35.5, 118){$-8$}
\put(40, 108){$-5$}
\put(42, 86){$-2$}
\put(5, 222){$5$}
\put(1, 110){$0$}
\put(20, 210){$9$}
\put(18, 165){$6$}
\put(18, 149){$3$}
\put(60, 100){$-3$}
\put(3, 72){$1$}
\put(50, 60){$c_{+1}$}
\put(14, 76){$c_{+1}$}
\end{overpic} }}$$
\label{CKpicA}
\end{subfigure}%
\begin{subfigure}{.5\textwidth} $$\vcenter{\hbox{
\begin{overpic}[scale = 0.9]{CK}
\put(55, 150){$6$}
\put(35.5, 118){$-33$}
\put(38.5, 108){$-21$}
\put(42, 86){$-9$}
\put(5, 225){$21$}
\put(1, 110){$0$}
\put(20, 212){$39$}
\put(13, 165){$26$}
\put(13, 149){$13$}
\put(60, 100){$-13$}
\put(3, 72){$3$}
\put(50, 60){$c_{-1}$}
\put(55, 130){$c_{+1}$}
\end{overpic} }}$$
\label{CKpicB}
\end{subfigure}
\caption{The Conway knot with $+1$-pseudo coloring (on the left) and with $-1$-pseudo coloring (on the right). The last crossing $c_{+1}$ in the left figure changes to $c_{-1}$ in the right figure.}\label{CKpic}
\end{figure}
\begin{figure}[ht]
$$C'(D)=\left(
\begin{array}{ccccccccccc}
-1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
0 & -1 & -1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 2 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 2 & 0 & -1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 2 \\
-1 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 2& 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & -1 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 2 \\
2& 0 & 0&0 & 0 & 0&0 &0 &-1 & -1 &0 \\
\end{array}
\right)$$
\caption{The crossing matrix of the Conway knot. Notice that the rows of the crossing matrix satisfy the linear equation $R_{1}-R_2-R_3-R_4-R_5-R_6-R_7+R_8+R_9+R_{10}+R_{11}=0.$}\label{CMCK}
\end{figure}
\begin{figure}[h]
$$L(D)= \begin{pNiceMatrix}[r,first-row,last-row,first-col]
\ \ \ \ \ & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } & \text{ \ } \\
\ \ \ \ \ \ \ & \textbf{6} & -6 & -2 & \textbf{2} & -4 & -10 & -3 & 4 & 8 & 12 \ \ \\
\ \ \ \ \ \ & \textbf{-33} & 32 & 12 & \textbf{-8} & 22 & 52 & 17 & -22 & -44 & -66 \ \ \\
\ \ \ \ \ \ & \textbf{-9} & 9 & 4 & \textbf{-2} & 6 & 14 & 5 & -6 & -12 & -18 \ \ \\
\ \ \ \ \ \ & \textbf{21} & -21 & -8 & \textbf{5} & -14 & -34 & -11 & 14 & 28 & 42 \ \ \\
\ \ \ \ \ \ &\textbf{-21} & 21 & 8 & \textbf{-5} & 14 & 33 & 11 & -14 & -28 & -42 \\
\ \ \ \ \ \ &\textbf{3} & -3 & -1 & \textbf{1} & -2 & -5 & -1 & 2 & 4 & 6 \\
\ \ \ \ \ \ & \textbf{39} & -39 & -15 & \textbf{9} & -27 & -63 & -21 & 26 & 52 & 78 \\
\ \ \ \ \ \ & \textbf{13} & -13 & -5 & \textbf{3} & -9 & -21 & -7 & 9 & 18 & 26 \\
\ \ \ \ \ \ & \textbf{-13} & 13 & 5 & \textbf{-3} & 9 & 21 & 7 & -9 & -18 & -27 \\
\ \ \ \ \ \ &\textbf{26} & -26 & -10 & \textbf{6} & -18 & -42 & -14 & 18 & 35 & 52 \\
& \textbf{0} & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 \ \ \\
\end{pNiceMatrix}$$
\caption{Coloring matrix for the Conway knot. The last row of zeroes corresponds to the coloring of the base arc.}\label{pseudoConwayKnot}
\end{figure}
\begin{example}\label{Ex819}
Consider the torus knot $T(3,4)$ with diagram $D$ and crossings and arcs ordered as illustrated in Figure \ref{orderarcs}. Its crossing matrix is shown in Figure \ref{CprimeMatrix819}. Three columns of $C^{-1}(D)$ (shown in Figure \ref{CMatrix819}) are integral and they yield $\epsilon$-pseudo colorings: column 5 gives a $-1$-pseudo coloring (shown on the right in Figure \ref{8_19aspretzel}) and columns 1 and 2 give $+1$-pseudo colorings. The $+1$-pseudo coloring corresponding to column $1$ is shown on the left of Figure \ref{8_19aspretzel}.
\begin{figure}[ht]
\centering \includegraphics[width=0.85\linewidth]{8_19aspretzel.png}
\caption{The torus knot $T(3,4)$ ($8_{19}$ in Rolfsen's table \cite{Rol}) depicted as the pretzel knot $P(3,3,-2)$.}\label{8_19aspretzel}
\end{figure}
\begin{figure}[h]
$C'(D)=\left(
\begin{array}{rrrrrrrr}
2& 0 &0 &0 & -1 & -1 & 0&0 \\
-1 & -1 & 0 &0 &2 &0 &0 &0 \\
0& 0 & 0 &0 &2 &0 &-1&-1 \\
0&0 &0 &-1 &-1 &0 &2 &0 \\
0& 0 &0 & 2&0 &-1 &-1 &0 \\
2& 0 & -1 &-1 &0 & 0&0 &0 \\
-1 & 0 & 2 &0 & 0& 0&0 & -1 \\
0& -1 & -1 &0 &0 &0 &0 & 2 \\
\end{array}
\right)$
\caption{The crossing matrix $C'(P(3,3,-2))$. The rows satisfy the linear relation
$R_1 +R_2 - R_3 -R_4 -R_5 -R_6 -R_7 - R_8 = 0$.}
\label{CprimeMatrix819}
\end{figure}
\setlength{\tabcolsep}{20pt}
\renewcommand{\arraystretch}{1.5}
\begin{figure}[ht]
$\displaystyle C^{-1}(D)=\displaystyle \left(
\begin{array}{rrrrrrr}
-2 & 0 & -\frac{2}{3} & \frac{2}{3} & 2 & \frac{10}{3} & \frac{5}{3} \\
0 & -1 & \frac{4}{3} & \frac{2}{3} & 0 & -\frac{2}{3} & -\frac{1}{3} \\
-1 & 0 & -\frac{1}{3} & \frac{1}{3} & 1 & \frac{5}{3} & \frac{4}{3} \\
-3 & 0 & -1 & 1 & 3 & 4 & 2 \\
-1 & 0 & \frac{1}{3} & \frac{2}{3} & 1 & \frac{4}{3} & \frac{2}{3} \\
-4 & 0 & -\frac{5}{3} & \frac{2}{3} & 3 & \frac{16}{3} & \frac{8}{3} \\
-2 & 0 & -\frac{1}{3} & \frac{4}{3} & 2 & \frac{8}{3} & \frac{4}{3} \\
\end{array}
\right)$
\caption{$C^{-1}(D)$ corresponding to $T(3,4)$ with three integral columns.}
\label{CMatrix819}
\end{figure}
\end{example}
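The three $\epsilon$-pseudo colorings of Example \ref{Ex819} can be verified from the crossing matrix of Figure \ref{CprimeMatrix819}. The sketch below reads the crossings off the rows of $C'(D)$ ($2$ marks the over-arc, $-1$ the two under-arcs) and evaluates $2b-a-c$ for the integral columns $1$, $2$, and $5$ of $C^{-1}(D)$, with the base arc $a_8$ colored $0$:

```python
# Crossings of the P(3,3,-2) diagram of T(3,4) as (over, under, under),
# 0-indexed, read off the rows of C'(D).
crossings = [(0, 4, 5), (4, 0, 1), (4, 6, 7), (6, 3, 4),
             (3, 5, 6), (0, 2, 3), (2, 0, 7), (7, 1, 2)]

def crossing_values(colors):
    return [2 * colors[b] - colors[a] - colors[c] for (b, a, c) in crossings]

# Integral columns 1, 2, 5 of C^{-1}(D), with 0 appended for the base arc a_8.
col1 = [-2, 0, -1, -3, -1, -4, -2, 0]
col2 = [0, -1, 0, 0, 0, 0, 0, 0]
col5 = [2, 0, 1, 3, 1, 3, 2, 0]

assert sorted(crossing_values(col1)) == [0, 0, 0, 0, 0, 0, 1, 1]   # +1-pseudo
assert sorted(crossing_values(col2)) == [0, 0, 0, 0, 0, 0, 1, 1]   # +1-pseudo
assert sorted(crossing_values(col5)) == [-1, 0, 0, 0, 0, 0, 0, 1]  # -1-pseudo
```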
Non-alternating link diagrams always have $\epsilon$-pseudo colorings, as we describe in the following remark.
\begin{remark}\label{remarkpseudo}
Let $D$ be a non-alternating link diagram.
\begin{enumerate}
\item[(1)] Every integral column of $C^{-1}(D)$ leads to some $\epsilon$-pseudo coloring.
\item[(2)] $D$ has an $\epsilon$-pseudo coloring. This follows from the fact that every non-alternating diagram has a tunnel of length at least two. We can color one of the arcs of the tunnel by color $-1$ and all other arcs by color $0$; at each of the two crossings of the tunnel the relation evaluates to $2\cdot 0 - (-1) - 0 = +1$, so we obtain a $+1$-pseudo coloring. An example of such a coloring is shown on the right-hand side of Figure \ref{orderarcs}.
\end{enumerate}
\end{remark}
\subsection{Future directions}
The Kauffman-Harary conjecture was extended to the case of virtual knots by Mathew Williamson \cite{Wil} and proved by Zhiyun Cheng \cite{Che}. A natural question is to ask whether the conjecture in \cite{APS} holds for virtual links whose determinants are not prime. Another path of further research is to look for a natural generalization to non-alternating diagrams using a set theoretic Yang-Baxter operator or a general Yang-Baxter operator.
\
An interesting prospect is to approach the generalized Kauffman-Harary conjecture from the perspective of incompressible surfaces in the double branched cover $M^{(2)}_L$ of $S^3$ branched along $L$. This was outlined in \cite{APS} with the hope of proving the GKH conjecture. Now that the GKH conjecture is proved, we can proceed in the opposite direction and analyze incompressible surfaces in $M^{(2)}_L$.
\section*{Acknowledgements}
The first author acknowledges the support of Dr. Max Rössler, the Walter Haefner Foundation, and the ETH Zürich Foundation. The third author acknowledges the support of the National Science Foundation through Grant DMS-2212736. The fourth author was supported by the American Mathematical Society and the Simons Foundation through the AMS-Simons Travel Grant. The fifth author was partially supported by the Simons Collaboration Grant 637794.
| {
    "timestamp": "2023-01-09T02:13:55",
    "yymm": "2301",
    "arxiv_id": "2301.02645",
    "language": "en",
    "url": "https://arxiv.org/abs/2301.02645",
    "abstract": "For a reduced alternating diagram of a knot with a prime determinant $p,$ the Kauffman-Harary conjecture states that every non-trivial Fox $p$-coloring of the knot assigns different colors to its arcs. In this paper, we prove a generalization of the conjecture stated nineteen years ago by Asaeda, Przytycki, and Sikora: for every pair of distinct arcs in the reduced alternating diagram of a prime link with determinant $\\delta,$ there exists a Fox $\\delta$-coloring that distinguishes them.",
    "subjects": "Geometric Topology (math.GT)",
    "title": "The Generalized Kauffman-Harary Conjecture is True"
} |
https://arxiv.org/abs/2106.12190 | Closed-Form, Provable, and Robust PCA via Leverage Statistics and Innovation Search | The idea of Innovation Search, which was initially proposed for data clustering, was recently used for outlier detection. In the application of Innovation Search for outlier detection, the directions of innovation were utilized to measure the innovation of the data points. We study the Innovation Values computed by the Innovation Search algorithm under a quadratic cost function and it is proved that Innovation Values with the new cost function are equivalent to Leverage Scores. This interesting connection is utilized to establish several theoretical guarantees for a Leverage Score based robust PCA method and to design a new robust PCA method. The theoretical results include performance guarantees with different models for the distribution of outliers and the distribution of inliers. In addition, we demonstrate the robustness of the algorithms against the presence of noise. The numerical and theoretical studies indicate that while the presented approach is fast and closed-form, it can outperform most of the existing algorithms. | \section{Introduction}
Principal Component Analysis (PCA) has been extensively
used for linear dimensionality reduction.
While PCA is useful when the data has
low intrinsic dimension, its output is sensitive to outliers
in the sense that the subspace found by PCA can arbitrarily deviate from
the true underlying subspace even if a small portion of the data is corrupted. In addition, locating the outlying components is of great interest in many applications.
There are two different robust PCA problems corresponding to two different models for the data corruption. The first problem, known as low rank plus sparse matrix decomposition, assumes that a random subset of the elements of data are corrupted and the corrupted elements are not concentrated in any column/row of the data~\cite{lamport22,lamport1}. In the second problem, a subset of columns of the data \textcolor{black}{is affected by the data corruption}~\cite{lamport10,zhang2014novel,lerman2014fast,fischler1981random,li1985projection,choulakian2006l1,feng2012robust,mccoy2011two,hardt2013algorithms,zhang2016robust,you2017provable,markopoulos2014optimal}.
This paper focuses on the column-wise model, i.e., it is assumed that
data matrix $\mathbf{D} \in \mathbb{R}^{M_1 \times M_2}$ can be expressed as
$$
\mathbf{D} = ( [\mathbf{B} \hspace{.2cm} \mathbf{A}] ) \: \mathbf{T} \:,
$$
where $\mathbf{A} \in \mathbb{R}^{M_1 \times n_i}$, $\mathbf{B} \in \mathbb{R}^{M_1 \times n_o}$, $\mathbf{T}$ is an unknown permutation matrix, and $[\mathbf{B} \hspace{.2cm} \mathbf{A}]$ represents the concatenation of matrices $\mathbf{A}$ and $\mathbf{B}$.
The columns of $\mathbf{A}$ lie in an $r$-dimensional subspace $\mathcal{U}$. The columns of $\mathbf{B}$ do not lie entirely in $\mathcal{U}$, i.e., the $n_i$ columns of $\mathbf{A}$ are the inliers and the $n_o$ columns of $\mathbf{B}$ are the outliers.
The output of a robust PCA method is an estimate for $\mathcal{U}$.
If $\mathcal{U}$ is estimated accurately, the outliers can be located by projecting the data points on the complement of $\mathcal{U}$.
\subsection{Summary of Contributions}
Most of the existing robust PCA algorithms require a large number of iterations, each with high computational complexity, and most of them are not supported by thorough performance guarantees. We present closed-form, provable robust PCA methods that outperform the existing methods in most scenarios. The main contributions can be summarized as follows.
\\
$\bullet$ It is proved that the Innovation Value introduced in~\cite{rahmani2019outlier2} is equivalent to the Leverage Score if a quadratic cost function is used to find the optimal directions instead of the $\ell_1$-norm based cost function. This interesting connection is used to establish several theoretical guarantees.
\\
$\bullet$ Inspired by the explanation of Leverage Scores via Innovation Search, a new robust PCA method, which uses a symmetric measure of similarity, is presented. The presented closed-form methods outperform the existing methods in most scenarios while requiring only one singular value decomposition plus one matrix multiplication.
\\
$\bullet$ Theoretical performance guarantees under several different models for the distribution of the outliers and inliers are presented. Furthermore, the robustness to the presence of noise is studied and it is shown that the algorithm can provably distinguish the outliers when the data is noisy.
\subsection{Notation}
Given a matrix $\mathbf{A}$, $\| \mathbf{A} \|$ denotes its spectral norm, $\| \mathbf{A} \|_F$ denotes its Frobenius norm, and $\mathbf{A}^T$ is the transpose of $\mathbf{A}$. For a vector $\mathbf{a}$, $\| \mathbf{a} \|_p$ denotes its $\ell_p$-norm and $\mathbf{a}(i)$ its $i^{\text{th}}$ element. For a matrix $\mathbf{A}$, $\mathbf{a}_i$ denotes its $i^{\text{th}}$ column and $\| \mathbf{A} \|_{1,2} = \sum_{i} \| \mathbf{a}_i \|_2$.
$\mathbb{S}^{M_1 - 1}$ indicates the unit $\ell_2$-norm sphere in $\mathbb{R}^{M_1}$. Matrix $\mathbf{D} = \mathbf{U}^{'} \Sigma \mathbf{V}$ where $\mathbf{U}^{'} \in \mathbb{R}^{M_1 \times r_d}$ is the matrix of left singular vectors, $\Sigma \in \mathbb{R}^{r_d \times r_d}$ is a diagonal matrix whose diagonal values are equal to the non-zero singular values of $\mathbf{D}$, the rows of $\mathbf{V} \in \mathbb{R}^{r_d \times M_2}$ are equal to the right singular vectors, and $r_d$ is the rank of $\mathbf{D}$. The orthonormal matrix $\mathbf{U} \in \mathbb{R}^{M_1 \times r}$ is defined as a basis for $\mathcal{U}$.
Note that $\mathbf{U}^{'}$ is a basis for the column space of the entire data, $\mathbf{U}$ is a basis for the span of the inliers, and $r$ is the rank of $\mathbf{A}$. The subspace $\mathcal{U}^{\perp}$ is defined as the orthogonal complement of $\mathcal{U}$.
\section{Related Work}
\label{sec:related_work}
Robust PCA is a well-known problem and many approaches have been developed for it. In this section, we briefly review some of the previous work on robust PCA.
Some of the earliest approaches to robust PCA are based on
robust estimation of the data covariance matrix, such as the minimum covariance determinant, the minimum volume ellipsoid, and the Stahel-Donoho estimator~\cite{lamport47,feng2012robust,xu2010principal}.
However, these methods mostly compute a full SVD
or eigenvalue decomposition in each iteration and their performance greatly degrades when $\frac{n_i}{n_o} < 0.5$. In addition, they lack performance guarantees with structured outliers.
Another approach is to replace the Frobenius Norm in the cost function of PCA with $\ell_1$-norm~\cite{lamport18,lamport29}, as $\ell_1$-norm was shown to be robust to the presence of the outliers~\cite{decod,lamport29}. In order to leverage the column-wise structure of the outliers,
the authors of~\cite{lamport21} replaced the $\ell_1$-norm minimization problem used in~\cite{lamport29} with an $\ell_{1,2}$-norm minimization problem. In~\cite{lerman2015robustnn} and~\cite{zhang2014novel}, the optimization problem used in~\cite{lamport21} was relaxed to two different convex optimization problems. The authors of~\cite{lerman2015robustnn,zhang2014novel} provided sufficient conditions under which the optimal points of the convex optimization problems proposed in~\cite{lerman2015robustnn,zhang2014novel} are guaranteed to yield an exact basis for $\mathcal{U}$.
The approach presented in~\cite{tsakiris2015dual} focused on the scenario in which the data is predominantly unstructured outliers and the number of outliers is larger than $M_1$.
In~\cite{tsakiris2015dual}, it is essential to assume that the outliers are randomly distributed on $\mathbb{S}^{M_1 - 1}$ and the inliers are distributed randomly on the intersection of $\mathbb{S}^{M_1 - 1}$ and $\mathcal{U}$.
The outlier detection method proposed in~\cite{soltanolkotabi2012geometric} assumes that the outliers are randomly distributed on $\mathbb{S}^{M_1 - 1}$ and that no small subset of them is linearly dependent, which means that~\cite{soltanolkotabi2012geometric} is not able to detect linearly dependent outliers or outliers which are close to each other. Another approach is based on decomposing the given $\mathbf{D}$ into a low rank matrix and a column sparse matrix, where the column sparse matrix models the presence of the outliers~\cite{cherapanamjeri2017thresholding,lamport10}. However, this approach requires the number of outliers to be significantly smaller than the number of inliers, and the solver algorithms used to decompose the data need to compute an SVD of the data in each iteration.
The main shortcomings of the previous methods are sensitivity to structured outliers and the lack of comprehensive theoretical guarantees. The Coherence Pursuit method, proposed in~\cite{rahmani2017coherence}, was shown (theoretically and numerically) to be robust to different types of outliers. However, Coherence Pursuit can miss outliers which carry weak innovation with respect to the inliers. The iSearch algorithm, proposed in~\cite{rahmani2019outlier2}, was shown to notably outperform Coherence Pursuit in detecting outliers with weak innovation. Similar to Coherence Pursuit, the robustness of iSearch against different types of outliers was supported by several theoretical guarantees. However, in contrast to Coherence Pursuit, which is a closed-form method, iSearch needs to run an iterative and computationally expensive solver to find the directions of innovation.
\textit{This paper presents robust PCA methods which have the advantages of both algorithms (CoP and iSearch): while they are closed-form algorithms, their ability to distinguish outliers is on a par with that of iSearch.
In the rest of this section, Coherence Pursuit and iSearch are reviewed in more detail.}
\vspace{0.05in}
\noindent
\textbf{Coherence Pursuit (CoP):}
CoP~\cite{rahmani2017coherence} assigns a value, termed the Coherence Value, to each data point, and $\mathcal{U}$ is recovered using the span of the data points with the highest Coherence Values. The Coherence Value corresponding to data point $\mathbf{d}_i$ represents the similarity between $\mathbf{d}_i$ and the rest of the data points. CoP uses the inner-product to measure the similarity between data points, and it distinguishes the outliers based on the fact that an inlier bears more resemblance to the rest of the data than an outlier does.
\vspace{0.05in}
\noindent
\textbf{Innovation Search (iSearch)~\cite{rahmani2019outlier2}:}
Innovation Pursuit was initially proposed as a data clustering algorithm~\cite{rahmani2017subspacedi,rahmani2015innovation}. The authors of~\cite{rahmani2017subspacedi,rahmani2015innovation} showed that Innovation Pursuit can notably outperform the self-representation based clustering methods (e.g. Sparse Subspace Clustering~\cite{lamport7}) specifically when the clusters are close to each other.
Innovation Pursuit computes an optimal direction corresponding to each data point $\mathbf{d}_i$ which can be written as the optimal point of
\begin{eqnarray}
\underset{ \mathbf{c}}{\min} \: \: \| \mathbf{c}^T \mathbf{D} \|_1 \quad \text{subject to} \quad \mathbf{c}^T \mathbf{d}_i = 1 \:.
\label{eq:el1_innov}
\end{eqnarray}
If $\mathbf{C}^{*} \in \mathbb{R}^{M_1 \times M_2}$ contains all the optimal directions, Innovation Pursuit builds the adjacency matrix as $\mathbf{Q} + \mathbf{Q}^T$ where $\mathbf{Q} = |\mathbf{D}^T \mathbf{C}^{*}|$. In~\cite{rahmani2019outlier2}, it was shown that the optimal directions can also be utilized for outlier detection. The approach proposed in~\cite{rahmani2019outlier2}, termed iSearch, assigns an Innovation Value to each data point, and it distinguishes the outliers as the data points with the highest Innovation Values. The Innovation Value assigned to $\mathbf{d}_i$ is computed as
$$
\frac{1}{\| \mathbf{D}^T \mathbf{c}_i^{*} \|_1}
$$
where $\mathbf{c}_i^{*}$ is the optimal point of (\ref{eq:el1_innov}). iSearch needs to run an iterative solver to find the optimal directions. In contrast, the methods presented in this paper are closed-form and can be hundreds of times faster.
\vspace{0.05in}
\noindent
\textbf{Leverage Statistics:}
In regression, Leverage Scores are defined as the diagonal values of the hat matrix $\mathbf{X}^T (\mathbf{X} \mathbf{X}^T)^{-1} \mathbf{X} $ where $\mathbf{X} \in \mathbb{R}^{m_1 \times m_2}$ is the design matrix~\cite{everitt2002cambridge}. Assuming that the rank of $\mathbf{X}$ is equal to $m_1$, then the $i^{\text{th}}$ leverage score is equal to $\| {\mathbf{v}_x}_i \|_2^2$ where the rows of $\mathbf{V}_x \in \mathbb{R}^{m_1 \times m_2}$ are equal to the right singular vectors of $\mathbf{X}$ and ${\mathbf{v}_x}_i$ is the $i^{\text{th}}$ column of $\mathbf{V}_x$.
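The equivalence between the two expressions for the leverage scores can be checked numerically. The sketch below is only an illustration of the identity stated above, using an arbitrary random design matrix of full row rank.

```python
import numpy as np

# Check: for full-row-rank X, the i-th leverage score
# diag(X^T (X X^T)^{-1} X)_i equals ||v_i||_2^2, where v_i is the
# i-th column of the matrix whose rows are the right singular vectors of X.
rng = np.random.default_rng(1)
m1, m2 = 4, 12
X = rng.standard_normal((m1, m2))          # rank m1 almost surely

hat = X.T @ np.linalg.inv(X @ X.T) @ X     # hat matrix
leverage_hat = np.diag(hat)

_, _, Vt = np.linalg.svd(X, full_matrices=False)   # Vt is m1 x m2
leverage_svd = np.sum(Vt ** 2, axis=0)             # ||v_i||_2^2 per column
```

Both vectors agree to machine precision, since $\mathbf{X}^T (\mathbf{X}\mathbf{X}^T)^{-1} \mathbf{X} = \mathbf{V}_x^T \mathbf{V}_x$ when $\mathbf{X} = \mathbf{U}_x \Sigma_x \mathbf{V}_x$ has full row rank.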
Leverage has been typically used in the regression framework and there are few works which focused on using it for the robust PCA problem~\cite{mejia2017pca,naes1989leverage}. For instance,
\cite{mejia2017pca} utilized leverage to reject the outlying time points in a functional magnetic resonance imaging (fMRI) run.
However, there is still no thorough analysis or full understanding of Leverage in the robust PCA setting. We show that the Innovation Value introduced in~\cite{rahmani2019outlier2} is equivalent to the Leverage Score if a quadratic cost function is used to find the optimal directions. This interesting connection is used to establish several theoretical guarantees and to design a new robust PCA method.
\begin{algorithm}
\caption{Asymmetric Normalized Coherence Pursuit (ANCP) for Robust PCA }
{
\textbf{Input.} The inputs are the data matrix $\mathbf{D} \in \mathbb{R}^{M_1 \times M_2}$ and $r$ which is the dimension of the recovered subspace.
\smallbreak
\textbf{1.} Normalize the $\ell_2$-norm of the columns of $\mathbf{D}$, i.e., set $\mathbf{d}_i$ equal to $\mathbf{d}_i / \| \mathbf{d}_i \|_2$ for all $1 \le i \le M_2$.
\smallbreak
\textbf{2.} Rows of $\mathbf{V} \in \mathbb{R}^{r_d \times M_2 }$ are equal to the first $r_d$ right singular vectors of $\mathbf{D}$ where $r_d$ is the rank of $\mathbf{D}$.
\smallbreak
\textbf{3.} Define $\mathbf{x} \in \mathbb{R}^{M_2}$, vector of Normalized Coherence Values, as
\begin{eqnarray}
\mathbf{x}(i) = \frac{1}{\| \mathbf{v}_i \|_2^2} \: .
\label{eq:ANCP}
\end{eqnarray}
\smallbreak
\textbf{4. } Construct matrix $\mathbf{Y}$ from the columns of $\mathbf{D}$ corresponding to the largest elements of $\mathbf{x}$ such that they span an $r$-dimensional subspace.
\smallbreak
\textbf{ Output:} The column-space of $\mathbf{Y}$ is the identified subspace.
}
\end{algorithm}
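Algorithm 1 can be prototyped in a few lines. The following Python sketch is one possible reading of the steps above: the function name, the greedy rank check used for Step 4, and the tolerances are implementation choices of ours, not specified by the paper.

```python
import numpy as np

def ancp(D, r, tol=1e-8):
    """Sketch of Algorithm 1 (ANCP): rank columns by Normalized
    Coherence Values x(i) = 1 / ||v_i||_2^2 and grow Y greedily."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)       # Step 1
    _, s, Vt = np.linalg.svd(D, full_matrices=False)       # Step 2
    Vt = Vt[s > tol * s[0]]                                # keep the rank-r_d part
    x = 1.0 / np.sum(Vt ** 2, axis=0)                      # Step 3
    cols = []
    for i in np.argsort(-x):                               # Step 4: largest x first
        cols.append(i)
        if np.linalg.matrix_rank(D[:, cols], tol=1e-6) == r:
            break
    # Output: an orthonormal basis for the span of the selected columns
    return np.linalg.svd(D[:, cols], full_matrices=False)[0][:, :r]
```

On synthetic data with random inliers in a low-dimensional subspace and unstructured outliers, the columns with the largest Normalized Coherence Values are inliers, and the returned basis spans $\mathcal{U}$.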
\section{Robust PCA with Leverage and Innovation Search}
\label{sec:proposed}
Algorithms 1 and 2 present the proposed methods along with the definitions used. Algorithm 1 utilizes Leverage Scores to rank the data points, and the data points with the minimum Leverage Scores are used to build a basis for $\mathcal{U}$.
In this section, we show the underlying connection between Algorithm 1 and iSearch and explain the motivation for naming Algorithm 1 ``Asymmetric Normalized Coherence Pursuit''.
In addition, we explain the motivation behind the design of Algorithm 2 based on this connection.
Both algorithms are closed-form and faster than most of the existing methods. The computational complexity of ANCP is $\mathcal{O}( r_d M_1 M_2)$ and that of SNCP is $\mathcal{O}( r_d M_1 M_2 + r_d M_2^2)$.
The presented robust PCA algorithms use the data points corresponding to the largest Normalized Coherence Values to form the basis matrix $\mathbf{Y}$. If the inliers are distributed uniformly at random in $\mathcal{U}$, then $r$ data points corresponding to the $r$ largest Normalized Coherence Values span $\mathcal{U}$ with high probability.
However, in real data the inliers form clusters, so the algorithm should keep adding columns to $\mathbf{Y}$ until the columns of $\mathbf{Y}$ span an $r$-dimensional subspace, which requires checking the singular values of $\mathbf{Y}$ multiple times. Two techniques can be utilized to avoid these extra steps~\cite{rahmani2019outlier2,rahmani2017coherence22f}. The first is based on leveraging side information that is usually available about the population of the outliers. In most applications, an upper bound on $n_o/M_2$ is available because outliers are mostly associated with rare events. If we know that the outliers make up less than $y \%$ of the data, matrix $\mathbf{Y}$ can be constructed from the $(1 - y) \%$ of the data columns corresponding to the largest Normalized Coherence Values. The second technique is the adaptive column sampling method proposed in~\cite{rahmani2017coherence22f}, which uses subspace projection to avoid sampling redundant columns.
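The first technique can be stated compactly. The sketch below assumes an upper bound $y$ on the fraction of outliers; the function name and interface are illustrative only.

```python
import numpy as np

def top_fraction_columns(D, x, y):
    """Keep the (1 - y) fraction of the columns of D with the largest
    Normalized Coherence Values x, assuming at most a fraction y of the
    columns are outliers (y is side information, not estimated here)."""
    M2 = D.shape[1]
    k = int(np.ceil((1.0 - y) * M2))
    keep = np.argsort(-x)[:k]           # indices of the k largest values of x
    return D[:, keep]
```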
\begin{algorithm}
\caption{Symmetric Normalized Coherence Pursuit (SNCP) for Robust PCA}
\textbf{Input.} The inputs are the data matrix $\mathbf{D} \in \mathbb{R}^{M_1 \times M_2}$ and $r$ which is the dimension of the recovered subspace.
\smallbreak
\textbf{1.} Similar to Step 1 in Algorithm 1.
\smallbreak
\textbf{2.} Define $\mathbf{V} \in \mathbb{R}^{r_d \times M_2 }$ as in Algorithm 1.
\smallbreak
\textbf{3.} The vector of Normalized Coherence Values is defined as
$$\mathbf{x}(i) = \sum_{j=1}^{M_2} \frac{({\mathbf{v}_i}^T \mathbf{v}_{j})^2}{ \|\mathbf{v}_i\|_2^2 \|\mathbf{v}_j\|_2^2 }\:.$$
\smallbreak
\textbf{4. } Construct matrix $\mathbf{Y}$ as in Algorithm 1.
\smallbreak
\textbf{ Output:} The column-space of $\mathbf{Y}$ is the identified subspace.
\end{algorithm}
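Step 3 of Algorithm 2 can be computed with a single matrix product after normalizing the columns of $\mathbf{V}$, which is where the $\mathcal{O}(r_d M_2^2)$ term in the complexity of SNCP comes from. The following sketch is an illustrative implementation of the score computation only; the function name and tolerance are our choices.

```python
import numpy as np

def sncp_scores(D, tol=1e-8):
    """Normalized Coherence Values of Algorithm 2 (SNCP): the symmetric
    similarity (v_i^T v_j)^2 / (||v_i||^2 ||v_j||^2), summed over j."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)       # Step 1
    _, s, Vt = np.linalg.svd(D, full_matrices=False)       # Step 2
    Vt = Vt[s > tol * s[0]]                                # rows: right sing. vecs
    Vn = Vt / np.linalg.norm(Vt, axis=0, keepdims=True)    # normalize columns v_i
    return np.sum((Vn.T @ Vn) ** 2, axis=1)                # Step 3
```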
\subsection{Explaining Leverage Score for Robust PCA Using Innovation Search}\label{sec:expl}
Algorithm 1 ranks the data points based on the inverse of their leverage scores. The following lemma shows that Leverage Score is directly related to Innovation Value.
\begin{lemma}
Suppose rows of $\mathbf{V} \in \mathbb{R}^{r_d \times M_2 }$ are equal to the first $r_d$ right singular vectors of $\mathbf{D}$ where $r_d$ is the rank of $\mathbf{D}$. Define $\mathbf{c}_i^{*}$ as the optimal point of
\begin{eqnarray}
\underset{ \mathbf{c}}{\min} \: \: \| \mathbf{c}^T \mathbf{D} \|_2 \quad \text{subject to} \quad \mathbf{c}^T \mathbf{d}_i = 1 \:.
\label{eq:el_2_inno}
\end{eqnarray}
Then,
$
\| \mathbf{D}^T \mathbf{c}_i^{*} \|_2^2 = \frac{1}{\| \mathbf{v}_i \|_2^2} \:.
$
\label{lem:equi}
\end{lemma}
\noindent
Lemma~\ref{lem:equi} indicates that if a quadratic cost function is used to compute the optimal directions in iSearch, Innovation Values are equivalent to Leverage Scores. Accordingly, we can use the idea of Innovation Search to explain the Leverage Score based robust PCA method.
First suppose that $\mathbf{d}_i$ is an outlier, which means that $\mathbf{d}_i$ has a non-zero projection on $\mathcal{U}^{\perp}$. Since most of the data points are inliers, the optimization problem utilizes the projection of $\mathbf{d}_i$ on $\mathcal{U}^{\perp}$ and finds an optimal direction near $\mathcal{U}^{\perp}$ to minimize $\| \mathbf{A}^T \mathbf{c}_i\|_2^2$.
In sharp contrast, when $\mathbf{d}_i$ is an inlier, the linear constraint strongly discourages the optimal direction from being close to $\mathcal{U}^{\perp}$. Thus, $\| \mathbf{A}^T \mathbf{c}_i^{*}\|_2^2$ is notably larger when $\mathbf{d}_i$ is an inlier than when $\mathbf{d}_i$ is an outlier.
Accordingly, since $\frac{1}{\| \mathbf{v}_i \|_2^2} = \| \mathbf{D}^T \mathbf{c}_i^{*}\|_2^2 = \|\mathbf{A}^T \mathbf{c}_i^{*} \|_2^2 + \|\mathbf{B}^T \mathbf{c}_i^{*} \|_2^2$, the value $1/\| \mathbf{v}_i \|_2^2$ is much larger when $\mathbf{d}_i$ is an inlier than when it is an outlier, because $\|\mathbf{A}^T \mathbf{c}_i^* \|_2^2$ is much larger when $\mathbf{d}_i$ is an inlier.
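Lemma~\ref{lem:equi} can also be verified numerically. The constrained $\ell_2$ problem (\ref{eq:el_2_inno}) admits, for full-row-rank $\mathbf{D}$, the closed-form minimizer $\mathbf{c}_i^{*} = (\mathbf{D}\mathbf{D}^T)^{-1}\mathbf{d}_i / (\mathbf{d}_i^T(\mathbf{D}\mathbf{D}^T)^{-1}\mathbf{d}_i)$; this expression is a standard equality-constrained least-squares fact that we add here for illustration, not a formula stated in the paper.

```python
import numpy as np

# Numerical check of Lemma 1 on a random full-rank matrix:
#   min_c ||c^T D||_2  s.t.  c^T d_i = 1
# is solved by c* = (D D^T)^{-1} d_i / (d_i^T (D D^T)^{-1} d_i),
# and then ||D^T c*||_2^2 = 1 / ||v_i||_2^2.
rng = np.random.default_rng(3)
M1, M2, i = 6, 10, 4
D = rng.standard_normal((M1, M2))          # rank M1 almost surely

P = np.linalg.inv(D @ D.T)
d_i = D[:, i]
c_star = P @ d_i / (d_i @ P @ d_i)

_, _, Vt = np.linalg.svd(D, full_matrices=False)   # rows: right singular vectors
lhs = np.linalg.norm(D.T @ c_star) ** 2            # ||D^T c*||_2^2
rhs = 1.0 / np.linalg.norm(Vt[:, i]) ** 2          # 1 / ||v_i||_2^2
```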
The following lemma indicates that $\| \mathbf{D}^T \mathbf{c}_i^{*} \|_2^2$ can be written as a sum of similarities between the columns of $\mathbf{V} \in \mathbb{R}^{r_d \times M_2}$.
\begin{lemma}
Define $\mathbf{c}_i^{*}$ and $\mathbf{V}$ as in Lemma~\ref{lem:equi}. Then,
\begin{eqnarray}
\| \mathbf{D}^T \mathbf{c}_i^{*} \|_2^2 = \sum_{j=1}^{M_2} \left(\frac{ \mathbf{v}_i^T \mathbf{v}_j }{ \|\mathbf{v}_i\|_2 \|\mathbf{v}_i\|_2 }\right)^2 \:.
\label{eq:sum_asymmm}
\end{eqnarray}
\label{lm:secondlm}
\end{lemma}
Thus, Algorithm 1 is inherently similar to CoP, but Algorithm 1 utilizes the coherency between the columns of $\mathbf{V}$. In other words, the functionality of Algorithm 1 is similar to that of a CoP algorithm applied to a data matrix whose non-zero singular values are all normalized to 1. This is the motivation for naming the presented algorithms Normalized Coherence~Pursuit.
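A quick way to see (and check) Lemma~\ref{lm:secondlm}: because the rows of $\mathbf{V}$ are orthonormal ($\mathbf{V}\mathbf{V}^T = \mathbf{I}$), we have $\sum_j (\mathbf{v}_i^T\mathbf{v}_j)^2 = \|\mathbf{v}_i\|_2^2$, so the sum in (\ref{eq:sum_asymmm}) collapses to $1/\|\mathbf{v}_i\|_2^2$, matching Lemma~\ref{lem:equi}. A numerical sketch of this identity:

```python
import numpy as np

# Since V V^T = I (orthonormal rows), sum_j (v_i^T v_j)^2 = ||v_i||^2,
# so the asymmetric similarity sum equals 1 / ||v_i||^2 (Lemma 2).
rng = np.random.default_rng(4)
D = rng.standard_normal((5, 9))
_, _, Vt = np.linalg.svd(D, full_matrices=False)   # rows: right singular vectors

i = 2
v_i = Vt[:, i]
asym_sum = sum((v_i @ Vt[:, j]) ** 2 for j in range(Vt.shape[1])) \
    / np.linalg.norm(v_i) ** 4
```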
\subsection{Symmetric Normalized Coherence Pursuit}
The measure of similarity used in (\ref{eq:sum_asymmm}) is not symmetric. In other words, the similarity between $\mathbf{d}_i$ and $\mathbf{d}_j$, computed as $\left(\frac{ \mathbf{v}_i^T \mathbf{v}_j }{ \|\mathbf{v}_i\|_2 \|\mathbf{v}_i\|_2 }\right)^2$, is not equal to the similarity between $\mathbf{d}_j$ and $\mathbf{d}_i$, which can be written as $\left(\frac{ \mathbf{v}_i^T \mathbf{v}_j }{ \|\mathbf{v}_j\|_2 \|\mathbf{v}_j\|_2 }\right)^2$. Accordingly, we modify the measure of similarity used in (\ref{eq:sum_asymmm}) into a symmetric one. The Normalized Coherence Value corresponding to $\mathbf{d}_i$ using the symmetric measure of similarity is defined as
$
\mathbf{x}(i) = \sum_{j=1}^{M_2} \left(\frac{ \mathbf{v}_i^T \mathbf{v}_j }{ \|\mathbf{v}_i\|_2 \|\mathbf{v}_j\|_2 }\right)^2 \:.
$
Algorithm 2 uses the symmetric measure of similarity to compute the Normalized Coherence Values. The numerical experiments show that utilizing the symmetric measure can notably improve the performance in most cases.
\section{Theoretical Studies}
In this section, we present analytical performance guarantees for Normalized Coherence Pursuit under different models for the distribution of the outliers.
The connection between Innovation Values and Leverage Scores is utilized to analyze the ANCP method; we leave the analysis of SNCP to future work. In the following subsections, performance guarantees with unstructured outliers, linearly dependent outliers, noisy inliers, and clustered outliers are provided. Moreover, in contrast to most of the previous methods, whose guarantees are limited to randomly distributed inliers, Normalized Coherence Pursuit is supported with theoretical guarantees even when the inliers are clustered.
The following sections provide the theoretical results and each theorem is followed by a short discussion
which highlights the important aspects of that theorem.
To simplify the exposition and notation, in the presented results, it is assumed without loss of generality that $\mathbf{T}$ is equal to the identity matrix, i.e., $\mathbf{D} = [\mathbf{B} \hspace{.2cm} \mathbf{A}]$.
The subspace $\mathcal{U}$ is recovered using the span of the data points with the largest Normalized Coherence Values. A sufficient condition which guarantees exact recovery of $\mathcal{U}$ is that the minimum of the Normalized Coherence Values corresponding to the inliers is larger than the maximum of the Normalized Coherence Values corresponding to the outliers, i.e.,
\begin{eqnarray}
\begin{aligned}
&\min \left( \{ \mathbf{x}(i) \}_{i=n_o+1}^{M_2} \right) > \max \left( \{ \mathbf{x}(i) \}_{i=1}^{n_o} \right) \:.
\end{aligned}
\label{cond:main_cond}
\end{eqnarray}
\noindent
This is not a necessary condition, but it is easier to guarantee.
In addition, we define $$\psi = \max \left( \left\{ \frac{1}{\| \mathbf{b}_i^T \mathbf{R}\|_2^2} \right\}_{i=1}^{n_o} \right)$$ where $\mathbf{R}$ is an orthonormal basis for $\mathcal{U}^{\perp}$ and $\mathbf{b}_i$ is the $i^{\text{th}}$ column of $\mathbf{B}$. The parameter $\psi$ indicates how close the outliers are to $\mathcal{U}$.
\vspace{0.05in}
\noindent
\textbf{Proof Strategy:} Although the presented theorems consider different scenarios for the distribution of the inliers/outliers, and different techniques are required to guarantee (\ref{cond:main_cond}) in each case, a similar strategy is used in the proofs of all the results. In contrast to CoP, whose analysis bounded $\{ |\mathbf{d}_i^T \mathbf{d}_j|\}_{i,j}$ based on the distribution of the data, it is not straightforward to directly bound $\{ |\mathbf{v}_i^T \mathbf{v}_j|\}_{i,j}$. In addition, in contrast to iSearch, which leveraged the fact that the optimal direction of (\ref{eq:el1_innov}) is mostly orthogonal to $\mathcal{U}$ when $\mathbf{d}_i$ is an outlier, the optimal direction obtained by (\ref{eq:el_2_inno}) is not necessarily orthogonal to $\mathcal{U}$ when $\mathbf{d}_i$ is an outlier. In the proofs of the presented results, we utilize the geometry of the problem, in which the optimal direction of (\ref{eq:el_2_inno}) is not close to $\mathcal{U}$ when $\mathbf{d}_i$ is an outlier. Specifically, corresponding to outlier $\mathbf{d}_i$, we define $\mathbf{d}_i^{\perp} = \frac{\mathbf{R} \mathbf{R}^T \mathbf{d}_i}{\| \mathbf{d}_i^T \mathbf{R} \|_2^2}$, which by construction satisfies $\mathbf{d}_i^T \mathbf{d}_i^{\perp} = 1$, i.e., $\mathbf{d}_i^{\perp}$ is feasible for (\ref{eq:el_2_inno}). Since $\mathbf{c}_i^{*}$ is the optimal point of (\ref{eq:el_2_inno}), Lemma~\ref{lem:equi} yields $\frac{1}{\| \mathbf{v}_i \|_2^2} \le \| \mathbf{D}^T \mathbf{d}_i^{\perp} \|_2^2$. We utilize this inequality to derive sufficient conditions which guarantee that (\ref{cond:main_cond}) holds with high probability. The detailed proofs of all the results are provided in the appendix.
\subsection{Outliers Distributed on $\mathbb{S}^{M_1 -1}$}
\label{sec:uns}
In most of the previous works on the robust PCA problem, the performance of the outlier detection method is analyzed under the assumption that the outliers are randomly distributed on $\mathbb{S}^{M_1 - 1}$. This is a simple scenario because the outliers are unstructured and, provided that $r$ is sufficiently small, the projection of each outlier on $\mathcal{U}$ is weak with high probability: when the outliers are distributed as in Assumption~\ref{assum_DistUni}, $\mathbb{E} [\| \mathbf{U}^T \mathbf{b} \|_2^2] = \frac{r}{M_1}$, where $\mathbf{b}$ is an outlying data point. The following assumption specifies the presumed model for the distribution of the inliers/outliers.
\begin{assumption}
The columns of $\mathbf{A}$ are drawn uniformly at random from $\mathcal{U} \cap \mathbb{S}^{M_1 -1}$ and the columns of $\mathbf{B}$ are drawn uniformly at random from $\mathbb{S}^{M_1 - 1}$.
\label{assum_DistUni}
\end{assumption}
The following theorem provides the sufficient conditions to guarantee the exact recovery of $\mathcal{U}$.
\begin{theorem}
\label{theo:randomrandom}
Suppose $\mathbf{D}$ follows Assumption 1. If $\mathbf{x}$ is defined as in (\ref{eq:ANCP}) and
\begin{eqnarray}
\begin{aligned}
& \frac{n_i}{r} - \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) > \frac{(\psi - 1)n_o}{M_1} + \\
& \quad \max \left( \frac{8}{3} \log \frac{2M_1}{\delta} , \sqrt{16 \frac{n_o}{M_1} \log \frac{2 M_1}{\delta}} \right) \:,
\end{aligned}
\label{eq:suff_1}
\end{eqnarray}
then (\ref{cond:main_cond}) holds and $\mathcal{U}$ is recovered exactly with probability at least $1 - 3 \delta$.
\label{theo:random}
\end{theorem}
Since the outliers are randomly distributed, the expected value of $\| \mathbf{b}_i^T \mathbf{R} \|_2^2$ is equal to $\frac{M_1- r}{M_1}$ which is nearly equal to~1 when $r/M_1$ is small~\cite{rahmani2017coherence,park2014greedy}.
Thus, Theorem~\ref{theo:random} roughly indicates that if ${n_i}/{r}$ is sufficiently larger than $n_o/M_1$, Normalized Coherence Values can successfully distinguish the outliers. It is important to note that $n_i$ is scaled with $r$ while $n_o$ is scaled with $M_1$. This means that if $r$ is sufficiently small and the outliers are unstructured, $\mathcal{U}$ can be recovered exactly even if $n_o$ is much larger than $n_i$.
In addition, one can observe that when the outliers are unstructured, the requirements of Normalized Coherence Pursuit are similar to those of CoP~\cite{rahmani2017coherence}. In the next subsection, we observe a clear difference between their requirements when the outliers are close to $\mathcal{U}$.
\subsection{Outliers in an Outlying Subspace}
\label{sec:outlier_sub}
Although Assumption~\ref{assum_DistUni} is a popular data model in the robust PCA literature, it is not realistic in practical scenarios. In practice, outliers can be structured and are not completely independent of each other, as assumed in Assumption~\ref{assum_DistUni}.
For instance, in anomalous event detection the outlying video frames are highly correlated, and in identifying misclassified data points the outliers can belong to the same cluster~\cite{gitlin2018improving}. In this section, we study robustness against linearly dependent outliers. The following assumption specifies the presumed model for the outliers.
\begin{assumption}
Define a
subspace $\mathcal{U}_o$ with dimension $r_o$ such that $\mathcal{U}_o \not\subseteq \mathcal{U}$ and $\mathcal{U} \not\subseteq \mathcal{U}_o$.
The columns of $\mathbf{A}$ are randomly distributed on $\mathcal{U} \cap \mathbb{S}^{M_1 -1}$ and the columns of $\mathbf{B}$ are randomly distributed on $\mathcal{U}_o \cap \mathbb{S}^{M_1 - 1}$.
\label{asm:out}
\end{assumption}
The following theorem provides the sufficient condition to guarantee that (\ref{cond:main_cond}) holds.
\begin{theorem}
\label{theo:Linearly_dependant}
Suppose $\mathbf{D}$ follows Assumption~\ref{asm:out}. If $\mathbf{x}$ is defined as in (\ref{eq:ANCP}) and
\begin{eqnarray*}
\begin{aligned}
&\frac{n_i}{r} - \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) > \\
& \| \mathbf{U}_{o}^T \mathbf{U}^{\perp} \|\left( \frac{\psi n_o}{r_o} + \psi \max \left( \frac{4}{3} \log \frac{2r_o}{\delta} , \sqrt{\frac{n_o}{r_o} \log \frac{2 r_o}{\delta}} \right)\right)
\end{aligned}
\end{eqnarray*}
then (\ref{cond:main_cond}) is satisfied and $\mathcal{U}$ is recovered exactly with probability at least $1 - 2 \delta$.
\end{theorem}
Theorem~\ref{theo:Linearly_dependant} roughly states that if $n_i/r$ is sufficiently larger than $n_o/r_o$, the exact recovery is guaranteed with high probability. If $r_o$ is comparable to $r$, then the number of inliers should be sufficiently larger than the number of outliers. This confirms our intuition about the outliers because if $r_o$ is comparable to $r$ and $n_o$ is also large, we cannot label the columns of $\mathbf{B}$ as outliers.
It is informative to compare the requirements of Normalized Coherence Pursuit with that of CoP. The following theorem provides the sufficient conditions to guarantee that CoP successfully distinguishes the outliers.
\begin{theorem}
\label{theo:Linearly_dependant_CP}
Suppose $\mathbf{D}$ follows Assumption~\ref{asm:out}. If
\begin{eqnarray*}
\begin{aligned}
& \frac{n_i}{r} - \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) > \\
& \|\mathbf{U}^T \mathbf{U}_o \|^2 \left( \frac{n_i}{r} + \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) \right) + \\
& \frac{n_o}{r_o} + \max \left( \frac{4}{3} \log \frac{2r_o}{\delta} , \sqrt{4 \frac{n_o}{r_o} \log \frac{2 r_o}{\delta}} \right) \: ,
\end{aligned}
\label{eq:cop_reqq}
\end{eqnarray*}
then the CoP method proposed in~\cite{rahmani2017coherence} recovers $\mathcal{U}$ exactly with probability at least $1- 3\delta$.
\end{theorem}
One can observe that the requirement of Theorem~\ref{theo:Linearly_dependant_CP} is much stronger than that of Theorem~\ref{theo:Linearly_dependant} because $n_i/r$ appears on the right hand side of the sufficient condition of Theorem~\ref{theo:Linearly_dependant_CP}.
Theorem~\ref{theo:Linearly_dependant_CP} predicts that when $\mathcal{U}_o$ is close to $\mathcal{U}$, the CoP algorithm is more likely to fail. This is a correct prediction because when $\mathcal{U}$ and $\mathcal{U}_o$ are close, the inliers and the outliers are close to each other and their inner-product values are large.
In addition, by comparing the sufficient conditions of Normalized Coherence Pursuit with those of iSearch~\cite{rahmani2019outlier2} with linearly dependent outliers, we can observe that the nature of the sufficient conditions is similar. In the presented experiments, it is shown that Normalized Coherence Pursuit is on a par with iSearch in identifying outliers with weak innovation while it is a closed-form algorithm and runs much faster.
\subsection{Noisy Inliers}
\label{sec:nooooise}
Although exact recovery of $\mathcal{U}$ is not feasible when the inliers are noisy, the Normalized Coherence Values can still distinguish the outliers even in the presence of strong noise.
In this section, we present a theorem which guarantees that (\ref{cond:main_cond}) holds with high probability if $n_i/r$ and the signal-to-noise ratio (SNR) are sufficiently large. The following assumption specifies the presumed model.
\begin{assumption}
The matrix $\mathbf{D}$ can be expressed as
$$
\mathbf{D} = [\mathbf{B} \hspace{.2cm} \frac{1}{\sqrt{1+\sigma_n^2}}(\mathbf{A}+\mathbf{E})] \: \mathbf{T} \:.
$$
The matrix $\mathbf{E} \in \mathbb{R}^{M_1 \times n_i}$ represents the presence of noise and it can be written as $\mathbf{E} = \sigma_n \mathbf{N}$ where the columns of $\mathbf{N} \in \mathbb{R}^{M_1 \times n_i}$ are drawn uniformly at random from $\mathbb{S}^{M_1 - 1}$ and $\sigma_n$ is a positive number which controls the power of the added noise.
\label{asm:noiss}
\end{assumption}
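The noisy data model of Assumption~\ref{asm:noiss} can be sketched numerically as follows. This is a minimal sketch: the sizes, the use of the earlier inlier/outlier distributions, and the omission of the column permutation $\mathbf{T}$ are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M1, r, n_i, n_o, sigma_n = 200, 5, 100, 100, 0.2  # assumed sizes

# Inlier subspace U and unit-norm inliers A lying in U
U, _ = np.linalg.qr(rng.standard_normal((M1, r)))
A = U @ rng.standard_normal((r, n_i))
A /= np.linalg.norm(A, axis=0)

# Outliers B drawn uniformly at random from the unit sphere
B = rng.standard_normal((M1, n_o))
B /= np.linalg.norm(B, axis=0)

# Noise E = sigma_n * N, columns of N uniform on the unit sphere
N = rng.standard_normal((M1, n_i))
N /= np.linalg.norm(N, axis=0)
E = sigma_n * N

# D = [B, (A + E)/sqrt(1 + sigma_n^2)]  (permutation T omitted)
D = np.hstack([B, (A + E) / np.sqrt(1 + sigma_n**2)])
```

With this construction the noisy inlier columns have (nearly) unit norm, matching the normalization factor $1/\sqrt{1+\sigma_n^2}$ in the assumption.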
Before we state the theorem, let us define vectors $\{ \mathbf{t}_i \}_{i=1}^{M_2}$ where $\mathbf{t}_i = \Sigma \mathbf{v}_i$ and the diagonal matrix $\Sigma$ contains the non-zero singular values of $\mathbf{D}$. Note that $\mathbf{d}_i = \mathbf{U}^{'} \mathbf{t}_i$ and $\| \mathbf{t}_i \|_2 = \| \mathbf{d}_i \|_2$.
In addition, define
$
t_{\min} = \min_i \left( \left\{ \frac{\| \Sigma^{-2} \mathbf{t}_i \|_2}{\mathbf{t}_i^T \Sigma^{-2} \mathbf{t}_i} \right\}_{i=n_o + 1}^{M_2} \right)$ and $ t_{\max} = \max_i \left( \left\{ \frac{\| \Sigma^{-2} \mathbf{t}_i \|_2}{\mathbf{t}_i^T \Sigma^{-2} \mathbf{t}_i} \right\}_{i=n_o + 1}^{M_2} \right) \:.
$
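A sketch of how $t_{\min}$ and $t_{\max}$ could be computed from a data matrix. The placeholder random $\mathbf{D}$ and the value of $n_o$ are assumptions; the indices follow the range $i = n_o+1, \ldots, M_2$ used in the definitions above.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((50, 80))   # placeholder data matrix
n_o = 20                            # assumed number of outlier columns

# Thin SVD: D = U' diag(s) V^T, with t_i = diag(s) v_i (column i of V^T)
Up, s, Vt = np.linalg.svd(D, full_matrices=False)
Sigma_inv2 = np.diag(s**-2.0)
T = np.diag(s) @ Vt                 # the t_i are the columns of T

ratios = []
for i in range(n_o, T.shape[1]):    # i = n_o + 1, ..., M2 (0-based slice)
    t = T[:, i]
    ratios.append(np.linalg.norm(Sigma_inv2 @ t) / (t @ Sigma_inv2 @ t))
t_min, t_max = min(ratios), max(ratios)
```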
\begin{theorem}
\label{theo:noise}
Suppose $\mathbf{A}$ and $\mathbf{B}$ follow Assumption~\ref{assum_DistUni} and $\mathbf{D}$ is formed according to Assumption~\ref{asm:noiss}.
If $\mathbf{x}$ is defined as in (\ref{eq:ANCP}) and
\begin{eqnarray*}
\begin{aligned}
& \frac{(\sqrt{1 + \sigma_n^2} - t_{\max} \sigma_n)^2}{1 + \sigma_n^2} \\
& \Bigg( \frac{n_i}{r} - \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) \Bigg) > 2 \sigma_n n_i t_{\max}^2 + \\
& \psi \left( \frac{n_o}{M_1} + \max \left( \frac{4}{3} \log \frac{2 M_1}{\delta} , \sqrt{4 \frac{n_o}{M_1} \log \frac{2 M_1}{\delta}} \right) \right) + \\
& \frac{\sigma_n^2 \psi}{1+\sigma_n^2 } \Bigg( \frac{n_i}{M_1} + \max \left( \frac{4}{3} \log \frac{2 M_1}{\delta} , \sqrt{4 \frac{n_i}{M_1} \log \frac{2 M_1}{\delta}} \right) \Bigg)
\end{aligned}
\end{eqnarray*}
then (\ref{cond:main_cond}) holds with probability at least $1 - 3 \delta$.
\end{theorem}
In this section, we considered unstructured outliers, whose number can be much larger than $M_1$ and $n_i$. Consider the challenging scenario in which the unstructured outliers dominate the data; then the singular values of $\mathbf{D}$ are all close to each other, which indicates that $t_{\min}$ and $t_{\max}$ are close to one. Thus, the sufficient condition of Theorem~\ref{theo:noise} roughly states that $\frac{(1- \sigma_n)^2}{1+ \sigma_n^2} \frac{n_i}{r}$ should be sufficiently larger than $n_i \sigma_n + \frac{n_o}{M_1}$. In practice, the algorithm works better than the sufficient condition implies because the proof considers worst-case scenarios.
\subsection{Clustered Outliers}
In this section, we consider a different structure for the outliers. It is assumed that the outliers form a cluster outside of the span of the inliers. Structured outliers are mostly associated with important rare events such as malignant tissues~\cite{karrila2011comparison} or web attacks~\cite{kruegel2003anomaly}. The following assumption specifies the presumed model for $\mathbf{B}$.
\begin{assumption}
Each column of $\mathbf{B}$ is formed as $\mathbf{b}_i = \frac{1}{\sqrt{1 + \eta^2}} ( \mathbf{q} + \eta \mathbf{f}_i)$. The unit $\ell_2$-norm vector $\mathbf{q}$ does not lie in $\mathcal{U}$, $\{\mathbf{f}_i \}_{i=1}^{n_o}$ are drawn uniformly at
random from $\mathbb{S}^{M_1 - 1}$, and $\eta$ is a positive number.
\label{asm:clus}
\end{assumption}
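The clustered-outlier model of Assumption~\ref{asm:clus} can be sketched as follows; the ambient dimension, $n_o$, and $\eta$ are placeholder values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M1, n_o, eta = 100, 40, 0.3   # assumed sizes; small eta -> tight cluster

# Unit-norm cluster center q (generically not in the inlier subspace)
q = rng.standard_normal(M1)
q /= np.linalg.norm(q)

# f_i drawn uniformly at random from the unit sphere S^{M1-1}
F = rng.standard_normal((M1, n_o))
F /= np.linalg.norm(F, axis=0)

# b_i = (q + eta * f_i) / sqrt(1 + eta^2), as in the assumption
B = (q[:, None] + eta * F) / np.sqrt(1 + eta**2)
```

For small $\eta$ every column of $\mathbf{B}$ is strongly correlated with $\mathbf{q}$, i.e., the outliers form a tight cluster around the center.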
\noindent
In Assumption~\ref{asm:clus}, the outliers form a cluster around vector $\mathbf{q}$ which does not lie in $\mathcal{U}$ and $\eta$ determines how close they are to each other. The following theorem provides the sufficient conditions to guarantee that Normalized Coherence Values distinguish the cluster of outliers.
\begin{theorem}
\label{theo:clustered_outliers}
Suppose that the distribution of inliers follows Assumption~\ref{assum_DistUni} and the distribution of outliers follows Assumption~\ref{asm:clus}. If $\mathbf{x}$ is defined as in (\ref{eq:ANCP}) and
\begin{eqnarray*}
\begin{aligned}
& \frac{n_i}{r} - \max \left( \frac{4}{3} \log \frac{2r}{\delta} , \sqrt{4 \frac{n_i}{r} \log \frac{2 r}{\delta}} \right) >
n_o \frac{\psi \| \mathbf{q}^T \mathbf{U}^{\perp} \|_2^2}{1 + \eta^2} + \\
& \frac{\psi \eta^2 n_o}{(1 + \eta^2 )M_1} + \frac{\eta^2 \psi}{1 + \eta^2} \max \left( \frac{4}{3} \log \frac{2M_1}{\delta} , \sqrt{\frac{n_o}{M_1} \log \frac{2 M_1}{\delta}} \right) + \\
&\frac{ \eta \sqrt{\psi}}{1+ \eta^2} \| \mathbf{q}^T \mathbf{U}^{\perp} \|_2 \left( \frac{n_o}{\sqrt{M_1}} + 2\sqrt{n_o} + \sqrt{\frac{2 n_o \log \frac{1}{\delta}}{M_1 -1 }} \right) \:,
\end{aligned}
\end{eqnarray*}
then (\ref{cond:main_cond}) is satisfied and $\mathcal{U}$ is recovered exactly with probability at least $1 - 4 \delta$.
\end{theorem}
In sharp contrast to Theorem~\ref{theo:randomrandom}, $n_o$ is not scaled with $M_1$. This means that when $\eta$ is small (the outliers are close to each other), $n_i/r$ should be sufficiently larger than $n_o$. When $\eta$ goes to infinity, the distribution of outliers converges to the distribution of outliers in Assumption~\ref{assum_DistUni} and one can observe that the sufficient condition in Theorem~\ref{theo:clustered_outliers} converges to the sufficient condition in Theorem~\ref{theo:randomrandom}.
\subsection{Outlier Detection in a Union of Subspaces}
In practice, the inliers are rarely distributed randomly in a subspace; they typically exhibit structure. In this section, we assume that the inliers are clustered. It is assumed that the columns of $\mathbf{A}$ form $m$ clusters and that the data points in each cluster span a $d$-dimensional subspace. The following assumption provides the details.
\begin{assumption}
The matrix of inliers can be written as $\mathbf{A} = [\mathbf{A}_1 \: ... \: \mathbf{A}_m] \mathbf{T}_A$ where $\mathbf{A}_k \in \mathbb{R}^{M_1 \times {n_i}_k}$, $\sum_{k=1}^{m} {n_i}_k = n_i$, and $\mathbf{T}_A$ is an arbitrary permutation matrix.
The columns of $\mathbf{A}_k$ are drawn uniformly at random from the
intersection of subspace $\mathcal{U}_k$ and $\mathbb{S}^{M_1-1}$, where $\mathcal{U}_k$ is a $d$-dimensional subspace. In other words, the columns of $\mathbf{A}$ lie in a union of subspaces $\{ \mathcal{U}_k \}_{k=1}^m$ and $\left(\mathcal{U}_1 \oplus \: ... \oplus \mathcal{U}_m \right)= \mathcal{U}$ where $\oplus$ denotes the direct sum operator.
\label{asm:union_of_sunb}
\end{assumption}
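Assumption~\ref{asm:union_of_sunb} can be sketched numerically as follows. The cluster sizes and dimensions are placeholder values, and the permutation $\mathbf{T}_A$ is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
M1, m, d, nk = 100, 3, 4, 50     # assumed: 3 clusters, each rank-4, 50 points

blocks = []
for _ in range(m):
    # Orthonormal basis of a random d-dimensional subspace U_k
    Uk, _ = np.linalg.qr(rng.standard_normal((M1, d)))
    # Points drawn uniformly from U_k intersected with the unit sphere
    Ak = Uk @ rng.standard_normal((d, nk))
    Ak /= np.linalg.norm(Ak, axis=0)
    blocks.append(Ak)

A = np.hstack(blocks)            # inliers lie in U = U_1 + ... + U_m
```

Generically the $m$ random subspaces are independent, so the inliers span an $(md)$-dimensional subspace $\mathcal{U}$.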
\noindent
The following theorem provides the sufficient conditions to guarantee that the computed Normalized Coherence Values satisfy (\ref{cond:main_cond}) with high probability.
\begin{theorem}
\label{theo:Linearly_dependant_inliers}
Suppose that the distribution of the inliers follows Assumption~\ref{asm:union_of_sunb} and the distribution of outliers follows Assumption~\ref{assum_DistUni}. If $\mathbf{x}$ is defined as in (\ref{eq:ANCP}) and
\begin{eqnarray*}
\begin{aligned}
\vartheta \mathcal{A} > (\psi - 1)\frac{n_o}{M_1} + 2 \max \left( \frac{4}{3} \log \frac{2M_1}{\delta} , \sqrt{4 \frac{n_o}{M_1} \log \frac{2 M_1}{\delta}} \right)
\end{aligned}
\end{eqnarray*}
where $ \vartheta = \underset{\mathbf{a} \in \mathcal{U} \atop \| \mathbf{a} \| = 1}{\inf} \sum_{k=1}^{m} \| \mathbf{a}^T \mathbf{U}_k \|_2^2$ and
$
\mathcal{A} = \min_i \Bigg\{ \frac{ {n_i}_k }{d} - \max \left( \frac{4}{3} \log \frac{2 m d}{\delta} , \sqrt{4 \frac{{n_i}_k}{d} \log \frac{2 m d}{\delta}} \right) \Bigg\}_{i=1}^m,
$
then (\ref{cond:main_cond}) is satisfied and $\mathcal{U}$ is recovered exactly with probability at least $1 - 3 \delta$.
\end{theorem}
Theorem~\ref{theo:Linearly_dependant_inliers} reveals an interesting property of the Normalized Coherence Values.
By the definition of $\mathcal{A}$, it is roughly equal to ${\min \{{n_i}_k \}_{k=1}^m}/{d}$.
Thus, Theorem~\ref{theo:Linearly_dependant_inliers} states that when the inliers are clustered, the population of the smallest cluster is the key factor.
This property matches our intuition about outlier detection: if a cluster contains only a few data points, those points could themselves be labeled as outliers, similar to the outliers modeled in Assumption~\ref{asm:out}.
The parameter $\vartheta = \underset{\mathbf{a} \in \mathcal{U} \atop \| \mathbf{a} \| = 1}{\inf} \sum_{k=1}^{m} \| \mathbf{a}^T \mathbf{U}_k \|_2^2$ shows how well the inliers are diffused in $\mathcal{U}$. Clearly, if the inliers are present in all or most of the directions inside $\mathcal{U}$,
a robust PCA algorithm is more likely to recover $\mathcal{U}$ correctly. However, the presented methods do not require the inliers to occupy all the directions in $\mathcal{U}$; the parameter
$\vartheta$ appears in the sufficient conditions because the theorem guarantees the performance in worst-case scenarios.
\section{Numerical Experiments}
In this section, SNCP and ANCP are compared with the existing robust PCA approaches, including FMS~\cite{lerman2014fast}, GMS~\cite{zhang2014novel}, CoP~\cite{rahmani2017coherence}, iSearch~\cite{rahmani2019outlier2}, and R1-PCA~\cite{lamport21},
and their robustness against different types of outliers is examined with both real and synthetic data.
\begin{remark}
In the presented theoretical results, it was assumed that $r_d$ is known. When the data is noisy, one can utilize any rank estimation algorithm and the performance of the algorithms is not sensitive to the chosen rank as long as $r_d$ is sufficiently larger than $r$. In the presented experiments, we set $r_d$ equal to the number of singular values of $\mathbf{D}$ which are greater than $s_1/20$ where $s_1$ is the first singular value of $\mathbf{D}$.
\end{remark}
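The rank-selection rule in the remark above can be sketched as follows; the test matrix is an assumed rank-5-plus-noise example.

```python
import numpy as np

def estimate_rank(D, thresh=20.0):
    """Rank estimate used in the experiments: count the singular values
    of D that exceed s_1 / thresh, where s_1 is the largest one."""
    s = np.linalg.svd(D, compute_uv=False)
    return int(np.sum(s > s[0] / thresh))

# Example: a rank-5 matrix plus mild noise
rng = np.random.default_rng(4)
D = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300)) \
    + 0.01 * rng.standard_normal((200, 300))
r_d = estimate_rank(D)
```

Here the noise singular values are far below $s_1/20$, so the estimate recovers the underlying rank.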
\subsection{Comparing Different Scores}
In this experiment,
we simulate a scenario in which the outliers are close to $\mathcal{U}$.
Suppose $r=8$, $n_i = 180$, and $n_o = 40$. The outliers are generated as $[\mathbf{U} \:\: \mathbf{H}]\: \mathbf{G}$, where $\mathbf{H} \in \mathbb{R}^{M_1 \times 4}$ spans a random 4-dimensional subspace and the elements of $\mathbf{G} \in \mathbb{R}^{12 \times 40} $ are sampled independently from $\mathcal{N}(0,1)$. Fig.~\ref{fig:compare_scores} shows the Innovation Values, Coherence Values, and Normalized Coherence Values computed by ANCP and SNCP (the first 40 columns are outliers). In this figure, we show the inverse of the Coherence Values and the inverse of the Normalized Coherence Values to make them comparable to the Innovation Values. One can observe that the scores computed by iSearch, ANCP, and SNCP can be reliably used to form a basis for $\mathcal{U}$, but the scores computed by CoP do not distinguish the outliers well enough.
As predicted by Theorem~\ref{theo:Linearly_dependant_CP}, CoP can fail to distinguish the outliers when they are close to $\mathcal{U}$. The main reason is that CoP measures the similarity between the data points via a simple inner product, while iSearch and ANCP utilize the directions of innovation to measure the similarity between a data point and the rest of the data. The functionality of SNCP is similar to that of ANCP, but it uses a symmetric measure of similarity, and the plots show that it separates the outliers more clearly.
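The outlier model of this experiment can be sketched as follows. The value of $M_1$ and the normalization of the outlier columns to unit norm are assumptions not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(5)
M1, r, n_o = 50, 8, 40   # M1 is assumed; r and n_o follow the experiment

# Inlier subspace U and a random 4-dimensional subspace H
U, _ = np.linalg.qr(rng.standard_normal((M1, r)))
H, _ = np.linalg.qr(rng.standard_normal((M1, 4)))

# Outliers close to U: each column mixes directions of U and H
G = rng.standard_normal((r + 4, n_o))   # i.e., a 12 x 40 Gaussian matrix
B = np.hstack([U, H]) @ G
B /= np.linalg.norm(B, axis=0)          # normalization to the sphere (assumed)
```

Because 8 of the 12 mixed directions lie inside $\mathcal{U}$, such outliers have large inner products with the inliers, which is what makes them hard for CoP.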
\begin{figure}[t]
\begin{center}
\mbox{
\includegraphics[width=1.78in]{figs/vs1.eps}\hspace{-0.1in}
\includegraphics[width=1.78in]{figs/vs2.eps}}
\mbox{
\includegraphics[width=1.78in]{figs/vs3.eps}\hspace{-0.1in}
\includegraphics[width=1.78in]{figs/vs4.eps}
}
\end{center}
\vspace{-0.15in}
\caption{The plots show the inverse of Normalized Coherence Values computed by SNCP and ANCP, the Innovation Values, and the inverse of Coherence Values. In this experiment, the first 40 columns are the outliers. }
\label{fig:compare_scores}
\end{figure}
\subsection{Noisy Data}
In this section, we examine the robustness of the robust PCA methods against noise. Suppose $\mathbf{B}$ follows Assumption~\ref{asm:out}, $r=5$, $r_o=10$, $M_1 = 200$, $n_i=100$, and $n_o=100$
where $\mathcal{U}_o$ is a random 10-dimensional subspace. We consider two models for the distribution of the inliers. The first model is the random distribution on $\mathcal{U} \cap \mathbb{S}^{M_1-1}$ described in Assumption~\ref{assum_DistUni}. In the second model, it is assumed that the inliers form a cluster in $\mathcal{U}$. The following assumption describes the second model.
\begin{assumption}
Each column of matrix $\mathbf{A}$ is formed as $\mathbf{a}_i = \frac{ \mathbf{U} \mathbf{s}_i}{\| \mathbf{U} \mathbf{s}_i \|_2} $ where $\mathbf{s}_i = \mathbf{w} + \gamma \mathbf{z}_i$, $\mathbf{w} \in \mathbb{R}^{r }$ is a unit $\ell_2$-norm vector, and $\{\mathbf{z}_i \}_{i=1}^{n_i}$ are sampled randomly from $\mathbb{S}^{r - 1}$.
\label{asm:inliers_clus}
\end{assumption}
\noindent
Since the data is noisy, exact subspace recovery is not feasible. Instead, we examine the probability that an algorithm distinguishes all the outliers correctly. Define vector $\mathbf{f} \in \mathbb{R}^{M_2 }$ such that $\mathbf{f}(k) = \| (\mathbf{I} - \hat{\mathbf{U}} \hat{\mathbf{U}}^T) \mathbf{d}_k \|_2$ where $\hat{\mathbf{U}}$ is the identified subspace. A trial is considered successful if
\begin{eqnarray}
\label{eq:out_conddi}
\max \bigg( \{\mathbf{f}(k) \: \: : \: \: k > n_o \} \bigg) < \min \bigg( \{\mathbf{f}(k) \: \: : \: \: k \le n_o \} \bigg),
\end{eqnarray}
which means that the norm of the projection of every inlier onto $\hat{\mathcal{U}}^{\perp}$ must be smaller than that of every outlier. Define $\text{SNR} = \frac{\| \mathbf{A} \|_F^2}{\| \mathbf{E} \|_F^2 }$, where $\mathbf{E}$ is the noise component added to the inliers. Fig.~\ref{fig:lin_dep_out} shows the probability that (\ref{eq:out_conddi}) holds versus the SNR (the number of evaluation runs was 200). In the left plot, the distribution of inliers follows Assumption~\ref{assum_DistUni}; in the right plot, it follows Assumption~\ref{asm:inliers_clus} with $\gamma = 0.2$.
One can observe that SNCP outperforms most of the existing methods in both cases, and the performances of iSearch and ANCP are close.
In addition, by comparing the two plots, it can be observed that the performance of some of the robust PCA methods is sensitive to the distribution of the inliers. For instance, FMS outperforms most of the other methods when the inliers are randomly distributed, but its performance degrades significantly when the inliers form a cluster in $\mathcal{U}$.
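The success criterion (\ref{eq:out_conddi}) can be sketched as follows; the data here is a toy noiseless example with $\hat{\mathbf{U}}$ set to the true basis, so the condition holds.

```python
import numpy as np

def trial_success(D, U_hat, n_o):
    """Condition (out_conddi): the residual of every inlier w.r.t. the
    identified subspace must be smaller than that of every outlier.
    Following the text, the first n_o columns of D are the outliers."""
    R = D - U_hat @ (U_hat.T @ D)        # residuals (I - U U^T) d_k
    f = np.linalg.norm(R, axis=0)
    return f[n_o:].max() < f[:n_o].min()

# Toy noiseless example: inliers lie exactly in U, outliers are random
rng = np.random.default_rng(6)
M1, r, n_i, n_o = 50, 5, 100, 20
U, _ = np.linalg.qr(rng.standard_normal((M1, r)))
A = U @ rng.standard_normal((r, n_i))
B = rng.standard_normal((M1, n_o))
D = np.hstack([B, A])                    # outliers first, as in the text
success = trial_success(D, U, n_o)       # inlier residuals are ~0 here
```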
\begin{figure}[h!]
\begin{center}
\mbox{
\includegraphics[width=1.8in]{figs/1000.eps}\hspace{-0.1in}
\includegraphics[width=1.8in]{figs/210n.eps}}
\end{center}
\vspace{-0.15in}
\caption{The outliers are linearly dependent and lie in a 10-dimensional subspace. In the left plot, the inliers are randomly distributed in $\mathcal{U}$ (Assumption~\ref{assum_DistUni}) and in the right plot, the inliers form a cluster (Assumption~\ref{asm:inliers_clus} with $ \gamma = 0.2)$. }
\label{fig:lin_dep_out}
\end{figure}
\subsection{ Identifying the Permuted Columns }
The problem of regression with unknown permutation
is similar to conventional regression, except that the correspondence between the input variables and the labels is missing or erroneous. Suppose $\mathbf{X} \in \mathbb{R}^{d \times n}$ is the measurement matrix, where $n$ is the number of measurements. Define $\mathbf{Y} \in \mathbb{R}^{m \times n}$ as the observation matrix, which can be written as $\mathbf{Y} = \Theta \mathbf{X}$ where $\Theta \in \mathbb{R}^{m \times d}$ is the unknown matrix estimated by the regression algorithm.
In the regression problem with unknown permutation, the observation matrix $\mathbf{Y}$ is affected by an unknown permutation matrix $\Pi$, i.e., it can be written as $\mathbf{Y} = \Theta \mathbf{X} \Pi$ where $\Pi \in \mathbb{R}^{n \times n}$. In this problem, it is assumed that $\Pi$ does not displace all the columns of $\Theta \mathbf{X}$; only an unknown fraction of the columns is displaced. The authors of~\cite{slawski2019sparse} showed that this special regression problem can be translated into a robust PCA problem. Define the matrix $\mathbf{Z} \in \mathbb{R}^{(d+m) \times n}$ as $\mathbf{Z} = ([\mathbf{X}^T \:\: \mathbf{Y}^T])^T$, i.e., each column of $\mathbf{Z}$ is the concatenation of the corresponding columns of $\mathbf{X}$ and $\mathbf{Y}$.
Suppose $n > d$ and assume that the rank of $\mathbf{X}$ is $d$.
If $\Pi$ is the identity matrix, the rank of $\mathbf{Z}$ is $d$. In contrast, when the columns of $\mathbf{Y}$ are displaced, the corresponding columns of $\mathbf{Z}$ do not lie in the $d$-dimensional subspace that the other columns of $\mathbf{Z}$ lie in. Therefore, the columns of $\mathbf{Z}$ corresponding to the displaced columns can be considered outliers, and a robust PCA method can be utilized to locate them. Once they are located and removed, the regression problem can be solved using the remaining measurements.
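The construction of $\mathbf{Z}$ and the rank argument above can be sketched as follows. The cyclic-shift choice of the partial permutation and the sizes are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
d, m, n, n_disp = 10, 10, 220, 20   # n_disp displaced columns (assumed)

X = rng.standard_normal((d, n))
Theta = rng.standard_normal((m, d))
Y = Theta @ X

# Displace a few columns of Y with a cyclic shift (one choice of partial Pi)
idx = np.arange(n)
idx[:n_disp] = np.roll(idx[:n_disp], 1)
Y_perm = Y[:, idx]

# Each column of Z concatenates x_i with the (possibly displaced) y column.
# Matched columns lie in a d-dimensional subspace; displaced columns
# generically do not, so they behave as outliers.
Z = np.vstack([X, Y_perm])
```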
In this experiment, the elements of $\mathbf{X}$ and $\Theta$ are sampled from $\mathcal{N} (0,1)$, $d = 10$, and $m = 10$. Define $n_i$ as the number of columns of $\mathbf{Y}$ which are not affected by the permutation matrix and define $n_o$ as the number of displaced columns. The robust PCA methods are applied to $\mathbf{Z}$ to find a basis for the $d$-dimensional subspace which is spanned by the columns of $\mathbf{Z}$ corresponding to the inliers. If this subspace is estimated accurately, all the displaced columns can be exactly located~\cite{slawski2019sparse}. Define Log-Recovery Error as
$
\log_{10} \left( \frac{\| (\mathbf{I} - \mathbf{U} \mathbf{U}^T) \hat{\mathbf{U}} \|_F}{ \|\mathbf{U} \|_F } \right)
$, where $\hat{\mathbf{U}}$ is an orthonormal basis for the recovered subspace.
Fig.~\ref{fig:lin_dep_out2} shows the Log-Recovery Error versus $n_o$, where $n_i$ is fixed at 200 (the number of evaluation runs was 400).
This is a challenging subspace recovery task because the outliers can be close to the span of the inliers, which is the main reason that CoP did not perform well. One can observe that SNCP and FMS yielded the best performance.
\begin{figure}[h!]
\begin{center}
\mbox{
\includegraphics[width=2.1in]{figs/reg.eps}
}
\end{center}
\vspace{-0.15in}
\caption{This plot shows the subspace recovery error versus the number of displaced measurements. The number of measurements that are not displaced is $n_i=200$ and the total number of measurements is $n_i + n_o$.}
\label{fig:lin_dep_out2}
\end{figure}
\subsection{Unstructured Outliers}
Theorem~\ref{theo:random} predicted that when the outliers are randomly distributed, the number of outliers can be much larger than the number of inliers provided that $n_i/r$ is sufficiently large. Suppose the data follows Assumption~\ref{assum_DistUni} with $M_1=50$ and $r=4$.
Define $\hat{\mathbf{U}}$ as an orthonormal basis for the recovered subspace. A trial is considered successful if
$
\frac{\| (\mathbf{I} - \mathbf{U} \mathbf{U}^T) \hat{\mathbf{U}} \|_F}{ \|\mathbf{U} \|_F } < 10^{-3} \: .
$
Fig.~\ref{fig:phase} shows the phase transitions,
in which white means correct subspace recovery and black designates incorrect recovery (the number of evaluation runs was 20). The phase transitions indicate that when $n_i/r$ is larger than 5, the algorithms can successfully recover $\mathcal{U}$ even if $n_o = 2000$. In addition, SNCP shows more robustness against the outliers when $n_i$ is small.
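The success test used in this experiment can be sketched as follows; the subspaces here are random placeholders for the true and recovered bases.

```python
import numpy as np

def recovery_success(U, U_hat, tol=1e-3):
    """Success criterion from the text:
    || (I - U U^T) U_hat ||_F / ||U||_F < tol."""
    resid = np.linalg.norm(U_hat - U @ (U.T @ U_hat))
    return resid / np.linalg.norm(U) < tol

rng = np.random.default_rng(8)
U, _ = np.linalg.qr(rng.standard_normal((50, 4)))   # true subspace, M1=50, r=4
Q, _ = np.linalg.qr(rng.standard_normal((50, 4)))   # an unrelated random subspace
```

Exact recovery ($\hat{\mathbf{U}} = \mathbf{U}$) passes the test, while an unrelated random subspace fails it.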
\begin{figure}[h!]
\begin{center}
\mbox{
\includegraphics[width=0.5\textwidth]{figs/phase.eps}
}
\end{center}
\vspace{-0.15in}
\caption{ The phase transitions in the presence of unstructured outliers versus $n_i$ and $n_o$. White indicates correct
subspace recovery and black designates incorrect recovery. In this experiment, the data follows Assumption~\ref{assum_DistUni} with $M_1=50$ and $r=4$. }
\label{fig:phase}
\end{figure}
\subsection{Structured Outlier Detection in Real Data}
The authors of~\cite{gitlin2018improving,rahmani2017coherence} proposed to use robust PCA to improve the accuracy of clustering algorithms. The robust PCA method is applied to each identified cluster to find the misclassified data points as outliers. We refer the reader to~\cite{gitlin2018improving,rahmani2017coherence} for further details. Similar to the corresponding experiment in~\cite{rahmani2017coherence}, we use the Hopkins155 dataset, which makes the outlier detection problem challenging. In this dataset, the data points are linearly dependent and the clusters are close to each other. Therefore, the outliers are structured and close to the inliers. The initial clustering error is $30 \%$, and we compute the final clustering error after applying the robust PCA methods and updating the clusters. Table~\ref{tab:accuracy} shows the clustering error after applying different robust PCA methods. One can observe that iSearch, ANCP, SNCP, and CoP yielded better performance; the main reason is that they leverage the clustering structure of the inliers and are robust against structured outliers.
\begin{table}[h!]
\centering
\caption{Clustering error after using the robust PCA methods to detect the misclassified data points. }
\begin{tabular}{| c | c | c | c |c|c|c|c|}
\hline
CoP & FMS & R1-PCA & GMS & iSearch & PCA & ANCP & SNCP \\
\hline
6.93 & 28.5 & 22.56 & 17.25 & 3.72 & 12.01 & 6.64 & 3.65 \\
\hline
\end{tabular}
\label{tab:accuracy}
\end{table}
\begin{figure*}[t]
\begin{center}
\mbox{
\includegraphics[width=1.22in]{figs/Ancp.eps}\hspace{-0.12in}
\includegraphics[width=1.22in]{figs/sncp.eps}\hspace{-0.12in}
\includegraphics[width=1.22in]{figs/svd.eps}\hspace{-0.12in}
\includegraphics[width=1.22in]{figs/r1.eps}\hspace{-0.12in}
\includegraphics[width=1.22in]{figs/isearch.eps}\hspace{-0.12in}
\includegraphics[width=1.22in]{figs/FMS.eps}
}
\end{center}
\vspace{-0.15in}
\caption{This figure shows the residual values computed by different methods. Each element represents a frame of the video file, and frames 65 to 80 are the outlying frames. }
\label{fig:activity}
\end{figure*}
\subsection{Event Detection in Video}
In this experiment, we utilize the robust PCA methods to identify an activity in a video file, i.e., the outlier detection methods identify the frames that contain the activity as the outlying frames/data points.
We use the Waving Tree video file~\cite{li2004statistical}, in which
a tree waves smoothly and, in
the middle of the video, a person crosses the frame. The frames that contain only the background (the tree and the environment) are the inliers, and the few frames corresponding to the event (the presence of the person) are the outliers.
Since the tree waves smoothly, we use $r=3$ as the rank of the inliers for all the methods. We use 100 frames, of which frames 65 to 80 are the outlying frames. In this interval, the person enters the frame from the left, stays in the middle, and leaves from the right. We vectorize each frame and form the data matrix $\mathbf{D}$ from the vectorized frames. In addition, we reduce the dimensionality of $\mathbf{D}$ by projecting each column onto the span of the first 50 left singular vectors. Thus, $\mathbf{D} \in \mathbb{R}^{50 \times 100}$.
Define $\hat{\mathbf{U}}$ as the estimated subspace and define the residual value corresponding to data point $\mathbf{d}_i$ as ${\| \mathbf{d}_i - \hat{\mathbf{U}}\hat{\mathbf{U}}^T \mathbf{d}_i\|_2}$. The outliers are detected as the data points with the larger residual values.
Fig.~\ref{fig:activity} shows the residual values computed by the different methods. An important observation is that FMS, PCA (SVD), and R1-PCA clearly distinguish the first and last outlying frames but hardly distinguish the middle outliers (frames 69 to 74). The main reason is that the person does not move in these frames, so these outlying frames are very similar to each other. Fig.~\ref{fig:activity} shows that ANCP, SNCP, and iSearch successfully distinguish all the outlying frames since they are robust to structured and linearly dependent outliers.
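The dimensionality reduction and residual scoring described above can be sketched as follows. The frame data and the estimated subspace $\hat{\mathbf{U}}$ are random placeholders standing in for the video frames and the output of a robust PCA method.

```python
import numpy as np

rng = np.random.default_rng(9)
frames = rng.standard_normal((1000, 100))  # placeholder: 100 vectorized frames

# Dimensionality reduction: coordinates in the basis of the first 50
# left singular vectors, giving D in R^{50 x 100} as in the experiment
Uf, _, _ = np.linalg.svd(frames, full_matrices=False)
D = Uf[:, :50].T @ frames

# Residual of each frame w.r.t. an estimated rank-3 subspace U_hat
# (random placeholder here; in the experiment it comes from robust PCA)
U_hat, _ = np.linalg.qr(rng.standard_normal((50, 3)))
residuals = np.linalg.norm(D - U_hat @ (U_hat.T @ D), axis=0)
suspects = np.argsort(residuals)[::-1]     # frames sorted by residual, largest first
```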
\subsection{Running Time}
In this section, we study the running time of the robust PCA methods. For ANCP, SNCP, CoP, and iSearch, we used 50 data points to build the basis matrix (matrix $\mathbf{Y}$). Table~\ref{tab:runnig_M2} shows the running times versus $M_2$ with $M_1 = 200$, and Table~\ref{tab:runnig_M1} shows the running times versus $M_1$ with $M_2 = 1500$. In all the runs, $r = 5$ and $n_i = 200$. One can observe that CoP, SNCP, and ANCP are notably fast since they are single-step algorithms. The running times of CoP and SNCP are longer than that of ANCP when $M_2$ is large because their computational complexity scales with $M_2^2$. GMS is also fast when $M_1$ is small, but its running time can be long when $M_1$ is large because its computational complexity scales with $M_1^3$.
\begin{table}[h!]
\centering
\caption{Running time of the algorithms versus $M_2$ ($M_1 = 200$).}
\begin{tabular}{| c | c | c | c |c|c|c|}
\hline
$M_2 $& SNCP & ANCP & iSearch & CoP & FMS & GMS \\
\hline
500 & 0.0228 & 0.0120 & 0.2660 & 0.016 & 0.1130 & 0.0872\\
\hline
1000 & 0.0427 & 0.0160 & 0.8983 & 0.0325 & 0.2440 & 0.1265\\
\hline
5000 & 0.2622 & 0.0428 & 19.4080 & 0.3930 & 0.6635 & 0.2926\\
\hline
\end{tabular}
\label{tab:runnig_M2}
\end{table}
\begin{table}[h!]
\centering
\caption{Running time of the algorithms versus $M_1$ ($M_2 = 1500$).}
\begin{tabular}{| c | c | c | c |c|c|c|}
\hline
$M_1 $& SNCP & ANCP & iSearch & CoP & FMS & GMS \\
\hline
200 & 0.0614 & 0.0187 & 1.7279 & 0.0576 & 0.2978 & 0.1458 \\
\hline
500 & 0.1456 & 0.0727 & 2.1261 & 0.0710 & 0.9574 & 0.6399\\
\hline
1000 & 0.3145 & 0.2527 & 2.7695 & 0.0900 & 2.7731 & 2.5590 \\
\hline
\end{tabular}
\label{tab:runnig_M1}
\end{table}
\section{Conclusion}
It was shown that the Innovation Value under a quadratic cost function is equivalent to the Leverage Score. Two closed-form robust PCA methods were presented: the first is based on Leverage Scores, and the second is inspired by the connection between Leverage Scores and Innovation Values. Several theoretical performance guarantees were presented for the robust PCA methods under different models for the distributions of the outliers and the inliers. In addition, it was shown, both theoretically and numerically, that the algorithms are robust to the strong presence of noise. Although the presented methods are fast closed-form algorithms, they often outperform most of the existing methods.
\vspace{0.4in}
\hypertarget{the-traditional-interpretation-of-logarithms}{%
\section{The Traditional Interpretation of
Logarithms}\label{the-traditional-interpretation-of-logarithms}}
It is common practice in many statistical applications, especially in
regression analysis, to transform variables using the natural logarithm
\(\ln(X)\). This can be done for statistical reasons, for example to fit an apparent
functional form in the data or to reduce skew and the impact of positive
outliers in the variable \(X\). The logarithm transformation is also
used for theoretical reasons, when theory dictates the model relates to
proportional changes in \(X\) rather than linear changes.
The standard interpretation of a log-transformed variable in a
regression is that a linear increase of \(p\) in \(\ln(X)\) is
equivalent to a \(p\times 100\%\) increase in \(X\). This is not
literally true. A linear increase of \(p\) in \(\ln(X)\) is equivalent
to an \((e^p - 1)\times 100\%\) increase in \(X\). The standard
interpretation relies on the approximation \(e^p \approx 1+p\), or
equivalently \(p \approx \ln(1+p)\), which is fairly accurate for small
values of \(p\).
\begin{equation}
\ln(X) + p \approx \ln(X) + \ln(1+p) = \ln(X(1+p))
\end{equation}
In this paper, I provide a very simple alternative approach to using
and interpreting logarithms in the context of regression analysis, which
solves three major problems with the standard approach.
The first major problem with the standard approach is that the
approximation loses quality relatively quickly as \(p\) grows. The error
in approximation is equal to
\[ 1+p - e^p \]
which is always negative, and grows more negative with \(p\), such that
this approximation always understates the proportional increase in \(X\)
equivalent to a given linear increase in \(\ln(X)\). If \(\ln(X)\) is a
treatment variable, the approximation will always overstate its effect.
The quality of approximation is, subjectively, acceptable for small
values of \(p\), but the error becomes large within ranges of interest.
For linear increases in \(\ln(X)\) of .1, .2, or .3, respectively,
interpretations of these changes as 10\%, 20\%, or 30\% increases in
\(X\) would understate the actual change in \(X\) by about .5, 2.1, and
5 percentage points, respectively.\footnote{The approximation error expression also
demonstrates that \(e\) holds no special property with regard to
reducing approximation error, and is actually a poor choice of logarithmic base if
the traditional approximation is to be used. Many other bases, like
2.6, produce nearly identical errors for small \(p\) and then dominate
\(e\) afterwards. Base
2.35 is attractive in other ways: errors for base 2.35 are no larger in
absolute value than .014 all the way up to \(p = .43\), and
considerably improve on base \(e\) above that, although unfortunately
performance relative to base-\(e\) is worst around \(p=.1\). If a
researcher insists on the traditional approximation, I at least
recommend the use of base-2.6 logarithm, which is a clear improvement
on \(e\), or perhaps base-2.35 for something less sensitive to \(p\).}
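These error figures are straightforward to verify numerically. The short Python sketch below (illustrative only, not part of the original analysis) computes the exact proportional change \(e^p - 1\) and the understatement of the traditional reading for the three values of \(p\) quoted above:

```python
import math

# Traditional reading: a linear increase of p in ln(X) is taken to be a
# p*100% increase in X. The exact change is (e^p - 1)*100%, so the
# understatement is (e^p - 1 - p)*100 percentage points.
for p in (0.1, 0.2, 0.3):
    exact = math.exp(p) - 1
    understatement_pp = (exact - p) * 100
    print(f"p = {p:.1f}: exact {exact:.4f}, understated by {understatement_pp:.2f} pp")
```

Running this reproduces the roughly .5, 2.1, and 5 percentage-point errors cited in the text.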
This leads naturally to the second problem with the standard
interpretation, which is sociological in nature. The fact that the
base-\(e\) approximation breaks down quickly as \(p\) increases does not
appear to be universally known, nor is there a standard maximum \(p\)
for which the approximation is considered acceptable. It is not uncommon
to see papers describing 10\% or 20\% changes in a log-transformed
variable, and it is doubtful that these authors would willingly inject
biases of .5 or 2.1 percentage points into their analysis for no reason.
Calculations that give exact interpretations using \(e^p\) or
\(\ln(1+p)\) are available but are not universally applied even for
larger percentage changes. Speculatively, this may be because the author
assumes that approximation error is too small to bother, because the
additional calculation could confuse a reader, or because in some fields it is not expected. It may even be the case that the
researcher is not aware that the traditional interpretation \emph{is} an
approximation. It is common in published studies and in econometric
teaching materials to see the traditional
\(\ln(X)+p \approx \ln(X(1+p))\) approximation discussed without
reference to its approximate nature, implying by omission that it is an
equality. At the undergraduate level, see for example
Bailey (2017) Section 7.2. At the graduate level, see
Greene (2008) Section 4.7, specifically Example 4.3.
The third problem with the standard interpretation, when applied to
variables on the right-hand side of a regression, is that it requires
nonstandard interpretation of regression coefficients. Nearly all
regression coefficients are understood in terms of one-unit changes in
the associated predictor. Log-transformed variables are an exception to
this. A one-unit change in \(\ln(X)\) describes what would in most cases
be an unrealistically large increase in \(X\), and also would produce a
large 71.8 percentage point error if the traditional approximation were
applied.
All three of these problems can be solved by simply changing the base of
the logarithm.\footnote{After a literature search, I was unable to find
previous studies making this same recommendation. However, given the
long history of logarithms in regression, it seems unlikely that
nobody has thought of the insight in this paper before. So I will not
claim that this method is novel, but just that it is currently not
widely known or applied.} Selecting a percentage increase
\(p\times 100\%\) ahead of time and using \(\log_{1+p}(X)\) in place of
\(\ln(X)\) means that a one-unit change in \(\log_{1+p}(X)\) is exactly
equivalent to a \(p\times 100\%\) increase in \(X\). There is no need
for approximation, and the exact interpretation can be written directly
into a regression table, solving the first problem and much of the
second. The use of base \(1+p\) can be restated as \(\ln(X)/\ln(1+p)\),
framing the method in the easily-understood terms of linearly rescaling
the variable by the constant \(1/\ln(1+p)\). The third problem is also
solved because the relevant increase in \(\log_{1+p}(X)\) is 1, in line
with other variables in the regression.
Based on these results, I recommend the use of alternate logarithmic
bases, or the ``linear rescaling'' approach, when using logarithms in
statistical analysis, especially in regression. The benefits are most
apparent when applied to variables on the right-hand side of a
regression (the predictors/independent variables), but there are also
benefits on the left-hand side (the outcome). Additionally, the use of
linear rescaling helps ease some problems related to the use of
\(\ln(1+X)\) with \(X\) variables that contain values of zero.
\hypertarget{linearly-rescaling-logarithms}{%
\section{Linearly Rescaling
Logarithms}\label{linearly-rescaling-logarithms}}
\hypertarget{exact-interpretation-of-logarithms}{%
\subsection{Exact Interpretation of
Logarithms}\label{exact-interpretation-of-logarithms}}
\label{sec:exact}
For any base \(b\), a linear increase of \(p\) in a logarithm
\(\log_b(X)\) is equivalent to a proportional change in \(X\) of
\(b^p\), or a percentage increase of \((b^p-1)\times 100\%\).
Researchers could report exact interpretation of linear logarithmic
increases using this formula. However, this practice is far from
universal.
Another approach to exact interpretation of linear logarithmic increases
is to take advantage of the following feature of \(b^p-1\):
\[ (1+p)^1 - 1 = p \]
That is, a linear increase of \(1\) in \(\log_{1+p}(X)\) is exactly
equivalent to a percentage increase of \(p\times100\%\) (or a
proportional increase of \(1+p\)) in \(X\).
This means that a researcher can select ahead of time the percentage
increase in \(X\) that they are interested in, for example 10\%
(although any other percentage would work as well), and then use
\(\log_{1.1}(X)\) in their analysis instead of \(\ln(X)\). Then, a
one-unit increase in \(\log_{1.1}(X)\) can be exactly interpreted as a
10\% increase in \(X\).
Further, because of the change of base formula,
\(\log_{1.1}(X)\) can be calculated as \(\ln(X)/\ln(1.1)\).
\(\ln(1.1)\) is a constant, and so the researcher can achieve exact
interpretation of the change by scaling the variable they're already
using (\(\ln(X)\)) by a constant. Researchers using any estimation
method should already be aware of the implications of scaling by a
constant in that method, so the linear rescaling approach to changing
the base should be understandable by both researchers and readers of
research. The use of the change-of-base formula also allows another
clear demonstration of what the choice of logarithm base does for
interpretations of proportional change:
\[ \frac{\ln(X(1+p))}{\ln(1+p)} = \frac{\ln(X) + \ln(1+p)}{\ln(1+p)} = \frac{\ln(X)}{\ln(1+p)}+1 \]
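The identity above can be checked directly. A minimal Python sketch, where the value of \(X\) is arbitrary and purely illustrative:

```python
import math

def log_rescaled(x, p=0.10):
    # Base-(1+p) logarithm via the change-of-base formula ln(x)/ln(1+p).
    return math.log(x) / math.log(1 + p)

x = 37.0  # arbitrary positive value
# A one-unit increase in log_{1.1}(X) corresponds exactly to a 10% increase in X.
assert abs(log_rescaled(1.1 * x) - (log_rescaled(x) + 1)) < 1e-9
# A two-unit increase corresponds to 1.1**2 - 1 = 21%, not 20%.
print((1.1 ** 2 - 1) * 100)
```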
There are several benefits to linear rescaling:
\begin{itemize}
\item
It produces exact interpretations of linear increases in logarithms.
\item
It is, arguably, conceptually simpler than using \(b^p-1\) or
\(\log_b(1+p)\) to adjust the result after estimation, and avoids the introduction of another calculation where error may occur.
\item
The change of interest is one log unit, which is how most other
variables are understood.
\item
The exact interpretation can be written directly onto a regression
table rather than relying on supplemental calculations, as will be
shown in the following sections.
\end{itemize}
The main downside of this approach is that it requires the choice of the
percentage increase beforehand. In practice, this is unlikely to be a
major hurdle, as researchers often report only a single percentage
increase value anyway, typically a preselected value like 10\%, based on
what a reasonable observable change in \(X\) would be.
Additionally, if selecting a percentage increase beforehand is not
realistic, or if multiple percentage increases are desired, exact
interpretation is still available, as in the traditional method: a
linear increase of \(c\) in \(\log_{1+p}(X)\) is exactly a
\(((1+p)^{c}-1)\times 100\%\) increase in
\(X\). Linear rescaling also improves the process of adjusting
the estimate, at least on the right-hand side of a regression (see
Section \ref{sec:rhs}).
Further, if an approximation is used instead of an exact interpretation,
linear rescaling often produces better approximations than the
traditional approximation. For example, if \(\log_{1.1}(X)\) has been
used and the researcher wants to approximate a 20\% increase in \(X\)
using a two-unit increase in \(\log_{1.1}(X)\), they will actually see
the effects of a \(1.1^2=1.21\), or 21\%, increase, rather than 20\%.
This is an error of one percentage point, compared to the traditional
approach, which produces an error of 2.1 percentage points for a .2
increase.
Figure \ref{fig:approxquality} examines the error in approximating
different percentage increases with a traditional approximation, using a
base-\(e\) logarithm where a \(p\)-unit increase in \(\ln(X)\) is taken
to be a \(p\times 100\%\) increase in \(X\). I contrast the traditional
approximations with approximations from the linear rescaling approach
using two different bases: a base-\(1.1\) logarithm, where a \(p\)-unit
increase in \(\log_{1.1}(X)\) is taken to be a \(p\times 10\%\) increase
in \(X\), and a base-\(1.4\) logarithm, where a \(p\)-unit increase in
\(\log_{1.4}(X)\) is taken to be a \(p\times 40\%\) increase in \(X\).
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{approxquality-1.pdf}
\caption{\label{fig:approxquality} Approximation Error with Traditional
and Linear Rescaling Methods}
\end{figure}
The traditional base-\(e\) method slightly outperforms the base-1.1
linear rescaling method, by a minuscule degree, up to a linear change of
.048 (although the linear rescaling method could outperform the
traditional method for any linear change by selecting a different base
than the ones shown in the graph). After .048, approximation with
base-1.1 linear rescaling dominates the traditional approximation,
especially near \(p=.1\). Both the traditional and base-1.1
approximations considerably outperform base-1.4 for small \(p\), but
this is to be expected - base-1.4 is to be used when the change of
interest is 40\%. In the region of \(p=.4\), the base-1.4 logarithm
considerably outperforms the other two, and approximation errors with
base-1.4 are relatively small for the entire graphed range up to
\(p=.5\). While the linear rescaling method allows for easy access to
exact interpretation, even in cases where approximation is used, the
linear rescaling approximation error will be smaller than the
approximation error for the traditional method as long as the linear
rescaling base is near enough to the proportional change of interest.
\hypertarget{linear-rescaling-on-the-right-hand-side}{%
\subsection{Linear Rescaling on the Right Hand
Side}\label{linear-rescaling-on-the-right-hand-side}}\label{sec:rhs}
The benefits of linear rescaling are clearest when applied to a
logarithmic transformation on the right-hand side of a regression.
Considering the model:
\[ Y = \beta_0 + \beta_1\ln(X) + \varepsilon \]
the interpretation of \(\hat{\beta}_1\) is often given in a format
similar to ``a 10\% increase in \(X\) is associated with a
\(.1\times\hat{\beta}_1\) increase in \(Y\).'' Or for an exact
interpretation, ``a 10\% increase in \(X\) is associated with a
\(\ln(1.1)\times\hat{\beta}_1 = .0953\times\hat{\beta}_1\) increase in
\(Y\).''
Under linear rescaling for a 10\% increase in \(X\), instead the model
is:
\[ Y = \beta_0 + \beta_1\frac{\ln(X)}{\ln(1.1)} + \varepsilon \]
in which the interpretation of \(\hat{\beta}_1\) is the simpler ``a 10\%
increase in \(X\) is associated with a \(\hat{\beta}_1\) increase in
\(Y\).''
The interpretation is simple enough that it can be written directly into
a regression table, as in Table \ref{tab:regtable} Column 1, rather than relying on additional in-text
calculations. The row heading in the table itself ``\(X\) (10\%
Increase)'' is able to convey that a 10\% change in \(X\) is associated
with a \(\hat{\beta}_1\) change in \(Y\), and the table note provides
more detail. The label ``\(\log_{1.1}(X)\) (10\% increase)'' may be
preferred.
If the researcher is interested in multiple percentage changes,
adjustment to get exact interpretation for each percentage change is
straightforward under linear rescaling, because of the change-of-base
formula. If \(\log_{1.1}(X)\) has been used, but the researcher wants an
exact interpretation for the linear change equivalent to a 20\% increase
in \(X\), the researcher can multiply the coefficient \(\hat{\beta}_1\)
by \(\ln(1.2)/\ln(1.1)\). This will work for any regression method where
scaling a predictor by \(c\) has the result of scaling its coefficient
by \(1/c\).
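As a concrete illustration of this scaling behavior, the following Python sketch fits both specifications by ordinary least squares on simulated data; the data-generating process and all numbers are hypothetical, not taken from the paper's table:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(1.0, 100.0, n)
y = 2.0 * np.log(x) + rng.normal(0.0, 0.1, n)  # hypothetical model: Y = 2 ln(X) + noise

def ols_slope(pred, resp):
    # Slope from a simple OLS fit with an intercept.
    A = np.column_stack([np.ones_like(pred), pred])
    coef, *_ = np.linalg.lstsq(A, resp, rcond=None)
    return coef[1]

b_ln = ols_slope(np.log(x), y)                # coefficient on ln(X), close to 2.0
b_11 = ols_slope(np.log(x) / np.log(1.1), y)  # coefficient on log_{1.1}(X)
# Rescaling the predictor by 1/ln(1.1) scales the slope by ln(1.1).
print(b_ln, b_11, b_ln * np.log(1.1))
```

The rescaled coefficient \texttt{b\_11} reads directly as the change in \(Y\) associated with a 10\% increase in \(X\).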
\begin{table}
\caption{\label{tab:regtable}Example Regression Table with Linear Rescaling}
\centering
\begin{tabular}[t]{lccc}
\toprule
& Y & Y (10\% Increase) & Y (10\% Increase) \\
\midrule
X (10\% Increase) & 0.194*** & & 0.460***\\
& (0.003) & & (0.007)\\
X & & 2.001*** & \\
& & (0.038) & \\
\midrule
Num.Obs. & 1000 & 1000 & 1000\\
\bottomrule
\multicolumn{4}{l}{\rule{0pt}{1em}* p $<$ 0.1, ** p $<$ 0.05, *** p $<$ 0.01}\\
\multicolumn{4}{l}{\textsuperscript{} Variables marked with 10\% increase use a base-1.1 logarithm}\\
\multicolumn{4}{l}{transformation. Data is simulated.}\\
\end{tabular}
\end{table}
\hypertarget{linear-rescaling-on-the-left-hand-side}{%
\subsection{Linear Rescaling on the Left Hand
Side}\label{linear-rescaling-on-the-left-hand-side}}
\label{sec:lhs}
The benefits of linear rescaling are less clear on the left-hand side of
the regression, since the proportional change of interest cannot be
exactly chosen ahead of time. Still, there are benefits.
Consider Table \ref{tab:regtable} Column 2, which uses the model
\[ \frac{\ln(Y)}{\ln(1.1)} = \beta_0 + \beta_1X + \varepsilon \] The
regression estimate is \(\hat{\beta}_1 = 2.001\). By itself, this does
not easily lead to exact interpretation of the coefficient.
At this point, the researcher can approximate the effect of \(2.001\) as
a \(2.001\times 10\% \approx 20\%\) increase. As in Section
\ref{sec:exact}, this will lead to less approximation error than in the
traditional method provided that \(\hat{\beta}_1\) is not too far from
\(1\).
There is also the option to provide an exact interpretation, where a
one-unit increase in \(X\) is associated with a \(b^{\hat{\beta}_1}\)
proportional change in \(Y\). This is not much different from the process
for getting exact interpretation using the traditional approach,
although it may be somewhat easier to understand if \(p\) is a more
natural object to think about than \(e\).
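Using the estimate \(\hat{\beta}_1 = 2.001\) from Column 2, the approximate and exact readings can be compared in a few lines of Python (a sketch of the arithmetic, not part of the original analysis):

```python
import math

b_hat = 2.001  # estimate from Table 1, Column 2 (outcome is log_{1.1}(Y))
base = 1.1
approx = b_hat * 0.10        # approximate reading: about a 20.0% increase in Y
exact = base ** b_hat - 1    # exact reading: Y is multiplied by base**b_hat
print(round(approx, 4), round(exact, 4))
```

Here the approximation is off by about one percentage point, in line with the discussion in Section \ref{sec:exact}.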
A third option is to rerun the model with a different logarithmic base
such that the \(\hat{\beta}_1\) will be near \(1\). Then,
\(\hat{\beta}_1\) can be very accurately interpreted as a proportional
increase of \(b\) in \(Y\). However, this is both laborious and would
result in the coefficient of interest being oddly located in the
logarithmic base.
One note about left-hand side use is that the traditional approximation
is known to perform particularly poorly, and exact interpretation is
especially important, when the logarithm is on the left-hand side and a
predictor of interest is binary (Halvorsen \& Palmquist, 1980). In
theoretical terms this is because the derivative does not exist. In
practical terms this is because, if the coefficient on the binary
variable is large, the researcher cannot naturally select a linear
change small enough for the approximation to perform well. Easy access
to exact interpretation, and improved approximation when used, are
especially important in this case.
\hypertarget{linear-rescaling-on-both-sides}{%
\subsection{Linear Rescaling on Both
Sides}\label{linear-rescaling-on-both-sides}}
\label{sec:bothsides}
In the case of the log-log model
\[ \ln(Y) = \beta_0 + \beta_1\ln(X) + \varepsilon \]
linear rescaling on both the left and right-hand sides using the same
bases will have no effect on \(\hat{\beta}_1\) or on its interpretation,
and offers no major improvement over the traditional method, unless
there is a reason to want different bases on the left and right-hand
sides, or if there is interest in interpreting the coefficients on
control variables with a linearly-rescaled left-hand side.
There are some minor expositional benefits. Linear rescaling could be
used here for consistency with other models that are not log-log.
Rescaling can also make clear to an audience unfamiliar with log-log
models how they can be interpreted. For example, it's not uncommon in a
log-log model to still report a result like ``a 10\% increase in \(X\)
is associated with a \(\hat{\beta}_1\times 10\)\% change in \(Y\).''
Herz \& Mejer (2016) is just one example of this. If the author wants the
reader to think in terms of a percentage increase of a particular size
in this way, linear rescaling can make that interpretation explicit on
the regression table, as in Column 3 of Table \ref{tab:regtable}.
\hypertarget{linear-rescaling-and-zeroes}{%
\subsection{Linear Rescaling and
Zeroes}\label{linear-rescaling-and-zeroes}}
Researchers often want to apply a logarithmic transformation to a
variable that can take values of zero. There are two common
approaches to this: \(\ln(1+X)\), and the inverse hyperbolic sine
transformation \(asinh(X) = \ln(X+\sqrt{X^2+1})\). Exact calculations
for elasticity interpretations using the \(asinh(X)\) transformation are
described in Bellemare \& Wichman (2020). Both \(\ln(1+X)\) and
\(asinh(X)\) reduce skew and accept values of zero. However, if the researcher
wishes to maintain a proportional-change interpretation (at values other
than zero), there are problems with any sort of ad-hoc
transformation like this, including a sensitivity to the scale of \(X\)
and the fact that the zero-censored variable is treated as uncensored.
For a left-hand side variable, poisson regression or a censoring model
are likely to be superior to an ad-hoc transformation. However, the use
of an ad-hoc transformation is still a concern on the right-hand side or
for researchers who want to use standard linear regression for other
reasons.
In this section, I will assume that the researcher's goal is to interpret a
linear change in \(\ln(1+X)\) in terms of a proportional change in \(X\)
(rather than a proportional change in \(1+X\)). In this case, exact
interpretation is particularly important, whether performed using linear
rescaling or \(e^p\), but there are several details that still separate
the two approaches.
Consider a one-unit increase in \(\ln(1+X)/\ln(1+p)\). This is
equivalent to a proportional change of \(1+p\) in \(1+X\), or an
absolute increase of \(p(1+X)\). If \(X\) increases by \(p(1+X)\), what
proportional increase is that equivalent to?
\begin{equation}
\label{eq:linerror}
X + p(1+X) = X(1+p + \frac{p}{X})
\end{equation}
A one-unit linear increase in \(\ln(1+X)/\ln(1+p)\) interpreted as a
\(1+p\) proportional change in \(X\) will get the proportional change
wrong by \(\frac{p}{X}\).
Similarly, a linear increase of \(p\) in \(\ln(1+X)\) is a proportional
increase of \(e^p\) in \(1+X\). As above,
\begin{equation}
\label{eq:traderror}
X+(e^p-1)(1+X) = X(e^p + \frac{e^p-1}{X})
\end{equation}
A linear increase of \(p\) in \(\ln(1+X)\) interpreted as a proportional
increase of \(e^p\) in \(X\) will get the proportion wrong by
\(\frac{e^p-1}{X}\).
For a given \(p\), since \(p \leq e^p-1 \ \forall \ p \geq 0\), the
linear rescaling approach will always outperform the traditional
approach, and by a greater margin as \(p\) increases. However, this is
due to the fact that for a given \(p\), the traditional method describes
a proportional change of \(e^p \geq 1+p\). For a given
\emph{proportional change}, for example comparing a linear increase of
\(1\) under linear rescaling to a linear increase of \(\ln(1+p)\) in the
traditional method, both methods perform identically.
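The \(p/X\) and \((e^p-1)/X\) discrepancies in Equations \ref{eq:linerror} and \ref{eq:traderror} can be confirmed numerically; the values of \(p\) and \(X\) below are purely illustrative:

```python
import math

p, x = 0.10, 5.0  # illustrative values
# Linear rescaling: one unit of ln(1+X)/ln(1+p) raises 1+X by the factor 1+p,
# i.e. raises X itself by p*(1+X) in absolute terms.
realized = (x + p * (1 + x)) / x                    # realized proportional change in X
assert abs((realized - (1 + p)) - p / x) < 1e-9     # discrepancy is p/X
# Traditional method: a linear increase of p in ln(1+X) scales 1+X by e**p.
realized_trad = (x + (math.exp(p) - 1) * (1 + x)) / x
assert abs((realized_trad - math.exp(p)) - (math.exp(p) - 1) / x) < 1e-9
print(p / x, (math.exp(p) - 1) / x)
```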
There are still several points to recommend linear rescaling here.
First, linear rescaling is an improvement if researchers using the
traditional method first select a \(p\) of interest and then calculate
\(e^p\), rather than selecting \(e^p\) directly (since, as above, \(p \leq e^p-1 \ \forall \ p \geq 0\)). Second, a bias of
\(p/X\) may be easier to reason about than \((e^p-1)/X\).
The comparison so far assumes that the researcher using the traditional
approach uses the exact interpretation, where a linear increase of \(p\)
is a proportional increase of \(e^p\). If they instead interpret a
linear increase of \(p\) as a proportional change of \(1+p\) in \(X\)
using the \(e^p \approx 1+p\) approximation, the error will be
\[ e^p - (1+p) + \frac{e^p-1}{X} \]
There are two problems here. First, the fact that linear rescaling and
the traditional method perform identically for a given proportional
increase doesn't matter, as by using the approximation the researcher
has chosen to fix \(p\), under which linear rescaling outperforms the
traditional method. Second, the use of the approximation adds the
traditional approximation error \(e^p-(1+p)\) on top of
\(\frac{e^p-1}{X}\), further increasing error relative to the linear
rescaling method, and making the error grow even faster in \(p\).
For either the linear rescaling or traditional approaches, the
recommendation from Bellemare \& Wichman (2020) to scale \(X\)
upwards in reference to \(asinh(X)\), elaborated upon in
Aihounton \& Henningsen (2019) to determine optimal scaling values, is also
implied by these results for \(\ln(1+X)\), as the interpretation error
declines proportionally with larger absolute values of \(X\).
\hypertarget{exact-interpretation-with-zeroes}{%
\subsection{Exact Interpretation with
Zeroes}\label{exact-interpretation-with-zeroes}}
As an aside (since it does not relate to linear rescaling in
particular), a researcher using either the linear rescaling or
traditional method could use one of the error formulas in Equations
\ref{eq:linerror} or \ref{eq:traderror} to adjust their
proportional-change interpretation, or decide whether the error is small
enough to ignore, for a given \(p\) and \(X\).
This may be especially useful in the calculation of elasticities, since
it allows proportional changes in both \(Y\) and \(X\) to be recovered
from proportional changes in \(1+Y\) and \(1+X\).
For example, in the log-log model
\[\frac{\ln(1+Y)}{\ln(1+p_Y)} = \beta_0 + \beta_1\frac{\ln(1+X)}{\ln(1+p_X)} + \varepsilon \]
a \(1+p_X\) proportional change in \(1+X\) is associated with a
\((1+p_Y)^{\hat{\beta}_1}\) proportional change in \(1+Y\). For the
specific values \(X = X_0\) and \(Y = Y_0\), this means that a
\(1 + p_X + \frac{p_X}{X_0}\) proportional change in \(X\) is associated
with a
\((1+p_Y)^{\hat{\beta}_1} + \frac{(1+p_Y)^{\hat{\beta}_1}-1}{Y_0}\)
proportional change in \(Y\). If desired, \(p_X\) and \(X_0\) can be
selected ahead of time so that \(1 + p_X + \frac{p_X}{X_0}\) is a round
number. Similar calculations follow for the \(\ln(1+X)\)-linear and
linear-\(\ln(1+X)\) cases. The only task remaining at that point is
calculation of standard errors for this nonlinear function of
\(\hat{\beta}_1\). The delta method is one acceptable approach, at least
in large samples.\footnote{Keep in mind that this adjustment does not
account for zero-censoring of the variables, which is still present
and may still harm performance for logarithms on the left-hand side
relative to using a model that properly accounts for censoring.}
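A Python sketch of this recovery calculation, with \(p_X\), \(X_0\), \(p_Y\), \(Y_0\), and \(\hat{\beta}_1\) all chosen purely for illustration (note that \(p_X = .1\) with \(X_0 = 1\) yields the round proportional change 1.2):

```python
# All numbers here are hypothetical, chosen so the recovered change in X is round.
p_x, x0 = 0.10, 1.0
p_y, y0, beta_hat = 0.10, 4.0, 0.8
prop_x = 1 + p_x + p_x / x0            # 1.2: a 20% proportional change in X
gy = (1 + p_y) ** beta_hat             # proportional change in 1+Y
prop_y = gy + (gy - 1) / y0            # recovered proportional change in Y
print(prop_x, round(prop_y, 5))
```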
\hypertarget{example-applications}{\section{Example Applications}
\label{sec:examples}}
In this section I refer to several published studies that use natural
logarithm transformations in regression analysis, and discuss how those
papers might have been different using linear rescaling.
The first study I look at, which is from economics, is
Eren, Onda, \& Unel (2019). This study looks at the impact of foreign
direct investment (FDI) on entrepreneurship in the United States,
covering business-creation, business-destruction, and self-employment
rates as outcome variables. The authors use a natural logarithmic
transformation of FDI, and the headline result specified in the abstract
is ``A 10\% increase in FDI decreases the average monthly rate of
business creation and destruction by roughly 4 and 2.5\% (relative to
the sample mean), respectively,'' where these figures refer
to results in states without Right-to-Work laws. Despite the percentage
interpretation given for the effects, business creation and destruction
are not log-transformed, only FDI is. The linear effect of log FDI is
reported as a percentage of the sample means of business creation and
destruction.
This study makes use of the traditional approximation. The result for
business creation comes from a regression of business creation rates on
\(\ln(FDI)\) (lagged two years), where the coefficient on \(\ln(FDI)\)
is -1.083. They interpret this as a 10\% increase in FDI reducing the
business creation rate by .1083 (or perhaps they round the effect to .11
before proceeding, this is not clear), which is roughly 4\% of the mean
(\(.1083/2.889 = .0375\) or \(.11/2.889=.038\)). Under linear rescaling
with a base of \(1.1\), the coefficient on \(\log_{1.1}(FDI)\) would be
\(.1032\), which indicates an effect roughly 3.5\% of the mean
(\(.1032/2.889=.0357\)), keeping in mind that the abstract reports its
other effect to the half-percent level of precision. In this case, the
use of the approximation made the effect seem larger than it was. The
correction is not enormous, but still there is no particular reason the
correct result could not have been in the original study. Traditional
exact interpretation or linear rescaling would have avoided this
problem. However, linear rescaling in this case offers the additional
benefit over traditional exact approximation that, if the dependent
variable had been divided by its mean, the coefficient on
\(\log_{1.1}(FDI)\) would have been the value of interest \(-.0357\)
directly, and no further calculation would have been necessary. This
ability to put the value of interest on the table also applies to the
2.5\% result, although in this case the substantive reported conclusion
does not change at the half-percent level of precision (the effect to
two decimal places drops from 2.55\% to 2.43\%).
The second study is from public health. Kim \& Leigh (2020) look
at the effect of wages on obesity rates, finding that low wages increase
body mass and the prevalence of obesity. This study uses traditional
exact interpretation, so linear rescaling cannot change its results, but
it may change how they are presented. Their instrumental variables
estimate with BMI as a dependent variable reads ``the coefficient on
ln-wage was statistically significant (\(p < 0.01\)) and its value was
\(-3.3\). Standard errors appear in parentheses. This coefficient
suggests that a 10\% increase in wages is associated with 0.32 decline
in BMI.'' The .32 appears to be a slight miscalculation: the exact
value, \(\ln(1.1)\times(-3.3) = -.3145\), should round to \(-.31\), not
\(-.32\). Under linear rescaling with a base of \(1.1\), this additional
calculation would not need to appear in the text, and the coefficient on
\(\log_{1.1}(Wage)\) would be \(-.3145\), reported as \(-.315\) or
\(-.31\), which would also avoid the slight miscalculation. The
authors also report the effect of a 100\% increase in wages as a 2.29
decline in BMI (\(\ln(2)\times(-3.3) = -2.29\)). To achieve this with
linear rescaling, the authors either could rerun analysis using a log
base of 2, or, more realistically, retain the calculation in the text
for this additional result, adjusting the 10\% estimate by
\(-.3145\times\ln(2)/\ln(1.1) = -2.29\). This simplified presentation
for the BMI results could be similarly applied in analysis of the
obesity rate dependent variable.
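For reference, the two conversions discussed above are easy to verify directly (a quick check using the study's reported coefficient of \(-3.3\)):

```python
import math

b = -3.3  # reported coefficient on ln(wage)

# exact effect of a 10% wage increase on BMI
effect_10 = math.log(1.1) * b
print(round(effect_10, 4))   # -0.3145, which rounds to -.31, not -.32

# converting the 10% estimate to a 100% (doubling) increase
effect_100 = effect_10 * math.log(2) / math.log(1.1)
print(round(effect_100, 2))  # -2.29
```

Note that the second conversion collapses algebraically to \(\ln(2)\times b\), so the doubling effect can also be computed without reference to the 10\% figure.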
The final study I will mention is an example of how the complex
calculations necessary to produce exact interpretations of logarithms
using the traditional method can lead to error. Lin, Teymourian, \& Tursini (2018)
look at the effect of the natural logarithm of sugar and processed food
imports on the prevalence of obesity and overweight. In one model the
coefficient on logged imports is .085, which is interpreted as ``10\%
increase in import is associated with approximately 0.004 increase in
average BMI.'' However, the effect should instead be of a .0081 increase
in BMI using exact interpretation, or .0085 using the traditional
approximation, an effect more than twice as large. Similar errors are
made in another table, where a coefficient of .004 is again interpreted
as having about half the appropriate effect size for a 10\% increase.
They also provide an interpretation of a 50\% increase, which is about
40\% the appropriate size. If the authors had used linear rescaling,
they would not have needed to produce these calculations, and there
would have been no potential for this error to occur. Instead, the
effect size of interest would have automatically been produced by the
statistics software.
\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}
Despite their wide use across many fields, both in the context of
regression and in other statistical applications, logarithms are
frequently misinterpreted to greater or lesser degrees. This is partly
due to the standard interpretation of logarithms, which relies on an
approximation that produces non-negligible errors outside of a fairly
narrow range of percentage increases. It is possible to skip the
approximation and instead accurately interpret a linear increase \(p\)
in \(\ln(X)\) as a proportional increase of \(e^p\) in \(X\). However,
this practice is not universal, and carries a chance of error.
The interpretation of logarithms can be improved by changing the base of
the logarithm to \(1+p\), where \(1+p\) is a proportional change of
interest. The change of base can be achieved by linearly rescaling
\(\ln(X)\) to \(\ln(X)/\ln(1+p)\). This rescaling offers a way of
producing exact interpretations of logarithms, or in some cases
approximations with less error than the traditional approximation.
Linear rescaling can be easier for a reader to understand on a
regression table. The one-unit log increase that accompanies a given
percentage increase can be understood in the same way as changes in
untransformed variables. In order to interpret the coefficient the
reader does not need to search for additional calculations in the text,
nor does the author need to perform and provide them.
Crucially, rescaling is as easy to perform and explain as the
traditional approximation, and does not in most cases require the
additional post-analysis calculations that usually go along with exact
interpretations under the traditional method. All three of the studies
covered in Section \ref{sec:examples} contained either errors or
misrepresentations of their results because of these post-analysis
calculations. I did not select these studies because I knew they had
problems or anticipated any; it just so happens that the first three
studies I selected as good candidates for demonstrating the method all
had problems that could have been avoided with linear rescaling.
Ease of use for the researcher is important, because it may help
sidestep some of the reasons why researchers do not already report exact
results, and may help avoid calculation errors with traditional exact
interpretation. Researchers should in general be producing exact
interpretations of logarithms and be aware of the extent of error in the
traditional approximation. Because it is as easy as the traditional
approximation, linear rescaling is an attractive way of providing exact
results.
\hypertarget{references}{%
\section*{References}\label{references}}
\addcontentsline{toc}{section}{References}
\hypertarget{refs}{}
\begin{CSLReferences}{1}{0}
\leavevmode\vadjust pre{\hypertarget{ref-aihounton2019units}{}}%
Aihounton, Ghislain BD, and Arne Henningsen. 2019. {``Units of
Measurement and the Inverse Hyperbolic Sine Transformation.''} IFRO
Working Paper.
\leavevmode\vadjust pre{\hypertarget{ref-bailey2017real}{}}%
Bailey, Michael A. 2017. \emph{Real Econometrics: The Right Tools to
Answer Important Questions}. Oxford University Press.
\leavevmode\vadjust pre{\hypertarget{ref-bellemare2020elasticities}{}}%
Bellemare, Marc F, and Casey J Wichman. 2020. {``Elasticities and the
Inverse Hyperbolic Sine Transformation.''} \emph{Oxford Bulletin of
Economics and Statistics} 82 (1): 50--61.
\leavevmode\vadjust pre{\hypertarget{ref-eren2019}{}}%
Eren, Ozkan, Masayuki Onda, and Bulent Unel. 2019. {``Effects of FDI on Entrepreneurship: Evidence from Right-to-work and Non-right-to-work States.''} \emph{Labour Economics} 58: 98--109.
\leavevmode\vadjust pre{\hypertarget{ref-greene2003econometric}{}}%
Greene, William H. 2008. \emph{Econometric Analysis}. Pearson Education.
\leavevmode\vadjust pre{\hypertarget{ref-halvorsen1980interpretation}{}}%
Halvorsen, Robert, and Raymond Palmquist. 1980. {``The Interpretation of
Dummy Variables in Semilogarithmic Equations.''} \emph{American Economic
Review} 70 (3): 474--75.
\leavevmode\vadjust pre{\hypertarget{ref-herz2016}{}}%
Herz, Benedikt, and Malwina Mejer. 2016. {``On the Fee Elasticity of the Demand for Trademarks in Europe.''} \emph{Oxford Economic Papers} 68 (4): 1039--1061.
\leavevmode\vadjust pre{\hypertarget{ref-kim2010estimating}{}}%
Kim, DaeHwan, and John Paul Leigh. 2010. {``Estimating the Effects of Wages on Obesity.''} \emph{Journal of Occupational and Environmental Medicine} 52 (5): 495--500.
\leavevmode\vadjust pre{\hypertarget{ref-lin2018effect}{}}%
Lin, Tracy Kuo, Yasmin Teymourian, and Maitri Shila Tursini. 2018. {``The Effect of Sugar and Processed Food Imports on the Prevalence of Overweight and Obesity in 172 Countries.''} \emph{Globalization and Health} 14 (1): 1--14.
\end{CSLReferences}
\end{document}
| {
"timestamp": "2021-10-07T02:12:48",
"yymm": "2106",
"arxiv_id": "2106.03070",
"language": "en",
"url": "https://arxiv.org/abs/2106.03070",
"abstract": "The standard approximation of a natural logarithm in statistical analysis interprets a linear change of \\(p\\) in \\(\\ln(X)\\) as a \\((1+p)\\) proportional change in \\(X\\), which is only accurate for small values of \\(p\\). I suggest base-\\((1+p)\\) logarithms, where \\(p\\) is chosen ahead of time. A one-unit change in \\(\\log_{1+p}(X)\\) is exactly equivalent to a \\((1+p)\\) proportional change in \\(X\\). This avoids an approximation applied too broadly, makes exact interpretation easier and less error-prone, improves approximation quality when approximations are used, makes the change of interest a one-log-unit change like other regression variables, and reduces error from the use of \\(\\log(1+X)\\).",
"subjects": "Econometrics (econ.EM)",
"title": "Linear Rescaling to Accurately Interpret Logarithms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9799765575409525,
"lm_q2_score": 0.8175744850834648,
"lm_q1q2_score": 0.8012038294254106
} |
https://arxiv.org/abs/2208.02559 | Equivalence between Time Series Predictability and Bayes Error Rate | Predictability is an emerging metric that quantifies the highest possible prediction accuracy for a given time series, being widely utilized in assessing known prediction algorithms and characterizing intrinsic regularities in human behaviors. Lately, increasing criticisms aim at the inaccuracy of the estimated predictability, caused by the original entropy-based method. In this brief report, we strictly prove that the time series predictability is equivalent to a seemingly unrelated metric called Bayes error rate that explores the lowest error rate unavoidable in classification. This proof bridges two independently developed fields, and thus each can immediately benefit from the other. For example, based on three theoretical models with known and controllable upper bounds of prediction accuracy, we show that the estimation based on Bayes error rate can largely solve the inaccuracy problem of predictability. | \section*{Theorem}
\begin{theorem}
\label{theorem:1}
Given an $M$-state time series, its predictability $\Pi$ is equivalent to the Bayes error rate $R$ in an $M$-classification problem as
\begin{equation}
\Pi = 1 - R, \label{eqn:1}
\end{equation}
if we treat each state as a class, and the series before the state as the feature.
\end{theorem}
\begin{proof}
Denote by $x_{n-1}=\omega^{1}\omega^{2}\cdots\omega^{n-1}$ the historical series from time $1$ to $n-1$, where $\omega^{i}\in \Omega$ and $\Omega$ is the set of $M$ states. Denote by $Pr[\omega^{n}=\hat{\omega}^{n}|x_{n-1}]$ the probability that our predicted state $\hat{\omega}^{n}$ is equal to the actual state $\omega^{n}$ given $x_{n-1}$, and by $\pi(x_{n-1})=\sup_{\omega}\left\{Pr[\omega^{n}=\omega|x_{n-1}] \right\}$ the probability of occurrence of the most probable state at time $n$. The predictability of the $n$th state given the historical states $x_{n-1}$ is then $\Pi(n)=\sum_{x_{n-1}}P(x_{n-1})\pi(x_{n-1})$, where $P(x_{n-1})$ is the probability of observing a particular history $x_{n-1}$ and the sum is taken over all possible histories of length $n-1$. Notice that $\pi(x_{n-1})$ contains the full predictive power, including potential long-range correlations in the time series. In practice, however, we usually use a shorter historical series, such as $\omega^{n-r}\omega^{n-r+1}\cdots\omega^{n-1}$ with $r$ a cutoff parameter, instead of the full historical series $\omega^{1}\omega^{2}\cdots\omega^{n-1}$, so that in this more general case the probability of observing a particular history can be smaller than 1. The overall predictability $\Pi$ is then defined as the time-averaged predictability for a sufficiently long time series \cite{song2010limits}, as
\begin{align}
\Pi = \lim\limits_{n\to \infty}\frac{1}{n}\sum_{i=1}^{n}\Pi(i),
\end{align}
where $x_0=\varnothing$ and $\Pi(1)$ is the predictability of the first state without any available information.
Consider an $M$-classification problem with $n$ samples whose class labels are the $n$ states in the time series and whose features are the series before the corresponding states. Table \ref{tbl:1} illustrates the one-to-one relationship between time series prediction and classification for an example series with $M=3$ and $n=5$. Denote by $p\left(x | \omega_{j}\right)$ the conditional probability density of the feature $x \in X'$, where $X'\subset X$ is the set of observed features, and by $p\left(\omega_{j}\right)$ the prior probability of the class $\omega_{j}\in \Omega$ $(j = 1, 2, \cdots, M)$. The Bayes error rate (BER) is expressed as:
\begin{equation}
R=1-\sum_{j=1}^{M} \int_{\Gamma_{j}} p\left(\omega_{j}\right) p\left(x | \omega_{j}\right) dx,
\end{equation}
with the partition $\Gamma_{j}$ defined as:
\begin{equation}
\Gamma_{j} \!\triangleq\! \left\{\! x\! \in\! X'\! \mid\! p\left(\omega_{j}\right) p\left(x | \omega_{j}\right)\!>\!\max _{\substack{k \neq j}}\left\{p\left(\omega_{k}\right) p\left(x | \omega_{k}\right)\right\}\! \right\}.
\end{equation}
Applying the Bayes formula $p(x)p\left(\omega_{j}|x\right) = p\left(\omega_j\right)p\left(x | \omega_{j}\right)$, where $p(x)$ is the prior probability of the feature $x$, we have
\begin{equation}
\label{eqn:5}
\sum_{j=1}^{M} \int_{\Gamma_{j}} p\left(\omega_{j}\right) p\left(x | \omega_{j}\right) dx = \sum_{j=1}^{M} \int_{\Gamma_{j}^{'}} p\left(x\right) p\left(\omega_{j} | x\right) dx,
\end{equation}
and the partition $\Gamma_{j}$ is equivalent to the partition $\Gamma_{j}^{'}$ with
\begin{equation}
\Gamma^{'}_{j} \triangleq \left\{ x \in X' \mid p\left(\omega_{j}|x \right)>\max _{\substack{k \neq j}}\left\{p\left(\omega_{k}|x\right)\right\} \right\}.
\end{equation}
According to our setting, there is a one-to-one relationship between features and historical series, namely, for each $x_{i-1}$ $(1 \leq i \leq n)$, there exists some $x \in X'$ such that $x_{i-1}=x$, and vice versa. As $p(x)$ is the probability of observing the feature $x$ in the feature set $X'$, while $P(x_{i-1})$ is the probability of observing the series $x_{i-1}$ among all historical series of length $i-1$, in the limit of large $n$ we have $p(x)=\frac{1}{n}P(x_{i-1})$. In addition, $\pi(x_{i-1})=p\left(\omega_{j}|x\in \Gamma^{'}_{j} \right)$ according to the definition of $\pi(\cdot)$. As a consequence,
\begin{equation}
\sum\limits_{j=1}^{M} \int_{\Gamma^{'}_{j}} p(x) p\left(\omega_{j} |x\right) dx = \lim\limits_{n\to \infty}\frac{1}{n}\sum\limits_{i=1}^{n}\sum\limits_{x_{i-1}}P(x_{i-1})\pi(x_{i-1}).
\end{equation}
Therefore, the time series predictability is equivalent to the Bayes error rate, with the relationship $\Pi=1-R$.
\end{proof}
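The correspondence can be checked concretely. The sketch below (a toy two-state chain of my own choosing, not from the paper) evaluates $\Pi$ from the $\pi(\cdot)$ definition and $R$ via the partition $\{\Gamma_j\}$ of maximal joint probabilities, and confirms $\Pi = 1 - R$ exactly:

```python
from fractions import Fraction as F

# toy two-state chain: features x in {"A", "B"} are the previous state,
# classes omega in {"A", "B"} are the next state
p_x = {"A": F(4, 7), "B": F(3, 7)}            # P(x): the stationary distribution
p_w_given_x = {"A": {"A": F(7, 10), "B": F(3, 10)},
               "B": {"A": F(4, 10), "B": F(6, 10)}}

# predictability side: Pi = sum_x P(x) * pi(x), with pi(x) = max_w P(w | x)
Pi = sum(p_x[x] * max(p_w_given_x[x].values()) for x in p_x)

# Bayes side: R = 1 - sum_j sum_{x in Gamma_j} p(w_j) p(x | w_j); by the Bayes
# formula each summand is the joint p(x) p(w_j | x), and the partition Gamma_j
# assigns each feature x to the class with the largest joint probability
joint = {(x, w): p_x[x] * p_w_given_x[x][w] for x in p_x for w in ("A", "B")}
R = 1 - sum(max(joint[(x, w)] for w in ("A", "B")) for x in p_x)

assert Pi == 1 - R   # exact equality, thanks to Fraction arithmetic
print(Pi)            # 23/35
```

Both sides reduce to the same sum of per-feature maxima, which is exactly the content of the proof.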
\section*{Results}
According to Theorem \ref{theorem:1}, we can directly take advantage of methods developed to calculate $R$ in real datasets to improve the estimation of $\Pi$. Consider a simple example with three states $\Omega = \{A,B,C\}$, where the next state depends only on the current state, according to the following Markovian transfer matrix
\begin{equation}
\begin{blockarray}{cccc}
&A&B&C\\
\begin{block}{c[ccc]}
A& q& \frac{2}{3}(1-q)& \frac{1}{3}(1-q)\\
B& \frac{1}{3}(1-q)& q& \frac{2}{3}(1-q)\\
C& \frac{2}{3}(1-q)& \frac{1}{3}(1-q)& q\\
\end{block}
\end{blockarray}.
\end{equation}
Obviously, when $0.4 \le q \le 1$, the true predictability is $T=q$. Time series of arbitrary length $n$ can be generated by Eq. 8. We set $r=1$ to extract the features. Take $\{ABBCA\cdots\}$ as an example; the corresponding (feature, class) set is $\{(A,B),(B,B),(B,C),(C,A),\cdots\}$.
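This setup is easy to simulate. The sketch below (my own code; the sample size and random seed are arbitrary choices) generates a series from the transfer matrix with $q=0.8$, builds the (feature, class) pairs for $r=1$, and recovers the true predictability $T=q$ empirically:

```python
import random
from collections import Counter, defaultdict

random.seed(42)
q = 0.8
states = "ABC"
# rows of the Markovian transfer matrix (Eq. 8): P(next state | current state)
P = {"A": [q, 2 * (1 - q) / 3, (1 - q) / 3],
     "B": [(1 - q) / 3, q, 2 * (1 - q) / 3],
     "C": [2 * (1 - q) / 3, (1 - q) / 3, q]}

series = [random.choice(states)]
for _ in range(200_000):
    series.append(random.choices(states, weights=P[series[-1]])[0])

# r = 1: each feature is just the preceding state
pairs = [(series[i - 1], series[i]) for i in range(1, len(series))]

# empirical predictability: for every feature, predict its most common class
counts = defaultdict(Counter)
for feat, cls in pairs:
    counts[feat][cls] += 1
pi_hat = sum(c.most_common(1)[0][1] for c in counts.values()) / len(pairs)
print(abs(pi_hat - q) < 0.01)   # True: the estimate matches T = q = 0.8
```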
According to the entropy-based method, the estimated predictability $\bar{\Pi}$ is determined by
\begin{equation}
\label{eqn:9}
H = -\bar{\Pi}\log_2\bar{\Pi} - (1-\bar{\Pi})\log_2(1-\bar{\Pi}) + (1-\bar{\Pi})\log_2(M-1),
\end{equation}
where $M=3$ and $H$ is the entropy of the next-moment state, which can be estimated from the data (see details in \cite{song2010limits}). In the corresponding classification problem, the lower and upper bounds of $R$ can be obtained by the inequality
\begin{equation}
\label{eqn:10}
\begin{gathered}
\frac{M-1}{(M-2) M} \sum_{i=1}^{M}\left[1-p(\omega_i)\right] R_{i}^{M-1} \leq R^{M} \leq
\min_{\alpha \in\{0,1\}} \frac{1}{M-2 \alpha} \sum_{i=1}^{M}\left[1-p(\omega_i)\right] R_{i}^{M-1}+\frac{1-\alpha}{M-2 \alpha},
\end{gathered}
\end{equation}
where $R^k$ is the BER for the $k$-classification subproblem and $R_{i}^{M-1}$ is the BER for the $(M-1)$-classification subproblem created by removing the $i$th class (see details in \cite{wisler2016empirically,renggli2021evaluating}). The upper and lower bounds of predictability can then be obtained through Eq. \ref{eqn:1} (Theorem \ref{theorem:1}), and the estimated predictability $\tilde{\Pi}$ is the average of the two bounds.
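Eq. \ref{eqn:9} has no closed-form solution for $\bar{\Pi}$, but its right-hand side is strictly decreasing in $\bar{\Pi}$ on $[1/M, 1]$, so a simple bisection recovers the estimate. The sketch below is my own implementation, not the original authors' code:

```python
import math

def solve_predictability(H, M):
    """Solve Eq. 9 for the estimated predictability Pi_bar, given the measured
    entropy H and the number of states M.  The map
    p -> -p log2 p - (1-p) log2(1-p) + (1-p) log2(M-1)
    decreases from log2(M) at p = 1/M to 0 at p = 1, so bisection applies."""
    def rhs(p):
        h = 0.0
        if 0.0 < p < 1.0:
            h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(M - 1)
    lo, hi = 1.0 / M, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if rhs(mid) > H:    # rhs decreasing: the root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# round-trip check: the entropy of a known Pi_bar should be recovered
M, Pi_true = 3, 0.8
H = (-Pi_true * math.log2(Pi_true) - (1 - Pi_true) * math.log2(1 - Pi_true)
     + (1 - Pi_true) * math.log2(M - 1))
print(round(solve_predictability(H, M), 6))   # 0.8
```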
Figure 1A shows how $\bar{\Pi}$ changes with increasing $n$ for three specific cases $q=0.4$, $q=0.6$ and $q=0.8$. The result confirms two above-mentioned disadvantages of the entropy-based method, namely that $\bar{\Pi}$ is sensitive to the length $n$ and much larger than the true predictability $T=q$. To ensure stability, we set $n=2^{15}$ and compare the entropy-based method (Eq. \ref{eqn:9}) and the BER-inspired method (Eq. \ref{eqn:10}). As shown in figure 1B, the latter remarkably and consistently outperforms the former.
Consider a more complicated series generator with $M$ states $\Omega = \{S_1,S_2,\cdots,S_M\}$, where the next state $\omega^{t+1}$ is randomly drawn from the $M$ states with probability $1-q$, or determined by the two preceding states with probability $q$. In the latter case, if $\omega^{t-1}=S_i$ and $\omega^t=S_j$, then $\omega^{t+1}=S_k$ with $k=i+j$ (if $k>M$, we set $k \leftarrow k-M$). Obviously, the true predictability is $T=q+(1-q)/M$. Figure 1C reports how $\bar{\Pi}$ changes with increasing $n$ for four specific cases $q=0.2$, $q=0.4$, $q=0.6$ and $q=0.8$, with $M=100$ fixed. As $1/M$ is much smaller than $q$ in these four cases, $T \approx q$. Analogous to what was found in figure 1A, $\bar{\Pi}$ is sensitive to $n$ and remains much larger than $T$ even after becoming nearly stable ($n>2^{15}$). As shown in figure 1D, in most cases the BER-inspired method performs better than the entropy-based method; only when the time series is highly predictable ($q\approx 1$, see the top right corner) are the results of the two methods close to each other.
To reveal the effects of the parameters $r$ and $M$, we further consider a third generator where the next state $\omega^{t+1}$ is equal to $\omega^{t}$ with probability $q_{1}=0.1$, equal to $\omega^{t-1}$ with probability $q_{2}=0.2$, and equal to $\omega^{t-2}$ with probability $q_{3}=0.3$. With probability $1-q_1-q_2-q_3$, $\omega^{t+1}$ is randomly drawn from the $M$ states. The true predictability is $T=\max\{q_1,\cdots,q_r\}+(1-q_1-q_2-q_3)/M$, sensitive to both $r$ and $M$. As shown in figure 1E, the original entropy-based method does not account for the impact of the parameter $r$, while the BER-inspired method captures the effects of the memory length $r$ well. As shown in figure 1F, both the entropy-based and BER-inspired methods capture the decreasing tendency of predictability as $M$ increases. One can clearly observe from figures 1E and 1F that the entropy-based method largely overestimates the predictability even for sufficiently long time series, while the BER-inspired method performs much better.
\begin{figure*}[t]
\centering
\subfigure{
\label{fig:side:a}
\includegraphics[width=2in]{fig_a}
}
\subfigure{
\label{fig:side:b}
\includegraphics[width=2in]{fig_b}
}
\subfigure{
\label{fig:side:c}
\includegraphics[width=2in]{fig_c}
}
\subfigure{
\label{fig:side:d}
\includegraphics[width=2.08in]{fig_d}
}
\subfigure{
\label{fig:side:e}
\includegraphics[width=2in]{fig_e}
}
\subfigure{
\label{fig:side:f}
\includegraphics[width=2.07in]{fig_f}
}
\vspace{0pt}
\caption{(A) How the estimated predictability $\bar{\Pi}$ by the entropy-based method changes with the increasing $n$ under the first series generator. (B) The performance of the entropy-based method (Eq. 9) and the BER-inspired method (Eq. 10) under the first generator. (C) How the estimated predictability $\bar{\Pi}$ by the entropy-based method changes with the increasing $n$ under the second series generator. (D) The performance of the entropy-based and BER-inspired methods under the second generator. For the BER-inspired method, $\tilde{\Pi}^{\textup{low}}$ and $\tilde{\Pi}^{\textup{up}}$ are the lower and upper bounds by Eq. (10), and $\tilde{\Pi}=\frac{1}{2}\left( \tilde{\Pi}^{\textup{low}}+\tilde{\Pi}^{\textup{up}} \right)$ is the estimated predictability. In plots (B) and (D), the shadow areas indicate to what extent the BER-inspired method outperforms the entropy-based method. (E) The performance of the entropy-based and BER-inspired methods under the third generator with varying $r$, with $M=20$ fixed. (F) The performance of the entropy-based and BER-inspired methods under the third generator with varying $M$, with $r=3$ fixed. In plots (E) and (F), the shadow areas denote the standard errors. In all comparisons between the entropy-based and BER-inspired methods, the length of time series is fixed as $n=2^{15}$, and the corresponding results are averaged over 10 independent runs.}
\label{fig:results}
\end{figure*}
\section*{Discussion}
The direct value of knowing predictability is to decide whether it is worthwhile to improve the current predictors \cite{song2010limits,lu2015toward}. The embodiment of such value requires an accurate estimate of predictability. Unfortunately, the entropy-based method \cite{song2010limits} usually fails, as it largely overestimates the true predictability (see, for example, figure \ref{fig:results}). The unsatisfactory performance comes partly from the approximation that only accounts for the entropy of the state with the maximum next-moment occurrence probability. At the same time, this approximation is indispensable for guaranteeing computational feasibility, so it is difficult to overcome the observed disadvantages within the entropic framework \cite{smith2014refined,zhang2022beyond}. This paper uncovers the equivalence between predictability and a seemingly unrelated metric, the BER, and immediately provides
a novel way to improve the estimation of predictability -- applying the BER-inspired methods.
\section*{Acknowledgement}
This work was supported in part by the National Natural Science Foundation of China (No. 61960206008, No. 62002294, No. 11975071) and the National Science Fund for Distinguished Young Scholars (No. 61725205).
| {
"timestamp": "2022-08-05T02:10:13",
"yymm": "2208",
"arxiv_id": "2208.02559",
"language": "en",
"url": "https://arxiv.org/abs/2208.02559",
"abstract": "Predictability is an emerging metric that quantifies the highest possible prediction accuracy for a given time series, being widely utilized in assessing known prediction algorithms and characterizing intrinsic regularities in human behaviors. Lately, increasing criticisms aim at the inaccuracy of the estimated predictability, caused by the original entropy-based method. In this brief report, we strictly prove that the time series predictability is equivalent to a seemingly unrelated metric called Bayes error rate that explores the lowest error rate unavoidable in classification. This proof bridges two independently developed fields, and thus each can immediately benefit from the other. For example, based on three theoretical models with known and controllable upper bounds of prediction accuracy, we show that the estimation based on Bayes error rate can largely solve the inaccuracy problem of predictability.",
"subjects": "Information Theory (cs.IT); Data Analysis, Statistics and Probability (physics.data-an)",
"title": "Equivalence between Time Series Predictability and Bayes Error Rate",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9799765552017675,
"lm_q2_score": 0.8175744761936437,
"lm_q1q2_score": 0.8012038188011363
} |
https://arxiv.org/abs/1212.1412 | Antiderivatives Exist without Integration | We present a proof that any continuous function with domain including a closed interval yields an antiderivative of that function on that interval. This is done without the need of any integration comparable to that of Riemann, Cauchy, or Darboux. The proof is based on one given by Lebesgue in 1905. | \section*{Lebesgue's Construction}
Throughout, let $f$ be a function that is continuous on an interval $[a, b]$.
We use Lebesgue's notation and give a modern rendition of his construction. Suppose $\{(a_{0},d_{0}), (a_{1},d_{1}), \dots, (a_{n},d_{n})\}$ is a set of $n+1$ points with $a=a_{0} < a_{1} < \dots < a_{n}=b$ in the interval $[a,b]$. If desired, allow $f(a_{i}) = d_{i}$, $i = 0, 1, \dots, n$.
Lebesgue defines a continuous function $\phi$ with domain including $[a,b]$ such that
for $i=0,1,\dots,n-1$, there are numbers $m_{i},b_{i}$ such that $\phi(x) = m_{i}x+b_{i}$ for each $x$ in $[a_{i},a_{i+1}]$, and $m_{i}a_{i+1}+b_{i}=m_{i+1}a_{i+1}+b_{i+1}$ holds for $i=0,1,\dots,n-2$.
Lebesgue defines an antiderivative $\Phi$ for $\phi$ on $[a,b]$ as follows:
\begin{enumerate}
\item Let $\Phi_{0}(x) = (m_{0}/2)x^{2} + b_{0}x - (m_{0}/2)a_{0}^{2}-b_{0}a_{0}$ for each $x$ in $[a_{0},a_{1}]$.
\item Define $\Phi_{1}(x) = (m_{1}/2)x^{2} + b_{1}x + \Phi_{0}(a_{1}) - (m_{1}/2)a_{1}^{2}-b_{1}a_{1}$ for each $x$ in $[a_{1},a_{2}]$. Inductively, define $\Phi_{i}, i=2,3,\dots,n-1$.
\item Since $\Phi_{0}(a_{0}), \Phi_{1}(a_{1}), \dots, \Phi_{n-1}(a_{n-1})$ are well defined, the function $\Phi$ is defined as follows:
\begin{equation*}
\Phi(x) = \begin{cases}
\displaystyle \frac{m_{0}}{2}x^{2} + b_{0}x - \frac{m_{0}}{2}a_{0}^{2} - b_{0}a_{0}, & \text{if $x \in [a_{0},a_{1}]$}; \\
& \\
\displaystyle \frac{m_{i}}{2}x^{2} + b_{i}x +\Phi_{i-1}(a_{i})- \frac{m_{i}}{2}a_{i}^{2} - b_{i}a_{i}, & \text{if $x \in [a_{i},a_{i+1}]$},i=1,2,\dots,n-1.
\end{cases}
\end{equation*}
\end{enumerate}
The constructed function $\Phi$ consists of $n$ second degree polynomials whose left and right slopes at $a_{1},a_{2},\dots,a_{n-1}$, respectively, are equal.
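The construction is easy to carry out numerically. Below is a sketch (my own code, with $f=\cos$ as an arbitrary test function) that builds $\Phi$ exactly as above and compares it with the known antiderivative $\sin$:

```python
import math

def lebesgue_antiderivative(a, d):
    """Antiderivative Phi of the piecewise-linear interpolant phi of the points
    (a_i, d_i), built as in Lebesgue's construction: on [a_i, a_{i+1}],
    Phi(x) = (m_i/2) x^2 + b_i x + c_i, with c_i chosen so consecutive pieces
    agree at the knots and Phi(a_0) = 0."""
    n = len(a) - 1
    m = [(d[i + 1] - d[i]) / (a[i + 1] - a[i]) for i in range(n)]
    b = [d[i] - m[i] * a[i] for i in range(n)]
    c = [-(m[0] / 2) * a[0] ** 2 - b[0] * a[0]]
    for i in range(1, n):
        prev = (m[i - 1] / 2) * a[i] ** 2 + b[i - 1] * a[i] + c[i - 1]  # Phi_{i-1}(a_i)
        c.append(prev - (m[i] / 2) * a[i] ** 2 - b[i] * a[i])

    def Phi(x):
        i = n - 1
        for j in range(n):          # locate the piece containing x
            if x <= a[j + 1]:
                i = j
                break
        return (m[i] / 2) * x ** 2 + b[i] * x + c[i]

    return Phi

# demo: interpolate f = cos at 65 equally spaced knots on [0, pi] and
# compare Phi with the true antiderivative sin
a = [k * math.pi / 64 for k in range(65)]
d = [math.cos(t) for t in a]
Phi = lebesgue_antiderivative(a, d)
err = max(abs(Phi(x) - math.sin(x)) for x in (k * math.pi / 200 for k in range(201)))
print(err < 1e-3)   # True: the error shrinks as the partition is refined
```

Refining the partition, as in the regular partitions $P_n$ used below, drives the error to zero; this is the numerical shadow of the uniform convergence argument in the outline.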
From the above construction, it can be shown that there is a function $F$ whose derivative is $f$. This is the unique contribution of Lebesgue. The reader may see how the proof might proceed from this point; however, we give an outline below.
\section*{Outline of Proof}
For the following, we use partition of $[a,b]$ to mean any finite collection of subintervals of $[a,b]$ that are non-overlapping and whose union is $[a,b]$. A refinement $P^{\prime}$ of a partition $P$ is merely another partition of $[a,b]$ such that each end point of each member of $P$ is also an end point of a member of $P^{\prime}$. For $n=1,2,\dots$, we let $P_{n}$ denote a regular partition of $[a,b]$ with $2^{n-1}$ members each having length $(b-a)/2^{n-1}$. This makes $P_{m}$ a refinement of $P_{n}$ for positive integers $m,n$ where $m \ge n$. The functions $\phi_{n}$ and $\Phi_{n}$ will be based on the partition $P_{n}$ for $n =1,2, \dots$.
When $P_{n}$, $\phi_{n}$, and $\Phi_{n}$ are used, assume that the subscript is a positive integer unless otherwise stated.
We remind the reader that any continuous function achieves extrema on any closed interval in its domain. This makes the following definition non-vacuous.
\begin{definition} Suppose $P$ is a partition of $[a,b]$. By the {\bf oscillation} of $f$ on $\delta \in P$ (written $\omega_{\delta}$), we mean the real number
\[\omega_{\delta} = \underset{\delta}{\max} f - \underset{\delta}{\min} f.\]
Moreover, by the {\bf total oscillation} of $f$ on a partition $P$ of $[a,b]$ (written $\Omega(P)$), we mean the maximum value of the finite set of oscillations $\{\omega_{\delta} : \delta \in P\}$.
\end{definition}
Writing $\Omega_{n}$ for $\Omega(P_{n})$: whenever each of $m$ and $n$ is a positive integer with $m \ge n$ and $\Omega_{n} < \epsilon$ for some $\epsilon > 0$, then $\Omega_{m} < \epsilon$. (Remember that $P_{m}$ is a refinement of $P_{n}$.) This result is an application of the previous definitions of oscillation and total oscillation. We state the following lemmas without proofs, each ultimately being an application of the Heine--Borel Theorem or the basic definitions of oscillation and total oscillation given above.
\begin{lemma} \label{ep2}
For each $\epsilon >0$, there is a positive integer $n$ such that
$\Omega_{m} < \epsilon$ for each positive integer $m \ge n$; that is, $\Omega_{n} \rightarrow 0$ as $n \rightarrow \infty$.
\end{lemma}
\begin{lemma} \label{Omega4}
For each positive integer $n$, $ |f(x) - \phi_{n}(x)| \le \Omega_{n}$ for each $x$ in $[a,b]$; that is, $\phi_{n}(x)$ converges to $f(x)$ for each $x \in [a,b]$.
\end{lemma}
\begin{comment}
\begin{proof} Suppose $x$ is any point in $[a,b]$, $P_{n}$ is any partition of $[a,b]$, $\delta$ is any member of $P_{n}$. Because $f$ is continuous on $[a,b]$, $\underset{\delta}{\min} f$ and $\underset{\delta}{\max} f$ both exist on $[a,b]$, positioning $f(x)$ between $\underset{\delta}{\min} f$ and $\underset{\delta}{\max} f$. Since
$\phi_{n}$ is a line segment agreeing with $f$ at its end points, $\phi_{n}(x)$ lies between $\underset{\delta}{\min} f$ and $\underset{\delta}{\max} f$.
Thus, we see that $\underset{\delta}{\min} f - \underset{\delta}{\max} f \le f(x) - \phi_{n}(x) \le \underset{\delta}{\max} f - \underset{\delta}{\min} f$. With the definition of $\omega_{\delta}$, we have
\begin{equation*} \label{Exist:Eq} |f(x) - \phi_{n}(x)| \le \underset{\delta}{\max} f - \underset{\delta}{\min} f = \omega_{\delta}.
\end{equation*}
Therefore, from the definition of $\Omega_{n}$,
\begin{equation*}\label{Omega} |f(x) - \phi_{n}(x)| \le \Omega_{n}.\end{equation*}
In summary, for each positive integer $n$, $ |f(x) - \phi_{n}(x)| \le \Omega_{n}$ for each $x$ in $[a,b]$.
\end{proof}
\end{comment}
\begin{theorem} The sequence $\{\Phi_{n}\}$ converges uniformly to a function $F$ that is an antiderivative of $f$.
\end{theorem}
\begin{proof}
By Lemma~\ref{Omega4} and Theorem 7.9 of \cite{Rudin}, we know that $\phi_{n} \rightarrow f$ uniformly on $[a,b]$. By Theorem 7.17 of \cite{Rudin}, $\Phi_{n}$ converges uniformly to $F$ and
\[\phi_{n}(x) \rightarrow F^{\prime}(x)\]
for each $x \in [a,b]$. Thus, $F^{\prime} = f$ on $[a,b]$.
\end{proof}
Lebesgue finished his paper by proving that the integral of $f$ exists on $[a,b]$. He did this by applying a construction that he developed in his proof that $F^{\prime} = f$.
| {
"timestamp": "2012-12-07T02:04:39",
"yymm": "1212",
"arxiv_id": "1212.1412",
"language": "en",
"url": "https://arxiv.org/abs/1212.1412",
"abstract": "We present a proof that any continuous function with domain including a closed interval yields an antiderivative of that function on that interval. This is done without the need of any integration comparable to that of Riemann, Cauchy, or Darboux. The proof is based on one given by Lebesgue in 1905.",
"subjects": "History and Overview (math.HO)",
"title": "Antiderivatives Exist without Integration",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9799765575409524,
"lm_q2_score": 0.817574471748733,
"lm_q1q2_score": 0.801203816357686
} |
https://arxiv.org/abs/2002.00080 | Convergence rate analysis and improved iterations for numerical radius computation | The main two algorithms for computing the numerical radius are the level-set method of Mengi and Overton and the cutting-plane method of Uhlig. Via new analyses, we explain why the cutting-plane approach is sometimes much faster or much slower than the level-set one and then propose a new hybrid algorithm that remains efficient in all cases. For matrices whose fields of values are a circular disk centered at the origin, we show that the cost of Uhlig's method blows up with respect to the desired relative accuracy. More generally, we also analyze the local behavior of Uhlig's cutting procedure at outermost points in the field of values, showing that it often has a fast Q-linear rate of convergence and is Q-superlinear at corners. Finally, we identify and address inefficiencies in both the level-set and cutting-plane approaches and propose refined versions of these techniques. | \section{Introduction}
Consider the discrete-time dynamical system
\begin{equation}
\label{eq:ode_disc}
x_{k+1} = Ax_k,
\end{equation}
where $A \in \mathbb{C}^{n \times n}$ and $x_k \in \mathbb{C}^n$.
The asymptotic behavior of \eqref{eq:ode_disc} is of course characterized by the moduli of the eigenvalues of $A$, i.e., of its spectrum $\Lambda(A)$.
Given the \emph{spectral radius} of~$A$,
\begin{equation}
\rho(A) \coloneqq \max \{ |\lambda| : \lambda \in \Lambda(A)\},
\end{equation}
$\lim_{k \to \infty} \| x_k\| = 0$ for all $x_0$ if and only if $\rho(A) < 1$, with the asymptotic decay rate being faster
the closer $\rho(A)$ is to zero.
However, knowing the transient behavior of~\eqref{eq:ode_disc} is often of interest.
Clearly, the trajectory of~\eqref{eq:ode_disc} is tied to powers of~$A$,
since $x_k = A^k x_0$ and so \mbox{$\|x_k\| \leq \|A^k\| \|x_0\|$}. Indeed,
a central theme of Trefethen and Embree's treatise on pseudospectra \cite{TreE05}
is how large $\sup_{k \geq 0} \|A^k\|$ can be.
One perspective is given by the \emph{field of values} (\emph{numerical range}) of $A$,
\begin{equation}
\label{eq:fov}
\fovX{A} \coloneqq \{ x^* A x : x \in \mathbb{C}^n, \|x\| = 1\}.
\end{equation}
Consider the maximum of the moduli of points in $\fovX{A}$, i.e., the \emph{numerical radius}
\begin{equation}
\label{eq:numr}
r(A) \coloneqq \max \{ | z | : z \in \fovX{A}\}.
\end{equation}
It is known that $\tfrac{1}{2}\|A\| \leq r(A) \leq \|A\|$; see \cite[p.~44]{HorJ91}.
Combining the lower bound with the power inequality $r(A^k) \leq (r(A))^k$ \cite{Ber65,Pea66} yields
\begin{equation}
\label{eq:numr_ineq}
\| A^k \| \leq 2 (r(A))^k.
\end{equation}
As $2(r(A))^k \leq \|A\|^k$ if and only if $r(A) \leq \sqrt[k]{0.5} \|A\|$, and $r(A) \leq \|A\|$ always holds,
it follows that $2(r(A))^k$ is often a tighter upper bound for $\|A^k\|$ than $\|A\|^k$ is,
and so the numerical radius can be useful in estimating the transient behavior
of \eqref{eq:ode_disc}.\footnote{Per \cite{TreE05},
the \emph{pseudospectral radius} and the \emph{Kreiss constant} \cite{Kre62}
also give information on the trajectory of~\eqref{eq:ode_disc}.
For computing these quantities, see \cite{MenO05,BenM19} and \cite{Mit20a,Mit21}.}
The concept of the numerical radius dates to at least 1961 (see \cite[p.~1005]{LewO20}),
but methods to compute it seem to have first appeared only in the 1990s.
Mathias showed that~$r(A)$ can be obtained by solving a semidefinite program~\cite{Mat93},
but doing so is expensive.
Much faster algorithms were then proposed by He and Watson \cite{Wat96,HeW97},
but these methods may not converge to $r(A)$.
In 2005, Mengi and Overton gave a fast globally convergent method
to compute $r(A)$~\cite{MenO05} by combining an idea of
He and Watson \cite{HeW97}
with the level-set approach of Boyd, Balakrishnan, Bruinsma, and Steinbuch (BBBS) \cite{BoyB90,BruS90}
for computing the $\Hcal_\infty$ norm.
While Mengi and Overton observed that their method converged quadratically,
this was only later proved in 2012 by G\"urb\"uzbalaban in his PhD thesis~\cite[section~3.4]{Gur12}.
In 2009, Uhlig proposed a geometric approach
to computing $r(A)$~\cite{Uhl09}.\footnote{In this same paper~\cite[section~3]{Uhl09}, Uhlig also
discussed how Chebfun~\cite{DriHT14} can be used to reliably compute~$r(A)$ with just a few lines of MATLAB,
but that it is generally orders of magnitude slower than either his method
or the one of Mengi and Overton; see also~\cite{GreO18}.}
Uhlig's method is based on Johnson's cutting-plane technique for approximating $\fovX{A}$ \cite{Joh78},
which itself stems from the much earlier Bendixson-Hirsch theorem \cite{Ben02a} and fundamental results of Kippenhahn \cite{Kip51}. In \cite[Remark~3]{Joh78}, Johnson
observed that while his $\fovX{A}$ boundary method could be adapted to compute $r(A)$,
a modified version might be more efficient.
These geometric approaches all work by computing a number of supporting hyperplanes
to sufficiently approximate the boundary of $\fovX{A}$ or a region of it where $r(A)$ is attained.
A major benefit of cutting-plane methods is that they only require
computing $\lambda_\mathrm{max}$ of $n \times n$ Hermitian matrices.
If $A$ is sparse, this can be done efficiently and reliably using,
say, \texttt{eigs} in MATLAB\@.
Hence, Uhlig's method can be used on large-scale problems while still being globally convergent.
In contrast, at every iteration, the level-set approach requires
solving a generalized eigenvalue problem of order $2n$, which, by standard conventions on work
complexity, is an atomic operation requiring $\mathcal{O}(n^3)$~work.
While Uhlig noted that convergence of his method can sometimes be quite slow~\cite[p.~344]{Uhl09},
his experiments in the same paper showed several problems where
his cutting-plane method was decisively faster than Mengi and Overton's level-set method.
The paper is organized as follows.
In~\cref{sec:background}, we give necessary preliminaries on the field of values, the numerical
radius, and earlier $r(A)$ algorithms.
We then identify and address some inefficiencies in the level-set method
of Mengi and Overton and propose a faster variant in~\cref{sec:alg1}.
We analyze Uhlig's method in \cref{sec:rate}, deriving
(a) its overall cost when the field of values is a disk centered at the origin, and (b)
a Q-linear local rate of convergence result for its cutting procedure.
These analyses precisely show how, depending on the problem,
Uhlig's method can be either extremely fast or extremely slow.
In \cref{sec:alg2}, we identify an inefficiency in Uhlig's cutting procedure
and address it via a more efficient cutting scheme whose exact convergence rate
we also derive.
Putting all of this together, we present our new hybrid algorithm in \cref{sec:hybrid}.
We validate our results experimentally in \cref{sec:experiments} and give concluding remarks in~\cref{sec:conclusion}.
\section{Preliminaries}
\label{sec:background}
We will need the following well-known facts~\cite{Kip51,HorJ91}:
\begin{remark}
\label{rem:fov}
Given $A \in \mathbb{C}^{n \times n}$,
\begin{enumerate}[leftmargin=30pt,label=(A\arabic*),font=\normalfont]
\item $\fovX{A} \subset \mathbb{C}$ is a compact, convex set,
\item if $A$ is real, then $\fovX{A}$ has real axis symmetry,
\item if $A$ is normal, then $\fovX{A}$ is the convex hull of $\Lambda(A)$,
\item $\fovX{A} = [\lambda_\mathrm{min}(A), \lambda_\mathrm{max}(A)]$ if and only if $A$ is Hermitian,
\item the boundary of $\fovX{A}$, $\bfovX{A}$, is a piecewise smooth algebraic curve,
\item if $v \in \bfovX{A}$ is a point where $\bfovX{A}$ is not differentiable, i.e., a corner, then $v \in \Lambda(A)$.
Corners always correspond to two line segments in $\bfovX{A}$ meeting at some angle less than $\pi$ radians.
\end{enumerate}
\end{remark}
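Properties (A1) and (A4) can be sanity-checked by sampling points $x^*Ax$ for random unit vectors $x$; the Hermitian test matrix below is an arbitrary illustrative choice:

```python
import math
import random

random.seed(0)

def fov_point(A, x):
    # x^* A x for a 2x2 matrix A and a unit vector x.
    y = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0].conjugate() * y[0] + x[1].conjugate() * y[1]

A = [[1.0, 2.0], [2.0, -1.0]]        # Hermitian, with eigenvalues +/- sqrt(5)
for _ in range(200):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    z = fov_point(A, [v[0] / n, v[1] / n])
    # (A4): for Hermitian A, every x^* A x is real and lies in
    # [lambda_min(A), lambda_max(A)].
    assert abs(z.imag) < 1e-12
    assert -math.sqrt(5) - 1e-12 <= z.real <= math.sqrt(5) + 1e-12
```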
\begin{definition}
Given a nonempty closed set $\mathcal{D} \subset \mathbb{C}$,
a point $\tilde z \in \mathcal{D}$ is (globally) \emph{outermost}
if $|\tilde z| = \max\{|z| : z \in \mathcal{D}\}$ and
\emph{locally outermost} if $\tilde z$ is an outermost point of $\mathcal{D} \cap \mathcal{N}$, for some neighborhood $\mathcal{N}$ of $\tilde z$.
\end{definition}
For continuous-time systems $\dot x = Ax$, we have the \emph{numerical abscissa}
\begin{equation}
\label{eq:numa}
\alpha_\fovsym(A) \coloneqq \max \{ \Re z : z \in \fovX{A}\},
\end{equation}
i.e., the maximal real part of all points in $\fovX{A}$.
Unlike the numerical radius, computing the numerical abscissa is straightforward, as~\cite[p.~34]{HorJ91}
\begin{equation}
\label{eq:tan_line}
\alpha_\fovsym(A) = \lambda_\mathrm{max} \left(\tfrac{1}{2} \left(A + A^*\right)\right).
\end{equation}
For $\theta \geq 0$, $\fovX{\eix{\theta} A}$ is $\fovX{A}$ rotated counter-clockwise about the origin. Consider
\begin{equation}
\label{eq:hmat}
H(\theta) \coloneqq \tfrac{1}{2} \left(\eix{\theta} A + \emix{\theta} A^*\right),
\end{equation}
so $\numaX{\eix{\theta} A} = \lambda_\mathrm{max}(H(\theta))$ and $\alpha_\fovsym(A) = \lambda_\mathrm{max}(H(0))$.
Let $\lambda_\theta$ and~$x_\theta$ denote, respectively, $\lambda_\mathrm{max}(H(\theta))$
and an associated normalized eigenvector. Furthermore, let $L_\theta$ denote the line $\{ \emix{\theta} (\lambda_\theta + \mathbf{i} t): t \in \mathbb{R}\}$
and $P_\theta$ the half plane $\emix{\theta} \{ z : \Re z \leq \lambda_\theta\}$.
Then $L_\theta$ is a \emph{supporting hyperplane} for~$\fovX{A}$
and \cite[p.~597]{Joh78}
\begin{enumerate}[leftmargin=30pt,label=(B\arabic*),font=\normalfont]
\item $\fovX{A} \subseteq P_\theta$ for all $\theta \in [0,2\pi)$,
\item $\fovX{A} = \cap_{\theta \in [0,2\pi)} P_\theta$,
\item $z_\theta = x_\theta^*Ax_\theta \in L_\theta$ is a \emph{boundary point} of $\fovX{A}$.
\end{enumerate}
As $H(\theta + \pi) = -H(\theta)$,
$P_{\theta+\pi}$ can also be obtained via $\lambda_\mathrm{min}(H(\theta))$ and an associated eigenvector.
The Bendixson-Hirsch theorem is a special case of these properties, defining the bounding box
of $\fovX{A}$ for $\theta = 0$ and~\mbox{$\theta = \tfrac{\pi}{2}$}.
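As a small sanity check of (B1) and (B3), the sketch below (the $2\times2$ matrix and the angle $\theta$ are arbitrary illustrative choices) computes $\lambda_\theta$ and $x_\theta$ in closed form and verifies that $z_\theta = x_\theta^* A x_\theta$ lies on $L_\theta$ while sampled points of $\fovX{A}$ stay inside the half plane $P_\theta$:

```python
import cmath
import math
import random

random.seed(1)

def lam_and_vec(A, theta):
    # Largest eigenvalue of H(theta) and a unit eigenvector, 2x2 case.
    w = cmath.exp(1j * theta)
    a = (w * A[0][0]).real
    d = (w * A[1][1]).real
    b = (w * A[0][1] + (w * A[1][0]).conjugate()) / 2
    lam = (a + d) / 2 + math.hypot((a - d) / 2, abs(b))
    v = [b, lam - a] if abs(b) > 1e-14 else ([1.0, 0.0] if a >= d else [0.0, 1.0])
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return lam, [v[0] / n, v[1] / n]

def quad(A, x):  # x^* A x
    y = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0].conjugate() * y[0] + x[1].conjugate() * y[1]

A = [[1.0, 2.0], [0.0, 0.0]]
theta = 0.7
lam, x = lam_and_vec(A, theta)
z = quad(A, x)
# (B3): z_theta lies on L_theta, i.e., Re(e^{i*theta} z_theta) = lambda_theta.
assert abs((cmath.exp(1j * theta) * z).real - lam) < 1e-12
# (B1): sampled points of F(A) satisfy Re(e^{i*theta} z) <= lambda_theta.
for _ in range(200):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    zz = quad(A, [v[0] / n, v[1] / n])
    assert (cmath.exp(1j * theta) * zz).real <= lam + 1e-12
```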
When $\lambda_\mathrm{max}(H(\theta))$ is simple,
the following result of Fiedler~\cite[Theorem~3.3]{Fie81}
gives a formula for computing the \emph{radius of curvature $\tilde r$ of $\bfovX{A}$ at~$z_\theta$},
defined as the radius of the \emph{osculating circle of $\bfovX{A}$ at~$z_\theta$}, i.e.,
the circle with the same tangent and curvature as $\bfovX{A}$ at $z_\theta$.
At corners of $\bfovX{A}$, we say that~$\tilde r = 0$,
while at other boundary points where the radius of curvature is well defined,\footnote{An example
where $\tilde r$ is not well defined is given by $A = \begin{bsmallmatrix} J & 0 \\ 0 & J+I \end{bsmallmatrix}$
with $J = \begin{bsmallmatrix} 0 & 1 \\ 0 & 0 \end{bsmallmatrix}$.
At~\mbox{$b=0.5 \in \bfovX{A}$},
two of the algebraic curves, a line segment
and a semi-circle, comprising $\bfovX{A}$
meet, and $\bfovX{A}$ is only once differentiable at this non-corner boundary point.
Here, the radius of curvature of $\bfovX{A}$ jumps from $0.5$ (for the semi-circular piece)
to $\infty$ (for the line segment).}
$\tilde r > 0$ and becomes infinite at points inside line segments in $\bfovX{A}$.
Although the formula is given for $\theta=0$ and~$z_\theta = 0$,
by simple rotation and shifting, it can be applied generally.
See \cref{fig:demo_fov} for a depiction of the osculating circle of $\bfovX{A}$ at
an outermost point in~$\fovX{A}$.
\begin{theorem}[Fiedler]
\label{thm:curvature}
Let $H_1 = \tfrac{1}{2}(A + A^*)$, \mbox{$H_2 = \tfrac{1}{2\mathbf{i}}(A - A^*)$},
let $H_1^+$ be the Moore-Penrose pseudoinverse of $H_1$,
and let $x_\theta$ be a normalized eigenvector
corresponding to $\lambda_\mathrm{max}(H(\theta))$.
Noting that $A = H_1 + \mathbf{i} H_2$ and $H(0) = H_1$,
suppose that \mbox{$\lambda_\mathrm{max}(H_1) = 0$} and is simple,
and that the associated boundary point \mbox{$z_\theta = x_\theta^* A x_\theta = 0$}, where $\theta=0$.
Then the radius of curvature of $\bfovX{A}$ at $z_\theta$ is
\begin{equation}
\label{eq:curvature}
\tilde r = -2 (H_2 x_\theta)^* H_1^+ (H_2 x_\theta).
\end{equation}
\end{theorem}
\begin{figure}
\centering
\subfloat[The field of values and local curvature.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{demo_fov}}
\label{fig:demo_fov}
}
\subfloat[Level-set iterates for $h(\theta)$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{demo_levelset}}
\label{fig:demo_levelset}
}
\caption{For a random matrix $A \in \mathbb{C}^{10 \times 10}$,
the left and right panes respectively show $\bfovX{A}$ (blue curve) and $h(\theta)$ (blue plot).
On the left, the following are also shown: $\Lambda(A)$ (black dots),
a polygonal approximation $\mathcal{G}_j$ to $\fovX{A}$ (blue polygon),
the outermost point in $\fovX{A}$ (small blue circle) with the corresponding
supporting hyperplane (blue line) and osculating circle (dashed red circle),
and the circle of radius~$r(A)$ centered at the origin (black dotted circle).
On the right, three iterations of the level-set method are also shown (small circles and dash-dotted lines in red).
}
\label{fig:demo}
\end{figure}
Via \eqref{eq:tan_line} and \eqref{eq:hmat},
the numerical radius can be written as
\begin{equation}
\label{eq:nr_opt}
r(A)
= \max_{\theta \in [0,2\pi)} h(\theta)
\qquad \text{where}
\qquad
h(\theta) \coloneqq \lambda_\mathrm{max} \left(H(\theta)\right),
\end{equation}
i.e., a one-variable maximization problem.
Via $H(\theta + \pi) = -H(\theta)$, it also follows that
\begin{equation}
\label{eq:nr_opt_abs}
r(A) = \max_{\theta \in [0,\pi)} \rho( H(\theta)).
\end{equation}
However, as \eqref{eq:nr_opt} and \eqref{eq:nr_opt_abs} may have multiple maxima,
it is not straightforward to find
a global maximizer of either or, crucially, to certify that a candidate maximizer is indeed global, which is needed to verify that $r(A)$ has been computed.
Per (A3), we generally assume that $A$ is non-normal, as otherwise $r(A) = \rho(A)$.
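For $2 \times 2$ matrices, $\fovX{A}$ is an ellipse with foci at the two eigenvalues (the elliptical range theorem), which yields closed-form values of $r(A)$ to test against. A minimal grid-sampling sketch of \eqref{eq:nr_opt_abs} follows; the test matrix and grid size are illustrative choices:

```python
import cmath
import math

def rho_H(A, theta):
    # Spectral radius of H(theta) for a 2x2 complex matrix A.
    w = cmath.exp(1j * theta)
    a = (w * A[0][0]).real
    d = (w * A[1][1]).real
    b = (w * A[0][1] + (w * A[1][0]).conjugate()) / 2
    m, s = (a + d) / 2, math.hypot((a - d) / 2, abs(b))
    return max(abs(m + s), abs(m - s))

A = [[1.0, 2.0], [0.0, 0.0]]
# F(A) is the ellipse with foci at the eigenvalues 0 and 1 and minor axis 2;
# its rightmost point (1 + sqrt(5))/2 attains r(A).
r = max(rho_H(A, math.pi * k / 10000) for k in range(10000))
assert abs(r - (1 + math.sqrt(5)) / 2) < 1e-12
```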
We now discuss earlier numerical radius algorithms in more detail.
In 1996, Watson proposed two $r(A)$ methods~\cite{Wat96}:
one which converges to local maximizers of~\eqref{eq:nr_opt_abs}
and a second which lacks convergence guarantees but is cheaper
(though they each do $\mathcal{O}(n^2)$ work per iteration).
However, as both iterations are related to the power
method, they may exhibit very slow convergence, and the cheaper iteration may
not converge at all. Shortly thereafter~\cite{HeW97}, He and Watson used
the second iteration (because it was cheaper) in combination with a new certificate test inspired by Byers' distance to
instability algorithm~\cite{Bye88}. This certificate either asserts that $r(A)$ has been
computed to a desired accuracy or provides a way to restart Watson's cheaper
iteration with the hope of more accurately estimating~$r(A)$.
However, He and Watson's method is still not guaranteed to converge,
since Watson's cheaper iteration may not converge.
Inspired by the BBBS algorithm for computing the $\Hcal_\infty$~norm,
Mengi and Overton then proposed a globally convergent iteration for~$r(A)$ in 2005
by using He and Watson's certificate test in a much more powerful way.
Given $\gamma \leq r(A)$, the test actually allows one to obtain the $\gamma$-level set
of $h(\theta)$, i.e., $\{ \theta : h(\theta) = \gamma\}$.
Assuming the level set is not empty, Mengi and Overton's method then evaluates~$h(\theta)$
at the midpoints of the level-set intervals of~$h(\theta)$ determined by the $\gamma$-level-set points.
The estimate $\gamma$ is then updated (increased) to the largest of the corresponding function values.
This process is done in a loop, and as mentioned in the introduction, has local quadratic convergence.
See \cref{fig:demo_levelset} for a depiction of this level-set iteration.
The certificate (or level-set) test is based on \cite[Theorem~3.1]{MenO05},
which is a slight restatement of \cite[Theorem~2]{HeW97} from He and Watson.
We omit the proof.
\begin{theorem}
\label{thm:level}
Given $\gamma \in \mathbb{R}$,
the pencil $R_\gamma - \lambda S$ has $\eix{\theta}$ as an eigenvalue or is singular if and only if
$\gamma$ is an eigenvalue of $H(\theta)$ defined in \eqref{eq:hmat}, where
\begin{equation}
\label{eq:RS}
R_\gamma \coloneqq
\begin{bmatrix}
2\gamma I & -A^* \\
I & 0
\end{bmatrix}
\quad \text{and} \quad
S \coloneqq
\begin{bmatrix}
A & 0 \\
0 & I
\end{bmatrix}.
\end{equation}
\end{theorem}
Per \cref{thm:level}, the $\gamma$-level set of $h(\theta) = \lambda_\mathrm{max}(H(\theta))$
is associated with the unimodular eigenvalues of $R_\gamma - \lambda S$,
which can be obtained in $\mathcal{O}(n^3)$ work (with a significant constant factor).
Note that the converse may not hold, i.e.,
for a unimodular eigenvalue of $R_\gamma - \lambda S$,
$\gamma$ may correspond to an eigenvalue of $H(\theta)$ other than $\lambda_\mathrm{max}(H(\theta))$.
Given any $\theta \in [0,2\pi)$, also note that $R_\gamma - \lambda S$ is nonsingular for all $\gamma > h(\theta)$.
This is because if $\fovX{A}$ and a disk centered at the origin enclosing $\fovX{A}$ have more than $n$ shared boundary points,
then $\fovX{A}$ is that disk; see \cite[Lemma~6]{TamY99}.
Uhlig's method computes $r(A)$ via updating a bounded convex polygonal approximation
to $\fovX{A}$ and a set of known points in $\bfovX{A}$, respectively given by:
\begin{equation*}
\mathcal{G}_j
\coloneqq
\bigcap_{\theta \in \left\{\theta_1,\ldots,\theta_j\right\}} P_\theta
\qquad \text{and} \qquad
\mathcal{Z}_j \coloneqq
\{ z_{\theta_1}, \ldots, z_{\theta_j} \},
\end{equation*}
where $\fovX{A} \subseteq \mathcal{G}_j$ (see~\cref{fig:demo_fov} for a depiction),
$0 \leq \theta_1 < \cdots < \theta_j < 2\pi$,
and
$z_{\theta_\ell} = x_{\theta_\ell}^* A x_{\theta_\ell}$
is a boundary point of $\fovX{A}$ on $L_{\theta_\ell}$
for $\ell = 1,\ldots,j$.
Note that the corners of $\mathcal{G}_j$ are given by
$L_{\theta_{\ell}} \cap L_{\theta_{\ell+1}}$ for $\ell = 1,\ldots,j-1$ and $L_{\theta_1} \cap L_{\theta_j}$.
Given $\mathcal{G}_j$ and $\mathcal{Z}_j$, lower and upper bounds
$l_j \leq r(A) \leq u_j$
are immediate, where
\[
l_j \coloneqq \max \{ |b| : b \in \mathcal{Z}_j \}
\quad \text{and} \quad
u_j \coloneqq \max \{ |c| : c \text{ a corner of } \mathcal{G}_j \},
\]
so we define the relative error estimate:
\begin{equation}
\label{eq:nr_err}
\varepsilon_j \coloneqq
\frac{ u_j - l_j }{ l_j}.
\end{equation}
By repeatedly cutting outermost corners of $\mathcal{G}_j$, and in turn,
adding computed boundary points of $\fovX{A}$ to $\mathcal{Z}_j$,
it follows that $\varepsilon_k$ must fall
below any desired relative tolerance for some~$k \geq j$;
hence, $r(A)$ can be computed to any desired accuracy.
Uhlig's method achieves this via a greedy strategy.
On each iteration, his algorithm chops off an outermost corner $c_j$ from $\mathcal{G}_j$,
which is done via computing the
supporting hyperplane~$L_{\theta_{j+1}}$ for~\mbox{$\theta_{j+1} = -\Arg(c_{j})$}
and the boundary point $z_{\theta_{j+1}} = x_{\theta_{j+1}}^* A x_{\theta_{j+1}}$.
Assuming that $c_j \not\in \fovX{A}$, the cutting operation results in
\mbox{$\mathcal{G}_{j+1} \coloneqq \mathcal{G}_j \cap P_{\theta_{j+1}}$},
a smaller polygonal region excluding the corner~$c_j$,
and
\mbox{$\mathcal{Z}_{j+1} \coloneqq \mathcal{Z}_j \cup \{z_{\theta_{j+1}}\}$};
therefore, $\varepsilon_{j+1} \leq \varepsilon_j$.
However, if~$c_j$ happens to be a corner of $\bfovX{A}$, then it cannot be cut from $\mathcal{G}_j$, and
instead this operation asserts that $|c_j| = r(A)$,
and so $r(A)$ has been computed.
In \cref{sec:rate}, \Cref{fig:uhlig} depicts Uhlig's method when a corner is cut.
\begin{remark}
\label{rem:two_hp}
Recall that the parallel supporting hyperplane $L_{\theta_{j+1} + \pi}$
and the corresponding boundary point
$z_{\theta_{j+1} + \pi}$
can be obtained via an eigenvector $\tilde x_{\theta_{j+1}}$ of $\lambda_\mathrm{min}(H(\theta_{j+1}))$.
If~$\tilde x_{\theta_{j+1}}$ is already available or relatively cheap to compute,
there is little reason \emph{not} to also update~$\mathcal{G}_j$
and~$\mathcal{Z}_j$ using this additional information.
\end{remark}
\section{Improvements to the level-set approach}
\label{sec:alg1}
We now propose two straightforward but important modifications
to make the level-set approach faster and more reliable.
We need the following immediate corollary of \cref{thm:level},
which clarifies that \cref{thm:level} also allows all points in any $\gamma$-level set
of $\rho(H(\theta))$ to be computed.
\begin{corollary}
\label{cor:remap}
Given $\gamma \geq 0$, if $\rho(H(\theta)) = \gamma$, then there exists
$\lambda \in \mathbb{C}$ such that $|\lambda| = 1$, $\det(R_\gamma - \lambda S) = 0$,
and $\theta = f(\Arg(\lambda))$,
where $f : (-\pi,\pi] \to [0, \pi)$ is
\begin{equation}
\label{eq:remap}
f(\theta) \coloneqq
\begin{cases}
\theta + \pi & \text{if } \theta < 0 \\
0 & \text{if } \theta = 0 \text{ or } \theta = \pi \\
\theta & \text{otherwise}.
\end{cases}
\end{equation}
\end{corollary}
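The remapping $f$ in \eqref{eq:remap} is straightforward to implement; a direct transcription with a few spot checks:

```python
import math

def f(t):
    # Remap from (-pi, pi] to [0, pi), per eq. (remap).
    if t < 0:
        return t + math.pi
    if t == 0 or t == math.pi:
        return 0.0
    return t

assert f(0.0) == 0.0 and f(math.pi) == 0.0
assert f(-0.5) == math.pi - 0.5 and f(0.5) == 0.5
assert all(0 <= f(t) < math.pi for t in (-3.0, -0.1, 0.1, 3.0))
```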
Thus, first we propose doing a BBBS-like iteration
using $\rho(H(\theta))$ instead of $h(\theta)$,
which also has local quadratic convergence.
By an extension of the argument of Boyd and Balakrishnan~\cite{BoyB90}, near maximizers,
$\rho(H(\theta))$ is unconditionally twice continuously differentiable with Lipschitz second derivative;
see~\cite{MitO21}.
Using $\rho(H(\theta))$ is also typically faster in terms of constant factors.
This is because $\rho(H(\theta)) \geq h(\theta)$ always holds, $\rho(H(\theta)) \geq 0$ (unlike $h(\theta)$, which can be negative),
and the optimization domain is reduced from $[0,2\pi)$ to $[0,\pi)$.
Thus, every update to the current estimate~$\gamma$ computed via
$\rho(H(\theta))$ must be at least as good as the one from using~$h(\theta)$ (and possibly much better),
and there may also be fewer level-set intervals per iteration,
which reduces the number of eigenproblems incurred involving $H(\theta)$.
Second, we also propose using local optimization on top of the BBBS-like step at every iteration, i.e.,
the BBBS-like step is used to initialize optimization in order to find a maximizer of $\rho(H(\theta))$.
The first benefit is speed, as optimization often results in much larger
updates to estimate $\gamma$ and these updates are now locally optimal.
This greatly reduces the total number of expensive eigenvalue computations done with
$R_\gamma - \lambda S$, often down to just one;
hence, the overall runtime can be substantially reduced since in comparison,
optimization is cheap (as we explain momentarily).
The second benefit is that using optimization also avoids some numerical difficulties
when solely working with $R_\gamma - \lambda S$ to update $\gamma$.
In their 1997 paper, He and Watson showed
that the condition number of a unimodular eigenvalue of $R_\gamma - \lambda S$ actually blows up
as $\theta$ approaches critical values of $h(\theta)$ or $\rho(H(\theta))$ \cite[Theorem~4]{HeW97},\footnote{The
exact statement appears in the last lines of the corresponding proof on p.~335.}
as this corresponds to a pair of unimodular eigenvalues of $R_\gamma - \lambda S$ coalescing
into a double eigenvalue.
Since this must always occur as a level-set method converges,
rounding errors may prevent all of the unimodular eigenvalues from being detected,
causing level-set points to go undetected, thus resulting in stagnation of
the algorithm before it finds~$r(A)$ to the desired accuracy.
He and Watson wrote that their analytical result was ``hardly encouraging'' \cite[p.~336]{HeW97},
though they did not observe this issue in their experiments.
However, an example of such a deleterious effect is shown in~\cite[Figure~2]{BenM19},
where analogous eigenvalue computations are shown to greatly reduce numerical accuracy
when computing the \emph{pseudospectral abscissa}~\cite{BurLO03}.
In contrast, optimizing $\rho(H(\theta))$ does not lead to numerical difficulties.
This objective function is both Lipschitz (as $H(\theta)$ is Hermitian \cite[Theorem~II.6.8]{Kat82})
and smooth at its maximizers (as discussed above).
Thus, local maximizers of $\rho(H(\theta))$
can be found using, say, Newton's method, with only a handful of iterations.
Interestingly, in their concluding remarks \cite[p.~341--2]{HeW97}, He and Watson
seem to have been somewhat pessimistic about using Newton's method, writing that while it would have
faster local convergence than Watson's iteration,
``the price to be paid is at least a considerable increase in computation,
and possibly the need of the calculation of higher derivatives, and for the incorporation
of a line search.''
As we now explain, using, say, secant or Newton's method, is
actually an overall big win. Also, note that with either secant or Newton,
steps of length one are always eventually accepted; hence, the cost of line searches
should not be a concern.
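To illustrate why local optimization is cheap, the sketch below runs a secant iteration on $\rho^\prime(H(\theta))$ for a $2\times2$ example where $\rho(H(\theta))$ has the closed form $|\cos\theta|/2 + \sqrt{\cos^2\theta/4 + 1}$; the matrix $A = \begin{bsmallmatrix} 1 & 2 \\ 0 & 0 \end{bsmallmatrix}$ and the starting points are illustrative choices, and a handful of iterations reaches the maximizer $\theta = 0$:

```python
import math

def rho(t):
    # rho(H(t)) in closed form for A = [[1, 2], [0, 0]].
    return abs(math.cos(t)) / 2 + math.sqrt(math.cos(t) ** 2 / 4 + 1)

def drho(t):
    # Derivative of rho for t in (-pi/2, pi/2), where cos(t) > 0.
    c, s = math.cos(t), math.sin(t)
    return -s / 2 - c * s / (4 * math.sqrt(c * c / 4 + 1))

t0, t1 = 0.4, 0.3
for _ in range(30):                   # secant iteration on drho
    g0, g1 = drho(t0), drho(t1)
    if g1 == g0:
        break
    t0, t1 = t1, t1 - g1 * (t1 - t0) / (g1 - g0)
assert abs(t1) < 1e-10                # maximizer is theta = 0
assert abs(rho(t1) - (1 + math.sqrt(5)) / 2) < 1e-12
```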
\begin{algfloat}[t]
\begin{algorithm}[H]
\floatname{algorithm}{Algorithm}
\caption{An Improved Level-Set Algorithm}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE{
$A \in \mathbb{C}^{n \times n}$ with $n \geq 2$,
initial guesses $\mathcal{M} = \{\theta_1,\ldots,\theta_q\}$, and \mbox{$\tau_\mathrm{tol} > 0$}.
}
\ENSURE{
$\gamma$ such that $|\gamma -r(A)| \leq \tau_\mathrm{tol} \cdot r(A)$.
\\ \quad
}
\STATE $\psi \gets f(\Arg(\lambda))$ where $\lambda \in \Lambda(A)$ such that $|\lambda| = \rho(A)$ \COMMENT{0 if $\lambda = 0$}
\STATE $\mathcal{M} \gets \mathcal{M} \cup \{ 0, \psi \}$
\WHILE { $\mathcal{M}$ is not empty }
\STATE $\theta_\mathrm{BBBS} \gets \argmax_{\theta \in \mathcal{M}} \rho(H(\theta))$ \COMMENT{In case of ties, just take any one}
\STATE $\gamma \gets$ maximization of $\rho(H(\theta))$ via local optimization initialized at $\theta_\mathrm{BBBS}$
\STATE $\gamma \gets \gamma (1 + \tau_\mathrm{tol})$
\STATE $\Theta \gets
\{ f(\Arg(\lambda)) : \det (R_\gamma - \lambda S) = 0, |\lambda | = 1 \}$
\STATE $[\theta_1,\ldots,\theta_q] \gets
\Theta~\text{sorted in increasing order with any duplicates removed}$
\STATE $\mathcal{M} \gets \{ \theta : \rho(H(\theta)) > \gamma ~\text{where}~ \theta = 0.5(\theta_\ell + \theta_{\ell+1}), \ell=1,\ldots,q-1\}$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\algnote{
For simplicity, we forgo giving pseudocode to exploit possible normality of~$A$ or symmetry of~$\fovX{A}$,
and assume that eigenvalues and local maximizers are obtained exactly and that the optimization solver is monotonic,
i.e., it guarantees $\rho(H(\theta_\mathrm{BBBS})) \leq \rho(H(\theta_\star))$, where $\theta_\star$ is the maximizer
computed in line~5.
Recall that~$f(\cdot)$ is defined in~\eqref{eq:remap}, and note that the method reduces to a BBBS-like iteration using \eqref{eq:nr_opt_abs}
if line~5 is replaced by \mbox{$\gamma \gets \rho(H(\theta_\mathrm{BBBS}))$}.
Running optimization from other angles in $\mathcal{M}$ (in addition to $\theta_\mathrm{BBBS}$) every iteration
may also be advantageous, particularly if this can be done via parallel processing.
Adding zero to the initial set $\mathcal{M}$ avoids having to deal
with any ``wrap-around'' level-set intervals due to the periodicity of $\rho(H(\theta))$,
while $\psi$ is just a reasonable initial
guess for a global maximizer of \eqref{eq:nr_opt_abs}.
}
\end{algfloat}
Suppose $\rho(H(\theta))$ is attained by a unique eigenvalue~$\lambda_j$
with normalized eigenvector $x_j$.
Then by standard perturbation theory for simple eigenvalues,
\begin{equation}
\rho^\prime(H(\theta)) = \sgn(\lambda_j) \cdot x_j^*H^\prime(\theta)x_j = \sgn(\lambda_j) \cdot x_j^*\left(\tfrac{\mathbf{i}}{2} \left( \eix{\theta} A - \emix{\theta} A^* \right) \right) x_j.
\end{equation}
Thus, given $\lambda_j$ and $x_j$, the additional cost of obtaining $\rho^\prime(H(\theta))$
mostly amounts to the single matrix-vector product $H^\prime(\theta)x_j$.
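A finite-difference spot check of this derivative formula on a $2\times2$ example (the matrix and angle are arbitrary illustrative choices; here $\rho(H(\theta))$ is attained by $\lambda_\mathrm{max} > 0$, so $\sgn(\lambda_j) = 1$):

```python
import cmath
import math

def herm_parts(A, theta):
    # Entries of the 2x2 Hermitian H(theta) = [[a, b], [conj(b), d]].
    w = cmath.exp(1j * theta)
    a = (w * A[0][0]).real
    d = (w * A[1][1]).real
    b = (w * A[0][1] + (w * A[1][0]).conjugate()) / 2
    return a, b, d

def rho_H(A, theta):
    a, b, d = herm_parts(A, theta)
    m, s = (a + d) / 2, math.hypot((a - d) / 2, abs(b))
    return max(abs(m + s), abs(m - s))

A = [[1.0, 2.0], [0.0, 0.0]]
theta = 0.7
a, b, d = herm_parts(A, theta)
lam = (a + d) / 2 + math.hypot((a - d) / 2, abs(b))   # rho attained by lam > 0
v = [b, lam - a]
n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
x = [v[0] / n, v[1] / n]
# H'(theta) = (i/2) (e^{i*theta} A - e^{-i*theta} A^*).
w = cmath.exp(1j * theta)
Hp = [[0.5j * (w * A[p][q] - (w * A[q][p]).conjugate()) for q in range(2)]
      for p in range(2)]
y = [Hp[0][0] * x[0] + Hp[0][1] * x[1], Hp[1][0] * x[0] + Hp[1][1] * x[1]]
deriv = (x[0].conjugate() * y[0] + x[1].conjugate() * y[1]).real
h = 1e-6
fd = (rho_H(A, theta + h) - rho_H(A, theta - h)) / (2 * h)
assert abs(deriv - fd) < 1e-6
```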
To compute $\rho^{\prime\prime}(H(\theta))$, we will need the following result for second derivatives
of eigenvalues; see \cite{Lan64}.
\begin{theorem}
\label{thm:eigdx2}
For $t \in \mathbb{R}$, let $A(t)$ be a twice-differentiable $n \times n$ Hermitian matrix family
with, for $t=0$, eigenvalues $\lambda_1 \geq \ldots \geq \lambda_n$ and
associated eigenvectors $x_1,\ldots,x_n$, with $\|x_k\| = 1$ for all $k$.
Then, assuming $\lambda_j$ is simple,
\[
\lambda_j^{\prime\prime}(t) \bigg|_{t=0}= x_j^* A''(0) x_j + 2 \sum_{k \ne j} \frac{| x_k^* A'(0) x_j |^2}{\lambda_j - \lambda_k}.
\]
\end{theorem}
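A quick check of this formula on the Hermitian family $A(t) = \begin{bsmallmatrix} t & 1 \\ 1 & -t \end{bsmallmatrix}$, for which $\lambda_1(t) = \sqrt{1+t^2}$ and hence $\lambda_1^{\prime\prime}(0) = 1$:

```python
import math

# A(t) = [[t, 1], [1, -t]] is Hermitian for all real t, with
# lambda_1(t) = sqrt(1 + t^2), so lambda_1''(0) = 1.
lam1, lam2 = 1.0, -1.0
x1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]    # eigenvectors of A(0) = [[0,1],[1,0]]
x2 = [1 / math.sqrt(2), -1 / math.sqrt(2)]
Ap = [[1.0, 0.0], [0.0, -1.0]]               # A'(0); note A''(0) = 0
cross = (x2[0] * (Ap[0][0] * x1[0] + Ap[0][1] * x1[1])
         + x2[1] * (Ap[1][0] * x1[0] + Ap[1][1] * x1[1]))
second = 0.0 + 2 * abs(cross) ** 2 / (lam1 - lam2)   # perturbation formula
h = 1e-4
fd = (math.sqrt(1 + h * h) - 2.0 + math.sqrt(1 + h * h)) / (h * h)
assert abs(second - 1.0) < 1e-12             # matches lambda_1''(0) = 1
assert abs(fd - second) < 1e-6               # matches a central 2nd difference
```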
Although obtaining the eigendecomposition of $H(\theta)$ is cubic work,
this is generally negligible compared to the cost of obtaining all the unimodular eigenvalues of $R_\gamma - \lambda S$
when using \cref{thm:level} computationally;
recall that $H(\theta)$ is an $n \times n$ Hermitian matrix,
while $R_\gamma - \lambda S$ is a generalized eigenvalue problem of order $2n$.
Moreover, $H^\prime(\theta)x_j$ would already be computed for~$\rho^\prime(H(\theta))$,
while
$
H^{\prime\prime}(\theta)x_j = -H(\theta)x_j = -\lambda_j x_j
$,
so there is no other work of consequence to obtain $\rho^{\prime\prime}(H(\theta))$ via \cref{thm:eigdx2}.
\begin{table}
\centering
\caption{The running time of a given operation \emph{divided by} the running time of \texttt{eig($H(\theta)$)}
for random $A \in \mathbb{C}^{n \times n}$.
Eigenvectors were requested for~$H(\theta)$ (for computing derivatives and boundary points)
but not for $R_\gamma - \lambda S$.
For \texttt{eigs}, \texttt{k} is the number of eigenvalues requested,
while \texttt{'LM'} (largest modulus), \texttt{'LR'} (largest real), and \texttt{'BE'} (both ends)
specifies which eigenvalues are desired.
}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{c | r SS | SS | SSS | S}
\toprule
\multicolumn{2}{c}{} &
\multicolumn{2}{c}{\texttt{eigs($H(\theta)$,k,'LM')}} &
\multicolumn{2}{c}{\texttt{eigs($H(\theta)$,k,'LR')}} &
\multicolumn{3}{c}{\texttt{eigs($H(\theta)$,k,'BE')}} &
\multicolumn{1}{c}{\texttt{eig($R_\gamma$,$S$)}} \\
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-9}
\cmidrule(lr){10-10}
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{$n$} &
\multicolumn{1}{c}{$\texttt{k}=1$} &
\multicolumn{1}{c}{$\texttt{k}=6$} &
\multicolumn{1}{c}{$\texttt{k}=1$} &
\multicolumn{1}{c}{$\texttt{k}=6$} &
\multicolumn{1}{c}{$\texttt{k}=2$} &
\multicolumn{1}{c}{$\texttt{k}=4$} &
\multicolumn{1}{c}{$\texttt{k}=6$} &
\multicolumn{1}{c}{} \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{Dense $A$}}
& \multicolumn{1}{r|}{200} & 1.4 & 1.4 & 0.7 & 1.1 & 1.0 & 1.1 & 1.1 & 36.7 \\
& \multicolumn{1}{r|}{400} & 0.5 & 0.8 & 0.5 & 0.8 & 0.6 & 0.7 & 0.8 & 79.3 \\
& \multicolumn{1}{r|}{800} & 0.6 & 1.0 & 0.6 & 1.1 & 1.2 & 1.0 & 1.1 & 180.2 \\
& \multicolumn{1}{r|}{1600} & 0.3 & 0.6 & 0.3 & 0.5 & 0.6 & 0.7 & 0.6 & 196.3 \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{Sparse $A$}}
& \multicolumn{1}{r|}{200} & 4.0 & 3.7 & 2.1 & 3.5 & 3.4 & 3.8 & 3.6 & 45.7 \\
& \multicolumn{1}{r|}{400} & 2.0 & 3.1 & 1.8 & 3.1 & 3.1 & 3.2 & 3.1 & 80.6 \\
& \multicolumn{1}{r|}{800} & 0.7 & 1.2 & 0.6 & 1.1 & 1.4 & 1.1 & 1.2 & 167.9 \\
& \multicolumn{1}{r|}{1600} & 0.4 & 0.6 & 0.4 & 0.7 & 0.8 & 0.8 & 0.6 & 177.2 \\
\bottomrule
\end{tabular}
\label{tbl:eig}
\end{table}
Pseudocode
for our improved level-set algorithm is given in~\cref{alg1}.
We now address some implementation concerns.
What method is used to find maximizers of~$\rho(H(\theta))$
depends on the relative costs of solving eigenvalue problems
involving $H(\theta)$ and $R_\gamma - \lambda S$.
\cref{tbl:eig} shows examples where 37--196 calls of
\texttt{eig($H(\theta)$)} can be done before the total cost exceeds that of
a single call of \texttt{eig($R_\gamma$,$S$)}.
This highlights just how beneficial it can be to incur a few more computations with $H(\theta)$ to find local maximizers
as \cref{alg1} does.
Comparisons for computing extremal eigenvalues of~$H(\theta)$ via \texttt{eig} and \texttt{eigs}
are also shown in the table. Such data inform whether
or not the increased cost of needing to use \texttt{eig} in order to compute
$\rho^{\prime\prime}(H(\theta))$ is offset by the advantages that second derivatives can bring,
e.g., faster local convergence.
Of course, fine-grained implementation decisions like these should ideally be made via tuning,
as such timings are generally also software and hardware dependent.
Nevertheless, \cref{tbl:eig} suggests that implementing \cref{alg1} using Newton's method via \texttt{eig}
might be a bit more efficient than using the secant method for $n \leq 800$ or so.\footnote{
Subspace methods such as \cite{KreLV18} might also be used to find local maximizers of $h(\theta)$ or~$\rho(H(\theta))$
and would likely provide similar benefits in terms of accelerating
the globally convergent algorithms in this paper.
}
There is one more subtle but important detail for implementing \cref{alg1}.
Suppose that $\theta_\mathrm{BBBS}$ in line~4 is close to the argument of a (nearly) double unimodular eigenvalue of $R_\gamma - \lambda S$, where $\gamma = \rho(H(\theta_\mathrm{BBBS}))$.
If rounding errors prevent one or both of these eigenvalues from being detected as unimodular,
the computed $\gamma$-level set of $\rho(H(\theta))$ may be incomplete,
which again, can cause stagnation.
As pointed out in~\cite[p.~372--373]{BurLO03} in the context of computing the pseudospectral abscissa,
a robust fix is simple: explicitly add $\theta_\mathrm{BBBS}$ to $\Theta$ in line~7
if it appears to be missing.
\section{Analysis of Uhlig's method}
\label{sec:rate}
In the next two subsections, we respectively
(a) analyze the overall cost of Uhlig's method for so-called
\emph{disk matrices} and
(b) for general problems,
establish how the exact Q-linear local rate of convergence of Uhlig's cutting strategy
varies with respect to the local curvature of $\bfovX{A}$ at outermost points.
A disk matrix is one whose field of values is a circular disk centered at the origin,
and it is a worst-case scenario for Uhlig's method: as we show, in this case
the number of supporting hyperplanes required to compute $r(A)$ blows up
as the desired relative accuracy increases.
Although relatively rare, disk matrices can arise from minimizing the numerical radius
of parametrized matrices; see \cite{LewO20} for a thorough discussion.
For concreteness here, we make use of the $n \times n$ Crabb matrix:
\begin{equation}
\label{eq:crabb}
K_2 = \begin{bmatrix}
0 & 2 \\
0 & 0
\end{bmatrix},
\
K_3 = \begin{bmatrix}
0 & \sqrt{2} & 0 \\
0 & 0 & \sqrt{2} \\
0 & 0 & 0
\end{bmatrix},
\
K_n =
\begin{bmatrix}
0 & \sqrt{2} & & & & \\
& \ddots & 1 & & & \\
& & \ddots & \ddots & & \\
& & & \ddots & 1 & \\
& & & & \ddots & \sqrt{2} \\
& & & & & 0 \\
\end{bmatrix},
\end{equation}
where for all $n$, $r(K_n) = 1$ and $W(K_n)$ is the unit disk.
However, note that not all disk matrices are variations of Jordan blocks corresponding
to the eigenvalue zero.
For other types of disk matrices and the history and relevance of $K_n$, see~\cite{LewO20}.
\subsection{Uhlig's method for disk matrices}
The following theorem completely characterizes the total cost of Uhlig's method
for disk matrices with respect to a desired relative tolerance.
Note that Uhlig's method begins with a rectangular approximation~$\mathcal{G}_4$ to $\fovX{A}$,
which for a disk matrix, is a square centered at the origin.
\begin{theorem}
\label{thm:uhlig_disk}
Suppose that $A \in \mathbb{C}^{n \times n}$ is a disk matrix with $r(A) > 0$
and that~$\fovX{A}$ is approximated by $\mathcal{G}_j$ with $j \geq 3$
and $\mathcal{G}_j$ a regular polygon, i.e.,
it is the intersection of $j$ half planes $P_{\theta_\ell}$, where
$\theta_\ell = \tfrac{2\pi }{j} \ell$ for $\ell = 1,\ldots,j$.
Then,
\begin{enumerate}[label=(\roman*),font=\normalfont]
\item $\varepsilon_j = \sec(\pi/j) - 1$,
\item if $\varepsilon_j \leq \tau_\mathrm{tol}$,
then $j \geq \left\lceil \tfrac{\pi}{\arcsec(1 + \tau_\mathrm{tol})} \right\rceil$,
where $\tau_\mathrm{tol} > 0$ is the desired relative error.
\end{enumerate}
Moreover, if $\mathcal{G}_k$ is a further refined version of $\mathcal{G}_j$,
so $\fovX{A} \subseteq \mathcal{G}_k \subseteq \mathcal{G}_j$, then
\begin{enumerate}[label=(\roman*),font=\normalfont,resume]
\item if $\varepsilon_k < \varepsilon_j$, then $k \geq 2j$.
\item if $\varepsilon_k \leq \tau_\mathrm{tol} < \varepsilon_j$, then $k \geq j \cdot 2^d$,
where
$d = \left\lceil \log_2 \left(\tfrac{\pi}{j \arcsec(1 + \tau_\mathrm{tol})}\right) \right\rceil$.
\end{enumerate}
\end{theorem}
\begin{proof}
As $\fovX{A}$ is a disk centered at zero with radius $r(A)$ and
$\bfovX{A}$ is a circle inscribed in the regular polygon $\mathcal{G}_j$,
every boundary point in $\mathcal{Z}_j$ has modulus~$r(A)$, and so $l_j = r(A)$,
and the moduli of the corners of~$\mathcal{G}_j$ are all identical.
Consider the corner $c$ with $\Arg(c) = \pi/j$
and the right triangle defined by zero, $r(A)$ on the real axis, and $c$.
Then~\mbox{$|c| = u_j = r(A) \sec(\pi/j)$}, and so~(i) holds.
Statement~(ii) simply holds by substituting~(i) into $\varepsilon_j \leq \tau_\mathrm{tol}$
and then solving for $j$.
For~(iii), as every corner~$c$ of~$\mathcal{G}_j$ has modulus~$u_j$,
all $j$~corners of $\mathcal{G}_j$ must be refined to lower the error;
thus,~$\mathcal{G}_k$ must have at least~$2j$~corners.
Finally, as $\lim_{j \to \infty} \varepsilon_j = 0$
but the error only decreases when the number of corners is doubled,
in order for $\varepsilon_k \leq \tau_\mathrm{tol}$ to hold,
we must have $k \geq j \cdot 2^d$ for some~$d \geq 1$.
The smallest such integer~$d$ is obtained by replacing~$j$ in~(ii) with $j \cdot 2^d$
and solving for~$d$, thus proving~(iv).
\end{proof}
\begin{table}
\centering
\caption{
For any disk matrix $A$,
the minimum number of supporting hyperplanes required to compute $r(A)$ to different accuracies is shown,
where $\texttt{eps} \approx 2.22~\times~10^{-16}$.
}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{l | *{10}{S[table-format=9.0]}}
\toprule
\multicolumn{1}{c}{} &
\multicolumn{3}{c}{\# of supporting hyperplanes needed}\\
\cmidrule(lr){2-4}
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{Minimum} &
\multicolumn{2}{c}{Uhlig's method}\\
\cmidrule(lr){2-2}
\cmidrule(lr){3-4}
\multicolumn{1}{c}{Relative Tolerance} &
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{Starting with $\mathcal{G}_3$} &
\multicolumn{1}{c}{Starting with $\mathcal{G}_4$} \\
\midrule
$\tau_\mathrm{tol} = \texttt{1e-4}$ & 223 & 384 & 256 \\
$\tau_\mathrm{tol} = \texttt{1e-8}$ & 22215 & 24576 & 32768 \\
$\tau_\mathrm{tol} = \texttt{1e-12}$ & 2221343 & 3145728 & 4194304 \\
$\tau_\mathrm{tol} = \texttt{eps}$ & 149078414 & 201326592 & 268435456 \\
\bottomrule
\end{tabular}
\label{tbl:disk_thm}
\end{table}
Via \cref{thm:uhlig_disk}, we report the number of
supporting hyperplanes needed to compute the numerical radius of disk matrices
for increasing levels of accuracy in~\cref{tbl:disk_thm},
illustrating just how quickly the cost of Uhlig's method skyrockets.
Consequently, the level-set approach is typically much faster for disk matrices
or those whose fields of values are close to a disk centered at zero;
indeed, since $h(\theta)=\rho(H(\theta))$ is constant for disk matrices, it converges in a single iteration.
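The counts in \cref{tbl:disk_thm} follow directly from \cref{thm:uhlig_disk}. As an illustration, the following Python sketch (the helper names are ours, not from any library) evaluates statements (ii) and (iv) of the theorem using $\operatorname{arcsec}(x) = \arccos(1/x)$:

```python
import math

def arcsec(x):
    # arcsec(x) = arccos(1/x) for x >= 1
    return math.acos(1.0 / x)

def min_hyperplanes(tol):
    # statement (ii): fewest supporting hyperplanes any method needs
    return math.ceil(math.pi / arcsec(1.0 + tol))

def uhlig_hyperplanes(j, tol):
    # statement (iv): cost of Uhlig's method started from the regular polygon G_j
    d = math.ceil(math.log2(math.pi / (j * arcsec(1.0 + tol))))
    return j * 2 ** max(d, 0)

print(min_hyperplanes(1e-4), uhlig_hyperplanes(3, 1e-4), uhlig_hyperplanes(4, 1e-4))
# -> 223 384 256, the first row of the table
```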
\subsection{Local rate of convergence}
\label{sec:uhlig_conv}
As we now explain, the local behavior of Uhlig's cutting procedure
at an outermost point in $\fovX{A}$
can actually be understood by analyzing one key example.
For this analysis, we use Q-linear and Q-superlinear convergence,
where ``Q" stands for ``quotient"; see \cite[p.~619]{NocW99}.
\begin{definition}
Let $b_\star$ be an outermost point of $\fovX{A}$
such that the radius of curvature $\tilde r$ of~$\bfovX{A}$ is well defined at $b_\star$.
Then the \emph{normalized radius of curvature of~$\bfovX{A}$ at~$b_\star$}
is~\mbox{$\mu \coloneqq \tilde r/ r(A) \in [0,1]$}.
\end{definition}
Note that if $\mu = 0$, $b_\star$ is a corner of $\fovX{A}$.
If $\mu = 1$, then near~$b_\star$, $\bfovX{A}$ is well approximated by
an arc of the circle with radius $r(A)$ centered at the origin.
We show that the local convergence
is precisely determined by the value of $\mu$ at~$b_\star$.
In the upcoming analysis we use the following assumptions.
\begin{assumption}
\label{asm:wlog}
We assume that $r(A) > 0$ and that it is attained at a non-corner $b_\star \in \bfovX{A}$ with $\Arg(b_\star) = 0$.
\end{assumption}
\cref{asm:wlog} is essentially without any loss of generality.
Assuming $r(A) > 0$ is trivial, as it only excludes $A=0$. Since we are concerned
with finding the local rate of convergence at $b_\star$,
its location does not matter, and so we can assume a convenient one: that $b_\star$
lies on the positive real axis.
As will be seen, our analysis does not lose any generality by assuming
that $b_\star$ is not a corner.
\begin{assumption}
\label{asm:c2}
We assume that the current approximation $\mathcal{G}_j$ has been constructed using the supporting hyperplane $L_0$
passing through~$b_\star$, and so $b_\star \in \mathcal{Z}_j$,
and that~$\bfovX{A}$ is twice continuously differentiable at $b_\star$.
\end{assumption}
\cref{asm:c2} is also quite mild.
Although Uhlig's method may sometimes only encounter supporting
hyperplanes for outermost points in the limit as it converges,
as we explain in \cref{sec:alg2}, they can actually be easily and cheaply obtained and used to update $\mathcal{G}_j$
and $\mathcal{Z}_j$.
Moreover, knowing outermost points and associated hyperplanes
does not guarantee convergence.
Uhlig's method only terminates once~$\varepsilon_k$
is sufficiently small for some~$k$,
which means that its cost is generally determined by how quickly
it can sufficiently approximate~$\bfovX{A}$ in \emph{neighborhoods about the outermost points};
per \cref{sec:background}, when $A$ is not a disk matrix, there can be up to $n$ such points.
The smoothness assumption ensures that there exists a unique osculating circle of $\bfovX{A}$ at $b_\star$,
and consequently,
the disagreement between the osculating circle and~$\bfovX{A}$ decays at least cubically as~$b_\star$ is approached;
for more on osculation, see, e.g., \cite[chapter~2]{Kuh15}.
\begin{keyremark}
\label{key:why}
By our assumptions, $b_\star \in \mathcal{Z}_j$, and $\bfovX{A}$
is twice continuously differentiable and has normalized radius of curvature~$\mu > 0$ at $b_\star$.
Since $\partial \mathcal{G}_j$ is a piecewise linear approximation of $\bfovX{A}$,
the local behavior of a cutting-plane method
is determined by the resulting second-order
approximation errors, with
the higher-order errors being negligible.
As $\bfovX{A}$ is curved at $b_\star$, these second-order errors must be non-zero on both sides of~$b_\star$.
Now recall that the osculating circle of~$\bfovX{A}$ at $b_\star$ locally agrees with~$\bfovX{A}$ to at least second order.
Hence, near~$b_\star$, the second-order errors of a cutting-plane method applied to $A$
\emph{are identical} to the second-order errors of applying the method to
a matrix $M$ with $\bfovX{M}$ being the same as that osculating circle.
Thus, to understand the local convergence rate
of a cutting-plane method for general matrices, it actually suffices to study how the method
behaves on~$M$.
\end{keyremark}
We now define our key example $M$ such that, via two real parameters,
$\bfovX{M}$ is the osculating circle of~$\bfovX{A}$ at the outermost point~$b_\star$.
\begin{figure}
\centering
\subfloat[Iteration $j$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{uhlig_j}}
\label{fig:uhlig_j}
}
\subfloat[Iteration $j+1$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{uhlig_jp1}}
\label{fig:uhlig_jp1}
}
\caption{Depiction of Uhlig's method for \cref{ex:circ} where
$z = 0.3$ and $\tilde r = 0.7$.
The dotted circle is the circle of radius $r(M) = 1$ centered at the origin.
See \cref{ex:circ} for a complete description of the plots.
}
\label{fig:uhlig}
\end{figure}
\begin{example}[See \cref{fig:uhlig} for a visual description]
\label{ex:circ}
For $n \geq 2$, let
\begin{equation}
\label{eq:model}
M = s I + \tilde r K_n,
\end{equation}
where $K_n \in \mathbb{C}^{n \times n}$ is any disk matrix with $r(K_n)=1$, e.g., \eqref{eq:crabb},
$s \geq 0$, and~$\tilde r > 0$.
Clearly, $\fovX{M}$ is a disk with radius $\tilde r$ centered at $s$ on the real axis with
outermost point~$b_\star = s + \tilde r = r$
and $\mu = \tfrac{\tilde r}{r} > 0$ at $b_\star$,
where~$r \coloneqq r(M)$, a shorthand we will often use in \cref{sec:rate}~and~\cref{sec:alg2}.
Thus, given any matrix~$A$ satisfying \cref{asm:wlog,asm:c2} with any $\mu \in (0,1]$,
by choosing $s \geq 0$ and~$\tilde r > 0$ appropriately
we have that~$\bfovX{M}$ agrees exactly with the osculating circle
of~$\bfovX{A}$ at~$b_\star$.
Assume that~$\theta_\star = 0$ and~$\theta_j \in (-\pi,0)$
have been used to construct~$\mathcal{G}_j$ and~$\mathcal{Z}_j$, i.e.,
supporting hyperplanes~$L_{\theta_\star}$ and~$L_{\theta_j}$ respectively pass through
boundary points~$b_\star$ and~$b_j = s + \tilde r \eix{\phi_j}$ of $\fovX{M}$,
where~\mbox{$\phi_j \coloneqq \Arg (b_j - s) \in (0,\pi)$}.
Therefore, $b_\star = z_0 \in \mathcal{Z}_j$ and since $\fovX{M}$ is a disk,
it also follows that $b_j=z_{\theta_j} \in \mathcal{Z}_j$, where~$\theta_j = -\phi_j$.
Further assume that for all~\mbox{$\theta \in (\theta_j,\theta_\star)$}, $z_\theta \not\in \mathcal{Z}_j$,
and so~$c_j \coloneqq L_{\theta_\star} \cap L_{\theta_j}$ is a corner of~$\mathcal{G}_j$.
Now suppose~$c_j$ is cut, by any cutting procedure, Uhlig's or otherwise.
This results in a boundary point~$b_{j+1}$ of $\fovX{M}$ being added to $\mathcal{Z}_j$
and $c_j$ is replaced by two new corners, $\hat c_{j+1}$ and~$c_{j+1}$,
that respectively lie on~$L_{\theta_j}$ and $L_{\theta_\star}$;
due to their orientation with respect to~$b_{j+1}$, we refer to
$\hat c_{j+1}$ as a counter-clockwise (CCW) corner and~$c_{j+1}$ as a clockwise~(CW) corner.
If~$c_{j+1}$ is then cut next, this produces two more corners,~$\hat c_{j+2}$
and~$c_{j+2}$, with $c_{j+2}$ also on~$L_{\theta_\star}$ and between $c_{j+1}$ and~$b_\star$.
Note that if~$\{\phi_k\} \to 0$, then the sequence of CW corners $\{c_k\} = c_j,c_{j+1},c_{j+2},\ldots$
converges to $b_\star$.
To understand the local behavior of a cutting-plane technique, we will
analyze~$\{\phi_k\}$ and~$\{c_k\}$, i.e., the case when the cuts are applied
to the CW corners that are sequentially generated~on~$L_{\theta_\star}$.
\end{example}
Some remarks on \cref{ex:circ} are in order, as it ignores the CCW corners~$\hat c_j$;
these may also require cutting, which in turn may introduce new corners that need to be cut.
However, since $\{c_k\}$ is a subsequence of all the corners generated
by Uhlig's method
to sufficiently approximate $\bfovX{M}$ between boundary points~$b_\star$ and~$b_j$,
analyzing $\{c_k\}$ gives a lower bound on its local efficiency.
Furthermore, this often describes the true local efficiency because, as will become clear,
for many problems, there are either no or only a few CCW corners
that require cutting.
Finally, in \cref{sec:alg2}, we introduce an improved cutting scheme
that guarantees only CW corners must be cut.
\begin{lemma}
\label{lem:uhlig_cuts}
Recalling that $\phi_j \coloneqq \Arg (b_j - s)$,
if Uhlig's cuts are sequentially applied to the corners $\{c_k\}$ described in \cref{ex:circ},
then for all $k \geq j$,
\begin{equation}
\label{eq:uhlig_angle}
\phi_{k+1}
= \arctan \big( \mu \tan \tfrac{1}{2}\phi_k \big).
\end{equation}
\end{lemma}
\begin{proof}
Since $\fovX{M}$ is a disk with tangents at $b_k$ and $b_\star$ determining $c_k$,
first note that the equality \mbox{$\Arg(c_k - s) = \tfrac{1}{2}\phi_k$} holds.
Then \mbox{$\tan \tfrac{1}{2} \phi_k = \tilde r^{-1}|c_k - b_\star|$},
and since the tangent at $b_\star$ is vertical, we also have that
$\Arg(c_k) = \arctan(r^{-1} |c_k - b_\star|)$.
Thus via substitution,
\[
\Arg(c_k)
= \arctan ( r^{-1} \tilde r \tan \tfrac{1}{2}\phi_k )
= \arctan ( \mu \tan \tfrac{1}{2}\phi_k ).
\]
The proof is completed since $\phi_{k+1} = \Arg (b_{k+1} - s)$
is also equal to $\Arg(c_k)$.
\end{proof}
\begin{theorem}
\label{thm:uhlig_angle}
The sequence $\{\phi_k\}$ produced by Uhlig's
cutting procedure and described by recursion \eqref{eq:uhlig_angle}
converges to zero Q-linearly with rate $\tfrac{1}{2} \mu$.
\end{theorem}
\begin{proof}
First note that $\lim_{k\to\infty} \phi_k = 0 = \phi_\star$ and $\phi_k > 0$ for all $k \geq j$. Then
\[
\lim_{k \to \infty}
\frac{| \phi_{k+1} - \phi_\star |}{| \phi_{k} - \phi_\star |}
= \lim_{k \to \infty} \frac{\phi_{k+1}}{\phi_{k}}
= \lim_{k \to \infty} \frac{\arctan \left( \mu \tan \tfrac{1}{2} \phi_k \right)}{\phi_k}.
\]
Since the numerator and denominator both go to zero as $k\to\infty$, the result
follows by considering the continuous version of the limit:
\[
\lim_{\phi \to 0} \frac{\arctan \big( \mu \tan \tfrac{\phi}{2} \big)}{\phi}
= \lim_{\phi \to 0} \frac{\mu \tan \tfrac{1}{2}\phi}{\phi}
= \lim_{\phi \to 0} \frac{\tfrac{1}{2}\mu \phi }{\phi}
= \tfrac{1}{2} \mu,
\]
where the first and second equalities are
obtained, respectively, using small-angle approximations
$\arctan x \approx x$ and $\tan x \approx x$ for $x\approx 0$.
\end{proof}
While \cref{thm:uhlig_angle} tells us how quickly $\{\phi_k\}$ will converge,
we really want to estimate how quickly the error $\varepsilon_j$ becomes sufficiently small.
For that, we must consider how fast the moduli of the corresponding outermost corners $c_k$
converge.
\begin{theorem}
\label{thm:uhlig_ub}
Given the sequence
$\{\phi_k\}$ from \cref{thm:uhlig_angle},
the corresponding sequence~$\{|c_k|\}$
converges to $r$ Q-linearly with rate $\tfrac{1}{4} \mu^2$.
\end{theorem}
\begin{proof}
First note that
\[
\cos \phi_{k+1} = \frac{r}{|c_k|}
\qquad \text{and so} \qquad
|c_k| = r \sec \phi_{k+1} > r
\]
for all $k\geq j$.
Thus, we consider the limit
\[
\lim_{k \to \infty} \frac{| |c_{k+1}| - r |}{| |c_k| - r |}
= \lim_{k \to \infty} \frac{r \sec \phi_{k+2} - r}{r \sec \phi_{k+1} - r}
= \lim_{k \to \infty} \frac{\sec \phi_{k+1} - 1}{\sec \phi_k - 1},
\]
which when substituting in $\phi_{k+1} = \arctan \big( \mu \tan \tfrac{1}{2}\phi_k \big)$ becomes
\[
\lim_{k \to \infty}
\frac{\sec \left( \arctan \left( \mu \tan \tfrac{1}{2}\phi_k \right)\right) - 1}{\sec \phi_k - 1}.
\]
Since the numerator and denominator both go to zero as $k\to\infty$, we
consider the continuous version of the limit, i.e.,
\[
\lim_{\phi \to 0}
\frac{\sec \left( \arctan \left( \mu \tan \tfrac{1}{2}\phi \right)\right) - 1}{\sec \phi - 1}
= \lim_{\phi \to 0}
\frac{\sec \left( \mu \tan \tfrac{1}{2}\phi \right) - 1}{\sec \phi - 1}
= \lim_{\phi \to 0}
\frac{\sec \left( \tfrac{1}{2}\mu \phi \right) - 1}{\sec \phi - 1},
\]
again using the small-angle approximations for $\arctan$ and $\tan$.
As this is an indeterminate form, we will apply L'H{\^ o}pital's Rule.
However, it will be convenient to first multiply by the following
identity to eliminate the term $\sec \phi$ in the denominator:
\[
\lim_{\phi \to 0}
\; \frac{\cos \phi}{\cos \phi} \cdot \frac{\sec \left( \tfrac{1}{2}\mu \phi \right) - 1}{\sec \phi - 1}
=
\lim_{\phi \to 0}
\frac{\cos \phi \left( \sec \left( \tfrac{1}{2} \mu \phi \right) - 1 \right)}{1 - \cos \phi}
\eqqcolon \lim_{\phi \to 0} \frac{f_1(\phi)}{f_2(\phi)}.
\]
For $f_2(\phi)$, $f_2^\prime(\phi) = \sin \phi$, while
for $f_1(\phi)$, we have
\begin{align*}
f_1^\prime(\phi) &=
-\sin \phi \left( \sec\left( \tfrac{1}{2}\mu \phi \right) - 1\right)
+ \tfrac{1}{2} \mu \cos \phi \cdot \sec\left( \tfrac{1}{2} \mu \phi \right) \cdot \tan \left( \tfrac{1}{2}\mu \phi \right)\\
&= -\sin \phi \left( \sec\left( \tfrac{1}{2}\mu \phi \right) - 1\right)
+ \tfrac{1}{2} \mu \cos \phi \cdot \frac{\sin \left(\tfrac{1}{2} \mu \phi \right) }{\cos^2 \left(\tfrac{1}{2}\mu \phi \right)}.
\end{align*}
As \mbox{$f_1^\prime(0) = 0$} and $f_2^\prime(0) = 0$, we still have an indeterminate form
and so will again apply L'H{\^ o}pital's Rule.
For the denominator, we have $f_2^{\prime\prime}(\phi) = \cos \phi$ and so $f_2^{\prime\prime}(0) = 1$.
For the numerator, it will be convenient to write
\mbox{$f_1^\prime(\phi) = -g_1(\phi) + \tfrac{1}{2}\mu g_2(\phi)$}, where
\[
g_1(\phi) = \sin \phi \left( \sec\left( \tfrac{1}{2}\mu \phi \right) - 1\right)
\quad \text{and} \quad
g_2(\phi) = \cos \phi \cdot \frac{\sin \left(\tfrac{1}{2} \mu \phi \right)}{\cos^2 \left(\tfrac{1}{2}\mu \phi \right)},
\]
and differentiate the parts separately. For $g_1(\phi)$, we have that
\[
g_1^\prime(\phi) =
\cos \phi \left(\sec \left(\tfrac{1}{2} \mu \phi \right) - 1\right)
+ \tfrac{1}{2}\mu \sin \phi \cdot \sec\left( \tfrac{1}{2}\mu \phi \right) \cdot \tan \left( \tfrac{1}{2}\mu \phi \right),
\]
and so $g_1^\prime(0) = 0 $.
For $g_2(\phi)$, we have that
\[
g_2^\prime(\phi) =
-\sin \phi \cdot \frac{\sin \left( \tfrac{1}{2}\mu \phi \right) }{\cos^2 \left( \tfrac{1}{2}\mu \phi \right)}
+ \cos \phi \cdot
\frac{\tfrac{1}{2}\mu \left(
\cos^3 \left(\tfrac{1}{2}\mu \phi \right) +
2\sin^2 \left( \tfrac{1}{2}\mu \phi \right) \cdot \cos \left( \tfrac{1}{2}\mu \phi \right)
\right)
}{\cos^4 \left(\tfrac{1}{2}\mu \phi \right)},
\]
and so $g_2^\prime(0) = \tfrac{1}{2}\mu$.
Thus, $f_1^{\prime\prime}(0) = -g_1^\prime(0) + \tfrac{1}{2}\mu g_2^\prime(0) = \tfrac{1}{4}\mu^2$,
proving the claim.
\end{proof}
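As a numerical illustration of \cref{thm:uhlig_angle,thm:uhlig_ub}, the Python sketch below (with illustrative values) iterates recursion \eqref{eq:uhlig_angle} for $\mu = 0.6$ and tracks the successive ratios of the angles and of the relative corner errors $\sec\phi_{k+1} - 1$; they approach $\tfrac{1}{2}\mu = 0.3$ and $\tfrac{1}{4}\mu^2 = 0.09$, respectively:

```python
import math

# rate check for an illustrative normalized radius of curvature mu in (0, 1]
mu = 0.6
phi = 1.0
ratios_angle, ratios_err = [], []
prev_err = None
for _ in range(20):
    phi_next = math.atan(mu * math.tan(0.5 * phi))  # Uhlig's angle recursion
    ratios_angle.append(phi_next / phi)
    # relative corner error sec(phi_{k+1}) - 1, written as
    # 2*sin(phi/2)^2 / cos(phi) to avoid cancellation for tiny angles
    err = 2.0 * math.sin(0.5 * phi_next) ** 2 / math.cos(phi_next)
    if prev_err is not None:
        ratios_err.append(err / prev_err)
    prev_err = err
    phi = phi_next
print(ratios_angle[-1], ratios_err[-1])
```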
As \cref{ex:circ} can model any $\mu \in (0,1]$,
per \cref{key:why}, \cref{thm:uhlig_angle,thm:uhlig_ub} also accurately describe
the local behavior of Uhlig's cutting procedure at outermost points in $\fovX{A}$
for any matrix $A$.
Moreover, due to the squaring and one-quarter factor in \cref{thm:uhlig_ub},
the linear rate of convergence becomes very fast rather rapidly
as~$\mu$ decreases from one,
ultimately becoming superlinear if the outermost point is a corner ($\mu = 0$).
We can also estimate the cost of approximating
$\bfovX{A}$ about~$b_\star$, determining how many iterations will be needed
until it is no longer necessary to refine corner $c_k$, i.e.,
the value of $k$ such that $|c_k| \leq r(A) \cdot ( 1 + \tau_\mathrm{tol} )$.
For simplicity, it will now be more convenient to assume that~$j=0$
with $|c_0| = \beta \, r(A)$ for some scalar~$\beta > 1 + \tau_\mathrm{tol}$.
Via the Q-linear rate given by \cref{thm:uhlig_ub}, we have that
\[
|c_k| - r(A) \leq (|c_0| - r(A)) \cdot \left(\tfrac{1}{4}\mu^2\right)^k,
\]
and so if
\[
r(A) + (|c_0| - r(A)) \cdot \left(\tfrac{1}{4}\mu^2\right)^k \leq r(A) \cdot ( 1 + \tau_\mathrm{tol} ),
\]
then it follows that $|c_k| \leq r(A) \cdot ( 1 + \tau_\mathrm{tol} )$, i.e., it does not need to be refined further.
By first dividing the above inequality by $r(A)$ and doing some simple manipulations,
we have that $|c_k|$ is indeed sufficiently close to $r(A)$~if
\begin{equation}
k \geq \frac{\log (\tau_\mathrm{tol}) - \log (\beta - 1) }{\log \left(\tfrac{1}{4}\mu^2\right)}.
\end{equation}
Using \cref{ex:circ} with $\beta = 100$ and $\tau_\mathrm{tol} = \texttt{1e-14}$, only
$k \approx 27$, $14$, $7$, and $4$ iterations are needed, respectively,
for $\mu = 1$, $0.5$, $0.1$, and $0.01$.
This is indeed rather fast for linear convergence.
Of course, if $\fovX{A}$ has more than one outermost point, the total cost of a cutting-plane method
increases commensurately, since $\bfovX{A}$ must be well approximated about all of these outermost points.
For disk matrices, all boundary points are outermost, and so the cost blows up, per \cref{thm:uhlig_disk}.
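The iteration counts quoted above follow from a one-line evaluation of this bound; the following Python sketch (the helper name is ours) reproduces them:

```python
import math

def cuts_needed(mu, beta, tol):
    # smallest integer k satisfying the bound above, with |c_0| = beta * r(A)
    return math.ceil((math.log(tol) - math.log(beta - 1.0))
                     / math.log(0.25 * mu * mu))

print([cuts_needed(m, 100.0, 1e-14) for m in (1.0, 0.5, 0.1, 0.01)])
# -> [27, 14, 7, 4], the counts quoted in the text
```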
\section{An improved cutting-plane algorithm}
\label{sec:alg2}
We now address some inefficiencies in Uhlig's method
by giving an improved cutting-plane method.
The two main components of this refined algorithm are as follows.
First, any of the local optimization techniques from \cref{sec:alg1}
also allows us to more efficiently locate outermost points in~$\fovX{A}$.
This is possible because each outermost point is bracketed on~$\bfovX{A}$ by
two boundary points of $\fovX{A}$ in $\mathcal{Z}_j$,
and these brackets improve as $\mathcal{G}_j$ more accurately approximates $\fovX{A}$.
Therefore, once $\mathcal{G}_j$ is no longer a crude approximation,
these brackets can be used to initialize optimization to find global maximizers of~$h(\theta)$,
and thus, globally outermost points of~$\fovX{A}$.
Second, given a boundary point of $\fovX{A}$ that is also known to be locally outermost,
we use a new cutting procedure
that reduces the total number of cuts needed to sufficiently approximate $\bfovX{A}$
in this region.
When this new cut cannot be invoked, we will fall back on Uhlig's cutting procedure.
In the next three subsections, we describe our new cutting strategy, establish its Q-linear rate of convergence,
and show how these cuts can be estimated sufficiently well that our theoretical convergence rate
is indeed realized in practice. Pseudocode of the complete algorithm is given in~\cref{alg2}.
\subsection{An optimal-cut strategy}
\label{sec:opt_cut}
Again consider \cref{ex:circ}.
In \cref{fig:uhlig_jp1}, Uhlig's cut of corner $c_j$ between $b_j$ (with $|b_j| < r$) and $b_\star$ produces
two new corners~$\hat c_{j+1}$ and $c_{j+1}$, but since $|\hat c_{j+1}| < r$ and $|c_{j+1}| > r$,
it is only necessary to subsequently refine $c_{j+1}$.
However, in \cref{fig:uhlig_two} we show another scenario where both of the two new corners
produced by Uhlig's cut will require subsequent cutting as well.
While \cref{thm:uhlig_angle,thm:uhlig_ub} indicate
the number of iterations Uhlig's method needs to sufficiently refine the sequence $\{c_k\}$,
they do not take into account that the CCW corners that are generated may also need to be cut.
Thus, the total number of eigenvalue computations with $H(\theta)$
can be higher than what is suggested by these two theorems.
However, comparing \cref{fig:uhlig_jp1,fig:uhlig_two} immediately
suggests a better strategy, namely, to make the largest reduction in the angle $\phi_j$
such that the CCW corner~$\hat c_{j+1}$ (between $b_j$ and $c_j$ on the tangent line for $b_j$)
does not subsequently need to be refined, i.e., such that $|\hat c_{j+1}| = r$.
In \cref{fig:uhlig_two}, this ideal corner is labeled $d_j$, while the corresponding optimal cut for this same example
is shown in \cref{fig:opt_cut}, where $d_j$ coincides with~$\hat c_{j+1}$, and so the latter is not labeled.
\subsection{Convergence analysis of the optimal cut}
\label{sec:opt_conv}
Before describing how to compute optimal cuts,
we derive the convergence rate of the sequence of angles~$\{\phi_k\}$
this strategy produces.
Per \cref{key:why}, it again suffices to study \cref{ex:circ}.
\begin{figure}[t]
\centering
\subfloat[Uhlig's cut.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{cut_uhlig}}
\label{fig:uhlig_two}
}
\subfloat[The optimal cut.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{cut_optimal}}
\label{fig:opt_cut}
}
\caption{Depictions of corner $c_j$ between boundary points $b_j$ and $b_\star$
being cut by Uhlig's procedure (left) and the optimal cut (right),
where the latter always makes the largest possible reduction in $\phi_j$ such that
only corner $c_{j+1}$ must be refined.
}
\label{fig:cuts_comp}
\end{figure}
\begin{lemma}
\label{lem:dj}
Given \cref{ex:circ},
additionally assume
that $|b_j| < r$ and $\tilde r < r$, and so at $b_\star$,
$\mu \in (0,1)$.
Then the point on~$L_{\theta_j}$,
the supporting hyperplane passing through $b_j$ with \mbox{$\theta_j = -\phi_j \in (-\pi,0)$},
that is closest to $c_j$ and has modulus $r$ is
\begin{subequations}
\begin{align}
\label{eq:dlem}
d_j &= b_j - \mathbf{i} t_j \eix{\phi_j}, \quad \text{where} \\
\label{eq:tlem}
t_j &= -s \sin \phi_j + \sqrt{s^2 \sin^2 \phi_j + 2s \tilde r(1 - \cos \phi_j)} > 0.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
As $\phi_j \in (0,\pi)$,
clearly \eqref{eq:dlem} must hold for some~$t_j > 0$.
To obtain~\eqref{eq:tlem}, we use the fact that $|d_j|^2 = r^2$ and solve for $t_j$ using \eqref{eq:dlem}.
Setting~$u = \eix{\phi_j}$, we have
\begin{align*}
0 &= |d_j|^2 - r^2 = (b_j - \mathbf{i} t_j u) \big(\overline b_j + \mathbf{i} t_j \overline{u}\big) - r^2
= t_j^2 + \mathbf{i} (b_j \overline{u} - \overline b_j u)t_j + |b_j|^2 - r^2,
\end{align*}
which by substituting in the following two equivalences
\begin{align*}
b_j \overline{u} - \overline b_j u
&= (s + \tilde r u) \overline{u} - (s + \tilde r \overline{u}) u
= s (\overline{u} - u) = -\mathbf{i} 2s \sin \phi_j, \\
|b_j|^2 &= s^2 + \tilde r^2 + s \tilde r(u + \overline{u}) = s^2 + \tilde r^2 + 2s \tilde r \cos \phi_j,
\end{align*}
yields
\[
0 = t_j^2 + (2 s \sin \phi_j) t_j + (s^2 + \tilde r^2 - r^2 + 2s\tilde r \cos \phi_j).
\]
By substituting in $r^2 = (s + \tilde r)^2 = s^2 + \tilde r^2 + 2s\tilde r$, this simplifies further to
\[
0 = t_j^2 + (2 s \sin \phi_j) t_j + 2s \tilde r (\cos \phi_j - 1).
\]
Thus, by the quadratic formula, we obtain \eqref{eq:tlem}.
\end{proof}
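A quick numerical check of \cref{lem:dj} (in Python, with illustrative parameter values) confirms that the point $d_j$ built from \eqref{eq:dlem} and \eqref{eq:tlem} indeed has modulus $r$:

```python
import cmath
import math

# disk model from the running example: boundary circle of radius rt
# centered at s on the real axis, so r = s + rt
s, rt = 0.4, 0.6
r = s + rt
phi = 0.7                                  # angle of the boundary point b_j
b = s + rt * cmath.exp(1j * phi)           # b_j
t = -s * math.sin(phi) + math.sqrt(
    s ** 2 * math.sin(phi) ** 2 + 2.0 * s * rt * (1.0 - math.cos(phi)))
d = b - 1j * t * cmath.exp(1j * phi)       # d_j = b_j - i t_j e^{i phi_j}
print(abs(abs(d) - r))                     # ~0, i.e., |d_j| = r as claimed
```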
\begin{lemma}
\label{lem:opt_angle}
Given the assumptions in \cref{lem:dj} and $t_j > 0$ from \eqref{eq:tlem},
also suppose that \mbox{$\phi_j \in (0,\tfrac{\pi}{2})$}.
Then if optimal cuts are sequentially applied to the corners~$\{c_k\}$ described in \cref{ex:circ},
for all $k \geq j$,
\begin{equation}
\label{eq:opt_thetajp1}
\phi_{k+1}
= -\phi_k + 2\arctan \left( \frac{\tilde r \sin \phi_k - t_k \cos \phi_k}{\tilde r \cos \phi_k + t_k\sin\phi_k} \right),
\end{equation}
where $t_k$ is given by \eqref{eq:tlem} with $j$ replaced by $k$.
\end{lemma}
\begin{proof}
Let $\hat \phi = \phi_j - \Arg(d_j - s)$. Then it follows that
\begin{equation}
\label{eq:theta_jp1_proof}
\phi_{j+1} = \phi_j - 2\hat \phi
= -\phi_j + 2\Arg(d_j - s)
= - \phi_j + 2\arctan \left( \frac{\Im (d_j - s)}{\Re (d_j - s)} \right),
\end{equation}
where the last equality follows because $\Re (d_j - s) > 0$, as $\Re b_j > s$.
Using \eqref{eq:dlem} and substituting in $b_j = s + \tilde r\mathrm{e}^{\mathbf{i} \phi_j}$,
we have that
\[
d_j - s = \tilde r \mathrm{e}^{\mathbf{i} \phi_j} - \mathbf{i} t_j \mathrm{e}^{\mathbf{i} \phi_j}
\qquad \Longleftrightarrow \qquad
\begin{aligned}
\Re (d_j - s) &= \tilde r \cos \phi_j + t_j \sin \phi_j,\\
\Im (d_j - s) &= \tilde r \sin \phi_j - t_j \cos \phi_j.
\end{aligned}
\]
Substituting these into~\eqref{eq:theta_jp1_proof} completes the proof.
\end{proof}
Before deriving how fast $\{\phi_k\}$ converges, we show that it indeed converges to zero.
\begin{lemma}
\label{lem:opt_zero}
Given the assumptions of \cref{lem:opt_angle},
the recursion \eqref{eq:opt_thetajp1} for
optimal cuts produces a sequence of angles $\{\phi_k\}$ converging to zero.
\end{lemma}
\begin{proof}
By construction, $\{\phi_k\}$ is monotonically decreasing, i.e., $\phi_{k+1} < \phi_k$ for all $k$,
and bounded below by zero, and so $\{\phi_k\}$ converges to some limit $l$.
Per \eqref{eq:tlem}, the $t_j$ values appearing in \eqref{eq:opt_thetajp1} depend on $\phi_j$,
so we define the analogous continuous function
\begin{equation}
\label{eq:t_cont}
t(\phi) = -s \sin \phi + \sqrt{s^2 \sin^2 \phi + 2s \tilde r (1 - \cos \phi )}.
\end{equation}
Now by way of contradiction, assume that $l > 0$ and so $0 < l < \phi_j < \tfrac{\pi}{2}$.
Thus,
\[
\lim_{k\to\infty} \phi_{k+1} = l = -l + 2\arctan \left( \frac{\tilde r \sin l - t(l) \cos l}{\tilde r \cos l + t(l) \sin l} \right)
\ \Leftrightarrow \
\tan l = \frac{\tilde r \sin l - t(l) \cos l}{\tilde r \cos l + t(l) \sin l}.
\]
By multiplying both sides by $\tilde r \cos l + t(l) \sin l$ and rearranging terms,
we obtain the equality \mbox{$t(l)(\sin^2 l + \cos^2 l) = 0$}, and so $t(l) = 0$.
However, \cref{lem:dj} states that $t(l) > 0$ should hold since $l \in (0,\tfrac{\pi}{2})$,
a contradiction, and so $l=0$.
\end{proof}
We now have the necessary pieces to derive the exact rate of convergence of
the angles produced by optimal cuts.
\begin{theorem}
\label{thm:opt_conv}
The sequence $\{\phi_k\}$ produced by optimal cuts
and described by recursion \eqref{eq:opt_thetajp1}
converges to zero Q-linearly with rate $\tfrac{2(1 - \sqrt{1 - \mu})}{\mu} - 1$.
\end{theorem}
\begin{proof}
By \cref{lem:opt_angle,lem:opt_zero},
\eqref{eq:opt_thetajp1} holds, $\phi_k \to \phi_\star = 0$, and $\phi_k \geq 0$ for all $k$, so
\[
\lim_{k \to \infty} \frac{|\phi_{k+1} - \phi_\star|}{| \phi_{k} - \phi_\star |}
= \lim_{k \to \infty} \frac{\phi_{k+1}}{\phi_{k}}
= \lim_{k \to \infty} \frac{
-\phi_k + 2\arctan \left( \frac{\tilde r \sin \phi_k - t_k \cos \phi_k}{\tilde r \cos \phi_k + t_k \sin\phi_k} \right)
}{\phi_k}.
\]
Using the continuous version of $t_k$ given in \eqref{eq:t_cont},
we instead consider the entire limit in continuous form:
\begin{equation}
\label{eq:opt_lim_cont}
\lim_{\phi \to 0} \frac{
-\phi + 2\arctan \left( \frac{\tilde r \sin \phi - t(\phi) \cos \phi}{\tilde r \cos \phi + t(\phi) \sin\phi} \right)
}{\phi}
= -1 + \lim_{\phi \to 0}
\frac{2}{\phi} \cdot
\frac{ \tilde r \phi - t(\phi)\cos \phi}{\tilde r \cos\phi + t(\phi) \phi},
\end{equation}
where the equality holds by using the small-angle approximations $\arctan x \approx x$
(as the ratio inside the $\arctan$ above goes to zero as $\phi \to 0$)
and $\sin x \approx x$.
Again using
$\sin x \approx x$ as well as the small-angle approximation $1 - \cos x \approx \tfrac{1}{2}x^2$,
we also have the small-angle approximation
\begin{equation}
\label{eq:t_approx}
t(\phi)
\approx -s \phi + \sqrt{s^2 \phi^2 + 2s\tilde r \big(\tfrac{1}{2}\phi^2\big)}
= -\phi \left(s - \sqrt{s^2 + s\tilde r}\right)
= -\phi \left(s - \sqrt{s r} \right),
\end{equation}
where the last equality holds since $\tilde r = r - s$.
Via substituting in \eqref{eq:t_approx}, the limit on the right-hand side of \eqref{eq:opt_lim_cont} is
\[
\lim_{\phi \to 0}
\frac{2}{\phi}\cdot
\frac{\tilde r \phi + \phi \left(s - \sqrt{s r} \right) \cos \phi
}{\tilde r \cos \phi - \phi \left(s - \sqrt{s r} \right) \phi }
= \lim_{\phi \to 0}
\frac{2 \left( \tilde r + \left(s - \sqrt{s r} \right) \cos \phi \right)
}{\tilde r \cos \phi - \phi^2 \left(s - \sqrt{s r} \right) } \\
= \frac{
2 \left( \tilde r + s - \sqrt{s r} \right)
}{\tilde r}.
\]
Recalling that $\tilde r = \mu r$ and that $s = r - \tilde r = r - \mu r$,
by substitutions
we can rewrite the ratio above as
\[
\frac{2\left(\mu r + (r - \mu r) - \sqrt{(r - \mu r) r} \right) }{\mu r}
= \frac{2\left(r - r \sqrt{1 - \mu} \right) }{\mu r}
= \frac{2\left(1 - \sqrt{1 - \mu} \right) }{\mu}.
\]
Subtracting one from the value above completes the proof.
\end{proof}
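The rate in \cref{thm:opt_conv} can be verified numerically by iterating recursion \eqref{eq:opt_thetajp1} on the disk model of \cref{ex:circ}. The Python sketch below (illustrative values, with $r = 1$ so that $\mu = \tilde r$) also shows the optimal-cut rate beating Uhlig's angle rate $\tfrac{1}{2}\mu$:

```python
import math

def opt_rate(mu):
    # predicted Q-linear rate of the optimal-cut angles
    return 2.0 * (1.0 - math.sqrt(1.0 - mu)) / mu - 1.0

# disk model: radius rt centered at s, so r = s + rt = 1 and mu = rt
s, rt = 0.4, 0.6

def t_of(phi):
    # continuous version t(phi); 1 - cos(phi) is written as 2*sin(phi/2)^2
    # to avoid cancellation for small angles
    return -s * math.sin(phi) + math.sqrt(
        s ** 2 * math.sin(phi) ** 2 + 4.0 * s * rt * math.sin(0.5 * phi) ** 2)

phi = 0.5
ratio = None
for _ in range(12):
    t = t_of(phi)
    num = rt * math.sin(phi) - t * math.cos(phi)
    den = rt * math.cos(phi) + t * math.sin(phi)
    phi_next = -phi + 2.0 * math.atan(num / den)
    ratio = phi_next / phi
    phi = phi_next
print(ratio, opt_rate(rt))  # the two values agree closely
```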
As we show momentarily, optimal cuts have a lower total cost than Uhlig's cutting procedure.
Thus, there is no need to derive an analogue of \cref{thm:uhlig_ub} for describing
the convergence rate of the moduli of corners~$c_k$ produced by the optimal-cut strategy.
\subsection{Computing the optimal cut}
\label{sec:compute_opt}
Suppose that $b_\star \in \mathcal{Z}_j$ attains the value of~$l_j$,
and that $b_\star$ is also locally outermost in~$\fovX{A}$,
and let \mbox{$\gamma = |b_\star| \leq r(A)$}.
Without loss of generality, we assume that $\Arg(b_\star) = 0$,
and let $b_j \in \mathcal{Z}_j$ be the next known boundary point of $\fovX{A}$ with $\Arg(b_j) \in (0,\pi)$.
We can model $\bfovX{A}$ between~$b_\star$ and~$b_j$ by fitting a quadratic
that interpolates $\bfovX{A}$ at~$b_\star$ and~$b_j$.
If this model is a good fit, then it can be used to estimate~$d_j$,
and thus, also the optimal cut.
Since $b_\star$ is also a locally outermost point of $\fovX{A}$ and $\Arg(b_\star) = 0$,
we can interpolate these boundary points using the
sideways quadratic (opening up to the left in the complex plane)
\[
q(y) = q_2 y^2 + q_1 y + q_0,
\]
with the remaining degree of freedom used to specify
that $q(y)$ should be tangent to~$\fovX{A}$ at~$b_\star$.
Clearly, $q(y)$ cannot be a good fit if $L_{\theta_j}$,
the supporting hyperplane passing through $b_j$, is increasing from left to right in the complex plane;
hence, we also assume that \mbox{$\theta_j \in (-\tfrac{\pi}{2},0)$}.
Let~\mbox{$\theta_\dagger \in (\theta_j,0)$} denote the angle of the supporting hyperplane
for the optimal cut, e.g., for \cref{ex:circ}, the one that passes through $d_j$ and the boundary point $b_{j+1}$
between $b_j$ and $b_\star$.
By our criteria, the equations
\begin{equation}
q(0) = \gamma,
\quad
q(\Im b_j) = \Re b_j,
\quad \text{and} \quad
q^\prime(0) = 0
\end{equation}
determine the coefficients $q_0$, $q_1$, and $q_2$; solving them yields
\begin{equation}
q_2 = \frac{ \Re b_j - \gamma}{(\Im b_j)^2},
\quad
q_1 = 0,
\quad \text{and} \quad
q_0 = \gamma.
\end{equation}
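For concreteness, here is a small Python sketch of this fit (the boundary data $b_\star = \gamma$ and $b_j$ are made-up values, not from a real matrix):

```python
import numpy as np

# Hypothetical boundary data (made-up values): outermost point b_star = gamma
# on the real axis, and the next known boundary point b_j above it.
gamma = 1.0
b_j = 0.8 + 0.5j

# Coefficients from q(0) = gamma, q(Im b_j) = Re b_j, and q'(0) = 0:
q2 = (b_j.real - gamma) / b_j.imag**2
q1 = 0.0
q0 = gamma

def q(y):                 # the sideways quadratic x = q(y)
    return q2 * y**2 + q1 * y + q0

def dq(y):                # its slope q'(y)
    return 2 * q2 * y + q1
```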
\begin{algfloat}[t]
\begin{algorithm}[H]
\floatname{algorithm}{Algorithm}
\caption{An Improved Cutting-Plane Algorithm}
\label{alg2}
\begin{algorithmic}[1]
\REQUIRE{
$A \in \mathbb{C}^{n \times n}$ with $n \geq 2$ and $\tau_\mathrm{tol} > 0$.
}
\ENSURE{
$l$ such that $|l -r(A)| \leq \tau_\mathrm{tol} \cdot r(A)$.
\\ \quad
}
\STATE $\psi \gets \Arg(\lambda)$ where $\lambda \in \Lambda(A)$ attains $\rho(A)$ \COMMENT{0 if $\lambda = 0$}
\STATE $\mathcal{G} \gets P_{\theta_1} \cap \cdots \cap P_{\theta_4}$ where $\theta_\ell = \tfrac{\ell -1 }{2}\pi - \psi$
for $\ell = 1,2,3,4$
\STATE $\mathcal{Z} \gets \{ z_{\theta_\ell} : \ell = 1,2,3,4\}$
\STATE $l \gets \max \{ |b| : b \in \mathcal{Z} \}$, $u \gets \max \{ |c| : c \text{ a corner of } \mathcal{G} \}$
\WHILE {$\tfrac{ u - l }{l} > \tau_\mathrm{tol}$ }
\STATE $L_{\theta_1} \gets$ supporting hyperplane
for the boundary point in $\mathcal{Z}$ attaining~$l$
\STATE $\gamma \gets$ local max of $\rho(H(\theta))$ via optimization initialized at $\theta_1$
\STATE $\mathcal{G} \gets \mathcal{G} \cap
P_{\theta_1} \cap \cdots \cap P_{\theta_q}$
for the $q$ angles $\theta_\ell$ encountered during optimization
\STATE $\mathcal{Z} \gets \mathcal{Z} \cup \{ z_{\theta_\ell} : \ell = 1,\ldots,q\}$
\STATE $c \gets \text{ outermost corner of } \mathcal{G}$
\IF { the optimal cut should be applied to $c$ per \cref{sec:compute_opt} }
\STATE $\theta \gets$ the angle $\theta_\dagger$ given by \eqref{eq:opt_cut_angle}
(rotated and flipped as necessary)
\ELSE
\STATE $\theta \gets -\Arg(c)$ \COMMENT{Uhlig's cut}
\ENDIF
\STATE $\mathcal{G} \gets \mathcal{G} \cap P_\theta$
\STATE $\mathcal{Z} \gets \mathcal{Z} \cup \{ z_{\theta} \}$
\STATE $l \gets \max \{ |b| : b \in \mathcal{Z} \}$, $u \gets \max \{ |c| : c \text{ a corner of } \mathcal{G} \}$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\algnote{
For simplicity, we forgo describing pseudocode to exploit possible normality of~$A$ or symmetry of $\fovX{A}$,
and assume that $A\neq0$, eigenvalues and local maximizers are obtained exactly,
optimization is monotonic, i.e., $l \leq \gamma$ is guaranteed,
and there are no ties for the boundary point in line~6.
Note that $\psi$ just specifies the rotation of the initial rectangular bounding box~$\mathcal{G}$ for~$\fovX{A}$.
}
\end{algfloat}
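To complement the pseudocode, the following Python sketch (illustrative only: it uses a fixed uniform fan of supporting hyperplanes on a random test matrix, rather than the adaptive Uhlig/optimal cuts of \cref{alg2}) shows how the inner bound $l$, from boundary points $z_\theta$, and the outer bound $u$, from the corners of $\mathcal{G}$, bracket $r(A)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def support(theta):
    """lambda_max and top eigenvector of H(theta) = (e^{i theta} A + e^{-i theta} A^*)/2."""
    B = np.exp(1j * theta) * A
    w, V = np.linalg.eigh((B + B.conj().T) / 2)
    return w[-1], V[:, -1]

thetas = np.linspace(0.0, 2 * np.pi, 721)   # fixed fan of supporting hyperplanes
h = np.empty(len(thetas))
l = 0.0                                     # inner bound from boundary points z_theta
for k, th in enumerate(thetas):
    h[k], x = support(th)
    z = x.conj() @ A @ x                    # boundary point z_theta of F(A)
    l = max(l, abs(z))

u = 0.0                                     # outer bound from corners of G
for k in range(len(thetas) - 1):
    t1, t2 = thetas[k], thetas[k + 1]
    # corner: intersection of Re(e^{i t} z) = h(t) for t = t1, t2, with z = x + i y
    M = np.array([[np.cos(t1), -np.sin(t1)],
                  [np.cos(t2), -np.sin(t2)]])
    cx, cy = np.linalg.solve(M, [h[k], h[k + 1]])
    u = max(u, np.hypot(cx, cy))
```

With 720 uniformly spaced cuts the bracket $[l,u]$ should already be quite tight for a smooth field of values; the point of \cref{alg2} is to reach a given tolerance with far fewer cuts by choosing them adaptively.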
We can assess whether $q(y)$ is a good fit for $\bfovX{A}$ about $b_\star$
by checking how close $q(y)$ is to being tangent to $\bfovX{A}$ at $b_j$,
i.e., $q(y)$ is a good fit if
\begin{equation}
q^\prime(\Im b_j) \approx \tan \theta_j.
\end{equation}
If these two values are not sufficiently close, then we consider $q(y)$ a poor local approximation of $\bfovX{A}$ at $b_j$ (and $b_\star$)
and use Uhlig's cutting procedure to update $\mathcal{G}_j$ and $\mathcal{Z}_j$.
Otherwise, we assume that $q(y)$ does accurately model $\bfovX{A}$ in this region
and do an optimal cut.
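The acceptance test above can be illustrated on a hypothetical circular arc (the unit circle about the origin, so the exact supporting-line angle at $b_j = \mathrm{e}^{\mathbf{i}\phi}$ is $\theta_j = -\phi$); the quadratic's slope matches $\tan\theta_j$ only when $b_j$ is close to $b_\star$:

```python
import numpy as np

gamma = 1.0               # b_star = 1 on the unit circle (hypothetical arc)

def fit_ok(phi, rtol=1e-2):
    """Accept the quadratic model if q'(Im b_j) is close to tan(theta_j)."""
    b_j = np.exp(1j * phi)            # next boundary point on the circle
    theta_j = -phi                    # exact supporting-line angle for this arc
    q2 = (b_j.real - gamma) / b_j.imag**2
    return np.isclose(2 * q2 * b_j.imag, np.tan(theta_j), rtol=rtol)
```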
To estimate $\theta_\dagger$, we need to determine the line
\[
a(y) = a_1 y + a_0
\]
such that $a(y)$ passes through $d_j$ for $y = \Im d_j$ and is tangent to $q(y)$
for some \mbox{$\tilde y \in (0,\Im d_j)$}.
Thus, we solve the following set of equations:
\begin{subequations}
\begin{alignat}{4}
\label{eq:lin1}
\Re d_j &= a(\Im d_j)
&& \qquad \Longleftrightarrow \qquad &
\Re d_j &= a_1 \Im d_j + a_0,\\
\label{eq:lin2}
q(\tilde y) &= a(\tilde y)
&& \qquad \Longleftrightarrow \qquad &
q_2 \tilde y^2 + q_0 &= a_1 \tilde y + a_0, \\
\label{eq:lin3}
q^\prime(\tilde y) &= a^\prime(\tilde y)
&& \qquad \Longleftrightarrow \qquad &
2q_2 \tilde y &= a_1,
\end{alignat}
\end{subequations}
to determine $a_0$, $a_1$ and $\tilde y$. This yields
\begin{equation}
\label{eq:lincoeffs}
\tilde y = \Im d_j - \sqrt{(\Im d_j)^2 + \tfrac{q_0 - \Re d_j }{q_2}},
\ \ \
a_0 = -q_2 \tilde y^2 + q_0,
\ \ \ \text{and} \ \ \
a_1 = \frac{\Re d_j - a_0}{\Im d_j},
\end{equation}
where $a_1$ follows directly from \eqref{eq:lin1}, $a_0$ is obtained by substituting
the value of $a_1$ given in~\eqref{eq:lin3} into \eqref{eq:lin2}, and $\tilde y$ follows
from substituting the value of $a_0$ given in \eqref{eq:lincoeffs} into $a_1$ in \eqref{eq:lincoeffs}
(so that $a_1$ now only has $\tilde y$ as an unknown),
and then substituting this version of $a_1$ into \eqref{eq:lin3},
which results in a quadratic equation in~$\tilde y$.
Since $q(y)$ is a sufficiently accurate local model of $\bfovX{A}$,
it follows that
\begin{equation}
\label{eq:opt_cut_angle}
\theta_\dagger \approx \arctan a_1.
\end{equation}
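With made-up values for $\gamma$, $b_j$, and the (slightly shifted) corner $d_j$, the closed-form solution \eqref{eq:lincoeffs} can be verified directly against \eqref{eq:lin1}--\eqref{eq:lin3}:

```python
import numpy as np

# Made-up values: quadratic model x = q2*y^2 + q0 near b_star = gamma, and a
# (slightly shifted) corner point d_j; none of these come from a real matrix.
gamma, b_j = 1.0, 0.8 + 0.5j
q2 = (b_j.real - gamma) / b_j.imag**2      # = -0.8
q0 = gamma
d_j = 0.98 + 0.275j

# closed-form solution (eq:lincoeffs)
y_t = d_j.imag - np.sqrt(d_j.imag**2 + (q0 - d_j.real) / q2)
a0 = -q2 * y_t**2 + q0
a1 = (d_j.real - a0) / d_j.imag
theta_dagger = np.arctan(a1)               # estimate of the optimal-cut angle
```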
If $\gamma = r(A)$, we can also estimate the value of $\mu$ at~$b_\star$ via
\begin{equation}
\label{eq:mu_est}
\mu_\mathrm{est} \coloneqq \frac{1}{2|q_2|\gamma},
\end{equation}
as the osculating circle of $q(y)$ at $y=0$ has radius $\tfrac{1}{2}|q_2|^{-1}$.
While the value of $\mu$ at $b_\star$ could instead be computed using \cref{thm:curvature},
this would be much more expensive and requires that $\lambda_\mathrm{max}(H(0))$ be simple,
which may not hold.
Detecting the normalized radius of curvature at outermost points via~\eqref{eq:mu_est}
will be a key component of our hybrid algorithm in \cref{sec:hybrid}.
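As a sanity check of \eqref{eq:mu_est} (a sketch on a made-up circular boundary arc, not one of the paper's test matrices): for an arc of a circle of radius $R$ centered at $c$ on the positive real axis, the true normalized radius of curvature at $b_\star = c + R$ is $\mu = R/(c+R)$, and $\mu_\mathrm{est}$ recovers it as $b_j \to b_\star$:

```python
import numpy as np

c, R = 1.0, 0.5                  # made-up circle: center c (real) and radius R
gamma = c + R                    # b_star = gamma on the real axis
mu_true = R / gamma              # normalized radius of curvature at b_star

def mu_est(phi):
    """Estimate (eq:mu_est) when b_j lies on the circle at angle phi."""
    b_j = c + R * np.exp(1j * phi)
    q2 = (b_j.real - gamma) / b_j.imag**2
    return 1.0 / (2 * abs(q2) * gamma)

coarse, fine = mu_est(0.5), mu_est(0.01)
```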
Our formulas for computing $\theta_\dagger$ can be
used for any outermost point simply by rotating and flipping the problem
as necessary to satisfy the assumptions on~$b_\star$ and~$b_j$.
To be robust against even small rounding errors,
instead of $d_j$, we use \mbox{$(1 - \delta) d_j + \delta b_j$} for some small $\delta \in (0,1)$,
i.e., a point slightly closer to $b_j$.
For different values of $\mu \in [0,1)$,
\cref{fig:conv_rates} plots the convergence rates for $\{\phi_k\}$ given by \cref{thm:uhlig_angle,thm:opt_conv},
while \cref{fig:total_cuts} shows the total number of cuts needed by
each cutting strategy in order to sufficiently approximate~$\bfovX{M}$ near~$b_\star$.
Uhlig's method is usually slightly more expensive
but becomes significantly worse than optimal cutting
for normalized curvatures $\mu \approx 0.84$ and higher,
requiring about double the number of cuts at this transition point.
A variant of \cref{fig:total_cuts} (not shown) also reveals that optimal cuts become slightly more expensive
than Uhlig's cuts for~$\mu \approx 0.999961$ and~above, and so we only use the optimal cut
when $\mu_\mathrm{est}$ is less than this value.
\begin{figure}
\centering
\subfloat[Convergence rates of $\{\phi_k\}$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{convergence_rates}}
\label{fig:conv_rates}
}
\subfloat[Cost to approximate a region of $\bfovX{M}$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{total_cuts_disk}}
\label{fig:total_cuts}
}
\caption{
Left: the respective convergence rates of $\{\phi_j\}$ given by \cref{thm:uhlig_angle,thm:opt_conv}.
Right: for tolerance \mbox{$\tau_\mathrm{tol} = \texttt{1e-14}$},
the total number of cuts required
to sufficiently approximate the region of~$\bfovX{M}$ specified by the supporting hyperplanes
$L_{\theta_\star}$ and~$L_{\theta_j}$ respectively passing through $b_\star$ and $b_j$
and the corner $c_j$, where $\theta_\star = 0$ and $\theta_j = -\tfrac{\pi}{50}$.
}
\label{fig:rates_comp}
\end{figure}
\section{A hybrid algorithm}
\label{sec:hybrid}
\cref{tbl:eig} and our new analyses suggest
that it would be far more efficient to combine level-set and cutting-plane techniques in a single hybrid algorithm
rather than rely on either technique alone.
For the smallest values of~$n$, the level-set approach would generally be most efficient,
while for larger problem sizes, which approach would
be fastest depends on the specific shape of $\fovX{A}$ and the normalized
radius of curvature at outermost points.
While we cannot know these things \emph{a priori}, per \eqref{eq:mu_est},
\cref{alg2} can estimate $\mu$ as it iterates, and
so we can predict how many more cuts may be needed about a particular outermost point.
The current approximation~$\mathcal{G}$ and $\mathcal{Z}$ can also be used to obtain
cheaply computed estimates of how many more cuts will be needed
to approximate regions of $\bfovX{A}$ to sufficient accuracy.
Consequently, as \cref{alg2} iterates,
we can maintain an evolving estimate of how many more cuts would be needed
in order to compute $r(A)$ to the desired tolerance.
Thus, our hybrid
algorithm can automatically determine if the cutting-plane approach
is likely to be fast or slow, and if the latter, automatically switch to the level-set approach.
For example, in practice, \cref{alg1} often only requires one to two eigenvalue computations
with $R_\gamma - \lambda S$ and several more with~$H(\theta)$.
Hence, in conjunction with tuning/benchmark data, such as that shown in \cref{tbl:eig},
our hybrid algorithm can reliably estimate whether it will be faster to continue \cref{alg2}
or immediately switch to \cref{alg1}, which will be warm-started using the angle of the supporting hyperplane
that passes through the point in $\mathcal{Z}$ that attains $l$ in line~6,
as well as the arguments of the corners to the left and right of this point.
\section{Numerical validation}
\label{sec:experiments}
Experiments were done in MATLAB\ R2021a
on a 2020 13-inch MacBook Pro laptop with an Intel Core i5-1038NG7 quad-core CPU, 16GB of RAM, and macOS v10.15.
For each problem, all computed estimates for $r(A)$ agreed to at least 14 digits, the tolerance we set,
so we only show cost comparisons here.
We used \texttt{eig} for all eigenvalue computations\footnote{In practice, note the following recommendations.
As $n$ increases, \texttt{eigs} should be preferred over \texttt{eig} for computing eigenvalues of $H(\theta)$,
but this can be determined automatically via tuning.
Relatedly, we suggest using \texttt{eigs} with $\texttt{k} > 1$ for robustness,
as the desired eigenvalue may not always be the first to converge.
For robustly identifying all the unimodular eigenvalues of $R_\gamma - \lambda S$,
it is generally recommended to use structure-preserving eigensolvers, e.g., \cite{BenBMetal02,BenSV16}.
}
as (a) it sufficed to verify the benefits of our new methods
and theoretical results and (b) this consistency
simplifies the comparisons; e.g., \texttt{numr}, Mengi's implementation of his level-set method with Overton,
also only uses \texttt{eig}.
All code and data are included as supplementary material for reproducibility.
Implementations of our new methods will be added to ROSTAPACK~\cite{rostapack}.
We begin by comparing \cref{alg1} to \texttt{numr}.
Per \cref{tbl:level},
\cref{alg1} generally only needed a single eigenvalue computation with $R_\gamma - \lambda S$ and at most two,
and for $n \geq 300$, was 4.1--6.5 times faster than \texttt{numr}.
Even with optimization disabled, our iteration using $\rho(H(\theta))$
was still faster than~\texttt{numr}.
\begin{table}
\centering
\caption{
For dense random $A$ matrices,
the costs of \cref{alg1}
and Mengi and Overton's method (\texttt{numr}) are shown.
We tested \cref{alg1} with optimization enabled (Opt., done via Newton's method) and disabled (MP, for midpoints only).
}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{r SSc | ccc | SSS}
\toprule
\multicolumn{1}{c}{} &
\multicolumn{3}{c}{\# of \texttt{eig($H(\theta)$)}} &
\multicolumn{3}{c}{\# of \texttt{eig($R_\gamma$,$S$)}} &
\multicolumn{3}{c}{Time (sec.)} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{Alg.~\ref{alg1}} &
\multicolumn{1}{c}{\texttt{numr}} &
\multicolumn{2}{c}{Alg.~\ref{alg1}} &
\multicolumn{1}{c}{\texttt{numr}} &
\multicolumn{2}{c}{Alg.~\ref{alg1}} &
\multicolumn{1}{c}{\texttt{numr}} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-4}
\cmidrule(lr){5-6}
\cmidrule(lr){7-7}
\cmidrule(lr){8-9}
\cmidrule(lr){10-10}
\multicolumn{1}{c}{$n$} &
\multicolumn{1}{c}{Opt.} &
\multicolumn{1}{c}{MP} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{Opt.} &
\multicolumn{1}{c}{MP} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{Opt.} &
\multicolumn{1}{c}{MP} &
\multicolumn{1}{c}{} \\
\midrule
\multicolumn{1}{r|}{ 100} & 9 & 10 & 39 & 1 & 4 & 6 & 0.1 & 0.1 & 0.2 \\
\multicolumn{1}{r|}{ 200} & 14 & 11 & 34 & 2 & 4 & 5 & 0.6 & 0.9 & 1.2 \\
\multicolumn{1}{r|}{ 300} & 17 & 9 & 26 & 1 & 4 & 5 & 1.1 & 3.6 & 4.4 \\
\multicolumn{1}{r|}{ 400} & 17 & 15 & 42 & 1 & 5 & 7 & 2.5 & 10.7 & 14.9 \\
\multicolumn{1}{r|}{ 500} & 7 & 8 & 54 & 1 & 3 & 7 & 4.3 & 12.0 & 28.0 \\
\multicolumn{1}{r|}{ 600} & 13 & 10 & 45 & 1 & 4 & 7 & 8.0 & 28.6 & 49.7 \\
\bottomrule
\end{tabular}
\label{tbl:level}
\end{table}
We now verify that our local convergence rate analyses from \cref{sec:uhlig_conv} and \cref{sec:opt_conv}
do indeed hold for general matrices and that our procedure for computing optimal cuts is sufficiently accurate
to realize the convergence rate given by \cref{thm:opt_conv}.
First, we obtained 200 general examples with roughly equally spaced values of~\mbox{$\mu \in [0,1]$}.
This was done by running optimization on $\min_{X} r(A+BXC)$,
where $A \in \mathbb{C}^{10 \times 10}$ is diagonal, while $B \in \mathbb{C}^{10 \times 5}$, $C \in \mathbb{C}^{5 \times 10}$,
and $X \in \mathbb{R}^{5 \times 5}$ are dense, and collecting the iterates.
By starting at $X=0$, we obtain an example with~$\mu=0$; since~$A$ is diagonal, $\fovX{A}$ is a polygon.
Since minimizing $r(A+BXC)$
often causes $\mu \to 1$ as optimization progresses~\cite{LewO20},
we also obtain a sequence of examples $A+BX_kC$ for iterates $\{X_k\}$ with various~$\mu$~values
(computed via \cref{thm:curvature}).
Generating new $A$, $B$, and $C$ matrices and running optimization from $X_0=0$ was repeated
in a loop until the desired set of 200 general examples had been obtained.
For each problem, we recorded the total number of cuts
that Uhlig's cutting procedure and the optimal-cut strategy each needed to
sufficiently approximate the boundary of the field of values in a small neighborhood to one side of its outermost point. More specifically,
we performed an analogous experiment to the one we showed earlier for approximating
a region of the boundary of \cref{ex:circ}.
As can be seen by comparing \cref{fig:total_cuts,fig:alg2_exp}, for any given $\mu$,
the total number of cuts needed on arbitrarily shaped fields of values
is essentially the same as that needed for \cref{ex:circ},
thus validating the generality of our convergence rate analysis and the reliability of our method for computing optimal cuts.
\begin{figure}[t]
\centering
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{total_cuts_sof}}
\caption{
For 200 arbitrarily shaped fields of values with roughly equally spaced values of $\mu \in [0,1]$
at outermost points,
Uhlig's cutting procedure and optimal cuts
were used to approximate the boundary region specified by supporting hyperplanes
$L_{\theta_\star}$ and $L_{\theta_j}$ and the corner $c_j$ of $\mathcal{G}_j$ that they define,
where~$L_{\theta_\star}$ passes through the outermost point~$b_\star$ and $\theta_j = \theta_\star - \tfrac{\pi}{50}$.
See \cref{fig:total_cuts} for the analogous experiments for approximating
the same-sized region of~$\bfovX{M}$ in the sense that~$\theta_\star - \theta_j = \tfrac{\pi}{50}$.
}
\label{fig:alg2_exp}
\end{figure}
For comparing our improved level-set and cutting-plane methods,
we also set \cref{alg2} to do optimization via Newton's method, and per \cref{rem:two_hp},
had it add supporting hyperplanes for both $\lambda_\mathrm{max}$
and $\lambda_\mathrm{min}$ on every cut.
For test problems, we used
the Gear, Grcar, and FM examples used by Uhlig in \cite{Uhl09},
\texttt{randn}-based complex matrices,
and $\mathrm{e}^{\mathbf{i} 0.25\pi}((1 - \mu) I + \mu K_n)$ with
\mbox{$\mu=0.9999$} and $K_n$ from~\eqref{eq:crabb},
which is a rotated version of \cref{ex:circ} that we call Nearly Disk.
In~\cref{tbl:both}, we again see that \cref{alg1}
is well optimized in terms of its overall possible efficiency,
as it often only required a single computation with $R_\gamma - \lambda S$
and at most two. As predicted by our analysis,
we also see that the cost of \cref{alg2} is highly correlated
with the value of $\mu$.
On Gear ($\mu \approx 0$), \cref{alg2} was extremely fast, essentially showing Q-superlinear convergence.
In fact, \cref{alg2} was much faster (2 to 87 times) than \cref{alg1} on Gear, Grcar, FM, and \texttt{randn},
as $\mu < 0.9$ for all of these problems.
In contrast, for Nearly Disk ($\mu=0.9999$),
\cref{alg2} was noticeably slower, with our level-set approach now being 5.9 to~13.2 times faster.
\begin{table}
\centering
\caption{
The respective costs of \cref{alg1,alg2} are shown.
The values of $\mu$ at outermost points are also shown,
computed via \cref{thm:curvature}.
}
\setlength{\tabcolsep}{6pt}
\begin{tabular}{lrc | rc | r | SS }
\toprule
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{3}{c}{\# of calls to \texttt{eig($\cdot$)}} &
\multicolumn{2}{c}{Time (sec.)} \\
\cmidrule(lr){4-6}
\cmidrule(lr){7-8}
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{Alg.~\ref{alg1}} &
\multicolumn{1}{c}{Alg.~\ref{alg2}} &
\multicolumn{1}{c}{Alg.~\ref{alg1}} &
\multicolumn{1}{c}{Alg.~\ref{alg2}} \\
\cmidrule(lr){4-5}
\cmidrule(lr){6-6}
\cmidrule(lr){7-7}
\cmidrule(lr){8-8}
\multicolumn{1}{l}{Problem} &
\multicolumn{1}{c}{$n$} &
\multicolumn{1}{c}{$\mu$} &
\multicolumn{1}{c}{$H(\theta)$} &
\multicolumn{1}{c}{$R_\gamma - \lambda S$} &
\multicolumn{1}{c}{$H(\theta)$} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} \\
\midrule
Gear & \multicolumn{1}{|r|}{ 320} & $1.194 \times 10^{-6}$ & 2 & 1 & 5 & 1.1 & 0.2 \\
Gear & \multicolumn{1}{|r|}{ 640} & $1.499 \times 10^{-7}$ & 5 & 1 & 4 & 14.2 & 0.4 \\
Gear & \multicolumn{1}{|r|}{1280} & $1.878 \times 10^{-8}$ & 5 & 1 & 4 & 216.0 & 2.5 \\
\midrule
Grcar & \multicolumn{1}{|r|}{ 320} & $ 0.6543$ & 34 & 1 & 30 & 1.4 & 0.4 \\
Grcar & \multicolumn{1}{|r|}{ 640} & $ 0.6544$ & 33 & 1 & 31 & 14.8 & 2.3 \\
Grcar & \multicolumn{1}{|r|}{1280} & $ 0.6544$ & 26 & 1 & 28 & 196.5 & 15.0 \\
\midrule
FM & \multicolumn{1}{|r|}{ 320} & $ 0.1851$ & 11 & 1 & 20 & 1.2 & 0.2 \\
FM & \multicolumn{1}{|r|}{ 640} & $ 0.1836$ & 8 & 1 & 20 & 10.0 & 0.8 \\
FM & \multicolumn{1}{|r|}{1280} & $ 0.1829$ & 9 & 1 & 20 & 80.1 & 5.3 \\
\midrule
\texttt{randn} & \multicolumn{1}{|r|}{ 320} & $ 0.7576$ & 7 & 1 & 49 & 1.4 & 0.7 \\
\texttt{randn} & \multicolumn{1}{|r|}{ 640} & $ 0.8663$ & 21 & 2 & 76 & 22.9 & 4.4 \\
\texttt{randn} & \multicolumn{1}{|r|}{1280} & $ 0.7971$ & 23 & 2 & 56 & 187.7 & 26.5 \\
\midrule
Nearly Disk & \multicolumn{1}{|r|}{ 320} & $ 0.9999$ & 6 & 1 & 1571 & 1.4 & 18.7 \\
Nearly Disk & \multicolumn{1}{|r|}{ 640} & $ 0.9999$ & 6 & 1 & 1567 & 12.1 & 81.6 \\
Nearly Disk & \multicolumn{1}{|r|}{1280} & $ 0.9999$ & 7 & 1 & 1566 & 104.3 & 618.1 \\
\bottomrule
\end{tabular}
\label{tbl:both}
\end{table}
Finally, we benchmark our hybrid algorithm.
We tested our three algorithms on $n=400$ and $n=800$ examples with
$\mu \in \{0.1,0.2,\ldots,0.9,0.99,\ldots,0.9999999\}$.
Since minimizing $r(A+BXC)$ to generate such matrices would be prohibitively expensive
for these values of $n$ and $\mu$,
we instead generated examples of the
form~\mbox{$T_{n,\mu} = \eix{\theta} \begin{bsmallmatrix} M & 0 \\ 0 & D\end{bsmallmatrix}$}, where
$\theta \in [0,2\pi)$ was chosen randomly, $M$ is an instance of \cref{ex:circ}
with the desired value of $\mu$ and dimension $n-100$, and
$D \in \mathbb{C}^{100 \times 100}$ is a complex diagonal matrix.
In order to make $h(\theta)$ have many local maximizers,
we chose $M$ such that~$r(M)=1$ and then picked the elements of $D$ so that
they were roughly placed near a circle drawn between $\bfovX{\eix{\theta} M}$ and the unit circle,
biased towards the latter;
see \cref{fig:hybrid_ex_fov} for a visualization.
\cref{fig:hybrid_ex_htheta} shows how this choice of~$D$
indeed causes~$h(\theta)$ to have many local maximizers,
while the randomly chosen $\eix{\theta}$ scalar
means that the unique global maximizer may occur anywhere.
The running times of our three algorithms on these $T_{n,\mu}$ examples are
shown in \cref{fig:hybrid_comp}.
Once again, we see that the running time of \cref{alg1} remains fairly constant
across all the values of $\mu$, while the running time of \cref{alg2} is much faster
for small values of $\mu$ but then blows up as $\mu \to 1$.
Most importantly, \cref{fig:hybrid_comp} verifies that our hybrid algorithm indeed remains efficient
for all values of~$\mu$ since it automatically detects when to switch from
the cutting-plane approach to the level-set approach.
In fact, our hybrid algorithm even becomes more efficient than \cref{alg1} for $\mu$ close to one.
This is because when it switches to the level-set approach, \cref{alg2} often provided
such good starting points
that only one eigenvalue computation with~$R_\gamma - \lambda S$ was needed.
In contrast, \cref{alg1} always required two eigenvalue computations with $R_\gamma - \lambda S$
on our $T_{n,\mu}$ test problems.
\begin{figure}
\centering
\subfloat[$n=400$, $\mu = 0.4$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{T_400_4_fov}}
\label{fig:hybrid_ex_fov}
}
\subfloat[$n=400$, $\mu = 0.99$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=1.3cm 0cm 1.3cm 0cm,clip]{T_400_10_htheta}}
\label{fig:hybrid_ex_htheta}
}
\caption{
Left: the field of values, eigenvalues, and the osculating circle at the unique point attaining the
numerical radius are shown for one instance of $T_{n,\mu}$;
to read the plot, see the caption of \cref{fig:demo}.
Right: a plot of $h(\theta)$ for another instance of $T_{n,\mu}$.
}
\label{fig:hybrid_exs}
\end{figure}
\begin{figure}
\centering
\subfloat[$n=400$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=0.06cm 0cm 0.25cm 0cm,clip]{hybrid_comparison_400}}
\label{fig:hybrid_400}
}
\subfloat[$n=800$.]{
\resizebox*{7.0cm}{!}{\includegraphics[trim=0.06cm 0cm 0.25cm 0cm,clip]{hybrid_comparison_800}}
\label{fig:hybrid_800}
}
\caption{
Running times (in minutes, $\log_{10}$ scale) of the algorithms on $T_{n,\mu}$ with different normalized curvatures.
}
\label{fig:hybrid_comp}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
Via our new understanding of the local convergence rate of Uhlig's method, as well as how
its overall cost blows up for disk matrices, we have
precisely explained why Uhlig's method is sometimes much faster or much slower
than the level-set method of Mengi and Overton. Moreover, this analysis has motivated
our new hybrid algorithm that automatically switches between cutting-plane and level-set techniques
in order to remain efficient across all numerical radius problems.
Along the way, we have also identified inefficiencies in the earlier level-set and cutting-plane algorithms
and addressed them via our improved versions of these two methodologies.
\vspace{1cm}
\small
\bibliographystyle{alpha}
\section{Introductory notes}
\label{s:intro}
\deffootnotemark{\textsuperscript{\thefootnotemark}}\deffootnote{2em}{1.6em}{${}^\thefootnotemark$\hspace{0.0319cm}\enskip}
\setcounter{footnote}{0}
Here we present, with some supplementary documents, a full translation from German of
the winning essay, originally written in Latin, by Hermann Hankel, in response to the ``extraordinary mathematical prize''\footnote{The ordinary prize from the Philosophical Faculty of the University of G\"ottingen was a philosophical one; extraordinary prizes, however, could be launched in another discipline.}
launched on 4th June 1860 by the Philosophical Faculty of the University
of G\"ottingen, with a deadline at the end of March 1861. At the request of the Prize Committee, the essay was then revised by Hankel and finally published in 1861, in German, as a \textit{Preisschrift}.\footnote{Hankel, \hyperlink{Hankel}{1861.} Hankel asked for permission to have his revised \textit{Preisschrift} published in German.} The Latin manuscript was most probably returned to the Author and now appears to be irretrievably lost.\footnote{This information has been conveyed to us through the G\"ottingen Archive. An attempt by us to obtain the Latin document from a descendant of Hankel was unsuccessful.}
The G\"ottingen University Library possesses two copies of the 1861 German original edition of the \textit{Preisschrift}.
Hankel participated in this prize competition shortly after his arrival in G\"ottingen in the spring of 1860 as a 21-year-old student of mathematics, coming from the University of Leipzig.
The formulation of the prize (see section~\ref{s:Preisschrift}) highlighted the problem of the equations of fluid motion in Lagrangian coordinates and was presented in memory of Peter Gustav Lejeune-Dirichlet. A \textit{post mortem} paper of his had outlined the advantages of the Lagrangian approach, compared to the Eulerian one, for the description of fluid motion.\footnote{Lejeune-Dirichlet, \hyperlink{Dirichlet}{1860}; for more details, see the companion paper: Frisch, Grimberg and Villone, \hyperlink{FGV}{2017}.}
During the decision process for the winning essay, there was an exchange of German-written letters among the committee members.\footnote{G\"ottingen University Archive, \hyperlink{G\"ottingen University Archive}{1860/1861}.}
Notably, among all of these letters, the ones from Wilhelm Eduard Weber (1804--1891) and from Bernhard Riemann (1826--1866) were decisive for the evaluation of the essay. Hankel had been the only one to submit an essay, and the discussion among the committee members was devoted to deciding whether or not his essay deserved to win the prize.
For completeness, we provide an English translation of two of these letters, one signed by Bernhard Riemann, the other by Wilhelm Eduard Weber.
As is thoroughly discussed in the companion paper \mbox{(Frisch, Grimberg and Villone, \hyperlink{FGV}{2017}),} Hankel's \textit{Preisschrift} indeed reveals a truly
deep understanding of Lagrangian fluid mechanics, with innovative use of variational
methods and differential geometry. To this day, this innovative work of Hankel has apparently remained poorly known among scholars, with some exceptions. For details and references, see the companion paper.
The paper is organised as follows.
In section~\ref{s:Preisschrift}, we present the translation of the \textit{Preisschrift}; this section begins with a preface signed by Hankel, containing the stated prize question together with the decision by the committee members. For our translation we used the digitized copy of the \textit{Preisschrift} from the HathiTrust Digital Library (indicated as a link in the reference). It has been verified by the G\"ottingen University Library that the text we used is indeed the digitized copy of the 1861 original printed edition.
Section~\ref {s:letters} contains the translation of two written judgements on Hankel's essay in the procedure of assessment of the prize, one by Weber and the other by Riemann.
In section~\ref{s:HHpapers} we provide some biographical notes, together with a full publication list of Hankel.
Let us elucidate our conventions for author/translator footnotes, comments and equation numbering.
Footnotes by Hankel are denoted by an ``A.'' followed by a number (A stands for Author), enclosed in square brackets; translator footnotes
are treated identically except that the letter ``A'' is replaced with ``T'' (standing for translator).
For both author and translator footnotes, we apply a single number count, e.g., [T.1], [A.2], [A.3].
Very short translator comments are added directly in the text, and such comments are surrounded by square brackets.
Only a few equations have been numbered by Hankel (denoted by numbers in round brackets).
To be able to refer to all equations, especially relevant for the companion paper, we have added additional
equation numbers in the format [p.n], which means the nth equation of \S p.
Finally, we note that the abbreviations ``S.'' and ``Bd.'', which occur in the {\it Preisschrift}, refer respectively to the German words for ``page'' and ``volume''.
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[T.\thefootnotemark]\enskip}
\setlength{\footnotesep}{0.24cm}
\section{Hankel's Preisschrift translation}
\label{s:Preisschrift}
\noindent
About the prize question set by the philosophical Faculty of Georgia Augusta
on 4th June 1860\,:\,\footnote{The prize question is in Latin in
the {\it Preisschrift}\,:\,``Aequationes generales
motui fluidorum determinando inservientes
duobus modis exhiberi possunt, quorum alter Eulero, alter Lagrangio
debetur. Lagrangiani modi utilitates adhuc fere penitus neglecti
clarissimus Dirichlet indicavit in commentatione postuma 'de problemate
quodam hydrodynamico' inscripta; sed ab explicatione earum uberiore
morbo supremo impeditus esse videtur. Itaque postulat ordo theoriam
motus fluidorum aequationibus Lagrangianis superstructam eamque eo
saltem perductam, ut leges motus rotatorii a clarissimo Helmholtz
alio modo erutae inde redundent.'' At that time it was common to state prize questions
in Latin and to require the submitted essays to be written in the same language.}
\\\\
\indent\indent
The most useful equations for determining fluid motion may be presented
in two ways, one of which is Eulerian, the other one is
Lagrangian.
The illustrious Dirichlet pointed out in the posthumous unpublished
paper
``On a problem of hydrodynamics'' the almost completely overlooked
advantages of the
Lagrangian way, but he was prevented from unfolding this way further by a
fatal illness. So, this institution asks for a theory of fluid motion based on
the equations of Lagrange, yielding, at least, the laws of vortex motion
already derived in another way by the illustrious
Helmholtz.\\\\
\noindent
Decision of the philosophical Faculty on the present manuscript: \\\\
\indent\indent
The extraordinary mathematical-physical prize question
about the derivation of laws of fluid motion and, in particular, of vortical motion, described by the so-called Lagrangian equations,
was answered
by an essay carrying the motto: \,\,{\em The more signs express relationships in nature, the more useful they are.}\footnote{This motto is in Latin in the \textit{Preisschrift}: ``Tanto utiliores sunt notae, quanto magis exprimunt rerum relationes''. At that time it was
common
that an Author submitting his work for a prize signed it anonymously
with a motto.}\hspace{0.1cm}
This manuscript gives commendable evidence of the Author's diligence, of his knowledge and ability in using the methods of computation recently developed by contemporary mathematicians.\hspace{0.4cm}In particular $\S\,6$\footnotemark \footnotetext{A number such as ``$\S\,6$'' refers to a section number in the (lost) Latin version of the manuscript. Numbers are different in the revised German translation.}
contains an elegant method
to establish the equations of motion for a flow in a fully arbitrary coordinate system, from a point of view which is commonly referred to as Lagrangian.\hspace{0.4cm}However, when developing the general laws of vortex motion,
the Lagrangian approach is
unnecessarily left aside, and,
as a consequence, the various
laws have to be found by
quite independent means.\hspace{0.4cm}Also the relation between the
vortex laws and the investigations of Clebsch, reported in $\S.\,14.\,15 $, is omitted by the Author.\hspace{0.4cm}Nonetheless,
as his derivation actually begins from the Lagrangian equations, one may consider the prize-question as fully answered by
this manuscript.\hspace{0.4cm}Amongst the many good things to be found in this essay, the aforementioned incompleteness and some
mistakes due to haste, which are easy to correct, do not prevent this Faculty from assigning the prize to the manuscript; the Author would, however, be obliged
to submit a revised copy of the manuscript, improved according to the suggestions made above, before it goes into print.\\
\noindent
\centerline {\rule{5cm}{0.2mm}}
\\\\
\indent\indent At my request, I have been permitted by the philosophical Faculty to have the manuscript, originally submitted in Latin,
printed in German. ---
\\\\
\indent\indent
The above mentioned $\S.\,6$ coincides with $\S.\,5$ of the present essay.
The $\S.\S\,14.\,15$ included what is now in the note of S.\,45 of the text [now page~\pageref{footnoteClebsch}];\,\,these $\S.\S.$ were left out, on the one hand, for lack of space, and on the other hand because, in the present form, they are no longer connected to the rest of the essay.\\
\indent\indent Leipzig,\hspace{0.1cm} September 1861.\\
$\phantom{C}$ \hfill Hermann Hankel. \quad\quad
\vspace{2cm}
\centerline{\fett{$\S.\,1.$} }
\vspace{0.3cm}
\noindent
The conditions and forces which underlie most of the natural phenomena are so complicated and various, that it is rarely
possible to take them into account by fully analytical means.\hspace{0.4cm}Therefore, one should,
for the time being, discard
those forces and properties which evidently have little impact on the
motion or the changes;
this is done by retaining just the forces that are of essential and fundamental importance.\hspace{0.4cm}
Only then, once this first approximation has been made,
may one reconsider the previously disregarded forces and properties, and
modify the underlying hypotheses.\hspace{0.4cm}In the case of the general theory of motion of liquid fluids, it seems therefore advisable to take into account solely the continuity [of the fluid] and the constancy of volume, and
to disregard both viscosity and internal friction, which experience suggests are also present;
in the case of elastic fluids, it is advisable to take into account just the continuity and the elasticity, and to consider the pressure as a determined function of the density.
\hspace{0.1cm} Even if, in specific cases, the analytical
methods have improved and may apply as well to more realistic hypotheses, these hypotheses are not yet suitable for a fundamental general theory.\\
\indent\indent
The hypotheses of the general hydrodynamical equations, as given by
\mbox{E\,u\,l\,e\,r}, are the following. The fluid is considered as being
composed of an aggregate of molecules,
which are so small that one can find an infinitely large number of them in an arbitrarily small space.\hspace{0.4cm}Therefore, the fluid is considered as divided into infinitely small parts whose dimensions are infinitely small of the first order; each of these parts is filled with an infinite number of molecules, whose sizes have to be considered as infinitely small quantities of the second order.\hspace{0.4cm}These molecules fill space continuously and move without friction against each other.\\
\indent \indent
The flow can be set in motion either by accelerating forces acting on the individual molecules\footnotemark \footnotetext{Literally, Hankel wrote ``that emanates from the molecules''.} or by external pressure forces. Considering the nature of fluids, one easily comes to the conclusion, also confirmed by experience, that the pressure on each fluid element of the external surface acts normally and proportionally to the size of that element. In order to have also a clear definition of the pressure at a given point within the fluid, let us think of an element of an arbitrary surface through this point: the pressure will be normal to this surface element and proportional to its size, but independent of its direction.\hspace{0.4cm}The difference between liquid and elastic flows\footnote{Nowadays, in this context, \textit{liquid} flows would be called \textit{incompressible} flows.} is then that,
for the former, the density is a constant and,
for the latter, the density depends on the pressure and, conversely, the pressure depends on the density in a definite way.\\
\indent\indent
We shall not dwell here on a detailed discussion of these properties, as they are usually treated in the better textbooks on mechanics.\\
\indent \indent
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}
In order to study the above properties analytically, two methods have been so far applied, both owed to \mbox{E\,u\,l\,e\,r}.\hspace{0.4cm}The first method\footnote{First published in the essay: Principes g\'en\'eraux du mouvement des fluides\,\, (Hist.\ de l'Acad.\ de Berlin, ann\'ee \hyperlink{Euler226}{1755}).}
considers the velocity of each point of the fluid as a function of position and time. If $u,v,w$ are the velocity components in the orthogonal coordinates $x,y,z$, then $u,v,w$ are functions of position $x,y,z$ and time $t$.\hspace{0.4cm}The velocity of the fluid at a given point is thus the velocity of the fluid particles flowing through that point.\hspace{0.4cm}This method was exclusively used for the study of motion of fluids until \mbox{D\,i\,r\,i\,c\,h\,l\,e\,t}\footnotemark \footnotetext{Untersuchungen \"uber ein Problem der Hydrodynamik.\,\, (Crelle's Journal, Bd.\ 55, [actually it is Bd.\ 58], S.\,181. [\hyperlink{Dirichlet2}{1861}]).}
observed that this method necessarily has the drawback that the absolute space filled by the flow in general changes over time and that, as a consequence, the coordinates $x,y,z$ are not entirely independent variables.
The method just discussed seems appropriate if the flow always fills the same space, i.e., when the flow fills the infinite space, or when the motion is stationary.\footnote{\mbox{D\,i\,r\,i\,c\,h\,l\,e\,t} used for this case the first \mbox{E\,u\,l\,e\,r}ian method: Ueber einige F\"alle, in denen sich die Bewegung eines festen K\"orpers in einem incompressibelen fl\"ussigen Medium theoretisch bestimmen l\"asst [Ueber die Bewegung eines festen K\"orpers in einem incompressibeln fl\"ussigen Medium]. (Berichte der Berliner Akademie, \hyperlink{Dirichlet1}{1852}, S.\,12.)\,\,\, Also \mbox{R\,i\,e\,m\,a\,n\,n} used the first \mbox{E\,u\,l\,e\,r}ian form in his {\em Essai}: Ueber die Fortpflanzung ebener Luftwellen von endlicher Schwingungsweite. (Bd.\ VIII d.\ Abhdlg.\ der G\"ott.\ Soc.\ \hyperlink{Riemann}{1860}).}\hspace{0.1cm}\\
\indent \indent The second, ingenious method of \mbox{E\,u\,l\,e\,r}\footnote{De principiis motus fluidorum (Novi comm.\ acad.\ sc.\ Petropolitanae. Bd.\ XIV. Theil I. pro anno 1759 [\hyperlink{Euler1759}{1770}, in German].) im 6.\ Capitel: De motu fluidorum ex statu initiali definiendo. S.\,358.}
considers the coordinates $x,y,z$ of a flow particle, in any reference system, as a function of time $t$ and of its position $a,b,c$ at initial time $t=0$.\hspace{0.4cm}This method, by which the same fluid particle is followed during its motion, was reproduced, indeed in a slightly more elegant way,
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}
by \mbox{L\,a\,g\,r\,a\,n\,g\,e},\footnote{M\'ecanique analytique, \'ed.\ III.\ par Bertrand. Bd.\,II. S.\ 250--261.\,\, The first edition of the M\'ecanique is of the year \hyperlink{Lagrange1}{1788}.} without giving any reference.\hspace{0.1cm} Since it appears that nowadays \mbox{E\,u\,l\,e\,r}'s work is
rarely read in detail, this method is considered due to \mbox{L\,a\,g\,r\,a\,n\,g\,e}.\hspace{0.4cm}However, the method was already present in its full completeness in \mbox{E\,u\,l\,e\,r}'s work, 29 years before.\hspace{0.4cm}I owe this interesting historical note to my honoured Professor B.\ \mbox{R\,i\,e\,m\,a\,n\,n}.\hspace{0.4cm}According to this method, one has thus
\begin{equation*}
x=\varphi_1 (a,b,c,t), \qquad y=\varphi_2 (a,b,c,t), \qquad z=\varphi_3 (a,b,c,t), \tag*{[1.1]}
\end{equation*}
where $\varphi_1,\varphi_2,\varphi_3 $ are continuous functions of $a,b,c,t$.\,\, We have at initial time $t=0$ that
\begin{equation*} \tag*{[1.2]}
x=a, \hspace{2cm}y=b, \hspace{2cm}z=c
\end{equation*}
and, thus, the following conditions are valid for $\varphi_1,\,\varphi_2,\,\varphi_3$\,:
\begin{equation*} \tag*{[1.3]}
a=\varphi_1 (a,b,c,0), \qquad b=\varphi_2 (a,b,c,0), \qquad c=\varphi_3 (a,b,c,0) .
\end{equation*}
Obviously, one can take values of $t$ so small that, according to Taylor's theorem, $x,\,y,\,z$
can be expanded in powers of $t$.
Since at time $t=0$ we have $x=a,\,\, y=b,\,\, z=c$, it follows evidently that
\begin{align*} \tag*{[1.4]}
\begin{aligned}
x\,&= \,a\,+\, A_1t\,+\,A_2 t^2\,+\, \ldots \\
y\,&= \,b\,+\, B_1t\,+\,B_2 t^2\,+ \hspace{0.09cm} \ldots \\
z\,&= \,c\,+\, C_1t\,+\,C_2 t^2\,+ \hspace{0.11cm} \ldots
\end{aligned}
\end{align*}
From these equations, one easily finds that at time $t=0$:
\begin{mynewequation} \tag*{[1.5]}
\begin{aligned}
\frac{dx}{da} &= 1, \,\,\, \,\frac{dx}{db} = 0, \,\,\,\,\frac{dx}{dc} = 0 \\
\frac{dy}{da} &= 0, \,\,\,\,\frac{dy}{db} = 1, \,\,\,\,\frac{dy}{dc} = 0 \\
\frac{dz}{da} &= 0, \,\, \,\,\frac{dz}{db} = 0,\,\, \,\,\frac{dz}{dc} = 1
\end{aligned}
\end{mynewequation}
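As a brief modern aside from the translators (not part of Hankel's text), the statements [1.3] and [1.5] lend themselves to a quick numerical check: for any motion of the Taylor form [1.4], the array of derivatives of $x,\,y,\,z$ with respect to $a,\,b,\,c$ reduces to the identity at $t=0$. The sample motion \texttt{phi} below is a hypothetical choice.

```python
# Translators' sketch: numerical check of [1.3] and [1.5].
# phi is a hypothetical sample motion of the Taylor form [1.4], with
# A1 = b + c, B1 = a*c, C1 = a - b and some higher-order terms in t.
def phi(a, b, c, t):
    x = a + (b + c)*t + 0.5*t*t
    y = b + a*c*t
    z = c + (a - b)*t + t**3
    return (x, y, z)

def jacobian(f, a, b, c, t, h=1e-6):
    """Central-difference array of derivatives d(x,y,z)/d(a,b,c) at fixed t."""
    J = []
    for i in range(3):            # rows: x, y, z
        row = []
        for j in range(3):        # columns: a, b, c
            plus, minus = [a, b, c], [a, b, c]
            plus[j] += h
            minus[j] -= h
            row.append((f(*plus, t)[i] - f(*minus, t)[i]) / (2*h))
        J.append(row)
    return J

# [1.3] holds: phi(a, b, c, 0) = (a, b, c).
assert phi(0.3, 0.7, -0.2, 0.0) == (0.3, 0.7, -0.2)

# [1.5] holds: at t = 0 the array of derivatives is the identity.
J0 = jacobian(phi, 0.3, 0.7, -0.2, 0.0)
for i in range(3):
    for j in range(3):
        assert abs(J0[i][j] - (1.0 if i == j else 0.0)) < 1e-6
```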
\indent %
In the following, we will try to give a presentation of the general theory of hydrodynamics based on the second method.
It will turn out that the second [Lagrangian] form also merits, in some cases, to be preferred over the first one, because
its fundamental equations are more closely connected to the customary forms of mechanics.\\[1cm]
\centerline{\fett{$\S.\,2.$}}\\[0.3cm]
\indent\indent
The two infinitely near particles\\
\centerline {$a,\,\,b,\,\,c$}
and\\
\centerline {$a+{\rm d}a, \,\,b+{\rm d}b, \,\,c+{\rm d}c$}\\\\
after some time $t$, will be at the two points\\\\
\centerline{$x,\,\, y, \,\,z$}
\noindent
and\\
\centerline {$x+{\rm d}x,\,\, y+{\rm d}y,\,\, z+{\rm d}z$,}\\\\
where one needs to put
\begin{mynewequation} \tag*{[2.1]}
\begin{aligned}
{\rm d}x &=\frac{dx}{da} {\rm d}a + \frac{dx}{db} {\rm d}b + \frac{dx}{dc} {\rm d}c\\
{\rm d}y &=\frac{dy}{da} {\rm d}a + \frac{dy}{db} {\rm d}b + \frac{dy}{dc} {\rm d}c\\
{\rm d}z &=\frac{dz}{da} {\rm d}a + \frac{dz}{db} {\rm d}b + \frac{dz}{dc} {\rm d}c
\end{aligned}
\end{mynewequation}
\noindent
Let us think of $a,\,b,\, c$ as linear --- in general
non-orthogonal
--- coordinates;
then, we have that ${\rm d}a,\, {\rm d}b,\, {\rm d}c$
are the coordinates of a point with respect to a congruent system of coordinates $S_0$, whose
origin is at $a,\,b,\, c$.\hspace{0.4cm}As
$x,\,y,\, z$ refer to the same coordinate system as $a,\,b,\, c$ do, then
${\rm d}x,\, {\rm d}y,\, {\rm d}z$ are the analogous coordinates with respect to a congruent coordinate system $S$, whose origin is at $x,\,y,\, z$.\hspace{0.4cm}Now, let us think of an infinitely small surface
\begin{equation*} \tag*{[2.2]}
F({\rm d}a,\, {\rm d}b,\, {\rm d}c) = 0
\end{equation*}
with reference to $S_0$, then, at time $t$, the surface
will have been moved into another surface
\begin{equation*} \tag*{[2.3]}
F({\rm d}x,\, {\rm d}y,\, {\rm d}z) = 0
\end{equation*}
\noindent
with reference to $S$, or, \\
\begin{equation*} \tag*{[2.4]}
F\Big(\frac{dx}{da} {\rm d}a + \frac{dx}{db} {\rm d}b + \frac{dx}{dc} {\rm d}c, \frac{dy}{da} {\rm d}a + \frac{dy}{db} {\rm d}b + \frac{dy}{dc} {\rm d}c, \frac{dz}{da} {\rm d}a + \frac{dz}{db} {\rm d}b + \frac{dz}{dc} {\rm d}c \Big)=0
\end{equation*}
We see from this that an infinitely small algebraic surface of degree $n$ always
remains of the same degree.\\\\
\indent \indent
Hence, infinitely close points lying on a plane at a certain time always stay on a \mbox{plane; and since a}
straight line may be thought of as the intersection of two planes,
infinitely close points on a line at a certain time always stay on a line.\\\\
\indent\indent
The points within an infinitely small ellipsoid will always stay in such an ellipsoid, because a closed surface cannot transform into a non-closed
surface and, with the exception of the ellipsoid, all the surfaces of second degree are not closed.\hspace{0.4cm}The section of a plane with an infinitely small ellipsoid will thus always remain such a section.\hspace{0.4cm}Since such a section always constitutes an ellipse, an infinitely small ellipse will always
remain an ellipse.
\\\\
\indent\indent
The four points\,:
\begin{mynewequation} \notag
\begin{aligned}
&a, \qquad &&b, \qquad &&c\\
&a+m\,{\rm d}a, \qquad &&b+n\,{\rm d}b, \qquad &&c+p\,{\rm d}c\\
&a+m'\,{\rm d}a, \qquad &&b+n'\,{\rm d}b, \qquad &&c+p'\,{\rm d}c\\
&a+m''\,{\rm d}a, \qquad &&b+n''\,{\rm d}b, \qquad &&c+p''\,{\rm d}c
\end{aligned}
\end{mynewequation}
where $m,\,n,\,p,\,m',\,n',\,p',\,m'',\,n'',\,p''$ are finite numbers,
will be at the position
\begin{tinyequation} \nonumber
\begin {aligned}
&\!\!x + \frac{dx}{da} m\,\,{\rm d}a + \frac{dx}{db} n\,\,{\rm d}b + \frac{dx}{dc} p\,\,{\rm d}c,\,\,\,\,\,\,\,\,\,\,y + \frac{dy}{da} m\,\,{\rm d}a + \frac{dy}{db} n\,\,{\rm d}b + \frac{dy}{dc} p\,\,{\rm d}c, \,\,\,\, z + \frac{dz}{da} m\,\,{\rm d}a + \frac{dz}{db} n\,\,{\rm d}b + \frac{dz}{dc} p\,{\rm d}c \\\\
\!\!&\!\!x+\frac{dx}{da} m'{\rm d}a + \frac{dx}{db} n'{\rm d}b + \frac{dx}{dc} p'{\rm d}c,\,\,\,\,\,\,\,\,\,\,y + \frac{dy}{da} m'{\rm d}a + \frac{dy}{db} n'{\rm d}b + \frac{dy}{dc} p'{\rm d}c, \,\,\,\,\,\, z + \frac{dz}{da} m'{\rm d}a + \frac{dz}{db} n'{\rm d}b + \frac{dz}{dc} p'{\rm d}c\,\\\\
\!\!&\!\!x+\frac{dx}{da} m''{\rm d}a + \frac{dx}{db} n''{\rm d}b + \frac{dx}{dc} p''{\rm d}c,\,\, y + \frac{dy}{da} m''{\rm d}a + \frac{dy}{db} n''{\rm d}b + \frac{dy}{dc} p''{\rm d}c, \,\,\, z + \frac{dz}{da} m''{\rm d}a + \frac{dz}{db} n''{\rm d}b + \frac{dz}{dc} p''{\rm d}c
\end {aligned}
\end{tinyequation}
at time $t$.
\indent\indent
The volume of the tetrahedron $T_0$ whose vertices are those points at time $t = 0$, is expressed by the determinant\,:\\
\begin{equation*} \tag*{[2.5]}
6T_0=
\begin{vmatrix} m& n & p\\m'& n' & p'\\m''&n''&p''\end{vmatrix} {\rm d}a\,{\rm d}b\,{\rm d}c
\end{equation*}
\indent\indent
The volume of the tetrahedron $T$ whose vertices at time $t$ are formed by the same particles, is\,:
\begin{smallequation} \tag*{[2.6]}
6T=
\begin{vmatrix}
\frac{dx}{da} m\,{\rm d}a + \frac{dx}{db} n\,{\rm d}b + \frac{dx}{dc} p\,{\rm d}c,\,\,\,
&\frac{dy}{da} m\,{\rm d}a + \frac{dy}{db} n\,{\rm d}b + \frac{dy}{dc} p\,{\rm d}c,\,\,\, &\frac{dz}{da} m\,{\rm d}a + \frac{dz}{db} n\,{\rm d}b + \frac{dz}{dc} p\,{\rm d}c \\\\
\frac{dx}{da}m'{\rm d}a + \frac{dx}{db}n'{\rm d}b + \frac{dx}{dc}p'{\rm d}c,\,\,\,
&\frac{dy}{da} m'{\rm d}a + \frac{dy}{db}n'{\rm d}b + \frac{dy}{dc} p'{\rm d}c, & \frac{dz}{da} m'{\rm d}a + \frac{dz}{db} n'{\rm d}b + \frac{dz}{dc} p'{\rm d}c\\\\
\frac{dx}{da}m''{\rm d}a + \frac{dx}{db} n'' {\rm d}b + \frac{dx}{dc} p'' {\rm d}c,\,\,\,
&\frac{dy}{da} m'' {\rm d}a + \frac{dy}{db} n'' {\rm d}b + \frac{dy}{dc} p'' {\rm d}c, & \frac{dz}{da} m'' {\rm d}a + \frac{dz}{db} n'' {\rm d}b + \frac{dz}{dc} p'' {\rm d}c
\end{vmatrix}
\end{smallequation}
\indent\indent
By known theorems, this determinant can however be
written as a product of two determinants\,:
\begin{equation} \tag*{[2.7]}
6T=
\begin{vmatrix}
\frac{dx}{da}&\frac{dx}{db}&\frac{dx}{dc} \\\\
\frac{dy}{da}&\frac{dy}{db}&\frac{dy}{dc} \\\\
\frac{dz}{da}&\frac{dz}{db}&\frac{dz}{dc}
\end{vmatrix}
\begin{vmatrix} m& n & p\\\\m'& n' & p'\\\\m''&n''&p''\end{vmatrix} {\rm d}a\,{\rm d}b\,{\rm d}c
\end{equation}
or
\begin{mynewequation} \tag*{[2.8]}
\text{\small $T=T_0$} \begin{vmatrix}
\frac{dx}{da}\,\,&\frac{dx}{db}\,\,&\frac{dx}{dc} \\\\
\frac{dy}{da}\,\,&\frac{dy}{db}\,\,&\frac{dy}{dc} \\\\
\frac{dz}{da}\,\,&\frac{dz}{db}\,\,&\frac{dz}{dc}
\end{vmatrix}
\end{mynewequation}
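A modern aside from the translators (not part of Hankel's text): the step from [2.6] to [2.7] is the multiplication theorem for determinants, since each row of the matrix in [2.6] combines a row of the array $m,\,n,\,p,\dots$ with the columns of the array of derivatives. The sketch below checks this on arbitrary sample entries.

```python
# Translators' sketch: the factorisation [2.6] = [2.7] by the multiplication
# theorem for determinants, checked on arbitrary sample values.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(col) for col in zip(*A)]

J = [[1.2, 0.3, -0.1],   # sample values for the derivatives dx/da ... dz/dc
     [0.0, 0.9,  0.4],
     [0.5, -0.2, 1.1]]
M = [[2.0, 1.0, 0.0],    # sample values for m, n, p / m', n', p' / m'', n'', p''
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

# The matrix in [2.6] is M times the transpose of J, hence its determinant is
# det(M) * det(J); this is exactly [2.7], and [2.8] follows: T = T0 * det(J).
lhs = det3(matmul(M, transpose(J)))
rhs = det3(M) * det3(J)
assert abs(lhs - rhs) < 1e-9
```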
\indent\indent
It results from the preceding considerations that all particles which at time $t=0$ are in the tetrahedron $T_0$
will also be in the tetrahedron $T$ at time $t$.\hspace{0.4cm}Let $\varrho_0$ be
the mean density in the tetrahedron $T_0$ at time $t=0$, and $\varrho$ the mean density in $T$ at time $t$; then one has $T:T_0=\varrho_0:\varrho$ and hence
\begin{mynewequation} \tag*{(1), [2.9]}
\begin{vmatrix}
\frac{dx}{da}&\frac{dx}{db}&\frac{dx}{dc} \\\\
\frac{dy}{da}&\frac{dy}{db}&\frac{dy}{dc} \\\\
\frac{dz}{da}&\frac{dz}{db}&\frac{dz}{dc}
\end{vmatrix}
\text{\small $=$} \,\frac{\varrho_0}{\varrho} \,.
\end{mynewequation}
If the density is constant, the fluid is a liquid flow and thus
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[T.\thefootnotemark]\enskip}
\begin{mynewequation} \tag*{(2), [2.10]}
\begin{vmatrix}
\frac{dx}{da}&\frac{dx}{db}&\frac{dx}{dc} \\\\
\frac{dy}{da}&\frac{dy}{db}&\frac{dy}{dc} \\\\
\frac{dz}{da}&\frac{dz}{db}&\frac{dz}{dc}
\end{vmatrix}
\text{\small $=$} \,1.
\end{mynewequation}
\indent\indent
One may reasonably
refer to these equations as the density equations, and more particularly to the last one
as the equation of the constancy of the volume.\\\\
\indent\indent
The values of the functional determinant of $x, y, z$ with respect to $a,b,c$ may be used to develop a set
of relationships, which are often needed to pass from the \mbox{E\,u\,l\,e\,r}ian representation to the other [i.e., to the Lagrangian representation, or conversely].
Indeed, if one solves the system of equations\,:
\begin{mynewequation} \tag*{[2.11]}
\begin{aligned}
{\rm d}x=\frac{dx}{da} {\rm d}a + \frac{dx}{db} {\rm d}b + \frac{dx}{dc} {\rm d}c\\
{\rm d}y=\frac{dy}{da} {\rm d}a + \frac{dy}{db} {\rm d}b + \frac{dy}{dc} {\rm d}c\\
{\rm d}z=\frac{dz}{da} {\rm d}a + \frac{dz}{db} {\rm d}b + \frac{dz}{dc} {\rm d}c
\end{aligned}
\end{mynewequation}
one has\,:
\begin{mynewequation} \tag*{[2.12]}
\begin{aligned}
\Big(\frac{dy}{db} \frac{dz}{dc} - \frac{dy}{dc}\frac{dz}{db}\Big) {\rm d}x +
\Big(\frac{dz}{db} \frac{dx}{dc} - \frac{dz}{dc}\frac{dx}{db}\Big) {\rm d}y +
\Big(\frac{dx}{db} \frac{dy}{dc} - \frac{dx}{dc}\frac{dy}{db}\Big) {\rm d}z = \frac{\varrho_0}{\varrho} {\rm d}a\\\\
\Big(\frac{dy}{dc} \frac{dz}{da} - \frac{dy}{da}\frac{dz}{dc}\Big) {\rm d}x +
\Big(\frac{dz}{dc} \frac{dx}{da} - \frac{dz}{da}\frac{dx}{dc}\Big) {\rm d}y +
\Big(\frac{dx}{dc} \frac{dy}{da} - \frac{dx}{da}\frac{dy}{dc}\Big) {\rm d}z = \frac{\varrho_0}{\varrho} {\rm d}b\\\\
\Big(\frac{dy}{da} \frac{dz}{db} - \frac{dy}{db}\frac{dz}{da}\Big) {\rm d}x +
\Big(\frac{dz}{da} \frac{dx}{db} - \frac{dz}{db}\frac{dx}{da}\Big) {\rm d}y +
\Big(\frac{dx}{da} \frac{dy}{db} - \frac{dx}{db}\frac{dy}{da}\Big) {\rm d}z = \frac{\varrho_0}{\varrho} {\rm d}c
\end{aligned}
\end{mynewequation}
where we have substituted the value
$\varrho_0/\varrho$ for the functional determinant.
\hspace{0.4cm}The comparison of these equations with\,:
\begin{mynewequation} \tag*{[2.13]}
\begin{aligned}
\frac{da}{dx}\,{\rm d}x + \frac{da}{dy}\,{\rm d}y + \frac{da}{dz}\,{\rm d}z = {\rm d}a\\\\
\frac{db}{dx}\,{\rm d}x + \frac{db}{dy}\,{\rm d}y + \frac{db}{dz}\,{\rm d}z = {\rm d}b\\\\
\frac{dc}{dx}\, {\rm d}x + \frac{dc}{dy}\,{\rm d}y + \frac{dc}{dz}\,{\rm d}z = {\rm d}c
\end{aligned}
\end{mynewequation}
gives this equation system\,:
\begin{mynewequation}\tag*{(3), [2.14]}
\left.
\begin{aligned}
\frac{\varrho_0}{\varrho}\frac{da}{dx} & = \frac{dy}{db} \frac{dz}{dc} - \frac{dy}{dc}\frac{dz}{db},\,
\frac{\varrho_0}{\varrho}\frac{da}{dy} = \frac{dz}{db} \frac{dx}{dc} - \frac{dz}{dc}\frac{dx}{db}, \,
\frac{\varrho_0}{\varrho}\frac{da}{dz} = \frac{dx}{db} \frac{dy}{dc} - \frac{dx}{dc}\frac{dy}{db}
\\
\frac{\varrho_0}{\varrho}\frac{db}{dx} & = \frac{dy}{dc} \frac{dz}{da} - \frac{dy}{da}\frac{dz}{dc},\,
\frac{\varrho_0}{\varrho}\frac{db}{dy} = \frac{dz}{dc} \frac{dx}{da} - \frac{dz}{da}\frac{dx}{dc}, \,
\frac{\varrho_0}{\varrho}\frac{db}{dz} = \frac{dx}{dc} \frac{dy}{da} - \frac{dx}{da}\frac{dy}{dc}
\\
\frac{\varrho_0}{\varrho}\frac{dc}{dx} & = \frac{dy}{da} \frac{dz}{db} - \frac{dy}{db}\frac{dz}{da},\,
\frac{\varrho_0}{\varrho}\frac{dc}{dy} = \frac{dz}{da} \frac{dx}{db} - \frac{dz}{db}\frac{dx}{da}, \,
\frac{\varrho_0}{\varrho}\frac{dc}{dz} = \frac{dx}{da} \frac{dy}{db} - \frac{dx}{db}\frac{dy}{da}
\end{aligned} \qquad \right\}
\end{mynewequation}
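A modern aside from the translators (not part of Hankel's text): the system (3) states that the inverse derivatives $da/dx,\,\dots$ are the cofactors of the functional determinant divided by that determinant. The sketch below, with arbitrary sample entries, verifies one relation of (3) explicitly and checks that the resulting matrix indeed inverts the array of derivatives, i.e.\ solves [2.11] for ${\rm d}a,\,{\rm d}b,\,{\rm d}c$.

```python
# Translators' sketch: the cofactor relations (3) [2.14], checked numerically.
# J stands for the array of derivatives d(x,y,z)/d(a,b,c); by (1) its
# determinant equals rho0/rho.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cof(M, r, s):
    """Cofactor of entry (r, s): signed determinant of the 2x2 minor."""
    rs = [i for i in range(3) if i != r]
    cs = [j for j in range(3) if j != s]
    m = M[rs[0]][cs[0]]*M[rs[1]][cs[1]] - M[rs[0]][cs[1]]*M[rs[1]][cs[0]]
    return m if (r + s) % 2 == 0 else -m

J = [[1.2, 0.3, -0.1],   # arbitrary sample values for dx/da ... dz/dc
     [0.0, 0.9,  0.4],
     [0.5, -0.2, 1.1]]
D = det3(J)

# K[r][s] plays the role of da/dx, da/dy, ..., as given by the system (3).
K = [[cof(J, s, r) / D for s in range(3)] for r in range(3)]

# First relation of (3): (rho0/rho) * da/dx = dy/db dz/dc - dy/dc dz/db.
assert abs(D*K[0][0] - (J[1][1]*J[2][2] - J[1][2]*J[2][1])) < 1e-12

# K must invert J, i.e. the relations (3) solve the system [2.11].
I3 = [[sum(J[i][k]*K[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
for i in range(3):
    for j in range(3):
        assert abs(I3[i][j] - (1.0 if i == j else 0.0)) < 1e-9
```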
\indent\indent
If the equation (1) is differentiated with respect to one of the independent variables, and this differentiation is indicated
by $\delta$, one obtains
\begin{align}
\Big(\frac{dy}{db} \frac{dz}{dc} - \frac{dy}{dc}\frac{dz}{db}\Big) \frac{d \delta x}{da} +
\Big(\frac{dy}{dc} \frac{dz}{da} - \frac{dy}{da}\frac{dz}{dc}\Big) \frac{d \delta x}{db} +
\Big(\frac{dy}{da} \frac{dz}{db} - \frac{dy}{db}\frac{dz}{da}\Big) \frac{d \delta x}{dc} \nonumber \\ \nonumber \\
+ \Big(\frac{dz}{db} \frac{dx}{dc} - \frac{dz}{dc}\frac{dx}{db}\Big) \frac{d \delta y}{da} +
\Big(\frac{dz}{dc} \frac{dx}{da} - \frac{dz}{da}\frac{dx}{dc}\Big) \frac {d \delta y}{db} +
\Big(\frac{dz}{da} \frac{dx}{db} - \frac{dz}{db}\frac{dx}{da}\Big) \frac {d \delta y}{dc} \nonumber \\ \nonumber \\
+ \Big( \frac{dx}{db} \frac{dy}{dc} - \frac{dx}{dc}\frac{dy}{db}\Big) \frac{d \delta z}{da} +
\Big(\frac{dx}{dc} \frac{dy}{da} - \frac{dx}{da}\frac{dy}{dc}\Big) \frac{d \delta z}{db} +
\Big(\frac{dx}{da} \frac{dy}{db} - \frac{dx}{db}\frac{dy}{da}\Big) \frac{d \delta z}{dc} \nonumber \\ \nonumber \\
= -\frac{\varrho_0}{\varrho}\,\frac{\delta \varrho}{\varrho} \tag*{[2.15]}
\end{align}
and, by equations~(3):
\begin{equation*} \tag*{[2.16]}
\frac{d \delta x}{da}\frac{da}{dx} + \frac{d \delta x}{db}\frac{db}{dx} + \frac{d \delta x}{dc}\frac{dc}{dx} + \frac{d \delta y}{da}\frac{da}{dy} + \frac{d \delta y}{db}\frac{db}{dy} + \frac{d \delta y}{dc}\frac{dc}{dy} + \frac{d \delta z}{da}\frac{da}{dz} + \frac{d \delta z}{db}\frac{db}{dz} + \frac{d \delta z}{dc}\frac{dc}{dz} + \frac{\delta \varrho}{\varrho}= 0
\end{equation*}
or
\begin{equation*}\tag*{(4), [2.17]}
\frac{d \delta x}{dx} + \frac{d \delta y}{dy}+ \frac{d \delta z}{dz}+ \frac{\delta \varrho}{\varrho}= 0
\end{equation*}
\indent
If by $\delta$ one understands the differentiation with respect to $t$, one has:
\begin{equation*} \tag*{[2.18]}
\varrho \Big(\frac{d \frac{dx}{dt}}{dx} + \frac{d \frac{dy}{dt}}{dy}+ \frac{d \frac{dz}{dt}}{dz}\Big) + \frac{{ \rm d} \varrho} {{\rm d} t} = 0
\end{equation*}
Since $t$ appears in $\varrho$ not only explicitly, but also implicitly through the dependence of $\varrho$ on $x,\,y,\,z$, one has
\begin{equation*} \tag*{[2.19]}
\frac{{ \rm d} \varrho} {{\rm d} t} = \frac{d\varrho}{dt} + \frac{d\varrho}{dx} \frac{dx}{dt} + \frac{d\varrho}{dy} \frac{dy}{dt} + \frac{d\varrho}{dz} \frac{dz}{dt}
\end{equation*}
If one sets
\begin{equation*} \tag*{[2.20]}
u = \frac{dx}{dt}, \,\, v= \frac{dy}{dt}, \,\, w = \frac{dz}{dt}
\end{equation*}
one thus has\,:
\begin{equation*} \tag*{[2.21]}
\varrho\Big( \frac{du}{dx}+ \frac{dv}{dy} + \frac{dw}{dz}\Big) + { \frac{d\varrho}{dt} } + \frac{d\varrho}{dx} u + \frac{d\varrho}{dy} v + \frac{d\varrho}{dz} w = 0
\end{equation*}
or
\begin{equation*}\tag*{(5), [2.22]}
\frac{d\varrho}{dt} +\frac {d (\varrho u)}{dx} + \frac {d (\varrho v)}{dy} +\frac {d (\varrho w)}{dz} = 0
\end{equation*}
This is the form in which \mbox{E\,u\,l\,e\,r} first presented the density equation.\hspace{0.4cm}For the case of constant density $\varrho$,
one obtains in this first form of dependence the constancy-of-volume equation\,:
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}
\begin{equation*}\tag*{(6), [2.23]}
\frac{du}{dx} + \frac{dv}{dy} + \frac{dw}{dz} = 0
\end{equation*}
\mbox{L\,a\,g\,r\,a\,n\,g\,e}\footnote{M\'ecanique analytique, \,\, Bd.\ I., S.\ 179--183, Bd.\ II, S.\ 257--261. [First edition, \hyperlink{Lagrange1}{1788}].}
treats the relation of these equations with (2) quite extensively; but this connection seems
to be a special case of%
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}%
a theorem by \mbox{J\,a\,c\,o\,b\,i},\footnote{C.\ G.\ J.\ Jacobi, Theoria novi multiplicatoris systemati aequationum differentialium vulgarium applicandi. \,\,\, Crelle's Journal, Bd.\ 27. S.\ 209 [\hyperlink{Jacobi}{1844}].}
which for three variables is [actually] fully included in equations (1) and (5).\\
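A modern aside from the translators (not part of Hankel's text): equations (1) and (5) can be checked together on a concrete motion. For the hypothetical sample motion $x=a(1+t)$, $y=b$, $z=c$, the determinant in (1) equals $1+t$, so $\varrho=\varrho_0/(1+t)$, and the Eulerian velocity field is $u=x/(1+t)$, $v=w=0$; the sketch below verifies (5) by finite differences.

```python
# Translators' sketch: Euler's density equation (5) [2.22] for the sample
# motion x = a(1+t), y = b, z = c, where (1) gives rho = rho0/(1+t).
rho0 = 1.0

def rho(x, y, z, t):
    return rho0 / (1.0 + t)

def u(x, y, z, t):
    return x / (1.0 + t)      # v = w = 0 for this motion

def ddt(f, x, y, z, t, h=1e-6):
    return (f(x, y, z, t + h) - f(x, y, z, t - h)) / (2*h)

def ddx(f, x, y, z, t, h=1e-6):
    return (f(x + h, y, z, t) - f(x - h, y, z, t)) / (2*h)

x, y, z, t = 0.4, 0.1, -0.3, 0.5
# (5): d(rho)/dt + d(rho u)/dx + d(rho v)/dy + d(rho w)/dz = 0;
# the y and z terms vanish identically for this motion.
residual = (ddt(rho, x, y, z, t)
            + ddx(lambda X, Y, Z, T: rho(X, Y, Z, T) * u(X, Y, Z, T),
                  x, y, z, t))
assert abs(residual) < 1e-6
```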
\indent\indent
If an integral:
\begin{equation*}
\iiint f(x,y,z) \,\,\varrho \,\, {\rm d}x\,\, {\rm d}y\,\, {\rm d}z
\end{equation*}
which is extended over the whole fluid mass, is transformed into an integral over $a,\,b,\,c$, one has:
\begin{equation*} \tag*{[2.24]}
\iiint f(x,y,z) \,\,\varrho \,\, {\rm d}x\,\, {\rm d}y\,\, {\rm d}z = \iiint f(a,b,c) \,\,\varrho\,\, {\rm d}a\,\, {\rm d}b\,\, {\rm d}c
\begin{vmatrix}
\frac{dx}{da}\,\,&\frac{dx}{db}\,\,&\frac{dx}{dc} \\\\
\frac{dy}{da}\,\,&\frac{dy}{db}\,\,&\frac{dy}{dc} \\\\
\frac{dz}{da}\,\,&\frac{dz}{db}\,\,&\frac{dz}{dc}
\end{vmatrix}
\end{equation*}
where also the second integral has to be extended over all particles, and $f(a,\,b,\,c)$ is the function into which
$f(x,\,y,\,z) $ transforms by substituting $a,\,b,\,c$ for $x,\,y,\,z$.\hspace{0.4cm}
Thus, from the density equation (1) follows\,:
\begin{equation*} \tag*{[2.25]}
\iiint f(x,y,z) \,\,\varrho \,\, {\rm d}x\,\, {\rm d}y\,\, {\rm d}z = \iiint f(a,b,c) \,\,\varrho_0\,\, {\rm d}a\,\, {\rm d}b\,\, {\rm d}c
\end{equation*}
--- an important transformation.
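\indent\indent
[To make the step explicit: by the density equation (1) of $\S.\,2$, the functional determinant equals $\varrho_0/\varrho$, so that under the integral sign
\begin{equation*} \notag
\varrho
\begin{vmatrix}
\frac{dx}{da}&\frac{dx}{db}&\frac{dx}{dc} \\
\frac{dy}{da}&\frac{dy}{db}&\frac{dy}{dc} \\
\frac{dz}{da}&\frac{dz}{db}&\frac{dz}{dc}
\end{vmatrix}
= \varrho\cdot\frac{\varrho_0}{\varrho}=\varrho_0 \,,
\end{equation*}
which is the asserted transformation.]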
\\[1cm]
\centerline{\fett{$\S.\,3.$}}\\[0.3cm]
\indent\indent
Despite the
complete analogy of these equations for the equilibrium and
motion of liquid fluids with the equations for elastic fluids,
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[T.\thefootnotemark]\enskip}%
there is still an essential difference with regard to
their derivation.
\hspace{0.4cm}For this reason, these cases have to be treated separately.\\\\
\indent\indent
Before developing the equations of motion,
it will be convenient to study
more in detail the equilibrium conditions,
at first for liquid fluids.\\\\
\indent\indent
If $X, Y, Z$ are the accelerating forces in the direction of the coordinate axes, acting on the point $x,\, y,\, z$, then it follows easily from the principle of virtual velocities that\,:
\begin{equation*} \tag*{[3.1]}
\iiint \big[\varrho\,\big(X \delta x + Y\delta y + Z \delta z \big) + p \delta L \big]\,\, {\rm d}x\,\,{\rm d}y\,\,{\rm d}z\,=\,0
\end{equation*}
in which $L=0$ gives the density equation for incompressible fluids; $\delta L$ is
its relative variation
corresponding to the variations of coordinates $\delta x, \delta y, \delta z$, and $p$ is a not yet determined quantity. \,\,The integral has to be extended over all parts of the continuous flow.\hspace{0.4cm}From (4) in $\S.\,2$, since $\delta \varrho=0$, we have
\begin{equation*} \tag*{[3.2]}
\delta L = \frac{d \delta x}{dx} + \frac{d \delta y}{dy} + \frac{d \delta z}{dz}
\end{equation*}
And thus the previous integral becomes:
\begin{equation} \tag*{(1), [3.3]}
\iiint \Big[ \varrho \Big( X\delta x + Y \delta y + Z \delta z \Big) + p\Big(\frac{d \delta x}{dx} + \frac{d \delta y} {dy} +\frac{d \delta z}{dz} \Big ) \Big] {\rm d}x\,{\rm d}y\,{\rm d}z = 0
\end{equation}
Integrating by parts, one finds\,:
\begin{align} \tag*{[3.4]}
\begin{aligned}
\iiint p \frac{d\delta x}{dx} {\rm d}x\,{\rm d}y\,{\rm d}z = \iint p\, \delta x\,{\rm d}y\,{\rm d}z - \iiint \delta x \frac{dp}{dx}\, {\rm d}x\, {\rm d}y\, {\rm d}z \nonumber \\
\iiint p \frac{d\delta y}{dy} {\rm d}x\,{\rm d}y\,{\rm d}z = \iint p\, \delta y\,{\rm d}z\,{\rm d}x - \iiint \delta y \frac{dp}{dy}\, {\rm d}x\, {\rm d}y\, {\rm d}z \\
\iiint p \frac{d\delta z}{dz} {\rm d}x\,{\rm d}y\,{\rm d}z = \iint p\, \delta z\,{\rm d}x\,{\rm d}y - \iiint \delta z \frac{dp}{dz}\, {\rm d}x\, {\rm d}y\, {\rm d}z \nonumber
\end{aligned}
\end{align}
where the double integrals extend over the surface of the fluid mass.\hspace{0.4cm}
Thus, one has for the equation of the principle of virtual velocities\,:
\begin{smallequation} \tag*{[3.5]}
0=\iiint\!\!\Big [\!\big(\varrho X \!-\! \frac{dp}{dx} \big) \delta x + \big(\varrho Y - \frac{dp}{dy} \big) \delta y + \big(\varrho Z - \frac{dp}{dz} \big) \delta z \! \Big] {\rm d}x {\rm d}y {\rm d}z + \!\int\! p (\delta x \cos \alpha + \delta y \cos \beta + \delta z \cos \gamma) {\rm d}\omega
\end{smallequation}
where ${\rm d}\omega$ is an element of the external surface, and $\alpha$,\,$\beta$,\,$\gamma$ indicate the angles between the normal to the element ${\rm d}\omega$ and the coordinate axes.\hspace{0.4cm}From these equations, it follows that the
equilibrium conditions are\,:
\begin{equation} \tag* {(2), [3.6]}
\int p (\delta x \,\cos \alpha + \delta y\, \cos \beta + \delta z\, \cos \gamma) {\rm d} \omega =0
\end{equation}
\begin{equation} \tag* {(3), [3.7]}
\frac{dp}{dx} = \varrho X, \,\,\, \frac{dp}{dy} = \varrho Y, \,\,\, \frac{dp}{dz} = \varrho Z
\end{equation}
The last three equations
require that the components $X,Y,Z$ be the
partial differential quotients of
one and the same function $V$ with respect to $x,y,z$; thus
\begin{equation*} \tag*{[3.8]}
X=\frac{dV}{dx}, \,\, Y=\frac{dV}{dy},\,\,Z=\frac{dV}{dz}
\end{equation*}
so that one has\,:
\begin{equation*} \tag*{[3.9]}
p = \varrho V + c
\end{equation*}
where $p$ is determined up to an arbitrary constant $c$.\\\\
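\indent\indent
[To spell out the step: since $\varrho$ is constant for liquid fluids, equations $(3)$ together with [3.8] give
\begin{equation*} \notag
\frac{dp}{dx}=\varrho\,\frac{dV}{dx}, \,\,\frac{dp}{dy}=\varrho\,\frac{dV}{dy}, \,\,\frac{dp}{dz}=\varrho\,\frac{dV}{dz},
\end{equation*}
that is, ${\rm d}\big(p-\varrho V\big)=0$, so that $p-\varrho V$ reduces to a constant $c$.]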
\indent\indent
Instead of equations $(3)$, one can also write\,:
\begin{mynewequation} \tag*{[3.10]}
\begin{aligned}
(p_{x + {\rm d}x}- p_x)\,{\rm d}y\, {\rm d}z-\varrho X\, {\rm d}x\, {\rm d}y\, {\rm d}z=0\\
(p_{y + {\rm d}y}- p_y)\,{\rm d}z\, {\rm d}x-\varrho Y\, {\rm d}x\, {\rm d}y\, {\rm d}z=0\\
(p_{z + {\rm d}z}-p_z)\, {\rm d}x\, {\rm d}y-\varrho Z\,\,{\rm d}x\, {\rm d}y\, {\rm d}z=0\\
\end{aligned}
\end{mynewequation}
from which it is apparent
that $p$ is the pressure at the point $x,y,z$, which acts against the given
accelerating forces.\hspace{0.4cm}This pressure is determined up to an additive constant by $p=\varrho V + c$\,; it is fully determined if its value is given at any point.\hspace{0.4cm}
Suppose now that in addition to the acceleration of the fluid particles there are also pressure forces acting on the external surface. We then find as an equilibrium condition that at each point of the external surface, these pressure forces must be equal and opposite to the pressure $p = \varrho V + c$.\\\\
\indent\indent
In equation (2), the variations $\delta x, \delta y, \delta z $ depend on certain conditions which result from the nature of the walls. If in special cases the actual meaning of equation (2) is specified more precisely, then its mechanical necessity becomes manifest. Here we want to discuss just a few such cases.
\\\\
\indent\indent
Let us assume that a part of the external surface is free and that the same forces act on all parts of it. One can set the pressure to zero at the points of
that free surface, so that $p$, the difference between the pressure at a certain point and the pressure at a point of the free surface,
is exactly determined at all remaining points of the fluid.\\\\
\indent\indent
However, since $ \delta x, \delta y, \delta z$ are evidently arbitrary in a free surface, it follows that, if $(2)$ has to be satisfied, we must have $p=0$.\\\\
\indent\indent
For fluid parts that are lying on a fixed wall, it is evident that no motion normal to the wall surface can take place.\hspace{0.4cm}The normal component of the motion is obviously\,:
\begin{equation*}
\delta x\,\cos \alpha + \delta y \,\cos \beta + \delta z\,\cos \gamma
\end{equation*}
and, since this must vanish, equation (2) is indeed satisfied.\\\\
\indent\indent
In the points which are on a moving wall, one can set
\begin{equation*} \tag*{[3.11]}
\delta x = \delta {x'} + \delta\xi,\,\, \delta y = \delta {y'} + \delta \eta, \,\,\delta z = \delta {z'} + \delta \zeta
\end{equation*}
where
$\delta {x'}, \,\,\delta {y'} ,\,\,\delta {z'}$ are the motions relative to the wall and $\delta\xi,\,\, \delta \eta,\,\,\delta\zeta$ are the motions of the fluid particles simultaneously in motion with the wall.\hspace{0.4cm}Therefore, one has instead of equation (2)
\begin{equation*} \tag*{[3.12]}
0=\int p (\delta x' \,\, \cos\alpha + \delta y'\,\, \cos\beta + \delta z'\,\, \cos \gamma) {\rm d} \omega+ \int p (\delta\xi\,\, \cos\alpha + \delta \eta\,\, \cos\beta + \delta \zeta\,\, \cos \gamma) {\rm d} \omega
\end{equation*}
but, since no motion may happen against the wall, one has\,:
\begin{equation*} \tag*{[3.13]}
\delta x' \,\, \cos\alpha + \delta y'\,\, \cos\beta+ \delta z'\,\, \cos\gamma=0
\end{equation*}
and therefore
\begin{equation*} \tag*{[3.14]}
\int p (\delta \xi \,\,\cos \alpha + \delta \eta\,\, \cos \beta + \delta \zeta \,\, \cos \gamma) {\rm d} \omega = 0
\end{equation*}
which amounts to setting
\begin{equation*} \tag*{[3.15]}
\int \big (p_x \delta \xi + p_y \delta \eta + p_z \delta \zeta \,) {\rm d} \omega =0
\end{equation*}
provided that $p_x, p_y, p_z$ are the components of the pressure with respect to the coordinate axes.\hspace{0.4cm}However this integral is the
equilibrium condition of a body, on which act external surface forces $p_x, p_y, p_z$, where
$ \delta \xi, \delta \eta, \delta \zeta$ indicate the variations, that the body can have, under the given circumstances.\hspace{0.4cm}Actually, also in this case, equation (2) is required by the nature of things.%
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}%
\footnote {Cf.\ M\'ec.\ analy.\ Bd. I, S.\ 193--201. [First edition, \hyperlink{Lagrange1}{1788}].}
\\\\
\indent\indent
We now discuss the case of elastic fluids; here the relation $L=0$ is not valid anymore.\hspace{0.4cm}In that case one has to
consider, in addition to the accelerating and external pressure forces, also the forces due to the elasticity of the fluid.\hspace{0.4cm}Let $p$ be the pressure at a point $x, y, z$; this tends to reduce the
volume of the element ${\rm d}x\, {\rm d}y\, {\rm d}z$\,;
the moment of this force is thus $p\,\delta ({\rm d}x \, {\rm d}y\, {\rm d}z)$. In order to get a different expression for $ \delta ({\rm d}x \, {\rm d}y \,{\rm d}z)$, we note that $\varrho\,{\rm d}x\,{\rm d}y\,{\rm d}z$, being the mass of an element, is always the same; thus we have $\delta (\varrho\,{\rm d}x\, {\rm d}y\,{\rm d}z)=0$, from which it follows that\,:
\begin{equation*} \tag*{[3.16]}
\varrho \delta ( {\rm d} x \,\, {\rm d}y\,\, {\rm d}z) + {\rm d}x\,\, {\rm d}y\,\, {\rm d}z\,\, \delta\varrho=0
\end{equation*}
and thus
\begin{equation*} \tag*{[3.17]}
\delta ( {\rm d}x\,\, {\rm d}y\,\, {\rm d}z) = - \frac{\delta \varrho}{\varrho} {\rm d}x\,\, {\rm d}y\,\, {\rm d} z
\end{equation*}
or, by equation $(4)$ in $\S.\,2$,
\begin{equation*} \tag*{[3.18]}
\delta ( {\rm d}x\,\, {\rm d}y\,\, {\rm d}z) \,\,=\,\,\Big(\frac{d \delta x}{dx}\,\,+ \frac{d \delta y}{dy}\,\,+\frac{d \delta z}{dz}\,\,\Big)\,\ {\rm d}x\,\,{\rm d}y\,\, {\rm d}z;
\end{equation*}
Therefore the equilibrium conditions are\,:
\begin{equation*} \tag*{[3.19]}
\iiint \Big[ \varrho \Big( X \delta x\,\,+\,\,Y\delta y\,\,+ Z\delta z \Big) \,\, + p \Big(\frac{d \delta x}{dx}\,\,+ \frac{d \delta y}{dy}\,\,+\frac{d \delta z}{dz}\,\,\Big) \Big]\,\ {\rm d}x\,\, {\rm d}y\,\, {\rm d}z = 0
\end{equation*}
\indent\indent
Since this equation is identical with $(1)$, the equilibrium equations for elastic and liquid fluids are formally the same.\hspace{0.4cm} Also here,
equation $(2)$ holds for the external surface, and, in accordance with equations $(3)$, we have\,:
\begin{equation} \tag*{[3.20]}
\frac{dp}{dx} = \varrho X, \,\,\frac{dp}{dy} = \varrho Y,\,\,\frac{dp}{dz} = \varrho Z
\end{equation}
For elastic fluids, $\varrho$ is a
given function of $p$,
say, $\varrho = \varphi (p)$.\hspace{0.4cm} Let us put
\begin{equation*} \tag*{[3.21]}
f(p) = \int \frac{{\rm d}p}{\varphi(p)}
\end{equation*}
from which it follows obviously that
\begin{math} \displaystyle \frac{1}{\varrho} \frac{dp}{dx} = \frac{1}{\varphi(p)} \frac{dp}{dx} = \frac{df(p)}{dp} \frac{dp}{dx} = \frac{df(p)}{dx}, \end{math}
therefore, the three equations for the equilibrium condition become\,:
\begin{equation*} \tag*{[3.22]}
\frac{df(p)}{dx} = X, \,\,\, \frac{df(p)}{dy} = Y, \,\,\, \frac{df(p)}{dz} = Z\,,
\end{equation*}
so that, also for elastic fluids in equilibrium, $X,Y,Z$ have to be
partial differential quotients
of the same function with respect to $x,y,z$.\hspace{0.4cm}Since $\varphi(p)$, and consequently also $f(p)$, is known, $p$
can always be expressed through $X,Y,Z$.\\[1cm]
\centerline{\fett{$\S.\,4.$}}\\[0.3cm]
\indent\indent
As follows from the considerations of
$\S.\,3$, the principle of virtual velocities and lost forces
for the motion of liquid and elastic fluids, implies\,:
\tagsleft@true
\begin{smallequation} \notag
\,\,\,\,0=\iiint \biggl\{ \varrho \biggl[ \Big( X - \frac{d^2 x}{d t^2} \Big) \delta x + \Big( Y - \frac{d^2 y}{d t^2}\Big) \delta y + \Big(Z - \frac{d^2 z}{d t^2}\Big) \delta z \biggr] + p \biggl[ \frac{d \delta x}{dx} +
\frac{d \delta y}{dy} + \frac{d \delta z}{dz} \biggr] \biggr\} {\rm d}x {\rm d}y {\rm d}z
\end{smallequation}
\tagsleft@false
\vskip-1.15cm\begin{align}\tag*{[4.1]}
\end{align}
\vskip-1.07cm \mbox{\!\!\!\!\!\!(1)}\vskip0.8cm
\noindent from which follows firstly
equation $(2)$ of $\S.\,3$\,:
\tagsleft@true
\begin{align} \tag{2}
\int p \big(\delta x \cos \alpha + \delta y \cos \beta + \delta z \cos \gamma \big) {\rm d} \omega = 0,
\end{align}
\tagsleft@false
\vskip-1.5cm\begin{align}\tag*{[4.2]}
\end{align}
which concerns only the external surface;
secondly, we have for the fundamental equations of liquid or elastic fluids,
\tagsleft@true
\begin{align}\tag{3}
\varrho \Big ( \frac{d^2 x}{d t^2} - X \Big) + \frac{dp}{dx} =0,\,\, \varrho \Big ( \frac{d^2 y}{d t^2} - Y \Big) + \frac{dp}{dy} =0, \,\,\varrho \Big ( \frac{d^2 z}{d t^2} - Z \Big) + \frac{dp}{dz} =0 \,,
\end{align}
\tagsleft@false
\vskip-1.5cm\begin{align}\tag*{[4.3]}
\end{align}
\noindent
where $p$ indicates the pressure in each point.
\\\\
\indent \indent According to the first \mbox{E\,u\,l\,e\,r}ian method, the components $u,\,v,\,w$ are considered as functions of time and of the space coordinates $x,\,y,\,z$. Therefore,
\begin{mynewequation}\tag*{[4.4]}
\begin{aligned}
\frac{d^2 x}{dt^2} &=\frac{du}{dt} + \frac{du}{dx} u + \frac{du}{dy} v + \frac{du}{dz} w\,\,\,\\
\frac{d^2 y}{dt^2} &=\frac{dv}{dt} + \frac{dv}{dx} u + \frac{dv}{dy} v + \frac{dv}{dz} w\,\,\,\,\\
\frac{d^2 z}{dt^2} &=\frac{dw}{dt} + \frac{dw}{dx} u + \frac{dw}{dy} v + \frac{dw}{dz} w
\end{aligned}
\end{mynewequation}
so that the fundamental equations in this form are the following\,:
\begin{mynewequation}\tag*{(4), [4.5]}
\left.
\begin{aligned}
\frac{du}{dt}\, +\, \frac{du}{dx} u + \frac{du}{dy} v + \frac{du}{dz} w - X + \frac{1}{\varrho}\frac{dp}{dx}=0\\\\
\frac{dv}{dt}\, +\,\, \frac{dv}{dx} u + \frac{dv}{dy} v + \frac{dv}{dz} w - Y + \frac{1}{\varrho}\frac{dp}{dy}=0\\\\
\frac{dw}{dt}\, + \frac{dw}{dx} u + \frac{dw}{dy} v + \frac{dw}{dz} w - Z + \frac{1}{\varrho}\frac{dp}{dz}=0
\end{aligned} \qquad \right\}
\end{mynewequation}
which may be also written in such a way that each equation is obtained from the other by cyclic permutation, namely\,:
\begin{mynewequation}\tag*{(5), [4.6]}
\left.
\begin{aligned}
\frac{du}{dt}\, +\, \frac{du}{dx} u + \frac{du}{dy} v + \frac{du}{dz} w - X + \frac{1}{\varrho}\frac{dp}{dx}=0\\\\
\frac{dv}{dt}\, +\,\, \frac{dv}{dy} v + \frac{dv}{dz} w + \frac{dv}{dx} u - Y + \frac{1}{\varrho}\frac{dp}{dy}=0\\\\
\frac{dw}{dt}\, + \frac{dw}{dz} w + \frac{dw}{dx} u + \frac{dw}{dy} v - Z + \frac{1}{\varrho}\frac{dp}{dz}=0
\end{aligned} \qquad \right\}
\end{mynewequation}
In addition to these formulae, there is also the density equation~(5) of $\S.\,2$:
\begin{equation*} \tag*{[4.7]}
\frac{d\varrho}{dt} + \frac{d(\varrho u )}{dx} + \frac{d(\varrho v )}{dy} + \frac{d(\varrho w )}{dz} =0
\end{equation*}
or,
in particular, for liquid fluids
\begin{equation*} \tag*{[4.8]}
\frac{du}{dx} + \frac{dv}{dy} + \frac{dw}{dz}=0
\end{equation*}
We see that these four equations are sufficient to determine the four unknowns $u,v,w$ and $p$ as functions of $x,y,z$ and $t$;\, $\varrho$ is either a known function of $p$ or a constant.\\\\
\indent\indent
By the second
\mbox{E\,u\,l\,e\,r}ian representation [Lagrangian representation], equations~$(3)$
are respectively multiplied by
$$
\frac{dx}{da},\,\,\frac{dy}{da},\,\,\frac{dz}{da}$$
and summed, then, similarly, by
$$
\frac{dx}{db},\,\,\frac{dy}{db},\,\,\frac{dz}{db}$$
and
$$
\frac{dx}{dc},\,\,\frac{dy}{dc},\,\,\frac{dz}{dc}$$
\noindent
Thus, one obtains for the three fundamental equations:
\begin{equation*}\tag*{(6), [4.9]}
\left.
\begin{aligned}
\Big( \frac{d^2 x}{d t^2} - X \Big)\frac{dx}{da} + \Big( \frac{d^2 y}{d t^2} - Y \Big) \frac{dy}{da}+\Big( \frac{d^2 z}{d t^2} - Z \Big) \frac{dz}{da} + \frac{1}{\varrho} \frac{dp}{da} =0\\\
\Big( \frac{d^2 x}{d t^2} - X \Big)\frac{dx}{db} + \Big( \frac{d^2 y}{d t^2} - Y \Big) \frac{dy}{db}+\Big( \frac{d^2 z}{d t^2} - Z \Big) \frac{dz}{db} + \frac{1}{\varrho} \frac{dp}{db} =0\\\
\Big( \frac{d^2 x}{d t^2} - X \Big)\frac{dx}{dc} + \Big( \frac{d^2 y}{d t^2} - Y \Big) \frac{dy}{dc}+\Big( \frac{d^2 z}{d t^2} - Z \Big) \frac{dz}{dc} + \frac{1}{\varrho} \frac{dp}{dc} =0
\end{aligned} \qquad \right\}
\end{equation*}
and, in addition, there is the density equation (1) of $\S.\,2$
\begin{myequation} \tag*{[4.10]}
\begin{vmatrix}
\frac{dx}{da}&\frac{dx}{db}&\frac{dx}{dc} \\\\
\frac{dy}{da}&\frac{dy}{db}&\frac{dy}{dc} \\\\
\frac{dz}{da}&\frac{dz}{db}&\frac{dz}{dc}
\end{vmatrix}
=\frac{\varrho_0}{\varrho}
\end{myequation}
From these four equations $x,y,z$ and $p$ are found as functions of the initial
location $a,b,c$ and time $t$.\\
\indent\indent Evidently, the solutions of these partial differential equations must contain arbitrary functions, which have to be determined from initial conditions and are in accordance with the nature of the walls and the flow boundaries.\\
\indent\indent These last equations, which are usually called after \mbox{L\,a\,g\,r\,a\,n\,g\,e}, take a significantly simpler form by
setting\,:
\begin{equation*} \tag*{[4.11]}
X= \frac{dV}{dx}, \,\,Y=\frac{dV}{dy}, \,\, Z=\frac{dV}{dz}
\end{equation*}
so they become\,:
\begin{equation*}\tag*{(7), [4.12]}
\left.
\begin{aligned}
\frac{d^2 x}{d t^2}\frac{dx}{da} + \frac{d^2 y}{d t^2} \frac{dy}{da}+\frac{d^2 z}{d t^2} \frac{dz}{da} - \frac{dV}{da} + \frac{1}{\varrho}\frac{dp}{da}=0\\\\
\frac{d^2 x}{d t^2}\frac{dx}{db} + \frac{d^2 y}{d t^2} \frac{dy}{db}+\frac{d^2 z}{d t^2} \frac{dz}{db} - \frac{dV}{db} + \frac{1}{\varrho}\frac{dp}{db}=0\\\\
\frac{d^2 x}{d t^2}\frac{dx}{dc} + \frac{d^2 y}{d t^2} \frac{dy}{dc}+\frac{d^2 z}{d t^2} \frac{dz}{dc} - \frac{dV}{dc} + \frac{1}{\varrho}\frac{dp}{dc}=0
\end{aligned} \qquad \right\}
\end{equation*}
We will limit ourselves to this
assumption about $X,Y,Z$, which, apart from the boundary conditions,
coincides with the one that necessarily has to hold in the equilibrium state of the fluid.
\\
\centerline{\fett{$\S.\,5.$}}\\[0.3cm]
\indent\indent
Partially integrating the last three terms of equation (1) of $\S.\,4$,
ignoring the boundary contributions to the double integrals and using as already stated
\begin{equation*} \tag*{[5.1]}
X= \frac{dV}{dx}, \,\,Y=\frac{dV}{dy}, \,\, Z=\frac{dV}{dz} \,,
\end{equation*}
one obtains the following equation\,:
\begin{equation*} \tag*{[5.2]}
0 = \iiint \varrho\,{\rm d}x\,{\rm d}y\, {\rm d}z\ \bigg\{\Big(\frac{d^2 x}{d t^2} - \frac{dV}{dx} + \frac{1}{\varrho}\frac{dp}{dx}\Big) \delta x + \Big(\frac{d^2 y}{d t^2} - \frac{dV}{dy} + \frac{1}{\varrho}\frac{dp}{dy}\Big) \delta y + \Big(\frac{d^2 z}{d t^2} - \frac{dV}{dz} + \frac{1}{\varrho}\frac{dp}{dz}\Big) \delta z \bigg\}
\end{equation*}
If one puts, as in $\S.\,3$, $\varrho= \varphi (p )$ and:
\begin{equation*} \tag*{[5.3]}
f(p) =\int\frac{ {\rm d}p}{\varphi(p)}
\end{equation*}
one has,
\begin{smallequation} \tag*{[5.4]}
\!\!\!\!0 = \iiint \varrho_0 \, {\rm d}a\, {\rm d}b\, {\rm d}c \bigg\{\Big(\frac{d^2 x}{d t^2} - \frac{dV}{dx} + \frac{df(p)}{dx}\Big) \delta x + \Big(\frac{d^2 y}{d t^2} - \frac{dV}{dy} + \frac{df(p)}{dy}\Big) \delta y + \Big(\frac{d^2 z}{d t^2} - \frac{dV}{dz} + \frac{df(p)}{dz}\Big) \delta z \bigg\} \,,
\end{smallequation}
provided that the transformation of the integral is made according to \S.\,2.
Now, if one sets
\begin{equation*} \tag*{[5.5]} \label{pointerOmega}
V - f(p) = \Omega,
\end{equation*}
one can write\,:
\begin{equation*} \tag*{[5.6]}
0 = \iiint \varrho_0 \, {\rm d}a\, {\rm d}b\, {\rm d}c \bigg[ \frac{d^2 x}{d t^2} \delta x + \frac{d^2 y}{d t^2} \delta y + \frac{d^2 z}{d t^2} \delta z -\delta \Omega \bigg]
\end{equation*}
If one now integrates under the triple integral
with respect to
the variable $t$ which is independent of $a,b,c$, one obtains
\begin{equation} \tag*{[5.7]}
\begin{aligned}
\int \frac{d^2 x}{dt^2} \delta x\, {\rm d}t = \Big[\frac{dx}{dt} \delta x\Big] - \int \frac{dx}{dt}
\frac{d \delta x}{dt} {\rm d} t\\\\
\int\frac{d^2 y}{dt^2} \delta y\, {\rm d}t= \Big[\frac{dy}{dt} \delta y\Big] - \int\frac{dy}{dt}\frac{d \delta y}{dt} {\rm d }t\\\\
\int\frac{d^2 z}{dt^2} \delta z\, {\rm d}t = \Big[\frac{dz}{dt} \delta z\Big] - \int\frac{dz}{dt}\frac{d \delta z}{dt} {\rm d} t
\end{aligned}
\end{equation}
\noindent Dropping the boundary terms [the bracketed quantities outside the time integrals],
one has the equation\,:
\begin{equation*} \tag*{[5.8]}
0 = \iiint \varrho_0 {\rm d}a\, {\rm d}b\, {\rm d}c \int {\rm d}t \bigg\{\frac{dx}{dt} \delta \frac{dx}{dt} + \frac{dy}{dt} \delta \frac{dy}{dt} +\frac{dz}{dt} \delta \frac{dz}{dt} + \delta \Omega\bigg\}
\end{equation*}
which coincides with the following\,:\hypertarget{eq1}{}
\begin{equation} \tag*{(1), [5.9]}
0 = \delta \iiint \varrho_0\, {\rm d}a\, {\rm d}b\, {\rm d}c \int {\rm d}t \Big [\Big(\frac{ds}{dt}\Big)^2 + 2 \Omega \Big] \,.
\end{equation}
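\indent\indent
[The passage from the preceding equation to (1) rests on the identity
\begin{equation*} \notag
\delta \Big[\Big(\frac{ds}{dt}\Big)^2 + 2\Omega\Big] = 2\Big(\frac{dx}{dt}\,\delta \frac{dx}{dt} + \frac{dy}{dt}\,\delta \frac{dy}{dt} + \frac{dz}{dt}\,\delta \frac{dz}{dt} + \delta \Omega\Big),
\end{equation*}
which follows from $\Big(\frac{ds}{dt}\Big)^2 = \Big(\frac{dx}{dt}\Big)^2 + \Big(\frac{dy}{dt}\Big)^2 + \Big(\frac{dz}{dt}\Big)^2$.]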
Thus, if the first three hydrodynamical fundamental equations are satisfied, then the first variation of
\begin{equation*} \notag
\iiint \varrho_0\, {\rm d}a\, {\rm d}b\, {\rm d}c \int {\rm d}t \Big [\Big(\frac{ds}{dt}\Big)^2 + 2 \Omega \Big]
\end{equation*}
vanishes; conversely, if the first variation of this integral with respect to $x,y,z$ is set to zero,
then one obtains these first three equations. \\\\
\indent\indent
This theorem, which in view of the meaning of $\displaystyle \frac{ds}{dt}$ and $\Omega$ can readily be regarded
as a mechanical principle, has a certain analogy with the principle of least action.
For us it possesses an analytical importance, since it gives an
extremely simple tool for transforming the hydrodynamical equations.
\hspace{0.4cm}
Indeed, in order to introduce in these equations new coordinates, instead of $x,y,z$, it is only necessary to write the arc element
\begin{equation*} \tag*{[5.10]}
{\rm d}s^2= {\rm d}x^2 + {\rm d}y^2 + {\rm d}z^2
\end{equation*}
in terms of the new coordinates and, then,
apply the simple operation of variation, using the integral in the new coordinates\,:
\begin{equation*} \notag
\iiint \varrho_0\,{\rm d} a\, {\rm d} b\, {\rm d}c \int {\rm d} t \Big[\Big(\frac{ds}{dt}\Big)^2 + 2 \Omega \Big]
\end{equation*}
By setting the coefficients of the three variations to zero, we thereby obtain three equations in a similar form as equations (3) in $\S.\,4$\,; in order to write them into the first or second \mbox{E\,u\,l\,e\,r}ian form, one has to apply analogous procedures as was done in $\S.\,4$.\\
\indent\indent
If the new coordinates\,:
\begin{equation*} \tag*{[5.11]}
\varrho_1=f_1(x,\,\,y,\,\,z), \,\,\,\,\varrho_2=f_2(x,\,\,y,\,\,z),\,\,\,\,\varrho_3=f_3(x,\,\,y,\,\,z)
\end{equation*}
are used instead of $x,\,y,\,z$, one obtains:
\begin{mynewequation} \tag*{[5.12]}
\begin{aligned}
{\rm d}x = \frac{dx}{d \varrho_1}{\rm d} \varrho_1 + \frac{dx}{d\varrho_2}{\rm d} \varrho_2 + \frac{dx}{d \varrho_3} {\rm d} \varrho_3\\
{\rm d}y = \frac{dy}{d \varrho_1}{\rm d} \varrho_1 + \frac{dy}{d\varrho_2}{\rm d} \varrho_2 + \frac{dy}{d \varrho_3} {\rm d} \varrho_3\\
{\rm d}z = \frac{dz}{d \varrho_1}{\rm d} \varrho_1 + \frac{dz}{d\varrho_2}{\rm d} \varrho_2 + \frac{dz}{d \varrho_3} {\rm d} \varrho_3\\
\end{aligned}
\end{mynewequation}
therefore
\begin{equation*} \tag*{[5.13]}
{\rm d}s^2 = {\rm d}x^2+{\rm d}y^2+{\rm d}z^2=N_1 {\rm d} \varrho_1^2 + N_2 {\rm d} \varrho_2^2 +N_3 {\rm d} \varrho_3^2+ 2 n_3 {\rm d} \varrho_1 {\rm d}\varrho_2 + 2n_1 {\rm d} \varrho_2 {\rm d} \varrho_3 + 2n_2 {\rm d} \varrho_3 {\rm d}\varrho_1
\end{equation*}
where
\begin{align} \tag*{[5.14]}
\begin{aligned}
N_1= \Big(\frac{dx}{d \varrho_1}\Big)^2 + \Big(\frac{dy}{d \varrho_1}\Big)^2 + \Big(\frac{dz}{d \varrho_1}\Big)^2\\
N_2= \Big(\frac{dx}{d \varrho_2}\Big) ^2 + \Big(\frac{dy}{d \varrho_2}\Big)^2 + \Big(\frac{dz}{d \varrho_2}\Big)^2\\
N_3= \Big(\frac{dx}{d \varrho_3}\Big) ^2 + \Big(\frac{dy}{d \varrho_3}\Big)^2 + \Big(\frac{dz}{d \varrho_3}\Big)^2
\end{aligned} \\[0.3cm] \tag*{[5.15]}
\begin{aligned}
n_3= \frac{dx}{d \varrho_1} \frac{dx}{d \varrho_2} + \frac{dy}{d \varrho_1} \frac{dy}{d \varrho_2} + \frac{dz}{d \varrho_1} \frac{dz}{d \varrho_2}\\\\
n_1= \frac{dx}{d \varrho_2} \frac{dx}{d \varrho_3} + \frac{dy}{d \varrho_2} \frac{dy}{d \varrho_3} + \frac{dz}{d \varrho_2} \frac{dz}{d \varrho_3}\\\\
n_2= \frac{dx}{d \varrho_3} \frac{dx}{d \varrho_1} + \frac{dy}{d \varrho_3} \frac{dy}{d \varrho_1} + \frac{dz}{d \varrho_3} \frac{dz}{d \varrho_1}
\end{aligned}
\end{align}
\indent\indent
Once $N_1, N_2, N_3,n_1,n_2,n_3$ are expressed in terms of the new variables $\varrho_1, \varrho_2, \varrho_3$, one has to vary the integral
\begin{smallequation} \notag
\iiint\!\!\varrho_0 \, {\rm d}a\, {\rm d}b\, {\rm d}c \int\!\! {\rm d}t \Big[N_1 \Big(\frac{d \varrho_1}{dt} \Big)^2\! + N_2 \Big(\frac{d \varrho_2}{dt} \Big)^2 \!+N_3 \Big(\frac{d \varrho_3}{dt} \Big)^2\! + 2 n_3 \frac{d\varrho_1}{dt} \frac{d \varrho_2}{dt} + 2 n_1 \frac{d\varrho_2}{dt} \frac{d \varrho_3}{dt} + 2 n_2 \frac{d\varrho_3}{dt} \frac{d \varrho_1}{dt} + 2 \Omega \Big]
\end{smallequation}
with respect to these new variables.
Then, after integration by parts with respect to $t$, one removes from the quadruple integral the quantities in which appear the time derivatives of the variations $\delta \varrho_1, \delta \varrho_2, \delta \varrho_3$.
Then one has to set the coefficients of $\delta \varrho_1, \delta \varrho_2, \delta \varrho_3$ equal to zero.\hspace{0.4cm}
After that one obtains the first three hydrodynamical fundamental equations,
which are completed by a fourth one, the density equation.\hspace{0.6cm} In order to express also the density equation in the new coordinates, we notice that
\noindent the volume element ${\rm d}x\,{\rm d}y\,{\rm d}z$ may be expressed in such coordinates very easily, namely\,:
\begin{equation}\tag*{[5.16]}
\begin{aligned}
{\rm d}x\,{\rm d}y\,{\rm d}z= {\rm d}\varrho_1\, {\rm d}\varrho_2\, {\rm d}\varrho_3 \begin{vmatrix}
\frac{dx}{d\varrho_1}&\frac{dx}{d\varrho_2}&\frac{dx}{d\varrho_3} \\\\
\frac{dy}{d\varrho_1}&\frac{dy}{d\varrho_2}&\frac{dy}{d\varrho_3} \\\\
\frac{dz}{d\varrho_1}&\frac{dz}{d\varrho_2}&\frac{dz}{d\varrho_3}
\end{vmatrix}
\end{aligned}
\end{equation}
herefrom follows
\begin{equation} \notag
({\rm d} x\,{\rm d}y\,{\rm d}z)^2= ({\rm d}\varrho_1\,{\rm d}\varrho_2\,{\rm d}\varrho_3)^2 {\bf { \times}} \\[-0.5cm]
\end{equation}
\begin{equation}\tag*{[5.17]}
\begin{vmatrix}
(\frac{dx}{d\varrho_1})^2 +(\frac{dy}{d\varrho_1})^2+(\frac{dz}{d\varrho_1})^2,&
\frac{dx}{d\varrho_1} \frac{dx}{d\varrho_2} + \frac{dy}{d\varrho_1} \frac{dy}{d\varrho_2}+\frac{dz}{d\varrho_1} \frac{dz}{d\varrho_2},&
\frac{dx}{d\varrho_3} \frac{dx}{d \varrho_1} + \frac{dy}{d\varrho_3} \frac{dy}{d \varrho_1}+\frac{dz}{d\varrho_3}\frac{dz}{d\varrho_1}\\\\
\frac{dx}{d\varrho_1 }\frac{dx}{d\varrho_2} + \frac{dy}{d\varrho_1}\frac{dy}{d\varrho_2}+\frac{dz}{d\varrho_1} \frac{dz}{d\varrho_2},&
(\frac{dx}{d\varrho_2})^2 +(\frac{dy}{d\varrho_2})^2+(\frac{dz}{d\varrho_2})^2,&
\frac{dx}{d\varrho_2} \frac{dx}{d\varrho_3} + \frac{dy}{d\varrho_2} \frac{dy}{d\varrho_3}+\frac{dz}{d\varrho_2} \frac{dz}{d\varrho_3}\\\\
\frac{dx}{d\varrho_3} \frac{dx}{d\varrho_1} + \frac{dy}{d\varrho_3} \frac{dy}{d\varrho_1}+\frac{dz}{d\varrho_3} \frac{dz}{d\varrho_1},&
\frac{dx}{d\varrho_2} \frac{dx}{d\varrho_3} + \frac{dy}{d\varrho_2} \frac{dy}{d\varrho_3}+\frac{dz}{d\varrho_2} \frac{dz}{d\varrho_3},&
(\frac{dx}{d\varrho_3})^2 +(\frac{dy}{d\varrho_3})^2+(\frac{dz}{d\varrho_3})^2
\end{vmatrix}
\end{equation}
or, with the notation as defined above\,:
\begin{mynewequation} \tag*{[5.18]}
({\rm d}x\,{\rm d}y\,{\rm d}z)^2= ({\rm d}\varrho_1\,{\rm d}\varrho_2\,{\rm d}\varrho_3)^2 \begin{vmatrix}
N_1&n_3&n_2\\\\
n_3&N_2&n_1\\\\
n_2&n_1&N_3
\end{vmatrix}
\end{mynewequation}
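\indent\indent
[In modern terms, equations [5.16]--[5.18] express, for the functional matrix $J = \big(\frac{dx_i}{d\varrho_k}\big)$, the identity
\begin{equation*} \notag
J^{\mathsf T} J = \begin{pmatrix} N_1&n_3&n_2\\ n_3&N_2&n_1\\ n_2&n_1&N_3 \end{pmatrix}, \qquad (\det J)^2 = \det\big(J^{\mathsf T} J\big),
\end{equation*}
since the entries of $J^{\mathsf T} J$ are precisely the quantities $N_1, N_2, N_3, n_1, n_2, n_3$ defined above.]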
\indent\indent
Let us denote the values of $\varrho_1, \varrho_2,\varrho_3, N_1, N_2, N_3, n_1,n_2, n_3 $ at time $t=0$ with $ \varrho_1^0, \varrho_2^0,\varrho_3^0, N_1^0, N_2^0, N_3^0, n_1^0,n_2^0, n_3^0 $, then we get from the previous equation for $t=0$
\begin{mynewequation} \tag*{[5.19]}
({\rm d}a\,{\rm d}b\,{\rm d}c)^2= ({\rm d}\varrho_1^0\,{\rm d}\varrho_2^0\,{\rm d}\varrho_3^0)^2 \begin{vmatrix}
N_1^0&n_3^0&n_2^0\\\\
n_3^0&N_2^0&n_1^0\\\\
n_2^0&n_1^0&N_3^0
\end{vmatrix}
\end{mynewequation}
One also has to think of the ensuing value of ${\rm d}a\,{\rm d}b\,{\rm d}c$ as substituted into the integral to be varied. But now the density equation is, for the general case\,:
\begin{equation*} \tag*{[5.20]}
\frac{ {\rm d}x\,{\rm d}y\,{\rm d}z}{{\rm d}a\,{\rm d}b\,{\rm d}c} = \frac{\varrho_0}{\varrho} \,.
\end{equation*}
Then, dividing $({\rm d}x\,\,{\rm d}y\,\,{\rm d}z)^2$ by $({\rm d}a\,\,{\rm d}b\,\,{\rm d}c)^2$, one obtains the density equation in the new variables\,:
\begin{mynewequation} \tag*{[5.21]}
\Big(\frac{{\rm d} \varrho_1}{{\rm d}\varrho_1^0} \frac{{\rm d} \varrho_2}{{\rm d}\varrho_2^0} \frac{{\rm d} \varrho_3}{{\rm d} \varrho_3^0} \Big)^2\,\,\,\Big(\frac{\varrho}{\varrho_0}\Big)^2 = \begin{vmatrix} N_1^0& n_3^0&n_2^0\\
n_3^0&N_2^0&n_1^0\\
n_2^0&n_1^0&N_3^0
\end{vmatrix}
:
\begin{vmatrix} N_1& n_3&n_2\\
n_3&N_2&n_1\\
n_2&n_1&N_3
\end{vmatrix}
\end{mynewequation}
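\indent\indent
[Explicitly: dividing [5.18] by [5.19] and using [5.20], one finds
\begin{equation*} \notag
\Big(\frac{{\rm d}\varrho_1\,{\rm d}\varrho_2\,{\rm d}\varrho_3}{{\rm d}\varrho_1^0\,{\rm d}\varrho_2^0\,{\rm d}\varrho_3^0}\Big)^2 \cdot \frac{\det N}{\det N^0} = \Big(\frac{\varrho_0}{\varrho}\Big)^2,
\end{equation*}
where $\det N$ and $\det N^0$ denote the two determinants above; this is [5.21].]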
or, since it is well known that
\begin{mynewequation} \tag*{[5.22]}
\text{\small ${\rm d}\varrho_1\,{\rm d}\varrho_2\,{\rm d}\varrho_3 = {\rm d}\varrho_1^0\,{\rm d}\varrho_2^0\,{\rm d}\varrho_3^0$}
\begin{vmatrix}
\frac{d\varrho_1}{d\varrho_1^0} &\frac{d\varrho_1}{d\varrho_2^0} &\frac{d\varrho_1}{d\varrho_3^0} \\\\
\frac{d\varrho_2}{d\varrho_1^0} &\frac{d\varrho_2}{d\varrho_2^0} &\frac{d\varrho_2}{d\varrho_3^0} \\\\
\frac{d\varrho_3}{d\varrho_1^0} &\frac{d\varrho_3}{d\varrho_2^0} &\frac{d\varrho_3}{d\varrho_3^0}
\end{vmatrix} \,,
\end{mynewequation}
one finally obtains\,:
\begin{mynewequation}\tag*{[5.23]}
\begin{vmatrix}
\frac{d\varrho_1}{d\varrho_1^0} &\frac{d\varrho_1}{d\varrho_2^0} &\frac{d\varrho_1}{d\varrho_3^0} \\\\
\frac{d\varrho_2}{d\varrho_1^0} &\frac{d\varrho_2}{d\varrho_2^0} &\frac{d\varrho_2}{d\varrho_3^0} \\\\
\frac{d\varrho_3}{d\varrho_1^0} &\frac{d\varrho_3}{d\varrho_2^0} &\frac{d\varrho_3}{d\varrho_3^0}
\end{vmatrix} \cdot
\text{\Large $\frac{\varrho}{\varrho_0}$} ={\boldsymbol {\surd }}\begin{vmatrix} N_1^0& n_3^0&n_2^0\\
n_3^0&N_2^0&n_1^0\\
n_2^0&n_1^0&N_3^0
\end{vmatrix}
:
\begin{vmatrix} N_1& n_3&n_2\\
n_3&N_2&n_1\\
n_2&n_1&N_3
\end{vmatrix}
\end{mynewequation}
\indent\indent
All quantities appearing in this transformed density equation are already known through the transformation of the arc element.\hspace{0.4cm}Here, the problem of the transformation of the four hydrodynamical equations in an arbitrary coordinate system is reduced to the problem of the transformation of the arc element.\\
\indent\indent
It is obvious that the applicability of this procedure does not depend on the number of variables and that, in the same way, by variation of the integral with respect to $x_1,x_2,...,x_n$:
\begin{equation*} \notag
\iiint \varrho_0\, {\rm d}a_1 {\rm d}a_2\,...\,{\rm d}a_n \int {\rm d}t \biggl\{ \Big(\frac{ds}{dt}\Big)^2 + 2 \Omega\biggr\}
\end{equation*}
one obtains
\begin{mynewequation}\tag*{[5.24]}
\left.
\begin{aligned}
\frac{d^2 x_1}{d t^2} \frac{dx_1}{da_1} + \frac{d^2 x_2}{d t^2} \frac{dx_2}{da_1}+...+ \frac{d^2 x_n}{d t^2} \frac{dx_n}{da_1}- \frac{dV}{da_1}+ \frac{1}{\varrho}\, \frac{dp}{da_1} =0\\
\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\,\,\,.\,\qquad \qquad \qquad \qquad \qquad \\
\frac{d^2 x_1}{d t^2} \frac{dx_1}{da_n} + \frac{d^2 x_2}{d t^2} \frac{dx_2}{da_n}+...+ \frac{d^2 x_n}{d t^2} \frac{dx_n}{da_n}- \frac{dV}{da_n}+ \frac{1}{\varrho} \frac{dp}{da_n} =0\\
\end{aligned} \qquad \right\}
\end{mynewequation}
\noindent
where $a_1, a_2, \ldots, a_n$ are the values of $x_1, x_2, \ldots, x_n$ at time $t=0$.\hspace{0.4cm}Then, the transformation of these equations happens exactly in the same way.\hspace{0.4cm}For the sake of brevity, we here limit ourselves to three variables.\\
\indent\indent
This transformation becomes very simple when the new variables form an orthogonal system, a case which, apart from this simplification, is also very interesting, since all the commonly used coordinate systems are included in it.\\
\indent\indent
The points where $\varrho_1$ takes a given value will in general
form a surface, whose equation with respect to the axes of $x,\,y,\,z$ is
\begin{equation*} \tag*{[5.25]}
\varrho_1 = f_1 (x,\, y,\,z)
\end{equation*}
The cosines of the angles formed by the normal at the point $x, \,y, \,z$ of this surface and the coordinate axes are\,:
\begin{equation*} \tag*{[5.26]}
\frac{1}{\Delta_1} \,\frac{d \varrho_1}{dx}, \,\,\,\,\,\,\,\frac{1}{\Delta_1}\,\frac{d \varrho_1}{dy}, \,\,\,\,\,\,\frac{1}{\Delta_1}\,
\frac{d \varrho_1}{dz}, \,\,\,\,\,\,\,\,\,\Delta_1^2= \Big(\frac{d\varrho_1}{dx}\Big)^2 + \Big(\frac{d\varrho_1}{dy}\Big)^2 +\Big(\frac{d\varrho_1}{dz}\Big)^2
\end{equation*}
The analogous cosines for the normal to the surface
\begin{equation*} \tag*{[5.27]}
\varrho_2 = f_2 (x,\, y,\, z)
\end{equation*}
are\,:
\begin{equation*} \tag*{[5.28]}
\frac{1}{\Delta_2} \,\frac{d \varrho_2}{dx}, \frac{1}{\Delta_2}\,\frac{d \varrho_2}{dy}, \frac{1}{\Delta_2}\,
\frac{d \varrho_2}{dz}, \,\,\,\,\,\,\,\,\,\Delta_2^2= \Big(\frac{d\varrho_2}{dx}\Big)^2 + \Big(\frac{d\varrho_2}{dy}\Big)^2 +\Big(\frac{d\varrho_2}{dz}\Big)^2;
\end{equation*}
for the surface
\begin{equation*} \tag*{[5.29]}
\varrho_3 = f_3 (x, y, z)
\end{equation*}
the analogous cosines will be\,:
\begin{equation*} \tag*{[5.30]}
\frac{1}{\Delta_3} \,\frac{d \varrho_3}{dx}, \frac{1}{\Delta_3}\,\frac{d \varrho_3}{dy}, \frac{1}{\Delta_3}\,
\frac{d \varrho_3}{dz}, \,\,\,\,\,\,\,\,\,\Delta_3^2= \Big(\frac{d\varrho_3}{dx}\Big)^2 + \Big(\frac{d\varrho_3}{dy}\Big)^2 +\Big(\frac{d\varrho_3}{dz}\Big)^2
\end{equation*}
\indent\indent
Suppose now that $\varrho_1, \, \varrho_2, \, \varrho_3$ form an orthogonal system;
then, at the point of intersection, the normals to the three surfaces $\varrho_1, \, \varrho_2, \, \varrho_3$ are mutually orthogonal. The conditions that the axes of $x,\,y,\,z$ form an orthogonal system, when their positions are referred to the normals of the surfaces $\varrho_1, \varrho_2,\,\varrho_3$, are the following\,:
\begin{mynewequation} \tag*{[5.31]}
\left.
\begin{aligned}
\frac{1}{\Delta_1^2} \Big(\frac{d \varrho_1}{dx}\Big)^2 + \frac{1}{\Delta_2^2} \Big(\frac{d \varrho_2}{dx}\Big)^2 + \frac{1}{\Delta_3^2} \Big(\frac{d \varrho_3}{dx}\Big)^2 =1\\\\
\frac{1}{\Delta_1^2} \Big(\frac{d \varrho_1}{dy}\Big)^2 + \frac{1}{\Delta_2^2} \Big(\frac{d \varrho_2}{dy}\Big)^2 + \frac{1}{\Delta_3^2} \Big(\frac{d \varrho_3}{dy}\Big)^2 =1\\\\
\frac{1}{\Delta_1^2} \Big(\frac{d \varrho_1}{dz}\Big)^2 + \frac{1}{\Delta_2^2} \Big(\frac{d \varrho_2}{dz}\Big)^2 + \frac{1}{\Delta_3^2} \Big(\frac{d \varrho_3}{dz}\Big)^2 =1
\end{aligned} \qquad\right\}
\end{mynewequation}
\begin{mynewequation} \tag*{[5.32]}
\left.
\begin{aligned}
\frac{1}{\Delta_1^2} \frac{d \varrho_1}{dx} \frac{d \varrho_1}{dy} + \frac{1}{\Delta_2^2} \frac{d \varrho_2}{dx} \frac{d \varrho_2}{dy}+ \frac{1}{\Delta_3^2} \frac{d \varrho_3}{dx} \frac{d \varrho_3}{dy} =0\\\\
\frac{1}{\Delta_1^2} \frac{d \varrho_1}{dy} \frac{d \varrho_1}{dz} + \frac{1}{\Delta_2^2} \frac{d \varrho_2}{dy} \frac{d \varrho_2}{dz}+ \frac{1}{\Delta_3^2} \frac{d \varrho_3}{dy} \frac{d \varrho_3}{dz} =0\\\\
\frac{1}{\Delta_1^2} \frac{d \varrho_1}{dz} \frac{d \varrho_1}{dx} + \frac{1}{\Delta_2^2} \frac{d \varrho_2}{dz} \frac{d \varrho_2}{dx}+ \frac{1}{\Delta_3^2} \frac{d \varrho_3}{dz} \frac{d \varrho_3}{dx} =0
\end{aligned} \qquad \right\}
\end{mynewequation}
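[Editorial note: the orthogonality conditions above can be verified numerically for a concrete orthogonal system. The following sketch is not part of the original text; it uses the polar coordinates of [5.54] as the functions $\varrho_1, \varrho_2, \varrho_3$, finite-difference gradients, and an arbitrarily chosen sample point.]

```python
# Editorial check (not part of the original text): verify the
# orthogonality conditions [5.31] and [5.32] for the system
#   rho_1 = r,  rho_2 = theta,  rho_3 = phi   (cf. [5.54])
# at a sample point, using finite-difference gradients in x, y, z.
import math

def rhos(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(x / r)      # polar angle, as in [5.54]
    phi = math.atan2(z, y)        # azimuth around the x-axis
    return (r, theta, phi)

def grad(i, x, y, z, h=1e-6):
    """Central-difference gradient of rho_i with respect to x, y, z."""
    g = []
    for dx, dy, dz in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        g.append((rhos(x+dx, y+dy, z+dz)[i] - rhos(x-dx, y-dy, z-dz)[i]) / (2*h))
    return g

x, y, z = 0.7, -0.4, 1.1                      # arbitrary sample point
gs = [grad(i, x, y, z) for i in range(3)]
deltas = [math.sqrt(sum(c*c for c in g)) for g in gs]   # Delta_1..Delta_3
unit = [[c/d for c in g] for g, d in zip(gs, deltas)]   # unit normals

# [5.31]: for each axis, the squared direction cosines sum to 1.
for k in range(3):
    assert abs(sum(unit[i][k]**2 for i in range(3)) - 1) < 1e-6
# The three unit normals are mutually orthogonal (equivalent to [5.32]).
for i in range(3):
    for j in range(i+1, 3):
        assert abs(sum(unit[i][k]*unit[j][k] for k in range(3))) < 1e-6
print("orthogonality conditions [5.31], [5.32] hold")
```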
\indent \indent Now we have:
\begin{mynewequation} \tag*{[5.33]}
\begin{aligned}
\frac{\rm {d} \varrho_1}{\Delta_1} =\frac{1}{\Delta_1} \frac{d \varrho_1}{dx} {\rm {d}x} + \frac{1}{\Delta_1} \frac{d \varrho_1}{dy} {\rm {d}y} + \frac{1}{\Delta_1} \frac{d \varrho_1}{dz} {\rm {d}z} \\\\
\frac{\rm {d} \varrho_2}{\Delta_2} =\frac{1}{\Delta_2} \frac{d \varrho_2}{dx} {\rm {d}x} + \frac{1}{\Delta_2} \frac{d \varrho_2}{dy} {\rm {d}y} + \frac{1}{\Delta_2} \frac{d \varrho_2}{dz} {\rm {d}z} \\\\
\frac{\rm {d} \varrho_3}{\Delta_3} =\frac{1}{\Delta_3} \frac{d \varrho_3}{dx} {\rm {d}x} + \frac{1}{\Delta_3} \frac{d \varrho_3}{dy} {\rm {d}y} + \frac{1}{\Delta_3} \frac{d \varrho_3}{dz}{\rm {d}z}
\end{aligned}
\end{mynewequation}
\indent\indent
By squaring and adding these equations, using the given relations, one obtains\,:
\begin{mynewequation} \tag*{[5.34]}
\Big(\frac{\rm {d} \varrho_1}{\Delta_1}\Big)^2 + \Big(\frac{\rm {d} \varrho_2}{\Delta_2}\Big)^2 + \Big(\frac{\rm {d} \varrho_3}{\Delta_3}\Big)^2 \text{\small $={\rm d}x^2+{\rm d}y^2+{\rm d}z^2= {\rm d}s^2$}
\end{mynewequation}
The comparison of this expression with the general one yields\,:
\begin{mynewequation} \tag*{[5.35]}
N_1 = \frac{1}{\Delta_1^2},\,\,\,N_2 = \frac{1}{\Delta_2^2},\,\,\,N_3 = \frac{1}{\Delta_3^2},\,\,\,n_1=n_2=n_3=0
\end{mynewequation}
If one sets\,:
\begin{mynewequation} \tag*{[5.36]}
\begin{aligned}
N_1=\Big(\frac{dx}{d\varrho_1}\Big)^2 + \Big(\frac{dy}{d\varrho_1}\Big)^2 + \Big(\frac{dz}{d\varrho_1}\Big)^2=
\frac{1}{(\frac{d \varrho_1}{dx})^2 + (\frac{d \varrho_1}{dy})^2 + (\frac{d \varrho_1}{dz})^2 }\\
N_2=\Big(\frac{dx}{d\varrho_2}\Big)^2 + \Big(\frac{dy}{d\varrho_2}\Big)^2 + \Big(\frac{dz}{d\varrho_2}\Big)^2=
\frac{1}{(\frac{d \varrho_2}{dx})^2 + (\frac{d \varrho_2}{dy})^2 + (\frac{d \varrho_2}{dz})^2 }\\
N_3=\Big(\frac{dx}{d\varrho_3}\Big)^2 + \Big(\frac{dy}{d\varrho_3}\Big)^2 + \Big(\frac{dz}{d\varrho_3}\Big)^2=
\frac{1}{(\frac{d \varrho_3}{dx})^2 + (\frac{d \varrho_3}{dy})^2 + (\frac{d \varrho_3}{dz})^2 }
\end{aligned}
\end{mynewequation}
then the equation for the density becomes\,:
\begin{myequation} \tag*{(2), [5.37]}
\begin{vmatrix}
\frac{d\varrho_1}{d\varrho_1^0}\,\,\,\, &\frac{d\varrho_1}{d\varrho_2^0}\,\,\,\, &\frac{d\varrho_1}{d\varrho_3^0} \\\\
\frac{d\varrho_2}{d\varrho_1^0}\,\,\,\, &\frac{d\varrho_2}{d\varrho_2^0}\,\,\,\, &\frac{d\varrho_2}{d\varrho_3^0} \\\\
\frac{d\varrho_3}{d\varrho_1^0}\,\,\,\, &\frac{d\varrho_3}{d\varrho_2^0}\,\,\,\, &\frac{d\varrho_3}{d\varrho_3^0}
\end{vmatrix}
\cdot \frac{\varrho}{\varrho_0} =\sqrt {\frac{ N_1^0\, N_2^0\, N_3^0}{N_1\, N_2\, N_3}}
\end{myequation}
and the equation [``expression'' is here meant]
to be varied is\,:
\begin{mynewequation} \notag
\int {\rm d} t \int {\rm d} T \left \{ N_1 \left(\frac{d\varrho_1}{dt} \right)^2 + N_2 \left(\frac{d \varrho_2}{dt} \right)^2 + N_3 \left(\frac{d \varrho_3}{dt} \right)^2 + 2 \Omega \right \}
\end{mynewequation}
where ${\rm d}T$ indicates the new element of mass $\varrho_0\,{\rm d}a\,{\rm d}b\,{\rm d}c$ written in the new coordinates.\hspace{0.4cm}The part
of the variation of this integral that depends on $\delta\hspace{-0.03cm}\varrho_1$ is\,:
\begin{smallequation} \notag
\int {\rm d} t \int {\rm d} T \left \{ 2 N_1 \frac{d \varrho_1}{dt} \frac{d \delta\hspace{-0.03cm}\varrho_1}{dt} + \left( \frac{d \varrho_1}{dt} \right)^2 \frac{d N_1}{d \varrho_1} \delta\hspace{-0.03cm}\varrho_1 + \left( \frac{d \varrho_2}{dt} \right)^2 \frac{d N_2}{d \varrho_1} \delta\hspace{-0.03cm}\varrho_1 + \left( \frac{d \varrho_3}{dt} \right)^2 \frac{d N_3}{d \varrho_1} \delta\hspace{-0.03cm}\varrho_1 + 2 \frac{d \Omega}{d \varrho_1} \delta\hspace{-0.03cm}\varrho_1 \right \} \,.
\end{smallequation}
When the first term of this expression is integrated by parts with respect to $t$, all terms acquire the factor $\delta\hspace{-0.03cm}\varrho_1$.\hspace{0.4cm}Setting its coefficient to zero, one obtains\,:
\begin{myequation} \tag*{[5.38]}
2 \frac{d \Omega}{d\varrho_1} = 2 \frac{d \left(N_1 \frac{d \varrho_1}{dt}\right)} {dt} -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_1} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_1}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_1} \,,
\end{myequation}
similarly, one has\,:
\begin{equation*} \tag*{(3), [5.39]}
2\frac{d \Omega}{d\varrho_2} = 2 \frac{d \left(N_2 \frac{d \varrho_2}{dt}\right)} {dt} -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_2} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_2}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_2}
\end{equation*}
\begin{equation*} \tag*{[5.40]}
2\frac{d \Omega}{d\varrho_3} = 2 \frac{d \left(N_3 \frac{d \varrho_3}{dt}\right)} {dt} -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_3} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_3}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_3}
\end{equation*}
These are equations which are built analogously to (3) of \S\,4\,; in order for
these equations to take the second Eulerian form,
the so-called Lagrangian form, we multiply in turn the previous equations by\,:
$$
\frac{d \varrho_1}{d \varrho_1^0}, \,\,\,\,\,\, \frac{d \varrho_2}{d \varrho_1^0}, \,\,\,\,\,\, \frac{d \varrho_3}{d \varrho_1^0},
$$
and we add them; then we multiply by
$$
\frac{d \varrho_1}{d \varrho_2^0}, \,\,\,\,\,\, \frac{d \varrho_2}{d \varrho_2^0}, \,\,\,\,\,\, \frac{d \varrho_3}{d \varrho_2^0},
$$
and we add them as well; finally, we multiply by
$$
\frac{d \varrho_1}{d \varrho_3^0}, \,\,\,\,\,\, \frac{d \varrho_2}{d \varrho_3^0}, \,\,\,\,\,\, \frac{d \varrho_3}{d \varrho_3^0},
$$
and we add them too.\hspace{0.4cm}In this way, we get the following equations\,:
\begin{mynewequation} \tag* {(4), [5.41]}
\left.
\begin{aligned}
2 \frac{d \Omega}{d {\varrho_1}^0} = 2 \frac{d\left(N_1 \frac{d\varrho_1}{dt} \right)}{dt} \frac{d \varrho_1}{d \varrho_1^0} + 2 \frac{d\left(N_2 \frac{d\varrho_2}{dt} \right)}{dt} \frac{d \varrho_2}{d \varrho_1^0} + 2 \frac{d\left(N_3 \frac{d\varrho_3}{dt} \right)}{dt} \frac{d \varrho_3}{d \varrho_1^0} \\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_1^0} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_1^0}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_1^0} \\\\
2 \frac{d \Omega}{d {\varrho_2}^0} = 2 \frac{d\left(N_1 \frac{d\varrho_1}{dt} \right)}{dt} \frac{d \varrho_1}{d \varrho_2^0} + 2 \frac{d\left(N_2 \frac{d\varrho_2}{dt} \right)}{dt} \frac{d \varrho_2}{d \varrho_2^0} + 2 \frac{d\left(N_3 \frac{d\varrho_3}{dt} \right)}{dt} \frac{d \varrho_3}{d \varrho_2^0} \\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_2^0} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_2^0}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_2^0} \\\\
2 \frac{d \Omega}{d {\varrho_3}^0} = 2 \frac{d\left(N_1 \frac{d\varrho_1}{dt} \right)}{dt} \frac{d \varrho_1}{d \varrho_3^0} + 2 \frac{d\left(N_2 \frac{d\varrho_2}{dt} \right)}{dt} \frac{d \varrho_2}{d \varrho_3^0} + 2 \frac{d\left(N_3 \frac{d\varrho_3}{dt} \right)}{dt} \frac{d \varrho_3}{d \varrho_3^0} \\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, -\left( \frac{d \varrho_1}{dt} \right)^2 \frac{dN_1}{d \varrho_3^0} -\left( \frac{d \varrho_2}{dt} \right)^2 \frac{dN_2}{d \varrho_3^0}
-\left( \frac{d \varrho_3}{dt} \right)^2 \frac{dN_3}{d \varrho_3^0}
\end{aligned} \qquad \right\}
\end{mynewequation}
\indent\indent
A very elegant example of an orthogonal system is given by the elliptical coordinates $\varrho_1, \varrho_2,\varrho_3$, which can be defined as the roots of the equation in $\varepsilon$\,:
\begin{equation*} \tag*{[5.42]}
\frac{x^2}{\alpha ^2 - \varepsilon^2} + \frac{y^2}{\beta^2 - \varepsilon^2} + \frac{z^2}{\gamma^2 - \varepsilon^2} =1
\end{equation*}
and taken in such a way that one has\,:
\begin{equation*} \tag*{[5.43]}
\alpha > \varrho_1 > \beta > \varrho_2>\gamma >\varrho_3 >0
\end{equation*}
\noindent
From the identity obtained after partial fraction decomposition\,:
\begin{align}
\frac {\left( \varepsilon^2 - \varrho_1^2 \right) \left( \varepsilon^2 - \varrho_2^2 \right) \left( \varepsilon^2 - \varrho_3^2 \right)}
{\left( \varepsilon^2 - \alpha^2 \right) \left( \varepsilon^2 - \beta^2 \right) \left( \varepsilon^2 - \gamma^2\right)} = 1 - \frac{\left(\alpha^2 - \varrho_1^2 \right) \left(\alpha^2 - \varrho_2^2 \right) \left(\alpha^2 - \varrho_3^2 \right) }{ \left (\alpha^2 - \beta^2 \right)\left(\alpha^2 - \gamma^2 \right)} \cdot \frac{1}{\alpha^2 - \varepsilon^2} \hspace{2cm} \nonumber \\
- \frac{\left(\beta^2 - \varrho_1^2 \right) \left(\beta^2 - \varrho_2^2 \right) \left(\beta^2 - \varrho_3^2 \right) }{ \left (\beta^2 - \alpha^2 \right)\left(\beta^2 - \gamma^2 \right)} \frac{1}{\beta^2 - \varepsilon^2} - \frac{\left(\gamma^2 - \varrho_1^2 \right) \left(\gamma^2 - \varrho_2^2 \right) \left(\gamma^2 - \varrho_3^2 \right) }{ \left (\gamma^2 - \beta^2 \right)\left(\gamma^2 - \alpha^2 \right)} \frac{1}{\gamma^2 - \varepsilon^2} \,. \tag*{[5.44]}
\end{align}
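[Editorial note: the partial-fraction identity [5.44] can be checked numerically; the following sketch is not part of the original text, and the sample values of $\alpha, \beta, \gamma, \varrho_1, \varrho_2, \varrho_3$, chosen to satisfy [5.43], are arbitrary.]

```python
# Editorial check (not part of the original text): verify the
# partial-fraction identity [5.44] at several sample arguments.
a2, b2, c2 = 4.0**2, 2.5**2, 1.5**2      # alpha^2 > beta^2 > gamma^2
r1, r2, r3 = 3.0**2, 2.0**2, 1.0**2      # rho_i^2, interlaced as in [5.43]

def lhs(e2):
    """Left-hand side of [5.44], as a function of epsilon^2."""
    return ((e2-r1)*(e2-r2)*(e2-r3)) / ((e2-a2)*(e2-b2)*(e2-c2))

def rhs(e2):
    """Right-hand side of [5.44]."""
    t1 = (a2-r1)*(a2-r2)*(a2-r3) / ((a2-b2)*(a2-c2)) / (a2-e2)
    t2 = (b2-r1)*(b2-r2)*(b2-r3) / ((b2-a2)*(b2-c2)) / (b2-e2)
    t3 = (c2-r1)*(c2-r2)*(c2-r3) / ((c2-b2)*(c2-a2)) / (c2-e2)
    return 1 - t1 - t2 - t3

# Check at arguments away from the poles alpha^2, beta^2, gamma^2.
for e2 in (0.3, 5.0, 7.7, 20.0):
    assert abs(lhs(e2) - rhs(e2)) < 1e-9
print("identity [5.44] verified")
```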
When $\varepsilon$ is put equal to one of the roots
$\varrho_1,\varrho_2,\varrho_3$ of the equation\,:
\begin{equation*} \tag*{[5.45]}
\frac{x^2}{\alpha^2 - \varepsilon^2} + \frac{y^2}{\beta^2 - \varepsilon^2} + \frac{z^2}{\gamma^2 -\varepsilon^2}=1
\end{equation*}
one obtains\,:
\begin{align}
\frac{\left(\alpha^2 - \varrho_1^2 \right) \left(\alpha^2 - \varrho_2^2 \right) \left(\alpha^2 - \varrho_3^2 \right) }{ \left (\alpha^2 - \beta^2 \right)\left(\alpha^2 - \gamma^2 \right)} \frac{1}{\alpha^2 - \varepsilon^2} + \frac{\left(\beta^2 - \varrho_1^2 \right) \left(\beta^2 - \varrho_2^2 \right) \left(\beta^2 - \varrho_3^2 \right) }{ \left (\beta^2 - \alpha^2 \right)\left(\beta^2 - \gamma^2 \right)} \frac{1}{\beta^2 - \varepsilon^2} \nonumber \\ + \frac{\left(\gamma^2 - \varrho_1^2 \right) \left(\gamma^2 - \varrho_2^2 \right) \left(\gamma^2 - \varrho_3^2 \right) }{ \left (\gamma^2 - \beta^2 \right)\left(\gamma^2 - \alpha^2 \right)} \frac{1}{\gamma^2 - \varepsilon^2} =1 \tag*{[5.46]}
\end{align}
which compared with the previous equations gives the relations\,:
\begin{align} \tag*{[5.47]}
\begin{aligned}
x^2 = \frac {\left( \alpha^2 - \varrho_1^2 \right) \left( \alpha^2 - \varrho_2^2 \right) \left( \alpha^2 - \varrho_3^2 \right)}
{\left( \alpha^2 - \beta^2 \right) \left( \alpha^2 - \gamma^2 \right)} \\ \\
y^2 = \frac {\left( \beta^2 - \varrho_1^2 \right) \left( \beta^2 - \varrho_2^2 \right) \left( \beta^2 - \varrho_3^2 \right)}
{\left( \beta^2 - \alpha^2 \right) \left( \beta^2 - \gamma^2 \right)} \\ \\
z^2 = \frac {\left( \gamma^2 - \varrho_1^2 \right) \left( \gamma^2 - \varrho_2^2 \right) \left( \gamma^2 - \varrho_3^2 \right)}
{\left( \gamma^2 - \alpha^2 \right) \left( \gamma^2 - \beta^2 \right)}
\end{aligned}
\end{align}
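[Editorial note: a numerical sketch, not part of the original text, confirming that with $x^2, y^2, z^2$ taken from [5.47], each of $\varrho_1, \varrho_2, \varrho_3$ is indeed a root of the defining equation [5.42]; the sample values satisfying [5.43] are arbitrary.]

```python
# Editorial check (not part of the original text): with x^2, y^2, z^2
# defined by [5.47], the equation [5.42] holds at eps = rho_1, rho_2, rho_3.
a2, b2, c2 = 4.0**2, 2.5**2, 1.5**2      # alpha^2 > beta^2 > gamma^2
r1, r2, r3 = 3.0**2, 2.0**2, 1.0**2      # rho_i^2, ordered as in [5.43]

# The relations [5.47]:
x2 = (a2-r1)*(a2-r2)*(a2-r3) / ((a2-b2)*(a2-c2))
y2 = (b2-r1)*(b2-r2)*(b2-r3) / ((b2-a2)*(b2-c2))
z2 = (c2-r1)*(c2-r2)*(c2-r3) / ((c2-a2)*(c2-b2))
assert x2 > 0 and y2 > 0 and z2 > 0      # real coordinates

# [5.42] evaluated at each root:
for e2 in (r1, r2, r3):
    s = x2/(a2-e2) + y2/(b2-e2) + z2/(c2-e2)
    assert abs(s - 1) < 1e-9
print("[5.47] satisfies [5.42] at the three roots")
```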
so that, if $\varepsilon$ is an arbitrary variable, one has\,:
\begin{equation*} \tag*{[5.48]}
\frac {\left( \varepsilon^2 - \varrho_1^2 \right) \left( \varepsilon^2 - \varrho_2^2 \right) \left( \varepsilon^2 - \varrho_3^2 \right)}
{\left( \varepsilon^2 - \alpha^2 \right) \left( \varepsilon^2 - \beta^2 \right) \left( \varepsilon^2 - \gamma^2\right)} = 1 - \frac{x^2}{\alpha^2 - \varepsilon^2} - \frac{y^2}{\beta^2 - \varepsilon^2} - \frac{z^2}{\gamma^2 -\varepsilon^2}
\end{equation*}
Differentiating this equation with respect to $\varepsilon$ and then setting $\varepsilon=\varrho_1, \varrho_2, \varrho_3$, one finds\,:
\begin{mynewequation} \tag*{[5.49]}
\begin{aligned}
- \frac{\left(\varrho_1^2 - \varrho_2^2 \right) \left(\varrho_1^2 - \varrho_3^2 \right)}{ \left (\varrho_1^2 - \alpha^2 \right)\left(\varrho_1^2 - \beta^2 \right)\left(\varrho_1^2 - \gamma^2 \right)} =\left( \frac{x}{\alpha^2 - \varrho_1^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_1^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_1^2}\right)^2 \\
- \frac{\left(\varrho_2^2 - \varrho_1^2 \right) \left(\varrho_2^2 - \varrho_3^2 \right)}{ \left (\varrho_2^2 - \alpha^2 \right)\left(\varrho_2^2 - \beta^2 \right)\left(\varrho_2^2 - \gamma^2 \right)} =\left( \frac{x}{\alpha^2 - \varrho_2^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_2^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_2^2}\right)^2 \\
- \frac{\left(\varrho_3^2 - \varrho_1^2 \right) \left(\varrho_3^2 - \varrho_2^2 \right)}{ \left (\varrho_3^2 - \alpha^2 \right)\left(\varrho_3^2 - \beta^2 \right)\left(\varrho_3^2 - \gamma^2 \right)}
=\left( \frac{x}{\alpha^2 - \varrho_3^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_3^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_3^2}\right)^2 \\
\end{aligned}
\end{mynewequation}
Logarithmic differentiation of the equations by which $x^2,y^2, z^2$ are represented as functions of $\varrho_1^2,\varrho_2^2, \varrho_3^2$ gives\,:
\begin{mynewequation} \tag*{[5.50]}
\begin{aligned}
- \frac{dx}{d \varrho_1} = \frac{x \varrho_1}{\alpha^2 - \varrho_1^2}, \,\,\, - \frac{dx}{d \varrho_2} = \frac{x \varrho_2}{\alpha^2 - \varrho_2^2}, \,\,\, -\frac{dx}{d \varrho_3} = \frac{x \varrho_3}{\alpha^2 - \varrho_3^2}\\
- \frac{dy}{d \varrho_1} = \frac{y \varrho_1}{\beta^2 - \varrho_1^2}, \,\,\, - \frac{dy}{d \varrho_2} = \frac{y \varrho_2}{\beta^2 - \varrho_2^2}, \,\,\, -\frac{dy}{d \varrho_3} = \frac{y \varrho_3}{\beta^2 - \varrho_3^2}\\
- \frac{dz}{d \varrho_1} = \frac{z \varrho_1}{\gamma^2 - \varrho_1^2}, \,\,\, - \frac{dz}{d \varrho_2} = \frac{z \varrho_2}{\gamma^2 - \varrho_2^2}, \,\,\, -\frac{dz}{d \varrho_3} = \frac{z \varrho_3}{\gamma^2 - \varrho_3^2}
\end{aligned}
\end{mynewequation}
so that one has\,:
\begin{mynewequation} \tag*{[5.51]}
\begin{aligned}
N_1 = \varrho_1^2 \left\{ \left( \frac{x}{\alpha^2 - \varrho_1^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_1^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_1^2}\right)^2 \right \}\\
N_2 = \varrho_2^2 \left \{ \left( \frac{x}{\alpha^2 - \varrho_2^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_2^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_2^2}\right)^2 \right \}\\
N_3 = \varrho_3^2 \left \{ \left( \frac{x}{\alpha^2 - \varrho_3^2}\right)^2 + \left( \frac{y}{\beta^2 - \varrho_3^2}\right)^2 + \left( \frac{z}{\gamma^2 - \varrho_3^2}\right)^2 \right \}
\end{aligned}
\end{mynewequation}
From these relations, one can therefore write\,:
\begin{mynewequation} \tag*{[5.52]}
\begin{aligned}
N_1 = -\varrho_1^2 \frac{\left(\varrho_1^2 - \varrho_2^2 \right) \left(\varrho_1^2 - \varrho_3^2 \right)}{ \left (\varrho_1^2 - \alpha^2 \right)\left(\varrho_1^2 - \beta^2 \right)\left(\varrho_1^2 - \gamma^2 \right)}\\\\
N_2 = -\varrho_2^2 \frac{\left(\varrho_2^2 - \varrho_1^2 \right) \left(\varrho_2^2 - \varrho_3^2 \right)}{ \left (\varrho_2^2 - \alpha^2 \right)\left(\varrho_2^2 - \beta^2 \right)\left(\varrho_2^2 - \gamma^2 \right)}\\\\
N_3 = - \varrho_3^2 \frac{\left(\varrho_3^2 - \varrho_1^2 \right) \left(\varrho_3^2 - \varrho_2^2 \right)}{ \left (\varrho_3^2 - \alpha^2 \right)\left(\varrho_3^2 - \beta^2 \right)\left(\varrho_3^2 - \gamma^2 \right)}
\end{aligned}
\end{mynewequation}
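[Editorial note: the following numerical sketch, not part of the original text, confirms that the two expressions [5.51] and [5.52] for $N_1, N_2, N_3$ agree, as they must by [5.49]; the sample values are the same arbitrary ones as above.]

```python
# Editorial check (not part of the original text): [5.51] and [5.52]
# give the same values of N_1, N_2, N_3.
a2, b2, c2 = 4.0**2, 2.5**2, 1.5**2      # alpha^2, beta^2, gamma^2
rs = (3.0**2, 2.0**2, 1.0**2)            # rho_1^2, rho_2^2, rho_3^2
r1, r2, r3 = rs

# x^2, y^2, z^2 from [5.47]:
x2 = (a2-r1)*(a2-r2)*(a2-r3) / ((a2-b2)*(a2-c2))
y2 = (b2-r1)*(b2-r2)*(b2-r3) / ((b2-a2)*(b2-c2))
z2 = (c2-r1)*(c2-r2)*(c2-r3) / ((c2-a2)*(c2-b2))

for i, ri in enumerate(rs):
    others = [r for j, r in enumerate(rs) if j != i]
    # N_i according to [5.51] (note ri stands for rho_i^2):
    n_51 = ri * (x2/(a2-ri)**2 + y2/(b2-ri)**2 + z2/(c2-ri)**2)
    # N_i according to [5.52]:
    n_52 = -ri * (ri-others[0])*(ri-others[1]) / ((ri-a2)*(ri-b2)*(ri-c2))
    assert n_52 > 0                      # positive, by [5.53]
    assert abs(n_51 - n_52) < 1e-9 * abs(n_52)
print("N_i from [5.51] and [5.52] agree")
```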
\indent\indent
These three quantities, which are recognised as positive because of
\begin{mynewequation} \tag*{[5.53]}
\alpha > \varrho_1 > \beta > \varrho_2>\gamma >\varrho_3 >0
\end{mynewequation}
need only be substituted into equations~(4)
to obtain the hydrodynamical fundamental equations for elliptical coordinates.\\
\indent\indent
The polar coordinate system $r, \theta, \varphi$, which is determined by\,:
\begin{mynewequation} \tag*{[5.54]}
x = r \cos \theta, \, \, y = r \sin \theta \cos \varphi, \, \, z= r \sin \theta \sin \varphi
\end{mynewequation}
is orthogonal\,;\hspace{0.4cm}in fact one has\,:
\begin{mynewequation} \tag*{[5.55]}
{\rm d}s^2= {\rm d}x^2 + {\rm d} y^2 + {\rm d} z^2 = {\rm d} r^2 + r^2 {\rm d} \theta^2 + r^2 \sin^2 \theta \,{\rm d} \varphi^2
\end{mynewequation}
so that one has
\begin{mynewequation} \tag*{[5.56]}
N_1 = 1 , \,\,\,N_2 = r^2,\,\, N_3 = r^2 \sin^2 \theta .
\end{mynewequation}
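[Editorial note: a sketch, not part of the original text, verifying [5.55] and [5.56] numerically at an arbitrary sample point: the columns of the Jacobian of [5.54] are mutually orthogonal, with squared lengths $N_1 = 1$, $N_2 = r^2$, $N_3 = r^2 \sin^2\theta$.]

```python
# Editorial check (not part of the original text): in the polar system
# [5.54] the columns of d(x,y,z)/d(r,theta,phi) are orthogonal with
# squared lengths 1, r^2, r^2 sin^2 theta, confirming [5.55]/[5.56].
import math

r, th, ph = 1.3, 0.8, 2.1                 # arbitrary sample point
c, s = math.cos(th), math.sin(th)
cp, sp = math.cos(ph), math.sin(ph)

# Derivatives of (x, y, z) = (r cos th, r sin th cos ph, r sin th sin ph):
d_r  = (c,      s*cp,     s*sp)
d_th = (-r*s,   r*c*cp,   r*c*sp)
d_ph = (0.0,   -r*s*sp,   r*s*cp)

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

assert abs(dot(d_r, d_th)) < 1e-9          # orthogonality
assert abs(dot(d_r, d_ph)) < 1e-9
assert abs(dot(d_th, d_ph)) < 1e-9
assert abs(dot(d_r, d_r) - 1) < 1e-9       # N_1 = 1
assert abs(dot(d_th, d_th) - r*r) < 1e-9   # N_2 = r^2
assert abs(dot(d_ph, d_ph) - (r*s)**2) < 1e-9   # N_3 = r^2 sin^2 th
print("polar metric coefficients [5.56] confirmed")
```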
\noindent
The density equation (2) becomes\,:
\begin{myequation} \tag*{[5.57]}
\frac{\varrho}{\varrho_0} \begin{vmatrix}
\frac{dr}{dr_0}\,\,\,\,&\frac{d\theta}{dr_0}\,\,\,\,&\frac{d\varphi}{d r_0} \\\\
\frac{dr}{d\theta_0}\,\,\,\,&\frac{d \theta}{d\theta_0}\,\,\,\,&\frac{d \varphi}{d\theta_0} \\\\
\frac{dr}{d\varphi_0}\,\,\,\,&\frac{d \theta}{d\varphi_0}\,\,\,\,&\frac{d \varphi}{d\varphi_0}
\end{vmatrix}
= \frac{r_0^2 \sin \theta_0}{r^2 \sin \theta}
\end{myequation}
In this case, by setting
\begin{align} \tag*{[5.58]}
\begin{aligned}
\Phi_1&= \frac{d^2 r}{dt^2} - r \left( \frac{d \theta}{dt}\right)^2 - r \sin ^2 \theta \left(\frac{d \varphi}{dt} \right )^2 \\
\Phi_2&= \frac{d \left( r^2 \frac{ d\theta}{dt}\right)}{dt} - \left( \frac{d \varphi}{dt} \right )^2 r^2 \sin \theta \cos \theta \, \, \, \,\, \\
\Phi_3 &= \frac{d \left(r^2 \sin^2 \theta \frac{d \varphi}{dt}\right)}{dt}
\end{aligned}
\end{align}
equations~(3) take the very simple form\,:
\begin{equation*} \tag*{[5.59]}
\frac{d \Omega}{dr} = \Phi_1, \,\,\, \frac{d \Omega}{d \theta} = \Phi_2, \,\,\,\frac{d \Omega}{d \varphi} = \Phi_3
\end{equation*}
and equations~(4) become\,:
\begin{mynewequation} \tag*{[5.60]}
\begin{aligned}
\Phi_1 \frac{dr}{dr_0} + \Phi_2 \frac{d \theta}{d r_0} + \Phi_3 \frac{d \varphi}{dr_0} - \frac{d \Omega}{d r_0}\,\,\,=0\\
\Phi_1\,\, \frac{dr}{d\theta_0} + \Phi_2 \frac{d \theta}{d\theta_0} + \Phi_3 \frac{d \varphi}{d\theta_0} - \frac{d \Omega}{d \theta_0}\hspace{0.09cm}=0\\
\Phi_1 \frac{dr}{d \varphi_0} + \Phi_2 \frac{d \theta}{d \varphi_0} + \Phi_3 \frac{d \varphi}{d\varphi_0} -
\frac{d \Omega}{d \varphi_0}=0\\
\end{aligned}
\end{mynewequation}
\indent\indent
The same transformation can also be performed directly in the following way.\hspace{0.4cm}One observes that\,:
\begin{smallequation} \tag*{[5.61]}
\begin{aligned}
\!\!\frac{d^2 x}{dt^2} &= \cos \theta \frac{d^2 r }{d t^2} - r \sin \theta \frac{d^2 \theta}{d t^2} - r \cos \theta \left(\frac{d \theta}{dt} \right )^2 - 2 \sin \theta \frac{dr}{dt } \frac{d \theta}{dt}\,\,\hspace{5.6cm} \\ \\
\!\!\frac{d^2 y}{dt^2} &= \sin \theta \cos \varphi \frac{d^2 r}{d t^2} + r \cos \theta \cos \varphi \frac{d^2 \theta}{d t^2} - r \sin \theta \sin \varphi \frac{d^2 \varphi}{d t^2} - r \sin \theta \cos \varphi \left(\frac{d \theta}{dt} \right )^2 \hspace{2.5cm}\\
&-r \sin \theta \cos \varphi \left(\frac{d \varphi}{dt} \right )^2 + 2 \cos \theta \cos \varphi \frac{dr}{dt } \frac{d \theta}{dt} - 2 \sin \theta \sin \varphi \frac{dr}{dt } \frac{d \varphi}{dt} - 2 r \cos \theta \sin \varphi \frac{d \theta}{dt } \frac{d \varphi}{dt}\hspace{1.5cm} \\ \\
\!\!\frac{d^2 z}{dt^2} &=\sin \theta \sin \varphi \frac{d^2 r }{d t^2} + r \cos \theta \sin \varphi \frac{d^2 \theta }{d t^2} + r \sin \theta \cos \varphi \frac{d^2 \varphi}{d t^2} - r \sin \theta \sin \varphi \left(\frac{d \theta}{dt} \right )^2\,\hspace{2.7cm} \\
&- r \sin \theta \sin \varphi \left(\frac{d \varphi}{dt} \right )^2 + 2 \cos \theta \sin \varphi \frac{dr}{dt } \frac{d \theta}{dt} + 2 \sin \theta \cos \varphi \frac{dr}{dt } \frac{d \varphi}{dt} + 2 r \cos \theta \cos \varphi \frac{d \theta}{dt } \frac{d \varphi}{dt} \hspace{1.3cm}
\end{aligned}
\end{smallequation}
Furthermore one has\,:
\begin{align} \tag*{[5.62]}
\begin{aligned}
\frac{dx}{da} &= \cos \theta \frac{dr}{da} - r \sin \theta \frac{d \theta}{da}\hspace{4.2cm} \\ \\
\frac{dy}{da} &= \sin \theta \cos \varphi \frac{dr}{da } + r \cos \theta \cos \varphi \frac{d \theta}{da } - r \sin \theta \sin \varphi \frac{d \varphi}{da } \\ \\
\frac{dz}{da} &= \sin \theta \sin \varphi \frac{dr}{da } + r \cos \theta \sin \varphi \frac{d \theta}{da } + r \sin \theta \cos \varphi \frac{d \varphi}{da }\,;
\end{aligned}
\end{align}
then the equation\,:
\begin{equation*} \tag*{[5.63]}
\frac{d^2 x}{dt^2} \frac{dx}{da} + \frac{d^2 y}{dt^2} \frac{dy}{da} + \frac{d^2 z}{dt^2} \frac{dz}{da} - \frac{d \Omega}{da}=0
\end{equation*}
becomes\,:
\begin{equation*} \tag*{[5.64]}
\Phi_1 \frac{dr}{da} + \Phi_2 \frac{d \theta}{da} + \Phi_3 \frac{d \varphi}{da} - \frac{d \Omega}{da}=0
\end{equation*}
where $\Phi_1,\Phi_2,\Phi_3$ have the meaning given above.\hspace{0.4cm}In addition to this equation, there are two others\,:
\begin{equation*} \tag*{[5.65]}
\begin{aligned}
\Phi_1\frac{dr}{db} + \Phi_2 \frac{d \theta}{db} + \Phi_3 \frac{d \varphi}{db} - \frac{d \Omega}{db}=0\\\\
\Phi_1\frac{dr}{dc} + \Phi_2 \frac{d \theta}{dc} + \Phi_3 \frac{d \varphi}{dc} - \frac{d \Omega}{dc}=0
\end{aligned}
\end{equation*}
where $a, b, c$ depend on $r_0, \theta_0, \varphi_0$ through the relations\,:
\begin{equation*} \tag*{[5.66]}
a=r_0 \cos \theta_0,\,\, b= r_0 \sin \theta_0\cos \varphi_0,\,\, c=r_0 \sin \theta_0 \sin \varphi_0
\end{equation*}
The change of variables from $a,\,b,\,c$ to $r_0, \,\theta_0,\,\varphi_0$ in the
hydrodynamical equations can easily be carried out by multiplying the equations with the appropriate factors and adding them.
In this way, one arrives at the above formulae by a different route. \\
\indent\indent The transformation of the fundamental equations into cylindrical coordinates is extremely simple.\hspace{0.4cm}Namely,
if one sets\,:
\begin{equation*} \tag*{[5.67]}
x = r \cos \theta, \,\,y = r \sin \theta, \,\,z=z
\end{equation*}
then one has\,:
\begin{equation*} \tag*{[5.68]}
{\rm d}s^2= {\rm d}x^2 + {\rm d}y^2 + {\rm d}z^2= {\rm d}r^2 + r^2 {\rm d} \theta^2 + {\rm d}z^2
\end{equation*}
so that
\begin{equation*} \tag*{[5.69]}
N_1 = 1,\,\, N_2= r^2,\,\, N_3=1
\end{equation*}
hence\,:
\begin{equation*} \tag*{[5.70]}
\begin{aligned}
\frac{d^2 r}{dt^2} - r \left(\frac{d \theta}{dt} \right)^2 &= \frac{d \Omega}{dr}\\
\frac{d \left( r^2 \frac{d \theta}{dt} \right) }{dt} &= \frac{d \Omega}{d \theta}\\
\frac{d^2 z}{dt^2} &= \frac{d \Omega}{dz}
\end{aligned}
\end{equation*}
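[Editorial note: the cylindrical equations above can be tested on a motion known in closed form. The following sketch is not part of the original text: it takes a single free particle under gravity, with the hypothetical choice $\Omega = -gz$, whose straight-line Cartesian motion must satisfy the three equations [5.70] when expressed in $r, \theta, z$; derivatives are taken by finite differences.]

```python
# Editorial check (not part of the original text): straight-line motion
# x = x0 + u t, y = y0 + v t, z = z0 + w t - g t^2/2 (Omega = -g z),
# rewritten in cylindrical coordinates, satisfies equations [5.70].
import math

g = 9.81
x0, y0, z0, u, v, w0 = 1.0, -0.5, 2.0, 0.3, 0.8, -0.2   # arbitrary data

def cyl(t):
    x, y = x0 + u*t, y0 + v*t
    return (math.hypot(x, y), math.atan2(y, x), z0 + w0*t - 0.5*g*t*t)

def d(f, t, h=1e-4):   # first central difference
    return (f(t+h) - f(t-h)) / (2*h)

def d2(f, t, h=1e-4):  # second central difference
    return (f(t+h) - 2*f(t) + f(t-h)) / (h*h)

t = 0.7
r   = cyl(t)[0]
rdd = d2(lambda s: cyl(s)[0], t)
thd = d(lambda s: cyl(s)[1], t)
# d(r^2 dtheta/dt)/dt, with the inner derivative also by differences:
ang = d(lambda s: cyl(s)[0]**2 * d(lambda q: cyl(q)[1], s), t)
zdd = d2(lambda s: cyl(s)[2], t)

assert abs(rdd - r*thd**2) < 1e-5   # d Omega / dr = 0
assert abs(ang) < 1e-3              # d Omega / d theta = 0
assert abs(zdd - (-g)) < 1e-6       # d Omega / dz = -g
print("cylindrical equations [5.70] verified for straight-line motion")
```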
\indent\indent
If $r,\, \theta,\, z$ are to be expressed as functions of the initial values $r_0, \,\, \theta_0, \,\, z_0$, one obtains the equations\,:
\begin{equation*} \tag*{[5.71]}
\begin{aligned}
\left( \frac {d^2 r}{dt^2} - r \left( \frac{d \theta}{dt} \right)^2 \right) \frac{dr}{dr_0} + \frac{d \left(r^2 \frac{d \theta}{dt}\right)}{dt} \frac{d \theta}{d r_0} + \frac{d^2 z}{dt^2} \frac{dz}{d r_0} &= \frac{d \Omega}{dr_0}\\\\
\left( \frac {d^2 r}{dt^2} - r \left( \frac{d \theta}{dt} \right)^2 \right) \frac{dr}{d \theta_0} + \frac{d \left(r^2 \frac{d \theta}{dt}\right)}{dt} \frac{d \theta}{d \theta_0} + \frac{d^2 z}{dt^2} \frac{dz}{d \theta_0} &= \frac{d \Omega}{d \theta_0} \\\\
\left( \frac {d^2 r}{dt^2} - r \left( \frac{d \theta}{dt} \right)^2 \right) \frac{dr}{dz_0} + \frac{d \left(r^2 \frac{d \theta}{dt}\right)}{dt} \frac{d \theta}{d z_0} + \frac{d^2 z}{dt^2} \frac{dz}{d z_0} &= \frac{d \Omega}{dz_0}
\end{aligned}
\end{equation*}
The density equation becomes\,:
\begin{mynewequation}\tag*{[5.72]}
\left| \begin{aligned}
\frac{dr}{dr_0}\,\,\,\,\,\,&\frac{dr}{d\theta_0}&\frac{dr}{dz_0} \\\\
\frac{d\theta}{dr_0}\,\,\,\,\,&\frac{d\theta}{d\theta_0}&\frac{d\theta}{dz_0} \\\\
\frac{dz}{dr_0}\,\,\,\,\,&\frac{dz}{d\theta_0}&\frac{dz}{dz_0}
\end{aligned} \right| \cdot
\text{\Large $\frac{\varrho}{\varrho_0}$} =
\text{\Large $\frac{r_0}{r}$}
\end{mynewequation}
These equations become particularly advantageous if we take the initial conditions and the accelerating forces
to be symmetric with respect to the $z$ axis\,;
then we have
$\frac{d \Omega}{d \theta} = 0$,
therefore
$\frac{d \left( r^2 \frac{d \theta}{dt}\right)}{dt}=0,$ and thus\,:
\begin{equation*} \tag*{[5.73]}
\frac{d \theta}{dt}=\frac{H}{r^2},
\end{equation*}
where $H$ is a time-independent constant which always has the same value for a given particle, but varies from particle to particle and has to be determined from the initial conditions.\hspace{0.4cm}Since $\frac{d \theta}{dt}$ is the rotational velocity of a particle around the $z$ axis,
the rotational velocity of one and the same particle around the symmetry axis is inversely proportional to the
square of its distance from the axis.\hspace{0.4cm}We see from this that no particle that is initially rotating ceases to rotate under the influence of forces generated by a potential
and, conversely, no particle begins to rotate if
it is not initially in rotation.
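[Editorial note: the constancy of $H = r^2 \frac{d\theta}{dt}$ in [5.73] can be illustrated numerically. The following sketch is not part of the original text; the attractive potential with $\frac{d\Omega}{dx} = -x/r^3$, the initial data, and the Runge-Kutta step size are all arbitrary illustrative choices.]

```python
# Editorial check (not part of the original text): integrate a single
# particle in the plane z = 0 under an axially symmetric attraction
# and verify that r^2 dtheta/dt = x vy - y vx stays constant ([5.73]).

def accel(x, y):
    r3 = (x*x + y*y) ** 1.5
    return (-x / r3, -y / r3)            # attraction toward the axis

def rk4_step(state, h):
    """One classical Runge-Kutta step for (x, y, vx, vy)."""
    def f(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    k1 = f(state)
    k2 = f(tuple(s + 0.5*h*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5*h*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.1, 0.9)                 # initial position, velocity
H0 = state[0]*state[3] - state[1]*state[2]   # H = r^2 dtheta/dt
for _ in range(2000):
    state = rk4_step(state, 1e-3)
x, y, vx, vy = state
assert abs((x*vy - y*vx) - H0) < 1e-8        # H unchanged along the orbit
print("H = r^2 dtheta/dt is conserved")
```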
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}} \deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}
\hspace{0.4cm}This elegant theorem is due to Svanberg, who obtained it in cylindrical coordinates from the first Eulerian equations which, actually, are for this purpose slightly less convenient than the second Eulerian equations
\footnote{On fluides r\"orelse. Kongl. Vetenskaps-Academiens Handlingar f\"or {\aa}r 1839, S.\,139. Stockholm. Also cf: Sur le mouvement des fluides.\,\, Crelle's Journal Bd. 24, S.\,157, [\hyperlink{Svanberg}{1842}].}
\vspace{0.6cm}
\centerline{\fett{$\S.\,6.$}}
\vspace{0.3cm}
\indent\indent
Using the $\Omega$ function introduced above (cf. S.\,18), the hydrodynamical fundamental equations can be written as\,:
\begin{equation*} \tag*{[6.1]}
\begin{aligned}
\frac{d u}{d t}\frac{dx}{da} + \frac{d v}{d t} \frac{dy}{da}+\frac{dw }{d t} \frac{dz}{da} - \frac{d \Omega}{da} =0\\\\
\frac{d u}{d t}\frac{dx}{db} + \frac{d v}{d t} \frac{dy}{db}+\frac{dw }{d t} \frac{dz}{db} - \frac{d \Omega}{db} =0\\\\
\frac{d u}{d t}\frac{dx}{dc} + \frac{d v}{d t} \frac{dy}{dc}+\frac{dw }{d t} \frac{dz}{dc} - \frac{d \Omega}{dc} =0
\end{aligned}
\end{equation*}
\indent\indent
From these equations, one can easily eliminate the function $\Omega$ and obtain equations which represent all possible fluid motions under the influence of potential forces.\hspace{0.4cm}The elimination is easily carried out through differentiations with respect to $a,b,c$ and subtractions,
from which one obtains\,:
\begin{equation*} \tag*{[6.2]}
\begin{aligned}
\frac{d^2 u}{d t dc}\frac{dx}{db} - \frac{d^2 u}{d t db}\frac{dx}{dc} + \frac{d^2 v}{d t dc} \frac{dy}{db} -\frac{d^2 v}{d t db} \frac{dy}{dc} +\frac{d^2 w }{d t dc} \frac{dz}{db} - \frac{d^2 w }{d t db} \frac{dz}{dc} =0\\\\
\frac{d^2 u}{d t da}\frac{dx}{dc} - \frac{d^2 u}{d t dc}\frac{dx}{da} + \frac{d^2 v}{d t da} \frac{dy}{dc} -\frac{d^2 v}{d t dc} \frac{dy}{da} +\frac{d^2 w }{d t da} \frac{dz}{dc} - \frac{d^2 w }{d t dc} \frac{dz}{da} =0\\\\
\frac{d^2 u}{d t db}\frac{dx}{da} - \frac{d^2 u}{d t da}\frac{dx}{db} + \frac{d^2 v}{d t db} \frac{dy}{da} -\frac{d^2 v}{d t da} \frac{dy}{db} +\frac{d^2 w }{d t db} \frac{dz}{da} - \frac{d^2 w }{d t da} \frac{dz}{db} =0
\end{aligned}
\end{equation*}
One can readily integrate these equations with respect to time
by writing each of the three differences in these equations
as an exact time derivative.\hspace{0.4cm} Denoting the time-independent integration constants as $2A,2B,2C$, one finds\,:
\begin{equation*}\tag*{(1), [6.3]}
\left.
\begin{aligned}
\frac{du}{d c}\frac{dx}{db} - \frac{du}{d b}\frac{dx}{dc} + \frac{dv}{d c}\frac{dy}{db} - \frac{dv}{d b}\frac{dy}{dc} + \frac{dw}{d c}\frac{dz}{db} - \frac{dw}{d b}\frac{dz}{dc} =2A\\\\
\frac{du}{d a}\frac{dx}{dc} - \frac{du}{d c}\frac{dx}{da} + \frac{dv}{d a}\frac{dy}{dc} - \frac{dv}{d c}\frac{dy}{da} + \frac{dw}{d a}\frac{dz}{dc} - \frac{dw}{d c}\frac{dz}{da} =2B\\\\
\frac{du}{d b}\frac{dx}{da} - \frac{du}{d a}\frac{dx}{db} + \frac{dv}{d b}\frac{dy}{da} - \frac{dv}{d a}\frac{dy}{db} + \frac{dw}{d b}\frac{dz}{da} - \frac{dw}{d a}\frac{dz}{db} =2C
\end{aligned} \qquad \right\}
\end{equation*}
The left sides of these integral equations
can be seen as the difference of two differential quotients [a curl is meant].
\hspace{0.4cm}Namely, defining\,: \\
\begin{equation*} \tag*{(2), [6.4]}
\left.
\begin{aligned}
\alpha = u \frac{dx}{da} + v \frac{dy}{da} + w \frac{dz}{da}\\\\
\beta = u \frac{dx}{db} + v \frac{dy}{db} + w \frac{dz}{db}\\\\
\gamma = u \frac{dx}{dc} + v \frac{dy}{dc} + w \frac{dz}{dc}
\end{aligned} \qquad \right\}
\end{equation*}
one obtains, instead of $(1)$\,:
\begin{equation*} \tag*{(3), [6.5]} \label{pointer2}
\frac{d \beta}{dc} - \frac{d \gamma}{db} = 2A,\,\,\, \frac{d \gamma}{da} - \frac{d \alpha}{dc} = 2B, \,\,\,\frac{d \alpha}{db} - \frac{d \beta}{da} = 2C
\end{equation*}
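That the second-derivative terms cancel in this integration, so that $(3)$ indeed reproduces the left-hand sides of $(1)$, can be confirmed symbolically. The following sketch (Python with sympy, using an arbitrary hypothetical flow map not taken from the text) checks the first of the three relations:

```python
import sympy as sp

a, b, c, t = sp.symbols('a b c t')
# Hypothetical smooth flow map, only to exercise the identity:
x = a + t*sp.sin(b*c); y = b + t*a*c; z = c + t*sp.exp(a*b)
u = sp.diff(x, t); v = sp.diff(y, t); w = sp.diff(z, t)

# Definitions (2), [6.4]
beta  = u*sp.diff(x, b) + v*sp.diff(y, b) + w*sp.diff(z, b)
gamma = u*sp.diff(x, c) + v*sp.diff(y, c) + w*sp.diff(z, c)

# Left-hand side of the first equation of (1), [6.3]
lhs = (sp.diff(u, c)*sp.diff(x, b) - sp.diff(u, b)*sp.diff(x, c)
     + sp.diff(v, c)*sp.diff(y, b) - sp.diff(v, b)*sp.diff(y, c)
     + sp.diff(w, c)*sp.diff(z, b) - sp.diff(w, b)*sp.diff(z, c))

# Claim of (3), [6.5]: d(beta)/dc - d(gamma)/db equals that left-hand side,
# the mixed second derivatives of x, y, z cancelling pairwise.
assert sp.simplify(sp.diff(beta, c) - sp.diff(gamma, b) - lhs) == 0
```

The other two relations follow by cyclic permutation of $a,b,c$.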
These interesting relations are the analogues of the equations that Cauchy\footnote{%
In an essay prized by the Paris Academy: M\'emoire sur la th\'eorie de la propagation des ondes \`a la surface d'un fluide pesant d'une profondeur infinie (M\'em. sav. \'etran.\ Bd.\ 1) [\hyperlink{Cauchy1}{1827}].}
found already in 1816 [actually, already in 1815] for the first Eulerian dependence.\hspace{0.4cm}They attained their actual importance
only when Helmholtz\footnote{%
Ueber Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen, Crelle's Journal, Bd. 55, S.105, [\hyperlink{Helmholtz1}{1858}]. [\textit {English translation}: On Integrals of the hydrodynamic equations that correspond to vortex motions.
Philos. Mag. {\bf 4}, Vol. 33, 485--511, [\hyperlink{Helmholtz1}{1868}].}
realised their mechanical significance%
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[\hspace{0.01cm}T.\thefootnotemark]\enskip}%
\footnote{It appears likely that Helmholtz was not aware of Cauchy's equations but stressed the importance of vortex dynamics.} [of these equations], and thereby laid the foundations for a peculiar treatment of hydrodynamics.\hspace{0.4cm}It will be our next task to investigate this significance using a method adapted to the second Eulerian dependence; for this
we first need to develop an appropriate theorem.\\\\
\centerline{\fett{$\S.\,7.$}}\\[0.3cm]
\indent\indent
For this purpose
we start from the known theorem that if $\xi$ and $\eta$ are arbitrary
continuous functions of $x$ and $y$, one has the relation\,:
\begin{equation*} \tag*{[7.1]}
\int \left( \xi {\rm d}x + \eta {\rm d}y \right) =\iint \left( \frac{d \xi}{dy} - \frac{d \eta}{dx}\right) {\rm d}x \, {\rm d}y
\end{equation*}
where the double integral has to be extended over all elements of a domain on the $xy$ plane, and the simple integral is over the boundary of the domain suitably oriented.%
\deffootnotemark{\textsuperscript{[A.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[A.\thefootnotemark]\enskip}%
\footnote{Cf.\ B.\ Riemann, Lehrs\"atze aus der analysis situs f\"ur die Theorie der Integrale von zweigliedrigen vollst\"andigen Differentialien. Crelle's Journal Bd.\ 54, S.\ 105, [\hyperlink{Riemann1}{1857}].}\\
\indent\indent
One can generalise this theorem in the following way\,:
Let there be an arbitrary
closed curve in space and consider the path integral over all elements of this curve\,:
\begin{equation*} \notag
\int \left( \xi\, {\rm d}x + \eta\, {\rm d}y + \zeta\, {\rm d}z \right)
\end{equation*}
where $\xi,\,\eta,\,\zeta$ are arbitrary continuous functions of $x,y,z$.\hspace{0.4cm}Let us think of an arbitrary connected surface limited by this curve, so that one can consider on this whole surface $z$ as a function of $x$ and $y$ and set\,:
\begin{equation*} \tag*{[7.2]}
{\rm d}z = \frac{dz}{dx} {\rm d}x + \frac{dz}{dy} {\rm d}y,
\end{equation*}
from which the given integral transforms into\,:
\begin{equation*} \notag
\int \left\{ \left (\xi + \frac{dz}{dx} \zeta \right) {\rm d}x + \left (\eta + \frac{dz}{dy} \zeta \right) {\rm d}y \right \}
\end{equation*}
By the stated theorem for [the case of] two independent variables, this integral becomes\,:
\begin{equation*} \notag
\iint \left\{ \frac{ d\left(\xi + \frac{dz}{dx} \zeta \right)}{dy} - \frac{ d\left(\eta + \frac{dz}{dy} \zeta \right)}{dx}\right\} {\rm d}x \, {\rm d}y
\end{equation*}
Since $z$ is a function of $x$ and $y$, one has\,:
\begin{equation*} \tag*{[7.3]}
\begin{aligned}
\frac{ d\left(\xi + \frac{dz}{dx} \zeta \right)}{dy} &= \frac{d \xi}{dy} + \frac{d \xi}{dz} \frac{d z}{dy} + \frac{d \zeta}{dy} \frac{d z}{dx} + \frac{d \zeta}{dz} \frac{d z}{dx}\frac{d z}{dy} + \frac{d^2 z}{dx dy} \zeta\\
\frac{ d\left(\eta + \frac{dz}{dy} \zeta \right)}{dx} &= \frac{d \eta}{dx} + \frac{d \eta}{dz} \frac{d z}{dx} + \frac{d \zeta}{dx} \frac{d z}{dy} + \frac{d \zeta}{dz} \frac{d z}{dy}\frac{d z}{dx} + \frac{d^2 z}{dy dx} \zeta
\end{aligned}
\end{equation*}
and so one has the equation\,:
\begin{smallequation} \tag*{[7.4]}
\int \left( \xi {\rm d}x + \eta {\rm d}y + \zeta {\rm d}z \right) = \iint \left \{ \left(\frac{d \xi} {dy} - \frac{d \eta}{dx} \right) + \left(\frac{d \zeta} {dy} - \frac{d \eta}{dz} \right)\frac{dz}{dx} + \left( \frac{d \xi}{dz} - \frac{d \zeta}{dx}\right) \frac{dz}{dy} \right\}{\rm d}x {\rm d}y,
\end{smallequation}
where now the double integral has to be extended over all the elements of the surface
limited by the curve.\hspace{0.4cm}This otherwise arbitrary surface through the curve has only to satisfy the condition that the part bounded by the curve is not multiply connected and that the curve forms its complete boundary.\hspace{0.4cm}Let $\lambda,\mu, \nu$ be the angles of the normal drawn to the surface with the coordinate axes; one has\,:
\begin{equation*} \tag*{[7.5]}
\frac{dz}{dx} = - \frac{\cos \lambda}{\cos \nu}, \,\,\, \frac{dz}{dy} = - \frac{\cos \mu}{\cos \nu};
\end{equation*}
from which follows that one has
\begin{equation*}\tag*{(1), [7.6]}
\begin{aligned}
&\hspace{4cm}\int \left( \xi {\rm d}x + \eta {\rm d}y +\zeta {\rm d}z \right)=\\
&\int \left \{ \left(\frac{d \eta} {dz} - \frac{d \zeta}{dy} \right)\cos \lambda + \left(\frac{d \zeta} {dx} - \frac{d \xi}{dz} \right) \cos \mu + \left( \frac{d \xi}{dy} - \frac{d \eta}{dx}\right) \cos \nu \right\} {\rm d} \sigma
\end{aligned}
\end{equation*}
where $\displaystyle \frac{ {\rm d}x\,{\rm d}y}{\cos \nu} = {\rm d} \sigma$ denotes the surface element. \\
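Formula $(1)$ is, in modern terms, Stokes' theorem. The following numerical sketch (Python, with a hypothetical smooth field on the unit disk in the plane $z=0$; the standard counterclockwise orientation is used, which fixes the sign left open by the ``suitably oriented'' boundary) checks that the line integral matches the surface integral of the $\cos \nu$ term:

```python
import numpy as np

def trap(f, t):
    """Trapezoid rule along the last axis of f, with nodes t."""
    return np.sum((f[..., 1:] + f[..., :-1]) * np.diff(t) / 2.0, axis=-1)

# Hypothetical smooth field restricted to the plane z = 0 (zeta plays no role there):
xi  = lambda x, y: -y + x**2
eta = lambda x, y: x*y

# Line integral over the unit circle, traversed counterclockwise.
th = np.linspace(0.0, 2*np.pi, 20001)
x, y = np.cos(th), np.sin(th)
line = trap(xi(x, y)*(-np.sin(th)) + eta(x, y)*np.cos(th), th)

# Surface integral of d(eta)/dx - d(xi)/dy = y + 1 over the unit disk
# (the cos(nu) term of (1), with upward normal), in polar coordinates.
r = np.linspace(0.0, 1.0, 2001)
phi = np.linspace(0.0, 2*np.pi, 2001)
R, P = np.meshgrid(r, phi, indexing='ij')
curl_z = R*np.sin(P) + 1.0
surf = trap(trap(curl_z*R, phi), r)     # area element r dr dphi

assert abs(line - surf) < 1e-6
```

For this field both sides evaluate to $\pi$.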
\indent\indent
For our purposes, we can
bring
both of the above integrals into more convenient forms\,:\\\\
\indent\indent
If one determines three angles $\lambda', \, \, \mu', \,\,\nu'$ so that\,:
\begin{equation*} \tag*{[7.7]}
\cos \lambda': \cos \mu': \cos \nu'=\left(\frac{d \eta} {dz} - \frac{d \zeta}{dy} \right) : \left(\frac{d \zeta} {dx} - \frac{d \xi}{dz} \right) : \left( \frac{d \xi}{dy} - \frac{d \eta}{dx} \right)
\end{equation*}
and, at the same time, assumes that they are the angles of a definite direction with the coordinate axes, so that\,:
\begin{equation*} \tag*{[7.8]}
\cos^2 \lambda' + \cos^2 \mu' + \cos^2 \nu' =1
\end{equation*}
one finds\,:
\begin{equation*} \tag*{[7.9]}
2 \Delta \cos \lambda' =\frac{d \eta} {dz} - \frac{d \zeta}{dy},\,\,
2 \Delta \cos \mu' = \frac{d \zeta} {dx} - \frac{d \xi}{dz}, \,\,
2 \Delta \cos \nu' = \frac{d \xi}{dy} - \frac{d \eta}{dx}
\end{equation*}
where\,:
\begin{equation*} \tag*{[7.10]}
4 \Delta^2 = \left(\frac{d \eta} {dz} - \frac{d \zeta}{dy} \right) ^2 + \left(\frac{d \zeta} {dx} - \frac{d \xi}{dz} \right) ^2 + \left( \frac{d \xi}{dy} - \frac{d \eta}{dx} \right)^2
\end{equation*}
The integral above, whose element is ${\rm d}\sigma$, now goes into
\begin{equation*} \notag
2 \int \Delta \left(\cos \lambda \cos \lambda' + \cos \mu \cos \mu' + \cos \nu \cos \nu' \right) {\rm d}\sigma
\end{equation*}
and one obtains\,:
\begin{equation*} \tag*{(2), [7.11]}
\int \left\{\left( \frac{d \eta}{dz} - \frac{d \zeta}{dy} \right) \cos \lambda
+ \left( \frac{d \zeta}{dx} - \frac{d \xi}{dz} \right) \cos \mu + \left( \frac{d \xi}{dy} - \frac{d \eta}{dx} \right) \cos \nu \right \}{\rm d}\sigma = 2\int \Delta \cos \theta {\rm d}\sigma \,,
\end{equation*}
where $\theta$ is the angle between the directions determined by $\lambda, \mu, \nu$ and
$\lambda',\mu', \nu'$.\\
\indent\indent
Indicating by $\varrho, \sigma, \tau$ the angles of the coordinate axes with the tangent to the curve at the point $x,y,z$, one has\,:
\begin{equation*} \tag*{[7.12]}
{\rm d}x= \cos \varrho {\rm d}s, \,\,\, {\rm d}y = \cos \sigma {\rm d}s, \, \, \, {\rm d}z= \cos \tau {\rm d}s,
\end{equation*}
where ${\rm d}s$ indicates the arc element.\hspace{0.4cm}Let now $ \varrho', \sigma', \tau'$ be the angles of a
direction with the coordinate axes,
so that\,:
\begin{equation*} \tag*{[7.13]}
\cos^2 \varrho' + \cos^2 \sigma' + \cos^2 \tau' = 1,
\end{equation*}
and furthermore [require that]
\begin{equation*} \tag*{[7.14]}
\cos \varrho': \cos \sigma': \cos \tau'= \xi:\eta:\zeta,
\end{equation*}
so one has\,:
\begin{equation*} \tag*{[7.15]}
U^2 = \xi^2 + \eta^2 + \zeta^2
\end{equation*}
and
\begin{equation*} \tag*{[7.16]}
U \cos \varrho'=\xi,\,\, U \cos \sigma' = \eta, \,\,U \cos \tau' = \zeta.
\end{equation*}
These values of $\xi,\,\eta, \,\zeta$ and ${\rm d}x$, ${\rm d}y$, ${\rm d}z$ substituted into the simple integral, give\,:
\begin{equation*} \tag*{[7.17]}
\int \left( \xi {\rm d} x + \eta {\rm d}y + \zeta {\rm d}z\right)\, = \int U (\cos \varrho \cos \varrho' +\cos \sigma \cos \sigma' + \cos \tau \cos \tau' ) {\rm d}s
\end{equation*}
or
\begin{equation*}\tag*{(3), [7.18]}
\int\left( \xi\, {\rm d} x + \eta\, {\rm d} y + \zeta\, {\rm d}z \right)\, =\,\int U \cos \theta' {\rm d}s
\end{equation*}
where $\theta'$ indicates the angle between the directions determined by $\varrho,\sigma,\tau$ and $\varrho',\sigma',\tau'$. \\[1cm]
\centerline{\fett{$\S.\,8.$}}\\[0.3cm]
\indent\indent
After development of this lemma we come back to the task
described at the end of $\S\,6$\,:\\
\indent\indent
According to $(1)$ of $\S.\,7.$, one can write the equation\,:
\begin{smallequation} \tag*{[8.1]}
\int\left( \alpha {\rm d}a + \beta {\rm d}b + \gamma {\rm d}c \right)\, =\, \int \left \{ \left( \frac{d \beta}{dc} -
\frac {d \gamma}{db} \right)\cos \lambda + \left( \frac{d \gamma}{da} -
\frac {d \alpha}{dc} \right)\cos \mu + \left(\frac{d \alpha}{db} -
\frac {d \beta}{da} \right) \cos \nu \right \} {\rm d} \sigma_0
\end{smallequation}
where the first [l.h.s.] integral is over a closed curve, the second [r.h.s.] over an arbitrary simply-connected surface bounded by that curve, and where $\lambda, \mu, \nu$, are the angles of the normal to that surface with the coordinate axes,
${\rm d} \sigma_0$ is the surface element and $\alpha,\beta,\gamma $ are arbitrary functions of $ a,b,c $.\hspace{0.4cm}According to $(2)$ of the $\S\,7$, the second integral can be transformed, so that\,:
\begin{equation*} \tag*{(1), [8.2]}
\int\left( \alpha\,{\rm d}a + \beta\, {\rm d}b + \gamma \, {\rm d}c \right)\, =\, 2 \int \Delta_0 \cos \theta_0 {\rm d} \sigma_0 \,.
\end{equation*}
If one now takes $\alpha, \beta,\gamma $ to be the quantities defined in $(2)$ of $\S\,6$ and makes use of equations~(3) of $\S\,6$, one obtains\,:
\begin{equation*} \tag*{[8.3]}
\Delta_0 = \sqrt{A^2 + B^2 + C^2} \,.
\end{equation*}
As to $\theta_0$, it is the angle between the directions determined by
$\lambda, \mu, \nu$ and $\lambda', \mu', \nu'$, the latter being determined by\,:
\begin{equation*} \tag*{[8.4]}
\Delta_0 \cos \lambda' = A, \, \, \Delta_0 \cos \mu' = B, \, \, \Delta_0 \cos \nu' = C \,.
\end{equation*}
The integral on the right-hand side of
the preceding equation $(1)$ does not depend on time,
thus
one may choose an arbitrary value of $t$ on its left-hand side;
if one sets $t=0$,
then
$\alpha, \beta, \gamma $
go into the initial values of $u,\,v,\,w$
that we denote with $u_0, v_0, w_0$.
One then obtains\,:
\begin{equation*} \tag*{[8.5]}
\int \left ( u_0 {\rm d}a + v_0 {\rm d}b + w_0 {\rm d}c \right) = 2 \int \Delta_0 \cos \theta_0 {\rm d} \sigma_0
\end{equation*}
According to $(3)$ of the $\S.\,7.$, the first integral may be transformed and thus one finds\,:
\begin{equation*} \tag*{(2)\,, [8.6]}
\int U_0 \cos \theta'_0 {\rm d} s_0 = 2 \int \Delta_0 \cos \theta_0 {\rm d} \sigma_0
\end{equation*}
where
\begin{equation*} \tag*{[8.7]}
U_0 = \sqrt{u_0 ^2 + v_0^2 +w_0^2}
\end{equation*}
is the initial velocity, $\theta'_0$ is the angle between $U_0$ and the element ${\rm d}s_0$ of the closed curve.\\
\indent\indent
Let us assume that the given closed curve
is a circle with an infinitely small radius $r$,
and that the surface spanned by the curve is the disk of the circle. It is then clear that $\Delta_0 \cos \theta_0$ changes only by infinitely small quantities over the disk.\hspace{0.4cm}Hence we have $\int \Delta_0\, \cos \theta_0 \,{\rm d} \sigma_0 =\pi r^2 \cdot \Delta_0 \cos \theta_0$.\hspace{0.4cm}The component of the initial velocity with respect to the tangent to this circle is $U_0 \cos \theta_0'$, a quantity that takes different values at different points of the circumference.\hspace{0.4cm}However, $U_0 \cos \theta_0'$ at each point may be considered as the sum of two velocities $T_0 + T_0'$, where $T_0'$ is the progressive motion projected onto the tangent, which is common to all points of the circle, and $T_0$ is the tangential velocity due to the rotation around the center of the infinitely small circle.\hspace{0.4cm}The progressive motion of all particles is the same\,: as a consequence, its component $T'_0$ with respect to the tangent of the circle will be the same for two diametrically opposite particles, but of opposite sign, so that $\int T_0' {\rm d}s_0=0$ when one integrates over the whole circle.\hspace{0.4cm}The relative velocity of the particles will be the same up to infinitely small quantities, provided the velocities are continuous functions of position, which we always suppose.\hspace{0.4cm}It follows that $\int T_0 {\rm d}s_0=T_0 \cdot \int {\rm d}s_0=2\pi r \cdot T_0$.\hspace{0.4cm}From equation (2) it now follows that $2\pi r \cdot T_0= 2 \pi r^2 \cdot \Delta_0 \cos \theta_0$, or\,:
\begin{equation*} \tag*{[8.8]}
\Delta_0 \cos \theta_0= \frac{T_0}{r}
\end{equation*}
Since $T_0$ is the tangential velocity, $T_0 : r$ is the rotational velocity around the infinitely near center of the circle, or --- as one can say, in order to take into account also the location and orientation of the circle --- the rotational velocity around the normal to the surface of the circle taken as axis.
Since $\theta_0$ is the angle between the normal to the surface element and the direction determined by the angles $\lambda',\mu',\nu'$, then $\Delta_0$ in a point becomes the rotational velocity around an axis passing by this point and oriented in the direction $\lambda',\mu',\nu'$.\\
\indent\indent
Furthermore $A,\,\,B,\,\,C$ are the components of the rotational velocity of a particle $a,\,b,\,c$
around axes which are parallel to the coordinate axes through the point $a,\,b,\,c$.
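This interpretation of $T_0 : r$ can be illustrated numerically on a rigid rotation (a hypothetical flow, not taken from the text), for which the construction of $\S.\,8$ must return the angular velocity $\Omega$, i.e.\ half the curl of the velocity field:

```python
import numpy as np

# Rigid rotation about the z-axis with angular velocity Omega (hypothetical):
Omega = 0.7
u = lambda x, y: -Omega*y
v = lambda x, y:  Omega*x

# Mean tangential velocity T_0 on a small circle of radius r in the plane z = 0:
r = 1e-3
th = np.linspace(0.0, 2*np.pi, 4001)
x, y = r*np.cos(th), r*np.sin(th)
tx, ty = -np.sin(th), np.cos(th)        # unit tangent to the circle
T0 = np.mean(u(x, y)*tx + v(x, y)*ty)

# [8.8]: T_0 / r is the rotational velocity about the normal axis; here it
# should equal Omega, which is half of dv/dx - du/dy = 2*Omega.
assert abs(T0/r - Omega) < 1e-9
```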
\\[1cm]
\centerline{\fett{$\S\,9.$}}\\[0.3cm]
\indent\indent
Analogously to $(2)$ of $\S.\,8$, one has at each time $t$\,:
\begin{equation*} \tag*{(1), [9.1]}
\int U \cos \theta' {\rm d} s = 2 \int \Delta \cos \theta {\rm d}\sigma
\end{equation*}
where now $ U \cos \theta' $ is the component of the velocity of a particle $x, y, z$ with respect to the tangent to the closed curve, over whose element ${\rm d}s$ the first integral has to be extended, and where $\Delta \cos \theta$ is the component of the rotational velocity expressed with respect to the normal to the element ${\rm d} \sigma$ of the surface, which is bounded by that curve.\\
\indent\indent
According to $(3)$ of $\S.\,7$, one has
\begin{equation*} \tag*{[9.2]}
\int U \cos \theta' {\rm d}s = \int \left( u{\rm d}x + v {\rm d}y + w {\rm d}z \right)
\end{equation*}
but, since from $ (2) $ of $\S.\,6$ one easily infers that
\begin{equation*} \tag*{[9.3]}
\alpha {\rm d} a + \beta {\rm d} b + \gamma {\rm d} c = u {\rm d}x + v {\rm d}y + w {\rm d}z,
\end{equation*}
one has\,:
\begin{equation*} \tag*{[9.4]}
\int U \cos \theta' {\rm d}s = \int \left(\alpha {\rm d}a + \beta {\rm d}b + \gamma {\rm d}c \right)
\end{equation*}
The second integral is calculated according to $(1)$ of
$\S.\,8$\,:
\begin{equation*} \tag*{[9.5]}
\int U \cos \theta' {\rm d}s = 2 \int \Delta_0 \cos \theta_0 {\rm d} \sigma_0 \,.
\end{equation*}
Hence, because of equation $(1)$, one obtains\,:
\begin{equation*} \tag*{(2), [9.6]}
\int \Delta \cos \theta {\rm d}\sigma = \int \Delta_0 \cos \theta_0 {\rm d} \sigma_0
\end{equation*}
Herefrom it follows that $\int \Delta \cos \theta \,{\rm d}\sigma$ is constant with respect to time, provided the integral is always extended over a surface moving with the flow
and consisting of the same particles.
Following Helmholtz, if one designates by \emph{rotational intensity} the product of the rotational velocity around the normal to the surface [element] as an axis, times the size of the surface element, one obtains the following result\,:\\
\indent\indent
The integral of the rotational intensity over a surface always formed by the same particles remains unchanged in time.
\\
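As an illustration of this conservation law (a sketch with a hypothetical shear flow whose particle paths are known in closed form), the circulation around a curve carried by the flow can be checked numerically, in agreement with $(2)$:

```python
import numpy as np

# Simple shear flow u = (y, 0, 0) (hypothetical); its particles move as
# x(t) = x0 + y0*t, y(t) = y0, so a material circle can be tracked exactly.
th = np.linspace(0.0, 2*np.pi, 40001)
x0, y0 = np.cos(th), np.sin(th)

def circulation(t):
    """Circulation of u = (y, 0, 0) around the material curve at time t."""
    x, y = x0 + y0*t, y0                 # advected curve
    dx = np.diff(x)
    ym = (y[1:] + y[:-1]) / 2.0          # midpoint values of y along the curve
    # v = 0, so only the u*dx = y*dx term contributes to the line integral
    return float(np.sum(ym*dx))

# The circulation (equivalently, the surface integral of the rotational
# intensity over any surface spanning the material curve) is conserved.
assert abs(circulation(0.0) - circulation(1.0)) < 1e-6
```

For this flow the circulation equals $-\pi$ at all times: the curve deforms, but the enclosed vorticity flux does not change.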
\indent\indent
Since this theorem is valid,
irrespective of how small the surface may be, it is also valid for each single surface element\,: the rotational intensity of a surface element always stays
the same.\hspace{0.4cm}Because such a particle cannot spread out infinitely, its rotational velocity cannot decrease infinitely. It follows herefrom that no particle once put into rotational motion
can stop rotating\,; and on the other hand, one easily sees that no particle which at initial time is not rotating, may ever begin to rotate.\\
\indent\indent
One must remark that these results are obtained under the assumption that the accelerating forces acting on the fluid are partial derivatives of a potential function. If, however, the accelerating forces do not possess this property, these theorems do not apply.\hspace{0.4cm}Herefrom one obtains a criterion to know whether accelerating forces acting on a fluid, besides the pressure forces, have or do not have a potential. In the latter case, the rotational intensity of single particles is not conserved; in general new particles begin rotating, and rotating particles will lose this characteristic motion.\\
\indent\indent
So far we have considered only rotating surface elements; let us now take into account a mass element which is contained in a cylinder whose axis is the rotation axis. Its constant mass is the product of its transverse section, its length and its density.\hspace{0.4cm}Since the product of the rotational velocity by the transverse section is constant, one sees that for each element the ratio of its rotational velocity to the product of its length, measured in the direction of its rotation axis, by the density is constant.\label{pointer}\hspace{0.4cm}Therefore, if the fluid is liquid, i.e., if its density may be considered constant, the ratio of the rotational velocity to the length of the particle is constant.\\[1cm]
\centerline{\fett{$\S.\,10.$}}\\[0.3cm]
\indent\indent
In his theory of rotational motion,
Helmholtz introduced the following important principle\,: instead of considering the whole rotating mass, one should fragment it into \emph{vortex lines}. Here, a vortex line is a line lying in the flow so that its direction always stays parallel to the instantaneous rotation axis.
By \emph{vortex filament}, we understand the infinitely thin cylinder which, wrapped around the vortex line, includes the rotating particles.\\
\indent \indent
Denoting by ${\rm d}a, {\rm d}b, {\rm d}c$ [the three components of] an element of such a vortex line at time $t=0$, one obviously has\,:
\begin{equation*} \tag*{(1), [10.1]}
{\rm d}a : {\rm d}b :{\rm d}c = A\,:\,B\,:\,C
\end{equation*}
Let $\varphi$ and $\psi$ be functions of $a,\, b,\,c$, such that the vortex lines
at time $t=0$ are obtained by setting these two functions to constant values.
Then the following conditions must hold\,:
\begin{equation*} \tag*{[10.2]}
\begin{aligned}
\frac{d \varphi}{da} {\rm d}a + \frac{d \varphi}{db} {\rm d}b + \frac{{\rm d}\varphi}{dc} {\rm d}c = 0\\\\
\frac{d \psi}{da} {\rm d}a + \frac{d \psi}{db} {\rm d}b + \frac{d \psi}{dc} {\rm d}c = 0\\
\end{aligned}
\end{equation*}
or, according to $(1)$\,:
\begin{equation*} \tag*{[10.3]}
\begin{aligned}
\frac{d \varphi}{da} A + \frac{d \varphi}{db} B + \frac{d \varphi}{dc} C &= 0\\
\frac{d \psi}{da} A + \frac{d \psi}{db} B + \frac{d \psi}{dc} C &= 0 \,.\\
\end{aligned}
\end{equation*}
If $A,\,B,\,C$ were known, one could find $\varphi$ and $\psi$ from these equations by integration.
\hspace {0.4cm}One can easily observe that $\varphi, \,\psi$ must be such that, one may write\,:
\begin{equation*} \tag*{(2), [10.4]}
-2 A = \begin{vmatrix} \frac{d \varphi}{db} & \frac{d \varphi}{dc} \\
\frac{d \psi}{db} & \frac{d \psi}{dc} \end{vmatrix}, \quad
-2 B = \begin{vmatrix} \frac{d \varphi}{dc} & \frac{d \varphi}{da} \\
\frac{d \psi}{dc} & \frac{d \psi}{da} \end{vmatrix}, \quad
-2 C = \begin{vmatrix} \frac{d \varphi}{da} & \frac{d \varphi}{db} \\
\frac{d \psi}{da} & \frac{d \psi}{db} \end{vmatrix} \,.
\end{equation*}
Indeed, from the preceding equations it follows that $A, B, C$ are proportional to these determinants\,; moreover, by substitution of~(2) into~(3) of $\S.\,6$,
the following relation is identically satisfied\,:
\begin {equation*} \tag*{(3), [10.5]}
\frac{dA}{da} + \frac{dB}{db} +\frac{dC}{dc} =0 \,.
\end{equation*}
Thus, $\varphi,\, \psi$ are always determined by integration in such a way that equations~(2) are satisfied.\\
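That the determinant representation $(2)$ satisfies relation $(3)$ identically, for arbitrary functions $\varphi,\psi$, can be confirmed symbolically (a Python/sympy sketch; in modern language, the divergence of $\nabla\varphi\times\nabla\psi$ vanishes):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
phi = sp.Function('phi')(a, b, c)
psi = sp.Function('psi')(a, b, c)

# A, B, C from the determinant representation (2), [10.4]
A = -(sp.diff(phi, b)*sp.diff(psi, c) - sp.diff(phi, c)*sp.diff(psi, b))/2
B = -(sp.diff(phi, c)*sp.diff(psi, a) - sp.diff(phi, a)*sp.diff(psi, c))/2
C = -(sp.diff(phi, a)*sp.diff(psi, b) - sp.diff(phi, b)*sp.diff(psi, a))/2

# (3), [10.5]: dA/da + dB/db + dC/dc vanishes identically,
# the mixed second derivatives cancelling pairwise.
div = sp.diff(A, a) + sp.diff(B, b) + sp.diff(C, c)
assert sp.simplify(div) == 0
```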
\indent\indent
From these considerations it follows
that rotating particles may always be considered as arranged in vortex filaments. \hspace{0.4cm} When such a vortex filament moves along with the fluid, always the same particles will belong to it, as no particle belonging to the vortex filament may lose its rotational motion and, furthermore, each element parallel to the rotation axis of the vortex filament always remains parallel.
Namely, a surface element, which, at some time, is parallel to the rotation axis, will have a vanishing rotational intensity with respect to the direction of its normal; since the rotational intensity always stays the same, the normal to the surface element always stays perpendicular to the rotation axis, because the particles themselves always maintain their rotational motion around the axis of the vortex filament.\hspace{0.4cm}Therefore, one obtains the equations of the vortex lines at time $t$, if one expresses the values of $a,\,b,\,c$ in $\varphi$ and $\psi$ through $x,\,y,\,z,\,t$.\hspace{0.4cm}The equations of the vortex lines at time $t$ must be such
that $\displaystyle \frac{{\rm d} \varphi}{{\rm d}t} =0$ and $\displaystyle \frac{{\rm d}\psi}{{\rm d}t} =0$
are satisfied.
Here one has to differentiate with respect to $t$, whether it appears explicitly
or implicitly, through $x,\,y,\,z$.
\hspace{0.4cm}
In motion, the vortex filament will sometimes grow in density and size and sometimes decrease, namely in such a way that the length of a vortex-tube element multiplied by the density remains proportional to the rotational velocity.
\mbox{(Cf.\ S.\,40 [now page~\pageref{pointer}].)}\\
\indent\indent
The rotational intensity
in each section of this vortex filament will remain unchanged over time.\hspace{0.2cm} Moreover, it is also the same for all sections.
\hspace{0.4cm}Obviously we have $\int \Delta \cos \theta {{\rm d} \sigma}=0$ if the integral extends over a closed surface.\hspace{0.4cm}
Namely, let us think of a closed curve drawn on this surface. In order to extend $\int \Delta \cos \theta {{\rm d}\sigma}$ over the two portions of the surface
separated by the curve, one has to integrate $\int U \cos \theta' {\rm d}s$ twice
but in the opposite direction over this curve, so that one has $\int \Delta \cos \theta {{\rm d}\sigma}=0$.\hspace{0.4cm}
Let this closed surface be formed by two sections of a vortex filament and the part of the filament surface between them,
then $\int \Delta \cos \theta {{\rm d}\sigma}$
will disappear for the latter part, and so
will the sum of the integrals for both sections.
Indeed, since $\cos \theta$ is $+1$ in one section and $-1$ in the other, the rotational intensity with regard to the axis of the vortex filament is the same for each section. \\
\indent\indent
The results of the preceding investigations about these particular rotational motions are essentially due to Helmholtz, who, by restricting himself to liquid flows, derived them in a classical essay\footnote{Crelle's Journal, Bd.\ 55,
S.\,33 and ff
[\hyperlink{Helmholtz1}{1858}].} from the first Eulerian form of the equations.
\indent\indent
A special case of the theorem on the constancy of the rotational velocity has already been given by Svanberg\,:\\
\indent\indent
Namely, when all motions happen symmetrically around an axis, and circular vortex filaments centered on the symmetry axis are given in the flow, with radius $r$ and infinitely small section $\omega$,
such a vortex filament may be considered as a cylinder of height $2 \pi r$ and basis $\omega$ so that its content will be $ 2 \pi r \omega$.\hspace{0.4cm}Since the rotational motion of the particles is conserved, the content $2 \pi r \omega$ for each vortex filament must be constant in time.\hspace{0.4cm}Then $\Delta \omega r$ must be proportional to the rotational velocity $\Delta$.\hspace{0.4cm}Since $\Delta \omega$, as rotational intensity, is constant, so the rotational velocity $\Delta$ for each vortex ring at different times must be proportional to its radius.\\
\indent\indent
This is the meaning of the formulae derived by Svanberg from the first Eulerian form\footnote{Crelle's Journal Bd.\ 24. S.\ 159. Nro.\ 31 [\hyperlink{Svanberg}{1842}].} of the fundamental equations.\\[1cm]
\centerline{\fett{$\S \,11.$}}
\\[0.3cm]
\indent\indent
The concept of vortex lines gives rise to an interesting transformation of the hydrodynamical equations in their first form\,: namely, by introducing as dependent variables functions which stand in a certain relation to $u,v,w$.\\
\indent\indent From equations (2) of $\S \,10$ follows indeed\,:
\begin{equation*} \tag*{[11.1]}
\begin{aligned}
\frac{d\gamma}{db} - \frac{d\beta }{dc}= \frac{d\varphi}{db}\frac{d\psi}{dc} - \frac{d\varphi}{dc}\frac{d\psi}{db}\\\\
\frac{d\alpha}{dc} - \frac{d\gamma }{da}= \frac{d\varphi}{dc}\frac{d\psi}{da} - \frac{d\varphi}{da}\frac{d \psi}{dc}\\\\
\frac{d \beta}{da} - \frac{d\alpha }{db} = \frac{d\varphi}{da}\frac{d\psi}{db} - \frac{d\varphi}{db}\frac{d\psi}{da}
\end{aligned}
\end{equation*}
provided that $\varphi$ and $\psi$ are functions of $a,b,c$ which, when set equal to constants, represent the vortex lines.
Herefrom follows that one can represent $\alpha, \beta, \gamma$ in the
form\,:
\begin{equation*} \tag*{(1), [11.2]}
\alpha = \frac{dF}{d a} + \varphi \frac{d \psi}{d a}, \,\,\,\, \beta= \frac{dF}{d b} + \varphi \frac{d \psi}{d b}, \,\,\,\,\gamma = \frac{dF}{d c} + \varphi \frac{d \psi}{d c}
\end{equation*}
where in general $F$ will be function of $a,\,b,\,c,\,t$.\hspace{0.4cm}Herefrom one finds\,:
\begin{equation*} \tag*{[11.3]}
\begin{aligned}
\alpha \frac{da}{dx} + \beta \frac{db}{dx} + \gamma \frac{dc}{dx} = \frac{dF}{dx} + \varphi \frac{d \psi}{dx}\\\\
\alpha \frac{da}{dy} + \beta \frac{db}{dy} + \gamma \frac{dc}{dy} = \frac{dF}{dy} + \varphi \frac{d \psi}{dy}\\\\
\alpha \frac{da}{dz} + \beta \frac{db}{dz} + \gamma \frac{dc}{dz} = \frac{dF}{dz} + \varphi \frac{d \psi}{dz}
\end{aligned}
\end{equation*}
\hspace{0.6cm}By solving the system (2) of $\S\,6$ using the relations $(3)$ of $\S\,2$, one finds\,:
\begin{equation*} \tag*{[11.4]}
\begin{aligned}
\alpha \frac{da}{dx} + \beta \frac{db}{dx} + \gamma \frac{dc}{dx} &= u\\\\
\alpha \frac{da}{dy} + \beta \frac{db}{dy} + \gamma \frac{dc}{dy} &= v\\\\
\alpha \frac{da}{dz} + \beta \frac{db}{dz} + \gamma \frac{dc}{dz} &= w
\end{aligned}
\end{equation*}
so that one can always set\,:
\begin{equation*} \tag*{(2), [11.5]}
u = \frac{dF}{dx} + \varphi \frac{d \psi} {dx}, \,\,\, v= \frac{dF}{dy} + \varphi \frac{d \psi}{dy},\,\,\, w= \frac{dF}{dz} + \varphi \frac{d \psi}{dz}
\end{equation*}
where $\varphi= Const$ and $\psi=Const$ are the equations of the vortex lines.\hspace{0.4cm} If one replaces the quantities $a,\,b,\,c$ in $\varphi$ and $\psi$ in terms of
$x,y,z$ [at time] $t$, one obtains\;:
\begin{equation*} \tag*{[11.6]}
\frac{{\rm d} \varphi}{{\rm d}t} = 0, \qquad \frac{{\rm d} \psi}{{\rm d}t}=0
\end{equation*}
whereby the differentiation with respect to $t$ has to be performed both where $t$ appears explicitly and where it enters implicitly through $x,y,z$.
Hence we have\,:
\begin{equation*} \tag*{[11.7]}
\begin{aligned}
\frac{d \varphi}{dt} + \frac{d \varphi}{dx} u + \frac{d \varphi}{dy} v +\frac{d \varphi}{dz} w &=0\\\\
\frac{d \psi}{dt} + \frac{d \psi}{dx} u + \frac{d \psi}{dy} v +\frac{d \psi}{dz} w &=0 \
\end{aligned} \,.
\end{equation*}
Substituting the values of $u,\,v,\,w$ [taken from~(2)], we obtain\,:
\begin{equation*}\tag*{(3), [11.8]}
\left.
\begin{aligned} \frac{d \varphi}{dt} + \frac{d \varphi}{dx} \left (\frac{dF}{dx}+ \varphi \frac{d \psi}{dx}\right) + \frac{d \varphi}{dy} \left (\frac{dF}{dy}+ \varphi \frac{d \psi}{dy}\right) + \frac{d \varphi}{dz} \left (\frac{dF}{dz}+ \varphi \frac{d \psi}{dz}\right) =0 \\\\
\frac{d \psi}{dt} + \frac{d \psi}{dx} \left (\frac{dF}{dx}+ \varphi \frac{d \psi}{dx}\right) + \frac{d \psi}{dy} \left (\frac{dF}{dy}+ \varphi \frac{d \psi}{dy}\right) + \frac{d \psi}{dz} \left (\frac{dF}{dz}+ \varphi \frac{d \psi}{dz}\right) =0
\end{aligned} \qquad \right\}
\end{equation*}
In addition to these relations there is also the density equation $(5)$ of $\S.\,2.$
\begin{equation*} \tag*{[11.9]}
\frac{d \varrho}{dt} + \frac{d }{dx} \varrho \left (\frac{dF}{dx}+ \varphi \frac{d \psi}{dx}\right) + \frac{d }{dy} \varrho \left (\frac{dF}{dy}+ \varphi \frac{d \psi}{dy}\right) + \frac{d }{dz} \varrho \left (\frac{dF}{dz}+ \varphi \frac{d \psi}{dz}\right) =0
\end{equation*}
We observe that, in general, these three equations are not sufficient to determine the four unknown functions $F,\varphi,\psi$ and $\varrho$. Only in the case of liquid fluids, where $\varrho$ is constant,
are these sufficient, insofar as the latter expression transforms into\,:
\begin{equation*} \tag*{(4), [11.10]}
\frac{d}{dx} \left(\frac{dF}{dx}+ \varphi \frac{d \psi}{dx} \right) + \frac{d}{dy} \left(\frac{dF}{dy}+ \varphi \frac{d \psi}{dy} \right) + \frac{d}{dz} \left(\frac{dF}{dz}+ \varphi \frac{d \psi}{dz} \right) =0
\end{equation*}
These are the transformed equations in the elegant form that \mbox{C\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}b\hspace{0.036cm}s\hspace{0.036cm}c\hspace{0.036cm}h} found in another way,
without giving the meaning of $\varphi$ and $\psi$.\footnote{%
Ueber die Integration der hydrodynamischen Gleichungen (Crelle's Journal Bd.\ 56, S.\ 1 [\hyperlink{Clebsch1}{1859}]).
In this essay \mbox{C\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}b\hspace{0.036cm}s\hspace{0.036cm}c\hspace{0.036cm}h} starts from the first form of the \mbox{E\hspace{0.036cm}u\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}r} equation to perform the above transformation. On the one hand, the method applied by him admits of a considerable simplification; on the other hand,
one can give a procedure quite analogous to that used above to prove that
$\displaystyle\frac{{\rm d} \varphi} {dt}= 0$ and $\displaystyle\frac{{\rm d} \psi} {dt}= 0.$\hspace{0.4cm}
\mbox{C\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}b\hspace{0.036cm}s\hspace{0.036cm}c\hspace{0.036cm}h} showed (in Theorem 2) that the equations transformed by the introduction of $F, \varphi, \psi$ are the conditions for the variation of a quadruple integral to vanish;
this is [somewhat] similar to what \mbox{C\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}b\hspace{0.036cm}s\hspace{0.036cm}c\hspace{0.036cm}h} did for steady-state flows in his paper\,: Ueber eine allgemeine Transformation der hydrodynamischen Gleichungen (Crelle's Journal Bd.\ 54, S.\ 301 [\hyperlink{Clebsch2}{1857}]).
\hspace{0.4cm}In another form, this is the Theorem I derived directly from the principle of the virtual velocities, $\S.\,5$.\,eq.\ (\hyperlink{eq1}{1}).\hspace{0.4cm}One can easily show that from this result one arrives at the introduction of the functions $a$ for the case of stationary motion in a simpler way than \mbox{C\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}b\hspace{0.036cm}s\hspace{0.036cm}c\hspace{0.036cm}h} does in the latter essays\,; for lack of space, we do not give a more detailed argument here.
}\label{footnoteClebsch}\\[1cm]
\centerline {\fett{$\S.\,12$}}
\\[0.3cm]
If one puts into the equations $(3)$ of $\S.\,6$, $t=0$, one finds\,:
\begin{equation*} \tag*{[12.1]}
\frac{d v_0}{dc} - \frac{d w_0}{db} = 2A, \,\,\,\, \frac{d w_0}{da} - \frac{d u_0}{dc} = 2B,\,\,\,\, \frac{d u_0}{db} - \frac{d v_0}{da} = 2 C
\end{equation*}
where $A,\,\,B,\,\,C$ are the rotational velocities with respect to the coordinate axes at the point $a,\,\,b,\,\,c$ at time $t=0$.\hspace{0.4cm}Quite similarly, at time $t$ we have\,:
\begin{equation*} \tag*{(1), [12.2]}
\frac{d v}{dz} - \frac{d w}{dy} = 2X, \,\, \frac{d w}{dx} - \frac{d u}{dz} = 2Y,\,\,\, \frac{du}{dy} - \frac{d v}{dx} = 2 Z
\end{equation*}
These are the equations cited at page 34 [now page~\pageref{pointer2}] found by \mbox{C\hspace{0.036cm}a\hspace{0.036cm}u\hspace{0.036cm}c\hspace{0.036cm}h\hspace{0.036cm}y,} where $X,\,Y,\,Z$ denote the rotational velocities with respect to the coordinate axes $x,\,\,y,\,\,z$ at time $t$.\hspace{0.4cm}If $A,\,B,\,C$ are zero everywhere, i.e.\ if at the beginning of the motion there are no rotating particles, then
$X,\,Y,\,Z$ will stay zero and hence one can write
\begin{equation*} \tag*{(2), [12.3]}
u= \frac{dF}{dx}, \,\, v= \frac{dF}{dy}, \,\, w = \frac{dF}{dz}.
\end{equation*}
Restricting ourselves to liquid flows, by this substitution the density equation (6) of $\S.\,2.$ becomes\,:
\begin{equation*} \tag*{(3), [12.4]}
\frac{d^2 F}{dx^2} + \frac{d^2 F}{dy^2} + \frac{d^2 F}{dz^2} =0\,;
\end{equation*}
therefore,
$F$ satisfies \mbox{L\hspace{0.036cm}a\hspace{0.036cm}p\hspace{0.036cm}l\hspace{0.036cm}a\hspace{0.036cm}c\hspace{0.036cm}e}'s differential equation for all fluid particles\,;
for this reason the function $F$ has been designated by \mbox{H\hspace{0.036cm}e\hspace{0.036cm}l\hspace{0.036cm}m\hspace{0.036cm}h\hspace{0.036cm}o\hspace{0.036cm}l\hspace{0.036cm}t\hspace{0.036cm}z} as the \mbox{v\hspace{0.036cm}e\hspace{0.036cm}l\hspace{0.036cm}o\hspace{0.036cm}c\hspace{0.036cm}i\hspace{0.036cm}t\hspace{0.036cm}y} \mbox{p\hspace{0.036cm}o\hspace{0.036cm}t\hspace{0.036cm}e\hspace{0.036cm}n\hspace{0.036cm}t\hspace{0.036cm}i\hspace{0.036cm}a\hspace{0.036cm}l}.\hspace{0.4cm}
From equation~(3) there derives an interesting analogy between the fluid motions in a simply connected space and the effects of magnetic masses\,:
the velocities are equal and parallel to the forces that a certain distribution of magnetic masses on the surface exerts on a magnetic particle in the interior. \\
\indent \indent
If the function $F$ is determined by~(3)
by suitable boundary conditions on the surface,
it is still necessary to determine the pressure $p$ in order to get a complete solution to the problem.
The required equation for that is easily obtained,
by using the values of $u, \, v, \, w$ from~(2)
in~(4) of $\S.\,4$.
Then, one obtains\,:
\begin{equation*} \tag*{[12.5]}
\left.
\begin{aligned}
\frac{d^2 F}{dt dx} + \frac{d^2 F}{ dx^2} \frac{dF}{dx }+ \frac{d^2 F}{ dx dy} \frac{dF}{dy } + \frac{d^2 F}{ dx dz} \frac{dF}{dz } - \frac{d V}{dx} + \frac{1}{\varrho} \frac{dp}{dx} = 0\\\\
\frac{d^2 F}{dt dy} + \frac{d^2 F}{ dx dy} \frac{dF}{dx }+ \frac{d^2 F}{ dy^2} \frac{dF}{dy } + \frac{d^2 F}{ dy dz} \frac{dF}{dz } - \frac{d V}{dy} + \frac{1}{\varrho} \frac{dp}{dy} = 0\\\\
\frac{d^2 F}{dt dz} + \frac{d^2 F}{ dz dx} \frac{dF}{dx }+ \frac{d^2 F}{ dz dy} \frac{dF}{dy } + \frac{d^2 F} {dz^2} \frac{dF}{dz } - \frac{d V}{dz} + \frac{1}{\varrho} \frac{dp}{dz}=0
\end{aligned} \qquad \right\}
\end{equation*}
Through multiplication with ${\rm d} x, {\rm d} y, {\rm d} z$ and
summation, one gets\,:
\begin{equation*} \tag*{[12.6]}
{\rm d} \left(\frac{d F}{dt} \right) + \frac{1}{2} {\rm d}\left( \frac{dF}{dx}\right)^2 + \frac{1}{2} {\rm d} \left( \frac{dF}{dy}\right)^2 + \frac{1}{2} {\rm d} \left( \frac{dF}{dz}\right)^2 - {\rm d} \Omega = 0
\end{equation*}
where $\Omega$ is the function defined on S.\,18 [eq.\,\ref{pointerOmega}, now on page~\pageref{pointerOmega}].\hspace{0.4cm}
It follows herefrom by integration\,:
\begin{equation*} \tag*{[12.7]}
\frac{d F}{dt} +\frac{1}{2} \left[ \left( \frac{dF}{dx}\right)^2 + \left( \frac{dF}{dy}\right)^2 + \left( \frac{dF}{dz}\right)^2 \right] - \Omega =0
\end{equation*}
where the additive integration constant, which is a function of $t$, can be included in $F$.
\hspace{0.4cm}
Apart from the unknown $p$ the known function $V$ is contained in $\Omega$, from which one can easily determine $p$ once $F$ is obtained. \\[0.5cm]
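[That the integration constant, call it $f(t)$, may indeed be included in $F$ is seen at once\,: replacing $F$ by
\begin{equation*}
F - \int f(t)\, {\rm d}t
\end{equation*}
leaves the spatial derivatives $\frac{dF}{dx},\,\frac{dF}{dy},\,\frac{dF}{dz}$, and with them $u,\,v,\,w$, unchanged, while $\frac{dF}{dt}$ is diminished by $f(t)$, so that the constant disappears from the last equation.]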
\centerline{\fett{$\S.\,13.$}}
\vspace{0.3cm}
\indent\indent
From the relations $(1)$ of $\S.\,12$,
\begin{equation*} \tag*{(1), [13.1]}
\frac{d v}{dz} - \frac{d w}{dy} = 2X, \,\,\frac{d w}{dx} - \frac{d u}{dz} =2Y,\,\,\, \frac{d u}{dy} - \frac{d v}{dx} = 2 Z
\end{equation*}
and from the density equation $(6)$ of $\S.\,2$ for liquid
fluids
\begin{equation*} \tag*{[13.2]}
\frac{du}{dx} + \frac{dv}{dy} + \frac{d w}{dz} =0 \,,
\end{equation*}
$u,\,v,\,w$
{can be determined
as functions of $X,\,Y,\,Z$.
\hspace{0.4cm} Indeed, one finds easily from these equations\,:
\begin{equation*} \tag*{[13.3]}
\begin{aligned}
\frac{d^2 u}{d x^2}\,\, + \frac{d^2 u}{d y^2}\, + \frac{d^2 u}{d z^2}\, = 2 \left( \frac{d Z}{dy}\,- \frac{d Y}{dz}\right)\\\\
\frac{d^2 v}{d x^2}\,\, + \frac{d^2 v}{d y^2}\, + \frac{d^2 v}{d z^2}\, = 2 \left( \frac{d X}{dz}\, - \frac{d Z}{dx}\right)\\\\
\frac{d^2 w}{d x^2} + \frac{d^2 w}{d y^2} + \frac{d^2 w}{d z^2}\, = 2 \left( \frac{d Y}{dx} - \frac{d X}{dy}\right)
\end{aligned}
\end{equation*}
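[The first of these equations is obtained by substituting $2Z = \frac{du}{dy} - \frac{dv}{dx}$ and $2Y = \frac{dw}{dx} - \frac{du}{dz}$ from~(1)\,:
\begin{equation*}
2 \left( \frac{d Z}{dy} - \frac{d Y}{dz}\right) = \frac{d^2 u}{d y^2} + \frac{d^2 u}{d z^2} - \frac{d}{dx}\left( \frac{dv}{dy} + \frac{dw}{dz}\right) = \frac{d^2 u}{d x^2} + \frac{d^2 u}{d y^2} + \frac{d^2 u}{d z^2}\,,
\end{equation*}
the last step following from the density equation; the other two equations are obtained in the same way.]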
The integration of these partial differential equations follows from known theorems\,: $u,\,v,\,w$ appear as potential functions of fictitious, attracting masses,
which are distributed through the fluid-filled space with the densities,
\begin{equation*} \tag*{[13.4]}
-\frac{1}{2 \pi} \left( \frac{d Z}{dy} - \frac{d Y}{dz}\right), \,\, -\frac{1}{2 \pi} \left( \frac{d X}{dz} - \frac{d Z}{dx}\right),\,\, -\frac{1}{2 \pi} \left( \frac{d Y}{dx} - \frac{d X}{dy}\right) \,.
\end{equation*}
Denoting by $u_1,\, v_1, \, w_1$ the velocity components at the point $x_1, \, y_1,\, z_1$, and by $r$ the distance of this point from $x,\,y,\,z$, one has\,:
\begin{equation*} \tag*{[13.5]}
\begin{aligned}
u_1 =\frac{1}{2 \pi} \iiint \left( \frac {dY}{dz} - \frac{dZ}{dy}\right) \frac{{\rm d}x {\rm d}y {\rm d}z}{r}\\\\
v_1 =\frac{1}{2 \pi} \iiint \left( \frac {dZ}{dx} - \frac{dX}{dz}\right) \frac{{\rm d}x {\rm d}y {\rm d}z}{r}\\\\
w_1 =\frac{1}{2 \pi} \iiint \left( \frac {dX}{dy} - \frac{dY}{dx}\right) \frac{{\rm d}x {\rm d}y {\rm d}z}{r}
\end{aligned}
\end{equation*}
where the integrals have to be extended over all points of the continuous fluid.\hspace{0.4cm}These are not yet the most general values of $u_1, \, v_1, \, w_1$\,:
once these values $u_1, \, v_1, \, w_1$ are determined,
one can always add to them respectively
\begin{equation*} \notag
\begin{aligned}
\frac{dP_1}{dx_1},\,\,\,\frac{dP_1}{dy_1}, \,\,\,\frac{dP_1}{dz_1} \,,
\end{aligned}
\end{equation*}
while equations~(1) are still satisfied
as well as the density equation, provided that\,:
\begin{equation*} \tag*{[13.6]}
\frac{d^2 P}{dx^2} + \frac{d^2P}{dy^2} + \frac{d^2P}{dz^2} =0
\end{equation*}
is assumed for all points of the fluid.\hspace{0.4cm}One can consider here $P$ as the potential function of attracting masses which lie outside of the space filled with the fluid, and which must be determined so that the conditions for $u_1,\, v_1, \, w_1$ on the fluid's surface are satisfied.\\\\
\indent\indent
The values found for $u_1,\, v_1, \, w_1$ in this way can be transformed through integration by parts.\hspace{0.4cm}Since\,:
\begin{equation*} \tag*{[13.7]}
r^2=\left(x -x_1 \right)^2 + \left(y - y_1 \right)^2+\left(z -z_1 \right)^2
\end{equation*}
one finds\,:
\begin{equation*} \tag*{[13.8]}
\begin{aligned}
\iiint \frac{dY}{dz} \frac{{\rm d}x\,{\rm d}y\, {\rm d}z}{r}= \iint Y \frac{{\rm d}x\, {\rm d}y}{r} + \iiint Y \frac {z-z_1}{r^3} {\rm d}x\, {\rm d}y\, {\rm d}z\\\\
\iiint \frac{dZ}{dy} \frac {{\rm d}x\, {\rm d}y\, {\rm d}z}{r}= \iint Z \frac{{\rm d}x\, {\rm d}z}{r} + \iiint Z \frac {y-y_1}{r^3} {\rm d}x\, {\rm d}y\,
{\rm d}z
\end{aligned}
\end{equation*}
and thus
\begin{equation*}\tag*{(2), [13.9]}
\left.
\begin{aligned}
u_1 &=\frac{dP_1}{dx_1} + \frac{1}{2\pi} \int \left( Y \cos \gamma - Z \cos \beta \right ) \frac{{\rm d} \omega}{r} + \frac{1}{2\pi} \iiint \left\{ Y(z-z_1) - Z (y - y_1) \right\}\frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3} \,\,\,\,\\\\
&\text{\qquad\qquad\qquad\qquad and in the same way\,:} \\\\
v_1 &=\frac{dP_1}{dy_1} + \frac{1}{2\pi} \int ( Z \cos \alpha - X \cos \gamma ) \frac{{\rm d} \omega}{r} + \frac{1}{2\pi} \iiint \left\{ Z(x-x_1) - X (z - z_1) \right\}\frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3}\,\,\,\, \\\\
w_1 &=\frac{dP_1}{dz_1} + \frac{1}{2\pi} \int ( X \cos \beta - Y \cos \alpha ) \frac{{\rm d} \omega}{r} + \frac{1}{2\pi} \iiint \left\{ X(y-y_1) - Y (x - x_1) \right\} \frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3}
\end{aligned} \right\}
\end{equation*}
where ${\rm d}\omega$ denotes a surface element and $\alpha,\beta, \gamma$ the angles formed by its normal with the coordinate axes.\\
\indent\indent
These analytical formulas for the representation of $u_1,\,v_1,\,w_1$ in terms of $X,\,Y,\,Z$ allow for an interesting interpretation leading to a striking analogy
between the effect of vortex filaments and that of electrical currents.\hspace{0.4cm}Namely, if we denote the parts of $u_1,\,v_1,\,w_1$ which originate from the elements ${\rm d}x\, { \rm d}y\, {\rm d}z$ of the triple integrals by ${\rm d}u_1,\, {\rm d}v_1,\, {\rm d}w_1$, then one has\,:
\begin{equation*} \tag*{[13.10]}
\begin{aligned}
{\rm d} u_1 = \frac{1}{2\pi} \left \{ Y(z - z_1) -Z(y-y_1)\right \} \frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3}\\\\
{\rm d} v_1 = \frac{1}{2\pi} \left \{ Z (x - x_1) -X(z-z_1)\right \} \frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3}\\\\
{\rm d} w_1 = \frac{1}{2\pi} \left \{ X (y - y_1) -Y(x-x_1)\right \} \frac{{\rm d}x\, {\rm d}y\, {\rm d}z }{r^3}
\end{aligned}
\end{equation*}
By known theorems, from these equations one derives the relations\,:
\begin{align}
\tag*{[13.11]} &(x - x_1)\, {\rm d} u_1 + (y - y_1) \,{\rm d} v_1 + (z - z_1)\, {\rm d} w_1=0, \\ \nonumber \\
&X {\rm d}u_1 + Y{\rm d} v_1 + Z {\rm d} w_1 =0,\hspace{6.8cm} \tag*{[13.12]}\\\nonumber\\
&{\rm d}u_1^2 + {\rm d}v_1^2 + {\rm d}w_1^2 = \left(\frac{{\rm d}x {\rm d}y {\rm d}z}{2 \pi r^3}\right)^2 \Big\{ (X^2 + Y^2 + Z^2) [(x-x_1)^2 + (y-y_1)^2 + (z - z_1)^2] \nonumber\\
&\quad\hspace{5cm} - [X(x-x_1)+ Y(y-y_1) + Z(z-z_{1})]^2 \Big\} \,. \tag*{[13.13]}
\end{align}
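[In modern notation these relations become evident if one observes that, apart from the factor $\frac{{\rm d}x\,{\rm d}y\,{\rm d}z}{2\pi r^3}$, the quantities ${\rm d}u_1,\,{\rm d}v_1,\,{\rm d}w_1$ are the components of the vector product of $(X,\,Y,\,Z)$ and $(x-x_1,\,y-y_1,\,z-z_1)$\,; the first two equations then express that a vector product is normal to each of its factors, while the third is the identity
\begin{equation*}
\left| \mathfrak{A} \times \mathfrak{B} \right|^2 = \left|\mathfrak{A}\right|^2 \left|\mathfrak{B}\right|^2 - \left( \mathfrak{A} \cdot \mathfrak{B} \right)^2
\end{equation*}
applied to these two vectors.]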
\noindent The first two equations show that the velocity with the components $ {\rm d}u_1, {\rm d}v_1, {\rm d}w_1$
\begin{equation*} \tag*{[13.14]}
{\rm d} U_1 = \sqrt{{\rm d}u_1^2 + {\rm d}v_1^2 + {\rm d}w_1^2 }
\end{equation*}
is normal to the plane containing the point $x_1, y_1, z_1$ and
the rotation axis of the fluid particles $x,\, y,\,z$.\,\,The previous equation can also be written\,:
\begin{equation*} \tag*{[13.15]}
{\rm d} U_1^2=\Big(\frac{{\rm d}x\, {\rm d}y\, {\rm d}z}{2 \pi r^2} \Delta \Big)^2 \left\{ 1-\Big[\frac{X}{\Delta} \frac{x-x_1}{r } +\frac{Y}{\Delta} \frac{y-y_1}{r } + \frac{Z}{\Delta} \frac{z-z_1}{r }\Big]^2 \right\}
\end{equation*}
where $\Delta$ means the rotational velocity.\hspace{0.4cm}Herefrom one finds\,:
\begin{equation*} \tag*{[13.16]}
{\rm d} U_1= \frac{{\rm d}x\,{\rm d}y\,{\rm d}z}{2 \pi r^2} \Delta \sin \varepsilon
\end{equation*}
where $\varepsilon$ indicates the angle between the rotation axis of the particle $x, \,y, \,z$ and the connecting line $r$ between this point and $x_1, y_1, z_1$.\hspace{0.4cm}Each rotating particle $x,\,y,\,z$ thus generates, at another particle of the fluid $x_1, y_1, z_1$, a velocity which is normal to the plane passing through the rotation axis of the particle $x,\,y,\,z$ and through $x_1, y_1, z_1$. This velocity is directly proportional to the volume ${\rm d}x\, {\rm d}y\, {\rm d}z$ of the particle $x,\,y,\,z$, to its rotational velocity $\Delta$ and to the sine of the angle $\varepsilon$ between the rotation axis of $x,\, y,\, z$ and the connecting line $r$ of both particles,
and inversely proportional to the square of the distance $r$
between both particles.\\
\indent\indent
However, according to \mbox{A\hspace{0.036cm}m\hspace{0.036cm}p\hspace{0.036cm}\`e\hspace{0.036cm}r\hspace{0.036cm}e}'s Law, this is the same force that an electrical particle of intensity $\Delta$
located at $x,\,y,\,z$ in a current directed parallel to the rotational axis would exert on a little magnet located at $x_1,\,y_1,\,z_1$.\\
\indent\indent
This highly remarkable analogy, whose discovery is due
to \mbox{H\hspace{0.036cm}e\hspace{0.036cm}l\hspace{0.036cm}m\hspace{0.036cm}h\hspace{0.036cm}o\hspace{0.036cm}l\hspace{0.036cm}t\hspace{0.036cm}z,} is first of all
of great importance for the theory of the vortex filaments in liquid fluids --- since equations~(2) are only applicable to such liquid
fluids. Indeed, it allows one to apply theorems developed in electrodynamics, with minor modifications, to hydrodynamics, making the visualisation significantly simpler.
Furthermore, this analogy is also of a certain value for electrodynamics, since it allows one to analyse the electrodynamical processes not on the basis of the mutual elementary interactions between two particles
--- as is usually done following \mbox{A\hspace{0.036cm}m\hspace{0.036cm}p\hspace{0.036cm}\`e\hspace{0.036cm}r\hspace{0.036cm}e}'s procedure --- but by considering infinitely thin closed currents as a basis for the whole theory, in analogy with vortex filaments}. \\[1cm]
\centerline{\fett{$\S.\,14.$}}\vspace{0.3cm}
\indent\indent
The principle of virtual velocities for
the motion of liquid
fluids\,:
\begin{equation*}\tag*{(1), [14.1]}
\frac{d \delta x}{dx} + \frac{d \delta y}{dy} + \frac{d \delta z}{dz} = 0,
\end{equation*}
gives, by (1) of $\S.\,4$, the following equation\,:
\begin{equation*} \tag*{[14.2]}
\iiint \Big[\Big(\frac{d^2 x}{dt^2} -X\Big) \delta x + \Big(\frac{d^2 y}{dt^2} -Y\Big) \delta y + \Big(\frac{d^2 z}{dt^2} -Z\Big) \delta z \Big] {\rm d}x\, {\rm d}y\,{\rm d}z =0 .
\end{equation*}
\noindent If one takes, instead of the virtual velocities, the actual velocities, one has to put, instead of $\delta x, \delta y, \delta z$\,:
\begin{equation*} \tag*{[14.3]}
{\rm d}x = \frac{dx}{dt}{{\rm d}t}, \,\,\, {\rm d}y=\frac{dy}{dt}{{\rm d}t}, \,\,\,{\rm d}z=\frac{dz}{dt}{{\rm d}t},
\end{equation*}
and, since (1) turns into the continuity equation when one limits oneself to liquid
flow, one finds, under the usual assumptions about $X,Y,Z$\,:
\begin{equation*} \tag*{[14.4]}
\iiint \Big(\frac{d^2 x}{dt^2}\frac{d x}{dt} +\frac{d^2 y}{dt^2}\frac{d y}{dt} +\frac{d^2 z}{dt^2}\frac{d z}{dt}\Big ) {\rm d} x\,{\rm d} y\,{\rm d}z = \iiint
\Big( \frac{dV}{dx} \frac{dx}{dt} + \frac{dV}{dy} \frac{dy}{dt} + \frac{dV}{dz}\frac{dz}{dt}\Big) {\rm d} x\,{\rm d} y\,{\rm d}z
\end{equation*}
From this equation, after integration in time, denoting by
\begin{equation*} \tag*{[14.5]}
K=\frac{1}{2}\iiint \Big\{\Big(\frac{dx}{dt}\Big)^2 + \Big(\frac{dy}{dt}\Big)^2 + \Big(\frac{dz}{dt}\Big)^2 \Big\}{\rm d} x\,{\rm d} y\,{\rm d}z
\end{equation*}
the living force, one obtains\,:
\begin{equation*} \tag*{[14.6]}
K = \text{{\em Const.}} + \iiint V {\rm d}x\,{\rm d}y\,{\rm d}z
\end{equation*}
wherein the constant is
independent of $t$.\hspace{0.4cm}This equation of the living force
was presumably first given by \mbox{L\hspace{0.036cm}e\hspace{0.036cm}j\hspace{0.036cm}e\hspace{0.036cm}u\hspace{0.036cm}n\hspace{0.036cm}e-D\hspace{0.036cm}i\hspace{0.036cm}r\hspace{0.036cm}i\hspace{0.036cm}c\hspace{0.036cm}h\hspace{0.036cm}l\hspace{0.036cm}e\hspace{0.036cm}t}.\footnote{Untersuchungen \"uber ein Problem der Hydrodynamik. Crelle's Journal, Bd.\ 58, S.\ 202, [\hyperlink{Dirichlet}{1860}].}\\
\indent\indent
Consequently,
the living force will be independent of time provided that the integral $\iiint V {\rm d}x\,{\rm d}y\,{\rm d}z$ is constant in time,
in other words,
if $V$ at each point in space is independent of $t$.\hspace{0.4cm}The condition that the fluid always fills the same absolute space may also be expressed by requiring that the normal velocity component of the particles at the surface be zero.\hspace{0.4cm}One
may show directly that, in this case, $K$ is constant.
Namely, one has\,:
\begin{equation*} \tag*{[14.7]}
\frac{d K}{dt }= \iiint \Big(\frac{dV}{dx} u + \frac{dV}{dy}v + \frac{dV}{dz}w\Big) {\rm d } x\,\,{\rm d} y\,\,{\rm d}z
\end{equation*}
and because of
\begin{equation*} \tag*{[14.8]}
\begin{aligned}
\iiint \frac{dV}{dx} u {\rm d}x\,{\rm d}y\,{\rm d}z &= \iint Vu{\rm d}y\,{\rm d}z - \iiint V \frac{du}{dx} {\rm d}x\,{\rm d}y\,{\rm d}z \\\\
\iiint \frac{dV}{dy} v {\rm d}x\,{\rm d}y\,{\rm d}z &= \iint Vv{\rm d}z\,{\rm d}x - \iiint V \frac{dv}{dy} {\rm d}x\,{\rm d}y\,{\rm d}z\\\\
\iiint \frac{dV}{dz} w {\rm d}x\,{\rm d}y\,{\rm d}z &= \iint Vw{\rm d}x\,{\rm d}y - \iiint V \frac{dw}{dz}{\rm d}x\,{\rm d}y\,{\rm d}z
\end{aligned}
\end{equation*}
\noindent one has\,:
\begin{equation*} \tag*{[14.9]}
\frac{d K}{dt} = \iint Vu {\rm d}y\,{\rm d}z+\iint Vv {\rm d}z\,{\rm d}x + \iint Vw{\rm d}x {\rm d}y - \iiint V\Big( \frac{du}{dx} + \frac{dv}{dy} +\frac{dw}{dz}\Big) {\rm d}x\, {\rm d}y \,{\rm d}z
\end{equation*}
On account of the density equation the last integral vanishes, and one finds that
\begin{equation*} \tag*{[14.10]}
\frac{d K}{dt} =\int V\,U_n\, {\rm d} \omega,
\end{equation*}
where $U_n$ denotes the normal velocity of a particle at the external
surface, and where
${\rm d} \omega$ is the surface element.\hspace{0.4cm}If $U_n =0$ on the whole surface, we find indeed that $\displaystyle \frac{dK}{dt}=0$.\\\\
\indent\indent
For every stationary motion the normal velocity component at the surface vanishes\,; the same happens when the fluid is surrounded by motionless rigid walls\,;
this includes also the case when the liquid mass extends to infinity in all directions, since this amounts to having the flow contained in an infinitely
large sphere.\hspace{0.4cm}In all these cases the living force is constant in time.\\\\
\indent\indent
When there is no rotational motion in the flow, according to $\S.\,12$, one can set\,:
\begin{equation*} \tag*{[14.11]}
u = \frac{dF}{dx}, \hspace{1cm} v = \frac{dF}{dy}, \hspace{1cm} w = \frac{dF}{dz}
\end{equation*}
so that we obtain
\begin{equation*} \tag*{[14.12]}
K= \frac{1}{2} \iiint \Big\{ \Big( \frac{dF}{dx} \Big)^2 + \Big( \frac{dF}{dy} \Big)^2 + \Big( \frac{dF}{dz} \Big)^2 \Big\} {\rm d}x\,\,{\rm d}y\,\,{\rm d}z
\end{equation*}
\indent\indent
By a well-known theorem, one finds\,:%
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[T.\thefootnotemark]\enskip}%
\footnote{The factor $F$ in front of the Laplacian in the right-most term was omitted by Hankel.}
\begin{smallequation} \tag*{[14.13]}
\iiint \Big\{ \Big(\frac{dF}{dx}\Big)^2 + \Big( \frac{dF}{dy}\Big)^2 + \Big( \frac{dF}{dz}\Big)^2 \Big\}{\rm d}x\,{\rm d}y\,{\rm d}z = -\int F\frac{dF}{dn} {\rm d} \omega -
\iiint F \big( \frac{d^2 F}{d x^2} + \frac{d^2 F}{d y^2}+ \frac{d^2 F}{d z^2} \big) {\rm d}x\,{\rm d}y\,{\rm d}z
\end{smallequation}
But, by (3) of $\S.\,12$, the last integral vanishes, and thus one has\,:
\begin{equation*} \tag*{[14.14]}
K=- \int F\frac{dF}{dn} \, {\rm d} \omega
\end{equation*}
Furthermore, if the flow is in a stationary motion,
then {\small $\displaystyle\frac{dF}{dn}$}, being the velocity component normal to the external surface, must vanish,
and thus $K=0$ from which follows\,:
\begin{equation*} \tag*{[14.15]}
\frac{dF}{dx}=0, \hspace{1cm} \frac{dF}{dy} = 0, \hspace{1cm} \frac{dF}{dz} = 0
\end{equation*}
so that there is no motion at all.\hspace{0.4cm}
Thus, a motion
driven by a velocity potential can never become stationary and, conversely, a stationary motion will never have a velocity potential\;--- an interesting theorem discovered
by \mbox{H\hspace{0.036cm}e\hspace{0.036cm}l\hspace{0.036cm}m\hspace{0.036cm}h\hspace{0.036cm}o\hspace{0.036cm}l\hspace{0.036cm}t\hspace{0.036cm}z}.
\vspace{1.2cm}
\\\
\centerline{C\hspace{0.036cm}} %{$\phantom{!}$\! o\hspace{0.036cm}} %{$\phantom{!}$\! r\hspace{0.036cm}} %{$\phantom{!}$\! r\hspace{0.036cm}} %{$\phantom{!}$\! e\hspace{0.036cm}} %{$\phantom{!}$\! c\hspace{0.036cm}} %{$\phantom{!}$\! t\hspace{0.036cm}} %{$\phantom{!}$\! i\hspace{0.036cm}} %{$\phantom{!}$\! o\hspace{0.036cm}} %{$\phantom{!}$\! n\hspace{0.036cm}} %{$\phantom{!}$\! s [already implemented] }
\vspace{0.5cm}
\noindent
p.\,20.\,\, l.\,7 must be $\big(\frac{dx}{d \varrho_1} \big)^2$ instead of $\big(\frac{dx}{d \varrho_1} \big)^3$ \\
p.\,30.\,\, l.\,10 in front of $2r \cos \theta \cos \varphi \frac{d \theta}{dt} \frac{d \varphi}{d t}$ there must be a $+$ sign
\vspace{0.8cm}
\section{Letters exchanged for the prize assignment:
judgements of Bernhard Riemann and Wilhelm Eduard Weber}
\label{s:letters}
\subsection*{{\bf Judgement on Hankel's manuscript by Bernhard Riemann}}
Decision on the received manuscript on the mathematical prize question carrying the motto: \,\,{\em The more signs express relationships in nature, the more useful they are}:\footnote{This motto is in Latin in the \textit{Preisschrift}: ``Tanto utiliores sunt notae, quanto magis exprimunt rerum relationes''.}
\indent\indent
The manuscript gives commendable evidence of the Author's diligence, of his knowledge and ability in using the methods of computation recently developed by contemporary mathematicians.\hspace{0.4cm}This is particularly shown in $\S\,6$ of the manuscript, which contains an elegant method for building the equations of motion for a flow in a fully arbitrary coordinate system, from a point of view which is commonly referred to as Lagrangian.
\hspace{0.1cm}
However, when further developing the general laws of vortex motion, the Lagrangian approach is unnecessarily left aside, and,
as a consequence, the various laws had to be found by other and
quite independent means.\hspace{0.4cm} Also the relation between the equations of motion and the investigations of Clebsch, reported in $\S.\,14.\,15 $, is omitted by the Author.\hspace{0.4cm}Nonetheless,
as his derivation actually begins from the Lagrangian equations, one may consider the requirement posed by the prize-question as fulfilled by this part of the manuscript
(if one substitutes the wrong text of a given proof of a theorem used by the Author with the right one contained in his note
and leaves out of consideration the mistakes due to rushing in $\S.\,9$).\footnote{The $\S\,9$ refers to the Latin manuscript.} \hspace{0.4cm} In the opinion of the referee,
the imperfectness just evoked in handling this part of the question did not give any sufficient reason to deny the prize to the manuscript. However, the Author will have to consolidate this part of the manuscript by a reworking, after which the same would gain in shortness and uniformity. A more important reason against the assignment of the prize could be the incorrectnesses
occurring in several places.
These [incorrectnesses] do not get to the core of the argument,
except [in] two [paragraphs] (\S 3 and \S 8) in which the obscurity of another writer may as well serve as an excuse for the Author.
[These incorrectnesses] may still be passed over,
if our highly esteemed Faculty would like to assign the prize to this manuscript,
in view of all manner of good things it contains.\\
\begin{figure}[t]
\includegraphics[width=\textwidth]{LettersRiemannWeber.pdf}
\caption {Scanned image of the letters from Riemann and Weber, taken from the \textit{G\"ottingen University Archive}. The image is the result of merging two consecutive pages of the full
text of the exchanged letters amongst the commission members for the prize assignment.
Riemann's letter begins on the left page and ends halfway on the right page. Weber's
letter follows after Riemann's letter.}
\end{figure}
\vskip-0.3cm
\indent
G\"ottingen, 7th May 1861
$\phantom{C}$ \hfill B.\ Riemann \quad \quad
\vspace{0.7cm}
\subsection*{\bf{Judgement on Hankel's manuscript by Wilhelm Eduard Weber}}
With the Faculty approval, I have consulted Prof.\ Riemann, whose judgement I
fully agree with, both for the launching
of a pure mathematical prize and for the evaluation of the received manuscript.
In any case, the work deserves
praiseful recognition,
and since, in order to meet the task requirements, it needs just a few corrections indicated by my colleague Riemann, which are easy to make, it seems to me that the prize assignment does not give rise to any concern.
Nevertheless, the Author will have to submit his revised work again, according to the given suggestions, before it goes to print.
In my view, some inaccuracies and instances of haste find a fair excuse in the scant time available for such a task, given that only the autumn holidays could be used for this purpose.\\
$\phantom{C}$ \hfill Wilhelm Weber \quad \quad
\vspace{0.7cm}
\section{Hankel's biographical notes and list of published papers}
\label{s:HHpapers}
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{Hankel.jpg}
\caption{Portrait of Hermann Hankel (from Wikimedia Commons).}
\end{figure}
\subsection*{\bf{Biographical notes about Hermann Hankel}}
\deffootnotemark{\textsuperscript{[T.\thefootnotemark]}}\deffootnote{2em}{1.6em}{[T.\thefootnotemark]\enskip}
Hermann Hankel was born in Halle, near Leipzig, on 14th February 1839. His father was Gottfried Wilhelm Hankel, a renowned physicist. Hermann Hankel was a brilliant student already in high school, with a particular interest in mathematics and its history. From 1857, he studied Mathematics at the Leipzig University under the mathematicians Scheibner, Moebius and Drobisch. Then, he continued his studies in G\"ottingen, where, arriving in April 1860, he could attend, among others, Riemann's lectures. In G\"ottingen he won in 1861 the extraordinary mathematical prize launched
in June 1860 by the Faculty of G\"ottingen with an essay on the fluid motion theory to be elaborated in a Lagrangian framework. Also in 1861, he obtained his Doctor degree in Leipzig with the dissertation: ``\"Uber eine besondere Classe der symmetrischen Determinanten''.\footnote{``On a particular class of symmetric determinants''.} Then, in the autumn of the same year, he went to Berlin, where he could attend courses of Weierstrass and Kronecker.
In 1862 he returned to Leipzig and, in 1863, at the same place,
he habilitated as a \textit{Privatdozent} with a thesis
on the Euler integrals with an unlimited variability of the argument.
The writing of the habilitation thesis
was probably firstly induced by the lectures of Riemann about functions of complex variables.
In the spring 1867, he became extraordinary Professor at Leipzig University and, in the same year, ordinary Professor in Erlangen, then, in T\"ubingen in 1869. He was married to
Marie Dippe, who much later became a very important Esperantist. During his life, Hankel was the advisor for doctoral dissertations in mechanics, real functions and geometry. He died prematurely on 29th August 1873. Hermann Hankel is known for the Hankel functions (a type of cylinder function), the Hankel transform (an integral transformation whose kernel is a Bessel function of the first kind), and Hankel matrices (matrices with constant skew diagonals). Hankel was also the first to recognise the significance of Grassmann's extension theory (``Ausdehnungslehre'').
Hankel had a passion for research in history of mathematics and published meaningful writings also in this domain (his inaugural lesson in T\"ubingen was about the development of Mathematics in the last centuries).
Curiously, his prize-winning work on fluid dynamics in Lagrangian coordinates, written as a student,
is little known.\footnote{For Hankel's biography see: Cantor, \hyperlink{Cantor}{1879}; Crowe, \hyperlink{Crowe}{2008}; Monna, \hyperlink{Monna}{1973}; von Zahn, \hyperlink{Zahn}{1874}.}
\vspace{0.7cm}
\subsection*{\bf{List of papers of Hermann Hankel}}
\begin{enumerate}
\item[1)] Hankel, Hermann. 1861. \textit {Zur allgemeinen Theorie der Bewegung der Fl\"ussigkeiten.} Eine von der philosophischen Facult\"at der Georgia Augusta am 4.\ Juni 1861 gekr\"onte Preisschrift, G\"ottingen. \HD{http://babel.hathitrust.org/cgi/pt?id=mdp.39015035826760;view=1up;seq=5}{Druck der Dieterichschen Univ.-Buchdruckerei. W.FR.Kaestner, G\"ottingen.}
\item[2)] Hankel, Hermann. 1861. \textit {\"Uber eine besondere Classe der symmetrischen Determinanten.} \HD{https://books.google.fr/books/about/Ueber_eine_besondere_Classe_der_symmetri.html?id=vHZaAAAAcAAJ&redir_esc=y}{Inaugural-Dissertation zur Erlangung der philosophischen Doktorw\"urde an der Universit\"at Leipzig von Hermann Hankel.}
\item[3)]
Hankel, Hermann. 1862. \"Uber die Transformation von Reihen in Kettenbr\"uche.
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN599415665_0007/PPN599415665_0007___LOG_0026.pdf} {\textit{Zeitschrift f\"ur Mathematik und Physik}, {\bf 7}, 338--343.}
Also in \textit {Berichte \"uber die Verhandlungen der k\"oniglich s\"achsichen Gesellschaft der Wissenschaften zu Leipzig}, mathematisch-physische Classe {\bf 14}, 17-22, 1862. Verlag der S\"achsischen Akademie der Wissenschaften zu Leipzig.
\item[4)] Hankel, Hermann, (signed as Hl.). 1863. Aufsatz \"uber \textit {Ein Beitrag zu den Untersuchungen \"uber die Bewegung eines fl\"ussigen gleichartigen Ellipsoides} by B.Riemann.
\HD{https://play.google.com/books/reader?id=zt0EAAAAQAAJ&printsec=frontcover&output=reader&hl=en&pg=GBS.PA50}{\textit{Die Fortschritte der Physik} im Jahre 1861, {\bf 17}, 50--57.}
\item[5)] Hankel, Hermann, (signed as Hl.). 1863. Aufsatz \"uber \textit {Zur allgemeinen Theorie der Bewegung der Fl\"ussigkeiten. Eine von der philosophischen Facult\"at der Georgia Augusta am 4.\ Juni 1861 gekr\"onte Preisschrift, G\"ottingen} by H.\ Hankel.
\HD{https://play.google.com/books/reader?id=zt0EAAAAQAAJ&printsec=frontcover&output=reader&hl=en&pg=GBS.PA57} {\textit{Die Fortschritte der Physik} im Jahre 1861, {\bf 17}, 57--61.}
\item[6)]
Hankel, Hermann, (signed as Hl.). 1863. Aufsatz \"uber \textit {D\'eveloppements relatifs au \S 3 de recherches de Dirichlet sur un probl\`eme d'hydrodynamique} by F.\ Brioschi.
\HD{https://play.google.com/books/reader?id=zt0EAAAAQAAJ&printsec=frontcover&output=reader&hl=en&pg=GBS.PA61}{\textit{Die Fortschritte der Physik} im Jahre 1861, {\bf 17}, 61--62.}
\item[7)] Hankel, Hermann. 1863. \textit {Die Euler'schen integrale bei unbeschr\"ankter variabilit\"at des Argumentes}:
\HD{https://ia902205.us.archive.org/28/items/dieeulerschenin00hankgoog/dieeulerschenin00hankgoog.pdf}{zur Habilitation in der Philosophischen Facult\"at der Universit\"at, Leipzig, Voss.}
An extrait is published in \HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN599415665_0009/PPN599415665_0009___LOG_0006.pdf}{\textit{Zeitschrift f\"ur Mathematik und Physik}, 1864, {\bf 9}, 1--21.}
\item[8)]
Hankel, Hermann. 1864. Die Zerlegung algebraischer Functionen in Partialbr\"uche nach den Prinzipien der complexen Functionentheorie. \HD{http://gdz.sub.uni-goettingen.de/dms/load/img/?PID=PPN599415665_0009\%7CLOG_0025} {\textit{Zeitschrift f\"ur Mathematik und Physik} {\bf 9}, 425--433.}
\item[9)] Hankel, Hermann. 1864. Mathematische Bestimmung des Horopters.
\HD{http://gallica.bnf.fr/ark:/12148/bpt6k15207b/f600.image.r=}{\textit{Annalen der Physik und Chemie}, {\bf 122}, 575--588.}
\item[10)] Hankel, Hermann. 1864. \textit{\"Uber die Vieldeutigkeit der Quadratur und rectification algebraischer Curven}.
\HD{https://archive.org/stream/bub_gb_M6dWcNSvxJYC\#page/n0/mode/2up}{Eine Gratulationsschrift zur Feier des f\"unfzigjaehrigen Doctorjubilaeums
des Herren August Ferdinand Moebius am 11 Dezember 1864.}
\item[11)]
Hankel, Hermann. 1867. Ein Beitrag zur Beurteilung der Naturwissenschaft des griechischen Altertum. \HD{https://babel.hathitrust.org/cgi/pt?id=hvd.hw2918;view=1up;seq=436}{\textit {Deutsche Vierteljahresschrift} {\bf 4}, 120-155.}
\item[12)]
Hankel, Hermann. 1867. \textit {Vorlesungen \"uber die complexen Zahlen und ihre Functionen} in \HD{https://books.google.it/books?id=MkttAAAAMAAJ&printsec=frontcover&hl=it&source=gbs_ge_summary_r&cad=0\#v=onepage&q&f=false}{zwei Theilen, Voss, Leipzig.}
\item[13)]
Hankel, Hermann. 1867. Darstellung symmetrischer Functionen durch die Potenzsummen.
\HD{https://www.digizeitschriften.de/download/PPN243919689_0067/PPN243919689_0067___log8.pdf}{\textit {Journal f\"ur die reine und angewandte Physik}, {\bf 67}, 90--94.}
\item[14)]
Hankel, Hermann. 1868. Die Astrologie um 1600 mit besonderer R\"ucksicht auf das Verhaeltnis Keppler's und Wallenstein's.
\HD{http://reader.digitale-sammlungen.de/de/fs1/object/display/bsb10612800_00295.html?contextType=scan&contextSort=score\%2Cdescending&contextRows=10&context=hankel}{\textit {Westermann Monatshefte}, {\bf 25}, 281--294. }
\item[15)]
Hankel, Hermann. 1869. \textit {Die Entwickelung der Mathematik in den letzten Jahrhunderte}.
\HD{https://archive.org/details/bub_gb_TE3kAAAAMAAJ}{Ein Vortrag beim Eintritt in den akademischen Senat der Universit\"at T\"ubingen am 29. April 1869 gehalten, Fuessche, T\"ubingen.}
\item[16)]
Hankel, Hermann. 1869. Die Entdeckung der Gravitation -- und Pascal - Ein literarisches Bericht von Dr.\ Hermann Hankel.
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN599415665_0014/PPN599415665_0014___LOG_0014.pdf}{\textit{Zeitschrift f\"ur Mathematik und Physik}, {\bf 14}, 165--173.}
\item[17)]
Hankel, Hermann. 1869. Beweis eines Hilfsatzes in der Theorie der bestimmten Integrale. \HD{http://gdz.sub.uni-goettingen.de/dms/load/img/?PID=PPN599415665_0014\%7CLOG_0030&physid=PHYS_0440}{\textit {Zeitschrift f\"ur Mathematik und Physik}, {\bf 14}, 436--437.}
\item[18)] Hankel, Hermann. 1869. Die Cylinderfunctionen erster und zweiter Art.
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN235181684_0001/PPN235181684_0001___LOG_0041.pdf}{\textit {Mathematische Annalen}, {\bf 1}, 467--501.}
\item[19)] Hankel, Hermann. 1870. Die Entdeckung der Gravitation durch Newton.
\HD{http://reader.digitale-sammlungen.de/de/fs1/object/goToPage/bsb10612803.html?pageNo=496}{\textit {Westermann Monatshefte}, {\bf 27}, 482--493.}
\item[20)] Hankel, Hermann. 1872. Intorno al volume intitolato: \textit{Geschichte der mathematischen Wissenschaften. 1.\ Theil. Von den \"altesten Zeiten bis Ende des 16.\ Jahrhunderts} of H.Suter. Relazione del dottor Ermanno Hankel.
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN599471603_0005/PPN599471603_0005___LOG_0022.pdf}{\textit {Bullettino di bibliografia e di storia delle scienze matematiche e fisiche}, {\bf 5}, 297--300.}
\item[21)] Hankel, Hermann. 1874. \textit{Zur Geschichte der Mathematik in Althertum und im Mittelalter}, (published \textit{post-mortem}),
\HD{http://gallica.bnf.fr/ark:/12148/bpt6k82883t}{Druck und Verlag von B.G.\ Teubner.}
\item[22)] Hankel, Hermann. 1875.
\textit{Die Elemente der projectivischen Geometrie in synthetischer Behandlung.}
\HD{https://archive.org/stream/dieelementederp00hankgoog\#page/n7/mode/2up}{Vorlesungen von Dr. Hermann Hankel, Teubner, Leipzig.}
\item[23)] Hankel, Hermann. 1875. Bestimmte Integrale mit Cylinderfunctionen.
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN235181684_0008/PPN235181684_0008___LOG_0037.pdf}{\textit {Mathematische Annalen}, { \bf 8}, 453--470. }
\item[24)] Hankel, Hermann. 1875.
Die Fourier'schen Reihen und Integrale f\"ur Cylinderfunctionen.
\HD{http://www.digizeitschriften.de/download/PPN235181684_0008/PPN235181684_0008___log38.pdf}{\textit {Mathematische Annalen}, {\bf 8}, 471--494.}
\item[25)] Hankel, Hermann. 1882 (1870). Untersuchungen \"uber die unendlich oft oszillierenden und unstetigen Funktionen. Abdruck aus dem Gratulationsprogramm der T\"ubinger Universit\"at vom 6. M\"arz 1870.
\HD{http://www.digizeitschriften.de/download/PPN235181684_0020/PPN235181684_0020___log14.pdf}{\textit { Mathematische Annalen}, {\bf 20}, 63--112.}
\item[26)] Hankel, Hermann. 1818--1881, Lagrange Lehrsatz. In \HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN358976863/PPN358976863___LOG_0132.pdf} {\textit {Allgemeine Encyclop\"adie der Wissenschaften und K\"unste},
J.S.\ Ersch, J.G.\ Grube, 353--367.}
\item[27)]
Hankel, Hermann. 1818--1881. Gravitation. In \HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN358787696/PPN358787696___LOG_0301.pdf} {\textit {Allgemeine Encyclop\"adie der Wissenschaften und K\"unste},
J.S.\ Ersch, J.G.\ Grube, 313--355.}
\item[28)]
Hankel, Hermann. 1818--1881. Grenze. In
\HD{http://gdz.sub.uni-goettingen.de/pdfcache/PPN358976863/PPN358976863___LOG_0132.pdf} {\textit {Allgemeine Encyclop\"adie der Wissenschaften und K\"unste} J.S.\ Ersch, J.G.\ Grube, 185--211.}
\end{enumerate}
\vspace{0.5cm}
\noindent {\it Acknowledgements.} We are grateful to Uriel Frisch and to the referees for useful remarks.
We thank the Observatoire de la C\^ote d'Azur and the Laboratoire J.-L.~Lagrange
for their hospitality.
CR is supported by the DFG through the SFB-Transregio TRR33 ``The Dark Universe''.
| {
"timestamp": "2017-09-19T02:14:27",
"yymm": "1707",
"arxiv_id": "1707.01883",
"language": "en",
"url": "https://arxiv.org/abs/1707.01883",
"abstract": "The present is a companion paper to \"A contemporary look at Hermann Hankel's 1861 pioneering work on Lagrangian fluid dynamics\" by Frisch, Grimberg and Villone (2017). Here we present the English translation of the 1861 prize manuscript from Göttingen University \"Zur allgemeinen Theorie der Bewegung der Flüssigkeiten\" (On the general theory of the motion of the fluids) of Hermann Hankel (1839-1873), which was originally submitted in Latin and then translated into German by the Author for publication. We also provide the English translation of two important reports on the manuscript, one written by Bernhard Riemann and the other by Wilhelm Eduard Weber, during the assessment process for the prize. Finally we give a short biography of Hermann Hankel with his complete bibliography.",
"subjects": "History and Overview (math.HO); Analysis of PDEs (math.AP); History and Philosophy of Physics (physics.hist-ph)",
"title": "Hermann Hankel's \"On the general theory of motion of fluids\", an essay including an English translation of the complete Preisschrift from 1861",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347857055181,
"lm_q2_score": 0.8221891392358015,
"lm_q1q2_score": 0.8011696977006426
} |
https://arxiv.org/abs/0805.4174 | On a generalization of Christoffel words: epichristoffel words | Sturmian sequences are well-known as the ones having minimal complexity over a 2-letter alphabet. They are also the balanced sequences over a 2-letter alphabet and the sequences describing discrete lines. They are famous and have been extensively studied since the 18th century. One of the {extensions} of these sequences over a $k$-letter alphabet, with $k\geq 3$, are the episturmian sequences, which generalizes a construction of Sturmian sequences using the palindromic closure operation. There exists a finite version of the Sturmian sequences called the Christoffel words. They are known since the works of Christoffel and have interested many mathematicians. In this paper, we introduce a generalization of Christoffel words for an alphabet with 3 letters or more, using the episturmian morphisms. We call them the {\it epichristoffel words}. We define this new class of finite words and show how some of the properties of the Christoffel words can be generalized naturally or not for this class. | \section{Introduction}
As far as we know, Sturmian sequences first appeared in the literature in the 18th century, in the precursory works of the astronomer Bernoulli \cite{jb1772}. They appeared again in the 19th century in the works of Christoffel \cite{ebc1875} and Markov \cite{am1882}. The first deep study of these sequences is given in \cite{mh1938,mh1940}, where the name {\it Sturmian sequence} appears for the first time. At the end of the 20th century and more recently, many mathematicians have been interested in these sequences, for instance \cite{ch1973,emc1974,kbs1976,tcb1993,bpr1994,gz1995,adl1997,jb2002}. Recent books also show this interest \cite{ml2002,pf2002,as2003,blrs2008}, as well as a recent survey \cite{jb2007}. In this wide literature, we find different characterizations of the Sturmian sequences. In particular, they are the sequences over a 2-letter alphabet having minimal complexity, they are also the balanced sequences over a 2-letter alphabet, and they code discrete lines. These different characterizations show how the Sturmian sequences occur in fields as different as number theory \cite{rm19852,rjs1991,rt2000,rt2001,bv2003,rjs2004,go2005}, discrete geometry, crystallography \cite{bt1986} and symbolic dynamics \cite{mh1938,mh1940,gah1944,mq1987}.
Since the end of the 20th century, numerous generalizations of Sturmian sequences have been introduced for alphabets with more than 2 letters. Among them, one natural generalization, called the {\it episturmian sequences}, uses the palindromic closure property of Sturmian sequences \cite{adl19972}. The first construction of episturmian sequences is due to \cite{djp2001}. Previously, the first episturmian sequence to be introduced and studied was the Tribonacci word \cite{gr1982}, and an important class of episturmian sequences, now called the Arnoux-Rauzy sequences, had been considered in \cite{ar1991,rz2000}. More recently the whole class was extensively studied, for instance in \cite{jv2000,rz2000,djp2001,jp2002,jp2004,jj2005,ag2005,pv2007,gr20072,ag2006,bdldlz2008,gjp2007,glr2008}. For surveys about episturmian sequences, see for instance \cite{jb2007,gj2009}.
The finite version of Sturmian sequences, called {\it Christoffel words}, has also been well studied \cite{ebc1875,ml2002,br2006,bdlr2007,kr2007}. It is known that any finite standard Sturmian word, that is, any word obtained by applying a standard Sturmian morphism to a letter, is conjugate to a Christoffel word. A Christoffel word is then the smallest word, with respect to the lexicographic order, in the conjugacy class of a finite standard Sturmian word. Finite factors of the episturmian sequences appeared for instance in \cite{gjp2007}. The class of standard episturmian words is naturally defined as the set of finite words obtained by applying standard episturmian morphisms to a letter, but no generalization of the Christoffel words has been introduced yet. In this paper, we introduce such a generalization, which we naturally call the {\it epichristoffel words}. Note that for each standard episturmian word, there exists a conjugate which is an epichristoffel word, and conversely.
The paper is organized as follows.
We first recall some basic definitions of combinatorics on words and establish the notation used in this paper. We recall the definitions and some properties of the Sturmian sequences, the Christoffel words and the episturmian sequences. Then we introduce our new class of finite words: the {\it epichristoffel} ones. We show how some of the properties of the Christoffel words can be generalized for an alphabet with more than $2$ letters. We then describe an algorithm which determines whether a given $k$-tuple describes the occurrence numbers of letters in an epichristoffel word; if so, we show how to construct it. Finally, we prove the next theorem, a generalization of a result for Christoffel words \cite{dldl2006}, which characterizes epichristoffel conjugates. \\
\noindent {\bf Theorem } {\it Let $w$ be a finite primitive word different from a letter. Then the conjugates of $w$ are all factors of the same episturmian sequence if and only if $w$ is conjugate to an epichristoffel word.
}
\section{Definitions and notation}
Throughout this paper, $\mathcal{A}$ denotes a finite alphabet containing $k$ letters $a_0, a_1, \ldots, a_{k-1}$. A {\it finite word} is an element of the free monoid $\mathcal{A} ^*$. If $w=w[0]w[1]\cdots w[n-1]$, with $w[i] \in \mathcal{A}$, then $w$ is said to be a finite word of {\it length} $n$ and we write $|w|=n$. By convention, the {\it empty word} is denoted $\varepsilon$ and its length is 0. We denote by $\mathcal{A} ^\omega$ the set of right-infinite words, also called {\it sequences}, over the alphabet $\mathcal{A}$; then $\mathcal{A} ^\infty=\mathcal{A}^* \cup \mathcal{A}^\omega$ is the set of finite and infinite words.
The number of occurrences of the letter $a_i$ in $w$ is denoted $|w|_{a_i}$. The {\it reversal} of the word $w=w[0]w[1]\cdots w[n-1]$ is $\widetilde{w}=w[n-1]w[n-2] \cdots w[0]$, and if $\widetilde{w}=w$, then $w$ is said to be a {\it palindrome}. A finite word $f$ is a {\it factor} of $w \in \mathcal{A}^\infty$ if $w=pfs$ for some $p \in \mathcal{A}^*, s \in \mathcal{A}^\infty$. If $p=\varepsilon$ (resp. $s=\varepsilon$), $f$ is called a {\it prefix} (resp. a {\it suffix}) of $w$. Let $u\in \mathcal{A}^*$ and $n\in \mathbb N$. We denote by $u^n$ the word $u$ repeated $n$ times and we call it an {\it $n$-th power}. A factor $\alpha^k$ of the word $w$, with $\alpha \in \mathcal{A}$ and $k \in \mathbb N$ maximal, is called a {\it block of $\alpha$ of length $k$ in $w$}. Let $u, v$ be two palindromes; then $u$ is a {\it central factor} of $v$ if $v=wu\tilde w$ for some $w \in \mathcal{A}^*$. The {\it right palindromic closure} of $w \in \mathcal{A}^*$ is the shortest palindrome $u=w^{(+)}$ having $w$ as a prefix.
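The right palindromic closure can be computed directly: find the longest palindromic suffix of $w$ and mirror what precedes it. A minimal Python sketch (the function name is ours):

```python
def pal_closure(w: str) -> str:
    """Right palindromic closure w^(+): the shortest palindrome with prefix w."""
    for i in range(len(w) + 1):
        suffix = w[i:]
        if suffix == suffix[::-1]:       # longest palindromic suffix of w
            return w + w[:i][::-1]       # append the mirror of what precedes it
    return w  # unreachable: the empty suffix is always a palindrome
```

For instance, `pal_closure("ab")` yields `"aba"` and `pal_closure("aab")` yields `"aabaa"`.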
The {\it set of factors} of $w \in \mathcal{A}^\omega$ is denoted $F(w)$ and $F_n(w)=F(w)\cap \mathcal{A} ^n$ is the set of all factors of $w$ of length $n \in \mathbb N$. The complexity function is given by $P(n)=|F_n(w)|$ and is the number of distinct factors of $w$ of length $n \in \mathbb N$. Two words $w$ and $w'$ are said {\it equivalent} if they have the same set of factors: $F(w)=F(w')$.
The {\it conjugacy class} $[w]$ of $w \in \mathcal{A} ^n$ is the set of all words $w[i]w[i+1]\cdots w[n-1]w[0]\cdots w[i-1]$, for $0 \leq i \leq n-1$. If $w$ is not the power of a shorter word, then $w$ is said to be {\it primitive} and has exactly $n$ conjugates. If $w$ is the smallest word in its conjugacy class with respect to some lexicographic order, then $w$ is called a {\it Lyndon word}.
Let $w$ be an infinite word, then a factor $f$ of $w$ is {\it right} (resp. {\it left}) {\it special} in $w$ if there exist $a,b \in \mathcal{A}$, $a \neq b$, such that $fa, fb \in F(w)$ (resp. $af, bf \in F(w)$). A word $w$ over $\mathcal{A}$ is {\it balanced} if for all factors $u$ and $v$ of $w$ having the same length, for all letters $a \in \mathcal{A}$, one has
$$\big ||u|_a-|v|_a\big |\leq 1.$$
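The balance condition above can be checked by brute force over all pairs of equal-length factors; a short Python sketch (the function name is ours, and the quadratic enumeration is only meant for small words):

```python
def is_balanced(w: str) -> bool:
    """Check that ||u|_a - |v|_a| <= 1 for all equal-length factors u, v of w."""
    for n in range(1, len(w)):
        factors = [w[i:i + n] for i in range(len(w) - n + 1)]
        for a in set(w):
            counts = [f.count(a) for f in factors]
            if max(counts) - min(counts) > 1:
                return False
    return True
```

For instance, `is_balanced("aabab")` is true, while `is_balanced("aabb")` fails already on the length-2 factors `aa` and `bb`.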
If $w=pus \in \mathcal{A}^\infty$, with $p,u \in \mathcal{A}^*$ and $s \in \mathcal{A}^\infty$, then $p^{-1}w$ denotes the word $us$. Similarly, $ws^{-1}$ denotes the word $pu$.
An integer $p\in \mathbb N$ is a {\it period} of the word $w=w[0]w[1]\cdots w[n-1]\in \mathcal{A}^*$ if $w[i]=w[i+p]$ for $0\leq i < n-p$. When $p=0$, the period is {\it trivial}. If $p$ is the smallest nontrivial period of $w$, then the {\it fractional root} of $w$ is defined as the prefix $z_w$ of $w$ of length $p$. An infinite word $w\in \mathcal{A}^\omega$ is {\it periodic} (resp. {\it ultimately periodic}) if it can be written as $w=u^\omega$ (resp. $w=vu^\omega$), for some $u,v \in \mathcal{A}^*$. If $w$ is not ultimately periodic, then it is {\it aperiodic}.
A {\it morphism} $f$ from $\mathcal{A}^*$ to $\mathcal{A}^*$ is a mapping from $\mathcal{A}^*$ to $\mathcal{A}^*$ such that for all words $u,v\in\mathcal{A}^*$, $f(uv)=f(u)f(v)$. A morphism extends naturally on infinite words.
\section{Sturmian, Christoffel and episturmian words}
Before introducing our generalization of Christoffel words, inspired by the definition of episturmian sequences, let us recall the definition of these well-known families and some of their properties.
\subsection{Sturmian words and morphisms}
One of the classical definitions of Sturmian sequences is the one given by Morse and Hedlund \cite{mh1940}: \\
\noindent {\bf Definition } Let $\rho$, called the intercept, and $\alpha$, called the slope, be two real numbers with $\alpha$ irrational such that $0\leq \alpha< 1$. For $n\geq 0$, let
\begin{center}
$s[n] = \left \{ \begin{tabular}{l}
$a$ \textnormal{if} $\lfloor \alpha(n+1) +\rho \rfloor = \lfloor \alpha n +\rho \rfloor$,\\
$b$ \textnormal {otherwise},
\end{tabular} \right . $ \\
$s'[n] = \left \{ \begin{tabular}{l}
$a$ \textnormal{if} $\lceil \alpha(n+1) +\rho \rceil = \lceil \alpha n +\rho \rceil$,\\
$b$ \textnormal {otherwise}.
\end{tabular} \right . $\\
\end{center}
Then the sequences
$$s_{\alpha,\rho}=s[0] s[1]s[2]\cdots \quad \quad \textnormal{ and } \quad \quad s'_{\alpha, \rho}=s'[0]s'[1]s'[2]\cdots$$ are
{\it Sturmian} and conversely, a Sturmian sequence can be written as $s_{\alpha, \rho}$ or $s'_{\alpha,\rho}$ for $\alpha$ irrational and $\rho \in \mathbb{R}$. \\
Sturmian sequences have several characterizations. For more details about this class of words, we refer the reader to the section in \cite{ml2002} devoted to Sturmian sequences.\\
\noindent {\bf Proposition } \cite{ch1973} A sequence $s$ is Sturmian if and only if for all $n \in \mathbb N$, $P(n)=n+1$.\\
\newpage
\begin{theo}{\rm \cite{ml2002}} Let $s$ be a sequence. The following assertions are equivalent: \vspace{-0.2cm}
\begin{itemize}
\item [\rm i)] $s$ is Sturmian;
\item [\rm ii)] $s$ is balanced and aperiodic.
\end{itemize}
\end{theo}
\begin{defn}\cite{ml2002} A {\it morphism} $f$ is {\it Sturmian} if $f(s)$ is Sturmian for all Sturmian sequences~$s$.
\end{defn}
\subsection{Christoffel words}
In discrete geometry, Christoffel words are defined as the discretization of a line having a rational slope, as introduced in \cite{bl1993}. In symbolic dynamics, they are defined by exchange of intervals \cite{mh1940} as follows.\\
\noindent {\bf Definition } Let $p$ and $q$ be positive relatively prime integers and $n=p+q$. Given an ordered $2$-letter alphabet $\{a < b\}$, the {\it Christoffel word $w$ of slope $p/q$} over this alphabet is defined as $w=w[0]w[1]\cdots w[n-1]$, with
$$w[i] = \left \{ \begin{tabular}{l}
$a$ \textnormal{if} $ip \mod n > (i-1)p \mod n,$ \\
$b$ \textnormal{if} $ip \mod n < (i-1)p \mod n,$
\end{tabular}
\right . $$ for $0\leq i\leq n-1$, where $k \mod n$ denotes the remainder of the Euclidean division of $k$ by $n$.\\
Notice that since $p$ and $q$ are relatively prime, a Christoffel word is always primitive. Other important properties of Christoffel words will be recalled just before their generalizations in Section 4.
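The definition can be transcribed directly; in the sketch below (function name ours) we index $i$ from $1$ to $n$, the usual convention under which the word begins with $a$ and ends with $b$:

```python
def christoffel(p: int, q: int) -> str:
    """Lower Christoffel word of slope p/q over {a < b} (p, q coprime)."""
    n = p + q
    return "".join(
        "a" if (i * p) % n > ((i - 1) * p) % n else "b"
        for i in range(1, n + 1)
    )
```

For instance, `christoffel(2, 5)` returns `"aaabaab"`, with $|w|_a = q = 5$ and $|w|_b = p = 2$; it is also the least of its conjugates, i.e.\ a Lyndon word.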
\subsection{Episturmian sequences and morphisms}
One of the possible generalizations of Sturmian sequences for an alphabet with $3$ letters or more is the set of episturmian sequences. Let us first recall the definition of standard episturmian sequences as introduced initially by Droubay, Justin and Pirillo.
\begin{defn} \cite{djp2001} \label{def1epis} A sequence $s$ is {\it standard episturmian} if it satisfies one of the following equivalent conditions.
\vspace{-0.2cm}
\begin{itemize}
\item [\rm i)] For every prefix $u$ of $s$, $u^{(+)}$ is also a prefix of $s$.
\item [\rm ii)] Every leftmost occurrence of a palindrome in $s$ is a central factor of a palindromic prefix of $s$.
\item [\rm iii)] There exist a sequence $u_0=\varepsilon, u_1, u_2, \ldots$ of palindromes and a sequence $\Delta(s)=x[0]x[1]\cdots$, with $x[i] \in \mathcal{A}$, such that $u_n$ defined by $u_{n+1}=(u_nx[n])^{(+)}$, with $n \geq 0$, is a prefix of $s$.
\end{itemize}
\end{defn}
\begin{defn} \cite{djp2001} \label{episst} A sequence $t$ is {\it episturmian} if $F(t)=F(s)$ for a standard episturmian sequence $s$.
\end{defn}
An equivalent definition is that a sequence $s \in A^\omega$ is {\it episturmian} if its set of factors is closed under reversal and $s$ has at most one right (or equivalently left) special factor for each length.
\begin{nota} \textnormal{\cite{jj2005}} Let $w=w[0]w[1]\cdots w[n-1]$, with $w[i] \in \mathcal{A}$, and $u_0=\varepsilon$, \ldots, $u_{n}=(u_{n-1}w[n-1])^{(+)}$, the palindromic prefixes of $u_{n}$. Then \textnormal{Pal}$(w)$\index{$\textnormal{Pal}(w)$} denotes the word $u_{n}$.
\end{nota}
In Definition \ref{def1epis}, $\Delta(s)$ is called the {\it directive sequence} of the standard episturmian sequence~$s$. Since $\Delta(s)$ is the limit of its prefixes and $s$ is the limit of the $u_n$, it is natural to write $s=\hbox{\rm Pal}(\Delta(s))$.
Let us recall from \cite {jj2005} a useful property of the operator $\hbox{\rm Pal}$.
\begin{lem} \label{lemjj} {\rm \cite{jj2005}} Let $x \in \mathcal{A}$, $w \in \mathcal{A}^*$. If $|w|_x=0$, then $\hbox{\rm Pal}(wx)=\hbox{\rm Pal}(w)x\hbox{\rm Pal}(w)$. Otherwise, we write $w=w_1xw_2$ with $|w_2|_x=0$. The longest palindromic prefix of $\hbox{\rm Pal}(w)$ which is followed by $x$ in $\hbox{\rm Pal}(w)$ is $\hbox{\rm Pal}(w_1)$. Thus, $\hbox{\rm Pal}(wx)=\hbox{\rm Pal}(w)\hbox{\rm Pal}(w_1)^{-1}\hbox{\rm Pal}(w)$.
\end{lem}
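The operator $\hbox{\rm Pal}$ is simply the iterated right palindromic closure along the directive word; a short self-contained Python sketch (names are ours):

```python
def pal(w: str) -> str:
    """Pal(w): iterated right palindromic closure along the directive word w."""
    def closure(u: str) -> str:
        # shortest palindrome having u as a prefix
        for i in range(len(u) + 1):
            if u[i:] == u[i:][::-1]:
                return u + u[:i][::-1]
    u = ""
    for x in w:
        u = closure(u + x)
    return u
```

For instance, `pal("abc")` returns `"abacaba"`, a palindromic prefix of the Tribonacci word, and `pal("abca")` equals `pal("abc")` concatenated with itself, as the lemma above predicts with $w_1=\varepsilon$.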
\begin{defn} For $a, b \in \mathcal{A}$, we define the following endomorphisms of $\mathcal{A} ^*$:
\vspace{-0.2cm}
\begin{itemize}
\item [\rm i)] $\psi_a(a)=\overline{\psi }_a(a)=a$;
\item [\rm ii)] $\psi_a(x)=ax$, if $x \in \mathcal{A} \setminus \{a\}$;
\item [\rm iii)] $\overline{\psi }_a(x)=xa$, if $x \in \mathcal{A} \setminus \{a\}$;
\item [\rm iv)] $\theta_{ab}(a)=b$ , $\theta_{ab}(b)=a$, $\theta_{ab}(x)=x$, $x \in \mathcal{A} \setminus \{a,b\}$.
\end{itemize}
\end{defn}
The endomorphisms $\psi$ and $\overline \psi$ can be naturally extended to a finite word $w=w[0]w[1]\cdots w[n-1]$. Then $\psi_w(a)=\psi_{w[0]}(\psi_{w[1]}(\cdots (\psi_{w[n-1]}(a))\cdots ))$ and $\overline \psi_w(a)=\overline \psi_{w[0]}(\overline \psi_{w[1]}(\cdots (\overline \psi_{w[n-1]}(a))\cdots ))$ , with $a \in \mathcal{A}$.
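These endomorphisms are easy to implement; a sketch (function names are ours) that also lets one check the conjugacy relation $\psi_a(u)=a\,\overline\psi_a(u)\,a^{-1}$ for nonempty $u$:

```python
def psi(a: str, u: str) -> str:
    """psi_a: fixes a and maps x to ax for x != a, extended letterwise."""
    return "".join(c if c == a else a + c for c in u)

def psi_bar(a: str, u: str) -> str:
    """bar-psi_a: fixes a and maps x to xa for x != a, extended letterwise."""
    return "".join(c if c == a else c + a for c in u)

def theta(a: str, b: str, u: str) -> str:
    """theta_ab: exchanges the letters a and b, fixing all other letters."""
    return u.translate(str.maketrans(a + b, b + a))

def psi_word(w: str, x: str) -> str:
    """psi_w(x) = psi_{w[0]}(psi_{w[1]}(... psi_{w[n-1]}(x) ...))."""
    for a in reversed(w):
        x = psi(a, x)
    return x
```

For instance, `psi_word("ab", "c")` returns `"abac"`, and for any nonempty $u$, $\overline\psi_a(u)$ ends with $a$, so $a\,\overline\psi_a(u)\,a^{-1}$ makes sense and equals $\psi_a(u)$.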
Similarly to the Sturmian morphisms, we can define the episturmian morphisms as follows.
\begin{defn}\cite{jp2002} The set $\mathscr{E}$ of {\it episturmian morphisms} is the monoid generated by the morphisms $\psi_a, \overline{\psi}_a, \theta_{ab}$ \index{$\psi_a$}\index{$\overline{\psi}_a$}\index{$ \theta_{ab}$} under composition. The set $\mathscr{S}$ of {\it standard episturmian morphisms} is the submonoid generated by the $\psi_a$ and $\theta_{ab}$; the set of {\it pure episturmian morphisms} is the submonoid generated by the $\psi_a$ and $\overline \psi_a$.
\end{defn}
As with Sturmian morphisms, episturmian morphisms have the following characteristic property: a morphism $f$ is {\it episturmian} if $f(s)$ is episturmian for any episturmian sequence~$s$.
\section{Epichristoffel words}
In this section, we generalize Christoffel words to a $k$-letter alphabet and we call this generalization {\it epichristoffel words}.
Let us first recall some properties of Christoffel words that will be used to define their generalization.
\begin{lem} {\rm \cite{bdl1997}} \label{LyndonChristo}A word $w$ is a Christoffel word if and only if $w$ is a balanced Lyndon word.
\end{lem}
The next proposition follows from the works of S\'e\'ebold, Richomme, Kassel and Reutenauer \cite{ps1996,ps1998,gr2007,kr2007} and is proved in \cite{wfc1999}.
\begin{prop} \label{christoMorph} Christoffel words and their conjugates are exactly the words obtained by the application of Sturmian morphisms to a letter.
\end{prop}
Lemma \ref{LyndonChristo} and Proposition \ref{christoMorph} imply the following corollary.
\begin{cor} \label{agen} In the conjugation class of a Christoffel word, the Lyndon word is the Christoffel word.
\end{cor}
Note that Corollary \ref{agen} is the result we extend to define epichristoffel words.
\begin{defn}\label{defEpiClass} A finite word $w \in \mathcal{A}^*$ belongs to an {\it epichristoffel class} if it is the image of a letter by an episturmian morphism.
\end{defn}
\begin{defn} A finite word $w \in \mathcal{A}^*$ is {\it epichristoffel}\index{mot!epichristoffel@\'epichristoffel} if it is the unique Lyndon word occurring in an epichristoffel class.
\end{defn}
In the sequel, a word in an epichristoffel class will be called {\it $c$-epichristoffel} for short.
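Under these definitions, an epichristoffel word can be computed concretely: apply a pure standard episturmian morphism (a composition of the $\psi_a$) to a letter, then take the least conjugate. A sketch (helper names are ours; restricting to the $\psi_a$ loses nothing up to conjugacy, since $\psi_a$- and $\overline\psi_a$-images are conjugate):

```python
def psi(a: str, u: str) -> str:
    """psi_a: fixes a and maps x to ax otherwise, extended letterwise."""
    return "".join(c if c == a else a + c for c in u)

def epichristoffel(directive: str, x: str) -> str:
    """Lyndon (lexicographically least) conjugate of psi_{directive}(x)."""
    w = x
    for a in reversed(directive):
        w = psi(a, w)
    return min(w[i:] + w[:i] for i in range(len(w)))
```

For instance, `epichristoffel("aba", "b")` returns `"aabab"`, which on a 2-letter alphabet is the Christoffel word of slope $2/3$; `epichristoffel("aab", "c")` returns `"aabaac"` on 3 letters.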
The following result ensures that epichristoffel classes are well defined.
\begin{prop} \label{propguil} Let $w$ and $w'$ be conjugate finite words. Then $w=\phi(u)$ and $w'=\phi'(u')$, with $\phi, \phi' \in \{\psi_a, \overline{\psi}_a \}$, for $u, u' \in \mathcal{A}^*$, $a \in \mathcal{A}$ if and only if $u$ and $u'$ are conjugate.
\end{prop}
{\nl\it Proof.\ } \begin{itemize}
\item [($\Longrightarrow$)] Without loss of generality, we can suppose that $\phi=\phi'=\psi_a$, since $\psi_a(w)=a\overline \psi_a(w) a^{-1}$ and so, $\psi_a(w)$ is conjugate to $\overline \psi_a(w)$ for any word $w$. Thus, we can write $w=a^{n_0}v[0]a^{n_1}v[1]\cdots a^{n_k}v[k]$, with $v[i] \neq a$ and $n_i >0$ for $0 \leq i \leq k$. Since $w=\psi_a(u)$, using injectivity of $\psi_a$, we have $u=a^{n_0-1}v[0]a^{n_1-1}v[1]\cdots a^{n_k-1}v[k]$. Since $w$ and $w'$ are conjugate, we can write $w'=a^{\alpha}v[i]a^{n_{i+1}}v[i+1]\cdots a^{n_{i-1}}v[i-1]a^\beta$, with $\alpha+\beta = n_i$ and $\alpha \geq 1$. Thus, $u'=a^{\alpha-1}v[i]a^{n_{i+1}-1}v[i+1]\cdots a^{n_{i-1}-1}v[i-1]a^\beta$. Comparing $u$ and $u'$, we conclude that $u$ is conjugate to $u'$.
\item[($\Longleftarrow$)] If $u$ and $u'$ are conjugate, then there exist $v,t$ such that $u=vt$ and $u'=tv$. Applying respectively the morphisms $\phi$ and $\phi'$ over $u$ and $u'$, we obtain $\phi(u)=\phi(v)\phi(t)$ and $\phi'(u')=\phi'(t)\phi'(v)$. If $\phi=\phi'$ the result follows. Otherwise, let us suppose $\phi=\psi_a$ and $\phi'=\overline \psi_a$. Then we conclude using the fact that $\psi_a(u)=a\overline \psi_a(u)a^{-1}$:
$$\psi_a(u)=a\overline \psi_a(v)a^{-1}a\overline \psi_a(t)a^{-1}= a\overline \psi_a(v)\overline \psi_a(t)a^{-1}.$$
\vspace{-1.5cm}
{\flushright \rule{1ex}{1ex} \par\medskip}
\end{itemize}
The finite factors of episturmian sequences, also called finite Arnoux-Rauzy words, have already been studied. In \cite{jp2002}, the authors used a subclass of $c$-epichristoffel words without mentioning that it is a generalization of Christoffel words. In their paper, they denoted by $h_n$ the standard episturmian words, that is, the words obtained by applying standard episturmian morphisms to a letter. The $c$-epichristoffel words are exactly the conjugates of the standard episturmian words, and the smallest word in each conjugacy class is epichristoffel. Notice that they form a subclass of the Arnoux-Rauzy words, since they are all factors of episturmian sequences, but not every factor of an episturmian sequence is obtained by applying an episturmian morphism to a letter. For instance, the word $abacab\underline{aabac}ababacabaabacaba\cdots$ contains the finite Arnoux-Rauzy word $aabac$, which is not $c$-epichristoffel.
In \cite{jp2002}, the authors proved the following two properties.
\begin{prop} \textnormal{(\cite{jp2002}, prop. 2.8, prop. 2.12)} \label{propal1} Every standard episturmian word is primitive and can be written as the product of two palindromic words.
\end{prop}
It is clear that any standard episturmian word is conjugate to an epichristoffel word. Proposition \ref{propguil} can be used to show the converse. Consequently, Proposition \ref{propal1} can be generalized for any $c$-epichristoffel word, {using the following lemma.}
\begin{lem} \textnormal{(\cite{djp2001}, Lemma 3)} \label{lemcon} {The word $u\in \mathcal{A}^*$ is a palindrome if and only if $\psi_a(u)a$ and $a \overline \psi_a(u)$ are palindromes, for $a \in \mathcal{A}$.}
\end{lem}
\begin{prop} \label{propal} Every $c$-epichristoffel word is primitive and can be written as the product of two palindromic words.
\end{prop}
{\nl\it Proof.\ } {By induction on the number of morphisms. For a single morphism applied to a letter, we get $w=ab$ or $w=ba$, with $a, b \in \mathcal{A}$ and $a\neq b$, which is the product of two palindromes. Let us suppose that for a $c$-epichristoffel word $w$, there exist palindromic words $u$, $v$ such that $w=uv$. Let $x=\psi _c(w)=\psi _c(uv)=\psi _c(u) \psi _c(v)$ (resp. $x=\overline \psi_c(w)=\overline \psi_c(u)\overline \psi_c(v)$), for $c \in \mathcal{A}$. Then $x=\psi _c(u) cc ^{-1} \psi _c(v)$ (resp. $x=\overline \psi _c(u) c ^{-1} c\overline \psi _c(v)$), where $\psi _c(u) c$, $c ^{-1} \psi _c(v)$ (resp. $\overline \psi _c(u) c ^{-1}$, $c \overline \psi _c(v)$) are palindromic words by Lemma} \ref{lemcon}. \rule{1ex}{1ex} \par\medskip
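Proposition \ref{propal} can be checked computationally. The following is a minimal Python sketch (the helper names are ours, not from the paper; the morphisms $\psi_a$ and $\overline\psi_a$ are implemented directly from their definitions): it builds the $c$-epichristoffel word $\psi_a\psi_b\psi_a(c)=abaabac$ and verifies that every conjugate splits into two palindromes, one possibly empty.

```python
def psi(a, w):
    """Standard episturmian morphism psi_a: a -> a, x -> a x for x != a."""
    return "".join(c if c == a else a + c for c in w)

def psibar(a, w):
    """Antistandard version: a -> a, x -> x a for x != a."""
    return "".join(c if c == a else c + a for c in w)

def is_palindrome(u):
    return u == u[::-1]

def two_palindrome_split(w):
    """Return a split w = p1 p2 into two palindromes (p1 or p2 may be empty), or None."""
    for i in range(len(w) + 1):
        if is_palindrome(w[:i]) and is_palindrome(w[i:]):
            return w[:i], w[i:]
    return None

# psi_a psi_b psi_a (c) = abaabac; its conjugacy class contains the
# epichristoffel word aabacab used later in the text.
w = psi("a", psi("b", psi("a", "c")))
conjugates = [w[i:] + w[:i] for i in range(len(w))]
```

Running the sketch, every conjugate of $abaabac$ indeed admits a two-palindrome factorization, e.g. $aabacab = aa\cdot bacab$.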
Let us now show how some of the properties of Christoffel words can be generalized to epichristoffel words.
Recall that for Christoffel {words}, we have:
\begin{theo} \label{thdldl}{\rm \cite{dldl2006}} Let $w$ be a non empty finite word. The following conditions are equivalent: \vspace{-0.2cm}
\begin{itemize}
\item [\rm i)] $w$ is a factor of a Sturmian sequence;
\item [\rm ii)] the fractional root $z_w$ of $w$ is conjugate to a Christoffel word.
\end{itemize}
\end{theo}
First, note that the equivalence in Theorem \ref{thdldl} cannot be generalized to epichristoffel words. Indeed, let us consider the episturmian sequence
$$s=aabaacaabaacaabaabaa \cdot{ caabaacaabaaa}\cdots$$
Then $w=caabaacaabaaa$ is a factor of $s$, but its fractional root $z_w=w$ is not $c$-epichristoffel, as we will see later in Example \ref{exa35}.
On the other hand, the converse holds for episturmian sequences and epichristoffel words.
\begin{theo} Let $w$ be a nonempty word whose fractional root is $c$-epichristoffel. Then $w$ is a factor of an episturmian sequence.
\end{theo}
{\nl\it Proof.\ } Let $w=z_w^k$, with $k \in \mathbb{Q}$, $k\geq 1$, where $z_w$ is the fractional root of $w$. Let us suppose that $z_w$ is $c$-epichristoffel. Thus there exist $x \in \mathcal{A}^*$ and $a \in \mathcal{A}$ such that $\phi^{(0)}\phi^{(1)}\cdots \phi^{(n)}(a)=z_w$, with $\phi^{(i)}\in \{\psi_{x[i]}, \overline \psi_{x[i]}\}$. Then $w$ is a factor of $z_w^{\lceil k \rceil}=(\phi^{(0)}\phi^{(1)}\cdots \phi^{(n)} (a))^{\lceil k \rceil}=\phi^{(0)}\phi^{(1)}\cdots \phi^{(n)}(a^{\lceil k \rceil})$. It is sufficient to take an episturmian sequence having $a^{\lceil k \rceil}$ as a factor and apply the morphism $\phi^{(0)}\phi^{(1)}\cdots \phi^{(n)}$: we obtain that $\phi^{(0)}\phi^{(1)}\cdots \phi^{(n)} (a^{\lceil k \rceil})$ is a factor of an episturmian sequence, and so is $w$. \rule{1ex}{1ex} \par\medskip
\begin{prop} Let $w \in \mathcal{A}^*$ be a $c$-epichristoffel word. Then, the set of factors of length $\leq |w|$ of its conjugacy class is closed under mirror image.
\end{prop}
{\nl\it Proof.\ } First note that the set of factors of length $\leq |w|$ of the epichristoffel class of $w$ is the same as that of $w^2$. Since any $c$-epichristoffel word $w$ is the product of two palindromes (by Proposition \ref{propal}), let $w=p_1p_2$, with $p_1$, $p_2$ palindromes. Then $w^2=p_1p_2p_1p_2$, and it follows that $\widetilde{w}=\widetilde{p_1p_2}=p_2p_1$ is a factor of $w^2$, hence a conjugate of $w$. Consequently, the mirror image of any factor of length $\leq |w|$ of $w^2$ is a factor of $\widetilde{w}^2$, and therefore also a factor of $w^2$, that is, a factor of the epichristoffel class of $w$. \rule{1ex}{1ex} \par\medskip
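The closure under mirror image can be tested directly on an example. The sketch below (helper name is ours) collects the factors of length at most $|w|$ of $w^2$ for the $c$-epichristoffel word $w=aabacab$ and checks that the set is closed under reversal, as the proposition asserts.

```python
def factors_up_to(word, max_len):
    """All factors of `word` of length between 1 and max_len."""
    return {word[i:j] for i in range(len(word))
            for j in range(i + 1, min(i + max_len, len(word)) + 1)}

w = "aabacab"                      # a c-epichristoffel word
f = factors_up_to(w * 2, len(w))   # factors of the epichristoffel class
closed = all(u[::-1] in f for u in f)
```

Here `closed` evaluates to `True`, in accordance with the proposition.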
{\rem The right palindromic closure of a $c$-epichristoffel word $w$ is often a prefix of $w^2$, but this is not the case in general. It suffices to consider the word $w=abcbab$, for which $w^{(+)}=abcbab\cdot cba$.
}
For Christoffel words, we have:
\begin{lem} {\rm \cite{dlm1994}}A Christoffel word can always be written as the product of two Christoffel words.
\end{lem}
But:
\begin{lem} \label{lemneg} An epichristoffel word cannot always be written as the product of two epichristoffel words. \end{lem}
{\nl\it Proof.\ } It is sufficient to consider the epichristoffel word $aabacab$. The only decompositions into $c$-epichristoffel factors are $a\cdot abacab$ and $aab\cdot acab$, but $abacab$ and $acab$ are not Lyndon words (assuming $a< b< c$), hence not epichristoffel.
\rule{1ex}{1ex} \par\medskip
\begin{lem} Any $c$-epichristoffel word having length $> 1$ can be non-uniquely written as the product of two $c$-epichristoffel words.
\end{lem}
{\nl\it Proof.\ } For the non-uniqueness, it is sufficient to consider the example of the word $aabacab$ given in the proof of Lemma \ref{lemneg}. By definition, any $c$-epichristoffel word can be written as $\phi^{(0)}\phi^{(1)} \cdots \phi^{(n-1)}(a)$, with $a \in \mathcal{A}$, $\phi^{(i)}\in \{\psi_{w[i]}, \overline \psi_{w[i]} \}$, $w \in \mathcal{A}^n$ and $w[n-1]\neq a$. Assume $\phi^{(n-1)}=\psi_{w[n-1]}$. To prove the existence of the product, it is then sufficient to consider the words $\phi^{(0)} \phi^{(1)}\cdots \phi^{(n-2)}(w[n-1])$ and $\phi^{(0)}\phi^{(1)}\cdots \phi^{(n-2)}(a)$, since
\vspace{-0.2cm}
\begin{eqnarray*}
\phi^{(0)}\phi^{(1)}\cdots \phi^{(n-1)}(a)&=& \phi^{(0)}\phi^{(1)}\cdots \phi^{(n-2)}(w[n-1]a)\\
&=& \phi^{(0)}\phi^{(1)}\cdots \phi^{(n-2)}(w[n-1]) \cdot \phi^{(0)}\phi^{(1)}\cdots \phi^{(n-2)}(a).
\end{eqnarray*}
The case $\phi^{(n-1)}=\overline \psi_{w[n-1]}$ is analogous: we would have obtained a conjugate.
{\flushright \rule{1ex}{1ex} \par\medskip}
\vspace{-0.2cm}
\section{Epichristoffel $k$-tuples}
Recall from \cite{bl1993} that for a given pair $(p,q)$, with $p, q \in \mathbb N$, there exists a Christoffel word with occurrence numbers of letters $p$ and $q$ if and only if $p$ and $q$ are relatively prime. Moreover, it is possible to construct the corresponding Christoffel word using a Cayley graph (see \cite{br2006}).
In this section, we give an algorithm which determines whether there exists an epichristoffel word $w$ over the alphabet $\mathcal{A} =\{a_0,a_1,\ldots,a_{k-1}\}$ with occurrence numbers of letters $p=(p_0,p_1,\dots, p_{k-1})$, that is, $p_i=|w|_{a_i}$ for $0\leq i \leq k-1$. If so, we also give an algorithm that constructs it.
\begin{defn} Let $p=(p_0,p_1,\ldots,p_{k-1})$ be a $k$-tuple of nonnegative integers. Then the {\it operator} $T: \mathbb N^k \rightarrow \mathbb Z^k$ is defined over the $k$-tuple $p$ as
\vspace{-0.2cm}
$$T(p)=T(p_0,p_1,\ldots,p_{k-1})=(p_0,p_1,\ldots, p_{i-1},\left(p_i - \sum _{j=0, j \neq i}^{k-1} p_j\right ), p_{i+1}, \ldots,p_{k-1}),$$
\vspace{-0.2cm}
where $p_i \geq p_j$, $\forall j\neq i$.
\end{defn}
\begin{prop} \label{ktuplets} Let $p$ be a $k$-tuple. There exists an epichristoffel word with occurrence numbers of letters $p$ if and only if iterating $T$ over $p$ yields a $k$-tuple $p'$ with $p'_j=0$ for $j\neq m$ and $p'_m=1$, for a unique $m$ such that $0\leq m \leq k-1$.
\end{prop}
The idea of using the operator $T$ comes from the algorithm computing the greatest common divisor of three integers described in \cite{cmr1999} and from the tuples described in \cite{jj2000}.
{\exa \label{exa35} There is no epichristoffel word with the occurrence numbers of letters $(2,2,9)$. Indeed, $T(2,2,9)=(2,2,5)$, $T^2(2,2,9)=T(2,2,5)=(2,2,1)$, $T^3(2,2,9)=T(2,2,1)=(2,-1,1)$.
On the other hand, the $6$-tuple $q=(1,1,2,4,8,16)$ does yield an epichristoffel word:
\begin{eqnarray*}
T(1,1,2,4,8,16)&=&(1,1,2,4,8,0)\\
T^2(q)&=&T(1,1,2,4,8,0)=(1,1,2,4,0,0)\\
T^3(q)&=&T(1,1,2,4,0,0)=(1,1,2,0,0,0)\\
T^4(q)&=&T(1,1,2,0,0,0)=(1,1,0,0,0,0)\\
T^5(q)&=&T(1,1,0,0,0,0)=(1,0,0,0,0,0).
\end{eqnarray*}
}
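The iterations in the example above can be reproduced with a short sketch. Function names are ours, and we break ties at the first maximal entry, an assumption: the definition allows any maximal index, and either choice leads to the same success or failure here.

```python
def T(p):
    """One step of the operator T: subtract the sum of the other entries
    from a maximal entry (ties broken at the first maximal index)."""
    i = p.index(max(p))
    rest = sum(p) - p[i]
    return p[:i] + [p[i] - rest] + p[i + 1:]

def admits_epichristoffel(p):
    """Iterate T; succeed iff we reach a tuple with a single entry equal to 1."""
    p = list(p)
    while True:
        if sum(p) == 1 and max(p) == 1:
            return True
        q = T(p)
        if min(q) < 0 or q == p:   # negative entry, or stuck: no word exists
            return False
        p = q
```

On the tuples of the example, `admits_epichristoffel([2, 2, 9])` fails (a negative entry appears) while `admits_epichristoffel([1, 1, 2, 4, 8, 16])` succeeds.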
Some lemmas are required in order to prove Proposition \ref{ktuplets}.
\begin{lem} \label{freq} Let $w=\phi(u)$, with $\phi \in \{\psi_{a_0}, \overline \psi_{a_0}\}$, $\mathcal{A}=\{a_0,a_1,\ldots, a_{k-1}\}$ and $u\in \mathcal{A}^*$. Then
\vspace{-0.2cm}
\begin{itemize}
\item [\rm i)] $\displaystyle |w|_{a_0}=\sum_{i=0}^{k-1} |u|_{a_i}=|u|$;
\vspace{-0.2cm}
\item [\rm ii)] $\displaystyle |w|_{a_0}=|u|_{a_0}+\sum_{i=1}^{k-1} |w|_{a_i}$.
\end{itemize}
\end{lem}
{\nl\it Proof.\ } The first equality comes from the definition of $\psi_{a_0}$ and $\overline \psi_{a_0}$. For each letter $\alpha \neq a_0$, $\psi_{a_0}(\alpha)=a_0\alpha$ and $\overline \psi_{a_0}(\alpha)=\alpha a_0$, while $\overline\psi_{a_0}(a_0)=\psi_{a_0}(a_0)=a_0$: $\phi$ adds as many occurrences of $a_0$ as there are occurrences of the other letters in the word $u$. The second equality follows from the first one, since $|w|_{a_i}=|u|_{a_i}$ for $i\neq0$.
\vspace{-0.8cm}
{\flushright \rule{1ex}{1ex} \par\medskip }
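The two counting identities of the lemma can be checked on a sample word. In this sketch (helper name is ours) we apply $\psi_{a}$ to $u=abacbc$ and verify both i) and ii) for $a_0=a$.

```python
def psi(a, w):
    """Standard episturmian morphism psi_a: a -> a, x -> a x for x != a."""
    return "".join(c if c == a else a + c for c in w)

u = "abacbc"
w = psi("a", u)          # w = aabaacabac

# i)  |w|_a = sum over the alphabet of |u|_{a_i} = |u|
assert w.count("a") == sum(u.count(x) for x in "abc") == len(u)
# ii) |w|_a = |u|_a + sum over the other letters of |w|_{a_i}
assert w.count("a") == u.count("a") + w.count("b") + w.count("c")
```

Both assertions hold, as the lemma predicts.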
\begin{lem} \label{letterMax} Let $w\in \mathcal{A}^*$ be a $c$-epichristoffel word. Then, there exist a $c$-epichristoffel word $u \in \mathcal{A}^*$, $|u| >1$ and an episturmian morphism $\phi \in \{\psi_{a_0}, \overline \psi_{a_0}\}$, with $a_0 \in \mathcal{A}$, such that $w=\phi(u)$ if and only if $|w|_{a_0} > |w|_{a_i}$ for all $a_i \in \mathcal{A}$, $i \neq 0$.
\end{lem}
{\nl\it Proof.\ }
\begin{itemize}
\item [($\Longrightarrow$)] By contradiction. Let us suppose there exists $u$ with $|u|>1$ such that $w=\phi(u)$ and $|w|_{a_0}$ is not maximal. Then, there exists at least one letter $a_i \in \mathcal{A}$ such that $|w|_{a_i} \geq |w|_{a_0}$. Without loss of generality, let us suppose that $i=1$. By Lemma \ref{freq}, $|w|_{a_0}=\sum_{i=0}^{k-1} |u|_{a_i}=|u|_{a_0}+|w|_{a_1}+\sum_{i =2}^{k-1}|u|_{a_i}$, which implies $|w|_{a_0}-|w|_{a_1}=|u|_{a_0}+\sum_{i=2}^{k-1}|u|_{a_i} \leq 0$; this is possible only if $|u|_{a_i}=0$ for all $i \neq 1$, and then $|w|_{a_1}=|w|_{a_0}$. Hence, we would have $u={a_1}^n$ and $w=\phi({a_1}^n)$. The only possibility is $n=1$, since a $c$-epichristoffel word is primitive. Then $|u|=1$: contradiction. Hence, if $w=\phi(u)$ with $|u|>1$, then $|w|_{a_0}$ is maximal.
\item [($\Longleftarrow$)] Let us now suppose that $|w|_{a_0} > |w|_{a_i}$ for all $a_i \in \mathcal{A}$, $i \neq 0$. Since $w$ is $c$-epichristoffel, there exist an episturmian morphism $\phi \in \{ \psi_{a_i}, \overline \psi_{a_i}\}$ and a $c$-epichristoffel word $u \in \mathcal{A}^*$ such that $\phi(u)=w$. Let us suppose that $i \neq 0$. Using Lemma \ref{freq}, $|w|_{a_i}=|w|_{a_0}+|u|_{a_i}+ \sum_{1\leq j \leq k-1, j\neq i}|w|_{a_j}$. Since $|w|_{a_0}> |w|_{a_i}$, it implies that $|u|_{a_i}+\sum_{1\leq j\leq k-1,j\neq i}|w|_{a_j}<0$, which is impossible. Thus $i=0$. \rule{1ex}{1ex} \par\medskip
\end{itemize}
An interesting consequence of Lemma \ref{letterMax} is the following.
\begin{prop} \label{propUnique} Let $u$ and $v$ be $c$-epichristoffel words. If $|u|_\alpha=|v|_\alpha$ for all $\alpha \in \mathcal{A}$, then $u$ and $v$ are conjugate. In other words, a $k$-tuple of occurrence numbers of letters determines at most one epichristoffel conjugacy class.
\end{prop}
{\nl\it Proof.\ } {By induction. The result is true when $|u|=|v| \leq 2$. Assume now that $|u| \geq 3$. By definition of epichristoffel words, there exist letters $a$ and $b$, and $c$-epichristoffel words $u', v'$ such that $u=\phi(u')$, $v=\phi'(v')$, with $\phi \in \{\psi_a,\overline \psi_a\}$ and $\phi' \in \{\psi_b,\overline\psi_b\}$. From $|u| \geq 3$ and the definitions of the morphisms $\psi_a, \overline \psi_a, \psi_b, \overline \psi_b$, we get $|u'| \geq 2$ and $|v'| \geq 2$. From Lemma} \ref{letterMax} {and the fact that $|u|_\alpha=|v|_\alpha$ for all letters $\alpha$, it follows that $a=b$ (and $|u|_a=|v|_a \geq |u|_\alpha=|v|_\alpha$ for all letters $\alpha$). Now, from the definition of $u'$ and $v'$ and the properties of $u$ and $v$, we deduce that $|u'|_\alpha=|v'|_\alpha$ for all letters $\alpha$. By the inductive hypothesis, $u'$ and $v'$ are conjugate. Proposition} \ref{propguil} {allows us to conclude.} \rule{1ex}{1ex} \par\medskip
The algorithm induced by the iteration of Lemma \ref{letterMax} leads to a construction of words which are images of a letter under an episturmian morphism, that is, $c$-epichristoffel words. Indeed, iterating $T$ gives a construction of a $c$-epichristoffel word with $p$ describing the occurrence numbers of letters. We take $p$ as the initial $k$-tuple. The iteration of the operator $T$ described previously over $p$ yields a finite sequence of $k$-tuples $p^{(0)}$, $p^{(1)}$, $p^{(2)}, \dots$ We proceed as in Proposition \ref{ktuplets}, applying the operator $T$, and moreover we keep a piece of information that allows us to construct the word: the letter with the maximal number of occurrences. Let
\vspace{-0.2cm}
$$p^{(s)} \xrightarrow [ ]{\text{$i$}} p^{(s+1)}$$
denote the relation $T(p^{(s)})=p^{(s+1)}$, where $p^{(s)}_i$ is a maximal entry of $p^{(s)}$.
Then, performing $T$ until we reach a $k$-tuple $p^{(r)}$ whose entries are all $0$ except a single entry equal to $1$, we get the sequence of $k$-tuples
$$p^{(0)} \xrightarrow [ ]{\text{$i_0$}} p^{(1)} \xrightarrow [ ]{\text{$i_1$}} p^{(2)} \xrightarrow [ ]{\text{$i_2$}} \cdots \xrightarrow [ ]{\text{$i_{r-2}$}} p^{(r-1)} \xrightarrow [ ]{\text{$i_{r-1}$}} p^{(r)}.$$
Then,
$$\psi_{a_{i_0}}(\psi_{a_{i_1}}(\dots(\psi_{a_{i_{r-1}}}(\alpha))\dots))$$
is a $c$-epichristoffel word having $p$ as occurrence numbers of letters, where $\alpha$ is the letter whose entry in $p^{(r)}$ equals $1$. The epichristoffel word is the Lyndon word of the conjugacy class of the word obtained. Here, Proposition \ref{propUnique} ensures that it is sufficient to consider standard episturmian morphisms in order to construct a $c$-epichristoffel word with $p$ describing the occurrences of the letters. \\
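The construction just described can be sketched as follows. The code is ours, not from the paper; it records the letter of maximal occurrence at each application of $T$ and then applies the standard morphisms $\psi_{a_{i_0}},\dots,\psi_{a_{i_{r-1}}}$ in reverse order to the remaining letter $\alpha$. Ties are broken at the first maximal entry, an assumption that happens to match the worked example below.

```python
def psi(a, w):
    """Standard episturmian morphism psi_a: a -> a, x -> a x for x != a."""
    return "".join(c if c == a else a + c for c in w)

def construct(p, alphabet):
    """Return a c-epichristoffel word with occurrence numbers p, or None."""
    p, seq = list(p), []
    while not (sum(p) == 1 and max(p) == 1):
        i = p.index(max(p))
        rest = sum(p) - p[i]
        if rest == 0 or p[i] - rest < 0:
            return None                  # no epichristoffel word exists
        seq.append(alphabet[i])          # record the letter of maximal count
        p[i] -= rest
    w = alphabet[p.index(1)]             # the remaining letter alpha
    for a in reversed(seq):              # apply psi_{i_0} ... psi_{i_{r-1}}
        w = psi(a, w)
    return w

word = construct([5, 10, 16], "abc")
```

For the triple $(5,10,16)$ of the next example, `construct` returns the standard episturmian word $\psi_{cbabbbb}(c)$, with occurrence numbers exactly $(5,10,16)$; the epichristoffel representative is then the Lyndon rotation of the result.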
\noindent {\it {Proof of Proposition} \ref{ktuplets}.} Follows directly from Lemmas \ref{freq}, \ref{letterMax} and from the ideas described in the previous paragraph. The only difficulty concerns the last iteration, that is when $w=\phi(u)$, with $|w|_{a_0}$ not maximum. As seen in the previous proof, it implies that $u=a_1$ and $w=\phi(a_1) \in \{a_0a_1, a_1a_0\}$, which is clearly a $c$-epichristoffel word. Notice here that $\psi_{a_0}(a_1)=\overline \psi_{a_1}(a_0)$ and $\overline \psi_{a_0}(a_1)= \psi_{a_1}(a_0)$ are conjugate. \rule{1ex}{1ex} \par\medskip
{\exa For the triplet $(5,10,16)$ describing the occurrence numbers of respectively the letters $a,b$ and $c$, the sequence obtained is
$$(5,10,16) \xrightarrow[ ]{\text{$c$}} (5,10,1) \xrightarrow[ ]{\text{$b$}} (5,4,1) \xrightarrow [ ]{\text{$a$}}(0,4,1)\xrightarrow[ ]{\text{$b$}} (0,3,1)\xrightarrow[ ]{\text{$b$}} (0,2,1)\xrightarrow[ ]{\text{$b$}} (0,1,1) \xrightarrow[]{\text{$b$}} (0,0,1).$$
Performing the algorithm, we find the word
\vspace{-0.2cm}
\begin{eqnarray*}
\psi_{cbabbbb}(c)&=&\psi_{cbabbb}(bc)\\
&=&\psi_{cbabb}(\psi_b(bc))\\
&=&\psi_{cbab}(\psi_b(bbc))\\
&=&\psi_{cba}(\psi_b(bbbc))\\
&=&\psi_{cb}(\psi_a(bbbbc))\\
&=&\psi_{c}(\psi_b(ababababac))\\
&=&\psi_{c}(babbabbabbabbabc)\\
&=& cbcacbcbcacbcbcacbcbcacbcbcacbc.\\
\end{eqnarray*}
Since it is obtained by applying a standard episturmian morphism to a letter, this standard episturmian word is a representative of the epichristoffel conjugacy class. Moreover, its conjugate that is a Lyndon word, and hence an epichristoffel word, is $acbcbcacbcbcacbcbcacbcbcacbc\cdot cbc$ for the order $a<b<c$.
}
Note that in the previous example, the choice of the last transition is arbitrary: we could have chosen the transition $(0,1,1) \xrightarrow[]{\text{$c$}}(0,1,0)$ instead of $(0,1,1) \xrightarrow[]{\text{$b$}} (0,0,1)$ and we would have obtained a conjugate of $\psi_{cbabbbb}(c)$ which is also $c$-epichristoffel.
\section{Criteria to be in an epichristoffel class}
Let us recall a characterization of words in the conjugacy class of a Christoffel word.
\begin{theo} \label{factSturm} {\rm \cite{dldl2006}} Let $w \in \mathcal{A}^*$ be a primitive word. Every conjugate $w'$ of $w$ is a factor of a Sturmian sequence, not necessarily the same one, if and only if $w$ is conjugate to a Christoffel word.
\end{theo}
The goal of this section is to prove the following generalization of Theorem \ref{factSturm}.
\begin{theo} \label{leth} Let $w$ be a finite primitive word different from a letter. Then there exists an episturmian sequence $z$ such that all the conjugates of $w$ are factors of $z$ if and only if $w$ is a $c$-epichristoffel word.
\end{theo}
Note that in order to generalize Theorem \ref{factSturm} to a $k$-letter alphabet, $k \geq 3$, an additional condition is necessary: the conjugates must be factors of the {\bf same} episturmian sequence. For example, all conjugates of the word $abc$ are factors of episturmian sequences, but $abc$ is not a $c$-epichristoffel word, since $T(1,1,1)=(1,1,-1)$.
Let us recall the following results of Justin and Pirillo, which allow us to write any episturmian sequence as the image of another episturmian sequence under an episturmian morphism.
\begin{cor} {\rm \cite{jp2002}}\label{cor37} Let $s \in \mathcal{A}^\omega$ and $\Delta=x[0]x[1]x[2]\cdots$, $x[i] \in \mathcal{A}$. Then $s$ is a standard episturmian sequence with directive sequence $\Delta$ if and only if there exists an infinite sequence of sequences $s^{(0)}=s, s^{(1)}, s^{(2)}, \ldots$ such that for any $i \in \mathbb N$, {$s^{(i-1)}=\psi_{x[i]}(s^{(i)})$}.
\end{cor}
It can also be generalized to non-standard episturmian sequences. In order to do so, let us recall the notion of a {\it spinned word}. Let $\overline{\mathcal{A}}= \{\overline a \, | \, a \in \mathcal{A}\}$. A letter $\overline x$ is considered as $x$ with {\it spin} $1$, while $x$ itself is considered as $x$ with spin $0$. Then, an {\it infinite spinned word} $\check s=\check s[0] \check s[1] \check s[2] \cdots $ is an element of $(\mathcal{A} \cup \overline{\mathcal{A}})^\omega$.
\begin{theo} {\rm \cite{jp2002}} \label{dirseq} A sequence $t\in \mathcal{A}^\omega$ is episturmian if and only if there exist a spinned sequence $\check{\Delta}=\check x[0]\check x[1]\check x[2]\cdots$, $\check x[i] \in \mathcal{A} \cup \overline{\mathcal{A}}$, and an infinite sequence of recurrent sequences $t^{(0)}=t$, $t^{(1)},t^{(2)}, \ldots$ such that for $i \in \mathbb N$, $t^{(i-1)}=\psi_{x[i]}(t^{(i)})$ if $\check x[i]$ has spin $0$ (resp. $t^{(i-1)}=\overline \psi_{x[i]}(t^{(i)})$ if $\check x[i]$ has spin $1$). Moreover, $t$ is equivalent to the standard episturmian sequence with directive sequence $\Delta=x[0]x[1]\cdots$.
\end{theo}
{Theorem} \ref{dirseq} {allows us to write the directive sequence of a non standard episturmian sequence, as we do in the following lemma.}
\begin{lem} \label{gp1} Let $\check \Delta(s)=(\check a)^k\check b \check z$ be the directive sequence of an episturmian sequence $s$, with $a \neq b \in \mathcal{A}$ and $z \in \mathcal{A}^\omega$. Then the blocks of $c\neq a$ have length $1$ and the blocks of $a$'s have length $\ell$, $k$ or $(k+1)$, where $\ell \leq k+1$ is the length of the block of $a$'s prefix of the sequence.
\end{lem}
{\nl\it Proof.\ } Let us consider the equivalent standard episturmian sequence $t$ directed by $\Delta(t)=a^kbz$.
By Corollary \ref{cor37}, $t=\psi_{a^kb}(t')$ for a standard episturmian sequence $t'$. Since $\psi_{a^kb}(a)=a^kba$, $\psi_{a^kb}(b)=a^kb$ and, for $c \notin \{a,b\}$, $\psi_{a^kb}(c)=a^kba^kc$, the statement is true for $t$. Since the languages of $s$ and $t$ are equal, it only remains to consider the prefix of $s$, where a block of length $<k$ can appear. Indeed, for the episturmian sequence $s$, since it is directed by $\check \Delta (s)=(\check a)^k\check b \check z$, we easily deduce that $s$ begins with a prefix of $a$'s of length $\ell$ equal to the number of $\check a$'s having spin $0$ in the prefix $(\check a)^k$ of its directive sequence, which is less than or equal to $k$. \rule{1ex}{1ex} \par\medskip
{\rem An episturmian sequence may not have blocks of $a$'s of length $(k+1)$. This is the case if its directive sequence has the form $\check a^k\check z$, with $|\check z|_{\check a}=0$.
}
One can be easily convinced of the following statement.
\begin{lem} \label{gp2} In an episturmian sequence $w=\psi_\alpha(t)$ or $w=\overline \psi _\alpha (t)$, any letter different from $\alpha$ is preceded and followed by the letter $\alpha$, except for the first letter of the sequence, if it is different from $\alpha$.
\end{lem}
\begin{lem} \label{lemDeco} Let $z=\psi _{a_0}(t)$ be a standard episturmian sequence and $w=a_0ya_1$ a factor of $z$, with $a_0\neq a_1 \in \mathcal{A}$ and $y \in \mathcal{A}^*$. Then, there exists a factor $u$ of $t$ such that $\psi_{a_0}(u)=w$.
\end{lem}
{\nl\it Proof.\ } If $z=\psi_{a_0}(t)$, $t=t[0]t[1]t[2]\cdots$ and $\hbox{\rm Card}(\mathcal{A})=k$, then by the definition of $\psi$, $z=\psi_{a_0}(t[0])\psi_{a_0}(t[1])\cdots \in \{a_0,a_0a_1, a_0a_2,\ldots, a_0a_{k-1}\}^\omega$. Since $w$ starts with $a_0$ and ends with $a_1$, the factor $w$ of $z=\psi_{a_0}(t)$ can be written as a concatenation $w \in \{a_0,a_0a_1, a_0a_2, \ldots, a_0a_{k-1}\}^*$. Thus we can construct a word $u$ by associating to $a_0a_i$ the letter $a_i$, for $i\neq 0$, and to $a_0$ the letter $a_0$. Then $w$ is the image of the word $u$ by the morphism $\psi_{a_0}$. \rule{1ex}{1ex} \par\medskip
\begin{prop} \label{prop4} Let $z=\psi _a(t)$, where $t$ and $z$ are standard episturmian sequences. Let $w$ be a factor of $z$ not power of a letter, such that $|w|>1$ and all its conjugates are also factors of $z$. Then, there exists a factor $u$ of $t$ such that $w=\psi _a(u)$ or $w=\overline{\psi}_a(u)$.
\end{prop}
{\nl\it Proof.\ } Let $\beta, \gamma \in \mathcal{A}$, with $\beta,\gamma \neq a$, $y \in \mathcal{A} ^*$ and $w$ factor of $z$. There are $4$ cases to consider.
\begin{itemize}
\item [i)] $w=\beta y\gamma$: its conjugate $y\gamma \beta$ is not a factor of $z$, since any occurrence of the letter $\beta$ is preceded by the letter $a$, by Lemma \ref{gp2}. Then $w$ does not satisfy the hypothesis.
\item [ii)] $w=ay\beta$: by Lemma \ref{lemDeco}, there exists $u$ factor of $t$ such that $\psi _a(u)=w$.
\item [iii)] $w=\beta ya$: symmetric to the case ii). If $w=\beta ya$ is a factor of $z=\psi _a(t)$ and satisfies the hypothesis, then there exists $u$ factor of $t$ such that $\overline{\psi _a}(u)=w$.
\item [iv)] $w=aya$: rewrite $w=a ^ my'a ^n$, with $m, n \geq 1$ maximal. The factor $y'$ is not empty, since $w$ is assumed not to be a power of a letter. Let us suppose that there exists $\beta \in \mathcal{A}$, $\beta \neq a$, such that $w\beta=a ^my'a ^n\beta$ is a factor of $z$. Since by Lemma \ref{gp1} any block of $a$'s has length $k$ or $(k+1)$, for some $k\in \mathbb N\setminus \{0\}$, we have $n=k$ or $n=k+1$. On the other hand, by the hypothesis, the conjugate $y'a ^{m+n}$ of $w=a ^m y' a ^n$ is also a factor of $z$. Thus $m+n \leq k+1$. But since $m\neq 0$, the only possibility is $n=k$ and $m=1$. Consequently $w=ay'a ^k$. Its conjugate $y' a ^{k+1}$ is also a factor of $z$ and, since $y'$ does not start with $a$ by the maximality of $m$, it must be preceded by $a$: $ay' a ^{k+1}=ay'a ^ka=wa$ is a factor of $z$. Since $z$ is episturmian, $wa$ being a factor of $z$ implies that there exist $\ell \in \mathbb N$ and $\beta \in \mathcal{A}$, $\beta \neq a$, such that $wa^\ell\beta$ is also a factor. By Lemma \ref{lemDeco}, there exists a word $u'=ua^{\ell-1}\beta$ such that $\psi_a(u')=wa^\ell\beta$. Since $\psi_a(a^{\ell-1}\beta)=a^\ell\beta$, we get $w=\psi_a(u)$.
\end{itemize}
{\flushright \rule{1ex}{1ex} \par\medskip}
We can now prove our main theorem.\\
\noindent {\it {Proof of Theorem} \ref{leth}}. \begin{itemize}
\item [($\Longrightarrow$)] \begin{itemize} \item [i)] Let us suppose that all conjugates of $w$ are factors of a standard episturmian sequence $z=\psi_a(t)$. We proceed by induction on the number of morphisms. Since $z=\psi_a(t)$, by Proposition \ref{prop4}, there exists $u$ such that $w=\psi_a(u)$ or $w=\overline \psi_a(u)$. Let us now prove that all conjugates $u'$ of $u$ are also factors of $t$. Since $u$ and $u'$ are conjugate, using Proposition \ref{propguil}, $\psi_a(u')$ is a conjugate of $\psi_a(u)$. Hence, again by Proposition \ref{prop4}, there exists a factor $u''$ of $t$ with $\psi_a(u')=\psi_a(u'')$ or $\psi_a(u')=\overline \psi_a(u'')$. The second case is possible only if $u''$ is a power of $a$, and then the first case holds. By injectivity of $\psi_a$, the first case implies $u'=u''$, that is, $u'$ is a factor of $t$. We then find a sequence of episturmian morphisms $(\phi _0, \phi _1,\ldots ,\phi _k) \in \{\psi_a,\overline \psi_a \, |\, a \in \mathcal{A}\}^{k+1}$ and a sequence of words $w, w_1, w_2,\ldots ,w_k$ such that $|w| \geq |w_1| \geq |w_2| \geq \ldots \geq |w_k|=1$, $w=\phi _0(\phi _1(\ldots(\phi _k(w_k))\ldots))$ and $w_i=\phi_i(\phi_{i+1}( \ldots (\phi_k(w_k))\ldots))$. Thus, $w$ is the image of a letter by an episturmian morphism, implying that $w$ is $c$-epichristoffel.
\item [ii)] If $z$ is not standard, by Definition \ref{episst}, we know that there exists an episturmian sequence $z'$ such that $F(z)=F(z')$. Thus, we can then consider the sequence $z'$ and conclude as in i).
\end{itemize}
\item [($\Longleftarrow$)] Since $w$ is $c$-epichristoffel, we can write $w=f(a)$, where $f \in \mathscr E$ and $a \in \mathcal{A}$. Let $s$ be an episturmian sequence having the factor $aa$, and let us consider the episturmian sequence $f(s)$. Then $f(s)$ contains the factor $f(aa)=ww$, hence every conjugate of $w$, and we conclude.
\end{itemize}
\vspace{-1cm}{\flushright \rule{1ex}{1ex} \par\medskip}
\section{Concluding remarks}
In this paper, we have mostly considered the $c$-epichristoffel words, also known as the conjugates of the finite standard episturmian words. Some of the properties of standard Sturmian words generalize naturally to the $c$-epichristoffel ones. Unfortunately, we did not find a characterization of the epichristoffel word of each conjugacy class. Geometrical properties of Christoffel words are well known and very interesting. It would be nice to know whether there is a similar geometrical interpretation for the epichristoffel words. In this paper, we only verified whether a few properties of the Christoffel words could be generalized to the epichristoffel ones. Since the literature on Christoffel words is wide, there are still many open problems about epichristoffel words. For instance: do they satisfy some kind of balance property? For a fixed $k \geq 3$, does there exist an epichristoffel word over a $k$-letter alphabet of any given length? Is it possible to give a closed formula for the number of epichristoffel words of a given length? Episturmian morphisms have been extensively studied, for instance in \cite{jj2001,jp2002,gr20032,gr2003,jp2004,jj2005,gr20072}. It might be useful to use their properties to work on the epichristoffel words.
Epichristoffel words are all the more interesting as they seem to be related to the Fraenkel conjecture. This conjecture states that for a finite $k$-letter alphabet, there exists a unique infinite word, up to letter permutation and conjugation, that is balanced and has pairwise distinct letter frequencies. This unique word, if it exists, is conjectured to be periodic and can be written as $p^\omega$, with $p$ an epichristoffel word. Knowing more about epichristoffel words might therefore help to prove the Fraenkel conjecture.
\section*{Acknowledgments}
This paper is an extended version of a paper presented in Mons (Belgium) during the 12th Mons Theoretical Computer Science days \cite{gp2008}. The author would also like to thank Christophe Reutenauer for giving her the idea of considering this interesting class of words, for useful discussions and remarks. Many thanks also to the two anonymous referees whose suggestions and constructive remarks helped to improve considerably the quality of the paper.
\bibliographystyle{alpha}
{\footnotesize
| {
"timestamp": "2009-04-24T10:50:25",
"yymm": "0805",
"arxiv_id": "0805.4174",
"language": "en",
"url": "https://arxiv.org/abs/0805.4174",
"abstract": "Sturmian sequences are well-known as the ones having minimal complexity over a 2-letter alphabet. They are also the balanced sequences over a 2-letter alphabet and the sequences describing discrete lines. They are famous and have been extensively studied since the 18th century. One of the {extensions} of these sequences over a $k$-letter alphabet, with $k\\geq 3$, are the episturmian sequences, which generalizes a construction of Sturmian sequences using the palindromic closure operation. There exists a finite version of the Sturmian sequences called the Christoffel words. They are known since the works of Christoffel and have interested many mathematicians. In this paper, we introduce a generalization of Christoffel words for an alphabet with 3 letters or more, using the episturmian morphisms. We call them the {\\it epichristoffel words}. We define this new class of finite words and show how some of the properties of the Christoffel words can be generalized naturally or not for this class.",
"subjects": "Combinatorics (math.CO)",
"title": "On a generalization of Christoffel words: epichristoffel words",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.974434788304004,
"lm_q2_score": 0.8221891283434877,
"lm_q1q2_score": 0.80116968922324
} |
https://arxiv.org/abs/2302.02872 | Limiting distributions of conjugate algebraic integers | Let $\Sigma \subset \mathbb{C}$ be a compact subset of the complex plane, and $\mu$ be a probability distribution on $\Sigma$. We give necessary and sufficient conditions for $\mu$ to be the weak* limit of a sequence of uniform probability measures on a complete set of conjugate algebraic integers lying eventually in any open set containing $\Sigma$. Given $n\geq 0$, any probability measure $\mu$ satisfying our necessary conditions, and any open set $D$ containing $\Sigma$, we develop and implement a polynomial time algorithm in $n$ that returns an integral monic irreducible polynomial of degree $n$ such that all of its roots are inside $D$ and their root distributions converge weakly to $\mu$ as $n\to \infty$. We also prove our theorem for $\Sigma\subset \mathbb{R}$ and open sets inside $\mathbb{R}$ that recovers Smith's main theorem~\cite{Smith} as special case. Given any finite field $\mathbb{F}_q$ and any integer $n$, our algorithm returns infinitely many abelian varieties over $\mathbb{F}_q$ which are not isogenous to the Jacobian of any curve over $\mathbb{F}_{q^n}$. | \section{Introduction}
\subsection{Background}
This paper is motivated by the following question. What compact sets $\Sigma$ in the complex plane $\mathbb{C}$ contain infinitely many sets of conjugate algebraic integers, and how are they distributed in $\Sigma$? The ideas of this project originated in the works of many, including Schur~\cite{Schur}, Fekete~\cite{Fekete1923}, Siegel~\cite{MR12092}, Robinson~\cite{MR175881}, Serre~\cite{MR4093205}, Smyth~\cite{MR736460} and Smith~\cite{Smith} on algebraic integers. We introduce the basic concepts and state our theorems which answer some open problems raised in previous works.
\newline
Let $\Sigma$ be a compact subset of the complex plane. Let
\begin{equation}\label{transfn}
d_{\Sigma}(n):= \max_{z_1,\dots,z_n\in \Sigma}\prod_{i<j}|z_i-z_j|^{\frac{2}{n(n-1)}}.
\end{equation}
Fekete proved that the following limit exists, and called it the \textit{transfinite diameter} of $\Sigma$:
\begin{equation}\label{transf}
d_{\Sigma}:=\lim_{n\to \infty} d_{\Sigma}(n).
\end{equation}
For example, the transfinite diameter of a circle of radius $r$ is $r$. The transfinite diameter of $\Sigma$ equals the \textit{capacity} of $\Sigma$~\cite[Chapter 2]{MR2730573}.
Fekete~\cite{Fekete1923}, generalizing some results of Schur~\cite{Schur}, proved that if $d_\Sigma<1$, then there is only a finite number of irreducible monic integral polynomials such that all of their roots lie in $\Sigma$.
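For instance, for the unit circle the $n$-th roots of unity are extremal, and $\prod_{i<j}|z_i-z_j|^2=n^n$ (the absolute discriminant of $z^n-1$), so $d_{\Sigma}(n)=n^{1/(n-1)}\to 1$. The following Python sketch checks this numerically; for a general $\Sigma$, an arbitrary point configuration only gives a lower bound on $d_\Sigma(n)$.

```python
import numpy as np

def diam_estimate(points):
    """Return (prod_{i<j} |z_i - z_j|)^{2/(n(n-1))} for a finite point set."""
    pts = np.asarray(points, dtype=complex)
    n = len(pts)
    log_prod = sum(np.log(np.abs(pts[i] - pts[j]))
                   for i in range(n) for j in range(i + 1, n))
    return float(np.exp(2.0 * log_prod / (n * (n - 1))))

for n in (5, 20, 80):
    roots = np.exp(2j * np.pi * np.arange(n) / n)   # n-th roots of unity
    # for these points, prod_{i<j} |z_i - z_j|^2 = n^n, so the estimate is n^{1/(n-1)}
    print(n, diam_estimate(roots))
```

The printed values decrease toward $1$, the transfinite diameter of the circle.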
\\
Note that Fekete's condition $d_{\Sigma}<1$ is optimal. For example, let $\Sigma$ be the unit circle. Then the primitive roots of unity give infinitely many irreducible integral polynomials all of whose roots lie in $\Sigma$. The strict converse, however, is not true: the circle of radius $r>1$ around the origin, where $r$ is not an algebraic integer, gives a counter-example. Nevertheless, Fekete and Szeg\"o \cite{MR72941} proved that if $\Sigma$ is symmetric about the real axis and $d_{\Sigma}\geq 1$, then any open set $D$ containing $\Sigma$ contains infinitely many complete sets of conjugate algebraic integers. Robinson~\cite{Robinson} proved an analogous theorem for real point sets using the properties of the Chebyshev polynomials of $\Sigma$.
\\
Recently, Smith~\cite{Smith} studied the weak* limits of sequences of uniform probability measures on complete sets of conjugate algebraic integers inside $\Sigma\subset \mathbb{R},$ where $d_{\Sigma}>1$ and $\Sigma$ is a countable union of intervals. Smith used two results from the geometry of numbers (the flatness theorem and Minkowski's second theorem) together with Robinson's method~\cite{Robinson} to give necessary and sufficient conditions for a probability distribution $\mu$ on the real line to be the weak* limit of such uniform probability measures on totally real conjugate algebraic integers.
\\
We generalize Smith's work to the complex plane in the next subsection. Our method is different from Smith's method. In particular, we do not use a flatness theorem or the properties of square-free integral polynomials in our paper. Both of them are crucial ingredients in Smith's work. Instead, we prove a new result in Theorem~\ref{mdim} which is of independent interest. Theorem~\ref{mdim} allows us to improve some results of Smith~\cite[Proposition 3.5]{Smith}.
\\
Furthermore, our results do not rely on the existence of the Chebyshev polynomials of $\Sigma$; see subsection~\ref{chebsec} and Proposition~\ref{chpolreal}. In fact, our method gives a new proof of Robinson's and Smith's results without using the properties of Chebyshev polynomials; see Proposition~\ref{chpolreal}. Our method is based on an initial probabilistic sampling of roots with respect to the equilibrium measure, followed by a greedy algorithm that deforms the roots along a gradient vector.
\\
To the best of our knowledge, our algorithm and its implementation have no prior analogue. Our numerical results show some new features of the algebraic integers that we discuss further in subsection~\ref{complex}.
\subsection{Main results}\label{result}\subsubsection{Arithmetic probability measures}
Suppose that $\Sigma\subset \mathbb{C}$ is compact and symmetric about the real axis with $d_{\Sigma}\geq 1.$ Let $\mathcal{P}_\Sigma$ be the space of probability measures supported on $\Sigma$ equipped with the weak* topology. Let
\[
\Sigma(\rho):=\{z\in \mathbb{C}: |z-\sigma|<\rho \text{ for some }\sigma\in\Sigma \}.
\]
If $\Sigma\subset \mathbb{R}$, let
\[
\Sigma_{\mathbb{R}}(\rho):=\{x\in \mathbb{R}: |x-\sigma|<\rho \text{ for some }\sigma\in\Sigma \}.
\]
Note that $\Sigma_{\mathbb{R}}(\rho)\subset \mathbb{R}$, and it is defined only for $\Sigma\subset \mathbb{R}.$
\begin{definition}
A measure $\mu \in \mathcal{P}_\Sigma$ is called an \textit{arithmetic probability measure} (respectively, a \textit{real arithmetic probability measure}) if $\mu$ is the weak* limit of a sequence of distinct uniform probability measures on complete sets of conjugate (respectively, totally real conjugate) algebraic integers lying eventually inside $\Sigma(\varepsilon)$ (respectively, $\Sigma_{\mathbb{R}}(\varepsilon)$) for every $\varepsilon > 0$.
The set of all arithmetic probability measures is denoted by $\mathcal{A}_{\Sigma}\subset \mathcal{P}_\Sigma$ (respectively, by $\mathcal{A}_{\Sigma_{\mathbb{R}}}\subset \mathcal{P}_\Sigma$).
\end{definition}
Next, we define a convex subset of $\mathcal{P}_\Sigma$ that includes $\mathcal{A}_{\Sigma}.$
Let $P(x)$ be a polynomial with complex coefficients of degree $n$. Define its associated root probability measure on the complex plane to be
\[
\mu_P:=\frac{1}{n} \sum_{i=1}^{n} \delta_{\alpha_i},
\]
where $\alpha_i$ for $1 \leq i\leq n$ are roots of $P$ and $\delta_{\alpha}$ is the delta probability measure at $\alpha.$ Suppose that $\mu\in \mathcal{A}_{\Sigma}.$
Then $\mu_{P_n}\stackrel{\ast}{\rightharpoonup} \mu$ as $n\to\infty$ for some sequence $\{P_n \}$ of distinct irreducible monic polynomials with integral coefficients. Since $P_n$ has real coefficients, its roots are symmetric about the real axis, so $\mu$ should also be symmetric about the real axis. Let $Q$ be a polynomial with integral coefficients. Note that if $P_n \nmid Q,$
\[
\int \log |Q(x)| d\mu_{P_n}(x)= \frac{\log|\text{Res}(Q,P_n)|}{\deg(P_n)} \geq 0,
\]
where $\text{Res}(Q,P_n) \in \mathbb{Z}$ is the resultant of $P_n$ and $Q.$ By taking $n\to \infty,$ Serre~\cite{MR4093205} proved that
\begin{equation}\label{conds}
\int \log |Q(x)| d\mu(x) \geq 0
\end{equation}
for every non-zero $Q(x)\in \mathbb{Z}[x].$ Let $\mathcal{B}_{\Sigma}\subset \mathcal{P}_{\Sigma}$ be the set of all $\mu\in \mathcal{P}_{\Sigma}$ that are symmetric about the real axis and satisfy~\eqref{conds} for every non-zero $Q(x)\in \mathbb{Z}[x].$ Since $d_{\Sigma}\geq 1,$ it follows that the equilibrium measure of $\Sigma$ belongs to $\mathcal{B}_{\Sigma}$.
Note that $\mathcal{B}_{\Sigma}$ is a convex subset of $\mathcal{P}_{\Sigma}$ and $\mathcal{A}_{\Sigma}\subset \mathcal{B}_{\Sigma}.$ Serre~\cite{MR2428512} proved that there exist $\Sigma \subset \mathbb{R}^+$ and $\mu \in \mathcal{B}_{\Sigma}$ such that
$\int_{\mathbb{R}} x\, d\mu<1.8984.$ Smith~\cite{Smith} proved that if $\Sigma\subset \mathbb{R}$ satisfies certain technical conditions (satisfied by Serre's construction), then $\mathcal{A}_{\Sigma_{\mathbb{R}}}= \mathcal{B}_{\Sigma}.$ This gave a definitive answer to the Schur--Siegel--Smyth trace problem. Finding $\mu \in \mathcal{B}_{\Sigma}$ with the minimal expected value $\int_{\mathbb{R}} x\, d\mu$ is a linear programming problem,
and finding its optimal solution is an open problem; see~\cite{Smith}. It is the analogue of finding the magic function in the linear programming problem formulated by Cohn--Elkies for the sphere packing problem.
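The resultant identity behind condition~\eqref{conds} is easy to test numerically. The following Python sketch (using SymPy) checks it for the illustrative choices $P=x^2-x-1$ and $Q=x^2-2$, which are not taken from the text; here $\mathrm{Res}(Q,P)=-1$, so both sides vanish.

```python
import math
from sympy import symbols, Poly, resultant

x = symbols('x')
P = Poly(x**2 - x - 1, x)   # minimal polynomial of the golden ratio
Q = Poly(x**2 - 2, x)

# integral of log|Q| against mu_P: average of log|Q(alpha)| over the roots of P
lhs = sum(math.log(abs(complex(Q.as_expr().subs(x, a))))
          for a in P.nroots()) / P.degree()

# log|Res(Q, P)| / deg(P); the resultant is a non-zero integer, so rhs >= 0
R = resultant(Q.as_expr(), P.as_expr(), x)
rhs = math.log(abs(int(R))) / P.degree()
print(lhs, rhs)
```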
\begin{theorem}\label{general}
Suppose that $\Sigma\subset \mathbb{C}$ is compact and symmetric about the real axis. If $d_{\Sigma}<1$, then $\mathcal{A}_{\Sigma}= \mathcal{B}_{\Sigma}=\emptyset.$ Otherwise, $d_{\Sigma}\geq 1$ and
\(\mathcal{A}_{\Sigma}= \mathcal{B}_{\Sigma}\neq \emptyset.\) Moreover, if $\Sigma\subset \mathbb{R}$ is compact then \(\mathcal{A}_{\Sigma}= \mathcal{B}_{\Sigma}=\mathcal{A}_{\Sigma_{\mathbb{R}}}.\)
\end{theorem}
\begin{remark} Suppose that $\Sigma \subset \mathbb{R}$ is a finite union of closed intervals with $d_{\Sigma}>1$ and $\mu \in \mathcal{B}_{\Sigma}.$
Smith's main theorem~\cite[Theorem 1.6]{Smith} is equivalent, by~\cite[Proposition 2.4]{Smith}, to the existence of a sequence of conjugate algebraic integers lying inside $\Sigma$ and equidistributing to $\mu.$ We show that $\mathcal{B}_{\Sigma}=\mathcal{A}_{\Sigma_{\mathbb{R}}}$ implies this statement.
As we discussed, the circle of radius $r>1$ around the origin, where $r$ is not an algebraic integer, gives a counter-example~\cite{MR72941}. Knowing this, Smith raised the question of extending his result to $\Sigma\subset \mathbb{C}.$ Let $\mathrm{int}(\Sigma)$ be the interior of $\Sigma.$ If $\mu(\mathrm{int}(\Sigma)) =1$ and $d_{\mathrm{int}(\Sigma)}>1$, then Theorem~\ref{general} (with some extra work, using a sequence of open sets inside $\mathrm{int}(\Sigma)$ and a diagonal argument) implies the existence of a sequence of conjugate algebraic integers lying inside $\mathrm{int}(\Sigma)$ and equidistributing to $\mu.$ This answers Smith's question when $\mu(\mathrm{int}(\Sigma)) =1.$
\end{remark}
Smith's method does not directly imply Theorem~\ref{general}. For example, his method requires the complement of $\Sigma\subset \mathbb{C}$ to be connected. This condition is trivially satisfied for compact $\Sigma\subset \mathbb{R},$ but fails for some $\Sigma\subset \mathbb{C}$, e.g., for the unit circle. We introduce new ideas to prove the above theorem. This is discussed in Section~\ref{method}.
\\
Next, we introduce some new notation to state a quantitative version of the Theorem~\ref{general} under some assumptions. Let \begin{equation}\label{logpot}
U_{\mu}(z):= \int \log|z-w| d\mu(w),
\end{equation}
which is called the logarithmic potential of $\mu$ (some authors define it with the opposite sign).
A measure $\mu$ is a H\"older probability measure if there exists $\delta>0$ and $A>0$ such that
\[
\mu([a,b]\times[c,d]) \leq A\max(|b-a|, |c-d|)^{\delta}
\]
for every $a<b, c<d.$
\\
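As a concrete example of the definition above, the equilibrium (arcsine) measure of an interval $[a,b]\subset\mathbb{R}$, with distribution function $F(x)=\frac{2}{\pi}\arcsin\sqrt{(x-a)/(b-a)}$, is H\"older with exponent $\delta=1/2$. The following Python sketch checks the defining inequality on a grid of intervals; the constant $A=1$ below works for $[a,b]=[-2,2]$ but is not optimized.

```python
import numpy as np

a, b = -2.0, 2.0
# CDF of the equilibrium (arcsine) measure of [a, b]
F = lambda x: (2 / np.pi) * np.arcsin(np.sqrt(np.clip((x - a) / (b - a), 0, 1)))

A, delta = 1.0, 0.5                       # claimed Hoelder data
ok = True
for h in (0.1, 0.01, 0.001):
    xs = np.linspace(a - 1, b + 1, 2000)
    mass = F(xs + h) - F(xs)              # mu([x, x+h])
    ok = ok and bool(np.all(mass <= A * h**delta + 1e-12))
print(ok)
```

The worst case occurs at the endpoints, where $\mu([a,a+h])=\frac{2}{\pi}\arcsin\sqrt{h/(b-a)}\leq \sqrt{h/(b-a)}$, which explains the exponent $1/2$.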
We cite the following definition from Smith~\cite[Definition 2.7]{Smith}.
\begin{definition}
Fix a H\"older measure $\mu$ with support contained in the compact subset $\Sigma\subset \mathbb{C}$. Given a complex polynomial $P$ of degree less than or equal to $n$, define the $n$-norm of $P$ with respect to $(\mu,\Sigma)$ by
\[
\|P\|_n:=\max_{z\in \mathbb{C}}\left(e^{-nU_{\mu}(z)}|P(z)| \right).
\]
It follows from~\cite[Theorem 4.1]{logpotentials} that if $\Sigma\subset \mathbb{C}$ is compact with empty interior and connected complement, and $(P_n)_{n\geq 1}$ is a sequence of monic polynomials with $\deg P_n = n$ satisfying
\[
\limsup_n \|P_n\|_n^{1/n} \leq 1,
\]
then the roots of $P_n$ equidistribute to $\mu.$
\end{definition}
\begin{theorem}\label{main1}
Suppose $\Sigma\subset \mathbb{C}$ is compact and has empty interior with connected complement. Suppose that $\mu \in \mathcal{B}_{\Sigma}$ is a H\"older measure with exponent $\delta,$ and let $D$ be any open set containing $\Sigma$. For every large enough integer $n\geq 1$, there exists an irreducible polynomial $h_n$ of degree $n$ such that all roots of $h_n$ are inside $D$, and
\[
\|h_n\|_n\leq e^{n^{1-\delta'}}
\]
for some $\delta'>0$ which depends only on the H\"older exponent of $\mu.$ As a result, the roots of $h_n$ equidistribute to $\mu.$ Moreover, if $\Sigma\subset \mathbb{R}$ is compact, then one may take any open set $D\subset \mathbb{R}$ (in the topology of the real line) containing $\Sigma$ and prove the same result.
\end{theorem}
\subsubsection{Complexity of approximating arithmetic probability measures}\label{complex}
The next goal is to study the complexity of finding a sequence of irreducible monic polynomials $\{P_n\}$ such that $\{ \mu_{P_n} \}$ converges weakly to a given $\mu \in \mathcal{B}_{\Sigma}.$
\begin{theorem}\label{main2}
Let $\Sigma$ be as in Theorem \ref{main1}.
Suppose that $\mu \in \mathcal{B}_{\Sigma}$ is a H\"older probability measure and $D$ is any open set containing $\Sigma$. There is a polynomial time algorithm in $n$ that returns an integral, irreducible, monic polynomial $g_n$ of degree $n$ such that all roots of $g_n$ are inside $D$, and
\[
\|g_n\|_n\leq e^{\frac{Cn\log(\log(n))^3}{\log(n)}}
\]
where $C > 0$ depends only on $\mu$ and is independent of $n$. As a result, the roots of $g_n$ equidistribute to $\mu.$ Moreover, if $\Sigma\subset \mathbb{R}$ is compact, then one may take any open set $D\subset \mathbb{R}$ (in the topology of the real line) containing $\Sigma$ and prove the same result.
\end{theorem}
We implemented a version of this algorithm for an arithmetic probability measure $\mu\in \mathcal{B}_{\Sigma}$ constructed by Serre~\cite{MR2428512}. Explicitly, let $\Sigma=[a,b],$
\[
\mu=c\mu_{[a,b]}+(1-c)\nu_{[a,b]},
\]
where $a=0.1715,$ $b=5.8255,$ $c=0.5004,$ $\mu_{[a,b]}$ is the equilibrium measure on $[a,b]$, and $\nu_{[a,b]}$ is the pushforward of the equilibrium measure
on $[b^{-1},a^{-1}]$ under the map $z\to 1/z.$
Figure~\ref{fig} shows the density of $\mu$, and Figure~\ref{fig2} shows the number of roots of the output polynomial of our algorithm for $n=100$.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{rootdis.png}
\caption{Density function of $\mu$}
\label{fig}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering \includegraphics[width=\textwidth]{poly_degree_100.png}
\caption{Number of roots of $P_{100}(x)$}
\label{fig2}
\end{subfigure}
\caption{This compares the density function of $\mu$ with the histogram of roots of our algorithm's output with input $n=100$.}\label{interval}
\end{figure}
\subsection{Applications to abelian varieties over finite fields} Recently, Shankar and Tsimerman~\cite{shankar_tsimerman_2018}, \cite{Tsimerman1} conjectured that for every $g\geq 4$, every abelian variety of dimension $g$ defined over $\overline{\mathbb{F}_q}$ is isogenous to the Jacobian of a curve defined over $\overline{\mathbb{F}_q}.$ By Honda--Tate theory, the isogeny classes of simple abelian varieties over $\mathbb{F}_q$ (up to isogeny over $\mathbb{F}_q$) correspond to complete sets of conjugate algebraic integers all of whose elements have absolute value $\sqrt{q}.$ We apply our main theorem and Honda--Tate theory to prove the following.
\begin{theorem}
Given any finite field $\mathbb{F}_q$ and any integer $n$, our algorithm returns infinitely many abelian varieties over $\mathbb{F}_q$ which are not isogenous to the Jacobian of any curve over $\mathbb{F}_{q^m}$ for every $1\leq m\leq n$.
\end{theorem}
Using the algorithm discussed in Section~\ref{main} of the paper, the authors found an explicit polynomial, with highest-order terms being
$$x^{290} - 28x^{289} - {484}x^{288} + 20784x^{287} + \cdots,$$
which is irreducible with all roots inside $[-2\sqrt{3},2\sqrt{3}]\subset\mathbb R$ representing an abelian variety that is not isogenous to the Jacobian of any curve over $\mathbb F_3$ or $\mathbb F_9$.
\subsection{Method of proofs}\label{method}
In this section, we sketch proofs of our main theorems. Our main tool for studying the distribution of the roots of polynomials is
Jensen's formula. Let $P(z)$ be a polynomial with complex coefficients and fix $w\in \mathbb{C}.$ If $z_1, \dots, z_m$ are the roots of $P(z)$ inside the disc $|z-w|<R$ and there is no zero on $|z-w|=R,$ then Jensen's formula states
\begin{equation*}
\frac{1}{2\pi} \int_{0}^{2\pi} \log |P(Re^{i\theta}+w)| d\theta - \log |P(w)| =\log \frac{R^m}{|z_1-w|\dots |z_m-w|}.
\end{equation*}
In particular,
\begin{equation}\label{jensen}
\frac{1}{2\pi} \int_{0}^{2\pi} \log |P(Re^{i\theta}+w)| d\theta - \log |P(w)| \geq 0.
\end{equation}
Suppose now that $P$ is monic. Then $|P(w)|=|z_1-w|\cdots |z_{n}-w|$, where $n=\deg(P),$ and hence
\[
\frac{1}{2\pi} \int_{0}^{2\pi} \frac{\log |P(Re^{i\theta}+w)|}{n} d\theta =\frac{m}{n}\log R+ \frac{1}{n}\sum_{i=m+1}^n\log|z_{i}-w|,
\]
where $|z_i-w|<R$ for $1\leq i\leq m$ and $|z_i-w|>R$ for $m+1\leq i\leq n.$ We rewrite the above as
\begin{equation}\label{Jensen}
\frac{1}{2\pi} \int_{0}^{2\pi} \frac{\log |P(Re^{i\theta}+w)|}{n} d\theta = \int \log(z;w,R) d\mu_{P}(z),
\end{equation}
where
\[
\log(z;w,R):=\begin{cases} \log|z-w| &\text{ if } |z-w|>R, \\ \log{R} &\text{otherwise.}\end{cases}
\]
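Identity~\eqref{Jensen} can be verified numerically for a concrete monic polynomial. The following Python sketch uses the illustrative test data $P(z)=z^3-1$, $w=0$, $R=2$ (any choice works as long as no root of $P$ lies on the circle $|z-w|=R$); since all three roots lie inside $|z|<R$, both sides equal $\log 2$.

```python
import numpy as np

P_roots = np.exp(2j * np.pi * np.arange(3) / 3)   # roots of P(z) = z^3 - 1
n, w, R = 3, 0.0, 2.0

# left-hand side: average of log|P|/n over the circle |z - w| = R
M = 200000
theta = 2 * np.pi * np.arange(M) / M
Pz = (R * np.exp(1j * theta) + w) ** 3 - 1
lhs = float(np.mean(np.log(np.abs(Pz))) / n)

# right-hand side: int log(z; w, R) d mu_P, i.e. log|z - w| truncated below at log R
rhs = float(np.mean(np.log(np.maximum(np.abs(P_roots - w), R))))
print(lhs, rhs)
```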
It follows from \eqref{Jensen} that $\{ \mu_{P_n} \}$ converges weakly to a given probability measure $\mu$ if
\begin{equation}\label{conv1}
\frac{\log|P_n(z)|}{n}= U_{\mu}(z)+o(1)
\end{equation}
for every $z\in \mathbb{C}$ with $|z-z_{i,n}|>e^{-\sqrt{n}}$ for all $i$, where the $z_{i,n}$ are the roots of $P_n.$
We note that $\frac{\log|P_n(z)|}{n}$ is a harmonic function with singularities at the roots of $P_n$ and, as $|z|\to\infty$,
\[\frac{\log|P_n(z)|}{n} = \log|z|+ O\left(\frac{1}{|z|}\right).\]
We note that $ U_{\mu}(z)$ has a similar behavior as $|z| \to \infty.$ Since $U_{\mu}(z)$ and $\frac{\log|P_n(z)|}{n}$ are both harmonic functions with similar asymptotic behavior at infinity, it follows from the mean value theorem for harmonic functions that \eqref{conv1} is equivalent to
\begin{equation}\label{cvx}
\frac{\log|P_n(z)|}{n}\leq U_{\mu}(z)+o(1)
\end{equation}
for every $z\in \mathbb{C}$ assuming $\mathbb{C}\setminus\Sigma$ is connected and the interior of $\Sigma$ is empty. Let
\begin{equation}\label{defKn}
K_n:=\left\{ p(x)\in \mathbb{R}[x]: \deg(p)\leq n, \frac{\log |p(z)|}{n} \leq U_{\mu}(z) \text{ for every } z\in \mathbb{C} \right\}.
\end{equation}
$K_n$ is a symmetric, convex region inside the vector space of polynomials with real coefficients of degree at most $n$. To prove Theorem~\ref{main1}, we use Minkowski's second theorem to prove the existence of integral polynomials of degree $n$ inside $K_n$, up to a sub-exponential factor in $n$. The key observation is that the necessary condition \eqref{conds} implies that $K_n$ is well-rounded, meaning that the successive minima of the lattice of integral polynomials with respect to $K_n$ are close to each other; together with a lower bound on the volume of $K_n$, this implies the existence of integral polynomials of degree $n$ inside $K_n$ up to a sub-exponential factor in $n$. The following theorem is an important technical result that allows us to use Minkowski's second theorem.
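As a toy illustration of membership in $K_n$, take $\mu$ to be the uniform measure on the unit circle, so that $U_{\mu}(z)=\log\max(|z|,1)$; then $p\in K_n$ exactly when $|p(z)|\leq \max(|z|,1)^n$ for all $z$, and in particular every monomial $x^k$ with $k\leq n$ lies in $K_n$. The following Python sketch checks this inequality on a finite grid (which of course only samples the condition, rather than proving it):

```python
import numpy as np

n = 8
# potential of the uniform measure on the unit circle: U(z) = log max(|z|, 1)
U = lambda z: np.log(np.maximum(np.abs(z), 1.0))

xs = np.linspace(-3.0, 3.0, 301)
Z = xs[None, :] + 1j * xs[:, None]        # grid in the square [-3, 3]^2

all_in_Kn = True
for k in range(n + 1):                     # monomials p(x) = x^k with k <= n
    logp = k * np.log(np.abs(Z) + 1e-300)  # avoid log(0) at the grid point z = 0
    all_in_Kn = all_in_Kn and bool(np.all(logp / n <= U(Z) + 1e-12))
print(all_in_Kn)
```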
\begin{theorem}\label{mdim}
Suppose that $\mu$ is a probability measure with compact support on the complex plane. The following are equivalent:
\begin{enumerate}
\item \label{listcond1} $\mu$ is invariant by complex conjugation and~\eqref{conds} holds for every non-zero $Q(x)\in \mathbb{Z}[x].$
\item \label{listcond2} $\mu$ is invariant by complex conjugation and \begin{equation}\label{sconds}
\int \log |Q_m(x_1,\dots,x_m)| d\mu(x_1)\dots d\mu(x_m) \geq 0
\end{equation}
for every non-zero $Q_m(x_1,\dots,x_m)\in \mathbb{Z}[x_1,\dots,x_m].$
\end{enumerate}
\end{theorem}
Recall that
\[
\Sigma(\rho):=\{z\in \mathbb{C}: |z-\sigma|<\rho \text{ for some }\sigma\in\Sigma \}.
\]
Note that for any open set $D$ containing $\Sigma,$ there exists $\rho>0$ such that $\Sigma(\rho)\subset D.$
\\
For $\Sigma\subset \mathbb{C}$, we use Rouch\'e's theorem to show that all roots of our irreducible polynomial are inside $\Sigma(\rho).$ This is similar to the method of Fekete and Szeg\"o~\cite[Theorem I]{MR72941}.
\\
For $\Sigma\subset \mathbb{R}$, Robinson used Chebyshev polynomials to construct an integral polynomial of degree $n$ with $n$ sign changes on $\Sigma_{\mathbb{R}}(\rho)$, which forces the roots to be real and to lie inside $\Sigma_{\mathbb{R}}(\rho).$ Our results do not rely on the existence of the Chebyshev polynomials of $\Sigma$; see subsection~\ref{chebsec} and Proposition~\ref{chpolreal}. In fact, our method gives a new proof of Robinson's and Smith's results without using the properties of Chebyshev polynomials; see Proposition~\ref{chpolreal}. Our method is based on an initial probabilistic sampling of roots with respect to the equilibrium measure, followed by a greedy algorithm that deforms the roots along a gradient vector. Proposition~\ref{chpolreal} allows us to improve some results of Smith~\cite[Proposition 4.1]{Smith}.
\\
To prove Theorem~\ref{main2}, we apply a lattice algorithm due to Schnorr and find an irreducible, integral polynomial inside $e^{\frac{Cn\log(\log(n))^3}{\log(n)}}K_n$ in polynomial time in $n$, for some constant $C$ depending on $\mu$. In fact, we take the polynomial to be monic and Eisenstein at the prime 2. Note that this differs from the strongest bound for which we prove existence, namely $e^{Cn^{1-\delta_0}}K_n$ for some $\delta_0>0$. Other versions of Schnorr's algorithm can achieve stronger bounds at the cost of super-polynomial running time.
\section{Minkowski's theorem applied to the lattice of integral polynomials}
\subsection{ Logarithmic potentials}
Recall the notation
in subsections~\ref{result} and~\ref{method}. In this section, we assume that $\mu$ is supported on a compact set $\Sigma\subset \mathbb{C}$ and is a H\"older probability measure with exponent $\delta>0.$ Hence there exists $A>0$ such that for every $a<b$ and $c<d$,
\[
\mu([a,b]\times[c,d]) \leq A\max(|b-a|, |c-d|)^{\delta}.
\]
Recall the definition of the logarithmic potential of $\mu$ from~\eqref{logpot}
$$
U_{\mu}(z)= \int \log|z-w| d\mu(w).
$$
We prove a lemma on the asymptotic behaviour of logarithmic potentials.
\begin{lemma}\label{lem1}
Suppose that $\mu$ is a probability measure with compact support inside $|z|<r$ for $z\in \mathbb{C}.$ If $|w|>2r$, we have
\[
U_{\mu}(w)=\log|w|+O\left(\frac{r}{|w|}\right).
\]
\end{lemma}
\begin{proof}
We have
\begin{align*}
\left|U_{\mu}(w)-\log|w|\right| &\leq \int \left|\log |w-x| -\log|w| \right|d\mu(x)
\\
&=\int \left|\log \left|1-\frac{x}{w}\right|\right| d\mu(x) \ll \frac{r}{|w|}.
\end{align*}
\end{proof}
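As a sanity check of the lemma, take $\mu$ to be the uniform probability measure on the unit circle ($r=1$); then the mean-value property gives $U_{\mu}(w)=\log|w|$ exactly for $|w|>1$, so the error term vanishes. The following Python sketch evaluates the potential by quadrature at a few arbitrary sample points:

```python
import numpy as np

M = 100000
theta = 2 * np.pi * np.arange(M) / M

def U(w):
    """Logarithmic potential of the uniform probability measure on the unit circle."""
    return float(np.mean(np.log(np.abs(w - np.exp(1j * theta)))))

for w in (3 + 4j, -7.0, 10j):       # sample points with |w| > 2r, r = 1
    print(w, U(w), np.log(abs(w)))  # the two computed values agree for |w| > 1
```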
\begin{lemma}\label{holderr}
Suppose that $\mu$ is a H\"older probability measure on the complex plane with exponent $\delta>0$. Then for any $z_1,z_2\in \mathbb{C}$ with $|z_1-z_2|<1/2,$ we have
\[
\left|U_{\mu}(z_1)-U_{\mu}(z_2)\right|\ll |z_1-z_2|^{\delta'},
\]
where $\delta'<\min(1/2,\delta/2)$ is any positive exponent.
\end{lemma}
\begin{proof}
Suppose that $z_1,z_2\in \mathbb{C}$ and $|z_1-z_2|<1/2.$ We have
\begin{align*}
\left|U_{\mu}(z_1)-U_{\mu}(z_2)\right|&=\left| \int \log\frac{|z_1-w|}{|z_2-w|} d\mu(w)\right|.
\end{align*}
We define the following covering of the complex plane:
\begin{align*}
U_0:&=\left\{w\in \mathbb{C}: |w-z_2|>2 \sqrt{|z_1-z_2|} \right\},
\\
U_1:&=\left\{w\in \mathbb{C}: |z_1-z_2|<\min(|w-z_2|,|w-z_1|)\leq 2\sqrt{|z_1-z_2|} \right\},
\\
U_k:&= \left\{w\in \mathbb{C}: |z_1-z_2|^k<\min(|w-z_2|,|w-z_1|)\leq |z_1-z_2|^{k-1}\right\},
\end{align*}
where $k\geq 2$. Hence,
\begin{align*}
\left|U_{\mu}(z_1)-U_{\mu}(z_2)\right|\leq \sum_{i\geq 0}\left| \int_{U_i} \log\frac{|z_1-w|}{|z_2-w|} d\mu(w)
\right|.
\end{align*}
For $w\in U_0$, we have $\left|\frac{z_1-z_2}{z_2-w}\right|<1/2$. We write the Taylor expansion of $\log$ and obtain
\[
\left| \int_{U_0} \log\frac{|z_1-w|}{|z_2-w|} d\mu(w)
\right|=\left| \int_{U_0} \log \left| 1+\frac{z_1-z_2}{z_2-w}\right| d\mu(w)
\right| \leq \sum_{m\geq 1}\int_{U_0} \frac{1}{m}\left| \frac{z_1-z_2}{z_2-w}\right|^m d\mu(w)\ll |z_1-z_2|^{1/2}.
\]
For $w\in U_k$, where $k\geq 1$, we have
\[
\left| \int_{U_k} \log\frac{|z_1-w|}{|z_2-w|} d\mu(w)
\right| \leq \mu(U_k)k\big|\log|z_1-z_2| \big|\ll |z_1-z_2|^{\frac{k\delta}{2}}k\big|\log|z_1-z_2|\big|,
\]
where we used $\mu(U_k)\ll |z_1-z_2|^{\frac{k\delta}{2}}$, because $\mu$ is a H\"older probability measure with exponent $\delta>0$.
Therefore,
\[
\left|U_{\mu}(z_1)-U_{\mu}(z_2)\right|\ll \sqrt{|z_1-z_2|} +\sum_{k\geq 1}|z_1-z_2|^{k\delta/2}k\big|\log|z_1-z_2|\big|\ll |z_1-z_2|^{1/2}+|z_1-z_2|^{\delta/2}\big|\log|z_1-z_2|\big|.
\]
Therefore, we have
\[
\left|U_{\mu}(z_1)-U_{\mu}(z_2)\right|\ll |z_1-z_2|^{\delta'},
\]
where $\delta'<\min(1/2,\delta/2)$ is any positive exponent.
\end{proof}
\subsection{Minkowski's theorem}
Recall the definition of $K_n$ from~\eqref{defKn}:
\[
K_n=\left\{ p(x)\in \mathbb{R}[x]: \deg(p)\leq n, \text{ and } \frac{\log |p(z)|}{n} \leq U_{\mu}(z) \text{ for every } z\in \mathbb{C} \right\}.
\]
We identify the space of polynomials with real coefficients of degree less than or equal to $n$ with $\mathbb{R}^{n+1}$ by sending a polynomial to its coefficients. It is easy to see that $K_n$ is a symmetric convex region of $\mathbb{R}^{n+1}$.
\begin{lemma}\label{multlem}
Suppose that $\mu$ is a probability measure with compact support inside $|z|<r$ for $z\in \mathbb{C}.$ Let $p(x)\in \lambda K_n$ for some $\lambda>0,$ where $\deg(p)<n$. Then
\[
xp(x)\in r\lambda K_n.
\]
\end{lemma}
\begin{proof} Let
\[
h(z):=U_{\mu}(z)-\frac{\log |zp(z)|}{n}.
\]
We note that $h(z)$ is a harmonic function on $|z|>r $ and outside roots of $p.$ By Lemma~\ref{lem1}, we have
\[
h(z)=\frac{n-\deg(p)-1}{n} \log|z|+\frac{\log|a_p|}{n}+O(1/|z|),
\]
where $a_p$ is the top coefficient of $p.$ By the above and since $\deg(p)< n$ and $h(z)$ is harmonic, the minimum of $h(z)$ is obtained inside $ |z|<r$. Let $C:=\inf_{z\in \mathbb{C}} h(z)=\inf_{|z|<r} h(z).$ Since $p(x)\in \lambda K_n$, we have
\[
|zp(z)| \leq r \lambda e^{n U_{\mu}(z)}
\]
for any $|z|<r.$ This implies that
\[
C\geq -\frac{\log (r\lambda)}{n}.
\]
Therefore,
\[
\frac{e^{nU_{\mu}(z)}}{|zp(z)|}= e^{nh(z)}\geq e^{nC}\geq (r\lambda)^{-1}
\]
for any $z\in \mathbb{C}$; that is, $|zp(z)|\leq r\lambda\, e^{nU_{\mu}(z)}$ for all $z\in \mathbb{C}.$ This completes the proof of our lemma.
\end{proof}
\begin{lemma}\label{derlem}
Suppose that $p(x)\in \lambda K_n$ for some $\lambda>0.$ We claim that
\[
p'(x)\in An^C\lambda K_n,
\]
where $p'$ is the derivative of $p$ and $A$ and $C$ are constants that only depend on $\mu.$
\end{lemma}
\begin{proof}
Let $D(z_0,r)$ be the disk centered at $z_0\in \mathbb{C}$ with radius $r>0.$ By the Bernstein inequality~\cite[Corollary 5.1.6]{MR1367960}, for any $r>0$,
\[
\sup_{z\in D(z_0,r)} \left|p'(z)\right| \leq \frac{\deg p}{r}\sup _{z\in D(z_0,r)} |p(z)|.
\]
In particular, we have
\[
|p'(z_0)|\leq \frac{n}{r}\sup _{z\in D(z_0,r)} |p(z)| \leq \lambda \frac{n}{r}\sup _{z\in D(z_0,r)} e^{nU_{\mu}(z) }.
\]
By Lemma~\ref{holderr}, we have for any $z\in D(z_0,r)$
\[
e^{nU_{\mu}(z) } = e^{n(U_{\mu}(z)-U_{\mu}(z_0)) }e^{nU_{\mu}(z_0) } \leq e^{A'nr^{\delta'}}e^{nU_{\mu}(z_0) }.
\]
for some $A'>0$ and $\delta'>0$ that depend only on $\mu.$ By taking $r=n^{-C'}$ for any $C'>1/\delta'$, it follows that
\[
|p'(z_0)|\leq \lambda \frac{n}{r} e^{A'nr^{\delta'}} e^{nU_{\mu}(z_0) } \leq A\lambda n^{C'+1}e^{nU_{\mu}(z_0) },
\]
for any $z_0\in \mathbb{C},$ where $A=e^{A'}$. This implies that $ p'(x)\in An^C\lambda K_n,$ where $C=C'+1.$
\end{proof}
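The Bernstein-type bound used in the proof is also easy to probe numerically. The following Python sketch checks $|p'(z_0)|\leq \frac{\deg p}{r}\sup_{z\in D(z_0,r)}|p(z)|$ for a randomly generated polynomial; the choices of $z_0$, $r$, and the degree are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(0)
deg = 12
p = np.polynomial.Polynomial(rng.standard_normal(deg + 1))  # random real polynomial
dp = p.deriv()

z0, r = 0.3 + 0.2j, 0.7
theta = 2 * np.pi * np.arange(4096) / 4096
circle = z0 + r * np.exp(1j * theta)

# by the maximum principle, the sup over the disk D(z0, r) is attained on its boundary
sup_p = float(np.max(np.abs(p(circle))))
print(abs(dp(z0)), (deg / r) * sup_p)   # the first value is bounded by the second
```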
Let $\Gamma_n$ be the lattice of integral polynomials of degree less than or equal to $n$ in $\mathbb{R}^{n+1}$. The successive minima of $K_n$ on $\Gamma_n$ are defined by setting the $k$-th successive minimum $\lambda_k$ to be the infimum of the numbers $\lambda$ such that $\lambda K_n$ contains $k+1$ linearly-independent vectors of $\Gamma_n$.
We have $0 < \lambda_0\leq \lambda_1 \leq \dots \leq \lambda_n <\infty$. Minkowski's second theorem states that
\begin{equation}\label{minksec}
\frac{2^{n+1}\textup{vol}(K_n)^{-1}}{(n+1)!} \leq \lambda_0\lambda_1\dots \lambda_n\leq 2^{n+1}\textup{vol}(K_n)^{-1}.
\end{equation}
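As a sanity check of~\eqref{minksec} in the smallest nontrivial case ($n=1$, lattice $\mathbb{Z}^2$): for the box $K=[-2,2]\times[-1,1]$, one has $\lambda_0=1/2$ (realized by $(\pm 1,0)$), $\lambda_1=1$ (realized by $(0,\pm 1)$), and $\mathrm{vol}(K)=8$. The following Python sketch computes the successive minima by brute force; the computation is specific to axis-aligned boxes and serves only as an illustration.

```python
import itertools
from fractions import Fraction

a, b = Fraction(2), Fraction(1)          # K = [-a, a] x [-b, b], vol(K) = 4ab
vol = 4 * a * b

def lam(v):
    """Smallest lambda with v in lambda * K (valid for axis-aligned boxes only)."""
    return max(abs(v[0]) / a, abs(v[1]) / b)

vecs = sorted((v for v in itertools.product(range(-4, 5), repeat=2) if v != (0, 0)),
              key=lam)
v0 = vecs[0]                              # shortest lattice vector w.r.t. K
lam0 = lam(v0)
v1 = next(v for v in vecs if v0[0] * v[1] - v0[1] * v[0] != 0)  # independent of v0
lam1 = lam(v1)

# Minkowski's second theorem with n = 1 (two successive minima)
lower = Fraction(2**2) / (2 * vol)
upper = Fraction(2**2) / vol
print(lam0, lam1, lower <= lam0 * lam1 <= upper)
```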
\begin{proposition}\label{successive}
We have
\[
\lambda_{m+1}\leq A n^{C}\lambda_{m}
\]
for any $0\leq m< n$, and some constants $A$ and $C$ that only depend on $\mu.$
\end{proposition}
\begin{proof}
Suppose that $p_i\in \Gamma_n$ for $0\leq i\leq m$ are linearly independent integral polynomials with $p_i\in \lambda_i K_n.$ If for some $0\leq j\leq m$ the derivative $p_j'$ is linearly independent of $\{p_i:0\leq i\leq m \}$, then by Lemma~\ref{derlem}
\[
p'_j(x)\in An^C\lambda_m K_n.
\]
Hence,
\[
\lambda_{m+1}\leq A n^C\lambda_m,
\]
and this implies our proposition. Otherwise, $V_m:=\text{span}_{\mathbb{R}}\left<p_0,\dots,p_m\right>$ is closed under differentiation. It follows that
\[
V_m=\text{span}_{\mathbb{R}}\left<1,x,\dots,x^m\right>.
\]
This implies that there exists $0\leq l\leq m$ such that $\deg(p_j)\leq \deg (p_l)= m$ for every $0\leq j\leq m.$ Let $q(x):=xp_l(x).$ Since $\deg q> m,$ $q$ is linearly independent to $V_m.$ By Lemma~\ref{multlem},
\[
q(x)\in A\lambda_m K_n
\]
for some constant $A>0$ that only depends on $\mu.$ Hence,
\[
\lambda_{m+1}\leq A\lambda_m
\]
and this completes the proof of our proposition.
\end{proof}
\subsection{Lower bound on the volume of $K_n$}\label{bound_Kn}
In this subsection, we give a lower bound on the Euclidean volume of $K_n$ as a subset of $\mathbb{R}^{n+1}.$ Our method is to find $n+1$ linearly independent polynomials of degree at most $n$ inside $K_n$ and compute the volume of the simplex given by the convex hull of the origin and these $n+1$ points. Since $K_n$ is symmetric, choosing different signs for the $n+1$ vertices yields $2^{n+1}$ simplexes with the same volume and disjoint interiors inside $K_n$. The total volume of these simplexes gives a lower bound for the volume of $K_n.$
\subsubsection{Finding a simplex inside $K_n$} For simplicity, we assume in this section that $n$ is odd; we find a simplex inside $K_n$ and estimate its volume. Recall that $\mu$ is invariant under complex conjugation.
Let $[a,b]\times[-c,c]\subset \mathbb{C}$ be a rectangle containing $\Sigma.$ If $\Sigma\subset \mathbb{R}$, we take $[a,b]\subset \mathbb{R}$ containing $\Sigma$.
Fix an integer $M>0$ and a real number $L>0$ with $n^{1/10}<M<n^{1/3}$ and $L<nM^{-2}.$
We partition each side of the rectangle into $2M$ equal length sub-intervals, and obtain a partition of the rectangle as
\[
[a,b]\times[-c,c]= \bigcup_{0\leq i,j< 2M}B_{ij},
\]
where $B_{ij}:=[a_i,a_{i+1}]\times[c_{j},c_{j+1}]$ and $a_i:=a+\frac{i(b-a)}{2M}$ and $c_j:=-c+\frac{j(2c)}{2M}.$ Since $\mu$ is a H\"older measure with exponent $\delta$, we have
\begin{equation}\label{holder}
\mu(B_{ij})\ll M^{-\delta}
\end{equation}
for every $i,j.$ Similarly, for $\Sigma\subset \mathbb{R}$, we write
\[
[a,b]=\bigcup_{0\leq i< 2M}B_{i},
\]
where $B_{i}:=[a_i,a_{i+1}]$ and $a_i:=a+\frac{i(b-a)}{2M}.$
Let
\begin{equation}\label{n_ij}
n_{ij}:=\lfloor (n+1)\mu(B_{ij})\rfloor+\varepsilon_{ij} \ll nM^{-\delta},
\end{equation}
where $\varepsilon_{ij} \in \mathbb{Z}$ and $|\varepsilon_{ij}| \leq L $ are chosen such that
$\sum_{i,j}n_{ij}=n+1$ and $n_{ij}=n_{i(2M-j-1)}.$ Since we assumed $n$ is odd, this is possible.
By a covering argument, it is possible to find $z_{ijk}\in B_{ij}$ for $0\leq i,j<2M$ and $1\leq k\leq n_{ij},$ such that $\overline{z_{ijk}}=z_{i(2M-j-1)k}$ and for every $w\in \partial B_{ij}$ and every $1\leq k,k'\leq n_{ij}$
\begin{equation}\label{Bij}
\begin{split}
|z_{ijk}-w| &\gg M^{-1}
\\
|z_{ijk}-z_{ijk'}|&\gg n^{-\frac{1}{2}}M^{-1+\frac{\delta}{2}}\gg n^{-5/6} .
\end{split}
\end{equation}
Similarly, for $\Sigma\subset \mathbb{R}$, we define $n_{i}:=\lfloor (n+1)\mu(B_{i})\rfloor+\varepsilon_{i} \ll nM^{-\delta}$ and $z_{ik}\in B_i$ for $k\leq n_i$ such that $|\varepsilon_{i}|<L,$ $\sum_{i}n_{i}=n+1$ and for every $w\in \partial B_{i}$ and every $1\leq k,k'\leq n_{i}$
\begin{equation}\label{Bireal}
\begin{split}
|z_{ik}-w| &\gg M^{-1}
\\
|z_{ik}-z_{ik'}|&\gg n^{-1} M^{-1+\delta}\gg n^{-4/3} .
\end{split}
\end{equation}
For two compact subsets $B, B'\subset \mathbb{C}$, let
\[d(B,B'):=\inf_{\substack{z\in B\\ z'\in B'}} |z-z'|.\]
Let
\[
I(\mu):=\int \log|z_1-z_2|d\mu(z_1)d\mu(z_2)=\int U_\mu(z)d\mu(z).
\]
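For orientation (an illustrative computation, not part of the argument): for the uniform measure on the unit circle one has $I(\mu)=0$, and the discrete logarithmic energy of the $n$-th roots of unity reproduces this up to an error of $\log(n)/n$, since $\prod_{k=1}^{n-1}|1-\zeta^k|=n$ for a primitive $n$-th root of unity $\zeta$.

```python
import cmath
import math

def discrete_energy(n):
    # Normalized logarithmic energy of the n-th roots of unity:
    # (1/n^2) * sum over ordered pairs of distinct roots of log|z_j - z_k|.
    roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
    s = sum(math.log(abs(roots[j] - roots[k]))
            for j in range(n) for k in range(n) if j != k)
    return s / n ** 2

# The exact value is log(n)/n, which tends to I(mu) = 0 for the uniform
# measure on the unit circle.
for n in (10, 50, 200):
    assert abs(discrete_energy(n) - math.log(n) / n) < 1e-9
```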
\begin{proposition}
We have
\begin{equation}\label{diag}
\sum_{\substack{ij,i'j' \\ d(B_{ij},B_{i'j'})<M^{-1} }}\sum_{\substack{k,k' \\ i'j'k' \neq ijk}}\frac{\log|z_{ijk}-z_{i'j'k'}|}{(n+1)^2}=O(\log(n)M^{-\delta}),
\end{equation}
and
\begin{equation}\label{maineq}
\sum_{ijk}\sum_{ijk\neq i'j'k'} \frac{\log|z_{ijk}-z_{i'j'k'}|}{(n+1)^2}=I(\mu)+O\left(\log(n)M^{-\delta}+ \frac{M^2L\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2}\right).
\end{equation}
Similarly, for $\Sigma\subset \mathbb{R}$, we have
\begin{equation}\label{diagreal}
\sum_{\substack{i,i' \\ d(B_{i},B_{i'})<M^{-1} }}\sum_{\substack{k,k' \\ i'k' \neq ik}}\frac{\log|z_{ik}-z_{i'k'}|}{(n+1)^2}=O(\log(n)M^{-\delta}),
\end{equation}
and
\begin{equation}\label{maineqreal}
\sum_{ik}\sum_{ik\neq i'k'} \frac{\log|z_{ik}-z_{i'k'}|}{(n+1)^2}=I(\mu)+O\left(\log(n)M^{-\delta}+ \frac{M^2L\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2}\right).
\end{equation}
\end{proposition}
\begin{proof}
Note that for a pair $(i,j)$, there are at most $O(1)$ pairs $(i',j')$ where $d(B_{ij},B_{i'j'})<M^{-1}$.
By \eqref{n_ij} and \eqref{Bij}, we have
\begin{equation*}
\sum_{\substack{ij,i'j' \\ d(B_{ij}, B_{i'j'})<M^{-1}}}\sum_{\substack{k,k' \\ i'j'k' \neq ijk}}\frac{\log|z_{ijk}-z_{i'j'k'}|}{(n+1)^2}\ll\sum_{\substack{ij,i'j' \\ d(B_{ij}, B_{i'j'})<M^{-1}}} \frac{n_{ij}n_{i'j'}\log(n)}{(n+1)^2}=O(\log(n)M^{-\delta}).
\end{equation*}
This completes the proof of the first part of our proposition. We have
\begin{equation}\label{summ}
I(\mu)= \sum_{ij}\sum_{i'j'}\int_{z\in B_{ij}}\int_{z'\in B_{i'j'}} \log|z-z'|d\mu(z)d\mu(z').
\end{equation}
Similarly, we have
\[
\sum_{\substack{ij,i'j' \\ d(B_{ij},B_{i'j'})<M^{-1}}}\int_{z\in B_{ij}}\int_{z'\in B_{i'j'}} \log|z-z'|d\mu(z)d\mu(z')=O(\log(n)M^{-\delta}).
\]
Suppose that $d(B_{ij},B_{i'j'})>M^{-1}.$ We have
\begin{multline}
\frac{1}{(n+1)^2}\sum_{k}\sum_{ k'} \log|z_{ijk}-z_{i'j'k'}|=\int_{z\in B_{ij}}\int_{z'\in B_{i'j'}} \log|z-z'|d\mu(z)d\mu(z')
\\
+O\left(\frac{M_1L}{n}\left(\mu(B_{ij})+\mu(B_{i'j'}) \right)+\mu(B_{ij})\mu(B_{i'j'})M_2\right),
\end{multline}
where $M_1:=\left|\sup_{\substack{z\in B_{ij}\\ z'\in B_{i'j'}}} \log |z-z'|\right|$ and $M_2:=\sup_{\substack{z_1,z_2\in B_{ij}\\ z_1',z_2'\in B_{i'j'}}}\left|\log|z_1-z_1'|-\log|z_2-z_2'|\right|.$ We have
\[
M_1\ll \log(n)
\]
and
\[
M_2\ll \frac{M^{-1}}{d(B_{ij},B_{i'j'})}.
\]
Therefore,
\begin{multline*}
\left|\sum_{ijk}\sum_{ijk\neq i'j'k'} \frac{\log|z_{ijk}-z_{i'j'k'}|}{(n+1)^2}-I(\mu) \right| \ll \log(n)M^{-\delta}
\\+\sum_{\substack{ij,i'j' \\ d(B_{ij},B_{i'j'})>M^{-1}}} \frac{\log(n)L}{n}\left(\mu(B_{ij})+\mu(B_{i'j'}) \right)+\mu(B_{ij})\mu(B_{i'j'})\frac{M^{-1}}{d(B_{ij},B_{i'j'})}.
\end{multline*}
Note that
\[
\sum_{\substack{ij,i'j' \\ d(B_{ij},B_{i'j'})>M^{-1}}} \frac{\log(n)L}{n}\left(\mu(B_{ij})+\mu(B_{i'j'}) \right) \ll \frac{\log(n)M^2L}{n}.
\]
By Cauchy-Schwarz inequality,
\[
\sum_{ij,i'j'}\mu(B_{ij})\mu(B_{i'j'})\frac{M^{-1}}{d(B_{ij},B_{i'j'})}\leq \left( \sum_{ij,i'j'} \mu(B_{ij})^2\mu(B_{i'j'})\right)^{1/2}\left(\sum_{ij,i'j'}\frac{M^{-2}\mu(B_{i'j'})}{d(B_{ij},B_{i'j'})^2} \right)^{1/2},\]
where the sum is over $ij,i'j'$ such that $d(B_{ij},B_{i'j'})\geq M^{-1}.$ We have
\[
\sum_{ij,i'j'} \mu(B_{ij})^2\mu(B_{i'j'}) \ll M^{-\delta} \sum_{ij,i'j'} \mu(B_{ij})\mu(B_{i'j'})=M^{-\delta} .
\]
Moreover,
\[
\sum_{ij,i'j'}\frac{M^{-2}\mu(B_{i'j'})}{d(B_{ij},B_{i'j'})^2} \leq \max_{i'j'} \sum_{ij}\frac{M^{-2}}{d(B_{ij},B_{i'j'})^2}\ll \log(M).
\]
Therefore,
\[
\left|\sum_{ijk}\sum_{ijk\neq i'j'k'} \frac{\log|z_{ijk}-z_{i'j'k'}|}{(n+1)^2}-I(\mu) \right| \ll \log(n)M^{-\delta}+\frac{\log(n)M^2L}{n}+M^{-\delta/2}\log(M)^{1/2}.
\]
\end{proof}
Let
\begin{equation}\label{measpol}
p_{M,n,\mu}(x):=\prod_{i,j,k} (x-z_{ijk}),
\end{equation}
and for $e,f< 2M$ and $g\leq n_{ef}$
\begin{equation}\label{pmdef}
\begin{split}
p_{M,n,\mu}^{efg+}(x)&:=\frac{\frac{p_{M,n,\mu}(x)}{(x-z_{efg})}+\frac{p_{M,n,\mu}(x)}{(x-\overline{z_{efg}})}}{2},
\\
p_{M,n,\mu}^{efg-}(x):&=\frac{\frac{p_{M,n,\mu}(x)}{(x-z_{efg})}-\frac{p_{M,n,\mu}(x)}{(x-\overline{z_{efg}})}}{2i\Im(z_{efg})}.
\end{split}
\end{equation}
Similarly, for $\Sigma\subset \mathbb{R}$,
let
\begin{equation}\label{measpolreal}
p_{M,n,\mu}(x):=\prod_{i,k} (x-z_{ik}),
\end{equation}
and for $e< 2M$ and $g\leq n_{e}$
\begin{equation}\label{pmdefreal}
p_{M,n,\mu}^{eg}(x):=\frac{p_{M,n,\mu}(x)}{(x-z_{eg})}.
\end{equation}
\begin{lemma}
We have $p_{M,n,\mu}(x)\in \mathbb{R}[x]$ and
\[
p_{M,n,\mu}^{efg+}(x)=\frac{p_{M,n,\mu}(x)}{(x-z_{efg})(x-\overline{z_{efg}})}(x-\Re(z_{efg}))=p_{M,n,\mu}^{e(2M-f-1)g+}(x)
\]
and
\[
p_{M,n,\mu}^{efg-}(x)=\frac{p_{M,n,\mu}(x)}{(x-z_{efg})(x-\overline{z_{efg}})}=-p_{M,n,\mu}^{e(2M-f-1)g-}(x).
\]
In particular, $p_{M,n,\mu}^{efg\pm}(x)\in \mathbb{R}[x],$ $\deg(p_{M,n,\mu}^{efg+}(x))=n$, and $ \deg p_{M,n,\mu}^{efg-}(x)=n-1$.
\end{lemma}
\begin{proof}
It follows easily from $\overline{z_{ijk}}=z_{i(2M-j-1)k}.$
\end{proof}
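The identity underlying the lemma is elementary partial-fraction algebra: $\frac{1}{2}\left(\frac{1}{x-z}+\frac{1}{x-\bar z}\right)=\frac{x-\Re(z)}{(x-z)(x-\bar z)}$. A quick numerical check with sample data (the roots and evaluation points below are arbitrary, not the $z_{ijk}$ of the construction):

```python
def poly(t, roots):
    # Evaluate prod (t - r) over the given roots.
    v = 1.0 + 0j
    for r in roots:
        v *= (t - r)
    return v

z = 0.3 + 0.7j                        # sample non-real point
roots = [z, z.conjugate(), -1.2]      # p has z and its conjugate among its roots
for x in (2.5 + 0j, -0.4 + 1.1j):
    lhs = (poly(x, roots) / (x - z) + poly(x, roots) / (x - z.conjugate())) / 2
    rhs = poly(x, roots) * (x - z.real) / ((x - z) * (x - z.conjugate()))
    assert abs(lhs - rhs) < 1e-12
```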
\begin{proposition}\label{simplex} Suppose that $n \ge 2\max\{|a|,|b|,|c|\}.$
We have
\[
p_{M,n,\mu}^{efg\pm}(x) \in \lambda^{efg\pm}K_n
\]
for some $\lambda^{efg\pm}>0$, where
$$\frac{\log(\lambda^{efg\pm})}{n}\leq C\left(\frac{M^2L\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2}
\right)$$
for some constant $C$ that only depends on $\mu.$ In particular, by taking $M=\lfloor n^{1/3}\rfloor$ and $L=n^{1/3-\delta/6},$ we obtain
\begin{equation}\label{simplex_potential}
|\lambda^{efg\pm}| \leq e^{Cn^{1-\frac{\delta}{6}}\log(n)}.
\end{equation}
\end{proposition}
\begin{proof}
We give the proof for $p_{M,n,\mu}^{efg+}(x)$; the other case follows from a similar argument.
Note that $\deg(p_{M,n,\mu}^{efg+})=n$ and
\[
\frac{\log |p_{M,n,\mu}^{efg+}(z)|}{n} = \int \log|z-x| d\mu_{p_{M,n,\mu}^{efg+}}= U_{\mu_{p_{M,n,\mu}^{efg+}}}(z).
\]
Hence if $|z|\geq n$ and $n \ge 2\max\{|a|,|b|,|c|\}$, then by
Lemma~\ref{lem1},
\[
\frac{\log |p_{M,n,\mu}^{efg+}(z)|}{n} - U_{\mu}(z)=O\left(\frac{1}{n}\right).
\]
So without loss of generality, we assume that $|z|<n$. For $z\in \mathbb{C},$ we have
\begin{multline}\label{sum}
\frac{\log |p_{M,n,\mu}^{efg+}(z)|}{n} - U_{\mu}(z)=\sum_{i,j}\left(\sum_{k\leq n_{ij}} \frac{\log |z-z_{ijk}|}{n} -\int_{B_{ij}} \log|z-x|d\mu(x)\right)
\\
+\frac{\log \frac{|z-\Re(z_{efg})|}{|z-z_{efg}||z-\overline{z_{efg}}|}}{n}.
\end{multline}
Let $d(z,B_{ij}):=\inf_{w\in B_{ij}} |z-w|.$ As noted previously, there are at most $O(1)$ pairs $(i,j)$ such that $d(z,B_{ij})<M^{-1}$. First, we give an upper bound on the sum in \eqref{sum} over $i,j$ where $d(z,B_{ij})\leq M^{-1}$. Suppose $z_{hrs}\in B_{ij}$ has minimal distance to $z$ among $z_{ijk}$ where $k\leq n_{ij}$ and $d(z,B_{ij})\leq M^{-1}$. By \eqref{holder}, \eqref{n_ij} and \eqref{Bij}, we have
\begin{equation}\label{sum1}
\sum_{i,j}\left(\sum_{k\leq n_{ij}} \frac{\log |z-z_{ijk}|}{n} -\int_{B_{ij}} \log|z-x|d\mu(x)\right)=\frac{\log |z-z_{hrs}|}{n}+O(\log(n)M^{-\delta}),
\end{equation}
where the sum is over $i,j$ with $d(z,B_{ij})\leq M^{-1}.$ This is because, by \eqref{Bij}, $|z_{ijk}-z_{ijk'}|\gg n^{-1}.$
Next, we bound the above sum over $i,j$ with $d(z,B_{ij})\geq M^{-1}$. For such $i,j$, we have
\[
\left|\sum_{k\leq n_{ij}} \frac{\log |z-z_{ijk}|}{n} -\int_{B_{ij}} \log|z-x|d\mu(x)\right|\leq \frac{M_1L}{n}+\mu(B_{ij})M_2,
\]
where $M_1:=\left|\sup_{z_1\in B_{ij}} \log |z-z_1|\right|$ and $M_2:=\sup_{z_1,z_2\in B_{ij}}\left|\log|z-z_2|-\log|z-z_1|\right|.$ Since $d(z,B_{ij})\geq M^{-1},$ we have
\[
M_1\ll \max (\log|z|, \log(M))\ll \log(n).
\]
Suppose that $z_1,z_2\in B_{ij}.$ Let $r_1:=|z-z_1|$ and $r_2:=|z-z_2|$ and assume that $r_1 \leq r_2.$ By the mean value theorem, we have
\[
M_2=\frac{\left|r_1-r_2\right|}{r_3}
\]
for some $r_3,$ where $r_1 \leq r_3 \leq r_2.$ We have $|r_1-r_2|\leq |z_1-z_2|\leq M^{-1}$ and
\[
r_3\gg d(z,B_{ij}).
\]
Hence,
\[
M_2\ll \frac{M^{-1}}{d(z,B_{ij})}.
\]
Therefore,
\begin{equation}\label{b1}
\sum_{i,j}\left|\sum_{k\leq n_{ij}} \frac{\log |z-z_{ijk}|}{n} -\int_{B_{ij}} \log|z-x|d\mu(x) \right|\ll \frac{M^2L\log(n)}{n}+ \sum_{i,j}\mu(B_{ij})\frac{M^{-1}}{d(z,B_{ij})},
\end{equation}
where the sum is over $i,j$ with $d(z,B_{ij})\geq M^{-1}$. By the Cauchy-Schwarz inequality,
\[
\sum_{i,j}\mu(B_{ij})\frac{M^{-1}}{d(z,B_{ij})}\leq \left( \sum_{i,j} \mu(B_{ij})^2\right)^{1/2}\left(\sum_{i,j}\frac{M^{-2}}{d(z,B_{ij})^2} \right)^{1/2}\ll M^{-\delta/2}\log(M)^{1/2},\]
where we used $\sum_{i,j}\mu(B_{ij})^2\ll M^{-\delta} \sum_{i,j}\mu(B_{ij})=M^{-\delta}$ and $\sum_{i,j}\frac{M^{-2}}{d(z,B_{ij})^2} \ll \log(M)$
where the sum is over $i,j,$ with $d(z,B_{ij})\geq M^{-1}.$ Therefore by \eqref{b1} and the above inequality, we have
\begin{equation}\label{sum2}
\sum_{i,j}\left|\sum_{k\leq n_{ij}} \frac{\log |z-z_{ijk}|}{n} -\int_{B_{ij}} \log|z-x|d\mu(x)\right| \ll
\frac{M^2L\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2},
\end{equation}
where the sum is over $i,j$ with $d(z,B_{ij})\geq M^{-1}.$ Finally, by \eqref{sum}, \eqref{sum1} and \eqref{sum2}, we obtain
\begin{multline}\label{potapp}
\frac{\log |p_{M,n,\mu}^{efg+}(z)|}{n} - U_{\mu}(z)= \frac{\log \frac{|z-z_{hrs}||z-\Re(z_{efg})|}{|z-z_{efg}||z-\overline{z_{efg}}|}}{n}
\\
+O\left(\log(n)M^{-\delta}+ \frac{M^2L\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2}\right).
\end{multline}
By the definition of $z_{hrs},$ $\frac{|z-z_{hrs}||z-\Re(z_{efg})|}{|z-z_{efg}||z-\overline{z_{efg}}|}\leq 1$ and by letting $M=\lfloor n^{1/3}\rfloor$ and $L=n^{1/3-\delta/6}$, we obtain
\begin{equation*}
\frac{\log |p_{M,n,\mu}^{efg+}(z)|}{n} \leq U_{\mu}(z)+O(n^{-\frac{\delta}{6}}\log(n)).
\end{equation*}
\end{proof}
\subsubsection{Lower bound on the volume of the simplex}
Let
\[
A:=\left[p_{M,n,\mu}^{ijk\pm} \right], \qquad 0\leq i<2M,\ 0\leq j< M,\ k\leq n_{ij},
\]
be the square matrix of size $(n+1)\times (n+1)$ whose column vectors are the coefficient vectors of the polynomials $p_{M,n,\mu}^{ijk\pm}$.
\begin{proposition}\label{detprop}
We have
\[
|\det(A)|=2^{-\frac{n+1}{2}}\prod_{ijk} \frac{1}{|\Im(z_{ijk})|^{1/2}}\prod_{ijk< i'j'k'} |z_{ijk}-z_{i'j'k'}|,
\]
where $0\leq i<2M, 0\leq j< 2M, k\leq n_{ij}.$
\end{proposition}
\begin{proof}
Let
\[
B:=\left[\frac{p_{M,n,\mu}(x)}{(x-z_{ijk})}\right], \qquad 0\leq i<2M,\ 0\leq j< 2M,\ k\leq n_{ij},
\]
be the square matrix of size $(n+1)\times (n+1)$ whose column vectors are the coefficient vectors of the polynomials $\frac{p_{M,n,\mu}(x)}{(x-z_{ijk})}$.
By \eqref{pmdef} for fixed indices $efg$, we have
\[
\left[p_{M,n,\mu}^{efg+},p_{M,n,\mu}^{efg-} \right]=\left[\frac{p_{M,n,\mu}(x)}{(x-z_{efg})},\frac{p_{M,n,\mu}(x)}{(x-\overline{z_{efg}})}\right]
\begin{bmatrix}1/2 & \frac{1}{2i\Im(z_{efg})} \\ 1/2 & -\frac{1}{2i\Im(z_{efg})} \end{bmatrix}.
\]
Hence,
\begin{equation}\label{detA}
|\det(A)|=|\det(B)|2^{-\frac{n+1}{2}}\prod_{ijk} \frac{1}{|\Im(z_{ijk})|^{1/2}}
\end{equation} where $0\leq i<2M, 0\leq j< 2M, k\leq n_{ij}.$
Let
\[
V:=[z_{ijk}^e]
\]
be the Vandermonde matrix of size $(n+1)\times (n+1),$ where $0\leq e<n+1.$ It is well-known that
\[
|\det V|=\prod_{ijk< i'j'k'} |z_{ijk}-z_{i'j'k'}|.
\]
We have
\[
VB=\textup{diag}\left[\prod_{ijk\neq efg} (z_{efg}-z_{ijk})\right].
\]
Hence,
\[
|\det(B)|=\prod_{ijk< i'j'k'} |z_{ijk}-z_{i'j'k'}|.
\]
Therefore, by \eqref{detA}
\[
|\det(A)|=2^{-\frac{n+1}{2}}\prod_{ijk} \frac{1}{|\Im(z_{ijk})|^{1/2}}\prod_{ijk< i'j'k'} |z_{ijk}-z_{i'j'k'}|.
\]
\end{proof}
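The two identities in the proof, $VB=\textup{diag}\left[\prod_{ijk\neq efg}(z_{efg}-z_{ijk})\right]$ and $|\det(B)|=\prod_{ijk<i'j'k'}|z_{ijk}-z_{i'j'k'}|$, can be checked numerically on sample points (arbitrary illustrative data, not the $z_{ijk}$ of the construction):

```python
import numpy as np

# Sample distinct points playing the role of the roots of p(x) = prod (x - z_i).
z = np.array([0.3 + 0.7j, 0.3 - 0.7j, -1.2 + 0j, 0.9 + 0.2j])
m = len(z)

# Columns of B: coefficients (in increasing powers) of p(x)/(x - z_j).
B = np.column_stack([np.poly(np.delete(z, j))[::-1] for j in range(m)])
# Vandermonde matrix V = [z_j^e] with increasing powers along each row.
V = np.vander(z, m, increasing=True)

# V B is diagonal with entries prod_{i != j} (z_j - z_i).
D = V @ B
expected = np.diag([np.prod([z[j] - z[i] for i in range(m) if i != j])
                    for j in range(m)])
assert np.allclose(D, expected)

# Hence |det B| equals the product of pairwise distances.
pairs = np.prod([abs(z[i] - z[j]) for i in range(m) for j in range(i + 1, m)])
assert np.isclose(abs(np.linalg.det(B)), pairs)
```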
\begin{proposition}\label{vol_K_n}
We have
\[
\textup{vol}(K_n) \gg n^{-cn^{2-\frac{\delta}{6}}}e^{\frac{n^2}{2}I(\mu)}
\]
where $c$ is a constant that depends only on $\mu$.
\end{proposition}
\begin{proof}
By Proposition~\ref{simplex}, $e^{Cn^{1-\frac{\delta}{6}}\log(n)}K_n$ contains the simplex with vertices $\left\{0, p_{M,n,\mu}^{ijk\pm} \right\}$, where $0\leq i<2M, 0\leq j\leq M-1, k\leq n_{ij},$ whose volume is $\frac{|\det(A)|}{(n+1)!}$. Since $K_n$ is symmetric, choosing different signs for the $n+1$ vertices $\left\{0,\pm p_{M,n,\mu}^{ijk\pm} \right\}$ yields $2^{n+1}$ simplexes with the same volume and disjoint interiors inside $e^{Cn^{1-\frac{\delta}{6}}\log(n)}K_n$.
Hence,
\[
\textup{vol}(K_n) \geq \frac{2^{n+1}}{(n+1)!}e^{-Cn^{2-\frac{\delta}{6}}\log(n)}|\det(A)|.
\]
By Proposition~\ref{detprop},
\[
\textup{vol}(K_n) \geq \frac{2^{\frac{n+1}{2}}}{(n+1)!}e^{-Cn^{2-\frac{\delta}{6}}\log(n)}\prod_{ijk} \frac{1}{|\Im(z_{ijk})|^{1/2}}\prod_{ijk< i'j'k'} |z_{ijk}-z_{i'j'k'}|.
\]
We have $|\Im(z_{ijk})| \le c$, where $\text{supp}(\mu)\subset [a,b]\times[-c,c]$ and we may assume $c\ge 1$.
Therefore
$$\prod_{ijk}\frac{1}{|\Im(z_{ijk})|^{1/2}}\prod_{ijk<i'j'k'}|z_{ijk}-z_{i'j'k'}|\ge c^{-n/2}\prod_{ijk<i'j'k'}|z_{ijk}-z_{i'j'k'}|.$$
From \eqref{maineq} using $M=\lfloor n^{1/3}\rfloor$,
we have $\prod_{ijk<i'j'k'}|z_{ijk}-z_{i'j'k'}|= e^{\frac{n^2}{2}I(\mu)+O(n^{2-\frac{\delta}{6}}\log(n))}$ and so for some (potentially negative) constant $C'$, this is at least $e^{\frac{n^2}{2}I(\mu)+C'n^{2-\frac{\delta}{6}}\log(n)}$. This gives
$$\textup{vol}(K_n)\gg \frac{2^\frac{n+1}{2}}{(n+1)!c^{n/2}}e^{\frac{n^2}{2}I(\mu)+(C'-C)n^{2-\frac{\delta}{6}}\log(n)}\gg
\frac{n^{(C'-C)n^{2-\frac{\delta}{6}}}e^{\frac{n^2}{2}I(\mu)}}{\left(n+1\right)^{n+1}c^n}\gg n^{\tilde C n^{2-\frac{\delta}{6}}}e^{\frac{n^2}{2}I(\mu)}$$
for some constant $\tilde C$, where we used $(n+1)!\leq (n+1)^{n+1}$ in the second step and absorbed the factor $(n+1)^{n+1}c^{n}=e^{O(n\log n)}$ into $n^{\tilde C n^{2-\frac{\delta}{6}}}$ in the last step.
\end{proof}
\begin{proposition}\label{mink}
Suppose that $\mu \in \mathcal{B}_{\Sigma}.$ There exists an integral polynomial $p_0$ with $\deg(p_0)\leq n$ such that
\[
\frac{\log |p_0(x)|}{n} \leq U_{\mu}(x)-\frac{I(\mu)}{2}+C\log(n)n^{-\delta/6},
\]
where $C$ is a constant independent of $n.$
Moreover,
\[
I(\mu)\geq 0.
\]
\end{proposition}
\begin{proof}
By Proposition~\ref{vol_K_n}, there are constants $c,c'$ such that for all $n>0$, $\textup{vol}(K_n) \ge c'n^{-cn^{2-\frac{\delta}{6}}}e^{\frac{n^2}{2}I(\mu)}$.
By Minkowski's second theorem~\eqref{minksec}, there exists $p_0\in\mathbb Z[x]\cap \lambda_0K_n$ of degree at most $n$, where
\begin{equation}\label{upperb}
\lambda_{0}\leq 2 \left( \textup{vol}(K_n) \right)^{-1/(n+1)} \ll n^{cn^{1-\frac{\delta}{6}}}e^{-\frac{n}{2}I(\mu)}.
\end{equation}
By definition of $K_n,$
\[
\frac{\log |p_0(x)|}{n} \leq U_{\mu}(x)+ \frac{\log(\lambda_0)}{n}.
\]
By using~\eqref{upperb}, we obtain
\[
\frac{\log |p_0(x)|}{n} \leq U_{\mu}(x)-\frac{I(\mu)}{2}+c\log(n)n^{-\frac{\delta}{6}}.
\]
By taking the expected value of the above inequality, we have
\[
0\leq \int \frac{\log |p_0(x)|}{n} d\mu(x) \leq \frac{I(\mu)}{2}+c\log(n)n^{-\frac{\delta}{6}},
\]
where we used the assumption $\mu \in \mathcal{B}_{\Sigma}$ for the first inequality. By letting $n\to \infty$, we have
\[
I(\mu)\geq 0.
\]
This completes the proof of our proposition.
\end{proof}
\subsection{Proof of Theorem~\ref{mdim} }
In this section, we prove Theorem~\ref{mdim}. Suppose that $\mu$ is a probability measure with compact support on the complex plane and that for every non-zero $Q(x)\in \mathbb{Z}[x]$,
\[
\int \log |Q(z)| d\mu(z) \geq 0.
\]
First, we reduce the proof of Theorem~\ref{mdim} to the case of H\"older probability measures by the following proposition.
\begin{proposition}\label{holderreduction}
Suppose that $\Sigma\subset \mathbb{C}$ is compact and $\mu\in \mathcal{B}_{\Sigma}.$ For any $\rho>0,$ there exists a sequence of H\"older probability measures $\mu_n\in\mathcal{B}_{\Sigma(\rho)}$ such that \( \lim_{n\to \infty} \mu_n=\mu\).
\end{proposition}
\begin{proof}
Let $\lambda_{r}$ be the uniform probability measure on the circle of radius $r$ centered at the origin. Let $\mu*\lambda_r$ be the convolution of $\mu$ with $\lambda_r.$ We show that $U_{\mu*\lambda_r}(z)\geq U_{\mu}(z)$ for every $z\in \mathbb{C}$ and every $r\geq 0.$ In fact, we
have
\[
U_{\mu*\lambda_r}(z)-U_{\mu}(z)=\int \left( \int_{0}^1\log|z-x+re^{2\pi i \theta}|- \log|z-x| d\theta \right)d\mu(x)\geq 0,
\]
where we used Jensen's formula~\eqref{jensen} to show the inner integral is nonnegative. Therefore, $\mu*\lambda_r \in \mathcal{B}_{\Sigma(\rho)}$ for any $r\leq \rho.$
Define
\[
\mu_n:=\mu*\lambda_{\rho/n}.
\]
It is clear from the definition that \( \lim_{n\to \infty} \mu_n=\mu\) and $\mu_n$ is a H\"older measure with exponent at least 1. This completes the proof of our proposition.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{mdim}]
Part~\eqref{listcond1} is the special case of part~\eqref{listcond2} for $m=1.$ So, it is enough to show that part~\eqref{listcond1} implies part~\eqref{listcond2}. Suppose that $\mu$ satisfies the conditions of part~\eqref{listcond1}, which is equivalent to $\mu\in \mathcal{B}_{\Sigma}$ by definition.
Next, we reduce the theorem to the case where $\mu$ is H\"older. By Proposition~\ref{holderreduction}, there exists a sequence of H\"older probability measures $\mu_n\in\mathcal{B}_{\Sigma(\rho)}$ such that \( \lim_{n\to \infty} \mu_n=\mu\). Suppose that Theorem~\ref{mdim} holds for H\"older probability measures. Then
\[
\int \log |Q(z_1,\dots,z_m)|d\mu_n(z_1)\dots d
\mu_n(z_m)\geq0,
\]
for every $n.$ Note that $\log |Q(z_1,\dots,z_m)|$ is an upper semicontinuous function. Hence, by the monotone convergence theorem, we have
\[
\int \log |Q(z_1,\dots,z_m)|d\mu(z_1)\dots d
\mu(z_m) \geq \limsup_{n\to \infty }\int \log |Q(z_1,\dots,z_m)|d\mu_n(z_1)\dots d
\mu_n(z_m)\geq 0.
\]
So without loss of generality we assume that $\mu\in \mathcal{B}_{\Sigma}$ is H\"older. The proof is by induction on $m,$ with part~\eqref{listcond1} as the base case. We assume that part~\eqref{listcond2} holds for $m$ and we show that it holds for $m+1.$ Suppose that $Q(x_1,\dots,x_{m+1})\in \mathbb{Z}[x_1,\dots,x_{m+1}].$ Without loss of generality, we assume that $Q$ is irreducible in $\mathbb{Z}[x_1,\dots,x_{m+1}]$ and does not belong to $\mathbb{Z}[x_{m+1}].$ Let
\[
U_{Q}(z_{m+1}):=\int \log |Q(z_1,\dots,z_m,z_{m+1})|d\mu(z_1)\dots d\mu(z_m).
\]
It is enough to show that
\begin{equation}\label{post}
\int U_{Q}(z) d\mu(z)\geq 0.
\end{equation}
We fix $z_i\in \mathbb{C}$ for $1\leq i\leq m$, consider $Q$ as a polynomial in the variable $x_{m+1}$, and assume that it has degree $d$ in $x_{m+1}.$ Then
\[
Q(z_1,\dots,z_m,x_{m+1})=\sum_{i=0}^d a_i(z_1\dots,z_m)x_{m+1}^i=a_d(z_1,\dots,z_m)\prod_{i=1}^d (x_{m+1}-\xi_i(z_1,\dots,z_{m}))
\]
where $\xi_i(z_1,\dots,z_{m}) \in \mathbb{C}$ are the complex roots of $Q$. We define the measure $\mu_Q$ on $\mathbb{C}$ as follows:
\begin{equation}\label{defmuq}
\mu_Q:= \int \sum_{i} \delta_{\xi_i(z_1,\dots,z_{m})} d\mu(z_1)\dots d\mu(z_m)
\end{equation}
where $\delta_{a}$ is the delta probability measure at point $a\in \mathbb{C}.$ We have
\[
U_Q(z)=a_Q+ \int \log|z-x|d\mu_{Q}(x)
\]
where
\begin{equation}\label{defaq}
a_Q:=\int \log|a_d(z_1,\dots,z_m)|d\mu(z_1)\dots d\mu(z_m).
\end{equation}
Note that by the induction hypothesis, we have
\[
a_Q\geq 0.
\]
By Proposition~\ref{mink}, there exists an integral polynomial $P_0(x)$ with $\deg(P_0)\leq n$ such that
\[
\frac{\log |P_0(x)|}{n} \leq \int\log |z-x| d\mu(z)+C\log(n)n^{-\delta/6}.
\]
We take the expected value of the above inequality with respect to $d\mu_Q$ and obtain
\[
a_Q+\int \frac{\log|P_0(x)|}{n} d\mu_Q(x) \leq a_Q+\int \log|z-x| d\mu(z)d\mu_{Q}(x) +Cd\log(n)n^{-\delta/6}=\int U_{Q}(z) d\mu(z)+Cd\log(n)n^{-\delta/6}.
\]
It is enough to show that
\begin{equation}\label{pn}
na_Q+\int \log |P_0(x)| d\mu_Q(x) \geq 0,
\end{equation}
since \eqref{post} follows by letting $n\to \infty.$
In fact, we prove that for every integral polynomial $P(x),$
\begin{equation}\label{pns}
\deg(P)a_Q+\int \log |P(x)| d\mu_Q(x) \geq 0.
\end{equation}
This implies \eqref{pn}, since $\deg(P_0)\leq n$ and $a_Q\geq0.$
For proving~\eqref{pns}, we consider $P$ as an integral polynomial in $x_{m+1}$. Define
\[
R(Q,P)(x_1,\dots,x_m):= Res(Q(x_1,\dots,x_{m+1}),P(x_{m+1}))\in \mathbb{Z}[x_1,\dots,x_m]
\]
to be the resultant of $Q$ and $P$ as polynomials in $x_{m+1}$ variable. By our assumption, $Q$ is irreducible and does not divide $P(x_{m+1}).$ Hence, $Res(Q(x_1,\dots,x_{m+1}),P(x_{m+1}))\neq 0.$
By our induction hypothesis,
\[
\int \log |R(Q,P)(z_1,\dots,z_m)|d\mu(z_1)\dots d\mu(z_m)\geq 0.
\]
We note that
\[
\log |R(Q,P)(z_1,\dots,z_m)|=\deg(P)\log|a_d(z_1,\dots,z_m)| + \sum_{i=1}^d \log|P(\xi_i(z_1,\dots,z_{m}))|.
\]
Hence,
\[
\int \left(\deg(P)\log|a_d(z_1,\dots,z_m)| + \sum_{i=1}^d \log|P(\xi_i(z_1,\dots,z_{m}))|\right)d\mu(z_1)\dots d\mu(z_m)\geq 0.
\]
The above implies~\eqref{pns} by the definitions of $\mu_Q$ and $a_Q$ in \eqref{defmuq} and \eqref{defaq}. This completes our proof and implies
\[
\int \log |Q(z_1,\dots,z_m,z_{m+1})|d\mu(z_1)\dots d
\mu(z_m)d\mu(z_{m+1})\geq0.
\]
\end{proof}
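The resultant identity used in the induction step, $\textup{Res}(Q,P)=a_d^{\deg P}\prod_i P(\xi_i)$, can be checked numerically by comparing it with the symmetric formula in terms of the roots of $P$; the polynomials below are arbitrary illustrative choices:

```python
import numpy as np

Q = np.array([2.0, -3.0, 1.0, 5.0])   # Q(x) = 2x^3 - 3x^2 + x + 5
P = np.array([1.0, 0.0, -2.0])        # P(x) = x^2 - 2
d, e = len(Q) - 1, len(P) - 1         # degrees of Q and P

# Res(Q, P) via the roots xi_i of Q ...
lhs = Q[0] ** e * np.prod(np.polyval(P, np.roots(Q)))
# ... and via the symmetric formula with the roots of P.
rhs = (-1) ** (d * e) * P[0] ** d * np.prod(np.polyval(Q, np.roots(P)))
assert np.isclose(lhs, rhs)
```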
\subsection{Lower bound on the product of the successive minima}
First, we give a lower bound on the product of the successive minima by using Theorem~\ref{mdim}.
\begin{proposition}\label{lowerd}
We have
\[
\prod_{i=0}^{m-1} \lambda_i \geq \frac{1}{m!} e^{(-nm+\frac{(m-1)m}{2})I_\mu}.
\]
\end{proposition}
\begin{proof}
By the definition of $\{\lambda_i: 0\leq i\leq m-1\}$, there exist linearly independent integral polynomials $\{P_0,\dots,P_{m-1}\}$ such that
\[
|P_i(z)|\leq \lambda_i e^{nU_\mu(z)}
\]
for every $z\in \mathbb{C}.$ Let $\{z_1,\dots,z_m \}\subset \mathbb{C}.$
Therefore,
\begin{equation}\label{detinq}
|\det [P_i(z_j)]|\leq m!\prod_{i=0}^{m-1} \lambda_i \prod_{j=1}^{m}e^{nU_\mu(z_j)}.
\end{equation}
On the other hand, we have
\[
\det [P_i(x_j)]=Q_m(x_1,\dots,x_m)\prod_{i<j} (x_i-x_j)
\]
for some $Q_m\in \mathbb{Z}[x_1,\dots,x_m].$ By taking the average of the above, we obtain
\[
\int \log |\det [P_i(z_j)]| d\mu(z_1)\dots d\mu(z_m)=\frac{(m-1)m}{2}I_{\mu}+\int \log |Q_m(z_1,\dots,z_m)| d\mu(z_1)\dots d\mu(z_m),
\]
where
\[
I(\mu):=\int \log|z_1-z_2|d\mu(z_1)d\mu(z_2).
\]
By the second part of Theorem~\ref{mdim}, we have
\[
\int \log |Q_m(z_1,\dots,z_m)| d\mu(z_1)\dots d\mu(z_m)\geq 0.
\]
Hence,
\[
\int \log |\det [P_i(z_j)]| d\mu(z_1)\dots d\mu(z_m)\geq \frac{(m-1)m}{2}I_{\mu}.
\]
By~\eqref{detinq}, we obtain
\[
mnI_{\mu}+\sum_{i} \log \lambda_i+ \log (m!)\geq\int \log |\det [P_i(z_j)]| d\mu(z_1)\dots d\mu(z_m)\geq \frac{(m-1)m}{2}I_{\mu}.
\]
Therefore,
\[
\prod_i \lambda_i \geq \frac{1}{m!} e^{(-nm+\frac{(m-1)m}{2})I_\mu}.
\]
\end{proof}
\begin{proposition}\label{upperbound}
We have
\[
\lambda_n \leq n^{cn^{1-\frac{\delta}{12}}}.
\]
\end{proposition}
\begin{proof}
Let $m=\lfloor n-n^{\delta_1}\rfloor$, where $0<\delta_1<1$ is specified later. By Proposition~\ref{lowerd}, applied with $m+1$ in place of $m$,
\[
\prod_{i=0}^m \lambda_i \geq \frac{1}{(m+1)!} e^{(-n(m+1)+\frac{m(m+1)}{2})I_\mu}.
\]
By Minkowski's second theorem and Proposition~\ref{vol_K_n},
\[
\lambda_0\lambda_1\dots \lambda_n\leq 2^{n+1}\textup{vol}(K_n)^{-1} \ll2^{n+1} n^{cn^{2-\frac{\delta}{6}}}e^{-\frac{n^2}{2}I(\mu)}.
\]
By combining the above inequalities, we have
\[
\prod_{i=m+1}^n \lambda_i \ll n^{cn^{2-\frac{\delta}{6}}} e^{-\frac{(n-m)^2}{2}I(\mu)}.
\]
By Proposition~\ref{mink}, $I(\mu)\geq 0,$ and we have
\[
\prod_{i=m+1}^n \lambda_i \ll n^{cn^{2-\frac{\delta}{6}}}.
\]
Since $\lambda_i\geq \lambda_{m+1}$ for every $i\geq m+1$, we have $\lambda_{m+1}^{n-m}\leq \prod_{i=m+1}^n \lambda_i$. Hence, by taking $\delta_1=1-\delta/12,$ we obtain
\[
\lambda_{m+1}\leq n^{cn^{2-\frac{\delta}{6}-\delta_1}} \leq n^{cn^{1-\frac{\delta}{12}}}.
\]
By Proposition~\ref{successive}, we have
\[
\lambda_n \leq \lambda_{m+1} (An^C)^{n-m} \leq n^{cn^{1-\frac{\delta}{12}}}.
\]
This completes the proof of our proposition. Note that we may need to enlarge the constant $c$ in the above lines of argument.
\end{proof}
\subsection{Equilibrium measure}\label{chebsec}
In this subsection, we apply the results of the previous section to the equilibrium measure and prove Proposition~\ref{chpol}. Our proof is based on Fekete and Szeg\"o~\cite[Theorem D]{MR72941}, which is also used by Robinson~\cite{Robinson} and Smith~\cite[Proposition 4.1]{Smith} to prove similar results. All these results are essential for keeping the roots inside $\Sigma(\rho).$
\\
Suppose that $\Sigma$ is compact and its complement is connected. Let $D$ be any open set containing $\Sigma$. Let
\[
\Sigma(\rho):=\{z\in \mathbb{C}: |z-\sigma|<\rho \text{ for some }\sigma\in\Sigma \},
\]
where $\Sigma(\rho)\subset D.$
Note that for any open set $D$ containing $\Sigma,$ there exists $\rho>0$ such that $\Sigma(\rho)\subset D.$
Let $\mu_{eq}$ be the equilibrium measure of $\Sigma.$ Fix any $\rho>0$ and $z_0\in \Sigma.$ It is well known that
\begin{equation}\label{eqbd}
U_{\mu_{eq}}(z)\geq \log (d_{\Sigma})+\delta_{\rho,\Sigma}
\end{equation}
for any $z\notin\Sigma(\rho),$ where $\delta_{\rho,\Sigma}>0 $ is a constant that only depends on $\Sigma$ and $\rho.$ Let $p_{M,n,\mu_{eq}}(x)$ be the polynomial constructed in~\eqref{measpol} associated to the equilibrium measure.
By~\eqref{potapp} and~\eqref{eqbd} there exists some $R>d_{\Sigma}$ and $M_0,n_0\in\mathbb{Z}$ such that
\[
|p_{M_0,n_0,\mu_{eq}}(z)|\geq R^{n_0}
\]
for any $z\notin\Sigma(\rho).$
Fix $z_0\in \Sigma(\rho)$ and $p_{M_0,n_0,\mu_{eq}}(z),$ and define
\begin{equation}\label{defTm}
T_m(x):=(x-z_0)^rp_{M_0,n_0,\mu_{eq}}(x)^q,
\end{equation}
where $m=n_0q+r$ is the Euclidean division of $m$ by $n_0$.
Let $m\geq h$ be two positive integers. It follows that
\begin{equation}\label{cheby}
|T_m(z)|\geq CR^{m-h}|T_h(z)|
\end{equation}
for any $z\notin\Sigma(\rho),$ where $C$ only depends on fixed parameters $\rho$ and $n_0.$
\begin{proposition}\label{chpol}
Given a compact subset $\Sigma\subset \mathbb{C}$ with connected complement, a monic polynomial $p(x)$ with real coefficients, and $\rho>0$, there exist fixed integers $l_0$ and $k_0$, independent of $p(x)$, and a monic polynomial $Q_m(x)$ with $\deg(Q_m)=m$ for every $m=kl_0$ with $k>k_0$, such that
\[
|Q_m(z)|\geq \kappa^m
\]
for every $z\notin \Sigma(\rho)$ where $\kappa>d_{\Sigma}.$ Moreover, the coefficient of $x^i$ in $p(x)Q_m(x)$ is an even integer for every $\deg(p)\leq i\leq \deg(p)+m-1$, and all complex roots of $Q_m(x)$ are inside $\Sigma(\rho).$
\end{proposition}
\begin{proof}
Recall $T_m(x)$ in~\eqref{defTm} for $m\in \mathbb{Z},$ and define
\[
q_{k,l}(x):=\left(T_k(x)+a_1T_{k-1}(x)+\dots+a_kT_0(x) \right)^l
\]
where $|a_i|\leq \frac{1}{l}$ are chosen recursively for $1\leq i\leq k$ such that $q_{k,l}(x)p(x)$ has even coefficients for $x^{\deg(p)+\deg(q_{k,l})-i},$ where $1\leq i\leq k.$ Note that this is possible since the coefficient of $x^{\deg(p)+\deg(q_{k,l})-j}$ for $j\leq k$ in $q_{k,l}(x)p(x)$ is given by
\[
la_j+F_j(a_1,\dots,a_{j-1})
\]
where $F_j(a_1,\dots,a_{j-1})$ is a polynomial in terms of $a_i$ for
$i<j.$ By~\eqref{cheby}, we have
\begin{align*}
|q_{k,l}(z)|&=\left|\left(T_k(z)+a_1T_{k-1}(z)+\dots+a_kT_0(z) \right)^l\right|
\\
&\geq \left|T_k(z)\left(1-\frac{C}{l}\sum_{j\geq 1}\frac{1}{R^j}\right)\right|^l \geq |T_k(z)|^l\left(1-\frac{C}{l(R-1)}\right)^{l}.
\end{align*}
We fix an integer $l_0> \frac{C}{R-1}$, and let
\[
C':=\left(1-\frac{C}{l_0(R-1)}\right)^{l_0}>0.
\]
Hence,
\begin{equation}\label{fineq}
|q_{k,l_0}(z)| \geq C' |T_k(z)|^{l_0}\geq C''|p_{M_0,n_0,\mu_{eq}}(z)|^{\left\lfloor\frac{k}{n_0}\right\rfloor l_0}
\end{equation}
where $C''$ is a fixed constant depending on our fixed parameters.
Finally, let $m:=l_0k$ and
\[
Q_m(x):=q_{k,l_0}(x)-\sum_{i=0}^{m-k}b_iT_i(x),
\]
where $|b_i|\leq 1$ are chosen recursively for $0\leq i\leq m-k$ such that $Q_m(x)p(x)$ has even coefficients for $x^{j},$ where $\deg(p)\leq j\leq \deg(p)+ m-k.$ By \eqref{fineq}, \eqref{cheby}, and \eqref{defTm}, we have
\begin{align*}
|Q_m(z)|\geq |q_{k,l_0}(z)|-\sum_{i=0}^{m-k}|T_i(z)|\geq |p_{M_0,n_0,\mu_{eq}}(z)|^\frac{kl_0}{n_0} \left( C''-\frac{C_1}{R^{k-1}(R-1)} \right).
\end{align*}
Our proposition follows by taking $k>k_0$ where $k_0$ is a fixed constant.
\end{proof}
Let $\Sigma\subset \mathbb{R}$ be a finite union of closed intervals. For applications to abelian varieties and the trace problem, we use the following proposition. The statement of this proposition is motivated by~\cite[Proposition 4.1]{Smith}, which is based on Robinson~\cite{Robinson}. All the results in Robinson's~\cite{Robinson}, Smith's~\cite{Smith}, and our work are essential for keeping the roots inside $\Sigma_{\mathbb{R}}(\rho).$ Smith and Robinson used properties of Chebyshev polynomials to prove their results, and their methods are very similar to the proof of Proposition~\ref{chpol}. Our proof is conceptually different.
\begin{proposition}\label{chpolreal} Suppose that $\Sigma\subset \mathbb{R}$ is a finite union of closed intervals with $d_{\Sigma}\geq 1$. Given a monic polynomial with real coefficients $p(x)$ of degree $n$, $\rho>0$, and every $m\leq n,$ there exists a monic polynomial $Q_m(x)$ with $\deg(Q_m)=m$ such that
\begin{enumerate}
\item The degree $i$ coefficient of the product $pQ_m$ is an even integer for $n-m\leq i\leq n-1$.
\item \label{2deform} Take $X$ to be the set of roots of $Q_m$. Then $X$ is a subset of $\Sigma_{\mathbb{R}}(\rho)\subset \mathbb{R}$, and we have
\begin{equation*}
\frac{\log |Q_{m}(z)|}{m} =\frac{\min_{\alpha\in X}\log|z-\alpha|}{m}+ U_{\mu_{eq}}(z)+O(m^{-\frac{\delta}{6}}\log(m))
\end{equation*}
for all $z\in \mathbb{C}$ and some $\delta>0,$ where $\mu_{eq}$ is the equilibrium measure for $\Sigma_{\mathbb{R}}(\rho/10)\subset \mathbb{R}.$
\item~\label{3deform} Given any root $\alpha$ of $Q_m$, and given any $\alpha'$ that is either a root of $p$ or a boundary
point of $\Sigma_{\mathbb{R}}(\rho)$, we have
\(
n^{-3} \leq |\alpha-\alpha'|.
\)
\end{enumerate}
\end{proposition}
\begin{comment}
We cite a version of the previous proposition from Smith's work~\cite[Proposition 4.1]{Smith}. We note that Smith's proof is constructive when the Chebyshev's polynomials of $\Sigma$ are explicit. For example when $\Sigma=[-2,2]\subset \mathbb{R}$, the $n$-th degree Chebyshev's polynomial is~\cite{Robinson}:
\[
T_n(x)=x^n+\sum_{k=1}^{\lfloor n/2\rfloor}(-1)^k\frac{n}{k}\binom{n-k-1}{k-1}x^{n-2k}.
\]
For applications to abelian varieties and Siegel's trace problem, we use the following proposition for closed intervals of real line in our proofs and the implementation of our algorithm.
\begin{proposition}\cite[Proposition 4.1]{Smith}\label{chpolreal}
Choose a compact finite union of intervals $\Sigma$ with $d_\Sigma>1.$ Then there is a positive integer $D_0$ and positive real $C$ so we have the following:
\\
Choose positive integers $m$ and $n$ satisfying $m < n < d_{\Sigma}^{m/2}$, with $m$ divisible by $D_0$ and greater than $C$. Choose any monic real polynomial $P$ of degree $n-m$. Then there is a monic real polynomial $Q$ of degree $m$ satisfying the following three conditions:
\begin{enumerate}
\item The degree $i$ coefficient of the product $PQ$ is an even integer for $n-m\leq i\leq n-1$.
\item Take $X$ to be the set of roots of $Q$.Then $X$ is a subset of $\Sigma$,and we have
\[
d_{\Sigma}^mn^{-C} \min_{\alpha\in X}|x-\alpha| \leq |Q(x)| \leq d_{\Sigma}^mn^C
\]
for all $x\in \Sigma$.
\item Given any root $\alpha$ of $Q$, and given any $\alpha'$ that is either a root of $P$ or a boundary
point of $\Sigma$, we have
\[
n^{-C} \leq |\alpha-\alpha′|.
\]
\end{enumerate}
\end{proposition}
\end{comment}
\begin{proof}
Let $P_{M,m,\mu_{eq}}$ be the polynomial constructed in~\eqref{measpolreal} associated to the equilibrium measure for $\Sigma_{\mathbb{R}}(\rho/10)$ with roots $x_1,\dots,x_m,$ such that $M=m^{1/3},$ and by a covering argument we pick $x_1,\dots,x_m$ such that for every $x_i$ we have \(
n^{-3} \leq |x_i-\alpha'|
\) for any $\alpha'$ that is either a root of $p$, any other $x_j\neq x_i$ or a boundary
point of $\Sigma_{\mathbb{R}}(\rho).$
By~\eqref{potapp} and~\eqref{eqbd}, this polynomial satisfies all the conditions above except the first one.
Our method is to deform the roots of $P_{M,m,\mu_{eq}}$ with a greedy algorithm so that the resulting polynomial satisfies all the properties of Proposition~\ref{chpolreal}. We start with the degree $n-1$ coefficient of $pP_{M,m,\mu_{eq}}.$
We pick $\lceil 10/\rho\rceil$ of the roots, say $x_1,\dots,x_{\lceil 10/\rho\rceil}$, and deform each of them by less than $\rho/10$ so that the degree $n-1$ coefficient of $pQ_m$ becomes even and the roots remain inside $\Sigma_{\mathbb{R}}(\rho).$ We show that this is possible. Let $Q_{m,0}(x):=P_{M,m,\mu_{eq}}(x)=x^m+\sum_{i=1}^{m}a_ix^{m-i}.$ The coefficient of $x^{\deg(p)+m-j}$ for $j\leq l$ in $Q_{m,0}(x)p(x)$ is given by
\[
a_j+F_j(a_1,\dots,a_{j-1})
\]
where $F_j(a_1,\dots,a_{j-1})$ is a polynomial in terms of $a_i$ for
$i<j.$ We define $x_i(t):=x_i+t\frac{\rho}{10}$ for $i\leq \lceil 10/\rho\rceil$ and call the deformed polynomial $Q_{m,0,t}.$
By the intermediate value theorem, there exists $|t|\leq 1$ such that the degree $n-1$ coefficient of $pQ_{m,0,t}$ becomes an even number and the roots remain inside $\Sigma_{\mathbb{R}}(\rho).$ Suppose that after this deformation, condition~\eqref{3deform} is violated for $x_1,\dots,x_k.$ By \eqref{Bireal}, it is possible to pick $k$ other roots of $Q_{m,0}(x)$, say $y_1,\dots, y_k$, such that there are no other roots within distance $n^{-4/3}$ of any $y_i.$ We update $x_1$ and $y_1$ to $x_1+t$ and $y_1-t.$
By a covering argument, there exists $t\ll n^{-2}$ such that $x_1+t$ is at distance at least $n^{-3}$ from all other roots. Similarly, we update $x_i, y_i$ so that the third condition is satisfied.
Denote the updated polynomial by $Q_{m,1}.$ Note that $Q_{m,1}$
satisfies the second and third conditions above, and
the degree $n-1$ coefficient of $pQ_{m,1}$ is even.
\\
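To make the single-coefficient deformation step concrete, the following illustrative Python sketch (our own toy code, not part of the actual construction) moves one root by $t\cdot\delta$ for $t\in[-1,1]$ and uses bisection, i.e.\ the intermediate value theorem, to land a chosen coefficient on an even integer, assuming the coefficient sweeps an interval of length at least $2$:

```python
from math import floor

def coeffs_from_roots(roots):
    """Coefficients [c_0, ..., c_m] (lowest degree first) of prod_i (x - r_i)."""
    c = [1.0]
    for r in roots:
        # multiply the current polynomial by (x - r)
        c = [-r * c[0]] + [c[i - 1] - r * c[i] for i in range(1, len(c))] + [c[-1]]
    return c

def coeff_after_move(roots, j, t, delta, idx):
    """The idx-th coefficient after replacing roots[j] by roots[j] + t*delta."""
    moved = list(roots)
    moved[j] += t * delta
    return coeffs_from_roots(moved)[idx]

def make_coeff_even(roots, j, delta, idx, tol=1e-12):
    """Find t in [-1, 1] such that the idx-th coefficient of the deformed
    polynomial equals an even integer, via bisection.  Assumes the coefficient
    sweeps a range containing an even integer strictly in its interior."""
    a, b = -1.0, 1.0
    fa = coeff_after_move(roots, j, a, delta, idx)
    fb = coeff_after_move(roots, j, b, delta, idx)
    target = 2 * floor(max(fa, fb) / 2)            # largest even integer <= max
    if target <= min(fa, fb):
        raise ValueError("no even integer strictly inside the swept range")
    g = lambda t: coeff_after_move(roots, j, t, delta, idx) - target
    if g(a) > 0:                                   # orient so g(a) < 0 <= g(b)
        a, b = b, a
    while abs(b - a) > tol:
        mid = (a + b) / 2
        if g(mid) < 0:
            a = mid
        else:
            b = mid
    return (a + b) / 2, target
```

For example, for roots $\{1.3,-0.2,0.4\}$ the coefficient of $x^2$ is $-1.5$; moving the first root by $t=0.5$ makes it the even integer $-2$.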
We construct $Q_{m,l}$ for $l\geq 1$ recursively as follows. Suppose that $Q_{m,l}$
satisfies the second and third conditions above, and that the degree $i$ coefficient of the product $pQ_{m,l}$ is an even integer for $n-l\leq i\leq n-1$. We want to make the degree $n-l-1$ coefficient even as well, without changing the higher-degree coefficients, by deforming the roots of $Q_{m,l}$ as we did in the previous paragraph.
\\
Suppose that $l\leq \log\log(n).$ We pick $n^{\varepsilon}$ distinct $l$-tuples $\vec{x}:=[x_1,\dots,x_l]$ among the roots inside $\Sigma_{\mathbb{R}}(\rho/10)$, for some $\varepsilon\geq 0$, such that $|x_i-x_j|\gg 1/l.$ Fix $\vec{x}=[x_1,\dots,x_l]$, and
let
\(
s_k:=\sum_{i=1}^lx_i^k,
\)
and
\(
e_k:=\sum_{1\leq i_1<\dots<i_k\leq l}x_{i_1}\dots x_{i_k},
\)
where $e_0=1$ and $s_0=l.$
Let
\(\nabla s_k:=[kx_1^{k-1},\dots,kx_l^{k-1}]\) be the gradient of $s_k.$ Let
\[
v_l:=\left[\frac{1}{\prod_{i\neq 1}(x_1-x_i)},\frac{1}{\prod_{i\neq 2}(x_2-x_i)},\dots, \frac{1}{\prod_{i\neq l}(x_l-x_i)}\right].
\]
It follows that $v_l$ is orthogonal to $\nabla s_k$ for $1\leq k\leq l-1$ and
\(
\left<v_l,\nabla s_l \right>=l.
\)
We note that
\[
e_k=\frac{1}{k} \sum_{j=1}^k (-1)^{j-1}e_{k-j}s_j.
\]
This implies that $\left<v_l,\nabla e_k \right>=0$ for every $0\leq k\leq l-1$ and
\(
\left<v_l,\nabla e_l \right>=(-1)^{l-1};
\)
all that matters below is that this pairing is nonzero.
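These pairings can be verified in exact rational arithmetic for small $l$. The following illustrative Python sketch (our own toy code, with hypothetical names) evaluates $v_l$ and the gradients of the power sums $s_k$ and the elementary symmetric polynomials $e_k$; the pairing with $\nabla e_l$ evaluates to $(-1)^{l-1}$, the nonzero value the deformation argument needs.

```python
from fractions import Fraction

def v_vec(xs):
    """v_l from the text: i-th entry is 1 / prod_{j != i} (x_i - x_j)."""
    out = []
    for i, xi in enumerate(xs):
        p = Fraction(1)
        for j, xj in enumerate(xs):
            if j != i:
                p *= xi - xj
        out.append(Fraction(1) / p)
    return out

def grad_s(xs, k):
    """Gradient of the power sum s_k = sum_i x_i^k."""
    return [k * x ** (k - 1) for x in xs]

def elem_sym(vals, k):
    """Elementary symmetric polynomial e_k via the product prod_i (1 + x_i t)."""
    c = [Fraction(1)]
    for val in vals:
        c = [c[0]] + [c[i] + val * c[i - 1] for i in range(1, len(c))] + [val * c[-1]]
    return c[k] if k < len(c) else Fraction(0)

def grad_e(xs, k):
    """Gradient of e_k: the i-th entry is e_{k-1} of the remaining variables."""
    return [elem_sym(xs[:i] + xs[i + 1:], k - 1) for i in range(len(xs))]

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))
```

For instance, with four distinct rational nodes, $\left<v_4,\nabla s_k\right>=0$ for $k<4$, $\left<v_4,\nabla s_4\right>=4$, and $\left<v_4,\nabla e_4\right>=-1$.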
We deform $\vec{x}$ with the following ODE
$\frac{d\vec{x}(t)}{dt}=v_l(t)$ and the initial condition $\vec{x}(0)=\vec{x}.$
We note that each coordinate of $v_l$ is less than $l^l \leq \frac{n^{\varepsilon}\rho}{10}$.
Therefore, by the intermediate value theorem, it is possible to deform $n^{\varepsilon }$ distinct $l$-tuples $[x_1,\dots,x_l]$ along the direction of $v_l$ at most $\rho/10$ amount and make the coefficient of $n-l-1$ also even.
Suppose that after deformation, condition~\eqref{3deform} is violated for $x_1,\dots,x_k,$ where $k\leq n^{\varepsilon}$.
By \eqref{Bireal}, it is possible to pick $kl\leq n^{\varepsilon}$ other roots of $Q_{m,0}(x)$, say $y_{i,j}$ for $1\leq i\leq k$ and $1\leq j\leq l$, such that there are no other roots within distance $n^{-4/3}$ of $y_{i,j}$, $|y_{i,k}-y_{i,k'}|\gg 1/l$ for $k\neq k',$ and $|x_i-y_{i,j}|\gg 1/l$. We deform the $(l+1)$-tuple $\vec{x}_i:=\left[x_i,y_{i,1},\dots, y_{i,l}\right]$ with the ODE
$\frac{d\vec{x}_i(t)}{dt}=v_{i,l+1}(t)$ and the initial condition $\vec{x}_i(0)=\vec{x}_i$, where
\[
v_{i,l+1}(t):=\left[\frac{1}{\prod_j(x_i-y_{i,j})},\frac{1}{(y_{i,1}-x_i)\prod_{j\neq 1}(y_{i,1}-y_{i,j})},\dots, \frac{1}{(y_{i,l}-x_i)\prod_{j\neq l}(y_{i,l}-y_{i,j})}\right].
\]
By a covering argument, there exists $t\ll n^{-2+\varepsilon}$ such that $x_1(t)$ is at distance at least $n^{-3}$ from all other roots.
Denote the updated polynomial by $Q_{m,l+1}.$ Note that $Q_{m,l+1}$
satisfies the second and third conditions above, and the degree $i$ coefficient of the product $pQ_{m,l+1}$ is an even integer for $n-l-1\leq i\leq n-1$.
\\
Suppose that $n^{\varepsilon} \geq l\geq \log\log(n)$ for some $\varepsilon >0.$ We pick $x_1,\dots,x_l$ according to the equilibrium measure among the roots inside $\Sigma_{\mathbb{R}}(\rho/10)$. As before, let $\vec{x}=[x_1,\dots,x_l]$ and
\[
v_l:=\left[\frac{1}{\prod(x_1-x_i)},\frac{1}{\prod(x_2-x_i)},\dots, \frac{1}{\prod(x_l-x_i)}\right].
\]
By~\eqref{potapp} and~\eqref{eqbd}, the size of $v_l$ is exponentially small and we have
\[
|v_l|\leq e^{-ld_{\Sigma}}.
\]
We deform $\vec{x}$ with the following ODE
$\frac{d\vec{x}(t)}{dt}=v_l(t)$ and the initial condition $\vec{x}(0)=\vec{x}.$ By the intermediate value theorem, there exists $|t|\leq 1$ so that the degree $n-l-1$ coefficient of $pQ_{m,l+1}$ is even.
It is clear that the perturbation remains inside $\Sigma_{\mathbb{R}}(\rho).$ As before, we can ensure the third condition is satisfied by picking the $y_{i,j}$ according to the equilibrium measure and deforming the roots so that they remain at distance at least $n^{-3}$ from each other.
\newline
Finally, suppose that $m \geq l\geq n^{\varepsilon}.$ We pick $x_1,\dots,x_l$ according to the equilibrium measure among the roots inside $\Sigma_{\mathbb{R}}(\rho/10)$. As before, let $\vec{x}=[x_1,\dots,x_l]$ and
\[
v_l:=\left[\frac{1}{\prod(x_1-x_i)},\frac{1}{\prod(x_2-x_i)},\dots, \frac{1}{\prod(x_l-x_i)}\right].
\]
By~\eqref{potapp} and~\eqref{eqbd}, the size of $v_l$ is smaller than any negative power of $n$
\[
|v_l|\leq e^{-ld_{\Sigma}}\leq e^{-n^{\varepsilon}}\ll n^{-A}
\]
for any $A>0.$
We deform $\vec{x}$ with the following ODE
$\frac{d\vec{x}(t)}{dt}=v_l(t)$ and the initial condition $\vec{x}(0)=\vec{x}.$ By the intermediate value theorem, there exists $|t|\leq 1$ so that the degree $n-l-1$ coefficient of $pQ_{m,l+1}$ is even. This time the perturbation does not affect conditions \eqref{2deform} and \eqref{3deform}, since the perturbation is smaller than $n^{-A}$ for any $A.$
This completes the proof of our proposition.
\end{proof}
\section{Proofs of Theorem~\ref{main1} and Theorem~\ref{general}}
\subsection{Proof of Theorem~\ref{main1}}
In this subsection, we assume that $\Sigma\subset \mathbb{C}$ is compact and has empty interior with connected complement. We also assume that $\mu \in \mathcal{B}_{\Sigma}$ is a H\"older measure.
\begin{comment}
\begin{proposition}\label{discp}
Suppose that $g_n(x)$ is a monic polynomial of degree n with complex coefficients and let $\mu $ and $\Sigma$ be as above. Moreover, suppose that
\[
\frac{\log |g(z)|}{n} \leq U_{\mu}(z)+O(n^{-a})
\]
for some $a>0.$ Then
\[
\frac{\log |g(z)|}{n} = U_{\mu}(z)+O(n^{-a}),
\]
where $\inf_{i}|z-\xi_i|>n^{-A}$ for any fixed $A>0.$ Moreover,
\[
\text{disc}_{\delta,\omega}(\mu_{h_n},\mu)=O(n^{-b})
\]
for some $b>0$ that only depends on $a$ and the Holder exponent of $\mu.$
\end{proposition}
\begin{proof}
\end{proof}
\end{comment}
\begin{proof}[Proof of Theorem~\ref{main1}]
Suppose that $\mu \in \mathcal{B}_{\Sigma}$ is a H\"older measure with exponent $\delta,$ and $D$ is any open set containing $\Sigma$. Fix $\rho>0$ such that $\Sigma(\rho)\subset D.$
We apply Proposition~\ref{chpol} to $p_{M,n-m,\mu}(x)$ defined in~\eqref{measpol} and $m:= n^{1-\delta_0}$ for some $\delta_0>0$ that we specify later and obtain $Q_m(x)$ such that
\[
p_{M,n-m,\mu}(x)Q_m(x)=x^{n}+\sum_{i=0}^{n-1} a_ix^i
\]
where $a_i$ are even for $n-m\leq i\leq n-1.$ Let
\[
r(x):=\frac{1}{2}+\sum_{i=0}^{n-1-m} \frac{a_i}{4} x^i.
\]We write $r(x)$ in terms of linearly independent integral polynomials $\{P_0,\dots,P_{n-1-m}\}$ obtained from Minkowski's successive minima of $K_{n-1-m}$, and obtain
\[
r(x)=\sum_{i=0}^{n-1-m} \alpha_i P_i(x).
\]
Let
\[
w(x):=\sum_{i=0}^{n-1-m} \lfloor\alpha_i\rfloor P_i(x),
\]
and
\[
h_n(x):= x^{n}+\sum_{i=n-m}^{n-1} a_ix^i +4w(x)-2.
\]
By the Eisenstein criterion at the prime 2, $h_n(x)$ is irreducible. Moreover,
\[
h_n(x)=p_{M,n-m,\mu}Q_m(x)+\sum_{i=0}^{n-1-m} \beta_iP_i(x),
\]
where $|\beta_i|=4|\alpha_i-\lfloor\alpha_i\rfloor|<4.$ Note that
\[
|P_i(z)|\leq \lambda_i e^{(n-m)U_\mu(z)}
\]
for every $z\in \mathbb{C},$ and by Proposition~\ref{upperbound},
\[\lambda_i\leq n^{cn^{1-\frac{\delta}{12}}}.\]
Moreover, by \eqref{potapp} and Proposition~\ref{chpol}
\[
\left| p_{M,n-m,\mu}(z)Q_m(z) \right|\geq e^{(n-m)U_\mu(z)} \kappa^m n^{-Cn^{1-\frac{\delta}{6}}}
\]
for every $z\notin \Sigma(\rho)$,
where $M=\lfloor n^{1/3}\rfloor.$
By taking $\frac{\delta}{12}>\delta_0>0$ and $m=n^{1-\delta_0}$, it follows that
\[
\left| p_{M,n-m,\mu}(z)Q_m(z) \right|> \left|\sum_{i=0}^{n-m-1} \beta_iP_i(z)\right|
\]
for large enough $n$ and every $z\notin \Sigma(\rho).$
By Rouch\'e's theorem, $h_n(x)$ and $p_{M,n-m,\mu}(x)Q_m(x)$ have the same number of roots inside $\Sigma(\rho).$ Since all roots of $p_{M,n-m,\mu}(x)Q_m(x)$ are inside $\Sigma(\rho),$ all roots of $h_n(x)$ are also inside $\Sigma(\rho).$ By Lemma~\ref{lem1}, we have
\[
\frac{\log|Q_m(z)|}{m}\leq \max(\log|z|,0)+C_{\Sigma},
\]
where $C_{\Sigma}$ is a constant that only depends on $\Sigma.$ Since $m=n^{1-\delta_0},$ we have
\[
\frac{\log |h_n(z)|}{n} \leq U_\mu(z)+ O(n^{-\delta_0}).
\]
This completes the proof of Theorem~\ref{main1}. For $\Sigma\subset \mathbb{R}$, our argument is similar: we apply Proposition~\ref{chpolreal} instead of Proposition~\ref{chpol} and work with $p_{M,n-m,\mu}(x)Q_m(x)$, which has $n$ distinct roots that are at least $n^{-3}$ apart. Instead of Rouch\'e's theorem, we use the intermediate value theorem and the fact that we have $n$ sign changes on $\Sigma(\rho)$ to prove that all roots are real and lie inside $\Sigma(\rho).$
\end{proof}
\subsection{Proof of Theorem~\ref{general}}
In this section, we show that Theorem~\ref{main1} implies Theorem~\ref{general}. To prove Theorem~\ref{main1}, we assumed that $\mu$ is a H\"older measure, that the support of $\mu$ has empty interior, and that its complement is connected.
Here we show that these conditions are unnecessary for proving Theorem~\ref{general}.
By Proposition~\ref{holderreduction}, for any $\rho>0,$ there exists a sequence of H\"older probability measures $\mu_n\in\mathcal{B}_{\Sigma(\rho)}$ such that \( \lim_{n\to \infty} \mu_n=\mu\). So by a diagonal argument, the proof is reduced to the case where $\mu$ is H\"older.
\newline
Recall $\Sigma\subset [a,b]\times[-c,c]$ as in Section~\ref{bound_Kn} and
$[a,b]\times[-c,c]= \bigcup_{0\leq i,j< 2M}B_{ij}$,
where we define $B_{ij}=[a_i,a_{i+1}]\times[c_{j},c_{j+1}]$ with $a_i=a+\frac{i(b-a)}{2M}$ and $c_j=-c+\frac{j(2c)}{2M}$. Furthermore, the points $z_{ijk}\in B_{ij}$, for $1\le k\le n_{ij}:=\lfloor(n+1)\mu(B_{ij})\rfloor+\epsilon_{ij}$, are chosen with distinct imaginary parts.
We now define a measure which imitates $\mu$ by being uniformly distributed on a union of intervals around the $z_{ijk}$. Let $\tilde \mu$ be the measure supported on $\bigcup_{ijk}\{z:|\Re(z-z_{ijk})|<\frac{1}{2n},\Im(z)=\Im(z_{ijk})\}$
where $\tilde \mu$ restricted to each interval is the one-dimensional Lebesgue measure on that interval.
Now let $\tilde\nu$ be the equilibrium measure on the support of $\tilde\mu$ and $\tilde \nu_{\epsilon}$ be the normalized pushforward of $\tilde\nu$ under the scaling map $x\mapsto (1+\epsilon)x$.
Finally, define $\mu_{\epsilon,\epsilon'}=(1-\epsilon')\tilde\mu+\epsilon'\tilde\nu_\epsilon$ for $\epsilon,\epsilon'\in(0,1)$.
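The box-counting allocation above is straightforward to implement. The following illustrative sketch (our own code, with hypothetical names) computes the counts $n_{ij}=\lfloor(n+1)\mu(B_{ij})\rfloor$, omitting the small correction terms $\epsilon_{ij}$, and records the horizontal intervals of half-width $\frac{1}{2n}$ carrying $\tilde\mu$, matching the integrals $\int_{-1/(2n)}^{1/(2n)}$ used in the proof below:

```python
from math import floor

def allocate_counts(box_masses, n):
    """box_masses: dict mapping a box index (i, j) to mu(B_ij); masses sum to 1.
    Returns n_ij = floor((n + 1) * mu(B_ij)); the correction terms eps_ij from
    the text, which absorb the rounding loss, are omitted in this sketch."""
    return {box: floor((n + 1) * mass) for box, mass in box_masses.items()}

def support_intervals(points, n):
    """Horizontal intervals |Re(z - z_ijk)| < 1/(2n) carrying tilde-mu."""
    return [(z.real - 1 / (2 * n), z.real + 1 / (2 * n)) for z in points]
```

Each interval has length $1/n$, so the total mass of $\tilde\mu$ is $\sum_{ij} n_{ij}/n\approx 1$.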
\begin{theorem}\label{no_interior_bound}
For all $Q\in\mathbb Z[x]$ and $0<\epsilon,\epsilon'<\frac{1}{2}$, $\int \log|Q(x)|d\mu_{\epsilon,\epsilon'}(x)\ge 0$ for large enough $n$.
\end{theorem}
\begin{proof}First, we relate $U_{\tilde{\mu}}$ with $U_\mu$. We compute
$$U_{\tilde\mu}(x)=\int\log|x-z|d\tilde\mu = \sum_{ijk}\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|x-y -z_{ijk}|dy.$$
Let $z_{hrs}$ be the closest root of $P_{M,n,\mu}$ to $x$. Then for all $ijk\neq hrs$ we have $|x-z_{ijk}|\gg n^{-5/6}$ by \eqref{Bij}, choosing $M\sim n^{1/3}$.
So for $ijk\neq hrs$, we have that
$$\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|x-y-z_{ijk}|dy - \frac{1}{n}\log|x-z_{ijk}| = \int_{\frac{-1}{2n}}^\frac{1}{2n}\log\left|1-\frac{y}{x-z_{ijk}}\right| dy\le \int_\frac{-1}{2n}^\frac{1}{2n}\frac{|y|dy}{|x-z_{ijk}|} \ll n^{-7/6}$$
Thus if $|x-z_{hrs}|\ge 2\cdot\text{diam}(\text{supp}(\mu))$, we have by Lemma \ref{lem1} that
$$U_{\tilde\mu}(x)=U_{\mu}(x)+O(n^{-1/6}),$$
and otherwise,
$$U_{\tilde\mu}(x)=\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|x-y-z_{hrs}|dy
+ \frac{1}{n}\sum_{ijk\neq hrs}\log|x-z_{ijk}|+O(n^{-1/6})$$
In the latter case, $\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|x-y-z_{hrs}|dy \ll\max(n^{-1},\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|y|dy)\ll n^{-1}\log(n)$.
Therefore, we have that $U_{\tilde\mu}(x)=U_{\mu_{p_{M,n,\mu}}}(x)-\frac{1}{n}\log |x-z_{hrs}| + O(n^{-1/6})$.
From our proof of Proposition \ref{simplex},
\begin{multline*}
\frac{\log |p_{M,n,\mu}^{efg+}(x)|}{n} - U_{\mu}(x)= \frac{\log \frac{|x-z_{hrs}||x-\Re(z_{efg})|}{|x-z_{efg}||x-\overline{z_{efg}}|}}{n}
\\
+O\left(\log(n)M^{-\delta}+ \frac{M^2\log(n)}{n}+ M^{-\delta/2}\log(M)^{1/2}\right).
\end{multline*}
Taking $M\sim n^{1/3}$, we have now shown that
$$U_{\tilde\mu}(x) = U_\mu(x)
+O\left(n^{-1/6}+n^{\frac{-\delta}{6}}\log(n)^\frac{1}{2}\right)$$
Therefore, in either case, we have $U_{\tilde\mu}(x)=U_\mu(x)+O(n^{-1/6}+n^{-\delta/6}\log(n)^{1/2})$.
\newline
Now for any $Q(x)=\prod_{i=1}^m(x-\alpha_i)\in\mathbb Z[x]$, we have that
$$\int\log|Q(x)|d\mu_{\epsilon, \epsilon'} =(1-\epsilon')\int\log|Q(x)|d\tilde\mu + \epsilon'\int\log|Q(x)|d\tilde\nu_\epsilon$$
Using our calculation above,
$$\int\log|Q(x)|d\tilde\mu = \sum_{i=1}^mU_{\tilde \mu}(\alpha_i) = \int \log|Q(x)|d\mu
+ mO(n^{-1/6}+n^\frac{-\delta}{6}\log(n)^{\frac{1}{2}})$$
Since $I(\tilde\nu)\ge I(\mu)\ge0$, we have that
\begin{equation}\label{pushcap}I(\tilde\nu_\epsilon) = \int\int\log|(1+\epsilon)(x-y)|d\tilde\nu d\tilde\nu=I(\tilde\nu)+\log|1+\epsilon|>I(\tilde\nu)+\epsilon-\epsilon^2.\end{equation}
We now bound $I(\tilde\nu)$. Since $\tilde \nu$ is the equilibrium measure on the support of $\tilde \mu$, $I(\tilde \mu)\le I(\tilde\nu)$.
To compute $I(\tilde\mu)$, we first see that
$\int_{\frac{-1}{2n}}^\frac{1}{2n}\int_{\frac{-1}{2n}}^\frac{1}{2n}\log|x-y|dxdy\ll n^{-2}\log(n)$.
Therefore
$$I(\tilde\mu)=\sum_{ijk\neq i'j'k'}\int_{\frac{-1}{2n}}^\frac{1}{2n}\int_\frac{-1}{2n}^\frac{1}{2n}\log|z_{ijk}-z_{i'j'k'}+x-y|dxdy+O(n^{-1}\log(n)).$$
By the same argument before, this is
$\frac{1}{n^2}\sum_{ijk\neq i'j'k'}\log|z_{ijk}-z_{i'j'k'}|+O(n^{-1/6})$.
Applying \eqref{maineq} with $M\sim n^{1/3}$,
$I(\tilde\mu) = O(n^{-1/6}+n^{\frac{-\delta}{6}}\log(n))$. Therefore for large enough $n$, $|I(\tilde \mu)|\le \frac{\epsilon}{4}$. Using \eqref{pushcap}, we have $I(\tilde\nu_\epsilon)>\frac{\epsilon}{4}$ since $\epsilon<1/2$.
Thus since
$$\int\log|Q(x)|d\mu_{\epsilon,\epsilon'}=(1-\epsilon')\int\log|Q(x)|d\mu+m\left(\epsilon'I(\tilde\nu_\epsilon)+(1-\epsilon')O(n^{-1/6}+n^\frac{-\delta}{6}\log(n)^\frac{1}{2})\right)$$
by equation (1.4) of \cite{logpotentials}, this shows that for each $0<\epsilon,\epsilon'<\frac{1}{2}$ there is a large enough $n$ for which $\int\log|Q(x)|d\mu_{\epsilon,\epsilon'}>0$, as desired.
\end{proof}
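The smoothing integral used repeatedly in the proof above has the closed form $\int_{-h}^{h}\log|y|\,dy=2h(\log h-1)$, so with $h=\frac{1}{2n}$ the contribution of the interval containing the nearest point is $O(n^{-1}\log n)$, as claimed. A quick numerical check (illustrative code of ours):

```python
import math

def smoothing_integral_exact(h):
    """Closed form: the integral of log|y| over [-h, h] equals 2h(log h - 1)."""
    return 2 * h * (math.log(h) - 1)

def smoothing_integral_midpoint(h, pieces=200_000):
    """Midpoint rule on [0, h], doubled by symmetry; the log singularity at 0
    is integrable, so the rule converges."""
    w = h / pieces
    return 2 * w * sum(math.log((i + 0.5) * w) for i in range(pieces))
```
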
\begin{comment}
\subsection{Discrepancy of $\mu_{P_n}$ and $\mu$}
Suppose you have two probability measures $\mu$ and $\nu$. Fix a non-zero smooth function $\omega$ with compact support and $\delta > 0$. Recall that the discrepancy between $\mu$ and $\nu$ is $\text{disc}_{\delta,\omega}(\mu,\nu) = \sup_{a<b,c<d} |\omega_\delta\ast\mu([a,b]\times[c,d])-\omega_\delta\ast\nu([a,b]\times[c,d])|$
where $\omega_\delta(x) = \frac{1}{\delta^2}\omega(\frac{x}{\delta})$.
\begin{proposition}\label{discprop}
If $||U_\mu-U_\nu||_1<\epsilon$, then $\text{disc}_{\epsilon^{1/8},\omega}(\mu,\nu)< C \sqrt\epsilon$ for some positive constant $C$ depending only on $\omega$.
\end{proposition}
\begin{proof}
Fix $B:=[a,b]\times [c,d]$.
Let $\ell_z(x) = \log|z-x|$ so that $U_\mu(z) = \ell_z\ast \mu$ and $U_\nu(z) = \ell_z\ast \nu$. It follows that $U_{\omega\ast\mu}(z) = \omega\ast\ell_z\ast\mu$ and similarly $U_{\omega\ast\nu}(z) = \omega\ast\ell_z\ast\nu$.
Since $\omega$ is smooth, $U_{\omega\ast\mu}$ is smooth and thus
$\Delta (U_{\omega\ast\mu})(z) = (\Delta\omega)\ast \ell_z \ast \mu$.
Since $\omega\ast \mu$ is smooth, it is absolutely continuous with respect to the Lebesgue measure $m$. Therefore by Theorem 1.3 in \cite{logpotentials}, we have that $d(\omega\ast\mu) = -\frac{1}{2\pi}\Delta U_{\omega\ast\mu} dm$. This proves the following inequality:
$$|\omega\ast\mu(B)-\omega\ast\nu(B)| = \frac{1}{2\pi}\int_{B}|\Delta\omega \ast (U_\mu-U_\nu)|dm
\le \frac{1}{2\pi}m(\text{supp}(\omega))||\Delta \omega||_\infty ||U_\mu-U_\nu||_1.$$
Therefore, $|\omega\ast\mu(B) - \omega\ast\nu(B)| \le \frac{m(\text{supp}(\omega))}{2\pi}||\Delta\omega||_\infty||U_\mu-U_\nu||_1<C\epsilon$ for some constant $C>0$
only depending on $\omega$.
Now if we do the same calculation with $\omega_\delta$, we get
$$|\omega_\delta\ast\mu(B)-\omega_\delta\ast\nu(B)| = \frac{1}{2\pi\delta^4}\int_B\left|(\Delta \omega)\left(\frac{1}{\delta}z\right)\ast(U_\mu-U_\nu)\right|dm < \frac{C\epsilon}{\delta^4}$$
since $\Delta \omega_\delta(z) = \frac{1}{\delta^4}(\Delta\omega)(\frac{1}{\delta}z)$.
Therefore, we have shown that
$$\text{disc}_{\epsilon^{1/8},\omega}(\mu,\nu) =\sup_{a<b,c<d}|\omega_{\epsilon^{1/8}}\ast\mu([a,b]\times[c,d])-\omega_{\epsilon^{1/8}}\ast\nu([a,b]\times[c,d])| < C\sqrt\epsilon.$$
This completes the proof.
\end{proof}
\end{comment}
\begin{proof}[Proof of Theorem~\ref{general}]
Suppose that $d_{\Sigma}<1.$ Note that
\[
\log d_{\Sigma}= I(\mu_{eq})=\sup_{\mu\in \mathcal{P}_{\Sigma}}I(\mu),
\]
where $\mu_{eq}$ is the equilibrium measure of $\Sigma.$
Hence,
\[
I(\mu)<0
\]
for every $\mu\in \mathcal{P}_{\Sigma}.$ By Theorem~\ref{mdim} for polynomial $Q(x,y)=x-y$,
\[
\mathcal{B}_{\Sigma}=\emptyset.
\]
Since $\mathcal{A}_{\Sigma}\subset \mathcal{B}_{\Sigma},$ this completes the proof of Theorem~\ref{general} if $d_{\Sigma}<1.$
\\
Otherwise, $d_{\Sigma}\geq 1$ and $\mu_{eq}\in \mathcal{B}_{\Sigma}.$
This implies that $\mathcal{B}_{\Sigma}\neq \emptyset.$
Denote by $\mu_{\epsilon,\epsilon'}$ the measure described in Theorem~\ref{no_interior_bound} for the minimal value of $n$ for which the theorem applies. This gives us that $\mu_{\frac{1}{m},\frac{1}{m}}\stackrel{\ast}{\rightharpoonup}\mu$ as $m\to\infty$. Further, for each $m$, there is a sequence $P_{m,n}$ of polynomials with $\mu_{P_{m,n}}\stackrel{\ast}{\rightharpoonup}\mu_{\frac{1}{m},\frac{1}{m}}$ as $n\to \infty$. Taking a subsequence $P_{m,n_m}$, we have $\mu_{P_{m,n_m}}\stackrel{\ast}{\rightharpoonup}\mu$ as $m\to\infty$.
Therefore, we have shown that the assumptions that the support of $\mu$ has empty interior and connected complement are unnecessary, and Theorem~\ref{general} follows from Theorem~\ref{main1}.
\end{proof}
\section{The Algorithm}\label{main}
\subsection{Lattice Algorithms}\label{lattice_alg}
The algorithm we develop requires access to short vectors in a lattice. As seen in the work of Micciancio~\cite{SVP}, finding a vector whose length is within a constant multiple of that of the shortest vector in a lattice is NP-hard; this is known as the shortest vector problem (SVP). It is still desirable in general to have access to short vectors in a lattice. The LLL algorithm, developed by Lenstra, Lenstra, and Lov\'asz, finds in polynomial time a basis of a lattice~\cite{LLL} whose shortest output vector has length within an exponential factor of that of the shortest vector of the lattice. In the implementation of this algorithm, we use the built-in method in PARI/GP named \texttt{qflll}. When the \texttt{qflll} method is given a matrix $A$ whose columns generate a lattice, it returns a transformation matrix $T$ so that $AT$ is an ``LLL-reduced'' basis of the lattice. While the LLL algorithm is quite reasonable in practice, its asymptotic bounds are not strong enough to prove a sub-exponential factor. \newline
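For readers without PARI/GP at hand, the reduction can be sketched in a few dozen lines. The following is a transparent textbook LLL variant in exact rational arithmetic (our own illustrative code, deliberately recomputing the Gram-Schmidt data at every step for clarity rather than speed; it is not the \texttt{qflll} implementation):

```python
from fractions import Fraction

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def gram_schmidt(basis):
    """Return (b_star, mu): the orthogonalized vectors and the coefficients
    mu[i][j] = <b_i, b*_j> / <b*_j, b*_j> defined in the text."""
    n = len(basis)
    mu = [[Fraction(0)] * n for _ in range(n)]
    b_star = []
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], b_star[j]) / dot(b_star[j], b_star[j])
            v = [a - mu[i][j] * b for a, b in zip(v, b_star[j])]
        b_star.append(v)
    return b_star, mu

def lll(rows, delta=Fraction(3, 4)):
    """Textbook LLL with the Lovasz condition (parameter delta = 3/4)."""
    basis = [[Fraction(x) for x in row] for row in rows]
    n = len(basis)
    k = 1
    while k < n:
        b_star, mu = gram_schmidt(basis)
        for j in range(k - 1, -1, -1):         # size reduction: |mu[k][j]| <= 1/2
            q = round(mu[k][j])
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
        b_star, mu = gram_schmidt(basis)
        if dot(b_star[k], b_star[k]) >= (delta - mu[k][k - 1] ** 2) * dot(b_star[k - 1], b_star[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)
    return basis
```

The output basis generates the same lattice (the transformation is unimodular), is size-reduced, and its first vector satisfies the standard guarantee $\|b_1\|\le 2^{(n-1)/2}\lambda_1$.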
In 1986, Schnorr~\cite{Schnorr} devised a parametrized family of algorithms attacking the SVP problem which output reduced bases somewhere between the stringent requirements of Korkine-Zolotarev reduction and the looser requirements of LLL reduction based on the input parameter. Careful parameter selection achieves slightly sub-exponential constant factors in polynomial time. Schnorr proved that if the algorithm produces a basis $b_1,\dots, b_n$, then $||b_1||$ is within a subexponential factor of a shortest vector in the lattice. We prove that in addition, if $\lambda_1,\dots, \lambda_n$ are so that $\lambda_i$ is the least value for which there are $i$ linearly independent vectors of norm at most $\lambda_i$, then for each $i=1,\dots, n$, $||b_i||$ is within a sub-exponential factor of $\lambda_i$.
We now introduce some notation from Schnorr's work and recall some of his results.
\newline
Let $b_1,\dots, b_n$ be a basis. Then let $b_i^*$ be the projection of $b_i$ onto the orthogonal complement of $\text{span}\{b_1,\dots, b_{i-1}\}$ for $i=1,\dots, n$. We can obtain $b_1^*=b_1,b_2^*,\dots,b_n^*$ by the Gram-Schmidt process without normalizing.
Now define $\mu_{i,j}:=\frac{\langle b_i,b_j^*\rangle}{||b_j^*||^2}$.
\begin{definition}We say $b_1,\dots, b_n$ is size-reduced if $|\mu_{i,j}|\le 1/2$ for all $i>j$.\end{definition}
\begin{definition}
Let $\Lambda_i$ be the lattice spanned by the projections of $b_i,b_{i+1},\dots, b_n$ onto the orthogonal complement of $\text{span}\{b_1,\dots, b_{i-1}\}$. Then $b_1,\dots, b_n$ is Korkine-Zolotarev reduced if $||b_i^*||$ is minimal among non-zero vectors in $\Lambda_i$.
\end{definition}
\begin{definition}Let $k\mid n$. We say that $b_1,\dots, b_n$ is semi $k$-reduced if it is size-reduced and it satisfies both
\begin{equation}
||b_{ik}^*||^2 \le 2||b_{ik+1}^*||^2,
\end{equation}
and the components of $b_{ik+j}$ for $j=1,\dots, k$ orthogonal to $b_1,\dots, b_{ik-1}$ are Korkine-Zolotarev reduced for each $i=0,\dots,\frac{n}{k}-1$. Call this property (KZ).
\end{definition}
Lastly, the constant $\alpha_k$ is defined as $\max\frac{||b_1||^2}{||b_k^*||^2}$ where the max is over all Korkine-Zolotarev reduced bases on lattices of rank $k$. Corollary 2.5 in Schnorr's paper shows that $\alpha_k\le k^{1+\ln k}$.
\newline
\begin{theorem}\label{factors}
Let $b_1,\dots,b_n$ be a semi $k$-reduced basis and $\lambda_j$ be the least real number so there are $j$ linearly independent vectors in the lattice spanned by $b_1,\dots, b_n$ of norm at most $\lambda_j$. Then $||b_j||\le k^{\frac{n}{k}\ln k +O(\frac{n}{k}+\ln k)}\lambda_j$.
\end{theorem}
The proof is in essence a combination of the proofs of Theorems 2.3 and 3.4 in~\cite{Schnorr}.
\begin{proof}
Fix $1\le j\le n$. Suppose that $v_1,\dots, v_j$ are linearly independent vectors in the lattice with $||v_i||=\lambda_i$ for $i=1,\dots, j$.
For each $i$, write $v_i = \sum_{s=1}^n c_{s,i}b_s$.
Let $s_i$ be the largest index for which $c_{s_i,i}\neq 0$. There is some $i=1,\dots, j$ so that $s_i\ge j$ by linear independence. If $c_{s,i}^*$ is so that $v_i = \sum_{s=1}^nc_{s,i}^*b_s^*$, then $c_{s_i,i}=c_{s_i,i}^*\in \mathbb Z$.
This implies that
$\lambda_j = ||v_j||\ge ||v_i||\ge||c_{s_i,i}^*b_{s_i}^*||\ge ||b_{s_i}^*||$. Now note that for each $0< s<t\le k$ and each $m$ with $(m+1)k\le n$, we have $||b_{mk+s}^*||\le \alpha_k||b_{mk+t}^*||$, as stated in the proof of Theorem 2.3 of~\cite{Schnorr}; call this inequality $(\ast)$.
\newline
Using the fact that $b_1,\dots, b_n$ is size reduced,
$$||b_j||=\left|\left|b_j^*+\sum_{s=1}^{j-1}\mu_{j,s}b_s^*\right|\right|\le \sum_{s=1}^j||b_s^*||.$$
To bound $||b_s^*||$ for $1\le s\le j$, suppose $s=m_1k+t_1$ and $s_i=m_2k+t_2$ for $0\le t_2,t_2<k$. Then
\begin{align*}
||b_s^*|| &\le \alpha_k||b_{(m_1+1)k}^*|| &\text{by property (KZ) and }(\ast)\\
&\le 2\alpha_k ||b_{(m_1+1)k+1}^*||&\text{by the semi $k$-reduced inequality}\\
&\le (2\alpha_k)^{m_2-m_1}||b_{m_2k+1}^*||&\text{by induction}\\
&\le (2\alpha_k)^{m_2-m_1}\alpha_k||b_{s_i}^*||&\text{by property (KZ) and }(\ast)\text{ since }j\le s_i\\
&\le (2\alpha_k)^{\frac{n}{k}-\lfloor\frac{s}{k}\rfloor}\alpha_k\lambda_j&\text{since }||b_{s_i}^*||\le \lambda_j
\end{align*}
Since $\alpha_k \le k^{1+\ln k}$, we have $||b_s^*||\le k^{(\frac{n}{k}+1)(\ln k+2)-\lfloor\frac{s}{k}\rfloor\ln k}\lambda_j$.
Therefore,
\begin{align*}
||b_j||&\le k^{\frac{n}{k}\ln k + O(\frac{n}{k}+\ln k)}\sum_{s=0}^\infty k^{-\lfloor\frac{s}{k}\rfloor \ln k}\lambda_j\\
&\le k^{\frac{n}{k}\ln k + O(\frac{n}{k}+\ln k)}k\sum_{s=0}^\infty k^{-s\ln k}\lambda_j&\text{since }\lfloor s/k\rfloor\text{ is constant on each block of }k\text{ terms}\\
&\le k^{\frac{n}{k}\ln k +O(\frac{n}{k}+\ln k)}\left(1-k^{-\ln k}\right)^{-1}\lambda_j\\
&\le k^{\frac{n}{k}\ln k +O(\frac{n}{k}+\ln k)}\lambda_j&\text{since }(1-k^{-\ln k})^{-1}\le 3\text{ for }k\ge 2.
\end{align*}
This completes the proof.
\end{proof}
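The two elementary estimates used in the final display of the proof above can be checked numerically: the blocked sum collapses to a geometric series with value $k(1-k^{-\ln k})^{-1}$, and the factor $(1-k^{-\ln k})^{-1}$ is at most $3$ for $k\ge 2$. (Illustrative code of ours.)

```python
import math

def blocked_sum(k, blocks=80):
    """Truncation of sum_{s >= 0} k^(-floor(s/k) * ln k); each block of k
    consecutive values of s contributes equal terms."""
    return sum(k ** (-(s // k) * math.log(k)) for s in range(k * blocks))

def closed_form(k):
    """k * (1 - k^(-ln k))^(-1), the value of the full series."""
    return k / (1 - k ** (-math.log(k)))
```
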
According to Schnorr's paper, this algorithm takes $O(n^4\log B+k^{\frac{k}{2}+o(k)}n^2\log B)$ arithmetical steps on $O(n\log B)$ integers where $B$ is the Euclidean length of the longest input vector. In our case, $B$ is a constant depending on $\mu$. Taking $k=\frac{\log(n)}{\log(\log(n))}$, this is $O(n^4 + n^2\log(n)^{\frac{3\log n}{2\log\log n}})$ which is contained in $O(n^4)$ and thus this choice of $k$ makes the algorithm polynomial time in $n$.
In this case, the factor in Theorem \ref{factors} is at most $e^{\frac{2n\log(\log (n))^{3}}{\log(n)}}$ for large enough $n$; in particular, it is sub-exponential.
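The simplification behind this run-time bound is the identity $\log(n)^{\frac{3\log n}{2\log\log n}}=n^{3/2}$ (natural logarithms throughout), so the second term is $O(n^{3.5})\subset O(n^4)$. This can be checked directly (illustrative code of ours):

```python
import math

def schnorr_factor(n):
    """log(n)^((3 log n) / (2 log log n)), with natural logarithms."""
    L = math.log(n)
    return L ** (3 * L / (2 * math.log(L)))
```
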
\subsection{Geometry of numbers}
Our algorithm relies on two geometric sub-methods. The first finds a short basis of a given lattice.
The second finds an integral polynomial near a real polynomial where distance is measured with respect to a given basis. In these two algorithms, $P$ is an integral, square-free, monic polynomial with roots $\alpha_1, \dots, \alpha_n$.
\subsubsection{Finding Short Lattice Bases}\label{lattice_bases}
To find short lattice bases, we implement a version of Proposition~\ref{mink}. This proposition shows the existence of a short basis of polynomials in a given convex body. In particular, let $K$ be the set of real polynomials of degree at most $n$ with $\log|P(x)|\le nU_\mu(x)$.
There are linearly-independent, integral polynomials $P_0,\dots,P_n$ so that for each $0\le k\le n$, if $P_k\in \lambda K$ where $\lambda\in\mathbb{R}$ is minimal, then there is no integral polynomial in $\rho K$ for any $\rho<\lambda$ linearly-independent from $P_0,\dots, P_{k-1}$. We showed that the largest of these $\lambda$'s is sub-exponential in $n$ in Proposition~\ref{upperbound}. Thus $\{P_0,\dots, P_n\}$ forms a basis of lattice points in a subexponential multiple of $K$. By Theorem \ref{factors}, we can compute explicit integral polynomials which are within a sub-exponential factor of $K$ in polynomial time in $n$. This is formalized in Corollary \ref{find_lattice_basis}.
\begin{corollary}\label{find_lattice_basis}
If $\mu\in\mathcal B_\Sigma$ is a H\"older probability measure, then we can compute a basis $Q_0,\dots, Q_n$ of integral polynomials of degree at most $n$ in polynomial time where each $Q_i\in e^{Cn\frac{\log(\log(n))^3}{\log(n)}}K_n$ for some fixed constant $C$ depending only on $\Sigma$ and $\mu$. More specifically, this runs in $O(n^4)$ time and assuming $n$ is sufficiently large, we can take $C=3$.
\end{corollary}
\begin{proof}
First compute the basis $p_{M,n,\mu}^{efg\pm}$ of polynomials of degree less than $n$ as defined in \eqref{pmdef} choosing $M = \lfloor n^{1/3}\rfloor$. Call this basis $\mathcal S$. We want to find an integral basis which is short in the basis $\mathcal S$ by Proposition \ref{simplex}.
Now write $\mathcal E:=\{1,x,\dots, x^{n-1}\}$ in the basis $\mathcal S$ and apply Schnorr's algorithm to get a semi $k$-reduced basis choosing $k=\frac{\log n}{\log\log n}$. We showed at the end of Section \ref{lattice_alg} that this runs in $O(n^4)$ time and that the output polynomials $Q_0,\dots, Q_n$ are so that $Q_i\in e^{2n\cdot\frac{\log(\log(n))^3}{\log n}}n^{cn^{1-\frac{\delta}{12}}}K_n$ by Proposition \ref{upperbound} for each $i$ and fixed $c$ for large enough $n$.
It follows that $Q_i\in e^{3n\cdot\frac{\log(\log(n))^3}{\log n}}K_n$ for large enough $n$.
\end{proof}
One way of computing the integral polynomials is as follows. Suppose $T$ is a matrix such that $AT$ is a reduced form of $A$ (as in Schnorr's algorithm), where $A$ is the matrix of $\mathcal E$ written in the basis $\mathcal S$.
Then $AT$ has linearly independent columns representing such polynomials in the basis $\mathcal S$, so $T$ has columns representing those polynomials in the basis $\mathcal E$, as desired. This gives the desired $n+1$ linearly independent integer polynomials.
Further note that doing the change of basis via matrix inversion is numerically unstable for large matrices. Fortunately, we can compute the change-of-basis matrix explicitly. As is typically done, the time bounds given in this paper are in terms of the number of arithmetic operations; numerical stability is not studied in this paper.
\subsubsection{Finding Close Integer Polynomials}\label{int_poly}
The following discussion allows us to algorithmically produce polynomials like those whose existence is proved in Theorem 3.2 of \cite{Smith}. The goal of this method is to find, in polynomial time, an integral polynomial close to a given real polynomial, where distance is measured with respect to a given basis. In our case, we use the basis produced by Corollary \ref{find_lattice_basis}.
\begin{comment}
Define $P_j(z) = P(z)/(z-\alpha_j)$. This forms a basis $\mathcal{P}$ of the vector space of degree $n-1$ polynomials. Indeed, $P_j(\alpha_k) = \begin{cases} 0,&\text{if }k\neq j\\\prod_{l\neq j}(\alpha_j-\alpha_l),&\text{otherwise}\end{cases}$. As in the proof of Theorem 3.2 in Smith, let $P_j^o$ be $P_j$ if $\alpha_j$ is real and
$\begin{cases}
\frac{1}{\sqrt{2}}(P_j+\overline{P_j}), &\text{if }\Im(\alpha_j)>0\\
\frac{i}{\sqrt{2}}(P_j-\overline{P_j}), & \text{if }\Im(\alpha_j) < 0
\end{cases}$. This is still a basis and we define $\mathcal O:=\{P_1^o,\dots, P_n^o\}$. For notation, if $Q$ is a polynomial and $\mathcal B$ is a basis of the polynomials of degree at most $\deg(Q)$, then let $[Q]_\mathcal B$ be the coordinate vector of $Q$ in basis $\mathcal B$.
For example, if $Q(x)=\sum_{i=0}^{n-1} a_i x^i$, then $[Q]_\mathcal E = \begin{pmatrix}a_0\\ \vdots\\a_{n-1}\end{pmatrix}$.
\end{comment}
\begin{theorem}\label{close_poly}
Given a H\"older probability measure $\mu\in\mathcal B_\Sigma$, there is a polynomial time algorithm which takes in a real polynomial $Q(z)$ of degree less than $n$ and produces an integer polynomial $H$ of degree less than $n$ such that $\frac{\log(Q-H)(x)}{n}\le U_\mu(x) + Cn\cdot \frac{\log(\log(n))^3}{\log(n)}$. The run-time is $O(n^4)$ where $C$ is dependent only on $\mu$ and $\Sigma$ and assuming $n$ is sufficiently large, we can choose $C=4$.
\end{theorem}
\begin{proof}
Let $Q_0,\dots, Q_{n-1}$ be the output of our algorithm described in Corollary \ref{find_lattice_basis}. Call this basis $\mathcal Q$. Suppose the $j$-th coordinate of $[Q]_\mathcal Q$ is $c_j$.
Take $\tilde c_j$ to be the nearest integer to $c_j$ (rounding up if $c_j\in\frac{1}{2}+\mathbb{Z}$), and let $H$ be the polynomial so that $[H]_\mathcal Q=\begin{pmatrix}\tilde c_0\\\vdots\\\tilde c_{n-1}\end{pmatrix}$.
Then $H$ is an integer polynomial, since $H$ is an integer linear combination of the integral polynomials in the basis $\mathcal Q$.
We know $Q-H = \sum_{j=0}^{n-1}(c_j-\tilde c_j)Q_j$ with $|c_j-\tilde c_j|\le\frac{1}{2}$ for each $j$. Thus $|Q-H|\le \frac{n}{2}e^{3n\cdot\frac{\log(\log(n))^3}{\log(n)}}e^{nU_\mu}$, and so $|Q-H|\le e^{\frac{4n\log(\log(n))^3}{\log(n)}}e^{nU_\mu}$ for large enough $n$, as desired.
For an analysis of the run-time, we note that Schnorr's algorithm is the bottleneck. It runs in $O(n^4)$ time by Corollary~\ref{find_lattice_basis}.
\end{proof}
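The rounding step at the heart of this proof can be sketched in exact rational arithmetic (an illustrative encoding, not the paper's implementation): each basis polynomial $Q_j$ is stored as a coefficient vector in the power basis, the coordinates $c_j$ of $[Q]_{\mathcal Q}$ are computed exactly over $\mathbb{Q}$, and each is rounded to the nearest integer with ties rounded up.

```python
from fractions import Fraction
from math import floor

def solve_exact(B, t):
    """Solve B c = t over the rationals by Gauss-Jordan elimination.
    B is a square matrix given as a list of rows, t the target vector."""
    n = len(B)
    M = [[Fraction(x) for x in row] + [Fraction(t[i])] for i, row in enumerate(B)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = M[col][col]
        M[col] = [x / inv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def round_half_up(c):
    """Nearest integer, rounding up when c is a half-integer."""
    return floor(c + Fraction(1, 2))

def nearest_basis_combination(basis, target):
    """basis[j] is the coefficient vector of Q_j in the power basis and
    target is the coefficient vector of Q.  Writes Q = sum_j c_j Q_j,
    rounds each c_j, and returns H = sum_j round(c_j) Q_j."""
    n = len(target)
    B = [[basis[j][i] for j in range(n)] for i in range(n)]  # columns are the Q_j
    c = solve_exact(B, target)
    rounded = [round_half_up(cj) for cj in c]
    H = [sum(rounded[j] * basis[j][i] for j in range(n)) for i in range(n)]
    return H, rounded
```

For instance, with the (hypothetical) basis $Q_0=2$, $Q_1=1+x$ and $Q=\frac{5}{2}+x$, the coordinates are $(\frac{3}{4},1)$, so $H=3+x$, and each coordinate of $Q-H$ has absolute value at most $\frac{1}{2}$.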
\subsection{Main Algorithm}
Here we prove Theorem \ref{main2}. We have already covered the main concepts. By Corollary \ref{find_lattice_basis}, we can get a basis of integral polynomials in a sub-exponential multiple of $K_n$. If we now apply Theorem \ref{close_poly} to $\frac{1}{4}(p_{M,n,\mu} - x^n)+\frac{1}{2}$ to get $H(x)$, we could output $P_n(x):=x^n+4H(x)-2$. This gives us an Eisenstein polynomial in a sub-exponential multiple of $K_n$. If we simply want a polynomial $P$ with small $n$-norm, this would be sufficient; however, we have to introduce some extra steps to ensure $\text{supp}(\mu_{P_n})\subset D$ for a given open $D\supset \Sigma$. The proof will imitate the proof of Theorem \ref{main1}.
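The final step can be sketched as follows (illustrative helper names): a monic integer polynomial whose non-leading coefficients are all even and whose constant term is not divisible by $4$ satisfies the Eisenstein criterion at $2$, hence is irreducible over $\mathbb{Q}$.

```python
def eisenstein_shift(n, H):
    """Coefficients (constant term first) of p(x) = x^n + 4*H(x) - 2,
    where H is the coefficient list of an integer polynomial of degree < n."""
    p = [4 * h for h in H] + [0] * (n - len(H)) + [1]
    p[0] -= 2
    return p

def is_eisenstein_at_2(p):
    """Eisenstein criterion at 2 for a monic polynomial: every non-leading
    coefficient is even and the constant term is not divisible by 4."""
    return (p[-1] % 2 == 1
            and all(c % 2 == 0 for c in p[:-1])
            and p[0] % 4 != 0)
```

For example, $H(x)=1+3x$ and $n=3$ give $p(x)=x^3+12x+2$, which is Eisenstein at $2$.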
\begin{proof}[Proof of Theorem \ref{main2}]
Suppose that $\mu\in\mathcal B_\Sigma$ is a H\"older measure with exponent $\delta$, let $D$ be any open set containing $\Sigma$, and choose $\rho>0$ so that $\Sigma(\rho)\subset D$.
Apply Proposition \ref{chpol} to $p_{M,n-m,\mu}$ with $m$ chosen later to get the polynomial $Q_m(x)$.
Note that the proof of Proposition \ref{chpol} explicitly computes $Q_m(x)$ in polynomial time.
As in the proof of Theorem \ref{main1}, let $p_{M,n-m,\mu}(x)Q_m(x)$ be given by $x^n+\sum_{i=0}^{n-1}a_ix^i$.
Apply Theorem \ref{close_poly} to $$h(x):=\frac{1}{4}\left(p_{M,n-m,\mu}(x)Q_m(x)-x^n-\sum_{i=n-m}^{n-1}a_ix^i\right)+\frac{1}{2}$$
to get an integer polynomial $H(x)$. Finally,
$$p(x):=x^n+\sum_{i=n-m}^{n-1}a_ix^i + 4H(x)-2$$
is an Eisenstein polynomial at $2$.
We see that
\begin{equation}\label{close_res}
|p_{M,n-m,\mu}(x)Q_m(x)-p(x)|= |4\left(h(x) - H(x)\right) - 2| \le e^{Cn\cdot\frac{\log(\log(n))^3}{\log(n)}}e^{(n-m)U_\mu(x)}\end{equation}
for some constant $C$ depending only on $\mu$ and $\Sigma$ by Theorem \ref{close_poly}. As in the proof of Theorem \ref{main1}, we have
$$|p_{M,n-m,\mu}(x)Q_m(x)|\ge \kappa^mn^{-cn^{1-\frac{\delta}{6}}}e^{(n-m)U_\mu(x)}$$
for every $x\not\in \Sigma(\rho)$ where $M=\lfloor n^{1/3}\rfloor$. For large enough $n$, we can choose $m<n$ so that $m=\lceil\frac{2Cn}{\log(\kappa)}\cdot \frac{\log(\log(n))^3}{\log(n)}\rceil$ where we choose $C=4$ as in Theorem \ref{close_poly}.
Then for sufficiently large $n$,
$$|p_{M,n-m,\mu}(x)Q_m(x)|\ge|p_{M,n-m,\mu}(x)Q_m(x)-p(x)|$$ for all $x\not\in \Sigma(\rho)$.
By Rouch\'e's theorem, since $p_{M,n-m,\mu}(x)Q_m(x)$ has all its roots in $\Sigma(\rho)$, so does $p(x)$.
Furthermore by \eqref{close_res}, $$||p||_n\le e^{Cn\cdot\frac{\log(\log(n))^3}{\log(n)}} + ||p_{M,n-m,\mu}(x)Q_m(x)||_n\le e^{\tilde{C}n\cdot\frac{\log(\log(n))^3}{\log(n)}}$$
for some constant $\tilde C$ dependent only on $\Sigma$ and $\mu$.
\end{proof}
\section{Applications}\label{applications}
\subsection{Numerical Data}
We implemented the algorithm described above with some simplifications.\footnote{All examples in Section \ref{applications} can be found at https://github.com/Bryce-Orloski/Limiting-Distributions-of-Conjugate-Algebraic-Integers-Applications.} Firstly, we used the LLL algorithm implemented in PARI/GP as \texttt{qflll}. Secondly, instead of choosing $z_{ijk}$ as indicated in the paper, we use reasonable sampling methods, described in each numerical example. We also do not apply the extra step of forcing the roots into a $\rho$-neighborhood of the measure's support. So these outputs do not employ the full power of the algorithm, but as we will see, it still produces strong results quickly.
All of the examples given in this subsection were computed using Pennsylvania State University's ROAR servers and gave the output in at most two minutes.
\newline
The first example we discuss is pictured in Figure \ref{interval}.
The measure depicted is one in a family which was constructed by Serre~\cite{MR2428512}. In particular, it is the probability distribution $\mu$ on $\Sigma=[a,b]$ with $\mu=c\mu_{[a,b]}+(1-c)\nu_{[a,b]}$ where
$a=0.1715,$ $ b=5.8255,$ $c=0.5004,$ $\mu_{[a,b]}$ is the equilibrium measure on $[a,b]$, and $\nu_{[a,b]}$ is the pushforward of the equilibrium measure on $[b^{-1},a^{-1}]$ under the map $z\to 1/z$.
We compute the sample points of $\mu$ by taking the $k^{th}$ sample point to be the inverse distribution function of $\mu$ evaluated at $k/n$. The algorithm ran on the ROAR servers for roughly $3$ seconds when asked to compute a degree 100 polynomial (with the sample points pre-computed). The output polynomial has all real roots, and their histogram is displayed in Figure \ref{fig2}. The endpoints and interval width in Figure \ref{fig2} are approximate.
\newline
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{wolfram_lemniscate.png}
\caption{Plot of $|z^2-1|=1$}
\label{lemniscate_wolfram}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering \includegraphics[width=\textwidth]{lemniscate_200_roots.png}
\caption{Plotted roots of the degree 200 polynomial}
\label{lemniscate_output}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering \includegraphics[width=\textwidth]{unif_circle_200.png}
\caption{Mapping of output roots via $z\mapsto z^2-1$}
\label{circle_unif}
\end{subfigure}
\caption{These demonstrate the effectiveness of the algorithm on a lemniscate.}
\label{lemniscate}
\end{figure}
The second example is pictured in Figure \ref{lemniscate}. Figure \ref{lemniscate_wolfram} depicts the lemniscate $|z^2-1|=1$, the support of the pull-back of the uniform distribution on the unit circle under $z\mapsto z^2-1$. We sampled this support via this distribution and ran the algorithm on 200 samples. It finished in under two minutes, and a plot of its roots is given in Figure \ref{lemniscate_output}.
We want this to sample the pull-back of the uniform distribution on the circle. To see that it is well sampled, Figure \ref{circle_unif} plots the image of these roots under the map $z\mapsto z^2-1$, and we see that it roughly approximates the uniform distribution on the unit circle.
\newline
Our last example is depicted in Figure \ref{annulus}. Here we sampled the annulus $1\le |z|\le 2$ with 100 complex numbers by sampling 50 using the Monte Carlo method and taking their complex conjugates. This is shown in Figure \ref{annulus_pivots}. Using these samples in our algorithm, it returned a degree 100 monic, integral, irreducible polynomial in under two minutes whose roots are plotted in Figure \ref{annulus_output}.
\newline
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{unif_rand_100_annulus.png}
\caption{Plot of 50 sampled points from the annulus $1\le |z|\le 2$ and their conjugates}
\label{annulus_pivots}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering \includegraphics[width=\textwidth]{unif_rand_100_annulus_output.png}
\caption{Plotted roots of 100 degree polynomial from algorithm given the samples as input}
\label{annulus_output}
\end{subfigure}
\caption{The left plot shows 50 points sampled uniformly from $1\le |z|\le 2$; the right plots the roots of the algorithm's output, which approximates the distribution on the left.}
\label{annulus}
\end{figure}
Note that we cannot provably show the desired convergence with the LLL algorithm, and even with Schnorr's algorithm the convergence we prove is very slow. However, we applied the LLL algorithm here, and in each of Figures \ref{interval}, \ref{circle_unif}, and \ref{annulus} the plotted roots of the produced polynomial closely match the corresponding samples.
\subsection{Applications to Abelian varieties}
Recently, Shankar and Tsimerman~\cite{shankar_tsimerman_2018}, \cite{Tsimerman1} conjectured that for every $g\geq 4$, every abelian variety of dimension $g$ defined over $\overline{\mathbb{F}_q}$ is isogenous to the Jacobian of a curve defined over $\overline{\mathbb{F}_q}.$ Given a finite field $\mathbb{F}_q$ and any arbitrarily large extension $\mathbb{F}_{q^n}$, we prove that there are infinitely many abelian varieties over $\mathbb{F}_q$ which are not isogenous to the Jacobian of any curve over $\mathbb{F}_{q^n}$. We use Honda--Tate theory and construct an arithmetic measure with some conditions on its moments. First, we introduce our moment conditions, which are motivated by the work of Tsfasman and Vladut~\cite{MR1465522}.
\\
Let $X$ be a finite curve defined over $\mathbb{F}_q$. Let
\(
N_r(X):= \# X(\mathbb{F}_{q^r})
\)
be the number of $\mathbb{F}_{q^r}$-points of $X.$
By Weil's formula, we have
\begin{equation}\label{positivity}
0
\leq N_r(X)=q^r+1-2q^{r/2}\sum_{j=1}^g\cos(r2\pi \theta_j),
\end{equation}
where $\sqrt{q}e^{\pm 2\pi i \theta_j}$ are the Frobenius eigenvalues.
We have
\[
N_r(X)=\sum_{d|r}dM_d(X),
\]
where $M_d(X)$ is the number of points of degree $d.$ By the M\"obius inversion formula,
\[
M_r(X)=\frac{1}{r}\sum_{d|r} \mu(d)N_{\frac{r}{d}}(X).
\]
In particular, we have
\[
\sum_{d|r} \mu(d)N_{\frac{r}{d}}(X) \geq 0
\]
for every $r\geq 1.$
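These relations are easy to check computationally; the sketch below is illustrative (helper names hypothetical), with the M\"obius function computed by trial division.

```python
def mobius(n):
    """Mobius function by trial division (adequate for small n)."""
    if n == 1:
        return 1
    sign, d, m = 1, 2, n
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0          # repeated prime factor
            sign = -sign
        d += 1
    return -sign if m > 1 else sign

def points_of_degree(N, r):
    """M_r(X) = (1/r) * sum_{d | r} mu(d) N_{r/d}(X); N maps s -> N_s(X)."""
    total = sum(mobius(d) * N[r // d] for d in range(1, r + 1) if r % d == 0)
    assert total % r == 0 and total >= 0   # the positivity noted above
    return total // r
```

For the projective line over $\mathbb{F}_3$, where $N_s=3^s+1$, this gives $M_1=4$, $M_2=3$, and $M_3=8$.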
\\
Let $S_{\sqrt{\alpha}}:=\left\{ \sqrt{\alpha}e^{2\pi i\theta} : \theta \in [0,1) \right\}$ be the circle of radius $\sqrt{\alpha}$ centered at the origin, and define the following probability measure on $S_{\sqrt{\alpha}}$:
\[
d\mu_\alpha:=\left(1+\sum_{k=1}^{\infty}\frac{c}{k^2}\cos(k2\pi \theta)\right) d\theta,
\]
where $c=\frac{6}{10\pi^2}.$ Since $\sum_{k=1}^{\infty}\frac{1}{k^2}=\frac{\pi^2}{6}$, the oscillating part has absolute value at most $c\cdot\frac{\pi^2}{6}=\frac{1}{10}$, so
\begin{equation}\label{bound}
0 \leq 0.9\, d\theta \leq d\mu_\alpha \leq 1.1\, d\theta.
\end{equation}
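This bound can be sanity-checked numerically; the sketch below (illustrative only) truncates the series and samples the density.

```python
import math

c = 6 / (10 * math.pi ** 2)

def density(theta, terms=2000):
    """Truncation of the density 1 + sum_{k>=1} (c/k^2) cos(k 2 pi theta)."""
    return 1 + sum(c / k ** 2 * math.cos(2 * math.pi * k * theta)
                   for k in range(1, terms + 1))

# the oscillating part is bounded by c * sum_k 1/k^2 = c * pi^2/6 = 1/10
assert abs(c * math.pi ** 2 / 6 - 0.1) < 1e-12
```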
\begin{proposition}\label{arithme} Suppose that $\alpha>1.65$. Then
$U_{\mu_\alpha}(z)\geq 0$ for every $z\in \mathbb{C}.$ As a result, $d\mu_\alpha$ is an arithmetic measure.
\end{proposition}
\begin{proof}
First suppose that $|z|\geq 2\sqrt{\alpha}$. Then $|z-\sqrt{\alpha}e^{2\pi i\theta}|\geq |z|-\sqrt{\alpha}\geq\sqrt{\alpha}$ for every $\theta$, and hence
\[
U_{\mu_\alpha}(z)=\int_{S_{\sqrt{\alpha}}} \log|z-\sqrt{\alpha}e^{2\pi i\theta}|d\mu_\alpha \geq \log\sqrt{\alpha}\geq 0.
\]
Otherwise, suppose that $|z|<2\sqrt{\alpha}.$
By \eqref{bound}, we have
\[
\begin{split}
U_{\mu_\alpha}(z)&=\int_{S_{\sqrt{\alpha}}} \log|z-\sqrt{\alpha}e^{2\pi i\theta}|d\mu_\alpha
\\
&=\int_{ |z-\sqrt{\alpha}e^{2\pi i\theta}|<1} \log|z-\sqrt{\alpha}e^{2\pi i\theta}|d\mu_\alpha+ \int_{ |z-\sqrt{\alpha}e^{2\pi i\theta}|>1} \log|z-\sqrt{\alpha}e^{2\pi i\theta}|d\mu_\alpha
\\
&\geq 1.1 \int_{ |z-\sqrt{\alpha}e^{2\pi i\theta}|<1} \log|z-\sqrt{\alpha}e^{2\pi i\theta}| d\theta + 0.9\int_{ |z-\sqrt{\alpha}e^{2\pi i\theta}|>1} \log|z-\sqrt{\alpha}e^{2\pi i\theta}| d\theta
\\
&=1.1\int_{S_{\sqrt{\alpha}}} \log|z-\sqrt{\alpha}e^{2\pi i\theta}|d\theta- 0.2 \int_{ |z-\sqrt{\alpha}e^{2\pi i\theta}|>1} \log|z-\sqrt{\alpha}e^{2\pi i\theta}| d\theta
\\
&\geq 1.1\log(\sqrt{\alpha})-0.2\log(3\sqrt{\alpha})>0.
\end{split}
\]
This completes the proof of our proposition.
\end{proof}
\begin{corollary}
There exists a sequence $\{A_g\}$ of abelian varieties over $\mathbb{F}_q$ such that $\dim(A_g)=g$ and their Frobenius eigenvalue distributions converge to $\mu_q.$
\end{corollary}
\begin{proof}
Let $\alpha_n:=\sqrt{q}-\frac{1}{10n},$ $\Sigma_n:=[-2\alpha_n,2\alpha_n]$ and
\(
h_n(z):=z+\frac{\alpha_n}{z}.
\)
Note that $h_n$ is a conformal map with a fixed point at infinity and derivative $1$ at infinity that sends $\mathbb{C}\setminus S_{\alpha_n}$ to $\mathbb{C}\setminus \Sigma_n$. Let $h_nd\mu_{\alpha_n}$ be the push-forward of $\mu_{\alpha_n}$ by $h_n.$ By conformal invariance of the potential function and Proposition~\ref{arithme}, $h_n d\mu_{\alpha_n}$ has a positive potential function and hence is arithmetic.
It follows from Theorem~\ref{general} that there exists a sequence of irreducible polynomials $\{p_m\}$ with real roots contained in $\Sigma_n(\frac{1}{100 n})\subset [-2\sqrt{q},2\sqrt{q}]$ whose root distributions converge to $h_nd\mu_{\alpha_n}.$ Let
\[
f_m(x):=x^{\deg{p_m}} p_m(x+\frac{q}{x}).
\]
Note that $f_m(x)$ has all of its roots on $S_{\sqrt{q}}.$ Now, by a diagonal argument, letting $n\to \infty$ and taking $m$ large enough, it follows that there exists a sequence $\{A_g\}$ of abelian varieties over $\mathbb{F}_q$ such that $\dim(A_g)=g$ and their Frobenius eigenvalue distributions converge to $\mu_q.$
\end{proof}
\begin{theorem}
Let $\{A_g\}$ be any family of abelian varieties over $\mathbb{F}_q$ such that $\dim(A_g)=g$ and their Frobenius eigenvalue distributions converge to $\mu_q.$ Given any integer $r\geq 1$, there exists $N$ such that if $g\geq N$ then $A_g$ is not isogenous to the Jacobian of any curve over $\mathbb{F}_{q^r}$.
\end{theorem}
\begin{proof}
Suppose, for contradiction, that there exists a sub-sequence $\{ A_{g_i}\}$ of abelian varieties over $\mathbb{F}_q$ such that their Frobenius eigenvalues equidistribute to $\mu_q$ and they are isogenous to Jacobians of curves $\{ X_{g_i}\}$ over $\mathbb{F}_{q^r}$ for some $r\geq 1.$ By~\eqref{positivity}, it follows that
\[
0 \leq \frac{N_r(X_{g_i})}{g_i}= \frac{q^r+1-2q^{r/2}\sum_{j=1}^{g_i}\cos(r2\pi \theta_j)}{g_i}.
\]
By taking the limit of the above as $g_i\to \infty$, we obtain
\[
0 \leq -2q^{r/2} \lim_{g_i\to \infty}\frac{\sum_{j=1}^{g_i}\cos(r2\pi \theta_j)}{g_i}=-2q^{r/2} \int \cos(r2\pi \theta) d\mu_q= -q^{r/2}\frac{c}{r^2} <0,
\]
which is a contradiction. This completes the proof of our theorem.
\end{proof}
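The moment computation in the last display, $\int \cos(r2\pi\theta)\, d\mu_q = \frac{c}{2r^2}$ by orthogonality of the cosines, can be verified numerically (illustrative sketch; on a uniform midpoint grid, averages of low-frequency cosines are exact up to rounding).

```python
import math

c = 6 / (10 * math.pi ** 2)

def cos_moment(r, terms=50, steps=4096):
    """Midpoint-rule value of int_0^1 cos(r 2 pi t) * density(t) dt for the
    series truncated at `terms`; essentially exact for r <= terms, since the
    uniform grid integrates low-degree trigonometric polynomials exactly."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        dens = 1 + sum(c / k ** 2 * math.cos(2 * math.pi * k * t)
                       for k in range(1, terms + 1))
        total += math.cos(2 * math.pi * r * t) * dens
    return total / steps
```

Hence the limit term in the proof equals $-2q^{r/2}\cdot\frac{c}{2r^2}=-q^{r/2}\frac{c}{r^2}<0$.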
As was mentioned in the introduction, we constructed a polynomial using the algorithm described in Section \ref{main}. Like the other results in Section \ref{applications}, this was implemented with the LLL algorithm and without the construction that forces the roots to lie inside a desired support.
The highest order terms were
$$x^{290} - 28x^{289} - 484x^{288} + 20784x^{287} + \cdots.$$
The largest coefficient of this polynomial has 105 digits. The roots all lie inside $[-2\sqrt{3},2\sqrt{3}]\subset\mathbb R$, and so by Honda--Tate theory it represents an abelian variety over $\mathbb F_3$. Computing $N_2$ for this variety gives $-2<0$, so it is not isogenous to the Jacobian of any curve over $\mathbb F_9$.
\begin{comment}
For $r=1$, this gives us
\[
M_1(X)= q+1- 2\sqrt{q}\sum_{j=1}^g \cos(\theta_j)\geq 0.
\]
In terms of the limiting measure, we have
\[
\frac{q+1}{g} \geq \int x d\mu(x).
\]
As $g\to \infty,$ we have
\[
\int x d\mu(x)\leq 0.
\]
The above inequality will be violated if the expected value would be positive.
For $r=2$, we have
\[
M_2(X)= N_2(X)-N_1(X)=\left(q^2+1-2q\sum_{j=1}^g\cos(2\theta_j) \right)- \left(q+1-2\sqrt{q}\sum_{j=1}^g \cos(\theta_j)\right)\geq 0.
\]
The above gives us another moment inequality:\\
\textcolor{blue}{INCOMPLETE SECTION}
\end{comment}
\bibliographystyle{alpha}
\subsection{Finding Square-free Polynomials}\label{square_free}
We saw in Section \ref{int_poly} that Theorem \ref{smith_3_2} requires a square-free polynomial $P$ to get the basis $\mathcal O$. To do this, we need to find monic, square-free integer polynomials of a given degree. Furthermore, we want this polynomial to have a small $n$-norm, because applying Theorem \ref{smith_3_2} gives a polynomial close to the original polynomial in the $n$-norm metric; this is justified in Corollary 3.3 of \cite{Smith}. Since the basis we created in Section \ref{lattice_bases} is short and has small $n$-norm, we can take a small linear combination of these basis vectors to get a square-free polynomial of small norm. In practice, we can simply take a degree-$n$ polynomial from this basis, since it will usually be square-free.
\newline
\subsection{Chebyshev Polynomials}\label{cheb}
Here we discuss the implementation of Proposition 4.1 of \cite{Smith}. The idea of this proposition is, given a real monic polynomial $P$, to construct an oscillatory monic polynomial $Q$ such that, after the leading term, the $\deg(Q)$ highest-degree coefficients of $PQ$ are even. The oscillation matters because it forces $Q$ to have all of its roots in our interval $I$, and it ensures that when we approximate using Theorem \ref{smith_3_2}, the number of roots in $I$ does not change.
\newline
The way we achieve this is with Chebyshev polynomials. Given an interval $I$ with capacity $\kappa$, there is a unique monic polynomial $f$ of degree $n$ which achieves $\max_{x\in I} |f(x)|=2\kappa^n$ precisely $n+1$ times; we call this the Chebyshev polynomial on the interval $I$. A proof can be found in Lemma 4.3 of \cite{Smith}. We algorithmically compute $Q$ as described above in the manner suggested in the proof of Proposition 4.1 of \cite{Smith}.
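Numerically, the Chebyshev polynomial on $[a,b]$ is the pullback of the classical $T_n$ under the affine map onto $[-1,1]$, scaled to be monic; the sketch below (hypothetical names) evaluates it and its $n+1$ equioscillation points, where it alternately attains $\pm 2\kappa^n$ with $\kappa=\frac{b-a}{4}$.

```python
import math

def monic_chebyshev(n, a, b, x):
    """Value at x in [a, b] of the degree-n monic Chebyshev polynomial on
    [a, b]; kappa = (b - a)/4 is the capacity of the interval."""
    kappa = (b - a) / 4.0
    t = (2.0 * x - a - b) / (b - a)          # affine map [a, b] -> [-1, 1]
    t = max(-1.0, min(1.0, t))               # guard against rounding
    return 2.0 * kappa ** n * math.cos(n * math.acos(t))

def equioscillation_points(n, a, b):
    """The n + 1 points of [a, b] where |f| attains its maximum 2*kappa^n."""
    return [((a + b) + (b - a) * math.cos(j * math.pi / n)) / 2.0
            for j in range(n + 1)]
```

For $n=5$ on $[0,2]$ we have $\kappa=\frac{1}{2}$, so the extreme value is $2\kappa^5=\frac{1}{16}$.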
\subsection{Main Algorithm}
First, we find the short square-free integer polynomial as described in Section \ref{square_free}. We then find a polynomial $Q$ of degree $m$ (\textcolor{red}{explain where $m$ comes from}) as described in Section \ref{cheb}, using $P=\tilde{P}_{\mu,n+2}$. Suppose
$$PQ=x^{n+m}+a_{n+m-1}x^{n+m-1}+\dots+a_nx^n+b_{n-1}x^{n-1}+\dots+b_0$$
where all of the $a_i$'s are even integers. Then apply the algorithm discussed in Section \ref{int_poly} to $\frac{1}{4}\left(\sum_{i=0}^{n-1}b_ix^i\right)+\frac{1}{2}$ to get an integer polynomial $H$ close to the original. Finally,
$$x^{n+m}+a_{n+m-1}x^{n+m-1}+\dots+a_nx^n+4H-2$$
is an Eisenstein polynomial at $2$ close in $(n+m)$-norm to $PQ$. If the norm between $PQ$ and the Eisenstein polynomial is small enough, the roots are forced to remain inside $I$. | {
"timestamp": "2023-02-07T02:31:16",
"yymm": "2302",
"arxiv_id": "2302.02872",
"language": "en",
"url": "https://arxiv.org/abs/2302.02872",
"abstract": "Let $\\Sigma \\subset \\mathbb{C}$ be a compact subset of the complex plane, and $\\mu$ be a probability distribution on $\\Sigma$. We give necessary and sufficient conditions for $\\mu$ to be the weak* limit of a sequence of uniform probability measures on a complete set of conjugate algebraic integers lying eventually in any open set containing $\\Sigma$. Given $n\\geq 0$, any probability measure $\\mu$ satisfying our necessary conditions, and any open set $D$ containing $\\Sigma$, we develop and implement a polynomial time algorithm in $n$ that returns an integral monic irreducible polynomial of degree $n$ such that all of its roots are inside $D$ and their root distributions converge weakly to $\\mu$ as $n\\to \\infty$. We also prove our theorem for $\\Sigma\\subset \\mathbb{R}$ and open sets inside $\\mathbb{R}$ that recovers Smith's main theorem~\\cite{Smith} as special case. Given any finite field $\\mathbb{F}_q$ and any integer $n$, our algorithm returns infinitely many abelian varieties over $\\mathbb{F}_q$ which are not isogenous to the Jacobian of any curve over $\\mathbb{F}_{q^n}$.",
"subjects": "Number Theory (math.NT)",
"title": "Limiting distributions of conjugate algebraic integers"
} |
https://arxiv.org/abs/1907.02004 | On Hamiltonian cycles in balanced $k$-partite graphs | For all integers $k$ with $k\geq 2$, if $G$ is a balanced $k$-partite graph on $n\geq 3$ vertices with minimum degree at least \[ \left\lceil\frac{n}{2}\right\rceil+\left\lfloor\frac{n+2}{2\lceil\frac{k+1}{2}\rceil}\right\rfloor-\frac{n}{k}=\begin{cases} \lceil\frac{n}{2}\rceil+\lfloor\frac{n+2}{k+1}\rfloor-\frac{n}{k} & : k \text{ odd }\\ \frac{n}{2}+\lfloor\frac{n+2}{k+2}\rfloor-\frac{n}{k} & : k \text{ even } \end{cases}, \] then $G$ has a Hamiltonian cycle unless $k=2$ and 4 divides $n$, or $k=\frac{n}{2}$ and 4 divides $n$. In the case where $k=2$ and 4 divides $n$, or $k=\frac{n}{2}$ and 4 divides $n$, we can characterize the graphs which do not have a Hamiltonian cycle and see that $\left\lceil\frac{n}{2}\right\rceil+\left\lfloor\frac{n+2}{2\lceil\frac{k+1}{2}\rceil}\right\rfloor-\frac{n}{k}+1$ suffices. This result is tight for all $k\geq 2$ and $n\geq 3$ divisible by $k$. | \section{Introduction}
The study of Hamiltonian cycles in balanced $k$-partite graphs begins with the following classic results of Dirac, and Moon and Moser. Dirac \cite{D} proved that for all graphs $G$ on $n\geq 3$ vertices, if $\delta(G)\geq \ceiling{\frac{n}{2}}$, then $G$ has a Hamiltonian cycle. Moon and Moser \cite{MM} proved that for all balanced bipartite graphs $G$ on $n\geq 4$ vertices, if $\delta(G)\geq \frac{n+2}{4}$, then $G$ has a Hamiltonian cycle.
Over 30 years later Chen, Faudree, Gould, Jacobson, and Lesniak \cite{CFGJL} beautifully tied these results together by proving that for all $k\geq 2$, if $G$ is a balanced $k$-partite graph on $n$ vertices with
\begin{equation}\label{cfgjl}
\delta(G)> \frac{n}{2}+\frac{n}{2\ceiling{\frac{k+1}{2}}}-\frac{n}{k},
\end{equation}
then $G$ has a Hamiltonian cycle. It turns out that while their result is nearly optimal, in most cases the degree condition can be improved by 1. The purpose of this note is simply to provide the precise minimum degree condition in all cases thereby filling the lacuna in the above result (and as we point out in the Appendix, it is unfortunately not as simple as replacing the strict inequality in \eqref{cfgjl} with a weak inequality).
\begin{thm}\label{main} Let $k$ be an integer with $k\geq 2$. For all balanced $k$-partite graphs $G$ on $n$ vertices, if
\begin{equation}\label{cf}
\delta (G) \geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}=\Dk,
\end{equation}
then $G$ has a Hamiltonian cycle unless $k=2$ and 4 divides $n$, or $k=\frac{n}{2}$ and 4 divides $n$.
\end{thm}
Since a graph on $n$ vertices can be viewed as a $k$-partite graph with $k=n$, note that when $k=n$, we have $\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}=\ceiling{\frac{n}{2}}$ and thus Theorem \ref{main} reduces to Dirac's theorem. When $k=2$, we have $\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}=\floor{\frac{n+2}{4}}$ and thus when 4 does not divide $n$, Theorem \ref{main} reduces to Moon and Moser's theorem; and when 4 does divide $n$, Ferrara, Jacobson, and Powell \cite{FJP} characterized all balanced bipartite graphs $G$ on $n\geq 4$ vertices such that $\delta(G)\geq \frac{n}{4}$, yet $G$ does not have a Hamiltonian cycle. So our proof will only handle the cases when $3\leq k\leq \frac{n}{2}$.
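Both specializations are quick to verify computationally; the following is an illustrative check in integer arithmetic (not part of the paper):

```python
def degree_bound(n, k):
    """ceil(n/2) + floor((n+2) / (2*ceil((k+1)/2))) - n/k in integer
    arithmetic, assuming k divides n."""
    return (n + 1) // 2 + (n + 2) // (2 * ((k + 2) // 2)) - n // k

# k = n recovers Dirac's ceil(n/2); k = 2 with n even recovers
# Moon and Moser's floor((n+2)/4)
dirac_ok = all(degree_bound(n, n) == (n + 1) // 2 for n in range(3, 300))
moon_moser_ok = all(degree_bound(n, 2) == (n + 2) // 4 for n in range(4, 300, 2))
```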
We will also prove the following which will handle the case when $k=\frac{n}{2}$ and 4 divides $n$. Together with the results in \cite{FJP}, this gives a complete characterization of balanced $k$-partite graphs $G$ on $n$ vertices which satisfy $\delta (G) \geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}$, but do not have a Hamiltonian cycle.
\begin{prop}\label{prop:n=2k}
Let $n\geq 8$ be divisible by $4$, let $k=\frac{n}{2}$, and let $G$ be a balanced $k$-partite graph on $n$ vertices. If $\delta(G)\geq \frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}=\frac{n}{2}-1$ and $G$ does not have a Hamiltonian cycle, then $G$ belongs to one of the families of examples described in Example \ref{n=2k}.
\end{prop}
\subsection{Overview}
We give the lower bound examples in Section \ref{sec:example}, we collect the main lemmas in Section \ref{sec:lemmas} (while it is a combination of existing results, Lemma \ref{domcycle} may be of independent interest), and we deal with the first two exceptions in Section \ref{sec:filter} before starting the main proof in Section \ref{sec:main}. Finally, in the Appendix, we collect some numerical lemmas which are needed because of the floors and ceilings appearing in \eqref{cf}.
This project grew out of an earlier work of the first author together with Krueger, Pritikin, and Thompson \cite{DKPT}, where we considered Hamiltonian cycles in unbalanced $k$-partite graphs for $k\geq 3$. The upcoming Example \ref{tight_gen} first appeared in a more general form in \cite{DKPT}. In fact, it was this example which indicated to us that \eqref{cfgjl} is not always tight. By using Theorem \ref{chv} and Lemma \ref{domcycle}, we were able to streamline the original proof of Chen et al.\ with the correct degree condition; however, because of the unexpected (to us) exceptional cases which arose when $k=\frac{n}{2}$, our overall proof didn't end up being any shorter than the original. Again, we emphasize that Chen et al.\ have a beautiful result which places Dirac's theorem and Moon and Moser's theorem on a common spectrum. It is only because of the fundamental nature of these results that we have expended the effort necessary to provide the tight degree condition in all cases.
\subsection{Notation}
For $S\subseteq V(G)$, we let $N(S) = \bigcup_{v\in S}N(v)$ and $\overline{S}= V(G)\setminus S$.
Given disjoint sets $A, B\subseteq V(G)$, we let $\delta(A,B)=\min\{|N(v)\cap B|: v\in A\}$.
Given a cycle $v_1v_2\dots v_kv_1$, $i\in [k]$, and an integer $t$, we assume that the addition in the indices, such as $v_{i+t}$, is taken modulo $k$.
\section{Tightness examples}\label{sec:example}
\begin{exa}\label{tight_gen}
For all $k\geq 2$ and all $n$ divisible by $k$, there exists a family $\mathcal{F}$ of balanced $k$-partite graphs on $n$ vertices such that for all $F\in \mathcal{F}$,
$$\delta(F)\geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}-1,$$ but $F$ does not have a Hamiltonian cycle.
\end{exa}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale = .25]
\draw[pattern=north west lines, pattern color=gray] (-18, 1) rectangle (-16, 2);
\draw[pattern=north west lines, pattern color=gray] (-16, 1) rectangle (-12, 1.5);
\draw[pattern=north west lines, pattern color=gray] (-18, -6) rectangle (-4, 1);
\draw (-21, 1) node[left]{$\floor{\frac{\ceiling{\frac{n+1}{2}}}{\ceiling{\frac{k+1}{2}}}}$} -- (-19, 1);
\draw[dashed] (-20, 1) -- (8, 1);
\draw (-18, -6) node[below right]{$V_1$} rectangle (-16, 4);
\draw (-16, -6) node[below right]{$V_2$} rectangle (-14, 4);
\draw (-14, -6) rectangle (-12, 4);
\draw (-12, -6) rectangle (-10, 4);
\draw (-10, -6) rectangle (-8, 4);
\draw (-8, -6) rectangle (-6, 4);
\draw (-6, -6) node[below]{$~~~~~~~~~V_{\ceiling{\frac{k+1}{2}}}$} rectangle (-4, 4);
\draw (-4, -6) rectangle (-2, 4);
\draw (-2, -6) rectangle (0, 4);
\draw (0, -6) rectangle (2, 4);
\draw (2, -6) rectangle (4, 4);
\draw (4, -6) rectangle (6, 4);
\draw (6, -6) node[below]{$~~~~V_k$} rectangle (8, 4);
\end{tikzpicture}
\caption{The family of graphs $\mathcal{F}$. The shaded sets represent $X_1, \dots, X_{\ceiling{\frac{k+1}{2}}}$.}
\end{figure}
\begin{proof}
First note that
\begin{equation}\label{equal}
\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}=\floor{\frac{\ceiling{\frac{n+1}{2}}}{\ceiling{\frac{k+1}{2}}}}.
\end{equation}
Indeed, if $k$ is even, then both sides of the equation equal $\floor{\frac{n+2}{k+2}}$; if $k$ is odd and $n$ is even, then both sides equal $\floor{\frac{n+2}{k+1}}$; and if $k$ and $n$ are both odd, then the right side is $\floor{\frac{n+1}{k+1}}$, which equals $\floor{\frac{n+2}{k+1}}$ since $\frac{n+2}{k+1}$ is not an integer ($n+2$ is odd while $k+1$ is even).
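The identity \eqref{equal} can also be confirmed mechanically over a range of parameters (an illustrative check, not part of the proof):

```python
def lhs(n, k):
    # floor((n + 2) / (2 * ceil((k + 1) / 2)))
    return (n + 2) // (2 * ((k + 2) // 2))

def rhs(n, k):
    # floor(ceil((n + 1) / 2) / ceil((k + 1) / 2)); ceil((n+1)/2) == (n+2)//2
    return ((n + 2) // 2) // ((k + 2) // 2)

identity_holds = all(lhs(n, k) == rhs(n, k)
                     for k in range(2, 60) for n in range(1, 600))
```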
Let $\mathcal{F}$ be the family of graphs which can be obtained from a complete $k$-partite graph with parts $V_1, \dots, V_k$ such that $|V_i|=\frac{n}{k}$ for all $i\in [k]$, by selecting some $X_i\subseteq V_i$ for all $i\in [\ceiling{\frac{k+1}{2}}]$ such that $|X_1|\geq \dots\geq |X_{\ceiling{\frac{k+1}{2}}}|=\floor{\frac{\ceiling{\frac{n+1}{2}}}{\ceiling{\frac{k+1}{2}}}}$ and $|X_1\cup\dots\cup X_{\ceiling{\frac{k+1}{2}}}|=\ceiling{\frac{n+1}{2}}$, and then adding all edges between parts except for those between a vertex in $X_i$ and a vertex in $X_j$ for all $i, j \in [\ceiling{\frac{k+1}{2}}]$. Note that every $F\in \mathcal{F}$ has an independent set of size $\ceiling{\frac{n+1}{2}}$ and thus does not contain a Hamiltonian cycle.
Finally to see that the degree condition is satisfied, let $i\in [\ceiling{\frac{k+1}{2}}]$ and let $v\in X_{i}$. We have by \eqref{equal}
\begin{align*}
d(v)&=(1-\frac{1}{k})n-|X_1\cup \dots\cup X_{i-1}\cup X_{i+1}\cup \dots\cup X_{\ceiling{\frac{k+1}{2}}}|\\
&\geq (1-\frac{1}{k})n-\left(\ceiling{\frac{n+1}{2}}-\floor{\frac{\ceiling{\frac{n+1}{2}}}{\ceiling{\frac{k+1}{2}}}}\right)
=\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}-1. \qedhere
\end{align*}
\end{proof}
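The construction can be encoded and checked directly for small parameters (a hypothetical encoding with the $X_i$ chosen as balanced as possible): the minimum degree meets the bound minus one, while $X_1\cup\dots\cup X_{\ceiling{\frac{k+1}{2}}}$ is an independent set of size $\ceiling{\frac{n+1}{2}}>\frac{n}{2}$, which rules out a Hamiltonian cycle.

```python
def build_example(k, n):
    """One member of the family F: a complete k-partite graph with parts of
    size n // k, minus every edge between the chosen sets X_1, ..., X_m,
    where m = ceil((k+1)/2) and |X_1| + ... + |X_m| = ceil((n+1)/2)."""
    s, m, target = n // k, (k + 2) // 2, (n + 2) // 2
    base, extra = divmod(target, m)
    sizes = [base + 1] * extra + [base] * (m - extra)  # nonincreasing
    assert all(sz <= s for sz in sizes)                # each X_i fits in V_i
    part_of = {v: v // s for v in range(n)}
    X = set().union(*({i * s + j for j in range(sizes[i])} for i in range(m)))
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if part_of[u] != part_of[v] and not (u in X and v in X):
                adj[u].add(v)
                adj[v].add(u)
    return adj, X

adj, X = build_example(4, 12)
min_degree = min(len(adj[v]) for v in adj)
```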
\begin{exa}\label{n=2k} Let $n\geq 8$ be divisible by $4$ and let $k=\frac{n}{2}$.
\begin{enumerate}[label=\emph{(\roman*)}]
\item\label{e1} There exists a family $\mathcal{F}_1$ of balanced $k$-partite graphs on $n$ vertices such that for all $F_1\in \mathcal{F}_1$, $\delta(F_1)\geq \frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}=\frac{n}{2}-1$, but $\kappa(F_1)\leq 1$ and thus $F_1$ does not have a Hamiltonian cycle.
\item\label{e2} There exists a 2-connected balanced 4-partite graph $F_2$ on 8 vertices with $\alpha(F_2)=3$ such that $F_2$ does not have a Hamiltonian cycle.
\item\label{e3} There exists a family $\mathcal{F}_3$ of balanced $k$-partite graphs on $n$ vertices such that for all $F_3\in \mathcal{F}_3$, $\delta(F_3)\geq \frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}=\frac{n}{2}-1$, $\kappa(F_3)\geq 2$, and $\alpha(F_3)=\frac{n}{2}$, but $F_3$ does not have a Hamiltonian cycle.
\end{enumerate}
\end{exa}
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{F2.pdf}
\caption{The graph $F_2$.}\label{fig_F2}
\end{figure}
\begin{proof}
\begin{enumerate}
\item Let $V_i=\{x_i, y_i\}$ for all $i\in [k]$. Add all edges inside $\{x_1, \dots, x_k\}$, add all edges from $y_k$ to $\{y_1, \dots, y_{k-1}\}$, and for all $i\in [k-1]$ add at least $k-1$ edges from $y_i$ to $\{y_1, \dots, y_{i-1}, y_{i+1}, \dots, y_{k-1}\}\cup \{x_{k}\}$. Let $\mathcal{F}_1$ be the family of graphs thus obtained. Note that every graph $F_1\in \mathcal{F}_1$ has $\delta(F_1)=k-1=\frac{n}{2}-1$ and $\kappa(F_1)\leq 1$ and thus $F_1$ does not have a Hamiltonian cycle.
\item We let $F_2$ be the graph in Figure \ref{fig_F2}, a balanced 4-partite graph (with vertices of the same shape being in the same part of the partition) which is 2-connected and has $\alpha(F_2)=3$. Note that $F_2$ has no Hamiltonian cycle since $F_2-x_1-x_4$ has three components.
\item Let the parts be labeled $X_1, \dots, X_{k/2}$, $Y_1, \dots, Y_{k/2}$ and let $X=\cup_{i=1}^{k/2} X_i$ and $Y=\cup_{i=1}^{k/2}Y_i$. Let $y'\in Y_{k/2}$, let $y''\in Y\setminus \{y'\}$, and let $x'\in X$. Add all edges between $X$ and $Y\setminus \{y', y''\}$, all edges from $y''$ to $X\setminus \{x'\}$, and all edges from $y'$ to $\{x'\}\cup (Y\setminus Y_{k/2})$. Furthermore, we may add any number of other edges between the parts $Y_1, \dots, Y_{k/2}$ and we may add the edge $x'y''$. Let $\mathcal{F}_3$ be the family of graphs thus obtained. Let $F_3\in \mathcal{F}_3$ and let $H$ be the bipartite graph induced by $[X,Y]$. It is easily seen that $\delta(F_3)\geq \frac{n}{2}-1$, $\kappa(F_3)\geq 2$, and $\alpha(F_3)=\frac{n}{2}$. Since $X$ is an independent set, if $F_3$ has a Hamiltonian cycle, it must lie in $H$; however, since $y'$ has degree 1 in $H$, there is no Hamiltonian cycle in $H$.
\end{enumerate}
\end{proof}
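For $n=8$ and $k=4$, the family $\mathcal{F}_1$ of part \ref{e1} can be checked mechanically. The following sketch (with our own labels; for $k=4$ the set $\{y_1,\dots,y_{k-1}\}\setminus\{y_i\}\cup\{x_k\}$ has exactly $k-1$ elements, so all optional edges are present) verifies the minimum degree, the cut vertex $x_4$, and non-Hamiltonicity by brute force.

```python
# Sanity check for Example (i) with n = 8, k = 4 (a sketch; labels are our own).
# Parts V_i = {x_i, y_i}; edges: a clique on {x1..x4}, all pairs of y's,
# and the edges y_i x_4 for i < 4.
from itertools import combinations, permutations

X = ["x1", "x2", "x3", "x4"]
Y = ["y1", "y2", "y3", "y4"]
V = X + Y
E = {frozenset(e) for e in combinations(X, 2)}           # clique on the x's
E |= {frozenset(e) for e in combinations(Y, 2)}          # all pairs of y's
E |= {frozenset(("x4", y)) for y in Y[:3]}               # y_i x_4 for i < 4

def deg(v):
    return sum(1 for e in E if v in e)

def components(verts):
    verts, count = set(verts), 0
    while verts:
        count += 1
        stack = [verts.pop()]
        while stack:
            v = stack.pop()
            nbrs = {u for u in verts if frozenset((u, v)) in E}
            verts -= nbrs
            stack += list(nbrs)
    return count

assert min(deg(v) for v in V) == len(V) // 2 - 1         # delta = n/2 - 1
assert components(set(V) - {"x4"}) > 1                   # x4 is a cut vertex
# brute force: no Hamiltonian cycle
assert not any(all(frozenset((p[i], p[(i + 1) % 8])) in E for i in range(8))
               for p in permutations(V))
print("Example (i) instance verified")
```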
\section{General lemmas}\label{sec:lemmas}
In this section we state three general results which are useful for finding Hamiltonian cycles, beginning with two classics.
\begin{thm}[Dirac \cite{D}]\label{dir}
Let $n\geq d\geq 3$. If $G$ is 2-connected and $\delta(G)\geq d/2$, then $G$ has a cycle of length at least $d$.
\end{thm}
\begin{thm}[Chv\'atal \cite{C}]\label{chv}
Let $G=(U,V,E)$ be a bipartite graph on $n\geq 4$ vertices with vertex sets $U=\{u_1, \dots, u_{n/2}\}$ and $V=\{v_1, \dots, v_{n/2}\}$, labeled so that $d(u_1)\leq \dots\leq d(u_{n/2})$ and $d(v_1)\leq \dots\leq d(v_{n/2})$. If for all $1\leq k<n/2$, $$d(v_k)\leq k \Rightarrow d(u_{n/2-k})\geq \frac{n}{2}-k+1,$$
then $G$ has a Hamiltonian cycle.
\end{thm}
The main lemma which we use to begin the proofs of Theorem \ref{main} and Proposition \ref{prop:n=2k} is the following combination of the well-known result of Nash-Williams \cite{NW} and a (slight weakening of a) result of Bauer, Veldman, Morgana, and Schmeichel \cite{BVMS}. We provide a proof for completeness.
We say that a cycle $C$ in a graph $G$ is \emph{strongly dominating} if $V(G)\setminus V(C)$ is an independent set and no two vertices of $\bigcup_{u\in V(G)\setminus V(C)}N(u)$ appear consecutively on $C$.
\begin{lem}[see {\cite[Lemmas 1,2,3,4]{NW}} and {\cite[Lemma 8]{BVMS}}]\label{domcycle}
Let $G$ be a graph on $n$ vertices. If $G$ is 2-connected and $\delta(G)\geq \frac{n+2}{3}$, then every longest cycle of $G$ is strongly dominating.
\end{lem}
\begin{proof}
Let $C=v_1v_2\dots v_kv_1$ be a longest cycle in $G$ and let $P=u_1u_2\dots u_r$ be a longest path in $G-C$. If $r\leq 1$, then we are done; so suppose $r\geq 2$. We have $k+r\leq n$ and note that by Theorem \ref{dir}, we have $k\geq 2\delta(G)\geq \frac{2n+4}{3}$ and thus
\begin{equation}\label{nrlower}
3r+4\leq n.
\end{equation}
Let $X=N(u_1)\cap V(C)$ and $Y=N(u_r)\cap V(C)$ and note that
\begin{equation}\label{XYlower}
|X|, |Y|\geq \delta(G)-(r-1).
\end{equation}
The key observation is that by the maximality of $C$, no two vertices in $X\cup Y$ are consecutive along $C$, and furthermore if $v_i\in X\cap Y$, then none of $v_{i-r}, \dots, v_{i-1}, v_{i+1}, \dots, v_{i+r}$ are in $X\cup Y$.
First suppose that $X\subseteq Y$ or $Y\subseteq X$; without loss of generality $X\subseteq Y$. In this case we have by \eqref{XYlower},
\begin{equation}\label{Xineq}
n-r\geq k\geq (r+1)|X|\geq (r+1)(\delta(G)-(r-1)).
\end{equation}
First suppose $r=2$, in which case \eqref{Xineq} becomes $n-2\geq 3\left(\frac{n+2}{3}-1\right)=n-1$, a contradiction. Now suppose $r\geq 3$ in which case \eqref{Xineq} becomes $$n\leq \frac{3r^2-5r-5}{r-2}=3r+1-\frac{3}{r-2},$$ contradicting \eqref{nrlower}.
Now suppose that $X\setminus Y\neq \emptyset$ and $Y\setminus X\neq \emptyset$. There are vertices $v_i, v_j\in V(C)$ with the following properties: $v_i\in X\setminus Y$ and the next vertex $v_{i'}\in X\cup Y$ which appears after $v_i$ satisfies $v_{i'}\in Y$ (meaning that $i'\geq i+r+1$), and $v_j\in Y\setminus X$ and the next vertex $v_{j'}\in X\cup Y$ which appears after $v_j$ satisfies $v_{j'}\in X$ (meaning that $j'\geq j+r+1$). Each vertex of $((X\setminus Y)\cup (Y\setminus X))\setminus \{v_i, v_j\}$ is followed by at least one vertex from $V(C)\setminus (X\cup Y)$ and each vertex of $(X\cap Y)\cup \{v_i, v_j\}$ is followed by at least $r$ vertices from $V(C)\setminus (X\cup Y)$. So we have
\begin{align*}
n-r\geq k&\geq 2(|X\setminus Y|+|Y\setminus X|-2)+(r+1)(|X\cap Y|+2)\\
&=|X|+|Y|+|X\cup Y|+(r-2)|X\cap Y|+2(r-1)\\
&\geq \min\{4\delta(G)-2(r-1), 3\delta(G)-1\}\\
&\geq \min\{4(n+2)/3-2(r-1), n+1\},
\end{align*}
where the second to last inequality is seen by using \eqref{XYlower} and splitting into cases whether $|X\cap Y|=0$ or not. However, $4(n+2)/3-2(r-1)\leq n-r$ implies $n\leq 3r-14$, contradicting \eqref{nrlower}.
To see that the second part of the definition of strongly dominating is satisfied, suppose that $C=v_1\dots v_kv_1$ is a longest cycle and suppose $V(G)\setminus V(C)=\{u_1, \dots, u_r\}$ is an independent set. If $|V(G)\setminus V(C)|\leq 1$, we are done, so suppose $r\geq 2$. Let $X=N(u_1)\cap V(C)$ and suppose (without loss of generality) for contradiction that $v_1\in X$ and $v_2\in N(u_2)$. By the maximality of $C$, this implies that $v_3\not \in N(u_2)$ and for all $i\geq 3$, if $v_i\in X$, then $v_{i+1}, v_{i+2}\not\in N(u_2)$. Since $k\leq n-2$, this implies that
$$\frac{n+2}{3}\leq |N(u_2)|\leq k-(2|N(u_1)|-1)\leq n-1-2\left(\frac{n+2}{3}\right)=\frac{n-7}{3},$$ a contradiction.
\end{proof}
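Lemma \ref{domcycle} can also be sanity-checked by brute force on small graphs. The sketch below is a randomized check (not a proof) for $n=7$, where the hypothesis $\delta(G)\geq \frac{n+2}{3}$ amounts to $\delta(G)\geq 3$: for each sampled 2-connected graph with minimum degree at least 3, every longest cycle is verified to be strongly dominating.

```python
# Randomized check (a sketch) of the lemma for n = 7, where delta >= (n+2)/3
# means delta >= 3: in every sampled 2-connected graph on 7 vertices with
# minimum degree >= 3, every longest cycle is strongly dominating.
import random
from itertools import combinations, permutations

random.seed(1)
n = 7
V = list(range(n))

def nbrs(E, v):
    return {u for u in V if frozenset((u, v)) in E}

def connected(E, verts):
    verts = set(verts)
    seen, stack = set(), [min(verts)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend((nbrs(E, v) & verts) - seen)
    return seen == verts

def longest_cycles(E):
    for k in range(n, 2, -1):           # search from the longest length down
        found = []
        for sub in combinations(V, k):
            for perm in permutations(sub[1:]):
                cyc = (sub[0],) + perm
                if all(frozenset((cyc[i], cyc[(i + 1) % k])) in E for i in range(k)):
                    found.append(cyc)
        if found:
            return found
    return []

def strongly_dominating(E, cyc):
    out = set(V) - set(cyc)
    if any(frozenset(p) in E for p in combinations(out, 2)):
        return False                    # V(G) \ V(C) must be independent
    N = set().union(*(nbrs(E, u) for u in out))
    k = len(cyc)
    return not any(cyc[i] in N and cyc[(i + 1) % k] in N for i in range(k))

checked = 0
while checked < 200:
    E = {frozenset(p) for p in combinations(V, 2) if random.random() < 0.5}
    if any(len(nbrs(E, v)) < 3 for v in V):
        continue
    if not all(connected(E, set(V) - {v}) for v in V):
        continue                        # not 2-connected
    assert all(strongly_dominating(E, c) for c in longest_cycles(E))
    checked += 1
print("checked", checked, "random graphs")
```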
\section{Filtering out $\mathcal{F}_1$ and $F_2$}\label{sec:filter}
We would like to say that every balanced $k$-partite graph satisfying \eqref{cf} is 2-connected and that every longest cycle in such a graph is strongly dominating; however, there are two exceptions, and we deal with them before beginning the main proof in the next section.
First, we show that every balanced $k$-partite graph satisfying \eqref{cf} is either $2$-connected or belongs to the family $\mathcal{F}_1$ in Example \ref{n=2k}.\ref{e1}.
\begin{lem}\label{F1}
Let $k\geq 3$, let $n$ be an integer such that $n\geq 2k$, and let $G$ be a balanced $k$-partite graph on $n$ vertices. If $$\delta(G) \geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k},$$ then $G$ is 2-connected unless $n=2k$ and $G\in \mathcal{F}_1$ (see Example \ref{n=2k}.\ref{e1}).
\end{lem}
\begin{proof} Let $V_1, \dots, V_k$ be the parts of $G$. Suppose for contradiction that $\kappa(G)\leq 1$. Let $A, B, C$ be a partition of $V(G)$ such that $|C|\leq 1$ and $G-C$ is not connected.
First suppose that there exists $i\in [k]$ such that $V_i\subseteq A\cup C$ or $V_i\subseteq B\cup C$. Without loss of generality suppose $V_i\subseteq B\cup C$ and let $u\in A$ and $v\in B\cap V_i$. We have
$$2\delta(G)\leq d(u)+d(v)\leq |A|+|C|-1+|B|+|C|-\frac{n}{k}=(1-\frac{1}{k})n+|C|-1\leq (1-\frac{1}{k})n,$$
contradicting Fact \ref{ineq}.\ref{f1} when $k$ is even and $n\geq 3k$, and contradicting Fact \ref{ineq}.\ref{f2} when $k$ is odd and $n\geq 2k$.
So unless $k$ is even and $n=2k$, we must have that for all $i\in [k]$, $V_i\cap A\neq \emptyset$ and $V_i\cap B\neq \emptyset$. Either $C=\emptyset$ and we let $u\in A$ and $v\in B$, or $C\neq \emptyset$ and suppose without loss of generality that $V_1\cap C\neq \emptyset$ in which case we let $u\in V_1\cap A$ and $v\in V_1\cap B$. Either way we have
$$2\delta(G)\leq d(u)+d(v)\leq n-\frac{n}{k},$$
contradicting Fact \ref{ineq}.\ref{f1} when $k$ is even and $n\geq 3k$, and contradicting Fact \ref{ineq}.\ref{f2} when $k$ is odd and $n\geq 2k$.
Finally, suppose $k$ is even and $n=2k$ which implies $\delta(G)\geq \frac{n}{2}-1$. For all $u\in A$ we have $\frac{n}{2}-1\leq d(u)\leq |A|+|C|-1$ which implies $|A|\geq \frac{n}{2}-|C|$ and for all $v\in B$ we have $\frac{n}{2}-1\leq d(v)\leq |B|+|C|-1$ which implies $|B|\geq \frac{n}{2}-|C|$. If $C=\emptyset$, this implies $|A|=\frac{n}{2}$ and $|B|=\frac{n}{2}$. If $C\neq \emptyset$, we have $|A|\geq \frac{n}{2}-1$ and $|B|\geq \frac{n}{2}-1$, so without loss of generality suppose $|A|+|C|=\frac{n}{2}$ and $|B|=\frac{n}{2}$.
If there exists $i\in [k]$ such that $V_i\subseteq A\cup C$ or $V_i\subseteq B$; say $V_i\subseteq A\cup C$, then for $u\in V_i\cap A$, we have
\[\frac{n}{2}-1\leq d(u)\leq |A|+|C|-|V_i|=\frac{n}{2}-2,\]
a contradiction. Thus for all $i\in [k]$, we have $|V_i\cap (A\cup C)|=1$ and $|V_i\cap B|=1$. Let $V_i=\{x_i,y_i\}$ for all $i\in [k]$ and let $X=\{x_1, \dots, x_k\}$ and $Y=\{y_1, \dots, y_k\}$ and suppose $X=A\cup C$ and $Y=B$. So it must be the case that every vertex $u\in A$ is adjacent to precisely the vertices in $X\setminus \{u\}$ which means $G[X]$ is a clique. Also every vertex $v\in Y$ is adjacent to at least $\frac{n}{2}-1$ of the $\frac{n}{2}$ vertices in $(Y\cup C)\setminus \{v\}$. Thus $G\in \mathcal{F}_1$.
\end{proof}
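The degree-sum contradiction invoked twice above can be confirmed numerically: under the degree hypothesis of the lemma, $2\delta(G)$ strictly exceeds $(1-\frac{1}{k})n$ in the stated ranges. A quick integer-arithmetic sketch (using $\ceiling{\frac{k+1}{2}}=\lfloor\frac{k+2}{2}\rfloor$ for integers $k$):

```python
# Spot-check (a sketch): with dbound(n, k) the degree bound of the lemma,
# 2 * dbound(n, k) > (1 - 1/k) * n whenever k is even and n >= 3k,
# or k is odd and n >= 2k (n always a multiple of k here).
from fractions import Fraction

def dbound(n, k):
    c = (k + 2) // 2                   # = ceil((k+1)/2)
    return (n + 1) // 2 + (n + 2) // (2 * c) - n // k

for k in range(3, 41):
    start = 3 if k % 2 == 0 else 2
    for n in range(start * k, 30 * k + 1, k):
        assert 2 * dbound(n, k) > Fraction((k - 1) * n, k)
print("degree-sum contradiction verified for k = 3..40")
```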
We now prove a lemma which shows that when $G$ is a 2-connected balanced $k$-partite graph satisfying \eqref{cf}, we either have that every longest cycle in $G$ is strongly dominating or $G$ is isomorphic to the graph $F_2$ in Example \ref{n=2k}.\ref{e2}.
\begin{lem}\label{F2}
Let $k\geq 3$ and let $n\geq 2k$ and let $G$ be a balanced $k$-partite graph on $n$ vertices. If $G$ is 2-connected and $\delta(G)\geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}$, then every longest cycle of $G$ is strongly dominating unless $n=8$ and $G\cong F_2$ (see Example \ref{n=2k}.\ref{e2}).
\end{lem}
\begin{proof}
We will show that, unless $n=8$ and $k=4$, we have $\delta(G)\geq \ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}\geq \frac{n+2}{3}$ and thus we are done by Lemma \ref{domcycle}.
First suppose $k$ is odd, in which case $\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}=\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{k+1}}-\frac{n}{k}$. First note that when $k=3$ and $n=6$ or $n=9$ we have by direct inspection that $\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{k+1}}-\frac{n}{k}\geq \frac{n+2}{3}$. So in the remaining cases we have
\begin{equation}\label{n10}
n\geq 10\geq 10-\frac{24k-60}{k^2+k-6}=\frac{2k(5k-7)}{k^2+k-6}.
\end{equation}
Thus, using Fact \ref{kodd}, we have
\begin{align*}
\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{k+1}}-\frac{n}{k}-\frac{n+2}{3}&\geq \frac{n}{2}+\frac{n+2-(k-1)}{k+1}-\frac{n}{k}-\frac{n+2}{3}\\
&=\left(\frac{1}{6}-\frac{1}{k(k+1)}\right)n-\frac{2}{3}-\frac{k-3}{k+1}\\
&\stackrel{\eqref{n10}}{\geq} \left(\frac{1}{6}-\frac{1}{k(k+1)}\right)\frac{2k(5k-7)}{k^2+k-6}-\frac{2}{3}-\frac{k-3}{k+1}=0,
\end{align*}
as desired.
Now suppose $k$ is even, in which case $\ceiling{\frac{n}{2}}+\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}-\frac{n}{k}=\frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}$. Note that aside from the case $n=8$ and $k=4$ we have
\begin{equation}\label{n12}
n\geq 12\geq 12-\frac{2(k+18)(k-4)}{k^2+2k-12}=\frac{2k(5k-2)}{k^2+2k-12}.
\end{equation}
Thus
\begin{align*}
\frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}-\frac{n+2}{3}&\geq \frac{n}{2}+\frac{n+2-k}{k+2}-\frac{n}{k}-\frac{n+2}{3}\\
&=\left(\frac{1}{6}-\frac{2}{k(k+2)}\right)n-\frac{2}{3}-\frac{k-2}{k+2}\\
&\stackrel{\eqref{n12}}{\geq} \left(\frac{1}{6}-\frac{2}{k(k+2)}\right)\frac{2k(5k-2)}{k^2+2k-12}-\frac{2}{3}-\frac{k-2}{k+2}=0,
\end{align*}
as desired.
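Both closing computations rest on exact rational identities: substituting the critical value of $n$ from \eqref{n10} (respectively \eqref{n12}) makes the final expression vanish. This can be confirmed in exact arithmetic:

```python
# Exact-arithmetic check (a sketch) of the identities behind (n10) and (n12):
# the rewritten right-hand sides agree, and plugging the critical n into the
# respective lower bound gives exactly 0.
from fractions import Fraction as F

for k in range(3, 60):
    if k % 2 == 1:                          # odd case, cf. (n10)
        d = k * k + k - 6
        n = F(2 * k * (5 * k - 7), d)
        assert F(10) - F(24 * k - 60, d) == n
        assert (F(1, 6) - F(1, k * (k + 1))) * n - F(2, 3) - F(k - 3, k + 1) == 0
    else:                                   # even case, cf. (n12)
        d = k * k + 2 * k - 12
        n = F(2 * k * (5 * k - 2), d)
        assert F(12) - F(2 * (k + 18) * (k - 4), d) == n
        assert (F(1, 6) - F(2, k * (k + 2))) * n - F(2, 3) - F(k - 2, k + 2) == 0
print("both identities hold exactly")
```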
Finally suppose $n=8$ and $k=4$ and let $C$ be a longest cycle of $G$. Since $G$ is 2-connected and $\delta(G)\geq \frac{8}{2}+\floor{\frac{10}{6}}-\frac{8}{4}=3$, Theorem \ref{dir} implies that $C$ has length at least $6$. If $C$ had length at least 7, it would be a strongly dominating cycle, so suppose $C$ has length 6. Let $C=x_1x_2x_3x_4x_5x_6$ and let $x'$ and $x''$ be the two vertices in $V(G)\setminus V(C)$. If $x'x''\not\in E(G)$, then by the maximality of $C$ it is easily seen that, without loss of generality, $N(x')=N(x'')=\{x_1,x_3,x_5\}$ and thus $C$ is strongly dominating; so suppose that $x'x'' \in E(G)$.
Without loss of generality suppose $x'x_1 \in E(G)$. If either $x''x_2\in E(G)$ or $x''x_6\in E(G)$, then $G$ has a Hamiltonian cycle; and if either $x''x_3\in E(G)$ or $x''x_5\in E(G)$, then $G$ has a cycle longer than $C$, a contradiction. Since $\delta(G)\geq 3$, this forces $x''x_1,x''x_4 \in E(G)$. By the same argument we get $x'x_4 \in E(G)$. If $x_6x_3 \in E(G)$, then $x_6x_3x_2x_1x'x''x_4x_5x_6$ is a Hamiltonian cycle, so $x_6x_3 \not \in E(G)$, and by symmetry $x_2x_5 \not \in E(G)$. If $x_6x_2 \in E(G)$, then $x_6x_2x_3x_4x'x''x_1x_6$ is a cycle longer than $C$, a contradiction. So $x_6x_2 \not \in E(G)$ and by symmetry $x_3x_5 \not \in E(G)$. Since $\delta(G)\geq 3$, this forces $x_6x_4,x_5x_1,x_2x_4,x_3x_1 \in E(G)$. Therefore $G \cong F_2$.
\end{proof}
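The last paragraph pins down the adjacencies of $F_2$. As a sanity check, the following sketch (our own labels, with $x'$, $x''$ written as \texttt{xp}, \texttt{xq}, and with one balanced $4$-partition consistent with these adjacencies chosen by us) verifies the properties claimed in Example \ref{n=2k}.\ref{e2} by brute force.

```python
# Brute-force verification (a sketch) of the graph F2 as reconstructed in the
# proof: balanced 4-partite, delta = 3, 2-connected, alpha = 3, no Hamiltonian
# cycle, and {x1, x4} is a cut pair.
from itertools import combinations, permutations

V = ["x1", "x2", "x3", "x4", "x5", "x6", "xp", "xq"]     # xp = x', xq = x''
edges = [("x1","x2"), ("x2","x3"), ("x3","x4"), ("x4","x5"), ("x5","x6"), ("x6","x1"),
         ("xp","xq"), ("xp","x1"), ("xp","x4"), ("xq","x1"), ("xq","x4"),
         ("x6","x4"), ("x5","x1"), ("x2","x4"), ("x3","x1")]
E = {frozenset(e) for e in edges}
parts = [{"x1", "x4"}, {"x2", "x5"}, {"x3", "xp"}, {"x6", "xq"}]  # one valid partition

def connected(verts):
    verts = set(verts)
    seen, stack = set(), [min(verts)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack += [u for u in verts if frozenset((u, v)) in E and u not in seen]
    return seen == verts

def independent(S):
    return all(frozenset(p) not in E for p in combinations(S, 2))

assert all(independent(P) for P in parts)                    # balanced 4-partite
assert min(sum(1 for e in E if v in e) for v in V) == 3      # delta(F2) = 3
assert all(connected(set(V) - {v}) for v in V)               # 2-connected
assert any(independent(S) for S in combinations(V, 3))       # alpha >= 3
assert not any(independent(S) for S in combinations(V, 4))   # alpha = 3
assert not connected(set(V) - {"x1", "x4"})                  # {x1, x4} is a cut
assert not any(all(frozenset((p[i], p[(i + 1) % 8])) in E for i in range(8))
               for p in permutations(V))                     # no Hamiltonian cycle
print("F2 verified")
```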
\section{Proof of Theorem \ref{main} and Proposition \ref{prop:n=2k}}\label{sec:main}
Let $G$ be a balanced $k$-partite graph on $n$ vertices, and let $V_1,V_2, \dots, V_k$ denote the parts, so that $|V_i|=\frac{n}{k}=:m$ for all $i\in [k]$. Since the case $k=n$ is Dirac's theorem, we suppose $k\leq \frac{n}{2}$, and since the case $k=2$ is handled in \cite{MM} and \cite{FJP}, we suppose $k\geq 3$. Furthermore, if $k=\frac{n}{2}$, we suppose that $G\not\in \mathcal{F}_1$ and $G\not\cong F_2$ (see Example \ref{n=2k}). Now let $C$ be a maximum length cycle and suppose for contradiction that $C$ is not Hamiltonian. By Lemma \ref{F1} and Lemma \ref{F2} we may assume that $C$ is strongly dominating.
Without loss of generality, let
$z\in V_1\setminus V(C)$.
Let $S=(V(G)\setminus V(C))\cup \{v_{i+1}: v_i\in N(z)\}$ and $R=(V(G)\setminus V(C))\cup \{v_{i-1}: v_i\in N(z)\}$ and note that
\begin{equation} \label{SR}
|S|, |R|\geq \delta(G)+1.
\end{equation}
Since $C$ is strongly dominating, both $S$ and $R$ are independent sets.
For each $i \in [k]$, set $$S_i = S \cap V_i \text{ and } R_i = R \cap V_i.$$
Define $\ell=|\{i\in [k]: S_i\neq \emptyset\}|$ and $\ell'=|\{i\in [k]: R_i\neq \emptyset\}|$ and without loss of generality suppose $$\ell\leq \ell'.$$ Furthermore, without loss of generality, we may suppose that
\begin{align*}
S_i \neq \emptyset \text{ for all } i\in [\ell] \text{ and } S_j=\emptyset \text{ for all } j\in [k]\setminus [\ell].
\end{align*}
\begin{cla}\label{elllower}
$\ell, \ell'\geq \ceiling{\frac{k}{2}}$.
\end{cla}
\begin{proof}
We claim that $|R|,|S|\geq \delta(G)+1>(\ceiling{\frac{k}{2}}-1)\frac{n}{k}$, which implies the result. Indeed, we have
\begin{equation}\label{k/2}
\delta(G) + 1 - (\ceiling{\frac{k}{2}} - 1)\frac{n}{k} \geq \ceiling{\frac{n}{2}} + \floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}} + 1 - \frac{n}{k}\ceiling{\frac{k}{2}}.
\end{equation}
When $k$ is even, \eqref{k/2} reduces to $\floor{\frac{n+2}{k+2}} + 1 > 0$, and when $k$ is odd, by Fact \ref{kodd}, \eqref{k/2} reduces to $\frac{n}{2}+\frac{n}{k+1}-\frac{k-3}{k+1}-\frac{n}{k}\left(\frac{k+1}{2}\right)+1=\frac{n}{k+1}-\frac{n}{2k}+1-\frac{k-3}{k+1}=\frac{(k-1)n}{2k(k+1)}+\frac{4}{k+1}>0$.
\end{proof}
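The inequality behind this claim can be spot-checked over a range of parameters (a sketch in exact arithmetic; note $\ceiling{\frac{k+1}{2}}=\lfloor\frac{k+2}{2}\rfloor$ for integers $k$):

```python
# Spot-check (a sketch): under the degree hypothesis, delta(G) + 1 exceeds
# (ceil(k/2) - 1) * n/k, which is what forces ell, ell' >= ceil(k/2).
from fractions import Fraction

def dbound(n, k):
    c = (k + 2) // 2                   # = ceil((k+1)/2)
    return (n + 1) // 2 + (n + 2) // (2 * c) - n // k

for k in range(3, 41):
    for n in range(2 * k, 30 * k + 1, k):   # n a multiple of k, n >= 2k
        assert dbound(n, k) + 1 > Fraction(((k + 1) // 2 - 1) * n, k)
print("verified for all k = 3..40")
```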
\begin{cla}\label{Si}~
\begin{enumerate}[label=\emph{(\roman*)}]
\item \label{c1} For all $y \in S$,
$|\overline {N(y)} \setminus S| \leq n-2\delta(G)-1.$ For all $y \in R$,
$|\overline {N(y)} \setminus R| \leq n-2\delta(G)-1.$
\item \label{c2} For all $i\in [k]$, if $S_i\neq \emptyset$, then $|S_i| \geq 2\delta(G)+1-(1-\frac{1}{k})n\geq \frac{1}{2}(\frac{n}{k}-(\floor{\frac{n-1}{2}}-\delta(G))).$ For all $i\in [k]$, if $R_i\neq \emptyset$, then $|R_i| \geq 2\delta(G)+1-(1-\frac{1}{k})n\geq \frac{1}{2}(\frac{n}{k}-(\floor{\frac{n-1}{2}}-\delta(G))).$
\end{enumerate}
\end{cla}
\begin{proof}
\begin{enumerate}
\item Since $C$ is a longest cycle of $G$, the vertex subsets
$N(y)$, $S$, and $\overline{N(y)} \setminus S$ are pairwise disjoint for all $y \in S$.
Thus
$ n= |N(y)| + |S| + |\overline{N(y)} \setminus S|\geq 2\delta(G)+1+|\overline{N(y)} \setminus S|$, where the inequality holds by \eqref{SR}. Thus $|\overline {N(y)} \setminus S| \leq n-2\delta(G)-1.$ Similarly, $N(y)$, $R$, and $\overline{N(y)} \setminus R$ are pairwise disjoint for all $y \in R$, so $|\overline{N(y)} \setminus R| \leq n - 2\delta(G) - 1$.
\item Let $y\in S_i$. We have that $V_i\setminus S_i\subseteq \overline {N(y)} \setminus S$ so by (i) we have that $|S_i|\geq \frac{n}{k}-|\overline {N(y)} \setminus S|\geq 2\delta(G)+1-(1-\frac{1}{k})n$ as desired. Similarly, if $y \in R_i$, then by (i) we have that $|R_i|\geq \frac{n}{k} - |\overline{N(y)} \setminus R| \geq 2\delta(G) + 1 - (1 - \frac{1}{k})n$.
Finally, we have $2\delta(G)+1-(1-\frac{1}{k})n\geq \frac{1}{2}(\frac{n}{k}-(\floor{\frac{n-1}{2}}-\delta(G)))$ by Fact \ref{ineq}.\ref{f3}.\qedhere
\end{enumerate}
\end{proof}
\begin{cla}\label{zVi}~
For all $i\in [k]$, if $S_i\cap R_i\neq \emptyset$, then $|N(z)\cap V_i| \leq \floor{\frac{n-1}{2}}-\delta(G).$
\end{cla}
\begin{proof}
Let $i$ with $2\leq i\leq k$ be such that $S_i\cap R_i\neq \emptyset$ and let $y \in S_i\cap R_i$. So $y$ is a successor along $C$ of some vertex
in $N(z)$, and a predecessor along $C$ of some vertex in $N(z)$ as well.
Since $C$ is a longest cycle of $G$, neither $N(z)$ nor $N(y)$ contains
two consecutive vertices of $C$, so $N(y)\cap (S\cup R)
= \emptyset$. Thus,
$$n-1\geq |V(C)| \geq 2 |N(z)\cup N(y)| = 2(d(y) + |N(z) \setminus N(y)|)
\geq 2(d(y) +|N(z)\cap V_i|).$$
Rearranging gives the result.
\end{proof}
\begin{cla}\label{ellupper}
$\frac{\ell+\ell'}{2}< \ceiling{\frac{k+1}{2}}$.
\end{cla}
\begin{proof}
Let $i_1< i_2< \dots< i_{\ell'}$ be the indices such that $R_{i_j}\neq \emptyset$ for all $j\in [\ell']$. By Claim \ref{Si}.\ref{c2} and Claim \ref{zVi} and the fact that $z\in S_1\cap R_1$ we see that each of the sets $S_2, \dots, S_\ell$, $R_{i_2}, \dots, R_{i_{\ell'}}$ contributes at least $\frac{1}{2}\left(\frac{n}{k}-\left(\floor{\frac{n-1}{2}}-\delta(G)\right)\right)$ to $|\overline{N(z)} \setminus V_1|$. So we have
$$
\delta(G)\leq d(z) \leq (1-\frac{1}{k})n-\frac{1}{2}\left(\frac{n}{k}-\left(\floor{\frac{n-1}{2}}-\delta(G)\right)\right)(\ell+\ell'-2).
$$
Solving the above inequality for $\frac{\ell+\ell'}{2}$, we have
$$
\frac{\ell+\ell'}{2}\leq \frac{\ceiling{\frac{n+1}{2}}}{\floor{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}}+1}\leq \frac{\frac{n+2}{2}}{\frac{n+2}{2\ceiling{\frac{k+1}{2}}}-\frac{2\ceiling{\frac{k+1}{2}}-1}{2\ceiling{\frac{k+1}{2}}}+1}=\frac{n+2}{n+3}\ceiling{\frac{k+1}{2}}<\ceiling{\frac{k+1}{2}},$$
as desired.
\end{proof}
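The final chain of inequalities can likewise be spot-checked in exact arithmetic (a sketch; here $\ceiling{\frac{n+1}{2}}=\lfloor\frac{n+2}{2}\rfloor$ since $n$ is an integer):

```python
# Spot-check (a sketch) of the closing chain: the middle quantity is strictly
# below ceil((k+1)/2) for every relevant pair (n, k).
from fractions import Fraction

for k in range(3, 41):
    c = (k + 2) // 2                                  # = ceil((k+1)/2)
    for n in range(2 * k, 40 * k + 1, k):
        lhs = Fraction((n + 2) // 2, (n + 2) // (2 * c) + 1)
        assert lhs < c
print("verified for all k = 3..40")
```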
Since we are supposing without loss of generality that $\ell\leq \ell'$, we have by Claim \ref{elllower} and Claim \ref{ellupper} that $$\ceiling{\frac{k}{2}}\leq \ell<\ceiling{\frac{k+1}{2}}.$$ Thus if $k$ is odd, we have a contradiction. So for the rest of the proof we will suppose that $k$ is even and consequently, by Claims \ref{elllower} and \ref{ellupper}, we have $\ell=\frac{k}{2}$.
\subsection{$k$ is even and $\ell=\frac{k}{2}$}
Let $$A = \bigcup_{i=1}^{\ell}V_i ~\text{ and }~ B = \bigcup_{i= \ell+1}^kV_i,$$ and let $H$ be the bipartite graph induced by $[A,B]$. Label the vertices of $A$ as $u_1, \dots, u_{n/2}$ such that $d_H(u_1)\leq \dots\leq d_H(u_{n/2})$ and label the vertices of $B$ as $v_1, \dots, v_{n/2}$ such that $d_H(v_1)\leq \dots\leq d_H(v_{n/2})$. Recall that $S\subseteq A$.
Since we are in the case where $k$ is even, \eqref{cf} reduces to $$\delta(G)\geq \frac{n}{2}+\floor{\frac{n+2}{k+2}}-\frac{n}{k}.$$
\begin{cla}\label{AB}~
\begin{enumerate}[label=\emph{(\roman*)}]
\item\label{SB} $\delta(S, B)\geq \frac{n}{2}+2\floor{\frac{n+2}{k+2}}-\frac{2n}{k}+1$, with equality only if $S_i=V_i$ for some $i\in [\ell]$,
\item\label{ASB} $\delta(A\setminus S, B)\geq \floor{\frac{n+2}{k+2}}\geq |A\setminus S|+1$,
\item\label{BA} $\delta(B,A)\geq \floor{\frac{n+2}{k+2}}\geq |A\setminus S|+1$.
\end{enumerate}
\end{cla}
\begin{proof}
\begin{enumerate}
\item This follows from Claim \ref{Si}.\ref{c1} since for all $y\in S$, $$d(y, B)\geq |B|-|\overline{N(y)} \setminus S|\geq |B|-(n-2\delta(G)-1)\geq \frac{n}{2}+2\floor{\frac{n+2}{k+2}}-\frac{2n}{k}+1.$$
If we have equality above, this implies that $\overline{N(y)} \setminus S\subseteq B$, which in particular implies that if $y\in V_i$, then $V_i\setminus S_i=\emptyset$.
\item We have
\begin{align*}
\delta(A\setminus S, B)&\geq \delta(G)-(\frac{n}{2}-\frac{n}{k})\\
&\geq \floor{\frac{n+2}{k+2}}
\geq \frac{n}{k}-\floor{\frac{n+2}{k+2}}
\geq \frac{n}{2}-\delta(G)\stackrel{\eqref{SR}}{\geq} \frac{n}{2}-|S|+1=|A\setminus S|+1,
\end{align*}
where the third inequality holds by Fact \ref{ff}.\ref{ff1}. \qedhere
\item Since $\delta(B,A)\geq \delta(G)-(\frac{n}{2}-\frac{n}{k})$, the rest of the calculation is the same as in \ref{ASB}.
\end{enumerate}
\end{proof}
Before proceeding with the rest of the proof, we finally filter out $\mathcal{F}_3$.
\begin{cla}\label{n/2-1}
If $\delta(G)\geq \frac{n}{2}-1$, then either $G$ has a Hamiltonian cycle or $k=\frac{n}{2}$ and $G\in \mathcal{F}_3$.
\end{cla}
\begin{proof}
By \eqref{SR} and the fact that $\ell=\frac{k}{2}$, we have $|S|=\frac{n}{2}$ and thus $A=S$ which means $A$ is an independent set. So by Claim \ref{AB}.\ref{SB}, we have $\delta(A,B)\geq \frac{n}{2}-1$. Furthermore we have by Claim \ref{AB}.\ref{BA} that $\delta(B,A)\geq \floor{\frac{n+2}{k+2}}$. If $\delta(B,A)\geq 2$, then by Theorem \ref{chv}, $G$ has a Hamiltonian cycle.
So suppose $\delta(B,A)=1$, which implies $n=2k$. In this case there is a vertex $y' \in B$ such that $y'$ has only one neighbor in $A$, say $x'$. We have $\delta(A,B)\geq \frac{n}{2}-1=|B|-1$, so every vertex in $A\setminus \{x'\}$ is adjacent to every vertex in $B\setminus \{y'\}$. Since $d(y', A)=1$ and $d(y')=\frac{n}{2}-1$, it must be the case that $y'$ is adjacent to every vertex of $B$ except the other vertex in its own part. Now we have all the edges between $A \setminus \{x'\}$ and $B \setminus \{y'\}$, the edge $x'y'$, all the edges from $y'$ to $B$ excluding the vertex in its own part, and we have all but possibly one edge from $x'$ to $B \setminus \{y'\}$, so $G \in \mathcal{F}_3$.
\end{proof}
Now for the rest of the proof we may suppose that $n\geq 3k$ (i.e. $m\geq 3$). We now use Theorem \ref{chv} to show that $H$, and therefore $G$, has a Hamiltonian cycle.
Suppose there exists $i\in [\frac{n}{2}]$ such that $d_H(v_i)\leq i$. By Claim \ref{AB}.\ref{BA}, we must have
\begin{equation}\label{i}
i\geq \delta(B, A)\geq \floor{\frac{n+2}{k+2}}.
\end{equation}
\noindent
\tbf{Case 1} ($\frac{n}{2}-i\leq |A\setminus S|$) By Claim \ref{AB}.\ref{ASB} we have $d_H(u_{\frac{n}{2}-i})\geq \delta(A\setminus S, B) \geq |A\setminus S|+1\geq \frac{n}{2}-i+1$.
\noindent
\tbf{Case 2} ($\frac{n}{2}-i\geq |A\setminus S|+1$)
\tbf{Case 2.1} ($k\geq 6$) Note that since $k\geq 6$, when $3k\leq n\leq 5k$, we have $\delta(G)\geq \frac{n}{2}-1$ and thus we are done by Claim \ref{n/2-1}. So for the remainder of this case, suppose $n\geq 6k$ (i.e. $m\geq 6$).
By Claim \ref{AB}.\ref{SB} $$d_H(u_{\frac{n}{2}-i})\geq \delta(S, B)\geq \frac{n}{2}+2\floor{\frac{n+2}{k+2}}-\frac{2n}{k}+1\geq \frac{n}{2}-\floor{\frac{n+2}{k+2}}+1\stackrel{\eqref{i}}{\geq} \frac{n}{2}-i+1,$$
where the third inequality holds by Fact \ref{ff}.\ref{ff2} since $m\geq 6$ and $k\geq 6$.
Thus the conditions of Theorem \ref{chv} are satisfied and therefore $H$ has a Hamiltonian cycle.
\tbf{Case 2.2} ($k=4$)
By Claim \ref{AB}.\ref{SB}, either the inequality is strict, and thus $$d_H(u_{\frac{n}{2}-i})\geq \delta(S, B)\geq \frac{n}{2}+2\floor{\frac{n+2}{k+2}}-\frac{2n}{k}+2\geq \frac{n}{2}-\floor{\frac{n+2}{k+2}}+1\stackrel{\eqref{i}}{\geq} \frac{n}{2}-i+1,$$
where the third inequality holds by Fact \ref{ff}.\ref{ff2} since $m\geq 3$ and $k\geq 4$, in which case we are done as in the previous case; or $\delta(S, B)= \frac{n}{2}+2\floor{\frac{n+2}{k+2}}-\frac{2n}{k}+1$ and, without loss of generality, $S_1=V_1$.
When $3k\leq n\leq 4k$, we have $\delta(G)\geq \frac{n}{2}-1$ and thus we are done by Claim \ref{n/2-1}. So for the remainder of this case, suppose $n\geq 5k$ (i.e. $m\geq 5$). Also note that
since $k=4$, \eqref{cf} reduces to $$\delta(G) \geq \frac{n}{4}+\floor{\frac{n+2}{6}}.$$
\begin{cla}\label{42}~
\begin{enumerate}[label=\emph{(\roman*)}]
\item \label{42i} $|S_2|\geq \floor{\frac{n+2}{6}}+1$.
\item \label{42ii} $\delta(S_1, B)\geq 2\floor{\frac{n+2}{6}}+1$.
\item \label{42iii} $\delta(S_2, B)\geq \delta(G)\geq \frac{n}{4}+\floor{\frac{n+2}{6}}$
\end{enumerate}
\end{cla}
\begin{proof}
\begin{enumerate}
\item Since $|S|\geq \delta(G)+1$, we have $$|S_2|=|S|-|S_1|\geq |S|-\frac{n}{4}\geq \delta(G)+1-\frac{n}{4}=\floor{\frac{n+2}{6}}+1.$$
\item Each vertex in $S_1$ has at most $|V_{2}\setminus S_{2}|$ neighbors in $V_{2}$ and thus by \ref{42i}, at least $\frac{n}{4}+\floor{\frac{n+2}{6}}-(\frac{n}{4}-|S_{2}|)\geq \floor{\frac{n+2}{6}}+\floor{\frac{n+2}{6}}+1=2\floor{\frac{n+2}{6}}+1$
neighbors in $B$.
\item Since $S_1=V_1$, the vertices in $S_2$ have no neighbors in $A$ and thus all of their neighbors are in $B$. \qedhere
\end{enumerate}
\end{proof}
We are in the case where $\frac{n}{2}-i\geq |A\setminus S|+1$, so if
\begin{equation}\label{i2}
\frac{n}{2}-i\leq \frac{n}{2}-\floor{\frac{n+2}{6}}-1,
\end{equation}
then by Claim \ref{42}.\ref{42ii} we have
$$d_H(u_{\frac{n}{2}-i})\geq \delta(S_1, B)\geq 2\floor{\frac{n+2}{6}}+1\geq \frac{n}{2}-\floor{\frac{n+2}{6}}\stackrel{\eqref{i2}}{\geq} \frac{n}{2}-i+1,$$
where the third inequality holds by Fact \ref{ff}.\ref{ff2} (in particular $3\floor{\frac{n+2}{6}}+1\geq 3(\frac{n+2-4}{6})+1=\frac{n}{2}$).
Otherwise together with \eqref{i}, we have
$\frac{n}{2}-i=\frac{n}{2}-\floor{\frac{n+2}{6}},$
so by Claim \ref{42}.\ref{42i},\ref{42iii} we have
$$d_H(u_{\frac{n}{2}-i})\geq \delta(S_2, B)\geq\frac{n}{4}+\floor{\frac{n+2}{6}}\geq \frac{n}{2}-\floor{\frac{n+2}{6}}+1= \frac{n}{2}-i+1,$$
where the third inequality holds by Fact \ref{ff}.\ref{ff1} since $m\geq 5$ and $k=4$.
This completes the proof of Theorem \ref{main} and Proposition \ref{prop:n=2k}. \qed
Optimal Graphs for Independence and $k$-Independence Polynomials (https://arxiv.org/abs/1710.03249)

\begin{abstract}
The independence polynomial $I(G,x)$ of a finite graph $G$ is the generating function for the sequence of the number of independent sets of each cardinality. We investigate whether, given a fixed number of vertices and edges, there exist optimally-least (optimally-greatest) graphs, that are least (respectively, greatest) for all non-negative $x$. Moreover, we broaden our scope to $k$-independence polynomials, which are generating functions for the $k$-clique-free subsets of vertices. For $k \geq 3$, the results can be quite different from the $k = 2$ (i.e. independence) case.
\end{abstract}

\section{Introduction}
Given a property of subsets of the vertex or edge set -- such as independent, complete, dominating for vertices and matching for edges -- one is often interested in maximizing or minimizing the size of the set in a given graph $G$. However, one can get a much more nuanced study of the subsets by studying the number of such sets of each cardinality in $G$, and in this guise, one often encapsulates the sequence by forming a generating function. Independence, clique, dominating and matching polynomials have all arisen and been studied in this setting.
In all cases, the generating polynomial $f(G,x)$ is naturally a function on the domain $[0,\infty)$.
If the number of vertices $n$ and edges $m$ are fixed, one can ask whether there exists an extremal graph. Let $\mathcal{S}_{n,m}$ denote the set of simple graphs of order $n$ ($n$ vertices) and size $m$ ($m$ edges). A graph $H\in \mathcal{S}_{n,m}$ is {\em optimally-greatest} (respectively, {\em optimally-least}) if $f(H,x) \geq f(G,x)$ (respectively, $f(H,x) \leq f(G,x)$) for all graphs $G\in \mathcal{S}_{n,m}$ and {\em all} $x \geq 0$ (for any particular value of $x \geq 0$ there is, of course, such a graph $H$, since the number of graphs of order $n$ and size $m$ is finite; we are interested in {\em uniformly} optimal graphs).
Such questions (related to simple substitutions of generating polynomials) have attracted considerable attention in the areas of network reliability \cite{ath,suffel,boesch91,browncox,gross,myrvold} and chromatic polynomials \cite{sakaloglu,chromatic}.
Here we consider optimality for independence polynomials
\[ I(G,x) = \sum_{j} i_{j}x^{j},\]
and a broad generalization, for fixed $k \geq 2$, to $k$-independence polynomials
\[ I_k(G,x) = \sum_{j} i_{k,j}x^{j},\]
where $i_{k,j}$ is the number of subsets of the vertex set of size $j$ that induce a $k$-clique-free subgraph (that is, the induced subgraph contains no complete subgraph of order $k$). Clearly independence polynomials are precisely the $2$-independence polynomials, while the $3$-independence polynomials are generating functions for the numbers of triangle-free induced subgraphs.
In this paper, we look at the optimality of independence polynomials and more generally $k$-independence polynomials.
In the case of the former, optimally-greatest graphs always exist for independence polynomials, but we do not know whether optimally-least graphs necessarily exist for independence polynomials as well (although we will prove for some $n$ and $m$ they do). In contrast, we shall show that for $k\geq 3$, optimality behaves quite differently, in that for some $n$ and $m$, optimally-least and optimally-greatest graphs for $k$-independence polynomials do not exist.
\section{Optimality for Independence Polynomials}
\subsection{Optimally-Greatest Graphs for Independence Polynomials}
Our first result shows that for independence polynomials, optimally-greatest graphs indeed do always exist.
\begin{theorem}
For all $n \geq 1$ and all $m \in \{0,\ldots,{n \choose 2}\}$, an optimally-greatest graph always exists.
\end{theorem}
\begin{proof}
A key observation is that a sufficient condition for $H$ to be optimally-greatest (or optimally-least) for independence polynomials is that $I(H,x) = \sum i_{j}(H) x^{j}$ is {\em coefficient-wise greatest}, that is, for all other graphs $G$ of the same order and size, the polynomial $I(G,x) = \sum i_{j}(G) x^{j}$ satisfies $i_{j}(H) \geq i_{j}(G)$ for all $j$ (respectively, {\em coefficient-wise least}: $i_{j}(H) \leq i_{j}(G)$ for all $j$). (The coefficient-wise condition is not, in general, necessary for optimality as, for example, $5x^{2}+x+5 \geq x^{2}+4x+1$ for $x \geq 0$, but clearly $5x^{2}+x+5$ is not coefficient-wise greater than or equal to $x^{2}+4x+1$.)
Consider the following graph construction. For a given $n$ and $m$, take a fixed linear order $\preceq$ of the vertices, $v_{n} \preceq v_{n-1} \preceq \cdots \preceq v_{1}$, and select the $m$ largest edges in lexicographic order. We will denote this graph as $G_{n,m,\preceq}$ (it is dependent on the linear order, but of course all such graphs are isomorphic). It was shown in \cite{cutler} that $G_{n,m,\preceq}$ is the graph of order $n$ and size $m$ with the greatest number of independent sets of size $j$, for {\em all} $j \geq 0$.
It follows that the independence polynomial of $G_{n,m,\preceq}$ is coefficient-wise greatest among all graphs of order $n$ and size $m$, and hence $G_{n,m,\preceq}$ is optimally-greatest.
\null\hfill$\Box$\par\medskip
\end{proof}
Moreover, we can compute the independence polynomial of this optimally-greatest graph. To do so, we need the following well-known recursion to compute the independence polynomial of a graph.
Let $G$ be a graph with $v\in V(G)$. Then
\begin{eqnarray}
I(G,x)=xI(G-N[v],x)+I(G-v,x) \label{ipoly}
\end{eqnarray}
where $N[v]$ is the closed neighbourhood of $v$ and $G-v$ is $G$ with $v$ removed.
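The recursion (\ref{ipoly}) is straightforward to implement. The Python sketch below is an illustration for experimentation only (it is not from the paper); it labels vertices by integers and returns the coefficient list of $I(G,x)$.

```python
from math import comb

def indep_poly(vertices, edges):
    """Coefficient list [i_0, i_1, ...] of I(G,x), computed with the
    recursion I(G,x) = x*I(G - N[v], x) + I(G - v, x)."""
    vertices = frozenset(vertices)
    edges = {e for e in (frozenset(e) for e in edges) if e <= vertices}
    if not edges:
        # Edgeless graph on t vertices: I(G,x) = (1+x)^t.
        t = len(vertices)
        return [comb(t, j) for j in range(t + 1)]
    v = min(min(e) for e in edges)          # any vertex of positive degree
    closed_nbhd = {v} | {u for e in edges if v in e for u in e}
    minus_v = indep_poly(vertices - {v}, edges)
    minus_nbhd = indep_poly(vertices - closed_nbhd, edges)
    out = [0] * max(len(minus_v), len(minus_nbhd) + 1)
    for j, c in enumerate(minus_v):
        out[j] += c
    for j, c in enumerate(minus_nbhd):      # the factor x shifts degrees by 1
        out[j + 1] += c
    return out
```

For example, `indep_poly(range(3), [(0, 1), (1, 2)])` gives the coefficients of $I(P_{3},x) = 1+3x+x^{2}$.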
For the graph $G_{n,m,\preceq}$ with $m < {{n} \choose {2}}$, write $m=(n-\ell)+(n-\ell+1)+\cdots+(n-1)+j$ with $j \in \{0,\ldots,n-\ell-2\}$. Since the edges are added in lexicographic order, there will be $\ell$ vertices of degree $n-1$ and one vertex of degree $j+\ell$; the remaining $n-\ell-1$ vertices form an independent set. Let $v$ be the vertex of degree $j+\ell$.
Using vertex $v$ and recursion (\ref{ipoly}), we obtain
\begin{eqnarray*}
I(G_{n,m,\preceq},x)& = &I(G_{n,m,\preceq}-v,x)+xI(G_{n,m,\preceq}-N[v],x).
\end{eqnarray*}
Now $G_{n,m,\preceq}-v$ is the join of $K_{\ell}$ with $\overline{K_{n-\ell-1}}$, so a nonempty independent set lies entirely in one of the two parts, while $G_{n,m,\preceq}-N[v] = \overline{K_{n-\ell-j-1}}$. Hence
\begin{eqnarray*}
I(G_{n,m,\preceq},x)& = & \ell x + (1+x)^{n-\ell-1} + x(1+x)^{n-\ell-j-1}\\
& = & \ell x + (1+x)^{n-\ell-j-1} \left( (1+x)^{j} + x\right).
\end{eqnarray*}
Of course, for $m = {n \choose 2}$, $G_{n,m,\preceq} = K_{n}$ and so $I(G_{n,m,\preceq},x) = 1+nx$.
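The extremal property of $G_{n,m,\preceq}$ can also be sanity-checked by exhaustive search for small $n$. The Python sketch below is illustrative only; it labels vertices $0,\ldots,n-1$, so that the $m$ lexicographically first pairs play the role of the $m$ largest edges in the ordering above, and it compares the resulting graph coefficient-wise against every graph of the same order and size.

```python
from itertools import combinations

def indep_counts(n, edges):
    """i_j(G) for j = 0..n, by direct enumeration (small n only)."""
    edge_set = {frozenset(e) for e in edges}
    counts = [0] * (n + 1)
    for j in range(n + 1):
        for subset in combinations(range(n), j):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                counts[j] += 1
    return counts

def lex_graph(n, m):
    """The first m edges in lexicographic order on pairs from 0..n-1."""
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return pairs[:m]

def lex_is_coefficientwise_greatest(n, m):
    """Exhaustively compare the lex graph against every graph of the
    same order and size (feasible only for very small n)."""
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best = indep_counts(n, lex_graph(n, m))
    return all(
        all(b >= c for b, c in zip(best, indep_counts(n, edges)))
        for edges in combinations(pairs, m)
    )
```

Exhaustive search of course only confirms the theorem for the small values of $n$ tried; the proof above is what establishes it in general.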
\vspace{0.25in}
It is instructive that this result can also be proved purely algebraically, and we devote the remainder of this section to doing so. (We refer the reader to \cite{brownbook}
for an introduction to complexes and their connection to commutative algebra.)
The {\em independence complex} $\Delta_{2}(G)$ of a graph $G$ of order $n$ and size $m$ has as its faces the independent sets of $G$ (these are obviously closed under containment, and hence form a complex). The {\em $f$-vector} of $\Delta_{2}(G)$ is $(1,n,{{n} \choose {2}} - m,f_{3},\ldots,f_{\beta})$, where $f_{i}$ is the number of faces of cardinality $i$ in the complex (and $\beta$ is the independence number of $G$, which is one more than the dimension of the complex). We will show that we can maximize the independence polynomial on $[0,\infty)$ by maximizing ({\em simultaneously}, for some graph $G$) all of the $f_{i}$'s, via an excursion into commutative algebra.
We begin with some definitions. Fix a field ${\mathbf k}$. Let $A$ be a {\em graded ${\mathbf k}$-algebra}, that is, $A$ is a commutative ring containing ${\mathbf k}$ as a subring that can be written as a vector space direct sum $\displaystyle{A = \bigoplus_{d\geq 0} A_{d}}$ over ${\mathbf k}$, with the property that $A_{i}A_{j} \subseteq A_{i+j}$ for all $i$ and $j$ (we call the elements of each $A_{d}$ {\em homogeneous}, and $A_{d}$ is called the {\em $d$-th graded component} of $A$). The graded ${\mathbf k}$-algebra $A$ is {\em standard} if it is generated (as a ring) by a finite set of elements of $A_{1}$. Our prototypical example of a standard graded ${\mathbf k}$-algebra is the polynomial ring ${\mathbf k}[x_{1},x_{2},\ldots,x_{n}]$ in variables $x_{1},x_{2},\ldots,x_{n}$. Note that for any standard graded ${\mathbf k}$-algebra $\displaystyle{A = \bigoplus_{d\geq 0} A_{d}}$ that is a quotient of a polynomial ring by a homogeneous ideal, a ${\mathbf k}$-basis for $A_{d}$ is given by the monomials in $A_{d}$.
The {\em Stanley-Reisner complex} of an ideal $I$ of a standard graded ${\mathbf k}$-algebra $A$ (generated by $x_{1},x_{2},\ldots,x_{n}$ in $A_{1}$) is the (simplicial) complex whose faces are the square-free monomials in $x_{1},x_{2},\ldots,x_{n}$ not in $I$ (as $I$ is an ideal, this set of monomials is closed under divisibility, so the corresponding sets of variables are closed under containment).
Let $I$ be an ideal of the ${\mathbf k}$-algebra $A$; $I$ is {\em homogeneous} if it is generated by homogeneous elements of $A$. We write $\displaystyle{I = \bigoplus_{d\geq 0} I_{d}}$, where $I_{d} = A_{d} \cap I$ is the {\em $d$-th graded component of $I$} (it is a ${\mathbf k}$-subspace of $I$). For a homogeneous ideal $I$ of $A$, a square-free monomial $M$ of degree $d$ in $x_{1},\ldots,x_{n}$ belongs to exactly one of $I_{d}$ and the Stanley-Reisner complex of $I$ (where we identify a face of the complex with the product of its elements). As the total number of square-free monomials of degree $d$ is fixed, we see that maximizing the number of faces of size $d$ in the Stanley-Reisner complex of $I$ corresponds to minimizing the number of square-free monomials of degree $d$ in $I$.
The {\em Hilbert function} of the homogeneous ideal $I$ is the function $H_{I}:{\mathbb N} \rightarrow {\mathbb N}$, where $H_{I}(d) = \mbox{dim}_{\mathbf k}I_d$. We call $I$ {\em Gotzmann} (see \cite{hoefelpaper}) if for all other homogeneous ideals $J$ of $A$ and all $d \geq 0$, if $H_{I}(d)=H_{J}(d)$ then $H_{I}(d+1)\leq H_{J}(d+1)$.
Thus a Gotzmann ideal has a Hilbert function that grows as slowly as possible.
We will now focus in on a standard graded ${\mathbf k}$-algebra related to independence in graphs. Fix $n \geq 1$. The {\em Kruskal-Katona ring}, $Q = {\mathbf k}[x_{1},\ldots,x_{n}]/\langle x_{1}^{2},\ldots,x_{n}^{2}\rangle$, is generated by the square-free monomials; it is clearly a standard graded ${\mathbf k}$-algebra. Let $G$ be a graph on vertices $\{x_1,x_2,\ldots, x_n\}$. The {\em edge ideal} $I_{G}$ is the ideal of $Q$ generated by $\{x_ix_j \mid x_ix_j \in E(G) \}$. If a (square-free) monomial of $Q$ is not in $I_G$, then the corresponding set of vertices cannot contain an edge of $G$, so it is an independent set (and vice versa). This means that the Stanley-Reisner complex of the edge ideal $I_{G}$ in the Kruskal-Katona ring is precisely the independence complex $\Delta_{2}(G)$ of the graph $G$. If we can show that the edge ideal $I_{G}$ is Gotzmann in $Q$, then for each $d \geq 0$, $I_{G}$ contains the fewest monomials of degree $d$ among all such edge ideals. Hence, by a previous observation, the $f$-vector of the independence complex of $G$ has the largest entries component-wise compared to any other graph of order $n$ and size $m$. Thus our graph has an independence polynomial that is optimally-greatest.
Let $I$ be an ideal in a standard graded ${\mathbf k}$-algebra $A$, generated by $x_{1},\ldots,x_{n} \in A_{1}$. Then $I$ is a { \em lexicographic ideal} if for any monomials $u$ and $v$ in $x_{1},\ldots,x_{n}$, whenever $v \in I$ and $u$ is lexicographically bigger than $v$, we have $u \in I$ as well.
It is known that lexicographic ideals are Gotzmann in the Kruskal-Katona ring \cite{macaulaylex}. As the edges of our family of graphs $G_{n,m,\preceq}$ are added in lexicographic order, the edge ideal $I_{G_{n,m,\preceq}}$ in $Q$ is lexicographic, and hence Gotzmann. Therefore the $f$-vector is maximized, given $f_{0}$, $f_{1}$ and $f_{2}$, that is, given $n$ and $m$, and so $G_{n,m,\preceq}$ is optimally-greatest.
\vspace{0.25in}
\subsection{Optimally-Least Graphs for Independence Polynomials}
We now turn our attention to the existence of optimally-least graphs for the independence polynomial, and we find here that we can only prove the existence of optimally-least graphs for certain values of $m = m(n)$ (and we do not know if there are values of $n$ and $m$ for which optimally-least graphs do not exist).
We begin with dense graphs. Observe that a graph $G$ of order $n$ and size $m$ has ${n \choose 2}-m$ independent sets of cardinality $2$, so its independence polynomial is, for $x \geq 0$, at least
\[ 1+nx+\left( {n \choose 2}-m \right)x^2,\]
and by a previous observation, if there is a graph whose independence polynomial equals this bound, then it is optimally-least.
Turán's famous theorem states (see, for example, \cite{thebook}) that the maximum number of edges of a graph of order $n$ with no triangles is $\lceil \frac{n}{2} \rceil \lfloor \frac{n}{2} \rfloor = \lfloor \frac{n^{2}}{4} \rfloor$, with equality iff $G$ is a complete bipartite graph with sides of equal or nearly equal cardinality. It follows, by taking complements, that provided
\[ m \geq {{n} \choose {2}} - \Bigl \lfloor \frac{n^{2}}{4} \Bigr \rfloor \]
then the graph formed by adding any $m - ({{n} \choose {2}} - \lfloor \frac{n^{2}}{4} \rfloor)$ edges to the complement of $K_{ \lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor}$, namely $\overline{K_{ \lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor}} ~=~ K_{\lceil \frac{n}{2} \rceil} \cup K_{\lfloor \frac{n}{2} \rfloor}$, is an optimally-least graph. This gives the following result:
\begin{theorem}
For a given $n\geq 2$ and $m\geq {{n} \choose {2}} - \lfloor \frac{n^{2}}{4} \rfloor$, the graph with the optimally-least independence polynomial is formed from $K_{\lceil \frac{n}{2} \rceil} \cup K_{\lfloor \frac{n}{2} \rfloor}$ by adding in any $m - ({{n} \choose {2}} - \lfloor \frac{n^{2}}{4} \rfloor)$ edges. The independence polynomial of such a graph is
\[ 1+nx+\left( {n \choose 2}-m \right)x^2.\]
\null\hfill$\Box$\par\medskip
\end{theorem}
We can extend this result by utilizing a result of Lov\'{a}sz and Simonovits \cite{lovsim}, which, answering a conjecture of Erd\H{o}s, showed that for $1 \leq k < n/2$, the ($K_{4}$-free) graph of order $n$ and size $\lfloor n^{2}/4 \rfloor + k$ with the fewest triangles is the graph formed from $K_{ \lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor}$ by adding $k$ edges to a largest cell so that no triangle is formed within the cell. (In such a case, the number of triangles formed in the graph is $k \lfloor n/2 \rfloor$.) As well, a theorem of Fisher and Solow \cite{fishersolow} states that for $a \geq b \geq 1$, the least number of triangles in a $K_{4}$-free graph of order $n=2a+b$ and size $m=2ab+a^{2}$ occurs in $K_{a,a,b}$. By taking complements, we derive:
\begin{theorem}
For a given $n\geq 2$ and $m = {{n} \choose {2}} - \lceil \frac{n}{2} \rceil \lfloor \frac{n}{2} \rfloor - k$, where $1 \leq k \leq \lfloor n/2 \rfloor$, the graph with the optimally-least independence polynomial is formed from $K_{\lceil \frac{n}{2} \rceil} \cup K_{\lfloor \frac{n}{2} \rfloor}$ by deleting any $k$ edges in $K_{\lceil \frac{n}{2} \rceil}$ so that no independent set of size $3$ is formed in that part. The independence polynomial of such a graph is
\[ 1+nx+\left( {n \choose 2}-m \right)x^2+ k \lfloor n/2 \rfloor x^{3}.\]
As well, for any $a \geq b \geq 1$, the graph with the optimally-least independence polynomial with order $n=2a+b$ and size $m = a(a-1)+b(b-1)/2$ is $2K_{a} \cup K_{b}$, which has independence polynomial
\[ (1+ax)^{2}(1+bx) \ = \ 1+nx+\left( {n \choose 2}-m \right)x^2+ a^{2}b\,x^3.\] \null\hfill$\Box$\par\medskip
\end{theorem}
To find sparse families of optimally-least graphs, we will look at a graph operation that never increases the value of the independence polynomial on $[0,\infty)$.
Let $H_1$ be a graph which consists of an induced subgraph $G_1$, containing an edge $e = vw$, and two other vertices $y$ and $z$ that are isolated, and let $H_2$ be the graph formed from $H_1$ by removing edge $e$ and adding in an edge between $y$ and $z$ (see Figure~\ref{polyinc} -- we set $G_{2} = G_{1} - e$). Clearly $G_1$ and $G_2$ have the same number of vertices and edges.
We will show that this removal of an edge to form a $K_{2}$ can never increase the independence polynomial (on $[0,\infty)$).
\begin{figure}[ht]
\centering
\includegraphics[width=4in]{polyinc.eps}
\caption{Graphs for Lemma~\ref{lelmlem}. $H_1$ is the graph $G_1 \cup \{y,z\}$ (on the left) and $H_2$ is the graph $G_2 \cup \{y,z\}$ (on the right).}
\label{polyinc}
\end{figure}
\begin{lemma}
\label{lelmlem}
For the graphs in Figure~\ref{polyinc}, we have $I(H_2,x)\leq I(H_1,x)$ on $[0,\infty)$.
\end{lemma}
\begin{proof}
Note that by the recursion (\ref{ipoly}) for independence polynomials, we have
\begin{eqnarray*}
I(H_1,x)& = & (1+2x+x^2)I(G_1,x)\\
& = & (1+2x+x^2)(I(G_1-v,x)+xI(G_1-N[v],x))\\
&=& (1+2x+x^2)(I(G_2-v,x)+xI(G_2-N[v]-w,x))
\end{eqnarray*}
and
\begin{eqnarray*}
I(H_2,x)& = &(1+2x)(I(G_2-v,x)+xI(G_2-N[v],x)).
\end{eqnarray*}
We also find that
\begin{eqnarray}\label{inequal} I(G_2-N[v],x)\leq (1+x)I(G_2-N[v]-w,x) \end{eqnarray} since $G_2-N[v]$ is a subgraph of $(G_2-N[v]-w) \cup K_{1}$.
Consider $F(x)=I(H_1,x)-I(H_2,x)$. Using our expressions for $I(H_{1},x)$ and $I(H_{2},x)$ and the inequality (\ref{inequal}), we see that
\begin{eqnarray*}
F(x) & = & (1+2x)I(G_2-v,x)+x^2I(G_2-v,x)+\\
&& x(1+2x+x^2)I(G_2-N[v]-w,x)-(1+2x)I(G_2-v,x)\\
&&-x(1+2x)I(G_2-N[v],x)\\
&=& x^2I(G_2-v,x)+x(1+x)I(G_2-N[v]-w,x)\\
&&+x^2(1+x)I(G_2-N[v]-w,x)-x(1+2x)I(G_2-N[v],x)\\
&\geq & x^2I(G_2-v,x)+xI(G_2-N[v],x)\\
&&+x^2I(G_2-N[v],x)-xI(G_2-N[v],x)-2x^2I(G_2-N[v],x)\\
&=& x^2I(G_2-v,x)-x^2I(G_2-N[v],x).
\end{eqnarray*}
Since every independent set of $G_2-N[v]$ is an independent set of $G_2-v$, we have $I(G_2-v,x)\geq I(G_2-N[v],x)$ for $x \geq 0$. It follows that $F(x)\geq 0$ on $[0,\infty)$, so $I(H_1,x)\geq I(H_2,x)$.
\null\hfill$\Box$\par\medskip
\end{proof}
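Lemma~\ref{lelmlem} can also be sanity-checked numerically. The Python sketch below is illustrative only; it labels the vertices of $G_1$ by $0,\ldots,n-1$, takes $y,z$ to be two new vertices $n$ and $n+1$, and compares $I(H_1,x)$ and $I(H_2,x)$ at several rational points of $[0,\infty)$ using exact arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def indep_counts(n, edges):
    """i_j(G) for j = 0..n by direct enumeration (small n only)."""
    edge_set = {frozenset(e) for e in edges}
    counts = [0] * (n + 1)
    for j in range(n + 1):
        for subset in combinations(range(n), j):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                counts[j] += 1
    return counts

def poly_at(counts, x):
    """Evaluate a polynomial given by its coefficient list."""
    return sum(c * x ** j for j, c in enumerate(counts))

def lemma_holds(n, edges, e):
    """Compare H1 = G1 with two extra isolated vertices against
    H2 = (G1 - e) with an extra K2, at several rational x >= 0."""
    edge_set = {frozenset(p) for p in edges}
    e = frozenset(e)
    assert e in edge_set
    h1 = edge_set                                   # vertices n, n+1 isolated
    h2 = (edge_set - {e}) | {frozenset((n, n + 1))}
    xs = (Fraction(0), Fraction(3, 10), Fraction(1),
          Fraction(5, 2), Fraction(10))
    c1, c2 = indep_counts(n + 2, h1), indep_counts(n + 2, h2)
    return all(poly_at(c2, x) <= poly_at(c1, x) for x in xs)
```

Sampling finitely many points cannot, of course, prove the inequality on all of $[0,\infty)$; the lemma itself is needed for that.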
It follows that if $m\leq n/2$ then, by repeatedly applying Lemma~\ref{lelmlem} to pull out $K_2$'s, we derive:
\begin{theorem}
\label{lessthanhalf}
For a given $n\geq 2$ and $m \leq \frac{n}{2} $, the optimally-least graph for the independence polynomial for $x \geq 0$ is $mK_2 \cup (n-2m)K_1 $. \null\hfill$\Box$\par\medskip
\end{theorem}
\section{Optimality for $k$-Independence Polynomials}
We now look at the optimality of $k$-independence polynomials and find that the situation is quite different for $k \geq 3$ than for $k = 2$. We will show that, in contrast to $k=2$, for all $k \geq 3$ there {\em do not} always exist optimally-greatest or optimally-least graphs for the $k$-independence polynomial.
Before we do so, we make the following observation, which shows that the nonexistence of optimal graphs can sometimes be derived by considering only certain coefficients of the polynomials.
\begin{observation}
\label{max}
Suppose that $G$ and $H$ are graphs on $n$ vertices and $m$ edges, and let $k\geq 2$, with
\[ {\rm I}_{k}(G,x)=\sum_{j=0}^ni_j(G)x^{j} \] and
\[ {\rm I}_{k}(H,x)=\sum_{j=0}^ni_j(H)x^{j}. \]
Then
\begin{itemize}
\item if $i_j(G)=i_j(H)$ for $j<\ell$ but $i_{\ell}(G)>i_{\ell}(H)$, then ${\rm I}_{k}(G,x)>{\rm I}_{k}(H,x)$ for all sufficiently small $x>0$, and
\item if $i_j(G)=i_j(H)$ for $j>t$ but $i_{t}(G)>i_{t}(H)$, then ${\rm I}_{k}(G,x)>{\rm I}_{k}(H,x)$ for all sufficiently large $x$.
\end{itemize}
\end{observation}
Note that $i_{j}(G) = {n \choose j}$ for $j < k$. For a graph $G$, let $r_{G} = r^{k}_{G}$ denote the largest value of $j$ such that there exists an induced subgraph of $G$ of order $j$ that does not contain a $k$-clique, that is, $r_{G}$ is the largest value of $j$ such that $i_j(G) > 0$.
Thus to show that for $k$-independence polynomials ($k \geq 3$) optimally-greatest graphs need not exist, we will show that for some $n$ and $m = m(n)$, there is a unique graph $G \in \mathcal{S}_{n,m}$, the class of all graphs of order $n$ and size $m$, with the largest value of $r_{G}$; such a $G$ is then the unique candidate for being optimally-greatest, by its behaviour for $x$ sufficiently large. We then exhibit another graph $H \in \mathcal{S}_{n,m}$ with more $k$-independent sets of size $k$ than $G$, so that ${\rm I}_{k}(H,x) > {\rm I}_{k}(G,x)$ for all sufficiently small $x > 0$.
\begin{theorem}
For $k \geq 3$ and $l \geq 2$, and any $n > (k-1)l(l-1)$, there does not exist an optimally-greatest graph for the $k$-independence polynomial of order $n$ and size $m = {n \choose 2}-(k-1){l \choose 2}$.
\end{theorem}
\begin{proof}
We recall the well-known theorem of Turán (see \cite{bollobas2}, for example) which states that the unique graph of order $n$ with the maximum number of edges and no $k$-clique is the complete $(k-1)$-partite graph $T_{n,k}$ with cells of order $\lfloor n/(k-1) \rfloor$ or $\lceil n/(k-1) \rceil$. For fixed $n > (k-1)l(l-1)$, we set $m = {n \choose 2}-(k-1){l \choose 2}$ and consider the class $\mathcal{S}_{n,m}$ of graphs of order $n$ and size $m$. We define the graph $G = G(n,m)$ as the join $T_{(k-1)l,k} + K_{n-(k-1)l}$ of the Turán graph $T_{(k-1)l,k}$ with $K_{n-(k-1)l}$, that is, $G$ is formed from the disjoint union of $T_{(k-1)l,k}$ and $K_{n-(k-1)l}$ by adding in all edges between them (equivalently, the complement $\overline{G}$ consists of $k-1$ cliques of order $l$, together with $n-(k-1)l$ isolated vertices). A quick calculation shows that $G$ has $m$ edges, and hence belongs to $\mathcal{S}_{n,m}$.
We claim first that among all graphs $F^{\prime}$ in $\mathcal{S}_{n,m}$, $G$ has the largest value of $r_{F^{\prime}}$. Note that $r_{G} \geq (k-1)l$ as the Turán graph $T_{(k-1)l,k}$ has no $k$-clique. Now if there is a graph $F$ in $\mathcal{S}_{n,m}$ with $r_{F} > (k-1)l$, then $F$ has an induced subgraph $S$ on say $s > (k-1)l$ vertices with no $k$-clique, and hence $S$ has at most as many edges as $T_{s,k}$. But then $\overline{F}$ has at least as many edges as $\overline{T_{s,k}}$, which is strictly more than the number of edges in $\overline{T_{(k-1)l,k}}$ (to see this, think of the complements of Turán graphs as disjoint unions of cliques, and observe that for $t > (k-1)l$, one can form $\overline{T_{t,k}}$ from $\overline{T_{(k-1)l,k}}$
by successively adding vertices to the cliques to keep them as nearly equal as possible). Thus $\overline{F}$ would contain more edges than $\overline{G}$, the disjoint union of $\overline{T_{(k-1)l,k}}$ and isolated vertices, and hence $F$ would have fewer edges than $G$, a contradiction as both $F$ and $G$ have the same number of edges. We conclude no such $F$ exists, so $r_{G} = (k-1)l$ and $G$ has the maximal $r_{F^{\prime}}$-value. Moreover, by the argument given, if $G^{\prime}$ were a graph in $\mathcal{S}_{n,m}$ with $r_{G^{\prime}} = r_{G}$, then if $S$ is any induced subgraph of $G^{\prime}$ of order $r_{G}=(k-1)l$ that has no $k$-clique, $\overline{S}$ must have precisely as many edges as $\overline{T_{(k-1)l,k}}$, which is the number of edges of $\overline{G}$. We conclude from Turán's theorem that $G^\prime$ is isomorphic to $G$, which is therefore the unique graph in $\mathcal{S}_{n,m}$ with the largest $r_{F^{\prime}}$-value, and hence the unique graph that is optimally-greatest for the $k$-independence polynomial for $x$ sufficiently large.
So if there is an optimally-greatest graph for the $k$-independence polynomial in $\mathcal{S}_{n,m}$, it must be $G$. We note that if a graph of order $n$ and size $m$ has the minimum number of $k$-cliques, then it has the maximum number of $k$-independent sets of size $k$. We will now show that there is another graph $H \in \mathcal{S}_{n,m}$ with fewer $k$-cliques than $G$, and hence more $k$-independent sets of size $k$, so that $G$ cannot be optimally-greatest for the $k$-independence polynomial as $I_{k}(H,x) > I_{k}(G,x)$ for $x> 0 $ sufficiently small.
Let $H$ be the graph of order $n$ such that $\overline{H}$ is the disjoint union of $(k-1){l \choose 2}$ copies of $K_{2}$ and isolated vertices (as $n > (k-1)l(l-1) = 2(k-1){l \choose 2}$, such a graph of order $n$ exists). We can think of $\overline{H}$ as splitting the edges of the $l$-cliques of $\overline{G}$ into a disjoint union of single edges.
Now it is easy to see that if graphs $G_{1}$ and $G_{2}$ have $k_{1,i}$ and $k_{2,i}$ cliques of order $i$, respectively, then for any positive integer $t$, the number of cliques of order $t$ in the join $G_{1}+G_{2}$ is
\[ \sum_{i+j = t} k_{1,i}k_{2,j}.\]
It follows that it suffices to show that the graph
$G_{1} = \overline{K_{l}\cup l(l-2)K_{1}}$
has, for all $i$, at least as many $i$-cliques as the graph $G_{2} = \overline{{l \choose 2}K_{2}}$, and that for $i \geq 3$ the former has strictly more $i$-cliques than the latter. For $i = 1 $ and $i = 2$ they have the same number of $i$-cliques (as they have the same number of vertices and edges). For $i \geq 2$, the number of $i$-cliques in $G_{1}$ is
\[ {l(l-2) \choose i} + l {l(l-2) \choose {i-1}}\]
while $G_{2}$ has
\[ {{l \choose 2} \choose i}2^{i}\]
many $i$-cliques.
We set
\[ f_{l,i} = \frac{{l(l-2) \choose i} + l {l(l-2) \choose {i-1}}}{{{l \choose 2} \choose i}2^{i}}.\]
Clearly $f_{l,2} = 1$ as both $G_{1}$ and $G_{2}$ have the same number of edges. We will show that the sequence $(f_{l,i})_{i \geq 2}$ is strictly increasing, and hence that for $i \geq 3$ it is always greater than $1$, so that $G_{1}$ has strictly more $i$-cliques than $G_{2}$, concluding the proof.
Now via some straightforward but tedious calculations, we find that for $i \geq 2$,
\[ \frac{f_{l,i+1}}{f_{l,i}} = \frac{(l(l-2)-i+l(i+1))(l(l-2)-i+1)}{(l(l-1)-2i)(l(l-2)-i+1+li)}.\]
It follows that
\[ \frac{f_{l,i+1}}{f_{l,i}} > 1 \]
iff
\begin{eqnarray*}
(l(l-2)-i+l(i+1))(l(l-2)-i+1) & > & (l(l-1)-2i)(l(l-2)-i+1+li),
\end{eqnarray*} which holds iff
\[ i(l-1)(i-1) > 0.\]
The latter is true as $i, l \geq 2$.
Thus we conclude that $f_{l,i} > 1$ for all $i \geq 3$. Hence $H$ has fewer $k$-cliques than $G$, and so more $k$-independent sets of size $k$, and an optimally-greatest graph for the $k$-independence polynomial does not exist.
\null\hfill$\Box$\par\medskip
\end{proof}
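The counting claims in the proof above can be sanity-checked numerically. The Python sketch below is illustrative only; it computes the $i$-clique counts of $G_{1}$ and $G_{2}$ from the closed-form expressions and compares the ratios $f_{l,i}$ in exact rational arithmetic.

```python
from fractions import Fraction
from math import comb

def cliques_g1(l, i):
    """Number of i-cliques in the complement of K_l U l(l-2)K_1."""
    return comb(l * (l - 2), i) + l * comb(l * (l - 2), i - 1)

def cliques_g2(l, i):
    """Number of i-cliques in the complement of (l choose 2) disjoint K_2's."""
    return comb(comb(l, 2), i) * 2 ** i

def f(l, i):
    """The ratio f_{l,i} from the proof, as an exact rational."""
    return Fraction(cliques_g1(l, i), cliques_g2(l, i))
```

For instance $f(3,2) = 1$ while $f(3,3) = 10/8 > 1$, matching the strict surplus of $3$-cliques of $G_{1}$ over $G_{2}$ when $l = 3$.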
\vspace{0.25in}
We turn finally to the issue of optimally-least $k$-independence polynomials.
From Observation~\ref{max}, if an optimally-least graph for the $k$-independence polynomial exists, then it must have the least number of $k$-independent sets of order $k$ (equivalently, the maximum number of $k$-cliques), since a graph with the latter property is least for all sufficiently small $x\geq 0$. In \cite{bollobas2} it was shown that for a graph on $n$ vertices and $m={d \choose 2}+r$ edges ($0 \leq r < d$) and for $k \geq 3$, the maximum number of cliques of size $k$ is ${d \choose k}+{r \choose k-1}$, and a graph which achieves this bound consists of a $K_d$, a vertex $x$ adjacent to $r$ vertices of the $K_d$, and $n-d-1$ isolated vertices (see Figure~\ref{optkleastnear0}). (This graph is not the unique extremal graph for values of $r<k-1$, since the addition of fewer than $k-1$ edges will not produce another $k$-clique, but for values of $r\geq k-1$ it is the unique extremal graph, since to obtain the maximum number of $k$-cliques the added edges must all be incident to the same vertex.) Such graphs are candidates for optimally-least graphs for the $k$-independence polynomial, but we will show that for $k\geq 3$, optimally-least graphs for the $k$-independence polynomial do not always exist.
\begin{figure}[ht]
\centering
\includegraphics[width=1.8in]{optkleastnear0.eps}
\caption{A graph that is optimally-least for the $k$-independence polynomial near $0$.}
\label{optkleastnear0}
\end{figure}
\begin{theorem}
For $k\geq 3$, $n \geq {k\choose 2} + k + 1$ and $m={{k\choose 2}+2\choose 2}-1$, optimally-least graphs for the $k$-independence polynomial do not exist.
\end{theorem}
\begin{proof}
Let $k\geq 3$. Let $G=\left( K_{{k\choose 2}+2}-e \right) \cup (n-{k \choose 2}-2)K_1$, for some edge $e$ of $K_{{k\choose 2}+2}$, and let $H=K_{{k\choose 2}+1}\cup K_k\cup (n-{k \choose 2}-1-k)K_1$; a short calculation shows that both graphs have $m$ edges. By the result of \cite{bollobas2} quoted above (here $d = {k\choose 2}+1$ and $r = {k\choose 2} \geq k-1$), $G$ is the unique graph in $\mathcal{S}_{n,m}$ with the maximum number of $k$-cliques, and hence the unique graph that is optimally-least for the $k$-independence polynomial when $x$ is sufficiently close to $0$. We will now show that ${\rm I}_{k}(H,x) < {\rm I}_{k}(G,x)$ for all sufficiently large $x$.
In $G$, the largest order of an induced subgraph containing no $K_k$ is $n-{k \choose 2}+k-2$, obtained by taking the $n-{k\choose 2}-2$ isolated vertices, both end points of the edge $e$, and $k-2$ further vertices of $K_{{k\choose 2}+2}$. In $H$, the largest order of an induced subgraph containing no $K_k$ is $n-{k \choose 2}+k-3$, obtained by taking the $n-{k \choose 2}-1-k$ isolated vertices and $k-1$ vertices from each complete graph. Thus $r_{G} > r_{H}$, so by Observation~\ref{max}, ${\rm I}_{k}(H,x) < {\rm I}_{k}(G,x)$ for all sufficiently large $x$, and $G$ is not optimally-least. Since $G$ was the only candidate, no optimally-least graph exists for the $k$-independence polynomial for these $n$ and $m$.
\null\hfill$\Box$\par\medskip
\end{proof}
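The smallest case of the theorem, $k=3$ (so $m=9$ and, taking $n=7$, $G = (K_5 - e) \cup 2K_1$ and $H = K_4 \cup K_3$), can be checked by brute force. The Python sketch below is illustrative only; it labels vertices $0,\ldots,6$ and computes the coefficients $i_{3,j}$ directly from the definition.

```python
from itertools import combinations

def k_indep_counts(n, edges, k):
    """i_{k,j}(G): the number of j-subsets of vertices inducing a
    k-clique-free subgraph (brute force, small n only)."""
    edge_set = {frozenset(e) for e in edges}
    counts = [0] * (n + 1)
    for j in range(n + 1):
        for subset in combinations(range(n), j):
            if not any(all(frozenset(p) in edge_set
                           for p in combinations(cand, 2))
                       for cand in combinations(subset, k)):
                counts[j] += 1
    return counts

def clique_edges(sizes):
    """Edge set of a disjoint union of cliques of the given sizes."""
    edges, start = set(), 0
    for s in sizes:
        edges |= {frozenset(p)
                  for p in combinations(range(start, start + s), 2)}
        start += s
    return edges

# Smallest case k = 3 with n = 7 and m = 9.
G = clique_edges([5]) - {frozenset((0, 1))}     # K_5 - e, plus 2 isolated
H = clique_edges([4, 3])                        # K_4 U K_3
```

Here $G$ has $7$ triangles and $H$ only $5$, so $G$ has fewer triangle-free triples (winning near $0$), while the largest triangle-free induced subgraphs have orders $5$ and $4$ respectively (so $H$ wins for large $x$).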
\section{Conclusion}
While we have seen that optimally-greatest graphs exist for independence polynomials, we have only been able to prove the existence of optimally-least graphs for a restricted collection of $n$ and $m$. Our belief is that such graphs always exist, but there does not seem to be any reasonable family to put forward as extremal.
In terms of optimality for $k$-independence polynomials, we have seen that for all $k \geq 3$, there are infinitely many values of $n$ and $m$ such that optimally-greatest graphs do not exist, and similarly for optimally-least graphs.
A full characterization of when optimal graphs for the $k$-independence polynomial exist (for $k\geq 3$, and even for optimally-least for $k=2$) remains open.
\vspace{0.25in}
\noindent {\bf Acknowledgements}
\vspace{0.1in}
\noindent J.I. Brown acknowledges support from NSERC (grant application RGPIN 170450-2013). D. Cox acknowledges research support from NSERC (grant application RGPIN 2017-04401) and Mount Saint Vincent University.
https://arxiv.org/abs/1811.06493 | Intermediate dimensions | We introduce a continuum of dimensions which are `intermediate' between the familiar Hausdorff and box dimensions. This is done by restricting the families of allowable covers in the definition of Hausdorff dimension by insisting that $|U| \leq |V|^\theta$ for all sets $U, V$ used in a particular cover, where $\theta \in [0,1]$ is a parameter. Thus, when $\theta=1$ only covers using sets of the same size are allowable, and we recover the box dimensions, and when $\theta=0$ there are no restrictions, and we recover Hausdorff dimension.We investigate many properties of the intermediate dimension (as a function of $\theta$), including proving that it is continuous on $(0,1]$ but not necessarily continuous at $0$, as well as establishing appropriate analogues of the mass distribution principle, Frostman's lemma, and the dimension formulae for products. We also compute, or estimate, the intermediate dimensions of some familiar sets, including sequences formed by negative powers of integers, and Bedford-McMullen carpets. | \section{Intermediate dimensions: definitions and background}
\setcounter{equation}{0}
\setcounter{theo}{0}
We work with subsets of $\mathbb{R}^n$ throughout, although much of what we establish also holds in more general metric spaces. We denote the {\em diameter} of a set $F$ by $|F|$, and when we refer to a {\em cover} $\{U_i\}$ of a set $F$ we mean that $F\subseteq \bigcup_i U_i$ where $\{U_i\}$ is a finite or countable collection of sets.
Recall that Hausdorff dimension $\dim_{\rm H}$ may be defined without introducing Hausdorff measures, but using Hausdorff content. For $F\subseteq \mathbb{R}^n$,
\begin{align*}\label{hdim}
\dim_{\rm H} F = \inf \big\{& s\geq 0 : \mbox{ \rm for all $\epsilon >0$ there exists a cover $ \{U_i\} $ of $F$ such that $\sum |U_i|^s \leq \epsilon$} \big\},
\end{align*}
see \cite[Section 3.2]{Fa}. (Lower) box dimension $\lbd$ may be expressed in a similar manner, by forcing the covering sets to be of the same diameter. For bounded $F\subseteq \mathbb{R}^n$,
\begin{align*}
\lbd F = \inf \big\{ s\geq 0 &: \mbox{ \rm for all $\epsilon >0$ there exists a cover $ \{U_i\} $ of $F$} \\
& \mbox{ \rm such that $|U_i| = |U_j|$ for all $i,j$ and $\sum |U_i|^s \leq \epsilon$} \big\},
\end{align*}
see \cite[Chapter 2]{Fa}. Expressed in this way, Hausdorff and box dimensions may be regarded as extreme cases of the same definition, one with no restriction on the size of covering sets, and the other requiring them all to have equal diameters. With this in mind, one might regard them as the extremes of a continuum of dimensions with increasing restrictions on the relative sizes of covering sets. This is the main idea of this paper, which we formalise by considering restricted coverings where the diameters of the smallest and largest covering sets lie in a geometric range $\delta^{1/\theta} \leq |U_i| \leq \delta$ for some $0\leq \theta \leq 1$.
\begin{defi}\label{adef}
Let $F\subseteq \mathbb{R}^n$ be bounded. For $0\leq \theta \leq 1$ we define the {\em lower $\theta$-intermediate dimension} of $F$ by
\begin{align*}
\da F = \inf \big\{& s\geq 0 : \mbox{ \rm for all $\epsilon >0$ and all $\delta_0>0$, there exists $0<\delta\leq \delta_0$} \\
& \mbox{ \rm and a cover $ \{U_i\} $ of $F$ such that $\delta^{1/\theta} \leq |U_i| \leq \delta$ and
$\sum |U_i|^s \leq \epsilon$} \big\}.
\end{align*}
Similarly, we define the {\em upper $\theta$-intermediate dimension} of $F$ by
\begin{align*}
\uda F = \inf \big\{& s\geq 0 : \mbox{ \rm for all $\epsilon >0$ there exists $\delta_0>0$ such that for all $0<\delta\leq \delta_0$,} \\
& \mbox{ \rm there is a cover $ \{U_i\} $ of $F$ such that $\delta^{1/\theta} \leq |U_i| \leq \delta$ and
$\sum |U_i|^s \leq \epsilon$} \big\}.
\end{align*}
\end{defi}
With these definitions,
$$\underline{\mbox{\rm dim}}_{0} F = \overline{\mbox{\rm dim}}_{0} F = \hdd F, \quad \underline{\mbox{\rm dim}}_{1} F = \lbd F \quad {\mbox{ and }}\quad \overline{\mbox{\rm dim}}_{1} F = \ubd F, $$
where $\ubd$ is the upper box dimension. Moreover, it follows immediately that, for a bounded set $F$ and $\theta \in [0,1]$,
\[
\hdd F \leq \da F \leq \uda F \leq \ubd F \quad {\mbox{ and }}\quad \da F \leq \lbd F.
\]
It is also immediate that $\da F$ and $\uda F$ are increasing in $\theta$, though as we shall see they need not be strictly increasing. Furthermore, $\uda$ is finitely stable, that is $\uda (F_1\cup F_2) = \max\{\uda F_1, \uda F_2\}$, and, for $\theta\in (0,1]$, both $\da F$ and $\uda F$ are unchanged on replacing $F$ by its closure.
In many situations, even if $\hdd F < \ubd F$, we still have $\lbd F = \ubd F$ and $\da F = \uda F$ for all $\theta \in [0,1]$. In this case we refer to the box dimension $\bdd F = \lbd F = \ubd F$ and the {\em $\theta$-intermediate dimension} $\mbox{\rm dim}_{\theta} F = \da F = \uda F$.
This paper is devoted to understanding $\theta$-intermediate dimensions. The hope is that $\mbox{\rm dim}_{\theta} F$ will interpolate between the Hausdorff and box dimensions in a meaningful way, a rich and robust theory will be discovered, and interesting further questions unearthed. We first derive useful properties of intermediate dimensions, including that $\theta \mapsto \da F$ and $\theta \mapsto \uda F$ are continuous on $(0,1]$ but not necessarily at $0$, as well as proving versions of the mass distribution principle, Frostman's lemma and product formulae. We then examine a range of examples illustrating different types of behaviour including sequences formed by negative powers of integers, and self-affine Bedford-McMullen carpets.
Intermediate dimensions provide an insight into the distribution of the diameters of covering sets needed when estimating the Hausdorff dimensions of sets whose Hausdorff and box dimensions differ. They also have concrete applications to well-studied problems. For example, since the intermediate dimensions are preserved under bi-Lipschitz mappings, they provide another invariant for Lipschitz classification of sets. A very specific variant was used in \cite{KP} to estimate the singular sets of partial differential equations.
A related approach to `dimension interpolation' was recently considered in \cite{spec1} where a new dimension function was introduced to interpolate between the box dimension and the Assouad dimension. In this case the dimension function was called the \emph{Assouad spectrum}, denoted by $\mbox{\rm dim}_\textup{A}^{\theta} F$ $(\theta \in (0,1))$.
\section{Properties of intermediate dimensions}\label{sec:props}
\setcounter{equation}{0}
\setcounter{theo}{0}
\subsection{Continuity}
The first natural question is whether, for a fixed bounded set $F$, $\da F$ and $\uda F$ vary continuously for $\theta\in [0,1]$. We show this is the case, except possibly at $\theta = 0$. We provide simple examples exhibiting discontinuity at $\theta = 0$ in Section \ref{examplessimple}. However, for many natural sets $F$ we find that the intermediate dimensions \emph{are} continuous at $0$ (and thus on $[0,1]$), for example for self-affine carpets, see Section \ref{carpetssec}.
\begin{prop}\label{cty}
Let $F$ be a non-empty bounded subset of $\mathbb{R}^n$ and let $0\leq \theta<\phi \leq 1$. Then
\begin{equation}\label{ineqs}
\da F \ \leq \ \db F \ \leq \ \da F +\Big(1-\frac{\theta}{\phi}\Big) (n- \da F)
\end{equation}
and
\begin{equation}\label{ineqs2}
\uda F \ \leq \ \udb F \ \leq \ \uda F +\Big(1-\frac{\theta}{\phi}\Big) (n- \uda F).
\end{equation}
In particular, $\theta \mapsto\da F$ and $\theta \mapsto\uda F$ are continuous for $\theta \in (0,1]$.
\end{prop}
\begin{proof}
We will only prove \eqref{ineqs} since \eqref{ineqs2} is similar. The left-hand inequality of \eqref{ineqs} is just the monotonicity of $\da F$. The right-hand inequality is trivially satisfied when $\da F=n$, so we assume that $0\leq \da F <n$. Suppose that $0\leq \theta<\phi \leq 1$ and that $0\leq \da F <s <n$. Then, given $\epsilon >0$, we may find arbitrarily small $\delta>0$ and countable or finite covers $\{U_i\}_{i\in I}$ of $F$ such that
\begin{equation}\label{epsum}
\sum_{i\in I} |U_i|^s<\epsilon \quad \mbox{ and } \quad \delta \leq |U_i| \leq \delta^\theta \quad \mbox{ for all } i\in I.
\end{equation}
Let
$$I_0 =\{i\in I: \delta \leq |U_i| \leq \delta^\phi\} \quad \mbox{ and } \quad I_1 =\{i\in I: \delta^\phi < |U_i| \leq \delta^\theta\}.$$
For each $ i\in I_1$ we may split $U_i$ into subsets of small coordinate cubes to get sets $\{U_{i,j}\}_{j\in J_i}$ such that $U_i \subseteq\bigcup_{j\in J_i} U_{i,j}$, with
$|U_{i,j}| \leq \delta^\phi$ and ${\rm card} J_i \ \leq \ c_n|U_i|^n \delta^{-\phi n} \ \leq \ c_n\delta^{n(\theta-\phi)}$, where
$c_n = 4^n n^{n/2}$.
Let $s<t\leq n$. Then $ \{U_i\}_{i\in I_0} \cup \{U_{i,j}\}_{i\in I_1, j \in J_i}$ is a cover of $F$ such that
$\delta \leq |U_i|, |U_{i,j}| \leq \delta^\phi$. Taking sums with respect to this cover:
\begin{eqnarray*}
\sum_{i\in I_0}|U_i|^t + \sum_{i\in I_1}\sum_{j\in J_{i}}|U_{i,j}|^t
&\leq&
\sum_{i\in I_0}|U_i|^t + \sum_{i\in I_1} \delta^{\phi t} c_n |U_i|^n \delta^{-\phi n} \\
&\leq&
\sum_{i\in I_0}|U_i|^t + c_n \sum_{i\in I_1} |U_i|^s |U_i|^{n-s}\delta^{\phi( t-n)} \\
&\leq&
\sum_{i\in I_0}|U_i|^s + c_n \sum_{i\in I_1} |U_i|^s \delta^{\theta(n-s)}\delta^{\phi( t-n)} \\
&\leq&
\sum_{i\in I_0}|U_i|^s + c_n \delta^{\phi[t-(n\phi+\theta(s-n))/\phi]}\sum_{i\in I_1} |U_i|^s \\
&\leq& (1+c_n) \sum_{i\in I}|U_i|^s\ < \ (1+c_n)\epsilon
\end{eqnarray*}
if $t\geq (n\phi+\theta(s-n))/\phi$, from \eqref{epsum}. This holds for some cover for arbitrarily small $\epsilon$ and all $s>\da F$, giving
$\db F \leq n + \theta (\da F-n)/\phi$, which rearranges to give \eqref{ineqs}.
Finally, note that \eqref{ineqs2} follows by exactly the same argument noting that the assumption $ \uda F <s $ gives rise to $\delta_0>0$ such that for all $\delta \in (0,\delta_0)$ we can find covers $\{U_i\}_{i\in I}$ of $F$ satisfying \eqref{epsum}.
\end{proof}
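As a quick numerical sanity check, outside the formal argument, inequality \eqref{ineqs} can be tested against the sequence sets $F_p$ of Section \ref{secseq}, for which $\da F_p = \theta/(p+\theta)$ (Proposition \ref{conseq}) and the ambient dimension is $n=1$. The following Python sketch assumes those values; the function name \texttt{dim\_theta} is ours.

```python
# Check dim_theta <= dim_phi <= dim_theta + (1 - theta/phi)(n - dim_theta)
# for F_p, where dim_theta F_p = theta/(p + theta) and n = 1.
def dim_theta(p, theta):
    return theta / (p + theta)

n = 1
for p in (0.5, 1.0, 2.0):
    for theta, phi in ((0.1, 0.2), (0.3, 0.9), (0.5, 0.6), (0.9, 1.0)):
        lo, hi = dim_theta(p, theta), dim_theta(p, phi)
        # left-hand inequality is monotonicity; right-hand is the continuity bound
        assert lo <= hi <= lo + (1 - theta / phi) * (n - lo)
```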
\subsection{A mass distribution principle for $\da$ and $\uda$}
The \emph{mass distribution principle} is a powerful tool in fractal geometry and provides a useful mechanism for estimating the Hausdorff dimension from below by considering measures supported on the set, see \cite[page 67]{Fa}. We present natural analogues for $\da$ and $\uda$.
\begin{prop}\label{mdp}
Let $F$ be a Borel subset of $\mathbb{R}^n$ and let $0\leq \theta \leq 1$ and $s\geq 0$. Suppose that there are numbers $a, c, \delta_0 >0$ such that for all $0< \delta\leq \delta_0$ we can find a Borel measure $\mu_\delta$ supported by $F$ with
$\mu_\delta (F) \geq a $, and with
\begin{equation}\label{mdiscond}
\mu_\delta (U) \leq c|U|^s \quad \mbox{ for all Borel sets $U \subseteq \mathbb{R}^n $ with } \delta \leq |U|\leq \delta^\theta.
\end{equation}
Then $\da F \geq s$. Moreover, if measures $\mu_\delta$ with the above properties can be found only for a sequence of $\delta \to 0$, then the conclusion is weakened to $\uda F \geq s$.
\end{prop}
\begin{proof}
Let $\{U_i\}$ be a cover of $F$ such that $\delta \leq |U_i|\leq \delta^\theta$ for all $i$. Then
$$a\ \leq \ \mu_\delta(F) \ \leq\ \mu_\delta\Big(\bigcup_i U_i\Big)\ \leq \ \sum_i \mu_\delta(U_i) \
\leq \ c\sum_i |U_i|^s,$$
so that $\sum_i |U_i|^s \geq a/c>0$ for every admissible cover and therefore $\da F \geq s$.
The weaker conclusion regarding the upper intermediate dimension is obtained similarly.
\end{proof}
Note the main difference between Proposition \ref{mdp} and the usual mass distribution principle is that a family of measures $\{\mu_\delta\}$ is used instead of a single measure. Since each measure $\mu_\delta$ is only required to describe a range of scales, in practice one can often use finite sums of point masses. Whilst the measures $\mu_\delta$ may vary, it is essential that they all assign mass at least $a>0$ to $F$.
\subsection{A Frostman type lemma for $\da$}
\emph{Frostman's lemma} is another powerful tool in fractal geometry, which asserts the existence of measures of the type considered by the mass distribution principle, see \cite[page 77]{Fa} or \cite[page 112]{mattila}. The following analogue of Frostman's lemma holds for intermediate dimensions and is a useful dual to Proposition \ref{mdp}.
\begin{prop}\label{frostman}
Let $F$ be a compact subset of $\mathbb{R}^n$, let $0< \theta \leq 1$, and suppose $0< s< \da F$. There exists a constant $c >0$ such that for all $\delta \in (0,1)$ we can find a Borel probability measure $\mu_\delta$ supported on $F$ such that for all $x \in \mathbb{R}^n$ and $\delta^{1/\theta} \leq r \leq \delta$,
\[
\mu_\delta (B(x,r)) \leq c r^s .
\]
Moreover, $\mu_\delta$ can be taken to be a finite collection of atoms.
\end{prop}
\begin{proof}
This proof follows the proof of the classical version of Frostman's lemma given in \cite[pages 112-114]{mattila}. For $m \geq 0$ let $\mathcal{D}_m$ denote the familiar partition of $[0,1]^n$ consisting of $2^{nm}$ pairwise disjoint half-open dyadic cubes of sidelength $2^{-m}$, that is cubes of the form $[a_1, a_1+2^{-m}) \times \cdots \times [a_n, a_n+2^{-m})$. By translating and rescaling we may assume without loss of generality that $F \subseteq [0,1]^n$ and that $F$ is not contained in any $Q \in \mathcal{D}_1$. It follows from the definition of $\da F$ that there exists $\varepsilon>0$ such that for all $\delta \in (0,1)$ and for all covers $\{U_i\}_i$ of $F$ satisfying $\delta^{1/\theta} \leq |U_i| \leq \delta$,
\begin{equation} \label{goodcover1}
\sum_i |U_i|^s > \varepsilon.
\end{equation}
Given $\delta \in (0,1)$, let $m \geq 0$ be the unique integer satisfying $2^{-m-1}< \delta^{1/\theta} \leq 2^{-m}$ and let $\mu_m$ be a measure defined on $F$ as follows: for each $Q \in \mathcal{D}_m$ with $Q \cap F \neq \emptyset$, choose an arbitrary point $ x_Q \in Q \cap F$ and let
\[
\mu_m = \sum_{ Q \in \mathcal{D}_m : Q \cap F \neq \emptyset} 2^{-ms} \delta_{x_Q}
\]
where $\delta_{x_Q}$ is a point mass at $x_Q$. Modify $\mu_m$ to form a measure $\mu_{m-1}$, supported on the same finite set, defined by
\[
\mu_{m-1}\vert_Q = \min\{1, 2^{-(m-1)s} \mu_m(Q)^{-1}\} \mu_m\vert_Q
\]
for all $Q \in \mathcal{D}_{m-1}$, where $\nu \vert_E$ denotes the restriction of $\nu$ to $E$. The purpose of this modification is to reduce the mass of cubes which carry too much measure. This is done since we are ultimately trying to construct a measure which we can estimate uniformly from above. Continuing inductively, $\mu_{m-k-1}$ is obtained from $\mu_{m-k}$ by
\[
\mu_{m-k-1}\vert_Q = \min\{1, 2^{-(m-k-1)s} \mu_{m-k} (Q)^{-1}\} \mu_{m-k} \vert_Q
\]
for all $Q \in \mathcal{D}_{m-k-1}$. We terminate this process when we define $\mu_{m-l}$ where $l$ is the largest integer satisfying $2^{-(m-l)}n^{1/2} \leq \delta$. (We may assume that $l \geq 0$ by choosing $\delta$ sufficiently small to begin with.) In particular, cubes $Q \in \mathcal{D}_{m-l}$ satisfy $|Q| = 2^{-(m-l)}n^{1/2} \leq \delta$. By construction we have
\begin{equation} \label{goodbound1}
\mu_{m-l}(Q) \leq 2^{-(m-k)s} = |Q|^s n^{-s/2}
\end{equation}
for all $k =0, \dots, l$ and $Q \in \mathcal{D}_{m-k}$. Moreover, for all $x \in F$, there is at least one $k \in \{0, \dots, l\}$ and $Q \in \mathcal{D}_{m-k}$ with $x \in Q$ such that the inequality in \eqref{goodbound1} is an equality. This is because all cubes at level $m$ satisfy the equality for $\mu_m$ and if a cube $Q$ satisfies the equality for $\mu_{m-k}$, then either $Q$ or its parent cube satisfies the equality for $\mu_{m-k-1}$. For each $x \in F$, choosing the largest such $Q$ yields a finite collection of cubes $Q_1, \dots, Q_t$ which cover $F$ and satisfy $\delta^{1/\theta} \leq |Q_i| \leq \delta$ for $i = 1, \dots, t$. Therefore, using \eqref{goodcover1},
\[
\mu_{m-l}(F) = \sum_{i=1}^t \mu_{m-l}(Q_i) = \sum_{i=1}^t |Q_i|^s n^{-s/2} > \varepsilon n^{-s/2}.
\]
Let $\mu_\delta = \mu_{m-l}(F)^{-1} \mu_{m-l}$, which is clearly a probability measure supported on a finite collection of points. Moreover, for all $x \in \mathbb{R}^n$ and $\delta^{1/\theta} \leq r \leq \delta$, $B(x,r)$ certainly intersects at most $c_n$ cubes in $\mathcal{D}_{m-k}$ where $k$ is chosen to be the largest integer satisfying $0 \leq k \leq l$ and $2^{-(m-k+1)} < r$, and $c_n$ is a constant depending only on $n$. Therefore, using \eqref{goodbound1},
\[
\mu_\delta (B(x,r)) \leq c_n \mu_{m-l}(F)^{-1} 2^{-(m-k) s} \leq c_n \varepsilon^{-1} n^{s/2} 2^s r^s
\]
which completes the proof, setting $c= c_n \varepsilon^{-1} n^{s/2} 2^s $.
\end{proof}
\subsection{General bounds}
Here we consider general bounds which rely on the Assouad dimension and which have interesting consequences for continuity. Namely, they provide numerous examples where the intermediate dimensions are \emph{dis}continuous at $\theta=0$ and also provide another proof that the intermediate dimensions are continuous at $\theta = 1$. The \emph{Assouad dimension} of $F \subseteq \mathbb{R}^n$ is defined by
\begin{eqnarray*}
\dim_\textup{A} F &=& \inf \bigg\{ s \geq 0 \ : \ \text{there exists $C>0$ such that for all $x \in F$, } \\
&\,& \qquad \qquad \text{and for all $0<r<R$, we have} \ N_r(F \cap B(x,R)) \leq C \left(\frac{R}{r} \right)^s \bigg\}
\end{eqnarray*}
where $N_r(A)$ denotes the smallest number of sets of diameter at most $r$ required to cover a set $A$. In general $\underline{\dim}_\textup{B} F \leq \overline{\dim}_\textup{B} F \leq \dim_\textup{A} F \leq n$, but equality of these three dimensions occurs in many cases, even if the Hausdorff dimension and box dimension are distinct, for example if the box dimension is equal to the ambient spatial dimension. See \cite{Fra, robinson} for more background on the Assouad dimension.
The following proposition gives lower bounds for the intermediate dimensions in terms of Assouad dimensions.
\begin{prop} \label{assouad}
Given any non-empty bounded $F \subseteq \mathbb{R}^n$ and $\theta \in (0,1)$,
\[
\da F \geq \dim_\textup{A} F - \frac{\dim_\textup{A} F - \underline{\dim}_\textup{B} F}{\theta},
\]
and
\[
\uda F \geq \dim_\textup{A} F - \frac{\dim_\textup{A} F - \overline{\dim}_\textup{B} F}{\theta}.
\]
In particular, if $\underline{\dim}_\textup{B} F = \dim_\textup{A} F$, then $\da F =\overline{\dim}_\theta F = \underline{\dim}_\textup{B} F = \dim_\textup{A} F$ for all $\theta \in (0,1]$.
\end{prop}
\begin{proof}
We will prove the lower bound for $\da F$; the proof for $\uda F$ is similar. Fix $\theta \in (0,1)$ and assume that $\underline{\dim}_\textup{B} F >0$, since otherwise the result is trivial. Let
\[
0<b< \underline{\dim}_\textup{B} F \leq \dim_\textup{A} F < d < \infty
\]
and $\delta \in (0,1)$ be given. By the definition of lower box dimension, there exists a uniform constant $C_0$, depending only on $F$ and $b$, such that there is a $\delta$-separated set of points in $F$ of cardinality at least $C_0 \delta^{-b}$. Let $\mu_\delta$ be a uniformly distributed probability measure on these points, i.e. a sum of $C_0\delta^{-b}$ point masses each with mass $C_0^{-1} \delta^{b}$. We use our mass distribution principle with this measure to prove the proposition.
Let $U \subseteq \mathbb{R}^n$ be a Borel set with $|U| = \delta^\gamma$ for some $\gamma \in [\theta, 1]$. By the definition of Assouad dimension there exists a uniform constant $C_1$, depending only on $F$ and $d$, such that $U$ intersects at most $C_1 ( \delta^\gamma/\delta)^d$ points in the support of $\mu_\delta$. Therefore
\[
\mu_\delta (U) \leq C_1\delta^{(\gamma-1)d} C_0^{-1} \delta^{b} = C_1C_0^{-1} \ |U|^{(\gamma d - d+ b)/ \gamma} \leq C_1C_0^{-1} \ |U|^{(\theta d - d+ b)/ \theta}
\]
which, using Proposition \ref{mdp}, implies that
\[
\da F \geq (\theta d - d+ b)/ \theta = d - \frac{d-b}{\theta}.
\]
Letting $d \to \dim_\textup{A} F$ and $b \to \underline{\dim}_\textup{B} F$ yields the desired result.
\end{proof}
This proposition implies that for bounded sets with $\dim_\textup{H} F < \underline{\dim}_\textup{B} F = \dim_\textup{A} F$, the intermediate dimensions $\da F$ and $\uda F$ are necessarily discontinuous at $\theta = 0$. In fact the intermediate dimensions are constant on $(0,1]$ in this case. On the other hand, this gives an alternative demonstration that $\da F$ and $\uda F$ are \emph{always} continuous at $\theta=1$. Moreover, the proposition provides a quantitative lower bound near $\theta=1$. In Section \ref{examplessimple} we will use Proposition \ref{assouad} to construct examples exhibiting a range of behaviours.
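For a concrete illustration, consider $F_1 = \{0, 1, 1/2, 1/3, \dots\}$, for which $\underline{\dim}_\textup{B} F_1 = 1/2$, $\dim_\textup{A} F_1 = 1$ and, by Proposition \ref{conseq}, $\da F_1 = \theta/(1+\theta)$. The sketch below, which is illustrative only and whose helper names are ours, checks numerically that the bound of Proposition \ref{assouad} stays below the true value on $(0,1]$ and is attained at $\theta = 1$.

```python
# Lower bound d - (d-b)/theta with b = lower box dim = 1/2 and
# d = Assouad dim = 1, against the true value theta/(1+theta) for F_1.
b, d = 0.5, 1.0

def assouad_lower(theta):
    return d - (d - b) / theta

def true_dim(theta):
    return theta / (1 + theta)

for k in range(1, 11):
    theta = k / 10
    assert assouad_lower(theta) <= true_dim(theta) + 1e-12
# the bound is sharp at theta = 1
assert abs(assouad_lower(1.0) - true_dim(1.0)) < 1e-12
```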
\subsection{Product formulae}
A well-studied problem in dimension theory is how dimensions of product sets behave. The following product formulae for intermediate dimensions may be of interest in their own right, but in Section \ref{examplessimple} they will be used to construct examples.
\begin{prop} \label{products}
Let $E \subseteq \mathbb{R}^n$ and $F \subseteq \mathbb{R}^m$ be bounded and $\theta \in [0,1]$. Then
\[
\da E + \da F\ \leq\ \da (E \times F)\ \leq\ \overline{\dim}_\theta ( E \times F)\ \leq\ \overline{\dim}_\theta E + \overline{\dim}_\textup{B} F.
\]
\end{prop}
\begin{proof}
Fix $\theta \in (0,1)$ throughout, noting that the cases when $\theta=0,1$ are well-known, see \cite[Chapter 7]{Fa}. We begin by demonstrating the left-hand inequality. We may assume that $\da E , \da F >0$ as otherwise the conclusion follows by monotonicity. Moreover, since $E,F$ are bounded we may assume they are compact since all the dimensions considered are unchanged under taking closure. Let $0< s< \da E$ and $0<t< \da F$. It follows from Proposition \ref{frostman} that there exist constants $C_s,C_t > 0$ such that for all $\delta \in (0,1)$ there exist Borel probability measures $\mu_\delta$ supported on $E$ and $\nu_\delta$ supported on $F$ such that for all $x \in \mathbb{R}^n$ and $\delta^{1/\theta} \leq r \leq \delta$,
\[
\mu_\delta(B(x,r)) \leq C_s r^s \qquad \text{and} \qquad \nu_\delta(B(x,r)) \leq C_t r^t.
\]
Consider the product measure $\mu_\delta \times \nu_\delta$ which is supported on $E \times F$. For $z \in \mathbb{R}^n \times \mathbb{R}^m$ and $\delta^{1/\theta} \leq r \leq \delta$,
\[
(\mu_\delta \times \nu_\delta) (B(z,r)) \leq C_s C_t r^{s+t}
\]
and Proposition \ref{mdp} yields $\da (E \times F) \geq s+t$; letting $ s\to \da E$ and $t\to \da F$ gives the desired inequality.
The middle inequality is trivial and so it remains to prove the right-hand inequality. Let $s > \overline{\dim}_\theta E$ and $d > \overline{\dim}_\textup{B} F$. From the definition of $\overline{\dim}_\textup{B} F$ there exists a constant $\delta_1 \in (0,1)$ such that for all $0<r<\delta_1$ there is a cover of $F$ by at most $r^{-d}$ sets of diameter $r$.
Let $\varepsilon>0$. By the definition of $\overline{\dim}_\theta E$ there exists $\delta_0\in (0,\delta_1)$ such that for all $0<\delta < \delta_0$ there is a cover of $E$ by sets $\{U_i\}_{i}$ with $\delta^{1/\theta} \leq |U_i| \leq \delta$ for all $i$ and
\[
\sum_{i} |U_i|^s \leq \varepsilon.
\]
Given such a cover of $E$, for each $i$ let $\{U_{i,j}\}_{j}$ be a cover of $F$ by at most $|U_i|^{-d}$ sets with
diameters $|U_{i,j}|=|U_{i}|$ for all $j$. Then
$$ E\times F\ \subseteq\ \bigcup_i \bigcup_j \big(U_i \times U_{i,j}\big),$$
with
$$ \sum_i\sum_j |U_i \times U_{i,j}|^{s+d} \ \leq\ \sum_i|U_i|^{-d}\big( \sqrt{2}|U_i|\big)^{s+d}
\ =\ 2^{(s+d)/2}\sum_i|U_i|^{s} \ \leq\ 2^{(s+d)/2}\varepsilon.$$
Since
$\delta^{1/\theta} \leq |U_i \times U_{i,j}| \leq \sqrt{2}\delta$ for all $i,j$, each set $U_i \times U_{i,j}$ may be covered by at most $c_{n+m}$ sets $\{V_{i,j,k}\}_k$ with diameters $\delta^{1/\theta}\leq |V_{i,j,k}| \leq \min\{|U_i \times U_{i,j}|,\delta\}\leq \delta$, where $c_{q}$ is the least number such that every set in $\mathbb{R}^q$ of diameter $\sqrt{2}$ can be covered by at most $c_{q}$ sets of diameter 1. Hence
$$\sum_i\sum_j \sum_k|V_{i,j,k}|^{s+d}\ \leq \ c_{n+m}2^{(s+d)/2}\varepsilon.$$
As $\varepsilon$ may be taken arbitrarily small, $\overline{\dim}_\theta (E \times F) \leq s+d$; letting $ s\to \uda E$ and $d\to \overline{\dim}_\textup{B} F$ completes the proof.
\end{proof}
\section{Examples}\label{sec:egs}
\setcounter{equation}{0}
\setcounter{theo}{0}
In this section we construct several simple examples where the intermediate dimensions exhibit a range of phenomena. All of our examples are compact subsets of $\mathbb{R}$ or $\mathbb{R}^2$ and in all examples the upper and the lower intermediate dimensions coincide for all $\theta \in [0,1]$.
\subsection{Convergent sequences}\label{secseq}
Let $p>0$ and
\begin{equation*}
F_p = \bigg\{0, \frac{1}{1^p}, \frac{1}{2^p},\frac{1}{3^p},\ldots \bigg\}.
\end{equation*}
Since $F_p$ is countable, $\hdd F_p=0$. It is well-known that $\bdd F_p=1/(p+1)$, see \cite[Chapter 2]{Fa}. We obtain the intermediate dimensions of $F_p$.
\begin{prop}\label{conseq}
For $p>0$ and $0\leq \theta \leq 1$,
$$\da F_p = \overline{\dim}_\theta F_p = \frac{\theta}{p+\theta}.$$
\end{prop}
\begin{proof}
We first bound $\overline{\dim}_\theta F_p$ above. Let $0<\delta<1$ and let $M =\lceil \delta^{-(s +\theta(1-s))/(p+1)}\rceil$. Write $B(x,r)$ for the closed interval (ball) of centre $x$ and length $2r$.
Take a covering ${\mathcal U}$ of $F_p$ consisting of $M$ intervals $B(k^{-p}, \delta/2)$ of length $\delta$ for $1\leq k\leq M$ and $\lceil M^{-p} /\delta^\theta\rceil \leq M^{-p}/\delta^\theta +1$ intervals of length $\delta^\theta$ that cover $[0, M^{-p}].$ Then
\begin{eqnarray*}
\sum_{U\in {\mathcal U}} |U|^s &\leq & M\delta ^s + \delta^{\theta s}\Big(\frac{1}{M^p\delta^\theta}+ 1\Big)\\
&=& M\delta ^s + \frac{\delta^{\theta (s-1)}}{M^p}+ \delta^{\theta s}\\
& \leq & ( \delta^{-(s +\theta(1-s))/(p+1)} +1) \delta^s
+ \delta^{\theta (s-1)}\delta^{(s +\theta(1-s))p/(p+1)} +\delta^{\theta s}\\
& =& 2\delta^{(\theta (s-1)+sp)/(p+1)} + \delta^s+ \delta^{\theta s} \ \to \ 0
\end{eqnarray*}
as $\delta \to 0$ if $s(\theta +p) > \theta$. Thus $\overline{\dim}_\theta F_p \leq\theta/(p+\theta)$.
\medskip
For the lower bound we put suitable measures on $F_p$ and apply Proposition \ref{mdp}. Fix $s = \theta/(p+\theta)$. Let $0<\delta < 1$ and again let $M =\lceil \delta^{-(s +\theta(1-s))/(p+1)}\rceil$. Define $\mu_\delta$ as the sum of point masses on the points $1/k^p \ (1\leq k<\infty)$ with
\begin{equation}\label{mass}
\mu_\delta \Big(\left\{\frac{1}{k^p}\right\}\Big)\ = \
\left\{
\begin{array}{cl}
\delta^s & \mbox{ if } 1\leq k \leq M \\
0 & \mbox{ if } M+1\leq k <\infty
\end{array}
\right. .
\end{equation}
Then
\begin{eqnarray*}
\mu_\delta(F_p) &=& M\delta^s \\
& \geq&\delta^{-(s +\theta(1-s))/(p+1)}\delta^s \\
& =& \delta^{(ps +\theta(s-1))/(p+1)} = 1
\end{eqnarray*}
by the choice of $s$.
To see that \eqref{mdiscond} is satisfied,
note that if $2\leq k\leq M$ then, by a mean value theorem estimate,
$$\frac{1}{(k-1)^p} - \frac{1}{k^p}\ \geq \ \frac{p}{k^{p+1}}\ \geq \ \frac{p}{M^{p+1}};$$
thus the gap between any two points of $F_p$ carrying mass is at least $p/M^{p+1}$.
Let $U$ be such that $\delta \leq |U|\leq \delta^\theta$. Then $U$ intersects at most
$1+ |U|/(p/M^{p+1}) = 1+|U|M^{p+1}/p $ of the points of $F_p$ which have mass $\delta^s$.
Hence
\begin{eqnarray*}
\mu_\delta (U)& \leq & \delta^s + \frac{1}{p}|U|\delta^s \delta^{-(s +\theta(1-s))} \\
&=& \delta^s + \frac{1}{p}|U| \delta^{(\theta(s-1))}\\
&\leq& |U|^s + \frac{1}{p} |U||U|^{s-1} \\
&=& \left(1+ \frac{1}{p}\right)|U|^s.
\end{eqnarray*}
From Proposition \ref{mdp}, $\da F_p \geq s = \theta/(p+\theta)$.
\end{proof}
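The cover used in the upper bound above can be tested numerically. The following sketch, which is illustrative and not part of the proof, computes the cost $\sum_{U \in \mathcal{U}}|U|^s$ of that cover and checks that it decays as $\delta \to 0$ when $s$ exceeds $\theta/(p+\theta)$ and grows when $s$ falls below it.

```python
import math

def cover_cost(p, theta, s, delta):
    # Cost sum |U|^s of the cover from the proof: M intervals of length
    # delta about the points 1/k^p (k <= M), plus ceil(M^{-p}/delta^theta)
    # intervals of length delta^theta covering [0, M^{-p}].
    M = math.ceil(delta ** (-(s + theta * (1 - s)) / (p + 1)))
    return M * delta ** s + math.ceil(M ** (-p) / delta ** theta) * delta ** (theta * s)

p, theta = 1.0, 0.5
s_star = theta / (p + theta)          # = 1/3, the claimed dimension
# above s_star the cost shrinks with delta; below s_star it grows
assert cover_cost(p, theta, s_star + 0.05, 1e-12) < cover_cost(p, theta, s_star + 0.05, 1e-4)
assert cover_cost(p, theta, s_star - 0.05, 1e-12) > cover_cost(p, theta, s_star - 0.05, 1e-4)
```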
\subsection{Simple examples exhibiting different phenomena} \label{examplessimple}
We use the convergent sequences $F_p$ from the previous section, the product formulae from Proposition \ref{products}, and that the \emph{upper} intermediate dimensions are finitely stable to construct examples displaying a range of different features which are illustrated in Figure 1.
The natural question which began this investigation is `does $\da$ vary continuously between the Hausdorff and lower box dimension?'. This indeed happens for the convergent sequences considered in the previous section, but turns out to be false in general. The first example in this direction is provided by another convergent sequence.
\medskip
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{idfig4}
\caption{Plots of $\da F$ for the four examples in Section \ref{examplessimple}}
\end{figure}
\emph{Example 1: Discontinuous at 0, otherwise constant.} Let
\[
F_{\log} = \{ 0, 1/\log 2, 1/\log 3, 1/\log 4, \dots \}.
\]
This sequence converges more slowly than any of the polynomial sequences $F_p$ and it is well-known and easy to prove that
$\bdd F_{\log} = \dim_\textup{A} F_{\log}= 1$.
It follows from Proposition \ref{assouad} that
\[
\da F_{\log} = \overline{\dim}_\theta F_{\log} = 1,\qquad \theta \in (0,1].
\]
Since $\dim_0 F_{\log}=\hdd F_{\log}=0$ there is a discontinuity at $\theta = 0$.
\medskip
\emph{Example 2: Continuous at 0, part constant, part strictly increasing. } In the opposite direction, it is possible that $\da F = \hdd F < \lbd F$ for some $\theta >0$. Indeed, let $F =F_1 \cup E$ where $F_1 = \{0, 1/1, 1/2, 1/3, \dots\}$ as before, and let $E \subset \mathbb{R}$ be any compact set with $\hdd E = \ubd E = 1/3$ (for example an appropriately chosen self-similar set). Then it is straightforward to deduce that
\[
\da F = \overline{\dim}_\theta F = \max\left\{ \frac{ \theta}{1+ \theta}, \, 1/3\right\},\qquad \theta \in [0,1].
\]
It is also possible for $\da F$ to approach a value strictly in between $\hdd F$ and $\lbd F$ as $\theta \to 0$. This is the subject of the next two examples.
\medskip
\emph{Example 3: Discontinuous at 0, part constant, part strictly increasing.} For an example that is constant on an interval adjacent to 0, let $F =E \cup F_{1}$ where this time $E\subset \mathbb{R}$ is any closed countable set with $\lbd E = \dim_\textup{A} E = 1/4$. It is immediate that
\[
\da F = \overline{\dim}_\theta F = \max\left\{ \frac{ \theta}{1+ \theta}, \, 1/4\right\},\qquad \theta \in (0,1],
\]
with $\hdd F=0$ and $\bdd F=1/2$.
\medskip
\emph{Example 4: Discontinuous at 0, strictly increasing.} Finally, for an example where $\da F$ is smooth, strictly increasing but not continuous at $\theta=0$, let
\[
F = F_{1} \times F_{\log}\subset \mathbb{R}^2.
\]
Here $\hdd F=0$ and $\bdd F=3/2$ and Proposition \ref{products} gives
\[
\da F = \overline{\dim}_\theta F = \frac{ \theta}{1 + \theta}+ 1,\qquad \theta \in (0,1],
\]
noting that $\da F_{\log} = \overline{\dim}_\theta F_{\log} = \lbd F_{\log} = \ubd F_{\log} = \dim_\textup{A} F_{\log}= 1$ for $\theta \in (0,1]$.
\section{Bedford-McMullen carpets} \label{carpetssec}
A well-known class of fractals where the Hausdorff and box dimensions differ is that of the self-affine carpets; this is a consequence of the alignment of the component rectangles in the iterated construction. The first studies of planar self-affine carpets were by Bedford \cite{bedford} and McMullen \cite{mcmullen} independently, see also \cite{peres}; these Bedford-McMullen carpets have been widely studied and generalised, see for example \cite{Fa1} and references therein. Indeed, the question of the distribution of scales of covering sets for Hausdorff and box dimensions of Bedford-McMullen carpets was one of our motivations for studying intermediate dimensions. Finding an exact formula for the intermediate dimensions of Bedford-McMullen carpets seems a difficult problem, so here we obtain some lower and upper bounds, which in particular establish continuity at $\theta=0$ and that the intermediate dimensions take a strict minimum at $\theta=0$.
We introduce the notation we need, generally following that of McMullen \cite{mcmullen}. Choose integers $n > m \geq 2$. Let ${I}=\left\{0,\ldots, m-1 \right\}$ and ${J}=\left\{0,\ldots, n-1 \right\}$. Choose a fixed digit set $D \subseteq I \times J$ with at least two elements. For $\left( p,q \right)\in D$ we define the affine contraction $S_{\left(p,q \right)}\colon [0,1]^2 \rightarrow [0,1]^2$ by
\[
S_{ \left( p,q \right)}\left(x,y\right) = \left( \frac{x+p}{m},\frac{y+q}{n} \right) .
\]
There exists a unique non-empty compact set $F \subseteq [0,1]^2$ satisfying
\[
F=\bigcup_{(p,q) \in D}S_{ \left( p,q \right)}(F);
\]
that is $F$ is the attractor of the iterated function system $\ \left\{ S_{ \left(p,q \right)} \right\}_{ \left( p,q \right)\in D}$. We call such a set $F$ a {\em Bedford-McMullen self-affine carpet}. It is sometimes convenient to denote pairs in $D$ by $\ell = (p_\ell,q_\ell )$.
We model our carpet $F$ via the symbolic space $D^{\mathbb{N}}$, which consists of all infinite words over $D$ and is equipped with the product topology.
We write $\mathbf{i} \equiv (i_1,i_2,\ldots)$ for elements of $D^{\mathbb{N}}$ and $(i_1,\ldots,i_k)$ for words of length $k$ in $D^k$, where $i_j \in D$.
Then the canonical projection $\tau :D^{\mathbb{N}} \rightarrow [0,1]^2$ is defined by
\[
\{\tau(\mathbf{i})\}\equiv\{\tau(i_1,i_2,\ldots)\}=\bigcap_{k \in \mathbb{N}} S_{i_1} \circ \cdots \circ S_{i_k}([0,1]^2).
\]
This allows us to switch between symbolic and geometric notation since
\[
\tau(D^{\mathbb{N}})=F.
\]
Bedford \cite{bedford} and McMullen \cite{mcmullen} showed that
\be\label{dimb}
\bdd F = \frac{\log m_0}{\log m} + \frac{\log |D| - \log m_0}{\log n}
\ee
where $m_0$ is the number of $p$ such that there is a $q$ with $(p,q) \in D$, that is the number of columns of the array containing at least one rectangle.
They also showed that
\be\label{dimh}
\hdd F = \log \big(\sum_{p=1}^m n_p^{\log_n m} \big) \Big/ \log m,
\ee
where $n_p \, (1\leq p\leq m)$ is the number of $q$ such that $(p,q)\in D$, that is the number of selected rectangles in the $p$th column of the array.
For each $\ell \in D$ we let $a_\ell$ be the number of rectangles of $D$ in the same column as $\ell$.
Then, writing $d = \hdd F$, \eqref{dimh} may be written as
\begin{equation}
m^d \ = \ \sum_{p=1}^m {n}_p^{\log_n m}\ =\ \sum_{\ell\in D} a_{\ell}^{(\log_n m -1)},
\end{equation}
where equality of the sums follows from the definitions of $n_p$ and $a_\ell$.
We assume that the non-zero $n_p$ are not all equal, otherwise $\hdd F = \bdd F$; in particular this implies that $a := \max_{\ell \in D}a_\ell \geq 2$.
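To make \eqref{dimb}, \eqref{dimh} and the identity above concrete, the following sketch evaluates them for a small hypothetical digit set of our own choosing, $m=2$, $n=3$, $D=\{(0,0),(0,2),(1,1)\}$; it is not an example taken from the text.

```python
import math

m, n = 2, 3
D = [(0, 0), (0, 2), (1, 1)]          # hypothetical digit set, m0 = 2 columns
L = math.log(m) / math.log(n)         # log_n m

m0 = len({p for p, q in D})           # columns containing at least one rectangle
n_p = {p: sum(1 for pp, _ in D if pp == p) for p, _ in D}   # rectangles per column

dim_B = math.log(m0) / math.log(m) + (math.log(len(D)) - math.log(m0)) / math.log(n)
dim_H = math.log(sum(c ** L for c in n_p.values())) / math.log(m)

# the identity m^d = sum over ell in D of a_ell^(L-1), with a_ell = n_{p_ell}
lhs = m ** dim_H
rhs = sum(n_p[p] ** (L - 1) for p, _ in D)
assert abs(lhs - rhs) < 1e-9
assert dim_H <= dim_B
```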
We denote the $k$th-level iterated rectangles by
$$ R_k (i_1,\ldots,i_k)\ =\ S_{i_1}\circ \cdots \circ S_{i_k}([0,1]^2).$$
We also write $R_k(\mathbf{i}) \equiv R_k(i_1,i_2, \ldots)$ for this rectangle when we wish to indicate the $k$th-level iterated rectangle containing the point $\tau(\mathbf{i}) = \tau(i_1,i_2, \ldots)$.
We will associate a probability vector $\ \left\{ b_{\ell} \right\}_{\ell\in D}$ with $D$ and let $\widetilde{\mu}=\prod_{k\in\mathbb{ N}}\left( \sum_{ i_k\in D}b_{ i_k}\delta_{ i_k}\right)$ be the natural Borel product probability measure on $D^{\mathbb{N}}$, where $\delta_{ \ell}$ is the Dirac measure on $D$ concentrated at $ \ell$. Then the measure
\[
\mu=\widetilde{\mu}\circ \tau^{-1}
\]
is a self-affine measure supported on $F$.
Following McMullen \cite{mcmullen} we set
\be\label{pi}
b_\ell = a_\ell^{(\log_n m -1)}/m^d\qquad (\ell \in D),
\ee
noting that $\sum_{\ell\in D} b_\ell =1$, to get a measure $\mu$ on $F$; thus the measures of the iterated rectangles are
\be\label{rect}
\mu\big( R_k (i_1,\ldots,i_k)\big)\ =\ b_{i_1}\cdots b_{i_k}\
=\ m^{-kd} ( a_{i_1} \cdots a_{i_k})^{(\log_n m -1)}.
\ee
Approximate squares are well-known tools in the study of self-affine carpets. Given $k\in \mathbb{N}$ let $l(k) = \lceil k \log_n m\rceil $ so that
\be\label{lkbounds}
k\log_n m \ \leq\ l(k) \ \leq k\log_n m \ + \ 1.
\ee
For such $k$ and $\mathbf{i} = (i_1,i_2, \ldots) \in D^{\mathbb{N}}$ we define the {\em approximate square} containing $\tau(\mathbf{i})$ as the union of $m^{-k}\times n^{-k}$ rectangles:
\begin{align*}
Q_k(\mathbf{i}) = Q_k(i_1,i_2, \ldots) = \bigcup\Big\{ &R_k(i_1',\ldots,i_k'): p_{i_{j}'}=p_{i_{j}}
\text{ for } j= 1, \ldots, k \\
& \text{ and } q_{i_{j}'}=q_{i_{j}} \text{ for } j= 1, \ldots, l(k) \ \Big\},
\end{align*}
recalling that $\ell = (p_\ell,q_\ell)$.
This approximate square has sides $m^{-k}\times n^{-l(k)}$ where
$n^{-1}m^{-k} \leq n^{-l(k)}\leq m^{-k}$.
Note that, by virtue of self-affinity and since the sequence $(p_{i_1},\ldots, p_{i_k})$ is the same for all level-$k$ rectangles $R_k (i_1,\ldots,i_k)$ in the same approximate square, the $a_{i_{l(k)+1}}\cdots a_{i_k}$ level-$k$ rectangles that comprise the approximate square $Q_k(\mathbf{i})$ all have equal $\mu$-measure. Thus, writing $L= \log_n\! m$,
\begin{eqnarray}
\mu(Q_k(\mathbf{i})) &=& m^{-kd} a_{i_1}^{(L -1)}\cdots a_{i_k}^{(L -1)}
\times a_{i_{l(k)+1}}\cdots a_{i_k}\label{squaremes1}\\
&=& m^{-kd} a_{i_1}^{L}\cdots a_{i_{k}}^{L}
\times a_{i_{1}}^{-1}\cdots a_{i_{l(k)}}^{-1}.\label{squaremes}
\end{eqnarray}
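As a consistency check on \eqref{squaremes1}, purely illustrative and using a hypothetical digit set of our own choosing, one can enumerate all level-$k$ rectangles inside a single approximate square and compare the total measure with the closed form; here $l(k)$ is taken to be the integer with $k\log_n m \leq l(k) \leq k\log_n m + 1$.

```python
import math
from itertools import product

m, n = 2, 3
D = [(0, 0), (0, 2), (1, 1)]                 # hypothetical digit set
L = math.log(m) / math.log(n)                # log_n m

col = {p: sum(1 for pp, _ in D if pp == p) for p, _ in D}
a = {ell: col[ell[0]] for ell in D}          # a_ell = rectangles in ell's column
d = math.log(sum(c ** L for c in col.values())) / math.log(m)   # Hausdorff dimension
b = {ell: a[ell] ** (L - 1) / m ** d for ell in D}              # McMullen weights

k = 3
l = math.ceil(k * L)                         # l(k) = 2 here
i = ((0, 0), (1, 1), (0, 2))                 # a fixed word of length k

# mu(Q_k(i)) by enumerating all level-k rectangles in the approximate square:
# p-digits agree for j <= k, q-digits agree for j <= l(k)
direct = sum(
    math.prod(b[w] for w in word)
    for word in product(D, repeat=k)
    if all(word[j][0] == i[j][0] for j in range(k))
    and all(word[j][1] == i[j][1] for j in range(l))
)

# closed form from (squaremes1)
closed = (m ** (-k * d)
          * math.prod(a[w] ** (L - 1) for w in i)
          * math.prod(a[i[j]] for j in range(l, k)))
assert abs(direct - closed) < 1e-12
```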
We now obtain an upper bound for $\overline{\dim}_\theta F$ which implies continuity at $\theta = 0$ and so on $[0,1]$. Recall that $a = \max_{\ell \in D}a_\ell \geq 2$.
\begin{prop}\label{propup}
Let $F$ be the Bedford-McMullen carpet as above. Then for $0<\theta <\frac{1}{4}(\log_n\! m)^2$,
\be\label{upbound}
\overline{\dim}_\theta F \ \leq \ \hdd F + \bigg(\frac{2\log(\log_m n)\log a}{\log n}\bigg) \frac{1}{-\log \theta}.
\ee
In particular, $ \underline{\dim}_\theta F$ and $ \overline{\dim}_\theta F$ are continuous at $\theta = 0$ and so are continuous on $[0,1]$.
\end{prop}
\begin{proof}
For $\mathbf{i} = (i_1,i_2,\ldots)$, rewriting \eqref{squaremes} gives
\be
\mu\big(Q_k(\mathbf{i})\big) = m^{-kd}
\bigg(\frac{ (a_{i_1}\cdots a_{i_k})^{1/k}}{(a_{i_1}\cdots a_{i_{l(k)}})^{1/l(k)}}\bigg)^{Lk}
\big(a_{i_1}\cdots a_{i_{l(k)}}\big)^{(kL/l(k))-1}.
\label{rewritemes}
\ee
We consider the two bracketed terms on the right-hand side of \eqref{rewritemes} in turn. We show that the first term cannot be too small for too many consecutive $k$ and that the second term is bounded below by $1$.
For $k>1$ with $l(k) =\lfloor k \log_n m\rfloor$ as usual, define
\be\label{fkstar}
f_k(\mathbf{i})\ \equiv\ f_k(i_1,i_2,\ldots)
\ =\ \bigg(\frac{ (a_{i_1}\cdots a_{i_k})^{1/k}}{(a_{i_1}\cdots a_{i_{l(k)}})^{1/l(k)}}\bigg).
\ee
We claim that for all $K\geq L/(1-L)$ and all $\mathbf{i} = (i_1,i_2,\ldots) \in D^{\mathbb{N}}$, there exists $k$ with $K\leq k \leq K/ \theta$ such that
\be\label{claim}
f_k(\mathbf{i})\ \geq \ a^{\log L/\log(L/2\theta)}.
\ee
Suppose this is false for some $(i_1,i_2,\ldots)$ and some $K\geq L/(1-L)$, so that for all $K\leq k \leq K/ \theta$, $ f_k(\mathbf{i}) < \lambda:= a^{\log L/\log(L/2\theta)}$, that is
\be\label{basin}
(a_{i_1} a_{i_2}\cdots a_{i_k})^{1/k}\ <\ \lambda (a_{i_1} a_{i_2}\cdots a_{i_{l(k)}})^{1/l(k)}.
\ee
Define a sequence of integers $k_{r}\, (r=0,1,2,\ldots)$ inductively by $k_{0} = K$ and for $r \geq 1$ taking $k_{r}$ to be the least integer such that
$ \lfloor k_{r} \log_n m \rfloor = k_{r-1}$.
Then $k_{r} \leq k_{r-1}/L +1$, and a simple induction shows that
$$k_{r}\ \leq \ L^{-r}\big(K + L/(1-L)\big)-L/(1-L) \ \leq\ L^{-r}\big(K + L/(1-L)\big)\qquad (r\geq 0).$$
Fix $N$ to be the greatest integer such that $k_N \leq K/\theta$. Then
$$ L^{-(N+1)} \big(K+L\big/(1 -L)\big)\ \geq\ k_{N+1}\ >\ K/\theta$$
so rearranging, provided $K\geq L/(1-L)$,
$$L^N\ <\ \frac{\theta\big(K + L\big/(1-L)\big)}{LK}\ \leq \ \frac{2\theta}{L},
$$
that is
\be\label{ineq1}
N > \frac{\log(2\theta/L)}{\log L}.
\ee
From \eqref{basin}
$$
(a_{i_1} a_{i_2}\cdots a_{i_{k_r}})^{1/{k_r}}\ <\ \lambda (a_{i_1} a_{i_2}\cdots a_{i_{k_{r-1}}})^{1/k_{r-1}} \qquad (1\leq r\leq N)
$$
so iterating
$$
1\ \leq\ (a_{i_1} a_{i_2}\cdots a_{i_{k_N}})^{1/k_N}\ <\ \lambda^{N} (a_{i_1} a_{i_2}\cdots a_{i_{K}})^{1/K}\ \leq \ \lambda^{N}a.
$$
Combining with \eqref{ineq1} we obtain
$$
\lambda \ >\ a^{-1/N} \geq a^{\log L/\log(L/2\theta)}
$$
which contradicts the definition of $\lambda$. Thus the claim \eqref{claim} is established.
For the second bracket on the right-hand side of \eqref{rewritemes} note that
$$ 0\ \leq\ (kL/l(k))-1\ =\ \frac{ k\log_n m -\lfloor k\log_n m \rfloor}{l(k)}\ \leq \frac{1}{l(k)}$$
so that
\be\label{brac2}
1\ \leq \ \big(a_{i_1}\cdots a_{i_{l(k)}}\big)^{(kL/l(k))-1}\ \leq a.
\ee
Putting together \eqref{rewritemes}, \eqref{claim} and \eqref{brac2}, we conclude that for all $K\geq L/(1-L)$ and all $\mathbf{i} \in D^{\mathbb{N}}$
there exists $ K\leq k \leq K/ \theta $ such that
$$\mu\big(Q_k(\mathbf{i})\big) \ \geq\ m^{-dk} a^{kL\log L/\log(L/2\theta)}
\ \geq\ m^{-dk} a^{2kL\log L/\log(1/\theta)}
\ =\ m^{-k(d+\epsilon(\theta))},$$
as $\theta \leq L^2/4$, where
$$\epsilon(\theta)\ =\ \frac{ -(\log a)\, 2L\log L}{\log m \log(1/\theta)} \
=\ \frac{2 \log(\log_m n)\log a}{\log n} \frac{1}{-\log \theta}.$$
Geometrically this means that for $K\geq L/(1-L)$ every point $z \in F$ belongs to at least one approximate square, $Q_{k(z)}$ say, with $K \leq {k(z)} \leq K/\theta$ and with $\mu(Q_{k(z)}) \geq m^{-k(d+\epsilon(\theta))}$. Since the approximate squares form a nested hierarchy, we may choose a subset $\{Q_{k(z_r)}\}_{r=1}^M \subset \{ Q_{k(z)}: z\in F\}$ that is disjoint (except possibly at boundaries of approximate squares) and that covers $F$. Thus
$$1\ =\ \mu(F)\ = \ \sum_{r=1}^M \mu (Q_{k(z_r)})\ \geq\ \sum_{r=1}^M m^{-k(z_r)(d+\epsilon(\theta))} \ \geq\ \sum_{r=1}^M (2^{-1/2}|Q_{k(z_r)}|)^{(d+\epsilon(\theta))} $$
where $| Q_k|$ denotes the diameter of the approximate square $Q_k$, noting that $| Q_k|\leq 2^{1/2} m^{-k}$. It follows that $ \overline{\dim}_\theta F \leq d+\epsilon(\theta)$ as claimed.
\end{proof}
The following lemma brings together some basic estimates that we will need to obtain a lower bound for the intermediate dimensions of $F$.
\begin{lem}\label{mcmass}
Let $\epsilon >0$. Then there exist $K_0\in \mathbb{N}$ and a set $E\subset F$ with $\mu(E) \geq \frac{1}{2}$ such that for all $\mathbf{i}$ with $\tau(\mathbf{i})\in E$ and all $k\geq K_0$,
\begin{equation}\label{uppermass}
\mu(Q_k(\mathbf{i}))\leq m^{-k(d-\epsilon)}
\end{equation}
and
\begin{equation}\label{lowermass}
\mu(R_k(\mathbf{i}))\geq \exp(-k(H(\mu)+\epsilon)),
\end{equation}
where $d=\dim_\textup{H} F$ and $H(\mu) \in (0,\log |D|)$ is the entropy of the measure $\mu$.
\end{lem}
\begin{proof}
McMullen \cite[Lemmas 3,4(a)]{mcmullen} shows that for $\widetilde{\mu}$-almost all $\mathbf{i} \in D^{\mathbb{N}}$
$$\lim_{k\to\infty} \mu(Q_k(\mathbf{i}))^{1/k} = m^{-d}.$$
Thus by Egorov's theorem we may find a set $\widetilde{E}_1\subset D^{\mathbb N}$ with $\widetilde{\mu}(\widetilde{E}_1) \geq \frac{3}{4}$, and
$K_1\in \mathbb{N}$ such that \eqref{uppermass} holds for all $\mathbf{i} \in \widetilde{E}_1$ and $k\geq K_1$.
Furthermore, it is immediate from the Shannon--McMillan--Breiman theorem and \eqref{rect} that for $\widetilde{\mu}$-almost all $\mathbf{i} \in D^{\mathbb{N}}$,
$$\lim_{k\to\infty} \mu(R_k(\mathbf{i}))^{1/k} = \exp(-H(\mu)),$$
and again by Egorov's theorem there is a set $\widetilde{E}_2\subset D^{\mathbb N}$ with $\widetilde{\mu}(\widetilde{E}_2) \geq \frac{3}{4}$, and
$K_2\in \mathbb{N}$ such that \eqref{lowermass} holds for all $\mathbf{i} \in \widetilde{E}_2$ and $k\geq K_2$.
The conclusion of the lemma follows taking $E =\tau(\widetilde{E}_1\cap \widetilde{E}_2)$ and $K_0 = \max\{K_1,K_2\}$.
\end{proof}
We now obtain a lower bound for $\underline{\dim}_{\theta}F$, showing in particular that $\underline{\dim}_{\theta}F> \hdd F$ for all $\theta>0$.
\begin{prop}\label{proplb}
Let $F$ be the Bedford-McMullen carpet as above. Then for $0\leq \theta \leq 1$,
\begin{equation}\label{lowerbnd}
\underline{\dim}_{\theta}F\geq \hdd F+\theta\frac{\log |D|-H(\mu)}{\log m}.
\end{equation}
\end{prop}
\begin{proof}
Fix $\theta\in(0,1)$, let $E\subset F$ and $K_0$ be given by Lemma \ref{mcmass}, and let $K\geq K_0$. We define a measure $\nu_K$ which assigns equal mass to all level-$K$ rectangles, and then subdivides this mass among sub-rectangles using the same weights as for the measure $\mu$, given by \eqref{pi}. This gives a measure to which we can apply the mass distribution principle, Lemma \ref{mdp}. Thus for $k\geq K$, writing $b_\ell = a_\ell^{(\log_n m -1)}/m^d$ as in \eqref{pi},
\begin{equation}\label{defnuk}
\nu_K\big( R_k (i_1,\ldots,i_k)\big)\ :=\ |D|^{-K}b_{i_{K+1}}\cdots b_{i_k}\
=\ |D|^{-K} m^{-(k-K)d} ( a_{i_{K+1}} \cdots a_{i_k})^{L-1}.
\end{equation}
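As a quick sanity check (a remark of ours, using the identity $\sum_{\ell\in D} b_\ell =1$, which is noted at the end of this proof), summing \eqref{defnuk} over all level-$k$ sequences confirms that $\nu_K$ has total mass $1$:

```latex
\[
\sum_{(i_1,\ldots,i_k)\in D^k} \nu_K\big(R_k(i_1,\ldots,i_k)\big)
\ =\ \sum_{(i_1,\ldots,i_K)\in D^K} |D|^{-K}
\sum_{(i_{K+1},\ldots,i_k)\in D^{k-K}} b_{i_{K+1}}\cdots b_{i_k}
\ =\ 1.
\]
```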
We now consider an approximate square $Q_k(\mathbf{i})$ containing the point $\mathbf{i}$. This approximate square is a union of rectangles $R_k(\mathbf j)$ which, as explained in our comment before \eqref{squaremes1}, each have the same $\mu$ measure equal to $\mu(R_k(\mathbf i))$. The same argument gives that for any $R_k(\mathbf j)\subset Q_k(\mathbf i)$
\[
\nu_K(R_k(\mathbf j))=\nu_K(R_k(\mathbf i))=\mu(R_k(\mathbf i)) \dfrac{|D|^{-K}}{\mu(R_K(\mathbf i))}
\]
where the final equality holds since the formula for $\nu_{K}$ differs from that of $\mu$ only in the mass it assigns according to the first $K$ letters. Putting this together allows one to express the $\nu_{K}$-measure of an approximate square of side length $m^{-k}$ in relation to the $\mu$-measure of such a square. For $\tau(\mathbf{i})\in E$ and $k\geq K$, the approximate square $Q_k(\mathbf{i})$ that contains the point $\mathbf{i}$ has $\nu_K$-measure
\begin{eqnarray}
\nu_{K}(Q_k(\mathbf{i}))&=&\dfrac{|D|^{-K}}{\mu(R_K(\mathbf{i}))}\mu(Q_k(\mathbf{i}))\label{nukmu}\\
&\leq& \dfrac{|D|^{-K} m^{-k(d-\epsilon)}}{\exp(-K(H(\mu)+\epsilon))},\nonumber
\end{eqnarray}
using Lemma \ref{mcmass}. (Alternatively \eqref{nukmu} may be verified directly using \eqref{rect}, \eqref{squaremes} and \eqref{defnuk}.)
Then
\begin{eqnarray}
\nu_{K}(Q_k(\mathbf{i}))& \leq & m^{-k(d-\epsilon) -K \log_m \big(|D|\exp(-H(\mu)-\epsilon)\big)}\nonumber\\
&\leq& m^{-k\big(d-\epsilon +\frac{K}{k}(\log |D|-H(\mu)-\epsilon)/\log m\big)}\label{inequal}.
\end{eqnarray}
We need bounds that are valid for all $k\in[K,K/\theta]$, corresponding to approximate squares of sides between (approximately) $m^{-K}$ and $m^{-K/\theta}$. For such $k$ we have $K/k \geq \theta$, so \eqref{inequal} gives
\begin{equation}\label{munu}\nu_{K}(Q_k(\mathbf{i}))\ \leq \ m^{-k\big(d-\epsilon+\theta(\log |D|-H(\mu)-\epsilon)/\log m\big)}\end{equation}
for all $\mathbf{i}$ with $\tau(\mathbf{i})\in E$ and integers $k\in[K,K/\theta]$, where $\mu (E) \geq \frac{1}{2}$.
To use our mass distribution principle we need equation \eqref{munu} to hold on a set of $\mathbf{i}$ of large $\nu_K$ mass, whereas currently we have that it holds on a set $E$ of large $\mu$ mass.
Let
$$E'=\tau\{\mathbf{i} : \mbox{inequality \eqref{munu} is satisfied for all } k\in[K,K/\theta] \}.$$
Firstly we observe that $Q_k(\mathbf{i})$ depends only on $(i_1,\ldots, i_k)$, and we are dealing with $k\leq K/\theta$, so the question of whether $\tau(\mathbf{i})\in E'$ is independent of $(i_{\lfloor K/\theta\rfloor+1}, i_{\lfloor K/\theta\rfloor+2},\ldots)$. Secondly, since $Q_k(\mathbf{i})$ is a union of rectangles $R_k(\mathbf{j})$, and $\nu_K( R_k (i_1,\ldots,i_k))$ is independent of $(i_1,\ldots ,i_K)$, the question of whether $\tau(\mathbf{i})\in E'$ is independent of $(i_1,\ldots ,i_K)$. Thus we can write
\[
E'=\bigcup_{i_{K+1}\ldots i_{\lfloor K/\theta\rfloor}\in I''} \bigg(\bigcup_{i_1\ldots i_K\in D^K} R_{\lfloor K/\theta\rfloor}(i_1,\ldots,i_{\lfloor K/\theta\rfloor})\bigg)
\]
for some set $I''\subset D^{\lfloor K/\theta\rfloor-K}$.
But using \eqref{defnuk} gives
\begin{eqnarray*}
\nu_K \bigg(\bigcup_{i_1\ldots i_K\in D^K} R_{\lfloor K/\theta\rfloor}(i_1,\ldots, i_{\lfloor K/\theta\rfloor})\bigg)&=& \sum_{i_1\ldots i_K\in D^K}\frac{1}{|D|^K}b_{i_{K+1}}\cdots b_{i_{\lfloor K/\theta\rfloor}}\\
&=& b_{i_{K+1}}\cdots b_{i_{\lfloor K/\theta\rfloor}}
\end{eqnarray*}
and \eqref{rect} gives
\begin{eqnarray*}
\mu \bigg(\bigcup_{i_1\ldots i_K\in D^K} R_{\lfloor K/\theta\rfloor}(i_1,\ldots, i_{\lfloor K/\theta\rfloor})\bigg)&=& \sum_{i_1\ldots i_K\in D^K}b_{i_1}\cdots b_{i_K}b_{i_{K+1}}\cdots b_{i_{\lfloor K/\theta\rfloor}}\\
&=& \bigg(\sum_{i_1\ldots i_K\in D^K}b_{i_1}\cdots b_{i_K}\bigg)b_{i_{K+1}}\cdots b_{i_{\lfloor K/\theta\rfloor}}\\
&=& b_{i_{K+1}}\cdots b_{i_{\lfloor K/\theta\rfloor}},
\end{eqnarray*}
as $\sum_{i\in D} b_i =1$. Since these quantities are equal we conclude that
\[
\nu_K(E')\ =\ \mu(E')\ \geq\ \mu(E)\ \geq\ {\textstyle \frac{1}{2}}
\]
as required.
Since \eqref{munu} holds for all $\mathbf{i} \in E'$, a straightforward variant on our mass distribution principle, where we use approximate squares instead of balls, gives
\[
\underline{\dim}_{\theta}F\ \geq\ \hdd F-\epsilon +\theta\frac{\log |D|-H(\mu)-\epsilon}{\log m}.
\]
Since $\epsilon$ can be chosen arbitrarily small, \eqref{lowerbnd} follows.
\end{proof}
Note that, in \eqref{lowerbnd},
\[
H(\mu)\ =\ -m^{-d}\sum_{\ell\in D } a_\ell^{L-1}\big((L-1)\log a_\ell - d\log m\big)\ \leq\ \log | D|
\]
with equality if and only if $\mu$ gives equal mass to all cylinders of the same length, which happens if and only if each column in our construction contains the same number of rectangles. This happens exactly when the box and Hausdorff dimensions coincide. Thus our lower bound gives that $\underline{\dim}_{\theta}F> \hdd F$ whenever $\theta>0$, provided that the Hausdorff and box dimensions of $F$ are different.
Since the measures that we have constructed to give lower bounds on $\underline{\dim}_{\theta}F$ are rather crude, it is unlikely that our lower bound for $\underline{\dim}_{\theta}F$ converges to ${\dim}_\textup{B} F$ as $\theta\to 1$. However, a lower bound which \emph{does} approach ${\dim}_\textup{B} F$ as $\theta\to 1$ is given by Proposition \ref{assouad}, noting that $\dim_\textup{A} F > \underline{\dim}_\textup{B} F = {\dim}_\textup{B} F$ provided ${\dim}_\textup{B} F > {\dim}_\textup{H} F$, see \cite{mackay,Fra}.
Many questions on the intermediate dimensions of Bedford-McMullen carpets remain, most notably finding the exact forms of
$\underline{\dim}_{\theta}F$ and $\overline{\dim}_{\theta}F$. In that direction we would at least conjecture that these intermediate dimensions are equal and strictly monotonic. One might hope to get better estimates using alternative definitions of $\mu$ in Proposition \ref{propup} and $\nu_K$ in Proposition \ref{proplb}, but McMullen's measure and our modifications seemed to work best when optimising mass distribution type estimates across $F$ and over the required range of scales.
\section*{Acknowledgements}
JMF was financially supported by a \emph{Leverhulme Trust Research Fellowship} (RF-2016-500) and KJF and JMF were financially supported in part by an \emph{EPSRC Standard Grant} (EP/R015104/1). We thank James Robinson for pointing out the reference \cite{KP}.
\bibliographystyle{plain}
% https://arxiv.org/abs/math/9806076
\title{On the volume of the polytope of doubly stochastic matrices}
\begin{abstract}
We study the calculation of the volume of the polytope $B_n$ of $n \times n$ doubly stochastic matrices; that is, the set of real non-negative matrices with all row and column sums equal to one. We describe two methods. The first involves a decomposition of the polytope into simplices. The second involves the enumeration of ``magic squares'', i.e., $n \times n$ non-negative integer matrices whose rows and columns all sum to the same integer. We have used the first method to confirm the previously known values through $n=7$. This method can also be used to compute the volumes of faces of $B_n$. For example, we have observed that the volume of a particular face of $B_n$ appears to be a product of Catalan numbers. We have used the second method to find the volume for $n=8$, which we believe was not previously known.
\end{abstract}
\section{Introduction}
\label{sec:intro}
We study the calculation of the volume of the polytope $B_n$ of
$n \times n$ doubly stochastic matrices; that is, the set of real nonnegative
matrices with all row and column sums equal to one. This polytope is
sometimes known as the Birkhoff polytope or the assignment polytope.
We will describe and evaluate two methods for computing the volume of $B_n$.
In the first method we decompose $B_n$ into a disjoint union of simplices all
of the same volume and count the simplices. The fact that this can be
done appears in \cite{St1}. This method applies to any face of $B_n$ as well.
In the second method we count the number of $n \times n$ nonnegative integer
matrices with all row and column sums equal to $t$ (sometimes called magic
squares) for suitable values of $t$.
These numbers allow us to compute the Ehrhart polynomial of $B_n$,
which (essentially) has the volume of $B_n$ as its leading coefficient.
It appears that this has been the most common method of computing the
volume of $B_n$. Sturmfels reports in \cite{Stu} on other work in
which the volume of $B_n$ has been computed for $n$ up to 7.
We have also used this method to compute the volume when $n=8$.
This study is largely expository since the two methods are not
new. However, the details about how we carry out these methods
may be of interest. We are not aware of any reports of others who
have carried out the simplicial decomposition method.
As a byproduct of our program for carrying out the simplicial
decomposition method, we are easily able to compute the volume of any
face (of any dimension) of $B_n$ provided that $n$ is not too large.
This allowed us to
discover that a certain special face of $B_n$ has a volume which appears
to be given by a simple product formula.
This formula is given in Conjecture \ref{conj:1}.
Our study resulted from a question \cite{Mi} of Victor Miller, who asked how
one could generate a doubly stochastic matrix uniformly at random.
It is not hard to see that it would be easy to generate
a random doubly stochastic matrix if one could easily calculate the
volume of any face of $B_n$. However the method described here for
calculating face volumes is practical only for small $n$.
In what follows we will make use of some well known properties of the
face structure of $B_n$:
the vertices of $B_n$ are precisely the $n!$ $n \times n$
permutation matrices; on the other hand,
for each pair $(i,j)$ with
$1 \le i,j \le n$, the doubly stochastic matrices with
$(i,j)$ entry equal to 0 form a {\it facet\/}\ (maximal proper face) of $B_n$ and all
facets arise in this way. See \cite{BS} for further properties and
references.
In general, it is convenient to identify the faces of $B_n$ with certain
$n \times n$ matrices of 0's and 1's, as follows.
First we identify a 0-1 matrix with the set of entries in the matrix that
are 1's. Thus, for two 0-1 matrices $A$ and $B$ of the same size,
we can define their union $A \cup B$ as the 0-1 matrix whose set
of 1's is the union of the sets of 1's of $A$ and $B$. {\it e.g.,\/}\
\[
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)
\; \cup \;
\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right)
\; = \;
\left(\begin{array}{ccc} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right).
\]
Similarly we can speak of one 0-1 matrix containing another and so forth.
Now to each face $F$ of $B_n$, we associate the matrix
$M$ which is the union of the vertices (permutation matrices) in $F$.
The facets of $B_n$ containing $F$ are precisely those associated with the
zero entries of $M$. Since any face is the intersection of the facets
containing it, any permutation matrix contained in $M$ must be a vertex
of $F$.
Thus the vertices of $F$ are precisely the permutation matrices contained
in $M$, so we can recover $F$ from $M$.
In this way we identify the faces of $B_n$ with the set of 0-1
matrices which are unions of permutation matrices.
Note that not every 0-1 matrix corresponds to a face of $B_n$. For example
\[
\left(\begin{array}{cc} 0 & 1 \\ 1 & 1 \end{array}\right)
\]
is not a union of permutation matrices, hence not a face of $B_2$.
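This identification suggests a direct test for small $n$: a 0-1 matrix represents a face of $B_n$ precisely when it equals the union of the permutation matrices it contains. A minimal sketch (the function name is ours; it brute-forces all $n!$ permutations, so it is only sensible for small $n$):

```python
from itertools import permutations

def is_face(M):
    """Return True if the 0-1 matrix M equals the union of the
    permutation matrices it contains, i.e. represents a face of B_n."""
    n = len(M)
    union = [[0] * n for _ in range(n)]
    for sigma in permutations(range(n)):
        if all(M[i][sigma[i]] for i in range(n)):
            for i in range(n):
                union[i][sigma[i]] = 1
    return union == [list(row) for row in M]

# The 2x2 example above fails the test, while the all-ones matrix (B_2 itself)
# passes: is_face([[0, 1], [1, 1]]) is False, is_face([[1, 1], [1, 1]]) is True.
```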
\section{Volume}
\label{sec:2}
It is easy to see that the dimension of $B_n$ is $(n-1)^2$.
Strictly speaking, the volume we wish to compute is the
$(n-1)^2$-volume of $B_n$ regarded as a subset of
$n^2$-dimensional Euclidean space. Thus, for example, the polytope
$B_2$ consists of the line segment joining the matrices
\[
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \quad\hbox{and} \quad
\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)
\]
and hence its volume is 2.
An $n \times n$ doubly stochastic matrix is determined
by its upper left $(n-1)\times (n-1)$ submatrix. The set of $(n-1)\times (n-1)$
matrices obtained this way is the set $A_n$ of all nonnegative $(n-1)\times (n-1)$
matrices with row and column sums $\le 1$ such that the sum of all the
entries is at least $n-2$. This is affinely isomorphic to
$B_n$. In the Appendix we show that the ratio of the volume of $B_n$
to the volume of $A_n$, regarded as a subset of Euclidean $(n-1)^2$ space,
is $n^{n-1}$. In some ways the volume of $A_n$ is easier to
understand since its dimension is equal to the dimension of its ambient space.
James Maiorana \cite{Ma} (and probably others) noted a Monte Carlo method for approximating
the volume of $A_n$. Consider the set $C_n$ of $(n-1)\times (n-1)$ nonnegative
matrices with row sums (but not necessarily column sums) $\le 1$. This is the
Cartesian product of $n-1$ unit simplices in Euclidean $(n-1)$-space so
its volume is $ \frac{1}{(n-1)!^{n-1}}$.
It is easy to choose points in $C_n$ uniformly at random. The probability
$\alpha_n$ that such a point is in $A_n$ is the ratio of the volume of
$A_n$ to that of $C_n$.
Thus we can
run Monte-Carlo trials to estimate $\alpha_n$ and
hence the volume of $A_n$.
For large $n$, this Monte Carlo method is impractical since $\alpha_n$ is too small.
However, it is useful for checking computations for small $n$.
A lower bound for $\alpha_n$ is given by Bona in \cite{Bo}.
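Maiorana's Monte Carlo check is easy to sketch. The function names, and the trick of sampling a row uniformly from the simplex $\{x \geq 0,\ \sum x_i \leq 1\}$ via the gaps between sorted uniforms, are ours. From the table of relative volumes below and the conversion at the end of this section, $\mathrm{vol}(A_3) = 3/4! = 1/8$ while $\mathrm{vol}(C_3) = 1/4$, so the estimate should approach $\alpha_3 = 1/2$:

```python
import random

def sample_row(d, rng):
    """Uniform point in {x in R^d : x_i >= 0, sum(x) <= 1}: the first d of
    the d+1 gaps determined by d sorted uniforms in [0,1]."""
    u = sorted(rng.random() for _ in range(d))
    return [u[0]] + [u[j + 1] - u[j] for j in range(d - 1)]

def estimate_alpha(n, trials, seed=0):
    """Monte Carlo estimate of alpha_n: the probability that a uniform
    point of C_n (rows in the simplex) also satisfies the column-sum and
    total-sum constraints defining A_n."""
    rng = random.Random(seed)
    d = n - 1
    hits = 0
    for _ in range(trials):
        rows = [sample_row(d, rng) for _ in range(d)]
        if (all(sum(rows[i][j] for i in range(d)) <= 1 for j in range(d))
                and sum(map(sum, rows)) >= n - 2):
            hits += 1
    return hits / trials
```

The volume estimate is then $\alpha_n/((n-1)!)^{n-1}$; as noted above, for large $n$ the hit rate $\alpha_n$ is too small for this to be practical.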
There is a more natural unit for the volume of $B_n$ and
its faces. This is based on the fact that the vertices of $B_n$
are integer matrices. Suppose that $F$ is a $d$-dimensional
face of $B_n$. Since its vertices have integer coordinates, the integer
points in the affine span of $F$ comprise a
$d$-dimensional affine lattice $L$. Given such a lattice there is
a minimum volume of any $d$-simplex with vertices in $L$. Lattice points
$w_0,\dots,w_d$ are the vertices of one of these minimum volume simplices
if and only if every point of $L$ is uniquely expressible in the
form $ \sum_{i=0}^d k_i w_i$, where the $k_i$'s are integers whose sum is 1.
The {\it relative volume} of a face $F$ is the volume of
$F$ expressed in units equal to the volume of a minimal simplex in $L$.
The relative volume of a face is the
same whether regarded as a face of $B_n$ or as a face of $A_n$, since
the mapping from $B_n$ to $A_n$ (by taking the upper left $(n-1) \times (n-1)$ minor)
preserves integrality of points.
Here are the currently known relative volumes of $B_n$.
\[
\begin{array}{cr}
n & \mbox{Relative Volume of $B_n$} \\
1 & 1\\
2 & 1\\
3 & 3\\
4 & 352\\
5 & 4718075\\
6 & 14666561365176\\
7 & 17832560768358341943028\\
8 & 12816077964079346687829905128694016
\end{array}
\]
To convert relative volumes to true volumes, we need to know the
volume of a minimal simplex of $A_n$.
But the affine span of $A_n$ is all of $(n-1)^2$-dimensional space. Hence
the volume of a minimal simplex in $A_n$ is $\frac{1}{((n-1)^2)!}$,
and the volume of a minimal simplex in $B_n$ is $\frac{n^{n-1}}{((n-1)^2)!}$.
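For instance, this conversion gives the true volume of $B_3$ as $3\cdot 9/24 = 9/8$, and recovers the volume $2$ of the segment $B_2$ computed at the start of this section. A minimal sketch (the function name is ours):

```python
from fractions import Fraction
from math import factorial

def true_volume(n, relative_volume):
    """Convert the relative volume of B_n to its true ((n-1)^2)-dimensional
    volume by multiplying by the minimal-simplex volume n^(n-1)/((n-1)^2)!."""
    return Fraction(relative_volume) * n ** (n - 1) / factorial((n - 1) ** 2)

# true_volume(2, 1) == Fraction(2, 1)  and  true_volume(3, 3) == Fraction(9, 8)
```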
\section{Triangulations}
\label{sec:3}
We call the first method for computing the volume of $B_n$ the
{\it triangulation method.\/}\ The method applies to the
calculation of the volume of any polytope. The
essence is that we decompose the polytope into simplices and
sum the volumes of the simplices.
For $B_n$ we have used a standard method of decomposing a
polytope $P$ into simplices. See for example \cite{St1}.
To decompose $P$ into simplices, we
choose an arbitrary vertex $v$ and form the collection of facets of $P$
opposite $v$ (facets of $P$ not containing $v$). We then
recursively triangulate each facet. The triangulation of
$P$ is then formed by adding our chosen vertex to each simplex in
the triangulation of each of the facets.
The standard triangulations of $B_n$ and its faces have an unusual property,
given in \cite{St1}, for which we provide a self-contained proof below.
\begin{proposition}
In any standard triangulation of a face $F$ of $B_n$, every
simplex has minimal volume in the affine lattice determined by $F$.
\end{proposition}
{\it Proof:\/}\
Let $F$ be a $d$-dimensional face of $B_n$,
$v_0$ any vertex in $F$, and $G$ a facet of $F$ opposite $v_0$.
Suppose that a simplex in a standard triangulation of $G$
has vertices $v_1,v_2,\dots,v_d$.
We need to prove that the set of integer points of the affine space determined by $F$ is
the same as the set of points
$\sum_{i=0}^{d} k_i v_i $,
where the $k_i$ are integers whose sum is 1.
Of course all the integer combinations are in the affine span.
The question is whether there are any other points.
Any integral point of the affine span can be uniquely expressed in the form
$\sum_{i=0}^{d} r_i v_i $,
where the $r_i$'s are real numbers with sum 1.
Since $v_0$ is not in the face $G$, there is a facet of
$B_n$ containing $G$ but not $v_0$. Thus
$v_0$ must have at least one
entry equal to $1$ in the same position where all $v_i$, $i\ge 1$, have zeroes.
Thus, in the hypothetical combination above, $r_0$ must be an integer.
If we add $r_0(v_1-v_0)$ to the combination above, we obtain another integral
point in the affine span of $G$. It follows, using induction, that $r_1+r_0$, and
$r_2,\dots,r_d$ are integers and therefore all the $r$'s are integers,
as desired.\\
Note that, as a corollary, in any standard triangulation
of a face of $B_n$, the number of simplices in the
triangulation is equal to the relative volume of the face.
We also obtain an important computational principle. Given a face $F$
of $B_n$ and a vertex $v$ of $F$, the relative volume
of $F$ is the sum of the relative volumes of facets of $F$ opposite
$v$.
\section{The Triangulation Method for $B_n$}
\label{sec:4}
We now describe the triangulation method for computing the volume of
$B_n$. This is simply an elaboration of the principle that
the relative volume of a face is the sum of the relative volumes of the
faces opposite any vertex.
We apply this principle recursively. To get started we use the fact
that the relative volume of any zero-dimensional face of $B_n$ is 1.
In the most naive plan we calculate the relative
volumes of all faces. We first produce a list of all faces of each
dimension. For dimension 0, we know all the relative volumes are 1.
Then, for each face $F$ of dimension $d$ we select a vertex and
find the opposite facets (of dimension $d-1$). Assuming recursively
that their relative volumes have already been computed, we now find the relative volume
of $F$ by summing the relative volumes of the facets.
There are two serious drawbacks to the naive plan.
Perhaps the most pressing problem is that we need to compute the volumes of
an extremely large number of faces, since quite a few of the $2^{n^2}$
possible 0-1 matrices are actually faces of $B_n$.
Here we have recourse to a single important trick. If we
permute the rows and columns of the matrix representing a face, we obtain
the matrix of another face with the same volume. Also if we transpose
a matrix representing a face, we obtain another face of the same
volume. We regard matrices which can be obtained from each other by
these operations as equivalent. We can cut down on the cost of our
algorithm if we compute the volume only for a single
``canonical'' face in each equivalence class.
The next most difficult problem is to produce the lists of faces.
The most practical method that we found for producing faces is to
start with the single $(n-1)^2$-dimensional face, $B_n$ itself,
and successively produce faces of lower
dimension by intersecting with a facet of $B_n$.
While producing the faces we save the subface information so that
we can look up the volumes when we are done. Unfortunately
we need to construct a very large partially ordered set of faces before
we can calculate any volumes since the only volumes we know
are those of the zero-dimensional faces.
While the cost in memory is not so bad for $n$ less than 8, when we reach
$n=8$, we seem to need about 200 gigabytes of intermediate storage.
If the memory were available, the computation of the volumes
would be relatively easy. In fact we are able to carry out a
substantial fraction of the work before running out of memory.
There are two phases to our algorithm.
In the first phase we construct a collection of faces
together with information about which ones are facets of
which others. In particular, we successively compute, for
$d=(n-1)^2,(n-1)^2-1,\dots,0$, a collection ${\cal F}_d$ of
$d$-dimensional faces of $B_n$. We begin by
setting ${\cal F}_{(n-1)^2} = \{ B_n \}$,
{\it i.e.,\/}\ consisting of just the all 1's matrix representing
$B_n$ itself.
Given ${\cal F}_d$ we produce ${\cal F}_{d-1}$ as follows.
Start with ${\cal F}_{d-1}=\emptyset$.
For each face $F\in {\cal F}_d$ we select a vertex $v\in F$.
We then find
the facets of $F$ opposite $v$, canonicalize these faces, and add them
to ${\cal F}_{d-1}$. Having done this for all $F\in {\cal F}_d$,
we sort ${\cal F}_{d-1}$ and remove the duplicates.
Then, for each face $f\in{\cal F}_{d-1}$, we save a list of pointers to
the faces in ${\cal F}_d$ from which $f$ arose.
(Equivalent faces can appear several times as opposite faces
of the same face. When this happens, we include the pointer in the list of
pointers multiple times.)
This completes the first phase. In the
second phase we start with ${\cal F}_0$ and work up to higher
dimensions, calculating the relative volume of every saved face until
we obtain the volume of $B_n$ itself. This is quite fast, requiring just
one addition for each saved pointer.
Note that once the pointers are constructed we do not
need the faces themselves, unless we want to know which
face has each of the intermediate volumes we are computing.
For larger values of $n$, the accumulators used for calculating
the volumes will overflow. But we can get around this
problem by using multiple precision arithmetic or by
performing the volume calculation several times modulo various
primes and combining the results with the Chinese Remainder Theorem.
The main computational work of our algorithm takes place in three steps.
\begin{enumerate}
\item for each face $F\in {\cal F}_d$, find a vertex $v\in F$.
\item determine the facets of $F$ opposite $v$.
\item put these opposite facets into canonical form.
\end{enumerate}
We now describe how each of these steps is done.
One important decision is the data structure for storing faces. We
identify each face with the 0-1 matrix which is the union of its
vertices (regarded as permutation matrices). For $n\le 8$ it is
convenient to represent each face as $n^2$ bits of a single word,
where the words of a (64-bit) computer are regarded as 64-long arrays
of bits.
In Step 1 we are given a face $f$ represented by a 0-1 matrix and
we are looking for a permutation matrix $\pi$ contained in $f$. This
could be done with the assignment algorithm or one of the methods for
finding maximum matchings, but for the small values of $n$ that we
were using, it was quicker to use a backtracking search method, as follows.
The matrix $f$ has at least one 1 in its first row.
We guess one of these as the location of the 1 in the first row of $\pi$.
We then guess the location of the 1 in the second row of $\pi$,
bearing in mind that it cannot be in the same column as the 1 in the first row.
We continue this way searching for the location of the 1 in subsequent rows.
We backtrack if we reach a row in which there are no feasible choices.
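The backtracking search just described might look as follows (a sketch; the function name is ours, and the permutation is returned as the tuple of column indices of its 1's):

```python
def find_vertex(f):
    """Backtracking search for a permutation matrix contained in the 0-1
    matrix f, returned as a tuple sigma with f[i][sigma[i]] == 1 for all
    rows i, or None if f contains no permutation matrix."""
    n = len(f)

    def extend(row, used, sigma):
        if row == n:
            return tuple(sigma)
        for col in range(n):
            if f[row][col] and col not in used:  # guess the 1 in this row
                used.add(col)
                sigma.append(col)
                result = extend(row + 1, used, sigma)
                if result is not None:
                    return result
                used.discard(col)                # no feasible completion: backtrack
                sigma.pop()
        return None

    return extend(0, set(), [])
```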
Now we consider Step 2. Given a face $f$ and a vertex $\pi$ we need to find
the facets of $f$ opposite $\pi$.
For a moment let us ignore $\pi$ and consider the general problem of
constructing the facets of $f$. The main principle is that each facet
of $f$ can be obtained by intersecting $f$ with a facet of $B_n$ that does not
contain $f$. Consider the facet corresponding to the pair $(i,j)$.
The facet does not contain $f$ if $f_{ij}=1$. To intersect $f$ with this
facet we start by replacing $f_{ij}$ with 0, obtaining a 0-1
matrix $g$. The face which is the intersection of $f$ with the facet
$(i,j)$ is then the largest face $h$ contained in $g$.
The matrix $h$, which is the union of the permutation matrices contained in
$g$, can be strictly contained in $g$. Given one of the 1's in $g$, to test
whether it is in $h$, we search for a permutation matrix in $g$ which uses
the 1 in question. This can be done with our
backtracking search algorithm. The 1 in question is in $h$ precisely
when this search succeeds. When the position of a 1 in $g$ is zero in
$h$ we say it is {\it forced} to zero.
For example if $n=3$ and $f$ is the 3-dimensional face with matrix
\[
\left(\begin{array}{ccc} 0 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array}\right)
\]
then the intersection of $f$ with the facet corresponding to the middle entry
of the top row is one-dimensional, with matrix
\[
\left(\begin{array}{ccc} 0 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \end{array}\right).
\]
In this example two zeroes in the last column are forced.
This example also shows that although every facet of $f$ is the intersection of
$f$ with a facet of $B_n$, the converse is not true and the dimension of the
intersection can be too small to be a facet of $f$.
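The forced-zero computation can be sketched as follows (helper names are ours): a 1 of $g$ survives in $h$ exactly when some permutation matrix in $g$ passes through it, which we test with the backtracking search restricted to use that entry.

```python
# Largest face h contained in a 0-1 matrix g: keep a 1 of g iff some
# permutation matrix inside g uses that entry; otherwise it is forced to 0.
def extends_to_permutation(g, forced_row, forced_col):
    n = len(g)
    def search(row, used):
        if row == n:
            return True
        if row == forced_row:  # this row must use the prescribed entry
            return forced_col not in used and search(row + 1, used | {forced_col})
        return any(g[row][c] == 1 and c not in used
                   and search(row + 1, used | {c}) for c in range(n))
    return g[forced_row][forced_col] == 1 and search(0, frozenset())

def largest_face(g):
    n = len(g)
    return [[1 if g[i][j] == 1 and extends_to_permutation(g, i, j) else 0
             for j in range(n)] for i in range(n)]
```

On the example above it reproduces the two forced zeroes in the last column.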
There are some additional simplifications when we search for the facets of $f$
opposite a given vertex $\pi$ of $f$. If $g$ is a facet of $f$ not containing
$\pi$, then $g$ must contain a 0 in place of one of the 1's of $\pi$. Thus
there are at most $n$ facets of $f$ opposite $\pi$.
Observe that if $g$ is a facet of $f$ not containing
$\pi$, then $g \cup \pi$ is a union of permutation matrices and
therefore a face of $B_n$ containing $g$ and $\pi$. Thus $g \cup \pi=f$.
This implies an important and helpful principle. When we introduce a 0 at a
1 of $\pi$ and this results in a facet of $f$,
then the only other positions that might be forced to zero are
those of the other 1's of $\pi$. Thus we can loop through the $n$ 1's of $\pi$
one at a time and, for each of these, introduce a 0 and determine what other
1's of $\pi$ are forced to be 0 and produce accordingly a matrix, which we
call a candidate. We obtain a set of
$n$ candidates among which all the facets opposite $\pi$ must occur.
(This list can have duplicates which we remove.) Of these
candidates the facets are those which are maximal under inclusion.
Indeed, it is clear that every candidate contains a face that has
the same intersection with $\pi$. But this face is contained in a
facet which has an intersection with $\pi$ that is at least as large.
Thus every candidate is contained in at least one facet,
and the facets are precisely the maximal candidates.
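The candidate construction can be sketched as follows (helper names are ours). On the $3$-dimensional example above, which is a simplex on $4$ vertices, it returns the single facet opposite the chosen vertex.

```python
# Facets of a face f opposite the vertex perm (perm[i] = column of the 1 in
# row i), via the candidate construction: zero one 1 of perm at a time,
# force to 0 the other 1's of perm that no longer lie on any permutation
# matrix, and keep the maximal candidates.
def facets_opposite(f, perm):
    n = len(f)
    def exists_perm(avoid, use):
        # permutation matrix in f avoiding entry `avoid`, through entry `use`
        def search(row, used):
            if row == n:
                return True
            cols = [use[1]] if row == use[0] else range(n)
            return any(f[row][c] == 1 and c not in used and (row, c) != avoid
                       and search(row + 1, used | {c}) for c in cols)
        return search(0, frozenset())
    candidates = set()
    for i in range(n):
        avoid = (i, perm[i])
        zeroed = {avoid} | {(r, perm[r]) for r in range(n)
                            if r != i and not exists_perm(avoid, (r, perm[r]))}
        candidates.add(tuple(tuple(0 if (r, c) in zeroed else f[r][c]
                                   for c in range(n)) for r in range(n)))
    def below(a, b):  # strict containment of 0-1 matrices
        return a != b and all(x <= y for ra, rb in zip(a, b)
                              for x, y in zip(ra, rb))
    return [c for c in candidates if not any(below(c, d) for d in candidates)]
```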
Finally we describe Step 3, which we call ``canonicalization''.
The most straightforward way to choose a canonical form
for a face $f$ is to apply every element of our group of
symmetries to $f$ and choose the image of $f$ with the least value (where
the bit pattern $f$ is regarded as an integer.)
But this is prohibitively slow.
Instead we make use of certain special functions, which we call scores,
which assign integers to every row and column of a 0-1 matrix.
The scores have the special property that
when rows are permuted,
the row scores are permuted the same way leaving the column scores
unchanged, whereas, when columns are permuted, the column scores are permuted
the same way leaving the row scores unchanged. An example of an allowable
score is to assign
to each row its row sum and to each column its column sum.
Given such scores we say that a matrix is in standard form if it satisfies the
following three properties:
\begin{enumerate}
\item the column scores are weakly increasing.
\item the row scores are weakly increasing.
\item in the case of tied row scores the rows are ordered lexicographically
as bit strings.
\end{enumerate}
For a given 0-1 matrix, once
its row and column scores have been computed it is easy to put a matrix
and its transpose into standard form by forcing each of the three
conditions above in the listed order.
For each face constructed,
we put both the face and its transpose into standard form
and finally choose the smaller of these two, regarded as integers,
as the ``canonical'' form that is saved.
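With the simplest allowable score (row and column sums), the standardisation and the final choice between a face and its transpose can be sketched as follows (function names are ours; ties between equal-score columns are simply left in their original order):

```python
# Standard form w.r.t. the row-sum/column-sum score: sort columns by score,
# then rows by (score, lexicographic bit string).
def standard_form(f):
    n = len(f)
    col_order = sorted(range(n), key=lambda j: sum(f[i][j] for i in range(n)))
    g = [tuple(row[j] for j in col_order) for row in f]
    g.sort(key=lambda row: (sum(row), row))
    return tuple(g)

def canonical(f):
    # "canonical" form: the smaller of the standard forms of f and f^T
    transpose = [tuple(col) for col in zip(*f)]
    return min(standard_form(f), standard_form(transpose))
```

Two faces equivalent under row/column permutations and transposition will usually, though as noted not always, receive the same canonical form.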
Note that we are abusing terminology a little here since although
the method always replaces a face by an equivalent face, it is
conceivable that equivalent faces will canonicalize to distinct faces.
When this happens, we still obtain correct volumes, but we end up doing
work which could be avoided if the equivalence were recognized.
However, if this event is rare, we obtain almost all the savings
of true canonicalization as described above (but without the excessive cost).
It turns out that just using row and column sums as the score functions
fails to recognize a substantial number of equivalences. What we need
are scores that tend to assign different values to different rows and
columns. Slightly more complicated scores do better. Given a
column score, we can produce a more complicated row score by assigning
to each row the sum (or any symmetric function) of the values of the
column scores of those columns for which 1's occur in the given row.
Similarly a row score can be used to produce a more complicated column
score. We can also add two row scores to obtain another row score or
two column scores to obtain another column score. By combining steps like this
we produced scores that were better at
distinguishing rows (and columns) without being much more expensive to
compute.
This concludes our description of the triangulation method.
As mentioned earlier it is reasonably practical for $n < 8$.
The times required on a 500 MHz DEC Alpha were as follows:
\[
\begin{array}{cr}
& \mbox{Time in seconds} \\
n<6 & \mbox{less than 0.1}\\
n=6 & 0.63\\
n=7 & 250.1
\end{array}
\]
Although the volumes of $B_n$ do not seem to follow a recognizable
pattern, it seemed conceivable that there would be faces of $B_n$ for
which the relative volumes had interesting properties.
One fairly natural class is the set of matrices for which the set of
zeroes of the matrix forms a Young tableau in a corner
of the matrix.
Since our triangulation method applies to any face of $B_n$, we were able
to check some natural classes of faces. It turned out that for the
simplest non-trivial Young tableau faces the volumes apparently
obey a simple rule, although we have not been able to supply a proof.
More precisely, suppose that $n\ge 2$ and that $F_n$ is the $n\times n$
matrix whose $(i,j)$ entry is 1 when $j \le i+1$ and 0 otherwise.
Then $F_n$ is a union of permutation matrices corresponding to a face of $B_n$
of dimension $\binom{n}{2}$ with $2^{n-1}$ vertices, and we have the following
\begin{conjecture}\label{conj:1}
The relative volume of $F_n$ is the product
\[
\prod_{i=0}^{n-2} \frac{1}{i+1}\binom{2i}{i}
\]
of the first $n-1$ Catalan numbers.\\
\end{conjecture}
We have verified this for $n \leq 12$.\\
Finally, we give some miscellaneous observations which may be useful
but do not actually enter our algorithm.
\begin{itemize}
\item In our method, we never needed to calculate the dimension
of a face since the way they were produced guaranteed their dimension.
However one may wonder how one can efficiently calculate the dimension
of a face. One of the most efficient methods makes use of the fact, discussed
in \cite{BS},
that the dimension is equal to $e+k-2n$, where $e$ is the number of
1's in the matrix of $F$ and $k$ the number of components in the graph
corresponding to $F$ ({\it i.e.,\/}\ the bipartite graph on $2n$
vertices in which row $i$ is joined to column $j$ when the $(i,j)$ entry of the
matrix of $F$ is 1).
\item The relative volume of any $d$-face $F$ can be computed
in several different ways since it is the sum of the relative volumes
of the facets opposite any vertex of $F$. This yields linear relations on the
volumes of $(d-1)$-faces of $B_n$. It seems conceivable that
these linear relations could be strong enough to yield useful
information about the volumes. However from our limited investigation
this does not appear to save anything in our computations.
\item Since our standard triangulations all involve minimum volume simplices,
one might wonder whether all minimum volume simplices with vertices
from the vertex set of $B_n$ belong to one of these triangulations.
For $n=4$, we found that there are 658584 minimum volume simplices whose
vertices are vertices of $B_4$. Of these, only 641112 belong to some
standard triangulation.
\end{itemize}
\section{The Magic Squares Method}
\label{sec:5}
In the next two sections we describe the magic squares method for
calculating the volume of $B_n$. We have no reason to
believe that our implementation is substantially different
from those used by others. (See \cite{DG}, \cite{Mo},\cite{SS},
and \cite{Stu}.) The only apparent novelty
is that we have carried out the computation when $n=8$.
We briefly explain here the connection between magic squares and
the volume of $B_n$.
It is known that for a $d$-dimensional polytope $P$ with integer vertices, for
any nonnegative integer $t$,
the number $e(P,t)$ of lattice points contained in $t\cdot P$ is a polynomial
of degree $d$ in $t$. This polynomial is called the
{\it Ehrhart polynomial\/}\ of $P$. Its leading coefficient is the
volume of $P$ in units equal to the volume of the fundamental domain of the
affine lattice spanned by $P$.
Thus if we know the values of $e(P,t)$ for values of $t$ from 0 to $d$,
we can find the Ehrhart polynomial by interpolation and in that way
determine the volume of $P$.
For $P=B_n$, this method is particularly attractive since the polynomial
is known to have certain symmetries, which make it necessary to calculate
the values of $e(B_n,t)$ for $t$ only up to and including
$\binom{n-1}{2}$ rather than $(n-1)^2$.
Note that $e(B_n,t)$ is exactly the number of $n \times n$ matrices
with nonnegative integer entries and all row and column sums equal to $t$,
i.e., the number of $n \times n$ magic squares with sum $t$.
In the next section we will describe how to count magic squares
relatively efficiently.
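For tiny parameters, $e(B_n,t)$ can also be checked directly by brute force (an illustration of ours, exponential in $n$ and $t$, unlike the method of the next section):

```python
from itertools import product

def magic_count(n, t):
    """Brute-force e(B_n, t): n-by-n nonnegative integer matrices with every
    row and column sum equal to t.  Only feasible for tiny n and t."""
    rows = [r for r in product(range(t + 1), repeat=n) if sum(r) == t]
    count = 0
    def extend(depth, col_sums):
        nonlocal count
        if depth == n:
            if all(s == t for s in col_sums):
                count += 1
            return
        for r in rows:
            new = tuple(a + b for a, b in zip(col_sums, r))
            # prune: a partial column sum above t can never come back down
            if all(s <= t for s in new):
                extend(depth + 1, new)
    extend(0, (0,) * n)
    return count
```

The values $1,6,21,55$ for $n=3$ and $t=0,\dots,3$ match the polynomial $e(B_3,t)=C(t+2,2)+3C(t+3,4)$ listed below.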
To see that we need only find $e(B_n,t)$ for values of $t$ up to and
including $\binom{n-1}{2}$ we refer to the following identities:
\begin{enumerate}
\item $e(B_n,t)=0$ for $-n+1 \le t \le -1$.
\item $e(B_n,-n-t)=(-1)^{n-1} e(B_n,t)$ for all $t$.
\end{enumerate}
These identities (conjectured in \cite{ADG}) are easy consequences of
Ehrhart's Law of Reciprocity,
which states that, for a $d$-dimensional polytope $P$ with integer vertices, and
$t>0$,
\[
e^*(P,t)=(-1)^d e(P,-t)
\]
where $e^*(P,t)$ denotes the number of integer points in
the interior of $P$. See \cite[Chapter 9]{H}, and \cite{E}
for proof and references.
\noindent {\it Proof of 1\/}:\\
$e^*(B_n,t)$ is the number of $n\times n$ matrices with positive integer
entries and all row and column sums equal to $t$.
Since all the entries are $\ge 1$, each row and column sum must be $\ge n$,
so $e^*(B_n,t)=0$ for $1 \le t \le n-1$.
By Ehrhart's Law of Reciprocity this implies $e(B_n,t)=0$ for
$-n+1 \le t \le -1$.
\noindent {\it Proof of 2\/}:\\
There is a one-to-one correspondence between $n\times n$ matrices with
nonnegative integer entries and row and column sums $t$ and
$n\times n$ matrices with positive integer entries and row and column
sums $n+t$. (Simply add 1 to each entry in matrices of the first type.)
Thus $e(B_n,t)=e^*(B_n,n+t)$. Applying Ehrhart's Law of Reciprocity,
the right-hand-side equals $(-1)^{(n-1)^2}e(B_n,-n-t)$, which simplifies
to $(-1)^{n-1}e(B_n,-n-t)$.
\medskip
The effect of the first identity is that we know $n-1$ zeroes of $e(B_n,t)$.
We also have $e(B_n,0)=1$. For each $t>0$, if we calculate the value of
$e(B_n,t)$, by the second identity we obtain also the value of $e(B_n,-n-t)$.
Thus if we calculate $e(B_n,t)$ for $t$ up to $\binom{n-1}{2}$, we have a total
of $n-1+1+2\binom{n-1}{2}=(n-1)^2+1$ values of the $e(B_n,t)$ so we have
enough data to find the polynomial $e(B_n,t)$ by interpolation.
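For $n=3$ the whole scheme fits in a few lines: the degree is $(n-1)^2=4$, identity 1 supplies the zeroes at $t=-1,-2$, we know $e(B_3,0)=1$ and $e(B_3,1)=3!=6$, and identity 2 turns the latter into $e(B_3,-4)=6$. Exact interpolation then recovers the leading coefficient (a sketch of ours using exact rationals):

```python
from fractions import Fraction

def leading_coeff(points):
    """Leading coefficient of the unique polynomial of degree len(points)-1
    through the (t, value) pairs, via the divided-difference formula."""
    lead = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                denom *= xi - xj
        lead += Fraction(yi, denom)
    return lead

# five values of e(B_3, t) obtained as described above
pts = [(-4, 6), (-2, 0), (-1, 0), (0, 1), (1, 6)]
lead = leading_coeff(pts)
```

The result is $1/8$; multiplying by $4!$ gives the relative volume $3$, the coefficient of $C(t+3,4)$ below.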
\section{Counting Magic Squares}
\label{sec:6}
We now describe the method we used for counting the number of
$n \times n$ magic squares of row and column sum $t$ for $t\le\binom{n-1}{2}$.
This seems no different from the methods used by others \cite{DG}
to carry out the smaller cases.
Given an $m$-tuple $r=(r_1,\dots,r_m)$ and an $n$-tuple
$c=(c_1,\dots,c_n)$ of nonnegative integers, we denote by $N(r,c)$
the number of nonnegative integer matrices with row sums
$r_1,\dots,r_m$ and column sums $c_1,\dots,c_n$.
There are a few computational principles.
The first is that $N(r,c)=0$ unless $|r|=\sum_ir_i=\sum_jc_j=|c|$.
Next
note that
$N(r,c)$ is invariant under permutation of either the
$r$'s or the $c$'s. Finally the principle that leads to
substantial computational savings is that, for any integer $k$ (usually
near $m/2$)
\[
N(r,c)=\sum_x N((r_1,\dots,r_k),x)N((r_{k+1},\dots,r_m),c-x)
\]
where the sum is over all nonnegative $n$-tuples $x$ such that $|x|=r_1+\cdots+r_k$
and $x_i \le c_i$, $i=1,\dots,n$. This formula results from classifying
the matrices counted by $N(r,c)$ according to
the column sums of the submatrix formed from the first $k$ rows.
For fixed column sums $x_1,\dots,x_n$, the column sums of the
submatrix formed by the remaining rows must be $c_i-x_i$.
The total number of matrices in the class corresponding
to $x$ is the number of ways of choosing the top
submatrix multiplied by the number of ways of choosing the bottom.
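A memoised sketch of this recursion (names are ours; the actual implementation additionally exploits the symmetries discussed next):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def N(r, c):
    """Number of nonnegative integer matrices with row sums r and column
    sums c (tuples), by splitting the rows at k = len(r) // 2."""
    if sum(r) != sum(c):
        return 0
    if len(r) == 1:          # a single row is forced: its entries are c
        return 1
    k = len(r) // 2
    top, bottom = r[:k], r[k:]
    total = 0
    def split(i, left, x):
        # enumerate column sums x of the top block, 0 <= x_i <= c_i, |x| = |top|
        nonlocal total
        if i == len(c):
            if left == 0:
                total += N(top, x) * N(bottom, tuple(a - b for a, b in zip(c, x)))
            return
        for v in range(min(left, c[i]) + 1):
            split(i + 1, left - v, x + (v,))
    split(0, sum(top), ())
    return total
```

On constant tuples this reproduces the magic-square counts, and on $2$-tuples it agrees with the closed form $\min(x_1,x_2,y_1,y_2)+1$ used later.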
The counting of magic squares amounts to the calculation of
$N(r,c)$ with the ``constant'' $n$-tuples $r=c=(t,\dots,t)$.
For this special case there are a few simplifications.
We discuss the case when $n$ is even. The same ideas apply
with slight modification when $n$ is odd.
Suppose that $n=2m$ and we wish to calculate $e(B_n,t)$.
From our general principle we have
\[
e(B_n,t)=\sum_y N(R,y)N(R,T-y)
\]
where $R$ is the $m$-tuple of all $t$'s, $T$ is
the $n$-tuple of all $t$'s, and
$y$ runs over all nonnegative $n$-tuples satisfying $|y|=mt$,
and $y_i \le t$ for all $i$. For a $k$-tuple $y=(y_1,\dots,y_k)$,
let us denote by $M(y)$ the number of distinct $k$-tuples which
arise by permuting the $y_i$'s. So, if $z_1,\dots,z_l$ are distinct,
and $y$ is a $k$-tuple consisting of $k_1$ $z_1$'s, $k_2$ $z_2$'s, etc.,
then $M(y)=k!/(k_1!\dots k_l!)$. In terms of this notation
a more computationally efficient version of the preceding equation is
\begin{equation}\label{eq:a}
e(B_n,t)=\sum_y M(y)N(R,y)N(R,T-y)
\end{equation}
where now we further restrict $y$ to weakly increasing $n$-tuples.
We can apply this principle again to the calculation of
$N(R,y)$ and $N(R,T-y)$ that appear in the last formula. We find that
\begin{equation} \label{eq:b}
N(R,y)=\sum_x M(x)N(x,(y_1,\dots,y_m))N(R-x,(y_{m+1},\dots,y_n))
\end{equation}
where now $x$ runs over all weakly increasing nonnegative $m$-tuples
with $|x|=y_1+\cdots+y_m$ and $x_i\le t$ for all $i$.
We can save an additional factor of 2 by noting that the
quantities $N(R,y)$ are the same as $N(R,T-y)$ except
in a different order. Thus if we save the former in a suitable
array, we can look up the latter ones in the array rather
than computing them.
Notice that the ingredients for calculating the sums
$N(R,y)$ and $N(R,T-y)$ are the quantities
$N(x,y)$ where $x$ and $y$ vary over weakly increasing nonnegative $m$-tuples with
$x_i,y_i \le \binom{n-1}{2}$.
Thus it is sensible to precompute these
quantities and save the results before forming the sums for
$N(R,y)$ or the sum for $e(B_n,t)$.
For example, for $n=8$ we need to precompute the quantities $N(x,y)$
where $x$ and $y$ have length 4. Again it is easier to calculate
\[
N(x,y)=\sum N((x_1,x_2),z) N((x_3,x_4),y-z)
\]
where the sum is over all 4-long vectors $z$ with
$|z|=x_1+x_2$ and $z_i \le y_i$ for all $i$. However, the additional
simplification to a sum over weakly increasing sequences $z$ is not
available here.
Thus on the right side we require the values $N(x,y)$, for pairs $(x,y)$
where $x$ has length 2 and $y$ has length 4, not necessarily weakly increasing,
where the components of
$x$ and $y$ vary up to 21. It would be possible to precompute all the
needed values and save these as well for later use. This might
be advantageous since these results are used several times each.
However, for simplicity, we use a subroutine to compute these, in effect
repeating the calculation of any $N(x,y)$ whenever needed.
This subroutine in turn calls
a subroutine for counting $2 \times 2$ matrices with prescribed row and column
sums which calculates
$N((x_1,x_2),(y_1,y_2))=\min(x_1,x_2,y_1,y_2)+1$
whenever $|x|=|y|$.
The precalculation for $n=8$ requires about 20 minutes
on a 500 MHz DEC Alpha. The remaining calculation
also takes about 20 minutes. The first part can be calculated
in single precision. In the remaining parts we need some
sort of multiple precision method. We perform the calculation
modulo several primes and combine the results with the
Chinese Remainder Theorem. A similar program for $n=7$ requires
38 seconds.
Here are the Ehrhart polynomials $e(B_n,t)$, for $n=1,\dots,8$.
For each $n$, the coefficient of the last binomial coefficient
in the expression for $e(B_n,t)$ is the relative volume of $B_n$.
We express the Ehrhart polynomial of $B_n$ as an integer combination of
binomial coefficients
$C(t+n-1+k,n-1+2k)=\binom{t+n-1+k}{n-1+2k}$,
$k=0,\dots,\binom{n-1}{2}$,
because they are a basis for the polynomials satisfying
$p(-n-t)=(-1)^{n-1}p(t)$.
\begin{eqnarray*}
e(B_1,t) &=& C(t,0)\\
e(B_2,t) &=& C({t+1},1)\\
e(B_3,t) &=& C({t+2},2)+3C({t+3},4)\\
e(B_4,t) &=& C({t+3},3)+20C({t+4},5)+152C({t+5},7)+352C({t+6},9)\\
e(B_5,t) &=& C({t+4},4)+115C({t+5},6)+5390C({t+6},8)+\\
&& 101275C({t+7},10)+858650C({t+8},12)+\\
&& 3309025C({t+9},14)+4718075C({t+10},16)\\
e(B_6,t) &=& C({t+5},5)+714C({t+6},7)+196677C({t+7},9)+\\
&& 18941310C({t+8},11)+809451144C({t+9},13)+\\
&& 17914693608C({t+10},15)+223688514048C({t+11},17)+\\
&& 1633645276848C({t+12},19)+6907466271384C({t+13},21)+\\
&& 15642484909560C({t+14},23)+14666561365176C({t+15},25)\\
\end{eqnarray*}
\begin{eqnarray*}
e(B_7,t) &=& C({t+6},6)+ 5033C({t+7},8)+\\
&& 9090305C({t+8},10)+ 4562637436C({t+9},12)+\\
&&876755512997C({t+10},14)+ 80592643025748C({t+11},16)+\\
&& 4085047594855542C({t+12},18)+\\
&& 125166504299043921C({t+13},20)+\\
&& 2460507569635629206C({t+14},22)+\\
&& 32199612314177385616C({t+15},24)+\\
&& 285953447105799237366C({t+16},26)+\\
&& 1727929241168643056768C({t+17},28)+\\
&& 6989369809320320632154C({t+18},30)+\\
&& 18096158896344747268932C({t+19},32)+\\
&& 27093648035077238674360C({t+20},34)+\\
&& 17832560768358341943028C({t+21},36)\\
e(B_8,t) &=& C(t+7,7)+\\
&& 40312C(t+8,9)+\\
&& 544604804C(t+9,11)+\\
&& 1572522771472C(t+10,13)+\\
&&1433860489078360C(t+11,15)+\\
&& 546197610013169408C(t+12,17)+\\
&& 104573799019751624800C(t+13,19)+\\
&& 11404657872578818785152C(t+14,21)+\\
&& 773100275338739807806336C(t+15,23)+\\
&& 34668602440014649185072000C(t+16,25)+\\
&& 1075823106306592550013512704C(t+17,27)+\\
&& 23865735845675030268755397632C(t+18,29)+\\
&& 387264682746696963082402212768C(t+19,31)+\\
&& 4666750907574155613393947915904C(t+20,33)+\\
&& 42107239094874587731729608526080C(t+21,35)+\\
&& 284859465667770778104594682157824C(t+22,37)+\\
&& 1435919936068954265096148477657088C(t+23,39)+\\
&& 5307981556350553774098942855517184C(t+24,41)+\\
&& 13958946247270195588626193027208192C(t+25,43)+\\
&& 24706461764218063045041689495950080C(t+26,45)+\\
&& 26368507913706408235698183181290240C(t+27,47)+\\
&& 12816077964079346687829905128694016C(t+28,49)\\
\end{eqnarray*}
\section{Comparison}
\label{sec:7}
We now compare the two methods described above.
The main advantage of the first method seems to be that it applies
just as well to any face of $B_n$ as it does to $B_n$ itself.
To apply the algorithm to a face $F$ of $B_n$, we simply start
at the top level with the 0-1 matrix associated to $F$ and produce
lists of canonical subfaces as before.
In the second method it is not obvious how well one could do in computing
the volume of an arbitrary face $F$ of $B_n$. This would amount to
counting the number of magic squares with prescribed zeros and
row and column sums $t$ for possibly as many as $\dim(F)$ values of $t$.
We would not have $e^*(F,t)=0$ for $1 \le t \le n-1$, nor would we have
$e(F,t)=e^*(F,n+t)$, because of the prescribed zeros in $F$.
For certain $F$ ({\it e.g.,\/}\ those with the same number of prescribed
zeros in every row) we would have an analogous identity, and some
automatic roots, but in general we cannot guarantee any cutdown
in the number of values of $e(F,t)$ needed to determine the polynomial.
Furthermore in the actual counting of magic squares with certain prescribed
zeros, we would not be able to exploit the symmetries used in our algorithm
above.
The second method however has the advantage that, for computing
volumes of $B_n$ itself, it is much more feasible in terms of memory.
The second method also computes the Ehrhart polynomial.
It seems possible that the first method could be modified to
compute Ehrhart polynomials of the faces as well as just their
volumes. We would need to keep track of the numbers of
simplices of each dimension in a standard triangulation instead
of just the simplices of the largest dimension.
\newpage
\section*{Appendix: Ratio of Volumes of \protect\boldmath$B_n$ and \protect\boldmath$A_n$}
\label{sec:appendix}
Consider the linear mapping $\cal L$ from $(n-1) \times (n-1)$ matrices to $n \times n$
matrices which sends matrix $E_{i,j}$ which is all zero except for a 1 at $(i,j)$ to
the matrix $F_{i,j}$ which is all zero except for 1's at $(i,j)$ and $(n,n)$ and
$-1$'s at $(i,n)$ and $(n,j)$. If we follow $\cal L$ by the addition of
the $n \times n$ matrix that has the block form
\[
\left(\begin{array}{cc} 0 & J_{n-1,1} \\ J_{1,n-1} & 2-n \end{array}\right)
\]
where $J_{k,l}$ is the $k \times l$ matrix of all 1's, then we obtain
the affine mapping which sends $A_n$ to $B_n$. Thus if we
denote the ratio we seek by
$R$, we find that $R^2$ is
the determinant of the $(n-1)^2 \times (n-1)^2$ matrix
of dot products $F_{i,j} \cdot F_{k,l}=2^{\delta_{ik}}2^{\delta_{jl}}$.
But in general if $ x_{ik}$ and $y_{jl}$ are two $m \times m$ matrices
and $z$ is the $m^2 \times m^2$ tensor product matrix indexed by pairs $ij$
and $kl$ given by $ z_{ij,kl}=x_{ik}y_{jl} $ then $\det z = (\det x
\det y)^m $.
Our case is the special case that
$x=y=J_{n-1,n-1}+I_{n-1}$. Since the characteristic polynomial
of $-J_m$ is $\lambda^{m-1}(\lambda+m)$, the determinant of $J_{n-1}+I_{n-1}$ is
$n$. It follows that $R=n^{n-1}$.
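The computation can be double-checked numerically: build the images $F_{i,j}$, form the Gram matrix of their dot products, and compare its determinant with $(n^{n-1})^2$ (a sketch of ours using exact rational elimination):

```python
from fractions import Fraction

def gram_det_ratio(n):
    """det of the Gram matrix of the F_{i,j}; should equal (n^{n-1})^2."""
    m = n - 1
    def F(i, j):
        M = [[0] * n for _ in range(n)]
        M[i][j] += 1; M[n - 1][n - 1] += 1
        M[i][n - 1] -= 1; M[n - 1][j] -= 1
        return M
    basis = [F(i, j) for i in range(m) for j in range(m)]
    dot = lambda A, B: sum(A[r][c] * B[r][c] for r in range(n) for c in range(n))
    G = [[Fraction(dot(A, B)) for B in basis] for A in basis]
    det = Fraction(1)
    for k in range(len(G)):           # exact Gaussian elimination
        p = next(r for r in range(k, len(G)) if G[r][k] != 0)
        if p != k:
            G[k], G[p] = G[p], G[k]
            det = -det
        det *= G[k][k]
        for r in range(k + 1, len(G)):
            fac = G[r][k] / G[k][k]
            G[r] = [a - fac * b for a, b in zip(G[r], G[k])]
    return det
```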
\section{Introduction}
The classical double-bubble problem is concerned with the shape of
two sets of given volume under minimisation of their surface area. In the
Euclidean space, minimisers are enclosed by three spherical
caps, intersecting at an angle of $2\pi/3$. The proof of this fact in
${\mathbb R}^2$ dates back to \cite{Foisy}, and has then been
extended to
${\mathbb R}^3$ \cite{Hutchings} and ${\mathbb R}^n$ for $n \geq
4$ \cite{Reichardt}. See also \cite{Cicalese2} for a quantitative stability analysis
in two dimensions. A number of variants of the problem have also been
tackled, including double bubbles in spherical and hyperbolic spaces
\cite{Corneli,Corneli2,Cotton,Masters}, hyperbolic surfaces \cite{Boyer}, cones
\cite{Lopez,Morgan}, the $3$-torus \cite{Carrion,Corneli0}, the
Gau\ss\ space \cite{Corneli2,Milman}, and the anisotropic
Grushin plane \cite{Franceschi}.
The aim of this paper is to tackle a lattice version of the
double-bubble problem. We restrict our attention to the
square lattice ${\mathbb Z}^2$ and define the {\it lattice length} of
the interface
separating two disjoint sets $C,\,D \subset {\mathbb Z}^2$ as $Q(C,D)
= \#\{(c,d)\in C\times D \colon \, |c-d|=1\}$, where $|\cdot|$
is the Euclidean norm. The {\it lattice
double-bubble problem} consists in finding
two disjoint lattice subsets
$A$ and $B$ of fixed sizes $N_A,N_B \in \mathbb{N}$ solving
\begin{equation}
\label{eq:dbp}
\min\{P(A,B) \colon \ A,\, B \subset {\mathbb Z}^2, \ A \cap B =
\emptyset, \ \#A = N_A, \ \#B = N_B\},
\end{equation}
where the {\it lattice perimeter} $P(A,B)$ is defined by
\begin{align}\label{eq:dbp3} P(A,B) &= Q(A,A^c) + Q(B,B^c) - 2\beta Q(A,B)\notag\\
&= Q(A,A^c\setminus B) + Q(B,B^c\setminus A) + (2-2\beta) Q(A,B).
\end{align}
The latter definition features the parameter $\beta\in (0,1)$.
Note that the classical double-bubble case corresponds to the choice
$\beta=1/2$. In the following, we allow for the more general
$\beta\in (0,1)$, for this will be relevant in connection
with applications, see Section \ref{sec:equiv}. In particular,
$\beta$ models the interaction between the two sets. The reader is
referred to
\cite{Futer}, where cost-minimising networks featuring different
interaction costs are considered.
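As a quick sanity check (ours, not from the analysis below), the two expressions for $P(A,B)$ in \eqref{eq:dbp3} can be compared on small configurations:

```python
from fractions import Fraction

def Q(C, D):
    """Lattice interface length: pairs in C x D at Euclidean distance 1."""
    return sum(1 for (x, y) in C for (u, v) in D if abs(x - u) + abs(y - v) == 1)

def perimeter(A, B, beta):
    # finitely many points of the complements suffice: only unit neighbours
    # of A and B can contribute to Q
    pts = A | B
    halo = {(x + dx, y + dy) for (x, y) in pts
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))} | pts
    Ac, Bc = halo - A, halo - B
    p = Q(A, Ac) + Q(B, Bc) - 2 * beta * Q(A, B)
    # the second expression in the definition gives the same value
    assert p == Q(A, Ac - B) + Q(B, Bc - A) + (2 - 2 * beta) * Q(A, B)
    return p
```

For $\beta=1/2$ the pair of vertical dominoes $A=\{(0,1),(0,2)\}$, $B=\{(1,1),(1,2)\}$ has perimeter $10$.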
Analogously to the Euclidean case, we prove that minimisers $(A,B)$ of
\eqref{eq:dbp} are connected ($A$, $B$, and $A\cup B$ are connected in
the usual lattice sense, see below). Call {\it isoperimetric}
those subsets of the lattice which minimize $C \mapsto Q(C,C^c)$ under
given cardinality. Without claiming completeness, the reader is
referred to the monograph \cite{Harper} and to \cite{Bezrukov0,Biskup,Bobkov,Bollobas,Wang} for a
minimal collection of results on discrete
isoperimetric inequalities, to \cite{Cicalese,Mainini,Mainini2} for sharp
fluctuation estimates, and to \cite{Barrett} for
some numerical approximation. A second analogy with the Euclidean setting is that optimal pairs $(A,B)$ {\it do
not} consist of the mere union of two isoperimetric sets $A$ and
$B$, for the onset of an interface between $A$ and $B$ influences
their shape.
\begin{figure}[h]
\includegraphics[scale=0.4]{aspectratio.png}
\caption{A minimiser for $\beta=1/2$}
\label{fig:aspectratio}
\end{figure}
In contrast with the Euclidean case, the existence of minimisers for
\eqref{eq:dbp} is obvious here, for the minimisation problem is finite. Moreover,
the geometry of the intersection of interfaces is much simplified, as
an effect of the discrete geometry of the underlying lattice. In
particular, all interfaces meet at multiples of $\pi/2$ angles.
At finite sizes $N_A, \, N_B $, boundary effects are relevant and a
whole menagerie of minimisers of \eqref{eq:dbp} may arise, depending
on the specific values of $N_A, \, N_B $, and $\beta$. Indeed,
although uniqueness holds in some special cases, it cannot be expected
in general. We are however able to prove an a priori estimate on the
symmetric distance of two minimisers, which differ at most by \color{black}
$N^{1/2}_A=N^{1/2}_B$ \color{black} points.
As size scales up, whereas properly rescaled isoperimetric sets approach the square, $A$ and $B$ converge to suitable rectangles. In the limit $N_A = N_B \to \infty$ (and for $\beta=1/2$), we prove that minimisers of \eqref{eq:dbp}
converge to the {\it Wulff shape} configuration of Figure
\ref{fig:aspectratio}. That is, uniqueness is restored in the Wulff
shape limit. In fact,
in the crystalline-perimeter case, the
double-bubble problem for
$\beta=1/2$ has been already tackled in \cite{Morgan98}, see also the recent \cite{Duncan0} for an elementary
proof of the existence of minimisers. The case $\beta\not = 1/2$ is
addressed in \cite{Wecht} instead. In particular, the different possible
geometries of the Wulff shape, corresponding to different volume
fractions of the two phases, have been identified.
Let us now present our main results. We start by associating to each $\mathcal{V} \subset
{\mathbb Z}^2$
the corresponding {\it unit-disk graph}, namely the undirected simple graph $G=(\mathcal{V},\mathcal{E})$, where vertices are identified with the points in $\mathcal{V}$, and the set $\mathcal{E}\subset \mathcal{V} \times \mathcal{V}$ of edges contains one edge for each pair of points in $\mathcal{V}$ at distance $1$. We say that a subset
$ \mathcal{V} \subset {\mathbb Z}^2$ is {\it connected} if the corresponding unit-disk
graph is connected. Moreover, we indicate by $R_z : = {\mathbb Z}\times
\{z\}$ and $C_z=\{z\}\times {\mathbb Z}$ rows and columns, for all $z \in {\mathbb Z}$.
Our main findings read as follows.
\begin{theorem}\label{thm:main}
Let $(A,B)$ solve the double-bubble problem \eqref{eq:dbp}. Then,
\begin{enumerate}
\item[\rm i] {\rm (Connectedness)} The sets $A$, $B$, and $A \cup B$ are
connected. Moreover, the sets $A\cap R_z$, $B\cap R_z$, $(A \cup B)\cap R_z$,
$A\cap C_z$, $B\cap C_z$, and $(A \cup B)\cap C_z$ are connected
(possibly being empty) for all $z\in {\mathbb
Z}$;\\
\item[\rm ii] {\rm (Separation)} If $\max\{x \colon\, (x,z)\in A\} \leq \min\{x
\colon\, (x,z)\in B\} - 1 $ {\rm for some} $z \in {\mathbb Z} $, then the
same holds with equality {\rm for all} $z \in {\mathbb Z} $
(whenever not empty). An analogous
statement is valid for columns, possibly after exchanging the role of $A$
and $B$;\\
\item[\rm iii] {\rm (Interface)} Let $I\subset {\mathbb R}^2$ be the
set of midpoints of segments connecting points in $A$ with points in
$B$ at distance $1$. Then, for all $x\in I$ there exists $y\in I\setminus\{x\}$
with $|x-y| \in \lbrace 1/ \sqrt{2}, 1\rbrace $ and $I$ can be
included in the image of a piecewise-affine curve $\iota \colon [0,1] \to {\mathbb R}^2 $ with monotone
components.
\end{enumerate}
If $N_A=N_B=N$ and $\beta\leq 1/2$, we additionally have that
\begin{enumerate}
\item[\rm iv] {\rm (Minimal perimeter)}
$P(A,B)=\min\{P_*,P^*\}$, where
\begin{align*}
P_*&:= 4 \left\lceil \frac{N}{ \left\lfloor
\sqrt{\frac{2N}{2-\beta}}\right\rfloor }\right\rceil +2 \left\lfloor
\sqrt{\frac{2N}{2-\beta}}\right\rfloor (2-\beta),\\
P^*&:= 4\left\lceil \frac{N}{ \left\lceil
\sqrt{\frac{2N}{2-\beta}}\right\rceil }\right\rceil+2 \left\lceil
\sqrt{\frac{2N}{2-\beta}}\right\rceil (2-\beta);
\end{align*}
\item[\rm v] {\rm (Explicit solution)} There exist $h,\, \ell
\in {\mathbb N}$ and $0 \le r < h$ with $N = h\ell +r$,
$|h - \sqrt{2N/(2-\beta)}| \le
1$, and $|\ell - \sqrt{N(2-\beta)/2}| \le 2 $ such that, letting
\begin{align*}
A'&:=\{(x,y)\in {\mathbb Z}^2 \, \colon \, x \in[-\ell + 1 ,0], \, y \in
[1,h] \ \text{or} \ x=-\ell,\, y \in [1,r]\},\\
B'&:=\{(x,y)\in {\mathbb Z}^2 \, \colon \, x \in[1,\ell], \, y \in
[1,h] \ \text{or} \ x=\ell+1,\, y \in [1,r]\},
\end{align*}
the pair $(A',B')$ solves the
double-bubble problem \eqref{eq:dbp};\\
\item[\rm vi] {\rm (Fluctuations)} There exists a constant $C_\beta$
only depending on $\beta$ and an isometry $T$ of ${\mathbb Z}^2$ such that
\begin{equation}\label{eq:fluct}
\# (A \triangle T(A')) + \# (B \triangle T(B')) \leq C_\beta N^{1/2}
\end{equation}
(see the beginning of Section \ref{sec:law} for the definition of
isometry).
\end{enumerate}
\end{theorem}
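As a sanity check on item iv, the closed-form perimeter can be compared with brute-force minimisation for very small $N$. The following sketch is ours, not part of the paper (helper names such as \texttt{perimeter} and \texttt{brute\_min\_perimeter} are assumptions); it encodes $P(A,B)=Q(A,A^c\setminus B)+Q(B,B^c\setminus A)+(2-2\beta)Q(A,B)$ together with the formulas $P_*$ and $P^*$.

```python
import itertools
import math

NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def perimeter(A, B, beta):
    # P(A,B) = Q(A, A^c \ B) + Q(B, B^c \ A) + (2 - 2*beta) * Q(A, B),
    # where Q(X, Y) counts unit-distance bonds between X and Y
    occupied = A | B
    out = sum(1 for (x, y) in occupied for d in NBRS
              if (x + d[0], y + d[1]) not in occupied)
    cross = sum(1 for (x, y) in A for d in NBRS
                if (x + d[0], y + d[1]) in B)
    return out + (2 - 2 * beta) * cross

def P_low(N, beta):   # P_* of Theorem iv
    f = math.floor(math.sqrt(2 * N / (2 - beta)))
    return 4 * math.ceil(N / f) + 2 * f * (2 - beta)

def P_high(N, beta):  # P^* of Theorem iv
    c = math.ceil(math.sqrt(2 * N / (2 - beta)))
    return 4 * math.ceil(N / c) + 2 * c * (2 - beta)

def brute_min_perimeter(N, beta, size):
    # exhaustive search over disjoint A, B of size N inside a size x size box
    cells = [(x, y) for x in range(size) for y in range(size)]
    best = float('inf')
    for A in itertools.combinations(cells, N):
        rest = [c for c in cells if c not in A]
        for B in itertools.combinations(rest, N):
            best = min(best, perimeter(set(A), set(B), beta))
    return best
```

For $N=2$ and $\beta=1/2$ the optimal pair is a $2\times 2$ square split into two dominoes, and the brute-force minimum agrees with $\min\{P_*,P^*\}$.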
Theorem \ref{thm:main} is proved in several steps throughout the paper, by carefully characterising the
geometry of optimal pairs $(A,B)$. In fact, our analysis reveals
additional geometric detail, so that the statements in the coming
sections are
often more precise and more general
than Theorem~\ref{thm:main} in terms of the conditions on the parameters $N_A$, $N_B$, and
$\beta$. We prefer to postpone these details in order not
to overburden the introduction.
The connectedness of
optimal pairs $(A,B)$ is discussed in Section \ref{sec:algo} and Theorem \ref{thm:main}.i is
proved in Theorem \ref{thm:connected} and Corollary
\ref{cor:rows}. The separation property of Theorem \ref{thm:main}.ii follows from
Proposition~\ref{prop:algorithmdecreasesenergy} and
Corollaries~\ref{cor:nomissingrows}-\ref{cor:allononeside}. The
geometry of the interface between $A$ and $B$, namely Theorem~\ref{thm:main}.iii,
is described by Corollary \ref{cor:interface}.
In Section \ref{sec:info} we present a collection of examples, illustrating
the variety of optimal geometries. In particular, we show that optimal
pairs may not be unique and, in some specific
parameter ranges, exhibit quite distinct shapes. We then
classify different admissible pairs in Section \ref{sec:classi} by
introducing five
distinct classes of configurations.
The first of these classes, called Class $\mathcal I$ and corresponding
to Figure \ref{fig:aspectratio}, is indeed the reference one and is
studied in detail in
Section \ref{sec:reg1}. In Proposition
\ref{prop:classIregularisationstep2} we prove the existence of optimal
pairs in Class $\mathcal I$, among which there is the explicit one of Theorem
\ref{thm:main}.v. The minimal
perimeter in Theorem~\ref{thm:main}.iv is then computed by referring to this
specific class in Theorem~\ref{thm:classIexact}. The remaining classes are studied in Section \ref{sec:reg2}. We show that some of the classes cannot be optimal in the case $N_A = N_B$, and that the other ones can be modified to a configuration in Class $\mathcal{I}$ by an explicit regularisation procedure. We also observe that for arbitrarily large $N$ solutions may appear which are not in Class $\mathcal{I}$, see Proposition~\ref{prop:largeminimisersiv}.
Although optimal pairs $(A,B)$ are not unique, by carefully inspecting
our constructions, we are able to prove
that, in some specific parameter regime, two optimal pairs differ
by at most $C_\beta N^{1/2}$ points, up to isometries, and that this bound is actually
sharp. This is studied in Section~\ref{sec:law}; see
Theorem~\ref{thm:nonehalf}, which proves Theorem
\ref{thm:main}.vi. This scaling of the fluctuations is
specifically related to the presence of an interface between the two
sets $A$ and $B$. In fact, in the case of a single set $A$, optimal
configurations show fluctuations of order $N^{3/4}$; see
Subsection \ref{sec:opt} for details.
Although the setting of our paper is discrete, our results deliver some
understanding of the continuous case as well.
This is obtained by
considering the so-called {\it thermodynamic
limit} as $N \to \infty$. For all $ V =\{x_1, \dots, x_{N}\} \subset {\mathbb Z}^2$,
let $ \mu_V = (\sum_{i=1}^{N} \delta_{x_i/\sqrt{N}})/N$ be the corresponding
empirical measure on the plane and denote by $\mathcal L$ the
two-dimensional Lebesgue measure. We indicate by
\begin{align}
\mathcal
A:=\left(-\sqrt{\frac{2-\beta}{2}},0\right)\times \left(0, \sqrt{\frac{2}{2-\beta}}\right) \ \ \text{and} \ \ \mathcal
B:= \left(0, \sqrt{\frac{2-\beta}{2}} \right)\times \left(0,
\sqrt{\frac{2}{2-\beta}}\right)\label{eq:wulff}
\end{align}
the continuous Wulff shapes, see Figure \ref{fig:aspectratio}. Note
that $\mathcal L(\mathcal A)=\mathcal L(\mathcal B) = 1$. By
combining the explicit construction of Theorem \ref{thm:main}.v and the
fluctuation estimate \eqref{eq:fluct} we have the following.
\begin{corollary}[Wulff shapes]\label{cor: wulff}
Let $\beta\leq 1/2$ and $(A_N,B_N)$ be
solutions of \eqref{eq:dbp} with $N_{A_N} = N_{B_N} = N$, for all $N\in \mathbb N$. Then, there exist
isometries $T_N$ of ${\mathbb Z}^2$ such that
\begin{align}
\label{eq:meas}
\mu_{T_N A_N} \stackrel{\ast}{\rightharpoonup} {\mathcal L} \mres {\mathcal A} \ \ \text{and} \ \ \mu_{T_N B_N} \stackrel{\ast}{\rightharpoonup} {\mathcal
L} \mres {\mathcal B},
\end{align}
as $N \to \infty$, where the symbol $\stackrel{\ast}{\rightharpoonup}$ indicates the
weak-$\ast$ convergence of measures.
\end{corollary}
Note that, by taking $\beta=1$ in \eqref{eq:wulff}
(a case not covered by the corollary), the union $\mathcal A \cup \mathcal B$
forms a single square of side
$\sqrt{2}$, whereas for $\beta=0$ the Wulff shapes $\mathcal A$ and
$\mathcal B$ are two squares of side $1$.
Our results also allow us to solve the double-bubble problem in the
continuous setting of
${\mathbb R}^2$ with respect to a crystalline perimeter notion. More
precisely, for every set $D \subset {\mathbb R}^2$ of {\it finite
perimeter} we denote by $\partial^* D$ its {\it reduced boundary} \cite{Ambrosio-Fusco-Pallara,Maggi},
and define the {\it crystalline perimeter} and the {\it crystalline
length} as
$${\rm Per} (D) = \int_{\partial^* D} \| \nu \|_1 \, {\rm d} \mathcal
H^1, \quad {\rm L} (\gamma) = \int_{\gamma} \| \nu \|_1 \, {\rm d} \mathcal
H^1,$$
where $\nu$ is the
outward pointing unit normal to $\partial^*D$, $\|\nu\|_1 = |\nu_x|+|\nu_y|$, $ \mathcal
H^1$ is the one-dimensional Hausdorff measure, and $\gamma \subset
\partial^*D$ is measurable.
The continuous analogue of
\eqref{eq:dbp} is the {\it crystalline double-bubble problem}
\begin{align}
& \min\Big\{ {\rm Per} (A) + {\rm Per} (B) - 2\beta \, {\rm L} (\partial^*A
\cap \partial^*B)\, \colon \ \\
& \qquad \qquad A,\, B \subset {\mathbb R}^2 \ \text{of
finite perimeter}, \quad A \cap B =
\emptyset, \ \mathcal L(A)=\mathcal L(B)=1\Big\}.
\label{eq:dbp2}
\end{align}
By combining Theorem \ref{thm:main}.v and \ref{thm:main}.vi we obtain the following.
\begin{corollary}[Crystalline double bubble]\label{cor: cryst-db} For all $\beta \leq 1/2$,
the pair $(\mathcal A,\mathcal B)$ is a solution of
\eqref{eq:dbp2}. The minimal energy is given by $4
\sqrt{4-2\beta}$.
\end{corollary}
For the reference choice $\beta=1/2$, the solution of the crystalline
double-bubble problem~\eqref{eq:dbp2} is depicted in Figure
\ref{fig:aspectratio}, see also \cite{Duncan,Morgan98}.
Corollaries \ref{cor: wulff}
and \ref{cor: cryst-db} are proved in
Section \ref{sec:wulff}.
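The value $4\sqrt{4-2\beta}$ in Corollary \ref{cor: cryst-db} matches a direct computation for the rectangular pair \eqref{eq:wulff}: two $a\times b$ rectangles of unit area sharing the side of length $b$. The following numerical check is our own sketch (the function name is an assumption).

```python
import math

def crystalline_energy(beta):
    # Wulff pair: two a x b rectangles of unit area sharing the vertical side b,
    # with a = sqrt((2-beta)/2) and b = sqrt(2/(2-beta))
    a = math.sqrt((2 - beta) / 2)
    b = math.sqrt(2 / (2 - beta))
    per = 2 * (a + b)        # crystalline perimeter of an axis-aligned rectangle
    interface = b            # length of the shared side
    return 2 * per - 2 * beta * interface
```

Indeed, $4a + (4-2\beta)b = 2\sqrt{2(2-\beta)} + 2\sqrt{2(2-\beta)} = 4\sqrt{4-2\beta}$; for $\beta=0$ the energy is that of two unit squares, namely $8$.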
In the recent work \cite{Duncan}, the
difference in energy between any properly rescaled optimal discrete
configuration and the Wulff shape is estimated. In the case $N_A=N_B$
and $\beta\leq 1/2$, such an estimate can be recovered from the exact expressions of Theorem
\ref{thm:main}.iv and Corollary \ref{cor: cryst-db}. Note however that the
analysis in \cite{Duncan} covers the case
$N_A\not =N_B$ as well, although for $\beta=1/2$ only.
\section{Equivalent formulations of the double-bubble problem}\label{sec:equiv}
\subsection{Optimal particle configurations}\label{sec:opt} The double-bubble problem
\eqref{eq:dbp} can be equivalently recast in terms of ground states
of configurations of particles of two different types.
Let $A =\{ x_1,\dots, x_{N_A} \}$ and $B=\{ x_{N_A+1}, \dots,
x_{N_A+N_B}\}$ indicate the mutually distinct positions of particles
of two different
species, and assume that $A,\,B \subset {\mathbb Z}^2$, which in turn
restricts the model to the description of zero-temperature situations.
To the particle configuration $(A,B)$ we associate the {\it configurational energy}
\begin{align}\label{eq: basic eneg}
E(A,B) = \frac12 \sum_{i, j=1}^{N_A+ N_B} V_{ \rm sticky}(x_i, x_j),
\end{align}
where
\begin{equation*}
V_{ \rm sticky}(x_i, x_j) = \threepartdef{-1}{|x_i - x_j|=
1 \mbox{ and } x_i,x_j \in A \mbox{ or } x_i,x_j \in
B,}{-\beta}{|x_i - x_j| = 1 \mbox{ and } x_i \in A, x_j \in B \mbox{ or } x_i \in B, x_j \in A,}{0}{|x_i - x_j| \neq 1.}
\end{equation*}
The interaction density $V_{\rm sticky}(x_i, x_j) $ corresponds to the so-called
{\it sticky} or {\it Heitmann-Radin-type} potential
\cite{Heitmann} and models the binding energy of
the two particles $x_i$ and $ x_j $. In particular,
only first-neighbor interactions contribute to
the energy, and {\it intraspecific} (namely, of type $A-A$ or $B-B$)
and {\it interspecific} (type $A-B$) interactions are
quantified differently, with interspecific interactions being weaker since $\beta <1$.
The relation between the minimisation of $E$ and the double-bubble
problem \eqref{eq:dbp} is revealed by the equality
\begin{equation}
E(A,B)+2N_A + 2N_B =
\frac12P(A,B).\label{eq:eq}
\end{equation}
This follows by analysing the contribution to $E$ and $P$ of each
point. In fact, one could decompose
$$E(A,B) = \sum_{i=1}^{N_A+N_B} e(x_i), \quad P(A,B) =\sum_{i=1}^{N_A+N_B}
p(x_i), $$
where the single-point contribution to energy and perimeter is
quantified via
\begin{align*}
e(x) = -\frac12 \# \{\text{same-species neighbors of $x$}\} -
\frac{\beta}{2} \# \{\text{other-species neighbors of $x$}\}\\
p(x) = 4 - \# \{\text{same-species neighbors of $x$}\} -
\beta \# \{\text{other-species neighbors of $x$}\}.
\end{align*}
These identities entail \eqref{eq:eq}, which in turn ensures that ground states of $E$ and minimisers of
$P$ coincide, for any given sizes $N_A$ and $N_B$ of the sets $A$ and $B$.
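The identity \eqref{eq:eq} is easy to verify numerically. The sketch below is ours (helper names are assumptions); it counts every unordered first-neighbor bond once, matching the factor $\frac12$ in \eqref{eq: basic eneg}.

```python
def sticky_energy(A, B, beta):
    # E(A,B): -1 per same-species bond, -beta per A-B bond,
    # each unordered first-neighbor pair counted once
    pts = sorted(A | B)
    E = 0.0
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1:
                E -= 1.0 if ((p in A) == (q in A)) else beta
    return E

def perimeter(A, B, beta):
    # P(A,B) = Q(A, A^c \ B) + Q(B, B^c \ A) + (2 - 2*beta) * Q(A, B)
    occupied = A | B
    nbrs = lambda x, y: ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    out = sum(1 for p in occupied for q in nbrs(*p) if q not in occupied)
    cross = sum(1 for p in A for q in nbrs(*p) if q in B)
    return out + (2 - 2 * beta) * cross

# sample configuration: an L-shaped A next to a domino B
A = {(0, 0), (1, 0), (1, 1)}
B = {(2, 0), (2, 1)}
```

On this sample, $E(A,B)=-4$ and $P(A,B)=12$ for $\beta=1/2$, so $E + 2N_A + 2N_B = 6 = P/2$, as claimed.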
The geometry of ground states of $E$ results from the competition
between intraspecific and interspecific interaction.
In the extremal case $\beta=1$, intra- and interspecific
interaction are indistinguishable, and one can consider the whole
system $(A,B)$ as a single species. The minimisation of $E$ is then
the classical {\it edge-isoperimetric problem} \cite{Bezrukov,Harper},
namely the minimisation of $C \mapsto Q(C,C^c)$ under
prescribed size $\#C$.
Ground states are isoperimetric sets, the ground-state energy is
known, the possible distance between
two ground states scales as $N^{3/4}$ where $N = \# C$, and one could even directly prove
crystallization, i.e., the periodicity of ground states, under some stronger assumptions on the interaction
potentials~\cite{Mainini}.
In the other extremal case $\beta=0$, no interspecific interaction is
accounted for, and both phases $A$ and $B$ are independent isoperimetric
sets. In particular, if $N_A$ and $N_B$ are
perfect squares (or for
$N_A, \, N_B \to \infty$ and up to rescaling), the phases $A$ and $B$
are squares.
In the intermediate case $\beta\in(0,1)$, which is hence the interesting
one, intraspecific and
interspecific interactions compete, and none of $A$, $B$, and $A \cup B$
ends up being an isoperimetric
set. The presence of interspecific interactions adds some level
of rigidity. This is revealed by the fact, which we prove, that the distance between different ground
states scales like $N^{1/2}$, in contrast with the purely
edge-isoperimetric case, where fluctuations are of order
$N^{3/4}$ \cite{Mainini}, see also
\cite{Cicalese,Davoli,Mainini2,Schmidt}.
Although we do not directly deal with crystallization here, since the
sets $A$ and $B$ are {\it assumed} to be subsets of the lattice
$\mathbb Z^2$, let us
mention that a few rigorous crystallization results for
multispecies systems are available. First, the existence of quasiperiodic ground states in a
specific multicomponent two-dimensional system has been shown by
Radin \cite{Radin86}. One dimensional crystallization of {\it
alternating} configurations of two-species has been investigated by B\'etermin, Kn\"upfer, and Nolte
\cite{Betermin}, see also \cite{periodic} for some related
crystallization and non-crystallization results. In the
two-dimensional, sticky interaction case, two crystallization results in
hexagonal and square geometries are given in \cite{kreutz,
kreutz2}. Here, however, interspecific interactions favor the
onset of alternating phases.
\subsection{Finite Ising model} The
double-bubble problem \eqref{eq:dbp} can also be equivalently seen as the ground-state problem for a {\it finite} Ising model with ferromagnetic
interactions. In particular, given $C=A \cup B\subset {\mathbb Z}^2$
one describes the state of the system by $u \colon C \to \pm 1$,
distinguishing the $+1$ and the $-1$ phase. The Ising-type energy of the
system is then given by
$$F(C,u) = -\frac{1-\beta}{ 4}\sum_{\substack{x,y \in C \\ |x-y|=1}} u(x)\,
u(y) - \frac{1+\beta}{ 4} \sum_{\substack{x,y \in C \\ |x-y|=1}}
|u(x)\,
u(y)|.
$$
The first term above is the classical ferromagnetic interaction
contribution, while the second sum gives the total number of
interactions, irrespective of the phase. This second term is required since in our model
same-phase and different-phase interactions are both assumed to give negative contributions to the energy.
Under the above provisions, minimisers of the problem
$$\min\left\{F(C,u) \, \colon \, C \subset {\mathbb Z}^2, \ u \colon C \to \pm 1, \
\# \{x \in C \, \colon \, u(x)=1\} = N_A, \ \# \{x \in C \, \colon \, u(x)=-1\} = N_B \right\} $$
corresponds to solutions $(A,B)$ of the double-bubble problem
\eqref{eq:dbp}, under the
equivalence $A\equiv \{x \in C \, \colon \, u(x)=1\}$ and $B\equiv \{x \in
C \, \colon \, u(x)=-1\}$. In fact, each pair of first neighbors contributes $-1$
to $F$ if it belongs to the same phase and $-\beta$ if it belongs to
different phases, namely,
$$F(C,u) = E(A,B).$$
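This equality can be confirmed numerically. In the sketch below (our own helper names, not the paper's), the sums in $F$ run over ordered first-neighbor pairs, exactly as in the definition.

```python
def neighbours(p):
    x, y = p
    return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))

def ising_energy(C, u, beta):
    # F(C,u): sums over ordered first-neighbor pairs x, y in C
    F = 0.0
    for x in C:
        for y in neighbours(x):
            if y in C:
                F -= (1 - beta) / 4 * u[x] * u[y] + (1 + beta) / 4 * abs(u[x] * u[y])
    return F

def sticky_energy(A, B, beta):
    # E(A,B): each unordered bond once; -1 same species, -beta across
    pts = sorted(A | B)
    E = 0.0
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1:
                E -= 1.0 if ((p in A) == (q in A)) else beta
    return E

# a 2x2 double bubble: u = +1 on A, u = -1 on B
A = {(0, 1), (0, 2)}
B = {(1, 1), (1, 2)}
u = {**{p: 1 for p in A}, **{p: -1 for p in B}}
```

Each unordered same-phase bond appears twice in the ordered sums and contributes $2\cdot(-\tfrac{1-\beta}{4}-\tfrac{1+\beta}{4})=-1$; each cross-phase bond contributes $2\cdot(\tfrac{1-\beta}{4}-\tfrac{1+\beta}{4})=-\beta$, matching $E$.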
The literature on the Ising model is vast and the reader is referred to \cite{Cerf,McCoy} for a
comprehensive collection of results. Ising models are usually
investigated from the point of view of their thermodynamic limit $\#C
\to \infty$ and at positive temperature. In particular, models are
usually formulated on the whole lattice or on a large box with constant
boundary states. Correspondingly, the analysis of Wulff shapes is concerned with the study of a droplet of one phase in a sea of the other
one \cite{Cerf2}.
Our setting is quite
different, since our system is finite and boundary effects
matter. To the best of our knowledge, we contribute here
the first characterisation of ferromagnetic Ising ground states, where
the location $C$ of the
system is also unknown and results from minimisation.
Alternatively to the finite two-state setting above, one could
equivalently formulate the minimisation problem in the whole ${\mathbb
Z}^2$ by allowing a third state, to be interpreted as interaction-neutral. In
particular, we could equivalently consider the minimisation problem
$$\min\left\{F({\mathbb Z}^2,v) \, \colon \, v \colon {\mathbb Z}^2\to\{-1,0,1\}, \
\# \{x \in {\mathbb Z}^2 \, \colon \, v(x)=1\} = N_A, \ \# \{x \in {\mathbb Z}^2 \, \colon \, v(x)=-1\} = N_B \right\}. $$
The equivalence is of course given by setting $u=v$ on $C:=\{x\in {\mathbb
Z}^2\, \colon \, v(x)\not = 0 \}$.
\subsection{Finite Heisenberg model} The three-state formulation of
the previous subsection can easily be recast within the
framework of the classical Heisenberg model \cite{Tasaki}. In particular, we define
the vector-valued state function $s\colon M \to \{s_{-1}, s_0, s_1\}$ where
the box $M$ is given as $M:=[0,m]^2 \cap {\mathbb Z}^2$ for $m$
large. We choose the
three possible spins as
$$ s_0 =(-1,0), \ \ s_1 =\left(\beta, \sqrt{1-\beta^2}\right), \ \
s_{-1}=\left(\beta, -\sqrt{1-\beta^2}\right).$$
The energy of the system is defined as
$$H(s) = -\sum_{\substack{x,y \in M \\ |x-y|=1}} s(x)\cdot s(y).$$
For all $s\colon M \to \{s_{-1}, s_0, s_1\}$, let $A:=\{x \in M \, \colon \,
s(x)=s_1\} $ and $B:=\{x \in M \, \colon \,
s(x)=s_{-1}\}$. We are interested in the minimisation problem
\begin{align*}&\min\left\{H(s) \, \colon \, s\colon M \to \{s_{-1}, s_0, s_1\}, \ \#A=N_A , \ \#B=N_B \right\}.
\end{align*}
By letting $m$ be very large compared with $N_A$ and $N_B$,
we can with no loss of generality assume that $s=s_0$ close to the boundary $\partial M$.
Let us now show that the latter minimisation problem is indeed
equivalent to the double-bubble problem \eqref{eq:dbp}. To this aim,
we start by noting that the total number of first-neighbor interactions in
$M$ is $2m^2+2m$. First-neighbor interactions between identical states
contribute $-1$ to the energy, $s_0 - s_{1}$ and $s_0 - s_{-1}$
interactions contribute $-s_1\cdot s_0 = -s_{-1}\cdot s_0 =
\beta$, and $s_1- s_{-1}$ interactions contribute $-s_1\cdot s_{-1} =
1-2{\beta^2} $. We hence have that
\begin{align*}
H(s) + (2m^2 +2m) &= (\beta+1) \left( Q (A,
A^c\setminus B) + Q (B,
B^c\setminus A) \right) + (2-2\beta^2) Q(A,B) \\
& = \left(\beta+1 \right) \left( Q (A,
A^c\setminus B) + Q (B,
B^c\setminus A) +(2-2\beta)Q(A,B) \right)
\nonumber\\
&=
\left(\beta+1\right) P(A,B),
\end{align*}
so that minimising $H$ is actually equivalent to solving \eqref{eq:dbp}.
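The identity $H(s)+(2m^2+2m)=(\beta+1)P(A,B)$ can be tested on a small box. The following sketch is ours (helper names and the sample configuration are assumptions); each edge of $M$ is counted once, consistently with the total count $2m^2+2m$.

```python
import math

def heisenberg_identity(beta, m=6):
    s0 = (-1.0, 0.0)
    s1 = (beta, math.sqrt(1 - beta ** 2))
    sm1 = (beta, -math.sqrt(1 - beta ** 2))
    A = {(2, 2), (2, 3)}            # a small double bubble away from the boundary
    B = {(3, 2), (3, 3)}
    spin = lambda p: s1 if p in A else sm1 if p in B else s0
    M = {(x, y) for x in range(m + 1) for y in range(m + 1)}
    H, edges = 0.0, 0
    for (x, y) in M:
        for q in ((x + 1, y), (x, y + 1)):   # each unordered edge once
            if q in M:
                edges += 1
                sp, sq = spin((x, y)), spin(q)
                H -= sp[0] * sq[0] + sp[1] * sq[1]
    assert edges == 2 * m * m + 2 * m
    # P(A,B) for the same pair, via the perimeter formula
    occ = A | B
    nbrs = lambda x, y: ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    out = sum(1 for p in occ for q in nbrs(*p) if q not in occ)
    cross = sum(1 for p in A for q in nbrs(*p) if q in B)
    P = out + (2 - 2 * beta) * cross
    return H + edges, (beta + 1) * P
```

For $\beta=1/2$ the sample pair has $P(A,B)=10$, so both sides of the identity equal $15$.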
\subsection{Minimum balanced-separator problem} One can rephrase the
double-bubble problem \eqref{eq:dbp} as a minimum balanced-separator problem on an unknown
graph as well. Indeed, as interspecific contributions
are energetically less favored than
intraspecific ones, given the common occupancy ${\mathcal V} =A \cup B$ of the two
phases, one is asked to partition ${\mathcal V}$ into two regions $A$ and $B$ of given sizes in
such a way that the interface between $A$ and $B$ is minimal. This corresponds to a
minimum balanced-separator problem on the {\it unit-disk graph} corresponding to
${\mathcal V}$, i.e., finding a partition
${\mathcal V} = A \cup B$ into disjoint sets solving
$$\min \{ Q(A,B) \, : \, \#A =
N_A, \ \#B = N_B\}.$$
This is indeed a classical problem, with relevant applications in operations
research and computer science \cite{Nagamochi}.
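For a fixed occupancy ${\mathcal V}$, this inner minimisation can be carried out by exhaustive search on tiny instances. The sketch below is ours (the helper name is an assumption).

```python
import itertools

def balanced_min_cut(V, nA):
    # brute-force minimum of Q(A,B) over partitions V = A u B with #A = nA
    V = list(V)
    best = float('inf')
    for A in itertools.combinations(V, nA):
        Aset = set(A)
        cut = sum(1 for p in Aset for q in V
                  if q not in Aset and abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1)
        best = min(best, cut)
    return best

# On a 2 x 4 rectangle the optimal balanced separator runs across the
# short side, cutting exactly two edges.
V = [(x, y) for x in range(4) for y in range(2)]
```

Since the $2\times 4$ grid graph is bridgeless, no balanced partition can be separated by a single edge, so the minimum cut is $2$.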
Here, we generalise the above minimum balanced-separator problem by letting the underlying
graph vary as well and by simultaneously optimising its perimeter. In
particular, we consider
$$\min \{P(A,B) \, \colon \, {\mathcal V}=A \cup B, \ A \cap B =\emptyset, \ \#A =
N_A, \ \#B = N_B\},$$
where $({\mathcal V}, {\mathcal E})$ is again the unit-disk graph related to $A \cup B\subset
{\mathbb Z}^2$.
Also in this setting, the competition between minimisation of the
interface and of the perimeter is evident. Recall $P(A,B) =
Q(A,A^c \setminus B) + Q(B,B^c \setminus A) + (2-2\beta) Q(A,B)$.
On
the one hand, a graph with few edges between $A$ and $B$ would
give a small cut $Q(A,B)$, while necessarily having large
$Q(A,A^c \setminus B) + Q(B,B^c \setminus A)$. On the other hand, a
graph with small $Q(A,A^c \setminus B) + Q(B,B^c \setminus A)$ has $A \cup
B$ close to being a square, and for $N_A = N_B$ all possible cuts partitioning it in two
are approximately as long as its side.
\section{Notation}
Let us collect here some notation, to be used throughout the
paper.
For each pair of disjoint sets $A,B \subset \mathbb{Z}^2$
we call the elements of $A$ and $B$ the {\it $A$-points} and {\it
$B$-points}, respectively. We let $N_A = \# A$ and $N_B = \# B$. For
any point $p \in A \cup B$, we denote its first and second coordinate
by $p = (p_x, p_y)$. We say that two points are {\it connected by an
edge} if their distance is equal to one. (Equivalently, we sometimes
use the words {\it bond} or {\it connection} in place of {\it edge}.)
We say that a set $S \subset A \cup B$ is {\it connected} if it is
connected as a graph with edges described above, or equivalently
if the corresponding unit-disk graph is connected.
For the sake of definiteness, from here on, our notation is
adapted to the
setting of Subsection~\ref{sec:opt}. In particular, we say that a
configuration is {\it minimal} (or {\it optimal}) if it
minimises the energy $E$ given in \eqref{eq: basic eneg} in the class of configurations with the same
number of $A$- and $B$-points. Recall once more that minimisers
of $E$ and solutions of the double-bubble problem \eqref{eq:dbp}
coincide.
Since the number of points is finite, any configuration lies in a
bounded square. Suppose that a configuration $(A,B)$ has $N_{\rm
row}$ rows (i.e., there are $N_{\rm row}$ rows in $\mathbb{Z}^2$
with at least one point from $A \cup B$). For $k=1,...,N_{\rm row}$,
denote by $R_k$ the $k$-th row (counting from the top). In a similar
fashion, $N_{\rm col}$ denotes the number of columns, and $C_k$
indicates the $k$-th column (counting from the left). To simplify the
notation, given a finite set $X \subset \mathbb{Z}^2$, we denote
$X^{\rm row}_k = X \cap R_k$ and $X_k^{\rm col} = X \cap C_k$. We will
typically apply this to the sets $A$, $B$, their union or some of
their subsets. Moreover, denote by $n_k^{\rm row}$ the number of
$A$-points in the row $R_k$ and by $m^{\rm row}_k$ the number of
$B$-points in the row $R_k$. In a similar fashion, $n_k^{\rm col}$ and
$m_k^{\rm col}$ denote the number of $A$- and $B$-points in column
$C_k$, respectively. In the following, we will frequently modify
configurations. Not to overburden the notation, when we use the
notation $n^{\rm row}_k$ and $m_k^{\rm row}$ (and similarly for
columns) we always refer to the configuration in the same sentence,
unless otherwise specified.
For two points $p,q \in A \cup B$, we say that $p$ {\it lies to the
left} (respectively {\it right}) of $q$ if $p_y = q_y$ and $p_x <
q_x$ (respectively $p_x > q_x$). In other words, they are in the same
row, and the first coordinate of $p$ is smaller (respectively larger)
than the first coordinate of $q$. We say that $p$ lies directly to the
left (respectively right) of $q$ if additionally $p$ and $q$ are
connected by an edge. Similarly, we say that $p$ {\it lies above}
(respectively {\it below}) $q$ if $p_x = q_x$ and $p_y > q_y$
(respectively $p_y < q_y$). Again, we say that $p$ lies {\it directly} above (respectively below) $q$ if additionally these two points are connected by an edge.
We will also say that the set $A^{\rm
row}_k$ {\it lies to the left} (respectively {\it right}) of $B^{\rm
row}_k$ if for every $p \in A^{\rm row}_k$ and $q \in B^{\rm row}_k$
the point $p$ lies to the left (respectively right) of $q$. (Note that
by definition $A^{\rm row}_k$ and $B^{\rm row}_k$ are in the same
row.) We also say that $A^{\rm row}_k$ lies {\it directly} to the left
of $B^{\rm row}_k$ if additionally there is a connection between one
of the points in $A^{\rm row}_k$ and one of the points in $B^{\rm
row}_k$. An analogous notion is used for columns.
Furthermore, we say that a number of points from different rows are
{\it aligned} if their first coordinates are equal. We also say that
two sets are {\it aligned to the right} (or {\it left}) if their rightmost (leftmost) points are aligned. The same notion is also used for columns.
Finally, given a finite set $X \subset \mathbb{Z}^2$, we denote by $X + (a,b)$ the set consisting of all points of $X$ shifted by the vector $(a,b) \in \mathbb{Z}^2$.
\section{Connectedness, separation, and interface}\label{sec:algo}
In this section, we introduce a procedure in order to modify an
arbitrary configuration $(A,B)$ into another configuration $(\hat{A},
\hat{B})$ with specific additional properties, without increasing
the energy. In particular, this will prove that for a
minimal configuration the sets $A$, $B$, and $A \cup B$ are connected.
\subsection{Description of the procedure}
The goal of this subsection is to present a procedure that allows us to
modify a configuration, making it more regular in the following sense: not only are the sets $A$ and $B$ connected, but also for any $k = 1,...,N_{\rm row}$ and any $l = 1,...,N_{\rm col}$ the sets $A^{\rm row}_k$, $B^{\rm row}_k$, $(A \cup B)_k^{\rm row}$, $A^{\rm col}_l$, $B^{\rm col}_l$, and $(A \cup B)_l^{\rm col}$ are connected. We start with the following preliminary result.
\begin{proposition}
Let $(A,B)$ be a configuration in the sense described above. If there is an empty row (or column) between two nonempty rows (or columns) of $(A,B)$, then there exists a configuration $(\hat{A},\hat{B})$ with the same numbers of $A$- and $B$-points and strictly smaller energy.
\end{proposition}
\begin{proof}
Without restriction we present the argument for rows. Suppose that between rows $R_k$ and $R_{k+1}$ for some $k \in \{ 1,...,N_{\rm row} - 1 \}$ there are $l$ empty rows. Then, we can reduce the energy in the following way: denote by $(A',B')$ the configuration consisting of the top $k$ rows and by $(A'',B'')$ the configuration consisting of the bottom $N_{\rm row}-k$ rows. Then, we remove the empty rows, i.e., replace $(A'',B'')$ with $(A'',B'') + (0,l)$. Clearly, this does not increase the energy of the configuration $(A,B)$. If after this shift there is at least one connection between $A^{\rm row}_k \cup B^{\rm row}_k$ and $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$, the energy even decreases by at least $\beta$. Otherwise, if after this shift there are no connections between $A^{\rm row}_k \cup B^{\rm row}_k$ and $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$, we shift the configuration $(A',B')$ horizontally to make at least one connection. Again, the energy is decreased by at least $\beta$.
\end{proof}
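The vertical part of this argument, collapsing empty rows, can be simulated directly. The sketch below is ours and covers only the row-compaction step, not the optional horizontal shift (helper names are assumptions).

```python
def sticky_energy(A, B, beta):
    # E(A,B): each unordered bond once; -1 same species, -beta across
    pts = sorted(A | B)
    E = 0.0
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1:
                E -= 1.0 if ((p in A) == (q in A)) else beta
    return E

def compact_rows(A, B):
    # remove empty rows by relabelling the occupied rows consecutively
    ys = sorted({y for (_, y) in A | B})
    remap = {y: i for i, y in enumerate(ys)}
    return ({(x, remap[y]) for (x, y) in A},
            {(x, remap[y]) for (x, y) in B})

# two occupied rows separated by two empty rows
A = {(0, 0), (0, 3)}
B = {(1, 0), (1, 3)}
```

On this example the compaction creates vertical bonds directly, and the energy drops from $-2\beta$ to $-2-2\beta$; in general a horizontal shift may be needed first, as in the proof.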
Hence, in studying minimal configurations, we may assume that there
are no empty rows and columns. Now, we are ready to describe a
modification procedure making the configuration more regular. Notice that we may write the energy in the following way:
\begin{equation*}
E(A,B) = \sum_{k=1}^{N_{\rm row}} E^{\rm row}_k(A,B) + \sum_{k=1}^{N_{\rm row} -1} E_k^{\rm inter}(A,B).
\end{equation*}
Here, $E^{\rm row}_k(A,B)$ is the part of the energy given by interactions in the row $R_k$, namely
\begin{equation}\label{eq: row energy}
E^{\rm row}_k(A,B) = \frac{1}{2} \sum_{ x_i,x_j \in A^{\rm row}_k \cup B^{\rm row}_k} V_{ \rm sticky}(x_i, x_j),
\end{equation}
and $E_k^{\rm inter}(A,B)$ is the part of the energy given by interactions between rows $R_k$ and $R_{k+1}$, namely
\begin{equation*}
E_k^{\rm inter}(A,B) = \sum_{ x_i \in A^{\rm row}_k \cup B^{\rm row}_k, \, x_j \in A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}} V_{ \rm sticky }(x_i, x_j).
\end{equation*}
Now, let us see that we may bound $E^{\rm row}_k$ and $E_k^{\rm inter}$ by expressions depending on $n^{\rm row}_k$ and $m^{\rm row}_k$. First, we estimate $E^{\rm row}_k$.
\begin{lemma}\label{lem:horizontal}
We have
\begin{equation*}
E^{\rm row}_k(A,B) \geq \begin{cases} - (n^{\rm row}_k + m^{\rm row}_k) +2- \beta & \text{if $n^{\rm row}_k > 0$, $m^{\rm row}_k > 0$,} \\
- (n^{\rm row}_k + m^{\rm row}_k) +1 & \text{else.}
\end{cases}
\end{equation*}
Moreover, this inequality is an equality if and only if the sets $A^{\rm row}_k$, $B^{\rm row}_k$, and $A^{\rm row}_k \cup B^{\rm row}_k$ are connected.
\end{lemma}
\begin{proof}
We consider two cases. In the first case, we suppose that $m^{\rm row}_k = 0$ (a similar argument works if $n^{\rm row}_k = 0$): then, the desired inequality takes the form $E^{\rm row}_k(A,B) \geq -n^{\rm row}_k + 1$. Since $A^{\rm row}_k$ is a subset of a single row, $n^{\rm row}_k - 1$ is the maximum number of connections between points in $A^{\rm row}_k$ and it is achieved only if $A^{\rm row}_k$ is connected.
In the second case, we have $n^{\rm row}_k > 0$ and $m^{\rm row}_k > 0$. Since $A^{\rm row}_k \cup B^{\rm row}_k$ is a subset of a single row, the maximum number of connections (regardless of their type) is $n^{\rm row}_k + m^{\rm row}_k - 1$. It is achieved only if $(A \cup B)_k^{\rm row}$ is connected. Among these, at most $n^{\rm row}_k - 1$ are connections between points in $A^{\rm row}_k$ and at most $m^{\rm row}_k - 1$ are connections between points in $B^{\rm row}_k$. These numbers are achieved if and only if $A^{\rm row}_k$ and $B^{\rm row}_k$ are connected. Each of these connections contributes $-1$ to the energy and there can be at most $n^{\rm row}_k + m^{\rm row}_k - 2$ of them. The remaining connections are between $A^{\rm row}_k$ and $B^{\rm row}_k$ contributing $-\beta$ to the energy. The fact that $\beta < 1$ yields the statement.
\end{proof}
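Lemma \ref{lem:horizontal} can be cross-checked by enumerating all placements within a single row of bounded width. The sketch below is ours (helper names are assumptions); the width $8$ is large enough that the connected optimum fits.

```python
import itertools

def row_energy(Apos, Bpos, beta):
    # bonds within one row: -1 per same-species bond, -beta per cross bond
    occ = {x: 'A' for x in Apos}
    occ.update({x: 'B' for x in Bpos})
    return sum(-(1.0 if occ[x] == occ[x + 1] else beta)
               for x in occ if x + 1 in occ)

def min_row_energy(n, m, beta, width=8):
    # exhaustive minimum of E_row over placements of n A-points and m B-points
    best = float('inf')
    for Apos in itertools.combinations(range(width), n):
        rest = [x for x in range(width) if x not in Apos]
        for Bpos in itertools.combinations(rest, m):
            best = min(best, row_energy(Apos, Bpos, beta))
    return best
```

For $n=m=2$ and $\beta=1/2$, the minimum is $-(n+m)+2-\beta=-2.5$, attained by the connected pattern AABB; for $m=0$ the minimum is $-n+1$.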
Now, we make a similar computation for $E_k^{\rm inter}$.
\begin{lemma}\label{lem:vertical}
We have
\begin{equation*}
E_k^{\rm inter}(A,B) \geq -(1-\beta)\big(\min\lbrace n^{\rm row}_k, n^{\rm row}_{k+1}\rbrace +\min\lbrace m^{\rm row}_k, m^{\rm row}_{k+1}\rbrace \big) - \beta \min\lbrace n^{\rm row}_k + m^{\rm row}_k, n^{\rm row}_{k+1} + m^{\rm row}_{k+1}\rbrace.
\end{equation*}
Moreover, equality is achieved if and only if the following conditions hold: \\
(1) There are $\min\lbrace n^{\rm row}_k,n^{\rm row}_{k+1}\rbrace$ points in $A^{\rm row}_k$ directly above points in $A^{\rm row}_{k+1}$; \\
(2) There are $\min\lbrace m^{\rm row}_k,m^{\rm row}_{k+1}\rbrace$ points in $B^{\rm row}_k$ directly above points in $B^{\rm row}_{k+1}$; \\
(3) Supposing that $n^{\rm row}_k + m^{\rm row}_k \geq n^{\rm row}_{k+1} + m^{\rm row}_{k+1}$, there is a point in $A^{\rm row}_k \cup B^{\rm row}_k$ directly above every point in $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$. Otherwise, if $n^{\rm row}_k + m^{\rm row}_k < n^{\rm row}_{k+1} + m^{\rm row}_{k+1}$, there is a point in $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$ directly below every point in $A^{\rm row}_k \cup B^{\rm row}_k$.
\end{lemma}
\begin{proof}
First, as there are $n^{\rm row}_k + m^{\rm row}_k$ points in $A^{\rm row}_k \cup B^{\rm row}_k$ and $n^{\rm row}_{k+1} + m^{\rm row}_{k+1}$ points in $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$, there are at most $\min\lbrace n^{\rm row}_k + m^{\rm row}_k, n^{\rm row}_{k+1} + m^{\rm row}_{k+1}\rbrace$ connections between points in $A^{\rm row}_k \cup B^{\rm row}_k$ and $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$, regardless of their type. Among these, we denote the number of connections between points in $A^{\rm row}_k$ and $A^{\rm row}_{k+1}$ by $\tilde{n}_k$ and the number of connections between points in $B^{\rm row}_k$ and $B^{\rm row}_{k+1}$ by $\tilde{m}_k$. We have $\tilde{n}_k \le \min\lbrace n^{\rm row}_k, n^{\rm row}_{k+1}\rbrace$ and $\tilde{m}_k \le \min\lbrace m^{\rm row}_k, m^{\rm row}_{k+1}\rbrace$ with equality if this many points in $A^{\rm row}_{k+1}$ are placed directly under points in $A^{\rm row}_k$ (and similarly for $B^{\rm row}_k$ and $B^{\rm row}_{k+1}$). Each of these connections contributes $-1$ to the energy, i.e., a total contribution of $-\tilde{n}_k - \tilde{m}_k$. Then, there are at most $\min\lbrace n^{\rm row}_k + m^{\rm row}_k, n^{\rm row}_{k+1} + m^{\rm row}_{k+1}\rbrace - (\tilde{n}_k + \tilde{m}_k)$ possible connections which need to be either connections between points in $A^{\rm row}_k$ and $B^{\rm row}_{k+1}$ or between points in $B^{\rm row}_k$ and $A^{\rm row}_{k+1}$. Either way, each of these connections contributes $-\beta$ to the energy. In conclusion, we obtain the desired inequality, with equality only if $\tilde{n}_k = \min\lbrace n^{\rm row}_k, n^{\rm row}_{k+1}\rbrace$, $\tilde{m}_k = \min\lbrace m^{\rm row}_k, m^{\rm row}_{k+1}\rbrace$, and if there are $\min\lbrace n^{\rm row}_k + m^{\rm row}_k, n^{\rm row}_{k+1} + m^{\rm row}_{k+1}\rbrace$ connections between $A^{\rm row}_k \cup B^{\rm row}_k$ and $A^{\rm row}_{k+1} \cup B^{\rm row}_{k+1}$.
\end{proof}
In light of these estimates, we describe a simple modification procedure that makes any configuration more regular. For any configuration $(A,B)$, we construct a configuration $(\hat{A},\hat{B})$ with the same number of $A$- and $B$-points in each row as $(A,B)$, whose energy is at most that of $(A,B)$ and which enjoys some additional structural properties.
\textit{Step 0:} We start with the first row from the top. We let $\hat{A}_1$ be a connected set of $n^{\rm row}_1$ points in a single row, and we let $\hat{B}_1$ be the connected set of $m^{\rm row}_1$ points in the same row directly to the right of $\hat{A}_1$, so that there is a connection between $\hat{A}_1$ and $\hat{B}_1$. By Lemma~\ref{lem:horizontal}, we have $ E^{\rm row}_1 (\hat{A},\hat{B}) \leq E^{\rm row}_1 (A,B)$.
\textit{Step $k$} (for $k = 1,..., N_{\rm row} -1$): We suppose that the sets in the previous steps have been constructed in such a way that $\hat{A}_{k}$, $\hat{B}_k$, and $\hat{A}_k \cup \hat{B}_k$ are connected, and that $\hat{A}_k$ lies to the left of $\hat{B}_k$. We will now define $\hat{A}_{k+1}$ and $\hat{B}_{k+1}$. To this end, we distinguish four cases.
\textit{Case 1:} $n^{\rm row}_k \leq n^{\rm row}_{k+1}$ and $m^{\rm row}_k \leq m^{\rm row}_{k+1}$. We place $n^{\rm row}_k$ points of $\hat{A}_{k+1}$ directly below $\hat{A}_k$. Then, we put the remaining $n^{\rm row}_{k+1} - n^{\rm row}_k$ points to the left of the previously placed points, so that $\hat{A}_{k+1}$ is connected. Similarly, we place $m^{\rm row}_k$ points from $\hat{B}_{k+1}$ directly below $\hat{B}_k$ and the remaining $m^{\rm row}_{k+1} - m^{\rm row}_k$ points to the right of the previously placed points, so that $\hat{B}_{k+1}$ is connected. By Lemma \ref{lem:horizontal}, we have $E^{\rm row}_{k+1}(\hat{A},\hat{B}) \leq E^{\rm row}_{k+1}(A,B)$, and by Lemma \ref{lem:vertical}, we have $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$.
\textit{Case 2:} $n^{\rm row}_k > n^{\rm row}_{k+1}$ and $m^{\rm row}_k > m^{\rm row}_{k+1}$. We place all the points of $\hat{A}_{k+1}$ directly below $\hat{A}_k$, starting from the right. Then, we place all the points of $\hat{B}_{k+1}$ directly below $\hat{B}_k$, starting from the left. In this way, the sets $\hat{A}_{k+1}$, $\hat{B}_{k+1}$ and $\hat{A}_{k+1} \cup \hat{B}_{k+1}$ are connected. Again, by Lemma \ref{lem:horizontal} we have $E^{\rm row}_{k+1}(\hat{A},\hat{B}) \leq E^{\rm row}_{k+1}(A,B)$ and by Lemma \ref{lem:vertical} we have $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$.
\textit{Case 3:} $n^{\rm row}_k \le n^{\rm row}_{k+1}$ and $m^{\rm row}_k > m^{\rm row}_{k+1}$. First, we put $n^{\rm row}_k$ points of $\hat{A}_{k+1}$ directly below $\hat{A}_k$. Then, we consider two possibilities:
- If $n^{\rm row}_k + m^{\rm row}_k \geq n^{\rm row}_{k+1} + m^{\rm row}_{k+1}$, we place the remaining $n^{\rm row}_{k+1} - n^{\rm row}_k$ points of $\hat{A}_{k+1}$ under $\hat{B}_k$, starting from the left so that $\hat{A}_{k+1}$ is connected. Then, we place the $m^{\rm row}_{k+1}$ points of $\hat{B}_{k+1}$ to the right of the previously placed points, so that $\hat{B}_{k+1}$ and $\hat{A}_{k+1} \cup \hat{B}_{k+1}$ are connected.
- If $n^{\rm row}_k + m^{\rm row}_k < n^{\rm row}_{k+1} + m^{\rm row}_{k+1}$, we place the $m^{\rm row}_{k+1}$ points of $\hat{B}_{k+1}$ below points in $\hat{B}_k$, starting from the right, so that $\hat{B}_{k+1}$ is connected. Then, we place $m^{\rm row}_k - m^{\rm row}_{k+1}$ points of $\hat{A}_{k+1}$ between the two sets of previously placed points. Finally, we place the remaining points of $\hat{A}_{k+1}$ to the left of all points placed so far, so that $\hat{A}_{k+1} \cup \hat{B}_{k+1}$ is connected.
In both cases, by Lemma~\ref{lem:horizontal} we have $E^{\rm row}_{k+1}(\hat{A},\hat{B}) \leq E^{\rm row}_{k+1}(A,B)$ and by Lemma~\ref{lem:vertical} we get $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$.
\textit{Case 4:} $n^{\rm row}_k > n^{\rm row}_{k+1}$ and $m^{\rm row}_k \le m^{\rm row}_{k+1}$. We proceed as in Case 3 with the roles of $A$ and $B$ interchanged, and with ``left'' and ``right'' interchanged as well. Again, by Lemma \ref{lem:horizontal} we have $E^{\rm row}_{k+1}(\hat{A},\hat{B}) \leq E^{\rm row}_{k+1}(A,B)$ and by Lemma \ref{lem:vertical} we have $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$.
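For intuition, the placement rules above admit a compact description in coordinates: in Cases 1 and 2, $\hat{A}_{k+1}$ is right-aligned under $\hat{A}_k$ and $\hat{B}_{k+1}$ starts immediately to its right. The following Python sketch (purely illustrative and not part of the proof; the function name and test data are ours) rebuilds the runs $\hat{A}_k$, $\hat{B}_k$ as column intervals from the prescribed row counts $(n^{\rm row}_k, m^{\rm row}_k)$, implementing Cases 1--3; Case 4 is symmetric to Case 3 and omitted here.

```python
def regularise(counts):
    """counts[k] = (n_k, m_k): numbers of A- and B-points in row k (top to bottom).
    Assumes every row contains points of both types. Returns, for each row, the
    column intervals occupied by the runs hat-A_k (left) and hat-B_k (right)."""
    n1, m1 = counts[0]
    rows = [((0, n1 - 1), (n1, n1 + m1 - 1))]         # Step 0
    for (n0, m0), (n1_, m1_) in zip(counts, counts[1:]):
        (al, ar), (bl, br) = rows[-1]
        if (n0 <= n1_) == (m0 <= m1_):
            # Cases 1 and 2: A right-aligned under hat-A_k, B starts just right of it
            a = (ar - n1_ + 1, ar)
            b = (ar + 1, ar + m1_)
        elif n0 + m0 >= n1_ + m1_:
            # Case 3, first sub-case: extra A-points extend under hat-B_k
            a = (al, al + n1_ - 1)
            b = (al + n1_, al + n1_ + m1_ - 1)
        else:
            # Case 3, second sub-case: B right-aligned under hat-B_k, A fills leftwards
            b = (br - m1_ + 1, br)
            a = (br - m1_ - n1_ + 1, br - m1_)
        rows.append((a, b))
    return rows
```

On the counts $(2,3), (3,2), (1,1)$ this produces the rows $((0,1),(2,4))$, $((0,2),(3,4))$, $((2,2),(3,3))$: each row is a gapless block with $\hat{A}_k$ to the left of $\hat{B}_k$, and consecutive rows overlap, as required by Lemmas \ref{lem:horizontal} and \ref{lem:vertical}.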
\begin{proposition}\label{prop:algorithmdecreasesenergy}
The procedure described above modifies a configuration $(A,B)$ into a configuration $(\hat{A},\hat{B})$ with $E(\hat{A},\hat{B}) \leq E(A,B)$. Moreover, if one of the sets $A^{\rm row}_k$, $B^{\rm row}_k$, or $(A \cup B)_k^{\rm row}$, for $k=1,\ldots,N_{\rm row}$, is not connected, or one of the properties {\rm (1)--(3)} in Lemma \ref{lem:vertical} is violated, then $E(\hat{A},\hat{B}) < E(A,B)$.
\end{proposition}
\begin{proof}
The construction ensures that the configuration $(\hat{A},\hat{B})$ has the same number of rows as $(A,B)$. Hence, we compute
\begin{align*}
E(\hat{A},\hat{B}) & = \sum_{k=1}^{N_{\rm row} } E^{\rm row}_k(\hat{A},\hat{B}) + \sum_{k=1}^{N_{\rm row}-1} E_k^{\rm inter}(\hat{A},\hat{B}) \leq \sum_{k=1}^{N_{\rm row}} E^{\rm row}_k(A,B) + \sum_{k=1}^{N_{\rm row}-1} E_k^{\rm inter}(A,B) \\ & = E(A,B).
\end{align*}
In view of Lemma \ref{lem:horizontal}, we obtain strict inequality if one of the sets $A^{\rm row}_k$, $B^{\rm row}_k$ or $(A \cup B)_k^{\rm row}$ is not connected. In a similar fashion, we get strict inequality whenever one of the properties (1)--(3) in Lemma \ref{lem:vertical} does not hold.
\end{proof}
In particular, for optimal configurations $(A,B)$, all sets $A^{\rm row}_k$, $B^{\rm row}_k$, and $(A \cup B)_k^{\rm row}$ are connected. In other words, inside any row we have first all points of one type and then all points of the other type, without any gaps in between. Moreover, we may apply this procedure (and prove analogues of Lemma \ref{lem:horizontal}--Proposition \ref{prop:algorithmdecreasesenergy}) for columns in place of rows. Hence, for optimal configurations, also the sets $A^{\rm col}_k$, $B^{\rm col}_k$, and $(A \cup B)_k^{\rm col}$ are connected: in each column there are first all points of one type and then all points of the other type, without any gaps in between. As a consequence, we obtain an important property of any minimising configuration.
\begin{theorem}\label{thm:connected}
Suppose that $(A,B)$ is an optimal configuration. Then $A$ and $B$ are connected.
\end{theorem}
\begin{proof}
Suppose by contradiction that $A$ is not connected (for $B$ we proceed similarly). First of all, notice that for each $k = 1,...,N_{\rm row}$ the set $A^{\rm row}_k$ is connected: otherwise, by Proposition \ref{prop:algorithmdecreasesenergy}, $(A,B)$ would not be an optimal configuration.
Let us first suppose that $n^{\rm row}_k > 0$ for all $k = 1,...,N_{\rm row}$ (i.e., $A^{\rm row}_k \neq \emptyset$). Since every $A^{\rm row}_k$ is connected, if $A$ is not connected, then there is no connection between $A^{\rm row}_k$ and $A^{\rm row}_{k+1}$ for some choice of $k$. In this case, by Lemma~\ref{lem:vertical} and Proposition \ref{prop:algorithmdecreasesenergy}, $(A,B)$ would not be an optimal configuration.
Hence, the only remaining possibility for $A$ to be disconnected is that there exist $k_1 < k_2 < k_3$ such that $n^{\rm row}_{k_1}, n^{\rm row}_{k_3} > 0$ and $n^{\rm row}_{k_2} = 0$ (i.e., $A^{\rm row}_{k_1}, A^{\rm row}_{k_3} \neq \emptyset$ and $A^{\rm row}_{k_2} = \emptyset$). Without loss of generality, we may assume that the set $A^{\rm row}_{k}$ is empty for every $k = k_1 + 1,...,k_3 - 1$. Let us apply the reorganisation $(A,B) \rightarrow (\hat{A},\hat{B})$ using the procedure described above. Clearly, $(\hat{A},\hat{B})$ is still optimal by Proposition \ref{prop:algorithmdecreasesenergy}. Then, for $k = k_1$, either Case 2 or Case 4 of the procedure applies. We distinguish these two cases.
First, suppose that for $k = k_1$ Case~2 of the procedure applies. Then, the leftmost point of $\hat{B}_{k_1+1}$ lies directly below the leftmost point of $\hat{B}_{k_1}$. Since the set $A^{\rm row}_{k}$ is empty for every $k = k_1 + 1,...,k_3 - 1$, Case 1 or Case 3 of the procedure shows that also the leftmost point of $\hat{B}_k$ lies below the leftmost point of $\hat{B}_{k_1+1}$ (hence below the leftmost point of $\hat{B}_{k_1}$). Now, for $k = k_3 - 1$, when we place the sets $\hat{A}_{k_3}$ and $\hat{B}_{k_3}$, either Case 1 or Case 3 of the procedure applies. In Case 1, the leftmost point of $\hat{B}_{k_3}$ is again placed below the leftmost point of $\hat{B}_{k_1}$, and then the rightmost point of $\hat{A}_{k_3}$ is placed below the rightmost point of $\hat{A}_{k_1}$. One now reaches a contradiction by following the same construction as in Proposition~\ref{prop:algorithmdecreasesenergy} with the roles of rows and columns exchanged.
In Case 3, either a point of $\hat{A}_{k_3}$ is placed below a point of $\hat{A}_{k_1}$, which as above contradicts Proposition~\ref{prop:algorithmdecreasesenergy}, or the leftmost point of $\hat{A}_{k_3}$ is placed below the leftmost point of $\hat{B}_{k_1}$. In particular, the leftmost point of $\hat{A}_{k_3}$ is placed one point to the right of the rightmost point of $\hat{A}_{k_1}$. Then, by Lemma~\ref{lem:vertical}(1) and Proposition~\ref{prop:algorithmdecreasesenergy} applied for columns in place of rows, we again see that the energy of $(\hat{A},\hat{B})$ was not minimal, a contradiction.
In the second case, Case 4 of the procedure applies for $k = k_1$. Then, the leftmost point of $\hat{B}_{k_1+1}$ does not lie directly below the leftmost point of $\hat{B}_{k_1}$, but to its left (though no further left than the leftmost point of $\hat{A}_{k_1}$). Again, for every $k = k_1 + 1,...,k_3 - 1$ the leftmost point of $\hat{B}_k$ lies below the leftmost point of $\hat{B}_{k_1+1}$, and when we place the sets $\hat{A}_{k_3}$ and $\hat{B}_{k_3}$, either Case~1 or Case~3 of the procedure applies. In Case 1, the leftmost point of $\hat{B}_{k_3}$ is again placed below the leftmost point of $\hat{B}_{k_1+1}$. Hence, the rightmost point of $\hat{A}_{k_3}$ is placed either below a point in $\hat{A}_{k_1}$ or, in view of the definition of $\hat{B}_{k_1+1}$, one point to the left of the leftmost point of $\hat{A}_{k_1}$. As before, by Lemma~\ref{lem:vertical}(1) and Proposition \ref{prop:algorithmdecreasesenergy} applied for columns in place of rows, we see that the energy of $(\hat{A},\hat{B})$ was not minimal, a contradiction. In Case 3, a point of $\hat{A}_{k_3}$ is placed below the leftmost point of $\hat{B}_{k_3-1}$. This shows that the leftmost point of $\hat{A}_{k_3}$ is placed either below a point in $\hat{A}_{k_1}$ or one point to the right of the rightmost point of $\hat{A}_{k_1}$. As before, we obtain a contradiction to the minimality of $(\hat{A},\hat{B})$, and the proof is concluded.
\end{proof}
A careful inspection of the proofs of Proposition
\ref{prop:algorithmdecreasesenergy} and Theorem \ref{thm:connected}
provides some more information about the structure of any minimising configuration, collected in the following corollaries.
\begin{corollary}\label{cor:rows}
Let $(A,B)$ be an optimal configuration. Then, for any row $R_k$, the sets $A^{\rm row}_k$, $B^{\rm row}_k$ and $(A \cup B)_k^{\rm row}$ are connected. The same claim holds for columns. \hfill$\Box$ \vskip.3cm
\end{corollary}
\begin{corollary}\label{cor:nomissingrows}
Let $(A,B)$ be an optimal configuration. If for some $1 \leq k_1 < k_2 \leq N_{\rm row}$ we have $A^{\rm row}_{k_1}, A^{\rm row}_{k_2} \neq \emptyset$, then also $A^{\rm row}_k \neq \emptyset$ for all $k_1 \leq k \leq k_2$. The same claim holds for columns and the set $B$. \hfill$\Box$ \vskip.3cm
\end{corollary}
\begin{corollary}\label{cor:allononeside}
Let $(A,B)$ be an optimal configuration. Suppose that there exists a row $R_{k_0}$ such that $A^{\rm row}_{k_0}, B^{\rm row}_{k_0} \neq \emptyset$ and $A^{\rm row}_{k_0}$ lies to the left of $B^{\rm row}_{k_0}$. Then, for every row $R_k$, either $A^{\rm row}_k$ lies to the left of $B^{\rm row}_k$ or one of these sets is empty. The same claim holds for columns, and with the roles of $A$ and $B$ interchanged. \hfill$\Box$ \vskip.3cm
\end{corollary}
We observe that Theorem \ref{thm:connected} and Corollary \ref{cor:rows} imply Theorem \ref{thm:main}.i, and that Theorem \ref{thm:main}.ii follows from Proposition~\ref{prop:algorithmdecreasesenergy} and Corollaries~\ref{cor:nomissingrows}--\ref{cor:allononeside}. Corollary \ref{cor:interface} implies Theorem~\ref{thm:main}.iii and will be crucial for our later considerations. To state it, we introduce the following definition.
\begin{definition}
The interface $I_{AB}$ (between $A$ and $B$) is the set of midpoints of edges connecting a point in $A$ with a point in $B$. We say that there is an edge between two points $p,q \in I_{AB}$ if $|p - q| \in \lbrace 1/\sqrt{2}, 1\rbrace$ and the line segment between $p$ and $q$ does not intersect any point in $\mathbb{Z}^2$. We say that the interface is connected if it is connected as a graph.
\end{definition}
In other words, a point $p \in \mathbb{R}^2$ lies in the interface $I_{AB}$ between $A$ and $B$ if there exist points $p_1 \in A$ and $p_2 \in B$ such that $|p - p_1| = |p - p_2| = 1/2$. Necessarily, the interface is a subset of the lattice $\{ (k + \frac{1}{2},l)\colon k,l \in \mathbb{Z} \} \cup \{ (k, l + \frac{1}{2})\colon k,l \in \mathbb{Z} \}$. An example is presented in Figure \ref{fig:interface}.
\begin{figure}[h]
\includegraphics[scale=0.09]{interface-eps-converted-to.pdf}
\caption{Definition of the interface}
\label{fig:interface}
\end{figure}
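To make the definition concrete, the interface of a finite configuration can be computed directly. The following Python sketch (purely illustrative; the function name and coordinates are ours) collects the midpoints of unit bonds between $A$- and $B$-points and checks that they lie in the half-integer lattice described in the text above.

```python
def interface(A, B):
    """Midpoints of unit-length bonds joining a point of A to a point of B."""
    A, B = set(A), set(B)
    mids = set()
    for (x, y) in A:
        for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) in B:
                mids.add((x + dx / 2, y + dy / 2))
    return mids

# two vertical dominoes side by side: the interface is a vertical pair of midpoints
A = {(0, 0), (0, 1)}
B = {(1, 0), (1, 1)}
I_AB = sorted(interface(A, B))
# exactly one coordinate of each midpoint is a half-integer
assert all((2 * px) % 2 + (2 * py) % 2 == 1 for (px, py) in I_AB)
```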
Notice that Corollary \ref{cor:rows} implies that there is at most one point in $I_{AB}$ which is a midpoint of an edge between a point in $A^{\rm row}_k$ and a point in $B^{\rm row}_k$. Similarly, there is at most one point in $I_{AB}$ which is a midpoint of an edge between a point in $A^{\rm col}_k$ and a point in $B^{\rm col}_k$. Hence, we get the following result.
\begin{corollary}\label{cor:interface}
For any optimal configuration $(A,B)$, the interface $I_{AB}$ is connected. Moreover, it is monotone: up to reflections, it goes only upwards and to the right, i.e., given $p,q \in I_{AB}$, if $p_1 > q_1$, then $p_2 \ge q_2$. \hfill$\Box$ \vskip.3cm
\end{corollary}
We will use this result to study the minimal configurations in the
following way: we will identify all possible shapes of the interface,
collected in different classes. Analysing the different classes in
detail, we will show that there always exists an optimal configuration
in the most natural class (called Class~$\mathcal{I}$). For this
class, we are able to directly compute the minimal energy,
explicitly exhibit a minimiser, and provide a sharp estimate of the possible mismatch of ground
states in terms of their size, see \eqref{eq:fluct}.
Let us also note that the introduction of $I_{AB}$ enables us to write a convenient formula for the energy associated to an optimal configuration $(A,B)$. Namely, denote by $E_A$ the energy inside $A$, i.e., minus the number of bonds between $A$-points. In a similar fashion, we define $E_B$. Finally, by $E_{AB} := -\beta \, \# I_{AB}$ we denote the interfacial energy, i.e., minus the number of bonds between $A$- and $B$-points, weighted by the coefficient $\beta$. Then,
\begin{equation}\label{eq:formulafortheenergy}
E(A,B) = E_A + E_B + E_{AB}.
\end{equation}
This simple formula has a very important consequence. Namely, if we separate the sets $A$ and $B$ and reattach them in a different way (i.e., apply an isometry to one or both sets), then $E_A$ and $E_B$ do not change, but $E_{AB}$ may change. Therefore, an optimal configuration must have the longest possible interface among all configurations obtained by this operation. We will use variants of this argument on multiple occasions in Section \ref{sec:regularisation}.
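As a sanity check of \eqref{eq:formulafortheenergy}, the decomposition can be verified by direct bond counting on the three-point minimiser of Figure \ref{fig:threepointexample}. The Python sketch below is purely illustrative; the coordinates are our reconstruction of a single-row configuration with $N_A = 2$, $N_B = 1$.

```python
def bonds_within(S):
    """Number of unit bonds inside a finite set S of lattice points."""
    S = set(S)
    return sum(1 for (x, y) in S for q in ((x + 1, y), (x, y + 1)) if q in S)

def bonds_between(A, B):
    """Number of unit bonds joining A and B (the cardinality of I_AB)."""
    A, B = set(A), set(B)
    return sum(1 for (x, y) in A
               for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if q in B)

def energy(A, B, beta):
    # -1 per same-type bond, -beta per A-B bond
    return -bonds_within(A) - bonds_within(B) - beta * bonds_between(A, B)

# three points in one row: two A-points, one B-point to their right
A = {(0, 0), (1, 0)}
B = {(2, 0)}
beta = 0.3
E_A, E_B = -bonds_within(A), -bonds_within(B)
E_AB = -beta * bonds_between(A, B)                 # -beta * #I_AB
assert abs(energy(A, B, beta) - (E_A + E_B + E_AB)) < 1e-12
assert abs(energy(A, B, beta) - (-1 - beta)) < 1e-12
```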
\section{A collection of examples}\label{sec:info}
In this short section, we consider a few examples of minimisers that will serve as a motivation for the discussion about possible shapes of the interface in the next section. By Theorem~\ref{thm:connected}, for any optimal configuration, both sets $A$ and $B$ are connected. The properties of an optimal configuration are further restricted by Corollaries \ref{cor:rows}--\ref{cor:interface}. The following configurations are optimal for the choices of $N_A, N_B > 0$ and $\beta \in (0,1)$ described below. Even though some of them are irregular, the main effort in this paper will be to prove that for $N_A = N_B$ and $\beta \le 1/2$ one may actually find an optimal configuration which is very regular, in the sense that it roughly consists of two rectangles, as given in Theorem \ref{thm:main}.v.
The first example consists of only three points: we have $N_A = 2$, $N_B = 1$, for any $\beta \in (0,1)$. Even then, the minimiser may fail to be unique: up to isometries, we have two minimisers, both presented in Figure \ref{fig:threepointexample}.
\begin{figure}[h!]
\includegraphics[scale=0.4]{example_three_points.png}
\caption{Minimisers for $N_A = 2, N_B = 1$}
\label{fig:threepointexample}
\end{figure}
The second example consists of six points: we have $N_A = N_B = 3$ for any $\beta \in (0,1)$. The numbers of $A$- and $B$-points are equal. The minimiser may fail to be unique: up to isometries, we have two minimisers, both presented in Figure \ref{fig:sixpointexample}. Note that the interface is not necessarily straight. However, there is a minimiser which has a straight interface.
\begin{figure}[h!]
\includegraphics[scale=0.35]{example_six_points_new.png}
\caption{Minimisers for $N_A = 3, N_B = 3$}
\label{fig:sixpointexample}
\end{figure}
The third example consists of eight points: we have $N_A = N_B = 4$ for any $\beta \in (0,1)$. In this case, the minimiser is unique. Up to isometries, the only solution is presented in Figure \ref{fig:fourpointexample}. Note that the interface is straight and both rectangles are ``full''. This situation is very special, and in a generic case we do not expect uniqueness.
\begin{figure}[h!]
\includegraphics[scale=0.55]{example_four_points.png}
\caption{Unique minimiser for $N_A = 4, N_B = 4$}
\label{fig:fourpointexample}
\end{figure}
The fourth example consists of seven points: we have $N_A = 3$, $N_B = 4$, for any $\beta \in (0,1)$. Up to isometries, we have three minimisers, presented in Figure \ref{fig:sevenpointexample}. As in the second example of Figure~\ref{fig:sixpointexample}, in the configuration on the right the interface is ``L-shaped''.
\begin{figure}[h!]
\includegraphics[scale=0.23]{example_seven_points.png}
\caption{Minimisers for $N_A = 3, N_B = 4$}
\label{fig:sevenpointexample}
\end{figure}
The fifth example consists of ten points: we have $N_A = N_B = 5$ for any $\beta \in (0,1)$. Up to isometries, we have five possible minimisers, presented in Figure \ref{fig:fivepoints}. Notice that the heights of the two types may differ and that the interface may fail to be straight. Furthermore, the two configurations on the left differ even though the interface is straight.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.34]{fivepoints_new.png}
\caption{Minimisers for $N_A = 5, N_B = 5$}
\label{fig:fivepoints}
\end{figure}
The final example consists of sixteen points: we have $N_A = 12$ and $N_B = 4$. Here, the situation depends on $\beta$. For $\beta \in (1/2,1)$, up to isometries we have two possible minimisers (with energy $-20-4\beta$), presented in Figure \ref{fig:sixteenpointexample}. In one case, we have a straight interface, while in the other it is L-shaped.
\begin{figure}[h!]
\includegraphics[scale=0.31]{example_sixteen_points.png}
\caption{Minimisers for $N_A = 12, N_B = 4$, large $\beta$}
\label{fig:sixteenpointexample}
\end{figure}
For $\beta \in (0,1/2)$, up to isometries, we have three possible minimisers (with energy $-21-2\beta$), presented in Figure \ref{fig:sixteenpointexamplebigalpha}. Here, the structure of sets $A$ and $B$ is fixed, but we may attach them in a few different ways.
\begin{figure}[h!]
\includegraphics[scale=0.26]{example_sixteen_points_v2.png}
\caption{Minimisers for $N_A = 12, N_B = 4$, small $\beta$}
\label{fig:sixteenpointexamplebigalpha}
\end{figure}
For $\beta = 1/2$, all configurations presented in Figures \ref{fig:sixteenpointexample} and \ref{fig:sixteenpointexamplebigalpha} are minimal.
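The energies quoted above can be verified by direct bond counting. In the following Python sketch (purely illustrative; the point placements are our reconstruction of the pictured minimisers), the pair $A_1, B_1$ realises an interface of length $4$ and the pair $A_2, B_2$ an interface of length $2$; the crossover at $\beta = 1/2$ is visible in the asserted inequalities.

```python
def energy(A, B, beta):
    """-1 per bond within A or within B, -beta per bond between A and B."""
    A, B = set(A), set(B)
    def nb(p): x, y = p; return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    same = sum(1 for S in (A, B) for p in S for q in nb(p) if q in S) // 2
    cross = sum(1 for p in A for q in nb(p) if q in B)
    return -same - beta * cross

# A1: 4x4 square minus its top-right 2x2 corner; B1 fills the notch (interface 4)
B1 = {(2, 2), (3, 2), (2, 3), (3, 3)}
A1 = {(x, y) for x in range(4) for y in range(4)} - B1
# A2: 4x3 rectangle; B2: 2x2 square attached to its right side (interface 2)
A2 = {(x, y) for x in range(4) for y in range(3)}
B2 = {(x, y) for x in (4, 5) for y in (0, 1)}

for beta in (0.25, 0.5, 0.75):
    assert abs(energy(A1, B1, beta) - (-20 - 4 * beta)) < 1e-12
    assert abs(energy(A2, B2, beta) - (-21 - 2 * beta)) < 1e-12
# the short interface wins for small beta, the long one for large beta
assert energy(A2, B2, 0.25) < energy(A1, B1, 0.25)
assert energy(A1, B1, 0.75) < energy(A2, B2, 0.75)
```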
\section{Classification of admissible configurations}\label{sec:classi}
For simplicity, we will call the configurations which satisfy the statement of Theorem \ref{thm:connected} and of the corollaries below it {\it admissible}. In particular, these results show that optimal configurations are admissible. In this section, we collect admissible configurations in different classes, which will be analysed in more detail in the subsequent sections. The starting point is the observation that, by Corollary \ref{cor:nomissingrows}, there cannot be a row $R_{k_0}$ with $n^{\rm row}_{k_0} = 0$ such that $n^{\rm row}_k > 0$ both for some $k < k_0$ and for some $k > k_0$. The same result holds for columns. Therefore, we may cluster the minimisers into several classes, described by means of this property, which are easier to handle.
Let us start from the top and suppose that $ n^{\rm row}_1 > 0$ (otherwise, we exchange the roles of the two types). Denote by $R_{k_0}$ the last row such that $n^{\rm row}_{k_0} > 0$. Then, we have the two possibilities
\begin{equation}\label{eq:k0=Nr}
{\rm (i)} \ \ k_0 = N_{\rm row} \quad \quad \quad \text{and} \quad \quad \quad {\rm (ii)} \ \ k_0 < N_{\rm row}.
\end{equation}
In case (i), we distinguish four possibilities, depending on whether $B^{\rm row}_1$ and $B^{\rm row}_{N_{\rm row}}$ are empty or not: if $ m^{\rm row}_1, m^{\rm row}_{N_{\rm row}} > 0$, then each row contains points of both types. If $ m^{\rm row}_1$ or $m^{\rm row}_{N_{\rm row}}$ equals zero, then the $B$-part of the configuration has a smaller height.
In case (ii), we distinguish two possibilities, depending on whether $B^{\rm row}_1$ is empty or not. If $B^{\rm row}_1$ is not empty, then $m^{\rm row}_k > 0$ for all $k = 1,...,N_{\rm row}$. If $B^{\rm row}_1$ is empty, then there exists $k_1 > 0$ such that $m^{\rm row}_k = 0$ for $k \leq k_1$ and $m^{\rm row}_k > 0$ for $k = k_1 + 1, \ldots ,N_{\rm row}$.
By performing the same analysis for columns, and recalling the corollaries after Theorem \ref{thm:connected}, we end up with a number of possibilities which we list below, where without restriction we assume that $ n_1^{\rm col} >0$. This list is complete up to isometries and up to interchanging the roles of the two types. For the sake of the presentation, by applying Corollary~\ref{cor:interface} we may without restriction (possibly up to an isometry and interchanging the roles of the types) assume that the interface goes upwards and to the right. We divide all admissible configurations into five main \textit{classes}, the first three being quite regular and the last two a bit more difficult to handle. In this section, we list all classes and introduce appropriate notation for each of them. In the next section, we develop a regularisation procedure for all configurations. Its aim is to prove that for $N_A = N_B$ and $\beta \le 1/2$ all minimal configurations belong to Class $\mathcal{I}$, $\mathcal{IV}$, or $ \mathcal{V}$, and to establish some fine geometric properties of such minimisers.
\subsection{Class $\mathcal{I}$}
The first possibility is the reference case: we say that an admissible
configuration $(A,B)$ belongs to Class $\mathcal{I}$ if for each $k =
1,...,N_{\rm row}$ we have $n^{\rm row}_k > 0$ and $m^{\rm row}_k >
0$. In other words, \eqref{eq:k0=Nr}(i) holds with $m^{\rm row}_1, m^{\rm
row}_{N_{\rm row}} > 0$. The situation is presented in Figure
\ref{fig:ClassIbefore}. Examples of optimal configurations in Class
$\mathcal{I}$ can be found in Figure \ref{fig:threepointexample} (on
the right), in Figure \ref{fig:sixpointexample} (both), in Figure
\ref{fig:fourpointexample}, in Figure \ref{fig:sevenpointexample} (in
the middle), in Figure \ref{fig:fivepoints} (all but the two middle
ones), and in Figure \ref{fig:sixteenpointexample} (on the
right). The abundance of examples in Class $\mathcal{I}$ is in some
sense expected. Indeed, we will prove that for many choices of $N_A$,
$N_B$, and $\beta$ existence of an optimal configuration in Class $\mathcal{I}$ is guaranteed.
\begin{figure}[h]
\includegraphics[scale=0.08]{ClassIbefore-eps-converted-to.pdf}
\caption{Class $\mathcal{I}$}
\label{fig:ClassIbefore}
\end{figure}
Let us introduce the following notation. Let $h$ denote the number of rows (which in this case corresponds to the number of rows of both $A$ and $B$). Let $l_1$ denote the number of columns such that $A^{\rm col}_k \neq \emptyset$ and $B^{\rm col}_k = \emptyset$. Let $l_2$ denote the number of columns such that $A^{\rm col}_k \neq \emptyset$ and $B^{\rm col}_k \neq \emptyset$. Finally, let $l_3$ denote the number of columns such that $A^{\rm col}_k = \emptyset$ and $B^{\rm col}_k \neq \emptyset$. This notation is also presented in Figure \ref{fig:ClassIbefore}. Then, in view of \eqref{eq:eq}, the energy \eqref{eq: basic eneg} may be expressed as
\begin{equation}\label{eq:formulaforenergyclassI}
E(A,B) = - 2 (N_A + N_B) + (l_1 + l_2 + l_3) + h + (1-\beta) (l_2 + h).
\end{equation}
In particular, the energy splits into the \textit{bulk energy} $-
2 (N_A + N_B)$ and, up to a factor $1/2$, into the \textit{lattice
perimeter} introduced in \eqref{eq:dbp3}. Clearly, only the latter
is relevant for identifying optimal configurations. For convenience,
we will frequently refer to it as the surface energy.
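As an illustration of \eqref{eq:formulaforenergyclassI}, consider the eight-point minimiser of Figure \ref{fig:fourpointexample}: two $2\times 2$ squares side by side, so that $h = 2$, $l_1 = l_3 = 2$, and $l_2 = 0$. The formula gives $-16 + 4 + 2 + 2(1-\beta) = -8 - 2\beta$, which agrees with a direct bond count. The Python sketch below (purely illustrative; the coordinates are ours) checks this.

```python
def energy(A, B, beta):
    """Direct count: -1 per same-type bond, -beta per A-B bond."""
    A, B = set(A), set(B)
    def nb(p): x, y = p; return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    same = sum(1 for S in (A, B) for p in S for q in nb(p) if q in S) // 2
    cross = sum(1 for p in A for q in nb(p) if q in B)
    return -same - beta * cross

A = {(0, 0), (0, 1), (1, 0), (1, 1)}          # 2x2 square of A-points
B = {(2, 0), (2, 1), (3, 0), (3, 1)}          # 2x2 square of B-points to its right
N_A, N_B, h, l1, l2, l3 = 4, 4, 2, 2, 0, 2
beta = 0.4
formula = -2 * (N_A + N_B) + (l1 + l2 + l3) + h + (1 - beta) * (l2 + h)
assert abs(energy(A, B, beta) - formula) < 1e-12    # both equal -8 - 2*beta
```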
In the next section, we will simplify the structure of configurations in Class $\mathcal{I}$, without increasing the energy, in order to compute the minimal energy in this class. After such regularisation, it will turn out that we have two possibilities: either $l_2 = 0$ or $l_2 = 1$, i.e., either the interface is a straight line or it has one horizontal jump, see Proposition \ref{prop:classIregularisationstep1}.
\subsection{Class $\mathcal{II}$}
We say that an admissible configuration $(A,B)$ belongs to Class~$\mathcal{II}$ if there exists a column $C_{k_0}$ such that for all $k \leq k_0$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k = 0$, for all $k > k_0$ we have $n^{\rm col}_k = 0$ and $m^{\rm col}_k > 0$, and $(A,B)$ does not lie in Class~$\mathcal{I}$. In other words, the interface is a straight vertical line, and there exists at least one row which contains only one type (as otherwise $(A,B) \in \mathcal{I}$). Examples of optimal configurations in this class can be found in Figure \ref{fig:threepointexample} (on the left), in Figure \ref{fig:sevenpointexample} (on the left), and in Figure \ref{fig:sixteenpointexamplebigalpha} (all of them). Notice that in all these examples we have $N_A \neq N_B$. Indeed, in Section \ref{sec:regularisation} we will show that, if $N_A$ and $N_B$ are equal, such a configuration cannot be optimal.
A priori, this set of configurations may arise from both cases in \eqref{eq:k0=Nr}. Up to interchanging the roles of the two types, however, we may assume that we are in situation \eqref{eq:k0=Nr}(i), as the following simple observation shows.
\begin{lemma}\label{lem:classIIregularisation}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{II}$ is a minimal configuration. Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{II}$ such that the last rows align, i.e., $n^{\rm row}_{N_{\rm row}} > 0$ and $m^{\rm row}_{N_{\rm row}} > 0$.
\end{lemma}
\begin{proof}
Without loss of generality, suppose that $n^{\rm row}_{N_{\rm row}} > 0$ and that $r_0 < N_{\rm row}$ is the largest index such that $m^{\rm row}_{r_0} > 0$. Since the interface is a straight line, we may move the set $B$ by the vector $(0,r_0 - N_{\rm row})$ so that the last rows of $A$ and $B$ align, and this procedure does not increase the energy. The resulting configuration $(\hat{A},\hat{B})$ also lies in Class $\mathcal{II}$: if after this procedure we had also $n^{\rm row}_{1} > 0$ and $m^{\rm row}_{1} > 0$, i.e., $(\hat{A},\hat{B})$ were in Class $\mathcal{I}$, then we would have added at least one bond. This induces a drop in the energy, contradicting the minimality of $(A,B)$.
\end{proof}
After applying this regularisation argument, we introduce the following notation. Up to reflection along the (straight) interface and interchanging the roles of the types, we may assume that $A$ is on the left-hand side and that it has more nonempty rows than $B$. Then, let $h_1$ denote the number of rows such that $A^{\rm row}_k \neq \emptyset$ and $B^{\rm row}_k = \emptyset$, and let $h_2$ be the number of rows such that $A^{\rm row}_k \neq \emptyset$ and $B^{\rm row}_k \neq \emptyset$. Moreover, let $l_1$ denote the number of columns such that $A^{\rm col}_k \neq \emptyset$ and $l_3$ denote the number of columns such that $B^{\rm col}_k \neq \emptyset$ (the notation $l_2$ is omitted on purpose to simplify some later regularisation arguments). Then, arguing as in the justification of formula \eqref{eq:formulaforenergyclassI}, see also \eqref{eq:eq}, the energy \eqref{eq: basic eneg} may be expressed as
\begin{equation}\label{eq: energy,class2}
E(A,B) = - 2(N_A + N_B) + (l_1 + l_3) + (h_1 + h_2) +
(1-\beta) h_2.
\end{equation}
The situation is presented in Figure \ref{fig:classIInotation}.
\begin{figure}[h]
\includegraphics[scale=0.08]{ClassIIbefore-eps-converted-to.pdf}
\caption{Class $\mathcal{II}$}
\label{fig:classIInotation}
\end{figure}
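The three-point minimiser on the left of Figure \ref{fig:threepointexample} lies in Class $\mathcal{II}$, with $l_1 = l_3 = 1$ and $h_1 = h_2 = 1$, so \eqref{eq: energy,class2} gives $-6 + 2 + 2 + (1-\beta) = -1 - \beta$. The following Python sketch (purely illustrative; the coordinates are our reconstruction) confirms this against a direct bond count.

```python
def energy(A, B, beta):
    """-1 per bond within A or within B, -beta per bond between A and B."""
    A, B = set(A), set(B)
    def nb(p): x, y = p; return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    same = sum(1 for S in (A, B) for p in S for q in nb(p) if q in S) // 2
    cross = sum(1 for p in A for q in nb(p) if q in B)
    return -same - beta * cross

A = {(0, 0), (0, 1)}      # one column of A-points spanning two rows
B = {(1, 1)}              # one B-point; straight vertical interface
N_A, N_B = 2, 1
l1, l3 = 1, 1             # one A-column, one B-column
h1, h2 = 1, 1             # one A-only row, one shared row
beta = 0.7
formula = -2 * (N_A + N_B) + (l1 + l3) + (h1 + h2) + (1 - beta) * h2
assert abs(energy(A, B, beta) - formula) < 1e-12      # both equal -1 - beta
```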
\subsection{Class $\mathcal{III}$}
We say that an admissible configuration $(A,B)$ belongs to Class~$\mathcal{III}$ if for each $k = 1,...,N_{\rm row}$ we have $n^{\rm row}_k > 0$ and for each $l = 1,...,N_{\rm col}$ we have $n^{\rm col}_l > 0$. In other words, each row and each column of $(A,B)$ contains at least one $A$-point (or equivalently, for every $B$-point there is an $A$-point above it and another one to its left). An example of an optimal configuration in this class can be found in Figure~\ref{fig:sixteenpointexample}. Note that in this example the ratio $N_A / N_B$ is far away from $1$. Indeed, in Section \ref{sec:regularisation} we will show that for $N_A = N_B$ configurations in this class cannot be optimal.
Counting from the left, let $l_1$ denote the number of leading columns such that $A^{\rm col}_k \neq \emptyset$ and $B^{\rm col}_k = \emptyset$, let $l_2$ denote the number of columns such that $A^{\rm col}_k \neq \emptyset$ and $B^{\rm col}_k \neq \emptyset$, and let $l_3$ be the number of the remaining columns, to the right of these, such that $A^{\rm col}_k \neq \emptyset$ and $B^{\rm col}_k = \emptyset$. Similarly, counting from the top, denote by $h_1$ the number of leading rows such that $A^{\rm row}_k \neq \emptyset$ and $B^{\rm row}_k = \emptyset$, let $h_2$ be the number of rows such that $A^{\rm row}_k \neq \emptyset$ and $B^{\rm row}_k \neq \emptyset$, and finally let $h_3$ be the number of the remaining rows, below these, such that $A^{\rm row}_k \neq \emptyset$ and $B^{\rm row}_k = \emptyset$. Similarly to the previous classes, the energy may be expressed as
\begin{equation}\label{eq: nerg3}
E(A,B) = - 2(N_A + N_B) + (l_1 + l_2 + l_3) + (h_1 + h_2 + h_3) +
(1-\beta) (l_2 + h_2).
\end{equation}
The situation is presented in Figure \ref{fig:classIII}.
\begin{figure}[h]
\includegraphics[scale=0.6]{ClassIIIbefore.png}
\caption{Class $\mathcal{III}$}
\label{fig:classIII}
\end{figure}
\subsection{Class $\mathcal{IV}$}
We say that an admissible configuration $(A,B)$ belongs to Class~$\mathcal{IV}$ if there exist $l_1, l_2, h_1, h_2 > 0$ such that $N_{\rm row} + N_{\rm col} - (l_1+l_2+h_1+h_2)>0$ and the following conditions hold: for each $k = 1,...,l_1$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k = 0$. For each $k = l_1+1,...,l_1 + l_2$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k > 0$. Finally, for all $k = l_1+l_2+1,...,N_{\rm col}$ (this set of indices may be empty) we have $n^{\rm col}_k = 0$ and $m^{\rm col}_k > 0$. Similarly, for each $l = 1,...,h_1$ we have $n^{\rm row}_l > 0$ and $m^{\rm row}_l = 0$. For each $l = h_1+1,...,h_1+h_2$ we have $n^{\rm row}_l > 0$ and $m^{\rm row}_l > 0$. Finally, for all $l = h_1+h_2+1,...,N_{\rm row}$ (this set of indices may be empty) we have $n^{\rm row}_l = 0$ and $m^{\rm row}_l > 0$. Setting $l_3 = N_{\rm col} - l_1 - l_2$ and $h_3 = N_{\rm row} - h_1 - h_2$, we observe that $l_3>0$ or $h_3>0$, i.e., the configuration does not lie in Class $\mathcal{III}$. The energy may be expressed as
\begin{equation}\label{eq:classIVformula}
E(A,B) = - 2(N_A + N_B) + (l_1 + l_2 + l_3) + (h_1 + h_2 + h_3) +
(1-\beta) (l_2 + h_2).
\end{equation}
The situation is presented in Figure \ref{fig:ClassIVbefore}. Examples of optimal configurations in this class can be found in Figure~\ref{fig:sevenpointexample} (on the right) and in Figure \ref{fig:fivepoints} (both in the middle).
\begin{figure}[h]
\includegraphics[scale=0.08]{ClassIVbefore-eps-converted-to.pdf}
\caption{Class $\mathcal{IV}$}
\label{fig:ClassIVbefore}
\end{figure}
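Classes $\mathcal{III}$, $\mathcal{IV}$ and $\mathcal{V}$ share the same closed-form energy expression. As a small numerical aid that we add here (it is not part of the original argument), the expression in \eqref{eq:classIVformula} can be evaluated directly once the counts of a configuration are known; the function name and the way the counts are passed are our own choices:

```python
# Sketch (our addition): evaluate the shared energy expression
#   E(A,B) = -2(N_A + N_B) + (l1 + l2 + l3) + (h1 + h2 + h3) + (1 - beta)(l2 + h2)
# for given configuration counts; the counts themselves must be read off
# from a concrete configuration, which this sketch does not do.
def energy(N_A, N_B, l, h, beta):
    l1, l2, l3 = l
    h1, h2, h3 = h
    bulk = -2 * (N_A + N_B)
    surface = (l1 + l2 + l3) + (h1 + h2 + h3)
    interface = (1 - beta) * (l2 + h2)
    return bulk + surface + interface
```

For instance, $N_A = N_B = 8$, $(l_1,l_2,l_3) = (2,1,2)$, $(h_1,h_2,h_3) = (1,1,1)$ and $\beta = 1/2$ give $E = -23$.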
\subsection{Class $\mathcal{V}$}
We say that an admissible configuration $(A,B)$ belongs to Class~$\mathcal{V}$ if there exist $l_1, l_2, l_3, h_1, h_2, h_3 > 0$ such that $l_1 + l_2 + l_3 = N_{\rm col}$, $h_1+h_2+h_3 = N_{\rm row}$ and the following conditions hold: for each $k = 1,...,l_1$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k = 0$. For each $k = l_1+1,...,l_1 + l_2$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k > 0$. Finally, for all $k = l_1+l_2+1,...,N_{\rm col}$ we have $n^{\rm col}_k > 0$ and $m^{\rm col}_k = 0$. On the other hand, for each $l = 1,...,h_1$ we have $n^{\rm row}_l > 0$ and $m^{\rm row}_l = 0$. For each $l = h_1+1,...,h_1+h_2$ we have $n^{\rm row}_l > 0$ and $m^{\rm row}_l > 0$. Finally, for all $l = h_1+h_2+1,...,N_{\rm row}$ we have $n^{\rm row}_l = 0$ and $m^{\rm row}_l > 0$. The energy may be expressed as
\begin{equation*}
E(A,B) = - 2(N_A + N_B) + (l_1 + l_2 + l_3) + (h_1 + h_2 + h_3) + (1-\beta) (l_2 + h_2).
\end{equation*}
The situation is presented in Figure \ref{fig:classVbefore}.
\begin{figure}[h]
\includegraphics[scale=0.24]{ClassVbefore.png}
\caption{Class $\mathcal{V}$}
\label{fig:classVbefore}
\end{figure}
We close this section with the observation that the five classes cover all possible cases up to isometries, reflections, and exchanging the roles of the two types.
\section{Analysis of Class $\mathcal{I}$}\label{sec:reg1}
\subsection{Regularisation inside Class $\mathcal{I}$}
The goal of this section is to make the configuration in Class~$\mathcal{I}$ more regular without increasing the energy. This regularisation will facilitate the computation of the minimal energy. We keep the notation as in the previous section, and begin with the following observation.
\begin{proposition}\label{prop:classIregularisationstep1}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{I}$ is an optimal configuration. Then, we either have $l_2 = 0$ or $l_2 = 1$.
\end{proposition}
Both cases can happen: take $N_A = N_B = 3$ and $ \beta \in (0,1)$. Then, there are two optimal configurations, one with $l_2 = 0$ and the other one with $l_2 = 1$, see Figure \ref{fig:sixpointexample}.
\begin{proof}
The idea of the proof is the following: we suppose by contradiction that $l_2 \geq 2$. We add more points to the configuration $(A,B)$, so that it becomes a full rectangle, keeping track of the change of the energy in the process. Then, we exchange a number of points, making the interface shorter and causing a drop in the energy. Finally, we remove the added points, again keeping track of the energy. This yields strictly smaller total energy, a contradiction. The argument is presented in Figure \ref{fig:regularisationClassI}.
To be exact, let us modify the configuration $(A,B)$ as follows. We add $N'_A$ $A$-points on the left and $N'_B$ $B$-points on the right so that $(A,B)$ becomes a full rectangle with sides $l_1 + l_2 + l_3$ and $h$. Notice that in this way we do not alter the surface energy. Meanwhile, the bulk energy changes by $- 2 (N_A' + N_B')$. Now, look at the rectangle in the middle with sides $l_2$ and $h$. If we exchange $A$-points from its rightmost column with $B$-points from its leftmost column (as many as we can), we will make one column (or two) full of points of one type. Hence, in the formula for the energy, see \eqref{eq:formulaforenergyclassI}, we replace $l_2$ by $l_2 - 1$ (respectively $l_2 - 2$), and $l_1 + l_3$ by $l_1 + l_3 + 1$ (respectively $l_1 + l_3 + 2$). This causes a drop in the surface energy by $ (1-\beta)$ or $2(1-\beta)$.
Finally, we take care of the added points. We remove $N_A'$ $A$-points, starting from the leftmost column, going from top to bottom. In the process, the surface energy decreases or remains the same (since $l_1$ may decrease or remain the same). Similarly, we remove $N_B'$ $B$-points, starting from the rightmost column and going from top to bottom.
In this way, we have obtained a configuration $(\hat{A},\hat{B})$ with
the same number of $A$- and $B$-points as $(A,B)$, but with energy
lower by at least $(1-\beta)$. After this operation, we may end up with a shape of the interface different from the one in Class $\mathcal{I}$, but this does not matter, since we only wanted to show that $(A,B)$ was not optimal. Hence, if $(A,B)$ is an optimal configuration, then $l_2 = 0$ or $l_2 = 1$.
\end{proof}
\begin{figure}[h]
\includegraphics[scale=0.5]{ClassIregularisation.png}
\caption{Regularisation of Class $\mathcal{I}$}
\label{fig:regularisationClassI}
\end{figure}
By performing the modification described in the proof, we may assume that the configuration is as compact as possible: given $h$, the values of $l_1$ and $l_3$ are as small as possible, and all the columns except for the leftmost and rightmost ones are full (i.e., have $h$ points). This is a property that we will use several times in the sequel.
Now, let us focus on the case $N_A = N_B$. We will give an exact formula for the minimal energy. For this purpose, we first show that we may assume that $l_2 = 0$. To this end, we state the following technical lemma.
\begin{lemma}\label{lem:classIstructurelemma}
Fix $N := N_A = N_B>0$ and $\beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{I}$ is an optimal configuration such that $l_1 = l_3$ and $l_2 = 1$. Then, we have $l_1 = l_3 \geq h/2$.
\end{lemma}
\begin{proof}
Without restriction we assume that $(A,B)$ has the form described
before the statement of the lemma, see also the last picture in Figure
\ref{fig:regularisationClassI}. Let $k = \lceil h/2 \rceil$. Suppose by contradiction that the statement does not hold, i.e., $l_1 = l_3 < k$ (in particular, $k \geq 2$).
Consider two cases: first, assume that $h$ is even, so that $h = 2k$. Then, the whole configuration fits into a rectangle with height $2k$ and width $2l_1 + 1$, where $l_1 \leq k-1$. Let us rearrange all the points so that the resulting configuration lies in a rectangle with height $2k-1$ and width $2l_1+2$. We place the points by filling the columns from left to right, first with $A$-points and then with $B$-points, so that the resulting configuration lies in Class $\mathcal{I}$ and has $l_2 \leq 1$. In fact, all points may be placed in this rectangle since the assumption $l_1 \leq k-1$ implies
$$ (2k-1)(2l_1+2) \geq 2k(2l_1+1).$$
But then the new configuration has strictly smaller energy since $h$ decreased by $1$, $l_2 \leq 1$, and $l_1+l_3$ grew by at most $1$. Hence, the original configuration was not optimal, a contradiction.
In the second case, $h$ is odd, so that $h = 2k-1$. Then, the whole configuration fits into a rectangle with height $2k -1$ and width $2l_1 + 1$, where $l_1 \leq k-1$. Let us again rearrange all the points using the procedure from the previous paragraph, so that the resulting configuration lies in a rectangle with height $2k-2$ and width $2l_1+2$ and satisfies $l_2 \leq 1$. Indeed, if $l_1 \leq k-2$, all points may be placed in this rectangle since in this case we have
\begin{align}\label{inequili}
(2k-2)(2l_1+2) \geq (2k-1)(2l_1+1).
\end{align}
On the other hand, if $l_1 = k-1$, we have
$$(2k-2)(2l_1+2) = (2k-1)(2l_1+1) - 1,$$
so the inequality \eqref{inequili} is not satisfied. In this case,
however, $(2k-1)(2l_1+1)$ is odd. Thus, since the total number of
points $2N$ is even, it is not possible that the entire rectangle with height $2k-1$ and width $2l_1 + 1$ was full in the original configuration. Therefore, we can still place all the points in the rectangle with height $2k-2$ and width $2l_1+2$. As before, the new configuration has strictly smaller energy since $h$ decreased by $1$, $l_2 \leq 1$, and $l_1+l_3$ grew by at most $1$: a contradiction.
\end{proof}
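The two counting inequalities used in the proof, $(2k-1)(2l_1+2) \geq 2k(2l_1+1)$ for $l_1 \leq k-1$ and $(2k-2)(2l_1+2) \geq (2k-1)(2l_1+1)$ for $l_1 \leq k-2$, can also be confirmed by an exhaustive check; the following sketch (our addition, with an arbitrarily chosen test range) does exactly that:

```python
# Brute-force verification of the two rectangle-capacity inequalities
# from the proof above, over small values of k.
for k in range(2, 100):
    for l1 in range(1, k):        # the case l1 <= k - 1
        assert (2 * k - 1) * (2 * l1 + 2) >= 2 * k * (2 * l1 + 1)
    for l1 in range(1, k - 1):    # the case l1 <= k - 2
        assert (2 * k - 2) * (2 * l1 + 2) >= (2 * k - 1) * (2 * l1 + 1)
print("both inequalities hold on the tested range")
```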
Now, we proceed to prove the main result for Class $\mathcal{I}$, namely that for the purpose of the computation of the minimal energy we may assume that $l_2 = 0$.
\begin{proposition}\label{prop:classIregularisationstep2}
Fix $N := N_A = N_B>0$ and $ \beta \in (0,1)$. If $(A,B) \in \mathcal{I}$ is an optimal configuration, then there exists an optimal configuration $(\hat{A},\hat{B}) \in \mathcal{I}$ with $l_2 = 0$.
\end{proposition}
\begin{proof}
If $(A,B) \in \mathcal{I}$ is such that $l_2 = 0$, there is nothing to prove. Suppose now that $l_2 > 0$. Then, by Proposition \ref{prop:classIregularisationstep1} we have that $l_2 = 1$. We introduce the following notation: again, $l_1$ is the number of columns with only $A$-points and $l_3$ is the number of columns with only $B$-points. We can assume that all columns except for the leftmost and rightmost ones are full, cf.\ the last picture in Figure \ref{fig:regularisationClassI}. By $r_1 \in \{ 1,...,h \}$ we denote the number of $A$-points in the leftmost column, and $r_4 \in \{ 1,...,h \}$ is the number of $B$-points in the rightmost column. By $r_2,r_3 \in \{ 1,...,h-1 \}$ we denote the numbers of $A$- and $B$-points, respectively, in the single column which contains points of both types.
Since $N_A = N_B$, we compute the number of points of each type and we get
\begin{equation*}
(l_1 - 1) h + r_1 + r_2 = (l_3 - 1) h + r_3 + r_4,
\end{equation*}
so
\begin{equation}\label{eq: numbering}
(l_1 - l_3) h = r_3 + r_4 - r_1 - r_2.
\end{equation}
Due to the range of $r_1,\ldots,r_4$, the right-hand side can take only values between $-2h+3$ and $2h-3$; since the left-hand side is a multiple of $h$, it needs to take values in the set $\{ -h,0,h\}$. Hence, up to exchanging the roles of the two types, we either have $l_1 = l_3$ or $l_1 = l_3 + 1$.
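The range argument is elementary, but it can also be confirmed by enumeration; the following sketch (our addition, over an arbitrary range of heights) records which multiples of $h$ the quantity $r_3 + r_4 - r_1 - r_2$ can attain:

```python
# For each small h, enumerate the admissible r1, r4 in {1,...,h} and
# r2, r3 in {1,...,h-1}, and record which multiples of h occur as
# r3 + r4 - r1 - r2 (the right-hand side of the balance equation).
for h in range(2, 25):
    attained = {r3 + r4 - r1 - r2
                for r1 in range(1, h + 1) for r4 in range(1, h + 1)
                for r2 in range(1, h) for r3 in range(1, h)}
    multiples = {v for v in attained if v % h == 0}
    assert multiples <= {-h, 0, h}
print("only -h, 0 and h can occur")
```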
First, suppose that $l_1 = l_3 + 1$. Then, by \eqref{eq: numbering} we have $r_1 + r_2 + h = r_3 + r_4$. In particular, $r_1 + r_2 < h$ as $r_3+r_4 \le 2h-1$. Hence, we may move the $r_2$ $A$-points from the single column with both types to the leftmost column, and replace them by $r_2$ $B$-points from the rightmost column. In this way, the double-type column disappeared altogether. This process strictly decreases the energy \eqref{eq:formulaforenergyclassI} since $l_1$ stays the same, $l_2$ decreases by $1$, and $l_3$ increases by $1$ or stays the same. This is a contradiction.
Now, suppose that $l_1 = l_3$. Then, by \eqref{eq: numbering} we have $r_1 + r_2 = r_3 + r_4$. If $r_1 + r_2 \leq h$, we proceed as in the previous paragraph. Suppose otherwise, i.e., $r_1 + r_2 = r_3+r_4 > h$. Without restriction we can suppose that $r_3 \ge r_2$. Let $k = \lceil h/2 \rceil$. Notice that we may modify the configuration so that $r_2 = \lfloor h/2 \rfloor$ and $r_3 = k$. Indeed, otherwise we move $\lfloor h/2 \rfloor - r_2$ ($=r_3 - k$) $B$-points from the double-type column to the rightmost column and move $\lfloor h/2 \rfloor - r_2$ $A$-points from the leftmost column to the double-type column, so that the two types have $\lfloor h/2 \rfloor$ and $k$ points, respectively, in the double-type column. In this way, since
$$r_4 + \lfloor h/2 \rfloor - r_2 = (r_1+r_2-r_3) + \lfloor h/2 \rfloor - r_2 = r_1 + \lfloor h/2 \rfloor -r_3 \le h,$$
where we used $r_1 \le h$ and $r_3 \ge \lfloor h/2 \rfloor$, we did not add any additional column on the right. Thus, the total energy did not increase.
As $l_1=l_3$ and $l_2 = 1$, by Lemma \ref{lem:classIstructurelemma} we have that $l_1 = l_3 \geq k$. Now, remove all the points in the double-type column and place them directly above the first row, $ \lfloor h/2 \rfloor $ $A$-points directly above the $l_1$ $A$-points (starting from the right) and $k$ $B$-points directly above the $l_3$ $B$-points (starting from the left). Finally, we merge the two connected components of the resulting configuration by moving the connected component on the left by $(1,0)$. In this way, $h$ increased by 1, $l_2$ decreased by 1, and $l_1$ and $l_3$ remain unchanged, so that the energy remains the same, see \eqref{eq:formulaforenergyclassI}. Hence, the resulting configuration $(\hat{A},\hat{B})$ is minimal, lies in Class $\mathcal{I}$, and satisfies $l_2 = 0$. This concludes the proof.
\end{proof}
\subsection{Exact calculation for Class $\mathcal{I}$}
The regularisation procedure presented in the previous subsection
enables us to compute directly the minimal energy for configurations
in Class $\mathcal{I}$ for any $ \beta \in (0,1)$. In this subsection, we suppose that $N_A =
N_B$ and denote the common value by $N$. Later, in Section
\ref{sec:regularisation} we will show that there always exists a
minimiser in Class~$\mathcal{I}$, which implies that the energy
computed below coincides with the minimal energy.
\begin{theorem}\label{thm:classIexact}
Fix $N := N_A = N_B>0$ and $ \beta \in (0,1)$. Suppose that a minimal configuration $(A,B)$ is in Class $\mathcal{I}$. Then, its energy is equal to the smaller of the two numbers
\begin{align}\label{eq: E*}
E_*=-4N+2\left\lceil \frac{N}{ \left\lfloor
\sqrt{\frac{2N}{ 2 - \beta }}\right\rfloor }\right\rceil + \left\lfloor
\sqrt{\frac{2N}{ 2 - \beta }}\right\rfloor ( 2 - \beta )
\end{align}
and
\begin{align}\label{eq: E**}
E^*=-4N+2\left\lceil \frac{N}{ \left\lceil
\sqrt{\frac{2N}{ 2 - \beta }}\right\rceil }\right\rceil+ \left\lceil
\sqrt{\frac{2N}{ 2 - \beta }}\right\rceil ( 2 - \beta )
\end{align}
depending on $N$ and $ \beta$.
\end{theorem}
\begin{proof}
By Proposition \ref{prop:classIregularisationstep2}, for the purpose of the computation of the minimal energy, we may assume that $l_2 = 0$. Hence, we also have $l_1 = l_3$, and we denote the common value by $\ell$. Notice that we may minimise the energy under the constraint
$${ h,\,\ell \in \mathbb{N}, \quad N =h \ell + r \quad \text{with}\
r\in \mathbb{N}, \ 0\leq r\leq h-1.}$$
This constraint is natural since for fixed $h$, the length $\ell$ is minimal whenever all the columns except for the leftmost and rightmost ones are full (i.e., have $h$ points). We also refer to the configuration given in Theorem~\ref{thm:main}.v. Under these assumptions, we may rewrite the energy \eqref{eq:formulaforenergyclassI} as
\begin{equation*}
E(A,B) = -4N + 2 (\ell+\min\lbrace r,1 \rbrace) + h( 2- \beta ).
\end{equation*}
In particular, one can express $E$ solely in terms of $h\in \mathbb{N}$ as
\begin{equation}\label{eq:E}
E(h):=-4N+ 2\left\lceil \frac{N}{h}\right\rceil + h( 2 - \beta ).
\end{equation}
Since the function $h \in (0,\infty) \mapsto -4N + 2N/h + h ( 2 - \beta ) $ is
strictly convex and attains its minimum at $\sqrt{2N/( 2 - \beta )}$,
the minimiser $h$ of $E$ from \eqref{eq:E} is either
\begin{align}\label{eq: h***}
h_*= \left\lfloor \sqrt{\frac{2N}{ 2 - \beta }}\right\rfloor \quad \text{or}
\quad h^* = \left\lceil \sqrt{\frac{2N}{ 2 - \beta }}\right\rceil.
\end{align}
In the first case, the minimal value of $E$ equals \eqref{eq: E*} and in the second case it equals \eqref{eq: E**}.
Whether $E^*$ or $E_*$ is smaller depends on $N$ and $ \beta
$. However, unless $h^* = h_*$, these two numbers cannot be equal for
any $ \beta \in (0,1)$.
\end{proof}
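As an independent sanity check that we add here (it is not part of the proof), the formula $E(A,B) = -4N + 2(\ell+\min\lbrace r,1\rbrace) + h(2-\beta)$ can be verified, in the case $r = 0$, by direct nearest-neighbour bond counting on the explicit configuration in which $A$ occupies an $\ell \times h$ rectangle and $B$ an adjacent $\ell \times h$ rectangle. The helper names below are our own, and $N$ denotes the number of points of each type:

```python
# Sketch (our addition): count nearest-neighbour bonds of a concrete
# Class I configuration with r = 0 and compare with the closed formula
# E = -4N + 2l + h(2 - beta); same-type bonds contribute -1, A-B bonds -beta.
def bond_energy(points, beta):
    energy = 0.0
    for (x, y), t in points.items():
        for nb in ((x + 1, y), (x, y + 1)):   # each bond counted once
            if nb in points:
                energy += -1.0 if points[nb] == t else -beta
    return energy

def check(N, h, beta):
    l = N // h                                # here h must divide N (r = 0)
    points = {(x, y): 'A' for x in range(l) for y in range(h)}
    points.update({(x, y): 'B' for x in range(l, 2 * l) for y in range(h)})
    predicted = -4 * N + 2 * l + h * (2 - beta)
    assert abs(bond_energy(points, beta) - predicted) < 1e-9

for N, h in [(10, 5), (12, 4), (12, 3), (9, 3)]:
    for beta in (0.1, 0.5, 0.9):
        check(N, h, beta)
print("bond counting matches the Class I energy formula")
```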
We close this section with the observation that, once we have guaranteed the existence of a minimiser in Class~$\mathcal{I}$ (see Theorem \ref{thm:classIexistence} below), Theorem \ref{thm:main}.iv follows from Theorem \ref{thm:classIexact} and \eqref{eq:eq}. The construction of the configuration in the previous proof, in particular \eqref{eq: h***}, also yields the explicit solution in Theorem \ref{thm:main}.v.
\section{Analysis and regularisation of other classes}\label{sec:regularisation}\label{sec:reg2}
In this section, we show how to regularise configurations related to
classes $\mathcal{II}$--$\mathcal{V}$. Our main goal is to show that
for $N_A = N_B$, it is not possible that a minimiser lies in Class~$\mathcal{II}$ or Class~$\mathcal{III}$. While it is possible that a
minimiser lies in Class~$\mathcal{IV}$, see Proposition \ref{prop:largeminimisersiv} below, we will show that under the
constraint $ \beta \leq 1/2$ we can modify an optimal configuration
so that it lies in Class~$\mathcal{I}$.
\subsection{Class $\mathcal{II}$}
Since the definition of Class $\mathcal{II}$ already involves a very regular interface, namely a straight line, the situation here is much simpler than for Class~$\mathcal{I}$. In fact, the whole analysis of the problem boils down to the following simple result.
\begin{proposition}\label{prop:classIIregularisation}
Fix $N:= N_A = N_B>0$ and $ \beta \in (0,1)$. If $(A,B)$ is an optimal configuration, then $(A,B) \notin \mathcal{II}$.
\end{proposition}
\begin{proof}
Suppose otherwise. Then, recalling \eqref{eq: energy,class2}, notice that we may rewrite the energy as
\begin{equation*}
E(A,B) = -4N + E_A + E_B - \beta h_2,
\end{equation*}
where $E_A = l_1 + h_1 + h_2$ and $E_B = l_3 + h_2$ are the surface energies between the void and $A$ and between the void and $B$, respectively, and the last term corresponds to the interface energy.
Suppose first that $E_A > E_B$. Then, we modify the configuration as follows: set $\hat{B} = B$ and let $\hat{A}$ be the symmetric image of $B$ under the reflection along the interface. In this way, we obtain
\begin{equation*}
E(\hat{A},\hat{B}) = -4N + 2E_B - \beta h_2 < -4N + E_A + E_B - \beta h_2 = E(A,B),
\end{equation*}
a contradiction to minimality of $(A,B)$. Now, we suppose $E_A \leq E_B$ instead. We modify the configuration as follows: set $\hat{A} = A$ and let $\hat{B}$ be the symmetric image of $A$ under the reflection along the interface. In this way,
the part of the energy corresponding to the shape of $A$ stays the same, the part corresponding to $B$ drops or stays the same, and the length $h_2$ of the interface increases by at least $1$. Hence, the total energy decreases, so $(A,B)$ was not a minimal configuration.
\end{proof}
\subsection{Class $\mathcal{III}$}
Using again the notation introduced in the previous section, our first
goal is to show that we can modify an admissible configuration in
Class~$\mathcal{III}$ such that we remain in Class~$\mathcal{III}$ and
$l_3 = h_3 = 0$ without increasing the energy. Then, we will prove
that such a configuration cannot be optimal if $N_A = N_B$.
\begin{proposition}\label{prop:classIIIregularisationstep1}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{III}$ is a minimal configuration and $l_3 > 0$ (respectively $h_3 > 0$). Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{III}$ with $l_3 = 0$ (respectively $h_3 = 0$).
\end{proposition}
\begin{proof}
Assume that $l_3 > 0$ (the proof in the case $h_3 > 0$ is analogous). Our construction is presented in Figure~\ref{fig:classIIIb}. We will modify the top $h_1$ rows of the configuration $(A,B)$ in the following way: for every $1 \leq k \leq N_{\rm row}$, denote by $x_k$ the first coordinate of the rightmost point of $(A \cup B)_k^{\rm row}$. Then, for $k \leq h_1$, we set $\hat{A}_k := A^{\rm row}_k + (\min\{x_{h_1+1} - x_k,0\},0)$, i.e., each row which has points further to the right than the rightmost point of $B^{\rm row}_{h_1 + 1}$ is translated to the left, in such a way that its rightmost point aligns with the rightmost point of $B^{\rm row}_{h_1 + 1}$. As we made no modifications inside rows, $E^{\rm row}_k(\hat{A},\hat{B}) = E^{\rm row}_k(A,B)$ for all $k = 1,\ldots,N_{\rm row}$, see \eqref{eq: row energy}. Regarding $E_k^{\rm inter}$, observe that for $k \geq h_1 + 1$ nothing changed in the configuration, so $E_k^{\rm inter}(\hat{A},\hat{B}) = E_k^{\rm inter}(A,B)$. On the other hand, for $k < h_1$, we either left two adjacent rows intact (so the number of connections between them stayed the same); moved both of them to the left so that their rightmost points align (so the number of connections between them stayed the same or increased); or moved only one of them to the left, but since the rightmost point of the other one has first coordinate smaller than or equal to the first coordinate of the rightmost point of $B^{\rm row}_{h_1+1}$, this shift did not destroy any bonds and possibly created new ones. In every case, all these connections are of type $A$-$A$, so we have $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$. Finally, for $k = h_1$, we did not change the number of $A$-$B$ connections and possibly added some $A$-$A$ connections. Thus, $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$.
Note that after this procedure all columns $A_l^{\rm col}$ for $l \leq l_1$ are still connected, as otherwise this would contradict Theorem \ref{thm:connected} and the minimality of the original configuration. Hence, the resulting configuration lies in Class $\mathcal{III}$.
\end{proof}
\begin{figure}[h]
\includegraphics[scale=0.2]{ClassIIIpartone_new.png}
\caption{Regularisation of Class $\mathcal{III}$: part one}
\label{fig:classIIIb}
\end{figure}
In order to facilitate the proof that configurations in Class~$\mathcal{III}$ cannot be optimal, we further modify the configuration without increasing the energy.
\begin{lemma}\label{lem:classIIIregularisationstep2}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{III}$ is a minimal configuration. Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{III}$ such that for every $k = 1,...,N_{\rm row}$ the rightmost point of $(\hat{A} \cup \hat{B})_k^{\rm row}$ has the same first coordinate and for every $k = 1,...,N_{\rm col}$ the lowest point of $(\hat{A} \cup \hat{B})_k^{\rm col}$ has the same second coordinate.
\end{lemma}
\begin{proof}
By the previous proposition, we may assume that $l_3 = h_3 = 0$. We will use a version of the technique used for Class~$\mathcal{I}$, and refer to Figure \ref{fig:regularisationClassIII} for an illustration of the construction. Note that if we add $N_A'$ $A$-points on the top and on the left and $N_B'$ $B$-points in the bottom right corner, so that the configuration $(A,B)$ becomes a full rectangle with sides $l_1 + l_2$ and $h_1 + h_2$, we do not alter the surface energy, but the bulk energy changes by $- 2 (N_A' + N_B')$.
Having fixed $N_B' > 0$, let us remove the topmost $A$-point in the leftmost column and change the type of the topmost $B$-point in the leftmost column to $A$. In this way, we removed a $B$-point without increasing the energy \eqref{eq: nerg3}. We repeat this procedure until we have removed $N_B'$ $B$-points. Then, we remove $N_A'$ $A$-points, starting from the top of the leftmost column. Again, this cannot increase the energy. Moreover, the resulting configuration lies in Class $\mathcal{III}$, because if in this last step we had removed a whole column or a point which lies next to the interface, we would have decreased the energy. Hence, the resulting configuration is also minimal and satisfies the desired property. \end{proof}
\begin{figure}[h]
\includegraphics[scale=0.18]{ClassIIIregularisation.png}
\caption{Regularisation of Class $\mathcal{III}$: part two}
\label{fig:regularisationClassIII}
\end{figure}
These regularisation results imply that, when the numbers of points of the two types are equal, the minimising configuration cannot lie in Class $\mathcal{III}$.
\begin{proposition}\label{prop:classIIIexcluded}
Fix $N_A = N_B>0$ and $ \beta \in (0,1)$. Then, if $(A,B)$ is a minimal configuration, $(A,B) \notin \mathcal{III}$.
\end{proposition}
\begin{proof}
Suppose otherwise and let $(A,B) \in \mathcal{III}$ be a minimal configuration. Apply the regularisation procedure described in Proposition \ref{prop:classIIIregularisationstep1} and Lemma \ref{lem:classIIIregularisationstep2}. After these operations, $(A,B)$ lies in a rectangle $R$ with sides $h_1 + h_2$ and $l_1 + l_2$. Then, the length of the interface equals $l_2 + h_2$. Without loss of generality $h_1 + h_2 \leq l_1 + l_2$ (otherwise, this is true after applying a symmetry with respect to the line $\mathbb{R}(-1,1)$). Then, we compare $(A,B)$ with a configuration $(\hat{A},\hat{B}) \in \mathcal{I}$ which fits into the rectangle $R$, with $A$-points on the left and $B$-points on the right such that the length of the interface is either $h_1 + h_2$ or $h_1 + h_2 + 1$, depending on whether $l_2 =0$ or $l_2 = 1$. Hence, by minimality of $(A,B)$, we have $l_2 + h_2 \leq h_1 + h_2 + 1$, i.e.,
\begin{align}\label{eq: l2h1}
l_2 \leq h_1 + 1.
\end{align}
This contradicts the assumption $N_A = N_B$. To see
this, first recall that the configuration is {\it full}, in the sense that the construction in Lemma \ref{lem:classIIIregularisationstep2} ensures that all the columns except for the leftmost one have the same number of points. Therefore, we may first estimate the number of $B$-points from above by
\begin{equation*}
N_B \leq l_2 h_2 \leq h_1 h_2 + h_2
\end{equation*}
and the number of $A$-points from below by
\begin{align*}
N_A & \geq h_1 l_2 + h_1 (l_1 - 1) + h_2 (l_1 - 1) = h_1 (l_1 + l_2) - h_1 + h_2 (l_1 - 1)\\
&\geq h_1 (h_1 + h_2) - h_1 + h_2 (l_1 - 1) = h_1 h_2 + h_1 (h_1 - 1) + h_2 (l_1 - 1),
\end{align*}
where we used the assumption that $h_1 + h_2 \leq l_1 + l_2$. Hence, whenever $h_1, l_1 \geq 2$ or $l_1 \geq 3$, we have $N_A > N_B$, which would contradict the assumption $N_A = N_B$. Moreover, we get that necessarily $h_1 \leq h_2$.
Finally, we have to take into consideration the case when $l_1 = 1$ (with $h_1$ arbitrary) or when $h_1 = 1$ and $l_1 = 2$. In the first case, by \eqref{eq: l2h1} we have $h_1 + h_2 \leq l_2 + 1 \leq h_1 + 2$, so $h_2 \leq 2$. But then $h_1 \leq h_2 \leq 2$, and thus $l_2 \leq h_1 + 1 \leq 3$. This leaves us with a finite (and small) number of configurations to consider separately and it may be checked that none of them is optimal. In the second case, again by \eqref{eq: l2h1} we have $l_2 \leq h_1 + 1 = 2$. Furthermore, $l_1 + l_2 \geq h_1 + h_2$, so $h_1 + h_2 \leq 4$, and hence $h_2 \leq 3$. Again, we end up with a small number of configurations, and it is easy to see that none of them is optimal.
\end{proof}
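The counting step at the heart of the proof reduces to the elementary inequality $h_1(h_1-1) + h_2(l_1-2) > 0$ under the stated conditions; the following exhaustive check (our addition, with an arbitrarily chosen test range) confirms it:

```python
# The comparison N_A > N_B above reduces to h1*(h1 - 1) + h2*(l1 - 2) > 0
# whenever (h1 >= 2 and l1 >= 2) or l1 >= 3, with h1, h2, l1 positive integers.
for h1 in range(1, 40):
    for h2 in range(1, 40):
        for l1 in range(1, 40):
            if (h1 >= 2 and l1 >= 2) or l1 >= 3:
                assert h1 * (h1 - 1) + h2 * (l1 - 2) > 0
print("strict inequality confirmed on the tested range")
```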
\subsection{Class $\mathcal{IV}$, part one}\label{sec:classIVpartone}
The situation in Class~$\mathcal{IV}$ is not as clear-cut as in
Classes $\mathcal{II}$ and $\mathcal{III}$: whereas configurations in
Classes $\mathcal{II}$ and $\mathcal{III}$ are never optimal, an optimal configuration may actually lie in Class~$\mathcal{IV}$, even for $N_A = N_B$ and $ \beta = 1/2$, see Figure~\ref{fig:fivepoints}. Hence, the goal in this subsection is a bit different: we will prove that even though minimal configurations in Class $\mathcal{IV}$ may exist, there also exists an optimal configuration in Class $\mathcal{I}$. Moreover, the reasoning will also provide some further properties of optimal configurations in Class $\mathcal{IV}$. In particular, a careful inspection of the forthcoming constructions will show a fluctuation estimate for minimisers in Class~$\mathcal{IV}$, see Section~\ref{sec:law} below.
This goal is achieved as follows: in the first part, we regularise our
configuration such that $h_3 = 0$ and $h_1\le l_1$. This is achieved
in Proposition \ref{prop:classIVregularisationstep3}, with the key
part of the reasoning proved in Proposition
\ref{prop:classIVregularisationstep2}. These arguments are valid for
any $ \beta \in (0,1)$. Then, in the second part, under the
restriction $ \beta \leq 1/2$, we regularise a configuration with $h_3 = 0$ and $h_1\le l_1$ to obtain a configuration in Class $\mathcal{I}$. This is achieved in Propositions \ref{prop:regularisationofclassIVpart4}--\ref{prop:wemayrequireclassI}. We break the reasoning into smaller pieces in order to highlight different techniques and different assumptions required at each point.
\begin{lemma}\label{lem:classIVregularisationstep1}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is a minimal configuration. Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{I} \cup \mathcal{IV}$ such that $l_2 \leq h_2$, $\min \lbrace h_1, h_1+h_2 -l_1-l_2\rbrace \le 0$, and $\min \lbrace h_3, h_2+h_3 -l_2-l_3\rbrace \le 0$.
\end{lemma}
\begin{proof}
Choose a minimal configuration $(A,B)$ in Class~$\mathcal{IV}$. Without loss of generality, we may assume that $l_2 \leq h_2$. Otherwise, consider a reflection of the original configuration with respect to the line $\mathbb{R}(-1,1)$. Then, we end up with a configuration of the same type with the roles of $h_i$ and $l_i$ reversed. We suppose that $h_1 \ge 1$ as otherwise the second condition in the statement of the lemma is satisfied. We modify the configuration without increasing the energy such that $h_1=0$ or $l_1 + l_2 \ge h_1 + h_2$. To see this, suppose that $l_1 + l_2 < h_1 + h_2$. Then, we remove all the points in the first row, and place them on the left-hand side starting from the second row, one in each row, possibly forming one additional column. The assumption guarantees that there was enough space to place all the points. In this way, $h_1$ decreases by 1 and $l_1$ increases possibly by 1, so the total energy decreases (in which case $(A,B)$ was not a minimal configuration) or stays the same, cf.\ \eqref{eq:classIVformula}. We repeat this procedure until $h_1 = 0$ or $l_1 + l_2 \geq h_1 + h_2$.
In a similar fashion, we modify the configuration to obtain $\min \lbrace h_3, h_2+h_3 -l_2-l_3\rbrace \le 0$. Finally, if $h_1=h_3=0$, the configuration is in Class~$\mathcal{I}$. Otherwise, if $h_1 \ge 1$, the configuration is in Class~$\mathcal{IV}$, and if $h_1=0$, $h_3 \ge 1$, after a rotation by $\pi$ and interchanging the roles of the two types we obtain a configuration in Class~$\mathcal{IV}$.
\end{proof}
We continue the regularisation in the following proposition.
\begin{proposition}\label{prop:classIVregularisationstep2}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is a minimal configuration. Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{I} \cup \mathcal{IV}$ which satisfies $l_2 \leq h_2$, $\min \lbrace h_1, h_1+h_2 -l_1-l_2\rbrace \le 0$, $\min \lbrace h_3, h_2+h_3 -l_2-l_3\rbrace \le 0$, and at least one of the following two properties:
$$(1) \ \ l_2 = 1, \quad \quad \quad (2) \ \ h_3 = 0.$$
\end{proposition}
For the proof, we introduce the following notation specific to Class $\mathcal{IV}$. With the notation of Figure \ref{fig:ClassIVbefore}, we will refer to the nine rectangles with sides $l_i$ and $h_j$ as $l_i:h_j$. For instance, the rectangle in the middle with sides $l_2$ and $h_2$ will be referred to as rectangle $l_2:h_2$. A priori, some of these rectangles may not be full or may even be empty, for instance the rectangle $l_3:h_1$.
\begin{proof}
Let $(A,B) \in \mathcal{IV}$ be a minimal configuration from Lemma \ref{lem:classIVregularisationstep1} which does not satisfy the desired properties, i.e., $l_2 >1$ and $h_1,h_3 >0$ ($l_2=0$ is not possible as it would imply $(A,B) \in \mathcal{II}$). Then, we first make a similar regularisation as we did for Class~$\mathcal{I}$. We add $N_A'$ $A$-points to the configuration $(A,B)$, so that the interface between $A$ and the void consists of four line segments (of lengths $l_1$, $h_1 + h_2$, $l_1 + l_2$ and $h_1$). This does not increase the surface energy. Then, we remove $N_A'$ $A$-points, column by column, starting from the leftmost column in $(A,B)$. If we removed a whole column, or if we removed a point which lies at the interface, the energy drops, so the original configuration $(A,B)$ was not minimal. Hence, the resulting configuration lies in Class~$\mathcal{IV}$. We proceed in a similar fashion for the $B$-points. In particular, the rectangle $l_2:h_2$ (in the middle) is full.
Now, let us look at the (full) rectangle $l_2:h_2$. It contains exactly $l_2 h_2$ points, $N_A''$ of them of type $A$ and $N_B''$ of them of type $B$. We rearrange them (i.e., remove all the points in $l_2:h_2$ and place them back in $l_2:h_2$) in the following way: we start with the leftmost column and fill the columns one by one with $A$-points until we end up with fewer than $h_2$ points to place. Then, we place the remaining points in the next column, starting from the top. Similarly, we place the $B$-points starting from the rightmost column and fill the columns one by one until we end up with fewer than $h_2$ points. We place the remaining points at the bottom of the next column. In this way, the resulting configuration has an interface with at most one step in $l_2:h_2$, and we did not change the energy. By Lemma \ref{lem:classIVregularisationstep1} we also have
\begin{equation}\label{eq:tripleassumption}
{\rm (i)} \ \ l_2 \leq h_2, \quad \quad {\rm (ii)} \ \ l_1 \geq h_1, \quad \quad {\rm (iii) } \ \ l_3 \geq h_3.
\end{equation}
Indeed, (i) is clear. If $h_1 = 0$, (ii) is obvious. Otherwise we have $h_1+h_2 - l_1-l_2 \le 0$ which along with (i) shows (ii). The proof of (iii) is similar.
As $l_2 \ge 2$ and the interface has at most one step, we observe that at least one of the following cases holds true: (a) The rightmost column of $l_2:h_2$ consists only of points of type $B$. (b) The leftmost column of $l_2:h_2$ consists only of points of type $A$.
Then, we do one of the two following procedures:
(a) We move the $A$-points from the rightmost column of the rectangle $l_2:h_1$ (in the upper right corner) to the rectangle $l_1:h_3$ (in the bottom left corner) and place them in its highest row (starting from the right). Here, we use \eqref{eq:tripleassumption}(ii) and $h_3 \ge 1$. In this way, we do not increase the surface energy, see \eqref{eq:classIVformula}, since we have $h_2 \rightarrow h_2 + 1$, $l_2 \rightarrow l_2-1$, $h_3 \rightarrow h_3 - 1$, $l_3 \rightarrow l_3 + 1$, and $h_1$ and $l_1$ remain unchanged. Finally, we perform a rearrangement in the new rectangle $l_2:h_2$ as above.
(b) We move the $B$-points from the leftmost column of the rectangle $l_2:h_3$ (in the bottom left corner) to the rectangle $l_3:h_1$ (in the upper right corner) and place them in its lowest row (starting from the left). Here, we use \eqref{eq:tripleassumption}(iii) and $h_1 \ge 1$. In this way, we do not increase the surface energy since we have $h_2 \rightarrow h_2 + 1$, $l_2 \rightarrow l_2-1$, $h_1 \rightarrow h_1 - 1$, $l_1 \rightarrow l_1 + 1$, and $h_3$ and $l_3$ remain unchanged. Finally, we perform a rearrangement in the new rectangle $l_2:h_2$ as above.
In both cases, after applying the procedure, condition \eqref{eq:tripleassumption} is still satisfied, so we may repeat it. Since $l_2$ decreases in each step, after finitely many steps we reach $l_2 = 1$, $h_1 = 0$, or $h_3 = 0$. If $l_2 = 1$ or $h_3 = 0$ holds, the proof is concluded. Otherwise, $h_1 = 0$, and $h_3 = 0$ holds after a rotation by $\pi$ and interchanging the roles of the two types.
\end{proof}
We now come to the main result of this subsection.
\begin{proposition}\label{prop:classIVregularisationstep3}
Fix $N_A = N_B>0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is a minimal configuration. Then, there exists a minimal configuration $(\hat{A},\hat{B}) \in \mathcal{I} \cup \mathcal{IV}$ such that $l_2 \leq h_2$, $h_1 \leq l_1$, and $h_3 = 0$.
\end{proposition}
\begin{proof}
Let $(A,B)$ be a configuration from Proposition \ref{prop:classIVregularisationstep2}. Suppose by contradiction that $(A,B)$ (up to a rotation by $\pi$ and interchanging the roles of the two types) does not have the desired properties. Since \eqref{eq:tripleassumption} holds, we thus get that $l_2=1$ and $h_1, h_3 > 0$. By Proposition~\ref{prop:classIVregularisationstep2} and $h_1, h_3 > 0$ we also have
\begin{equation}\label{eq:squareboundinthespecialcase}
l_1 + 1 \geq h_1 + h_2 \qquad \mbox{and} \qquad l_3 + 1 \geq h_2 + h_3.
\end{equation}
As $h_2 \ge 1$, this particularly implies $h_1 \le l_1$. We can thus move the single column $l_2: h_1$ to the empty rectangle $l_1: h_3$ without increasing the energy. Note that $l_1 \ge h_1$ guarantees that there was enough space to place all the points. The resulting configuration has a straight interface with $h_1 >0$, i.e., lies in Class~$\mathcal{II}$. In view of Proposition~\ref{prop:classIIregularisation}, however, this contradicts optimality of the original configuration.
\end{proof}
Hence, for $N_A = N_B$ and any $ \beta \in (0,1)$, we may
require that $h_3 = 0$ and $h_1 \le l_1$. We continue the analysis in
the next subsection, with an additional requirement on $ \beta$.
\subsection{Class $\mathcal{IV}$, part two}\label{sec:classIVparttwo}
From now on, we will work with configurations which satisfy the
statement of Proposition \ref{prop:classIVregularisationstep3}, i.e.,
$h_1 \leq l_1$ and $h_3 = 0$. Our goal is to perform a further
modification such that configurations lie in Class $\mathcal{I}$. To
this end, we assume without restriction that configurations from
Proposition \ref{prop:classIVregularisationstep3} lie in
Class~$\mathcal{IV}$ and that $N := N_A = N_B$. In due course, we will introduce an additional assumption on $ \beta \in (0,1)$.
As a first step of the regularisation procedure, we again straighten the interface such that it has at most one step.
\begin{lemma}\label{lemma: step lemma}
Fix $N_A = N_B>0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration with $h_3 = 0$. Then, there exists a minimal configuration with the same properties and at most one step in the interface.
\end{lemma}
\begin{proof}
We proceed similarly to our reasoning in Class $\mathcal{I}$, i.e., as in the proof of Proposition \ref{prop:classIregularisationstep1}. We add points to the configuration such that the rectangles $l_i:h_j$ for $i=1,2,3$ and $j=1,2$, except for $l_3:h_1$, are full. In this way, the surface part of the energy did not change. Then, we remove the same number of $A$- and $B$-points that we added, starting with the leftmost and rightmost columns, respectively. If we removed a full column, then the energy would drop and the original configuration would not be minimal. Hence, the rectangle $l_2:h_2$ is necessarily full. Let us now reorganise it in the following way: we put all the $A$-points to the left and all the $B$-points to the right, so that the interface between them (inside $l_2:h_2$) is vertical except for a single possible step to the right. The length of the interface did not change, so the resulting configuration is optimal.
\end{proof}
\begin{lemma}\label{lem:h1smallerthanh2}
Fix $N_A = N_B>0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration such that $h_1 \leq l_1$ and $h_3 = 0$. Then, $h_1 \le h_2$.
\end{lemma}
\begin{proof}
Suppose otherwise, i.e., $h_1 > h_2$. First, we can assume that $l_3 \leq h_2$. Indeed, if not, we can remove the whole rectangle $l_3:h_2$, rotate it by $\pi/2$ and reattach it to the configuration, adding at least one additional bond: a contradiction to minimality of $(A,B)$. Moreover, we can assume that $l_1 \ge 2$ as $l_1 = 1$ implies also $h_1 = 1$, and the inequality $h_1 \leq h_2$ is automatically satisfied. Finally, we can suppose that the interface has at most one step, see Lemma \ref{lemma: step lemma}. The main step of the proof is to show that $l_1 < l_3$. Indeed, then we obtain the contradiction
\begin{equation*}
h_2 \leq h_1 \leq l_1 < l_3 \leq h_2.
\end{equation*}
Let us now prove $l_1 < l_3$. To this end, we will calculate the total number of points in two ways. Denote by $r_1$ the number of $A$-points in the leftmost column, by $h_1 + r_2$ the number of $A$-points in the leftmost double-type column, by $r_3$ the number of $B$-points in the leftmost double-type column, and by $r_4$ the number of $B$-points in the rightmost column. Then, we have
\begin{equation}
N_A = (l_1-1)(h_1 + h_2) + l_2 h_1 + r_1 + r_2
\end{equation}
and
\begin{equation}
N_B = (l_2 + l_3 - 2) h_2 + r_3 + r_4.
\end{equation}
Now, we subtract the second equation from the first. Since $r_1> 0$, $r_2 \geq 0$, and $r_3, r_4 \leq h_2$, we get
\begin{align*}
0 & = N_A - N_B = l_1 h_1 + l_1 h_2 + l_2 h_1 - h_1 - h_2 + r_1 + r_2 - l_2 h_2 - l_3 h_2 + 2h_2 - r_3 - r_4 \\
&> (l_1 - 1) h_1 - h_2 + (l_1 - l_3) h_2 + l_2(h_1 - h_2) \geq (l_1 - l_3) h_2,
\end{align*}
where in the last step we used $l_1 \ge 2$ and the assumption $h_1 > h_2$ made for contradiction. This shows $l_1 < l_3$ and concludes the proof.
\end{proof}
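As a quick numerical sanity check of the counting argument above (a sketch of ours, not part of the proof; the function name \texttt{check\_counts} and the sampling ranges are arbitrary), one can verify the expansion of $N_A - N_B$ and the strict lower bound for randomly chosen admissible parameters:

```python
import random

# Sanity check (ours): expand N_A - N_B from the two counting formulas in the
# proof and compare with the strict lower bound that follows from the
# constraints r1 >= 1, r2 >= 0 and r3, r4 <= h2.
def check_counts(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        l1, l2, l3, h1, h2 = (rng.randint(1, 20) for _ in range(5))
        r1, r2 = rng.randint(1, h1 + h2), rng.randint(0, 20)
        r3, r4 = rng.randint(0, h2), rng.randint(0, h2)
        NA = (l1 - 1) * (h1 + h2) + l2 * h1 + r1 + r2
        NB = (l2 + l3 - 2) * h2 + r3 + r4
        # term-by-term expansion of N_A - N_B used in the proof
        expansion = (l1 * h1 + l1 * h2 + l2 * h1 - h1 - h2 + r1 + r2
                     - l2 * h2 - l3 * h2 + 2 * h2 - r3 - r4)
        # strict lower bound: (l1-1)h1 - h2 + (l1-l3)h2 + l2(h1-h2)
        lower = (l1 - 1) * h1 - h2 + (l1 - l3) * h2 + l2 * (h1 - h2)
        if NA - NB != expansion or NA - NB <= lower:
            return False
    return True
```

The strict inequality holds because the difference between $N_A - N_B$ and the lower bound equals $2h_2 + r_1 + r_2 - r_3 - r_4 \ge 1$.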
\begin{proposition}\label{prop:regularisationofclassIVpart4}
Fix $N_A = N_B>0$ and $ \beta \leq 1/2$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration such that $h_1 \leq l_1$ and $h_3 = 0$. Then, there exists an optimal configuration $(\hat{A},\hat{B})$ such that $(\hat{A},\hat{B}) \in \mathcal{IV}$ with $h_3 = 0$ and $l_2 \in \{ 1,2 \}$.
\end{proposition}
\begin{proof}
Suppose that $(A,B)$ satisfies $l_2 \geq 3$. By Lemma
\ref{lem:h1smallerthanh2} we have $h_1 \leq h_2$. Then, let us remove
the rightmost two layers in $l_2:h_1$, and place the (at most $2h_1$)
$A$-points on the left of the configuration, at most one point in
every row. Since $h_1 \leq h_2$, there is enough space to place all
the points. In this way, since the configuration can be assumed to have only one step in the interface (see Lemma~\ref{lemma: step lemma}), $l_1$ increases by at most 1, $l_2$ decreases
by 2, $l_3$ increases by 2, and all $h_i$ stay the same. Hence, by
formula \eqref{eq:classIVformula} we see that the energy stays the
same (for $ \beta = 1/2$), so the resulting configuration is
optimal, or decreases (for $ \beta < 1/2$), so the original configuration was not optimal. We repeat this procedure until $l_2 \in \lbrace 1,2 \rbrace$.
\end{proof}
Hence, in order to prove existence of an optimal configuration in Class $\mathcal{I}$, we have two special cases to consider, depending on the value of $l_2$. We start with the case $l_2 = 1$.
\begin{proposition}\label{prop:regularisationofclassIVpart5}
Fix $N_A = N_B>0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration such that $h_3 = 0$ and $l_2 = 1$. Then, $h_1 = 1$. Furthermore, there exists an optimal configuration $(\hat{A},\hat{B}) \in \mathcal{I}$.
\end{proposition}
\begin{proof}
As in \eqref{eq:formulafortheenergy}, let us write the energy as
\begin{equation}
E(A,B) = E_A + E_B - (h_2+1)\beta,
\end{equation}
where $E_A$ is minus the number of bonds between points in $A$ and $E_B$ is minus the number of bonds between points in $B$.
We consider two cases. First, suppose that $E_B < E_A$. We do the following rearrangement of points: we separate $A$ and $B$ and suppose without restriction that the leftmost column of $B$ is full, as otherwise we can move the points in this column to the right-hand side of $B$ without changing $E_B$. We replace $A$ by $\hat{A}$, a reflection of $B$ along the vertical axis. Then we reconnect $\hat{A}$ and $B$ along the vertical line segment of length $h_2$. In this way, the resulting configuration has energy
\begin{equation}
E(\hat{A},B) = E_B + E_B - h_2 \beta.
\end{equation}
Hence, as $E_B \le E_A -1$, the energy drops by at least $1-\beta$, so the original configuration was not optimal, a contradiction.
Now, suppose that $E_A \leq E_B$. We do the following: we keep $A$ fixed (or, as above, we make $A$ flat on one side without changing $E_A$) and replace $B$ by $\hat{B}$, a reflection of $A$ along the vertical axis. Then, we join $A$ and $\hat{B}$ along the vertical line segment of length $h_1 + h_2$. In this way, the resulting configuration lies in Class $\mathcal{I}$, has a flat interface, and the energy is given by
\begin{equation}
E(A,\hat{B}) = E_A + E_A - (h_1+h_2) \beta.
\end{equation}
Therefore, the only way in which the energy does not decrease is that $E_A = E_B$ and $h_1 = 1$. In this case, $E(A,\hat{B}) = E(A,B)$, so the resulting configuration $(A,\hat{B}) \in \mathcal{I}$ is optimal.
\end{proof}
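The bookkeeping of the two reflection cases can be checked in exact arithmetic. In the sketch below (ours, not part of the proof), $E_A$, $E_B$ are treated as free integer parameters and $h_2$ is an arbitrary placeholder, since it cancels in both differences; the helper names are hypothetical:

```python
from fractions import Fraction

# Energy differences in the two reflection cases of the proof (case l2 = 1):
# E(A,B) = EA + EB - (h2+1)*beta, E(Ahat,B) = 2*EB - h2*beta,
# E(A,Bhat) = 2*EA - (h1+h2)*beta.
H2 = 7  # arbitrary placeholder; it drops out of both differences

def drop_reflect_A(EA, EB, beta):
    # E(A,B) - E(Ahat,B); algebraically this equals EA - EB - beta
    return (EA + EB - (H2 + 1) * beta) - (2 * EB - H2 * beta)

def drop_reflect_B(EA, EB, h1, beta):
    # E(A,B) - E(A,Bhat); algebraically this equals EB - EA + (h1 - 1) * beta
    return (EA + EB - (H2 + 1) * beta) - (2 * EA - (h1 + H2) * beta)
```

In the first case ($E_B \le E_A - 1$) the drop is at least $1-\beta$; in the second case ($E_A \le E_B$) the drop is nonnegative and vanishes exactly for $E_A = E_B$ and $h_1 = 1$, as used in the proof.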
We will employ another variant of the reflection argument to deal with the case $l_2 = 2$. This is formalised in the next proposition.
\begin{proposition}\label{prop:regularisationofclassIVstep6}
Fix $N_A = N_B>0$ and $ \beta \leq 1/2$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration such that $h_3 = 0$ and $l_2 = 2$. Then, $h_1 \le 2+1/\beta$ and there exists an optimal configuration $(\hat{A},\hat{B}) \in \mathcal{I}$.
\end{proposition}
\begin{proof}
Again, as in \eqref{eq:formulafortheenergy}, we write the energy as
\begin{equation*}
E(A,B) = E_A + E_B - (h_2 + 2)\beta.
\end{equation*}
We consider three cases: first, suppose that either $E_B \le E_A -2$ or $E_B = E_A -1$ and $h_1 \le 2+1/\beta$. We do the following rearrangement of points: we keep $B$ fixed (up to making one side flat, as in the previous proof) and replace $A$ by $\hat{A}$, a reflection of $B$ along the vertical axis. Then, we join $\hat{A}$ and $B$ along the vertical line segment of length $h_2$. The resulting configuration lies in Class~$\mathcal{I}$ and satisfies
\begin{equation}
E(\hat{A},B) = E_B + E_B - h_2 \beta.
\end{equation}
Hence, the energy drops by $k -2\beta$, where $k = E_A - E_B \ge 1$. Thus, either the original configuration was not optimal (for $k \ge 2$ or $k=1$ and $\beta < 1/2$) or the resulting configuration is optimal (for $k=1$ and $\beta = 1/2$). Moreover, the resulting configuration lies in Class $\mathcal{I}$.
Now, suppose that either $E_B = E_A - 1$ and $h_1 > 2+1/\beta$ or $E_B = E_A$ and $h_1 \ge 2$ or $E_A < E_B$. This time, we keep $A$ fixed (up to making one side flat) and replace $B$ by $\hat{B}$, a reflection of $A$ along the vertical axis. Then, we join $A$ and $\hat{B}$ along the vertical line segment of length $h_1 + h_2$. In this way, the resulting configuration lies in Class $\mathcal{I}$, has a flat interface, and the energy is given by
\begin{equation}
E(A,\hat{B}) = E_A + E_A - (h_1+h_2) \beta.
\end{equation}
Thus, the energy decreases by $k + (h_1-2)\beta$, where $k = E_B-E_A$. In particular, for $k=-1$ and $h_1 > 2+1/\beta$ or $k=0$ and $h_1 \ge 3$ or $k>0$ the energy drops. For $k=0$ and $h_1=2$ it stays the same, so the resulting configuration is optimal and lies in Class $\mathcal{I}$.
The only case left to consider is when $E_A = E_B$ and $h_1 = 1$. We proceed as follows: we exchange the rightmost $A$-point (i.e., the rightmost point of the rectangle $l_2:h_1$) with the top $B$-point from column $C_{l_1+1}$, i.e., the point with two connections to points of type $A$ and two connections to points of type $B$. If $C_{l_1+1}$ contains only one $B$-point, then the interface becomes shorter (without changing the overall shape of the configuration) and the energy actually drops. If it contains more than one $B$-point, this procedure does not change the energy. Moreover, the resulting configuration is in Class $\mathcal{I}$. The construction is presented in Figure \ref{fig:exchangingonepoint}.
\begin{figure}[h]
\includegraphics[scale=0.23]{ClassIVlaststep.png}
\caption{Final step of modification into Class $\mathcal{I}$}
\label{fig:exchangingonepoint}
\end{figure}
Summarising, we have shown that $h_1 \le 2+1/\beta$ and that there exists an optimal configuration in Class~$\mathcal{I}$.
\end{proof}
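The case analysis for $l_2 = 2$ can likewise be checked in exact arithmetic. Writing $k = E_A - E_B$, the two candidate moves in the proof change the energy by $k - 2\beta$ and $-k + (h_1-2)\beta$, respectively; the snippet below (ours, with hypothetical helper names) confirms that for $\beta \le 1/2$ at least one move does not increase the energy, except in the single residual case $E_A = E_B$, $h_1 = 1$, which is handled by the point-exchange argument:

```python
from fractions import Fraction

# Candidate energy drops in the proof for l2 = 2 (our notation: k = EA - EB).
def candidate_drops(k, h1, beta):
    d_A = k - 2 * beta           # drop when replacing A by a reflection of B
    d_B = -k + (h1 - 2) * beta   # drop when replacing B by a reflection of A
    return d_A, d_B
```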
We summarise the reasoning from this subsection in the following result.
\begin{proposition}\label{prop:wemayrequireclassI}
Fix $N_A = N_B>0$ and $ \beta \leq 1/2$. Suppose that $(A,B) \in \mathcal{IV}$ is an optimal configuration. Then, there exists an optimal configuration $(\hat{A},\hat{B}) \in \mathcal{I}$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:classIVregularisationstep3}, there exists an optimal configuration with $h_1 \leq l_1$ and $h_3 = 0$. Since $ \beta \leq 1/2$, by Proposition \ref{prop:regularisationofclassIVpart4} one may require additionally that $l_2 = 1$ or $l_2 = 2$. In both cases, existence of an optimal configuration in Class $\mathcal{I}$ is guaranteed by Proposition \ref{prop:regularisationofclassIVpart5} and by Proposition \ref{prop:regularisationofclassIVstep6}, respectively.
\end{proof}
\subsection{Class $\mathcal{V}$}
Finally, we show that we can modify optimal configurations in Class~$\mathcal{V}$ to optimal configurations in Class $\mathcal{IV}$. Along with Proposition~\ref{prop:wemayrequireclassI} this shows that there always exists a minimiser in Class~$\mathcal{I}$. This is done in the following proposition, which employs a technique similar to the one used for Class $\mathcal{III}$.
\begin{proposition}\label{prop:classVregularisation}
Fix $N_A, N_B > 0$ and $ \beta \in (0,1)$. Suppose that $(A,B) \in \mathcal{V}$. Then, there exists $(\hat{A},\hat{B}) \in \mathcal{IV}$ with $E(\hat{A},\hat{B}) \leq E(A,B)$.
\end{proposition}
\begin{proof}
We will modify the top $h_1$ rows of the configuration $(A,B)$ in a similar fashion to the proof of Proposition \ref{prop:classIIIregularisationstep1}. For every $k \leq h_1$, we set $\hat{A}_k := A^{\rm row}_k + (-1,0)$. This translation implies $E^{\rm row}_k(\hat{A},\hat{B}) = E^{\rm row}_k(A,B)$ for all $k = 1,...,N_{\rm row}$. Regarding $E_k^{\rm inter}$, a change is possible at most for $k = h_1$, where we did not change the number of $A$-$B$ connections and added zero or one $A$-$A$ connections, so $E_k^{\rm inter}(\hat{A},\hat{B}) \leq E_k^{\rm inter}(A,B)$. Hence, the total energy did not increase. We repeat this procedure for all rows with index $k \le h_1$ until the rightmost point of all $A^{\rm row}_{k}$ with $k \leq h_1$ does not lie to the right of the rightmost point of $B^{\rm row}_{h_1+1}$. We thus get a configuration which lies in Class~$\mathcal{IV}$.
\end{proof}
\subsection{Conclusion}
Finally, we are in a position to state another of the main results, which together with Theorem \ref{thm:classIexact} gives the exact formula for the minimal energy; see Theorem \ref{thm:main}.iv.
\begin{theorem}\label{thm:classIexistence}
Fix $N_A = N_B>0$ and $ \beta \leq 1/2$. Then, there exists an optimal configuration $(A,B)$ which lies in Class $\mathcal{I}$ and has a straight interface.
\end{theorem}
\begin{proof}
Since the number of points is finite, there exists an optimal configuration. By Theorem \ref{thm:connected} and the discussion below it, it lies in one of the five classes. However, it cannot lie in Class $\mathcal{II}$ by Proposition \ref{prop:classIIregularisation}. It also cannot lie in Class $\mathcal{III}$ by Proposition \ref{prop:classIIIexcluded}. If it lies in Class $\mathcal{V}$, then there exists a minimal configuration in Class $\mathcal{IV}$ by virtue of Proposition \ref{prop:classVregularisation}. If it lies in Class $\mathcal{IV}$, then by Proposition \ref{prop:wemayrequireclassI} there exists a minimal configuration in Class $\mathcal{I}$. Finally, since there is an optimal configuration in Class $\mathcal{I}$, by Proposition \ref{prop:classIregularisationstep2} we may suppose that it has a flat interface.
\end{proof}
Let us note that the above theorem only states that a solution in Class $\mathcal{I}$ exists; we cannot fully exclude the existence of solutions in other classes. In particular, the following result shows that there exist arbitrarily large optimal configurations in Class $\mathcal{IV}$.
\begin{proposition}\label{prop:largeminimisersiv}
Let $\beta \in (0,1/2]\cap {\mathbb Q}$, $r,\, s
\in {\mathbb N}$ with $r/s=1-\beta/2$, and $k \in {\mathbb N}$. Then, the
Class-${\mathcal IV}$ configuration $(A,B)$ with
\begin{align*}
A&=\{(x,y)\in {\mathbb Z}^2 \, \colon \, x \in [-kr+1,0], \ y \in
[1,ks]\} \cup \{(1,ks)\},\\
B&=\{(x,y)\in {\mathbb Z}^2 \, \colon \, x \in [1,kr], \ y \in
[0,ks-1]\}\cup \{(0,0)\}
\end{align*}
is optimal.
\end{proposition}
\begin{proof}
Using \eqref{eq:eq} and formula \eqref{eq:classIVformula}, one can directly compute
\begin{equation}
P(A,B) = 4kr + 2 (ks+1) + 2(1-\beta)(ks+1).\label{eq:to_compare}
\end{equation}
To prove optimality, it hence suffices to check that
$P(A,B)=\min\{P_*,P^*\}$, where $P_*$ and $P^*$ are defined in
Theorem \ref{thm:main}.iv for $N := N_A=N_B=k^2rs+1$. From $\beta \in (0,1/2]$ we get that $s/r = 2/(2-\beta) \in
(1,4/3]$. This in particular entails that $s>r \geq 2$, which in turn
allows us to prove that
\begin{align*}
\sqrt{\frac{2N}{2-\beta}} = \sqrt{\frac{k^2rs + 1}{r/s}} = \sqrt{k^2s^2 + s/r} \in (ks,ks+1).
\end{align*}
In particular, we have checked that
\begin{align*}
\left\lfloor\sqrt{\frac{2N}{2-\beta}}\right\rfloor=ks \quad\text{and}\quad \left\lceil
\sqrt{\frac{2N}{2-\beta}} \right\rceil=ks+1.
\end{align*}
One can hence compute
\begin{align*}
P_*&= 4 \left\lceil
\frac{N}{\left\lfloor\sqrt{\frac{2N}{2-\beta}}\right\rfloor }
\right\rceil + 2
\left\lfloor\sqrt{\frac{2N}{2-\beta}}\right\rfloor (2-\beta)\\
&=4 \left\lceil
\frac{k^2rs +1}{ks }
\right\rceil + 2 ks (2-\beta) = 4 \left\lceil
kr + \frac{1}{ks}
\right\rceil + 2 ks (2-\beta) =4 kr + 4 + 2 ks (2-\beta).
\end{align*}
On the other hand, using the fact that for $s> r \geq 2$ we have
$$\frac{k^2rs +1}{ks+1} \in (kr-1,kr],$$ we can compute
\begin{align*}
P^*&= 4 \left\lceil
\frac{N}{\left\lceil\sqrt{\frac{2N}{2-\beta}}\right\rceil }
\right\rceil + 2
\left\lceil\sqrt{\frac{2N}{2-\beta}}\right\rceil (2-\beta)\\
&=4 \left\lceil
\frac{k^2rs +1}{ks+1}
\right\rceil + 2(ks+1) (2-\beta) = 4kr + 2(ks+1) (2-\beta).
\end{align*}
We conclude that
$$\min\{P_*,P^*\} = P^* \stackrel{\eqref{eq:to_compare}}{=}P(A,B),$$
since $P_* - P^* = 4 - 2(2-\beta) = 2\beta > 0$. This proves that $(A,B)$ is optimal.
\end{proof}
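The explicit computations in this proof can be reproduced in exact rational arithmetic. The snippet below (a verification sketch of ours; the function name \texttt{check} is hypothetical) checks, for sample admissible $(\beta, r, s, k)$ with $r/s = 1-\beta/2$, the floor/ceiling identities and the conclusion $\min\{P_*,P^*\} = P^* = P(A,B)$:

```python
import math
from fractions import Fraction

# Reproduce the quantities in the proof for N = k^2*r*s + 1 with r/s = 1 - beta/2.
def check(beta, r, s, k):
    assert Fraction(r, s) == 1 - beta / 2
    N = k * k * r * s + 1
    x = 2 * N / (2 - beta)                 # exact Fraction arithmetic
    m = k * s
    assert m * m < x < (m + 1) ** 2        # hence floor = ks and ceil = ks + 1
    P_star = 4 * math.ceil(Fraction(N, m)) + 2 * m * (2 - beta)
    P_up = 4 * math.ceil(Fraction(N, m + 1)) + 2 * (m + 1) * (2 - beta)
    assert P_star == 4 * k * r + 4 + 2 * m * (2 - beta)
    assert math.ceil(Fraction(N, m + 1)) == k * r
    # perimeter of the explicit Class IV configuration
    P_AB = 4 * k * r + 2 * (m + 1) + 2 * (1 - beta) * (m + 1)
    assert min(P_star, P_up) == P_up == P_AB
    return True
```

For instance, $\beta = 1/2$ corresponds to $r=3$, $s=4$, and $\beta = 2/5$ to $r=4$, $s=5$.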
\section{$N^{1/2}$-law for minimisers}\label{sec:law}
In this section, we give a quantitative upper bound on the difference of two optimal configurations, see Theorem \ref{thm:main}.vi. The goal is to prove that, even though in general optimal configurations are not unique and some of them may not even lie in Class $\mathcal{I}$, they all have the same approximate shape. In the following, an isometry $T\colon \mathbb{Z}^2 \rightarrow \mathbb{Z}^2$ denotes a composition of translations $x \mapsto x + \tau$ for $\tau \in \mathbb{Z}^2$, the rotation $(x_1,x_2) \mapsto (-x_2, x_1)$ by the angle $\pi/2$, and the reflections $(x_1,x_2) \mapsto (x_1,-x_2)$ and $(x_1,x_2) \mapsto (-x_1,x_2)$.
\begin{theorem}[$N^{1/2}$-law]\label{thm:nonehalf}
Fix $N := N_A = N_B>0$ and $ \beta \leq
\frac12$. Then, there exists a constant $ C_\beta$ only
depending on $ \beta $ such that for each two optimal configurations $(A,B)$ and $(A',B')$ it holds that
\begin{equation}\label{eq: N12}
\min \bigg\{ \#(A \triangle T(A')) + \#(B \triangle T(B')) \colon \,
T\colon \mathbb{Z}^2 \rightarrow \mathbb{Z}^2 \mbox{ is an isometry}
\bigg\} \leq C_\beta N^{1/2}.
\end{equation}
\end{theorem}
\begin{proof}
Throughout the proof, $ C_\beta $ is a constant which depends
only on $ \beta $ whose value may vary from line to line. We start the proof by mentioning that it suffices to check the assertion only for $N \ge N_0$ for some $N_0 \in \mathbb{N}$ depending only on $ \beta $; indeed, for $N < N_0$ the left-hand side of \eqref{eq: N12} is trivially bounded by $2N_A + 2N_B = 4N \le 4N_0^{1/2}N^{1/2}$. As observed in the proof of Theorem~\ref{thm:classIexistence}, every optimal configuration lies in one of the Classes $\mathcal{I}$, $\mathcal{IV}$, or $\mathcal{V}$. In Step 1, we show \eqref{eq: N12} for two optimal configurations in Class $\mathcal{I}$. Afterwards, in Step 2 we show that for each optimal configuration $(A,B)$ in Class $\mathcal{IV}$ there exists $(A',B')$ in Class $\mathcal{I}$ such that \eqref{eq: N12} holds. Eventually, in Step 3 we check that for each optimal configuration $(A,B)$ in Class $\mathcal{V}$ there exists $(A',B')$ in Class $\mathcal{IV}$ such that \eqref{eq: N12} holds. The combination of these three steps yields the statement.
{\bf Step 1: Class $\mathcal{I}$.} Let first $(A,B)$ be an optimal
configuration in Class $\mathcal{I}$ such that $l_2 = 0$. Then by
Theorem~\ref{thm:classIexact}, in particular by \eqref{eq: h***}
we find
\begin{equation}\label{eq: fixing hhh}
h \sim \sqrt{\frac{2N}{ 2- \beta }}, \quad \quad \quad
l_1=l_3 \sim \sqrt{\frac{N( 2- \beta )}{2}},
\end{equation}
where here and in the following $\sim$ indicates that equality holds
up to an additive constant depending only on $ \beta$. Consequently, two optimal configurations in Class $\mathcal{I}$ with $l_2= 0$ satisfy \eqref{eq: N12}: their side lengths differ by at most a constant $C_\beta$, so after a suitable translation the symmetric difference is contained in a frame of width $C_\beta$ around a rectangle with side lengths of order $N^{1/2}$. Also, notice that since the interface is straight, reflection along the interface exchanges the roles of the sets $A$ and $B$. Now, consider an optimal configuration $(A,B)$ in Class~$\mathcal{I}$ with $l_2>0$. Then we get $l_2=1$ by Proposition \ref{prop:classIregularisationstep1}. The regularisation of Proposition \ref{prop:classIregularisationstep2} shows that $(A,B)$ can be modified to a configuration $(A',B')$ in Class $\mathcal{I}$ with $l_2 = 0$ such that \eqref{eq: N12} holds. Indeed, in this regularisation we only alter the points in the single column containing points of both types and possibly merge two connected components by moving one of them by $(1,0)$. This concludes Step 1 of the proof.
{\bf Step 2: Class $\mathcal{IV}$.} We now consider an optimal
configuration in Class $\mathcal{IV}$ and show that it can be modified
to a configuration in Class $\mathcal{I}$ such that \eqref{eq: N12}
holds. We will work through the proofs in Subsections
\ref{sec:classIVpartone} and \ref{sec:classIVparttwo} in reverse
order. Our strategy is as follows: we use the knowledge of the
structure of the final step of the regularisation procedure, obtain
some a posteriori bounds on the size of $l_i$ and $h_i$, and go back
to see how these can change at every step of the regularisation
procedure. Eventually, this will allow us to show that already after
the first modification described in Lemma
\ref{lem:classIVregularisationstep1} we obtain an optimal
configuration in Class~$\mathcal{I}$, by moving at most $ C_\beta N^{1/2}$ many points. This will conclude Step 2 of the proof.
{\bf Step 2.1.} Our starting points are Propositions \ref{prop:regularisationofclassIVpart5} and \ref{prop:regularisationofclassIVstep6}: recall that, after applying all the intermediate steps, in the end we have $h_3 = 0$ and land in one of the two cases $l_2 = 1$ (covered in Proposition \ref{prop:regularisationofclassIVpart5}) or $l_2 = 2$ (covered by Proposition \ref{prop:regularisationofclassIVstep6}). In both cases, before applying these propositions, we have
\begin{align}\label{eq: 2.1}
l_2 \leq 2, \quad
h_1 \leq 2 + \frac{1}{\beta}, \quad h_3 = 0, \quad h_2 \sim
\sqrt{\frac{2N}{ 2 - \beta }}, \quad l_1,l_3 \sim
\sqrt{\frac{N( 2 - \beta )}{2}}.
\end{align}
In fact, the last conditions follow from \eqref{eq: fixing hhh} (for $h=h_2$) and the reflection procedure described in the propositions.
{\bf Step 2.2.} Now, we go a step back in the regularisation
procedure. In Proposition~\ref{prop:regularisationofclassIVpart4}, for
$ \beta < 1/2$ nothing changes and the same bounds hold. For
$ \beta = 1/2$,
\eqref{eq: 2.1} yields that $\frac{h_1}{h_2}\to 0$ as $N \rightarrow
\infty$. This implies that in
Proposition~\ref{prop:regularisationofclassIVpart4}, for sufficiently
large $N$, we move at most two layers. In fact, if we moved at least
three layers, the energy would strictly decrease since all of them fit
into a single column. Hence, for sufficiently big $N$ (depending only
on $ \beta$), we have the following bounds
\begin{align}\label{eq: 22}
l_2 \leq 4, \quad h_1 \leq 2 + \frac{1}{\beta}, \quad h_3 = 0, \quad
h_2 \sim \sqrt{\frac{2N}{ 2 - \beta }}, \quad l_1,l_3
\sim \sqrt{\frac{N( 2 - \beta )}{2}}.
\end{align}
Finally, let us take one more step back in the regularisation procedure. In Lemma~\ref{lemma: step lemma}, we actually modify the configuration only slightly inside the rectangle $l_2:h_2$.
In this way, $h_i$ and $l_i$ were not altered, so that the bounds \eqref{eq: 22} still hold.
{\bf Step 2.3.} Now we come to the main part of the regularisation procedure, i.e., Proposition~\ref{prop:classIVregularisationstep2}. In its proof, we apply an iterative procedure, and at every step one of the following changes happens:
\begin{align*}
{\rm (a)} \ \ \ h_2 \rightarrow h_2 + 1, \quad l_2 \rightarrow l_2-1, \quad h_3 \rightarrow h_3 - 1, \quad l_3 \rightarrow l_3 + 1, \quad h_1 \rightarrow h_1, \quad l_1 \rightarrow l_1
\end{align*}
or
\begin{align*}
{\rm (b)} \ \ \ h_2 \rightarrow h_2 + 1, \quad l_2 \rightarrow l_2-1, \quad h_1 \rightarrow h_1 - 1, \quad l_1 \rightarrow l_1 + 1, \quad h_3 \rightarrow h_3, \quad l_3 \rightarrow l_3.
\end{align*}
Notice that in both cases $l_1$ and $l_3$ cannot decrease during this procedure, and exactly one of them increases at every step. The procedure can end in two ways: $h_3 = 0$ (or equivalently $h_1 = 0$) or $l_2 = 1$. In the latter case, however, the proof of Proposition \ref{prop:classIVregularisationstep3} implies that the original configuration was not optimal, so we only need to examine the former case.
Consider the last step of the regularisation procedure in the proof of Proposition~\ref{prop:classIVregularisationstep2}, i.e., the one before we reach $h_3 = 0$. Denote by $\hat{h}_1$ the value of $h_1$ at the end of the regularisation procedure, and note that $\hat{h}_1 \leq 2 + \frac{1}{\beta}$ by \eqref{eq: 22}. There are two possible situations: either
\begin{equation*}
\hat{l}_1 \leq 2 \hat{h}_1 \qquad \mbox{or} \qquad \hat{l}_1 > 2 \hat{h}_1.
\end{equation*}
In the second case, notice that we cannot have applied the construction from case (a) twice as otherwise a slightly modified procedure would give the following: we move the $A$-points from the rightmost two columns of the rectangle $l_2:h_1$ to the rectangle $l_1:h_3$, but we place them in a single row. In this way, we have
\begin{equation}
h_2 \rightarrow h_2 + 1, \quad l_2 \rightarrow l_2-2, \quad h_3 \rightarrow h_3 - 1, \quad l_3 \rightarrow l_3 + 2, \quad h_1 \rightarrow h_1, \quad l_1 \rightarrow l_1.
\end{equation}
This shows that the energy \eqref{eq:classIVformula} strictly decreases as the length of the interface is decreased. Hence, the original configuration was not optimal, so either $ \hat{l}_1 \leq 2 \hat{h}_1$ or we have applied a step of type (a) at most once.
Similarly, since $\hat{h}_3$ at the end of the procedure equals zero, we consider the alternative
\begin{equation*}
\hat{l}_3 \leq 2 \qquad \mbox{or} \qquad \hat{l}_3 > 2.
\end{equation*}
We apply a similar argument to conclude that either $ \hat{l}_3 \leq 2 $ or that we have applied a step of type (b) at most once.
In view of \eqref{eq: 22}, and because $l_1$ and $l_3$ can only
increase during the regularisation procedure, we see that $l_1
\leq 2 \hat{h}_1$ and $l_3 \leq 2$ lead to contradictions for $N$
sufficiently large depending only on $\beta$. This implies that there can be at most one step of type (a) and (b), respectively.
Therefore, using again \eqref{eq: 22} we see that before the application of Proposition~\ref{prop:classIVregularisationstep2} it holds that
\begin{align}\label{eq: 23}
l_2 \leq 6, \quad h_1 \leq 4 + \frac{1}{\beta}, \quad h_3 \le 2,
\quad h_2 \sim \sqrt{\frac{2N}{2 - \beta}}, \qquad
l_1,l_3 \sim \sqrt{\frac{N(2 - \beta)}{2}}.
\end{align}
{\bf Step 2.4.} Finally, we consider the modification in Lemma
\ref{lem:classIVregularisationstep1}. For simplicity, we only address
the modification leading to $\min \lbrace h_1, h_1+h_2 -l_1-l_2\rbrace
\le 0$. Note that each step of the procedure consists in $h_1
\rightarrow h_1- 1$ and $l_1 \rightarrow l_1 +1$. As after the
application of Lemma \ref{lem:classIVregularisationstep1} we have
$\hat{h}_2/\hat{l}_1 \ge 2/( 2 - \beta ) + {\rm O}(1/\sqrt{N})$, see \eqref{eq: 23}, and during its application $h_2$ does not change and $l_1$ can only increase, at each step of the procedure it holds that ${h}_2/{l}_1 \ge 2/( 2 - \beta ) + {\rm O}(1/\sqrt{N})$. In view of \eqref{eq: 23}, in particular the fact that $l_2 \leq 6$, for $N$ sufficiently large depending only on $\beta$ we have
\begin{align}\label{eq: good con}
(h_1+ h_2) / (l_1+l_2) \geq h_2/(l_1 + l_2) \ge c_\beta
\end{align}
at each step of the procedure, for some constant $ c_\beta
>1$ only depending on $ \beta $. This ensures that at the beginning we have $h_1 \le M$ for $M \in \mathbb{N}$ such that $(M+1)/M < c_\beta$, since otherwise $M+1$ rows could be moved to $M$ columns, leading to a strictly smaller energy. This along with \eqref{eq: 23} shows that at most $ C_\beta N^{1/2}$ points are moved. Moreover, the modification stops once $h_1=0$ or $h_1 + h_2 \le l_1 + l_2$ has been obtained. By \eqref{eq: good con} we see that it necessarily holds that $h_1 = 0$. In a similar fashion, one gets $h_3 = 0$. This shows that directly after the application of Lemma \ref{lem:classIVregularisationstep1} we obtain a configuration in Class~$\mathcal{I}$. This concludes the proof, as we have seen that in the modification of Lemma \ref{lem:classIVregularisationstep1} only $ C_\beta N^{1/2}$ points are moved.
{\bf Step 3: Class $\mathcal{V}$.} We now consider an optimal
configuration in Class $\mathcal{V}$ and show that it can be modified
to a configuration in Class $\mathcal{IV}$ such that \eqref{eq: N12}
holds. The modification in Proposition~\ref{prop:classVregularisation}
consists in moving at most $h_1$ rows to the left.
By Step 2 we know that $h_1\le C_\beta $ which implies that we have moved at most $ C_\beta N^{1/2}$ many points. This concludes the proof of Step 3.
\end{proof}
Let us highlight that in the proof of Theorem \ref{thm:nonehalf} we have not only shown the $N^{1/2}$-law for minimisers, but we also obtain explicit estimates on the shape of the configuration, which we record as a separate corollary. This is a consequence of equations \eqref{eq: 23}, \eqref{eq: good con}, and the procedure from Step 3 of the proof of Theorem \ref{thm:nonehalf}.
\begin{corollary}
Suppose that $(A,B) \in \mathcal{IV} \cup \mathcal{V} $ is an optimal configuration. Then,
\begin{align*}
l_2, h_1, h_3 \leq C_\beta , \quad h_2 \sim
\sqrt{\frac{2N}{2- \beta}}, \quad l_1,l_3 \sim
\sqrt{\frac{N(2- \beta)}{2}}.
\end{align*}
\end{corollary}
Finally, let us note that the quantitative bound given in Theorem \ref{thm:nonehalf} is sharp: the optimal configuration in Class $\mathcal{IV}$ given by Proposition \ref{prop:largeminimisersiv} differs from the one given in Theorem \ref{thm:main}.v by a number of points of exactly this order.
\section{Proofs in the continuum setting}\label{sec:wulff}
We conclude by providing the proofs of Corollaries \ref{cor: wulff}
and \ref{cor: cryst-db} from the Introduction.
\begin{proof}[Proof of Corollary \ref{cor: wulff}]
For the explicit solution $(A'_{N},B'_{N})$ in Theorem \ref{thm:main}.v with $N_A=N_B=: N$, one can directly verify that $\mu_{A'_{N}} \stackrel{\ast}{\rightharpoonup} {\mathcal L} \mres {\mathcal A}$ and $\mu_{B'_{N}} \stackrel{\ast}{\rightharpoonup} {\mathcal L} \mres {\mathcal B}$, where $\mathcal{A}$ and $\mathcal{B}$ are given in \eqref{eq:wulff}. For a general sequence of solutions $(A_{N},B_{N})$ of \eqref{eq:dbp}, the statement follows from the fluctuation estimate in Theorem \ref{thm:main}.vi.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor: cryst-db}]
We start by relating point configurations with sets of finite
perimeter: given $(A_{N},B_{N})$ with $N_A = N_B =: N$, we define the sets
\begin{align}\label{eq: sofp}
A^{N}:= \frac{1}{\sqrt{N}} {\rm int}\Big(\bigcup_{p \in A_{N}} p + [-\tfrac{1}{2}, \tfrac{1}{2}]^2\Big), \quad \quad \quad B^{N} := \frac{1}{\sqrt{N}} {\rm int}\Big( \bigcup_{p \in B_{N}} p + [-\tfrac{1}{2}, \tfrac{1}{2}]^2\Big).
\end{align}
Clearly, $A^{N}$ and $B^{N}$ satisfy $A^{N} \cap B^{N} =
\emptyset$ and $\mathcal L(A^{N})=\mathcal L(B^{N})=1$. It is an
elementary matter to check that \eqref{eq:dbp} and \eqref{eq:dbp2}
coincide in this case up to normalisation, i.e.,
\begin{align}\label{eq: scal-en-es}
N^{-1/2} P(A_{N},B_{N}) = P_{\rm cont}(A^{N},B^{N}) := {\rm Per} (A^{N}) + {\rm Per} (B^{N}) - 2\beta {\rm L} (\partial^* A^{N}
\cap \partial^* B^{N}).
\end{align}
Now, consider any pair of sets of finite perimeter with $A \cap B = \emptyset$ and $\mathcal L(A)=\mathcal L(B)=1$. Given $\varepsilon>0$, by the density result \cite[Theorem 2.1 and Corollary 2.4]{BraidesDensity} (for $\mathcal{Z}$ consisting of three values representing $A$, $B$, and the empty set) we can find $A'$ and $B'$ with polygonal boundary such that $A' \cap B' = \emptyset$, $\mathcal L(A')=\mathcal L(B')=1$, and
$$P_{\rm cont}(A',B') \le P_{\rm cont}(A,B) + \varepsilon. $$
(Strictly speaking, the constraint $\mathcal L(A')=\mathcal L(B')=1$
has not been addressed there. However, possibly after scaling one can
assume that $\mathcal L(A')\le 1$, $\mathcal L(B') \le 1$, and then it
suffices to add disjoint squares of small volume and perimeter to satisfy the constraint.)
We define a point configuration related to $A'$ and $B'$ by setting
$$A_{N} = \lbrace p \in \mathbb{Z}^2 \colon \, p/\sqrt{N} \in A' \rbrace, \quad \quad \quad B_{N} = \lbrace p \in \mathbb{Z}^2 \colon \, p/\sqrt{N} \in B' \rbrace. $$
By $A^{N}$ and $B^{N}$ we denote the corresponding sets of finite
perimeter defined in \eqref{eq: sofp}. Note that the point sets $A_{N}$
and $B_{N}$ may have different cardinalities, although $\mathcal
L(A')=\mathcal L(B')=1$. Still, equal cardinalities can be restored by
adding points to one of the two sets. This can be achieved at the price of making a
small error in the perimeter, which goes to $0$ with $N$ after
rescaling.
The fact that $(A', B')$ have polygonal boundary along with the properties of $\Vert \cdot \Vert_1$ implies that
$$\lim_{N \to \infty} P_{\rm cont}(A^{N},B^{N}) = P_{\rm cont}(A',B').$$
This along with \eqref{eq: scal-en-es} and Theorem \ref{thm:main}.iv yields
\begin{align*}
P_{\rm cont}(A,B) & \ge \lim_{N\to \infty} P_{\rm cont}(A^{N},B^{N}) - \varepsilon \ge \lim_{N \to \infty} N^{-1/2}\min\{P_*,P^*\} -\varepsilon \\
& = 4 \frac{1}{\sqrt{\frac{2}{2-\beta}} } +2
\sqrt{\frac{2}{2-\beta}} (2-\beta) -\varepsilon = 4\sqrt{2}\sqrt{2-\beta} -\varepsilon.
\end{align*}
We directly compute $P_{\rm cont}(\mathcal A,\mathcal B) =
4\sqrt{2}\sqrt{2-\beta}$. As $\varepsilon>0$ is arbitrary, we conclude that the pair $(\mathcal A,\mathcal B)$ is a solution of \eqref{eq:dbp2}.
\end{proof}
\section*{Acknowledgements}
MF acknowledges support of the DFG project FR 4083/3-1. This work
was supported by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) under Germany's Excellence Strategy EXC 2044
-390685587, Mathematics M\"unster: Dynamics--Geometry--Structure. WG
acknowledges support of the FWF grant I4354, the OeAD-WTZ project CZ
01/2021, and the grant 2017/27/N/ST1/02418 funded by the National
Science Centre, Poland. US acknowledges support of the FWF grants
I4354, F65, I5149, and P\,32788, and by the OeAD-WTZ project CZ
01/2021. The authors are indebted to Frank Morgan for pointing
out many relevant references.
| {
"timestamp": "2022-03-29T02:44:44",
"yymm": "2109",
"arxiv_id": "2109.01697",
"language": "en",
"url": "https://arxiv.org/abs/2109.01697",
"abstract": "We investigate minimal-perimeter configurations of two finite sets of points on the square lattice. This corresponds to a lattice version of the classical double-bubble problem. We give a detailed description of the fine geometry of minimisers and, in some parameter regime, we compute the optimal perimeter as a function of the size of the point sets. Moreover, we provide a sharp bound on the difference between two minimisers, which are generally not unique, and use it to rigorously identify their Wulff shape, as the size of the point sets scales up.",
"subjects": "Metric Geometry (math.MG); Mathematical Physics (math-ph); Combinatorics (math.CO)",
"title": "The double-bubble problem on the square lattice",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9854964207345893,
"lm_q2_score": 0.8128673201042492,
"lm_q1q2_score": 0.8010778344948553
} |
https://arxiv.org/abs/1612.00214 | A remark on local fractional calculus and ordinary derivatives | In this short note we present a new general definition of local fractional derivative, that depends on an unknown kernel. For some appropriate choices of the kernel we obtain some known cases. We establish a relation between this new concept and ordinary differentiation. Using such formula, most of the fundamental properties of the fractional derivative can be derived directly. | \section{Introduction}
Fractional calculus is a generalization of ordinary calculus, where derivatives and integrals of arbitrary real or complex order are defined. These fractional operators may model certain real world phenomena more efficiently, especially when the dynamics is affected by constraints inherent to the system. There exist several definitions for fractional derivatives and fractional integrals, like the Riemann--Liouville, Caputo, Hadamard, Riesz, Gr\"{u}nwald--Letnikov, Marchaud, etc. (see e.g., \cite{Kilbas,Podlubny} and references therein). Although most of them are already well-studied, some of the usual features concerning the differentiation of functions fail, like the Leibniz rule, the chain rule, the semigroup property, among others. As it was mentioned in \cite{Babakhani}, ``These definitions, however, are non-local in nature, which makes them unsuitable for investigating properties related to local scaling or fractional differentiability". Recently, the concept of local fractional derivative has gained relevance, namely because such derivatives keep some of the properties of ordinary derivatives, although they lose the memory condition inherent to the usual fractional derivatives \cite{Anderson,Chen,Katumgapola,Kolwankar,Kolwankar2}.
A natural question is which local fractional derivative definition is the best one to consider, and the answer is not unique. Similarly to what happens with the classical definitions of fractional operators, the best choice depends on which definition fits the experimental data of the model better, and because of this we already find a vast number of definitions for local fractional derivatives.
\section{Local fractional derivative}
We present a definition of local fractional derivative using kernels.
\begin{definition} Let $k:[a,b]\to\mathbb R$ be a continuous nonnegative map such that $k(t)\not=0$, whenever $t>a$. Given a function $f:[a,b]\to\mathbb R$ and a real number $\alpha\in(0,1)$, we say that $f$ is $\alpha$-differentiable at $t>a$, with respect to the kernel $k$, if the limit
\begin{equation}\label{def}f^{(\a)}(t):=\lim_{\epsilon\to0}\frac{f(t+\epsilon k(t)^{1-\alpha})-f(t)}{\epsilon}\end{equation}
exists. The $\alpha$-derivative at $t=a$ is defined by
$$f^{(\a)}(a):=\lim_{t\to a^+}f^{(\a)}(t),$$
if the limit exists.
\end{definition}
Consider the limit $\alpha\to1^-$. In this case, for $t>a$, we obtain the classical definition for derivative of a function, $f^{(\a)}(t)=f'(t)$. Our definition is a more general concept, compared to others that we find in the literature. For example, taking $k(t)=t$ and $a=0$, we get the definition from \cite{Batarfi,Cenesiz,Hammad,Hesameddini,Khalil} (also called conformable fractional derivative); when $k(t)=t-a$, the one from \cite{Abdeljawad,Anderson2,Unal}; for $k(t)=t+1/\Gamma(\alpha)$, the definition in \cite{Atangana,Atangana2}.
The following result is trivial, and we omit the proof.
\begin{theorem} Let $f:[a,b]\to\mathbb R$ be a differentiable function and $t>a$. Then, $f$ is $\alpha$-differentiable at $t$ and
$$f^{(\a)}(t)=k(t)^{1-\alpha}f'(t), \quad t>a.$$
Also, if $f'$ is continuous at $t=a$, then
$$f^{(\a)}(a)=k(a)^{1-\alpha}f'(a).$$
\end{theorem}
However, there exist $\alpha$-differentiable functions which are not differentiable in the usual sense. For example, consider the function $f(t)=\sqrt t$, with $t\geq0$. If we take the kernel $k(t)=t$, then $f^{(\a)}(t)=1/2 \, t^{1/2-\alpha}$. Thus, for $\alpha\in(0,1/2)$, $f^{(\a)}(0)=0$ and for $\alpha=1/2$, $f^{(\a)}(0)=1/2$. In general, if we consider the function $f(t)=\sqrt[n]{t}$, with $t\geq0$ and $n\in\mathbb N\setminus \{1\}$, we have $f^{(\a)}(t)=1/n \, t^{1/n-\alpha}$ and so $f^{(\a)}(0)=0$ if $\alpha\in(0,1/n)$ and for $\alpha=1/n$, $f^{(\a)}(0)=1/n$.
\begin{theorem} If $f^{(\a)}(t)$ exists for $t>a$, then $f$ is differentiable at $t$ and
$$f'(t)=k(t)^{\alpha-1} f^{(\a)}(t).$$
\end{theorem}
\begin{proof} Writing $\delta = \epsilon\, k(t)^{1-\alpha}$ (note that $k(t)>0$ for $t>a$), it follows from
$$\begin{array}{ll}
f'(t)&=\displaystyle \lim_{\delta\to0}\frac{f(t+\delta)-f(t)}{\delta}\\
&=\displaystyle k(t)^{\alpha-1} \lim_{\epsilon\to0}\frac{f(t+\epsilon k(t)^{1-\alpha})-f(t)}{\epsilon}\\
&=\displaystyle k(t)^{\alpha-1} f^{(\a)}(t).
\end{array}$$
\end{proof}
Of course, we cannot conclude anything at the initial point $t=a$, as was discussed before.
Combining the two previous results, we have the main result of our paper.
\begin{theorem}\label{MainT} A function $f:[a,b]\to\mathbb R$ is $\alpha$-differentiable at $t>a$ if and only if it is differentiable at $t$. In that case, we have the relation
\begin{equation}\label{MainF}f^{(\a)}(t)=k(t)^{1-\alpha}f'(t), \quad t>a.\end{equation}
\end{theorem}
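As a quick numerical sanity check of \eqref{MainF} (a sketch only, with hypothetical choices: $f=\sin$, the conformable kernel $k(t)=t$, and $\alpha=1/2$), the difference quotient from \eqref{def} can be compared against $k(t)^{1-\alpha}f'(t)$:

```python
import math

def alpha_derivative(f, k, t, alpha, eps=1e-6):
    # Difference quotient from the definition of the alpha-derivative
    return (f(t + eps * k(t) ** (1 - alpha)) - f(t)) / eps

t, alpha = 2.0, 0.5
k = lambda s: s                                # conformable kernel k(t) = t
approx = alpha_derivative(math.sin, k, t, alpha)
exact = k(t) ** (1 - alpha) * math.cos(t)      # k(t)^(1-alpha) * f'(t)
# approx and exact agree up to a discretisation error of order eps
```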
\section{Conclusion}
In this short note we showed that some of the existing notions of local fractional derivative are very closely related to the usual derivative. In fact, the $\alpha$-derivative of a function equals the first-order derivative multiplied by a continuous function. Also, using formula \eqref{MainF}, most of the results concerning $\alpha$-differentiation can be deduced trivially from the ordinary ones. In the authors' opinion, local fractional calculus is an interesting idea and deserves further research, but definitions like \eqref{def} are not the best ones and a different path should be followed.
\section*{Acknowledgments}
Research supported by Portuguese funds through the CIDMA - Center for Research and Development in Mathematics and Applications,
and the Portuguese Foundation for Science and Technology (FCT-Funda\c{c}\~ao para a Ci\^encia e a Tecnologia), within project UID/MAT/04106/2013 (R. Almeida) and by the Warsaw School of Economics grant KAE/S15/35/15 (T. Odzijewicz).
| {
"timestamp": "2016-12-02T02:04:31",
"yymm": "1612",
"arxiv_id": "1612.00214",
"language": "en",
"url": "https://arxiv.org/abs/1612.00214",
"abstract": "In this short note we present a new general definition of local fractional derivative, that depends on an unknown kernel. For some appropriate choices of the kernel we obtain some known cases. We establish a relation between this new concept and ordinary differentiation. Using such formula, most of the fundamental properties of the fractional derivative can be derived directly.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A remark on local fractional calculus and ordinary derivatives",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.985496421586532,
"lm_q2_score": 0.8128673110375457,
"lm_q1q2_score": 0.8010778262521677
} |
https://arxiv.org/abs/2207.02060 | A sharp Korn's inequality for piecewise $H^1$ space and its application | In this paper, we revisit Korn's inequality for the piecewise $H^1$ space based on general polygonal or polyhedral decompositions of the domain. Our Korn's inequality is expressed with minimal jump terms. These minimal jump terms are identified by characterizing the restriction of rigid body mode to edge/face of the partitions. Such minimal jump conditions are shown to be sharp for achieving the Korn's inequality as well. The sharpness of our result and explicitly given minimal conditions can be used to test whether any given finite element spaces satisfy Korn's inequality, immediately as well as to build or modify nonconforming finite elements for Korn's inequality to hold. | \section{Introduction}\label{Intro}
Korn's inequality has played a fundamental role in the development of linear elasticity; see \cite{horgan1995korn} for a review of Korn's inequality and its applications in continuum mechanics. In fact, there are many works proving the classical Korn's inequality \cite{nitsche1981korn,wang2003korn,ciarlet2010korn} for $H^1$ vector fields. In \cite{brenner2004korn}, Korn's inequalities for piecewise $H^1$ vector fields are established. We note that a strengthened version of Korn's inequality for piecewise $H^1$ vector fields is presented in \cite{mardal2006observation}. By the Korn's inequality established in
\cite{mardal2006observation}, the nonconforming finite element for 2D introduced in \cite{mardal2002robust} is shown to satisfy Korn's inequality. In this paper, we revisit Korn's inequality for the piecewise $H^1$ space based on general polygonal or polyhedral decompositions of the domain. Our Korn's inequality is expressed with minimal jump terms, which can readily be used to design degrees of freedom. Namely, we move one step further from the result presented in \cite{mardal2006observation}. In particular, for 3D, we identify that the tangential component of a rigid body mode restricted to a face is indeed the lowest-order Raviart--Thomas element on that face, which is the rotated version discussed in \cite{mardal2006observation}. Additionally, with the minimal jump terms, we show that our Korn's inequality is sharp. We emphasize that, with the help of the sharpness of the Korn's inequality and the explicit form of the jump conditions, it is very easy to test whether any given finite element space satisfies Korn's inequality. The proposed minimal jump conditions can also provide guidance to modify some existing nonconforming finite elements so that Korn's inequality is satisfied.
Throughout the paper, we shall use the standard notation for Sobolev spaces. Namely, $H^k(\Omega)$ denotes the Sobolev space of scalar functions in $\Omega$ whose derivatives up to order $k$ are square integrable, with the norm $\|\cdot\|_k$. The notation $|\cdot |_k$ denotes the semi-norm derived from the partial derivatives of order equal to $k$. Furthermore, $\|\cdot\|_{k,T}$ and $|\cdot|_{k,T}$ denote the norm $\|\cdot\|_k$ and the semi-norm $|\cdot|_k$ restricted to the domain $T$. Given a partition $\mathcal{T}_h$ of the domain $\Omega$, where $\Omega$ is a bounded connected open polyhedral domain in $\Reals{d}$ with $d = 2$ or $3$, we shall also use $H^1(\Omega;\mathcal{T}_h)$ to denote the space of element-wise $H^1$ functions. We denote the vectors of size $d$ whose components are in $H^1(\Omega;\mathcal{T}_h)$ by $(H^1(\Omega;\mathcal{T}_h))^d$. We also denote by $H({\rm div};\Omega)$ the space consisting of vectors whose divergence belongs to $L^2(\Omega)$. We use $P_\ell(T)$ for the space of polynomials of degree up to $\ell$ on the domain $T$, while $(P_\ell(T))^d$ denotes the vectors of size $d$ whose components are polynomials of degree at most $\ell$.
We recall that the operator ${\bm{curl}}$ on a scalar function $q$ in $2D$ is defined by
\begin{equation}
{\bm{curl}} q = \left ( -\frac{\partial q}{\partial y}, \frac{\partial q}{\partial x} \right )^T,
\end{equation}
while on a vector function $\bm{q} = (q_i)_{i=1,2,3}^T$ in $3D$, it is defined by
\begin{equation}
{\bm{curl}} \bm{q} = \left ( \frac{\partial q_3}{\partial y} - \frac{\partial q_2}{\partial z}, \frac{\partial q_1}{\partial z} - \frac{\partial q_3}{\partial x}, \frac{\partial q_2}{\partial x} - \frac{\partial q_1}{\partial y} \right )^T.
\end{equation}
For any given vector space $\bm{V}$, by ${\rm dim} \bm{V}$, we mean the dimension of $\bm{V}$.
The rest of our paper is organized as follows. In \S \ref{korncd}, we prove the Korn's inequality for the piecewise $H^1$ space with minimal jump terms. \S \ref{sharp} considers the sharpness of the inequality, which indicates that the minimal jump terms presented in our Korn's inequality are necessary and cannot be reduced any further. With the Korn's inequality for the piecewise $H^1$ space with minimal jump terms and the sharpness discussion in hand, in \S \ref{appl} we discuss some existing finite elements. Applications of our theory are also provided to modify nonconforming finite elements which do not satisfy the Korn's inequality so that they do.
Lastly, we provide a concluding remark.
\section{Korn's inequality for the piecewise $H^1$ vector functions}\label{korncd}
Let $\Omega$ be a bounded connected open polyhedral domain in $\Reals{d}$ with $d = 2$ or $3$ and $\partial \Omega$ be the boundary of the domain $\Omega$. Then the classical Korn's inequality reads as follows:
\begin{equation}
|{\bf{u}} |_{H^1(\Omega)} \leq C_\Omega \left ( \| \mathcal{D}({\bf{u}})\|_0 + \|{\bf{u}}\|_0 \right ), \quad \forall {\bf{u}} \in [H^1(\Omega)]^d,
\end{equation}
where the strain tensor $\mathcal{D}({\bf{u}}) \in \Reals{d\times d}$ is given as follows:
\begin{equation}
\mathcal{D}_{ij}({\bf{u}}) = \frac{1}{2} \left ( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right ) \quad \mbox{ for } 1 \leq i,j \leq d.
\end{equation}
Let ${\bm{RM}}(\Omega)$ be the space of rigid motions on $\Omega$ defined by
\begin{equation}
\bm{RM} (\Omega) = \{ \bm {a} + \bm{A} \bm{x} : \bm{a} \in \Reals{d} \,\, \mbox{ and } \,\, \bm{A} \in {\textsf{Skw}}^{d\times d} \},
\end{equation}
where $\bm{x} = (x_1, \cdots, x_d)^T \in \Reals{d}$ is the position vector function on $\Omega$ and ${\textsf{Skw}^{d\times d}}$ is the set of anti-symmetric $d\times d$ matrices. We would like to remark that if $\bm{v} = \bm{v}(\bm{x}) \in \bm{RM}(\Omega)$, then $\bm{v} = \bm{a} + \bm{A}\bm{x}$ for some constant vector $\bm{a}$ and a homogeneous polynomial of degree one $\bm{A}\bm{x}$ such that $\bm{x} \cdot \bm{A} \bm{x} = 0$. This is relevant to the lowest-order N\'ed\'elec finite element of the first kind \cite{monk2003finite}. It is easy to verify that the space $\bm{RM}(\Omega)$ is the kernel of the strain tensor, i.e., it holds that
\begin{equation}
\bm{RM}(\Omega) = \{\bm{v} \in (H^1(\Omega))^d : \mathcal{D}(\bm{v}) = 0 \}.
\end{equation}
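As an illustration (not part of the argument), one direction of this kernel property is easy to check numerically: the Jacobian of an affine map is recovered exactly by central differences, so the strain of $\bm a + \bm A\bm x$ with $\bm A$ antisymmetric vanishes up to rounding. The numerical values below are arbitrary.

```python
def strain(v, x, h=0.5):
    """Symmetric gradient D(v) at x via central differences (exact for affine v)."""
    d = len(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        vp, vm = v(xp), v(xm)
        for i in range(d):
            J[i][j] = (vp[i] - vm[i]) / (2 * h)
    return [[0.5 * (J[i][j] + J[j][i]) for j in range(d)] for i in range(d)]

# A rigid motion v(x) = a + A x with A antisymmetric (arbitrary values)
a = [1.0, -2.0, 0.5]
A = [[0.0, 0.3, -0.7], [-0.3, 0.0, 1.1], [0.7, -1.1, 0.0]]
v = lambda x: [a[i] + sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
D = strain(v, [0.4, -1.2, 2.0])
# Every entry of D is zero up to rounding, since D(v) = (A + A^T)/2 = 0
```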
\subsection{Korn's inequality with explicit upper bounds}
Given $\Omega \subset \Reals{d}$ with $d = 2$ or $3$, we consider a mesh $\mathcal{T}_h$, which is a shape-regular polygonal and/or polyhedral partition of $\Omega$. Let $\mathcal{T}_h = \cup \{T\}$ denote the collection of elements and let $h = \max_{T \in \mathcal{T}_h} {\rm diam} (T)$ denote the mesh size. We further denote by $\mathcal{E}_h$ the set of all edges/faces of $\mathcal{T}_h$ and write
\begin{equation}
\mathcal{E}_h = \mathcal{E}_h^{o} \cup \mathcal{E}_h^\partial,
\end{equation}
where $\mathcal{E}_h^o$ denotes the set of all interior edges/faces of $\mathcal{T}_h$ and $\mathcal{E}_h^{\partial}$ denotes the set of all boundary edges/faces, respectively. Let $\mathcal{V}(T)$ be the set of all vertices for $T \in \mathcal{T}_h$ while $\mathcal{V} (\mathcal{T}_h)$ denotes the set of all vertices for the
partition $\mathcal{T}_h$.
Given two adjacent polygons/polyhedra $T^+$ and $T^-$ in $\mathcal{T}_h$, let $f = \partial T^+ \cap \partial T^-$
be the common boundary (interface) between $T^+$ and $T^-$, and let $n^+$ and $n^-$ be the unit normal vectors to $f$ pointing to the exterior of $T^+$ and $T^-$, respectively. For any edge (or face) $f
\in \mathcal{E}_h^{o}$, a scalar $q$, and a vector $\bm{v}$, we define the jumps
\begin{subeqnarray}
\jump{q}_f &=& q|_{\partial T^+\cap f} - q|_{\partial T^-\cap f}, \\
\jump{\bm{v}}_f &=& \bm{v}|_{\partial T^+\cap f} - \bm{v}|_{\partial T^-\cap f}.
\end{subeqnarray}
When $f \in \mathcal{E}_h^{\partial}$, the above quantities are defined as
\begin{equation}
\jump{q}_f = q|_{f}, \quad \mbox{ and } \quad \jump{\bm{v}}_f = \bm{v}|_{f}.
\end{equation}
Throughout the paper, we shall also consider the subspace of $\bm {RM}(\Omega)$, denoted by $\bm {RM}^\partial (\Omega)$ defined as follows:
\begin{equation}
\bm {RM}^{\partial}(\Omega) = \left \{\bm m \in \bm {RM}(\Omega): ~ \|\bm m\|_{L^2(\partial \Omega)}=1,\int_{\partial \Omega}\bm mds=0 \right \}.
\end{equation}
We also consider the following space of piecewise linear vector fields:
\begin{eqnarray*}
\bm{V}_h &=& \{\bm v\in (L^2(\Omega))^d: \bm v_T = \bm v|_T \in (P_1(T))^d, \quad \forall T \in \mathcal {T}_h\}
\end{eqnarray*}
and space of continuous piecewise linear vector fields:
\begin{eqnarray*}
\bm{W}_h &=& \{\bm v\in (H^1(\Omega))^d: \bm v_T =\bm v|_T \in (P_1(T))^d, \quad \forall T \in \mathcal{T}_h\}.
\end{eqnarray*}
Consider a linear map $E: \bm{V}_h \longrightarrow \bm{W}_h$ defined as follows:
\begin{equation}
E (\bm v)(p) = \frac{1}{|\chi_p|}\sum_{T \in \chi_p}\bm v|_T(p), \quad \forall p \in \mathcal{V}(\mathcal{T}_h),
\end{equation}
where $\chi_p = \{ T \in \mathcal{T}_h: p \in \mathcal{V}(T) \}$ is the patch of the vertex $p$, i.e., the collection of elements that contain $p$ as a vertex, and $|\chi_p|$ is the cardinality of the set $\chi_p$. We note that it holds that
\begin{equation}
|\chi_p| \lesssim 1, \quad \forall p \in \mathcal{V}(\mathcal{T}_h).
\end{equation}
We recall the following approximation property, which can be found in \cite{brenner2004korn}:
\begin{Lemma}
Let $\bm v \in \bm{V}_h$ and $T \in \mathcal{T}_h$. Then, with $\bm v_{T} = \bm v|_T$, we have the following estimate:
\begin{equation}
|\bm v_{T}(p) - E(\bm v)(p)|^2 \lesssim \sum_{f \in \mathcal{E}_p} |\jump{\bm v}_f(p)|^2, \quad T \in \mathcal{T}_h \, \mbox{ and } \, p \in \mathcal{V}(T),
\end{equation}
where
\begin{equation}
\mathcal{E}_p = \{ f \in \mathcal{E}_h^o : p \in \partial f \},
\end{equation}
is the set of interior sides sharing $p$ as a common vertex, and $\jump{\bm v}_f$ is the jump of $\bm v$ across the edge/face $f$.
\end{Lemma}
The main observation that leads us to the minimal jump conditions, or the refined Korn's inequality, is presented in the following simple but important theorem:
\begin{Theorem}\label{kornpre}
For any $T \in \mathcal{T}_h$, let $f \subset \partial T$ and let $\bm{c}_{f}$ be the barycenter of $f$. For some constants $c, c_1, c_2 \in \Reals{}$, we have for 2D and 3D, respectively, with $\bm{t}_f$ and $\bm{n}_f$ being the tangent vector to the edge and the normal vector to the face,
\begin{subeqnarray*}
{\bm{RM}}(f)^\perp &=& \{ (\bm{v} \cdot \bm{t}_f) |_f: \bm{v} \in \bm{RM}(T) \} = {\rm span} \{ 1\} \quad \mbox{ for } d = 2 \\
&=& {\rm RT}_0(f) := \{ (\bm{v} \times \bm{n}_f)|_f : \bm{v} \in \bm{RM}(T) \} \quad \mbox{ for } d = 3,
\end{subeqnarray*}
where ${\rm RT}_0(f)$ can be characterized as follows:
\begin{equation}
{\rm RT}_0(f) = \{ (c_1 + c x_1) \bm{t}_1 + (c_2 + c x_2) \bm{t}_2 : c, c_1, c_2 \in \Reals{} \},
\end{equation}
where $(x_1, x_2)$ are coordinates on $f$ with respect to an orthogonal basis $\{\bm{t}_1, \bm{t}_2 \}$ of the plane containing $f$, i.e., $\bm{t}_1 \cdot \bm{t}_2 = 0$.
\end{Theorem}
\begin{proof}
We begin our proof for the {\rm 2D} case. Choose $\bm{v} \in {\bm{RM}}(T)$. Then, $\bm{v}$ is of the following form:
\begin{equation}
\bm{v} = \bm{a} + b (y, -x)^T,
\end{equation}
for some constant vector $\bm{a}$ and a constant $b$. Without loss of generality, we assume that $f \subset \partial T$ lies on a line $y = mx + n$. Then, we have that $\bm{t}_f =\frac{1}{\sqrt{1+m^2}} (1,m)^T$ and
\begin{equation}
(\bm{v} \cdot \bm{t}_f) |_f= c_0,
\end{equation}
where $c_0$ is some constant. We shall now consider the {\rm 3D} case. Any $\bm{v} \in {\bm{RM}}(T)$ can be written as $\bm{v} = \bm{a} + \bm{A} \bm{x}$, where $\bm{A}$, $\bm{x}$, and $\bm{n}_f$ are given as follows:
\begin{eqnarray*}
\bm{A} = \left ( \begin{array}{ccc} 0 & a_1 & a_2 \\ -a_1 & 0 & a_3 \\ -a_2 & -a_3 & 0 \end{array} \right ), \quad \bm{x} = \left ( \begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array} \right )
, \quad \mbox{ and } \quad \bm{n}_{f} = \left ( \begin{array}{c} n_1 \\ n_2 \\ n_3 \end{array} \right ).
\end{eqnarray*}
Then a simple calculation leads to
\begin{eqnarray*}
\bm{v} \times \bm{n}_{f} = \bm{a} \times \bm{n}_f + \left ( \begin{array}{c} a_1 x_2 + a_2 x_3 \\ -a_1 x_1 + a_3x_3 \\ -a_2x_1 - a_3x_2 \end{array} \right ) \times \left ( \begin{array}{c} n_1 \\ n_2 \\ n_3 \end{array} \right ).
\end{eqnarray*}
Particularly, we pay attention to the following quantity:
\begin{eqnarray*}
\bm{A} \bm{x} \times \bm{n}_{f} = \left ( \begin{array}{c} \eta x_1 + a_3 \bm{n}_f \cdot \bm{x} \\ \eta x_2 - a_2 \bm{n}_f \cdot \bm{x} \\ \eta x_3 + a_1 \bm{n}_f \cdot \bm{x} \end{array} \right ) =
\left ( \begin{array}{c} \eta x_1 + a_3 \bm{n}_f \cdot (\bm{x} - \bm{c}_f) + a_3 \bm{n}_f \cdot \bm{c}_f \\ \eta x_2 - a_2 \bm{n}_f \cdot ( \bm{x} -\bm{c}_f ) - a_2 \bm{n}_f \cdot \bm{c}_f \\ \eta x_3 + a_1 \bm{n}_f \cdot ( \bm{x} - \bm{c}_f ) + a_1 \bm{n}_f \cdot \bm{c}_f \end{array} \right )
\end{eqnarray*}
where $\eta = \bm{n}_f \cdot (-a_3, a_2, -a_1) = -a_1 n_3 + a_2 n_2 - a_3 n_1$ and $\bm{c}_f$ is the barycenter of the face $f$.
In case $\eta = 0$, $(\bm{A} \bm{x} \times \bm{n}_f)|_f = \bm{n}_f \cdot \bm{c}_f (a_3, -a_2, a_1)^T.$ Note that $(a_3, -a_2, a_1)^T \in f$ and so, it can be expressed as $\bm{\mu} \times \bm{n}_f$ for some vector $\bm{\mu}$. Therefore, $(\bm{v} \times \bm{n}_f)|_f = \bm{b} \times \bm{n}_f$ with $\bm{b} = \bm{a} + \bm{\mu}$. In case $\eta \neq 0$, we observe that for $\bm{x} \in f$,
\begin{eqnarray*}
\bm{A} \bm{x} \times \bm{n}_{f} = \eta (\bm{x} - \bm{c}),
\end{eqnarray*}
where
\begin{equation}
\bm{c} = \frac{1}{\eta} \left ( \begin{array}{c} -a_3 \bm{n}_{f} \cdot \bm{c}_f \\ a_2 \bm{n}_{f} \cdot \bm{c}_f \\ - a_1 {\bm{n}}_{f} \cdot \bm{c}_f \end{array} \right ) \quad \mbox{ and } \quad (\bm{x} - \bm{c})\cdot \bm{n}_f = 0.
\end{equation}
This means that we can rewrite $\bm{A}\bm{x} \times \bm{n}_f$ further as follows, with $c = \eta$,
\begin{equation}
\bm{A} \bm{x} \times \bm{n}_{f} = c (\bm{x} - \bm{c}) = c (\bm{x} - \bm{c}_ f + \bm{c}_f - \bm{c}) = \bm{d} + c (\bm{x} - \bm{c}_f),
\end{equation}
where $\bm{d}$ is a constant vector such that $\bm{d} \cdot \bm{n}_f = 0$. Therefore, we arrive at the conclusion and this completes the proof.
\end{proof}
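Both computations in the proof above can be checked symbolically. The following sketch (illustrative only; it assumes Python with the sympy library, which is outside the paper's formal development) verifies that the 2D tangential trace is constant along the edge and that the 3D cross product $\bm{A}\bm{x} \times \bm{n}_f$ has the claimed componentwise form:

```python
# Symbolic sanity check (illustrative, not part of the formal proof):
# (i) in 2D, the tangential trace of a rigid motion along y = m*x + n0 is constant;
# (ii) in 3D, A*x x n = (eta*x1 + a3*(n.x), eta*x2 - a2*(n.x), eta*x3 + a1*(n.x)).
import sympy as sp

# (i) 2D tangential trace
x, m, n0, b, p1, p2 = sp.symbols('x m n0 b p1 p2')
y = m*x + n0                                   # parametrize the edge f
v2d = sp.Matrix([p1 + b*y, p2 - b*x])          # v = a + b*(y, -x) restricted to f
t = sp.Matrix([1, m]) / sp.sqrt(1 + m**2)      # unit tangent of f
vt = sp.simplify(v2d.dot(t))
assert sp.diff(vt, x) == 0                     # constant along f

# (ii) 3D cross-product formula
a1, a2, a3, n1, n2, n3, x1, x2, x3 = sp.symbols('a1 a2 a3 n1 n2 n3 x1 x2 x3')
A = sp.Matrix([[0, a1, a2], [-a1, 0, a3], [-a2, -a3, 0]])
X = sp.Matrix([x1, x2, x3]); N = sp.Matrix([n1, n2, n3])
eta = -a1*n3 + a2*n2 - a3*n1
nx = N.dot(X)
claimed = sp.Matrix([eta*x1 + a3*nx, eta*x2 - a2*nx, eta*x3 + a1*nx])
print(sp.expand((A*X).cross(N) - claimed).T)   # zero row vector
```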
For 3D, we remark that for any given $T \in \mathcal{T}_h$ with $f$ a face of $T$, the space $\{\bm{q}(\bm{x}) \times \bm{n}_f : \bm{q} \in \bm{RM}(T), \,\, \bm{x} \in f\}$ has dimension three; following \cite{xie2008uniformly}, we denote this space by ${\rm RT}_0(f)$. We can now prove Korn's inequality for functions in $(H^1(\Omega,\mathcal{T}_h))^d$. Since the spaces of projections onto the faces of each $T$ differ between {\rm 2D} and {\rm{3D}}, the jump terms are stated accordingly. We first define $\pi_1$ as the $L^2$ orthogonal projection from $L^2(f)$ onto $P_1(f)$, and $\pi_{RM^\perp(f)}$ as the $L^2$ orthogonal projection from $L^2(f)$ onto $P_0(f)$ for $d = 2$ and onto ${\rm{RT}}_0(f)$ for $d = 3$.
To state the discrete Korn's inequality, we first introduce, on each $T \in \mathcal{T}_h$, a projection operator $\Pi_T$ from $(H^1(T))^d$ onto $\bm{RM}(T)$, defined by the conditions:
\begin{equation}
\int_{T}(\bm v - \Pi_{T}\bm v)\,dx = 0, \quad \forall \bm v \in (H^1(T))^d,
\end{equation}
\begin{equation}
\int_{T}\nabla \times(\bm v-\Pi_{T}\bm v)\, dx = 0, \quad \forall \bm v \in (H^1(T))^d.
\end{equation}
Hence, by the definition of $\Pi_T$, we have (see (3.3) in \cite{brenner2004korn})
\begin{equation} \label{3.3}
| \bm v-\Pi_{T}\bm v |_{H^1(T)} \lesssim
\| \mathcal{D} (\bm v - \Pi_T \bm v) \|_{0,T} = \|\mathcal{D} (\bm v) \|_{0,T}, \quad \forall \bm v \in (H^1(T))^d.
\end{equation}
\begin{equation} \label{3.4}
\|\bm v-\Pi_{T}\bm v\|_{0,T}\lesssim (\hbox{diam}\, T)\,|\bm v-\Pi_{T}\bm v|_{H^1(T)}, \quad \forall \bm v\in (H^1(T))^d.
\end{equation}
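The equality in \eqref{3.3} uses that the symmetric gradient $\mathcal{D}$ annihilates rigid motions, so that $\mathcal{D}(\Pi_T \bm v) = 0$. A quick symbolic check of this fact in the 3D case (illustrative only, assuming Python with sympy):

```python
# Check that D(v) = 0 for every rigid motion v = a + A*x with A skew-symmetric.
import sympy as sp

x, y, z, d, e, f = sp.symbols('x y z d e f')
a = sp.Matrix(sp.symbols('a1 a2 a3'))
A = sp.Matrix([[0, d, e], [-d, 0, f], [-e, -f, 0]])   # skew-symmetric matrix

v = a + A * sp.Matrix([x, y, z])        # a generic rigid motion
G = v.jacobian([x, y, z])               # gradient of v (equals A here)
D = (G + G.T) / 2                       # symmetric gradient D(v)
print(D)  # zero 3x3 matrix
```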
Using this local projection $\Pi_T$, we define $\Pi: (H^1(\Omega,\mathcal{T}_h))^d\longrightarrow \boldsymbol{V}_h$ by
\begin{equation}
(\Pi \bm u) = \Pi_T\bm u_T, \quad \forall T \in \mathcal{T}_h.
\end{equation}
Next, following \cite{brenner2004korn}, we also introduce a seminorm $\Phi$ on $(H^1(\Omega;\mathcal{T}_h))^d$ satisfying the following assumptions:
\begin{itemize}
\item[(C1)] $|\Phi(\bm w)| \lesssim \|\bm w\|_{1}, \quad \forall \bm{w} \in (H^1(\Omega))^d,$
\item[(C2)] for $\bm{m} \in \bm{RM}(\Omega)$, $\Phi(\bm{m}) = 0$ if and only if $\bm{m}$ is constant;
\item[(C3)] $(\Phi(\bm v - E \bm v))^2 \lesssim \sum_{f \in \mathcal{E}_h^o}(\hbox{diam}(f))^{d-2}\sum_{p\in \mathcal{V}(f)}|\jump{\bm v}_f(p)|^2, \, \forall \bm v \in \bm{V}_h$, where $\mathcal{V}(f)$ is the set of the vertices of $f$.
\end{itemize}
The first estimate below is well known for piecewise $H^1$ functions; see \cite{brenner2004korn}:
\begin{Lemma}
Let $\Phi$ be a seminorm on $H^1(\Omega;\mathcal{T}_h)$ satisfying the assumptions (C1), (C2), and (C3). Then, the following estimate holds:
\begin{equation}\label{2.9}
|\bm v|^2_{H^1(\Omega,\mathcal{T}_h)} \lesssim \|\mathcal{D}_{\mathcal {T}}(\bm v)\|_0^2+
(\Phi(\bm v))^2+\sum_{f \in \mathcal{E}^o_h}(\hbox{diam}~ f)^{d-2}\sum_{p\in \mathcal{V}(f)}|\jump{\bm v}_f(p)|^2
\end{equation}
for all $\bm v \in \bm{V}_h$, where $\mathcal{D}_{\mathcal {T}}(\bm v)|_T=\mathcal{D}(\bm v|_T)$
for all $T \in \mathcal T_h$.
\end{Lemma}
Now, in order to refine the discrete Korn's inequality, we impose an additional assumption on $\Phi$:
\begin{itemize}
\item[(C4)] $|\Phi(\bm u-\Pi\bm u)| \lesssim \|\mathcal D_\mathcal{T}(\bm u)\|_0, \quad \forall \bm u \in (H^1(\Omega,\mathcal{T}_h))^d$.
\end{itemize}
\begin{remark}
For $\bm u\in (H^1(\Omega))^d$, Assumption (C1) implies Assumption (C4), but for $\bm u \in (H^1(\Omega,\mathcal{T}_h))^d$, Assumption (C4) cannot be derived from Assumption (C1).
\end{remark}
We can now state the main result of this paper:
\begin{Theorem}\label{main}
Let $\Phi:(H^1(\Omega, \mathcal{T}_h))^d\longrightarrow \Reals{}$ be a seminorm satisfying Assumptions (C1), (C2), (C3), and (C4). Then the following estimates hold for 2D and 3D, respectively. For 2D, we have
\begin{eqnarray*}
&& |\bm u|^2_{H^1(\Omega,\mathcal{T}_h)} \lesssim \|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0^2 + (\Phi(\bm u))^2 \\
&&+ \sum_{f \in \mathcal{E}^o_h}(\hbox{diam}(f))^{-1} \Big(\|\left [\pi_1(\jump{\bm u}_f \cdot \bm n_f) \right ] \bm{n}_f \|_{0,f}^2 +
\| \left [\pi_{RM^\perp(f)}(\jump{\bm u}_f \cdot \bm t_f) \right ] \bm t_f \|_{0,f}^2\Big).
\end{eqnarray*}
For 3D, we have
\begin{eqnarray*}
&& |\bm u|^2_{H^1(\Omega,\mathcal{T}_h)} \lesssim \|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0^2 +
(\Phi(\bm u))^2 \\
&&+ \sum_{f \in \mathcal{E}^o_h}(\hbox{diam}(f))^{-1}\Big(\|\left [ \pi_1(\jump{\bm u}_f\cdot \bm n_f) \right ] \bm{n}_f \|_{0,f}^2+\| \left [\pi_{RM^\perp(f)}(\jump{\bm u}_f \times \bm n_f) \right ] \times \bm{n}_f\|_{0,f}^2\Big).
\end{eqnarray*}
\end{Theorem}
\begin{proof}
Let $\bm u\in (H^1(\Omega,\mathcal{T}_h))^d$. From \eqref{2.9} and \eqref{3.3}, we have
\begin{equation*}
\begin{split}
|\bm u|^2_{H^1(\Omega,\mathcal{T}_h)}&\lesssim |\bm u-\Pi \bm u|^2_{H^1(\Omega,\mathcal{T}_h)}+ |\Pi \bm u|^2_{H^1(\Omega,\mathcal{T}_h)}\\
&\lesssim
\|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0^2+
(\Phi(\Pi\bm u))^2+\sum_{f \in \mathcal{E}^o_h}(\hbox{diam}(f))^{d-2}\sum_{p\in \mathcal{V}(f)}|\jump{\Pi \bm u}_f(p)|^2.
\end{split}
\end{equation*}
Using condition (C4), we find
\begin{equation}\label{3.8}
\Phi(\Pi \bm u)\leq \Phi(\bm u-\Pi\bm u)+\Phi(\bm u)\lesssim \|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0+\Phi(\bm u).
\end{equation}
Let $f \in \mathcal{E}^o_h$ be arbitrary and let $p \in \mathcal V(f)$. By an inverse estimate, we have
\begin{equation}\label{3.9}
|\jump{\Pi\bm u}_f(p)|^2\lesssim (\hbox{diam}(f))^{1-d} \|
\jump{\Pi\bm u}_f\|_{0,f}^2.
\end{equation}
We first prove the $d = 2$ case. Let $f = T^{+} \cap T^{-}$
and choose $\bm{n}_f = \bm n^{-}$ as the unit normal vector and
$\bm{t}_f = \bm t^{-}$ as the unit tangential vector of $f$.
Then we have that
\begin{equation}
\begin{split}
\|\jump{\Pi\bm u}_f\|_{0,f}^2 &= \int_f |(\jump{\Pi\bm u}_f \cdot \bm{n}_f )\bm{n}_f |^2ds + \int_f|(\jump{\Pi\bm u}_f \cdot \bm{t}_f)\bm{t}_f |^2 ds
\\
&= \int_f (\jump{\Pi\bm u}_f \cdot \bm{n}_f)^2
+(\jump{\Pi\bm u}_f \cdot \bm{t}_f)^2 ds.
\end{split}
\end{equation}
Using Theorem \ref{kornpre}, we see that
\begin{equation}
\jump{\Pi\bm u}_f \cdot \bm{t}_f = c
\end{equation}
for some constant $c$. Therefore, we have that
\begin{equation}\label{3.10}
\begin{split}
\|\jump{\Pi\bm u}_f\|_{0,f}^2&=\int_f \Big(\pi_1(\jump{\Pi\bm u}_f \cdot \bm{n}_f )\Big)^2 \, ds + \int_f \Big(\pi_{RM^\perp(f)}(\jump{\Pi\bm u}_f\cdot \bm{t}_f)\Big)^2\, ds \\
&\leq \int_f \Big(\pi_1(\jump{\Pi \bm u-\bm u}_f\cdot \bm{n}_f )\Big)^2 \, ds +\int_f \Big(\pi_{RM^\perp(f)}(\jump{\Pi \bm u-\bm u}_f \cdot \bm{t}_f )\Big)^2 \, ds\\
&+ \int_f \Big(\pi_1(\jump{\bm u}_f \cdot \bm{n}_f )\Big)^2 \, ds +\int_f \Big(\pi_{RM^\perp(f)}(\jump{\bm u}_f \cdot \bm{t}_f )\Big)^2 \, ds.
\end{split}
\end{equation}
Let $\mathcal T_f = T^{+}\cup T^{-}$. It then follows from \eqref{3.3}, \eqref{3.4} and the trace theorem that
\begin{equation}\label{3.11}
\begin{aligned}
&\int_f \Big(\pi_1(\jump{\Pi\bm u-\bm u}_f \cdot \bm{n}_f )\Big)^2 \, ds +\int_f \Big(\pi_{RM^\perp(f)}(\jump{\Pi\bm u-\bm u}_f \cdot \bm{t}_f )\Big)^2 ds \\
\leq&\|\jump{\Pi\bm u-\bm u}\|^2_{0,f} \\
\lesssim & \sum_{T \in \mathcal T_f}
\Big((\hbox{diam}(T))|\bm u_T - \Pi_T\bm u_T|_{H^1(T)}^2 + (\hbox{diam}(T))^{-1}\|\bm u_T-\Pi_T\bm u_T\|_{0,T}^2\Big) \\
\lesssim &\sum_{T \in \mathcal{T}_f} (\hbox{diam}(T))\|\mathcal D(\bm u_T)\|_{0,T}^2.
\end{aligned}
\end{equation}
Combining \eqref{3.9}, \eqref{3.10} and \eqref{3.11}, and noting that the diameter of $T$ is comparable to the diameter of $f$, we find
\begin{equation}\label{3.12}
\begin{split}
&\sum_{f \in \mathcal{E}^o_h}(\hbox{diam}(f))^{d-2} \sum_{p \in \mathcal{V}(f)}|\jump{\Pi\bm u}_f(p)|^2\\
&\lesssim \|\mathcal D_{\mathcal T}(\bm u)\|_{0}^2+\sum_{f \in \mathcal{E}^o_h} (\hbox{diam}(f))^{-1}\Big(\|\pi_1(\jump{\bm u}_f \cdot \bm{n}_f)\|_{0,f}^2+\|\pi_{RM^\perp(f)}(\jump{\bm u}_f \cdot \bm{t}_f)\|_{0,f}^2\Big).
\end{split}
\end{equation}
For $d=3$, we have that
\begin{equation}
\jump{\Pi \bm u}_f = -(\jump{\Pi\bm u}_f \times \bm{n}_f ) \times \bm{n}_f + (\jump{\Pi\bm u}_f \cdot \bm{n}_f ) \bm{n}_f.
\end{equation}
Furthermore, similar to the proof of Theorem \ref{kornpre}, we can show that
\begin{equation}
\jump{\Pi\bm u}_f \times \bm{n}_f = \bm{a} \times \bm{n}_f + c (\bm{x} - \bm{c}_f), \quad \forall \bm{x} \in f,
\end{equation}
where $\bm{c}_f$ is the barycenter of $f$. Using the same argument as in the 2D case, we can establish the 3D estimate in Theorem \ref{main}. This completes the proof.
\end{proof}
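The decomposition used at the start of the 3D argument is the standard identity $\bm w = -(\bm w \times \bm n) \times \bm n + (\bm w \cdot \bm n)\, \bm n$ for a unit vector $\bm n$; a symbolic sketch (illustrative only, assuming Python with sympy):

```python
# Verify w = -(w x n) x n + (w.n) n for any w and any unit vector n.
import sympy as sp

w = sp.Matrix(sp.symbols('w1 w2 w3'))
n1, n2 = sp.symbols('n1 n2', real=True)
n3 = sp.sqrt(1 - n1**2 - n2**2)     # parametrize |n| = 1
n = sp.Matrix([n1, n2, n3])

recon = -(w.cross(n)).cross(n) + w.dot(n) * n
print(sp.simplify(recon - w).T)     # zero row vector
```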
\begin{definition}
The following subset of rigid body motions will be used:
\begin{equation}
\bm {RM}^{\partial}(\Omega) = \left \{\bm m \in \bm {RM}(\Omega): ~ \|\bm m\|_{L^2(\partial \Omega)}=1, \ \int_{\partial \Omega}\bm m \, ds=0 \right \}.
\end{equation}
\end{definition}
We shall set $\Phi(\bm u)$ as follows:
\begin{equation}
\Phi(\bm u) : = \sup \limits_{\bm m \in \bm {RM}^{\partial} (\Omega)} \int_{\partial \Omega}\bm u \cdot \bm m \, ds.
\end{equation}
Under this setting, we can show that $\Phi$ satisfies Assumptions (C1), (C2), (C3), and (C4). Additionally, we have
\begin{equation}
(\Phi(\bm u))^2 \leq \sum_{f \subset \partial \Omega}\Big(\|\pi_1(\jump{\bm u}_f \cdot \bm n_f)\|_{0,f}^2+\|\pi_{RM^\perp(f)}(\jump{\bm u}_f \cdot \bm t_f)\|_{0,f}^2\Big)~\hbox{for}~ d=2,
\end{equation}
and
\begin{equation}
(\Phi(\bm u))^2 \leq \sum_{f \subset \partial \Omega}\Big(\|\pi_1(\jump{\bm u}_f\cdot \bm n_f)\|_{0,f}^2+\|\pi_{RM^\perp(f)}(\jump{\bm u}_f \times \bm n_f)\|_{0,f}^2\Big)~\hbox{for}~ d=3.
\end{equation}
Let $P_{1/0}(f) = P_1(f)/P_0(f)$ and $\pi_{1/0}$ be the orthogonal projection from $L^2(f)$ onto $P_{1/0}(f)$.
\begin{Corollary}\label{DisKorn2d}
Let $\bm u\in (H^1(\Omega, \mathcal{T}_h))^2$. If $\bm u$ satisfies the following continuity conditions across each edge $f\in \mathcal{E}_h^o$:
\begin{enumerate}
\item $\int_{f} \jump{\bm u}_f \cdot \bm n_f \, p \, ds = 0, \quad \forall p \in P_1(f)$;
\item $\int_{f} \jump{\bm u}_f \cdot \bm t_f ds = 0$;
\end{enumerate}
then the following estimate holds:
\begin{equation}
|\bm u|_{H^1(\Omega,\mathcal{T}_h)}\lesssim \|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0+\sup\limits_{\bm m\in \bm {RM}^\partial(\Omega)}\int_{\partial \Omega}\bm u\cdot \bm mds.
\end{equation}
\end{Corollary}
\begin{proof}
This is immediate from Theorem \ref{main}.
\end{proof}
\begin{Corollary}\label{DisKorn3d}
Let $\bm u \in (H^1(\Omega, \mathcal{T}_h))^3$. If $\bm u$ satisfies the following continuity conditions across each face $f \in \mathcal{E}_h^o$:
\begin{enumerate}
\item $\int_{f} \jump{\bm u}_f \cdot \bm n_f \, p \, ds = 0, \quad \forall p \in P_1(f)$;
\item $\int_{f} (\jump{\bm u}_f \times \bm n_f )
\cdot \bm {p}\, ds = 0, \quad \forall \bm{p} \in
{\rm RT}_0(f)$;
\end{enumerate}
then the following estimate holds:
\begin{equation}
|\bm u|_{H^1(\Omega,\mathcal{T}_h)}\lesssim \|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0+\sup\limits_{\bm m\in \bm {RM}^\partial(\Omega)}\int_{\partial \Omega}\bm u\cdot \bm m\, ds.
\end{equation}
\end{Corollary}
\begin{proof}
This is immediate from Theorem \ref{main}.
\end{proof}
\section{On the sharpness of Korn's inequality}\label{sharp}
In this section, we establish that the proposed Korn's inequality is sharp, starting with the 2D case and then moving on to the 3D case. More precisely, the continuity conditions in Corollary \ref{DisKorn2d} and Corollary \ref{DisKorn3d} are minimal for the classical Korn's inequality to hold on the piecewise $H^1$ space. Put another way, if any one of the conditions is violated, the classical Korn's inequality fails, but it can be restored by adding appropriate jump terms.
To prove sharpness, we construct examples showing that if any one of the conditions is dropped, then the inequality fails. These examples are constructed on special 2D and 3D domains, as shown in Figure \ref{ex}.
\begin{figure}[h]
\includegraphics[width=12cm, height=5cm]{image/domain.png}
\caption{The 2D and 3D domains}\label{ex}
\end{figure}
For simplicity, we discuss the 2D and 3D cases separately. For 2D, we consider the special partition $\Omega=\mathcal T_h = T_1 \cup T_2$, where $T_1$ and $T_2$ are the unit squares with vertices $(-1,0), (0,0), (0,1), (-1,1)$ and $(0,0), (1,0), (1,1), (0,1)$, respectively. We denote $f = T_1 \cap T_2$ and
\begin{subeqnarray*}
E_1 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^2: \pi_{1/0}(\jump{\bm u}_f \cdot \bm n_f)=0~\hbox{and}~\pi_0(\jump{\bm u}_f \cdot \bm t_f)=0\}, \\
E_2 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^2: \pi_0(\jump{\bm u}_f \cdot \bm n_f)=0~\hbox{and}~\pi_0(\jump{\bm u}_f \cdot \bm t_f)=0\}, \\
E_3 &=& \{\bm u \in (H^1(\Omega, \mathcal{T}_h))^2: \pi_0(\jump{\bm u}_f \cdot \bm n_f)=0~\hbox{and}~\pi_{1/0}(\jump{\bm u}_f \cdot \bm n_f)=0\}.
\end{subeqnarray*}
We first note that a simple calculation shows that
\begin{equation}\label{RM0}
\bm {RM}^\partial(\Omega) \subset \hbox{span}\{(1-2y, 2x)^t\}.
\end{equation}
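The inclusion \eqref{RM0} can be reproduced directly: writing a general rigid motion as $\bm m = (p - qy,\, r + qx)^t$ and imposing $\int_{\partial \Omega} \bm m \, ds = 0$ forces $p = q/2$ and $r = 0$. An illustrative symbolic computation (assuming Python with sympy; not part of the formal development):

```python
# Solve int_{dOmega} m ds = 0 for a general 2D rigid motion on Omega = [-1,1]x[0,1].
import sympy as sp

x, y, p, q, r = sp.symbols('x y p q r')
m = sp.Matrix([p - q*y, r + q*x])          # m = (p, r)^t + q*(-y, x)^t

def edge(sub, var, lo, hi):                # componentwise line integral over one edge
    return m.subs(sub).applyfunc(lambda g: sp.integrate(g, (var, lo, hi)))

I = (edge({y: 0}, x, -1, 1) + edge({y: 1}, x, -1, 1)    # bottom, top
     + edge({x: -1}, y, 0, 1) + edge({x: 1}, y, 0, 1))  # left, right

sol = sp.solve(list(I), [p, r])
print(sol)  # {p: q/2, r: 0}, i.e. m is a multiple of (1 - 2y, 2x)^t
```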
Furthermore, by the definitions of $\pi_0$ and $\pi_{1/0}$, we have
\begin{eqnarray}
\pi_0 (u) &=& \frac{1}{|f|} \int_f u ds \quad \mbox{ and } \quad
\pi_{1/0}(u) = \frac{1}{|f|} \int_f u s ds.
\end{eqnarray}
\begin{Theorem}
There exists $\bm u\in E_k$ for each $k=1,2,3$ such that
\begin{equation}\label{notkorn1}
|\bm u|^2_{H^1(\Omega,\mathcal{T}_h)} \neq 0, \quad \mbox{ but } \quad
\|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0^2 + \left ( \sup\limits_{\bm m\in \bm {RM}^\partial (\Omega)}\int_{\partial \Omega}\bm u\cdot \bm mds \right )^2=0.
\end{equation}
\end{Theorem}
\begin{proof}
Let us consider $\bm u$ such that $\bm u_i = \bm u|_{T_i}$ for $i = 1,2$ given as follows:
\begin{equation}\label{form}
\bm u_i = \left ( \begin{array}{c} a_i \\ b_i \end{array} \right ) + c_i\left ( \begin{array}{c} -y \\ x \end{array} \right ) \in \bm {RM}(T_i), \quad\forall i=1,2.
\end{equation}
Here, the six coefficients $(a_i,b_i,c_i)_{i=1,2}$ will be chosen so that Korn's inequality fails. We observe that
$\|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0 = 0$ by construction, for any choice of coefficients. We first note from \eqref{RM0}, i.e., $\bm {RM}^\partial(\Omega) \subset\hbox{span}\{(1-2y, 2x)^t\}$, that with $\bm m = c(1 - 2y, 2x)^t$ and $c \neq 0$,
\begin{equation}\label{cal}
\int_{\partial \Omega}\bm u\cdot \bm m \, ds = c\left ( 4 (b_2 - b_1) + \frac{9}{2}(c_1 + c_2) \right ).
\end{equation}
We also observe that, with $\overline{g} = g_2 - g_1$ for each coefficient $g$,
\begin{eqnarray*}
\jump{\bm{u}}_f \cdot \bm{n}_f &=& \overline{a} - \overline{c}y \\
\jump{\bm{u}}_f \cdot \bm{t}_f &=& \overline{b} + \overline{c} x.
\end{eqnarray*}
A simple computation leads that
\begin{eqnarray*}
\int_f \jump{\bm{u}}_f \cdot \bm{n}_f ds &=& \int_0^1 \overline{a} - \overline{c} y \, dy = \overline{a} - \frac{1}{2} \overline{c} \\
\int_f (\jump{\bm{u}}_f \cdot \bm{n}_f) y ds &=& \int_0^1 \overline{a}y - \overline{c} y^2 \, dy = \frac{1}{2} \overline{a} - \frac{1}{3} \overline{c} \\
\int_f \jump{\bm{u}}_f \cdot \bm{t}_f \, ds &=& \int_0^1 \overline{b} \, dy = \overline{b}.
\end{eqnarray*}
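The boundary integral \eqref{cal} and the face integrals above can be reproduced symbolically; the following sketch (illustrative only, assuming Python with sympy) checks \eqref{cal} with the normalization $c = 1$:

```python
# Verify (cal): int_{dOmega} u.m ds = 4(b2-b1) + (9/2)(c1+c2) for m = (1-2y, 2x)^t.
import sympy as sp

x, y = sp.symbols('x y')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')

u1 = sp.Matrix([a1 - c1*y, b1 + c1*x])    # u on T1 = [-1,0] x [0,1]
u2 = sp.Matrix([a2 - c2*y, b2 + c2*x])    # u on T2 = [ 0,1] x [0,1]
m = sp.Matrix([1 - 2*y, 2*x])             # generator of RM^partial(Omega)

def edge(u, sub, var, lo, hi):            # line integral of u.m over one boundary edge
    return sp.integrate(u.dot(m).subs(sub), (var, lo, hi))

I = (edge(u1, {y: 0}, x, -1, 0) + edge(u1, {y: 1}, x, -1, 0)   # bottom, top of T1
     + edge(u1, {x: -1}, y, 0, 1)                              # left side
     + edge(u2, {y: 0}, x, 0, 1) + edge(u2, {y: 1}, x, 0, 1)   # bottom, top of T2
     + edge(u2, {x: 1}, y, 0, 1))                              # right side

print(sp.expand(I - (4*(b2 - b1) + sp.Rational(9, 2)*(c1 + c2))))  # 0
```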
Now, we look for $\bm{u}$ of the form \eqref{form} that belongs to $E_k$ and satisfies \eqref{notkorn1}, for each $k=1,2,3$. We discuss this case by case.
\begin{itemize}[leftmargin=*]
\item For $k=1$, we choose $\overline{a} = \frac{2}{3} \overline{c} \neq 0$ and $\overline{b} = 0$, so that $\bm u \in E_1$. On the other hand, for \eqref{cal} to vanish, we must choose $c_1 + c_2 = 0$, i.e., $c_1 = -c_2 \neq 0$. Then, $\bm u$ satisfies \eqref{notkorn1}.
\item For $k=2$, we choose $\overline{a} = \frac{1}{2} \overline{c} \neq 0$ and $\overline{b} = 0$, so that $\bm u \in E_2$. Again, we choose $c_1 + c_2 = 0$ for \eqref{cal}; namely, $c_1 = c/2$ and $c_2 = -c/2$ for an arbitrary $c \neq 0$. Then, $\bm u$ satisfies \eqref{notkorn1}.
\item For $k=3$, we choose $\overline{a} = \overline{c} = 0$ and $\overline{b} \neq 0$, so that $\bm u \in E_3$ and $\jump{\bm u}_f \cdot \bm t_f = \overline{b} \neq 0$. Since $\overline{c} = 0$ forces $c_1 = c_2$, we choose $c_1 = c_2 = -\frac{4}{9}(b_2 - b_1) \neq 0$ so that \eqref{cal} vanishes. Then, $\bm u$ satisfies \eqref{notkorn1}.
\end{itemize}
This completes the proof.
\end{proof}
We now turn to the three-dimensional case, for which we consider the two cubes shown in Figure \ref{ex}(b). Namely, $\Omega=\mathcal T_h = T_1 \cup T_2$, where $T_1$ has vertices $(1, -1, 0), (1, 0, 0)$, $(0, 0, 0), (0, -1, 0), (1, -1, 1), (1, 0, 1), (0, 0, 1), (0, -1, 1)$
and $T_2$ has vertices $(1, 0, 0), (1, 1, 0), (0, 1, 0)$, $(0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 1), (0, 0, 1)$. We denote by $f = T_1\cap T_2$ their common face and expand the two conditions in Corollary \ref{DisKorn3d} into the following six conditions:
\begin{enumerate}
\item [(A1)] $\int_{f} \jump{\bm u}_f \cdot \bm n_f ds = 0$;
\item [(A2)] $\int_{f} \jump{\bm u}_f \cdot \bm n_f x ds = 0$;
\item [(A3)] $\int_{f} \jump{\bm u}_f \cdot \bm n_f z ds = 0$;
\item [(A4)] $\int_{f} (\jump{\bm u}_f \times \bm n_f) \cdot (0,0,1)^T ds = 0$;
\item [(A5)] $\int_{f} (\jump{\bm u}_f \times \bm n_f) \cdot (1,0,0)^T ds = 0$;
\item [(A6)] $\int_{f} (\jump{\bm u}_f \times \bm n_f) \cdot (x,0,z)^T ds = 0$.
\end{enumerate}
We now list a total of six subsets of $(H^1(\Omega;\mathcal{T}_h))^3$ as follows:
\begin{subeqnarray*}
F_1 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A2), (A3), (A4), (A5), and (A6)} \} \\
F_2 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A1), (A3), (A4), (A5), and (A6)} \} \\
F_3 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A1), (A2), (A4), (A5), and (A6)} \} \\
F_4 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A1), (A2), (A3), (A5), and (A6)} \} \\
F_5 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A1), (A2), (A3), (A4), and (A6)} \} \\
F_6 &=& \{\bm u\in (H^1(\Omega, \mathcal{T}_h))^3: \bm u ~~ \hbox{satisfies (A1), (A2), (A3), (A4), and (A5)} \}.
\end{subeqnarray*}
We first determine, for $\Omega = T_1\cup T_2$, the space $\bm {RM}^\partial(\Omega)$. A tedious but simple calculation shows that
\begin{equation}\label{RM1}
\bm{RM}^\partial(\Omega) \subset \hbox{span}\{\bm m_1, \bm m_2, \bm m_3\},
\end{equation}
where $\bm m_i$ with $i=1,2,3$ are given as follows:
\begin{eqnarray*}
\bm m_1 = \left ( \begin{array}{c} -1 + 2z \\ 0 \\ 1 - 2x \end{array} \right ), \,\, \bm m_2 = \left ( \begin{array}{c} 2y \\ 1 - 2x \\ 0 \end{array} \right ), \,\, \mbox{ and } \,\, \bm m_3 = \left (\begin{array}{c} y \\ -x + z \\ -y \end{array} \right ).
\end{eqnarray*}
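That each $\bm m_i$ has vanishing integral over $\partial \Omega$, as required for membership in $\bm{RM}^\partial(\Omega)$ up to normalization, can be checked symbolically (illustrative only, assuming Python with sympy):

```python
# Check that m1, m2, m3 have vanishing integral over the boundary of
# Omega = [0,1] x [-1,1] x [0,1] (the two-cube domain).
import sympy as sp

x, y, z = sp.symbols('x y z')
m1 = sp.Matrix([-1 + 2*z, 0, 1 - 2*x])
m2 = sp.Matrix([2*y, 1 - 2*x, 0])
m3 = sp.Matrix([y, -x + z, -y])

def face(m, sub, u, ul, uh, v, vl, vh):   # componentwise integral over one face
    return m.subs(sub).applyfunc(lambda g: sp.integrate(g, (u, ul, uh), (v, vl, vh)))

def surf(m):                              # sum over the six faces of the box
    return (face(m, {x: 0}, y, -1, 1, z, 0, 1) + face(m, {x: 1}, y, -1, 1, z, 0, 1)
          + face(m, {y: -1}, x, 0, 1, z, 0, 1) + face(m, {y: 1}, x, 0, 1, z, 0, 1)
          + face(m, {z: 0}, x, 0, 1, y, -1, 1) + face(m, {z: 1}, x, 0, 1, y, -1, 1))

print([surf(m).T for m in (m1, m2, m3)])  # three zero vectors
```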
We shall now state and prove the main result in this section.
\begin{Theorem}
There exists $\bm u \in F_k$ for each $k=1,\dots,6$ such that
\begin{equation}\label{notkorn2}
|\bm u|^2_{H^1(\Omega,\mathcal{T}_h)} \neq 0, \quad \mbox{ but } \quad
\|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0^2 + \left ( \sup\limits_{\bm m\in \bm {RM}^\partial (\Omega)}\int_{\partial \Omega}\bm u\cdot \bm mds \right )^2=0.
\end{equation}
\end{Theorem}
\begin{proof}
Let us consider $\bm u$ such that $\bm u_i = \bm u|_{T_i}$ for $i=1,2$ given as follows:
\begin{equation}\label{form3d}
\bm u_i = \left ( \begin{array}{c} a_i \\ b_i \\ c_i \end{array} \right ) + \left ( \begin{array}{ccc} 0 & d_i & e_i \\ -d_i & 0 & f_i \\ -e_i & -f_i & 0 \end{array} \right ) \left ( \begin{array}{c} x \\ y \\ z \end{array} \right ) \in \bm {RM}(T_i), \quad\forall i=1,2.
\end{equation}
Here, the twelve coefficients $(a_i,b_i,c_i,d_i,e_i,f_i)_{i=1,2}$ will be chosen so that Korn's inequality fails under the condition that $\bm u \in F_k$, for each $k$. We observe that
$\|\mathcal{D}_{\mathcal {T}}(\bm u)\|_0 = 0$ by construction, for any choice of coefficients. We first investigate what constraints arise from the conditions:
\begin{equation}
\int_{\partial \Omega} \bm u \cdot \bm m_i \, ds = 0, \quad \forall i=1,2,3,
\end{equation}
where $\bm {RM}^\partial(\Omega) \subset\hbox{span}\{ \bm m_1, \bm m_2, \bm m_3\}$. We observe that
\begin{eqnarray*}
\bm u \cdot \bm m_1|_{T_i} &=& (a_i + d_i y + e_i z) ( -1 + 2z ) + (c_i - e_i x - f_i y) (1 - 2x); \\
\bm u \cdot \bm m_2|_{T_i} &=& (a_i + d_i y + e_i z) ( 2y ) + (b_i - d_i x + f_i z) ( 1 - 2x ); \\
\bm u \cdot \bm m_3|_{T_i} &=& (a_i + d_i y + e_i z) ( y ) +
(b_i - d_i x + f_i z) ( -x + z ) + (c_i - e_i x - f_i y) (- y).
\end{eqnarray*}
A simple but tedious computation leads to
\begin{subeqnarray}\label{rot}
\int_{\partial \Omega} \bm u \cdot \bm m_1 \, ds &=& 3 (e_1 + e_2) ;\\
\int_{\partial \Omega} \bm u \cdot \bm m_2 \, ds &=& 6(a_2 - a_1) + 3(e_2 - e_1) + \frac{37}{6}(d_1 + d_2) \slabel{eq2};\\
\int_{\partial \Omega} \bm u \cdot \bm m_3 \, ds &=& 3(a_2 - a_1) - 3(c_2 - c_1) + 3 (e_2 - e_1) \slabel{eq3} \\
&& + \frac{37}{12} \left [ (d_2 + d_1) + (f_2 + f_1) \right ]. \nonumber
\end{subeqnarray}
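The computation \eqref{rot} is easily reproduced with a computer algebra system; the following sketch (illustrative only, assuming Python with sympy, outside the paper's formal development) integrates $\bm u \cdot \bm m_i$ over the ten boundary faces of the two cubes:

```python
# Verify (rot): boundary integrals of u.m_i for the piecewise rigid motion u on
# T1 = [0,1] x [-1,0] x [0,1] and T2 = [0,1] x [0,1] x [0,1].
import sympy as sp

x, y, z = sp.symbols('x y z')
a1, b1, c1, d1, e1, f1 = sp.symbols('a1 b1 c1 d1 e1 f1')
a2, b2, c2, d2, e2, f2 = sp.symbols('a2 b2 c2 d2 e2 f2')

def rigid(a, b, c, d, e, f):
    return sp.Matrix([a + d*y + e*z, b - d*x + f*z, c - e*x - f*y])

u1, u2 = rigid(a1, b1, c1, d1, e1, f1), rigid(a2, b2, c2, d2, e2, f2)
m1 = sp.Matrix([-1 + 2*z, 0, 1 - 2*x])
m2 = sp.Matrix([2*y, 1 - 2*x, 0])
m3 = sp.Matrix([y, -x + z, -y])

def bdry(m):          # integral of u.m over the boundary faces (internal face y=0 excluded)
    total = 0
    for u, ylo, yhi, yext in ((u1, -1, 0, -1), (u2, 0, 1, 1)):
        g = u.dot(m)
        total += sp.integrate(g.subs(y, yext), (x, 0, 1), (z, 0, 1))          # outer y-face
        total += sp.integrate(g.subs(x, 0) + g.subs(x, 1), (y, ylo, yhi), (z, 0, 1))
        total += sp.integrate(g.subs(z, 0) + g.subs(z, 1), (x, 0, 1), (y, ylo, yhi))
    return sp.expand(total)

print(bdry(m1) - 3*(e1 + e2))                                                  # 0
print(bdry(m2) - (6*(a2 - a1) + 3*(e2 - e1) + sp.Rational(37, 6)*(d1 + d2)))   # 0
print(sp.expand(bdry(m3) - (3*(a2 - a1) - 3*(c2 - c1) + 3*(e2 - e1)
                            + sp.Rational(37, 12)*(d1 + d2 + f1 + f2))))       # 0
```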
We also observe that, with $\overline{g} = g_2 - g_1$ for each coefficient $g$,
\begin{eqnarray*}
\jump{\bm{u}}_f \cdot \bm{n}_f &=& \overline{b} - \overline{d} x + \overline{f}z; \\
\jump{\bm{u}}_f \times \bm{n}_f &=& (- (\overline{c} - \overline{e} x - \overline{f} y), 0, \overline{a} + \overline{d} y + \overline{e} z)^t|_f = (- (\overline{c} - \overline{e} x), 0, \overline{a} + \overline{e} z)^t.
\end{eqnarray*}
A simple computation leads that
\begin{eqnarray*}
\int_f \jump{\bm{u}}_f \cdot \bm{n}_f ds &=& \int_0^1 \int_0^1 \overline{b} - \overline{d} x + \overline{f} z \, dxdz = \overline{b} - \frac{1}{2} \overline{d} + \frac{1}{2} \overline{f}; \\
\int_f \jump{\bm{u}}_f \cdot \bm{n}_f x ds &=& \int_0^1 \int_0^1 \overline{b}x - \overline{d} x^2 + \overline{f} xz \, dxdz = \frac{1}{2} \overline{b} - \frac{1}{3} \overline{d} + \frac{1}{4} \overline{f}; \\
\int_f \jump{\bm{u}}_f \cdot \bm{n}_f z ds &=& \int_0^1 \int_0^1 \overline{b} z - \overline{d} xz + \overline{f} z^2 \, dxdz = \frac{1}{2} \overline{b} - \frac{1}{4} \overline{d} + \frac{1}{3} \overline{f}.
\end{eqnarray*}
Furthermore, we have that
\begin{eqnarray*}
\int_f [\jump{\bm{u}}_f \times \bm{n}_f ] \cdot (0,0,1)^T ds &=& \int_0^1 \int_0^1 \overline{a} + \overline{e} z \, dxdz = \overline{a} + \frac{1}{2} \overline{e};\\
\int_f [\jump{\bm{u}}_f \times \bm{n}_f ] \cdot (1,0,0)^T ds &=& \int_0^1 \int_0^1 -\overline{c} + \overline{e} x \, dxdz = -\overline{c} + \frac{1}{2} \overline{e};\\
\int_f [\jump{\bm{u}}_f \times \bm{n}_f ] \cdot (x,0,z)^T ds &=& \int_0^1 \int_0^1 -\overline{c} x + \overline{e} x^2 + \overline{a} z + \overline{e} z^2 \, dxdz \\
&=& -\frac{1}{2} \overline{c} + \frac{2}{3} \overline{e} + \frac{1}{2} \overline{a}.
\end{eqnarray*}
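The six face integrals above can likewise be checked symbolically (illustrative only, assuming Python with sympy; the barred coefficients are represented by plain names):

```python
# Verify the six face integrals (A1)-(A6) on f = {y = 0}, x, z in [0,1].
import sympy as sp

x, z = sp.symbols('x z')
ab, bb, cb, db, eb, fb = sp.symbols('abar bbar cbar dbar ebar fbar')

jn = bb - db*x + fb*z                          # [u]_f . n_f on f
jxn = sp.Matrix([-(cb - eb*x), 0, ab + eb*z])  # [u]_f x n_f on f

def I(g):                                      # integral over the unit face
    return sp.integrate(g, (x, 0, 1), (z, 0, 1))

vals = [I(jn), I(jn*x), I(jn*z),               # (A1)-(A3)
        I(jxn[2]), I(jxn[0]),                  # (A4): dot (0,0,1); (A5): dot (1,0,0)
        I(jxn[0]*x + jxn[2]*z)]                # (A6): dot (x,0,z)
half, third, quarter = [sp.Rational(1, k) for k in (2, 3, 4)]
expected = [bb - half*db + half*fb,
            half*bb - third*db + quarter*fb,
            half*bb - quarter*db + third*fb,
            ab + half*eb,
            -cb + half*eb,
            -half*cb + sp.Rational(2, 3)*eb + half*ab]
print([sp.expand(v - w) for v, w in zip(vals, expected)])  # six zeros
```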
Now, we look for $\bm{u}$ of the aforementioned piecewise rigid motion form that belongs to $F_k$ and satisfies \eqref{notkorn2}, for each $k=1,\dots,6$. We discuss this case by case.
\begin{itemize}[leftmargin=*]
\item For $k=1$, we choose $\overline{d} = - \overline{f} \neq 0$ and $\overline{b} = -\frac{7}{6} \overline{f}$, so that $(A2)$ and $(A3)$ hold, but $(A1)$ does not. For $(A4), (A5)$ and $(A6)$, we simply choose $\overline{a} = \overline{c} = \overline{e} = 0$. On the other hand, for \eqref{rot}, we must choose $d_i, e_i$ and $f_i$ so that $d_1 + d_2 = f_1 + f_2 = e_1 + e_2 = 0$; this means $e_1 = e_2 = 0$, and we can take $d_2 = c/2$, $d_1 = -c/2$, $f_2 = -c/2$ and $f_1 = c/2$ for arbitrary $c \neq 0$. For $c \neq 0$, $\bm u$ then satisfies \eqref{notkorn2}.
\item For $k=2$, we choose $\overline{b} = \frac{1}{2} \overline{d} \neq 0$ and $\overline{f} = 0$, so that $(A1)$ and $(A3)$ hold, but $(A2)$ does not. For $(A4), (A5)$ and $(A6)$, we simply choose $\overline{a} = \overline{c} = \overline{e} = 0$. On the other hand, for \eqref{rot}, we must choose $d_i, e_i$ and $f_i$ so that $d_1 + d_2 = f_1 + f_2 = e_1 + e_2 = 0$; this means $e_1 = e_2 = 0$ and $f_1 = f_2 = 0$, and we can take $d_2 = c/2$ and $d_1 = -c/2$ for arbitrary $c \neq 0$. For $c \neq 0$, $\bm u$ then satisfies \eqref{notkorn2}.
\item For $k=3$, we choose $\overline{b} = -\frac{1}{2} \overline{f} \neq 0$ and $\overline{d} = 0$, so that $(A1)$ and $(A2)$ hold, but $(A3)$ does not. For $(A4), (A5)$ and $(A6)$, we simply choose $\overline{a} = \overline{c} = \overline{e} = 0$. On the other hand, for \eqref{rot}, we must choose $d_i, e_i$ and $f_i$ so that $d_1 + d_2 = f_1 + f_2 = e_1 + e_2 = 0$; this means $e_1 = e_2 = 0$, and we can take $f_2 = c/2$ and $f_1 = -c/2$ for arbitrary $c \neq 0$. For $c \neq 0$, $\bm u$ then satisfies \eqref{notkorn2}.
\item For $k=4$, we choose $\overline{c} = \frac{1}{2} \overline{e} \neq 0$ and $\overline{a} = -\frac{5}{6} \overline{e}$, so that $(A5)$ and $(A6)$ hold, but $(A4)$ does not. For $(A1), (A2)$ and $(A3)$, we simply choose $\overline{b} = \overline{d} = \overline{f} = 0$. On the other hand, for \eqref{rot}, we choose $e_i$ with $e_1 + e_2 = 0$, and then $d_1 + d_2$ and $f_1 + f_2$ appropriately so that \eqref{eq2} and \eqref{eq3} vanish. With such a choice, $\bm u$ satisfies \eqref{notkorn2}.
\item For $k=5$, we choose $\overline{c} = \frac{5}{6} \overline{e} \neq 0$ and $\overline{a} = -\frac{1}{2} \overline{e}$, so that $(A4)$ and $(A6)$ hold, but $(A5)$ does not. For $(A1), (A2)$ and $(A3)$, we simply choose $\overline{b} = \overline{d} = \overline{f} = 0$. On the other hand, for \eqref{rot}, we choose $e_i$ with $e_1 + e_2 = 0$, and then $d_1 + d_2$ and $f_1 + f_2$ appropriately so that \eqref{eq2} and \eqref{eq3} vanish. With such a choice, $\bm u$ satisfies \eqref{notkorn2}.
\item For $k=6$, we choose $\overline{c} = \frac{1}{2} \overline{e} = -\overline{a} \neq 0$, so that $(A4)$ and $(A5)$ hold, but $(A6)$ does not. For $(A1), (A2)$ and $(A3)$, we simply choose $\overline{b} = \overline{d} = \overline{f} = 0$. On the other hand, for \eqref{rot}, we choose $e_i$ with $e_1 + e_2 = 0$, and then $d_1 + d_2$ and $f_1 + f_2$ appropriately so that \eqref{eq2} and \eqref{eq3} vanish. With such a choice, $\bm u$ satisfies \eqref{notkorn2}.
\end{itemize}
This completes the proof.
\end{proof}
\section{Some Applications}\label{appl}
We begin this section with a simple example: the nonconforming finite element space on triangular partitions
introduced by Mardal et al. \cite{mardal2002robust} for the 2D case. Note that the space
is composed of cubic vector fields with constant divergence on each element and with linear normal component on each edge. Namely, the local space is given as follows:
\begin{equation}
\bm V(T) = \{\bm v \in (P_3(T))^2 : \, {\rm div} \bm v \in P_0(T),\,\, \bm v\cdot\bm n_f |_f \in P_1(f), \,\, \forall f \in \partial T \}.
\end{equation}
Note that the degrees of freedom consist of
\begin{equation}
\int_f (\bm v \cdot \bm n_f) q \, ds, \quad \forall q \in P_1(f) \quad \mbox{ and } \quad \int_f \bm v \cdot \bm t_f \, ds.
\end{equation}
This is exactly the minimal set of degrees of freedom required for Korn's inequality.
In the next subsection, we consider the enriched $H({\rm div})$-conforming finite element spaces of Xie et al. \cite{xie2008uniformly}. We then slightly modify these spaces to obtain enriched Crouzeix-Raviart element spaces that satisfy Korn's inequality. Finally, we discuss the lowest-order enrichment of the $H({\rm div})$-conforming finite element space \cite{johnny2012family} and its remedy by modifying the enrichment. Throughout this section, we consider triangular (2D) or tetrahedral (3D) partitions. For $T \in \mathcal{T}_h$, ${\rm{BDM}}_\ell(T)$ denotes the local Brezzi-Douglas-Marini (BDM) space of order $\ell \geq 1$, namely
\begin{equation}
{\rm{BDM}}_\ell(T) = (P_{\ell}(T))^d, \quad \ell \geq 1,
\end{equation}
and ${\rm{RT}}_\ell(T)$ denotes the Raviart-Thomas space of order $\ell \geq 0$,
\begin{equation}
{\rm{RT}}_\ell(T) = (P_{\ell}(T))^d + \widetilde{P}_{\ell}(T) \bm x, \quad \ell \geq 0,
\end{equation}
where $\widetilde{P}_\ell(T)$ denotes the space of homogeneous polynomials of degree $\ell$. Also, for any given $T \in \mathcal{T}_h$, we denote by $\lambda_i$, $i = 1,\dots,d+1$, the barycentric coordinates of $T$.
We shall frequently use the following standard bubble functions. Namely, for $T \in \mathcal{T}_h$, we denote by $b_T$ the bubble function defined by
\begin{equation}\label{tbubble}
b_T = \prod_{i=1}^{d+1} \lambda_i.
\end{equation}
Similarly, we can define edge/face bubble functions denoted by $b_f$ for any $f \in \partial T$.
\subsection{Enriched $H({\rm div})$ conforming finite elements that satisfy Korn's inequality}
In this subsection, we recall some finite elements that can be shown to satisfy Korn's inequality within our framework. The finite element spaces listed in this subsection are constructed from ${H}({\rm div};\Omega)$ finite element spaces enriched by divergence-free functions \cite{xie2008uniformly}. This subsection illustrates the strength of our framework for establishing Korn's inequality.
The local spaces of the finite elements introduced in \cite{xie2008uniformly} are of the following form:
for $T \in \mathcal{T}_h$,
\begin{equation}
\bm{V}_1(T) = \bm{V}_{1,T}^0 + \bm{curl} (b_T \bm{Y}),
\end{equation}
where $\bm{V}^0_{1,T}$ is one of the well-known $H({\rm div})$-conforming finite element spaces ${\rm{BDM}}_1(T)$ or ${\rm{RT}}_1(T)$. For the space $\bm{Y}$, we choose one of the following polynomial spaces:
\begin{equation}
\bm{Y} = \left \{ \begin{array}{ll}
\bm{Y}_1 = P_1(T) & \mbox{ for } d = 2; \\
\bm{Y}_2 = (P_1(T))^3 & \mbox{ for } d = 3;\\
\bm{Y}_3 = (P_1(T))^3/ {\rm span} \left \{ \left( \lambda_i - \frac{1}{3} \right )\nabla \lambda_i \right \}_{i=1}^4 & \mbox{ for } d = 3.
\end{array} \right.
\end{equation}
The resulting six elements are listed in Table \ref{xutable}, together with their degrees of freedom.
\begin{table}
\begin{tabular}{ |p{2.17cm}||p{1.5cm}|p{0.5cm}|p{5.cm}|p{1cm}| }
\hline
\multicolumn{5}{|c|}{The six modified $H({\rm div})$ elements} \\
\hline\hline
{Elements} & $\bm{V}_{1,T}^0$ & $\bm{Y}$ & {\rm DOF} & Korn \\
\hline
$1^{\rm st}$ FEM (2D) & ${\rm RT}_1(T)$ & $\bm{Y}_1$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\ \int_T \bm{v} \cdot \bm{q} \, dx, \quad \bm{q} \in (P_0(T))^2 \\
\int_f \bm{v} \cdot \bm{t} \, ds \end{array}}} $ & Yes \\
\hline
$2^{\rm nd}$ FEM (2D) & ${\rm BDM}_1(T)$ & $\bm{Y}_1$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\
\int_f \bm{v} \cdot \bm{t} \, ds \end{array}}} $ & Yes \\
\hline
$1^{\rm st}$ FEM (3D) & ${\rm RT}_1(T)$ & $\bm{Y}_2$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\ \int_T \bm{v} \cdot \bm{q} \, dx, \quad \bm{q} \in (P_0(T))^3 \\
\int_f (\bm{v} \times \bm{n})\cdot \bm{r} \, ds, \quad \forall \bm{r} \in {\rm RT}_0(f) \end{array}}} $ & Yes \\
\hline
$2^{\rm nd}$ FEM (3D) & ${\rm BDM}_1(T)$ & $\bm{Y}_2$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\
\int_f (\bm{v} \times \bm{n}) \cdot \bm{r} \, ds, \quad \forall \bm{r} \in {\rm RT}_0(f) \end{array}}} $ & Yes \\
\hline
$3^{\rm rd}$ FEM (3D) & ${\rm RT}_1(T)$ & $\bm{Y}_3$ &$ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\ \int_T \bm{v} \cdot \bm{q} \, dx, \quad \bm{q} \in (P_0(T))^3 \\
\int_f (\bm{v} \times \bm{n})\cdot\bm{r} \, ds, \quad \forall \bm{r} \in (P_0(f))^2 \end{array}}} $ & No \\
\hline
$4^{\rm th}$ FEM (3D) & ${\rm BDM}_1(T)$ & $\bm{Y}_3$ &$ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\
\int_f (\bm{v} \times \bm{n})\cdot\bm{r} \, ds, \quad \bm{r} \in (P_0(f))^2 \end{array}}} $ & No \\
\hline
\end{tabular}\caption{FEMs introduced in \cite{xie2008uniformly}}\label{xutable}
\end{table}
Within our framework, one readily verifies that the first four elements satisfy Korn's inequality, while the last two do not.
\subsection{Enriched ${\rm CR}$ finite elements that satisfy Korn's inequality}
In this subsection, we consider enriched {\rm CR} elements that satisfy Korn's inequality, treating the 2D and 3D cases separately. We note that in \cite{falk1991nonconforming}, Falk analyzed Korn's inequality for several nonconforming two-dimensional finite element spaces; in particular, it was shown that the Crouzeix-Raviart element \cite{crouzeix1973conforming} does not satisfy such a Korn's inequality. A simple remedy is available: we apply the same enrichment as for $\bm{V}_1(T)$, with ${\rm BDM}_1(T)$ replaced by $({\rm CR}(T))^d$ for both $d = 2$ and $d = 3$. We therefore propose the enriched $({\rm{CR}}(T))^d$ finite element spaces: for $T \in \mathcal{T}_h$, the local space is defined by
\begin{equation}
\bm{V}(T) := ({\rm CR}(T))^d + {\bm{curl}}(b_T \bm{Y}),
\end{equation}
where $\bm{Y}$ is either $\bm{Y}_1$ or $\bm{Y}_2$, depending on $d = 2$ or $d= 3$, respectively.
\begin{Lemma}
The space $\bm{V}(T)$ with the degrees of freedom given as in Table \ref{xutable} is unisolvent.
\end{Lemma}
\begin{proof}
We let $\bm{v} \in \bm{V}(T)$ be decomposed into two parts:
\begin{equation}
\bm{v} = \bm{v}_0 + {\bm{curl}}(b_T \bm{q}), \quad \bm{v}_0 \in ({\rm{CR}}(T))^d, \quad \bm{q} \in \bm{Y}.
\end{equation}
In $3D$, we have that
\begin{equation}
{\bm{curl}}(b_T \bm{q})\cdot\bm{n} = {\bm{curl}}_f (b_T \bm{q})_f = 0,
\end{equation}
where $(b_T \bm{q})_f$ is the tangential component of $b_T \bm{q}$ on $f$. In 2D, we have ${\bm{curl}}(b_T q)\cdot\bm{n} = \partial_t (b_T q) = 0$. Thus $\bm{v} \cdot \bm{n} = \bm{v}_0 \cdot \bm{n} \in P_1(f)$ for all $f \in \partial T$. Since $({\rm CR}(T))^d$ is unisolvent with respect to the degrees of freedom $\int_f \bm{v}_0 \cdot \bm{n} q \, ds$, $q \in P_1(f)$, we arrive at $\bm{v}_0 = \bm{0}$. Therefore, it suffices to show that $\bm{curl}(b_T \bm{q}) = \bm{0}$ using the tangential degrees of freedom. This part of the proof is essentially given in \cite{xie2008uniformly} and also in \cite{tai2006discrete}, so we omit the details. This completes the proof.
\end{proof}
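The 2D identity ${\bm{curl}}(b_T q)\cdot\bm{n} = \partial_t(b_T q) = 0$ used in the proof can be checked symbolically; in the sketch below (our own verification, with an arbitrary choice of $q \in \bm{Y}_1 = P_1(T)$), ${\bm{curl}}\,\varphi = (\partial_y \varphi, -\partial_x \varphi)$ on the reference triangle.

```python
import sympy as sp

# 2D check: curl(b_T q) . n = d_t(b_T q) = 0 on each edge of the reference
# triangle, for an arbitrary q in Y_1 = P_1(T) (our own choice of q).
# Here curl(phi) = (d phi/dy, -d phi/dx).
x, y = sp.symbols('x y', real=True)
bT = (1 - x - y) * x * y
q = 1 + 2 * x - y
phi = bT * q
curl = sp.Matrix([sp.diff(phi, y), -sp.diff(phi, x)])

# divergence-free, as any 2D curl must be
assert sp.expand(sp.diff(curl[0], x) + sp.diff(curl[1], y)) == 0
# normal components vanish on the three edges
assert sp.expand(sp.diff(phi, x).subs(y, 0)) == 0          # edge y = 0
assert sp.expand(sp.diff(phi, y).subs(x, 0)) == 0          # edge x = 0
assert sp.expand((sp.diff(phi, y) - sp.diff(phi, x)).subs(y, 1 - x)) == 0
```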
In the next subsubsection, we propose another enrichment of the $({\rm CR})^2$ finite element space that satisfies Korn's inequality.
\subsubsection{Enriched {\rm CR} for 2D}
Let $T \in \mathcal{T}_h$ be a triangle in 2D with vertices $\{a_i, a_j, a_k\}$. We denote the barycentric coordinates by $\{\lambda_i, \lambda_j, \lambda_k\}$, and $\bm n_{ij},\bm n_{jk},\bm n_{ki}$ denote the unit normals to the edges $\{e_{ij} = [a_i,a_j],\, e_{jk} = [a_j,a_k],\, e_{ki} = [a_k,a_i]\}$. We let ${\rm CR}(T)$ be the standard {\rm CR} element on $T$. We then define an edge bubble function $b$ by
\begin{equation}
b = \lambda_i \lambda_j + \lambda_j \lambda_k + \lambda_k \lambda_i - \frac{1}{6},
\end{equation}
where the constant $1/6$ comes from the fact that
\begin{equation}
\frac{1}{6} = \frac{1}{|f_{\nu\mu}|} \int_{f_{\nu\mu}} \lambda_\nu \lambda_\mu \, ds, \quad \nu\mu = ij, jk, ki.
\end{equation}
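This identity, and the resulting vanishing moments of $b$ against linear functions on each edge (used again in the unisolvence proof below), can be verified with a short symbolic computation (our own sketch, parametrizing an edge by $t \in [0,1]$):

```python
import sympy as sp

# Restricted to any edge, with lam_nu = t and lam_mu = 1 - t (and the third
# barycentric coordinate equal to zero), b reduces to t(1 - t) - 1/6.
t, c0, c1 = sp.symbols('t c0 c1')
b_edge = t * (1 - t) - sp.Rational(1, 6)

assert sp.integrate(t * (1 - t), (t, 0, 1)) == sp.Rational(1, 6)
# consequently b has vanishing moments against every linear function:
assert sp.expand(sp.integrate(b_edge * (c0 + c1 * t), (t, 0, 1))) == 0
```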
We are now in a position to define an enriched {\rm CR} element on $T$ by
\begin{equation}
\bm{E_{CR}}(T) = ({\rm{CR}}(T))^2 + \bm{V_{EC}}(T),
\end{equation}
where ${\bm{V_{EC}}}(T)$ is the space spanned by the following functions:
\begin{equation}
\bm \psi_{ij} = b (\lambda_i-\lambda_j)\bm n_{ij},~~~\bm \psi_{jk}=b (\lambda_j-\lambda_k)\bm n_{jk},~~~\bm \psi_{ki} = b(\lambda_k-\lambda_i)\bm n_{ki}.
\end{equation}
The degrees of freedom are given as follows:
\begin{equation}\label{dof}
\int_{f} \bm v \cdot \bm{t}_f ds \quad \hbox{and} \quad \int_{f} \bm v\cdot \bm{n}_f q ds, \quad \forall q \in P_1(f).
\end{equation}
It is simple to prove that the aforementioned degrees of freedom can be equivalently formulated as
\begin{equation}\label{dofequiv}
\int_{f} \bm v ds \quad \hbox{and} \quad \int_{f} \bm v \cdot \bm{n}_f q ds, \quad \forall q \in P_1(f).
\end{equation}
We shall now prove that the space $\bm{E_{CR}}(T)$ with the degrees of freedom \eqref{dof} is unisolvent.
\begin{Lemma}
The space $\bm {E_{CR}}(T)$ with the degrees of freedom \eqref{dof} is unisolvent.
\end{Lemma}
\begin{proof}
We let $\bm{v} \in \bm{E_{CR}}(T)$ be decomposed into two parts:
\begin{equation}
\bm{v} = \bm{v}_0 + \bm{v}_1, \quad \bm{v}_0 \in ({\rm{CR}}(T))^2, \quad \bm{v}_1 \in \bm{V_{EC}}(T).
\end{equation}
We shall assume that $\int_f \bm{v} \, ds = 0$ and $\int_{f} \bm v \cdot \bm{n}_f q \, ds=0, \,\, \forall q \in P_1(f)$, for all $f \in \partial T$ and shall show that $\bm v = \bm{0}$. We first note that for any $\bm {v}_1 \in \bm {V_{EC}}(T)$, we have
$\int_{f} \bm {v}_1 \, ds = 0$ for any $f \in \partial T$, since for all $f \in \partial T$ it holds that $\int_{f} b q \, ds = 0$ for all $q \in P_1(f)$. This means that $\int_{f} \bm {v}_0\, ds = 0$ for any $f \in \partial T$, and so $\bm {v}_0 = \bm 0$. Therefore, it suffices to show that $\bm{v}_1 = \bm{0}$. By the definition of $\bm{V_{EC}}(T)$, we can write $\bm {v}_1$ as follows for some constants $c_1, c_2, c_3$:
\begin{equation}
\bm{v}_1 = c_1\bm \psi_{ij} + c_2 \bm \psi_{jk} + c_3 \bm \psi_{ki}.
\end{equation}
The fact that $\int_{f} \bm{v}_1 \cdot \bm{n} \, q ds = 0, \,\, \forall q \in P_1(f)$ leads to the following linear system:
\begin{subeqnarray*}
\int_{f_{ij}} (c_1\bm \psi_{ij}+c_2\bm \psi_{jk}+c_3 \bm \psi_{ki}) \cdot \bm n_{ij} \lambda_i ds &=& 0, \\
\int_{f_{jk}}(c_1\bm \psi_{ij}+c_2\bm \psi_{jk}+c_3 \bm \psi_{ki})\cdot \bm n_{jk} \lambda_j ds &=& 0, \\
\int_{f_{ki}} (c_1\bm \psi_{ij}+c_2\bm \psi_{jk}+c_3 \bm \psi_{ki}) \cdot \bm n_{ki} \lambda_k ds &=& 0.
\end{subeqnarray*}
Or equivalently, we have that
\begin{equation}\label{system}
\left ( \begin{array}{ccc}
-2 & \bm{n}_{jk} \cdot \bm{n}_{ij} & \bm{n}_{ki} \cdot \bm{n}_{ij} \\
\bm{n}_{jk} \cdot \bm{n}_{ij} & -2 & \bm{n}_{ki} \cdot \bm{n}_{jk} \\
\bm{n}_{ij} \cdot \bm{n}_{ki} & \bm{n}_{jk} \cdot \bm{n}_{ki} & - 2 \end{array} \right ) \left ( \begin{array}{c} c_1 \\ c_2 \\ c_3 \end{array} \right ) = \left ( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right ).
\end{equation}
Since $|\bm n_{jk}\cdot \bm n_{ij}|+|\bm n_{ki}\cdot \bm n_{ij}| < 2$ (and similarly for the other rows), the coefficient matrix in \eqref{system} is strictly diagonally dominant and hence invertible, which implies $\bm {v}_1 = \bm 0$. This completes the proof.
\end{proof}
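The diagonal dominance argument can be checked numerically for a concrete triangle; the sketch below (our own verification, for a randomly generated triangle) builds the coefficient matrix of \eqref{system} from unit edge normals and confirms strict diagonal dominance and invertibility.

```python
import numpy as np

rng = np.random.default_rng(0)
# a random (almost surely non-degenerate) triangle with vertices A_, B_, C_
A_, B_, C_ = rng.standard_normal((3, 2))

def unit_normal(p, q):
    """Unit normal to the edge [p, q] (the sign is irrelevant here)."""
    t = q - p
    n = np.array([t[1], -t[0]])
    return n / np.linalg.norm(n)

n_ij, n_jk, n_ki = unit_normal(A_, B_), unit_normal(B_, C_), unit_normal(C_, A_)
M = np.array([
    [-2.0,          n_jk @ n_ij,  n_ki @ n_ij],
    [n_jk @ n_ij,  -2.0,          n_ki @ n_jk],
    [n_ij @ n_ki,   n_jk @ n_ki, -2.0],
])
# each off-diagonal entry is a dot product of unit normals, so |entry| < 1
for row in range(3):
    off = sum(abs(M[row, col]) for col in range(3) if col != row)
    assert off < 2.0                       # strict diagonal dominance
assert abs(np.linalg.det(M)) > 1e-12       # hence invertible
```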
\subsection{A new finite element space that satisfies Korn's inequality}
In this subsection, we investigate the finite elements introduced in \cite{johnny2012family}. We observe that some of their lowest-order finite elements do not satisfy Korn's inequality, and we provide a modification that does. For a given $T \in \mathcal{T}_h$,
we let
\begin{equation}
Q_f^{0}(T) = P_{0}(T) \quad \hbox{for}~ d=2,
\end{equation}
and
\begin{equation}
\bm{Q}_f^{0}(T) = (P_{0}(T))^3 \times \bm{n}_f~ \hbox{for}~ d=3,
\end{equation}
and define $Q^{0}(T) = \sum_{f \in \partial T} b_f {Q}_f^{0}(T)$ for $d = 2$ and $\bm Q^{0}(T)= \sum_{f \in \partial T} b_f \bm{Q}_f^{0}(T)$ for $d=3$.
We now consider the lowest order case of finite elements introduced in \cite{johnny2012family}. We recall that the local space was given as follows:
for $T \in \mathcal{T}_h$,
\begin{equation}\label{guzman}
\bm{V}_1(T) = \bm{V}_{1,T}^0 + \bm{curl} (b_T \bm{Y}),
\end{equation}
where $\bm{V}^0_{1,T}$ is one of the well-known $H({\rm div})$-conforming finite element spaces ${\rm{BDM}}_1(T)$ or ${\rm{RT}}_1(T)$. The space $\bm{Y}$ is given as follows:
\begin{equation}
\bm{Y} = \left \{ \begin{array}{ll}
\bm{Y}_4 = Q^0(T) & \mbox{ for } d = 2; \\
\bm{Y}_5 = \bm Q^0(T) & \mbox{ for } d = 3.
\end{array} \right.
\end{equation}
Using our sharp Korn's inequality, we can immediately check whether the four finite elements of the form \eqref{guzman}, arising from the different choices of $\bm{V}_{1,T}^0$ and $\bm{Y}$, satisfy Korn's inequality. The results are presented in Table \ref{guzmantable}:
\begin{table}[ht]
\begin{tabular}{ |p{2.17cm}||p{1.5cm}|p{0.5cm}|p{5.cm}|p{1cm}| }
\hline
\multicolumn{5}{|c|}{The four lowest-order modified $H({\rm div})$ elements} \\
\hline\hline
{Elements} & $\bm{V}_{1,T}^0$ & $\bm{Y}$ & {\rm DOF} & Korn \\
\hline
$1^{\rm st}$ FEM (2D) & ${\rm RT}_1(T)$ & $\bm{Y}_4$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\ \int_T \bm{v} \cdot \bm{q} \, dx, \quad \bm{q} \in (P_0(T))^2 \\
\int_f \bm{v} \cdot \bm{t} \, ds \end{array}}} $ & Yes \\
\hline
$2^{\rm nd}$ FEM (2D) & ${\rm BDM}_1(T)$ & $\bm{Y}_4$ & $ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\
\int_f \bm{v} \cdot \bm{t} \, ds \end{array}}} $ & Yes \\
\hline
$3^{\rm rd}$ FEM (3D) & ${\rm RT}_1(T)$ & $\bm{Y}_5$ &$ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\ \int_T \bm{v} \cdot \bm{q} \, dx, \quad \bm{q} \in (P_0(T))^3 \\
\int_f (\bm{v} \times \bm{n})\cdot\bm{r} \, ds, \quad \forall \bm{r} \in (P_0(f))^2 \end{array}}} $ & No \\
\hline
$4^{\rm th}$ FEM (3D) & ${\rm BDM}_1(T)$ & $\bm{Y}_5$ &$ {\small{\begin{array}{l}
\int_f \bm{v} \cdot \bm{n} q\, ds, \quad \forall q \in P_1(f), \\
\int_f (\bm{v} \times \bm{n})\cdot\bm{r} \, ds, \quad \bm{r} \in (P_0(f))^2 \end{array}}} $ & No \\
\hline
\end{tabular}
\caption{Lowest FEMs introduced in \cite{johnny2012family}}\label{guzmantable}
\end{table}
We note that, in particular, for the local space
\begin{equation}
\bm{V}_1(T) = \bm{V}_{1,T}^0 + \bm{curl} (b_T \bm{Y}_5)= \bm{V}_{1,T}^0 + \bm{curl} (b_T \bm{Q}^0(T)),
\end{equation}
the degrees of freedom are not sufficient: requiring only the constant projection of the tangential component on each face to vanish is not enough for Korn's inequality.
Therefore, in the remainder of this subsection we investigate how to remedy this situation; namely, we modify the local space $\bm V_1(T)$ and the degrees of freedom so that Korn's inequality is satisfied.
We define the local space of the nonconforming finite element that satisfies Korn's inequality:
\begin{equation}
\bm{V}^1(T) = \bm{V}_{1,T}^0 + {\bm{curl}} \left ( b_T \bm{Q}^*(T) \right ),
\end{equation}
where
\begin{eqnarray*}
\bm{Q}^{*}(T) &=& \sum_{f \in \partial T} b_f \bm{Q}_f^*(T),
\end{eqnarray*}
with
\begin{eqnarray*}
\bm{Q}_f^*(T) = \left \{ \bm{q} \times \bm{n}_f : \bm{q} \in \bm{RM}(T), \int_T (\bm{q} \times {\bm{n}}_f) \cdot (\bm{w} \times \bm{n}_f) b_T b_f \, dx = 0, \,\, \bm{w} \in (P_0(T))^3 \right \}.
\end{eqnarray*}
It is easy to see that the dimension of $\bm{Q}_f^*(T)$ is three.
Therefore, the dimension of the space $\bm{Q}^*(T)$ is 12. Note that the space $\bm{V}_{1,T}^0$ can be either ${\rm BDM}_1(T)$ or ${\rm RT}_1(T)$; we shall consider only ${\rm BDM}_1(T)$ for simplicity. Under this choice, it is well known that a function $\bm{v} \in \bm{V}_{1,T}^0$ is uniquely determined by the following degrees of freedom: for all faces $f \in \partial T$,
\begin{equation}
\langle \bm{v}\cdot \bm{n}_f, \mu \rangle_f, \quad \forall \mu \in {P}_1(f).
\end{equation}
With this in mind, we can provide the following degrees of freedom that determine a function $\bm{v} \in \bm{V}^1(T)$ uniquely: for all $f\in\partial T$,
\begin{eqnarray}
\langle \bm{v}\cdot \bm{n}_f, \mu \rangle_f, && \quad
\forall \mu \in {P}_1(f) \label{dof1}; \\
\langle \bm{v}\times \bm{n}_f, \bm{\kappa} \rangle_f, && \quad \forall \bm{\kappa} \in {\rm{RT}}_0(f). \label{dof2}
\end{eqnarray}
We can show that the following holds true:
\begin{Lemma}
For any $T \in \mathcal{T}_h$, we have
\begin{equation}
{\rm dim} \,{\bm{curl}} \left ( b_T \bm{Q}^*(T) \right ) = 4 \, {\rm dim} \, \bm{Q}_f^*(T).
\end{equation}
\end{Lemma}
\begin{proof}
The proof is similar to that of Lemma 3.2 in \cite{johnny2012family}; we omit the details.
\end{proof}
\begin{Theorem}
The following relation holds:
\begin{equation}
\bm{V}^1(T) = \bm{V}_{1,T}^0 \oplus {\bm{curl}} \left ( b_T \bm{Q}^*(T) \right )
\end{equation}
and
\begin{equation}
{\rm dim} \bm{V}^1(T) = {\rm dim} \bm{V}_{1,T}^0 + {\rm dim} {\bm{curl}} \left ( b_T \bm{Q}^*(T) \right ).
\end{equation}
Furthermore, any function $\bm{v} \in \bm{V}^1(T)$ is uniquely determined by the degrees of freedom \eqref{dof1} and \eqref{dof2}.
\end{Theorem}
\begin{proof}
The proof is similar to that of Theorem 3.3 in \cite{johnny2012family}; we omit the details. \end{proof}
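The dimension count behind the theorem can be tallied explicitly. In the sketch below (our own bookkeeping), the standard dimensions ${\rm dim}\,{\rm BDM}_1(T) = 12$ in 3D, ${\rm dim}\,P_1(f) = 3$, and ${\rm dim}\,{\rm RT}_0(f) = 3$ are assumed rather than taken from the text:

```python
# Dimension bookkeeping for V^1(T) with V_{1,T}^0 = BDM_1(T) in 3D.
# Assumed standard dimensions: dim BDM_1(T) = 12, dim P_1(f) = 3,
# dim RT_0(f) = 3; dim Q_f^*(T) = 3 as stated in the text.
dim_BDM1 = 12
dim_P1_f = 3                 # scalar linear functions on a face
dim_RT0_f = 3                # lowest-order Raviart-Thomas space on a face
dim_Qf_star = 3
n_faces = 4

dim_curl_part = n_faces * dim_Qf_star       # Lemma: 4 * dim Q_f^*(T) = 12
dim_V1 = dim_BDM1 + dim_curl_part           # direct sum (Theorem)
n_dofs = n_faces * (dim_P1_f + dim_RT0_f)   # DOFs (dof1) and (dof2)
assert dim_V1 == n_dofs == 24               # DOF count matches the dimension
```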
\section{Conclusion}\label{con}
We have proven a Korn's inequality for the piecewise $H^1$ space. Our characterization of Korn's inequality can be regarded as a rotated version of the one given in \cite{mardal2006observation}. We have further established the sharpness of the inequality. These results have been applied to a number of finite elements to check whether or not they satisfy Korn's inequality. Moreover, the minimal jump terms that appear in the inequality are explicit enough to be used in the construction of finite element spaces that satisfy Korn's inequality.
\bibliographystyle{plain}
% https://arxiv.org/abs/2011.12449
\title{Iterations for the Unitary Sign Decomposition and the Unitary Eigendecomposition}
\begin{abstract}
We construct fast, structure-preserving iterations for computing the sign decomposition of a unitary matrix $A$ with no eigenvalues equal to $\pm i$. This decomposition factorizes $A$ as the product of an involutory matrix $S = \operatorname{sign}(A) = A(A^2)^{-1/2}$ times a matrix $N = (A^2)^{1/2}$ with spectrum contained in the open right half of the complex plane. Our iterations rely on a recently discovered formula for the best (in the minimax sense) unimodular rational approximant of the scalar function $\operatorname{sign}(z) = z/\sqrt{z^2}$ on subsets of the unit circle. When $A$ has eigenvalues near $\pm i$, the iterations converge significantly faster than Pad\'e iterations. Numerical evidence indicates that the iterations are backward stable, with backward errors often smaller than those obtained with direct methods. This contrasts with other iterations, such as the scaled Newton iteration, which suffers from numerical instabilities if $A$ has eigenvalues near $\pm i$. As an application, we use our iterations to construct a stable spectral divide-and-conquer algorithm for the unitary eigendecomposition.
\end{abstract}
\section{Introduction} \label{sec:intro}
Every matrix $A \in \mathbb{C}^{n \times n}$ with no purely imaginary eigenvalues can be written uniquely as a product
\[
A = SN,
\]
where $S \in \mathbb{C}^{n \times n}$ is involutory ($S^2=I$), $N \in \mathbb{C}^{n \times n}$ has spectrum in the open right half of the complex plane, and $S$ commutes with $N$. This is the celebrated matrix sign decomposition~\cite{higham1994matrix}, whose applications are widespread~\cite{denman1976matrix,kenney1995matrix}. In terms of the principal square root $(\cdot)^{1/2}$, we have $S=A(A^2)^{-1/2} =: \mathrm{sign}(A)$ and $N=(A^2)^{1/2}$.
When $A$ is unitary, so too are $S$ and $N$. It follows that $S=S^{-1}=S^*$, so we may write, for any unitary $A$ with $\Lambda(A) \cap i\mathbb{R} = \emptyset$,
\begin{equation} \label{unitarysign}
A = SN, \quad S^2=I, \; S=S^*, \; N^2=A^2, \; N^*N=I, \, \Lambda(N) \subset \mathbb{C}_+,
\end{equation}
where $\Lambda(N)$ denotes the spectrum of $N$ and $\mathbb{C}_+ = \{z \in \mathbb{C} \mid \Re(z)>0\}$. We refer to this decomposition as the \emph{unitary sign decomposition}.
We say that an algorithm for computing the decomposition~(\ref{unitarysign}) is backward stable if it computes matrices $\widehat{S}$ and $\widehat{N}$ with the property that the quantities
\begin{equation} \label{backwarderrors}
\|A-\widehat{S}\widehat{N}\|, \, \|\widehat{S}^2-I\|, \, \|\widehat{S}-\widehat{S}^*\|, \, \|\widehat{N}^*\widehat{N}-I\|, \, \|\widehat{N}^2-A^2\|, \, \max\{0,-\min_{\lambda \in \Lambda(\widehat{N})} \Re\lambda\}
\end{equation}
are each a small multiple of the unit roundoff $u$ ($=2^{-53}$ in double-precision arithmetic).\footnote{Note that this property implies $\|\widehat{N}\widehat{S}-\widehat{S}\widehat{N}\|$ is small as well; see Lemma~\ref{lemma:SNcommute}.} Here, $\|\cdot\|$ denotes the 2-norm.
The goal of this paper is to design backward stable iterations for computing the decomposition~(\ref{unitarysign}). To illustrate why this is challenging, let us point out the pitfalls of naive approaches. A widely used iteration for computing the sign of a general matrix $A \in \mathbb{C}^{n \times n}$ is the Newton iteration~\cite{roberts1980linear}~\cite[Section 5.3]{higham2008functions}
\begin{equation} \label{newtonit}
X_{k+1} = \frac{1}{2}(X_k+X_k^{-1}), \quad X_0 = A.
\end{equation}
If $A$ is unitary, then the first iteration is simply
\begin{equation} \label{firstnewtonit}
X_1 = \frac{1}{2}(A+A^*).
\end{equation}
In floating point arithmetic, this calculation is susceptible to catastrophic cancellation if $A$ has eigenvalues near $\pm i$. Indeed, if we carry out~(\ref{firstnewtonit}) followed by~(\ref{newtonit}) for $k=1,2,\dots$ on the $100 \times 100$ unitary matrix \verb$A = gallery('orthog',100,3)$ from the MATLAB matrix gallery, then the iteration diverges. Scaling the iterates with standard scaling heuristics~\cite{kenney1992scaling} leads to convergence, but the computed sign of $A$ satisfies $\|\widehat{S}-\widehat{S}^*\| > 0.1$ in typical experiments.
This happens because $A$ has several eigenvalues lying near $\pm i$.
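The mechanism is already visible in the scalar case. The sketch below (our own illustration, not the paper's matrix experiment) applies the Newton map to a single eigenvalue $e^{i(\pi/2-\delta)}$ near $i$: the first step nearly annihilates it, and the second step inverts the tiny result.

```python
import cmath

# Scalar model of the first Newton step: an eigenvalue of a unitary A near i
# is mapped to a tiny real number, which the next step inverts -- the
# cancellation/blow-up pattern described above.
delta = 1e-8
x = cmath.exp(1j * (cmath.pi / 2 - delta))  # eigenvalue close to i
x1 = (x + 1 / x) / 2                        # = cos(pi/2 - delta), about delta
assert abs(x1) < 1e-7                       # nearly annihilated
x2 = (x1 + 1 / x1) / 2                      # roughly 1/(2 delta): blow-up
assert abs(x2) > 1e6
```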
The above algorithm can be reinterpreted in a different way: It is computing the unitary factor in the polar decomposition of $(A+A^*)/2$. Indeed, the Newton iteration $X_{k+1} = \frac{1}{2}(X_k+X_k^{-*})$ for the polar decomposition~\cite[Section 8.3]{higham2008functions} coincides with~(\ref{newtonit}) on Hermitian matrices. This suggests another family of potential algorithms: compute the polar decomposition of $(A+A^*)/2$ via iterative methods or other means. However, numerical experiments confirm that such algorithms are similarly inaccurate on matrices with eigenvalues near $\pm i$. This unstable behavior is also shared by the superdiagonal Pad\'e iterations for the matrix sign function~\cite{kenney1991rational}, all of which map eigenvalues $\lambda \approx \pm i$ of $A$ to a small real number (or the inverse thereof) in the first iteration.
One way to overcome these difficulties is to adopt structure-preserving iterations. Here, we say that an iteration $X_{k+1} = g_k(X_k)$ for the unitary sign decomposition is structure-preserving if the iterates $X_k$ are unitary for every $k$.
Examples include the diagonal family of Pad\'e iterations~\cite{higham2004computing}, whose lowest-order member is the iteration
\begin{equation} \label{padelow}
X_{k+1} = X_k(3I+X_k^2)(I+3X_k^2)^{-1}, \quad X_0 = A.
\end{equation}
By keeping $X_k$ unitary, a structure-preserving iteration ensures that the eigenvalues of $X_k$ remain on the unit circle, ostensibly skirting the dangers of catastrophic cancellation. We observe numerically that, if implemented in a clever way (described in Section~\ref{sec:algorithm}), the diagonal Pad\'e iterations are backward stable. However, they can take excessively long to converge on matrices with eigenvalues near $\pm i$. For example, when \verb$A = gallery('orthog',100,3)$, the iteration~(\ref{padelow}) takes 34 iterations to converge.
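The slow convergence is again visible for a single eigenvalue. In the scalar sketch below (our own illustration), the map~(\ref{padelow}) is applied to $e^{i(\pi/2-\delta)}$; the angular distance to $\pm i$ grows only by roughly a factor of $3$ per step, so escaping a $\delta$-neighborhood of $\pm i$ takes on the order of $\log_3(1/\delta)$ iterations.

```python
import cmath

def pade_step(z):
    # lowest-order diagonal Pade map z(3 + z^2)/(1 + 3 z^2)
    return z * (3 + z * z) / (1 + 3 * z * z)

delta = 1e-6
z = cmath.exp(1j * (cmath.pi / 2 - delta))  # eigenvalue close to i
count = 0
while abs(z - 1) > 1e-12 and abs(z + 1) > 1e-12:
    z = pade_step(z)
    count += 1
    assert abs(abs(z) - 1) < 1e-8   # the map preserves the unit circle
    assert count < 100              # guard: convergence is slow but finite
assert count > 10                   # many steps spent escaping +/- i
```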
We construct in this paper a family of structure-preserving iterations for the unitary sign decomposition that converge more rapidly---sometimes dramatically more so---than the diagonal Pad\'e iterations. Numerical evidence indicates that these iterations are backward stable, with backward errors often smaller than those obtained with direct methods.
The key ingredient that we use to construct our iterations is a recently discovered formula for the best (in the minimax sense) unimodular rational approximant of the scalar function $\sign(z) = z/\sqrt{z^2}$ on subsets of the unit circle~\cite{gawlik2020zolotarev}. Remarkably, it can be shown that composing two such approximants yields a best approximant of higher degree~\cite{gawlik2020zolotarev}, laying the foundations for an iteration. When applied to matrices, the iteration produces a sequence of unitary matrices $X_0=A$, $X_1$, $X_2$, $\dots$ that converges rapidly to $S=\sign(A)$, often significantly faster than the corresponding diagonal Pad\'e iteration. When \verb$A = gallery('orthog',100,3)$, for example, the lowest-order iteration converges within 6 iterations, which is about 6 times faster than the corresponding diagonal Pad\'e iteration~(\ref{padelow}).
\paragraph{Prior work}
Matrix iterations constructed from rational minimax approximants have attracted growing interest in recent years. Early examples include the optimal scaling heuristic proposed by Byers and Xu~\cite{byers2008new} for the Newton iteration for the polar decomposition, as well as an analogous scaling heuristic for the matrix square root proposed by Wachspress~\cite{wachspress1962} and Beckermann~\cite{beckermann2013optimally}. Nakatsukasa, Bai, and Gygi~\cite{nakatsukasa2010optimizing} designed an optimal scaling heuristic for the Halley iteration for the polar decomposition, and their strategy was generalized to higher order by Nakatsukasa and Freund~\cite{nakatsukasa2016computing}. The latter work elucidated the link between these scaling heuristics and the seminal work of Zolotarev~\cite{zolotarev1877applications} on rational minimax approximation. The iterations derived in~\cite{nakatsukasa2016computing} have a variety of applications, including algorithms for the symmetric eigendecomposition, singular value decomposition, polar decomposition, and CS decomposition~\cite{nakatsukasa2016computing,gawlik2018backward}.
All of the aforementioned algorithms rely crucially on the following fact: if two rational minimax approximants of the scalar function $\sign(x)$ on suitable real intervals are composed with one another, then their composition is a best approximant of higher degree~\cite{nakatsukasa2016computing}.
A related composition law for rational minimax approximants of $\sqrt{z}$ has been used to construct iterations for the matrix square root~\cite{gawlik2018zolotarev}. These iterations were generalized to the matrix $p$th root in~\cite{gawlik2020rational} and used to derive approximation theoretic results in~\cite{gawlik2019approximating}.
An even more recent advancement---a composition law for rational minimax approximants of $\sign(z)$ on subsets of the unit circle~\cite{gawlik2020zolotarev}---is what inspired the present paper.
\paragraph{Connections to other iterations}
The iterations we derive in this paper are intimately connected to several existing iterations for the matrix sign function and the polar decomposition. When applied to a unitary matrix $A$, our iterations produce a sequence of unitary matrices whose Hermitian part coincides with the sequence of matrices generated by Nakatsukasa and Freund's iterations~\cite{nakatsukasa2016computing} for the polar decomposition of $(A+A^*)/2$. A special case of this result is a connection between our lowest-order iteration for $\sign(A)$ and the optimally scaled Halley iteration for the polar decomposition of $(A+A^*)/2$~\cite{nakatsukasa2010optimizing}. It is important to note that these equivalences hold only in exact arithmetic. In floating-point arithmetic, our iterations behave very differently from the aforementioned algorithms.
There is also a link between our iterations and the diagonal Pad\'e iterations. Roughly speaking, our iterations are designed using rational minimax approximants of $\sign(z)$ on two circular arcs containing $\pm 1$. If these arcs are each shrunk to a point, then the diagonal Pad\'e iterations are recovered. This helps to explain the slow convergence of the diagonal Pad\'e iterations on unitary matrices with eigenvalues near $\pm i$: The iterations need to approximate $\sign(z)$ near $z = \pm i$, but they use rational functions that are designed to approximate $\sign(z)$ near $z = \pm 1$.
\paragraph{Unitary eigendecomposition}
Our emphasis on handling eigenvalues near $\pm i$ is not merely pedantic. It is precisely the sort of situation that one often encounters if the unitary sign decomposition is used as part of a spectral divide-and-conquer algorithm for the unitary eigendecomposition.
Indeed, consider a unitary matrix $A \in \mathbb{C}^{m \times m}$ with eigendecomposition $A=V\Lambda V^*$. The matrix $(I+\sign(A))/2$ is a spectral projector onto the invariant subspace $\mathcal{V}_+$ of $A$ associated with eigenvalues having positive real part.
A spectral divide-and-conquer algorithm uses this projector to find orthonormal bases $U_1 \in \mathbb{C}^{m \times m_1}$, $U_2 \in \mathbb{C}^{m \times m_2}$, $m_1+m_2=m$, for $\mathcal{V}_+$ and its orthogonal complement.
Then $\begin{pmatrix} U_1 & U_2 \end{pmatrix}^* A \begin{pmatrix} U_1 & U_2 \end{pmatrix}$ is block diagonal, so recursion can be used to determine $V$ and $\Lambda$. At each step, scalar multiplication by complex numbers with unit modulus can be used to rotate the spectrum so that it is distributed approximately evenly between the left and right half-planes. If $A$ has a cluster of nearby eigenvalues, then it is reasonable to expect this process to center the cluster near $\pm i$ at some step. This is precisely what we observe in practice, and the ability to compute the unitary sign decomposition quickly and accurately in the presence of eigenvalues near $\pm i$ becomes paramount.
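One splitting step can be sketched as follows. In this illustration (our own; the eigenvalues and seed are arbitrary test data), $\sign(A)$ is computed directly from an eigendecomposition purely for demonstration, whereas the point of the paper is to compute it iteratively.

```python
import numpy as np

# One splitting step of spectral divide-and-conquer on a synthetic unitary A.
rng = np.random.default_rng(1)
m = 8
Q, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
theta = np.array([0.3, -0.8, 1.2, 2.5, -2.9, 2.0, -1.9, 0.1])  # away from +/- pi/2
A = Q @ np.diag(np.exp(1j * theta)) @ Q.conj().T               # unitary test matrix
S = Q @ np.diag(np.sign(np.cos(theta))) @ Q.conj().T           # sign(A), for illustration
P = (np.eye(m) + S) / 2                                        # spectral projector
U, sv, _ = np.linalg.svd(P)                                    # U1 = U[:, :m1] spans ran(P)
m1 = int(round(sv.sum()))                                      # rank of the projector
B = U.conj().T @ A @ U                                         # should be block diagonal
assert np.linalg.norm(B[:m1, m1:]) < 1e-10
assert np.linalg.norm(B[m1:, :m1]) < 1e-10
```

Recursing on the two diagonal blocks of `B` then yields the full eigendecomposition.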
\paragraph{Organization}
This paper is organized as follows. We begin in Section~\ref{sec:scalarsign} by studying rational minimax approximants of $\sign(z)$ on the unit circle. This material is largely drawn from~\cite{gawlik2020zolotarev}, but we add some additional results and insights to relate these approximants to Pad\'e approximants. Next, we use these approximants to construct matrix iterations for the unitary sign decomposition in Section~\ref{sec:algorithm}. We illustrate their utility by constructing a spectral divide-and-conquer algorithm for the unitary eigendecomposition in Section~\ref{sec:eig}. We conclude with numerical examples in Section~\ref{sec:numerical}.
\section{Rational Approximation of the Sign Function on the Unit Circle} \label{sec:scalarsign}
In this section, we study rational approximants of the scalar function $\sign(z) = z/\sqrt{z^2}$ on the set
\[
\mathbb{S}_\Theta = \{z \in \mathbb{C} \mid |z|=1, \, \arg z \notin (\Theta,\pi-\Theta) \cup (-\pi+\Theta,-\Theta) \},
\]
where $\Theta \in (0,\pi/2)$. Since our ultimate interest is in constructing structure-preserving iterations for the unitary sign decomposition, we focus on rational functions $r$ satisfying $|r(z)|=1$ for $|z|=1$. We call such rational functions unimodular. Unimodular rational functions have the property that $r(A)$ is unitary for any unitary matrix $A$.
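Unimodularity of the factors used below is elementary to verify: for real $a$ and $|z|=1$, $\overline{1+az^2} = \bar z^2(z^2+a)$, so numerator and denominator of $z(z^2+a)/(1+az^2)$ have equal modulus. A small numerical check (our own, with an arbitrary illustrative value of $a$):

```python
import cmath

# For real a, z(z^2 + a)/(1 + a z^2) maps the unit circle to itself.
a = 0.37  # arbitrary real parameter (illustrative only)
for k in range(32):
    z = cmath.exp(2j * cmath.pi * k / 32)
    r = z * (z * z + a) / (1 + a * z * z)
    assert abs(abs(r) - 1) < 1e-12
```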
The problem of determining the best (in the minimax sense) unimodular rational approximant of $\sign(z)$ on $\mathbb{S}_\Theta$ has recently been solved in~\cite{gawlik2020zolotarev}. To describe the solution, let us introduce some notation. We use $\mathrm{sn}(\cdot,\ell)$, $\mathrm{cn}(\cdot,\ell)$, and $\mathrm{dn}(\cdot,\ell)$ to denote Jacobi's elliptic functions with modulus $\ell$, and we use $\ell' = \sqrt{1-\ell^2}$ to denote the modulus complementary to $\ell$. We denote the complete elliptic integral of the first kind by $K(\ell) = \int_0^{\pi/2} (1-\ell^2 \sin^2\theta)^{-1/2} \, d\theta$. We say that a rational function $r(z)=p(z)/q(z)$ has type $(m,n)$ if $p$ and $q$ are polynomials of degree at most $m$ and $n$, respectively.
\begin{theorem}
Let $\Theta \in (0,\pi/2)$ and $n \in \mathbb{N}_0$. Among all rational functions $r$ of type $(2n+1,2n+1)$ that satisfy $|r(z)|=1$ for $|z|=1$, the ones which minimize
\[
\max_{z \in \mathbb{S}_\Theta} \left| \arg\left(\frac{r(z)}{\sign(z)}\right) \right|
\]
are
\[
r(z) = r_{2n+1}(z;\Theta) = z \prod_{j=1}^n \frac{z^2+a_j}{1+a_j z^2}
\]
and its reciprocal, where
\[
a_j = a_j(\Theta) = \left( \frac{\ell \sn(v_j,\ell') + \dn(v_j,\ell')}{\cn(v_j,\ell')} \right)^{2(-1)^{j+n}},
\]
$v_j = \frac{2j-1}{2n+1}K(\ell')$, $\ell = \cos\Theta$, and $\ell' = \sqrt{1-\ell^2} = \sin\Theta$.
\end{theorem}
\begin{proof}
See~\cite[Theorem 2.1 and Remark 2.2]{gawlik2020zolotarev}.
\end{proof}
\begin{remark}
For simplicity, we have chosen to focus only on best unimodular rational approximants of $\sign(z)$ on $\mathbb{S}_\Theta$ of type $(2n+1,2n+1)$ in this paper.
Best approximants of type $(2n,2n)$ can also be written down; see~\cite{gawlik2020zolotarev} for details.
\end{remark}
The rational function $r_{2n+1}(z;\Theta)$ has the following remarkable behavior under composition.
\begin{theorem} \label{thm:composition}
Let $\Theta \in (0,\pi/2)$, $m,n \in \mathbb{N}_0$, and $\widetilde{\Theta} = \left| \arg(r_{2n+1}(e^{i\Theta}; \Theta)) \right|$. Then
\[
r_{2m+1}(r_{2n+1}(z;\Theta); \widetilde{\Theta}) = r_{(2m+1)(2n+1)}(z;\Theta).
\]
\end{theorem}
\begin{proof}
See~\cite[Theorem 3.3 and Remark 3.6]{gawlik2020zolotarev}.
\end{proof}
We also have the following error estimate.
\begin{theorem} \label{thm:error}
Let $\Theta \in (0,\pi/2)$ and $n \in \mathbb{N}_0$. We have
\[
\max_{z \in \mathbb{S}_\Theta} \left| \arg\left(\frac{r_{2n+1}(z;\Theta)}{\sign(z)}\right) \right| \le 4 \rho^{-(2n+1)},
\]
where
\begin{equation} \label{rho}
\rho = \rho(\Theta) = \exp\left( \frac{ \pi K(\cos\Theta) }{ 2K(\sin\Theta) } \right).
\end{equation}
\end{theorem}
\begin{proof}
See~\cite[Theorem 3.2]{gawlik2020zolotarev}, and note that their definition of $\rho$ differs from ours by a factor of 2 in the exponent.
\end{proof}
\begin{remark} \label{remark:theta0}
Theorems~\ref{thm:composition} and~\ref{thm:error} continue to hold when $\Theta=0$ if we adopt the convention that $\rho(0) = \infty$, $\mathbb{S}_0 = \{-1,1\}$, and $r_{2n+1}(z;0) = z \prod_{j=1}^n \frac{z^2+a_j(0)}{1+a_j(0) z^2}$. We elaborate on this fact below.
\end{remark}
\subsection{Connections with Other Rational Approximants} \label{sec:connections}
The rational function $r_{2n+1}(z;\Theta)$ is closely connected to several other well-known rational approximants of $\sign(z)$.
\begin{proposition} \label{prop:pade}
As $\Theta \rightarrow 0$, $r_{2n+1}(z;\Theta)$ converges coefficientwise to $z p_n(z^2)$, where $p_n(z)$ is the type-$(n,n)$ Pad\'e approximant of $z^{-1/2}$ at $z=1$.
\end{proposition}
\begin{proof}
This is a consequence of~\cite[Proposition 3.9]{gawlik2020zolotarev}, where it is shown that $\sqrt{z}/r_{2n+1}(\sqrt{z};\Theta)$ converges coefficientwise to $1/p_n(z)$ as $\Theta \rightarrow 0$.
\end{proof}
In the notation of Remark~\ref{remark:theta0}, the above proposition states that
\[
r_{2n+1}(z;0) = zp_n(z^2).
\]
This rational function has been studied extensively in~\cite{kenney1991rational,kenney1994hyperbolic,gomilko2012pade,higham2008functions}. It satisfies~\cite[Theorem 5.9]{higham2008functions}
\begin{equation} \label{tanh}
z p_n(z^2) = \tanh((2n+1)\arctanh z).
\end{equation}
It also has the following properties. Both $p_n(z)$ and $zp_n(z^2)$ are unimodular~\cite{higham2004computing}; that is, for any $n \in \mathbb{N}_0$,
\[
|zp_n(z^2)| = |p_n(z)|=1, \text{ if } |z|=1.
\]
Under composition, we have~\cite[Theorem 5.9(c)]{higham2008functions}
\begin{equation} \label{composition0}
r_{2m+1}(r_{2n+1}(z;0);0) = r_{(2m+1)(2n+1)}(z;0)
\end{equation}
for any $m,n \in \mathbb{N}_0$. Finally, $r_{2n+1}(1;0)=-r_{2n+1}(-1;0)=1$ for all $n \in \mathbb{N}_0$. These last two facts justify Remark~\ref{remark:theta0}.
The rational functions $r_{2n+1}(z;0)$, $n \in \mathbb{N}_0$, have been used in~\cite{kenney1991rational} to construct iterations for computing the matrix sign function. The iterations constitute the diagonal family of Pad\'e iterations. The first few diagonal Pad\'e approximants of $z^{-1/2}$ at $z=1$ are
\[
p_0(z)=1, \; p_1(z) = \frac{3+z}{1+3z}, \; p_2(z) = \frac{5+10z+z^2}{1+10z+5z^2}, \; p_3(z) = \frac{7+35z+21z^2+z^3}{1+21z+35z^2+7z^3}.
\]
More generally, Pad\'e iterations can be constructed from rational functions of the form $zp_{m,n}(z^2)$, where $p_{m,n}(z)$ is the type-$(m,n)$ Pad\'e approximant of $z^{-1/2}$ at $z=1$. However, when $m\neq n$, the Pad\'e iterations are not structure-preserving, as $|p_{m,n}(z)| \not\equiv 1$ for $|z|=1$ and $m \neq n$.
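Both the representation~(\ref{tanh}) and the listed approximants can be sanity-checked in a few lines of Python (our check):

```python
import numpy as np

# diagonal Pade approximants p_1, p_2 of z^{-1/2} at z = 1, as listed above
p1 = lambda z: (3 + z) / (1 + 3 * z)
p2 = lambda z: (5 + 10 * z + z ** 2) / (1 + 10 * z + 5 * z ** 2)

# z p_n(z^2) should equal tanh((2n+1) arctanh z), here with 2n+1 = 3, 5
x = np.linspace(-0.95, 0.95, 101)
tanh3 = np.tanh(3 * np.arctanh(x))
tanh5 = np.tanh(5 * np.arctanh(x))

# unimodularity of z p_1(z^2) on the unit circle
zc = np.exp(1j * np.linspace(0.0, 2 * np.pi, 64))
mod3 = np.abs(zc * p1(zc ** 2))
```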
We now turn our attention back to the rational function $r_{2n+1}(z;\Theta)$ with positive $\Theta$. Interestingly, this function is intimately connected to the solution of another rational approximation problem: approximating $\sign(x)$ on the union of real intervals $[-1,-\ell] \cup [\ell,1]$.
\begin{theorem} \label{thm:realpart}
Let $\Theta \in [0,\pi/2)$ and $n \in \mathbb{N}_0$. For $z \in \mathbb{C}$ with $|z|=1$, we have
\begin{equation} \label{realpart}
\Re r_{2n+1}(z;\Theta) = \widehat{R}_{2n+1}(\Re z; \cos\Theta),
\end{equation}
where
\[
\widehat{R}_m(x;\ell) =
\begin{cases}
\frac{R_m(x;\ell)}{\max_{y \in [\ell,1]} R_m(y;\ell)}, &\mbox{ if } \ell \in (0,1), \\
x\, p_{(m-1)/2}(x^2), &\mbox{ if } \ell=1,
\end{cases}
\]
and
\[
R_m(\cdot;\ell) = \argmin_{R \in \mathcal{R}_{m,m}} \max_{x \in [-1,-\ell] \cup [\ell,1]} |R(x)-\sign(x)|.
\]
\end{theorem}
\begin{proof}
This identity is proven for $\Theta \in (0,\pi/2)$ in~\cite[Theorem 2.4]{gawlik2020zolotarev}. To see that it also holds when $\Theta = 0$, we must show that if $|z|=1$ and $x = \Re z = \frac{1}{2}(z+1/z)$, then
\[
\frac{1}{2} \left( \tanh((2n+1)\arctanh z) + \frac{1}{\tanh((2n+1)\arctanh z)} \right) = \tanh((2n+1)\arctanh x).
\]
Since $\frac{1+x}{1-x} = -\left(\frac{1+z}{1-z}\right)^2$, we have $\arctanh x = \frac{1}{2}\log\left(\frac{1+x}{1-x}\right) = \log\left(i \frac{1+z}{1-z}\right)$. Thus,
\begin{equation} \label{tanhleft}
\tanh((2n+1)\arctanh x) = \tanh\left( (2n+1)\log\left(i \frac{1+z}{1-z}\right) \right).
\end{equation}
On the other hand, the identity $\tanh(2y) = \frac{2\tanh y}{1+\tanh^2 y}$ shows that
\begin{align}
\frac{1}{2} &\left( \tanh((2n+1)\arctanh z) + \frac{1}{\tanh((2n+1)\arctanh z)} \right) \nonumber \\
&= \coth((4n+2)\arctanh z) \nonumber \\
&= \coth\left( (2n+1)\log\left(\frac{1+z}{1-z}\right)\right). \label{tanhright}
\end{align}
Since $(2n+1)\log\left(\frac{1+z}{1-z}\right)$ differs from $(2n+1)\log\left(i \frac{1+z}{1-z}\right)$ by an odd multiple of $\frac{\pi i}{2}$, it follows that~(\ref{tanhleft}) and~(\ref{tanhright}) are equal.
\end{proof}
Written another way, the theorem above states that
\begin{equation} \label{rplusrinv}
\frac{1}{2} \left( r_{2n+1}(z;\Theta) + \frac{1}{r_{2n+1}(z;\Theta)} \right) = \widehat{R}_{2n+1}\left( \frac{z+1/z}{2}; \cos\Theta \right)
\end{equation}
for all $z$ with $|z|=1$. In particular,
\[
\frac{1}{2}\left( zp_n(z^2) + \frac{1}{z p_n(z^2)} \right) = \left( \frac{z+1/z}{2}\right) p_n\left( \left(\frac{z+1/z}{2}\right)^2 \right).
\]
Since both sides of these equalities are rational functions of $z$ that agree on the unit circle, they agree on all of $\mathbb{C}$.
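For instance, with $n=1$ one can confirm the displayed identity numerically at points both on and off the unit circle (our check):

```python
import numpy as np

def f(z):
    # z * p_1(z^2), with p_1(z) = (3 + z)/(1 + 3z)
    return z * (3 + z ** 2) / (1 + 3 * z ** 2)

pts = np.array([2.0 + 0j, 0.7 + 0.2j, np.exp(0.4j), -1.5j])
w = (pts + 1 / pts) / 2
lhs = (f(pts) + 1 / f(pts)) / 2   # (1/2)(z p_1(z^2) + 1/(z p_1(z^2)))
rhs = f(w)                        # w p_1(w^2) with w = (z + 1/z)/2
```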
By combining~(\ref{composition0}),~(\ref{realpart}), and Theorem~\ref{thm:composition}, one sees that the function $\widehat{R}_{2n+1}(x;\ell)$ satisfies
\begin{equation} \label{Rcomp}
\widehat{R}_{2m+1}(\widehat{R}_{2n+1}(x;\ell);\widetilde{\ell}) = \widehat{R}_{(2m+1)(2n+1)}(x;\ell), \quad \text{ if } \widetilde{\ell} = \widehat{R}_{2n+1}(\ell;\ell)
\end{equation}
for all $m,n \in \mathbb{N}_0$ and all $\ell \in [0,1)$.
This equality was derived in~\cite{nakatsukasa2016computing} for $\ell \in (0,1)$ by counting extrema of $\widehat{R}_{2m+1}(\widehat{R}_{2n+1}(x;\ell);\widetilde{\ell})-\sign(x)$.
It can be leveraged to construct iterations for the matrix sign function, and such iterations are particularly well-suited for computing the sign of a Hermitian matrix $B$ (which coincides with the unitary factor in the polar decomposition of $B$); see~(\ref{zolopd1}-\ref{zolopd2}) below.
\section{Algorithm} \label{sec:algorithm}
\subsection{Matrix Iteration}
Theorem~\ref{thm:composition} suggests the following iteration for computing the sign of a unitary matrix $A$ with spectrum contained in $\mathbb{S}_\Theta$, $\Theta \in [0,\pi/2)$:
\begin{align}
X_{k+1} &= r_{2n+1}(X_k; \Theta_k), & X_0 &= A,\label{zolo1} \\
\Theta_{k+1} &= |\arg r_{2n+1}(e^{i\Theta_k};\Theta_k)|, & \Theta_0 &= \Theta. \label{zolo2}
\end{align}
Below we summarize the properties of the iteration~(\ref{zolo1}-\ref{zolo2}).
\begin{proposition}
The iteration~(\ref{zolo1}-\ref{zolo2}) is structure-preserving. That is, if $A$ is unitary, then $X_k$ is unitary for every $k \ge 0$.
\end{proposition}
\begin{proof}
Since $|r_{2n+1}(z;\Theta_k)|=1$ for every scalar $z$ with unit modulus, $r_{2n+1}(X; \Theta_k)$ is unitary for every unitary matrix $X$.
\end{proof}
\begin{theorem}
Let $A$ be a unitary matrix with spectrum contained in $\mathbb{S}_\Theta$ for some $\Theta \in (0,\pi/2)$.
For any $n \in \mathbb{N}$, the iteration~(\ref{zolo1}-\ref{zolo2}) converges to $\sign(A)$ with order of convergence $2n+1$. In fact,
\begin{equation} \label{errorbound}
\|\log(X_k \sign(A)^{-1})\| \le 4 \rho^{-(2n+1)^k},
\end{equation}
for every $k \ge 0$, where $\rho$ is given by~(\ref{rho}).
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:composition}, we have
\[
X_k = r_{(2n+1)^k}(A; \Theta)
\]
for every $k \ge 0$. Thus, every eigenvalue of $X_k \sign(A)^{-1}$ is of the form $r_{(2n+1)^k}(\lambda;\Theta) / \sign(\lambda)$ for some eigenvalue $\lambda$ of $A$.
By Theorem~\ref{thm:error},
\begin{align*}
\|\log(X_k \sign(A)^{-1})\|
&= \max_{\lambda \in \Lambda(X_k \sign(A)^{-1})} |\arg\lambda| \\
&= \max_{\lambda \in \Lambda(A)} \left| \arg\left( \frac{r_{(2n+1)^k}(\lambda;\Theta)}{\sign(\lambda)} \right)\right| \\
&\le \max_{z \in \mathbb{S}_\Theta} \left| \arg\left(\frac{r_{(2n+1)^k}(z;\Theta)}{\sign(z)}\right) \right| \\
&\le 4 \rho^{-(2n+1)^k}.
\end{align*}
\end{proof}
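As a proof of concept, the iteration can be run in a few lines of Python. The sketch below is ours: it forms the factors $V_j=(X^2+a_jI)(I+a_jX^2)^{-1}$ with linear solves (Algorithm~\ref{alg:zolosign} below uses QR-based products instead), and it updates $\Theta_k$ via the scalar recursion~(\ref{zolo2}). SciPy's \verb$ellipj$ and \verb$ellipk$ take the parameter $m=\ell^2$.

```python
import numpy as np
from scipy.special import ellipj, ellipk

def zolotarev_coeffs(n, Theta):
    # a_j(Theta); scipy's ellipj/ellipk take the parameter m = modulus**2
    ell, ellp = np.cos(Theta), np.sin(Theta)
    Kp = ellipk(ellp ** 2)
    a = []
    for j in range(1, n + 1):
        sn, cn, dn, _ = ellipj((2 * j - 1) * Kp / (2 * n + 1), ellp ** 2)
        a.append(((ell * sn + dn) / cn) ** (2 * (-1) ** (j + n)))
    return a

def sign_unitary(A, Theta, n=2, tol=1e-12, maxiter=25):
    """Iteration (zolo1)-(zolo2): X_{k+1} = r_{2n+1}(X_k; Theta_k), X_0 = A."""
    m = A.shape[0]
    I = np.eye(m)
    X = A.astype(complex)
    for _ in range(maxiter):
        if np.linalg.norm(X - X.conj().T) <= tol:
            break
        a = zolotarev_coeffs(n, Theta)
        X2, Y = X @ X, X
        z = np.exp(1j * Theta)
        w = z
        for aj in a:
            # V_j = (X^2 + a_j I)(I + a_j X^2)^{-1}; the two factors commute
            Y = Y @ np.linalg.solve(I + aj * X2, X2 + aj * I)
            w = w * (z ** 2 + aj) / (1 + aj * z ** 2)
        X = Y
        Theta = abs(np.angle(w))   # Theta_{k+1} = |arg r_{2n+1}(e^{i Theta_k}; Theta_k)|
    return X
```

On a small unitary test matrix with spectral angle $0.8$ and $n=2$, a couple of iterations suffice, consistent with the bound~(\ref{errorbound}).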
\subsection{Connections with Other Iterations}
There is an intimate connection between the iteration~(\ref{zolo1}-\ref{zolo2}) and several existing iterations for the matrix sign function. First, Proposition~\ref{prop:pade} implies that~(\ref{zolo1}-\ref{zolo2}) reduces to the diagonal Pad\'e iteration when we set $\Theta=0$:
\begin{equation} \label{pade}
X_{k+1} = X_k p_n(X_k^2), \quad X_0 = A.
\end{equation}
Second, there is a link between the iteration~(\ref{zolo1}-\ref{zolo2}) and the iteration
\begin{align}
Y_{k+1} &= \widehat{R}_{2n+1}(Y_k; \ell_k), & Y_0 &= B,\label{zolopd1} \\
\ell_{k+1} &= \widehat{R}_{2n+1}(\ell_k;\ell_k), & \ell_0 &= \ell, \label{zolopd2}
\end{align}
which was introduced in~\cite{nakatsukasa2016computing} to compute the sign of a Hermitian matrix $B$ with spectrum contained in $[-1,-\ell] \cup [\ell,1]$. Note that~(\ref{zolopd1}-\ref{zolopd2}) reduces to
\begin{align}
Y_{k+1} &= Y_k p_n(Y_k^2), &\quad Y_0 &= B \label{padepd}
\end{align}
when we formally set $\ell=1$, regardless of the spectrum of $B$. This is the same iteration as~(\ref{pade}), but with the starting matrix labelled $B$ rather than $A$.
\begin{proposition}
Let $A$ be a unitary matrix with no eigenvalues equal to $\pm i$. Let $n \in \mathbb{N}$ and $\Theta \in [0,\pi/2)$.
If $B = (A+A^*)/2$ and $\ell = \cos\Theta$, then the iterations~(\ref{zolo1}-\ref{zolo2}) and~(\ref{zolopd1}-\ref{zolopd2}) generate sequences satisfying
\[
Y_k = \frac{1}{2}(X_k+X_k^*), \text{ and } \ell_k = \cos\Theta_k
\]
for every $k \ge 0$.
\end{proposition}
\begin{proof}
It follows from Theorem~\ref{thm:composition} that in the iteration~(\ref{zolo1}-\ref{zolo2}), we have
\[
X_k = r_{(2n+1)^k}(A;\Theta), \quad \Theta_k = |\arg r_{(2n+1)^k}(e^{i\Theta};\Theta)|,
\]
for each $k \ge 0$. On the other hand, the composition law~(\ref{Rcomp}) implies that in the iteration~(\ref{zolopd1}-\ref{zolopd2}), we have
\[
Y_k = \widehat{R}_{(2n+1)^k}(B;\ell), \quad \ell_k = \widehat{R}_{(2n+1)^k}(\ell;\ell),
\]
for each $k \ge 0$.
Thus, by~(\ref{rplusrinv}),
\begin{align*}
\frac{1}{2}(X_k+X_k^*)
&= \frac{1}{2}(X_k+X_k^{-1}) \\
&= \frac{1}{2}\left(r_{(2n+1)^k}(A;\Theta) + r_{(2n+1)^k}(A;\Theta)^{-1}\right) \\
&= \widehat{R}_{(2n+1)^k}((A+A^{-1})/2;\cos\Theta) \\
&= \widehat{R}_{(2n+1)^k}(B;\ell) \\
&= Y_k.
\end{align*}
Also, by Theorem~\ref{thm:realpart},
\[
\cos\Theta_k = \Re e^{i\Theta_k} = \Re r_{(2n+1)^k}(e^{i\Theta};\Theta) = \widehat{R}_{(2n+1)^k}(\Re e^{i\Theta};\cos\Theta) = \widehat{R}_{(2n+1)^k}(\ell;\ell) = \ell_k.
\]
\end{proof}
In the case that $\Theta=0$, the above result implies a connection between the diagonal Pad\'e iterations~(\ref{pade}) and~(\ref{padepd}).
\begin{corollary}
Let $A$ be a unitary matrix with no eigenvalues equal to $\pm i$, and let $n \in \mathbb{N}$. If $B = (A+A^*)/2$, then the diagonal Pad\'e iterations~(\ref{pade}) and~(\ref{padepd}) generate sequences satisfying
\[
Y_k = \frac{1}{2}(X_k+X_k^*)
\]
for every $k \ge 0$.
\end{corollary}
\subsection{Implementation}
To implement the $k$th step of the iteration~(\ref{zolo1}-\ref{zolo2}), one must compute products of unitary matrices of the form
\begin{equation} \label{XaX}
V_j = (X_k^2 + a_j I) (I+a_j X_k^2)^{-1} = (X_k + a_j X_k^*) (X_k^*+a_j X_k)^{-1}, \quad j=1,2,\dots,n,
\end{equation}
where $X_k$ is unitary. The following lemma describes a method for computing~(\ref{XaX}) that is guaranteed to produce a matrix that is unitary to machine precision.
\begin{lemma}
Let $B \in \mathbb{C}^{m \times m}$ be a nonsingular normal matrix. Let $Q_1 R_1 = B$ and $Q_2 R_2 = B^*$ be the QR factorizations of $B$ and $B^*$, respectively, normalized so that $R_1$ and $R_2$ have positive diagonal entries. Then
\[
B B^{-*} = Q_1 Q_2^*.
\]
\end{lemma}
\begin{proof}
Since $R_1$ and $R_2$ have positive diagonal entries, $R_1$ is the unique Cholesky factor of $B^*B$ and $R_2$ is the unique Cholesky factor of $BB^*$. Because $B$ is normal, $BB^* = B^*B$, so $R_1=R_2$. Hence, $BB^{-*} = Q_1 R_1 R_2^{-1} Q_2^* = Q_1 Q_2^*$.
\end{proof}
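In code, the lemma translates into the following sketch (ours). LAPACK's QR does not fix the phases of $R$'s diagonal, so we normalize them to be positive, matching the Cholesky convention used in the lemma.

```python
import numpy as np

def qr_pos(M):
    # QR factorization with R normalized to have positive real diagonal;
    # returns only the (correspondingly rescaled) unitary factor Q
    Q, R = np.linalg.qr(M)
    ph = np.diagonal(R) / np.abs(np.diagonal(R))
    return Q * ph                      # rescales column k of Q by ph[k]

def unitary_quotient(B):
    """B @ inv(B^*) = Q1 @ Q2^* for a nonsingular normal B."""
    return qr_pos(B) @ qr_pos(B.conj().T).conj().T
```

Applied to $B = X + a_j X^*$ with $X$ unitary, this produces the factor $V_j$ in~(\ref{XaX}) as a product of two numerically unitary matrices.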
Once~(\ref{XaX}) has been computed for each $j$, one must decide in what order to multiply the matrices $V_1, V_2, \dots, V_n$, and $X_k$. Our numerical experience suggests that this decision has a strong influence on the backward stability of the algorithm. We find that the choice
\begin{equation} \label{Xkp1}
X_{k+1} = \frac{1}{2}(X_k V_1 V_2 \cdots V_n + V_n V_{n-1} \cdots V_1 X_k)
\end{equation}
is preferable to, for instance, $X_{k+1} = X_k V_1 V_2 \cdots V_n$ or $X_{k+1} = V_n V_{n-1} \cdots V_1 X_k$. This choice appears to guarantee that $\|X_k A - A X_k\|=O(u)$ for each $k$, which is essential for backward stability; see Lemma~\ref{lemma:backwardstability} for details. A proof that $\|X_k A - A X_k\|=O(u)$ when~(\ref{Xkp1}) is used remains an open problem.
\paragraph{Termination}
We must also decide how to terminate the iteration.
Here we suggest terminating slightly early and applying two post-processing steps---symmetrization followed by one step of the Newton-Schulz iteration~\cite[Equation 8.20]{higham2008functions} for the polar decomposition---to ensure that the computed matrix $\widehat{S} \approx \sign(A)$ is Hermitian and unitary to machine precision. These post-processing steps have the following effect.
Let $\{\sigma_j \cos\theta_j + i\sin\theta_j\}_{j=1}^m$ be the eigenvalues of $X_k$, where $\sigma_j \in \{-1,1\}$ and $|\theta_j|<\pi/2$ for each $j$. Then
\begin{equation} \label{symmetrize}
Y = \frac{1}{2}(X_k + X_k^*)
\end{equation}
has eigenvalues $\{\sigma_j \cos\theta_j \}_{j=1}^m$, and
\begin{equation} \label{newtonschulz}
Z = \frac{1}{2} Y(3I-Y^*Y) = \frac{1}{2} Y(3I-Y^2)
\end{equation}
has eigenvalues $\{ \frac{1}{2}\sigma_j \cos\theta_j (3 - \cos^2\theta_j) \}_{j=1}^m$. For small $\theta_j$, we have
\[
\frac{1}{2}\sigma_j \cos\theta_j (3 - \cos^2\theta_j) = \sigma_j \left(1-\frac{3}{8}\theta_j^4\right) + O(\theta_j^6).
\]
This number will lie within a tolerance $\delta$ of $\pm 1$ if
\begin{equation} \label{thetaconverged}
\theta_j \lesssim \left( \frac{8\delta}{3} \right)^{1/4}.
\end{equation}
The above calculations suggest the following termination criterion. Since the eigenvalues of $X_k-X_k^*$ are $\{2i\sin\theta_j\}_{j=1}^m \approx \{2i\theta_j\}_{j=1}^m$, we terminate the iteration and carry out the post-processing steps~(\ref{symmetrize}-\ref{newtonschulz}) as soon as
\[
\|X_k-X_k^*\| \le 2 \left( \frac{8\delta}{3} \right)^{1/4}.
\]
Note that since the Frobenius norm $\|\cdot\|_F$ is an upper bound for the $2$-norm $\|\cdot\|$, we may safely replace $\|X_k-X_k^*\|$ by $\|X_k-X_k^*\|_F$ in the criterion above. If desired, a second symmetrization can be performed after the Newton-Schulz step. This has virtually no effect on the eigenvalues' distance to $\pm 1$, but it may be desirable if an exactly Hermitian matrix is sought.
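The fourth-order accuracy of the post-processed eigenvalues is easy to confirm numerically (our check, for a single eigenvalue with $\sigma_j=+1$):

```python
import numpy as np

theta = 1e-2
y = np.cos(theta)                  # eigenvalue of Y = (X_k + X_k^*)/2
z = 0.5 * y * (3 - y ** 2)         # eigenvalue after the Newton-Schulz step
# agreement with 1 - (3/8) theta^4, up to the O(theta^6) remainder
err = abs(z - (1 - 0.375 * theta ** 4))
```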
\paragraph{Spectral angle}
Let us also mention how to determine $\Theta$ so that $\Lambda(A) \subset \mathbb{S}_\Theta$. We hereafter refer to the smallest such $\Theta$ as the \emph{spectral angle} of $A$, denoted $\Theta(A)$. A simple heuristic is to estimate the eigenvalues $\lambda_+$ and $\lambda_-$ of $A$ that lie closest to $i$ and $-i$, respectively. Then one can set
\[
\Theta = \max\{ \pi/2 - |\arg(i\lambda_-)|, |\arg(i\lambda_+)| - \pi/2 \}.
\]
In practice, it is not necessary to determine the spectral angle of $A$ precisely.
Our experience suggests that underestimates and overestimates of $\Theta$ can be used without significant harm, unless $\Theta$ is very close to $\pi/2$.
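For testing, the exact spectral angle can be read off the full spectrum; the helper below (ours) encodes the fact that $\Theta(A)$ is the largest angular distance from an eigenvalue of $A$ to the nearest of $\pm 1$. In practice one would only estimate $\lambda_\pm$ as described above.

```python
import numpy as np

def spectral_angle(A):
    # angular distance of e^{i t} to the nearest of +1, -1 is
    # min(|t|, pi - |t|) = pi/2 - ||t| - pi/2|
    t = np.abs(np.angle(np.linalg.eigvals(A)))
    return float(np.max(np.pi / 2 - np.abs(t - np.pi / 2)))
```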
\paragraph{Spectral angles close to $\pi/2$}
There are a few delicate numerical issues that arise when the spectral angle of $A$ is close to $\pi/2$. First, as noted in~\cite[Section 4.3]{nakatsukasa2016computing}, the built-in MATLAB functions \verb$ellipj$ and \verb$ellipke$ cannot be used to reliably compute $\sn(\cdot,\ell')$, $\cn(\cdot,\ell')$, $\dn(\cdot,\ell')$, and $K(\ell')$ when $\Theta = \arccos\ell$ is close to $\pi/2$. Instead, the code described in~\cite[Section 4.3]{nakatsukasa2016computing} is preferred. In addition, the lowest-order iteration ($n=1$) appears to be more reliable than the higher-order iterations when $\Theta>\pi/2-u^{1/2}$, so we advocate using the lowest-order iteration until $\Theta_k$ falls below $\pi/2-u^{1/2}$ (recall that $u=2^{-53}$ denotes the unit roundoff). Typically this takes two or fewer iterations, after which one can switch to a higher-order iteration if desired.
To implement the lowest-order iteration ($n=1$) when $\Theta>\pi/2-u^{1/2}$, we have found the following heuristic to be useful for ensuring rapid convergence. If, at the $k$th iteration, $\Theta_k$ lies above $\pi/2-u^{1/2}$, we compute $\Theta_{k+1}$ as $\Theta_{k+1}=\Theta(X_{k+1})$ (the spectral angle of $X_{k+1}$) rather than via~(\ref{zolo2}). This tends to speed up the iteration.
To improve stability, we have also found it prudent to replace $\Theta_k$ by $\pi/2-10u$ if $\Theta_k > \pi/2-10u$.
A summary of our proposed algorithm for computing the unitary sign decomposition is presented in Algorithm~\ref{alg:zolosign}.
\begin{algorithm}
\caption{Order-$(2n+1)$ iteration for the unitary sign decomposition\newline
\textit{Inputs}: Unitary matrix $A \in \mathbb{C}^{m \times m}$, tolerance $\delta>0$, degree $n \in \mathbb{N}$\newline
\textit{Outputs}: Matrices $S,N \in \mathbb{C}^{m \times m}$ satisfying~(\ref{unitarysign})}
\label{alg:zolosign}
\begin{algorithmic}[1]
\STATE{$\Theta_0 = \min\{\Theta(A),\pi/2-10u\}$} \label{line:theta0}
\STATE{$X_0=A$, $n_0=n$, $k=0$}
\WHILE{$\|X_k-X_k^*\|_F > 2(8\delta/3)^{1/4}$}
\LINEIFELSE{$\Theta_k>\pi/2-u^{1/2}$}{$n=1$}{$n=n_0$}
\STATE{$Y=X_k$, $Z=X_k$}
\FOR{$j=1$ \TO $n$}
\STATE{$Q_1 R_1 = X_k+a_j(\Theta_k) X_k^*$ (QR factorization)}
\STATE{$Q_2 R_2 = X_k^*+a_j(\Theta_k) X_k$ (QR factorization)}
\STATE{$Y= Y Q_1 Q_2^* $}
\STATE{$Z = Q_1 Q_2^* Z$}
\ENDFOR
\STATE{$X_{k+1} = \frac{1}{2}(Y+Z)$}
\IF{$\Theta_k>\pi/2-u^{1/2}$}
\STATE{$\Theta_{k+1} = \min\{\Theta(X_{k+1}),\pi/2-10u\}$}
\ELSE
\STATE{$\Theta_{k+1} = |\arg r_{2n+1}(e^{i\Theta_k};\Theta_k)|$}
\ENDIF
\STATE{$k = k+1$}
\ENDWHILE
\STATE{$S = (X_k+X_k^*)/2$}
\STATE{$S = S(3I-S^2)/2$}
\STATE{$S = (S+S^*)/2$}
\STATE{$N = S A$}
\RETURN $S$, $N$
\end{algorithmic}
\end{algorithm}
\subsection{Backward Stability}
We now discuss how some of the choices made above are inspired by backward stability considerations.
We first address a remark that was made in the footnote of this paper's introduction concerning the list of backward errors~(\ref{backwarderrors}). At first glance, this list may appear to be incomplete because the norm of $\widehat{N}\widehat{S}-\widehat{S}\widehat{N}$ is absent.
The following lemma shows that if $\widehat{S}$ and $\widehat{N}$ are well-conditioned matrices and $\|\widehat{N}^2-A^2\|$, $\|A-\widehat{S}\widehat{N}\|$, and $\|\widehat{S}^2-I\|$ are small, then $\|\widehat{N}\widehat{S}-\widehat{S}\widehat{N}\|$ is automatically small as well.
\begin{lemma} \label{lemma:SNcommute}
Let $A \in \mathbb{C}^{m \times m}$ be a unitary matrix. For any invertible matrices $\widehat{S},\widehat{N} \in \mathbb{C}^{m \times m}$, we have
\begin{equation*}
\|\widehat{N}\widehat{S}-\widehat{S}\widehat{N}\| \le \left( \|\widehat{N}^2-A^2\| + (1 + \|\widehat{S}\|\|\widehat{N}\|) \|A-\widehat{S}\widehat{N}\| + \|\widehat{N}\|^2 \|\widehat{S}^2-I\| \right) \|\widehat{N}^{-1}\| \|\widehat{S}^{-1}\|.
\end{equation*}
\end{lemma}
\begin{proof}
This follows from the identity
\[
(\widehat{N}\widehat{S}-\widehat{S}\widehat{N})\widehat{S}\widehat{N} = \widehat{N}^2-A^2 + A(A-\widehat{S}\widehat{N}) + (A-\widehat{S}\widehat{N})\widehat{S}\widehat{N} + \widehat{N}(\widehat{S}^2-I)\widehat{N}.
\]
\end{proof}
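The identity in the proof holds for arbitrary square matrices (no structure is needed until norms are taken), as a random test confirms (our check):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
A, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
S = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
N = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
lhs = (N @ S - S @ N) @ S @ N
rhs = (N @ N - A @ A) + A @ (A - S @ N) + (A - S @ N) @ S @ N \
      + N @ (S @ S - np.eye(m)) @ N
```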
The next lemma shows that in order to achieve backward stability, it is prudent to compute a Hermitian matrix $\widehat{S}$ such that $\|\widehat{S}^2-I\|$ and $\|A\widehat{S}-\widehat{S}A\|$ are small, and then set $\widehat{N} = \widehat{S}A$. This highlights the importance of ensuring the smallness of $\|AX_k-X_kA\|$ in Algorithm~\ref{alg:zolosign}.
\begin{lemma} \label{lemma:backwardstability}
Let $A \in \mathbb{C}^{m \times m}$ be a unitary matrix, let $\widehat{S}$ be an invertible Hermitian matrix, and let $\widehat{N} = \widehat{S}A$. Then
\begin{align}
\|\widehat{N}^*\widehat{N}-I\| &\le \|\widehat{S}^2-I\|, \label{ineq1} \\
\|A-\widehat{S}\widehat{N}\| &\le \|\widehat{S}^2-I\|, \label{ineq2} \\
\|\widehat{N}^2-A^2\| &\le \|\widehat{S}\| \|A\widehat{S}-\widehat{S}A\| + \|\widehat{S}^2-I\|, \label{ineq3} \\
\|\widehat{N}\widehat{S}-\widehat{S}\widehat{N}\| &\le \|\widehat{S}\| \|A\widehat{S}-\widehat{S}A\|. \label{ineq4}
\end{align}
\end{lemma}
\begin{proof}
Since $A^*A=I$, $\widehat{N}=\widehat{S}A$, and $\widehat{S}=\widehat{S}^*$, we have
\[
\widehat{N}^*\widehat{N}-I = A^*\widehat{S}^2 A - I = A^* (\widehat{S}^2-I)A.
\]
Taking the norm of both sides proves~(\ref{ineq1}). Similarly, the equalities
\begin{align*}
A-\widehat{S}\widehat{N} &= (I-\widehat{S}^2)A, \\
\widehat{N}^2-A^2 &= \widehat{S}A\widehat{S}A - A^2 = \widehat{S}(A\widehat{S}-\widehat{S}A)A + (\widehat{S}^2-I)A^2, \\
\widehat{N}\widehat{S}-\widehat{S}\widehat{N} &= \widehat{S}(A\widehat{S}-\widehat{S}A)
\end{align*}
yield~(\ref{ineq2}-\ref{ineq4}).
\end{proof}
\section{A Spectral Divide-and-Conquer Algorithm for the Unitary Eigendecomposition} \label{sec:eig}
The iteration we have proposed for computing the unitary sign decomposition can be used to construct a spectral divide-and-conquer algorithm for the unitary eigendecomposition, following~\cite{nakatsukasa2013stable,nakatsukasa2016computing}. The idea is as follows. Given a unitary matrix $A \in \mathbb{C}^{m \times m}$, we scale $A$ by a complex number $e^{i\phi}$ so that roughly half (say, $m_1$) of the eigenvalues of $e^{i\phi}A$ lie in the right half of the complex plane, and roughly half (say, $m_2$) lie in the left half of the complex plane. We then compute $S = \sign(e^{i\phi}A)$ using Algorithm~\ref{alg:zolosign}. The matrix $P=(I+S)/2$ is a spectral projector onto the invariant subspace $\mathcal{V}_+$ of $e^{i\phi}A$ associated with the eigenvalues of $e^{i\phi}A$ having positive real part. Using subspace iteration, we can compute orthonormal bases $U_1 \in \mathbb{C}^{m \times m_1}$ and $U_2 \in \mathbb{C}^{m \times m_2}$ (where $m_1+m_2=m$) for $\mathcal{V}_+$ and its orthogonal complement. Then
\[
\begin{pmatrix} U_1^* \\ U_2^* \end{pmatrix} A \begin{pmatrix} U_1 & U_2 \end{pmatrix}
=
\begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}
\]
is block diagonal, so we can recurse to find eigendecompositions $A_1 = V_1 \Lambda_1 V_1^*$ and $A_2 = V_2 \Lambda_2 V_2^*$. The eigendecomposition of $A$ is then $A = V\Lambda V^*$, where
\[
V = \begin{pmatrix} U_1 V_1 & U_2 V_2 \end{pmatrix}
\]
and
\[
\Lambda = \begin{pmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{pmatrix}.
\]
Since every eigenvalue of $P$ is either $0$ or $1$, subspace iteration with $P$ typically converges in one iteration, or, in rare cases, two. To choose the scalar $e^{i\phi}$, a simple heuristic is to compute the median $\mu$ of the arguments of the diagonal entries of $A$ and set $\phi=\pi/2-\mu$. When $A$ is nearly diagonal, this has the effect of centering the eigenvalues around $i$.
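A minimal sketch of a single splitting step follows (our code). For brevity it extracts the range of $P$ with a column-pivoted QR rather than subspace iteration, and forms the sign matrix directly from spectral data rather than by Algorithm~\ref{alg:zolosign}; the test matrix is built so that $\phi=0$ already splits the spectrum.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m = 8
# unitary test matrix whose eigenvalues already straddle the imaginary axis
Q0, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
angles = np.concatenate([rng.uniform(-1.2, 1.2, 4), np.pi + rng.uniform(-1.2, 1.2, 4)])
A = Q0 @ np.diag(np.exp(1j * angles)) @ Q0.conj().T
S = Q0 @ np.diag(np.sign(np.cos(angles)) + 0j) @ Q0.conj().T   # sign(A), for the demo

P = (np.eye(m) + S) / 2                    # spectral projector onto V_+
m1 = int(round(np.real(np.trace(P))))      # rank of P
U, _, _ = qr(P, pivoting=True)             # first m1 columns span ran(P)
U1, U2 = U[:, :m1], U[:, m1:]
A1 = U1.conj().T @ A @ U1                  # diagonal blocks; recurse on these
A2 = U2.conj().T @ A @ U2
```

The off-diagonal blocks $U_1^*AU_2$ and $U_2^*AU_1$ vanish to roundoff, and the eigenvalues of $A_1$ and $A_2$ lie in the right and left half-planes, respectively.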
A summary of the algorithm just described is presented in Algorithm~\ref{alg:eig}.
\begin{algorithm}
\caption{Divide-and-conquer algorithm for the unitary eigendecomposition\newline
\textit{Inputs}: Unitary matrix $A \in \mathbb{C}^{m \times m}$ \newline
\textit{Outputs}: Matrices $V,\Lambda \in \mathbb{C}^{m \times m}$ satisfying $V\Lambda V^* = A$, $V^*V = I$, and $\Lambda$ diagonal}
\label{alg:eig}
\begin{algorithmic}[1]
\STATE{$\phi = \frac{\pi}{2} - \operatorname{median} \{\arg A_{11},\dots,\arg A_{mm}\}$}
\STATE{$S = \sign(e^{i\phi} A)$} \label{line:sign}
\STATE{$P = (I+S)/2$}
\STATE{Use subspace iteration to compute orthonormal bases $U_1 \in \mathbb{C}^{m \times m_1}$ and $U_2 \in \mathbb{C}^{m \times m_2}$ for the 1- and 0-eigenspaces of $P$, respectively.}
\STATE{$A_1 = U_1^* A U_1$, $A_2 = U_2^* A U_2$}
\STATE{Recurse to find eigendecompositions $V_1 \Lambda_1 V_1^* = A_1$ and $V_2 \Lambda_2 V_2^* = A_2$.}
\STATE{$V = \begin{pmatrix} U_1 V_1 & U_2 V_2 \end{pmatrix}$}
\STATE{$\Lambda = \begin{pmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{pmatrix}$}
\RETURN $V$, $\Lambda$
\end{algorithmic}
\end{algorithm}
\section{Numerical Examples} \label{sec:numerical}
In this section, we study the iteration~(\ref{zolo1}-\ref{zolo2}) numerically, and we test Algorithms~\ref{alg:zolosign} and~\ref{alg:eig} on a collection of unitary matrices.
\subsection{Scalar Iteration}
\begin{table}
\centering
\begin{tabularx}{\linewidth}{ Y*{11}{Y} }
& \multicolumn{11}{c}{$\frac{\pi}{2}-\Theta$} \\
$n$ & 1.5 & 1 & 0.5 & $10^{-2}$ & $10^{-4}$ & $10^{-6}$ & $10^{-8}$ & $10^{-10}$ & $10^{-12}$ & $10^{-14}$ & $10^{-16}$ \\
\cmidrule(lr){1-1}
\cmidrule(lr){2-12}
1 & 1 & 2 & 2 & 3 & 4 & 4 & 5 & 5 & 5 & 5 & 5 \\
2 & 1 & 2 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 4 & 4 \\
3 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 & 3 & 3 & 3 \\
4 & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 & 3 & 3 \\
5 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 3 \\
6 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
7 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
8 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
\end{tabularx}
\caption{Smallest integer $k$ for which $4\rho(\Theta)^{-(2n+1)^k} \le (8\delta/3)^{1/4}$, where $\delta = 10^{-16}$, for various values of $n$ and $\Theta$.}
\label{tab:iter}
\end{table}
\begin{table}
\centering
\begin{tabularx}{\linewidth}{ Y*{11}{Y} }
& \multicolumn{11}{c}{$\frac{\pi}{2}-\Theta$} \\
$n$ & 1.5 & 1 & 0.5 & $10^{-2}$ & $10^{-4}$ & $10^{-6}$ & $10^{-8}$ & $10^{-10}$ & $10^{-12}$ & $10^{-14}$ & $10^{-16}$ \\
\cmidrule(lr){1-1}
\cmidrule(lr){2-12}
1 & 1 & 2 & 3 & 7 & 11 & 15 & 19 & 24 & 28 & 32 & 37 \\
2 & 1 & 2 & 2 & 5 & 8 & 10 & 13 & 16 & 19 & 22 & 25 \\
3 & 1 & 2 & 2 & 4 & 6 & 9 & 11 & 13 & 16 & 18 & 21 \\
4 & 1 & 1 & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 19 \\
5 & 1 & 1 & 2 & 3 & 5 & 7 & 9 & 11 & 13 & 15 & 17 \\
6 & 1 & 1 & 2 & 3 & 5 & 7 & 9 & 10 & 12 & 14 & 16 \\
7 & 1 & 1 & 2 & 3 & 5 & 6 & 8 & 10 & 12 & 13 & 15 \\
8 & 1 & 1 & 2 & 3 & 5 & 6 & 8 & 9 & 11 & 13 & 14
\end{tabularx}
\caption{Smallest integer $k$ for which $|r_{(2n+1)^k}(e^{i\Theta};0)-1| \le (8\delta/3)^{1/4}$, where $\delta = 10^{-16}$, for various values of $n$ and $\Theta$.}
\label{tab:iterpade}
\end{table}
To understand how rapidly the iteration~(\ref{zolo1}-\ref{zolo2}) can be expected to converge, let us study the upper bound~(\ref{errorbound}). Table~\ref{tab:iter} reports the smallest integer $k$ for which $4\rho(\Theta)^{-(2n+1)^k}$ falls below the number $(8\delta/3)^{1/4}$ appearing in the convergence criterion~(\ref{thetaconverged}). Here, we took $\delta = 10^{-16}$ and considered various choices of $n$ and $\Theta$. The integer $k$ so computed provides an estimate for the number of iterations one can expect~(\ref{zolo1}-\ref{zolo2}) to take to converge to $\sign(A)$ if $A$ has spectrum contained in $\mathbb{S}_\Theta$.
For comparison, we computed the number of iterations needed for the scalar Pad\'e iteration
\[
z_{k+1} = r_{2n+1}(z_k;0) = z_k p_n(z_k^2)
\]
to converge to $\sign z_0$, starting from $z_0 = e^{i\Theta}$. The results, reported in Table~\ref{tab:iterpade}, show that the Pad\'e iterations take significantly longer to converge if $\Theta$ is close to $\pi/2$. This suggests the matrix Pad\'e iteration~(\ref{pade}) will require a large number of iterations to converge to $\sign(A)$ if the spectral angle $\Theta(A)$ is close to $\pi/2$.
\subsection{Matrix Iteration}
To test Algorithm~\ref{alg:zolosign}, we computed the sign decomposition of four unitary matrices:
\begin{enumerate}
\item \label{mat1} A matrix sampled randomly from the Haar measure on the $m \times m$ unitary group.
\item \label{mat2} \verb$A = gallery('orthog',m,3)$. This is the $m$-point discrete Fourier transform matrix with entries $A_{jk} = e^{2\pi i (j-1)(k-1)/m} / \sqrt{m}$. Its eigenvalues are $1,-1,i,-i$. The spectrum of the floating point representation of $A$ therefore includes $O(u)$-perturbations of $\pm i$, posing a challenge to numerical algorithms for the unitary sign decomposition.
\item \label{mat3} \verb$A = circshift(eye(m),1)$. This is a permutation matrix with eigenvalues $e^{2\pi i j/m}$, $j=1,2,\dots,m$. For $m$ divisible by $4$, the spectrum of $A$ includes $\pm i$. The same is true of the floating point representation of $A$, since the entries of $A$ are integers.
\item \label{mat4} \verb$A = gallery('orthog',m,-2)$ (with columns normalized). The entries of $A$ (prior to normalizing columns) are $A_{jk} = \cos((k-1/2)(j-1)\pi/m)$. The spectrum of $A$ is clustered near $\pm 1$, making its sign decomposition somewhat easy to compute iteratively.
\end{enumerate}
\medskip
In our numerical experiment, we used $m=100$. The computed spectral angles for the matrices above were $\pi/2-\Theta(A) = 0.026$, $4.4 \times 10^{-16}$, $0$, and $0.95$, respectively.
On each of the matrices above, we compared 10 algorithms:
\begin{itemize}
\item Algorithm~\ref{alg:zolosign} with $n=1,4,8$.
\item The diagonal Pad\'e iteration~(\ref{pade}) with $n=1,4,8$. We implemented this by running Algorithm~\ref{alg:zolosign} with line~\ref{line:theta0} replaced by $\Theta_0=0$.
\item Three algorithms that compute the unitary factor $S$ in the polar decomposition of $B=(A+A^*)/2$. The first uses the Newton iteration with $1,\infty$-norm scaling, as described in~\cite[Section 8.6]{higham2008functions} and implemented in~\cite{Higham:MCT}. The second uses the Zolo-pd algorithm from~\cite{nakatsukasa2016computing}. The third computes $S$ as $S=UV^*$, where $B=U\Sigma V^*$ is the SVD of $B$. In all three cases, we applied post-processing to $S$ ($S=(S+S^*)/2$, followed by $S=S(3I-S^2)/2$, followed by $S=(S+S^*)/2$) and set $N=SA$.
\item A direct method: computing the eigendecomposition $A=V\Lambda V^*$ of $A$ and setting $S = V\sign(\Lambda)V^*$. We computed the eigendecomposition by using the MATLAB command \verb$schur(A,'complex')$ and setting the off-diagonal entries of the triangular factor to zero. We applied post-processing to $S$ ($S=S(3I-S^2)/2$ followed by $S=(S+S^*)/2$) and set $N=SA$.
\end{itemize}
\medskip
The results of the tests are reported in Table~\ref{tab:sign}. All of the algorithms under consideration performed in a backward stable way on the first and fourth matrices. On the second and third matrices (\verb$gallery('orthog',m,3)$ and \verb$circshift(eye(m),1)$), only the direct method and the structure-preserving iterations (Algorithm~\ref{alg:zolosign} and the Pad\'e iteration~(\ref{pade})) exhibited backward stability. Among the structure-preserving iterations, Algorithm~\ref{alg:zolosign} consistently converged more quickly than the Pad\'e iteration~(\ref{pade}) for each degree $n$. The reduction in iteration count was particularly noticeable for \verb$gallery('orthog',m,3)$ and \verb$circshift(eye(m),1)$.
\begin{table}[t]
\centering
\pgfplotstabletypeset[
every head row/.style={
after row=\midrule},
every nth row={10}{before row=\midrule},
columns={leftcol2,0,1,2,3,4,5},
create on use/leftcol2/.style={create col/set list={
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct}},
columns/leftcol/.style={string type,column type/.add={}{},column name={$\pi/2-\Theta(A)$}},
columns/leftcol2/.style={string type,column type/.add={}{},column name={Algorithm}},
columns/0/.style={column type/.add={}{},column name={$k$}},
columns/1/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|A-\widehat{S}\widehat{N}\|$}},
columns/2/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|\widehat{S}^2-I\|$}},
columns/3/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|\widehat{N}^*\widehat{N}-I\|$}},
columns/4/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|\widehat{N}^2-A^2\|$}},
columns/5/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\mu(\widehat{N})$}}
]
{sign.dat}
\caption{Performance of algorithms for computing the unitary sign decomposition of the matrices~\ref{mat1}-\ref{mat4}. The table reports, for each algorithm, the iteration count $k$, the backward errors $\|A-\widehat{S}\widehat{N}\|$, $\|\widehat{S}^2-I\|$, $\|\widehat{N}^*\widehat{N}-I\|$, $\|\widehat{N}^2-A^2\|$, and $\mu(\widehat{N})=\max\{0,-\min_{\lambda \in \Lambda(\widehat{N})} \Re\lambda\}$.}
\label{tab:sign}
\end{table}
\subsection{Unitary eigendecomposition}
Next, we tested Algorithm~\ref{alg:eig}, our spectral divide-and-conquer method, on the same four matrices. We implemented line~\ref{line:sign} of Algorithm~\ref{alg:eig} in nine different ways, namely, by using the nine indirect methods considered in the previous experiment. We compared the results with the following direct method: \verb$[V,Lambda]=schur(A,'complex'); Lambda = diag(diag(Lambda))$. The results are reported in Table~\ref{tab:eig}.
All of the algorithms under consideration performed in a backward stable way on the first, second, and fourth matrices. On the third matrix \verb$circshift(eye(m),1)$, the algorithms that used Zolo-pd and the SVD did not. Curiously, the algorithm that used the Newton iteration succeeded, but this is an anomaly. Changing \verb$circshift(eye(m),1)$ to \verb$circshift(eye(m),1)+eps*randn(m)$ leads to a backward error $\|A-\widehat{V}\widehat{\Lambda}\widehat{V}^*\|$ close to 0.1 for the Newton-based algorithm, and it has a negligible effect on the other algorithms' backward errors.
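The way the sign decomposition drives one splitting step of spectral divide-and-conquer can be illustrated with a minimal sketch. This is not Algorithm~\ref{alg:eig} itself: the sign function is taken directly from the eigendecomposition (standing in for line~\ref{line:sign}), and no recursion or spectrum rotation is performed.

```python
import numpy as np

def split_spectrum(A):
    # One divide step for a unitary A: S = sign(A) is Hermitian and
    # involutory, and its +1 / -1 eigenspaces are the invariant subspaces
    # of A for Re(lambda) > 0 and Re(lambda) < 0, respectively.
    lam, V = np.linalg.eig(A)
    S = V @ np.diag(np.where(lam.real >= 0, 1.0, -1.0)) @ np.linalg.inv(V)
    S = (S + S.conj().T) / 2.0                  # enforce Hermitian structure
    w, W = np.linalg.eigh(S)                    # eigenvalues cluster at +-1
    Q = np.hstack([W[:, w > 0], W[:, w < 0]])   # right-half-plane block first
    k = int((w > 0).sum())                      # size of the leading block
    return Q, Q.conj().T @ A @ Q, k             # Q^* A Q is block diagonal
```

Recursing on the two diagonal blocks (after rotating the spectrum so that no eigenvalue lies near the splitting line) yields the full eigendecomposition; the experiment above compares nine ways of realizing the sign step.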
\begin{table}[t]
\hspace{-0.25in}
\pgfplotstabletypeset[
every head row/.style={after row=\midrule},
every nth row={10}{before row=\midrule},
columns={leftcol2,0,1},
create on use/leftcol2/.style={create col/set list={
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct}},
columns/leftcol/.style={string type,column type/.add={}{},column name={$\frac{\pi}{2}-\Theta(A)$}},
columns/leftcol2/.style={string type,column type/.add={}{},column name={Algorithm}},
columns/0/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|A-\widehat{V}\widehat{\Lambda}\widehat{V}^*\|$}},
columns/1/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|\widehat{V}^*\widehat{V}-I\|$}}
]
{eig12.dat}
\hspace{0.1in}
\pgfplotstabletypeset[
every head row/.style={
after row=\midrule},
every nth row={10}{before row=\midrule},
columns={leftcol2,0,1},
create on use/leftcol2/.style={create col/set list={
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct,
Alg.~\ref{alg:zolosign} ($n=1$),Alg.~\ref{alg:zolosign} ($n=4$),Alg.~\ref{alg:zolosign} ($n=8$), Pad\'e ($n=1$), Pad\'e ($n=4$), Pad\'e ($n=8$), Polar (Newton), Polar (Zolo-pd), Polar (SVD), Direct}},
columns/leftcol/.style={string type,column type/.add={}{},column name={$\frac{\pi}{2}-\Theta(A)$}},
columns/leftcol2/.style={string type,column type/.add={}{},column name={Algorithm}},
columns/0/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|A-\widehat{V}\widehat{\Lambda}\widehat{V}^*\|$}},
columns/1/.style={sci,sci e,sci zerofill,precision=1,column type/.add={}{},column name={$\|\widehat{V}^*\widehat{V}-I\|$}}
]
{eig34.dat}
\caption{Performance of algorithms for computing the unitary eigendecomposition of the matrices~\ref{mat1}-\ref{mat2} (left) and~\ref{mat3}-\ref{mat4} (right). With the exception of the entries labeled ``Direct'', the entries reported in the first column refer to the algorithms for the unitary sign decomposition used in line~\ref{line:sign} of Algorithm~\ref{alg:eig}.}
\label{tab:eig}
\end{table}
\section{Conclusion} \label{sec:conclusion}
This paper constructed structure-preserving iterations for computing the unitary sign decomposition using rational minimax approximants of the scalar function $\sign(z)$ on the unit circle. Relative to other structure-preserving iterations, they converge significantly faster, and relative to non-structure-preserving iterations, they exhibit much better numerical stability. We used our iterations to construct a spectral divide-and-conquer algorithm for the unitary eigendecomposition.
\section{Conclusions}
\label{sec:conclusions}
The effect of using or not using the snapshot DQs in the generation of the POD basis (the $DQ$ and the $no\_DQ$ cases, respectively) was investigated theoretically and numerically.
The criterion used in this theoretical and numerical investigation was the rate of convergence with respect to $r$ of the POD-G-ROM solution to the exact solution, where $r$ is the number of POD basis functions used in the POD-G-ROM.
The error estimates in Section~\ref{sec:error} yielded the following conclusions:
In the $DQ$ case, the convergence rates were optimal in all three norms considered (the $C^0(L^2)$-norm, the $C^0(H^1)$-norm, and the $L^2(H^1)$-norm).
In the $no\_DQ$ case, the convergence rates were suboptimal in the $C^0(L^2)$-norm and in the $C^0(H^1)$-norm, and optimal in the $L^2(H^1)$-norm.
The numerical results in Section~\ref{sec:numerical} for the (linear) heat equation and the (nonlinear) Burgers equation confirmed the conclusions suggested by the theoretical error estimates in Section~\ref{sec:error}:
In the $DQ$ case, the convergence rates were superoptimal in the $C^0(L^2)$-norm, the $C^0(H^1)$-norm, and the $L^2(H^1)$-norm.
In the $no\_DQ$ case, the convergence rates were suboptimal in the $C^0(L^2)$-norm, the $C^0(H^1)$-norm, and the $L^2(H^1)$-norm.
The only departure from the theoretical conclusions was that, in the $no\_DQ$ case, the convergence rate in the $L^2(H^1)$-norm was suboptimal.
We emphasize that, for both the heat equation and the Burgers equation, {\it the convergence rates in the $DQ$ case in all three norms were much higher than (and usually at least twice as high as) the corresponding rates of convergence in the $no\_DQ$ case}.
The theoretical error estimates in Section~\ref{sec:error} and the numerical results in Section~\ref{sec:numerical} strongly suggest the following conjecture:
{\it ``The snapshot DQs should be used in the generation of the POD basis in order to achieve optimal rates of convergence with respect to $r$, the number of POD basis functions utilized in the POD-G-ROM.''}
We also conjecture that using the snapshot DQs in the generation of the POD basis could alleviate some of the degrading of convergence with respect to $r$ seen in, e.g., \cite{rowley2004model,carlberg2011efficient,baiges2012explicit,caiazzo2013numerical}.
We intend to investigate this conjecture in a future study.
\section{Error Estimates}
\label{sec:error}
In this section, we prove estimates for the error $ u - u_r $, where $u$ is the solution of the weak formulation of the heat equation \eqref{eqn:heat_weak} and $u_r$ is the solution of the POD-G-ROM~\eqref{eqn:pod_g_rom}.
Error estimates for the POD reduced order modeling of general systems were derived in, e.g., \cite{KV01,KV02,veroy2005certified,HPS07,LCNY08,rozza2008reduced,kalashnikova2010stability,drohmann2012reduced,urban2012new,amsallem2013error,sachs2013priori,herkt2013convergence}.
In our theoretical analysis, we consider two cases, depending on the type of snapshots used in the derivation of the POD basis: {\bf Case I}: $V = V^{DQ}$ (i.e., with the DQs); and {\bf Case II}: $V = V^{no\_DQ}$ (i.e., without the DQs).
The main goal of this paper is to investigate whether {\bf Case I}, {\bf Case II}, or both {\bf Case I} and {\bf Case II}, yield {\it error estimates that are optimal with respect to $r$}.
The optimality with respect to $r$ is given by the following error estimates:
\begin{eqnarray}
\| u - u_r \|
&\leq& C \, \| \eta^{interp} \| \, ,
\label{eqn:optimality_eta} \\
\| \nabla (u - u_r) \|
&\leq& C \, \| \nabla \eta^{interp}\| \, ,
\label{eqn:optimality_nabla_eta}
\end{eqnarray}
where $\eta^{interp}$ is the POD interpolation error defined in~\eqref{eqn:definition_interpolation_error}.
We emphasize that $\| \eta^{interp} \|$ and $\| \nabla \eta^{interp}\|$ scale differently with respect to $r$:
The scaling of $\| \eta^{interp} \|$ is given by the POD approximation property~\eqref{pod_error_formula_nodq_pointwise}--\eqref{pod_error_formula_dq_pointwise} in Assumption~\ref{assumption_pod_error_formula_pointwise}.
The scaling of $\| \nabla \eta^{interp} \|$ is {\it not} given by the POD approximation property~\eqref{pod_error_formula_nodq_pointwise}--\eqref{pod_error_formula_dq_pointwise} in Assumption~\ref{assumption_pod_error_formula_pointwise}.
To derive such an estimate, we use the fact that the interpolation error lives in a finite dimensional space, i.e., the space spanned by the snapshots.
Using an inverse estimate similar to that presented in Lemma~\ref{lemma_inverse_pod} but for the entire space of snapshots (of dimension $d$), we get the following estimate:
\begin{eqnarray}
\| \nabla \eta^{interp} \|_{L^2}
\leq C_{inv}(d) \, \| \eta^{interp} \|_{L^2} \, ,
\label{eqn:inverse_estimate_interpolation_error}
\end{eqnarray}
where $C_{inv}(d)$ is the analogue, for the $d$-dimensional snapshot space, of the constant in the inverse estimate in Lemma~\ref{lemma_inverse_pod}.
Following the discussion in Remark~\ref{remark_pod_inverse_estimate_scalings}, we conclude that the scaling of $\| \nabla \eta^{interp} \|$ is of lower order with respect to $r$ than the scaling of $\| \eta^{interp} \|$.
Thus, if the error analysis yields estimates of the form
\begin{eqnarray}
\| u - u_r \|
\leq C \, \| \nabla \eta^{interp}\| \, ,
\label{eqn:suboptimal_error_estimate}
\end{eqnarray}
these estimates will be called {\it suboptimal} with respect to $r$.
In this section, we investigate the optimality question from a theoretical point of view, by monitoring the dependency of the error estimates on $r$.
In Section~\ref{sec:numerical}, we investigate the same question from a numerical point of view.
We note that we perform the error analysis only for the {\it semidiscretization} of the POD-G-ROM~\eqref{eqn:pod_g_rom}.
In fact, in this semidiscretization, we only consider the error component corresponding to the POD truncation.
Of course, in practical numerical simulations, the semidiscretization also has a spatial component (e.g., due to the finite element discretization -- see Section~\ref{sec:numerical}).
Furthermore, when considering the full discretization, the error also has a time discretization component (e.g., due to the time stepping algorithm -- see Section~\ref{sec:numerical}).
All these error components should be included in a rigorous error analysis of the discretization of the POD-G-ROM~\eqref{eqn:pod_g_rom} (see, e.g., \cite{LCNY08,iliescu2012variational,iliescu2013variational}).
For clarity of presentation, however, we only consider the error component corresponding to the POD truncation.
In what follows, we will show that this is sufficient for answering the question asked in the title of this paper.
We start by introducing some notation and we list several results that will be used throughout this section.
We note that, since the POD basis is computed in the $L^2$-norm (see~\eqref{pod_min}), the POD mass matrix $M_{r} \in \mathbb{R}^{r \times r}$ with $M_{i j} = (\varphi_j , \varphi_i)$ is the identity matrix.
Thus, the POD inverse estimate that was proven in \cite{iliescu2012variational} (see also Lemma 2 and Remark 2 in~\cite{KV01}) becomes:
\begin{lemma}[POD Inverse Estimate]
Let $S_{r} \in \mathbb{R}^{r \times r}$ with $S_{i j} = (\nabla \varphi_j , \nabla \varphi_i)$ be the POD stiffness matrix and let $\| \cdot \|_2$ denote the matrix 2-norm.
Then, for all $v_r \in X^r$, the following POD inverse estimate holds:
\begin{eqnarray}
\| \nabla v_r \|_{L^2}
\leq C_{inv}(r) \, \| v_r \|_{L^2} \, ,
\label{lemma_inverse_pod_1}
\end{eqnarray}
where $C_{inv}(r) := \sqrt{\| S_r \|_2}$.
\label{lemma_inverse_pod}
\end{lemma}
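The inequality in Lemma~\ref{lemma_inverse_pod} is easy to verify in a discrete analogue. In the sketch below (our illustration, not the paper's setting), the ``gradient'' is a one-dimensional difference operator and a random orthonormal matrix $Q$ stands in for a POD basis whose mass matrix is the identity.

```python
import numpy as np

# Discrete analogue of the POD inverse estimate: if the basis Q is
# L^2-orthonormal (mass matrix Q^T Q = I), then for every v_r = Q c,
#   ||D v_r|| <= sqrt(||S_r||_2) ||v_r||,   with S_r = (DQ)^T (DQ).
rng = np.random.default_rng(0)
m, r = 200, 12
h = 1.0 / (m + 1)
D = (np.eye(m) - np.eye(m, k=-1)) / h              # difference "gradient"
Q, _ = np.linalg.qr(rng.standard_normal((m, r)))   # stand-in POD basis
S_r = (D @ Q).T @ (D @ Q)                          # POD stiffness matrix
C_inv = np.sqrt(np.linalg.norm(S_r, 2))            # sqrt of spectral norm
v = Q @ rng.standard_normal(r)                     # arbitrary v_r in X^r
lhs, rhs = np.linalg.norm(D @ v), C_inv * np.linalg.norm(v)
```

Since $\|Dv_r\| = \|DQc\| \leq \sigma_{\max}(DQ)\,\|c\|$ and $\|v_r\| = \|c\|$, the bound holds with equality attained in the worst case.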
\begin{remark}[POD Inverse Estimate Scalings]
\label{remark_pod_inverse_estimate_scalings}
Since the $r$ dependency of the error estimates presented in this section will be carefully monitored, we try to get some insight into the scalings of the constant $C_{inv}(r)$ in \eqref{lemma_inverse_pod_1}, i.e., the scalings of $\| S_r \|_2$ with respect to $r$, the dimension of the POD basis.
We note that, since the POD basis significantly varies from test case to test case, it would be difficult to derive a general scaling of $\| S_r \|_2$.
We emphasize, however, that when the underlying system is {\it homogeneous} (i.e., invariant to spatial translations), the POD basis is identical to the Fourier basis (see, e.g., \cite{HLB96}).
In this case, it is easy to derive the scalings of $\| S_r \|_2$ with respect to $r$.
Without loss of generality, we assume that the computational domain is $[0,1] \subset \mathbb{R}^{1}$ and that the boundary conditions are homogeneous Dirichlet.
In this case, the Fourier basis functions are given by the following formula: $\varphi_j(x) = \sin(j \, \pi \, x)$.
It is a simple calculation to show that the matrix $S_r$ is diagonal and that its diagonal entries are given by the following formula:
\begin{eqnarray}
S_{j j}
= \int_{0}^{1} ( j \, \pi)^2 \, \cos^2(j \, \pi \, x) \, dx
= \frac{1}{2} \, ( j \, \pi)^2 \, .
\label{eqn:pod_stiffness_matrix_entries_1d}
\end{eqnarray}
It is easy to see that, in the $n$-dimensional case, formula~\eqref{eqn:pod_stiffness_matrix_entries_1d} becomes
\begin{eqnarray}
S_{j j}
= \int_{\Omega} ( j \, \pi)^{2 \, n} \, \sin^2(j \, \pi \, x) \, dx
= \frac{1}{2} \, ( j \, \pi)^{2 \, n} \, .
\label{eqn:pod_stiffness_matrix_entries_nd}
\end{eqnarray}
Since the POD stiffness matrix $S_r$ is symmetric, its matrix 2-norm is given by $\| S_r \|_2 = \lambda_{max}$, where $\lambda_{max}$ is the largest eigenvalue of $S_r$.
Thus, in the $n$-dimensional case, we have
\begin{eqnarray}
\| S_r \|_2
= \frac{1}{2} \, ( r \, \pi)^{2 \, n}
= \mathcal{O}(r^{2 \, n}) \, .
\label{eqn:pod_stiffness_matrix_scaling_nd}
\end{eqnarray}
Thus, we conclude that, when the underlying system is homogeneous, the 2-norm of the POD stiffness matrix $S_r$ scales as $\mathcal{O}(r^{2 \, n})$, where $n$ is the spatial dimension.
As mentioned at the beginning of the remark, for general (non-homogeneous) systems it would be hard to derive theoretical scalings.
The numerical tests in Section~\ref{sec:numerical}, however, seem to confirm the theoretical scaling in \eqref{eqn:pod_stiffness_matrix_scaling_nd}.
For the heat equation and the Burgers equation (see Section~\ref{sec:numerical} for details regarding the numerical simulations), we monitor the scaling of $\|S_r\|_2$ with respect to $r$ for the POD-G-ROM~\eqref{eqn:pod_g_rom}.
We consider two cases: when the DQs are used in the generation of the POD basis (the corresponding results are denoted by DQ), and when the DQs are not used in the generation of the POD basis (the corresponding results are denoted by no-DQ).
The scalings are plotted in Figure \ref{fig:test1_Sr}.
It is seen that both the no-DQ and the DQ approaches yield scalings that are similar to the theoretical scaling~\eqref{eqn:pod_stiffness_matrix_scaling_nd} predicted for homogeneous flow fields (i.e., when the POD basis reduces to the Fourier basis).
The only exception seems to be for the Burgers equation in the DQ case.
In all cases, however, the scaling $\| S_r \|_2 = \mathcal{O}(r^{\alpha})$, where $\alpha$ is a positive constant, seems to be valid.
\begin{figure}[htp]
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Heat_Sr_wrt_r.eps}
\end{minipage}
\hspace{.3cm}
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Burg_Sr_wrt_r.eps}
\end{minipage}
\caption{
Heat equation (left); Burgers equation (right).
Plots of the scalings of $\|S_r\|_2$ with respect to $r$ when the DQs are used (denoted by DQ) and when the DQs are not used (denoted by no-DQ).
}
\label{fig:test1_Sr}
\end{figure}
\end{remark}
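The one-dimensional formula~\eqref{eqn:pod_stiffness_matrix_entries_1d} and the resulting scaling of $\|S_r\|_2$ can be checked directly by quadrature. This is a sketch; the grid resolution and the value of $r$ are arbitrary choices of ours.

```python
import numpy as np

# Quadrature check of S_jj = (j*pi)^2/2 for phi_j(x) = sin(j*pi*x) on [0,1],
# and of the scaling ||S_r||_2 = (r*pi)^2/2 = O(r^2) in one dimension.
r = 8
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
dphi = np.array([j * np.pi * np.cos(j * np.pi * x) for j in range(1, r + 1)])
w = np.full_like(x, dx)          # trapezoidal quadrature weights
w[0] = w[-1] = dx / 2.0
S = (dphi * w) @ dphi.T          # S_ij = \int_0^1 phi_i' phi_j' dx
spectral_norm = np.linalg.norm(S, 2)
```

The computed stiffness matrix is diagonal up to quadrature error, and its largest eigenvalue matches $(r\pi)^2/2$.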
\medskip
After these preliminaries, we are ready to derive the error estimates.
The error analysis will proceed along the same lines as the error analysis of the finite element semidiscretization~\cite{GR86,layton2008introduction,thomee2006galerkin}.
The main difference between the finite element and the POD settings is that the finite element approximation property is {\it global} \cite{GR86,layton2008introduction}, whereas the POD approximation property is {\it local}, i.e., it is only valid at the time instances at which the snapshots were taken (see~\eqref{pod_error_formula_nodq}-\eqref{pod_error_formula_dq_pointwise}).
Thus, in order to be able to use the POD approximation property~\eqref{pod_error_formula_nodq}-\eqref{pod_error_formula_dq_pointwise}, in what follows we assume that the final error estimates for the semidiscretization are considered only at the time instances $t_i, \, i =1, \ldots, N$.
We start by considering the error equation:
\begin{eqnarray}
( e_t , v_r )
+ \nu \, ( \nabla e , \nabla v_r )
= 0
\qquad
\forall \, v_r \in X^r ,
\label{s_error_semi_eq_1}
\end{eqnarray}
where
$e := u - u_r$ is the error.
The error is split into two parts:
\begin{eqnarray}
e
= u - u_r
= ( u - w_r )
- ( u_r - w_r )
= \eta
- \phi_r ,
\label{s_error_semi_eq_2}
\end{eqnarray}
where
$w_r$ is an {\it arbitrary} function in $X^r$,
$\eta := u - w_r$, and
$\phi_r := u_r - w_r$.
Using this decomposition in the error equation \eqref{s_error_semi_eq_1}, we get
\begin{eqnarray}
( \phi_{r,t} , v_r )
+ \nu \, ( \nabla \phi_r , \nabla v_r )
= ( \eta_t , v_r )
+ \nu \, ( \nabla \eta , \nabla v_r ) .
\label{s_error_semi_eq_2a}
\end{eqnarray}
The analysis proceeds by using \eqref{s_error_semi_eq_2a} to show that
\begin{eqnarray}
\| \phi_r \|
\leq C \, \| \eta \| .
\label{s_error_semi_eq_3}
\end{eqnarray}
Using the triangle inequality, one then gets
\begin{eqnarray}
\| e \|
\leq \| \eta \|
+ \| \phi_r \|
\leq (1 + C) \, \| \eta \| .
\label{s_error_semi_eq_4}
\end{eqnarray}
Since $w_r$ was chosen arbitrarily and since the LHS does not depend on $w_r$, we can take the infimum over $w_r$ in \eqref{s_error_semi_eq_4} and get the following error estimate:
\begin{eqnarray}
\| e \|
\leq (1 + C) \, \inf_{w_r \in X^r} \| u - w_r \| .
\label{s_error_semi_eq_5}
\end{eqnarray}
At this stage, one invokes the approximability property of the space $X^r$ in \eqref{s_error_semi_eq_5} and concludes that the error estimate~\eqref{s_error_semi_eq_5} is {\it optimal with respect to $r$}.
That is, the error in~\eqref{s_error_semi_eq_5} is, up to a constant, the interpolation error in $X^r$.
There are many variations of the error analysis sketched above, but most existing error analyses follow this path.
The main variations are the following:
(i) the choice of the arbitrary function $w_r \in X^r$;
(ii) the norm $\| \cdot \|$ used above; and
(iii) the choice of the test function used in the error equation to get \eqref{s_error_semi_eq_3} \cite{thomee2006galerkin}.
In the remainder of this section, we investigate whether error estimates that are {\it optimal with respect to $r$} can be obtained with or without including the DQs in the set of snapshots.
To this end, in Section~\ref{sec:case_I} we consider the case in which the DQs are included in the set of snapshots (i.e., $V = V^{DQ}$).
Then, in Section~\ref{sec:case_II} we consider the case in which the DQs are not included in the set of snapshots (i.e., $V = V^{no\_DQ}$).
\subsection{Case I ($V = V^{DQ}$)}
\label{sec:case_I}
The standard approach used to prove error estimates in this case is to use the Ritz projection~\cite{KV01,KV02,iliescu2012variational,iliescu2013variational}.
We note that this is the standard approach used in the finite element context~\cite{GR86,layton2008introduction}.
As pointed out in \cite{thomee2006galerkin}, the Ritz projection was first used by Wheeler in \cite{wheeler1973priori} to obtain {\it optimal} error estimates for the finite element discretization of parabolic problems.
We start by choosing $w_r := R_r(u)$ in~\eqref{s_error_semi_eq_2}, where $R_r(u)$ is the {\it Ritz projection} of $u$, given by:
\begin{eqnarray}
\bigl( \nabla ( u - R_r(u) ) , \nabla v_r \bigr)
= 0
\qquad \forall \, v_r \in X^r .
\label{s_error_semi_eq_6}
\end{eqnarray}
To emphasize that we are using the Ritz projection, in the remainder of Section~\ref{sec:case_I} we will use the notation $\eta = u - R_r(u) = \eta^{Ritz}$.
Using \eqref{s_error_semi_eq_6}, \eqref{s_error_semi_eq_2a} becomes
\begin{eqnarray}
( \phi_{r,t} , v_r )
+ \nu \, ( \nabla \phi_r , \nabla v_r )
= ( \eta^{Ritz}_t , v_r )
+ \nu \, \cancelto{0}{( \nabla \eta^{Ritz} , \nabla v_r )} .
\label{s_error_semi_eq_7}
\end{eqnarray}
It is the cancelation of the last term on the RHS of \eqref{s_error_semi_eq_7} that yields {\it optimal} error estimates.
We let $v_r := \phi_r$ in \eqref{s_error_semi_eq_7}, and then we apply the Cauchy-Schwarz inequality to the remaining term on the RHS:
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
+ \nu \, \| \nabla \phi_r \|^2
\leq \| \eta^{Ritz}_t \| \, \| \phi_r \| .
\label{s_error_semi_eq_8}
\end{eqnarray}
We rewrite the first term on the LHS of \eqref{s_error_semi_eq_8} as
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
= \| \phi_r \| \, \frac{d}{dt} \| \phi_r \| .
\label{s_error_semi_eq_9}
\end{eqnarray}
We apply the Poincar\'e-Friedrichs inequality to the second term on the LHS of \eqref{s_error_semi_eq_8}:
\begin{eqnarray}
\nu \, \| \nabla \phi_r \|^2
\geq C \, \nu \, \| \phi_r \|^2 .
\label{s_error_semi_eq_10}
\end{eqnarray}
We note that the Poincar\'e-Friedrichs inequality $C \| v \|^2 \leq \| \nabla v \|^2$ holds for every function $v$ in the {\it continuous} space $H_0^1(\Omega)$, and, in particular, for $\phi_r \in X^r \subset X = H_0^1(\Omega)$ (see equation (3) in \cite{KV01}).
Thus, the constant $C$ in \eqref{s_error_semi_eq_10} does not depend on $r$.
Using \eqref{s_error_semi_eq_9} and \eqref{s_error_semi_eq_10} in \eqref{s_error_semi_eq_8}, and dividing through by $\| \phi_r \|$, we get
\begin{eqnarray}
\frac{d}{dt} \| \phi_r \|
+ C \, \nu \, \| \phi_r \|
\leq \| \eta^{Ritz}_t \| .
\label{s_error_semi_eq_11}
\end{eqnarray}
Using Gronwall's lemma in \eqref{s_error_semi_eq_11}, we get for $0 < t \leq T$
\begin{eqnarray}
\| \phi_r (t) \|
\leq e^{-C \, \nu \, t} \, \| \phi_r(0) \|
+ \int_0^t e^{-C \, \nu \, (t-s)} \, \| \eta^{Ritz}_t(s) \| \, ds .
\label{s_error_semi_eq_12}
\end{eqnarray}
Using the decomposition~\eqref{s_error_semi_eq_2} and the triangle inequality, the first term on the RHS of \eqref{s_error_semi_eq_12} can be estimated as follows:
\begin{eqnarray}
\| \phi_r(0) \|
\leq \| e(0) \|
+ \| \eta^{Ritz}(0) \| .
\label{s_error_semi_eq_13}
\end{eqnarray}
Thus, \eqref{s_error_semi_eq_12} becomes
\begin{eqnarray}
\| \phi_r (t) \|
\leq e^{-C \, \nu \, t} \, \biggl( \| e(0) \|
+ \| \eta^{Ritz}(0) \| \biggr)
+ \int_0^t e^{-C \, \nu \, (t-s)} \, \| \eta^{Ritz}_t(s) \| \, ds .
\label{s_error_semi_eq_14}
\end{eqnarray}
Applying the triangle inequality, just as in \eqref{s_error_semi_eq_4}, we get
\begin{eqnarray}
\| e(t) \|
\leq \| \eta^{Ritz}(t) \|
+ e^{-C \, \nu \, t} \, \biggl( \| e(0) \|
+ \| \eta^{Ritz}(0) \| \biggr)
+ \int_0^t e^{-C \, \nu \, (t-s)} \, \| \eta^{Ritz}_t(s) \| \, ds .
\nonumber \\
\label{s_error_semi_eq_15}
\end{eqnarray}
Estimate~\eqref{s_error_semi_eq_15} shows that, as long as the Ritz projection~\eqref{s_error_semi_eq_6} yields estimates for $\| \eta^{Ritz}(t) \|$ (including for $t=0$) and $\| \eta^{Ritz}_t(t) \|$ that are {\it optimal} with respect to $r$, the estimates for the POD error $e$ are also optimal with respect to $r$.
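Gronwall-type bounds of this kind can be sanity-checked numerically. The sketch below integrates a generic differential inequality $y'(t) + c\,y(t) \leq f(t)$ by explicit Euler and compares $y(T)$ with the bound $e^{-cT}y(0) + \int_0^T e^{-c(T-s)} f(s)\,ds$; the data ($c$, $f$, the slack term) are made up for illustration and are not tied to the heat-equation setting (and the constants in the display above depend on how its factors are carried through).

```python
import numpy as np

# Integrate y' = -c*y + f(t) - slack(t) with slack >= 0, so that y
# satisfies the differential inequality y' + c*y <= f(t), and compare
# y(T) with the Gronwall bound e^{-c T} y(0) + int_0^T e^{-c(T-s)} f ds.
c, T, n = 2.0, 1.0, 20000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
f = 1.0 + np.sin(5.0 * t)               # stand-in forcing, f >= 0
slack = 0.3 * (1.0 + np.cos(3.0 * t))   # nonnegative slack
y = np.empty(n + 1)
y[0] = 0.7
for k in range(n):                      # explicit Euler step
    y[k + 1] = y[k] + dt * (-c * y[k] + f[k] - slack[k])
g = np.exp(-c * (T - t)) * f            # integrand of the Gronwall bound
integral = dt * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])
bound = np.exp(-c * T) * y[0] + integral
```

The trajectory stays below the bound, with a gap equal (up to discretization error) to the integrated slack.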
\begin{remark}[POD Ritz Projection]
\label{remark_ritz}
In the finite element context, both $\| \eta^{Ritz} \|$ and $\| \eta^{Ritz}_t \|$ are optimal with respect to the mesh size $h$.
In the POD context, however, this is not that clear.
To the best of the authors' knowledge, the state-of-the-art regarding the Ritz projection in a POD context is given in the pioneering paper of Kunisch and Volkwein~\cite{KV01}.
Since the Ritz projection plays such an important role in this paper, we summarize below the results in~\cite{KV01}.
The main result in \cite{KV01} regarding the Ritz projection is Lemma 3 (see also (10) and (11) in \cite{KV01}), which, in our notation, states the following:
\begin{eqnarray}
\| \nabla \eta^{Ritz} \|^ 2
\leq \| S_d \|_2 \, \sum \limits_{j=r+1}^{d} \lambda_j \, .
\label{eqn:kv01_lemma3_2}
\end{eqnarray}
For clarity of presentation, we have not included in \eqref{eqn:kv01_lemma3_2} the constants that do not depend on $r$.
Thus, the following relationship between the Ritz projection error and the POD interpolation error holds:
\begin{eqnarray}
\| \nabla \eta^{Ritz} \|
\leq C \, \sqrt{\| S_d \|_2} \, \| \eta^{interp} \| \, .
\label{eqn:relationship_ritz_interpolation_nabla_eta}
\end{eqnarray}
The scaling in \eqref{eqn:relationship_ritz_interpolation_nabla_eta} suggests that the Ritz projection yields optimal error estimates with respect to $r$ in the $H^1$-seminorm (see \eqref{eqn:inverse_estimate_interpolation_error}).
We emphasize that Lemma 3 in \cite{KV01} does not include any bounds for the $L^2$-norm of $\eta^{Ritz}$, i.e., for $\| \eta^{Ritz} \|$.
This is in clear contrast with the finite element context, in which $\| \eta^{Ritz} \|$ is estimated by the usual duality argument (the Aubin-Nitsche ``trick,'' see, e.g., \cite{thomee2006galerkin}).
Using a duality argument, however, is challenging in the POD context, since any auxiliary dual problem would not necessarily inherit the POD approximation property~\eqref{pod_error_formula_nodq}--\eqref{pod_error_formula_dq_pointwise}.
To the best of the authors' knowledge, such a duality argument has never been used in a POD context.
We emphasize that not being able to use a duality argument in the Ritz projection to get error estimates that are optimal with respect to $r$ has significant consequences in the error analysis.
Indeed, in the proof of Theorem 7 in \cite{KV01} (the error estimate for the backward Euler time discretization), to estimate the $\| \eta^{Ritz} \|$ error component in equations (27a) and (27b), the authors use the $\| \nabla \eta^{Ritz} \|$ estimate given in Lemma 3 and the Poincar\'e--Friedrichs inequality given in equation (3).
Since the Poincar\'e--Friedrichs constant does not depend on $r$, we conclude that $\| \eta^{Ritz}\|$ and $\| \nabla \eta^{Ritz} \|$ have the same order.
This, in turn, suggests that $\| \eta^{Ritz} \|$ is suboptimal with respect to $r$.
We note that the same approach (i.e., Lemma 4 and the Poincar\'e--Friedrichs inequality) is used in~\cite{KV01} to estimate the DQ approximation of $\| \eta^{Ritz}_t \|$ (see the two inequalities above equation (29a)).
Thus, the analysis in \cite{KV01} suggests that the estimates for $\| \eta^{Ritz} \|$ and $\| \eta^{Ritz}_t \|$ are suboptimal with respect to $r$.
The numerical tests in Section~\ref{sec:numerical} of this report, however, suggest that these estimates are actually optimal, just as in the finite element case.
In the analysis that follows, we will use the insight from the numerical results in Section~\ref{sec:numerical} and
make the following assumption:
\begin{assumption}
We assume that the POD Ritz projection error $\eta^{Ritz}$ satisfies optimal error estimates with respect to $r$ in the $L^2$-norm:
\begin{eqnarray}
\| \eta^{Ritz} \|
&\leq& C \, \| \eta^{interp} \| \, ,
\label{eqn:relationship_ritz_interpolation_eta} \\
\| \eta^{Ritz}_t \|
&\leq& C \, \| \eta^{interp} \| \, .
\label{eqn:relationship_ritz_interpolation_eta_t}
\end{eqnarray}
\label{assumption_ritz}
\end{assumption}
\end{remark}
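Assumption~\ref{assumption_ritz} can be probed numerically. The following self-contained Python sketch is our own construction, not code from \cite{KV01}: all discretization choices (a 1D grid with homogeneous Dirichlet boundary conditions, a lumped-mass Euclidean stand-in for the $L^2$ inner product, and a standard finite difference stiffness matrix) are assumptions made for illustration. It compares the POD Ritz projection error with the POD $L^2$ (interpolation) projection error for one snapshot.

```python
import numpy as np

# Sketch (our own discretization choices): compare the POD Ritz (H^1) projection
# error with the POD L^2 (interpolation) projection error for one snapshot.
N = 200                                   # interior grid points, homogeneous Dirichlet
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
t = np.linspace(0.0, 1.0, 101)

c = 100.0                                 # steep moving front, as in the numerical section
U = np.array([np.sin(np.pi * x) *
              (np.arctan(c / 25.0 - c * (x - s / 2.0) ** 2) / np.pi + 0.5)
              for s in t]).T              # snapshot matrix, shape (N, 101)

Phi_full, sig, _ = np.linalg.svd(U, full_matrices=False)
lam = sig ** 2                            # POD eigenvalues (lumped-mass L^2 setting)

# Finite difference stiffness matrix (discrete H^1 seminorm)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2

r = 10
Phi = Phi_full[:, :r]
u = U[:, 50]                              # one snapshot to project

u_interp = Phi @ (Phi.T @ u)              # L^2 projection = POD interpolant
c_ritz = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ (A @ u))
u_ritz = Phi @ c_ritz                     # Ritz (H^1) projection

err_interp = np.linalg.norm(u - u_interp)
err_ritz = np.linalg.norm(u - u_ritz)
print(err_ritz / err_interp)              # bounded ratio <=> optimal L^2 Ritz estimate
```

Since the $L^2$ projection is the best approximation in the (lumped) $L^2$ norm, the printed ratio is at least $1$; Assumption~\ref{assumption_ritz} amounts to the statement that this ratio stays bounded as $r$ grows.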
\subsection{Case II ($V = V^{no\_DQ}$)}
\label{sec:case_II}
This approach was used in \cite{chapelle2012galerkin,singler2012new}.
The motivation for this approach is the following:
In Case I ($V = V^{DQ}$), the first term on the RHS of \eqref{s_error_semi_eq_7}, $( \eta_t , v_r)$, yields a term $\| \eta_t \|$ that stays in all the subsequent inequalities, including the final error estimate \eqref{s_error_semi_eq_15}.
Chapelle et al. proposed in \cite{chapelle2012galerkin} a different approach that eliminated the $( \eta_t , v_r)$ term in \eqref{s_error_semi_eq_7}.
Their approach was straightforward: Instead of using the Ritz projection (as in Case I), they used the $L^2$ projection.
That is, they chose $w_r := P_r(u)$, where $P_r(u)$ is the {\it $L^2$ projection} of $u$, given by
\begin{eqnarray}
\bigl( u - P_r(u) , v_r \bigr)
= 0
\qquad \forall \, v_r \in X^r .
\label{s_error_semi_eq_16}
\end{eqnarray}
To emphasize that we are using the $L^2$ projection, in the remainder of Section~\ref{sec:case_II} we will use the notation $\eta = u - P_r(u) = \eta^{L^2}$.
We note that the POD $L^2$ projection error $\eta^{L^2}$ is exactly the POD interpolation error defined in~\eqref{eqn:definition_interpolation_error}:
\begin{eqnarray}
\eta^{L^2}
= \eta^{interp} \, .
\label{s_error_semi_eq_16.5}
\end{eqnarray}
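The identity \eqref{s_error_semi_eq_16.5} is immediate once the POD modes are orthonormal: the $L^2$ projection onto $X^r$ is exactly the truncated POD expansion of the projected function. A toy Python check (the Euclidean inner product on a random snapshot matrix stands in for the $L^2$ inner product; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((120, 40))        # toy snapshot matrix (space x time)
Phi, sig, Vt = np.linalg.svd(U, full_matrices=False)

r = 8
Phi_r = Phi[:, :r]
u = U[:, 5]

# L^2 projection onto the POD space X^r: P_r u = Phi_r Phi_r^T u
P_r_u = Phi_r @ (Phi_r.T @ u)

# POD "interpolant": truncated POD expansion of the same snapshot
u_interp = sum((Phi[:, j] @ u) * Phi[:, j] for j in range(r))

print(np.allclose(P_r_u, u_interp))       # True: eta^{L^2} = eta^{interp}
```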
Next, we show how the error analysis in Case I changes with $w_r = P_r(u)$ as in \cite{chapelle2012galerkin} (see also \cite{singler2012new}).
Using \eqref{s_error_semi_eq_16}, \eqref{s_error_semi_eq_2a} becomes
\begin{eqnarray}
( \phi_{r,t} , v_r )
+ \nu \, ( \nabla \phi_r , \nabla v_r )
= \cancelto{0}{( \eta^{L^2}_t , v_r )}
+ \nu \, ( \nabla \eta^{L^2} , \nabla v_r ) .
\label{s_error_semi_eq_17}
\end{eqnarray}
We emphasize that it is the cancellation of the first term on the RHS of \eqref{s_error_semi_eq_17} that yields error estimates that do not require the DQs.
We let $v_r := \phi_r$ in \eqref{s_error_semi_eq_17} and apply the Cauchy--Schwarz inequality to the remaining term on the RHS:
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
+ \nu \, \| \nabla \phi_r \|^2
\leq \nu \, \| \nabla \eta^{L^2} \| \, \| \nabla \phi_r \| .
\label{s_error_semi_eq_18}
\end{eqnarray}
The error analysis can then proceed in several directions.
\subsubsection{Approach II.A}
\label{sec:approach_II.A}
One approach is to use Young's inequality in \eqref{s_error_semi_eq_18} to get
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
+ \nu \, \| \nabla \phi_r \|^2
\leq \frac{\nu}{2} \, \| \nabla \eta^{L^2} \|^2
+ \frac{\nu}{2} \, \| \nabla \phi_r \|^2 ,
\label{s_error_semi_eq_19}
\end{eqnarray}
which implies
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
+ \frac{\nu}{2} \, \| \nabla \phi_r \|^2
\leq \frac{\nu}{2} \, \| \nabla \eta^{L^2} \|^2 .
\label{s_error_semi_eq_20}
\end{eqnarray}
Since the second term on the LHS of \eqref{s_error_semi_eq_20} is nonnegative, we can drop it to get
\begin{eqnarray}
\frac{d}{dt} \| \phi_r \|^2
\leq \nu \, \| \nabla \eta^{L^2} \|^2 .
\label{s_error_semi_eq_21}
\end{eqnarray}
Using~\eqref{s_error_semi_eq_16.5} and~\eqref{eqn:suboptimal_error_estimate}, we conclude that Approach II.A will yield error estimates that are suboptimal with respect to $r$.
We also note in passing that estimate~\eqref{s_error_semi_eq_21} suggests that Approach II.A will yield error estimates that are suboptimal with respect to $h$ as well.
\subsubsection{Approach II.B}
\label{sec:approach_II.B}
The other way of continuing from \eqref{s_error_semi_eq_18} is to apply the POD inverse estimate~\eqref{lemma_inverse_pod_1}:
\begin{eqnarray}
\| \nabla \phi_r \|
\leq C_{inv}(r) \, \| \phi_r \| ,
\label{s_error_semi_eq_22}
\end{eqnarray}
where
$C_{inv}(r) := \sqrt{\| S_r \|_2 }$.
Using~\eqref{s_error_semi_eq_22} in \eqref{s_error_semi_eq_18} yields
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|^2
+ \nu \, \| \nabla \phi_r \|^2
\leq C_{inv}(r) \, \nu \, \| \nabla \eta^{L^2} \| \, \| \phi_r \| .
\label{s_error_semi_eq_23}
\end{eqnarray}
Dropping $\nu \, \| \nabla \phi_r \|^2$ in \eqref{s_error_semi_eq_23} and dividing the resulting inequality by $\| \phi_r \|$ (as we did in \eqref{s_error_semi_eq_12}), we get
\begin{eqnarray}
\frac{1}{2} \, \frac{d}{dt} \| \phi_r \|
\leq C_{inv}(r) \, \nu \, \| \nabla \eta^{L^2} \| .
\label{s_error_semi_eq_24}
\end{eqnarray}
Comparing estimate \eqref{s_error_semi_eq_24} with estimate \eqref{s_error_semi_eq_21} in Approach II.A, we note that both estimates have $\| \nabla \eta^{L^2} \|$ on the RHS.
In addition, estimate \eqref{s_error_semi_eq_24} has $C_{inv}(r)$ on the RHS, which increases the suboptimality with respect to $r$ (see Remark~\ref{remark_pod_inverse_estimate_scalings}).
Thus, estimate~\eqref{s_error_semi_eq_24} suggests that Approach II.B yields estimates that are suboptimal with respect to $r$ (and $h$), just as Approach II.A.
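The POD inverse estimate \eqref{s_error_semi_eq_22} used above is easy to verify numerically. In the sketch below (our own setup, with synthetic snapshot data and a finite difference stiffness matrix $A$), $S_r$ is the POD stiffness matrix $\Phi_r^T A \Phi_r$; with Euclidean-orthonormal modes, $v_r = \Phi_r c$ gives $\|\nabla v_r\|^2 = c^T S_r c \leq \|S_r\|_2 \, |c|^2$, so the ratio $\|\nabla v_r\| / \|v_r\|$ never exceeds $C_{inv}(r) = \sqrt{\|S_r\|_2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
h = 1.0 / 201
x = np.linspace(h, 1.0 - h, N)

# Synthetic snapshots: smooth sine modes with decaying amplitudes
U = np.array([np.sin((k + 1) * np.pi * x) / (k + 1) ** 2 for k in range(30)]).T
Phi, _, _ = np.linalg.svd(U, full_matrices=False)

# Finite difference stiffness matrix A (discrete H^1 seminorm)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2

r = 12
Phi_r = Phi[:, :r]
S_r = Phi_r.T @ A @ Phi_r                     # POD stiffness matrix
C_inv = np.sqrt(np.linalg.norm(S_r, 2))       # C_inv(r) = sqrt(||S_r||_2)

# Check ||grad v_r|| <= C_inv(r) ||v_r|| for random elements of the POD space
worst = 0.0
for _ in range(100):
    coef = rng.standard_normal(r)
    v = Phi_r @ coef
    worst = max(worst, np.sqrt(v @ (A @ v)) / np.linalg.norm(v))
print(worst <= C_inv + 1e-8)                  # True
```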
\subsubsection{Approach II.C}
\label{sec:approach_II.C}
Since both Approach II.A and Approach II.B yield error estimates that are suboptimal with respect to $r$ in the $L^2$-norm, one can try instead to prove optimal error estimates in the $H^1$-seminorm.
To this end, we use the approach in \cite{thomee2006galerkin} and, instead of choosing $v_r := \phi_r$ in \eqref{s_error_semi_eq_17}, we choose $v_r := \phi_{r,t}$:
\begin{eqnarray}
\| \phi_{r,t} \|^2
+ \frac{\nu}{2} \, \frac{d}{dt} \| \nabla \phi_r \|^2
\leq \nu \,\| \nabla \eta^{L^2} \| \, \| \nabla \phi_{r,t} \| .
\label{s_error_semi_eq_25}
\end{eqnarray}
Applying Young's inequality and the POD inverse estimate~\eqref{s_error_semi_eq_22} in \eqref{s_error_semi_eq_25}, we get
\begin{eqnarray}
\| \phi_{r,t} \|^2
+ \frac{\nu}{2} \, \frac{d}{dt} \| \nabla \phi_r \|^2
&\leq& \nu \, \| \nabla \eta^{L^2} \| \, \| \nabla \phi_{r,t} \|
\nonumber \\
&\leq& \frac{\nu^2}{2} \, C_{inv}(r)^2 \, \| \nabla \eta^{L^2} \|^2
+ \frac{1}{2 \, C_{inv}(r)^2} \, \| \nabla \phi_{r,t} \|^2
\nonumber \\
&\stackrel{\eqref{s_error_semi_eq_22}}{\leq}& \frac{\nu^2}{2} \, C_{inv}(r)^2 \, \| \nabla \eta^{L^2} \|^2
+ \frac{1}{2} \, \| \phi_{r,t} \|^2 ,
\label{s_error_semi_eq_26}
\end{eqnarray}
which implies
\begin{eqnarray}
\frac{d}{dt} \| \nabla \phi_r \|^2
\leq \nu \, C_{inv}(r)^2 \, \| \nabla \eta^{L^2} \|^2 .
\label{s_error_semi_eq_27}
\end{eqnarray}
In contrast with estimate~\eqref{s_error_semi_eq_21} in Approach II.A and estimate~\eqref{s_error_semi_eq_24} in Approach II.B, estimate~\eqref{s_error_semi_eq_27} bounds the error in the $H^1$-seminorm, for which one could hope for optimal rates with respect to $r$.
Like estimates~\eqref{s_error_semi_eq_21} and~\eqref{s_error_semi_eq_24}, estimate~\eqref{s_error_semi_eq_27} contains the term $\| \nabla \eta^{L^2} \|$ on the RHS.
This term, however, does not cause any problems, since now we are considering the $H^1$-seminorm of the error.
The factor $C_{inv}(r)^2$ in~\eqref{s_error_semi_eq_27}, however, increases the suboptimality with respect to $r$ (see Remark~\ref{remark_pod_inverse_estimate_scalings}).
Thus, estimate~\eqref{s_error_semi_eq_27} suggests that Approach II.C yields estimates that are suboptimal with respect to $r$, just as Approaches II.A and II.B.
Since for {\bf Case I ($V = V^{DQ}$)} in Section~\ref{sec:case_I} we did not prove error estimates in the $H^1$-norm, for a fair comparison with Approach II.C we prove these error estimates below.
To this end, we let $v_r := \phi_{r,t}$ in \eqref{s_error_semi_eq_7}:
\begin{eqnarray}
( \phi_{r,t} , \phi_{r,t} )
+ \nu \, ( \nabla \phi_r , \nabla \phi_{r,t} )
= ( \eta^{Ritz}_t , \phi_{r,t} ) .
\label{s_error_semi_eq_28}
\end{eqnarray}
Applying Young's inequality on the RHS of \eqref{s_error_semi_eq_28}, we get
\begin{eqnarray}
\| \phi_{r,t} \|^2
+ \frac{\nu}{2} \, \frac{d}{dt} \| \nabla \phi_r \|^2
&\leq& \| \eta^{Ritz}_t \| \, \| \phi_{r,t} \|
\nonumber \\
&\leq& \frac{1}{2} \, \| \eta^{Ritz}_t \|^2
+ \frac{1}{2} \, \| \phi_{r,t} \|^2 ,
\label{s_error_semi_eq_29}
\end{eqnarray}
which implies
\begin{eqnarray}
\frac{d}{dt} \| \nabla \phi_r \|^2
\leq \frac{1}{\nu} \, \| \eta^{Ritz}_t \|^2 .
\label{s_error_semi_eq_30}
\end{eqnarray}
Comparing \eqref{s_error_semi_eq_30} with \eqref{s_error_semi_eq_27} in Approach II.C, we note that \eqref{s_error_semi_eq_30} does not contain the factor $C_{inv}(r)^2$.
Thus, {\bf Case I ($V = V^{DQ}$)} in Section~\ref{sec:case_I} yields optimal error estimates with respect to $r$, as opposed to Approach II.C.
We also note that, at first glance, estimate~\eqref{s_error_semi_eq_30} suggests that one can get superconvergence in the $H^1$-seminorm.
As mentioned in~\cite{thomee2006galerkin}, however, when applying the triangle inequality one obtains the expected convergence rate:
\begin{eqnarray}
\| \nabla e \|
\leq \| \nabla \eta^{Ritz} \|
+ \| \nabla \phi_r \| .
\label{s_error_semi_eq_31}
\end{eqnarray}
\subsubsection{Approach II.D}
\label{sec:approach_II.D}
Approaches II.A, II.B, and II.C suggest that the pointwise in time (i.e., in the $C^0(0,T; L^2(\Omega))$-norm and the $C^0(0,T; H^1(\Omega))$-norm) error estimates are suboptimal with respect to $r$.
Thus, we derive error estimates in the solution norm (i.e., in the $L^2(0,T; H^1(\Omega))$-norm).
Integrating~\eqref{s_error_semi_eq_20} from $0$ to $T$ and multiplying by $2$, we get
\begin{eqnarray}
\| \phi_r(T) \|^2
+ \nu \, \int_{0}^{T} \| \nabla \phi_r(s) \|^2 \, ds
\leq \| \phi_r(0) \|^2
+ \nu \, \int_{0}^{T} \| \nabla \eta^{L^2}(s) \|^2 \, ds .
\label{s_error_semi_eq_23b}
\end{eqnarray}
Using \eqref{s_error_semi_eq_16.5}, \eqref{eqn:optimality_eta}, and \eqref{eqn:optimality_nabla_eta}, we conclude that the estimate in \eqref{s_error_semi_eq_23b} is optimal with respect to $r$.
We note that Proposition 3.3 in \cite{chapelle2012galerkin} yields a similar estimate.
\bigskip
Case II (i.e., $V = V^{no\_DQ}$) yields the following general conclusions when the $L^2$ projection is used instead of the standard Ritz projection:
If the error is computed pointwise in time (i.e., in the $C^0(0,T; L^2(\Omega))$-norm and the $C^0(0,T; H^1(\Omega))$-norm), then the error estimates are suboptimal with respect to $r$.
This is the main message of Approaches II.A, II.B, and II.C.
If, however, the error is computed in the solution norm (i.e., in the $L^2(0,T; H^1(\Omega))$-norm), then the error estimates are optimal with respect to $r$.
This is the main message of Approach II.D.
\bigskip
\section{Introduction}
\label{s_introduction}
This paper addresses the following question:
{\it ``Should the time difference quotients (DQs) of the snapshots be used in the generation of the Proper Orthogonal Decomposition (POD) basis functions?"}
We emphasize that this is an important question.
There are two schools of thought: one uses the DQs (see, e.g., \cite{KV01,KV02,chaturantabut2010nonlinear,chaturantabut2012state}), while the other does not (see, e.g., \cite{LCNY08,luo2009finite,chapelle2012galerkin,singler2012new}).
To our knowledge, the first instance in which the snapshot DQs were incorporated in the generation of the POD basis functions was the pioneering paper of Kunisch and Volkwein \cite{KV01}.
In that report, the authors considered two types of errors for a general parabolic equation: the time discretization errors and the POD truncation errors.
They argued that one needs to include the temporal difference quotients in the set of snapshots; otherwise, the error will be suboptimal, containing an extra $\frac{1}{\Delta t^2}$ factor (see Remark 1 in \cite{KV01}).
Thus, the motivation for using the temporal difference quotients was purely theoretical.
In numerical investigations, however, the authors reported contradictory findings: in \cite{KV01}, the use of the DQs did not improve the quality of the reduced-order model; in \cite{homberg2003control}, however, it did.
Kunisch and Volkwein used again the snapshot DQs when they considered the Navier-Stokes equations \cite{KV02}.
The snapshot DQs were also used in the {\it Discrete Empirical Interpolation Method (DEIM)} of Chaturantabut and Sorensen \cite{chaturantabut2010nonlinear,chaturantabut2012state} (which is a discrete variant of the {\it Empirical Interpolation Method (EIM)} \cite{barrault2004eim}).
The motivation in \cite{chaturantabut2010nonlinear,chaturantabut2012state}, however, was different from that in \cite{KV01}.
Indeed, the authors considered in \cite{chaturantabut2010nonlinear,chaturantabut2012state} a general, nonlinear system of equations of the form ${\bf y}' = {\bf f}({\bf y},t)$.
In the set of snapshots, they included not only the state variables ${\bf y}$, but also the {\it nonlinear} snapshots ${\bf f}({\bf y},t)$.
They further noted (see page 48 in \cite{chaturantabut2012state}) that, since ${\bf f}({\bf y},t) = {\bf y}'$ and $( {\bf y}^{n+1} - {\bf y}^{n} ) / \Delta t \sim {\bf y}'$, this is similar to including the temporal difference quotients, as done in \cite{KV01,KV02}.
To our knowledge, the first reports on POD analysis in which the DQs were not used were \cite{LCNY08} for the heat equation and \cite{luo2009finite} for the Navier-Stokes equations.
Chapelle et al.~\cite{chapelle2012galerkin} used a different approach that did not use the DQs either.
This approach employed the $L^2$ projection instead of the standard $H^1$ projection used in, e.g., \cite{KV01,KV02}.
Further improvements to the approach used in~\cite{chapelle2012galerkin} (as well as that used in \cite{KV01,KV02}) were made by Singler in \cite{singler2012new}.
From the above discussion, it is clear that the question whether the snapshot DQs should be included or not in the set of snapshots is important.
To our knowledge, this question is still open.
This report represents a first step in answering this question.
All our discussion will be centered around the heat equation, although most (if not all) of it could be extended to general parabolic equations in a straightforward manner.
From a theoretical point of view, the only motivation for using the snapshot DQs was given in Remark 1 in \cite{KV01}.
The main point of this remark is the following:
In the error analysis of the evolution equation, to approximate $u_t(t^n)$, the time derivative of the exact solution $u$ evaluated at time $t^n$, one usually uses the DQ $\overline{\partial} u(t^n) := \frac{u(t^{n+1}) - u(t^{n})}{\Delta t}$.
To approximate the DQ $\overline{\partial} u(t^n)$ in the POD space, one naturally uses the POD DQ $\overline{\partial} u_r(t^n) := \frac{u_r(t^{n+1}) - u_r(t^{n})}{\Delta t}$, where $u_r$ is the POD reduced order model approximation.
We assume that $u_r(t^{n+1})$ is an optimal approximation for $u(t^{n+1})$ and that $u_r(t^{n})$ is an optimal approximation for $u(t^{n})$, where the optimality is with respect to $r$ and $\Delta t$.
Then, it would appear that, with respect to $\Delta t$, $\overline{\partial} u_r(t^n)$ is a {\it suboptimal} approximation for $\overline{\partial} u(t^n)$, because of the $\Delta t$ in the denominator of the two difference quotients.
Although the argument above, used in Remark 1 in \cite{KV01} to motivate the inclusion of the snapshot DQs in the derivation of the POD basis, seems natural, we point out that this issue should be treated more carefully.
Indeed, in the finite element approximation of parabolic equations, it is well known that the DQs $\overline{\partial} u_h(t^n) := \frac{u_h(t^{n+1}) - u_h(t^{n})}{\Delta t}$ are actually {\it optimal} (with respect to $\Delta t$) approximations of the DQs $\overline{\partial} u(t^n)$ (see, e.g., \cite{labovschii2009defect,shan2012decoupling}).
Thus, since the POD and finite element approximations are similar (both use a Galerkin projection in the spatial discretization), one could question the validity of the argument used in Remark 1 in \cite{KV01}.
We emphasize that we are not claiming that the above argument is not valid in a POD setting; we are merely pointing out that a rigorous numerical analysis is needed before drawing any conclusions.
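This point can be illustrated with a toy computation (our own, not from \cite{KV01}): because a projection $P$ is linear, $\overline{\partial}(Pu)(t^n) - \overline{\partial}u(t^n) = (I-P)\,\overline{\partial}u(t^n)$, which is simply the projection error of the DQ and carries no $1/\Delta t$ factor. In the Python sketch below, $P$ is an exact discrete $L^2$ projection onto five sine modes and $u$ is a synthetic space-time field:

```python
import numpy as np

# Toy check: the DQ error of a projected function equals the projection error of
# the DQ itself, uniformly bounded in dt (no 1/dt blow-up), because P is linear.
N = 400
x = np.arange(N) / N
M = np.array([np.sqrt(2.0) * np.sin((k + 1) * np.pi * x) for k in range(5)]).T
P = M @ M.T / N                    # exact discrete L^2 projection onto 5 sine modes

u = lambda t: np.sin(np.pi * x) * np.exp(-t) + 0.05 * np.sin(7 * np.pi * x) * np.sin(t)

errs = []
for dt in [1e-1, 1e-2, 1e-3]:
    dq_exact = (u(dt) - u(0.0)) / dt
    dq_proj = (P @ u(dt) - P @ u(0.0)) / dt
    errs.append(np.linalg.norm(dq_proj - dq_exact) / np.sqrt(N))
print(errs)                        # roughly constant in dt
```

The printed errors are essentially independent of $\Delta t$: the unresolved $\sin(7\pi x)$ component of $\overline{\partial}u$ is all that survives, regardless of the step size.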
\begin{figure}[htp]
\begin{center}
\begin{minipage}[h]{.6\linewidth}
\includegraphics[width=0.9\textwidth]{Heat_eL2_dt.eps}
\end{minipage}
\end{center}
\caption{
Heat equation.
Plots of the errors in the $L^2(L^2)$-norm with respect to the time step $\Delta t$ when the DQs are used (denoted by DQ) and when the DQs are not used (denoted by no-DQ).
}
\label{fig:test1_dt}
\end{figure}
Our preliminary numerical studies indicate that not using the DQs does {\it not} yield suboptimal (with respect to $\Delta t$) error estimates.
For the heat equation (see Section~\ref{sec:numerical} for details regarding the numerical simulation), we monitor the rates of convergence with respect to $\Delta t$ for the POD reduced order model.
We consider two cases: when the DQs are used in the generation of the POD basis (the corresponding results are denoted by DQ), and when the DQs are not used in the generation of the POD basis (the corresponding results are denoted by no-DQ).
The errors (defined in Section~\ref{sec:numerical}) are listed in Table \ref{tab:test1_dt} and plotted in Figure \ref{fig:test1_dt} with associated {\it linear regressions (LR)}.
Both no-DQ and DQ approaches yield an optimal approximation order $\mathcal{O}(\Delta t^2)$ in the $L^2$-norm.
\begin{table}[h]
\centering
\tabcaption{Errors of the $no\_DQ$ and $DQ$ approaches when $\Delta t$ varies.}
\label{tab:test1_dt}
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}{*}{$\Delta t$}&\multicolumn{3}{c|}{$no\_DQ$}&\multicolumn{3}{c}{$DQ$}\\
\cline{2-7}
{} & r & $\mathcal{E}_{L^2(L^2)}$ & $\mathcal{E}_{L^2(H_1)}$ & r & $\mathcal{E}_{L^2(L^2)}$ & $\mathcal{E}_{L^2(H_1)}$ \\
\hline
2.00e-01 & 6 & 3.71e-02 & 9.26e-01 & 6 & 3.71e-02 & 9.26e-01 \\
1.00e-01 & 11 & 1.27e-02 & 5.81e-01 & 11 & 1.27e-02 & 5.81e-01 \\
5.00e-02 & 21 & 2.99e-03 & 1.97e-01 & 21 & 2.99e-03 & 1.97e-01 \\
2.50e-02 & 41 & 6.53e-04 & 3.81e-02 & 41 & 6.53e-04 & 3.81e-02 \\
1.00e-02 & 59 & 1.03e-04 & 1.15e-02 & 88 & 1.03e-04 & 1.15e-02 \\
\hline
\end{tabular}
\end{table}
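The rates reported above can be recovered from Table~\ref{tab:test1_dt} by the same log--log linear regression used for Figure~\ref{fig:test1_dt}. A short Python sketch, using the $\mathcal{E}_{L^2(L^2)}$ column of the table:

```python
import numpy as np

# Observed convergence order from the (dt, error) pairs of the table above,
# estimated by a log-log linear regression.
dt = np.array([2.0e-1, 1.0e-1, 5.0e-2, 2.5e-2, 1.0e-2])
err = np.array([3.71e-2, 1.27e-2, 2.99e-3, 6.53e-4, 1.03e-4])
slope, intercept = np.polyfit(np.log(dt), np.log(err), 1)
print(slope)   # ~2: consistent with O(dt^2) for Crank-Nicolson
```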
The rest of the paper is organized as follows:
In Section~\ref{sec:pod}, we sketch the derivation of the POD reduced order model.
In Section~\ref{sec:error}, we carefully derive the error estimates for the POD reduced order model.
We focus on the rates of convergence with respect to $r$, the number of POD basis functions.
In Section~\ref{sec:numerical}, we present numerical results for two test problems: the heat equation and the Burgers equation.
Finally, in Section~\ref{sec:conclusions}, we draw several conclusions.
\section{Numerical Results}
\label{sec:numerical}
The main goal of this section is to numerically investigate the rates of convergence with respect to $r$ of the POD-G-ROM~\eqref{eqn:pod_g_rom} in the two cases considered in Section~\ref{sec:error}: {\bf Case I} ($V = V^{DQ}$) and {\bf Case II} ($V = V^{no\_DQ}$).
Although the error analysis in Section~\ref{sec:error} has been centered around the (linear) heat equation, in this section we consider both the heat equation (Section~\ref{sec:heat}) and the nonlinear Burgers equation (Section~\ref{sec:burgers}).
To measure the errors in the two cases (i.e., $V = V^{DQ}$ and $V = V^{no\_DQ}$), we use the same norms as in Section~\ref{sec:error}.
Denoting the error at time $t_j$ by $e_j := u^r_h(\cdot, t_j) - u(\cdot, t_j)$, the following norms are considered:
the error in the $C^0(0,T; L^2(\Omega))$-norm, approximated by
$\mathcal{E}_{C^0(L^2)} = \max\limits_{0\leq j \leq N} \|e_j\|_{{L^2}(\Omega)}$;
the error in the $C^0(0,T; H^1(\Omega))$-norm, approximated by
$\mathcal{E}_{C^0(H^1)} = \max\limits_{0\leq j \leq N} \|e_j\|_{{H^1}(\Omega)}$;
and the error in the $L^2(0,T; H^1(\Omega))$-norm, approximated by
$\mathcal{E}_{L^2(H^1)}=\sqrt{\frac{1}{N+1} \sum\limits_{0\leq j \leq N} \| e_j \|_{{H^1}(\Omega)}^2}$.
For clarity, we also use the following notation: $\Lambda_r= \sqrt{\sum\limits_{j=r+1}^d \lambda_j}$.
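For reproducibility, the discrete norms above can be computed as follows. The helper below is a hypothetical implementation of our own (1D uniform grid, forward difference gradient, rectangle-rule quadrature), not code from the experiments:

```python
import numpy as np

# Hypothetical helper (not from the experiments): given error snapshots e_j on a
# 1D uniform grid, approximate the norms used in this section.
def error_norms(E, h):
    """E: (N_x, N_t+1) array of nodal errors e_j; h: mesh size."""
    D = (E[1:, :] - E[:-1, :]) / h                        # FD gradient of each e_j
    l2 = np.sqrt(h * np.sum(E ** 2, axis=0))              # ||e_j||_{L^2}
    h1 = np.sqrt(l2 ** 2 + h * np.sum(D ** 2, axis=0))    # ||e_j||_{H^1}
    E_C0L2 = l2.max()                                     # max_j ||e_j||_{L^2}
    E_C0H1 = h1.max()                                     # max_j ||e_j||_{H^1}
    E_L2H1 = np.sqrt(np.mean(h1 ** 2))                    # sqrt(1/(N+1) sum ||e_j||^2)
    return E_C0L2, E_C0H1, E_L2H1

h = 1.0 / 64
x = np.arange(65) * h
E = np.outer(np.sin(np.pi * x), np.linspace(0.0, 1.0, 11))   # toy error field
a, b, c = error_norms(E, h)
print(a, b, c)
```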
As mentioned at the beginning of Section~\ref{sec:error}, the POD-G-ROM~\eqref{eqn:pod_g_rom} error estimates are optimal if the following statements hold:
(i) The $L^2$-norm of the error scales as the $L^2$-norm of the POD interpolation error.
Using \eqref{eqn:optimality_eta} and \eqref{pod_error_formula_nodq}--\eqref{pod_error_formula_dq_pointwise}, this statement is equivalent to
\begin{eqnarray}
\mathcal{E}_{C^0(L^2)}
= \mathcal{O}\left(\sqrt{\sum\limits_{j=r+1}^d \lambda_j} \ \right) \, .
\label{eqn:numerical_1}
\end{eqnarray}
(ii) The $H^1$-norm of the error scales as the $H^1$-norm of the POD interpolation error.
Using \eqref{eqn:optimality_nabla_eta}, \eqref{eqn:inverse_estimate_interpolation_error}, and \eqref{pod_error_formula_nodq}--\eqref{pod_error_formula_dq_pointwise}, this statement is equivalent to
\begin{eqnarray}
\mathcal{E}_{C^0(H^1)}
\sim \mathcal{E}_{L^2(H^1)}
= \mathcal{O}\left(\sqrt{ \| S_d \|_2 \, \sum\limits_{j=r+1}^d \lambda_j}\right) \, .
\label{eqn:numerical_2}
\end{eqnarray}
Based on the error analysis in Section~\ref{sec:error}, we expect the following convergence rates with respect to $r$:
\begin{table}[h]
\centering
\tabcaption{Theoretical convergence rates for the $no\_DQ$ and the $DQ$ cases.}
\label{tab:numerical_1}
\begin{tabular}{|c|c|c|}
\hline
& & \\[-0.2cm]
& $no\_DQ$ & $DQ$ \\[0.2cm]
\hline
& & \\[-0.2cm]
$\mathcal{E}_{C^0(L^2)}$ & suboptimal & optimal \\
& \eqref{s_error_semi_eq_21}; \eqref{s_error_semi_eq_24} & \eqref{s_error_semi_eq_15} \\[0.2cm]
\hline
& & \\[-0.2cm]
$\mathcal{E}_{C^0(H^1)}$ & suboptimal & optimal \\
& \eqref{s_error_semi_eq_27} & \eqref{s_error_semi_eq_31} \\[0.2cm]
\hline
& & \\[-0.2cm]
$\mathcal{E}_{L^2(H^1)}$ & optimal & optimal \\
& \eqref{s_error_semi_eq_23b} & \\[0.2cm]
\hline
\end{tabular}
\end{table}
\subsection{Heat Equation}
\label{sec:heat}
We consider the one-dimensional heat equation~\eqref{eqn:heat_weak} with a known exact solution that represents the propagation in time of a steep front:
\begin{eqnarray}
u(x,t) = \sin(\pi x)\left[\frac{1}{\pi}\arctan\left(\frac{c}{25}-c\left(x-\frac{t}{2}\right)^2\right) + \frac{1}{2}\right],
\label{eqn:exact_solution_heat}
\end{eqnarray}
where $x\in [0, 1]$ and $t\in [0, 1]$.
The constant $c$ in~\eqref{eqn:exact_solution_heat} controls the steepness of the front.
In all the numerical tests in this section, we used the value $c = 100$.
The value of the diffusion coefficient used in the heat equation~\eqref{eqn:heat_weak} is $\nu = 10^{-2}$.
Piecewise linear finite elements are used to generate snapshots for the POD-G-ROM~\eqref{eqn:pod_g_rom}.
A mesh size $h=1/1024$ and the Crank-Nicolson scheme with a time step $\Delta t = 10^{-3}$ are employed for the spatial and temporal discretizations.
The time evolution of the finite element solution is shown in Figure \ref{fig:test1dns}.
In total, 1001 snapshots are collected and used for generating POD basis functions in the $L^2$ space.
The same numerical solver as that used in the finite element approximation is utilized in the POD-G-ROM.
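The snapshot collection and the two snapshot sets compared in this section can be sketched as follows. For brevity we sample the exact solution \eqref{eqn:exact_solution_heat} directly instead of running the finite element solver, and we use smaller grid and snapshot counts than in the experiments; `V_noDQ` and `V_DQ` correspond to $V^{no\_DQ}$ and $V^{DQ}$.

```python
import numpy as np

# Sketch of snapshot/POD-basis generation (reduced sizes, exact solution sampled
# directly; the paper's experiments instead use finite element snapshots).
c = 100.0
u = lambda xx, tt: (np.sin(np.pi * xx) *
                    (np.arctan(c / 25.0 - c * (xx - tt / 2.0) ** 2) / np.pi + 0.5))

N, M = 128, 100
x = np.linspace(0.0, 1.0, N + 1)
dt = 1.0 / M
t = np.arange(M + 1) * dt
U = np.array([u(x, s) for s in t]).T            # snapshots, one per column

DQ = (U[:, 1:] - U[:, :-1]) / dt                # snapshot difference quotients

V_noDQ = U                                      # V = V^{no_DQ}
V_DQ = np.hstack([U, DQ])                       # V = V^{DQ}

lam_noDQ = np.linalg.svd(V_noDQ, compute_uv=False) ** 2
lam_DQ = np.linalg.svd(V_DQ, compute_uv=False) ** 2
print(lam_noDQ[:3], lam_DQ[:3])                 # leading POD eigenvalues
```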
\begin{figure}[h]
\centering
\begin{minipage}[h]{.6\linewidth}
\includegraphics[width=1\textwidth]{Heat_dns.eps}
\end{minipage}
\caption{
Heat equation.
Fine resolution finite element solution used to generate the snapshots.
}
\label{fig:test1dns}
\end{figure}
By varying $r$, the number of basis functions used in the POD-G-ROM, we check the rates of convergence with respect to $r$ for the $no\_DQ$ and the $DQ$ cases.
The errors are listed in Table \ref{tab:test1_DQ0} (in the $no\_DQ$ case) and in Table \ref{tab:test1_DQ1} (in the $DQ$ case).
To visualize the rates of convergence with respect to $r$, these errors with their linear regression plots are drawn in Figure \ref{fig:test1_eC0L2_eC0H1_eL2H1}.
The convergence rate of the error in the $C^0(L^2)$-norm, $\mathcal{E}_{C^0(L^2)}$, is superoptimal in the $DQ$ case and suboptimal in the $no\_DQ$ case.
This supports the theoretical rates of convergence in Table~\ref{tab:numerical_1}, although the suboptimality in the $no\_DQ$ case is mild.
The convergence rate of the error in the $C^0(H^1)$-norm, $\mathcal{E}_{C^0(H^1)}$, is slightly superoptimal in the $DQ$ case and strongly suboptimal in the $no\_DQ$ case.
This again supports the theoretical rates of convergence in Table~\ref{tab:numerical_1}.
The convergence rate of the error in the $L^2(H^1)$-norm, $\mathcal{E}_{L^2(H^1)}$, is optimal in the $DQ$ case and strongly suboptimal in the $no\_DQ$ case.
This supports the theoretical rates of convergence in Table~\ref{tab:numerical_1} for the $DQ$ case, but not for the $no\_DQ$ case.
Overall, the numerical results support the theoretical rates of convergence proved in Section~\ref{sec:error} and summarized in Table~\ref{tab:numerical_1}.
We also emphasize that {\it the convergence rates in the $DQ$ case in all three norms are much higher than (and almost twice as high as) the corresponding rates of convergence in the $no\_DQ$ case}.
\begin{table}[h]
\centering
\tabcaption{Heat equation. Errors in the $no\_DQ$ case.}
\label{tab:test1_DQ0}
\begin{tabular}{|c|c|c|c|c|}
\hline
r & $\Lambda_r$ & $\mathcal{E}_{C^0(L^2)}$ & $\mathcal{E}_{C^0(H^1)}$ & $\mathcal{E}_{L^2(H^1)}$ \\
\hline
3 & 5.72e-02 & 9.46e-02 & 2.30e+00 & 1.59e+00 \\
5 & 2.71e-02 & 4.70e-02 & 1.58e+00 & 1.14e+00 \\
7 & 1.58e-02 & 3.69e-02 & 1.38e+00 & 8.22e-01 \\
10 & 7.34e-03 & 1.57e-02 & 8.54e-01 & 5.31e-01 \\
13 & 3.84e-03 & 7.78e-03 & 5.84e-01 & 3.50e-01 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\tabcaption{Heat equation. Errors in the $DQ$ case.}
\label{tab:test1_DQ1}
\begin{tabular}{|c|c|c|c|c|}
\hline
r & $\Lambda_r$ & $\mathcal{E}_{C^0(L^2)}$ & $\mathcal{E}_{C^0(H^1)}$ & $\mathcal{E}_{L^2(H^1)}$ \\
\hline
19 & 5.49e-02 & 7.15e-03 & 4.96e-01 & 3.19e-01 \\
23 & 2.95e-02 & 2.03e-03 & 1.98e-01 & 1.19e-01 \\
28 & 1.41e-02 & 6.52e-04 & 7.97e-02 & 4.91e-02 \\
33 & 6.75e-03 & 2.41e-04 & 3.68e-02 & 2.80e-02 \\
37 & 3.76e-03 & 8.60e-05 & 2.67e-02 & 2.29e-02 \\
\hline
\end{tabular}
\end{table}
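The observed rates in Tables~\ref{tab:test1_DQ0} and~\ref{tab:test1_DQ1} can be quantified by regressing $\log \mathcal{E}_{C^0(L^2)}$ on $\log \Lambda_r$, where a slope of $1$ corresponds to the optimal scaling \eqref{eqn:numerical_1}; this mirrors the linear regressions in Figure~\ref{fig:test1_eC0L2_eC0H1_eL2H1}:

```python
import numpy as np

# Observed rate of E_{C0(L2)} with respect to Lambda_r, from the two tables above
# (slope ~1 is "optimal"; smaller is suboptimal, larger is superoptimal).
Lam_noDQ = np.array([5.72e-2, 2.71e-2, 1.58e-2, 7.34e-3, 3.84e-3])
E_noDQ = np.array([9.46e-2, 4.70e-2, 3.69e-2, 1.57e-2, 7.78e-3])
Lam_DQ = np.array([5.49e-2, 2.95e-2, 1.41e-2, 6.75e-3, 3.76e-3])
E_DQ = np.array([7.15e-3, 2.03e-3, 6.52e-4, 2.41e-4, 8.60e-5])

rate = lambda L, E: np.polyfit(np.log(L), np.log(E), 1)[0]
r_noDQ, r_DQ = rate(Lam_noDQ, E_noDQ), rate(Lam_DQ, E_DQ)
print(r_noDQ, r_DQ)   # no_DQ slope < 1 < DQ slope for these data
```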
\begin{figure}[h]
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Heat_eC0L2_wrt_r.eps}
\end{minipage}
\hspace{.3cm}
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Heat_eC0H1_wrt_r_withSr.eps}
\end{minipage}
\hspace{.3cm}
\begin{center}
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Heat_eH1_wrt_r_withSr.eps}
\end{minipage}
\end{center}
\caption{
Heat equation.
Plots of errors in $C^0(L^2)$-norm (top, left), $C^0(H^1)$-norm (top, right), and $L^2(H^1)$-norm (bottom).
}
\label{fig:test1_eC0L2_eC0H1_eL2H1}
\end{figure}
\subsection{Burgers Equation}
\label{sec:burgers}
In this section, we consider the one-dimensional Burgers equation.
As mentioned at the beginning of Section~\ref{sec:numerical}, the error estimates proved in Section~\ref{sec:error} are valid for the (linear) heat equation, but not necessarily valid for the nonlinear Burgers equation.
Nevertheless, to gain some insight into the range of validity of the theoretical development in Section~\ref{sec:error}, we investigate the rates of convergence with respect to $r$ in the $no\_DQ$ and the $DQ$ cases for the nonlinear Burgers equation:
\begin{eqnarray}
\label {burgers}
\left\{
\begin{array}{ll}
u_t
- \nu \, u_{xx}
+ u \, u_x
= f
& \qquad \text{ in } \Omega \times (0, T] \, , \\
u(x,0) = u_0(x)
& \qquad \text{ in } \Omega \, , \\
u(x, t) = g(x, t)
& \qquad \text{ on } \partial \Omega \times (0, T] \, .
\end{array}
\right.
\end{eqnarray}
The initial condition is
\begin{eqnarray}
u_0(x)
= \left\{
\begin{array}{cl}
1 & \quad \text{ if } x \in \left( 0, \frac{1}{2} \right] \, , \\[0.2cm]
0 & \quad \text{ if } x \in \left( \frac{1}{2} , 1 \right) \, ,
\end{array}
\right.
\label{ic_1}
\end{eqnarray}
which is similar to that used in \cite{KV01}.
The diffusion parameter is $\nu=10^{-2}$, the forcing term is $f = 0$, $\Omega = [0, 1]$, and $T= 1$.
The boundary conditions are homogeneous Dirichlet, that is,
$u(0,t) = u(1,t) = 0$ for all $t\in [0, 1]$.
To generate snapshots, we use piecewise linear finite elements with mesh size $h=1/1024$ and the backward Euler method with a time step $\Delta t = 10^{-4}$, and save data at each time instance.
The time evolution of the finite element solution is shown in Figure \ref{fig:test2dns}.
All snapshots are used for the POD basis generation and the same numerical solver is used in the POD-G-ROM.
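For readers who wish to reproduce a qualitatively similar data set, the sketch below generates Burgers snapshots with a coarse finite difference discretization. Unlike the solver described above (piecewise linear finite elements with a fully implicit backward Euler step), it uses a linearized semi-implicit step, treating the convection term as $u^n \, u_x^{n+1}$, and much larger mesh and time steps, so it is only illustrative.

```python
import numpy as np

# Illustrative coarse FD solver for the Burgers problem above (linearized
# semi-implicit backward Euler; not the paper's finite element solver).
nu, N, T, dt = 1e-2, 128, 1.0, 1e-2
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = np.where((x > 0) & (x <= 0.5), 1.0, 0.0)   # initial condition (ic_1)
u[0] = u[-1] = 0.0                             # homogeneous Dirichlet BC

I = np.eye(N - 1)
Dxx = (np.eye(N - 1, k=1) - 2.0 * I + np.eye(N - 1, k=-1)) / h ** 2
Dx = (np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / (2.0 * h)

nsteps = round(T / dt)
snapshots = [u.copy()]
for _ in range(nsteps):
    ui = u[1:-1]
    A = I - dt * nu * Dxx + dt * np.diag(ui) @ Dx   # linearized backward Euler
    u[1:-1] = np.linalg.solve(A, ui)
    snapshots.append(u.copy())
U = np.array(snapshots).T                      # snapshot matrix, one column per step
print(U.shape)
```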
\begin{figure}[h]
\centering
\begin{minipage}[h]{.6\linewidth}
\includegraphics[width=1\textwidth]{Burg_dns.eps}
\end{minipage}
\caption{
Burgers equation.
Fine resolution finite element solution used to generate the snapshots.
}
\label{fig:test2dns}
\end{figure}
By varying $r$, we check the rates of convergence with respect to $r$ for the $no\_DQ$ and the $DQ$ cases.
Since the exact solution of the Burgers equation~\eqref{burgers} with the initial condition~\eqref{ic_1} is not known, we consider the errors between the POD-G-ROM results and the snapshots.
The errors are listed in Table \ref{tab:test2_DQ0} (in the $no\_DQ$ case) and in Table \ref{tab:test2_DQ1} (in the $DQ$ case).
These errors with their linear regression plots are drawn in Figure~\ref{fig:test2_eC0L2_eC0H1_eL2H1}.
The convergence rate of the error in the $C^0(L^2)$-norm, $\mathcal{E}_{C^0(L^2)}$, is superoptimal in the $DQ$ case and strongly suboptimal in the $no\_DQ$ case.
This clearly supports the theoretical rates of convergence in Table~\ref{tab:numerical_1}.
The convergence rate of the error in the $C^0(H^1)$-norm, $\mathcal{E}_{C^0(H^1)}$, is superoptimal in the $DQ$ case and extremely suboptimal in the $no\_DQ$ case.
This strongly supports the theoretical rates of convergence in Table~\ref{tab:numerical_1}.
Finally, the convergence rate of the error in the $L^2(H^1)$-norm, $\mathcal{E}_{L^2(H^1)}$, is optimal in the $DQ$ case and strongly suboptimal in the $no\_DQ$ case.
This supports the theoretical rates of convergence in Table~\ref{tab:numerical_1} for the $DQ$ case, but not for the $no\_DQ$ case.
Overall, the numerical results clearly support the theoretical rates of convergence proved in Section~\ref{sec:error} and summarized in Table~\ref{tab:numerical_1}.
We also emphasize that {\it the convergence rates in the $DQ$ case in all three norms are much higher than (and at least twice as high as) the corresponding rates of convergence in the $no\_DQ$ case}.
\begin{table}[h]
\centering
\tabcaption{Burgers equation. Errors in the $no\_DQ$ approach.}
\label{tab:test2_DQ0}
\begin{tabular}{|c|c|c|c|c|}
\hline
$r$ & $\Lambda_r$ & $\mathcal{E}_{C^0(L^2)}$ & $\mathcal{E}_{C^0(H^1)}$ & $\mathcal{E}_{L^2(H^1)}$ \\
\hline
3 & 8.74e-02 & 2.38e-01 & 4.48e+01 & 2.34e+00 \\
5 & 3.95e-02 & 1.60e-01 & 4.44e+01 & 1.76e+00 \\
7 & 1.97e-02 & 1.17e-01 & 4.37e+01 & 1.24e+00 \\
9 & 1.02e-02 & 9.05e-02 & 4.28e+01 & 8.94e-01 \\
11 & 5.47e-03 & 7.01e-02 & 4.14e+01 & 6.84e-01 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\tabcaption{Burgers equation. Errors in the $DQ$ approach.}
\label{tab:test2_DQ1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$r$ & $\Lambda_r$ & $\mathcal{E}_{C^0(L^2)}$ & $\mathcal{E}_{C^0(H^1)}$ & $\mathcal{E}_{L^2(H^1)}$ \\
\hline
18 & 8.55e-02 & 6.83e-03 & 5.60e-01 & 2.82e-01 \\
21 & 4.56e-02 & 2.99e-03 & 2.49e-01 & 1.37e-01 \\
24 & 2.39e-02 & 1.31e-03 & 1.23e-01 & 6.72e-02 \\
28 & 9.81e-03 & 4.29e-04 & 4.73e-02 & 2.57e-02 \\
31 & 4.97e-03 & 1.88e-04 & 2.28e-02 & 1.25e-02 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[h]
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Burg_eC0L2_wrt_r.eps}
\end{minipage}
\hspace{.3cm}
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Burg_eC0H1_wrt_r_withSr.eps}
\end{minipage}
\hspace{.3cm}
\begin{center}
\begin{minipage}[h]{.45\linewidth}
\includegraphics[width=1.1\textwidth]{Burg_eH1_wrt_r_withSr.eps}
\end{minipage}
\end{center}
\caption{
Burgers equation.
Plots of errors in $C^0(L^2)$-norm (top, left), $C^0(H^1)$-norm (top, right), and $L^2(H^1)$-norm (bottom).
}
\label{fig:test2_eC0L2_eC0H1_eL2H1}
\end{figure}
\section{Proper Orthogonal Decomposition Reduced Order Modeling}
\label{sec:pod}
In this section, we sketch the derivation of the standard POD Galerkin reduced order model for the heat equation.
For a detailed presentation of reduced order modeling in general settings, the reader is referred to, e.g., \cite{HLB96,KV99,Sir87abc,burkardt2006pod,AK04,bui2008model,wang2012proper,balajewicz2012novel}.
For clarity, we will denote by $C$ a generic constant that can depend on all the parameters in the system, except on
$M$ (the number of snapshots),
$d$ (the dimension of the set of snapshots, $V$), and
$r$ (the number of POD modes used in the POD reduced order model).
Let $X := H_0^1(\Omega)$, where $\Omega$ is the computational domain.
Let $u(\cdot,t)\in X$, $t\in[0,T]$, be the solution of the weak formulation of the heat equation:
\begin{eqnarray}
(u_t , v)
+ \nu \, (\nabla u , \nabla v)
= (f , v) \,
\quad \forall v \in X.
\label{eqn:heat_weak}
\end{eqnarray}
Given the time instances $t_0, t_1, \ldots, t_N \in [0,T]$, we consider the following two ensembles of snapshots:
\begin{eqnarray}
V^{no\_DQ}
&:=& \mbox{span}\left\{ u(\cdot,t_0), \ldots, u(\cdot,t_N) \right\},
\label{snapshots_nodq} \\
V^{DQ}
&:=& \mbox{span}\left\{ u(\cdot , t_0), \ldots, u(\cdot , t_N),
\overline{\partial} u(\cdot , t_1), \ldots, \overline{\partial} u(\cdot , t_N) \right\},
\label{snapshots_dq}
\end{eqnarray}
where
$\overline{\partial} u(t_n) := \frac{u(t_n) - u(t_{n-1})}{\Delta t} , \ n = 1, \ldots, N$
are the time {\it difference quotients (DQs)}.
The two ensembles of snapshots correspond to the two cases investigated in this paper:
(i) with the DQs not included in the snapshots (i.e., $V^{no\_DQ}$); and
(ii) with the DQs included in the snapshots (i.e., $V^{DQ}$).
As pointed out in Remark 1 in~\cite{KV01}, the ensembles of snapshots $V^{no\_DQ}$ and $V^{DQ}$ yield {\it different} POD bases.
This is clearly illustrated by Figures~\ref{fig:pod_basis_heat}-\ref{fig:pod_basis_burgers}, which display POD basis functions for the heat equation and the Burgers equation, respectively (see Section~\ref{sec:numerical} for details regarding the numerical simulations).
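As a concrete illustration of how the two ensembles are assembled in practice, the snapshot matrices can be built as below. This encoding is our own sketch (not taken from \cite{KV01}): columns hold nodal values of the computed solution, and the decaying sine profile stands in for actual simulation data.

```python
import numpy as np

def snapshot_matrices(U, dt):
    """Given U whose columns are u(t_0), ..., u(t_N), return the no_DQ
    and DQ snapshot matrices, with M = N+1 and M = 2N+1 columns."""
    dq = (U[:, 1:] - U[:, :-1]) / dt     # difference quotients, n = 1..N
    return U, np.hstack([U, dq])

# Illustrative data: N+1 = 11 snapshots of a decaying sine on 50 nodes.
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 11)
U = np.exp(-t)[None, :] * np.sin(np.pi * x)[:, None]
V_no_dq, V_dq = snapshot_matrices(U, dt=t[1] - t[0])
```

With $N = 10$, the resulting matrices have $M = N+1 = 11$ and $M = 2N+1 = 21$ columns, respectively.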
\begin{figure}[h]
\begin{center}
\begin{minipage}[h]{.9\linewidth}
\includegraphics[width=0.9\textwidth]{Heat_basis_cmp.eps}
\end{minipage}
\end{center}
\caption{
Heat equation.
Plots of the POD basis functions when the DQs are used (denoted by DQ) and when the DQs are not used (denoted by no-DQ).
}
\label{fig:pod_basis_heat}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{minipage}[h]{.9\linewidth}
\includegraphics[width=0.9\textwidth]{Burg_basis_cmp.eps}
\end{minipage}
\end{center}
\caption{
Burgers equation.
Plots of the POD basis functions when the DQs are used (denoted by DQ) and when the DQs are not used (denoted by no-DQ).
}
\label{fig:pod_basis_burgers}
\end{figure}
To simplify the presentation, we denote both sets of snapshots (i.e., $V^{no\_DQ}$ and $V^{DQ}$) by
\begin{equation*}
V = \mbox{span}\left\{ s_1, s_2, \ldots, s_M \right\},
\end{equation*}
where $M=N+1$ when $V^{no\_DQ}$ is considered and $M=2N+1$ when $V^{DQ}$ is considered. We use the specific notation (i.e., $V^{no\_DQ}$ or $V^{DQ}$) only when this is necessary.
Let $\dim V = d$.
The POD seeks a low-dimensional basis $\{ \varphi_1, \ldots, \varphi_r \}$,
with $r \leq d$, which optimally approximates the input collection.
Specifically, the POD basis satisfies
\begin{eqnarray}
\min_{\{\varphi_j\}_{j=1}^{r}} \frac{1}{M} \sum_{i=1}^M \left\| s_i -
\sum_{j=1}^r \bigl( s_i \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2 \, ,
\label{pod_min}
\end{eqnarray}
subject to the conditions that $(\varphi_i,\varphi_j)_{L^2} = \delta_{ij}, \ 1 \leq i, j \leq r$.
In order to solve \eqref{pod_min}, we consider the eigenvalue problem
\begin{eqnarray}
K \, v = \lambda \, v \, ,
\label{pod_eigenvalue}
\end{eqnarray}
where $K \in \mathbb{R}^{M \times M}$, with
$\displaystyle K_{ij} = \frac{1}{M} \, ( s_j, s_i )_{L^2} \,$,
is the snapshot correlation matrix,
$\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d >0$ are the positive eigenvalues, and
$v_k, \, k = 1, \ldots, d,$ are the associated eigenvectors.
It can then be shown (see, e.g., \cite{HLB96,KV99}), that the solution of \eqref{pod_min} is given by
\begin{eqnarray}
\varphi_{k}(\cdot) = \frac{1}{\sqrt{\lambda_k}} \, \sum_{j=1}^{M} (v_k)_j \, s_{j},
\quad 1 \leq k \leq r,
\label{pod_basis_formula}
\end{eqnarray}
where $(v_k)_j$ is the $j$-th component of the eigenvector $v_k$.
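The eigenvalue problem \eqref{pod_eigenvalue} and formula \eqref{pod_basis_formula} translate directly into code. The sketch below is illustrative only: the $L^2$ inner product is approximated by the Euclidean one on nodal values, and the modes are rescaled so that they come out orthonormal in that discrete inner product.

```python
import numpy as np

def pod_basis(S, r):
    """POD basis from the correlation matrix K_ij = (1/M)(s_j, s_i).
    S holds the snapshots s_1, ..., s_M as columns; the L^2 inner
    product is approximated by the Euclidean one on nodal values."""
    M = S.shape[1]
    K = (S.T @ S) / M                    # snapshot correlation matrix
    lam, v = np.linalg.eigh(K)           # eigh returns ascending order
    lam, v = lam[::-1], v[:, ::-1]       # reorder: lam_1 >= lam_2 >= ...
    # As in the basis formula, phi_k is a linear combination S @ v_k of
    # the snapshots; it is scaled here by 1/sqrt(M lam_k) so that the
    # modes are orthonormal in the discrete inner product.
    Phi = S @ v[:, :r] / np.sqrt(M * lam[:r])
    return Phi, lam

# Example: snapshots of a three-mode profile (illustrative data).
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 21)
S = (np.exp(-t)[None, :] * np.sin(np.pi * x)[:, None]
     + 0.3 * np.exp(-4.0 * t)[None, :] * np.sin(3 * np.pi * x)[:, None]
     + 0.05 * np.exp(-9.0 * t)[None, :] * np.sin(5 * np.pi * x)[:, None])
Phi, lam = pod_basis(S, r=2)
```

By construction, the mean squared projection error of the snapshots equals $\sum_{j=r+1}^{d} \lambda_j$, which is the discrete analogue of the approximation property stated below.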
\begin{definition}
The term
\begin{eqnarray}
\eta^{interp}(x,t)
:= u(x,t)
- \sum_{j=1}^r \bigl( u(x,t) \, , \, \varphi_j(x) \bigr)_{L^2} \, \varphi_j(x)
\label{eqn:definition_interpolation_error}
\end{eqnarray}
will be denoted as the {\it POD interpolation error}.
\end{definition}
\begin{remark}
\label{remark:pod_norm}
Note that the $H^1$-norm can also be used to generate the POD basis \cite{KV01,singler2012new}.
In this case, $\| \eta^{interp} \|_{L^2} \sim \| \nabla \eta^{interp} \|_{L^2}$.
Thus, the two cases considered in this paper (i.e., $V = V^{no\_DQ}$ and $V = V^{DQ}$) yield error estimates that have the same convergence rates with respect to $r$.
\end{remark}
It can also be shown~\cite{KV01} that the following {\it POD approximation property} holds:
\begin{eqnarray}
&& \frac{1}{N+1} \sum_{i=0}^N \left\| u(\cdot,t_i) -
\sum_{j=1}^r \bigl( u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2
= \sum_{j=r+1}^{d} \lambda_j
\qquad \text{if } \ V = V^{no\_DQ} \, ,
\label{pod_error_formula_nodq} \\
&& \frac{1}{2 N + 1} \, \sum_{i=0}^N
\left\|
u(\cdot,t_i) - \sum_{j=1}^r \bigl( u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2
\label{pod_error_formula_dq} \\
&& + \ \frac{1}{2 N + 1} \, \sum_{i=1}^N
\left\|
\overline{\partial} u(\cdot,t_i) - \sum_{j=1}^r \bigl( \overline{\partial} u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2
= \sum_{j=r+1}^{d} \lambda_j
\quad \text{if } \ V = V^{DQ} \, . \quad
\nonumber
\end{eqnarray}
The approximation property~\eqref{pod_error_formula_nodq}-\eqref{pod_error_formula_dq} represents the relationship between the average of the square of the $L^2$-norm of the interpolation error and the sum of the eigenvalues of the POD modes that are not included in the POD reduced order model.
In order to be able to prove pointwise in time error estimates in Section~\ref{sec:error}, we also make the following assumption:
\begin{assumption}
We assume that, for $i = 1, \ldots, N$, the interpolation error satisfies the following estimates:
\begin{eqnarray}
&& \left\| u(\cdot,t_i) -
\sum_{j=1}^r \bigl( u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2
\leq C \, \sum_{j=r+1}^{d} \lambda_j
\qquad \text{if } \ V = V^{no\_DQ} \, ,
\label{pod_error_formula_nodq_pointwise} \\
&& \left\|
u(\cdot,t_i) - \sum_{j=1}^r \bigl( u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2
+ \ \left\|
\overline{\partial} u(\cdot,t_i) - \sum_{j=1}^r \bigl( \overline{\partial} u(\cdot,t_i) \, , \, \varphi_j(\cdot) \bigr)_{L^2} \, \varphi_j(\cdot)
\right\|_{L^2}^2 \nonumber \\
&& \hspace*{6.2cm} \leq C \, \sum_{j=r+1}^{d} \lambda_j
\quad \text{if } \ V = V^{DQ} \, . \quad
\label{pod_error_formula_dq_pointwise}
\end{eqnarray}
\label{assumption_pod_error_formula_pointwise}
\end{assumption}
\begin{remark}
Assumption~\ref{assumption_pod_error_formula_pointwise} is natural.
It simply says that no individual term in the sums in \eqref{pod_error_formula_nodq} and \eqref{pod_error_formula_dq} is much larger than the others.
We also note that Assumption~\ref{assumption_pod_error_formula_pointwise} does not play an essential role in the error analysis in Section~\ref{sec:error}, since we will exclusively consider the continuous in time formulation.
We mention that Assumption~\ref{assumption_pod_error_formula_pointwise} would follow directly from the POD approximation property \eqref{pod_error_formula_nodq}--\eqref{pod_error_formula_dq} if we dropped the $\frac{1}{M}$ factor in the snapshot correlation matrix $K$.
In fact, this approach is used in, e.g., \cite{KV02,volkwein2011model}.
We note, however, that this would most likely increase the magnitudes of the eigenvalues on the RHS of the POD approximation property \eqref{pod_error_formula_nodq}--\eqref{pod_error_formula_dq}.
\label{remark_pod_error_formula_pointwise}
\end{remark}
In what follows, we will use the notation
$X^r = \mbox{span}\{ \varphi_1, \varphi_2, \ldots, \varphi_r \} \, .$
To derive the POD reduced order model for the heat equation~\eqref{eqn:heat_weak}, we employ the Galerkin truncation, which yields the following approximation $u_r \in X^r$ of $u$:
\begin{eqnarray}
u_r(x, t)
:= \sum_{j=1}^{r} a_j(t) \, \varphi_j(x) .
\label{pod_g_truncation}
\end{eqnarray}
Plugging \eqref{pod_g_truncation} into \eqref{eqn:heat_weak} and multiplying by test functions in $X^r \subset X$ yields the {\em POD Galerkin reduced order model (POD-G-ROM)}:
\begin{eqnarray}
(u_{r, t} , v_r)
+ \nu \, (\nabla u_r , \nabla v_r)
= (f , v_r)
\quad \forall \, v_r \in X^r .
\label{eqn:pod_g_rom}
\end{eqnarray}
The main advantage of the POD-G-ROM \eqref{eqn:pod_g_rom} over a straightforward finite element discretization of \eqref{eqn:heat_weak} is clear --
the computational cost of the former is dramatically lower than that of the latter.
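To make the pipeline concrete, here is a minimal end-to-end sketch for the heat equation with $f = 0$, using a finite difference discretization in place of the paper's finite element one and backward Euler in time (all discretization choices here are ours, for illustration): snapshots are collected from the full model, POD modes are extracted, and the reduced system \eqref{eqn:pod_g_rom} is integrated with the same scheme.

```python
import numpy as np

# End-to-end POD-G-ROM sketch for u_t = nu * u_xx on (0,1), u = 0 on the
# boundary, f = 0.  Finite differences + backward Euler stand in for the
# paper's finite element setting; everything below is illustrative.
nu, n, N, T = 0.05, 101, 200, 1.0
dt = T / N
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
m = n - 2                                # interior nodes (Dirichlet BCs)
L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2

# Full-order model: backward Euler, snapshots collected at every step.
u = np.sin(np.pi * x[1:-1]) + 0.5 * np.sin(3.0 * np.pi * x[1:-1])
A = np.linalg.inv(np.eye(m) - dt * nu * L)
snaps = [u.copy()]
for _ in range(N):
    u = A @ u
    snaps.append(u.copy())
S = np.array(snaps).T                    # snapshots as columns

# POD-G-ROM: project onto r POD modes, integrate with the same scheme.
r = 2
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :r]
Ar = np.linalg.inv(np.eye(r) - dt * (Phi.T @ (nu * L) @ Phi))
a = Phi.T @ S[:, 0]
for _ in range(N):
    a = Ar @ a
rel_err = np.linalg.norm(Phi @ a - S[:, -1]) / np.linalg.norm(S[:, -1])
```

Since the chosen initial condition excites only two discrete modes, two POD modes reproduce the full trajectory essentially to machine precision; in general, $r$ must grow with the complexity of the dynamics.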
\section*{Acknowledgments}
\bibliographystyle{plain}
% arXiv:2012.00525 -- The algebraic classification of nilpotent algebras
\section*{Introduction}
The description, up to isomorphism, of all the algebras of some fixed dimension satisfying certain properties (the so-called algebraic classification) is a classical problem in algebra.
There are many papers devoted to the algebraic classification of small-dimensional algebras in several varieties of
associative and non-associative algebras \cite{ack, lisa, cfk18, degr3, usefi1, degr1, degr2, fkkv, kk20, kkl19, demir, kpv19, hac16, ha16, maz79, kkp19geo}.
Another interesting direction in the classification of algebras is the geometric classification (see \cite{fkkv, kpv19, kkp19geo} and references therein).
Restricting our consideration to subvarieties of complex nilpotent $4$-dimensional algebras, we mention here the results on
associative \cite{degr1},
commutative \cite{fkkv},
bicommutative \cite{kpv19},
Leibniz \cite{demir},
terminal \cite{kkp19geo} and
$\mathfrak{CD}$-algebras \cite{kk20}.
In the present paper, we complete the algebraic classification of all complex
$4$-dimensional nilpotent algebras. Namely, we find $66$ new algebras and parametric families of algebras, completing the list of $4$-dimensional nilpotent algebras initiated in the above-mentioned works.
Our approach is based on the calculation of central extensions of nilpotent algebras of dimension less than $4$. This method was developed by Skjelbred and Sund for Lie algebras in \cite{ss78} and has been an important tool for years (see, for example, \cite{hac16, omirov, ss78} and references therein).
In Section~\ref{SS:method} we give a brief description of the method of central extensions and its adaptation to our case. It turns out that most of the nilpotent $4$-dimensional algebras are $\frak{CD}$-algebras, which were classified in \cite{kk20}. Our classification will thus be carried out modulo this subvariety of $4$-dimensional nilpotent algebras, as explained in Section~\ref{sec-non-CD}. In Section~\ref{sec-spec-types} we discuss several other useful classes of nilpotent algebras of dimension at most $4$. In particular, we give a list of the $3$-dimensional nilpotent algebras (they are all, in fact, $\frak{CD}$-algebras). The remainder of Section~\ref{S:alg} (Sections~\ref{sec-CD^3_01}--\ref{sec-CD^3_04}) is devoted to the construction and classification of $1$-dimensional non-$\frak{CD}$ central extensions of $3$-dimensional nilpotent algebras. The work done in Section~\ref{S:alg} culminates in the full classification of $4$-dimensional nilpotent algebras, which is completed in Section~\ref{S:class}, where we present the full list of representatives of isomorphism classes. Finally, in Section~\ref{S:apps} we present a few applications of our results to the classes of Lie-admissible and Alia type algebras.
\medskip
\paragraph{\bf Main result}
The main result of the paper is Theorem~\ref{teo-alg} giving a full classification of complex $4$-dimensional nilpotent algebras.
The complete list of non-isomorphic algebras consists of four parts:
\begin{enumerate}
\item trivial $\mathfrak{CD}$-algebras, which were classified in
\cite[Theorem 2.1, Theorem 2.3 and Theorem 2.5]{demir}, the only anticommutative
trivial $\mathfrak{CD}$-algebra being $\mathfrak{CD}_{03}^{4*}$;
\item terminal non-trivial extensions of the family $\cd {3*}{04}(\lambda)$, which were classified in \cite[1.4.5. 1-dimensional central extensions of ${\bf T}^{3*}_{04}$]{kkp19geo};
\item non-trivial non-terminal $\mathfrak{ CD}$-algebras, which were classified in \cite{kk20};
\item the nilpotent algebras found in Sections~\ref{sec-CD^3_01}--\ref{sec-CD^3_04} of the present paper.
\end{enumerate}
\section{The algebraic classification of nilpotent algebras}\label{S:alg}
\subsection{Method of classification of nilpotent algebras}\label{SS:method}
Let ${\bf A}$ be an algebra, ${\bf V}$ a vector space and ${\rm Z}^{2}\left( {\bf A},{\bf V}\right)\cong {\rm Hom}({\bf A}\otimes {\bf A},\bf V)$ denote the space of bilinear maps $\theta :{\bf A}\times
{\bf A}\longrightarrow {\bf V}.$ For $f\in{\rm Hom}({\bf A},{\bf V})$, we introduce $\delta f\in {\rm Z}^{2}\left( {\bf A},{\bf V}\right)$ by the equality $\delta f\left( x,y\right) =f(xy)$ and
define ${\rm B}^{2}\left( {\bf A},{\bf V}\right) =\left\{\delta f \mid f\in {\rm Hom}\left( {\bf A},{\bf V}\right) \right\} $. One
can easily check that ${\rm B}^{2}({\bf A},{\bf V})$ is a linear subspace of ${\rm Z}^{2}\left( {\bf A},{\bf V}\right)$.
Let us define $\rm {H}^{2}\left( {\bf A},{\bf V}\right) $ as the quotient space ${\rm Z}^{2}\left( {\bf A},{\bf V}\right) \big/{\rm B}^{2}\left( {\bf A},{\bf V}\right)$.
The equivalence class of $\theta\in {\rm Z}^{2}\left( {\bf A%
},{\bf V}\right)$ in $\rm {H}^{2}\left( {\bf A},{\bf V}\right)$ is denoted by $\left[ \theta \right]$.
Suppose now that $\dim{\bf A}=m<n$ and $\dim{\bf V}=n-m$. For any
bilinear map $\theta :{\bf A}\times {\bf A}\longrightarrow {\bf V%
}$, one can define on the space ${\bf A}_{\theta }:={\bf A}\oplus
{\bf V}$ the bilinear product $\left[ -,-\right] _{%
{\bf A}_{\theta }}$ by the equality $\left[ x+x^{\prime },y+y^{\prime }\right] _{%
{\bf A}_{\theta }}= xy +\theta \left( x,y\right) $ for
$x,y\in {\bf A},x^{\prime },y^{\prime }\in {\bf V}$. The algebra ${\bf A}_{\theta }$ is called an $(n-m)$-{\it dimensional central extension} of ${\bf A}$ by ${\bf V}$.
It is also clear that ${\bf A}_{\theta }$ is nilpotent if and only if ${\bf A}$ is.
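For illustration (our own encoding, not part of the classification machinery itself), the product on ${\bf A}_{\theta}$ can be phrased in terms of structure constants:

```python
import numpy as np

def central_extension(c, theta):
    """Structure constants of A_theta = A (+) V.
    c[i, j] is the coefficient vector (length m) of e_i e_j in A;
    theta[i, j] is the coefficient vector (length s) of theta(e_i, e_j)
    in V.  The result lists the basis of A first, then that of V, and
    V lands inside the annihilator of A_theta."""
    c = np.asarray(c, dtype=float)
    theta = np.asarray(theta, dtype=float)
    m, s = c.shape[2], theta.shape[2]
    ext = np.zeros((m + s, m + s, m + s))
    ext[:m, :m, :m] = c                  # (x + x')(y + y') = xy ...
    ext[:m, :m, m:] = theta              # ... + theta(x, y)
    return ext

# A = CD^{2*}_{01} (e1 e1 = e2) extended by theta(e2, e1) = 1, which
# yields the 3-dimensional algebra CD^3_03: e1 e1 = e2, e2 e1 = e3.
c = np.zeros((2, 2, 2)); c[0, 0, 1] = 1.0
theta = np.zeros((2, 2, 1)); theta[1, 0, 0] = 1.0
ext = central_extension(c, theta)
```

The example reproduces $\mathfrak{CD}^{3}_{03}$ ($e_1e_1=e_2$, $e_2e_1=e_3$) as a $1$-dimensional central extension of the $2$-dimensional nilpotent algebra with $e_1e_1=e_2$.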
For a bilinear map $\theta :{\bf A}\times {\bf A}\longrightarrow {\bf V}$, the space $\theta ^{\bot }=\left\{ x\in {\bf A}\mid \theta \left(
x,{\bf A}\right) =\theta \left(
{\bf A},x\right) =0\right\} $ is called the {\it annihilator} of $\theta$.
For an algebra ${\bf A}$, the ideal
${\rm Ann}\left( {\bf A}\right) =\left\{ x\in {\bf A}\mid x{\bf A} ={\bf A}x =0\right\}$ is called the {\it annihilator} of ${\bf A}$.
One has
\begin{equation*}
{\rm Ann}\left( {\bf A}_{\theta }\right) =\left( \theta ^{\bot }\cap {\rm Ann}\left(
{\bf A}\right) \right) \oplus {\bf V}.
\end{equation*}
Any $n$-dimensional algebra with non-trivial annihilator can be represented in
the form ${\bf A}_{\theta }$ for some $m$-dimensional algebra ${\bf A}$, an $(n-m)$-dimensional vector space ${\bf V}$ and $\theta \in {\rm Z}^{2}\left( {\bf A},{\bf V}\right)$, where $m<n$ (see \cite[Lemma 5]{hac16}).
Moreover, there is a unique such representation with $m=n-\dim{\rm Ann}({\bf A})$. Note also that the latter equality is equivalent to the condition $\theta ^{\bot }\cap {\rm Ann}\left(
{\bf A}\right)=0$.
Let us pick some $\phi\in {\rm Aut}\left( {\bf A}\right)$, where ${\rm Aut}\left( {\bf A}\right)$ is the automorphism group of ${\bf A}$.
For $\theta\in {\rm Z}^{2}\left( {\bf A},{\bf V}\right)$, let us define $(\phi \theta) \left( x,y\right) =\theta \left( \phi \left( x\right)
,\phi \left( y\right) \right) $. Then we get an action of ${\rm Aut}\left( {\bf A}\right) $ on ${\rm Z}^{2}\left( {\bf A},{\bf V}\right)$ that induces an action of the same group on $\rm {H}^{2}\left( {\bf A},{\bf V}\right)$.
\begin{definition}
Let ${\bf A}$ be an algebra and $I$ be a subspace of ${\rm Ann}({\bf A})$. If ${\bf A}={\bf A}_0 \oplus I$, for some subalgebra ${\bf A}_0$ of ${\bf A}$,
then $I$ is called an {\it annihilator component} of ${\bf A}$. We say that ${\bf A}$ is {\it split} if it has some non-zero annihilator component; otherwise we say that ${\bf A}$ is {\it non-split}.
\end{definition}
For a linear space $\bf U$, the {\it Grassmannian} $G_{s}\left( {\bf U}\right) $ is
the set of all $s$-dimensional linear subspaces of ${\bf U}$. For any $s\ge 1$, the action of ${\rm Aut}\left( {\bf A}\right)$ on $\rm {H}^{2}\left( {\bf A},\mathbb{C}\right)$ induces
an action of the same group on $G_{s}\left( \rm {H}^{2}\left( {\bf A},\mathbb{C}\right) \right)$.
Let us define
$$
{\bf T}_{s}\left( {\bf A}\right) =\left\{ {\bf W}\in G_{s}\left( {\rm H}^{2}\left( {\bf A},\mathbb{C}\right) \right) \,\left|\, \bigcap_{[\theta]\in {\bf W}}\theta^{\bot }\cap {\rm Ann}\left( {\bf A}\right) =0\right.\right\}.
$$
Note that ${\bf T}_{s}\left( {\bf A}\right)$ is stable under the action of ${\rm Aut}\left( {\bf A}\right) $.
Let us fix a basis $e_{1},\ldots,e_{s} $ of ${\bf V}$ and $\theta \in {\rm Z}^{2}\left( {\bf A},{\bf V}\right) $. Then there are unique $\theta _{i}\in {\rm Z}^{2}\left( {\bf A},\mathbb{C}\right)$ ($1\le i\le s$) such that $\theta \left( x,y\right) =\underset{i=1}{\overset{s}{%
\sum }}\theta _{i}\left( x,y\right) e_{i}$ for all $x,y\in{\bf A}$.
Note that $\theta ^{\bot }=\theta^{\bot} _{1}\cap \theta^{\bot} _{2}\cap \cdots \cap \theta^{\bot} _{s}$ in this case.
If $\theta ^{\bot
}\cap {\rm Ann}\left( {\bf A}\right) =0$, then by \cite[Lemma 13]{hac16} the algebra ${\bf A}_{\theta }$ is split if and only if $\left[ \theta _{1}\right],\left[\theta _{2}\right] ,\ldots ,\left[ \theta _{s}\right] $ are linearly
dependent in $\rm {H}^{2}\left( {\bf A},\mathbb{C}\right)$. Thus, if $\theta ^{\bot
}\cap {\rm Ann}\left( {\bf A}\right) =0$ and ${\bf A}_{\theta }$ is non-split, then $\left\langle \left[ \theta _{1}\right] , \ldots,%
\left[ \theta _{s}\right] \right\rangle$ is an element of ${\bf T}_{s}\left( {\bf A}\right)$.
Now, if $\vartheta\in {\rm Z}^{2}\left( {\bf A},\bf{V}\right)$ is such that $\vartheta ^{\bot
}\cap {\rm Ann}\left( {\bf A}\right) =0$ and ${\bf A}_{\vartheta }$ is non-split, then by \cite[Lemma 17]{hac16} one has ${\bf A}_{\vartheta }\cong{\bf A}_{\theta }$ if and only if
$\left\langle \left[ \theta _{1}\right] ,\left[ \theta _{2}%
\right] ,\ldots ,\left[ \theta _{s}\right] \right\rangle,
\left\langle \left[ \vartheta _{1}\right] ,\left[ \vartheta _{2}\right] ,\ldots,%
\left[ \vartheta _{s}\right] \right\rangle\in {\bf T}_{s}\left( {\bf A}\right)$ belong to the same orbit under the action of ${\rm Aut}\left( {\bf A}\right) $, where $%
\vartheta \left( x,y\right) =\underset{i=1}{\overset{s}{\sum }}\vartheta
_{i}\left( x,y\right) e_{i}$.
Hence, there is a one-to-one correspondence between the set of $%
{\rm Aut}\left( {\bf A}\right) $-orbits on ${\bf T}_{s}\left( {\bf A}%
\right) $ and the set of isomorphism classes of central extensions of $\bf{A}$ by $\bf{V}$ with $s$-dimensional annihilator and trivial annihilator component.
Consequently, to construct all $s$-dimensional central extensions with trivial annihilator component
of a given $(n-s)$-dimensional algebra ${\bf A}$ one has to describe ${\bf T}_{s}({\bf A})$, ${\rm Aut}({\bf A})$ and the action of ${\rm Aut}({\bf A})$ on ${\bf T}_{s}({\bf A})$ and then
for each orbit under the action of ${\rm Aut}({\bf A})$ on ${\bf T}_{s}({\bf A})$ pick a representative and construct the algebra corresponding to it.
\subsubsection{Reduction to non-$\mathfrak{CD}$-algebras}\label{sec-non-CD}
The class of $\mathfrak{CD}$-algebras is defined by the
property that the commutator of any pair of multiplication operators is a derivation \cite{ack, kk20};
namely, an algebra $\mathfrak{A}$ is a $\mathfrak{CD}$-algebra if and only if
\[ [T_x,T_y] \in \mathfrak{Der} (\mathfrak{A}),\]
for all $x,y \in \mathfrak{A}$, where $T_z \in \{ R_z,L_z\}$. Here we use the notation $R_z$ (resp. $L_z$) for the operator of right (resp. left) multiplication in $\mathfrak{A}$. We will denote the variety of $\mathfrak{CD}$-algebras by $\mathfrak{CD}$.
In terms of identities, the class of $\mathfrak{CD}$-algebras is defined by the following three:
\begin{align*}
((xy)a)b-((xy)b)a&=((xa)b-(xb)a)y+x((ya)b-(yb)a),\\
(a(xy))b-a((xy)b)&=((ax)b-a(xb))y+x((ay)b-a(yb)),\\
a(b(xy))-b(a(xy))&=(a(bx)-b(ax))y+x(a(by)-b(ay)).
\end{align*}
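Since all three identities are multilinear in $x$, $y$, $a$, $b$, it suffices to verify them on basis elements. The following sketch (our own illustration) performs this check for an algebra given by structure constants; the first example is the $3$-dimensional nilpotent algebra with $e_1e_1=e_2$, $e_2e_2=e_3$ (a $\mathfrak{CD}$-algebra), while the second adjoins the extra product $e_1e_3=e_4$, which breaks the second identity.

```python
import numpy as np

def is_CD(c, tol=1e-12):
    """Check the three CD identities on all basis quadruples (x, y, a, b);
    by multilinearity this decides membership in the variety.
    c[i, j] is the coefficient vector of e_i e_j."""
    c = np.asarray(c, dtype=float)
    n = c.shape[0]
    e = np.eye(n)
    mul = lambda u, v: np.einsum('i,j,ijk->k', u, v, c)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    x, y, a, b = e[i], e[j], e[k], e[l]
                    xy = mul(x, y)
                    ids = [
                        mul(mul(xy, a), b) - mul(mul(xy, b), a)
                        - mul(mul(mul(x, a), b) - mul(mul(x, b), a), y)
                        - mul(x, mul(mul(y, a), b) - mul(mul(y, b), a)),
                        mul(mul(a, xy), b) - mul(a, mul(xy, b))
                        - mul(mul(mul(a, x), b) - mul(a, mul(x, b)), y)
                        - mul(x, mul(mul(a, y), b) - mul(a, mul(y, b))),
                        mul(a, mul(b, xy)) - mul(b, mul(a, xy))
                        - mul(mul(a, mul(b, x)) - mul(b, mul(a, x)), y)
                        - mul(x, mul(a, mul(b, y)) - mul(b, mul(a, y))),
                    ]
                    if max(np.abs(v).max() for v in ids) > tol:
                        return False
    return True

# e1 e1 = e2, e2 e2 = e3: a CD-algebra (as are all 3-dimensional
# nilpotent algebras).
c_cd = np.zeros((3, 3, 3)); c_cd[0, 0, 1] = 1.0; c_cd[1, 1, 2] = 1.0
# Adjoining e1 e3 = e4 breaks the second identity: non-CD.
c_non = np.zeros((4, 4, 4))
c_non[0, 0, 1] = 1.0; c_non[1, 1, 2] = 1.0; c_non[0, 2, 3] = 1.0
```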
Our method of classification of nilpotent algebras will be based on a classification of
$\mathfrak{CD}$-algebras~\cite{kk20}. Clearly, any central extension of a non-$\mathfrak{CD}$-algebra is a non-$\mathfrak{CD}$-algebra. But a $\mathfrak{CD}$-algebra may have extensions which are not $\mathfrak{CD}$-algebras. More precisely, let $\bf{A}$ be a $\mathfrak{CD}$-algebra and $\theta \in {\rm Z^2}(\bf{A}, {\mathbb C}).$ Then ${\bf{A}}_{\theta }$ is a $\mathfrak{CD}$-algebra if and only if
\begin{align}
\label{cd1} \theta((xy)a,b)-\theta((xy)b,a)&=\theta((xa)b-(xb)a,y)+\theta(x,(ya)b-(yb)a),\\
\label{cd2} \theta(a(xy),b)-\theta(a,(xy)b)&=\theta((ax)b-a(xb),y)+\theta(x,(ay)b-a(yb)),\\
\label{cd3} \theta(a,b(xy))-\theta(b,a(xy))&=\theta(a(bx)-b(ax),y)+\theta(x,a(by)-b(ay)).
\end{align}
for all $x,y,a,b\in \bf{A}$.
Define the subspace ${\rm Z_\mathfrak{CD}^2}(\bf{A},{\mathbb C})$ of ${\rm Z^2}(\bf{A},{\mathbb C})$ by
\begin{equation*}
{\rm Z_{\mathfrak{CD}}^2}({\bf A},{\mathbb C}) =\left\{ \theta \in {\rm Z^2}({\bf A},{\mathbb C}) : \theta \mbox{ satisfies } (\ref{cd1}), (\ref{cd2}) \mbox{ and } (\ref{cd3}) \right\}.
\end{equation*}
Observe that ${\rm B^2}(\bf{A},{\mathbb C})\subseteq{\rm Z_\mathfrak{CD}^2}(\bf{A},{\mathbb C}).$
Let ${\rm H_\mathfrak{CD}^2}(\bf{A},{\mathbb C}) =%
{\rm Z_\mathfrak{CD}^2}(\bf{A},{\mathbb C}) \big/{\rm B^2}(\bf{A},{\mathbb C}).$ Then ${\rm H_\mathfrak{CD}^2}(\bf{A},{\mathbb C})$ is a subspace of $%
{\rm H^2}(\bf{A},{\mathbb C}).$ Define
\[{\bf R}_{s}(\bf{A}) =\left\{ {\bf W}\in {\bf T}_{s}(\bf{A}) :{\bf W}\in G_{s}({\rm H_\mathfrak{CD}^2}(\bf{A},{\mathbb C}) ) \right\}, \]
\[ {\bf U}_{s}(\bf{A}) =\left\{ {\bf W}\in {\bf T}_{s}(\bf{A}) :{\bf W}\notin G_{s}({\rm H_\mathfrak{CD}^2}(\bf{A},{\mathbb C}) ) \right\}.\]
Then ${\bf T}_{s}(\bf{A}) ={\bf R}_{s}(\bf{A})$ $\mathbin{\mathaccent\cdot\cup}$ ${\bf U}_{s}(\bf{A}).$ The sets ${\bf R}_{s}(\bf{A}) $ and ${\bf U}_{s}(\bf{A})$ are stable under the action
of $\operatorname{Aut}(\bf{A}).$
Thus, the nilpotent algebras
corresponding to the representatives of $\operatorname{Aut}(\bf{A})$-orbits on ${\bf R}_{s}(\bf{A})$ are $\mathfrak{CD}$-algebras,
while those corresponding to the representatives of $\operatorname{Aut}(\bf{A}) $-orbits on ${\bf U}_{s}(\bf{A})$ are non-$\mathfrak{CD}$-algebras. Hence, we may construct all nilpotent non-split non-$\mathfrak{CD}$-algebras $\bf{A}$ of dimension $n$ with $s$-dimensional annihilator
from a given nilpotent algebra $\bf{A}^{\prime }$ of dimension $n-s$ as follows:
\begin{enumerate}
\item If $\bf{A}^{\prime }$ is non-$\mathfrak{CD}$, then apply the general procedure described at the end of Section~\ref{SS:method}.
\item Otherwise, do the following:
\begin{enumerate}
\item Determine ${\bf U}_{s}(\bf{A}^{\prime})$ and $\operatorname{Aut}(\bf{A}^{\prime }).$
\item Determine the set of $\operatorname{Aut}(\bf{A}^{\prime })$-orbits on ${\bf U}_{s}(\bf{A}^{\prime }).$
\item For each orbit, construct the nilpotent algebra corresponding to one of its
representatives.
\end{enumerate}
\end{enumerate}
We will use the following auxiliary notation during the construction of central extensions.
Let ${\bf A}$ be an algebra with basis $e_{1},e_{2},\ldots,e_{n}$. Then $\Delta_{ij}:{\bf A}\times {\bf A}\longrightarrow \mathbb{C}$ denotes the bilinear form defined by the equalities $\Delta _{ij}\left( e_{i},e_{j}\right) =1$
and $\Delta _{ij}\left( e_{l},e_{m}\right) =0$ for $(l,m) \neq (i,j)$. In this case, the $\Delta_{ij}$ with $1\leq i , j\leq n $ form a basis of the space of bilinear forms on $\bf{A}$.
We also denote by
\begin{longtable}{lll}
$\mathfrak{CD}^{i*}_j$& the $j$th $i$-dimensional nilpotent trivial $\mathfrak{CD}$-algebra, \\
$\mathfrak{CD}^i_j$& the $j$th $i$-dimensional nilpotent non-trivial $\mathfrak{CD}$-algebra, \\
$\D{i}{j}$& the $j$th $i$-dimensional nilpotent terminal algebra,\\
${{\bf N}}^i_j$& the $j$th $i$-dimensional nilpotent non-$\mathfrak{CD}$-algebra.
\end{longtable}
\subsection{Some special types of nilpotent algebras}\label{sec-spec-types}
\subsubsection{Trivial $\mathfrak{CD}$-algebras}
Recall that the class of $n$-dimensional algebras defined by the identities $(xy)z=0$ and $x(yz)=0$
lies in the intersection of all well-known varieties of algebras defined by a family of polynomial identities of degree $3,$ such as Leibniz, Zinbiel, associative, Novikov and many other algebras.
On the other hand,
all algebras defined by the identities $(xy)z=0$ and $x(yz)=0$ are central extensions of some suitable algebra with zero product.
The list of all non-anticommutative $4$-dimensional algebras defined by the identities $(xy)z=0$ and $x(yz)=0$ can be found in \cite{demir}.
Note that there is only one $4$-dimensional nilpotent anticommutative algebra satisfying the identity $(xy)z=0.$
Obviously, all algebras from this list are $4$-dimensional nilpotent $\mathfrak{CD}$-algebras.
\subsubsection{$2$-dimensional nilpotent algebras}
There is only one non-zero $2$-dimensional nilpotent algebra.
It is a $\mathfrak{CD}$-algebra:
\begin{longtable}{ll llll}
$\cd {2*}{01}$ & $:$ & $e_1 e_1 = e_2$
\end{longtable}
\subsubsection{$3$-dimensional nilpotent algebras}
Thanks to \cite{cfk18}, we have the classification of all $3$-dimensional nilpotent algebras.
It is easy to see that each $3$-dimensional nilpotent algebra is a $\mathfrak{CD}$-algebra:
\begin{longtable}{lllll llll}
$\cd 3{01}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_2=e_3$ \\
\hline
$\cd 3{02}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_1= e_3$ & $e_2 e_2=e_3$ \\
\hline$\cd 3{03}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_1=e_3$ \\
\hline$\cd 3{04}(\lambda)$&$:$& $ e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ \\
\hline$\cd {3*}{01}$&$:$& $e_1 e_1 = e_2$\\
\hline$\cd {3*}{02}$&$:$& $e_1 e_1 = e_3$ &$ e_2 e_2=e_3$ \\
\hline$\cd {3*}{03}$&$:$& $e_1 e_2=e_3$ & $e_2 e_1=-e_3$ \\
\hline$\cd {3*}{04}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3$ & $e_2 e_2=e_3$
\end{longtable}
\subsubsection{$4$-dimensional nilpotent algebras with $2$-dimensional annihilator }
Thanks to \cite{cfk18}, we have the classification of all $4$-dimensional non-split nilpotent non-trivial algebras with $2$-dimensional annihilator.
All of these algebras are $\mathfrak{CD}$-algebras:
\begin{longtable}{lllll llll}
$\cd 4{05}$&: & $e_1 e_1 = e_2$ & $e_2 e_1=e_4$ & $e_2 e_2=e_3$ \\
\hline$\cd 4{06}$ & $:$ & $ e_1 e_1 = e_2$ & $e_1 e_2=e_4$ & $e_2 e_1=e_3$ \\
\hline$\cd 4{07}(\lambda)$& $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_4$ & $e_2 e_1=\lambda e_4$ & $e_2 e_2=e_3$ \\
\end{longtable}
\subsubsection{$4$-dimensional nilpotent algebras with $1$-dimensional annihilator }
Thanks to \cite{kk20} we have the classification of all $4$-dimensional nilpotent $\mathfrak{CD}$-algebras with $1$-dimensional annihilator.
The remaining nilpotent non-$\mathfrak{CD}$-algebras with $1$-dimensional annihilator will be found in the present paper.
\subsubsection{Trivial central extensions.}
Thanks to \cite{kk20} we know that all central extensions of
$\mathfrak{CD}^{3*}_{01},$
$\mathfrak{CD}^{3*}_{02},$
$\mathfrak{CD}^{3*}_{03}$ and
$\mathfrak{CD}^{3*}_{04}$ are $\mathfrak{CD}$-algebras.
\bigskip\bigskip
\subsection{$1$-dimensional central extensions of $\cd 3{01}$}\label{sec-CD^3_01}
Here we collect all information about ${\mathfrak{CD}_{01}^{3}}$:
\begin{longtable}{|l|l|l|}
\hline
Algebra & Multiplication & Cohomology \\
\hline
${\mathfrak{CD}}_{01}^{3}$ &
$\begin{array}{l}e_1 e_1 = e_2\\ e_2 e_2=e_3
\end{array}$&
$\begin{array}{lcl}
{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}}_{01}^{3})&=&\langle [\Delta_{12}], [\Delta_{21}] \rangle \\
{\rm H^2}({\mathfrak{CD}}_{01}^{3})&=&{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}_{01}^3}) \oplus
\langle[\Delta_{13}], [\Delta_{31}], [\Delta_{23}], [\Delta_{32}], [\Delta_{33}] \rangle
\end{array}$\\
\hline
\end{longtable}
Let us use the following notation:
\begin{longtable}{llll}
$\nb 1 = \Dl 12$&$ \nb 2 = \Dl 21$& $\nb 3 = \Dl 13$ &$\nb 4=\Dl 31$\\
$\nb 5 = \Dl 23$&$ \nb 6 = \Dl 32$& $\nb 7 = \Dl 33$.
\end{longtable}
Take $\theta=\sum_{i=1}^7\alpha_i\nb i\in {\rm H}^2 (\cd 3{01})$.
If
$$
\phi=
\begin{pmatrix}
x & 0 & 0\\
0 & x^2 & 0\\
y & 0 & x^4
\end{pmatrix}\in\aut{\cd 3{01}},
$$
then
$$
\phi^T\begin{pmatrix}
0 & \alpha_1 & \alpha_3\\
\alpha_2 & 0 & \alpha_5\\
\alpha_4 & \alpha_6 & \alpha_7
\end{pmatrix} \phi=
\begin{pmatrix}
\alpha^* & \alpha_1^* & \alpha_3^* \\
\alpha^*_2 & 0 & \alpha^*_5\\
\alpha^*_4 & \alpha^*_6 & \alpha^*_7
\end{pmatrix},
$$
where
\begin{longtable}{llll}
$\alpha^*_1=x^2 (x \alpha_1+y \alpha_6)$ &
$\alpha^*_2=x^2 (x \alpha_2+y \alpha_5)$ &
$\alpha^*_3=x^4 (x \alpha_3+y \alpha_7)$ &
$\alpha^*_4=x^4 (x \alpha_4+y \alpha_7)$\\
$\alpha^*_5=x^6 \alpha_5$ &
$\alpha^*_6=x^6 \alpha_6$&
$\alpha^*_7=x^8 \alpha_7$.&
\end{longtable}
Hence, $\phi\langle\theta\rangle=\langle\theta^*\rangle$, where $\theta^*=\sum\limits_{i=1}^7 \alpha_i^* \nb i.$
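The table above is simply the entry-by-entry expansion of $\phi^T\theta\phi$. As a sanity check (plain Python with arbitrary sample values; not part of the original text), the rules can be confirmed numerically:

```python
# Numerical sanity check (not from the paper): the entries of
# phi^T * M * phi reproduce the stated formulas for alpha_i^*.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

a1, a2, a3, a4, a5, a6, a7 = 2, -3, 5, 7, -1, 4, 6  # arbitrary coefficients
x, y = 3, -2                                        # automorphism parameters, x != 0

phi = [[x, 0, 0], [0, x**2, 0], [y, 0, x**4]]
M = [[0, a1, a3], [a2, 0, a5], [a4, a6, a7]]
R = matmul(matmul(transpose(phi), M), phi)

assert R[0][1] == x**2 * (x*a1 + y*a6)   # alpha_1^*
assert R[1][0] == x**2 * (x*a2 + y*a5)   # alpha_2^*
assert R[0][2] == x**4 * (x*a3 + y*a7)   # alpha_3^*
assert R[2][0] == x**4 * (x*a4 + y*a7)   # alpha_4^*
assert R[1][2] == x**6 * a5              # alpha_5^*
assert R[2][1] == x**6 * a6              # alpha_6^*
assert R[2][2] == x**8 * a7              # alpha_7^*
assert R[1][1] == 0                      # the (2,2) entry stays zero
```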
We are only interested in elements with $(\alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7)\neq (0,0,0,0,0).$
Then
\begin{enumerate}
\item $\alpha_7\neq 0.$
\begin{enumerate}
\item $\alpha_6\neq 0$.
Then choosing $x=\sqrt{\alpha_6 \alpha_7^{-1}}$ and $y=- \alpha_4\alpha_7^{-1}x$, we have the family of representatives
$\langle \alpha \nabla_1+\beta \nabla_2+\gamma \nabla_3+\delta \nb 5+\nb 6+\nb 7 \rangle.$
Observe that two distinct quadruples $(\alpha,\beta,\gamma, \delta)$ and $(\alpha',\beta',\gamma', \delta')$ determine the same orbit if and only if $(\alpha,\beta,\gamma, \delta)=(-\alpha',-\beta',-\gamma', \delta').$
\item $\alpha_6= 0$ and $\alpha_5\neq0$.
Choosing $x=\sqrt{\alpha_5 \alpha_7^{-1}}$ and $y=- \alpha_4\alpha_7^{-1}x$, we have the family of representatives
$\langle \alpha \nabla_1+\beta \nabla_2+\gamma \nabla_3+ \nb 5 +\nb 7 \rangle.$
Observe that two distinct triples $(\alpha,\beta,\gamma)$ and $(\alpha',\beta',\gamma')$ determine the same orbit if and only if $(\alpha,\beta,\gamma)=(-\alpha',-\beta',-\gamma').$
\item $\alpha_6=\alpha_5=0$ and $\alpha_3 \neq \alpha_4$.
Choosing $x=\sqrt[3]{(\alpha_3-\alpha_4) \alpha_7^{-1}}$ and $y=- \alpha_4\alpha_7^{-1}x$, we have the family of representatives
$\langle \alpha \nabla_1+\beta \nabla_2+ \nabla_3+ \nb 7 \rangle.$
Observe that two pairs $(\alpha,\beta)$ and $(\alpha',\beta')$ determine the same orbit if and only if $(\alpha,\beta )=(\xi \alpha',\xi \beta'),$ where $\xi^3=1.$
\item $\alpha_6=\alpha_5= 0,$ $\alpha_3 = \alpha_4$ and $\alpha_2\ne 0$.
Choosing
$x=\sqrt[5]{\alpha_2\alpha_7^{-1}}$ and $y=-\alpha_4\alpha_7^{-1}x$, we have the family of representatives of distinct orbits
$\langle \alpha \nabla_1+\nabla_2+\nb 7 \rangle.$
\item $\alpha_6=\alpha_5=0$, $\alpha_3 = \alpha_4$ and $\alpha_2=0$.
Choosing
$y=-\alpha_4\alpha_7^{-1}x$, we have two representatives
$\langle \nb 7 \rangle$ and $\langle \nabla_1+ \nb 7 \rangle$
depending on whether $\alpha_1=0$ or not.
\end{enumerate}
\item $\alpha_7=0.$
\begin{enumerate}
\item $\alpha_6\neq 0$ and $\alpha_4 \neq 0$.
Choosing
$x=\alpha_4 \alpha_6^{-1}$ and $y=-\alpha_1\alpha_4 \alpha_6^{-2},$ we have the family of representatives of distinct orbits
$\langle \alpha \nabla_2+\beta \nabla_3+ \nabla_4+ \gamma \nb 5 +\nb 6 \rangle.$
\item $\alpha_6\neq 0,$ $\alpha_4 = 0$ and $\alpha_3\neq0$.
Choosing
$x=\alpha_3 \alpha_6^{-1}$ and $y=-\alpha_1\alpha_3 \alpha_6^{-2},$ we have the family of representatives of distinct orbits
$\langle \alpha \nabla_2+ \nabla_3+ \beta \nb 5 +\nb 6 \rangle.$
\item $\alpha_6\neq 0$ and $\alpha_4=\alpha_3 = 0$.
Choosing
$y=-\alpha_1\alpha_6^{-1}x$ we have two families of representatives of distinct orbits
$\langle \alpha \nb 5 +\nb 6 \rangle$ and $\langle \nabla_2+ \alpha \nb 5 +\nb 6 \rangle$
depending on whether $\alpha_1\alpha_5 - \alpha_2\alpha_6=0$ or not.
\item $\alpha_6 = 0,$ $\alpha_5\neq0$ and $\alpha_4 \neq 0$.
Choosing
$x=\alpha_4 \alpha_5^{-1}$ and $y=-\alpha_2\alpha_4 \alpha_5^{-2},$ we have the family of representatives of distinct orbits
$\langle \alpha \nb 1+\beta \nabla_3 +\nb 4+\nb 5 \rangle.$
\item $\alpha_6 = 0,$ $\alpha_5\neq0$ and $\alpha_4 = 0$.
Choosing $y=-\alpha_2\alpha_5^{-1}x$, we have the following representatives of distinct orbits
$\langle \alpha \nb 1 +\nb 3+\nb 5 \rangle$,
$\langle \nb 1 +\nb 5 \rangle$ and
$\langle \nb 5 \rangle,$ corresponding to the 3 cases: $\alpha_3\ne 0$; $\alpha_3=0$, $\alpha_1\ne 0$; $\alpha_3=\alpha_1=0$, respectively.
\item $\alpha_6 = \alpha_5 = 0$ and $\alpha_4 \neq 0$.
We have the following families of representatives of distinct orbits
$\langle \nb 1+\alpha \nb 2+\beta\nb 3+\nb 4 \rangle$, $\langle \nb 2+\alpha\nb 3+\nb 4 \rangle$ and $\langle \alpha\nb 3+\nb 4 \rangle$ corresponding to the 3 cases: $\alpha_1\ne 0$; $\alpha_1=0$, $\alpha_2\ne 0$; $\alpha_1=\alpha_2=0$, respectively.
\item $\alpha_6 = \alpha_5=\alpha_4 = 0$ and $\alpha_3\neq 0$.
We have the following representatives of distinct orbits
$\langle \alpha \nb 1+ \nabla_2 +\nb 3 \rangle,$
$\langle \nb 1+ \nb 3 \rangle$ and
$\langle \nb 3 \rangle$ corresponding to the 3 cases: $\alpha_2\ne 0$; $\alpha_2=0$, $\alpha_1\ne 0$; $\alpha_2=\alpha_1=0$, respectively.
\end{enumerate}
\end{enumerate}
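As a concrete illustration of case 1(a) (with hypothetical sample values, not taken from the paper), one can verify numerically that the stated choices of $x$ and $y$ eliminate $\nb 4$ and give $\nb 6$ and $\nb 7$ a common coefficient:

```python
# Illustration of case 1(a) (sample values, not from the paper): with
# alpha_6, alpha_7 != 0, x = sqrt(alpha_6 / alpha_7) and
# y = -alpha_4 * alpha_7**(-1) * x, the coefficient of nabla_4 vanishes
# and nabla_6, nabla_7 acquire a common coefficient.
from fractions import Fraction as F

a1, a2, a3, a4, a5, a6, a7 = map(F, (5, -2, 4, 3, 7, 8, 2))
x = F(2)                 # chosen so that x**2 == a6 / a7 (here 8/2 = 4)
assert x**2 == a6 / a7
y = -a4 / a7 * x

# transformation rules from the text
s3 = x**4 * (x*a3 + y*a7)   # s3 / s7 is the parameter gamma of the representative
s4 = x**4 * (x*a4 + y*a7)
s5 = x**6 * a5
s6 = x**6 * a6
s7 = x**8 * a7

assert s4 == 0              # nabla_4 is eliminated
assert s6 == s7 != 0        # nabla_6 and nabla_7 share a coefficient
```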
Summarizing, we have the following distinct orbits:
\begin{longtable}{lll}
$\langle \alpha \nb 1+ \nabla_2 +\nb 3 \rangle$ &
\multicolumn{2}{l}{$\langle \alpha \nabla_1+\beta \nabla_2+\gamma \nabla_3+\delta \nb 5+\nb 6+\nb 7 \rangle^{O(\alpha,\beta,\gamma, \delta)=O(-\alpha,-\beta,-\gamma, \delta)}$}\\
$\langle \alpha \nb 1+\nb 3+\nb 5 \rangle$ &
\multicolumn{2}{l}{$\langle \alpha \nabla_1+\beta \nabla_2+\gamma \nabla_3+\nb 5+\nb 7 \rangle^{O(\alpha,\beta,\gamma)=O(-\alpha,-\beta,-\gamma)}$}\\
\multicolumn{2}{l}{$\langle \alpha \nabla_1+\beta \nabla_2+ \nabla_3 +\nb 7 \rangle^{O(\alpha,\beta )=O(\sqrt[3]1(\alpha,\beta ))}$}&
$\langle \alpha \nabla_1+\nabla_2+ \nb 7 \rangle$ \\
$\langle \nb 1+ \nb 3 \rangle$ &
$\langle \nb 1+\alpha \nb 2+\beta\nb 3+\nb 4 \rangle$ &
$\langle \alpha \nb 1+\beta \nabla_3 +\nb 4+\nb 5 \rangle$ \\
$\langle \nb 1 +\nb 5 \rangle$ &
$\langle \nabla_1+ \nb 7 \rangle$ &
$\langle \nb 2+ \alpha \nabla_3 +\nb 4 \rangle$ \\
$\langle \alpha \nabla_2+\beta \nabla_3+ \nabla_4+ \gamma \nb 5 +\nb 6 \rangle$ &
$\langle \alpha \nabla_2+ \nabla_3+ \beta \nb 5 +\nb 6 \rangle$ &
$\langle \nb 2 + \alpha\nb 5+\nb 6 \rangle$ \\
$\langle \alpha \nabla_3 +\nb 4 \rangle$ &
$\langle \nb 3 \rangle$ &
$\langle \nb 5 \rangle$ \\
$\langle \alpha \nb 5 +\nb 6 \rangle$ &
$\langle \nb 7 \rangle$
\end{longtable}
They correspond to the following new algebras:
{\tiny
\begin{longtable}{lllllllllllllllllll}
${\bf N}^4_{01}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=e_4$& $e_2e_1=e_4$& $e_2 e_2=e_3$ \\
\hline
${\bf N}^4_{02}(\alpha,\beta,\gamma,\delta)$ &$:$&
$e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=\gamma e_4$& $e_2e_1=\beta e_4$\\&&
$e_2 e_2=e_3$& $e_2e_3=\delta e_4$& $e_3e_2=e_4$& $e_3e_3=e_4$\\
\hline
${\bf N}^4_{03}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=e_4$& $e_2 e_2=e_3$ & $e_2e_3=e_4$\\ \hline
${\bf N}^4_{04}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\gamma e_4$&$e_2e_1=\beta e_4$\\
&& $e_2 e_2=e_3$&$e_2e_3=e_4$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{05}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=e_4$ &$e_2e_1=\beta e_4$& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{06}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_4$& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{07}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=e_4$& $e_2 e_2=e_3$ \\ \hline
${\bf N}^4_{08}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=\beta e_4$&$e_2e_1=\alpha e_4$& $e_2 e_2=e_3$ &$e_3e_1=e_4$\\ \hline
${\bf N}^4_{09}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\beta e_4$& $e_2 e_2=e_3$ &$e_2e_3=e_4$&$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{10}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$& $e_2 e_2=e_3$ &$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{11}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{12}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_1e_3=\alpha e_4$&$e_2e_1=e_4$ & $e_2 e_2=e_3$ &$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{13}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=\alpha e_4$& $e_2 e_2=e_3$\\
&&$e_2e_3=\gamma e_4$&$e_3e_1=e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{14}(\alpha, \beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3 =e_4$&$e_2e_1= \alpha e_4$& $e_2 e_2=e_3$ &$e_2e_3=\beta e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{15}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_4$& $e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$ &$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{16}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=\alpha e_4$ & $e_2 e_2=e_3$ &$e_3e_1=e_4$\\ \hline
${\bf N}^4_{17}$ &$:$& $e_1 e_1 = e_2$& $e_1e_3=e_4$ & $e_2 e_2=e_3$\\ \hline
${\bf N}^4_{18}$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_2e_3=e_4$\\ \hline
${\bf N}^4_{19}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$ &$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{20}$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_3e_3=e_4$\\ \hline
\end{longtable}
}
\begin{longtable}{cc}
${\bf N}^4_{02}(\alpha,\beta,\gamma,\delta) \cong {\bf N}^4_{02}(-(\alpha,\beta,\gamma),\delta)$ &
${\bf N}^4_{04}(\alpha,\beta,\gamma) \cong {\bf N}^4_{04}(-(\alpha,\beta,\gamma))$ \\
\multicolumn{2}{c}{${\bf N}^4_{05}(\alpha,\beta) \cong {\bf N}^4_{05}(\sqrt[3]{1}(\alpha,\beta))$}
\end{longtable}
\subsection{$1$-dimensional central extensions of $\cd 3{02}$}
Here we collect all information about ${\mathfrak{CD}_{02}^{3}}$:
\begin{longtable}{|l|l|l|}
\hline
Algebra & Multiplication & Cohomology \\
\hline
${\mathfrak{CD}}_{02}^{3}$ &
$\begin{array}{l}
e_1 e_1 = e_2\\
e_2 e_1= e_3\\
e_2 e_2=e_3
\end{array}$&
$\begin{array}{lcl}
{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}}_{02}^{3})&=&\langle [\Delta_{12}], [\Delta_{21}] \rangle \\
{\rm H^2}({\mathfrak{CD}}_{02}^{3})&=&{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}_{02}^3}) \oplus
\langle[\Delta_{13}], [\Delta_{31}], [\Delta_{23}], [\Delta_{32}], [\Delta_{33}] \rangle
\end{array}$\\
\hline
\end{longtable}
Let us use the following notation:
\begin{longtable}{llll}
$\nb 1 = \Dl 12$&$ \nb 2 = \Dl 21$& $\nb 3 = \Dl 13$ &$\nb 4=\Dl 31$\\
$\nb 5 = \Dl 23$&$ \nb 6 = \Dl 32$& $\nb 7 = \Dl 33$.
\end{longtable}
Take $\theta=\sum_{i=1}^7\alpha_i\nb i\in {\rm H}^2(\cd 3{02})$.
If
$$
\phi=
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
x & 0 & 1
\end{pmatrix}\in\aut{\cd 3{02}},
$$
then
$$
\phi^T\begin{pmatrix}
0 & \alpha_1 & \alpha_3\\
\alpha_2 & 0 & \alpha_5\\
\alpha_4 & \alpha_6 & \alpha_7
\end{pmatrix} \phi=
\begin{pmatrix}
\alpha^* & \alpha_1^* & \alpha_3^* \\
\alpha^*_2 & 0 & \alpha^*_5\\
\alpha^*_4 & \alpha^*_6 & \alpha^*_7
\end{pmatrix},
$$
where
\begin{longtable}{llll}
$\alpha^*_1=\alpha_1+x \alpha_6$ &
$\alpha^*_2=\alpha_2+x \alpha_5$ &
$\alpha^*_3=\alpha_3+x \alpha_7$ &
$\alpha^*_4=\alpha_4+x \alpha_7$\\
$\alpha^*_5= \alpha_5$ &
$\alpha^*_6= \alpha_6$&
$\alpha^*_7= \alpha_7$.&
\end{longtable}
Hence, $\phi\langle\theta\rangle=\langle\theta^*\rangle$, where $\theta^*=\sum\limits_{i=1}^7 \alpha_i^* \nb i.$
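Here the automorphism is unipotent, so the coefficients transform by pure shifts. A short numerical check (arbitrary sample values; not part of the original text):

```python
# Numerical sanity check (not from the paper): for the unipotent
# automorphism phi the coefficients transform by the shift formulas above.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

a1, a2, a3, a4, a5, a6, a7 = 2, -3, 5, 7, -1, 4, 6  # arbitrary coefficients
x = 4                                               # automorphism parameter

phi = [[1, 0, 0], [0, 1, 0], [x, 0, 1]]
M = [[0, a1, a3], [a2, 0, a5], [a4, a6, a7]]
R = matmul(matmul(transpose(phi), M), phi)

assert R[0][1] == a1 + x*a6   # alpha_1^*
assert R[1][0] == a2 + x*a5   # alpha_2^*
assert R[0][2] == a3 + x*a7   # alpha_3^*
assert R[2][0] == a4 + x*a7   # alpha_4^*
assert (R[1][2], R[2][1], R[2][2]) == (a5, a6, a7)  # alpha_5..7 unchanged
```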
We are only interested in elements with $(\alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7)\neq (0,0,0,0,0).$
Then
\begin{enumerate}
\item $\alpha_7\ne 0$. Then choosing $x=-\alpha_3\alpha_7^{-1}$, we obtain the family of representatives of distinct orbits $\langle \alpha \nb 1+ \beta \nb 2+\gamma \nb 4+\delta \nb 5+\varepsilon \nb 6+\nb 7 \rangle$.
\item $\alpha_7=0$ and $\alpha_6\ne 0$. Then choosing $x=-\alpha_1\alpha_6^{-1}$, we obtain the family of representatives of distinct orbits $\langle \alpha \nb 2+\beta \nb 3+\gamma \nb 4+ \delta \nb 5+ \nb 6 \rangle$.
\item $\alpha_7=\alpha_6=0$ and $\alpha_5\ne 0$. Then choosing $x=-\alpha_2\alpha_5^{-1}$, we obtain the family of representatives of distinct orbits $\langle \alpha \nb 1+\beta \nb 3+\gamma \nb 4+ \nb 5 \rangle$.
\item $\alpha_7=\alpha_6=\alpha_5=0$ and $\alpha_4\ne 0$. Then we obtain the family of representatives of distinct orbits $\langle \alpha \nb 1+\beta \nb 2+\gamma\nb 3+ \nb 4 \rangle$.
\item $\alpha_7=\alpha_6=\alpha_5=\alpha_4=0$ and $\alpha_3\ne 0$. Then we obtain the family of representatives of distinct orbits $\langle \alpha \nb 1+\beta \nb 2+ \nb 3 \rangle$.
\end{enumerate}
Summarizing, we have the following distinct orbits:
\begin{longtable}{ll}
$\langle \alpha \nb 1+\beta \nb 2+ \nb 3 \rangle$ &
$\langle \alpha \nb 1+\beta \nb 2+\gamma\nb 3+ \nb 4 \rangle$\\
$\langle \alpha \nb 1+\beta \nb 3+\gamma \nb 4+ \nb 5 \rangle$ &
$\langle \alpha \nb 2+\beta \nb 3+\gamma \nb 4+ \delta \nb 5+ \nb 6 \rangle$\\
\multicolumn{2}{c}{$\langle \alpha \nb 1+ \beta \nb 2+\gamma \nb 4+\delta \nb 5+\varepsilon \nb 6+\nb 7 \rangle$}\\
\end{longtable}
This gives the following new algebras:
{\tiny \begin{longtable}{lllllllllllllll}
${\bf N}^4_{21}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=e_4$&$e_2e_1=e_3+\beta e_4$& $e_2 e_2=e_3$ \\ \hline
${\bf N}^4_{22}(\alpha,\beta,\gamma)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\gamma e_4$&$e_2e_1=e_3+\beta e_4$& $e_2 e_2=e_3$&$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{23}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\beta e_4$&$e_2e_1=e_3$\\
&& $e_2 e_2=e_3$&$e_2e_3=e_4$&$e_3e_1=\gamma e_4$ \\ \hline
${\bf N}^4_{24}(\alpha,\beta,\gamma,\delta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=e_3+\alpha e_4$& $e_2 e_2=e_3$ \\
&&$e_2e_3=\delta e_4$&$e_3e_1=\gamma e_4$ & $e_3e_2=e_4$\\ \hline
${\bf N}^4_{25}(\alpha, \beta, \gamma, \delta, \varepsilon)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2= \alpha e_4$& $e_2e_1=e_3+\beta e_4$ & $e_2 e_2=e_3$\\
&&$e_2e_3=\delta e_4$ & $e_3e_1=\gamma e_4$ & $e_3e_2=\varepsilon e_4$ &$e_3e_3=e_4$\\ \hline
\end{longtable}
}
\subsection{$1$-dimensional central extensions of $\cd 3{03}$}
Here we collect all information about ${\mathfrak{CD}_{03}^{3}}$:
\begin{longtable}{|l|l|l|}
\hline
Algebra & Multiplication & Cohomology \\
\hline
${\mathfrak{CD}}_{03}^{3}$ &
$\begin{array}{l}
e_1 e_1 = e_2\\
e_2 e_1=e_3
\end{array}$&
$\begin{array}{lcl}
{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}}_{03}^{3})&=&\langle \Dl 12, \Dl 22, \Dl 13-2\Dl 31 \rangle \\
{\rm H^2}({\mathfrak{CD}}_{03}^{3})&=&{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}_{03}^3}) \oplus
\langle \Dl 31, \Dl 23, \Dl 32, \Dl 33 \rangle
\end{array}$\\
\hline
\end{longtable}
Let us use the following notation:
\begin{longtable}{llll}
$\nb 1 = \Dl 12$&$ \nb 2 = \Dl 22$& $\nb 3 = \Dl 13-2\Dl 31$ &$\nb 4=\Dl 31$\\
$\nb 5 = \Dl 23$&$ \nb 6 = \Dl 32$& $\nb 7 = \Dl 33$.
\end{longtable}
Take $\theta=\sum_{i=1}^7\alpha_i\nb i\in {\rm H}^2(\cd 3{03})$.
If
$$
\phi=
\begin{pmatrix}
x & 0 & 0\\
y & x^2 & 0\\
z & xy & x^3
\end{pmatrix}\in\aut{\cd 3{03}},
$$
then
$$
\phi^T\begin{pmatrix}
0 & \alpha_1 & \alpha_3\\
0 & \alpha_2 & \alpha_5\\
\alpha_4-2\alpha_3 & \alpha_6 & \alpha_7
\end{pmatrix} \phi=
\begin{pmatrix}
\alpha^* & \alpha^*_1 & \alpha^*_3\\
\alpha^{**} & \alpha^*_2 & \alpha^*_5\\
\alpha^*_4-2\alpha^*_3 & \alpha^*_6 & \alpha^*_7
\end{pmatrix},
$$
where
\begin{longtable}{lclc}
$\alpha^*_1$ &$=$&$ (\alpha_1x^2 + (\alpha_2+\alpha_3)xy + \alpha_5y^2 + \alpha_6xz + \alpha_7yz)x,$\\
$\alpha^*_2$ &$=$&$ (\alpha_2x^2 + (\alpha_5+\alpha_6)xy + \alpha_7y^2)x^2,$\\
$\alpha^*_3$ &$=$&$ (\alpha_3x + \alpha_5y + \alpha_7z)x^3,$\\
$\alpha^*_4$ &$=$&$ (\alpha_4x + (2\alpha_5+\alpha_6)y + 3\alpha_7z)x^3,$\\
$\alpha^*_5$ &$=$&$ (\alpha_5x + \alpha_7y)x^4,$\\
$\alpha^*_6$ &$=$&$ (\alpha_6x + \alpha_7y)x^4,$\\
$\alpha^*_7$ &$=$&$ \alpha_7x^6.$
\end{longtable}
Hence, $\phi\langle\theta\rangle=\langle\theta^*\rangle$, where $\theta^*=\sum\limits_{i=1}^7 \alpha_i^* \nb i.$
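These rules, including the twisted $(3,1)$ entry $\alpha^*_4-2\alpha^*_3$, can again be confirmed numerically (arbitrary sample values; not part of the original text):

```python
# Numerical sanity check (not from the paper): the entries of
# phi^T * M * phi reproduce the stated formulas for alpha_i^*, with the
# (3,1) entry equal to alpha_4^* - 2*alpha_3^*.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

a1, a2, a3, a4, a5, a6, a7 = 1, 2, 3, 4, 5, 6, 7   # arbitrary coefficients
x, y, z = 2, 3, -1                                 # automorphism parameters, x != 0

phi = [[x, 0, 0], [y, x**2, 0], [z, x*y, x**3]]
M = [[0, a1, a3], [0, a2, a5], [a4 - 2*a3, a6, a7]]
R = matmul(matmul(transpose(phi), M), phi)

s1 = x * (a1*x**2 + (a2 + a3)*x*y + a5*y**2 + a6*x*z + a7*y*z)
s2 = x**2 * (a2*x**2 + (a5 + a6)*x*y + a7*y**2)
s3 = x**3 * (a3*x + a5*y + a7*z)
s4 = x**3 * (a4*x + (2*a5 + a6)*y + 3*a7*z)
s5 = x**4 * (a5*x + a7*y)
s6 = x**4 * (a6*x + a7*y)
s7 = a7 * x**6

assert R[0][1] == s1 and R[1][1] == s2 and R[0][2] == s3
assert R[2][0] == s4 - 2*s3
assert R[1][2] == s5 and R[2][1] == s6 and R[2][2] == s7
```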
We are only interested in $\theta$ with $(\alpha_4, \alpha_5, \alpha_6, \alpha_7)\neq (0,0,0,0)$.
\begin{enumerate}
\item $\alpha_7\ne 0$. Then put $y=-\frac{\alpha_6x}{\alpha_7}$ and $z=\frac x{\alpha_7^2}(\alpha_5\alpha_6 - \alpha_3\alpha_7)$ to make $\alpha^*_3=\alpha^*_6=0$.
\begin{enumerate}
\item $\alpha_5\ne \alpha_6$. Then choosing $x=\frac {\alpha_5 - \alpha_6}{\alpha_7}$, we obtain the family of representatives of distinct orbits $\langle\alpha\nb 1+\beta\nb 2+\gamma\nb 4+\nb 5+\nb 7\rangle$.
\item $\alpha_5=\alpha_6$ and $\alpha_4\ne 3\alpha_3$. Then choosing $x=\sqrt{\frac {\alpha_4-3\alpha_3}{\alpha_7}}$, we obtain the family of representatives $\langle\alpha\nb 1+\beta\nb 2+\nb 4+\nb 7\rangle$, where two distinct pairs $(\alpha,\beta)$ and $(\alpha',\beta')$ determine the same orbit if and only if $(\alpha,\beta)=(-\alpha',\beta')$.
\item $\alpha_5=\alpha_6$, $\alpha_4=3\alpha_3$ and $\alpha_6^2\ne \alpha_2\alpha_7$. Then choosing $x=\frac 1{\alpha_7}\sqrt{\alpha_2\alpha_7-\alpha_6^2}$, we obtain the family of representatives $\langle\alpha\nb 1+\nb 2+\nb 7\rangle$, where two distinct parameters $\alpha$ and $\alpha'$ determine the same orbit if and only if $\alpha=-\alpha'$.
\item $\alpha_5=\alpha_6$, $\alpha_4=3\alpha_3$ and $\alpha_6^2=\alpha_2\alpha_7$. Then we obtain 2 representatives $\langle\nb 7\rangle$ and $\langle\nb 1+\nb 7\rangle$ depending on whether $\alpha_3\alpha_6=\alpha_1\alpha_7$ or not.
\end{enumerate}
\item $\alpha_7=0$, $\alpha_5\ne 0$ and $\alpha_6\ne 0$. Then $\alpha^*_7=0$, and we put $y=-\frac{\alpha_3x}{\alpha_5}$ and $z=\frac x{\alpha_5\alpha_6}(\alpha_2\alpha_3 - \alpha_1\alpha_5)$ to make $\alpha^*_1=\alpha^*_3=0$.
\begin{enumerate}
\item $(\alpha_4-2\alpha_3)\alpha_5 \ne \alpha_3\alpha_6$. Then choosing $x=\frac 1{\alpha_5\alpha_6}((\alpha_4-2\alpha_3)\alpha_5 - \alpha_3\alpha_6)$, we obtain the family of representatives of distinct orbits $\langle\alpha\nb 2+\nb 4+\beta\nb 5+\nb 6\rangle_{\beta\ne 0}$.
\item $(\alpha_4-2\alpha_3)\alpha_5 = \alpha_3\alpha_6$ and $(\alpha_2 - \alpha_3)\alpha_5 \ne \alpha_3\alpha_6$. Then choosing $x=\frac 1{\alpha_5\alpha_6}((\alpha_2 - \alpha_3)\alpha_5 - \alpha_3\alpha_6)$, we obtain the family of representatives of distinct orbits $\langle\nb 2+\alpha\nb 5+\nb 6\rangle_{\alpha\ne 0}$.
\item $(\alpha_4-2\alpha_3)\alpha_5 = \alpha_3\alpha_6=(\alpha_2 - \alpha_3)\alpha_5$. Then we obtain the family of representatives of distinct orbits $\langle\alpha\nb 5+\nb 6\rangle_{\alpha\ne 0}$.
\end{enumerate}
\item $\alpha_7=\alpha_6=0$ and $\alpha_5\ne 0$. Then $\alpha^*_6=\alpha^*_7=0$, and we put $y=-\frac{\alpha_3x}{\alpha_5}$ to make $\alpha^*_3=0$.
\begin{enumerate}
\item $\alpha_4\ne 2\alpha_3$. Then choosing $x=\frac {\alpha_4-2\alpha_3}{\alpha_5}$, we obtain the family of representatives of distinct orbits $\langle\alpha\nb 1+\beta\nb 2+\nb 4+\nb 5\rangle$.
\item $\alpha_4=2\alpha_3$ and $\alpha_2\ne\alpha_3$. Then choosing $x=\frac {\alpha_2-\alpha_3}{\alpha_5}$, we obtain the family of representatives of distinct orbits $\langle\alpha\nb 1+\nb 2+\nb 5\rangle$.
\item $\alpha_4=2\alpha_3$ and $\alpha_2=\alpha_3$. Then we obtain 2 representatives $\langle\nb 5\rangle$ and $\langle\nb 1+\nb 5\rangle$ depending on whether $\alpha_3^2 = \alpha_1\alpha_5$ or not.
\end{enumerate}
\item $\alpha_7=\alpha_5=0$ and $\alpha_6\ne 0$. Then $\alpha^*_5=\alpha^*_7=0$, and we put $y=-\frac{\alpha_4x}{\alpha_6}$ and $z=\frac x{\alpha_6^2}((\alpha_2 + \alpha_3)\alpha_4 - \alpha_1\alpha_6)$ to make $\alpha^*_1=\alpha^*_4=0$.
\begin{enumerate}
\item $\alpha_2\ne\alpha_4$. Then choosing $x=\frac {\alpha_2-\alpha_4}{\alpha_6}$, we obtain the family of representatives of distinct orbits $\langle\nb 2+\alpha\nb 3+\nb 6\rangle$.
\item $\alpha_2=\alpha_4$. Then we obtain 2 representatives $\langle\nb 6\rangle$ and $\langle\nb 3+\nb 6\rangle$ depending on whether $\alpha_3=0$ or not. We will join $\langle\nb 6\rangle$ with the family $\langle\alpha\nb 5+\nb 6\rangle_{\alpha\ne 0}$ found above.
\end{enumerate}
\item $\alpha_7=\alpha_6=\alpha_5=0$ and $\alpha_4\ne 0$. Then $\alpha^*_5=\alpha^*_6=\alpha^*_7=0$.
\begin{enumerate}
\item $\alpha_2\ne-\alpha_3$. Then choosing $y = -\frac{\alpha_1x}{\alpha_2 + \alpha_3}$, we obtain the family of representatives of distinct orbits $\langle\alpha\nb 2+\beta\nb 3+\nb 4\rangle_{\beta\ne-\alpha}$.
\item $\alpha_2=-\alpha_3$. Then we obtain two families of representatives of distinct orbits $\langle\alpha\nb 2-\alpha\nb 3+\nb 4\rangle$ and $\langle\nb 1+\alpha\nb 2-\alpha\nb 3+\nb 4\rangle$ depending on whether $\alpha_1=0$ or not. The family $\langle\alpha\nb 2-\alpha\nb 3+\nb 4\rangle$ will be joined with the family $\langle\alpha\nb 2+\beta\nb 3+\nb 4\rangle_{\beta\ne-\alpha}$ from the previous item.
\end{enumerate}
\end{enumerate}
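As an illustration of case 1(a) (with hypothetical sample values, not taken from the paper), the stated choices of $x$, $y$, $z$ can be checked numerically against the transformation rules:

```python
# Illustration of case 1(a) (sample values, not from the paper): with
# alpha_7 != 0 and alpha_5 != alpha_6, the choices of x, y, z below make
# alpha_3^* = alpha_6^* = 0 and alpha_5^* = alpha_7^*.
from fractions import Fraction as F

a3, a5, a6, a7 = map(F, (2, 9, 3, 2))     # sample values with a5 != a6, a7 != 0
x = (a5 - a6) / a7
y = -a6 * x / a7
z = x * (a5*a6 - a3*a7) / a7**2

# transformation rules from the text
s3 = x**3 * (a3*x + a5*y + a7*z)
s5 = x**4 * (a5*x + a7*y)
s6 = x**4 * (a6*x + a7*y)
s7 = a7 * x**6

assert s3 == 0 and s6 == 0    # nabla_3 and nabla_6 are eliminated
assert s5 == s7 != 0          # nabla_5 and nabla_7 share a coefficient
```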
Summarizing, we have the following distinct orbits:
\begin{longtable}{lll}
$\langle\nb 1+\alpha\nb 2-\alpha\nb 3+\nb 4\rangle$ &
$\langle\alpha\nb 1+\beta\nb 2+\nb 4+\nb 5\rangle$ &
$\langle\alpha\nb 1+\beta\nb 2+\gamma\nb 4+\nb 5+\nb 7\rangle$ \\
\multicolumn{2}{l}{$\langle\alpha\nb 1+\beta\nb 2+\nb 4+\nb 7\rangle^{O(\alpha,\beta)=O(-\alpha,\beta)}$}&
$\langle\alpha\nb 1+\nb 2+\nb 5\rangle$ \\
$\langle\alpha\nb 1+\nb 2+\nb 7\rangle^{O(\alpha)=O(-\alpha)}$ &
$\langle\nb 1+\nb 5\rangle$ &
$\langle\nb 1+\nb 7\rangle$ \\
$\langle\alpha\nb 2+\beta\nb 3+\nb 4\rangle$ &
$\langle\nb 2+\alpha\nb 3+\nb 6\rangle$&
$\langle\alpha\nb 2+\nb 4+\beta\nb 5+\nb 6\rangle_{\beta\ne 0}$ \\
$\langle\nb 2+\alpha\nb 5+\nb 6\rangle_{\alpha\ne 0}$ &
$\langle\nb 3+\nb 6\rangle$ &
$\langle\nb 5\rangle$ \\
$\langle\alpha\nb 5+\nb 6\rangle$&
$\langle\nb 7\rangle$
\end{longtable}
They correspond to the following new algebras:
{\tiny
\begin{longtable}{lllllllllllllll}
${\bf N}^4_{26}(\alpha)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=-\alpha e_4$ &$e_2e_1=e_3$ &$e_2e_2=\alpha e_4$&$ e_3e_1=(1+2 \alpha) e_4$ \\ \hline
${\bf N}^4_{27}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$ &$e_2e_2=\beta e_4$&$e_2e_3=e_4$&$e_3e_1=e_4$& \\ \hline
${\bf N}^4_{28}(\alpha,\beta,\gamma)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$&$e_2e_2=\beta e_4$\\&&$e_2e_3=e_4$&$e_3e_1=\gamma e_4$&$e_3e_3=e_4$& \\ \hline
${\bf N}^4_{29}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_2=\beta e_4$ &$e_2e_1=e_3$&$e_3e_1=e_4$&$e_3e_3=e_4$& \\ \hline
${\bf N}^4_{30}(\alpha) $ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$&$e_2e_2=e_4$&$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{31}(\alpha) $ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$&$e_2e_2=e_4$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{32}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_2e_1=e_3$&$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{33}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_2e_1=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{34}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=e_3$&$e_2e_2=\alpha e_4$&
\multicolumn{2}{l}{$e_3e_1=(1-2\beta) e_4$} \\ \hline
${\bf N}^4_{35}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=\alpha e_4$&$e_2e_1=e_3$ &$e_2e_2=e_4$&$e_3e_1=-2\alpha e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{36}(\alpha,\beta)_{\beta\neq 0}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_2=\alpha e_4$
&$e_2e_3=\beta e_4$&$e_3e_1=e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{37}(\alpha)_{\alpha\neq0}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_2=e_4$&$e_2e_3=\alpha e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{38}$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=e_4$&$e_2e_1=e_3$&$e_3e_1=-2e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{39}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_3=e_4$& \\ \hline
${\bf N}^4_{40}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_3=\alpha e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{41}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_3e_3=e_4$& \\ \hline
\end{longtable}
}
\begin{longtable}{ll}
${\bf N}^4_{29}(\alpha,\beta) \cong {\bf N}^4_{29}(-\alpha,\beta)$ &
${\bf N}^4_{31}(\alpha) \cong {\bf N}^4_{31}(-\alpha)$
\end{longtable}
\subsection{$1$-dimensional central extensions of $\cd 3{04}$}\label{sec-CD^3_04}
Here we collect all information about ${\mathfrak{CD}_{04}^{3}}:$
\begin{longtable}{|l|l|l|}
\hline
Algebra & Multiplication & Cohomology \\
\hline
${\mathfrak{CD}}_{04}^{3}$ &
$\begin{array}{l}
e_1 e_1 = e_2\\
e_1 e_2=e_3\\
e_2 e_1=\lambda e_3
\end{array}$&
$\begin{array}{lcl}
{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}}_{04}^{3})&=&\langle
(\lambda-2)\Dl 13-(2\lambda-1)\Dl 31, \Dl 21, \Dl 22
\rangle \\
{\rm H^2}({\mathfrak{CD}}_{04}^{3})&=&{\rm H^2_{\mathfrak{CD}}}({\mathfrak{CD}_{04}^3}) \oplus
\langle
\Dl 13-2 \Dl 31, \Dl 23, \Dl 32, \Dl 33
\rangle
\end{array}$\\
\hline
\end{longtable}
Let us use the following notation:
\begin{longtable}{llll}
$\nb 1 = (\lambda-2)\Dl 13-(2\lambda-1)\Dl 31$ &$\nb 2 = \Dl 21$ & $\nb 3 = \Dl 22$\\
$\nb 4= \Dl 13-2 \Dl 31$ & $\nabla_5=\Dl 23$ & $\nabla_6=\Dl 32$ & $\nabla_7=\Dl 33$
\end{longtable}
Take $\theta=\sum_{i=1}^7\alpha_i\nb i\in {\rm H}^2(\cd 3{04})$.
If
$$
\phi=
\begin{pmatrix}
x & 0 & 0\\
y & x^2 & 0\\
z & (\lambda+1)xy & x^3
\end{pmatrix}\in\aut{\cd 3{04}},
$$
then
$$
\phi^T\begin{pmatrix}
0 & 0 & (\lambda-2)\alpha_1+\alpha_4\\
\alpha_2 & \alpha_3 & \alpha_5\\
-(2\lambda-1)\alpha_1-2 \alpha_4 & \alpha_6 & \alpha_7
\end{pmatrix} \phi=
\begin{pmatrix}
\alpha^* & \alpha^{**} & (\lambda-2)\alpha_1^*+\alpha^*_4\\
\alpha_2^*+\lambda\alpha^{**} & \alpha_3^* & \alpha^*_5\\
-(2\lambda-1)\alpha_1^*-2 \alpha^*_4 & \alpha^*_6 & \alpha^*_7
\end{pmatrix},
$$
where
\begin{longtable}{lll}
$\alpha^*_1$& $=$& $\frac{x^3}{3} (3 x \alpha_1-y (2 \alpha_5+\alpha_6)-3 z \alpha_7)$\\
$\alpha^*_2$& $=$& $x (x^2 \alpha_2+y (1+\lambda ) (z \alpha_7 (1-\lambda )+y (\alpha_6-\alpha_5 \lambda ))+x (z (\alpha_5- \alpha_6 \lambda) -y (\alpha_3 (\lambda-1 )+$\\
&&$\alpha_1 (\lambda-1 ) (\lambda+1 )^2+\alpha_4 (2+3 \lambda +\lambda ^2))))$\\
$\alpha_3^*$ &$=$&$x^2 (x^2 \alpha_3+x y (\alpha_5+\alpha_6) (1+\lambda )+y^2 \alpha_7 (\lambda+1 )^2)$\\
$\alpha_4^*$ &$=$&$\frac{ x^3}{3} (3 x \alpha_4+y \alpha_6 (\lambda-2 )+3 z \alpha_7 (\lambda-1 )+y \alpha_5 (2 \lambda-1 ))$\\
$\alpha_5^*$ &$=$&$x^4 (x \alpha_5+y \alpha_7 (\lambda+1 ))$\\
$\alpha_6^*$ &$=$&$x^4 (x \alpha_6+y \alpha_7 (\lambda+1 ))$\\
$\alpha_7^*$ &$=$&$x^6 \alpha_7$
\end{longtable}
Hence, $\phi\langle\theta\rangle=\langle\theta^*\rangle$, where $\theta^*=\sum\limits_{i=1}^7 \alpha_i^* \nb i.$ We are only interested in elements with $(\alpha_4, \alpha_5, \alpha_6,\alpha_7)\neq (0,0,0,0).$
\begin{enumerate}
\item $\alpha_7\ne 0$ and $\lambda\neq -1$. We may assume $\alpha_7=1$. We put $z = \alpha_1x - \frac 13(2\alpha_5 + \alpha_6)y$ to make $\alpha^*_1=0$.
\begin{enumerate}
\item
$\alpha_6\neq \alpha_5$.
Then choosing $y=-\frac{3((\lambda - 1)\alpha_1 + \alpha_4)x}{\alpha_5 - \alpha_6}$ we have $\alpha^*_4=0$. Now, if $\alpha_6\ne\frac{3(\lambda+1)((\lambda-1)\alpha_1 + \alpha_4)}{\alpha_5 - \alpha_6}$, then choosing $x=\alpha_6-\frac{3(\lambda+1)((\lambda-1)\alpha_1 + \alpha_4)}{\alpha_5 - \alpha_6}$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+\beta \nb 3+\gamma \nb 5+ \nb 6+ \nb 7\rangle_{\lambda\ne-1,\gamma\ne 1}$. Otherwise, choosing $x=\alpha_5-\alpha_6$ we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+\beta \nb 3+\nb 5+ \nb 7\rangle_{\lambda\ne-1}$.
\item
$\alpha_6= \alpha_5$ and $\alpha_4 \neq (1 - \lambda )\alpha_1.$
Then choosing $y=-\frac{\alpha_5x}{\lambda + 1}$ and
$x=\sqrt{\alpha_4+(\lambda-1)\alpha_1}$,
we have the family of representatives
$\langle \alpha \nb 2+ \beta \nb 3+ \nb 4+ \nb 7\rangle_{\lambda\ne-1}$, where $(\alpha,\beta)$ and $(\alpha',\beta')$ determine the same orbit if and only if $(\alpha,\beta)=(\pm\alpha',\beta')$.
\item
$\alpha_6= \alpha_5,$ $\alpha_4 = (1 - \lambda )\alpha_1$ and $\alpha_3\ne \alpha_5^2.$
Then choosing $y=-\frac{\alpha_5x}{\lambda + 1}$ and
$x=\sqrt{ \alpha_3-\alpha_5^2}$,
we have the family of representatives
$\langle \alpha \nb 2+ \nb 3+ \nb 7\rangle_{\lambda\ne-1}$, where $\alpha$ and $\alpha'$ determine the same orbit if and only if $\alpha=\pm\alpha'$.
\item
$\alpha_6= \alpha_5,$ $\alpha_4=(1-\lambda)\alpha_1$ and $\alpha_3=\alpha_5^2$.
Then we have two representatives
$\langle \nb 7\rangle_{\lambda\ne-1}$ and $\langle \nb 2+ \nb 7\rangle_{\lambda\ne-1}$ depending on whether $\alpha_2=(\lambda -1)\alpha_1 \alpha_5$ or not.
\end{enumerate}
\item $\alpha_7\ne 0$ and $\lambda=-1$. We may assume $\alpha_7=1$.
\begin{enumerate}
\item $\alpha_6\neq0$ and $\alpha_6\neq \alpha_5.$
Then choosing
$x=\alpha_6,$
$y=\frac{3(2 \alpha_1-\alpha_4) x}{\alpha_5-\alpha_6}$
and
$z=\frac{(\alpha_4 (2 \alpha_5+\alpha_6)-3 \alpha_1 (\alpha_5+\alpha_6))x}{\alpha_5-\alpha_6},$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+\beta \nb 3+\gamma \nb 5+\nb 6+ \nb 7\rangle_{\lambda=-1, \gamma \neq 1}$.
\item $\alpha_6\neq0, \alpha_6= \alpha_5$ and $\alpha_3\neq \alpha_5^2$.
Then choosing
$x=\alpha_5,$
$y=\frac{(\alpha_2+2 \alpha_1 \alpha_5)x}{2 (\alpha_5^2-\alpha_3)}$
and
$z=\frac{(\alpha_2 \alpha_5+2 \alpha_1 \alpha_3)x}{2 (\alpha_3-\alpha_5^2)},$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 3+ \beta \nb 4+ \nb 5+\nb 6+ \nb 7\rangle_{\lambda=-1,\alpha\ne 1}$.
\item $\alpha_6\neq0, \alpha_6= \alpha_5$ and $\alpha_3= \alpha_5^2.$
Then choosing
$x=\alpha_6,$
$y=\alpha_1$
and
$z=0,$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+ \nb 3+ \beta \nb 4+ \nb 5+\nb 6+ \nb 7\rangle_{\lambda=-1}$.
\item $\alpha_6=0$ and $\alpha_5\neq 0.$
Then choosing
$x=\alpha_5,$
$y=3(2\alpha_1-\alpha_4)$
and
$z=(2 \alpha_4-3 \alpha_1)\alpha_5,$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+ \beta \nb 3+ \nb 5 + \nb 7\rangle_{\lambda=-1}$.
\item $\alpha_6=0,$ $\alpha_5= 0,$ $\alpha_4\neq 2 \alpha_1$ and $\alpha_3\neq 0.$
Then choosing
$x=\sqrt{\alpha_4-2 \alpha_1},$
$y=-\frac{\alpha_2x}{2\alpha_3}$
and
$z=\alpha_1x,$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 3+ \nb 4+ \nb 7\rangle_{\lambda=-1, \alpha\ne 0}$.
\item $\alpha_6=0,$ $\alpha_5= 0,$ $\alpha_4= 2 \alpha_1$ and $\alpha_3\neq 0.$
Then choosing
$x=\sqrt{\alpha_3},$
$y=-\frac{\alpha_2x}{2 \alpha_3}$
and
$z=\alpha_1x,$
we have the representative
$\langle \nb 3+\nb 7\rangle_{\lambda=-1}$.
\item $\alpha_6=0,$ $\alpha_5= 0,$ $\alpha_4= 2 \alpha_1$ and $\alpha_3= 0.$
Then choosing
$z=\alpha_1x$,
we have two representatives
$\langle \nb 7\rangle_{\lambda=-1}$ and $\langle \nb 2+ \nb 7\rangle_{\lambda=-1}$,
depending on whether $\alpha_2=0$ or not.
\end{enumerate}
\item $\alpha_7=0$ and $\alpha_6\ne 0.$ We may assume $\alpha_6=1$.
\begin{enumerate}
\item $\alpha_5\not\in\{-\frac 12,\lambda\}$. Then the system $\alpha^*_1=\alpha^*_2=0$ has a unique solution in $y$ and $z$. Now, if $\alpha_4\neq -\frac{((2 \lambda-1)\alpha_5+\lambda-2)\alpha_1}{2\alpha_5 + 1}$, then choosing a suitable value of $x$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 3+ \nb 4 +\beta \nabla_5+ \nb 6\rangle_{\beta\not\in\{-\frac 12,\lambda\}}$. Otherwise, we have two families of representatives of distinct orbits $\langle \alpha \nabla_5+ \nb 6\rangle_{\alpha\not\in\{-\frac 12,\lambda\}}$ and $\langle \nb 3+ \alpha \nabla_5+\nb 6\rangle_{\alpha\not\in\{-\frac 12,\lambda\}}$ depending on whether $\alpha_3=-\frac{3(\lambda + 1)(\alpha_5 + 1)\alpha_1}{2\alpha_5 + 1}$ or not.
\item $\alpha_5= \lambda\ne-\frac 12$ and $\alpha_4 \neq \frac{2 (1-\lambda ^2)\alpha_1}{2\lambda+1}$.
Then choosing
$x=\alpha_4+\frac{2 (\lambda^2-1)\alpha_1}{2 \lambda+1}$ and
$y=\frac{3 \alpha_1 x}{2 \lambda+1}$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+\beta \nb 3+ \nb 4+\lambda \nabla_5+\nb 6\rangle_{\lambda\ne -\frac 12}$.
\item $\alpha_5= \lambda\ne-\frac 12$ and $\alpha_4 = \frac{2 (1-\lambda ^2)\alpha_1}{2\lambda+1}$. We put $y=\frac{3 \alpha_1 x}{2 \lambda+1}$ to make $\alpha^*_1=0$. Now, if $\alpha_3\neq -\frac{3 (\lambda+1)^2\alpha_1}{2 \lambda + 1}$,
then choosing
$x=\alpha_3+\frac{3(\lambda+1)^2\alpha_1}{2 \lambda+1}$,
we have the family of representatives of distinct orbits
$\langle \alpha \nb 2+ \nb 3 +\lambda \nabla_5+ \nb 6\rangle_{\lambda\ne -\frac 12}$. Otherwise,
we have two representatives
$\langle \lambda \nabla_5+\nb 6\rangle_{\lambda\ne -\frac 12}$ and $\langle \nb 2+ \lambda \nabla_5+\nb 6\rangle_{\lambda\ne -\frac 12}$,
depending on whether $\alpha_2=-\frac{9(\lambda - 1)(\lambda + 1)^2\alpha_1^2}{(2\lambda + 1)^2}$ or not.
\item $\alpha_5=-\frac 12 \ne\lambda$.
Then put
$y=2 \alpha_4 x$
and
$z=\frac{2(\alpha_2-2 \alpha_4 (\lambda-1) (\alpha_3+\alpha_1 (\lambda+1)^2))x}{2 \lambda+1}$
to make $\alpha^*_2=\alpha^*_4=0$. Now, if $\alpha_3\ne -(\lambda+1)\alpha_4$, then choosing $x=\alpha_3+(\lambda+1)\alpha_4,$
we have the family of representatives of distinct orbits
$\langle \alpha \nb 1+ \nb 3 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda\ne -\frac 12}$. Otherwise, we have two representatives
$\langle -\frac 12\nb 5+ \nabla_6\rangle_{\lambda\ne -\frac 12}$ and $\langle \nb 1 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda\ne -\frac 12}$,
depending on whether $\alpha_1=0$ or not.
\item $\alpha_5=-\frac 12=\lambda$ and $\alpha_4\ne-2 \alpha_3$.
Then choosing
$x=\alpha_3+\frac{\alpha_4}2$ and
$y=2\alpha_4x$
we have the family of representatives of distinct orbits
$\langle \alpha\nabla_1 +\beta \nabla_2+\nabla_3 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$.
\item $\alpha_5=-\frac 12=\lambda$ and $\alpha_4=-2 \alpha_3$. Then we put $y=-4\alpha_3 x$ to make $\alpha^*_4=0$. Now, if $\alpha_2\neq\frac {3\alpha_3}2 (\alpha_1+4 \alpha_3)$,
then choosing
$x=\sqrt{\alpha_2-\frac{3\alpha_3}2 (\alpha_1+4 \alpha_3)}$,
we have the family of representatives of distinct orbits
$\langle \alpha\nabla_1 + \nabla_2 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$. Otherwise, we have two representatives
$\langle \nabla_1 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$ and
$\langle -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$,
depending on whether $\alpha_1=0$ or not.
\end{enumerate}
\item $\alpha_7=\alpha_6=0$ and $\alpha_5\ne 0$. We may assume $\alpha_5=1$. Then the system $\alpha^*_1=\alpha^*_2=0$ has a unique solution in $y$ and $z$. Now, if $2 \alpha_4+(2 \lambda-1)\alpha_1\neq 0$, then choosing $x=\alpha_4+(\lambda-\frac 12)\alpha_1$ we have the family of representatives of distinct orbits $\langle \alpha \nabla_3+ \nabla_4+\nb 5\rangle$. Otherwise, we have two representatives $\langle \nb 5\rangle$ and $\langle \nabla_3+ \nb 5\rangle$ depending on whether $2 \alpha_3+3(\lambda+1) \alpha_1=0$ or not.
\item $\alpha_7=\alpha_6=\alpha_5=0$ and $\alpha_4\ne 0$. We may assume $\alpha_4=1$.
\begin{enumerate}
\item $(\lambda-1) ((\lambda+1)^2\alpha_1+\alpha_3)+(\lambda + 1)(\lambda + 2)\neq 0.$
Then choosing
$y=\frac{\alpha_2x}{(\lambda-1) ((\lambda+1)^2\alpha_1+\alpha_3)+(\lambda + 1)(\lambda + 2)}$
we have the family of representatives of distinct orbits
$\langle \alpha \nabla_1+ \beta \nabla_3+\nb 4\rangle$, where $(\lambda-1) ((\lambda+1)^2\alpha+\beta)+(\lambda + 1)(\lambda + 2)\neq 0$.
\item $(\lambda-1) ((\lambda+1)^2\alpha_1+\alpha_3)+(\lambda + 1)(\lambda + 2)= 0$. Then $\lambda\neq 1$
and we have two families of representatives of distinct orbits $\Big\langle \alpha \nabla_1+ \frac{(\lambda+1) ((\lambda^2-1)\alpha + \lambda + 2)}{1-\lambda} \nabla_3+\nb 4\Big\rangle_{\lambda\ne 1}$ and
$\Big\langle \alpha \nabla_1+\nabla_2+ \frac{(\lambda+1) ((\lambda^2-1)\alpha + \lambda + 2)}{1-\lambda} \nabla_3+\nb 4\Big\rangle_{\lambda\ne 1}$, depending on whether $\alpha_2=0$ or not. The first family will be joined with the family from the previous item.
\end{enumerate}
\end{enumerate}
Summarizing, we have the following representatives of distinct orbits:
\begin{longtable}{ll}
$\Big\langle \alpha \nabla_1+\nabla_2+ \frac{(\lambda+1) ((\lambda^2-1)\alpha + \lambda + 2)}{1-\lambda} \nabla_3+\nb 4\Big\rangle_{\lambda\ne 1}$ &
$\langle \alpha\nabla_1 +\beta \nabla_2+\nabla_3 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$\\
$\langle \alpha\nabla_1 + \nabla_2 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$&
$\langle \alpha \nabla_1+ \beta \nabla_3+\nb 4\rangle$\\
$\langle \alpha \nb 1+ \nb 3 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda\ne -\frac 12}$&
$\langle \nabla_1 -\frac 12\nb 5+ \nabla_6\rangle_{\lambda= -\frac 12}$\\
$\langle \alpha \nb 2+\beta \nb 3+ \nb 4+\lambda \nabla_5+\nb 6\rangle_{\lambda\ne -\frac 12}$&
$\langle \alpha \nb 2+ \nb 3+ \beta \nb 4+ \nb 5+\nb 6+ \nb 7\rangle_{\lambda=-1}$\\
$\langle \alpha \nb 2+ \beta \nb 3+ \nb 4+ \nb 7\rangle_{\lambda\ne-1}^{O(\alpha,\beta)=O(-\alpha,\beta)}$&
$\langle \alpha \nb 2+ \nb 3 +\lambda \nabla_5+ \nb 6\rangle_{\lambda\ne -\frac 12}$\\
$\langle \alpha \nb 2+\beta \nb 3+\gamma \nb 5+ \nb 6+ \nb 7\rangle_{\gamma\ne 1}$&
$\langle \alpha \nb 2+\beta \nb 3+\nb 5+ \nb 7\rangle$\\
$\langle \alpha \nb 2+ \nb 3+ \nb 7\rangle_{\lambda\ne-1}^{O(\alpha)=O(-\alpha)}$&
$\langle \nb 2+ \lambda \nabla_5+\nb 6\rangle_{\lambda\ne -\frac 12}$\\
$\langle \nb 2+ \nb 7\rangle$&
$\langle \alpha \nabla_3+ \nabla_4+\nb 5\rangle$\\
$\langle \alpha \nb 3+ \nb 4 +\beta \nabla_5+ \nb 6\rangle_{\beta\not\in\{-\frac 12,\lambda\}}$&
$\langle \alpha \nb 3+ \beta \nb 4+ \nb 5+\nb 6+ \nb 7\rangle_{\lambda=-1,\alpha\ne 1}$\\
$\langle \alpha \nb 3+ \nb 4+ \nb 7\rangle_{\lambda=-1, \alpha\ne 0}$&
$\langle \nabla_3+ \nb 5\rangle$\\
$\langle \nb 3+ \alpha \nabla_5+\nb 6\rangle_{\alpha\not\in\{-\frac 12,\lambda\}}$&
$\langle \nb 3+\nb 7\rangle_{\lambda=-1}$\\
$\langle \nb 5\rangle$&
$\langle \alpha \nabla_5+ \nb 6\rangle$\\
$\langle \nb 7\rangle$ &
\end{longtable}
The corresponding algebras are:
{\tiny \begin{longtable}{lllllllllllllll}
${\bf N}^4_{42}(\lambda,\alpha)_{\lambda\ne 1}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ && $e_1e_3=(\alpha(\lambda-2)+1)e_4$\\
&& $e_2 e_1=\lambda e_3+e_4$& \multicolumn{2}{l}{$e_2e_2=\frac{(\lambda+1) ((\lambda^2-1)\alpha + \lambda + 2)}{1-\lambda}e_4$} & $e_3e_1=(\alpha(1-2\lambda)-2)e_4$ \\
\hline
${\bf N}^4_{43}(\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac {5\alpha}2e_4$ & $e_2 e_1=-\frac 12e_3+\beta e_4$\\
&& $e_2e_2=e_4$ & $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2\alpha e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{44}(\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac {5\alpha}2e_4$ & $e_2 e_1=-\frac 12e_3+e_4$\\
&& $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2\alpha e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{45}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & \multicolumn{2}{l}{$e_1e_3=(\alpha(\lambda-2)+1)e_4$}\\
&& $e_2 e_1=\lambda e_3$& $e_2e_2=\beta e_4$ & \multicolumn{2}{l}{$e_3e_1=(\alpha(1-2\lambda)-2)e_4$} \\
\hline
${\bf N}^4_{46}(\lambda,\alpha)_{\lambda\ne-\frac 12}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\alpha(\lambda-2)e_4$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=e_4$ & $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=\alpha(1-2\lambda)e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{47}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac 52e_4$ & $e_2 e_1=-\frac 12e_3$\\
&& $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2 e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{48}(\lambda,\alpha,\beta)_{\lambda\ne -\frac 12}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=\beta e_4$ & $e_2 e_3=\lambda e_4$ & $e_3e_1=-2e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{49}(\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\beta e_4$\\
&& $e_2 e_1=-e_3+\alpha e_4$ & $e_2e_2=e_4$ & $e_2 e_3=e_4$\\
&& $e_3e_1=-2\beta e_4$ & $e_3e_2 = e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{50}(\lambda,\alpha,\beta)_{\lambda\ne-1}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=\beta e_4$ & $e_3e_1=-2e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{51}(\lambda,\alpha)_{\lambda\ne -\frac 12}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=e_4$ & $e_2e_3=\lambda e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{52}(\lambda,\alpha,\beta,\gamma)_{\gamma\ne 1}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$ & $e_2e_2=\beta e_4$\\
&& $e_2e_3=\gamma e_4$ & $e_3e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{53}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=\beta e_4$ & $e_2e_3=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{54}(\lambda,\alpha)_{\lambda\ne -1}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{55}(\lambda)_{\lambda\ne -\frac 12}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+e_4$\\
&& $e_2e_3=\lambda e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{56}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{57}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=\alpha e_4$ & $e_2e_3=e_4$ & $e_3e_1=-2e_4$ \\
\hline
${\bf N}^4_{58}(\lambda,\alpha,\beta)_{\beta\not\in\{-\frac 12,\lambda\}}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=\alpha e_4$ & $e_2e_3=\beta e_4$ & $e_3e_1=-2e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{59}(\alpha,\beta)_{\alpha\ne 1}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\beta e_4$\\
&& $e_2 e_1=-e_3$ & $e_2e_2=\alpha e_4$ & $e_2e_3=e_4$\\
&& $e_3e_1=-2\beta e_4$ & $e_3e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{60}(\alpha)_{\alpha\ne 0}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=-e_3$\\
&& $e_2e_2=\alpha e_4$ & $e_3e_1=-2e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{61}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=e_4$ & $e_2e_3=e_4$ \\
\hline
${\bf N}^4_{62}(\lambda,\alpha)_{\alpha\not\in\{-\frac 12,\lambda\}}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=e_4$ & $e_2e_3=\alpha e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{63}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=-e_3$\\
&& $e_2e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{64}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ & $e_2e_3=e_4$ \\
\hline
${\bf N}^4_{65}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_3=\alpha e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{66}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ & $e_3e_3=e_4$\\
\hline
\end{longtable}}
\begin{longtable}{ll}
${\bf N}^4_{50}(\lambda,\alpha,\beta) \cong {\bf N}^4_{50}(\lambda,-\alpha,\beta)$ &
${\bf N}^4_{54}(\lambda,\alpha) \cong {\bf N}^4_{54}(\lambda,-\alpha)$
\end{longtable}
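The two isomorphisms recorded above can be verified directly from the structure constants of ${\bf N}^4_{50}$ and ${\bf N}^4_{54}$ given in the table. Below is a minimal numerical sketch in Python; the witnessing basis change $e_1\mapsto -e_1$, $e_2\mapsto e_2$, $e_3\mapsto -e_3$, $e_4\mapsto e_4$ is our own guess (it is not stated in the text), and the helper names are ours.

```python
import numpy as np

def structure_N50(lam, a, b):
    """Structure constants C[i, j, k]: coefficient of e_{k+1} in
    e_{i+1} e_{j+1}, read off the table for N^4_50(lam, a, b)."""
    C = np.zeros((4, 4, 4))
    C[0, 0, 1] = 1      # e1 e1 = e2
    C[0, 1, 2] = 1      # e1 e2 = e3
    C[0, 2, 3] = 1      # e1 e3 = e4
    C[1, 0, 2] = lam    # e2 e1 = lam*e3 + a*e4
    C[1, 0, 3] = a
    C[1, 1, 3] = b      # e2 e2 = b*e4
    C[2, 0, 3] = -2     # e3 e1 = -2*e4
    C[2, 2, 3] = 1      # e3 e3 = e4
    return C

def structure_N54(lam, a):
    """Structure constants for N^4_54(lam, a)."""
    C = np.zeros((4, 4, 4))
    C[0, 0, 1] = 1      # e1 e1 = e2
    C[0, 1, 2] = 1      # e1 e2 = e3
    C[1, 0, 2] = lam    # e2 e1 = lam*e3 + a*e4
    C[1, 0, 3] = a
    C[1, 1, 3] = 1      # e2 e2 = e4
    C[2, 2, 3] = 1      # e3 e3 = e4
    return C

def is_isomorphism(P, Csrc, Ctgt):
    """P[i] holds the target coordinates of the image of e_{i+1}.
    Checks P(x)P(y) = P(xy) on all basis pairs, plus invertibility."""
    lhs = np.einsum('ia,jb,abk->ijk', P, P, Ctgt)  # P(e_i) * P(e_j) in target
    rhs = np.einsum('ijc,ck->ijk', Csrc, P)        # image of e_i e_j
    return abs(np.linalg.det(P)) > 1e-9 and np.allclose(lhs, rhs)

# Candidate isomorphism: e1 -> -e1, e2 -> e2, e3 -> -e3, e4 -> e4.
P = np.diag([-1.0, 1.0, -1.0, 1.0])
lam, a, b = 0.7, 1.3, -2.1
print(is_isomorphism(P, structure_N50(lam, a, b), structure_N50(lam, -a, b)))  # True
print(is_isomorphism(P, structure_N54(lam, a), structure_N54(lam, -a)))        # True
```

The same routine rejects the identity map between ${\bf N}^4_{50}(\lambda,\alpha,\beta)$ and ${\bf N}^4_{50}(\lambda,-\alpha,\beta)$ when $\alpha\neq 0$, since the $e_2e_1$ products differ in the $e_4$ coordinate.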
\section{The classification theorem}\label{S:class}
\begin{theorem} \label{teo-alg}
Let ${\bf N}$ be a complex $4$-dimensional nilpotent algebra.
Then ${\bf N}$ is isomorphic to an algebra from the following list:
{\tiny
\begin{longtable}{lllll llll}
\hline$\cd {4*}{01}$&$:$& $e_1 e_1 = e_2$\\
\hline$\cd {4*}{02}$&$:$& $e_1 e_1 = e_3$ &$ e_2 e_2=e_3$ \\
\hline$\cd {4*}{03}$&$:$& $e_1 e_2=e_3$ & $e_2 e_1=-e_3$ \\
\hline$\cd {4*}{04}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3$ & $e_2 e_2=e_3$\\
\hline$\cd {4*}{05}$ & $:$ & $e_1e_1 = e_3$&$ e_2e_2=e_4$ \\
\hline$\cd {4*}{06}$ & $:$ & $e_1e_2 = e_4$&$ e_3e_1 = e_4$ \\
\hline$\cd {4*}{07}$ & $:$ & $e_1e_2 = e_3$&$ e_2e_1 = e_4$&$ e_2e_2 = -e_3$\\
\hline$\cd {4*}{08}(\alpha)$ & $:$&$ e_1e_1 = e_3$& $e_1e_2 = e_4$& $e_2e_1 = -\alpha e_3$&$e_2e_2 = -e_4$ \\
\hline$\cd {4*}{09}(\alpha)$&$:$&$e_1e_1 = e_4$&$ e_1e_2 = \alpha e_4$&$ e_2e_1 = -\alpha e_4$\\&&$ e_2e_2 = e_4$&$ e_3e_3 = e_4$\\
\hline$\cd {4*}{10}$&$:$&$ e_1e_2 = e_4$&$ e_1e_3 = e_4$&$ e_2e_1 = -e_4$\\&&$ e_2e_2 = e_4$&$ e_3e_1 = e_4$\\
\hline$\cd {4*}{11}$&$:$&$ e_1e_1 = e_4$&$ e_1e_2 = e_4$&$ e_2e_1 = -e_4$&$ e_3e_3 = e_4$\\
\hline$\cd {4*}{12}$&$:$&$ e_1e_2 = e_3$&$ e_2e_1 = e_4$ \\
\hline$\cd {4*}{13}$&$:$&$ e_1e_1 = e_4$&$ e_1e_2 = e_3$&$ e_2e_1 = -e_3$&$ e_2e_2=2e_3+e_4$\\
\hline$\cd {4*}{14}(\alpha)$&$:$&$ e_1e_2 = e_4$&$ e_2e_1 =\alpha e_4$&$ e_2e_2 = e_3$\\
\hline$\cd {4*}{15}$&$:$&$e_1e_2 = e_4$&$ e_2e_1 = -e_4$&$ e_3e_3 = e_4$\\
\hline
$\D{4}{00} $&$:$&
$e_1e_1=e_4$ & $e_1e_2=e_4$& $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2e_3=e_4$
\\\hline
$\D{4}{01}(\lambda,\alpha,\beta)$&$:$&
$e_1 e_1 = \lambda e_3 + e_4$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\
&&$ e_2 e_2 = e_3$ & $e_2 e_3 = \beta e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{02}(\lambda,\alpha,\beta)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_2 e_3 = \beta e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{03}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1 e_2 = e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_2 e_3 = \alpha e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{04}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = \alpha e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{05}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1 e_2 = \lambda e_4$ & $e_2 e_1=e_3 + \lambda\alpha e_4$ \\
&& $e_2 e_2 = e_3$ & $e_2 e_3 = \Theta e_4$ & $e_3e_1 = \lambda e_4$ \\
\hline
$\D{4}{06}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1 e_2 = e_4$ & $e_2 e_1=e_3 + \alpha e_4$ \\
&& $e_2 e_2 = e_3$ & $e_2 e_3 = \Theta^{-1} e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{07}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3 + \lambda e_4$ & $e_2 e_2 = e_3$ \\&& $e_2 e_3 = \Theta e_4$ & $e_3e_1 = \lambda e_4$ \\
\hline
$\D{4}{08}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3 + e_4$ & $e_2 e_2 = e_3$ \\&& $e_2 e_3 = \Theta^{-1} e_4$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{09}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1 e_2 = e_4$& $e_1 e_3 = \alpha e_4$ \\
&& $e_2 e_1=e_3$ & $e_2 e_2 = e_3 $& $e_3e_1 = e_4$ \\
\hline
$\D{4}{10}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$& $e_1 e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{11}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3 + e_4$& $e_1e_2 = \alpha e_4$ & $e_1 e_3 = -e_4$ \\
&& $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ & $e_3e_1 = e_4$ \\
\hline
$\D{4}{12}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3 + e_4$& $e_1e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{13}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3 + e_4$& $e_1e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{14}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$& $e_1e_3 = \alpha e_4 $ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{15}(\lambda,\alpha)$&$:$&
$e_1 e_1 = \lambda e_3$& $e_1e_3 = \alpha e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{16}(\alpha)$&$:$& $e_1e_3 = \alpha e_4$ & $e_2 e_1=e_3 + e_4$ & $e_2 e_2 = e_3$ \\&& $e_3e_1 = e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{17}(\alpha)$&$:$&
$e_1 e_1 = e_4$ & $e_1e_3 = -e_4$ & $e_2 e_1=e_3 + \alpha e_4$ \\
&& $e_2 e_2 = e_3$ & $e_3e_1 = e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{18}(\lambda,\alpha)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$& $e_1e_3 = \Theta\alpha e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\ && $e_2 e_3 = \alpha e_4$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{19}(\lambda,\alpha)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$& $e_1e_3 = (1-\Theta)\alpha e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\&& $e_2 e_3 = \alpha e_4$ & $e_3e_1 = (1-\Theta)e_4$& $e_3e_2 = e_4$ \\
\hline
$\D{4}{20}(\lambda,\alpha)$&$:$& $e_1 e_1 = \lambda e_3$& $e_1e_3 = \Theta\alpha e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = \alpha e_4$ & $e_3e_1 = \Theta e_4$& $e_3e_2 = e_4$ \\
\hline
$\D{4}{21}(\lambda,\alpha)$&$:$& $e_1 e_1 = \lambda e_3$& $e_1e_3 = (1-\Theta)\alpha e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\&& $e_2 e_3 = \alpha e_4$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{22}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + (1-2\lambda)e_4$& $e_1 e_2 = e_4$& $e_1e_3 = (\Theta - 1) e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2 e_3 = (1-\Theta^{-1}) e_4$& $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{23}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + \lambda(1-2\lambda) e_4$& $e_1 e_2 = \lambda e_4$& $e_1e_3 = -\lambda\Theta e_4 $& $e_2 e_1=e_3$ \\&&
$e_2 e_2 = e_3$ & $e_2 e_3 = -\Theta^2 e_4$ & $e_3e_1 = \lambda(1-\Theta)e_4$ & $e_3e_2 = \lambda e_4$ \\
\hline
$\D{4}{24}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + \Theta e_4$& $e_1 e_2 = e_4$& $e_1e_3 = (\Theta - 1) e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2 e_3 = (1-\Theta^{-1}) e_4$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{25}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + \lambda(1-\Theta)e_4$& $e_1 e_2 = \lambda e_4$& $e_1e_3 = -\lambda\Theta e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2 e_3 = -\Theta^2 e_4$
& $e_3e_1 = \lambda (1-\Theta)e_4$ & $e_3e_2 = \lambda e_4$ \\
\hline
$\D{4}{26}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + \Theta e_4$& $e_1 e_2 = e_4$& $e_1e_3 = -\Theta e_4$ & $e_2 e_1=e_3$\\
& & $e_2 e_2 = e_3$ & $e_2 e_3 = -e_4$
& $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4 $\\
\hline
$\D{4}{27}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + (1-\Theta)e_4$& $e_1 e_2 = e_4$& $e_1e_3 = (\Theta-1) e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2 e_3 = -e_4$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{28}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + (1-\Theta)e_4$& $e_1 e_2 = e_4$& $e_1e_3 = -\Theta e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$ & $e_2 e_3 = -e_4$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{29}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + \Theta e_4$& $e_1 e_2 = e_4$& $e_1e_3 = (\Theta-1) e_4$ & $e_2 e_1=e_3$ \\
&& $e_2 e_2 = e_3$ & $e_2 e_3 = -e_4$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{30}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$ & $e_1e_3 = (\Theta-1) e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
& & $e_2 e_3 = -e_4$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{31}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$ & $e_1e_3 = -\Theta e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = -e_4$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{32}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = (\Theta-1) e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = -e_4$ & $e_3e_1 = \Theta e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{33}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = -\Theta e_4$ & $ e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = -e_4$ & $e_3e_1 = (1-\Theta)e_4$ & $e_3e_2 = e_4$ \\
\hline
$\D{4}{34}$&$:$& $e_1 e_1 = e_4$ & $e_2 e_1=e_3 + e_4$ & $e_2 e_2 = e_3$\\& & $e_3 e_1 = e_4$& $e_3 e_2 = e_4$ \\
\hline
$\D{4}{35}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = e_4$ & $e_2 e_1=e_3 + e_4$ & $e_2 e_2 = e_3$ \\
\hline
$\D{4}{36}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = e_4$ & $e_2 e_1=e_3$ & $e_2 e_2 = e_3$ \\
\hline
$\D{4}{37}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$ & $e_1e_3 = \Theta e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$& $e_2 e_3 = e_4$ \\
\hline
$\D{4}{38}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3 + e_4$ & $e_1e_3 = (1-\Theta)e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$& $e_2 e_3 = e_4$ \\
\hline
$\D{4}{39}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = \Theta e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$& $e_2 e_3 = e_4$ \\
\hline
$\D{4}{40}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_1e_3 = (1-\Theta)e_4$ & $e_2 e_1=e_3$ \\&& $e_2 e_2 = e_3$& $e_2 e_3 = e_4$\\
\hline
$\cd 4{01}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_2=e_3$ \\
\hline
$\cd 4{02}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_1= e_3$ & $e_2 e_2=e_3$ \\
\hline$\cd 4{03}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_1=e_3$ \\
\hline$\cd 4{04}(\lambda)$&$:$& $ e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ \\ \hline
$\cd 4{05}$ & $:$ & $e_1 e_1 = e_2$ & $e_2 e_1=e_4$ & $e_2 e_2=e_3$ \\
\hline$\cd 4{06}$ & $:$ & $ e_1 e_1 = e_2$ & $e_1 e_2=e_4$ & $e_2 e_1=e_3$ \\
\hline$\cd 4{07}(\lambda)$& $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_4$ & $e_2 e_1=\lambda e_4$ & $e_2 e_2=e_3$ \\\hline
$\cd 4{08}(\alpha)$ &$:$&
$e_1 e_1 = e_2$ & $e_1e_3=e_4$ & $e_2 e_1=e_3$ \\&& $e_2e_2=\alpha e_4$ & $e_3e_1=-2e_4$ \\
\hline$\cd 4{09}$ &$:$&
$e_1 e_1 = e_2$ & $e_1e_2=e_4$ & $e_1e_3=e_4$ \\
&& $e_2 e_1=e_3$ & $e_2e_2=- e_4$ & $e_3e_1=-2e_4$\\
\hline
$\cd 4{10}(\alpha)$ &$:$&
$e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-e_4$ \\
&& $e_2 e_1= e_3 +e_4$ & $e_2e_2=\alpha e_4$ & $e_3e_1=- e_4$ \\
\hline$\cd 4{11}(\lambda)$ &$:$&
$e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=(\lambda-2)e_4$ \\
$\lambda \neq1$&& $e_2 e_1= \lambda e_3+e_4$ &
$e_2e_2=-(\lambda+1)^2 e_4$ &$e_3e_1= (1-2\lambda) e_4$ \\
\hline$\cd 4{12}(\alpha, \lambda)$ &$:$&
$e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=(\lambda-2)e_4$ \\&& $e_2 e_1=\lambda e_3$ & $e_2e_2=\alpha e_4$ & $e_3e_1=(1-2\lambda) e_4$ \\
\hline
$\cd {4}{13}(\alpha)$&$:$&
$e_1 e_1 = e_2$ & $e_1e_2=e_4$ & $e_1e_3=e_4$& $e_2e_1=e_4$ \\
$\alpha\neq \frac{1}{2}$&& $e_2e_3=\alpha e_4$& $e_3e_1=e_4$& \multicolumn{2}{l}{$e_3e_2=(\alpha +1)e_4$}\\
\hline
$\cd {4}{14}(\alpha, \beta)$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 = e_4$& $e_1 e_3 = \alpha e_4$& $e_2 e_1 = e_4$\\ &&
$e_2 e_2 = e_4$& $e_3 e_1 = \alpha e_4$& $e_3 e_2 = e_4$& $e_3 e_3 =\beta e_4$& \\
\hline
$\cd {4}{15}(\alpha)$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 = \alpha e_4$& $e_1 e_3 = e_4$\\&&
$e_2 e_1 =(\alpha+1) e_4$ & $e_3 e_1 = e_4$\\
\hline
$\cd {4}{16}$&$:$&
$e_1 e_1 = e_2$ & $e_1e_2=e_4$ & $e_2 e_1 = e_4$\\ && $e_2 e_3 = -\frac{1}{2} e_4$& $e_3 e_2 =\frac{1}{2} e_4$& $e_3 e_3 = e_4$&\\
\hline
$\cd {4}{17}(\alpha)$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 = e_4$& $e_2 e_1 = e_4$\\&& $e_2 e_3 =\alpha e_4$& $e_3 e_2 = (\alpha+1) e_4$&\\
\hline
$\cd {4}{18}(\alpha)$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 =\alpha e_4$& \multicolumn{1}{l}{$e_2 e_1 =(\alpha+1) e_4$}& $e_3 e_3 = e_4$&\\
\hline
$\cd {4}{19}$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 = e_4$& $e_2 e_1 = e_4$\\&& $e_3 e_1 = e_4$&
$e_3 e_3 = e_4$&\\
\hline
$\cd {4}{20}$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_2 = e_4$& $e_2 e_1 = e_4$& $e_3 e_1 = e_4$&\\
\hline
$\cd {4}{21}(\alpha)$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_3 =\alpha e_4$& $e_2 e_1 = e_4$& $e_2 e_2 = e_4$&\\
&& $e_2 e_3 = e_4$& $e_3 e_1 =\alpha e_4$& $e_3 e_2 = e_4$& $e_3 e_3 = e_4$\\
\hline
$\cd {4}{22}$&$:$&
$e_1 e_1 = e_2$ & $e_1 e_3 = e_4$& $e_2 e_3 = -\frac{1}{2} e_4$\\&&
$e_3 e_1= e_4$& $e_3 e_2 =\frac{1}{2} e_4$& $e_3 e_3 = e_4$&\\
\hline
$\cd {4}{23}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_1 e_3 = e_4$& $e_2 e_3 = \alpha e_4$\\&&
$e_3 e_1 = e_4$& $e_3 e_2 =(\alpha +1)e_4$&\\
\hline
$\cd {4}{24}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_1 e_3 = e_4$& $e_2 e_2 = e_4$\\
&& $e_3 e_1 = e_4$& $e_3 e_2 = e_4$& $e_3 e_3 =\alpha e_4$&\\
\hline
$\cd {4}{25} $&$:$&
$e_1 e_1 = e_2$& $e_1 e_3 = e_4$& $e_2 e_1 = e_4$\\&& $e_3 e_1 = e_4$& $e_2 e_2 = e_4$&\\
\hline
$\cd {4}{26}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_1 e_3 = \alpha e_4$& $e_2 e_2 = e_4$& \multicolumn{2}{l}{$e_3 e_1 =(\alpha+1) e_4$}&\\
\hline
$\cd {4}{27}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_1 = e_4$& $e_2 e_3 = e_4$\\&& $e_3 e_2 = e_4$& $e_3 e_3 = e_4$&\\
\hline
$\cd {4}{28}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_2 e_1 = e_4$& $e_2 e_2 = e_4$\\
$\alpha\ne 1$&& $e_2 e_3 = e_4$& $e_3 e_2 = e_4$& $e_3 e_3 =\alpha e_4$&\\
\hline
$\cd {4}{29}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_3 =-\frac{1}{2} e_4$& $e_3 e_2 =\frac{1}{2} e_4$& $e_3 e_3 = e_4$&\\
\hline
$\cd {4}{30}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_1 = e_4$& $e_2 e_3 = e_4$& $e_3 e_2 = e_4$&\\
\hline
$\cd {4}{31}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_3 = e_4$& $e_3 e_1 = e_4$&
$e_3 e_2 = e_4$&\\
\hline
$\cd {4}{32}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_2 e_3 = \alpha e_4$& \multicolumn{2}{l}{$e_3 e_2 = (\alpha+1) e_4$}& \\
\hline
$\cd {4}{33}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_1 = e_4$& $e_2 e_2 = e_4$& $e_3 e_3 = e_4$& \\
\hline
$\cd {4}{34}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_2 = e_4$& $e_3 e_3 = e_4$& \\
\hline
$\cd {4}{35}$&$:$&
$e_1 e_1 = e_2$& $e_2 e_2 = e_4$& $e_3 e_1 = e_4$& $e_3 e_3 = e_4$& \\
\hline
$\cd {4}{36}(\alpha)$&$:$&
$e_1 e_1 = e_2$& $e_2 e_2 = e_4$& $e_3 e_2= e_4$& $e_3 e_3 =\alpha e_4$&\\
\hline
$\cd {4}{37}$&$:$&
$e_1 e_1= e_2$& $e_1 e_2= e_4$& $e_2 e_1= e_4$& $e_3 e_3= e_4$\\
\hline
$\cd {4}{38}$&$:$&
$e_1 e_1= e_2$& $e_2 e_3= e_4$& $e_3 e_2= e_4$&\\
\hline
$\cd 4{39}$ & $:$ &
$e_1 e_1 = e_3+e_4$ & $e_1e_2=\frac i2 e_4$ & $e_1e_3=e_4$ & $e_2e_1=\frac i2 e_4$\\
&& $ e_2 e_2=e_3$ & $e_2e_3=-2ie_4$ & $e_3e_1=2e_4$ & $e_3e_2=-ie_4$\\
\hline
$\cd 4{40}$ & $:$ &
$e_1 e_1 = e_3+e_4$ & $e_1e_2=\frac i2 e_4$ & $e_1e_3=-\frac 12e_4$ & $e_2e_1=\frac i2 e_4$\\
&& $ e_2 e_2=e_3$ & $e_2e_3=-\frac i2e_4$ & $e_3e_1=\frac 12e_4$ & $e_3e_2=\frac i2e_4$\\
\hline
$\cd 4{41}$ & $:$ &
$e_1 e_1 = e_3+e_4$ & $e_1e_2=-\frac i2 e_4$ & $e_1e_3=e_4$ & $e_2e_1=-\frac i2 e_4$\\
&& $ e_2 e_2=e_3$ & $e_2e_3=-2ie_4$ & $e_3e_1=2e_4$ & $e_3e_2=-ie_4$\\
\hline
$\cd 4{42}$ & $:$ &
$e_1 e_1 = e_3+e_4$ & $e_1e_2=-\frac i2 e_4$ & $e_1e_3=-\frac 12e_4$ & $e_2e_1=-\frac i2 e_4$\\
&& $ e_2 e_2=e_3$ & $e_2e_3=-\frac i2e_4$ & $e_3e_1=\frac 12e_4$ & $e_3e_2=\frac i2e_4$\\
\hline
$\cd 4{43}(\alpha)$ &$:$&
$e_1 e_1 = e_3 + e_4$ & $e_1 e_2 = \alpha e_4$ & $e_1 e_3 = -\frac 12 e_4$ \\
&& $e_2 e_1 = \alpha e_4$ & $e_2 e_2 = e_3$ & $e_3 e_1 = \frac 12 e_4$\\
\hline
$\cd 4{44}(\alpha,\beta,\gamma)$ & $:$ &
$e_1 e_1 = e_3 + \alpha e_4$ & $e_1 e_2 = \beta e_4$ & $e_2 e_1 = (\beta+\gamma) e_4$ \\
&& $e_2 e_2 = e_3$ & $e_3 e_1 = e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{45}$ & $:$ &
$e_1 e_1 = e_3 + 2i e_4$ & $e_1 e_2 = e_4$ & $e_2 e_1 = e_4$& $e_2 e_2 = e_3$\\
&& $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{46}(\alpha)$ & $:$ &
$e_1 e_1 = e_3 - 2i\alpha e_4$ & $e_1 e_2 = \alpha e_4$ & $e_2 e_1 = \alpha e_4$ & $e_2 e_2 = e_3$ \\
$\alpha\ne 0$& & $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{47}(\alpha,\beta)$ & $:$ &
$e_1 e_1 = e_3 + e_4$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_2 = e_3$\\
$\beta\ne 0$ && $e_2 e_3 = \beta e_4$ & $e_3 e_1 =(\alpha+1) e_4$ & $e_3 e_2 = \beta e_4$ \\
\hline
$\cd 4{48}(\alpha)$ & $:$ &
$e_1 e_1 = e_3 + \alpha e_4$ & $e_2 e_1 = i\alpha e_4$ & $e_2 e_2 = e_3$ \\
$\alpha\ne 0$&& $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{49}(\alpha)$ & $:$ &
$e_1 e_1 = e_3 + \alpha e_4$ & $e_2 e_1 = -i\alpha e_4$ & $e_2 e_2 = e_3$\\
$\alpha\ne 0$& & $e_3 e_1 = e_4$& $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{50}(\alpha)$ & $:$ &
$e_1 e_1 = e_3 + \alpha e_4$ & $e_2 e_1 = e_4$ & $e_2 e_2 = e_3$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{51}(\alpha)$ & $:$ &
$e_1 e_1 = e_3 + \alpha e_4$ & $e_2 e_2 = e_3$ & $e_3 e_1 = e_4$\\
&& $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{52}$ & $:$ &
$e_1 e_1 = e_3 + e_4$ & $e_2 e_2 = e_3$ & $e_3 e_3 = e_4$\\
\hline
$ \cd 4{53}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1e_2=-\frac 12e_4$ & $e_1e_3=e_4$ & $e_2e_1=\frac 12e_4$\\
&& $e_2 e_2 = e_3$ & $e_2e_3=ie_4$ & $e_3e_1=e_4$ & $e_3e_2=ie_4$ \\
\hline
$\cd 4{54}(\alpha)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1e_2=e_4$ & $e_1e_3=\alpha e_4$ & $e_2e_1=e_4$\\
&& $ e_2 e_2=e_3$ & $e_2e_3=-i(\alpha+1)e_4$ & $e_3e_1=(\alpha+1) e_4$ & $e_3e_2=-i\alpha e_4$\\
\hline
$\cd 4{55}(\alpha)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_2 = e_4$ & $e_1 e_3 = \alpha e_4$ \\
& & $e_2 e_1 = e_4$ & $e_2 e_2 = e_3$ & $e_3 e_1 = (\alpha+1) e_4$ \\
\hline
$\cd 4{56}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_2 = e_4$ & $e_2 e_1 = -e_4$ & $e_2 e_2 = e_3$ \\
& & $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{57}(\alpha,\beta)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_2 = \alpha e_4$ & $e_2 e_1 = (\alpha+\beta)e_4$ & $e_2 e_2 = e_3$ \\
$\beta\not\in\{0,-2\alpha\}$& & $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{58}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = e_4$ & $e_2 e_1 = e_4$ & $e_2 e_2 = e_3$\\
&& $e_2 e_3 = i e_4$ & $e_3 e_1 = e_4$ & $e_3 e_2 = i e_4$ \\
\hline
$\cd 4{59}(\alpha,\beta)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_2 = e_3$\\
$\beta\ne 0$ && $e_2 e_3 = \beta e_4$ & $e_3 e_1 = (\alpha+1) e_4$ & $e_3 e_2 = \beta e_4$ \\
\hline
$\cd 4{60}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = i e_4$ & $e_2 e_2 = e_3$\\
&& $e_2 e_3 = e_4$ & $e_3 e_1 = (i+1) e_4$ & $e_3 e_2 = (i+1) e_4$ \\
\hline
$\cd 4{61}(\alpha)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = -i\alpha e_4$ & $e_2 e_2 = e_3$\\
&& $e_2 e_3 = \alpha e_4$ & $e_3 e_1 = (1-i\alpha) e_4$ & $e_3 e_2 = (\alpha+i) e_4$ \\
\hline
$\cd 4{62}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1e_3=e_4$ & $e_2 e_2=e_3$\\
&& $e_2e_3=-2ie_4$ & $e_3e_1=2e_4$ & $e_3e_2=-ie_4$\\
\hline
$\cd 4{63}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1e_3=-\frac 12e_4$ & $ e_2 e_2=e_3$\\
&& $e_2e_3=-\frac i2e_4$ & $e_3e_1=\frac 12e_4$ & $e_3e_2=\frac i2e_4$\\
\hline
$\cd 4{64}(\alpha)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_2 = e_3$ & $e_3 e_1 = (\alpha+1) e_4$ \\
\hline
$\cd 4{65}(\alpha)$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = \alpha e_4$ & $e_2 e_2 = e_3$ \\
$\alpha\ne 0$&& $e_3 e_1 = (\alpha+1) e_4$ & $e_3 e_2 = i e_4$ \\
\hline
$\cd 4{66}$ & $:$ &
$e_1 e_1 = e_3$& $e_2 e_1 = e_4$ & $e_2 e_2 = e_3$ \\
&& $e_2 e_3 = e_4$ & $e_3 e_2 = e_4$ \\
\hline
$\cd 4{67}$ & $:$ &
$e_1 e_1 = e_3$ & $e_2 e_2 = e_3$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{68}$ & $:$ &
$e_1 e_1 = e_3+e_4$ & $e_1 e_3 = i e_4$ &$e_2 e_2 = e_3$ &\\
&&$e_2 e_3 = e_4$&$e_3 e_1 = ie_4$ & $e_3 e_2 = e_4$ \\
\hline
$\cd 4{69}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = ie_4$ &$e_2 e_2 = e_3$ &\\
&&$e_2 e_3 = e_4$&$e_3 e_1 = ie_4$ & $e_3 e_2 = e_4$ \\
\hline
$\cd 4{70}$ & $:$ &
$e_1 e_1 = e_3$ & $e_1 e_3 = e_4$ &$e_2 e_2 = e_3$ & $e_3 e_1 = e_4$ & \\
\hline
$\cd 4{71}$ & $:$ & $e_1 e_1 = e_4$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$ \\&& $e_2 e_1 = -e_3$ & $e_3 e_1 = -e_4$\\
\hline
$\cd 4{72}$ & $:$ & $e_1 e_1 = e_4$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$\\
&& $e_2 e_1 = -e_3$ & $e_2 e_2 = e_4$ & $e_3 e_1 = -e_4$\\
\hline
$\cd 4{73}$ & $:$ & $e_1 e_2 = e_3 + e_4$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ & $e_3 e_1 = -e_4$\\
\hline
$\cd 4{74}(\alpha)$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = (\alpha+1)e_4$ & $e_2 e_1 = -e_3$ & $e_3 e_1 = -\alpha e_4$\\
\hline
$\cd 4{75}(\alpha)$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = (\alpha+1)e_4$ & $e_2 e_1 = -e_3$ \\&& $e_2 e_2 = e_4$ & $e_3 e_1 = -\alpha e_4$\\ \hline
$\cd 4{76}$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ \\&& $e_2 e_2 = e_4$ & $e_3 e_1 = -e_4$\\
\hline
$\cd 4{77}$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ \\&& $e_2 e_3 = e_4$ & $e_3 e_2 = -e_4$\\
\hline
$\cd 4{78}$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$\\
&& $e_2 e_2 = e_4$ & $e_2 e_3 = e_4$ & $e_3 e_2 = -e_4$\\
\hline
$\cd 4{79}(\alpha)$ & $:$ & $e_1 e_2 = e_3+\alpha e_4$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ \\&& $e_2 e_3 = e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{80}$ & $:$ & $e_1 e_2 = e_3+e_4$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{81}$ & $:$ & $e_1 e_2 = e_3+e_4$ & $e_2 e_1 = -e_3$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{82}$ & $:$ & $e_1 e_2 = e_3$ & $e_1 e_3 = e_4$ & $e_2 e_1 = -e_3$ \\&& $e_2 e_2 = e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{83}( \alpha)$ & $:$ & $e_1 e_2 = e_3$ & $e_2 e_1 = -e_3$ & $e_2 e_2 = \alpha e_4$ \\
$\alpha\ne 0$&& $e_2 e_3 = e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{84}$ & $:$ & $e_1 e_2 = e_3$ & $e_2 e_1 = -e_3$ & $e_2 e_2 = e_4$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{85}$ & $:$ & $e_1 e_2 = e_3$ & $e_2 e_1 = -e_3$ & $e_3 e_3 = e_4$\\
\hline
$\cd 4{86}$ & $:$ & $e_1 e_2 = e_3$ & $e_2 e_1 = -e_3$ & $e_2 e_3 = e_4$ & $e_3 e_2 = -e_4$\\
\hline
$\cd {4}{87}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3+(2 \Theta-1)e_4$& $e_1 e_2=e_4$ & $e_1e_3=e_4$&
\multicolumn{2}{l}{$e_2 e_1=e_3-(1- \Theta)^2 \lambda^{-1}e_4$}\\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_2=e_3$& $e_2e_3=\Theta\lambda^{-1}e_4$
&$e_3e_3=e_4$\\
\hline$\cd {4}{88}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3+(1-2 \Theta)e_4$& $e_1 e_2=e_4$ & $e_1e_3=e_4$&
\multicolumn{2}{l}{$e_2 e_1=e_3- \Theta^2 \lambda^{-1}e_4$}\\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_2=e_3$& $e_2e_3=\Theta\lambda^{-1}e_4$
&$e_3e_3=e_4$\\
\hline$\cd {4}{89}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3+(2 \Theta-1)e_4$& $e_1 e_2=e_4$ & $e_1e_3=e_4$&
\multicolumn{2}{l}{$e_2 e_1=e_3-(1- \Theta)^2 \lambda^{-1}e_4$}\\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_2=e_3$& $e_2e_3=(1-\Theta)\lambda^{-1}e_4$
&$e_3e_3=e_4$\\
\hline$\cd {4}{90}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3+(1-2 \Theta)e_4$& $e_1 e_2=e_4$ & $e_1e_3=e_4$&
\multicolumn{2}{l}{$e_2 e_1=e_3- \Theta^2 \lambda^{-1}e_4$}\\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_2=e_3$& $e_2e_3=(1-\Theta)\lambda^{-1}e_4$
&$e_3e_3=e_4$\\
\hline$\cd {4}{91}(\lambda, \alpha)$&$:$&
$e_1 e_1 = \lambda e_3+(2 \Theta-1)e_4$& $e_1 e_2=e_4$ & $e_1e_3=\alpha e_4$& \\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_1=e_3- (1-\Theta)^2 \lambda^{-1}e_4$&
$e_2 e_2=e_3$& $e_3e_3=e_4$\\
\hline$\cd {4}{92}(\lambda, \alpha)$&$:$&
$e_1 e_1 = \lambda e_3+(1-2 \Theta)e_4$& $e_1 e_2=e_4$ & $e_1e_3=\alpha e_4$& \\
$\lambda \neq 0, \frac{1}{4}$&& $e_2 e_1=e_3- \Theta^2 \lambda^{-1}e_4$&
$e_2 e_2=e_3$& $e_3e_3=e_4$\\
\hline$\cd {4}{93}( \alpha)$&$:$&
$e_1 e_1 = e_4$ & $e_1 e_2=e_4$ &$e_1e_3=\alpha e_4$ & $e_2e_1=e_3+e_4$\\
&&$ e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$ & $e_3e_3=e_4$\\
\hline$\cd {4}{94}(\alpha, \beta)$&$:$&
$e_1 e_1 = e_4$ & $e_1e_2=e_4$& $e_1e_3=\alpha e_4$&\\
$\alpha\neq0$&&$e_2 e_1=e_3+\beta e_4$ & $e_2 e_2=e_3$ &$e_3e_3=e_4$\\
\hline$\cd {4}{95}(\alpha)$&$:$&
$e_1 e_1 = e_4$ & $e_1e_2=e_4$ &$e_1e_3=\alpha e_4$ & $e_2 e_1=e_3$\\
&& $e_2 e_2=e_3$ & $e_2e_3=\alpha e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{96}(\alpha)$&$:$&
$e_1 e_1 = e_4$ & $e_1e_2=e_4$ & $e_2 e_1=e_3+\alpha e_4$\\
&& $e_2 e_2=e_3$ & $e_2e_3= e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{97}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_2=e_4$& $e_1e_3=\Theta e_4 $&$e_2 e_1=e_3-e_4$ \\
&& $e_2 e_2=e_3$ &$e_2e_3=e_4$ &$e_3e_3=e_4$\\
\hline$\cd {4}{98}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_2=e_4$& $e_1e_3=(1-\Theta) e_4 $&$e_2 e_1=e_3-e_4$ \\
$\lambda \neq \frac{1}{4}$ && $e_2 e_2=e_3$ &$e_2e_3=e_4$ &$e_3e_3=e_4$\\
\hline$\cd {4}{99}(\alpha)$&$:$&
$e_1e_2=e_4$& $e_1e_3=e_4$& $e_2 e_1=e_3-e_4$\\
$\alpha\neq1$ && $e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{100}(\alpha)$&$:$&
$e_1 e_1 = \frac{1}{4} e_3$ & $e_1e_2=e_4$& $e_1e_3=\alpha e_4$& $e_2 e_1=e_3-e_4$ \\
$\alpha\notin\{0, \frac{1}{2}\}$&& $e_2 e_2=e_3$ &$e_2e_3=2 \alpha e_4$ &$e_3e_3=e_4$\\
\hline$\cd {4}{101}(\alpha, \beta)$&$:$&
$e_1e_2=e_4$& $e_1e_3=\alpha e_4$& $e_2 e_1=e_3$ \\
&& $e_2 e_2=e_3$&$e_2e_3=\beta e_4$ &$e_3e_3=e_4$\\
\hline$\cd {4}{102}(\lambda, \alpha)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_2=e_4$& $e_2 e_1=e_3-e_4$ \\
$\lambda\neq0$&& $e_2 e_2=e_3$&$ e_2e_3=\alpha e_4$ &$e_3e_3=e_4$\\
\hline$\cd {4}{103}$&$:$&
$e_1 e_2 = e_4$ & $e_2 e_1=e_3-e_4$ & $e_2 e_2=e_3$ &$e_3e_3=e_4$\\
\hline$\cd {4}{104}$&$:$&
$e_1 e_3 = e_4$ & $e_2 e_1=e_3+e_4$ & $e_2 e_2=e_3$ \\&&$e_2e_3=e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{105}(\lambda, \alpha,\beta)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_3=e_4$&$e_2 e_1=e_3+\alpha e_4$ \\
$ \lambda\ne 0, \alpha\ne 0$&& $e_2 e_2=e_3$ & $e_2e_3=\beta e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{106}(\alpha)$&$:$& $e_1 e_3 = e_4$ & $e_2 e_1=e_3+\alpha e_4$ & $e_2 e_2=e_3$ &$e_3e_3=e_4$\\
\hline$\cd {4}{107}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_3=\Theta e_4$& $e_2 e_1=e_3$ \\
&& $e_2 e_2=e_3$ & $e_2e_3=e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{108}(\lambda)$&$:$&
$e_1 e_1 = \lambda e_3$ & $e_1e_3=(1-\Theta) e_4$& $e_2 e_1=e_3$ \\
$\lambda \not\in \{0, \frac{1}{4}\}$&& $e_2 e_2=e_3$ & $e_2e_3=e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{109}(\lambda,\alpha)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3+e_4$ \\&& $e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{110}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3$ & $e_2 e_2=e_3$ \\&&$e_2e_3= e_4$&$e_3e_3=e_4$\\
\hline$\cd {4}{111}(\lambda)$&$:$& $e_1 e_1 = \lambda e_3$ & $e_2 e_1=e_3$ & $e_2 e_2=e_3$ &$e_3e_3=e_4$\\
\hline$\cd {4}{112}(\lambda, \alpha, \beta, \gamma)$&$:$&
$e_1 e_1 = \lambda e_3+e_4$ & $e_1e_3=\alpha e_4$ & $e_2 e_1=e_3+\beta e_4$ \\
&&$e_2 e_2=e_3$& $e_2e_3=\gamma e_4$&$e_3e_3=e_4$\\
\hline
${\bf N}^4_{01}(\alpha)$ &$:$&
$e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=e_4$ \\& & $e_2e_1=e_4$& $e_2 e_2=e_3$ \\
\hline
${\bf N}^4_{02}(\alpha,\beta,\gamma,\delta)$ &$:$&
$e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=\gamma e_4$& $e_2e_1=\beta e_4$\\&&
$e_2 e_2=e_3$& $e_2e_3=\delta e_4$& $e_3e_2=e_4$& $e_3e_3=e_4$\\
\hline
${\bf N}^4_{03}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_1e_2=\alpha e_4$& $e_1e_3=e_4$\\&& $e_2 e_2=e_3$ & $e_2e_3=e_4$\\ \hline
${\bf N}^4_{04}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\gamma e_4$&$e_2e_1=\beta e_4$\\
&& $e_2 e_2=e_3$&$e_2e_3=e_4$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{05}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=e_4$ \\&&$e_2e_1=\beta e_4$& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{06}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_4$\\&& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{07}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=e_4$& $e_2 e_2=e_3$ \\ \hline
${\bf N}^4_{08}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=\beta e_4$\\&&$e_2e_1=\alpha e_4$& $e_2 e_2=e_3$ &$e_3e_1=e_4$\\ \hline
${\bf N}^4_{09}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\beta e_4$ \\&& $e_2 e_2=e_3$ &$e_2e_3=e_4$&$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{10}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$& $e_2 e_2=e_3$ &$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{11}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$& $e_2 e_2=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{12}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_1e_3=\alpha e_4$&$e_2e_1=e_4$\\& & $e_2 e_2=e_3$ &$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{13}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=\alpha e_4$& $e_2 e_2=e_3$\\
&&$e_2e_3=\gamma e_4$&$e_3e_1=e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{14}(\alpha, \beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3 =e_4$&$e_2e_1= \alpha e_4$ \\&& $e_2 e_2=e_3$ &$e_2e_3=\beta e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{15}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_4$& $e_2 e_2=e_3$ \\&&$e_2e_3=\alpha e_4$ &$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{16}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=\alpha e_4$ & $e_2 e_2=e_3$ &$e_3e_1=e_4$\\ \hline
${\bf N}^4_{17}$ &$:$& $e_1 e_1 = e_2$& $e_1e_3=e_4$ & $e_2 e_2=e_3$\\ \hline
${\bf N}^4_{18}$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_2e_3=e_4$\\ \hline
${\bf N}^4_{19}(\alpha)$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_2e_3=\alpha e_4$ &$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{20}$ &$:$& $e_1 e_1 = e_2$& $e_2 e_2=e_3$ &$e_3e_3=e_4$\\ \hline
${\bf N}^4_{21}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=e_4$ \\&&$e_2e_1=e_3+\beta e_4$& $e_2 e_2=e_3$ \\ \hline
${\bf N}^4_{22}(\alpha,\beta,\gamma)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\gamma e_4$ \\&&$e_2e_1=e_3+\beta e_4$& $e_2 e_2=e_3$&$e_3e_1=e_4$ \\ \hline
${\bf N}^4_{23}(\alpha,\beta,\gamma)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_1e_3=\beta e_4$&$e_2e_1=e_3$\\
&& $e_2 e_2=e_3$&$e_2e_3=e_4$&$e_3e_1=\gamma e_4$ \\ \hline
${\bf N}^4_{24}(\alpha,\beta,\gamma,\delta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=e_3+\alpha e_4$& $e_2 e_2=e_3$ \\
&&$e_2e_3=\delta e_4$&$e_3e_1=\gamma e_4$ & $e_3e_2=e_4$\\ \hline
${\bf N}^4_{25}(\alpha, \beta, \gamma, \delta, \varepsilon)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2= \alpha e_4$& $e_2e_1=e_3+\beta e_4$ & $e_2 e_2=e_3$\\
&&$e_2e_3=\delta e_4$ & $e_3e_1=\gamma e_4$ & $e_3e_2=\varepsilon e_4$ &$e_3e_3=e_4$\\ \hline
${\bf N}^4_{26}(\alpha)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_1e_3=-\alpha e_4$ \\&&$e_2e_1=e_3$ &$e_2e_2=\alpha e_4$&$ e_3e_1=(1+2 \alpha) e_4$ \\ \hline
${\bf N}^4_{27}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$\\& &$e_2e_2=\beta e_4$&$e_2e_3=e_4$&$e_3e_1=e_4$& \\ \hline
${\bf N}^4_{28}(\alpha,\beta,\gamma)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$&$e_2e_2=\beta e_4$\\&&$e_2e_3=e_4$&$e_3e_1=\gamma e_4$&$e_3e_3=e_4$& \\ \hline
${\bf N}^4_{29}(\alpha,\beta)$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_2=\beta e_4$\\& &$e_2e_1=e_3$&$e_3e_1=e_4$&$e_3e_3=e_4$& \\ \hline
${\bf N}^4_{30}(\alpha) $ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$\\&&$e_2e_2=e_4$&$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{31}(\alpha) $ &$:$& $e_1 e_1 = e_2$&$e_1e_2=\alpha e_4$&$e_2e_1=e_3$\\&&$e_2e_2=e_4$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{32}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_2e_1=e_3$&$e_2e_3=e_4$ \\ \hline
${\bf N}^4_{33}$ &$:$& $e_1 e_1 = e_2$&$e_1e_2=e_4$&$e_2e_1=e_3$&$e_3e_3=e_4$ \\ \hline
${\bf N}^4_{34}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_1e_3=\beta e_4$&$e_2e_1=e_3$\\&&$e_2e_2=\alpha e_4$&
\multicolumn{2}{l}{$e_3e_1=(1-2\beta) e_4$} \\ \hline
${\bf N}^4_{35}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=\alpha e_4$&$e_2e_1=e_3$\\& &$e_2e_2=e_4$&$e_3e_1=-2\alpha e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{36}(\alpha,\beta)$ &$:$&
$e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_2=\alpha e_4$\\
$\beta\neq 0$& &$e_2e_3=\beta e_4$&$e_3e_1=e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{37}(\alpha)$ &$:$&
$e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_2=e_4$ \\
$\alpha\neq0$&&$e_2e_3=\alpha e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{38}$ &$:$& $e_1 e_1 = e_2$&$e_1e_3=e_4$&$e_2e_1=e_3$\\&&$e_3e_1=-2e_4$&$e_3e_2=e_4$ \\ \hline
${\bf N}^4_{39}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_3=e_4$& \\ \hline
${\bf N}^4_{40}(\alpha)$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_2e_3=\alpha e_4$&$e_3e_2=e_4$& \\ \hline
${\bf N}^4_{41}$ &$:$& $e_1 e_1 = e_2$&$e_2e_1=e_3$&$e_3e_3=e_4$& \\ \hline
${\bf N}^4_{42}(\lambda,\alpha)$ & $:$ &
$e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & \multicolumn{2}{l}{$e_1e_3=(\alpha(\lambda-2)+1)e_4$}\\
$\lambda\ne 1$&& $e_2 e_1=\lambda e_3+e_4$ &
\multicolumn{2}{l}{$e_2e_2=\frac{(\lambda+1) ((\lambda^2-1)\alpha + \lambda + 2)}{1-\lambda}e_4$} & $e_3e_1=(\alpha(1-2\lambda)-2)e_4$ \\
\hline
${\bf N}^4_{43}(\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac {5\alpha}2e_4$ & $e_2 e_1=-\frac 12e_3+\beta e_4$\\
&& $e_2e_2=e_4$ & $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2\alpha e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{44}(\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac {5\alpha}2e_4$ & $e_2 e_1=-\frac 12e_3+e_4$\\
&& $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2\alpha e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{45}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & \multicolumn{2}{l}{$e_1e_3=(\alpha(\lambda-2)+1)e_4$}\\
&& $e_2 e_1=\lambda e_3$& $e_2e_2=\beta e_4$ & \multicolumn{2}{l}{$e_3e_1=(\alpha(1-2\lambda)-2)e_4$} \\
\hline
${\bf N}^4_{46}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\alpha(\lambda-2)e_4$ & $e_2 e_1=\lambda e_3$\\
$\lambda\ne-\frac 12$&& $e_2e_2=e_4$ & $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=\alpha(1-2\lambda)e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{47}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=-\frac 52e_4$ & $e_2 e_1=-\frac 12e_3$\\
&& $e_2 e_3=-\frac 12 e_4$ & $e_3e_1=2 e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{48}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
$\lambda\ne -\frac 12$&& $e_2e_2=\beta e_4$ & $e_2 e_3=\lambda e_4$ & $e_3e_1=-2e_4$ & $e_3e_2 = e_4$ \\
\hline
${\bf N}^4_{49}(\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\beta e_4$\\
&& $e_2 e_1=-e_3+\alpha e_4$ & $e_2e_2=e_4$ & $e_2 e_3=e_4$\\
&& $e_3e_1=-2\beta e_4$ & $e_3e_2 = e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{50}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
$\lambda\ne-1$&& $e_2e_2=\beta e_4$ & $e_3e_1=-2e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{51}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
$\lambda\ne -\frac 12$&& $e_2e_2=e_4$ & $e_2e_3=\lambda e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{52}(\lambda,\alpha,\beta,\gamma)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$ & $e_2e_2=\beta e_4$\\
$\gamma\ne 1$&& $e_2e_3=\gamma e_4$ & $e_3e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{53}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
&& $e_2e_2=\beta e_4$ & $e_2e_3=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{54}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+\alpha e_4$\\
$\lambda\ne -1$&& $e_2e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{55}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+e_4$\\
$\lambda\ne -\frac 12$ && $e_2e_3=\lambda e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{56}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3+e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{57}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=\alpha e_4$ & $e_2e_3=e_4$ & $e_3e_1=-2e_4$ \\
\hline
${\bf N}^4_{58}(\lambda,\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=\lambda e_3$\\
$\beta\not\in\{-\frac 12,\lambda\}$&& $e_2e_2=\alpha e_4$ & $e_2e_3=\beta e_4$ & $e_3e_1=-2e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{59}(\alpha,\beta)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=\beta e_4$\\
$\alpha\ne 1$&& $e_2 e_1=-e_3$ & $e_2e_2=\alpha e_4$ & $e_2e_3=e_4$\\
&& $e_3e_1=-2\beta e_4$ & $e_3e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{60}(\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ & $e_2 e_1=-e_3$\\
$\alpha\ne 0$&& $e_2e_2=\alpha e_4$ & $e_3e_1=-2e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{61}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_2=e_4$ & $e_2e_3=e_4$ \\
\hline
${\bf N}^4_{62}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
$\alpha\not\in\{-\frac 12,\lambda\}$&& $e_2e_2=e_4$ & $e_2e_3=\alpha e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{63}$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=-e_3$\\
&& $e_2e_2=e_4$ & $e_3e_3=e_4$ \\
\hline
${\bf N}^4_{64}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ & $e_2e_3=e_4$ \\
\hline
${\bf N}^4_{65}(\lambda,\alpha)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$\\
&& $e_2e_3=\alpha e_4$ & $e_3e_2=e_4$ \\
\hline
${\bf N}^4_{66}(\lambda)$ & $:$ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_2 e_1=\lambda e_3$ & $e_3e_3=e_4$\\
\end{longtable}}
All of these algebras are pairwise non-isomorphic, except for the following:
{\tiny \begin{longtable}{c}
$\D{4}{01}(\lambda,0,\beta) \cong \D{4}{02}(\lambda,0,\beta) \cong \D{4}{04}(\lambda,\beta),\quad \D{4}{01}(\lambda,\alpha,0)_{\alpha \neq -1} \cong \D{4}{02}(\lambda,\alpha,0) \cong \D{4}{10}(\lambda,\alpha),\quad \D{4}{01}(\lambda,-1,0) \cong \D{4}{11}(\lambda,0),$\\
$\D{4}{03}(\lambda,0) \cong \D{4}{09}(\lambda,0),\quad \D{4}{03}\left(\lambda,(1-\Theta)^{-1}\right)_{\lambda \neq 0} \cong \D{4}{05}(\lambda,0)_{\lambda \neq 0}, \D{4}{03}\left(\lambda,\Theta^{-1}\right)\cong \D{4}{06}(\lambda,0),\quad \D{4}{04}(\lambda,0) \cong \D{4}{10}(\lambda,0),$\\
$\D{4}{05}(1/4,\alpha) \cong \D{4}{06}(1/4,\alpha),\quad \D{4}{07}(1/4) \cong \D{4}{08}(1/4), \quad
\D{4}{05}(0,\alpha) \cong \D{4}{07}(0) \cong \D{4}{23}(0) \cong \D{4}{25}(0) \cong \D{4}{40}(0),$\\
$\D{4}{12}(\lambda,0) \cong \D{4}{18}(\lambda,0),\quad \D{4}{12}(1/4,\alpha) \cong \D{4}{13}(1/4,\alpha),\quad \D{4}{12}(0,\alpha)_{\alpha \neq -1} \cong \D{4}{14}(0,\alpha),\quad \D{4}{12}(0,-1) \cong \D{4}{17}(0),$\\
$\D{4}{13}(\lambda,0) \cong \D{4}{19}(\lambda,0),\quad \D{4}{14}(\lambda,0) \cong \D{4}{20}(\lambda,0),\quad \D{4}{14}(1/4,\alpha) \cong \D{4}{15}(1/4,\alpha),\quad \D{4}{15}(\lambda,0) \cong \D{4}{21}(\lambda,0),$\\
$\D{4}{18}(1/4,\alpha) \cong \D{4}{19}(1/4,\alpha),\quad \D{4}{18}(0,0) \cong \D{4}{22}(0) \cong \D{4}{24}(0),\quad \D{4}{18}(1/4,-1) \cong \D{4}{19}(1/4,-1) \cong \D{4}{30}(1/4) \cong \D{4}{31}(1/4),$\\
$\D{4}{20}(1/4,\alpha) \cong \D{4}{21}(1/4,\alpha),\quad \D{4}{20}(1/4,-1) \cong \D{4}{21}(1/4,-1) \cong \D{4}{32}(1/4) \cong \D{4}{33}(1/4),$\\
$\D{4}{22}(1/4) \cong \D{4}{23}(1/4) \cong \D{4}{24}(1/4) \cong \D{4}{25}(1/4) \cong \D{4}{26}(1/4) \cong \D{4}{27}(1/4) \cong \D{4}{28}(1/4) \cong \D{4}{29}(1/4),$\\
$ \D{4}{37}(1/4) \cong \D{4}{38}(1/4),\quad \D{4}{39}(1/4) \cong \D{4}{40}(1/4).$
\end{longtable}
\begin{longtable}{lll}
$\cd 4{43}(\alpha)\cong\cd 4{43}(-\alpha)$ &
$\cd 4{44}(\alpha,\beta,\gamma)\cong\cd 4{44}(\alpha,-\beta,-\gamma)$&
$\cd 4{47}(\alpha,\beta)\cong \cd 4{47}(\alpha,-\beta)$\\
$\cd 4{50}(\alpha)=\cd 4{50}(-\alpha)$&
$\cd 4{54}(\alpha)\cong\cd 4{54}(-\alpha-1)$&
$\cd 4{57}(\alpha,\beta)\cong \cd 4{57}(\alpha+\beta,-\beta)$\\
$\cd 4{59}(\alpha,\beta)\cong\cd 4{59}(\alpha,-\beta)$&
$\cd {4}{91}(\lambda, \alpha)\cong\cd {4}{91}(\lambda, -\alpha)$ &
$\cd {4}{92}(\lambda, \alpha)\cong\cd {4}{92}(\lambda, -\alpha)$ \\
$\cd {4}{93}(\alpha)\cong\cd {4}{93}(-\alpha)$ &
$\cd {4}{94}(\alpha,\beta)\cong\cd {4}{94}(-\alpha,\beta)$ &
$\cd {4}{95}(\alpha)\cong\cd {4}{95}(-\alpha)$ \\
$\cd {4}{100}(\alpha)\cong\cd {4}{100}(-\alpha)$ &
$\cd {4}{101}(\alpha,\beta)\cong\cd {4}{101}(-\alpha,-\beta)$ &
$\cd {4}{109}(\lambda,\alpha)\cong\cd {4}{109}(\lambda,-\alpha)$ \\
\multicolumn{3}{c}{$\cd {4}{112}(\lambda,\alpha,\beta,\gamma)\cong\cd {4}{112}(\lambda,-\alpha,\beta,-\gamma)$} \\
\multicolumn{3}{c}{$\cd {4}{112}(\lambda,\alpha,\beta,\gamma)\cong \cd {4}{112}\left(\lambda,(\gamma-\alpha\beta)\sqrt{\frac{-\lambda}{1-\beta+\lambda\beta^2}},\frac 1\lambda-\beta,(\frac\gamma\lambda-\frac\alpha\lambda-\beta\gamma)\sqrt{\frac{-\lambda}{1-\beta+\lambda\beta^2}}\right)$, if $\lambda\ne 0$, $\beta\ne\frac{1\pm\sqrt{1-4\lambda}}{2\lambda}$}\\
${\bf N}^4_{02}(\alpha,\beta,\gamma,\delta) \cong {\bf N}^4_{02}(-(\alpha,\beta,\gamma),\delta)$ &
${\bf N}^4_{04}(\alpha,\beta,\gamma) \cong {\bf N}^4_{04}(-(\alpha,\beta,\gamma))$ & ${\bf N}^4_{05}(\alpha,\beta) \cong {\bf N}^4_{05}(\sqrt[3]{1}(\alpha,\beta))$ \\
${\bf N}^4_{29}(\alpha,\beta) \cong {\bf N}^4_{29}(-\alpha,\beta)$ &
${\bf N}^4_{31}(\alpha) \cong {\bf N}^4_{31}(-\alpha)$ & ${\bf N}^4_{50}(\lambda,\alpha,\beta) \cong {\bf N}^4_{50}(\lambda,-\alpha,\beta)$ \\ &
${\bf N}^4_{54}(\lambda,\alpha) \cong {\bf N}^4_{54}(\lambda,-\alpha)$
\end{longtable}
}
\end{theorem}
\section{Applications}\label{S:apps}
\subsection{The algebraic classification of $4$-dimensional nilpotent Lie-admissible algebras}
The variety of Lie-admissible algebras is defined by the following identity
$$[[x,y],z]+[[y,z],x]+[[z,x],y]=0,$$
where $[x,y]=xy-yx.$
Lie-admissible algebras satisfy the following fundamental property: {\it each Lie-admissible algebra becomes a Lie algebra under the commutator multiplication $[x,y]=xy-yx$}.
\begin{corollary}
Let $\bf A$ be a complex $4$-dimensional nilpotent Lie-admissible algebra.
Then $\bf A$ is isomorphic to an algebra from the list given in Theorem \ref{teo-alg}.
\end{corollary}
\begin{Proof}
Thanks to \cite{kkl19}, each $4$-dimensional nilpotent anticommutative algebra is a Lie algebra.
The commutator algebra of a $4$-dimensional nilpotent algebra is again $4$-dimensional, nilpotent, and anticommutative, hence a Lie algebra; that is, each $4$-dimensional nilpotent algebra is a Lie-admissible algebra.
\end{Proof}
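Because the classification is presented through structure constants, the Lie-admissible identity can also be checked mechanically. The following Python sketch (an illustration, not part of the proof; the encoding conventions and function names are ours) verifies $[[x,y],z]+[[y,z],x]+[[z,x],y]=0$ on all basis triples, which suffices by multilinearity; as a sample we encode $\cd 4{77}$ from the table above.

```python
import itertools

def mul(sc, x, y):
    """Multiply coordinate vectors x, y using structure constants
    sc[(i, j)] = coordinates of the product e_i e_j (absent pairs are zero)."""
    dim = len(x)
    out = [0] * dim
    for i, j in itertools.product(range(dim), repeat=2):
        if x[i] and y[j] and (i, j) in sc:
            for k, c in enumerate(sc[(i, j)]):
                out[k] += x[i] * y[j] * c
    return out

def bracket(sc, x, y):
    """Commutator [x, y] = xy - yx."""
    return [a - b for a, b in zip(mul(sc, x, y), mul(sc, y, x))]

def is_lie_admissible(sc, dim=4):
    """Check [[x,y],z] + [[y,z],x] + [[z,x],y] = 0 on all basis triples."""
    basis = [[int(i == j) for j in range(dim)] for i in range(dim)]
    for x, y, z in itertools.product(basis, repeat=3):
        t = [sum(v) for v in zip(bracket(sc, bracket(sc, x, y), z),
                                 bracket(sc, bracket(sc, y, z), x),
                                 bracket(sc, bracket(sc, z, x), y))]
        if any(t):
            return False
    return True

# CD^4_77 from the table: e1e2=e3, e1e3=e4, e2e1=-e3, e2e3=e4, e3e2=-e4
# (indices shifted to 0-based; each value lists coordinates in e1..e4)
cd4_77 = {(0, 1): [0, 0, 1, 0], (0, 2): [0, 0, 0, 1],
          (1, 0): [0, 0, -1, 0], (1, 2): [0, 0, 0, 1],
          (2, 1): [0, 0, 0, -1]}
print(is_lie_admissible(cd4_77))  # True, as the corollary predicts
```

Running the same check over every structure-constant table of Theorem \ref{teo-alg} gives a machine verification of the corollary.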
\subsection{The algebraic classification of
$4$-dimensional nilpotent Alia type algebras}
Let $\bf A$ be an algebra with a certain bilinear multiplication $(x,y)\mapsto xy$, which we call standard.
Let us define
$$\textsf{T}(x,y,z, {\textsf P})={\textsf P}([x,y],z)+{\textsf P}([y,z],x)+{\textsf P}([z,x],y),$$
where $[x,y]=xy-yx$ and ${\textsf P}$ is a bilinear multiplication on the same vector space.
Now we are ready to introduce Alia type algebras, which appeared in \cite{dzhuma1, dzhuma2}:
\begin{enumerate}
\item the variety of $0$-Alia (also known as $0$-anti-Lie-admissible) algebras is defined by the identity $\textsf{T}(x,y,z, {\textsf P})=0,$ where ${\textsf P}$ is the standard multiplication;
\item the variety of $1$-Alia (also known as $1$-anti-Lie-admissible) algebras is defined by the identity $\textsf{T}(x,y,z, {\textsf P})=0,$ where ${\textsf P}$ is the Jordan commutator multiplication ${\textsf P}(x,y)=xy+yx;$
\item the variety of two-sided Alia (also known as anti-Lie-admissible) algebras is defined by the identities $\textsf{T}(x,y,z, {\textsf P}_1)=0$ and $\textsf{T}(x,y,z, {\textsf P}_2)=0,$ where ${\textsf P}_1$ is the standard multiplication and ${\textsf P}_2$ is the opposite multiplication.
\end{enumerate}
\begin{corollary}
Let $\bf A$ be a complex $4$-dimensional nilpotent $0$-Alia ($1$-Alia, or two-sided Alia) algebra.
Then $\bf A$ is isomorphic to an algebra from the list given in Theorem \ref{teo-alg}, except
\begin{longtable}{c}
$\cd {4}{79} - \cd {4}{85}, \cd {4}{87} - \cd {4}{112}, {\bf N}^4_{25}, \ {\bf N}^4_{28}, \ {\bf N}^4_{29}, \ {\bf N}^4_{31}, \ {\bf N}^4_{33}, \ {\bf N}^4_{41}, \
{\bf N}^4_{49}, \ {\bf N}^4_{50}(\lambda\neq 1), $\\
$\ {\bf N}^4_{52}(\lambda\neq 1),\ {\bf N}^4_{53}(\lambda\neq 1), \ {\bf N}^4_{54}(\lambda\neq 1),\ {\bf N}^4_{56}(\lambda\neq 1),{\bf N}^4_{59}, {\bf N}^4_{60}, {\bf N}^4_{63}, {\bf N}^4_{66}(\lambda\neq 1).$
\end{longtable}
\end{corollary}
\begin{Proof}
A direct verification shows that all $3$-dimensional nilpotent algebras are
$0$-Alia, $1$-Alia, and two-sided Alia.
Analyzing the cocycles on $3$-dimensional algebras we see that the subspaces of $0$-Alia cocycles, $1$-Alia cocycles, and two-sided Alia cocycles coincide and are listed below.
\begin{longtable}{|l|l|}
\hline
Algebra & Cocycles \\
\hline
${\mathfrak{CD}}_{01}^{3}, {\mathfrak{CD}}_{04}^{3}(1),
{\mathfrak{CD}}_{01}^{3*},{\mathfrak{CD}}_{02}^{3*}$ &
$\langle \Delta_{ij} \rangle_{1\leq i,j\leq 3} $ \\
\hline
${\mathfrak{CD}}_{02}^{3}, {\mathfrak{CD}}_{03}^{3}, {\mathfrak{CD}}_{04}^{3}(\lambda \neq 1),
{\mathfrak{CD}}_{03}^{3*},{\mathfrak{CD}}_{04}^{3*}$ &
$\langle \Delta_{ij} \rangle_{1\leq i,j\leq 3; (i,j) \neq (3,3) } $ \\
\hline
\end{longtable}
It remains to choose the algebras from Theorem \ref{teo-alg} that are determined by Alia cocycles.
\end{Proof}
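The exception list can be probed the same way from the structure constants. The Python sketch below (our illustration; the two sample algebras are our choice from the tables above) checks the $0$-Alia identity $\textsf{T}(x,y,z,{\textsf P})=0$, with ${\textsf P}$ the standard multiplication, on all basis triples, which suffices by multilinearity. It confirms that ${\bf N}^4_{39}$, which is not excepted, satisfies the identity, while ${\bf N}^4_{41}$, which is excepted, does not.

```python
import itertools

def mul(sc, x, y):
    # product in the algebra given by structure constants
    # sc[(i, j)] = coordinates of e_i e_j (absent pairs are zero)
    dim = len(x)
    out = [0] * dim
    for i, j in itertools.product(range(dim), repeat=2):
        if x[i] and y[j] and (i, j) in sc:
            for k, c in enumerate(sc[(i, j)]):
                out[k] += x[i] * y[j] * c
    return out

def bracket(sc, x, y):
    # commutator [x, y] = xy - yx
    return [a - b for a, b in zip(mul(sc, x, y), mul(sc, y, x))]

def is_zero_alia(sc, dim=4):
    # T(x,y,z) = [x,y]z + [y,z]x + [z,x]y with the standard product outside
    basis = [[int(i == j) for j in range(dim)] for i in range(dim)]
    for x, y, z in itertools.product(basis, repeat=3):
        t = [sum(v) for v in zip(mul(sc, bracket(sc, x, y), z),
                                 mul(sc, bracket(sc, y, z), x),
                                 mul(sc, bracket(sc, z, x), y))]
        if any(t):
            return False
    return True

# N^4_39: e1e1=e2, e2e1=e3, e2e3=e4   (not in the exception list)
n4_39 = {(0, 0): [0, 1, 0, 0], (1, 0): [0, 0, 1, 0], (1, 2): [0, 0, 0, 1]}
# N^4_41: e1e1=e2, e2e1=e3, e3e3=e4   (in the exception list)
n4_41 = {(0, 0): [0, 1, 0, 0], (1, 0): [0, 0, 1, 0], (2, 2): [0, 0, 0, 1]}
print(is_zero_alia(n4_39), is_zero_alia(n4_41))  # True False
```

The $1$-Alia and two-sided checks are analogous, with the Jordan and opposite multiplications in place of the standard one.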
%% arXiv:2012.00525 -- The algebraic classification of nilpotent algebras
%% Subjects: Rings and Algebras (math.RA)
%% Abstract: We give the complete algebraic classification of all complex
%% 4-dimensional nilpotent algebras. The final list has 234 (parametric
%% families of) isomorphism classes of algebras, 66 of which are new in the
%% literature.
%% arXiv:1711.07288 -- When Fourth Moments Are Enough

\section{Introduction}
This note concerns a somewhat innocent question motivated by an observation
concerning the use of Chebyshev bounds on sample estimates of $p$ in the
binomial distribution with parameters $n,p$. Namely, what moment order
produces the best Chebyshev estimate of $p$? Chebyshev is arguably the most
basic concentration inequality to occur in risk probability estimates and the
use of second moments is a textbook example in elementary probability and
statistics. Consider i.i.d. Bernoulli $0$--$1$ random variables $X_1,X_2,\dots,X_n$
with parameter $p\in[0,1]$, and let $S_n(p) = \sum_{j=1}^n(X_j-p)$. It is
readily observed that
${\rm argmax}_{0\le p\le 1}{\mathbb E}S_n^2(p) = {\rm argmax}_{0\le p\le 1}np(1-p) = \frac12$.
It is also a well-known probability exercise to check that fourth moment
Chebyshev bounds improve the rate of convergence, and can more generally be
used for a proof of the strong law of large numbers, e.g. see (Bhattacharya and
Waymire, 2016; p.100). Somewhat relatedly, Rabi Bhattacharya
(personal communication) recently noticed, after a mildly tedious calculation
to check ${\rm argmax}_{0\le p\le 1}{\mathbb E}S_n^4(p) = \frac12$, that the
second moment Chebyshev bound is rather significantly improved by consideration
of fourth moments as well. In particular, while the second moment Chebyshev
sample size for a $95\%$ confidence estimate within $\pm 5$ percentage points
is $n = 2000$, the fourth moment yields the substantially reduced polling
requirement of $n = 775$. While the Chebyshev inequality is one among several
inequalities used to obtain sample estimates, it is no doubt the simplest; see
(Bhattacharya and Waymire, 2016) for comparison of fourth order Chebyshev to
other concentration inequality bounds, and (Skinner, 2017) for numerical
comparisons to higher order Chebyshev bounds.
So why stop at fourth moments? Is
${\rm argmax}_{0\le p\le 1}\mathbb{E}S_n^{2m}(p) = \frac12$ for all $m,n$ and,
if so, does it improve the estimate? Somewhat surprisingly we were not able to
find a resolution of such basic questions in the published literature. In any
case, with the argmax question resolved in part $(a)$ of the theorem below,
part $(b)$ provides a direct computation of $\mathbb{E}S_n^{2m}(\frac12)$. Part
$(c)$ then provides a more readily computable version.
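The two sample sizes quoted above are easily confirmed in exact rational arithmetic. The following Python sketch (our illustration, not part of the note) evaluates the $2m$-th moment Chebyshev bound $P(|S_n(\frac12)/n|\ge\varepsilon)\le \mathbb{E}S_n^{2m}(\frac12)/(n\varepsilon)^{2m}$, using the moment formula from part $(c)$ of the theorem below.

```python
from fractions import Fraction
from math import comb

def moment(n, m):
    """E S_n^{2m}(1/2) = 2^(-2m-n) * sum_k C(n,k) (2k-n)^(2m), exactly."""
    s = sum(comb(n, k) * (2 * k - n) ** (2 * m) for k in range(n + 1))
    return Fraction(s, 2 ** (2 * m + n))

def cheb_bound(n, m, eps=Fraction(1, 20)):
    """2m-th moment Chebyshev bound on P(|S_n(1/2)/n| >= eps)."""
    return moment(n, m) / (n * eps) ** (2 * m)

delta = Fraction(1, 20)  # 95% confidence, +/- 5 percentage points
# second moment: n = 2000 is the smallest sample size that works ...
assert cheb_bound(1999, 1) > delta >= cheb_bound(2000, 1)
# ... while the fourth moment already succeeds at n = 775
assert cheb_bound(774, 2) > delta >= cheb_bound(775, 2)
```

In fact $n=2000$ meets the second moment bound with equality, while the fourth moment bound at $n=775$ is strict.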
\begin{theorem}\label{mainthm}
\begin{enumerate}[label=$(\alph*)$]
\item For all $m\ge 1$ and all $n$ sufficiently large,
${\rm argmax}_{0\le p\le 1}\mathbb{E}S^{2m}_n(p) = \frac12$.
\item For all positive $m$ and $n$,
$\mathbb{E}S^{2m}_n(\frac12) =
4^{-m}\sum_{\mu\in\pi(m), |\mu|\le m\wedge n}\binom{2m}{2\mu_1,\dots,2\mu_{|\mu|}}
\binom{n}{|\mu|}$,
\item For all positive $m$ and $n$,
${\mathbb E}S^{2m}_n(\frac12) = 2^{-2m-n}\sum_{k=0}^n \binom{n}{k} (2k-n)^{2m}$.
\end{enumerate}\noindent
Here $\pi(m)$ is the set of ordered integer partitions of $m$, also referred to
as integer compositions, and $|\mu|$ denotes the number of parts of
$\mu\in\pi(m)$. We refer to $|\mu|$ as the {\it size} of the partition $\mu$.
\end{theorem}
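Parts $(b)$ and $(c)$ may be cross-checked numerically. The Python sketch below (an illustration; the enumeration of compositions and the helper names are ours) computes both formulas exactly and confirms that they agree.

```python
from fractions import Fraction
from math import comb, factorial

def compositions(m):
    """Yield the ordered partitions (compositions) of m as tuples."""
    if m == 0:
        yield ()
        return
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield (first,) + rest

def moment_b(n, m):
    """Part (b): 4^{-m} sum over compositions mu of m with |mu| <= min(m, n)."""
    total = 0
    for mu in compositions(m):
        if len(mu) > min(m, n):
            continue
        multinom = factorial(2 * m)
        for part in mu:
            multinom //= factorial(2 * part)
        total += multinom * comb(n, len(mu))
    return Fraction(total, 4 ** m)

def moment_c(n, m):
    """Part (c): 2^{-2m-n} sum_k C(n,k) (2k-n)^{2m}."""
    s = sum(comb(n, k) * (2 * k - n) ** (2 * m) for k in range(n + 1))
    return Fraction(s, 2 ** (2 * m + n))

assert all(moment_b(n, m) == moment_c(n, m)
           for n in range(1, 8) for m in range(1, 6))
print(moment_b(3, 2))  # 21/16
```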
The equivalent calculus challenge is to show for fixed $m$ that, with $q=1-p$, for all
sufficiently large $n$,
\begin{equation}\label{eqMoment}
\text{argmax}_{0\le p\le 1}\frac{d^{2m}}{dt^{2m}}(pe^{qt} + qe^{-pt})^n|_{t=0}
= \frac12.
\end{equation}
The example below illustrates the challenge to locating absolute maxima
for such polynomials (in $p$), especially to proofs by mathematical
induction. The proof given here is based on explicit combinatorial computation
of ${\mathbb E}S_n^{2m}(p)$ in terms of ordered partitions of $2m$,
after introducing a few preliminary
lemmas. The lemmas are relatively simple to check using the
statistical independence and identical distributions of the terms $X_i-p$ and $X_j-p$,
$i\neq j$, and make good exercises in calculus, probability, and number theory.
However let us first observe that part $(a)$ of the theorem does not hold for $m > n$.
\
\noindent {\bf Counterexample to Theorem \ref{mainthm}$(a)$ for
(small) $n< m$:}
Observe for $n = 1$ and $m=2$, the function
$${\mathbb E}S_1^4(p) = p - 4p^2 + 6p^3 - 3p^4, \quad 0\le p\le 1,$$
has a {\it local minimum} at $p=\frac12$, with two global maxima
at $\frac12 \pm \frac{\sqrt{3}}{6}$. In particular,
$${\rm argmax}_{0\le p\le 1}{\mathbb E}S_1^4(p)
= \frac12 \pm \frac{\sqrt{3}}{6}.$$
This also shows that the polynomial is generally {\it not} unimodal.
So the restriction to sufficiently large $n$
is necessary for part $(a)$ of Theorem \ref{mainthm}. There is also the question of how large is sufficiently
large. We do not address this here, but computations suggest a bound along the
lines of $m\le c\cdot n^{\varepsilon}$, with $\varepsilon$ a little less than $\frac12$.
We let $m_n$ denote the largest value of $m$, dependent on $n$, such that
Theorem \ref{mainthm}$(a)$ holds for all $m\le m_n$. We leave it as an open problem
to determine an exact formula for $m_n$ and to determine a formula for
${\rm argmax}_{0\le p\le1}\mathbb{E}S_n^{2m}(p)$ for $m>m_n$.
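The counterexample, and the behavior of $m_n$, are easy to explore numerically. The following Python sketch (our illustration; the grid resolution and tolerance are ad hoc choices) evaluates $\mathbb{E}S_n^{2m}(p)=\sum_{k=0}^n\binom{n}{k}p^k(1-p)^{n-k}(k-np)^{2m}$ on a uniform grid and tests whether $p=\frac12$ is a maximizer.

```python
from math import comb

def moment_p(n, m, p):
    """E S_n^{2m}(p) = sum_k C(n,k) p^k (1-p)^{n-k} (k - n p)^{2m}."""
    q = 1.0 - p
    return sum(comb(n, k) * p**k * q**(n - k) * (k - n * p)**(2 * m)
               for k in range(n + 1))

def argmax_at_half(n, m, grid=2001):
    """True iff p = 1/2 maximizes E S_n^{2m}(p) over a uniform grid on [0, 1]."""
    half = moment_p(n, m, 0.5)
    return all(moment_p(n, m, i / (grid - 1)) <= half + 1e-12
               for i in range(grid))

print(argmax_at_half(1, 2))   # False: the counterexample above
print(argmax_at_half(10, 2))  # True
```

Scanning over $m$ for each fixed $n$ with this routine is one way to tabulate empirical values of $m_n$.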
\section{Proofs and Remarks}
Let $\pi(2m)$ denote the set of ordered partitions of $2m$. We will use
$|\mu| = k$ to denote the number of parts of $\mu$. Finally, for
$\mu\in\pi(2m)$, let
\begin{equation*}
f_i(\mu,p) = pq^{\mu_i} + q(-p)^{\mu_i}, \quad
0\le p\le 1, q=(1-p), 1\le i\le |\mu|.
\end{equation*}
\begin{lemma}
\label{basiclemma} Let $0\le p\le 1$ and $q = 1-p$. The following hold,
\begin{enumerate}[label=$(\alph*)$]
\item $S_n(p) = -^{\text{dist}} S_n(q)$,
\item $\mathbb{E}S^{2m}_n(p) = \mathbb{E}S^{2m}_n(q)$,
\item $\mathbb{E}S^{2m}_n(p)
= \sum_{\mu\in\pi(2m)}\binom{n}{|\mu|} \binom{2m}{\mu_1,\dots,\mu_{|\mu|}}\prod_{i=1}^{|\mu|}f_i(\mu,p)$,
\item $\frac{d}{dp}\mathbb{E}S^{2m}_n(p)
= \sum_{\mu\in\pi(2m)}\binom{n}{|\mu|} \binom{2m}{\mu_1,\dots,\mu_{|\mu|}}\sum_{i=1}^{|\mu|}
f_i^\prime(\mu,p)\prod_{j\ne i}^{|\mu|}f_j(\mu,p)$.
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{derivlem}
Let $\mu\in\pi(2m)$ and $1\le i\le |\mu|$. Then,
$$\frac{d}{dp}f_i(\mu,p) =
q^{\mu_i}\left(1-\frac{p}{q}\mu_i\right) + (-1)^{\mu_i+1}
p^{\mu_i}\left(1-\frac{q}{p}\mu_i\right).$$
\end{lemma}
It now follows easily that
\begin{equation}
\label{fi}
f_i\left(\mu,\frac12 \right) =
\begin{cases}
2^{-\mu_i} & \text{for even}\ \mu_i,\\
0 & \text{for odd}\ \mu_i,
\end{cases}
\end{equation}
\begin{equation}
\label{fiprime}
f^\prime_i\left(\mu,\frac12 \right) =
\begin{cases}
0 & \text{for even}\ \mu_i,\\
-2(\mu_i-1)2^{-\mu_i} & \text{for odd}\ \mu_i.
\end{cases}
\end{equation}
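As a quick sanity check (our own illustration, not needed for the proof), the closed form of Lemma \ref{derivlem} and the two displayed evaluations at $p=\frac12$ can be tested numerically in Python:

```python
def f(mu_i, p):
    """f_i(mu, p) = p q^{mu_i} + q (-p)^{mu_i}, with q = 1 - p."""
    q = 1 - p
    return p * q**mu_i + q * (-p)**mu_i

def fprime(mu_i, p):
    """Closed form for (d/dp) f_i(mu, p) from Lemma 2."""
    q = 1 - p
    return (q**mu_i * (1 - (p / q) * mu_i)
            + (-1)**(mu_i + 1) * p**mu_i * (1 - (q / p) * mu_i))

h = 1e-6
for mu_i in range(1, 8):
    # Lemma 2 versus a central difference quotient at p = 1/2
    num = (f(mu_i, 0.5 + h) - f(mu_i, 0.5 - h)) / (2 * h)
    assert abs(num - fprime(mu_i, 0.5)) < 1e-6
    # the parity evaluations at p = 1/2
    assert abs(f(mu_i, 0.5) - (2.0**-mu_i if mu_i % 2 == 0 else 0.0)) < 1e-12
    assert abs(fprime(mu_i, 0.5) -
               (0.0 if mu_i % 2 == 0 else -2 * (mu_i - 1) * 2.0**-mu_i)) < 1e-12
print("Lemma 2 and the parity evaluations check out")
```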
The keys to the following proof of Theorem \ref{mainthm} reside in
(1) the parity conflicts between
\eqref{fi} and \eqref{fiprime} and (2) the expansion
$(d)$ in Lemma \ref{basiclemma}, viewed as a polynomial in $n$.
\begin{proof}[Proof (of theorem)]
That $p=\frac12$ is a critical point follows from $(d)$ of Lemma \ref{basiclemma}
together with \eqref{fi} and \eqref{fiprime} by examining the terms
$f_i^\prime(\mu,\frac12)\prod_{j\ne i}^{|\mu|}f_j(\mu,\frac12)$.
In particular, for partitions of $2m$, if $\mu_i$ is odd then there must be a
$j\neq i$ such that $\mu_j$ is odd as well. To see that $p=\frac12$ is an
absolute maximum, the trick is to observe that for $0\le p < \frac12 < q$,
the leading coefficient of $\frac{d}{dp}\mathbb{E}S_n^{2m}(p)$, viewed as a
polynomial in $n$, is obtained at the $m$-part composition $\mu = (2,2,\dots,2)$
of $2m$. Namely, it is obtained from
$\binom{n}{m}\binom{2m}{2,2,\dots,2}m(q^2-p^2)(pq)^{m-1}$,
and therefore is positive for all $p<\frac12$. Thus, for sufficiently large $n$,
$$\frac{d}{dp}\mathbb{E}S_n^{2m}(p) > 0, \quad\mbox{for } 0\le p < 1/2.$$
In view of the symmetry expressed in
$(b)$ of Lemma \ref{basiclemma}, it follows that $p =\frac12$ is the unique
global maximum.
For part $(b)$ of the theorem one simply computes from independence,
writing $\tilde{X}_i = X_i-\frac12, i =1,2,\dots, n$. In particular,
$\tilde{X}_i = \pm\frac12$ with equal probabilities, so that all odd moments
of $\tilde{X}_i$ vanish and ${\mathbb E}\tilde{X}_i^{2m_i} = 4^{-m_i}$ for
$m_i\ge 1$. So, for $m\ge 1$,
\begin{align*}
\mathbb{E}S_n^{2m}\left(\frac12\right)
&=
\sum_{1\le j_1,\dots,j_{2m}\le n}{\mathbb E}\prod_{i=1}^{2m}\tilde{X}_{j_i}
\\
&=
\sum_{2m_1+\cdots +2m_n = 2m}\binom{2m}{2m_1,\dots,2m_n}\prod_{i=1}^{n}{\mathbb E}\tilde{X}_{i}^{2m_i}
\\
&=
\sum_{k=1}^{m\wedge n}\sum_{\substack{2m_1+\cdots +2m_n = 2m\\ \#\{j:m_j\ge 1\}=k}}\binom{2m}{2m_1,\dots,2m_n}\prod_{i=1}^{n}4^{-m_i}
\\
&=
\sum_{k=1}^{m\wedge n}\binom{n}{k}\sum_{\mu=(\mu_1,\dots,\mu_k)\in\pi(m)}
\binom{2m}{2\mu_1,\dots,2\mu_k}4^{-m}.
\end{align*}
Here one adopts the convention that a sum over an empty set is
zero so that if there are no partitions $\mu$ of $m$ with $|\mu|=k$
then the indicated sum is zero for this choice of $k$. So
nonzero contributions to the sum are provided by ordered partitions $\mu$
of size $|\mu|\le m\wedge n$.
To simplify the computation over ordered partitions in part $(b)$, one
may proceed as follows to obtain the formula in part $(c)$. We instead compute
$\mathbb{E}S_n^{2m}(\frac12)$ as the $2m$-th moment of $S_n(\frac12)$ as
given in \eqref{eqMoment}. By the binomial theorem, we have that
\begin{align*}
\mathbb{E}S_n^{2m}\left(\frac12\right)
&=
\frac{d^{2m}}{dt^{2m}}
\left[\left(
\frac{e^{\frac{t}{2}}}{2}
+
\frac{e^{-\frac{t}{2}}}{2}
\right)^n\right]_{t=0}
=
\frac{d^{2m}}{dt^{2m}}
\left[
2^{-n}
\sum_{k=0}^n \binom{n}{k}
e^{\frac{t}{2}(2k-n)}
\right]_{t=0}
\\&=
2^{-n-2m}\sum_{k=0}^n \binom{n}{k} (2k-n)^{2m}.
\end{align*}
\end{proof}
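As a quick numerical sanity check of the final formula (our own Python sketch, not part of the paper), one can compare the closed form $\mathbb{E}S_n^{2m}(\frac12) = 2^{-n-2m}\sum_{k}\binom{n}{k}(2k-n)^{2m}$ with a brute-force average over all $2^n$ equally likely sign patterns of the centered variables $\tilde X_i = \pm\frac12$:

```python
from fractions import Fraction
from itertools import product
from math import comb

def moment_closed_form(n, m):
    # E S_n^{2m}(1/2) = 2^{-n-2m} sum_k C(n,k) (2k-n)^{2m}
    return Fraction(sum(comb(n, k) * (2*k - n)**(2*m) for k in range(n + 1)),
                    2 ** (n + 2*m))

def moment_brute_force(n, m):
    # average of (sum of +-1/2)^{2m} over all 2^n equally likely sign patterns
    total = Fraction(0)
    for signs in product((-1, 1), repeat=n):
        s = Fraction(sum(signs), 2)
        total += s ** (2 * m)
    return total / 2 ** n

for n in range(1, 8):
    for m in range(1, 4):
        assert moment_closed_form(n, m) == moment_brute_force(n, m)
# the second moment recovers the familiar E S_n^2(1/2) = n/4
assert all(moment_closed_form(n, 1) == Fraction(n, 4) for n in range(1, 8))
print("moment formula verified")
```

Exact rational arithmetic avoids any floating-point tolerance in the comparison.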
\begin{remark}
A linear recurrence in $m$ is possible
to aid the pre-asymptotic (in $n$) computation
of $\mathbb{E}S_n^{2m}(\frac12)$.
Namely,
\begin{align}
\label{recur}
\mathbb{E}S_n^{2m+2\ell+2}\left(\frac12\right)
&=
\sum_{j=0}^\ell
c_j 2^{2j-2\ell-2}\mathbb{E}S_n^{2m+2j}\left(\frac12\right),
\end{align}
where
$\ell=\left\lfloor\frac{n-1}{2}\right\rfloor$,
$a_k = (2k-n)^2$, and $(c_0,c_1,\dotsc,c_\ell)$ is the unique solution to
\begin{align*}
\begin{pmatrix}
a_0^0&a_0^1&\dots&a_0^\ell\\
a_1^0&a_1^1&\dots&a_1^\ell\\
\vdots
\\
a_\ell^0&a_\ell^1&\dots&a_\ell^\ell\\
\end{pmatrix}
\begin{pmatrix}
c_0\\c_1\\ \vdots\\ c_\ell
\end{pmatrix}
&=
\begin{pmatrix}
a_0^{\ell+1}\\ a_1^{\ell+1}\\ \vdots\\ a_\ell^{\ell+1}
\end{pmatrix}.
\end{align*}
To see this, write
\begin{align*}
\mathbb{E}S_n^{2m}\left(\frac12\right)
&=
2^{-2m-n+1}\sum_{k=0}^{\ell}
\binom{n}{k} (2k-n)^{2m} .
\end{align*}
Then \eqref{recur} follows since
\begin{align*}
&\mathbb{E}S_n^{2m+2\ell+2}\left(\frac12\right)
-
\sum_{j=0}^\ell
c_j 2^{2j-2\ell-2}\mathbb{E}S_n^{2m+2j}\left(\frac12\right)
\\
&=
2^{-2m-2\ell-n-1}\sum_{k=0}^{\ell}\binom{n}{k}a_k^{m+\ell+1}
-
\sum_{j=0}^{\ell}c_j 2^{-2m-2\ell-n-1}
\sum_{k=0}^{\ell}\binom{n}{k}a_k^{m+j}
\\
&=
2^{-2m-2\ell-n-1}\sum_{k=0}^{\ell}\binom{n}{k}a_k^{m}
\left(a_k^{\ell+1}-\sum_{j=0}^{\ell}c_ja_k^j \right)
=0.
\end{align*}
\end{remark}
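The recurrence can also be checked numerically (our own sketch). The Vandermonde system has a closed-form solution: since the $c_j$ interpolate $x^{\ell+1}$ at the nodes $a_0,\dots,a_\ell$, we have $\sum_j c_j x^j = x^{\ell+1}-\prod_{k}(x-a_k)$, so the $c_j$ can be read off from the product polynomial:

```python
from fractions import Fraction
from math import comb

def moment(n, m):
    # E S_n^{2m}(1/2) via the closed form from the proof
    return Fraction(sum(comb(n, k) * (2*k - n)**(2*m) for k in range(n + 1)),
                    2 ** (n + 2*m))

def recurrence_coeffs(n):
    # sum_j c_j x^j = x^{l+1} - prod_k (x - a_k) interpolates x^{l+1}
    # at the nodes a_k = (2k-n)^2, k = 0..l, l = floor((n-1)/2)
    l = (n - 1) // 2
    nodes = [Fraction((2*k - n)**2) for k in range(l + 1)]
    poly = [Fraction(1)]                      # coefficients of prod (x - a_k), low to high
    for a in nodes:
        new = [Fraction(0)] * (len(poly) + 1)
        for i, ci in enumerate(poly):
            new[i + 1] += ci                  # contribution of the factor x
            new[i] -= a * ci                  # contribution of the factor -a_k
        poly = new
    return l, [-poly[j] for j in range(l + 1)]

for n in range(2, 9):
    l, c = recurrence_coeffs(n)
    for m in range(1, 4):
        lhs = moment(n, m + l + 1)
        rhs = sum(c[j] * Fraction(2 ** (2*j), 2 ** (2*l + 2)) * moment(n, m + j)
                  for j in range(l + 1))
        assert lhs == rhs
print("recurrence verified")
```

For instance, $n=3$ gives $\ell=1$, nodes $9,1$ and $(c_0,c_1)=(-9,10)$.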
\
\
For the application to statistical estimation one may
combine Theorem \ref{mainthm}
with Chebyshev's inequality to obtain,
\begin{corollary} For $\epsilon > 0$, we have that
$
P\left(|\frac{1}{n}S_n(p)| > \epsilon\right) \le \min_{1\le m\le m_n}
\left(
\frac{\sqrt[2m]{{\mathbb E}S^{2m}_n(\frac12)}}
{n\epsilon}\right)^{2m}.
$
\end{corollary}
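The corollary reproduces the polling observation from the abstract: with $\epsilon = 0.05$ and a $95\%$ confidence requirement, the second-moment Chebyshev bound needs $n=2000$ samples while the fourth-moment bound needs only $n=775$. A small Python check (our own sketch, using the exact closed forms $\mathbb{E}S_n^{2}(\frac12)=n/4$ and $\mathbb{E}S_n^{4}(\frac12)=n(3n-2)/16$, first cross-checked against the general formula):

```python
from fractions import Fraction
from math import comb

def moment(n, m):
    # E S_n^{2m}(1/2), exact, from the general formula
    return Fraction(sum(comb(n, k) * (2*k - n)**(2*m) for k in range(n + 1)),
                    2 ** (n + 2*m))

def moment2(n):  # E S_n^2(1/2) = n/4
    return Fraction(n, 4)

def moment4(n):  # E S_n^4(1/2) = n(3n-2)/16
    return Fraction(n * (3*n - 2), 16)

# cross-check the closed forms against the general formula for small n
for n in range(1, 12):
    assert moment(n, 1) == moment2(n) and moment(n, 2) == moment4(n)

def sample_size(mom, m, eps=Fraction(1, 20), delta=Fraction(1, 20)):
    # smallest n for which the 2m-th moment Chebyshev bound certifies
    # P(|S_n(p)/n| > eps) <= mom(n) / (n*eps)^{2m} <= delta
    n = 1
    while mom(n) > delta * (n * eps) ** (2 * m):
        n += 1
    return n

print(sample_size(moment2, 1), sample_size(moment4, 2))  # -> 2000 775
```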
Noting the scaling invariance
$
\text{argmax}_{0\le p \le 1}\mathbb{E}S_n^{2m}(p) =
\text{argmax}_{0\le p\le 1}\mathbb{E}\frac{S_n^{2m}(p)}{n^m},
$
and
${\mathbb E}Z^{2m} = 2^{-m}\frac{(2m)!}{m!}$ for the standard normal random
variable $Z$, in the limit ``$n \to\infty, \epsilon \to 0, n\epsilon^2\to \tilde{n}$''
one has
\begin{equation*}
B_m :=
\mathbb{E}\frac{S^{2m}_n(\frac12)}{n^{2m}\epsilon^{2m}}
=
\mathbb{E}
\frac{\left(\frac{S_n(\frac12)}{\sqrt{n/4}}\right)^{2m}}{n^{2m}\epsilon^{2m}}
\left(\frac{n}{4}\right)^m
\to
2^{-2m}\tilde{n}^{-m}{\mathbb E}Z^{2m} = 2^{-3m}\frac{(2m)!}{m!}\tilde{n}^{-m}.
\end{equation*}
In particular, one may ask for the best choice of $m$ for large $n$, i.e.,
in the above limit as $n\to\infty,\epsilon\downarrow 0, n\epsilon^2\to \tilde{n}$.
The quantity $\tilde{n} = n\epsilon^2$ denotes
an {\it effective sample size} in the sense of the risk assessment defined
by $P(|S_n(p)| > n\epsilon) < \epsilon$; see (Duchi et al. 2013) for
an introduction to this artful terminology in a much broader context.
Observe that in the limit of large $n$
\begin{equation*}
\lim_{n\to\infty, \epsilon\downarrow 0, n\epsilon^2 = \tilde{n}}
\frac{B_{m+1}}{B_m} = \frac{2m+1}{4\tilde{n}}
\begin{cases}
\le 1\\
= 1\\
\ge 1
\end{cases}
\end{equation*}
if and only if
\begin{equation*}
m
\begin{cases}
\le 2\tilde{n} -\frac12\\
= 2\tilde{n} -\frac12\\
\ge 2\tilde{n} -\frac12.
\end{cases}
\end{equation*}
\
\
The take-away is perhaps best summarized in terms of the following
informally interpreted optimal estimation principle.
\
\
\noindent {\bf Approximate Rule of Thumb:}
{\it For large $n$ the optimal moment order $2m$
for the Chebyshev bound is quadruple the effective sample size.
In particular, the fourth moment is optimal for a one unit
effective sample size!}
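The rule of thumb can be checked directly against the limiting bound $B_m = 2^{-3m}\frac{(2m)!}{m!}\tilde{n}^{-m}$ (our own sketch):

```python
from fractions import Fraction
from math import factorial

def B(m, ntilde):
    # limiting Chebyshev bound B_m = 2^{-3m} (2m)!/m! * ntilde^{-m}
    return (Fraction(factorial(2 * m), factorial(m))
            * Fraction(1, 8) ** m / Fraction(ntilde) ** m)

# the ratio B_{m+1}/B_m equals (2m+1)/(4*ntilde), here checked for ntilde = 3
for m in range(1, 10):
    assert B(m + 1, 3) / B(m, 3) == Fraction(2 * m + 1, 12)

# for effective sample size ntilde = 1 the minimum is at m = 2,
# i.e. the optimal moment order is 2m = 4 (the fourth moment)
best = min(range(1, 20), key=lambda m: B(m, 1))
print(best, 2 * best)  # -> 2 4
```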
\section{References}
\noindent Bhattacharya, R.N., and Waymire, E.C. (2016),
``A Basic Course in Probability Theory'', 2nd ed.,
Universitext,
Springer, NY.
\
\noindent Duchi, J. and Wainwright, M. J. and Jordan, M. I. (2013),
``Local privacy and minimax bounds: Sharp rates for probability estimation'',
in Advances in Neural Information Processing Systems,
1529-1537.
\
\noindent Skinner, D. (2017),
``Concentration of Measure Inequalities'',
Master of Science thesis, Oregon State University.
\
\end{document}
| {
"timestamp": "2017-11-21T02:17:22",
"yymm": "1711",
"arxiv_id": "1711.07288",
"language": "en",
"url": "https://arxiv.org/abs/1711.07288",
"abstract": "This note concerns a somewhat innocent question motivated by an observation concerning the use of Chebyshev bounds on sample estimates of $p$ in the binomial distribution with parameters $n,p$. Namely, what moment order produces the best Chebyshev estimate of $p$? If $S_n(p)$ has a binomial distribution with parameters $n,p$, there it is readily observed that ${\\rm argmax}_{0\\le p\\le 1}{\\mathbb E}S_n^2(p) = {\\rm argmax}_{0\\le p\\le 1}np(1-p) = \\frac12,$ and ${\\mathbb E}S_n^2(\\frac12) = \\frac{n}{4}$. Rabi Bhattacharya observed that while the second moment Chebyshev sample size for a $95\\%$ confidence estimate within $\\pm 5$ percentage points is $n = 2000$, the fourth moment yields the substantially reduced polling requirement of $n = 775$. Why stop at fourth moment? Is the argmax achieved at $p = \\frac12$ for higher order moments and, if so, does it help, and compute $\\mathbb{E}S_n^{2m}(\\frac12)$? As captured by the title of this note, answers to these questions lead to a simple rule of thumb for best choice of moments in terms of an effective sample size for Chebyshev concentration inequalities.",
"subjects": "Probability (math.PR); Statistics Theory (math.ST)",
"title": "When Fourth Moments Are Enough",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575183283514,
"lm_q2_score": 0.8152324871074608,
"lm_q1q2_score": 0.8010128093929564
} |
https://arxiv.org/abs/2205.10968 | Eigenvalue bounds of the Kirchhoff Laplacian | We prove that each eigenvalue l(k) of the Kirchhoff Laplacian K of a graph or quiver is bounded above by d(k)+d(k-1) for all k in {1,...,n}. Here l(1),...,l(n) is a non-decreasing list of the eigenvalues of K and d(1),..,d(n) is a non-decreasing list of vertex degrees with the additional assumption d(0)=0. We also prove that in general the weak Brouwer-Haemers lower bound d(k) + (n-k) holds for all eigenvalues l(k) of the Kirchhoff matrix of a quiver. | \section{The theorem}
\paragraph{}
Let $G=(V,E)$ be a {\bf finite simple graph} with $n$ vertices.
Denote by $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ the ordered
list of eigenvalues of the {\bf Kirchhoff matrix} $K=B-A$, where $B$ is the
diagonal vertex degree matrix with ordered vertex degrees
$d_1 \leq d_2 \leq \cdots \leq d_n$ and where $A$ is the adjacency matrix of $G$.
\paragraph{}
We assume $d_0=0$ so that $d_1+d_0=d_1$ and prove
\begin{thm} $\lambda_k \leq d_k+d_{k-1}$, for all $1 \leq k \leq n$. \label{1} \end{thm}
The case $k=n$ is the {\bf spectral radius} estimate
$\lambda_n \leq d_n+d_{n-1}$ of Anderson and Morley \cite{AndersonMorley1985}
which we will use in the proof. The case $k=1$ is obvious because $\lambda_1=0$
and $k=2$ is a special case of the Schur-Horn
inequality \cite{Brouwer} as $\lambda_2 = \lambda_1+\lambda_2 \leq d_1+d_2$.
Already the case $\lambda_3 \leq d_3+d_2$ appears to be new as it goes beyond the
Schur-Horn inequality. Note that any estimate on the spectral radius $\lambda_n$
provides upper bounds on $\lambda_k$ and there are many improvements since
the ground breaking Anderson-Morley paper
\cite{GroneMerrisSunder1,GroneMerrisSunder2,Zhang2004,Das2004,
Guo2005,Shi2007,ShiuChan2009, LiShiuChan2010,ZhouXu}.
They would give better upper bounds also for $\lambda_k$
but look less elegant.
\paragraph{}
Already the corollary $\lambda_k \leq 2d_k$ is stronger than
what the {\bf Gershgorin circle theorem}
\cite{Gershgorin,GershgorinAndHisCircles}
gives in this case: the circle theorem provides in every interval $[0,2d_k]$ at
least one eigenvalue $\lambda_l$ of $K$. It does not need to be the k'th one.
In the Kirchhoff case, where the Gershgorin circles are nested,
$0$ is always in the spectrum. Theorem~(\ref{1}) gives more information.
The spectral data $\lambda_1=0,\lambda_2=10,\lambda_3=10$ for example are
Gershgorin compatible to $d_1=1, d_2=3, d_3=7$ because there is
an eigenvalue in each closed ball $[0,2], [0,6]$ and $[0,14]$.
But these data contradict Theorem~(\ref{1}) as
$\lambda_2 = 10 > d_1+d_2 = 4$. Theorem~\ref{1} keeps the eigenvalues more
aligned with the degree sequence, much as the Schur-Horn theorem does.
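The theorem lends itself to a quick numerical spot check (our own NumPy sketch, not code from the paper): build the Kirchhoff matrix of a random graph, sort its eigenvalues and its degrees, and compare $\lambda_k$ with $d_k+d_{k-1}$, $d_0=0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def kirchhoff(adj):
    # K = B - A with B the diagonal degree matrix
    return np.diag(adj.sum(axis=1)) - adj

def check_theorem(adj):
    lam = np.sort(np.linalg.eigvalsh(kirchhoff(adj)))
    d = np.sort(adj.sum(axis=1))
    dprev = np.concatenate(([0.0], d[:-1]))  # d_{k-1}, with d_0 = 0
    return bool(np.all(lam <= d + dprev + 1e-9))

for _ in range(50):
    n = int(rng.integers(3, 12))
    upper = np.triu(rng.random((n, n)) < 0.4, 1).astype(float)
    adj = upper + upper.T                    # random simple graph
    assert check_theorem(adj)
print("Theorem 1 holds on 50 random graphs")
```

For the star graph the bound is attained: $\lambda_n = n = d_n + d_{n-1} = (n-1)+1$.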
\paragraph{}
An application is that the pseudo determinant ${\rm Det}(K)= \prod_{k=2}^n \lambda_k$,
which by the Kirchhoff-Cayley {\bf matrix tree theorem} counts
the number of rooted spanning trees, has the upper bound $2^n \prod_k d_k$,
and that ${\rm det}(1+K)$, which by the Chebotarev-Shamis matrix forest theorem
counts the number of rooted spanning forests, has the upper bound $2^n \prod_k (1+d_k)$.
These determinant inequalities do not follow from the Gershgorin nor from the
Schur-Horn inequalities. We like to rephrase the determinant
bounds as bound on the spectral potential
$U(z)=(1/n) \log \det(K-z) \leq 2 + (1/n) \log\det(M-z)$,
where $M={\rm Diag}(d_1,\dots,d_n)$ is the diagonal matrix and $z \leq 0$ is real.
We made use of Theorem~(\ref{1}) to show that $U(z)$ has a Barycentric limit. It bounds the
potential $U$ of the interacting system by the potential $(1/n) \log\det(M-z)$ of a non-interacting
system: $M$ can be thought of as the Kirchhoff matrix of a system with
$d_k$ self-loops at vertex $k$, even though the tree and forest interpretations then no longer make sense.
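The determinant identities and bounds can be illustrated numerically for the $4$-cycle $C_4$, which has $16$ rooted spanning trees and $45$ rooted spanning forests (our own NumPy sketch):

```python
import numpy as np

# adjacency matrix of the 4-cycle C_4
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
K = np.diag(adj.sum(1)) - adj
lam = np.sort(np.linalg.eigvalsh(K))           # 0, 2, 2, 4
pseudo_det = np.prod(lam[1:])                  # Det(K): 16 = 4 trees x 4 roots
forests = round(np.linalg.det(np.eye(4) + K))  # 45 rooted spanning forests
d = np.sort(adj.sum(1))
assert abs(pseudo_det - 16) < 1e-9
assert forests == 45
assert pseudo_det <= 2**4 * np.prod(d)         # 16 <= 256
assert forests <= 2**4 * np.prod(1 + d)        # 45 <= 1296
print(pseudo_det, forests)
```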
\section{The proof}
\paragraph{}
If we make the statement slightly stronger, the induction assumption also becomes
more powerful. Let $\mathcal{K}$ be the class of symmetric matrices which are obtained
as principal submatrices of Kirchhoff matrices of finite simple graphs,
by deleting rows and the corresponding columns.
In other words, we close the class of Kirchhoff matrices under the operation of taking
principal submatrices. The stronger statement is that for all $A \in \mathcal{K}$,
the eigenvalues $\lambda_k$ and diagonal entries $d_k$, when ordered in an ascending order,
satisfy $\lambda_k \leq d_k+d_{k-1}$.
\paragraph{}
The class $\mathcal{K}$ is invariant under
the operation $\phi$ which removes corresponding rows and columns. This allows induction.
The induction foundation $n=1$ is obvious because the result holds for any
$1 \times 1$ matrix with non-negative entries. For $2 \times 2$ matrices
$K=\left[ \begin{array}{cc} d_1 & -t \\ -t & d_2 \end{array} \right]$ we have
with the trace $T=d_1+d_2$ the eigenvalue $\lambda_2= (T+\sqrt{T^2+4t^2-4d_1 d_2})/2$
which is $\leq T=d_1+d_2$ if $t \leq \sqrt{d_1 d_2}$ and especially if $t \leq d_1$.
\paragraph{}
The induction step uses the {\bf Cauchy interlace theorem} or {\bf Separation theorem}
\cite{HornJohnson2012}:
if $\mu_k$ are the eigenvalues of $\phi(K)$, then $\lambda_k \leq \mu_k \leq \lambda_{k+1}$.
(The interlace theorem follows from the {\bf Hermite interlace theorem} for real polynomials
\cite{Hwang2004,Fisk2005}. If $f$ is a monic polynomial of degree $n$ with real roots
$\lambda_1 \leq \cdots \leq \lambda_n$
and $g$ is a monic polynomial of degree $n-1$ with real roots
$\mu_1 \leq \cdots \leq \mu_{n-1}$ then
{\bf $g$ interlaces $f$} if
$\lambda_1 \leq \mu_1 \leq \lambda_2 \leq \cdots \leq \mu_{n-1} \leq \lambda_n$.
Equivalently, $f$ and $g$ interlace if and only if the interpolation $t f+(1-t)g$
has real roots for all $t \in [0,1]$.)
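The interlacing used in the induction step is easy to test numerically (our own NumPy sketch): delete the first row and column of a random Kirchhoff matrix and check $\lambda_k \leq \mu_k \leq \lambda_{k+1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(20):
    n = int(rng.integers(3, 10))
    upper = np.triu(rng.random((n, n)) < 0.5, 1).astype(float)
    adj = upper + upper.T
    K = np.diag(adj.sum(1)) - adj
    lam = np.sort(np.linalg.eigvalsh(K))
    mu = np.sort(np.linalg.eigvalsh(K[1:, 1:]))  # delete first row and column
    # Cauchy interlacing: lambda_k <= mu_k <= lambda_{k+1}
    assert np.all(lam[:-1] <= mu + 1e-9) and np.all(mu <= lam[1:] + 1e-9)
print("interlacing verified")
```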
\paragraph{}
The interlace theorem does not catch the largest eigenvalue $\lambda_n$.
This requires an upper bound
for the {\bf spectral radius}.
This is where Anderson and Morley \cite{AndersonMorley1985} come in.
They realized that $K=F^* F$ is essentially isospectral to $F F^*$.
The latter is the adjacency matrix of the line graph but with the modification
that all diagonal entries are $2$. Since the row or column sum
of $F F^*$ at an edge $(a,b)$ is bounded by $d(a) + d(b)$,
the upper bound holds. But we need to extend this estimate to $\mathcal{K}$.
\begin{lemma}[Upgrade of Anderson and Morley]
For any matrix in $\mathcal{K}$, the spectral radius
satisfies $\lambda_n \leq d_n + d_{n-1}$.
\end{lemma}
\paragraph{}
We can assume that, after a coordinate change,
the diagonal entries of $K \in \mathcal{K}$ are ordered. This simply requires
ordering the vertices compatibly with the order of the $d_j$.
Anderson and Morley \cite{AndersonMorley1985} prove the estimate $\lambda_n \leq d_n+d_{n-1}$
for Kirchhoff matrices and even show that
$\lambda_n \leq {\rm max}_{(a,b) \in E(G)} d(a)+d(b)$.
We now verify that this extends to $\mathcal{K}$.
\paragraph{}
We have already verified the claim for $n=1$ and $n=2$.
Let $K$ be a $n \times n$ matrix in $\mathcal{K}$ and assume we know
the answer already for all $(n-1) \times (n-1)$ matrices in $\mathcal{K}$.
When taking a principal submatrix of $K$ by deleting
the first row and first column, we do not change $d_n$ nor $d_{n-1}$.
The reason is that decoupling the
lowest degree vertex does not change the largest two diagonal elements.
The upper bound $d_n+d_{n-1}$ is therefore not affected.
\paragraph{}
Let $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_{n-1}$ be the eigenvalues of the principal
submatrix where the first row and column are deleted.
The Cauchy interlacing result gives $\lambda_{n-1} \leq \mu_{n-1} \leq \lambda_n$.
The largest eigenvalue $\mu_{n-1}$ of the deformed matrix is smaller than or equal to
the largest eigenvalue $\lambda_n$ and so smaller than or equal to
$d_n + d_{n-1}$. The inequality $\mu_{n-1} \leq d_n + d_{n-1}$ is what we wanted to show.
\paragraph{}
One could also explicitly track how the maximal eigenvalue decreases
if all the non-diagonal entries in the first row and column are multiplied by $t$.
If $v$ is the normalized eigenvector to the largest eigenvalue $\lambda_n$,
then since the matrices are symmetric, $v$ is perpendicular to
$[1,1,\dots, 1,1]$ the eigenvector to $\lambda_1=0$.
This implies $\sum_{k=2}^{n} v_k = -v_1$. Let now $E$ be the symmetric matrix in
which the first row and first column, except $E_{11}$, contain only ones and
every other entry is zero. We need to understand how
$K - t E$ changes the largest eigenvalue $\lambda$.
The {\bf first Hadamard deformation formula} gives $\lambda(t)' = -v^T E v = 2v_1^2 \geq 0$.
The {\bf second Hadamard deformation formula} would even show $\lambda(t)''\geq 0$,
illustrating {\bf eigenvalue repulsion}. In any case, the largest eigenvalue does not
decrease under the deformation. Since at $t=1$, the end of the deformation,
we have $\lambda_n \leq d_n+d_{n-1}$ this is also the case for
$t=0$, where the connections to the weak vertex link have all been capped.
\begin{comment}
(* Numerical check: multiply the couplings of a random vertex a by t and
   track the largest Kirchhoff eigenvalue as a function of t in [0,1]. *)
s = RandomGraph[{20, 50}];
a = RandomChoice[VertexList[s]];
S = AdjacencyList[s, a]; (* neighbors of a, excluding a itself *)
A = Normal[KirchhoffMatrix[s]];
(* Q[t] sets the off-diagonal entries in row/column a to -t *)
Q[t_] := Module[{U = A}, Do[U[[a, S[[k]]]] = -t; U[[S[[k]], a]] = -t, {k, Length[S]}]; U]
F[t_] := Max[N[Eigenvalues[Q[t]]]]
Plot[F[t], {t, 0, 1}]
\end{comment}
\section{Remarks}
\paragraph{}
Like the Schur-Horn inequality
$\sum_{j=1}^k \lambda_j \leq \sum_{j=1}^k d_j$,
which is true for any symmetric matrix with diagonal entries $d_j$,
Theorem~(\ref{1}) controls how close the ordered eigenvalue sequence
is to the ordered vertex degree sequence. But it is different.
For example, if the eigenvalues increase exponentially like $\lambda_k = 6^k$, then
the inequality $\lambda_k \leq 2 d_k$ implies
on a logarithmic scale $\log(\lambda_k) \leq \log(2) + \log(d_k)$. Schur-Horn does
not provide that. As for the {\bf Gershgorin circle theorem}, it
would only establish $\log(\lambda) \leq \log(2) + \log(d_n)$,
where $d_n$ is the largest entry because the theorem assures only that in each
Gershgorin circle, there is at least one eigenvalue.
\paragraph{}
Anderson and Morley \cite{AndersonMorley1985} have the better bound
$\max_{(x,y) \in E} d(x) + d(y)$ and Theorem~(\ref{1}) could be improved in that
$d_k+d_{k-1}$ can be replaced by ${\rm max}_{(x,y) \in E} d_k+d_j$.
Any general better upper bound of the spectral radius leads to better results.
The Anderson-Morley estimate is an early use of a {\bf McKean-Singer super symmetry}
\cite{McKeanSinger} (see \cite{knillmckeansinger} for graphs)
in a simple case where one has only $0$-forms and $1$-forms. There, it reduces to
the statement that $K=F^* F$ is essentially isospectral to $F F^*$, which is true for
all matrices. It uses that the Laplacian $K$ is of the form $F^* F = {\rm div}\, {\rm grad}$,
where $F={\rm grad}$ is the {\rm incidence matrix} $F f( (a,b) ) = f(b)-f(a)$, mapping functions
$f$ on vertices ($0$-forms) to functions on oriented edges ($1$-forms).
\paragraph{}
There is much work on the spectral radius of the Kirchhoff Laplacian. It is bounded above
by the spectral radius of the {\bf signless Kirchhoff Laplacian} $|K|$, in which one takes the
absolute value of each entry. This non-negative matrix has, in the connected case,
a power $|K|^n$ with all positive entries, so that by the Perron-Frobenius theorem the
maximal eigenvalue is unique. (The Kirchhoff matrix itself of course can have multiple maximal
eigenvalues, as for the complete graph.) Also, unlike $K$, which is never
invertible, $|K|$ is invertible if $G$ is not bipartite.
If we treat a graph as a one-dimensional simplicial complex
(ignoring $2$- and higher-dimensional simplices in the graph), and denote by $d$
the exterior derivative of this skeleton complex, then
$(d+d^*)^2 = K_0 + K_1$, where $K=K_0=d^* d$ is the Kirchhoff matrix and $K_1=d d^*$ is the
one-form matrix with the same spectral radius. This leads to \cite{AndersonMorley1985}.
Much work has gone into improving this
\cite{BrualdiHoffmann,Stanley1987,LiZhang1997,Zhang2004,FengLiZhang,Shi2007,LiShiuChan2010}.
We have also used an identity coming from connection matrices \cite{Hydrogen}.
\paragraph{}
We stumbled on the theorem when looking for bounds on the {\bf tail distribution}
$\mu([x,\infty))$ of the {\bf limiting density of states}
$\mu$ of the Barycentric limit $\lim_{n \to \infty} G_n$ of a
finite simple graph $G=G_0$, where $G_n$ is the Barycentric
refinement of $G_{n-1}$ in which the complete subgraphs
of $G_{n-1}$ are the vertices and two are connected if one is contained
in the other \cite{KnillBarycentric,KnillBarycentric2}.
We wanted a {\bf potential} $U(z) = \int_0^{\infty} \log(z-w) \; d\mu(w)$ because
$U(-1)$ measures the exponential growth of rooted spanning trees while $U(0)$ measures
the exponential growth of the rooted spanning forests.
The connection is
that in general, the {\bf pseudo determinant} \cite{cauchybinet}
${\rm Det}(K)=\prod_{\lambda \neq 0} \lambda$
is the number of {\bf rooted spanning trees} in $G$ by
the {\bf Kirchhoff matrix tree theorem} and
${\rm det}(1+K)$ is the number of rooted spanning forests in $G$
by the {\bf Chebotarev-Schamis matrix forest theorem}
\cite{ChebotarevShamis1,ChebotarevShamis2,Knillforest}. All these relations follow directly
from the {\bf generalized Cauchy-Binet theorem} that states
that for any $n \times m$ matrices $F,G$, one has the pseudo determinant version
${\rm Det}(F^T G) = \sum_{|P|=k} \det(F_P) \det(G_P)$ with $k$ depends on $F,G$ and
$\det(1+x F^T G) = \sum_P x^{|P|} \det(F_P) \det(G_P)$.
{\bf Pythagorean identities} like ${\rm Det}(F^T F) = \sum_{|P|=k} \det^2(F_P)$
and $\det(1+F^T F) = \sum_P \det^2(F_P)$ follow for an arbitrary $n \times m$ matrix $F$.
Applied to the {\bf incidence matrix} $F$ of a connected graph, where $k(A)=n-1$ is the rank of $K=F^T F$,
the first identity counts on the right spanning trees and the second identity counts on the right
the number of spanning forests
\paragraph{}
Having noticed that the
{\bf tree-forest ratio}
$\tau(G)={\rm Det}(1+K)/{\rm Det}(K)=\prod_{\lambda \neq 0} (1+1/\lambda)$
has a Barycentric limit $\lim_{n \to \infty} (1/|G_n|) \log(\tau(G_n))$,
we interpreted this as $U(-1)-U(0)$ requiring the normalized potential
to exist. By the way, for complete graphs $G=K_n$ the tree-forest
ratio is $(1+1/n)^{n-1}$ and converges to
the {\bf Euler number} $e$. For triangle-free
graphs, $\log(\tau(G_n))/|V(G_n)|$ converges to $\log(\phi^2)$, where
$\phi$ is the {\bf golden ratio}.
For example, for $G=C_n$ we have ${\rm Det}(K)=n^2$, and the number of rooted spanning forests is
the alternate {\bf Lucas number} recursively given by
$L(n+1) = 3 L(n)-L(n-1)+2, L(0)=0,L(1)=1$.
We proved in general that $\log(\tau(G_n))/|V(G_n)|$ converges under
{\bf Barycentric refinements} $G_0 \to G_1 \to G_2 \dots$
for arbitrary graphs to a universal constant that only
depends on the maximal dimension of $G=G_0$.
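The identities for the cycle can be verified numerically (our own NumPy sketch): ${\rm Det}(K)=n^2$ and ${\rm det}(1+K)$ follows the stated recursion:

```python
import numpy as np

def cycle_kirchhoff(n):
    # Kirchhoff matrix of the cycle C_n (n >= 3)
    K = 2.0 * np.eye(n)
    for i in range(n):
        j = (i + 1) % n
        K[i, j] -= 1.0
        K[j, i] -= 1.0
    return K

L = [0, 1]                                   # L(0) = 0, L(1) = 1
for k in range(1, 9):
    L.append(3 * L[k] - L[k - 1] + 2)        # L(n+1) = 3 L(n) - L(n-1) + 2

for n in range(3, 9):
    K = cycle_kirchhoff(n)
    lam = np.sort(np.linalg.eigvalsh(K))
    assert abs(np.prod(lam[1:]) - n**2) < 1e-6          # Det(K) = n^2
    assert round(np.linalg.det(np.eye(n) + K)) == L[n]  # rooted spanning forests
print("cycle identities verified")
```

For $C_3$ this gives ${\rm Det}(K)=9$ and ${\rm det}(1+K)=1\cdot4\cdot4=16=L(3)$.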
\paragraph{}
Theorem~(\ref{1}) needs to be placed in the context of the
spectral graph literature like
\cite{Biggs,VerdiereGraphSpectra,Godsil,Brouwer,Chung97,DvetkovicDoobSachs,Spielman2009}
or articles like \cite{Guo2005,Stephen,GroneMerrisSunder}.
Most research work in this area has focused on small
eigenvalues like $\lambda_2$ or large
eigenvalues like the spectral radius $\lambda_n$. For $\lambda_2$,
there is a {\bf Cheeger estimate} $h^2/(2d_n) \leq \lambda_2 \leq h$
for the eigenvalue $\lambda_2$ and Cheeger constant
$h=h(G)$ \cite{Cheeger1970} first defined for Riemannian manifolds,
meaning in the graph case that one needs remove $h |H|$ edges
to remove a subgraph $H$ from $G$.
For the largest eigenvalue $\lambda_n$ the Anderson-Morley bound
has produced an industry of results.
\paragraph{}
Let us look at some examples. Figure~\ref{Spectra} shows more visually
what happens in some examples of graphs with $n=10$ vertices. \\
a) For the cyclic graph $C_4$, the Kirchhoff
eigenvalues are $\lambda_1=0,\lambda_2=2,\lambda_3=2,\lambda_4=4$
and the vertex degrees are $d_1=d_2=d_3=d_4=2$. \\
b) For the star graph with $n-1$ spikes, the eigenvalues are
$\lambda_1=0,\lambda_2= \cdots = \lambda_{n-1}=1,
\lambda_n=n$ while the degree sequence is
$d_1=\cdots = d_{n-1}=1, d_n=n-1$. \\
c) For a complete bipartite graph $K_{m,n}$ with $m \leq n$,
the eigenvalues are $0$, then $m$ with multiplicity $n-1$,
then $n$ with multiplicity $m-1$, and finally $m+n$. The degree
sequence consists of $n$ entries equal to $m$ followed by
$m$ entries equal to $n$.
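The spectra in examples a)-c) can be confirmed directly (our own NumPy sketch):

```python
import numpy as np

def spectrum(adj):
    adj = np.asarray(adj, dtype=float)
    return np.sort(np.linalg.eigvalsh(np.diag(adj.sum(1)) - adj))

# a) C_4: eigenvalues 0, 2, 2, 4
c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
assert np.allclose(spectrum(c4), [0, 2, 2, 4])

# b) star with n-1 spikes (n = 6): 0, then 1 with multiplicity n-2, then n
n = 6
star = np.zeros((n, n)); star[0, 1:] = star[1:, 0] = 1
assert np.allclose(spectrum(star), [0] + [1] * (n - 2) + [n])

# c) K_{m,n} with m <= n: 0, m (multiplicity n-1), n (multiplicity m-1), m+n
m, n = 2, 3
kmn = np.block([[np.zeros((m, m)), np.ones((m, n))],
                [np.ones((n, m)), np.zeros((n, n))]])
expected = [0] + [m] * (n - 1) + [n] * (m - 1) + [m + n]
assert np.allclose(spectrum(kmn), np.sort(expected))
print("example spectra confirmed")
```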
\paragraph{}
From all the $38$ connected graphs on $4$ labeled vertices,
there are only $3$ for which equality holds in Theorem~(\ref{1}) and $7$ for
which $\lambda_k = 2 d_k$ for some $k$. From the $728$
connected graphs on $5$ labeled vertices, there are none for
Theorem~(\ref{1}) and $5$ for $\lambda_k = 2 d_k$.
From the $26704$ connected graphs on $6$ labeled vertices,
there are $70$ for which Theorem~(\ref{1}) has equality and $76$ for which
$\lambda_k = 2 d_k$. It is always only for the largest eigenvalue that we have
seen the equality $\lambda_n=2d_n$ hold.
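The counts of labeled connected graphs can be reproduced by a short enumeration over all edge subsets (our own sketch; the values $38$ and $728$ for $4$ and $5$ vertices match the sequence of connected labeled graphs):

```python
from itertools import combinations

def count_connected(n):
    # enumerate all 2^(n choose 2) labeled graphs and count the connected ones
    pairs = list(combinations(range(n), 2))
    count = 0
    for mask in range(1 << len(pairs)):
        adj = [set() for _ in range(n)]
        for i, (a, b) in enumerate(pairs):
            if mask >> i & 1:
                adj[a].add(b); adj[b].add(a)
        seen, stack = {0}, [0]          # depth-first search from vertex 0
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        count += len(seen) == n
    return count

print(count_connected(4), count_connected(5))  # -> 38 728
```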
\paragraph{}
The theorem implies $\sum_{j=1}^k \lambda_j \leq 2\sum_{j=1}^k d_j$ which is
weaker than the Schur-Horn inequality $\sum_{j=1}^k \lambda_j \leq \sum_{j=1}^k d_j$.
The latter result is of wide interest.
It can be seen in the context of partial traces
\cite{TaoSchurHorn}, $\sum_{j=1}^k \lambda_j = {\rm inf}_{{\rm dim}(V)=k} {\rm tr}(A|V)$, and is a
special case of the {\bf Atiyah-Guillemin-Sternberg convexity theorem}
\cite{Atiyah1982,GuilleminSternberg1982}. The Schur-Horn inequality has been
sharpened a bit for Kirchhoff matrices to
$\sum_{j=1}^k \lambda_j \leq \sum_{j=1}^k d_j-1$ (\cite{Brouwer},
Proposition 3.10.1).
\paragraph{}
There is a general lower bound $\lambda_k \geq d_k-(n-k)+1$
which had been conjectured by Guo \cite{Guo2007}
and is now a theorem \cite{BrouwerHaemers2008} (see also \cite{Brouwer}, Proposition 3.10.2).
This Brouwer-Haemers estimate from 2008 generalizes $\lambda_n \geq d_n+1$
\cite{GroneMerrisSunder2}, $\lambda_{n-1} \geq d_{n-1}$ \cite{LiPan1999} and
$\lambda_{n-2} \geq d_{n-2}-1$ \cite{Guo2007}.
It follows also from $K=-A+B$ with $B={\rm Diag}(d_1,\cdots,d_n)$ positive semi-definite
that $\lambda_k(K) \geq \lambda_k(-A)$ (\cite{HornJohnson2012}, Corollary 4.3.12).
\paragraph{}
Unlike the Schur-Horn inequality, Theorem~(\ref{1}) does not extend to general
symmetric matrices. Already for $A=\left[ \begin{array}{cc} 1 & 3 \\ 3 & 1 \end{array} \right]$
which has eigenvalues $\lambda_1=-2, \lambda_2=4$ and with $d_1=1,d_2=1$, the inequality
$\lambda_2 \leq 2 d_2$ fails. Nor does it extend to symmetric matrices
with non-negative eigenvalues, as
$A=\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right]$ shows,
as this matrix has eigenvalues $0,0,3$ and diagonal entries $1,1,1$. The case of $n \times n$
matrices with constant entries $1$ is an example with eigenvalues $0$ and $n$ showing that no
estimate $\lambda_n \leq C d_n$ is in general possible for symmetric matrices even when
asking the diagonal entry to dominate the other entries.
\section{Open ends}
\paragraph{}
Theorem~(\ref{1}) would also follow from the statement
\begin{equation} \sum_{j=1}^k d_j
- \sum_{j=1}^k \lambda_j \leq d_{k}
\end{equation}
which would be of independent interest as it
estimates the {\bf Schur-Horn error}. Indeed, the Schur-Horn inequality gives together
with such a hypothetical error bound $0 \leq \sum_{j=1}^k d_j - \sum_{j=1}^k \lambda_j
+ d_{k+1}-\lambda_{k+1} \leq d_k+ d_{k+1} - \lambda_{k+1}$,
which gives $\lambda_{k+1} \leq d_k+d_{k+1}$. Can we prove the above
Schur-Horn error estimate (1)? We do not know yet but our experiments
indicate: \\
{\bf Conjecture A:} [Schur-Horn error] Estimate (1) holds for all finite simple graphs.
\paragraph{}
We have mentioned the Brouwer-Haemers bound $\lambda_k \geq d_k-(n-k)+1$
which is very good for large $k$ but far from optimal for smaller $k$. (Note that
part of the graph theory literature labels the eigenvalues in decreasing order.
We use an ordering more familiar in the manifold case, where one has no largest
eigenvalue and which also appears in the earlier literature like
\cite{HornJohnson2012}.) The guess $d_{k-1}/2 \leq \lambda_k$
assuming $d_{-1}=0$ is a {\bf rule of thumb}, as it fails only in rare cases.
The next thing to try is $d_{k-2}/2 \leq \lambda_k$ and this is still wrong
in general but there are even less counter examples.
We can try $d_{k-2}/3$, for which we have not yet found a counterexample,
but one might just need to look at larger networks to find one.
We still believe that there is an affine lower bound. This is based only on
limited experiments, like for $A=1/3$, where it already looks good. \\
{\bf Conjecture B:} [Affine Brouwer-Haemers bound]
There exist constants $0<A<1$ and $B$
such that for all graphs and $1 \leq k \leq n$ we have
$A d_k - B \leq \lambda_k$.
\paragraph{}
We see that for any $C>1$, the upper bound
$\lambda_k \leq C d_k$ holds for most Erd\"os-R\'enyi graphs,
if the graphs are large. A possible conjecture is: \\
{\bf Conjecture C:} [Linear bound for Erd\"os-R\'enyi]
For all $C>1$ and all $p \in [0,1]$, the probability of the
set of graphs in the Erd\"os-R\'enyi probability space
$E(n,p)$ with $\lambda_k \leq C d_k$ for all $1 \leq k \leq n$
goes to $1$ for $n \to \infty$.
\section{Illustration}
\paragraph{}
Here are a few examples of spectra with known upper and
lower bounds:
\begin{figure}[!htpb]
\scalebox{0.6}{\includegraphics{figures/stargraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/cyclegraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/lineargraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/wheelgraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/completegraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/bipartite.pdf}}
\scalebox{0.6}{\includegraphics{figures/petersen.pdf}}
\scalebox{0.6}{\includegraphics{figures/gridgraph.pdf}}
\scalebox{0.6}{\includegraphics{figures/randomgraph.pdf}}
\caption{
This figure shows examples of spectra and compares them
with known upper and lower bounds.
We see first the {\bf Star, Cycle and Path graph}, then
the {\bf Wheel, Complete, Bipartite graph},
and finally the {\bf Petersen, Grid and Random graph},
all with $10$ vertices. The eigenvalues are outlined thick (in red).
Above we see the upper bound (in blue) as proven in the present
paper. Then there are the Brouwer-Haemers and Horn-Johnson lower
bounds which, as the examples show, do not always compete
but complement each other.
}
\label{Spectra}
\end{figure}
\bibliographystyle{plain}
| {
"timestamp": "2022-05-27T02:04:52",
"yymm": "2205",
"arxiv_id": "2205.10968",
"language": "en",
"url": "https://arxiv.org/abs/2205.10968",
"abstract": "We prove that each eigenvalue l(k) of the Kirchhoff Laplacian K of a graph or quiver is bounded above by d(k)+d(k-1) for all k in {1,...,n}. Here l(1),...,l(n) is a non-decreasing list of the eigenvalues of K and d(1),..,d(n) is a non-decreasing list of vertex degrees with the additional assumption d(0)=0. We also prove that in general the weak Brouwer-Haemers lower bound d(k) + (n-k) holds for all eigenvalues l(k) of the Kirchhoff matrix of a quiver.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Eigenvalue bounds of the Kirchhoff Laplacian",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575132207566,
"lm_q2_score": 0.8152324848629214,
"lm_q1q2_score": 0.8010128030236902
} |
https://arxiv.org/abs/1610.07836 | Classification of crescent configurations | Let $n$ points be in crescent configurations in $\mathbb{R}^d$ if they lie in general position in $\mathbb{R}^d$ and determine $n-1$ distinct distances, such that for every $1 \leq i \leq n-1$ there is a distance that occurs exactly $i$ times. Since Erdős' conjecture in 1989 on the existence of $N$ sufficiently large such that no crescent configurations exist on $N$ or more points, he, Pomerance, and Palásti have given constructions for $n$ up to $8$ but nothing is yet known for $n \geq 9$. Most recently, Burt et. al. had proven that a crescent configuration on $n$ points exists in $\mathbb{R}^{n-2}$ for $n \geq 3$. In this paper, we study the classification of these configurations on $4$ and $5$ points through graph isomorphism and rigidity. Our techniques, which can be generalized to higher dimensions, offer a new viewpoint on the problem through the lens of distance geometry and provide a systematic way to construct crescent configurations. | \section{Introduction}
Erd\H{o}s once wrote, ``my most striking contribution to geometry is, no doubt, my problem on the number of distinct distances,'' \cite{Erd96}. The referred question, which asks for the minimum number of distinct distances determined by $n$ points, was first asked in 1946 \cite{Erd46} and marked the beginning of a chain of variants. See \cite{She} and \cite{SM15} for surveys of these. Although one would expect all distances between $n$ points to be different if they were to be placed in the plane at random, if the points are regularly placed, such as on a lattice, then many distances may repeat. Erd\H{o}s' conjectured lower bound, $\Omega(n/\sqrt{\log{n}})$, attained by a $\sqrt{n}\times \sqrt{n}$ integer lattice, was essentially proven up to a $\sqrt{\log{n}}$ factor by Guth and Katz in 2010 \cite{GK}.
The variant we study in this paper is one where the distances have prescribed multiplicities. One says $n$ points are in crescent configuration in $\mathbb{R}^d$ if they are in general position and determine $n-1$ distinct distances such that for every $1 \leq i \leq n-1$, there is a distance that occurs exactly $i$ times. Erd\H{o}s conjectured that there exists a sufficiently large $N$ such that no crescent configuration exists on $N$ or more points \cite{Erd89}. Though constructions have been provided for $n=5,6,7,8$ by Erd\H{o}s, I. Pal\'asti and C. Pomerance \cite{Pal87, Pal89, Erd89}, little progress has been made towards a construction for $n \geq 9$. One problem often encountered in the search for these configurations is the lack of understanding of their properties and the difficulty in exhibiting the configurations' information combinatorially.
As such, we take a new approach to studying these crescent configurations, borrowing techniques from distance geometry and graph theory. Our main theorems are the results of two algorithms that search for and classify all crescent configurations on any $n \geq 4$ up to graph isomorphism and find geometric realizations for each of these isomorphism classes in the plane.
\begin{theorem} \label{thm:3on4}
Given a set of three distinct distances $\{d_1,d_2,d_3\}$ on four points, there are only three allowable crescent configurations up to graph isomorphism. In Figure \ref{fig:mcr} we provide graph realizations for each type.
\end{theorem}
\begin{figure}[h]
\includegraphics[width=\linewidth]{MCRMain.png}
\caption{Types M, C, and R.}
\label{fig:mcr}
\end{figure}
\begin{theorem}\label{thm:27on5}
Given a set of four distinct distances $\{d_1,d_2,d_3,d_4\}$ on five points,
there are 27 allowable crescent configurations up to graph isomorphism. In Figure \ref{fig:allFIVE} we provide graph realizations for each type.
\end{theorem}
\begin{figure}[h]
\includegraphics[width=\textwidth]{configson52.png}
\caption{Representatives for all possible crescent configurations on five points.}
\label{fig:allFIVE}
\end{figure}
The advantage of this algorithmic method is that it can be applied in higher dimensions, though we hope that its $\mathcal{O}(n^{n})$ running time can be vastly improved.
In Appendix \ref{app: 5points} we include distance sets and realizable distances for each crescent configuration on four and five points.
\par
In Section 2, we introduce our distance geometry approach and prove a classification of crescent configurations for a general $n$. We follow this in Section 3 with an outline of the first half of the algorithm used to achieve Theorems \ref{thm:3on4} and \ref{thm:27on5}.
We then move to Section 4 where we discuss how distance geometry methods may be applied to determine whether a distance set is realizable. In Section 5, we outline the second half of the algorithm, completing the proofs for Theorems \ref{thm:3on4} and \ref{thm:27on5}. After the theorems have been established, we discuss another way to verify the uniqueness of our isomorphism classes and also a different method to characterize these configurations through analysis of rigidity in Section 6. Lastly, we discuss potential future work based on our approach using graph theory, rigidity, and distance geometry.
\begin{rem} The authors are happy to provide copies of any code referenced in the course of this paper. Please email \href{mailto:Steven.Miller.MC.96@aya.yale.edu}{Steven.Miller.MC.96@aya.yale.edu}.
\end{rem}
\section{Classification of Crescent Configurations}\label{sec:classification}
In this section we provide the key definitions and theorems that we use to classify crescent configurations.
\begin{defi}[General Position \cite{SM15}]\label{def:genpos}
We say that $n$ points are in general position in $\mathbb{R}^d$ if no $d+1$ points lie on the same hyperplane and no $d+2$ lie on the same hypersphere.
\end{defi}
\begin{defi}[Crescent Configuration \cite{SM15}] We say $n$ points are in crescent configuration (in $\mathbb{R}^{d}$) if they lie in general position in $\mathbb{R}^{d}$ and determine $n - 1$ distinct distances, such that for every $1 \leq i \leq n - 1$ there is a distance that occurs exactly $i$ times.
\end{defi}
The notion of general position is very important in the construction of crescent configurations. Without it, the problem of placing $n$ points in $\mathbb{R}^d$ to determine $n-1$ distinct distances satisfying the prescribed multiplicities becomes trivial: simply placing $n$ points on a line in arithmetic progression solves the problem in any dimension.
\vspace{0.2cm}
\begin{defi}[Distance Coordinate] The \emph{distance coordinate} $D_{A}$ of a point $A$ is the multiset of all distances, counted with multiplicity, between $A$ and the other points in a set $\mathcal{P}$; as a multiset, it is unordered.
\end{defi}
\begin{defi}[Distance Set] The \emph{distance set} $\mathcal{D}$ corresponding to a set of points $\mathcal{P}$ is the multiset of the distance coordinates of the points in $\mathcal{P}$.
\end{defi}
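These two definitions are easy to compute directly. The following minimal Python sketch (illustrative names, not the authors' code) computes distance coordinates and the distance set of a planar point set, using squared distances so that the arithmetic stays exact:

```python
def distance_coordinate(points, a):
    """Multiset (as a sorted tuple) of squared distances from points[a] to every other point."""
    ax, ay = points[a]
    return tuple(sorted((ax - bx) ** 2 + (ay - by) ** 2
                        for b, (bx, by) in enumerate(points) if b != a))

def distance_set(points):
    """Multiset of distance coordinates, one per point; sorting makes it order-insensitive."""
    return tuple(sorted(distance_coordinate(points, a) for a in range(len(points))))

# Vertices of a unit square: every vertex sees squared distances {1, 1, 2},
# so all four distance coordinates coincide.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Working with squared distances avoids irrational square roots, so multisets compare equal exactly rather than up to floating-point error.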
\begin{defi} [Isomorphism for Labeled Graphs \cite{Gervasi}]\label{def:isomorphism} A labeled graph $A$ is isomorphic to a labeled graph $B$ if and only if there exists a bijection $f: V(A) \to V(B)$, where $V(A)$ and $V(B)$ are the vertex sets, such that:
\begin{enumerate}
\item $\forall a_{i} \in V(A), \ l_{A}(a_{i}) = l_{B}(f(a_{i}))$,
\item $\forall a_{i},a_{j} \in V(A), \hspace{2mm} \{a_{i},a_{j}\}\in E_{A}\leftrightarrow \{f(a_{i}),f(a_{j})\}\in E_{B}$, and
\item $\forall \{a_{i},a_{j}\}\in E_{A}, w_{A}(\{a_{i},a_{j}\})=w_{B}(f(\{a_{i},a_{j}\}))$,
\end{enumerate}
where $l_{A}, l_{B}$ and $w_{A}, w_{B}$ are the functions that assign labels to the vertices and edges of $A$ and $B$, respectively.
\end{defi}
\vspace{0.2cm}
We note that a crescent configuration on $n$ points can be viewed as a weighted complete graph whose $n-1$ distinct weights are assigned to the edges in such a way that the configuration can be realized in $\mathbb{R}^d$. The weighted adjacency matrix is thus a natural way to store information about the configuration. Should we rearrange the weights incident to each vertex, we would generally have to draw the configuration differently. This insight leads to the following theorem.
\begin{theorem}\label{thm: graphisom}
Let $A$ and $B$ be two crescent configurations on the same number of points $n$. If $A$ and $B$ have the same distance sets, then there exists a graph isomorphism from $A$ to $B$.
\end{theorem}
\begin{proof}
Consider two crescent configurations, $A$ and $B$, each on $n$ points, $\{a_{1},\ldots,a_{n}\}$ and $\{b_{1},\ldots,b_{n}\}$, such that $A$ and $B$ have the same distance sets (up to the order of the entries of the coordinates). We note that we do not yet care about the specific distances and, instead, designate each distance by the pair of points that defines it, as will be shown below.
\par
First we show that the ordering of the elements of the distance coordinates does not matter when comparing the distance sets. Then we prove that $A$ and $B$ are isomorphic when we view them as labeled graphs.
\par
Consider $A$. First, we number each point of $A$ from $1$ to $n$. For each point $a_{i}$, $1\leq i \leq n$, in the configuration, we re-order its distance coordinate so that the distance between $a_{i}$ and $a_{1}$, $d_{a_{i,1}}$, is in the first slot, the distance between $a_{i}$ and $a_{2}$, $d_{a_{i,2}}$, is in the second slot, and so on, inserting $0$ into the $i$th slot to represent the distance between $a_{i}$ and itself. We call these \emph{augmented distance coordinates}. Each augmented distance coordinate uniquely determines the point that it represents. Note that the arrangement of these augmented coordinates depends on how we choose to index the points of $A$. The nonzero entries, however, only encode information about the set of distances between a point and all other points in a given configuration; they do not account for relative position. That is to say, until we assign a number to each of the points of $A$ (i.e., $a_1$, $a_2$, and so on), the order of the entries of the distance coordinates does not matter: the distance coordinate $\{d_1,d_2,d_3\}$ is the same as the coordinate $\{d_3,d_1,d_2\}$.
\par
Next we do the same thing for $B$. Since the distance sets of $A$ and $B$ are the same, we know that for every augmented coordinate in $A$, there is an augmented coordinate in $B$ with the same set of non-zero entries. Thus, let $a_{i}$ and $b_{j}$ have the same non-zero entries in their augmented coordinates. We define each point by its augmented coordinate, $$D_{a_{i}}=(d_{a_{i,1}},...,d_{a_{i,i-1}},0,d_{a_{i,i+1}},...,d_{a_{i,n}})$$ $$D_{b_{j}}=(d_{b_{j,1}},...,d_{b_{j,j-1}},0,d_{b_{j,j+1}},...,d_{b_{j,n}}).$$ Since the set of non-zero entries in $D_{a_{i}}$ and $D_{b_{j}}$ are the same, we know that there exists a permutation, $f_{ij}$ such that $f_{ij}(D_{a_{i}})= D_{b_{j}}$. Since we know that every distance coordinate in $A$ has a match in $B$, there exists a permutation function of this sort that maps $D_{a_{i}}$ to some $D_{b_{j}}$ for all $1\leq i \leq n$.
\par
Furthermore, we actually only need one permutation for all of the reordered coordinates of $A$. This can be quickly seen by recognizing that the ordering of a single augmented distance coordinate of $A$ determines the ordering of all augmented distance coordinates if we wish to retain all information. To see that this is indeed the case, we point out that the reordered distance coordinates of $A$ are, in fact, the rows and columns of a weighted adjacency matrix. The adjacency matrix below models a configuration on $3$ points; the entry in row $1$, column $2$ represents the distance between points $a_1$ and $a_2$.
$$\begin{bmatrix}
0&d_{1,2}&d_{1,3}\\
d_{1,2}&0&d_{2,3}\\
d_{1,3}&d_{2,3}&0
\end{bmatrix}$$
From this, a proof by induction will show that if we permute the entries of row $1$, the other rows (and columns) must undergo the same permutation in order to preserve the symmetry of the matrix.
\par
We now identify each point of $A$ and $B$ by these augmented and rematched distance coordinates to create two labelled and weighted graphs where the vertices are the points of $A$ and $B$, and each edge $\{a_i,a_j\}$ is weighted by the distance between the two points, $d_{a_{i,j}}$. In these graphs, the vertex labels are the augmented distance coordinates of each point, where the coordinates of $A$ are further transformed by $f_{ij}$, and the edge labels are the distinct distances, $d_1,d_2,....,d_{n-1}$ that define our crescent configurations. Given this information, it is very straightforward to prove that these graphs are isomorphic by showing that they satisfy the three conditions in Definition \ref{def:isomorphism}.
\par
Let $F: V(A) \to V(B)$ be such that $F(a_{i})=b_j$, where $b_j$ is chosen so that the distance coordinates of $a_i$ and $b_j$ have the same nonzero values. Since we augmented and transformed our distance coordinates by the permutation $f_{ij}$, we know that $a_{i}$ and $b_{j}$ have the same vertex labels.
Thus, we know $F$ is one-to-one because the distance sets of $A$ and $B$ are the same. Furthermore, we know that $F$ is onto because $A$ and $B$ both have $n$ points: $\forall b_{j}\in V(B)$, $\exists \ a_{i} \in V(A)$ such that $F(a_{i})=b_{j}$. Therefore, $F$ is a bijection.
\par
\textit{Condition 1:} Let $l_A$ and $l_B$ be defined as in Definition \ref{def:isomorphism}. This means that $l_{A}(a_i)=f_{ij}(D_{a_{i}})=D_{b_{j}}$ and $l_{B}(b_{j})=D_{b_{j}}$. Thus, given $F(a_{i})=b_{j}$, we have that $\forall a_{i} \in V(A), \ l_{A}(a_i)=l_{B}(F(a_{i}))$.
\par
\textit{Condition 2:} This condition holds because crescent configurations are complete graphs, and $F$ is a bijection. Therefore $\{a_{i},a_{j}\}\in E_{A} \leftrightarrow \{F(a_{i}), F(a_{j})\} \in E_{B}$, where $E_A$ and $E_B$ are the edge sets of $A$ and $B$ respectively.
\par
\textit{Condition 3:} To prove this condition, we recall that applying $f_{ij}$ to $a_i$ is equivalent to re-indexing the points of $A$. Therefore, if we let $w_{A}$ and $w_{B}$ be functions that return the edge labels of $A$ and $B$, respectively, as defined as in Definition \ref{def:isomorphism}, then we know that $$w_{A}(\{a_i,a_j\})=d_{a_{i,j}}=d_{b_{l,k}}=w_{B}(\{b_l,b_k\})=w_{B}(\{F(a_{i}),F(a_{j})\}),$$
satisfying this condition.
Thus $A$ and $B$ are isomorphic.
\end{proof}
\begin{remark} Since the algorithm distinguishes configurations only by their distance labels, the resulting distance sets may define crescent configurations that are not geometrically realizable. We address this concern later in the paper (see Section \ref{sec:geomREAL} on geometric realizability).
\end{remark}
\section{Method for Counting Isomorphism Classes}\label{sec: counting}
As a direct result of Theorem \ref{thm: graphisom}, we now have a method for classifying all crescent configurations on $n$ points into isomorphism classes.
In this section we provide a sketch of the algorithm used to find these isomorphism classes; pseudocode is provided in Appendix \ref{app: algorithms}.
\par
Consider the multiset of distances $\{d_1,d_2,d_2,d_3,d_3,d_3,\ldots,d_{n-1}\}$, in which $d_i$ occurs $i$ times, associated to a crescent configuration on $n$ points. This multiset may be threaded through an $n \times n$ adjacency matrix like the one shown in the proof of Theorem \ref{thm: graphisom}.
\begin{comment}
Consider a set of distances associated to a crescent configuration on $n$ points: $\{d_1,d_2,d_2,...,d_{n-1}\}$. This set of distances may be threaded through the rows and columns of an adjacency matrix to define a complete graph on $n$ vertices as shown in \ref{eq:threadEx}
\begin{equation}\label{eq:threadEx}
\{d_1,d_2,d_2,...,d_{n-1}\} \ \to \ \begin{pmatrix}
0&d_1&d_2&d_2...&{}&...\\
d_1&0&...&{}&.&...\\
d_2&...&0&...&{}&...\\
d_2&{}&...&0&{}&...\\
.&.&{}&{}&.&...\\
.&{}&...&d_{n-1}&d_{n-1}&0\\
\end{pmatrix}
\end{equation}
\end{comment}
From this point on, we refer to these matrices as \textit{distance matrices}; they are equivalent to weighted adjacency matrices in graph theory.
\begin{defi}[Distance Matrix]
Let $A$ be an $n \times n$ matrix. We say $A$ is a \textbf{distance matrix} if the following conditions are satisfied:
\begin{enumerate}
\item $A$ is symmetric,
\item the entries along the main diagonal of $A$ are all 0, and
\item $a_{ij}$ (the entry in the $i$th row and $j$th column of $A$) is the distance between points $i$ and $j$ of the point configuration (or graph) defined by $A$.
\end{enumerate}
\end{defi}
As each configuration has a distance matrix associated to it, we can generate all possible configurations by threading all permutations of the multiset $\{d_1,d_2,d_2,d_3,d_3,d_3,\ldots,d_{n-1}\}$ through distance matrices. The number of distinct threadings is the multinomial coefficient $\binom{n}{2}!\,/\prod_{i=1}^{n-1} i!$, so this method generates $60$ configurations on four points, $12{,}600$ on five points, and $37{,}837{,}800$ on six points. We would now like to apply Theorem \ref{thm: graphisom} and group these configurations into isomorphism classes (via graph isomorphism).
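The threading counts can be checked mechanically. In the sketch below (illustrative names, not the paper's code), the $\binom{n}{2}$ upper-triangle slots are filled with the multiset in which the $i$th distance occurs $i$ times, and the count of distinct fillings is the multinomial coefficient:

```python
from itertools import permutations
from math import comb, factorial

def crescent_multiset(n):
    """Distance i repeated i times, for i = 1, ..., n-1; exactly C(n,2) entries."""
    return [i for i in range(1, n) for _ in range(i)]

def count_threadings(n):
    """Multinomial coefficient: distinct ways to thread the multiset through the slots."""
    count = factorial(comb(n, 2))
    for i in range(1, n):
        count //= factorial(i)
    return count

# Brute-force check for n = 4: distinct permutations of [1, 2, 2, 3, 3, 3].
brute_force = len(set(permutations(crescent_multiset(4))))
```

The brute-force count for $n=4$ and the closed formula agree, and the formula reproduces the counts quoted above for five and six points.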
\begin{comment}
To do so requires the following lemma.
\begin{lemma}\label{lem: matrix2isom}
Let $A$ and $B$ be two $n \times n$ distance matrices describing two crescent configurations on $n$ points. For each matrix, we take the set of non-zero elements of row $i$ and call these sets $A_{i}$ and $B_{i}$. If $A_{i} \ \subset \{B_{i}\}_{i=1}^{n}$ for all $1 \leq i \leq n$, Then A and B are isomorphic.
\end{lemma}
\begin{proof}
Note that the set of non-zero elements of row $i$ in a given distance matrix represents the distances between point $i$ and all other points in the configuration. Therefore, it is equivalent to the \textbf{distance coordinate} of point $i$. Therefore, given a crescent configuration on $n$ points, the $n$ distance coordinates are the sets of non-zero elements of each row of its distance matrix, implying that the \textbf{distance set} of this configuration is entirely determined by the distance matrix.
The remainder of the proof is a direct application of \ref{thm: graphisom}.
\end{proof}
\par
Using this lemma, we may group all configurations on $n$ points into their isomorphism classes and conduct the remainder of our analysis on one representative from each class. For $n=4$, this reduces our initial $60$ to $4$ classes. For $n=5$, it reduces our initial $12,600$ to $85$ classes.
\end{comment}
\par
A computer program may then be used to group together all distance matrices defining configurations with identical distance sets. These groups then represent our isomorphism classes, and we can conduct the remainder of our analysis on one representative from each class. For $n=4$, this reduces our initial $60$ to $4$ classes. For $n=5$, it reduces our initial $12,600$ to $85$ classes.
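The grouping step can be reproduced in a few lines. The sketch below (a simplified stand-in for the program described above, with abstract integers $1,2,3$ playing the roles of $d_1,d_2,d_3$) generates all $60$ threadings for $n=4$ and groups them by distance set:

```python
from itertools import combinations, permutations

def threadings(n, multiset):
    """Yield each distinct threading as a symmetric n-by-n distance matrix."""
    slots = list(combinations(range(n), 2))      # upper-triangle positions
    for perm in set(permutations(multiset)):
        A = [[0] * n for _ in range(n)]
        for (i, j), d in zip(slots, perm):
            A[i][j] = A[j][i] = d
        yield A

def distance_set_of(A):
    """Multiset of row multisets, i.e. the distance coordinates of the configuration."""
    n = len(A)
    return tuple(sorted(tuple(sorted(A[i][j] for j in range(n) if j != i))
                        for i in range(n)))

# For n = 4, the 60 threadings of {d1, d2, d2, d3, d3, d3} fall into 4 classes.
classes = {distance_set_of(A) for A in threadings(4, [1, 2, 2, 3, 3, 3])}
```

By Theorem \ref{thm: graphisom}, matrices with equal distance sets are isomorphic, so counting distinct distance sets counts the isomorphism classes.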
\par
Having finished this classification, we note that there are three degenerate cases that will force the configuration to \textit{always} violate general position. These will remain unchanged under graph isomorphism since vertex and edge-set pairings stay the same, allowing us to eliminate entire isomorphism classes if one representative is proven to be degenerate. These cases are as follows.
\begin{enumerate}
\item The configuration contains one point at the center of a circle of radius $d_{i}$ with four or more points on this circle as seen in Figure \ref{fig:star}.
\item The configuration contains three (or more) isosceles triangles sharing the same base.
\item The configuration contains four points arranged on the vertices of an isosceles trapezoid.
\end{enumerate}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{star.png}
\caption{The central point of this configuration has distance coordinate $\{d,d,d,d,d,d\}$.}
\label{fig:star}
\end{figure}
Although there exist other cases that will force a class to violate general position, these three cases may be accounted for by only considering the distance matrices.
\par
Case 1 is very simple to account for and is only possible for $n\geq 5$. In order to eliminate these cases, we remove configurations containing one or more distance coordinates in which a particular distance, $d_{i}$, occurs four or more times. In Algorithm \ref{alg:class}, this case and case 3 are accounted for in the procedure REMOVECYCLIC.
\par
As with case 1, case 2 is only possible for $n\geq 5$. If three or more isosceles triangles share the same base, then all of their apexes must reside on the line bisecting this base, forcing them to violate general position.
\par
In a distance matrix, an isosceles triangle is indicated by a matching pair of distances occurring in a row.
Therefore, we remove all distance matrices in which three or more rows contain a matching pair occurring in the same slots in each row.
This case is accounted for in Algorithm \ref{alg:class} by the procedure REMOVELINEARCASE.
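One way to detect case 2 from a distance matrix alone is sketched below (hypothetical code, under the interpretation that a shared base $\{j,k\}$ shows up as three or more other rows $i$ with $A[i][j]=A[i][k]$):

```python
def three_isosceles_share_base(A):
    """True if some base {j, k} has three or more apexes i with A[i][j] == A[i][k].
    Every such apex is equidistant from j and k, hence lies on the perpendicular
    bisector of the base; three of them on one line violate general position."""
    n = len(A)
    for j in range(n):
        for k in range(j + 1, n):
            apexes = sum(1 for i in range(n)
                         if i not in (j, k) and A[i][j] == A[i][k])
            if apexes >= 3:
                return True
    return False

# Rows 0-2 are apexes equidistant from the base {3, 4}: degenerate.
degenerate = [[0, 5, 6, 1, 1],
              [5, 0, 7, 2, 2],
              [6, 7, 0, 3, 3],
              [1, 2, 3, 0, 4],
              [1, 2, 3, 4, 0]]

# All off-diagonal entries distinct within each row: no shared base.
generic = [[0 if i == j else 10 * min(i, j) + max(i, j) for j in range(5)]
           for i in range(5)]
```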
\par
Case 3 requires us to remove all configurations that contain a subset or subsets of four points defining an isosceles trapezoid, since isosceles trapezoids are always cyclic quadrilaterals. In Algorithm \ref{alg:class}, the procedures SUBMATRICES and REMOVECYCLIC are included to account for these cases, which may be identified by their distance matrices using the following lemma.
\begin{lemma}\label{cla:isosceles} A $4\times 4$ distance matrix defines an isosceles trapezoid if and only if one of the following holds:
\begin{enumerate}
\item the matrix has only one distinct row such that
\begin{enumerate}
\item the matrix has only two distinct distances, or
\item the matrix only has three distinct distances,
\end{enumerate}
\item the matrix has two distinct rows, both with multiplicity two, such that
\begin{enumerate}
\item the matrix has three distinct distances with multiplicity no greater than three (note that this means each distance occurs no more than six times in the distance matrix), or
\item the matrix has four distinct distances with multiplicity no greater than two (each distance occurs no more than four times in the distance matrix).
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
($\Leftarrow$) According to Halsted \cite{Halsted}, a necessary and sufficient condition for a quadrilateral to be an isosceles trapezoid is that it has at least one pair of opposite sides with equal length and diagonals of equal length. It is not possible for these two lengths (sides and diagonals) to be equal because this would create two isosceles triangles that would have to be congruent. Therefore there are three cases for isosceles trapezoids: (1) four distinct distances, (2) three distinct distances, or (3) two distinct distances.
Figure \ref{fig: 4dist} presents possible realizations for each of these cases. From here, it is straightforward to show that each of these quadrilaterals satisfies one of the conditions stated in Lemma \ref{cla:isosceles}, thus completing this direction of the proof.
\begin{figure}[h]
\includegraphics[width=\linewidth]{isosceles_trap.png}
\caption{(1) Four distinct distances, (2) three distinct distances, (3) two distinct distances. }
\label{fig: 4dist}
\end{figure}
\par
($\Rightarrow$) We now prove the other direction.
\par
We begin with condition (1a): one distinct row and two distinct distances.
\par
Assume each row represents the distance coordinate $(a,a,b)$ (the order of distances may differ among rows). Since we require all rows of the distance matrix to have the same distance coordinate, distance $b$ must be incident to every point yet occur only twice among the six pairwise distances. Therefore, the two edges of length $b$ must be either the two diagonals or a pair of opposite sides. Both cases yield a quadrilateral with a pair of congruent opposite sides and congruent diagonals, a necessary and sufficient condition for an isosceles trapezoid.
\par
For condition 1b - one distinct row and three distinct distances - all three distances must be common to all points, so they must describe either a set of opposite sides or the diagonals, in all cases yielding an isosceles trapezoid.
\par
The proofs of the remaining conditions all follow similar arguments. In each case there are two distances that must be common to all points and at least one distance that can only be common to two points, implying that one pair of opposite sides cannot be congruent; therefore the distances common to all points must describe the diagonals and exactly one pair of opposite sides, thus describing an isosceles trapezoid in each case.
\end{proof}
Once these cases have been eliminated, we are left with three isomorphism classes for four points and $51$ for five points.
\begin{comment}\subsection{Usefulness of Classification}
\par
The usefulness of this classification system becomes evident when considering the question of geometric realizability (see \ref{def:geomreal}). Take, for instance, the two configurations shown in Figure \ref{fig:realisom}.
\par
If we do not assign values to $d_{1},d_{2},\ or \ d_{3}$, these two configurations are isomorphic, and thus fall in to the same isomorphism class. However, they clearly do not possess the same set of distances. And yet, in Section \ref{sec:geomREAL} that the distances are actually two different solutions to the same system of equations. By placing all crescent configurations into isomorphism classes, we set ourselves up nicely to solve for ALL realizable crescent configurations in a single class with one system of equations.
\end{comment}
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{isomfigs.png}
\caption{Two isomorphic configurations.}
\label{fig:realisom}
\end{figure}
\begin{rem}\label{rem:runtime}
It should be noted that the runtime of Algorithm \ref{alg:class} is $\mathcal{O}(n^{n})$, so its use is limited to crescent configurations on relatively few points. However, we believe that with enough processing power, upper bounds can be established using the algorithm for small $n$ such as $7$ and $8$. As such, this limitation does not currently pose much of an issue, as no crescent configuration on more than $8$ points has yet been found.
\end{rem}
\section{Geometric Realizability of Crescent Configurations} \label{sec:geomREAL}
In the previous sections we developed a way to find every isomorphism class of distance sets that corresponded to a crescent configuration on $n$ points; however, it is not clear that given some $m$ and one of these distance sets, there exists a set of points in $\ensuremath{\mathbb{R}}^m$ that realizes this distance set. Thus, we formally define geometric realizability.
\begin{defi}\label{def:geomreal}
A crescent configuration on $n$ points is \emph{geometrically realizable} in $\ensuremath{\mathbb{R}}^m$ if there exists some distances $d_1, \dots,d_{n-1}$ for which there exist points $P_1, P_2, \dots, P_n$ in $\ensuremath{\mathbb{R}}^m$ that realize the corresponding distance set.
\end{defi}
We have thus far established upper bounds on the number of crescent configurations on $n$ points based on the combinatorial aspects of this problem. We now sharpen these bounds by considering criteria for geometric realizability.
\begin{remark}
Burt et al. \cite{SM15} showed that for every $n$, there exists an $m$ such that a crescent configuration on $n$ points exists in $\ensuremath{\mathbb{R}}^m$. In this section, we fix $m$ and determine whether a given distance set is geometrically realizable in $\ensuremath{\mathbb{R}}^m$.
\end{remark}
In Section \ref{sec: counting}, we classify crescent configurations in terms of the distances between each pair of points. The problem of determining information about a set of points based on the distances between them is well studied, and is known as the distance geometry problem. Thus, we are now able to use techniques from distance geometry in order to sharpen the bounds on the number of geometrically realizable configurations on $n$ points. We begin by introducing some of these techniques.
\begin{defi}
The \emph{Cayley-Menger matrix} corresponding to a set of $n$ points $\{P_1, P_2, \dots, P_n\}$ is an $(n+1)\times (n+1)$ matrix of the following form:
$$
\left(
\begin{matrix}
0 & d_{1,2}^2 & \ldots & d_{1,n}^2 & 1\\
d_{2,1}^2 & 0 & \ldots & d_{2,n}^2 &1\\
\vdots & \vdots & \ddots & \vdots &\vdots\\
d_{n,1}^2 & d_{n,2}^2 & \dots &0&1\\
1 & 1 &\ldots & 1 & 0
\end{matrix}
\right)
,$$
where $d_{i,j}$ is the distance between $P_i$ and $P_j$.
\end{defi}
The Cayley-Menger matrix corresponding to a set of points can be used to determine whether those points lie in a $d$-dimensional Euclidean space. Note that any $n+1$ points can always be realized in $\ensuremath{\mathbb{R}}^n$; thus we only need to consider collections of at least $n+2$ points.
\begin{theorem}[Cayley-Menger Matrix]\label{thm: cayley}
\cite{LL}
Let $M$ be an $(n+3) \times (n+3)$ matrix of the form specified above. Then $M$ is the Cayley-Menger matrix of a set of $n+2$ points in $\mathbb{R}^n$ if and only if $\det{M} = 0$.
\end{theorem}
\begin{corollary}\label{cor:cayley}
Let $M$ be an $(n+1)\times (n+1)$ matrix of the form specified above. Then $M$ is the Cayley-Menger matrix of a set of $n$ points $\{P_1, \dots, P_n\}$ in $\mathbb{R}^m$ if and only if the Cayley-Menger matrix corresponding to every size $m+2$ subset has determinant zero.
\end{corollary}
\begin{proof}
$\Rightarrow$ Suppose that $M$ is the Cayley-Menger matrix of $n$ points that can be realized in $\ensuremath{\mathbb{R}}^m$. Then every subset of $m+2$ points can also be realized in $\ensuremath{\mathbb{R}}^m$, and thus has a corresponding Cayley-Menger matrix with determinant zero. \\
$\Leftarrow$ Suppose that the Cayley-Menger matrix corresponding to every size $m+2$ subset has determinant zero. Note that $P_1, \dots, P_{m+1}$ must define an $m$-dimensional subspace of any Euclidean space. By assumption, for every $i$, the Cayley-Menger matrix corresponding to $P_1, \dots, P_{m+1}, P_i$ has determinant $0$. Thus, as a consequence of Theorem \ref{thm: cayley}, $P_i$ lies in the same $m$-dimensional subspace, and our set of $n$ points must be realizable in $\ensuremath{\mathbb{R}}^m$.
\end{proof}
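Theorem \ref{thm: cayley} is easy to test numerically. A minimal sketch (illustrative, not the authors' implementation; it uses exact integer arithmetic on squared distances):

```python
def det(M):
    """Exact determinant by cofactor expansion; fine for the small matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cayley_menger(sq):
    """Border an n-by-n matrix of squared distances with a row and column of 1s."""
    M = [row[:] + [1] for row in sq]
    M.append([1] * len(sq) + [0])
    return M

# Squared distances between the vertices (0,0), (1,0), (1,1), (0,1) of a unit square.
# The four points are coplanar, so the 5x5 Cayley-Menger determinant vanishes.
square_sq = [[0, 1, 2, 1],
             [1, 0, 1, 2],
             [2, 1, 0, 1],
             [1, 2, 1, 0]]

# Three of those vertices form a nondegenerate (non-collinear) triangle,
# so its 4x4 Cayley-Menger determinant is nonzero.
triangle_sq = [[0, 1, 2],
               [1, 0, 1],
               [2, 1, 0]]
```

For the right triangle the determinant works out to $-4$, consistent with the fact that the Cayley-Menger determinant of three points is a nonzero multiple of the squared triangle area.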
Submatrices of the Cayley-Menger matrix can also determine whether the points determined by a set of distances lie on the same hypersphere.
\begin{defi}
The \emph{Euclidean distance matrix} corresponding to a set of $n$ points $\{P_1, P_2, \dots, P_n\}$ is an $n\times n$ matrix of the following form:
$$
\left(
\begin{matrix}
0 & d_{1,2}^2 & \ldots & d_{1,n}^2 \\
d_{2,1}^2 & 0 & \ldots & d_{2,n}^2\\
\vdots & \vdots & \ddots & \vdots\\
d_{n,1}^2 & d_{n,2}^2 & \dots &0
\end{matrix}
\right)
.$$
\end{defi}
Note that the Euclidean distance matrices defined above are slightly different from the general notion of distance matrices used in previous sections.
\begin{theorem}\label{thm: circles} \cite{CS}
Let $E$ be the Euclidean distance matrix corresponding to points $P_1, \dots, P_n$ in Euclidean space. These points lie on a hypersphere in $\ensuremath{\mathbb{R}}^{n-2}$ if and only if $\det{E} = 0$.
\end{theorem}
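This too can be checked with exact arithmetic. In the sketch below (illustrative; a naive exact determinant on integer squared distances), the four vertices of a unit square are concyclic, while moving one vertex to $(2,2)$ takes it off the circle:

```python
def det(M):
    """Exact determinant by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Squared distances of the unit square (0,0), (1,0), (1,1), (0,1): concyclic,
# so the Euclidean distance matrix is singular.
square_sq = [[0, 1, 2, 1],
             [1, 0, 1, 2],
             [2, 1, 0, 1],
             [1, 2, 1, 0]]

# Squared distances of (0,0), (1,0), (0,1), (2,2): not concyclic.
skew_sq = [[0, 1, 1, 8],
           [1, 0, 2, 5],
           [1, 2, 0, 5],
           [8, 5, 5, 0]]
```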
We apply these techniques to the problem of counting crescent configurations in the following corollary.
\begin{cor}\label{cor: realize}
Let $M$ be the Cayley-Menger matrix corresponding to a distance set $\mathcal{D}$ on $n$ points. Then $\mathcal{D}$ is geometrically realizable in general position in $\ensuremath{\mathbb{R}}^m$ if and only if the following conditions hold.
\begin{enumerate}
\item Let $S$ be a size $m+3$ subset of $\{1, 2, \dots, n, n+1\}$ that contains $n+1$. Let $M_S$ be the submatrix of $M$ with rows and columns indexed by $S$. For every choice of $S$, $\det{M_S} = 0$.
\item Let $S$ be a size $m+2$ subset of $\{1, \dots, n, n+1\}$. For every such choice of $S$, $\det{M_S} \neq 0$.
\end{enumerate}
\end{cor}
\begin{proof}
From Corollary \ref{cor:cayley}, a collection of distances between $n$ points is geometrically realizable in $\ensuremath{\mathbb{R}}^m$ if and only if the Cayley-Menger matrix corresponding to each subset of $m+2$ points has determinant 0. Each of these matrices is one of the submatrices specified by the first condition.
We consider the submatrices specified by the second condition in two parts. First, consider the submatrices $M_S$ for which $n+1 \in S$. We see that these comprise the Cayley-Menger matrices for each subset of $m+1$ points. We now consider the submatrices $M_S$ for which $n+1 \not\in S$. We see that these comprise the Euclidean distance matrices for each subset of $m+2$ points.
Thus, the submatrices specified by the second condition are the Euclidean distance matrices of the size $m+2$ subsets of the $n$ points together with the Cayley-Menger matrices of the size $m+1$ subsets. By Theorem \ref{thm: circles}, a Euclidean distance matrix on $m+2$ points has determinant 0 if and only if these points lie on the same hypersphere in $\ensuremath{\mathbb{R}}^m$. By Theorem \ref{thm: cayley}, the Cayley-Menger matrix of $m+1$ points has determinant 0 if and only if these points lie on the same hyperplane in $\ensuremath{\mathbb{R}}^m$. Thus, the second condition holds if and only if the points lie in general position in $\ensuremath{\mathbb{R}}^m$.
\end{proof}
\par
Our application of this corollary to the distance sets on $4$ and $5$ points has allowed us to determine the geometric realizability of each of the distance sets found using techniques from earlier sections. These geometrically realizable configurations are discussed in the following section.
Thus far, most of our attention has been focused on crescent configurations in the plane. However, these techniques can be applied to finding crescent configurations in higher dimensions, furthering the work of Burt et al. \cite{SM15}.
\section{Finding Geometric Realizations for Crescent Configurations}
As stated in Section \ref{sec: counting}, Algorithm \ref{alg:class} yields three crescent configurations on four points and 51 on five points, unique up to isomorphism. However, these procedures do not guarantee that each isomorphism class contains geometrically realizable configurations.
\begin{comment}\begin{theorem} \label{thm:3on4}
Given a set of three distinct distances, $\{d1,d2,d3\}$, on four points in crescent configuration, there are only three allowable crescent configurations up to graph isomorphism.
We refer to these configurations as M-type, C-type, and R-type, respectively. In Figure \ref{fig:mcr} we provide graph realizations, adjacency matrices, and a set of distances for each.
\end{theorem}
\begin{theorem}\label{thm:27on5}
Given a set of four distinct distances, $\{d_1,d_2,d_3,d_4\}$, on five points in crescent configuration, there are 27 allowable crescent configurations up to graph isomorphism.
\end{theorem}
In Appendix \ref{app: 5points} we include distance sets and realizable distances for each crescent configuration on five points.
\end{comment}
\par
To check which of these configurations are geometrically realizable, we run Algorithm \ref{alg: check4}. Pseudocode for this algorithm can be found in Appendix \ref{app: algorithms}. Note that we assume $d_{1}=1$ in order to simplify the procedure.
\par
Algorithm \ref{alg: check4} is an extended application of Corollary \ref{cor: realize}. The first step of this algorithm is to take the Cayley-Menger determinants of all $4$-point subsets of each configuration found by Algorithm \ref{alg:class} and set them equal to zero. Doing so yields a system of $\binom{n}{4}$ equations in the unknowns $\{d_2, d_3, \dots, d_{n-1}\}$. If the configuration is realizable in the plane, solving this system of equations will give all possible solutions for these distances in $\mathbb{R}^{2}$. Note that the values must be positive and real-valued.
\par
For each of these solutions, we check the Cayley-Menger determinants of all $3$-point subsets of the configuration. If one or more of these determinants equals zero, that solution forces the configuration to place three points on the same line, violating general position, so we throw it away. If none of the determinants are zero, we keep the solution.
\par
For each remaining solution, we take the determinant of the Euclidean distance matrix of each $4$-point subset of the configuration. If any of these determinants equal zero, the solution forces four points onto the same circle, violating general position, and we throw it away.
\par
Any remaining solutions represent the distances of a geometrically realizable crescent configuration.
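The two determinant tests above (Cayley-Menger determinants for collinearity, Euclidean distance matrix determinants for concyclicity) can be sketched numerically. The snippet below is a minimal illustration on hypothetical concrete distances, not the symbolic computation used by Algorithm \ref{alg: check4}, where the determinants are kept symbolic in the unknown distances.

```python
import numpy as np

def cayley_menger(sq_dists):
    """Bordered Cayley-Menger matrix built from a k x k matrix of squared
    distances; its determinant vanishes exactly when the k points are
    affinely dependent (e.g. 3 collinear points in the plane)."""
    k = len(sq_dists)
    M = np.zeros((k + 1, k + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = sq_dists
    return M

# Hypothetical data: points 0, 1, 3 on a line have squared distances 1, 9, 4,
# so the 3-point Cayley-Menger determinant vanishes (collinear: discard).
collinear = [[0, 1, 9], [1, 0, 4], [9, 4, 0]]
print(abs(np.linalg.det(cayley_menger(collinear))) < 1e-9)  # True

# The 4-point test uses the plain Euclidean distance matrix: the corners of a
# unit square are concyclic, so their squared-distance matrix is singular.
square = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0.]])
print(abs(np.linalg.det(square)) < 1e-9)  # True
```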
Applying this algorithm to the configurations returned by Algorithm \ref{alg:class} completes the proofs of Theorems \ref{thm:3on4} and \ref{thm:27on5}, as we find that there are exactly three realizable crescent configurations on four points and 27 realizable crescent configurations on five points.
\par
In Appendix \ref{app: 5points}, we provide a set of distances for every configuration on five points that had at least one remaining solution after applying this algorithm.
\begin{comment}
\begin{figure}[h]
\definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.}
\definecolor{qqqqff}{rgb}{0.,0.,1.}
\definecolor{ffqqqq}{rgb}{1.,0.,0.}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-3.810089413056916,1.0780675462585876) rectangle (15.017641730323243,7.476163793212526);
\draw [color=ffqqqq] (-2.,6.)-- (-3.,5.);
\draw [color=ffqqqq] (-3.,5.)-- (-1.,5.);
\draw [color=ffqqqq] (-2.,6.)-- (-1.,5.);
\draw [color=qqqqff] (-3.,5.)-- (-2.,3.);
\draw [color=qqqqff] (-1.,5.)-- (-2.,3.);
\draw [color=qqwuqq] (-2.,6.)-- (-2.,3.);
\draw [color=ffqqqq] (0.,5.)-- (2.,5.);
\draw [color=ffqqqq] (0.,5.)-- (1.,4.);
\draw [color=ffqqqq] (2.,5.)-- (1.,4.);
\draw [color=qqqqff] (0.,5.)-- (1.,3.);
\draw [color=qqqqff] (2.,5.)-- (1.,3.);
\draw [color=qqwuqq] (1.,4.)-- (1.,3.);
\draw [color=qqqqff] (5.,3.)-- (6.,6.);
\draw [color=qqqqff] (6.,6.)-- (7.,3.);
\draw [color=qqwuqq] (5.,3.)-- (7.,3.);
\draw [color=ffqqqq] (5.995999144407565,4.297200458023963)-- (6.,6.);
\draw [color=ffqqqq] (5.,3.)-- (5.995999144407565,4.297200458023963);
\draw [color=ffqqqq] (7.,3.)-- (5.995999144407565,4.297200458023963);
\draw [color=qqwuqq] (10.,3.)-- (10.,5.);
\draw [color=qqqqff] (10.,5.)-- (13.,6.);
\draw [color=qqqqff] (13.,6.)-- (14.,3.);
\draw [color=ffqqqq] (10.,5.)-- (14.,3.);
\draw [color=ffqqqq] (10.,3.)-- (13.,6.);
\draw [color=ffqqqq] (10.,3.)-- (14.,3.);
\draw (-0.7443349613914741,2.610944772091302) node[anchor=north west] {M};
\draw (5.787054957374033,2.5109745182326466) node[anchor=north west] {C};
\draw (11.718623352987604,2.544297936185532) node[anchor=north west] {R};
\end{tikzpicture}
\[
\hspace{1cm}
\{d_1,d_2,d_2\},\{d_1,d_3,d_3\},
\hspace{1.5cm}
\{d_1,d_2,d_3\},\{d_1,d_2,d_3\},
\hspace{1.5cm}
\{d_1,d_2,d_3\},\{d_1,d_3,d_3\},
\]
\[
\hspace{1cm}
\{d_2,d_3,d_3\},\{d_2,d_3,d_3\}
\hspace{1.7cm}
\{d_2,d_2,d_3\},\{d_3,d_3,d_3\}
\hspace{1.7cm}
\{d_2,d_3,d_2\},\{d_3,d_3,d_2\}\]
\[
\{d_{1} \ = \ \sqrt{3} \ + \ \sqrt{8}, \ d_{2} \ = \ 3, \ d_{3} \ = \ 2\}
\hspace{0.5cm}
\{d_{1} \ = \ 1, \ d_{2} \ = \ \frac{\sqrt{17}}{2}, \ d_{3} \ = \ \frac{17}{16}\}
\hspace{0.5cm}
\{d_{1} \ = \ \sqrt{\frac{3}{2}}, \ d_{2} \ = \ 2, \ d_{3} \ = \ 1\}
\]
\caption{Types M, C, and R.}
\label{fig:mcr}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}[h]
\includegraphics[width=\textwidth]{configson52.png}
\caption{Representatives for all possible distance sets on five points.}
\label{fig:allFIVE}
\end{figure}
\end{comment}
\section{Rigidity of Crescent Configurations}\label{sec: rigid}
So far in this paper, we have shown that crescent configurations can be classified into a finite number of isomorphism classes for each positive integer $n$ and defined a method to geometrically realize these configurations. We now turn our attention to testing whether our crescent configurations are rigid, which can help answer questions, such as, whether one distance set could define two different realizations of crescent configurations belonging to the same isomorphism class.
For this reason, we shall treat our configurations as graphs and adopt the following definition of graph rigidity.
\begin{defi}[Asimow \& Roth]\cite{AsimowRoth}
Let $G = (V,E)$ be a graph on $v$ vertices in $\ensuremath{\mathbb{R}}^n$, and let $G(p)$ denote $G$ together with a point $p = (p_1, p_2, \dots, p_v) \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \times \dots \times \ensuremath{\mathbb{R}}^n = \ensuremath{\mathbb{R}}^{nv}$. Let $K$ be the complete graph on $v$ vertices. The graph $G(p)$ is \emph{rigid} in $\ensuremath{\mathbb{R}}^n$ if there exists a neighborhood $\mathbf{U}$ of $p$ such that
$$e_K^{-1}(e_K(p)) \cap \mathbf{U} = e_G^{-1}(e_G(p)) \cap \mathbf{U},$$
where $e_K$ and $e_G$ are the edge functions of $K$ and $G$, which return the distances of edges of the associated graphs.
\end{defi}
In other words, a rigid graph cannot have its vertices be continuously moved to noncongruent positions while preserving the distances. The rigidity testing of these configurations would not only serve as a verification for our classification under graph isomorphism, but would also give us another way to characterize these crescent configurations.
\par
One important note is that all crescent configurations are complete graphs. It is a rather direct result of the following theorem and its corollary that all complete graphs are rigid in $\ensuremath{\mathbb{R}}^2$.
\begin{theorem}[Laman]\label{thm:rigidR2}\cite{Laman}
The edges of a graph $G = (V,E)$ are independent in two dimensions if and only if no subgraph $G' = (V',E')$ on $n'$ vertices has more than $2n'-3$ edges.
\end{theorem}
\begin{cor}
A graph with $2n-3$ edges on $n$ vertices is rigid in two dimensions if and only if no subgraph $G'$ on $n'$ vertices has more than $2n'-3$ edges.
\end{cor}
We can easily check that the complete graphs on $1$, $2$, and $3$ points ($K_1$, $K_2$, and $K_3$, respectively) satisfy this condition and thus are rigid. For $n \geq 3$, the complete graph on $n$ points (denoted $K_n$ by convention) is composed of triangles, i.e., copies of $K_3$. Since each $K_3$ contained in $K_n$ is rigid, one cannot move the vertices of $K_n$ to noncongruent positions while preserving the distances, because doing so would inevitably change the distances in some $K_3$ subgraph. Therefore, all crescent configurations are rigid.
\par
Nonetheless, this fact does not imply that our work is over, as there is more than one kind of rigidity for graphs. Thus, we want to study whether all crescent configurations fall into just one category of rigidity or whether certain crescent configurations are more rigid than others. In addition, Theorem \ref{thm:rigidR2} can only be used to show the rigidity of crescent configurations in two dimensions. Furthermore, in the near future, we want to extend our definition of crescent configurations to $\epsilon$-crescent configurations, where two edges are considered equal if their lengths are within a sufficiently small $\epsilon$ of each other. Thus we need another method that can accommodate both the exploration of higher dimensions and the assessment of the stability of this extended family of crescent configurations.\\
\par
This leads us to using \textbf{rigidity matrices}. These are an extremely powerful and flexible tool that can be used in $\ensuremath{\mathbb{R}}^d$ for all $d \in \mathbb{N}$. We introduce them and other necessary terminology in Subsection \ref{sub:prelim} and apply them to our analysis in Subsection \ref{sub:analysis}.
\subsection{Preliminaries}\label{sub:prelim}
\par
The following definitions and theorems will be used throughout the rest of the paper to analyze the rigidity of crescent configurations on $4$ and $5$ points in $\ensuremath{\mathbb{R}}^2$. The formulation of each of these definitions has been adapted from a paper by Hendrickson \cite{Hendrickson} due to its accessible nature. For other formulations and characterizations of rigidity, see also \cite{AsimowRoth, Connelly, Roth, Roth2}.\\
\begin{defi}[Realization]\cite{Hendrickson}
Let $G=(V,E)$ be a graph with some pairwise associated distance measurements. A realization $f$ of $G$ is a function that maps the vertices of $G$ to coordinates in some Euclidean space such that the distance measurements are realized.
\end{defi}
\begin{defi}[Framework]\cite{Hendrickson}
A framework is a combination of a graph $G = (V,E)$ and a realization of $G$ in some Euclidean space, denoted $f(G)$.
\end{defi}
In our case, we will mostly be concerned with frameworks in $\ensuremath{\mathbb{R}}^2$, though the techniques generalize to any Euclidean space.
\begin{defi}[Flexibility of Frameworks] \label{def:flex} \cite{Hendrickson}
A framework is called flexible if and only if it can be continuously deformed while preserving the distance constraints; otherwise the framework is rigid. A framework is redundantly rigid if and only if one can remove any edge and the remaining framework is rigid.
\end{defi}
\begin{defi}[Infinitesimal motion]\cite{Hendrickson}
An \textbf{infinitesimal motion} is an assignment of a velocity $v_i$ to each vertex such that $(v_i - v_j)\cdot(f_i - f_j) = 0$ for all pairs $(i,j) \in E$. A framework $f(G)$ is infinitesimally rigid if and only if it does not have any nontrivial infinitesimal motion.
\end{defi}
\begin{theorem}[Gluck 1975]\label{thm:gluck} \cite{Gluck}
If a graph has a single infinitesimally rigid realization, then all its generic realizations are rigid.
\end{theorem}
The main tool we will use to study the rigidity of our crescent configurations, as mentioned, is the \textbf{rigidity matrix}. Each framework has a rigidity matrix associated to it. This matrix encodes the system of equations whose solutions are the infinitesimal motions of that framework. Its rows correspond to the edges, and its $nd$ columns correspond to the components of the vertices in $\mathbb{R}^d$. Each row has $2d$ nonzero entries, one for each coordinate of the two vertices incident to the corresponding edge; these nonzero values are the differences of the coordinate values of the two vertices.
\begin{theorem}[Hendrickson 1992]\label{thm:Hend} \cite{Hendrickson}
A framework $f(G)$ is rigid if and only if its rigidity matrix has rank exactly equal to $S(n,d)$, the maximal possible rank, which equals $nd - d(d+1)/2$ for $n \geq d$ and $n(n-1)/2$ otherwise.
\end{theorem}
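To make the rank test concrete, here is a minimal numerical sketch (with hypothetical generic coordinates of our own choosing) of building a rigidity matrix and applying the rank criterion of Theorem \ref{thm:Hend}; it is not the symbolic row reduction carried out later in this section.

```python
import numpy as np

def rigidity_matrix(points, edges):
    """One row per edge, d columns per vertex: the row of edge (i, j) holds
    p_i - p_j in vertex i's columns and p_j - p_i in vertex j's columns."""
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        R[row, i*d:(i+1)*d] = pts[i] - pts[j]
        R[row, j*d:(j+1)*d] = pts[j] - pts[i]
    return R

def is_rigid(points, edges):
    """Rank test: rigid iff rank equals S(n, d) = nd - d(d+1)/2 for n >= d."""
    n, d = np.asarray(points).shape
    S = n * d - d * (d + 1) // 2 if n >= d else n * (n - 1) // 2
    return np.linalg.matrix_rank(rigidity_matrix(points, edges)) == S

# K_4 on generic coordinates (hypothetical values) is rigid: rank 5 = S(4, 2).
pts = [(0, 0), (2, 0), (0.4, 1.3), (1.7, 1.1)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_rigid(pts, k4))       # True
print(is_rigid(pts, k4[:4]))   # False: 4 edges cannot reach rank 5
```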
All the above theorems and objects are tied together in the following lemma, which provides the basis for our analysis in the next section.
\begin{lemma}\label{lem:BJackson} \cite{Jackson}
The following statements are equivalent:
\begin{enumerate}
\item $G$ is rigid in $\ensuremath{\mathbb{R}}^d$,
\item some framework $f(G)$ in $\ensuremath{\mathbb{R}}^d$ is infinitesimally rigid,
\item every generic framework of $G$ in $\ensuremath{\mathbb{R}}^d$ is rigid.
\end{enumerate}
\end{lemma}
\subsection{Rigidity Analysis for 4-point Crescent Configurations} \label{sub:analysis}
\vspace{0.2cm}
The first thing we need to do before we can study the rigidity matrix is to construct a realization. By Theorem \ref{thm:gluck}, it suffices to construct a single realization for each type of crescent configuration on $n$ points and study the rigidity of that framework.
Figure 6 shows a realization of type C obtained by fixing $d_1 = 1$. We note that since the distances in these configurations must satisfy general position as well as geometric realizability, fixing one distance to calculate the rest of the distances in the distance set does not affect the rigidity characterization of the configuration.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{TypeC.png}
\caption{Realization of type C obtained by fixing $d_1 = 1$}
\end{figure}
The above realization yields the following rigidity matrix, which we denote $A_C$:
\[A_C =
\begin{bmatrix}
\frac{1}{2} & y + \sqrt{\frac{1+4y^2}{4}}&-\frac{1}{2} & -y - \sqrt{\frac{1+4y^2}{4}}&0&0&0&0\\
-\frac{1}{2}&y + \sqrt{\frac{1+4y^2}{4}}& 0& 0& \frac{1}{2}& -y - \sqrt{\frac{1+4y^2}{4}}&0&0\\
0& \sqrt{\frac{1+4y^2}{4}}& 0&0& 0& 0& 0&-\sqrt{\frac{1+4y^2}{4}}\\
0& 0& -1& 0& 1& 0& 0& 0\\
0& 0& -\frac{1}{2}& -y& 0& 0& \frac{1}{2}& y\\
0& 0& 0& 0& \frac{1}{2}& -y& -\frac{1}{2}& y\\
\end{bmatrix}
.\]
By row reduction, we find that $\mathrm{Rank}(A_C) = 5 = S(4,2)$. Thus, by Theorem \ref{thm:Hend}, the framework is infinitesimally rigid. Since the row reduction yields the same result whether $y > 0$ or $y < 0$, we conclude by Lemma \ref{lem:BJackson} that type C is rigid for all $y \neq 0$.\\
Similarly, we can carry out the same analysis on type M. There are two realizations of type $M$, which are included in Figure 7.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{TypeM.png}
\caption{Two realizations of type M: $M_1$ and $M_2$}
\end{figure}
Note that $d_1$ is the only distance that differs between these two realizations. If we remove $d_1$ and denote the remaining frameworks by $M'_1$ and $M'_2$, respectively, we can continuously deform one into the other; by definition, type M therefore cannot be redundantly rigid. Nevertheless, the rank of its rigidity matrix is $5 = S(4,2)$, so type M is another rigid graph.
\begin{comment}
\[A_{M_1} =
\begin{bmatrix}
-2 x & 0 & 2 x & 0 & 0 & 0 & 0 & 0\\
-x & -x \sqrt{3} & 0 & 0 & x & x \sqrt{3} & 0 & 0 \\
-x & -x \sqrt{3} - y & 0 & 0 & 0 & 0 & x& x\sqrt{3} + y\\
0 & 0 & x & -x \sqrt{3} & -x & x\sqrt{3} & 0 & 0 \\
0 & 0 & x & -x \sqrt{3} - y & 0 & 0 & -x & x\sqrt{3} + y \\
0 & 0 & 0 & 0 & 0 & -y & 0 & y \\
\end{bmatrix}
\]
Again by row reduction operations, Rank($A_{M_1}$) $=5$ thus by \ref{thm:Hend}, the framework is infinitesimally rigid therefore type M is another rigid graph.
\end{comment}
Last but not least, type R can also be studied using the same method. We start with the realization shown in Figure 8 and its associated rigidity matrix $A_R$:
\begin{figure}[ht]
\includegraphics[width=0.7\textwidth]{TypeR.png}
\caption{Realization of type R obtained by fixing $d_1 = 1$}
\end{figure}
\[A_R =
\begin{bmatrix}
-x & 0 & x & 0 & 0 & 0 & 0 & 0\\
\frac{-x}{2} & \frac{-x}{y} & 0 & 0 & \frac{x}{2} & \frac{x}{y} & 0 & 0 \\
\frac{-1}{2x} & \frac{-y}{2x} & 0 & 0 & 0 & 0 & \frac{1}{2x} & \frac{y}{2x}\\
0 & 0 & x - \frac{x}{2} & \frac{-x}{2y} & -x + \frac{x}{2} & \frac{x}{2y} & 0 & 0\\
0 & 0 & x - \frac{1}{2x} & \frac{-y}{2x} & 0 & 0 & -x + \frac{1}{2x} & \frac{y}{2x}\\
0 & 0 & 0 & 0 & \frac{x}{2} - \frac{1}{2x} & \frac{x}{2y} - \frac{y}{2x} & \frac{-x}{2} + \frac{1}{2x}& \frac{-x}{2y} + \frac{y}{2x}\\
\end{bmatrix}
.\]
By carrying out row reduction once more, we find that $\mathrm{Rank}(A_R) = 5 = S(4,2)$, so the framework is infinitesimally rigid. Moreover, removing any row of the matrix is equivalent to removing the corresponding edge of the framework, and the rank of each matrix obtained in this way is still $5$; we conclude that type R is redundantly rigid by Definition \ref{def:flex}. By \cite{Hendrickson}, the conditions for unique realization of a graph are rigidity, $(d+1)$-connectedness, and redundant rigidity. Since type R satisfies all of these conditions, it immediately follows that for each value of $x$ that we choose, there is a \emph{unique realization} of type R.
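The redundant rigidity of type R (which is $K_4$) can also be checked numerically on a generic realization. In the sketch below (hypothetical coordinates of our own choosing), deleting each edge in turn leaves the rank of the rigidity matrix at $5 = S(4,2)$.

```python
import numpy as np
from itertools import combinations

def rigidity_rank(points, edges):
    """Rank of the rigidity matrix of the framework (points, edges)."""
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    R = np.zeros((len(edges), n * d))
    for r, (i, j) in enumerate(edges):
        R[r, i*d:(i+1)*d] = pts[i] - pts[j]
        R[r, j*d:(j+1)*d] = pts[j] - pts[i]
    return np.linalg.matrix_rank(R)

# Hypothetical generic coordinates for the four vertices of K_4.
pts = [(0, 0), (2, 0), (0.4, 1.3), (1.7, 1.1)]
k4 = list(combinations(range(4), 2))
print(rigidity_rank(pts, k4))  # 5 = S(4, 2): the framework is rigid
# Deleting any single edge keeps the rank at 5, i.e. redundant rigidity.
print(all(rigidity_rank(pts, [e for e in k4 if e != drop]) == 5
          for drop in k4))     # True
```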
In conclusion, we have found that not all crescent configurations on four points have the same rigidity properties. As stated above, only type R has a unique realization in $\ensuremath{\mathbb{R}}^2$. The same analysis may be carried out on all $27$ crescent configurations on $5$ points to determine which of them have a unique realization in $\ensuremath{\mathbb{R}}^2$. \\
As mentioned previously, rigidity matrices are very accommodating. Suppose we need to lift our crescent configurations into $\ensuremath{\mathbb{R}}^d$ for $d > 2$, which we discuss in Subsection \ref{sub:high}. Then the number of rows and columns of the associated rigidity matrices, as well as $S(n,d)$, would change, but the method of evaluating rigidity would remain the same. Similarly, with the extension of our definition from crescent configurations to $\epsilon$-crescent configurations (Subsection \ref{sub:epsilon}), we can insert $\epsilon$ into the matrix by constructing a realization that involves $\epsilon$ and then solve for $\epsilon$ depending on what stability type we want the configuration to exhibit.
\section{Future Work}
\subsection{Further Explorations in the Plane}\label{sub:further}
Thus far, we have used our techniques to classify crescent configurations in the plane for $n=4$ and $n=5$. Because of the complexity of our algorithm, we have not been able to apply our techniques to larger $n$. As mentioned above, the runtime of our current algorithm is on the order of $n^n$, which prevents us from carrying out this process for large $n$. However, thus far no configurations have been found for $n>8$, so even running a similar algorithm for $n=9$ would yield significant progress on this problem. Thus, we are interested in the possibility of modifying our algorithm or finding a new technique that would allow us to count crescent configurations for larger $n$. In this way, we could develop a sequence $\{c_i\}$, where each $c_i$ gives the number of crescent configurations on $i$ points. If Erd\H{o}s' conjecture is correct, then $\{c_i\}$ has only a finite number of nonzero terms. It would be interesting to see Erd\H{o}s' conjecture realized as a sequence that goes to zero.
Since our techniques yield every possible crescent configuration for a given $n$, we can use them to observe patterns. For example, one can see from Figure \ref{fig:allFIVE} that many of the crescent configurations on 5 points contain crescent configurations on 4 points. We may be able to develop techniques using such patterns that generate some of the possible crescent configurations for larger $n$.
\subsection{Extensions to Higher Dimensions}\label{sub:high}
As mentioned earlier, the distance geometry techniques that we use naturally extend to higher dimensions. Thus, we are interested in using these techniques to find the number of crescent configurations on $n$ points in a given dimension. Our goal is to construct a sequence for each $d$ consisting of the number of crescent configurations on $i$ points in $\ensuremath{\mathbb{R}}^d$ for each $i$. Currently, constructions in $\ensuremath{\mathbb{R}}^3$ have been found for 3, 4, and 5 points. Thus, even finding a single 6 point configuration in 3D would give new information. We have attempted to use techniques from distance geometry to find a realization in $\ensuremath{\mathbb{R}}^3$ of a known distance set for $n=6$ in the plane. However, the resulting systems of equations exceeded our computational resources.
Recently, Burt et al.\ \cite{SM15} found that for $d$ sufficiently high, one can always construct a crescent configuration on $n$ points in $\ensuremath{\mathbb{R}}^d$. We can consider similar questions using the concept of distance coordinates. We are interested in determining whether, given a distance set, there always exists a dimension in which the set is geometrically realizable.
\subsection{Properties of Crescent Configuration Types}\label{sub:epsilon}
Now that we have developed a way of classifying crescent configurations, we can examine certain properties of each of the types of crescent configurations. We have started to explore this direction with our rigidity calculations. \\
One direction we are interested in is developing a concept of stability for these configurations, as we noticed that moving the points of the M-, R-, and C-type configurations resulted in different amounts of change in the distances. Further, should we define two distances to be equal if they are within $\epsilon$ of each other, then our study of the stability of crescent configurations could have some powerful applications to the study of molecules.
\begin{comment}
Lastly, the techniques here can extend to problems more general than the problem of finding crescent configurations. Given some restraints on the distances between a set of points, we can use these techniques to find geometric realizations.
\begin{comment}\section {Proof of the classifications of crescent configurations on 4 points} \label{sec: 4crescent}
\par
The remaining three steps of the program require the user to input specific values for $\{d1,d2,d3\}$. Once the user has done this, the second step of the program checks to see if any of the three general crescent configurations defined above are geometrically realizable given these distances. To do this, the program uses the \textit{Caley-Menger Determinant}.
\par
\par
Once this check has been completed, the final step of the program, "circleCheck4[distances,dim]," employs Ptolemy's formula to remove any of the remaining configurations that define cyclic quadrilaterals. The output of this final step is the list of all geometrically realizable crescent configurations for the specified distances.\\
\section{Proof of the classifications of crescent configurations on 5 points}
\begin{theorem}\label{thm:5points}
Given a set of four distinct distances, $\{d1,d2,d3,d4\}$, there are at most 85 five-point crescent configurations that may be formed with these distances.
\end{theorem}
\begin{proof}
As with \ref{thm: 4point}, this result was determined using a Mathematica program to carry out the combinatorics. The full code can be found in Appendix B
\par
Just as in \ref{thm: 4point}, this program has four steps, and we again only consider the first step, the function "crescentClassStep1[distances,dim]," to acheive the upper bound of 85. The first step is nearly identical to the first step in the proof of \ref{thm: 4point}, except that we can no longer eliminate cases of isosceles trapezoids because, with four distinct distances, a collection of four points with only two distinct distance coordinates may define a non-square rhombus and its diagonals. A non-square rhombus is not a cyclic quadrilateral
\par
Instead, the program tests for distance coordinates with only one distinct distance. This distance coordinate would define a point at the center of a circle surrounded by four points on said circle, violating general position.
\par
Once these cases have been eliminated, we are left with only 85 distance matrices.
\end{proof}
CAYLEY-MENGER DETERMINANTS
\end{comment}
| {
"timestamp": "2016-10-26T02:05:01",
"yymm": "1610",
"arxiv_id": "1610.07836",
"language": "en",
"url": "https://arxiv.org/abs/1610.07836",
"abstract": "Let $n$ points be in crescent configurations in $\\mathbb{R}^d$ if they lie in general position in $\\mathbb{R}^d$ and determine $n-1$ distinct distances, such that for every $1 \\leq i \\leq n-1$ there is a distance that occurs exactly $i$ times. Since Erdős' conjecture in 1989 on the existence of $N$ sufficiently large such that no crescent configurations exist on $N$ or more points, he, Pomerance, and Palásti have given constructions for $n$ up to $8$ but nothing is yet known for $n \\geq 9$. Most recently, Burt et. al. had proven that a crescent configuration on $n$ points exists in $\\mathbb{R}^{n-2}$ for $n \\geq 3$. In this paper, we study the classification of these configurations on $4$ and $5$ points through graph isomorphism and rigidity. Our techniques, which can be generalized to higher dimensions, offer a new viewpoint on the problem through the lens of distance geometry and provide a systematic way to construct crescent configurations.",
"subjects": "Combinatorics (math.CO)",
"title": "Classification of crescent configurations"
} |
https://arxiv.org/abs/1806.09810 | On Representer Theorems and Convex Regularization | We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions which are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, which was still an unresolved problem. | \section{Application to some popular regularizers}
\label{sec:app}
We now show that the extreme points and extreme rays of numerous convex regularizers can be described analytically, allowing us to establish important analytical properties of the solutions of some popular problems.
The list given below is far from being exhaustive, but it gives a taste of the diversity of applications targeted by our main results.
\subsection{Finite-dimensional examples}
We first consider examples where one has $\mathrm{dim} \vecgal<+\infty$. In that case, $\Phi$ is continuous, and since the considered regularizations~$\reg$ are lower semi-continuous, we deduce that
\begin{itemize}
\item[$\circ$] the level set $\minset$ is closed,
\item[$\circ$] the solution set $\sol=\minset\cap \Phi^{-1}(\{y\})$ is closed, and locally compact (even compact in most cases), hence it admits extreme points provided it contains no line.
\end{itemize}
\subsubsection{Nonnegativity constraints}
\label{sec:nonneg}
In a large number of applications, the signals to recover are known to be nonnegative. In that case, one may be interested in solving nonnegatively constrained problems of the form:
\begin{equation}\label{eq:nonnegative}
\inf_{u\in \RR^n_+} f(\Phi u - y).
\end{equation}
An important instance of this class of problems is the nonnegative least squares \cite{lawson1995solving}, which finds its motivation in a large number of applications. Applying the results of \Cref{sec:abstract} to Problem \eqref{eq:nonnegative} yields the following result.
\begin{proposition}
If the solution set of \eqref{eq:nonnegative} is nonempty, then it contains a solution which is $m$-sparse.
In addition if $f$ is convex and the solution set is compact, then its extreme points are $m$-sparse.
\end{proposition}
Choosing $\reg$ as the characteristic function of ${\RR^n_+}$, the result simply stems from the fact that the extreme rays of the positive orthant are the half lines $\{\alpha e_i, \alpha \geq 0\}$, where $(e_i)_{1\leq i \leq n}$ denote the elements of the canonical basis. We have to consider $m$ atoms and not $m-1$ since $t^\star=\inf \reg=0$, see \cref{rem:infR}.
This may come as a surprise to some readers, since the usual way to promote sparsity is to use $\ell^1$-norms. This type of result is one of the main ingredients of \cite{donoho2005sparse}, which shows that the $\ell^1$-norm can sometimes be replaced by the indicator of the positive orthant when sparse \emph{positive} signals are sought.
\subsubsection{Linear programming}
Let $\psi \in \mathbb{R}^n$ be a vector and $\Phi \in \RR^{m\times n}$ be a matrix and consider the following linear program in standard (or equational) form:
\begin{equation}\label{eq:linearprogramming}
\inf_{\substack{u \in \RR^n_+ \\ \Phi u = y}} \langle \psi,u\rangle
\end{equation}
Applying Theorem \ref{thm:first} to Problem \eqref{eq:linearprogramming}, we get the following well-known result (see e.g. \cite[Thm. 4.2.3]{matousek2007understanding}):
\begin{proposition}\label{prop:linprog}
Assume that the solution set of \eqref{eq:linearprogramming} is nonempty and compact.
Then, its extreme points are $m$-sparse, i.e. of the form:
\begin{equation}
u = \sum_{i=1}^m \alpha_i e_i, \alpha_i\geq0,
\end{equation}
where $e_i$ denotes the $i$-th element of the canonical basis.
\end{proposition}
In the linear programming literature, solutions of this kind are called \emph{basic solutions}.
To prove the result, we can reformulate \eqref{eq:linearprogramming} as follows:
\begin{equation}\label{eq:linearprogramming2}
\inf_{\substack{(u,t) \in \RR^n_+\times \RR \\ \Phi u = y \\ \langle \psi,u\rangle=t}} t
\end{equation}
\begin{sloppypar}
Letting $R(u,t) = t + \iota_{\RR_+^n}(u)$, we get $\inf R = -\infty$. Hence, if a solution exists, we only need to analyze the extreme points and extreme rays of ${\minset = \{(x,t)\in \RR^n\times \RR, R(x,t) \le t^\star\}=\RR_+^n\times ]-\infty,t^\star]}$, where $t^\star$ denotes the optimal value. The extreme rays of this set (a shifted nonnegative orthant) are of the form $\{\alpha e_i, \alpha > 0 \}\times \{t^\star\}$ or $\{0\}\times]-\infty,t^\star[$. In addition $\minset$ possesses only one extreme point $(0,t^\star)$. Applying \cref{thm:first}, we get the desired result.
\end{sloppypar}
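Proposition \ref{prop:linprog} can be illustrated by brute-force enumeration of the basic feasible solutions of a small instance. The sketch below uses hypothetical data of our own choosing; it is not a practical LP solver, which would use the simplex or an interior-point method instead.

```python
import numpy as np
from itertools import combinations

def basic_feasible_solutions(Phi, y, tol=1e-9):
    """Enumerate the basic feasible solutions of {u >= 0 : Phi u = y}: for
    each choice of m linearly independent columns, solve the square system
    and keep nonnegative solutions. Every such point is m-sparse."""
    m, n = Phi.shape
    for cols in combinations(range(n), m):
        B = Phi[:, list(cols)]
        if abs(np.linalg.det(B)) < tol:
            continue
        uB = np.linalg.solve(B, y)
        if (uB >= -tol).all():
            u = np.zeros(n)
            u[list(cols)] = uB
            yield u

# Hypothetical instance with m = 2 constraints and n = 4 variables.
Phi = np.array([[1., 1., 2., 0.], [0., 1., 1., 1.]])
y = np.array([2., 1.])
psi = np.array([1., 3., 4., 1.])
best = min(basic_feasible_solutions(Phi, y), key=lambda u: psi @ u)
print(np.count_nonzero(best))  # 2 <= m: the optimal vertex is m-sparse
```

Since the optimum of a linear program with a compact solution set is attained at an extreme point, scanning the basic feasible solutions finds it.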
\subsubsection{$\ell^1$ analysis priors}
An important class of regularizers in the finite-dimensional setting $\vecgal=\RR^n$ contains the functions of the form $R(u) = \| Lu \|_1$, where {$L$ is a linear operator from $\RR^{n}$ to $\RR^{p}$}. They are sometimes called \emph{analysis priors}, since the signal $u$ is ``analyzed'' through the operator $L$. Remarkable practical and theoretical results have been obtained using this prior in the fields of inverse problems and compressed sensing, even though many of its properties are, to the belief of the authors, still quite obscure.
Since $R$ is one-homogeneous, it suffices to describe the extremality properties of the unit ball $C=\{u\in \RR^n, \| Lu \|_1\leq 1\}$ to use our theorems. The lineality space is simply equal to $\mathrm{lin}(C) = \mathrm{ker}(L)$. Let $K=\mathrm{ker}(L)$, $K^\perp$ denote the orthogonal complement of $K$ in $\RR^n$ and $L^+:\RR^n\to K^\perp$ denote the pseudo-inverse of $L$.
We can decompose $C$ as $C= K + C_{K^\perp}$ with $C_{K^\perp}=C\cap K^\perp$.
Our ability to characterize the extreme points of $C_{K^\perp}$ depends on whether $L$ is surjective or not.
Indeed, we have
\begin{equation}\label{eq:extL1}
\ext(C_{K^\perp})=L^+\left( \ext\left( \mathrm{ran}(L)\cap B_1^p\right) \right),
\end{equation}
where $B_1^p$ is the unit $\ell^1$-ball defined as
\begin{equation*}
B_1^p=\{z\in \RR^p, \|z\|_1\leq 1\}.
\end{equation*}
Property \eqref{eq:extL1} simply stems from the fact that $C_{K^\perp}$ and $D=\mathrm{ran}(L)\cap B_1^p$ are in bijection through the operators $L$ and $L^+$.
\paragraph{The case of a surjective operator $L$}
When $L$ is surjective {(hence $p\leq n$)}, the problem becomes quite elementary.
\begin{proposition}\label{prop:extanalysis}
If $L$ is surjective, the extreme points of $C_{K^\perp}$ are given by $\ext(C_{K^\perp}) = \{\pm L^+ e_i\}_{1\leq i \leq p}$, where $e_i$ denotes the $i$-th element of the canonical basis.
{Consider Problem \eqref{eq::data-fitting:convex} and assume that at least one solution exists. }
Then Problem \eqref{eq::data-fitting:convex} has solutions of the form
\begin{equation}\label{eq:extremeanalysisfinite}
u^\star = \sum_{i\in I} \alpha_i L^+e_i + u_K,
\end{equation}
where $u_K\in \mathrm{ker}(L)$ and $I\subset \{1,\hdots, p\}$ is a set of cardinality $|I|\leq m - \mathrm{dim}(\Phi\mathrm{ker}(L))$.
\end{proposition}
{The proof of Proposition \ref{prop:extanalysis} follows from Corollary \ref{coro:lines} and Section \ref{subsec:nonconvex}, with $j=0$ and observing that $\projv$ is the orthogonal projection on $K^\perp$.}
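As a concrete illustration of \eqref{eq:extremeanalysisfinite} (a numerical sketch with our own choice of $L$, not taken from the text): for the one-dimensional forward-difference operator, which is surjective, the atoms $L^+ e_i$ are mean-zero step functions, so sparse solutions are piecewise constant up to the kernel of $L$, i.e. the constant vectors.

```python
import numpy as np

n = 6
# Forward-difference operator: an (n-1) x n surjective matrix whose
# kernel is spanned by the constant vector.
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
Lp = np.linalg.pinv(L)   # maps R^{n-1} onto ker(L)^perp
atom = Lp[:, 2]          # the atom L^+ e_3
# L (L^+ e_3) = e_3: the atom has a single unit jump, so it is a
# (mean-zero) step function.
print(np.allclose(np.diff(atom), np.eye(n - 1)[2]))  # True
print(abs(atom.sum()) < 1e-9)                        # True: atom in ker(L)^perp
```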
\paragraph{The case of an arbitrary operator $L$}
When $L$ is not surjective the des\-crip\-tion of the extreme points $\ext\left(D\right)$ becomes intractable in general. A rough upper bound on the number of extreme points can be obtained as follows.
We assume that $L$ has full rank {$n$} and that $\mathrm{ran}(L)$ is in general position. The extreme points of $\mathrm{ran}(L)\cap B_1^p$ correspond to the intersections of some faces of the $\ell^1$-ball with a subspace of dimension {$n$}. In order for some $k$-face to intersect the subspace $\mathrm{ran}(L)$ in a singleton, $k$ should satisfy $n+k-p=0$, i.e. $k=p-n$. The $k$-faces of the $\ell^1$-ball contain $(k+1)$-sparse elements. The number of $(k+1)$-sparse supports in dimension $p$ is $\binom{p}{k+1}$. For a fixed support, the number of sign patterns is upper-bounded by $2^{k+1}$. Hence, the maximal number of extreme points satisfies $|\ext\left( \mathrm{ran}(L)\cap B_1^p\right)|\leq 2^{k+1}\binom{p}{k+1}$. This upper bound is pessimistic, since the subspace may not meet all the relevant faces, but it provides an idea of the combinatorial explosion that may happen in general.
Notice that enumerating the faces of a polytope is usually a hard problem. For instance, the Motzkin conjecture \cite{motzkin1957comonotone}, which upper-bounds the number of $k$-faces of a $d$-polytope with $z$ vertices, was formulated in 1957 and solved by McMullen \cite{mcmullen1970maximum} only in 1970.
\subsubsection{Matrix examples}
\label{sec:matrix:examples}
In several applications, one deals with optimization problems in matrix spaces. The following regularizations/convex sets are commonly used.
\paragraph{Semi-definite matrix constraint}
Similarly to \Cref{sec:nonneg}, one may consider in $\RR^{n\times n}$ the following constrained problem
\begin{equation}\label{eq:sdpcone}
\inf_{M\succeq 0} f(\Phi M - y),
\end{equation}
where $M\succeq 0$ means that $M$ must be symmetric positive semi-definite (p.s.d.). The extreme rays of the positive semi-definite cone $\minset$ are the p.s.d. matrices of rank $1$ (see for instance~\cite[Sec. 2.9.2.7]{dattorro_convex_2005}).
Hence, arguing as in \Cref{sec:nonneg}, we may deduce that if there exists a solution to~\eqref{eq:sdpcone}, there is also a solution which has rank (at most) $m$.
However, that conclusion is not optimal, as in that case a theorem by Barvinok~\cite[Th. 2.2]{barvinok_problems_1995} ensures that there exists a solution $M$ with
\begin{equation}\label{eq:ranksdp}
\rank(M) \leq \frac{1}{2}\left(\sqrt{8m+1}-1\right).
\end{equation}
To understand the gap with Barvinok's result, let us note that the p.s.d.\ cone has a very special structure which makes the Minkowski-Carath\'eodory theorem (or its extension by Klee) too pessimistic. By~\cite[Sec. 2.9.2.3]{dattorro_convex_2005}, given $M\succeq 0$, the smallest face of the p.s.d. cone which contains $M$ (\ie{} the set of p.s.d.\ matrices which have the same kernel) has dimension
\begin{equation}\label{eq:ranksdp2}
d= \frac{1}{2}\rank(M)(\rank(M)+1).
\end{equation}
\begin{sloppypar}
Equivalently, if the smallest face which contains $M$ has dimension $d$, then $\rank(M)=\frac{1}{2}\left(\sqrt{8d+1}-1\right)$, hence $M$ is a convex combination of ${\frac{1}{2}\left(\sqrt{8d+1}-1\right)}$ points in extreme rays, a value which is less than the value $d$ predicted by Klee's extension of Carath\'eodory's theorem.
As a result, we recover Barvinok's result by noting that, as ensured by the first claim\footnote{or, more precisely, by its variant when $t^\star=\inf \reg$, see \cref{rem:infR}.} of \cref{thm:first}, any extreme point $M$ of the solution set belongs to a face of dimension $m$. Then, taking into account~\eqref{eq:ranksdp2} improves upon the second claim of \cref{thm:first}, and we immediately obtain~\eqref{eq:ranksdp}.
\end{sloppypar}
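The passage from \eqref{eq:ranksdp2} to \eqref{eq:ranksdp} amounts to inverting the face-dimension formula. A short numerical check (our own illustration) confirms that the Barvinok bound is exactly the largest rank compatible with a face of dimension at most $m$:

```python
import math

def face_dim(r):
    # dimension of the smallest face of the p.s.d. cone containing a rank-r matrix
    return r * (r + 1) // 2

def barvinok_rank_bound(m):
    # largest integer r with r(r+1)/2 <= m, i.e. r <= (sqrt(8m+1)-1)/2
    return int((math.sqrt(8 * m + 1) - 1) / 2)

# consistency check: the bound inverts the face-dimension formula
for m in range(1, 50):
    r = barvinok_rank_bound(m)
    assert face_dim(r) <= m < face_dim(r + 1)
```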
\paragraph{Semi-definite programming}
Semi-definite programs are problems of the form:
\begin{equation}\label{eq:semidefiniteprogramming}
\inf_{\substack{M\succeq 0\\ \Phi(M) = y}} \langle A,M\rangle,
\end{equation}
where $A\in \RR^{n\times n}$ is a matrix and $\langle A,M\rangle\eqdef \mathrm{Tr}(A M)$.
Arguing as in \cref{prop:linprog}, if the solution set of \eqref{eq:semidefiniteprogramming} is nonempty, our main result allows us to state that its extreme points are matrices of rank at most $m$. In view of the above discussion, it is possible to refine this statement and show that~\eqref{eq:ranksdp} holds.
\paragraph{The nuclear norm}
The nuclear norm of a matrix $M\in \RR^{p\times n}$ is often denoted $\|M\|_*$ and defined as the sum of the singular values of $M$. It has lately gained considerable attention as a regularizer thanks to its applications in matrix completion \cite{candes2009exact} or blind inverse problems \cite{ahmed2014blind}.
The geometry of the unit ball $\{M\in \RR^{p\times n}, \|M\|_*\leq 1\}$ is well studied due to its central role in the field of semi-definite programming \cite{pataki2000geometry}. Its extreme points are the rank one matrices $M= uv^T$, with $\|u\|_2=\|v\|_2=1$.
Combining \cref{thm:first} with this result explains why regularizing pro\-blems over the space of matrices with the nuclear norm allows recovering \emph{rank-$m$} solutions.
\paragraph{The rank-sparsity ball}
The \emph{rank-sparsity} ball is the set $\{M\in \RR^{m\times n}, \|M\|_* + \|M\|_1 \leq 1\}$, where $\|M\|_1$ is the $\ell^1$-norm of the entries of $M$. The corresponding regularization is sometimes used in order to favor sparse and low-rank matrices.
The authors of \cite{drusvyatskiy2015extreme} have described the extreme points of this unit ball.
They have proved that \emph{the extreme points $M$ of the rank-sparsity ball satisfy $\frac{r(r+1)}{2}-|I|\leq 1$}, where $|I|$ denotes the number of non-zero entries of $M$ and $r$ denotes its rank.
This result partly explains why using the rank-sparsity gauge promotes sparse and low-rank solutions. Let us point out that this effect might be better obtained using different strategies \cite{Richard2013Intersecting,richard2014tight}.
\paragraph{Bi-stochastic matrices}
A doubly stochastic matrix is a square matrix with nonnegative entries whose rows and columns each sum to one. The set of such matrices is called the Birkhoff polytope. The Birkhoff-von Neumann theorem states that its extreme points are the permutation matrices.
We refer the interested reader to \cite{Fogel2015Convex} for a use of such matrices in DNA sequencing.
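For illustration, the Birkhoff-von Neumann theorem can be made constructive on small examples. The greedy sketch below (our own toy implementation, which brute-forces over permutations and is only practical for small $n$) decomposes a doubly stochastic matrix into a convex combination of permutation matrices:

```python
import itertools
import numpy as np

def birkhoff_decompose(M, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic matrix.

    Brute-forces over permutations; by the Birkhoff-von Neumann theorem a
    permutation supported on the positive entries always exists."""
    n = M.shape[0]
    M = M.copy()
    terms = []
    while M.max() > tol:
        for perm in itertools.permutations(range(n)):
            if all(M[i, perm[i]] > tol for i in range(n)):
                break
        weight = min(M[i, perm[i]] for i in range(n))
        P = np.zeros((n, n))
        for i in range(n):
            P[i, perm[i]] = 1.0
        terms.append((weight, P))
        M -= weight * P          # zeroes at least one entry per step
    return terms

M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
terms = birkhoff_decompose(M)
print(sum(w for w, _ in terms))   # the weights form a convex combination
```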
\subsection{Examples in infinite dimension}
In this section, we provide results in infinite dimensional spaces, which echo the ones described in finite dimension.
\subsubsection{Problems formulated in Hilbert or Banach sequence spaces}
The case of Hilbert spaces (or countable sequences) can be treated within our formalism and all the examples presented previously have their na\-tu\-ral counterpart in this setting. {In the same vein, one can also treat Banach sequence spaces $\ell^p$ for $1\leq p\leq\infty$}. We do not reproduce the results here due to space limitations. Let us however mention that two works treat this specific case with $\ell^1$ regularizers \cite{unser2016representer,adcock2016generalized}.
\subsubsection{Linear programming and the moment problem}\label{sec:momprob}
~
Let $\Omega$ be a {compact} metric space, $\mathcal{M}(\Omega)$ be the set of Radon measures on $\Omega$ and let $\mathcal{M}_+(\Omega)\subseteq \mathcal{M}(\Omega)$ be the cone of nonnegative measures on $\Omega$.
Let $\psi$ and $(\phi_i)_{1\leq i \leq m}$ denote a collection of continuous functions on $\Omega$.
Now, let $\Phi:\mathcal{M}(\Omega)\to \RR^m$ be defined by $(\Phi \mu)_i = \langle \phi_i, \mu\rangle$, where $ \langle \phi_i, \mu\rangle\eqdef\int_\Omega \phi_i \d \mu$, and consider the following linear program in standard form:
\begin{equation}\label{eq:linearprogramminginf}
\inf_{\substack{\mu \in \mathcal{M}_+(\Omega) \\ \Phi \mu = y}} \langle \psi,\mu\rangle.
\end{equation}
Applying \cref{thm:first} to Problem \eqref{eq:linearprogramminginf}, we get \cref{prop:linearprogramminginf} below. We do not provide a proof here, since it mimics very closely the one given for linear programming in finite dimension. The extreme rays of $\mathcal{M}_+(\Omega)$ can be described, arguing as in~\cite[Th.~15.9]{aliprantis_infinite_2006}, as the rays directed by the Dirac masses.
\begin{proposition}\label{prop:linearprogramminginf}
Assume that the solution set of \eqref{eq:linearprogramminginf} is {nonempty}.
Then, its extreme points are $m$-sparse, i.e. of the form
\begin{equation}
\mu = \sum_{i=1}^m \alpha_i \delta_{x_i}, \qquad x_i\in \Omega,\quad \alpha_i\geq 0.
\end{equation}
\end{proposition}
{
To make sure that the above proposition is non-trivial, one may wish to ensure that the solution set $\sol$ has indeed extreme points, using arguments from Section~\ref{sec:existextreme}. It is straightforward that $\sol$ is convex and does not contain any line. Now, let us endow $\mathcal{M}(\Omega)$ with the weak-* topology (\ie{} the coarsest topology for which $\mu\mapsto \int_{\Omega}\eta\d \mu$ is continuous for every $\eta\in \Cder{}(\Omega)$). By lower semi-continuity, $\sol$ is closed. Moreover, $\sol$ is locally compact since the closed convex cone $\mathcal{M}_+$ is itself locally compact (take any $\mu\in \mathcal{M}_+(\Omega)$, its neighborhood $\enscond{\nu\in \mathcal{M}_+(\Omega)}{\nu(\Omega)\leq \mu(\Omega)+1}$ is compact in the weak-* topology).
}
{Proposition~\ref{prop:linearprogramminginf}} is well known, see e.g. \cite{shapiro2001duality}. Note that if we optimize the linear form $\langle \psi,u \rangle$ over the set of probability measures instead of the set of nonnegative measures, we get the so-called moment problem \cite{shohat1943problem} for which we can obtain a similar result.
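To illustrate \cref{prop:linearprogramminginf} numerically, one can discretize the moment problem on a grid and enumerate basic feasible solutions directly. The sketch below (our own toy example; the choice of moments $\int \d\mu = 1$, $\int x \d\mu = 0.3$ and of the objective is arbitrary) exhibits an optimal measure supported on at most $m$ atoms:

```python
import itertools
import numpy as np

# Discretized moment problem: minimize <psi, mu> over nonnegative measures mu
# on a grid of N points, under m linear moment constraints.
N, m = 40, 2
x = np.linspace(0.0, 1.0, N)
A = np.vstack([np.ones(N), x])          # constraints: total mass and mean
y = np.array([1.0, 0.3])
psi = (x - 0.5) ** 2                    # linear objective

# Extreme points of the feasible set are basic feasible solutions, supported
# on at most m grid points; for this toy size we enumerate them by brute force.
best_val, best = np.inf, None
for S in itertools.combinations(range(N), m):
    AS = A[:, list(S)]
    if abs(np.linalg.det(AS)) < 1e-12:
        continue
    muS = np.linalg.solve(AS, y)
    if (muS >= -1e-12).all():
        val = psi[list(S)] @ muS
        if val < best_val:
            best_val, best = val, (S, muS)

support, weights = best
print(len(support))   # the optimal vertex uses at most m = 2 atoms
```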
\subsubsection{The total variation ball}
Let $\Omega$ denote an open subset of $\RR^d$ and $\mathcal{M}(\Omega)$ denote the set of Radon measures on $\Omega$.
The total variation ball $B_\mathcal{M} = \{u \in \mathcal{M}(\Omega), \|u\|_{\mathcal{M}(\Omega)}\leq 1\}$ plays a critical role for problems such as super-resolution \cite{candes2014towards,Tang2013Compressed,Duval2015Exact}. {It is compact for the weak-* topology and }its extreme points are the Dirac masses: $\ext(B_\mathcal{M})=\{\pm \delta_x, x\in \Omega\}$. Hence total variation regularized problems of the form:
\begin{equation*}
\inf_{u \in \mathcal{M}(\Omega)} f(\Phi u) + \|u\|_{\mathcal{M}(\Omega)},
\end{equation*}
yield $m$-sparse solutions (under an existence assumption). A few variations around this central result were provided in \cite{fernandez2016super}.
\paragraph{Demixing of Sines and Spikes}
In \cite[Page 262]{fernandez2016super}, the author presents a regularization of the type
\[
\|\mu\|_{\mathcal{M}}+\eta\|v\|_1
\]
where $\eta>0$ is a tuning parameter, $\mu$ a complex measure and $v\in\mathbb C^n$ a sparse vector. Define $E$ as the set of $(\mu,v)$ where $\mu$ is a complex Radon measure on a domain $\Omega$ and $v\in\mathbb C^n$. Consider the unit ball
\[
B\eqdef\{(\mu,v)\in E\ :\ \|\mu\|_{\mathcal{M}}+\eta \|v\|_1\leq 1\}.
\]
Its extreme points are the points
\begin{itemize}
\item
$(a\delta_t,0)$ for all $t\in\Omega$ (and $\delta_t$ denotes the Dirac mass at point $t$) and all $a\in\mathbb C$ such that $|a|=1$,
\item
$(0,ae_k)$ for all $k=1,\ldots,n$ and all $a\in\mathbb C$ such that $|a|=1/\eta$ and $e_k$ denotes the vector with $1$ at entry $k$ and $0$ otherwise.
\end{itemize}
\paragraph{Group Total Variation: Point sources with a common support}
In \cite[Page 266]{fernandez2016super}, the author presents a regularization of the type
\[
\|\mu\|_{\mathcal{M}^n}:=\sup_{F:\Omega\to\mathbb C^n,\ \| F(t) \|_2\leq 1,\ t\in\Omega}\int_\Omega \langle F(t), \nu(t)\rangle \mathrm d|\mu|(t)
\]
where $F$ is continuous and vanishing at infinity, and $\mu$ is a vectorial Radon measure on~$\Omega$ such that, $|\mu|$-a.e., $\mu=\nu \cdot |\mu|$ with~$\nu$ a measurable function from~$\Omega$ to the unit sphere $\mathbb S^{n-1}$ and $|\mu|$ a positive finite measure on $\Omega$. Consider the unit ball
\[
B\eqdef\{\mu, \|\mu\|_{\mathcal{M}^n}\leq 1\}.
\]
Its extreme points are $a\delta_t$ for all $t\in\Omega$ (and $\delta_t$ denotes the Dirac mass at point $t$) and all $a\in\mathbb C^n$ such that $\|a\|_2=1$.
\subsubsection{Analysis priors in Banach spaces}
\label{sec:prior}
The analysis of extreme points of analysis priors in an infinite dimensional setting is more technical.
Fisher and Jerome~\cite{fisher_spline_1975} proposed an inte\-res\-ting result, which can be seen as an extension of \eqref{eq:extremeanalysisfinite}. This result was recently revisited in \cite{unser2017splines} and \cite{flinth2017exact}. Below, we follow the presentation in \cite{flinth2017exact}.
{
Let $\Omega$ denote an open set in $\RR^d$.
Let $\mathcal{D}'(\Omega)$ denote the set of distributions on~$\Omega$ and let $L:\mathcal{D}'(\Omega) \to \mathcal{D}'(\Omega)$ denote a linear operator with kernel $K=\mathrm{ker}(L)$.
We let $E=\{u\in \mathcal{D}'(\Omega), Lu\in \mathcal{M}(\Omega)\}$ and let $\|\cdot\|_K$ denote a semi-norm on $E$, which restricted to $K$ is a norm. We define a function space $\mathcal{B}(\Omega)$ as follows:
$$\mathcal{B}(\Omega) = \{u\in E, \|Lu\|_{\mathcal{M}(\Omega)}+\|u\|_K<+\infty\}$$
and equip it with the norm $\|u\|_{\mathcal{B}(\Omega)} = \|Lu\|_{\mathcal{M}(\Omega)} + \|u\|_K$.
We assume that $L$ is surjective, i.e. $\mathcal{M}(\Omega) = L(\mathcal{B}(\Omega))$, and that $K$ has a topological complement (with respect to $\mathcal{B}(\Omega)$), which we denote by $K^\perp$. This setting encompasses all surjective Fredholm operators for instance.
Under the stated assumptions, we can define a pseudo-inverse $L^+$ of $L$ relative to $K^\perp$ \cite{beutler}.
The representer theorems in \cite{fisher_spline_1975,unser2017splines,flinth2017exact} can be obtained using \cref{thm:first} as exemplified below.
\begin{proposition}
Let $B=\{u \in \mathcal{B}(\Omega), \|Lu\|_{\mathcal{M}(\Omega)}\leq 1\}$.
Then the extreme points of the set $C_{K^\perp}=B\cap K^\perp$ are of the form $\pm L^+\delta_{x}$, for $x\in \Omega$.
Let $f:\mathbb{R}^m\to \mathbb{R}\cup \{+\infty\}$ denote a convex function and define
\begin{equation*}
\sol = \argmin_{u \in \mathcal{B}(\Omega)} f(\Phi u) + \|Lu\|_{\mathcal{M}(\Omega)}.
\end{equation*}
Assume that $\sol$ is nonempty and does not contain $0$. Then the extreme points (if they exist) of $\pi_K(\sol)$ are of the form $u = \sum_{i=1}^m \alpha_i L^+ \delta_{x_i}$.
\end{proposition}
\begin{proof}
The proof mimics the finite dimensional case \eqref{eq:extremeanalysisfinite}.
First notice that $B=L^{-1}(B_\mathcal{M})$, where $L^{-1}(\{\mu\})$ is the pre-image of $\mu$ by $L$ and $B_\mathcal{M}$ is the unit total variation ball.
We have $L^{-1}(B_\mathcal{M})=L^+(B_\mathcal{M})+K$ and we can identify $C_{K^\perp}$ with $L^+(B_\mathcal{M})$. Since $L^+$ is bijective from $\mathcal{M}(\Omega)$ to $K^\perp$, the extreme points of $C_{K^\perp}$ are the image by $L^+$ of the Dirac masses.
The end of the proposition follows from Corollary \ref{cor:convex_fit} and from the fact that the lineality space of $\{u \in \mathcal{B}(\Omega), \|Lu\|_{\mathcal{M}(\Omega)}\leq 1\}$ is equal to $K$.
\end{proof}
Let us mention that, although the description of the extreme points follows directly from the results of Section 3, proving the existence of minimizers and the existence of extreme points is a considerably more difficult problem which needs a careful choice of topologies. The paper \cite{unser2017splines} provides a systematic way to construct Banach spaces and pseudo-inverse $L^+$ for ``spline admissible operators'' $L$ such as the fractional Laplacian. In addition, they prove existence of solutions by adding weak-* continuity assumptions on the sensing operator $\Phi$.}
\subsubsection{The total gradient variation}
Since its introduction in the field of image processing \cite{rudin1992nonlinear}, the total gradient variation has proved to be an extremely valuable regularizer in diverse fields of data science and engineering. It is defined, for any locally integrable function $u$, as
\begin{equation*}
TV(u) \eqdef \sup\left\{\int_{\RR^d} u\,\mathrm{div}(\phi) \, dx \;:\; \phi \in C^1_c(\RR^d)^d,\ \sup_{x\in \RR^d} \|\phi(x)\|_2\leq 1\right\}.
\end{equation*}
If the above quantity is finite, we say that $u$ has bounded variation and its gradient $D u$ is a Radon measure, with
\begin{equation*}
TV(u) = \int_{\RR^d}|D u| = \|Du \|_{(\Mm(\RR^d))^d}.
\end{equation*}
Working in $\vecgal=L^{d/(d-1)}(\RR^d)$, one is led to consider the convex set $\cvx=\{ u\in \vecgal, TV(u)\leq 1\}$, referred to as the TV unit ball.
The generalized gradient operator is not surjective. Hence, the analysis of \Cref{sec:prior} cannot help in finding the extreme points of the TV ball. Still, these were described in the fifties by Fleming \cite{fleming1957functions}, and refined analyses have been proposed more recently by Ambrosio, Caselles, Masnou and Morel in \cite{ambrosio2001connected}.
\begin{theorem}[Extreme points of the TV ball \cite{fleming1957functions,ambrosio2001connected}]
\label{thm:extremeBV}
The extreme points of the TV unit ball are the indicators of simple sets normalized by their perimeter, i.e. functions of the form $u=\pm \frac{\mathbbm{1}_F}{TV(\mathbbm{1}_F)}$, where $F$ is an indecomposable and saturated subset of $\RR^d$.
\end{theorem}
Informally, the simple sets of $\RR^d$ are the simply connected sets with no hole. We refer the reader to \cite{ambrosio2001connected} for more details. Using \cref{thm:extremeBV} in conjunction with our results tells us that functions minimizing the total variation subject to a finite number of linear constraints can be expressed as a sum of a small number of indicators of simple sets (see for instance \cref{fig:extremeTV}). This is yet another theoretical result explaining the common observation that total variation regularization tends to produce staircasing \cite{nikolova2000local}.
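Experiments of this kind rely on the Chambolle-Pock primal-dual algorithm \cite{Chambolle2011}. The sketch below (our own minimal variant, for the denoising problem $\min_u TV(u) + \tfrac{1}{2\lambda}\|u-f\|_2^2$ rather than the constrained problem of \cref{fig:extremeTV}) shows the basic iteration:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary (last difference set to zero)
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # discrete divergence, chosen so that div = -grad^T
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]
    d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]
    d[:, 1:] -= py[:, :-1]
    return d

def tv(u):
    gx, gy = grad(u)
    return np.sqrt(gx ** 2 + gy ** 2).sum()

def tv_denoise(f, lam, n_iter=300):
    # Chambolle-Pock iterations for  min_u TV(u) + ||u - f||^2 / (2*lam)
    tau = sigma = 1.0 / np.sqrt(8.0)       # sigma * tau * ||grad||^2 <= 1
    u = f.copy()
    u_bar = u.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        nrm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / nrm, py / nrm        # prox of the dual TV term (projection)
        u_new = (u + tau * div(px, py) + (tau / lam) * f) / (1.0 + tau / lam)
        u_bar = 2.0 * u_new - u            # over-relaxation step
        u = u_new
    return u

# piecewise-constant image plus noise
rng = np.random.default_rng(0)
f = np.zeros((32, 32))
f[8:24, 8:24] = 1.0
f += 0.3 * rng.standard_normal(f.shape)

lam = 0.2
u = tv_denoise(f, lam)

def energy(v):
    return tv(v) + ((v - f) ** 2).sum() / (2 * lam)
```

The recovered image is close to piecewise constant, in line with \cref{thm:extremeBV}.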
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=4cm]{meas_support_3.pdf} &
\includegraphics[height=4cm]{blobs.pdf} \\
{ (a)} & { (b)}
\end{tabular}
\caption{\label{fig:extremeTV}Illustration for the total gradient variation problem $\min \left\{ TV(u) : \Phi(u) =y \right\}$. Here, $\Phi$ is a linear mapping giving access to $3$ measurements, $y\in \mathbb{R}^3$, by performing the mean of an image $u$ of size $200 \times 200$ on $3$ different disks represented in (a). The TV problem is solved using a primal-dual algorithm, also known as the Chambolle-Pock algorithm \cite{Chambolle2011}. The recovered image is displayed in (b): it can be represented as the sum of $3$ indicator functions of simple sets. }
\end{center}
\end{figure}
\section*{Acknowledgments}
This work was initially started by two different groups composed of A. Chambolle, F. de Gournay and P. Weiss
on one side and C. Boyer, Y. De Castro and V. Duval on the other side. The authors realized that they were working on a similar topic when a few of them met at the Isaac Newton Institute (Cambridge) during the semester ``Variational methods and effective algorithms for imaging and vision'', supported by EPSRC Grant N.~EP/K032208/1. They therefore decided to write a joint paper and wish to thank the organizers for giving them this opportunity.
This work initially started with the help of T. Pock through a few numerical experiments and with discussions with J. Fadili and C. Poon.
The work of A.C. was partially supported by a grant of the Simons Foundation.
\bibliographystyle{abbrv}
\section{Conclusion}
In this paper we have developed representer theorems for convex regularized inverse problems \eqref{eq::mainproblem}, based on fundamental properties of the geometry of convex sets: the solution set can be entirely described using convex combinations of a small number of extreme points and extreme rays of the regularizer level set.
Obviously, the conclusion of Theorem~\ref{thm:first} is only nontrivial when $\minset$ has a ``sufficiently flat boundary'', in the sense that two or more faces of $\minset$ have dimension larger than $m$. For instance, if $\minset$ is strictly convex (\ie{} has only $0$-dimensional faces, except its interior\footnote{In this example, to simplify the discussion, we assume that $\vecgal$ has finite dimension.}), then the solution set $\sol$ is always reduced to a single extreme point of $\minset$! Nevertheless, several regularizers which are commonly used in the literature (notably sparsity-promoting ones) have that flatness property, and Theorem~\ref{thm:first} then provides interesting information on the structure of the solution set, as illustrated in Section~\ref{sec:app}.
To conclude, the structure theorem presented in this paper highlights the importance of describing the extreme points and extreme rays of the regularizer: this yields a fine description of the set of solutions of variational problems of the form~\eqref{eq::mainproblem}. Our theorem also suggests a principled way to design a regularizer. If a particular family of solutions is expected, then one may construct a suitable regularizer by taking the convex hull of this family.
Finally, representer theorems have had a lot of success in the fields of approximation theory and machine learning \cite{scholkopf2001generalized}, in the framework of reproducing kernel Hilbert spaces.
A reason for this success is that they make it possible to design efficient numerical procedures that yield \emph{infinite dimensional} solutions by solving \emph{finite dimensional} linear systems.
Such numerical procedures have recently been extended to the case of Banach spaces for some simple instances of the problems described in this paper \cite{fernandez2016super,de2017exact,flinth2017exact}. The price to pay when going from a Hilbert space to a Banach space is that semi-infinite convex programs have to be solved instead of simpler linear systems.
We foresee that the results in this paper may help in designing new efficient numerical procedures, since they allow one to parameterize the solutions using extreme points and extreme rays only.
\section{Introduction}
Let $\vecgal$ denote a {real} vector space.
Let $\Phi : \vecgal\to \RR^m$ be a linear mapping called \emph{sensing operator} and $u\in \vecgal$ denote a signal.
The main results in this paper describe the structural properties of certain solutions of the following problem:
\begin{equation}\label{eq::mainproblem}
\inf_{u \in \vecgal} f(\Phi u) + \reg(u),
\end{equation}
where $R:\vecgal\to \mathbb{R}\cup\{+\infty\}$ is a convex function called \emph{regularizer} and $f$ is an arbitrary convex or non-convex function called \emph{data fitting term}.
{
In many applications, one looks for ``sparse solutions'' that are linear sums of a few atoms. This article investigates the theoretical legitimacy of this usage.
}
\paragraph{Representer theorems and Tikhonov regularization}
The name \emph{representer theorem} comes from the field of machine learning \cite{Scholkopf2002Learning}.
To provide a first concrete example\footnote{Here, we follow the presentation of \cite{gupta2018continuous}.}, assume that $\Phi \in \RR^{m\times n}$ is a finite dimensional measurement operator and $L\in \mathbb{R}^{p\times n}$ is a linear transform. Solving an inverse problem using Tikhonov regularization amounts to finding the mini\-mi\-zers of
\begin{equation}\label{eq:tikhonov}
\min_{u\in \RR^n} \frac{1}{2}\|\Phi u - y\|_2^2 + \frac{1}{2}\|Lu\|_2^2.
\end{equation}
Provided that $\mathrm{ker}\Phi\cap \mathrm{ker} L=\{0\}$, it is possible to show that, whatever the data $y$ is, solutions are always of the form
\begin{equation}\label{eq:firstrepresentertheorem}
u^\star = \sum_{i=1}^m \alpha_i \psi_i + u_K,
\end{equation}
where $u_K\in \mathrm{ker}(L)$ and $\psi_i=(\Phi^T\Phi + L^TL)^{-1}\phi_i$, with $\phi_i^T\in \RR^n$ the $i$-th row of~$\Phi$. This result characterizes structural properties of the minimi\-zers without actually needing to solve the problem. In addition, when $\vecgal$ is an infinite dimensional Hilbert space, Equation \eqref{eq:firstrepresentertheorem} sometimes allows one to compute exact solutions by simply solving a finite dimensional linear system. This is a critical observation that explains the practical success of kernel methods and radial basis functions \cite{Wendland2005Scattered}.
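The representer formula \eqref{eq:firstrepresentertheorem} is easy to check numerically. In the sketch below (our own illustration, with $L$ injective so that $u_K=0$), the Tikhonov minimizer is recovered exactly as a combination of the atoms $\psi_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 3, 8, 8
Phi = rng.standard_normal((m, n))
L = rng.standard_normal((p, n))           # injective a.s., so ker(L) = {0}
y = rng.standard_normal(m)

H = Phi.T @ Phi + L.T @ L
u_star = np.linalg.solve(H, Phi.T @ y)    # minimizer of the Tikhonov problem

# Representer theorem: u_star lies in span{psi_1, ..., psi_m},
# where psi_i = H^{-1} phi_i are the columns of H^{-1} Phi^T.
Psi = np.linalg.solve(H, Phi.T)
coeffs, *_ = np.linalg.lstsq(Psi, u_star, rcond=None)
assert np.allclose(Psi @ coeffs, u_star)  # u_star is in the span of the psi_i
```

In fact $u^\star = H^{-1}\Phi^T y = \sum_i y_i \psi_i$, so the coefficients are the data $y$ itself.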
\paragraph{Representer theorems and convex regularization}
The Tikhonov regularization \eqref{eq:tikhonov} is a powerful tool when the number $m$ of observations is large and the operator $\Phi$ is not too ill-conditioned. However, recent results in the fields of compressed sensing \cite{Donoho2006Compressed}, matrix completion \cite{candes2009exact} or super-resolution \cite{Tang2013Compressed,candes2014towards}, to name a few, suggest that much better results may be obtained in general by using convex regularizers with level sets containing singularities.
Popular examples of regularizers in the finite dimensional setting include the indicator of the nonnegative orthant \cite{donoho2005sparse}, the $\ell^1$-norm~\cite{Donoho2006Compressed} or its composition with a linear operator \cite{rudin1992nonlinear} and the nuclear norm \cite{candes2009exact}.
Those results were nicely unified in \cite{chandrasekaran2012convex} and one of the critical arguments behind all these techniques is a representer theorem of type \eqref{eq:firstrepresentertheorem}.
In most situations, however, this argument is only implicit. The main objective of this paper is to state a generalization of \eqref{eq:firstrepresentertheorem} to arbitrary convex functions $R$.
It covers all the aforementioned problems, but also new ones for problems formulated over the space of measures.
To the best of our knowledge, the name ``{\it representer theorem}'' is new in the field of convex regularization and its first mention is due to Unser, Fageot and Ward in~\cite{unser2017splines}. Describing the solutions of~\eqref{eq::mainproblem} is however an old problem which has been studied since at least the 1940's in the case of Radon measure recovery.
\paragraph{Total variation regularization of Radon measures}
A typical example of an inverse problem in the space of measures is
\begin{equation}\label{eq::beurling}
\min_{\mu\in \Mm(\Omega)} |\mu|(\Omega) \quad \mbox{s.t.}\quad \Phi \mu=y,
\end{equation}
where $\Omega\subseteq \RR^N$, $\Mm(\Omega)$ denotes the space of Radon measures, $|\mu|(\Omega)$ is the total variation of the measure $\mu$ (see Section~\ref{sec:app}) and $\Phi \mu$ is a vector of \textit{generalized moments}, \ie{} $\Phi \mu=\left(\int_\Omega \varphi_i(x)\d\mu(x)\right)_{1\leq i\leq m}$ where $\{\varphi_i\}_{1\leq i\leq m}$ is a family of continuous functions (which ``vanish at infinity'' if $\Omega$ is not compact).
Problems of the form~\eqref{eq::beurling} have received considerable attention since the pioneering works of Beurling~\cite{Beurling1938} and Krein \cite{Krein1938}, sometimes under the name \textit{L-moment problem} (see the monograph~\cite{krein_markov_1977}). To the best of our knowledge, the first ``representer theorem'' for problems of the form~\eqref{eq::mainproblem} is given for~\eqref{eq::beurling} by \zuho{}~\cite{Zuhovickii1948} (see~\cite[Th. 3]{Zuhovickii1962} for an English version). It essentially states that
\begin{equation}\label{statement1}
\mbox{\textit{There exists a solution to~\eqref{eq::beurling} of the form $\displaystyle\sum_{i=1}^{r}a_i \delta_{x_i}$, with $r\leq m$.}}
\end{equation}
{A more precise result was given by Fisher and Jerome in~\cite{fisher_spline_1975}. When considering the problem \eqref{eq::beurling}, and for a bounded domain $\Omega$, the result reads as follows:
\begin{align}\label{statement2}
\begin{split}
&\mbox{\textit{The extreme points of the solution set to~\eqref{eq::beurling} are of the form}} \\
&\qquad\qquad \qquad\qquad\qquad \displaystyle\sum_{i=1}^{r}a_i \delta_{x_i}, \mbox{ \textit{with} } r\leq m.
\end{split}
\end{align}
Incidentally, the Fisher-Jerome theorem considers more general problems of the form:
\begin{equation}\label{eq::spline}
\min_{u\in \vecgal} |Lu|(\Omega) \quad \mbox{s.t.}\quad Lu\in \Mm(\Omega) \qandq \Phi u =y,
\end{equation}
where $\vecgal\subseteq \mathcal{D}'(\Omega)$ is a suitably defined Banach space of distributions, $L:\mathcal{D}'(\Omega)\rightarrow \mathcal{D}'(\Omega)$ maps $\vecgal$ onto $\Mm(\Omega)$ and $\Phi:\vecgal\to \RR^m$ is a continuous linear operator. We refer to Section~\ref{sec:app} for precise assumptions. Let us mention that the initial results by Fisher-Jerome were extended to a significantly more general setting in \cite{unser2017splines}.}
It is important to note that the Fisher-Jerome theorem~\cite{fisher_spline_1975} provides a much finer description of the solution set than \zuho's result \cite{Zuhovickii1948}. Indeed, the well-known Krein-Milman theorem states that, if $\vecgal$ is endowed with the topology of a locally convex Hausdorff vector space and $\cvx\subset \vecgal$ is compact convex, then $\cvx$ is the closed convex hull of its extreme points,
\begin{equation}
\label{eq:krein}
\cl \mathrm{conv} \left(\ext(\cvx)\right)=\cvx.
\end{equation}
In other words, the solutions described by the Fisher-Jerome theorem are sufficient to recover \emph{the whole set of solutions}.
Let us mention that the Krein-Milman theorem was extended by Klee~\cite{klee_extremal_1957} to unbounded sets: if $C$ is locally compact, closed, convex, and contains no line, then
\begin{equation}
\label{eq:klee}
\cl \mathrm{conv} \left(\ext(\cvx)\cup\rext(\cvx)\right)=\cvx,
\end{equation}
where $\rext(\cvx)$ denotes the union of the extreme rays of $\cvx$ (see Section~\ref{sec:notations} below).
\paragraph{``Representer theorems'' for convex sets}
As the Dirac masses are the extreme points of the total variation unit ball, each of the above-mentioned ``representer theorems'' for inverse problems actually reflects some phenomenon in the geometry of convex sets.
In that regard, the celebrated Minkowski-Carath\'eodory theorem~\cite[Th.~III.2.3.4]{hiriart-urruty_convex_1993} is fundamental: any point of a compact convex set in an $m$-dimensional space is a convex combination of (at most) $m+1$ of its extreme points.
In~\cite[Th.~(3)]{klee_theorem_1963}, Klee removed the boundedness assumption and obtained the following extension: any point of a closed convex set in an $m$-dimensional space is a convex combination of (at most) $m+1$ extreme points, or $m$ points, each an extreme point or a point in an extreme ray.
One purpose of the present paper is to point out the connection between the Fisher-Jerome theorem and a lesser known theorem by Dubins~\cite{dubins1962extreme} (see also~\cite[Exercise II.7.3.f]{bourbaki_espaces_2007}):
\medskip
\begin{center}
\noindent\emph{The extreme points of the intersection of $\cvx$ with an affine space of codimension $m$ are convex combination of $($\!at most$)$\footnote{In the rest of the paper, we omit the mention ``at most'', with the convention that some points may be chosen identical.} $m+1$ extreme points of~$\cvx$},
\end{center}
\medskip
\noindent
provided $\cvx$ is linearly bounded and linearly closed (see Section~\ref{sec:notations}). That theorem was extended by Klee~\cite{klee_theorem_1963} to deal with the unbounded case.
Although the connection with the Fisher-Jerome theorem is striking, Dubins' theorem actually provides one extreme point too many. In the case of~\eqref{eq::beurling}, it would yield two Dirac masses for one linear measurement. We provide in this paper a refined analysis of the case of variational problems, which ensures at most $m$ extreme points.
\paragraph{Contributions}
The main results of this paper yield a description of some solutions to \eqref{eq::mainproblem} of the following form:
\begin{equation*}
u^\star = \sum_{i=1}^r \alpha_i \psi_i + u_K,
\end{equation*}
where $r\leq m$, the atoms $\psi_i$ are identified with some extreme points (or points in extreme rays) of the regularizer level sets, and $u_K$ is an element of the so-called constancy space of $R$, \ie{} the set of directions along which $\reg$ is invariant. The results take the form \eqref{statement1}, when $f$ is an arbitrary function and the form \eqref{statement2} when it is convex.
We provide tight bounds on the number of atoms $r$ that depend on the geometry of the level sets and on the link between the constancy space of $R$ and the measurement operator $\Phi$.
Our general theorems then allow us to revisit many results of the literature (linear programming, semi-definite programming, nonnegative constraints, nuclear norm, analysis priors), yielding simple and accurate descriptions of the minimizers. Our analysis also allows us to characterize the solutions of a problem that has long resisted analysis: we provide a representation theorem for the minimizers of the total gradient variation \cite{rudin1992nonlinear} as sums of indicators of simple sets. This provides a simple explanation of the staircasing effect when only a few measurements are used.
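As an elementary finite-dimensional illustration of this representation (a sketch we add here; the choice $\reg=\|\cdot\|_1$ with random Gaussian $\Phi$ is ours and not taken from the text): the extreme points of the level sets of the $\ell^1$-norm on $\RR^n$ are the scaled signed basis vectors $\pm t\,e_i$, so a minimizer of $\|u\|_1$ subject to $m$ linear measurements should be supported on at most $m$ coordinates. This can be checked numerically by solving the problem as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 2, 30
Phi = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# min ||u||_1  s.t.  Phi u = y, as an LP in the splitting u = u_plus - u_minus
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
u = res.x[:n] - res.x[n:]
nnz = int(np.sum(np.abs(u) > 1e-8))
print(res.success, nnz)  # for generic data, the support has at most m = 2 coordinates
```

For generic (random) data the $\ell^1$ minimizer is unique and supported on exactly $m$ entries, in agreement with the representation above with $r\leq m$ atoms.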
{Let us mention that, shortly after this work was posted on arXiv, similar results appeared, with somewhat different proofs, in a paper by Bredies and Carioni~\cite{bredies_sparsity_2018}.}
\section{Abstract representer theorems}
\label{sec:abstract}
\subsection{Main result}
Our main result describes the facial structure of the solution set to
\begin{equation}\label{eq:thminreg}
\min_{u\in\vecgal} \reg(u) \quad \mbox{s.t.}\quad \Phi u=y, \tag{$\Pp$}
\end{equation}
where $y\in \RR^m$, $\Phi:\vecgal \rightarrow \RR^m$ is a linear operator, and $m\leq \mathrm{dim} \vecgal$, $m<+\infty$.
In the following, let $\minval$ denote the \emph{optimal value} of~\eqref{eq:thminreg}, $\sol$ denote its \emph{solution set}, and~$\minset$ denote the \emph{corresponding level set} of $\reg$,
\begin{align}
\label{eq:defminset}
\minset\eqdef \enscond{u\in \vecgal}{\reg(u)\leq \minval}.
\end{align}
\begin{theorem}
\label{thm:first}
Let $\reg:\vecgal \to \RR\cup\{+\infty\}$ be a convex function. Assume that $\inf_\vecgal \reg< t^\star< +\infty$, that $\sol$ is nonempty and that the convex set $\minset$ is linearly closed and contains no line.
Let $p\in\sol$ and let $j$ be the dimension of the face $\face{p}{\sol}$. Then $p$ belongs to a face of $\minset$ with dimension at most $m+j-1$.
In particular, $p$ can be written as a convex combination of:
\begin{itemize}
\item[$\circ$] $m+j$ extreme points of $\minset$,
\item[$\circ$] or $m+j-1$ points of $\minset$, each an extreme point of $\minset$ or in an extreme ray of $\minset$.
\end{itemize}
Moreover, $\rec{\sol}=\rec{\minset}\cap \mathrm{ker}(\Phi)$ and therefore $\mathrm{lin}(\sol)=\mathrm{lin}(\minset)\cap \mathrm{ker}(\Phi)$.
\end{theorem}
\begin{figure}
\centering
\tdplotsetmaincoords{63}{55}
\input{fig-th}
\caption{An illustration of \cref{thm:first} for $m=2$. The solution set $\sol=\minset\cap \Phi^{-1}(\{y\})$ is made of an extreme point and an extreme ray. The extreme point is a convex combination of~$\{e_0,e_1\}$. Depending on their position, the points in the ray are a convex combination of~$\{e_0,e_1,e_2\}$ or a pair of points, one in $\rho_1$ and the other in $\rho_2$.}
\label{fig:thm}
\vspace{0.5cm}
\end{figure}
The proof of \cref{thm:first} is given in Section~\ref{sec:prooffirst}.
Before extending this theorem to a wider setting, let us formulate some remarks.
\begin{remark}[Extreme points and extreme rays of $\sol$]
In particular ($j=0$), \emph{each extreme point} of $\sol$ is a convex combination of $m$ extreme points of~$\minset$, or a convex combination of $m-1$ points of~$\minset$, each an extreme point of~$\minset$ or in an extreme ray. Similarly $(j=1)$, \emph{each point on an extreme ray} of~$\sol$ is a convex combination of $m+1$ extreme points of~$\minset$, or a convex combination of $m$ points of~$\minset$, each an extreme point of $\minset$ or in an extreme ray. {Hence, provided the assumptions of Klee's theorem (see \eqref{eq:klee}) hold, Theorem \ref{thm:first} completely characterizes the solution set.} An illustration is provided in \cref{fig:thm}.
\end{remark}
\begin{remark}[The hypothesis $\inf_\vecgal \reg< t^\star$\label{rem:infR}]
We have focused on the case $\minval>\inf_\vecgal \reg$ in the theorem since the case $\minval=\min \reg$ is easier.
In that case, $M=\Phi^{-1}(\{y\})$ can be in arbitrary position (\ie{} not necessarily tangent) w.r.t.\ $\minset=\argmin \reg$, and one can only use the general Dubins-Klee theorem~\cite{dubins1962extreme,klee_theorem_1963} to describe their intersection. As a result the conclusions of \cref{thm:first} are slightly weakened, $p$ belongs to a face of $\minset$ with dimension $m+j$, and one must add one more point in the convex combination (\eg{}, for $j=0$, each extreme point of $\sol$ is a convex combination of $m+1$ extreme points of $\minset$, or $m$ points\dots).
\end{remark}
\begin{remark}[Gauge functions or semi-norms] A common practice in inverse problems is to consider positively homogeneous regularizers $\reg$, such as (semi)-norms or gauge functions of convex sets. In that case the extreme points of $\minset$ correspond, up to a rescaling, to the extreme points of $\{u\in\vecgal : \reg(u)\leq 1\}$. In several cases of interest, the extreme points of such convex sets are well understood, see Section~\ref{sec:app} for examples in Banach spaces or, for instance, the paper \cite[Sec. 2.2]{chandrasekaran2012convex} for examples in finite dimensional spaces.
\end{remark}
\begin{remark}[Extension to semi-strictly quasi convex functions]
\Cref{thm:first} can be extended to the case where $R$ is a semi-strictly quasi-convex function.
A function $R$ is said to be {\em semi-strictly quasi-convex} \cite{daniilidis2007some} if it is quasi-convex and if
\[
R(x)< R(y) \Longrightarrow R(\lambda x + (1-\lambda) y) < R(y) \quad \forall \lambda \in \oi{0}{1}.
\]
In words, semi-strictly quasi-convex functions are functions that, when restricted to a line, are successively decreasing, constant and increasing on their domain. In comparison, strictly quasi-convex functions are successively decreasing and increasing, while quasi-convex functions are successively non-increasing and non-decreasing.
The set of semi-strictly quasi-convex functions is a subset of quasi-convex functions and it contains all convex and strictly quasi-convex functions. In the proof of \cref{thm:first}, only the semi-strictly quasi-convex property is required to ensure that \eqref{eq:for_cvx_like} holds.
\end{remark}
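The distinction between these classes can be made concrete on the real line (a small numerical check we add for illustration; the functions $g$ and $h$ are our own choices). The step function $g$ below is quasi-convex but not semi-strictly quasi-convex, because the defining implication fails at the midpoint, whereas the convex function $h=|\cdot|$ satisfies it:

```python
# g is quasi-convex (non-decreasing) but NOT semi-strictly quasi-convex:
# g(0) < g(2), yet the value at the midpoint is not strictly below g(2).
g = lambda x: 0.0 if x < 1 else 1.0

x, y, lam = 0.0, 2.0, 0.5
assert g(x) < g(y)
assert not (g(lam * x + (1 - lam) * y) < g(y))   # the implication fails for lam = 1/2

# h = |.| is convex, hence semi-strictly quasi-convex: the implication holds.
h = abs
assert h(x) < h(y) and h(lam * x + (1 - lam) * y) < h(y)
print("counterexample and example verified")
```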
\begin{remark}[Topological properties\label{rem:lsc}]
The assumption that $\minset$ is linearly closed is fulfilled in most practical cases, since $\vecgal$ is usually endowed with the topology of a Banach (or locally convex) vector space and $\reg$ is assumed to be lower semi-continuous (so as to guarantee the existence of a solution to~\eqref{eq:thminreg}).
{Note also that if $\reg$ is lower semi-continuous on any line (for the natural topology of the line), the set $\minset$ is linearly closed.
}
\end{remark}
\subsection{The case of level sets containing lines}
The reader might be intrigued by the assumption of \cref{thm:first} that $\minset$ contains no line, since in several applications the regularizer $\reg$ is invariant by the addition of, \eg, constant functions or low-degree polynomials (see Section~\ref{sec:app}). In that case, one is generally interested in the non-constant or non-polynomial part, and it is natural to consider a quotient problem for which the theorem applies. We describe below (see \cref{coro:lines}) how our result extends to the case where $\minset$ contains lines.
\begin{figure}[htbp]
\centering
\input{fig-coro1b}
\caption{Taking the quotient by $\vecrec{}=\mathrm{lin}(\minset)$ yields a level set $\tilde\minset$ with no line. In this figure, to simplify the notation, we have omitted the isomorphism $\qiso$ (\ie{} in this figure $\minset$ should be replaced with $\qiso(\minset)$, and similarly for $\sol$ and $\Phi^{-1}(\{y\})$).}
\label{fig:coro1b}
\end{figure}
If $\minset$ is linearly closed and contains some line, it is translation-invariant in the corresponding direction. The collection of all such directions is the lineality space of $\minset$ (see \Cref{sec:notations}); we denote it by $\vecrec{}\eqdef \mathrm{lin}(\minset)$ (typically, if $\reg$ is the composition of a linear operator and a norm, $\vecrec{}$ is the \emph{kernel} of that linear operator). Let $\projv:\vecgal \rightarrow \vecgal/\vecrec{}$ be the canonical projection map. We recall that there exists a linear isomorphism $\qiso: \vecgal\rightarrow (\vecgal/\vecrec{})\times \vecrec{}$ such that the first component of $\qiso(p)$ is $\projv(p)$ for all $p\in \vecgal$.
We may now describe the equivalence classes (modulo $\vecrec{}$) of the solutions.
\begin{corollary}
\label{coro:lines}
Let $\reg:\vecgal \to [-\infty,+\infty]$ be a convex function. Assume that $\inf_\vecgal \reg< t^\star< +\infty$, that $\sol$ is nonempty and that the convex set $\minset$ is linearly closed. Let $\vecrec{}\eqdef\mathrm{lin}(\minset)$ be the lineality space of $\minset$ and $d\eqdef \mathrm{dim} \Phi(\vecrec{})$. Let $p\in\sol$, let $\projv(p)$ denote its equivalence class, and let $j$ be the dimension of the face $\face{\projv(p)}{\projv(\sol)}$.
Then, $\projv(p)$ belongs to a face of $\projv(\minset)$ with dimension at most $m+j-d-1$.
In particular,
\begin{itemize}
\item[$\circ$] $\projv(p)$ is a convex combination of $m+j-d$ extreme points of $\projv(\minset)$,
\item[$\circ$] or $\projv(p)$ is a convex combination of $m+j-d-1$ points of $\projv(\minset)$, each an extreme point of $\projv(\minset)$ or in an extreme ray of $\projv(\minset)$.
\end{itemize}
As a result, letting $\qe_1,\ldots, \qe_r$ denote those extreme points (or points in extreme rays),
\begin{equation}\label{eq:convcomb}
p= \sum_{i=1}^r \theta_i \qiso^{-1}(\qe_i,0) + u_{\vecrec{}}, \qwhereq \theta_i\geq 0,\ \sum_{i=1}^r \theta_i =1, \qandq u_{\vecrec{}}\in \vecrec{}.
\end{equation}
\end{corollary}
The proof of \cref{coro:lines} is given in \Cref{sec:proofconvex}.
One can obtain an explicit representation of a solution $p\in \sol$ in terms of elements of $\vecgal$.
Indeed, let $W$ be some linear complement to $K=\mathrm{lin}(\minset)$. One may decompose $\minset = \tilde \minset + K$, where $\tilde{\minset}=\minset\cap W$, and observe that $\projv (\minset) $ and $\tilde \minset$ are isomorphic.
In this case, \cref{coro:lines} implies that
$p$ can be written as the sum of one point in $\mathrm{lin}(\minset)$ and of a convex combination of:
\begin{itemize}
\item[$\circ$] $m+j-d$ extreme points of $\tilde \minset$,
\item[$\circ$] or $m+j-1-d$ points of $\tilde \minset$, each an extreme point of $\tilde \minset$ or in an extreme ray of $\tilde \minset$.
\end{itemize}
\subsection{Extensions to data fitting functions}
In this section, we discuss the extension of the above results to more general problems of the form
\begin{equation}
\label{eq::data-fitting:convex}
\inf_{u \in \vecgal} f(\Phi u)+\reg(u), \tag{$\Pp_f$}
\end{equation}
where $f:\RR^m\rightarrow \RR\cup\{+\infty\}$ is an arbitrary fidelity term.
\subsubsection{Convex data fitting term}
When the data fitting function $f$ is \emph{convex}, we get the following result.
\begin{corollary} \label{cor:convex_fit}
Assume that $f$ is convex and that the solution set $\sol_{f}$ of \eqref{eq::data-fitting:convex} is nonempty. Let $p\in \sol_{f}$ be such that $\minset\eqdef \enscond{u\in \vecgal}{\reg(u)\leq \reg(p)}$ is linearly closed, let $\vecrec{}\eqdef\mathrm{lin}(\minset)$, and let $j$ be the dimension of the face $\face{\projv(p)}{\projv(\sol_{f})}$.
If $\inf_\vecgal \reg < \reg(p)$, then the conclusions of \cref{coro:lines} (or \cref{thm:first} if $\vecrec{}=\{0\}$) hold.
If $\inf_\vecgal \reg = \reg(p)$, they hold with $1$ more dimension (see \cref{rem:infR}).
\end{corollary}
Let us recall that, in view of \cref{rem:lsc}, if $\vecgal$ is a topological vector space and $R$ is lower semi-continuous, then $\minset$ is closed regardless of the choice of $p$.
\begin{proof}
Let $y=\Phi p$ and
consider the following problem:
\begin{align}
\label{eq:singleton}
\tag{$\Pp_{\{y\}}$}
\min_{u\in\vecgal } \reg(u) &\quad \mbox{s.t.}\quad
\Phi u = y.
\end{align}
Let $\sol_{\{y\}}$ denote its solution set. It is a convex subset of $\sol_{f}$, with $p\in \sol_{\{y\}}$. Additionally, if $j$ is the dimension of $\face{\projv(p)}{\projv(\sol_f)}$ (resp. $\face{p}{\sol_f}$ if $\minset$ contains no line), then the face $\face{\projv(p)}{\projv(\sol_{\{y\}})}$ (resp. $\face{p}{\sol_{\{y\}}}$) has dimension at most $j$, since $\sol_{\{y\}}\subseteq \sol_{f}$.
It suffices to apply \cref{coro:lines} (resp. \cref{thm:first}) to obtain the result.
\end{proof}
\begin{remark}[The case of a strictly-convex function]
In the case when $f$ is strictly convex, it is known that $\Phi \sol_f$ is a singleton, which means that $\sol_{\{y\}}=\sol_f$.
\end{remark}
\begin{remark}[The case of quasi-convex functions]
The result actually holds whenever the solution set $\sol_f$ is convex. In particular, this property holds when $f$ is quasi-convex and $R$ is convex.
\end{remark}
\subsubsection{Non-convex function}
\label{subsec:nonconvex}
In the general case, \ie{} when $f:\RR^m\rightarrow \RR\cup\{+\infty\}$ is an arbitrary function, it is difficult to describe the structure of the solution set. However, one may choose a solution $p_0$ (as before, provided it exists) and observe that it is also a solution to~\eqref{eq:thminreg} for $y\eqdef \Phi p_0$. Then, one may apply \cref{coro:lines}, but the difficult part is that the dimensions $j$ to consider are with respect to the solution set $\sol$ of the \emph{convex} problem~\eqref{eq:thminreg}. Nevertheless, if one is able to assert that the solution set $\sol$ has at least an extreme point $p$, then \cref{coro:lines} ensures that $p$ can be written in the form~\eqref{eq:convcomb}, where $r\leq m$ and the $\qe_i$'s are extreme points (or points in extreme rays) of $\minset$. Since $p$ must also be a solution to~\eqref{eq::data-fitting:convex}, one obtains that there exists a solution to~\eqref{eq::data-fitting:convex} of the form~\eqref{eq:convcomb}.
\subsection{Ensuring the existence of extreme points}\label{sec:existextreme}
It is important to note that, in \cref{thm:first}, the existence of a face of $\sol$ with dimension $j$ is not guaranteed (nor, for $j=0$, the existence of extreme points). The convex set $\sol$ might not even have any finite-dimensional face!
For instance, let $\vecgal$ be the space of Lebesgue-integrable functions on $[0,1]$. If $\reg(u)=\int_0^1|u(x)|\d x$, $\Phi:\vecgal\rightarrow \RR$ is defined by $u\mapsto \int_0^1 u(x)\d x$, and $y=1$, then
\begin{equation}
\sol=\enscond{u\in\vecgal}{\int_0^1 |u(x)|\d x\leq 1 \qandq \int_0^1 u(x)\d x=1}.
\end{equation}
It is possible to prove that such a set $\sol$ does not have any extreme point. As a consequence $\sol$ does not have any finite-dimensional face (otherwise an extreme point of the closure of a face would be an extreme point of $\sol$).
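For the reader's convenience, here is one way to carry out that proof (a sketch we supply, using the fact that Lebesgue measure is atomless). Let $u\in\sol$. Combining $\int_0^1 u(x)\d x = 1$ with $\int_0^1|u(x)|\d x\leq 1$ yields $\int_0^1|u(x)|\d x=1$ and $u\geq 0$ a.e. Since the measure $u\,\d x$ is atomless, there exists a measurable set $A\subseteq[0,1]$ with $\int_A u(x)\d x=\tfrac12$. Then
\begin{equation*}
u=\tfrac12\,\big(2u\mathbf{1}_A\big)+\tfrac12\,\big(2u\mathbf{1}_{[0,1]\setminus A}\big),
\end{equation*}
and both $2u\mathbf{1}_A$ and $2u\mathbf{1}_{[0,1]\setminus A}$ belong to $\sol$ and differ from $u$, so that $u$ is not an extreme point.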
However, \cref{thm:first} (in fact the Dubins-Klee theorem~\cite{dubins1962extreme,klee_theorem_1963}) asserts that, \textit{if there is} a finite-dimensional face in $\sol$, then $\minset$ has indeed extreme points (and possibly extreme rays), and the convex combinations of such points generate the above-mentioned face.
As a result, it is crucial to be able to assert a priori the existence of some finite-dimensional face for $\sol$, and this is where topological arguments come into play. If $\vecgal$ is endowed with the topology of a locally convex (Hausdorff) vector space, the theorems~\cite[3.3 and 3.4]{klee_extremal_1957}, which generalize the celebrated Krein-Milman theorem, state that
\emph{$\sol$ has an extreme point provided}
\begin{itemize}
\item[$\circ$] $\sol$ is nonempty, convex,
\item[$\circ$] $\sol$ contains no line,
\item[$\circ$] and $\sol$ is closed, locally compact.
\end{itemize}
The last two conditions hold in particular if $\sol$ is compact. Moreover, as in \cref{coro:lines}, the second condition can be ensured by considering a suitable quotient map, provided it preserves the other topological properties (\eg{} if $\mathrm{lin}(\cvx)$ has a topological complement).
{
\begin{remark}
Whereas local compactness is a very strong property for topological vector spaces (implying their finite-dimensionality, see~\cite[Th.~3,Ch.~1]{bourbaki_espaces_2007}), it is not so difficult to ensure the local compactness of $\sol$ in practice. Indeed, very often, even the existence of solutions is ensured using compactness arguments for a suitable weak or weak-* topology. The unbounded cases require more specific arguments, but let us mention that there are examples of cones which are locally compact without being contained in any finite-dimensional vector space. In Section~\ref{sec:momprob} below, we discuss the example of the cone $\mathcal{M}^+(\Omega)$ of non-negative measures over a compact set for the weak-* topology. Another example of a locally compact convex cone is
\[
\mathcal C=\Big\{ x\in\RR^\NN \text{ such that } x_n \ge 0 \text{ and } \sum_{n\in\mathbb{N} } x_n \omega_n \le \sum_{n\in\mathbb{N} } x_n <+\infty \Big\}\subseteq \ell^1(\NN)\,,
\]
for some non-decreasing positive sequence $(\omega_n)_n$ converging to $+\infty$ (note that $\omega_0<{1}$ for the cone to be non-empty). The cone $\mathcal{C}$ is locally compact for the strong topology.
Indeed, consider the intersection $K$ of the cone $\mathcal C$ and the strong unit ball, namely $K = \enscond{x=(x_n)_n}{x_n\geq0,\ \sum_n \omega_n x_n \le \sum_{n} x_n\leq 1}$, and consider a sequence of elements of $K$ denoted~$(x^k)_k \subset K$. Using a diagonal argument, up to the extraction of a subsequence, each $(x_n^k)_k$ converges to some $\bar{x}_n \geq0$. Furthermore, using that $\enscond{n}{\omega_n<1}$ is finite and Fatou's lemma, it holds that
\begin{align*}
\sum_{n } \bar x_n&\leq \liminf_k \sum_{n } x^k_n\leq1\quad \mathrm{and}\\%\quad\text{and}\quad
\sum_{n\,:\, \omega_n\geq1 } (\omega_n-1)\bar x_n&\leq \liminf_k \sum_{n\,:\, \omega_n\geq1 } (\omega_n-1) x^k_n\\
&\leq \liminf_k \sum_{n\,:\, \omega_n<1 } (1-\omega_n) x^k_n=\sum_{n\,:\, \omega_n<1 } (1-\omega_n)\bar x_n
\end{align*}
and we deduce that $\bar x= (\bar{x}_n)_n\in K$. Furthermore, one has
\[
||x^k-\bar{x}||_1 = \sum_{n=0}^M |x_n^k-\bar{x}_n| + \sum_{n>M} |x_n^k-\bar{x}_n|
\]
and $M$ will be chosen later. Finally,
\begin{align*}
\sum_{n>M} |x_n^k-\bar{x}_n| &\leq \sum_{n>M} (x_n^k+ \bar{x}_n) \leq (1/\omega_M) \sum_{n>M} \omega_n (x_n^k+\bar{x}_n)
\leq 2/\omega_M
\end{align*}
since $\omega_n/\omega_M \geq 1$ for $n>M$. Therefore, given $\varepsilon>0$, choosing $M$ large enough ensures that the second term $\sum_{n>M} |x_n^k-\bar{x}_n|$ is less than $\varepsilon$, and then choosing $k$ large enough leads to $||x^k-\bar{x}||_1\leq 2\varepsilon$.
\end{remark}
}
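The tail estimate $\sum_{n>M}|x_n^k-\bar x_n|\leq 2/\omega_M$ can be checked on a concrete instance (a numerical sketch we add; the weight sequence $\omega_n=(n+1)/4$ and the two elements of $K$ are assumptions of ours, chosen so that $\omega_0<1$):

```python
import numpy as np

N = 200
omega = (np.arange(N) + 1) / 4.0  # non-decreasing, omega_0 = 0.25 < 1, tends to infinity

def in_K(x):
    # membership in K: nonnegativity and sum(omega_n x_n) <= sum(x_n) <= 1
    return np.all(x >= 0) and np.dot(omega, x) <= x.sum() <= 1 + 1e-12

# two elements of K, with most of their mass on index 0 (where omega_0 < 1)
x = np.zeros(N); x[0] = 0.6; x[50] = 0.01
xbar = np.zeros(N); xbar[0] = 0.5; xbar[80] = 0.005
assert in_K(x) and in_K(xbar)

M = 40
tail = np.abs(x - xbar)[M + 1:].sum()
print(tail, 2 / omega[M])  # the tail is controlled by 2 / omega_M
```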
\section{Notation and Preliminaries}\label{sec:notations}
Throughout the paper, unless otherwise specified, $\vecgal$ denotes a finite or infinite dimensional real vector space and $\cvx\subseteq \vecgal$ is a convex set. Given two distinct points $x$ and $y$ in $\vecgal$, we let $\oi{x}{y}=\enscond{tx+(1-t)y}{0<t<1}$ and $[x,y]=\enscond{tx+(1-t)y}{0\leq t\leq 1}$ denote the open and closed segments joining $x$ to $y$. We recall the following definitions, and we refer to~\cite{dubins1962extreme,klee_extremal_1957} for more details.
\paragraph{Lines, rays, and linearly closed sets}
A \textit{line} is an affine subspace of $\vecgal$ with dimension $1$. An open half-line, \ie{} a set of the form $\rho=\enscond{p+tv}{t>0}$, where $p,v\in \vecgal$, $v\neq 0$, is called a \emph{ray} (through $p$).
We say that the set $\cvx$ is \emph{linearly closed} (resp. linearly bounded) if the intersection of $\cvx$ and a line of $\vecgal$ is closed (resp. bounded) for the natural topology of the line. If $\vecgal$ is a topological vector space and $\cvx$ is closed for the corresponding topology, then $\cvx$ is linearly closed.
If $\cvx$ is linearly closed and contains some ray $\rho=p+\RR_+^*v$, it also contains the endpoint $p$ as well as the rays $q+\mathbb{R}_+v$ for all $q\in \cvx$. Therefore, if $\cvx$ contains a ray (resp. line), it recesses in the corresponding direction.
\paragraph{Recession cone and lineality space} The set of all $v\in\vecgal$ such that $\cvx+\RR_+^*v\subseteq \cvx$ is a convex cone called the \emph{recession cone of $\cvx$}, which we denote by $\rec{\cvx}$. If $\cvx$ is linearly closed then so is $\rec{\cvx}$, and $\rec{\cvx}$ is the union of $0$ and all the vectors $v$ which direct the rays of $\cvx$. In particular, $\cvx$ contains a line if and only if the vector space
\begin{equation}
\label{eq::lineality space}
\mathrm{lin}(\cvx)\eqdef \rec{\cvx}\cap (-\rec{\cvx})
\end{equation}
is non trivial. The vector space $\mathrm{lin}(\cvx)$
is called the \emph{lineality space} of $\cvx$. It corresponds to the largest vector space of invariant directions for $\cvx$.
If $\vecgal$ is finite dimensional and $\cvx$ is closed, the recession cone coincides with the \emph{asymptotic cone}.
\paragraph{Extreme points, extremal rays, faces}
An \textit{extreme point} of $\cvx$ is a point $p\in \cvx$ such that $\cvx\setminus \{p\}$ is convex.
An \textit{extremal ray} of $\cvx$ is a ray $\rho \subseteq \cvx$ such that if $x,y\in \cvx$ and $\oi{x}{y}$ intersects $\rho$, then $\oi{x}{y}\subset \rho$. If $\cvx$ contains the endpoint~$p$ of $\rho$ (\eg{} if $\cvx$ is linearly closed), this is equivalent to $p$ being an extreme point of $\cvx$ and $\cvx\setminus \rho$ being convex.
Following~\cite{dubins1962extreme,klee_theorem_1963}, if $p\in \cvx$, the smallest face of $\cvx$ which contains $p$ is the union of $\{p\}$ and all the open segments in $\cvx$ which have $p$ as an inner point. We denote it by $\face{p}{\cvx}$. The (co)dimension of $\face{p}{\cvx}$ is defined as the (co)dimension of its affine hull. The collection of all elementary faces, $\{\face{p}{\cvx}\}_{p\in\cvx}$, is a partition of $\cvx$.
Extreme points correspond to the zero-dimensional faces of $\cvx$, while extreme rays are (generally a strict subcollection of the) one-dimensional faces.
\paragraph{Quotient by lines}
As noted above, if $\cvx$ is linearly closed, it contains a line if and only if the vector space $\mathrm{lin}(\cvx)$ defined in \eqref{eq::lineality space}
is nontrivial. In that case, letting $W$ denote some linear supplement to $\mathrm{lin}(\cvx)$
, we may write
\begin{equation}
\label{eq::quotientbyline}
\cvx= \qcvx+\mathrm{lin}(\cvx), \textrm{ with } \qcvx\eqdef \cvx\cap W
\end{equation}
and the corresponding decomposition is unique (\ie{} any element of $\cvx$ can be decomposed in a unique way as the sum of an element of $\qcvx$ and $\mathrm{lin}(\cvx)$). The convex set $\qcvx$ (isomorphic to the projection of $\cvx$ onto the quotient space $\vecgal/\mathrm{lin}(\cvx)$) is then linearly closed, and the decomposition of $\cvx$ in elementary faces is exactly given by the partition $\{\face{p}{\qcvx}+\mathrm{lin}(\cvx)\}_{p\in\qcvx}$, where $\face{p}{\qcvx}$ is the smallest face of $p$ in $\qcvx$.
One may check that $\qcvx$ contains no line, as its recession cone $\rec{\qcvx}$, the projection of $\rec{\cvx}$ onto $W$ parallel to $\mathrm{lin}(\cvx)$, is a \emph{salient} convex cone.
\section{Proofs of Section~\ref{sec:abstract}}
\label{sec:proof}
\subsection{Proof of Theorem~\ref{thm:first}}
\label{sec:prooffirst}
The set of solutions $\sol$ is precisely $\minset\cap \Phi^{-1}(\{y\})$, and the statement of the theorem amounts to describing its elementary faces. Since $\Phi^{-1}(\{y\})$ is an affine space with codimension at most $m$, the main theorem of~\cite{klee_theorem_1963} almost provides the desired conclusion, but, for our particular case, it yields one extreme point/ray too many. Here is how to obtain the correct number.
Let $p$ be a point of $\sol$ such that $\face{p}{\sol}$ has dimension $j$. Up to a translation, it is not restrictive to assume that $p=0$, so that $\sol = \minset \cap \mathrm{ker} \Phi$.
Let $\vecrest$ be the union of $\{0\}$ and all the lines $\ell$ such that $\minset\cap \ell$ contains an open interval which contains $0$. Note that $\vecrest$ is a linear space, the linear hull of $\face{0}{\minset}$.
We claim that $\codim_{\vecrest}\left(\vecrest\cap\mathrm{ker} \Phi\right)\leq m-1$. By contradiction, assume that there is a complement $Z$ to $\vecrest\cap\mathrm{ker}\Phi$ in $\vecrest$ with dimension $m$. Then $\restr{\Phi}{Z}$ has rank $m$, hence is a bijection from $Z$ onto $\RR^m$, and we may define
\begin{equation*}
z\eqdef -\frac{\theta}{(1-\theta)}(\restr{\Phi}{Z})^{-1}(\Phi u_0) \in Z\subset \vecrest,
\end{equation*}
where $\theta\in ]0,1[$ and $u_0\in \minset$ is such that $\inf \reg \leq \reg(u_0)<\minval$. For $\theta$ small enough, $z\in \minset$; in fact, $z$ then lies on a line meeting $\minset$ in an open interval around $0$, so $z\in\face{0}{\minset}$, and since $\reg(0)=\minval$ and $\reg$ is convex, this forces $\reg(z)=\minval$. Moreover,
\begin{equation}
\Phi\left((1-\theta)z+\theta u_0 \right)= (1-\theta)\Phi z + \theta \Phi u_0 = 0.
\end{equation}
so that $(1-\theta)z+\theta u_0$ lies in {$\minset\cap \mathrm{ker} \Phi$}. Since $\reg(u_0) < \reg(z)$, and $\reg$ is convex
\begin{equation}
\label{eq:for_cvx_like}
\reg\left((1-\theta)z+\theta u_0 \right)< \reg\left(z\right)\leq\minval,
\end{equation}
we obtain a contradiction with the fact that $\minval$ is the minimal value of~\eqref{eq:thminreg}. As a result, $\codim_{\vecrest}\left(\vecrest\cap\mathrm{ker} \Phi\right)\leq m-1$.
Observing that $\vecrest\cap\mathrm{ker} \Phi$ is the linear hull of $\face{0}{\sol}$, hence $j= \mathrm{dim}\left(\vecrest\cap\mathrm{ker} \Phi\right)$, we deduce that
\begin{equation}
\mathrm{dim} \face{0}{\minset}\eqdef\mathrm{dim} \vecrest = \codim_{\vecrest}\left(\vecrest\cap\mathrm{ker} \Phi\right) + \mathrm{dim}\left(\vecrest\cap\mathrm{ker} \Phi\right) \leq m-1+j,
\end{equation}
and the first claim of the theorem is proved.
Now, applying the Carath\'eodory-Klee theorem (3) in~\cite{klee_theorem_1963}, $p$ is a convex combination of at most $m+j$ extreme points of $\face{0}{\minset}$, or of at most $m+j-1$ points of $\face{0}{\minset}$, each an extreme point or a point in an extreme ray. The conclusion stems from the fact that the extreme points (resp. rays) of $\face{0}{\minset}$ are extreme points (resp. rays) of $\minset$, see the proof of the main theorem in~\cite{klee_theorem_1963}.
\qed
\subsection{Proof of Corollary~\ref{coro:lines}}
\label{sec:proofconvex}
\begin{sloppypar}
Now, assume that the vector space ${\vecrec{}\eqdef\rec{\minset}\cap (-\rec{\minset})}$ is non trivial (otherwise the conclusion follows from Theorem~\ref{thm:first}). We note that for any $u\in \minset$, the convex function $v\mapsto \reg(u+v)$ is upper bounded by $\minval$ on $\vecrec{}$, hence is constant. As a result, possibly replacing $\reg$ with $\reg+\chi_{\minset}$, it is not restrictive to assume that $\reg$ is invariant by translation along $\vecrec{}$.
\end{sloppypar}
Now, let $\projv$, $\projp$ be the canonical quotient maps and define $\tilde{\reg}$ and $\tilde{\Phi}$ by the commutative diagrams
\[ \begin{tikzcd}
\vecgal \arrow{r}{\reg} \arrow[swap]{d}{\projv} & \left[-\infty,\infty\right] \\%
\vecgal/\mvecrec \arrow[swap]{ru}{\tilde{\reg}}&
\end{tikzcd}\qquad
\begin{tikzcd}
\vecgal \arrow{r}{\Phi} \arrow[swap]{d}{\projv} & \RR^m \arrow{d}{\projp} \\%
\vecgal/\mvecrec \arrow{r}{\tilde{\Phi}}& \RR^m/\Phi(\mvecrec)
\end{tikzcd}\]
Note that $\tilde{\reg}$ is a convex function and that $\tilde{\Phi}$ is a linear map with rank $m-d$, where $d\eqdef \mathrm{dim}\left(\Phi(\mvecrec)\right)$. It is then natural to consider the problem
\begin{equation}\label{eq:qthminreg}
\min_{\tilde{u}\in \vecgal/\mvecrec} \tilde{\reg}(\tilde{u}) \quad \mbox{s.t.}\quad \tilde{\Phi} \tilde{u}=\tilde{y},\tag{$\tilde{\Pp}$}
\end{equation}
where $\tilde{y}\eqdef\projp(y)$. In other words, one still wishes to minimize $\reg(u)$, but one is satisfied if the constraint $\Phi u=y$ merely holds up to an additional term $\Phi v$, where $v\in\mvecrec$. We observe that \eqref{eq:thminreg} and \eqref{eq:qthminreg} have the same value $\minval$, and the level set
\begin{equation}
\qminset\eqdef\enscond{\tilde{u}\in\vecgal/\mvecrec}{\tilde{\reg}(\tilde{u})\leq t^\star}=\projv(\minset)
\end{equation}
is convex linearly closed \emph{and contains no line}. Let $\qsol$ be the solution set to~\eqref{eq:qthminreg}. Theorem~\ref{thm:first} now describes the elements of the $j$-dimensional faces of $\qsol$ as convex combinations of $m-d+j$ (resp.\ $m-d+j-1$) extreme points (resp. extreme points or points in an extreme ray) of $\qminset$ that we denote by $\qe_1$,$\qe_2$,\ldots, $\qe_r$.
To conclude, we have obtained $\projv(p)=\sum_i \theta_i \qe_i$, for some $\theta\in \RR_+^r$ with $\sum_i\theta_i=1$.
Equivalently, since
$\qiso^{-1}(\cdot,0)$ provides one element in the corresponding class, this means that $p\in \qiso^{-1}(\sum_i \theta_i \qe_i,0)+\vecrec{}$. We get the claimed result by linearity of $\qiso^{-1}$.\qed
\begin{remark}
Incidentally, we note that for $\cible=\{y\}$, the face $\face{p}{\sol_{\{y\}}}$ is isomorphic (through $\qiso$) to $\face{\projv(p)}{\projv(\sol_{\{y\}})}\times (\vecrec{}\cap \mathrm{ker}\Phi)$.
\end{remark}
\bigskip
\begin{center}\textbf{Unusual elementary axiomatizations for abelian groups}\\ (\texttt{https://arxiv.org/abs/1909.08687})\end{center}
\noindent\textbf{Abstract.} One of the most studied algebraic structures with one operation is the Abelian group, which is defined as a structure whose operation satisfies the associative and commutative properties, has an identity element, and in which every element has an inverse element. In this article, we characterize Abelian groups by other properties, and we even reduce to two the number of properties to be fulfilled by the operation. For this, we make use of properties that, in general, are hardly ever mentioned.
\section*{Introduction}
\noindent The axiomatic presentation of mathematical theories allows the selection of different sets of axioms for its development. This choice depends on criteria of economy, elegance, simplicity or pedagogy.
The definition of an abelian group was initially formulated for finite groups (with the axioms of closure, associativity, commutativity and existence of inverses) by Kronecker in 1870 and by Weber for infinite groups in 1893 \cite{waerden}. In 1878 Cayley introduces the notion of abstract group and in 1882 von Dyck presents the first explicit definition of this notion \cite{wussing}. In 1938 Tarski \cite{tarski} defines an abelian group $(G, +)$ as an associative and commutative quasigroup and characterizes it in terms of subtraction using only two axioms: one stating that subtraction is an operation in $G$, and the other a property involving three variables. In 1952 Higman and Neumann \cite{higman} give an axiomatization for groups with one axiom in terms of division, using three variables.
In 1981 Neumann \cite{neumann81} proposes another single law in terms of multiplication and inversion, in equational form with four variables. In 1993 McCune \cite{mccune} presents cha\-rac\-terizations of abelian groups with one axiom that has three or five variables, using computational tools, but in terms of operations such as \{addition and inverse\}, \{double subtraction\}, \{double subtraction, identity\}, \{subtraction, identity\}, \{subtraction, inverse\}, \{double subtraction, inverse\}.
In all cases in which the groups or abelian groups are characterized in equational form with one axiom, the axiom has a lengthy expression and the proofs are intricate. However, in 1996 McCune and Sands \cite{mcsands} proposed a single law in implicative form, which is simpler than the equational form, not only in appearance but also in the proofs.
In the present work we give some characterizations of abelian groups with two elementary axioms whose expressions display an elegant simplicity. The same applies to the proofs which can help to understand this basic algebraic structure.
Algebraic structures can be classified, giving them special names, according to the operations they involve. We have limited ourselves to consider algebraic structures with one operation $(G,+)$ with $G$ a nonempty set, called \textit{magma} or \textit{groupoid}. The best known are \textit{Semigroup}, a groupoid with one associative operation (A); \textit{Monoid}, a semigroup that has neutral element (NE); \textit{Group}, a monoid such that all elements have inverse elements (IN); and \textit{Abelian group}, a commutative, (C), group.
However, there are other properties (see \cite{ilse}) such as: for all $a$, $b$, $c \in G$,
\begin{itemize}
\item CAI. \textit{Cyclic associativity I}: $a + (b + c) = c + (a + b)$.
\item CAII. \textit{Cyclic associativity II}: $a + (b + c) = (c + a) + b$.
\item AGI. \textit{Abel-Grassmann I}: $a + (b + c) = c + (b + a)$.
\item AGII. \textit{Abel-Grassmann II}: $a + (b + c) = (b + a) + c$.
\item R. \textit{Reduced product property}: $(a + b) + c = a + (c + b)$.
\item H. \textit{Hilbert property}\footnote{This property was presented as part of an axiomatization for real numbers in \cite[p. 51-52]{hilbert}.}: the equations $x + a = b$ and $a + y = b$ have a unique solution.
\end{itemize}
Algebraic structures whose operations satisfy some of these properties have also received special names, such as \textit{Quasigroup}\footnote{This concept was introduced by B. A. Hausmann and O. Ore in 1937 \cite[p. 22]{ilse}. An equivalent definition appears in \cite[p. 50]{warner}.}, a groupoid that satisfies H, and \textit{Loop}, a quasigroup having a neutral element.
\begin{theorem}\label{teor1}
If $(G, +)$ is a commutative semigroup then it satisfies the properties CAI, CAII, AGI, AGII and R.
\end{theorem}
\begin{theorem}\label{teor2}
If $(G, +)$ is an abelian group then it satisfies H.
\end{theorem}
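Theorems \ref{teor1} and \ref{teor2} can be checked mechanically on a small concrete abelian group. The following Python sketch (an illustration of ours, not part of the formal development) verifies all five properties and H on $(\mathbb{Z}_7,+)$:

```python
from itertools import product

n = 7
elems = range(n)
add = lambda a, b: (a + b) % n   # (Z_7, +), an abelian group
triples = list(product(elems, repeat=3))

# Theorem 1: a commutative semigroup satisfies CAI, CAII, AGI, AGII and R
assert all(add(a, add(b, c)) == add(c, add(a, b)) for a, b, c in triples)   # CAI
assert all(add(a, add(b, c)) == add(add(c, a), b) for a, b, c in triples)   # CAII
assert all(add(a, add(b, c)) == add(c, add(b, a)) for a, b, c in triples)   # AGI
assert all(add(a, add(b, c)) == add(add(b, a), c) for a, b, c in triples)   # AGII
assert all(add(add(a, b), c) == add(a, add(c, b)) for a, b, c in triples)   # R
# Theorem 2: H -- x + a = b and a + y = b each have exactly one solution
for a, b in product(elems, repeat=2):
    assert sum(add(x, a) == b for x in elems) == 1
    assert sum(add(a, y) == b for y in elems) == 1
```

The same exhaustive check works for any $\mathbb{Z}_n$ by changing the modulus.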
Although classical structures such as the abelian group and the commutative semigroup satisfy the properties mentioned, these properties do not follow from one another; the following examples show their independence.
\section*{Examples}
\begin{enumerate}
\item The natural numbers with usual addition and multiplication are commutative semigroups, have neutral elements 0 and 1 respectively, but are not quasigroups.
\item Integers, rational, real and complex numbers with the usual sum are commutative semigroups and loops.
\item A lattice with the meet ($\land$) and join ($\lor$) operations is a commutative semigroup, but not a quasigroup.
\item The integers with subtraction $x \circ y = x - y$ form a quasigroup that satisfies AGI but not AGII. It satisfies neither CAI nor CAII, R, A or C, and it has no neutral element.
\item The integers with reciprocal subtraction $x \bullet y = y - x$ form a quasigroup. It satisfies neither AGI nor CAI, CAII, R, A or C, and it has no neutral element.
\item A set $A$ with the second projection operation, defined by $x \ \pi_2 \ y = y$, is a non-commutative semigroup which satisfies AGII but not AGI. It satisfies neither CAI nor CAII or R. Every element is a left neutral element, but there is no (two-sided) neutral element, and it is not a quasigroup.
\item A set $A$ with the first projection operation, defined by $x \ \pi_1 \ y = x$, is a non-commutative semigroup which satisfies R but not AGI. It satisfies neither AGII nor CAI or CAII. Every element is a right neutral element, but there is no (two-sided) neutral element, and it is not a quasigroup.
\item On the real interval $[0, 1]$ the operation $p * q = 1 - pq$ is commutative but not associative. It has no neutral element, satisfies none of AGI, AGII, CAI, CAII or R, and it is not a quasigroup. This operation appears in probability theory: it gives the probability that two independent events, with probabilities of occurrence $p$ and $q$, do not occur simultaneously.
\item On the ordered set $\{0, 1/2, 1\}$ the operation defined by table \ref{tabla1} is commutative but not associative. It has neutral element 1. It satisfies none of AGI, AGII, CAI, CAII or R, and it is not a quasigroup. This operation is the logical equivalence used in a trivalent Heyting algebra, and it was used by Reichenbach in a formulation of quantum mechanics \cite[pp. 366-367]{jammer}.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|ccc}
$\leftrightarrow$&0&1/2&1\\ \hline
0&1&0&0 \\
1/2&0&1&1/2 \\
1&0&1/2&1\\
\end{tabular}
\end{center}
\caption{Trivalent logical equivalence} \label{tabla1}
\end{table}
\end{enumerate}
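Claims of this kind are easy to confirm by computation. The Python sketch below (our illustration; the window $\{-4,\dots,4\}$ of $\mathbb{Z}$ is an arbitrary sample) verifies example 4: subtraction satisfies AGI, while each of the other properties fails at an explicit counterexample.

```python
from itertools import product

sub = lambda a, b: a - b          # example 4: the integers with subtraction
triples = list(product(range(-4, 5), repeat=3))

# subtraction satisfies AGI: a - (b - c) == c - (b - a) for all integers
assert all(sub(a, sub(b, c)) == sub(c, sub(b, a)) for a, b, c in triples)
# ...but AGII, CAI, CAII, R, associativity and commutativity all fail:
assert sub(1, sub(0, 1)) != sub(sub(0, 1), 1)   # AGII fails at (1, 0, 1)
assert sub(1, sub(0, 0)) != sub(0, sub(1, 0))   # CAI fails at (1, 0, 0)
assert sub(1, sub(0, 0)) != sub(sub(0, 1), 0)   # CAII fails at (1, 0, 0)
assert sub(sub(0, 1), 0) != sub(0, sub(0, 1))   # R fails at (0, 1, 0)
assert sub(sub(1, 2), 3) != sub(1, sub(2, 3))   # not associative
assert sub(1, 2) != sub(2, 1)                   # not commutative
# and no integer acts as a two-sided neutral element
assert not any(all(sub(a, e) == a == sub(e, a) for a in range(-4, 5))
               for e in range(-4, 5))
```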
\section{Substituting associativity and commutativity}
A strategy to search for new characterizations of abelian groups is to exchange or replace some of the properties that define them by others such that these new properties, when mixed with the remaining ones, give us a new definition of the abelian group. To this end, we now establish some relations between structures that satisfy some of the properties mentioned above.
We can find structures characterized by some of the unusual properties mentioned above which, together with the property NE, yield the properties A and C.
\begin{theorem}\label{teor3}
If a groupoid $(G, +)$ has a neutral element, $e$, and satisfies the property AGII, then it is a commutative semigroup.
\end{theorem}
\begin{proof}
We first show that $+$ is commutative. Applying the properties NE and AGII we obtain
\[a + b = a + (b + e) = (b + a) + e = b + a\]
From properties AGII and C we deduce that $+$ is associative:
\[a + (b + c) = (b + a) + c = (a + b) + c \qedhere\]
\end{proof}
The proof of theorem \ref{teor4} below is analogous to the one of theorem \ref{teor3}.
\begin{theorem}\label{teor4}
If a groupoid $(G, +)$ has a neutral element and satisfies one of the pro\-perties CAI, CAII, AGI or R, then it is a commutative semigroup.
\end{theorem}
Note that from theorems \ref{teor3} and \ref{teor4} we can replace the associative and commutative properties by any of the properties CAI, CAII, AGI, AGII or R, in the de\-fi\-ni\-tion of an abelian group. This way we obtain another characterization of the structure under consideration, only with three axioms.
\begin{theorem}
The following conditions are equivalent:
\begin{enumerate}
\item $(G, +)$ is an abelian group.
\item $(G, +)$ is a groupoid that satisfies the properties NE, IN and CAI.
\item $(G, +)$ is a groupoid that satisfies the properties NE, IN and CAII.
\item $(G, +)$ is a groupoid that satisfies the properties NE, IN and AGI.
\item $(G, +)$ is a groupoid that satisfies the properties NE, IN and AGII.
\item $(G, +)$ is a groupoid that satisfies the properties NE, IN and R.
\end{enumerate}
\end{theorem}
It should be noted that the properties CAI, CAII, AGII and AGI have been used \cite[p. 10]{pad} for axiomatizing lattice theory.
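For small carriers, theorem \ref{teor3} can even be verified exhaustively. The Python sketch below (our brute-force illustration) enumerates every magma on three elements in which a fixed element is a two-sided neutral, and checks that whenever AGII holds, the operation is automatically commutative and associative:

```python
from itertools import product

E = (0, 1, 2)  # the element 0 plays the role of the neutral element

found = 0
# enumerate every magma on {0, 1, 2} in which 0 is a two-sided neutral element;
# only the four entries op(a, b) with a, b != 0 are free
for v11, v12, v21, v22 in product(E, repeat=4):
    op = [[0, 1, 2], [1, v11, v12], [2, v21, v22]]
    agii = all(op[a][op[b][c]] == op[op[b][a]][c]
               for a, b, c in product(E, repeat=3))
    if agii:
        found += 1
        # NE + AGII forces commutativity and associativity (theorem 3)
        assert all(op[a][b] == op[b][a] for a, b in product(E, repeat=2))
        assert all(op[a][op[b][c]] == op[op[a][b]][c]
                   for a, b, c in product(E, repeat=3))
assert found >= 1  # e.g. the table of Z_3 appears among the magmas found
```

The analogous checks for CAI, CAII, AGI and R (theorem \ref{teor4}) only require swapping the identity tested in `agii`.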
\section{Substituting inverse elements and neutral element}
From theorems \ref{teor1} and \ref{teor2} we deduce that an abelian group satisfies the properties CAI, CAII, AGI, AGII, R and H.
The next theorem indicates how the property H may be used to characterize the abelian groups, but without forgetting the commutative and associative properties.
\begin{theorem}\label{teor6}
If $(G, +)$ is an associative and commutative quasigroup then it is an abelian group.
\end{theorem}
\begin{proof}
Since $(G, +)$ is a quasigroup, for all $a \in G$, the equation $a + x = a$ has a unique solution, say $e_a$, i.e. $a + e_a = a$. By C, $a + e_a = e_a + a = a$.
Now, let $b \in G$ then by A, $a + b = (a + e_a) + b = a + (e_a + b)$ and as the equation $a + y = d$ with $d = a + b$, has a unique solution, we conclude that $b = e_a + b$. Therefore, $e_a + b = b = e_b + b$ and again by uniqueness of the solution of the equation $y + b = b$, it follows that $e_a = e_b$. Hence, $e_a$ is the neutral element of $G$ since the above argument is valid for all $b \in G$.
The existence of an inverse element for each element $a$ of $G$ is guaranteed by the exis\-tence of the solution of the equation $a + x = e$ with $e$ the neutral element, and pro\-perty C.
\end{proof}
From the arguments presented in the proof of theorem \ref{teor6} we can conclude:
\begin{theorem}\label{CA}
If $(G, +)$ is a quasigroup then it satisfies the cancellation property (CA), defined as follows: for all $a, b, c \in G$,
\begin{center}
if \ $a + b = a + c$ \ then \ $b = c$ \ \ and \ \ if \ $b + a = c + a$ \ then \ $b = c$.
\end{center}
\end{theorem}
Combining the results of theorems \ref{teor2} and \ref{teor6} we obtain other characterizations of abelian groups with three axioms.
\begin{theorem}
The following conditions are equivalent:
\begin{enumerate}
\item $(G, +)$ is an abelian group.
\item $(G, +)$ is a groupoid that satisfies the properties H, A and C.
\end{enumerate}
\end{theorem}
\section{Substituting all properties}
Below we present results in which we cha\-rac\-te\-ri\-ze the structure of an abelian group without using the usual properties. We focus on replacing the property NE, a key pro\-per\-ty that has been used in previous results, without having to resort to the properties A and C.
\begin{theorem}\label{CAI}
If $(G, +)$ is a quasigroup that satisfies the property CAI, then it is a loop.
\end{theorem}
\begin{proof}
For all $a \in G$, let
\begin{equation}
a + e_a = a
\end{equation}
with $e_a$ the unique solution of the equation $a + x = a$. Combining (1) with the pro\-per\-ty CAI we obtain $e_a + a = e_a + (a + e_a) = e_a + (e_a + a)$. From theorem \ref{CA} we get
\begin{equation}
e_a + a = a.
\end{equation}
Given $b \in G$, from (2) and CAI we have
\[(e_a + b) + a = (e_a + b) + (e_a + a) = e_a + (a + (e_a + b)) = e_a + (b + (a + e_a))\]
and by (1) and CAI we obtain
\[e_a + (b + (a + e_a)) = e_a + (b + a) = b + (a + e_a) = b +a.\]
Then $(e_a + b) + a = b +a$ and by theorem \ref{CA} we conclude that $e_a + b = b$. Hence, $e_a + b = e_b + b$ and again by theorem \ref{CA}, $e_a = e_b$. As this argument is valid for all $b \in G$ we arrive at the conclusion that $e_a$ is the neutral element of $G$ which proves the theorem.
\end{proof}
\begin{theorem}\label{CAII}
If $(G, +)$ is a quasigroup that satisfies the property CAII, then it is a loop.
\end{theorem}
We shall give two proofs for this theorem: one, similar to the previous proof, sho\-wing directly that there is a neutral element and the other, proving that under these assumptions the property CAI holds and so the assertion follows from theorem \ref{CAI}.
\begin{proof}[Proof 1]
As for all $a \in G$ the equation $a + x = a$ has a unique solution, say $e_a$, i.e.
\begin{equation}
a + e_a = a
\end{equation}
Applying (3) and the property CAII we have
\[e_a + a = e_a + (a + e_a) = (e_a + e_a) + a\]
From theorem \ref{CA} we get
\begin{equation}
e_a + e_a = e_a
\end{equation}
From (4) and CAII it follows that
\[a + e_a = a + (e_a + e_a) = (e_a + a) + e_a\]
Again by theorem \ref{CA} we have $a = e_a + a$ for all $a \in G$. Now let $b \in G$, by (3) and CAII we obtain $b + a = b + (a + e_a) = (e_a + b) + a$. By theorem \ref{CA}, $b = e_a + b$. Thus, $e_a + b = e_b + b$ and so $e_a = e_b$. As this argument is valid for all $b \in G$ we again conclude that $e_a$ is the neutral element of $G$ and as a consequence $(G, +)$ is a loop.
\end{proof}
\begin{proof}[Proof 2]
We first show that $+$ is associative. Let $a, b, c \in G$, as $G$ is a quasigroup there is $u \in G$ such that $b + u = c$. Hence,
\begin{equation}
a + (b + c) = a + (b + (b + u))
\end{equation}
Applying the property CAII repeatedly we get
\[a + (b + (b + u)) = ((b + u) + a) + b = (u + (a + b)) + b = (a + b) + (b + u)\]
and replacing $b+u$ by $c$ we conclude that $a + (b + c) = (a + b) + c$.
Now let us prove that $+$ satisfies CAI. Again applying the property CAII repeatedly to (5), we obtain
\begin{align*}
&a + (b + (b + u)) = a + ((u + b) + b) = (b + a) + (u + b) \\
&\hspace*{0.5cm}= (b + (b + a)) + u = ((a + b) + b) + u = b + (u + (a + b))
\end{align*}
By the property A and replacing $b+u$ by $c$, we have $a + (b + c) = c + (a + b)$. Then from theorem \ref{CAI} it follows that $(G, +)$ is a loop.
\end{proof}
\begin{theorem}\label{AGII}
If $(G, +)$ is a quasigroup that satisfies the property AGII, then it is a loop.
\end{theorem}
\begin{proof}
For each $a \in G$ let $e_a$ be the unique solution of the equation $a + x = a$, i.e.
\begin{equation}
a + e_a = a
\end{equation}
By (6) and property AGII we have
\[e_a + a = e_a + (a + e_a) = (a + e_a) + e_a = a + e_a = a\]
Therefore, for all $a \in G$, it holds that
\begin{equation}
e_a + a = a = a + e_a
\end{equation}
Now, let $b \in G$, then by (7) and the property AGII we obtain
\[b + a = b + (e_a + a) = (e_a + b) + a\]
Furthermore, by theorem \ref{CA} we conclude that $b = e_a + b$. Hence, $e_a + b = e_b + b$ and so $e_a = e_b$. Since this argument is valid for all $b \in G$, $e_a$ is the neutral element of $G$ which proves the theorem.
\end{proof}
\begin{theorem}\label{R}
If $(G, +)$ is a quasigroup which satisfies the property R, then it is a loop.
\end{theorem}
\begin{proof}
Since $G$ is a quasigroup, for each $a \in G$ there is $e_a \in G$ such that
\begin{equation}
a + e_a = a
\end{equation}
Therefore, given any $b \in G$, by (8) and the property R we get
\[a + b = (a + e_a) + b = a + (b + e_a) \]
By theorem \ref{CA} we obtain $b = b + e_a$. As a consequence $b + e_a = b + e_b$ and so $e_a = e_b$. This argument holds for all $b \in G$. We therefore conclude that there is a unique right neutral element, which we denote by $e$.
On the other hand, for all $a \in G$ the equation $y + a = a$ has a unique solution, say $\hat{e}_a$, i.e.
\begin{equation}
\hat{e}_a + a = a
\end{equation}
As $e$ is a right neutral element we have $(e + a) + e = e + a$. Applying (9) and the pro\-per\-ty R, we obtain $e + a = e + (\hat{e}_a + a) = (e + a) + \hat{e}_a$. Then $(e + a) + e = (e + a) + \hat{e}_a$ and by theorem \ref{CA}, $e = \hat{e}_a$ for all $a \in G$. Thus, $e$ is also left neutral element of $G$ which completes the proof.
\end{proof}
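Theorems \ref{CAI}--\ref{R} can be confirmed exhaustively for quasigroups of order 3, where the multiplication tables are exactly the Latin squares. The Python sketch below (our brute-force illustration) checks that every order-3 quasigroup satisfying CAI, CAII, AGII or R has a neutral element:

```python
from itertools import permutations, product

E = (0, 1, 2)
PERMS = list(permutations(E))

def latin_squares():
    # quasigroups on {0, 1, 2} = Latin squares: rows and columns are permutations
    for rows in product(PERMS, repeat=3):
        if all(len({rows[i][j] for i in E}) == 3 for j in E):
            yield rows

def has_neutral(op):
    return any(all(op[e][a] == a == op[a][e] for a in E) for e in E)

props = {
    "CAI":  lambda op, a, b, c: op[a][op[b][c]] == op[c][op[a][b]],
    "CAII": lambda op, a, b, c: op[a][op[b][c]] == op[op[c][a]][b],
    "AGII": lambda op, a, b, c: op[a][op[b][c]] == op[op[b][a]][c],
    "R":    lambda op, a, b, c: op[op[a][b]][c] == op[a][op[c][b]],
}
for name, prop in props.items():
    hits = 0
    for op in latin_squares():
        if all(prop(op, a, b, c) for a, b, c in product(E, repeat=3)):
            hits += 1
            assert has_neutral(op), name  # quasigroup + property => loop
    assert hits >= 1, name                # e.g. Z_3 satisfies every property
```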
Finally, the results of theorems \ref{CAI}-\ref{R} together with the theorems \ref{teor1}-\ref{teor4} and \ref{teor6} can be condensed into the following final result.
\begin{theorem}
The following conditions are equivalent:
\begin{enumerate}
\item $(G, +)$ is an abelian group.
\item $(G, +)$ is a groupoid that satisfies the properties H and CAI.
\item $(G, +)$ is a groupoid that satisfies the properties H and CAII.
\item $(G, +)$ is a groupoid that satisfies the properties H and AGII.
\item $(G, +)$ is a groupoid that satisfies the properties H and R.
\end{enumerate}
\end{theorem}
Examples 1 and 4 show the independence of each of the properties CAI, CAII, AGII and R with respect to property H, and examples 1 and 5 show the independence of AGI and H.
One might think that AGI would yield a theorem analogous to those presented in this section; however, this combination of properties does not even give a loop. For example, the integers with subtraction satisfy H and AGI, but they do not form an abelian group, since there is no neutral element and hence no inverses.
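The same phenomenon already occurs in a finite quasigroup. The Python sketch below (our illustration; $\mathbb{Z}_5$ with subtraction is our choice of example) exhibits a structure satisfying H and AGI with no neutral element:

```python
from itertools import product

n = 5
E = range(n)
sub = lambda a, b: (a - b) % n   # subtraction modulo 5

# H holds: x - a = b and a - y = b each have exactly one solution,
# so (Z_5, -) is a quasigroup
for a, b in product(E, repeat=2):
    assert sum(sub(x, a) == b for x in E) == 1
    assert sum(sub(a, y) == b for y in E) == 1
# AGI holds: a - (b - c) == c - (b - a)
assert all(sub(a, sub(b, c)) == sub(c, sub(b, a))
           for a, b, c in product(E, repeat=3))
# ...yet there is no neutral element, so H + AGI does not even yield a loop
assert not any(all(sub(a, e) == a == sub(e, a) for a in E) for e in E)
```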
% https://arxiv.org/abs/2202.11875
\title{Characterizing Spectral Properties of Bridge Graphs}
\begin{abstract}
The bridge graph is a special type of graph constructed by connecting identical connected graphs with path graphs. We discuss different types of bridge graphs $B_{n\times l}^{m\times k}$ in this paper, in particular complete-type bridge graphs, star-type bridge graphs, and full binary tree bridge graphs. We bound the second eigenvalue of the graph Laplacian of these graphs using methods from spectral graph theory. In general, we prove that for general bridge graphs $B_{n\times l}^2$, the second eigenvalue of the graph Laplacian lies between $0$ and $2$, inclusive. In the end, we discuss future work on infinite bridge graphs, giving definitions and related theorems to support it.
\end{abstract}
\section{Introduction}
Spectral graph theory is the study of graphs by means of the eigenvalues and eigenvectors of the graph Laplacian. It connects graphs to matrices and allows us to understand properties of graphs using more analytic means. Recently, spectral graph theory has found applications in machine learning and deep learning. In particular, there are many clustering algorithms based on spectral methods, like spectral clustering, that are more effective than traditional clustering methods like K-means. Many theoretical properties of these algorithms rely on bounding eigenvalues of the graph Laplacian. \newline
Research has already been done on extracting bounds on the eigenvalues of special graphs such as complete graphs, path graphs, the binary tree, and so on. In this paper, we will be focusing on some special types of graphs which are constructed by connecting identical connected graphs by a path or by multiple edges, which we call bridge graphs. Bridge graphs are constructed by taking a path graph $P_m$ with $m\geq 2$ and attaching identical graphs at each end of the path. \newline
We will then bound the second eigenvalues of the graph Laplacians of the graphs we discussed above. The second eigenvalues are the most important because the first eigenvalue of the graph Laplacian is always $0$. We'll use test vectors and Loewner partial ordering to approach this. At the end of this paper, we'll discuss constructing infinite bridge graphs, which are constructed by connecting a countably infinite number of identical connected graphs using path graphs. We'll also discuss the general idea for bounding the spectrum of the generalized Laplacian operator.
\section{Basic Definitions}
The following definitions are from Dan Spielman \cite{b1}. Assume we have a graph $G$ with vertex set $V$ and edge set $E$, and that the number of vertices is $|V|=n$. We label the vertices $\{1,2,3,\dots,n-1,n\}$.
\begin{definition}The adjacency matrix $\bold{M}$ of a weighted graph $G=(V,E, w)$ is defined as the matrix with the following entries
$$
\bold{M}(a,b) =
\begin{dcases}
w_{a,b} \quad (a,b)\in E\\
0 \quad (a,b)\notin E
\end{dcases}
$$
When the graph is unweighted, $w_{a,b} = 1$ for all $(a,b) \in E$.
\end{definition}
\begin{definition}
The degree of a vertex $a$ is the number of edges attached to it. For a weighted graph, the degree $d(a)$ of the vertex $a$ is the sum of the weights of the edges attached to it.
\end{definition}
\begin{definition}
The degree matrix $\bold{D}$ of a graph $G=(V,E)$ is a diagonal matrix whose entries are given by
$$
\bold{D}(a,b) =
\begin{dcases}
d(a)\quad &a=b\\
0 \quad &a\neq b
\end{dcases}
$$
\end{definition}
\begin{definition}
The graph Laplacian $\bold{L}$ of a graph $G$ is defined to be
$$
\bold{L} = \bold{D} - \bold{M}.
$$
\end{definition}
\begin{definition}(Loewner partial order)
Let $G_1$ and $G_2$ be graphs each with $n$ vertices. Then for the graph Laplacians of $G_1$ and $G_2$, $L_{G_1}$ and $L_{G_2}$, we write $L_{G_1} \succcurlyeq L_{G_2}$ if and only if $\bold{v}^T L_{G_1}\bold{v}\geq \bold{v}^T L_{G_2}\bold{v}$ for all vectors $\bold{v} \in \mathbb{R}^n$. The relation $\succcurlyeq$ above is called the Loewner partial order. In this case, the graphs $G_1$ and $G_2$ also have the relation $G_1 \succcurlyeq G_2$.
\end{definition}
\section{Basic Theorems}
The following theorems are from Dan Spielman \cite{b1}.
\begin{theorem}
Assume we have a weighted graph $G=(V,E)$, for every edge $e = (a,b)$, let the weight be $w_{a,b}$. For a function $\bold{x}: V \to \mathbb{R}^n$, the quadratic form of the graph Laplacian is
$$
\bold{x}^T\bold{L}\bold{x}=\sum_{(a,b)\in E} \bold{w}_{a,b}(x(a)-x(b))^2
$$
\end{theorem}
\begin{theorem}
For an $n\times n$ symmetric matrix $\bold{A}$ with ordered eigenvalues $\lambda_1\le \lambda_2\le \dots\le\lambda_n$ and corresponding eigenvectors $\phi_1,\phi_2,\dots,\phi_n$ we have
$$\lambda_i=\min\limits_{\substack{(\bold{x},\phi_k)=0,\\1\le k \le i-1} }{\frac{\bold{x}^T \bold{A}\bold{x}}{\bold{x}^T\bold{x}}}
$$
with
$$
\phi_i=\arg\min\limits_{\substack{(\bold{x},\phi_k)=0, \\1\le k \le i-1}} {\frac{\bold{x}^T \bold{A}\bold{x}}{\bold{x}^T\bold{x}}}.
$$
\end{theorem}
\begin{theorem}
For a graph $G=(V,E)$, with graph Laplacian $L_G$, ordered eigenvalues $\lambda_1\le \lambda_2\le \dots\le\lambda_n$, and corresponding eigenvectors $\phi_1,\phi_2,\dots,\phi_n$, we have $\lambda_1=0$ and $\phi_1=\bold{1}$ where $\bold{1} = (1, \ldots, 1)^{T}$.
\end{theorem}
\begin{theorem}
For an unweighted graph $G=(V,E)$, let $L_G$ be the graph Laplacian with ordered eigenvalues $0=\lambda_1\le \lambda_2\le \dots\le\lambda_n$. Then $G$ is connected if and only if $\lambda_2>0$.
\end{theorem}
\begin{theorem}
Suppose $G_1$ and $G_2$ are graphs with the relation $G_1\succcurlyeq cG_2$ for some $c > 0$. Then $\lambda_k(G_1)\geq c\,\lambda_k(G_2)$ for all $k$.
\end{theorem}
\begin{theorem}
If $G_1$ is a subgraph of $G_2$ then $G_1\preccurlyeq G_2$.
\end{theorem}
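These basic facts can be illustrated numerically. The following Python sketch (our illustration; it assumes NumPy is available) builds $\bold{L}=\bold{D}-\bold{M}$ for the path graph $P_4$ and checks the quadratic form identity together with the statements about $\lambda_1$ and $\lambda_2$:

```python
import numpy as np

# adjacency matrix of the path graph P_4: 1 - 2 - 3 - 4
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(M.sum(axis=1))
L = D - M                        # graph Laplacian (convention L = D - M)

eigvals = np.linalg.eigvalsh(L)  # sorted ascending
assert abs(eigvals[0]) < 1e-9              # the smallest eigenvalue is 0
assert np.allclose(L @ np.ones(4), 0)      # with the all-ones eigenvector
assert eigvals[1] > 1e-9                   # P_4 is connected, so lambda_2 > 0

# quadratic form: x^T L x = sum over edges (a,b) of (x(a) - x(b))^2
x = np.array([3.0, -1.0, 2.0, 0.5])
edges = [(0, 1), (1, 2), (2, 3)]
assert np.isclose(x @ L @ x, sum((x[a] - x[b]) ** 2 for a, b in edges))
```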
\section{$K_n$ Type Bridge Graphs}
Now we will discuss dumbbell-like graphs $D_n^m$, which are formed by joining two complete graphs with $n$ vertices, $K_{n,1}$ and $K_{n,2}$, with a path graph $P_m$. For example, if we connect two $K_8$'s together with $P_3$, we have the following:
\begin{center}
\includegraphics[scale = 0.35]{K2.png}
\end{center}
Notice that this is a simple example of a bridge graph. \\
\begin{corollary}(The Path Inequality)
Let $P_{a,b}$ be a path from $a$ to $b$ and let $G_{a,b}$ be the graph with the single edge $(a,b)$. Then the following path inequality holds:
$$
|P_{a,b}|\,P_{a,b} \succcurlyeq G_{a,b},
$$
where $|P_{a,b}|$ denotes the number of edges of $P_{a,b}$.
\end{corollary}
\begin{theorem}
For the dumbbell-like graph $D_n^m$ mentioned above, with $|V_{D_n^m}|=2n+m-2$, we have the following bounds on the second eigenvalue:
$$
\frac{2}{(2n+m-3)(m+1)}\le\lambda_2(D_n^m)\le\frac{12}{6(m-1)(n-1)+m(m-1)}
$$
\begin{proof}
Let $K_{n,1}$ be the first complete graph and $K_{n,2}$ be the second complete graph. We label the vertices of $K_{n,1}$ as $\{1,2,3, \dots ,n-1,n\}$ and suppose that the vertex shared by $K_{n,1}$ and $P_m$ is labeled as $n$. Then the next vertex on $P_m$, which is attached to the vertex $n$ is labeled as $n+1$. Repeat the same process until we label the vertex which is on both $P_m$ and $K_{n,2}$ as $n+m-1$. Finally we label the vertices of $K_{n,2}$ to be $\{n+m-1,n+m,\dots,2n+m-2\}$ \newline
To get the upper bound we construct test vector $\bold{x}$ to be
$$
\bold{x}(i) =
\begin{dcases}
m-1\quad &1\le i < n\\
2n+m-1-2i \quad & n\le i <n+m-1\\
1-m \quad &n+m-1\le i \le 2n+m-2
\end{dcases}.
$$
The vertices $n$ and $n+m-1$ are both on one of the complete graphs and the path graphs, so we need to check their value on both graphs to make sure our construction of test vector $\bold{x}$ is consistent. \newline
When $i=n$, the first case gives $x(i)=m-1$, and substituting $i=n$ into $x(i)=2n+m-1-2i$ also gives $x(i)=2n+m-1-2n=m-1$, so the two cases agree. When $i=n+m-1$, substituting into $x(i)=2n+m-1-2i$ yields $x(i)=2n+m-1-2(n+m-1)=1-m$, which matches the third case $x(i)=1-m$. Hence we have verified the consistency of our test vector.\\
Now we need to calculate the inner product of test vector $\bold{x}$ and the vector $\bold{1}=(1,\dots,1)^T$. We have
\begin{align*}
(\bold{x},\bold{1}) &= \sum_{i\in V}x(i)\\
&=\sum_{i=1}^{2n+m-2} x(i)\\
&=\sum_{i=1}^{n-1} x(i) + \sum_{i=n}^{n+m-1}x(i)+\sum_{i=n+m}^{2n+m-2} x(i)\\
&=\sum_{i=1}^{n-1} (m-1)+\sum_{i=n}^{n+m-1} (2n+m-1-2i) + \sum_{i=n+m}^{2n+m-2} (1-m).
\end{align*}
We now deal with the three terms separately. For the first term,
$$\sum_{i=1}^{n-1} (m-1) = (m-1)(n-1).$$
For the second term, since the sum has $m$ terms and $\sum_{i=n}^{n+m-1} i = \frac{m(2n+m-1)}{2}$, we get
$$\sum_{i=n}^{n+m-1} (2n+m-1-2i) = m(2n+m-1)-2\cdot\frac{m(2n+m-1)}{2} = 0.$$
Lastly,
$$\sum_{i=n+m}^{2n+m-2} (1-m) = (1-m)(n-1).$$
Adding the terms up, we get
$$(\bold{x},\bold{1}) = 0.$$ \newline
From the calculation above, we know that we can use $\bold{x}$ to get the upper bound of $\lambda_2(D_n^m)$. From Theorem 4 we have
\begin{align*}
\lambda_2(D_n^m)
&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&\le \frac{\sum_{(a,b)\in E} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j<n} (x(i)-x(j))^2}{\sum_{1\le i<n} x(i)^2+\sum_{n\le i<n+m-1} x(i)^2 +\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n\le i,j<n+m-1} (x(i)-x(j))^2}{\sum_{1\le i<n} x(i)^2+\sum_{n\le i<n+m-1} x(i)^2 +\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i<n} x(i)^2+\sum_{n\le i<n+m-1} x(i)^2 +\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}.
\end{align*}
The first and last term of the sum are zero, so this means that
\begin{align*}
\lambda_2(D_n^m)&\le\frac{0+\sum_{i=n}^{n+m-2} (x(i)-x(i+1))^2+0}{(m-1)(n-1)+\frac{m(m-1)(m+1)}{3}+(m-1)(n-1)}\\
&=\frac{12}{6(m-1)(n-1)+m(m-1)}.
\end{align*}
To get the lower bound, we use the Loewner partial order. \newline
For every pair of vertices $a,b$ of $D_n^m$, let the path graph $P_{a,b}$ be a path from $a$ to $b$, and let $G_{a,b}$ be the graph with the single edge $(a,b)$; then from the corollary we have $|P_{a,b}|P_{a,b} \succcurlyeq G_{a,b}$. We know that if $1\le a \le n$, then $a$ is a vertex of $K_{n,1}$ and thus $a$ is connected to vertex $n$. If $n+m-1\le a \le 2n+m-2$, then $a$ is a vertex of $K_{n,2}$ and $a$ is connected to vertex $n+m-1$. If $a$ and $b$ are in the same complete graph, then the length of $P_{a,b}$ is $1$; if $a$ and $b$ are in different complete graphs, then the length of $P_{a,b}$ is $1+(m-1)+1=m+1$. If either $a$ or $b$ lies on the path graph, then the length of $P_{a,b}$ is shorter than in the case when $a$ and $b$ are in different complete graphs. Hence, we conclude that $|P_{a,b}|\le m+1$. \newline
It follows that
$$G_{a,b}\preccurlyeq |P_{a,b}|P_{a,b} \preccurlyeq(m+1)P_{a,b}\preccurlyeq(m+1)D_n^m.
$$
Also, we notice that the complete graph $K_{2n+m-2}$ is obtained by connecting all pairs of vertices; it has $\binom{2n+m-2}{2}$ single edges. Thus
$$K_{2n+m-2} \preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}G_{a,b}\preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}(m+1)D_n^m = \binom{2n+m-2}{2}(m+1) D_n^m.
$$
Thus,
$$
2n+m-2=\lambda_2(K_{2n+m-2}) \le \binom{2n+m-2}{2}(m+1) \lambda_2(D_n^m).
$$
From the above, we get that $\lambda_2(D_n^m)\geq \frac{2}{(2n+m-3)(m+1)}$.
\end{proof}
\end{theorem}
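The bounds can be checked numerically. The sketch below (our illustration; it assumes NumPy and uses our own 0-indexed labeling) builds the Laplacian of $D_n^m$, computes $\lambda_2$, and verifies the lower bound of the theorem together with the Rayleigh-quotient upper bound given by the test vector from the proof:

```python
import numpy as np
from itertools import combinations

def dumbbell_laplacian(n, m):
    """Laplacian of D_n^m: two K_n's joined by a path P_m (endpoints shared)."""
    N = 2 * n + m - 2
    M = np.zeros((N, N))
    for a, b in combinations(range(n), 2):                # K_{n,1}: vertices 0..n-1
        M[a, b] = M[b, a] = 1
    for a, b in combinations(range(n + m - 2, N), 2):     # K_{n,2}: last n vertices
        M[a, b] = M[b, a] = 1
    for i in range(n - 1, n + m - 2):                     # the connecting path
        M[i, i + 1] = M[i + 1, i] = 1
    return np.diag(M.sum(axis=1)) - M

n, m = 8, 3
L = dumbbell_laplacian(n, m)
lam2 = np.linalg.eigvalsh(L)[1]
assert lam2 >= 2 / ((2 * n + m - 3) * (m + 1)) - 1e-9   # lower bound of the theorem
# the proof's test vector x is orthogonal to the all-ones vector, so its
# Rayleigh quotient upper-bounds lambda_2
x = np.array([m - 1.0] * (n - 1)
             + [2 * n + m - 1 - 2 * i for i in range(n, n + m - 1)]
             + [1.0 - m] * n)
assert abs(x.sum()) < 1e-9
assert lam2 <= (x @ L @ x) / (x @ x) + 1e-9
```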
We notice that when $m=1$, $D_n^1$ is the graph obtained by connecting two complete graphs with a single edge. More generally, consider a bridge graph $D_n^{2\times k}$ with $k\le n$, constructed from two identical complete graphs $K_{n,1}$ and $K_{n,2}$ joined by $k$ distinct edges $e_1,\dots,e_k$, where each edge
$e_i=(v_{i,1},v_{i,2})$ has $v_{i,1}$ in $K_{n,1}$ and $v_{i,2}$ in $K_{n,2}$. A picture is given below for $n = 8$ and $k=2$, with $e_1=(8,9)$ and $e_2=(7,16)$:
\begin{center}
\includegraphics[scale = 0.35]{K.png}
\end{center}
We can generalize our results from before to the following theorem.
\begin{theorem}
For the graph $D_n^{2\times k}$ mentioned above, with $|V_{D_n^{2\times k}}|=2n$, we have the following bound on the second eigenvalue of the graph Laplacian:
$$
\frac{2}{3(2n-1)}\le\lambda_2(D_n^{2\times k})\le\frac{2k}{n}.
$$
\begin{proof}
Since $D_n^2$ is a subgraph of $D_n^{2\times k}$, we have $D_n^2 \preccurlyeq D_n^{2\times k}$, and thus $\frac{2}{3(2n-1)}\le\lambda_2(D_n^2) \le\lambda_2(D_n^{2\times k})$.
For the other half of the inequality we label the vertices in the same way as for the graph $D_n^2$. The vertices $v_{i,1}$ and $v_{i,2}$ are adjacent for $1\le i \le k$, with $v_{i,1} \in K_{n,1}$ and $v_{i,2} \in K_{n,2}$, so that $1\le v_{i,1}\le n$ and $n+1\le v_{i,2}\le 2n$. We use a test vector $\bold{x}$ of the same type as for the graph $D_n^m$. Let
$$
\bold{x}(i) =
\begin{dcases}
1\quad &1\le i \le n\\
-1 \quad &n+1\le i \le 2n
\end{dcases}.
$$
We can easily verify that $(\bold{x},\bold{1})=0$. \newline
Now we can estimate the upper bound of $\lambda_2(D_n^{2\times k})$:
\begin{align*}
\lambda_2(D_n^{2\times k})
&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&\le \frac{\sum_{(a,b)\in E_{D_n^{2\times k}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n} x(i)^2}\\
&+ \frac{\sum_{i=1}^{k}(x(v_{i,1})-x(v_{i,2}))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le 2n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n} x(i)^2}.
\end{align*}
The first and last term of the sum are zero, so this means that
\begin{align*}
\lambda_2(D_n^{2\times k})&\le \frac{0+4k+0}{n+n}\\
&=\frac{4k}{2n}\\
&=\frac{2k}{n}.
\end{align*}
So we have finished bounding $D_n^{2\times k}$.
\end{proof}
\end{theorem}
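A numerical spot check of this theorem is straightforward. The sketch below (our illustration; it assumes NumPy, with $n=8$ and $k=2$ as arbitrary sample parameters) joins two copies of $K_n$ by $k$ edges and verifies both bounds:

```python
import numpy as np
from itertools import combinations

n, k = 8, 2
N = 2 * n
M = np.zeros((N, N))
for a, b in combinations(range(n), 2):          # K_{n,1}: vertices 0..n-1
    M[a, b] = M[b, a] = 1
for a, b in combinations(range(n, N), 2):       # K_{n,2}: vertices n..2n-1
    M[a, b] = M[b, a] = 1
for i in range(k):                              # k distinct bridging edges
    M[n - 1 - i, n + i] = M[n + i, n - 1 - i] = 1
L = np.diag(M.sum(axis=1)) - M
lam2 = np.linalg.eigvalsh(L)[1]
# the theorem: 2 / (3(2n-1)) <= lambda_2 <= 2k / n
assert 2 / (3 * (2 * n - 1)) - 1e-9 <= lam2 <= 2 * k / n + 1e-9
```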
Consider a general bridge graph $B_n^{2\times k}$ which is constructed by two arbitrary identical graphs $G_{n,1}$ and $G_{n,2}$ with $k$ different edges $e_1,\dots,e_k$ and $k\le n$. We also assume that there are $k$ distinct edges connecting the two graphs. An example is given in the figure:
\begin{center}
\includegraphics[scale = 0.35]{K3.png}
\end{center}
For every edge
$e_i=(v_{i,1},v_{i,2})$, where $v_{i,1}$ is in $G_{n,1}$ and $v_{i,2}$ is in $G_{n,2}$, we have the following theorem.
\begin{theorem}
For a bridge graph $B_n^{2 \times k}$ we have
$$
0< \lambda_2(B_n^{2\times k}) \le \frac{2k}{n}.
$$
\begin{proof}
The proof of the lower bound follows from the fact that $B_n^{2\times k}$ is a connected graph, so its second eigenvalue is positive. The upper bound is a straightforward consequence of Theorem 4.2: $B_n^{2\times k}$ is a subgraph of $D_n^{2\times k}$, hence $B_n^{2\times k}\preccurlyeq D_n^{2\times k}$, and it follows that $\lambda_2(B_n^{2\times k}) \le \lambda_2(D_n^{2\times k})\le \frac{2k}{n}$.
\end{proof}
\end{theorem}
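The theorem holds for any choice of identical halves; the sketch below (our illustration; it assumes NumPy and uses two cycles $C_n$ as the identical graphs, our arbitrary choice) checks the bounds numerically:

```python
import numpy as np

n, k = 10, 3
N = 2 * n
M = np.zeros((N, N))
for i in range(n):                         # two identical cycles C_n
    M[i, (i + 1) % n] = M[(i + 1) % n, i] = 1
    M[n + i, n + (i + 1) % n] = M[n + (i + 1) % n, n + i] = 1
for i in range(k):                         # k bridging edges between the copies
    M[i, n + i] = M[n + i, i] = 1
L = np.diag(M.sum(axis=1)) - M
lam2 = np.linalg.eigvalsh(L)[1]
# the theorem: 0 < lambda_2 <= 2k / n for the bridge graph B_n^{2 x k}
assert 1e-9 < lam2 <= 2 * k / n + 1e-9
```

The upper bound comes from the same $\pm 1$ test vector as before: edges inside each copy contribute nothing to the quadratic form, and each bridging edge contributes $4$.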
\section{$S_n$ Type Bridge Graphs}
The star graph, $S_n$, is another graph we will consider. The star graph is special because it is a complete bipartite graph, $K_{1,n-1}$. We can construct star-type bridge graphs, $S_n^m$, by connecting two identical star graphs $S_{n,1}$ and $S_{n,2}$ with a path graph $P_m$.
\begin{theorem}
For the star-like graphs $S_n^m$ we mentioned above, we have the following bound on the eigenvalues:
$$
\frac{2}{(2n+m-3)(m+3)} \le \lambda_2(S_n^m) \le \frac{4n+2}{2n+m-4}.
$$
\begin{proof}
Since $S_{n,1}$ is also a bipartite graph ,we can separate it to different set $V_{(1,1)}$ and $V_{(1,2)}$ where $V_{(1,1)}$ only has one vertex which is internal vertex for the tree $S_{n,1}$. And the remaining vertices are all leaves of tree and they are in $V_{(1,2)}$. Notice that no edges has both vertices in the same sets, and every edges that connect vertices in different set is part of the graph. We label the only vertex in $V_{(1,1)}$ as $1$ and remaining as $2,\dots,n$. \newline
We know that there is a vertex $v^1$ in $S_{n,1}$ is also on the graph $P_m$. Then we label the vertex which is attached to $v^1$ but not in graph $S_{n,1}$ as $n+1$, repeat the same process until we label the vertex $n+m-2$.We can also separate it to different set $V_{(2,1)}$ and $V_{(2,2)}$ where $V_{(2,2)}$ only has one vertex which is internal vertex for the tree $S_{n,2}$. And the remaining vertices are all leaves of tree and they are in $V_{(2,2)}$. Also notice that no edges has both vertices in the same sets, and every edges that connect vertices in different set is part of the graph. We label the only vertex in $V_{(1,1)}$ as $n+m-1$ and remaining as $n+m,\dots,2n$. We notice that there is a vertex $v^2$ in $S_{n,2}$ is also on the graph $P_m$. Thus $n+m-1\le v^2 \le 2n$. \newline
Now we can set the test vector. We need to discuss different cases based on whether $n$ is odd or even and on the values of $v^1$ and $v^2$. \newline
\textbf{Case 1:}
$n$ is odd, $v^1=1$, and $v^2=n+m-1$; the figure below shows the case $n=9$, $m=3$. \newline
\begin{center}
\includegraphics[scale = 0.20]{S1.jpg}
\end{center}
We choose the test vector
$$
\bold{x}(i) =
\begin{dcases}
1\quad &i=1,n+m-1\\
0\quad &i=n \text{ or } i=2n \text{ or } n+1\le i \le n+m-2\\
1 \quad &2\le i \le \frac{n-1}{2}\\
-1 \quad &\frac{n+1}{2}\le i \le n-1\\
1 \quad &n+m\le i \le \frac{3n+2m-3}{2}\\
-1\quad &\frac{3n+2m-1}{2}\le i \le 2n-1 \\
\end{dcases}.
$$
Notice that
\begin{align*}
(\bold{x},\bold{1})&=\sum_{i=1}^{2n+m-2} x(i)\\
&=x(1)+x(n+m-1)+\sum_{i=n+1}^{n+m-2}x(i)+\sum_{i=2}^{\frac{n-1}{2}}x(i)+\sum_{i=\frac{n+1}{2}}^{n-1} x(i)\\
&\quad+\sum_{i=n+m}^{\frac{3n+2m-3}{2}}x(i)+\sum_{i=\frac{3n+2m-1}{2}}^{2n-1} x(i)\\
&=1+1+0+\left(\frac{n-1}{2}-2+1\right) -\left(n-1-\frac{n+1}{2}+1\right)\\
&+\left(\frac{3n+2m-3}{2}-(n+m)+1\right)-\left(2n-1-\frac{3n+2m-1}{2}+1\right)\\
&=0.
\end{align*}
Hence $\bold{x}$ is orthogonal to $\bold{1}$, and we may use it as a test vector for the upper bound.
\begin{align*}
&\lambda_2(S_n^m)\\
&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{S_n^{m}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le n+m-2} (x(i)-x(j))^2+(x(1)-x(n+1))^2+(x(n+m-2)-x(n+m-1))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}.
\end{align*}
Notice that the first term is equal to
\begin{align*}
&\frac{\sum_{2\le j\le n} (x(1)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
=& \frac{\sum_{2\le j\le n} (x(1)-x(j))^2}{n-1+m-2+n-1}\\
=& \frac{\sum_{j=2}^{\frac{n-1}{2}}(x(1)-x(j))^2+\sum_{j=\frac{n+1}{2}}^{n-1}(x(1)-x(j))^2+(x(1)-x(n))^2}{2n+m-4}\\
=&\frac{4\cdot\frac{n-1}{2}+1}{2n+m-4}\\
=&\frac{2n-1}{2n+m-4}.
\end{align*}
The second term is equal to
\begin{align*}
\frac{0+1+1}{2n+m-4}=\frac{2}{2n+m-4}.
\end{align*}
By symmetry, the third term equals the first, so the third term is also $\frac{2n-1}{2n+m-4}$. Adding all terms together, we get
$$
\lambda_2(S_n^m)\le \frac{4n}{2n+m-4}.
$$\newline
\textbf{Case 2:}
$n$ is odd, $v^1=1$ and $v^2\neq n+m-1$ or $n$ is odd, $v^1\neq 1$ and $v^2= n+m-1$ \newline
\begin{center}
\includegraphics[scale = 0.4]{S2.jpg}
\end{center}
We will only discuss the case when $n$ is odd, $v^1=1$, and $v^2\neq n+m-1$, because the other case gives the same result by symmetry. When we label our vertices, we can make $v^2=n+m$. Then we can still use the same test vector as in Case 1, so the test vector is well defined. The upper-bound computation is the same as in Case 1, and we obtain the same bound.\newline
\textbf{Case 3:}
$n$ is odd, $v^1\neq1$ and $v^2\neq n+m-1$ \newline
\begin{center}
\includegraphics[scale = 0.4]{S3.png}
\end{center}
When we label the vertices, we can make $v^1=2$ and $v^2=n+m$. Then we can still use the same test vector as in Case 1, so the test vector is well defined. The upper-bound computation is the same as in Case 1, and we obtain the same bound.\newline
\textbf{Case 4:}
$n$ is even, $v^1=1$ and $v^2=n+m-1$ \newline
\begin{center}
\includegraphics[scale = 0.4]{S4.jpg}
\end{center}
We define the test vector as
$$
\bold{x}(i) =
\begin{dcases}
1\quad &i=1,n+m-1\\
0\quad & n+1\le i \le n+m-2\\
1 \quad &2\le i \le \frac{n-1}{2}\\
-1 \quad &\frac{n+1}{2}\le i \le n\\
1 \quad &n+m-1\le i \le \frac{3n+2m-3}{2}\\
-1\quad &\frac{3n+2m-1}{2}\le i \le 2n \\
\end{dcases}.
$$
Notice that
\begin{align*}
(\bold{x},\bold{1})&=\sum_{i=1}^{2n+m-2} x(i)\\
&=x(1)+x(n+m-1)+\sum_{i=n+1}^{n+m-2}x(i)+\sum_{i=2}^{\frac{n-1}{2}}x(i)\\
&\quad+\sum_{i=\frac{n+1}{2}}^{n-1} x(i)+\sum_{i=n+m}^{\frac{3n+2m-3}{2}}x(i)+\sum_{i=\frac{3n+2m-1}{2}}^{2n} x(i)\\
&=1+1+\left(\frac{n-1}{2}-2+1\right) \\
&-\left(n-1-\frac{n+1}{2}+1\right)+\left(\frac{3n+2m-3}{2}-(n+m)+1\right)\\
&-\left(2n-\frac{3n+2m-1}{2}+1\right)\\
&=0.
\end{align*}
Now we get
\begin{align*}
&\lambda_2(S_n^m)\\
&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{S_n^{m}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le n+m-2} (x(i)-x(j))^2+(x(1)-x(n+1))^2+(x(n+m-2)-x(n+m-1))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}.
\end{align*}
Notice that the first term is equal to
\begin{align*}
&\frac{\sum_{2\le j\le n} (x(1)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
=& \frac{\sum_{2\le j\le n} (x(1)-x(j))^2}{n-1+m-2+n-1}\\
=& \frac{\sum_{j=2}^{\frac{n-1}{2}}(x(1)-x(j))^2+\sum_{j=\frac{n+1}{2}}^{n}(x(1)-x(j))^2}{2n+m-4}\\
=&\frac{4\frac{n}{2}}{2n+m-4}\\
=&\frac{2n}{2n+m-4}.
\end{align*}
From Case 1 we know that the second term is $\frac{2}{2n+m-4}$, and by symmetry the third term equals the first, so the third term is $\frac{2n}{2n+m-4}$. Adding the three terms together gives us
$$
\lambda_2(S_n^m)\le \frac{4n+2}{2n+m-4}.
$$
\textbf{Case 5:}
$n$ is even, $v^1=1$ and $v^2\neq n+m-1$ or $n$ is even, $v^1\neq 1$ and $v^2= n+m-1$ \newline
\begin{center}
\includegraphics[scale = 0.4]{S5.png}
\end{center}
We will only discuss the case when $n$ is even, $v^1=1$, and $v^2\neq n+m-1$, because the other case gives the same result by symmetry. When we label the vertices, we can make $v^2=n+m$. Then we can still use the same test vector as in Case 4, so the test vector is well defined. The upper-bound computation is the same as in Case 4, and we obtain the same bound.\newline
\textbf{Case 6:}
$n$ is even, $v^1\neq1$ and $v^2\neq n+m-1$\newline
\begin{center}
\includegraphics[scale = 0.4]{S6.png}
\end{center}
When we label the vertices, we can make $v^1=2$ and $v^2=n+m$. Then we can still use the same test vector as in Case 4, so the test vector is well defined. The upper-bound computation is the same as in Case 4, and we obtain the same bound. \newline
Hence we have finished the upper bound, since we have exhausted all possible cases. \newline
For the lower bound, we compare our graph to the complete graph. For every pair of vertices $(a,b)$, let $P_{a,b}$ be a path in $S_n^m$ from $a$ to $b$, and let $G_{a,b}$ be the graph with the single edge $(a,b)$. From the lemma we have $|P_{a,b}|P_{a,b} \succcurlyeq G_{a,b}$. If $a$ and $b$ are both in the same star graph, without loss of generality both in $S_{n,1}$, then the length of the path from $a$ to $b$ is at most $2$.
If $a$ and $b$ are in different star graphs, without loss of generality $a\in S_{n,1}$ and $b\in S_{n,2}$, then the length of the path $P_{a,b}$ is at most $2+(m-1)+2=m+3$. It follows that
\begin{align*}
G_{a,b}\preccurlyeq |P_{a,b}|P_{a,b} & \preccurlyeq(m+3)P_{a,b} \\
&\preccurlyeq(m+3)S_n^m.
\end{align*}
Also we know that complete graph $K_{2n+m-2}$ has $\binom{2n+m-2}{2}$ single edges. Thus
\begin{align*}
K_{2n+m-2}\preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}G_{a,b}&\preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}(m+3)S_n^m \\
&= \binom{2n+m-2}{2}(m+3) S_n^m.
\end{align*}
Hence,
$$
2n+m-2=\lambda_2(K_{2n+m-2}) \le \binom{2n+m-2}{2}(m+3) \lambda_2(S_n^m).
$$
Finally, we arrive at
$$
\lambda_2(S_n^m)\geq \frac{2}{(2n+m-3)(m+3)}.
$$
\end{proof}
\end{theorem}
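The two bounds just proved can be sanity-checked numerically. The sketch below builds the Case 1 configuration, with the path attached to the two star centers (the other cases are covered by the same bounds); the construction details and helper names are ours:

```python
import numpy as np

def lambda2(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def star_bridge(n, m):
    """Two stars K_{1,n-1} whose centers are joined by a path P_m
    (the path's two endpoints are the star centers themselves)."""
    size = 2 * n + m - 2
    adj = np.zeros((size, size))
    c1, c2 = 0, size - n                      # the two star centers
    for leaf in range(1, n):                  # leaves of the first star
        adj[c1, leaf] = adj[leaf, c1] = 1.0
    for leaf in range(size - n + 1, size):    # leaves of the second star
        adj[c2, leaf] = adj[leaf, c2] = 1.0
    chain = [c1] + list(range(n, n + m - 2)) + [c2]   # bridge path
    for a, b in zip(chain, chain[1:]):
        adj[a, b] = adj[b, a] = 1.0
    return adj

results = []
for n in range(3, 8):
    for m in range(2, 6):
        lam = lambda2(star_bridge(n, m))
        lower = 2 / ((2 * n + m - 3) * (m + 3))
        upper = (4 * n + 2) / (2 * n + m - 4)
        results.append(lower - 1e-9 <= lam <= upper + 1e-9)
```

Both bounds are quite loose in practice; the computed $\lambda_2$ typically sits well inside the interval.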
\begin{section}{$T_n$ Type Bridge Graphs}
Now we will discuss binary tree-like graphs $T_n^m$, which are formed by joining two full binary trees with $n$ vertices, $T_{n,1}$ and $T_{n,2}$, with a path graph $P_m$. Notice that this is also a simple example of a bridge graph. \\
\begin{theorem}
For the binary tree-like graphs $T_n^m$ described above, we have the following bounds on the second eigenvalue:
$$
\frac{2}{(2n+m-1)(2 \log_2(n+1)+m-3)}\le \lambda_2{(T_n^m)}\le \frac{5}{2(n-1)}.
$$
\begin{proof}
Now we need to label $T_n^m$. We label $T_{n,1}$ as follows: the vertex of $T_{n,1}$ that is an ancestor of all other vertices is labeled $1$. Vertex $1$ has two children, which we label $2$ and $3$; then we label the children of $2$ as $4$ and $5$, the children of $3$ as $6$ and $7$, and so on until $n$. \newline
Then we label the path $P_m$. We know one end of $P_m$ is some vertex $i$ with $1\le i\le n$; we label the vertex on the path $P_m$ attached to $i$ as $n+1$, and repeat the process until the vertex $n+m-2$. The next vertex of $P_m$, attached to $n+m-2$, lies in the graph $T_{n,2}$. \newline
Then we label the graph $T_{n,2}$. The vertex of $T_{n,2}$ that is an ancestor of all other vertices is labeled $n+m-1$. Vertex $n+m-1$ has two children, which we label $n+m$ and $n+m+1$; then we label the children of $n+m$ as $n+m+2$ and $n+m+3$, the children of $n+m+1$ as $n+m+4$ and $n+m+5$, and so on until $2n+m-2$. Now we need to break into three cases depending on where the ends of $P_m$ are located. We know that one end is between $1$ and $n$ and the other is between $n+m-1$ and $2n+m-2$. \\
\textbf{Case 1:} One end of $P_m$ is $1$ and the other end of $P_m$ is $n+m-1$. Figure $2$ demonstrates the case of $T_{7}^3$:
\begin{center}
\includegraphics[scale = 0.5]{T.png}
\end{center}
We can set the test vector to be:
$$
\bold{x}(i) =
\begin{dcases}
0\quad &i=1,n+m-1\\
0\quad &n+1\le i \le n+m-2\\
1 \quad &i=2,n+m\\
1 \quad &2<i\le n \text{ and } i \text{ is a descendant of } 2\\
1 \quad &n+m<i\le 2n+m-2 \text{ and } i \text{ is a descendant of } n+m\\
-1\quad & \text{otherwise}
\end{dcases}.
$$
We notice that for elements of $T_{n,1}$, the number of $1$'s is $\frac{n-1}{2}$ and the number of $-1$'s is $\frac{n-1}{2}$; the same holds for elements of $T_{n,2}$. Hence we have
\begin{align*}
(\bold{x},\bold{1})&=\sum_{i=1}^{2n+m-2} x(i)\\
&=x(1)+x(n+m-1) + \sum_{i=2}^{n}x(i)+\sum_{i=n+m}^{2n+m-2} x(i)\\
&=0+0+\frac{n-1}{2}-\frac{n-1}{2}+\frac{n-1}{2}-\frac{n-1}{2}\\
&=0.
\end{align*}
We have finished verifying $(\bold{x},\bold{1})=0$. Now we estimate the upper bound of $\lambda_2(T_n^m)$:
\begin{align*}
\lambda_2(T_n^m)&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{T_n^{m}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le n+m-2} (x(i)-x(j))^2+(x(1)-x(n+m-1))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}.
\end{align*}
Notice that the first term is
\begin{align*}
&=\frac{ (x(1)-x(2))^2+(x(1)-x(3))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&=\frac{2}{n-1+0+n-1}\\
&=\frac{1}{n-1}.
\end{align*}
The second term is $0$, and the third term is
\begin{align*}
&=\frac{ (x(n+m-1)-x(n+m))^2+(x(n+m-1)-x(n+m+1))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&=\frac{2}{n-1+0+n-1}\\
&=\frac{1}{n-1}.
\end{align*}
Adding all three terms, we get
$$
\lambda_2({T_n^m})\le \frac{1}{n-1}+\frac{1}{n-1}=\frac{2}{n-1}.
$$\newline
\textbf{Case 2:} One end of $P_m$ is $1$, and the other end of $P_m$ is $J$ where $n+m\le J \le 2n$ or one end of $P_m$ is $n+m-1$ and the other end of $P_m$ is $K$ where $1\le K \le n$. Without loss of generality we only discuss when one end of $P_m$ is $1$ and the other end of $P_m$ is $J$ because the other case follows by an identical argument. Figure $3$ demonstrates $T_7^3$ in this case:
\begin{center}
\includegraphics[scale = 0.5]{T2.png}
\end{center}
We can set the test vector to be the same as in Case 1. Since we already know from Case 1 that $(\bold{x},\bold{1})=0$,
\begin{align*}
\lambda_2(T_n^m)&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{T_n^{m}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le n+m-2} (x(i)-x(j))^2+(x(1)-x(J))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}.
\end{align*}
Notice that the second term is
$$
\frac{0+1}{n-1+n-1}=\frac{1}{2(n-1)}.
$$
We have calculated the first term and the third term in case 1. Hence
$$
\lambda_2({T_n^m})\le \frac{1}{n-1}+\frac{1}{2(n-1)}+\frac{1}{n-1}=\frac{5}{2(n-1)}.
$$\newline
\textbf{Case 3:} One end of $P_m$ is $J$ where $2\le J \le n$ and the other end of $P_m$ is $K$ where $n+m\le K \le 2n$. Figure $4$ demonstrates $T_7^3$ in this case:
\begin{center}
\includegraphics[scale = 0.5]{T3.png}
\end{center}
Similar to before, we can set the test vector to be
$$
\bold{x}(i) =
\begin{dcases}
0\quad &i=1,n+m-1\\
0\quad &n+1\le i \le n+m-2\\
1 \quad &i=J,K\\
1 \quad &2<i\le n \text{ and } i \text{ is a descendant of } J \\
1 \quad &2<i\le n \text{ and } i \text{ is an ancestor of } J \\
1 \quad &n+m<i\le 2n+m-2 \text{ and } i \text{ is a descendant of } K\\
1 \quad &n+m<i\le 2n+m-2 \text{ and } i \text{ is an ancestor of } K\\
-1\quad & \text{otherwise}
\end{dcases}.
$$
We notice that for elements of $T_{n,1}$, the number of $1$'s in $\bold{x}$ is $\frac{n-1}{2}$ and the number of $-1$'s in $\bold{x}$ is $\frac{n-1}{2}$; the same holds for elements of $T_{n,2}$. Hence we have
\begin{align*}
(\bold{x},\bold{1})&=\sum_{i=1}^{2n+m-2} x(i)\\
&=x(1)+x(n+m-1) + \sum_{i=2}^{n}x(i)+\sum_{i=n+m}^{2n+m-2} x(i)\\
&=0+0+\frac{n-1}{2}-\frac{n-1}{2}+\frac{n-1}{2}-\frac{n-1}{2}\\
&=0.
\end{align*}
Now we can estimate the upper bound of second eigenvalue
\begin{align*}
\lambda_2(T_n^m)&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{T_n^{m}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n+m-2}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2} x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le n+m-2} (x(i)-x(j))^2+(x(J)-x(K))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}\\
&+ \frac{\sum_{n+m-1\le i,j\le 2n+m-2} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le n+m-2} x(i)^2+\sum_{n+m-1\le i\le 2n+m-2}x(i)^2}.
\end{align*}
Notice that the second term is
$$
\frac{0}{n-1+n-1}=0.
$$
The calculations of the first and third terms are essentially the same as in Case 1, with the same results. It follows that
$$
\lambda_2(T_n^m)\le \frac{1}{n-1}+0+\frac{1}{n-1}=\frac{2}{(n-1)},
$$
which gives us our upper bound estimation. \newline
For the lower bound, we compare our graph to the complete graph. For every pair of vertices $(a,b)$, let $P_{a,b}$ be a path in $T_n^m$ from $a$ to $b$, and let $G_{a,b}$ be the graph with the single edge $(a,b)$; then from the lemma we have $|P_{a,b}|P_{a,b} \succcurlyeq G_{a,b}$. If $a$ and $b$ are both in the same binary tree, without loss of generality both in $T_{n,1}$, then from the definition of a full binary tree the distance from the vertex $1$ to any leaf is $\log_2{(n+1)}-1$, so the length of the path from $a$ to $b$ is at most $2\log_2{(n+1)}-2$.
If $a$ and $b$ are in different binary trees, without loss of generality $a\in T_{n,1}$ and $b\in T_{n,2}$, then the length of the path $P_{a,b}$ is the longest when $a$ and $b$ are leaves. Hence the length of the path $P_{a,b}$ is at most $2\log_2{(n+1)}+m-3$; that is, $|P_{a,b}|\le 2\log_2(n+1)+m-3$.
It follows that
\begin{align*}
G_{a,b}\preccurlyeq |P_{a,b}|P_{a,b} & \preccurlyeq(2\log_2(n+1)+m-3)P_{a,b} \\
&\preccurlyeq(2\log_2(n+1)+m-3)T_n^m.
\end{align*}
Also, we know that the complete graph $K_{2n+m-2}$ has $\binom{2n+m-2}{2}$ single edges. Thus
\begin{align*}
K_{2n+m-2}\preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}G_{a,b}&\preccurlyeq \sum_{(a,b)\in E_{K_{2n+m-2}}}(2\log_2(n+1)+m-3)T_n^m \\
&= \binom{2n+m-2}{2}(2\log_2(n+1)+m-3) T_n^m.
\end{align*}
Hence,
$$
2n+m-2=\lambda_2(K_{2n+m-2}) \le \binom{2n+m-2}{2}(2\log_2(n+1)+m-3) \lambda_2(T_n^m).
$$
From the above, we conclude $\lambda_2(T_n^m)\geq \frac{2}{(2n+m-1)(2\log_2(n+1)+m-3)}$.
\end{proof}
\end{theorem}
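As with the star graphs, these bounds can be checked numerically on small instances. The sketch below builds the root-to-root Case 1 with heap-style labels (construction details are ours; $n$ must be of the form $2^h-1$):

```python
import math
import numpy as np

def lambda2(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def tree_bridge(n, m):
    """Two full binary trees on n = 2^h - 1 vertices, roots joined by a
    path P_m (Case 1); heap indexing: node i has children 2i+1, 2i+2."""
    size = 2 * n + m - 2
    adj = np.zeros((size, size))
    for off in (0, n + m - 2):                # the two trees
        for i in range(n):
            for c in (2 * i + 1, 2 * i + 2):
                if c < n:
                    adj[off + i, off + c] = adj[off + c, off + i] = 1.0
    chain = [0] + list(range(n, n + m - 2)) + [n + m - 2]   # bridge path
    for a, b in zip(chain, chain[1:]):
        adj[a, b] = adj[b, a] = 1.0
    return adj

results = []
for n in (3, 7, 15):
    for m in (2, 3, 5):
        lam = lambda2(tree_bridge(n, m))
        lower = 2 / ((2 * n + m - 1) * (2 * math.log2(n + 1) + m - 3))
        upper = 5 / (2 * (n - 1))
        results.append(lower - 1e-9 <= lam <= upper + 1e-9)
```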
We notice that when $m=2$ the graph $T_n^2$ is connected by a single edge. Now we do the same thing as we did for the complete graphs: when the graphs $T_{n,1}$ and $T_{n,2}$ are connected by $k$ different single edges $e_1,e_2,\dots,e_k$, we get the graph $T_n^{2\times k}$. An example for $k = 3$ is given below:
\begin{center}
\includegraphics[scale = 0.5]{T4.jpg}
\end{center}
We now have the following theorem.
\begin{theorem}
For the graph $T_n^{2\times k}$ described above, we have the following bound:
$$
\frac{2}{(2n+1)(2\log_2(n-1)+1)}\le \lambda_2(T_n^{2\times k})\le \frac{2k+2}{n-1}+\chi_{k\geq n-1}\frac{-3k+3n-3}{2(n-1)}
$$
where $\chi$ denotes the characteristic function.
\begin{proof}
We know that $T_n^2$ is a subgraph of $T_n^{2\times k}$, so we have $T_n^2\preccurlyeq T_n^{2\times k}$. Hence
$$
\frac{2}{(2n+1)(2\log_2(n-1)+1)}\le \lambda_2({T_n^2})\le \lambda_2(T_n^{2\times k}).
$$
This finishes the lower bound. \newline
For the upper bound we again use a test vector. Notice that the graph $T_{n,1}$ contains three different sets of vertices: $V_{1,1}$ contains only the vertex $1$; $V_{1,2}$ contains vertex $2$ and all of its descendants; and $V_{1,3}$ contains vertex $3$ and all of its descendants. The graph $T_{n,2}$ likewise contains three sets: $V_{2,1}$ contains only the vertex $n+1$; $V_{2,2}$ contains vertex $n+2$ and all of its descendants; and $V_{2,3}$ contains vertex $n+3$ and all of its descendants. \newline
We know that the edges $e_1,e_2,\ldots,e_k$ have $k$ endpoints in $T_{n,1}$ and $k$ different endpoints in $T_{n,2}$. Each of the $k$ endpoints in $T_{n,1}$ lies in $V_{1,1}$, $V_{1,2}$, or $V_{1,3}$, and at most one of them lies in $V_{1,1}$. Hence when $k\ge 2$ there are one or more endpoints in either $V_{1,2}$ or $V_{1,3}$. \newline
Without loss of generality, assume there are more of these endpoints in the set $V_{1,2}$; for the graph $T_{n,2}$ we likewise assume there are more in $V_{2,2}$. We set the test vector to be
$$
\bold{x}(i) =
\begin{dcases}
0\quad &i=1,n+1\\
1 \quad &i\in V_{1,2} \text{ or } i\in V_{2,2}\\
-1\quad & \text{otherwise}
\end{dcases}.
$$
We notice that this test vector is essentially the same as the one we defined for the graph $T_n^m$. For elements of $T_{n,1}$, the number of occurrences of $1$ is $\frac{n-1}{2}$ and the number of occurrences of $-1$ is $\frac{n-1}{2}$; the same holds for elements of $T_{n,2}$. Hence we have
\begin{align*}
(\bold{x},\bold{1})&=\sum_{i=1}^{2n} x(i)\\
&=x(1)+x(n+1) + \sum_{i\in V_{1,2}}x(i)+\sum_{i\in V_{2,2}} x(i)+\sum_{i\in V_{1,3}}x(i)+\sum_{i\in V_{2,3}}x(i)\\
&=0+0+\frac{n-1}{2}+\frac{n-1}{2}-\frac{n-1}{2}-\frac{n-1}{2}\\
&=0.
\end{align*}
We have finished verifying $(\bold{x},\bold{1})=0$. Now we can try to bound $\lambda_2(T_n^{2\times k})$:
\begin{align*}
\lambda_2(T_n^{2\times k})&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{T_n^{2\times k}}} (x(a)-x(b))^2}{\sum_{i=1}^{2n}x(i)^2}\\
& = \frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n} x(i)^2}\\
&+ \frac{\sum_{(i,j)\in {\{e_1,e_2,\dots e_k\}}} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
&+ \frac{\sum_{n+1\le i,j\le 2n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}.
\end{align*}
Notice that the first term is
\begin{align*}
&\frac{\sum_{1\le i,j\le n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n} x(i)^2}\\
=&\frac{ (x(1)-x(2))^2+(x(1)-x(3))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
=&\frac{2}{n-1+n-1}\\
=&\frac{1}{n-1}.
\end{align*}
The second term is
\begin{align*}
\frac{\sum_{(i,j)\in {\{e_1,e_2,\dots e_k\}}} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}
&=\frac{\sum_{i\in V_{1,1} \text{ or } j\in V_{2,1}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
&+\frac{\sum_{i\in V_{1,2}, j\in V_{2,2}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
&+\frac{\sum_{i\in V_{1,2}, j\in V_{2,3}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
&+\frac{\sum_{i\in V_{1,3}, j\in V_{2,2}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
&+\frac{\sum_{i\in V_{1,3}, j\in V_{2,3}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}.
\end{align*}
We know that at most $\frac{n-1}{2}$ vertices with nonzero value are connected to vertex $1$, and at most $\frac{n-1}{2}$ vertices with nonzero value are connected to vertex $n+1$. Hence we have
$$
\frac{\sum_{i\in V_{1,1} \text{ or } j\in V_{2,1}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2} \le \frac{2(n-1)(0-1)^2}{2(n-1)}=1.
$$
The second term in the sum above is $0$. There are at most $\frac{n-1}{2}$ pairs $(i,j)$ with $i\in V_{1,2}$ and $j\in V_{2,3}$. Thus, the third term in the sum above obeys
$$
\frac{\sum_{i\in V_{1,2}, j\in V_{2,3}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2} < \frac{(n-1)(1+1)^2}{2(n-1)}=2.
$$
There are at most $\frac{n-1}{2}$ pairs $(i,j)$ with $i\in V_{1,3}$ and $j\in V_{2,2}$. Thus, the fourth term in the sum above satisfies
$$
\frac{\sum_{i\in V_{1,3}, j\in V_{2,2}}(x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2} < \frac{(n-1)(-1-1)^2}{2(n-1)}=2.
$$
Lastly, the fifth term in the sum above is $0$.\\
We can also see that
\begin{align*}
&\frac{\sum_{n+1\le i,j\le 2n} (x(i)-x(j))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
=&\frac{ (x(n+1)-x(n+2))^2+(x(n+1)-x(n+3))^2}{\sum_{1\le i\le n} x(i)^2+\sum_{n+1\le i\le 2n}x(i)^2}\\
=&\frac{2}{n-1+n-1}\\
=&\frac{1}{n-1}.
\end{align*}
We do not need to add all of the terms above when $k$ is relatively small; we can make the inequality tighter depending on the value of $k$. Notice that when $k\le n-1$, the quotient bounding $\lambda_2({T_n^{2\times k}})$ is largest when the endpoints satisfy $i\in V_{1,3},j\in V_{2,2}$ or $i\in V_{1,2}, j\in V_{2,3}$. Hence, we actually have
$$
\lambda_2({T_n^{2\times k}})\le \frac{k(-1-1)^2}{2(n-1)}+\frac{2}{n-1}=\frac{2k+2}{n-1}
$$
when $k\le n-1$. When $k> n-1$, $\lambda_2({T_n^{2\times k}})$ again takes its greatest value when $i\in V_{1,3},j\in V_{2,2}$ or $i\in V_{1,2}, j\in V_{2,3}$. It follows that
\begin{align*}
\lambda_2(T_n^{2\times k})&\le 2+ \frac{(k-(n-1))(0-(-1))^2}{2(n-1)}+\frac{2}{n-1} \\
&=\frac{k+3n+1}{2(n-1)} \\
&=\frac{2k+2}{n-1}+\frac{-3k+3n-3}{2(n-1)}.
\end{align*}
From the above we get that
\begin{align*}
\lambda_2(T_n^{2\times k})\le \frac{2k+2}{n-1}+\chi_{k\geq n-1}\frac{-3k+3n-3}{2(n-1)}.
\end{align*}
\end{proof}
\end{theorem}
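The subgraph comparison $T_n^2\preccurlyeq T_n^{2\times k}$ used for the lower bound also implies that $\lambda_2$ can only grow as bridge edges are added, which is easy to confirm numerically. A sketch (the pairing of endpoints $i\leftrightarrow n+i$ is an assumed attachment pattern of ours):

```python
import numpy as np

def lambda2(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def tree_multi_bridge(n, k):
    """Two full binary trees on n = 2^h - 1 vertices (heap indexing),
    joined by the k edges i <-> n + i for i = 0, ..., k-1."""
    adj = np.zeros((2 * n, 2 * n))
    for off in (0, n):
        for i in range(n):
            for c in (2 * i + 1, 2 * i + 2):
                if c < n:
                    adj[off + i, off + c] = adj[off + c, off + i] = 1.0
    for i in range(k):
        adj[i, n + i] = adj[n + i, i] = 1.0
    return adj

# adding an edge adds a positive-semidefinite term to the Laplacian,
# so every eigenvalue -- in particular lambda_2 -- is nondecreasing in k
n = 15
vals = [lambda2(tree_multi_bridge(n, k)) for k in range(1, n + 1)]
monotone = all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
positive = vals[0] > 0.0
```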
Now we can construct a graph $T_{n\times l}^2$, which is formed by connecting $l$ identical full binary trees $T_{n,1},\dots, T_{n,l}$ with single edges. For every graph $T_{n,j}$ where $1\le j \le l-1$, there is a vertex $v^j$ which is an ancestor of all other vertices in $T_{n,j}$, and we connect it with $v^{j+1}$. As an example, we display $T_{7\times 3}^2$:
\begin{center}
\includegraphics[scale = 0.7]{T5.png}
\end{center}
We have the following theorem.
\begin{theorem}
For the graphs $T_{n\times l}^2$ described above, we have the following bounds on the second eigenvalue:
$$
\frac{2}{(nl-1)(l\cdot \log_2(n-1)-1)}\le \lambda_2(T_{n\times l}^2) \le \frac{2}{n-1}.
$$
\begin{proof}
For the vertex $v^j$ which is an ancestor of all other vertices in $T_{n,j}$, we label it as $(j-1)n+1$. Then $(j-1)n+1$ has two children, which we label $(j-1)n+2$ and $(j-1)n+3$. Then we label the children of $(j-1)n+2$ as $(j-1)n+4$ and $(j-1)n+5$, the children of $(j-1)n+3$ as $(j-1)n+6$ and $(j-1)n+7$, and so on until $jn$.
To get the upper bound we still need to use a test vector. We can set the test vector to be:
$$
\bold{x}(i) =
\begin{dcases}
0\quad &i=(j-1)n+1 \text{ for some } j\in\{1,\dots, l\}\\
1 \quad &i=(j-1)n+2 \text{ for some } j\in\{1,\dots, l\}\\
1 \quad &(j-1)n+2<i\le jn \text{ and } i \text{ is a descendant of } (j-1)n+2 \text{ for some } j\\
-1\quad & \text{otherwise}
\end{dcases}.
$$
We notice that for elements of $T_{n,j}$ where $j\in \{1,\dots l\}$, the number of occurrences of $1$'s in $\bold{x}$ is $\frac{n-1}{2}$ and the number of occurrences of $-1$'s in $\bold{x}$ is $\frac{n-1}{2}$. Hence we have
\begin{align*}
(\bold{x},\bold{1})&=\sum_{j=1}^{l}\sum_{i=(j-1)n+1}^{jn} x(i)\\
&=l(0)+l\frac{n-1}{2}-l\frac{n-1}{2}\\
&=0.
\end{align*}
We have finished verifying $(\bold{x},\bold{1})=0$. Now we estimate the upper bound of $\lambda_2(T_{n\times l}^2)$:
\begin{align*}
\lambda_2(T_{n\times l}^2)&\le \frac{\bold{x}^T\bold{L}\bold{x}}{\bold{x}^T\bold{x}}\\
&=\frac{\sum_{(a,b)\in E_{T_{n\times l}^{2}}} (x(a)-x(b))^2}{\sum_{i=1}^{nl}x(i)^2}.\\
\end{align*}
We notice that the term $(x(a)-x(b))^2$ is nonzero only when $a=(j-1)n+1$ and $b=(j-1)n+2$ or $b=(j-1)n+3$. There are $n-1$ vertices in each $T_{n,j}$ such that $x(a)^2=1$. Hence the above expression has the following form:
\begin{align*}
& = \frac{\sum_{j=1}^{l}\sum_{(j-1)n< a,b\le jn} (x(a)-x(b))^2}{\sum_{j=1}^{l}\sum_{(j-1)n< a\le jn} x(a)^2}\\
& = \frac{2l}{(n-1)l}\\
& = \frac{2}{(n-1)}.
\end{align*} \newline
Now we need to find the lower bound. We again compare our graph to the complete graph. For every pair of vertices $(a,b)$, let the path graph $P_{a,b}$ be a path in $T_{n\times l}^2$ from $a$ to $b$, and let $G_{a,b}$ be the graph with the single edge $(a,b)$; then from the lemma, we have $|P_{a,b}|P_{a,b} \succcurlyeq G_{a,b}$. We notice that the length of the path $P_{a,b}$ is the longest when $a\in T_{n,1}$ and $b\in T_{n,l}$ and $a$ and $b$ have no descendants. Hence the length of the path $P_{a,b}$ is at most $l \cdot \log_2{(n-1)}-1$, which means that $|P_{a,b}|\le l \cdot \log_2(n-1)-1$. Now,
\begin{align*}
G_{a,b}&\preccurlyeq |P_{a,b}|P_{a,b} \\
&\preccurlyeq(l \cdot \log_2(n-1)-1)P_{a,b} \\
&\preccurlyeq(l \cdot \log_2(n-1)-1)T_{n\times l}^2.
\end{align*}
Also, we know that the complete graph $K_{nl}$ has $\binom{nl}{2}$ edges, so $$K_{nl}\preccurlyeq \sum_{(a,b)\in E_{K_{nl}}}G_{a,b}\preccurlyeq \binom{nl}{2}(l\cdot \log_2(n-1)-1) T_{n\times l}^2.
$$
Hence,
$$
nl=\lambda_2(K_{nl}) \le \binom{nl}{2}(l\cdot \log_2(n-1)-1) \lambda_2(T_{n\times l}^2).
$$
From the above, we get that $\lambda_2(T_{n\times l}^2)\geq \frac{2}{(nl-1)(l\cdot \log_2(n-1)-1)}$.
\end{proof}
\end{theorem}
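The test-vector computation in the proof gives the upper bound $\frac{2}{n-1}$ regardless of $l$, which we can confirm numerically. A sketch with heap-style labels inside each tree (construction details are ours):

```python
import numpy as np

def lambda2(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def tree_chain(n, l):
    """l full binary trees on n = 2^h - 1 vertices (heap indexing inside
    each block of n labels); consecutive roots are joined by an edge."""
    adj = np.zeros((n * l, n * l))
    for j in range(l):
        off = j * n
        for i in range(n):
            for c in (2 * i + 1, 2 * i + 2):
                if c < n:
                    adj[off + i, off + c] = adj[off + c, off + i] = 1.0
    for j in range(l - 1):              # root-to-root chain edges
        adj[j * n, (j + 1) * n] = adj[(j + 1) * n, j * n] = 1.0
    return adj

# the test vector in the proof gives lambda_2 <= 2/(n-1), independent of l
checks = [0.0 < lambda2(tree_chain(n, l)) <= 2 / (n - 1) + 1e-9
          for n in (3, 7, 15) for l in (2, 3, 4)]
```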
We have finished discussing the graph $T_{n\times l}^2$. Now we discuss a more general case: a graph $B_{n\times l}^2$ formed by connecting $l$ identical graphs $G_{n,1},\dots, G_{n,l}$ with single edges. The graphs $G_{n,1},\dots, G_{n,l}$ are all identical, but they can be arbitrary connected graphs. We have the following theorem.
\begin{theorem}
For graphs of the form $B_{n\times l}^2$, which we described above, we have following bound on the second eigenvalue:
$$
0<\lambda_2(B_{n\times l}^2)\le 2
$$
\end{theorem}
\begin{proof}
Now we need to label graph $B_{n\times l}^2$ first. For graph $G_{n,1}$, we know that there is a vertex which is attached to another vertex in graph $G_{n,2}$; we label this vertex as $n$. All other vertices in the graph $G_{n,1}$ can be labeled from $1$ to $n-1$ without repeating. For graph $G_{n,l}$, we know that there is a vertex which is attached to another vertex in graph $G_{n,l-1}$; we label this vertex as $n(l-1)+1$. All other vertices in the graph $G_{n,l}$ can be labeled from $n(l-1)+2$ to $nl$ without repeating. For every graph $G_{n,i}$ where $2\le i \le l-1$, we know that there is a vertex which is attached to another vertex in graph $G_{n,i-1}$. We label this vertex as $n(i-1)+1$, and there is a vertex which is attached to another vertex in graph $G_{n,i+1}$. We label that vertex as $ni$. All other vertices in the graph $G_{n,i}$ can be labeled from $n(i-1)+2$ to $ni-1$ without repeating.\\
We notice that $B_{n\times l}^2$ is connected. Hence we have
$$
\lambda_2(B_{n\times l}^2)>0.
$$
Also, we notice that if we take away the edge $(n,n+1)$ from the graph $B_{n\times l}^2$, then we get a new graph $\tilde{B}_{n\times l}^2=B_{n\times l}^2\setminus(n,n+1),$ which is not connected. Thus, we have $\lambda_2({\tilde{B}_{n\times l}^2})=0$.\\
From Theorem 3.2, we know that
$$
\lambda_2(B_{n\times l}^2)=\min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }}{\frac{\bold{x}^T \bold{L_{B_{n\times l}^2}}\bold{x}}{\bold{x}^T\bold{x}}}
$$
and
$$
\lambda_2({\tilde{B}_{n\times l}^2})=\min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }}{\frac{\bold{x}^T \bold{L_{\tilde{B}_{n\times l}^2}}\bold{x}}{\bold{x}^T\bold{x}}}.
$$
Notice that
\begin{align*}
\lambda_2(B_{n\times l}^2)&=\min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }}{\frac{\bold{x}^T \bold{L_{B_{n\times l}^2}}\bold{x}}{\bold{x}^T\bold{x}}}\\
&=\min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }}\, \frac{1}{\bold{x}^T\bold{x}}\sum\limits_{(a,b)\in B_{n\times l}^2}(x(a)-x(b))^2\\
&\le \min\limits_{\substack{(\bold{x},\bold{1})=0, x\in R }}\, \left[\sum\limits_{\substack{(a,b)\in B_{n\times l}^2, \\ (a,b)\neq (n,n+1)}}\frac{(x(a)-x(b))^2}{{\bold{x}^T\bold{x}}}+\frac{(x(n)-x(n+1))^2}{{\bold{x}^T\bold{x}}}\right]\\
&= \min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }}\left[\sum\limits_{(a,b)\in \tilde{B}_{n\times l}^2}\frac{(x(a)-x(b))^2}{{\bold{x}^T\bold{x}}}+\frac{(x(n)-x(n+1))^2}{{\bold{x}^T\bold{x}}}\right]\\
&= \min\limits_{\substack{(\bold{x},\bold{1})=0,x\in R }} (I_1 + I_2)
\end{align*}
where
$$I_1 = \sum\limits_{(a,b)\in \tilde{B}_{n\times l}^2}\frac{(x(a)-x(b))^2}{{\bold{x}^T\bold{x}}} \text{ and } I_2 = \frac{(x(n)-x(n+1))^2}{{\bold{x}^T\bold{x}}}.$$
Notice that if we choose $\bold{x}$ to be a vector achieving the minimum for $\tilde{B}_{n\times l}^2$, then
$$
I_1 = \frac{1}{\bold{x}^T\bold{x}}\sum\limits_{(a,b)\in \tilde{B}_{n\times l}^2}(x(a)-x(b))^2=\lambda_2(\tilde{B}_{n\times l}^2)=0.
$$
We have the following bound for the denominator of $I_2$:
\begin{align*}
\bold{x}^T\bold{x}&=\sum_{a=1}^{nl}(x(a))^2\\
&\geq (x(n))^2+(x(n+1))^2\\
&= \frac{1}{2}(x(n))^2+\frac{1}{2}(x(n+1))^2+\frac{1}{2}(x(n))^2+\frac{1}{2}(x(n+1))^2-x(n)x(n+1)+x(n)x(n+1)\\
&= \frac{1}{2}((x(n))^2+(x(n+1))^2+2x(n)x(n+1))+\frac{1}{2}((x(n))^2+(x(n+1))^2-2x(n)x(n+1))\\
&= \frac{1}{2}(x(n)+x(n+1))^2+\frac{1}{2}(x(n)-x(n+1))^2\\
& \geq \frac{1}{2}(x(n)-x(n+1))^2.
\end{align*}
We now use this to bound the second term.
$$
{\frac{(x(n)-x(n+1))^2}{\bold{x}^T\bold{x}}}\le \frac{(x(n)-x(n+1))^2}{\frac{1}{2}(x(n)-x(n+1))^2}=2.
$$
Adding the two terms together yields
$$
\lambda_2(B_{n\times l}^2)\le 2.
$$
Thus, the result is proven.
\end{proof}
\begin{remark}
The upper bound in the inequality above cannot be improved. Consider $G_{1\times 2}^2=P_2$: the path $P_2$ is obtained by joining two copies of the single-vertex graph $G_{1,1}$ with an edge. Then $\lambda_2(G_{1\times 2}^2)=2$, so equality is achieved.
\end{remark}
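As a quick numerical sanity check of the bound $\lambda_2(B_{n\times l}^2)\le 2$, the sketch below builds one bridged family (two copies of $K_n$ joined by a single bridge edge; this particular construction is an illustrative assumption, not notation fixed by the text) and computes the second-smallest Laplacian eigenvalue with NumPy:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def bridged_complete_graphs(n):
    """Two copies of K_n joined by one bridge edge (an assumed
    instance of the bridge-graph construction, for illustration)."""
    adj = np.zeros((2 * n, 2 * n))
    adj[:n, :n] = 1 - np.eye(n)        # first copy of K_n
    adj[n:, n:] = 1 - np.eye(n)        # second copy of K_n
    adj[n - 1, n] = adj[n, n - 1] = 1  # the bridge edge
    return adj

for n in range(2, 9):
    eigs = np.sort(np.linalg.eigvalsh(laplacian(bridged_complete_graphs(n))))
    lam2 = eigs[1]                     # second-smallest eigenvalue
    assert 0 < lam2 <= 2 + 1e-9        # connected, and lambda_2 <= 2
```

The assertion holds because the proof above only uses that the graph is a disconnected graph plus one edge, so it applies to any such bridged pair.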
\end{section}
\begin{section}{Conclusion and future work}
Our work in Sections 4 through 6 covered a variety of graphs. We observed that for the $K_n$-type graphs $D_n^m$, the approximate bound $\lambda_2(D_n^m)\sim\frac{1}{n}$ holds, and that the second eigenvalue decreases as $n$ and $m$ increase. Similarly, for $D_n^{2\times k}$ we have $\lambda_2(D_n^{2\times k})\sim\frac{1}{n}$, and the second eigenvalue again decreases as $n$ increases. \newline
For the $S_n$-type graphs $S_n^m$, we still have $\lambda_2(S_n^m)\sim\frac{1}{n}$. This is expected because the star graph is the complete bipartite graph $K_{1,n-1}$, and complete graphs can also be viewed as complete bipartite graphs for suitable choices of $n$. The $T_n$-type bridge graphs, however, behave differently from the first two families: the lower bounds on the second eigenvalues of $T_n^m$, $T_n^{2\times k}$, and $T_{n\times}^{l}$ all depend on $\log(n+1)$, while the upper bounds remain asymptotically of order $\frac{1}{n}$. \newline
The proofs above show that the test-vector method is a very effective technique for upper-bounding eigenvalues: Theorem 3.2 allows us to use any test vector orthogonal to the first eigenvector, and by Theorem 3.3 the first eigenvector is $\bold{1}$. Theorem 6.4, which bounds general bridge graphs $B_{n\times l}^2$, is also essential. We are now curious what happens for a bridge graph such as $B_{n\times \infty}^2$. This is not the same as the case of finite $l$, since we can no longer count the vertices one by one; nevertheless, we would like to know whether similar results hold.\newline
Our future work will concern infinite bridge graphs. We begin with some basic definitions and related theorems.
\begin{definition}
An infinite graph $G=(V,E)$ is a graph with a countably infinite number of vertices. Infinite bridge graphs $G_{n\times \infty}^m$ are constructed by taking path graphs $P_m$, with $n\geq 2$, and gluing a countably infinite number of identical finite graphs onto the ends of the paths. In particular, there is a bijection from the vertex set $V$ to $\mathbb{N}$. We only discuss unweighted graphs.
\end{definition}
For infinite graphs we cannot use adjacency matrices anymore, so we seek a substitute for matrices that associates our graph with an operator. Operators on infinite graphs have been defined by other authors, such as Bojan Mohar\cite{b4} and Dragoš M. Cvetković\cite{b3}; Hela Ayadi\cite{b5} also defined a Laplacian operator. Since we are only interested in infinite bridge graphs, however, we will use a definition different from theirs. The following is an example formed by attaching a countably infinite number of $K_8$ graphs together:
\begin{center}
\includegraphics[scale = 0.5]{T_inf.jpg}
\end{center}
\begin{definition} The space $\ell^2(\mathbb{N} \times \mathbb{N})$ is defined as the space of sequences $\{x_{i,j}\}_{i,j \in \mathbb{N}}$ such that
$$\sum_{\mathbb{N} \times \mathbb{N}} |x_{i,j}|^2 < \infty.$$
\end{definition}
\begin{definition} The adjacency operator $M$ of a graph $G=(V,E)$ is defined as the operator with entries
$$
M(a,b) =
\begin{dcases}
1 \quad (a,b)\in E\\
0 \quad (a,b)\notin E.
\end{dcases}
$$ Notice that $\{M(a,b)\}_{a,b \in \mathbb{N}}$ forms a sequence.
\end{definition}
\begin{definition}
The degree operator $D$ of a graph $G=(V,E)$ is the diagonal operator whose entries are given by
$$
D(a,b) =
\begin{dcases}
d(a)\quad &a=b\\
0 \quad &a\neq b.
\end{dcases}
$$ Like before, $\{D(a,b)\}_{a,b \in \mathbb{N}}$ forms a sequence.
\end{definition}
\begin{definition}
The graph Laplacian operator $L$ of a graph $G$ is defined to be
$$
L = D - M,
$$ where the subtraction is performed entrywise on the corresponding sequences.
\end{definition}
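On a locally finite infinite graph the operator can still be applied pointwise via the formula $(Lx)(a)=\sum_{b\in N(a)}(x(a)-x(b))$ used in the boundedness theorem below. A minimal sketch (the `neighbors` function and the dictionary encoding of finitely supported sequences are illustrative implementation choices, not notation from the text):

```python
def laplacian_apply(neighbors, x):
    """Apply (L x)(a) = sum_{b in N(a)} (x(a) - x(b)) to a finitely
    supported sequence x, given as a dict {vertex: value}."""
    support = set(x) | {b for a in x for b in neighbors(a)}
    return {a: sum(x.get(a, 0.0) - x.get(b, 0.0) for b in neighbors(a))
            for a in support}

# Example: the two-sided infinite path graph, x the indicator of vertex 0.
path_neighbors = lambda a: (a - 1, a + 1)
y = laplacian_apply(path_neighbors, {0: 1.0})
assert y == {-1: -1.0, 0: 2.0, 1: -1.0}
```

Because $x$ is finitely supported and every vertex has finitely many neighbors, the output is again finitely supported, so the computation terminates.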
\begin{theorem}
The adjacency operator, the degree operator, and the Laplacian operator are all linear operators.
\begin{proof}
The proof is straightforward from the definition of the operators.
\end{proof}
\end{theorem}
\begin{theorem}
The Laplacian operator $L_G: \mathcal{X} \rightarrow \mathcal{Y}$ with $\mathcal{X}, \mathcal{Y} \subset \ell^2(\mathbb{N})$, where $G$ is a bridge graph, is a well-defined bounded mapping.
\begin{proof}
We notice that for $a\in V_G$, we have
$$Lx(a)=\sum_{b\in N(a)}(x(a)-x(b)).$$
Since $x\in \ell^2(\mathbb{N})$, there is an $M>0$ such that $\|x\|_2=\left(\sum_{a=1}^{\infty} |x(a)|^2\right)^{\frac{1}{2}}<M$. Since $G$ is a bridge graph, every vertex $a$ has at most $m$ neighbors. Hence, by the Cauchy--Schwarz inequality applied to the inner sum, and since each term $x(b)^2$ occurs at most $m$ times in the resulting double sum, we have
\begin{align*}
\| y\|_2 &=\| L x \|_2\\
&=\left(\sum_{a=1}^{\infty}\Big(\sum_{b\in N(a)}(x(a)-x(b))\Big)^2\right)^{\frac{1}{2}}\\
&\le \left(\sum_{a=1}^{\infty}m\sum_{b\in N(a)}(x(a)-x(b))^2\right)^{\frac{1}{2}}\\
&\le \left(\sum_{a=1}^{\infty}2m\sum_{b\in N(a)}\left(x(a)^2+x(b)^2\right)\right)^{\frac{1}{2}}\\
&\le \left(4m^2\sum_{a=1}^{\infty}x(a)^2\right)^{\frac{1}{2}}\\
&=2m\|x\|_2\\
&< 2mM.
\end{align*}
Hence the Laplacian operator $L_G$ is a bounded operator with $\|Lx\|_2\le 2m\|x\|_2$, and $y\in \ell^2(\mathbb{N})$.
\end{proof}
\end{theorem}
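Any finite graph Laplacian with maximum degree $m$ satisfies $\|Lx\|_2\le 2m\|x\|_2$, so the boundedness asserted in the theorem can be spot-checked on random finite graphs as a finite-dimensional stand-in for the infinite setting:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    n = int(rng.integers(2, 40))
    adj = np.triu(rng.integers(0, 2, (n, n)), k=1)  # random simple graph
    adj = adj + adj.T
    deg = adj.sum(axis=1)
    m = max(int(deg.max()), 1)                      # maximum degree
    L = np.diag(deg) - adj                          # Laplacian L = D - A
    x = rng.standard_normal(n)
    # ||L x||_2 <= 2 m ||x||_2 holds for every graph of max degree m
    assert np.linalg.norm(L @ x) <= 2 * m * np.linalg.norm(x) + 1e-9
```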
The following definitions are from Elias M. Stein and Rami Shakarchi\cite{b2}.
\begin{definition}
It is well known that $\ell^2(\mathbb{N})$ is a Hilbert space; therefore, it admits an inner product. For vectors $x,y\in \ell^2(\mathbb{N})$, the inner product of $x$ and $y$ is defined as
$$(x,y)= \sum_{n \in \mathbb{N}} x_n y_n.$$
\end{definition}
\begin{definition}
For a graph operator $T$, if we have
$$
T\phi=\lambda \phi,
$$
then we call $\lambda$ an eigenvalue and $\phi$ an eigenvector corresponding to the eigenvalue $\lambda$ for the operator $T$.
\end{definition}
\begin{definition}
We say $\lambda\in \mathbb{R}$ is in the spectrum of $A$ if $A-\lambda I$ has no bounded inverse. The spectrum is denoted by $\sigma({A})$ where $\sigma({A}) \subset \mathbb{R}$, and the resolvent set for $A$ is $\rho({A})=\mathbb{R}\setminus\sigma({A})$.
\end{definition}
Our future work will concern the spectrum of the Laplacian operator.
\end{section}
\newpage
| {
"timestamp": "2023-01-03T02:06:27",
"yymm": "2202",
"arxiv_id": "2202.11875",
"language": "en",
"url": "https://arxiv.org/abs/2202.11875",
"abstract": "The Bridge graph is a special type of graph which are constructed by connecting identical connected graphs with path graphs. We discuss different types of bridge graphs $B_{n\\times l}^{m\\times k}$ in this paper. In particular, we discuss the following: complete-type bridge graphs, star-type bridge graphs, and full binary tree bridge graphs. We also bound the second eigenvalues of the graph Laplacian of these graphs using methods from Spectral Graph Theory. In general, we prove that for general bridge graphs, $B_{n\\times l}^2$, the second eigenvalue of the graph Laplacian should be between $0$ and $2$, inclusive. In the end, we talk about future work on infinite bridge graphs. We created definitions and found the related theorems to support our future work about infinite bridge graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "Characterizing Spectral Properties of Bridge",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426405416754,
"lm_q2_score": 0.8221891239865619,
"lm_q1q2_score": 0.8008472653525177
} |
https://arxiv.org/abs/1308.0861 | Bounds of incidences between points and algebraic curves | We prove new bounds on the number of incidences between points and higher degree algebraic curves. The key ingredient is an improved initial bound, which is valid for all fields. Then we apply the polynomial method to obtain global bounds on $\mathbb{R}$ and $\mathbb{C}$. | \section{Introduction}
The Szemer\'{e}di--Trotter theorem \cite{szemeredi1983extremal} says that for a finite set, $L$, of lines and a finite set, $P$, of points in $\mathbb{R}^2$, the number of incidences is less than a constant times $|P|^{\frac{2}{3}} |L|^{\frac{2}{3}} + |P| + |L|$. There have been several generalizations of this theorem. For example, Pach and Sharir \cite{pach1998number} allow simple curves that have $k$ degrees of freedom and multiplicity-type $C$: (i) for any $k$ distinct points there are at most $C$ curves in $L$ that pass through them, and (ii) any two distinct curves in $L$ have at most $C$ intersection points. This is summarized in the following theorem:
\begin{thm}[Pach--Sharir 98]\label{PSthm}
Let $P$ be a finite set of points in $\mathbb{R}^2$ and let $L$ be a finite set of simple curves which have $k$ degrees of freedom and multiplicity-type $C$. Then the number of incidences $|\mathcal{I}(P,L)|:=|\{(p,l)\in P\times L : p\in l\}|$ satisfies
\begin{equation}
|\mathcal{I}(P,L)|\lesssim_{C, k} |P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L|.
\end{equation}
\end{thm}
\subsection*{Notation} We use the asymptotic notation $X=\OO(Y)$ or $X\lesssim Y$ to denote the estimate $X\leq CY$ for some constant $C$. If we need the implicit constant $C$ to depend on additional parameters, then we indicate this by subscripts. For example, $X=\OO_{d}(Y)$ or $X\lesssim_{d} Y$ means that $X\leq C_{d}Y$ for some constant $C_{d}$ that depends on $d$.
The main result of this paper is an improvement to Theorem \ref{PSthm} when $L$ is a set of higher degree algebraic curves.
Let $L$ be a finite set of algebraic curves of degree $\leq d$ in $\mathbb{R}^{2}$, any two of which do not share a common irreducible component. Let $P$ be a finite set of distinct points in $\mathbb{R}^{2}$. By B\'{e}zout's theorem, there is at most one curve in $L$ that goes through any given subset of $P$ of size $d^{2}+1$. In the notation introduced by Pach and Sharir, a degree $d$ algebraic curve has $d^2+1$ degrees of freedom and multiplicity-type $d^2$. However, one may wonder whether $d^2 +1$ is a misleading definition of the degrees of freedom since generically $A:= {d+2 \choose 2} -1$ points determine a degree $d$ algebraic curve and $A< d^{2}+1$ when $d\geq 3$.
This suggests that Theorem ~\ref{PSthm} may still hold for degree $d$ curves with the ``generic degree of freedom" $A$. Indeed, we prove that this is the case.
\begin{thm}\label{main theorem}
Let $d$ be a positive integer, $A={d+2\choose 2} -1$, $L$ a finite set of degree $\leq d$ algebraic curves in $\mathbb{R}^{2}$ such that any two distinct algebraic curves do not share a common irreducible component, and $P$ a finite set of points in $\mathbb{R}^{2}$. Then,
\begin{equation}\label{estimatemainthm}
|\mathcal{I}(P,L)|\lesssim_{d} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|.
\end{equation}
\end{thm}
This gives a better bound than Theorem ~\ref{PSthm} for degree $d$ algebraic curves when $d\geq 3$. It gives the same bound when $d=1$ or $d=2$.
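The comparison between the generic degree of freedom $A=\binom{d+2}{2}-1$ and the naive count $d^2+1$ is easy to verify directly (a smaller degree of freedom yields a stronger exponent in the incidence bound):

```python
from math import comb

for d in range(1, 10):
    A = comb(d + 2, 2) - 1   # generic degrees of freedom
    naive = d * d + 1        # count coming from Bezout's theorem
    if d <= 2:
        assert A == naive    # same bound for lines and conics
    else:
        assert A < naive     # strictly better for d >= 3
```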
We generalize Theorem ~\ref{main theorem} to algebraic curves parametrized by an algebraic variety.
\begin{definition}
Given an integer $d\geq 1$ and a field $\F$, consider the Veronese embedding $\nu_{d}: \PP^{2}\rightarrow \PP^{A}$ given by:$$\nu_{d}: [x, y, z]\mapsto [x^{d},\dots, x^{i}y^{j}z^{k},\dots, z^{d}]_{i+j+k=d}.$$
We identify a degree $d$ curve in $\PP^{2}$ with the preimage of a hyperplane in $\PP^{A}$, or equivalently with a point in the dual space $(\PP^{A})^{*}$. We say a subset $\M\subset S_{d}$ of the space of degree $d$ polynomials is \emph{parametrized by an algebraic variety} $M$ if it is the preimage of $M\subset (\PP^{A})^{*}$, and we define the dimension of $\M$ to be $\dim M$.
\end{definition}
A consequence of Theorem ~\ref{main theorem} is the following:
\begin{cor}\label{variety}
Given an integer $d\geq 1$ and $A={d+2\choose 2}-1$, let $\mathcal{M}$ be a subset of the space of degree $d$ polynomials parametrized by an algebraic variety $M$ of dimension $\leq k$. Let $P$ be a finite set of points in $\mathbb{RP}^2$, and let $L$ be a finite subset of $\M$ such that no two curves in $L$ share a common irreducible component. Then,
\begin{equation}
|\mathcal{I}(P,L)|\lesssim_{\mathcal{M}}|P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L|.
\end{equation}
\end{cor}
This generalization is helpful when we consider a family of curves with special properties, for example, parabolas, circles and a family of curves passing through a common point.
The proof of Theorem ~\ref{main theorem} is a standard application of the polynomial method (see for example \cite{dvir2009size} and \cite{guth2010erdos}) with the following new initial bound. For an exposition of the polynomial method we refer the reader to \cite{kaplan2012simple}.
\begin{lem}[Initial bound]\label{trivial bound}
Let $\mathbb{F}$ be a field, $d\geq 1$ an integer, and $A={d+2\choose 2} -1$. Let $L$ be a finite set of algebraic curves in $\mathbb{F}^{2}$ of degree $\leq d$ such that no two distinct curves in $L$ share a common component, and let $P$ be a finite set of points in $\mathbb{F}^{2}$. Then,
\begin{equation}
|\mathcal{I}(P,L)|\lesssim_{d} |P|^A+|L|.\label{initial bound}
\end{equation}
\end{lem}
Combining (\ref{initial bound}) with Solymosi and Tao's polynomial method and induction on the number of points \cite{solymosi2012incidence}, we obtain the following incidence theorem on complex space:
\begin{thm}\label{complex epsilon}
Given an integer $d\geq 1$, $A={d+2\choose 2} -1$, a finite set of points $P$ in $\mathbb{C}^{2}$, a finite set of algebraic curves, $L$, in $\mathbb{C}^{2}$ of degree $\leq d$ such that any two curves: (i) do not share a common irreducible component; (ii) intersect transversally at smooth points. Then, for any $\epsilon>0$,
\begin{equation}\label{estimateepsilonleft}
|\mathcal{I}(P,L)|\lesssim_{\epsilon,d} |P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|
\end{equation}
and
\begin{equation}\label{estimateepsilonright}
|\mathcal{I}(P,L)|\lesssim_{\epsilon,d} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}+\epsilon}+|P|+|L|.
\end{equation}
\end{thm}
\subsection*{Acknowledgements} We would like to thank Larry Guth for encouraging us to work on the problem. He suggested that we try: improving the initial bound and working on algebraic subsets. We also thank Alex Townsend for reading drafts and for the suggestions.
\section{Proof of Theorem ~\ref{main theorem}}
\subsection{The initial bound}
In this subsection we prove the initial bound by double counting.
\begin{definition}
Given a point $p\in \PP^{2}$, let $H_{p}$ denote the corresponding hyperplane in $\PP^{A*}$ via the Veronese embedding and dual. Given a finite set of points $\Gamma=\{p_{1},\dots,p_{n}\}$, we define $m_{d}(\Gamma)=\dim(\cap H_{p_{i}})$, which characterizes the dimension of curves passing through all the points in $\Gamma$. In particular, if $m_{d}(\Gamma)=0$, there is at most one curve of degree $d$ that passes through $\Gamma$.
\end{definition}
\begin{proof}[Proof of Lemma \ref{trivial bound}]
We may remove curves containing fewer than $d^{2}+1$ points by adding $\OO(|L|)$ to the bound. Now we assume that each curve contains more than $d^{2}+1$ points of $P$.
Fix a curve $l\in L$. We call an $A$-tuple $\Gamma'$ \emph{good} if it is a subset of some $\Gamma\subset l\cap P$ with $|\Gamma|=d^{2}+1$ and $m_{d}(\Gamma')=m_{d}(\Gamma)$. For any $(d^{2}+1)$-tuple $\Gamma\subset l\cap P$, there exists a good $A$-tuple $\Gamma'\subset \Gamma$ since $L$ is parametrized by $\PP^{A}$.
Since $\cap_{p\in \Gamma}H_{p}$ and $\cap_{p\in \Gamma'}H_{p}$ have the same dimension and $\cap_{p\in \Gamma}H_{p}\subseteq\cap_{p\in \Gamma'}H_{p}$, the two vector subspaces are in fact the same. In other words, any curve in $L$ passing through $\Gamma'$ must pass through all of $\Gamma$. Since curves in $L$ do not have a common irreducible component, by B\'{e}zout's theorem, every set of $d^{2}+1$ points determines a unique curve in $L$. Hence, every good $A$-tuple determines a unique curve in $L$. There are at least ${|l\cap P| \choose d^2+1}/{ |l\cap P| - A\choose d^{2}+1-A}$ distinct good $A$-tuples $\Gamma'$ determining $l$, and this quantity is $\gtrsim_{d}|l\cap P|^{A}\geq|l\cap P|$; hence the number of good $A$-tuples controls $|l\cap P|$. On the other hand, the total number of $A$-tuples is ${|P|\choose A}= \OO(|P|^{A})$. Then,
$$|\mathcal{I}(P,L)| = \sum_{l \in L} |l\cap P|\lesssim |P|^{A}+|L|,$$
where the $|L|$ term comes from the first step when we deleted curves with fewer than $d^{2}+1$ points from $P$.
\end{proof}
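For lines ($d=1$, $A=2$) the initial bound reads $|\mathcal{I}(P,L)|\lesssim |P|^{2}+|L|$, and it can be checked by brute force with exact integer arithmetic (the random point set and the choice of taking all pair-determined lines are illustrative):

```python
import random
from itertools import combinations
from math import gcd

def line_through(p, q):
    """Normalized integer triple (a, b, c) with a*x + b*y = c through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):   # fix the sign so the triple is unique
        a, b, c = -a, -b, -c
    return (a, b, c)

random.seed(1)
P = list({(random.randrange(12), random.randrange(12)) for _ in range(20)})
L = {line_through(p, q) for p, q in combinations(P, 2)}  # distinct lines
I = sum(1 for (x, y) in P for (a, b, c) in L if a * x + b * y == c)
assert I <= len(P) ** 2 + len(L)     # the d = 1 case of the initial bound
```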
\subsection{Polynomial method}\label{polynomialpartitioningsection}
Now we can apply the polynomial method to the initial bound and conclude the proof of Theorem \ref{main theorem}. We shall use the following polynomial partitioning proposition (see, for example, Theorem 4.1 in \cite{guth2010erdos}):
\begin{prop}\label{cell decomposition}
Let $P$ be a finite set of points in $\mathbb{R}^m$ and let $D$ be a positive integer. Then, there exists a nonzero polynomial $Q$ of degree at most $D$ and a decomposition
\[\mathbb{R}^{m}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M}\]
into the hypersurface $\{Q=0\}$ and a collection $U_{1},\ldots ,U_{M}$ of open sets (which we call \emph{cells}) bounded by $\{Q=0\}$, such that $M =\OO_{m}( D^m)$ and that each cell $U_{i}$ contains $\OO_{m}(|P|/D^{m})$ points.
\end{prop}
\begin{proof}[Proof of Theorem \ref{main theorem}]
Applying Proposition ~\ref{cell decomposition}, we find a polynomial $Q$ of degree $D$ (to be chosen later) that partitions $\mathbb{R}^{2}$ into $M$ cells:
\[\mathbb{R}^{2}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M},\]
where $M=\OO( D^2)$, the sets $U_{1},\ldots ,U_{M}$ are open sets bounded by $\{Q=0\}$, and $P_{i}=U_{i}\cap P$. Let $L_{i}$ be the set of curves that have non-empty intersection with $U_{i}$. Then $|P_{i}|=\OO(|P|/D^{2})$. By B\'{e}zout's theorem, every curve meets $\{Q=0\}$ in at most $d\cdot D=\OO_{d}(D)$ points. Fix a curve $l\in L$; by Harnack's curve theorem~\cite{harnack1876ueber}, the curve itself has $\OO_d (1)$ connected components. Moreover, since a component of $l$ is either contained in $U_{i}$ for some $i$ or must meet the partition surface $\{Q=0\}$, $l$ can meet only $\OO_{d}(D)$ of the $U_{i}$'s. Thus, we have the inequality $\sum_i |L_{i}|\lesssim_{d} D|L|$.
We may assume that every curve is irreducible; otherwise we can replace each curve by its irreducible components, which increases the number of curves by only a constant factor.
Let $P_{cell}$ denote the points of $P$ in cells $U_{i}$ and let $P_{alg}$ denote those on the partition surface $\{Q=0\}$. Similarly, let $L_{alg}$ denote those curves that belong to $\{Q = 0 \}$ and let $L_{cell}$ be the union of the other curves. We deduce by Lemma \ref{trivial bound} and H\"{o}lder's inequality:
\begin{align}\label{polypartitionestimate}
|\mathcal{I}(P,L)|&= |\mathcal{I}(P_{cell}, L_{cell}) |+ |\mathcal{I}(P_{alg},L_{cell}) |+|\mathcal{I}(P_{alg}, L_{alg})|\nonumber\\
&\lesssim_{d} \sum_i (|P_{i}|^{A}+|L_{i}|) + D|L_{cell}|+|\mathcal{I}(P_{alg}, L_{alg})|\nonumber\\
&\lesssim_{d} |P|^{A}D^{-2(A-1)}+D|L|+|\mathcal{I}(P_{alg}, L_{alg})|.
\end{align}
In addition, we may assume that $ |P|^{1/2}\leq |L|\leq |P|^{A}$, otherwise (\ref{estimatemainthm}) already holds either by Lemma \ref{trivial bound} or by another initial bound $|\mathcal{I} (P, L)| \lesssim |P| + |L|^2$ (every two curves intersect at at most $\OO(1)$ points). In this case, we may choose $D =\OO_{d}( |P|^{\tfrac{A}{2A-1}} |L|^{-\tfrac{1}{2A-1}})$ and $D\leq|L|/2$. Then, the first two terms on the right-hand side of (\ref{polypartitionestimate}) are $O_{d}( |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}})$. Since $|L_{alg}| \leq D \leq \frac{|L|}{2}$, we can perform a dyadic induction on $|L|$. By repeating the above process on $L_{alg}$, we obtain the following:
\begin{align}
|\mathcal{I}(P,L)|&\lesssim_{d} \sum_{ 0\leq i \leq \log(|L|/|P|^{1/2})} |P|^{\tfrac{A}{2A-1}}(2^{-i}|L|)^{\tfrac{2A-2}{2A-1}} + |P|+|L|\nonumber\\&\lesssim_{d}|P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|.\nonumber
\end{align}
The induction stops when $i> \log(|L|/|P|^{1/2})$ because when $2^{-i}|L|\lesssim |P|^{1/2}$ the number of incidences in the $i$-th step is bounded by $\OO( |P| )$. This proves (\ref{estimatemainthm}).
\end{proof}
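The dyadic sum at the end of the proof is dominated by a geometric series with ratio $2^{-(2A-2)/(2A-1)}<1$, hence bounded by a constant multiple of its first term; a quick numeric check for a few values of $A$:

```python
for A in (2, 5, 9, 14, 20):
    beta = (2 * A - 2) / (2 * A - 1)        # exponent of |L| in the bound
    partial = sum(2 ** (-i * beta) for i in range(200))
    closed = 1 / (1 - 2 ** (-beta))         # geometric series limit
    assert partial <= closed + 1e-9
    assert closed <= 3                      # uniform over A >= 2 (beta >= 2/3)
```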
\section{An estimate for parametrized curves}
We prove an initial bound with parametrized curves, which implies Corollary \ref{variety}.
We first state two propositions that we will need to prove the initial bound.
\begin{prop}(see \cite{hartshorne1977algebraic}, Ch I, Exercise 1.8)\label{dimension diminute}
If $V$ is an $r$-dimensional variety in $\mathbb{F}^{d}$, and $P :\mathbb{F}^{d}\rightarrow \mathbb{F}$ is a polynomial which is not identically zero on $V$, then every irreducible component of $V\cap \{P=0\}$ has dimension $r-1$.
\end{prop}
\begin{prop}(see \cite{fulton1984intersection}, Section 2.3)\label{refine Bezout}
Let $V_{1},\ldots, V_{s}$ be subvarieties of $\mathbb{FP}^{N}$, and let $Z_{1},\ldots ,Z_{r}$ be the irreducible components of $V_{1}\cap\cdots \cap V_{s}$. Then,
\[\sum_{i=1}^{r} \deg(Z_{i})\leq \prod_{j=1}^{s} \deg(V_{j}).\]
\end{prop}
\begin{lem}\label{algebraicsettrivialbound}
With the same setting and notation as in Corollary \ref{variety}, we have
\begin{equation}
|\mathcal{I}(P,L)|\lesssim_{\mathcal{M}}|P|^{k}+|L|.
\end{equation}
\end{lem}
\begin{proof}
Without loss of generality we assume that $\mathbb{F}$ is an algebraically closed field, every curve in $L$ is irreducible, of degree $d$, and contains more than $d^{2}+1$ points from $P$. A curve $l\in L$ corresponds to a point of intersection $\cap_{p\in l\cap P}H_{p}\cap M$. Comparing with the proof of Lemma \ref{trivial bound}, it suffices to prove that for every $(d^{2}+1)$-tuple $\Gamma$ in $l\cap P$, there is a $\Gamma' \subseteq \Gamma$ such that $\cap_{p\in \Gamma'} H_{p}\cap M$ contains at most $O_{\mathcal{M}} (1)$ curves and $|\Gamma'|=k$. This follows from Proposition~\ref{dimension diminute} and Proposition~\ref{refine Bezout} above.
Indeed, by iteratively applying Proposition ~\ref{dimension diminute}, we can choose $\Gamma'$ such that $| \Gamma'|=k$ and $\cap_{p\in\Gamma'}H_{p}\cap\mathcal{M}$ has dimension $0$. By Proposition \ref{refine Bezout}, the cardinality of $\cap_{p\in\Gamma'}H_{p}\cap\mathcal{M}$ is bounded by a constant depending on $k$ and $\deg \mathcal{M}$.
\end{proof}
\section{A theorem on the complex field with $\epsilon$}
In this section, we follow the approach of \cite{solymosi2012incidence} to sketch a proof of Theorem \ref{complex epsilon}. The idea is to partition the point set $P$ with a polynomial of constant degree and then use induction on the size of $P$. In other words, the degree does not depend on the size of $P$ and $L$. With an $\epsilon$ loss in the exponents, one can perform induction on $|P|$, which controls incidences in the complement of the partition surface (in the cells). Here, a constant degree implies constant complexity, and we can use dimension reduction to estimate incidences on the partition surface.
\begin{proof}[Proof of Theorem \ref{complex epsilon}]
In our proof, $C$ is a constant depending only on $d$ and $\epsilon$ which may vary from place to place. $C_0, C_1$ and $C_2$ are positive constants to be chosen later, where $C_{0}, C_1 >2$ are sufficiently large depending on $d$ and $\epsilon$, and $C_{2}$ is sufficiently large depending on $C_{1}$, $C_{0}$, $d$ and $\epsilon$.
We do an induction on $|P|$. Suppose for $|P'|\leq \tfrac{|P|}{2}$ and $|L'| \leq |L|$, we have by the induction hypothesis,
\begin{equation}\label{hypothesis}
|\mathcal{I}(P',L')|\leq C_{2}|P'|^{\tfrac{A}{2A-1}+\epsilon}|L'|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P'|+|L'|).
\end{equation}
Our goal is to prove
\begin{equation}\label{inductiveclaim}
|\mathcal{I}(P,L)|\leq C_{2}|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P|+|L|).
\end{equation}
We apply Proposition \ref{cell decomposition} to $D = C_{1}$ on $\mathbb{C}^2 \simeq \mathbb{R}^{4}$ and obtain the following partition:
\begin{equation}
\mathbb{R}^{4}=\{Q=0\}\cup U_{1}\cup\cdots \cup U_{M}.
\end{equation}
Here, $Q: \mathbb{R}^{4}\rightarrow\mathbb{R}$ has degree at most $C_{1}$, $M =\OO( C_{1}^{4})$, and $|P_{i}| = |P \cap U_{i}| = O(\tfrac{|P|}{C_{1}^{4}})\leq \tfrac{|P|}{2}$. We denote $L_{i}$ to be the set of curves in $L$ with nonempty intersection with $U_{i}$. Thus, by the induction hypothesis we have,
\begin{align}
|\mathcal{I}(P_{i},L_{i})| \leq & C_{2}|P_{i}|^{\tfrac{A}{2A-1}+\epsilon}| L_{i}|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P_{i}| + |L_{i}|)\nonumber\\
\leq & C [C_{2}C_{1}^{-4(\tfrac{A}{2A-1}+\epsilon)}|P|^{\tfrac{A}{2A-1}+\epsilon}|L_{i}|^{\tfrac{2A-2}{2A-1}}+C_{0}(\tfrac{|P|}{C_1^4}+|L_{i}|)].
\end{align}
For $l$ belonging to some $L_{i}$, we apply a result in real algebraic geometry that implies the number of connected components of $l\setminus \{Q=0\}$ is at most $\OO_{d} (C_{1}^{2})$ (see Theorem A.2 of \cite{solymosi2012incidence}, \cite{milnor1964betti},\cite{petrovskii1949topology},\cite{thom1965homologie}). We deduce that
\begin{equation}
\sum_{i=1}^{M}|L_{i}| \leq CC_{1}^{2}|L|.
\end{equation}
Adding up $|\mathcal{I}(P_i,L_{i})|$ and applying H\"{o}lder's inequality, we obtain
\begin{align}
|\mathcal{I}(P_{cell},L_{cell})| = & \sum_{i=1}^{M}|\mathcal{I}(P_{i},L_{i})|\nonumber\\
\leq & C(C_{2}C_{1}^{-4(\tfrac{A}{2A-1}+\epsilon)}|P|^{\tfrac{A}{2A-1}+\epsilon}(\sum_{i=1}^{M}|L_{i}|^{\tfrac{2A-2}{2A-1}})+C_0 (|P|+C_{1}^{2}|L|))\nonumber\\
\leq & C(C_{1}^{-4\epsilon}C_{2}|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}+C_{0}(|P|+C_{1}^{2}|L|)).
\end{align}
Now we recall the two trivial bounds (\ref{L2+P}): the first was explained above in the real case, and the second follows from Lemma~\ref{trivial bound}:
\begin{equation}\label{L2+P}
|\mathcal{I}(P,L)|\lesssim_{d} |L|^{2}+|P|,~~~~~
|\mathcal{I}(P,L)|\lesssim_{d} |P|^{A}+|L|.
\end{equation}
Thus, one may assume that $|P|^{\tfrac{1}{2}}\lesssim_{d}|L|\lesssim_{d}|P|^{A}$, otherwise we immediately have $|\mathcal{I}(P,L)|\lesssim_{d}|P|+|L|$ and it suffices to choose $C_0$ larger than the implicit constant. With this assumption, we have
\begin{equation}\label{PcellLcell}
|\mathcal{I}(P_{cell},L_{cell})| \leq C(C_{1}^{-4\epsilon}C_{2}+C_{0}(|P|^{-\epsilon}+C_{1}^{2}|P|^{-\epsilon}))|P|^{\tfrac{A}{2A-1}+\epsilon}|L|^{\tfrac{2A-2}{2A-1}}.
\end{equation}
Suppose for now that the following inequality holds:
\begin{equation}\label{PalgL}
\mathcal{I}(P_{alg}, L)\lesssim_{C_1}|P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}} + |P|+|L|,
\end{equation}
then $\mathcal{I}(P_{alg}, L)\lesssim_{C_1} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}$ by the assumption $|P|^{\tfrac{1}{2}}\lesssim_{d}|L|\lesssim_{d}|P|^{A}$. Hence, we can combine this with (\ref{PcellLcell}), and a careful choice of $C_0, C_1$ and $C_2$ gives us (\ref{inductiveclaim}). When $|P| = 1$, (\ref{inductiveclaim}) is trivial, and we obtain (\ref{estimateepsilonleft}). For (\ref{estimateepsilonright}) the argument is similar and is omitted.
Finally, (\ref{PalgL}) follows from Proposition ~\ref{inductionondim} below when $r=3$, $D=C_{1}$ and $\Sigma=\{Q=0\}$.
\end{proof}
\begin{prop}\label{inductionondim}
Let $P$ and $L$ be as in Theorem ~\ref{complex epsilon}, $0\leq r\leq 3$, and let $\Sigma$ be a subvariety in $\mathbb{C}^2 \simeq \mathbb{R}^{4}$ of (real) dimension $\leq r$ and of degree $\leq D$. Then,
\begin{equation}
\mathcal{I}(P\cap \Sigma, L) \lesssim_{D} |P|^{\tfrac{A}{2A-1}}|L|^{\tfrac{2A-2}{2A-1}}+|P|+|L|.
\end{equation}
\end{prop}
\begin{proof}
When $r=0$, $\Sigma$ is a single point and the inequality trivially holds.
When $r=1$, we decompose $\Sigma=\Sigma_{1}\cup\Sigma_{2}$, where every component of $\Sigma_{1}$ belongs to some curve in $L$ and $\Sigma_{2}$ has no common component with any curve in $L$. Then, $|\mathcal{I}(P\cap \Sigma_{1}, L)|\lesssim_{D}|P|$ and $|\mathcal{I}(P\cap \Sigma_{2}, L)|\lesssim_{D, d}|L|$.
Now we deal with the case $r=2$. By an algebraic geometry result (see, for example, Corollary 4.5 in \cite{solymosi2012incidence}), one can decompose $\Sigma$ into smooth points on subvarieties
\[\Sigma = \Sigma^{smooth}\cup\bigcup_{i}\Sigma_{i}^{smooth},\]
where $\Sigma_{i}$'s are subvarieties of $\Sigma$ of dimension $\leq 1$ and of degree $\OO_{D}(1)$. The number of $\Sigma_{i}$'s is at most $\OO_{D}(1)$. It suffices to bound $\mathcal{I}(P\cap\Sigma^{smooth}, L)$.
If two curves $l_{1}, l_{2} \subseteq \Sigma$ intersect at $p\in \Sigma^{smooth}$, then by considering the tangent space and the transversality assumption, we see that $p$ is a singular point of $l_{1}$ or $l_{2}$. Let $L_{alg}$ denote the set of curves in $L$ contained in $\Sigma$, and let $L_{cell}=L\setminus L_{alg}$. Since each curve has $\OO_d (1)$ singular points, we obtain
\begin{equation}\label{SigmaLalg}
\mathcal{I}(P\cap\Sigma^{smooth}, L_{alg}) \leq |P| + \OO_d (1)|L|.
\end{equation}
It remains to estimate $\mathcal{I}(P\cap\Sigma^{smooth}, L_{cell})$. If $l$ does not belong to $\Sigma$, then by Corollary 4.5 of \cite{solymosi2012incidence} the intersection of $l$ and $\Sigma$ can be decomposed as $l\cap\Sigma=\cup_{j=0}^{J (l)}l_{j}$ for some $J(l) \leq \OO_{D}(1)$, where $l_{j}$ is an algebraic variety of dimension $\leq 1$ and of degree $\OO_D (1)$ for each $1\leq j\leq J(l)$. Letting $\mathcal{I}_{l, j}$ denote the set $\{p \in P: p \in l_{j}\}$, we obtain
\begin{equation}
|\mathcal{I}(P\cap \Sigma^{smooth}, L_{cell})|\leq \sum_{l, j: j \leq J(l)} |\mathcal{I}_{l, j}|.
\end{equation}
If $l_{j}$ is not the union of $\OO_D (1)$ points, then $l_{j}$ belongs to a unique $l$ because distinct curves in $L$ do not share a common component. By taking a generic projection from $\mathbb{R}^4$ to $\mathbb{R}^2$, we can apply arguments in the proof of Theorem ~\ref{main theorem}: use the initial bound given by $L$ and $P$, then apply the polynomial method.
When $r=3$, we can repeat the proof of $r=2$ assuming that the bound holds for $r\leq 2$.
\end{proof}
We also have the following corollary for complex curves parametrized by an algebraic variety:
\begin{cor}\label{complex variety}
Given a finite point set $P\subset \mathbb{C}^{2}$, an integer $d\geq 1$, $A={d+2\choose 2} -1$, and a subset $\mathcal{M}\subset S_d$ parametrized by an algebraic variety of dimension $\leq k$, let $L$ be a finite subset of $\mathcal{M}$ such that any two distinct curves of $L$ do not share a common component and intersect transversally at smooth points. Then, for any sufficiently small $\epsilon>0$,
\begin{equation}\label{estimatecomplexepsilonleft}
|\mathcal{I}(P,L)|\lesssim_{\epsilon, \mathcal{M}} |P|^{\tfrac{k}{2k-1}+\epsilon}|L|^{\tfrac{2k-2}{2k-1}}+|P|+|L|
\end{equation}
and
\begin{equation}\label{estimatecomplexepsilonright}
|\mathcal{I}(P,L)|\lesssim_{\epsilon, \mathcal{M}} |P|^{\tfrac{k}{2k-1}}|L|^{\tfrac{2k-2}{2k-1}+\epsilon}+|P|+|L|.
\end{equation}
\end{cor}
\bibliographystyle{amsalpha}
| {
"timestamp": "2015-03-31T02:19:36",
"yymm": "1308",
"arxiv_id": "1308.0861",
"language": "en",
"url": "https://arxiv.org/abs/1308.0861",
"abstract": "We prove new bounds on the number of incidences between points and higher degree algebraic curves. The key ingredient is an improved initial bound, which is valid for all fields. Then we apply the polynomial method to obtain global bounds on $\\mathbb{R}$ and $\\mathbb{C}$.",
"subjects": "Combinatorics (math.CO)",
"title": "Bounds of incidences between points and algebraic curves",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336243,
"lm_q2_score": 0.8152324938410784,
"lm_q1q2_score": 0.8007928532428634
} |
https://arxiv.org/abs/1504.03029 | The covering radius of randomly distributed points on a manifold | We derive fundamental asymptotic results for the expected covering radius $\rho(X_N)$ for $N$ points that are randomly and independently distributed with respect to surface measure on a sphere as well as on a class of smooth manifolds. For the unit sphere $\mathbb{S}^d \subset \mathbb{R}^{d+1}$, we obtain the precise asymptotic that $\mathbb{E}\rho(X_N)[N/\log N]^{1/d}$ has limit $[(d+1)\upsilon_{d+1}/\upsilon_d]^{1/d}$ as $N \to \infty $, where $\upsilon_d$ is the volume of the $d$-dimensional unit ball. This proves a recent conjecture of Brauchart et al. as well as extends a result previously known only for the circle. Likewise we obtain precise asymptotics for the expected covering radius of $N$ points randomly distributed on a $d$-dimensional ball, a $d$-dimensional cube, as well as on a 3-dimensional polyhedron (where the points are independently distributed with respect to volume measure). More generally, we deduce upper and lower bounds for the expected covering radius of $N$ points that are randomly and independently distributed on a metric measure space, provided the measure satisfies certain regularity assumptions. | \section{Introduction and Notation}
The purpose of this paper is to obtain asymptotic results for the expected value of the covering
radius of $N$ points $X_N=\{x_1, x_2,\ldots, x_N\}$ that are randomly and independently distributed with respect to a given measure $\mu$
over a metric space $(\mathcal{X}, m)$. By the \emph{covering radius} $\rho(X_N, \mathcal{X})$ (also known as the \emph{mesh norm }or \emph{fill radius}) of the set $X_N$ with respect to $\mathcal{X}$, we mean the radius of the largest neighborhood centered at a point of $\mathcal{X}$ that contains no points of $X_N$; more precisely,
$$
\rho(X_N, \mathcal{X}):=\sup_{y\in \mathcal{X}}\inf_j m(y, x_j).
$$
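In a discretized setting, the $\sup$--$\inf$ in this definition can be evaluated directly by approximating the outer supremum over a fine sample of $\mathcal{X}$. The following Python sketch (illustrative only; the helper names are ours, and the accuracy is limited by the resolution of the test sample) computes $\rho(X_N, \mathcal{X})$ for $\mathcal{X}=\mathbb{S}^1$ with the Euclidean metric:

```python
import math

def covering_radius(points, test_points):
    # rho(X_N, X) = sup_{y in X} inf_j m(y, x_j); the sup is approximated
    # over a fine sample `test_points` of X.
    return max(min(math.dist(y, x) for x in points) for y in test_points)

def circle(n):
    # n equally spaced points on the unit circle S^1 in R^2
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Sanity check: for N equally spaced points on S^1, the farthest point of the
# circle lies midway between neighbors, at chord distance 2*sin(pi/(2N)).
N = 8
rho = covering_radius(circle(N), circle(4000))
print(rho, 2 * math.sin(math.pi / (2 * N)))
```

Since $4000$ is divisible by $2N$, the test sample contains the exact midpoints, so here the approximation agrees with the true covering radius up to floating-point error.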
Our focus is on the limiting behavior as $N \to \infty$ of the expected value $\mathbb{E}\rho(X_N, \mathcal{X})$.
The covering radius of a discrete point set is an important characteristic that arises in a variety of contexts. For example, it plays an
essential role in determining the accuracy of various numerical approximation schemes such as those involving radial basis techniques (see, e.g. \cite{FW}, \cite{MNPW}). Another area where the covering radius arises is in ``1-bit sensing'', i.e., the problem of approximating an unknown vector (signal) $x\in K$ from knowledge of $m$ numbers $\textup{sign}\langle x, \theta_j \rangle$, $j=1,\ldots, m$, where the vectors $\theta_j$ are selected independently and randomly on a sphere; see discussion after Corollary \ref{nets} for details.
With regard to asymptotics for the expected value of the covering radius, of particular interest is the case where $\mathcal{X}$ is the unit sphere $\mathbb{S}^d$ in $\mathbb{R}^{d+1}$ and the metric is Euclidean
distance in $\mathbb{R}^{d+1}$. In \cite{BSR}, Bourgain, Sarnak and Rudnick study local statistics of certain spherical point configurations derived from normalized
sums of squares of integers. Their investigation focuses on whether such configurations exhibit features of randomness,
and for this purpose they study various local statistics, including the covering radius of random points on $\mathbb{S}^d$. They prove
that this radius is bounded from above by $N^{-1/d +o(1)}$ as $N \to \infty$.
For $d=1$, i.e. the unit
circle, it is shown in \cite {DN} by using order statistics, that for $N$ points independently and randomly distributed
with respect to arclength on the circle,
$$ \lim_{N \to \infty}\mathbb{E}\rho(X_N,\mathbb{S}^1)\left[ \frac{N}{\log N}\right] =\pi.
$$
Up to now, there has been no extension of this result to higher-dimensional spheres where the order statistics approach is more elusive.
Based on a heuristic argument and numerical experiments, Brauchart et al. \cite{BHS} have conjectured that the appropriate extension of the circle case
is the following:
\begin{equation}\label{sphereequiv1}
\lim_{N\to\infty} \mathbb{E}\rho(X_N, \mathbb{S}^d)\cdot \left[\frac{N}{\log N }\right]^{1/d}=\left(\frac{(d+1)\upsilon_{d+1}}{\upsilon_d}\right)^{1/d}=\left(2\sqrt{\pi}\frac{\Gamma(\frac{d+2}2)}{\Gamma(\frac{d+1}2)}\right)^{1/d} ,%
\end{equation}
where $\upsilon_d:=\frac{\pi^{d/2}}{\Gamma(1+d/2)}$ is the volume of a $d$-dimensional unit ball in $\mathbb{R}^d$, and the points of $X_N$ are
randomly and independently distributed with respect to surface measure on $\mathbb{S}^d$ (more precisely, $d$-dimensional Hausdorff measure
$\mathcal{H}_d).$ Their conjecture is also consistent with a result of H. Maehara \cite{M} who obtained probabilistic estimates for the size of random
caps that cover the sphere $\mathbb{S}^2$. He showed that with asymptotic probability one, random caps with radii that are a constant factor larger than the expected radii will cover the sphere,
whereas this asymptotic probability becomes zero when the random caps all have radii that are a factor smaller. However, his results fall short of providing a sharp asymptotic for the expected covering radius (in addition, his methods do not readily generalize to other smooth manifolds). As discussed in Section 3, our results for the sphere cannot be directly derived from Maehara's; however, his results are a direct consequence of our Corollary \ref{corsphere}.
The main goal of this article is to provide a proof of \eqref{sphereequiv1} and its various generalizations.
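Although \eqref{sphereequiv1} requires proof, it is easy to probe numerically. The following Monte Carlo sketch (a rough sanity check only: the supremum is approximated on a deterministic grid, the sample size is modest, and all helper names are ours) estimates the left-hand side for $d=2$, where the conjectured constant is $\left(3\upsilon_3/\upsilon_2\right)^{1/2}=2$:

```python
import math, random

def sphere_point(rng):
    # uniform point on S^2 via normalized Gaussians
    v = [rng.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(t * t for t in v))
    return tuple(t / r for t in v)

def fibonacci_sphere(m):
    # near-uniform deterministic grid on S^2, used to approximate the sup
    g = (1 + math.sqrt(5)) / 2
    pts = []
    for k in range(m):
        z = 1 - (2 * k + 1) / m
        r = math.sqrt(max(0.0, 1 - z * z))
        th = 2 * math.pi * k / g
        pts.append((r * math.cos(th), r * math.sin(th), z))
    return pts

def covering_radius(points, grid):
    return max(min(math.dist(y, x) for x in points) for y in grid)

rng = random.Random(0)
N, grid, trials = 300, fibonacci_sphere(2000), 4
est = sum(covering_radius([sphere_point(rng) for _ in range(N)], grid)
          for _ in range(trials)) / trials
scaled = est * (N / math.log(N)) ** 0.5   # conjectured limit is 2 for d = 2
print(scaled)
```

Even at this small $N$ the scaled estimate lands near the conjectured constant $2$; the grid approximation can only underestimate the supremum.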
We remark that for any compact metric space $(\mathcal{X}, m)$ with $\mathcal{X}$ having finite $d$-dimensional Hausdorff measure, there exists a positive constant
$C$ such that
for any $Y_N=\{y_1,\ldots, y_N\} \subset \mathcal{X}$, there holds
\begin{equation}\label{lowerbound}
\rho_N:=\rho(Y_N,\mathcal{X}) \geq \frac{C}{N^{1/d}},\quad N \geq 1.
\end{equation}
Indeed, a lemma of Frostman (see, e.g. Theorem 8.17 in \cite{Mat}) implies the existence of a finite positive measure $\mu$ on $\mathcal{X}$ for which
$\mu(B(x,r)) \leq (2r)^d$ for all $x \in \mathcal{X}$ and all $0<r\leq \operatorname{diam}(\mathcal{X}),$ where $B(x,r)$ denotes the ball centered at $x$ having
radius $r$. Consequently,
$$0<\mu(\mathcal{X}) \leq \sum_{i=1}^N \mu(B(y_i,\rho_N) )\leq N(2\rho_N)^d,
$$
which verifies \eqref{lowerbound}. Thus, as also remarked in \cite{BSR} and made more explicit by \eqref{sphereequiv1}, randomly distributed
points have relatively good covering properties, differing from the optimal by a factor of $(\log N)^{1/d}$.
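The volume argument behind \eqref{lowerbound} can be instantiated concretely in the unit square: the $N$ balls $B(y_i,\rho_N)$ must cover $[0,1]^2$, so $N\pi\rho_N^2\geqslant 1$, i.e. $\rho_N\geqslant (\pi N)^{-1/2}$ for \emph{every} configuration. A sketch of this check (the helper names are ours; the supremum is approximated on a grid, which can only underestimate $\rho_N$):

```python
import math, random

def covering_radius(points, test_pts):
    return max(min(math.dist(y, x) for x in points) for y in test_pts)

def square_grid(m):
    # m*m cell centers in the unit square
    h = 1.0 / m
    return [((i + 0.5) * h, (j + 0.5) * h) for i in range(m) for j in range(m)]

def volume_lower_bound(n):
    # the n balls B(y_i, rho) must cover [0,1]^2, hence n*pi*rho^2 >= 1
    return 1.0 / math.sqrt(math.pi * n)

rng = random.Random(1)
test_pts = square_grid(150)
configs = [square_grid(5),                                     # a 5x5 lattice
           [(rng.random(), rng.random()) for _ in range(25)]]  # 25 random points
rhos = [covering_radius(p, test_pts) for p in configs]
print(rhos, volume_lower_bound(25))
```

Both the lattice and the random configuration respect the bound; the lattice comes closer to it, consistent with random points being worse covers by a logarithmic factor.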
The outline of this paper is as follows. In Section 2, we state our probabilistic and expected covering radius estimates for general compact metric spaces, where the points
are randomly distributed with respect to a measure satisfying certain regularity conditions. Results for compact subsets of
Euclidean space are given in Section 3, including sharp asymptotic results for randomly distributed points with respect
to Hausdorff measure on rectifiable curves, smooth surfaces, bodies with smooth boundaries, $d$-dimensional cubes, and $3$-dimensional polyhedra.
The proofs of our stated results are provided in Section 5, utilizing properties established in Section 4 for a commonly arising probability function.
We conclude this section with a listing of some notational conventions and terminology that will be utilized throughout the paper.
\begin{itemize}
\item We denote by $B(x,r)$ a closed ball in the metric space $(\mathcal{X}, m);$ more precisely,
$B(x,r):=\{y\in \mathcal{X}\colon m(y,x)\leqslant r\}$. For $d$-dimensional balls in Euclidean space we write $B_d(x,r).$
\item For a positive finite Borel measure $\mu$ supported on a set $\mathcal{X}$, we say that a point $x$ is {\it randomly distributed over $\mathcal{X}$ with respect to $\mu$}, if it is distributed with respect to the probability measure $\mu/\mu(\mathcal{X})$; i.e., for any Borel set $K$ it holds that $\P(x\in K) = \mu(K)/\mu(\mathcal{X})$.
\item For a positive integer $s\leqslant d$, we denote by $\mathcal{H}_s$ the $s$-dimensional Hausdorff measure on the Euclidean space $\mathbb{R}^d$ with the Euclidean metric, normalized by $\mathcal{H}_s([0,1]^s)=1$. Thus, $\mathcal{H}_s(E)=\frac{\pi^{s/2}}{2^s\Gamma(1+s/2)}\mathcal{H}^s(E)$, where $\mathcal{H}^s$ is the Hausdorff measure defined in \cite{Fal}.
\item If $K$ is a subset of the Euclidean space $\mathbb{R}^d$, we always equip it with the Euclidean metric $m(x,y)=|x-y|.$
\item The symbols $c_1, c_2, \ldots$, and $C_1, C_2, \ldots$ shall denote positive constants that may differ from one inequality to another. These constants never depend on $N$.
\end{itemize}
\section{Main Theorems for Metric Spaces}\label{sectionmetric}
Throughout this section, we assume that $(\mathcal{X}, m)$ is a metric space, $\mu$ is a finite positive Borel measure supported on $\mathcal{X}$, and $X_N=\{x_1, \ldots, x_N\}$ is a set of $N$ points, independently and randomly distributed over $\mathcal{X}$ with respect to $\mu$. Our theorems provide estimates for the probability and expected values of the covering radius $\rho(X_N, \mathcal{X})$ when the measure $\mu$ satisfies certain regularity conditions described by a function $\Phi$.
\begin{theorem}\label{metricabove}
Suppose $\Phi$ is a continuous non-negative strictly increasing function on $(0, \infty)$ satisfying $\Phi(r)\to 0$ as $r\to 0^+$. If there exists a positive number $r_0$ such that $\mu(B(x,r))\geqslant \Phi(r)$ holds for all $x\in \mathcal{X}$ and every $r<r_0$,
then there exist positive constants $c_1$, $c_2$, $c_3$, and $\alpha_0$ such that for any $\alpha>\alpha_0$ we have
\begin{equation}\label{metricaboveprob}
\P\left[\rho(X_N, \mathcal{X})\geqslant c_1\Phi^{-1}\left(\frac{\alpha\log N}{N}\right)\right]\leqslant c_2N^{1-c_3\alpha}.
\end{equation}
If, in addition, $\Phi$ satisfies $\Phi(r)\leqslant r^\sigma$ for all small $r$ and some positive number $\sigma$, then there exist positive constants $c_1, c_2$ such that
\begin{equation}\label{metricaboveexpect}
\mathbb{E}\rho(X_N, \mathcal{X}) \leqslant c_1 \Phi^{-1}\left(c_2 \frac{\log N }{N}\right).
\end{equation}
\end{theorem}
A lower bound for the expected covering radius is given in our next result.
\begin{theorem}\label{metricbelow}
Let $\Phi$ be a continuous non-negative strictly increasing function on $(0, \infty)$ satisfying $\Phi(r)\to 0$ as $r\to 0^+$ and the strict doubling property; i.e., for some constants $C_1, C_2>1$ and any small $r$ it holds that $C_1\Phi(r)\leqslant\Phi(2r)\leqslant C_2\Phi(r)$.
If there exists a subset $\mathcal{X}_1\subset \mathcal{X}$ with the following two properties:
\begin{enumerate}[label={\upshape(\roman*)}]
\item $\mu(\mathcal{X}_1)>0$;
\item there exist positive numbers $r_0$ and $c$ such that for any $x\in \mathcal{X}_1$ and every $r<r_0$ the regularity condition $c\Phi(r)\leqslant \mu(B(x,r))\leqslant \Phi(r)$ holds,
\end{enumerate}
then there exist positive constants $c_1$ , $c_2$, and $c_3$ such that
\begin{equation}\label{metricbelowprob}
\P\left[\rho(X_N, \mathcal{X})\geqslant c_1\Phi^{-1}\left(\frac{c_2\log N-c_3\log\log N}N\right)\right]=1-o(1), \; \; N\to \infty.
\end{equation}
Consequently, there exist positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{metricbelowexpect}
\mathbb{E}\rho(X_N, \mathcal{X}) \geqslant c_1 \Phi^{-1}\left(c_2 \frac{\log N}{N}\right).
\end{equation}
\end{theorem}
Combining Theorems \ref{metricabove} and \ref{metricbelow} we deduce the following.
\begin{corollary}\label{twosided}
Assume the function $\Phi$ is continuous non-negative, strictly increasing, strictly doubling, and that there exist positive numbers $r_0$ and $\sigma$ such that $\Phi(r)\leqslant r^\sigma$ for every $r<r_0$. If for some positive constants $c, C$, any $x\in \mathcal{X}$ and every $r<r_0$ we have
\begin{equation}\label{regularity}
c\Phi(r)\leqslant \mu(B(x, r))\leqslant C\Phi(r),
\end{equation}
then there exist positive constants $c_1, c_2, c_3, c_4$ such that for any $\varepsilon>0$ there is a number $N(\varepsilon)$ such that for any $N>N(\varepsilon)$ we have
\begin{equation}\label{distribineq}
\P\left[c_1\Phi^{-1}\left(c_2\frac{\log N}{N}\right ) \leqslant \rho(X_N, \mathcal{X})\leqslant c_3\Phi^{-1}\left(c_4\frac{\log N}{N}\right)\right] > 1-\varepsilon.
\end{equation}
Moreover, there exist positive constants $C_1, C_2, C_3, C_4$ such that
\begin{equation}\label{expectineq}
C_1\Phi^{-1}\left(C_2\frac{\log N}{N}\right) \leqslant \mathbb{E}\rho(X_N, \mathcal{X})\leqslant C_3\Phi^{-1}\left(C_4\frac{\log N}{N}\right).
\end{equation}
\end{corollary}
For recent estimates similar to \eqref{distribineq} and \eqref{expectineq} for the spherical cap discrepancy of random points on the unit sphere $\mathbb{S}^2\subset \mathbb{R}^3$, see Theorems $9$ and $10$ in \cite{ABD}.
An important class of sets in $\mathbb{R}^d$ to which Corollary \ref{twosided} applies are described in the following definition.
\begin{defin}
We call a set $\mathcal{X}\subset \mathbb{R}^d$ {\it $s$-regular} if the condition \eqref{regularity} holds for $\mu=\mathcal{H}_s$ and $\Phi(r)=r^s$; i.e., for some positive constants $r_0, c$, and $C$ there holds
\begin{equation}\label{sregset}
cr^s\leqslant \mathcal{H}_s(B_d(x,r)\cap \mathcal{X})\leqslant Cr^s \; \; \mbox{for any $x\in \mathcal{X}$ and every $r<r_0$}.
\end{equation}
\end{defin}
\begin{zamech}
Examples of sets in Euclidean space for which Corollary \ref{twosided} holds include a cube $[0,1]^d$, a rectifiable curve $\Gamma\subset \mathbb{R}^d$, the unit sphere $\mathbb{S}^{d-1}\subset \mathbb{R}^d$, or any $s$-regular set $\mathcal{X}\subset \mathbb{R}^d$. Furthermore, the results of Corollary \ref{twosided} hold not only for $\Phi(r)=r^s$, but for more general regularity functions, such as $\Phi(r)=r^\alpha \log^\beta\left(1/r\right)$, with $\alpha>0$ and $\beta\geqslant 0$.
In particular, Corollary \ref{twosided} applies for the ``middle $1/3$'' Cantor set $\mathcal{C}$ in $[0,1]$ with $\textup{d}\mu=\mathbbm{1}_{\mathcal{C}}\textup{d}\mathcal{H}_{\log2/\log3}$. We remark that for $\mu$-a.e. point $x\in \mathcal{C}$ we have
$$
\liminf_{r\to 0^+}\frac{\mu(B_1(x,r)\cap \mathcal{C})}{r^{\log2/\log3}}\not=\limsup_{r\to 0^+}\frac{\mu(B_1(x,r)\cap \mathcal{C})}{r^{\log2/\log3}};
$$
i.e., at $\mu$-a.e. point $x$ of $\mathcal{C}$ the density of $\mu$ at $x$ does not exist, which essentially precludes obtaining
a sharp asymptotic for $\mathbb{E}\rho(X_N, \mathcal{C})$ (compare with \eqref{uniform} below). However, Corollary \ref{twosided} provides the two-sided estimate
$$
c_1 \left(\frac{\log N}{N}\right)^{\log 3/\log 2} \leqslant \mathbb{E}\rho(X_N, \mathcal{C}) \leqslant c_2 \left(\frac{\log N}{N}\right)^{\log 3/\log 2}.
$$
\end{zamech}
\begin{zamech}
The condition in Theorem \ref{metricabove} that $\mu(B(x,r))\geqslant \Phi(r)$ for every $x\in \mathcal{X}$ is essential. Indeed, if we consider the set $\mathcal{X}=[0,1]\cup \{2\}$ with $\mu$ Lebesgue measure, then $\mu(B_1(x,r))\geqslant r$ for $x\in \mathcal{X}\setminus \{2\}$. However, we have $\P\left[\rho(X_N, \mathcal{X})\geqslant 1\right]=1$, and so $\mathbb{E}\rho(X_N, \mathcal{X})\geqslant 1$. The reason that inequality \eqref{metricaboveexpect} fails in this case is that for the point $x=2$ we have $\mu(B_1(x,r))=0$ for small values of $r$. However, Theorem \ref{metricabove} does apply if $\mu=m_{[0,1]}+\alpha\delta_2$, where $m_{[0,1]}$ is Lebesgue measure on $[0,1]$, $\delta_2$ is the unit point mass at $x=2$, and $\alpha>0$. In this case we get
$$
\mathbb{E}\rho(X_N, \mathcal{X})\leqslant {C(\alpha)}\cdot \frac{\log N}N.
$$
In fact, repeating the proofs from Sections \ref{abovvvve} and \ref{sectionfrombelow} (with $K_1=[0,1]$), we obtain
$$
\lim_{N\to \infty} \mathbb{E}\rho(X_N, \mathcal{X}) \cdot \frac{N}{\log N} = \frac{1+\alpha}{2} \; \; \mbox{for any $\alpha>0$}.
$$
\end{zamech}
The above results have immediate consequences for $\varepsilon$-nets. Since different definitions of an ``$\varepsilon$-net'' occur in the literature, the terminologies that we use are made precise in what follows.
\begin{defin}
A subset $A$ of a metric space $(\mathcal{X}, m)$ is called an {\it $\varepsilon$-net} (or {\it $\varepsilon$-covering}) if, for any point $y\in \mathcal{X}$, there exists a point $x\in A$ such that $m(x,y)\leqslant \varepsilon$. Equivalently, $A$ is an $\varepsilon$-net if $\rho(A, \mathcal{X})\leqslant \varepsilon$.
\end{defin}
\begin{defin}
A subset $A$ of a metric space $(\mathcal{X}, m)$ with a positive Borel measure $\mu$ is called a {\it measure $\varepsilon$-net} if any ball $B(y,r)$ with $\mu(B(y,r))\geqslant \varepsilon$ intersects $A$.
\end{defin}
We remind the reader that on $\mathbb{S}^d$ with $\mu$ surface area measure $\mathcal{H}_d$, the minimal $\varepsilon$-net has cardinality $c\varepsilon^{-d}$ (for the proof see, for example, Lemma $5.2$ in \cite{V}), while the minimal measure $\varepsilon$-net has cardinality $c\varepsilon^{-1}$.
\begin{corollary}\label{nets}
If $\Phi$ and $\mu$ are as in the first part of Theorem \ref{metricabove}, then there exists a positive constant $c_1$ such that for any positive number $\alpha$ there is a positive constant $C_\alpha$ for which
$$
\P\left[X_N \; \mbox{is an $\varepsilon$-net}\right] \geqslant 1-N^{-\alpha}, \; \; \; \mbox{for} \; \varepsilon=c_1\Phi^{-1}\left(C_\alpha\frac{\log N}N\right).
$$
Furthermore, if the function $\Phi$ is doubling, and the measure $\mu$ satisfies the condition \eqref{regularity}, then for any positive number $\alpha$ there exists a positive constant $C_\alpha$ such that
$$
\P\left[X_N \; \mbox{is a measure $\varepsilon$-net}\right] \geqslant 1-N^{-\alpha}, \; \; \; \mbox{for} \; \varepsilon=C_\alpha\frac{\log N}N.
$$
\end{corollary}
By way of illustration, suppose for simplicity that $\Phi(r)=Cr^d$ for some positive constant $C$ and $\varepsilon=\left[(\log N)/N\right]^{1/d}$, which implies that $N$ is of the order $\varepsilon^{-d}\log(1/\varepsilon)$. Then, from the first part of Corollary \ref{nets}, if we take $C_1\varepsilon^{-d}\log\left(1/\varepsilon\right)$ random points, we get an $\varepsilon$-net ($\varepsilon$-covering) with high probability.
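As a toy instance of this count (our own sketch: on $[0,1]$ with Lebesgue measure one may take $\Phi(r)=r$, i.e.\ $d=1$, and the covering radius of a finite set in $[0,1]$ is computable exactly from the sorted points), drawing $C\varepsilon^{-1}\log(1/\varepsilon)$ random points yields an $\varepsilon$-net with overwhelming probability:

```python
import math, random

def covering_radius_interval(xs):
    # exact covering radius of a finite set in [0,1]:
    # max of the two end gaps and half of the largest interior gap
    xs = sorted(xs)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return max(xs[0], 1 - xs[-1], max(gaps) / 2 if gaps else 0.0)

rng = random.Random(42)
eps = 0.1
# C * eps^{-d} * log(1/eps) random points, with d = 1 and C = 10 (our choice)
N = round(10 / eps * math.log(1 / eps))
ok = covering_radius_interval([rng.random() for _ in range(N)]) <= eps
print(N, ok)
```

With these parameters the probability of failure is astronomically small, so the check passes for essentially any seed.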
The cardinality of an $\varepsilon$-covering of a set $K\subset \mathbb{S}^d$ plays an important role in ``1-bit compressed sensing''. The estimates for the number $m$ of random vectors $\{\theta_j\}_{j=1}^m$, essential to approximate an unknown signal $x\in K$ from knowledge of $m$ ``bits'' $\textup{sign}\langle x,\theta_j\rangle$ involve finding an $\varepsilon$-covering of the set $K$ with $\log(N(K, \varepsilon))\leqslant C\varepsilon^{-2}w(K)$, where $N(K, \varepsilon)$ is the cardinality of the covering, and $w$ is the so-called ``mean width'' of $K$. As can be seen from our results, for many sets $K$ a random set of $C\varepsilon^{-d}\log(1/\varepsilon)$ points satisfies this condition with high probability. For further discussion, see \cite{PV1}, \cite{PV2}.
\section{Expected Covering Radii for Subsets of Euclidean Space}\label{sectioneuclid}
In some cases we are able to ``glue'' upper and lower estimates together to obtain sharp asymptotic results.
For this purpose we state the following definitions.
\begin{defin}\label{flat}
Let $s$ be a positive integer, $s\leqslant d$. Suppose $K$ is a compact $s$-dimensional set in $\mathbb{R}^d$ with the Euclidean metric.
We call $K$ an {\it asymptotically flat $s$-regular} set if
for any $x\in K$ it holds that
\begin{equation}\label{uniform}
r^{-s}\mathcal{H}_s(B_d(x,r)\cap K)\rightrightarrows \upsilon_s \;\; \mbox{as $r\to 0^+$},
\end{equation}
where the convergence is uniform in $x$, and $\upsilon_s$ is the volume of the $s$-dimensional unit ball $B_s(0,1)$.
We call $K$ a {\it quasi-nice $s$-regular} set if
\begin{enumerate} [(i)]
\item $K$ is countably $s$-rectifiable; i.e., $K$ is of the form $\bigcup_{j=1}^\infty f_j(E_j) \cup G$, where $\mathcal{H}_s(G)=0$ and where each $f_j$ is a Lipschitz function from a bounded subset $E_j$ of $\mathbb{R}^s$ to $\mathbb{R}^d$;
\item There exist positive numbers $c, C, r_0$ such that for any $x\in K$ and any $r<r_0$ the $s$-regularity condition holds: $c r^s\leqslant \mathcal{H}_s(B_d(x,r)\cap K)\leqslant C r^s$;
\item There is a finite set $T\subset K$ such that for any $r<r_0$ and $y\in K\setminus \bigcup_{x_t\in T}B_d(x_t, r)$ it holds that $\mathcal{H}_s(B_d(y,r)\cap K)\geqslant \upsilon_s r^s$.
\end{enumerate}
\end{defin}
We remark that the appearance of the constant $\upsilon_s$ in the above definitions is quite natural. Indeed, if $K$ is a countably $s$-rectifiable compact set and $0<\mathcal{H}_s(K)<\infty$, then for $\mathcal{H}_s$-almost every point $x\in K$ the following holds: $r^{-s}\mathcal{H}_s(B_d(x,r)\cap K)\to \upsilon_s$ as $r\to 0^+$. For details, see Theorem 17.6 in \cite{Mat} or Theorem 3.33 in \cite{Fal}. Thus, if any uniform limit in \eqref{uniform} exists, then it must equal $\upsilon_s$.
For asymptotically flat $s$-regular and quasi-nice $s$-regular sets we deduce the following precise asymptotics for the expected covering radius as well as its moments.
\begin{theorem}\label{manifolds}
Suppose $K\subset \mathbb{R}^d$ is an asymptotically flat $s$-regular or a quasi-nice $s$-regular set for integer $s\leqslant d$. Then for $X_N=\{x_1, \ldots, x_N\}$ a set of $N$ independently and randomly distributed points over $K$ with respect to the measure $\textup{d}\mu:=\mathbbm{1}_K\cdot \textup{d}\mathcal{H}_s / \mathcal{H}_s(K)$, and any $p \geq 1,$
\begin{equation}\label{asymp}
\lim_{N\to \infty} \mathbb{E}[\rho(X_N, K)^p]\cdot \left[\frac{N}{\log N}\right]^{p/s}=\left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s}.
\end{equation}
\end{theorem}
Important examples of asymptotically flat $s$-regular sets are given in the following result, which includes the verification of the conjecture of Brauchart et al. in \cite{BDSSWW} for the expected covering radius of randomly distributed points on the unit sphere.
\begin{corollary}\label{corsphere}
Suppose $K$ is a closed $C^{(1,1)}$ $s$-dimensional embedded submanifold of $\mathbb{R}^d$; i.e., $0<\mathcal{H}_s(K)<\infty$ and, for any embedding $\varphi$, all its first partial derivatives exist and are uniformly Lipschitz. Then $K$ is an asymptotically flat $s$-regular manifold, and thus for $N$ points independently and randomly distributed over $K$ with respect to $\textup{d}\mu=\mathbbm{1}_K \cdot \textup{d}\mathcal{H}_s/\mathcal{H}_s(K)$, equation \eqref{asymp} holds.
In particular, if $K=\mathbb{S}^d$ is a unit sphere in $\mathbb{R}^{d+1}$ and $p \geq 1$, then
\begin{equation}\label{sphereequiv}
\lim_{N\to\infty} \mathbb{E}[\rho(X_N, \mathbb{S}^d)^p]\cdot \left[\frac{N}{\log N }\right]^{p/d}=\left(\frac{(d+1)\upsilon_{d+1}}{\upsilon_d}\right)^{p/d}=\left(2\sqrt{\pi}\frac{\Gamma(\frac{d+2}2)}{\Gamma(\frac{d+1}2)}\right)^{p/d} . %
\end{equation}
Thus \eqref{sphereequiv1} holds.
\end{corollary}
As a consequence of the corollary, we shall deduce in Section 5 the result of Maehara mentioned in the Introduction.
\begin{corollary}[Maehara \cite{M}]\label{maehara}
Suppose $X_N=\{x_1, \ldots, x_N\}$ is a set of $N$ points, independently and randomly distributed over the unit sphere $\mathbb{S}^d$ with respect to $\textup{d}\mu=\mathbbm{1}_{\mathbb{S}^d} \cdot \textup{d}\mathcal{H}_d/\mathcal{H}_d(\mathbb{S}^d)$, and set
$$
Z_N:=\rho(X_N, \mathbb{S}^d)\cdot \left(\frac{\upsilon_d}{(d+1)\upsilon_{d+1}}\cdot \frac{N}{\log N}\right)^{1/d}.
$$
Then $Z_N$ converges in probability to 1 as $N \to \infty$; i.e., for each $\epsilon > 0,$
\begin{equation}\label{Mae}
\lim_{N \to \infty} \mathbb{P}(|Z_N-1|\geq\epsilon)=0.
\end{equation}
\end{corollary}
\begin{zamech}
We remark that our results for $\mathbb{S}^d$ do not directly follow from \eqref{Mae}.
Maehara's result implies that the bounded sequence
$$
p_N(t):=\P(Z_N\geqslant t)\to \mathbbm{1}_{[0,1]}(t) \; \; \mbox{for a.e. $t>0$};
$$
however, since the range of $t$ is $[0, \infty)$, the constant function $1$ is not integrable, and we cannot apply the Lebesgue dominated convergence theorem to get $\mathbb{E} Z_N = \int_0^{\infty} p_N(t)dt \to 1$.
\end{zamech}
The next corollary gives an example of a quasi-nice $1$-regular set.
\begin{corollary}\label{corcurve}
Suppose $\gamma$ is a rectifiable curve in $\mathbb{R}^d$ (i.e., $0<\mathcal{H}_1(\gamma)<\infty$ and $\gamma$ is a continuous injection of a closed interval of $\mathbb{R}$). If $X_N$ denotes a set of $N$ points independently and randomly distributed over $\gamma$ with respect to $\textup{d}\mu:=\mathbbm{1}_\gamma\cdot \textup{d}\mathcal{H}_1/\mathcal{H}_1(\gamma)$, then $\gamma$ is a quasi-nice $1$-regular set, and for any $p\geqslant 1$
\begin{equation}\label{curveequiv}
\lim_{N\to \infty} \mathbb{E}[\rho(X_N, \gamma)^p]\cdot \left[\frac{N}{\log N}\right]^p=\left(\frac{\mathcal{H}_1(\gamma)}{2}\right)^p.
\end{equation}
\end{corollary}
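The unit interval is itself a rectifiable curve with $\mathcal{H}_1([0,1])=1$, so \eqref{curveequiv} with $p=1$ predicts $\mathbb{E}\rho(X_N,[0,1])\cdot N/\log N \to 1/2$. A Monte Carlo sketch (rough check only; convergence is slow, so at finite $N$ the estimate sits somewhat above $1/2$, and all names are ours):

```python
import math, random

def covering_radius_interval(xs):
    # exact covering radius of a finite set in [0,1]:
    # max of the two end gaps and half of the largest interior gap
    xs = sorted(xs)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return max(xs[0], 1 - xs[-1], max(gaps) / 2)

rng = random.Random(7)
N, trials = 2000, 300
mean_rho = sum(covering_radius_interval([rng.random() for _ in range(N)])
               for _ in range(trials)) / trials
scaled = mean_rho * N / math.log(N)   # \eqref{curveequiv} predicts limit 1/2
print(scaled)
```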
Next we deal with the following problem: suppose $A\subset \mathbb{R}^d$ is a $d$-dimensional set, but the condition
$$
\mathcal{H}_d(A\cap B_d(x,r))\geqslant \upsilon_d r^{d}
$$
fails for some points $x\in A$, so that the limit \eqref{uniform} in Definition \ref{flat} is not uniform. Such situations arise for sets with boundary, which include the unit ball $B_d(0,1)$ and the unit cube $[0,1]^d$. The case of the ball is covered by the next theorem, while the case of the cube is studied in Proposition \ref{cube}.
\begin{theorem}\label{theoremball}
Let $d\geqslant 2$, and let $K\subset \mathbb{R}^d$ be a set that satisfies the following conditions.
\begin{enumerate}[label={\upshape(\roman*)}]
\item $K$ is compact and $0<\mathcal{H}_d(K)<\infty$;
\item $K=\textup{clos}(K_0)$, where $K_0$ is an open set in $\mathbb{R}^d$ with $\partial K_0 = \partial K$;
\item The boundary $\partial K$ of $K$ is a $C^2$ smooth $(d-1)$-dimensional embedded submanifold of $\mathbb{R}^d$.
\end{enumerate}
Let $X_N=\{x_1, \ldots, x_N\}$ be a set of $N$ points, independently and randomly distributed over $K$ with respect to $\textup{d}\mu=\mathbbm{1}_K \cdot \textup{d}\mathcal{H}_d / \mathcal{H}_d(K)$.
Then for any $p\geqslant 1$
\begin{equation}\label{smoothequiv}
\lim_{N\to \infty}\mathbb{E}[\rho(X_N, K)^p]\cdot \left[\frac{N}{\log N}\right]^{p/d}=\left(\frac{2(d-1)}d\cdot \frac{\mathcal{H}_d(K)}{\upsilon_d}\right)^{p/d}.
\end{equation}
In particular, for the unit ball,
\begin{equation}\label{ballequiv}
\lim_{N\to \infty}\mathbb{E}[\rho(X_N, B_d(0,1))^p]\cdot \left[\frac{N}{\log N}\right]^{p/d}=\left(\frac{2(d-1)}d\right)^{p/d}.
\end{equation}
\end{theorem}
\begin{zamech}
We see that in the case $d=2$ we have $2(d-1)/d=1$, and so the constant on the right-hand side of \eqref{smoothequiv} coincides with the constant for smooth closed manifolds, see \eqref{asymp}. However, when $d>2$ we have $2(d-1)/d>1$; thus this constant becomes bigger than for smooth closed manifolds.
\end{zamech}
The next two propositions deal with cases when the boundary of the set is not smooth. For simplicity, we formulate them for a cube $[0,1]^d$ and a polyhedron in $\mathbb{R}^3$. However, the proof can be applied to other examples, such as cylinders.
\begin{prop}\label{cube}
Suppose $d\geqslant 2$ and $[0,1]^d$ is the $d$-dimensional unit cube. Let $\textup{d}\mu=\mathbbm{1}_{[0,1]^d}\cdot \textup{d}\mathcal{H}_d$. If $X_N=\{x_1, \ldots, x_N\}$ is a set of $N$ points, independently and randomly distributed over $[0,1]^d$ with respect to $\mu$, then for any $p\geqslant 1$
\begin{equation}\label{cubeequiv}
\lim_{N \to \infty} \mathbb{E}[\rho(X_N, [0,1]^d)^p]\cdot \left[\frac{N}{\log N }\right]^{p/d}=\left(\frac{2^{d-1}}{d\upsilon_d}\right)^{p/d}.
\end{equation}
\end{prop}
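For $d=2$ the right-hand side of \eqref{cubeequiv} with $p=1$ is $\left(2/(2\pi)\right)^{1/2}=\pi^{-1/2}\approx 0.56$. A small Monte Carlo sketch (rough check only: the supremum is approximated on a grid, $N$ is modest, and the helper names are ours):

```python
import math, random

def covering_radius(points, test_pts):
    return max(min(math.dist(y, x) for x in points) for y in test_pts)

def square_grid(m):
    # m*m cell centers in the unit square, used to approximate the sup
    h = 1.0 / m
    return [((i + 0.5) * h, (j + 0.5) * h) for i in range(m) for j in range(m)]

rng = random.Random(3)
N, trials, test_pts = 200, 3, square_grid(100)
est = sum(covering_radius([(rng.random(), rng.random()) for _ in range(N)],
                          test_pts)
          for _ in range(trials)) / trials
# \eqref{cubeequiv} with d = 2, p = 1 predicts the limit (1/pi)^{1/2} ~ 0.56
scaled = est * (N / math.log(N)) ** 0.5
print(scaled)
```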
\begin{prop
Suppose $P$ is a polyhedron in $\mathbb{R}^3$ of volume $V(P)$. Let $X_N=\{x_1, \ldots, x_N\}$ be a set of $N$ points, independently and randomly distributed over $P$ with respect to $\textup{d}\mu=\mathbbm{1}_P\cdot \textup{d}\mathcal{H}_3/V(P)$. If $\theta$ is the smallest angle at which two faces of $P$ intersect, then for any $p\geqslant 1$
\begin{equation}\label{polyhequiv1}
\lim_{N \to \infty} \mathbb{E}[\rho(X_N, P)^p]\cdot \left[\frac{N}{\log N}\right]^{p/3}=\left(\frac{2\pi V(P)}{3\theta \upsilon_3}\right)^{p/3} = \left(\frac{V(P)}{2\theta}\right)^{p/3}, \;\; \mbox{if $\theta\leqslant \frac{\pi}{2}$};
\end{equation}
\begin{equation}\label{polyhequiv2}
\lim_{N \to \infty} \mathbb{E}[\rho(X_N, P)^p]\cdot \left[\frac{N}{\log N}\right]^{p/3} = \left(\frac{V(P)}{\pi}\right)^{p/3}, \;\; \mbox{if $\theta\geqslant \frac{\pi}{2}$}.
\end{equation}
\end{prop}
In the theorems up to this point, we dealt with measures $\mu$ on sets $\mathcal{X}$ satisfying, for all $x\in \mathcal{X}$, the condition $cr^s\leqslant \mu(B(x,r)\cap \mathcal{X})\leqslant Cr^s$ (i.e., the regularity function $\Phi$ was the same for all points of $\mathcal{X}$); only the values of the best constants $c, C$ differed for points $x$ deep inside $\mathcal{X}$ from those near the boundary. We now give an example of a measure for which
the regularity exponent $s$ depends upon the distance to the boundary.
\begin{prop}
Consider the interval $[-1,1]$ and the measure $\textup{d}\mu=\frac{dx}{\pi\sqrt{1-x^2}}$. Let $X_N=\{x_1, \ldots, x_N\}$ be a set of $N$ points, independently and randomly distributed over $[-1,1]$ with respect to $\mu$. Define
$$\hat{\rho}(X_N, [-1,1]):=\sup_{y\in [1-\frac{1}{N^a}, 1]} \inf_j |y-x_j|, \;\;\;\;\; \tilde{\rho}(X_N, [-1,1]):=\sup_{y\in [-1+\frac{1}{N^a}, 1-\frac{1}{N^a}]} \inf_j |y-x_j|. $$
\begin{enumerate}[label={\upshape(\roman*)}]
\item If $a=2$, then there exist positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{arcsinineq1}
\frac{c_1}{N^2}\leqslant \mathbb{E}\hat{\rho}(X_N, [-1,1]) \leqslant \frac{c_2}{N^2}.
\end{equation}
\item If $0<a<2$, then there exist positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{arcsinineq2}
\frac{c_1\log N}{N^{1+\frac{a}2}}\leqslant \mathbb{E}\hat{\rho}(X_N, [-1,1]) \leqslant \frac{c_2\log N}{N^{1+\frac{a}2}}.
\end{equation}
\item For any $a>0$ there exist positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{arcsinineq3}
\frac{c_1\log N}{N}\leqslant \mathbb{E}\tilde{\rho}(X_N, [-1,1]) \leqslant \frac{c_2\log N}{N}.
\end{equation}
\end{enumerate}
\end{prop}
Observe that if we stay away from the endpoints $\pm1$, the measure $\mu$ acts as the Lebesgue measure, and thus the order of the expectation of the covering radius is $(\log N)/N$. However, when we are close to the points $\pm 1$ (where ``close'' depends on $N$), the measure $\mu$ acts somewhat like the Hausdorff measure $\mathcal{H}_{1/2}$, and we get a different order for the covering radius.
\section{An auxiliary function}\label{auxproofs}
The proofs of the results stated in Sections \ref{sectionmetric} and \ref{sectioneuclid} rely heavily on the properties of the following function.
For three positive numbers $N, n, m$, with $m$ and $N$ being integers and $m\leqslant n\leqslant N$, set
\begin{equation}\label{functionf}
f(N, n, m):=\sum\limits_{k=1}^m (-1)^{k+1}\binom{m}{k} \left(1-\frac{k}{n}\right)^N.
\end{equation}
A useful fact about the function $f(N, n, m)$ is the following.
\begin{lemma}
Suppose $X_N=\{x_1, \ldots, x_N\}$ is a set of $N$ points independently and randomly distributed on a set $\mathcal{X}$ with respect to a Borel probability measure $\mu$. Let $B_1, \ldots, B_m$ be disjoint subsets of $\mathcal{X}$ each of $\mu$-measure $1/n$. Then
\begin{equation}\label{oneisempty}
\P\big(\exists k \colon B_k\cap X_N=\emptyset\big)=f(N, n, m).
\end{equation}
\end{lemma}
\begin{proof}
We use the well-known inclusion--exclusion formula: for any $m$ events $A_1, \ldots, A_m$,
\begin{equation}\label{probunion}
\P\left(\bigcup_{j=1}^m A_j\right) = \sum\limits_{k=1}^m (-1)^{k+1} \sum\limits_{(j_1, \ldots, j_k)} \P(A_{j_1}\cap A_{j_2}\cap \cdots \cap A_{j_k}),
\end{equation}
where the integers $j_1, \ldots, j_k$ are distinct.
Let the event $A_i$ occur if the set $B_i$ does not intersect $X_N$. Then for any $k$-tuple $(j_1, \ldots, j_k)$ the event $A_{j_1}\cap \cdots \cap A_{j_k}$ occurs if the points $x_1, \ldots, x_N$ are in the complement of the union $B_{j_1}\cup \cdots \cup B_{j_k}$; i.e., $x_1, \ldots, x_N$ are in a set of measure $1-k/n$. We see that for any $k$-tuple the probability of this event is equal to $\left(1-k/n \right)^N$. Moreover, there are exactly $\binom{m}{k}$ such $k$-tuples. Therefore,
$$
\sum\limits_{(j_1, \ldots, j_k)} \P(A_{j_1}\cap \cdots \cap A_{j_k}) =\binom{m}{k} \left(1-\frac{k}n\right)^N,
$$
and \eqref{oneisempty} follows from \eqref{probunion}.
\end{proof}
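The identity \eqref{oneisempty} can be cross-checked for $m=n$, when the sets $B_1,\ldots,B_n$ partition $\mathcal{X}$: the complementary event, that every $B_k$ receives a point, has probability $n!\,S(N,n)/n^N$, where $S(N,n)$ is the Stirling number of the second kind counting surjections of $\{1,\ldots,N\}$ onto $\{1,\ldots,n\}$. A sketch (helper names are ours):

```python
from math import comb, factorial

def f(N, n, m):
    # f(N,n,m) = sum_{k=1}^m (-1)^{k+1} C(m,k) (1 - k/n)^N, as in \eqref{functionf}
    return sum((-1) ** (k + 1) * comb(m, k) * (1 - k / n) ** N
               for k in range(1, m + 1))

def stirling2(N, n):
    # Stirling numbers of the second kind via S(i,j) = j*S(i-1,j) + S(i-1,j-1)
    S = [[0] * (n + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for i in range(1, N + 1):
        for j in range(1, n + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[N][n]

# With m = n the bins partition the space, so the probability that no bin is
# empty equals (number of surjections) / n^N = n! * S(N, n) / n^N.
N, n = 12, 5
lhs = f(N, n, n)
rhs = 1 - factorial(n) * stirling2(N, n) / n ** N
print(lhs, rhs)
```

The two values agree up to floating-point error, as the lemma guarantees.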
For the lower bounds in Theorems \ref{metricbelow} and \ref{manifolds} we will need the following estimate on the function $f(N,n,m)$.
\begin{lemma}
For any three numbers $0<m\leqslant n\leqslant N$, such that $m$ and $N$ are integers,
\begin{equation}\label{estimateoff}
f(N, n,m)\geqslant 1-\left[1-\left(1-\frac{1}n\right)^N\right]^m-\frac{N}{n^2}\cdot \frac{m(m-1)}2 \cdot \left(1-\frac1n\right)^{2(N-1)}\cdot \left[1+\left(1-\frac1n\right)^{N-1}\right]^{m-2}.
\end{equation}
\end{lemma}
\begin{proof}
Notice first that for $k\geqslant 1$ and $0\leqslant x \leqslant 1$ we have
$$
1-kx\leqslant (1-x)^k \leqslant 1-kx + \frac{k(k-1)}2 x^2.
$$
Thus, for $x=1/n$, we get
$$
\left(1-\frac1n\right)^k-\frac{k(k-1)}2\frac{1}{n^2} \leqslant 1-\frac{k}{n}\leqslant \left(1-\frac1n\right)^{k}.
$$
Suppose $(1-\frac1n)^k\geqslant \frac{k(k-1)}2\frac{1}{n^2}$. Using the inequality
$$
a^N - (a-b)^N =b \cdot (a^{N-1}+(a-b)a^{N-2}+\cdots + (a-b)^{N-1})\leqslant N\cdot b \cdot a^{N-1}, \; \mbox{if $a>b>0$},
$$
we get
\begin{align}\label{brekekek}
\left(1-\frac kn\right)^N& \geqslant \left(\left(1-\frac1n\right)^k-\frac{k(k-1)}2\frac{1}{n^2}\right)^N \\
& \geqslant \left(1-\frac1n\right)^{kN }- N\cdot \frac{k(k-1)}2\frac{1}{n^2} \cdot \left(1-\frac{1}n\right)^{k(N-1)}. \notag
\end{align}
Suppose now that $(1-\frac1n)^k< \frac{k(k-1)}2\frac{1}{n^2}$. Then
$$
\left(1-\frac1n\right)^{kN }- N\cdot \frac{k(k-1)}2\frac{1}{n^2} \cdot \left(1-\frac{1}n\right)^{k(N-1)} = \left(1-\frac1n\right)^{k(N-1)}\left(\left(1-\frac1n\right)^k - N\frac{k(k-1)}2\frac{1}{n^2}\right)<0,
$$
and since the left-hand side of \eqref{brekekek} is nonnegative for $k\leqslant n$, the inequality
$$
\left(1-\frac kn\right)^N\geqslant \left(1-\frac1n\right)^{kN }- N\cdot \frac{k(k-1)}2\frac{1}{n^2} \cdot \left(1-\frac{1}n\right)^{k(N-1)}
$$
also holds in this case.
Therefore,
\begin{multline*}
f(N,n,m)=\sum\limits_{\text{$k$ odd, $k\leqslant m$}} \binom{m}{k} \left(1-\frac{k}n\right)^N - \sum\limits_{\text{$k$ even, $k\leqslant m$}} \binom{m}{k}\left(1-\frac{k}n\right)^N \geqslant \\
\sum\limits_{\text{$k$ odd}} \binom{m}{k}\left[\left(1-\frac1n\right)^{kN }- N\cdot \frac{k\left(k-1\right)}2\frac{1}{n^2} \cdot \left(1-\frac{1}n\right)^{k\left(N-1\right)}\right] - \sum\limits_{\text{$k$ even}} \binom{m}{k}\left(1-\frac1n\right)^{kN } \geqslant
\end{multline*}
\begin{equation}\label{koaks1}
\sum\limits_{k=1}^m \left(-1\right)^{k+1}\binom{m}{k}\left(1-\frac1n\right)^{kN } - \frac{N}{n^2}\sum\limits_{k=0}^m \binom{m}{k}\frac{k\left(k-1\right)}2 \cdot \left(1-\frac{1}n\right)^{k\left(N-1\right)}.
\end{equation}
The first sum in \eqref{koaks1} is equal to $1-(1-(1-\frac1n)^N)^m$. To calculate the second sum we notice that
$$
\frac{m(m-1)}{2}x^2 (1+x)^{m-2}=\frac{1}{2}x^2 ((1+x)^m)'' = \sum\limits_{k=0}^m \binom{m}k \cdot \frac{k(k-1)}2 x^{k}.
$$
Thus, for $x=(1-\frac1n)^{N-1}$ we get
$$
\sum\limits_{k=0}^m \binom{m}k \frac{k(k-1)}2 \cdot \left(1-\frac{1}n\right)^{k(N-1)} = \frac{m(m-1)}2 \left(1-\frac1n\right)^{2(N-1)}\cdot \left(1+\left(1-\frac1n\right)^{N-1}\right)^{m-2}.
$$
Combining the above estimates we obtain \eqref{estimateoff}.
\end{proof}
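The bound \eqref{estimateoff} can also be checked numerically. The Python sketch below (an illustration only, over a small grid of integer parameters) compares the exact inclusion--exclusion value of $f$ with the right-hand side of \eqref{estimateoff}.

```python
import math

def f(N, n, m):
    # Exact inclusion-exclusion value of f(N, n, m).
    return sum((-1) ** (k + 1) * math.comb(m, k) * (1 - k / n) ** N
               for k in range(1, m + 1))

def lower_bound(N, n, m):
    # Right-hand side of the claimed estimate on f(N, n, m).
    p = (1 - 1 / n) ** N        # probability that one fixed set is empty
    q = (1 - 1 / n) ** (N - 1)
    return (1 - (1 - p) ** m
            - (N / n**2) * (m * (m - 1) / 2) * q**2 * (1 + q) ** (m - 2))

# Collect any parameter triples violating the bound (there should be none).
violations = [
    (N, n, m)
    for N in (20, 50, 100, 200)
    for n in (5, 10, 20)
    for m in range(1, n + 1)
    if lower_bound(N, n, m) > f(N, n, m) + 1e-9
]
```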
With the help of \eqref{estimateoff} we can deduce some asymptotic properties of $f(N,n,m)$ as $N\to \infty$.
\begin{lemma}\label{lemmaforf}
Let $N$ be a positive integer and $n,m$ be numbers satisfying $1\leqslant m\leqslant n\leqslant N$. Further, let $\kappa_n$ denote constants depending on $n$ such that $0<c_1\leqslant\kappa_n\leqslant c_2$ for all $n$.
\begin{enumerate}[label={\upshape(\roman*)}]
\item If $m=\left\lfloor \kappa_n n \right\rfloor$ and $c_2\leqslant 1$, then there exists a number $\alpha$ such that for $n= \frac{N}{\log N-\alpha \log\log N}$ we have $f(N,n,m)\to 1$ as $N\to \infty$.
\item If $d>1$ and $m=\left\lfloor \kappa_n n^{\frac{d-1}{d}}\right\rfloor$, then there exists a number $\alpha$ such that for $n= \frac{N}{\frac{d-1}d\log N-\alpha\log\log N}$ we have $f(N, n, m)\to 1$ as $N\to \infty$.
\item If $d>1$ and $m=\left\lfloor \kappa_n n^{\frac{1}{d}}\right\rfloor$, then there exists a number $\alpha$ such that for $n= \frac{N}{\frac{1}d\log N-\alpha\log\log N}$ we have $f(N, n, m)\to 1$ as $N\to \infty$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove only part (i) since the proofs of the second and third parts are similar.
In what follows, to simplify the displays, we omit the symbol for the integer part. If $a_N$ and $b_N$ are two sequences of positive numbers, we write $a_N\sim b_N$ to mean $a_N/b_N\to 1$ as $N\to \infty$.
For our choice of $n$ in part (i) we have
$$
\left(1-\frac{1}{n}\right)^N \sim \exp\left(-\frac{N}n\right) \sim \frac{(\log N)^\alpha}{N}.
$$
Thus,
$$
\left(1-\left(1-\frac{1}{n}\right)^N\right)^{\kappa_n n} \sim \left(1-\frac{(\log N)^\alpha}{N}\right)^{\frac{\kappa_n N}{\log N-\alpha\log\log N}}\sim \exp\left(-\frac{\kappa_n (\log N)^\alpha}{\log N-\alpha\log\log N}\right).
$$
If $\alpha>1$, then the last expression tends to zero. Moreover,
\begin{align*}
\frac{N}{n^2}\cdot \frac{m(m-1)}2 &\left(1-\frac1n\right)^{2(N-1)}\cdot \left(1+\left(1-\frac1n\right)^{N-1}\right)^{m-2} \\
&\sim \frac{\kappa_n^2}2\cdot\frac{(\log N)^{2\alpha}}{N}\cdot \left(1+\frac{(\log N)^\alpha}{N}\right)^{\frac{\kappa_n N}{\log N-\alpha\log\log N}} \\
&\sim \frac{\kappa_n^2}2\cdot\frac{(\log N)^{2\alpha}}{N}\cdot \exp\left(\kappa_n(\log N)^{\alpha-1}\right).
\end{align*}
For $\alpha=3/2$ (in fact, any $\alpha$ with $1<\alpha<2$ will work) the last expression is comparable to
$$
\frac{(\log N)^3}{N}\exp(\kappa_n(\log N)^{\frac12}),
$$
which tends to zero as $N$ tends to infinity.
Thus from \eqref{estimateoff} we deduce that $\liminf_{N\to \infty} f(N,n,m) \geqslant 1$. However, since $f(N,n,m)$ is a probability, we have $f(N, n, m)\leqslant 1$, and so $\lim_{N\to \infty}f(N,n,m)=1$.
\end{proof}
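To illustrate part (i) numerically, one can evaluate $f$ exactly for the choice $\kappa_n=1$, $\alpha=3/2$. The Python sketch below (illustration only; the specific values of $N$ are arbitrary) shows the slow approach of $f(N,n,m)$ to $1$.

```python
import math

def f(N, n, m):
    """Exact inclusion-exclusion value of f(N, n, m); binomial coefficients
    are evaluated through lgamma so that large m causes no overflow."""
    total = 0.0
    for k in range(1, m + 1):
        log_term = (math.lgamma(m + 1) - math.lgamma(k + 1)
                    - math.lgamma(m - k + 1) + N * math.log1p(-k / n))
        if k > 10 and log_term < -60:
            break  # remaining terms are negligible
        total += (-1) ** (k + 1) * math.exp(log_term)
    return total

# The choice of part (i) with kappa_n = 1 and alpha = 3/2.
values = []
for N in (10**3, 10**4, 10**5):
    n = N / (math.log(N) - 1.5 * math.log(math.log(N)))
    m = math.floor(n)
    values.append(f(N, n, m))
# Each value lies just below 1, creeping upward as N grows.
```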
\section{Proofs}\label{proofs}
\subsection{Preliminary objects}\label{sectionprelim}
Fix a compact set $\mathcal{X}_0$ with a metric $m$. For any large positive number $n$ let $\mathcal{E}_n(\mathcal{X}_0)$ be a maximal subset of $\mathcal{X}_0$ such that for any distinct $y,z\in \mathcal{E}_n$ we have $m(y,z)\geqslant 1/n$. Then for any $x\in \mathcal{X}_0$ there exists a point $y\in \mathcal{E}_n$ such that $m(x,y)\leqslant 1/n$ (otherwise we could add $x$ to $\mathcal{E}_n$, contradicting maximality).
In what follows we indicate the underlying set $\mathcal{X}_0$ once and then simply write $\mathcal{E}_n$.
\subsection{Proof of Theorem \ref{metricabove}}
Recall that $(\mathcal{X}, m)$ is a metric space, and $B(x,r)$ denotes a closed ball (in the metric $m$) with center $x\in \mathcal{X}$ and radius $r$. Put $\mathcal{E}_n:=\mathcal{E}_n(\mathcal{X})$ and
note that
\begin{equation}\label{cardbound}
\mu(\mathcal{X})\geqslant \sum\limits_{x\in \mathcal{E}_n}\mu\left(B\left(x, \frac{1}{3n}\right)\right) \geqslant \textup{card}(\mathcal{E}_n) \Phi(1/(3n)).
\end{equation}
Suppose now that $X_N=\{x_1, \ldots, x_N\}$ is a set of $N$ random points, independently distributed over $\mathcal{X}$ with respect to the measure $\mu$. We denote its covering radius by
$$
\rho(X_N):=\rho(X_N, \mathcal{X}).
$$
Suppose $\rho(X_N)> \frac{2}n$. Then there exists a point $y\in \mathcal{X}$ such that $X_N\cap B\left(y, \frac{2}n\right)=\emptyset$. Choose a point $x\in \mathcal{E}_n$ such that $m(x,y)\leqslant\frac{1}n$. Then $B(x, \frac{1}{n})\subset B(y, \frac{2}n)$, and so the ball $B(x, \frac{1}{n})$ (and hence $B(x, \frac{1}{3n})$) does not intersect $X_N$. Therefore,
\begin{align}\label{aboveprobbb}
\P\left(\rho(X_N)> \frac{2}{n}\right) &\leqslant \P\left(\exists x\in \mathcal{E}_n\colon B(x, 1/(3n))\cap X_N=\emptyset\right) \\
& \leqslant \textup{card}(\mathcal{E}_n)\cdot \left(1- \frac{\Phi(\frac1{3n})}{\mu(\mathcal{X})}\right)^N. \notag
\end{align}
We now choose $n$ such that $\frac{1}{3n}=\Phi^{-1}(\frac{\alpha \log N}{N})$; such an $n$ exists since $\Phi$ is continuous and $\Phi(r)\to 0$ as $r\to 0^+$. Utilizing the upper bound for $\textup{card}(\mathcal{E}_n)$ from \eqref{cardbound}, we deduce that for some $C>0$ we have
$$
\P\left(\rho(X_N)> \frac{2}n\right) \leqslant C \frac{N}{\log N}\cdot N^{-C\alpha},
$$
which concludes the proof of the estimate \eqref{metricaboveprob}.
To establish the estimate \eqref{metricaboveexpect}, notice that since for small values of $r$ we have $\Phi(r)\leqslant r^\sigma$, it follows that for small $r$ and $D=\frac1\sigma$ we have $\Phi^{-1}(r)\geqslant r^D$. Choose $\alpha$ so large that $N^{1-C\alpha}=o(N^{-D})$ as $N\to \infty$. Then
$$
\mathbb{E}\rho(X_N) \leqslant \frac{2}{n} + C\operatorname{diam}(\mathcal{X})\cdot o(N^{-D}) =6\Phi^{-1}(\frac{\alpha \log N}{N})+o(N^{-D}).
$$
Finally, since $\Phi^{-1}(\frac{\alpha\log N}{N})\geqslant \Phi^{-1}(N^{-1}) \geqslant N^{-D}$, inequality \eqref{metricaboveexpect} follows. \hfill $\Box$
\subsection{Proof of Theorem \ref{metricbelow}}\label{proofofmetricbelow}
Let $\mathcal{E}_n:=\mathcal{E}_n(\mathcal{X}_1)$, where $\mathcal{X}_1$ is as in the hypothesis.
Notice that
\begin{equation}\label{cardbound2}
0<\mu(\mathcal{X}_1) \leqslant \sum\limits_{x\in \mathcal{E}_n} \mu\left(B(x, \frac{1}{n})\right) \leqslant \textup{card}(\mathcal{E}_n)\Phi\left(\frac{1}n\right).
\end{equation}
An estimate as in \eqref{cardbound} together with the doubling property of $\Phi$ imply that
$$
\mu(\mathcal{X})\geqslant c \cdot \textup{card}(\mathcal{E}_n) \Phi\left(\frac1{3n}\right) \geqslant \tilde{c} \cdot \textup{card}(\mathcal{E}_n)\Phi\left(\frac1n\right).
$$
Thus, $\tau_n:=\textup{card}(\mathcal{E}_n)\cdot \Phi(1/n)$ satisfies $0<c_1<\tau_n<c_2$ for some constants $c_1$ and $c_2$ independent of $n$.
Clearly if a ball $B(x, \frac{1}{3n})$ does not intersect $X_N$, then $\rho(X_N)=\rho(X_N, \mathcal{X})\geqslant \frac{1}{3n}$. Thus
$$
\P\left(\rho(X_N)\geqslant \frac{1}{3n}\right)\geqslant \P\left(\exists x\in \mathcal{E}_n\colon B(x, 1/(3n))\cap X_N = \emptyset \right).
$$
Notice that the balls $B(x, \frac{1}{3n})$ are disjoint for $x\in \mathcal{E}_n$, and their $\mu$-measure is comparable to $t:=\Phi(\frac{1}{n})$.
Next we claim that for every $x\in \mathcal{E}_n$ there exists a constant $c_x\leqslant 1$ such that the balls $B(x, c_x \frac{1}{3n})$ have the same measure $c_0 \Phi(\frac{1}{n})=c_0 t$, and moreover that the uniform estimate $c_x > c>0$ holds for some constant $c$.
To see this, take two points $x_1, x_2\in \mathcal{X}_1$ and assume that the balls $B_i := B(x_i, r)$, $i=1,2$, are disjoint. Suppose $\mu(B_1)<\mu(B_2)$. Define the function $\varphi(u):=\mu(B(x_2, u\cdot r))$. The strict doubling property of $\Phi$ implies
$$
\mu(B_1)\geqslant c\Phi(r) \geqslant c\cdot C_1^k \Phi(r/2^k).
$$
Choose $k$ such that $c\cdot C_1^k \geqslant 1$. Then
$$
\mu(B_1)\geqslant \Phi(r/2^k) \geqslant \varphi(2^{-k}).
$$
Thus, $\varphi(2^{-k})\leqslant\mu(B_1)<\mu(B_2)=\varphi(1)$. By continuity of $\varphi$ we see that there exists a constant $c_{x_2}$ such that $\mu(B(x_2, c_{x_2}r))=\mu(B(x_1, r))$. Notice that $c_{x_2} \geqslant 2^{-k}=:c_0$, where $k$ depends only on the constants $c, C_1$ from Theorem \ref{metricbelow} and not on $x_1, x_2$, or $r$. Applying this procedure to all balls $B(x, 1/(3n))$, $x\in \mathcal{E}_n$, and using the fact that $\textup{card}(\mathcal{E}_n)=\tau_n /t$, we obtain
\begin{align}\label{blahblahblah}
\P\left(\rho(X_N)\geqslant \frac{c_0}3 \Phi^{-1}(t)\right)&\geqslant \P\left(\mbox{one of $\frac{\tau_n}t$ disjoint balls of measure $c_0t$ is disjoint from $X_N$}\right) \notag
\\&= f(N, \frac{1}{c_0t}, \frac{\tau_n}t) = f(N, \frac1{c_0t}, \frac{\kappa_n}{c_0t}),
\end{align}
where $\kappa_n:=c_0\tau_n$ and $f$ is given in \eqref{functionf}. If necessary, we can decrease the size of $c_0$ so that $\kappa_n \leqslant 1$ for $n$ large.
As we have seen in Lemma \ref{lemmaforf}(i), there exists a number $\alpha$ such that if
$$\frac1{c_0t}= \frac{N}{\log N-\alpha\log\log N},$$
then $f(N, \frac{1}{c_0t}, \frac{\kappa_n}{c_0t})\to 1$ as $N\to \infty$.
Thus,
$$
\P\left(\rho(X_N)\geqslant \frac{c_0}3 \Phi^{-1}\left(\frac{\log N-\alpha\log\log N}{c_0 N}\right)\right)\geqslant 1-o(1), \qquad N\to \infty,
$$
which is the desired inequality \eqref{metricbelowprob}.
Moreover, for large values of $N$ we have $\log N-\alpha\log\log N\geqslant \frac{1}2\log N$; thus
$$
\mathbb{E} \rho(X_N)\geqslant c_1 \Phi^{-1}(c_2 \frac{\log N}N),
$$
which proves inequality \eqref{metricbelowexpect}. \hfill $\Box$
\subsection{Estimates from above for asymptotically flat sets}\label{abovvve}
Let $K$ be an asymptotically flat $s$-regular subset of $\mathbb{R}^d$ and put
$$
\rho(X_N)=\rho(X_N, K), \qquad \varepsilon_N:=\frac{1}{\log N}.
$$
In order to deduce sharp asymptotic results we first improve our estimates from above by considering a better net of points.
For each $N>4$ let $\mathcal{E}_{n/\varepsilon_N}:=\mathcal{E}_{n/\varepsilon_N}(K)$.
From estimates similar to \eqref{cardbound} and \eqref{cardbound2} we see that $\textup{card}(\mathcal{E}_{n/\varepsilon_N})$ is comparable to $\left(n/\varepsilon_N\right)^s$, uniformly in $N$.
Suppose $\rho(X_N)> \frac{1}{n}$. Then, since $K$ is compact, for some $y\in K$ we have $B_d(y,\frac{1}{n})\cap X_N=\emptyset$, and thus there exists a point $x\in \mathcal{E}_{n/\varepsilon_N}$ such that $B_d(x, \frac{1-\varepsilon_N}{n})\cap X_N=\emptyset$. We fix a number $\delta$, $0<\delta<1$, and take $n$ so large that
$$
\mathcal{H}_s\left(B_d(x, \frac{1-\varepsilon_N}n)\cap K\right)\geqslant (1-\delta)\upsilon_s \frac{(1-\varepsilon_N)^s}{n^s}
\geqslant (1-\delta)\upsilon_s \frac{1-s\varepsilon_N}{n^s}.
$$
As in \eqref{aboveprobbb},
\begin{equation}\label{probmanifoldabove}
\P\left(\rho(X_N)> \frac{1}{n}\right)\leqslant C \left(\frac{n}{\varepsilon_N}\right)^s \left(1-\frac{1}{\mathcal{H}_s(K)}(1-\delta)\upsilon_s\frac{1-s\varepsilon_N}{n^s}\right)^N.
\end{equation}
Fix a number $A>0$ and choose
$$
n_1 := \left(\frac{(1-\delta)\upsilon_s}{\mathcal{H}_s(K)} \frac{N}{\log N+A\log\log N}\right)^{1/s}.
$$
Then with $n=n_1$ in \eqref{probmanifoldabove} we get for all $N$ large,
\begin{equation}\label{loglog}
\P\left(\rho(X_N)> \frac{1}{n_1}\right) \leqslant C \cdot N (\log N )^{s-1} e^{-(1-s/\log N)(\log N+A\log\log N)}.
\end{equation}
Recall that $C$ does not depend on $N$. Thus if $A$ and $N$ are sufficiently large, it follows that
\begin{equation}\label{problessthanlog}
\P\left(\rho(X_N)> \frac{1}{n_1}\right) \leqslant \frac{1}{\log N}.
\end{equation}
Furthermore, if we plug $n=n_2 := \left(\frac{N}{B\log N}\right)^{1/s}$ in \eqref{probmanifoldabove} we get for sufficiently large $B$
\begin{equation}\label{problessthanpower}
\P\left(\rho(X_N)> \frac{1}{n_2}\right) \leqslant N^{-p/s - 1}.
\end{equation}
With $\textup{d}\mu=\mathbbm{1}_K \textup{d}\mathcal{H}_s/\mathcal{H}_s(K)$, we make use of the formula
\begin{multline}\label{superest}
\mathbb{E}[\rho(X_N)^p] = \int_{K^N} \rho(X_N)^p \textup{d}\mu(x_1)\ldots \textup{d}\mu(x_N) = \int\limits_{\rho(X_N)\leqslant1/n_1} \rho(X_N)^p\textup{d}\mu(x_1)\ldots \textup{d}\mu(x_N) + \\ \int\limits_{1/n_1 < \rho(X_N)\leqslant 1/n_2} \rho(X_N)^p\textup{d}\mu(x_1)\ldots \textup{d}\mu(x_N) + \int\limits_{\rho(X_N)> 1/n_2}\rho(X_N)^p\textup{d}\mu(x_1)\ldots \textup{d}\mu(x_N) \leqslant \\ \frac1{n_1^p} + \frac{1}{n_2^p}\cdot \P\left(\rho(X_N)> \frac{1}{n_1}\right) + (\textup{diam}(K))^p \cdot \P\left(\rho(X_N)> \frac1{n_2}\right).
\end{multline}
From \eqref{problessthanlog}, \eqref{problessthanpower}, and the definitions of $n_1$ and $n_2$, we obtain
\begin{multline}\label{superest2}
\mathbb{E}[\rho(X_N)^p] \leqslant \left(\frac{\log N+A\log\log N}{N}\right)^{p/s} \cdot \left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s} \cdot (1-\delta)^{-p/s} + C\left(\frac{\log N}{N}\right)^{p/s} \frac{1}{\log N} \\ +CN^{-p/s-1}.
\end{multline}
Therefore, for any $\delta$ with $0<\delta<1$,
$$
\limsup_{N \to \infty} \mathbb{E}[\rho(X_N)^p]\cdot \left(\frac{N}{\log N}\right)^{p/s} \leqslant (1-\delta)^{-p/s}\cdot \left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s},
$$
and consequently, letting $\delta\to 0^+$,
\begin{equation}\label{flatabovelastformula}
\limsup_{N \to \infty} \mathbb{E}[\rho(X_N)^p] \cdot \left(\frac{N}{\log N}\right)^{p/s} \leqslant \left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s}.
\end{equation} \hfill $\Box$
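For a concrete one-dimensional instance of \eqref{flatabovelastformula}, take $K$ to be a circle of unit circumference, so that $s=1$, $\mathcal{H}_1(K)=1$ and $\upsilon_1=2$; the covering radius is then half of the largest gap between consecutive points, and $\mathbb{E}\rho(X_N)\cdot N/\log N$ should be close to $1/2$. The Python sketch below (a seeded simulation, for illustration only; the sample size and trial count are arbitrary) checks this.

```python
import math
import random

def covering_radius_circle(points):
    """Covering radius of a point set on the circle of circumference 1:
    half of the largest gap between consecutive points."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 1.0 - pts[-1])  # wraparound gap
    return max(gaps) / 2.0

rng = random.Random(1)
N, trials = 5000, 50
mean_rho = sum(
    covering_radius_circle([rng.random() for _ in range(N)])
    for _ in range(trials)
) / trials
# For the circle: s = 1, H_1(K) = 1, v_1 = 2, so E(rho) ~ (1/2) log N / N.
ratio = mean_rho * N / math.log(N)
```

For finite $N$ the ratio sits slightly above $1/2$, reflecting the lower-order terms hidden in the $\log\log N$ corrections.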
\subsection{Estimate from above for quasi-nice sets}\label{abovvvve}
Let $K$ be a quasi-nice $s$-regular subset of $\mathbb{R}^d$, and again set $\varepsilon_N:=1/\log N$ and $\mathcal{E}_{n/\varepsilon_N}:=\mathcal{E}_{n/\varepsilon_N}(K)$, where $n/\varepsilon_N\to \infty$ as $N\to \infty$. Since the set $T$ from part (iii) of Definition \ref{flat} is finite, the regularity condition (ii) implies
$$
\mathcal{H}_s\left(\bigcup_{x\in T}B_d(x, r)\right) \leqslant C \cdot \textup{card}(T) \cdot r^s = C_1 r^s, \; \; 0<r<r_0.
$$
Suppose $y_1, \ldots, y_k \in \mathcal{E}_{n/\varepsilon_N} \cap \bigcup_{x\in T}B_d(x, \frac{1-\varepsilon_N}n)$. Then the balls $B_d(y_j, \frac{\varepsilon_N}{3n})$ are disjoint and $B_d(y_j, \frac{\varepsilon_N}{3n})\subset \bigcup_{x\in T}B_d(x, \frac{1+\varepsilon_N}n)$ for $j=1,\ldots,k$. The chain of inequalities
$$
C_1\left(\frac{1+\varepsilon_N}n\right)^s \geqslant \mathcal{H}_s\left(\bigcup_{x\in T}B_d(x, \frac{1+\varepsilon_N}n)\right)\geqslant \sum\limits_{j=1}^k \mathcal{H}_s\left(B_d(y_j, \frac{\varepsilon_N}{3n})\right) \geqslant c\cdot k\cdot (\frac{\varepsilon_N}n)^s
$$
implies that $k\leqslant C_2/\varepsilon_N^s$, and $C_2$ does not depend on $N$. Further, if $y\in \mathcal{E}_{n/\varepsilon_N}\setminus\bigcup_{x\in T}B_d(x, \frac{1-\varepsilon_N}n)$, then $\mathcal{H}_s\left(B_d(y, \frac{1-\varepsilon_N}n)\right)\geqslant \upsilon_s \left(\frac{1-\varepsilon_N}n\right)^s$.
As we have seen in \eqref{probmanifoldabove}, $\P(\rho(X_N)> 1/n)$ is bounded from above by the probability that for some $y\in \mathcal{E}_{n/\varepsilon_N}$ we have $B_d\left(y, \frac{1-\varepsilon_N}n\right)\cap X_N=\emptyset$. Taking into account that $\textup{card}(\mathcal{E}_{n/\varepsilon_N}) \leqslant C_3(n/\varepsilon_N)^s$, we obtain
\begin{multline*}
\P\left(\rho(X_N)> \frac1n\right) \leqslant \P\bigg(\mbox{one of $\leqslant\frac{C_2}{\varepsilon_N^s}$ balls of measure $\geqslant\frac{c_1}{n^s}$ is disjoint from $X_N$ or} \\ \mbox{one of $\leqslant C_3\left(\frac{n}{\varepsilon_N}\right)^s$ balls of measure $\geqslant\frac{\upsilon_s(1-\varepsilon_N)^s}{n^s}$ is disjoint from $X_N$}\bigg).
\end{multline*}
This last probability is bounded from above by
$$
\frac{C_2}{\varepsilon_N^s}\left(1-\frac{c_1}{n^s}\right)^N + C_4\left(\frac{n}{\varepsilon_N}\right)^s\left(1-\frac{1}{\mathcal{H}_s(K)}\frac{\upsilon_s (1-\varepsilon_N)^s}{n^s}\right)^N.
$$
As in the preceding proof, if
$$
n_1=\left(\frac{\upsilon_s}{\mathcal{H}_s(K)} \frac{N}{\log N+A\log\log N}\right)^{1/s},
$$
then, for $N$ large,
$$
C_4\left(\frac{n_1}{\varepsilon_N}\right)^s\cdot\left(1-\frac{1}{\mathcal{H}_s(K)}\frac{\upsilon_s (1-\varepsilon_N)^s}{n_1^s}\right)^N \leqslant \frac{C_5}{\log N}.
$$
Furthermore, notice that if $C_6$ is sufficiently large, then
$$
\frac{C_2}{\varepsilon_N^s}\left(1-\frac{c_1}{n_1^s}\right)^N \leqslant C_6(\log N)^s N^{-c_2}
$$
for all sufficiently large $N$.
Repeating estimates \eqref{superest} and \eqref{superest2}, we obtain
\begin{equation}\label{lastinquasinice}
\limsup_{N \to \infty} \mathbb{E}[\rho(X_N)^p] \cdot \left(\frac{N}{\log N}\right)^{p/s} \leqslant \left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s}.
\end{equation} \hfill $\Box$
Note that \eqref{lastinquasinice} holds whether or not $K$ is countably $s$-rectifiable; it requires only that properties (ii) and (iii) of Definition \ref{flat} hold.
\subsection{Estimate from below for quasi-nice sets}\label{sectionfrombelow}
For the proof of Theorem \ref{manifolds} it remains, in view of inequalities \eqref{flatabovelastformula} and \eqref{lastinquasinice}, to establish
\begin{equation}\label{liminfbigger}
\liminf_{N \to \infty} \mathbb{E}[\rho(X_N)^p] \cdot \left(\frac{N}{\log N}\right)^{p/s} \geqslant \left(\frac{\mathcal{H}_s(K)}{\upsilon_s}\right)^{p/s}
\end{equation}
for asymptotically flat and quasi-nice $s$-dimensional manifolds $K$. Since by the H\"older inequality we have
$$
\liminf_{N\to \infty}\mathbb{E}[\rho(X_N)^p] \cdot \left[\frac{N}{\log N}\right]^{p/s} \geqslant \left(\liminf_{N\to \infty}\mathbb{E} \rho(X_N)\cdot \left[\frac{N}{\log N}\right]^{1/s}\right)^p,
$$
it is enough to prove \eqref{liminfbigger} for $p=1$.
If $K$ is quasi-nice, then $K$ is countably $s$-rectifiable ($s$ is an integer) and $0<\mathcal{H}_s(K)<\infty$; thus as previously remarked, the following holds for $\mathcal{H}_s$-almost every point $x\in K$:
$$
r^{-s}\cdot \mathcal{H}_s(B_d(x,r)\cap K)\to \upsilon_s, \;\; r\to 0^+.
$$
Fix a number $\delta$ with $0<\delta<1$ and define $r_n:=1/n$ and $q_n:=\left(\frac{1-\delta}{1+\delta}\right)^{1/s}\cdot \frac1n$, where $n$ runs over a given sequence tending to infinity.
By Egoroff's theorem, there exists a set $K_1=K_1(\delta)\subset K$ with $\mathcal{H}_s(K_1)>\frac{1}{2}\mathcal{H}_s(K)$ on which the above limit is uniform for radii $r$ equal to $r_n$ and $q_n$. That is,
\begin{equation}\label{uniforr}
r^{-s}\mathcal{H}_s(B_d(x,r)\cap K)\rightrightarrows \upsilon_s, \;\; r=r_n \; \mbox{or} \; r=q_n, \;\; n\to \infty.
\end{equation}
This means that there exists a large number $n(\delta)$, such that for any $n>n(\delta)$ we have, for every $x\in K_1$,
\begin{align}
&(1-\delta)\upsilon_s r_n^s \leqslant \mathcal{H}_s(B_d(x,r_n)\cap K) \leqslant (1+\delta)\upsilon_sr_n^s, \label{tilitili1} \\
&(1-\delta)\upsilon_sq_n^s \leqslant \mathcal{H}_s(B_d(x,q_n)\cap K) \leqslant (1+\delta)\upsilon_s q_n^s = (1-\delta)\upsilon_s r_n^s. \label{tilitili2}
\end{align}
Recalling the notation of Section \ref{sectionprelim}, we set $\mathcal{E}_{n/2}:=\mathcal{E}_{n/2}(K_1)$. Then, as in the preceding sections, there exist positive constants $c_1$ and $c_2$ (independent of $n$) such that $c_1 n^s \leqslant \textup{card}(\mathcal{E}_{n/2}) \leqslant c_2 n^s$ where, for the lower bound, we use
$$
0<\mathcal{H}_s(K_1)\leqslant \mathcal{H}_s\left(\bigcup_{x\in \mathcal{E}_{n/2}} (B_d(x, 2/n)\cap K)\right) \leqslant C\cdot \textup{card}(\mathcal{E}_{n/2})(2/n)^s.
$$ Thus, $\tau_n:=\textup{card}(\mathcal{E}_{n/2})/n^s$ satisfies $0<c_1\leqslant \tau_n \leqslant c_2$. Clearly, if for some $x\in \mathcal{E}_{n/2}$ the ball $B_d(x, \frac1n)$ is disjoint from $X_N$, then $\rho(X_N)\geqslant \frac 1n$.
Thus, for a given $\delta>0$ and sufficiently large $n$ we have a family $\{B_d(x, 1/n)\cap K \colon x\in \mathcal{E}_{n/2}(K_1)\}$ of $\tau_n n^s$ balls (relative to $K$) with disjoint interiors of radius $1/n$ and $\mathcal{H}_s$-measure between $(1-\delta)\upsilon_s/n^s$ and $(1+\delta)\upsilon_s /n^s$.
For a fixed $x\in \mathcal{E}_{n/2}(K_1)$, define $\varphi(u):=\mathcal{H}_s(B_d(x, u/n)\cap K)$. Then $\varphi(1)\geqslant (1-\delta)\upsilon_s/n^s$. On the other hand, inequalities \eqref{tilitili2} imply
$$
\varphi\left(\left(\frac{1-\delta}{1+\delta}\right)^{1/s}\right) \leqslant (1-\delta)\upsilon_s/n^s.
$$
Thus, by continuity of $\varphi$, there is a number $c_x=c_{x,n}$, with $c_x\geqslant (\frac{1-\delta}{1+\delta})^{1/s}$, such that $\varphi(c_x)=(1-\delta) \upsilon_s/n^s$.
That is, there exists a new family $\{B_d(x, c_x/n)\cap K\colon x\in \mathcal{E}_{n/2}(K_1)\}$, with $c_x \geqslant (\frac{1-\delta}{1+\delta})^{1/s}$, and the sets $B_d(x, c_x/n)\cap K$ all have the same $\mathcal{H}_s$ measure, namely $(1-\delta)\upsilon_s/n^s$.
As in \eqref{blahblahblah}, it follows that
\begin{align}\label{onemoreestimate}
\P\left(\rho(X_N)\geqslant \left(\frac{1-\delta}{1+\delta}\right)^{1/s}\frac1{n}\right) &\geqslant f\left(N, \frac{\mathcal{H}_s(K)n^s}{(1-\delta)\upsilon_s}, \tau_n n^s\right)\\ &= f\left(N, \frac{\mathcal{H}_s(K)n^s}{(1-\delta)\upsilon_s}, \kappa_n \cdot\frac{\mathcal{H}_s(K)n^s}{(1-\delta)\upsilon_s}\right),\notag
\end{align}
where
$$
\kappa_n:=\tau_n \cdot \frac{(1-\delta)\upsilon_s}{\mathcal{H}_s(K)}.
$$
It is easily seen that
$$
\mathcal{H}_s(K)\geqslant \tau_n n^s \cdot \frac{(1-\delta) \upsilon_s}{n^s} = \mathcal{H}_s(K)\kappa_n;
$$
thus $\kappa_n \leqslant 1$.
Part (i) of Lemma \ref{lemmaforf} therefore implies that the right-hand side of \eqref{onemoreestimate} tends to $1$ as $N\to \infty$ if (for a suitable $\alpha$) we have
$$
\frac{(1-\delta) \upsilon_s}{\mathcal{H}_s(K)n^s} = \frac{\log N-\alpha\log\log N}{N},
$$
which is equivalent to
\begin{equation}\label{newdefinofn}
n := \left[\frac{(1-\delta) \upsilon_s}{\mathcal{H}_s(K)}\cdot \frac{N}{\log N-\alpha\log\log N}\right]^{1/s}.
\end{equation}
We take $N$ so large that $n$ exceeds $n(\delta)$, which ensures that the inequalities \eqref{tilitili1}--\eqref{tilitili2} hold.
From \eqref{onemoreestimate} we obtain
$$
\mathbb{E}\rho(X_N)\geqslant \left(\frac{1-\delta}{1+\delta}\right)^{1/s}\frac{1}{n} \cdot f\left(N, \frac{\mathcal{H}_s(K)n^s}{(1-\delta)\upsilon_s}, \tau_n n^s\right).
$$
Using the definition of $n$ in \eqref{newdefinofn}, we get
\begin{multline}\label{manifbelowlastformula}
\mathbb{E} \rho(X_N) \cdot \left[\frac{N}{\log N}\right]^{1/s} \geqslant \\ \left[\frac{N}{\log N}\right]^{1/s}\cdot \left(\frac{1-\delta}{1+\delta}\right)^{1/s} \cdot \left[\frac{\mathcal{H}_s(K)}{(1-\delta) \upsilon_s}\cdot \frac{\log N-\alpha\log\log N}{N}\right]^{1/s} \cdot f\left(N, \frac{\mathcal{H}_s(K)n^s}{(1-\delta)\upsilon_s}, \tau_n n^s\right),
\end{multline}
and passing to the $\liminf$ as $N\to \infty$ yields
$$
\liminf_{N \to \infty} \mathbb{E}\rho(X_N) \cdot \left(\frac{N}{\log N}\right)^{1/s} \geqslant \left(\frac{1}{1+\delta}\right)^{1/s} \cdot \left[\frac{\mathcal{H}_s(K)}{\upsilon_s}\right]^{1/s}.
$$
Recalling that $\delta$ can be taken arbitrarily small, we obtain \eqref{liminfbigger} for quasi-nice sets. For asymptotically flat sets the same (but even simpler) argument applies.
\hfill $\Box$
\subsection{Proof of Corollary \ref{maehara}}
Recall that
$$
Z_N=\rho(X_N, \mathbb{S}^d)\cdot \left(\frac{\upsilon_d}{(d+1)\upsilon_{d+1}}\cdot \frac{N}{\log N}\right)^{1/d}.
$$
Corollary \ref{corsphere} implies that $\mathbb{E} Z_N \to 1$ and $\mathbb{E}[Z_N^2]\to 1$; thus $\mathbb{E}[(Z_N-1)^2] = \mathbb{E}[Z_N^2]-2\mathbb{E} Z_N + 1 \to 0$. The Chebyshev inequality then implies
$$
\P(|Z_N-1|>\varepsilon) \leqslant \frac{\mathbb{E}[(Z_N-1)^2]}{\varepsilon^2} \to 0,
$$
which completes the proof.
\hfill $\Box$
\subsection{Proof of Corollaries \ref{corsphere} and \ref{corcurve}}
It is well known that a closed $C^{(1,1)}$ manifold is an asymptotically flat set, and a rectifiable curve is a quasi-nice $1$-dimensional set.
For the first fact, we refer the reader to a textbook on Riemannian geometry, for instance, \cite[Chapters 5--10]{BurIv}. The second fact can be deduced from \cite[Section 3.2]{Fal}.
\subsection{Proof of Theorem \ref{theoremball}: estimate from above}
The proof of the theorem is similar to the proof for asymptotically flat sets. However, we need to take into account that the limit \eqref{uniform} is not equal to $\upsilon_d$ for points on the boundary. We use properties (ii) and (iii) of $K$ to obtain
\begin{align}
&r^{-d}\mathcal{H}_d(B_d(x,r)\cap K) \rightrightarrows \frac12 \upsilon_d, \; \; r\to 0, \;\; x\in \partial K; \label{fact1}\\
& x\in K, \; \textup{dist}(x, \partial K)>r \Rightarrow \mathcal{H}_d(B_d(x,r)\cap K)=\mathcal{H}_d(B_d(x,r))=\upsilon_d r^d; \label{fact2} \\
& \forall \delta>0 \; \exists r(\delta)>0 \colon \forall r<r(\delta), \forall x\in K\colon \mathcal{H}_d(B_d(x,r)\cap K)\geqslant (\frac12-\delta)\upsilon_d r^d. \label{fact3}
\end{align}
For the details, we refer the reader to Lee \cite[Chapter 5]{L}.
For large $N$, set $\varepsilon_N:=1/\log N$ and $\mathcal{E}_{n/\varepsilon_N}:=\mathcal{E}_{n/\varepsilon_N}(K)$, where $n=n(N)$ is a sequence such that $n\asymp (N/\log N)^{1/d}$. We now fix a number $\delta$ with $0<\delta<1/2$.
Notice that if $x\in \mathcal{E}_{n/\varepsilon_N}$ and $\textup{dist}(x, \partial K) > (1-\varepsilon_N)/n$, then
$$\mathcal{H}_d(B_d(x,(1-\varepsilon_N)/n)\cap K)=\upsilon_d \left((1-\varepsilon_N)/n\right)^d;$$ if $x\in \mathcal{E}_{n/\varepsilon_N}$ and $\textup{dist}(x, \partial K) \leqslant (1-\varepsilon_N)/n$ then, for large enough $n$,
$$
\mathcal{H}_d(B_d(x, (1-\varepsilon_N)/n)\cap K)\geqslant (\frac12-\delta)\upsilon_d ((1-\varepsilon_N)/n)^d.
$$
On considering disjoint balls (relative to $K$) of radius $\varepsilon_N/(3n)$ and using that
$$
\mathcal{H}_d(\{x\colon \textup{dist}(x, \partial K)\leqslant (1-\frac23\varepsilon_N)/n\}) \leqslant C_1/n,
$$
we deduce, as in \eqref{cardbound}, that
$$
\textup{card}\left\{x\in \mathcal{E}_{n/\varepsilon_N}\colon \operatorname{dist}(x, \partial K)\leqslant \frac{1-\varepsilon_N}n\right\} \leqslant C_2\frac{n^{d-1}}{\varepsilon_N^d}.
$$
Therefore, for large enough $n$, we get
\begin{multline}\label{ballabovelong}
\P\left(\rho(X_N)> \frac1n\right) \leqslant \P\left(\exists x\in \mathcal{E}_{n/\varepsilon_N}\colon B_d\left(x, \frac{1-\varepsilon_N}n\right)\cap K \cap X_N =\emptyset\right) \leqslant \\
C_2\frac{n^{d-1}}{\varepsilon_N^d} \left(1- \frac{(1/2-\delta)\upsilon_d}{\mathcal{H}_d(K)}\left(\frac{1-\varepsilon_N}{n}\right)^d\right)^N + C_3\frac{n^d}{\varepsilon_N^d} \left(1- \frac{\upsilon_d}{\mathcal{H}_d(K)} \left(\frac{1-\varepsilon_N}{n}\right)^d\right)^N.
\end{multline}
Repeating the estimates \eqref{problessthanlog}--\eqref{flatabovelastformula} with
$$
n_1:=\left( \frac{(1/2-\delta)\upsilon_d}{\mathcal{H}_d(K)} \cdot \frac{N}{\frac{d-1}d \log N + A\log\log N}\right)^{1/d},
$$
and
$$
n_2:=\left(\frac{N}{B\log N}\right)^{1/d},
$$
where $A$ and $B$ are sufficiently large, we obtain, after letting $\delta\to 0^+$, the estimate
$$
\limsup_{N\to \infty}\mathbb{E}[\rho(X_N)^p] \left( \frac{N}{\log N}\right)^{p/d} \leqslant \left(\frac{2(d-1)}d \cdot \frac{\mathcal{H}_d(K)}{\upsilon_d}\right)^{p/d}.
$$\hfill $\Box$
\subsection{Proof of Theorem \ref{theoremball}: estimate from below}\label{sectionballbelow}
We repeat the proof from Section \ref{sectionfrombelow}, but now we place our net $\mathcal{E}$ only on the boundary $\partial K$. Namely, put $\mathcal{E}_{n/2}:=\mathcal{E}_{n/2}(\partial K)$. Since $\partial K$ is a smooth $(d-1)$-dimensional submanifold, we see that $\textup{card}(\mathcal{E}_{n/2})=\tau_n n^{d-1}$ with $0<c_1<\tau_n <c_2$. Moreover, from \eqref{fact1} we obtain, as in \eqref{uniforr}, that
$$
r^{-d}\mathcal{H}_d(B_d(x,r)\cap K) \rightrightarrows \frac12 \upsilon_d, \; \; r=r_n \; \mbox{or} \; r=q_n, \; n\to \infty,
$$
uniformly for $x\in \mathcal{E}_{n/2}$.
The remainder of the proof just involves repeating the estimates \eqref{onemoreestimate}--\eqref{manifbelowlastformula}, using part (ii) of Lemma \ref{lemmaforf}.
\hfill $\Box$
\subsection{Estimate from above for the cube $[0,1]^d$}
The proof is similar to the case of bodies with smooth boundary. The only change we need to make is to formula \eqref{fact1}. Namely, if a point $x$ lies on a $(d-k)$-dimensional edge of the cube, then $\mathcal{H}_d(B_d(x,r)\cap [0,1]^d) \asymp 2^{-k}\upsilon_d r^d$. Moreover, $\mathcal{H}_d(B_d(x,r)\cap [0,1]^d) = 2^{-k}\upsilon_d r^d$ for points $x$ on the $(d-k)$-dimensional edge that are at distance larger than $r$ from all $(d-k-1)$-dimensional edges. Thus, if we consider a set $\mathcal{E}_{n/\varepsilon_N}:=\mathcal{E}_{n/\varepsilon_N}([0,1]^d)$, then for any $k=0,\ldots, d$ there are at most $C_k n^{d-k}/\varepsilon_N^d$ points $x\in \mathcal{E}_{n/\varepsilon_N}$ with $\mathcal{H}_d(B_d(x, (1-\varepsilon_N)/n)\cap [0,1]^d)\geqslant 2^{-k}\upsilon_d((1-\varepsilon_N)/n)^d$. In particular, for $k=d$ there are at most $C_d/\varepsilon_N^d$ such points $x\in \mathcal{E}_{n/\varepsilon_N}$; and for $k=d-1$, no more than $Cn/\varepsilon_N^d$ such points. We now repeat the estimates \eqref{problessthanlog}--\eqref{flatabovelastformula} and \eqref{ballabovelong} with
$$
n_1 := \left(2^{-(d-1)}\cdot d\cdot\upsilon_d\cdot \frac{N}{\log N+A\log\log N}\right)^{1/d}.
$$\hfill $\Box$
\subsection{Estimate from below for the cube $[0,1]^d$}
The proof is almost identical to the proof in Section \ref{sectionballbelow}; the only difference is that now we take $\mathcal{E}_{n/2}:=\mathcal{E}_{n/2}(L)$, where $L$ is a $1$-dimensional edge of the cube $[0,1]^d$. To complete the analysis we appeal to part (iii) of Lemma \ref{lemmaforf}. \hfill $\Box$
\subsection{Estimates for a polyhedron in $\mathbb{R}^3$}
The estimates here are the same as for the unit cube $[0,1]^d$. The only difference is that for points $x\in L$, where $L$ is an edge along which two faces of $P$ intersect at angle $\theta$, we have, provided $x$ is far enough from the vertices of $P$:
$$
\mathcal{H}_3(B(x,r)\cap P) = \frac{\theta}{2\pi} \cdot \upsilon_3 \cdot r^3.
$$
Consequently, for $k=0,1,2,3$ we have at most $a_k n^{3-k}/\varepsilon_N^3$ points $x\in \mathcal{E}_{n/\varepsilon_N}(P)$ with $\mathcal{H}_3(B_3(x, (1-\varepsilon_N)/n)\cap P)\geqslant c_k\upsilon_3((1-\varepsilon_N)/n)^3$, where $c_0=1$, $c_1=1/2$, and $c_2=\theta/(2\pi)$.
In the case $\theta\leqslant \pi/2$, one needs to choose
$$
n_1:=\left(\frac{2\theta}{V(P)}\cdot \frac{N}{\log N + A\log\log N}\right)^{1/3},
$$
and in the case $\theta\geqslant \pi/2$, one needs to choose
$$
n_1:=\left(\frac{\pi}{V(P)}\cdot \frac{N}{\log N + A\log\log N}\right)^{1/3}.
$$
For the estimate from below, consider $\mathcal{E}_{n/2}(L)$ and repeat the estimates for the cube. \hfill $\Box$
\subsection{Estimates for $\textup{\textmd{d}}\mu=\frac{\textup{\textmd{d}}x}{\sqrt{1-x^2}}$}
We remind the reader that $\hat{\rho}(X_N)=\hat{\rho}(X_N, [0,1])=\sup_{y\in [1-\frac{1}{N^a}, 1]} \inf_j |y-x_j|$, where $0<a\leqslant 2$ is a parameter and $x_j$, $j=1,\ldots,N$, are randomly and independently distributed over $[0,1]$ with respect to $\mu$.
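As an aside, the quantity $\hat{\rho}$ is easy to experiment with numerically. The sketch below is ours, not from the text: it samples the density $1/\sqrt{1-x^2}$ normalized on $[0,1]$ via the inverse CDF $F(x)=\frac{2}{\pi}\arcsin x$, takes $a=2$, and approximates the supremum over the window $[1-N^{-a},1]$ on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 500, 2

# Sample N points on [0,1] with density proportional to 1/sqrt(1-x^2),
# using the inverse CDF: F(x) = (2/pi) * arcsin(x).
x = np.sin(np.pi * rng.random(N) / 2.0)

# rho_hat(X_N): sup over y in [1 - N^{-a}, 1] of the distance from y to the
# nearest sample point, approximated on a fine grid of y-values.
ys = np.linspace(1.0 - N ** (-a), 1.0, 2000)
rho_hat = np.abs(ys[:, None] - x[None, :]).min(axis=1).max()
print(rho_hat)  # typically of order 1/N^2, up to logarithmic factors
```

The grid resolution and the seed are arbitrary; the point is only that $\hat{\rho}$ is tiny because the density blows up near $1$, so samples accumulate there.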
\subsubsection{Case $a=2$}
Suppose that an interval $I_\alpha:=[1-\frac{\alpha}{N^2}, 1]$ is disjoint from $X_N$ for some $\alpha>1$. Then we get
$$
\hat{\rho}(X_N)\geqslant \frac{\alpha-1}{N^2}.
$$
We notice that if $\alpha<C_1\log^2(N)$ and $N$ is sufficiently large, then
$$
\mu(I_\alpha)\leqslant C_2 \frac{\sqrt{\alpha}}{N}.
$$
Therefore, for any fixed $\alpha>1$,
$$
\P\left(\hat{\rho}\geqslant \frac{\alpha-1}{N^2}\right)\geqslant \left(1-C_2\frac{\sqrt{\alpha}}N\right)^N \geqslant C_3.
$$
Consequently,
$$
\mathbb{E}\hat{\rho}\geqslant \frac{C_4}{N^2},
$$
where $C_4 = C_3(\alpha-1)$.
For the estimate from above, notice that $\mu(I_\alpha)\geqslant \sqrt{\alpha}/(\sqrt{2}\pi N)$. Assuming $\hat{\rho}(X_N)\geqslant \frac{\alpha}{N^2}$, we get that the distance from $1$ to any $x_j$ exceeds $\alpha/N^2$, and thus the interval $[1-\frac{\alpha}{N^2}, 1]$ is disjoint from $X_N$. The probability of this event is less than
$$
\left(1-C_5\frac{\sqrt{\alpha}}{N}\right)^N \leqslant e^{-C_5\sqrt{\alpha}}.
$$
Thus, for any $\alpha$, $1<\alpha<N^2$, it follows that
$$\P\left(\hat{\rho}(X_N)\geqslant \frac{\alpha}{N^2}\right)\leqslant e^{-C_5\sqrt{\alpha}}.
$$
In particular, for sufficiently large $C_6$ we have
$$\P\left(\hat{\rho}(X_N)\geqslant \frac{C_6\log^2(N)}{N^2}\right)\leqslant N^{-3}.
$$
Therefore,
$$
\mathbb{E}\hat{\rho}(X_N)\leqslant \frac{1}{N^2} + \sum\limits_{\alpha=1}^{C_6\log^2(N)}\frac{\alpha+1}{N^2}e^{-C_5\sqrt{\alpha}} + N^{-3}.
$$
It is easy to see that the latter expression is bounded by $C_7/N^2$, which completes the proof for this case.
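For completeness, the boundedness claim follows by comparison with a convergent integral:
$$
\sum\limits_{\alpha=1}^{C_6\log^2(N)}\frac{\alpha+1}{N^2}e^{-C_5\sqrt{\alpha}} \leqslant \frac{1}{N^2}\sum\limits_{\alpha=1}^{\infty}(\alpha+1)e^{-C_5\sqrt{\alpha}} \leqslant \frac{C}{N^2},
$$
since, substituting $s=\sqrt{t}$, we have $\int_0^\infty (t+1)e^{-C_5\sqrt{t}}\,dt = \int_0^\infty 2s(s^2+1)e^{-C_5 s}\,ds < \infty$.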
\subsubsection{Case $0<a<2$}
We again notice that if $I=[\alpha, \alpha+\varepsilon]\subset [1-\frac{1}{N^a},1]$ is an interval of length $\varepsilon$, then
$$
\mu(I)=\int\limits_{\alpha}^{\alpha+\varepsilon} \frac{dt}{\pi\sqrt{1-t^2}}\geqslant \frac{1}{\pi} \frac{\varepsilon}{\sqrt{1-\alpha^2}}\geqslant \frac{1}{\pi} \frac{\varepsilon}{\sqrt{1-(1-\frac{1}{N^a})^2}}\geqslant C_1 \varepsilon N^{\frac{a}2}.
$$
Now consider $n$ intervals of length $\frac{1}{nN^a}$ (and thus having $\mu$-measure greater than $\frac{C_1}{nN^{\frac{a}2}}$) inside $[1-\frac{1}{N^a}, 1]$. As we have seen before, if $\hat{\rho}(X_N)>\frac{2}{nN^a}$, then for some $y\in [1-\frac{1}{N^a}, 1]$ the interval of length $\frac{2}{nN^a}$ centered at $y$ is disjoint from $X_N$; thus one of the fixed intervals of length $\frac{1}{nN^a}$ is disjoint from $X_N$. Consequently,
$$
\P\left(\hat{\rho}(X_N)\geqslant \frac2{nN^a}\right)\leqslant n\left(1-\frac{C_1}{nN^{\frac{a}{2}}}\right)^N.
$$
With
$$
n:=\frac{N^{1-\frac{a}{2}}}{A\log N},
$$
where $A$ is large enough, we get
$$
\P\left(\hat{\rho}(X_N)\geqslant \frac2{nN^a}\right)\leqslant n\left(1-\frac{C_1}{nN^{\frac{a}{2}}}\right)^N \leqslant N^{-3}.
$$
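Indeed, with this choice of $n$ the displayed bound follows from the elementary inequality $1-x\leqslant e^{-x}$:
$$
n\left(1-\frac{C_1}{nN^{\frac{a}{2}}}\right)^N \leqslant n\, e^{-C_1 N^{1-\frac{a}{2}}/n} = n\, N^{-C_1 A} \leqslant N^{1-\frac{a}{2}-C_1 A}\leqslant N^{-3},
$$
provided $A\geqslant 4/C_1$ and $N$ is large.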
Therefore,
$$
\mathbb{E}\hat\rho \leqslant C\frac{\log N}{N^{1+\frac{a}2}} + N^{-3},
$$
which finishes the estimate from above.
\bigskip
For the estimate from below we notice that if $I=[\alpha, \alpha+\varepsilon]\subset [1-\frac{1}{N^a},1-\frac{1}{2N^a}]$, then
$$
\mu(I)\leqslant C_2 \frac{\varepsilon}{\sqrt{1-(1-\frac{1}{2N^a})^2}}\leqslant C_2\,\varepsilon N^\frac{a}{2}.
$$
Take $n$ intervals in $[1-\frac{1}{N^a},1-\frac{1}{2N^a}]$ of length comparable to $\frac{1}{nN^a}$ and having equal $\mu$-measures $C_3\frac{1}{nN^\frac{a}2}$ (notice that if we are allowed to take such intervals near $1$, then the best measure we can get is $\frac{1}{\sqrt{nN^a}}$). If one of them is disjoint from $X_N$, then $\hat{\rho}(X_N)\geqslant \frac{C_4}{nN^a}$. Thus,
$$
\P\left(\hat\rho(X_N)\geqslant \frac{C_4}{nN^a}\right)\geqslant f\left(N, nN^{\frac{a}2}/C_3, n\right).
$$
It is easy to see that if we take
$$
n:=\frac{N^{1-\frac{a}2}}{A\log N-B\log\log N}
$$
for suitable $A$ and $B$, then the latter expression tends to one. Recall that $0<a<2$. Therefore, for large values of $N$ we have
$$
\P\left(\hat\rho(X_N) \geqslant C_4\frac{\log N}{N^{1+\frac{a}2}}\right) \geqslant \frac 12,
$$
which completes the proof for this case.
\subsubsection{The estimate for $\tilde\rho$}
For the estimate from above simply notice that for any interval $I$ we have $\mu(I)\geqslant |I|/\pi$. For the estimate from below take the interval $[-\frac12, \frac12]$. For any interval $I\subset [-\frac12, \frac12]$ we have $\mu(I)\leqslant C|I|$, and thus the estimate from below runs as usual.
\hfill $\Box$
\chapter{Critical points of random polynomials}
\label{ch:criticalpoints4}
\section{Introduction}
In this chapter we will investigate the distribution of the critical points of a polynomial in relation to its zeros. For a holomorphic function $f:\mathbb{C} \rightarrow \mathbb{C}$, a point $z \in \mathbb{C}$ is called a critical point of $f$ if $f'(z)=0$.
The oldest known result relating the zeros and critical points of a polynomial is the Gauss-Lucas theorem, which states that the critical points of any polynomial with complex coefficients lie inside the convex hull of the zeros of the polynomial.
\begin{theorem}[Gauss-Lucas; See Chapter 2, Theorem 6.1 in~\cite{marden}]\label{guass-lucas}
Let $P$ be a non-constant complex polynomial. Then the zeros of $P'$ are contained in the convex hull of the zeros of $P$.
\end{theorem}
In general nothing more can be said. Our interest is in sequences of polynomials, usually involving randomness, with increasing degrees. We consider the case in which the point cloud formed by the zeros of these polynomials converges to a probability measure in the complex plane, and we want to understand the behaviour of the critical points of these polynomials. We recall the definition of weak convergence.
\begin{definition}
For a sequence of probability measures $\{\mu_n\}$ and a probability measure $\mu$ on $\text{$\mathbb{C}$}$, we say that $\mu_n \xrightarrow{w} \mu$ \textit{weakly} if, for every $ f\in C_c^{\infty}(\text{$\mathbb{C}$})$, we have $\lim\limits_{n \rightarrow \infty}\int_{\mathbb{C}}f\,d\mu_n = \int_{\mathbb{C}}f\,d\mu$.
\end{definition}
The following definition formalizes the notion of a point cloud converging to a probability measure, the point cloud being the collection of terms of a sequence of complex numbers.
\begin{definition}We say a sequence of complex numbers $\{a_n\}_{n\geq1}$ is \textit{$\mu$-distributed} if its empirical measures $\frac{1}{n}\sum_{k=1}^{n}\delta_{a_k}$ converge weakly to the probability measure $\mu$.
\end{definition}
In the next section we study the critical points of deterministic sequences of polynomials. We show that if all the zeros are confined to regions that are well separated, then, up to a bounded number of exceptions, the critical points are confined to these regions as well. Then, in subsequent sections, we discuss a few examples in which the limiting measures of zeros and of critical points do not agree.
Later we give a brief overview of existing results in the literature where sequences of random polynomials are considered. In all these cases it was shown that the limiting measures of zeros and critical points agree.
In Section 2.3 we present our results and discuss their consequences. In the first result we construct a random sequence of zeros: each term is chosen at random from the corresponding terms of one of two deterministic sequences. We construct a sequence of polynomials whose zeros are the terms of this random sequence, and we show that the limiting empirical measures of zeros and critical points of these polynomials agree. We then state corollaries of this result, in which we perturb a sequence randomly and show that the limiting measures of zeros and critical points agree. In our second result we consider a random rational function, which can be used to obtain a generalized derivative, and show that the limiting distributions of zeros and poles agree. As a corollary we show that if we choose a random subsequence of a deterministic sequence, then the limiting measures of zeros and critical points agree. In the last section we prove the corollaries mentioned earlier. We defer the proofs of the theorems to the next chapter.
We will recall a well known proof of the Gauss-Lucas theorem. If $z_1,z_2,\dots,z_n$ are the roots of the polynomial $P$, then for some constant $c$, $P(z)=c(z-z_1)(z-z_2)\dots(z-z_n)$. Define $L(z):=\frac{P'(z)}{P(z)}=\sum_{k=1}^{n}\frac{1}{z-z_k}$. If $z$ is a zero of $P'$ that is not equal to any of the $z_i$s, then $L(z)=0$. Hence,
\[
\sum_{k=1}^{n}\frac{1}{z-z_k}=0 \text{\hspace{12 pt} or, \hspace{12 pt} } \sum_{k=1}^{n}\frac{\overline{z}-\overline{z}_k}{|z-z_k|^2}=0.
\]
Therefore if $L(z)=0$, then $z$ satisfies,
\[
z=\frac{\sum_{k=1}^{n}\frac{1}{|z-z_k|^2}z_k}{\sum_{k=1}^{n}\frac{1}{|z-z_k|^2}}.
\]
In the above equation $z$ is expressed as a convex combination of the $z_k$s. In the other case, where $z$ equals one of the $z_k$s, it is trivially true that $z$ lies in the convex hull of $z_1,z_2,\dots,z_n$.
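This convex-combination identity is also easy to verify numerically. The sketch below is not part of the original text; it checks the identity for an arbitrarily chosen polynomial, using NumPy's root finder.

```python
import numpy as np

# Zeros of an example polynomial P (chosen arbitrarily for illustration).
zeros = np.array([1.0, 1.0j, -1.0, -1.0 + 2.0j])

coeffs = np.poly(zeros)               # coefficients of P
crit = np.roots(np.polyder(coeffs))   # critical points: the roots of P'

for z in crit:
    w = 1.0 / np.abs(z - zeros) ** 2  # weights 1/|z - z_k|^2
    # Each critical point equals the weighted average of the zeros.
    assert abs(z - np.sum(w * zeros) / np.sum(w)) < 1e-8
```

Since the weights are positive and sum to one after normalization, each critical point is visibly a convex combination of the zeros, as the proof asserts.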
For a different proof of this theorem the reader can refer to \cite[Chapter-2, Theorem 6.1]{marden}. Since the Gauss-Lucas theorem, there have been several results concerning critical points and zeros of polynomials; the interested reader may consult the references in \cite{pemantle}. Several conjectures on the same can be found in \cite{mardenconjectures} and \cite{borcea}. A brief survey of results connecting zeros and critical points of polynomials can be found in \cite{sury}.
In the proof of the Gauss-Lucas theorem we defined the function $L(z)=\sum_{k=1}^{n}\frac{1}{z-z_k}$. It can be interpreted as the potential at the point $z$ due to unit charges placed at the points $z_1,z_2,\dots,z_n$. This potential function plays a key role in studying critical points; all the results in this chapter are obtained by analyzing it.
\section{Critical points of a sequence of deterministic polynomials.}
Consider a sequence of deterministic polynomials. If all the zeros of these polynomials lie in a bounded convex set, the Gauss-Lucas theorem asserts that the critical points of these polynomials lie in the same set. In this context we state a well known related result of Walsh.
\begin{theorem}[J.L.Walsh ~\cite{walsh}]
Let $C_1$, $C_2$ be disks with centres $c_1$, $c_2$ and radii $r_1$, $r_2$. Let $P$ be a polynomial of degree $n$ with all its zeros in $C_1\cup C_2$, say $n_1$ zeros in $C_1$ and $n_2$ zeros in $C_2$. Then $P$ has all its critical points in $C_1\cup C_2 \cup C_3$, where $C_3$ is the disk with centre $c_3$ and radius $r_3$ given by
\[
c_3=\frac{n_1c_2+n_2c_1}{n}, \hspace{5 pt} r_3=\frac{n_1r_2+n_2r_1}{n}.
\]
Furthermore, if $C_1$, $C_2$ and $C_3$ are pairwise disjoint, then $C_1$ contains $n_1-1$ critical points, $C_2$ contains $n_2-1$ critical points and $C_3$ contains $1$ critical point.
\end{theorem}
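Walsh's theorem is easy to see in action numerically. In the sketch below (our illustration; the disks and zeros are our own choices) we place $n_1=3$ zeros in $C_1=D(0,\frac12)$ and $n_2=2$ zeros in $C_2=D(10,\frac12)$, so that $C_3=D(6,\frac12)$ and the three disks are pairwise disjoint.

```python
import numpy as np

# n1 = 3 zeros in C1 = D(0, 0.5); n2 = 2 zeros in C2 = D(10, 0.5).
zeros = np.array([0.0, 0.3, -0.3j, 10.0, 10.2])
n1, n2, n = 3, 2, 5
c1, r1, c2, r2 = 0.0, 0.5, 10.0, 0.5

c3 = (n1 * c2 + n2 * c1) / n   # centre of Walsh's third disk: 6.0
r3 = (n1 * r2 + n2 * r1) / n   # its radius: 0.5

crit = np.roots(np.polyder(np.poly(zeros)))  # the 4 critical points

in_c1 = int(np.sum(np.abs(crit - c1) <= r1))
in_c2 = int(np.sum(np.abs(crit - c2) <= r2))
in_c3 = int(np.sum(np.abs(crit - c3) <= r3))
print(in_c1, in_c2, in_c3)  # 2 1 1, as the disjoint case of the theorem predicts
```

The counts $n_1-1=2$, $n_2-1=1$ and $1$ match the final assertion of the theorem, since the three disks here are pairwise disjoint.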
Inspired by the above theorem of Walsh, we derive the following result. Consider a sequence of polynomials whose zeros lie in well separated clusters (for example, separated unit disks). Further assume that the numbers of zeros in these sets grow proportionately. Under these assumptions we show that, in a neighbourhood of each of these sets, the number of critical points differs from the number of zeros by at most a constant.
\begin{theorem}\label{walsh_general}
Let $S_1,S_2,\dots, S_k$ be pairwise disjoint bounded convex sets in the complex plane. Assume that $\mbox{diam}(S_i)\leq 1$, for $i=1,2,\dots,k$, and that the distance of separation between any two $S_i,S_j$, for $i\neq j$, is at least $5k$. Define the sequence of polynomials $\{P_n\}_{n\geq 1}$ as $P_n(z):=\prod\limits_{j=1}^{k}\prod\limits_{i=1}^{n}(z-z^{(j)}_i)$, where $z_i^{(j)} \in S_j$ for $j=1,\dots, k$ and $i=1,\dots, n$. Then, for $\epsilon > \frac{3-\sqrt{5}}{2}$, there is a constant $c(k,\epsilon)$ such that the number of zeros of $P_n'(z)$ in $S_i^{\epsilon}$, the $\epsilon$-neighbourhood of $S_i$, is at least $n-c(k,\epsilon)$ for every $i=1,\dots,k$.
\end{theorem}
\begin{proof}
We will estimate the number of critical points of $P_n(z)$ in $S_1^\epsilon$, the $\epsilon$-neighbourhood of $S_1$. Let the diameter of each set $S_i$ be at most $d$, and assume that the separation between any two $S_i,S_j$ is at least $d_s$, where $d_s>0$. In the course of this proof we will substitute the values $d=1$ and $d_s=5k$ given in the statement of the theorem. For $\epsilon$ small enough, there are $n$ zeros of $P_n(z)$ in $S_1^{\epsilon}$. The argument principle computes the difference between the numbers of zeros and poles of a meromorphic function in a domain by evaluating a certain integral over the boundary of the domain; we will use it to estimate the number of critical points of $P_n(z)$ in $S_1^{\epsilon}$. A version of the argument principle is stated below.
Let $f: U \rightarrow\mathbb{C}$ be a meromorphic function on a simply connected domain $U$ and let $C$ be a rectifiable simple closed curve in $U$. Assume that $f$ has neither zeros nor poles on $C$. Then
\begin{equation}
\frac{1}{2\pi i}\oint_{C}\frac{f'(z)}{f(z)}dz = N_Z\left(f,C\right)-N_P\left(f,C\right)\label{argumentprinciple}
\end{equation}
where $N_Z\left(f,C\right)$ and $N_P\left(f,C\right)$ are the numbers of zeros and poles of $f$, counted with multiplicity, enclosed by the curve $C$.
Define $L_n(z):=\frac{P_n'(z)}{P_n(z)}=\sum\limits_{j=1}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z_s^{(j)}}$ and notice that the zeros and poles of $L_n(z)$ are the zeros of $P_n'(z)$ and of $P_n(z)$, respectively. We shall apply the argument principle to the function $L_n(z)$ along the boundary curve $\gamma_1^\epsilon$ obtained from $\partial S_1^{\epsilon}$. Assume that there are no zeros of $P_n'(z)$ on the curve $\gamma_1^\epsilon$.
Hence applying the formula \eqref{argumentprinciple} to $L_n(z)$ we get,
\begin{align}
|N_Z\left(L_n,\gamma_1^\epsilon\right)-N_P\left(L_n,\gamma_1^\epsilon\right)| & = \bigg|\frac{1}{2\pi}\oint\limits_{\gamma_1^\epsilon}\frac{L_n'(z)}{L_n(z)}dz\bigg|,\\
\intertext{that is,}
|N_Z\left(P_n',\gamma_1^\epsilon\right)-n| & \leq \frac{1}{2\pi}\oint\limits_{\gamma_1^\epsilon}\bigg|\frac{L_n'(z)}{L_n(z)}\bigg||dz| \label{lineintegral}
\end{align}
For the above integrand we will give an upper bound for the numerator and a lower bound for the denominator on the curve $\gamma_1^\epsilon$. Since $|z-z_s^{(j)}|\geq\epsilon$ for $z$ on the curve $\gamma_1^\epsilon$, the triangle inequality gives
\begin{equation}
|L_n'(z)|=\Biggl|-\sum\limits_{j=1}^{k}\sum\limits_{s=1}^{n}\frac{1}{\left(z-z^{(j)}_s\right)^2}\Biggr| \leq \sum\limits_{j=1}^{k}\sum\limits_{s=1}^{n}\frac{1}{\bigl|z-z^{(j)}_s\bigr|^2} \leq \frac{kn}{\epsilon^2}.\label{numerator}
\end{equation}
Similarly, for $L_n(z)$, using the triangle inequality we get
\begin{align}
|L_n(z)|=\left|\sum\limits_{j=1}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z^{(j)}_s}\right| & \geq \left|\sum\limits_{s=1}^{n}\frac{1}{z-z^{(1)}_s}\right|- \left|\sum\limits_{j=2}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z^{(j)}_s}\right|.\label{eqn:lemma0:1}
\end{align}
The first term in the rightmost expression in \eqref{eqn:lemma0:1} is invariant under multiplication by $e^{i\theta}$. Therefore for any $\theta \in [0,2\pi)$ we have,
\begin{align}
|L_n(z)| &\geq\left|\sum\limits_{s=1}^{n}\frac{e^{i\theta}}{z-z^{(1)}_s}\right|-\left|\sum\limits_{j=2}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z^{(j)}_s}\right|,\\
&\geq \left|\sum\limits_{s=1}^{n}\Im\left(\frac{e^{i\theta}}{z-z^{(1)}_s}\right)\right|- \left|\sum\limits_{j=2}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z^{(j)}_s}\right|, \\ & \geq \frac{n\epsilon}{(d+\epsilon)^2}-\frac{(k-1)n}{d_s-\epsilon}.
\end{align}
Since $S_1^\epsilon$ is a convex set, by the separating hyperplane theorem there is a line passing through $z\in \partial S_1^\epsilon$ such that all the points $z_s^{(1)}$ lie on the same side of this line. Let $\theta$ be the angle made by this line with the real axis; then all the terms in $\sum\limits_{s=1}^{n}\Im{\frac{e^{i\theta}\left(\overline z- \overline z_s^{(1)}\right)}{\left|z-z_s^{(1)}\right|^2}}$ have the same sign, the numerators have absolute value at least $\epsilon$, and the denominators are at most $(d+\epsilon)^2$. Similarly $\bigg|\sum\limits_{j =2}^{k}\sum\limits_{s=1}^{n}\frac{1}{z-z_s^{(j)}}\bigg| \leq \sum\limits_{j=2}^{k}\sum\limits_{s=1}^{n}\frac{1}{|z-z_s^{(j)}|}$, and for $j\neq 1$ the denominator $|z-z_s^{(j)}|$ is at least $d_s-\epsilon$, because $|z-z_s^{(j)}| \geq d(S_1^\epsilon,S_j)\geq d(S_1,S_j)-\epsilon \geq d_s-\epsilon$. Hence, for $z \in \partial S_1^\epsilon$, we obtain
\begin{equation}
\left|L_n(z)\right| \geq \frac{n\epsilon}{(d+\epsilon)^2}-\frac{(k-1)n}{d_s-\epsilon}. \label{denominator}
\end{equation}
By substituting $d=1$, for the right hand side of \eqref{denominator} to be positive, we need $d_s$ to satisfy
\begin{equation}
d_s>\epsilon+\frac{(k-1)(1+\epsilon)^2}{\epsilon}.\label{eqn:d_sandepsilon}
\end{equation}
For any choice of $d_s\geq 5k-2$ and $\epsilon \in [\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2}]$, the inequality \eqref{eqn:d_sandepsilon} is satisfied. For these choices of variables we have,
\begin{align}
|N_Z\left(P_n',\gamma_1^\epsilon\right)-n| & \leq \frac{1}{2\pi}\oint\limits_{\gamma_1^\epsilon}\bigg|\frac{L_n'(z)}{L_n(z)}\bigg||dz|,\\
& \leq
\frac{1}{2\pi}\oint\limits_{\gamma_1^\epsilon}\dfrac{\frac{kn}{\epsilon^2}}{\frac{n\epsilon}{(1+\epsilon)^2}-\frac{(k-1)n}{d_s-\epsilon}}|dz|,\\
&\leq \frac{1+2\epsilon}{2\epsilon^2}\dfrac{k}{\frac{\epsilon}{(1+\epsilon)^2}-\frac{(k-1)}{d_s-\epsilon}} =: c(k,\epsilon,d_s). \label{eqn:integral}
\end{align}
The inequality \eqref{eqn:integral} uses the fact that $p\leq \pi d$, where $p$ is the perimeter and $d$ is the diameter of a convex set. Substituting $d_s=5k$ we obtain that $|N_Z(L_n,\gamma_1^\epsilon)-N_P(L_n,\gamma_1^\epsilon)|\leq c(k,\epsilon)$. By the choice of $\epsilon$, this proves the theorem for $\epsilon \in [\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2}]$. For $\epsilon>\frac{3+\sqrt{5}}{2}$, the set $S_1^\epsilon$ contains the $\frac{3+\sqrt{5}}{2}$-neighbourhood of $S_1$, hence the number of critical points in $S_1^\epsilon$ is at least $n-c(k,\frac{3+\sqrt{5}}{2})$. Choosing $c(k,\epsilon)=c(k,\frac{3+\sqrt{5}}{2})$, the theorem follows for $\epsilon>\frac{3+\sqrt{5}}{2}$ as well.
\end{proof}
\begin{remark}
In Theorem \ref{walsh_general} we assumed that all the $S_i$ contain equal numbers of zeros of $P_n$. Instead we may assume that the number of zeros of $P_n$ in $S_i$ is $c_in$ for constants $c_1,c_2,\dots,c_k$ and obtain a similar result, by the same ideas as in the above proof.
\end{remark}
Pemantle and Rivin raised the question whether, if the empirical measures of the zeros of a sequence of polynomials converge to a probability measure $\mu$, the empirical measures of the critical points must also converge to $\mu$. We will see in the forthcoming examples that this is false in general. For convenience we introduce the following notation. For any polynomial $P$, denote by $Z(P)$ the multi-set of zeros of $P$ and by $\text{$\mathscr{M}$}(P)$ the uniform probability measure on $Z(P)$.
Here we construct sequences of polynomials for which the limiting measures of zeros and critical points do not agree. The most commonly quoted~\cite{pemantle} sequence of polynomials in this regard is $P_n(z)=z^n-1$. In this case the limiting zero measure is the uniform probability measure on $S^1$, while the limiting critical point measure is the Dirac measure at the origin. We generalize this example and construct a new set of examples for which the limiting measures of zeros and critical points differ.
\begin{eg}
Observe that if a polynomial has all its zeros real, then all its critical points are real and interlace the zeros of the polynomial. Consider the polynomial $P_n(z)=(z-a_1^n)(z-a_2^n)\dots(z-a_k^n)$, where $a_1,a_2,\dots,a_k$ are real numbers such that $0 < a_1 < a_2 < \dots < a_k$. Define the sequence of polynomials $Q_n(z)=P_n(z^n)$; then $Q_n'(z)=nz^{n-1}P_n'(z^n)$. The zero set of $Q_n$ is \[Z(Q_n)=\bigcup\limits_{j=1}^{k}\bigcup\limits_{\ell=1}^{n}\{a_je^{ 2\pi i\frac{\ell}{n}}\},\] whereas the zero set of $Q_n'$ is \[Z(Q_n')=\left(\bigcup\limits_{j=1}^{k-1}\bigcup\limits_{\ell=1}^{n} \{b_{j,n}^{\frac{1}{n}}e^{ 2\pi i\frac{\ell}{n}}\}\right)\bigcup\{0,0,\dots,0\},\] where $b_{1,n},b_{2,n},\dots,b_{k-1,n}$ are the zeros of the polynomial $P_n'(z)$. The probability measure $\text{$\mathscr{M}$}(Q_n')$ has mass $\frac{n-1}{kn-1}$ at $0$, hence its limiting measure has mass $\frac{1}{k}$ at $0$. On the other hand, the probability measure $\text{$\mathscr{M}$}(Q_n)$ is supported on $\bigcup\limits_{j=1}^{k}a_jS^1$. Hence the limiting measures do not agree.
\end{eg}
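The mass at the origin in this example can be observed directly in a small computation. The sketch below is our illustration, with $k=2$, $a_1=1$, $a_2=2$ and $n=20$ (any such values would do).

```python
import numpy as np

a1, a2, n = 1, 2, 20
pw = np.poly([a1 ** n, a2 ** n])   # P_n(w) = (w - a1^n)(w - a2^n)

# Q_n(z) = P_n(z^n): place the coefficients of P_n at degrees 2n, n, 0.
q = np.zeros(2 * n + 1)
q[0], q[n], q[2 * n] = pw[0], pw[1], pw[2]

crit = np.roots(np.polyder(q))     # the kn - 1 = 39 zeros of Q_n'

mass_at_zero = np.sum(np.abs(crit) < 0.5)   # the (n-1)-fold zero at the origin
print(mass_at_zero, len(crit))              # 19 of the 39 critical points
```

Here $19/39 = \frac{n-1}{kn-1}$ of the critical points sit at the origin, while the remaining $20$ lie on a circle of radius $b_{1,n}^{1/n}\approx 1.93$, in agreement with the description of $Z(Q_n')$ above.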
\begin{eg}
For the second class of examples, choose a sequence of complex numbers contained in a disk of radius $r$ around $0$: let the sequence be $\{a_n\}_{n\geq1}$ with $|a_n|\leq r$. Make a sequence of polynomials with the terms of this sequence as zeros: define $P_n(z)=(z-a_1)(z-a_2)\dots(z-a_n)$. Using these, define the polynomials $Q_n(z)=\int_{0}^{z}P_n(w)dw+(2r+d)^{n+1}$, where $d>0$. Notice that $Q_n'(z)=P_n(z)$. We will show that the polynomial $Q_n(z)$ does not vanish in the disk $\text{$\mathbb{D}$}_r$. For this observe
\begin{align}
\min\limits_{z \in \text{$\mathbb{D}$}_r}|Q_n(z)| & \geq (2r+d)^{n+1}-\max\limits_{z \in \text{$\mathbb{D}$}_r}\left|\int_{0}^{z}P_n(w)dw\right|,\\
& \geq (2r+d)^{n+1}-r\max\limits_{z\in \text{$\mathbb{D}$}_r}|P_n(z)|,\\
& \geq (2r+d)^{n+1}-r(2r)^n >0.
\end{align}
Therefore $Q_n(z)$ does not vanish in the disk $\text{$\mathbb{D}$}_r$. The same estimate applied on $\text{$\mathbb{D}$}_{r+d}$, where $\max_{z\in\text{$\mathbb{D}$}_{r+d}}|\int_0^z P_n(w)dw|\leq (r+d)(2r+d)^n < (2r+d)^{n+1}$, shows that $Q_n(z)$ does not vanish in $\text{$\mathbb{D}$}_{r+d}$ either. Hence the zeros of $Q_n(z)$ lie outside the disk $\text{$\mathbb{D}$}_{r+d}$. Assuming that $Q_n(z)$ has a limiting zero measure, the support of the limiting zero measure of $Q_n(z)$ is disjoint from the support of the limiting zero measure of $P_n(z)$.
\end{eg}
\begin{eg}
We will illustrate a more concrete example based on the above technique. Choose a polynomial $P$ whose zeros are in $\text{$\mathbb{D}$}_r$, where $r<1$. Define $Q_n(z)=P^n(z)-1$; then $Q_n'=nP^{n-1}(z)P'(z)$. If $z$ is a zero of $Q_n(z)$, then it satisfies $P^n(z)=1$, so $|P(z)|=1$. Therefore the limiting zero measure of $Q_n(z)$ is supported on the boundary of the lemniscate $\{z:|P(z)|\leq1\}$ of the polynomial $P$. The limiting zero measure for the sequence $\{Q_n\}_{n \geq 1}$ exists because $Q_n$ is the $nk$-th Chebyshev polynomial of the lemniscate of $P$; hence the limiting zero measure is the equilibrium measure of the domain $\{z:|P(z)|\leq1\}$. For a detailed discussion of the relation between Chebyshev polynomials and equilibrium measures, the reader can refer to Chapter 5 in \cite{ransford}. On the other hand, if $z_1,z_2,\dots,z_k$ are the roots of the polynomial $P$, then the limiting zero distribution of $Q_n'$ is $\frac{1}{k}\sum\limits_{i=1}^{k}\delta_{z_i}$. Hence the limiting measures of zeros and critical points of the given sequence of polynomials do not agree.
\end{eg}
In this context we quote the question posed by Pemantle and Rivin in \cite{pemantle}.
\begin{question}\label{Pemantle_question}
When are the zeros of $P_n'$ stochastically similar to the zeros of $P_n$?
\end{question}
\section{Critical points of random polynomials.}
To tackle Question \ref{Pemantle_question}, the $P_n$ can be taken to be random. The study of critical points of random polynomials through random zeros was initiated by Pemantle and Rivin in \cite{pemantle}. They considered a sequence of random polynomials whose zeros are i.i.d. with law $\mu$ having finite $1$-energy, and proved that the empirical law of critical points converges weakly to the same probability measure $\mu$. A similar result for probability measures supported on $S^1$ was proved by Subramanian \cite{sneha}. Kabluchko \cite{kabluchko} proved the result without any assumption on $\mu$.
Before stating the above mentioned results we recall the modes of convergence for random measures.
\begin{definition}\label{modes of convergence}
Let $\mbox{\textbf{M}}(\text{$\mathbb{C}$})$ be the set of probability measures on the complex plane, equipped with the \textit{weak topology}. Let $\{\mu_n\}_{n\geq1}$ be a sequence in $\mbox{\textbf{M}}(\text{$\mathbb{C}$})$ and let $\mu \in \mbox{\textbf{M}}(\text{$\mathbb{C}$})$. We say,
\begin{itemize}
\item $\mu_n \xrightarrow{w} \mu$ in probability if $\lim\limits_{n\rightarrow\infty}\Pr(\mu_n \in N_\mu)=1$ for any neighbourhood $N_\mu$ of $\mu$,
\item $\mu_n \xrightarrow{w} \mu$ almost surely if $\Pr\left(\mu_n \xrightarrow{w} \mu\right)=1$.
\end{itemize}
\end{definition}
We now give precise statements of the results in \cite{kabluchko} and \cite{pemantle}.
\begin{definition}
Define the \textit{$p$-energy} of $\mu$ to be
\[
\mathcal{E}_p(\mu):=\left(\int\limits_{\mathbb{C}}\int\limits_{\mathbb{C}}\frac{1}{|z-w|^p}\,d\mu(z)\,d\mu(w)\right)^{\frac{1}{p}}.
\]
\end{definition}
\begin{theorem}[Pemantle-Rivin~\cite{pemantle}]\label{pemantle-rivin}Let $X_1,X_2,\dots$ be a sequence of i.i.d.\ random variables with law $\mu$. Assume that $\mu$ has finite $1$-energy. Let $P_n(z)=(z-X_1)(z-X_2)\dots(z-X_n)$. Then the critical point measures satisfy $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ almost surely.
\end{theorem}
One limitation of the above result is that it is not applicable to probability measures supported on $1$-dimensional subsets of the complex plane. But the result is easily verified for probability measures supported on the real line: by Rolle's theorem the critical points interlace the roots of the polynomial, hence the L\'{e}vy distance between the zero measure $\text{$\mathscr{M}$}(P_n)$ and the critical point measure $\text{$\mathscr{M}$}(P_n')$ is at most $\frac{1}{n}$; on the other hand, the zero measure $\text{$\mathscr{M}$}(P_n)$ has limiting measure $\mu$, the probability measure from which the random variables are drawn. Combining these two observations, the result follows. Pemantle and Rivin conjectured in \cite{pemantle} that the statement of Theorem \ref{pemantle-rivin} is true without any assumptions on $\mu$.
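The interlacing argument is easy to check numerically; the sketch below is ours, not from \cite{pemantle}, and samples i.i.d. uniform zeros on the real line.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
x = np.sort(rng.random(n))               # i.i.d. real zeros, sorted

crit = np.roots(np.polyder(np.poly(x)))
assert np.max(np.abs(crit.imag)) < 1e-6  # real zeros force real critical points
c = np.sort(crit.real)

# Rolle's theorem: one critical point in each gap (x_i, x_{i+1}); hence the
# Levy distance between the two empirical measures is at most 1/n.
assert np.all((x[:-1] - 1e-9 <= c) & (c <= x[1:] + 1e-9))
```

The tolerances absorb floating-point error in the root finder; the interlacing itself is exact for distinct real zeros.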
\begin{conjecture}[Pemantle-Rivin \cite{pemantle}]\label{pemantle_conjecture}
Let $X_1,X_2,\dots$ be i.i.d. random variables distributed according to a probability measure $\mu$ and $P_n(z):=(z-X_1)(z-X_2)\dots(z-X_n)$. Then $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ almost surely.
\end{conjecture}
Kabluchko proved the conjecture of Pemantle and Rivin in a weak form.
\begin{theorem}[Kabluchko~\cite{kabluchko}]\label{kabluchko}
Let $X_1, X_2 ,\dots$ be i.i.d. random variables distributed according to a probability measure $\mu$ and $P_n(z):=(z-X_1)(z-X_2)\dots(z-X_n)$. Then $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ in probability.
\end{theorem}
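As an illustration of the theorem, here is a small simulation sketch of our own, with i.i.d. zeros sampled uniformly from the unit disk. Since the empirical law of critical points is close to $\mu$ for large $n$, summary statistics of the two point sets should nearly match.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# i.i.d. zeros with law mu = uniform measure on the unit disk.
zeros = np.sqrt(rng.random(n)) * np.exp(2j * np.pi * rng.random(n))

crit = np.roots(np.polyder(np.poly(zeros)))

# By Gauss-Lucas the critical points stay in the convex hull of the zeros,
# hence in the unit disk; compare the mean moduli of the two point sets.
print(np.abs(zeros).mean(), np.abs(crit).mean())
```

In experiments of this kind one also observes that most critical points pair off very closely with individual zeros, which is consistent with (though much stronger than) the weak convergence asserted by the theorem.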
Further results concerning critical points and zeros of random polynomials are discussed below. In \cite{cheung} the authors (Pak-Leong Cheung, Tuen Wai Ng, Jonathan Tsai and SCP Yam) prove that the empirical law of zeros of the higher derivatives of a polynomial whose zeros are i.i.d. with law $\mu$ supported on $S^1$ converges to the same probability measure $\mu$. They also obtain similar results for the zeros of generalized derivatives of polynomials. Similar results for critical points of characteristic polynomials of random matrix ensembles (Haar distributed on $O(n)$, $SO(n)$, $U(n)$, $Sp(n)$) were proved by O'Rourke in \cite{orourke}.
In this section we present two results concerning the zeros and critical points of sequences of random polynomials. In the previous section we saw examples of polynomials for which the limiting empirical distributions of zeros and critical points do not agree, whereas if the zeros are chosen to be i.i.d. random variables, then the two limits agree \cite{kabluchko}. Our results bridge the gap between these two scenarios: we reduce the randomness in choosing the zeros and show that the statement still holds. In Theorem \ref{thm1} we start with two sequences of complex numbers that are asymptotically distributed according to the same probability measure, and we assume that the two sequences are sufficiently different (the precise conditions are stated in the theorem). We then construct a random sequence whose terms are chosen independently at random from the corresponding terms of either of the two sequences. If we form a sequence of polynomials whose zeros are the terms of this random sequence, then the limiting measure of the critical points of these polynomials agrees with the limiting measure of the sequences we started with.
We prove the result for a specific class of sequences, which we call \textit{log-Ces\'{a}ro-bounded}, defined as follows.
\begin{definition}\label{def:log-cesaro}
A sequence of complex numbers $\{a_n\}_{n\geq1}$ is said to be \textit{log-Ces\'{a}ro-bounded} if the Ces\'{a}ro means of the positive parts of their logarithms are bounded, i.e., the sequence $\{\frac{1}{n}\sum_{k=1}^{n}\log_+|a_k|\}_{n\geq1}$ is bounded.
\end{definition}
\begin{eg}
Any bounded sequence is log-Ces\'{a}ro bounded.
\end{eg}
\begin{theorem}\label{thm1}
Let $\{a_k\}_{k\geq1}$ and $\{b_k\}_{k\geq1}$ be two $\mu$-distributed, log-Ces\'{a}ro bounded sequences of complex numbers. Additionally, assume that $a_k \neq b_k$ for infinitely many $k$.
Define the sequence of independent random variables $\xi_k$ such that $\xi_k = a_k $ or $b_k$ with equal probability, for $k\geq1$. Define the polynomials $P_n(z):=(z-\xi_1)(z-\xi_2)\dots(z-\xi_n)$. Then, $\text{$\mathscr{M}$}(P_n)\xrightarrow{w}\mu$ almost surely and $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ in probability.
\end{theorem}
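The setting of Theorem \ref{thm1} is easy to probe numerically. The following sketch is an illustration only (NumPy assumed; the two rotation sequences, the degree $n=150$ and the tolerances below are arbitrary choices, not part of the theorem). Two irrational rotation orbits on $S^1$ are both uniformly distributed in the limit and bounded, hence log-Ces\'{a}ro bounded, and they differ in every term; the critical points of the resulting random polynomial should then cluster near $S^1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150
k = np.arange(1, n + 1)
# Two mu-distributed, bounded (hence log-Cesaro bounded) sequences on S^1:
# irrational rotations, both equidistributed w.r.t. the uniform measure.
a = np.exp(2j * np.pi * np.sqrt(2) * k)
b = np.exp(2j * np.pi * np.sqrt(3) * k)
# xi_k = a_k or b_k with equal probability, independently over k.
xi = np.where(rng.integers(0, 2, size=n) == 1, a, b)

coeffs = np.poly(xi)                 # coefficients of P_n(z) = prod (z - xi_k)
crit = np.roots(np.polyder(coeffs))  # zeros of P_n'

# By Gauss-Lucas the critical points lie in the convex hull of the zeros
# (so inside the closed unit disk); by Theorem thm1 they should moreover
# cluster near S^1, so the mean modulus should be close to 1.
print(np.max(np.abs(crit)), np.mean(np.abs(crit)))
```

The check deliberately avoids the degenerate configuration $z^n-1$ discussed later: the independent coin flips destroy the exact symmetry of roots of unity that pushes all critical points to the origin.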
For the assertion of the above theorem to hold, it is necessary to assume that the two sequences differ in infinitely many terms. Suppose not; then we may choose one of the sequences to be a sequence for which the assertion of the theorem fails. Since the two sequences differ in only finitely many terms, with positive probability the resulting random sequence coincides, up to finitely many terms, with the sequence for which the assertion fails, and hence with positive probability the conclusion of Theorem \ref{thm1} does not hold. The log-Ces\'{a}ro boundedness of the sequences is assumed to enable the proof; we do not know whether it is necessary.
Theorem \ref{thm1} can be used to obtain corollaries of the following form: choose a deterministic $\mu$-distributed sequence and perturb each of its terms by a random variable with diminishing variance. Then the empirical measure of the critical points of the polynomials formed from the perturbed sequence also converges to the same limiting probability measure $\mu$.
\begin{corollary}\label{Symmetric perturbations}
Let $\{u_n\}_{n\geq1}$ be a $\mu$-distributed, log-Ces\'{a}ro bounded sequence. Let $\{v_n\}_{n\geq1}$ be the sequence $v_n=u_n+\sigma_nX_n$, where the $X_n$ are i.i.d. random variables satisfying $X_n\stackrel{d}{=}-X_n$, $\ee{\text{$|X_n|$}}<\infty$, and $\sigma_n \downarrow 0$ with $\sigma_n\neq0$. Define the polynomial $P_n(z):=(z-v_1)(z-v_2)\dots(z-v_n)$. Then, $\text{$\mathscr{M}$}(P_n)\xrightarrow{w}\mu$ almost surely and $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ in probability.
\end{corollary}
\begin{remark}
In Corollary \ref{Symmetric perturbations}, we may choose the random variables $X_n$ to have a complex Gaussian distribution or the uniform distribution on the unit disk centred at $0$. In the case of complex Gaussian random variables we obtain the result for unbounded perturbations, while in the case of uniformly distributed random variables the perturbations are bounded.
\end{remark}
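Corollary \ref{Symmetric perturbations} with complex Gaussian perturbations can be simulated directly. The sketch below is illustrative (NumPy assumed; the base sequence, the choice $\sigma_n=1/n$ and the degree are arbitrary): the perturbed zeros drift off $S^1$ for small indices, yet the critical points remain confined to the convex hull of the zeros and their bulk stays near the circle.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
k = np.arange(1, n + 1)
u = np.exp(2j * np.pi * np.sqrt(2) * k)            # mu-distributed on S^1, bounded
sigma = 1.0 / k                                     # sigma_n decreasing to 0
X = rng.normal(size=n) + 1j * rng.normal(size=n)    # symmetric (complex Gaussian)
v = u + sigma * X                                   # perturbed sequence

crit = np.roots(np.polyder(np.poly(v)))             # critical points of P_n

# Gauss-Lucas: critical points lie in the convex hull of the zeros v_k,
# and the corollary predicts their bulk is near the unit circle.
print(np.max(np.abs(crit)), np.mean(np.abs(crit)))
```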
It is an easy fact (page 15 in \cite{manjubook}) that if $\{X_n\}_{n \geq 1}$ is a sequence of i.i.d. random variables, not identically $0$, such that $\ee{\text{$\log_+|X_1|$}}<\infty$, then $\limsup\limits_{n \rightarrow \infty}|X_n|^\frac{1}{n}=1$.
A special case of Theorem \ref{kabluchko} can be obtained as a corollary of Theorem \ref{thm1}, namely the case in which the probability measure $\mu$ has finite $\log_+$-moment.
\begin{corollary}\label{corollary4:kabluchko}
Let $\mu$ be any probability measure on $\mathbb{C}$ satisfying $\int\limits_{\mathbb{C}}\log_+|z|d\mu(z)<\infty$. Let $X_1,X_2,\dots$ be i.i.d. random variables distributed according to $\mu$. Define the polynomials $P_n(z):=(z-X_1)(z-X_2)\dots(z-X_n)$. Then, $\text{$\mathscr{M}$}(P_n)\xrightarrow{w}\mu$ almost surely and $\text{$\mathscr{M}$}(P_n')\xrightarrow{w}\mu$ in probability.
\end{corollary}
Let $\{u_n\}_{n\geq1}$ be a $\mu$-distributed sequence and $\{v_n\}_{n\geq1}$ a $\nu$-distributed sequence, both log-Ces\'{a}ro bounded. We retain each term of the first sequence with probability $p$, $0<p<1$, and replace it by the corresponding term of the second sequence otherwise, independently over the terms. Let the resulting random sequence be $\{\xi_n\}_{n\geq1}$ and define $P_n(z)=(z-\xi_1)(z-\xi_2)\dots(z-\xi_n)$. Then the limiting empirical measures of the zeros and of the critical points of $P_n$ agree, both being $p\mu+(1-p)\nu$. We state this as the following proposition.
\begin{proposition}\label{corollary5:thm1}
Let $\{u_k\}_{k\geq1}$ be a $\mu$-distributed sequence of complex numbers and $\{v_k\}_{k\geq1}$ be a $\nu$-distributed sequence of complex numbers. Assume that both sequences are log-Ces\'{a}ro bounded and that $u_k\neq v_k$ for infinitely many $k$. Define independent random variables $\xi_i$, $i\geq1$, with $\xi_i = u_i$ with probability $p$ and $\xi_i = v_i$ with probability $1-p$, where $0 < p<1$. Define the polynomials $P_n(z):=(z-\xi_1)(z-\xi_2)\dots(z-\xi_n)$. Then, $\text{$\mathscr{M}$}(P_n)\xrightarrow{w} p\mu+(1-p)\nu$ almost surely and $\text{$\mathscr{M}$}(P_n')\xrightarrow{w} p\mu+(1-p)\nu$ in probability.
\end{proposition}
We have seen examples of sequences of polynomials for which the limiting measures of zeros and of critical points do not agree. Consider the sequence of polynomials whose $n$-th term is $P_n(z)=z^n-1$. We have seen that the limiting measure of the zeros is the uniform probability measure on $S^1$, while that of the critical points is the Dirac measure at $0$. The zeros of $P_n$ are the $n$-th roots of unity, which are symmetric and balanced in many respects; removing any one of them disturbs this symmetry and may be regarded, asymptotically, as a perturbation. Define the sequence $\{Q_n\}_{n\geq1}$ by \[Q_n(z)=\frac{P_{n+1}(z)}{z-1}=z^n+z^{n-1}+\dots+1.\]
It will be shown that the limiting zero measure of the sequence $\{Q_n\}_{n\geq1}$ is the uniform probability measure on $S^1$. The derivative of these polynomials is \[Q_n'(z)=nz^{n-1}+(n-1)z^{n-2}+\dots+1=\frac{nz^{n+1}-(n+1)z^n+1}{(z-1)^2}.\]
We will show that the limiting zero measure of $(z-1)^2Q_n'(z)$ $(=nz^{n+1}-(n+1)z^n+1)$ is the uniform probability measure on $S^1$, which in turn gives the limiting zero measure of $Q_n'$. Fix any $r>1$; there is $N_r$ such that whenever $n>N_r$, for $|z|\geq r$ we have
\[|nz^{n+1}-(n+1)z^n+1|\geq n|z|^{n+1}-(n+1)|z|^n-1\geq |z|^n\big(nr-(n+1)\big)-1>0,\]
since $nr-(n+1)=n(r-1)-1\rightarrow\infty$. Similarly, fix any $r<1$; there is $N_r$ such that whenever $n>N_r$, for $|z|\leq r$ we have
\[|nz^{n+1}-(n+1)z^n+1|\geq 1-n|z|^{n+1}-(n+1)|z|^n\geq 1-(2n+1)r^n>0,\]
since $(2n+1)r^n\rightarrow0$.
Hence the limiting zero measure of the sequence $\{Q_n'\}_{n\geq1}$ is supported on $S^1$. If we show that, asymptotically, the angular distribution of the zeros of $Q_n'$ is uniform on $[0,2\pi)$, then it follows that the limiting zero measure of the sequence $\{Q_n'\}_{n\geq1}$ is the uniform probability measure on $S^1$. To show this we use a bound of Erd\H{o}s and Tur\'an for the discrepancy between a probability measure on $S^1$ and the uniform measure. We state the inequality in the case where the two measures are the empirical measure of the zeros of a polynomial and the uniform probability measure on $S^1$.
\begin{theorem}[Erd\H{o}s-Tur\'an~\cite{erdos-turan}]
Let $\{a_k\}_{0\leq k\leq N}$ be a sequence of complex numbers such that $a_0a_N\neq 0$ and let, \[P(z)=\sum\limits_{k=0}^{N}a_kz^k.\]
Then, \[\bigg|\frac{1}{N}\nu_N(\theta,\phi)-\frac{\phi-\theta}{2\pi}\bigg|^2\leq\frac{C}{N}\log\bigg|\frac{\sum_{k=0}^{N}|a_k|}{\sqrt{|a_0a_N|}}\bigg|,\]
for some constant $C$ and $\nu_N(\theta,\phi):=\#\{z_k:\theta\leq\arg(z_k)<\phi\}$, where $z_1,z_2,\dots,z_N$ are zeros of $P(z)$.
\end{theorem}
Applying the above inequality to the polynomial $(z-1)^2Q_n'(z)$, we get
\[\bigg|\frac{1}{n}\nu_n(\theta,\phi)-\frac{\phi-\theta}{2\pi}\bigg|^2\leq\frac{C}{n}\log\bigg|\frac{2n+2}{\sqrt{n}}\bigg|\stackrel{n\rightarrow \infty}{\longrightarrow}0.\]
Therefore the limiting zero measure of $Q_n'$ is the uniform probability measure on $S^1$, which agrees with the limiting zero measure of $Q_n$. As an application of the forthcoming theorem we will see that if we choose a random subsequence of a $\mu$-distributed sequence, then the limiting distributions of zeros and critical points agree for the polynomials formed from this random subsequence.
The next result (Theorem \ref{thm2}) deals with counting the zeros and poles of a random rational function, defined as $L_n(z)=\sum\limits_{k=1}^{n}\frac{a_k}{z-z_k}$. In the special case where $\sum_{k=1}^{n}a_k=n$ and $a_k>0$ for every $k=1,2,\dots,n$, it is called a generalized Sz.-Nagy derivative; for the classical derivative all the $a_k$ are equal to $1$. It is mentioned in \cite{rahmanbook} that the motivation for studying generalized derivatives is that many of the results for classical derivatives extend to them.
\begin{theorem}\label{thm2}
Let $a_1, a_2, \dots$ be i.i.d. random variables satisfying $\ee{\text{$|a_1|$}}\text{$< \infty$}$. Let $\{z_n\}_{n\geq 1}$ be a sequence such that for Lebesgue a.e. $z \in \mathbb{C}$ there exists a compact set $K_z$ with $d(z,K_z)>0$ containing infinitely many of the $z_k$, and such that there is a point $\omega$ that is not a limit point of the $z_n$. Define $L_n(z):= \frac{a_1}{z-z_1}+\frac{a_2}{z-z_2}+\dots+\frac{a_n}{z-z_n}$. Then $\frac{1}{n}\Delta \log(|L_n(z)|)\rightarrow 0$ in probability, in the sense of distributions.
\end{theorem}
Several classes of sequences satisfy the condition imposed on $\{z_n\}_{n\geq1}$ in the above theorem: for example, any bounded sequence, or any sequence that is not dense in $\mathbb{C}$ and is $\mu$-distributed for an appropriate $\mu$.
\begin{remark}
In Theorem \ref{thm2}, let $L_n(z)=\frac{Q_n(z)}{P_n(z)}$, where $Q_n$ is defined to be the generalized derivative of the polynomial $P_n$. Then Theorem \ref{thm2} asserts that $\frac{1}{n}\Delta\log|L_n(z)|\rightarrow0$, which in turn implies that $\text{$\mathscr{M}$}(Q_n)-\text{$\mathscr{M}$}(P_n)\rightarrow 0$ in the sense of distributions. If we assume in addition that the sequence $\{z_k\}_{k\geq1}$ is $\mu$-distributed, then it follows that the empirical measure of the critical points converges to $\mu$.
\end{remark}
As an application of Theorem \ref{thm1} we chose a $\mu$-distributed deterministic sequence, perturbed it randomly, and showed that the empirical distribution of the critical points is also $\mu$. Here, instead, we choose a random subsequence of a $\mu$-distributed sequence and show that the corresponding result holds. We state this result as the following corollary.
\begin{corollary}\label{corollary:thm2}
Let $\{z_n\}_{n\geq1}$ be a $\mu$-distributed sequence that is not dense in $\mathbb{C}$, for a $\mu$ that is not supported on the whole complex plane. Choose a subsequence $\{z_{n_k}\}_{k \geq 1}$ at random; that is, each $z_n$ belongs to the subsequence with probability $p<1$, independently of the others. Define the polynomials $P_k(z):=(z-z_{n_1})(z-z_{n_2})\dots(z-z_{n_k})$. Then, $\text{$\mathscr{M}$}(P_k)\xrightarrow{w}\mu$ almost surely and $\text{$\mathscr{M}$}(P_k')\xrightarrow{w}\mu$ in probability.
\end{corollary}
\section{Proofs of corollaries and Proposition \ref{corollary5:thm1}.}
In Corollary \ref{Symmetric perturbations} we deal with perturbations of a $\mu$-distributed sequence. We expect the perturbed sequence to have the same limiting probability measure as the original sequence. This is formally stated and proved in the following lemma.
\begin{lemma}\label{lemma:perturb_limit_measure}
Let $\{a_n\}_{n\geq1}$ be a $\mu$-distributed sequence, $\sigma_n\downarrow0$ and $X_1,X_2,\dots$ are i.i.d. random variables. Then, $\{a_n+\sigma_nX_n\}_{n\geq1}$ is a $\mu$-distributed sequence almost surely.
\end{lemma}
\begin{proof}
It is enough to show that for any $f\in C_c^\infty(\mathbb{C})$,
\[\frac{1}{n}\sum\limits_{k=1}^{n}\left(f(a_k)-f(a_k+\sigma_kX_k)\right)\rightarrow 0,\]
almost surely. Fix $\epsilon>0$, choose $M$ such that $\Pr(|X_n|>M)<\epsilon.$ Then,
\begin{align}
\frac{1}{n}\bigg|\sum\limits_{k=1}^{n}\left(f(a_k)-f(a_k+\sigma_kX_k)\right)\bigg| & \leq \frac{1}{n}\sum\limits_{k=1}^{n}|(f(a_k)-f(a_k+\sigma_kX_k))\text{$\mathbbm{1}$}\{|X_k|>M\}|\\&+\frac{1}{n}\sum\limits_{k=1}^{n}|(f(a_k)-f(a_k+\sigma_kX_k))\text{$\mathbbm{1}$}\{|X_k|\leq M\}|,\\
&\leq\frac{ 2||f||_\infty}{n} \sum\limits_{k=1}^{n}\text{$\mathbbm{1}$}\{|X_k|>M\}+\frac{1}{n}\sum\limits_{k=1}^{n}|\sigma_kX_k|||f'||_\infty,\\
&\leq \frac{ 2||f||_\infty}{n} \sum\limits_{k=1}^{n}\text{$\mathbbm{1}$}\{|X_k|>M\} + \frac{M||f'||_\infty}{n}\sum\limits_{k=1}^{n}\sigma_k. \label{eqn:perturb_limit_measure}
\end{align}
Using the law of large numbers and $\sigma_n\downarrow0$ (which gives $\frac{1}{n}\sum_{k=1}^{n}\sigma_k\rightarrow0$) in \eqref{eqn:perturb_limit_measure}, we have
\[
\limsup\limits_{n\rightarrow\infty}\frac{1}{n}\bigg|\sum\limits_{k=1}^{n}\left(f(a_k)-f(a_k+\sigma_kX_k)\right)\bigg| \leq 2||f||_\infty\Pr(|X_1|>M)\leq 2||f||_\infty\epsilon.
\]
Because $\epsilon>0$ is arbitrary, we get $\lim\limits_{n\rightarrow \infty}\frac{1}{n}\sum\limits_{k=1}^{n}\left(f(a_k)-f(a_k+\sigma_kX_k)\right)=0$.
\end{proof}
The main idea in proving the corollaries is to condition the random sequences suitably, so that the resulting sequences satisfy the hypotheses of Theorem \ref{thm1}, and then apply the theorem to obtain the result. More formally, say we condition the sequence on the event $E$, and assume the conditioned sequence can be realized as a random sequence satisfying the hypotheses of Theorem \ref{thm1}. Let $\nu_n^E$ be the empirical measure of the critical points of the degree-$n$ polynomial formed from the conditioned sequence. Fix $\epsilon>0$; then
\begin{align}
\Pr\left(d(\nu_n,\mu)>\epsilon\right) &=\ee{\text{$\text{$\mathbbm{1}$}\{d(\nu_n,\mu)>\epsilon\}$}},\\
&=\ee{\cee{\text{$\text{$\mathbbm{1}$}\{d(\nu_n,\mu)>\epsilon\}$}}{\text{$E$}}},\\
&=\ee{\text{$\text{$\mathbbm{1}$}\{d(\nu_n^E,\mu)>\epsilon\}$}}. \label{eqn:convergence}
\end{align}
But from the assumption made above, for every $\epsilon>0$ we have,
\[\ee{\text{$\text{$\mathbbm{1}$}\{d(\nu_n^E,\mu)>\epsilon\}$}}=\text{$\Pr\left(d(\nu_n^E,\mu)>\epsilon\right)\xrightarrow{n \rightarrow \infty}0$}.\]
Applying the dominated convergence theorem to \eqref{eqn:convergence} it follows that, for every $\epsilon>0$,
\[\Pr\left(d(\nu_n,\mu)>\epsilon\right)\xrightarrow{n \rightarrow \infty} 0.\]
Therefore, to show convergence in probability of the empirical measures, it is enough to show the almost sure convergence of the conditioned empirical measures.
To invoke the hypotheses of Theorem \ref{thm1} we also need to show that the perturbed sequence is log-Ces\'{a}ro bounded. This is proved in the following lemma.
We will use the following inequalities, whenever required.
\begin{align}
\log_+|ab| & \leq \log_+|a| + \log_+|b| \label{logplusprod}\\
\log_-|ab| & \leq \log_-|a| +\log_-|b| \label{logminusprod}\\
\log_+|a_1+a_2+ \dots + a_n| & \leq \log_+|a_1| + \log_+|a_2|+ \dots +\log_+|a_n| +\log(n)\label{logplussum}
\end{align}
\begin{remark}
The inequality \eqref{logplussum} is obtained by using the inequalities
$|a_1+\dots+a_n|\leq |a_1|+\dots+|a_n| \leq n\max\limits_{i\leq n}{|a_i|}$
and
$\log_+(\max\limits_{i\leq n}|a_i|)\leq \log_+|a_1|+\dots+\log_+|a_n|.$
\end{remark}
\begin{lemma}\label{lemma:log-cesaro-bounded}
Let $\{a_n\}_{n\geq1}$ be a sequence that is log-Ces\'{a}ro bounded and $\{b_n\}_{n\geq1}$ be a sequence such that $b_n=a_n+\sigma_nX_n$, $\sigma_n\downarrow0$ and $X_1,X_2,\dots$ are i.i.d. random variables with $\ee{\text{$\log_+|X_1|$}}<\infty$. Then the sequence $\{b_n\}_{n\geq1}$ is also log-Ces\'{a}ro bounded.
\end{lemma}
\begin{proof}
\begin{align}
\frac{1}{n}\sum\limits_{k=1}^{n}\log_+|b_k| & \leq \frac{1}{n}\sum\limits_{k=1}^{n}(\log_+(|a_k|+|a_k-b_k|)),\\
&\leq \frac{1}{n}\sum\limits_{k=1}^{n}\log_+|a_k|+\frac{1}{n}\sum\limits_{k=1}^{n}\log_+|\sigma_kX_k|+\log(2)\\
&\leq \frac{1}{n}\sum\limits_{k=1}^{n}\log_+|a_k|+\frac{1}{n}\sum\limits_{k=1}^{n}\log_+|\sigma_k|+\frac{1}{n}\sum\limits_{k=1}^{n}\log_+|X_k|+\log(2) \label{eqn:lemma:log-cesaro:1}
\end{align}
The sequence $\{\frac{1}{n}\sum_{k=1}^{n}\log_+|\sigma_k|\}_{n\geq1}$ goes to $0$, because $\lim\limits_{n\rightarrow\infty}\sigma_n=0$. Using the law of large numbers and the fact that $\ee{\text{$\log_+|X_1|$}}<\infty$, the sequence $\{\frac{1}{n}\sum_{k=1}^{n}\log_+|X_k|\}_{n\geq1}$ is bounded almost surely. Combining \eqref{eqn:lemma:log-cesaro:1} and the above facts, we get that the sequence $\{\frac{1}{n}\sum_{k=1}^{n}\log_+|b_k|\}_{n\geq1}$ is bounded almost surely. This completes the proof.
\end{proof}
\begin{proof}[Proof of Corollary \ref{Symmetric perturbations}]
Fix $r_n$ and $\theta_n$ for $n \geq 1$ and condition on the event $E=\{w:X_n(w)=\pm r_ne^{i\theta_n} \text{ for }n\geq1\}$. Because the $X_n$ are symmetric random variables, the $n$th term of the resulting sequence is $u_n + \sigma_nr_ne^{i\theta_n}$ or $u_n - \sigma_nr_ne^{i\theta_n}$ with equal probability, independently of the other terms. Choose $a_n=u_n+\sigma_nr_ne^{i\theta_n}$ and $b_n=u_n-\sigma_nr_ne^{i\theta_n}$. We need to show that almost surely the sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ satisfy the hypotheses of Theorem \ref{thm1}. It follows from Lemmas \ref{lemma:perturb_limit_measure} and \ref{lemma:log-cesaro-bounded} that the sequences $\{a_n\}_{n \geq 1}$ and $\{b_n\}_{n \geq 1}$ are $\mu$-distributed and log-Ces\'{a}ro bounded almost surely.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corollary4:kabluchko}]
If $\mu$ is a degenerate probability measure then the result is trivial to verify. If $\mu$ is not degenerate, choose two independent sequences of random variables $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$, where the $a_n$ and $b_n$ are i.i.d. with distribution $\mu$. Choose $X_n= a_n$ or $b_n$ with equal probability, independently of the other terms; then $\{X_n\}_{n\geq1}$ is a sequence of i.i.d. random variables distributed according to the probability measure $\mu$. Using the hypothesis $\int\limits_{\mathbb{C}}\log_+|z|d\mu(z)<\infty$ and applying the law of large numbers to the random variables $\{\log_+|X_n|\}_{n \geq 1}$, we get that the sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ are log-Ces\'{a}ro bounded almost surely; they are also almost surely $\mu$-distributed, and since $\mu$ is non-degenerate, $a_k\neq b_k$ for infinitely many $k$ almost surely. Therefore the constructed sequences satisfy the hypotheses of Theorem \ref{thm1}.
\end{proof}
Before proving Proposition \ref{corollary5:thm1} we prove the following lemma, which gives the limiting empirical measure of a random sequence whose terms are drawn from either of two deterministic sequences.
\begin{lemma}\label{random_seq_limit}
Let $\{a_k\}_{k\geq1}$ and $\{b_k\}_{k\geq1}$ be two sequences which are $\mu$- and $\nu$-distributed respectively. Define a random sequence $\{\xi_k\}_{k\geq1}$, where $\xi_k=a_k$ with probability $p$ and $\xi_k=b_k$ with probability $1-p$, independently over $k$. Then $\mu_n=\frac{1}{n}\sum\limits_{k=1}^{n}\delta_{\xi_k}$ converges weakly to $\lambda=p\mu+(1-p)\nu$ almost surely.
\end{lemma}
\begin{proof}
It is enough to show that for any open set $U\subset\mathbb{C}$, $ \frac{1}{n}\sum\limits_{k=1}^{n}\text{$\mathbbm{1}$}\left\{\xi_k \in U\right\}$ converges to $\lambda(U)$ almost surely. By a version of the law of large numbers, if $X_1,X_2,\dots$ are independent random variables (not necessarily identically distributed), then
\[
\frac{1}{n}\sum\limits_{k=1}^{n}\left(X_k-\ee{\text{$X_k$}}\right)\xrightarrow{a.s}0
\]
provided that $\sum\limits_{k=1}^{\infty}\frac{1}{k^2}\var{X\textbf{$_k$}}<\infty$. Applying this to the random variables $\textbf{$\mathbbm{1}$}\left\{\xi_k \in U\right\}$, we get that $\frac{1}{n}\sum\limits_{k=1}^{n}\text{$\mathbbm{1}$}\left\{\xi_k \in U\right\}$ converges to $\lambda(U)$ almost surely.
\end{proof}
\begin{proof}[Proof of Proposition \ref{corollary5:thm1}]
For $k\geq1$, define independent random variables $a_k$ taking the value $u_k$ with probability $p$ and $v_k$ with probability $1-p$. Independently of this sequence, define another sequence of independent random variables $b_k = u_k$ with probability $p$ and $v_k$ with probability $1-p$. The terms of the sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ satisfy,
\begin{align*}
\log_+|a_n|\leq \log_+|u_n|+\log_+|v_n|,\\
\log_+|b_n|\leq \log_+|u_n|+\log_+|v_n|.
\end{align*}
Because the sequences $\{u_n\}_{n\geq1}$ and $\{v_n\}_{n\geq1}$ are log-Ces\'{a}ro bounded, it follows from the above inequalities that the sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ are also log-Ces\'{a}ro bounded.
Therefore, from the above arguments and Lemma \ref{random_seq_limit}, the sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq1}$ satisfy the hypotheses of Theorem \ref{thm1} almost surely. Hence the proposition is proved.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corollary:thm2}]
Let $a_1,a_2,\dots$ be i.i.d. $\mbox{Bernoulli}(p)$ random variables. Let $\{k_n\}_{n\geq1}$ be the random sequence such that $a_{k_n}=1$ and $a_\ell=0$ whenever $\ell \notin \{k_1,k_2,\dots\}$. Define $L_n^{(1)}(z):=L_{k_n}(z)=\frac{P_n'(z)}{P_n(z)}$. It is enough to show that $\frac{1}{n}\Delta\log|L_{k_n}(z)|\rightarrow 0$ in probability. The sequences $\{a_n\}_{n\geq1}$ and $\{z_n\}_{n\geq1}$ satisfy the hypotheses of Theorem \ref{thm2}, and therefore $\frac{1}{n}\Delta \log|L_n(z)|\rightarrow 0$ in probability. Because $\{L_{k_n}(z)\}_{n\geq1}$ is a subsequence of $\{L_n(z)\}_{n\geq1}$, it follows that $\frac{1}{k_n}\Delta \log|L_{k_n}(z)|\rightarrow 0$ in probability. Because $k_n$ is a negative binomial random variable with parameters $(n,p)$, we have $\frac{k_n}{n}\rightarrow \frac{1}{p}$ almost surely, and therefore $\frac{1}{n}\Delta \log|L_{k_n}(z)|=\frac{k_n}{n}\cdot\frac{1}{k_n}\Delta \log|L_{k_n}(z)|\rightarrow 0$ in probability.
\end{proof}
In the next chapter we provide proofs of Theorems \ref{thm1} and \ref{thm2}.
\chapter{Determinantal point processes from product of random matrices.}
\label{ch:ginibreproduct2}
\section{Introduction}
A Ginibre matrix is a random matrix whose entries are i.i.d. complex Gaussian random variables. In this chapter we derive the exact eigenvalue density for certain products of random matrices. In this section we give an overview of earlier results in which the exact eigenvalue density was obtained; we then state our result and show that the earlier results are special cases of it. In the next section we discuss the generalized Schur decomposition, which will be used as a change of variables to derive the eigenvalue density. In the section after that we compute the Jacobian of this transformation, and we complete the proof in the last section.
We now recall a well known fact (Theorem 4.5.5 in \cite{manjubook}, Lemma 4 in \cite{soshnikovsurvey}) about determinantal point processes on the complex plane. Let $(z_1,z_2,\dots,z_n)$ be a random vector in $\text{$\mathbb{C}$}^n$ having density proportional to $\prod\limits_{i<j}|z_i-z_j|^2$ w.r.t. a measure $\mu^{\otimes n}$. Notice that by doing column operations on the matrix $$\left[ \begin{smallmatrix}
\phi_0(z_1) & \phi_1(z_1) & \dots & \phi_{n-1}(z_1) \\
\phi_0(z_2) & \phi_1(z_2) & \dots & \phi_{n-1}(z_2) \\
\vdots & \vdots & \ddots & \vdots
\\
\phi_0(z_n) & \phi_1(z_n) & \dots & \phi_{n-1}(z_n) \end{smallmatrix}\right], $$ where the $\phi_i$ are the orthonormal polynomials w.r.t. the measure $\mu$, we get
\[
\det\left[ \begin{smallmatrix}
\phi_0(z_1) & \phi_1(z_1) & \dots & \phi_{n-1}(z_1) \\
\phi_0(z_2) & \phi_1(z_2) & \dots & \phi_{n-1}(z_2) \\
\vdots & \vdots & \ddots & \vdots
\\
\phi_0(z_n) & \phi_1(z_n) & \dots & \phi_{n-1}(z_n)\end{smallmatrix} \right] \left[ \begin{smallmatrix}
\phi_0(z_1) & \phi_1(z_1) & \dots & \phi_{n-1}(z_1) \\
\phi_0(z_2) & \phi_1(z_2) & \dots & \phi_{n-1}(z_2) \\
\vdots & \vdots & \ddots & \vdots
\\
\phi_0(z_n) & \phi_1(z_n) & \dots & \phi_{n-1}(z_n)\end{smallmatrix} \right]^* = c_n\prod\limits_{i<j}|z_i-z_j|^2,
\]
for some constant $c_n$. On simplification, the left hand side of the above equation can be written as $\det[((\textbf{$\mathbb{K}$}_n(z_i,z_j)))_{1\leq i,j \leq n}]$, where $\textbf{$\mathbb{K}$}_n(z,w)=\sum\limits_{i=0}^{n-1}\phi_i(z)\overline{\phi}_i(w)$. Therefore the entries of the vector $(z_1,z_2,\dots,z_n)$ form a determinantal point process with kernel \[\textbf{$\mathbb{K}$}_n(z,w)=\sum\limits_{i=0}^{n-1}\phi_i(z)\overline{\phi}_i(w),\]
where $\phi_0, \phi_1, \dots, \phi_{n-1}$ are the orthonormal polynomials w.r.t. the measure $\mu$ on the complex plane.
Ginibre \cite{ginibre} introduced three ensembles of matrices, with i.i.d. real, complex and quaternion Gaussian entries respectively, without imposing a Hermitian condition. These matrices are called Ginibre matrices in the literature. Here we restrict our attention to matrices with i.i.d. complex Gaussian entries. In \cite{ginibre}, Ginibre derived the eigenvalue density for an $n\times n$ matrix with i.i.d. standard complex Gaussian entries.
\begin{theorem}[Ginibre ~\cite{ginibre}]
Let $A$ be an $n\times n$ matrix with i.i.d. standard complex Gaussian entries. Then the eigenvalues of $A$ form a determinantal point process on the complex plane with kernel
\begin{equation*}
\text{$\mathbb{K}$}_n(z,w) = \sum_{k=0}^{n-1}\frac{(z\bar w)^k}{k!}
\end{equation*}
w.r.t. the background measure $\frac{1}{\pi}e^{-|z|^2}dm(z)$. Equivalently, the vector of eigenvalues has density
\[
\frac{1}{\pi^n\prod_{k=1}^{n}k!}e^{-\sum_{k=1}^{n}|z_k|^2}\prod_{i<j}|z_i - z_j|^2
\]
w.r.t. Lebesgue measure on $\text{$\mathbb{C}$}^n$.
\end{theorem}
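A quick numerical sanity check of the Ginibre ensemble (an illustration, assuming NumPy; the matrix size and tolerances are arbitrary): after rescaling by $\sqrt{n}$ the eigenvalues approximately fill the unit disk uniformly, a standard consequence known as the circular law (not proved here), so the mean of $|\lambda|^2$ is close to $1/2$ and the spectral radius is close to $1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
# Standard complex Gaussian entries: E a_{ij} = 0, E |a_{ij}|^2 = 1.
A = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
lam = np.linalg.eigvals(A) / np.sqrt(n)   # rescaled eigenvalues

mean_sq = np.mean(np.abs(lam) ** 2)       # ~ 1/2 for the uniform measure on the disk
rho = np.max(np.abs(lam))                 # spectral radius ~ 1
print(mean_sq, rho)
```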
These were the first non-Hermitian matrix ensembles for which the exact eigenvalue density was computed.
Later Krishnapur \cite{manjunath} showed that the eigenvalues of $A^{-1}B$ form a determinantal point process on the complex plane when $A$ and $B$ are independent random matrices with i.i.d. standard complex Gaussian entries. In random matrix literature this matrix ensemble $A^{-1}B$ is known as \textit{spherical ensemble}.
\begin{theorem}[M.Krishnapur ~\cite{manjunath}]
Let $A$ and $B$ be i.i.d. $n\times n$ matrices with i.i.d. standard complex Gaussian entries. Then the eigenvalues of $A^{-1}B$ form a determinantal point process on the complex plane with kernel
\begin{equation*}
\text{$\mathbb{K}$}_n(z,w) = (1+z\overline{w})^{n-1}
\end{equation*}
w.r.t. the background measure $\frac{n}{\pi}\frac{dm(z)}{(1+|z|^2)^{n+1}}$. Equivalently, the vector of eigenvalues has density
$$
\frac{1}{n!}\left(\dfrac{n}{\pi}\right)^n\prod_{k=0}^{n-1}{{n-1}\choose k}\prod_{k=1}^{n}\dfrac{1}{(1+|z_k|^2)^{n+1}}\prod_{i<j}|z_i - z_j|^2
$$
w.r.t. Lebesgue measure on $\mathbb{C}\text{$^n$}$.
\end{theorem}
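The spherical ensemble can also be probed by direct sampling. One testable consequence (the sketch below is an illustration, assuming NumPy; the size and tolerance are arbitrary): since $A$ and $B$ are i.i.d., the eigenvalue set of $A^{-1}B$ has the same law as the set of its reciprocals $\{1/\lambda\}$ (the spectrum of $B^{-1}A$), so the statistic $|\lambda|^2/(1+|\lambda|^2)$ has mean close to $1/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

def ginibre(n):
    # n x n matrix with i.i.d. standard complex Gaussian entries.
    return (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

A, B = ginibre(n), ginibre(n)
lam = np.linalg.eigvals(np.linalg.solve(A, B))   # eigenvalues of A^{-1} B

# Height on the Riemann sphere under stereographic projection; its mean
# is forced to be ~ 1/2 by the lambda -> 1/lambda symmetry in law.
s = np.abs(lam) ** 2 / (1 + np.abs(lam) ** 2)
print(np.mean(s))
```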
Akemann and Burda \cite{akemann} derived the eigenvalue density for the product of $k$ independent $n\times n$ matrices with i.i.d. complex Gaussian entries.
In this case the joint probability distribution of the eigenvalues of the product matrix is again given by a determinantal point process, as in the Ginibre case,
but with a weight given by a Meijer $G$-function depending on $k$. Their derivation hinges on the generalized Schur decomposition for matrices and the method of orthogonal polynomials. We state their result as the following theorem.
\begin{theorem}[Akemann-Burda ~\cite{akemann}]
Let $A_1,A_2,\dots,A_k$ be i.i.d. $n\times n$ matrices with i.i.d. standard complex Gaussian entries. Then the eigenvalues of $A_1A_2\dots A_k$ have density
(with respect to Lebesgue measure on $\text{$\mathbb{C}$}^{n}$) proportional to
\[
\prod_{\ell=1}^{n}\omega(z_{\ell})\prod_{i<j}^{n}|z_i-z_j|^2
\]
with a weight function $ \omega(z)$, where
\begin{equation}\label{weight0}
\omega(z)=\int_{x_1\cdots x_{k}
=z}e^{-\sum_{j=1}^{k}|x_{j}|^2}
d\sigma.
\end{equation}
Here $\sigma$ is the Lebesgue measure restricted to the hyper surface $\{(x_1,x_2,\dots,x_k):x_1\cdots x_{k}
=z\}$.
\end{theorem}
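The product case can be checked against a known limiting statement from the literature on Ginibre products (not proved here): after rescaling by $n^{k/2}$, the empirical spectral distribution converges to the $k$-th power of the circular law, whose radial density is proportional to $|z|^{2/k-2}$ on the unit disk, so $|\lambda|^{2/k}$ is asymptotically uniform on $[0,1]$ with mean $1/2$. The sketch below is an illustration only (NumPy assumed; $n$, $k$ and the tolerances are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 3

# Product of k independent Ginibre matrices with standard complex Gaussian entries.
A = np.eye(n, dtype=complex)
for _ in range(k):
    A = A @ ((rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2))
lam = np.linalg.eigvals(A) / n ** (k / 2)   # rescaled eigenvalues

# |lambda|^(2/k) should be roughly uniform on [0, 1], hence mean ~ 1/2.
m = np.mean(np.abs(lam) ** (2 / k))
print(m, np.max(np.abs(lam)))
```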
In all the results above, after calculating the eigenvalue density, it turns out that the eigenvalues form a determinantal point process.
Following the work of Krishnapur \cite{manjunath} on spherical ensembles and the work of Akemann and Burda \cite{akemann} on the product of $k$ independent $n\times n$ Ginibre matrices, it is natural to ask what can be said about the eigenvalues of a product of $k$ independent Ginibre matrices when a few of them are inverted. We investigated the case $A=A_1^{\epsilon_1}A_2^{\epsilon_2}\cdots A_k^{\epsilon_k}$, where each $\epsilon_i$ is $+1$ or $-1$ and $A_1,A_2,\ldots, A_k$ are independent matrices with i.i.d. standard complex Gaussian entries, and obtained the density of the eigenvalues of $A$. We state the result as the following theorem.
\begin{theorem}[K Adhikari, K Saha, NK Reddy, TR Reddy ~\cite{atr}]\label{chap5:thm1}
Let $A_1,A_2,\ldots,A_k$ be independent $n\times n$ random matrices with i.i.d. standard complex Gaussian entries.
Then the eigenvalues of $A=A_1^{\epsilon_1}A_2^{\epsilon_2}\ldots A_k^{\epsilon_k}$, where each $\epsilon_i$
is $+1$ or $-1$, have density
(with respect to Lebesgue measure on $\mathbb{C}^n$) proportional to
\[
\prod_{\ell=1}^{n}\omega(z_{\ell})\prod_{i<j}^{n}|z_i-z_j|^2
\]
with a weight function $ \omega(z)$, where
\begin{equation}\label{weight1}
\omega(z)=\int_{x_1^{\epsilon_1}\cdots x_{k}^{\epsilon_k}
=z}e^{-\sum_{j=1}^{k}|x_{j}|^2}
\prod_{j=1}^{k}|x_{j}|^{(1-\epsilon_j)(n-1)}d\sigma.
\end{equation}
\noindent Here $\sigma$ is the Lebesgue measure restricted to the hyper surface given by $\{(x_1,x_2,\dots,x_k):x_1^{\epsilon_1}\cdots x_{k}^{\epsilon_k}
=z\}$.
\end{theorem}
\begin{remark}
From the symmetry of the expressions in Theorem \ref{chap5:thm1}, notice that the density of the eigenvalues of the matrix $A=A_1^{\epsilon_1}A_2^{\epsilon_2}\ldots A_k^{\epsilon_k}$ depends only on $\sum_{i=1}^{k}\epsilon_i$ and not on the individual $\epsilon_i$.
\end{remark}
\begin{remark}
If $k=2$, $\epsilon_1=-1$ and $\epsilon_2=1$, then from \eqref{weight1} we get that
\[\omega(z)=\int_{\frac{x_{2}} {x_{1}}=z}e^{- (|x_{1}|^{2}+|x_{2}|^2)}
|x_{1 }|^{2(n-1)}d\sigma=C_n \frac{1}{(1+|z|^2)^{(n+1)}},
\]
where $\sigma$ is the Lebesgue measure restricted to the hyper surface $\{(x_1,x_2):\frac{x_{2}} {x_{1}}=z\}$ and $C_n$ is a constant. Hence the density of the eigenvalues of $A_1^{-1}A_2$
is proportional to
\[
\prod_{i=1}^{n}\frac{1}{(1+|z_i|^2)^{n+1}}\prod_{i<j}|z_{i}-z_j|^2.
\]
From the above expression it is clear that the eigenvalues of $A_1^{-1}A_2$ form a determinantal point process in the complex plane. This result was proved by Krishnapur in \cite{manjunath} through a different approach.
\end{remark}
\begin{remark}
If $\epsilon_i=1$ for $i=1,2,\ldots,k$, then by Theorem \ref{chap5:thm1} it follows that the eigenvalues of
$A_1A_2\ldots A_k$ form a determinantal point process. This result is due to Akemann and Burda \cite{akemann}.
\end{remark}
To prove Theorem \ref{chap5:thm1}, we perform an appropriate transformation and then integrate out the auxiliary variables to obtain the eigenvalue density. We will use the generalized Schur decomposition for the matrices $X_1,X_2,\dots,X_k$, where $X_i=A_i^{\epsilon_i}$ for $i=1,2,\dots,k$. In the forthcoming section we present the generalized Schur decomposition as it appeared in \cite{akemann}; in the subsequent section we compute the Jacobian of this transformation.
\section{Generalized Schur decomposition}
We will first recall the Schur decomposition and then state generalized Schur decomposition.
\paragraph{Schur decomposition:}Let $A$ be an $n\times n$ matrix. Then there exist a unitary matrix $U$, a diagonal matrix $D$ and a strictly upper triangular matrix $T$ such that $A=U(D+T)U^*$. The diagonal elements of $D$ are the eigenvalues of $A$. Further, if the eigenvalues of $A$ are all distinct and we fix the order of their appearance in $D$, then the decomposition is unique up to conjugation by a diagonal matrix whose diagonal entries are in $S^1$.
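The (complex) Schur decomposition is available numerically; the following sketch (an illustration, assuming SciPy is available; the matrix size and seed are arbitrary) verifies $A=U(D+T)U^*$, where the triangular factor returned by the routine is $D+T$, and checks that its diagonal carries the eigenvalues of $A$.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Complex Schur form: A = U (D + T) U^*, with D + T upper triangular.
DT, U = schur(A, output='complex')
err = np.linalg.norm(U @ DT @ U.conj().T - A)

# Each eigenvalue of A appears on the diagonal of the triangular factor.
eig_err = max(np.min(np.abs(np.diag(DT) - w)) for w in np.linalg.eigvals(A))
print(err, eig_err)
```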
The key ingredient in proving Theorem \ref{chap5:thm1} is the following generalization of the Schur decomposition.
\paragraph{Generalized Schur decomposition:}Let $A_1,A_2,\dots,A_k$ be $n \times n$ matrices. Then there exist unitary matrices $U_1,U_2,\dots,U_k$, diagonal matrices $Z_1,Z_2,\dots,Z_k$ and strictly upper triangular matrices $T_1,T_2,\dots,T_k$ such that the matrices can be decomposed in the following form.
\begin{align}
A_1 & = U_1(Z_1+T_1)U_2^*,\\
A_2 & = U_2(Z_2+T_2)U_3^*,\\
&\vdots\\
A_k &=U_k(Z_k+T_k)U_1^*.
\end{align}
To prove this decomposition we first consider the Schur decomposition of the matrix $A_1A_2\dots A_k$, say $A_1A_2\dots A_k=U_1(Z+T)U_1^*$. Now performing Gram-Schmidt orthogonalization on the columns of the matrix $U_1^*A_1$, we get $U_1^*A_1=(Z_1+T_1)U_2^*$ for some unitary matrix $U_2$, with $Z_1, T_1$ being diagonal and strictly upper triangular matrices respectively. We repeat this process, i.e., in the $i^{th}$ step we perform Gram-Schmidt orthogonalization on the matrix $U_i^*A_i$ to get $U_i^*A_i=(Z_i+T_i)U_{i+1}^*$. After performing $k-1$ such steps it follows that $U_k^*A_k=(Z_k+T_k)U_1^*.$
This decomposition is not unique in general, but it becomes unique once we impose certain conditions on the matrices. Assume that the diagonal entries of $Z_1Z_2\dots Z_k$ are distinct and appear in a fixed order (for instance, lexicographic order). Observe that replacing each $U_i$ with $\Theta_iU_i$, where $\Theta_i$ is a diagonal unitary matrix, yields a decomposition of the $A_i$ of the same form.
Hence, if all the diagonal entries of the $U_i$ are non-zero, we may assume them to be positive. Under these two conditions the decomposition is unique. Another criterion for uniqueness is to require that the first non-zero entry in each row of $U_i$ is non-negative. In the next section, while computing the Jacobian, we assume that all the diagonal entries are positive, since the unitary matrices with a zero diagonal entry form a null set.
Notice that the eigenvalues of $A_1A_2\dots A_k$ are the same as those of $Z_1Z_2\dots Z_k$. We will exploit this and use the generalized Schur decomposition as the change of variables in recovering the eigenvalue density.
We will compute the Jacobian for this transformation in the next section. For a more general discussion, the reader may refer to the appendix in \cite{atr}.
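The decomposition described above can be realized numerically. The following sketch (assuming NumPy and SciPy are available; \texttt{generalized\_schur} is our own illustrative helper, not a library routine) fixes $U_1$ by a Schur decomposition of the full product and then peels off one triangular factor at a time; the RQ factorizations play the role of the Gram-Schmidt steps in the construction.

```python
import numpy as np
from scipy.linalg import schur, rq

def generalized_schur(mats):
    """Return (Us, Ss) with A_i = U_i S_i U_{i+1}^*, U_{k+1} = U_1, each S_i
    upper triangular; the diagonal of S_1 ... S_k carries the eigenvalues
    of A_1 A_2 ... A_k (for generic, invertible inputs)."""
    prod = mats[0]
    for A in mats[1:]:
        prod = prod @ A
    _, U1 = schur(prod, output='complex')      # prod = U1 (Z + T) U1^*
    Us, Ss = [U1], []
    for A in mats[:-1]:
        M = Us[-1].conj().T @ A                # M should equal S_i U_{i+1}^*
        S, Q = rq(M)                           # RQ: M = S Q, S upper triangular
        Ss.append(S)
        Us.append(Q.conj().T)
    # the last factor closes the cycle: U_k^* A_k U_1 is upper triangular
    Ss.append(Us[-1].conj().T @ mats[-1] @ U1)
    return Us, Ss

rng = np.random.default_rng(1)
n, k = 5, 3
mats = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        for _ in range(k)]
Us, Ss = generalized_schur(mats)
```

For generic inputs the reconstruction $A_i=U_iS_iU_{i+1}^*$ holds to machine precision, and the entrywise products of the diagonals of the $S_i$ recover the eigenvalues of $A_1A_2A_3$.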
\section{Jacobian computation}
To obtain the eigenvalue density, we need to perform an appropriate change of variables, namely the generalized Schur decomposition of the previous section. We compute the Jacobian for this transformation in this section. The computation follows the lines of the one given in \cite{manjubook} (Section 6.3, Chapter 6) for deriving the eigenvalue density of Ginibre matrices.
Before doing the Jacobian computation, we state a basic property of the wedge product, which will be used repeatedly.
If $dy_j=\sum_{k=1}^na_{j,k}dx_k$, for $1\le j\le n$, then using the alternating property $dx\wedge dy=-dy\wedge dx$ it is easy to see that
\begin{equation}\label{eqn:wedge:relation}
dy_1\wedge dy_2\wedge \ldots\wedge dy_n=\det[((a_{j,k}))_{j,k\le n}]\,dx_1\wedge dx_2\wedge\ldots\wedge dx_n.
\end{equation}
As a consequence we can see that, if $\underline{x}=(x_1,\dots,x_n)$ is a unitary transformation of $\underline{y}=(y_1,\dots,y_n)$, then \begin{equation}\label{eqn:wedge:relation2}
dy_1\wedge d\overline{y}_1\wedge dy_2\wedge d\overline{y}_2\wedge \ldots\wedge dy_n\wedge d\overline{y}_n=dx_1\wedge d\overline{x}_1\wedge dx_2\wedge d\overline{x}_2\wedge \ldots\wedge dx_n\wedge d\overline{x}_n.
\end{equation}
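Identity \eqref{eqn:wedge:relation2} amounts to saying that a unitary change of variables $\underline{y}=U\underline{x}$, viewed as a real-linear map on $\mathbb{R}^{2n}$, has Jacobian determinant of modulus $|\det U|^2=1$. A minimal numerical check (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# a random unitary matrix via QR of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
# real Jacobian of x -> Ux, written in (Re x, Im x) coordinates on R^{2n}
J = np.block([[U.real, -U.imag], [U.imag, U.real]])
jac = abs(np.linalg.det(J))  # equals |det U|^2 = 1
```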
From the generalized Schur decomposition we have that any matrices $X_1,X_2,\dots,X_k$ in $g\ell (n,\mathbb{C})$ can be written as
\begin{align}\label{eqn:gschur}
X_1 & = U_1(Z_1+T_1)U_2^*,\\
X_2 & = U_2(Z_2+T_2)U_3^*,\\
&\vdots\\
X_k &=U_k(Z_k+T_k)U_{k+1}^*.
\end{align}
where $U_1,U_2,\dots,U_k,U_{k+1}$ are unitary matrices satisfying $U_{k+1}=U_1$, $Z_1,Z_2,\dots,Z_k$ are diagonal matrices and $T_1,T_2,\dots,T_k$ are strictly upper triangular matrices. Because $U_iU_i^*=\text{$\mathbb{I}$}_n$, we have $(dU_i)U_i^*=-U_i(dU_i^*)$ for $i=1,2,\dots,k$. Using this fact and the generalized Schur decomposition \eqref{eqn:gschur}, for any $\ell=1,2,\dots,k$ we get,
\begin{align}
dX_\ell &= (dU_\ell)(Z_\ell+T_\ell)U_{\ell+1}^*+U_\ell (dZ_\ell+dT_{\ell})U_{\ell+1}^*+U_\ell (Z_\ell+T_{\ell})dU_{\ell+1}^*\\
&= (dU_\ell)(Z_\ell+T_\ell)U_{\ell+1}^*+U_\ell (dZ_{\ell}+dT_\ell)U_{\ell+1}^*-U_\ell (Z_\ell+T_\ell)U_{\ell+1}^*(dU_{\ell+1})U_{\ell+1}^*\\
&=
U_\ell\left[(U_\ell^*dU_\ell)(Z_\ell+T_\ell)-(Z_\ell+T_\ell)(U_{\ell+1}^*dU_{\ell+1})+dZ_\ell+dT_\ell\right]U_{\ell+1}^*.
\end{align}
For convenience let us denote $\Lambda_\ell:=U_\ell^*(dX_\ell)U_{\ell+1}$, $\Omega_\ell:=U_\ell^*dU_\ell$ and $S_\ell:=Z_\ell+T_\ell$. Note that $\Lambda_\ell=(\lambda_\ell (i,j))$ and $\Omega_\ell=(\omega_\ell (i,j))$ are $n\times n$ matrices of one-forms, and $dS_\ell$ ($=dZ_\ell+dT_\ell$) is an upper triangular matrix of one-forms. Let $Z_\ell=\mbox{diag}(Z_\ell(1),\linebreak Z_\ell(2),\dots,Z_\ell(n))$ and $T_\ell=(t_\ell(i,j))$.
With this notation, the above computation reads
\begin{equation}\label{eqn:lambda_l}
\Lambda_\ell=\Omega_\ell S_\ell-S_\ell\Omega_{\ell+1}+dS_\ell.
\end{equation}
For any unitary matrix $U$ the transformation $X \rightarrow UX$ is a unitary transformation. Therefore from \eqref{eqn:wedge:relation2}, we have
\[\bigwedge_{i,j}\left(dX_\ell(i,j)\wedge d\overline{X}_\ell(i,j)\right)=
\bigwedge_{i,j}\left(d\lambda_\ell(i,j)\wedge d\overline{\lambda}_\ell(i,j)\right),\]
for $\ell=1,2,\dots,k.$
Throughout this computation we will ignore multiplicative constants; hence each equality below holds up to a constant. Because we are dealing with probability densities, the constants can be retrieved by equating the integral to $1$.
Expanding the equation \eqref{eqn:lambda_l} we get,
\begin{eqnarray}\label{eqn:wedge_nullify}
\lambda_\ell (i,j) &=&\sum\limits_{m=1}^{j}S_\ell (m,j)\omega_\ell (i,m)-\sum\limits_{m=i}^{n}S_\ell (i,m)\omega_{\ell+1}(m,j)+dS_\ell (i,j)
\\&=&\left\{
\begin{array}{lcr}
S_\ell (j,j)\omega_\ell (i,j)-S_\ell (i,i)\omega_{\ell+1}(i,j) \\ +\left[\sum\limits_{m=1}^{j-1}S_\ell (m,j)\omega_\ell (i,m)-\sum\limits_{m=i+1}^{n}S_\ell (i,m)\omega_{\ell+1}(m,j)\right] \mbox{ if } i>j;\\
dS_\ell (i,j)+S_\ell (i,j)\left(\omega_\ell (i,i)-\omega_{\ell+1}(j,j)\right)\\ +\left[\sum\limits_{\substack{m=1\\m\neq i}}^{j}S_\ell (m,j)\omega_\ell (i,m)-\sum\limits_{\substack{m=i+1\\m\neq j}}^{n}S_\ell (i,m)\omega_{\ell+1}(m,j)\right]
\mbox{ if }i\leq j.\\
\end{array}
\right.
\end{eqnarray}
To execute the wedge product, we now arrange $\{\lambda_\ell (i,j),\overline{\lambda}_\ell (i,j)\}$ in a particular order. The indices associated with $\lambda_\ell (i,j)$ are $(i,j,\ell)$, and we order the terms by decreasing lexicographic order on $(i,n-j,k-\ell)$. Each term is immediately followed by its conjugate. For convenience, we present the ordering in the following table (each row is read from left to right, and top rows precede bottom rows).
\[
\begin{smallmatrix}
\lambda_1(n,1), \overline{\lambda}_1(n,1), \dots, \lambda_k(n,1), \overline{\lambda}_k(n,1),&\dots&, \lambda_1(n,n), \overline{\lambda}_1(n,n), \dots, \lambda_k(n,n), \overline{\lambda}_k(n,n)\\
\lambda_1(n-1,1), \overline{\lambda}_1(n-1,1), \dots, \lambda_k(n-1,1), \overline{\lambda}_k(n-1,1),&\dots&, \lambda_1(n-1,n), \overline{\lambda}_1(n-1,n), \dots, \lambda_k(n-1,n),\overline{\lambda}_k(n-1,n) \\
\vdots &\ddots&\vdots\\
\lambda_1(1,1), \overline{\lambda}_1(1,1), \dots, \lambda_k(1,1), \overline{\lambda}_k(1,1),&\dots&, \lambda_1(1,n), \overline{\lambda}_1(1,n), \dots, \lambda_k(1,n),\overline{\lambda}_k(1,n)
\end{smallmatrix}
\]
Using the fact that $\Omega_\ell$ is skew-Hermitian (i.e., $\omega_\ell(i,j)=-\overline{\omega}_\ell(j,i)$), notice that, while executing the wedge product, the terms in the square brackets are one-forms that have already appeared earlier in the given ordering. Hence their contribution to the entire product is nullified. In the next couple of paragraphs we explain this cancellation in detail; the reader who is already convinced may skip them.
If $i>j$, then each of the terms in the square brackets contains either $\omega_\ell(i,m_1)$ or $\omega_{\ell+1}(m_2,j)$, where $m_1<j$ and $m_2>i$. These terms have already appeared in $\lambda_\ell(i,m_1)$ and $\lambda_{\ell+1}(m_2,j)$ respectively, outside the square brackets, which precede them in the order in which we execute the product.
If $i\leq j$, then each of the terms in the square brackets contains either $\omega_\ell(i,m_1)$ or $\omega_{\ell+1}(m_2,j)$, where $m_1\leq j$, $m_1 \neq i$, $m_2>i$ and $m_2\neq j$. For the case $j\geq m_1>i$, by the skew-Hermitian property of $\Omega_\ell$ we have $\omega_\ell(i,m_1)=-\overline{\omega}_\ell(m_1,i)$, which has already appeared in $\overline{\lambda}_\ell(m_1,i)$, outside the square brackets. For the case $m_1<i$, $\omega_\ell(i,m_1)$ has already appeared in $\lambda_\ell(i,m_1)$ outside the square brackets. Similarly, for the case $m_2<j$, we have $\omega_{\ell+1}(m_2,j)=-\overline{\omega}_{\ell+1}(j,m_2)$, which has appeared outside the square brackets in $\overline{\lambda}_{\ell+1}(j,m_2)$. Lastly, in the case $m_2>j$, $\omega_{\ell+1}(m_2,j)$ has already appeared in $\lambda_{\ell+1}(m_2,j).$
Therefore, if we assume
\begin{eqnarray}
\mu_{\ell}(i,j) &=&\left\{
\begin{array}{lcr}
S_\ell (j,j)\omega_\ell (i,j)-S_\ell (i,i)\omega_{\ell+1} (i,j) & \mbox{ if } i>j;\\
dS_\ell (i,j)+S_\ell (i,j)\left(\omega_\ell (i,i)-\omega_{\ell+1} (j,j)\right) & \mbox{ if }i\leq j;
\end{array}
\right.
\end{eqnarray}
then,
\[\bigwedge_{\ell}^{}\bigwedge_{i,j}^{}\left(\lambda_\ell (i,j)\wedge \overline{\lambda}_\ell (i,j)\right)=\bigwedge_{\ell}^{}\bigwedge_{i,j}^{}\left(\mu_\ell (i,j)\wedge \overline{\mu}_\ell (i,j)\right).\]
Recall that $S_\ell=Z_\ell+T_\ell$. For $i>j$, the term $\bigwedge_\ell(\mu_\ell(i,j)\wedge\overline{\mu}_\ell(i,j))$ yields the factor $\big|\prod\limits_{\ell=1}^{k}Z_\ell(i)-\prod\limits_{\ell=1}^{k}Z_\ell(j)\big|^2$ multiplying $\bigwedge_\ell\big(\omega_\ell(i,j)\wedge\overline{\omega}_\ell(i,j)\big)$.
Hence we get,
\begin{align}\label{eqn:manifold_nullification_1}
\bigwedge_{\ell}\bigwedge_{i,j}|\lambda_\ell(i,j)|^2 =\left(\prod_{i>j}\bigg|\prod\limits_{\ell=1}^{k}Z_\ell(i)-\prod\limits_{\ell=1}^{k}Z_\ell(j)\bigg|^2\right)&\bigwedge_{\ell}\left(\bigwedge_{i>j}|\omega_\ell(i,j)|^2\bigwedge_{i}|dZ_\ell(i)|^2\right)\\\times\bigwedge_{\ell}
\bigwedge_{i,j}\big|dT_\ell(i,j)&+T_\ell(i,j)\left(\omega_\ell(i,i)-\omega_{\ell+1}(j,j)\right)\big|^2.
\end{align}
Note that we have simplified the notation by denoting $|dz|^2:=dz\wedge d\overline{z}$. Consider the set of unitary matrices $\mathcal{M}_\ell=\{U_\ell:U_\ell^*U_\ell=\text{$\mathbb{I}$}_n,U_\ell(i,i)>0\}$. Then $\mathcal{M}_\ell$ is a submanifold of dimension $n^2-n$ of the manifold $\mathcal{U}(n)$; its dimension can be obtained by imposing the constraints $\{U_\ell(i,i)>0;i=1,2,\dots,n\}$ on $\mathcal{U}(n)$. Now the product $\omega_\ell(m,m)\wedge\left(\bigwedge_{i>j}|\omega_\ell(i,j)|^2\right)=0$, because the $\omega_\ell(i,j)$ are one-forms on the manifold $\mathcal{M}_\ell$, whose dimension is $n^2-n$, while the product contains $n^2-n+1$ terms. Hence \eqref{eqn:manifold_nullification_1} reduces to
\begin{align}
\bigwedge_{\ell}\bigwedge_{i,j}&|\lambda_\ell(i,j)|^2\\&=\left(\prod_{i>j}\bigg|\prod\limits_{\ell=1}^{k}Z_\ell(i)-\prod\limits_{\ell=1}^{k}Z_\ell(j)\bigg|^2\right)\bigwedge_{\ell}\bigwedge_{i>j}|\omega_\ell(i,j)|^2\bigwedge_{i}\bigwedge_{\ell}|dZ_\ell(i)|^2\bigwedge_{\ell}\bigwedge_{i<j}|dT_\ell(i,j)|^2\label{eqn:manifold_nullification_2}.
\end{align}
Notice that $\bigwedge_{i>j}|\omega_\ell(i,j)|^2$ is an $(n^2-n)$-form on the space of unitary matrices whose diagonal entries are non-negative (in other words, $\text{$\mathcal{U}$}(n)/\text{$\mathcal{U}$}(1)$), and it is invariant under any unitary transformation. Hence the measure induced by this form is the Haar measure on $\text{$\mathcal{U}$}(n)/\text{$\mathcal{U}$}(1)$. We will denote it by $|dH(U_\ell)|$. Therefore we have
\begin{align}
\bigwedge_{\ell}\bigwedge_{i,j}|\lambda_\ell(i,j)|^2=&\left(\prod_{i>j}\bigg|\prod\limits_{\ell=1}^{k}Z_\ell(i)-\prod\limits_{\ell=1}^{k}Z_\ell(j)\bigg|^2\right)\\ &\times\bigwedge_{\ell}|dH(U_\ell)|\bigwedge_{i}\bigwedge_{\ell}|dZ_\ell(i)|^2\bigwedge_{\ell}\bigwedge_{i<j}|dT_\ell(i,j)|^2\label{eqn:manifold_nullification_3}.
\end{align}
Now that we have the basic ingredients ready, we proceed to the proof of the theorem in the next section.
\section{Proof of Theorem \ref{chap5:thm1}}
The density of $(A_1,A_2,\ldots,A_k)$ is proportional to
\[
\prod_{\ell=1}^{k}e^{-\Tr( A_{\ell}A_{\ell}^*)}\bigwedge_{\ell=1}^{k}\bigwedge_{i,j=1}^n|dA_{\ell}(i,j)|^2
\]
where $|dA_{\ell}(i,j)|^2=dA_{\ell}(i,j)\wedge d\bar{A}_{\ell}({i,j}).$ Throughout the proof, we will ignore the proportionality constants wherever present. Since we are dealing with probability densities, the proportionality constants can be recovered by equating the integral of the density to $1$.
Let $X_{\ell}=A_{\ell}^{\epsilon_\ell}$ for $\ell=1,2,\ldots,k$. The Jacobian for the transformation $A_\ell \rightarrow A_\ell^{\epsilon_\ell}$ is $|\det(A_\ell)|^{2(\epsilon_\ell-1)n}$. Hence the joint density of $(X_1,X_2,\ldots,X_k)$ is
proportional to
\begin{align}
\prod_{\ell=1}^{k}e^{-\Tr( X_{\ell}^{\epsilon_{\ell}}X_{\ell}^{\epsilon_{\ell}*})}\prod_{\ell=1}^k
|\det(X_{\ell})|^{2(\epsilon_{\ell}-1)n}
\bigwedge_{\ell=1}^{k}\bigwedge_{i,j=1}^n|dX_{\ell}(i,j)|^2 \label{eqn:dencityofx}
\end{align}
Using generalized Schur decomposition we have
\begin{align}
X_{\ell}=U_{\ell}S_{\ell}U_{\ell+1}^*,\;\;\mbox{for $\ell=1,2,\ldots,k,$} \label{eqn:schurdecomposition}
\end{align}
where $U_{k+1}=U_1$, the $U_{\ell}$ are unitary matrices and the $S_{\ell}$ are upper triangular matrices. We write $S_{\ell}$, for $1\leq \ell \leq k$, as
\begin{equation}
S_{\ell}=Z_{\ell}+T_{\ell}, \label{eqn:diag+uppertriangular}
\end{equation}
where $Z_{\ell}=\mbox{diag}(Z_{\ell }(1),Z_{\ell }(2),\ldots,Z_{\ell}(n))$ and $T_{\ell}$ are strictly upper triangular matrices.
Now from \eqref{eqn:manifold_nullification_3},
we have
\begin{eqnarray}
\bigwedge_{\ell=1}^{k}\bigwedge_{i,j=1}^n|dX_{\ell}(i,j)|^2&=&\prod_{i<j}|z_i-z_j|^2\bigwedge_{\ell=1}^k\big(|dH(U_\ell)|\bigwedge_{i\le j}|dS_{\ell}(i,j)|^2\big)\\ \nonumber
&=&\prod_{i<j}|z_i-z_j|^2\bigwedge_{\ell=1}^k\bigg(|dH(U_\ell)|\bigwedge_{i<j}|dT_{\ell}(i,j)|^2\bigwedge_{i=1}^n |dZ_\ell(i)|^2\bigg),\label{eqn:{wedge}}
\end{eqnarray}
where
$|dH(U_{\ell})|$ are independent Haar measures on $\mathcal{U}\text{$(n)$}/\mathcal{U}\text{$(1)$}$ and $z_i=\prod _{\ell=1}^k Z_{\ell}(i)$ for $1\leq i \leq n$. Notice that $z_1,z_2,\ldots,z_n$
are the eigenvalues of $X_1X_2\cdots X_k$. Now using \eqref{eqn:schurdecomposition}
and \eqref{eqn:{wedge}}, \eqref{eqn:dencityofx} can be written as
\begin{align}
&\prod_{\ell=1}^{k}\left[e^{-\Tr( S_{\ell}^{\epsilon_{\ell}}S_{\ell}^{\epsilon_{\ell}*})}
|\det(S_{\ell})|^{2(\epsilon_{\ell}-1)n}\right]\prod_{i<j}|z_i-z_j|^2\\ &\times \bigwedge_{\ell=1}^k\bigg(|dH(U_\ell)|
\bigwedge_{i<j}|dT_{\ell}(i,j)|^2\bigwedge_{i=1}^n |dZ_{\ell}(i)|^2\bigg).\label{eqn:sipmle}
\end{align}
Now our aim is to integrate out all auxiliary variables from \eqref{eqn:sipmle} to get the density of eigenvalues of $X_1X_2\cdots X_k$.
Observe that if $S=Z+T$, where $Z=\mbox{diag}(x_1,x_2,\ldots,x_n)$ and $T$ is a strictly upper triangular matrix, then
\[ S^{-1}S^{-1*}=(I+Z^{-1}T)^{-1}Z^{-1}Z^{-1*}(I+Z^{-1}T)^{-1*}.\]
Observe that $P:=Z^{-1}T$ is also a strictly upper triangular matrix, and
\[
|DP|=\prod_{i=1}^{n-1}\frac{1}{|x_{i}|^{2(n-i)}}|DT|,
\]
where $|DP|=\bigwedge_{i,j}|dP(i,j)|^2$ and $|DT|=\bigwedge_{i,j}|dT(i,j)|^2$.
Now writing $Q:=(I+P)^{-1}$, we have $|DQ|=|DP|$ and therefore
\[
|DT|=\prod_{i=1}^{n-1}|x_{i}|^{2(n-i)}|DQ|.
\]
Hence we have
\begin{align}
e^{-\Tr(S^{-1}S^{-1*})}|DT|&= e^{-\Tr(QZ^{-1}Z^{-1*}Q^*)}\prod_{i=1}^{n-1}|x_{i}|^{2(n-i)}|DQ|\\
&= e^{-\sum_{i=1}^{n}\frac{1}{|x_{i}|^2}(\sum_{j=1}^{i-1}|Q(j,i)|^2+1)}\prod_{i=1}^{n-1}|x_{i}|^{2(n-i)}|DQ|.\label{eqn:simplification}
\end{align}
Now using \eqref{eqn:simplification} for each $S_{\ell}$, $1\leq \ell \leq k$, we get that the expression \eqref{eqn:sipmle} is proportional to
\begin{eqnarray}
\prod_{i<j}|z_i-z_j|^2\prod_{\ell=1}^{k}\bigg[e^{-\sum_{i=1}^n(|Z_{\ell}(i)|^{2\epsilon_{\ell}}
+\frac{1-\epsilon_{\ell}}{2}|Z_{\ell }(i)|^{2\epsilon_\ell}\sum_{j=1}^{i-1}|Q_{\ell}(j,i)|^2
+\frac{1+\epsilon_{\ell}}{2}\sum_{j=i+1}^n|T_\ell(i,j)|^2)}
\\\times\prod_{i=1}^{n-1}|Z_{\ell}(i)|^{(n-i)(1-\epsilon_\ell)}
\prod_{i=1}^{n}|Z_{\ell }(i)|^{2(\epsilon_{\ell}-1)n}\bigg]\bigwedge_{\ell=1}^k|dH({U_{\ell}})|
|DQ_{\ell}|^{\frac{1-\epsilon_{\ell}}{2}}|DT_{\ell}|^{\frac{1+\epsilon_{\ell}}{2}}|DZ_{\ell}|.\nonumber
\end{eqnarray}
Now integrating this expression with respect to all variables except those of the $Z_{\ell}$, $\ell=1,2,\ldots,k$, we get (omitting the constant)
\begin{eqnarray}\label{eqn:result1}
\prod_{i<j}|z_i-z_j|^2\prod_{\ell=1}^{k}\left(e^{-\sum_{i=1}^n(|Z_{\ell }(i)|^{2\epsilon_{\ell}})}
\prod_{i=1}^{n}|Z_{ \ell }(i)|^{(\epsilon_{\ell}-1)(n+1)}\right)\bigwedge_{\ell=1}^k |DZ_{\ell}|.
\end{eqnarray}
Now integrating the above expression on the respective hypersurfaces given by \linebreak $\left\{(Z_{1}(i),Z_{2}(i),\dots, Z_{k}(i)):Z_{1}(i)Z_{2}(i)\dots Z_{k}(i)=z_i\right\}$, for $i=1,2,\dots,n$
we get the density of the eigenvalues $(z_1,z_2,\ldots,z_n)$ of
$A_1^{\epsilon_1}A_2^{\epsilon_2}\cdots A_k^{\epsilon_k}$.
This completes the proof of the theorem. \qed
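As a sanity check, for $k=1$ and $\epsilon_1=1$ the density \eqref{eqn:result1} reduces to the Ginibre density $\propto e^{-\sum_i|z_i|^2}\prod_{i<j}|z_i-z_j|^2$, for which Kostlan's observation gives that the set $\{|z_i|^2\}$ is distributed as $n$ independent Gamma$(i,1)$ variables, so that $\mathbb{E}\sum_i|z_i|^2=n(n+1)/2$. A small Monte Carlo experiment (assuming NumPy is available) reproduces this:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 20, 300
total = 0.0
for _ in range(trials):
    # Ginibre matrix: i.i.d. standard complex Gaussian entries, density ∝ exp(-Tr(AA*))
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    total += np.sum(np.abs(np.linalg.eigvals(A)) ** 2)
avg = total / trials  # Kostlan: expectation is n(n+1)/2 = 210 for n = 20
```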
\chapter{Introduction}
\label{ch:introduction1}
The fundamental theorem of algebra states that every polynomial of degree $n$ in a single variable has, counted with multiplicity, exactly $n$ zeros (roots) in the complex plane. Abel and Galois proved that the roots of a general polynomial of degree $5$ or higher cannot be expressed in terms of radicals. As in many other problems where exact formulae are not known (or the formulae are not amenable to analysis), one may study the statistics of a `typical' polynomial. This is done by equipping the space of polynomials with a probability measure (that is, by introducing randomness into the polynomial) and choosing a random polynomial according to this measure. There are two natural ways of inducing randomness into polynomials: one by considering the characteristic polynomial of a random matrix, and the other by choosing the coefficients of the polynomial to be random variables. Both cases are of interest. The former leads to the study of random matrices, while the latter leads to the theory of random polynomials.
In the theory of random polynomials, the central problem is to understand the behaviour of the zeros, which is studied by choosing the coefficients to be random variables (often independent). The study of random polynomials often reveals interesting phenomena. For example, the zeros of Kac polynomials, which are defined as random polynomials with i.i.d. complex Gaussian random variables as coefficients, accumulate near the unit circle. Pemantle and Rivin, in \cite{pemantle}, considered a sequence of polynomials whose zeros are i.i.d. complex random variables. They conjectured that the empirical measures of the zeros and of the critical points of these polynomials agree in the limit.
The first part of this thesis is inspired by Kabluchko's proof \cite{kabluchko} of the conjecture of Pemantle and Rivin. Here we construct a random sequence of zeros by choosing its terms from two predefined deterministic sequences. It is shown that for the sequence of random polynomials whose zeros are the terms of the random sequence constructed above, the limiting empirical measures of zeros and critical points agree. This phenomenon fails in general for deterministic sequences of polynomials: one can construct deterministic sequences of polynomials for which the limiting measures of zeros and critical points do not agree. However, as a consequence of the previous result, it can be shown that if we slightly perturb the zeros of these polynomials, then the limiting measures of zeros and critical points agree.
Hannay in \cite{hannay} and Hanin in \cite{hanin1} observed that the critical points of random polynomials are closely paired with the zeros of the polynomials. In this setting, we study the matching distance between zeros and critical points. When the zeros of the polynomials are i.i.d. real-valued random variables, it is shown that the matching distance between the zeros and critical points of these polynomials remains bounded if the random variables have finite first moment. It is also shown that, in the limit, the bound is exactly the first moment of these random variables.
The theory of random matrices deals with the study of eigenvalues of random matrices. Finding the exact eigenvalue density is an important problem in random matrix theory. There are only a handful of matrix ensembles for which the exact eigenvalue density is known. For these ensembles, it is often the case that the eigenvalues constitute an important class of point processes called determinantal point processes, for which a theory and framework are already available for analysis. Of the known ensembles, very few are non-Hermitian. Ginibre, in \cite{ginibre}, derived the eigenvalue density for a matrix whose entries are i.i.d. complex Gaussian random variables; these matrices have since been called Ginibre matrices. Krishnapur, in \cite{manjunath}, derived the eigenvalue density for $A^{-1}B$, where $A$ and $B$ are independent Ginibre matrices. Akemann and Burda, in \cite{akemann}, derived the eigenvalue density for random matrices obtained as products of independent Ginibre matrices. A generalization of these results is to consider products of independent matrices in which each factor is either a Ginibre matrix or the inverse of one. In this thesis, the eigenvalue density for these matrices is derived, and it is shown that the eigenvalues form a determinantal point process. This result generalizes all the previously mentioned results.
Studying the behaviour of real eigenvalues of real random matrices, and of real zeros of real random polynomials, has posed different challenges (due to the lack of conventional symmetries) and simultaneously offered various insights. A different problem, on products of i.i.d. real matrices of fixed size with i.i.d. entries, was considered by Lakshminarayan in \cite{arul}. He considered the case where the entries are i.i.d. real Gaussian random variables. He conjectured that the probability that the product of these matrices has all real eigenvalues converges to $1$ as the number of factors in the product increases to infinity. He established this conjecture for matrices of size $2 \times 2$. Forrester, in \cite{forrester}, proved this conjecture for any $k\geq1$. It is natural to believe that this phenomenon is universal and hence may hold for any matrix with i.i.d. entries. We show this in a case where the entries of these matrices are distributed according to a probability measure $\mu$ that has an atom.
\section{Outline}
We now outline the contents of this thesis briefly. This thesis broadly deals with two themes. In the first part we study the zeros and critical points of random polynomials, which is covered in Chapters 2, 3 and 4.
\begin{itemize}
\item
In Chapter 2, a brief history of results relating the critical points and zeros of random polynomials is given. We deal with sequences of deterministic polynomials to provide explicit examples in which the limiting measures of zeros and critical points do not agree. Thereafter, a little randomness is introduced into these polynomials, which ensures that the limiting measures agree. We also discuss some consequences of these results.
\item
In Chapter 3, we prove that the limiting measures of zeros and critical points of a sequence of random polynomials agree when a little randomness is introduced. The results stated in Chapter 2 are proved here.
\item
In Chapter 4, we consider the matching problem between zeros and critical points of random polynomials. We show that when the zeros are real and i.i.d. from a given distribution with finite first moment, the $\ell^1$ matching distance is finite. We also consider the spacings between the zeros and critical points of random polynomials. In the case where the zeros are i.i.d. exp($\lambda$) random variables, we show that the extremal critical point is much closer to the extremal zero than to any other zero of the random polynomial.
\end{itemize}
In the second part we study the eigenvalues of certain products of random matrices. This is covered in Chapters 5 and 6.
\begin{itemize}
\item
In Chapter 5, we derive the exact eigenvalue density of $X=X_1^{\epsilon_1}X_2^{\epsilon_2}\dots X_n^{\epsilon_n}$, where $\epsilon_i=\pm1$ and the $X_i$ are i.i.d. complex Ginibre matrices. In other words, we derive the eigenvalue density for products of complex Ginibre matrices of fixed size in which some of the factors are inverted. It is also observed that the eigenvalues form a determinantal point process.
\item
In Chapter 6, we present a stronger version of the conjecture of Lakshminarayan \cite{arul}. We prove the conjecture in a special case, when the entries of the matrices are distributed according to a measure $\mu$ that has an atom.
\end{itemize}
\chapter{Matching between zeros and critical points of random polynomials}
\label{ch:matching6}
\section{Introduction}
In the previous chapters we have seen the behaviour of the point clouds of zeros and critical points in the bulk. In this chapter we study the pairing of zeros and critical points. Dennis and Hannay, in \cite{hannay}, gave an electrostatic argument to show that the zeros and critical points are closely paired for a generic (random) polynomial of high degree. In \cite{hanin1}, Hanin argued that the critical points and zeros of a random polynomial are mutually paired, by computing the covariance between the corresponding empirical measures.
We restrict our attention to critical points of polynomials having all real zeros, choosing the zeros to be i.i.d. random variables. In the next section we show that the sum of distances between zeros and critical points, when paired appropriately, remains bounded under the assumption that the random variables have finite first moment. In the subsequent section we study the spacings between the extremal zeros and critical points when the zeros are i.i.d. exponential or uniform random variables. We prove that the extremal critical point is much closer to the extremal zero than to any other zero.
\section{Matching distance between zeros and critical points of random polynomials.}
Matching between two sets is defined as follows. Let $U,V$ be two sets of finite and equal cardinality in the complex plane. A matching is a bijection from $U$ to $V$. The concept of matching is used to quantify the distance between two sets of the same cardinality, and the matching distance is a natural way to measure the closeness of the sets of zeros and critical points of a polynomial. There are several notions of matching distance. In this chapter we will deal with the $\ell^1$ matching distance, which is defined as
\[
d_1(U,V)=\inf\limits_{\pi\in \text{$\mathfrak{S}$}_n}\sum\limits_{i=1}^{n}|u_i-v_{\pi(i)}|,
\]
where $U=\{u_1,u_2,\dots,u_n\}$, $V=\{v_1,v_2,\dots,v_n\}$ and $\pi=(\pi(1),\pi(2),\dots,\pi(n))$ is an element of the set $\text{$\mathfrak{S}$}_n$ of permutations of $\{1,2,\dots,n\}$.
To define the matching distance between the set of zeros and the set of critical points (which has one element fewer), we adjoin the element $0$ to the set of critical points. We then match the set of critical points to the set of zeros so that the sum of the distances between matched points is minimized, and call this minimum the matching distance.
The order statistics of a set of real numbers $\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ are denoted by $\alpha_{(1)}\leq \alpha_{(2)} \leq \dots \leq \alpha_{(n)}$. In the following lemma we compute the $\ell_1$ matching distance between two sets on the real line.
\begin{lemma}\label{matching_distance}
Let $X=\{x_1,x_2,\dots,x_n\}$ and $Y=\{y_1,y_2,\dots,y_n\}$ be two sets of real numbers. Then the $\ell_1$ matching distance between $X$ and $Y$ is given by
\[
d_1(X,Y)=\sum\limits_{i=1}^{n}|x_{(i)}-y_{(i)}|.
\]
\end{lemma}
\begin{proof}
Without loss of generality assume that $x_i=x_{(i)}$ and $y_i=y_{(i)}$ for all $i=1,2,\dots,n$. We will show that the matching distance is attained by the identity matching. Suppose not, and let $\pi$ be a permutation at which the matching distance is attained. Then there are indices $i<j$ such that $\pi(i)>\pi(j)$. If we modify the permutation to $\pi'$ by choosing $\pi'(i)=\pi(j)$, $\pi'(j)=\pi(i)$ and $\pi'(\ell)=\pi(\ell)$ for $\ell\neq i,j$, then $\sum\limits_{m=1}^{n}|x_{m}-y_{\pi'(m)}|\leq\sum\limits_{m=1}^{n}|x_{m}-y_{\pi(m)}|$. Repeating this argument, it follows that $\sum\limits_{m=1}^{n}|x_{m}-y_{m}|\leq\sum\limits_{m=1}^{n}|x_{m}-y_{\pi(m)}|$. Hence the matching distance is attained at the identity permutation.
\end{proof}
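The sorted-order formula in Lemma \ref{matching_distance} can be checked against a brute-force search over all permutations; a small sketch (assuming NumPy is available; the two helper functions are ours, for illustration only):

```python
import numpy as np
from itertools import permutations

def d1_bruteforce(xs, ys):
    # minimize the l^1 cost over all bijections from xs to ys
    return min(sum(abs(x - y) for x, y in zip(xs, p))
               for p in permutations(ys))

def d1_sorted(xs, ys):
    # Lemma: the optimum is attained by matching order statistics
    return float(np.sum(np.abs(np.sort(xs) - np.sort(ys))))

rng = np.random.default_rng(4)
xs, ys = rng.standard_normal(6), rng.standard_normal(6)
```

The brute force is $O(n!)$ and only feasible for tiny $n$; the point of the lemma is precisely that sorting reduces the computation to $O(n\log n)$.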
As an application of the above Lemma \ref{matching_distance}, we compute the $\ell_1$ matching distance between the sets of zeros and critical points of a given polynomial.
\begin{proposition}\label{matching_zeros_critical_positive}
Let $x_1,x_2,\dots,x_n$ be non-negative real numbers. Then the $\ell_1$ matching distance between the sets of zeros and critical points of the polynomial $P_n(z)=(z-x_1)(z-x_2)\dots(z-x_n)$ is given by
\[
d_1(Z(P_n),Z(P_n')\cup\{0\})=\frac{1}{n}\sum\limits_{i=1}^{n}x_i.
\]
\end{proposition}
\begin{proof}
Let $\eta_1,\eta_2,\dots,\eta_{n-1}$ be the critical points of $P_n$. Because the critical points interlace the zeros of $P_n$, we have $0\leq x_{(1)}\leq \eta_{(1)} \leq x_{(2)} \leq \dots \leq \eta_{(n-1)} \leq x_{(n)}$. Applying Lemma \ref{matching_distance}, we get that
\begin{equation}
d_1(Z(P_n),Z(P_n')\cup\{0\}) = \sum\limits_{i=2}^{n}(x_{(i)}-\eta_{(i-1)})+x_{(1)}=\sum\limits_{i=1}^{n}x_i-\sum\limits_{i=1}^{n-1}\eta_{i}.\label{l_1_dist_eqn}
\end{equation}
Recall Vieta's formulas: if $\alpha_1,\alpha_2,\dots,\alpha_n$ are the roots of the polynomial $P(z)=a_0+a_1z+\dots+a_nz^n$, then
\[ \sum\limits_{1\leq i_1<i_2<\dots<i_k\leq n}\alpha_{i_1}\alpha_{i_2}\dots \alpha_{i_k}=(-1)^k\frac{a_{n-k}}{a_n}.\]
From Vieta's formula, observe the identity $\sum\limits_{i=1}^{n}x_i = \dfrac{n}{n-1}\sum\limits_{i=1}^{n-1}\eta_i$. Substituting this in \eqref{l_1_dist_eqn}, we get
\[
d_1(Z(P_n),Z(P_n')\cup\{0\}) =\sum\limits_{i=1}^{n}x_i-\frac{n-1}{n}\sum\limits_{i=1}^{n}x_i=\frac{1}{n}\sum\limits_{i=1}^{n}x_i.
\]
\end{proof}
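Proposition \ref{matching_zeros_critical_positive} is easy to verify numerically: for random non-negative zeros, the matching distance computed from the actual critical points equals the mean of the zeros. A sketch assuming NumPy is available (\texttt{np.roots} of the derivative supplies the critical points):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
x = np.sort(rng.uniform(0, 10, size=n))               # non-negative zeros of P_n
eta = np.sort(np.roots(np.polyder(np.poly(x))).real)  # critical points (real, by interlacing)
crit = np.sort(np.append(eta, 0.0))                   # adjoin 0 to the critical points
d1 = float(np.sum(np.abs(x - crit)))                  # sorted matching, by the Lemma
```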
\begin{proposition}\label{matching_zeros_critical}
Let $x_1,x_2,\dots,x_k$ be negative numbers and $x_{k+1},x_{k+2},\dots,x_{n}$ be non-negative numbers.
Then the $\ell_1$ matching distance between the sets of zeros and critical points of a polynomial $P_n(z)=(z-x_1)(z-x_2)\dots(z-x_n)$ is bounded by
\[
d_1(Z(P_n),Z(P_n')\cup\{0\})\leq\frac{1}{k}\sum\limits_{i=1}^{k}|x_i|+\frac{1}{n-k}\sum\limits_{i=k+1}^{n}|x_i|.
\]
\end{proposition}
Before proving Proposition \ref{matching_zeros_critical}, we prove the following lemma, which shows that the critical points move to the right when a new zero is introduced to the left of all the zeros of the polynomial.
\begin{lemma}\label{assistant}
Let $\eta_1,\eta_2,\dots,\eta_{n-1}$ be the critical points of the polynomial $P(z)=(z-\alpha_1)(z-\alpha_2)\dots(z-\alpha_n)$. Let $\eta_0',\eta_1',\eta_2',\dots,\eta_{n-1}'$ be the critical points of $Q(z)=(z-\alpha)P(z)$, where $\alpha<\alpha_i$ for $i=1,2,\dots,n$. Then $\alpha_{(i+1)}-\eta_{(i)}' \leq \alpha_{(i+1)}-\eta_{(i)}$, for $i=1,2,\dots,n-1$.
\end{lemma}
\begin{proof}
It is enough to show that $\eta_{(i)}\leq\eta_{(i)}'$. Define $L_P(z):=\frac{P'(z)}{P(z)}=\sum\limits_{i=1}^{n}\frac{1}{z-\alpha_i}$ and $L_Q(z):=\frac{Q'(z)}{Q(z)}=\frac{1}{z-\alpha}+L_P(z)$. Both $L_P$ and $L_Q$ are decreasing functions on any interval which does not contain any of the zeros of $P$ and $Q$. Fix an $i\in \{1,2,\dots,n-1\}$. Because $\eta_{(i)}$ is a critical point of $P$, we have $L_P(\eta_{(i)})=0$ and $L_Q(\eta_{(i)})=\frac{1}{\eta_{(i)}-\alpha}>0$. But $L_Q$ vanishes exactly once in the interval $(\alpha_{(i)},\alpha_{(i+1)})$, at $\eta_{(i)}'$. Combining the facts that $L_Q$ is decreasing on $(\alpha_{(i)},\alpha_{(i+1)})$ and $L_Q(\eta_{(i)})>0$, we get that $\eta_{(i)}\leq\eta_{(i)}'$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{matching_zeros_critical}]
Without loss of generality assume that $x_1\leq x_2\leq\dots\leq x_k \leq 0 \leq x_{k+1}\leq \dots\leq x_n$. Let $\eta_1\leq\eta_2\leq\dots\leq \eta_{n-1}$ be the critical points of $P_n$. Factorize the polynomial $P_n$ as $P_n(z)=Q_n(z)R_n(z)$, where $Q_n(z)=(z-x_1)(z-x_2)\dots(z-x_k)$ and $R_n(z)=(z-x_{k+1})(z-x_{k+2})\dots(z-x_{n})$. If $\eta_1'\leq\eta_2'\leq \dots\leq \eta_{k-1}'$ and $\eta_{k+1}'\leq\eta_{k+2}'\leq\dots\leq\eta_{n-1}'$ are the critical points of $Q_n$ and $R_n$ respectively, then by repeatedly applying Lemma \ref{assistant} to $Q_n$ and $R_n$ we get $\eta_1-x_1 \leq \eta_1'-x_1, \dots, \eta_{k-1}-x_{k-1}\leq \eta_{k-1}'-x_{k-1}$ and $x_{k+2}-\eta_{k+1} \leq x_{k+2}-\eta_{k+1}', \dots,x_{n}-\eta_{n-1}\leq x_{n}-\eta_{n-1}'$. The $\ell_1$ matching distances between the zeros and critical points of $Q_n$ and $R_n$ are bounded by $\frac{1}{k}\sum\limits_{i=1}^{k}|x_i|$ and $\frac{1}{n-k}\sum\limits_{i=k+1}^{n}|x_i|$ respectively. Therefore we get,
\begin{align}
d_1(Z(P_n),Z(P_n')\cup\{0\}) &\leq d_1(Z(Q_n),Z(Q_n')\cup\{0\})+d_1(Z(R_n),Z(R_n')\cup\{0\}) \\&= \frac{1}{k}\sum\limits_{i=1}^{k}|x_i|+\frac{1}{n-k}\sum\limits_{i=k+1}^{n}|x_i|.
\end{align}
Hence the proposition is proved.
\end{proof}
Notice that Propositions \ref{matching_zeros_critical_positive} and \ref{matching_zeros_critical} are stated for deterministic polynomials. As an application, we obtain a bound on the matching distance for polynomials whose zeros are i.i.d. random variables.
\begin{theorem}\label{random_matching_distance}
Let $X_1,X_2,\dots$ be i.i.d. random variables satisfying $\ee{\text{$|X_1|$}}<\infty$, and define the polynomial $P_n(z)=(z-X_1)(z-X_2)\dots(z-X_n)$. Then
\[
\limsup\limits_{n\rightarrow \infty}d_1(Z(P_n),Z(P_n')\cup\{0\})\leq \ee{\text{$|X_1|$}}.
\]
Moreover if $X_i$s are non-negative random variables, then
\[
\limsup\limits_{n\rightarrow \infty}d_1(Z(P_n),Z(P_n')\cup\{0\})= \ee{\text{$X_1$}}.
\]
\end{theorem}
The range of the zeros of $P_n$ gives a trivial bound for the $\ell_1$ matching distance between the zeros and critical points of $P_n$. If the random variables are all bounded, then this $\ell_1$ matching distance remains bounded uniformly in $n$. If the random variables $X_i$ are unbounded, the above Theorem \ref{random_matching_distance} shows that the $\ell_1$ matching distance between the zeros and critical points of $P_n$ still remains almost surely bounded, uniformly in $n$.
\begin{proof}[Proof of Theorem \ref{random_matching_distance}]
The second part of the theorem follows immediately by applying the law of large numbers to Proposition \ref{matching_zeros_critical_positive}. For the first part, write $X_i=X_i^+-X_i^-$, where $X_i^+\geq0$ and $X_i^-\geq0$. Let $k_n=\#\{i\leq n:X_i<0\}$. Applying Proposition \ref{matching_zeros_critical} we get that $$d_1(Z(P_n),Z(P_n')\cup\{0\})\leq\frac{1}{k_n}\sum\limits_{i=1}^{n}X_i^-+\frac{1}{n-k_n}\sum\limits_{i=1}^{n}X_i^+.$$ Applying the law of large numbers to the above we get $$\limsup\limits_{n\rightarrow \infty}d_1(Z(P_n),Z(P_n')\cup\{0\}) \leq \ee{\text{$X_1^-$}}+\ee{\text{$X_1^+$}}=\ee{\text{$|X_1|$}}.$$
\end{proof}
\section{Spacings of zeros and critical points of random polynomials.}
Theorem \ref{random_matching_distance} shows that even if the $X_i$ are unbounded random variables, the $\ell_1$ matching distance between the zeros and critical points remains bounded. This indicates that the extremal critical points stay much closer to one of the zeros than to the others. We formalize this in the case of exponential random variables in the following result.
\begin{theorem}\label{exponential}
Let $X_1,X_2,\dots,X_n$ be i.i.d. exponential random variables. Let $\eta_{(1)} \leq \eta_{(2)} \leq \dots \leq \eta_{(n-1)}$ be the critical points of the polynomial $P_n(z):=(z-X_1)(z-X_2)\dots(z-X_n)$. Then the following hold:
\begin{itemize}
\item[1.]
$n\log n(\eta_{(1)}-X_{(1)})\rightarrow 1$ in probability.
\item[2.]
$n\log n(X_{(n)}-\eta_{(n-1)})\rightarrow 1$ in probability.
\end{itemize}
\end{theorem}
While proving Theorem \ref{exponential} we will use R\'{e}nyi's representation~\cite{boucheron} for the order statistics of exponential random variables, which is stated below.
\paragraph{R\'{e}nyi's representation for order statistics:}
Let $Y_{(1)},Y_{(2)},\dots,Y_{(n)}$ be the order statistics of a sample of $n$ i.i.d. exponential random variables; then
$$(Y_{(1)},Y_{(2)},\dots,Y_{(n)}) \,{\buildrel d \over =}\, \left(\frac{E_n}{n},\frac{E_{n-1}}{n-1}+\frac{E_n}{n},\dots,E_1+\frac{E_2}{2}+\dots+\frac{E_n}{n}\right),$$ where $E_1,E_2,\dots,E_n$ are i.i.d exponential random variables.
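R\'{e}nyi's representation is easy to sanity-check in simulation. The sketch below (a numerical illustration, not part of the text; numpy, with arbitrary sample sizes) compares the empirical means of the order statistics with the partial harmonic sums $\frac{1}{n}+\frac{1}{n-1}+\dots+\frac{1}{n-i+1}$ implied by the representation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 5, 200_000
# empirical means of the order statistics Y_(1) <= ... <= Y_(n)
samples = np.sort(rng.exponential(1.0, (trials, n)), axis=1)
empirical = samples.mean(axis=0)
# Renyi: E[Y_(i)] = 1/n + 1/(n-1) + ... + 1/(n-i+1)
theoretical = np.cumsum(1.0 / np.arange(n, 0, -1))
print(empirical)
print(theoretical)
```

The two vectors agree up to Monte Carlo error.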
\begin{proof}[Proof of Theorem \ref{exponential}]
Let $L_n(z):= \dfrac{P_n'(z)}{P_n(z)}=\sum\limits_{i=1}^{n}\frac{1}{z-X_i}=\sum\limits_{i=1}^{n}\frac{1}{z-X_{(i)}}$. If $z$ is a critical point of $P_n(z)$ then it satisfies $L_n(z)=0$.
Hence from the equation $L_n(z)=0$ we have,
\begin{align}
\eta_{(1)}-X_{(1)} =& \left(\sum\limits_{i=2}^{n}\frac{1}{X_{(i)}-\eta_{(1)}}\right)^{-1} \\ \leq& \left(\sum\limits_{i=2}^{n}\frac{1}{X_{(i)}-X_{(1)}}\right)^{-1}
\end{align}
But $X_{(i)}-X_{(1)}=\frac{E_{n-1}}{n-1}+ \dots + \frac{E_{n-i+1}}{n-i+1}$ for all $i=2,3,\dots,n$, where $E_1,E_2,\dots,E_n$ are i.i.d exponential random variables. From R\'{e}nyi's representation it can be noticed that $\frac{E_{n-1}}{n-1}+ \dots + \frac{E_{n-i}}{n-i} \,{\buildrel d \over =}\, Y_{(i)}$ for $i=1,2,\dots,n-1$, where $Y_{(1)},Y_{(2)},\dots,Y_{(n-1)}$ are order statistics of $Y_1,Y_2,\dots,Y_{n-1}$ which are i.i.d exponential random variables. Therefore,
\begin{align}
\eta_{(1)}-X_{(1)} \,{\buildrel d \over =}\,&
\left(\sum\limits_{i=2}^{n}\frac{1}{\frac{E_{n-1}}{n-1}+ \dots + \frac{E_{n-i+1}}{n-i+1}}\right)^{-1}\\
=& \left(\sum\limits_{i=1}^{n-1}\frac{1}{Y_{(i)}}\right)^{-1}
= \left(\sum\limits_{i=1}^{n-1}\frac{1}{Y_i}\right)^{-1}.\label{eqn:thm1:1}
\end{align}
Observe that $\frac{1}{Y_1}$ is regularly varying with index $1$; applying the central limit theorem (Chapter 2, Theorem 7.7 in \cite{durrett}) to \eqref{eqn:thm1:1} we get,
\begin{equation}\label{dist}
\frac{(\eta_{(1)}-X_{(1)})^{-1}-n\log n}{n} \overset{d}{\to} R
\end{equation}
where $R$ has stable-1 distribution. From \eqref{dist} it follows that $n\log n(\eta_{(1)}-X_{(1)})\overset{p}{\to}1.$
The proof of the second statement, $n\log n(X_{(n)}-\eta_{(n-1)})\stackrel{p}{\rightarrow} 1$, is similar to that of the first.
\end{proof}
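Theorem \ref{exponential} can be probed numerically without finding all roots of a high-degree polynomial: $\eta_{(1)}$ is the unique zero of $L_n(z)=\sum_{i}\frac{1}{z-X_i}$ in $(X_{(1)},X_{(2)})$, so bisection suffices. The following sketch is an illustration only (numpy; the seed and sample size are arbitrary choices); since the convergence is in probability, with fluctuations governed by the stable-1 term in \eqref{dist}, a single sample is only indicative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = np.sort(rng.exponential(1.0, n))

def L(z):
    # logarithmic derivative P_n'(z)/P_n(z)
    return np.sum(1.0 / (z - x))

# eta_(1) is the unique zero of L in (X_(1), X_(2)): bisect there.
lo = x[0] + 1e-13 * (x[1] - x[0])   # L(lo) > 0
hi = x[1] - 1e-13 * (x[1] - x[0])   # L(hi) < 0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if L(mid) > 0:
        lo = mid
    else:
        hi = mid
eta1 = 0.5 * (lo + hi)
print(n * np.log(n) * (eta1 - x[0]))   # -> 1 in probability, by the theorem
```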
\begin{remark}In the above Theorem \ref{exponential}, instead of i.i.d. exponential random variables one can take i.i.d. uniform random variables and obtain the same result. One needs the fact that if $U_1,U_2,\dots,U_n$ are i.i.d. uniform random variables, then the order statistics satisfy
\begin{align}
(U_{(1)},U_{(2)}&,\dots,U_{(n)}) \stackrel{d}{=}\\
&\left(\frac{E_1}{E_1+E_2+\dots+E_{n+1}},\frac{E_1+E_2}{E_1+E_2+\dots+E_{n+1}}, \dots,\frac{E_1+E_2+\dots+E_n}{E_1+E_2+\dots+E_{n+1}}\right),
\end{align}
where $E_1,E_2,\dots,E_{n+1}$ are i.i.d exponential random variables.
\end{remark}
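This representation, too, is easy to check numerically. The sketch below (an illustration, not from the text; numpy, arbitrary parameters) compares the means of both sides of the distributional identity with $\frac{i}{n+1}$, the mean of $U_{(i)}$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 4, 200_000
e = rng.exponential(1.0, (trials, n + 1))
# right-hand side of the representation: partial sums over the total sum
rep = np.cumsum(e[:, :n], axis=1) / e.sum(axis=1, keepdims=True)
# left-hand side: actual uniform order statistics
direct = np.sort(rng.uniform(0.0, 1.0, (trials, n)), axis=1)
print(rep.mean(axis=0))
print(direct.mean(axis=0))   # both rows approximate E[U_(i)] = i/(n+1)
```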
We believe the same result holds for any random variables whose densities satisfy certain regularity properties. The proof would likely follow the same idea, using R\'{e}nyi's representation theorem in the more general form given in \cite{boucheron}.
\chapter{Poisson process from spectrum of Ginibre matrices}
\label{ch:PoissonGinibre3}
\section{Introduction}
Let $X_n$ be a complex Ginibre matrix of size $n \times n$, whose entries $X(i,j)$ are i.i.d. $CN(0,\frac{1}{n})$. Let $\lambda_{1,n},\lambda_{2,n},\dots,\lambda_{n,n}$ be the eigenvalues of $X_n$. It is well known that the point process formed by the eigenvalues of $X_n$ is a determinantal point process with kernel $\textbf{$\mathbb{K}$}(z,w)=\sum\limits_{j=0}^{n-1}\frac{(nz\overline{w})^j}{j!}$ with respect to the reference measure $\frac{1}{\pi}ne^{-n|z|^2}dm(z)$. Let $\mu_{1,n},\mu_{2,n},\dots,\mu_{n,n}$ be the eigenvalues of $X_n^n$, which are related to the former by $\mu_{k,n}=\lambda_{k,n}^n$. In this chapter we investigate whether the point process formed by the $\mu_{k,n}$ converges to some point process as $n \rightarrow \infty$. For this it is enough to check whether the $k^{th}$ joint intensity $\rho_{k,n}'(z_1,z_2,\dots,z_k)$ of the $\mu_{\ell,n}$ converges to the corresponding joint intensity of the limiting point process. We will show that the limiting point process is Poisson on $\textbf{$\mathbb{C}$}\backslash \{0\}$ with an intensity $f(\cdot)$. As the $k^{th}$ joint intensity of a Poisson point process with intensity $f$ at the points $z_1,z_2,\dots,z_k$ is $f(z_1)f(z_2)\dots f(z_k)$, it is enough to show that $\rho_{k,n}'(z_1,z_2,\dots,z_k)$ converges to this product.
We give a computational proof of this fact, first demonstrating it for $k=1,2$ and then proving it for general $k$.
\section{Preliminaries and results}
\begin{theorem}\label{poissonginibre}
Let $\mu_{1,n},\mu_{2,n},\dots,\mu_{n,n}$ be the eigenvalues of $X_n^n$, where $X_n$ is an $n\times n$ complex Ginibre matrix. Then, as $n\rightarrow\infty$, the point process $\sum\limits_{k=1}^{n}\delta_{\mu_{k,n}}$ converges to a Poisson point process on $\textbf{$\mathbb{C}$}\backslash\{0\}$ with intensity $\frac{1}{2\pi|z|^2}\,dm(z)$.
\end{theorem}
\section{Proof of Theorem \ref{poissonginibre}}
Let $\omega_n=e^{\frac{2\pi i}{n}}$ be a primitive $n^{th}$ root of unity.
\subsubsection{Case: $k=1$}
In this case there is nothing to verify; we merely compute the intensity of the limiting point process.
\begin{align}
\rho_{1,n}'(z) & = \lim\limits_{\epsilon \rightarrow 0}\dfrac{\Pr\left(\sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\mu_{i,n}\in B(z,\epsilon)\right)=1 \right)}{\pi\epsilon^2}\\
& = \lim\limits_{\epsilon \rightarrow 0}\dfrac{\Pr\left(\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\text{$\mathbbm{1}$}\left(\lambda_{i,n}\in B\left(z^{\frac{1}{n}}\omega_n^j,\dfrac{\epsilon}{n|z|^{1-\frac{1}{n} }}+O(\epsilon^2)\right)\right)=1\right)}{\pi\epsilon^2}\\
& = \lim\limits_{\epsilon \rightarrow 0}\dfrac{\Pr\left(\sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\lambda_{i,n}\in \bigcup\limits_{j=1}^{n}B\left(z^{\frac{1}{n}}\omega_n^j,\dfrac{\epsilon}{n|z|^{1-\frac{1}{n}}}+O(\epsilon^2)\right)\right)=1\right)}{\pi\epsilon^2}\\
& = \lim\limits_{\epsilon \rightarrow 0}\dfrac{\int\limits_{A_n}\rho_{1,n}(w)dm(w)+O(\epsilon^4)}{\pi\epsilon^2}
\end{align}
where $A_n=\bigcup\limits_{j=1}^{n}B\left(z^{\frac{1}{n}}\omega_n^j,\dfrac{\epsilon}{n|z|^{1-\frac{1}{n}}}+O(\epsilon^2)\right)$.
Hence
\begin{align}
\rho_{1,n}'(z) & = \dfrac{\sum\limits_{j=1}^{n}\rho_{1,n}(z^\frac{1}{n}\omega_n^j)}{n^2|z|^{2(1-\frac{1}{n})}}= \dfrac{K_n(z^\frac{1}{n},z^\frac{1}{n})}{n|z|^{2-\frac{2}{n}}}\\
& = \sum\limits_{i=0}^{n-1}\dfrac{n^i|z|^\frac{2i}{n}}{i!}\dfrac{e^{-n|z|^\frac{2}{n}}}{\pi|z|^{2-\frac{2}{n}}}\\
& = \dfrac{1}{\pi|z|^{2-\frac{2}{n}}}\sum\limits_{i=0}^{n-1}\dfrac{n^i|z|^\frac{2i}{n}}{i!}e^{-n|z|^\frac{2}{n}}\\
& = \frac{1}{\pi |z|^{2-\frac{2}{n}}}\Pr\left(\sum_{i=1}^{n}P_i^{(|z|^\frac{2}{n})}<n\right)\\
& \text{(where $P_1^{(t)},P_2^{(t)},\dots,P_n^{(t)}$ are i.i.d Poisson(t) random variables)}
\end{align}
Therefore the 1-point intensity of the limiting point process is $$\rho_1'(z)=\lim\limits_{n\rightarrow\infty}\rho_{1,n}'(z)=\lim\limits_{n\rightarrow\infty}\frac{1}{\pi|z|^2}|z|^\frac{2}{n}\Pr\left(\sum\limits_{i=1}^{n}P_i^{(|z|^\frac{2}{n})}<n\right)=\frac{1}{2\pi|z|^2},$$ where the last equality holds because $\sum\limits_{i=1}^{n}P_i^{(|z|^\frac{2}{n})}$ is Poisson with mean $n|z|^{\frac{2}{n}}=n+2\log|z|+o(1)$, so by the central limit theorem the probability tends to $\frac{1}{2}$.
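This last limit is easy to check numerically. The sketch below (plain Python; the values of $n$ and $|z|$ are arbitrary choices, and the code is an illustration rather than part of the proof) evaluates $\rho_{1,n}'(z)$ via the Poisson probability above and compares it with $\frac{1}{2\pi|z|^2}$.

```python
import math

def rho_1n(z_abs, n):
    # rho'_{1,n}(z) = P(Poisson(n|z|^(2/n)) <= n-1) / (pi * |z|^(2-2/n)),
    # as derived above; the Poisson cdf is summed in log-space for stability
    lam = n * z_abs ** (2.0 / n)
    logs = [-lam + i * math.log(lam) - math.lgamma(i + 1) for i in range(n)]
    m = max(logs)
    p = math.exp(m) * sum(math.exp(t - m) for t in logs)
    return p / (math.pi * z_abs ** (2.0 - 2.0 / n))

z_abs = 2.0
rho = rho_1n(z_abs, 4000)
print(rho, 1.0 / (2.0 * math.pi * z_abs ** 2))   # close for large n
```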
\subsubsection{Case: $k=2$}
\begin{align}
\rho_{2,n}'(z_1,z_2) & = \lim\limits_{\epsilon\rightarrow 0}\dfrac{\Pr\left(\sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\mu_{i,n}\in B(z_1,\epsilon)\right)=1 \& \sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\mu_{i,n}\in B(z_2,\epsilon)\right)=1 \right)}{(\pi\epsilon^2)^2}
\end{align}
Using the relation $\mu_{i,n}=\lambda_{i,n}^n$ and changing variables we obtain that the events $\{\mu_{i,n}\in B(z_1,\epsilon)\}$ and $\{\lambda_{i,n}\in \bigcup\limits_{j=1}^{n}B(z_1^\frac{1}{n}\omega_n^j,\epsilon_n')\}$ as well as $\{\mu_{i,n}\in B(z_2,\epsilon)\}$ and $\{\lambda_{i,n}\in \bigcup\limits_{j=1}^{n}B(z_2^\frac{1}{n}\omega_n^j,\epsilon_n'')\}$ are identical for some $\epsilon_n'=\frac{\epsilon}{n|z_1|^{1-\frac{1}{n}}}+O(\epsilon^2)$ and $\epsilon_n''=\frac{\epsilon}{n|z_2|^{1-\frac{1}{n}}}+O(\epsilon^2)$. Hence we have
\begin{align}
\rho_{2,n}'(z_1,z_2)& = \lim\limits_{\epsilon\rightarrow 0} \dfrac{\Pr\left(\sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\lambda_{i,n}\in \bigcup\limits_{j=1}^{n}B(z_1^\frac{1}{n}\omega_n^j,\epsilon_n')\right)=1 \& \sum\limits_{i=1}^{n}\text{$\mathbbm{1}$}\left(\lambda_{i,n}\in\bigcup\limits_{j=1}^{n} B(z_2^\frac{1}{n}\omega_n^j,\epsilon_n'')\right)=1 \right)}{(\pi\epsilon^2)^2}\\
& = \lim\limits_{\epsilon \rightarrow 0} \dfrac{\int\limits_{A_n\times B_n}\rho_{2,n}(\alpha,\beta)dm(\alpha)dm(\beta)}{(\pi\epsilon^2)^2}
\end{align}
where $A_n=\bigcup\limits_{j=1}^{n}B(z_1^\frac{1}{n}\omega_n^j,\epsilon_n'),B_n=\bigcup\limits_{j=1}^{n}B(z_2^\frac{1}{n}\omega_n^j,\epsilon_n'')$ and $\rho_{2,n}(.,.)$ is the $2^{nd}$ joint intensity of the eigenvalue process of $X_n$. Therefore,
\begin{align}
\rho_{2,n}'(z_1,z_2) &= \dfrac{\sum\limits_{i,j=1}^{n}\left(\rho_{2,n}(z_1^{\frac{1}{n}}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)\right)}{n^4|z_1z_2|^{2-\frac{2}{n}}}\\
&= \frac{1}{n^4|z_1z_2|^{2-\frac{2}{n}}}\sum\limits_{i,j=1}^{n}\left(K(z_1^{\frac{1}{n}}\omega_n^i,z_1^{\frac{1}{n}}\omega_n^i)K(z_2^{\frac{1}{n}}\omega_n^j,z_2^{\frac{1}{n}}\omega_n^j)-K(z_1^{\frac{1}{n}}\omega_n^i,z_2^{\frac{1}{n}}\omega_n^j)K(z_2^{\frac{1}{n}}\omega_n^j,z_1^{\frac{1}{n}}\omega_n^i)\right)\\
&= \frac{1}{n^4|z_1z_2|^{2-\frac{2}{n}}}\left(n^2K(z_1^\frac{1}{n},z_1^\frac{1}{n})K(z_2^\frac{1}{n},z_2^\frac{1}{n})-\sum\limits_{i,j=1}^{n}|K(z_1^\frac{1}{n}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)|^2\right) \label{last2}
\end{align}
We will now show that $\lim\limits_{n\rightarrow\infty}\dfrac{\sum\limits_{i,j=1}^{n}|K(z_1^\frac{1}{n}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)|^2}{n^4|z_1z_2|^{2-\frac{2}{n}}}=0$.
\begin{align}
\sum\limits_{i,j=1}^{n}|K(z_1^\frac{1}{n}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)|^2 & = \frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell=0}^{n-1}(z_1^\frac{1}{n}\omega_n^i\overline{z_2}^\frac{1}{n}\overline{\omega_n}^{j})^\ell\frac{n^\ell}{\ell!}\right)\left(\sum\limits_{\ell=0}^{n-1}(\overline{z_1}^\frac{1}{n}\overline{\omega_n}^{i}z_2^\frac{1}{n}\omega_n^{j})^\ell\frac{n^\ell}{\ell!}\right)\\
& = \frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell,k=0}^{n-1}\frac{\left(z_1^\frac{\ell}{n}\omega_n^{i\ell}\overline{z_2}^\frac{\ell}{n}\overline{\omega_n}^{j\ell}\overline{z_1}^\frac{k}{n}\overline{\omega_n}^{ki}z_2^\frac{k}{n}\omega_n^{kj}\right)n^{k+\ell}}{k!\ell!}\right)\\
& =\frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell,k=0}^{n-1}\frac{n^{k+\ell}}{k!\ell!}\left(\omega_n^{(i-j)(\ell-k)}\right)\left(z_1^\frac{\ell}{n}\overline{z_1}^\frac{k}{n}z_2^\frac{k}{n}\overline{z_2}^\frac{\ell}{n}\right)\right)
\end{align}
Let $z_1=r_1e^{i\theta_1}$, $z_2=r_2e^{i\theta_2}$ and $\alpha=e^{i(\theta_1-\theta_2)}$.
\begin{align}
\sum\limits_{i,j=1}^{n}|K(z_1^\frac{1}{n}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)|^2
& = \frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell,k=0}^{n-1}\frac{n^{k+\ell}}{k!\ell!}\left(\omega_n^{(i-j)(\ell-k)}\right)\alpha^{\frac{\ell-k}{n}}(r_1r_2)^\frac{\ell+k}{n}\right)\\
& = \frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{\substack{\ell,k=0\\ \ell\neq k}}^{n-1}\left(\sum\limits_{i,j=1}^{n}\frac{n^{k+\ell}}{k!\ell!}\left(\omega_n^{(i-j)(\ell-k)}\alpha^{\frac{\ell-k}{n}}(r_1r_2)^\frac{\ell+k}{n}\right)\right)\label{eqn:case2:1}\\
& \quad +\frac{n^2}{\pi^2}e^{-n(|z_1|^\frac{2}{n}+|z_2|^\frac{2}{n})}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell=0}^{n-1}\frac{n^{2\ell}}{(\ell!)^2}(r_1r_2)^\frac{2\ell}{n}\right)
\end{align}
The term \eqref{eqn:case2:1} does not contribute to the sum because, for fixed $\ell\neq k$ and fixed $j$, the sum $\sum\limits_{i=1}^{n}\omega_n^{(\ell-k)(i-j)}=0$. Hence,
\begin{align}
\lim\limits_{n \rightarrow \infty}\frac{1}{n^4|z_1z_2|^{2-\frac{2}{n}}}\sum\limits_{i,j=1}^{n}|K(z_1^\frac{1}{n}\omega_n^i,z_2^\frac{1}{n}\omega_n^j)|^2 & = \lim\limits_{n \rightarrow \infty}\frac{n^2e^{-n(r_1^\frac{2}{n}+r_2^\frac{2}{n})}}{\pi^2 n^4(r_1r_2)^{2-\frac{2}{n}}}\sum\limits_{i,j=1}^{n}\left(\sum\limits_{\ell=0}^{n-1}\frac{n^{2\ell}}{(\ell!)^2}(r_1r_2)^\frac{2\ell}{n}\right)\\
& =\lim\limits_{n \rightarrow \infty}\frac{1}{\pi^2(r_1r_2)^{2-\frac{2}{n}}} \sum\limits_{\ell=0}^{n-1}\left(\frac{n^\ell e^{-n r_1^{\frac{2}{n}}}r_1^{\frac{2\ell}{n}}}{\ell!}\cdot\frac{n^\ell e^{-n r_2^{\frac{2}{n}}}r_2^{\frac{2\ell}{n}}}{\ell!}\right)\\
& \leq \lim\limits_{n \rightarrow \infty}\frac{1}{\pi^2(r_1r_2)^{2-\frac{2}{n}}} \sum\limits_{\ell=0}^{\infty}\left(\frac{n^\ell e^{-n r_1^{\frac{2}{n}}}r_1^{\frac{2\ell}{n}}}{\ell!}\cdot\frac{n^\ell e^{-n r_2^{\frac{2}{n}}}r_2^{\frac{2\ell}{n}}}{\ell!}\right)\\
& = \lim\limits_{n\rightarrow\infty}\frac{1}{\pi^2(r_1r_2)^{2-\frac{2}{n}}}\Pr\left(Y_1^{(n)}=Y_2^{(n)}\right),
\end{align}
where $Y_1^{(n)}$ and $Y_2^{(n)}$ are independent Poisson random variables with means $nr_1^\frac{2}{n}$ and $nr_2^\frac{2}{n}$ respectively. At the same time we can write each of $Y_1^{(n)}$ and $Y_2^{(n)}$ as a sum of $n$ i.i.d. Poisson random variables with means $r_1^\frac{2}{n}$ and $r_2^\frac{2}{n}$ respectively. In other words, $Y_1^{(n)}=P_1^{(n)}+P_2^{(n)}+\dots +P_n^{(n)}$ and $Y_2^{(n)}=Q_1^{(n)}+Q_2^{(n)}+\dots+Q_n^{(n)}$, where the $P_i^{(n)}$ and $Q_i^{(n)}$ are independent Poisson random variables with means $r_1^{\frac{2}{n}}$ and $r_2^\frac{2}{n}$ respectively. We will show that
\begin{equation}
\lim\limits_{n\rightarrow\infty}\frac{1}{\pi^2(r_1r_2)^{2-\frac{2}{n}}}\Pr\left(Y_1^{(n)}=Y_2^{(n)}\right)=0. \label{eqn:case2:3}
\end{equation}
Once we prove \eqref{eqn:case2:3} we will have
$$\rho_2'(z_1,z_2)=\lim\limits_{n\rightarrow\infty}\rho_{2,n}'(z_1,z_2)=\lim\limits_{n\rightarrow\infty}\dfrac{n^2K(z_1^\frac{1}{n},z_1^\frac{1}{n})K(z_2^\frac{1}{n},z_2^\frac{1}{n})}{n^4|z_1z_2|^{2-\frac{2}{n}}}=\frac{1}{(2\pi)^2|z_1z_2|^2}=\rho_1'(z_1)\rho_1'(z_2).$$
Now it remains to prove \eqref{eqn:case2:3}. We state the Lyapunov central limit theorem, show that the triangular array $P_i^{(n)}-Q_i^{(n)}$ satisfies its hypothesis, and use this to establish \eqref{eqn:case2:3}.
\begin{theorem}[Lyapunov CLT \cite{billingsley}]
Let $\{X_{i,n}\}$ be a triangular array of random variables with i.i.d. random variables in each row, and let $s_n^2=\sum\limits_{i=1}^{n}Var(X_{i,n})$, $S_n=\sum\limits_{i=1}^{n}X_{i,n}$. If $ \lim\limits_{n\rightarrow\infty}\dfrac{\sum\limits_{i=1}^{n}\text{$\mathbb{E}$}|X_{i,n}-\text{$\mathbb{E}$}(X_{i,n})|^{2+\delta}}{s_n^{2+\delta}}=0$ for some $\delta>0$, then $$\dfrac{S_n-\text{$\mathbb{E}$}(S_n)}{s_n}\xrightarrow{d} N(0,1).$$
\end{theorem}
To verify the hypothesis of the Lyapunov CLT, choose $X_{i,n}=P_i^{(n)}-Q_i^{(n)}$. Then $$s_n^4=\left(\sum\limits_{i=1}^{n}Var(X_{i,n})\right)^2=\left(\sum\limits_{i=1}^{n}Var(P_i^{(n)}-Q_i^{(n)})\right)^2=\left(\sum\limits_{i=1}^{n}\left(Var(P_i^{(n)})+Var(Q_i^{(n)})\right)\right)^2=n^2(r_1^\frac{2}{n}+r_2^\frac{2}{n})^2$$ and, using that the fourth central moment of a Poisson($\lambda$) random variable is $\lambda+3\lambda^2$, $$\text{$\mathbb{E}$}\left(P_i^{(n)}-Q_i^{(n)}-\text{$\mathbb{E}$}(P_i^{(n)}-Q_i^{(n)})\right)^4=\left(r_1^\frac{2}{n}+r_2^\frac{2}{n}\right)\left(3(r_1^\frac{2}{n}+r_2^\frac{2}{n})+1\right).$$
Therefore
$$
\lim\limits_{n\rightarrow\infty}\dfrac{\sum\limits_{i=1}^{n}\text{$\mathbb{E}$}|X_{i,n}-\text{$\mathbb{E}$}(X_{i,n})|^{2+2}}{s_n^{2+2}} = \lim\limits_{n\rightarrow\infty}\dfrac{n\left(r_1^\frac{2}{n}+r_2^\frac{2}{n}\right)\left(3(r_1^\frac{2}{n}+r_2^\frac{2}{n})+1\right)}{n^2\left(r_1^\frac{2}{n}+r_2^\frac{2}{n}\right)^2}=0.
$$
So $\dfrac{Y_1^{(n)}-Y_2^{(n)}-n(r_1^\frac{2}{n}-r_2^\frac{2}{n})}{\sqrt{n(r_1^\frac{2}{n}+r_2^\frac{2}{n})}}\xrightarrow{d}N(0,1)$. Observing that $\lim\limits_{n \rightarrow \infty}\sqrt{n}(r_1^\frac{2}{n}-r_2^\frac{2}{n})=0$ and $\lim\limits_{n\rightarrow \infty}(r_1^\frac{2}{n}+r_2^\frac{2}{n})=2$ we get $\dfrac{Y_1^{(n)}-Y_2^{(n)}}{\sqrt{n}}\xrightarrow{d}N(0,2)$. From the above facts we obtain
\begin{align}
\lim\limits_{n\rightarrow\infty}\Pr(Y_1^{(n)}-Y_2^{(n)}=0)
& \leq\lim\limits_{n\rightarrow\infty}\Pr(|Y_1^{(n)}-Y_2^{(n)}|<1)\\
&=\lim\limits_{n\rightarrow\infty}\Pr\left(\frac{|Y_1^{(n)}-Y_2^{(n)}|}{\sqrt{n}}<\frac{1}{\sqrt{n}}\right)
=0
\end{align}
Hence we have shown \eqref{eqn:case2:3}, which verifies the case $k=2$.
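The collision probability $\Pr(Y_1^{(n)}=Y_2^{(n)})$ is also easy to estimate by simulation. The sketch below (a Monte Carlo illustration of our own, not from the text; numpy, with arbitrary radii and sample sizes) shows the decay in $n$ consistent with the normal limit established above.

```python
import numpy as np

rng = np.random.default_rng(6)
r1, r2, trials = 1.5, 0.7, 200_000
probs = []
for n in (10, 100, 1000):
    y1 = rng.poisson(n * r1 ** (2.0 / n), trials)   # Y_1^{(n)}
    y2 = rng.poisson(n * r2 ** (2.0 / n), trials)   # Y_2^{(n)}
    probs.append(np.mean(y1 == y2))                 # estimate P(Y_1 = Y_2)
print(probs)   # decays as n grows, roughly like 1/sqrt(n)
```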
\subsubsection{Case: arbitrary $k$}
To prove the result for general $k\geq 2$ we proceed as in the previous cases. Write $z_j=r_j\alpha_j$ for $j=1,2,\dots,k$, where $|\alpha_j|=1$.
\begin{align}
\rho_{k,n}'(z_1,z_2,\dots,z_k) & =\frac{\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\rho_{k,n}(z_1^{\frac{1}{n}}\omega_n^{i_1},z_2^{\frac{1}{n}}\omega_n^{i_2},\dots,z_k^{\frac{1}{n}}\omega_n^{i_k})}{n^{2k}|z_1z_2\dots z_k|^{2-\frac{2}{n}}}\\
& = \frac{\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\det\left(\left(K(z_\ell^{\frac{1}{n}}\omega_n^{i_\ell},z_m^{\frac{1}{n}}\omega_n^{i_m})\right)\right)_{1\leq\ell,m\leq k}}{n^{2k}|z_1z_2\dots z_k|^{2-\frac{2}{n}}}\\
& = \frac{\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\sum\limits_{\sigma \in S_k} sgn(\sigma) \prod\limits_{s=1}^{k}K(z_s^{\frac{1}{n}}\omega_n^{i_s},z_{\sigma(s)}^{\frac{1}{n}}\omega_n^{i_{\sigma(s)}})}{n^{2k}|z_1z_2\dots z_k|^{2-\frac{2}{n}}}\\
& = \frac{\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\sum\limits_{\sigma \in S_k} sgn(\sigma) \prod\limits_{s=1}^{k}\sum\limits_{j=0}^{n-1}\frac{n^j}{j!}(r_sr_{\sigma(s)}\alpha_s\overline{\alpha_{\sigma(s)}})^\frac{j}{n}\omega_n^{j(i_s-i_{\sigma(s)})}\times \frac{n}{\pi}e^{-n(r_s^\frac{2}{n}+r_{\sigma(s)}^\frac{2}{n})/2}}{n^{2k}|z_1z_2\dots z_k|^{2-\frac{2}{n}}}\\
& =\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\dfrac{n^ke^{-n(r_1^\frac{2}{n}+\dots+r_k^\frac{2}{n})}}{n^{2k}\pi^k|z_1z_2\dots z_k|^{2-\frac{2}{n}}}
\sum\limits_{\sigma \in S_k} sgn(\sigma)\prod\limits_{s=1}^{k}\left(\sum\limits_{j=0}^{n-1}(r_sr_{\sigma(s)})^\frac{j}{n}\omega_n^{j(i_s-i_{\sigma(s)})}(\alpha_s\overline{\alpha_{\sigma(s)}})^\frac{j}{n}\frac{n^j}{j!}\right)\label{eqn:perm}
\end{align}
Expanding the product in \eqref{eqn:perm} we get
\begin{align}
\sum\limits_{i_1,i_2,\dots,i_k=1}^{n}\dfrac{n^ke^{-n(r_1^\frac{2}{n}+\dots+r_k^\frac{2}{n})}}{n^{2k}\pi^k|z_1z_2\dots z_k|^{2-\frac{2}{n}}} \sum\limits_{\sigma \in S_k}sgn(\sigma)\sum\limits_{j_1,j_2,\dots,j_k=0}^{n-1}\dfrac{n^{(j_1+j_2+\dots+j_k)}}{j_1!j_2!\dots j_k!}\prod\limits_{\ell=1}^{k}(r_\ell r_{\sigma(\ell)}\alpha_\ell\overline{\alpha_{\sigma(\ell)}})^{\frac{j_\ell}{n}}\omega_n^{j_\ell(i_\ell-i_{\sigma(\ell)})}\label{eqn:perm1}
\end{align}
Let $\sigma=C_1C_2\dots C_\ell$, where $C_1,C_2,\dots ,C_\ell$ are the disjoint cycles of $\sigma$. Summing the phase $\prod_{u=1}^{k}\omega_n^{j_u(i_u-i_{\sigma(u)})}=\omega_n^{\sum_{u=1}^{k}i_u(j_u-j_{\sigma^{-1}(u)})}$ over $i_1,i_2,\dots,i_k$, the sum vanishes unless $j_u=j_{\sigma^{-1}(u)}$ for every $u$, that is, unless the indices $j_u$ are constant on each cycle of $\sigma$; in that case the sum over the $i$'s contributes a factor $n^k$, and the phases $\prod_u(\alpha_u\overline{\alpha_{\sigma(u)}})^{\frac{j_u}{n}}$ cancel within each cycle. Writing $j_{k_v}$ for the common value on the cycle $C_v$, \eqref{eqn:perm1} reduces to
\begin{align}
\dfrac{e^{-n(r_1^\frac{2}{n}+\dots+r_k^\frac{2}{n})}}{\pi^{k}|z_1z_2\dots z_k|^{2-\frac{2}{n}}}\sum\limits_{\sigma \in S_k}sgn(\sigma)\sum\limits_{j_{k_1},j_{k_2},\dots,j_{k_\ell}=0}^{n-1}\prod\limits_{v=1}^{\ell}\prod\limits_{u\in C_v}(r_u r_{\sigma(u)})^{\frac{j_{k_v}}{n}}\frac{n^{j_{k_v}}}{j_{k_v}!}
\end{align}
\chapter{Proofs of Theorems \ref{thm1} and \ref{thm2}.}
\label{ch:proofs5}
\section{Outline of proofs.}
The proofs here are adapted from the proof of Kabluchko's theorem as presented in \cite{kabluchko}. The proofs involve analysing the function $L_n(z)$. In the case of Theorem \ref{thm1}, define $L_n(z)=\frac{P_n'(z)}{P_n(z)} = \sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}$.
We shall prove the theorems by showing that the hypotheses of the Theorems \ref{thm1} and \ref{thm2} imply the following three statements.
\begin{align}
&\text{For Lebesgue a.e. $z \in \mathbb{C}$ } \text{ and for every }\epsilon>0, \lim\limits_{n\rightarrow \infty}\Pr\left(\frac{1}{n}\log|L_n(z)|>\epsilon\right)=0. \tag{A1} \label{A1}\\
&\text{For Lebesgue a.e. $z \in \mathbb{C}$ } \text{ and for every }\epsilon>0, \lim\limits_{n\rightarrow \infty}\Pr\left(\frac{1}{n}\log|L_n(z)|<-\epsilon\right)=0. \tag{A2} \label{A2}\\
&\text{For any }r>0, \text{the sequence }\left\{\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2|L_n(z)|dm(z)\right\}_{n\geq 1} \text{ is tight.} \tag{A3} \label{A3}
\end{align}
Statements \eqref{A1} and \eqref{A2} assert that $\frac{1}{n}\log|L_n(z)|$ converges to $0$ in probability. Statement \eqref{A3} asserts that the sequence $\{\int\limits_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2|L_n(z)|dm(z)\}_{n \geq 1}$ is tight. A lemma of Tao and Vu links these two facts to yield that $\{\int\limits_{\text{$\mathbb{D}$}_r}\frac{1}{n}\log|L_n(z)|dm(z)\}_{n \geq 1}$ converges to $0$ in probability. We state this lemma below.
\begin{lemma}[Lemma~3.1 in~\cite{taovu}]\label{lem:tao_vu}
Let $(X,\mathcal{A},\nu)$ be a finite measure space and $f_n:X\to \mathbb{R}$, $n\geq 1$ random functions which are defined over a probability space $(\Omega, \mathcal{B}, \mathbb{P})$ and are jointly measurable with respect to $\mathcal{A}\otimes \mathcal{B}$.
Assume that:
\begin{enumerate}
\item For $\nu$-a.e.\ $x\in X$ we have $f_n(x)\to 0$ in probability, as $n\to\infty$.
\item For some $\delta>0$, the sequence $\int_X |f_n(x)|^{1+\delta} d\nu(x)$ is tight.
\end{enumerate}
Then, $\int_X f_n(x)d\nu(x)$ converges in probability to $0$.
\end{lemma}
Thus it follows from the assertions \eqref{A1}, \eqref{A2}, \eqref{A3} and Lemma \ref{lem:tao_vu} that $\int\limits_{\text{$\mathbb{D}$}_r}\frac{1}{n}\log|L_n(z)|dm(z) \rightarrow 0$ in probability for any $r>0$. Choose any $f\in C_c^{\infty}(\mathbb{C})$ with $\mbox{support}(f) \subseteq \text{$\mathbb{D}$}_r$, and define $f_n(z)=\frac{1}{n}\left(\log|L_n(z)|\right)\Delta f(z)$. Because $\Delta f$ is a bounded function and $\frac{1}{n}\log|L_n(z)|$ satisfies the hypothesis of Lemma \ref{lem:tao_vu}, the functions $f_n$ also satisfy the hypothesis of Lemma \ref{lem:tao_vu}. Therefore we get that $\int\limits_{\text{$\mathbb{D}$}_r}f_n(z)dm(z)\rightarrow 0$ in probability. Applying Green's theorem twice we have the identity,
\[\int_{\text{$\mathbb{D}$}_r}^{}f(z)\Delta\frac{1}{n}\log|L_n(z)|dm(z) = \int_{\text{$\mathbb{D}$}_r}^{}\frac{1}{n}\log|L_n(z)|\Delta f(z)dm(z).
\]
The left-hand side of the above identity is defined in the sense of distributions. Therefore it follows that
$\int_{\text{$\mathbb{D}$}_r}^{} f(z)\frac{1}{n}\Delta\log|L_n(z)|dm(z)\rightarrow 0 $ in probability. This suffices for Theorem \ref{thm2}. We complete the proof of Theorem \ref{thm1} by the following arguments. In the sense of distributions we have
\begin{equation}\label{eqn:distribution}
\frac{1}{2\pi}\int_{\text{$\mathbb{D}$}_r}^{} f(z)\frac{1}{n}\Delta\log|L_n(z)|dm(z) = \frac{1}{n}\sum\limits_{k=1}^{n-1}f(\eta_k^{(n)})-\frac{1}{n}\sum\limits_{k=1}^{n}f(\xi_k),
\end{equation}
since $\log|L_n|=\log|P_n'|-\log|P_n|$ and $\Delta\log|z-a|=2\pi\delta_a$.
From Lemma \ref{random_seq_limit} it follows that the sequence $\{\xi_n\}_{n\geq1}$ is $\mu$-distributed. Hence \linebreak $\frac{1}{n}\sum_{k=1}^{n}f(\xi_k) \rightarrow \int\limits_{\text{$\mathbb{D}$}_r}f(z)d\mu(z)$ almost surely. Therefore from \eqref{eqn:distribution} we get, \begin{equation}
\frac{1}{n}\sum_{k=1}^{n-1}f(\eta_k^{(n)}) \rightarrow \int\limits_{\text{$\mathbb{D}$}_r}f(z)d\mu(z) \text{ in probability.} \label{proof_1}
\end{equation} Because for any $f \in C_c^\infty(\text{$\mathbb{C}$})$ and $\epsilon>0$, sets of the form $\{\mu:|\int\limits_{\text{$\mathbb{C}$}}f(z)d\mu(z)|<\epsilon\}$ form an open base at the origin, it follows from Definition \ref{modes of convergence} and \eqref{proof_1} that $\frac{1}{n-1}\sum_{k=1}^{n-1}\delta_{\eta_k^{(n)}} \xrightarrow{w} \mu$ in probability.
We show \eqref{A1} by obtaining moment bounds for $L_n(z)$. To show \eqref{A2} we use a concentration bound for $L_n(z)$. In both Theorems \ref{thm1} and \ref{thm2}, observe that $L_n(z)$ is a sum of independent random variables. The Kolmogorov-Rogozin inequality gives concentration bounds for sums of independent random variables. The version of the Kolmogorov-Rogozin inequality that will be used in the proofs is stated below.
\paragraph{Kolmogorov-Rogozin inequality (multi-dimensional version)}\label{KR-Inequality} [Corollary 1 of Theorem 6.1 in \cite{KR1}]
Let $X_1,X_2, \dots $ be independent random vectors in $\mathbb{R}^\text{$n$}$. Define the concentration function,
\begin{equation}
Q(X,\delta) := \sup_{a\in \mathbb{R}^\text{$n$}}\Pr(X \in B(a,\delta)).
\end{equation}
If $\delta_i \leq \delta$ for each $i$, then
\begin{equation}
Q(X_1+\dots + X_n,\delta) \leq \frac{C\delta}{\sqrt{\sum_{i=1}^{n}\delta_i^2(1-Q(X_i,\delta_i))}}.\label{kol-rog-ineq}
\end{equation}
It remains to show that the hypotheses of Theorems \ref{thm1} and \ref{thm2} imply \eqref{A1}, \eqref{A2} and \eqref{A3}. We show this in the subsequent sections.
\section{Proof of Theorem \ref{thm1}}
In the following lemma we show that the hypothesis of the Theorem \ref{thm1} imply \eqref{A1}.
\begin{lemma}\label{momentbound}
Let $L_n(z)=\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}$, where the $\xi_k$ are as in Theorem \ref{thm1}. Then for any $\epsilon > 0$ and for Lebesgue a.e. $z \in \mathbb{C}$ we have $\lim\limits_{n\rightarrow \infty}\Pr(\frac{1}{n}\log|L_n(z)|\geq \epsilon)= 0$.
\end{lemma}
\begin{proof}
Define $A_n^{\epsilon}=\bigcup\limits_{k=1}^{n}\{z:|z-a_k|<e^{-n\epsilon} \text{ or } |z-b_k|<e^{-n\epsilon}\}$ and $F^{\epsilon}=\limsup\limits_{n \rightarrow \infty} A_n^{\epsilon}$; the sets $F^\epsilon$ are decreasing in $\epsilon$. We have $\sum\limits_{n=1}^{\infty}m(A_n^{\epsilon}) \leq \sum\limits_{n=1}^{\infty}2\pi ne^{-2n\epsilon} < \infty$, where $m$ is Lebesgue measure on the complex plane. Applying the Borel-Cantelli lemma to the sequence $\{A_n^{\epsilon}\}_{n\geq 1}$ we get $m(F^{\epsilon})=0$. Because the $F^\epsilon$ are decreasing in $\epsilon$, the set $F=\bigcup\limits_{\epsilon>0}F^{\epsilon}=\bigcup\limits_{m\geq 1}F^{1/m}$ also satisfies $m(F)=0$. Choose $z \in F^c$; for every $\epsilon>0$ there is $N_z^{\epsilon}$ such that for any $n>N_z^{\epsilon}$ we have $z \notin A_n^{\epsilon}$. Therefore the inequality $\frac{1}{|z-\xi_n|}>e^{n\epsilon}$ is satisfied for only finitely many $n$. Hence we have $|L_n(z)|<M+ne^{n\epsilon}$, where $M$ is the finite random number obtained from the finitely many terms for which the inequality $\frac{1}{|z-\xi_n|}>e^{n\epsilon}$ holds. It follows that $\limsup\limits_{n\rightarrow \infty}\frac{1}{n}\log|L_n(z)|\leq\epsilon$ almost surely. Therefore for $z\notin F$, we have $\lim\limits_{n\rightarrow \infty}\Pr(\frac{1}{n}\log|L_n(z)|\geq \epsilon)= 0$.
\end{proof}
\begin{remark}
In the proof of Lemma \ref{momentbound} we have proved the stronger statement that, for Lebesgue almost every $z$, $\limsup\limits_{n\rightarrow \infty}\frac{1}{n}\log|L_n(z)|\leq 0$ almost surely.
\end{remark}
We will use the null set $F$ defined in the proof of Lemma \ref{momentbound} above in the proofs of the subsequent lemmas. In the following lemma we establish \eqref{A2}.
\begin{lemma}\label{kolmogorov-rogozin}
Let $L_n(z)=\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}$, where the $\xi_k$ are as in Theorem \ref{thm1}. Then for any $\epsilon > 0$ and for Lebesgue a.e. $z$ we have $\lim\limits_{n\rightarrow \infty}\Pr(\frac{1}{n}\log|L_n(z)|\leq -\epsilon)= 0$.
\end{lemma}
\begin{proof}
Fix $z \in F^c$, where $F$ is as defined in the proof of Lemma \ref{momentbound}.
From Kolmogorov-Rogozin inequality \eqref{kol-rog-ineq} and taking $\delta_i=\delta=e^{-n\epsilon}$ we have,
\begin{equation}\label{kreq}
\Pr\left(\bigg|\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}\bigg|<e^{-n\epsilon}\right) \leq \frac{C}{\sqrt{\sum_{k=1}^{n}(1-Q(\frac{1}{z-\xi_k},e^{-n\epsilon}))}}.
\end{equation}
We shall show that $\sum_{k=1}^{n}(1-Q(\frac{1}{z-\xi_k},e^{-n\epsilon}))$ goes to $\infty$. Observe that,
\begin{align}
Q\left(\frac{1}{z-\xi_k},e^{-n\epsilon}\right) =& \sup\limits_{\alpha \in \mathbb{C}}\Pr\left( \bigg|\frac{1}{z-\xi_k}-\alpha\bigg|<e^{-n\epsilon} \right)
\leq \frac{1}{2},
\end{align}
whenever $|\frac{1}{z-a_k}-\frac{1}{z-b_k}| > 2e^{-n\epsilon}$.
Define $S_n=\{k\leq n:|\frac{1}{z-a_k}-\frac{1}{z-b_k}| > 2e^{-n\epsilon}\}$. Notice that if $a_k \neq b_k$, then there is $N_k$ such that whenever $n>N_k$, we have $k\in S_n$. Because $a_k\neq b_k$ for infinitely many $k$, $|S_n|$ increases to infinity as $n \rightarrow \infty$. The denominator on the right hand side of \eqref{kreq} is at least $\sqrt{\frac{|S_n|}{2}}$.
Therefore $\Pr\left(\bigg|\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}\bigg|<e^{-n\epsilon}\right)\leq \frac{C\sqrt{2}}{\sqrt{|S_n|}}\rightarrow 0$, as $n \rightarrow \infty$. This completes the proof of the lemma.
\end{proof}
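The anti-concentration phenomenon behind Lemma \ref{kolmogorov-rogozin} can be illustrated numerically. The following is a hedged Monte Carlo sketch, not part of the proof, with hypothetical choices $a_k=k$, $b_k=k+\tfrac{1}{2}$ and $z=\tfrac{i}{4}$ (none of which come from the text): the event $|L_n(z)|<e^{-n\epsilon}$ essentially never occurs.

```python
import numpy as np

rng = np.random.default_rng(3)

# a_k, b_k are two deterministic sequences (hypothetical choices for this
# illustration); xi_k picks one of them with probability 1/2 each.
n, eps, trials = 50, 0.1, 2000
a = np.arange(1, n + 1, dtype=float)  # a_k = k
b = a + 0.5                           # b_k = k + 1/2, so a_k != b_k for all k
z = 0.25j

choices = rng.integers(0, 2, size=(trials, n))
xi = np.where(choices == 0, a, b)     # one realization of (xi_1,...,xi_n) per row
L = np.sum(1.0 / (z - xi), axis=1)
p_small = np.mean(np.abs(L) < np.exp(-n * eps))
print(p_small)  # anti-concentration: this probability is essentially 0
```

Here every summand $1/(z-\xi_k)$ has negative real part, so $|L_n(z)|$ stays bounded away from $0$ in all realizations, consistent with the lemma.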
It remains to prove tightness of the sequence $\{\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2|L_n(z)|dm(z)\}_{n\geq1}$; this is done in the following lemma.
\begin{lemma}\label{tight}
Let $L_n(z):=\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}$, where the $\xi_k$ are as in Theorem \ref{thm1}. Then, for any $r>0$, the sequence $\{\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2|L_n(z)|dm(z)\}_{n\geq1}$ is tight.
\end{lemma}
\begin{proof}
We will first decompose $\log|L_n(z)|$ into its positive and negative parts and analyze them separately. Let $\log|L_n(z)|=\log_+|L_n(z)|-\log_-|L_n(z)|$. Then,
\[\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2|L_n(z)|dm(z)= \int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_+^2|L_n(z)|dm(z)+\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log^2_-|L_n(z)|dm(z).\]
Using \eqref{logplussum}, we get,
\begin{align}
\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_+^2|L_n(z)|dm(z) & = \int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_+^2\bigg|\sum\limits_{k=1}^{n}\frac{1}{z-\xi_k}\bigg|dm(z),\\
& \leq \int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\left(\sum\limits_{k=1}^{n}\log_+\bigg|\frac{1}{z-\xi_k}\bigg|+\log(n)\right)^2dm(z).
\end{align}
Using the inequality $(a_1+a_2+\dots+a_m)^2\leq m(a_1^2+a_2^2+\dots+a_m^2)$, a consequence of the Cauchy-Schwarz inequality, applied to the $n+1$ terms above, we get,
\begin{align}
\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_+^2|L_n(z)|dm(z)
& \leq \int_{\text{$\mathbb{D}$}_r}\frac{n+1}{n^2}\left(\sum\limits_{k=1}^{n}\log_+^2\bigg|\frac{1}{z-\xi_k}\bigg|+\log^2(n)\right)dm(z),\\
& = \frac{n+1}{n^2}\sum\limits_{k=1}^{n}\int_{\text{$\mathbb{D}$}_r}\log_-^2|z-\xi_k|dm(z) + \frac{n+1}{n^2}\log^2(n)\pi r^2.\label{eqn:lemma:tight:1}
\end{align}
Because Lebesgue measure on the complex plane is translation invariant, we have $$\int_{\text{$\mathbb{D}$}_r}\log_-^2|z-\xi|dm(z)=\int_{\xi+\text{$\mathbb{D}$}_r}\log_-^2|z|dm(z)\leq\int_{\text{$\mathbb{D}$}_1}\log_-^2|z|dm(z)<\infty,$$ where the middle inequality holds because $\log_-|z|$ vanishes outside $\text{$\mathbb{D}$}_1$. Therefore $\sup\limits_{\xi\in \text{$\mathbb{C}$}}\int_{K}\log_-^2|z-\xi|dm(z)<\infty$ for any compact set $K \subset \mathbb{C}$, and hence each of the terms in the final expression \eqref{eqn:lemma:tight:1} is bounded. Hence the sequence $\{\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_+^2|L_n(z)|dm(z)\}_{n\geq1}$ is bounded.
We will now show that the sequence $\{\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_-^2|L_n(z)|dm(z)\}_{n\geq1}$ is bounded.
Let $P_n(z)=\prod\limits_{k=1}^{n}(z-\xi_k)$ and $P_n'(z)=n\prod\limits_{k=1}^{n-1}(z-\eta_{k}^{(n)})$. Applying inequality \eqref{logminusprod} and Cauchy-Schwarz inequality we get,
\begin{align}
\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_-^2|L_n(z)|dm(z) & = \int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_-^2\bigg|\frac{P_n'(z)}{P_n(z)}\bigg|dm(z),\\
& \leq \int_{\text{$\mathbb{D}$}_r}\frac{2}{n^2}\log_-^2|P_n'(z)|dm(z)+ \int_{\text{$\mathbb{D}$}_r}\frac{2}{n^2}\log_-^2\bigg|\frac{1}{P_n(z)}\bigg|dm(z).
\end{align}
Again applying inequalities \eqref{logminusprod}, \eqref{logplusprod}, \eqref{logplussum} and Cauchy-Schwarz inequality to the above we obtain,
\begin{align}
\int_{\text{$\mathbb{D}$}_r}\frac{1}{n^2}\log_-^2 & |L_n(z)|dm(z)\\
& \leq \int_{\text{$\mathbb{D}$}_r}\frac{2}{n^2}\left(\sum\limits_{k=1}^{n-1}\log_-|z-\eta_{k}^{(n)}|\right)^2dm(z) + \int_{\text{$\mathbb{D}$}_r}\frac{2}{n^2}\left(\sum_{k=1}^{n}\log_+|z-\xi_k|\right)^2dm(z),\\
& \leq \frac{2}{n}\sum\limits_{k=1}^{n-1}\int_{\text{$\mathbb{D}$}_r}\log_-^2|z-\eta_{k}^{(n)}|dm(z) \label{eqn:lemma:tight1}\\&+ 2\int_{\text{$\mathbb{D}$}_r}\left(\log(2)+\log_+|z|+\frac{1}{n}\sum\limits_{k=1}^{n}\log_+|\xi_k|\right)^2dm(z).
\label{eqn:lemma:tight2}
\end{align}
From the hypothesis, both sequences $\{a_n\}_{n\geq1}$ and $\{b_n\}_{n\geq 1}$ are log-Ces\'{a}ro bounded, which in turn implies that $\{\xi_n\}_{n\geq1}$ is also log-Ces\'{a}ro bounded, almost surely. Hence the integrand in \eqref{eqn:lemma:tight2} is bounded uniformly in $n$, and therefore \eqref{eqn:lemma:tight2} is bounded uniformly in $n$. Using the fact that $\sup\limits_{\xi\in \text{$\mathbb{C}$}}\int_{K}\log_-^2|z-\xi|dm(z)<\infty$ for compact $K$, we get that \eqref{eqn:lemma:tight1} is bounded uniformly in $n$.
From the above facts we conclude that the sequence $\left\{\frac{1}{n^2}\int_{\text{$\mathbb{D}$}_r}\log^2|L_n(z)|dm(z)\right\}_{n\geq1}$ is bounded, and hence tight.
\end{proof}
Lemmas \ref{momentbound}, \ref{kolmogorov-rogozin} and \ref{tight} show that the statements \eqref{A1}, \eqref{A2} and \eqref{A3} are satisfied. Hence Theorem \ref{thm1} is proved.
\section{Proof of Theorem \ref{thm2}}
We will prove the theorem when $\omega=0$, i.e., when $0$ is not a limit point of the sequence $\{z_n\}_{n\geq1}$. In the general case we can translate all the points by $\omega$ and apply the same argument. We first prove a general lemma about sequences of complex numbers which will be used in proving the subsequent lemmas.
\begin{lemma}\label{liminf}
For any sequence $\{z_k\}_{k\geq1}$ with $z_k \in \mathbb{C}$, we have $\liminf\limits_{n \rightarrow \infty}\left(\inf\limits_{|z|=r}|z-z_n|^{\frac{1}{n}}\right) \geq 1$ for Lebesgue almost every $r \in \mathbb{R}^+$.
\end{lemma}
\begin{proof}
Fix $\epsilon > 0$ and let $A_n = \{r>0:\inf\limits_{|z|=r}|z-z_n|< (1-\epsilon)^{n} \}$. In this proof, $m$ denotes Lebesgue measure on the real line. Then,
\begin{align}
m\left(\left\{r>0 :\liminf\limits_{n \rightarrow \infty}(\inf\limits_{|z|=r}|z-z_n|^\frac{1}{n}) < (1-\epsilon)\right\}\right) & \leq m\left(\limsup\limits_{n \rightarrow \infty}A_n\right) \\
& \leq \lim\limits_{k \rightarrow \infty}m\left(\mathop{\cup}_{n \geq k} A_n\right).
\end{align}
If $r \in A_k, $ then from the definition of $A_k$ we have that $r \in [|z_k|-(1-\epsilon)^k,|z_k|+(1-\epsilon)^k] $. Hence we get,
\begin{align}
m&\left(\left\{r>0 :\liminf\limits_{n \rightarrow \infty}(\inf\limits_{|z|=r}|z-z_n|^\frac{1}{n}) < (1-\epsilon)\right\}\right)\\ & \leq \lim\limits_{k\rightarrow \infty } \sum_{n=k}^{\infty}m\left(\left\{r:|z_n|-(1-\epsilon)^{n} \leq r \leq |z_n|+(1-\epsilon)^n\right\}\right)\\
& \leq \lim\limits_{k \rightarrow \infty } \sum_{n=k}^{\infty} 2(1-\epsilon)^n = 0.
\end{align}
The above holds for every $\epsilon >0$; taking the union of the corresponding exceptional sets over $\epsilon=1/k$, $k\in\mathbb{N}$, we get $\liminf\limits_{n \rightarrow \infty}\left(\inf\limits_{|z|=r}|z-z_n|^{\frac{1}{n}}\right) \geq 1$ outside an exceptional set $E\subset\mathbb{R}^+$ of Lebesgue measure $0$.
\end{proof}
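Lemma \ref{liminf} can be sanity-checked numerically. The sketch below is a hedged illustration only: the sequence (with moduli uniform in $[0,2]$) and the radius $r$ are hypothetical choices, and it uses the identity $\inf_{|z|=r}|z-z_n|=\big||z_n|-r\big|$ from the proof above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sequence z_n with moduli uniform in [0, 2]; the infimum of
# |z - z_n| over the circle |z| = r equals | |z_n| - r |.
N = 5000
moduli = 2.0 * rng.random(N)
r = 1.234                      # a "typical" (non-exceptional) radius
n = np.arange(1, N + 1)
vals = np.abs(moduli - r) ** (1.0 / n)
print(vals[-100:].min())       # close to (or above) 1 for large n, as predicted
```

Even when some $|z_n|$ lands very close to $r$, the $n$-th root washes the small gap out for large $n$, which is exactly the mechanism of the lemma.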
Define the set $F=\{z:\liminf\limits_{n \rightarrow \infty}|z-z_n|^{\frac{1}{n}} < 1\}$. Because $0$ is not a limit point of $\{z_n\}_{n\geq1}$, we have $\liminf\limits_{n\rightarrow\infty}|z_n|^\frac{1}{n}\geq1$. Hence $0\notin F$. For $|z|=r$, we have \[\liminf\limits_{n \rightarrow \infty}|z-z_n|^\frac{1}{n}\geq\liminf\limits_{n \rightarrow \infty}\left(\inf\limits_{|z|=r}|z-z_n|^{\frac{1}{n}}\right).\]
Hence $F\subseteq \{z:|z|\in E\}$, and by integrating in polar coordinates (Fubini's theorem) we get $m(\{z:|z|\in E\})=0$. From the above two observations it follows that $m(F)=0$.
The following lemma shows that the hypothesis of Theorem \ref{thm2} implies \eqref{A1}.
\begin{lemma}\label{momentthm2}
Let $L_n(z)$ be as in Theorem \ref{thm2}. Then for any $\epsilon>0$ and Lebesgue a.e. $z \in \mathbb{C}$,
$$\limsup\limits_{n \rightarrow \infty}\frac{1}{n}\log|L_n(z)|< \epsilon$$ almost surely.
\end{lemma}
\begin{proof}
From the hypothesis and Markov's inequality we get $$\sum_{n=1}^{\infty}\Pr\left(\sup\limits_{|z|=r}\bigg|\frac{a_n}{z-z_n}\bigg|>e^{n\epsilon}\right) \leq \sum_{n=1}^{\infty}\sup\limits_{|z|=r}\frac{\ee{|a_n|}}{|z-z_n|e^{n\epsilon}}.$$ Denoting $t_n(r)=\sup\limits_{|z|=r}\bigg|\frac{1}{z-z_n}\bigg|$ we have
\begin{equation}
\sum_{n=1}^{\infty}\sup\limits_{|z|=r}\frac{\ee{|a_n|}}{|z-z_n|e^{n\epsilon}}=\sum_{n=1}^{\infty}\frac{\ee{|a_n|}}{e^{n\epsilon}}t_n(r). \label{eqn:power_series}
\end{equation}
Because the $a_n$ are i.i.d. random variables, $\ee{|a_n|}=\ee{|a_1|}$. Using the root test for the convergence of series together with Lemma \ref{liminf}, it follows that the right hand side of \eqref{eqn:power_series} converges for Lebesgue a.e. $r \in (0,\infty)$. Invoking the Borel-Cantelli lemma, almost surely $\sup\limits_{|z|=r} \frac{|a_n|}{|z-z_n|}>e^{n\epsilon}$ holds for only finitely many $n$. From here we get $|L_n(z)| \leq M_\epsilon + ne^{n\epsilon}$ for all $|z|=r$, where $M_\epsilon$ is a finite random number obtained by bounding the finitely many terms for which $\sup\limits_{|z|=r} \frac{|a_n|}{|z-z_n|}>e^{n\epsilon}$ holds. Therefore $\limsup\limits_{n \rightarrow \infty}\frac{1}{n}\log|L_n(z)|\leq \epsilon$ almost surely; applying this with $\epsilon/2$ in place of $\epsilon$ gives the strict inequality claimed.
\end{proof}
Notice that we have in fact proved a stronger version of Lemma \ref{momentthm2}. We state it as a remark, which will be used in the subsequent lemmas.
\begin{remark}\label{one}
Define $M_n(R):=\sup\limits_{|z|=R}|L_n(z)|$. Then for any $\epsilon>0$, we have $$\limsup\limits_{n \rightarrow \infty}\frac{1}{n}\log M_n(R)< \epsilon$$ almost surely, for Lebesgue almost every $R>0$.
\end{remark}
To prove a similar result for the lower bound of $\log|L_n(z)|$ and establish \eqref{A2}, we need the Kolmogorov-Rogozin inequality \ref{KR-Inequality}, which was stated at the beginning of this chapter.
\begin{lemma}\label{krthm2}
Let $L_n(z)$ be as in Theorem \ref{thm2}. Then for any $\epsilon>0$, and Lebesgue a.e. $z \in \mathbb{C}$ $$\lim\limits_{n \rightarrow \infty}\Pr\left(\frac{1}{n}\log|L_n(z)|<-\epsilon\right)=0.$$
\end{lemma}
\begin{proof}
Fix $z \in \mathbb{C}$ outside the exceptional set $F$. Let $z_{i_1},z_{i_2},\dots, z_{i_{l_n}}$ be the points of $\{z_1,z_2,\dots,z_n\}$ that lie in $K_z$. From the definition of the concentration function, and the fact that $Q(X_1+X_2+\dots+X_n,\delta)$ is non-increasing in $n$, we get,
\begin{align}
\Pr\left(|L_n(z)|\leq e^{-n\epsilon}\right) & \leq Q\left(\sum_{i=1}^{n}\frac{a_i}{z-z_i},e^{-n\epsilon}\right), \\
&\leq Q\left(\sum_{k=1}^{l_n}\frac{a_{i_k}}{z-z_{i_k}},e^{-n\epsilon}\right).
\end{align}
The random variables $\frac{a_{i_k}}{z-z_{i_k}}$s are independent. Hence we can apply Kolmogorov-Rogozin inequality to get,
\begin{align}
\Pr\left(|L_n(z)|\leq e^{-n\epsilon}\right) & \leq C\left\{\sum_{k=1}^{l_n}\left(1-Q\bigg(\frac{a_{i_k}}{z-z_{i_k}},e^{-n\epsilon}\bigg)\right)\right\}^{-\frac{1}{2}}.
\end{align}
Because $|z-z_{i_k}|\leq d(z,K_z)+\mathrm{diam}(K_z)$, from the above we get,
\begin{align}
\Pr\left(|L_n(z)|\leq e^{-n\epsilon}\right) & \leq C \left\{\sum_{k=1}^{l_n}\left(1-Q\left(a_{i_k},\left(d(z,K_z)+\mathrm{diam}(K_z)\right)e^{-n\epsilon}\right)\right) \right\}^{-\frac{1}{2}}.\label{eqn:lemma5.4.8:1}
\end{align}
Because the $a_{i_k}$ are non-degenerate i.i.d. random variables and $l_n\rightarrow\infty$, the right hand side of \eqref{eqn:lemma5.4.8:1} converges to $0$ as $n\rightarrow\infty$. Hence the lemma is proved.
\end{proof}
It remains to show that the hypothesis of Theorem \ref{thm2} implies \eqref{A3}. Fix $R>r$. The idea is to express $\log|L_n(z)|$, for $z \in \text{$\mathbb{D}$}_r$, as an integral over the boundary of the larger disk $\text{$\mathbb{D}$}_R$ and to bound this integral uniformly on the disk $\text{$\mathbb{D}$}_r$. This is facilitated by the Poisson-Jensen formula for meromorphic functions, which we now state. Let $\alpha_1, \alpha_2, \dots, \alpha_k$ and $\beta_1,\beta_2, \dots, \beta_\ell$ be the zeros and poles of a meromorphic function $f$ in $\text{$\mathbb{D}$}_R$. Then
\begin{align}
\log|f(z)| = \frac{1}{2\pi}\int_{0}^{2\pi}\Re\bigg(\frac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg)\log|f(Re^{i\theta})|d\theta - \sum_{j=1}^{k}\log\bigg|\frac{R^{2}-\overline{\alpha}_jz}{R(z-\alpha_j)}\bigg|\\ + \sum_{j=1}^{\ell}\log\bigg|\frac{R^{2}-\overline{\beta}_jz}{R(z-\beta_j)}\bigg|.
\end{align}
The following Lemma \ref{tighttwo} gives an estimate of the boundary integral in the Poisson-Jensen formula, applied to $L_n$ at $z=0$. Define \[\text{$\mathcal{I}$}_n(z,R):=\frac{1}{2\pi}\int_{0}^{2\pi}\Re\bigg(\dfrac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg)\log|L_n(Re^{i\theta})|d\theta.\]
\begin{lemma}\label{tighttwo}
There is a constant $c_2>0$ such that
\begin{align}
\lim\limits_{n \rightarrow \infty}\Pr\left( \dfrac{1}{n}\text{$\mathcal{I}$}_n(0,R)\leq-c_2\right) = 0.
\end{align}
\end{lemma}
\begin{proof}
From Poisson-Jensen's formula at $0$ we get,
\begin{equation}
\dfrac{1}{n}\text{$\mathcal{I}$}_n(0,R) = \dfrac{1}{n}\log|L_n(0)| +\dfrac{1}{n}\sum_{m=1}^{k}\log\bigg|\dfrac{z_{i_m}}{R}\bigg| - \dfrac{1}{n}\sum_{m=1}^{l}\log\bigg|\dfrac{\alpha_{i_m}}{R}\bigg|,\label{eqn:lemma5.4.9:1}
\end{equation}
where the $z_{i_m}$ and the $\alpha_{i_m}$ are the poles and zeros, respectively, of $L_n(z)$ in the disk $\text{$\mathbb{D}$}_R$. Because $0$ is not a limit point of $\{z_1,z_2, \dots\}$, $\left\{\dfrac{1}{n}\sum_{m=1}^{k}\log\big|\dfrac{z_{i_m}}{R}\big|\right\}_{n\geq1}$ is a sequence of non-positive numbers bounded from below, and $\left\{\dfrac{1}{n}\sum_{m=1}^{l}\log\bigg|\dfrac{\alpha_{i_m}}{R}\bigg|\right\}_{n\geq1}$ is also a sequence of non-positive numbers. Therefore the sum of the last two terms on the right hand side of \eqref{eqn:lemma5.4.9:1} is bounded below, say by $-C_1$. Because $0$ is not in the exceptional set $F$, Lemma \ref{krthm2} gives $\lim\limits_{n\rightarrow\infty}\Pr\left(\frac{1}{n}\log|L_n(0)|<-1\right)=0$. Therefore $$\lim\limits_{n\rightarrow\infty}\Pr\left(\frac{1}{n}\log|L_n(0)|<-1 \text{ or }\dfrac{1}{n}\sum_{m=1}^{k}\log\bigg|\dfrac{z_{i_m}}{R}\bigg| - \dfrac{1}{n}\sum_{m=1}^{l}\log\bigg|\dfrac{\alpha_{i_m}}{R}\bigg| <-C_1\right)=0.$$ Choosing $c_2=C_1+1$, the statement of the lemma is established.
\end{proof}
Using Lemma \ref{tighttwo} above and the explicit formula for the Poisson kernel of the disk, we now obtain a uniform bound for the corresponding integral $\text{$\mathcal{I}$}_n(z,R)$.
\begin{lemma}\label{tight3}
There is a constant $b>0$ such that for any $z \in \text{$\mathbb{D}$}\text{$_r$}$
\begin{equation}
\lim\limits_{n\rightarrow \infty}\Pr\left(\dfrac{1}{n}\text{$\mathcal{I}$}_n(z,R)\leq -b\right)=0.
\end{equation}
\end{lemma}
\begin{proof}
We decompose the function $\log|L_n(z)|$ into its positive and negative parts. Let $\log|L_n(z)|=\log_+|L_n(z)|-\log_-|L_n(z)|$, where $\log_+|L_n(z)|$ and $\log_-|L_n(z)|$ are non-negative. Using this we can write,
\begin{align}
2\pi\text{$\mathcal{I}$}_n(z,R) = & \int_{0}^{2\pi}\log|L_n(Re^{i\theta})|\Re\bigg(\dfrac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg)d\theta, \\
= & \int_{0}^{2\pi}\log_+|L_n(Re^{i\theta})|\Re\bigg(\dfrac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg)d\theta -\int_{0}^{2\pi}\log_-|L_n(Re^{i\theta})|\Re\bigg(\dfrac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg)d\theta.
\end{align}
Because $r<R$, we can find constants $C_3$ and $C_4$, depending only on $r$ and $R$, such that
$0 < C_3 \leq \Re\bigg(\dfrac{Re^{i\theta}+z}{Re^{i\theta}-z}\bigg) \leq C_4 < \infty$ for all $z \in \text{$\mathbb{D}$}\text{$_r$}$ and all $\theta$. Therefore,
\begin{align}
2\pi\text{$\mathcal{I}$}_n(z,R) \geq & C_3\int_{0}^{2\pi}\log_+|L_n(Re^{i\theta})|d\theta - C_4\int_{0}^{2\pi}\log_-|L_n(Re^{i\theta})|d\theta,\\
\geq & 2\pi C_4\text{$\mathcal{I}$}_n(0,R)-2\pi(C_4-C_3)\log_+M_n(R),\label{eqn:lemma5.4.10:1}
\end{align}
where the last inequality uses $\int_{0}^{2\pi}\log_-|L_n(Re^{i\theta})|d\theta=\int_{0}^{2\pi}\log_+|L_n(Re^{i\theta})|d\theta-2\pi\text{$\mathcal{I}$}_n(0,R)$ together with $\log_+|L_n(Re^{i\theta})|\leq\log_+M_n(R)$.
From the Remark \ref{one} and Lemma \ref{tighttwo} we get
\begin{equation}
\lim\limits_{n \rightarrow \infty}\Pr\left( \dfrac{1}{n}\text{$\mathcal{I}$}_n(0,R)\leq -c_2 \text{ or } \dfrac{1}{n}\log_+M_n(R) > 1 \right) = 0. \label{eqn:lemma5.4.10:2}
\end{equation}
On the complement of the event in \eqref{eqn:lemma5.4.10:2}, inequality \eqref{eqn:lemma5.4.10:1} gives $\dfrac{1}{n}\text{$\mathcal{I}$}_n(z,R)\geq -C_4c_2-(C_4-C_3)$. The proof is completed by choosing $b=C_4c_2+C_4-C_3$.
\end{proof}
To complete the argument we now need to control the remaining terms in the Poisson-Jensen formula.
Let $\xi_{i_m}$ and $\beta_{i_m}$ be the poles and zeros, respectively, of $L_n(z)$ in $\text{$\mathbb{D}$}_R$, and let $k,l\,(\leq n)$ be the numbers of zeros and poles of $L_n(z)$, respectively, in $\text{$\mathbb{D}$}_R$. Applying the Poisson-Jensen formula to $L_n(z)$ we have,
\begin{align}
\frac{1}{n^{2}}\int_{\text{$\mathbb{D}$}\text{$_r$}}^{}&\log^{2}|L_n(z)|dm(z) \\ &=\frac{1}{n^{2}}\int_{\text{$\mathbb{D}$}\text{$_r$}}\left(\text{$\mathcal{I}$}_n(z,R)+\sum\limits_{m=1}^{k}\log\biggl|\frac{R(z-\beta_{i_m})}{R^2-\overline{\beta}_{i_m}z}\biggr|-\sum\limits_{m=1}^{l}\log\biggl|\frac{R(z-\xi_{i_m})}{R^2-\overline{\xi}_{i_m}z}\biggr|\right)^2dm(z).
\end{align}
Invoking a case of Cauchy-Schwarz inequality $(a_1+a_2+\dots+a_n)^2\leq n(a_1^2+a_2^2+\dots+a_n^2)$ repeatedly we get,
\begin{align}
\int_{\text{$\mathbb{D}$}\text{$_r$}}^{}\dfrac{1}{n^{2}}&\log^{2}|L_n(z)|dm(z)
\\ & \leq \dfrac{3}{n^{2}}\int_{\text{$\mathbb{D}$}\text{$_r$}}^{}|\text{$\mathcal{I}$}_n(z,R)|^2dm(z) + \dfrac{3}{n^{2}}\int_{\text{ $\mathbb{D}$}_r}^{}\left(\sum_{m=1}^{k}\log\bigg|\dfrac{R(z-\beta_{i_m})}{R^{2}-\overline{\beta}_{i_m}z}\bigg|\right)^{2}dm(z)\\
&+\dfrac{3}{n^{2}}\int_{\mathbb{D}\text{$_r$}}^{}\left(\sum_{m=1}^{l}\log\bigg|\dfrac{R(z-\xi_{i_m})}{R^{2}-\overline{\xi}_{i_m}z}\bigg|\right)^{2}dm(z),\\
&\leq \int_{\mathbb{D}\text{$_r$}}^{}\dfrac{3}{n^{2}}|\text{$\mathcal{I}$}_n(z,R)|^2dm(z) + \dfrac{3k}{n^{2}}\sum_{m=1}^{k}\int_{\mathbb{D}\text{$_r$}}^{}\log^{2}\bigg|\dfrac{R(z-\beta_{i_m})}{R^{2}-\overline{\beta}_{i_m}z}\bigg|dm(z)\\
&+\dfrac{3l}{n^{2}}\sum_{m=1}^{l}\int_{\mathbb{D}\text{$_r$}}^{}\log^{2}\bigg|\dfrac{R(z-\xi_{i_m})}{R^{2}-\overline{\xi}_{i_m}z}\bigg|dm(z).
\end{align}
For $z \in\text{ $\mathbb{D}$}_r$ and $\beta_{i_m}\in\text{$\mathbb{D}$}_R$ we have $|R^2-\overline{\beta}_{i_m}z|\leq R(R+r)$, so that $\bigg|\dfrac{R(z-\beta_{i_m})}{R^{2}-\overline{\beta}_{i_m}z}\bigg|\geq \dfrac{|z-\beta_{i_m}|}{R+r}$; since each such Blaschke factor also has modulus at most $1$, it follows that $\log^{2}\bigg|\dfrac{R(z-\beta_{i_m})}{R^{2}-\overline{\beta}_{i_m}z}\bigg|\leq\log^{2}\bigg|\dfrac{z-\beta_{i_m}}{R+r}\bigg|$, and similarly for the $\xi_{i_m}$. Applying these bounds in the above we get,
\begin{align}
\int_{\mathbb{D}\text{$_r$}}^{}\dfrac{1}{n^{2}}\log^{2}|L_n(z)|dm(z)
& \leq \int_{\mathbb{D}\text{$_r$}}^{}\dfrac{3}{n^{2}}|\text{$\mathcal{I}$}_n(z,R)|^2dm(z) + \dfrac{3k}{n^{2}}\sum_{m=1}^{k}\int_{\mathbb{D}\text{$_r$}}^{}\log^{2}\bigg|\dfrac{z-\beta_{i_m}}{R+r}\bigg|dm(z)\\
&+\dfrac{3l}{n^{2}}\sum_{m=1}^{l}\int_{\mathbb{D}\text{$_r$}}^{}\log^{2}\bigg|\dfrac{z-\xi_{i_m}}{R+r}\bigg|dm(z). \label{eqn:poisson-jensen:1}
\end{align}
From Lemmas \ref{tighttwo} and \ref{tight3}, together with Remark \ref{one}, the sequence $\left\{\frac{3}{n^{2}}\int_{\mathbb{D}\text{$_r$}}^{}|\text{$\mathcal{I}$}_n(z,R)|^2dm(z)\right\}_{n\geq1}$ is tight. The function $\log^{2}|z|$ is integrable on any bounded subset of $\mathbb{C}$. Combining these facts with inequality \eqref{eqn:poisson-jensen:1}, we conclude that the sequence $\left\{\int_{\mathbb{D}\text{$_r$}}^{}\frac{1}{n^{2}}\log^{2}|L_n(z)|dm(z)\right\}_{n\geq1}$ is tight. Hence the hypothesis of Theorem \ref{thm2} implies \eqref{A3}, and the proof of the theorem is complete.
\chapter{The probability that products of real random matrices have all eigenvalues real tends to $1$}
\label{ch:realeigenvalues3}
\section{Background}
In this chapter we consider products of real random matrices with fixed size.
In \cite{arul}, Arul Lakshminarayan observed an interesting phenomenon for products of Ginibre matrices. He considered products of i.i.d. $k \times k$ Ginibre matrices with real Gaussian entries. Let $p_n^{(k)}$ be the probability that a product of $n$ such matrices has all real eigenvalues. Using numerical simulations he computed $p_n^{(k)}$, and based on these observations he conjectured that $p^{(k)}_n$ increases to $1$ with the number of factors $n$.
Peter Forrester, in \cite{forrester}, considered the case of $k \times k$ Ginibre matrices with real Gaussian entries. He gave a formula for $p_n^{(k)}$, from which he deduced that this probability increases to $1$ exponentially fast.
We state a generalization of the conjecture stated in \cite{arul}.
\begin{conjecture}\label{con1}Let $X_1,X_2, \dots X_n$ be i.i.d. matrices of size $k \times k$, whose entries are i.i.d. real random variables distributed according to probability measure $\mu$ and $A_n=X_1X_2\dots X_n$. Then,
$$\lim\limits_{n\rightarrow \infty}\Pr(A_n\text{ has all real eigenvalues})=1.$$
\end{conjecture}
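The conjectured convergence can be probed numerically. The following Monte Carlo sketch is an illustration only (the sample sizes and the random seed are arbitrary choices): for $2\times 2$ matrices, all eigenvalues being real is equivalent to the discriminant of the characteristic polynomial being non-negative, a test that is numerically stable and invariant under rescaling the product.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_real_2x2(n, trials=2000):
    """Monte Carlo estimate of p_n^(2): the probability that a product of n
    i.i.d. 2x2 real Ginibre matrices has both eigenvalues real, decided by
    the sign of the characteristic discriminant tr^2 - 4 det."""
    hits = 0
    for _ in range(trials):
        A = np.eye(2)
        for _ in range(n):
            A = A @ rng.standard_normal((2, 2))
            A /= np.abs(A).max()  # rescale to avoid overflow/underflow;
                                  # the discriminant's sign is scale-invariant
        tr = A[0, 0] + A[1, 1]
        det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
        hits += (tr * tr - 4.0 * det >= 0.0)
    return hits / trials

# The n = 1 estimate should be near the known value 1/sqrt(2) ~ 0.707 for a
# single 2x2 real Ginibre matrix, and the estimates increase towards 1 with n.
print([p_real_2x2(n) for n in (1, 5, 20)])
```

For $k=2$ the estimates grow towards $1$ as $n$ increases, in line with the conjecture.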
\section{Results}
We prove the conjecture in the special case when the probability measure $\mu$ has an atom, i.e., there is $x \in \mathbb{R}$ such that $\mu(\{x\})>0$. The proof in this case is based on the simple observation that the rank of a product of matrices is at most the minimum of the ranks of the individual matrices. In the given scenario, each individual matrix has rank at most $1$ with non-zero probability (for instance, when all of its entries equal $x$). If a real matrix has rank at most $1$, then all its eigenvalues are real (they are $0$ and the trace of the matrix). The following theorem formalizes this.
\begin{theorem}
Let $X_1,X_2, \dots, X_n$ be i.i.d. matrices of size $k \times k$, whose entries are i.i.d. real random variables distributed according to a probability measure $\mu$ having an atom, and let $A_n=X_1X_2\dots X_n$. Then,
\[
\lim\limits_{n\rightarrow\infty}\Pr(A_n \text{ has all real eigenvalues})=1.
\]
\end{theorem}
\begin{proof}
Let $x$ be an atom of the measure $\mu$. Then $X_j$ has rank at most $1$ with probability at least $\mu(\{x\})^{k^2}$; this happens, in particular, when all entries of $X_j$ equal $x$. The matrices $X_j$ are independent of each other. Therefore,
\begin{align}
\Pr(A_n \text{ has rank at most } 1 ) & \geq \Pr(\text{at least one of }X_1,X_2,\dots,X_n \text{ has rank at most }1),\\
& \geq 1-(1-\mu(\{x\})^{k^2})^n.\label{eqn:chapter4:lemma1:1}
\end{align}
We know that real matrices with rank at most $1$, have all eigenvalues real. Hence,
$$
\Pr (A_n \text{ has all real eigenvalues}) \geq \Pr(A_n \text{ has rank at most }1).
$$
Hence from above and \eqref{eqn:chapter4:lemma1:1} we have, $\lim\limits_{n\rightarrow\infty}\Pr(A_n \text{ has all real eigenvalues})=1.$
\end{proof}
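The bound \eqref{eqn:chapter4:lemma1:1} can be checked against simulation. Below is a hedged sketch (an illustration only, not part of the proof) for $k=2$ with entries uniform on $\{-1,+1\}$, so that $\mu$ has atoms of mass $\frac{1}{2}$; the empirical probability of all-real eigenvalues comfortably exceeds the conservative rank-one bound $1-(1-\mu(\{x\})^{k^2})^n$.

```python
import numpy as np

rng = np.random.default_rng(1)

def real_prob_and_bound(n, trials=2000):
    """Empirical probability that a product of n i.i.d. 2x2 matrices with
    entries uniform on {-1, +1} (so mu has atoms of mass 1/2) has both
    eigenvalues real, together with the rank-1 lower bound
    1 - (1 - mu({x})^(k^2))^n from the proof, with k = 2, mu({x}) = 1/2."""
    hits = 0
    for _ in range(trials):
        A = np.eye(2)
        for _ in range(n):
            X = 2.0 * rng.integers(0, 2, size=(2, 2)) - 1.0
            A = A @ X
            m = np.abs(A).max()
            if m > 0:
                A /= m  # rescale; the discriminant's sign is scale-invariant
        tr = A[0, 0] + A[1, 1]
        det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
        hits += (tr * tr - 4.0 * det >= 0.0)
    bound = 1.0 - (1.0 - 0.5 ** 4) ** n
    return hits / trials, bound

print(real_prob_and_bound(20))
```

The bound from the proof is deliberately crude (it ignores every mechanism for real eigenvalues other than a rank-one factor), which is why the empirical probability sits far above it.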
We make the following observation about $2 \times 2$ real matrices: if the rows of a real random matrix are exchangeable, then the matrix has both eigenvalues real with probability at least $\frac{1}{2}$. Later this will be applied to products of $2 \times 2$ real random matrices, showing that the probability that such a product has all real eigenvalues is at least $\frac{1}{2}$.
\begin{lemma}\label{halfbound}
Let $M = \left[\begin{smallmatrix} a&b\\ c&d \end{smallmatrix}\right]$, where the rows $(a,b)$ and $(c,d)$ are exchangeable real random vectors. Then,
$$
\Pr(M \text{ has both real eigenvalues}) \geq \frac{1}{2}.
$$
\end{lemma}
\begin{proof}
The characteristic polynomial of the matrix $M$ is $P_M(x)=x^2-(a+d)x+(ad-bc)$. The matrix $M$ has both eigenvalues real if and only if the discriminant of the characteristic polynomial is non-negative:
$$(a+d)^2-4(ad-bc) \geq 0.$$
Hence the probability that $M$ has both eigenvalues real is
$$\Pr((a+d)^2-4(ad-bc) \geq 0).$$
Because the rows $(a,b)$ and $(c,d)$ are exchangeable, the matrices $\left[\begin{smallmatrix} a&b\\ c&d \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} c&d\\ a&b \end{smallmatrix}\right]$ are identically distributed, and the discriminant of the characteristic polynomial of the latter is $(b+c)^2-4(bc-ad)$. Hence,
$$
\Pr((a+d)^2-4(ad-bc) \geq 0)=\Pr((b+c)^2-4(bc-ad) \geq 0).
$$
Therefore,
\begin{align}
\Pr(M \text{ has both} &\text{ real eigenvalues})\\ &= \frac{1}{2}(\Pr((a+d)^2-4(ad-bc) \geq 0)+\Pr((b+c)^2-4(bc-ad) \geq 0)),\\
& \geq \frac{1}{2}\Pr((a+d)^2-4(ad-bc) \geq 0 \text{ or }(b+c)^2-4(bc-ad) \geq 0). \label{eqn:chapter4:lemma2:1}
\end{align}
Because $(a+d)^2-4(ad-bc)+(b+c)^2-4(bc-ad)=(a+d)^2+(b+c)^2\geq 0$, at least one of $(a+d)^2-4(ad-bc)$ and $(b+c)^2-4(bc-ad)$ is non-negative. Therefore,
$$
\Pr((a+d)^2-4(ad-bc) \geq 0 \text{ or }(b+c)^2-4(bc-ad) \geq 0)=1.
$$
Combining the above with \eqref{eqn:chapter4:lemma2:1} we get
$\Pr(M \text{ has both real eigenvalues}) \geq \frac{1}{2}.$
\end{proof}
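As a quick sanity check of Lemma \ref{halfbound} (a Monte Carlo sketch assuming i.i.d. standard normal entries, for which the rows are trivially exchangeable), one can estimate the probability that the discriminant is non-negative; the exact value for a single $2\times 2$ real Ginibre matrix is $2^{-1/2}\approx 0.707$, consistent with the lower bound $\frac{1}{2}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# With i.i.d. standard normal entries, the rows (a, b) and (c, d) are
# exchangeable, so the lemma applies.  Estimate P(discriminant >= 0).
a, b, c, d = rng.standard_normal((4, 200_000))
p = np.mean((a + d) ** 2 - 4.0 * (a * d - b * c) >= 0.0)
print(p)  # close to 1/sqrt(2) ~ 0.707, comfortably above 1/2
```

The gap between the empirical value and $\frac12$ reflects that the lemma's union bound discards the (positively correlated) event that both discriminants are non-negative simultaneously.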
Using Lemma \ref{halfbound}, we obtain that the probability that a product of $2 \times 2$ matrices with i.i.d. entries has all real eigenvalues is at least $\frac{1}{2}$.
\begin{corollary}
Let $X_1,X_2, \dots, X_n$ be i.i.d. matrices of size $2 \times 2$ whose entries are i.i.d. real random variables distributed according to a probability measure $\mu$, and let $A_n=X_1X_2\dots X_n$. Then,
$$
\Pr(A_n\text{ has all real eigenvalues})\geq \frac{1}{2}.
$$
\end{corollary}
\begin{proof}
To satisfy the hypothesis of Lemma \ref{halfbound}, it is enough to show that the rows of the matrix $A_n$ are exchangeable. It can be noticed that the matrices $X_1$ and $\left[\begin{smallmatrix}
0 & 1\\
1 & 0
\end{smallmatrix}\right]X_1$ are identically distributed, so are $A_n$ and $\left[\begin{smallmatrix}
0 & 1\\
1 & 0
\end{smallmatrix}\right]A_n$. Hence the rows of $A_n$ are exchangeable.\end{proof}
\chapter{Declaration}
\vspace{0.5in}
\noindent I hereby declare that the work reported in this thesis is entirely original and has
been carried out by me under the supervision of Prof.~Manjunath Krishnapur at the Department of
Mathematics, Indian Institute of Science, Bangalore. I further declare
that this work has not been the basis for the award of any degree, diploma, fellowship,
associateship or similar title of any University or Institution.\\
\vspace*{1in}
\noindent $\begin{array}{lcr}
\textrm{Tulasi Ram Reddy A} & \hspace*{1.95in} &~ \\
\textrm{S. R. No. 6910-110-101-08085} & \hspace*{1.95in} &~ \\
\textrm{Indian Institute of Science} & \hspace*{1.95in} & ~ \\
\textrm{Bangalore} & \hspace*{1.95in} & ~ \\
~ & \hspace*{1.95in} & ~ \\
~ & \hspace*{1.95in} & ~ \\
~ & \hspace*{1.95in} & ~ \\
~& \hspace*{1.95in} & \textrm{Prof.~Manjunath Krishnapur} \\
~& \hspace*{1.95in} & \textrm{(Research advisor)}
\end{array}$
\frontmatter
\clearpage
\thispagestyle{empty}
\par\vspace*{.35\textheight}{\centering TO \\[2em]
\large\it My Parents\\
and\\
\large\it Teachers \par}
\chapter{Acknowledgements}
For an outsider a doctoral thesis might appear as a solitary endeavour. But several people (including virtual communities) have contributed in various forms to this thesis. I make an attempt here to thank the people who have contributed to the same. The set of people I have acknowledged here is a subset of the many people who have helped me in this endeavour and is far from complete.
I am deeply indebted to my thesis advisor Manjunath Krishnapur. It is a privilege to be his student. To say the least he has played several roles from being my thesis advisor to a great companion. I believe that I have also acquired some of his inexhaustible enthusiasm towards mathematics, which is indeed very contagious.
My interest in the study of random polynomials surged after discussions with Zakhar Kabluchko during the Trondheim spring school 2013, along with several conversations with Manjunath.
I have immensely benefited from the courses offered in the Mathematics department at IISc. The many thought provoking talks and seminars held at IISc, ISI, ICTS and TIFR-CAM have also been helpful.
Probabilists in Bangalore deserve a special mention for their efforts in organising various academic activities in the city. They, along with many probabilists visiting Bangalore, have certainly made this city very lively and thereby stimulating my research activity at various levels.
I was introduced to mathematics at the Indian Statistical Institute, Kolkata. Courses taught by BV Rao, late SC Bagchi, Arup Bose, Gopal Basak, Arnab Chakraborthy and Probal Chaudhuri among others created first impressions on mathematics and probability in me. Arni Srinivasa Rao encouraged me to pursue research in mathematics and has been great support since. From my high school teachers I recall K Subba Rao and M Radhakrishna who constantly encouraged me to do mathematics.
I take this opportunity to thank various organizations for providing their support generously. I thank NBHM for providing me travel grant to attend the School on Random matrices and Growth models in 2013 in Trieste, Italy. Apart from this, I was also supported by NBHM during my first year of Ph.D program. I thank CSIR for supporting me with the SPM fellowship. I am grateful to KVPY which provided me a fellowship during my undergraduate days. These fellowships have helped me make choices without any confusion at various junctures in my life.
My friends at IISc have bestowed me with several moments which can be cherished forever. This wouldn't have been possible without people like Arpan, Divakaran, Jaikrishnan, Kartick, Nanda, Pranav, Prathamesh, Rajeev, Sayani and Vikram.
Kartick, Manjunath and Nanda read through this thesis carefully before pointing out many mistakes. They also suggested many invaluable changes. A special thanks to them. I would also like to thank them along with Koushik Saha for having good discussions in the subject.
I thank the office staff in the Mathematics department and the many workers at IISc for ensuring an environment which facilitated a very smooth stay for me in the past five years.
Many of my endeavours have posed several challenges for my parents at times. Regardless, they have endorsed my decisions and backed me unflinchingly all along. No less was the patience and love unveiled by my sister and brother-in-law. My grandfather, who probably is my first friend ever, used to present creative answers to my questions during our rounds in the fields. He indeed deserves a special mention here. My aunts, uncles and cousins have made my visits to home very eventful and something to crave for. Friends were constantly in touch with me, despite no efforts from my side in reaching to them. I am sure all of them will be happy seeing this thesis.
\chapter{Abstract}
In the first part of this thesis, we study critical points of random polynomials. We choose two deterministic sequences of complex numbers whose empirical measures converge to the same probability measure in the complex plane. We form a sequence of polynomials whose zeros are chosen at random from either of the sequences. We show that the limiting empirical measures of zeros and critical points agree for these polynomials. As a consequence, we show that when we randomly perturb the zeros of a deterministic sequence of polynomials, the limiting empirical measures of zeros and critical points agree. This result can be interpreted as an extension, with reduced randomness, of earlier results. Pemantle and Rivin initiated the study of critical points of random polynomials, and Kabluchko proved the result when the zeros are i.i.d. random variables.\\
In the second part we deal with the spectrum of products of Ginibre matrices. Exact eigenvalue densities are known for very few matrix ensembles, and the known ones often lead to determinantal point processes. Let $X_1,X_2,\dots, X_k$ be i.i.d. Ginibre matrices of size $n \times n$ whose entries are standard complex Gaussian random variables. We derive the eigenvalue density for matrices of the form $X_1^{\epsilon_1}X_2^{\epsilon_2}\dots X_k^{\epsilon_k}$, where $\epsilon_i=\pm1$ for $i=1,2,\dots,k$. We show that the eigenvalues form a determinantal point process. The case $k=2$, $\epsilon_1+\epsilon_2=0$ was derived earlier by Krishnapur, and the case $\epsilon_i=1$ for $i=1,2,\dots,k$ by Akemann and Burda. Both known cases can be obtained as special cases of our result.
\tableofcontents
\mainmatter
\include{introduction1}
\include{criticalpoints5}
\include{proofs6}
\include{matching7}
\include{ginibreproducts2}
\include{realeigenvalues4}
\backmatter
| {
"timestamp": "2016-02-18T02:05:51",
"yymm": "1602",
"arxiv_id": "1602.05298",
"language": "en",
"url": "https://arxiv.org/abs/1602.05298",
"abstract": "In the first part we study critical points of random polynomials. We choose two deterministic sequences of complex numbers,whose empirical measures converge to the same probability measure in complex plane. We make a sequence of polynomials whose zeros are chosen from either of sequences at random. We show that the limiting empirical measure of zeros and critical points agree for these polynomials. As a consequence we show that when we randomly perturb the zeros of a deterministic sequence of polynomials, the limiting empirical measures of zeros and critical points agree. This result can be interpreted as an extension of earlier results where randomness is reduced. Pemantle and Rivin initiated the study of critical points of random polynomials. Kabluchko proved the result considering the zeros to be i.i.d. random variables.In the second part we deal with the spectrum of products of Ginibre matrices. Exact eigenvalue density is known for a very few matrix ensembles. For the known ones they often lead to determinantal point process. Let $X_1,X_2,...,X_k$ be i.i.d matrices of size $n \\times n$ whose entries are independent complex Gaussian random variables. We derive the eigenvalue density for matrices of the form $Y_1.Y_2....Y_n$, where each $Y_i = X_i$ or $X_i^{-1}$. We show that the eigenvalues form a determinantal point process. The case where $k=2$, $Y_1=X_1,Y_2=X_2^{-1}$ was derived earlier by Krishnapur. The case where $Y_i =X_i$ for all $i=1,2,...,n$, was derived by Akemann and Burda. These two known cases can be obtained as special cases of our result.",
"subjects": "Probability (math.PR); Mathematical Physics (math-ph); Complex Variables (math.CV)",
"title": "On critical points of random polynomials and spectrum of certain products of random matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9850429116504952,
"lm_q2_score": 0.8128673246376009,
"lm_q1q2_score": 0.8007091962465707
} |
https://arxiv.org/abs/1603.08651 | Parkable convex sets and finite-dimensional Hilbert spaces | A subset of a convex body $B$ containing the origin in a Euclidean space is {\it parkable in $B$} if it can be translated inside $B$ in such a manner that the translate contains the origin. We provide characterizations of ellipsoids and of centrally symmetric convex bodies in Euclidean spaces of dimension $\ge 3$ based on the notion of parkability, answering several questions posed by G. Bergman. The techniques used, which are based on characterizations of Hilbert spaces among finite-dimensional Banach spaces in terms of their lattices of subspaces and algebras of endomorphisms, also apply to improve a result of W. Blaschke characterizing ellipsoids in terms of boundaries of illumination. |
\title{Parkable convex sets and finite-dimensional Hilbert spaces}
\author{Alexandru Chirvasitu\footnote{University of Washington, \url{chirva@uw.edu}}}
\begin{document}
\date{}
\maketitle
\begin{abstract}
A subset of a convex body $B$ containing the origin in a Euclidean space is {\it parkable in $B$} if it can be translated inside $B$ in such a manner that the translate contains the origin. We provide characterizations of ellipsoids and of centrally symmetric convex bodies in Euclidean spaces of dimension $\ge 3$ based on the notion of parkability, answering several questions posed by G. Bergman.
The techniques used, which are based on characterizations of Hilbert spaces among finite-dimensional Banach spaces in terms of their lattices of subspaces and algebras of endomorphisms, also apply to improve a result of W. Blaschke characterizing ellipsoids in terms of boundaries of illumination.
\end{abstract}
\noindent {\em Key words: convex body, Blaschke, centrally symmetric, ellipsoid, parkable set, Hilbert space, Banach space}
\vspace{.5cm}
\noindent{MSC 2010: 52A20, 52A21, 47L10, 46C15}
\section*{Introduction}
The present paper answers several questions raised in \cite{berg} regarding convex bodies in Euclidean spaces. The setup is based on the following notion; it was introduced by G. Bergman in the process of studying ``efficient'' embeddings of metric spaces into other metric spaces.
All convex sets are understood to be subsets of some ambient Euclidean space $\bR^n$.
\begin{definition'}\label{def.parkable}
Let $C\subseteq B$ be convex sets with $0\in B$. $C$ is \define{parkable in} $B$ if some translate of $C$ is still contained in $B$ and contains $0$.
\end{definition'}
The questions in \cite{berg} referred to above have to do with characterizing particularly nice convex bodies in $\bR^n$ (i.e. centrally symmetric or ellipsoids) by means of parkability. First recall
\begin{definition'}
Let $C\subseteq \bR^n$ be a convex subset. A \define{center of symmetry} for $C$ is a point $p\in C$ such that
\begin{equation*}
C=2p-C:=\{2p-x\ |\ x\in C\}.
\end{equation*}
$C$ \define{has a center of symmetry} if a center of symmetry exists.
$C$ is \define{centrally symmetric} if $0\in C$ is a center of symmetry for it.
\end{definition'}
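To fix ideas with a standard example (not taken from \cite{berg}): the cube $[0,1]^n$ has center of symmetry $p=(\tfrac 12,\dots,\tfrac 12)$, since
\begin{equation*}
2p-[0,1]^n=\{(1-x_1,\dots,1-x_n)\ |\ x\in [0,1]^n\}=[0,1]^n,
\end{equation*}
whereas a triangle in $\bR^2$ has no center of symmetry: reflection through a candidate center $p$ would permute the three vertices (the extreme points of the triangle), and an involution of a three-element set fixes some vertex $x$, forcing $p=x$; but a convex body with more than one point cannot be symmetric about one of its own extreme points.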
\cite[Question 32]{berg} then reads as follows.
\begin{question'}\label{qu.sym}
Let $C$ be a compact convex subset of $\bR^n$ for some $n>2$, with the property that for every centrally symmetric compact convex subset $B\subset \bR^n$ containing some translate of $C$, that translate is parkable in $B$.
Is it true that $C$ must have a center of symmetry?
\end{question'}
One of the main results is
\begin{theorem'}\label{th.sym}
The answer to \Cref{qu.sym} is affirmative.
\mbox{}\hfill\ensuremath{\blacksquare}
\end{theorem'}
A more elaborate result has to do with recognizing ellipsoids by means of parkability. Recall
\begin{definition'}
A \define{convex body} is a compact convex set with non-empty interior which contains $0$.
\end{definition'}
The starting point for the discussion that follows is the following result from \cite{berg}.
\begin{proposition'}\label{pr.ell}
Let $B\subset \bR^n$ be a convex body for some $n>2$. Then, the
following properties are successively weaker.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $B$ is an ellipsoid centered at $0$.
\item $B$ is centrally symmetric and its intersection with every hyperplane has a center of symmetry (provided it is not empty).
\item Every closed convex subset of $B$ is parkable in $B$.
\end{enumerate}
\end{proposition'}
It is then natural to ask \cite[Question 31]{berg}:
\begin{question'}\label{qu.ell}
Is either of the implications from \Cref{pr.ell} reversible?
\end{question'}
In this context, the second main result is
\begin{theorem'}\label{th.ell}
The three properties in \Cref{pr.ell} are equivalent to one another.
\mbox{}\hfill\ensuremath{\blacksquare}
\end{theorem'}
The rest of this introduction contains a few remarks about the proofs.
\Cref{th.sym,th.ell} do not directly imply one another as such, but they are nevertheless interlinked through the methods used in their proofs. The same auxiliary lemma about plane compact convex sets, for instance, leads both to \Cref{th.sym} and to the fact that condition (iii) in \Cref{pr.ell} implies that $B$ is centrally symmetric (in other words, the partial implication (iii) $\Rightarrow$ (ii)).
Once we show that (iii) implies central symmetry, we can assume $B$ to be centrally symmetric throughout the rest of the proof of \Cref{th.ell}. As such, it is the unit ball of a unique Banach space structure $(\bR^n,\|\cdot\|)$ on $\bR^n$, and now functional-analytic techniques and results can be brought to bear.
More specifically, using the same auxiliary result referred to in passing above we first prove
\begin{proposition'}\label{pr.aux}
Let $B$ be a centrally symmetric convex body satisfying condition (iii) from \Cref{pr.ell}. Then, for every linear hyperplane $H\subset \bR^n$, the non-empty intersections
\begin{equation*}
(H+x)\cap B,\quad x\in \bR^n
\end{equation*}
have centers of symmetry. Moreover, these centers are collinear. \mbox{}\hfill\ensuremath{\blacksquare}
\end{proposition'}
Associating to $H$ the unique line through the origin containing the centers of symmetry from the statement of \Cref{pr.aux} extends, it turns out, to an inclusion-reversing involution on the lattice of subspaces of the Banach space $(\bR^n,\|\cdot\|)$ referred to above. This, together with a lattice-theoretic characterization of Hilbert spaces among Banach spaces due to Kakutani et al., leads to the conclusion that $(\bR^n,\|\cdot\|)$ is a Hilbert space. But this is equivalent to its unit ball being an ellipsoid, and the conclusion follows.
There are other results that might be of independent interest, such as an improvement of a result of Blaschke on characterizations of ellipsoids by means of light rays (\cite{bla}).
\section{Preliminaries}\label{se.prel}
\subsection{Convex geometry}\label{subse.conv}
Our main reference on the topic will be \cite{tomo}, to which we refer the reader for basic terminology.
We say that two convex bodies in a Euclidean space $\bR^n$ are {\it mutual translates} if one is the image of the other through some translation of $\bR^n$.
The following result will make an appearance several times in the
sequel; the reader can consult e.g. \cite{Rya15} and the references
therein for background on the result (which is also \cite[Theorem 3.1.3]{tomo}).
\begin{theorem}\label{th.transl}
Let $K$ and $L$ be two compact convex subsets of $\bR^n$ for $n\ge 3$, and $2\le m\le {n-1}$ a positive integer. If the projections of $K$ and $L$ on every $m$-dimensional linear subspace of $\bR^n$ are mutual translates, then so are $K$ and $L$.
\mbox{}\hfill\ensuremath{\blacksquare}
\end{theorem}
As a consequence, we get (\cite[Corollary 3.1.5]{tomo})
\begin{corollary}\label{cor.centr}
Let $m$ and $n$ be positive integers as in \Cref{th.transl}, and
$B\subset \bR^n$ a compact convex subset. Then, $B$ has a center of symmetry
if and only if its projection on every $m$-dimensional linear
subspace of $\bR^n$ does.
\end{corollary}
\begin{proof}
Having a center of symmetry is equivalent to $B$ and $-B$ being
translates; we can now simply apply \Cref{th.transl} to this pair of
convex sets.
\end{proof}
\subsection{Banach and Hilbert spaces}\label{subse.ban}
We will assume some basics on Banach and Hilbert spaces and bounded operators thereon, as covered e.g. in the introductory sections of \cite[Chapters 1 and 12]{rud} or in the first three chapters of \cite{con}.
Given a centrally symmetric convex body $B=-B$ in $\bR^n$, we can associate to it the unique Banach space structure on $\bR^n$ making $B$ the unit ball: the norm of $x\in \bR^n$ is defined to be
\begin{equation*}
\|x\| = \|x\|_B = \inf\{r\ge 0\ |\ x\in rB\}.
\end{equation*}
The correspondence $B\mapsto \|\cdot\|_B$ is a bijection between centrally symmetric convex bodies and Banach space structures on $\bR^n$; the inverse map associates to a Banach space structure $(\bR^n,\|\cdot\|)$ its unit ball.
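Two standard instances of this correspondence, recorded here only for orientation: the cube and the cross-polytope recover the familiar $\ell^\infty$ and $\ell^1$ norms respectively,
\begin{equation*}
B=[-1,1]^n \Longrightarrow \|x\|_B=\max_{1\le i\le n}|x_i|,\qquad B=\left\{x\ \Big|\ \sum_{i=1}^n|x_i|\le 1\right\} \Longrightarrow \|x\|_B=\sum_{i=1}^n|x_i|.
\end{equation*}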
As \Cref{th.ell} above suggests, one of our main goals will be characterizing ellipsoids among convex bodies. In terms of the bijection $B\leftrightarrow \|\cdot\|_B$ the ellipsoids correspond to the {\it Hilbert space} structures on $\bR^n$, i.e. those Banach space structures whose underlying norm $\|\cdot\|$ arises from an inner product $\langle-,-\rangle$ via the usual formula
\begin{equation*}
\|x\|^2 = \langle x,x\rangle,\ \forall x\in \bR^n.
\end{equation*}
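We note in passing the classical Jordan--von Neumann criterion: a norm $\|\cdot\|$ on $\bR^n$ arises from an inner product in this fashion if and only if it satisfies the parallelogram law
\begin{equation*}
\|x+y\|^2+\|x-y\|^2=2\|x\|^2+2\|y\|^2,\quad \forall x,y\in \bR^n,
\end{equation*}
in which case the inner product can be recovered by polarization, $\langle x,y\rangle=\frac 14\left(\|x+y\|^2-\|x-y\|^2\right)$.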
For this reason, it will be important to have at our disposal results that allow for the recognition of Hilbert spaces among Banach spaces. One such tool is \cite[Theorem 1.1]{istr} (or rather a variant thereof, with real Banach spaces instead of complex ones):
\begin{theorem}\label{th.istr}
Let $(\bR^n,\|\cdot\|)$ be a Banach space. The norm is induced by an inner product if and only if there exists an operation $T\mapsto T^*$ on the space $M_n(\bR)$ of endomorphisms of $\bR^n$ such that
\begin{enumerate}
\item $(T+S)^* = T^*+S^*$;
\item $(T^*)^* = T$;
\item $(TS)^* = S^*T^*$;
\item $\|P\|\le 1$ if $P^2=P=P^*$, where $\|P\|$ is the norm on $M_n(\bR)$ induced by that on $\bR^n$.
\end{enumerate}
\mbox{}\hfill\ensuremath{\blacksquare}
\end{theorem}
In other words, we have a recognition criterion for Hilbert space norms in terms of involutions $T\mapsto T^*$ on their algebras of endomorphisms.
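The model case is of course the Euclidean norm on $\bR^n$, for which $T^*$ can be taken to be the transpose $T^{\mathsf{T}}$: conditions (1)--(3) are immediate, and if $P^2=P=P^{\mathsf{T}}$ then $P$ is the orthogonal projection onto its image, so that by the Cauchy--Schwarz inequality
\begin{equation*}
\|Px\|^2=\langle Px,Px\rangle=\langle P^2x,x\rangle=\langle Px,x\rangle\le \|Px\|\,\|x\|,
\end{equation*}
whence $\|Px\|\le \|x\|$ and $\|P\|\le 1$, as required in condition (4).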
\section{Universally parkable sets}\label{se.univ}
The aim of this section is to prove \Cref{th.sym} above, answering \Cref{qu.sym} in the affirmative. We recall the statement, after introducing the following notion relevant to the setup of the theorem.
\begin{definition}\label{def.univ_park}
A compact convex set $C\subset \bR^n$ is {\it universally parkable} if for any centrally symmetric convex subset $B\subset \bR^n$ containing a translate of $C$, that translate is parkable in $B$ in the sense of \Cref{def.parkable}.
\end{definition}
\begin{theorem}\label{th.sym_bis}
For a positive integer $n\ge 3$, every universally parkable compact
convex subset $C\subset \bR^n$ has a center of symmetry.
\end{theorem}
We proceed through a series of auxiliary results. First off, we narrow down the class of centrally symmetric convex sets that witness the parkability of $C\subset \bR^n$. For this purpose as well as for use in the sequel, denote by $C_u$ the translate $C+u$ for $u\in \bR^n$.
\begin{lemma}\label{le.Cu}
A compact convex subset $C\subset \bR^n$ is universally parkable if and only if for every $u\in \bR^n$ the translate $C_u$ is parkable in the convex hull $\co(-C_u\bigcup C_u)$.
\end{lemma}
\begin{proof}
On the one hand, if $C$ is universally parkable, every $\co(-C_u\bigcup C_u)$ is certainly a centrally symmetric convex subset of $\bR^n$ containing a translate $C_u$, and hence the latter must be parkable in the former.
Conversely, every centrally symmetric compact convex $B\subset \bR^n$ containing a translate $C_u$ contains $\co(-C_u\bigcup C_u)$, so that if $C_u$ is parkable in the latter then it is parkable in the former as well.
\end{proof}
As a consequence, we obtain
\begin{corollary}\label{cor.Cu}
If $C\subset \bR^n$ is universally parkable, then so are its projections on linear subspaces of $\bR^n$.
\end{corollary}
\begin{proof}
This follows from the fact that the property from the statement of \Cref{le.Cu} is clearly invariant under taking orthogonal projections.
\end{proof}
In order to fix ideas, we now specialize \Cref{th.sym_bis} to $n=3$.
\begin{lemma}
If \Cref{th.sym_bis} holds for $n=3$ then it holds in general.
\end{lemma}
\begin{proof}
Starting with a universally parkable $C\subset \bR^n$ for some $n\ge 3$, \Cref{cor.Cu} ensures that its projections on all $3$-dimensional linear subspaces of $\bR^n$ are universally parkable. If \Cref{th.sym_bis} holds for $\bR^3$ we can conclude that these projections all have centers of symmetry, which according to \Cref{cor.centr} means that so does $C$.
\end{proof}
This now allows us to focus on the case $n=3$. Let $\bS\subset \bR^3$ be an origin-centered sphere whose radius
is large enough to ensure that no translates $C_u$ (and hence no reflections
$-C_u$) contain $0$ as $u$ ranges over $\bS$. The following lemma shows
that when $u\in \bS$ there is a certain rigidity in how one may
translate $C_u$ inside $\co(-C_u\bigcup C_u)$ so as to engulf $0$.
\begin{lemma}\label{le.unique}
Let $C\subset \bR^3$ be as in the statement of \Cref{th.sym_bis}. For
any $u\in \bS$, the direction of a vector $0\ne v\in \bR^3$ such that
\begin{equation*}
0\in C_u+v\subset \co(-C_u\bigcup C_u)
\end{equation*}
is uniquely determined.
\end{lemma}
\begin{proof}
Let $u\in \bS$ and $0\ne v\in \bR^3$ be vectors as in the statement, and let
$H\subset \bR^3$ be a $2$-dimensional linear subspace such that the
projection $C_u|H$ does not contain $0\in H$ and is not collinear
with the origin (i.e. is not a segment whose supporting line
contains $0$). Note that such planes
$H$ form an open subset of the Grassmannian $\cG(3,2)$ (because
their defining property is open), and by our
choice of radius for $\bS$ this subset of $\cG(3,2)$ is non-empty.
In the plane $H$ the part of the boundary $\partial \co(-C_u|H\bigcup
C_u|H)$ that is not contained in the boundaries of $C_u|H$ and
$-C_u|H$ consists of two open segments forming two opposite edges of a
parallelogram centered at $0$:
\begin{equation*}
\begin{tikzpicture}[auto,baseline=(current bounding
box.center),scale=1]
\node (c) at (2.2,.8) {$\scriptstyle C_u|H$};
\node (-c) at (-2.3,-.8) {$\scriptstyle -C_u|H$};
\node (aux) at (-1.125,-.425) {};
\draw (0,0) node[circle, inner sep=1pt, fill=black,
label={right:{$\scriptstyle 0$}}] (0) {};
\draw[-] (2,1.5) .. controls (2.5,1.7) and (3.3,1.2) .. (3,.7);
\draw[-] (3,.7) .. controls (2.7,.2) and (2.625,.25) .. (2.5,.2);
\draw[-] (2.5,.2) to[bend left=9] (1.5,.4);
\draw[-] (1.5,.4) .. controls (1,.6) and (1.75,1.4) .. (2,1.5);
\draw[-] (-2,-1.5) .. controls (-2.5,-1.7) and (-3.3,-1.2) .. (-3,-.7);
\draw[-] (-3,-.7) .. controls (-2.7,-.2) and (-2.625,-.25) .. (-2.5,-.2);
\draw[-] (-2.5,-.2) to[bend left=9] (-1.5,-.4);
\draw[-] (-1.5,-.4) .. controls (-1,-.6) and (-1.75,-1.4) .. (-2,-1.5);
\draw[-] (2,1.5) -- (-2.5,-.2);
\draw[-] (-2,-1.5) -- (2.5,.2);
\draw[->] (0) to node[pos=.3,auto,swap] {$\scriptstyle w$} (aux);
\draw (-.25,.65) node[circle, inner sep=1pt, fill=black] () {};
\draw (.25,-.65) node[circle, inner sep=1pt, fill=black] () {};
\draw[-] (.5,-1.3) -- (-.5,1.3);
\draw (-.5,1.3) node[circle, inner sep=0pt, fill=black,
label={right:{$\scriptstyle L$}}] () {};
\end{tikzpicture}
\end{equation*}
More formally, as the diagram illustrates, the two segments in question are the tangents to the boundary $\partial\co(-C_u|H\bigcup C_u|H)$ at the points where a hyperplane (i.e. line) $L$ that separates $C_u|H$ and $-C_u|H$ intersects said boundary.
Denoting by $w$ the orthogonal projection of $v$ on $H$, it follows
from the above remark about the boundary of $\co(-C_u|H\bigcup C_u|H)$
that $w$ must be parallel to the two segments and point away from
$C_u|H$. This determines the direction of $w\in H$ uniquely, and since
$w$ was the orthogonal projection of $v$ on any one of the elements of
an open subset of $\cG(3,2)$, we deduce the desired uniqueness of the
direction of $v$.
\end{proof}
\begin{remark}\label{re.unique}
Note that the proof of \Cref{le.unique} actually shows more than the statement claims: the direction of the vector $v$ from the statement is the unique direction in which $C_u$ can be translated without leaving $\co(-C_u\bigcup C_u)$.
\end{remark}
This gives us, for every $u\in \bS$, a unique unit vector $\varphi(u)=\frac
v{\|v\|}\in \bS_1$ (unit sphere centered at $0$) such that translation
in its direction will position $C_u$ so that it contains $0$. The map
$\varphi$ is in fact very well-behaved. Recall that an {\it odd} map
between two spheres is one that sends antipodes to antipodes. With this in hand, we have
\begin{lemma}\label{le.cont}
The map $\varphi:\bS\to \bS_1$ deduced from \Cref{le.unique} as in
the discussion above is continuous and odd.
\end{lemma}
\begin{proof}
The fact that $\varphi$ is odd is immediate: if $-\varphi(u)$
belongs to $C_u$ then $\varphi(u)$ belongs to $-C_u$, meaning that
translation of $-C_u$ by $-\varphi(u)$ will position the former so
that it contains $0$.
As for the continuity of $\varphi$, note first that because of our choice of $\bS$ (such that no $C_u$ contains the origin) there is a positive lower bound on the length of vectors $v$ such that $0\in C_u+v$ as $u$ ranges over $\bS$. In conclusion, there is some $t>0$ such that for all $u\in \bS$ we have
\begin{equation}\label{eq:cont}
C_u+t \varphi(u)\subset \co(-C_u\bigcup C_u).
\end{equation}
The condition
\begin{equation*}
C_u+tv\subset \co(-C_u\bigcup C_u)
\end{equation*}
is closed in $(u,v)\in \bS\times \bS_1$, and together with \Cref{re.unique} above, \Cref{eq:cont} shows that the set of pairs $(u,v)$ that satisfy it is precisely the graph of $\varphi:\bS\to \bS_1$. Since we now know that the graph of the map $\varphi$ between compact spaces is closed, we can conclude that it is continuous.
\end{proof}
\begin{corollary}\label{cor.onto}
The map $\varphi:\bS\to \bS_1$ from \Cref{le.cont} is onto.
\end{corollary}
\begin{proof}
Indeed, \Cref{le.cont} shows that it is a continuous odd map between $2$-spheres. By one
version of the Borsuk-Ulam theorem (e.g. the main result of \cite{dold}) it follows that the map is not nullhomotopic and hence, because the complement of a point in the $2$-sphere is contractible, must be onto.
\end{proof}
\begin{lemma}\label{le.tough}
For every $u\in \bS$, the orthogonal projection of $C_u$ on $H=\varphi(u)^\perp$ is centrally symmetric.
\end{lemma}
\begin{proof}
We have to show that the projections of $C_u$ and $-C_u$ on $H$ coincide. If they do not, then there is a point $x$ on the boundary $\partial(C_u|H)$ that does not belong to $-C_u|H$. The line in $\bR^3$ through $x$ and orthogonal to $H$ (and hence parallel to $\varphi(u)$) intersects $-C_u\bigcup C_u$ along a segment (possibly degenerate, i.e. a single point) contained in the boundary $\partial C_u$.
\begin{equation*}
\begin{tikzpicture}[auto,baseline=(current bounding
box.center),scale=1]
\node (c) at (-.7,1.4) {$\scriptstyle C_u$};
\node (-c) at (.8,-1.5) {$\scriptstyle -C_u$};
\draw (0,0) node[circle, inner sep=1pt, fill=black, label={45:{$\scriptstyle 0$}}] () {};
\draw[-] (-2,2) -- (-2,1.5);
\draw[-] (-2,2) .. controls (-2,3) and (.5,2) .. (.5,1);
\draw[-] (-2,1.5) .. controls (-2,1) and (.5,0) .. (.5,1);
\draw[-] (2,-2) -- (2,-1.5);
\draw[-] (2,-2) .. controls (2,-3) and (-.5,-2) .. (-.5,-1);
\draw[-] (2,-1.5) .. controls (2,-1) and (-.5,0) .. (-.5,-1);
\draw[-] (-3,0) -- (3,0);
\node (H) at (2.8,.2) {$\scriptstyle H$};
\draw[->] (2,2) to node[pos=.5] {$\scriptstyle \varphi(u)$} (2,1);
\draw[-] (-2,2.5) -- (-2,-.3);
\draw (-2,0) node[circle, inner sep=1pt, fill=black, label={135:{$\scriptstyle x$}}] () {};
\end{tikzpicture}
\end{equation*}
Translation by $t\varphi(u)$ will move one of the endpoints of the segment outside of $\co(-C_u\bigcup C_u)$, contradicting \Cref{eq:cont}:
\begin{equation*}
C_u+t\varphi(u)\subset \co(-C_u\bigcup C_u).
\end{equation*}
It follows from the contradiction that $-C_u|H=C_u|H$, as desired.
\end{proof}
We are now ready to complete the proof of the main result of this section.
\begin{proof_of_symbis}
\Cref{le.tough,cor.onto} show that the orthogonal projection of $C$ on every hyperplane in $\bR^3$ has a center of symmetry and hence, according to \Cref{cor.centr}, so does $C$.
\end{proof_of_symbis}
\section{Parkability and finite-dimensional Banach spaces}\label{se.park}
Here, we prove \Cref{th.ell}. Recall the statement:
\begin{theorem}\label{th.ell_bis}
For a convex body $B\subset \bR^n$, $n\ge 3$ the following properties
are equivalent.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $B$ is an ellipsoid centered at $0$.
\item $B$ is centrally symmetric and its intersection with every hyperplane has a center of symmetry (provided it is not empty).
\item Every closed convex subset of $B$ is parkable in $B$.
\end{enumerate}
\end{theorem}
We already know from \Cref{pr.ell} that (i) implies (ii), which in
turn implies (iii); hence, it suffices to go backwards. We once again
specialize to $n=3$ in order to simplify some of the proofs and
language.
\begin{proposition}\label{pr.n=3}
If \Cref{th.ell_bis} holds for $n=3$ then it holds in general.
\end{proposition}
\begin{proof}
Property (iii) is clearly preserved by passing to orthogonal
projections from $\bR^n$ down to its three-dimensional subspaces. If
the implication (iii) $\Rightarrow$ (i) holds in $\bR^3$, then we
know that all orthogonal projections of $B$ on such subspaces are
ellipsoids. This implies that $B$ itself is an ellipsoid by \cite[Theorem 3.1.7]{tomo}.
\end{proof}
As explained in the introduction, the first priority will be to prove that (iii)
implies the property of being centrally symmetric so as to be able to think of the
convex body as the unit ball of a Banach space structure on
$\bR^n$. The following lemma provides the first step in this direction.
\begin{lemma}\label{le.2impln}
If the implication (iii) $\Rightarrow$ (centrally symmetric) holds
in $\bR^2$, then it holds for all $\bR^n$, $n\ge 3$.
\end{lemma}
\begin{proof}
Let $B\subset \bR^n$, $n\ge 3$ be a convex body satisfying condition
(iii) of the theorem, and suppose (iii) implies central symmetry in
the plane.
Clearly, (iii) is preserved upon projecting down to any plane, and
hence, by assumption, all projections of $B$ on planes are centrally
symmetric. The conclusion that $B$ itself is then follows from
\Cref{cor.centr} (or rather a variant thereof whereby the center of
symmetry is in fact the origin).
\end{proof}
\begin{remark}\label{re.hyp}
As seen in \cite[Lemma 29]{berg}, condition (iii) of \Cref{th.ell_bis}
is equivalent to the fact that every hyperplane section of $B$ is
parkable. We will henceforth use this equivalence without further
comment when convenient.
\end{remark}
\begin{proposition}\label{pr.2}
Let $B\subset \bR^2$ be a convex body such that every intersection
of $B$ with a line is parkable in $B$. Then, $B$ is centrally
symmetric.
\end{proposition}
\begin{proof}
Let $v\in \bR^2$ be a non-zero vector, and $H_1$ and $H_2$ the two $B$-supporting lines that are orthogonal to $v$.
{\bf Claim: The convex hull of the segments $H_1\cap B$ and $H_2\cap
B$ contains the origin.} Indeed, if $p_1\in H_1\cap B$ and $p_2\in
H_2\cap B$ are two points on the boundary of $B$, then by our
hypothesis some translate $\overline{p_1p_2}+w\subset B$ contains the
origin. Because $H_i$ are supporting lines the translates $p_1+w$ and
$p_2+w$ belong to $H_1$ and $H_2$ respectively (i.e. if it is
non-zero, then $w$ is parallel to $H_1$ and $H_2$); this finishes the proof of the claim.
\begin{equation*}
\begin{tikzpicture}[auto,baseline=(current bounding
box.center),scale=.8]
\draw (0,-1) node[circle, inner sep=0pt, fill=black, label={225:{$\scriptstyle H_1$}}] () {};
\draw (-1.2,.2) node[circle, inner sep=1pt, fill=black, label={225:{$\scriptstyle p_1$}}] (p1) {};
\draw (4,-2) node[circle, inner sep=0pt, fill=black, label={45:{$\scriptstyle H_2$}}] () {};
\draw (2.5,-.5) node[circle, inner sep=1pt, fill=black, label={45:{$\scriptstyle p_2$}}] (p2) {};
\draw (.3,.3) node[circle, inner sep=1pt, fill=black, label={45:{$\scriptstyle 0$}}] () {};
\draw[-] (-2,1) -- (-1,0);
\draw[-] (1,1) -- (3,-1);
\draw[-] (-2,1) -- (1,1);
\draw[-] (-1,0) -- (3,-1);
\draw[-] (-2,1) .. controls (-2.5,1.5) and (.5,1.5) .. (1,1);
\draw[-] (-1,0) .. controls (-.5,-.5) and (3.5,-1.5) .. (3,-1);
\draw[-] (-2.5,1.5) -- (0,-1);
\draw[-] (0,2) -- (4,-2);
\draw[->] (4,.5) to node[pos=.5,auto,swap] {$\scriptstyle w$} (3.5,1);
\end{tikzpicture}
\end{equation*}
Given the claim, the conclusion now follows from the technical \Cref{le.supports} below.
\end{proof}
\begin{lemma}\label{le.supports}
Let $B\subset \bR^2$ be a convex body. Suppose that for every non-zero $v\in \bR^2$ the two supporting lines $L_1=L_1(v)$ and $L_2=L_2(v)$ of $B$ that are orthogonal to $v$ satisfy
\begin{equation*}
0\in \co((L_1\cap B)\cup(L_2\cap B)).
\end{equation*}
Then, $B$ is centrally symmetric.
\end{lemma}
\begin{proof}
We have to show that $B$ and $-B$ coincide. Assuming they do not, the condition on $B$ at least ensures that the union $-B\cup B$ is convex. Indeed, otherwise for some $p\in \partial B\cap \partial(-B)$ some $B$-supporting line $L$ through $p$ would not be $-B$-supporting, and hence $v\perp L$ would violate the hypothesis.
In conclusion, $B$ and $-B$ admit common parallel supporting lines $L$ and $-L$ at two points $p$ and $-p$ respectively in the intersection $\partial B\cap \partial(-B)$. We will moreover choose coordinates in $\bR^2$ so that the segment $\overline{(-p)p}$ is horizontal and $L$ is vertical. We will derive a contradiction from the assumption that the portions of $\partial B$ and $\partial(-B)$ lying in the lower half plane are distinct.
\begin{equation*}
\begin{tikzpicture}[auto,baseline=(current bounding
box.center),scale=1]
\draw (0,0) node[circle, inner sep=1pt, fill=black, label={90:{$\scriptstyle 0$}}] () {};
\draw (-1,0) node[circle, inner sep=1pt, fill=black, label={135:{$\scriptstyle -p$}}] () {};
\draw (1,0) node[circle, inner sep=1pt, fill=black, label={45:{$\scriptstyle p$}}] () {};
\draw (-1,-2) node[circle, inner sep=0pt, fill=black, label={225:{$\scriptstyle -L$}}] () {};
\draw (1,-2) node[circle, inner sep=0pt, fill=black, label={315:{$\scriptstyle L$}}] () {};
\draw (.5,-1) node[circle, inner sep=0pt, fill=black, label={100:{$\scriptstyle B$}}] () {};
\draw (-.5,-2) node[circle, inner sep=0pt, fill=black, label={85:{$\scriptstyle -B$}}] () {};
\draw[-] (-1,1) -- (-1,-2);
\draw[-] (1,1) -- (1,-2);
\draw[-] (-2,0) -- (2,0);
\draw[-] (-1,0) .. controls (-1,-.5) and (.1,-1) .. (.5,-1);
\draw[-] (.5,-1) .. controls (.9,-1) and (1,-.5) .. (1,0);
\draw[-] (-1,0) .. controls (-1,-.5) and (-.9,-2) .. (-.5,-2);
\draw[-] (-.5,-2) .. controls (-.1,-2) and (1,-2) .. (1,0);
\end{tikzpicture}
\end{equation*}
We can now identify $\overline{(-p)p}$ with a closed interval $[-a,a]$, $a>0$ and the portions of $\partial B$ and $\partial(-B)$ depicted above with graphs of convex functions $f<0$ and $g<0$ respectively defined on $(-a,a)$. Moreover, the assumption $B\ne -B$ translates without loss of generality to $g(x)<f(x)$ for all $x\in (-a,a)$.
In this setup, the meaning of the hypothesis is that whenever a line through the origin intersects the graphs of $f$ and $g$ at smooth points $(x,f(x))$ and $(y,g(y))$ respectively, the tangents to the two graphs at those points are parallel; equivalently, $f'(x)=g'(y)$.
If, in the previous paragraph, we choose $x$ to be negative, then $y<x$. Because $g$ is convex, its derivative is non-decreasing, and hence
\begin{equation*}
f'(x)=g'(y)\le g'(x)
\end{equation*}
(always assuming we are working with points where the derivatives exist, which is the case for all but countably many of the elements in the domain $(-a,a)$).
In conclusion, whenever both $f$ and $g$ are differentiable at $x\in (-a,0]$ we have $f'(x)\le g'(x)$. This, together with
\begin{equation*}
f(0)=\int_{-a}^0f'(x)\ \mathrm{d}x\le \int_{-a}^0g'(x)\ \mathrm{d}x=g(0)
\end{equation*}
contradicts our assumption that $g(0)<f(0)$.
\end{proof}
Using \Cref{pr.2}, we obtain
\begin{corollary}\label{cor.iii_implies_sym}
Let $B\subset \bR^n$, $n\ge 3$ be a convex body satisfying condition
(iii) of \Cref{th.ell_bis}. Then, $B$ is centrally symmetric.
\end{corollary}
\begin{proof}
This follows immediately from \Cref{le.2impln} via \Cref{pr.2}.
\end{proof}
We now know that all convex bodies satisfying the weakest condition (iii) of \Cref{th.ell_bis} are centered at the origin, and as a consequence we henceforth specialize the discussion to such bodies. Next, our goal will be to prove an $n=3$ version of \Cref{pr.aux}, announced above.
\begin{proposition}\label{pr.aux_bis}
Let $B\subset \bR^3$ be a centrally symmetric convex body satisfying condition (iii) from \Cref{pr.ell}. Then, for every linear hyperplane $H\subset \bR^3$, the non-empty intersections
\begin{equation*}
(H+x)\cap B,\quad x\in \bR^3
\end{equation*}
have centers of symmetry. Moreover, these centers are collinear.
\end{proposition}
Before embarking on a proof, we need some preparation. As explained in the discussion following \cite[Question 31]{berg}, convex bodies $B$ with parkable hyperplane sections have the following property: for any linear hyperplane $H\subset \bR^3$ there is a line $L\subset \bR^3$ such that $B$ has a supporting translate of $L$ at every point of $H\cap \partial B$ (here, `supporting' is the natural extension to lines of the term `supporting hyperplane'; it simply means that the line intersects only the boundary of $B$).
It is also explained in loc. cit. how this property is in a sense dual to one considered by Blaschke \cite[pp. 157-159]{bla} in relation to illuminating convex bodies. In this latter reference, (smooth) convex bodies are considered with the property that when illuminated with parallel light rays, the boundary curve of the illuminated region is planar. In other words, given a line $L$, the intersections of the $B$-supporting translates of $L$ with $B$ form a planar curve in $\partial B$. This is the precise opposite of the situation in the previous paragraph.
Motivated by these considerations, we label the two dual properties described above.
\begin{definition}\label{def.bla}
Let $B\subset \bR^3$ be a convex body containing $0$ in its interior.
$B$ has {\it the Blaschke property} (or {\it is Blaschke}, for short) if for any line $L$ there exists a linear hyperplane $H\subset \bR^3$ such that all points in $H\cap \partial B$ lie on some $B$-supporting translate of $L$.
$B$ has {\it the dual Blaschke property} (or {\it is dual Blaschke}) if for any linear hyperplane $H\subset \bR^3$ there is a line $L\subset \bR^3$ such that all points in $H\cap \partial B$ lie on some $B$-supporting translate of $L$.
\end{definition}
With these terms in place, the discussion preceding the definition amounts to (\cite[p. 257]{berg})
\begin{proposition}\label{pr.co_bla}
A centrally symmetric convex body in $\bR^3$ satisfying condition (iii) of \Cref{th.ell_bis} is dual Blaschke.
\mbox{}\hfill\ensuremath{\blacksquare}
\end{proposition}
Our next aim is to prove the dual version of this result.
\begin{proposition}\label{pr.bla}
A centrally symmetric convex body in $\bR^3$ satisfying condition (iii) of \Cref{th.ell_bis} is Blaschke.
\end{proposition}
We now introduce some tools necessary in the proof of \Cref{pr.bla}. Consider a convex body $B$ as in \Cref{pr.co_bla}. For every unit vector $v\in \bR^3$, denote by $H_v$ the plane orthogonal to $v$. We define the subset $\psi(v)=\psi_B(v)$ of the unit sphere to consist of all unit vectors $w$ such that
\begin{equation*}
w\cdot v>0 \text{ and the line } \{p+tw\ |\ t\in \bR\} \text{ is $B$-supporting for every } p\in \partial B\cap H_v.
\end{equation*}
In other words, it is the set of unit vectors that make an acute angle with $v$ and point along the directions of those lines $L$ associated to the plane $H_v$ as in \Cref{pr.co_bla,def.bla}.
\Cref{pr.co_bla} is what makes $\psi(v)$ non-empty. Note moreover that since the lines supporting $B$ at points in $\partial B\cap H$ cannot be contained in $H$ (for any plane $H$), $\psi(v)$ is closed in the unit sphere $\bS^2$. Finally, it is also {\it convex} as a subset of the sphere, in the sense that for $w,w'\in \psi(v)$ the rescaled convex combinations
\begin{equation*}
\frac{tw+(1-t)w'}{\|tw+(1-t)w'\|},\ t\in (0,1)
\end{equation*}
all belong to $\psi(v)$. All in all, we have defined a map
\begin{equation*}
\psi=\psi_B:\bS^2\to\text{ closed convex subsets of }\bS^2.
\end{equation*}
The map $\psi$ has a number of other properties that are easy to check:
\begin{lemma}\label{le.odd}
For a convex body $B$ as in \Cref{pr.co_bla}, the map $\psi=\psi_B$ is odd in the sense that
\begin{equation*}
\psi(-v)=-\psi(v)
\end{equation*}
and upper semicontinuous in the sense that its graph
\begin{equation*}
\{(v,w)\in \bS^2\times \bS^2\ |\ w\in \psi(v)\}
\end{equation*}
is closed in $\bS^2\times \bS^2$.
\end{lemma}
\begin{proof}
The property of being odd is immediate from the definition of $\psi$: $v$ and $-v$ define the same plane $H_v=H_{-v}$, and hence the same set of lines supporting $B$ at the points of $\partial B\cap H_v$. The upper semicontinuity claim is similarly routine, using the observation that the set of $B$-supporting affine lines is closed in the set of all affine lines of $\bR^3$.
\end{proof}
\begin{lemma}\label{le.psi_onto}
An odd, upper semicontinuous map
\begin{equation*}
\psi:\bS^2\to\text{ closed convex subsets of }\bS^2
\end{equation*}
is onto in the sense that every $w\in \bS^2$ belongs to some set $\psi(v)$, $v\in \bS^2$.
\end{lemma}
\begin{proof}
Assume the contrary, with, say, $w\in \bS^2$ lying outside all of the sets $\psi(v)$ as $v$ ranges over $\bS^2$. Flowing along great circles through $w$ down to the equator in the plane orthogonal to $w$, we can deform $\psi$ into an odd, upper semicontinuous map
\begin{equation*}
\psi:\bS^2\to\text{ closed convex subsets of }\bS^1.
\end{equation*}
The existence of such a map contradicts the multivalued Borsuk-Ulam theorem as stated e.g. in \cite[Corollary 2.4]{multi_BU}.
\end{proof}
We have now effectively proven \Cref{pr.bla} above.
\begin{proof_of_bla}
Using the notation introduced above, the statement amounts to showing that $\psi_B$ is onto; this is precisely what \Cref{le.psi_onto} does.
\end{proof_of_bla}
This provides us with the necessary tools to attack \Cref{pr.aux_bis}.
\begin{proof_of_aux_bis}
The statement stipulates a condition on planes that is closed in $\cG(3,2)$, so it suffices to prove that the condition holds for a dense set of planes. Specifically, we will prove it for planes $H$ with the following property:
{\it The two supporting planes of $B$ that are parallel to $H$ each intersect $B$ at a single point.}
Now fix such a plane $H$, and let $p$ and $q=-p$ be the two points where the two $B$-supporting translates of $H$ intersect $B$. Let also $K$ be a planar section $(H+x)\cap B$ with non-empty relative interior. Finally, fix a line $L\subset H$.
In the affine plane $H+x$, the convex body $K$ has two supporting lines parallel to $L$; denote these by $L_1$ and $L_2$.
{\bf Claim: The convex hull of $(L_1\cap K)\cup(L_2\cap K)$ contains the point $r=(H+x)\cap \overline{pq}$.} Given the claim, we can finish the proof of the proposition as follows.
Since the claim holds for any line $L\subset H$, we can apply \Cref{le.supports} inside the affine plane $H+x$, with the point $r$ regarded as the origin, to conclude that $K$ has this point as its center of symmetry. In conclusion, we obtain the desired result: all planar sections of $B$ that are parallel to $H$ are centrally symmetric about points on the line containing $p$ and $q$.
It thus remains to prove the claim; this goal will take up the rest of the proof.
According to \Cref{pr.bla}, there is a plane $H'\subset \bR^3$ with the property that every point in $H'\cap \partial B$ is contained in a $B$-supporting line parallel to $L$. By our choice of $H$ (so that its two $B$-supporting translates meet $B$ at single points $p$ and $q$ respectively), we have $p,q\in H'$. As a consequence, the endpoints of the segment $H'\cap K$ are contained in the $B$-supporting translates $L_1$ and $L_2$; this concludes the proof.
\end{proof_of_aux_bis}
\begin{remark}\label{re.32to31}
Note that given the plane $H\subset \bR^3$, the line containing the centers of symmetry for the non-empty planar sections $(H+x)\cap B$ is uniquely determined; this provides a map from the Grassmannian of planes $\cG(3,2)$ to the Grassmannian $\cG(3,1)$ of lines in $\bR^3$.
\end{remark}
We now take the first step towards constructing a map $\cG(3,1)\to \cG(3,2)$ in the opposite direction to that of \Cref{re.32to31}.
\begin{lemma}\label{le.aux_bis_dual}
Suppose we are under the assumptions of \Cref{pr.aux_bis}, and $L$ is the line containing the centers of symmetry of $(H+x)\cap B$ for $x$ ranging over $\bR^3$. Then, the midpoint of every non-empty intersection $(L+y)\cap B$ belongs to $H$.
\end{lemma}
\begin{proof}
\Cref{pr.aux_bis} can be restated as saying that the linear involution $T$ of $\bR^3$ that acts as the identity along $L$ and as $x\mapsto -x$ on $H$ preserves the convex body $B$. The involution $-T$ also preserves $B$, acts as the identity on $H$, and as $x\mapsto -x$ along $L$. In other words, $-T$ preserves $H$ and reverses every segment $(L+y)\cap B$; this implies the conclusion.
\end{proof}
We can now prove the dual version of \Cref{pr.aux_bis}.
\begin{proposition}\label{pr.aux_bis_dual}
Suppose we are under the hypotheses of \Cref{pr.aux_bis}. For any line $L\subset \bR^3$, the midpoints of the non-empty intersections $(L+y)\cap B$ are coplanar.
\end{proposition}
\begin{proof}
\Cref{le.aux_bis_dual} above already proves the statement for those lines $L$ arising as images of planes $H\in \cG(3,2)$ through the map $\cG(3,2)\to \cG(3,1)$ from \Cref{re.32to31}. It is thus sufficient to show that this map is onto. In other words, we have to prove that {\it every} line $L$ contains the symmetry centers of the non-empty planar sections $(H+x)\cap B$ for some $H\in \cG(3,2)$.
Consider the map $\phi=\phi_B:\bS^2\to \bS^2$ defined as follows. For a unit vector $v\in \bS^2$, let $H=H_v$ be the plane orthogonal to $v$, and take $\phi(v)$ to be the unit vector making an acute angle with $v$ and pointing along the line $L$ that contains the centers of symmetry of $(H+x)\cap B$, $x\in \bR^3$.
Clearly, $\phi$ is continuous and odd, in the sense that $\phi(-v)=-\phi(v)$. We can now apply \Cref{le.psi_onto} in the simpler case of ordinary (rather than multivalued) maps to conclude via Borsuk-Ulam that $\phi$ is onto.
\end{proof}
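For an ellipsoid the plane in \Cref{pr.aux_bis_dual} can be written down explicitly: if $B=\{x : x\cdot Ax\le 1\}$ with $A$ positive definite, the midpoint of the chord of $B$ along the line $y+tw$ is $y-\frac{y\cdot Aw}{w\cdot Aw}\,w$, so all midpoints of chords in direction $w$ lie in the plane through $0$ with normal $Aw$. The following numerical sketch checks this; the matrix $A$ and the direction $w$ are illustrative choices, not taken from the text.

```python
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Ellipsoid B = {x : x.Ax <= 1} with A = diag(1, 4, 9); chord direction w.
A = (1.0, 4.0, 9.0)
w = (0.5, 0.5, -0.2)

def Av(v):  # A applied to v (A is diagonal here)
    return tuple(ai * vi for ai, vi in zip(A, v))

normal = Av(w)  # predicted normal of the plane of chord midpoints

random.seed(0)
midpoints = []
for _ in range(200):
    y = tuple(random.uniform(-0.1, 0.1) for _ in range(3))  # base point
    # (y + t w).A(y + t w) = 1 is a quadratic a t^2 + b t + c = 0.
    a, b, c = dot(w, Av(w)), 2 * dot(y, Av(w)), dot(y, Av(y)) - 1.0
    if b * b - 4 * a * c <= 0:   # the line y + t w misses the ellipsoid
        continue
    t_mid = -b / (2 * a)         # average of the two roots = chord midpoint
    midpoints.append(tuple(yi + t_mid * wi for yi, wi in zip(y, w)))

# Every midpoint lies (up to rounding) in the plane normal.x = 0.
assert midpoints and all(abs(dot(normal, m)) < 1e-9 for m in midpoints)
```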
\Cref{pr.aux_bis,pr.aux_bis_dual} allow us to introduce the following
\begin{notation}\label{not.prime}
For a linear subspace $H\subset \bR^3$ define $H'$ to be
\begin{itemize}
\item the zero subspace $\{0\}$ if $H=\bR^3$;
\item the line containing the centers of symmetry of $(H+x)\cap B$ if $H$ is a plane;
\item the plane containing the midpoints of $(H+y)\cap B$ if $H$ is a line;
\item $\bR^3$ if $H=\{0\}$.
\end{itemize}
\end{notation}
The operation $H\mapsto H'$ on the subspaces of $\bR^3$ has the following properties.
\begin{lemma}\label{le.inv}
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\item For any two subspaces $L,H$ of $\bR^3$ we have $L\subseteq H\Rightarrow L'\supseteq H'$;
\item For every subspace $H\subseteq \bR^3$ we have $H''=H$;
\item For every subspace $H\subseteq \bR^3$ we have $H\cap H'=\{0\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
In all three statements, the interesting cases are those where the subspaces in question are 1- or 2-dimensional. For this reason, we treat only these cases in the proof.
{\bf (1)} Here, we may as well assume that $L$ is a line contained in the plane $H$. $H'$ is by definition the line containing the symmetry centers of the sections $(H+x)\cap B$. Every such center is the midpoint of the segment $(L+x)\cap B$, and is thus on $L'$ by the definition of the latter.
{\bf (2)} The identity $H=H''$ for planes is simply a restatement of \Cref{le.aux_bis_dual} above. Applying the operation $\bullet\mapsto \bullet'$ once more we get $H'=H'''$ for planes $H$, and hence $L=L''$ for lines $L$ follows from the observation that every line arises as $H'$ for some plane $H$ (this latter claim is essentially the surjectivity of the map $\phi_B$ from the proof of \Cref{pr.aux_bis_dual}).
{\bf (3)} This is immediate: when $H$ is a plane, for instance, there are non-empty sections $(H+x)\cap B$ for non-zero $x\in \bR^3$, and hence there are points of the line $H'$ that are not contained in $H$. A similar argument applies when $H$ is a line (or alternatively, by part (2), in that case $H=H''$ and we can apply the previous argument to the plane $H'$).
\end{proof}
Properties (1), (2) and (3) in \Cref{le.inv} are, according to \cite[Theorem 1]{KM44}, precisely what is necessary in order to ensure that there is a Hilbert space structure on $\bR^3$ for which $H\mapsto H'$ is the orthogonal complement operation. Such a Hilbert space structure is given by an inner product of the form
\begin{equation*}
\langle v,w\rangle = v\cdot Aw
\end{equation*}
where $A\in M_3=M_3(\bR)$ is a positive operator and $\cdot$ is the usual dot product. Since the image of an ellipsoid under a positive operator $A:\bR^3\to \bR^3$ is again an ellipsoid, we may as well assume (for the purposes of proving \Cref{th.ell_bis}) that $A$ is the identity matrix and hence $H\mapsto H'$ is the usual orthogonal complement operation in $\bR^3$; we will do this throughout the rest of the section, in order to simplify the discussion.
Now consider the usual transposition operation $T\mapsto T^t$ on $M_3$. It satisfies conditions (1), (2) and (3) of \Cref{th.istr} above. Our goal will be to apply that result to the Banach space $(\bR^3,\|\cdot\|_B)$ induced by the centrally symmetric convex body $B$ as explained in \Cref{subse.ban}. In preparation for that, note that the symmetric idempotents $P\in M_3$ (i.e. those satisfying $P^2=P=P^t$) are precisely those whose range and kernel are orthogonal.
\begin{lemma}\label{le.istr_applies}
If $\|\cdot\|$ denotes the norm induced on $M_3$ by $\|\cdot\|_B$, then $\|P\|^2\le 1$ for all symmetric idempotents $P\in M_3$.
\end{lemma}
\begin{proof}
If the range of $P$ is three- or zero-dimensional then $P$ is the identity or the zero operator respectively, so there is nothing to prove. It thus remains to prove the claim when $\dim(\mathrm{Im}(P))$ is $2$ or $1$. We tackle the former case; the latter is entirely analogous.
If $H\in \cG(3,2)$ is the range of $P$, then $P$ is the orthogonal projection on $H$. Since, as explained in the discussion following \Cref{le.inv}, we are assuming that $H'=H^\perp$, the convex body $B$ (i.e. the unit ball of $(\bR^3,\|\cdot\|_B)$) is contained in the orthogonal cylinder based on $H\cap B$. But this means that if $\|x\|_B=1$ (i.e. $x\in \partial B$) then the orthogonal projection $Px$ on $H$ is contained in $B\cap H$, and hence $\|Px\|_B\le 1$. In conclusion,
\begin{equation*}
\|P\| = \sup_{\|x\|_B\le 1}\|Px\|_B\le 1.
\end{equation*}
This finishes the proof.
\end{proof}
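In the model case where the norm $\|\cdot\|_B$ already comes from an inner product $\langle u,v\rangle = u\cdot Av$, the lemma reduces to Bessel's inequality: the $\langle\cdot,\cdot\rangle$-orthogonal projection onto a plane does not increase the norm. A small numerical sketch of this model case (the diagonal matrix $A$, the plane, and the sample points are illustrative choices):

```python
import math
import random

A = (2.0, 3.0, 5.0)  # diagonal of a positive definite matrix

def ip(u, v):        # inner product <u, v> = u . A v
    return sum(ui * ai * vi for ui, ai, vi in zip(u, A, v))

def scale(t, u):
    return tuple(t * ui for ui in u)

def sub(u, v):
    return tuple(ui - vi for ui, vi in zip(u, v))

# <.,.>-orthonormal basis of the plane H spanned by u1, u2 (Gram-Schmidt).
u1, u2 = (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)
e1 = scale(1 / math.sqrt(ip(u1, u1)), u1)
v2 = sub(u2, scale(ip(u2, e1), e1))
e2 = scale(1 / math.sqrt(ip(v2, v2)), v2)

def proj(x):         # the <.,.>-orthogonal (symmetric) idempotent with range H
    return tuple(ip(x, e1) * a + ip(x, e2) * b for a, b in zip(e1, e2))

random.seed(1)
for _ in range(100):
    x = tuple(random.uniform(-1, 1) for _ in range(3))
    assert ip(proj(x), proj(x)) <= ip(x, x) + 1e-12   # ||Px||_B <= ||x||_B
```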
This concludes the preparatory material necessary for the proof of the main result of this section.
\begin{proof_of_ellbis}
As mentioned after the statement of the theorem, it suffices to prove that (iii) implies that $B$ is an ellipsoid. Moreover, \Cref{pr.n=3} shows that it is enough to work with $n=3$.
\Cref{cor.iii_implies_sym} ensures that $B$ is centrally symmetric, and hence it can be regarded as the unit ball of a Banach space $(\bR^3,\|\cdot\|_B)$. \Cref{le.istr_applies,th.istr} now ensure that the norm $\|\cdot\|_B$ is induced by an inner product on $\bR^3$, and hence the unit ball of $\|\cdot\|_B$ is an ellipsoid, as explained in \Cref{subse.ban}.
\end{proof_of_ellbis}
\section{Illuminated bodies and subspace lattice involutions}\label{se.illum}
The techniques employed in \Cref{se.park} will also help in improving on a characterization of ellipsoids via illumination by parallel rays due to Blaschke (\cite[pp. 157-159]{bla}). For the purpose of stating the result briefly, we introduce one more piece of terminology.
\begin{definition}\label{def.wbla}
A convex body $B$ in $\bR^3$ has {\it the weak Blaschke property} (or {\it is weakly Blaschke}) if for any line $L$ there exists an affine plane $H\subset \bR^3$ such that $H$ intersects the interior of $B$ and all points in $H\cap \partial B$ lie on some $B$-supporting translate of $L$.
$B$ has {\it the weak dual Blaschke property} (or {\it is weakly dual Blaschke}) if for any linear plane $H\subset \bR^3$ there is a translate $H'$ of $H$ and a line $L\subset \bR^3$ such that $H'$ intersects the interior of $B$ and all points in $H'\cap \partial B$ lie on some $B$-supporting translate of $L$.
\end{definition}
\begin{remark}\label{re.wbla}
Note the difference from \Cref{def.bla}: in \Cref{def.wbla} we do not require the planes to pass through a given, fixed point (such as the origin in the case of the former definition). This justifies the adjective `weak' in the definition. In fact, it is easy to see that centrally symmetric weakly (dual) Blaschke convex bodies are automatically (dual) Blaschke.
\end{remark}
\begin{remark}
The weak Blaschke property can be interpreted as saying that if the body is illuminated with parallel rays, then the boundary of the shaded region contains a coplanar curve; hence the title of this section.
\end{remark}
With this language in place, Blaschke's result referred to above states that a smooth, strictly convex body in $\bR^3$ that is weakly Blaschke must be an ellipsoid. In this section we remove the smoothness and strict convexity requirements:
\begin{theorem}\label{th.illum}
A convex body $B\subset \bR^3$ that is weakly Blaschke is an ellipsoid.
\end{theorem}
We once more prepare the ground before giving the proof proper. First, we show that the weak Blaschke property entails its dual.
\begin{lemma}\label{le.self-dual}
A weakly Blaschke convex body is weakly dual Blaschke.
\end{lemma}
\begin{proof}
This is very similar in spirit to the proof of \Cref{pr.bla} from \Cref{pr.co_bla} via \Cref{le.psi_onto}, using the same multivalued Borsuk-Ulam theorem.
To any unit vector $v\in \bS^2$ associate the subset of the unit $2$-sphere consisting of those vectors $w$ with the property that $w\cdot v>0$ and $w$ is orthogonal to those planes associated as in the definition of the weak Blaschke property to the line containing $v$.
This gives rise, as in the discussion following \Cref{pr.bla}, to a map
\begin{equation*}
\psi:\bS^2\to\text{ closed convex subsets of }\bS^2.
\end{equation*}
$\psi$ can be shown to be upper semicontinuous and odd as in \Cref{le.odd}, and hence is onto according to \Cref{le.psi_onto}. This concludes the proof: we have just shown that {\it every} plane in $\bR^3$ is parallel to some affine plane associated to a line as in the definition of the weak Blaschke property.
\end{proof}
Next, we prove the existence of a center of symmetry.
\begin{lemma}\label{le.wbla_symm}
A weakly Blaschke convex body $B\subset \bR^3$ has a center of symmetry.
\end{lemma}
\begin{proof}
It suffices to show that the group of affine symmetries of $B$ is infinite. For this purpose, let $H\subset \bR^3$ be a plane with the property that the two $B$-supporting translates of $H$ intersect $B$ at single points $p$ and $q$ respectively.
We now proceed very much as in the proof of \Cref{pr.aux_bis}. We denote by $K$ a planar section $(H+x)\cap B$ of $B$ that has non-empty relative interior. Then choose a line $L\subset H$. Just as in the cited proof, we can show using the weak Blaschke property that if $L_1$ and $L_2$ are the $K$-supporting translates of $L$, then the convex hull of $(L_1\cap K)\cup(L_2\cap K)$ contains the point $\overline{pq}\cap K$.
\Cref{le.supports} then ensures that $K$ has a center of symmetry at $\overline{pq}\cap K$. Since the $H$-parallel section $K$ of $B$ was arbitrary, we conclude that $B$ is preserved by the affine involution of $\bR^3$ that fixes the points of the line $pq$ and acts on each plane $H+x$ as the point reflection across $\overline{pq}\cap (H+x)$.
The above construction provides us with infinitely many affine symmetries of $B$, one for each choice of plane $H$ with the generic property specified at the beginning.
\end{proof}
\begin{proof_of_illum}
We now know from \Cref{le.self-dual} that $B$ is both weakly Blaschke and weakly dual Blaschke. Since moreover it has a center of symmetry by \Cref{le.wbla_symm}, we can assume that $B$ is centered at $0$ and hence is both Blaschke and dual Blaschke (cf. \Cref{re.wbla}). Running through the proof of \Cref{th.ell_bis}, this is sufficient to conclude that $B$ is an ellipsoid.
\end{proof_of_illum}
% https://arxiv.org/abs/1806.03774

\title{Counting subgroups of fixed order in finite abelian groups}

\begin{abstract}
We use recurrence relations to derive explicit formulas for counting the number of subgroups of given order (or index) in rank 3 finite abelian $p$-groups, and use these to derive similar formulas in a few cases for rank 4. As a consequence, we answer some questions by M. T\u{a}rn\u{a}uceanu in \cite{MT} and L. T\'oth in \cite{LT}. We also use other methods, such as the method of fundamental group lattices introduced in \cite{MT}, to derive a similar counting function in a special case of arbitrary-rank finite abelian $p$-groups.
\end{abstract}

\section{Introduction}
The subject of counting various kinds of subgroups of finite abelian groups has a long and rich history. For instance, in the early 1900's, Miller \cite{MI04} determined the number of cyclic subgroups of prime power order in a finite abelian $p$-group $G$, where $p$ is a prime number. At about the same time, Hilton \cite{HH} found a necessary and sufficient condition for the existence of subgroups of a given order and then gave a general procedure for counting subgroups of order $p^r$ in a finite abelian $p$-group of order $p^n$, for positive integers $r\leq n.$ For recent work on counting subgroups of finite abelian $p$-groups, see \cite{GB96}, \cite{GBJW}, \cite[Sections 6 \& 7]{GLP}, \cite{HT}, \cite{AI}, \cite{MI39}, \cite{MT} and \cite{LT}.
As an example, Hilton determined the total number of normal subgroups of index $p^2$ in any finite abelian $p$-group and noted that giving a general formula for the number of subgroups of order $p^r$ for every value of $r$ would be somewhat complicated. Progress has been made since then, and there are now recursive formulas based on the work of Stehling \cite{TS} and Birkhoff \cite{GB} that can compute these numbers for any $r.$ It can also be deduced using Hall's polynomials \cite{IGM} that the function counting the number of subgroups is a polynomial in powers of $p.$ In addition, Shokuev \cite{VNS} found an exact expression for the number of subgroups of any (possible) order of an arbitrary finite $p$-group; however, this requires the subgroup structure of smaller rank subgroups in order to determine the expression. As far as we know, the explicit form of these counting polynomials has not been determined beyond that for rank 2 finite abelian $p$-groups. In \cite[Theorem 3]{GBJW01}, Bhowmik and Wu obtain a unimodality result on the coefficients and determine the leading coefficient of the polynomials in any rank. In this paper, we determine the explicit form of the counting polynomials in rank 3 and for some special values of $r$ in any rank. However, finding the general explicit polynomials in rank 4 or higher would be a tedious and difficult computation, and we will illustrate this with examples.
By the fundamental theorem on the structure of finite abelian groups, any finite abelian group is isomorphic to a direct product of cyclic groups of prime power order. Therefore, the number of subgroups of a finite abelian group is the product of the numbers of subgroups of its Sylow $p$-subgroups \cite{RS}. This reduces the problem of counting subgroups in finite abelian groups to counting subgroups in finite abelian $p$-groups.
Let $\mathbb{Z}_{n}$ denote the cyclic group of order $n$. Let $\lambda,\mu,\nu$ be partitions and let $G$ be an abelian $p$-group of type $\lambda = (a_d,\dots,a_1)$, i.e., $G \cong \mathbb{Z}/p^{a_1}\times \cdots \times \mathbb{Z}/p^{a_d}$ for some positive integers $a_1,\dots,a_d$ with $1\leq a_1\leq \dots \leq a_d.$ Then $G$ has $g^\lambda_{\mu\nu}(p)$ subgroups $K$ such that $K$ has type $\mu$ and $G/K$ has type $\nu,$ where $g^\lambda_{\mu\nu}(X)\in \mathbb{Z}[X]$ is a Hall polynomial \cite{IGM}. Let
\begin{equation}
\label{no_of_subgrps}
h_b^\lambda (p) = \sum_{|\nu| = b} \sum_{\mu}g^\lambda_{\mu\nu} (p)
\end{equation}
\noindent
where $\nu = (\nu_l,\dots,\nu_1)$ with $\nu_1\leq \nu_2\leq \dots \leq \nu_l$ and $|\nu| = \nu_1+\dots +\nu_l.$
Then $h_b^\lambda(p) = h_b^{(a_d,\dots,a_1)}(p)$ is the number of subgroups of index $p^b$ (or equivalently of order $p^{m-b}$ where $m=\sum_{i=1}^d a_i$) in the rank $d$ abelian $p$-group $\mathbb{Z}/p^{a_1}\times \cdots \times \mathbb{Z}/p^{a_d}.$ Moreover, $h_b^\lambda(p)$ is a polynomial as it is a sum of Hall polynomials. When the finite abelian $p$-group has rank 2, the number of subgroups of order $p^b$ in $\mathbb{Z}/p^{a_1}\times \mathbb{Z}/p^{a_2}$ is given by \cite[Theorem 3.3]{MT} as follows (for an alternate proof, see the end of Section \ref{sec:convolution}):
\begin{equation}
\label{rank2ratfns}
h_b^{(a_2,a_1)}(p)=\begin{cases}
\frac{p^{b+1}-1}{p-1} & \text{if } 0\le b \leq a_1\\
\frac{p^{a_1+1}-1}{p-1} & \text{if } a_1\le b \leq a_2\\
\frac{p^{a_1+a_2-b+1}-1}{p-1} & \text{if } a_2\le b \leq a_1+a_2.\\
\end{cases}
\end{equation}
\noindent
Except for rank 2 finite abelian $p$-groups and some other specific cases, there was no general direct closed formula (i.e. explicit polynomial depending on the type $\lambda$ and $b$; expressed above as a rational function in each case) that computes $h_b^\lambda(p)$ given $\lambda$ and $b.$ We determine in the next section the polynomials that give the number of subgroups of given order in rank 3, in a few cases of rank 4, and in one particular case for an arbitrary rank finite abelian $p$-group. This answers part of a problem posed by M. T\u{a}rn\u{a}uceanu \cite[Problem 5.1]{MT}. As an application of the rational function formulas, we will prove a conjecture posed by L. T\'oth in \cite{LT}.
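The rank 2 formulas (\ref{rank2ratfns}) can be checked directly on small groups by brute force, using the fact that every subgroup of a rank 2 abelian group is generated by at most two elements. The enumeration below is our illustration, not part of the cited proof:

```python
from itertools import product

def h_rank2(p, a1, a2, b):
    # Closed formula (2): number of subgroups of index p^b in Z/p^{a1} x Z/p^{a2}.
    if 0 <= b <= a1:
        return (p**(b + 1) - 1) // (p - 1)
    if a1 <= b <= a2:
        return (p**(a1 + 1) - 1) // (p - 1)
    if a2 <= b <= a1 + a2:
        return (p**(a1 + a2 - b + 1) - 1) // (p - 1)
    return 0

def subgroup_counts(m1, m2):
    # Brute force for Z_{m1} x Z_{m2} with m1 | m2: every subgroup is
    # generated by at most two elements, and all element orders divide m2.
    elems = list(product(range(m1), range(m2)))
    subs = {frozenset(((i * g0 + j * h0) % m1, (i * g1 + j * h1) % m2)
                      for i in range(m2) for j in range(m2))
            for (g0, g1), (h0, h1) in product(elems, repeat=2)}
    counts = {}
    for S in subs:
        counts[len(S)] = counts.get(len(S), 0) + 1
    return counts

for p, a1, a2 in [(2, 1, 2), (3, 1, 1), (2, 2, 2)]:
    counts = subgroup_counts(p**a1, p**a2)
    for b in range(a1 + a2 + 1):   # index p^b means order p^{a1+a2-b}
        assert counts.get(p**(a1 + a2 - b), 0) == h_rank2(p, a1, a2, b)
```

For example, $\mathbb{Z}_2\times\mathbb{Z}_4$ has $1+3+3+1=8$ subgroups, matching the formula for $b=0,\dots,3$.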
\section{Rank 3}
G. Birkhoff shows in \cite{GB} that $h_b^{(a_d,\dots,a_1)}$ is symmetric in $b$, that is, $h_b^{(a_d,\dots,a_1)} = h_{a_1+\dots+a_d-b}^{(a_d,\dots,a_1)}.$ So, for instance, in the three cases in equation (\ref{rank2ratfns}) for rank 2, the first case is symmetric with the third case while the second case is symmetric to itself. For example, when $d=2,$ the case $0\le b \leq a_1$ is symmetric to $a_2\le b \leq a_1+a_2$ because $$0\le b \leq a_1$$
$$-a_1\le -b \leq 0$$
$$a_2\leq a_1+a_2-b \leq a_1+a_2.$$
\noindent
Therefore, in order to determine the formula in the case $a_2\le b \leq a_1+a_2$ from the case when $0\le b \leq a_1$, we replace $b$ in the formula for $0\le b \leq a_1$ by $a_1+a_2-b.$ So, for instance in rank 2, it suffices to determine the compact form (as rational functions) of the polynomials in only two cases (first and second cases) and then use symmetry to deduce the remaining case (the last case).
Let $\lambda=(a_3,a_2, a_1)$ and $\lambda^{'}=(a_2, a_1)$. The partition of the interval $[0,a_1+a_2+a_3]$ as in Hironaka's paper \cite{YH} leads to 10 cases for rank 3. For each of these cases, there is a polynomial $h_b^{(a_3,a_2,a_1)}(p)$ (easily described as a rational function) enumerating the number of subgroups of index $p^b$ (or order $p^{a_1+a_2+a_3-b}$) in $\mathbb{Z}/p^{a_1}\times \mathbb{Z}/p^{a_2}\times \mathbb{Z}/p^{a_3}.$
There are at least four methods of deducing the formulas. The first method is to use a lemma \cite[Lemma 2.3]{YH} that Y. Hironaka deduces from a recurrence relation of T. Stehling in \cite{TS}. This is the method used in the proof of the theorem below, as her lemma shows how to express the formulas in higher rank in terms of those of lower rank, which we already know; it also requires the least amount of computation. The second method was suggested to the first author by Gautam Chinta (the first author's doctoral advisor) while working on a related problem on subgroup growth, as such subgroup counting formulas help compute smaller rank cotype zeta functions as in \cite{CKK}. This method involves using a series of contour integrals, the residue theorem and convolution of generating series to arrive at the formulas. The third method is based on an extension of results in the second author's previous paper \cite{PKASSS}; this will be explicated elsewhere. The last method, called the method of fundamental group lattices, was introduced by M. T\u{a}rn\u{a}uceanu in \cite{MT}, and we have been able to apply it to derive a compact formula for counting subgroups in case 1 of all ranks. We demonstrate the second method of proof in a special case at the end of the paper.
\begin{theorem}
\label{rank3}
For every $0\leq b \leq a_1+a_2+a_3$, the number $h_b^{(a_3,a_2,a_1)}(p)$ of all subgroups of order $p^b$ (equivalently of index $p^{a_1+a_2+a_3-b}$) in the finite abelian $p$-group $\mathbb{Z}/p^{a_1}\times \mathbb{Z}/p^{a_2}\times \mathbb{Z}/p^{a_3}$ where $1\leq a_1\leq a_2 \leq a_3,$ is given by one of the following polynomials expressed as rational functions:\\
Case 1: $0\le b \leq a_1$
\begin{equation*}
h_b^{(a_3,a_2,a_1)}(p) = \frac{p^{2b+3} - p^{b+2} - p^{b+1} +1}{ (p-1)(p^2-1)}.
\end{equation*}
Case 2: $a_1\le b \leq a_2$
\begin{equation*}
h_b^{(a_3,a_2,a_1)}(p) = \frac{p^{b+a_1+3}+p^{b+a_1+2}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{ (p-1)(p^2-1)}.
\end{equation*}
Case 3: $a_2 < b \le a_3 \leq a_1 + a_2$
\begin{equation*}
h_b^{(a_3,a_2,a_1)}(p) =\frac{(b-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(b-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.
\end{equation*}
Case 4: $a_2 < b \le a_1 + a_2 \leq a_3$
$$h_b^{(a_3,a_2,a_1)}(p) = \frac{(b-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(b-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.$$
Case 5: $a_1 + a_2 \le b \le a_3$
$$h_b^{(a_3,a_2,a_1)}(p) = \frac{(a_{1}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-a_{1}p^{a_1+a_2+1}-p^{2a_1+2}-p^{a_{1}+a_{2}+2}-p^{a_{1}+a_{2}+1}+1}{(p-1)(p^{2}-1)}.$$
Case 6: $a_3 < b \leq a_{1}+a_{2} $
\begin{multline*}
h_b^{(a_3,a_2,a_1)}(p) = \frac{(a_3-a_2+1)p^{a_2+a_1+3}+ 2p^{a_2+a_1+2}-(a_3-a_2-1)p^{a_2+a_1+1}-p^{a_3+a_2+a_1-b+2}}{(p-1)(p^2-1)}\\
+ \frac{-p^{a_3+a_2+a_1-b+1} -p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^2-1)}.
\end{multline*}
The remaining cases are symmetric to one of the cases above, i.e., they can be obtained when we replace $b$ by $a_1+a_2+a_3-b.$\\
Case 7: $a_1 + a_2 < a_3 < b \le a_1 + a_3$
\begin{multline*}
h_b^{(a_3,a_2,a_1)}(p) = \frac{(a_1+a_3-b+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(a_1+a_3-b)p^{a_1+a_2+1}-p^{2a_1+2}}{(p-1)(p^{2}-1)}\\
+ \frac{-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.
\end{multline*}
Case 8: $ a_3 < a_1 + a_2 < b \le a_1 + a_3$
\begin{multline*}
h_b^{(a_3,a_2,a_1)}(p) = \frac{(a_1+a_3-b+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(a_1+a_3-b)p^{a_1+a_2+1}-p^{2a_1+2}}{(p-1)(p^{2}-1)}\\ + \frac{-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.
\end{multline*}
Case 9: $a_1 + a_3 \le b \le a_2 + a_3$
$$h_b^{(a_3,a_2,a_1)}(p) = \frac{p^{2a_1+a_2+a_3-b+3}+p^{2a_1+a_2+a_3-b+2}-p^{2a_1+2}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$$
Case 10: $a_2 + a_3 \le b \le a_1 + a_2 + a_3$
$$h_b^{(a_3,a_2,a_1)}(p) = \frac{p^{2a_1+2a_2+2a_3-2b+3}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$$
\end{theorem}
\begin{proofof}{Theorem \ref{rank3}} To simplify notation, $h_b^{(a_3,a_2,a_1)}(p)$ will be denoted by $N_{b}(\lambda)$ as in the notation used in \cite{YH}. By \cite[Lemma 2.3]{YH}, we have the recursive formula
\begin{equation}
N_{b}(\lambda)=\sum_{i=0}^{b} p^{i}N_{i}(\lambda^{'}) - \sum_{i=|\lambda|+1-b}^{|\lambda^{'}|} p^{i}N_{i}(\lambda^{'}), \qquad 0\leq b\le |\lambda|
\end{equation}
\noindent where $\lambda = (a_d,\dots, a_1), \lambda^{'} = (a_{d-1},\dots, a_1), |\lambda| = a_d+\dots+a_1$ and the second summation appears only when $b > a_d.$ Thus, we compute each of the cases as follows: \\
Case 1: $0\le b \leq a_1$ \\
$N_{b}(\lambda)=\sum_{i=0}^{b} p^{i}N_{i}(\lambda^{'})=\sum_{i=0}^{b} p^{i} \frac{p^{i+1}-1}{p-1}=\sum_{i=0}^{b} \frac{p^{2i+1}-p^{i}}{p-1}=\frac{p^{2b+3}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 2: $a_1\le b \leq a_2$ \\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{b} p^{i}N_{i}(\lambda^{'})=\frac{p^{2a_1+3}-p^{a_1+2}-p^{a_1+1}+1}{(p-1)(p^{2}-1)}+\sum_{i=a_{1}+1}^{b} p^{i} \frac{p^{a_{1}+1}-1}{p-1}$\\
$=\frac{p^{b+a_1+3}+p^{b+a_{1}+2}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 3: $a_2 < b \le a_3 \leq a_1 + a_2$\\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{b} p^{i}N_{i}(\lambda^{'})$\\
$=\frac{p^{a_{2}+a_1+3}+p^{a_{2}+a_{1}+2}-p^{2a_1+2}-p^{a_{2}+2}-p^{a_{2}+1}+1}{(p-1)(p^{2}-1)}+\sum_{i=a_{2}+1}^{b} p^{i}\frac{p^{a_{1}+a_{2}+1-i}-1}{p-1}$\\
$=\frac{(b-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(b-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 4: $a_2 < b \le a_1 + a_2 \leq a_3$ \\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{b} p^{i}N_{i}(\lambda^{'})$\\
$=\frac{p^{a_{2}+a_1+3}+p^{a_{2}+a_{1}+2}-p^{2a_1+2}-p^{a_{2}+2}-p^{a_{2}+1}+1}{(p-1)(p^{2}-1)}+\sum_{i=a_{2}+1}^{b} p^{i}\frac{p^{a_{1}+a_{2}+1-i}-1}{p-1}$\\
$=\frac{(b-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(b-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 5: $a_1 + a_2 \le b \le a_3$ \\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{a_{1}+a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+a_{2}+1}^{b} p^{i}N_{i}(\lambda^{'})\\ - \sum_{i=a_{1}+a_{2}+a_{3}+1-b}^{a_{1}+a_{2}}p^{i}N_{i}(\lambda^{'})$\\
$=\frac{(a_{1}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-a_{1}p^{a_1+a_2+1}-p^{2a_1+2}-p^{a_{1}+a_{2}+2}-p^{a_{1}+a_{2}+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 6: $a_3 < b \leq a_{1}+a_{2} $ \\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{a_{3}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{3}+1}^{b} p^{i}N_{i}(\lambda^{'})\\ - \sum_{i=a_{1}+a_{2}+a_{3}+1-b}^{a_{1}+a_{2}}p^{i}N_{i}(\lambda^{'})$
$=\frac{(a_{3}-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(a_{3}-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{a_{3}+2}-p^{a_{3}+1}+1}{(p-1)(p^{2}-1)}+\sum_{i=a_{3}+1}^{b} p^{i}\frac{p^{a_{1}+a_{2}+1-i}-1}{p-1}$\\
$-\sum_{i=a_{1}+a_{2}+a_{3}+1-b}^{a_{1}+a_{2}}p^{i}\frac{p^{a_{1}+a_{2}+1-i}-1}{p-1}$\\
$=\frac{(a_3-a_2+1)p^{a_2+a_1+3}+ 2p^{a_2+a_1+2}-(a_3-a_2-1)p^{a_2+a_1+1}-p^{a_3+a_2+a_1-b+2}-p^{a_3+a_2+a_1-b+1}}{(p-1)(p^2-1)} \\
+ \frac{-p^{2a_1+2}-p^{b+2}-p^{b+1}+1}{(p-1)(p^2-1)}. $\\
The remaining cases are symmetric to one of the cases above, i.e., they can be obtained when we replace $b$ by $a_1+a_2+a_3-b.$\\
Case 7: $a_1 + a_2 < a_3 < b \le a_1 + a_3$. Replace $b$ in case 4 by $a_1+a_2+a_3-b.$\\
$N_{b}(\lambda)= \frac{(a_1+a_3-b+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(a_1+a_3-b)p^{a_1+a_2+1}-p^{2a_1+2}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 8: $a_3 < a_1 + a_2 < b \le a_1 + a_3$. Replace $b$ in case 3 by $a_1+a_2+a_3-b.$\\
$N_{b}(\lambda) = \frac{(a_1+a_3-b+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(a_1+a_3-b)p^{a_1+a_2+1}-p^{2a_1+2}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 9: $a_1 + a_3 \le b \le a_2 + a_3$. Replace $b$ in case 2 by $a_1+a_2+a_3-b.$\\
$N_{b}(\lambda) = \frac{p^{2a_1+a_2+a_3-b+3}+p^{2a_1+a_2+a_3-b+2}-p^{2a_1+2}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$\\
Case 10: $a_2 + a_3 \le b \le a_1 + a_2 + a_3$. Replace $b$ in case 1 by $a_1+a_2+a_3-b.$\\
$N_{b}(\lambda) = \frac{p^{2a_1+2a_2+2a_3-2b+3}-p^{a_1+a_2+a_3-b+2}-p^{a_1+a_2+a_3-b+1}+1}{(p-1)(p^{2}-1)}.$
\end{proofof}
\begin{remark} We can deduce all other cases using only case 6 as follows:\\
Case 1: replace $a_3,a_2$ and $a_1$ in case 6 formula by $b.$\\
Case 2: replace $a_3$ and $a_2$ in case 6 formula by $b.$\\
Cases 3 and 4: replace $a_3$ in case 6 formula by $b.$\\
Case 5: replace $a_3$ and $b$ in case 6 formula by $a_2+a_1.$\\
Cases 7 and 8: replace $a_3$ in case 6 formula by $a_1+a_2+a_3-b.$\\
Case 9: replace $a_3$ and $a_2$ in case 6 formula by $a_1+a_2+a_3-b.$\\
Case 10: replace $a_3,a_2$ and $a_1$ in case 6 formula by $a_1+a_2+a_3-b.$
\end{remark}
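The substitutions in the remark can be sanity-checked numerically. The following Python sketch (an external verification aid, not part of the text; the helper names are ours) evaluates the Case 6 rational function exactly and confirms that the indicated replacements reproduce the Case 1 and Case 2 formulas at sample parameter values.

```python
from fractions import Fraction

def case6(p, a1, a2, a3, b):
    """Case 6 formula for N_b(a3, a2, a1), evaluated exactly."""
    num = ((a3 - a2 + 1) * p**(a2 + a1 + 3) + 2 * p**(a2 + a1 + 2)
           - (a3 - a2 - 1) * p**(a2 + a1 + 1)
           - p**(a3 + a2 + a1 - b + 2) - p**(a3 + a2 + a1 - b + 1)
           - p**(2 * a1 + 2) - p**(b + 2) - p**(b + 1) + 1)
    return Fraction(num, (p - 1) * (p**2 - 1))

def case1(p, b):
    return Fraction(p**(2*b + 3) - p**(b + 2) - p**(b + 1) + 1,
                    (p - 1) * (p**2 - 1))

def case2(p, a1, b):
    num = (p**(b + a1 + 3) + p**(b + a1 + 2) - p**(2*a1 + 2)
           - p**(b + 2) - p**(b + 1) + 1)
    return Fraction(num, (p - 1) * (p**2 - 1))

# Case 1: replace a3, a2, a1 by b.  Case 2: replace a3 and a2 by b.
for p in (2, 3, 5):
    for b in range(6):
        assert case6(p, b, b, b, b) == case1(p, b)
        for a1 in range(b + 1):
            assert case6(p, a1, b, b, b) == case2(p, a1, b)
```

The two assertions hold identically as rational functions, so the checks pass for every sampled $(p, a_1, b)$.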
\section{Rank 4}
\subsection{Number of subgroups of given order}
While it was a somewhat lengthy but accomplishable task to determine the cases for rank 3, there does not appear to be a systematic way of determining all the cases for rank 4. We were able to list more than 22 cases, some of which we present below. These formulas are proved in exactly the same way as in Theorem \ref{rank3}. Here $\lambda=(a_4,a_3,a_2, a_1)$ and $\lambda^{'}=(a_3,a_2, a_1).$\\
When $0\le b \leq a_1,$ we get \\
$N_{b}(\lambda)=\sum_{i=0}^{b} p^{i}N_{i}(\lambda^{'})=\sum_{i=0}^{b} p^{i} \frac{p^{2i+3}-p^{i+2}-p^{i+1}+1}{(p-1)(p^{2}-1)}=\sum_{i=0}^{b} \frac{p^{3i+3}-p^{2i+2}-p^{2i+1}+p^{i}}{(p-1)(p^{2}-1)}$\\
$=\frac{p^{3b+6}-p^{2b+5}-p^{2b+4}-p^{2b+3}+p^{b+3}+p^{b+2}+p^{b+1}-1}{(p-1)(p^{2}-1)(p^{3}-1)}.$\\
When $a_1\le b \leq a_2,$ we get
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{b} p^{i}N_{i}(\lambda^{'})$\\
$=\frac{p^{3a_{1}+6}-p^{2a_{1}+5}-p^{2a_{1}+4}-p^{2a_{1}+3}+p^{a_{1}+3}+p^{a_{1}+2}+p^{a_{1}+1}-1}{(p-1)(p^{2}-1)(p^{3}-1)}+\sum_{i=a_{1}+1}^{b} p^{i} \frac{p^{i+a_1+3}+p^{i+a_{1}+2}-p^{2a_1+2}-p^{i+2}-p^{i+1}+1}{(p-1)(p^{2}-1)}$\\
$=\frac{p^{2b+a_{1}+6}+p^{2b+a_{1}+5}+p^{2b+a_{1}+4}-p^{2a_{1}+b+5}-p^{2a_{1}+b+4}-p^{2a_{1}+b+3}+p^{3a_{1}+3}-p^{2b+5}-p^{2b+4}-p^{2b+3}+p^{b+3}+p^{b+2}+p^{b+1}-1}{(p-1)(p^{2}-1)(p^{3}-1)}.$\\
And as a final example, when $a_2\le b \leq \min \{a_3, a_{1}+a_2 \},$ we have \\
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{b} p^{i}N_{i}(\lambda^{'})-\sum_{i=a_{1}+a_{2}+a_{3}+a_{4}+1-b}^{a_{1}+a_{2}+a_{3}}p^{i}N_{i}(\lambda^{'})$\\
$=\frac{p^{2a_{2}+a_{1}+6}+p^{2a_{2}+a_{1}+5}+p^{2a_{2}+a_{1}+4}-p^{2a_{1}+a_{2}+5}-p^{2a_{1}+a_{2}+4}-p^{2a_{1}+a_{2}+3}+p^{3a_{1}+3}}{(p-1)(p^{2}-1)(p^{3}-1)}\\
+\frac{-p^{2a_{2}+5}-p^{2a_{2}+4}-p^{2a_{2}+3}+p^{a_{2}+3}+p^{a_{2}+2}+p^{a_{2}+1}-1}{(p-1)(p^{2}-1)(p^{3}-1)}\\
+\sum_{i=a_{2}+1}^{b} p^{i} \frac{(i-a_{2}+1)p^{a_2+a_1+3}+p^{a_2+a_{1}+2}-(i-a_2)p^{a_1+a_2+1}-p^{2a_1+2}-p^{i+2}-p^{i+1}+1}{(p-1)(p^{2}-1)}$\\
$=\frac{(b+1-a_2)p^{a_{2}+a_{1}+b+6}+(b+1-a_{2})p^{a_{2}+a_{1}+b+5}+(a_{2}-b-1)p^{a_{2}+a_{1}+b+3}+(a_{2}-b-1)p^{a_{1}+a_{2}+b+2}}{(p-1)(p^{2}-1)(p^{3}-1)}\\
+\frac{p^{2a_{2}+a_{1}+4}+p^{a_{1}+2a_{2}+3}+p^{a_{1}+2a_{2}+2}}{(p-1)(p^{2}-1)(p^{3}-1)}$\\
$+\frac{-p^{2a_{1}+b+5}-p^{2a_{1}+b+4}-p^{2a_{1}+b+3}+p^{b+3}+p^{b+2}+p^{b+1}+p^{3a_{1}+3}-p^{2b+5}-p^{2b+4}-p^{2b+3}-1}{(p-1)(p^{2}-1)(p^{3}-1)}.$
\subsection{Total number of subgroups}
Let $\lambda = (a_4,a_3,a_2,a_1)$ and let $N(\lambda)$ denote the total number of subgroups of $\mathbb{Z}/p^{a_1}\times \mathbb{Z}/p^{a_2}\times \mathbb{Z}/p^{a_3}\times \mathbb{Z}/p^{a_4}$ where $1\le a_1\le a_2\le a_3\le a_4,$ and $n= a_1+a_2+a_3+a_4.$ Then
\begin{equation*}
N(\lambda) = \sum_{b=0}^n N_b(\lambda).
\end{equation*}
A conjecture of L. T$\acute{o}$th in \cite[Conjecture 10]{LT} claims that the degree of the polynomial $N(m,m,m,m)$, i.e., when $\lambda = (m,m,m,m),$ is $4m$ and that its leading coefficient is 1. We prove that this follows from our formulas in Theorem \ref{rank3}. First, let us state a corollary for the special case of a rank 3 abelian p-group of type $(m,m,m).$ The proof of the corollary follows immediately from setting $a_1=a_2=a_3=m$ in the theorem.
\begin{corollary}\label{mmm}
For every $0\leq b \leq 3m$, the number $h_b^{(m,m,m)}(p)$ of all subgroups of order $p^b$ in the finite abelian $p$-group $\mathbb{Z}/p^{m}\times \mathbb{Z}/p^{m}\times \mathbb{Z}/p^{m}$ where $1\leq m,$ is given by one of the following polynomials expressed as rational functions:\\
Case 1: $0\le b \leq m$
$$h_b^{(m,m,m)}(p) = \frac{p^{2b+3} - p^{b+2} - p^{b+1} +1}{(p-1)(p^2-1)}.$$
Case 2: $m \le b \leq 2m$
$$h_b^{(m,m,m)}(p) = \frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-b}-p^{3m+1-b}-p^{b+2}-p^{b+1}+1}{ (p-1)(p^2-1)}.$$
Case 3: $2m \le b \le 3m$
$$h_b^{(m,m,m)}(p) =\frac{p^{6m+3-2b}-p^{3m+2-b}-p^{3m+1-b}+1}{(p-1)(p^{2}-1)}.$$
\end{corollary}
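As an independent sanity check (not part of the proof), the corollary can be verified by brute force for small parameters. The Python sketch below (helper names are ours) enumerates all subgroups of $(\mathbb{Z}/p)^3$ for $p=2,3$ as spans of generator multisets and compares the counts by order with the three case polynomials at $m=1$.

```python
from itertools import combinations_with_replacement, product
from fractions import Fraction

def subgroup_counts(mods):
    """Brute-force counts of subgroups of Z/m1 x ... x Z/mk, keyed by order."""
    G = list(product(*[range(m) for m in mods]))
    e = max(mods)  # group exponent: all mods are powers of the same prime
    def span(gens):
        H = {tuple([0] * len(mods))}
        for g in gens:  # close under adding multiples of each generator
            H = {tuple((x + c * gi) % m for x, gi, m in zip(h, g, mods))
                 for h in H for c in range(e)}
        return frozenset(H)
    # every subgroup of a rank-k group is spanned by a multiset of k elements
    subs = {span(gens) for gens in combinations_with_replacement(G, len(mods))}
    counts = {}
    for H in subs:
        counts[len(H)] = counts.get(len(H), 0) + 1
    return counts

def h_mmm(p, m, b):
    """The corollary's piecewise polynomial, evaluated exactly."""
    d = (p - 1) * (p**2 - 1)
    if b <= m:
        return Fraction(p**(2*b+3) - p**(b+2) - p**(b+1) + 1, d)
    if b <= 2 * m:
        return Fraction(p**(2*m+3) + p**(2*m+2) + p**(2*m+1)
                        - p**(3*m+2-b) - p**(3*m+1-b)
                        - p**(b+2) - p**(b+1) + 1, d)
    return Fraction(p**(6*m+3-2*b) - p**(3*m+2-b) - p**(3*m+1-b) + 1, d)

for p in (2, 3):
    counts = subgroup_counts([p, p, p])          # (Z/p)^3, i.e. m = 1
    assert counts == {p**b: h_mmm(p, 1, b) for b in range(4)}
```

For $p=2$ this reproduces the familiar counts $1,7,7,1$ of subgroups of $(\mathbb{Z}/2)^3$ by order.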
\begin{theorem}\label{Conj10}
The total number of subgroups $N(m,m,m,m)$ of the rank 4 finite abelian p-group $\mathbb{Z}/p^{m}\times \mathbb{Z}/p^{m}\times \mathbb{Z}/p^{m}\times \mathbb{Z}/p^{m}$ where $1\le m$ is given by a polynomial expressed as a rational function as below:
\begin{multline*}
N(m,m,m,m) = \sum_{b=0}^{4m} N_b(m,m,m,m)\\
= \frac{ {\left(p^{2} + p + 1\right)}^{3} {\left(p^{2} + 1\right)} p^{4 \, m + 2} - {\left({\left(2 \, m + 3\right)} p^{3} - 2 \, m - 1\right)} {\left(p^{3} + p\right)} {\left(p + 1\right)}^{3} p^{3 \, m} }{{\left(p^{2} - 1\right)}^{2} {\left(p^3 - 1\right)}^{2}}\\
- \frac{4 \, m p^{4} + 4 \, m p^{3} + 7 \, p^{4} + 9 \, p^{3} - 4 \, m p + 6 \, p^{2} - 4 \, m + p - 1}{{\left(p^{2} - 1\right)}^{2} {\left(p^3 - 1\right)}^{2}}.
\end{multline*}
In particular, the degree of the polynomial is $4m$ and its leading coefficient is 1.
\end{theorem}
\begin{proofof}{Theorem \ref{Conj10}}
We break the interval $[0,4m]$ into 4 sub-intervals and use \cite[Lemma 3.2]{YH} and the corollary above in each case to derive a polynomial expression for the number of subgroups of order $p^b$ in a rank 4 abelian p-group of type $(m,m,m,m)$. Summing over these polynomials will result in the required polynomial expression.\\
Case 1: $0\le b\le m$
\begin{align*}
N_b(m,m,m,m) &= \sum_{i=0}^{b} p^{i}N_{i}(m,m,m)=\sum_{i=0}^{b} p^{i} \frac{p^{2i+3} - p^{i+2} - p^{i+1} +1}{(p-1)(p^2-1)}\\
&= \frac{p^{3 \, b + 6} - p^{2 \, b + 5} - p^{2 \, b + 4} - p^{2 \, b + 3} + p^{b + 3} + p^{b + 2} + p^{b + 1} - 1}{{\left(p - 1\right)} {\left(p^2 - 1\right)} {\left(p^3 - 1\right)}}.
\end{align*}
Case 2: $m < b\le 2m$
\begin{align*}
N_b(m,m,m,m) &= \sum_{i=0}^{m} p^{i}N_{i}(m,m,m) + \sum_{i=m+1}^{b} p^{i}N_{i}(m,m,m) - \sum_{i=4m+1-b}^{3m} p^{i}N_{i}(m,m,m)\\
&= \sum_{i=0}^{m} p^{i}\frac{p^{2i+3} - p^{i+2} - p^{i+1} +1}{(p-1)(p^2-1)}\\ &+ \sum_{i=m+1}^{b} p^{i}\frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-i}-p^{3m+1-i}-p^{i+2}-p^{i+1}+1}{ (p-1)(p^2-1)}\\
&- \sum_{i=4m+1-b}^{3m} p^{i}\frac{p^{6m+3-2i}-p^{3m+2-i}-p^{3m+1-i}+1}{(p-1)(p^{2}-1)}\\
&= -\frac{p^{2 \, b + 5} + p^{2 \, b + 4} + p^{2 \, b + 3} - p^{b + 2 \, m + 6} - p^{b + 2 \, m + 5} - 2 \, p^{b + 2 \, m + 4}}{(p-1)(p^2-1)(p^3-1)} \\
&+\frac{ - p^{b + 2 \, m + 3} - p^{b + 2 \, m + 2} - p^{b + 3} - p^{b + 2}}{(p-1)(p^2-1)(p^3-1)} \\
&-\frac{ - p^{b + 1} - p^{-b + 4 \, m + 3} - p^{-b + 4 \, m + 2} - p^{-b + 4 \, m + 1} + p^{3 \, m + 5} + 2 \, p^{3 \, m + 4}}{(p-1)(p^2-1)(p^3-1)}\\
&+ \frac{2 \, p^{3 \, m + 3} + 2 \, p^{3 \, m + 2} + p^{3 \, m + 1} + 1}{(p-1)(p^2-1)(p^3-1)}.
\end{align*}
Case 3: $2m < b \le 3m$
\begin{align*}
N_b(m,m,m,m) &= \sum_{i=0}^{m} p^{i}N_{i}(m,m,m) + \sum_{i=m+1}^{2m} p^{i}N_{i}(m,m,m) + \sum_{i=2m+1}^{b} p^{i}N_{i}(m,m,m)\\ &- \sum_{i=4m+1-b}^{2m} p^{i}N_{i}(m,m,m) -\sum_{i=2m+1}^{3m} p^{i}N_{i}(m,m,m)\\
&= \sum_{i=0}^{m} p^{i}\frac{p^{2i+3} - p^{i+2} - p^{i+1} +1}{(p-1)(p^2-1)}\\
&+ \sum_{i=m+1}^{2m} p^{i}\frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-i}-p^{3m+1-i}-p^{i+2}-p^{i+1}+1}{ (p-1)(p^2-1)}\\
&+ \sum_{i=2m+1}^{b} p^{i}\frac{p^{6m+3-2i}-p^{3m+2-i}-p^{3m+1-i}+1}{(p-1)(p^{2}-1)}\\
&- \sum_{i=4m+1-b}^{2m} p^{i}\frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-i}-p^{3m+1-i}-p^{i+2}-p^{i+1}+1}{ (p-1)(p^2-1)}\\ &- \sum_{i=2m+1}^{3m} p^{i}\frac{p^{6m+3-2i}-p^{3m+2-i}-p^{3m+1-i}+1}{(p-1)(p^{2}-1)}\\
&=\frac{p^{b + 3} + p^{b + 2} + p^{b + 1} + p^{-b + 6 \, m + 6} + p^{-b + 6 \, m + 5} + 2 \, p^{-b + 6 \, m + 4}}{(p-1)(p^2-1)(p^3-1)}\\ &+ \frac{p^{-b + 6 \, m + 3} + p^{-b + 6 \, m + 2} + p^{-b + 4 \, m + 3}}{(p-1)(p^2-1)(p^3-1)}\\
&+ \frac{p^{-b + 4 \, m + 2} + p^{-b + 4 \, m + 1} - p^{-2 \, b + 8 \, m + 5} - p^{-2 \, b + 8 \, m + 4} - p^{-2 \, b + 8 \, m + 3} - p^{3 \, m + 5} - 2 \, p^{3 \, m + 4}}{(p-1)(p^2-1)(p^3-1)}\\
&+ \frac{- 2 \, p^{3 \, m + 3} - 2 \, p^{3 \, m + 2} - p^{3 \, m + 1} - 1}{(p-1)(p^2-1)(p^3-1)}.
\end{align*}
Case 4: $3m < b\le 4m$
\begin{align*}
N_b(m,m,m,m) &= \sum_{i=0}^{m} p^{i}N_{i}(m,m,m) + \sum_{i=m+1}^{2m} p^{i}N_{i}(m,m,m) + \sum_{i=2m+1}^{3m} p^{i}N_{i}(m,m,m) \\
&- \sum_{i=4m+1-b}^{m} p^{i}N_{i}(m,m,m) -\sum_{i=m+1}^{2m} p^{i}N_{i}(m,m,m) - \sum_{i=2m+1}^{3m} p^{i}N_{i}(m,m,m)\\
&= \sum_{i=0}^{m} p^{i}\frac{p^{2i+3} - p^{i+2} - p^{i+1} +1}{(p-1)(p^2-1)}\\
&+ \sum_{i=m+1}^{b} p^{i}\frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-i}-p^{3m+1-i}-p^{i+2}-p^{i+1}+1}{ (p-1)(p^2-1)}
\end{align*}
\begin{align*}
&+ \sum_{i=2m+1}^{3m} p^{i}\frac{p^{6m+3-2i}-p^{3m+2-i}-p^{3m+1-i}+1}{(p-1)(p^{2}-1)}\\
&- \sum_{i=4m+1-b}^{m} p^{i}\frac{p^{2i+3} - p^{i+2} - p^{i+1} +1}{(p-1)(p^2-1)}\\
&- \sum_{i=m+1}^{2m} p^{i}\frac{p^{2m+3}+p^{2m+2}+p^{2m+1}-p^{3m+2-i}-p^{3m+1-i}-p^{i+2}-p^{i+1}+1}{ (p-1)(p^2-1)}\\
&- \sum_{i=2m+1}^{3m} p^{i}\frac{p^{6m+3-2i}-p^{3m+2-i}-p^{3m+1-i}+1}{(p-1)(p^{2}-1)}\\
&=\frac{p^{-b + 4 \, m + 3} + p^{-b + 4 \, m + 2} + p^{-b + 4 \, m + 1} - p^{-2 \, b + 8 \, m + 5}}{(p-1)(p^2-1)(p^3-1)}\\
&+ \frac{- p^{-2 \, b + 8 \, m + 4} - p^{-2 \, b + 8 \, m + 3} + p^{-3 \, b + 12 \, m + 6} - 1}{(p-1)(p^2-1)(p^3-1)}.
\end{align*}
Now summing over the 4 cases, we get that
\begin{align*}
N(m,m,m,m)&=\sum_{b=0}^{4m} N_b(m,m,m,m)\\
&= \frac{ {\left(p^{2} + p + 1\right)}^{3} {\left(p^{2} + 1\right)} p^{4 \, m + 2} - {\left({\left(2 \, m + 3\right)} p^{3} - 2 \, m - 1\right)} {\left(p^{3} + p\right)} {\left(p + 1\right)}^{3} p^{3 \, m} }{{\left(p^{2} - 1\right)}^{2} {\left(p^3 - 1\right)}^{2}}\\
&- \frac{4 \, m p^{4} + 4 \, m p^{3} + 7 \, p^{4} + 9 \, p^{3} - 4 \, m p + 6 \, p^{2} - 4 \, m + p - 1}{{\left(p^{2} - 1\right)}^{2} {\left(p^3 - 1\right)}^{2}}.
\end{align*}
Now, we can see that the degree of the polynomial is $4m$ and the leading coefficient is 1, as conjectured by L. T$\acute{o}$th.
\end{proofof}
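The closed form can be checked against a brute-force enumeration for the smallest case: $(\mathbb{Z}/2)^4$ has $1+15+35+15+1=67$ subgroups. The sketch below (an external check; the helper names are ours) confirms this.

```python
from itertools import combinations_with_replacement, product
from fractions import Fraction

def N_mmmm(p, m):
    """The theorem's closed form for the total subgroup count, exactly."""
    num = ((p**2 + p + 1)**3 * (p**2 + 1) * p**(4*m + 2)
           - ((2*m + 3) * p**3 - 2*m - 1) * (p**3 + p) * (p + 1)**3 * p**(3*m)
           - (4*m*p**4 + 4*m*p**3 + 7*p**4 + 9*p**3 - 4*m*p
              + 6*p**2 - 4*m + p - 1))
    return Fraction(num, (p**2 - 1)**2 * (p**3 - 1)**2)

def total_subgroups(mods):
    """Brute-force total number of subgroups of Z/m1 x ... x Z/mk."""
    G = list(product(*[range(m) for m in mods]))
    e = max(mods)  # group exponent: all mods are powers of the same prime
    def span(gens):
        H = {tuple([0] * len(mods))}
        for g in gens:
            H = {tuple((x + c * gi) % m for x, gi, m in zip(h, g, mods))
                 for h in H for c in range(e)}
        return frozenset(H)
    return len({span(g) for g in combinations_with_replacement(G, len(mods))})

# (Z/2)^4 has 1 + 15 + 35 + 15 + 1 = 67 subgroups.
assert total_subgroups([2, 2, 2, 2]) == 67 == N_mmmm(2, 1)
```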
\begin{remark} In an unpublished paper \cite{CCL}, Chew, Chin, and Lim derive an explicit formula for the number of subgroups of a finite abelian p-group of rank 4. Here we will use the formula to reprove the above theorem and also prove one more conjecture by T$\acute{o}$th in \cite[Conjecture 9]{LT}.
\begin{theorem} (Chew, Chin, and Lim) Let $1 \le w \le x \le y \le z$. The number of subgroups of $\mathbb{Z}/p^{w}\times \mathbb{Z}/p^{x}\times \mathbb{Z}/p^{y}\times \mathbb{Z}/p^{z}$ is
\begin{align*}
N(w,x,y,z) &= \sum_{i=0}^{w-1}\sum_{j=0}^{i} [(w+x+y+z-4i+1)(2i-2j+1)p^{3i+j}\\
&+(w+x+y+z-4i-1)(2i-2j+1)p^{3i+j+1}\\
&+ 2(w+x+y+z-4i-2)(i-j+1)p^{3i+j+2}]\\
&+\sum_{i=0}^{w}\sum_{j=w}^{x-1} (w+j-2i+1)[(x+y+z-3j+1)p^{w+2j+i}\\
\end{align*}
\begin{align*}
&+(x+y+z-3j-1)p^{w+2j+i+1}] \\
&+\sum_{i=0}^{w}\sum_{j=x}^{y} (y+z-2j+1)(w+x-2i+1)p^{w+x+i+j}.
\end{align*}
\end{theorem}
Since all coefficients are positive, none of the terms will cancel out and the degree of the polynomial in $p$ can be easily determined. By comparing the exponents, we can see that the highest degree appears in the last double sum. Setting $i=w$ and $j=y,$ we get that the leading term of $N(w,x,y,z)$ is $(z-y+1)(x-w+1)p^{2w+x+y},$ with degree $2w+x+y,$ confirming Conjecture 9 in \cite{LT}.
Moreover, setting $w=x=y=z=m$ in the above leading term, we get that the degree of the leading term in $N(m,m,m,m)$ is $4m,$ confirming Conjecture 10 in the same paper.
\end{remark}
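The Chew--Chin--Lim formula can be cross-checked numerically against the closed form of Theorem \ref{Conj10}: evaluating both at $w=x=y=z=m$ for several $p$ and $m$ must give the same value. Below is a Python sketch of this check (the function names are ours, and the factor in the last double sum is read as $(w+x-2i+1)$).

```python
from fractions import Fraction

def N_ccl(p, w, x, y, z):
    """Chew-Chin-Lim count of subgroups of Z/p^w x Z/p^x x Z/p^y x Z/p^z."""
    s = w + x + y + z
    total = 0
    for i in range(w):                 # first double sum: 0 <= j <= i <= w-1
        for j in range(i + 1):
            total += ((s - 4*i + 1) * (2*i - 2*j + 1) * p**(3*i + j)
                      + (s - 4*i - 1) * (2*i - 2*j + 1) * p**(3*i + j + 1)
                      + 2 * (s - 4*i - 2) * (i - j + 1) * p**(3*i + j + 2))
    for i in range(w + 1):             # second double sum: w <= j <= x-1
        for j in range(w, x):
            total += (w + j - 2*i + 1) * (
                (x + y + z - 3*j + 1) * p**(w + 2*j + i)
                + (x + y + z - 3*j - 1) * p**(w + 2*j + i + 1))
    for i in range(w + 1):             # third double sum: x <= j <= y
        for j in range(x, y + 1):
            total += (y + z - 2*j + 1) * (w + x - 2*i + 1) * p**(w + x + i + j)
    return total

def N_closed(p, m):
    """Closed form of Theorem for type (m,m,m,m)."""
    num = ((p**2 + p + 1)**3 * (p**2 + 1) * p**(4*m + 2)
           - ((2*m + 3) * p**3 - 2*m - 1) * (p**3 + p) * (p + 1)**3 * p**(3*m)
           - (4*m*p**4 + 4*m*p**3 + 7*p**4 + 9*p**3 - 4*m*p
              + 6*p**2 - 4*m + p - 1))
    return Fraction(num, (p**2 - 1)**2 * (p**3 - 1)**2)

for p in (2, 3, 5):
    for m in (1, 2, 3):
        assert N_ccl(p, m, m, m, m) == N_closed(p, m)
```

For instance, both evaluate to $67$ at $p=2$, $m=1$ and to $212$ at $p=3$, $m=1$.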
\begin{remark} What is new in Theorem \ref{Conj10} is the explicit polynomial. Otherwise, both Conjectures 9 and 10 of T$\acute{o}$th follow from Theorem 3 in \cite{GBJW01}, where the leading coefficient and degree of the counting polynomial are determined for all ranks.
\end{remark}
\section{A special case of any rank}
The recursive counting of the subgroups used in the proof of Theorem \ref{rank3} based on \cite[Lemma 2.3]{YH} enabled us to deduce a compact rational form for the polynomials in all the cases for ranks 2 and 3. Here we will demonstrate the method of fundamental group lattices in one case for all ranks. We remark that it is difficult to use this method to give an explicit formula in the remaining cases (those that are not symmetric to Case 1) of rank 3 or higher.
Below we extend T$\ddot{a}$rn$\ddot{a}$uceanu's result for counting subgroups as in \cite[Theorem 3.3]{MT} for case 1 to any rank.
\begin{theorem}
\label{funlatt} Let $0\leq b \leq a_1.$ Then
\begin{equation}
h_b^{(a_k,\dots,a_1)}(p) = \prod_{i=2}^k\frac{p^{b+i-1}-1}{p^{i-1}-1}.
\end{equation} Similarly, if $a_2+\dots+a_k\le b \leq a_1+\dots+a_k$, then replace $b$ in the above formula by $a_1+\dots+a_k-b$ to get
\begin{equation}
h_b^{(a_k,\dots,a_1)}(p) = \prod_{i=2}^k\frac{p^{a_1+\dots+a_k-b+i-1}-1}{p^{i-1}-1}. \end{equation}
\end{theorem}
\begin{proofof}{Theorem \ref{funlatt}} Following the notation in \cite{MT}, let $A= (a_{ij})$ be a solution of $(\ast)$ below corresponding to a subgroup of order $p^{a_1+\dots+a_k-b}$ in $\mathbb{Z}/p^{a_1}\times \mathbb{Z}/p^{a_2}\times \cdots \times \mathbb{Z}/p^{a_k}.$ \\
$$(\ast) \begin{cases} \text{i) } a_{ij} = 0 \text{ for any } i>j\\
\text{ii) } 0\leq a_{1j},a_{2j},\dots,a_{j-1,j} < a_{jj} \text{ for any } j \in \{1,2,\dots, k\}\\
\text{iii) } \text{1) } a_{11}|p^{a_1}\\
\hspace{0.2in}\text{ 2) } a_{22}|(p^{a_2},p^{a_1}\frac{a_{12}}{a_{11}})\\
\hspace{0.2in}\text{ 3) } a_{33}|(p^{a_3},p^{a_2}\frac{a_{23}}{a_{22}},p^{a_1}\frac{\begin{vmatrix}
a_{12} &a_{13} \\
a_{22} & a_{23}
\end{vmatrix}}{a_{22}a_{11}})\\
\vspace{0.1in}\\
\hspace{1in} \longvdots{1.5em}\\
\hspace{0.2in}\text{ k) } a_{kk}|(p^{a_k},p^{a_{k-1}}\frac{a_{k-1,k}}{a_{k-1,k-1}},p^{a_{k-2}}\frac{\begin{vmatrix}
a_{k-2,k-1} &a_{k-2,k} \\
a_{k-1,k-1} & a_{k-1,k}
\end{vmatrix}}{a_{k-1,k-1}a_{k-2,k-2}},\dots,p^{a_1}\frac{\begin{vmatrix}
a_{12} & a_{13} & \cdots & a_{1,k} \\
a_{22} & a_{23} & \cdots & a_{2,k} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
a_{k-1,2} & a_{k-1,3} & \cdots & a_{k-1,k}
\end{vmatrix}}{a_{k-1,k-1}a_{k-2,k-2}\dots a_{11}}).
\end{cases} $$
Put $a_{11}=p^{i_1}$ where $0\leq i_1\leq a_1.$ By the first remark on p.~376 of \cite{MT}, the order of a subgroup corresponding to $A= (a_{ij})$ is $$\frac{p^{\sum_{i=1}^k a_i}}{\prod_{i=1}^k a_{ii}} = p^{a_1+\dots+a_k-b}.$$
Therefore, $a_{11}\cdots a_{kk}=p^b.$ Since $a_{11}=p^{i_1},$ we have $a_{22}\cdots a_{kk}=p^{b-i_1}.$ Let\\
\hspace{2in} $a_{22} = p^{i_2}$ where $0\leq i_2 \leq b-i_1,$ \\
\hspace{2in} $a_{33}=p^{i_3}$ where $0\leq i_3\leq b-i_1-i_2,$\\
\hspace{2in} $\dots $ \\
\hspace{2in} $a_{k-1,k-1}=p^{i_{k-1}}$ where $0\leq i_{k-1} \leq b-i_1-\dots - i_{k-2}.$ \\
Then $a_{kk}=p^{b-i_1-\dots - i_{k-1}}.$ Now, the conditions become
$$p^{i_2}|(p^{a_2},p^{a_1-i_1}a_{12}) = p^{a_1-i_1}(p^{a_2-a_1+i_1},a_{12})$$
$$p^{i_3}|(p^{a_3},p^{a_2-i_2}a_{23}, p^{a_1-i_1-i_2}(a_{12}a_{23}-p^{i_2}a_{13}))$$
$$\dots$$
$$p^{b-i_1-\dots - i_{k-1}}|(p^{a_k},p^{a_{k-1}}\frac{a_{k-1,k}}{a_{k-1,k-1}},p^{a_{k-2}}\frac{\begin{vmatrix}
a_{k-2,k-1} &a_{k-2,k} \\
a_{k-1,k-1} & a_{k-1,k}
\end{vmatrix}}{a_{k-1,k-1}a_{k-2,k-2}},\dots,p^{a_1}\frac{\begin{vmatrix}
a_{12} & a_{13} & \cdots & a_{1,k} \\
a_{22} & a_{23} & \cdots & a_{2,k} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
\textbf{.} & \textbf{.} & & \textbf{.} \\
a_{k-1,2} & a_{k-1,3} & \cdots & a_{k-1,k}
\end{vmatrix}}{a_{k-1,k-1}a_{k-2,k-2}\dots a_{11}}).$$
These conditions are satisfied by all
$$a_{12}< a_{22}=p^{i_2},$$
$$a_{13},a_{23} < a_{33}=p^{i_3},$$
$$\cdots$$
$$a_{1,k-1},\dots,a_{k-2,k-1}< a_{k-1,k-1}=p^{i_{k-1}}$$
$$a_{1,k},\dots,a_{k-1,k}< a_{k,k}=p^{b-i_1-\cdots-i_{k-1}}.$$
So, we get $$p^{i_2}\cdot (p^{i_3})^2 \cdots (p^{i_{k-1}})^{k-2}\cdot (p^{b-i_1-\cdots-i_{k-1}})^{k-1} = p^{(k-1)b-(k-1)i_1-(k-2)i_2-\cdots -2i_{k-2}-i_{k-1}}$$ distinct solutions of $(\ast)$ for given $a_{11}=p^{i_1},a_{22}=p^{i_2},\cdots,a_{k-1,k-1}=p^{i_{k-1}},$ and $a_{k,k}=p^{b-i_1-\cdots-i_{k-1}}.$ The number of subgroups of index $p^b$ (or order $p^{a_1+\dots+a_k-b}$) in $\mathbb{Z}/p^{a_1}\times \cdots \times \mathbb{Z}/p^{a_k}$ is then
\begin{equation}
h_b^{(a_k,\dots,a_1)}(p) = \sum_{i_1=0}^{b}\sum_{i_2=0}^{b-i_1}\cdots \sum_{i_{k-1}=0}^{b-i_1-\cdots-i_{k-2}} p^{(k-1)b-(k-1)i_1-(k-2)i_2-\cdots -2i_{k-2}-i_{k-1}}.
\end{equation}
Now we will use induction to prove that
\begin{equation}\label{anyrankclaim}
\sum_{i_1=0}^{b}\sum_{i_2=0}^{b-i_1}\cdots \sum_{i_{k-1}=0}^{b-i_1-\cdots-i_{k-2}} p^{(k-1)b-(k-1)i_1-(k-2)i_2-\cdots -2i_{k-2}-i_{k-1}} = \prod_{i=2}^k\frac{p^{b+i-1}-1}{p^{i-1}-1}.
\end{equation}
By Theorem \ref{rank3} and the previous discussion, equation (\ref{anyrankclaim}) is true when $k=2,3.$ Let us assume that it is true for rank $k$. Now consider the left hand side of equation (\ref{anyrankclaim}) for rank $k+1,$ that is,
$$\sum_{i_1=0}^{b}\sum_{i_2=0}^{b-i_1}\cdots \sum_{i_{k-1}=0}^{b-i_1-\cdots-i_{k-2}} \sum_{i_{k}=0}^{b-i_1-\cdots-i_{k-2}-i_{k-1}}p^{(k)b-(k)i_1-(k-1)i_2-\cdots -3i_{k-2}-2i_{k-1}-i_{k}} $$\\
and put $\alpha=b-i_1$. Then \\
$$\sum_{i_1=0}^{b}p^{b-i_1}\sum_{i_2=0}^{\alpha}\cdots \sum_{i_{k-1}=0}^{\alpha-\cdots-i_{k-2}} \sum_{i_{k}=0}^{\alpha-\cdots-i_{k-2}-i_{k-1}}p^{(k-1)\alpha-(k-1)i_2-\cdots -3i_{k-2}-2i_{k-1}-i_{k} }$$
\begin{align*}
&= \sum_{i_1=0}^{b}p^{b-i_1} \prod_{i=3}^{k+1}\frac{p^{b+i-1}-1}{p^{i-1}-1}\\
&= \prod_{i=3}^{k+1}\frac{p^{b+i-1}-1}{p^{i-1}-1}\sum_{i_1=0}^{b}p^{b-i_1}\\
&= \prod_{i=2}^{k+1}\frac{p^{b+i-1}-1}{p^{i-1}-1}.
\end{align*} This completes the proof.
\end{proofof}
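The combinatorial identity (\ref{anyrankclaim}) established in the proof lends itself to a direct numerical check. The sketch below (helper names are ours) evaluates the nested sum literally and compares it with the product for several $p$, $k$, and $b$.

```python
from itertools import product
from fractions import Fraction

def lhs(p, k, b):
    """Left side of the identity: the nested (k-1)-fold sum."""
    total = 0
    # the nested bounds i_t <= b - i_1 - ... - i_{t-1} are equivalent
    # to i_1 + ... + i_{k-1} <= b with all i_t >= 0
    for idx in product(range(b + 1), repeat=k - 1):
        if sum(idx) > b:
            continue
        exp = (k - 1) * b - sum((k - t) * i for t, i in enumerate(idx, 1))
        total += p**exp
    return total

def rhs(p, k, b):
    """Right side of the identity: the product formula."""
    out = Fraction(1)
    for i in range(2, k + 1):
        out *= Fraction(p**(b + i - 1) - 1, p**(i - 1) - 1)
    return out

for p in (2, 3):
    for k in (2, 3, 4, 5):
        for b in range(4):
            assert lhs(p, k, b) == rhs(p, k, b)
```

For example, $k=3$, $b=1$, $p=2$ gives $4+2+1=7$ on both sides.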
\begin{remark} An application of the method used in Theorem \ref{rank3} also shows that we can rederive the above result in a simpler way as follows. Let $\lambda=(a_k,\dots, a_1), \lambda^{'}=(a_{k-1},\dots, a_1),$ and $0\le b \leq a_1.$ Then a repeated application of the recurrence (\ref{Stehrec}) for $N_b(\lambda)$ shows that
\begin{align*}
N_{b}(\lambda) &= \sum_{i_1=0}^{b} p^{i_1}N_{i_1}(\lambda^{'})\\
&=\sum_{i_1=0}^{b} p^{i_1}\sum_{i_2=0}^{i_1} p^{i_2}N_{i_2}(\lambda^{''})\\
&=\dots \\
&=\sum_{i_1=0}^{b} \sum_{i_2=0}^{i_1} \dots\sum_{i_{k-1}=0}^{i_{k-2}} p^{i_1+\dots +i_{k-1}} N_{i_{k-1}}(a_1)\\
&=\sum_{i_1=0}^{b} \sum_{i_2=0}^{i_1} \dots\sum_{i_{k-1}=0}^{i_{k-2}} p^{i_1+\dots+i_{k-1}}.
\end{align*}
Now let $j_t = b - \sum_{l=1}^t i_l$ for $1\le t\le k-1.$ Then $i_t = j_{t-1} - j_t$ for $1\le t\le k-1,$ letting $j_0=b.$ As $i_t$ varies between 0 and $j_{t-1},$ $j_t$ will also vary between 0 and $j_{t-1}.$ Also $(k-1)b-(k-1)i_1-(k-2)i_2-\cdots -2i_{k-2}-i_{k-1}= j_1+\dots+j_{k-1}.$ Hence,
\begin{align*}
\sum_{j_1=0}^{b} \sum_{j_2=0}^{j_1} \dots\sum_{j_{k-1}=0}^{j_{k-2}} p^{j_1+\dots+j_{k-1}}
&= \sum_{i_1=0}^{b} \sum_{i_2=0}^{j_1} \dots\sum_{i_{k-1}=0}^{j_{k-2}} p^{j_1+\dots+j_{k-1}} \\
&=\sum_{i_1=0}^{b}\sum_{i_2=0}^{b-i_1}\cdots \sum_{i_{k-1}=0}^{b-i_1-\cdots-i_{k-2}} p^{(k-1)b-(k-1)i_1-(k-2)i_2-\cdots -2i_{k-2}-i_{k-1}},
\end{align*} which is the same as the left hand side of equation (\ref{anyrankclaim}). Therefore,
\begin{equation}
\sum_{i_1=0}^{b} \sum_{i_2=0}^{i_1} \dots\sum_{i_{k-1}=0}^{i_{k-2}} p^{i_1+\dots+i_{k-1}} = \prod_{i=2}^k\frac{p^{b+i-1}-1}{p^{i-1}-1}.
\end{equation}
\end{remark}
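Theorem \ref{funlatt} can also be confirmed by brute force on small groups: for $0\le b\le a_1$ the count $\prod_{i=2}^k\frac{p^{b+i-1}-1}{p^{i-1}-1}$ depends only on $p$, $k$, and $b$, not on the type itself. The Python sketch below (the helper names are ours) checks this for a few rank-3 types at $p=2$.

```python
from itertools import combinations_with_replacement, product
from fractions import Fraction

def subgroup_counts(mods):
    """Brute-force counts of subgroups of Z/m1 x ... x Z/mk, keyed by order."""
    G = list(product(*[range(m) for m in mods]))
    e = max(mods)  # group exponent: all mods are powers of the same prime
    def span(gens):
        H = {tuple([0] * len(mods))}
        for g in gens:
            H = {tuple((x + c * gi) % m for x, gi, m in zip(h, g, mods))
                 for h in H for c in range(e)}
        return frozenset(H)
    subs = {span(g) for g in combinations_with_replacement(G, len(mods))}
    counts = {}
    for H in subs:
        counts[len(H)] = counts.get(len(H), 0) + 1
    return counts

def h_product(p, k, b):
    """The theorem's product formula, valid for 0 <= b <= a_1."""
    out = Fraction(1)
    for i in range(2, k + 1):
        out *= Fraction(p**(b + i - 1) - 1, p**(i - 1) - 1)
    return out

# types (a_1, a_2, a_3); the count for b <= a_1 is independent of the type
for a in ([1, 1, 2], [1, 2, 2], [2, 2, 2]):
    counts = subgroup_counts([2**t for t in a])
    for b in range(a[0] + 1):
        assert counts[2**b] == h_product(2, 3, b)
```

For instance, $(\mathbb{Z}/4)^3$ has $h_2 = \frac{(2^3-1)(2^4-1)}{(2-1)(2^2-1)} = 35$ subgroups of order $4$.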
\section{The method of convolution} \label{sec:convolution}
Here, we will use convolution of generating series and a recurrence relation of T. Stehling \cite{TS} to prove the rational function formula in case 1 of rank 2. This is only to demonstrate the method; the rest of the cases can be handled similarly. To ease the use of the recurrence relation, let us adopt Stehling's notation for type $\alpha = (\alpha_1,\dots,\alpha_d)$ where $\alpha_1\ge \cdots \ge \alpha_d\ge 0$, which reverses the order of the notation we used in previous sections.
Let $0\le b \leq \alpha_2$ (i.e. case 1 of rank 2), then
\begin{equation}
\label{c1rk2}
h_b^{(\alpha_1,\alpha_2)}(p) = \frac{ p^{b+1} -1}{p-1}.
\end{equation}
The symmetric case (i.e. $\alpha_1\le b \leq \alpha_1+\alpha_2$) is then (replacing $b$ by $\alpha_1+\alpha_2-b$ in the above rational function):
\begin{equation}
h_b^{(\alpha_1,\alpha_2)}(p) = \frac{ p^{\alpha_1+\alpha_2-b+1} -1}{p-1}.
\end{equation}
The idea behind the method is the following. Let $F(x) = \sum a_nx^n$ and $G(x) = \sum b_nx^n$ be two convergent series. Then the series $H(x) = \sum a_nb_nx^n$ can be obtained from $F$ and $G$ as follows. Define the \textbf{convolution} of $F$ and $G$ denoted $F\star G (x)$ as the following contour integral around a small circle about $0$ where $x$ is a complex number very small in magnitude.
\begin{align*}
F\star G (x) &= \frac{1}{2\pi i} \oint_{y\in B_{\epsilon}(0)} F(y)G(\frac{x}{y})\frac{dy}{y}\\
&= \frac{1}{2\pi i} \oint_{y\in B_{\epsilon}(0)} \sum a_nb_my^n\frac{x^m}{y^m}\frac{dy}{y} \\
&=\sum a_nb_mx^m \frac{1}{2\pi i} \oint_{y\in B_{\epsilon}(0)} y^{n-m-1}dy\\
&= \sum a_nb_nx^n.
\end{align*}
Thus $H(x) = F\star G (x).$ Now, we will prove equation (\ref{c1rk2}) using this method. Let $F_d(x,y)$ be the generating series
\begin{equation}
F_d(x,y) = \sum_{\substack{\alpha = (\alpha_1,\dots,\alpha_d) \\ \alpha_1\ge \cdots \ge \alpha_d\ge 0 \\ 0\le r\le \alpha_1+\dots+\alpha_d}} h_r^{(\alpha_1,\dots,\alpha_d)}(p)x^{\alpha}y^r
\end{equation}
where $x = (x_1,\dots, x_d)$ and $x^\alpha = x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_d^{\alpha_d}.$ Let's denote $h_r^{\alpha}(p)$ by $N_{\alpha}^{(d)}(r)$ or simply $N_{\alpha}(r)$ when the rank is understood. By \cite[Corollary]{TS}, we have the recurrence formula
\begin{equation}\label{Stehrec}
N_{\alpha}(r) = N_{\tilde{\alpha}}(r-1)+p^rN_{\hat{\alpha}}(r)
\end{equation} where $\hat{\alpha} = (\alpha_2,\dots,\alpha_d)$ and $\tilde{\alpha} = \alpha$ with $-1$ added to the $k^{th}$ position where
$$k=\begin{cases}
1 & \text{if } \alpha_1>\alpha_2\\
2 & \text{if } \alpha_1=\alpha_2>\alpha_3 \\
3 & \text{if } \alpha_1=\alpha_2=\alpha_3>\alpha_4\\
\dots & \dots
\end{cases}$$
Note that $N_{\alpha}^{(1)}(r) =1$ if $0\le r\le \alpha$ and $N_{\alpha}^{(1)}(r)=0$ otherwise. So,
\begin{equation}
F_1(x,y) = \frac{1}{(1-x)(1-xy)}.
\end{equation} For rank 2, let's break down $F_2(x_1,x_2,y)$ as follows:
\begin{align}
\label{F2}
F_2(x_1,x_2,y) &= \sum_{\substack{ \alpha_1 \ge \alpha_2\ge 0 \\ 0\le r\le \alpha_1+\alpha_2}} N_{\alpha_1,\alpha_2}^{(2)}(r)x_1^{\alpha_1}x_2^{\alpha_2}y^r
= F_2^{(0)}+F_2^{(1)}
\end{align} where
\begin{align}
F_2^{(0)}(x_1,x_2,y) &= \sum_{\alpha_1 = \alpha_2\ge 0} N_{\alpha_1,\alpha_2}^{(2)}(r)x_1^{\alpha_1}x_2^{\alpha_2}y^r \\
F_2^{(1)}(x_1,x_2,y) &= \sum_{\alpha_1 > \alpha_2\ge 0} N_{\alpha_1,\alpha_2}^{(2)}(r)x_1^{\alpha_1}x_2^{\alpha_2}y^r
\end{align} Let's first compute $F_2^{(0)}(x_1,x_2,y). $
\begin{align*}
F_2^{(0)}(x_1,x_2,y) &= \sum_{\alpha=0}^\infty \sum_r N_{\alpha,\alpha}^{(2)}(r)(x_1x_2)^{\alpha}y^r \\
&= \sum_{\alpha} \sum_r N_{\alpha,\alpha-1}^{(2)}(r-1)(x_1x_2)^{\alpha}y^r + \sum_{\alpha} p^rN_{\alpha}^{(1)}(r)(x_1x_2)^{\alpha}y^r\\
&= \sum_{\alpha} \sum_r N_{\alpha-1,\alpha-1}^{(2)}(r-2)(x_1x_2)^{\alpha}y^r + \sum_{\alpha} \sum_r p^{r-1}N_{\alpha-1}^{(1)}(r-1)(x_1x_2)^{\alpha}y^r \\
&+ \sum_{\alpha} p^rN_{\alpha}^{(1)}(r)(x_1x_2)^{\alpha}y^r\\
&= \sum_{\alpha} \sum_r N_{\alpha,\alpha}^{(2)}(r)(x_1x_2)^{\alpha+1}y^{r+2} + \sum_{\alpha} \sum_r N_{\alpha}^{(1)}(r)(x_1x_2)^{\alpha+1}p^{r}y^{r+1}\\
&+ \sum_{\alpha} N_{\alpha}^{(1)}(r)(x_1x_2)^{\alpha}p^ry^r\\
&= x_1x_2y^2F_2^{(0)}(x_1,x_2,y) + x_1x_2yF_1(x_1x_2,py) + F_1(x_1x_2,py).
\end{align*} Solving for $F_2^{(0)},$ results in \begin{equation}
F_2^{(0)}(x_1,x_2,y) = \frac{1+x_1x_2y}{(1-x_1x_2)(1-px_1x_2y)(1-x_1x_2y^2)}.
\end{equation} Similarly, we have
\begin{align*}
F_2^{(1)}(x_1,x_2,y) &= \sum_{\alpha_1>\alpha_2\ge 0} \sum_r N_{\alpha}^{(2)}(r)x_1^{\alpha_1}x_2^{\alpha_2}y^r \\
&=\sum_{\alpha_1>\alpha_2\ge 0} \left [ N_{\tilde{\alpha}}^{(2)}(r-1)+p^rN_{\hat{\alpha}}^{(1)}(r) \right ]x_1^{\alpha_1}x_2^{\alpha_2}y^r\\
&=\sum_{\alpha_1>\alpha_2\ge 0} \left [ N_{\alpha_1-1,\alpha_2}^{(2)}(r-1)+p^rN_{\alpha_2}^{(1)}(r) \right ]x_1^{\alpha_1}x_2^{\alpha_2}y^r\\
&=\sum_{\alpha_1 \ge \alpha_2\ge 0} N_{\alpha_1,\alpha_2}^{(2)}(r-1)x_1^{\alpha_1+1}x_2^{\alpha_2}y^{r+1} + \sum_{ \substack{ \alpha_1 > \alpha_2\ge 0\\ 0\le r\le \alpha_2}} x_1^{\alpha_1}x_2^{\alpha_2}(py)^r\\
&= x_1y\left [ F_2^{(1)}(x_1,x_2,y) + F_2^{(0)}(x_1,x_2,y) \right ] + \sum_{ \alpha_1 > \alpha_2\ge 0} x_1^{\alpha_1}x_2^{\alpha_2}\frac{1-(py)^{\alpha_2+1}}{1-py}.
\end{align*} Solving for $F_2^{(1)}(x_1,x_2,y),$ we have \begin{equation}
F_2^{(1)}(x_1,x_2,y) = \frac{1+y-x_1y+x_1^2x_2y^2}{(1-x_1)(1-x_1y)(1-x_1x_2)(1-x_1x_2y)(1-px_1x_2y)}.
\end{equation} Finally, combining the two sums, equation (\ref{F2}) becomes
\begin{equation*}
F_2(x,y) = F_2(x_1,x_2,y) = \sum_{\substack{\alpha = (\alpha_1,\alpha_2) \\ \alpha_1\ge \alpha_2\ge 0 \\ 0\le r\le \alpha_1+\alpha_2}} h_r^{(\alpha_1,\alpha_2)}(p)x_1^{\alpha_1}x_2^{\alpha_2}y^r\end{equation*}
\begin{equation}
= \frac{x_1^2x_2y^2 + x_1^2x_2y - x_1x_2y - 1}{(1-x_1)(1-x_1y)(1-x_1x_2)(1-x_1x_2y^2)(px_1x_2y-1)}.
\end{equation}
We will use the idea described before to pick out the rational function expression for case 1 from the general rational function expression of $F_2$ above. The final step will be extracting a general formula for the coefficients of the general term of the power series expansion of the resulting rational function using products of power series and change of variables.
Now consider small circles in the complex plane around 0 where the complex numbers $x_1,x_2,y$ are very small in magnitude. We evaluate the following repeated contour integrals of the convolution of $F_2$ and $G$ by applying Cauchy's residue theorem to get the generating series for case 1.
Let
\begin{equation}
G(x_1,x_2,y) = \sum_{ \alpha_1\ge \alpha_2\ge r\ge 0 } x_1^{\alpha_1}x_2^{\alpha_2}y^r = \frac{1}{(1-x_1)(1-x_1x_2)(1-x_1x_2y)}.
\end{equation}
Then
\begin{align*}
F_2\star G (x_1,x_2,y) &=\frac{1}{(2\pi i)^3} \oint_{u_3\in B_{\epsilon_3}(0)} \oint_{u_2\in B_{\epsilon_2}(0)} \oint_{u_1\in B_{\epsilon_1}(0)} F_2(u_1,u_2,v)G(\frac{x_1}{u_1},\frac{x_2}{u_2},\frac{y}{v})\frac{du_1}{u_1}\frac{du_2}{u_2}\frac{dv}{v} \\
&= \sum_{ \alpha_1\ge \alpha_2\ge r\ge 0 } h_r^{(\alpha_1,\alpha_2)}(p)x_1^{\alpha_1}x_2^{\alpha_2}y^r.
\end{align*}
Using the rational functions for $F_2$ and $G$ and evaluating the integrals, we get that
\begin{equation}
\sum_{ \alpha_1\ge \alpha_2\ge r\ge 0 } h_r^{(\alpha_1,\alpha_2)}(p)x_1^{\alpha_1}x_2^{\alpha_2}y^r = \frac{1}{(1-x_1)(1-x_1x_2)(1-x_1x_2y)(1-px_1x_2y)}.
\end{equation}
It remains to extract the coefficient of $x_1^{\alpha_1}x_2^{\alpha_2}y^r$ from the last rational function in order to determine $h_r^{(\alpha_1,\alpha_2)}(p)$ for the case $0\le r \le \alpha_2.$ We do this as follows:
\begin{align*}
&\frac{1}{(1-x_1)(1-x_1x_2)(1-x_1x_2y)(1-px_1x_2y)}\\
&= \big( \sum_{k_1\ge 0} x_1^{k_1}\big)\big( \sum_{k_2\ge 0} (x_1x_2)^{k_2}\big)\big( \sum_{k_3\ge 0} (x_1x_2y)^{k_3}\big)\big( \sum_{k_4\ge 0} (px_1x_2y)^{k_4}\big)\\
&= \sum_{k_1,k_2,k_3,k_4\ge 0} p^{k_4}x_1^{k_1+k_2+k_3+k_4}x_2^{k_2+k_3+k_4}y^{k_3+k_4}.
\end{align*}
Let $r = k_3+k_4,\ \alpha_2=k_2+k_3+k_4=k_2+r,\ \alpha_1=k_1+k_2+k_3+k_4=k_1+\alpha_2.$ The last sum above becomes
\begin{align*}
\frac{1}{(1-x_1)(1-x_1x_2)(1-x_1x_2y)(1-px_1x_2y)} &=\sum_{\alpha_1\ge \alpha_2\ge r\ge 0} x_1^{\alpha_1}x_2^{\alpha_2}y^{r}\sum_{k_4=0}^r p^{k_4} \\
&= \sum_{\alpha_1\ge \alpha_2\ge r\ge 0} \frac{p^{r+1}-1}{p-1}x_1^{\alpha_1}x_2^{\alpha_2}y^{r}.
\end{align*}
Finally, by comparing coefficients, we get the required result:
$$h_r^{(\alpha_1,\alpha_2)}(p) = \frac{p^{r+1}-1}{p-1}.$$
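The coefficient extraction can be double-checked by expanding the four geometric series numerically up to a fixed degree. The Python sketch below (the function name is ours) collects the coefficient of $x_1^{\alpha_1}x_2^{\alpha_2}y^r$ and confirms it equals $\frac{p^{r+1}-1}{p-1}$ in every case that appears, which necessarily has $\alpha_1\ge\alpha_2\ge r$.

```python
def series_coeffs(p, deg):
    """Coefficients of 1/((1-x1)(1-x1x2)(1-x1x2y)(1-p x1x2y)), x1-degree <= deg."""
    coeffs = {}
    rng = range(deg + 1)
    for k1 in rng:
        for k2 in rng:
            for k3 in rng:
                for k4 in rng:
                    a1 = k1 + k2 + k3 + k4
                    if a1 > deg:  # all contributing tuples have each k_i <= deg
                        continue
                    key = (a1, k2 + k3 + k4, k3 + k4)  # (alpha_1, alpha_2, r)
                    coeffs[key] = coeffs.get(key, 0) + p**k4
    return coeffs

p, deg = 3, 6
for (a1, a2, r), v in series_coeffs(p, deg).items():
    assert a1 >= a2 >= r >= 0
    assert v == (p**(r + 1) - 1) // (p - 1)
```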
\begin{remark}
In comparison, we can quickly prove the formulas for all cases of rank 2 using the recurrence used in the proof of Theorem \ref{rank3} as follows. First let's return to the notation of the previous sections, i.e. $a = (a_d, \dots,a_1)$ with $a_1\leq \dots \le a_d.$
Let $\lambda=(a_2, a_1)$ and $\lambda^{'}=( a_1).$\\
\text{Case 1:} $0\le b \leq a_1$
$N_{b}(\lambda)=\sum_{i=0}^{b} p^{i}N_{i}(\lambda^{'})=\sum_{i=0}^{b} p^{i}=\frac{p^{b+1}-1}{p-1}.$\\
Note $N_{i}(\lambda^{'})=1$ or 0 according as $ i \leq a_1$ or $ i> a_1 ,$ respectively.\\
\text{Case 2:} $a_1\le b \leq a_2$
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{b} p^{i}N_{i}(\lambda^{'})=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})=\sum_{i=0}^{a_1} p^{i}=\frac{p^{a_1+1}-1}{p-1}.$\\
\text{Case 3:} $a_2\le b \leq a_{1}+a_2$ (This case can also be deduced by replacing $b$ in case 1 by $a_1+a_2-b$)
$N_{b}(\lambda)=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{1}+1}^{a_{2}} p^{i}N_{i}(\lambda^{'})+\sum_{i=a_{2}+1}^{b} p^{i}N_{i}(\lambda^{'})-\sum_{i=a_{1}+a_{2}+1-b}^{a_1}p^{i}N_{i}(\lambda^{'})$\\
$=\sum_{i=0}^{a_1} p^{i}N_{i}(\lambda^{'})-\sum_{i=a_{1}+a_{2}+1-b}^{a_1}p^{i}=\frac{p^{a_{1}+a_{2}+1-b}-1}{p-1}.$\\
In summary, we have
$$h_b^{(a_2,a_1)}(p)=\begin{cases}
\frac{p^{b+1}-1}{p-1} & \text{if } 0\le b \leq a_1\\
\frac{p^{a_1+1}-1}{p-1} & \text{if } a_1\le b \leq a_2\\
\frac{p^{a_1+a_2-b+1}-1}{p-1} & \text{if } a_2\le b \leq a_1+a_2.
\end{cases}$$
\end{remark}
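The three rank-2 case formulas derived in the remark can be verified by brute force on small groups; the Python sketch below (helper names are ours) does so for several types at $p=2$.

```python
from itertools import combinations_with_replacement, product

def counts_by_order(mods):
    """Brute-force subgroup counts of Z/m1 x Z/m2, keyed by order."""
    G = list(product(range(mods[0]), range(mods[1])))
    e = max(mods)  # group exponent
    def span(gens):
        H = {(0, 0)}
        for g in gens:
            H = {((x + c * g[0]) % mods[0], (y + c * g[1]) % mods[1])
                 for x, y in H for c in range(e)}
        return frozenset(H)
    subs = {span(g) for g in combinations_with_replacement(G, 2)}
    out = {}
    for H in subs:
        out[len(H)] = out.get(len(H), 0) + 1
    return out

def h_rank2(p, a1, a2, b):
    """Piecewise formula from Cases 1-3, with a1 <= a2."""
    if b <= a1:
        return (p**(b + 1) - 1) // (p - 1)
    if b <= a2:
        return (p**(a1 + 1) - 1) // (p - 1)
    return (p**(a1 + a2 - b + 1) - 1) // (p - 1)

for a1, a2 in [(1, 3), (2, 3), (2, 2)]:
    c = counts_by_order([2**a1, 2**a2])
    for b in range(a1 + a2 + 1):
        assert c.get(2**b, 0) == h_rank2(2, a1, a2, b)
```

For instance, $\mathbb{Z}/2\times\mathbb{Z}/8$ has $1,3,3,3,1$ subgroups of orders $1,2,4,8,16$, matching the three cases.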
\textit{Acknowledgement.} The first named author would like to thank the City University of New York HPCC facility for access to the HPC systems based at the College of Staten Island, New York City, on which every formula was verified using SageMath and Mathematica.
\bibliographystyle{hplain}
https://arxiv.org/abs/2103.13727 | Conway's spiral and a discrete Gömböc with 21 point masses | We show an explicit construction in 3 dimensions for a convex, mono-monostatic polyhedron (i.e., having exactly one stable and one unstable equilibrium) with 21 vertices and 21 faces. This polyhedron is a 0-skeleton, with equal masses located at each vertex. The above construction serves as an upper bound for the minimal number of faces and vertices of mono-monostatic 0-skeletons and complements the recently provided lower bound of 8 vertices. This is the first known construction of a mono-monostatic polyhedral solid. We also show that a similar construction for homogeneous distribution of mass cannot result in a mono-monostatic solid. | \section{Introduction}\label{sec:intro}
\subsection{Mono-stability and homogeneous polyhedra}
If a rigid body has one single stable position then we call it \emph{mono-stable}; this property was probably first explored by Archimedes as he developed his famous design for ships \cite{Archimedes}. Mono-stability may also be advantageous for rigid bodies under gravity, supported on a rigid (frictionless) surface, as it facilitates self-righting.
Beyond these applications, mono-stable bodies have also attracted considerable mathematical interest. In particular, in case of
convex polyhedra with homogeneous mass distribution, it is still unclear
what are the minimal numbers $F^S, V^S$ of faces and vertices necessary to achieve mono-stability. Conway and Guy in 1967 \cite{Conway}
offered the first upper bound by describing such an object with $F=19$ faces and $V=34$ vertices. The Conway-Guy construction was improved by Bezdek \cite{Bezdek} to $(F,V)=(18,18)$ and later by Reshetov \cite{Reshetov} to $(F,V)=(14,24)$. These values of $F$ and $V$ yield the best known \emph{upper bounds} for a mono-stable polyhedron, so we have $F^S \leq 14$, $V^S \leq 18$. Even less is known about the lower bounds: the only known result is due to Conway \cite{Dawson}, who proved that a homogeneous tetrahedron has at least two stable equilibria, from which $F^S, V^S \geq 5$ follows.
\subsection{Mono-unstable and mono-monostatic homogeneous polyhedra}
The natural dual property to being mono-stable is being \emph{mono-unstable}, i.e., having one single unstable static balance position. The Conway-Guy polyhedron has, beyond the single stable position on one face, 4 unstable equilibria at 4 vertices. The first example of a mono-unstable polyhedron was demonstrated in \cite{balancing}, having $F=18$
faces and $V=18$ vertices, and in the same paper it was proven that a homogeneous tetrahedron cannot be mono-unstable. Thus, for the minimal numbers $F^U, V^U$ of faces and vertices that a homogeneous, mono-unstable polyhedron may have, the following bounds apply:
$ 5 \leq F^U \leq 18$, $5 \leq V^U \leq 18$.
If a rigid body is either mono-stable or mono-unstable then we call it monostatic.
If it has both properties, then we call it mono-monostatic.
The construction of the first convex, homogeneous, mono-monostatic body, called G\"{o}mb\"{o}c \cite{VarkonyiDomokos}, in 2006 raised interest in the subject; however, a polyhedral version of the G\"{o}mb\"{o}c is not known. This implies that for the minimal numbers
$F^{\star}, V^{\star}$ for the faces and vertices of a mono-monostatic polyhedron the only known bounds are $F^{\star}, V^{\star} \geq 5.$
\subsection{0-skeletons and the main result}
Here we highlight a new aspect of this problem: instead of looking at uniform mass distribution, we consider polyhedra with unit masses at the vertices, also called polyhedral 0-skeletons. The latter problem may appear, at first sight, almost `unsportingly' easy. However, the minimal vertex number $V^{\star}_0$ and face number $F^{\star}_0$ to produce a mono-monostatic polyhedral 0-skeleton are not known. Even more curiously, the minimal number of vertices for a mono-monostatic, \emph{polygonal} 0-skeleton (in 2 dimensions) is not known either.
The first related results were reported in \cite{Bozoki}, where the bound $V^{U}_0 \geq 8$ was proven for the minimal number of vertices of a mono-unstable polyhedral 0-skeleton. Via the theorem of Steinitz \cite{Steinitz}, this implies the lower bound
$F^{U}_0 \geq 6$, and it also implies the bounds $F^{\star}_0 \geq 6, V^{\star}_0 \geq 8$
for mono-monostatic polyhedral 0-skeletons.
In this paper we explain the background and show some constructions which may inspire further research. In particular, by providing an explicit construction of a mono-monostatic polyhedral 0-skeleton with 21 faces and 21 vertices, we prove
\begin{theorem}\label{th1}
$F^{\star}_0, V^{\star}_0 \leq 21.$
\end{theorem}
Our example, illustrated in Figure \ref{fig:intro}(c) and defined on line 3 of Table~\ref{tb:monomono}, appears to be the first discrete construction of a mono-monostatic object and it may help to inspire thinking about
the bounds $F^{\star}, V^{\star}$ for the homogeneous case.
The paper is structured as follows: in Section~\ref{spirals} we explain the geometric idea behind Conway's classical construction and how
this idea may be generalized in various directions. In Section~\ref{skeletons}, by relying on an idea by Dawson \cite{Dawson}, we describe the construction for a mono-monostatic 0-skeleton in 2 dimensions, having $V_0=11$ vertices and then we proceed to prove Theorem \ref{th1} by providing the construction of the mono-monostatic 0-skeleton. In Section~\ref{other} we show the connection to other problems, including the mechanical complexity of polyhedra, and also point out why the particular geometry of our constructions may not be applied to the construction of a homogeneous mono-monostatic polyhedron. In Section~\ref{sum} we draw conclusions.
\section{The geometry of Conway spirals}\label{spirals}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{intro.eps}
\caption{Construction of symmetric, mono-monostatic discs and polyhedra; a) Geometry of the Conway spiral $P_0 , \ldots, P_n$. $P_0$ is fixed at $z=1$ and each radius $OP_i$ is perpendicular to the corresponding edge $P_{i-1}P_i$. The geometry of the spiral is uniquely described in terms of $n$ angular variables $\alpha_1, \ldots, \alpha_n$; b) 2D mirror-symmetric mono-monostatic polygon with 11 vertices for $n=5$ and $k=2$, see Table~\ref{tb:monomono}, line 6 for numerical data; c) 3D mono-monostatic polyhedron with 5-fold rotational symmetry for $n=4$ and $k=5$, see Table~\ref{tb:monomono}, line 3 for numerical data.}
\label{fig:intro}
\end{center}
\end{figure}
\subsection{The classical Conway double spiral and the Conway-Guy monostable polyhedron}
The essence of the Conway-Guy polyhedron is a remarkable planar construction to which we will briefly refer as the \emph{Conway spiral}, illustrated in Figure~\ref{fig:intro}(a).
In terms of symbols shown in the figure, it can be defined as an open planar polygon $M$ composed of the sequence of points $P_0 , \ldots, P_n$ with $\angle O P_i P_{i-1} = \pi/2$, $i = 1\ldots n$. Without loss of generality, $O$ is considered here as the origin of the coordinate system, all points $P_i$ lie in the plane $xz$ and the coordinates of $P_0$ are fixed at (0,0,1).
If we consider double Conway spirals generated by reflection symmetry, for the $x$-coordinate
of center of mass $C$ of any double Conway spiral we have $x_C=0$ and due to the special design, the double Conway spiral is monostatic if and only if $z_C<0$.
The original Conway-Guy construction is equivalent to Figure~\ref{fig:intro}(a) if all central angles are equal, i.e., we have
\begin{equation}\label{eq:conway}
\alpha _1 = \alpha _2 =\dots =\alpha _{n+1},
\end{equation}
implying that all triangles $P_iP_{i+1}O$ are similar. This case, to which we refer as the \emph{classical Conway spiral} admits a discrete family of shapes, parametrized by the integer $n$, and a corresponding family of double Conway spirals.
None of these polygons (interpreted as homogeneous discs rolling along their circumference
on a horizontal plane) is monostatic, i.e., we have $z_C>0$ for all values of $n$,
since convex monostatic, homogeneous discs do not exist \cite{DomokosRuina}. Still, the Conway spiral may be regarded as a \emph{best shot} at a monostatic polyhedral disc with reflection symmetry. The same intuition suggests that a Conway spiral may need minimal added `bottom weight' to become monostatic.
Conway and Guy added this bottom weight by extending the shape into 3D as an oblique prism and they computed the minimal value of $n$ necessary to make
this homogeneous oblique prism (with the cross-section of a classical Conway spiral) monostable as $n=8$, resulting in a homogeneous, convex polyhedron with 34 vertices and 19 faces.
\subsection{The modified Conway double spiral and Dawson's monostable simplices in higher dimensions}
The idea of the Conway spiral may be generalized to bear more fruits. In \cite{Dawson} Dawson, seeking monostatic simplices in higher dimensions, considered the generalized version with
\begin{equation}\label{eq:dawson}
\alpha_i = c^{i-1}\alpha _1, \quad i=1,2, \dots, n \qquad\mbox{and}\qquad \alpha_{n+1}=\alpha_n
\end{equation}
to which we refer as a \emph{modified Conway spiral}. To describe Dawson's construction we again consider a double spiral, with the mirror images of the vertex $P_i$ defined as $P_{-i}$. In this model the vectors $\mathbf{x}_i=OP_i$, $i=-n, -n+1 \dots n$ are interpreted as the \emph{face vectors} of a simplex ($\mathbf{x}_i$ being orthogonal to the face $f_i$ and having magnitude proportional to the area of $f_i$). To qualify as face vectors, any set of vectors must be balanced \cite{Minkowski}, i.e., we must have
\begin{equation}\label{eq:sum}
\sum_{i=-n}^{n}\mathbf{x}_i=0.
\end{equation}
Dawson proved that the condition for the simplex tipping from face $f_i$ to $f_j$ can be written as
\begin{equation}\label{dawson1}
\lvert\mathbf{x}_i\rvert < \lvert\mathbf{x}_j\rvert\cos\theta_{ij},
\end{equation}
where $\theta_{ij}$ is the angle between $\mathbf{x}_i$ and $\mathbf{x}_j$. By using this
\emph{tipping condition} he found that for $n=5, c=1.5$ the modified Conway spiral (\ref{eq:dawson}) yields a set of balanced vectors, the small
perturbation of which defines a 10-dimensional, homogeneous mono-stable simplex.
\section{Mono-monostatic 0-skeletons}\label{skeletons}
\subsection{The generalized double Conway spiral and planar 0-skeletons}\label{ss:11gon}
If, instead of considering double Conway spirals as homogeneous disks we associate unit masses with the vertices then we obtain objects which may be called \emph{polygonal 0-skeletons}. Since there are relatively many vertices with negative $z$ coordinate and relatively few ones with positive $z$ coordinate, this interpretation appears to be a convenient manner to
add `bottom weight' to the geometric double Conway spiral. In this interpretation as planar 0-skeletons, one may ask whether mono-monostable double Conway spirals exist and, if so, what is the minimal number of vertices necessary to have this property. Since the static balance equations for such a skeleton coincide with (\ref{eq:sum}) and the tipping condition (\ref{dawson1}) is equivalent to prohibiting an unstable equilibrium at vertex $v_i$ \cite{Bozoki}, it is easy to see that Dawson's geometric construction, interpreted as a 0-skeleton,
has $z_C<0$ and defines a mono-monostatic polygon with $V=11$ vertices.
One can ask whether this construction is optimal in two ways: whether there exists a smaller value of $n$ which defines a mono-monostatic modified double Conway spiral (interpreted as a 0-skeleton)
and whether, keeping $n=5$, one may pick other values of $\alpha_i$ which
yield a center of mass with a more negative $z$-coordinate. The first question was answered in the negative in \cite{Dawson3} by proving that monostable simplices in $d<9$ dimensions do not exist. This implies that for $n<5$ no mono-monostatic Conway spiral (interpreted as a 0-skeleton) exists, but nothing is known about the existence of mono-monostatic 10-gonal disks as 0-skeletons, since they cannot be represented by a symmetric double Conway spiral. The second question may be addressed if we admit \emph{generalized} Conway spirals with arbitrary $\alpha_i$ and optimize this construction to seek the minimum of $z_C$.
In any case, to verify the monostatic property of a given double Conway spiral, $z_C$ needs to be computed. In terms of the coordinates $z_i$, we have from Figure~\ref{fig:intro}(a):
\begin{equation}
\label{eq:z_C}
z_C = \dfrac{1+k \displaystyle\sum_{i=1}^{n} z_i}{1+kn},
\end{equation}
where $k$ stands for the multiplicity of Conway spirals; now $k=2$.
Furthermore, any $z_i$ can be expressed in terms of angles $\angle P_0 O P_i = \sum_{j=1}^i \alpha_j$ and distances $$r_i = \overline{OP_i} = \overline{OP_0}\cdot\prod_{j=1}^i \cos\alpha_j$$ as follows:
\begin{equation}
\label{eq:z_i}
z_i = \prod_{j=1}^i \cos\alpha_j\cdot\cos\left(\sum_{j=1}^i \alpha_j\right).
\end{equation}
By merging (\ref{eq:z_C}) and (\ref{eq:z_i}) we get
\begin{equation}
\label{eq:z_prodsum}
z_C(\boldsymbol{\alpha}) = \dfrac{1+k \displaystyle\sum_{i=1}^{n} \prod_{j=1}^i \cos\alpha_j\cdot\cos\left(\sum_{j=1}^i \alpha_j\right)}{1+kn},
\end{equation}
or briefly,
\begin{equation}
\label{eq:z_C_simp}
z_C(\boldsymbol{\alpha}) = \dfrac{1+k S_n(\boldsymbol{\alpha})}{1+kn}.
\end{equation}
We performed an optimization over $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_n)$ and found the shape in Figure~\ref{fig:intro}(b) (see Table~\ref{tb:monomono}, line 6, for the computed values of $\boldsymbol{\alpha}$). Note that this single result is an alternative proof of the existence of monostable $10$-dimensional simplices, first given by Dawson \cite{Dawson}.
We remark that a similar optimization process of the Conway spiral is discussed in \cite{Minich} for the homogeneous case.
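As a sanity check on (\ref{eq:z_prodsum}), the following sketch (ours, not the authors' code) evaluates $z_C$ for the angles on line 6 of Table~\ref{tb:monomono}; the table lists the angles as $(\alpha_{n+1},\ldots,\alpha_1)$, and only $\alpha_1,\ldots,\alpha_n$ enter the formula.

```python
import math

# Angles from Table 1, line 6 (n = 5, k = 2), listed there as
# (alpha_{n+1}, ..., alpha_1); reverse to obtain alpha_1, alpha_2, ...
alphas_table = [13.201, 13.201, 19.890, 29.110, 42.172, 62.427]  # degrees
alpha = [math.radians(a) for a in reversed(alphas_table)]

def z_C(alpha, n, k):
    """Centroid height of a Conway k-spiral 0-skeleton, eq. (6)."""
    S, prod, cum = 0.0, 1.0, 0.0
    for j in range(n):
        prod *= math.cos(alpha[j])     # prod_{j<=i} cos(alpha_j)
        cum += alpha[j]                # sum_{j<=i} alpha_j
        S += prod * math.cos(cum)      # z_i, eq. (5)
    return (1 + k * S) / (1 + k * n)

print(z_C(alpha, n=5, k=2))  # approx. -0.0180, matching the tabulated -0.017984
```

The negative sign of $z_C$ is exactly the mono-monostatic condition for the double spiral.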
\subsection{Proof of Theorem \ref{th1}: Conway $k$-spirals and mono-monostatic 0-skeletons in 3 dimensions}\label{ss:21hedron}
\begin{proof}
Generalized Conway spirals may be used as the building blocks of mono-monostatic 0-skeletons in 3 dimensions. The key idea is to consider instead of a double Conway spiral \emph{multiple} Conway spirals in a $D_k$-symmetrical arrangement around the $z$-axis, rotated at angles $\beta=2\pi/k$. We call such a construction a Conway $k$-spiral. Planar double spirals correspond to $k=2$, while for higher values of $k$ one may seek to find mono-monostatic 0-skeletons.
If for $k=2$ the Conway spiral defines a mono-monostatic planar 0-skeleton then we expect that for higher values of $k$ we will obtain mono-monostatic polyhedral 0-skeletons.
The procedure of finding mono-monostatic Conway $k$-spirals (interpreted as 0-skeletons) is as follows:
Let the planar polygonal line $M(P_0, \ldots, P_n)$ lie in a symmetry plane bisecting a sequence of faces, while another polygonal line $N(Q_0, \ldots, Q_n)$ lies on a sequence of edges, as before.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{revol.eps}
\caption{Construction of polyhedra with rotational symmetry: side and top views. Polygonal lines $M(P_0, \ldots,P_n)$ (solid line) and $N(Q_0, \ldots,Q_n)$ (dashed line) lie in symmetry planes through faces and edges, respectively. Any optimal construction requires $Q_0Q_1$ instead of $P_0P_1$ to be perpendicular to radius $OQ_1$.}
\label{fig:revol}
\end{center}
\end{figure}
Let $e_i$ be an edge $Q_iQ_{i+1}$, $i=0,\ldots,n-1$, and let face $f_i$ be adjacent to $e_i$. Call face $f_i$ (edge $e_i$) `outwards' if its upper edge (endpoint) is farther from the axis of symmetry than the bottom one, i.e., for a face $f_i$, $\sum_{j=i+2}^{n+1} \alpha_j \leq \pi/2$. Clearly, $e_i$ is outwards if and only if $f_i$ is.
By construction, $e_0$ and $f_0$ are never outwards, but we assume from now on that all $e_i$, $f_i$ with $i>0$ are outwards edges and faces. For them it is clear that $\angle OQ_{i+1}Q_i > \angle OP_{i+1}P_i$, and if the latter equals $\pi/2$, there will be no equilibrium points inside $f_i$. For non-outwards edges, however, the opposite inequality holds, and therefore an optimal construction for the entire polyhedron requires $\angle OQ_{i+1}Q_i = \pi/2$, causing the top vertex $Q_0$ to be moved up by a positive distance $h$, as shown in Figure~\ref{fig:revol}.
It is easy to read from the right triangle $OP_0P_1$ that $z_1 = \cos^2\alpha_1$ and $x_1 = \sin\alpha_1 \cos\alpha_1$. Let the distance between the axis $z$ and $Q_1$ (also between $z$ and $Q'_1$ in the figure) be denoted by $x'_1$. Since $x_1 = x'_1 \cos(\pi/k)$ (see the top view) and $OQ'_1Q_0$ is also a right triangle, for its height of length $x'_1$ the following equality holds:
\begin{equation*}
\label{eq:height}
\cos^2\alpha_1 (\sin^2\alpha_1 + h) = \left(\dfrac{\sin\alpha_1 \cos\alpha_1}{\cos(\pi/k)}\right)^2,
\end{equation*}
which yields
\begin{equation*}
\label{eq:h}
h = \sin^2\alpha_1 \tan^2\dfrac{\pi}{k}.
\end{equation*}
Since $h$ affects the vertical position of the top vertex and thus of the centroid, (\ref{eq:z_C_simp}) should be modified as
\begin{equation}
\label{eq:z_C_ast}
z^\ast_C(\boldsymbol{\alpha}) = \dfrac{1+k S^\ast_n(\boldsymbol{\alpha})}{1+kn},
\end{equation}
where
\begin{equation}
\label{eq:S_n_ast}
S^\ast_n(\boldsymbol{\alpha}) = S_n(\boldsymbol{\alpha}) + \dfrac{1}{k} \sin^2\alpha_1 \tan^2\dfrac{\pi}{k}.
\end{equation}
We performed calculations in search of the minimum of $z^\ast_C$, which lead to different constructions (denoted as $P_{n,k}$); one of these constructions, with $n=4$, $k=5$, is illustrated in Figure~\ref{fig:intro}(c).
Table~\ref{tb:monomono} summarizes the possible mono-monostatic objects with the minimum required $k$ found by the above method ($v = kn+1$ stands for the number of vertices and/or faces):
\begin{table}[!ht]
\begin{center}
\begin{tabular}{|c|ccclp{7.7cm}|}
\hline
no. & $n$ & $k$ & $v$ & $z_C$ & $(\alpha_{n+1},\alpha_n, \ldots, \alpha_1)$
\\ \hline \hline
1 & 2 & 25 & 51 & -0.00051277 & $(49.799, 49.799, 80.402)^\circ$
\\ \hline
2 & 3 & 8 & 25 & -0.0061413 & $(30.273, 30.273, 46.543, 72.912)^\circ$
\\ \hline
3 & 4 & 5 & 21 & -0.015354 & $(19.716, 19.716, 29.875, 44.519, 66.173)^\circ$
\\ \hline
4 & 5 & 4 & 21 & -0.029972 & $(13.494, 13.494, 20.336, 29.781, 43.215, 59.680)^\circ$
\\ \hline
5 & 7 & 3 & 22 & -0.042695 & $( 7.1815, 7.1815, 10.7864, 15.6392, 22.1409,$ $30.9129, 43.0793, 43.0788)^\circ$
\\ \hline
6 & 5 & 2$^\ast$ & 11 & -0.017984 & $( 13.201, 13.201, 19.890, 29.110, 42.172, 62.427)^\circ$
\\ \hline
\end{tabular}
\vskip 5mm
\caption{List of some mono-monostatic 0-skeletons $P_{n,k}$ with $D_k$-symmetry and $v = nk+1$ vertices; $z_C$ can be verified via (\ref{eq:z_prodsum}) (for $k>2$, via (\ref{eq:z_C_ast})--(\ref{eq:S_n_ast})). $k=2$, marked by `$\ast$', is the two-dimensional case already mentioned at the end of Subsection~\ref{ss:11gon}. The minimum number of vertices for monostatic 3D rotational polyhedra is 21.}
\label{tb:monomono}
\end{center}
\end{table}
\end{proof}
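The tabulated values can be spot-checked numerically. A minimal sketch (ours, not the authors' code) evaluating $z^\ast_C$ via (\ref{eq:z_C_ast})--(\ref{eq:S_n_ast}) for line 3 of Table~\ref{tb:monomono}:

```python
import math

def z_C_star(alphas_deg, k):
    """Centroid height z*_C of a Conway k-spiral 0-skeleton, eqs. (9)-(10).
    alphas_deg lists (alpha_{n+1}, ..., alpha_1), as in Table 1."""
    alpha = [math.radians(a) for a in reversed(alphas_deg)]  # alpha_1, alpha_2, ...
    n = len(alpha) - 1
    S, prod, cum = 0.0, 1.0, 0.0
    for j in range(n):                                   # S_n, cf. eqs. (5)-(6)
        prod *= math.cos(alpha[j])
        cum += alpha[j]
        S += prod * math.cos(cum)
    # correction for the raised top vertex, eq. (10)
    S += math.sin(alpha[0])**2 * math.tan(math.pi / k)**2 / k
    return (1 + k * S) / (1 + k * n)

# Line 3 of Table 1: n = 4, k = 5, tabulated z_C = -0.015354
print(z_C_star([19.716, 19.716, 29.875, 44.519, 66.173], k=5))  # approx. -0.0154
```

The negative result confirms that the 21-vertex construction is indeed monostatic in the sense used above.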
We believe that this construction is close to a (local) optimum, i.e., we think that this may be the mono-monostatic 0-skeleton, defined by multiple generalized Conway spirals, with the least number of vertices. This, however, does not exclude the existence of mono-monostatic 0-skeletons with a smaller number of vertices and less symmetry. Our construction provides 21 as an \emph{upper bound} for the minimal number of vertices and faces of a mono-monostatic 0-skeleton. The lower bound for the number of vertices was given in \cite{Bozoki} as 8, from which a lower bound of 6 for the number of faces follows \cite{Steinitz1}.
\section{Connection to related other problems}\label{other}
\subsection{Mechanical complexity of polyhedra}
It is apparent that constructing monostatic polyhedra is not easy. In \cite{balancing} this general observation was formalized by introducing the \emph{mechanical complexity} $C(P)$ of a polyhedron $P$ as
\begin{equation}\label{complexity}
C(P) = 2(V(P) + F(P) - S(P) - U(P)),
\end{equation}
where $V(P),F(P),S(P),U(P)$ stand for the number of vertices, faces, stable and unstable equilibrium points of $P$, respectively.
The \emph{equilibrium class} of polyhedra with given numbers $S,U$ of stable and unstable equilibria is denoted by $(S,U)^E$ and the complexity of such class was defined as
\begin{equation}\label{complexity1}
C(S,U)=min\{C(P): P \in (S,U)^E\}.
\end{equation}
The only material distribution considered in \cite{balancing} was uniform density.
Other types of homogeneous mass distributions, commonly referred to as \emph{h-skeletons} are also possible: 0-skeletons have mass uniformly distributed on their vertices, 1-skeletons have mass uniformly distributed on the edges, 2-skeletons have mass uniformly distributed on the faces. To distinguish between these
cases we will apply an upper index to the symbol $C$ of complexity, indicating the type of skeleton (the absence of index indicates classical homogeneity).
In the case of uniform density (classical homogeneity), the complexity of all non-monostatic equilibrium classes $(S,U)^E$ with $S,U>1$ has been computed in \cite{balancing}. On the other hand, the complexity has not yet been determined for any of the monostatic classes $(1,U)^E, (S,1)^E$. Lower and upper bounds exist for $C(S,1),C(1,U)$ for $S,U>1$. The most difficult appears to be the mono-monostatic class $(1,1)^E$, for the complexity $C(1,1)$ of which the prize USD $1{,}000{,}000/C(1,1)$ has been offered in \cite{balancing}. Not only is $C(1,1)$ unknown; at this point no upper bound is known either.
\subsection{Complexity of some monostable and mono-unstable polyhedral 0-skeletons}\label{ss:complexity}
Admittedly, computing upper bounds for 0-skeletons is easier.
This is already apparent in the planar case, where monostatic discs with homogeneous mass distribution in the interior do not exist \cite{DomokosRuina} whereas a monostatic 0-skeleton could be constructed with $V=11$ vertices \cite{Dawson}. In 3D, our construction of a 0-skeleton with $F=21$ faces and $V=21$ vertices (see the top left polyhedron in Figure~\ref{fig:compl} and Table~\ref{tb:monomono}, line 3) offers such an upper bound as
\begin{equation}\label{bound}
C^0(1,1)\leq 2(21+21-1-1) = 80.
\end{equation}
This is the first known such construction and its existence may help to solve the more difficult cases, in particular,
the case with uniform density. In Figure~\ref{fig:compl} we provide upper bounds for the complexity of 0-skeletons in some other monostatic equilibrium classes as well.
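The bound (\ref{bound}) is just (\ref{complexity}) evaluated at the parameters of our construction; a one-line sketch (ours, for illustration):

```python
def complexity(V, F, S, U):
    # Mechanical complexity of a polyhedron, eq. (11)
    return 2 * (V + F - S - U)

# Upper bound (13) from the 21-vertex mono-monostatic 0-skeleton:
print(complexity(21, 21, 1, 1))  # 80
```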
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{compl.eps}
\caption{Complexity of some monostable and mono-unstable polyhedra. Drawn representatives of equilibrium classes $(S,U)$ prove an upper bound for the complexity of the respective class; see the bracketed numbers as lower and upper bounds, respectively, in the top left corner of their cells. Since mono-unstable polyhedra with fewer than 8 vertices (and therefore, by Steinitz's theorem, with fewer than 6 faces) cannot exist, 24 is a lower bound for the complexity of the classes $(S,1)$. The complexity of the four non-monostatic classes is exactly known by the existence of simplicial representatives of each class \cite{balancing}. Coordinates of the drawn polyhedra, except for the one in class (1,1), are given in Table~\ref{tb:coord}. }
\label{fig:compl}
\end{center}
\end{figure}
\subsection{Existence and non-existence of certain types of mono-monostatic 0-skeletons and homogeneous bodies}\label{ss:existence}
The following paragraphs illustrate the relative difficulty of constructing mono-monostatic $h$-skeletons from a different point of view. Firstly, it is known from \cite{DomokosRuina} that no homogeneous mono-monostatic two-dimensional objects rolling along their perimeter exist; however, $P_{5,2}$ drawn in Figure~\ref{fig:intro}b is a mono-monostatic 0-skeleton in 2D. A similar property of non-existence
of homogeneous mono-monostatic objects will be proven below for Conway $k$-spirals, interpreted as homogeneous solids.
\begin{theorem}
\label{thm:nonexst}
Let $P$ be a convex solid with center of mass at $C$. Let $a$ denote an axis intersecting $P$ and let $h(a)$ be a half-plane the boundary of which is $a$. Let $N$ denote the intersection of the boundary of $P$ and $h(a)$, and let us describe $N$ by the polar distance $r(\varphi)$, measured from $C$ as origin.
If there exists an axis $a$ such that $r(\varphi)$ is strictly monotonic for all possible $h(a)$ then
$P$ is not mono-monostatic.
\end{theorem}
\begin{proof}
Let an axis $z$ be directed along $a$ and let a point $Q$ on the surface of $P$ be parametrized as $Q(\theta, \varphi,r)$ where $0 \leq\theta\leq \pi$ is the meridian angle between $CQ$ and $z$, $0\leq\varphi< 2\pi$ is the azimuth angle (with respect to a fixed starting position), $r=\lvert Q-C \rvert$.
Since $P$ is convex, $r = r(\theta, \varphi)$ for all surface points is uniquely defined. In this polar system, $C$ can only be the centre of mass of $P$ if
\begin{equation}
\label{eq:Cheight}
\int\limits_{0}^{2\pi} \int\limits_{0}^{\pi} \dfrac{1}{4} r(\theta, \varphi)^4 \sin\theta \cos\theta \,d\theta \,d\varphi = 0,
\end{equation}
since $(1/3)r^3 \sin\theta \,d\theta \,d\varphi$ is the volume of an elementary pyramid with its apex at $C$ and $(3/4)r\cos \theta$ is the $z$ coordinate of the centre of mass of such an elementary pyramid.
From the condition of the theorem, it follows that $r$ is strictly monotonic in $\theta$: assume now that $\theta_1 < \theta_2 \iff r_1 > r_2$ for all $Q_1(\theta_1, \varphi, r_1), Q_2(\theta_2, \varphi, r_2)$ and rewrite (\ref{eq:Cheight}) as follows:
\begin{equation*}
\dfrac{1}{4} \int\limits_{0}^{2\pi} \int\limits_{0}^{\pi/2} \left(r(\theta, \varphi)^4 \sin\theta \cos\theta + r(\pi-\theta, \varphi)^4 \sin(\pi-\theta) \cos(\pi-\theta) \right) d\theta d\varphi = 0
\end{equation*}
\begin{equation}
\dfrac{1}{8} \int\limits_{0}^{2\pi} \int\limits_{0}^{\pi/2} \left(r(\theta, \varphi)^4 - r(\pi-\theta, \varphi)^4 \right) \sin 2\theta \,d\theta \,d\varphi = 0.
\end{equation}
Here both factors in the integrand are positive, so the definite integral cannot evaluate to zero, contradicting (\ref{eq:Cheight}).
\end{proof}
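The sign argument is easy to illustrate numerically. In the sketch below (our own illustration), we pick an arbitrary strictly decreasing profile $r(\theta)$, constant in $\varphi$, and approximate the $\theta$-integral of the centroid condition by a Riemann sum; the azimuthal integral only contributes a constant factor.

```python
import math

# Arbitrary strictly decreasing radial profile r(theta), constant in phi
# (chosen for illustration only).
r = lambda theta: 2.0 - theta / math.pi

# Riemann-sum approximation of the theta-integral (up to constant factors):
N = 100_000
dt = math.pi / N
I = sum(r((i + 0.5) * dt)**4
        * math.sin((i + 0.5) * dt) * math.cos((i + 0.5) * dt) * dt
        for i in range(N))
print(I)  # strictly positive, so the centroid condition cannot hold
```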
\begin{cor}
Conway $k$-spirals, interpreted as homogeneous solids, are never mono-monostatic.
\end{cor}
\begin{proof}
We prove the Corollary by showing that a Conway $k$-spiral satisfies the monotonicity condition of the theorem. Consider $a$ to be aligned with the axis $z$ again. Since we consider polyhedral solids, the `level sets' of $r$ are concentric circular arcs on all faces. By construction, the perpendicular projection of $C$ onto the base $k$-gon is incident to $a$, so $r$ increases monotonically within that $k$-gon along any $h$. For all other faces, assume that some half-plane $h$ intersected or were tangent to a level set; this would immediately imply that a non-horizontal edge of the same face (surely contained in some half-plane $h$) carries an equilibrium point, which contradicts the mono-monostatic property. As a consequence, any $h$ crosses all level sets without being tangent to any of them, which is a necessary and sufficient condition for $r$ to be strictly monotonic along any line $N$ starting and ending at the axis $a$.
\end{proof}
We note that Theorem~\ref{thm:nonexst} also implies that no homogeneous smooth solid of revolution can be mono-monostatic.
\section{Concluding comments}\label{sum}
In this paper, relying on the geometric idea of Conway spirals, we demonstrated the existence of mono-monostatic 0-skeletons in two and three dimensions. In the former case, drawing on an earlier result of Dawson \cite{Dawson}, we showed
that mono-monostatic planar 0-skeletons with $V=11$ vertices exist. It follows from another result of Dawson \cite{Dawson3} that for $V=9$ such constructions cannot exist; the $V=10$ case is open. In three dimensions we showed an explicit construction with $V=21$ vertices,
thus providing an upper bound for the minimal number of vertices. The lower bound is $V=8$ \cite{Bozoki}, and no other results are known.
We hope that these constructions will motivate further research to find the minimal $V$ for a mono-monostatic 0-skeleton, both in two and in three dimensions.
\begin{table}[ht]
\begin{center}
\begin{scriptsize}
\begin{tabular}{|r|r|r|}
\hline
\multicolumn{3}{|c|}{$(S,U) = (1,2)$}\\
$x$ & $y$ & $z$ \\
\hline
0 & 374 & 0 \\
154 & 80 & 0 \\
124 & -32 & 0 \\
81 & -78 & 0 \\
47 & -95 & 0 \\
24 & -100 & 0 \\
-24 & -100 & 0 \\
-47 & -95 & 0 \\
-81 & -78 & 0 \\
-124 & -32 & 0 \\
-154 & 80 & 0 \\
0 & -1200 & 5000 \\
\hline
\end{tabular}
\begin{tabular}{|r|r|r|}
\hline
\multicolumn{3}{|c|}{$(S,U) = (1,3)$}\\
$x$ & $y$ & $z$ \\
\hline
0 & 466 & 0 \\
166 & 70 & 0 \\
121 & -47 & 0 \\
71 & -87 & 0 \\
35 & -100 & 0 \\
-35 & -100 & 0 \\
-71 & -87 & 0 \\
-121 & -47 & 0 \\
-166 & 70 & 0 \\
0 & -100 & -900 \\
0 & -100 & 900 \\
\hline
\multicolumn{3}{c}{\phantom{0} }
\end{tabular}
\vskip 5mm
\begin{tabular}{|r|r|r|}
\hline
\multicolumn{3}{|c|}{$(S,U) = (2,1)$}\\
$x$ & $y$ & $z$ \\
\hline
0 & 374.328 & 0 \\
153.589 & 80.2023 & 20 \\
124.268 & -32.3675 & 14.9819 \\
81.1006 & -77.5258 & 8.45141 \\
46.9121 & -94.4981 & 3.41302 \\
23.4562 & -100 & 0 \\
-23.4562 & -100 & 0 \\
-46.9121 & -94.4981 & 3.41302 \\
-81.1006 & -77.5258 & 8.45141 \\
-124.268 & -32.3675 & 14.9819 \\
-153.589 & 80.2023 & 20 \\
\hline
\multicolumn{3}{c}{\phantom{0} }
\end{tabular}
\begin{tabular}{|r|r|r|}
\hline
\multicolumn{3}{|c|}{$(S,U) = (3,1)$}\\
$x$ & $y$ & $z$ \\
\hline
0 & 334.907 & 0 \\
145.019 & 83.7267 & 10 \\
145.019 & 0 & 9.6018 \\
94.9161 & -68.9606 & 5.40618 \\
53.5898 & -92.8203 & 2.10256 \\
26.7949 & -100 & 0 \\
-26.7949 & -100 & 0 \\
-53.5898 & -92.8203 & 2.10256 \\
-94.9161 & -68.9606 & 5.40618 \\
-145.019 & 0 & 9.6018 \\
-145.019 & 83.7267 & 10 \\
\hline
\multicolumn{3}{c}{\phantom{0} }
\end{tabular}
\end{scriptsize}
\caption{Coordinates of some polyhedra shown in Figure~\ref{fig:compl}. Monostable objects are given with integer coordinates; this would be difficult for mono-unstable ones due to their oblique polygonal faces.}
\label{tb:coord}
\end{center}
\end{table}
| {
"timestamp": "2021-10-14T02:23:03",
"yymm": "2103",
"arxiv_id": "2103.13727",
"language": "en",
"url": "https://arxiv.org/abs/2103.13727",
"abstract": "We show an explicit construction in 3 dimensions for a convex, mono-monostatic polyhedron (i.e., having exactly one stable and one unstable equilibrium) with 21 vertices and 21 faces. This polyhedron is a 0-skeleton, with equal masses located at each vertex. The above construction serves as an upper bound for the minimal number of faces and vertices of mono-monostatic 0-skeletons and complements the recently provided lower bound of 8 vertices. This is the first known construction of a mono-monostatic polyhedral solid. We also show that a similar construction for homogeneous distribution of mass cannot result in a mono-monostatic solid.",
"subjects": "Metric Geometry (math.MG); Combinatorics (math.CO)",
"title": "Conway's spiral and a discrete Gömböc with 21 point masses",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540692607816,
"lm_q2_score": 0.817574471748733,
"lm_q1q2_score": 0.8006948858308556
} |
https://arxiv.org/abs/1401.6619 | On cycles in intersection graph of rings | Let $R$ be a commutative ring with non-zero identity. We describe all $C_3$- and $C_4$-free intersection graph of non-trivial ideals of $R$ as well as $C_n$-free intersection graph when $R$ is a reduced ring. Also, we shall describe all complete, regular and $n$-claw-free intersection graphs. Finally, we shall prove that almost all Artin rings $R$ have Hamiltonian intersection graphs. We show that such graphs are indeed pancyclic. | \section{Introduction}
If $S=\{S_1,\ldots,S_n\}$ is a family of sets, then the intersection graph of $S$ is the graph having $S$ as its vertex set, with $S_i$ adjacent to $S_j$ if $i\neq j$ and $S_i\cap S_j\neq\emptyset$. A well-known theorem due to Marczewski \cite{tam-frm} states that every graph is an intersection graph.
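Marczewski's representation is constructive: assign to each vertex $v$ the set consisting of $v$ itself together with its incident edges; two such sets then intersect exactly when the vertices are adjacent. A small sketch of this construction (our own illustrative example, a path plus an isolated vertex):

```python
# Represent each vertex v by {v} union {edges incident to v}; the token
# ('v', v) keeps the sets nonempty and pairwise distinct.
V = [1, 2, 3, 4, 5]
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]

sets = {v: {('v', v)} | {('e', e) for e in E if v in e} for v in V}

# The intersection graph of these sets recovers the original edge set.
recovered = {frozenset({u, v}) for u in V for v in V
             if u != v and sets[u] & sets[v]}
assert recovered == set(E)
```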
An interesting case of intersection graphs arises when the members of $S$ have an algebraic structure. Bosak \cite{jb} was the first to study graphs arising from semigroups. Cs\'{a}k\'{e}any and Poll\'{a}k \cite{bc-gp} defined and studied the intersection graphs of nontrivial proper subgroups of groups. Zelinka \cite{bz} continued the work of Cs\'{a}k\'{e}any and Poll\'{a}k on intersection graphs of subgroups of finite abelian groups, and later Shen \cite{rs} studied such graphs and classified all finite groups whose intersection graphs of nontrivial subgroups are disconnected. Herzog, Longobardi and Maj \cite{mh-pl-mm} studied the intersection graphs of maximal subgroups of finite groups and, among other results, classified all finite groups with a disconnected graph. As for groups, the intersection graphs of ideals of rings and of subspaces of vector spaces have been discussed in \cite{ic-sg-tkm-mks,shj-njr-1,shj-njr-2}.
Let $R$ be a commutative ring with a non-zero identity. The intersection graph of $R$, denoted by $\Gamma(R)$, is a graph whose vertices are the nontrivial ideals of $R$ and two distinct vertices are joined by an edge if the corresponding ideals of $R$ have a non-zero intersection.
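As a concrete illustration (not part of the paper), for $R=\mathbb{Z}/n\mathbb{Z}$ the nontrivial ideals correspond to the divisors $d$ of $n$ with $1<d<n$, and $(d)\cap(e)=(\operatorname{lcm}(d,e))$, which is non-zero exactly when $\operatorname{lcm}(d,e)\neq n$. The following sketch builds $\Gamma(\mathbb{Z}/n\mathbb{Z})$ on this basis:

```python
# Illustration: Gamma(Z/nZ), with vertices labelled by divisors d of n (1 < d < n)
# and edges between d, e whenever lcm(d, e) != n, i.e. (d) and (e) meet non-trivially.
from math import lcm  # requires Python >= 3.9

def intersection_graph(n):
    """Vertex list and edge set of Gamma(Z/nZ)."""
    vertices = [d for d in range(2, n) if n % d == 0]
    edges = {(d, e) for i, d in enumerate(vertices)
             for e in vertices[i + 1:] if lcm(d, e) != n}
    return vertices, edges

# Gamma(Z/12Z): ideals (2), (3), (4), (6); (3) and (4) are disjoint, as are (4) and (6).
V, E = intersection_graph(12)
print(V)          # [2, 3, 4, 6]
print(sorted(E))  # [(2, 3), (2, 4), (2, 6), (3, 6)]
```

Note that $\Gamma(\mathbb{Z}/12\mathbb{Z})$ already contains the triangle $\{(2),(3),(6)\}$.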
In this paper, we study the cycle structure of intersection graphs. First we classify all Artin rings with a regular (hence complete) intersection graph. Next we shall investigate all rings $R$ whose intersection graphs $\Gamma(R)$ do not have an induced cycle of length $3$ or $4$. Also, we show that if $R$ is a reduced ring, then $\Gamma(R)$ is $C_n$-free $(n\geq5)$ if and only if $R$ has no ideal which is the direct sum of $n$ non-zero ideals. The same result is also established for $n$-claws instead of $n$-cycles. In the last section, we shall prove that, except for a few cases, all Artin rings have Hamiltonian intersection graphs. Using simple modifications of the given Hamiltonian cycle, we show that $\Gamma(R)$ is pancyclic whenever it is Hamiltonian. Recall that an $n$-claw is the star graph $K_{1,n}$ (so a claw is $K_{1,3}$). Also, a graph on $m$ vertices is called pancyclic if it contains cycles of every length between $3$ and $m$.
The following theorem will be used without further reference.
\begin{recalltheorem}[{\cite[Theorem VI.2]{brm}}]
Let $R$ be an Artin commutative ring with a non-zero identity. Then
\[R=R_1\oplus\cdots\oplus R_n,\]
where $R_1,\ldots,R_n$ are local rings.
\end{recalltheorem}
If $R$ is a ring, then the ideals $\a_1,\ldots,\a_n$ are called \textit{independent} if \[\a_i\cap(\a_1+\cdots+\a_{i-1}+\a_{i+1}+\cdots+\a_n)=0\]
for $i=1,\ldots,n$. In other words, the sum $\a_1+\cdots+\a_n=\a_1\oplus\cdots\oplus\a_n$ is direct. All rings in this paper are commutative rings with a non-zero identity.
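To make the definition concrete, here is a small check (an illustration, not from the paper) in $R=\mathbb{Z}/n\mathbb{Z}$, where $(d)+(e)=(\gcd(d,e))$ and $(d)\cap(e)=(\operatorname{lcm}(d,e))$:

```python
# Illustration: test independence of ideals (d_1), ..., (d_k) of Z/nZ, assuming k >= 2.
from math import gcd, lcm
from functools import reduce

def independent(n, ds):
    """Each (d_i) must meet the sum of the others trivially; the sum of the
    others is generated by the gcd of their generators, and the intersection
    of two ideals is generated by the lcm of their generators."""
    for i, d in enumerate(ds):
        others = ds[:i] + ds[i + 1:]
        s = reduce(gcd, others)   # generator of the sum of the other ideals
        if lcm(d, s) % n != 0:    # intersection (lcm(d, s)) is non-zero in Z/nZ
            return False
    return True

print(independent(30, [6, 10, 15]))  # True: (6)+(10)+(15) = (6) ⊕ (10) ⊕ (15)
print(independent(30, [2, 6]))       # False: (6) ⊆ (2)
```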
\section{$C_n$-free intersection graphs}
The simplest property of the intersection graph $\Gamma(R)$ of the ideals of a ring that one may investigate is whether $\Gamma(R)$ is a complete graph. We show that the class of Artin rings with a complete intersection graph coincides with the class of Artin rings with a regular intersection graph, and then characterize all such rings.
\begin{theorem}
Let $R$ be an Artin ring, which is not a direct sum of two fields. If $\Gamma(R)$ is regular, then it is complete.
\end{theorem}
\begin{proof}
First we show that $R$ has no direct factor that is a field. If $R=S\oplus F$, where $F$ is a field and $S$ is not a field, then $N_{\Gamma(R)}(F)=\{\a\oplus F:0\neq\a\vartriangleleft S\}$ and $N_{\Gamma(R)}(S)=\{\a,\a\oplus F:0\neq\a\vartriangleleft S\}$. Hence $\deg_{\Gamma(R)}S>\deg_{\Gamma(R)}F$, which is a contradiction. Therefore, each maximal ideal of $R$ is adjacent to all other vertices of $\Gamma(R)$, from which it follows that $\Gamma(R)$ is a complete graph.
\end{proof}
\begin{theorem}
If $R$ is an Artin ring, then the graph $\Gamma(R)$ is complete if and only if there exists a sequence of rings $R_1,\ldots,R_n$ such that $R=R_1$, each $R_i=(R_i,R_{i+1})$ is a local ring with maximal ideal $R_{i+1}$ for $i=1,\ldots,n-1$, and $R_n$ is a field.
\end{theorem}
\begin{proof}
If $R$ is not a local ring, then $R=S\oplus T$ for some non-zero rings $S$ and $T$. But then $S\cap T=0$, which is a contradiction. Thus $R=(R,\mathfrak{m})$ is a local ring. Continuing in this way with $\mathfrak{m}$ in place of $R$, the result follows. The converse is obvious.
\end{proof}
In the following two theorems, we shall consider conditions under which the intersection graph of a ring is a star graph, which also results in a characterization of rings with a bipartite intersection graph.
\begin{theorem}\label{pendant}
Let $R$ be a ring which is neither a direct sum of two fields nor a direct sum of a field with a local ring $(S,\mathfrak{m})$ such that $\mathfrak{m}$ is a field. If $\Gamma(R)$ has a pendant vertex, then $\Gamma(R)$ is a star graph.
\end{theorem}
\begin{proof}
Let $\a\in V(\Gamma(R))$ be a pendant vertex. If $\a$ is a maximal ideal, then it is easy to see that $R=(R,\a)$ is a local ring and $\a=(x)$ is a principal ideal. Let $\b$ be the ideal of $R$ adjacent to $\a$. Then $\b=(x^2)$ and $(x^3)=0$, hence $\Gamma(R)$ is an edge.
If $\a$ is not a maximal ideal, then there exists a unique maximal ideal $\mathfrak{m}$ of $R$ containing $\a$. Clearly, $\a=(x)$ is principal and $(x^2)=0$. If $R$ is not a local ring, then there exists a maximal ideal $\mathfrak{n}$ such that $\a\cap\mathfrak{n}=0$. Thus $R=\a\oplus\mathfrak{n}$ and $\a$ is a field. Then $\mathfrak{m}=\a+\b$ for some ideal $\b$ of $\mathfrak{n}$. But then $(\mathfrak{n},\b)$ is a local ring such that $\b$ is a field, which is a contradiction. Therefore $(R,\mathfrak{m})$ is a local ring. Clearly, $\mathfrak{m}=(x,y)$ for some $y\in R$. If $\mathfrak{m}$ is principal, then we may assume that $\mathfrak{m}=(y)$. Thus $(y^2)=(x)$ and $(y^3)=0$, which implies that $\Gamma(R)$ is an edge. If $\mathfrak{m}$ is not principal, then $(x)\cap(y)=0$ and consequently $xy=0$. Since $(x)\subseteq (x)+(y^2)\subseteq (x)+(y)=\mathfrak{m}$, it follows that $(y^2)=0$. Hence $\mathfrak{m}^2=0$ so that $\mathfrak{m}$ is a vector space over the field $F=R/\mathfrak{m}$, where the multiplication is defined by $(r+\mathfrak{m})\cdot m=rm$ for all $r\in R$ and $m\in\mathfrak{m}$. Clearly, there is a one-to-one correspondence between ideals of $R$ contained in $\mathfrak{m}$ and subspaces of $(\mathfrak{m},F)$. Hence $\dim_F\mathfrak{m}=2$ so that $\Gamma(R)$ is a star graph.
\end{proof}
\begin{theorem}\label{triangle}
\label{main}
If $\Gamma(R)$ is triangle-free, then $\Gamma(R)$ is a star graph or a union of two isolated vertices.
\end{theorem}
\begin{proof}
If $R$ is not a local ring, then there exist two distinct maximal ideals $\mathfrak{m}_1$ and $\mathfrak{m}_2$ in $R$. Since $\Gamma(R)$ is triangle-free, we should have $\mathfrak{m}_1\cap\mathfrak{m}_2=0$. Hence $R=\mathfrak{m}_1\oplus\mathfrak{m}_2$. Let $F_1=R/\mathfrak{m}_1$ and $F_2=R/\mathfrak{m}_2$. Then $R\cong F_1\oplus F_2$ and $\Gamma(R)$ is the union of two isolated vertices.
Now, suppose that $(R,\mathfrak{m})$ is a local ring. We have two cases for $\mathfrak{m}$.
Case 1: $\mathfrak{m}$ is not a principal ideal. First we show that $\mathfrak{m}^2=0$. If $0\neq x\in\mathfrak{m}$ and $y\in \mathfrak{m}\setminus xR$, then $xR\cap yR=0$, for otherwise $xR$, $yR$ and $\mathfrak{m}$ would induce a triangle in $\Gamma(R)$. Thus $xy=0$ so that $(\mathfrak{m}\setminus xR)x=0$. On the other hand, if $r\in R$, then $xr=y+(xr- y)$ so that $xrx =0$. Thus $xRx=0$ and consequently $\mathfrak{m} x=0$. Hence $\mathfrak{m}^2=0$.
Let $F=R/\mathfrak{m}$. The same as in the proof of Theorem \ref{pendant}, $(\mathfrak{m},F)$ is a vector space. If $\dim_F\mathfrak{m}\geq3$ and $\lbrace x, y, z\rbrace$ is an independent set in $(\mathfrak{m},F)$, then the set of ideals $\{(x),(x,y),(x,y,z)\}$ induces a triangle in $\Gamma(R)$, which is a contradiction. Thus $\dim_F\mathfrak{m}\leq2$.
If $\dim_F\mathfrak{m}=2$, then every two distinct non-trivial ideals of $R$ different from $\mathfrak{m}$ are disjoint. Thus $\Gamma(R)$ is a star graph with $\mathfrak{m}$ at the center. If $\dim_F\mathfrak{m}=1$, then $\Gamma(R)$ is a single vertex and we are done.
Case 2: $\mathfrak{m}=xR$ is a principal ideal. Let $\a$ be a non-zero ideal of $R$. Then $\a\subseteq\mathfrak{m}$. If $y\in\a$, then $y=rx$ for some $r \in R$. If $r$ is a unit, then $x=yr^{-1} \in \a$ and hence $\a= \mathfrak{m}$. If $\a\neq\mathfrak{m}$, then $r$ is not a unit and so $r=sx$ for some $s\in R$. Thus $y=sx^2$ and consequently $\a\subseteq\mathfrak{m}^2\subseteq\mathfrak{m}$. Since $\Gamma(R)$ is triangle-free, it follows that $\a= \mathfrak{m}^2$. Therefore $\Gamma(R)$ is either a single vertex when $\mathfrak{m}=\mathfrak{m}^2$ or it is an edge when $\mathfrak{m}\neq\mathfrak{m}^2$.
\end{proof}
The following corollary is a direct consequence of the preceding two theorems.
\begin{corollary}
Let $R$ be a ring, which is neither a direct sum of two fields nor a direct sum of a field with a local ring $(S,\mathfrak{m})$ such that $\mathfrak{m}$ is a field. Then the following conditions are equivalent:
\begin{itemize}
\item[(1)]$\Gamma(R)$ is triangle-free,
\item[(2)]$\Gamma(R)$ has a pendant,
\item[(3)]$\Gamma(R)$ is bipartite,
\item[(4)]$\Gamma(R)$ is a star graph.
\end{itemize}
\end{corollary}
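Theorem~\ref{triangle} can be sanity-checked by brute force on the rings $\mathbb{Z}/n\mathbb{Z}$. The sketch below (an illustration, not part of the paper) verifies that for every $n\leq 200$, a triangle-free $\Gamma(\mathbb{Z}/n\mathbb{Z})$ is a star graph or a union of two isolated vertices:

```python
# Brute-force check of the triangle-free theorem on Z/nZ for n <= 200.
from math import lcm
from itertools import combinations

def gamma(n):
    V = [d for d in range(2, n) if n % d == 0]
    E = {frozenset((d, e)) for d, e in combinations(V, 2) if lcm(d, e) != n}
    return V, E

def triangle_free(V, E):
    return not any(frozenset((a, b)) in E and frozenset((b, c)) in E
                   and frozenset((a, c)) in E for a, b, c in combinations(V, 3))

def is_star(V, E):
    if len(V) <= 1:
        return True
    # some center c is adjacent to all other vertices, and there are no other edges
    return any(E == {frozenset((c, v)) for v in V if v != c} for c in V)

ok = True
for n in range(2, 201):
    V, E = gamma(n)
    if triangle_free(V, E) and not (len(V) == 2 and not E):  # exclude the 2-isolated case
        ok = ok and is_star(V, E)
print(ok)  # True
```

The excluded case $\mathbb{Z}/pq\mathbb{Z}$ (two distinct primes) is exactly a direct sum of two fields, whose graph is two isolated vertices.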
In what follows, we shall concentrate on the cycle structure of intersection graphs and characterize, in almost all cases, the intersection graphs under investigation that do not contain an induced cycle of length greater than $3$.
\begin{theorem}
The graph $\Gamma(R)$ is $C_{4}$-free if and only if $R$ has no set of four non-zero independent ideals.
\end{theorem}
\begin{proof}
First suppose that $R$ has an ideal which is a direct sum of four non-zero ideals, namely $\a_1,\a_2,\a_3$ and $\a_4$. Then $\a_1\oplus\a_2$, $\a_2\oplus\a_3$, $\a_3\oplus\a_4$ and $\a_4\oplus\a_1$ induce a cycle of length $4$ in $\Gamma(R)$.
Conversely, suppose that $R$ has an induced $4$-cycle with vertices $\a_1,\a_2,\a_3$ and $\a_4$. Then $\a_1\cap\a_3=\a_2\cap\a_4=0$. Since $\a_2\cap\a_3+\a_3\cap\a_4\subseteq\a_3$, we have
\begin{align*}
(\a_1\cap\a_2)\cap(\a_2\cap\a_3+ \a_3\cap\a_4+\a_4\cap\a_1)&\subseteq(\a_1\cap\a_2)\cap(\a_3+(\a_4\cap\a_1))\\
&=(\a_1\cap\a_2)\cap \bigl(\a_3\oplus(\a_4\cap\a_1)\bigr).
\end{align*}
If $a+b\in (\a_1\cap\a_2)\cap\bigl(\a_3\oplus(\a_4\cap\a_1)\bigr)$, where $a\in\a_3$ and $b\in\a_4\cap\a_1$, then $a\in\a_1$, which implies that $a=0$. Then $b\in\a_2$ and similarly $b=0$. Hence
\[(\a_1\cap\a_2)\cap(\a_2\cap\a_3+\a_3\cap\a_4+\a_4\cap\a_1)=0.\]
Similar arguments show that $(\a_1\cap\a_2),(\a_2\cap\a_3),(\a_3\cap\a_4)$ and $(\a_4\cap\a_1)$ are non-zero independent ideals and the proof is complete.
\end{proof}
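As an illustration of the first half of the proof (not part of the paper), take $R=\mathbb{Z}/210\mathbb{Z}\cong\mathbb{F}_2\oplus\mathbb{F}_3\oplus\mathbb{F}_5\oplus\mathbb{F}_7$, which has four non-zero independent ideals, and build the induced $4$-cycle $\a_1\oplus\a_2,\ldots,\a_4\oplus\a_1$ explicitly:

```python
# Illustration in Z/210Z: the four coordinate ideals are (105), (70), (42), (30),
# and the sum of two ideals (d), (e) is generated by gcd(d, e).
from math import gcd, lcm

n = 210
a = [105, 70, 42, 30]
cycle = [gcd(a[i], a[(i + 1) % 4]) for i in range(4)]
print(cycle)  # [35, 14, 6, 15]

def adjacent(d, e):
    return lcm(d, e) != n  # non-zero intersection in Z/nZ

# consecutive vertices adjacent, opposite vertices disjoint: an induced C_4
assert all(adjacent(cycle[i], cycle[(i + 1) % 4]) for i in range(4))
assert not adjacent(cycle[0], cycle[2]) and not adjacent(cycle[1], cycle[3])
```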
Recall that a ring is reduced if it has no non-zero nilpotent element.
\begin{theorem}
Let $R$ be a reduced ring. Then $\Gamma(R)$ is $C_n$-free ($n\geq5$) if and only if $R$ has no set of $n$ non-zero independent ideals.
\end{theorem}
\begin{proof}
First suppose that $\Gamma(R)$ is $C_n$-free. If $R$ has $n$ non-zero independent ideals $\a_1,\ldots,\a_n$, then $\a_1\oplus\a_2,\a_2\oplus\a_3,\ldots,\a_n\oplus\a_1$ induce a cycle of length $n$ in $\Gamma(R)$, which is a contradiction.
Now, suppose that $R$ has no set of $n$ non-zero independent ideals and the ideals $\a_1,\ldots,\a_n$ induce a cycle of length $n$. Let $\b_n=\a_n\cap\a_1$ and $\b_i =\a_i\cap\a_{i+1}$ for all $i=1,\ldots,n-1$. Then for all distinct $1\leq i,j\leq n$, we have $\b_i\b_j= 0$. Let $\b_i^*=\b_1+\cdots+\b_{i-1}+\b_{i+1}+\cdots+\b_n$. Then $\b_i\b_i^*=0$. Thus $(\b_i\cap\b_i^*)^2\subseteq\b_i\b_i^*=0$, for all $i=1,\ldots,n$. Since $R$ is reduced, it follows that $\b_i\cap\b_i^*=0$, from which it follows that $\{\b_1,\ldots,\b_n\}$ is a set of non-zero independent ideals of $R$, which is a contradiction.
\end{proof}
In the sequel, we give other approaches to induced cycles in intersection graphs. The following lemma is straightforward.
\begin{lemma}
Suppose $\a_1,\ldots,\a_n$ induce a cycle of length $n$ in $\Gamma(R)$. Then there exist $t$ independent ideals $\a_{i_1},\ldots,\a_{i_t}$ such that $2\leq t\leq\lfloor\frac{n}{2}\rfloor$ and $\a_{i_1}\oplus\cdots\oplus\a_{i_t}$ is adjacent to $\a_i$ for all $i=1,\ldots,n$.
\end{lemma}
\begin{theorem}
Suppose $\a_1,\ldots,\a_n$ induce a cycle of length $n$ in $\Gamma(R)$ and the number $t$ introduced in the previous lemma takes its maximum value $\lfloor\frac{n}{2}\rfloor$. Then $R$ has a set of $n$ non-zero independent ideals if $n$ is even, and a set of $n-1$ non-zero independent ideals if $n$ is odd.
\end{theorem}
\begin{proof}
Without loss of generality we may assume that $\a_1,\a_3,\ldots,\a_{2\lfloor\frac{n}2\rfloor-1}$ are independent. A simple verification shows that
\[\{\a_1\cap\a_2,\a_2\cap\a_3,\ldots,\a_{n-1}\cap\a_n,\a_n\cap\a_1\}\]
when $n$ is even,
\[\{\a_1\cap\a_2,\a_2\cap\a_3,\ldots,\a_{2\lfloor\frac{n}{2}\rfloor-1}\cap\a_{n-1},\a_n\cap\a_1\}\]
when $n$ is odd, are sets of non-zero independent ideals of $R$, as required.
\end{proof}
\begin{theorem}
Suppose $\a_1,\ldots,\a_n$ $(n\geq3)$ are independent ideals of $R$. Let $\b_i=\a_{i_1}\oplus\cdots\oplus\a_{i_{n_i}}$, for $i=1,\ldots,n$. Then $\b_1,\ldots,\b_n$ induce a cycle of length $n$ if and only if there exists a permutation $\pi\in S_n$ such that $\b_i=\a_{\pi(i)}\oplus\a_{\pi(i+1)}$ for all $i$, where indices are taken modulo $n$.
\end{theorem}
\begin{proof}
If $n=3$, then the result is obvious. If there exists $\pi \in S_n$ such that $\b_i=\a_{\pi(i)}\oplus\a_{\pi(i+1)}$ for all $i=1,\ldots,n$, then there is nothing to prove. Hence we may assume that $\b_1,\ldots,\b_n$ are the vertices of an induced cycle of length $n\geq4$. Then $n_i\geq 2$ for all $i=1,\ldots,n$, for otherwise $\b_j=\a_{j_1}$ for some $j$. But then $\a_{j_1}$ is adjacent to $\b_{j-1}$ and $\b_{j+1}$, which implies that $\b_{j-1}$ and $\b_{j+1}$ are adjacent, a contradiction. Hence $2n\leq \sum_{i=1}^{n}n_i$. On the other hand, the number of $\b_j$ containing $\a_i$ is at most two for all $i=1,\ldots,n$, which implies that $\sum_{i=1}^{n}n_i\leq2n$. Therefore $\sum_{i=1}^{n}n_i=2n$ and hence $n_i=2$ for all $i=1,\ldots,n$. Now the result is straightforward.
\end{proof}
Utilizing the same method used before, we may prove the following result for $n$-claws instead of $n$-cycles.
\begin{theorem}
Let $R$ be a reduced ring. Then the ideals $\a_1,\ldots,\a_n$ of $R$ are independent and $\a_1\oplus\cdots\oplus\a_n$ is a proper ideal of $R$ if and only if there exists an induced $n$-claw in $\Gamma(R)$.
\end{theorem}
\begin{proof}
If $\a_1,\ldots,\a_n$ are independent ideals of $R$ such that $\a_1\oplus\cdots\oplus\a_n$ is a proper ideal of $R$, then clearly $\{\a_1,\ldots,\a_n,\a_1\oplus\cdots\oplus\a_n\}$ induces an $n$-claw in $\Gamma(R)$.
Now, suppose that the ideals $\a_1,\ldots,\a_n$ are the pendant vertices and $\a$ is the center of an induced $n$-claw. Let
\[\a_i^*=\a_1+\cdots+\a_{i-1}+\a_{i+1}+\cdots+\a_n,\]
for all $i=1,\ldots,n$. Then
\[(\a_i\cap\a_i^*)^2\subseteq\a_i\a_i^*=\sum_{j \neq i}\a_i\a_j\subseteq\sum_{j\neq i}\a_i\cap\a_j=0,\]
for all $i=1,\ldots,n$, which implies that $\a_1,\ldots,\a_n$ are independent. If $R\neq\a_1\oplus\cdots\oplus\a_n$, then we are done. Now, suppose that $R=\a_1\oplus\cdots\oplus\a_n$. If $\a_i$ is not a field for some $1\leq i\leq n$, then by replacing $\a_i$ by one of its non-zero proper ideals, we may assume that $R\neq\a_1\oplus\cdots\oplus\a_n$, as required. Otherwise $\a_1,\ldots,\a_n$ are all fields. But then $\a_i\subseteq\a$, for all $i=1,\ldots,n$, which implies that $\a=R$, a contradiction.
\end{proof}
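A concrete instance (an illustration, not from the paper): in the reduced ring $\mathbb{Z}/210\mathbb{Z}$, the ideals $(105),(70),(42)$ are independent with $(105)\oplus(70)\oplus(42)=(7)$, a proper ideal, so $\{(7),(105),(70),(42)\}$ induces a $3$-claw $K_{1,3}$:

```python
# Illustration in Z/210Z: center (7) = (105) ⊕ (70) ⊕ (42), pendants pairwise disjoint.
from math import lcm

n = 210
pendants = [105, 70, 42]   # three independent minimal ideals
center = 7                 # gcd(105, 70, 42) = 7 generates their (direct) sum

def adjacent(d, e):
    return lcm(d, e) != n  # non-zero intersection in Z/nZ

assert all(adjacent(center, p) for p in pendants)
assert not any(adjacent(p, q)
               for i, p in enumerate(pendants) for q in pendants[i + 1:])
# hence {(7), (105), (70), (42)} induces K_{1,3} in Gamma(Z/210Z)
```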
\section{Hamilton cycles}
The aim of this section is to show that, except for a few cases, all intersection graphs are Hamiltonian. Indeed, we shall prove the stronger result that such graphs are pancyclic.
A simple verification shows that if $R=S\oplus F$, where $F$ is a field and $\Gamma(S)$ has a Hamiltonian path, then $\Gamma(R)$ has a Hamiltonian cycle. This fact enables us to prove the following result. In what follows, the set of all ideals of a ring $R$ is denoted by $\mathbf{I}(R)$.
\begin{theorem}\label{hamiltonian}
Let $R$ be an Artin ring. Then $\Gamma(R)$ is Hamiltonian if and only if $R$ is not isomorphic to the following rings:
\begin{itemize}
\item[(1)]$F$ or $E\oplus F$,
\item[(2)]$S$ or $E\oplus S$ such that $(S,F)$ is a local ring,
\item[(3)]$S$ such that $(S,T)$ is a local ring and $(T,F)$ is a local ring,
\end{itemize}
where $E$ and $F$ are fields.
\end{theorem}
\begin{proof}
If $R$ is isomorphic to one of the rings in parts (1), (2) or (3), then clearly $\Gamma(R)$ is not Hamiltonian. Now, suppose that $R$ is a ring such that $\Gamma(R)$ is not Hamiltonian. We distinguish several cases:
Case 1: $R=R_1\oplus R_2$ such that $|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|\geq4$. Let
\[\mathbf{I}(R_1)=\{0=\a_0,\a_1,\ldots,\a_m=R_1\}\]
and
\[\mathbf{I}(R_2)=\{0=\b_0,\b_1,\ldots,\b_n=R_2\}.\]
Clearly, an arbitrary ideal of $R$ can be expressed as $\a_i\oplus\b_j$ for some $0\leq i\leq m$ and $0\leq j\leq n$. Consider an $(m\times n)$-grid and put $\a_i\oplus\b_j$ on the $(i,j)$-th coordinate. By Figures 1, 2 and 3, the subgraph induced by the ideals $\a_i\oplus\b_j$ with $\a_i,\b_j\neq0$ is Hamiltonian, with a Hamiltonian cycle containing at least one edge in every row as well as one edge in every column. If $\{\a_i\oplus\b_j,\a_{i+1}\oplus\b_j\}$ is an edge such that $i,j>0$, then by removing this edge and adding the two edges $\{\a_i\oplus\b_j,\b_j\}$ and $\{\b_j,\a_{i+1}\oplus\b_j\}$ we obtain a new cycle including the vertex $\b_j$. Similarly, we may enlarge the resulting cycle so that the new cycle contains an arbitrary $\a_i\neq0$. Continuing in this way, we arrive at a Hamiltonian cycle for $\Gamma(R)$, a contradiction.
Case 2: $R=R_1\oplus R_2$ such that $|\mathbf{I}(R_1)|\geq3$ and $|\mathbf{I}(R_2)|=3$. As in Case 1, we may present the ideals of $R$ on grids as shown in Figures 4 and 5, which gives rise to a Hamiltonian cycle for $\Gamma(R)$. Hence $\Gamma(R)$ is Hamiltonian, which is a contradiction.
Case 3: $R=S\oplus F$, where $F$ is a field. If $S=S_1\oplus S_2$, where either $S_1$ or $S_2$, say $S_1$, is not a field, then $R=S_1\oplus(S_2\oplus F)$ and by Case 1, $\Gamma(R)$ is Hamiltonian. Now, suppose that $S_1$ and $S_2$ are both fields. Then
\[S_1\sim S_1\oplus S_2\sim S_2\sim S_2\oplus F\sim F\sim S_1\oplus F\sim S_1\]
is a Hamiltonian cycle for $\Gamma(R)$. Hence $\Gamma(R)$ is Hamiltonian, a contradiction.
Case 4: If $R$ is a field or it is a direct sum of two fields, then we are done. If not, by Cases 1, 2 and 3, there exists a sequence $\{(S_i,R_i)\}_{i=1}^n$ of local rings and a sequence $\{F_i\}_{i=1}^n$ of fields such that $R=R_0=S_1$ or $S_1\oplus F_1$ and $R_i=S_{i+1}$ or $S_{i+1}\oplus F_{i+1}$ for all $1\leq i<n$. Moreover, $R_n$ is a field. If $n=1$, then either $\Gamma(R)$ is a single vertex or it is a path of length three. If $n=2$, then $R=S_1, R_1=S_2$ and $\Gamma(R)$ is an edge. If $n\geq3$, then since $\Gamma(R_{n-2})$ is a path, $\Gamma(R_{n-3})$ and hence $\Gamma(R)$ are Hamiltonian, which is a contradiction. The proof is complete.
\end{proof}
\begin{center}
\begin{tikzpicture}[scale=0.6,rotate=90]
\draw [dotted] (0,0) grid (4,4);
\draw [dotted] (5,0) grid (9,4);
\draw [dotted] (0,5) grid (4,9);
\draw [dotted] (5,5) grid (9,9);
\draw [color=white] (0,8)--(0,9)--(1,9);
\draw [color=white] (8,0)--(9,0)--(9,1);
\draw [thick] (3,4)--(3,0)--(1,0)--(1,1)--(2,1)--(2,2)--(1,2)--(1,3)--(2,3)--(2,4)--(1,4);
\draw [thick] (2,5)--(1,5)--(1,6)--(2,6)--(2,7)--(1,7)--(1,8)--(2,8);
\draw [thick] (3,5)--(3,8)--(4,8)--(4,5);
\draw [thick] (4,4)--(4,0);
\draw [thick] (5,0)--(5,4);
\draw [thick] (6,4)--(6,0)--(7,0)--(7,4);
\draw [thick] (8,2)--(8,4);
\draw [thick] (8,0)--(8,1)--(9,1)--(9,4);
\draw [thick] (5,5)--(5,8)--(6,8)--(6,5);
\draw [thick] (7,5)--(7,8)--(8,8)--(8,5);
\draw [thick] (9,5)--(9,8);
\draw [thick] (8,0) to [out=77, in=-77] (8,2);
\draw [thick] (2,8) to [out=13, in=167] (9,8);
\draw [loosely dotted,thick] (4.2,0.5)--(4.8,0.5);
\draw [loosely dotted,thick] (4.2,1.5)--(4.8,1.5);
\draw [loosely dotted,thick] (4.2,2.5)--(4.8,2.5);
\draw [loosely dotted,thick] (4.2,3.5)--(4.8,3.5);
\draw [loosely dotted,thick] (4.2,5.5)--(4.8,5.5);
\draw [loosely dotted,thick] (4.2,6.5)--(4.8,6.5);
\draw [loosely dotted,thick] (4.2,7.5)--(4.8,7.5);
\draw [loosely dotted,thick] (4.2,8.5)--(4.8,8.5);
\draw [loosely dotted,thick] (0.5,4.2)--(0.5,4.8);
\draw [loosely dotted,thick] (1.5,4.2)--(1.5,4.8);
\draw [loosely dotted,thick] (2.5,4.2)--(2.5,4.8);
\draw [loosely dotted,thick] (3.5,4.2)--(3.5,4.8);
\draw [loosely dotted,thick] (5.5,4.2)--(5.5,4.8);
\draw [loosely dotted,thick] (6.5,4.2)--(6.5,4.8);
\draw [loosely dotted,thick] (7.5,4.2)--(7.5,4.8);
\draw [loosely dotted,thick] (8.5,4.2)--(8.5,4.8);
\draw [loosely dotted,thick] (4.2,4.2)--(4.82,4.82);
\draw [loosely dotted,thick] (4.82,4.2)--(4.2,4.82);
\end{tikzpicture}\\
Figure 1. $(|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|)=(\mbox{odd}>3,\mbox{even}>3)$
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6,rotate=90]
\draw [dotted] (0,0) grid (4,4);
\draw [dotted] (5,0) grid (9,4);
\draw [dotted] (0,5) grid (4,9);
\draw [dotted] (5,5) grid (9,9);
\draw [color=white] (0,8)--(0,9)--(1,9);
\draw [color=white] (8,0)--(9,0)--(9,1);
\draw [thick] (4,0)--(4,4);
\draw [thick](3,4)--(3,0)--(1,0)--(1,1)--(2,1)--(2,2)--(1,2)--(1,3)--(2,3)--(2,4)--(1,4);
\draw [thick] (2,8)--(1,8)--(1,7)--(2,7)--(2,6)--(1,6)--(1,5)--(2,5);
\draw [thick] (3,5)--(3,8)--(4,8)--(4,5);
\draw [thick] (5,4)--(5,0)--(6,0)--(6,4);
\draw [thick] (7,4)--(7,0)--(8,0);
\draw [thick] (8,4)--(8,1)--(9,1)--(9,4);
\draw [thick] (5,8)--(5,5);
\draw [thick] (6,5)--(6,8)--(7,8)--(7,5);
\draw [thick] (8,8)--(8,5);
\draw [thick] (9,8)--(9,5);
\draw [thick] (2,8) to [out=13, in=167] (9,8);
\draw [thick] (8,0) to [out=77, in=-77] (8,8);
\draw [loosely dotted,thick] (4.2,0.5)--(4.8,0.5);
\draw [loosely dotted,thick] (4.2,1.5)--(4.8,1.5);
\draw [loosely dotted,thick] (4.2,2.5)--(4.8,2.5);
\draw [loosely dotted,thick] (4.2,3.5)--(4.8,3.5);
\draw [loosely dotted,thick] (4.2,5.5)--(4.8,5.5);
\draw [loosely dotted,thick] (4.2,6.5)--(4.8,6.5);
\draw [loosely dotted,thick] (4.2,7.5)--(4.8,7.5);
\draw [loosely dotted,thick] (4.2,8.5)--(4.8,8.5);
\draw [loosely dotted,thick] (0.5,4.2)--(0.5,4.8);
\draw [loosely dotted,thick] (1.5,4.2)--(1.5,4.8);
\draw [loosely dotted,thick] (2.5,4.2)--(2.5,4.8);
\draw [loosely dotted,thick] (3.5,4.2)--(3.5,4.8);
\draw [loosely dotted,thick] (5.5,4.2)--(5.5,4.8);
\draw [loosely dotted,thick] (6.5,4.2)--(6.5,4.8);
\draw [loosely dotted,thick] (7.5,4.2)--(7.5,4.8);
\draw [loosely dotted,thick] (8.5,4.2)--(8.5,4.8);
\draw [loosely dotted,thick] (4.2,4.2)--(4.82,4.82);
\draw [loosely dotted,thick] (4.82,4.2)--(4.2,4.82);
\end{tikzpicture}\\
Figure 2. $(|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|)=(\mbox{odd}>3,\mbox{odd}>3)$
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6,rotate=90]
\draw [dotted] (0,0) grid (4,4);
\draw [dotted] (5,0) grid (9,4);
\draw [dotted] (0,5) grid (4,9);
\draw [dotted] (5,5) grid (9,9);
\draw [color=white] (0,8)--(0,9)--(1,9);
\draw [color=white] (8,0)--(9,0)--(9,1);
\draw [thick] (1,0)--(2,0)--(2,1)--(1,1)--(1,2)--(2,2)--(2,3)--(1,3)--(1,4)--(2,4);
\draw [thick] (2,5)--(1,5)--(1,6)--(2,6)--(2,7)--(1,7)--(1,8)--(2,8);
\draw [thick] (3,5)--(3,8)--(4,8)--(4,5);
\draw [thick] (3,4)--(3,0);
\draw [thick] (4,4)--(4,0);
\draw [thick] (5,0)--(5,4);
\draw [thick] (6,4)--(6,0)--(7,0)--(7,4);
\draw [thick] (8,2)--(8,4);
\draw [thick] (8,0)--(8,1)--(9,1)--(9,4);
\draw [thick] (5,5)--(5,8)--(6,8)--(6,5);
\draw [thick] (7,5)--(7,8)--(8,8)--(8,5);
\draw [thick] (9,5)--(9,8);
\draw [thick] (8,0) to [out=77, in=-77] (8,2);
\draw [thick] (2,8) to [out=13, in=167] (9,8);
\draw [thick] (1,0) to [out=13, in=167] (3,0);
\draw [loosely dotted,thick] (4.2,0.5)--(4.8,0.5);
\draw [loosely dotted,thick] (4.2,1.5)--(4.8,1.5);
\draw [loosely dotted,thick] (4.2,2.5)--(4.8,2.5);
\draw [loosely dotted,thick] (4.2,3.5)--(4.8,3.5);
\draw [loosely dotted,thick] (4.2,5.5)--(4.8,5.5);
\draw [loosely dotted,thick] (4.2,6.5)--(4.8,6.5);
\draw [loosely dotted,thick] (4.2,7.5)--(4.8,7.5);
\draw [loosely dotted,thick] (4.2,8.5)--(4.8,8.5);
\draw [loosely dotted,thick] (0.5,4.2)--(0.5,4.8);
\draw [loosely dotted,thick] (1.5,4.2)--(1.5,4.8);
\draw [loosely dotted,thick] (2.5,4.2)--(2.5,4.8);
\draw [loosely dotted,thick] (3.5,4.2)--(3.5,4.8);
\draw [loosely dotted,thick] (5.5,4.2)--(5.5,4.8);
\draw [loosely dotted,thick] (6.5,4.2)--(6.5,4.8);
\draw [loosely dotted,thick] (7.5,4.2)--(7.5,4.8);
\draw [loosely dotted,thick] (8.5,4.2)--(8.5,4.8);
\draw [loosely dotted,thick] (4.2,4.2)--(4.82,4.82);
\draw [loosely dotted,thick] (4.82,4.2)--(4.2,4.82);
\end{tikzpicture}\\
Figure 3. $(|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|)=(\mbox{even}>3,\mbox{even}>3)$
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\draw [dotted] (0,0) grid (4,2);
\draw [dotted] (5,0) grid (9,2);
\draw [color=white] (0,1)--(0,0)--(1,0);
\draw [color=white] (8,2)--(9,2)--(9,1);
\draw [thick] (0,1)--(0,2)--(1,2)--(1,1)--(2,1)--(2,2)--(3,2)--(3,1)--(4,1)--(4,2);
\draw [thick] (5,2)--(5,1)--(6,1)--(6,2)--(7,2)--(7,1)--(8,1)--(8,2);
\draw [thick] (8,0)--(9,0)--(9,1);
\draw [thick] (8,2) to [out=-60, in=60] (8,0);
\draw [thick] (0,1) to [out=-13, in=-167] (9,1);
\draw [loosely dotted,thick] (4.2,.5)--(4.82,.5);
\draw [loosely dotted,thick] (4.2,1.5)--(4.82,1.5);
\end{tikzpicture}\\
Figure 4. $(|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|)=(3,\mbox{even}\geq3)$
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\draw [dotted] (0,0) grid (4,2);
\draw [dotted] (5,0) grid (9,2);
\draw [color=white] (0,1)--(0,0)--(1,0);
\draw [color=white] (8,2)--(9,2)--(9,1);
\draw [thick] (0,1)--(0,2)--(1,2)--(1,1)--(2,1)--(2,2)--(3,2)--(3,1)--(4,1)--(4,2);
\draw [thick] (5,1)--(5,2)--(6,2)--(6,1)--(7,1)--(7,2)--(8,2)--(8,0);
\draw [thick] (8,0)--(9,0)--(9,1);
\draw [thick] (0,1) to [out=-13, in=-167] (9,1);
\draw [thick , color=white] (0,0)--(1,0);
\draw [loosely dotted,thick] (4.2,.5)--(4.82,.5);
\draw [loosely dotted,thick] (4.2,1.5)--(4.82,1.5);
\end{tikzpicture}\\
Figure 5. $(|\mathbf{I}(R_1)|,|\mathbf{I}(R_2)|)=(3,\mbox{odd}\geq3)$
\end{center}
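To illustrate Theorem~\ref{hamiltonian} on a small example (not part of the paper), take $R=\mathbb{Z}/36\mathbb{Z}\cong\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/9\mathbb{Z}$, a sum of two local rings that is none of the exceptional rings of the theorem, and search for a Hamiltonian cycle in $\Gamma(R)$ by brute force:

```python
# Brute-force Hamiltonicity check for Gamma(Z/36Z); 7 vertices, so 6! cycles to try.
from math import lcm
from itertools import permutations

n = 36
V = [d for d in range(2, n) if n % d == 0]  # [2, 3, 4, 6, 9, 12, 18]

def adjacent(d, e):
    return lcm(d, e) != n

def hamiltonian_cycle(V):
    first, rest = V[0], V[1:]
    for perm in permutations(rest):
        cyc = (first,) + perm
        if all(adjacent(cyc[i], cyc[(i + 1) % len(cyc)]) for i in range(len(cyc))):
            return cyc
    return None

print(hamiltonian_cycle(V) is not None)  # True, as the theorem predicts
```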
\begin{theorem}\label{pancyclic}
Let $R$ be an Artin ring. Then $\Gamma(R)$ is Hamiltonian if and only if it is pancyclic.
\end{theorem}
\begin{proof}
If $\Gamma(R)$ is pancyclic, then clearly $\Gamma(R)$ is Hamiltonian. Now, we show that the converse is also true. Suppose on the contrary that there is an Artin ring $R$ such that $\Gamma(R)$ is a non-pancyclic Hamiltonian graph, and that $R$ is minimal with this property. If $R$ is neither a local ring nor a direct sum of a local ring with a field, then by applying the following transformations to the Hamiltonian cycles constructed in Cases 1 and 2 of Theorem \ref{hamiltonian}, replacing a horizontal or vertical path of length two by the path of length one joining its end vertices, we obtain cycles of every possible length $\geq4$.
\begin{center}
\begin{tikzpicture}[scale=0.6]
\draw [dotted] (0,0) grid (1,1);
\draw [dotted] (3,0) grid (4,1);
\draw [dotted] (7,0) grid (8,1);
\draw [dotted] (10,0) grid (11,1);
\draw [dotted] (0,2) grid (1,3);
\draw [dotted] (3,2) grid (4,3);
\draw [dotted] (7,2) grid (8,3);
\draw [dotted] (10,2) grid (11,3);
\draw [thick] (0,1)--(0,0)--(1,0)--(1,1);
\draw [->] (1.5,0.5)--(2.5,0.5);
\draw [thick] (3,1)--(4,1);
\draw [thick] (7,1)--(8,1)--(8,0)--(7,0);
\draw [->] (8.5,0.5)--(9.5,0.5);
\draw [thick] (10,0)--(10,1);
\draw [thick] (1,2)--(0,2)--(0,3)--(1,3);
\draw [->] (1.5,2.5)--(2.5,2.5);
\draw [thick] (4,2)--(4,3);
\draw [thick] (7,2)--(7,3)--(8,3)--(8,2);
\draw [->] (8.5,2.5)--(9.5,2.5);
\draw [thick] (10,2)--(11,2);
\end{tikzpicture}
\end{center}
On the other hand, by Theorem \ref{triangle}, the graphs under consideration contain triangles, which implies that $\Gamma(R)$ is pancyclic, a contradiction. Hence either $R=S$ or $R=S\oplus F$, where $(S,\mathfrak{m})$ is a local ring and $F$ is a field. If $\Gamma(\mathfrak{m})$ is Hamiltonian, then either $\Gamma(\mathfrak{m})$ is pancyclic, which implies that $\Gamma(R)$ is pancyclic too, contradicting the hypothesis, or $\Gamma(\mathfrak{m})$ is not pancyclic, which contradicts the minimality of $R$. Thus $\Gamma(\mathfrak{m})$ is not Hamiltonian and $\mathfrak{m}$ is isomorphic to one of the five rings given in Theorem \ref{hamiltonian}. Now, a simple verification shows that in each case either $\Gamma(\mathfrak{m})$ is not Hamiltonian or it is pancyclic, which is a contradiction. The proof is complete.
\end{proof}
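Theorem~\ref{pancyclic} can likewise be checked on the example $\Gamma(\mathbb{Z}/36\mathbb{Z})$ (an illustration, not part of the paper): it is Hamiltonian, so it should contain cycles of every length from $3$ up to its order $7$:

```python
# Brute-force pancyclicity check for Gamma(Z/36Z): cycles of every length 3..7.
from math import lcm
from itertools import combinations, permutations

n = 36
V = [d for d in range(2, n) if n % d == 0]  # 7 nontrivial ideals

def adjacent(d, e):
    return lcm(d, e) != n

def has_cycle(k):
    for sub in combinations(V, k):
        for perm in permutations(sub[1:]):
            cyc = (sub[0],) + perm
            if all(adjacent(cyc[i], cyc[(i + 1) % k]) for i in range(k)):
                return True
    return False

lengths = [k for k in range(3, len(V) + 1) if has_cycle(k)]
print(lengths)  # [3, 4, 5, 6, 7]
```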
| {
"timestamp": "2014-01-28T02:08:50",
"yymm": "1401",
"arxiv_id": "1401.6619",
"language": "en",
"url": "https://arxiv.org/abs/1401.6619",
"abstract": "Let $R$ be a commutative ring with non-zero identity. We describe all $C_3$- and $C_4$-free intersection graph of non-trivial ideals of $R$ as well as $C_n$-free intersection graph when $R$ is a reduced ring. Also, we shall describe all complete, regular and $n$-claw-free intersection graphs. Finally, we shall prove that almost all Artin rings $R$ have Hamiltonian intersection graphs. We show that such graphs are indeed pancyclic.",
"subjects": "Commutative Algebra (math.AC)",
"title": "On cycles in intersection graph of rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9908743604549526,
"lm_q2_score": 0.8080672089305841,
"lm_q1q2_score": 0.8006930788537111
} |
https://arxiv.org/abs/1911.12336 | Synchronization of Kuramoto Oscillators in Dense Networks | We study synchronization properties of systems of Kuramoto oscillators. The problem can also be understood as a question about the properties of an energy landscape created by a graph. More formally, let $G=(V,E)$ be a connected graph and $(a_{ij})_{i,j=1}^{n}$ denotes its adjacency matrix. Let the function $f:\mathbb{T}^n \rightarrow \mathbb{R}$ be given by $$ f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)}}.$$ This function has a global maximum when $\theta_i = \theta$ for all $1\leq i \leq n$. It is known that if every vertex is connected to at least $\mu(n-1)$ other vertices for $\mu$ sufficiently large, then every local maximum is global. Taylor proved this for $\mu \geq 0.9395$ and Ling, Xu \& Bandeira improved this to $\mu \geq 0.7929$. We give a slight improvement to $\mu \geq 0.7889$. Townsend, Stillman \& Strogatz suggested that the critical value might be $\mu_c = 0.75$. | \section{Introduction}
We study a simple problem that can be understood from a variety of
perspectives. Perhaps its simplest formulation is as follows: let
$G=(V,E)$ be a connected graph and $(a_{ij})_{i,j=1}^{n}$ denotes its
adjacency matrix. We assume the graph is simple, and thus
$a_{ii} = 0$ for $i = 1, \ldots, n$. We are then interested in the
behavior of the energy functional
$f:\mathbb{T}^n \cong [0,2\pi]^n \rightarrow \mathbb{R}$ given by
\begin{equation} \label{main}
f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)}}.
\end{equation}
Ling, Xu \& Bandeira \cite{ling} ask the following very interesting
\begin{quote}
\textbf{Question.} What is the relationship between the existence of
local maxima and the topology of the network?
\end{quote}
$f$ assumes its global maximum when $\theta_i \equiv \theta$ is constant and this is the unique global maximum up to rotation. Factoring out the rotation symmetry, there are at least $2^n$ critical points of the form $\theta_i \in \left\{0, \pi\right\}$. The main question is under which condition we can exclude the existence of local maxima that are not global maxima.\\
This is related to the Kuramoto model as follows: suppose we consider the system of ordinary differential equations given by
$$ \frac{d \theta_i}{dt} = -\sum_{j=1}^{n}{a_{ij} \sin{(\theta_i - \theta_j)}}.$$
We can interpret this system of ODEs as a gradient flow with respect to the energy
$$ E(\theta_1, \dots, \theta_n) = - \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)}}.$$
In this case, local maxima that are not global correspond to stable local minima of the gradient flow. In light of this model (particles on the circle connected by springs), it is natural to expect that no spurious local minimizers of this energy exist if there are enough springs. This motivated an existing line of research: Taylor \cite{taylor} proved that if each vertex is connected to at least $\mu(n-1)$ vertices for $\mu \geq 0.9395$, then \eqref{main} does not have local maxima that are not global. Ling, Xu \& Bandeira \cite{ling} improved this to $\mu \geq 0.7929$. They also showed the existence of a configuration, coming from the family of Wiley--Strogatz--Girvan networks \cite{wiley}, in which each vertex is connected to $0.68n$ other vertices and which indeed has local maxima that are not global. Townsend, Stillman \& Strogatz \cite{townsend} suggest that the critical value might be $\mu_c = 0.75$ and identify networks with $\mu = 0.75$ having interesting spectral properties. \\
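The gradient-flow interpretation can be illustrated numerically. The sketch below (with arbitrarily chosen step size, horizon and initial angles, none of which come from the paper) runs Euler steps of the system above on the complete graph $K_5$ and checks that the order parameter $r=\bigl|\frac{1}{n}\sum_i e^{i\theta_i}\bigr|$ approaches $1$, i.e., the oscillators synchronize:

```python
# Illustrative Euler integration of dθ_i/dt = -Σ_j a_ij sin(θ_i - θ_j) on K_5.
# Step size, number of steps and initial angles are arbitrary choices.
import cmath
import math

n, dt, steps = 5, 0.05, 2000
a = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # adjacency of K_5
theta = [0.1, 0.7, 1.3, 2.0, 2.6]  # generic start within a half-circle

for _ in range(steps):
    grad = [-sum(a[i][j] * math.sin(theta[i] - theta[j]) for j in range(n))
            for i in range(n)]
    theta = [t + dt * g for t, g in zip(theta, grad)]

# order parameter r = |mean of e^{iθ}|; r = 1 at the synchronized global maximum
r = abs(sum(cmath.exp(1j * t) for t in theta) / n)
print(round(r, 4))  # close to 1
```

Since the initial angles lie in an open half-circle, synchronization is expected here regardless of the (dense) network.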
The problem itself arises in a variety of settings. We refer to the surveys \cite{review1, review2, review3} for an overview regarding synchronization problems, to \cite{chaos} for insights into complexities of the Kuramoto model and to \cite{lopes, lopes2} for random Kuramoto models. There is also recent interest in the landscape of non-convex loss functionals for which this problem is a natural test case, we refer to \cite{loss1, loss2, loss3, loss4}.
\begin{theorem} If $G=(V,E)$ is a connected graph such that the degree of every vertex is at least $0.7889(n-1)$, then
$$ f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)}}$$
does not have local maxima that are not global.
\end{theorem}
The main idea behind the argument is a refinement of the approach of
Ling, Xu \& Bandeira \cite{ling} in a certain parameter range using a
new decomposition of the points. We consider the problem a natural
benchmark for testing our understanding of the geometry of energy
landscapes. We conclude by reiterating the original question from
\cite{ling}: which kind of assumption on the network (this paper, for
example, is only dealing with edge-density assumptions) implies
synchronization?
\section{Proof}
\subsection{Ingredients.} The purpose of this section is to sketch several of the tools that go into the argument which is a variation on the argument given by Ling, Xu \& Bandeira \cite{ling}. We first recall their argument. They start by introducing a useful Proposition (precursors of which can be found in Taylor \cite{taylor}).
\begin{proposition}[Ling, Xu, Bandeira \cite{ling}] Let $(\theta_1, \dots, \theta_n) \in \mathbb{T}^n$ be a strict local maximizer of \eqref{main}. If there exists an angle $\theta_r$ such that
$$ \forall~1 \leq i \leq n: \qquad |\sin{(\theta_i - \theta_r)}| < \frac{1}{\sqrt{2}},$$
then all the $\theta_i$ have the same value.
\end{proposition}
The idea behind this argument is as follows: if
$(\theta_1, \dots, \theta_n) \in \mathbb{T}^n$ is a local maximizer,
then the quadratic form (corresponding to the negative Hessian) is positive semi-definite.
In other words, a necessary condition for being a local
maximum (derived in \cite{ling}) is that for all vectors
$w \in \mathbb{R}^n$
$$ \sum_{i,j=1}^{n}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2} \geq 0.$$
We can derive a contradiction by defining a vector
$w \in \left\{-1, 1\right\}^n$ depending on which of the two `cones'
the variable $\theta_i$ is in.
Then the summation only ranges over pairs that are in opposite sides of the cone. The cosine is negative for those values and since the graph is connected, there
is at least one connection leading to a contradiction.\\
A second important ingredient is the Kuramoto parameter \cite{kuramoto}
$$ r = \| r\| e^{i \theta_r}:= \sum_{j=1}^{n}{ e^{i \theta_j}}.$$
The second part in the argument \cite{ling} is based on showing that
\begin{equation} \label{length}
\begin{aligned}
\|r\|^2 &\geq \frac{n^2}{2} - \sum_{i \neq j}{ (1-a_{ij}) \Abs{ \cos{(\theta_i - \theta_j)} - \cos^2{(\theta_i - \theta_j)} }} \\
&\geq \left(2 \mu - \frac{3}{2} \right)n^2 + 2(1-\mu)n,
\end{aligned}
\end{equation}
where the second inequality follows from
$$ \Abs{\cos{(\theta_i - \theta_j)} - \cos^2{(\theta_i - \theta_j)}} \leq 2$$
and
$$ \sum_{i \neq j}{(1-a_{ij})} = \sum_{i=1}^{n} \sum_{j \neq i}{(1-a_{ij})} \leq \sum_{i=1}^{n}{ (1-\mu)(n-1)} = (1-\mu) (n-1)n.$$
The argument in \cite{ling} proceeds by writing
$$ \| r\| e^{-i (\theta_i - \theta_r)} = r e^{- i\theta_i} = \sum_{j=1}^{n}{ e^{-i(\theta_i - \theta_j)}}$$
and taking imaginary parts to obtain
$$ \|r\| \sin{(\theta_i - \theta_r)} = \sum_{j=1}^{n}{ \sin{(\theta_i - \theta_j)}}.$$
However, the first order condition in a maximum implies
$$ \sum_{j=1}^{n}{a_{ij} \sin{(\theta_i - \theta_j)}} = 0$$
and thus we obtain
$$ \|r\| \sin{(\theta_i - \theta_r)} = \sum_{j=1}^{n}{(1-a_{ij}) \sin{(\theta_i - \theta_j)}}.$$
As a consequence, we have
\begin{equation} \label{bound}
| \sin{(\theta_i - \theta_r)} | \leq \frac{1}{\|r\|} \Biggl\lvert \sum_{j=1}^{n}{(1-a_{ij}) \sin{(\theta_i - \theta_j)}} \Biggr\rvert \leq \frac{(1-\mu)n}{\|r\|}.
\end{equation}
The Proposition together with \eqref{length} and \eqref{bound} then imply the result.
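The identity underlying \eqref{bound} can be sanity-checked numerically. The angles below are random and purely illustrative; the check verifies $\|r\| \sin(\theta_i - \theta_r) = \sum_j \sin(\theta_i - \theta_j)$ for every $i$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 50)

# Kuramoto order parameter r = ||r|| e^{i theta_r}
r = np.sum(np.exp(1j * theta))
theta_r = np.angle(r)

# identity: ||r|| sin(theta_i - theta_r) = sum_j sin(theta_i - theta_j)
lhs = np.abs(r) * np.sin(theta - theta_r)
rhs = np.sin(theta[:, None] - theta[None, :]).sum(axis=1)
```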
\subsection{The Proof}
Our proof is motivated by the following simple observation: inequality \eqref{length} is only sharp when
$$ \Abs{\cos{(\theta_i - \theta_j)} - \cos^2{(\theta_i - \theta_j)}} = 2$$
for all pairs of angles $\theta_i, \theta_j$ for which $a_{ij} = 0$. This only happens when $\theta_i - \theta_j = \pi$.
So the only extreme case of equality is when the points that are not connected are concentrated on two different sides of the torus.
Then, however, we can dramatically improve inequality \eqref{bound} since $\sin{(\pi)} = 0$.
The proof will decouple into two steps: either inequality \eqref{length}
is very far from sharp, in which case the original Ling-Xu-Bandeira argument will result in the desired improvement, or
the inequality is close to being sharp in which case we can try to extract more information from it.\\
Our proof makes use of several new parameters.
As will come as no
surprise to the reader, we obtain them by working with unspecified
coefficients in the beginning and then solving the arising
optimization problem to obtain the optimal selection of
parameters. For readers who prefer explicit values to have an idea of
scales, we will later set
$$ \varepsilon = 0.5 \qquad \mbox{and} \qquad \delta = 0.88.$$
\begin{proof} The proof decouples into several steps.\\
\textbf{Step 1: Finding Good Vertices.}
The first part of our proof emulates the argument from \cite{ling} with one slight modification. We assume that we are working with a slightly improved bound in \eqref{length}. Specifically, we first assume that we have the identity
$$
\sum_{i \neq j} (1 - a_{ij}) \Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} = (2-\alpha) (1-\mu) (n-1)n
$$
for some value of $\alpha > 0.0537$. In that case, running the original Ling-Xu-Bandeira argument again shows that we have
$$
| \sin{(\theta_i - \theta_r)} | < \frac{1}{\sqrt{2}}$$ as long as
$\mu \geq 0.788897$. It remains to obtain a similar bound in the
case where this assumption does not hold. We may thus additionally
assume
\begin{equation} \label{standingassumption}
\sum_{i \neq j} (1 - a_{ij}) \Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} = (2-\alpha) (1-\mu) (n-1)n,
\end{equation}
where $\alpha$ is defined by the equation and satisfies $0 \leq \alpha \leq 0.0537$. The first consequence that we derive from this assumption is that there are many `good' vertices, a term that we will use repeatedly and that we now formally define.
\begin{definition} A vertex $i$ is `$\varepsilon-$good' if
\begin{equation} \label{firstcond}
\sum_{j=1,\, j \neq i}^{n} (1 - a_{ij})\Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} \geq (2-\varepsilon) (1-\mu)(n-1).
\end{equation}
\end{definition}
In practice, we will assume that $\varepsilon$ is fixed, to be optimized over later, and we will refer to such vertices simply as `good vertices' or `good points'. The nomenclature is motivated by the fact that the double sum in \eqref{standingassumption} has to be large and good points contribute to achieving this goal. It will also be reflected in the improved bounds that we derive for good points early on.
The first step in our argument is to conclude that, for every given $\varepsilon \in [\alpha, 2]$, there are at least $(1-\frac{\alpha}{\varepsilon})n$ vertices that are $\varepsilon-$good.
Suppose this is false, then the double sum in \eqref{standingassumption}
could be bounded from above by
\begin{align*}
\text{LHS of }\eqref{standingassumption}&< \frac{\alpha}{\varepsilon} n (2-\varepsilon) (1-\mu)(n-1) + \Bigl(1-\frac{\alpha}{\varepsilon}\Bigr)n 2 (1-\mu)(n-1) \\
&= (1-\mu)(n-1)n \Bigl( \frac{\alpha}{\varepsilon}(2-\varepsilon) + 2-2\frac{\alpha}{\varepsilon}\Bigr)\\
&= (2-\alpha)(1-\mu)(n-1)n
\end{align*}
which is a contradiction to \eqref{standingassumption}. \\
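The pigeonhole computation above collapses exactly to $(2-\alpha)(1-\mu)(n-1)n$; this can be checked in one line (the values of $\alpha$ and $\varepsilon$ below are illustrative).

```python
import numpy as np

eps = 0.5
alphas = np.linspace(1e-4, 0.0537, 100)

# identity: (alpha/eps)(2 - eps) + (1 - alpha/eps) * 2 = 2 - alpha
lhs = (alphas / eps) * (2 - eps) + (1 - alphas / eps) * 2
rhs = 2 - alphas
```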
Let us now take an $\varepsilon-$good vertex $i$. We argue that, for
every given $ \delta \in (\varepsilon, 2)$, there are many
non-neighbors, i.e. indices $j$ with $a_{ij} = 0$, for which
$$ \Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} \geq 2 - \delta.$$
Let us assume their number is $(1-\mu - c)(n-1)$. Then we can bound, using the fact that the total number of non-neighbors is at most $(1-\mu)(n-1)$, that
$$ \sum_{j=1}^{n} (1 - a_{ij})\Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} \leq (1-\mu-c)(n-1) 2 + c(n-1)(2-\delta).$$
We require, using \eqref{firstcond}, that
$$ (2-\varepsilon) (1-\mu)\leq (1-\mu-c) 2 + (2-\delta)c$$
and thus
$$ c \leq (1-\mu)\frac{\varepsilon}{\delta}.$$
This shows that for any $\varepsilon-$good vertex, the number of non-neighbors for which the cosine quantity exceeds $2-\delta$ is at least
$$(1-\mu - c)(n-1) \geq (1-\mu)(n-1)\left(1- \frac{\varepsilon}{\delta} \right).$$
We summarize our arguments up to this step.
\begin{enumerate}
\item Let us consider the value $\alpha$ as defined in \eqref{standingassumption}. If $\alpha > 0.0537$, then we get the desired result directly from the argument of Ling, Xu \& Bandeira. It thus remains to study the cases where $0 \leq \alpha \leq 0.0537$.
\item In this case, for each $\varepsilon \geq \alpha$, there are at least $(1-\frac{\alpha}{\varepsilon})n$ vertices $i$ (`the $\varepsilon$-good vertices') for which we have the inequality
$$
\sum_{j=1,j\neq i}^{n} (1 - a_{ij})\Abs{\cos(\theta_i - \theta_j) -
\cos(\theta_i-\theta_j)^2} \geq (2-\varepsilon) (1-\mu)(n-1).
$$
\item For each $\varepsilon < \delta < 2$, each of these $(1-\frac{\alpha}{\varepsilon})n$ good points has at least $(1-\mu) (n-1)(1 - \frac{\varepsilon}{\delta})$ non-neighbors, $a_{ij} = 0$, for which
$$ \Abs{\cos(\theta_i - \theta_j) - \cos(\theta_i-\theta_j)^2} \geq 2- \delta.$$
\end{enumerate}
\textbf{Step 2: Improved Bounds for Good Vertices.} It is an
elementary trigonometric fact that if $0 \leq \delta \leq 1$ (in
fact the inequality below holds for $0 \leq \delta < 7/4$), then
$$ \Abs{\cos(x) - \cos(x)^2} \geq 2 - \delta,\quad \text{implies} \quad \left|\sin{(x)}\right| \leq \frac{1}{\sqrt{2}} \sqrt{ \sqrt{9 - 4 \delta} - 3 + 2\delta} =: s_{\delta},$$
where we introduced the shorthand $s_{\delta}$ for simplicity of exposition.
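The trigonometric fact can be verified numerically over a fine grid of angles; the values of $\delta$ below are illustrative samples from the admissible range.

```python
import numpy as np

def s_delta(delta):
    # the shorthand s_delta introduced in the text
    return np.sqrt(np.sqrt(9 - 4 * delta) - 3 + 2 * delta) / np.sqrt(2)

xs = np.linspace(0, 2 * np.pi, 200001)
worst = {}
for delta in (0.1, 0.5, 0.88, 1.0):
    # grid points where |cos x - cos^2 x| >= 2 - delta
    mask = np.abs(np.cos(xs) - np.cos(xs) ** 2) >= 2 - delta
    worst[delta] = np.abs(np.sin(xs[mask])).max()
```

For each $\delta$, the largest value of $|\sin x|$ on the constrained set stays below $s_\delta$, as claimed.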
Combining these facts, we can show that for all the good points, of which there are at least $(1-\frac{\alpha}{\varepsilon})n$, we have
\begin{align*}
\|r\| \cdot \left|\sin{(\theta_i - \theta_r)}\right| &= \left| \sum_{j=1}^{n}{ (1-a_{ij}) \sin{(\theta_i - \theta_j)}} \right|\\
&\leq (1-\mu)(n-1) \Bigl(1-\frac{\varepsilon}{\delta}\Bigr) s_{\delta} \\
&\qquad + (1-\mu)(n-1)\frac{\varepsilon}{\delta}\\
&= (1-\mu)(n-1)\Bigl( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\Bigr).
\end{align*}
This, in turn, implies that any good point $i$ satisfies
$$ \left|\sin{(\theta_i - \theta_r)}\right|^2 \leq \frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2 n^2}{\|r\|^2}$$
Note that we also have \eqref{length} and \eqref{standingassumption} implying
\begin{align*}
\|r\|^2 \geq \left(\frac{1}{2} - (2-\alpha)(1-\mu) \right)n^2
\end{align*}
Therefore, if $i$ is a good point, then
$$ \left|\sin{(\theta_i - \theta_r)}\right|^2 \leq \frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2}{ \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)}.$$
Recall also that there are at most $\frac{\alpha}{\varepsilon} n$ `bad' points (points which are not good; we will also call them `outliers').
We define
$$\varphi = \varphi(\mu, \varepsilon, \delta, \alpha)$$
as the positive angle satisfying
$$ \left|\sin{(\varphi)}\right|^2 = \frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2}{ \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)}.$$
This is the bound we get on the angle that good points make with $\theta_r$. There is a balancing act: by choosing the parameters in a more restricted fashion, we can get a better upper bound for this angle, but there will be fewer good points. Choosing the parameters in the other direction will result in more good points but less control on their geometric distribution.
We will require, further along in the argument, that $\varepsilon$ and $\delta$ are chosen in such a way that
\begin{equation} \label{sincond}
\left|\sin{(\varphi)}\right|^2 = \frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2}{ \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)} < \frac{1}{2}.
\end{equation}
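Condition \eqref{sincond} can be checked numerically for the parameter choices made later in the proof ($\varepsilon = 0.5$, $\delta = 0.88$); the grid over $\mu$ and $\alpha$ is an illustrative discretization of the relevant range.

```python
import numpy as np

eps, delta = 0.5, 0.88  # the values chosen later in the proof
s = np.sqrt(np.sqrt(9 - 4 * delta) - 3 + 2 * delta) / np.sqrt(2)

def sin_phi_sq(mu, alpha):
    # |sin(phi)|^2 as defined in the text
    num = (s + (eps / delta) * (1 - s)) ** 2 * (1 - mu) ** 2
    den = 0.5 - (2 - alpha) * (1 - mu)
    return num / den

vals = np.array([sin_phi_sq(mu, alpha)
                 for mu in np.linspace(0.788897, 0.794, 60)
                 for alpha in np.linspace(0.0, 0.0537, 60)])
```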
\textbf{Step 3: The Distribution of Good Points.}
We now introduce a bit of orientation and assume, without loss of generality after possibly rotating all the points, that
$$ r = \| r\| e^{i \theta_r} = \sum_{j=1}^{n}{ e^{i \theta_j}} \qquad \mbox{is a positive real number.}$$
We know that the good points, of which there are at least $(1-\frac{\alpha}{\varepsilon})n$ many, satisfy \eqref{sincond} and are thus contained in one of two cones.
We assume that we have $\gamma_1 n$ good points in the left cone and $\gamma_2n$ good points in the right cone (see Figure~\ref{fig:cones} for a sketch of this). Without loss of generality, we can reflect the picture if necessary and assume that $\gamma_2 \geq \gamma_1$. Recalling our lower bound on the number of good points, we have
\begin{equation}\label{eq:gamma12}
\gamma_1 + \gamma_2 \geq 1 - \frac{\alpha}{\varepsilon}.
\end{equation}
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\draw [thick] (0,0) circle (2cm);
\draw [->, thick] (0,0) -- (3,0);
\node at (3, -0.3) {$r$};
\draw [thick, dashed] (-1.6, -1.2) -- (1.6, 1.2);
\draw [thick, dashed] (-1.6, 1.2) -- (1.6, -1.2);
\node at (-4,0) {$\gamma_1 n$ points here};
\draw[->] (-2.7,0) -- (-2.2,0);
\node at (4,1) {$\gamma_2 n$ points here};
\draw [->] (2.8, 0.8) -- (2.2, 0.5);
\node at (0.6, 0.2) {$\varphi$};
\node at (3,-1) {$\sin^2{(\varphi)} < 1/2$};
\end{tikzpicture}
\caption{Introducing orientation: $r$ being a positive real forces all the good points to be in two cones. The outliers can be anywhere (and could also be in the cone).\label{fig:cones}}
\end{figure}
\end{center}
We note that the outliers, of which there are (at most) $\frac{\alpha}{\varepsilon} n$, might also be in the left or the right cone; we do not make any statement about their actual location and will always assume
that they are working against us.
The inequality
$$ \|r\|^2 \geq \left(\frac{1}{2} - (2-\alpha)(1-\mu) \right)n^2$$
forces some restrictions on $\gamma_1$ and $\gamma_2$: in particular, if all the good points and all the outliers were distributed somewhat evenly, then $\|r\|$ would actually be quite small. However, we do have a lower bound on $\|r\|$ and this forces some restrictions which we will now explore.
Assuming the extreme case (where all the outliers work in our favor and contribute to making $r$ as big as possible), we have
$$ \|r\| \leq \left(\gamma_2 - \cos{(\varphi)} \gamma_1 + \frac{\alpha}{\varepsilon}\right) n$$
and therefore
$$ \gamma_2 - \cos{(\varphi)} \gamma_1 + \frac{\alpha}{\varepsilon} \geq \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}$$
which implies, using $-\gamma_1 \leq \gamma_2 -1 + \frac{\alpha}{\varepsilon}$,
$$ \bigl(1 + \cos(\varphi)\bigr) \Bigl(\gamma_2 + \frac{\alpha}{\varepsilon}\Bigr) - \cos(\varphi) \geq \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}.$$
This inequality implies there cannot be too few points inside the right cone for otherwise $r$ could not attain the size it does. More precisely, this forces
\begin{equation}\label{eq:lowerboundgamma2} \gamma_2 \geq \frac{\cos(\varphi) + \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}} - \frac{\alpha}{\varepsilon}.
\end{equation}
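The rearrangement leading to \eqref{eq:lowerboundgamma2} is a simple algebraic inversion; the following check (with random illustrative values standing in for the quantities in the text) confirms that the stated threshold exactly recovers the inequality it was solved from.

```python
import numpy as np

rng = np.random.default_rng(3)
phi = rng.uniform(0, np.pi / 4, 1000)   # cone half-angle, phi < pi/4
c = np.cos(phi)
ratio = rng.uniform(0, 0.2, 1000)       # stands for alpha/eps
R = rng.uniform(0, 1, 1000)             # stands for (1/2 - (2-alpha)(1-mu))^{1/2}

# threshold from solving (1 + cos phi)(gamma_2 + alpha/eps) - cos phi >= R
g2 = (c + R) / (1 + c) - ratio
recovered = (1 + c) * (g2 + ratio) - c  # should equal R exactly
```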
\textbf{Step 4: Using the Hessian.} So far, we have obtained fairly precise information about the number of good points and their approximate location. We have not yet made strong use of the fact that the configuration is a local maximum (we did use it implicitly
when appealing to results of Ling-Xu-Bandeira). In this section, we will explicitly use the fact that the Hessian has to be negative-semidefinite in a maximum to derive a criterion that has to be satisfied for all local maxima. We will then contrast this criterion with the precise structure we have derived to conclude that for some parameters this condition is violated thus excluding them. This will prove the result.
We conclude the argument by using that the configuration of points considered in Step 3 is a
local maximum: this means that the Hessian is negative semi-definite, which in our
case implies that for any $(w_1, \dots, w_n) \in \mathbb{R}^n$, we
have
\begin{equation} \label{positivity}
\sum_{i,j}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2} \geq 0.
\end{equation}
We are free to choose the constants $w_i$ and will choose them in a way that makes the quadratic form as small as possible: this means we want to pick $w_i, w_j$ to be quite different when $i,j$ are on different sides of the cone.
We pick these numbers as follows: for a constant $v \in \mathbb{R}$ to be determined
$$ w = \begin{cases} 1 &\qquad \mbox{for the points in the left cone}\\
v &\qquad \mbox{for the outliers}\\
-1 &\qquad \mbox{for the points in the right cone}.
\end{cases}$$
We will bound the expression from above using the information we have about $\gamma_1$ and $\gamma_2$: we know that $\gamma_1 + \gamma_2$ is not too small, that $\gamma_1 \leq \gamma_2$ and we have a lower bound on $\gamma_2$.
The quadratic form, which we know to be positive, is bounded from above
\begin{align*}
\frac{1}{2} \sum_{i,j}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2} &=
\sum_{i~\tiny \mbox{left}}~ \sum_{j~ \tiny \mbox{right}}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2}\\
&+ \sum_{i~\tiny \mbox{outlier}} ~ \sum_{j~\tiny \mbox{right/left cone}}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2}
\end{align*}
Now, since $|\sin(\varphi)|^2 \leq 1/2$ by assumption, we have
that the cosine is negative for any pair of points where one is
contained in the left cone and one is contained in the right
cone. Indeed, we see that for any pair $i,j$ of good points on different
sides of the cone, we have
$$ \cos{(\theta_i - \theta_j)} \leq \cos{(\pi - 2\varphi)},$$
where $\varphi$ is the angle introduced above.
We further bound the quadratic form from above by assuming
that the number of connections running between good points on different
sides of the cone is minimized, which leads to an upper bound since
each such connection contributes a nonpositive number.
To simplify this step in the argument, we will make one more assumption:
$$ \gamma_2 \geq 1 - \mu.$$
Using this assumption, it becomes possible to determine the minimal configuration: each good point on the left-hand side
avoids being connected to the $\gamma_2 n$ points in the right cone as much as possible. Of course, once $\gamma_2 > 1-\mu$,
each good point on the left has to connect to at least $(\gamma_2 - (1-\mu))n$ good points on the right side of the cone, since a point can fail to be connected to at most $(1-\mu)n$ other points. This results in
\begin{align} \sum_{i~\tiny \mbox{left}} ~~\sum_{j~\tiny \mbox{right}}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2} & \leq \gamma_1 (\gamma_2 - (1-\mu)) \cos{(\pi - 2\varphi)} (1 -(-1))^2 n^2 \notag \\
& = 4 \gamma_1 (\gamma_2 - (1-\mu)) \cos{(\pi - 2\varphi)} n^2. \label{eq:sumleftright}
\end{align}
As for outliers, we have no control on where they are. Let $i$ be an
outlier and consider the quantity that we need to bound
$$ S = \sum_{j~\tiny \mbox{left/right cone}}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2}.$$
We can increase this contribution by assuming that the outlier is somewhere in $\varphi \leq \theta_i \leq \pi - \varphi$ and that
all the good points in the left and right cone are located at angles $\varphi$ and $\pi - \varphi$: this means that the good points in the cone are as close to each other as they are allowed to be from the cone condition which simplifies getting large interactions with both of them from the monotonicity of the cosine in that range. If $i$ is an outlier located at angle $\theta$, this leads to the upper bound
$$ S \leq \gamma_1 n (1-v)^2 \cos{(\theta - (\pi - \varphi))} + \gamma_2n (1+v)^2 \cos{(\theta- \varphi)}.$$
We bound expressions of this type via
\begin{align*}
A \cos{(\theta - (\pi - \varphi))} + B \cos{(\theta- \varphi)} &= -A \cos{(\theta + \varphi)} + B \cos{(\theta - \varphi)} \\
&= \mbox{Re} \Bigl(-A e^{i (\theta + \varphi)} + B e^{i(\theta - \varphi)}\Bigr)\\
&= \mbox{Re}~ e^{i \theta} \left( -A e^{i \varphi} + B e^{-i \varphi} \right)\\
&\leq \left| -A e^{i \varphi} + B e^{-i \varphi} \right| \\
&= \left| -A e^{ 2i \varphi} + B \right|.
\end{align*}
We note that, by assumption, $|\sin{\varphi}|^2 < 1/2$ and thus, since $A,B$ are positive reals,
$$ \left| -A e^{ 2i \varphi} + B \right| \leq \sqrt{A^2+B^2}.$$
This allows us to bound
$$ S \leq n\sqrt{ \gamma_1^2 (1-v)^4 + \gamma_2^2 (1+v)^4}.$$
We now choose $v$ so as to make this expression small. A convenient choice is
$$ v = \frac{\gamma_1 - \gamma_2}{\gamma_1 + \gamma_2}$$
and we obtain
$$ S \leq n \frac{4 \gamma_1 \gamma_2 (\gamma_1^2 + \gamma_2^2)^{1/2}}{(\gamma_1 + \gamma_2)^2} = n \frac{4\gamma_1 \gamma_2}{\gamma_1 + \gamma_2} \frac{\sqrt{\gamma_1^2 + \gamma_2^2}}{\gamma_1 + \gamma_2} \leq n \frac{4\gamma_1 \gamma_2}{\gamma_1 + \gamma_2}. $$
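Both the modulus inequality and the closed form arising from this choice of $v$ can be verified numerically; the random values below are illustrative stand-ins for $\gamma_1, \gamma_2$ and $\varphi$.

```python
import numpy as np

rng = np.random.default_rng(5)
phis = rng.uniform(0, np.pi / 4, 1000)  # sin^2(phi) < 1/2 means phi < pi/4
g1 = rng.uniform(0.01, 1.0, 1000)
g2 = rng.uniform(0.01, 1.0, 1000)

# modulus bound: |-A e^{2 i phi} + B| <= sqrt(A^2 + B^2) for A, B >= 0
mod_lhs = np.abs(-g1 * np.exp(2j * phis) + g2)
mod_rhs = np.sqrt(g1 ** 2 + g2 ** 2)

# v = (g1 - g2)/(g1 + g2) and the resulting closed form for the bound on S/n
v = (g1 - g2) / (g1 + g2)
direct = np.sqrt(g1 ** 2 * (1 - v) ** 4 + g2 ** 2 * (1 + v) ** 4)
closed = 4 * g1 * g2 * np.sqrt(g1 ** 2 + g2 ** 2) / (g1 + g2) ** 2
```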
Altogether, summing over all the outliers shows
$$ \sum_{i~\tiny \mbox{outlier}} \sum_{j~\tiny \mbox{left/right}}{ a_{ij} \cos{(\theta_i - \theta_j)} (w_i - w_j)^2} \leq \frac{\alpha}{\varepsilon}
\frac{4\gamma_1 \gamma_2}{\gamma_1 + \gamma_2} n^2.$$
We note that \eqref{eq:gamma12} implies an upper bound on the number of outliers and
$$ \frac{1}{\varepsilon} \frac{1}{\gamma_1 + \gamma_2} \leq \frac{1}{\varepsilon}\frac{1}{1 - \frac{\alpha}{\varepsilon}} = \frac{1}{\varepsilon - \alpha}.$$
Combining this with \eqref{eq:sumleftright}, we reach a contradiction to \eqref{positivity} if
\begin{equation} \label{condition}
\gamma_1(\gamma_2 - (1-\mu)) \cos{(\pi - 2\varphi)} + \frac{\gamma_1 \gamma_2 \alpha}{\varepsilon- \alpha} < 0.
\end{equation}
Indeed, if this inequality is satisfied, then the quadratic form corresponding to the Hessian is not definite implying that the configuration we are in does not correspond to a local maximum. We will now analyze the condition.\\
\textbf{Step 5: Analyzing the Condition.} We will now try to understand under what conditions \eqref{condition} holds.
It clearly requires $\gamma_1 > 0$. We first show that if $\gamma_1 = 0$, then the only stable configuration that can arise
is actually the one where all points are in the same spot. Then we deal with the more elaborate case that arises when $\gamma_1 > 0$. \\
\textit{Part 1: $\gamma_1 = 0$.} We will show that if $\gamma_1 = 0$, then $\|r\|$ has to be quite big and this will allow us to immediately deduce the desired statement via the Ling-Xu-Bandeira framework. We now discuss this in greater detail.
If $\gamma_1 = 0$, then there are at least $(1- \frac{\alpha}{\varepsilon})n$ points in the right cone since all the good points are in one of the two cones, none of them in the left cone and there are at least that many good points in total.
Since the opening angle is
less than $45^{\circ}$ (recall that this is an assumption and will be the case for all the parameters we will consider below), we know that the $x-$coordinate of $e^{i \theta_j}$ for each good point is at least $1/\sqrt{2}$. The $x-$coordinate of an outlier is, trivially, at least $-1$. Therefore
$$ \|r\| = \left| \sum_{j=1}^{n}{ e^{i \theta_j}} \right| \geq \mbox{Re} \sum_{j=1}^{n}{ e^{i \theta_j}} \geq \frac{1}{\sqrt{2}} \left( 1 - \frac{\alpha}{\varepsilon}\right)n - \frac{\alpha}{\varepsilon}n.$$
Recalling \eqref{bound}, we have, for any $i$, that
$$ \left| \sin{(\theta_i - \theta_r)} \right| \leq \frac{(1-\mu)n}{\|r\|} \leq \sqrt{2} \frac{1-\mu}{1 - (1+\sqrt{2})\frac{\alpha}{\varepsilon}}.$$
Recalling our regime of interest, $0 \leq \alpha \leq 0.0537$ and $\mu \geq 0.78$, as well as our parameter selection $\varepsilon =0.5$,
we see that
$$ \left| \sin{(\theta_i - \theta_r)} \right| \leq \sqrt{2}\, \frac{1-\mu}{1 - (1+\sqrt{2})\frac{\alpha}{\varepsilon}} \leq 0.45 < \frac{1}{\sqrt{2}}.$$
This case thus reduces to the Proposition of Ling-Xu-Bandeira discussed above and we see that the only possible case is that all the points are in the same spot.\\
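The final estimate in Part 1 is easy to confirm numerically: over the regime $0 \leq \alpha \leq 0.0537$, $\mu \geq 0.78$, $\varepsilon = 0.5$, the quantity $\sqrt{2}(1-\mu)/(1-(1+\sqrt{2})\alpha/\varepsilon)$ from the display above stays below $0.45 < 1/\sqrt{2}$. The grid resolution is an illustrative choice.

```python
import numpy as np

eps = 0.5
mus = np.linspace(0.78, 0.794, 60)
alphas = np.linspace(0.0, 0.0537, 60)
M, A = np.meshgrid(mus, alphas)

# the bound from Part 1 of Step 5
bound = np.sqrt(2) * (1 - M) / (1 - (1 + np.sqrt(2)) * A / eps)
```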
\textit{Part 2: Using $\gamma_1 > 0$.} We can thus assume that $\gamma_1 > 0$ in which case the configuration is clearly not the one where all the points are in one spot. We obtain a contradiction to \eqref{positivity} if
$$ (\gamma_2 - (1-\mu)) \cos{(\pi - 2\varphi)} + \frac{ \gamma_2 \alpha}{\varepsilon- \alpha} < 0.$$
Note that $\cos{(\pi - 2\varphi)} = - \cos(2 \varphi) = -1 + 2 \sin(\varphi)^2$.
By assumption, we know that $\sin(\varphi)^2 < 1/2$ and thus the
cosine contribution in the above inequality is negative. We reach a
contradiction if
$$ \gamma_2 \left( \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} \right) < (1-\mu) \cos{(\pi - 2\varphi)}.$$
We would like to extract a bound for $\gamma_2$ from this but we cannot simply divide by a term without knowing its
sign which we now determine.
The right-hand side is negative; this means that in order to reach a
contradiction, we certainly would need to require the quantity in
the parentheses to be negative, i.e.
\begin{equation} \label{condition2}
\cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} < 0.
\end{equation}
Once this is the case, we can divide by that term and deduce that we reach a global contradiction if
\begin{equation} \label{condition3}
\gamma_2 \geq \frac{ (1-\mu) \cos{(\pi - 2\varphi)} }{ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} }.
\end{equation}
Notice that this condition implies the weaker condition
$\gamma_2 \geq 1 - \mu$ that we assumed above, and thus the latter can be
dropped. Summarizing, we have derived a lower bound on $\gamma_2$ that leads to a global contradiction. At this point, we recall that we already derived a lower bound on $\gamma_2$ in
\eqref{eq:lowerboundgamma2} stating that
\begin{align*}
\gamma_2 & \geq \frac{\cos(\varphi) + \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}} - \frac{\alpha}{\varepsilon} \\
& = 1 - \frac{\alpha}{\varepsilon} - \frac{1 - \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}}.
\end{align*}
It follows that if our already established lower bound \eqref{eq:lowerboundgamma2} is larger than the lower bound leading to a contradiction, i.e. if
$$ 1 - \frac{\alpha}{\varepsilon} - \frac{1 - \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}}
\geq \frac{ (1-\mu) \cos{(\pi - 2\varphi)} }{ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} },
$$
then we have obtained a contradiction. Any collection of points with these parameters must necessarily give rise to a Hessian with a negative definite eigenvalue and thus cannot be a local maximum.\\
\textbf{Summary.}
In order to obtain a contradiction, it suffices to find, for each $\alpha \leq 0.0537$, two parameters $\varepsilon$ and $\delta$ with
$$ \alpha < \varepsilon < \delta < 1$$
such that, abbreviating once again,
$$ s_{\delta} = \frac{1}{\sqrt{2}} \sqrt{ \sqrt{9 - 4 \delta} - 3 + 2\delta},$$
the following properties hold:
\begin{enumerate}
\item we have
$$\frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2}{ \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)} < \frac{1}{2}.$$
This says that the parameters define a critical angle $\varphi = \varphi(\alpha, \mu, \varepsilon, \delta)$ corresponding to an angle less than $45^{\circ}$ which is required for our argument.
\item we require that this angle satisfies
$$ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} < 0$$
which was required when dividing by it and flipping the sign in Step 5, Part 2.
\item we also require the angle $\varphi$ satisfies
$$ 1 - \frac{\alpha}{\varepsilon} - \frac{1 - \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}} > \frac{ (1-\mu) \cos{(\pi - 2\varphi)} }{ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} }.$$
This shows that the lower bound \eqref{eq:lowerboundgamma2} we derived for $\gamma_2$ is big enough to imply a contradiction; we refer to Step 5, Part 2.\\
\end{enumerate}
\textbf{Conclusion.} We distinguish the cases $\alpha \leq 0.0537$ and $\alpha > 0.0537$. If $\alpha > 0.0537$, then we immediately obtain a contradiction if
$$ \mu \geq 0.788897.$$
This follows from rerunning the Ling-Xu-Bandeira argument described in \S 2.1 verbatim, using the definition of $\alpha$; the value of $\mu$ comes from \eqref{length} and \eqref{bound}.
Let us now assume that $\alpha \leq 0.0537$.
We set
$$ \varepsilon = 0.5 \qquad \mbox{and} \qquad \delta = 0.88.$$
An easy (Mathematica) check shows that we obtain a contradiction for the entire range $0 \leq \alpha \leq 0.0537$ and $0.788897 \leq \mu \leq 0.794$, where $0.794$ is the bound proved in \cite{ling}. More precisely, the inequalities are true with room to spare: over this entire parameter range we have
$$\frac{ \left( s_{\delta} + \frac{\varepsilon}{\delta} (1-s_{\delta})\right)^2 (1-\mu)^2}{ \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)} < 0.46 < \frac{1}{2}$$
$$ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} < -0.05 < 0$$
and
$$ 1 - \frac{\alpha}{\varepsilon} - \frac{1 - \left( \frac{1}{2} - (2-\alpha)(1-\mu)\right)^{1/2}}{1 + \cos{(\varphi)}} > \frac{ (1-\mu) \cos{(\pi - 2\varphi)} }{ \cos{(\pi - 2\varphi)} + \frac{\alpha}{\varepsilon - \alpha} } + 0.004 .$$
It is the last inequality that is nearly tight and prevents an extension to a larger parameter range (the critical values occur when $\alpha \sim 0.0537$ and $\mu \sim 0.7889$); away from these parameters the inequality is satisfied with a much bigger gap.
\end{proof}
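The concluding parameter check is easy to reproduce outside Mathematica as well. The following sketch sweeps the stated range on a finite grid (the grid resolution is an illustrative choice) and verifies the three inequalities with the stated margins.

```python
import numpy as np

eps, delta = 0.5, 0.88
s = np.sqrt(np.sqrt(9 - 4 * delta) - 3 + 2 * delta) / np.sqrt(2)
K = s + (eps / delta) * (1 - s)

cond1, cond2, gap3 = [], [], []
for alpha in np.linspace(0.0, 0.0537, 200):
    for mu in np.linspace(0.788897, 0.794, 200):
        den = 0.5 - (2 - alpha) * (1 - mu)
        sin2 = K ** 2 * (1 - mu) ** 2 / den      # |sin(phi)|^2
        phi = np.arcsin(np.sqrt(sin2))
        cpm = np.cos(np.pi - 2 * phi)            # cos(pi - 2 phi)
        t = cpm + alpha / (eps - alpha)          # quantity in condition (2)
        lhs = 1 - alpha / eps - (1 - np.sqrt(den)) / (1 + np.cos(phi))
        rhs = (1 - mu) * cpm / t
        cond1.append(sin2)
        cond2.append(t)
        gap3.append(lhs - rhs)
```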
\textbf{Acknowledgment.} We are grateful to the anonymous referees for the substantial and detailed reports resulting in a greatly improved manuscript.
| {
"timestamp": "2020-04-20T02:14:12",
"yymm": "1911",
"arxiv_id": "1911.12336",
"language": "en",
"url": "https://arxiv.org/abs/1911.12336",
"abstract": "We study synchronization properties of systems of Kuramoto oscillators. The problem can also be understood as a question about the properties of an energy landscape created by a graph. More formally, let $G=(V,E)$ be a connected graph and $(a_{ij})_{i,j=1}^{n}$ denotes its adjacency matrix. Let the function $f:\\mathbb{T}^n \\rightarrow \\mathbb{R}$ be given by $$ f(\\theta_1, \\dots, \\theta_n) = \\sum_{i,j=1}^{n}{ a_{ij} \\cos{(\\theta_i - \\theta_j)}}.$$ This function has a global maximum when $\\theta_i = \\theta$ for all $1\\leq i \\leq n$. It is known that if every vertex is connected to at least $\\mu(n-1)$ other vertices for $\\mu$ sufficiently large, then every local maximum is global. Taylor proved this for $\\mu \\geq 0.9395$ and Ling, Xu \\& Bandeira improved this to $\\mu \\geq 0.7929$. We give a slight improvement to $\\mu \\geq 0.7889$. Townsend, Stillman \\& Strogatz suggested that the critical value might be $\\mu_c = 0.75$.",
"subjects": "Optimization and Control (math.OC); Dynamical Systems (math.DS); Adaptation and Self-Organizing Systems (nlin.AO)",
"title": "Synchronization of Kuramoto Oscillators in Dense Networks"
} |
https://arxiv.org/abs/2006.14482 | A metric on directed graphs and Markov chains based on hitting probabilities | The shortest-path, commute time, and diffusion distances on undirected graphs have been widely employed in applications such as dimensionality reduction, link prediction, and trip planning. Increasingly, there is interest in using asymmetric structure of data derived from Markov chains and directed graphs, but few metrics are specifically adapted to this task. We introduce a metric on the state space of any ergodic, finite-state, time-homogeneous Markov chain and, in particular, on any Markov chain derived from a directed graph. Our construction is based on hitting probabilities, with nearness in the metric space related to the transfer of random walkers from one node to another at stationarity. Notably, our metric is insensitive to shortest and average walk distances, thus giving new information compared to existing metrics. We use possible degeneracies in the metric to develop an interesting structural theory of directed graphs and explore a related quotienting procedure. Our metric can be computed in $O(n^3)$ time, where $n$ is the number of states, and in examples we scale up to $n=10,000$ nodes and $\approx 38M$ edges on a desktop computer. In several examples, we explore the nature of the metric, compare it to alternative methods, and demonstrate its utility for weak recovery of community structure in dense graphs, visualization, structure recovering, dynamics exploration, and multiscale cluster detection. | \section{Introduction}
\subsection{Motivation}
Many finite spaces can be endowed with meaningful metrics. For undirected graphs, the geodesic (shortest path), commute time (effective resistance), and diffusion distance~\cite{lafon04diffusion,coifman05geometric,coifman06diffusion} metrics are widely applied~\cite{coifman05geometric,Liben_Nowell_2003,abraham2010highway}.
The first two can be naively generalized to directed graphs by summing shortest/average walk lengths in each direction, whereas the third is defined only for undirected graphs.
We know of only one graph metric specifically designed for directed graphs, namely the generalized effective resistance distance developed in~\cite{Young_2016b,Young_2016a}.
Overlaying a metric onto a directed structure is a challenge since, by definition, the metric is symmetric.
A related problem is finding metrics on the state space of a finite-state, discrete time Markov chain. In this case, there is also limited prior work, consisting of mean commute time~\cite{rozinas,chebotarev-2020,choi19resistance} and a constant-curvature metric~\cite{vollering2018}.
Metrics fit into the broader category of dissimilarity measures, with the decision whether to impose all metric axioms being application dependent. When a metric is used, this additional structure can enable various algorithmic accelerations, improved guarantees, and useful inductive biases~\cite{elkan2003kmeans,moore00anchors,hamerly10kmeans,boytsov13prune,pitis2020bias}. Furthermore, the metric structure is a key ingredient in proofs of convergence, consistency, and stability. While mostly settled for undirected graphs~\cite{Osting_2017,singer2012vector,singer2017spectral,trillos2018variational,trillos2016consistency}, the development of such theories for directed graphs (digraphs) and Markov chains is an open research problem. The first positive result for digraphs appeared recently~\cite{Yuan2020}.
In the present work, we introduce and analyze a new metric for digraphs and Markov chains based on the \emph{hitting probability} from one node to another, by which we mean the probability that a random walker starting at one node will reach the other node before returning to its starting node. By correctly combining these probabilities with the invariant distribution of an irreducible Markov chain, a metric can be constructed. This metric differs from other metrics by being insensitive to walk length, thus measuring information that is, in a sense, orthogonal to commute time, as illustrated in examples. In the special case of undirected graphs and with the scale parameter $\beta=1$ (defined below), the hitting probabilities metric is actually the logarithm of effective resistance/commute time (plus a constant), a striking fact proven in~\cite{doyle2000random}, section 1.3.4. For other values of the scale parameter, the hitting probabilities metric is a new addition to the limited catalogue of undirected graph metrics.
We illustrate the utility of our metric in several examples, both analytical and numerical, related to graph symmetrization, clustering, structure detection, data exploration, and geometry detection.
\subsection{Our contributions}
Let ${(X_t)}_{t\geq 0}$ be a discrete-time Markov chain on the state space $[n] = \{ 1, \ldots , n\}$ with initial distribution $\lambda$ and irreducible transition matrix $P$, {\it i.e.\/},
\[
\mathbf{P}(X_0 = i) = \lambda_i
\qquad \textrm{and} \qquad
\mathbf{P}(X_{t+1} = j \mid X_t = i) = P_{i,j}\,.
\]
We emphasize that $X$ is not required to be aperiodic.
Let $\phi \in \mathbb{R}_{+}^n$ be the invariant distribution for $P$, {\it i.e.\/}, $P^T \phi = \phi$.
The \emph{hitting time} (starting from a random state distributed like $\lambda$) for a state $i\in [n]$ is the random variable given by
\[
\tau_i := \inf\{ t \geq 1 \colon X_t = i \}\,.
\]
For $i,j \in [n]$, let us define
\begin{equation}
Q_{i,j} := \mathbf{P}_i [\tau_j < \tau_i]\,,
\end{equation}
which denotes the probability that starting from site $i$ ({\it i.e.\/}, the subscript on $\mathbf{P}_i$ is used to indicate that $\lambda = \delta_{i}$) the hitting time of $j$ is less than the time it takes to return to $i$. We emphasize that we consider $\tau_j < \tau_i$ here for a single walk and take the probability of such an event over all walks starting at $i$ when computing $Q_{i,j}$. An expression for the \emph{hitting probability matrix}, $Q$, in terms of the transition matrix will be given in~\cref{eq:Q}; see \cref{s:CompMeth} on computational methods.
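The closed-form expression for $Q$ is deferred to~\cref{eq:Q}; as a minimal sketch (one standard route, not necessarily the method of \cref{s:CompMeth}; the function names are ours), $Q$ can also be recovered from mean hitting times via the classical identity $\mathbf{P}_i[\tau_j < \tau_i] = 1/\bigl(\phi_i\,(\mathbf{E}_i[\tau_j] + \mathbf{E}_j[\tau_i])\bigr)$ and the fundamental matrix $Z = (I - P + \mathbf{1}\phi^T)^{-1}$:

```python
import numpy as np

def stationary(P):
    """Invariant distribution phi of an irreducible stochastic matrix P."""
    n = P.shape[0]
    # Solve (P^T - I) phi = 0 together with the normalization sum(phi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def hitting_probabilities(P):
    """Q[i, j] = P_i[tau_j < tau_i] for i != j (zeros on the diagonal)."""
    n = P.shape[0]
    phi = stationary(P)
    # Fundamental matrix Z; mean hitting times M[i, j] = E_i[tau_j].
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), phi))
    M = (np.diag(Z)[None, :] - Z) / phi[None, :]
    C = M + M.T                  # commute times E_i[tau_j] + E_j[tau_i]
    off = ~np.eye(n, dtype=bool)
    Q = np.zeros((n, n))
    Q[off] = 1.0 / (phi[:, None] * C)[off]
    return phi, Q
```

On the two-state chain that alternates deterministically, both off-diagonal hitting probabilities are $1$; on the lazy two-state chain with all transition probabilities $\sfrac12$, they are $\sfrac12$.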
\begin{lemma}\label{t:KeyIdentity} The following relationship holds\footnote{\Cref{t:KeyIdentity} was previously (and independently) proven in~\cite{chien04link}, in the context of Markov chain perturbation theory applied to the internet. It was possibly known even earlier.} for $i\ne j$:
\begin{equation}%
\label{Qpi}
Q_{i,j} \phi_i = Q_{j,i} \phi_j\,.
\end{equation}
\end{lemma}
The weighting by the invariant measure is motivated by connections between the invariant measure and random walks as found in \cite[Section 1.7]{Norris_1997}. A proof of \cref{t:KeyIdentity} is given in \cref{s:Proofs}.
\begin{rem}
\Cref{t:KeyIdentity}~implies that, with an appropriate choice of $Q_{ii}$, $\frac1n Q$ is the transition matrix of a reversible Markov chain with invariant distribution $\phi$.
\end{rem}
We define the \emph{normalized hitting probabilities matrix}, $\Ahtb \in \mathbb R^{n \times n}$, by
\begin{equation}
\label{Aht}
\Ahtb_{i,j} :=
\begin{cases}
\dfrac{ \phi_i^{\beta} }{ \phi_j^{1-\beta} } Q_{i,j} & i \ne j \\
1 & i=j
\end{cases}
\end{equation}
where $\beta \in [\sfrac12, \infty)$. In contexts where the choice of $\beta$ is not important, we simply write $A^{(\mathrm{hp})} = \Ahtb$. Two useful choices for $\beta$ are $1$ and $1/2$.
The $Q_{i,j}$ matrix has recently been shown to play a key role in determining the error of a family of stratified Markov chain Monte Carlo methods~\cite{dinner2017stratification,Thiede_2015}.
From \cref{t:KeyIdentity}, we immediately have the following Corollary.
\begin{corollary}\label{l:symmetric}
The matrix $\Ahtb$ defined in~\cref{Aht} is symmetric. In particular,
\begin{equation}
\Ahtb[\sfrac12]_{i,j} = \sqrt{Q_{i,j}Q_{j,i}}.
\label{d2sym}
\end{equation}
\end{corollary}
\begin{proof}
We observe
\begin{align*}
\Ahtb_{i,j} & = \frac{ \phi_i^\beta }{ \phi_j^{1-\beta} } Q_{i,j} = \frac{ \phi_i^{\beta-1} }{ \phi_j^{1-\beta} } \phi_i Q_{i,j} \\
& = \frac{ \phi_i^{\beta-1} }{ \phi_j^{1-\beta} } \phi_j Q_{j,i} = \frac{ \phi_j^{\beta} }{ \phi_i^{1-\beta} } Q_{j,i} = A^{(\mathrm{hp},\beta)}_{j,i}.
\end{align*}
Hence, $\Ahtb$ is symmetric.
To prove \eqref{d2sym}, we observe that ${\left(\Ahtb[\sfrac12]_{i,j}\right)}^2 = \frac{\phi_i}{\phi_j} Q_{i,j}^2 = Q_{i,j} Q_{j,i}$ by~\cref{Qpi}.
\end{proof}
In some applications, information about relatedness of vertices in a graph will be most immediately encoded in the form of a non-stochastic adjacency matrix $A$. In this case the input adjacency matrix can be transformed into a stochastic matrix $P$ either by a similarity transformation involving the dominant right eigenvector of $A$ or by normalization of the rows of $A$ so that they sum to 1. The resulting stochastic matrix $P$ can then be used as in~\cref{Aht} to construct $A^{(\textrm{hp},\beta)}$, itself a symmetric adjacency matrix on the vertices of the network. In this article we do not address the relative merits of methods to transform an adjacency matrix into a stochastic matrix. We use row normalization unless otherwise stated.
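A minimal sketch of the row-normalization option (the similarity-transform route via the dominant right eigenvector is analogous but not shown):

```python
import numpy as np

def row_normalize(A):
    """Stochastic matrix from a nonnegative adjacency matrix A.

    Every node must have at least one outgoing edge (no zero rows)."""
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1, keepdims=True)
    if np.any(row_sums == 0):
        raise ValueError("every row of A must have a positive sum")
    return A / row_sums
```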
Given an irreducible stochastic matrix $P$, we can thus define a distance $d^\beta \colon [n] \times [n] \to \mathbb R$, which we refer to as the \emph{hitting probability metric}, by
\begin{equation} \label{e:Dist}
d^\beta(i,j) = - \log \left( \Ahtb_{i,j} \right).
\end{equation}
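As a sketch, \cref{Aht} and~\cref{e:Dist} can be assembled from an irreducible $P$ as follows. Here $Q$ is recovered from mean hitting times through the fundamental matrix, one of several routes (the paper's eq.~(Q) is another), and the helper name is ours:

```python
import numpy as np

def hp_metric(P, beta=0.5):
    """Pairwise distance matrix d^beta for an irreducible stochastic P."""
    n = P.shape[0]
    # Invariant distribution: phi^T P = phi^T, sum(phi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1); b[-1] = 1.0
    phi = np.linalg.lstsq(A, b, rcond=None)[0]
    # Hitting probabilities Q from mean hitting times.
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), phi))
    M = (np.diag(Z)[None, :] - Z) / phi[None, :]
    off = ~np.eye(n, dtype=bool)
    Q = np.zeros((n, n))
    Q[off] = 1.0 / (phi[:, None] * (M + M.T))[off]
    # Normalized hitting probabilities A^(hp,beta), ones on the diagonal.
    Ahp = np.eye(n)
    Ahp[off] = (phi[:, None] ** beta / phi[None, :] ** (1 - beta) * Q)[off]
    return -np.log(Ahp)

P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.0, 0.8],
              [0.6, 0.4, 0.0]])
d = hp_metric(P, beta=1.0)  # symmetric, zero diagonal, despite asymmetric P
```

The returned matrix is symmetric with zero diagonal even though $P$ is asymmetric, and for $\beta \in (\sfrac12, 1]$ the off-diagonal entries are strictly positive, in line with \cref{t:Metric}.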
\begin{theorem}\label{t:Metric}
The hitting probability metric, $d^\beta \colon [n] \times [n] \to \mathbb R$, defined in~\cref{e:Dist} is a metric for $\beta \in (\sfrac12, 1]$. For $\beta = \sfrac12$, $d^\beta$ is a pseudo-metric\footnotemark and there exists a quotient graph
on which the distance function becomes a metric.
\end{theorem}
\footnotetext{Recall that a pseudo-metric on $[n]$ is a non-negative real function $f\colon [n] \times [n] \to \mathbb R_{\geq 0}$ satisfying $d(i,i) = 0$, symmetry $d(i,j) = d(j,i)$, and the triangle inequality $d(i,j) \leq d(i,k) + d(k,j)$. A pseudo-metric is a metric if we can identify indiscernible values, {\it i.e.\/}, $d(i,j) = 0 \iff i=j$.}
In~\cref{t:12bounds}, we show that there exists a quotient graph on which $d^{\sfrac12}$ is a metric and which preserves many of the metric properties of the original graph.\footnote{While the usual pseudo-metric quotienting procedure could apply here, there is no guarantee that there would be a corresponding subgraph, which is why~\cref{t:12bounds} is needed.} The key observation for the $d^{\sfrac12}$ pseudometric is that in order for two vertices to be distance $0$ from each other, the probability of hitting the other vertex before returning must be $1$ for both. Hence we provide (in~\cref{structure,quotients}) a means of effectively collapsing these vertices to a single vertex, carefully preserving the overall probabilities relative to the remaining vertices.
\begin{rem}
In light of~\cref{t:KeyIdentity,t:Metric}, $A^{(\mathrm{hp})}$ has two interpretations, first as a symmetrization of $A$, and second as a weighted similarity graph corresponding to $d$, since $A(i,j) = e^{-d(i,j)}$. The practice of associating a finite (subset of a) metric space with a similarity graph in this way is widespread, especially in the manifold learning and graph-based machine-learning communities.\footnote{\cite{self-tuning} cites~\cite{shi-malik,relocalization} as this similarity function's first use specifically for graph-based clustering.} Thus, in our experiments, we favored the use of $A^{(\mathrm{hp})}$ for certain applications where it seemed more natural.
\end{rem}
Finally, we show how advances from~\cite{Thiede_2015} enable us to compute the distance matrix in $O(n^3)$ operations, allowing us to scale up to $\approx 38M$ edges in examples on a
Lenovo ThinkStation P410 desktop with Xeon E5--1620V4 3.5 GHz CPU and 16 GB RAM using {\sc MATLAB} R2019a Update 4 (9.6.0.1150989) 64-bit (glnxa64).
We also provide various synthetic examples to help develop an intuition for the metric and its differences from other measures. We conclude with an example using New York City taxi data to illustrate how our metric can aid in data exploration.
\subsection{Relationship to other notions of similarity and metrics}\label{s:RelWork}
In this section, we discuss some related notions of similarity and metrics on finite state spaces with asymmetric (directional) relationships. Our focus is on symmetric notions of dissimilarity, with an emphasis on metrics. While, in some applications, asymmetric similarity scores may be the right choice (see, {\it e.g.\/}, Tversky's seminal work on features of similarity~\cite{Tversky_1977}), we restrict our scope to symmetric notions. We do, however, wish to mention directed metrics (also called quasi-metrics), which are a natural analogue to metric spaces for relaxations of digraph cut problems~\cite{directed_metric}.
From~\cite{chebotarev-2020,kemeny,rozinas}, we know that commute time is a metric on ergodic Markov chains. In~\cite{choi19resistance,Young_2016b,Young_2016a}, generalizations of effective resistance are developed for ergodic Markov chains and directed graphs. Commute time and resistance-based metrics are popular and more robust than shortest-path distances, although they are not informative in certain large-graph limits~\cite{vonluxborg2014}. In~\cref{sec:gluedcycles}, we compare the effective resistance of~\cite{Young_2016b,Young_2016a} to the hitting probability metric on a particular example.
In~\cite{vollering2018}, a metric is developed on Markov chains. This metric gives the chain constant curvature, in an appropriate generalized sense. Distance in this metric is then related to the distinguishability after one step of random walks beginning at the two distinct nodes. The metric is constructed jointly with the curvature using a fixed point argument. It is expected to be useful in proving, for example, concentration inequalities for Markov chains.
Notions of diffusion distance to a set $B$ on undirected graphs have been explored recently for the connection Laplacian~\cite{singer2012vector} and for the graph Laplacian~\cite{cheng2019diffusion}. The notion of distance is determined by taking $\ell$ steps using the random walk generated by the symmetric graph adjacency matrix $A$ with degree matrix $D$, i.e.\ it counts the number of walks of length $2\ell$ from $i$ to $j$.
The diffusion distance from a vertex $i$ to a subgraph $B$ in~\cite{cheng2019diffusion} is defined as the smallest number of steps for all random walks started at $i$ to reach $B$. The work~\cite{singer2012vector} established that diffusion distances converge to geodesic distances in the high-density limit of random graphs on manifolds, and \cite{cheng2019diffusion} explored how eigenvectors relate to this notion of distance. Directed graphs have been represented as magnetic connection Laplacians on undirected graphs through a notion of polarization, see~\cite{fanuel2018magnetic}, after which a version of diffusion distance can be applied.
A variety of methods exist in machine learning to compute ``graph representations,'' which are learned embeddings of nodes, subgraphs, or entire (possibly directed) graphs into Euclidean space so that they can be fed into standard machine learning tools~\cite{hamilton17representation}. These can be seen as imposing a metric on directed graphs, with the main drawbacks relative to the hitting probability metric being model complexity, difficulty of interpretation, and difficulty of analysis.
In~\cite{malliaros13clustering}, existing symmetrization techniques for directed graphs are surveyed. In particular, we mention \cite{satuluri2011symmetrizations,zhou2005semi,lai2010extracting,chen2008clustering}. In each of these articles, clustering, community detection and/or semi-supervised learning techniques are considered on directed graphs using various symmetrizations, such as that of Fan Chung ({\it e.g.\/}~\cite{satuluri2011symmetrizations}) or using commute times similar to those in the effective resistance metric ({\it e.g.\/}~\cite{chen2008clustering}). Our results use $A^{(\mathrm{hp})}$ as a symmetrization, and we will see that this enables us to perform the tasks just mentioned, although with different and sometimes more helpful results.
In~\cite{fitch}, the metric of~\cite{Young_2016a,Young_2016b} is used as the basis for a digraph symmetrization technique. It is guaranteed to preserve effective resistances, possibly relying on negative entries. Rigorous applications to directed cut and graph sparsification are given.
\subsection*{Outline}
We prove \cref{t:KeyIdentity,l:symmetric,t:Metric} in \cref{s:Proofs}.
In \cref{s:CompMeth}, we describe computational methods to compute the normalized hitting probabilities matrix, $\Ahtb$.
In \cref{s:Examples}, we give some examples of the computed metric.
We conclude in~\cref{s:Disc}.
\section{Proofs and discussion of structural properties}\label{s:Proofs}
\subsection{Structure of the normalized hitting probabilities matrix}
\begin{proof}[Proof of \cref{t:KeyIdentity}]
The probability that $X_t$ starts from $i$ and hits $j$ at least $k+1$ times before returning to $i$ can be expressed as
\[
\mathbf{P}_i [\tau_j < \tau_i] \mathbf{P}_j {[\tau_j < \tau_i]}^k\,.
\]
We let $V_i^j$ be the number of times $X_t$ hits $j$ before returning to $i$,
$
V_i^j = \sum_{t=1}^{\tau_i} {1}_{X_t = j}\,.
$
Then, we have
\[
\mathbf{P}_i [\tau_j < \tau_i] \mathbf{P}_j {[\tau_j < \tau_i]}^k = \mathbf{P}_i [V_i^j \geq k+1]\,.
\label{r1}
\]
Now observe that
\begin{align}
&\label{geo} \sum_{k=0}^\infty \mathbf{P}_i [\tau_j < \tau_i] \mathbf{P}_j {[\tau_j < \tau_i]}^k =\sum_{k=0}^\infty \mathbf{P}_i [ V_i^j \ge k + 1]
=\sum_{k=0}^\infty \mathbf{E}_i \left[1_{V_i^j \ge k+1}\right] \\
&\hspace{1cm} = \mathbf{E}_i \left[\sum_{k=0}^\infty {1}_{V_i^j \geq k+1}\right]
= \mathbf{E}_i \left[\sum_{k=0}^\infty k 1_{V_i^j=k} \right]
= \sum_{k=0}^\infty k \mathbf{P}_i[V_i^j=k] \notag \\
&\hspace{1cm}= \mathbf{E}_i [ V_i^j ] \label{r3}\,.
\end{align}
The expectation in \cref{r3} is known to satisfy
\begin{equation}
\label{gammaid}
\mathbf{E}_i [ V_i^j ] = \frac{\phi_j}{\phi_i}\,,
\end{equation}
which is proved in, for example,~\cite[Theorem 1.7.6]{Norris_1997}.
On the other hand, we recognize the expression \cref{geo} as a geometric series and hence have
\begin{align*}
\sum_{k=0}^\infty \mathbf{P}_i [\tau_j < \tau_i] \mathbf{P}_j {[\tau_j < \tau_i]}^k
&= \mathbf{P}_i [\tau_j<\tau_i]{\left(1-\mathbf{P}_j[\tau_j<\tau_i]\right)}^{-1} \\
&= \mathbf{P}_i [\tau_j < \tau_i] \mathbf{P}_j {[\tau_i < \tau_j]}^{-1} = Q_{i,j} Q_{j,i}^{-1}\,.
\end{align*}
Combining this with~\cref{gammaid}, we arrive at $Q_{i,j} Q_{j,i}^{-1}=\frac{\phi_j}{\phi_i}$, which rearranges to~\cref{Qpi}.
\end{proof}
To prove~\cref{t:Metric}, we will need one more lemma.
\begin{lemma}%
\label{Qlemma}
The following inequality holds for pairwise distinct $i, j, k \in [n]$:
\begin{equation}
\label{Qineq}
Q_{i,j} \geq Q_{i,k} Q_{k,j}\,.
\end{equation}
\end{lemma}
\begin{proof}
Consider the corresponding auxiliary Markov process restricted to nodes $i$, $j$, and $k$ with $3\times 3$ transition matrix $F$, the elements of which we denote by, {\it e.g.\/}, $F_{i,j} = \mathbf{P}_i[\tau_j < \min \{ \tau_i, \tau_k\}]$ and $F_{i,i} = \mathbf{P}_i[\tau_i < \min \{\tau_j, \tau_k\}]$. That is, $F_{i,j}$ gives the probability of a random walker starting at $i$ eventually reaching $j$ before either reaching $k$ or returning to $i$, while $F_{i,i}$ gives the probability of a random walker starting at $i$ returning to $i$ before reaching either $j$ or $k$.
Since $F_{i,i}+F_{i,j}+F_{i,k}=1$, we have
\begin{equation*}
Q_{i,j} = F_{i,j} + \frac{ F_{i,k}F_{k,j}}{1-F_{k,k}}\,, \quad
Q_{i,k} = F_{i,k} + \frac{ F_{i,j}F_{j,k}}{1-F_{j,j}}\,, \quad
Q_{k,j} = F_{k,j} + \frac{ F_{k,i}F_{i,j}}{1-F_{i,i}}\,.
\end{equation*}
Hence, we observe
\begin{align*}
Q_{i,k} Q_{k,j} & = \left( F_{i,k} + \frac{F_{i,j} F_{j,k}}{1-F_{jj} } \right) \left( F_{k,j} + \frac{F_{k,i} F_{i,j}}{1-F_{ii} } \right) \\
& = F_{i,k}F_{k,j} + F_{i,j} \left( \frac{F_{j,k} F_{k,j}}{1-F_{jj} } + \frac{F_{i,k} F_{k,i}}{1-F_{ii} } + \frac{F_{j,k} F_{k,i} F_{i,j}}{(1-F_{ii}) (1-F_{jj} ) } \right)\,.
\end{align*}
Using
\[
\frac{F_{j,k}}{1-F_{j,j}} = \frac{F_{j,k}}{F_{j,i} + F_{j,k}} < 1\,,
\]
we then observe
\begin{align*}
Q_{i,k} Q_{k,j} & \leq F_{i,k}F_{k,j} + F_{i,j} \left( F_{k,j} + \frac{F_{i,k} F_{k,i}}{1-F_{ii} } + \frac{ F_{k,i} F_{i,j}}{1-F_{ii}} \right) \\
&= F_{i,k}F_{k,j} + F_{i,j} \left( F_{k,j} + F_{k,i} \frac{F_{i,k} + F_{i,j} }{1-F_{ii} } \right) \\
& = F_{i,k}F_{k,j} + F_{i,j} \left( F_{k,j} + F_{k,i} \right) \\
& = \left( \frac{F_{i,k} F_{k,j} }{1-F_{k,k}} + F_{i,j} \right) (1-F_{k,k}) \\
& = Q_{i,j}(1-F_{k,k}) \leq Q_{i,j}\,.
\end{align*}
\end{proof}
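An empirical sanity check of~\cref{Qlemma} on random dense chains; the computation of $Q$ below goes through mean hitting times and the fundamental matrix rather than~\cref{eq:Q}, and the function name is ours:

```python
import numpy as np

def hitting_Q(P):
    """Q[i, j] = P_i[tau_j < tau_i], via the fundamental matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1); b[-1] = 1.0
    phi = np.linalg.lstsq(A, b, rcond=None)[0]
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), phi))
    M = (np.diag(Z)[None, :] - Z) / phi[None, :]   # M[i, j] = E_i[tau_j]
    off = ~np.eye(n, dtype=bool)
    Q = np.zeros((n, n))
    Q[off] = 1.0 / (phi[:, None] * (M + M.T))[off]
    return Q

rng = np.random.default_rng(seed=0)
violations = 0
for _ in range(25):
    P = rng.random((5, 5))
    np.fill_diagonal(P, 0.0)      # positive off-diagonals keep P irreducible
    P /= P.sum(axis=1, keepdims=True)
    Q = hitting_Q(P)
    for i, j, k in np.ndindex(5, 5, 5):
        if len({i, j, k}) == 3 and Q[i, j] < Q[i, k] * Q[k, j] - 1e-9:
            violations += 1
```

No triple $(i, k, j)$ should violate $Q_{i,j} \geq Q_{i,k} Q_{k,j}$ up to floating-point slack.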
\subsection{Hitting probability metric}
In this section we establish~\cref{t:Metric}. In particular, we explore the notion that, much like effective resistance, the normalized hitting probabilities matrix provides a natural notion of distance on the digraph (or between states of a Markov Chain).
To begin, we recall the definition of $d^\beta$ from~\cref{e:Dist} and note that we have already established the symmetry $d^\beta(i,j) = d^\beta (j,i)$ for all $i,j$.
As seen from the statement of~\cref{t:Metric}, we will observe that the triangle inequality holds for all $\beta \ge \sfrac12$ and that positivity holds for all $\beta > \sfrac12$. In the case $\beta = \sfrac12$, $d^{\sfrac12}$ gives a pseudometric structure, as there can indeed exist structures in a directed graph or Markov chain on which $d^{\sfrac12} (i,j) = 0$ and $i \neq j$. As an example, consider the nodes on a cycle with in-degree and out-degree $1$. (See~\cref{s:Examples}.)
When $d^{\sfrac12}(i,j)=0$ there are specific structures that restrict all random walks leaving $i$ so that they must hit $j$ before returning to $i$. We show in~\cref{t:12bounds} that for any graph, there exists a canonical quotient graph on which $d^{\sfrac12}$ is indeed a metric that is closely related to $d^{\sfrac12}$ on the original graph. Let us now proceed to the proofs.
\begin{proof}[Proof of \cref{t:Metric}]
First, we show positivity for $i\ne j$. Note that $d^\beta(i,j) > 0$ iff $\Ahtb_{i,j} < 1$. Suppose to the contrary that
\[
1 \le \Ahtb_{i,j} = \frac{\phi_i^\beta}{\phi_j^{1-\beta} } Q_{i,j} = \frac{\phi_j^\beta}{\phi_i^{1-\beta} } Q_{j,i}\,,
\]
that is,
\[
\frac{\phi_j^{1-\beta}}{\phi_i^\beta} \le Q_{i,j} \quad \text{ and } \quad \frac{\phi_i^{1-\beta}}{\phi_j^\beta} \le Q_{j,i}\,.
\]
Then if $\beta > \sfrac12$, we have
\[
1 \ge Q_{i,j} Q_{j,i} \ge \phi_j^{1-2\beta} \phi_i^{1-2\beta} > 1\,,
\]
a contradiction. For $\beta=\sfrac12$, the last inequality above becomes an equality, so the argument only yields $\Ahtb[\sfrac12]_{i,j} \le 1$ and thus $d^{\sfrac12}(i,j) \ge 0$.
Symmetry follows from~\cref{l:symmetric}, and $d^\beta(i,i)=0$ is immediate, so all that remains is the triangle inequality.
To prove the triangle inequality, we observe for $i\neq j\neq k$ that
\begin{align}
d^\beta(i,j) &= - \log \left( \Ahtb_{i,j} \right) = -\beta\log\phi_i -(\beta-1)\log\phi_j - \log Q_{i,j} \notag \\
&= d^\beta(i,k) + d^\beta(k,j) + (2 \beta - 1) \log\phi_k + [\log Q_{i,k} + \log Q_{k,j} - \log Q_{i,j}]\,,
\label{e:triangle-slack}
\end{align}
which proves the triangle inequality for all $\beta \geq \sfrac12$: the bracketed term is nonpositive by~\cref{Qlemma}, and $(2 \beta - 1) \log\phi_k \le 0$ since $\phi_k \le 1$.
\end{proof}
We observe that the $2 \beta -1$ coefficient of $\log \phi_k$ in~\cref{e:triangle-slack} vanishes when $\beta=\sfrac12$, hence the triangle inequality is as tight as possible (since $Q_{i,k}Q_{k,j}/Q_{i,j} = 1$, {\it e.g.\/},~for a directed cycle graph).
From the above argument, we can see that the only obstruction to $d^{\frac12}$ being a metric is if there is a pair $i,j$ such that
\[
\Ahtb[\sfrac12]_{i,j}=\sqrt{ \frac{\phi_j}{\phi_i} } Q_{j,i} = \sqrt{ \frac{\phi_i}{\phi_j} } Q_{i,j} = 1\,.
\]
In this case,
\[ Q_{j,i} = \sqrt{ \frac{\phi_i}{\phi_j} } \quad\text{ and }\quad Q_{i,j} = \sqrt{ \frac{\phi_j}{\phi_i} }\,. \]
Thus, $Q_{i,j} Q_{j,i} =1$,
which, as they are both probabilities, means in fact $Q_{i,j} = Q_{j,i} =1$.
Hence, also $\phi_i = \phi_j$.
\begin{obs}
The condition $\phi_i = \phi_j$ is not an extra restriction beyond $Q_{i,j} = Q_{j,i} =1$: if $Q_{i,j} =Q_{j,i}=1$ then a random walker must visit $i$ every time it visits $j$ (and vice versa), and hence the invariant probabilities of sites $i$ and $j$ must be equal.
\end{obs}
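For instance, on a deterministic directed $n$-cycle every commute from $i$ passes through every other node, so $Q_{i,j}=Q_{j,i}=1$ for all pairs and $\phi$ is uniform; $d^{\sfrac12}$ then vanishes off the diagonal, while $d^\beta$ is constantly $(2\beta - 1)\log n$ off the diagonal. A numerical sketch, recovering $Q$ from mean hitting times via the fundamental matrix (a route that remains valid here even though the chain is periodic):

```python
import numpy as np

n = 6
P = np.roll(np.eye(n), 1, axis=1)   # deterministic directed n-cycle
phi = np.full(n, 1.0 / n)           # uniform invariant distribution
# Hitting probabilities via mean hitting times M[i, j] = E_i[tau_j].
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), phi))
M = (np.diag(Z)[None, :] - Z) / phi[None, :]
off = ~np.eye(n, dtype=bool)
Q = np.zeros((n, n))
Q[off] = 1.0 / (phi[:, None] * (M + M.T))[off]

def d(beta):
    """Distance matrix d^beta built from phi and Q as in the text."""
    Ahp = np.eye(n)
    Ahp[off] = (phi[:, None] ** beta / phi[None, :] ** (1 - beta) * Q)[off]
    return -np.log(Ahp)
```

Here $d^{\sfrac12}$ is identically zero, while $d^1(i,j) = \log n$ for all $i \ne j$.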
\subsection{Structure theory of digraphs where \texorpdfstring{$d^{\sfrac12}$}{d} is not a metric}%
\label{structure}
In this subsection, we investigate the structure of graphs where $d^{\sfrac12}$ is not a metric, which we refer to as ($d^{\sfrac12}$-) degenerate. This is useful for understanding our metric embedding and is foundational for~\cref{quotients}, where we derive the quotienting procedure to repair graph degeneracies. In this section, we first give a general construction to produce degenerate graphs and show that all degenerate graphs can be constructed in this way. Next, we give a general decomposition of degenerate graphs into equivalence classes and their segments.
\subsubsection{A general construction for degenerate graphs}
A simple example of a graph with $Q_{i,j}=Q_{j,i}=1$ is a closed, directed cycle. However, we also have the following much more general construction:
Take any two directed acyclic graphs (DAGs), $G_1$ and $G_2$. Connect all the leaves (sinks/nodes of out-degree zero) of $G_1$ to $i$ and all the leaves of $G_2$ to $j$. Connect $j$ to all the roots (sources/nodes of in-degree zero) of $G_1$ and $i$ to all the roots of $G_2$. Possibly add edges between $i$ and $j$. Then, for each node $k$ except $i$ and $j$, replace it with an arbitrary strongly connected graph $H_k$ (corresponding to an irreducible Markov chain), replacing each edge to (from) $k$ with at least one edge to (from) a node in $H_k$. The resulting graph is strongly connected and has $i$ and $j$ only reachable through each other.
In fact, all graphs with $Q_{i,j}=Q_{j,i}=1$ can be constructed this way. To see this, note that $i$ and $j$ must have positive in- and out-degree by strong connectedness. Let $C_i$ be the set of nodes reachable from $i$ without passing through $j$, and define $C_j$ similarly. Consider the following claim:
\begin{claim}
$C_i \cup C_j \cup \{i,j\}$ includes all nodes of the graph, and $C_i\cap C_j$ is empty.
\end{claim}
\begin{proof}
For the first part, consider a fixed node $k$ which is not $i$ or $j$, together with a shortest path $C_{ik}$ from $i$ to $k$. If $C_{ik}$ does not pass through $j$, then $k\in C_i$; otherwise, $k$ is reachable from $j$ without passing through $i$ (since $C_{ik}$ is a shortest path), so $k\in C_j$.
For the second part, assume otherwise; that is, pick $k \in C_i \cap C_j$. Then there exist (1) a path $C_{ik}$ from $i$ to $k$ not passing through $j$, (2) a path $C_{jk}$ from $j$ to $k$ not passing through $i$, and (3) a path $C_{kj}$ from $k$ to $j$ (by strong connectedness). Assume WLOG that $C_{kj}$ passes through $j$ only at the end. $C_{kj}$ cannot pass through $i$ since otherwise $C_{ik}+C_{kj}$ contains a walk from $i$ to $i$ without passing through $j$, violating $Q_{i,j}=1$.
But then $C_{jk}+C_{kj}$ would be a walk from $j$ to $j$ that does not pass through $i$, contradicting $Q_{j,i}=1$.
\end{proof}
Since $C_i$ and $C_j$ can only be connected through $i$ and $j$, removing $i$ and $j$ disconnects these two sets. Now consider the subgraphs induced by $C_i$ and $C_j$, respectively. As can be done with any directed graph, we reduce each of these subgraphs to their quotients under the mutual reachability equivalence relation, yielding a pair of DAGs. The next subsection generalizes this decomposition to account for all nodes for which $d^{\sfrac12}$ vanishes rather than a single pair.
\subsubsection{Decomposition into equivalence classes and segments}
Consider an equivalence class $\alpha=\{a_1,a_2,\ldots,a_K\}$ of nodes under the equivalence relation $i\sim j \Leftrightarrow d^{\sfrac12}_{i,j}=0$.
We refer to a node in a non-singleton equivalence class as \emph{($d^{\sfrac12}$-) degenerate}. A graph is $d^{\sfrac12}$-degenerate if it has a degenerate node.
\begin{definition}\leavevmode
\begin{itemize}
\item A \emph{walk} is a sequence of nodes $\{i_1,i_2,\ldots,i_K\}$ such that $P_{i_k,i_{k+1}}\!\!>\!0$, for $1 \le k<K$.
\item A walk is \emph{closed} if $i_K = i_1$.
\item A closed walk is a \emph{commute} from $i_1$ if $i_{k} \ne i_1$, for $1<k<K $.
\item A walk is a \emph{path} if $i_k\ne i_{k'}$ when $k'\ne k$. A commute is a \emph{cycle} if it is a path when the last element is removed.
\end{itemize}
\end{definition}
\begin{lemma}
A commute from $a_k\in \alpha$ must include each of the other members of $\alpha$ exactly once in an order that depends only on the graph.
\end{lemma}
\begin{proof}
The proof is in three assertions. We assume without loss of generality that commutes are from $a_1$.
First, each $a_k$ is visited. This is the same as claiming that $Q_{a_1,a_k} = 1$, which was shown in the proof of~\cref{t:Metric}.
Second, each $a_k$ is visited at most once, since $Q_{a_k,a_1} = 1$.
Lastly, each $a_k$ is visited in a fixed order: Let $J_1$ and $J_2$ be commutes from $a_1$ that visit, respectively, $a_2$ before $a_3$ and vice versa. Then let $J_1'$ and $J_2'$ be the sub-walks from $a_1$ to $a_2$ and from $a_2$ to $a_1$ in $J_1$ and $J_2$, respectively. The concatenation of $J_1'$ and $J_2'$ is thus a commute from $a_1$ that does not visit $a_3$, a contradiction.
\end{proof}
In the rest of~\cref{structure}, we assume that each equivalence class under $\sim$ is sorted so that commutes from its first element visit the members in that order. Similarly, if the $K$ elements of an equivalence class are numbered $a_1,\ldots,a_K$, we naturally identify $a_x = a_{x \bmod K}$.
\begin{lemma}%
\label{seg_lem}
Given an equivalence class $\alpha$ under $\sim$, for each $j\in G - \alpha$ there is a unique $k$ such that all walks $J$ containing $j$ with $\alpha\cap J \ne \varnothing$ include either $a_{k-1}$ before $j$ or $a_{k}$ after $j$.
\end{lemma}
\begin{proof}
It is enough to consider paths. Suppose $J_1$ and $J_2$ are two paths from $j$ which reach $a_k$ and $a_{k'}$, respectively, before reaching any other elements of $\alpha$, with $k\ne k'$. By strong connectivity, we can select a shortest path from $a_k$ to $j$ to extend $J_1$ to a cycle from $j$, which we call $J_3$. That is, if we select a shortest (in number of distinct steps) path, $\gamma$, from $a_k$ to $j$ then $J_1 \cup \gamma= J_3$ is the required extension of $J_1$.
Now, $J_3$ can be cyclically reordered to be a commute from $a_k$. Thus, $J_3$ includes $a_{k'}$, and since $\gamma$ was shortest possible, it includes $a_{k'}$ exactly once. Let $J_4 \subset J_3$ be the sub-walk from $a_{k'}$ to $j$. Then concatenating $J_4$ and $J_2$ gives a commute from $a_{k'}$ that does not include $a_k$, a contradiction. Thus, $j$ has a unique successor $a_k$ in $\alpha$.
The conclusion that there is a unique predecessor of $j$ in $\alpha$ follows by reversing the direction of all edges and re-applying the above argument. It must be $a_{k-1}$ since $a_k$ is the first member of the equivalence class encountered in any commute from $j$. \end{proof}
\begin{definition}
We will here refer to the equivalence classes on $G-\alpha$ induced by \cref{seg_lem} as \emph{($\alpha$-) segments} of $G$.\footnote{Alternatively, we could define segments more generally with respect to any node set $\alpha$. Then the segment corresponding to $i \in \alpha$ is the set of nodes reachable from $i$ without passing through any other elements of $\alpha$. From this perspective, the absolute segments described later are simply the intersection of the segments with respect to all the equivalence classes.}
\end{definition}
\begin{lemma}
Given distinct equivalence classes $\alpha$ and $\beta$ under $\sim$, every element of $\alpha$ must lie within a single segment induced by $\beta$.
\end{lemma}
\begin{proof}
Let $\alpha = \{a_1,a_2,\ldots,a_{K_{\alpha}}\}$ and $\beta = \{b_1,b_2,\ldots,b_{K_{\beta}}\}$.
Suppose, by way of contradiction, that $a_1$ lies between $b_{k_1}$ and $b_{k_1+1}$ and $a_2$ lies between $b_{k_2}$ and $b_{k_2+1}$ for $k_1 \ne k_2$. By strong connectedness, there exists a (shortest) path from $b_{k_1}$ to $a_1$ to $b_{k_1+1}$. If $b_{k_1+1}$ and $b_{k_2}$ are distinct nodes, there also exists a shortest path from $b_{k_1+1}$ to $b_{k_2}$. Since $Q_{b_{k_2},a_2} < 1$, there exists a shortest path from $b_{k_2}$ to $b_{k_2+1}$ not passing through $a_2$. Finally, if $b_{k_2+1}$ and $b_{k_1}$ are distinct nodes, there exists a shortest path from $b_{k_2+1}$ to $b_{k_1}$. Concatenating all these paths gives a commute from $a_1$ to itself not passing through $a_2$, a contradiction.
\end{proof}
The foregoing lemmata show that the nontrivial equivalence classes in a $d^{\sfrac12}$-degenerate digraph induce a structure of equivalence cycles and their segments, with distinct equivalence cycles restricted to lie within the segments of each other. This has potential application in segmentation of directed graphs and will be an important technical tool in the proofs in the next subsection.
\subsection{Quotients of \texorpdfstring{$d^{\sfrac12}$}{d}-degenerate Markov chains}%
\label{quotients}
Next, we develop a way to transform a Markov chain $X$ for which $d^{\sfrac12}$ is not a metric into a quotient Markov chain $X'$, for which $d^{\sfrac12}$ is a metric.
\begin{rem}
In~\cref{quotients}, we identify singleton classes with their member.
Additionally, we append a prime to any symbol when it is meant to refer to $X'$ rather than $X$.
\end{rem}
The quotient graph is given by the following construction, which has appeared in~\cite{mitavskiy08} as well as in~\cite{madras_2002_decomposition,martin_2000_staircase,caracciolo_1992_tempering}, and possibly other places.
\begin{definition}%
\label{d:quotient}
Given a Markov chain $X$ and an equivalence relation on the states of $X$, the \emph{quotient Markov chain} has one state for each equivalence class, and the transition probabilities are given by
\[
P_{U,V}' = \frac{1}{\phi_U} \sum_{i\in U} \phi_i P_{i,V}
= \frac{1}{\phi_U} \sum_{i\in U} \sum_{j\in V} \phi_i P_{i,j}\,,
\]
where $\phi_U = \sum_{i \in U} \phi_i$.
\end{definition}
The map that sends $X$ to $X'$ is denoted $\iota$.
It can be shown~\cite{mitavskiy08} that the invariant measure on $X'$ evaluated at state $U$ is $\phi'_U = \phi_U$.
Furthermore, $P'$ carries information about the equilibration rate in ergodic chains~\cite{martin_2000_staircase,madras_2002_decomposition,caracciolo_1992_tempering}, although we do not use this fact in this paper.
When applying~\cref{d:quotient} to $\sim$, the definition reduces to
\[
P_{U,V}' = \frac{1}{|U|} \sum_{i\in U} \sum_{j\in V} P_{i,j}\,,
\]
since $\phi$ is constant within equivalence classes (see proof of~\cref{t:Metric}).
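To make \cref{d:quotient} concrete, the following NumPy sketch (the chain and partition below are hypothetical illustrations, not taken from the paper's experiments) implements the quotient construction and applies it to a directed $3$-cycle quotiented by the partition $\{\{0\},\{1,2\}\}$:

```python
import numpy as np

def quotient_chain(P, classes, phi):
    """Quotient transition matrix: P'_{U,V} = (1/phi_U) sum_{i in U, j in V} phi_i P_{i,j}."""
    k = len(classes)
    Pq = np.zeros((k, k))
    for a, U in enumerate(classes):
        phi_U = phi[U].sum()
        for b, V in enumerate(classes):
            Pq[a, b] = (phi[U][:, None] * P[np.ix_(U, V)]).sum() / phi_U
    return Pq

# Hypothetical example: directed 3-cycle (uniform invariant measure),
# quotiented by the partition {{0}, {1, 2}}.
P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
phi = np.full(3, 1.0 / 3.0)
Pq = quotient_chain(P, [[0], [1, 2]], phi)
```

Note that each row of the quotient still sums to one, so the construction returns a bona fide transition matrix.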
\begin{lemma}[Quotienting one class at a time]
Let $\sim$ induce the non-singleton classes $\{\alpha_1,\alpha_2,\ldots,\alpha_L\}$. For a node set $S$, let $\sim_{S}$ be the relation whose only non-singleton class is $S$, keeping all other nodes in individual (singleton) classes. Performing the quotienting operations one class at a time, $P \underset{\sim_{\alpha_1}}{\to} P_1 \underset{\sim_{\alpha_2}}{\to} \cdots \underset{\sim_{\alpha_L}}{\to} P_L$, produces a graph with the same nodes as $P'$; moreover, $P_L = P'$, after identifying nested classes with the nodes in them, {\it e.g.\/}, $\{\{a\},\{b\}\} \to \{a,b\}$.%
\label{l:order}
\end{lemma}
\begin{proof}
The proof is by induction on $L$. For $L=1$, the claim holds by definition. Assume the result holds for graphs with $L$ non-singleton equivalence classes; we establish it for $L+1$ such classes. Let $G$ have classes $\{\alpha_1,\ldots,\alpha_{L+1}\}$ under $\sim$. Applying $\sim_{\alpha_1}$ yields $P_1$, and the inductive hypothesis applied to $P_1$ gives $P_{L+1} = P_1'$. It therefore remains to prove that $P_1'=P'$. We have
\[
{P_1'}_{\alpha,\beta} =
\begin{cases}
P_{\alpha,\beta} & \alpha\ne\alpha_1,\beta\ne\alpha_1 \\
\sum_{j\in\beta} {P_1}_{\alpha_1,j} & \alpha=\alpha_1,\beta\ne\alpha_1 \\
\frac1{|\alpha|} \sum_{i\in\alpha} {P_1}_{i,\alpha_1} & \alpha\ne\alpha_1,\beta=\alpha_1\\
{P_1}_{\alpha_1,\alpha_1} & \alpha=\alpha_1=\beta\, ,
\end{cases}
\]
where we have implicitly used the fact that the invariant measure of $P_1$ is obtained from $\phi$ by summing within the collapsed class, as in~\cref{d:quotient}.
Expanding further gives
\[
{P_1'}_{\alpha,\beta} =
\begin{cases}
P_{\alpha,\beta} & \alpha\ne\alpha_1,\beta\ne\alpha_1 \\
\sum_{j\in\beta} \frac1{|\alpha_1|}\sum_{i\in\alpha_1} P_{i,j} & \alpha=\alpha_1,\beta\ne\alpha_1 \\
\frac1{|\alpha|} \sum_{i\in\alpha} \sum_{j\in\alpha_1} P_{i,j} & \alpha\ne\alpha_1,\beta=\alpha_1\\
\frac1{|\alpha_1|} \sum_{i\in\alpha_1,j\in\alpha_1} P_{i,j} & \alpha=\alpha_1=\beta\,.
\end{cases}
\]
Rearranging sums, in every case we obtain
\[
{P_1'}_{\alpha,\beta} =
\frac1{|\alpha|} \sum_{i\in\alpha,j\in\beta} P_{i,j} = P'_{\alpha,\beta}\,,
\]
as desired.
\end{proof}
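The order-independence asserted in \cref{l:order} is easy to check numerically. The sketch below uses the uniform-weight formula $P'_{U,V} = \frac{1}{|U|}\sum_{i\in U,j\in V} P_{i,j}$ on an arbitrary random chain with two hypothetical non-singleton classes, collapsing them jointly and then one at a time (this is a purely algebraic check of the formula; for a random chain, $\phi$ need not actually be constant on the classes):

```python
import numpy as np

def collapse(P, classes):
    # Uniform-weight quotient: P'_{U,V} = (1/|U|) * sum_{i in U, j in V} P_{i,j}
    k = len(classes)
    Pq = np.zeros((k, k))
    for a, U in enumerate(classes):
        for b, V in enumerate(classes):
            Pq[a, b] = P[np.ix_(U, V)].sum() / len(U)
    return Pq

# Arbitrary random 5-state chain with hypothetical classes {0,1} and {3,4}.
rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

# Collapse both classes at once ...
joint = collapse(P, [[0, 1], [2], [3, 4]])
# ... or one class at a time (state order after the first step: {0,1}, 2, 3, 4).
stepwise = collapse(collapse(P, [[0, 1], [2], [3], [4]]), [[0], [1], [2, 3]])
```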
\begin{lemma}%
\label{l:single-collapse}
Collapsing a single equivalence class $\alpha$ respects $Q$ in the following sense. Let $i$ and $j$ be two non-equivalent nodes.
\begin{itemize}
\item If $i$ and $j$ lie in the same $\alpha$-segment, then $Q_{i,j} = Q'_{i,j}$.
\item If $i$ and $j$ lie in different $\alpha$-segments, then $\frac{1}{2} Q_{i,j} < Q'_{i,j} < Q_{i,j}$.
\item If $i\in\alpha$, then $Q_{i,j} = |\alpha| Q'_{\alpha,j}$.
\item If $j\in\alpha$, then $Q_{i,j} = Q'_{i,\alpha}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $\alpha=\{a_1,\ldots,a_K\}$, where $K = |\alpha|$. It is clear that $Q_{i,j}$ is unaffected by taking quotients if $i$ and $j$ lie in the same $\alpha$-segment or if $j\in\alpha$.
For $i=a_k\in\alpha$, we know that $Q_{a_k,j} = Q_{a_{\ell},j}$ for all $\ell$, so WLOG assume that $j$ lies in the segment between $a_k$ and $a_{k+1}$.
Now, let us denote by $Q_{i_1,i_2,i_3}$ the probability of a random walker starting at $i_1$ and reaching $i_2$ before reaching $i_3$ (in particular, $Q_{i,j} = Q_{i,j,i}$). Then,
\begin{align*}
Q'_{\alpha,j}
&= P'_{\alpha,j} + \sum_{i'\ne j,\alpha} P'_{\alpha,i'} Q'_{i',j,\alpha}
= \frac1K P_{i,j} + \frac1K \sum_{\ell=1}^K \sum_{i'\ne j, i'\notin \alpha} P_{a_{\ell},i'} Q_{i',j,\alpha} \\
&= \frac1K P_{i,j} + \frac1K \sum_{i'\ne j, i' \notin \alpha} P_{a_k,i'} Q_{i',j,\alpha}
= \frac1K P_{i,j} + \frac1K \sum_{i'\ne j,i} P_{i,i'} Q_{i',j,i}
= \frac1K Q_{i,j}\,.
\end{align*}
Finally, let $i$ and $j$ be such that any path from $i$ to $j$ must pass through $a_1,\ldots,a_K$ before encountering $j$. Then the following reasoning applies.
Let $a=a_1,b=a_K$, $x = Q_{a,b,j}$ and $y = Q_{b,i,a}$. Then we have
\begin{align*}
Q_{i,j} &= Q_{i,a,i} Q_{a,j,i}\,, \\
Q_{a,j,i} &= (1-x) + x Q_{b,j,i}\,, \\
Q_{b,j,i} &= (1-y)Q_{a,j,i}\,.
\end{align*}
Solving for $Q_{i,j}$ yields
\[
Q_{i,j} = Q_{i,a}\frac{1-x}{1-x+xy}\,.
\]
On the quotiented graph, we also have
\begin{align*}
Q'_{i,j} &= Q_{i,a}\, Q'_{\alpha,j,i}\,, \\
Q'_{\alpha,j,i} &= \frac{1}{2} \left[ 1-x + x Q'_{\alpha,j,i}\right]
+ \frac{1}{2} \left[ (1-y) Q'_{\alpha,j,i} \right]\,.
\end{align*}
Hence,
$
Q'_{i,j} = Q_{i,a}\frac{1-x}{1-x+y}
$
and thus $Q'_{i,j} < Q_{i,j}$.
Furthermore, we can bound the ratio
\begin{equation}
\frac{Q_{i,j}}{Q'_{i,j}} = \frac1{1-\frac{(1-x)y}{1-x+y}}\,.
\label{rat}
\end{equation}
Since the function $g(x_1,x_2) = \frac{x_1x_2}{x_1+x_2}$ is bounded above by $\frac{1}{2}$ on ${(0,1)}^2$, the ratio in~\cref{rat} cannot exceed $2$, which gives the bound.
The bound is tight because all values of $x$ and $y$ can be attained when considering arbitrary weighted graphs. (A graph with only the four nodes $a,b,i,j$ mentioned in the proof and edges $a\to j$, $a\to b$, $j\to b$, $b\to i$, $b\to a$, and $i\to a$ suffices to attain all possible values of $x,y$.)
\end{proof}
\begin{definition}
An \emph{absolute segment} is a maximal set of nodes which lie in the same segment with respect to all non-singleton equivalence classes.
\end{definition}
\begin{lemma}
$\iota$ respects $Q$ in the following sense for nodes $i$ and $j$ in distinct equivalence classes $\alpha$ and $\beta$:
\begin{itemize}
\item If $i$ and $j$ lie in the same absolute segment, then $Q_{i,j} = Q'_{i,j}$.
\item Otherwise, $\frac{1}{2^{c}|\alpha|} Q_{i,j} \le Q'_{\alpha,\beta} < Q_{i,j}$, where $c$ is the number of equivalence classes with respect to which $i$ and $j$ lie in different segments. (In particular, $c<L$.) Equality in the lower bound holds only when $c=0$.
\end{itemize}
\end{lemma}
\begin{proof}
If $i$ is degenerate, first collapse $\alpha$, scaling $Q_{i,j}$ by $|\alpha|$. Next, collapse all other equivalence classes one at a time, further scaling $Q_{i,j}$ by the appropriate factor in $(\frac{1}{2},1)$ whenever $i$ and $j$ lie in different segments with respect to the collapsing class.
\end{proof}
From this lemma we immediately get the following theorem.
\begin{theorem}\label{t:12bounds}
$X'$ is a metric space with metric $(d')^{\sfrac12}$. In particular, for $i\in\alpha$ and $j\in\beta$, with $\alpha\ne\beta$:
\begin{itemize}
\item If $i$ and $j$ lie in the same absolute segment, then $d^{\sfrac12}_{i,j} = (d')_{i,j}^{\sfrac12}$.
\item Otherwise, $d_{i,j} < d_{\alpha,\beta}' \le d_{i,j} + \frac{1}{2} \log{|\alpha||\beta|} + c\log 2$, where $c$ is the number of equivalence classes with respect to which $i$ and $j$ lie in different segments. Equality in the upper bound holds only when $c=0$.
\end{itemize}
\end{theorem}
Thus, $\iota$ pushes apart the different absolute segments. All other distances are unaffected.
\begin{rem}
$\iota$ is analogous to a rigid motion on each absolute segment, in that none of the in-absolute-segment distances are distorted.
\end{rem}
\section{Computational methods}\label{s:CompMeth}
To compute the normalized hitting probabilities matrix and metric structure on a Markov chain (or network) consisting of $n$ nodes/states with probability transition matrix $P$, we require only the computation of the invariant measure and the $Q$ matrix. The invariant measure can be computed using iterative eigenvector methods, which need $O(m)$ operations per iteration for $m$ edges.
We briefly recall the result of~\cite[Theorem 5]{Thiede_2015}, which shows that the $Q$ matrix can be computed in $O(n^3)$ time. The key idea from~\cite[Lemma 5]{Thiede_2015} is that one can compute
\begin{equation} \label{eq:Q}
Q_{i,j} (P) = \frac{e_i^T {(I - P_j)}^{-1} P_j e_j}{e_i^T {(I - P_j)}^{-1} e_i} = \frac{ {M(j)}^{-1}_{i,j}}{ {M(j)}^{-1}_{i,i}}\,,
\end{equation}
where $e_j\in \mathbb R^n$ is the vector with a $1$ in the $j$th entry and zeros elsewhere, $P_j = (I-e_j e_j^T) P \in \mathbb R^{n\times n}$, and the invertible matrix
$M(j) = I - P + e_j e_j^T P\in \mathbb R^{n\times n}$. See Theorem $5$ of~\cite{Thiede_2015} for full details; the identity follows from observing that $M(j)$, as defined, is invertible with inverse
\[
M(j)^{-1} = \begin{pmatrix}
(I-P_j)^{-1} & (I-P_j)^{-1} P_j e_j \\
0 & 1
\end{pmatrix}
\]
given in block form on the $e_j^\perp$, $e_j$ basis.
If we compute $M(1)^{-1}$ on the way to obtaining the first column $Q_{i,1} = {M(1)}^{-1}_{i1}/{M(1)}^{-1}_{ii}$,
then each $M(j)$ is a rank-$2$ perturbation of $M(1)$, and we can apply the Sherman-Morrison-Woodbury identity to compute $M(j)^{-1}$. Since we only access $2n-2$ elements of $M(j)^{-1}$, the full $O(n^2)$-time Sherman-Morrison-Woodbury update is not needed, and we can obtain the $j$th column $Q_{i,j}$ in $O(n)$ operations from ${M(1)}^{-1}$. A \textsc{MATLAB} implementation of this procedure, along with code for all of the numerical experiments described in the paper, is available at~\url{https://github.com/zboyd2/hitting_probabilities_metric}.
The matrix $Q$ encodes the hitting probabilities of a random walk on the nodes of a graph, and the complexity of the method we present here is well documented in~\cite{Thiede_2015}. Several works consider the computational complexity of the related problem of commute times; see, for instance,~\cite{li2010random,boley2011commute}. The cost of computing hitting probabilities through inversion of the Laplacian has been explored further in~\cite{golnari2019markov,cohen2016faster,cohen2018solving}, which show that in some cases the computation can be performed in better than $O(n^3)$ time. As we are mostly interested in the construction of the metric here, we do not further explore the question of optimal computational complexity.
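As a concrete transcription of~\cref{eq:Q}, the sketch below computes $Q$ one column at a time by direct inversion (i.e., without the Sherman-Morrison-Woodbury acceleration described above); it exploits the fact that row $j$ of $M(j)$ is simply $e_j^T$, since the $-P$ and $+P$ contributions cancel there. The directed-cycle check at the end is a hypothetical sanity test:

```python
import numpy as np

def hitting_prob_matrix(P):
    """Q[i, j] = M(j)^{-1}_{i,j} / M(j)^{-1}_{i,i}, with M(j) = I - P + e_j e_j^T P."""
    n = P.shape[0]
    Q = np.zeros((n, n))
    for j in range(n):
        M = np.eye(n) - P
        M[j, :] = 0.0
        M[j, j] = 1.0  # row j of M(j) reduces to e_j^T
        Minv = np.linalg.inv(M)
        Q[:, j] = Minv[:, j] / np.diag(Minv)
    return Q

# Sanity check on a directed 4-cycle: a walker started at i always reaches j
# before returning to i, so every off-diagonal entry of Q is 1 (the diagonal
# is trivially 1 here as well).
P = np.roll(np.eye(4), 1, axis=1)
Q = hitting_prob_matrix(P)
```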
\section{Examples}\label{s:Examples}%
We consider examples of Markov chains and directed graphs to illustrate the proposed metric. We start with simple graphs for which the calculations can be performed exactly. We then numerically explore a variety of synthetic graphs and a real-world example defined from New York City taxi cab data.
\subsection{Exact formulations}%
\label{sec:glued}
Here we consider some simple graphs on which the invariant measure and hitting probabilities can be computed exactly to help us understand $\Ahtb$ and $d^{\beta}$.
\begin{enumerate}
\item \emph{Directed cycles:} Consider a directed cycle on $n$ nodes. Then $\phi_i = \sfrac{1}{n}$ for all $i$, and $Q_{i,j} = 1$ for all $i\ne j$. Therefore, $\Ahtb$ is a weighted clique, and $d^{\beta}$ makes all points equidistant. For $\beta=\sfrac12$, the weights equal $1$, and all nodes are identified with each other in the metric topology.
\item \emph{Complete graphs:} Consider a complete graph on $n>2$ nodes. Then $\phi_i = \sfrac{1}{n}$ for all $i$, and $Q_{i,j} = \mathrm{const} < 1$ for all $i \ne j$. Therefore, $\Ahtb$ is a weighted clique. Unlike the directed cycle case, the weights in the clique are $<1$ for all $\beta\geq\sfrac12$.
\item \emph{Glued cycles:}
Consider graphs of the type depicted in~\cref{fig:glue}, namely graphs composed of $n_b$ ``backbone nodes'' forming a directed chain, which then branches into $C$ chains of length $n_c$, each of which finally connects back to the beginning of the backbone chain. Intuitively, a random walker on this graph transitions between $C+1$ groups of nodes, namely, each of the $C$ branches and the backbone. As illustrated in~\cref{tab:glue}, our metric captures this intuition by placing each node very close to the others on its chain. This is in contrast to commute-time-based metrics, where the length of the chain must be taken into account. (See~\cref{fig:effRes}.)
In~\cref{sec:num_ex}, we consider some numerical results based on this example.
\end{enumerate}
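The first two closed-form cases above can be confirmed numerically. The following sketch recomputes $Q$ via the identity in~\cref{eq:Q} and checks that $Q \equiv 1$ for a directed cycle, while the complete graph $K_4$ gives a constant value strictly below $1$ off the diagonal:

```python
import numpy as np

def Q_matrix(P):
    # Q[i, j] = M(j)^{-1}_{i,j} / M(j)^{-1}_{i,i}, with M(j) = I - P + e_j e_j^T P
    n = P.shape[0]
    Q = np.zeros((n, n))
    for j in range(n):
        M = np.eye(n) - P
        M[j, :] = 0.0
        M[j, j] = 1.0  # row j of M(j) reduces to e_j^T
        Minv = np.linalg.inv(M)
        Q[:, j] = Minv[:, j] / np.diag(Minv)
    return Q

# Directed 5-cycle: Q_{i,j} = 1 for all i != j.
Pc = np.roll(np.eye(5), 1, axis=1)
Qc = Q_matrix(Pc)

# Complete graph on 4 nodes: Q_{i,j} is constant off the diagonal; by a short
# first-step analysis the constant is 1/3 + (2/3)*(1/2) = 2/3 < 1.
Pk = (np.ones((4, 4)) - np.eye(4)) / 3
Qk = Q_matrix(Pk)
off = Qk[~np.eye(4, dtype=bool)]
```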
\begin{table}
\centering
\begin{tabular}{llcccc}
\toprule
$i$ & $j$ & $Q$ & $\frac{\Ahtb[\beta]}{(n_b + n_c)^{1-2\beta}}$ & $\Ahtb[\sfrac12]$ & $d^\frac12$ \\
\midrule
branch & same branch & $1$ & $\sfrac1{C^{2\beta-1}}$ & $1$ & $0$ \\\addlinespace[2pt]
branch & different branch & $\sfrac12$ & $\sfrac1{2 C^{2\beta-1}}$ & $\sfrac12$ & $\log 2$ \\\addlinespace[2pt]
backbone & branch & $\sfrac1C$ & $\sfrac1{C^{\beta}}$ & $C^{-\sfrac12}$ & $\sfrac12 \log C$ \\\addlinespace[2pt]
branch & backbone & $1$ & $\sfrac1{C^{\beta}}$ & $C^{-\sfrac12}$ & $\sfrac12 \log C$ \\\addlinespace[2pt]
backbone & backbone & $1$ & $1$ & $1$ & $0$ \\\addlinespace[2pt]
\bottomrule
\end{tabular}
\caption{Values of $Q$, $\Ahtb$, and $d$ evaluated at distinct nodes $i$ and $j$ for the glued cycles example from~\cref{sec:glued}. We include extra columns for the case $\beta=\sfrac12$, which is particularly interpretable. Observe that neither $\Ahtb$ nor $d^{\beta}$ depends on $n_b$ or $n_c$ (except up to scaling), which is a manifestation of their blindness to walk length. Also, the nodes that are closest together are those which lie on common chains. Note that we scaled $\Ahtb$ for visual clarity. The invariant measure is easily verified to be $\left(n_b + n_c\right)^{-1}$ on the backbone and $\left(C (n_b + n_c)\right)^{-1}$ elsewhere.}%
\label{tab:glue}
\end{table}
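The $Q$ column of~\cref{tab:glue} can be reproduced numerically. The sketch below builds the glued-cycles graph with the (hypothetical) parameters $n_b=3$, $n_c=4$, $C=3$ and computes $Q$ via the identity in~\cref{eq:Q}:

```python
import numpy as np

def Q_matrix(P):
    # Q[i, j] = M(j)^{-1}_{i,j} / M(j)^{-1}_{i,i}, with M(j) = I - P + e_j e_j^T P
    n = P.shape[0]
    Q = np.zeros((n, n))
    for j in range(n):
        M = np.eye(n) - P
        M[j, :] = 0.0
        M[j, j] = 1.0
        Minv = np.linalg.inv(M)
        Q[:, j] = Minv[:, j] / np.diag(Minv)
    return Q

nb, nc, C = 3, 4, 3
n = nb + C * nc
A = np.zeros((n, n))
for i in range(nb - 1):          # backbone chain 0 -> 1 -> ... -> nb-1
    A[i, i + 1] = 1.0
for c in range(C):               # branch c occupies nodes nb+c*nc, ..., nb+(c+1)*nc-1
    first = nb + c * nc
    A[nb - 1, first] = 1.0       # last backbone node feeds every branch
    for i in range(first, first + nc - 1):
        A[i, i + 1] = 1.0
    A[first + nc - 1, 0] = 1.0   # each branch returns to the start of the backbone
P = A / A.sum(axis=1, keepdims=True)
Q = Q_matrix(P)
b0, b1 = nb, nb + nc             # first nodes of branches 0 and 1
```

The assertions below mirror the $Q$ column of \cref{tab:glue}: $1$ within a branch, $\sfrac12$ across branches, $\sfrac1C$ from backbone to branch, and $1$ from branch to backbone.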
\subsection{Synthetic numerical examples}%
\label{sec:num_ex}
We consider four examples. The first two demonstrate that the spectrum of $\Ahtb$ (for $\beta = \sfrac12$ or $\beta=1$) identifies cyclic and clique-like sets in a useful manner. We compare to two alternative symmetrizations and another metric. The second example additionally shows the scalability of our approach. In the third example, we explore when it is advantageous to use $d$ for visualization and clustering purposes, using a directed planted partition model for ground truth comparisons. In dense, difficult-to-detect regimes, our method is more accurate than clustering using the input adjacency matrix directly. Finally, in the fourth example, we compare $d^{\sfrac12}$, $d^{1}$, and spatial distance for geometric graphs, finding that our distance captures comparable information to the spatial distance, with the similarity being especially tight when $\beta=\sfrac12$.
\subsubsection{Glued-cycles networks}%
\label{sec:gluedcycles}
For the two-glued-cycles networks illustrated in~\cref{fig:glue}, we construct a probability transition matrix $P$ by taking a uniform edge weight for all connected vertices and performing a row normalization. We then compute the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian (the Fiedler vector) for different symmetrized adjacency matrices. For the adjacency matrices $A$ constructed below, we calculate the graph Laplacian $L=D-A$, with $D$ the diagonal matrix of node degrees (row or column sums of $A$). The examples are two directly glued cycles, as well as two glued cycles with a bidirectional edge between the cycles. In the first case, the results are very similar regardless of the symmetrization, but in the second case they differ significantly. In each case, we group the nodes based on whether the corresponding vector element is positive, negative, or zero.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,0) {};
\node (B) at (0,1) {};
\node (C) at (0,2) {};
\node (D) at (1,2.5) {};
\node (E) at (1.5,1.5) {};
\node (F) at (1.5,.5) {} ;
\node (G) at (1,-.5) {} ;
\node (H) at (-1,2.5) {} ;
\node (I) at (-1.5,1.5) {} ;
\node (J) at (-1.5,.5) {} ;
\node (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture} \ \ \
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,0) {};
\node (B) at (0,1) {};
\node (C) at (0,2) {};
\node (D) at (1,2.5) {};
\node (E) at (1.5,1.5) {};
\node (F) at (1.5,.5) {} ;
\node (G) at (1,-.5) {} ;
\node (H) at (-1,2.5) {} ;
\node (I) at (-1.5,1.5) {} ;
\node (J) at (-1.5,.5) {} ;
\node (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [<->] (F) edge (J);
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture} \ \ \
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=magenta] (A) at (0,0) {};
\node[fill=magenta] (B) at (0,1) {};
\node[fill=magenta] (C) at (0,2) {};
\node[fill=blue] (D) at (1,2.5) {};
\node[fill=blue] (E) at (1.5,1.5) {};
\node[fill=blue] (F) at (1.5,.5) {} ;
\node[fill=blue] (G) at (1,-.5) {} ;
\node[fill=green] (H) at (-1,2.5) {} ;
\node[fill=green] (I) at (-1.5,1.5) {} ;
\node[fill=green] (J) at (-1.5,.5) {} ;
\node[fill=green] (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture} \\
\vspace{.2cm}
\begin{tabular}{cccc}
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=blue] (A) at (0,0) {};
\node[fill=blue] (B) at (0,1) {};
\node[fill=green] (C) at (0,2) {};
\node[fill=green] (D) at (1,2.5) {};
\node[fill=green] (E) at (1.5,1.5) {};
\node[fill=blue] (F) at (1.5,.5) {} ;
\node[fill=blue] (G) at (1,-.5) {} ;
\node[fill=green] (H) at (-1,2.5) {} ;
\node[fill=green] (I) at (-1.5,1.5) {} ;
\node[fill=blue] (J) at (-1.5,.5) {} ;
\node[fill=blue] (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [<->] (F) edge (J);
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=green] (K) at (-1,-.5) {} ;
\node[fill=green] (A) at (0,0) {};
\node[fill=green] (B) at (0,1) {};
\node[fill=green] (C) at (0,2) {};
\node[fill=blue] (D) at (1,2.5) {};
\node[fill=blue] (E) at (1.5,1.5) {};
\node[fill=blue] (F) at (1.5,.5) {} ;
\node[fill=green] (G) at (1,-.5) {} ;
\node[fill=blue] (H) at (-1,2.5) {} ;
\node[fill=blue] (I) at (-1.5,1.5) {} ;
\node[fill=blue] (J) at (-1.5,.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [<->] (F) edge (J);
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=magenta] (A) at (0,0) {};
\node[fill=magenta] (B) at (0,1) {};
\node[fill=magenta] (C) at (0,2) {};
\node[fill=blue] (D) at (1,2.5) {};
\node[fill=blue] (E) at (1.5,1.5) {};
\node[fill=blue] (F) at (1.5,.5) {} ;
\node[fill=blue] (G) at (1,-.5) {} ;
\node[fill=green] (H) at (-1,2.5) {} ;
\node[fill=green] (I) at (-1.5,1.5) {} ;
\node[fill=green] (J) at (-1.5,.5) {} ;
\node[fill=green] (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [<->] (F) edge (J);
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.75]
\begin{scope}[every node/.style={circle,thick,draw}]
\node[fill=magenta] (A) at (0,0) {};
\node[fill=magenta] (B) at (0,1) {};
\node[fill=magenta] (C) at (0,2) {};
\node[fill=blue] (D) at (1,2.5) {};
\node[fill=blue] (E) at (1.5,1.5) {};
\node[fill=blue] (F) at (1.5,.5) {} ;
\node[fill=blue] (G) at (1,-.5) {} ;
\node[fill=green] (H) at (-1,2.5) {} ;
\node[fill=green] (I) at (-1.5,1.5) {} ;
\node[fill=green] (J) at (-1.5,.5) {} ;
\node[fill=green] (K) at (-1,-.5) {} ;
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,circle},
every edge/.style={draw=red,very thick}]
\path [<->] (F) edge (J);
\path [->] (A) edge (B);
\path [->] (B) edge (C);
\path [->] (C) edge (D);
\path [->] (D) edge (E);
\path [->] (E) edge (F);
\path [->] (F) edge (G);
\path [->] (G) edge (A);
\path [->] (C) edge (H);
\path [->] (H) edge (I);
\path [->] (I) edge (J);
\path [->] (J) edge (K);
\path [->] (K) edge (A);
\end{scope}
\end{tikzpicture} \\
$A = \max(P,P^T)$ & Chung's $L$~\cite{Chung_2005} & $A =\Ahtb[1]$ & $A=\Ahtb[\sfrac12]$
\end{tabular}
\caption{(Top Left) Two-glued-cycles example from~\cref{sec:glued} with $n_b=3$, $n_c=4$, and $C=2$. The ``backbone'' nodes run along the center, and the two partial cycles split off from and then return to it. (Top Middle) Similar two-glued-cycles network with a bidirectional edge. (Top Right) Sign of the Fiedler vector for the network without the bidirectional edge; the symmetrizations considered give similar results here. (Bottom) Sign of the Fiedler vector for the network with the bidirectional edge, for the four symmetrizations indicated below each panel.
The sign of the Fiedler vectors is encoded as ($-$, green), ($0$, magenta) and ($+$, blue).}%
\label{fig:glue}
\end{figure}
For the two glued cycles without the bidirectional edge, the naive symmetrizations of the directed adjacency matrix, either $A = (P+P^T)/2$ or $A=\max\left(P,P^T\right)$,
have a Fiedler vector that is $0$ on the spine and splits each cycle into signed components; see the top right plot in~\cref{fig:glue}. However, for the two-glued-cycles network with the bidirectional edge (bottom left panel of~\cref{fig:glue}), the naive symmetrization splits the network horizontally. This is reasonable, since the resulting graph cut is small, but (by construction) it does not reflect the coherent, directed structure of the original graph.
One way to account for directed structure while minimizing equilibrium flux across the cut was suggested by Fan Chung~\cite{Chung_2005} (discussed in~\cref{s:RelWork}), who defines the Laplacian by $
L = I - \frac{1}{2} \left[ \Phi^{\sfrac 1 2} P \Phi^{-\sfrac 1 2} + \Phi^{-\sfrac 1 2} P^T \Phi^{\sfrac 1 2} \right], $
where $\Phi = \textrm{diag}(\phi) \in \mathbb R^{n \times n}$.
Chung uses $L$ to establish a Cheeger-type inequality for digraphs, which is used to study the rate of convergence for Markov chains. Using Chung's Laplacian again gives a comparable outcome for the two glued cycles example (\cref{fig:glue}), but in the example with the bidirectional edge, this symmetrization places most of the non-backbone nodes in one class and all backbone nodes in the other (second plot in~\cref{fig:glue}).
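Chung's Laplacian is straightforward to form once $\phi$ is known. The sketch below (with a hypothetical dense random chain standing in for the examples of this section) computes $\phi$ as the left eigenvector of $P$ for eigenvalue $1$ and builds $L$; one can then check that $L$ is symmetric and annihilates $\phi^{\sfrac12}$:

```python
import numpy as np

def chung_laplacian(P, phi):
    """L = I - (Phi^{1/2} P Phi^{-1/2} + Phi^{-1/2} P^T Phi^{1/2}) / 2  (Chung, 2005)."""
    s = np.sqrt(phi)
    S = s[:, None] * P / s[None, :]   # Phi^{1/2} P Phi^{-1/2}
    return np.eye(P.shape[0]) - (S + S.T) / 2.0

# Hypothetical dense random chain; phi is the left eigenvector for eigenvalue 1.
rng = np.random.default_rng(2)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)
w, V = np.linalg.eig(P.T)
phi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
phi /= phi.sum()                      # normalize (and fix the sign)
L = chung_laplacian(P, phi)
```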
The normalized hitting probabilities matrices $\Ahtb[1]$ and $\Ahtb[\sfrac12]$ each distinguish between the two branches, with the backbone set equal to zero in both the cases of the glued cycles and the glued cycles with a bidirectional edge as seen in~\cref{fig:glue}.
Thus, all three approaches uncover different structure in the two-glued-cycle graph with the bidirectional edge, with the naive symmetrization yielding small undirected cuts, Chung's approach yielding (perhaps) two different dynamical states, and $\Ahtb[\beta]$ showing all three chains in a natural way for both $\beta = \sfrac12, 1$.
Finally, we compare the total effective resistance metric of~\cite{Young_2016b} to our metric on the example of the two-glued-cycles network (with no bidirectional edge). As one might expect given the relationship between effective resistance and commute times in the undirected case, the total effective resistance of~\cite{Young_2016b} is sensitive to cycle length.
\Cref{fig:effRes} demonstrates that the commute time approach views the distances on each cycle quite differently and that the relative distances from the total effective resistance metric are more difficult to interpret in the second loop.
\begin{figure}
\centering
\begin{tabular}{ccc}
\begin{tikzpicture}[xscale=.35,yscale=.4]
\Vertices[RGB=true,size=.1]{nodesd12.dat}
\Edges[Direct=True,lw=.01,opacity=.2]{edges.dat}
\end{tikzpicture}
&
\begin{tikzpicture}[xscale=.35,yscale=.4]
\Vertices[RGB=true,size=.1]{nodesd1.dat}
\Edges[Direct=True,lw=.01,opacity=.2]{edges.dat}
\end{tikzpicture}
&
\begin{tikzpicture}[xscale=.35,yscale=.4]
\Vertices[RGB=true,size=.1]{nodesR.dat}
\Edges[Direct=True,lw=.01,opacity=.2]{edges.dat}
\end{tikzpicture}
\\
$d^{\sfrac12}$ &
$d^1$ &
Total effective resistance
\end{tabular}
\caption{Two glued cycles with $n_c=55$ and $n_b=5$, with nodes colored by distance from a node on the far right. Blue denotes small distances. The metric $d^{\sfrac12}$ has three levels of distance corresponding to nodes on the same branch, backbone, and opposite branch, respectively. The metric $d^1$ is similar, except nodes on the same branch are not distinguished from backbone nodes. Finally, the total effective resistance metric from~\cite{Young_2016b} gives a smoother notion of distance on the right branch and backbone, but on the left branch, proceeding counterclockwise, one finds the distance decreasing and then increasing again, which is somewhat difficult to interpret. This example shows how different resistance/commute-time metrics are from the hitting-probability distance.}%
\label{fig:effRes}
\end{figure}
\subsubsection{Cycle adjoined to directed Erd\H{o}s--R\'enyi Graph}
Consider the following construction, illustrated in~\cref{fig:er}.
Let $n=n_{\mathrm{er}} + n_{\mathrm{cycle}}$, and let the
\begin{wrapfigure}[15]{r}{.35\textwidth}
\centering
\includegraphics[width=.35\textwidth]{er_plus_cycle.pdf}
\vspace*{-0.2in}
\caption{Erd\H{o}s-R\'enyi plus cycle example.}
\label{fig:er}
\end{wrapfigure}
first $n_{\mathrm{er}}$ nodes form an unweighted, directed ER graph with connection probability $p$.
The remaining $n_{\mathrm{cycle}}$ nodes form an unweighted, directed cycle.
The ER graph and the cycle are connected by adding $2\, \mathrm{round}(n p)-1$ edges of weight $w$ to each cycle node from randomly selected nodes in the ER graph.\footnote{These edges are drawn with replacement, with multi-edges merged to a single edge of weight $w$. Results were similar when we added the weights instead.} Finally, a single, bidirectional edge of weight $1$ is added from one cycle node to one ER node.
After row normalization to form a probability transition matrix, a random walker on this graph transitions between the ER and cycle subgraphs,
where the cycle subgraph is difficult to escape quickly because of its single exit.
For the particular choice of $n_{\mathrm{er}}=20$, $n_{\mathrm{cycle}}=8$, $p=.5$, and $w=3$, we find that the Fiedler vector of (the Laplacian associated with) $\Ahtb[\sfrac12]$ is positive on the cycle nodes and negative elsewhere. In contrast, the Fiedler vector of the naive symmetrization $A=(P+P^T)/2$ or Chung's $L$~\cite{Chung_2005} does not separate the cycle and ER nodes.
Scaling up to $n=n_{\mathrm{er}}+n_{\mathrm{cycle}} = 7{,}200 + 2{,}800 = 10{,}000$ nodes, keeping the other parameters the same ($\approx 38.7$ million edges), gives similar eigenvector results.
The computation takes 31 seconds on a Lenovo ThinkStation P410 desktop with Xeon E5--1620V4 3.5 GHz CPU and 16 GB RAM using {\sc MATLAB} R2019a Update 4 (9.6.0.1150989) 64-bit (glnxa64): 18 seconds to compute $Q$, 6 seconds to compute $\phi$, 2 seconds to form $\Ahtb[\sfrac{1}{2}]$, and 5 seconds to compute the Fiedler vector.
\subsubsection{Cluster detection and visualization for digraphs}\label{sec:kmeans}
We next use $d$ for clustering and dimension reduction.
We consider directed graphs generated by a planted partition model with nodes grouped into three ground-truth communities; we form a uniformly weighted adjacency matrix by adding an edge from $i$ to $j$ with probability $p_{\mathrm{in}}$ if $i$ and $j$ are in the same community and $p_{\mathrm{out}}$ ($<p_{\mathrm{in}}$) otherwise. A probability transition matrix is then formed using row normalization.
We then attempt to recover the ground truth node assignments.
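The generative model just described can be sketched as follows (the community sizes and probabilities below are hypothetical choices; as in the experiments, a generated graph would be discarded if it is not strongly connected):

```python
import numpy as np

def directed_planted_partition(sizes, p_in, p_out, rng):
    """Directed planted partition: edge i -> j w.p. p_in within a community, p_out across."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]
    A = (rng.random(same.shape) < np.where(same, p_in, p_out)).astype(float)
    np.fill_diagonal(A, 0.0)              # no self-loops
    P = A / A.sum(axis=1, keepdims=True)  # row-normalize into a transition matrix
    return A, P, labels

rng = np.random.default_rng(3)
A, P, labels = directed_planted_partition([20, 20, 20], 0.9, 0.1, rng)
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(labels), dtype=bool)
within = A[same & off_diag].mean()   # empirical within-community density
between = A[~same].mean()            # empirical between-community density
```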
\begin{wrapfigure}[15]{r}{.32\textwidth}
\centering
\includegraphics[width=.3\textwidth]{pca_layout.pdf}
\caption{PCA embedding of $d^{\sfrac12}$, colored by ground truth community.}
\label{fig:pca}
\end{wrapfigure}
The difficulty of this problem is generally understood in terms of $\Delta = p_{\mathrm{in}} - p_{\mathrm{out}}$ and $\rho = \frac{p_{\mathrm{in}} + 2 p_{\mathrm{out}}}{3}$. Small values of $\Delta$ correspond to more difficult clustering problems that may be solved less accurately (relative to the ground truth).
In this example we attempt to cluster the nodes into $k=3$ clusters using several approaches: (1) principal component analysis\footnotemark (PCA)~\cite{pca} on the adjacency matrix, $A$, followed by $k$-means clustering on the first $k-1$ PCA vectors; (2) PCA on $d^{\sfrac12}$ followed by $k$-means; and (3) $k$-medoids on $d^{\sfrac12}$.
(The $k$-medoids algorithm is similar in spirit to the $k$-means unsupervised clustering algorithm but applies in arbitrary metric spaces; see, for instance,~\cite{kaufmann1987clustering,park2009simple}.) Results are shown in~\cref{fig:kmedoids,fig:regions}.
\footnotetext{Specifically, we used the PCA routine from MATLAB R2019a Update 4 (9.6.0.1150989) 64-bit (glnxa64). As expected, this gives different results in general when applied to a matrix versus its transpose. In this case, the matrix is stochastically equivalent with its transpose, and in the NY taxi example below, the PCA-based plots are similar regardless of whether the transpose is used.}
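As an illustration of method (3), the following self-contained sketch runs a basic $k$-medoids on a hypothetical toy metric: three well-separated one-dimensional clusters, with their pairwise-distance matrix standing in for $d^{\sfrac12}$. A deterministic farthest-first initialization replaces the random restarts used in the experiments:

```python
import numpy as np

def k_medoids(D, k, n_iter=20):
    """Basic k-medoids on a distance matrix D, with farthest-first initialization."""
    medoids = [0]
    for _ in range(k - 1):  # greedily seed each new medoid far from the others
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign to nearest medoid
        for c in range(k):                         # re-center each cluster
            members = np.flatnonzero(labels == c)
            medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
    return labels

# Hypothetical toy metric: three tight 1-D clusters, distance matrix |x_i - x_j|.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(c, 0.1, 30) for c in (0.0, 10.0, 20.0)])
truth = np.repeat([0, 1, 2], 30)
D = np.abs(x[:, None] - x[None, :])
labels = k_medoids(D, 3)
purity = sum(np.bincount(truth[labels == c]).max() for c in range(3)) / len(x)
```

On such well-separated clusters the recovered partition matches the ground truth up to relabeling, i.e., the purity is $1$.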
\begin{figure}[!ht]
\centering
\footnotesize
\def<desired width>{\linewidth}
\input{clusteringstudyinput.tex}
\caption{Results of (top row) PCA on $A$ followed by $k$-means, (middle) PCA on $d^{\sfrac12}$ followed by $k$-means, and (bottom) $k$-medoids on $d^{\sfrac12}$ on 300-node graphs generated using the directed planted partition model with three clusters, as described in~\cref{sec:kmeans}. We varied the mean edge density, $\rho$, and cluster quality, $\Delta=p_{\mathrm{in}}-p_{\mathrm{out}}$. Since results depend on the random initialization, we report the best of 5 runs for each entry. If any generated graph was not strongly connected, we did not try to cluster it. The left value is the accuracy (purity) of the recovered partition, and the right value is the empirical $p$ value of the accuracy relative to 4,000 random partitions obtained by drawing each community label uniformly at random. Notably, method (2) has the best performance for dense, weakly-clustered graphs. [Note that the triangular blue region on the lower left of each plot represents $(\rho,\Delta)$ parameter combinations that cannot exist.]}%
\label{fig:kmedoids}
\end{figure}
\begin{figure}
\centering
\footnotesize
\def<desired width>{\linewidth}
\input{clusteringstudy1input.tex}
\caption{(Left) Regions where the methods from~\cref{fig:kmedoids} perform best. Here, light blue is method (1), green is method (2), and yellow is method (3). (Middle) Difference in accuracy between the best and second-best methods. (Right) Ratio of the $p$ values of the best and second-best methods from~\cref{fig:kmedoids}. Combining these plots, we see that there is a significant parameter regime, consisting of dense, difficult-to-detect structure, where using $d^{\sfrac12}$ instead of $A$ enhances the spectral detection of structure by 5--20\% for graphs where method (1) recovers essentially no structure in $A$. Note that the $y$ axis differs from~\cref{fig:kmedoids}.}%
\label{fig:regions}
\end{figure}
We find that
method (1) works best on sparse or well-separated clusters, method (2) works best with dense, difficult-to-detect clusters, and method (3) has no clear advantage. More specifically, using $d^{\sfrac12}$ in method (2) enhances our ability to get a better-than-chance clustering in dense networks.\footnote{We also tried using the shortest commute and generalized effective resistance metrics~\cite{Young_2016b,Young_2016a} as substitutes for the hitting time metric in this example and found similar improvements over using the raw adjacency matrix. In particular, the shortest commute was the most effective metric for this task (although this metric is not robust, so the real-life performance may be different).} (We note that spectral methods in undirected graphs give asymptotically optimal almost-exact recovery but are not optimal for harder cases where only better-than-chance recoverability is possible~\cite{abbe}. This is consistent with~\cref{fig:kmedoids,fig:regions}.)
Finally, we can also use PCA on $d^{\sfrac12}$ to visualize the directed network. The first and second principal components, generated using {\sc MATLAB}'s built-in routine, are plotted in~\cref{fig:pca}, clearly showing the separation into three clusters, which are in accordance with the three ground-truth communities.
\subsubsection{Distances on geometric graphs}
Given known convergence properties of various graph models to continuum problems (e.g. \cite{trillos2016consistency,trillos2018variational,singer2012vector,singer2017spectral,Osting_2017}), we are motivated by the question of how our distance metric compares to a standard notion of distance when the network arises from a natural geometric setting. For instance, as mentioned in the introduction, \cite{singer2012vector} proves that the notion of diffusion distance converges to that of geodesic distance as a point cloud samples a closed manifold at higher and higher densities.
In~\cref{fig:geodistance}, we consider distances computed using our metric structure in a family of geometric graphs constructed using Euclidean distances to determine edge weights. The geometric graphs considered are
\begin{enumerate}[(a)]
\item A random point cloud on a flat torus ${[0,2 \pi]}^2$ with $36^2$ points,
\item A random point cloud on a flat torus with a hole ${[0,2 \pi]}^2 \setminus B((\pi,\pi),\pi/2)$ with $36^2$ points (distances relative to a point in the bottom left of the torus),
\item An $H$ shaped domain $({[0,2 \pi]}^2 \cap \{|x_1-\pi|\geq \pi/2\}) \cup ({[0,2 \pi]}^2 \cap \{|x_2-\pi|\leq \pi/4\})$ with $36^2$ points (distances relative to a point in the bottom right of the $H$),
\item A random point cloud on the circle of length $2 \pi$ with $1000$ points,
\item A random point cloud on a sphere of radius $1$ in $\mathbb{R}^3$ with $1000$ points,
\item A square $10 \times 10$ lattice on the flat torus ${[0,2 \pi]}^2$.
\end{enumerate}
For the regular lattice example, edge weights are carried only on nearest-neighbor pairs of vertices. In all other cases, we take the edge weights to be of the form $e^{-\gamma d_{\text{Euc}}{(x_i,x_j)}^2}$, where $d_{\text{Euc}}$ is the Euclidean distance metric (determined with periodicity if the domain is periodic, {\it i.e.\/}, we take the shortest-path distance in the flat torus). We have chosen the scale factor $\gamma = 1$ uniformly throughout.
Once the geometric graph is constructed, we compute the pairwise Euclidean distances, as well as the pairwise distances $d^{\sfrac12}$ and $d^1$, for comparison. To assist with interpretation and comparison, we have ordered the vertices in~\cref{fig:geodistance} from closest to farthest relative to the $d^{\sfrac12}$ metric and plotted, for each distance function, the rescaled distances $(d-d_{\min})/(d_{\max} - d_{\min})$ to normalize all of them to the same scale.
Throughout, we note that $d^{\sfrac12}$ is a reasonable fit to the measured Euclidean distances, while $d^1$ seems to do well only when the geometry is such that the invariant measure normalization (that is, the choice of $\beta$) matters less. Note that the distances $d^{\sfrac12}$ and $d^1$ are identical on the square lattice, up to scaling.
In this case, we are really studying the structure of the $Q$ hitting probability matrix. Our results give some preliminary indication that in the consistency limit the $d^{\sfrac12}$ metric may converge to the Euclidean distance while the $d^1$ metric converges to something else entirely. However, we leave this pursuit for future analytical studies.
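The graph construction just described reduces to a Gaussian kernel on pairwise Euclidean distances, plus the min-max rescaling used in the figure. A minimal sketch follows; the periodic (torus) wrap-around is omitted, so it applies verbatim only to the non-periodic domains.

```python
import numpy as np

def gaussian_weights(points, gamma=1.0):
    """Dense weight matrix w_ij = exp(-gamma * d_Euc(x_i, x_j)^2), zero diagonal.
    For periodic domains, d_Euc would be replaced by the shortest wrapped
    distance on the torus; that case is omitted here."""
    sq = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-gamma * sq)
    np.fill_diagonal(W, 0.0)
    return W

def rescale(d):
    """Min-max normalization (d - d_min) / (d_max - d_min) for comparing metrics."""
    return (d - d.min()) / (d.max() - d.min())
```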
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{FlatTorusPC_alt.pdf}
\caption{Flat Torus}
\end{subfigure}\quad
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{FlatTorusHolePC_alt.pdf}
\caption{Flat Torus with a Hole}
\end{subfigure}\quad
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{HdomainPC_alt.pdf}
\caption{$H$ shaped domain}
\end{subfigure}
\\
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{CirclePC_alt.pdf}
\caption{Circle}
\end{subfigure}\quad
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SpherePC_alt.pdf}
\caption{Sphere}
\end{subfigure}\quad
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SquareLattice.pdf}
\caption{Square Lattice}
\end{subfigure}
\caption{Normalized distance plots comparing the scaled distances from one node in a geometric graph computed using Euclidean distances (black), $d^{\sfrac12}$ (blue), and $d^1$ (red). The geometric graphs from top left to bottom right are
{\bf (a)} a random point cloud on a flat torus,
{\bf (b)} a random point cloud on a flat torus with a hole,
{\bf (c)} a random point cloud on an $H$ shaped domain,
{\bf (d)} a random point cloud on the circle,
{\bf (e)} a random point cloud on a sphere, and
{\bf (f)} a square lattice on the flat torus.
Note that in all subplots, we have ordered the vertices from closest to farthest (relative to the $d^{\sfrac12}$ metric) from a reference node, namely the first vertex generated.}
\label{fig:geodistance}
\end{figure}
\subsection{Real-world example: the New York City taxi network}%
\label{sec:taxi}%
Consider the movement over time of a New York City taxi, which we interpret as a Markov chain where the states are neighborhoods and $P_{i,j}$ is the probability that a trip begun in neighborhood $i$ ends in neighborhood $j$.
Using publicly available data from the New York City Taxi and Limousine Commission,\footnote{Accessed at~\url{https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page} in April 2020.}
we computed an adjacency matrix where $A_{i,j}$ is the number of Yellow Taxi trips in January 2019 that started at $i$ and ended at $j$, where $i$ and $j$ are chosen from 262 neighborhoods\footnote{Two additional neighborhoods are marked ``unknown'' and appear to designate out-of-city or out-of-state endpoints. We excluded these from our analysis.} spread across the city's five boroughs (Manhattan, Staten Island, Queens, Brooklyn, and the Bronx). We also included trips to and from Newark Liberty International Airport (EWR) in New Jersey. We restricted our analysis to the 250-neighborhood strongly connected component.
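The preprocessing just described amounts to counting trips and row-normalizing. A minimal sketch follows, with `(pickup, dropoff)` zone-index pairs as illustrative stand-ins for the TLC records' location-ID columns; the restriction to the strongly connected component is left out.

```python
import numpy as np

def trip_matrices(trips, n):
    """A[i, j] = number of trips from zone i to zone j; P = row-normalized A,
    so P[i, j] estimates the probability that a trip begun in i ends in j.
    Zones with zero out-degree would need to be dropped first (the paper
    restricts to the strongly connected component)."""
    A = np.zeros((n, n))
    for i, j in trips:
        A[i, j] += 1
    P = A / A.sum(axis=1, keepdims=True)
    return A, P
```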
The data is dominated by degree, as shown in~\cref{nyc_heatmap}, with the busier Manhattan neighborhoods having tens of thousands of trips, and the Staten Island neighborhoods having a median out-degree of 4. Traffic is also organized by borough, although the distinctions between the spatially adjacent Brooklyn, Queens, and Bronx boroughs are perhaps less apparent, and they might be properly considered as peripheral to the Manhattan core. Staten Island is notable for its remoteness, which is reflected in the sparsity of $A$ in that block.
In~\cref{nyc_heatmap}, we compare $A$ with $d^{\sfrac12}$ and $d^1$. While $d^{\sfrac12}$ highlights Manhattan in a manner similar to $A$, it does not distinguish much between Queens, Brooklyn, and the Bronx, showing them instead as a single interconnected group. In contrast, Staten Island is very clearly highlighted as its own, close group, which is reasonable given the geographic proximity of these neighborhoods and the fact that a disproportionately large number of trips involving Staten Island both started and ended there. Although the purpose of this example is not to provide an optimal clustering of the data,
we note that Staten Island does represent a difficult cluster to detect, and arguably is not even a cluster, since there are only eight interior edges (counting multiplicity but excluding self-edges) and 309 incoming or outgoing edges, all of which is hidden in over 1 million edges (again, counting multiplicity).
Note that the fact that Staten Island is highlighted by $d^{\sfrac12}$ is not simply a result of degree scaling, as a heat map of $P$ does not highlight Staten Island as a block. The true explanation seems to involve two factors: (1) Staten Island has only eight non-diagonal interior edges among its 13 neighborhoods,\footnote{Staten Island has 20 neighborhoods, but 7 have 0 out-degree and are thus excluded from the strongly connected component.} and the median out-degree is 4. Thus, a taxi that does enter Staten Island has a relatively large likelihood of visiting another Staten Island location next, relative to taxis starting at other neighborhoods. (2) The average frequency of visiting Staten Island at all is so low that the pattern of visiting is almost memoryless, with taxis leaving Staten Island having plenty of time to mix in other areas before visiting Staten Island again, so that the probability of leaving Staten Island and then reaching another Staten Island location before returning to the first one is about $\frac12$, despite the low degree of Staten Island neighborhoods. In contrast, Staten Island is far from other locations, especially Manhattan: by~\cref{d2sym}, mutually high hitting probabilities are required for closeness, but the probability of starting in a Manhattan neighborhood and reaching Staten Island before returning is very low.
The distance $d^1$ places the Manhattan nodes close to most other nodes, especially each other, while the Staten Island nodes are far from everything, especially each other. Since $d^1_{i,j} = -\log(\phi_i) - \log(Q_{i,j})$, this distance is small only when (1) $\phi_i$ is large and (2) $Q_{i,j}$ is far from zero. Thus, the Staten Island nodes, which have small values of $\phi$, cannot be close to anything, and the Manhattan nodes, which have the largest values of $\phi$, can be close to other nodes, depending on $Q_{i,j}$. Empirically, $Q_{i,j}$ is usually not very small, with 77\% of the entries in $Q$ being at least $0.1$, which explains Manhattan's overall closeness to other nodes. The fact that the Manhattan nodes are closer to each other than to other nodes is accounted for by the fact that $Q_{i,j}$ for $i$ in Manhattan is generally larger if $j$ is also in Manhattan, which might be expected. (The medians differ by a factor of $5.4$.) A similar observation explains why the Staten Island nodes are considered farther from each other than they are from nodes in the other boroughs.
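The two identities used in this discussion — $d^1_{i,j} = -\log(\phi_i) - \log(Q_{i,j})$ and, via \cref{d2sym}, $d^{\sfrac12}_{i,j}$ as the negative log of the geometric mean of $Q_{i,j}$ and $Q_{j,i}$ — translate directly into code. In the sketch below, the hitting-probability matrix $Q$ and the invariant measure $\phi$ (defined earlier in the paper) are assumed given.

```python
import numpy as np

def d_half(Q):
    """d^{1/2}_{ij} = -log sqrt(Q_ij * Q_ji): symmetric by construction,
    small only when BOTH one-way hitting probabilities are high."""
    return -0.5 * np.log(Q * Q.T)

def d_one(Q, phi):
    """d^1_{ij} = -log(phi_i) - log(Q_ij): small only when phi_i is large
    and Q_ij is far from zero, as in the Manhattan / Staten Island discussion."""
    return -np.log(phi)[:, None] - np.log(Q)
```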
Finally, we used $d^{\sfrac12}$ to perform PCA, with the first two principal components (PCs) visualized in~\cref{nyc_pca}. These two PCs explained 64\% and 34\% of the variation, respectively, with the first PC being closely related to out-degree (Pearson correlation with $\log k_{\mathrm{out}}$ is $.96$) and the second PC being well-correlated (Pearson correlation $.9978$) with the column means of $d^{\sfrac12}$. So over 98\% of the variance is explained by these two PCs. Interestingly, both the highest- and lowest-degree nodes were on average far from other nodes. Recalling that $d^{\sfrac12}_{i,j}$ is the negative log of the geometric mean of $Q_{i,j}$ and $Q_{j,i}$ (see~\cref{d2sym}), closeness requires that both of these factors be high. If $i$ is a high-degree core node, then $Q_{i,j}$ is small for most $j$. In contrast, a mid-degree peripheral node in Queens, the Bronx, or Brooklyn, enjoys reasonable values of $Q_{i,j}$ for other peripheral nodes $j$, since once a taxi enters the Manhattan core, it is likely to visit a significant portion of the other nodes before returning to $i$. Finally, if a node's degree is too small, the probability of a taxi reaching it at all is too small for the hitting probabilities to be high. For comparison, performing PCA directly on $A$ gives a similar first PC, with a different second PC that explains about half as much variance as the second PC of $d^{\sfrac12}$. The second PC is nearly constant, except on Manhattan, where it correlates with the East-West coordinate.
\begin{figure}[t]
\centering
\footnotesize
\begin{tabular}{@{\hskip 0in}c@{\hskip 0.18in}c@{\hskip 0.18in}c@{\hskip 0in}}
\def<desired width>{.31\linewidth}
\input{nyc_adjacencyinput.tex} &
\def<desired width>{.29\linewidth}
\input{nyc_distanceinput.tex} &
\def<desired width>{.30\linewidth}
\input{nyc_d1input.tex}\\
$A$ & $d^{\sfrac12}$ & $d^1$
\end{tabular}
\caption{An example based on New York City taxi transit data, where nodes are 250 neighborhoods and $A_{i,j}$ is the number of trips from $i$ to $j$. (Left) Heatmap of $A$ with the nodes sorted by borough in this order: EWR airport (one node), Staten Island, Manhattan, Queens, Brooklyn, and the Bronx. The taxi traffic is dominated by the Manhattan block in the upper left, with Queens, Brooklyn, and the Bronx forming three blocks further down and to the right. (Middle) A similarly arranged heatmap of $d^{\sfrac12}$. The Manhattan neighborhoods are close together, and the smaller upper left block corresponding to Staten Island is distinguished as a coherent submodule, despite having only 8 interior edges. Queens, Brooklyn, and the Bronx form a large block in the lower right. See~\cref{sec:taxi} for an explanation of these differences. (Right) A heatmap of $d^{1}$. We observe that in this normalization Staten Island is quite far from everything, including itself. Manhattan is relatively close to almost everything, especially itself. This is exactly what we should expect because $d^1_{i,j}$ is small only when both $\phi_i$ and the $i\rightarrow j$ hitting probability are large. [In the right two heatmaps, the diagonal is set to a non-zero value to improve contrast.]}%
\label{nyc_heatmap}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{tikzpicture}[scale=0.42]
\begin{axis}[
cycle list={
{red,mark=*},
{blue,mark=*},
{yellow,mark=*},
{green,mark=*},
{brown,mark=*},
{orange,mark=*
},
legend entries={Manhattan, Staten Island, Queens, Brooklyn, Bronx,EWR},
legend cell align=left,
only marks,
mark size=1pt,
width=\textwidth,
ticks=none,
xlabel={Principal component 1 (log scale)},
ylabel={Principal component 2},
title={Principal component analysis of $d^{\sfrac12}$ for New York City Taxis},
]
\addplot table [col sep=comma] {pca3.dat};
\addplot table [col sep=comma] {pca2.dat};
\addplot table [col sep=comma] {pca4.dat};
\addplot table [col sep=comma] {pca5.dat};
\addplot table [col sep=comma] {pca6.dat};
\addplot table [col sep=comma] {pca1.dat};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.42]
\begin{axis}[
cycle list={
{red,mark=*},
{blue,mark=*},
{yellow,mark=*},
{green,mark=*},
{brown,mark=*},
{orange,mark=*
},
legend entries={Manhattan, Staten Island, Queens, Brooklyn, Bronx,EWR},
legend cell align=left,
only marks,
mark size=1pt,
width=\textwidth,
ticks=none,
xlabel={Principal component 1 (log scale)},
ylabel={Principal component 2},
title={Principal component analysis of $A$ for New York City Taxis},
]
\addplot table [col sep=comma] {pcaA3.dat};
\addplot table [col sep=comma] {pcaA2.dat};
\addplot table [col sep=comma] {pcaA4.dat};
\addplot table [col sep=comma] {pcaA5.dat};
\addplot table [col sep=comma] {pcaA6.dat};
\addplot table [col sep=comma] {pcaA1.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{(Left) These two PCs explain $98.3\%$ of the variance. The first PC has a $.96$ correlation coefficient with $\log k_{\mathrm{out}}$, and the second PC has a correlation coefficient of $.9978$ with the column means of $d^{\sfrac12}$, which we interpret as the average distance to other nodes. Notably, the highest-degree nodes also have high average distance to other nodes. This is also true of the lowest-degree nodes, while the mid-degree nodes in Queens, Brooklyn, and the Bronx are closer to other nodes on average. We interpret this by noting that, while high-degree nodes are common endpoints for trips (so $Q_{i,j}$ might be high when $j$ is a high-degree node), they have a lot of self-loops, and the taxis that leave them tend to return relatively quickly (so $Q_{j,i}$ is low for most $i$). Using~\cref{d2sym}, we see that $d^{\sfrac12}_{i,j}$ will then not be very small for high-degree $i$. The mid-degree nodes, in contrast, send a lot of taxis into the Manhattan core, which are likely to mix through the city for a long time before returning (so $Q_{i,j}$ is not very small for almost all destinations $j$). (Right) PCA on $A$ gives a similar first PC. The second PC is nearly constant except on Manhattan, where it is correlated with the East-West coordinate (Pearson .42, p=.0004). The second PC explains about half as much variance for $A$ as for $d^{\sfrac12}$.}%
\label{nyc_pca}
\end{figure}
\section{Conclusion}\label{s:Disc}
Given a probability transition matrix for an ergodic, finite-state, time-homogeneous Markov chain, we have constructed a family of (possibly pseudo-)metrics on the state space, which we refer to as hitting probability distances. Alternatively, this construction gives a metric on the nodes of a strongly connected, directed graph. In the cases where we do not obtain a proper metric, the degeneracies give global structural information, and we can quotient them away. Our metrics can be computed in $O(n^3)$ time and $O(n^2)$ space, in one example scaling up to $10,000$ nodes and $\approx 38$M edges on a desktop computer. Our metric captures information different from that provided by other directed graph metrics, including the multiscale structure in the taxi example. We have considered the utility of this metric for structure detection, dimension reduction, and visualization, finding in each case advantages of our method compared to existing techniques.
Some other possible applications include efficient nearest-neighbor search, new notions of graph curvature~\cite{van_Gennip_2014}, Cheeger inequalities, and provable optimality of weak recovery for dense, directed communities. Additionally, in our experiments, we observed that several eigenvalues of the symmetrized adjacency matrix contained useful information about structure such as cycles, and it would be good to understand better which structures get encoded in leading eigenspaces. Empirically, it is important to know how commonly $d^{\sfrac12}$ is degenerate, and what useful structure is revealed in practice.
A natural theoretical question is consistency of the distances in the large graph limit as we approach a natural geometric object embedded in a standard Euclidean space~\cite{Osting_2017,singer2012vector,singer2017spectral,trillos2018variational,trillos2016consistency,Yuan2020}.
In terms of possible improvements to our method, an effective means of thresholding the symmetrized hitting probability matrix could improve scalability. A natural question to pursue in a variety of settings would be the sparsification of $\Ahtb$ and its implications for spectral analysis and clustering applications. In particular, the potentially sparse $P$ will map into a full (but symmetric) matrix $\Ahtb$.
In large systems the $O(n^2)$ storage requirement may become a burden. Hence, it is natural to ask: If we sparsify the $\Ahtb$ matrix to have a comparable number of edges to that of the original $P$, how much information can be stably preserved in the spectrum? This will be a topic of future work on the hitting probability matrices we have constructed.
\bibliographystyle{siamplain}
| {
"timestamp": "2021-01-19T02:43:57",
"yymm": "2006",
"arxiv_id": "2006.14482",
"language": "en",
"url": "https://arxiv.org/abs/2006.14482",
"abstract": "The shortest-path, commute time, and diffusion distances on undirected graphs have been widely employed in applications such as dimensionality reduction, link prediction, and trip planning. Increasingly, there is interest in using asymmetric structure of data derived from Markov chains and directed graphs, but few metrics are specifically adapted to this task. We introduce a metric on the state space of any ergodic, finite-state, time-homogeneous Markov chain and, in particular, on any Markov chain derived from a directed graph. Our construction is based on hitting probabilities, with nearness in the metric space related to the transfer of random walkers from one node to another at stationarity. Notably, our metric is insensitive to shortest and average walk distances, thus giving new information compared to existing metrics. We use possible degeneracies in the metric to develop an interesting structural theory of directed graphs and explore a related quotienting procedure. Our metric can be computed in $O(n^3)$ time, where $n$ is the number of states, and in examples we scale up to $n=10,000$ nodes and $\\approx 38M$ edges on a desktop computer. In several examples, we explore the nature of the metric, compare it to alternative methods, and demonstrate its utility for weak recovery of community structure in dense graphs, visualization, structure recovering, dynamics exploration, and multiscale cluster detection.",
"subjects": "Social and Information Networks (cs.SI); Machine Learning (cs.LG); Numerical Analysis (math.NA); Probability (math.PR); Machine Learning (stat.ML)",
"title": "A metric on directed graphs and Markov chains based on hitting probabilities",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982013788985129,
"lm_q2_score": 0.8152324938410783,
"lm_q1q2_score": 0.8005695501806732
} |
% arXiv:1602.03246 -- "Reliability Polynomials of Simple Graphs having Arbitrarily many Inflection Points"
% Abstract: In this paper we show that for each $n$, there exists a simple graph whose reliability polynomial has at least $n$ inflection points.
\section{Introduction}
The reliability of a graph $G$ is the probability that the graph remains connected when each edge is included, or ``functions", with independent probability $p$. Equivalently, we can say that each edge fails with probability $q = 1-p$. This function can be written as a polynomial in either $p$ or $q$, though for our purposes it will be convenient to use $q$; for instance, if $f(q) = R(G)(q)$ for a nontrivial graph $G$, then we have $f(0) = 1$ and $f(1) = 0$. Since the first derivative of $R(G)(q)$ is always negative on $(0,1)$, it is natural to consider whether the second derivative is ever zero, i.e., whether $R(G)(q)$ has any inflection points.
It is typical for the reliability of a graph to have at least one inflection point, and families of simple graphs with reliability polynomials having two inflection points have been found \cite{BKK}. In \cite{GM}, Graves and Milan show that there exist non-simple graphs whose reliability polynomials have at least $n$ inflection points for any integer $n$. They point out that no example is known of a simple graph whose reliability polynomial has more than two inflection points. What we show in this paper is that for each $n$, there exists a simple graph whose reliability polynomial has at least $n$ inflection points.
\section{Preliminaries}
Our proof consists of two major parts: we first demonstrate that there exist reliability polynomials whose second derivative satisfies certain bounds, and then from this collection of polynomials we form products which have arbitrarily many inflection points. The \emph{one-point union} of graphs $G$ and $H$, denoted $G*H$, is the graph union where exactly one vertex is chosen from each graph and the chosen vertices are identified. Regardless of the choices made, the reliability polynomial of the one-point union is the product of the reliability polynomials of the two graphs. Ultimately, the graphs we use will be one-point unions of complete graphs, and in the following section we demonstrate that the second derivative of the reliability of such graphs can be made arbitrarily small outside a given interval.
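Since $R(G*H) = R(G)\,R(H)$, computing the reliability of a one-point union is just a polynomial product. A one-line sketch, with polynomials represented as ascending coefficient lists in $q$; as a sanity check, the one-point union of two single edges (each with $r_2 = 1 - q$) is a two-edge path, which is connected exactly when both edges function, so its reliability is $(1-q)^2$.

```python
import numpy as np

def one_point_union(rG, rH):
    """Multiply reliability polynomials, given as ascending coefficient
    lists in q, since R(G*H) = R(G) * R(H) for a one-point union."""
    return np.convolve(rG, rH)
```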
In this section we establish a few facts about reliability polynomials in general. The reliability polynomial of a graph $G$ can be written in the form \begin{equation}\label{reldefn}
R(G)(q) = \sum_{i=0}^m N_i (1-q)^i q^{m-i},
\end{equation} where $m$ is the number of edges in $G$, and $N_i$ counts the number of connected spanning subgraphs of $G$ with $i$ edges.
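For small graphs, the coefficients $N_i$ in \eqref{reldefn} can be computed by brute force over all edge subsets; the following sketch is exponential in $m$ and purely illustrative.

```python
from itertools import combinations

def reliability_coeffs(n, edges):
    """N_i = number of connected spanning subgraphs of G with i edges, so that
    R(G)(q) = sum_i N_i (1-q)^i q^(m-i). Exponential in m; small graphs only."""
    def spans(subset):
        # depth-first search from vertex 0 over the chosen edge subset
        adj = {v: [] for v in range(n)}
        for u, v in subset:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    m = len(edges)
    return [sum(spans(s) for s in combinations(edges, i)) for i in range(m + 1)]

def reliability(n, edges, q):
    """Evaluate R(G)(q) from the N_i coefficients."""
    m = len(edges)
    N = reliability_coeffs(n, edges)
    return sum(N[i] * (1 - q) ** i * q ** (m - i) for i in range(m + 1))
```

For the triangle $K_3$ this gives $N = (0, 0, 3, 1)$, which expands to $R(K_3)(q) = 1 - 3q^2 + 2q^3$, matching the direct computation later in this paper.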
We first prove a fact about all polynomials having a form similar to \eqref{reldefn}.
\begin{prop}\label{derivbound} Let $f(q) = \sum_{i=0}^m N_i(1-q)^iq^{m-i}$, where the $N_i$ are either all non-negative or all non-positive. Then for $q \in [0,1]$,
\[q(1-q)|f'(q)| \leq m|f(q)|.\]
\end{prop}
\begin{proof}
We first compute
\begin{equation}\label{relderiv}
f'(q) = \sum_{i=0}^m N_i ((m-i)(1-q)^i q^{m-i-1} - i(1-q)^{i-1}q^{m-i}).
\end{equation}
Then we have
\begin{gather*}
q(1-q)|f'(q)| = \left|\sum_{i=0}^m (m-i)N_i(1-q)^{i+1} q^{m-i} - iN_i(1-q)^{i}q^{m-i+1}\right|\\
\leq m(1-q)\left|\sum_{i=0}^m N_i(1-q)^iq^{m-i}\right| + mq\left|\sum_{i=0}^m N_i(1-q)^iq^{m-i}\right| \leq m|f(q)|.
\end{gather*}
Note that the first inequality, where we bound the coefficients uniformly by $m$, uses the hypothesis that the $N_i$ have the same sign.
\end{proof}
In the form \eqref{reldefn} of a reliability polynomial, the coefficient $N_i$ counts a subset of the subgraphs of $G$ with $i$ edges, and thus we have $0 \leq N_i \leq \binom{m}{i}$. Then clearly the above proposition applies when $f$ is a reliability polynomial. However, we can also consider
\begin{equation}
(1-R(G))(q) = \sum_{i=0}^m \left(\binom{m}{i} - N_i\right)(1-q)^i q^{m-i}.
\end{equation}
By the above observation, the coefficients of this polynomial are non-negative, and so \hyperref[derivbound]{Proposition~\ref*{derivbound}} applies to $1-R(G)$ as well. Next we show that it can also be applied to $R(G)'(q)$.
\begin{prop}
Let $f(q) = \sum_{i=0}^m N_i(1-q)^iq^{m-i}$ be a reliability polynomial. Then $f'(q)$ can be written in the same form, and all of the coefficients are non-positive.
\end{prop}
\begin{proof}
We already computed the derivative of such a function in \eqref{relderiv}; collecting like terms gives
\begin{equation}\label{relderivcollect}
f'(q) = \sum_{i=0}^{m-1} ((m-i)N_i - (i+1)N_{i+1})(1-q)^iq^{m-i-1}.
\end{equation}
Recall that $N_i$ represents the number of connected spanning subgraphs of $G$ with $i$ edges. Thus we can think of $(m-i)N_i$ as counting the pairs consisting of a connected spanning subgraph of size $i$ together with a particular edge not in the subgraph. Similarly, $(i+1)N_{i+1}$ counts the number of pairs consisting of a connected spanning subgraph of size $i+1$ and an edge in the subgraph. However, since adding an edge to a connected spanning graph gives another connected spanning graph, $(m-i)N_i \leq (i+1) N_{i+1}$. Thus each of the coefficients in the expression for $f'(q)$ is non-positive.
\end{proof}
We note that, by the preceding argument, the absolute value of the coefficient of $(1-q)^i q^{m-i-1}$ in $R(G)'(q)$ counts the number of pairs consisting of a connected spanning subgraph of size $i+1$ and a bridge in that subgraph. This gives the following result.
\begin{prop}\label{endpointderiv}
Let $f(q)$ be the reliability polynomial of a graph $G$. If $G$ has no bridges, then $f'(0) = 0$. If $G$ has at least 3 vertices, then $f'(1) = 0$.
\end{prop}
\begin{proof}
From \eqref{relderivcollect}, we see that $f'(0) = 0$ when the coefficient of $(1-q)^{m-1}$ is zero. Since there is only a single subgraph of size $m$, namely the graph $G$, this is equivalent to $G$ having no bridges.
Similarly, $f'(1) = 0$ whenever the coefficient of $q^{m-1}$ is zero. In a subgraph with one edge, that edge is a bridge; thus, $f'(1) = 0$ if and only if there are no connected spanning subgraphs with one edge, which is clearly true if the graph $G$ has 3 or more vertices.
\end{proof}
\section{Bounding the Reliability of Complete Graphs}
For the first half of the proof, we will work directly with the reliability polynomials of complete graphs. We make use of a recurrence relation given in Colbourn's book \cite{Colbourn}, which we restate here. If $r_n(q)$ is the reliability of the complete graph $K_n$, then
\begin{align*}
r_1 &= 1;\\
r_n &= 1 - \sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}r_k.
\end{align*}
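The recurrence can be implemented directly over integer coefficient lists in $q$, since each term just shifts $r_k$ by $q^{k(n-k)}$ and scales it by a binomial coefficient. A short sketch (`complete_reliability` is an illustrative name):

```python
from math import comb

def complete_reliability(n):
    """Ascending coefficient list in q for r_n = R(K_n), via the recurrence
    r_n = 1 - sum_{k=1}^{n-1} C(n-1, k-1) q^{k(n-k)} r_k,  with r_1 = 1."""
    r = {1: [1]}
    for j in range(2, n + 1):
        out = [0] * (comb(j, 2) + 1)  # deg r_j <= C(j, 2), the edge count of K_j
        out[0] = 1
        for k in range(1, j):
            shift, c = k * (j - k), comb(j - 1, k - 1)
            for e, a in enumerate(r[k]):
                out[shift + e] -= c * a  # subtract c * q^shift * r_k
        r[j] = out
    return r[n]
```

For example, `complete_reliability(3)` returns `[1, 0, -3, 2]`, i.e., $r_3 = 1 - 3q^2 + 2q^3$, matching the direct computation below; and every $r_n$ satisfies $r_n(0) = 1$ and $r_n(1) = 0$.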
To make these polynomials easier to work with, we bound them by simpler polynomials.
\begin{lem} Let $\alpha = \frac{1}{8}$, and let $r_n$ denote $R(K_n)$. For $q \in [0,\alpha]$ and $n \geq 2$, we have
\[1 - (n+1)q^{n-1} \leq r_n(q) \leq 1 - (n-1)q^{n-1}.\]
\end{lem}
\begin{proof}
We proceed by induction on $n$.
The base case where $n=2$ is clear, since $r_2 = 1-q$.
Now, suppose the claim is true for $2 \leq k \leq n-1$. We first note that, since $r_n = 1 - \sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}r_k$, we can rewrite the inequality we would like to prove as
\[(n-1)q^{n-1} \leq \sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}r_k \leq (n+1)q^{n-1}.\]
We will first prove the left hand side of the inequality. From the induction hypothesis, $r_{n-1} \geq 1 - nq^{n-2}$, and the fact that $r_k \geq 0$ for all $k$, we have
\[\sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}r_k \geq q^{n-1} + (n-1)q^{n-1}(1 - nq^{n-2}).\]
If we can show that $q^{n-1}(1 - n(n-1)q^{n-2}) \geq 0$, it will follow that the right hand side is at least $(n-1)q^{n-1}$, which is what we wanted. Since $q \leq \alpha = \frac{1}{8} \leq \frac{1}{6}$, we have $n(n-1)q^{n-2} \leq 1$ for $n \geq 3$, and the claim holds.
We now proceed to the right hand side of the inequality. Since $r_k \leq 1$ for all $k$, and $k(n-k) \geq 2n-4$ when $2 \leq k \leq n-2$, we have \begin{align*}
\sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}r_k &\leq \sum_{k=1}^{n-1} \binom{n-1}{k-1}q^{k(n-k)}\\
&\leq nq^{n-1} + \sum_{k=2}^{n-2} \binom{n-1}{k-1}q^{k(n-k)} \\
&\leq nq^{n-1} + 2^{n-1} q^{2n-4}.
\end{align*}
To show that this is less than or equal to $(n+1)q^{n-1}$, it suffices to show that $2^{n-1}q^{n-3} \leq 1$. If $n \geq 4$, then it is enough to take $q \leq \frac{1}{8}$. We may verify directly that
\[
r_3 = 1 - q^2 - 2q^2(1 - q) = 1 - 3q^2 + 2q^3 \leq 1 - 2q^2
\]
provided $q \leq \frac{1}{2}$, which is clearly satisfied since $q \leq \frac{1}{8}$.
This completes our induction proof, and so the bounds hold for all $n \geq 2$.
\end{proof}
We also prove another bound which will be used during the proof of the following theorem.
\begin{lem}\label{roverp2} Let $r_n(q) = R(K_n)(q)$, with $n \geq 3$. Then for $q \in [0,1]$,
\[\frac{r_n(q)}{(1-q)^2} \leq \frac{1}{2}\binom{n}{2}^2.\]
\end{lem}
\begin{proof}
Let $m = \binom{n}{2}$ denote the number of edges of $K_n$. For $2 \leq i \leq m$, we have
\[\binom{m}{i} = \frac{m(m-1)}{i(i-1)}\binom{m-2}{i-2} \leq \frac{m^2}{2}\binom{m-2}{i-2}.\]
Recall that we can write $r_n(q) = \sum_{i=0}^{m} N_i (1-q)^iq^{m-i}$. Since $N_i$ is the number of connected spanning subgraphs of $K_n$ of size $i$, we have $0 \leq N_i \leq \binom{m}{i}$ for all $i$, and $N_i = 0$ for $0 \leq i < n-1$. Since $n \geq 3$, we can say
\begin{gather*}
\frac{r_n(q)}{(1-q)^2} = \sum_{i=0}^{m-2} N_{i+2}(1-q)^iq^{m-i-2} \leq \sum_{i=0}^{m-2} \binom{m}{i+2}(1-q)^iq^{m-i-2}\\
\leq \frac{m^2}{2} \sum_{i=0}^{m-2} \binom{m-2}{i}(1-q)^iq^{m-2-i} = \frac{m^2}{2} = \frac{1}{2}\binom{n}{2}^2.\qedhere
\end{gather*}
\end{proof}
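As a concrete illustration of the lemma (an aside of ours, not in the original), for $n=3$ the polynomial factors as $r_3(q) = 1-3q^2+2q^3 = (1-q)^2(1+2q)$, so the ratio equals $1+2q \leq 3$, comfortably below the bound $\frac{1}{2}\binom{3}{2}^2 = 4.5$:

```python
# r_3(q) = 1 - 3q^2 + 2q^3 factors as (1-q)^2 (1+2q), so the ratio in the
# lemma equals 1 + 2q for n = 3; check the factorisation and the bound.
for i in range(100):
    q = i / 100.0
    r3 = 1 - 3 * q**2 + 2 * q**3
    assert abs(r3 - (1 - q)**2 * (1 + 2 * q)) < 1e-12
    if q < 1:
        assert r3 / (1 - q)**2 <= 4.5  # = (1/2) * C(3,2)^2
```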
Now we have enough in place to prove our first theorem.
\begin{thm} Let $0 < a < b < \frac{1}{8}$, and let $\epsilon > 0$. Then there exists a graph $G$ such that $|R(G)''(q)| \leq \epsilon$ for $q \in [0,1] \setminus [a,b]$.
\end{thm}
\begin{proof}
We first recall that $r_n$ denotes the reliability of the complete graph $K_n$ and the inequalities
\[1-(n+1)q^{n-1} \leq r_n(q) \leq 1-(n-1)q^{n-1}\]
are valid for $q \in [0,1/8]$. Since we are only interested in upper bounds, we reformulate the left hand side as $1 - r_n(q) \leq (n+1)q^{n-1}$.
We compute
\[(r_n^\ell)''(q) = \ell(\ell-1)r_n^{\ell-2}(q)\left(r_n'(q)\right)^2 + \ell r_n^{\ell-1}(q)r_n''(q),\]
from which we now obtain various bounds. Note that the coefficients of $1-r_n(q)$ are all non-negative. From our lemmas, we have
\begin{align*}
&q(1-q)|r_n'(q)| \leq \binom{n}{2}|r_n(q)|;\\
&q(1-q)|r_n'(q)| = q(1-q)|(1-r_n)'(q)| \leq \binom{n}{2}|1-r_n(q)|;\\
&q(1-q)|r_n''(q)| \leq \binom{n}{2}|r_n'(q)|.
\end{align*}
For $q \in [0,1/8]$, we have
\begin{align*}
q^2(1-q)^2|(r_n^{\ell})''(q)| &\leq \ell(\ell-1)\frac{n^2(n-1)^2}{4}|r_n^{\ell-2}(q)||1-r_n(q)|^2\\
&\quad + \ell \frac{n^2(n-1)^2}{4}|r_n^{\ell-1}(q)||1-r_n(q)|\\
&\leq \frac{\ell^2n^6}{4}(1-(n-1)q^{n-1})^{\ell-2}q^{2n-2}\\
&\quad + \frac{\ell n^5}{4}(1-(n-1)q^{n-1})^{\ell-1}q^{n-1},
\end{align*}
where we have, in addition to using the lemmas, simplified $(n-1)(n+1) \leq n^2$. We note that $(1-q)^2 \geq \frac{1}{4}$ when $q \leq \frac{1}{8}$, and so we may rearrange this inequality to obtain
\begin{align*}
|(r_n^{\ell})''(q)| &\leq \ell^2n^6(1-(n-1)q^{n-1})^{\ell-2}q^{2n-4}\\
&\quad +\ell n^5(1-(n-1)q^{n-1})^{\ell-1}q^{n-3}.
\end{align*}
We denote the two terms in the above bound by $f(q)$ and $g(q)$, respectively. Note that $f$ and $g$ are both dependent on our choice of $\ell$ and $n$; from here on it will be useful to assume that $n \geq 4$.
We'd like to demonstrate that $f(q)+g(q) < \epsilon$ on $[0,a]$ and $[b,1/8]$. To do this, we first show that $f+g$ is increasing on the first interval and decreasing on the second; then it suffices to show that $f(a)+g(a) < \epsilon$ and $f(b)+g(b) < \epsilon$.
Taking the derivative of $f$ and factoring, we see that the sign of $f'(q)$ is the same as that of the expression
\[-(\ell-2)(n-1)^2q^{n-1} + (2n-4)(1-(n-1)q^{n-1}).\]
This expression is non-negative precisely when
\[((\ell-2)(n-1)^2 + (2n-4)(n-1))q^{n-1} \leq 2n-4.\]
After some estimation, we see that if $\ell n(n-1)q^{n-1} \leq 2(n-2)$, then $f'(q) \geq 0$. Since $n \geq 4$, it suffices to show $\ell nq^{n-1} \leq 1$.
The sign of $g'(q)$ is the same as that of
\[-(\ell-1)(n-1)^2q^{n-1} + (n-3)(1-(n-1)q^{n-1});\]
similarly, if $\ell n(n-1)q^{n-1} \leq n-3$, then $g'(q) \geq 0$. Since $n \geq 4$, it suffices to show that $\ell nq^{n-1} \leq \frac{1}{4}$; if this is true, then the condition above holds as well, and so $(f+g)'(q) \geq 0$. Moreover, since $\ell nq^{n-1} \leq \ell na^{n-1}$ for $q \in [0,a]$, it is sufficient to show that $\ell na^{n-1} \leq \frac{1}{4}$.
By the same sort of approximation, we see that if $\ell q^{n-1} \geq 1$, then $(f+g)'(q) \leq 0$. Again, if this is true at $b$, then it clearly holds for all $q \in [b,1/8]$ as well.
For $q \in [1/8,1]$, we have $\frac{1}{q^2} \leq 64$. Using our usual bounds, along with \hyperref[roverp2]{Lemma \ref*{roverp2}}, we see
\begin{align*}
|(r_n^{\ell})''(q)| &\leq \frac{1}{q^2} \ell^2 \binom{n}{2}^2 r_n^{\ell-1}(q) \frac{r_n(q)}{(1-q)^2}\\
&\leq 16\ell^2 \binom{n}{2}^4 r_n^{\ell-1}(q) \leq \ell^2n^8 r_n^{\ell-1}(q)\\
&\leq \ell^2n^8 r_n^{\ell-1}(1/8) \leq \ell^2n^8 (1-(n-1)(1/8)^{n-1})^{\ell-1}.
\end{align*}
Note that we were able to replace $q$ with $1/8$ since $q \in [1/8,1]$ and $r_n$ is decreasing.
We restate our conditions here: we want to find $\ell \geq 1$ and $n \geq 4$ such that
\begin{itemize}
\item $\ell na^{n-1} \leq \frac{1}{4}$,
\item $\ell b^{n-1} \geq 1$,
\item $f(a)+g(a) \leq \epsilon$,
\item $f(b)+g(b) \leq \epsilon$, and
\item $\ell^2n^8 (1-(n-1)(1/8)^{n-1})^{\ell-1} \leq \epsilon$.
\end{itemize}
If we can find $\ell,n$ satisfying these conditions, then the arguments above show that $|R(K_n^{\ell})''(q)| \leq \epsilon$ for $q \in [0,1] \setminus [a,b]$.
To show that all of the above inequalities can be satisfied, we define a sequence of $\ell_i$ and $n_i$ such that the quantities on the left become arbitrarily small (or large, in the case of the second one) for sufficiently large $i$. We begin by choosing positive integers $N,k$ such that $a < N^{-1/k} < b$; this is possible because there is an integer between $b^{-k}$ and $a^{-k}$ for sufficiently large $k$. Then, we let $\ell_i = N^i$ and $n_i = ik$.
Consider the first expression: we rewrite
\[\ell_i n_i a^{n_i-1} = \frac{ik}{a}(Na^k)^i,\]
which becomes small as $i$ goes to infinity, by an application of l'Hospital's rule and the fact that $Na^k < 1$. Similarly, we can write
\[\ell_i b^{n_i-1} \geq (Nb^k)^i,\]
which tends to infinity since $Nb^k > 1$.
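These limits are easy to observe numerically. In the sketch below (illustrative only; the values $a=0.05$, $b=0.1$, $N=200$, $k=2$ are our own choices satisfying $a < N^{-1/k} < b$), the first quantity decays to $0$ while the second blows up as $i$ grows:

```python
a, b, N, k = 0.05, 0.1, 200, 2
assert a < N ** (-1 / k) < b  # so N a^k < 1 < N b^k

def ell(i): return N ** i   # ell_i = N^i
def n(i): return i * k      # n_i = i k

# ell_i n_i a^{n_i - 1} = (ik/a)(N a^k)^i -> 0,  ell_i b^{n_i - 1} >= (N b^k)^i -> infinity
small = [ell(i) * n(i) * a ** (n(i) - 1) for i in range(1, 40)]
large = [ell(i) * b ** (n(i) - 1) for i in range(1, 40)]
assert small[-1] < 1e-6 and small[-1] < small[0]
assert large[-1] > 1e6 and large[-1] > large[0]
```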
For the next two inequalities, we analyze $f$ and $g$ separately. First, we have
\begin{align*}
\log(f(q)) &= 2\log(\ell_i) + 6\log(n_i) \\
&\quad + (\ell_i-2)\log(1-(n_i-1)q^{n_i-1}) + (2n_i-4)\log(q).
\end{align*}
Note that $\log(1 - (n_i-1)q^{n_i-1}) \leq -(n_i-1)q^{n_i-1}$; since we'd like to show that $\log(f(q)) \to -\infty$ as $i \to \infty$, we may make this replacement. Thus we must show that the expression
\begin{align*}
2i\log(N) + 6\log(ik) - (N^i-2)(ik-1)q^{ik-1} + 2i\log(q^k) - 4\log(q)
\end{align*}
becomes arbitrarily small as $i$ tends to infinity. The term $4\log(q)$ is constant with respect to $i$, and so after some rearrangement, we need only consider
\begin{align*}
&2i\log(N) + 6\log(ik) - (N^i-2)(ik-1)q^{ik-1} + 2i\log(q^k)\\
&\leq 2i\log(Nq^k) + 6\log(ik) - N^{-1}(Nq^k)^i.
\end{align*}
For $q=a$, the first term tends to negative infinity, and dominates the second, while the third term is also negative; thus $\log(f(a))$ can be made arbitrarily small, and $f(a) < \frac{\epsilon}{2}$ for sufficiently large $i$. For $q=b$, we have $Nq^k > 1$, and so the third term dominates the first two terms, and again we can make $f(b)$ arbitrarily small.
Proceeding similarly, we collect the nonconstant terms of $\log(g(q))$ and make the same upward approximation as before:
\begin{align*}
\log(\ell) &+ 5\log(n) - (\ell-1)(n-1)q^{n-1} + n\log(q)\\
&= i\log(Nq^k) + 5\log(ik) - (N^i-1)(ik-1)q^{ik-1}\\
&\leq i\log(Nq^k) + 5\log(ik) - N^{-1}(Nq^k)^i.
\end{align*}
The arguments showing that $g(a)$ and $g(b)$ become arbitrarily small for sufficiently large $i$ are identical to those given above for $f$.
Finally, we consider the expression in the fifth inequality. We have
\begin{align*}
\log(\ell^2&n^8 (1-(n-1)(\tfrac{1}{8})^{n-1})^{\ell-1})\\
&\leq 2i\log(N) + 8\log(ik) -(N^i-1)(ik-1)(\tfrac{1}{8})^{ik-1}\\
&\leq 2i\log(N) + 8\log(ik) -N^{-1}(N(\tfrac{1}{8})^k)^i.
\end{align*}
Since $N(\frac{1}{8})^k > 1$, the last term dominates the others, and evidently this can also be made arbitrarily small.
Thus, if we take $i$ sufficiently large, we see that $G = K_{n_i}^{\ell_i}$ satisfies the desired condition.
\end{proof}
\section{Main Result}
Now we proceed to the proof that there are reliability polynomials with arbitrarily many inflection points. We choose a collection of intervals
\[I_{k,m} = (a_{k,m},b_{k,m}) \subset [0,1],\]
for $k \geq 0$ and $m \geq 1$, such that
\begin{itemize}
\item $0 < a_{0,1} < b_{0,1} < a_{1,1} < b_{1,1} < a_{2,1} < \cdots$, and $b_{k,1} < \frac{1}{8}$ for all $k \geq 0$;
\item $I_{k,m+1} \subset I_{k,m}$;
\item $\ell(I_{k,m}) = b_{k,m} - a_{k,m} \leq 2^{-m}$.
\end{itemize}
For $n \geq 3$, $m \geq 1$, $K_n^m$ has at least three vertices and no bridges. By our previous theorem, along with \hyperref[endpointderiv]{Lemma \ref*{endpointderiv}}, we can find a collection of reliability polynomials $s_{k,m}: [0,1] \to [0,1]$, satisfying the following properties.
\begin{itemize}
\item $s_{k,m}(0) = 1$ and $s_{k,m}(1) = 0$;
\item $s_{k,m}'(0) = s_{k,m}'(1) = 0$, and $s_{k,m}'(q) \leq 0$ for all $q$;
\item $|s_{k,m}''(q)| \leq 2^{-m-1}$ for $q \notin I_{k,m}$.
\end{itemize}
We now collect the consequences as a series of lemmas.
\begin{lem} If $q \leq a_{k,m}$, then $|s_{k,m}'(q)| \leq 2^{-m-1}$ and $|1 - s_{k,m}(q)| \leq 2^{-m-1}$. If $q \geq b_{k,m}$, then $|s_{k,m}'(q)| \leq 2^{-m-1}$ and $|s_{k,m}(q)| \leq 2^{-m-1}$.
\end{lem}
\begin{proof} Consider the first statement; we suppose otherwise and apply the mean value theorem. That is, we suppose there is a $c \in [0,a_{k,m}]$ such that $|s_{k,m}'(c)| \geq 2^{-m-1}$. But this implies that there is a $d \in [0,c]$ with
\[|s_{k,m}''(d)| = \frac{|s_{k,m}'(c) - s_{k,m}'(0)|}{|c - 0|} > 2^{-m-1},\]
contradicting our assumption about the $s_{k,m}$.
Similarly applying the mean value theorem again shows that if there was a $c \in [0,a_{k,m}]$ with $|s_{k,m}(c) - 1| \geq 2^{-m-1}$ there would be a $d$ at which the above bound is not satisfied, another contradiction.
The arguments for $q \in [b_{k,m},1]$ are similar.
\end{proof}
Now we'd like to show how {\em large} the derivatives must be on the intervals $I_{k,m}$. In particular, we'd like to find a single point in each interval at which both the first and second derivatives are ``sufficiently large".
\begin{lem} In each $I_{k,m}$, there is a point $q_{k,m}$ satisfying $s_{k,m}'(q_{k,m}) \leq -2^{m - 2}$ and $s_{k,m}''(q_{k,m}) \geq 2^{2m - 2}$.
\end{lem}
\begin{proof}
We note that for every $k \geq 0$, $m \geq 1$, we have $s_{k,m}(a_{k,m}) \geq \frac{3}{4}$ and $s_{k,m}(b_{k,m}) \leq \frac{1}{4}$; in particular, the difference is at least $\frac{1}{2}$. Then by the mean value theorem, there is a $c \in I_{k,m}$ where
\[|s_{k,m}'(c)| = \frac{|s_{k,m}(b_{k,m})-s_{k,m}(a_{k,m})|}{|b_{k,m}-a_{k,m}|} \geq \frac{2^{-1}}{2^{-m}} = 2^{m - 1}.\]
Note that since $s_{k,m}'(q) \leq 0$ for all $q$, we have $s_{k,m}'(c) \leq -2^{m-1}$. Now let $d$ be the smallest number such that $d > c$ and $s_{k,m}'(d) = -2^{m - 2}$; applying the mean value theorem again shows that there is a $q_{k,m} \in [c,d] \subset I_{k,m}$ satisfying
\[s_{k,m}''(q_{k,m}) = \frac{s_{k,m}'(d) - s_{k,m}'(c)}{d - c} \geq \frac{2^{m-2}}{2^{-m}} = 2^{2m - 2}.\]
Note that by the definition of $d$, we must have $s_{k,m}'(q_{k,m}) \leq s_{k,m}'(d) = -2^{m - 2}$.
\end{proof}
Now we show that there exist products of $s_{k,m}$ (that is, reliability functions of one point unions of complete graphs) with arbitrarily many inflection points. This proof is modeled after the proof of the main result given in \cite{GM}.
\begin{thm} There exist functions $g_k : [0,1] \to [0,1]$ for $k \geq 0$, such that $g_k = s_{k,m_k} g_{k-1}$ for $k \geq 1$, and $g_k$ has at least $2k$ inflection points.
\end{thm}
\begin{proof}
We prove this by induction on $k$. We are going to show that there is a sequence of reliability polynomials $g_k$, integers $m_k$, and points $q_k \in I_{k,m_k}$, for $k \geq 0$, such that
\begin{itemize}
\item $g_k''(q_i) > 0$ for $i=0,...,k$;
\item $\int_{q_{i-1}}^{q_i} g_k''(q) dq < 0$ for $i=1,...,k$;
\item $g_k = s_{k,m_k} g_{k-1}$ if $k \geq 1$.
\end{itemize}
We begin with the base case $k=0$: we simply let $g_0 = s_{0,1}$ and $q_0 = q_{0,1}$. By our lemma, $s_{0,1}''(q_{0,1}) > 0$, and so the first condition holds; the other conditions hold vacuously.
Now suppose that we have found a $g_{k-1}$ satisfying the above properties, and we would like to find an $m_k$ such that $g_k = s_{k,m_k} g_{k-1}$ also satisfies them.
We first consider
\begin{equation}
(s_{k,m}g_{k-1})' = s_{k,m}'g_{k-1} + s_{k,m}g_{k-1}'.
\end{equation}
For each $i < k$, $q_i < a_{k,1}$, so $\underset{m \to \infty}{\lim} s_{k,m}'(q_i) = 0$ and $\underset{m \to \infty}{\lim} s_{k,m}(q_i) = 1$. It follows that
\[\underset{m \to \infty}{\lim} (s_{k,m}g_{k-1})'(q_i) = g_{k-1}'(q_i).\]
Since
\[\int_{q_{i-1}}^{q_i} (s_{k,m}g_{k-1})''(q) dq = (s_{k,m}g_{k-1})'(q_i) - (s_{k,m}g_{k-1})'(q_{i-1}),\]
we have
\[\lim_{m \to \infty} \int_{q_{i-1}}^{q_i} (s_{k,m}g_{k-1})''(q) dq = \int_{q_{i-1}}^{q_i} g_{k-1}''(q) dq < 0.\]
Now we look at $(s_{k,m}g_{k-1})'(q_{k,m})$, noting that the precise location of $q_{k,m}$ depends on $m$. By construction, we have $s_{k,m}'(q_{k,m}) \leq -2^{m-2}$, but $|s_{k,m}(q_{k,m})| \leq 1$; thus, by taking $m$ sufficiently large, we can guarantee that $(s_{k,m}g_{k-1})'(q_{k,m}) - (s_{k,m}g_{k-1})'(q_{k-1}) < 0$.
Next we need to consider
\begin{equation}
(s_{k,m}g_{k-1})'' = s_{k,m}''g_{k-1} + 2s_{k,m}'g_{k-1}' + s_{k,m}g_{k-1}''.
\end{equation}
For $q < a_{k,1}$, we have $\underset{m \to \infty}{\lim} (s_{k,m}g_{k-1})''(q) = g_{k-1}''(q)$, since $\underset{m \to \infty}{\lim} s_{k,m}''(q) = 0$, $\underset{m \to \infty}{\lim} s_{k,m}'(q) = 0$, and $\underset{m \to \infty}{\lim} s_{k,m}(q) = 1$. Thus by the induction hypothesis, for sufficiently large $m$ we have $(s_{k,m}g_{k-1})''(q_i) > 0$ for $i=0,...,k-1$.
Now we need only consider $(s_{k,m}g_{k-1})''(q_{k,m})$. Since $g_{k-1}$ is a reliability polynomial, $g_{k-1}' \leq 0$, so the second term in the expansion of $(s_{k,m}g_{k-1})''$ is non-negative. Since $s_{k,m} < 1$, the third term is bounded by the maximum of $|g_{k-1}''|$ on $[0,1]$. Finally, since $g_{k-1}$ is bounded away from $0$ on $I_{k,1}$ and $\underset{m \to \infty}{\lim} s_{k,m}''(q_{k,m}) = \infty$, it follows that $(s_{k,m}g_{k-1})''(q_{k,m}) > 0$ for sufficiently large $m$.
If we let $m_k$ be sufficiently large such that all of the above constructions hold, and define $q_k = q_{k,m_k}$ and $g_k = s_{k,m_k} g_{k-1}$, then $g_k$ satisfies the induction hypothesis; thus we have constructed the desired $g_k$ for all $k \geq 0$. We know that $g_k''(q_{i-1}) > 0$ and $g_k''(q_i) > 0$ for each $i = 1, \dots ,k$; but $\int_{q_{i-1}}^{q_i} g_k'' dq < 0$ tells us that $g_k''$ is negative somewhere on $[q_{i-1},q_i]$, which implies that there are at least 2 inflection points on this interval. Thus $g_k$ has at least $2k$ inflection points, which was what we wanted to show.
\end{proof}
Note that the polynomial $g_k$ constructed in the above theorem is the product of the reliability polynomials of a number of complete graphs, and thus it is the reliability polynomial of the one-point union of those graphs. In Figure \ref{Examples}, the second derivatives of three such reliability polynomials are shown, demonstrating reliability polynomials of simple graphs having 3, 4, and 5 inflection points.
\begin{figure}
\centering
\includegraphics[width=6cm]{c3ip.pdf}
\includegraphics[width=6cm]{c4ip.pdf}
\includegraphics[width=6cm]{c5ip.pdf}
\caption{\label{Examples} The graphs of the second derivatives of $R(K_{2}^{5} * K_{3}^{4} * K_{4}^{3} * K_{5}^{92})$, $R(K_{2}^{2} * K_{4}^{2} * K_{14}^{750})$, and $R(K_{2}^{5} * K_{3}^{4} * K_{5}^{116} * K_{14}^{100,000,000})$, respectively.}
\end{figure}
\section{Acknowledgements}
This research was conducted during the 2015 Research Experience for Undergraduates at University of Texas at Tyler under the direction of Dr. David Milan. We would like to thank the organizers of the UT Tyler REU for their guidance during this project. This REU was supported by the National Science Foundation (DMS-1359101).
\bibliographystyle{amsplain}
% arXiv:1007.4222
\title{Strict inequality in the box-counting dimension product formulas}
\begin{abstract}
It is known that the upper box-counting dimension of a Cartesian product satisfies the inequality $\dim_{B}\left(F\times G\right)\leq \dim_{B}\left(F\right) + \dim_{B}\left(G\right)$, whilst the lower box-counting dimension satisfies the inequality $\dim_{LB}\left(F\times G\right)\geq \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right)$. We construct Cantor-like sets to demonstrate that both of these inequalities can be strict.
\end{abstract}
\section{Preliminaries}
\label{intro}
\noindent
In a metric space $X$ the Hausdorff dimension of a compact set $F\subset X$ is defined as the supremum of the $d\geq 0$ such that the $d$-dimensional Hausdorff measure $\mathcal{H}^{d}\left(F\right)$ of $F$ is infinite. The Hausdorff dimension takes values in the non-negative reals and extends the elementary integer-valued topological dimension in the sense that for a large class of `reasonable' sets these two values coincide. Sets with non-coinciding Hausdorff and topological dimensions are called `fractal', a term coined by Mandelbrot in his original study of such sets \cite{Mandelbrot75}. Hausdorff introduced this generalised dimension in \cite{Hausdorff18} and its subsequent extensive use in geometric measure theory is developed by Federer \cite{BkFederer69} and Falconer \cite{BkFalconer85}. The fact that the Hausdorff dimension satisfies $\dim_{H}\left(F\times G\right)\geq \dim_{H}\left(F\right) + \dim_{H}\left(G\right)$ for the Cartesian product of sets was proved in full generality in \cite{Marstrand53} (and later summarised in \cite{BkFalconer03} \S 7.1 `Product Formulae') after some partial results: The inequality was proved in \cite{BesicovitchMoran45} with the restriction that $0<\mathcal{H}^{s}\left(F\right),\mathcal{H}^{t}\left(G\right)<\infty$ for some $s,t$ and was extended to a larger class of sets in \cite{Eggleston53}. The paper \cite{BesicovitchMoran45} also provides an example for which there is a strict inequality in the product formula and again this is summarised in \cite{BkFalconer03} \S 7.1.
\paragraph*{}
In this paper we prove similar product inequalities for the upper and lower box-counting dimensions which are less familiar generalisations of dimension (treated briefly in \cite{BkFalconer03}; see \cite{BkRobinson10} for a more detailed exposition) and have applications to dynamical systems (see, for example, \cite{BkRobinson01}). Our main result is an example analogous to that in \cite{BesicovitchMoran45} which demonstrates that the box-counting product inequalities can be strict.
In a metric space $X$ the upper and lower box-counting dimensions of a compact set $F\subset X$ are defined by
\begin{align}
\dim_{B}\left(F\right) &= \limsup_{\delta\searrow 0}\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta}\label{dimB def}
\intertext{and}
\dim_{LB}\left(F\right) &= \liminf_{\delta\searrow 0}\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta}\label{dimLB def}
\end{align}
respectively, where $N\left(F,\delta\right)$ is the smallest number of sets with diameter at most $\delta$ which form a cover (called a $\delta$-cover) of $F$. Essentially, if $N\left(F,\delta\right)$ scales like $\delta^{-\varepsilon}$ as $\delta\rightarrow 0$ then these quantities capture $\varepsilon$ which gives an indication of how many more sets are required to cover $F$ as the length-scales decrease and so encodes how `spread out' the set $F$ is at small length-scales. These limits are unchanged if we replace $N\left(F,\delta\right)$ with one of many similar quantities (discussed by Falconer in \cite{BkFalconer03} \S 3.1 `Equivalent Definitions'). Of these quantities we will also make use of the largest number of disjoint closed balls of diameter $\delta$ with centres in $F$, which we denote $M\left(F,\delta\right)$.
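To make the definitions concrete, consider the middle-third Cantor set $C$: at stage $j$ it is covered by its own $2^{j}$ construction intervals of length $3^{-j}$, so the quotient in \eqref{dimB def} at $\delta = 3^{-j}$ is $\log(2^{j})/\log(3^{j}) = \log 2/\log 3$. A small Python sketch of this computation (ours, purely illustrative):

```python
import math

def cantor_stage(j):
    """Intervals of the j-th stage of the middle-third Cantor construction."""
    ivs = [(0.0, 1.0)]
    for _ in range(j):
        ivs = [piece for a, b in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

j = 10
ivs = cantor_stage(j)
assert len(ivs) == 2 ** j                                      # 2^j intervals...
assert all(abs((b - a) - 3.0 ** -j) < 1e-12 for a, b in ivs)   # ...of length 3^-j
# covering C by these intervals gives N(C, 3^-j) <= 2^j, and the box-counting
# quotient log N / (-log delta) at delta = 3^-j equals log 2 / log 3
est = math.log(len(ivs)) / (j * math.log(3))
assert abs(est - math.log(2) / math.log(3)) < 1e-12
```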
\paragraph*{}
We first recall the standard (see, for example, \cite{BkFalconer03} or \cite{BkRobinson10}) proof of the box-counting product inequalities when $F$ and $G$ are compact sets in metric spaces $X$ and $Y$ respectively, although the inequality \eqref{dimLB product inequality} is less familiar (Robinson provides a proof in \cite{BkRobinson10}). We endow the product space $X\times Y$ with the usual Euclidean metric $d_{X\times Y}=\sqrt{d_{X}^{2}+d_{Y}^{2}}$, but the proof can be adapted for a variety of product metrics (see \cite{BkRobinson10}).
\begin{theorem}\label{product inequalities theorem}
For compact sets $F\subset X$ and $G\subset Y$ the box-counting dimensions of the product set $F\times G$ satisfy the inequalities
\begin{align}
\dim_{B}\left(F\times G\right)&\leq \dim_{B}\left(F\right) + \dim_{B}\left(G\right)\label{dimB product inequality}\\
\dim_{LB}\left(F\times G\right)&\geq \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right)\label{dimLB product inequality}
\end{align}
\end{theorem}
\begin{proof}
Suppose $\set{U_{i}}_{i=1}^{n_{1}}$ and $\set{V_{j}}_{j=1}^{n_{2}}$ are $\delta$-covers of $F$ and of $G$ respectively; then the set of products $\set{U_{i}\times V_{j}\vert i=1\ldots n_{1}, j=1\ldots n_{2}}$ is a cover of $F\times G$ with a total of $n_{1}n_{2}$ elements, and the diameter of each $U_{i}\times V_{j}$ is no greater than $\sqrt{2}\delta$. As there exist $\delta$-covers of $F$ and $G$ consisting of $N\left(F,\delta\right)$ and $N\left(G,\delta\right)$ elements respectively, this construction gives a $\sqrt{2}\delta$-cover of $F\times G$ consisting of $N\left(F,\delta\right)N\left(G,\delta\right)$ elements, hence
\begin{equation}\label{minimal cover inequality}
N\left(F\times G,\sqrt{2}\delta\right) \leq N\left(F,\delta\right)N\left(G,\delta\right).
\end{equation}
Next, if both $\set{x_{i}}_{i=1}^{n_{1}}\subset F$ and $\set{y_{j}}_{j=1}^{n_{2}}\subset G$ are sets of centres of disjoint balls with diameter $\delta$, then the balls with diameter $\delta$ centred on the $n_{1}n_{2}$ points $\set{\left(x_{i},y_{j}\right)\vert i=1\ldots n_{1}, j=1\ldots n_{2}}\subset F\times G$ are also disjoint. As there exist sets of disjoint balls of diameter $\delta$ with centres in $F$ and $G$ consisting of $M\left(F,\delta\right)$ and $M\left(G,\delta\right)$ elements respectively, the above construction gives $M\left(F,\delta\right)M\left(G,\delta\right)$ disjoint balls of diameter $\delta$ with centres in $F\times G$, hence
\begin{equation}\label{maximal disjoint inequality}
M\left(F\times G,\delta\right) \geq M\left(F,\delta\right)M\left(G,\delta\right).
\end{equation}
From \eqref{dimB def} and \eqref{minimal cover inequality} we see that
\begin{align}
\dim_{B}\left(F\times G\right) &= \limsup_{\delta\searrow 0}\frac{\log\left(N\left(F\times G,\sqrt{2}\delta\right)\right)}{-\log\left(\sqrt{2}\delta\right)}\notag\\
&\leq \limsup_{\delta\searrow 0} \left[\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\left(\sqrt{2}\delta\right)} + \frac{\log\left(N\left(G,\delta\right)\right)}{-\log\left(\sqrt{2}\delta\right)}\right]\label{limsup of sum}\\
&\leq \limsup_{\delta\searrow 0} \frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta -\log\sqrt{2}} + \limsup_{\delta\searrow 0} \frac{\log\left(N\left(G,\delta\right)\right)}{-\log\delta -\log\sqrt{2}}\notag\\
& = \dim_{B}\left(F\right) + \dim_{B}\left(G\right). \notag
\intertext{From \eqref{dimLB def} and \eqref{maximal disjoint inequality} we have}
\dim_{LB}\left(F\times G\right) &= \liminf_{\delta\searrow 0}\frac{\log\left(M\left(F\times G,\delta\right)\right)}{-\log\delta}\notag\\
&\geq \liminf_{\delta\searrow 0}\left[\frac{\log\left(M\left(F,\delta\right)\right)}{-\log\delta} + \frac{\log\left(M\left(G,\delta\right)\right)}{-\log\delta}\right]\label{liminf of sum}\\
&\geq \liminf_{\delta\searrow 0}\frac{\log\left(M\left(F,\delta\right)\right)}{-\log\delta} + \liminf_{\delta\searrow 0}\frac{\log\left(M\left(G,\delta\right)\right)}{-\log\delta} \notag \\
&= \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right). \notag
\end{align}
\end{proof}
It is known that there are sets with unequal upper and lower box-counting dimensions (see exercise 3.8 of \cite{BkFalconer03} or \cite{BkRobinson10} \S 3.1); however, if these values coincide for a set $F$ we define their common value as the box-counting dimension of $F$. If sets $F$ and $G$ have well-defined box-counting dimensions then the box-counting dimension of their product is also well behaved.
\begin{corollary}
If $\dim_{B}\left(F\right)=\dim_{LB}\left(F\right)$ and $\dim_{B}\left(G\right)=\dim_{LB}\left(G\right)$ then
\[\dim_{B}\left(F\times G\right) = \dim_{LB}\left(F\times G\right) = \dim_{B}\left(F\right)+\dim_{B}\left(G\right).\]
\end{corollary}
\begin{proof}
From the inequalities \eqref{dimB product inequality} and \eqref{dimLB product inequality} we have
\[
\dim_{B}\left(F\times G\right) \leq \dim_{B}\left(F\right) + \dim_{B}\left(G\right) = \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right) \leq \dim_{LB}\left(F\times G\right)
\]
but from the definition of the box-counting dimension $\dim_{LB}\left(F\times G\right)\leq \dim_{B}\left(F\times G\right)$ so we must have equality throughout the above.
\end{proof}
\paragraph*{}
In the following construction both sets $F$ and $G$ have non-coinciding upper and lower box-counting dimensions so that as $\delta\rightarrow 0$ the box-counting functions $\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta}$ and $\frac{\log\left(N\left(G,\delta\right)\right)}{-\log\delta}$ oscillate between two values. Further, by ensuring that these functions oscillate with different phases (see figure \ref{figure - product graph}) we can produce strict inequalities after \eqref{limsup of sum} and \eqref{liminf of sum} and so yield strict inequality in both product formulas, that is
\[
\dim_{LB}\left(F\times G\right) < \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right) < \dim_{B}\left(F\right) + \dim_{B}\left(G\right) < \dim_{B}\left(F\times G\right).
\]
\begin{figure}
\vspace*{8pt}
\includegraphics[scale=0.5]{productgraph.eps}
\vspace*{8pt}
\caption{The box-counting functions for the sets $F$ and $G$ constructed in the next section (explicitly computed here for small $\delta$) oscillate between $\frac{\log\left(2\right)}{\log\left(3\right)}$ and $\frac{\log\left(2\right)}{\log\left(7\right)}$. The $x$-axis is scaled so that the slow oscillation can be graphed, and this oscillation continues as $\log\left(\log\left(-\log\left(\delta\right)\right)\right)\rightarrow \infty$, that is as $\delta\rightarrow 0$. The differing phases guarantee that the sum of these functions doesn't approach either $\frac{\log\left(2\right)}{\log\left(3\right)}+\frac{\log\left(2\right)}{\log\left(3\right)}$ or $\frac{\log\left(2\right)}{\log\left(7\right)}+\frac{\log\left(2\right)}{\log\left(7\right)}$.}
\label{figure - product graph}
\end{figure}
To this end we construct variations of the Cantor middle-third set from the initial interval $\left[0,1\right]$, except at each stage we use one of the three generators $\gen_{3},\gen_{5}$ or $\gen_{7}$, which remove the middle $\frac{1}{3},\frac{3}{5}$ or $\frac{5}{7}$ of each interval respectively. Note that if we exclusively use the $\gen_{3}$ generator we produce the usual Cantor middle-third set, which has lower and upper box-counting dimension $\frac{\log\left(2\right)}{\log\left(3\right)}$, and if we exclusively use the $\gen_{7}$ generator we produce a similar Cantor set with lower and upper box-counting dimension $\frac{\log\left(2\right)}{\log\left(7\right)}$. By switching generators at certain stages of our construction we can cause $\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta}$ to oscillate between these values, provided that we apply the generators a sufficiently large number of times.
\paragraph*{}
To simplify notation, we choose a sequence of integers $K_{j}=10^{2^{j}}$ which increases sufficiently quickly that
\begin{align}
\sum_{i=0}^{j} K_{i} &< K_{j+1} \label{Kj upper bound}\\
\sum_{i=0}^{j} K_{i} - K_{i-1} &>K_{j-1} \label{Kj lower bound}\\
\frac{\log\left(7\right)}{\log\left(3\right)}K_{j}&<K_{j+1}\label{Kj log3log7}
\intertext{and}
\frac{\sum_{i=0}^{j-1} K_{i}}{K_{j}} &\rightarrow 0 \quad \text{as $j\rightarrow \infty$} \label{Kj ratio limit}
\end{align}
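Since $K_{j+1} = K_j^2$, these properties are immediate; for instance $\sum_{i=0}^{j} K_i < (j+1)K_j \leq K_j^2 = K_{j+1}$, because $j+1 \leq 10 \leq K_j$. A short numerical check (ours, using exact Python integers):

```python
K = [10 ** (2 ** j) for j in range(7)]  # K_j = 10^(2^j), so K_{j+1} = K_j^2

for j in range(6):
    assert K[j + 1] == K[j] ** 2
    assert sum(K[: j + 1]) < K[j + 1]   # the first condition
    # since log 7 / log 3 < 2, the inequality 2 K_j < K_{j+1} implies the third
    assert 2 * K[j] < K[j + 1]

# the ratio (sum_{i<j} K_i) / K_j tends to 0: it is strictly decreasing here
ratios = [sum(K[:j]) / K[j] for j in range(1, 7)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
```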
\section{Constructing sets $F$ and $G$}
We construct two sets $F$ and $G$ using the following iterative procedure: For $F$ apply the following generator at the $j^{\text{th}}$ stage
\[
\begin{cases}
\gen_{3} & K_{6n}<j\leq K_{6n+1}\quad \text{for some} \quad n\in\mathbb{N}\\
\gen_{7} &K_{6n+1}<j\leq K_{6n+2}\quad \text{for some} \quad n\in\mathbb{N}\\
\gen_{5} & \text{otherwise},
\end{cases}
\]
and for $G$ apply the following generator at the $j^{\text{th}}$ stage
\[
\begin{cases}
\gen_{3} & K_{6m+3}<j\leq K_{6m+4}\quad \text{for some} \quad m\in\mathbb{N}\\
\gen_{7} & K_{6m+4}<j\leq K_{6m+5}\quad \text{for some} \quad m\in\mathbb{N}\\
\gen_{5} & \text{otherwise}.
\end{cases}
\]
Let $f_{3}\left(j\right)$ be the number of times $\gen_{3}$ has been applied and $f_{7}\left(j\right)$ the number of times $\gen_{7}$ has been applied in the construction of $F$ by stage $j$. With this notation $\gen_{5}$ has been applied $j-f_{3}\left(j\right)-f_{7}\left(j\right)$ times by stage $j$. Similarly define $g_{3}\left(j\right)$ and $g_{7}\left(j\right)$ for the construction of $G$. Clearly these functions are non-decreasing and we refrain from writing them explicitly except to note that
\begin{equation}\label{explicit f(Kj)}
\begin{aligned}
f_{3}\left(K_{6n+1}\right)&=\sum_{i=0}^{n} K_{6i+1} - K_{6i} &
f_{7}\left(K_{6n+2}\right)&=\sum_{i=0}^{n} K_{6i+2} - K_{6i+1}\\
g_{3}\left(K_{6m+4}\right)&=\sum_{i=0}^{m} K_{6i+4} - K_{6i+3} &
g_{7}\left(K_{6m+5}\right)&=\sum_{i=0}^{m} K_{6i+5} - K_{6i+4}
\end{aligned}
\end{equation}
and
\begin{equation}\label{f constant on intervals}
\begin{split}
f_{3}\left(j\right)=f_{3}\left(K_{6n+1}\right) \quad\text{for}\quad K_{6n+1}<j\leq K_{6n+6}\\
f_{7}\left(j\right)=f_{7}\left(K_{6n+2}\right) \quad\text{for}\quad K_{6n+2}<j\leq K_{6n+7}\\
g_{3}\left(j\right)=g_{3}\left(K_{6m+4}\right) \quad\text{for}\quad K_{6m+4}<j\leq K_{6m+9}\\
g_{7}\left(j\right)=g_{7}\left(K_{6m+5}\right) \quad\text{for}\quad K_{6m+5}<j\leq K_{6m+10}.
\end{split}
\end{equation}
Denote the sets at the $j^{th}$ stage of the construction of $F$ and $G$ by
\begin{itemize}
\item[] $F_{j}$, which consists of $2^{j}$ intervals of length $3^{-f_{3}\left(j\right)}7^{-f_{7}\left(j\right)}5^{-j+f_{3}\left(j\right)+f_{7}\left(j\right)}$ and
\item[] $G_{j}$, which consists of $2^{j}$ intervals of length $3^{-g_{3}\left(j\right)}7^{-g_{7}\left(j\right)}5^{-j+g_{3}\left(j\right)+g_{7}\left(j\right)}$
\end{itemize}
so that $F$ and $G$ are defined by $F=\bigcap F_{j}$ and $G=\bigcap G_{j}$. Note that for every $j$ the endpoints of the intervals in $F_{j}$ and $G_{j}$ are in $F$ and $G$ respectively as the generators only remove the middle of each interval.
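To see the intended oscillation concretely, one can tabulate $f_{3}$ and $f_{7}$ over the first $K_{2}=10^{4}$ stages and evaluate the quotient $j\log 2/\bigl(f_{3}(j)\log 3 + f_{7}(j)\log 7 + (j-f_{3}(j)-f_{7}(j))\log 5\bigr)$ at the switching stages $j=K_{1}$ and $j=K_{2}$: it sits near $\log 2/\log 3$ after the long run of $\gen_{3}$ and near $\log 2/\log 7$ after the long run of $\gen_{7}$. A Python sketch (ours, illustrative only; for $j\leq K_{2}$ only the $n=0$ case of the generator rule matters):

```python
import math

K = [10 ** (2 ** j) for j in range(3)]  # K_0, K_1, K_2 = 10, 100, 10000

def gen_F(j):
    """Generator applied at stage j of the construction of F (n = 0 case)."""
    if K[0] < j <= K[1]:
        return 3
    if K[1] < j <= K[2]:
        return 7
    return 5

est = {}
f3 = f7 = 0
for j in range(1, K[2] + 1):
    g = gen_F(j)
    f3 += (g == 3)
    f7 += (g == 7)
    if j in (K[1], K[2]):
        # -log of the stage-j interval length, from the lengths listed above
        denom = f3 * math.log(3) + f7 * math.log(7) + (j - f3 - f7) * math.log(5)
        est[j] = j * math.log(2) / denom  # = log N(F, delta_j) / (-log delta_j)

# after the gen_3 run the quotient is near log2/log3; after the gen_7 run, near log2/log7
assert abs(est[K[1]] - math.log(2) / math.log(3)) < 0.05
assert abs(est[K[2]] - math.log(2) / math.log(7)) < 0.01
assert est[K[1]] > est[K[2]]
```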
\begin{proposition}\label{N=M=2j prop}
For $\delta$ such that
\begin{equation}\label{length-scale for stage j}
3^{-f_{3}\left(j\right)}7^{-f_{7}\left(j\right)}5^{-j+f_{3}\left(j\right)+f_{7}\left(j\right)}\leq\delta< 3^{-f_{3}\left(j-1\right)}7^{-f_{7}\left(j-1\right)}5^{-\left(j-1\right)+f_{3}\left(j-1\right)+f_{7}\left(j-1\right)}
\end{equation}
we have $N\left(F,\delta\right)=M\left(F,\delta\right)=2^{j}$.
\end{proposition}
We refer to those $\delta$ in the range \eqref{length-scale for stage j} as length-scales corresponding to stage $j$ in the construction of $F$. Clearly every $1>\delta>0$ is a length scale corresponding to exactly one stage $j_{\delta}$ and $j_{\delta}\rightarrow\infty$ as $\delta\rightarrow 0$. We refer to length-scales corresponding to the construction of $G$ in an analogous fashion.
\begin{proof}
For $\delta$ in the range \eqref{length-scale for stage j} the obvious cover consisting of all intervals in $F_{j}$ gives $N\left(F,\delta\right)\leq 2^{j}$. The opposite inequality comes from the fact that a set with diameter $\delta$ in this range intersects at most one $\left(j-1\right)^{\text{th}}$ stage interval $I$ but cannot cover both $j^{\text{th}}$ stage subintervals of $I$ (see figure \ref{figure - 2jballsneeded}), so at least $2\times 2^{j-1}=2^{j}$ elements are needed to form a cover of $F$.\\
Next, $\delta$ in the range \eqref{length-scale for stage j} is less than the length of the intervals in $F_{j-1}$ so balls of diameter $\delta$ centred on the end points of the intervals of $F_{j-1}$ are disjoint and have centres in $F$ (see figure \ref{figure - 2jdisjointballscover}). This gives two disjoint balls for each interval, hence $M\left(F,\delta\right)\geq 2\times 2^{j-1}=2^{j}$.
For the opposite inequality suppose for a contradiction that $M\left(F,\delta\right)> 2^{j}$; then at least one of the $2^{j-1}$ intervals in $F_{j-1}$ contains the centres of at least three disjoint balls with centres in $F$. Let $I$ be such an interval. At the next stage of the construction $I$ is split into two sub-intervals, one of which contains the centres of at least two of these three disjoint balls. However, this $j^{\text{th}}$ stage subinterval has length no greater than $\delta$, so two closed balls of diameter $\delta$ centred in this interval cannot be disjoint (see figure \ref{figure - 2jdisjointballsmax}), which is a contradiction.
\end{proof}
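The packing half of this argument can also be checked numerically: for $\delta$ equal to the length of the stage-$j$ intervals, closed balls of diameter $\delta$ centred at the endpoints of the stage-$\left(j-1\right)$ intervals are pairwise disjoint. The Python sketch below (a toy schedule again, with $\gen_{p}$ keeping the first and last $1/p$ of each interval; not part of the proof) verifies this for one small example, giving $2^{j}$ disjoint balls with centres in $F$.

```python
from fractions import Fraction
from itertools import combinations

def apply_generator(intervals, p):
    # gen_p keeps the first and last 1/p of each interval.
    out = []
    for a, b in intervals:
        w = (b - a) / p
        out.extend([(a, a + w), (b - w, b)])
    return out

# Build F_{j-1} and F_j for a toy schedule of generators.
schedule = [3, 7, 5, 3]
prev = [(Fraction(0), Fraction(1))]
for p in schedule[:-1]:
    prev = apply_generator(prev, p)          # F_{j-1}
curr = apply_generator(prev, schedule[-1])   # F_j

delta = curr[0][1] - curr[0][0]  # length of a stage-j interval
centres = [x for a, b in prev for x in (a, b)]  # endpoints of F_{j-1}

# Closed balls of diameter delta are disjoint iff their centres
# are more than delta apart.
assert all(abs(x - y) > delta for x, y in combinations(centres, 2))
assert len(centres) == 2 * len(prev) == 2 ** len(schedule)
```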
\begin{figure}
\vspace*{8pt}
\includegraphics[scale=0.7]{2jballsneeded.eps}
\vspace*{8pt}
\caption{A section of the sets $F_{j-1}$ and $F_{j}$ (shown in black) to illustrate that each set (shown as grey ellipses) with diameter $\delta<3^{-f_{3}\left(j-1\right)}7^{-f_{7}\left(j-1\right)}5^{-\left(j-1\right)+f_{3}\left(j-1\right)+f_{7}\left(j-1\right)}$ (i.e. less than the length of the intervals of $F_{j-1}$) can neither intersect two intervals of $F_{j-1}$ nor cover two intervals of $F_{j}$.}
\label{figure - 2jballsneeded}
\end{figure}
\begin{figure}
\vspace*{8pt}
\includegraphics[scale=0.7]{2jdisjointballscover.eps}
\vspace*{8pt}
\caption{A section of the set $F_{j-1}$ (shown in black) to illustrate that balls (shown in grey) with diameter $\delta<3^{-f_{3}\left(j-1\right)}7^{-f_{7}\left(j-1\right)}5^{-\left(j-1\right)+f_{3}\left(j-1\right)+f_{7}\left(j-1\right)}$ (i.e. less than the length of the intervals of $F_{j-1}$) with centres the endpoints of the intervals $F_{j-1}$ are disjoint, giving two disjoint balls for each interval of $F_{j-1}$.}
\label{figure - 2jdisjointballscover}
\end{figure}
\begin{figure}
\vspace*{8pt}
\includegraphics[scale=0.7]{2jdisjointballsmax.eps}
\vspace*{8pt}
\caption{A sub-interval $I\subset F_{j-1}$ (shown in black) which contains the centres of three balls (shown in grey) with diameter $\delta\geq 3^{-f_{3}\left(j\right)}7^{-f_{7}\left(j\right)}5^{-j+f_{3}\left(j\right)+f_{7}\left(j\right)}$ (i.e. greater than the length of the intervals of $F_{j}$) with centres in $F$. As the centres are also contained in $F_{j}$ (also shown in black), at least one interval of $F_{j}$ contains the centres of two of these balls. Consequently, the distance between the centres is at most the length of an interval of $F_{j}$, which is at most $\delta$, the sum of the radii of the closed balls, so the two balls are not disjoint.}
\label{figure - 2jdisjointballsmax}
\end{figure}
Adapting the argument we can prove a similar proposition for the set $G$.
\begin{proposition}\label{N=M=2j prop G}
For $\delta$ such that
\begin{equation}
3^{-g_{3}\left(j\right)}7^{-g_{7}\left(j\right)}5^{-j+g_{3}\left(j\right)+g_{7}\left(j\right)}\leq\delta< 3^{-g_{3}\left(j-1\right)}7^{-g_{7}\left(j-1\right)}5^{-\left(j-1\right)+g_{3}\left(j-1\right)+g_{7}\left(j-1\right)}
\end{equation}
we have $N\left(G,\delta\right)=M\left(G,\delta\right)=2^{j}$.
\end{proposition}
By taking logarithms we obtain the following more useful forms of propositions \ref{N=M=2j prop} and \ref{N=M=2j prop G}:
\begin{multline}\label{delta implies N(F,delta)}
f_{3}\left(j-1\right)\left[\log\left(3\right)-\log\left(5\right)\right]+f_{7}\left(j-1\right)\left[\log\left(7\right)-\log\left(5\right)\right] + \left(j-1\right)\log\left(5\right)\\
< -\log\delta \leq f_{3}\left(j\right)\left[\log\left(3\right)-\log\left(5\right)\right]+f_{7}\left(j\right)\left[\log\left(7\right)-\log\left(5\right)\right] + j\log\left(5\right)\\
\Rightarrow \log\left(N\left(F,\delta\right)\right) = \log\left(M\left(F,\delta\right)\right) = j\log\left(2\right)
\end{multline}
and
\begin{multline}\label{delta implies N(G,delta)}
g_{3}\left(j-1\right)\left[\log\left(3\right)-\log\left(5\right)\right]+g_{7}\left(j-1\right)\left[\log\left(7\right)-\log\left(5\right)\right] + \left(j-1\right)\log\left(5\right)\\
< -\log\delta \leq g_{3}\left(j\right)\left[\log\left(3\right)-\log\left(5\right)\right]+g_{7}\left(j\right)\left[\log\left(7\right)-\log\left(5\right)\right] + j\log\left(5\right)\\
\Rightarrow \log\left(N\left(G,\delta\right)\right) = \log\left(M\left(G,\delta\right)\right) = j\log\left(2\right)
\end{multline}
\paragraph*{}
The essential feature of our construction is that at some length-scales the sets $F$ and $G$ look like the Cantor middle-third set, while at other length-scales they look like the Cantor middle-$\frac{5}{7}^{\text{th}}$ set. Further, this `local' behaviour is maintained over sufficiently many length-scales that the box-counting limits of $F$ and $G$ at these length-scales approach the box-counting dimensions of the relevant Cantor set, as we will establish in the following section. We conclude this section by proving that at any given length-scale the sets $F$ and $G$ do not both look like the middle-third set, nor do they both look like the middle-$\frac{5}{7}^{\text{th}}$ set.
\begin{lemma}\label{delta not in both length-scales upper}
No $\delta$ is both a length-scale corresponding to some stage in $\left(K_{6n},K_{6n+2}\right]$ in the construction of $F$ for some $n\in\mathbb{N}$ and a length-scale corresponding to some stage in $\left(K_{6m+3},K_{6m+5}\right]$ in the construction of $G$ for some $m\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Assume for a contradiction that $\delta$ is such a length-scale, that is
\begin{multline}\label{fg inequality 1}
3^{-f_{3}\left(K_{6n+2}\right)}7^{-f_{7}\left(K_{6n+2}\right)}5^{-K_{6n+2}+f_{3}\left(K_{6n+2}\right)+f_{7}\left(K_{6n+2}\right)}\\ \leq \delta < 3^{-f_{3}\left(K_{6n}\right)}7^{-f_{7}\left(K_{6n}\right)}5^{-K_{6n}+f_{3}\left(K_{6n}\right)+f_{7}\left(K_{6n}\right)}
\end{multline}
and
\begin{multline}\label{fg inequality 2}
3^{-g_{3}\left(K_{6m+5}\right)}7^{-g_{7}\left(K_{6m+5}\right)}5^{-K_{6m+5}+g_{3}\left(K_{6m+5}\right)+g_{7}\left(K_{6m+5}\right)}\\ \leq \delta < 3^{-g_{3}\left(K_{6m+3}\right)}7^{-g_{7}\left(K_{6m+3}\right)}5^{-K_{6m+3}+g_{3}\left(K_{6m+3}\right)+g_{7}\left(K_{6m+3}\right)}
\end{multline}
We first demonstrate that this could only hold if $n=m$. From \eqref{fg inequality 1} and \eqref{fg inequality 2} we have $7^{-K_{6n+2}} \leq \delta < 3^{-K_{6n}}$ and $7^{-K_{6m+5}} \leq \delta < 3^{-K_{6m}}$ respectively, which in turn yield $7^{-K_{6n+2}} < 3^{-K_{6m}}$ and $7^{-K_{6m+5}} < 3^{-K_{6n}}$. Taking logarithms we get
\[
K_{6m} < K_{6n+2}\frac{\log\left(7\right)}{\log\left(3\right)}< K_{6n+3}\quad\text{and}\quad K_{6n} < K_{6m+5}\frac{\log\left(7\right)}{\log\left(3\right)}< K_{6m+6}
\]
where the final inequalities follow from the growth property \eqref{Kj log3log7}. We conclude that $6m< 6n+3$ and $6n< 6m+6$, which implies that $m=n$. With this restriction, if $\delta$ is such a length-scale then the lower bound from \eqref{fg inequality 1} and the upper bound from \eqref{fg inequality 2} imply
\begin{multline*}
3^{-f_{3}\left(K_{6n+2}\right)}7^{-f_{7}\left(K_{6n+2}\right)}5^{-K_{6n+2}+f_{3}\left(K_{6n+2}\right)+f_{7}\left(K_{6n+2}\right)} \\
< 3^{-g_{3}\left(K_{6n+3}\right)}7^{-g_{7}\left(K_{6n+3}\right)}5^{-K_{6n+3}+g_{3}\left(K_{6n+3}\right)+g_{7}\left(K_{6n+3}\right)}
\end{multline*}
For clarity we suppress the arguments of the functions and take logarithms, which yields
\begin{equation}\label{supressed argument condition}
\left[\log\left(5\right)-\log\left(3\right)\right]\left(f_{3}-g_{3}\right) + \left[\log\left(5\right)-\log\left(7\right)\right]\left(f_{7}-g_{7}\right) + \log\left(5\right)\left(K_{6n+3}-K_{6n+2}\right) < 0.
\end{equation}
The bounds in \eqref{Kj upper bound} and \eqref{Kj lower bound} give
\begin{align}
g_{3}\left(K_{6n+3}\right)-f_{3}\left(K_{6n+2}\right) &= \sum_{i=0}^{n-1} K_{6i+4} - K_{6i+3} - \sum_{i=0}^{n} K_{6i+1}-K_{6i}\notag\\
&< K_{6n-1} - K_{6n} < 0 \label{g3-f3}
\intertext{and}
f_{7}\left(K_{6n+2}\right)-g_{7}\left(K_{6n+3}\right) &= \sum_{i=0}^{n} K_{6i+2} - K_{6i+1} - \sum_{i=0}^{n-1} K_{6i+5} - K_{6i+4}\notag\\
&< K_{6n+3} - K_{6n-2}. \label{f7-g7}
\end{align}
Consequently, by \eqref{g3-f3} the first term of \eqref{supressed argument condition} is positive and may be dropped, and the bound \eqref{f7-g7} then implies
\begin{align*}
\left[\log\left(5\right)-\log\left(7\right)\right]\left(K_{6n+3} - K_{6n-2}\right) + \log\left(5\right)\left(K_{6n+3}-K_{6n+2}\right) &< 0
\intertext{that is,}
\left[2\log\left(5\right)-\log\left(7\right)\right]K_{6n+3} + \left[\log\left(7\right)-\log\left(5\right)\right]K_{6n-2} - \log\left(5\right)K_{6n+2} &< 0
\end{align*}
After dropping the positive middle term and rearranging we get
\begin{align*}
K_{6n+3}&< \frac{\log\left(5\right)}{\left[2\log\left(5\right)-\log\left(7\right)\right]}K_{6n+2}
\intertext{so that}
K_{6n+3}&< \frac{\log\left(7\right)}{\log\left(3\right)}K_{6n+2}
\end{align*}
however, by condition \eqref{Kj log3log7} the right-hand side is less than $K_{6n+3}$, giving the required contradiction.
\end{proof}
Consequently, at any length-scale the sets $F$ and $G$ do not both look like the Cantor middle-third set. This lemma gives the following useful corollary:
\begin{corollary}\label{subsequence corollary upper}
Every sequence $\set{\delta_{i}}$ with $\delta_{i}\rightarrow 0$ either contains a subsequence $\set{\delta_{i_{n}}}$ with each $\delta_{i_{n}}$ corresponding to some stage $j_{\delta_{i_{n}}}\in \left(K_{6n+2},K_{6n+6}\right]$ in the construction of $F$ or contains a subsequence $\set{\delta_{i_{m}}}$ corresponding to some stage $j_{\delta_{i_{m}}}\in \left(K_{6m+5},K_{6m+9}\right]$ in the construction of $G$.
\end{corollary}
\begin{proof}
If the sequence $\set{\delta_{i}}$ did not contain such a subsequence, then there would be a $\Delta>0$ such that each $\delta_{i}<\Delta$ is neither a length-scale corresponding to some stage $j\in\left(K_{6n+2},K_{6n+6}\right]$ in the construction of $F$ nor a length-scale corresponding to some stage $j\in \left(K_{6m+5},K_{6m+9}\right]$ in the construction of $G$. Consequently, $\delta_{i}$ is a length-scale corresponding to a stage $j\in\left(K_{6n},K_{6n+2}\right]$ in the construction of $F$ and also a length-scale corresponding to a stage $j\in\left(K_{6m+3},K_{6m+5}\right]$ in the construction of $G$ which, from lemma \ref{delta not in both length-scales upper}, is contradictory.
\end{proof}
The corresponding lemma that $F$ and $G$ do not both look like the Cantor middle-$\frac{5}{7}^{\text{th}}$ set at any length-scale and the corresponding corollary for subsequences are proved in a similar way.
\begin{lemma}\label{delta not in both length-scales lower}
No $\delta$ is both a length-scale corresponding to some stage in $\left(K_{6n+1},K_{6n+3}\right]$ in the construction of $F$ for some $n\in\mathbb{N}$ and a length-scale corresponding to some stage in $\left(K_{6m+4},K_{6m+6}\right]$ in the construction of $G$ for some $m\in\mathbb{N}$.
\end{lemma}
\begin{corollary}\label{subsequence corollary lower}
Every sequence $\set{\delta_{i}}$ with $\delta_{i}\rightarrow 0$ either contains a subsequence $\set{\delta_{i_{n}}}$ with each $\delta_{i_{n}}$ corresponding to some stage $j_{\delta_{i_{n}}}\in \left(K_{6n+3},K_{6n+7}\right]$ in the construction of $F$ or contains a subsequence $\set{\delta_{i_{m}}}$ corresponding to some stage $j_{\delta_{i_{m}}}\in \left(K_{6m},K_{6m+4}\right]$ in the construction of $G$.
\end{corollary}
\section{Calculating box-counting dimensions}
In order to establish the box-counting dimensions of $F$ and $G$ we need the following proposition on the behaviour of the generator-counting functions at the limit:
\begin{proposition}\label{f Kj ratio limit}
\[
\frac{f_{3}\left(K_{6n+l}\right)}{K_{6n+l}} \rightarrow
\begin{cases}
1 & l=1\\
0 & 0\leq l < 6,\; l\neq 1
\end{cases}\quad
\frac{f_{7}\left(K_{6n+l}\right)}{K_{6n+l}} \rightarrow
\begin{cases}
1 & l=2\\
0 & 0\leq l < 6,\; l\neq 2
\end{cases}
\]
\[
\frac{g_{3}\left(K_{6m+l}\right)}{K_{6m+l}} \rightarrow
\begin{cases}
1 & l=4\\
0 & 0\leq l < 6,\; l\neq 4
\end{cases}\quad
\frac{g_{7}\left(K_{6m+l}\right)}{K_{6m+l}} \rightarrow
\begin{cases}
1 & l=5\\
0 & 0\leq l < 6,\; l\neq 5
\end{cases}
\]
as $n,m\rightarrow \infty$.
\end{proposition}
\begin{proof}
From \eqref{explicit f(Kj)} we have
\[
\frac{f_{3}\left(K_{6n+1}\right)}{K_{6n+1}} = \frac{\sum_{i=0}^{n} K_{6i+1} - K_{6i}}{K_{6n+1}} = 1 + \frac{\sum_{i=0}^{n-1} K_{6i+1}}{K_{6n+1}} - \frac{\sum_{i=0}^{n} K_{6i}}{K_{6n+1}}
\]
which converges to 1 as $n\rightarrow \infty$ by \eqref{Kj ratio limit}.
By \eqref{f constant on intervals} we have $f_{3}\left(K_{6n+l}\right)=f_{3}\left(K_{6n+1}\right)$ for $2\leq l<6$ so
\[
\frac{f_{3}\left(K_{6n+l}\right)}{K_{6n+l}} = \frac{f_{3}\left(K_{6n+1}\right)}{K_{6n+l}} = \frac{\sum_{i=0}^{n} K_{6i+1} - K_{6i}}{K_{6n+l}}
\]
which converges to 0 as $n\rightarrow \infty$ by \eqref{Kj ratio limit}. Similarly, $f_{3}\left(K_{6n+0}\right) = f_{3}\left(K_{6\left(n-1\right)+1}\right)$ so
\[
\frac{f_{3}\left(K_{6n+0}\right)}{K_{6n+0}} = \frac{f_{3}\left(K_{6\left(n-1\right)+1}\right)}{K_{6n}} = \frac{\sum_{i=0}^{n-1} K_{6i+1} - K_{6i}}{K_{6n}} \rightarrow 0
\]
The remaining results follow analogously.
\end{proof}
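The limits can be observed concretely. In the sketch below (illustrative only), $K_{j}=2^{2^{j}}$ is a toy stand-in for a sequence satisfying the required growth conditions, not the sequence used in the paper, and the explicit sums defining $f_{3}$ and $f_{7}$ are evaluated exactly with rational arithmetic.

```python
from fractions import Fraction

K = lambda j: 2 ** (2 ** j)   # toy super-exponentially growing sequence

def f3(n):
    # f3(K_{6n+1}) = sum_{i=0}^{n} (K_{6i+1} - K_{6i})
    return sum(K(6*i + 1) - K(6*i) for i in range(n + 1))

def f7(n):
    # f7(K_{6n+2}) = sum_{i=0}^{n} (K_{6i+2} - K_{6i+1})
    return sum(K(6*i + 2) - K(6*i + 1) for i in range(n + 1))

n = 2
r31 = Fraction(f3(n), K(6*n + 1))   # f3(K_{6n+1}) / K_{6n+1} -> 1
r32 = Fraction(f3(n), K(6*n + 2))   # f3(K_{6n+2}) / K_{6n+2} -> 0
                                    # (f3 is constant on (K_{6n+1}, K_{6n+6}])
r72 = Fraction(f7(n), K(6*n + 2))   # f7(K_{6n+2}) / K_{6n+2} -> 1

assert abs(r31 - 1) < Fraction(1, 10**6)
assert r32 < Fraction(1, 10**6)
assert abs(r72 - 1) < Fraction(1, 10**6)
```

Already at $n=2$ the ratios agree with their limits to enormous precision, because each $K_{j}$ dwarfs all earlier terms of the sequence.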
\begin{lemma}\label{upper box lemma}
$\dim_{B}\left(F\right)=\dim_{B}\left(G\right)=\frac{\log\left(2\right)}{\log\left(3\right)}.$
\end{lemma}
\begin{proof}
First, we show that $\dim_{B}\left(F\right)\leq \frac{\log\left(2\right)}{\log\left(3\right)}$. Writing $j_{\delta}$ for the stage associated with the length-scale $\delta$ we have from \eqref{delta implies N(F,delta)}
\begin{align*}
\frac{\log\left(N\left(F,\delta\right)\right)}{-\log \delta}&< \frac{j_{\delta}\log\left(2\right)}{f_{3}\left(j_{\delta}-1\right)\left[\log\left(3\right)-\log\left(5\right)\right] + f_{7}\left(j_{\delta}-1\right)\left[\log\left(7\right)-\log\left(5\right)\right] + \left(j_{\delta}-1\right)\log\left(5\right)}\\
&\leq \frac{j_{\delta}\log\left(2\right)}{f_{3}\left(j_{\delta}-1\right)\left[\log\left(3\right)-\log\left(5\right)\right] + \left(j_{\delta}-1\right)\log\left(5\right)}
\intertext{and as $f_{3}\left(j_{\delta}-1\right)< j_{\delta}-1$}
&< \frac{j_{\delta}\log\left(2\right)}{\left(j_{\delta}-1\right)\left[\log\left(3\right)-\log\left(5\right)\right] + \left(j_{\delta}-1\right)\log\left(5\right)} = \frac{j_{\delta}\log\left(2\right)}{\left(j_{\delta}-1\right)\log\left(3\right)}
\end{align*}
which converges to $\frac{\log\left(2\right)}{\log\left(3\right)}$ as $\delta\rightarrow 0$. Similarly, we can show that $\dim_{B}\left(G\right)\leq\frac{\log\left(2\right)}{\log\left(3\right)}$.
\paragraph*{}
Next, if we take the sequence $\set{\delta_{n}}$ with
\[
-\log\delta_{n} = f_{3}\left(K_{6n+1}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ f_{7}\left(K_{6n+1}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+ K_{6n+1}\log\left(5\right)
\]
we have from \eqref{delta implies N(F,delta)} that $\log\left(N\left(F,\delta_{n}\right)\right)= K_{6n+1}\log\left(2\right)$. Consequently,
\begin{align*}
\frac{\log\left(N\left(F,\delta_{n}\right)\right)}{-\log \delta_{n}} &= \frac{K_{6n+1}\log\left(2\right)}{f_{3}\left(K_{6n+1}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ f_{7}\left(K_{6n+1}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+ K_{6n+1}\log\left(5\right)}\\
&= \frac{\log\left(2\right)}{\frac{f_{3}\left(K_{6n+1}\right)}{K_{6n+1}}\left[\log\left(3\right)-\log\left(5\right)\right]+\frac{f_{7}\left(K_{6n+1}\right)}{K_{6n+1}}\left[\log\left(7\right)-\log\left(5\right)\right]+ \log\left(5\right)}\\
&\rightarrow \frac{\log\left(2\right)}{\log\left(3\right)-\log\left(5\right) + 0 + \log\left(5\right)} = \frac{\log\left(2\right)}{\log\left(3\right)}
\end{align*}
as $n\rightarrow\infty$ by the convergence results of proposition \ref{f Kj ratio limit}, so that $\dim_{B}\left(F\right)\geq \frac{\log\left(2\right)}{\log\left(3\right)}$.\\
A similar sequence gives the corresponding inequality for $G$.
\end{proof}
\begin{lemma}\label{lower box lemma}
$\dim_{LB}\left(F\right)=\dim_{LB}\left(G\right)=\frac{\log\left(2\right)}{\log\left(7\right)}.$
\end{lemma}
\begin{proof}
For all $\delta>0$ the implication \eqref{delta implies N(F,delta)} gives
\begin{align*}
\frac{\log\left(N\left(F,\delta\right)\right)}{-\log \delta} &\geq \frac{j_{\delta}\log\left(2\right)}{ f_{3}\left(j_{\delta}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ f_{7}\left(j_{\delta}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+j_{\delta}\log\left(5\right)}\\
&\geq \frac{j_{\delta}\log\left(2\right)}{ f_{7}\left(j_{\delta}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+j_{\delta}\log\left(5\right)}\\
&> \frac{j_{\delta}\log\left(2\right)}{ j_{\delta}\left[\log\left(7\right)-\log\left(5\right)\right]+j_{\delta}
\log\left(5\right)}=\frac{\log\left(2\right)}{\log\left(7\right)}
\end{align*}
Next, if we take the sequence $\set{\delta_{n}}$ with
\[
-\log\delta_{n} = f_{3}\left(K_{6n+2}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ f_{7}\left(K_{6n+2}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+ K_{6n+2}\log\left(5\right)
\]
we have
\begin{align*}
\frac{\log\left(N\left(F,\delta_{n}\right)\right)}{-\log \delta_{n}} &= \frac{K_{6n+2}\log\left(2\right)}{ f_{3}\left(K_{6n+2}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ f_{7}\left(K_{6n+2}\right)\left[\log\left(7\right)-\log\left(5\right)\right]+ K_{6n+2}\log\left(5\right)}\\
&= \frac{\log\left(2\right)}{\frac{f_{3}\left(K_{6n+2}\right)}{K_{6n+2}}\left[\log\left(3\right)-\log\left(5\right)\right]+\frac{f_{7}\left(K_{6n+2}\right)}{K_{6n+2}}\left[\log\left(7\right)-\log\left(5\right)\right]+ \log\left(5\right)}\\
&\rightarrow \frac{\log\left(2\right)}{0 + \log\left(7\right)-\log\left(5\right) + \log\left(5\right)} = \frac{\log\left(2\right)}{\log\left(7\right)}
\end{align*}
as $n\rightarrow\infty$ by the convergence results of proposition \ref{f Kj ratio limit}. Hence $\dim_{LB}\left(F\right)=\frac{\log\left(2\right)}{\log\left(7\right)}$, and similarly $\dim_{LB}\left(G\right)=\frac{\log\left(2\right)}{\log\left(7\right)}$.
\end{proof}
Consequently, both $F$ and $G$ have unequal upper and lower box-counting dimensions:
\begin{corollary}
$\dim_{LB}\left(F\right)=\dim_{LB}\left(G\right) < \dim_{B}\left(F\right)=\dim_{B}\left(G\right)$.
\end{corollary}
\paragraph*{}
Whilst the above lemmas demonstrate that for $F$ and $G$ there are sequences of length-scales $\set{\delta_{n}}$ with $\lim_{n\rightarrow\infty}\frac{\log\left(N\left(F,\delta_{n}\right)\right)}{-\log\delta_{n}}$ equal to $\frac{\log\left(2\right)}{\log\left(3\right)}$ or to $\frac{\log\left(2\right)}{\log\left(7\right)}$, we now show that for a large class of sequences (in fact, the very sequences that corollaries \ref{subsequence corollary upper} and \ref{subsequence corollary lower} produce) this limit, if it exists, is bounded above by $\frac{\log\left(2\right)}{\log\left(5\right)}$ for covering numbers, with the corresponding lower bound for packing numbers.
\begin{lemma}\label{deltan N(F,deltan) converge to log2log5}
Suppose $\set{\delta_{n}}$ is a sequence such that each length-scale $\delta_{n}$ corresponds to the construction of $F$ at some stage $j_{n}\in \left(K_{6n+2},K_{6n+6}\right]$. Then, if the limit exists,
\[\lim_{n\rightarrow\infty}\frac{\log\left(N\left(F,\delta_{n}\right)\right)}{-\log\delta_{n}}\leq\frac{\log\left(2\right)}{\log\left(5\right)}.
\]
\end{lemma}
Essentially, these stages are sufficiently far from the range $\left(K_{6n},K_{6n+1}\right]$ where $\gen_{3}$ is applied so that the set $F$ does not look like the Cantor middle-third set at these stages. The proof relies on the fact that by stage $j_{n}$ the generator $\gen_{3}$ has not been applied for at least the last $K_{6n+2}-K_{6n+1}$ stages.
\begin{proof}
For each $n\in\mathbb{N}$ from \eqref{delta implies N(F,delta)}
\begin{align*}
\frac{\log\left(N\left(F,\delta_{n}\right)\right)}{-\log\delta_{n}} &\leq \frac{j_{n}\log\left(2\right)}{f_{3}\left(j_{n}-1\right)\left[\log\left(3\right)-\log\left(5\right)\right]+f_{7}\left(j_{n}-1\right)\left[\log\left(7\right)-\log\left(5\right)\right] + \left(j_{n}-1\right)\log\left(5\right)}\\
&\leq \frac{j_{n}\log\left(2\right)}{f_{3}\left(j_{n}-1\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ \left(j_{n}-1\right)\log\left(5\right)}
\intertext{From \eqref{f constant on intervals} we have $f_{3}\left(j_{n}-1\right)=f_{3}\left(K_{6n+1}\right)$ so that}
&=\frac{j_{n}\log\left(2\right)}{f_{3}\left(K_{6n+1}\right)\left[\log\left(3\right)-\log\left(5\right)\right]+ \left(j_{n}-1\right)\log\left(5\right)}\\
&=\frac{\log\left(2\right)}{\frac{f_{3}\left(K_{6n+1}\right)}{j_{n}}\left[\log\left(3\right)-\log\left(5\right)\right]+\log\left(5\right)-\frac{1}{j_{n}}\log\left(5\right)}.
\intertext{Next, as $j_{n}>K_{6n+2}$}
&\leq \frac{\log\left(2\right)}{\frac{f_{3}\left(K_{6n+1}\right)}{K_{6n+2}}\left[\log\left(3\right)-\log\left(5\right)\right]+\log\left(5\right)-\frac{1}{j_{n}}\log\left(5\right)}\\
&\rightarrow \frac{\log\left(2\right)}{\log\left(5\right)}
\end{align*}
as $n\rightarrow\infty$ by the convergence results of proposition \ref{f Kj ratio limit}.
\end{proof}
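Numerically, the bound in the proof is already almost attained at such stages. With the toy choice $K_{j}=2^{2^{j}}$ (our stand-in for a sufficiently fast-growing sequence, not the one used in the paper), at the stage $j_{n}=K_{6n+3}$ the ratio bounding $\log\left(N\left(F,\delta_{n}\right)\right)/\left(-\log\delta_{n}\right)$ is within floating-point error of $\log\left(2\right)/\log\left(5\right)$:

```python
import math
from fractions import Fraction

K = lambda j: 2 ** (2 ** j)          # toy super-exponential K_j
n = 2
f3 = sum(K(6*i + 1) - K(6*i) for i in range(n + 1))      # f3(K_{6n+1})
f7 = sum(K(6*i + 2) - K(6*i + 1) for i in range(n + 1))  # f7(K_{6n+2})
j = K(6*n + 3)   # a stage inside (K_{6n+2}, K_{6n+6}]

log3, log5, log7 = math.log(3), math.log(5), math.log(7)
# The denominator of the proof's upper bound, divided through by j:
denom = (float(Fraction(f3, j)) * (log3 - log5)
         + float(Fraction(f7, j)) * (log7 - log5)
         + (1 - float(Fraction(1, j))) * log5)
ratio = math.log(2) / denom
assert abs(ratio - math.log(2) / log5) < 1e-9
```

The point is that $f_{3}\left(j_{n}-1\right)$ and $f_{7}\left(j_{n}-1\right)$ are both negligible compared with $j_{n}$ at these stages, so only the $\log\left(5\right)$ term survives.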
The corresponding result for $G$, proved in a similar way, is as follows.
\begin{lemma}\label{deltan N(G,deltan) converge to log2log5}
Suppose $\set{\delta_{m}}$ is a sequence such that each length-scale $\delta_{m}$ corresponds to the construction of $G$ at some stage $j_{m}\in \left(K_{6m+5},K_{6m+9}\right]$. Then, if the limit exists,
\[\lim_{m\rightarrow\infty}\frac{\log\left(N\left(G,\delta_{m}\right)\right)}{-\log\delta_{m}}\leq\frac{\log\left(2\right)}{\log\left(5\right)}.
\]
\end{lemma}
The following results for lower bounds are also proved similarly.
\begin{lemma}\label{deltan M(F,deltan) converge to log2log5}
Suppose $\set{\delta_{n}}$ is a sequence such that each length-scale $\delta_{n}$ corresponds to the construction of $F$ at some stage $j_{n}\in \left(K_{6n+3},K_{6n+7}\right]$. Then, if the limit exists,
\[\lim_{n\rightarrow\infty}\frac{\log\left(M\left(F,\delta_{n}\right)\right)}{-\log\delta_{n}}\geq\frac{\log\left(2\right)}{\log\left(5\right)}
\]
\end{lemma}
and
\begin{lemma}\label{deltan M(G,deltan) converge to log2log5}
Suppose $\set{\delta_{m}}$ is a sequence such that each length-scale $\delta_{m}$ corresponds to the construction of $G$ at some stage $j_{m}\in \left(K_{6m},K_{6m+4}\right]$. Then, if the limit exists,
\[\lim_{m\rightarrow\infty}\frac{\log\left(M\left(G,\delta_{m}\right)\right)}{-\log\delta_{m}}\geq\frac{\log\left(2\right)}{\log\left(5\right)}.
\]
\end{lemma}
Finally, we find a bound on the box-counting dimensions of the product $F\times G$.
\begin{theorem}\label{strict upper box product inequality}
$\dim_{B}\left(F\times G\right)\leq \frac{\log\left(2\right)}{\log\left(3\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}.$
\end{theorem}
\begin{proof}
We have from \eqref{limsup of sum} that
\[
\limsup_{\delta\rightarrow 0} \frac{\log\left(N\left(F\times G,\delta\right)\right)}{-\log \delta} \leq \limsup_{\delta\rightarrow 0}\left[\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta} + \frac{\log\left(N\left(G,\delta\right)\right)}{-\log\delta}\right]
\]
so it is sufficient to show that the right-hand side is no greater than $\frac{\log\left(2\right)}{\log\left(3\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}$. Suppose that $\set{\delta_{i}}$ is a sequence with $\delta_{i}\rightarrow 0$ such that the limits $\lim_{i\rightarrow \infty}\frac{\log\left(N\left(F,\delta_{i}\right)\right)}{-\log \delta_{i}}$ and $\lim_{i\rightarrow \infty}\frac{\log\left(N\left(G,\delta_{i}\right)\right)}{-\log \delta_{i}}$ exist. Corollary \ref{subsequence corollary upper} guarantees that this sequence contains a subsequence $\set{\delta_{i_{n}}}$ satisfying the hypothesis of either lemma \ref{deltan N(F,deltan) converge to log2log5} or lemma \ref{deltan N(G,deltan) converge to log2log5}, so at least one of $\lim_{n\rightarrow\infty}\frac{\log\left(N\left(F,\delta_{i_{n}}\right)\right)}{-\log \delta_{i_{n}}}$ and $\lim_{n\rightarrow\infty}\frac{\log\left(N\left(G,\delta_{i_{n}}\right)\right)}{-\log \delta_{i_{n}}}$ is at most $\frac{\log\left(2\right)}{\log\left(5\right)}$. Using the upper box-counting dimension from lemma \ref{upper box lemma} to bound the other term yields
\[
\lim_{n\rightarrow \infty} \frac{\log\left(N\left(F,\delta_{i_{n}}\right)\right)}{-\log \delta_{i_{n}}} + \frac{\log\left(N\left(G,\delta_{i_{n}}\right)\right)}{-\log \delta_{i_{n}}} \leq \frac{\log\left(2\right)}{\log\left(3\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}
\]
a bound which also holds for the original sequence $\set{\delta_{i}}$, since both limits exist along $\set{\delta_{i}}$.
As $\set{\delta_{i}}$ was an arbitrary convergent sequence,
\[
\limsup_{\delta\rightarrow 0}\left[\frac{\log\left(N\left(F,\delta\right)\right)}{-\log\delta} + \frac{\log\left(N\left(G,\delta\right)\right)}{-\log\delta}\right] \leq \frac{\log\left(2\right)}{\log\left(3\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}
\]
\end{proof}
\begin{corollary}
$\dim_{B}\left(F\times G\right)< \dim_{B}\left(F\right) + \dim_{B}\left(G\right)$
\end{corollary}
\begin{theorem}
$\dim_{LB}\left(F\times G\right)\geq \frac{\log\left(2\right)}{\log\left(7\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}.$
\end{theorem}
\begin{proof}
From \eqref{liminf of sum} we have
\[
\liminf_{\delta\rightarrow 0} \frac{\log\left(M\left(F\times G,\delta\right)\right)}{-\log\delta} \geq \liminf_{\delta\rightarrow 0} \left[\frac{\log\left(M\left(F,\delta\right)\right)}{-\log\delta} + \frac{\log\left(M\left(G,\delta\right)\right)}{-\log\delta} \right]
\]
so it is sufficient to prove that the right-hand side is no less than $\frac{\log\left(2\right)}{\log\left(7\right)} + \frac{\log\left(2\right)}{\log\left(5\right)}$. Suppose that $\set{\delta_{i}}$ is a sequence with $\delta_{i}\rightarrow 0$ such that the limits $\lim_{i\rightarrow \infty}\frac{\log\left(M\left(F,\delta_{i}\right)\right)}{-\log \delta_{i}}$ and $\lim_{i\rightarrow \infty}\frac{\log\left(M\left(G,\delta_{i}\right)\right)}{-\log \delta_{i}}$ exist. In a similar fashion to theorem \ref{strict upper box product inequality}, corollary \ref{subsequence corollary lower} and lemmas \ref{deltan M(F,deltan) converge to log2log5} and \ref{deltan M(G,deltan) converge to log2log5} guarantee that at least one of $\lim_{n\rightarrow\infty}\frac{\log\left(M\left(F,\delta_{i_{n}}\right)\right)}{-\log\delta_{i_{n}}}$ and $\lim_{n\rightarrow\infty}\frac{\log\left(M\left(G,\delta_{i_{n}}\right)\right)}{-\log\delta_{i_{n}}}$ is no less than $\frac{\log\left(2\right)}{\log\left(5\right)}$ so
\[
\liminf_{\delta\rightarrow 0} \left[\frac{\log\left(M\left(F,\delta\right)\right)}{-\log\delta} + \frac{\log\left(M\left(G,\delta\right)\right)}{-\log\delta}\right] \geq \frac{\log\left(2\right)}{\log\left(5\right)} + \frac{\log\left(2\right)}{\log\left(7\right)}
\]
\end{proof}
\begin{corollary}
$\dim_{LB}\left(F\times G\right) > \dim_{LB}\left(F\right) + \dim_{LB}\left(G\right)$
\end{corollary}
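The strictness in both corollaries reduces to elementary comparisons of logarithms, since $\frac{\log\left(2\right)}{\log\left(5\right)}<\frac{\log\left(2\right)}{\log\left(3\right)}$ and $\frac{\log\left(2\right)}{\log\left(5\right)}>\frac{\log\left(2\right)}{\log\left(7\right)}$. A quick numerical check (purely illustrative):

```python
import math

log = math.log
dimB_F = dimB_G = log(2) / log(3)      # upper box dimension of F and G
dimLB_F = dimLB_G = log(2) / log(7)    # lower box dimension of F and G

# Bounds from the two theorems on the product set F x G:
upper_product_bound = log(2) / log(3) + log(2) / log(5)
lower_product_bound = log(2) / log(7) + log(2) / log(5)

assert upper_product_bound < dimB_F + dimB_G    # strict inequality, upper
assert lower_product_bound > dimLB_F + dimLB_G  # strict inequality, lower
```

Numerically the upper bound is about $1.062$ against a sum of dimensions of about $1.262$, and the lower bound is about $0.787$ against a sum of about $0.712$.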
\begin{acknowledgements}\label{ackref}
I am greatly indebted to my PhD supervisor, James Robinson, for highlighting that there was no example of a strict product inequality in the box-counting literature and to the generosity of Isabelle Harding and Matt Gibson in providing accommodation whilst much of this work was completed.
\end{acknowledgements}
% arXiv:1007.4222, Metric Geometry (math.MG), 2010-07-27: ``Strict inequality in the box-counting dimension product formulas''
% arXiv:2107.09450
\title{Monochromatic Edges in Complete Multipartite Hypergraphs}
\begin{abstract}
Consider the following problem. In a school with three classes containing $n$ students each, given that their genders are unknown, find the minimum possible number of triples of same-gender students not all of which are from the same class. Muaengwaeng asked this question and conjectured that the minimum scenario occurs when the classes are all boy, all girl and half-and-half. In this paper, we solve many generalizations of the problem including when the school has more than three classes, when triples are replaced by groups of larger sizes, when the classes are of different sizes, and when gender is replaced by other non-binary attributes.
\end{abstract}
\section{Introduction}
A \textit{hypergraph} is a pair $(V, E)$ where $V$ is a finite set of vertices and $E$ is a collection of subsets of $V$. Each subset in $E$ is called an \textit{edge}. An \textit{$r$-uniform} hypergraph contains only edges of size $r$ and if it contains all possible edges of size $r$, this $r$-uniform hypergraph is said to be \textit{complete}.
For decades, many researchers have been studying hypergraphs as a generalization of graphs, and many theorems in graph theory have been extended to hypergraphs (see \cite{Frank}, \cite{Gyarfas}, \cite{Katona}, \cite{Keevash} and \cite{Mubayi}). Graph coloring, a popular topic, is also part of these interesting extensions (see \cite{Berge} and \cite{Erdos}). Given a hypergraph, an \emph{$m$-coloring} is an assignment of a color to each vertex of the hypergraph from $m$ available colors. An edge is said to be \textit{monochromatic} if all vertices in it have the same color. Later research (see \cite{Bujt} and \cite{Cowen}) focused on \textit{proper coloring} and \textit{defective coloring}, which involve colorings where monochromatic edges do not exist or exist only in a limited amount. The complexity of general hypergraphs has led to studies focusing on hypergraphs which have an orderly and symmetric structure, for example, $k$-partite hypergraphs.
A \textit{balanced $k$-partite $r$-uniform hypergraph} has $k$ vertex classes $V_1, V_2, \ldots, V_k$ of the same size. There are two natural generalizations of $k$-partite graphs to hypergraphs. First, each edge is an $r$-subset of $\displaystyle\bigcup_{i=1}^{k}V_i$, all of whose vertices are from different classes. The second definition of edges is that each edge is an $r$-subset, not all of whose vertices are from the same class. In this paper, we will use the latter definition.
The problem in the abstract can be restated in the language of hypergraphs as follows.
\begin{problem}
Which $2$-coloring minimizes the number of monochromatic edges of a balanced complete tripartite $3$-uniform hypergraph?
\end{problem}This question was asked by Muaengwaeng \cite{Muaeng}. Given that those two colors are red and blue, she conjectured that the minimum coloring occurs when those three classes are all blue, all red and half-and-half. In this paper, we solve many generalizations of Problem $1$. First, we study a balanced complete $k$-partite $r$-uniform hypergraph.
\begin{theorem}\label{thm:1}
Let $n \geqslant r \geqslant 3$ and $k \geqslant 2$. The $2$-coloring minimizing the number of monochromatic edges of a balanced complete $k$-partite $r$-uniform hypergraph with $n$ vertices in each class is as follows:
\begin{enumerate}
\item Color all vertices of the first $ \lfloor \frac{k}{2} \rfloor $ classes with red.
\item Color all vertices of the last $ \lfloor \frac{k}{2} \rfloor $ classes with blue.
\item If there is another class, color the vertices of that class such that the numbers of red and blue vertices are as equal as possible.
\end{enumerate}Moreover, this coloring is unique up to a permutation of colors and classes.
\end{theorem}
The case where $r=k=3$ was presented at a conference \cite{Boon} by the second author. The key idea is to calculate the change in the number of monochromatic edges when a vertex is recolored. We use this to find the minimum coloring among those with a fixed number of red vertices, and then compare these minimum colorings.
The proof of Theorem~\ref{thm:1} gives a clue on how to prove the more general case in which the classes need not contain the same number of vertices.
\begin{theorem}\label{thm:2}
For any unbalanced complete tripartite $3$-uniform hypergraph $H$ with the numbers of vertices of the first, second and third classes equal to $n_1 \leqslant n_2 \leqslant n_3$, where $n_3 \geqslant 3$ and $n_1+n_2+n_3 = N$, a $2$-coloring minimizing the number of monochromatic edges of $H$ is as follows:
\begin{enumerate}
\item If $n_1 + n_2 > n_3$, color all vertices in the second and third classes with blue and red, respectively, and color $\left\lceil \frac{N^2-3N-n^2_1-2n_1n_3+3n_1+4n_3}{2(N-n_1)}\right\rceil-n_3$ vertices in the first class with red.
\item If $n_1+n_2 \leqslant n_3$, then color all vertices in the first and second classes with red and color the third class with blue.
\end{enumerate}Moreover, each coloring is unique up to a permutation of colors unless $(n_1,n_2,n_3)=(2,n,n+1)$ for some $n \geqslant 2$, in which case there is one other extremal coloring, namely the coloring in which all vertices in the third class are red, all vertices in the second class are blue, and the first class has one red and one blue vertex.
\end{theorem}
The proof consists of two parts. First, by swapping a red vertex and a blue vertex in different classes, we conclude that a minimum coloring must be in one of $12$ canonical forms. Then we compare the numbers of monochromatic edges between the forms.
Finally, we study the number of monochromatic edges of balanced complete $k$-partite $r$-uniform hypergraphs, but this time up to three colors are available. The problem of minimizing the number of monochromatic edges becomes more complicated, as it is no longer a simple two-way comparison between red and blue. The results depend on the residue of the number of vertex classes, $k$, modulo $3$. We are able to solve the cases $k\equiv 0,1 \ (\textrm{mod}\ 3)$.
\begin{theorem}\label{thm:3}
Let $n \geqslant r \geqslant 3$ and $k \geqslant 3$. For any balanced complete $k$-partite $r$-uniform hypergraph, $H$, with $n$ vertices in each class, if $k\equiv 0\ (\textrm{mod}\ 3)$, the $3$-coloring minimizing the number of monochromatic edges of $H$ is as follows:
\begin{enumerate}
\item Color all vertices of the first $\frac{k}{3}$ classes with red.
\item Color all vertices of the next $\frac{k}{3}$ classes with blue.
\item Color all vertices of the last $\frac{k}{3}$ classes with green.
\end{enumerate} If $k\equiv 1 \ (\textrm{mod}\ 3)$, the $3$-coloring minimizing the number of monochromatic edges of $H$ is as follows:
\begin{enumerate}
\item Color all vertices of the first $\left\lfloor\frac{k}{3}\right\rfloor$ classes with red.
\item Color all vertices of the next $\left\lfloor\frac{k}{3}\right\rfloor$ classes with blue.
\item Color all vertices of the next $\left\lfloor\frac{k}{3}\right\rfloor$ classes with green.
\item Color all vertices of the last class such that the numbers of red, blue and green vertices are as equal as possible.
\end{enumerate} Moreover, each coloring is unique up to a permutation of colors.
\end{theorem}
We use an idea similar to that of the proof of Theorem~\ref{thm:2}, but we need to develop some new lemmas to construct the canonical forms of the colorings.
The rest of this paper is organized as follows. In Section \ref{sec2}, we introduce some notations and useful properties that will be used throughout the paper. Later, we consider some straightforward cases. Sections \ref{sec:thm:1}, \ref{sec:thm:2} and \ref{sec:thm:3} are devoted to proving Theorems \ref{thm:1}, \ref{thm:2} and \ref{thm:3}, respectively. Finally, we conclude in Section \ref{sec:conclude} with a discussion of some open problems.
\section{Preliminaries}
First, we will introduce some notations that will be used throughout the paper. Later, we will mainly discuss some useful properties of binomial coefficients and some trivial cases of the problem.
\label{sec2}
\subsection{The number of monochromatic edges of a $2$-coloring of balanced complete $k$-partite $r$-uniform hypergraphs}
Let $H$ be a balanced complete $k$-partite $(r+1)$-uniform hypergraph with $n$ vertices in each class. We consider an $(r+1)$-uniform rather than an $r$-uniform hypergraph to simplify the calculations. Let $c$ be a coloring of $H$ with $x_i$ red vertices in the $i^{th}$ class, and let $X = x_1+x_2+\cdots +x_k$. Let $M(H,c)$ be the number of monochromatic edges of $H$ under the coloring $c$. Then,
\begin{align*}
M(H,c) = \left[ { X \choose r+1}-\sum^{k}_{i=1}{x_i \choose r+1} \right] + \left[ { kn-X \choose r+1}-\sum^{k}_{i=1}{n-x_i \choose r+1} \right].
\end{align*}
This function is the main tool for counting monochromatic edges. However, it alone is not enough to compare the numbers of monochromatic edges of all colorings. Let $\bigtriangleup_iM(H,c)$ be the change in the number of monochromatic edges when a blue vertex in the $i^{th}$ class is recolored red (if possible). This change equals the difference between the numbers of monochromatic edges containing the recolored vertex after and before the recoloring. Then,
\begin{align*}
\bigtriangleup_iM(H,c)= \left[ {X \choose r} - {x_i \choose r}\right] - \left[ {kn-X-1 \choose r} - {n-x_i-1 \choose r}\right].
\end{align*}
We sometimes simply write $\bigtriangleup M(H,c)$ instead of $\bigtriangleup_iM(H,c)$ if the class of the color-changing vertex is clear.
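As a sanity check, which is not part of the argument, the counting formula above can be verified by brute force on small instances. The Python sketch below, with hypothetical helper names, counts monochromatic edges directly from the definition of an edge and compares the result with the closed form $M(H,c)$.

```python
from itertools import combinations
from math import comb

def mono_edges_formula(n, k, r1, xs):
    # Closed form from Section 2.1; r1 = r + 1 is the edge size and
    # xs[i] is the number of red vertices in class i (each class has n vertices).
    X = sum(xs)
    red = comb(X, r1) - sum(comb(x, r1) for x in xs)
    blue = comb(k * n - X, r1) - sum(comb(n - x, r1) for x in xs)
    return red + blue

def mono_edges_bruteforce(n, k, r1, xs):
    # Direct count: an edge is an r1-subset of the vertices not all lying
    # in one class; it is monochromatic if its vertices share a color.
    verts = [(i, j < xs[i]) for i in range(k) for j in range(n)]  # (class, is_red)
    count = 0
    for e in combinations(range(k * n), r1):
        if len({verts[v][0] for v in e}) == 1:
            continue  # all vertices in one class: not an edge
        if len({verts[v][1] for v in e}) == 1:
            count += 1  # all vertices share a color
    return count

n, k, r1 = 4, 3, 3
xs = (2, 0, 4)
assert mono_edges_formula(n, k, r1, xs) == mono_edges_bruteforce(n, k, r1, xs)
```

Both counts agree on this instance; the formula simply subtracts, from the monochromatic $r_1$-subsets of each color class, those contained entirely within one vertex class.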
\subsection{The number of monochromatic edges of a $2$-coloring of unbalanced complete tripartite $3$-uniform hypergraphs}
Let $H$ be an unbalanced complete tripartite $3$-uniform hypergraph with the numbers of vertices of the first, second and third classes equal to $n_1 \leqslant n_2 \leqslant n_3$, respectively, and let $N= n_1+n_2+n_3$. Let $c$ be the coloring of $H$ with the number of red vertices of the first, second and third classes equal to $x_1,x_2$ and $x_3$, respectively, and let $X=x_1+x_2+x_3$. Then, \begin{align*}
M(H,c)= \left[ { X \choose 3}-\sum^{3}_{i=1}{x_i \choose 3} \right] + \left[ { N-X \choose 3}-\sum^{3}_{i=1}{n_i-x_i \choose 3} \right],
\end{align*} and
\begin{align*}
\bigtriangleup_iM(H,c)= \left[ {X \choose 2} - {x_i \choose 2}\right] - \left[ {N-X-1 \choose 2} - {n_i-x_i-1 \choose 2}\right].
\end{align*}
\subsection{The number of monochromatic edges of a $3$-coloring of balanced complete $k$-partite $r$-uniform hypergraphs}
Let $H$ be a balanced complete $k$-partite $(r+1)$-uniform hypergraph with $n$ vertices in each class. Let $c$ be the $3$-coloring of $H$ with the numbers of red, blue and green vertices of the $i^{th}$ class equal to $r_i, b_i$ and $g_i$, respectively, and let $R$, $B$ and $G$ be the total numbers of red, blue and green vertices, respectively. Note that $ R= \sum^{k}_{i=1}{r_i}, B= \sum^{k}_{i=1}{b_i}$ and $ G= \sum^{k}_{i=1}{g_i}$. Moreover, $n= r_i+b_i+g_i$ for each $i = 1,2,\ldots,k$ and $kn = R+B+G$. Then, \begin{align*}
M(H,c)&= \left[ { R \choose r+1}-\sum^{k}_{i=1}{r_i \choose r+1} \right] + \left[ { B \choose r+1}-\sum^{k}_{i=1}{b_i \choose r+1} \right]+ \left[ { G \choose r+1}-\sum^{k}_{i=1}{g_i \choose r+1} \right].
\end{align*}
We write $\bigtriangleup_iM(H,c)$ for the change in the number of monochromatic edges when a blue vertex in the $i^{th}$ class is recolored to red (if possible). Then,
\begin{align*}
\bigtriangleup_iM(H,c) = \left[ {R \choose r} - {r_i \choose r}\right] - \left[ {B-1 \choose r} - {b_i-1 \choose r}\right].
\end{align*} The change can be calculated similarly for any recoloring with other color combinations.
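The recoloring identity above can likewise be checked numerically. The hypothetical Python sketch below, not taken from the paper, compares $\bigtriangleup_iM(H,c)$ computed by the formula with the actual difference in $M(H,c)$ after one blue vertex of the $i^{th}$ class is recolored red.

```python
from math import comb

def mono3(n, r1, reds, blues):
    # M(H,c) for a 3-coloring (Section 2.3): reds[i], blues[i] are the
    # red/blue counts in class i; the remaining n - reds[i] - blues[i]
    # vertices of class i are green; r1 = r + 1 is the edge size.
    greens = [n - a - b for a, b in zip(reds, blues)]
    total = 0
    for counts in (reds, blues, greens):
        C = sum(counts)
        total += comb(C, r1) - sum(comb(c, r1) for c in counts)
    return total

def delta_formula(r, reds, blues, i):
    # Predicted change when one blue vertex of class i is recolored red.
    R, B = sum(reds), sum(blues)
    return (comb(R, r) - comb(reds[i], r)) - (comb(B - 1, r) - comb(blues[i] - 1, r))

n, r = 5, 3                      # (r+1)-uniform edges
reds, blues = [2, 1, 0], [2, 3, 1]
i = 1
new_reds, new_blues = reds.copy(), blues.copy()
new_reds[i] += 1
new_blues[i] -= 1
assert mono3(n, r + 1, new_reds, new_blues) - mono3(n, r + 1, reds, blues) \
    == delta_formula(r, reds, blues, i)
```

The green terms of $M(H,c)$ are untouched by a blue-to-red recoloring, which is why only the red and blue binomial coefficients appear in the formula.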
\subsection{Properties of binomial coefficients}
We will additionally introduce some standard tools that will be applied throughout this paper.
\begin{proposition} For any non-negative integers $a,b,c$ and $d$ with $c \leqslant a \leqslant b \leqslant d$, if $a+b\leqslant c+d$, then ${a \choose r} + {b \choose r} \leqslant {c \choose r} + {d \choose r}$ for any positive integer $r$. Moreover, the equality holds if and only if $(a=c$ and $b=d)$ or $d<r$.
\end{proposition}
Note that the inequality holds with equality, trivially, when all upper indices of the binomial coefficient terms are less than the lower index. This trivial condition will occur occasionally throughout our proofs of the main theorems.
\begin{proposition}
For any non-negative integers $x_1, x_2 ,\ldots,x_n$ whose sum is constant and for any non-negative integer $r$, $\sum_{i=1}^n{x_i\choose r}$ is smallest if and only if $x_1, x_2 ,\ldots,x_n$ are as equal as possible or $\max\{x_1,x_2,\ldots,x_n\}<r$.
\end{proposition}
\begin{proposition}
For any non-negative integers $x_1, x_2 ,\ldots,x_n$ whose sum is constant and for any non-negative integer $r$, $\sum_{i=1}^n{x_i\choose r}$ is largest if and only if all but one of the $x_i$ are zero or $\sum_{i=1}^n x_i < r$.
\end{proposition}
Proposition $4$ is the main tool for comparing binomial coefficients, while Propositions $5$ and $6$ are generalizations of Proposition $4$ that we will apply to the trivial cases of the problems in the next subsections.
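Propositions $5$ and $6$ can be illustrated by exhaustive search over small instances. The following Python sketch, a toy check under assumed small parameters, confirms that the balanced split minimizes and the all-in-one split maximizes the sum of binomial coefficients.

```python
from itertools import product
from math import comb

def binom_sum(xs, r):
    return sum(comb(x, r) for x in xs)

# Exhaustively verify, for one small choice of parameters, that a sum of
# binomial coefficients over a fixed-total split is minimized by the
# balanced split (Proposition 5) and maximized when all but one part is
# zero (Proposition 6).
n_parts, total, r = 3, 7, 2
splits = [xs for xs in product(range(total + 1), repeat=n_parts) if sum(xs) == total]
best = min(splits, key=lambda xs: binom_sum(xs, r))
worst = max(splits, key=lambda xs: binom_sum(xs, r))
assert sorted(best) == [2, 2, 3]    # as equal as possible
assert sorted(worst) == [0, 0, 7]   # all but one part zero
```

Both facts follow from the convexity of $x \mapsto {x \choose r}$; the search here merely spot-checks them.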
\subsection{Colorings of hypergraphs with the size of each class fewer than the size of an edge}
In our theorems, we assume that the size of each class is at least the size of an edge; otherwise, no edge can be contained in a single class and our hypergraphs are simply complete hypergraphs. In this subsection, we note that the problem is trivial when $n<r$ and determine a coloring with the minimum number of monochromatic edges. Let $H$ be a complete $r$-uniform hypergraph and let $c$ be a coloring of $H$. Then,
\begin{align*}
M(H,c)={R \choose r} + {B \choose r}
\end{align*} if $c$ is a $2$-coloring with $R$ red and $B$ blue vertices, and
\begin{align*}
M(H,c)={R \choose r} + {B \choose r}+{G \choose r}
\end{align*} if $c$ is a $3$-coloring with $R$ red, $B$ blue and $G$ green vertices.
By Proposition $5$, $M(H,c)$ is smallest if $R$ and $B$ (and $G$) are as equal as possible. Hence, a coloring in which the numbers of vertices of each color are as equal as possible has the minimum number of monochromatic edges.
\subsection{Colorings of hypergraphs with the number of classes fewer than or divisible by the number of colors}
In this subsection, we consider colorings in which the number of classes is fewer than, or divisible by, the number of colors, and determine the colorings with the minimum number of monochromatic edges. We consider not only colorings with $2$ or $3$ colors, but also $m$-colorings with $m$ greater than $3$.
First, we consider hypergraphs with fewer classes than colors. There are no monochromatic edges of a fixed color exactly when all vertices of that color are contained in a single class, or the number of vertices of that color is fewer than $r$. Hence, the colorings in which each color appears in at most one class, or on fewer than $r$ vertices, are the only colorings with no monochromatic edges. Such colorings exist when the number of classes is fewer than the number of colors.
The determination of the minimum coloring in the remaining case follows directly from Propositions $5$ and $6$.
\begin{proposition}
Let $n \geqslant r$ and let $k$ be divisible by $m$. The $m$-coloring minimizing the number of monochromatic edges of a balanced complete $k$-partite $r$-uniform hypergraph with $n$ vertices in each class is the coloring with equal numbers of vertices of each color and no polychromatic class.
\end{proposition}
\begin{proof}
Suppose that $ n \geqslant r$. Let $H$ be a balanced complete $k$-partite $r$-uniform hypergraph with $n$ vertices in each class and let $c$ be an $m$-coloring of $H$ such that $k$ is divisible by $m$. Let $x_{li}$ be the number of vertices with the $l^{th}$ color in the $i^{th}$ class and let $X_l$ be the total number of vertices with the $l^{th}$ color. Then,
\begin{align*}
M(H,c)&= \sum^{m}_{l=1} \left[ {X_l \choose r} - \sum^{k}_{i=1} {x_{li} \choose r} \right].
\end{align*} We will show that the coloring $c^*$ with equal numbers of vertices of each color and no polychromatic class is the minimum coloring by directly comparing the numbers of monochromatic edges of $c$ and $c^*$. By Propositions $5$ and $6$,
\begin{align*}
M(H,c)&=\sum^{m}_{l=1} \left[ {X_l \choose r} - \sum^{k}_{i=1} {x_{li} \choose r} \right]\\
&\geqslant m{\frac{1}{m} \sum_{l=1}^m X_l \choose r} - \sum^{k}_{i=1} {\sum^{m}_{l=1} x_{li} \choose r }\\
&=m {\frac{kn}{m} \choose r} - \sum^{k}_{i=1} {n \choose r}=M(H,c^*).
\end{align*}
The equality holds only when ($X_l= \frac{kn}{m}$ for each $l$ and each class is monochromatic) or $n<r$, but the latter is impossible. Hence, $c^*$ is the unique coloring with the minimum number of monochromatic edges.
\end{proof}
The proof in this subsection is a straightforward comparison, owing to the simplicity of the color distribution. The other, more general cases are considerably more involved.
\section{Proof of Theorem~\ref{thm:1}}
\label{sec:thm:1}
\begin{proof}[Proof of Theorem~\ref{thm:1}] Let $H$ be a balanced complete $k$-partite $(r+1)$-uniform hypergraph with $n \geqslant r+1$ vertices in each class, where $k \geqslant 2$ and $r \geqslant 2$. Let $c$ be a coloring of $H$ with $x_i$ red vertices in the $i^{th}$ class and let $X = x_1+x_2+\cdots+x_k$. We may assume that $X\leqslant \lfloor \frac{kn}{2} \rfloor$; otherwise, we swap the names of the two colors. Note that if $k$ is even, the proof is completed by Proposition $7$. However, we will not need to assume that $k$ is odd in the following proof.
We calculate $M(H,c)$ in a new manner, by summing the changes as blue vertices are recolored red one by one, starting from the all-blue coloring until we reach $c$. Let $c_0$ be the all-blue coloring and let $c_j$ be the coloring after the $j^{th}$ change. Thus,
\begin{align*}
M(H,c) = M(H,c_0) + \sum^{X-1}_{j=0} \bigtriangleup M(H,c_j)={kn \choose r+1}-k{n \choose r+1} +\sum^{X-1}_{j=0} \bigtriangleup M(H,c_j).
\end{align*}
We recolor the vertices of the first class of the all-blue hypergraph first, to match the first class of $c$, and then continue class by class. Note that $c_j$ has $j$ red vertices. Let the $i^{th}$ class be the class containing the blue vertex to be recolored and let $x$ be the number of red vertices currently in that class. Then, from Section 2.1,
\begin{align*}
\bigtriangleup M(H,c_j) &= \left[ {j \choose r} - {x \choose r}\right] - \left[ {kn-j-1 \choose r} - {n-x-1 \choose r}\right]\\
&= \left[ {j \choose r} - {kn-j-1 \choose r}\right] - \left[ {x \choose r} - {n-x-1 \choose r}\right].
\end{align*}
Note that while each vertex in the changing class is being recolored, the term $x$ ascends from $0$ to $x_{i}-1$. Thus, \begin{align*}
M(H,c) &={kn \choose r+1}-k{n \choose r+1} + \sum^{X-1}_{j=0} \bigtriangleup M(H,c_j)\\
&= {kn \choose r+1}-k{n \choose r+1} + \sum^{X-1}_{j=0}\left[ {j \choose r} - {kn-j-1 \choose r}\right]- \sum^{k}_{i=1}\sum^{x_i-1}_{x=0}\left[ {x \choose r} - {n-x-1 \choose r}\right].
\end{align*}
In this way, if we consider only the colorings with $X$ red vertices, then the terms \[ {kn \choose r+1}-k{n \choose r+1} + \sum^{X-1}_{j=0}\left[ {j \choose r} - {kn-j-1 \choose r}\right] \] in the function $M(H,c)$ are constant. Only the term
\[\sum^{k}_{i=1}\sum^{x_i-1}_{x=0}\left[ {x \choose r} - {n-x-1 \choose r}\right]\] is distinct and we denote this term by $S(x_1,x_2,\ldots,x_k)$. Hence, the coloring with maximum value of $S(x_1,x_2,\ldots,x_k)$ will have the minimum number of monochromatic edges.
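The decomposition of $M(H,c)$ into a part depending only on $X$ and the term $S(x_1,\ldots,x_k)$ can be verified numerically. The Python sketch below, with hypothetical function names, compares the direct formula from Section 2.1 with the recoloring decomposition.

```python
from math import comb

def M_direct(n, k, r, xs):
    # Direct formula from Section 2.1 for (r+1)-uniform edges.
    X, r1 = sum(xs), r + 1
    return (comb(X, r1) - sum(comb(x, r1) for x in xs)
            + comb(k * n - X, r1) - sum(comb(n - x, r1) for x in xs))

def M_decomposed(n, k, r, xs):
    # Same quantity via the recoloring decomposition: a constant that
    # depends only on X = sum(xs), minus S(x_1, ..., x_k).
    X, r1 = sum(xs), r + 1
    const = (comb(k * n, r1) - k * comb(n, r1)
             + sum(comb(j, r) - comb(k * n - j - 1, r) for j in range(X)))
    S = sum(comb(x, r) - comb(n - x - 1, r) for xi in xs for x in range(xi))
    return const - S

n, k, r = 4, 3, 2
for xs in [(0, 0, 0), (1, 2, 3), (4, 4, 0), (2, 2, 2)]:
    assert M_direct(n, k, r, xs) == M_decomposed(n, k, r, xs)
```

The agreement over colorings with different values of $X$ reflects that only $S(x_1,\ldots,x_k)$ varies among colorings with the same number of red vertices.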
\begin{claim*}
Among the colorings with a constant total number $X$ of red vertices, the coloring $c^*_X$ with the minimum number of polychromatic classes has the minimum number of monochromatic edges. Moreover, the minimum coloring is unique up to a permutation of classes.
\end{claim*}
\begin{proof} Consider a coloring with the number of red vertices in the $i^{th}$ class equal to $x_i$, where $x_1 +x_2+\cdots+x_k = X$ and $x_1 \geqslant x_2 \geqslant \cdots\geqslant x_k$. Suppose that the coloring is not $c^*_X$. Then there exist classes $l < m$ such that $x_l \neq n$ and $x_m \neq 0$. Next, we compare the terms \[S(x_1, \ldots,x_l,\ldots,x_m,\ldots,x_k)\] and \[S(x_1, \ldots,x_l+1,\ldots,x_m-1,\ldots,x_k),\]
which correspond to swapping a red vertex from the $m^{th}$ class with a blue vertex from the $l^{th}$ class. Since $x_l \geqslant x_m$,
\begin{align*}S(x_1&, \ldots,x_l+1,\ldots,x_m-1,\ldots,x_k) - S(x_1, \ldots,x_l,\ldots,x_m,\ldots,x_k)\\
&=\sum_{x=0}^{x_l + 1 -1}\left[ {x \choose r} - {n-x-1 \choose r}\right] + \sum_{x=0}^{x_m -1 -1}\left[ {x \choose r} - {n-x-1 \choose r}\right]\\
&\;\;\;\;-\sum_{x=0}^{x_l-1}\left[ {x \choose r} - {n-x-1 \choose r}\right]-\sum_{x=0}^{x_m-1}\left[ {x \choose r} - {n-x-1 \choose r}\right]\\
&= \left[{x_l \choose r} -{x_m-1 \choose r} \right] + \left[{n-x_m \choose r}-{n-x_l-1\choose r}\right] \geqslant 0.
\end{align*}
The equality holds only when all upper indices of the binomial coefficient terms are less than $r$. Each swap therefore yields at most as many monochromatic edges. We continue swapping as long as possible to reduce the number of polychromatic classes. The inequality is strict at some point, since either $x_l$ eventually equals $n-1$, in which case $x_l=n-1 \geqslant r$, or $x_m$ eventually equals $1$, in which case $n-x_m=n-1 \geqslant r$. This implies that the original coloring has strictly more monochromatic edges than the coloring reached at the end of the swapping process. Hence, $c^*_X$ is the unique coloring with the minimum number of monochromatic edges among those with $X$ red vertices.
\end{proof}
We have determined the minimum coloring for each value of $X$. Next, we compare colorings with different values of $X$: we will show that $M(H,c^*_X) > M(H, c^*_{X+1})$ for all $X \leqslant \lfloor\frac{kn}{2}\rfloor -1$. Let $c^*_X$ be the minimum coloring with $X \leqslant \lfloor\frac{kn}{2}\rfloor-1$ red vertices. Suppose that the polychromatic class of $c^*_X$ is the $i^{th}$ class; if $c^*_X$ has no polychromatic class, let the $i^{th}$ class be an all-blue class. Observe that
\begin{align*}
M(H,c^*_{X+1})-M(H,c^*_X) &=\bigtriangleup_i M(H,c^*_X)\\
&= \left[ {X \choose r} - {x_i \choose r}\right] - \left[ {kn-X-1 \choose r} - {n-x_i-1 \choose r}\right]\\
&= \left[ {X \choose r} + {n-x_i-1 \choose r}\right] - \left[ {kn-X-1 \choose r} + {x_i \choose r}\right].
\end{align*}
The following proof is divided into two cases according to the value of $x_i$.
\textit{Case 1}: $\frac{n}{2} \leqslant x_i < n $.
Thus, \begin{align*} X- (kn-X-1)=2\left(X+\frac{1}{2} -\frac{kn}{2}\right) \leqslant 2\left(\left\lfloor\frac{kn}{2}\right\rfloor-\frac{kn}{2}-\frac{1}{2}\right)< 0,
\end{align*}and
\begin{align*}(n-x_i-1)-x_i = 2\left(\frac{n}{2}-x_i-\frac{1}{2}\right)< 0.
\end{align*}
Hence, $\bigtriangleup_i M(H,c^*_X) \leqslant 0$ but $\bigtriangleup_i M(H,c^*_X) \neq 0$ since one of the upper indices is at least $r$. Indeed, $kn-X-1 \geqslant \left\lceil\frac{kn}{2}\right\rceil \geqslant n > r$ because $k \geqslant 2$.
\textit{Case 2}: $0 \leqslant x_i < \frac{n}{2}$.
We will show that $\bigtriangleup_i M(H,c^*_X) < 0$ by Proposition 4. As in Case 1, \[X- (kn-X-1)<0.\]
Moreover, \begin{align*}(n-x_i-1)-x_i = 2\left(\frac{n}{2}-x_i-\frac{1}{2}\right)\geqslant 0.
\end{align*} Suppose that there are $k^*$ red classes in $c^*_X$, i.e., $X = k^*n + x_i$. Since $X\leqslant \lfloor\frac{kn}{2}\rfloor-1$, we have $k^* \leqslant \frac{k-1}{2}$. Thus, \begin{align*}
(X +n-x_i-1)-(x_i+kn-X-1)=2\left(k^*n - \frac{k-1}{2}n\right)\leqslant 0.
\end{align*}\noindent Similarly, $kn-X-1>r$. Hence, by Proposition 4, $\bigtriangleup_i M(H,c^*_X) < 0.$
By the two cases, $c^*_X$ contains strictly more monochromatic edges than $c^*_{X+1}$ whenever $X \leqslant \lfloor\frac{kn}{2}\rfloor -1$. Consequently, the unique coloring with the minimum number of monochromatic edges among all minimum colorings $c^*_X$ is $c^*_{\lfloor\frac{kn}{2}\rfloor}$.
Together with the claim, $c^*_{\lfloor\frac{kn}{2}\rfloor}$ is the unique coloring with the minimum number of monochromatic edges.\end{proof}
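For small parameters, Theorem~\ref{thm:1} can be confirmed by exhaustive search over all red-count vectors $(x_1,\ldots,x_k)$, since the number of monochromatic edges depends only on these counts. The Python sketch below, an illustrative check rather than part of the proof, treats the assumed instance $k=3$, $n=4$, edge size $3$.

```python
from itertools import product
from math import comb

def M(n, k, r1, xs):
    # Number of monochromatic r1-edges given red counts xs (Section 2.1).
    X = sum(xs)
    return (comb(X, r1) - sum(comb(x, r1) for x in xs)
            + comb(k * n - X, r1) - sum(comb(n - x, r1) for x in xs))

n, k, r1 = 4, 3, 3   # three classes of four vertices, 3-uniform edges
best = min(product(range(n + 1), repeat=k), key=lambda xs: M(n, k, r1, xs))
# The theorem predicts one all-red class, one all-blue class, and the
# remaining class split as evenly as possible.
assert sorted(best) == [0, n // 2, n]
```

By the uniqueness statement of the theorem, every minimizer found by the search is a permutation of this count vector.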
Note that, in the claim, we determined $c^*_X$ by finding the coloring with maximum $S(x_1,x_2,\ldots,x_k)$. Conversely, one could determine the coloring with a fixed number $X$ of red vertices that has the maximum number of monochromatic edges by showing that the coloring with minimum $S(x_1,x_2,\ldots,x_k)$ is the one in which $x_1, x_2, \ldots, x_k$ are as equal as possible. However, this is beyond the scope of this paper.
\section{Proof of Theorem~\ref{thm:2}}
\label{sec:thm:2}
\begin{proof}[Proof of Theorem~\ref{thm:2}]
Let $H$ be an unbalanced complete tripartite $3$-uniform hypergraph with the numbers of vertices of the first, second and third classes equal to $n_1 \leqslant n_2 \leqslant n_3$, respectively, and let $N=n_1+n_2+n_3$. Let $c$ be the coloring of $H$ with the numbers of red vertices of the first, second and third classes equal to $x_1,x_2$ and $x_3$, respectively, and let $X=x_1+x_2+x_3$.
We divide the proof into two subsections according to the size of the smallest class. In the first subsection, an idea similar to that of the proof of Theorem~\ref{thm:1} is extended to determine the minimum coloring when each class has at least $3$ vertices. The second subsection mainly concerns hypergraphs with some small classes.
\subsection{Hypergraphs with $n_1 \geqslant 3$}
Assume that $n_1 \geqslant 3$. Let $\bigtriangleup_{i{i'}} M(H,c)$ be the change in the number of monochromatic edges when a blue vertex in the $i^{th}$ class is recolored red and a red vertex in the ${i'}^{th}$ class is recolored blue. This process, which we call \textit{swapping}, results in a new coloring, say $c'$. We compute $\bigtriangleup_{i{i'}} M(H,c)$ by comparing the numbers of monochromatic edges containing the two swapped vertices before and after the swap. Thus,
\begin{align*}
\bigtriangleup_{i{i'}}M(H,c)&= \left[ {x_1+x_2+x_3-1 \choose 2}-{x_i \choose 2} + {N-x_1-x_2-x_3-1 \choose 2} - {n_{i'}-x_{i'}\choose 2} \right]\\
&\;\;\;\;- \left[ {x_1+x_2+x_3-1 \choose 2}-{x_{i'}-1 \choose 2} + {N-x_1-x_2-x_3-1 \choose 2} - {n_i-x_i-1\choose 2} \right]\\
&= \left[ {x_{i'}-1 \choose 2} + {n_i-x_i-1\choose 2} \right] - \left[ {x_i \choose 2} + {n_{i'}-x_{i'}\choose 2} \right].
\end{align*} A \textit{successful swapping} is a swapping that reduces the number of monochromatic edges, i.e., one with $\bigtriangleup_{i{i'}} M(H,c) < 0$.
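The swap formula for $\bigtriangleup_{i{i'}}M(H,c)$ can be checked against a direct evaluation of $M(H,c)$. Below is a small Python sketch with assumed class sizes and helper names.

```python
from math import comb

def M(ns, xs):
    # M(H,c) for a complete tripartite 3-uniform hypergraph with class
    # sizes ns and red counts xs (Section 2.2).
    X, N = sum(xs), sum(ns)
    return (comb(X, 3) - sum(comb(x, 3) for x in xs)
            + comb(N - X, 3) - sum(comb(n - x, 3) for n, x in zip(ns, xs)))

def delta_swap(ns, xs, i, ip):
    # Predicted change when a blue vertex of class i and a red vertex of
    # class ip exchange colors.
    return (comb(xs[ip] - 1, 2) + comb(ns[i] - xs[i] - 1, 2)
            - comb(xs[i], 2) - comb(ns[ip] - xs[ip], 2))

ns, xs = (3, 4, 5), [1, 2, 3]
i, ip = 0, 2                 # blue -> red in class 0, red -> blue in class 2
ys = xs.copy()
ys[i] += 1
ys[ip] -= 1
assert M(ns, ys) - M(ns, xs) == delta_swap(ns, xs, i, ip)
```

Since a swap keeps $X$ fixed, only the per-class binomial terms of $M(H,c)$ change, which is exactly what the formula records.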
\begin{lemma}
If $\bigtriangleup_{i{i'}} M(H,c) \leqslant 0$, then $\bigtriangleup_{i{i'}} M(H,c') < 0$.
\end{lemma}\begin{proof}
Observe that
\begin{align*}
\bigtriangleup_{i{i'}}M(H,c')&= \left[ {(x_{i'}-1)-1 \choose 2} + {n_i-(x_i+1)-1\choose 2} \right] - \left[ {x_i+1 \choose 2} + {n_{i'}-(x_{i'}-1)\choose 2} \right]\\
&\leqslant \left[ {x_{i'}-1 \choose 2} + {n_i-x_i-1\choose 2} \right] - \left[ {x_i \choose 2} + {n_{i'}-x_{i'}\choose 2} \right]=\bigtriangleup_{i{i'}} M(H,c) \leqslant 0.
\end{align*}
The equality holds only when $x_{i'}-1<2$, $n_i-x_i-1<2$, $x_i+1<2$, $n_{i'}-x_{i'}+1<2$ and $\bigtriangleup_{i{i'}} M(H,c)=0$. Suppose that $\bigtriangleup_{i{i'}}M(H,c')=0$. Then $x_i=0$ and $x_{i'}<3$. Thus, $n_i=n_i-x_i<3$ and $n_{i'}-2 \leqslant n_{i'} -x_{i'}<1$, i.e., $n_{i'}<3$, which contradicts $3 \leqslant n_1\leqslant n_2\leqslant n_3$.
\end{proof}
Lemma 8 says that if a swapping can be done without increasing the number of monochromatic edges, then another swapping in the same direction will be successful (provided there remain red and blue vertices to swap). The process of successful swappings terminates when at least one of the two classes becomes monochromatic.
\begin{lemma}
If $\bigtriangleup_{i{i'}} M(H,c) \geqslant 0$, then $\bigtriangleup_{{i'}i} M(H,c) < 0$.
\end{lemma}
\begin{proof}Observe that
\begin{align*}
\bigtriangleup_{{i'}i} M(H,c) &= \left[ {x_i-1 \choose 2} + {n_{i'}-x_{i'}-1\choose 2} \right] - \left[ {x_{i'} \choose 2} + {n_i-x_i\choose 2} \right]\\
&\leqslant \left[ {x_i \choose 2} + {n_{i'}-x_{i'}\choose 2} \right] - \left[ {x_{i'}-1 \choose 2} + {n_i-x_i-1\choose 2} \right]\\
&= -\bigtriangleup_{i{i'}} M(H,c) \leqslant 0.
\end{align*}
The equality holds only when $x_i<2$, $n_{i'}-x_{i'}<2$, $x_{i'}<2$, $n_i-x_i<2$ and $\bigtriangleup_{i{i'}} M(H,c)=0$. Suppose that $\bigtriangleup_{{i'}i}M(H,c)=0$. Then $x_i \leqslant 1$ and $x_{i'}\leqslant 1$. Thus, $n_i-1\leqslant n_i-x_i<2$ and $n_{i'}-1\leqslant n_{i'} -x_{i'}<2$, i.e., $n_i<3$ and $n_{i'}<3$, which contradicts $3 \leqslant n_1\leqslant n_2\leqslant n_3$.
\end{proof}
Note that, for any coloring $c$, if $c$ contains two polychromatic classes, say the $i^{th}$ and the ${i'}^{th}$, then a swapping can be done in two directions as follows.
\begin{enumerate}\item Swapping a red vertex of the $i^{th}$ class with a blue vertex of the ${i'}^{th}$ class.
\item Swapping a blue vertex of the $i^{th}$ class with a red vertex of the ${i'}^{th}$ class.
\end{enumerate}
By Lemma 9, one of the two directions is successful. Moreover, by Lemma 8, we can continue swapping in the same direction until one of the two classes is monochromatic, obtaining fewer monochromatic edges. Hence, the coloring with the minimum number of monochromatic edges among colorings with a fixed number of red vertices must have at most one polychromatic class. We list all such forms in the following table; these are the candidates for the coloring with the minimum number of monochromatic edges.
\begin{center}
\begin{tabular}{ |P{3cm}|P{3cm}|P{3cm}|P{3cm}|}
\hline
Canonical forms & $1^{st}$ Class & $2^{nd}$ Class & $3^{rd}$ Class\\
\hline
$F_1$& polychromatic& blue&blue\\
$F_2$& blue& polychromatic&blue\\
$F_3$& blue& blue&polychromatic\\
$F_4$& red&polychromatic&blue\\
$F_5$& red&blue&polychromatic\\
$F_6$& polychromatic&red&blue\\
$F_7$& blue&red&polychromatic\\
$F_8$& polychromatic&blue &red\\
$F_9$& blue& polychromatic&red\\
$F_{10}$& red& red&polychromatic\\
$F_{11}$& red& polychromatic&red\\
$F_{12}$& polychromatic& red&red\\
\hline
\end{tabular}
\end{center}
The first column lists the $12$ canonical forms and the remaining columns describe the colors of the vertices in each class. The entries \textit{red} and \textit{blue} mean that all vertices in the class are red or blue, respectively. The entry \textit{polychromatic} means that the class is allowed to be polychromatic, although it may be monochromatic. Note that a coloring may belong to several canonical forms; for example, the all-blue coloring is of the form $F_1$, $F_2$ or $F_3$.
We may assume that $X \leqslant\lfloor\frac{N}{2}\rfloor$. Consequently, both $F_{11}$ and $F_{12}$ can be discarded, since their total numbers of red vertices, $n_1+x_2+n_3$ and $x_1+n_2+n_3$, respectively, exceed $\lfloor\frac{N}{2}\rfloor$:
\begin{align*}
n_1+x_2+n_3 = \frac{n_1+n_3+2x_2}{2}+\frac{n_1+n_3}{2} > \frac{n_2}{2} +\frac{n_1+n_3}{2} \geqslant \left\lfloor\frac{n_1+n_2+n_3}{2}\right\rfloor
\end{align*} and \begin{align*}
x_1+n_2+n_3 = \frac{n_2+n_3+2x_1}{2}+\frac{n_2+n_3}{2} > \frac{n_1}{2} +\frac{n_2+n_3}{2} \geqslant \left\lfloor\frac{n_1+n_2+n_3}{2}\right\rfloor.
\end{align*}
Next, we consider the possibility of $F_{10}$. Suppose that $c$ is in the form $F_{10}$ with $X=n_1+n_2+x_3 \leqslant \lfloor\frac{n_1+n_2+n_3}{2}\rfloor$ red vertices in total. Then, \begin{align*}
n_1 + n_2 \leqslant 2\left(\left\lfloor\frac{n_1+n_2+n_3}{2}\right\rfloor-\frac{n_1+n_2}{2}\right)\leqslant 2\left(\frac{n_1+n_2+n_3}{2}-\frac{n_1+n_2}{2}\right)=n_3.
\end{align*}
Thus, a necessary condition for a coloring $c$ of $H$ to be in the form $F_{10}$ is that $n_1+n_2 \leqslant n_3$. Note that the condition $n_1+n_2 \leqslant n_3$ is equivalent to $n_1+n_2 \leqslant \lfloor\frac{n_1+n_2+n_3}{2}\rfloor \leqslant n_3$, and we call a hypergraph satisfying it a \textit{type A} hypergraph. On the other hand, the condition $n_1+n_2 > n_3$ is equivalent to $n_3\leqslant \lfloor\frac{n_1+n_2+n_3}{2}\rfloor < n_1+n_2$, and we call such a hypergraph a \textit{type B} hypergraph. Next, we determine the minimum coloring $c^*_X$ among the colorings with a fixed number $X$ of red vertices from the candidates $F_1$ to $F_9$; the form $F_{10}$ is considered only when $H$ is a type A hypergraph.
As in the proof of Theorem~\ref{thm:1}, we calculate $M(H,c)$ by summing the changes as blue vertices are recolored red one by one, starting from the all-blue coloring until we reach $c$. Let $c_0$ be the all-blue coloring and let $c_j$ be the coloring after the $j^{th}$ change. Thus,
\begin{align*}
M(H,c) = M(H,c_0) + \sum^{X-1}_{j=0} \bigtriangleup M(H,c_j)= {N \choose 3}-\sum_{i=1}^3{n_i \choose 3}+\sum^{X-1}_{j=0} \bigtriangleup M(H,c_j).
\end{align*}
We recolor the vertices of the first class of the all-blue hypergraph first, to match the first class of $c$, and then continue class by class. Note that $c_j$ has $j$ red vertices. Let the $i^{th}$ class be the class containing the blue vertex to be recolored and let $x$ be the number of red vertices currently in that class. Then, from Section 2.2,
\begin{align*}
\bigtriangleup M(H,c_j) &= \left[ {j \choose 2} - {x \choose 2}\right] - \left[ {N-j-1 \choose 2} - {n_i-x-1 \choose 2}\right]\\
&= \left[ {j \choose 2} - {N-j-1 \choose 2}\right] - \left[ {x \choose 2} - {n_i-x-1 \choose 2}\right].
\end{align*}
Note that while each vertex in the changing class is being recolored, the term $x$ ascends from $0$ to $x_i-1$. Thus,
\begin{align*}
M(H,c) &= {N \choose 3}-\sum_{i=1}^3{n_i \choose 3}+\sum^{X-1}_{j=0} \bigtriangleup M(H,c_j)\\
&={N \choose 3}-\sum_{i=1}^3{n_i \choose 3}+\sum^{X-1}_{j=0}\left[ {j \choose 2} - {N-j-1 \choose 2}\right]- \sum^{3}_{i=1}\sum^{x_i-1}_{x=0}\left[ {x \choose 2} - {n_i-x-1 \choose 2}\right].
\end{align*}
Similarly, if we consider only the colorings with $X$ red vertices, then the terms
\[{N \choose 3}-\sum_{i=1}^3{n_i \choose 3}+\sum^{X-1}_{j=0}\left[ {j \choose 2} - {N-j-1 \choose 2}\right]\] in the function $M(H,c)$ are constant. Only the term
\[\sum^{3}_{i=1}\sum^{x_i-1}_{x=0}\left[ {x \choose 2} - {n_i-x-1 \choose 2}\right]\] is distinct, and we denote this term by $S(x_1,x_2,x_3)$. Hence, the coloring with the maximum value of $S(x_1,x_2,x_3)$ has the minimum number of monochromatic edges. Remark that if $x_i = n_i$, then the sum $\sum^{x_i-1}_{x=0}\left[ {x \choose 2} - {n_i-x-1 \choose 2} \right]$ equals zero, since its terms cancel in pairs. Hence, $S(n_1,x_2,x_3)=S(0,x_2,x_3)$, and similarly when $x_2=n_2$ or $x_3=n_3$.
Next, we determine $c^*_X$ by comparing only the possible canonical forms, according to the value of $X$ and the type of $H$. We divide the analysis into several cases.
\textit{Case 1}: $0 \leqslant X < n_1$.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible forms & $x_1$ & $x_2$ & $x_3$ & $S(x_1,x_2,x_3)$\\
\hline
$F_1$& $X$& $0$&$0$&$S(X,0,0)$\\
$F_2$& $0$& $X$&$0$&$S(0,X,0)$\\
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
\hline
\end{tabular}
\end{center}
\noindent
We will compare among colorings in the forms $F_1$, $F_2$ and $F_3$. Note that if $n_1=n_2$, $F_1$ and $F_2$ are the same. Similarly, if $n_1=n_2=n_3$, $F_1,F_2$ and $F_3$ are the same. Then,
\begin{align*}
S(0,X,0) = \sum_{j=0}^{X-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right]\leqslant \sum_{j=0}^{X-1}\left[ {j \choose 2}-{n_1-j-1\choose 2} \right] = S(X,0,0).
\end{align*}
\noindent The equality holds only when $n_1=n_2$ or $n_2-1<2$. Since $3 \leqslant n_2 \leqslant n_3$, we can conclude that a coloring in the form $F_1$ has fewer monochromatic edges than $F_2$ and similarly for $F_3$. Hence, $c^*_X$ is in the form $F_1$.
\textit{Case 2}: $n_1 \leqslant X < \frac{n_1 + n_2}{2}$.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_2$& $0$& $X$&$0$&$S(0,X,0)$\\
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
$F_4$& $n_1$& $X-n_1$&$0$&$S(n_1,X-n_1,0)$\\
$F_5$& $n_1$& $0$&$X-n_1$&$S(n_1,0,X-n_1)$\\
\hline
\end{tabular}
\end{center}
In this case, we must have $n_1 < n_2$. Similarly to \textit{Case 1}, we have that $F_2$ has fewer monochromatic edges than $F_3$. Next, we will compare between colorings in the forms $F_4$ and $F_5$. If $n_2=n_3$, then both forms are the same. Thus,
\begin{align*}
S(n_1,0,X-n_1) =S(0,0,X-n_1)\leqslant S(0,X-n_1,0)=S(n_1,X-n_1,0).
\end{align*}
The equality holds only when $n_2=n_3$ or $n_3-1<2$. Since $3\leqslant n_3$, we have that $F_4$ has fewer monochromatic edges than $F_5$. Finally, we will compare between colorings in the forms $F_2$ and $F_4$. Note that $X-n_1 < \frac{n_1 + n_2}{2} - n_1 = n_2 - \frac{n_1 + n_2}{2} < n_2 -X $. Then, \begin{align*}
S(0,X,0)&=\sum_{j=0}^{X-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right]\\
&=\sum_{j=0}^{(X-n_1)-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right]+\left[{X-n_1 \choose 2} +{X-n_1 +1 \choose 2} +\cdots+ {X-1 \choose 2} \right]\\
&\;\;\;\;- \left[{n_2-X\choose2}+{n_2-X+1\choose2}+\cdots+{n_1+n_2-X-1\choose2}\right]\\
&\leqslant \sum_{j=0}^{(X-n_1)-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right] = S(0,X-n_1,0) = S(n_1, X-n_1,0).
\end{align*}
The equality holds only when $n_1+n_2-X-1 <2$. Since $X< \frac{n_1 + n_2}{2}$, this occurs only when $n_1+n_2 < 6$, which is impossible since $3 \leqslant n_1 \leqslant n_2$. Hence, $F_4$ gives fewer monochromatic edges than $F_2$ and $c^*_X$ is in the form $F_4$.
\textit{Case 3}: $\frac{n_1 + n_2}{2} \leqslant X < n_2$.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_2$& $0$& $X$&$0$&$S(0,X,0)$\\
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
$F_4$& $n_1$& $X-n_1$&$0$&$S(n_1,X-n_1,0)$\\
$F_5$& $n_1$& $0$&$X-n_1$&$S(n_1,0,X-n_1)$\\
\hline
\end{tabular}
\end{center}
Similarly, we must have $n_1 < n_2$ in this case. The comparisons between $F_2$ and $F_3$ and between $F_4$ and $F_5$ are similar to the previous case. Note that $X-n_1 \geqslant \frac{n_1 + n_2}{2} - n_1 = n_2 - \frac{n_1 + n_2}{2} \geqslant n_2 -X $. Next, we compare $F_2$ and $F_4$, which now gives the opposite result:\begin{align*}
S(0,X,0)&=\sum_{j=0}^{X-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right]\\
&=\sum_{j=0}^{(X-n_1)-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right]+\left[{X-n_1 \choose 2} +{X-n_1 +1 \choose 2} +\cdots+ {X-1 \choose 2} \right]\\
&\;\;\;\;- \left[{n_2-X\choose2}+{n_2-X+1\choose2}+\cdots+{n_1+n_2-X-1\choose2}\right]\\
&\geqslant \sum_{j=0}^{(X-n_1)-1}\left[ {j \choose 2}-{n_2-j-1\choose 2} \right] = S(0,X-n_1,0) = S(n_1, X-n_1,0).
\end{align*}
The equality holds only when $X=\frac{n_1+n_2}{2}$ or $X-1<2$. Since $\frac{n_1 + n_2}{2} \leqslant X < n_2$ and $n_1<n_2$, the condition $X<3$ occurs only when $n_1=1$ and $n_2=2$ or $3$. Since $3 \leqslant n_1$, we have that $F_2$ gives fewer monochromatic edges than $F_4$, and $c^*_X$ is in the form $F_2$ when $X>\frac{n_1+n_2}{2}$. If $X=\frac{n_1+n_2}{2}$, both forms give the same number of monochromatic edges.
From now on, the cases will be divided by whether the hypergraph is of type A or B.
\textit{Case 4A}: $n_2 \leqslant X < n_1+n_2$ and $H$ is a type A hypergraph.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
$F_4$& $n_1$& $X-n_1$&$0$&$S(n_1,X-n_1,0)$\\
$F_5$& $n_1$& $0$&$X-n_1$&$S(n_1,0,X-n_1)$\\
$F_6$& $X-n_2$& $n_2$&$0$&$S(X-n_2,n_2,0)$\\
$F_7$& $0$& $n_2$&$X-n_2$&$S(0,n_2,X-n_2)$\\
\hline
\end{tabular}
\end{center}
Similarly to \textit{Case 2}, we have that $F_4$ has fewer monochromatic edges than $F_5$. Next, we will compare between colorings in the forms $F_6$ and $F_7$. If $n_1=n_2=n_3$, then the forms $F_4, F_5, F_6$ and $F_7$ are the same. Thus,
\begin{align*}
S(0,n_2,X-n_2) =S(0,0,X-n_2) \leqslant S(X-n_2,0,0)=S(X-n_2,n_2,0).
\end{align*}
The equality holds only when $n_1=n_3$ or $n_3-1<2$. Since $n_3 \geqslant 3$, we have that $F_6$ has fewer monochromatic edges than $F_7$. Next, we will compare between colorings in the forms $F_4$ and $F_6$. If $n_1=n_2$, then both forms are the same. Suppose that $n_1<n_2$. Thus, since $X< n_1+n_2$,
\begin{align*}
S(n_1, X-n_1,0) &= S(0,X-n_1,0)
\\&= \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2-X \choose 2} + {n_1+n_2-X+1 \choose 2} + \cdots + {n_1 -1 \choose 2}\right]\\
&\;\;\;\;+ \left[{X-n_2 \choose 2} + {X-n_2+1 \choose 2} +\cdots+{X-n_1-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1\choose 2} + {n_1+1 \choose 2} + \cdots + {n_2-1 \choose 2}\right]\\
&\leqslant \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2-X \choose 2} + {n_1+n_2-X+1 \choose 2} + \cdots + {n_1 -1 \choose 2}\right]\\
&= S(X-n_2,0,0) = S(X-n_2,n_2,0).
\end{align*}
The equality holds only when $n_2-1<2$. Since $ n_2\geqslant 3$, we have that $F_6$ has fewer monochromatic edges than $F_4$. Finally, we will compare between colorings in the forms $F_3$ and $F_6$. Then, since $n_2 \leqslant X < n_1+n_2 \leqslant n_3$, \allowdisplaybreaks
\begin{align*}
S(0,0,X) &=\left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-1 \choose 2}\right]- \left[{n_3-X \choose 2} + {n_3-X+1 \choose 2} + \cdots + {n_3 -1 \choose 2}\right]\\
&\leqslant\left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-1 \choose 2}\right]- \left[{n_3-X \choose 2} + {n_3-X+1 \choose 2} + \cdots + {n_3 -1 \choose 2}\right]\\
&\;\;\;\;- \left[{X-n_2 \choose 2} + {X-n_2+1 \choose 2} +\cdots+{X-1 \choose 2}\right]\\
&\;\;\;\;+ \left[{n_3-n_2\choose 2} + {n_3-n_2+1 \choose 2} + \cdots + {n_3-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2-X \choose 2} + {n_1+n_2-X+1 \choose 2} +\cdots+{n_3-X-1 \choose 2}\right]\\
&\;\;\;\;+ \left[{n_1\choose 2} + {n_1+1 \choose 2} + \cdots + {n_3-n_2-1 \choose 2}\right]\\
&= \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2-X \choose 2} + {n_1+n_2-X+1 \choose 2} + \cdots + {n_1 -1 \choose 2}\right]\\
&= \sum_{j=0}^{(X-n_2)-1}\left[ {j \choose 2}-{n_1-j-1\choose 2} \right]= S(X-n_2,0,0) = S(X-n_2,n_2,0).
\end{align*}
The equality holds only when $n_3-1<2$. Since $n_3 \geqslant 3$, $F_6$ gives fewer monochromatic edges than $F_3$ and $c^*_X$ is in the form $F_6$.
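The conclusion of \textit{Case 4A} can be spot-checked numerically. The following Python sketch (an illustration, not part of the proof; helper names are ours) evaluates $M(H,c)$ directly for the five candidate forms of a sample type A hypergraph with $(n_1,n_2,n_3)=(3,4,9)$ and confirms that $F_6$ attains the minimum for each $X$ with $n_2 \leqslant X < n_1+n_2$.

```python
from math import comb

def mono(sizes, reds):
    # M(H,c) written color by color: all-red plus all-blue triples,
    # minus those lying inside a single class.
    N, X = sum(sizes), sum(reds)
    return (comb(X, 3) + comb(N - X, 3)
            - sum(comb(x, 3) + comb(n - x, 3) for n, x in zip(sizes, reds)))

def case_4a_forms(sizes, X):
    # The five candidate forms considered in Case 4A.
    n1, n2, n3 = sizes
    return {'F3': (0, 0, X), 'F4': (n1, X - n1, 0), 'F5': (n1, 0, X - n1),
            'F6': (X - n2, n2, 0), 'F7': (0, n2, X - n2)}
```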
\textit{Case 5A}: $n_1+n_2 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor$ and $H$ is a type A hypergraph.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{2cm}|P{4cm}| }
\hline
Possible forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
$F_{10}$& $n_1$& $n_2$&$X-n_1-n_2$&$S(n_1,n_2,X-n_1-n_2)$\\
\hline
\end{tabular}
\end{center}
Note that this is the only case in which we consider $F_{10}$. We compare only the colorings in the forms $F_3$ and $F_{10}$. Then, since $X \leqslant \lfloor{\frac{n_1+n_2+n_3}{2}}\rfloor$,
\begin{align*}
S(0,0,X)&= \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-1 \choose 2}\right]- \left[{n_3-X \choose 2} + {n_3-X+1 \choose 2} +\cdots+{n_3-1 \choose 2}\right]\\
&= \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_1-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2+n_3-X \choose 2} + {n_1+n_2+n_3-X+1 \choose 2} + \cdots + {n_3 -1 \choose 2}\right]\\
&\;\;\;\;+ \left[{X-n_1-n_2\choose2}+{X-n_1-n_2+1\choose2}+\cdots+{X-1\choose2}\right]\\
&\;\;\;\;-\left[{n_3-X\choose2}+{n_3-X+1\choose2}+\cdots+{n_1+n_2+n_3-X-1\choose2}\right]\\
&\leqslant \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_1-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2+n_3-X \choose 2} + {n_1+n_2+n_3-X+1 \choose 2} + \cdots + {n_3 -1 \choose 2}\right]\\
&= S(0,0,X-n_1-n_2) = S(n_1,n_2,X-n_1-n_2).
\end{align*}
The equality holds only when $X=\frac{N}{2}$ or $N-X-1<2$. Since $X \leqslant \lfloor{\frac{N}{2}}\rfloor$, the condition that $N-X<3$ occurs only when $N \leqslant 4$, which is impossible because $n_3 \geqslant 3$. If $X= \frac{N}{2}$, then both forms have the same number of monochromatic edges; however, if we relabel the names of the colors, then both forms are the same. Hence, $F_{10}$ gives fewer monochromatic edges than $F_3$ and $c^*_X$ is in the form $F_{10}$. Next, we consider the remaining cases, for type B hypergraphs.
\textit{Case 4B}: $n_2 \leqslant X < n_3$ and $H$ is a type B hypergraph.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible canonical forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_3$& $0$& $0$&$X$&$S(0,0,X)$\\
$F_4$& $n_1$& $X-n_1$&$0$&$S(n_1,X-n_1,0)$\\
$F_5$& $n_1$& $0$&$X-n_1$&$S(n_1,0,X-n_1)$\\
$F_6$& $X-n_2$& $n_2$&$0$&$S(X-n_2,n_2,0)$\\
$F_7$& $0$& $n_2$&$X-n_2$&$S(0,n_2,X-n_2)$\\
\hline
\end{tabular}
\end{center}
Similarly to \textit{Case 4A}, $F_4$ and $F_6$ have fewer monochromatic edges than $F_5$ and $F_7$, respectively, and $F_6$ has fewer monochromatic edges than $F_4$. Next, we would have to compare $F_3$ and $F_6$ to determine which form $c^*_X$ takes. Due to the complexity of this comparison, we only conclude that $c^*_X$ is in the form $F_3$ or $F_6$.
\textit{Case 5B}: $n_3 \leqslant X \leqslant \lfloor{\frac{n_1+n_2+n_3}{2}}\rfloor $ and $H$ is a type B hypergraph.
\begin{center}
\begin{tabular}{ |P{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{3cm}| }
\hline
Possible canonical forms & $x_1$ & $x_2$ & $x_3$&$S(x_1,x_2,x_3)$\\
\hline
$F_4$& $n_1$& $X-n_1$&$0$&$S(n_1,X-n_1,0)$\\
$F_5$& $n_1$& $0$&$X-n_1$&$S(n_1,0,X-n_1)$\\
$F_6$& $X-n_2$& $n_2$&$0$&$S(X-n_2,n_2,0)$\\
$F_7$& $0$& $n_2$&$X-n_2$&$S(0,n_2,X-n_2)$\\
$F_8$& $X-n_3$& $0$&$n_3$&$S(X-n_3,0,n_3)$\\
$F_9$& $0$& $X-n_3$&$n_3$&$S(0,X-n_3,n_3)$\\
\hline
\end{tabular}
\end{center}
Similarly to \textit{Case 4A}, $F_4$ and $F_6$ have fewer monochromatic edges than $F_5$ and $F_7$, respectively, and $F_6$ has fewer monochromatic edges than $F_4$. Next, we will compare between colorings in the forms $F_8$ and $F_9$. If $n_1=n_2$, then both forms are the same. Thus, \begin{align*} S(0,X-n_3,n_3) =S(0,X-n_3,0)\leqslant S(X-n_3,0,0)=S(X-n_3, 0,n_3).
\end{align*}
The equality holds only when $n_1=n_2$ or $n_2-1 <2$. Since $3 \leqslant n_2$, we have that $F_8$ gives fewer monochromatic edges than $F_9$. Finally, we will compare between colorings in the forms $F_6$ and $F_8$. If $n_2=n_3$, then both forms are the same. We may suppose that $n_2<n_3$. Thus, since $X \leqslant \lfloor{\frac{N}{2}}\rfloor $,
\begin{align*}
S(X-n_2, n_2,0)&= S(X-n_2,0,0)\\
&= \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_3-1 \choose 2}\right]\\
&\;\;\;\;+ \left[{X-n_3 \choose 2} + {X-n_3+1 \choose 2} +\cdots+{X-n_2-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_2 -X\choose 2} + {n_1+n_2-X+1 \choose 2} + \cdots +
{n_1+n_3-X-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_3-X \choose 2} + {n_1+n_3-X+1 \choose 2} + \cdots + {n_1 -1 \choose 2}\right]\\
&\leqslant \left[{0 \choose 2} + {1 \choose 2} +\cdots+{X-n_3-1 \choose 2}\right]\\
&\;\;\;\;- \left[{n_1+n_3-X \choose 2} + {n_1+n_3-X+1 \choose 2} + \cdots + {n_1 -1 \choose 2}\right]\\
&= S(X-n_3,0,0) = S(X-n_3,0,n_3).
\end{align*}
The equality holds only when $X=\frac{N}{2}$ or $n_1+n_3-X-1<2$. First, if $X=\frac{N}{2}$, then both forms have the same number of monochromatic edges. However, if we relabel the names of the colors, then both forms are the same. We will focus on the condition of $n_1+n_3-X<3$ where $3 \leqslant n_1 \leqslant n_2<n_3\leqslant X < \frac{N}{2}$ and $n_1+n_2>n_3$. Suppose that $n_1+n_3-X<3$.
If $N$ is even, then $3 > n_1+n_3-X \geqslant n_1+n_3-\left(\frac{N}{2}-1\right)=\frac{n_1+n_3-n_2}{2}+1$. Thus, $n_1+n_3-n_2 <4$. Since $1 \leqslant n_3-n_2$, we have that $n_1<3$, which contradicts $3\leqslant n_1$.
If $N$ is odd, then $3 > n_1+n_3-X \geqslant n_1+n_3-\left(\frac{N-1}{2}\right)=\frac{n_1+n_3-n_2+1}{2}$. Thus, $n_1+n_3-n_2 <5$. Since $1 \leqslant n_3-n_2$, we have that $n_1<4$. Consequently, it is only possible when $n_1=3$. Note that $n_2 < n_3$ and $n_1 + n_2=3+n_2 > n_3$. For $N = 3+n_2+n_3$ to be odd, $n_2$ and $n_3$ must have the same parity. Thus, $n_3=n_2+2$ and $5>n_1+n_3-n_2=3+n_2+2-n_2=5$, which is a contradiction.
Now, we have that $n_1+n_3-X \geqslant 3$. Hence, $F_8$ gives fewer monochromatic edges than $F_6$ and $c^*_X$ is in the form $F_8$.
To sum up, we have determined (as shown in the table below) the canonical form $c^*_X$ that has the minimum number of monochromatic edges for each type of hypergraph and each range of the number of red vertices $X$.
\begin{center}
\begin{tabular}{ |P{1cm}|P{3.5cm}|P{1.7cm}|P{1cm}|P{3.5cm}|P{1.7cm}| }
\hline
\multicolumn{6}{|c|}{List of best canonical forms} \\
\hline
\multicolumn{3}{|c|}{Type A Hypergraphs $n_1+n_2 \leqslant n_3$} & \multicolumn{3}{|c|}{Type B Hypergraphs $n_1+n_2>n_3$} \\
\hline
Cases&Number of red vertices & Canonical form&Cases &Number of red vertices&Canonical form\\
\hline
$1$&$0 \leqslant X < n_1$& $F_1$&$1$&$0 \leqslant X < n_1$ &$F_1$\\
$2$&$n_1 \leqslant X < \frac{n_1+n_2}{2}$ &$F_4$&$2$&$n_1 \leqslant X < \frac{n_1+n_2}{2}$& $F_4$\\
$3$&$X = \frac{n_1+n_2}{2}$& $F_2$ or $ F_4$&$3$&$X = \frac{n_1+n_2}{2}$ &$F_2$ or $ F_4$\\
$ $&$\frac{n_1+n_2}{2} < X < n_2$& $F_2$&$ $&$\frac{n_1+n_2}{2} < X < n_2$ &$F_2$\\
$4A$&$n_2 \leqslant X < n_1 +n_2$& $F_6$&$4B$&$n_2 \leqslant X <n_3$&$F_3$ or $F_6$\\
$5A$&$n_1 + n_2 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor$&$F_{10}$&$5B$&$n_3 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor$ & $F_8$\\
\hline
\end{tabular}
\end{center}
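As a numerical spot-check of the table, the following Python sketch (an illustration, not part of the proof; helper names are ours) brute-forces, for the sample type A hypergraph $(n_1,n_2,n_3)=(3,3,7)$, all red-distributions minimizing the number of monochromatic edges for each $X$, and verifies that the predicted canonical form is among the minimizers; the boundary value $X=\frac{n_1+n_2}{2}$, where forms tie, is skipped.

```python
from itertools import product
from math import comb

def mono(sizes, reds):
    # M(H,c) written color by color.
    N, X = sum(sizes), sum(reds)
    return (comb(X, 3) + comb(N - X, 3)
            - sum(comb(x, 3) + comb(n - x, 3) for n, x in zip(sizes, reds)))

def minimizers(sizes, X):
    # All red-distributions (x1, x2, x3) with X red vertices attaining
    # the minimum number of monochromatic edges.
    opts = [r for r in product(*(range(n + 1) for n in sizes)) if sum(r) == X]
    best = min(mono(sizes, r) for r in opts)
    return {r for r in opts if mono(sizes, r) == best}
```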
Note that, in \textit{Case 3}, $c^*_X$ is only in the form $F_2$ when $\frac{n_1+n_2}{2} < X < n_2$; if $X=\frac{n_1+n_2}{2}$, then $F_2$ and $F_4$ give the same number of monochromatic edges. Moreover, in \textit{Case 4B}, we have not compared the colorings in the forms $F_3$ and $F_6$. We also consider the uniqueness of $c^*_X$ in the other cases. The inequalities in some cases hold with equality when the sizes of some parts are equal, in which case the corresponding canonical forms are equivalent; for example, in \textit{Case 1}, if $n_2=n_3$, then $F_2$ is equivalent to $F_3$. The remaining inequalities hold with equality when $X$ attains a certain value; for example, in \textit{Case 5A} and \textit{Case 5B}, when $X = \frac{N}{2}$, the colorings in the forms $F_3$ and $F_{10}$ are isomorphic up to a permutation of the names of the colors, and so are the colorings in the forms $F_6$ and $F_8$ in \textit{Case 5B}.
Hence, apart from \textit{Case 3} with $X=\frac{n_1+n_2}{2}$ and \textit{Case 4B}, a coloring whose red vertices are placed according to the form in the table above has fewer monochromatic edges than any other coloring with the same number of red vertices, and uniqueness follows.
Next, we will compare colorings with different values of $X$. We will show that any $c^*_X$ with $X\leqslant \lfloor{\frac{N}{2}}\rfloor -1$ has strictly more monochromatic edges than some coloring. Hence, we would like to show that $M(H,c^*_{X+1})-M(H,c^*_{X}) < 0$ for each $0 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor-1$. Note that, if $c^*_X$ and $c^*_{X+1}$ are in the same canonical form, we will consider $\displaystyle \bigtriangleup M(H,c^*_X)$ instead. Again, we divide into several cases according to the value of $X$ and the type of $H$.
\textit{Case 1}: $0 \leqslant X < n_1$.
We have that $c^*_X$ is in the form $F_1$ with $x_1 = X$, $x_2 =0$ and $x_3=0$. Then,\begin{align*} \bigtriangleup_1M(H,c^*_X)= \left[ {X \choose 2} - {X \choose 2}\right] - \left[ {N-X-1 \choose 2} - {n_1-X-1 \choose 2}\right]\leqslant 0.
\end{align*}
The equality holds only when $N-X-1<2$, which is impossible.
\textit{Case 2}: $n_1 \leqslant X < \frac{n_1 + n_2}{2}$.
We have that $c^*_X$ is in the form $F_4$ with $x_1 = n_1$, $x_2 =X-n_1$ and $x_3=0$. Then, \begin{align*}
\bigtriangleup_2M(H,c^*_X)= \left[ {X \choose 2}+{n_1+n_2-X-1 \choose 2}\right]-\left[ {X-n_1 \choose 2} + {N-X-1 \choose 2} \right].
\end{align*}
We will show that $\bigtriangleup_2M(H,c^*_X)<0$ by Proposition 4. We have $X + (n_1+n_2-X-1) = n_1+n_2 -1 \leqslant n_3 +n_2 -1 =(X - n_1) +(N-X-1)$,
$X-n_1 < X$ and $n_1+n_2-X-1 < N -X -1$. Since $2 \leqslant N-X-1$, we have that $\displaystyle \bigtriangleup_2M(H,c^*_X)<0$.
\textit{Case 3}: $\frac{n_1 + n_2}{2} \leqslant X < n_2$.
We have that $c^*_X$ is in the form $F_2$ or $F_4$. We will show that $\bigtriangleup_2M(H,c^*_X) < 0$ for both forms. If $X =\frac{n_1 + n_2}{2}$ and $ c^*_X$ is in the form $F_4$ with $x_1 = n_1$, $x_2=X-n_1$ and $x_3 = 0$, then $\bigtriangleup_2M(H,c^*_X) < 0$ similarly as in \textit{Case 2}.
If $\frac{n_1 + n_2}{2} \leqslant X < n_2$ and $c^*_X$ is in the form $F_2$ with $x_1 = 0$, $x_2 =X$ and $x_3=0$, then \begin{align*}
\bigtriangleup_2M(H,c^*_X) = \left[ {X \choose 2} - {X \choose 2}\right] - \left[ {N-X-1 \choose 2} - {n_2-X-1 \choose 2}\right]\leqslant 0.
\end{align*}
The equality holds only when $N-X-1<2$, which is impossible.
Again, from this point, the cases will be divided by whether the hypergraph is of type A or B.
\textit{Case 4A}: $n_2 \leqslant X < n_1+n_2$ and $H$ is a type A hypergraph.
We have that $c^*_X$ is in the form $F_6$ with $x_1 = X-n_2$, $x_2 =n_2$ and $x_3=0$. Then, \begin{align*}
\bigtriangleup_1M(H,c^*_X) =\left[{X\choose 2}+{n_1+n_2-X-1 \choose2} \right] -\left[ {X-n_2 \choose2}+{N-X-1\choose 2} \right].
\end{align*}
We will show that $\displaystyle \bigtriangleup_1M(H,c^*_X)<0$ by Proposition 4. We have $X+(n_1+n_2-X-1)=n_1+n_2-1 \leqslant n_1+n_3-1=(X-n_2)+(N-X-1)$, $X-n_2 < X$ and $n_1+n_2-X-1<N-X-1$. Since $2 \leqslant N-X-1$, we have that $\displaystyle \bigtriangleup_1M(H,c^*_X)<0$.
\textit{Case 5A}: $n_1+n_2 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor$ and $H$ is a type A hypergraph.
We have that $c^*_X$ is in the form $F_{10}$ with $x_1 = n_1$, $x_2=n_2$ and $x_3 = X-n_1-n_2$. Then, \begin{align*}
\bigtriangleup_3M(H,c^*_X)=\left[{X\choose 2} - {X-n_1-n_2 \choose2}\right] - \left[{N-X-1\choose 2} - {N-X-1 \choose2}\right] \geqslant 0.
\end{align*}
The equality holds only when $X<2$, which is impossible. This yields the opposite behavior: the number of monochromatic edges increases as the number of red vertices increases. Hence, $F_{10}$ gives the minimum number of monochromatic edges at $X=n_1+n_2$ instead.
Next, we will consider the last two cases of type B hypergraphs.
\textit{Case 4B}: $n_2 \leqslant X < n_3$ and $H$ is a type B hypergraph.
We have that $c^*_X$ is in the form $F_3$ or $F_6$. We will show that $\bigtriangleup_3M(H,c^*_X) < 0$ for $F_3$ and $\bigtriangleup_1M(H,c^*_X) < 0$ for $F_6$. If $c^*_X$ is in the form $F_3$ with $x_1 = 0$, $x_2=0$ and $x_3 = X$, then \begin{align*}
\bigtriangleup_3M(H,c^*_X) = \left[ {X \choose 2} - {X \choose 2}\right] - \left[ {N-X-1 \choose 2} - {n_3-X-1 \choose 2}\right]\leqslant 0.
\end{align*}
The equality holds only when $N-X-1<2$, which is impossible. Next, if $c^*_X$ is in the form $F_6$ with $x_1 = X-n_2$, $x_2=n_2$ and $x_3 = 0$, then \begin{align*}
\bigtriangleup_1M(H,c^*_X) = \left[{X \choose 2} +{n_1 +n_2-X-1\choose 2} \right]-\left[{N-X-1 \choose 2} + {X-n_2 \choose 2}\right].
\end{align*}
We will show that $\displaystyle \bigtriangleup_1M(H,c^*_X) < 0$ by Proposition 4. We have $X+(n_1+n_2-X-1) = n_1+n_2 -1 \leqslant n_1+n_3 -1 = (N-X-1) +(X-n_2) $, $X > X-n_2$ and $n_1 +n_2-X-1 < N-X-1$. Since $2\leqslant N-X-1$, we have that $\displaystyle \bigtriangleup_1M(H,c^*_X) < 0$.
\textit{Case 5B}: $n_3 \leqslant X \leqslant \lfloor{\frac{N}{2}}\rfloor $ and $H$ is a type B hypergraph.
We have that $c^*_X$ is in the form $F_{8}$ with $x_1 = X-n_3$, $x_2=0$ and $x_3 = n_3$. Then, \begin{align*}
\bigtriangleup_1M(H,c^*_X)&=\left[{X\choose 2} - {X-n_3 \choose2}\right] - \left[{N-X-1\choose 2} - {n_1-(X-n_3)-1 \choose2}\right]\\
&= \frac{n_1^2+2n_1n_3+2X(N-n_1)-3n_1+3N-N^2-4n_3}{2}.
\end{align*}
Since no lemma applies to $\bigtriangleup_1M(H,c^*_X)$, we expand the binomial coefficients and determine when $\displaystyle \bigtriangleup_1M(H,c^*_X)$ is negative. Consequently, $\displaystyle \bigtriangleup_1M(H,c^*_X) < 0$ if and only if \[X< \left \lceil\frac{N^2-3N-n_1^2-2n_1n_3+3n_1+4n_3}{2(N-n_1)}\right\rceil.\]
Write $X'$ for $\left\lceil\frac{N^2-3N-n_1^2-2n_1n_3+3n_1+4n_3}{2(N-n_1)}\right\rceil$. Then $c^*_{X'}$ has the minimum number of monochromatic edges among all colorings in the form $F_8$, and we will show that $X'\leqslant\frac{N}{2}$.
\begin{align*}
X'=\left\lceil\frac{N^2-3N-n_1^2-2n_1n_3+3n_1+4n_3}{2(N-n_1)}\right\rceil
\leqslant \frac{N^2 -Nn_1}{2(N-n_1)}= \frac{N}{2}.
\end{align*}
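The threshold $X'$ can likewise be checked numerically. The following Python sketch (an illustration with our own helper names, not part of the proof) computes $X'$ from the formula above and confirms, for two sample type B hypergraphs, that it coincides with the brute-force minimizer over the $F_8$ colorings and does not exceed $\frac{N}{2}$.

```python
from math import comb

def mono(sizes, reds):
    # M(H,c) written color by color.
    N, X = sum(sizes), sum(reds)
    return (comb(X, 3) + comb(N - X, 3)
            - sum(comb(x, 3) + comb(n - x, 3) for n, x in zip(sizes, reds)))

def x_prime(sizes):
    # The threshold X' from the text (ceiling via integer division).
    n1, n2, n3 = sizes
    N = n1 + n2 + n3
    num = N * N - 3 * N - n1 * n1 - 2 * n1 * n3 + 3 * n1 + 4 * n3
    return -(-num // (2 * (N - n1)))

def best_f8(sizes):
    # Brute-force minimizer of M over the F8 colorings (X - n3, 0, n3).
    n1, n2, n3 = sizes
    N = n1 + n2 + n3
    return min(range(n3, N // 2 + 1),
               key=lambda X: mono(sizes, (X - n3, 0, n3)))
```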
Hence, we have shown all the comparisons between colorings and we can conclude that:
\begin{enumerate}
\item If $H$ is a type A hypergraph, then the coloring with $n_1 + n_2$ red vertices in the form $F_{10}$ has the minimum number of monochromatic edges.
\item If $H$ is a type B hypergraph, then the coloring with $X'$ red vertices in the form $F_8$ has the minimum number of monochromatic edges.
\end{enumerate}
We have already proved that these minimum colorings are the unique colorings with the minimum number of monochromatic edges among colorings with the same number of red vertices. Furthermore, we have shown that $\bigtriangleup M(H,c^*_X)$ is negative when $X$ is less than $n_1+n_2$ in a type A hypergraph and when $X$ is less than $X'$ in a type B hypergraph. Hence, these minimum colorings are the unique colorings with the minimum number of monochromatic edges among all colorings.
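The two conclusions above can be verified by exhaustive search on small instances. The following Python sketch (a sanity check, not part of the proof; helper names are ours) enumerates all red/blue colorings of a sample type A hypergraph and a sample type B hypergraph and recovers exactly the predicted minimum colorings, each together with its red/blue complement.

```python
from itertools import product
from math import comb

def mono(sizes, reds):
    # M(H,c) written color by color.
    N, X = sum(sizes), sum(reds)
    return (comb(X, 3) + comb(N - X, 3)
            - sum(comb(x, 3) + comb(n - x, 3) for n, x in zip(sizes, reds)))

def global_minimizers(sizes):
    # Red-distributions (over every X) with the fewest monochromatic edges.
    opts = list(product(*(range(n + 1) for n in sizes)))
    best = min(mono(sizes, r) for r in opts)
    return {r for r in opts if mono(sizes, r) == best}
```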
\subsection{Hypergraphs with $n_1 < 3$ or $n_2 < 3$}
In this subsection, we will prove the remaining cases, namely unbalanced complete tripartite hypergraphs in which some classes are smaller than $3$. These cases are straightforward but contain subtle details. First, we will consider an unbalanced complete tripartite $3$-uniform hypergraph with $n_1 \leqslant n_2 < 3 \leqslant n_3$. There are three possibilities for these hypergraphs:
\textit{Case i}: $n_1 = n_2 =1$ and $n_3 \geqslant 3$.
Since the first two classes have fewer than $3$ vertices, no edge is contained in either of them. Thus,
\[M(H,c)={X\choose 3}+{N-X\choose 3}-{x_3 \choose 3}-{n_3-x_3\choose3}.\]
We have that $M(H,c) \geqslant 0$. Suppose that $M(H,c)=0$. Since $X \geqslant x_3$ and $N-X \geqslant n_3-x_3$, we have that ${X\choose 3}={x_3 \choose 3}$ and ${N-X\choose 3}={n_3-x_3\choose3}$. Since $N=n_1+n_2+n_3=1+1+n_3\geqslant 5$, at least $3$ vertices are colored the same, say red, i.e., $X\geqslant 3$. Then, since $X \geqslant 3$, the equality ${X\choose 3}={x_3 \choose 3}$ forces $X=x_3$, which implies that $N-X=n_3+2-X>n_3-x_3$. Consequently, $3 > N-X = n_3+2-x_3$, i.e., $n_3=x_3$. This means that $c$ is a coloring such that the third class contains all red vertices and no blue vertex and the first two classes are all blue. Hence, we have determined the minimum coloring and shown that it is unique up to a permutation of colors and classes.
\textit{Case ii}: $n_1=1, n_2 =2$ and $n_3 \geqslant 3$.
Again, no edge can be contained in the first two classes. Thus,
\[M(H,c)={X\choose 3}+{N-X\choose 3}-{x_3 \choose 3}-{n_3-x_3\choose3}.\]
We will show that $M(H,c) \geqslant 1$. We have that $X \geqslant x_3$ and $N-X \geqslant n_3-x_3$. Since $N=n_1+n_2+n_3=1+2+n_3\geqslant 6$, at least $3$ vertices are colored the same, say red, i.e., $X\geqslant 3$. If $X>x_3$, then $M(H,c)>0$. Suppose that $X=x_3$. Then, $N-X=n_3+3-x_3>n_3-x_3$ and $N-X \geqslant 3$. Hence, $M(H,c)>0$. Next, suppose that $M(H,c)=1$. If $X>x_3$, then
\begin{align*}
1\leqslant {X-1 \choose 2} \leqslant{X\choose 3}-{x_3\choose 3}\leqslant{X\choose 3}+{N-X\choose 3}-{x_3 \choose 3}-{n_3-x_3\choose3}= M(H,c).
\end{align*}
This implies that $X=3$ and so $N-X \geqslant 3$, moreover, ${N-X\choose 3}={n_3-x_3\choose3}$. Hence, $N-X=n_3-x_3$. Now, we have $N-3=N-X=n_3-x_3=N-3-x_3$, i.e., $x_3=0$. This means $c$ is a coloring such that the third class contains no red vertices and the first two classes are all red.
Suppose that $X=x_3$. Then,
\begin{align*}
1=M(H,c)={X\choose 3}+{N-X\choose 3}-{x_3 \choose 3}-{n_3-x_3\choose3}={n_3-x_3+3\choose 3}-{n_3-x_3\choose3}.
\end{align*} This is only possible when $n_3-x_3=0$, i.e., $c$ is a coloring such that the third class contains all red vertices and no blue vertex and the first two classes are all blue. Note that if we relabel the names of the colors, then both colorings are the same. Hence, we have determined the minimum coloring and shown that it is unique up to a permutation of colors and classes.
\textit{Case iii}: $n_1=2, n_2 =2$ and $n_3 \geqslant 3$.
Similarly, no edge can be contained in the first two classes. Thus,
\[M(H,c)={X\choose 3}+{N-X\choose 3}-{x_3 \choose 3}-{n_3-x_3\choose3}.\]
We will show that $M(H,c) \geqslant 4$. We have that $X \geqslant x_3$ and $N-X \geqslant n_3-x_3$. If there is a color, say red, such that all vertices of that color are only in the third class, then we have $X=x_3$ and $N-X=n_3+4-x_3\geqslant 4$. Thus,
\begin{align*}
M(H,c)&= {N-X\choose 3}-{N-X-4\choose3}\\
&={N-X-1 \choose 2}+{N-X-2 \choose 2}+{N-X-3 \choose 2}+{N-X-4 \choose 2}\\
&\geqslant {3 \choose 2}+{2 \choose 2} = 4.
\end{align*} The equality holds only when $N-X=4$, i.e., $n_3=x_3$. Hence, if $M(H,c)=4$, then $c$ is a coloring such that the third class contains all red vertices but no blue vertex and the first two classes are all blue.
Suppose that there is no color such that all vertices of that color are only in the third class, i.e., $X>x_3$ and $N-X>n_3-x_3$. Since $N=n_1+n_2+n_3=2+2+n_3\geqslant 7$, at least $4$ vertices are colored the same, say red, i.e., $X\geqslant 4$. If $X \geqslant 5$, then
\begin{align*}
M(H,c)\geqslant {X\choose 3}-{x_3\choose3}\geqslant {X\choose 3}-{X-1\choose3}={X-1 \choose 2} \geqslant {4 \choose 2} = 6.
\end{align*}
If $X=4$, then $N-X \geqslant 3$ and
\begin{align*}
M(H,c)\geqslant {X\choose 3}-{x_3\choose3} + 1\geqslant {X\choose 3}-{X-1\choose3}+1={X-1 \choose 2}+1= 4.
\end{align*}
The equality holds only when $N-X =3$ and $x_3=X-1$, i.e., $N=7$ and $x_3=3$. This means that $n_3=3$. Hence, if $M(H,c)=4$, then $c$ is a coloring of $H$, which has $7$ vertices, such that the third class is all red, the second class is all blue and the first class has one red and one blue vertex. This implies that when $n_3=3$, minimum colorings are not unique.
Finally, the last case is a hypergraph in which only the first class is smaller than $3$.
\textit{Case iv}: $n_1<3$ and $3 \leqslant n_2 \leqslant n_3$.
Fortunately, this case conforms to almost all of the cases in the previous subsection, since there we only used the assumption $n_2 \geqslant 3$. However, there are two points where we used the fact that $n_1\geqslant 3$. The first is in the last part of \textit{Case 3}, where we determined which of $F_2$ and $F_4$ has fewer monochromatic edges. Without $n_1\geqslant 3$, it is possible that both forms have the same number of monochromatic edges. This is not problematic because both of them have strictly more monochromatic edges than some coloring, by the subsequent comparisons. The second is in the last part of \textit{Case 5B}, where we determined which of $F_6$ and $F_8$ has fewer monochromatic edges. Again, without $n_1\geqslant 3$, we may have $n_1+n_3-X<3$ where $n_2<n_3\leqslant X < \frac{n_1+n_2+n_3}{2}$ and $n_1+n_2>n_3$. If this condition occurs, $F_6$ and $F_8$ have the same number of monochromatic edges. Since $n_1<3$, the condition is possible only when ($N$ is even and $n_1<3$) or ($N$ is odd and $n_1 \leqslant 3$).
Suppose that $N$ is even. If $n_1=1$, then we have that $n_3 < n_1+n_2 = n_2+1$. Since $n_2<n_3$, there is no choice for $n_3$. If $n_1=2$, then we have that $n_3 < n_1+n_2= n_2+2$. Since $n_2<n_3$, $n_3=n_2+1$. However, for $N=2+n_2+n_3$ to be even, $n_2$ and $n_3$ must have the same parity, which is impossible.
Suppose that $N$ is odd. Again, it is impossible for $n_1=1$. Since $n_1<3$, we have $n_1=2$. Similarly, $n_3=n_2+1$ and $N=n_1+n_2+n_3=2+n_2+n_2+1=2(n_2+1)+1$, which is odd. Consequently, the condition is possible only here, and $F_6$ and $F_8$ have the same number of monochromatic edges. Hence, if $H$ is a hypergraph with $n_1=2$ and $ 3\leqslant n_2=n_3-1$, then there are exactly two colorings (each unique up to a permutation of colors and classes) that have the minimum number of monochromatic edges, namely the colorings with $X=\lfloor{\frac{N}{2}}\rfloor$ in the forms $F_6$ and $F_8$.
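The exceptional case just described can be confirmed by brute force. The following Python sketch (an illustration, not part of the proof) takes the smallest instance $(n_1,n_2,n_3)=(2,3,4)$, enumerates all colorings with $X \leqslant \lfloor N/2 \rfloor = 4$ red vertices, and finds exactly the two predicted minimum colorings, $F_6=(1,3,0)$ and $F_8=(0,0,4)$.

```python
from itertools import product
from math import comb

def mono(sizes, reds):
    # M(H,c) written color by color.
    N, X = sum(sizes), sum(reds)
    return (comb(X, 3) + comb(N - X, 3)
            - sum(comb(x, 3) + comb(n - x, 3) for n, x in zip(sizes, reds)))

sizes = (2, 3, 4)  # n1 = 2, n2 = n3 - 1: the exceptional family
opts = [r for r in product(range(3), range(4), range(5)) if sum(r) <= 4]
best = min(mono(sizes, r) for r in opts)
minima = {r for r in opts if mono(sizes, r) == best}
```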
Now, we have determined the minimum colorings of all unbalanced complete tripartite $3$-uniform hypergraphs.
\end{proof}
\section{Proof of Theorem~\ref{thm:3}}
\label{sec:thm:3}
\begin{proof}[Proof of Theorem~\ref{thm:3}]
Assume that $ k\geqslant 2$. Let $H$ be a balanced complete $k$-partite $(r+1)$-uniform hypergraph with $n \geqslant r+1$ vertices in each class and let $N=kn$. Let $c$ be a red/blue/green coloring of $H$ with the numbers of red, blue and green vertices of the $i^{th}$ class equal to $r_i, b_i$ and $g_i$, respectively, and let $R$, $B$ and $G$ be the total numbers of red, blue and green vertices, respectively.
Let $\bigtriangleup_{i{i'}} M(H,c,r,b)$ be the change in the number of monochromatic edges if a red vertex in the $i^{th}$ class is recolored blue and a blue vertex in the ${i'}^{th}$ class is recolored red. The definitions are similar for the other color combinations. This process is called a \textit{swapping} and results in a new coloring, say $c'$. As a result of the process, the number of red vertices in the $i^{th}$ class decreases by $1$ and the number of red vertices in the ${i'}^{th}$ class increases by $1$, while the total number of red vertices remains the same. In other words, the coloring $c'$ has $r_i - 1$ and $r_{i'}+1$ red vertices in the $i^{th}$ and ${i'}^{th}$ classes, respectively. Likewise, the coloring $c'$ has $b_i+1$ and $b_{i'}-1$ blue vertices in the $i^{th}$ and ${i'}^{th}$ classes, respectively.
We can compute $\bigtriangleup_{i{i'}} M(H,c,r,b)$ by comparing the numbers of monochromatic edges containing the two swapped vertices before and after the swapping. Thus,
\begin{align*}
\bigtriangleup_{i{i'}}M(H,c,r,b)&= \left[ {B-1 \choose r}-{b_i \choose r} + {R-1 \choose r}- {r_{i'}\choose r} \right]\\ &\;\;\;\;- \left[ {R-1 \choose r}-{r_i-1 \choose r} + {B-1 \choose r} - {b_{i'}-1\choose r} \right]\\
&= \left[ {r_i-1 \choose r} + {b_{i'}-1\choose r} \right] - \left[ {b_i \choose r} + {r_{i'}\choose r} \right].
\end{align*}
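The displayed expression for $\bigtriangleup_{i{i'}}M(H,c,r,b)$ can be checked against a direct recount of monochromatic edges. The following Python sketch (with our own helper names; $k=3$ classes of size $n=4$ and $r=2$, i.e., a $3$-uniform hypergraph) performs a swapping and compares the actual change with the formula.

```python
from itertools import combinations
from math import comb

def mono_edges(classes, colors, r):
    # Monochromatic edges: (r+1)-subsets meeting at least two classes,
    # with all vertices of one color.
    return sum(1 for e in combinations(range(len(classes)), r + 1)
               if len({classes[v] for v in e}) > 1
               and len({colors[v] for v in e}) == 1)

def swap_delta_formula(colors, classes, r, i, ip):
    # [C(r_i - 1, r) + C(b_i' - 1, r)] - [C(b_i, r) + C(r_i', r)]
    cnt = lambda j, c: sum(1 for v, col in enumerate(colors)
                           if classes[v] == j and col == c)
    return (comb(cnt(i, 'r') - 1, r) + comb(cnt(ip, 'b') - 1, r)
            - comb(cnt(i, 'b'), r) - comb(cnt(ip, 'r'), r))

def check(colors_str, i, ip, r=2, k=3, n=4):
    # Swap the first red vertex of class i with the first blue vertex of
    # class i' and compare the recounted change with the formula.
    classes = [v // n for v in range(k * n)]
    colors = list(colors_str)
    u = next(v for v in range(k * n) if classes[v] == i and colors[v] == 'r')
    w = next(v for v in range(k * n) if classes[v] == ip and colors[v] == 'b')
    pred = swap_delta_formula(colors, classes, r, i, ip)
    before = mono_edges(classes, colors, r)
    colors[u], colors[w] = 'b', 'r'
    return mono_edges(classes, colors, r) - before == pred
```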
A \textit{successful swapping} is a swapping that reduces the number of monochromatic edges, i.e., $\bigtriangleup_{i{i'}} M(H,c,r,b) < 0$. Note that if $0<r_i<n$, $0<b_i<n$, $r_{i'}=n-1$ and $b_{i'}=1$, then
\begin{align*}
\bigtriangleup_{i{i'}}M(H,c,r,b)&= \left[ {r_i-1 \choose r} + {b_{i'}-1\choose r} \right] - \left[ {b_i \choose r} + {r_{i'}\choose r} \right]\\
&=\left[ {r_i-1 \choose r} + {1-1\choose r} \right] - \left[ {b_i \choose r} + {n-1\choose r} \right].
\end{align*} Since $r_i<n$, $0<b_i$ and $n-1 \geqslant r$, we have that $\bigtriangleup_{i{i'}}M(H,c,r,b)<0$. This implies that a swapping resulting in fewer polychromatic classes is always successful.
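As a sanity check, the swapping formula above can be verified numerically against a direct count of monochromatic edges. The following Python sketch is only an illustration (not part of the proof); it uses a $3$-uniform hypergraph, i.e., $r=2$, with three classes of size $5$:

```python
from itertools import combinations
from math import comb

def mono_edges(classes, r):
    """Count monochromatic edges of the complete multipartite (r+1)-uniform
    hypergraph whose edges are the (r+1)-subsets not contained in one class.
    classes[i] is a dict mapping a color to its multiplicity in class i."""
    verts = [(i, col) for i, cl in enumerate(classes)
             for col, m in cl.items() for _ in range(m)]
    count = 0
    for S in combinations(range(len(verts)), r + 1):
        cls = {verts[v][0] for v in S}
        cols = {verts[v][1] for v in S}
        if len(cls) > 1 and len(cols) == 1:  # spans >= 2 classes, one color
            count += 1
    return count

def delta_swap(ri, bi, ri2, bi2, r):
    """Predicted change when a red vertex of class i and a blue vertex of
    class i' are swapped (the displayed binomial formula)."""
    return (comb(ri - 1, r) + comb(bi2 - 1, r)) - (comb(bi, r) + comb(ri2, r))

# r = 2 (3-uniform); swap a red vertex of class 1 with a blue vertex of class 2
before = [{'R': 4, 'B': 1}, {'R': 1, 'B': 4}, {'R': 5}]
after  = [{'R': 3, 'B': 2}, {'R': 2, 'B': 3}, {'R': 5}]
lhs = mono_edges(after, 2) - mono_edges(before, 2)
rhs = delta_swap(4, 1, 1, 4, 2)
print(lhs, rhs)  # both equal 6: this particular swapping is not successful
```

Here the formula correctly predicts an increase of $6$ monochromatic edges, so swapping in the opposite direction would be successful.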
\begin{lemma} If $\bigtriangleup_{i{i'}} M(H,c,\text{r},\text{b}) \leqslant 0$, then $\bigtriangleup_{i{i'}} M(H,c',\text{r},\text{b}) \leqslant 0$. \end{lemma}
The proof of this lemma is similar to that of Lemma 8. Lemma 10 means that if a swapping can be performed without increasing the number of monochromatic edges, then a further swapping in the same direction does not increase it either (provided there are still red and blue vertices to swap). The process of successful swappings terminates when the $i^{th}$ class has no red vertex or the ${i'}^{th}$ class has no blue vertex.
\begin{lemma}
If $\bigtriangleup_{i{i'}} M(H,c,\text{r},\text{b}) \geqslant 0$, then $\bigtriangleup_{i{i'}} M(H,c,\text{b},\text{r}) \leqslant 0$.
\end{lemma}
The proof of this lemma is similar to that of Lemma 9. However, in contrast to Lemma $9$, equality may hold here. Note that if $c$ contains two classes, the $i^{th}$ and the ${i'}^{th}$, both containing at least one red vertex and one blue vertex, then there are two possible directions of swapping:
\begin{enumerate}\item Swapping a red vertex of the $i^{th}$ class with a blue vertex of the ${i'}^{th}$ class.
\item Swapping a blue vertex of the $i^{th}$ class with a red vertex of the ${i'}^{th}$ class.
\end{enumerate}
By Lemma 11, one of the two directions can be taken without increasing the number of monochromatic edges. Moreover, by Lemma 10, we can continue swapping in the same direction, without increasing the number of monochromatic edges, until the $i^{th}$ class has no red vertex or the ${i'}^{th}$ class has no blue vertex. The same holds for swappings involving the other color combinations. Hence, in a coloring with the minimum number of monochromatic edges among colorings with fixed total numbers of red, blue and green vertices, any two classes have at most one color in common. We list all such forms in the following table.
\begin{center}
\begin{tabular}{ |P{3cm}|P{10cm}| }
\hline
Canonical forms & Descriptions \\
\hline
$F_1$& The first class contains three colors while the other classes are monochromatic.\\
\hline
$F_2$& The first class contains a pair of colors while the other classes are monochromatic.\\
\hline
$F_3$& The first and second classes contain different pairs of colors while the other classes are monochromatic.\\
\hline
$F_4$& The first, second and third classes contain different pairs of colors while the other classes are monochromatic.\\
\hline
$F_5$& All classes are monochromatic.\\
\hline
\end{tabular}
\end{center}
The first column lists the $5$ canonical forms and the second column describes the colors of the vertices in each class.
Suppose that $r_i \leqslant b_{i'}$. Let $\bigtriangleup_{i{i'}} M_T(H,c,r,b)$ be the change in the number of monochromatic edges if all red vertices in the $i^{th}$ class are recolored into blue and $r_i$ blue vertices in the ${i'}^{th}$ class are recolored into red. This process, called a \textit{total swapping}, results in a new coloring, say $c'$. We can compute $\bigtriangleup_{i{i'}} M_T(H,c,r,b)$ by summing the changes in the number of monochromatic edges over the individual swappings. Thus,
\begin{align*}
\bigtriangleup_{i{i'}} &M_T(H,c,r,b)= \sum_{k=0}^{r_i-1}\left[ {r_i-1-k \choose r} + {b_{i'}-1-k\choose r} \right] - \left[ {b_i+k\choose r} + {r_{i'}+k\choose r} \right]\\
&=\left[{0 \choose r}+{1 \choose r}+\cdots+{r_i-1 \choose r}\right]+\left[{b_{i'}-r_i \choose r}+{b_{i'}-r_i+1 \choose r}+\cdots+{b_{i'}-1 \choose r}\right] \\
&\;\;\;\;-\left[{b_i \choose r}+{b_i+1 \choose r}+\cdots+{b_i+r_i-1 \choose r}\right]-\left[{r_{i'} \choose r}+{r_{i'}+1 \choose r}+\cdots+{r_{i'}+r_i-1 \choose r}\right].
\end{align*}
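The telescoping step above (summing the single-swap changes and regrouping the binomial coefficients into four runs) is a purely formal identity, which can be checked numerically. The sketch below is an illustration only; the parameters $(r_i, b_i, r_{i'}, b_{i'}, r)$ are chosen arbitrarily with $r_i \leqslant b_{i'}$:

```python
from math import comb

def delta_total(ri, bi, ri2, bi2, r):
    """Sum of the single-swap changes over the r_i successive swaps."""
    return sum((comb(ri - 1 - k, r) + comb(bi2 - 1 - k, r))
               - (comb(bi + k, r) + comb(ri2 + k, r))
               for k in range(ri))

def delta_total_regrouped(ri, bi, ri2, bi2, r):
    """The regrouped form displayed above: four runs of binomial coefficients."""
    return (sum(comb(j, r) for j in range(ri))
            + sum(comb(j, r) for j in range(bi2 - ri, bi2))
            - sum(comb(j, r) for j in range(bi, bi + ri))
            - sum(comb(j, r) for j in range(ri2, ri2 + ri)))

# the two expressions agree for any admissible parameters
for params in [(3, 2, 1, 4, 2), (2, 3, 0, 5, 3), (4, 1, 2, 6, 2)]:
    assert delta_total(*params) == delta_total_regrouped(*params)
```

For instance, with $(r_i,b_i,r_{i'},b_{i'},r)=(3,2,1,4,2)$ both expressions evaluate to $-9$, a successful total swapping.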
A \textit{successful total swapping} is a total swapping that reduces the number of monochromatic edges.
\begin{lemma}
Suppose that $1 \leqslant r_i \leqslant b_{i'}$. Then, $\bigtriangleup_{i{i'}} M_T(H,c,r,b)\leqslant0$ if $c$ satisfies at least one of the following conditions:
\begin{enumerate}
\item $b_{i'} < r_i+b_i$.
\item $b_{i'} < r_i+r_{i'}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $ 1 \leqslant r_i \leqslant b_{i'}$. If $b_{i'} < r_i+b_i$, then \begin{align*}
\bigtriangleup_{i{i'}} &M_T(H,c,r,b)\\
&\leqslant\left[{0 \choose r}+{1 \choose r}+\cdots+{r_i-1 \choose r}\right]-\left[{r_{i'} \choose r}+{r_{i'}+1 \choose r}+\cdots+{r_{i'}+r_i-1 \choose r}\right]\leqslant 0.
\end{align*}The equality holds only when $b_i+r_i-1<r$ and $r_{i'}+r_i-1<r$, i.e., $b_i+r_i<r+1$ and $r_i+r_{i'}<r+1$. If $b_{i'} < r_i+r_{i'}$, then \begin{align*}
\bigtriangleup_{i{i'}} M_T(H,c,r,b)& \leqslant\left[{0 \choose r}+{1 \choose r}+\cdots+{r_i-1 \choose r}\right]-\left[{b_i \choose r}+{b_i+1 \choose r}+\cdots+{b_i+r_i-1 \choose r}\right]\\
&\leqslant 0.
\end{align*}The equality holds only when $b_i+r_i-1<r$ and $r_{i'}+r_i-1<r$, i.e., $b_i+r_i<r+1$ and $r_i+r_{i'}<r+1$.
\end{proof}
Note that if we consider a class with only two colors, say $r_i+b_i=n \geqslant r+1$, then $\bigtriangleup_{i{i'}} M_T(H,c,r,b)$ is strictly negative in both cases. We will use Lemma $12$ to extract more information from the canonical forms. From this point on, the term \textit{quadruple} refers to a collection of $4$ values giving the numbers of vertices of a pair of colors in a pair of classes, e.g., $(r_i,b_i,r_{i'},b_{i'})$. Consequently, in each coloring in any of the canonical forms, every quadruple contains at least one zero. Suppose that $c$ contains the $i^{th}$ and ${i'}^{th}$ classes such that they have a color in common but the $i^{th}$ class contains a color that the ${i'}^{th}$ class does not; say we focus on $(r_i\neq 0,b_i\neq 0,r_{i'} =0,b_{i'}\neq 0)$. If $c$ is a minimum coloring with $r_i+b_i=n$, then we can conclude by Lemma $12$ that
\begin{enumerate}
\item if $r_i \leqslant b_{i'}$, then $b_{i'} \geqslant r_i+b_i$ and $b_{i'} \geqslant r_i +r_{i'}=r_i$, and
\item if $r_i \geqslant b_{i'}$, then $r_i \geqslant b_i + b_{i'}$ and $r_i \geqslant r_{i'}+b_{i'}=b_{i'} $.
\end{enumerate} We will call these \textit{quadruple conditions}.
Next, we will rule out the form $F_4$. Suppose that $c$ is a coloring with the minimum number of monochromatic edges which is in the form $F_4$ such that, WLOG, the first class has $g_1 \neq 0$ green and $r_1 \neq 0$ red vertices, the second class has $g_2 \neq 0$ green and $b_2 \neq 0$ blue vertices, and the third class has $r_3 \neq 0$ red and $b_3 \neq 0$ blue vertices, where $g_1$ is the maximum among these values. We apply the quadruple conditions to the quadruple $(g_1,b_1=0,g_2,b_2)$. Since $g_1 \geqslant b_2$ and $g_2+b_2=n$, we get $n = g_2 + b_2 \leqslant g_1 = n - r_1 < n$, which is a contradiction. Hence, a minimum coloring cannot be in the form $F_4$, and this form is no longer of interest. Consequently, we will search for the minimum colorings only among $F_1, F_2, F_3$ and $F_5$.
We divide the argument into cases according to the remainder of the number $k$ of vertex classes modulo $3$. For simplicity, we call a hypergraph with $k\equiv i \ (\textrm{mod}\ 3)$ classes a \textit{type $i$} hypergraph, for $i=0,1,2$. We consider type 0 hypergraphs first.
\textit{Case 0}: $H$ is a type 0 hypergraph.
Since the number of classes is divisible by the number of colors, the result follows from Proposition 7.
\textit{Case 1}: $H$ is a type 1 hypergraph.
\textit{Case 1.1}: The coloring $c$ is in the form $F_3$.
In this case, our aim is to show that all colorings in the form $F_3$ have more monochromatic edges than some coloring in the form $F_2$. The coloring $c$ has two polychromatic classes; say the first class contains green and red vertices and the second class contains green and blue vertices, while the remaining classes are monochromatic. The number of vertices of each color and the number of monochromatic classes of each color are shown in the table below.
\begin{center}
\begin{tabular}{ |P{1.3cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}| }
\hline
\multicolumn{4}{|c|}{Polychromatic Classes} & \multicolumn{3}{|c|}{Monochromatic Classes} \\
\hline
Classes&Red vertices & Blue vertices&Green vertices &Red classes &Blue classes&Green classes\\
\hline
$1$&$r_1 \neq 0$& $0$&$g_1 \neq 0$&$k_r$ &$k_b$&$k_g$\\
$2$&$0$ &$b_2 \neq 0$&$g_2 \neq 0$&$ $& $ $& $ $\\
\hline
\end{tabular}
\end{center}
We will show that a coloring in which $k_r$, $k_b$ and $k_g$ are as equal as possible has fewer monochromatic edges than one in which they are not. WLOG, suppose that $c$ has $k_g$ green classes and $k_r$ red classes with $k_g - k_r \geqslant 2$. We recolor all vertices in one green class into red and obtain a new coloring $c'$. Thus,
\begin{align*}
M(H,c)&= { nk_r+ r_1 \choose r+1}+ { nk_b+ b_2 \choose r+1}+ { nk_g+ g_1+g_2 \choose r+1}\\
&\;\;\;\;-{r_1 \choose r+1}-{b_2 \choose r+1}-{g_1 \choose r+1}-{g_2 \choose r+1} -(k-2){n \choose r+1}
\end{align*}and
\begin{align*}
M(H,c')&= { n(k_r+1)+ r_1 \choose r+1}+ { nk_b+ b_2 \choose r+1}+ { n(k_g-1)+ g_1+g_2 \choose r+1}\\
&\;\;\;\;-{r_1 \choose r+1}-{b_2 \choose r+1}-{g_1 \choose r+1}-{g_2 \choose r+1} -(k-2){n \choose r+1}.
\end{align*}Then,
\begin{align*}
M(H,c')-M(H,c)&= \left[ { n(k_r+1)+ r_1 \choose r+1}+ { n(k_g-1)+ g_1+g_2 \choose r+1}\right]\\&\;\;\;\;-\left[{ nk_r+ r_1 \choose r+1}+ { nk_g+ g_1+g_2 \choose r+1}\right].
\end{align*} We will show that $M(H,c')-M(H,c) < 0$ by Proposition 4. We have $\left[n(k_r+1)+ r_1\right]+\left[n(k_g-1)+ g_1+g_2\right] = \left[nk_r+ r_1\right]+ \left[nk_g+ g_1+g_2\right]$, $nk_r+ r_1 < n(k_r+1)+ r_1 \leqslant n(k_g-1)+r_1 < nk_g+ g_1+g_2$ and $nk_r+ r_1 < n(k_r+1)+ g_1+g_2 \leqslant n(k_g-1)+g_1+g_2 < nk_g+ g_1+g_2$. Since $nk_g+g_1+g_2>r+1$, we have that $M(H,c')-M(H,c) < 0$. Note that we only use the fact that $r_1 < n$ and $g_1+g_2 >0$.
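The closed-form expression for $M(H,c)$ used above (for each color, all $(r+1)$-subsets of that color minus those lying entirely inside one class, which are not edges) can be cross-checked against direct enumeration on a small instance. The sketch below is our own illustration, with $r=2$ and $k=5$ classes of size $n=3$ in an $F_3$-type coloring:

```python
from itertools import combinations
from math import comb

def M_formula(classes, r):
    """M(H,c) via the closed formula: for each color, binomial of its total
    count minus the within-class binomials (within-class subsets are non-edges)."""
    colors = {col for cl in classes for col in cl}
    return sum(comb(sum(cl.get(col, 0) for cl in classes), r + 1)
               - sum(comb(cl.get(col, 0), r + 1) for cl in classes)
               for col in colors)

def M_enumerate(classes, r):
    """M(H,c) by enumerating all (r+1)-subsets of vertices directly."""
    verts = [(i, col) for i, cl in enumerate(classes)
             for col, m in cl.items() for _ in range(m)]
    return sum(1 for S in combinations(range(len(verts)), r + 1)
               if len({verts[v][0] for v in S}) > 1      # a genuine edge
               and len({verts[v][1] for v in S}) == 1)   # monochromatic

# F_3-type coloring: classes 1 and 2 polychromatic, the rest monochromatic
c = [{'G': 2, 'R': 1}, {'G': 1, 'B': 2},
     {'R': 3}, {'B': 3}, {'G': 3}]
print(M_formula(c, 2), M_enumerate(c, 2))  # both equal 31
```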
Since $(k-2)\equiv 2 \ (\textrm{mod}\ 3)$, colorings in which $k_r$, $k_b$ and $k_g$ are as equal as possible must satisfy one of the following conditions:
\begin{enumerate}
\item $k_r+1=k_b=k_g$,
\item $k_r=k_b+1=k_g$,
\item $k_r=k_b=k_g+1$.
\end{enumerate}
We will show that a coloring with the last condition has more monochromatic edges than a coloring with either of the first two conditions (with the same polychromatic classes). Suppose that $c$ is a coloring such that $k_r=k_b=k_g+1$. We consider the quadruple $(g_1,r_1,g_2,0)$ of $c$. Since $g_1+r_1=n$, if $g_2 \geqslant r_1$, then $g_2 \geqslant r_1+g_1$. This leads to a contradiction since $n = r_1+g_1 \leqslant g_2 = n-b_2 <n$. Hence, $g_2 < r_1$ and consequently $g_1+g_2 \leqslant r_1$ by the quadruple conditions. Next, we will increase the number of green classes. WLOG, we recolor all vertices in a red class into green and obtain a new coloring $c'$ satisfying condition (1). Then,
\begin{align*}
M(H,c)&= { nk_r+ r_1 \choose r+1}+ { nk_b+ b_2 \choose r+1}+ { nk_g+ g_1+g_2 \choose r+1}\\
&\;\;\;\;-{r_1 \choose r+1}-{b_2 \choose r+1}-{g_1 \choose r+1}-{g_2 \choose r+1} -(k-2){n \choose r+1}
\end{align*}and
\begin{align*}
M(H,c')&= { n(k_r-1)+ r_1 \choose r+1}+ { nk_b+ b_2 \choose r+1}+ { n(k_g+1)+ g_1+g_2 \choose r+1}\\
&\;\;\;\;-{r_1 \choose r+1}-{b_2 \choose r+1}-{g_1 \choose r+1}-{g_2 \choose r+1} -(k-2){n \choose r+1}.
\end{align*}Then,
\begin{align*}
M(H,c')-M(H,c)&= \left[ { n(k_r-1)+ r_1 \choose r+1}+ { n(k_g+1)+ g_1+g_2 \choose r+1}\right]\\&\;\;\;\;-\left[{ nk_r+ r_1 \choose r+1}+ { nk_g+ g_1+g_2 \choose r+1}\right].
\end{align*} We will show that $M(H,c')-M(H,c) < 0$ by Proposition $4$. We have $\left[n(k_r-1)+ r_1\right]+\left[n(k_g+1)+ g_1+g_2\right] =\left[ nk_r+ r_1\right]+ \left[nk_g+ g_1+g_2\right]$. Since $g_1+g_2 \leqslant r_1$, then $nk_g+ g_1+g_2 \leqslant nk_g +r_1 = n(k_r-1) +r_1 < nk_r+r_1$ and $nk_g+ g_1+g_2 < n(k_g+1)+ g_1+g_2 \leqslant n(k_g+1)+r_1 = nk_r+ r_1$. Since $nk_g+g_1+g_2>r+1$, we have that $M(H,c')-M(H,c) < 0$.
Finally, we will consider a coloring with condition (1) or (2). By symmetry, assume that $c$ is a coloring with $k_r+1=k_b=k_g$. We will recolor a green vertex in the first class of $c$ into red. Then,
\begin{align*}
\bigtriangleup_1M(H,c)
= \left[ {nk_r+r_1 \choose r}- {nk_g+g_1+g_2-1 \choose r} \right] + \left[{g_1-1 \choose r} -{r_1 \choose r} \right].
\end{align*} We have $nk_r+r_1 = n(k_g-1)+r_1 < nk_g < nk_g+g_1+g_2-1$ and $g_1-1<g_1+g_2 \leqslant r_1$. Since $nk_g+g_1+g_2-1>r$, we have that $\bigtriangleup_1M(H,c) < 0$. This holds for every value of $r_1 > 0$. Hence, we may recolor green vertices into red, one at a time, until the first class is a red class; the result is a coloring in the form $F_2$ with fewer monochromatic edges. Consequently, we conclude that $c$ has more monochromatic edges than some coloring in the form $F_2$.
\textit{Case 1.2}: The coloring $c$ is in the form $F_5$.
In this case, our aim is to show that all colorings in the form $F_5$ have more monochromatic edges than some coloring in the form $F_2$. All classes of $c$ are monochromatic, with $k_r$, $k_b$ and $k_g$ red, blue and green classes, respectively. As in \textit{Case 1.1}, we can show that a coloring in which $k_r$, $k_b$ and $k_g$ are as equal as possible has fewer monochromatic edges. Since $k\equiv 1 \ (\textrm{mod}\ 3)$, colorings in which $k_r$, $k_b$ and $k_g$ are as equal as possible must satisfy one of the following conditions:
\begin{enumerate}
\item $k_r-1=k_b=k_g$,
\item $k_r=k_b-1=k_g$,
\item $k_r=k_b=k_g-1$.
\end{enumerate}
By symmetry, suppose that $c$ is a coloring such that $k_r-1=k_b=k_g$. We will show that if a red vertex in a red class, say the first class, is recolored into green, the number of monochromatic edges will decrease. The new coloring is not in the form $F_5$ but in the form $F_2$ instead. Then,
\begin{align*}
\bigtriangleup_1M(H,c)
= \left[ {nk_g \choose r}+{n-1 \choose r} \right] - \left[ {nk_r-1 \choose r} +{0 \choose r} \right].
\end{align*} We will show that $\bigtriangleup_1M(H,c) < 0$ by Proposition $4$. We have $nk_g+ (n-1)= n(k_g+1) -1 = (nk_r-1)+0$ and $0 < n-1 < nk_g <nk_r-1$. Since $nk_r-1>r$, we have that $\bigtriangleup_1M(H,c) < 0$. Consequently, we can conclude that $c$ has more monochromatic edges than a coloring in the form $F_2$.
\textit{Case 1.3}: The coloring $c$ is in the form $F_2$.
In this case, our aim is to show that all colorings in the form $F_2$ have more monochromatic edges than some coloring in the form $F_1$. The coloring $c$ has one polychromatic class; WLOG, say the first class contains red and blue vertices, while the remaining classes are monochromatic. The number of vertices of each color and the number of monochromatic classes of each color are shown in the table below.
\begin{center}
\begin{tabular}{ |P{1.3cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}| }
\hline
\multicolumn{4}{|c|}{Polychromatic Classes} & \multicolumn{3}{|c|}{Monochromatic Classes} \\
\hline
Classes&Red vertices & Blue vertices&Green vertices &Red classes &Blue classes&Green classes\\
\hline
$1$&$r_1 \neq 0$& $b_1 \neq 0$&$0$&$k_r$ &$k_b$&$k_g$\\
\hline
\end{tabular}
\end{center} As in \textit{Case 1.1}, we can show that a coloring in which $k_r$, $k_b$ and $k_g$ are as equal as possible has fewer monochromatic edges. Hence, we focus on such colorings. Since $(k-1)\equiv 0 \ (\textrm{mod}\ 3)$, colorings in which $k_r$, $k_b$ and $k_g$ are as equal as possible must have $k_r=k_b=k_g$.
Suppose that $c$ is a coloring such that $k_r=k_b=k_g$. By symmetry, suppose that $r_1 > 1$. We will show that if a red vertex in the first class is recolored into green, the number of monochromatic edges will decrease. The new coloring is not in the form $F_2$ but in the form $F_1$ instead. Then,
\begin{align*}
\bigtriangleup_1M(H,c) &= \left[ {nk_g \choose r} - {0 \choose r}\right] - \left[ {nk_r+r_1-1 \choose r} - {r_1-1 \choose r}\right]\\
&= \left[ {nk_g \choose r}+{r_1-1 \choose r} \right] - \left[ {nk_r +r_1-1 \choose r} +{0 \choose r} \right].
\end{align*} We will show that $\bigtriangleup_1M(H,c) < 0$ by Proposition $4$. We have $nk_g+ (r_1-1)= (nk_r +r_1 -1)+0$ and $0 < r_1-1 < nk_g <nk_r+r_1-1$. Since $nk_r+r_1-1>r$, we have that $\bigtriangleup_1M(H,c) < 0$. Consequently, we can conclude that $c$ has more monochromatic edges than a coloring in the form $F_1$.
\textit{Case 1.4}: The coloring $c$ is in the form $F_1$.
In this case, our aim is to show that a coloring in the form $F_1$ in which the numbers of red, blue and green vertices are as equal as possible has the minimum number of monochromatic edges. The coloring $c$ has one polychromatic class; WLOG, say the first class contains red, blue and green vertices, while the remaining classes are monochromatic. The number of vertices of each color and the number of monochromatic classes of each color are shown in the table below.
\begin{center}
\begin{tabular}{ |P{1.3cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}|P{1.4cm}| }
\hline
\multicolumn{4}{|c|}{Polychromatic Classes} & \multicolumn{3}{|c|}{Monochromatic Classes} \\
\hline
Classes&Red vertices & Blue vertices&Green vertices &Red classes &Blue classes&Green classes\\
\hline
$1$&$r_1 \neq 0$& $b_1 \neq 0$&$g_1 \neq 0$&$k_r$ &$k_b$&$k_g$\\
\hline
\end{tabular}
\end{center} As in \textit{Case 1.1}, we can show that a coloring in which $k_r$, $k_b$ and $k_g$ are as equal as possible has fewer monochromatic edges. Since $(k-1)\equiv 0 \ (\textrm{mod}\ 3)$, such colorings must have $k_r=k_b=k_g$.
Suppose that $c$ is a coloring such that $k_r=k_b=k_g$. We will show that if we recolor the vertices so that $r_1, b_1$ and $g_1$ are as equal as possible, then the number of monochromatic edges decreases. WLOG, assume that $r_1-g_1 \geqslant 2$. We will show that if a red vertex in the first class is recolored into green, the number of monochromatic edges decreases. Then,
\begin{align*}
\bigtriangleup_1M(H,c) &= \left[ {nk_g +g_1 \choose r} - {g_1 \choose r}\right] - \left[ {nk_r+r_1-1 \choose r} - {r_1-1 \choose r}\right]\\
&= \left[ {nk_g +g_1 \choose r}+{r_1-1 \choose r} \right] - \left[ {nk_r +r_1-1 \choose r} +{g_1 \choose r} \right].
\end{align*} We will show that $\bigtriangleup_1M(H,c) < 0$ by Proposition $4$. We have $(nk_g+g_1)+ (r_1-1)= (nk_r +r_1 -1)+g_1$ and $g_1 < r_1-1 < nk_g+g_1 <nk_r+r_1-1$. Since $nk_r+r_1-1>r$, we have that $\bigtriangleup_1M(H,c) < 0$. Consequently, we can conclude that $c$ is not a minimum coloring.
To sum up, we have proved that all colorings in the forms $F_2, F_3$ and $F_5$ have strictly more monochromatic edges than some coloring in the form $F_1$. Moreover, a coloring in the form $F_1$ in which the numbers of red, blue and green vertices in the first class are not as equal as possible has strictly more monochromatic edges than the coloring $c^*$ in which the numbers of red, blue and green classes are as equal as possible and the numbers of red, blue and green vertices in the first class are as equal as possible. Hence, $c^*$ is the minimum coloring, and it is unique up to a permutation of colors and classes.
\end{proof}
\section{Concluding remarks}
\label{sec:conclude}
In this paper, we considered $2$-colorings of balanced complete $k$-partite $r$-uniform hypergraphs and determined which of them has the minimum number of monochromatic edges. The proof may give a clue for a further generalization to unbalanced hypergraphs with arbitrary class sizes. We observed that the minimum coloring can only be in certain forms, which we called canonical forms of the colorings. We studied the canonical forms of $2$-colorings of unbalanced complete tripartite $3$-uniform hypergraphs. Finally, we determined the extremal $3$-coloring of balanced complete $k$-partite $r$-uniform hypergraphs when $k \equiv 0,1 \mod{3}$.
In Theorem \ref{thm:2}, we determined the minimum coloring for unbalanced $3$-uniform hypergraphs. In the proof, almost all comparisons are carried out without expanding the binomial coefficient terms, and they also hold if one tries to generalize the proof to $r$-uniform hypergraphs with arbitrary $r \leqslant n_1$. However, in \textit{Case 5B}, the comparison cannot be done without expanding those binomial coefficients, which we do with $r+1=3$ as the lower index. Hence, the proof works only for $3$-uniform hypergraphs. We believe that the minimum coloring for $r$-uniform hypergraphs with arbitrary $r \leqslant n_1$ differs from those in Theorem \ref{thm:2}.
Another generalized case is unbalanced complete hypergraphs with several vertex classes ($k>3$). The problem seems to be much more complicated because Theorem \ref{thm:2} demonstrates that the extremal coloring varies depending on the relationship among the sizes of the vertex classes.
\begin{problem}
What is the minimum $2$-coloring of an unbalanced complete $k$-partite $r$-uniform hypergraph?
\end{problem}
We have determined the extremal $3$-coloring of balanced complete $k$-partite $r$-uniform hypergraphs only for $k\equiv 0,1 \mod{3}$.
\begin{problem}
What is the minimum $3$-coloring of a balanced complete $k$-partite $r$-uniform hypergraph where $k\equiv 2 \mod{3}$?
\end{problem}
For $k \equiv 2 \mod{3}$, if we apply the same ideas as in the proof of Theorem \ref{thm:3}, we can conclude that the minimum coloring is in the form $F_3$ instead of $F_1$. However, the comparisons between colorings in the form $F_3$ are rather challenging, and we believe that they require further comparison tools. One might expect the minimum coloring to have approximately $\frac{\abs{V(H)}}{3}$ vertices of each color, as in Theorem $4$. However, this is not the case, for example, when the number of vertices in each class is much greater than $k$.
For $m>3$, we have studied only some trivial cases of $m$-colorings of the hypergraphs in Section $2.6$.
\begin{problem}
What is the minimum $m$-coloring of a balanced complete $k$-partite $r$-uniform hypergraph ?
\end{problem}
This generalizes Problem $15$ and is extremely complex, as we believe that the minimum coloring varies depending on the relationship between the number of colors and the number of classes. However, the proof of Theorem \ref{thm:3} might be useful for determining the minimum $m$-coloring of balanced complete $k$-partite $r$-uniform hypergraphs with $k\equiv 1 \mod{m}$, and we speculate that it would be similar to the coloring in the form $F_1$.
Finally, there is another natural definition of $k$-partite hypergraphs where each edge is an $r$-subset containing vertices from different classes.
\begin{problem}
With the above definition of $k$-partite hypergraphs, what is the minimum coloring of a balanced complete $k$-partite $r$-uniform hypergraph?
\end{problem}
| {
"timestamp": "2021-07-21T02:18:35",
"yymm": "2107",
"arxiv_id": "2107.09450",
"language": "en",
"url": "https://arxiv.org/abs/2107.09450",
"abstract": "Consider the following problem. In a school with three classes containing $n$ students each, given that their genders are unknown, find the minimum possible number of triples of same-gender students not all of which are from the same class. Muaengwaeng asked this question and conjectured that the minimum scenario occurs when the classes are all boy, all girl and half-and-half. In this paper, we solve many generalizations of the problem including when the school has more than three classes, when triples are replaced by groups of larger sizes, when the classes are of different sizes, and when gender is replaced by other non-binary attributes.",
"subjects": "Combinatorics (math.CO)",
"title": "Monochromatic Edges in Complete Multipartite Hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587257892506,
"lm_q2_score": 0.8104789086703225,
"lm_q1q2_score": 0.8005576141072601
} |
https://arxiv.org/abs/1807.01492 | The minimal $k$-dispersion of point sets in high-dimensions | In this manuscript we introduce and study an extended version of the minimal dispersion of point sets, which has recently attracted considerable attention. Given a set $\mathscr P_n=\{x_1,\dots,x_n\}\subset [0,1]^d$ and $k\in\{0,1,\dots,n\}$, we define the $k$-dispersion to be the volume of the largest box amidst a point set containing at most $k$ points. The minimal $k$-dispersion is then given by the infimum over all possible point sets of cardinality $n$. We provide both upper and lower bounds for the minimal $k$-dispersion that coincide with the known bounds for the classical minimal dispersion for a surprisingly large range of $k$'s. | \section{Introduction and main results}
A classical problem in computational geometry and complexity asks for the size of the largest empty and axis-parallel box given a point configuration in the cube $[0,1]^2$. From a complexity point of view this \emph{maximum empty rectangle problem} was already studied by Naamad, Lee, and Hsu \cite{NLH1984} who provided an $\mathcal O(n^2)$-time algorithm as well as an $\mathcal O(n\log^2(n))$-expected-time algorithm (when the points are drawn independently and uniformly at random) to actually find such a rectangle. Further results in this direction have been obtained by Chazelle, Drysdale, and Lee in \cite{CDL1986}.
In the last decade the study of high-dimensional (geometric) structures has become increasingly important and it has been realized by now that the presence of high dimensions forces a certain regularity in the geometry of the space while, on the other hand, it unfolds various rather unexpected phenomena. The maximum empty rectangle problem studied in \cite{NLH1984} has its natural multivariate counterpart. Let $d,n\in\ensuremath{{\mathbb N}}$ and denote by $\mathscr B^d_{\text{ax}}$ the collection of axis-parallel boxes in $[0,1]^d$. Then the ($n$-th) minimal dispersion is defined to be the quantity
\begin{equation}\label{eq:min:1}
\disp^*(n,d) := \inf_{\mathscr P\subset[0,1]^d\atop{\#\mathscr P=n}}\,\sup_{\mathbb B\in\mathscr B_{\text{ax}}\atop{\mathbb B\cap \mathscr P=\emptyset}} \mathrm{vol}_d(\mathbb B),
\end{equation}
where $\#$ denotes the cardinality of a set and $\mathrm{vol}_d(\cdot)$ the $d$-dimensional Lebesgue measure. In other words, the minimal dispersion depicts the size of the largest empty, axis-parallel box amidst \emph{any} point set of cardinality $n$ in the $d$-dimensional cube. The interest in this notion is in parts motivated by applications in approximation theory, more precisely, in problems concerning the approximation of high-dimensional rank one tensors \cite{BDDG2014,NR2016} and in Marcinkiewicz-type discretizations of the uniform norm of multivariate trigonometric polynomials \cite{T2018}. Even though the minimal dispersion is conceptually quite simple and the problem of determining (or estimating) its order attracted considerable attention in the past $3$ years (see, e.g., \cite{AHR17, DJ2013, K2018, R2018,S2018,U2018, UV2018}), its behavior, simultaneously in the number of points $n$ and in the dimension $d$, is still not completely understood.
Let us briefly describe the current state of the art. Aistleitner, Hinrichs, and Rudolf proved in \cite[Theorem 1]{AHR17} that, for all $d,n\in\ensuremath{{\mathbb N}}$,
\[
\disp^*(n,d) \geq \frac{\log_2(d)}{4(n+\log_2(d))}\,.
\]
In particular, this shows that the volume of the largest empty box increases with the dimension $d$. In the same paper, the authors communicate an upper bound, which is attributed to Larcher (see \cite[Section 4]{AHR17}) and shows that
\[
\disp^*(n,d) \leq \frac{2^{7d+1}}{n}\,.
\]
This improves upon a bound of Rote and Tichy \cite[Proposition 3.1]{RT1996} when $d\geq 54$. Under the assumption that the number of points satisfies $n> 2d$, it was recently proved by Rudolf \cite[Corollary 1]{R2018} that
\[
\disp^*(n,d) \leq \frac{4d}{n} \log_2\Big(\frac{9n}{d}\Big)\,.
\]
In contrast to these probabilistic methods, several authors provided explicit constructions of point sets with small dispersion and a small number of points.
For example, Krieg \cite[Theorem]{K2018} shows that for $\varepsilon\in(0,1)$ and $d\ge 2$ there is a sparse grid with the number of points
bounded by $(2d)^{\log_2(1/\varepsilon)}$ and dispersion at most $\varepsilon$.
Temlyakov proved in \cite[Theorem 2.1 and Theorem 4.1]{T2018'} that Fibonacci and
Frolov point sets achieve the dispersion of optimal order $n^{-1}$,
but without paying extra attention to the dependence on $d$,
see also~\cite{U2018b}.
An essential breakthrough was achieved by Sosnovec in \cite[Theorem 2]{S2018}. He provided a randomized construction of a set with at most
$c_\varepsilon \log_2(d)$ points in $[0,1]^d$ with dispersion at most $\varepsilon\in(0,1/4]$. Here, $c_\varepsilon\in(0,\infty)$
is a quantity depending only on $\varepsilon.$ The $\varepsilon$-dependence was then refined by the third and fourth author in \cite[Theorem 1]{UV2018}. More precisely, for every $\varepsilon\in (0,1/2)$ and $d\ge 2$, they provided a randomized construction of a point set $\mathscr P$ with
$$
\#{\mathscr P}\le 2^7\frac{(1+\log_2(\varepsilon^{-1}))^2}{\varepsilon^2}\log_2 (d)
$$
and dispersion at most $\varepsilon$. This result can be also reformulated as
\begin{equation}\label{eq:UV}
\disp^*(n,d)\le c\log_2(n)\sqrt{\frac{\log_2(d)}{n}}
\end{equation}
for $n,d\ge 2$ and some absolute constant $c\in(0,\infty)$.
In this manuscript, we generalize the notion of dispersion by introducing a quantity, which measures the size of the largest box amidst a point
set containing \emph{at most} $k$ points. The minimal dispersion \eqref{eq:min:1} then corresponds to the case $k=0$.
For $d,n\in\ensuremath{{\mathbb N}}$ and $k\in\ensuremath{{\mathbb N}}\cup \{0\}$, we define the $k$-dispersion of a point set
$\mathscr P_n=\{x_1,\dots,x_n\}\subset [0,1]^d$ to be the quantity
\[
k\!\operatorname{-disp}(\mathscr P_n,d) := \sup\Big\{\mathrm{vol}_d(\mathbb B)\,\colon\, \mathbb B\in \mathscr B_{\text ax}^d \text{ with }\# (\mathscr P_n\cap \mathbb B)\leq k\Big\},
\]
where $\mathscr B_{\text ax}^d$ is the collection of all axis-parallel boxes inside the $d$-dimensional cube $[0,1]^d$.
The minimal $k$-dispersion
is defined to be the infimum
over all possible point sets of cardinality $n$, i.e.,
\[
k\!\operatorname{-disp}^*(n,d) := \inf_{\substack{\mathscr P_n\subset [0,1]^d\\ \#\mathscr P_n=n}} k\!\operatorname{-disp}(\mathscr P_n,d).
\]
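For small planar instances, the $k$-dispersion defined above can be computed by brute force: a maximal open box can be pushed outward until each side is supported by a point coordinate or by the boundary of the cube, so it suffices to scan boxes whose sides come from this finite candidate set. The following Python sketch (our own illustration for $d=2$, assuming open boxes) computes the $k$-dispersion of a small point set:

```python
from itertools import combinations

def k_dispersion_2d(points, k):
    """Brute-force k-dispersion of a planar point set in [0,1]^2: the volume
    of the largest open axis-parallel box containing at most k points.
    Candidate box sides come from point coordinates and the cube boundary."""
    xs = sorted({0.0, 1.0, *(p[0] for p in points)})
    ys = sorted({0.0, 1.0, *(p[1] for p in points)})
    best = 0.0
    for a, b in combinations(xs, 2):          # a < b, since xs is sorted
        for c, d in combinations(ys, 2):      # c < d
            inside = sum(1 for (x, y) in points
                         if a < x < b and c < y < d)
            if inside <= k:
                best = max(best, (b - a) * (d - c))
    return best

pts = [(1/3, 1/3), (2/3, 2/3)]
print(k_dispersion_2d(pts, 0))  # about 4/9, e.g. the box (1/3,1) x (0,2/3)
print(k_dispersion_2d(pts, 1))  # about 2/3: a box may now contain one point
```

As expected, allowing a single point inside the box ($k=1$) strictly increases the attainable volume.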
Our first main result provides an upper bound on the minimal $k$-dispersion.
\begin{thmalpha}\label{thm:main 1}
There exists a constant $C\in(0,\infty)$ such that for any $d\geq 2$ and all $k,n\in\ensuremath{{\mathbb N}}$ with $k<n/2$, we have
\begin{equation}\label{eq:dispk}
k\!\operatorname{-disp}^*(n,d) \,\le\, C\,\max\left\{\log_2 (n)\sqrt{\frac{\log_2 (d)}{n}},\,
k\,\frac{\log_2(n/k)}{n}\right\}.
\end{equation}
\end{thmalpha}
\begin{remark} The minimal $k$-dispersion is easily seen to be non-decreasing in $k$. On the other hand,
a comparison of \eqref{eq:UV} and \eqref{eq:dispk} reveals that the upper bound of minimal dispersion and minimal $k$-dispersion
are of the same order for a large range of $k$'s. Indeed, if $k=k(n,d)$ increases with the number of points $n$ and the dimension $d$ while satisfying
\[
k(n,d) \leq c \sqrt{n\cdot\log_2(d)}\,,
\]
for some absolute constant $c\in(0,\infty)$, then \eqref{eq:UV} and \eqref{eq:dispk} provide the same order in $n$ and $d$. Motivated by this result, we conjecture that \eqref{eq:UV} actually offers a lot of space for improvement.
\end{remark}
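The threshold in the remark above can be illustrated numerically: up to the absolute constants, for $k$ of order $\sqrt{n\log_2(d)}$ the first term of the bound in Theorem \ref{thm:main 1} still dominates the second, so the bound matches \eqref{eq:UV}. The parameter values in the sketch below are arbitrary and serve only as an illustration:

```python
from math import log2, sqrt

def term1(n, d):
    """First term of the upper bound: log2(n) * sqrt(log2(d)/n)."""
    return log2(n) * sqrt(log2(d) / n)

def term2(n, k):
    """Second term of the upper bound: k * log2(n/k) / n."""
    return k * log2(n / k) / n

n, d = 10**6, 2**10
k = int(sqrt(n * log2(d)))         # k ~ sqrt(n log2 d), the claimed threshold
print(term1(n, d) >= term2(n, k))  # True: the sqrt term still dominates
```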
The second result establishes the following lower bound on the minimal $k$-dispersion.
\begin{thmalpha}\label{thm:main 2}
Let $k,n,d\in\ensuremath{{\mathbb N}}$. Then
\[
k\!\operatorname{-disp}^*(n,d) \,\ge\, \frac18\,\min\left\{1, \frac{k+\log_2(d)}{n}\right\}.
\]
\end{thmalpha}
\section{Proof of Theorem \ref{thm:main 1} -- the upper bound}\label{sec:thm1-multi}
We now present the proof of the upper bound on the minimal $k$-dispersion.
The proof adapts the ideas developed in \cite{S2018} and \cite{UV2018} to our setting.
\subsection{The idea of proof}
Before we start, let us briefly discuss the strategy of the proof. For every $\varepsilon\in(0,1/4)$,
we construct a set ${\mathbb X}=\{x^1,\dots,x^n\}\subset[0,1]^d$ with a small number of elements and small $k$-dispersion.
This set is constructed (similarly to \cite{S2018} and \cite{UV2018}) by random sampling from a discrete mesh in $[0,1]^d$.
Since the sampling steps are independent, it might happen that some points of the mesh are actually
sampled more than once. Such points were naturally discarded in \cite{UV2018}, because they could not influence whether or not a box ${\mathbb B}\subset[0,1]^d$
intersects ${\mathbb X}$.
Here, we need to keep more detailed track of the intersection of boxes with ${\mathbb X}$. Therefore, we allow for repeated sampling,
and ${\mathbb X}$ will actually be a multiset. In Section \ref{subsec:multiset}, we argue in detail that this modification does not alter
the minimal $k$-dispersion. It is then not difficult to see that each box of large enough volume indeed contains more than $k$ points
from the multiset ${\mathbb X}$ with high probability. Unfortunately, this is still not enough to apply the union bound, as there are infinitely many
boxes with large volume. We therefore divide them into finitely many groups (called $\Omega_{m}(s,p)$ later on) and show that even the intersection
of all boxes contained in a group $\Omega_m(s,p)$ still contains more than $k$ points from ${\mathbb X}$ with high probability. Finally, we apply the union
bound over all admissible parameter pairs $(s,p)$.
\subsection{A random multiset}\label{subsec:multiset}
Let $\varepsilon\in (0,1/4)$ be fixed and put $m=\lceil\log_2(1/\varepsilon)\rceil$.
We define a one-dimensional set via
\[
\mathbb M_m:=\left\{\frac{1}{2^{m}},\dots,\frac{2^{m}-1}{2^{m}}\right\}\subset [0,1].
\]
For $n\in\ensuremath{{\mathbb N}}$, we construct a random multiset
$\mathbb X=\{x^1,\dots,x^n\}$ by sampling independently and uniformly at random from
the points in $\mathbb M_m^d$, where $n$ will later be the number of points in Theorem \ref{thm:main 1}.
Note that a multiset $\mathbb X$ in the cube $[0,1]^d$ is naturally identified
with the \emph{multiplicity function} $\mathbb X\colon [0,1]^d\to \ensuremath{{\mathbb N}}_0$,
where $\mathbb X(z)$ gives the multiplicity of $z\in [0,1]^d$ in $\mathbb X$.
For ${\mathbb B}\in {\mathscr B}_{\text ax}^d$, we define
\[
\#\bigl(\mathbb X \cap {\mathbb B}\bigr) \,:=\, \sum_{z\in {\mathbb B}} \mathbb X(z)\quad\text{and}\quad
\#{\mathbb X} \,:=\, \sum_{z\in[0,1]^d} \mathbb X(z)\,,
\]
and the $k$-dispersion of the multiset ${\mathbb X}$ as
$$
k\!\operatorname{-disp}_m({\mathbb X},d):= \sup\Big\{\mathrm{vol}_d(\mathbb B)\,\colon\, \mathbb B\in \mathscr B_{\text ax}^d \text{ with }\# ({\mathbb X}\cap \mathbb B)\leq k\Big\}.
$$
Finally, we take the infimum over all possible multisets of cardinality $n$ and obtain
\[
k\!\operatorname{-disp}_m^*(n,d) := \inf_{\substack{{\mathbb X}\subset [0,1]^d\\ \#{\mathbb X}=n}} k\!\operatorname{-disp}_m({\mathbb X},d).
\]
As each classical point set ${\mathscr P}\subset[0,1]^d$ is also a multiset (with multiplicity function bounded by one), we immediately obtain that
\[
k\!\operatorname{-disp}_m^*(n,d)\le k\!\operatorname{-disp}^*(n,d).
\]
On the other hand, if ${\mathbb X}=\{x^1,\dots,x^n\}\subset [0,1]^d$ is a multiset,
then we consider the sets $\{x^1+\xi^1,\dots,x^n+\xi^n\}$ with $\|\xi^j\|_\infty\le \delta$ (for $\delta\in(0,\infty)$), where $\xi^1,\dots,\xi^n$ are independent random vectors which are uniformly distributed over $[-\delta,\delta]^d$. If we then let $\delta\to 0$, it follows that $k\!\operatorname{-disp}^*(n,d)\le k\!\operatorname{-disp}_m^*(n,d)$.
\subsection{The partitioning scheme}\label{subsec:partition}
We now introduce a set $\Omega_m$ containing all those boxes $\mathbb B$ with
`large' volume. For $m\in\ensuremath{{\mathbb N}}$, we define
\[
\Omega_m:=\Big\{\mathbb B=I_1\times\dots\times I_d\subset[0,1]^d\,\colon\,\mathrm{vol}_d(\mathbb B)>\frac{1}{2^m}\Big\}\,.
\]
As already described before, our approach will later be based on a union bound over all the boxes $\mathbb B\in\Omega_m$. As there are infinitely many of those boxes, we first divide $\Omega_m$ into finitely many `suitable' subsets.
This is done as follows: for $s=(s_1,\dots,s_d)\in\{0,1,\dots,2^{m}-1\}^d$ and
$p=(p_1,\dots,p_d)\in\{1/2^{m},\dots,1-1/2^{m}\}^d$, we define the collection
$\Omega_m(s,p)$ of subsets of $\Omega_m$ to be
\begin{align*}
\Omega_m(s,p)&:=\bigg\{\mathbb B=I_1\times\dots\times I_d\in\Omega_m \,\colon\,
\forall \ell\in\{1,\dots,d\}: \frac{s_\ell}{2^{m}}<\mathrm{vol}(I_\ell) \le \frac{s_\ell+1}{2^{m}}\\
&\qquad\qquad \text{and}\quad\inf I_\ell\in \Big[p_\ell-\frac{1}{2^{m}},p_\ell\Big)\bigg\}.
\end{align*}
We first observe that $\Omega_m(s,p)=\emptyset$ if the choice of $s$ does not
allow $\Omega_m(s,p)$ to contain any box $\mathbb{B}$ with $\mathrm{vol}_d(\mathbb B)>2^{-m}$.
This holds, e.g., if $s_{\ell_0}=0$ for some $\ell_0\in\{1,\dots,d\}$.
We define the index set
\[
\mathbb I_m \,:=\, \Bigl\{(s,p)\,\colon\, \Omega_m(s,p)\neq\emptyset\Bigr\},
\]
which contains those indices $(s,p)$ that are needed for the following considerations,
and we bound its cardinality. Let
\[
A_m(s) := \# \Big\{\ell\in\{1,\dots,d\}\,\colon\, s_\ell < 2^m-1 \Big\}
\]
and observe that, by definition, any $\mathbb B\in\Omega_m(s,p)$ must satisfy
$$
\frac{1}{2^m} \,<\, \mathrm{vol}_d(\mathbb B)
\,\le\, \prod_{\ell=1}^d\frac{s_\ell+1}{2^{m}}
\,\le\, \Bigl(1-\frac{1}{2^{m}}\Bigr)^{A_m(s)}.
$$
This is a contradiction if $A_m(s)>\log(2)\, m 2^m$.
Therefore, $\Omega_m(s,p)\neq\emptyset$ implies that
\begin{equation*}
A_m(s) \,\le\, \min\Bigl\{\lfloor\log(2)\, m 2^m\rfloor,\, d\Bigr\} \,=:\, A_m,
\end{equation*}
i.e., there are at most $A_m$ choices of $\ell$ with $s_\ell<2^m-1$.
Clearly, there are at most $\binom{d}{A_m} 2^{m A_m}$
choices for $s\in\{0,1,\dots,2^{m}-1\}^d$ with $A_m(s)\le A_m$.
Moreover, for given $s$,
there are at most $2^{m A_m(s)}$ choices for $p$ with
$\Omega_m(s,p)\neq\emptyset$.
This follows from the fact that for each $\ell\in\{1,\dots,d\}$,
we have at most $2^m-1$ choices for $p_\ell$ (by definition) and,
if $s_{\ell_0}=2^m-1$ for some $\ell_0\in\{1,\dots,d\}$,
then we have $\Omega_m(s,p)=\emptyset$ unless $p_{\ell_0}=2^{-m}$.
For other $p_{\ell_0}$ the boxes cannot be contained in the unit cube.
For $m$ such that $A_m<d$, we obtain
\begin{equation*}
\begin{split}
\#\mathbb I_m \,&<\, \binom{d}{A_m} \, 2^{2m A_m}
\,<\, \biggl(\frac{ed}{A_m}\cdot 2^{2m}\biggr)^{A_m} \\
&<\, \biggl(\frac{4d2^{m+1}}{m}\biggr)^{A_m}
\,\le\, \exp\Bigl(m2^{m}\log(2^{m+3}d)\Bigr),
\end{split}
\end{equation*}
where we used that $\log(2)m 2^{m-1}< A_m\le \log(2) m 2^{m}$ for $m\in\ensuremath{{\mathbb N}}$ and
$e/\log(2)< 4$.
On the other hand, if $A_m=d$, i.e., if $d\le\log(2)m2^m$, then we obtain
$$
\#\mathbb I_m\,\le\, 2^{md}\cdot 2^{md} \,=\, \exp\big(2\log(2)\,md\big)
\,\le\, \exp\Bigl(m2^{m}\log(2^{m+3}d)\Bigr),
$$
where the last inequality follows from $2\log(2)\,d\le 2^m\log(2^{m+3}d)$, which holds for all $d\le \log(2)m2^m$.
Therefore, for arbitrary $m\in\ensuremath{{\mathbb N}}$, we obtain
\begin{equation}\label{eq:card}
\#\mathbb I_m \,\le\, \exp\Bigl(m2^{m}\log(2^{m+3}d)\Bigr).
\end{equation}
\medskip
\subsection{The proof}
We shall now present the proof of the upper bound on the minimal
$k$-dispersion. We do this by proving that our random multiset has small
$k$-dispersion with positive probability, which proves the existence of a
`good' multiset.
The following result is from \cite[Lemma 3]{UV2018}. Note that
it is stated there in a different way, but (the end of) its proof clearly shows
this variant.
For $(s,p)\in\mathbb I_m$, let
\[
\mathbb B_m(s,p) \,:=\, \bigcap_{\mathbb B\in \Omega_m(s,p)} \mathbb B
\,=\, \prod_{\ell=1}^d \Bigl[p_\ell,p_\ell+\frac{s_\ell-1}{2^{m}}\Bigr].
\]
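The product formula for $\mathbb B_m(s,p)$ can be sanity-checked coordinatewise: every interval $I$ with $\inf I\in[p_\ell-2^{-m},p_\ell)$ and $\mathrm{vol}(I)\in(s_\ell 2^{-m},(s_\ell+1)2^{-m}]$ contains $[p_\ell,\,p_\ell+(s_\ell-1)2^{-m}]$. A randomized check in one coordinate (the parameter values are our choice):

```python
import random

def contains(I, J):
    """Does the closed interval I contain the closed interval J?"""
    return I[0] <= J[0] and J[1] <= I[1]

random.seed(0)
m, s, p = 3, 5, 0.25              # one coordinate of (s, p); here 2^-m = 1/8
core = (p, p + (s - 1) / 2**m)    # the interval [p_l, p_l + (s_l - 1)/2^m]
for _ in range(10_000):
    lo = p - 2**-m + random.random() * 2**-m   # inf I in [p - 2^-m, p)
    length = (s + random.random()) / 2**m      # vol(I) in [s/2^m, (s+1)/2^m)
    assert contains((lo, lo + length), core)
```

Taking products over the $d$ coordinates gives exactly the box $\mathbb B_m(s,p)$ displayed above.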
\begin{lem}\label{lem:discrete}
Let $m\in\ensuremath{{\mathbb N}}$, $(s,p)\in\mathbb I_m$ and
$z$ be uniformly distributed in $\mathbb M_m^d$. Then
\begin{equation*}
\ensuremath{{\mathbb P}}\big(z\in \mathbb B_m(s,p)\big)\ge \frac{1}{2 ^{m+4}}\,.
\end{equation*}
\end{lem}
\medskip
For the random multiset as constructed in Section~\ref{subsec:multiset}, we now estimate the probability that the number of points in
$\mathbb X\cap \mathbb B_m(s,p)$ does not exceed $k\in\ensuremath{{\mathbb N}}$, where $k<\frac{n}{2}$ (for the case $k=0$ see~\cite{UV2018}).
Let us consider two cases.
\vskip 1mm
\emph{Case 1:} Assume that $\ensuremath{{\mathbb P}}\big(x^1\in \mathbb B_m(s,p)\big)\le 1/2$.
Then we use Lemma~\ref{lem:discrete} and obtain the estimate
\[\begin{split}
\ensuremath{{\mathbb P}}\big(\#(\mathbb X & \cap \mathbb B_m(s,p))\le k\big)
\,=\, \sum_{\ell=0}^k {n\choose \ell} \ensuremath{{\mathbb P}}\big(x^1\in \mathbb B_m(s,p)\big)^{\ell}\ \ensuremath{{\mathbb P}}\big(x^1\not\in \mathbb B_m(s,p)\big)^{n-\ell}\\
&\le\, (k+1){n\choose k}\ensuremath{{\mathbb P}}\big(x^1\not\in\mathbb B_m(s,p)\big)^{n}
\,\le\, 2k \,\frac{n^k}{k!}\,\Bigl(1-\ensuremath{{\mathbb P}}\big(x^1\in\mathbb B_m(s,p)\big)\Bigr)^n\\
&\le\, \frac{2n^k}{(k-1)!}\,\Bigl(1-\frac{1}{2^{m+4}}\Bigr)^n
\,\le\, \frac{2n^k}{(k-1)!}\, \exp\Big(-\frac{n}{2^{m+4}}\Big).
\end{split}\]
\vskip 1mm
\emph{Case 2:} Assume $\ensuremath{{\mathbb P}}(x^1\in\mathbb B_m(s,p))>1/2$. Then
\begin{align*}
\ensuremath{{\mathbb P}}(\#(\mathbb X & \cap \mathbb B_m(s,p))\le k)
\,=\, \sum_{\ell=0}^k {n\choose \ell} \ensuremath{{\mathbb P}}\big(x^1\in \mathbb B_m(s,p)\big)^{\ell}\ \ensuremath{{\mathbb P}}\big(x^1\not\in\mathbb B_m(s,p)\big)^{n-\ell}\\
&\le\, (k+1){n\choose k} \frac{1}{2^{n-k}}
\,\le\, \frac{2n^k}{(k-1)!\, 2^{n-k}}
\,\le\, \frac{2n^k}{(k-1)!}\exp\Big(-\frac{n}{2^{m+4}}\Big),
\end{align*}
where the last inequality follows from the fact that
\[
(n-k)\log(2) > \frac{n}{2}\log(2)\ge \frac{n}{2^{4}}\ge \frac{n}{2^{m+4}}.
\]
\vskip 2mm
Putting both cases together, we see that
\begin{align}\label{ineq:prob}
\ensuremath{{\mathbb P}}\big(\#(\mathbb X\cap \mathbb B_m(s,p))\le k\big)
\,\le\, \frac{2n^k}{(k-1)!}\exp\Big(-\frac{n}{2^{m+4}}\Big).
\end{align}
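The estimate \eqref{ineq:prob} can be checked numerically against the exact binomial tail: the count $\#(\mathbb X\cap \mathbb B_m(s,p))$ is binomially distributed with parameters $n$ and $P=\ensuremath{{\mathbb P}}\big(x^1\in\mathbb B_m(s,p)\big)\ge 2^{-m-4}$ by Lemma~\ref{lem:discrete}. A minimal check at the boundary case $P=2^{-m-4}$ (the parameter values are arbitrary):

```python
from math import comb, exp, factorial

m, n, k = 2, 2000, 2            # k < n/2, as assumed in the proof
P = 2.0 ** -(m + 4)             # lower bound from the lemma

# exact tail of Bin(n, P): probability of at most k points in the box
tail = sum(comb(n, l) * P**l * (1 - P) ** (n - l) for l in range(k + 1))

# right-hand side of (ineq:prob)
bound = 2 * n**k / factorial(k - 1) * exp(-n / 2 ** (m + 4))
assert tail <= bound < 1        # valid and non-trivial for these parameters
```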
\smallskip
Recall from Section~\ref{subsec:partition} that
\[
\Omega_m \,=\, \bigcup_{(s,p)\in\mathbb I_m} \Omega_m(s,p).
\]
Combining the upper bound \eqref{eq:card} on the cardinality of $\mathbb I_m$
with the estimate \eqref{ineq:prob}, we obtain by a union bound that
\begin{eqnarray*}
\ensuremath{{\mathbb P}}\big(\exists \mathbb B\in\Omega_m\,:\,\#(\mathbb X\cap \mathbb B)\le k\big)
& \le &\sum_{(s,p)\in\mathbb I_m} \ensuremath{{\mathbb P}}\big(\exists \mathbb B\in\Omega_m(s,p)\,:\,\#(\mathbb X\cap \mathbb B)\le k\big)\\
&\le&\sum_{(s,p)\in\mathbb I_m} \ensuremath{{\mathbb P}}\big(\#(\mathbb X\cap \mathbb B_m(s,p))\le k\big)\\
&<& \frac{2n^k}{(k-1)!}\,\exp\Bigl(m2^{m}\log(2^{m+3}d)-n2^{-m-4}\Bigr)\,.
\end{eqnarray*}
The last expression will be smaller than or equal to $1$ if and only if
\begin{equation}\label{eq:condition}
\begin{split}
n2^{-m-4} \,&\geq m2^{m}\log(2^{m+3}d) +\log\Big(\frac{2n^k}{(k-1)!}\Big) \\
\,&= \, m2^{m}\log(2^{m+3}d) + k\, \log\Big(c_k \frac{n}{k}\Big)
\end{split}
\end{equation}
with $c_k:=k\bigl(\frac{2}{(k-1)!}\bigr)^{1/k}$. Note that by Stirling's formula,
$c_k\uparrow e$ as $k\to\infty$.
To guarantee \eqref{eq:condition}, it is enough to assume that
\[
n\geq m 2^{2m+5}\log(2^{m+3}d) \qquad\text{and}\qquad
n\geq 2^{m+5}\,k\, \log\Big(e\frac{n}{k}\Big).
\]
It is easy to check that the second inequality is implied by
$n\ge k\, m\, 2^{m+9}$.
Hence, we find an $n\in\ensuremath{{\mathbb N}}$ with \eqref{eq:condition} such that
\[
n \,\le\, C\,m\, 2^m\, \max\Bigl\{2^{m}\,\log(2^m d),\, k \Bigr\}
\]
for some constant $C\le2^9$.
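The choice of $n$ can be verified directly by plugging $n=\lceil 2^9\, m\, 2^m \max\{2^m\log(2^m d),\,k\}\rceil$ into \eqref{eq:condition}. A numerical sketch over a small parameter grid (the constant $2^9$ is from the text; the grid is ours; we use $\log\big(2n^k/(k-1)!\big) = \log 2 + k\log n - \log\Gamma(k)$ to avoid overflow):

```python
from math import ceil, lgamma, log

def condition_holds(n, m, d, k):
    """Check inequality (eq:condition):
    n 2^{-m-4} >= m 2^m log(2^{m+3} d) + log(2 n^k / (k-1)!)."""
    lhs = n / 2 ** (m + 4)
    rhs = m * 2**m * log(2 ** (m + 3) * d) + log(2.0) + k * log(n) - lgamma(k)
    return lhs >= rhs

for m in range(1, 8):
    for d in (2, 10, 10**6):
        for k in (1, 5, 1000):
            n = ceil(2**9 * m * 2**m * max(2**m * log(2**m * d), k))
            if k < n / 2:                 # hypothesis of the theorem
                assert condition_holds(n, m, d, k)
```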
This ensures that there exists a realization of the multiset $\mathbb X$
with cardinality $n$
such that, for all boxes $\mathbb B$ with $\mathrm{vol}_d(\mathbb B)>2^{-m}$, we have
$\#(\mathbb B\cap \mathbb X)>k$.
Using the argument of Section \ref{subsec:multiset}, we obtain
\begin{align*}
N(2^{-m},d) & := \min \Big\{N\in\ensuremath{{\mathbb N}}\,:\, k\!\operatorname{-disp}^*(N,d)\leq 2^{-m} \Big\} \\
& \leq C\,m\, 2^m\, \max\Bigl\{2^{m}\,\log(2^m d),\, k \Bigr\}.
\end{align*}
To finish the proof, let $\varepsilon\in(0,\frac{1}{4})$ and denote by
$m:=m_\varepsilon\in\ensuremath{{\mathbb N}}$ the unique integer satisfying
\[
\frac{1}{2^m}\le\varepsilon< \frac{1}{2^{m-1}}\,,
\]
i.e., $m=\lceil\log_2(1/\varepsilon)\rceil$.
By this choice of $m$,
\begin{align*}
m 2^{2m}\log(2^{m}d) \,<\, c_1\,\log_2(d)\bigg(\frac{\log_2(1/\varepsilon)}{\varepsilon}\bigg)^2,
\end{align*}
and
\[
m 2^{m}\,k \,<\, c_2\, \frac{k\cdot\log_2(1/\varepsilon)}{\varepsilon}
\]
for some constants $c_1,c_2\in(0,\infty)$.
This means that, since $N(\cdot,d)$ is decreasing in the first argument,
\[
N(\varepsilon,d)
\,\le\, C\cdot \max\left\{\log_2(d)\bigg(\frac{\log_2(1/\varepsilon)}{\varepsilon}\bigg)^2,\,
\frac{k\cdot\log_2(1/\varepsilon)}{\varepsilon} \right\}
\]
for some constant $C\in(0,\infty)$.
We therefore conclude that
\[
k\!\operatorname{-disp}^*(n,d) \,\le\, C'\,\max\left\{\log_2(n)\sqrt{\frac{\log_2(d)}{n}},\,
\frac{k\cdot\log_2(n/k)}{n}\right\}
\]
with an absolute constant $C'\in(0,\infty)$.
\section{Proof of Theorem \ref{thm:main 2} -- the lower bound}\label{sec:thm2}
The proof is very much inspired by the proof of the lower bound on the dispersion
of Aistleitner, Hinrichs and Rudolf \cite{AHR17}.
We recall their argument in a slightly modified form.
Given a point set $\mathscr P_n=\{x_1,\dots,x_n\}$ with
$x_i=(x_{i,1},\dots,x_{i,d})\in[0,1]^d$,
we define the matrix $A=A(\mathscr P_n)\in\ensuremath{{\mathbb R}}^{n\times d}$ by
\[
A_{i,j} \,=\, \begin{cases}
1 & \text{if } x_{i,j} \ge 1/2, \\
0 & \text{otherwise},
\end{cases}
\]
with $i=1,\dots,n$ and $j=1,\dots,d$.
Note that
if $A$ contains two equal columns, then the projection of the point set on
the two coordinates corresponding to these columns is contained in the
union of the lower-left and the upper-right quarter of the unit square.
Therefore, the dispersion is at least $1/4$.
Likewise, if two columns $c_1,c_2\in\{0,1\}^{n}$ of $A$ satisfy $c_1=1-c_2$,
then the projection is contained in the upper-left and the lower-right quarter
and the dispersion is at least $1/4$.
Recall that $A$ has $d$ columns.
It follows from the pigeonhole principle that there must be two columns
that satisfy one of the above conditions whenever $d>2^{n-1}$.
This implies
\[
0\!\operatorname{-disp}^*\!\big(\lceil\log_2 d\rceil, d\big) \,\ge\, 1/4.
\]
We now consider the $k$-dispersion for $k\ge1$.
Following the above arguments, if there are two columns $c_1,c_2$ of $A$ that
agree (or disagree) in all but $k$ entries, then there exists a box of
volume $1/4$ that contains at most $k$ points.
Again, from the pigeonhole principle
(simply ignoring $k$ rows), we obtain that such columns exist
whenever $d>2^{n-k-1}$. This implies
\begin{equation}\label{eq:init}
k\!\operatorname{-disp}^*\!\big(k+\lceil\log_2 d\rceil, d\big) \,\ge\, 1/4.
\end{equation}
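The pigeonhole step behind \eqref{eq:init} can be tested by brute force: restricting to the first $n-k$ rows, more than $2^{n-k-1}$ columns force two columns that are equal or complementary there, hence two columns of $A$ agreeing (or disagreeing) in all but at most $k$ entries. A randomized sketch of this combinatorial fact (not of the dispersion itself):

```python
import itertools, random

def close_pair_exists(cols, n, k):
    """Two columns agreeing or disagreeing in all but at most k entries?"""
    for c1, c2 in itertools.combinations(cols, 2):
        dist = sum(a != b for a, b in zip(c1, c2))   # Hamming distance
        if dist <= k or dist >= n - k:
            return True
    return False

random.seed(1)
n, k = 6, 1
d = 2 ** (n - k - 1) + 1          # d > 2^{n-k-1} triggers the pigeonhole
for _ in range(200):
    cols = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(d)]
    assert close_pair_exists(cols, n, k)
```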
Finally, note that Lemma~1 from \cite{AHR17} holds also for the $k$-dispersion,
i.e., for all $n,k,d,\ell\in\ensuremath{{\mathbb N}}$ we have
\begin{equation}\label{eq:rec}
k\!\operatorname{-disp}^*(n,d) \,\ge\, \frac{(\ell+1)\, k\!\operatorname{-disp}^*(\ell,d)}{n+\ell+1}.
\end{equation}
From this we conclude Theorem \ref{thm:main 2}.
\begin{proof}[Proof of Theorem \ref{thm:main 2}]
For $n\le k+\log_2(d)$, we use \eqref{eq:init} together with the fact that $k\!\operatorname{-disp}^*(\cdot,d)$ is non-increasing in the number of points and obtain
$k\!\operatorname{-disp}^*(n,d)\ge1/4\ge1/8$. For $n> k+\log_2(d)$, we use \eqref{eq:rec}
with $\ell=k+\lceil\log_2(d)\rceil\le n$ and obtain
\[
k\!\operatorname{-disp}^*(n,d) \,\ge\, \frac{(\ell+1)\, k\!\operatorname{-disp}^*(\ell,d)}{n+\ell+1}
\,\ge\, \frac14\,\frac{(\ell+1)}{n+\ell+1}
\,\ge\, \frac{k+\log_2(d)}{8n}.
\]
\end{proof}
\begin{remark}
As the method to obtain \eqref{eq:init} is clearly related to packing numbers
on the discrete cube $\{0,1\}^n$ with respect to the Hamming metric,
one could try to apply more involved methods to obtain better bounds.
For example, the well-known \emph{sphere-packing bound}
(also known as Hamming bound), see \cite[Theorem 5.2.7]{Lint1992}, states that the maximal size of a $k$-packing
of the cube, say $M(n,k)$, satisfies
\[
M(n,k) \,\le\, \frac{2^n}{\sum_{t=0}^{\lfloor(k-1)/2\rfloor}\binom{n}{t}}.
\]
However, using this bound does not lead to any significant improvement.
\end{remark}
\bibliographystyle{abbrv}
https://arxiv.org/abs/1804.09697 | Electrostatic Interpretation of Zeros of Orthogonal Polynomials | We study the differential equation $ - (p(x) y')' + q(x) y' = \lambda y,$ where $p(x)$ is a polynomial of degree at most 2 and $q(x)$ is a polynomial of degree at most 1. This includes the classical Jacobi polynomials, Hermite polynomials, Legendre polynomials, Chebyshev polynomials and Laguerre polynomials. We provide a general electrostatic interpretation of zeros of such polynomials: the set of real numbers $\left\{x_1, \dots, x_n\right\}$ satisfies $$ p(x_i) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k - x_i}} = p'(x_i) - q(x_i) \qquad \mbox{for all}~ 1\leq i \leq n$$ if and only if they are zeros of a polynomial solving the differential equation. We also derive a system of ODEs depending on $p(x),q(x)$ whose solutions converge to the zeros of the orthogonal polynomial at an exponential rate. | \section{Introduction}
\subsection{Introduction.}
We start by describing an 1885 result of Stieltjes for Jacobi polynomials \cite{stieltjes}. Jacobi polynomials $P_n^{\alpha, \beta}(x)$, for real $\alpha, \beta > -1$, are the unique (up to a constant factor) solutions of the equation
$$ -(1 - x^2 ) y''(x) - \left( \beta - \alpha - (\alpha + \beta + 2)x\right)y'(x) = n(n + \alpha + \beta + 1) y(x)$$
where $n \in \mathbb{N}$. The solution is a polynomial of degree $n$ having all its zeros in $(-1,1)$. Stieltjes defined a notion of energy for any set $\left\{x_1, \dots, x_n \right\} \subset (-1,1)$ as
$$ E = - \sum_{i<j}{\log{|x_i - x_j|}} - \sum_{i=1}^{n}{\left( \frac{\alpha+1}{2}\log{|x_i-1|} + \frac{\beta+1}{2}\log{|x_i+1|}\right)}$$
and showed that the minimum of this energy is attained precisely at the zeros of the Jacobi polynomial. Differentiating this expression in all $n$ variables leads to
a system of equations describing an electrostatic equilibrium
$$ \sum_{k=1 \atop k \neq i}^{n}{ \frac{1}{x_k - x_i} } = \frac{1}{2}\frac{\alpha+1}{x_i - 1} + \frac{1}{2} \frac{\beta + 1}{x_i + 1} \qquad \mbox{for all}~1 \leq i \leq n.$$
A traditional application of the formula is to establish monotonicity of zeros with respect to the parameters $\alpha, \beta$. This argument can be found in Szeg\H{o}'s book \cite{szeg}, which also discusses Laguerre and Hermite polynomials. These types of questions have been studied by a large number of people; we refer to \cite{ahmed, dimitrov, hendriksen, ismail}, the survey \cite{survey},
the 1978 survey and 2001 book by F. Calogero \cite{calogero, calogero2}, the more recent papers of F. A. Gr\"unbaum \cite{grunbaum, grunbaum2}
and M. E. H. Ismail \cite{ismail, ismail2} and references therein.
\subsection{Equilibrium.}
The purpose of this paper is to prove a simple general result that characterizes zeros of orthogonal polynomials. The statement
itself is a bit more general but most interesting when applied to the classical orthogonal polynomials. We first discuss
second order equations, then describe system of ODEs converging to zeros and conclude with a short remark on higher order equations.
\begin{theorem} Let $p(x), q(x)$ be polynomials of degree at most 2 and 1, respectively. Then the set $\left\{x_1, \dots, x_n\right\}$, assumed to be in the domain of definition, satisfies
$$ p(x_i) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k - x_i}} = p'(x_i) - q(x_i) \qquad \mbox{for all} \quad 1\leq i \leq n$$
if and only if
$$y(x) = \prod_{k=1}^{n}{(x-x_k)} \quad \mbox{solves} \quad - (p(x) y')' + q(x) y' = \lambda y \quad \mbox{for some} ~~ \lambda \in \mathbb{R}.$$
\end{theorem}
In classical applications, there is a unique polynomial solution of fixed degree corresponding to a fixed value of $\lambda$. This removes $\lambda$ as a variable and leads to a complete characterization.
One direction of the statement (zeros of a polynomial solution satisfy the system of equations) is a fairly straightforward computation and the outline of the argument can be found, for example, in \cite{grunbaum} or, as a remark, in \cite[\S 3]{ismail}. Moreover, results in this direction can be obtained for much more general differential equations of higher order (see, for example, \cite{calogero, calogero2,avila}). Arguments in the other direction seem to exist only in special cases, as in the work of Stieltjes, and are based on interpreting the system of equations as a critical point of an associated energy functional (see e.g. \cite{ismail, survey}). Our argument proceeds by a different route and bypasses considerations of an underlying energy.\\
We remark that there is no assumption of orthogonality, nor is there any restriction on the domain, which may be either bounded or unbounded; we do assume, however, that the $n$ points are contained in the domain of definition. Let us quickly consider two special cases: we start with the Hermite differential equation
$$ -y'' + x y' = \lambda y.$$
This corresponds to $p(x) \equiv 1$ and $q(x) \equiv x$. We deduce from Theorem 1 that the zeros of the $n$-th solution satisfy a relationship also going back to Stieltjes, see \cite{ahmed1},
$$ \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_i - x_k}} = x_i \qquad \mbox{for all}~1 \leq i \leq n.$$
Returning to the special case of Jacobi polynomials with $\alpha = 0 = \beta$ (for simplicity of exposition), we obtain $p(x) = 1-x^2$, $q(x) = 0$ and thus
$$ (x_i^2-1) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k - x_i}} = 2x_i \qquad \mbox{for all} \quad 1\leq i \leq n$$
which is easily seen to be equivalent to Stieltjes' electrostatic equilibrium.
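Both special cases can be verified numerically at explicitly known zeros; a minimal check in Python (the choice of $He_3$ and $P_2$, and the hard-coded zeros, are ours):

```python
# He_3(x) = x^3 - 3x solves -y'' + x y' = 3y (p = 1, q = x); its zeros
# 0, +-sqrt(3) satisfy  sum_{k != i} 2/(x_i - x_k) = x_i.
he3_zeros = [-3**0.5, 0.0, 3**0.5]
for i, xi in enumerate(he3_zeros):
    s = sum(2 / (xi - xk) for k, xk in enumerate(he3_zeros) if k != i)
    assert abs(s - xi) < 1e-12

# P_2(x) = (3x^2 - 1)/2 has zeros +-1/sqrt(3); they satisfy
# (x_i^2 - 1) * sum_{k != i} 2/(x_k - x_i) = 2 x_i.
p2_zeros = [-1 / 3**0.5, 1 / 3**0.5]
for i, xi in enumerate(p2_zeros):
    s = sum(2 / (xk - xi) for k, xk in enumerate(p2_zeros) if k != i)
    assert abs((xi**2 - 1) * s - 2 * xi) < 1e-12
```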
\subsection{A System of ODEs.} One interesting byproduct of our approach is a procedure that allows for the computation of zeros without ever computing the polynomial.
More precisely, for any given set $x_1(0) < x_2(0) < \dots < x_n(0)$ (assumed to be in the domain of definition), we consider the system of ordinary differential equations
$$ \frac{d}{dt} x_i(t) = -p(x_i) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k(t) - x_i(t)}} + p'(x_i(t)) - q(x_i(t)) \qquad \qquad (\diamond)$$
The result above implies that the unique stationary point of this system of ODEs is given by the $n$ zeros of a polynomial solution of degree $n$.
We will now show that under standard assumptions on the equation $ - (p(x) y')' + q(x) y' = \lambda y,$ the system converges to this fixed point at an exponential rate.
We require somewhat stronger assumptions than we do for Theorem 1 and ask that
\begin{enumerate}
\item the equation $ - (p(x) y'(x))' + q(x) y'(x) = \lambda y(x)$ is a Sturm-Liouville problem with a discrete set of solutions indexed by $0 = \lambda_0 < \lambda_1 < \lambda_2 < \dots$ and
\item the eigenfunction $y_n$ corresponding to $\lambda_n$ is a polynomial of degree $n$ for all $n \in \ensuremath{{\mathbb N}}$.
\end{enumerate}
These assumptions cover the classical polynomials but are fairly strong, we refer to Bochner's theorem and its nice exposition in \cite[\S 20]{ismail3}.
\begin{theorem}
The system $(\diamond)$ converges for all initial values $x_1(0) < \dots < x_n(0)$ to the zeros $x_1 < \dots < x_n$ of the degree $n$ polynomial solving the equation. Moreover,
$$ \max_{1 \leq i \leq n}{ |x_i(t) - x_i|} \leq c e^{- \sigma_n t},$$
where $c > 0$ depends on everything and $\sigma_n \geq \lambda_n - \lambda_{n-1}$.
\end{theorem}
We are not aware of any result of this type; the connection between partial differential equations and zeros of polynomials (or poles of rational functions) is, of course, classical, and we refer to the extensive
survey of Calogero \cite{calogero} and his more recent book \cite{calogero2}. Our underlying dynamical system is the \textit{backward} heat equation (well-posed for algebraic reasons), and existence
of a solution for all time is a consequence of Sturm-Liouville theory.
We assume that the initial values are all contained in the domain where the solution is defined. For classical Sturm-Liouville problems, the Weyl law suggests
$\lambda_{n} - \lambda_{n-1} \sim n$. The rapid convergence can be easily observed in examples.
The assumptions are satisfied for all classical polynomials. We consider, as a specific example, the Laguerre polynomials satisfying
$$ - x y'' + (x-1)y' = ny $$
or, in our notation, $p(x) = x = q(x)$. Theorem 2 then implies that for any choice of initial values $x(0) < y(0) < z(0)$, the system of ordinary differential equations
\begin{align*}
\dot x(t) &= \frac{2x(t)}{x(t) - y(t)} + \frac{2x(t)}{x(t) - z(t)} +1 - x(t) \\
\dot y(t) &= \frac{2y(t)}{y(t) - x(t)} + \frac{2y(t)}{y(t) - z(t)} +1 - y(t) \\
\dot z(t) &= \frac{2z(t)}{z(t) - x(t)} + \frac{2z(t)}{z(t) - y(t)} +1 - z(t)
\end{align*}
has a solution for all $t > 0$. As $t \rightarrow \infty$, the solutions converge to the zeros of the third Laguerre polynomial
$$ L_3(x) = x^3 - 9x^2 + 18x - 6$$
and thus
$$ x(t) \rightarrow 0.4157\dots,~ y(t) \rightarrow 2.2942\dots \quad \mbox{and} \quad z(t) \rightarrow 6.2899\dots.$$
Moreover, this convergence happens at an exponential rate (roughly with speed $e^{-t}$).
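The three-point Laguerre system above can be reproduced with a few lines of explicit Euler integration (step size, horizon and initial values are our choices; the limits are the zeros of $L_3$ quoted above):

```python
def laguerre3_step(state, dt):
    """One explicit Euler step of the system for p(x) = q(x) = x,
    i.e. dx/dt = 2x/(x-y) + 2x/(x-z) + 1 - x  (and cyclically)."""
    x, y, z = state
    dx = 2*x/(x - y) + 2*x/(x - z) + 1 - x
    dy = 2*y/(y - x) + 2*y/(y - z) + 1 - y
    dz = 2*z/(z - x) + 2*z/(z - y) + 1 - z
    return (x + dt*dx, y + dt*dy, z + dt*dz)

state = (1.0, 2.0, 7.0)          # any initial values x(0) < y(0) < z(0)
for _ in range(40_000):          # integrate up to t = 20 with dt = 5e-4
    state = laguerre3_step(state, 5e-4)

# the limits are the zeros of L_3(x) = x^3 - 9x^2 + 18x - 6
for got, want in zip(state, (0.4157, 2.2942, 6.2899)):
    assert abs(got - want) < 1e-3
```

Note that the fixed points of the Euler iteration coincide with the equilibria of the ODE, so the discretization does not shift the limit.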
We consider a large-scale example (see Fig. 1) next. The equation
$$- \frac{d}{dx}\left( (1-x^2) \frac{d}{dx}y(x)\right) = n(n+1) y(x)$$
is solved by the Legendre polynomials $P_n$ defined on $(-1,1)$. In our notation, we have $p(x) = 1- x^2$ and $q(x) = 0$. We simulate
the system of ODEs
$$ \frac{d}{dt} x_i(t) = -(1-x_i(t)^2) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k(t) - x_i(t)}} - 2x_i(t)$$
for $n=100$ equations. The initial values $\left\{x_1(0), \dots, x_{100}(0)\right\}$ are chosen as uniform random variables
in the interval $(-0.1, 0.1)$. Figure 1 shows the evolution of the system and convergence to the zeros of the
Legendre polynomial of degree 100.
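A small-$n$ version of the same experiment can be run for $P_3$, whose zeros are $0$ and $\pm\sqrt{3/5}$ (initial values, step size and horizon are our choices):

```python
def legendre_step(xs, dt):
    """One explicit Euler step for p(x) = 1 - x^2, q(x) = 0:
    dx_i/dt = -(1 - x_i^2) * sum_{k != i} 2/(x_k - x_i) - 2*x_i."""
    out = []
    for i, xi in enumerate(xs):
        s = sum(2 / (xk - xi) for k, xk in enumerate(xs) if k != i)
        out.append(xi + dt * (-(1 - xi * xi) * s - 2 * xi))
    return out

xs = [-0.08, 0.01, 0.09]         # initial values close to the origin
for _ in range(50_000):          # integrate up to t = 10 with dt = 2e-4
    xs = legendre_step(xs, 2e-4)

root = (3 / 5) ** 0.5            # zeros of P_3 are -root, 0, root
for got, want in zip(xs, (-root, 0.0, root)):
    assert abs(got - want) < 1e-3
```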
\vspace{-10pt}
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[scale=1]
\draw [thick, ->] (-4.3,-3) -- (-4.3,3);
\draw [thick, ->] (4.3,-0.2) -- (4.7,-0.2);
\node at (4.6, -0.5) {$t$};
\filldraw (-4.3, 2.6) circle (0.05cm);
\draw [dashed] (-4.3,2.65) -- (4.7,2.65);
\draw [dashed] (-4.3,-2.65) -- (4.7,-2.65);
\node at (-4.5, 2.6) {1};
\filldraw (-4.3, -2.6) circle (0.05cm);
\node at (-4.5, -2.6) {-1};
\node at (-.010,0) {\includegraphics[width=0.7\textwidth]{bigsample.pdf}};
\end{tikzpicture}
\caption{Evolution of the system of ODEs for $0 \leq t \leq 0.01$ approaches the zeros of the Legendre polynomial $P_{100}$ in $(-1,1)$.}
\end{figure}
\end{center}
\vspace{-20pt}
As indicated in the Theorem, the constant determining the speed of exponential
convergence grows linearly in $n$: this is reflected in the rather short time-scale $(0 \leq t \leq 0.01)$ in the picture.
\subsection{Higher order equations} Theorem 1 as well as its proof immediately generalizes to polynomial solutions of the equation
$$ a_n(x) y^{(n)}(x) + a_{n-1}(x) y^{(n-1)}(x) + \dots + a_{1}(x)y'(x) = \lambda y(x),$$
where $a_j$ is a polynomial of degree at most $j$ and the solution $y$ is assumed to have only simple zeros: one half
of the statement can be found in \cite[Proposition 1]{avila}, the other direction follows from our approach.
We are not aware of an extension of Theorem 2 since our
proof relies on Sturm-Liouville theory.
\section{Proofs}
\subsection{Proof of Theorem 1.}
\begin{proof}
One direction of the argument, showing that the zeros of any polynomial solution of the equation are in the desired electrostatic equilibrium, is a simple computation and known in greater generality \cite{ahmed1,ahmed}. However, it also follows immediately from our argument and is, for completeness' sake, sketched at the end of the proof.
We assume that the system of equations is satisfied and will prove that the associated polynomial solves the differential equation. Fix $x_1, \dots, x_n$
and consider the candidate polynomial
$$ f(x) = \prod_{k=1}^{n}{(x - x_k)}.$$
We want to show that this polynomial satisfies the differential equation
$$- (p(x) f')' + q(x) f' = \lambda f \qquad \mbox{for some}~\lambda \in \mathbb{R}.$$
We introduce the function $u(t,x)$ as the solution of the initial value problem
\begin{align*}
\frac{\partial}{\partial t} u(t,x) &= \frac{\partial}{\partial x}\left( p(x) \frac{\partial}{\partial x} u(t,x) \right) - q(x) \frac{\partial}{\partial x} u(t,x) \\
u(0,x) &= f(x)
\end{align*}
This is a parabolic partial differential equation. In general, without a sign condition on $p$, it may be a backward heat equation, and the solution of such an equation need not exist even for a short amount of time. Here, however, since $p$ is a polynomial
of degree at most 2 and $q$ is a polynomial of degree at most 1, the right-hand side is a polynomial of degree at most $n$ whenever $u(t,\cdot)$ is. In particular, we can rewrite the partial
differential equation as a linear system of $n+1$ ordinary differential equations, which guarantees existence for all $t>0$.
Suppose $f(x)$ is not a solution of the differential equation. Then
$$u_t(0,x) = \frac{\partial}{\partial x}\left( p(x) \frac{\partial}{\partial x} f(x) \right) - q(x) \frac{\partial}{\partial x} f(x)$$
is a polynomial of degree at most $n$ that is not identically 0 (since otherwise $f$ would solve the differential equation with $\lambda = 0$). We observe that if this polynomial vanishes at all of the points
$\left\{x_1, \dots, x_n\right\}$, then it has to be a multiple of $f$ and we have obtained the desired result for some $\lambda \neq 0$. If this is not the case, then $u_t(0,x_i) \neq 0$ for at least one $1 \leq i \leq n$.
We fix this value of $i$. Moreover, we note
$$f'(x_i) = \prod_{k=1 \atop k \neq i}^{n}{(x_i - x_k)} \neq 0.$$
Since $f'(x_i) = u_x(0,x_i) \neq 0$, the implicit function theorem implies that there is a neighborhood of $0$ on which there is a differentiable function $x_i(t)$ with $x_i(0) = x_i$ such that
$$ u( t, x_i(t)) = 0.$$
We now compute the derivative of $x_i(t)$ at $t=0$. Differentiating the identity $u(t,x_i(t))=0$ implies
$$ 0 = \frac{\partial}{\partial t} u(t,x_i(t))\big|_{t=0} = u_x(0,x_i)\left(\frac{\partial}{\partial t} x_i(t) \big|_{t=0}\right) + u_t(0,x_i).$$
Solving for the time derivative gives $\frac{\partial}{\partial t} x_i(t)\big|_{t=0} = -u_t(0,x_i)/u_x(0,x_i)$. We have already computed $u_x(0,x_i) = f'(x_i) \neq 0$, so it remains to compute $u_t(0,x_i)$. A simple computation shows
\begin{align*}
u_t(0,x_i) &= \left[ \frac{\partial}{\partial x}\left( p(x) \frac{\partial}{\partial x} f(x) \right) - q(x) \frac{\partial}{\partial x} f(x) \right]_{x=x_i} \\
&= \left[ p(x) \frac{\partial^2}{\partial x^2} f(x) + (p'(x) - q(x))\frac{\partial}{\partial x} f(x) \right]_{x=x_i}.
\end{align*}
The first term simplifies to
$$ \frac{\partial^2}{\partial x^2} f(x) \big|_{x=x_i} = 2 \sum_{k =1 \atop k \neq i}^{n}{ \prod_{j = 1 \atop j \notin \left\{i, k\right\}}^{n}{(x_i - x_j)}}$$
and altogether, since $\frac{\partial}{\partial t} x_i(t) \big|_{t = 0} = -u_t(0,x_i)/f'(x_i)$, we obtain
$$ 0 \neq \frac{\partial}{\partial t} x_i(t) \big|_{t = 0} = p(x_i) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k - x_i}} + q(x_i) - p'(x_i),$$
which contradicts the assumption that the electrostatic equations hold at $\left\{x_1, \dots, x_n\right\}$. Conversely, if $f$ is indeed a solution of the equation, then
$$ u(t,x) = e^{\lambda t} f(x) \qquad \mbox{and thus} \qquad \frac{\partial}{\partial t} x_i(t) \big|_{t = 0} = 0$$
for all $1 \leq i \leq n$, which implies that the equations are satisfied.
\end{proof}
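To make the electrostatic equations concrete, consider the Hermite case (this worked example is our addition, not part of the original): for $p(x) \equiv 1$ and $q(x) = 2x$ the polynomial solutions of the differential equation are the Hermite polynomials $H_n$, and the classical observation of Stieltjes is that their zeros satisfy
$$ \sum_{k=1 \atop k \neq i}^{n}{\frac{1}{x_i - x_k}} = x_i \qquad \mbox{for all}~1 \leq i \leq n.$$
For $n=2$ the zeros of $H_2(x) = 4x^2 - 2$ are $x_{1,2} = \pm 1/\sqrt{2}$, and indeed $\frac{1}{x_1 - x_2} = \frac{\sqrt{2}}{2} = x_1$.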
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[scale=1]
\draw [thick, ->] (-4.3,-3) -- (-4.3,3);
\node at (4.7, -2.8) {$t$};
\filldraw (-4.3, 2.6) circle (0.05cm);
\draw [thick, ->] (-4.3,-2.6) -- (4.7,-2.6);
\node at (-4.7, 2.6) {371};
\filldraw (-4.3, -2.6) circle (0.05cm);
\filldraw (4.1, -2.6) circle (0.05cm);
\node at (4.1, -2.8) {4};
\node at (-4.1, 2.9) {$x$};
\node at (-4.6, -2.6) {0};
\node at (-0.02,0.04) {\includegraphics[width=0.7\textwidth]{laguerre.pdf}};
\end{tikzpicture}
\caption{Another example: starting with initial values $x_i(0) = i$ for $1 \leq i \leq 100$, the associated system of ODEs approaches the zeros of the Laguerre polynomial $L_{100}$. }
\end{figure}
\end{center}
\vspace{-10pt}
\subsection{Proof of Theorem 2.}
\begin{proof} We assume that the solutions $y_0, y_1, \dots, y_{n-1}, y_n$ satisfy $\mbox{deg} (y_i) = i$ for all $0 \leq i \leq n$, and that $p, q$ and the domain of definition are such that classical
Sturm-Liouville theory applies. Then the polynomial $y_j$ has exactly $j$ zeros, all of which are simple. Suppose
$$ x_1(0) < x_2(0) < \dots < x_n(0)$$
is given. We define the function
$$ f(x) = \prod_{k=1}^{n}{(x - x_k(0))} \quad \mbox{and write it as} \quad f(x) = \sum_{k=0}^{n}{a_k y_k(x)}$$
for some coefficients $a_0, \dots, a_n$.
This is possible because of the assumption on the degrees (the change-of-basis matrix between $\{1, x, \dots, x^n\}$ and $\{y_0, \dots, y_n\}$ is triangular with nonzero diagonal entries, hence invertible).
The strong form of the Sturm oscillation theorem, which is not widely known, states that, as long as not all coefficients $b_k$ vanish, any function of the form
$$ \sum_{k=0}^{n-1}{b_k y_k(x)} \qquad \mbox{has at most}~n-1~\mbox{zeros.}$$
We refer to B\'erard \& Helffer \cite{berard} and L\"utzen \cite{lutzen} for the history of this remarkable theorem, which seems to have been largely forgotten (\cite{berard} gives rigorous proofs in modern language, \cite{stein}
gives a quantitative form). Since $f(x)$ has $n$ zeros, the Sturm oscillation theorem implies that $a_n \neq 0$. We now define $u(t,x)$ as the solution of the \textit{backward} heat equation
\begin{align*}
\frac{\partial}{\partial t} u(t,x) &= - \frac{\partial}{\partial x}\left( p(x) \frac{\partial}{\partial x} u(t,x) \right) + q(x) \frac{\partial}{\partial x} u(t,x) \\
u(0,x) &= f(x)
\end{align*}
and observe that, as explained in the proof of Theorem 1, this equation is well-posed for all $t>0$ since it can be rewritten as a linear system of $n+1$ ordinary differential equations. Linearity implies that the solution is given by
$$ u(t,x) = \sum_{k=0}^{n}{a_k e^{\lambda_k t} y_k(x)}.$$
At the same time, as long as the zeros do not collide, we can write
$$ u(t,x) = h(t)\prod_{i=1}^{n}{(x - x_i(t))},$$
where $h(t) \neq 0$ for all $t>0$ and the functions $x_i(t)$ satisfy, following the computation done in the proof of Theorem 1 and reversing the sign, the system of ordinary differential equations
$$ \frac{d}{dt} x_i(t) = -p(x_i(t)) \sum_{k = 1 \atop k \neq i}^{n}{\frac{2}{x_k(t) - x_i(t)}} + p'(x_i(t)) - q(x_i(t)) \quad \mbox{for all}~1 \leq i \leq n.$$
It is a property of Sturm-Liouville problems that the number of distinct zeros is nonincreasing in time under the forward heat equation; since we are
dealing with the backward heat equation, we see that the number of distinct zeros is nondecreasing. Moreover, since we start with a polynomial of degree
$n$ and the solution is always a polynomial of degree at most $n$, this implies that zeros can never collide (and that the solution is always exactly of degree $n$). This shows that
$$ e^{-\lambda_n t}\, h(t)\prod_{i=1}^{n}{(x - x_i(t))} = a_n y_n(x) + \sum_{k=0}^{n-1}{a_k e^{(\lambda_k - \lambda_n) t} y_k(x)}.$$
All zeros of $y_n$ are simple, and the inverse function theorem now implies that the zeros of $u(t,x)$ converge to the zeros of $y_n$ exponentially
quickly in $t$ (note that $\lambda_k - \lambda_n < 0$ for $k < n$). The speed of convergence depends on the size of $\lambda_{n-1} - \lambda_n$ and on the size of $y_n'$ at its zeros; the constant in front depends on the precise values of the coefficients $\left\{a_0, a_1, \dots, a_n\right\}$. If $a_{n-1}$ or more of the leading coefficients vanish, the convergence is even faster.
\end{proof}
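As a numerical illustration (our own sketch, not code from the paper; the function names, initial data and step sizes are our choices), the system of ordinary differential equations can be integrated by forward Euler. We specialize to the Hermite case $p(x)\equiv 1$, $q(x)=2x$, where the equilibrium configuration consists of the zeros of the Hermite polynomial $H_n$ (the classical Stieltjes electrostatics):

```python
import numpy as np

def zero_flow(x0, p, dp, q, dt=1e-3, steps=40000):
    """Forward-Euler integration of the particle system
        dx_i/dt = p(x_i) * sum_{k != i} 2/(x_i - x_k) + p'(x_i) - q(x_i),
    whose equilibria are the zero configurations described above."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        diff = x[:, None] - x[None, :]        # matrix of x_i - x_k
        np.fill_diagonal(diff, np.inf)        # exclude the k = i term
        repulsion = (2.0 / diff).sum(axis=1)  # electrostatic repulsion
        x += dt * (p(x) * repulsion + dp(x) - q(x))
    return np.sort(x)

# Hermite case: p(x) = 1, q(x) = 2x, so p' - q = -2x acts as confinement.
roots = zero_flow([-1.9, -0.7, 0.7, 1.9],
                  p=lambda x: np.ones_like(x),
                  dp=lambda x: np.zeros_like(x),
                  q=lambda x: 2.0 * x)
nodes, _ = np.polynomial.hermite.hermgauss(4)  # zeros of H_4 for comparison
err = np.max(np.abs(roots - np.sort(nodes)))
```

With $p(x)=x$, $q(x)=x$ the same routine should reproduce the Laguerre experiment shown in the figure above.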
\vspace{-15pt}
arXiv:1804.09697 | Electrostatic Interpretation of Zeros of Orthogonal Polynomials | Classical Analysis and ODEs (math.CA); Spectral Theory (math.SP)
https://arxiv.org/abs/1011.3486 | Computing the $\sin_{p}$ function via the inverse power method | In this paper, we discuss a new iterative method for computing $\sin_{p}$. This function was introduced by Lindqvist in connection with the unidimensional nonlinear Dirichlet eigenvalue problem for the $p$-Laplacian. The iterative technique was inspired by the inverse power method in finite dimensional linear algebra and is competitive with other methods available in the literature. | \section{Introduction}
In this paper we present a new method to compute the function $\sin_{p}$,
inspired by recent work done by the authors in \cite{BEM}, where an iterative
algorithm based on the inverse power method of linear algebra was introduced
for the computation of the first eigenvalue and first eigenfunction of the
Dirichlet problem for the $p$-Laplacian in arbitrary domains in
$\mathbb{R}^{N}$. The functions $\sin_{p}$, $1<p<\infty$, can be thought of as
generalizations of the familiar trigonometric functions. They arise in the
unidimensional Dirichlet eigenvalue problem for the $p$-Laplacian and were
introduced in this capacity in \cite{Lindqvist}, where a power series formula
for computing them was also formally given.
In \cite{BR1} the $\sin_{p}$ functions were utilized to
introduce a generalization of the Pr\"{u}fer transformation and thus
represent, in two phase-plane coordinates, Sturm-Liouville-type
problems involving the $N$-dimensional radially symmetric $p$-Laplacian
$L_{p}u:=x^{1-N}\left( x^{N-1}\left\vert u^{\prime}\right\vert ^{p-2}u^{\prime}\right) ^{\prime}$, $0\leqslant a<x<b<\infty$. This
approach was numerically implemented in \cite{BR2} for an eigenvalue problem
involving $L_{p}$ with separated homogeneous boundary conditions. In that
paper an interpolation table for $\sin_{p}$ was obtained by numerically
solving an ODE, and the authors raised the question of finding
a fast and accurate algorithm for computing $\sin_{p}$.
Our method depends on the convergence of a sequence of functions whose
definition, as in \cite{BEM}, is motivated by an extension of the inverse
power method of linear algebra for obtaining the first eigenvalue and first
eigenfunction of finite dimensional linear operators. These functions are
recursively defined and can be given in integral form, so that they can be
obtained by numerical integration.
More specifically, recall that it suffices to obtain $\sin_{p}$ in the
interval $I_{p}=\left[ 0,\pi_{p}/2\right] $, since it is extended to the
interval $\left[ \pi_{p}/2,\pi_{p}\right] $ symmetrically with respect to
$\pi_{p}/2$ and afterward to the whole real line $\mathbb{R}$ as an odd, $2\pi_{p}$-periodic function (the definition of $\sin_{p}$ as
well as the precise value of $\pi_{p}$ are recalled in Section 2). We define
the following sequence of (positive) functions $\left\{ \phi_{n}\right\}
\subset C^{1}\left( I_{p}\right) $. Set $\phi_{0}\equiv1$ and
\[
\left\{
\begin{array}
[c]{ll}
\left( \phi_{n+1}^{\prime}\left\vert \phi_{n+1}^{\prime}\right\vert
^{p-2}\right) ^{\prime}=-\phi_{n}\left\vert \phi_{n}\right\vert ^{p-2} &
\text{ \ \ if }x\in I_{p},\\
\phi_{n+1}\left( 0\right) =\phi_{n+1}^{\prime}\left( \pi_{p}/2\right)
=0. &
\end{array}
\right.
\]
We prove that the scaled sequence $\left\{ \sqrt[p]{p-1}\phi_{n}/\left\Vert
\phi_{n}\right\Vert _{\infty}\right\} $ converges uniformly to $\sin_{p}$ in
$I_{p}$. The functions $\phi_{n}$ can be written in integral form as
\[
\phi_{n+1}\left( x\right) =\int_{0}^{x}\left( \int_{\theta}^{\pi_{p}/2}\phi_{n}\left( s\right) ^{p-1}ds\right) ^{\frac{1}{p-1}}d\theta\text{,
\ \ \ }x\in I_{p},
\]
and, therefore, are readily computed using standard efficient numerical
methods for definite integrals.
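The recursion can be sketched numerically as follows (our illustration, not the authors' implementation; we use the trapezoid rule for the nested integrals, and all names below are our own choices). For $p=2$ one has $\pi_2=\pi$ and $\sin_2=\sin$, which gives a convenient correctness check:

```python
import numpy as np

def pi_p(p):
    # pi_p = 2 (p-1)^{1/p} (pi/p) / sin(pi/p)
    return 2.0 * (p - 1.0) ** (1.0 / p) * (np.pi / p) / np.sin(np.pi / p)

def sin_p(p, m=2001, iters=12):
    """Inverse-power-type iteration
        phi_{n+1}(x) = int_0^x psi_{p'}( int_t^{pi_p/2} psi_p(phi_n(s)) ds ) dt
    with psi_r(t) = t|t|^{r-2}; the scaled iterates (p-1)^{1/p} phi_n / ||phi_n||
    converge uniformly to sin_p on [0, pi_p/2]."""
    pc = p / (p - 1.0)                                  # conjugate exponent p'
    psi = lambda r, t: np.sign(t) * np.abs(t) ** (r - 1.0)
    x = np.linspace(0.0, pi_p(p) / 2.0, m)
    h = x[1] - x[0]
    cumtrap = lambda y: np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * h / 2.0)))
    phi = np.ones_like(x)
    for _ in range(iters):
        tail = cumtrap(psi(p, phi))
        tail = tail[-1] - tail                          # int_x^{pi_p/2} psi_p(phi_n)
        phi = cumtrap(psi(pc, tail))                    # int_0^x psi_{p'}( ... )
    return x, (p - 1.0) ** (1.0 / p) * phi / phi.max()

xs, s = sin_p(2.0)                     # p = 2: sin_2 = sin and pi_2 = pi
err = np.max(np.abs(s - np.sin(xs)))   # should be at the level of the quadrature error
```

For other values of $p$ one can check, for instance, that the returned profile vanishes at $0$ and attains its maximum $(p-1)^{1/p}$ at $\pi_p/2$, in line with the properties recalled in Section 2.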
This paper is organized as follows. In Section 2, we recall the definition and
some basic properties of $\sin_{p}$ which will be used in the sequel. In
Section 3, we show how to recursively construct a sequence of functions which
converge uniformly to $\sin_{p}$. Finally, in Section 4 we compare the
performance of our method with those of \cite{Lindqvist} and \cite{BR2}.
\section{The function $\sin_{p}$}
For the sake of completeness we recall in this section the definition and some
properties of the function $\sin_{p}$. The unidimensional Dirichlet eigenvalue
problem for the $p$-Laplacian, $p>1$, is
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\psi_{p}\left( u^{\prime}\right) ^{\prime}=-\lambda\psi_{p}\left( u\right)
& \quad\text{if }a<x<b,\\
u\left( a\right) =u\left( b\right) =0, &
\end{array}
\right. \label{01}
\end{equation}
where $\psi_{p}\left( t\right) =t\left\vert t\right\vert ^{p-2}$.
It is easy to verify that if $\lambda_{1}$ is the first eigenvalue of
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\psi_{p}\left( v^{\prime}\right) ^{\prime}=-\lambda\psi_{p}\left( v\right)
& \quad\text{if }a<x<m:=\dfrac{a+b}{2},\\
v\left( a\right) =v^{\prime}\left( m\right) =0, &
\end{array}
\right. \label{02}
\end{equation}
and $v_{1}$ is the corresponding positive eigenfunction, then $\lambda_{1}$ is
also the first eigenvalue for (\ref{01}) with
\[
u_{1}\left( x\right) =\left\{
\begin{array}
[c]{ll}
v_{1}\left( x\right) & \quad\text{if }a\leqslant x\leqslant m,\\
v_{1}\left( a+b-x\right) & \quad\text{if }m\leqslant x\leqslant b,
\end{array}
\right.
\]
being the corresponding positive eigenfunction. Moreover, this function is
strictly increasing on $[a,m)$, strictly decreasing on $(m,b]$ and has only one
maximum point which is reached at $x=m$. Thus, $\left\Vert u_{1}\right\Vert
_{\infty}=u_{1}\left( m\right) $.
An expression for $\lambda_{1}$ is well known and can be obtained by
integration (see \cite{Otani}) as follows. First multiply (\ref{01}) by
$u_{1}^{\prime}$ and integrate the resulting equation by parts on $\left[
a,x\right] $ to obtain
\begin{equation}
\left. \psi_{p}\left( u_{1}^{\prime}\right) u_{1}^{\prime}\right\vert
_{a}^{x}-\int_{a}^{x}\psi_{p}\left( u_{1}^{\prime}\right) u_{1}^{\prime\prime}dx=-\lambda_{1}\int_{a}^{x}\psi_{p}\left( u_{1}\right)
u_{1}^{\prime}dx. \label{03}
\end{equation}
We have
\begin{equation}
\left. \psi_{p}\left( u_{1}^{\prime}\right) u_{1}^{\prime}\right\vert
_{a}^{x}=\left\vert u_{1}^{\prime}\left( x\right) \right\vert ^{p}-\left\vert u_{1}^{\prime}\left( a\right) \right\vert ^{p} \label{04}
\end{equation}
\begin{equation}
\int_{a}^{x}\psi_{p}\left( u_{1}\right) u_{1}^{\prime}dx=\int_{u_{1}(a)}^{u_{1}(x)}\psi_{p}\left( s\right) \,ds=\frac{\left\vert u_{1}\left( x\right) \right\vert ^{p}}{p}-\frac{\left\vert u_{1}\left( a\right) \right\vert ^{p}}{p}, \label{05}
\end{equation}
\begin{equation}
\int_{a}^{x}\psi_{p}\left( u_{1}^{\prime}\right) u_{1}^{\prime\prime}dx=\int_{u_{1}^{\prime}\left( a\right) }^{u_{1}^{\prime}\left( x\right) }\psi_{p}\left( s\right) \,ds=\frac{\left\vert u_{1}^{\prime}\left( x\right) \right\vert ^{p}}{p}-\frac{\left\vert u_{1}^{\prime}\left( a\right) \right\vert ^{p}}{p}. \label{06}
\end{equation}
Substituting (\ref{04}), (\ref{05}) and (\ref{06}) in (\ref{03}) we obtain
\[
\left( 1-\frac{1}{p}\right) \left[ \left\vert u_{1}^{\prime}\left(
x\right) \right\vert ^{p}-\left\vert u_{1}^{\prime}\left( a\right)
\right\vert ^{p}\right] =-\lambda_{1}\left[ \frac{\left\vert u_{1}\left(
x\right) \right\vert ^{p}}{p}-\frac{\left\vert u_{1}\left( a\right)
\right\vert ^{p}}{p}\right] ,
\]
whence
\[
\left[ \left( 1-\frac{1}{p}\right) \left\vert u_{1}^{\prime}\right\vert
^{p}+\lambda_{1}\frac{\left\vert u_{1}\right\vert ^{p}}{p}\right] _{a}^{x}=0.
\]
This means that
\[
\frac{p-1}{p}\left\vert u_{1}^{\prime}\right\vert ^{p}+\frac{\lambda_{1}}{p}\left\vert u_{1}\right\vert ^{p}\equiv C,
\]
where $C$ is a constant and $p^{\prime}=p/\left( p-1\right) $ is the
conjugate exponent of $p$. The value of $C$ can be found by computing the value of this
expression at the maximum point $m$; choosing $u_{1}$ such that $u_{1}\left(
m\right) =1$ we find
\[
C=\frac{p-1}{p}\left\vert u_{1}^{\prime}\left( m\right) \right\vert
^{p}+\frac{\lambda_{1}}{p}\left\vert u_{1}\left( m\right) \right\vert
^{p}=\frac{\lambda_{1}}{p}.
\]
Therefore
\begin{equation}
\left( p-1\right) \left\vert u_{1}^{\prime}\left( x\right) \right\vert
^{p}+\lambda_{1}\left\vert u_{1}\left( x\right) \right\vert ^{p}=\lambda_{1}
\label{07}
\end{equation}
for all $x\in\left[ a,b\right] $.
On the interval $\left[ a,m\right] $ we have $u_{1}^{\prime}\geqslant0$, hence
we can write
\begin{equation}
\frac{u_{1}^{\prime}\left( x\right) }{\sqrt[p]{1-\left\vert u_{1}\left( x\right) \right\vert ^{p}}}=\sqrt[p]{\frac{\lambda_{1}}{p-1}} \label{08}
\end{equation}
for all $x\in\left[ a,m\right] $. Integrating this equation on $\left(
a,m\right) $ leads to
\[
\frac{b-a}{2}\sqrt[p]{\frac{\lambda_{1}}{p-1}}=\int_{u_{1}(a)}^{u_{1}(m)}\frac{ds}{\sqrt[p]{1-s^{p}}}=\int_{0}^{1}\frac{ds}{\sqrt[p]{1-s^{p}}},
\]
which gives the expression
\begin{equation}
\lambda_{1}=(p-1)\left( \frac{2}{b-a}\int_{0}^{1}\frac{ds}{\sqrt[p]{1-s^{p}}}\right) ^{p}=\left( \frac{\pi_{p}}{b-a}\right) ^{p}, \label{09}
\end{equation}
where we set
\begin{equation}
\pi_{p}:=2\sqrt[p]{p-1}\int_{0}^{1}\frac{ds}{\sqrt[p]{1-s^{p}}}. \label{ppip}
\end{equation}
Making the change of variable $s=\sqrt[p]{t}$ in the last integral and using
the classical Beta function $B$ we obtain
\[
\int_{0}^{1}\frac{ds}{\sqrt[p]{1-s^{p}}}=\frac{1}{p}\int_{0}^{1}t^{\frac{1}{p}-1}(1-t)^{-\frac{1}{p}}dt=\frac{1}{p}B\left( 1-\frac{1}{p},\frac{1}{p}\right) =\frac{\pi/p}{\sin(\pi/p)}
\]
(Here one uses the reflection formula $B(x,1-x)=\Gamma(x)\Gamma(1-x)=\pi/\sin\left( \pi x\right) $ with $x=1/p$.)
Therefore
\begin{equation}
\pi_{p}=\frac{2\sqrt[p]{p-1}\left( \pi/p\right) }{\sin(\pi/p)} \label{10}
\end{equation}
and
\[
\lambda_{1}=\left( \frac{2\sqrt[p]{p-1}\left( \pi/p\right) }{(b-a)\sin
\left( \pi/p\right) }\right) ^{p}.
\]
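As a quick consistency check (our addition): for $p=2$ these formulas recover the classical quantities
\[
\pi_{2}=\frac{2\sqrt{2-1}\left( \pi/2\right) }{\sin\left( \pi/2\right) }=\pi\quad\text{and}\quad\lambda_{1}=\left( \frac{\pi}{b-a}\right) ^{2},
\]
the familiar first Dirichlet eigenvalue of $-u^{\prime\prime}$ on $\left( a,b\right) $, in agreement with $\sin_{2}=\sin$.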
When $a=0$ and $b=\pi_{p}$ we denote the function $\sqrt[p]{p-1}u_{1}$ by
$\sin_{p}.$ Thus, $\sin_{p}\left( 0\right) =0=\sin_{p}^{\prime}\left(
\pi_{p}/2\right) $, $\lambda_{1}=1$ and from (\ref{07})
\[
\left\vert \sin_{p}^{\prime}\right\vert ^{p}+\frac{\left\vert \sin
_{p}\right\vert ^{p}}{p-1}=1.
\]
It is clear from this equation that $\sin_{p}^{\prime}\left( 0\right) =1.$
We remark that $u=\sin_{p}$ is also the unique solution of the initial value
problem
\[
\left\vert u^{\prime}\right\vert ^{p}+\frac{\left\vert u\right\vert ^{p}}{p-1}=1,\quad u\left( 0\right) =0,
\]
which can be used to define this function.
Alternatively, we can define $\sin_{p}$ on the interval $\left[ 0,\pi
_{p}/2\right] $ as an inverse function. In fact, multiplying (\ref{08}) by
$\sqrt[p]{p-1}$ and using (\ref{09}) with $a=0$ and $b=\pi_{p}$ we obtain
\[
\int_{0}^{\sin_{p}\left( x\right) }\frac{ds}{\sqrt[p]{1-\frac{s^{p}}{p-1}}}=x,\quad\text{for }x\in\left[ 0,\pi_{p}/2\right] ,
\]
that is, $\sin_{p}=\zeta^{-1}$ where
\[
\zeta\left( z\right) :={\displaystyle\int_{0}^{z}}\frac{ds}{\sqrt[p]{1-\frac{s^{p}}{p-1}}},\quad\text{for }z\in\left[ 0,\sqrt[p]{p-1}\right] .
\]
With this definition, we extend $\sin_{p}$ to the interval $\left[ \pi
_{p}/2,\pi_{p}\right] $ symmetrically with respect to $\pi_{p}/2$ and
afterward to the whole real line $\mathbb{R}$ as an odd, $2\pi_{p}$-periodic function. We list the basic properties of
$\sin_{p}$:
\begin{enumerate}
\item $\sin_{p}\left( 0\right) =0=\sin_{p}\left( \pi_{p}\right) $,
$\sin_{p}\left( \pi_{p}/2\right) =\left\Vert \sin_{p}\right\Vert _{\infty
}=\sqrt[p]{p-1}$.
\item $\sin_{p}\left( x\right) $ is strictly increasing in $\left[
0,\pi_{p}/2\right] $ and strictly decreasing in $\left[ \pi_{p}/2,\pi
_{p}\right] .$
\item $\left\vert \sin_{p}^{\prime}\left( x\right) \right\vert
=\sqrt[p]{1-\dfrac{\left\vert \sin_{p}\right\vert ^{p}}{p-1}}.$
\end{enumerate}
\section{A sequence uniformly convergent to $\sin_{p}$}
Let $I_{p}=\left[ 0,\pi_{p}/2\right] $ and define the following sequence of
functions $\left\{ \phi_{n}\right\} \subset C^{1}\left( I_{p}\right) $.
Set $\phi_{0}\equiv1$ and
\[
\left\{
\begin{array}
[c]{ll}
\left( \psi_{p}\left( \phi_{n+1}^{\prime}\right) \right) ^{\prime}=-\psi_{p}\left( \phi_{n}\right) & \text{ \ \ if }x\in I_{p},\\
\phi_{n+1}\left( 0\right) =\phi_{n+1}^{\prime}\left( \pi_{p}/2\right)
=0. &
\end{array}
\right.
\]
In this section, we prove that the scaled sequence $\left\{ \sqrt[p]{p-1}\,\phi_{n}/\left\Vert \phi_{n}\right\Vert _{\infty}\right\} $ converges
uniformly to $\sin_{p}$ in $I_{p}$. Before proceeding, we recall some basic
properties of the $\psi_{p}$ functions:
\begin{description}
\item[Proposition 3.1.] (Basic properties of $\psi_{p}$) \textit{The following
holds:}
\end{description}
\begin{enumerate}
\item $\psi_{p}$\textit{ is continuous, strictly increasing and odd, for each
}$p>1.$
\item $\psi_{p}\left( ab\right) =\psi_{p}\left( a\right) \psi_{p}\left(
b\right) .$
\item $\psi_{p}\left( \dfrac{a}{b}\right) =\dfrac{\psi_{p}\left( a\right)
}{\psi_{p}\left( b\right) }$
\item $\left( \psi_{p}\right) ^{-1}=\psi_{p^{\prime}}.$
\item $\int_{0}^{t}\psi_{p}\left( s\right) ds=\dfrac{\left\vert t\right\vert
^{p}}{p}.$
\end{enumerate}
By a straightforward calculation we can find the following recursive integral
expression for the $\phi_{n}$-functions:
\begin{equation}
\phi_{n+1}\left( x\right) =\int_{0}^{x}\psi_{p^{\prime}}\left( \int
_{\theta}^{\pi_{p}/2}\psi_{p}\left( \phi_{n}\left( s\right) \right)
ds\right) d\theta. \label{11}
\end{equation}
It is clear from (\ref{11}) that each $\phi_{n}$ is positive, increasing on
$I_{p}$ and reaches its maximum value at $x=\pi_{p}/2$. One can obtain an
explicit expression for $\phi_{1}$, the second function in the sequence:
\begin{align*}
\phi_{1}\left( x\right) & =\int_{0}^{x}\psi_{p^{\prime}}\left(
\int_{\theta}^{\pi_{p}/2}\psi_{p}\left( 1\right) ds\right) d\theta\\
& =\int_{0}^{x}\psi_{p^{\prime}}\left( \frac{\pi_{p}}{2}-\theta\right)
d\theta\\
& =\int_{\pi_{p}/2-x}^{\pi_{p}/2}\psi_{p^{\prime}}\left( y\right) dy\\
& =\frac{1}{p^{\prime}}\left[ \left( \frac{\pi_{p}}{2}\right) ^{p^{\prime}}-\left( \frac{\pi_{p}}{2}-x\right) ^{p^{\prime}}\right] .
\end{align*}
Note that
\[
\left\Vert \phi_{1}\right\Vert _{\infty}=\phi_{1}\left( \frac{\pi_{p}}{2}\right) =\frac{1}{p^{\prime}}\left( \frac{\pi_{p}}{2}\right) ^{p^{\prime}}=\frac{(p-1)^{\frac{1}{p-1}}}{p^{\prime}}\left( \frac{\pi/p}{\sin\left( \pi/p\right) }\right) ^{p^{\prime}}.
\]
The subsequent $\phi_{n}$-functions, however, are very difficult to obtain explicitly
by solving the integrals analytically. On the other hand, the integrals can
easily be evaluated numerically.
\begin{description}
\item[Proposition 3.2.] $\phi_{n+1}\leqslant\left\Vert \phi_{1}\right\Vert
_{\infty}\phi_{n}$\textit{ on }$I_{p}.$
\end{description}
\noindent\textbf{Proof.} For $n=0$ the result is trivially true since
$\phi_{0}\equiv1$, so that $\phi_{1}\leqslant\left\Vert \phi_{1}\right\Vert _{\infty}\phi_{0}$. Assuming by induction that $\phi_{n}\leqslant\left\Vert
\phi_{1}\right\Vert _{\infty}\phi_{n-1}$, we have
\begin{align*}
\phi_{n+1}\left( x\right) & =\int_{0}^{x}\psi_{p^{\prime}}\left(
\int_{\theta}^{\pi_{p}/2}\psi_{p}\left( \phi_{n}\left( s\right) \right)
\,ds\right) \,d\theta\\
& \leqslant\int_{0}^{x}\psi_{p^{\prime}}\left( \int_{\theta}^{\pi_{p}/2
\psi_{p}\left( \left\Vert \phi_{1}\right\Vert _{\infty}\phi_{n-1}\left(
s\right) \right) \,ds\right) \,d\theta\\
& =\int_{0}^{x}\psi_{p^{\prime}}\left( \psi_{p}\left( \left\Vert \phi
_{1}\right\Vert _{\infty}\right) \int_{\theta}^{\pi_{p}/2}\psi_{p}\left(
\phi_{n-1}\left( s\right) \right) \,ds\right) \,d\theta\\
& =\left\Vert \phi_{1}\right\Vert _{\infty}\int_{0}^{x}\psi_{p^{\prime}
}\left( \int_{\theta}^{\pi_{p}/2}\psi_{p}\left( \phi_{n-1}\left( s\right)
\right) \,ds\right) \,d\theta\\
& =\left\Vert \phi_{1}\right\Vert _{\infty}\phi_{n}\left( x\right) .
\end{align*}
\noindent$\blacksquare$
The following technical lemma, which will be used in the sequel, can be proved
via the Cauchy mean value theorem (see \cite{AVV}) and works as a
L'H\^{o}pital-type rule for obtaining monotonicity of certain quotient functions.
\begin{description}
\item[Lemma 3.3.] \textit{Let }$f,g:\left[ a,b\right] \longrightarrow
\mathbb{R}$\textit{ be continuous on }$\left[ a,b\right] $\textit{ and differentiable
in }$\left( a,b\right) $. \textit{Suppose }$g^{\prime}(x)\neq0$\textit{ for
all }$x\in\left( a,b\right) $. \textit{If }$\dfrac{f^{\prime}}{g^{\prime}}$\textit{ is (strictly) increasing }[\textit{decreasing}],\textit{ then both
}$\dfrac{f(x)-f(a)}{g(x)-g(a)}$\textit{ and }$\dfrac{f(x)-f(b)}{g(x)-g(b)}$\textit{ are (strictly) increasing }[\textit{decreasing}]\textit{.}
\item[Theorem 3.4.] \textit{For each }$n\geqslant1$\textit{ the function
}$\dfrac{\phi_{n}}{\phi_{n+1}}$\textit{ is strictly decreasing on }$I_{p}$\textit{ and}
\end{description}
\begin{enumerate}
\item[(i)] $\dfrac{1}{\left\Vert \phi_{1}\right\Vert _{\infty}}\leqslant
\inf\limits_{I_{p}}\dfrac{\phi_{n}}{\phi_{n+1}}=\dfrac{\phi_{n}\left( \pi
_{p}/2\right) }{\phi_{n+1}\left( \pi_{p}/2\right) }=\dfrac{\left\Vert
\phi_{n}\right\Vert _{\infty}}{\left\Vert \phi_{n+1}\right\Vert _{\infty}}.$
\item[(ii)] $\left\Vert \dfrac{\phi_{n}}{\phi_{n+1}}\right\Vert _{\infty}=\psi_{p^{\prime}}\left( \dfrac{\int_{0}^{\pi_{p}/2}\psi_{p}\left(
\phi_{n-1}\left( s\right) \right) ds}{\int_{0}^{\pi_{p}/2}\psi_{p}\left(
\phi_{n}\left( s\right) \right) ds}\right) $ \ \ for $n\geqslant1.$
\item[(iii)] $\left\Vert \dfrac{\phi_{n}}{\phi_{n+1}}\right\Vert _{\infty
}\leqslant\left\Vert \dfrac{\phi_{n-1}}{\phi_{n}}\right\Vert _{\infty
}\leqslant\cdots\leqslant\left\Vert \dfrac{\phi_{1}}{\phi_{2}}\right\Vert
_{\infty}<\infty.$
\end{enumerate}
\noindent\textbf{Proof.} Since $\phi_{1}$ is strictly increasing, it follows
that $1/\phi_{1}$ is strictly decreasing. Assume by induction that $\phi
_{n-1}/\phi_{n}$ is strictly decreasing. Since
\[
\frac{\phi_{n}\left( x\right) -\phi_{n}\left( 0\right) }{\phi_{n+1}\left( x\right) -\phi_{n+1}\left( 0\right) }=\frac{\phi_{n}\left( x\right) }{\phi_{n+1}\left( x\right) },
\]
in order to show that $\phi_{n}/\phi_{n+1}$ is strictly decreasing, it
suffices, in light of Lemma 3.3, to verify that $\phi_{n}^{\prime}/\phi_{n+1}^{\prime}$ is strictly decreasing on $I_{p}$. But
\[
\frac{\phi_{n}^{\prime}\left( x\right) }{\phi_{n+1}^{\prime}\left( x\right) }=\dfrac{\psi_{p^{\prime}}\left( {\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds\right) }{\psi_{p^{\prime}}\left( {\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds\right) }=\psi_{p^{\prime}}\left( \frac{{\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds}{{\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}\right) .
\]
Since $\psi_{p^{\prime}}$ is strictly increasing and both $\int_{x}^{\pi_{p}/2}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds$ and
$\int_{x}^{\pi_{p}/2}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds$
vanish at $x=\pi_{p}/2$, we can apply the lemma again to verify that the
quotient of these integral functions is a strictly decreasing function. We have
\[
\frac{\left( {\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds\right) ^{\prime}}{\left( {\displaystyle\int_{x}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds\right) ^{\prime}}=\frac{\psi_{p}\left( \phi_{n-1}\left( x\right) \right) }{\psi_{p}\left( \phi_{n}\left( x\right) \right) }=\psi_{p}\left( \frac{\phi_{n-1}}{\phi_{n}}\right) ,
\]
which is strictly decreasing by the induction hypothesis.
The inequality in (i) follows from Proposition 3.2. Before verifying (ii) we
remark that $\left\Vert 1/\phi_{1}\right\Vert _{\infty}=\infty$ since
$\phi_{1}\left( 0\right) =0.$ In order to prove (ii) we first observe that
the monotonicity of $\phi_{n}/\phi_{n+1}$ implies that
\[
\left\Vert \dfrac{\phi_{n}}{\phi_{n+1}}\right\Vert _{\infty}=\lim
_{x\rightarrow0^{+}}\frac{\phi_{n}\left( x\right) }{\phi_{n+1}\left(
x\right) }.
\]
L'H\^{o}pital's rule then yields
\[
\lim_{x\rightarrow0^{+}}\frac{\phi_{n}\left( x\right) }{\phi_{n+1}\left(
x\right) }=\lim_{x\rightarrow0^{+}}\frac{\phi_{n}^{\prime}\left( x\right)
}{\phi_{n+1}^{\prime}\left( x\right) }=\psi_{p^{\prime}}\left( \frac
{\int_{0}^{\pi_{p}/2}\psi_{p}\left( \phi_{n-1}\left( s\right) \right)
ds}{\int_{0}^{\pi_{p}/2}\psi_{p}\left( \phi_{n}\left( s\right) \right)
ds}\right) <\infty.
\]
The proof of (iii) is a consequence of the following estimates, valid for
$n\geqslant2$:
\begin{align*}
\left\Vert \frac{\phi_{n}}{\phi_{n+1}}\right\Vert _{\infty} & =\psi_{p^{\prime}}\left( \frac{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds}{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}\right) \\
& \leqslant\psi_{p^{\prime}}\left( \frac{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) \psi_{p}\left( \dfrac{\phi_{n-1}}{\phi_{n}}\left( s\right) \right) ds}{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}\right) \\
& \leqslant\psi_{p^{\prime}}\left( \frac{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) \psi_{p}\left( \left\Vert \dfrac{\phi_{n-1}}{\phi_{n}}\right\Vert _{\infty}\right) ds}{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}\right) \\
& =\left\Vert \frac{\phi_{n-1}}{\phi_{n}}\right\Vert _{\infty}\psi_{p^{\prime}}\left( \frac{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}{{\displaystyle\int_{0}^{\pi_{p}/2}}\psi_{p}\left( \phi_{n}\left( s\right) \right) ds}\right) \\
& =\left\Vert \frac{\phi_{n-1}}{\phi_{n}}\right\Vert _{\infty}.
\end{align*}
\noindent$\blacksquare$
\begin{description}
\item[Theorem 3.5.] \textit{Let }$u_{n}:=\dfrac{\phi_{n}}{\left\Vert \phi_{n}\right\Vert _{\infty}}\in C^{1}\left( I_{p}\right) ,$\textit{ for
}$n\geqslant1.$\textit{ Then the sequence }$\left\{ u_{n}\left( x\right)
\right\} _{n\geqslant1}$\textit{ is decreasing for each }$x\in I_{p}$\textit{ and}
\[
\sqrt[p]{p-1}\,u_{n}\rightarrow\sin_{p}\text{\textit{ \ \ uniformly in} }I_{p}.
\]
\end{description}
\noindent\textbf{Proof.} In $I_{p}$ we have
\begin{align*}
\frac{u_{n}}{u_{n+1}} & =\dfrac{\phi_{n}}{\phi_{n+1}}\left( \dfrac
{\left\Vert \phi_{n}\right\Vert _{\infty}}{\left\Vert \phi_{n+1}\right\Vert
_{\infty}}\right) ^{-1}\\
& \geqslant\left( \inf\limits_{I_{p}}\dfrac{\phi_{n}}{\phi_{n+1}}\right)
\left( \dfrac{\left\Vert \phi_{n}\right\Vert _{\infty}}{\left\Vert \phi
_{n+1}\right\Vert _{\infty}}\right) ^{-1}\\
& =\left( \dfrac{\left\Vert \phi_{n}\right\Vert _{\infty}}{\left\Vert
\phi_{n+1}\right\Vert _{\infty}}\right) \left( \dfrac{\left\Vert \phi
_{n}\right\Vert _{\infty}}{\left\Vert \phi_{n+1}\right\Vert _{\infty}}\right)
^{-1}\\
& =1,
\end{align*}
that is, $\left\{ u_{n}\left( x\right) \right\} _{n\geqslant1}$ is
decreasing for each $x\in I_{p}$; since the functions $u_{n}$ are positive, the sequence is bounded below by $0$. Thus, there exists
\[
u:=\lim u_{n}.
\]
We have $\left\Vert u_{n}\right\Vert _{\infty}=1$ for each $n$. Moreover,
since
\[
\frac{\left\Vert \phi_{n}\right\Vert _{\infty}}{\left\Vert \phi_{n+1}\right\Vert _{\infty}}=\inf\limits_{I_{p}}\dfrac{\phi_{n}}{\phi_{n+1}}\leqslant\left\Vert \frac{\phi_{n}}{\phi_{n+1}}\right\Vert _{\infty}\leqslant\left\Vert \frac{\phi_{1}}{\phi_{2}}\right\Vert _{\infty}=:C,
\]
we also have, for every $x\in I_{p}$,
\begin{align*}
\left\vert u_{n}^{\prime}\left( x\right) \right\vert & =\frac{1}{\left\Vert \phi_{n}\right\Vert _{\infty}}\psi_{p^{\prime}}\left( \int_{x}^{\pi_{p}/2}\psi_{p}\left( \phi_{n-1}\left( s\right) \right) ds\right) \\
& =\frac{\left\Vert \phi_{n-1}\right\Vert _{\infty}}{\left\Vert \phi_{n}\right\Vert _{\infty}}\psi_{p^{\prime}}\left( \int_{x}^{\pi_{p}/2}\psi_{p}\left( \frac{\phi_{n-1}\left( s\right) }{\left\Vert \phi_{n-1}\right\Vert _{\infty}}\right) ds\right) \\
& \leqslant C\,\psi_{p^{\prime}}\left( \int_{0}^{\pi_{p}/2}\psi_{p}\left( u_{n-1}\right) ds\right) \\
& \leqslant C\,\psi_{p^{\prime}}\left( \int_{0}^{\pi_{p}/2}\psi_{p}\left( 1\right) ds\right) \\
& =C\left( \frac{\pi_{p}}{2}\right) ^{p^{\prime}-1}.
\end{align*}
It follows from the Arzel\`{a}-Ascoli theorem that $u_{n}\rightarrow u\in C\left( I_{p}\right) $ uniformly.
In order to conclude the proof, we just need to show that
\begin{equation}
u=\dfrac{\sin_{p}}{\sqrt[p]{p-1}}. \label{12}
\end{equation}
From (\ref{11}) we can write the following expression:
\[
u_{n+1}\left( x\right) =\gamma_{n}\int_{0}^{x}\psi_{p^{\prime}}\left(
\int_{\theta}^{\pi_{p}/2}\psi_{p}\left( u_{n}\left( s\right) \right)
ds\right) d\theta,
\]
where
\[
\gamma_{n}:=\frac{\left\Vert \phi_{n}\right\Vert _{\infty}}{\left\Vert
\phi_{n+1}\right\Vert _{\infty}}.
\]
In view of the boundedness of $\left\{ \gamma_{n}\right\} $, there exists
$\gamma:=\lim\gamma_{n_{k}}$ for some subsequence $\left\{ \gamma_{n_{k}}\right\} .$ Thus, letting $k\rightarrow\infty$ in
\[
u_{n_{k}+1}\left( x\right) =\gamma_{n_{k}}\int_{0}^{x}\psi_{p^{\prime}}\left( \int_{\theta}^{\pi_{p}/2}\psi_{p}\left( u_{n_{k}}\left( s\right) \right) ds\right) d\theta,
\]
we get
\[
u\left( x\right) =\gamma\int_{0}^{x}\psi_{p^{\prime}}\left( \int_{\theta
}^{\pi_{p}/2}\psi_{p}\left( u\left( s\right) \right) ds\right) d\theta\in
C^{1}\left( I_{p}\right) ,
\]
which means that $u$ is a positive solution to the following problem
\[
\left\{
\begin{array}
[c]{ll}
\psi_{p}\left( u^{\prime}\right) ^{\prime}=-\gamma\psi_{p}\left( u\right)
& \text{ \ \ if }x\in I_{p},\\
u\left( 0\right) =u^{\prime}\left( \pi_{p}/2\right) =0. &
\end{array}
\right.
\]
In view of the positivity of $u$, we can integrate the equation above
multiplied by $u^{\prime}$ and proceed as in the derivation of (\ref{09}) to
find $\gamma=1$. From this we conclude that in fact $\lim\gamma_{n}=1$ (the
whole sequence converges to the eigenvalue $1$) and that $u=\lim u_{n}$
satisfies the same boundary value problem that $\sin_{p}/\left\Vert \sin
_{p}\right\Vert $ does. Since both $u$ and $\sin_{p}$ are positive and
$\left\Vert u\right\Vert _{\infty}=\left\Vert \sin_{p}/\left\Vert \sin
_{p}\right\Vert \right\Vert _{\infty}=1$, we must have
\[
u=\dfrac{\sin_{p}}{\left\Vert \sin_{p}\right\Vert }
\]
whence (\ref{12}) follows. $\blacksquare$
\section{Numerical Results}
Next we examine the computational time of each method. Computations were
performed on a WindowsXP/Pentium 4-2.8GHz platform, using the GCC compiler.
Although the method of computing $\sin_{p}$ by solving an ODE, suggested in
\cite{BR2} (which we implemented by means of a standard fourth-order
Runge-Kutta method), is by far the fastest, the computational times of the other two
methods are competitive, the inverse power method being on average more than
twice as fast as the power series method of \cite{Lindqvist} for values of $p$
greater than 2. Also, the average number of $8$ iterations that the inverse
power method uses to obtain the same (and sometimes better; see Table 2)
accuracy as the differential equation method of \cite{BR2} is quite
remarkable, especially taking into account that the functions $\phi_{n}$
converge to $0$ rather rapidly. We emphasize that the computational time of
the inverse power method is not the main subject of this presentation. The
method demands the computation of double integrals at each iteration for each
grid point. We opted for a classical method for computing these integrals that
is easy to implement and reasonably fast, namely, the composite Simpson
rule. However, a greater effort spent in lessening the
computational time of the numerical integrations certainly would be reflected
in a substantial decrease in the time spent computing $\sin_{p}$ overall.
Nevertheless, by considering the accuracy and the comparison scale among the
three methods (in the range of milliseconds), we may say that the results
presented in this paper validate the inverse power method as an effective and
reasonably fast method for numerically obtaining $\sin_{p}$.
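For readers who want to experiment, the inverse power iteration (\ref{11}) is easy to prototype. The sketch below is our own illustration, not the code used for the tables: it replaces the composite Simpson rule by trapezoidal quadrature, and it uses a closed form for $\pi_{p}$ (obtained by integrating the energy identity for this normalization; it reduces to $\pi$ at $p=2$) that should be checked against one's preferred definition.

```python
import numpy as np

def psi(q, t):
    # psi_q(t) = |t|^(q-2) t = sign(t)|t|^(q-1); the sign form avoids 0**negative at t = 0
    return np.sign(t) * np.abs(t) ** (q - 1.0)

def pi_p(p):
    # Closed form consistent with sin_p(pi_p/2) = (p-1)^(1/p) (an assumption to verify):
    # pi_p = 2 (p-1)^(1/p) * (pi/p) / sin(pi/p); reduces to pi when p = 2.
    return 2.0 * (p - 1.0) ** (1.0 / p) * (np.pi / p) / np.sin(np.pi / p)

def sin_p(p, m=101, tol=1e-8, max_iter=100):
    """Inverse power iteration
        phi_{n+1}(x) = int_0^x psi_{p'}( int_theta^{pi_p/2} psi_p(phi_n(s)) ds ) dtheta,
    normalized at each step; the normalized iterates converge to sin_p/||sin_p||_inf."""
    q = p / (p - 1.0)                       # conjugate exponent p'
    x = np.linspace(0.0, pi_p(p) / 2.0, m)
    dx = x[1] - x[0]
    u = x / x[-1]                           # initial guess: a linear ramp
    for _ in range(max_iter):
        f = psi(p, u)
        # cumulative trapezoidal integral int_0^{x_i} f, then flip to int_{x_i}^{pi_p/2} f
        cum = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0))) * dx
        inner = cum[-1] - cum
        g = psi(q, inner)
        new = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0))) * dx
        new /= new.max()                    # normalization (this plays the role of gamma_n)
        if np.max(np.abs(new - u)) < tol:
            u = new
            break
        u = new
    return x, (p - 1.0) ** (1.0 / p) * u    # rescale: ||sin_p||_inf = (p-1)^(1/p)
```

For $p=2$ the iterates converge to $\sin x$ on $[0,\pi/2]$, and, consistent with Table 3, the $10^{-8}$ tolerance is typically met in under ten iterations.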
Below we present the average time spent in computing $\sin_{p}$ on the whole
interval $I_{p}$, divided into $101$ grid points, by each method for six values of
$p$ (the average was taken over five computer runs); the stopping criterion in
each method was an error tolerance of $10^{-8}$ between successive iterations
and fewer than 500 terms in the power series:
\[
\begin{tabular}
[c]{|l|c|c|c|c|c|c|}\hline
$p$ & $1.1$ & $1.5$ & $2.0$ & $2.5$ & $3.0$ & $3.5$\\\hline
Inverse power method & $21.5$ & $32.1$ & $1.1$ & $37.7$ & $37.8$ &
$31.7$\\\hline
Differential equation method & $1.9$ & $1.8$ & $1.1$ & $1.5$ & $1.5$ &
$1.5$\\\hline
Power series & $92.9$ & $2.2$ & $2.0$ & $79.6$ & $79.3$ & $73.3$\\\hline
\end{tabular}
\ \
\]
\begin{center}
{\small Table 1: Average time (in milliseconds) for the computation of
$\sin_{p}$ on $I_{p}$ for each method.}\vspace{0.2cm}
\end{center}
Besides the trivial point $0$, the only point where the value of $\sin_{p}$ is
exactly known is $\pi_{p}/2$, with $\sin_{p}\left( \pi_{p}/2\right)
=\sqrt[p]{p-1}$. In the next table we present the computed value of
$\sin_{p}\left( \pi_{p}/2\right) $ obtained using each method:
\[
\begin{tabular}
[c]{|l|c|c|c|c|c|c|}\hline
$p$ & $1.1$ & $1.5$ & $2.0$ & $2.5$ & $3.0$ & $3.5$\\\hline
$\sqrt[p]{p-1}$ & $0.123285$ & $0.629961$ & $1$ & $1.17608$ & $1.25992$ &
$1.29926$\\\hline
Inverse power method & $0.123285$ & $0.629961$ & $1$ & $1.17608$ & $1.25992$ &
$1.29926$\\\hline
Differential equation method & $0.123285$ & $0.629966$ & $1.00017$ & $1.17647$
& $1.26044$ & $1.29983$\\\hline
Power series & $5.3\times10^{128}$ & $0.629961$ & $1$ & $1.17608$ & $1.25993$
& $1.29928$\\\hline
\end{tabular}
\ \ \
\]
\begin{center}
{\small Table 2: Value of $\sin_{p}\left( \pi_{p}/2\right) =\sqrt[p]{p-1}$
obtained independently using each method.}\vspace{0.2cm}
\end{center}
\noindent Notice that the inverse power method appears to be more accurate
when computing $\sin_{p}$ at values close to $\pi_{p}/2$. Indeed, in order to
obtain a good approximation close to this point, it was necessary to allow for
a greater number of terms in the power series than would be necessary for
points far from $\pi_{p}/2$:
\[
\begin{tabular}
[c]{|c|c|c|c|c|c|c|}\hline
$p$ & $1.1$ & $1.5$ & $2.0$ & $2.5$ & $3.0$ & $3.5$\\\hline
Inverse power method & $5$ & $8$ & $9$ & $8$ & $8$ & $8$\\\hline
Power Series & $501$ & $13$ & $8$ & $470$ & $501$ & $501$\\\hline
\end{tabular}
\
\]
\begin{center}
{\small Table 3: Number of iterations.}\vspace{0.2cm}\vspace{0.2cm}
\end{center}
\noindent We see that the number of iterations used by the inverse power
method is remarkably low. Below, we present the graphs of $\sin_{p}$ for the
same values of $p$ computed using the three methods (except for $p=1.1$, since
the power series appears to diverge in this case). Notice that all three
methods agree very well with each other, being virtually indistinguishable.
\[
\includegraphics[scale=0.4]
{p_1.1.eps}\ \ \ \ \includegraphics[scale=0.4]
{p_1.5.eps}
\]
\[
\includegraphics[scale=0.4]
{p_2.0.eps}\ \ \ \ \includegraphics[scale=0.4]
{p_2.5.eps}
\]
\[
\includegraphics[scale=0.4]
{p_3.0.eps}\ \ \ \ \includegraphics[scale=0.4]
{p_3.5.eps}
\]
\section*{Acknowledgments}
The second author would like to thank the support of FAPEMIG and CNPq.
| {
"timestamp": "2010-11-16T02:04:25",
"yymm": "1011",
"arxiv_id": "1011.3486",
"language": "en",
"url": "https://arxiv.org/abs/1011.3486",
"abstract": "In this paper, we discuss a new iterative method for computing $\\sin_{p}$. This function was introduced by Lindqvist in connection with the unidimensional nonlinear Dirichlet eigenvalue problem for the $p$-Laplacian. The iterative technique was inspired by the inverse power method in finite dimensional linear algebra and is competitive with other methods available in the literature.",
"subjects": "Classical Analysis and ODEs (math.CA); Numerical Analysis (math.NA)",
"title": "Computing the $\\sin_{p}$ function via the inverse power method",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976310532836284,
"lm_q2_score": 0.8198933403143929,
"lm_q1q2_score": 0.8004705039512656
} |
https://arxiv.org/abs/2209.02588 | A generalization of Chu-Vandermonde's Identity | We present and prove a general form of Vandermonde's identity and use it as an alternative solution to a classic probability problem. | \section{Introduction}
Vandermonde's identity, its diverse proofs, and its applications have been a subject of research throughout the centuries. Nowadays, it is a classic identity whose proof is available in most combinatorics-related books~\cite{knuth94}. However, there are still efforts to give new proofs of this well-established identity, e.g.,~\cite{spivey16}. Beyond proofs of its original form, several specific generalizations have been proposed: some works concentrated on higher dimensions~\cite{yaacov17}, while others treated complex arguments~\cite{mestrovic18} or generalized the coefficients~\cite{knuth94}.
However, our focus was not on generalizing this identity directly. While working through exercises in a stochastic processes course, we encountered a classic problem, which we present in section~\ref{seq:app} together with an alternative solution. In writing a formal proof of that solution, we arrived at an equation that we later realized could be considered a generalization of \textit{Vandermonde's identity}.
This article gives a general form of Vandermonde's identity, covering the first and higher orders of this identity. Then, we prove the mentioned classic problem, borrowed from Ross's book~\cite{ross10}, using the first order of the general Vandermonde identity. Lastly, we briefly discuss some of our thoughts about this proof.
\section{The General Form of Vandermonde's Identity.}
At first, we encountered the first order of Vandermonde's identity. We prove it as follows.
\begin{theorem}[First Order of the General Form of Vandermonde's Identity.]\label{1storder} Assume $k\leq m$ and $r-k\leq n$. Then
$\sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} = m {m+n-1 \choose r-1}
\label{eq:mainFormula}$, where ${x \choose y}=\frac{x!}{(x-y)!y!}$, and $x,y\in \mathbb{N}$ .
\end{theorem}
\begin{proof}
We start with
\begin{equation*}
\begin{aligned}
m \left( x+1 \right)^{m+n-1} & = m \left( x+1 \right)^{m-1} \left( x+1 \right)^{n}\\
\end{aligned}
\end{equation*}
By $m \left({1+x}\right) ^{m-1} = \sum_{i=0}^{m}{i{m \choose i} x^{i-1}}$~\cite{Koh} we have,
\begin{equation*}
\begin{aligned}
& = \sum_{i=0}^{m}{i{m \choose i} x^{i-1}} \sum_{j=0}^{n}{{n \choose j} x^{j}}.\\
\end{aligned}
\end{equation*}
Extracting a factor of $\frac{1}{x}$ from $x^{i-1}$ in the above equation, we have
\begin{equation*}
\begin{aligned}
& = \frac{1}{x}\sum_{i=0}^{m}{i{m \choose i} x^{i}} \sum_{j=0}^{n}{{n \choose j} x^{j}}.\\
\end{aligned}
\end{equation*}
Using the \textit{polynomial ring} product formula, $\sum_{i=0}^{m}{a_{i}{x^{i}}} \sum_{j=0}^{n}{b_{j}x^{j}} =
\sum_{r=0}^{m+n}{{ \left( \sum_{k=0}^{r}{a_{k}b_{r-k}} \right) }x^{r}}$ ~\cite{whitelow}, we convert the above formula as follows.
\begin{equation*}
\begin{aligned}
& = \frac{1}{x}\sum_{r=0}^{m+n}{{ \left( \sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} \right) }x^{r}}\\
& = \sum_{r=0}^{m+n}{{ \left( \sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} \right) }x^{r-1}}\\
& = 0 + \sum_{r=1}^{m+n}{{ \left( \sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} \right) }x^{r-1}}
\end{aligned}
\end{equation*}
, or
\begin{equation}
m \left( x+1 \right)^{m+n-1} = \sum_{r=1}^{m+n}{{ \left( \sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} \right) }x^{r-1}}.
\label{eq:firstExpansion}
\end{equation}
Equation~\ref{eq:firstExpansion} is our first expansion. On the other hand, we consider the \textit{binomial expansion},
\begin{equation*}
\left( x+1 \right)^{m+n-1} = \sum_{r=0}^{m+n-1}{{m+n-1 \choose r}}x^{r}.
\end{equation*}
By changing the index variable, the equation is
\begin{equation*}
= \sum_{r=1}^{m+n}{{m+n-1 \choose r-1}}x^{r-1}.
\end{equation*}
Multiplying both sides by $m$,
\begin{equation}
m \left( x+1 \right)^{m+n-1} = \sum_{r=1}^{m+n}{m{m+n-1 \choose r-1}}x^{r-1}
\label{eq:binomialexp}
\end{equation}
The left-hand sides of equations~\ref{eq:firstExpansion} and~\ref{eq:binomialexp} are equal. Thus,
\begin{equation*}
\begin{aligned}
\sum_{r=1}^{m+n}{{ \left( \sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} \right) }x^{r-1}}= \sum_{r=1}^{m+n}{m{m+n-1 \choose r-1}}x^{r-1}
\end{aligned}
\end{equation*}
, which results in
\begin{equation}
\begin{aligned}
\sum_{k=0}^{r}{k{m \choose k}{n \choose r-k}} = m {m+n-1 \choose r-1}.
\end{aligned}
\label{eq:rightSidesAreEqual}
\end{equation}
\end{proof}
\textbf{Theorem}~\ref{1storder} is a leading part of an alternative solution to the problem in section~\ref{seq:app}.
However, going further, we realized that \textbf{Theorem}~\ref{1storder} is a particular case of a more general form. Thus, we introduce the general form of \textit{Vandermonde's identity} as the theorem below.
\begin{theorem}[General Form of Vandermonde's Identity] \label{generalVI} Assume $l\leq k$, $k\leq m$, and $r-k\leq n$. Then
\begin{equation}
\sum_{k=0}^{r}{{k \brack l}{m \choose k}{n \choose r-k}} = {m \brack l} {m+n-l \choose r-l}
\label{eq:generalizedFormula}
\end{equation}
, where ${x \brack y}=\frac{x!}{(x-y)!}$ and ${x \choose y}=\frac{x!}{(x-y)!y!}$, and $x,y\in \mathbb{N}$.
\end{theorem}
\begin{proof}
Let $f(x)= (1 + x)^{m}$. Its binomial expansion is
\begin{equation*}
f(x) = (1 + x)^{m} = \sum_{i=0}^{m}{{m \choose i}x^{i}}.
\end{equation*}
Then, by induction, its $l$-th derivative is
\begin{equation}
f^{\left( l \right)}(x) = {m \brack l} (1+x)^{m-l} = \sum_{i=0}^{m}{{i \brack l}{m \choose i}x^{i-l}}.
\label{eq:mthderivative}
\end{equation}
Now, we start with ${m \brack l} \left( x+1\right)^{m+n-l}$. We have
\begin{equation*}
\begin{aligned}
{m \brack l} \left( x+1 \right)^{m+n-l} & = {m \brack l} \left( x+1 \right)^{m-l} \left( x+1 \right)^{n}\\
\end{aligned}
\end{equation*}
Using equation~\ref{eq:mthderivative} and proceeding as in the corresponding steps of \textbf{Theorem}~\ref{1storder}, we apply the \textit{polynomial ring} product formula, and the equation reads as follows.
\begin{equation*}
\begin{aligned}
& = \sum_{i=0}^{m}{{i \brack l}{m \choose i} x^{i-l}} \sum_{j=0}^{n}{{n \choose j} x^{j}}\\
& = \sum_{r=0}^{m+n}{{ \left( \sum_{k=0}^{r}{{k \brack l}{m \choose k}{n \choose r-k}} \right) }x^{r-l}}\\
& = 0 + \sum_{r=l}^{m+n}{{ \left( \sum_{k=0}^{r}{{k \brack l}{m \choose k}{n \choose r-k}} \right) }x^{r-l}}
\end{aligned}
\end{equation*}
, or
\begin{equation}
\begin{aligned}
{m \brack l} \left( x+1 \right)^{m+n-l} & = \sum_{r=l}^{m+n}{{ \left( \sum_{k=0}^{r}{{k \brack l}{m \choose k}{n \choose r-k}} \right) }x^{r-l}}
\end{aligned}
\label{eq:gen1stExpansion}
\end{equation}
On the other hand, we use binomial expansion and have
\begin{equation*}
\left( x+1 \right)^{m+n-l} = \sum_{r=0}^{m+n-l}{{m+n-l \choose r}}x^{r}.
\end{equation*}
Changing the index variable and multiplying both sides by ${m \brack l}$, we get
\begin{equation}
{m \brack l} \left( x+1 \right)^{m+n-l} = \sum_{r=l}^{m+n}{{m \brack l}{m+n-l \choose r-l}}x^{r-l}
\label{eq:gen2ndExpansion}
\end{equation}
The left-hand sides of equations~\ref{eq:gen1stExpansion} and~\ref{eq:gen2ndExpansion} are the same, so the right-hand sides are the same as well, or simply
\begin{equation}
\begin{aligned}
\sum_{k=0}^{r}{{k \brack l}{m \choose k}{n \choose r-k}} = {m \brack l} {m+n-l \choose r-l}.
\end{aligned}
\end{equation}
\end{proof}
\begin{remark}
In \textbf{Theorem}~\ref{generalVI}, if
\begin{enumerate}
\item $l=0$, then it is equivalent to Vandermonde's identity, or
\begin{equation}
\begin{aligned}
\sum_{k=0}^{r}{{m \choose k}{n \choose r-k}} = {m+n \choose r};
\end{aligned}
\end{equation}
\item $l=1$, then it is equivalent to the first order of the general form of Vandermonde's identity (\textbf{Theorem}~\ref{1storder});
\item $l>1$, then it gives the higher orders of the general form of Vandermonde's identity.
\end{enumerate}
\end{remark}
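As a quick sanity check (ours, not part of the proof), Theorem~\ref{generalVI} can be verified by brute force for small parameters, with the convention that binomial coefficients and falling factorials vanish outside their natural range:

```python
from math import comb, factorial

def falling(x, y):
    # [x brack y] = x!/(x-y)!, taken to be 0 when y > x
    return factorial(x) // factorial(x - y) if y <= x else 0

def lhs(m, n, r, l):
    # sum_{k=0}^{r} [k brack l] C(m,k) C(n,r-k); math.comb(a,b) is 0 when b > a
    return sum(falling(k, l) * comb(m, k) * comb(n, r - k) for k in range(r + 1))

def rhs(m, n, r, l):
    # [m brack l] C(m+n-l, r-l)
    return falling(m, l) * comb(m + n - l, r - l)
```

Exhaustively checking all $0\le l\le r\le m+n$ with $m,n\le 5$ confirms the identity; the case $l=0$ recovers the classical Vandermonde identity.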
\section{Application.}\label{seq:app} In this section, we present the problem that led us to the first order of the general form of Vandermonde's identity, and we prove it using \textbf{Theorem}~\ref{1storder}.
\subsection{Statement of the Problem.}
``An urn has $r$ red and $w$ white balls that are randomly removed one at a time. Let $R_{i}$ be the event that the $i$th ball removed is red. Find $P\left( R_{i} \right)$''~\cite{ross10}.
\textbf{Solution.} The answer is $\frac{r}{r+w}$. \textit{Also sprach Ross}: ``... each of the $r+w$ balls is equally likely to be the $i$th ball removed''. In addition to the intuition offered by Sheldon Ross, we would like a direct, formal alternative proof of the answer. Thus, we tried to prove it, and here it is.
\begin{proof}
Let $r_{j}$ denote the event of choosing exactly $j$ red balls in the previous $i-1$ steps. Then,
\begin{equation*}
P\left(R_{i}\right) = \sum_{j=0}^{i-1}{P\left( R_{i} \cap r_{j} \right)}\\
\end{equation*}
By applying the \textit{law of total probability}, we have
\begin{equation*}
= \sum_{j=0}^{i-1}{P\left( R_{i} | r_{j} \right) P\left(r_{j}\right) }
\end{equation*}
$P\left(r_{j}\right)$, as mentioned above, is the probability of choosing $j$ red balls among the $i-1$ balls that we have already picked. Thus, its value equals $\frac{{ r \choose j}{ w \choose i-1-j}}{{r+w \choose i-1}}$. In addition, $P\left( R_{i} | r_{j}\right)$ is the probability of choosing a red ball in the $i$th step. Because we have already picked $j$ red balls in the previous steps, its value equals $\frac{{r-j \choose 1}}{{r+w-\left( i-1 \right) \choose 1}}$. Consequently, we have
\begin{equation*}
\begin{aligned}
& = \sum_{j=0}^{i-1}{\left(\frac{{r-j \choose 1}}{{r+w-\left( i-1 \right) \choose 1}} \frac{{ r \choose j}{ w \choose i-1-j}}{{r+w \choose i-1}}\right)} \\
&= \sum_{j=0}^{i-1}{\left(\frac{(r-j)}{(r+w-i+1)} \frac{{ r \choose j}{ w \choose i-1-j}}{{r+w \choose i-1}}\right)}\\
& = \frac{1}{ \left(r+w-i+1\right) {r+w \choose i-1}} \sum_{j=0}^{i-1}{\left((r-j) { r \choose j}{ w \choose i-1-j}\right)} \\
& = \frac{1}{\left(r+w-i+1\right) {r+w \choose i-1}}
\left(\sum_{j=0}^{i-1}{r{ r \choose j}{ w \choose i-1-j}} - \sum_{j=0}^{i-1}{j{ r \choose j}{ w \choose i-1-j}}\right)\\
\end{aligned}
\end{equation*}
The term $\sum_{j=0}^{i-1}{j{ r \choose j}{ w \choose i-1-j}}$ in the above formula matches the left-hand side of \textbf{Theorem}~\ref{1storder} (applied with $m=r$, $n=w$, and the upper index $r$ replaced by $i-1$). Thus, we proceed as follows,
\begin{equation*}
\begin{aligned}
& = \frac{1}{\left(r+w-i+1\right) {r+w \choose i-1}}
\left(r{r+w \choose i-1} - r {r+w-1 \choose i-2 } \right)\\
& = \frac{r}{\left(r+w-i+1\right) {r+w \choose i-1}}
\left({r+w \choose i-1} - {r+w-1 \choose i-2 } \right)\\
& = \frac{r}{\left(r+w-i+1\right) {r+w \choose i-1}}
\left({r+w \choose i-1} - \frac{i-1}{r+w}{r+w \choose i-1 } \right)\\
& = \frac{r{r+w \choose i-1 }}{\left(r+w-i+1\right) {r+w \choose i-1}}
\left( 1 - \frac{i-1}{r+w} \right) \\
& = \frac{r}{\left(r+w-i+1\right)}
\left( \frac{\left(r+w-i+1\right)}{r+w} \right)\\
& = \frac{r}{r+w}.
\end{aligned}
\label{eq:ithRedBallProbability}
\end{equation*}
\end{proof}
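The chain of equalities above can also be mirrored numerically. The sketch below (our own check; the function name is ours) evaluates the sum from the first display of the proof directly and confirms that it collapses to $r/(r+w)$ for every valid $i$:

```python
from math import comb

def p_ith_red(r, w, i):
    # P(R_i) = sum_j P(R_i | r_j) P(r_j), with
    #   P(r_j)       = C(r,j) C(w,i-1-j) / C(r+w,i-1)
    #   P(R_i | r_j) = (r-j) / (r+w-(i-1));
    # math.comb returns 0 outside the natural range, so out-of-range j contribute nothing
    return sum(
        (r - j) / (r + w - (i - 1))
        * comb(r, j) * comb(w, i - 1 - j) / comb(r + w, i - 1)
        for j in range(i)
    )
```

Sweeping small $r$, $w$ and all $1\le i\le r+w$ reproduces the constant value $\frac{r}{r+w}$ up to floating-point error.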
\section{Final Thoughts} There have been many efforts to expand or extend Vandermonde's identity, or Vandermonde's convolution. For example, Graham et al.~\cite{knuth94} defined some general forms of identities, including Vandermonde's identity, in chapter~5. However, that generalization is different from ours: it concerns the coefficients of $m$ or $n$ in the equation. The same happens in works such as~\cite{fang07,mestrovic18}. Yaacov~\cite{yaacov17} investigated the Vandermonde determinant in higher dimensions. However, to our knowledge, we have proposed this type of generalization of Vandermonde's identity for the first time, or at least we have given a new proof of this generalization. In our proposal, we consider the general form of Vandermonde's identity from the point of view of its combinatorial coefficients. It is worth mentioning that our proposed generalization can be viewed as a higher derivative of Vandermonde's identity.
\bibliographystyle{amsplain}
| {
"timestamp": "2022-09-07T02:51:52",
"yymm": "2209",
"arxiv_id": "2209.02588",
"language": "en",
"url": "https://arxiv.org/abs/2209.02588",
"abstract": "We present and prove a general form of Vandermonde's identity and use it as an alternative solution to a classic probability problem.",
"subjects": "General Mathematics (math.GM)",
"title": "A generalization of Chu-Vandermonde's Identity",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105273220727,
"lm_q2_score": 0.8198933425148213,
"lm_q1q2_score": 0.8004705015785019
} |
https://arxiv.org/abs/1811.03008 | Limit points of normalized prime gaps | We show that at least 1/3 of positive real numbers are in the set of limit points of normalized prime gaps. More precisely, if $p_n$ denotes the $n$th prime and $\mathbb{L}$ is the set of limit points of the sequence $\{(p_{n+1}-p_n)/\log p_n\}_{n=1}^\infty,$ then for all $T\geq 0$ the Lebesque measure of $\mathbb{L} \cap [0,T]$ is at least $T/3.$ This improves the result of Pintz (2015) that the Lebesque measure of $\mathbb{L} \cap [0,T]$ is at least $(1/4-o(1))T,$ which was obtained by a refinement of the previous ideas of Banks, Freiberg, and Maynard (2015). Our improvement comes from using Chen's sieve to give, for a certain sum over prime pairs, a better upper bound than what can be obtained using Selberg's sieve. Even though this improvement is small, a modification of the arguments Pintz and Banks, Freiberg, and Maynard shows that this is sufficient. In addition, we show that there exists a constant $C$ such that for all $T \geq 0$ we have $\mathbb{L} \cap [T,T+C] \neq \emptyset,$ that is, gaps between limit points are bounded by an absolute constant. | \section{Introduction and main results}
The Prime Number Theorem tells us that the gap $p_{n+1}-p_n$ between consecutive primes is asymptotically $\log p_n$ on average ($p_n$ denotes the $n$th prime). It is therefore reasonable to consider the distribution of the normalized prime gaps $(p_{n+1}-p_n)/\log p_n;$ by heuristics given by Cram\'er's model we expect that for all $b>a \geq0$
\begin{align} \label{heur}
\frac{1}{N} \bigg| \bigg\{ n \leq N: \, (p_{n+1}-p_n)/\log p_n \in [a,b] \bigg\} \bigg| \sim \int_a^b e^{-u} \, du, \quad \quad N \to \infty.
\end{align}
That is, we expect the sequence of normalized prime gaps to satisfy a Poisson distribution (cf. Soundararajan's account \cite{Sound} for details). Gallagher \cite{Gal} has shown this to be true assuming a sufficiently uniform version of the Hardy-Littlewood conjecture.
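The heuristic (\ref{heur}) is easy to probe empirically. The following sketch (ours, purely for illustration) sieves the primes and tallies how often the normalized gap lands in $[a,b]$; at modest heights the frequencies are of the same order as $\int_a^b e^{-u}\,du$, though convergence is slow:

```python
from math import log

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    is_p = bytearray([1]) * (limit + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytes(len(range(i * i, limit + 1, i)))
    return [i for i in range(limit + 1) if is_p[i]]

def gap_frequency(limit, a, b):
    # empirical frequency of (p_{n+1}-p_n)/log p_n in [a,b] for primes below `limit`
    ps = primes_up_to(limit)
    gaps = [(q - p) / log(p) for p, q in zip(ps[1:], ps[2:])]  # start from p = 3
    return sum(a <= g <= b for g in gaps) / len(gaps)
```

For instance, `gap_frequency(10**6, 0.0, 1.0)` lands in the rough vicinity of $1-e^{-1}\approx 0.63$, in line with the Poisson prediction.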
To approach (\ref{heur}), consider the following conjecture of Erd\"os \cite{Erdos}: if $\mathbb{L}$ denotes the set of limit points of the sequence $\{(p_{n+1}-p_n)/\log p_n\}_{n=1}^\infty,$ then $\mathbb{L} = [0,\infty].$ By the 1931 result of Westzynthius \cite{West} we know that $\infty \in \mathbb{L},$ and from the seminal work of Goldston, Pintz and Y\i ld\i r\i m \cite{GPY} it follows that $0 \in \mathbb{L}.$ Besides $0$ and $\infty$, no other real number is known to be in $\mathbb{L}$.
It is known that $\mathbb{L}$ has positive Lebesgue measure (Erd\"os \cite{Erdos} and Ricci \cite{Ricci}). Goldston and Ledoan \cite{GL} extended the method of Erd\"os to show that intervals of a certain specific form, e.g. $[1/8,2]$, contain limit points. In addition, Pintz \cite{Pintz2} has shown that there is an ineffective constant $c$ such that $[0,c] \subseteq \mathbb{L}$ (by applying the ground-breaking work of Zhang \cite{Zhang} on bounded gaps between primes).
Note that $\mathbb{L}$ is Lebesgue-measurable since it is a closed set. Hildebrand and Maier \cite{HM} showed that there exists a positive constant $c$ such that the Lebesgue measure of $\mathbb{L} \cap [0,T] $ is at least $ cT$ for all sufficiently large $T$. Following the breakthrough of Maynard \cite{May} on bounded gaps between primes, it was proved by Banks, Freiberg and Maynard \cite{BFM} that this holds with $c=1/8 -o(1),$ that is, asymptotically at least 1/8 of positive real numbers are limit points. Pintz \cite{Pintz} improved this to $c=1/4 -o(1)$ by modifying the argument of \cite{BFM}; Pintz proved this for more general normalizations as well. This was then extended to more general, and especially larger, normalizing factors than $\log p_n$ by Baker and Freiberg \cite{BF}, by combining the arguments with the work of Ford, Green, Konyagin, Maynard, and Tao \cite{Ford} on long prime gaps.
For clarity we only consider the set of limit points $\mathbb{L}$ with the logarithmic normalization as defined above. Our main results are deduced from the following
\begin{theorem} \label{maint}
Let $\beta_1 \leq \beta_2 \leq \beta_3 \leq \beta_4$ be any real numbers. Then
\begin{align*}
\mathbb{L} \cap \{\beta_j -\beta_i: \, \, 1 \leq i < j \leq 4\} \neq \emptyset.
\end{align*}
\end{theorem}
The proof of this will be given in Section \ref{mainsec}. We note that \cite[Theorem 1.1]{BFM} gives this for nine real numbers in place of four, and \cite[Theorem 1]{Pintz} is the same but for five real numbers. Using the same argument as in the proof of \cite[Corollary 1.2]{BFM}, this implies that the Lebesgue measure of $\mathbb{L} \cap [0,T]$ is $\geq (1/3-o(1))T$ as $T \to \infty$, where the $o(1)$ is ineffective. Using a more elaborate construction based on similar ideas, we will show below
\begin{cor} \label{cor1}
For all $T > 0$ we have
\begin{align*}
\mu (\mathbb{L} \cap [0,T]) \geq T/3,
\end{align*}
where $\mu$ denotes the Lebesgue measure on $\mathbb{R}$.
\end{cor}
Another way to approach the conjecture that $\mathbb{L} = [0,\infty]$ would be to show that for any given positive real $x$ we can find a limit point close to $x$; using Theorem \ref{maint}, we will show below that gaps between limit points are bounded by an absolute (ineffective) constant (note that this actually follows already from \cite[Theorem 1.1]{BFM}, as is evident from the proof):
\begin{cor} \label{cor2}
There exists a constant $C \geq 0$ such that for all $T\geq 0$ we have
\begin{align*}
\mathbb{L} \cap [T,T+C] \neq \emptyset.
\end{align*}
\end{cor}
In the language of combinatorics, a set $A \subseteq [0,\infty]$ is called syndetic if there is a constant $C$ such that every interval of length $C$ intersects with $A$ (cf. \cite{BFW}, for example). Thus, Corollary \ref{cor2} can be rephrased as saying that the set of limit points $\mathbb{L}$ is syndetic.
\begin{remark} By similar ideas as in the work of Baker and Freiberg \cite{BF}, one can extend our results to other normalizations of prime gaps, replacing $\log p_n$ by a function which can grow somewhat quicker than the logarithm (cf. \cite[Theorem 6.2]{BF} for what normalizations are allowed). We have restricted our attention to the logarithmic normalization to avoid having to define cumbersome notation, with the hope that this makes the article more accessible.
\end{remark}
\subsection{Proof of Corollary \ref{cor1}}
Corollary \ref{cor1} follows from combining Theorem \ref{maint} with the following general proposition:
\begin{prop} Let $k \geq 2$ and let $\mathbb{B} \subseteq [0,\infty)$ be any Lebesgue-measurable set satisfying the following property: for any real numbers $\beta_1 \leq \beta_2 \leq \cdots \leq \beta_k$ we have
\begin{align*}
\mathbb{B} \cap \{\beta_j -\beta_i: \, \, 1 \leq i < j \leq k\} \neq \emptyset.
\end{align*}
Then for any $T >0$ we have
\begin{align*}
\mu (\mathbb{B} \cap [0,T]) \geq T/(k-1).
\end{align*}
\end{prop}
\begin{proof}
For any $m \geq 2$ and for any real numbers $\beta_1,\dots,\beta_m$, define the set of differences
\begin{align*}
\Delta(\beta_1,\dots,\beta_m) := \{\beta_j -\beta_i: \, \, 1 \leq i < j \leq m\}.
\end{align*}
For any $\epsilon > 0,$ let us inductively define increasing sequences of real numbers $r_j$ and $s_j$ as follows: set $r_0=s_0=0,$ and for $j >0$, having defined $r_0,\dots, r_{j-1}$ and $s_0,\dots,s_{j-1},$ let
\begin{align*}
S_{j}=S_j(r_0,\dots,r_{j-1}) := \{ s > r_{j-1}: \, \Delta(r_0,r_1,\dots, r_{j-1}, s) \cap \mathbb{B} = \emptyset \}, \quad \quad s_{j} := \inf S_j,
\end{align*}
and pick any $r_j \in S_j$ with $r_j \in [s_j,s_j+\epsilon).$ Then, by the assumption on $\mathbb{B}$, we have $s_\ell = \infty$ (i.e. $S_\ell = \emptyset$) for some $\ell \leq k-1$; we stop there and set $r_\ell = \infty$.
We now note the following property which holds for all $j \in \{0,1,\dots,\ell-1\}:$ since $s_{j+1}$ is the infimum of $S_{j+1}$, for any $t \in [r_j,s_{j+1})$ we have $\Delta(r_0,r_1,\dots, r_{j}, t) \cap \mathbb{B} \neq \emptyset$. Since $\Delta(r_0,r_1,\dots, r_{j}) \cap \mathbb{B} = \emptyset,$ this implies that
\begin{align*}
[r_j,s_{j+1}) \subseteq \bigcup_{i=0}^j (\mathbb{B}+r_i).
\end{align*}
Hence, for all $t \in (r_j,s_{j+1}]$ we have by sub-additivity
\begin{align} \label{tin}
\mu([r_j,t)) =\mu \bigg([r_j,t)\cap \bigcup_{i=0}^j (\mathbb{B}+r_i)\bigg) \leq \sum_{i=0}^j \mu\left( [r_j,t)\cap (\mathbb{B}+r_i)\right).
\end{align}
Let $T > 0.$ Then there is some $\lambda \leq \ell -1$ with $T \in (r_\lambda, r_{\lambda+1}]$ (since $r_\ell = \infty$). Denote $\tilde{T}= \min \{T,s_{\lambda+1}\}$. Then, by using $r_j < s_j+\epsilon$, we have
\begin{align*}
\mu([0,T)) = \sum_{j=0}^{\lambda-1}\mu([r_j,r_{j+1})) + \mu([r_\lambda,T))
\leq (\lambda+1)\epsilon + \sum_{j=0}^{\lambda-1}\mu([r_j,s_{j+1})) + \mu([r_\lambda,\tilde{T})) .
\end{align*}
By using (\ref{tin}) for all of the summands we get
\begin{align*}
T=\mu([0,T)) & \leq (\lambda+1)\epsilon + \sum_{j=0}^{\lambda-1}\sum_{i=0}^j \mu( [r_j,s_{j+1})\cap (\mathbb{B}+r_i)) + \sum_{i=0}^\lambda \mu( [r_\lambda,\tilde{T})\cap (\mathbb{B}+r_i)) \\
&\leq (\lambda+1)\epsilon + \sum_{j=0}^{\lambda-1}\sum_{i=0}^j \mu( [r_j,r_{j+1})\cap (\mathbb{B}+r_i)) + \sum_{i=0}^\lambda \mu( [r_\lambda,T)\cap (\mathbb{B}+r_i)) \\
&= (\lambda+1)\epsilon + \sum_{i=0}^\lambda \bigg( \sum_{j=i}^{\lambda-1} \mu( [r_j,r_{j+1})\cap (\mathbb{B}+r_i)) + \mu( [r_\lambda,T)\cap (\mathbb{B}+r_i)) \bigg) \\
&= (\lambda+1)\epsilon + \sum_{i=0}^\lambda \mu( [r_i,T)\cap (\mathbb{B}+r_i)) \\
&= (\lambda+1)\epsilon + \sum_{i=0}^\lambda \mu( [0,T-r_i)\cap \mathbb{B}) \leq (\lambda+1)\epsilon +(\lambda+1) \mu( [0,T)\cap \mathbb{B}).
\end{align*}
Hence, for any $T > 0$ we have $ \mu( [0,T)\cap \mathbb{B}) \geq T/(\lambda+1) - \epsilon \geq T/(k-1) - \epsilon$ by using $\lambda+1 \leq \ell \leq k-1$. Since $\epsilon>0$ can be made arbitrarily small, we have $\mu( [0,T)\cap \mathbb{B}) \geq T/(k-1).$
\end{proof}
\subsection{Proof of Corollary \ref{cor2}}
Corollary \ref{cor2} follows from Theorem \ref{maint} using the following general proposition. This is also proved in the work of Bergelson, Furstenberg, and Weiss \cite[Section 1, second paragraph]{BFW} but we give our own different proof of this.
\begin{prop} \label{gen2} Let $\mathbb{B} \subseteq [0,\infty)$ be any set satisfying the following property: there exists an integer $k \geq 2$ such that for any real numbers $\beta_1 \leq \beta_2 \leq \cdots \leq \beta_k$ we have
\begin{align*}
\mathbb{B} \cap \{\beta_j -\beta_i: \, \, 1 \leq i < j \leq k\} \neq \emptyset.
\end{align*}
Then there exists a constant $C\geq 0$ (ineffective) such that for all $T\geq 0$ we have
\begin{align*}
\mathbb{B} \cap [T,T+C] \neq \emptyset.
\end{align*}
\end{prop}
To prove this proposition we first prove the following weaker version:
\begin{lemma} \label{boundlemma} Let $\mathbb{B} \subseteq [0,\infty)$ satisfy the assumptions of Proposition \ref{gen2}. Let $w$ be any given function such that $w(T) \to \infty$ as $T \to \infty,$ and $w(T) >0$ for $T>0.$ Then there exists a constant $C,$ depending only on the choice of $w,$ such that for all $T > C$ we have
\begin{align*}
\mathbb{B} \cap [T-w(T),T] \neq \emptyset.
\end{align*}
\end{lemma}
\begin{proof}
Define
\begin{align*}
\mathcal{A}:=\{A > 0: \, \, \mathbb{B} \cap [A-w(A),A] = \emptyset \}.
\end{align*}
Suppose that the conclusion of the lemma is not true, so that $\mathcal{A}$ is unbounded. Then we can choose $A_1, A_2, \dots, A_{k-1} \in \mathcal{A}$ such that
\begin{align}
& A_1 < A_2 < \cdots < A_{k-1}, \\
& w(A_1) < w(A_2) < \cdots < w(A_{k-1}) \quad \text{and} \label{2}\\
& A_j < w(A_{j+1}) \quad \text{for} \quad j=1,2, \dots, k-2. \label{3}
\end{align}
Define $k$ real numbers by $\beta_0:=0$ and $\beta_j:=A_{j}$ for $j=1,2, \dots, k-1.$ Then by (\ref{2}) and (\ref{3}) we have $\beta_i < w(A_j)$ if $1\leq i<j \leq k-1.$ Hence,
\begin{align*}
\{\beta_j-\beta_i: \, \, 0 \leq i < j \leq k-1\} \subseteq \bigcup_{j=1}^{k-1} [A_j-w(A_j),A_j].
\end{align*}
But by assumption we also have
\begin{align*}
\mathbb{B} \cap \{\beta_j-\beta_i: \, \, 0 \leq i < j \leq k-1\} \neq \emptyset,
\end{align*}
which gives a contradiction.
\end{proof}
\emph{Proof of Proposition \ref{gen2}.}
Suppose that no such constant $C$ exists. This implies that for every $C$ there are arbitrarily large $A$ such that $\mathbb{B} \cap [A-C,A] = \emptyset$. Hence, it is possible to find a strictly increasing sequence of positive real numbers $A_n \to \infty$ as $n \to \infty,$ such that
\begin{align*}
\mathbb{B} \cap [A_n -n, A_n ] = \emptyset
\end{align*}
for all $n \geq 1.$ Fix any such sequence $A_n$ and define a step function $w$ by setting (with $A_0=0$)
\begin{align*}
w(A) = n \quad \text{for} \quad A \in (A_{n-1},A_n] \quad \text{for any} \, \, n \geq 1.
\end{align*}
Then $w(A) \to \infty$ as $A \to \infty$, and there are arbitrarily large $A$ such that $\mathbb{B} \cap [A-w(A),A] = \emptyset,$ namely $A = A_n$ for any $n\geq 1$. This is a contradiction with Lemma \ref{boundlemma}. \qed
\subsection{Outline of the proof of Theorem \ref{maint}}
The proof of Theorem \ref{maint} will occupy us for the remainder of the article; our proof builds heavily on the earlier work of Banks, Freiberg and Maynard \cite{BFM}, and the refinement of Pintz \cite{Pintz} to their argument. We now give an informal outline of the basic ideas and indicate our modifications to them.
A finite set of integers $\mathcal{H}$ is said to be admissible if for every prime $p$ the set $\mathcal{H}$
avoids at least one residue class modulo $p$, that is, if
\begin{align*}
\bigg| \bigg\{ n \,\, (p): \, \prod_{h \in \mathcal{H}} (n+h) \equiv 0 \quad (p) \bigg\} \bigg| < p.
\end{align*}
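Admissibility is easy to check computationally: only primes $p \leq |\mathcal{H}|$ need to be tested, since for $p > |\mathcal{H}|$ the elements of $\mathcal{H}$ cannot occupy all $p$ residue classes. A minimal sketch (the function names are our own, not from the text):

```python
def is_admissible(H):
    """Check that H avoids at least one residue class mod p for every prime p.

    Only primes p <= len(H) matter: for larger p the len(H) elements
    of H cannot cover all p residue classes.
    """
    def primes_upto(n):
        sieve = [True] * (n + 1)
        sieve[:2] = [False, False]
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [p for p, is_p in enumerate(sieve) if is_p]

    for p in primes_upto(len(H)):
        if len({h % p for h in H}) == p:  # H covers every class mod p
            return False
    return True

# {0, 2, 6} is admissible (a prime-triplet pattern); {0, 2, 4} is not,
# since mod 3 it hits the residues 0, 2, 1, i.e. all classes.
print(is_admissible([0, 2, 6]), is_admissible([0, 2, 4]))  # → True False
```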
Let $N$ be large and suppose we are given an admissible $K$-tuple $\mathcal{H} = \{h_1,\dots, h_K\}$ with $h_j \leq C \log N$ for all $j$, for some large constant $C.$ Then by a variant of the Erd\H{o}s-Rankin construction (cf. \cite[Section 5]{BFM}), one can show that there is an integer $b$ and a smooth modulus $W < N^\epsilon$ such that for any $N < n \leq 2N$ with $n \equiv b \,(W),$ if there are prime numbers in the interval $[n,n+C\log N]$, then they must belong to the set $n+\mathcal{H}$.
By using the Maynard-Tao sieve, we can show that there exists $N < n \leq 2N$ with $n \equiv b \,(W)$ such that $n+\mathcal{H}$ contains prime numbers once $K=|\mathcal{H}|$ is large enough. Furthermore, suppose that we have a partition $\mathcal{H} = \mathcal{H}_1 \cup \mathcal{H}_2 \cup \cdots \cup \mathcal{H}_{M}$ into $M$ sets of equal size. Then we can show that there exists a constant $A$ such that for any integer $a \geq 1,$ if $M = \lceil Aa \rceil +1,$ then for at least $a+1$ distinct indices $j$ the set $n+\mathcal{H}_j$ contains a prime number. That is, the prime numbers that we find by the Maynard-Tao sieve are not too concentrated on any particular set $n+\mathcal{H}_j.$
The constant $A$ is determined by how well we can control sums over prime pairs; more precisely, it is the smallest constant for which, for all distinct $h,h' \in \mathcal{H},$ we can show the bound
\begin{align} \label{pairsout}
\sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} 1_{\mathbb{P}}(n+h)1_{\mathbb{P}}(n+h')\bigg( \sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i}} \lambda_{d_1, \dots, d_K} \bigg)^2 \leq (A+o(1)) X,
\end{align}
where $X$ is the expected main term and $\lambda_{d_1, \dots, d_K}$ are sieve weights of Maynard-Tao type supported on $d_1\cdots d_K \leq N^\delta$ for some small $\delta >0$. In \cite[Section 4]{BFM}, Selberg's upper bound sieve is used to show this for $A=4.$ We improve this to $A=3.99$ by using Chen's sieve \cite{Chen}, \cite{Pan} (cf. Proposition \ref{pairs} below).
The reason why this small improvement is sufficient is as follows: we choose $a=100$ so that $\lceil 3.99a \rceil +1 = 4a,$ and partition our tuple
\begin{align*}
\mathcal{H} = \mathcal{H}_1 \cup \mathcal{H}_2 \cup \mathcal{H}_3 \cup \mathcal{H}_4, \quad \quad \quad \quad \mathcal{H}_i = \bigcup_{j=1}^a \mathcal{H}_{ij}, \quad i \in \{1,2,3,4\}.
\end{align*}
Then we find $N < n \leq 2N$ with $n \equiv b \,(W)$ such that for at least $a+1$ distinct $(i,j)$ the set $n+\mathcal{H}_{ij}$ contains a prime number. Thus, by the pigeon-hole principle we must have at least two indices $i \neq i'$ such that both $n+\mathcal{H}_i, n+\mathcal{H}_{i'}$ contain primes. By the restriction $n\equiv b \, (W)$ given by the modified Erd\H{o}s-Rankin construction, we then know that there are two consecutive primes, one in $n+\mathcal{H}_i$ and one in $n+\mathcal{H}_{i'},$ for some $i \neq i'.$ For $\beta_1 \leq \beta_2 \leq \beta_3 \leq \beta_4$ as in Theorem \ref{maint}, it is then enough to choose $\mathcal{H}_i$ so that for all $h \in \mathcal{H}_i$ we have $h = (\beta_i + o(1)) \log N.$ From this argument we see that the exact numerical value of $A=3.99$ is not important; what matters is that $A$ is strictly less than $4.$
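The numerology of this step can be checked directly; the following sketch (variable names are ours) verifies that $\lceil 3.99 a \rceil + 1 = 4a$ for $a=100$, and records the pigeonhole observation that $a+1$ marked subsets among $4$ groups of $a$ subsets cannot all lie in one group.

```python
import math
from fractions import Fraction

a = 100
A = Fraction(399, 100)          # A = 3.99, kept exact to avoid float rounding
M = math.ceil(A * a) + 1        # number of subsets H_ij in the partition
assert M == 4 * a               # ceil(399) + 1 = 400 = 4a

# Pigeonhole: the 4a subsets H_ij are arranged in 4 groups of a subsets each.
# If at least a + 1 of them contain a prime, two of these subsets must lie
# in distinct groups i != i', since a single group holds only a subsets.
assert a + 1 > a
print(M)  # → 400
```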
To show the bound (\ref{pairsout}) with $A=3.99,$ we require a Bombieri-Vinogradov type equidistribution result for primes, where the moduli run over multiples of $W < N^\epsilon.$ The possibility of exceptional zeros of $L$-functions causes some technical problems, but the result \cite[Theorem 4.2]{BFM} turns out to be sufficient. Since we are using Chen's sieve, we also need to extend this to almost-primes; this is done in Section \ref{bvsec}. In Section \ref{chensec} we apply Chen's sieve to obtain the required bound (\ref{pairsout}) for prime pairs (Proposition \ref{pairs}). We then state and prove in Section \ref{maysec} the precise version of the Maynard-Tao sieve which we will use (Proposition \ref{strong}), and in Section \ref{mainsec} we prove our main result Theorem \ref{maint}.
\begin{remark}
By the same argument, if we could show the bound (\ref{pairsout}) with any constant $A <3$ in place of $3.99,$ we would obtain Theorem \ref{maint} with the sequence of four real numbers replaced by three. This in turn would give that $\mu(\mathbb{L} \cap [0,T]) \geq T/2.$ Similarly, if we had (\ref{pairsout}) with any constant $A < 2$ in place of $3.99,$ we could show that $\mathbb{L} =[0,\infty],$ which is the conjecture of Erd\H{o}s. However, by the parity principle this should be just as hard as obtaining a lower bound for such a sum over prime pairs, which would immediately imply $\mathbb{L} =[0,\infty]$ (cf. \cite[Chapter 16]{FI} for a quantitative version, due to Bombieri, of the parity principle).
\end{remark}
\subsection{Notations}
We use the following asymptotic notations: for positive functions $f,g,$ we write $f \ll g$ or $f= \mathcal{O}(g)$ if there is a constant $C$ such that $f \leq C g,$ and we write $f \asymp g$ if $g \ll f \ll g.$ The implied constant may depend on some parameter, which is then indicated in a subscript (e.g. $\ll_{\epsilon}$).
We write $f=o(g)$ if $f/g \to 0$ for large values of the variable.
In general, $C$ stands for some large constant, which may not be the same from place to place. For variables we write $n \sim N$ meaning $N<n \leq eN$ (an $e$-adic interval), and $n \asymp N$ meaning $N/C < n < CN$ (a $C^2$-adic interval) for some constant $C>1$ which is large enough depending on the situation. If not otherwise stated the symbols $p,q,r$ denote primes and $d,k,\ell,m,n$ denote integers.
For a statement $E$ we denote by $1_E$ the characteristic function of that statement. For a set $A$ we use $1_A$ to denote the characteristic function of $A,$ so that $1_\mathbb{P}$ will denote the characteristic function of primes.
We define $P(w):= \prod_{p\leq w} p,$ and for any integer $d$ we write $P^-(d):= \min \{p: \, p | d\},$ $P^+(d):= \max \{p: \, p | d\}.$ The $k$-fold divisor function is denoted by $\tau_k(d).$ We denote the ceiling function by $\lceil \cdot \rceil$, that is, $\lceil x \rceil $ is the smallest integer $n \geq x.$
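For concreteness, the arithmetic functions just defined can be sketched as follows (our own helper code; $\tau_k$ is computed via its value $\binom{e+k-1}{k-1}$ on prime powers $p^e$, using multiplicativity):

```python
from collections import Counter
from math import comb

def prime_factorization(d):
    """Prime factorization of d >= 2 as a Counter {p: e}, by trial division."""
    factors, p = Counter(), 2
    while p * p <= d:
        while d % p == 0:
            factors[p] += 1
            d //= p
        p += 1
    if d > 1:
        factors[d] += 1
    return factors

def P_minus(d):
    """P^-(d): the smallest prime factor of d >= 2."""
    return min(prime_factorization(d))

def P_plus(d):
    """P^+(d): the largest prime factor of d >= 2."""
    return max(prime_factorization(d))

def tau_k(k, d):
    """tau_k(d): the number of ways to write d as an ordered product of k factors."""
    result = 1
    for e in prime_factorization(d).values():
        result *= comb(e + k - 1, k - 1)
    return result

print(P_minus(60), P_plus(60), tau_k(2, 12))  # → 2 5 6
```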
Overall we use similar notations as in \cite{BFM}, especially when we use the Maynard-Tao sieve; these are recalled in the text as needed.
\subsection*{Acknowledgements} I am grateful to my supervisor Kaisa Matom\"aki for support and comments. I also express my gratitude to Emmanuel Kowalski for helpful comments as well as for hospitality during my visit to ETH Z\"urich. I wish to thank James Maynard for bringing the article \cite{BFM} to my attention. I also wish to thank Pavel Zorin-Kranich for useful suggestions and the anonymous referee for comments. During the work the author was supported by a grant from the Magnus Ehrnrooth Foundation.
\section{Modified Bombieri-Vinogradov Theorem} \label{bvsec}
As was outlined above, we need to show an upper bound of type (\ref{pairsout}) for prime pairs, where the modulus $W$ can be as large as $N^\epsilon.$ For this purpose we require a modified version of the Bombieri-Vinogradov Theorem. Before stating this we need the following lemma on exceptional zeros of Dirichlet $L$-functions (this is \cite[Lemma 4.1]{BFM}):
\begin{lemma}Let $T \geq 3$ and $P \geq T^{1/\log_2 T}.$ For a sufficiently small constant $c > 0,$ there is at most one modulus $q \leq T$ with $P^+(q) \leq P$ and one primitive character $\chi$ modulo $q$ such that the function $L(s,\chi)$ has a zero in the region
\begin{align*}
\Re(s) \geq 1-\frac{c}{\log P}, \quad \quad |\Im(s)| \leq \exp \bigg( \log P / \sqrt{\log T}\bigg).
\end{align*}
If such a character $\chi$ mod $q$ exists, then $\chi$ is real, the function $L(s,\chi)$ has at most one zero in the above region, this zero is real and simple, and
\begin{align*}
P^+(q) \gg \log q \gg \log_2 T.
\end{align*}
\end{lemma}
Fix a constant $c>0$ for which the above lemma holds. Similarly as in \cite{BFM}, if such an exceptional modulus $q \leq T$ exists with $P = T^{1/\log_2 T}$, we define
\begin{align} \label{Z}
Z_T = P^+(q),
\end{align}
and we set $Z_T =1$ if no such modulus exists. We then have the following variant of the Bombieri-Vinogradov Theorem (this is \cite[Theorem 4.2]{BFM}):
\begin{prop} \label{bv} \emph{\textbf{(Modified Bombieri-Vinogradov).}} Let $N > 2$ and fix constants $C > 0,$ $\epsilon > 0,$ and $\delta > 0$. Let $q_0 < N^{\epsilon}$ be a square-free integer with $P^+(q_0) < N^{\epsilon/ \log_2 N}.$ Then for $\epsilon$ small enough we have
\begin{align*}
\sum_{\substack{q \leq N^{1/2-\delta} \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \max_{(a,q) = 1} \bigg | \sum_{\substack{ n \leq N \\ n \equiv a \, (q)}} \Lambda (n) \, - \frac{1}{\phi(q)} \sum_{\substack{ n \leq N}} \Lambda (n)\bigg | \, \ll_{\delta, C} \, \frac{N}{\phi(q_0) \log^C N}.
\end{align*}
\end{prop}
From the proof of \cite[Theorem 4.2]{BFM} we obtain the following lemma, which we require for the proof of Proposition \ref{bv2} below:
\begin{lemma} \label{char} With the same notations and assumptions as in Proposition \ref{bv} we have
\begin{align}
\sup_{\substack{A,B \\ AB \leq N^{1/2-\delta} \\ A \leq q_0}} \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\chi \, \, (ab)} \bigg | \sum_{n \leq N} \Lambda(n) \chi(n) \bigg | \, \ll_C \frac{N}{\log^C N},
\end{align}
where $\Sigma '$ denotes the sum over primitive characters modulo $ab.$
\end{lemma}
Since we plan to apply Chen's sieve, we also require a similar equidistribution result for almost-primes. To prove such a result we require the large sieve for multiplicative characters, which follows from Theorem 9.10 of \cite{FI}:
\begin{lemma} \label{large}\emph{\textbf{(Large sieve for multiplicative characters).}} For any sequence $c_n$ of complex numbers and for any $M,N \geq 1$ we have
\begin{align*}
\sum_{q \leq Q} \frac{q}{\phi(q)} \sideset{}{'} \sum_{\chi \, \, (q)} \bigg | \sum_{M< n \leq M+N} c_n \chi(n)\bigg |^2 \, \leq \, (Q^2+N) \sum_n |c_n|^2.
\end{align*}
\end{lemma}
To state the equidistribution result for almost-primes, we need to set up some notation: fix $0 < \alpha < 1/2,$ and for sufficiently large $N$ let $A_1$ satisfy $N^\alpha \ll A_1 \ll N^{1-\alpha}$. Define
\begin{align} \label{p0}
\Lambda_0(n) := (f \ast g)(n),
\end{align}
where $f(m) = 1_{\mathbb{P}}(m)(\log m)1_{m \leq A_1},$ and $g$ is any function such that $|g(n)| \, \ll 1,$ and $g(n) \neq 0$ only if $P^-(n) \geq N^{\alpha}$ and $n \asymp N/A_1$. Note that then $\Lambda_0(n)$ is supported on almost-primes $n \ll N$.
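As an illustration of the shape of $\Lambda_0$, here is a toy Dirichlet convolution with placeholder choices of $f$ and $g$ (the actual $f$ and $g$ in the text live at the scales $A_1$ and $N/A_1$; everything below is our own illustrative sketch):

```python
import math

def dirichlet_convolve(f, g, n):
    """(f * g)(n) = sum over d | n of f(d) * g(n / d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def is_prime(m):
    return m > 1 and all(m % p for p in range(2, int(m ** 0.5) + 1))

A1 = 50
# f(m) = 1_P(m) (log m) 1_{m <= A1}, as in the text:
f = lambda m: math.log(m) if is_prime(m) and m <= A1 else 0.0
# placeholder g: an indicator of a size condition (the text only requires
# |g| << 1, supported on n of size about N/A1 with no small prime factors)
g = lambda n: 1.0 if n <= 10 else 0.0

# (f * g)(21) picks up the factorizations 21 = 3 * 7 and 21 = 7 * 3:
value = dirichlet_convolve(f, g, 21)
print(value)  # log 3 + log 7 = log 21 ≈ 3.044
```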
We then have that Proposition \ref{bv} holds also with $\Lambda(n)$ replaced by $\Lambda_0 (n)$:
\begin{prop} \label{bv2}\emph{\textbf{(Modified Bombieri-Vinogradov for almost-primes).}} Let $N > 2$ and fix constants $C > 0,$ $\epsilon > 0,$ and $\delta > 0$. Let $q_0 < N^{\epsilon}$ be a square-free integer with $P^+(q_0) < N^{\epsilon/ \log_2 N}.$ Let $\Lambda_0(n)$ be as in (\ref{p0}). Then for all small enough $\epsilon$ we have
\begin{align*}
\sum_{\substack{q \leq N^{1/2-\delta} \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \max_{(a,q) = 1} \bigg | \sum_{\substack{ n \equiv a \, (q)}} \Lambda_0 (n) \, - \frac{1}{\phi(q)} \sum_{\substack{ n }} \Lambda_0 (n)\bigg| \, \ll_{\delta, C} \, \frac{N}{\phi(q_0) \log^C N}.
\end{align*}
\end{prop}
\begin{proof} The basic idea is to use the large sieve inequality for large moduli, and Lemma \ref{char} for small moduli. For convenience we set $D:=N^{1/2-\delta}.$ Using the expansion
\begin{align*}
\sum_{\substack{ n \equiv a \, (q)}} \Lambda_0 (n) \, - \frac{1}{\phi(q)} \sum_{\substack{ n }} \Lambda_0 (n) = \frac{1}{\phi(q)}\sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \overline{\chi}(a) \sum_{n } \Lambda_0 (n) \chi (n),
\end{align*}
we are reduced to obtaining the bound
\begin{align} \label{ave}
\sum_{\substack{q \leq D \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \bigg | \sum_{n } \Lambda_0 (n) \chi (n) \bigg| \, \ll_{\delta, C} \, \frac{N}{\phi(q_0) \log^C N}.
\end{align}
We then replace the character $\chi$ modulo $q$ by the primitive character $\chi'$ modulo $q'$ which induces $\chi$; we have
\begin{align*}
\chi(n) = \chi'(n) - \chi'(n)1_{(n,q/q') > 1}.
\end{align*}
Hence, the left-hand side of (\ref{ave}) is bounded by
\begin{align} \label{ave2}
\sum_{\substack{q \leq D \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \bigg | \sum_{n } \Lambda_0 (n) \chi' (n) \bigg| + \sum_{\substack{q \leq D \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \bigg | \sum_{m,n } f(m) g(n) \chi' (mn) 1_{(mn,q/q') > 1} \bigg|.
\end{align}
We have
\begin{align*}
1_{(mn,q/q') > 1} = 1_{(n,q/q') > 1} + 1_{(m,q/q') > 1} -1_{(m,q/q') > 1}1_{(n,q/q') > 1}.
\end{align*}
Define $h_1(n,d) := 1$ and $h_2(n,d) := 1_{(n,d)>1}.$ Then (\ref{ave2}) is bounded by
\begin{align} \label{ave3}
\sum_{i,j=1}^2 \sum_{\substack{q \leq D \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \bigg | \sum_{m,n } f(m)h_i(m,q/q')\chi'(m) g(n)h_j(n,q/q') \chi' (n) \bigg|.
\end{align}
If $i =2$ or $j=2,$ we remove the additional conditions for $q$, write $q=dq'$, and bound the sum by
\begin{align*}
\sum_{\substack{q \leq D}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} &\bigg | \sum_{m,n } f(m)h_i(m,q/q')\chi'(m) g(n)h_j(n,q/q') \chi' (n) \bigg| \\
& \ll \sum_{d \leq D} \frac{1}{\phi(d)} \sum_{\substack{q' \leq D}} \frac{1}{\phi(q')} \sideset{}{'}\sum_{\substack{\chi \, \, (q')}} \bigg | \sum_{m,n } f(m)h_i(m,d)\chi(m) g(n)h_j(n,d) \chi (n) \bigg| \\
& \ll (\log N) \sum_{d \leq D} \frac{1}{\phi(d)} \sup_{E \leq D} \frac{1}{E} \bigg( \sum_{\substack{q' \sim E}} \frac{q'}{\phi(q')} \sideset{}{'} \sum_{\substack{\chi \, \, (q')}} \bigg | \sum_{m } f(m)h_i(m,d)\chi(m) \bigg|^2 \bigg)^{1/2} \\
& \hspace{80pt} \cdot\bigg( \sum_{\substack{q' \sim E}} \frac{q'}{\phi(q')} \sideset{}{'}\sum_{\substack{\chi \, \, (q')}} \bigg | \sum_{n } g(n)h_j(n,d) \chi (n) \bigg|^2 \bigg)^{1/2},
\end{align*}
where in the last bound we have split the sum over $q'$ dyadically and applied Cauchy-Schwarz. By Lemma \ref{large} and by the assumptions on $f$ and $g,$ the last expression is bounded by
\begin{align} \label{j2}
(\log N)\sum_{d \leq D} \frac{1}{\phi(d)} \sup_{E \leq D} \frac{1}{E} &\bigg( \bigg(E^2 + A_1 \bigg) \sum_{m } |f(m)h_i(m,d)|^2 \bigg)^{1/2} \\ \nonumber
& \hspace{20pt} \cdot\ \bigg( \bigg(E^2 + N/A_1 \bigg) \sum_{n } |g(n)h_j(n,d)|^2 \bigg)^{1/2}.
\end{align}
Suppose at first that $j=2$ so that $h_j(n,d)= 1_{(n,d)>1}.$ Since $g(n)$ is supported on $P^-(n) \geq N^\alpha,$ this means that $(n,d) \geq N^\alpha.$ We obtain that (\ref{j2}) is bounded by
\begin{align*}
(\log N)&\sum_{d \leq D} \frac{1}{\phi(d)} \sup_{E \leq D} \frac{1}{E} \bigg( \bigg(E^2 + A_1 \bigg) \sum_{m } |f(m)|^2 \bigg)^{1/2} \\
& \hspace{150pt} \cdot\ \bigg( \bigg(E^2 + N/A_1 \bigg) \sum_{\substack{k| d \\ N^{\alpha} \leq k \leq D}} \sum_{n \asymp N/(A_1k)} |g(kn)|^2 \bigg)^{1/2} \\
& \ll (\log^2 N) \sum_{d \leq D} \frac{\tau(d)^{1/2}}{\phi(d)} \sup_{E \leq D} \frac{1}{E} \bigg( \bigg(E^2 + A_1 \bigg) A_1 \bigg)^{1/2} \bigg( \bigg(E^2 + N/A_1 \bigg) N^{1-\alpha}/A_1 \bigg)^{1/2} \\
& \leq (\log^4 N) \sup_{E \leq D} ( E N^{(1-\alpha)/2} + N^{1-\alpha/2}/A_1 + A_1 + N^{1-\alpha/2} /E) \ll N^{1- \alpha/3},
\end{align*}
which is sufficient. For $i=2, j=1,$ since $f(m)=1_{\mathbb{P}}(m)(\log m)1_{m \leq A_1},$ we have that if $(m,d)>1,$ then $m$ is a prime dividing $d$. Hence, by a similar argument as above we get a bound $\ll N^{1- \alpha/3}$.
For $i=j=1$ we have to estimate
\begin{align} \label{j1}
\sum_{\substack{q \leq D \\ q_0 | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \sum_{\substack{\chi \, \, (q) \\ \chi \neq \chi_0}} \bigg | \sum_{m,n } f(m)\chi'(m) g(n) \chi' (n) \bigg|.
\end{align}
We begin by extracting a factor of $1/\phi(q_0)$ similarly as in the proof of \cite[Theorem 4.2]{BFM}: if $q'$ denotes the modulus of $\chi'$, then (\ref{j1}) is bounded by (writing $q'=ab,$ where $a | q_0$ and $(b,q_0)=1$; recall that $q_0$ is square-free)
\begin{align*}
\sum_{\substack{q' \leq D \\ (q', Z_{N^{2\epsilon}})=1}} \sideset{}{'}\sum_{\substack{\chi \, \, (q') }} &\bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi(n) \bigg| \sum_{\substack{q \leq D \\ [q',q_0] | q \\ (q, Z_{N^{2\epsilon}})=1}} \frac{1}{\phi(q)} \\
& \ll \frac{\log N}{\phi (q_0)} \sum_{a | q_0} \sum_{\substack{b \leq D/a \\ (b, q_0Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi (n) \bigg| \\
& \ll \frac{\log^3 N}{\phi (q_0)} \sup_{\substack{A,B \\ AB \leq D \\ A \leq q_0}} \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi (n) \bigg|.
\end{align*}
Hence, it remains to show that
\begin{align*}
\sup_{\substack{A,B \\ AB \leq D \\ A \leq q_0}} \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi (n) \bigg| \, \ll_C \frac{N}{\log^C N}.
\end{align*}
For $B \geq N^{\epsilon}$ we have by Cauchy-Schwarz and Lemma \ref{large}
\begin{align*}
& \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi (n) \bigg| \\
& \ll \frac{1}{B} \bigg( \sum_{q \ll AB} \frac{q}{\phi(q)} \sideset{}{'}\sum_{\substack{\chi \, \, (q) }} \bigg | \sum_{m } f(m)\chi(m) \bigg|^2\bigg)^{1/2} \bigg(\sum_{q \ll AB} \frac{q}{\phi(q)} \sideset{}{'}\sum_{\substack{\chi \, \, (q) }} \bigg | \sum_{n } g(n) \chi (n) \bigg|^2 \bigg)^{1/2} \\
& \ll \frac{\log N}{B} \bigg((AB)^2 A_1 + A_1^2 \bigg)^{1/2} \bigg( (AB)^2 N/A_1 + (N/A_1)^2 \bigg)^{1/2} \\
& \leq (\log N) ( A^2 B N^{1/2} + A N / A_1^{1/2} + A_1^{1/2} A N^{1/2} + N/B ) \ll N^{1-\epsilon},
\end{align*}
if $\epsilon$ is small enough in terms of $\delta$ and $\alpha.$
For $B < N^{\epsilon}$ we replace $f(m)$ by $\Lambda(m)1_{m \leq A_1};$ the resulting error term is trivially bounded by
\begin{align*}
\sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} &\bigg | \sum_{\substack{p^k \leq A_1 \\ k \geq 2}} \sum_{n } \log(p) \chi(p^k) g(n) \chi (n) \bigg| \\
& \hspace{20pt} \ll (AB)^2 N^{1-\alpha/2} \log N \ll N^{1-\alpha/3}
\end{align*}
if $\epsilon$ is sufficiently small. We then use Cauchy-Schwarz to get
\begin{align} \nonumber
\sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} & \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } \Lambda(m)1_{m \leq A_1} \chi(m) g(n) \chi (n) \bigg| \\ \label{smallb}
& \ll \bigg( \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m \leq A_1 } \Lambda (m) \chi(m) \bigg|^2\bigg)^{1/2} \\ \nonumber
& \hspace{140pt} \cdot \bigg(\frac{1}{B}\sum_{q \ll AB} \frac{q}{\phi(q)} \sideset{}{'}\sum_{\substack{\chi \, \, (q) }} \bigg | \sum_{n } g(n) \chi (n) \bigg|^2 \bigg)^{1/2}.
\end{align}
Since $AB < N^{2\epsilon} < A_1^{1/2-\delta},$ we may apply Lemma \ref{char} with $A_1$ in place of $N$ (decreasing $\epsilon$ if necessary), which yields
\begin{align*}
\sum_{\substack{A < a \leq 2A \\ a | q_0}}& \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{ m \leq A_1 } \Lambda(m)\chi(m) \bigg|^2 \\
& \ll A_1 \sum_{\substack{A < a \leq 2A \\ a | q_0}} \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m \leq A_1 } \Lambda(m)\chi(m) \bigg| \ll_C \frac{A_1^2}{\log^{2(C+5)} N}.
\end{align*}
Using Lemma \ref{large} to bound the sum with $g(n) \chi (n)$ in (\ref{smallb}) we get that
\begin{align*}
\sum_{\substack{A < a \leq 2A \\ a | q_0}} & \sum_{\substack{B \leq b \leq 2B \\ (b, q_0 Z_{N^{2\epsilon}})=1}} \frac{1}{\phi (b)} \sideset{}{'}\sum_{\substack{\chi \, \, (ab) }} \bigg | \sum_{m,n } f(m)\chi(m) g(n) \chi (n) \bigg| \\
& \ll_C \frac{A_1}{\log^{C+5} N} \bigg( A^2 B N/A_1 + (N/A_1)^2/B \bigg)^{1/2} \ll_C \frac{N}{\log^{C+5} N}.
\end{align*}
\end{proof}
\section{Chen's sieve upper bound for prime pairs} \label{chensec}
In this section we will apply Chen's sieve to obtain an upper bound for prime pairs, which is 3.99 times the expected main term. As will become apparent in the next section, the exact numerical value of this constant does not matter, only that it is strictly less than four. To state the result, we first need to set up some notation from \cite{BFM}.
Let $K>1,$ $N > 3,$ and define the Maynard-Tao sieve weights (recall the definition of $Z_T$ from (\ref{Z}))
\begin{align} \label{l1}
\lambda_{d_1, \dots, d_K} = \begin{cases} \bigg(\prod_{i=1}^K \mu (d_i)\bigg) \sum_{j=1}^{J} \prod_{\ell=1}^K F_{\ell,j}\bigg( \frac{\log d_\ell}{\log N}\bigg), & \text{if} \, \, (d_1\cdots d_K, Z_{N^{4 \epsilon}}) =1, \\
0, & \text{otherwise,} \end{cases}
\end{align}
for some fixed $J$, where $F_{\ell,j}:[0,\infty) \to \mathbb{R}$ are smooth compactly supported functions, not identically zero, satisfying a support condition
\begin{align} \label{l2}
\sup \bigg \{\sum_{\ell=1}^K t_\ell: \, \, \prod_{\ell=1}^K F_{\ell,j}(t_\ell) \neq 0 \bigg \} \leq \delta
\end{align}
for all $j=1,2,\dots,J$ for some small $\delta >0.$ Note that this implies that $\lambda_{d_1,\dots, d_K}$ are supported on $d_1 \cdots d_K \leq N^{\delta}.$ Define
\begin{align*}
F(t_1, \dots, t_K) := \sum_{j=1}^J \prod_{\ell=1}^K F_{\ell,j}'(t_\ell),
\end{align*}
where $F_{\ell,j}'$ is the derivative of $F_{\ell,j}.$ Set
\begin{align} \label{Lint}
L_K(F)& := \int_0^\infty \cdots \int_0^\infty \bigg( \int_0^\infty \int_0^\infty F(t_1,\dots, t_K) dt_{K-1} dt_K\bigg)^2 dt_1 \cdots dt_{K-2} \\ \nonumber
& = \sum_{j,j'=1}^J F_{K-1,j}(0)F_{K-1,j'}(0)F_{K,j}(0)F_{K,j'}(0) \prod_{\ell=1}^{K-2} \int_0^\infty F'_{\ell,j}(t_\ell)F'_{\ell,j'}(t_\ell) dt_\ell.
\end{align}
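The second equality can be seen directly: since each $F_{\ell,j}$ is smooth with compact support, the fundamental theorem of calculus gives

```latex
\begin{align*}
\int_0^\infty F_{\ell,j}'(t)\, dt = -F_{\ell,j}(0),
\end{align*}
```

so the inner double integral over $t_{K-1}$ and $t_K$ equals $\sum_{j=1}^J F_{K-1,j}(0)F_{K,j}(0)\prod_{\ell=1}^{K-2}F_{\ell,j}'(t_\ell)$; expanding the square of this sum and integrating in the remaining variables $t_1,\dots,t_{K-2}$ yields the stated expression.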
We note here that $F_{\ell,j}$ will be chosen so that $F(t_1,\dots,t_K)$ is symmetric with respect to permutations of the variables (cf. \cite{BFM}). Let $Z_{N^{4\epsilon}}$ be as in (\ref{Z}) and define
\begin{align*}
W := \prod_{\substack{p \leq \epsilon \log N \\ p \nmid Z_{N^{4\epsilon}}}} p, \quad \quad \quad \quad B := \frac{\phi(W)}{W} \log N.
\end{align*}
Using the above notation, we have that \cite[Lemma 4.6 (iii)]{BFM} holds with the constant $4$ replaced by $3.99:$
\begin{prop} \label{pairs} For all sufficiently large $N$ the following holds:
Let $\mathcal{H} = \{h_1, \dots, h_K\} \subseteq [0,N]$ be an admissible $K$-tuple such that
\begin{align} \label{smooth}
P^+\bigg (\prod_{1 \leq i < j\leq K} (h_j-h_i) \bigg) \leq \epsilon \log N.
\end{align}
Let $b$ be an integer such that
\begin{align*}
\bigg( \prod_{j=1}^K (b+h_j) , W\bigg) =1.
\end{align*}
Then for all distinct $h_j,h_\ell \in \mathcal{H}$ we have
\begin{align*}
S:=\sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} 1_{\mathbb{P}}(n+h_j)1_{\mathbb{P}}(n+h_\ell)\bigg( \sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i}} \lambda_{d_1, \dots, d_K} \bigg)^2 \leq (3.99 + \mathcal{O}(\delta)) \frac{N}{W} B^{-K}L_K(F).
\end{align*}
\end{prop}
The proof of \cite[Lemma 4.6 (iii)]{BFM} uses Selberg's sieve combined with the Modified Bombieri-Vinogradov Theorem. Our improvement comes from using Chen's sieve instead of Selberg's sieve. As in \cite[Lemma 4.6 (iii)]{BFM}, we first note that we may replace
\begin{align*}
\bigg( \sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i}} \lambda_{d_1, \dots, d_K} \bigg)^2 \quad \text{by} \quad
\nu_{\mathcal{H},j,\ell} (n) := \bigg( \sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i \\ d_j=d_\ell=1}} \lambda_{d_1, \dots, d_K} \bigg)^2 1_{((n+h_j)(n+h_\ell),Z_{N^{4\epsilon}})=1}
\end{align*}
in the sum $S.$
We then require the following weighted sieve inequality of Chen type (this is essentially Lemma 4.1 of \cite{Wu}, where it is attributed to Chen \cite{Chen}; according to Wu, the idea that this simple sieve inequality is sufficient is due to Pan \cite{Pan}).
\begin{lemma} \label{chen} Let $0 < \alpha < \beta < 1/4,$ $Y:= N^\alpha,$ and $Z:=N^{\beta}.$
Then $S \leq S_1 -S_2 /2 + S_3 /2,$ where
\begin{align*}
S_1 & := \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} 1_{\mathbb{P}}(n+h_j)1_{(n+h_\ell,P(Y))=1}\nu_{\mathcal{H},j,\ell} (n) \\
S_2 & := \sum_{Y < p \leq Z} \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ p | n+h_\ell}} 1_{\mathbb{P}}(n+h_j)1_{(n+h_\ell,P(Y))=1}\nu_{\mathcal{H},j,\ell} (n) , \quad \quad \text{and} \\
S_3 &:= \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} 1_{\mathbb{P}}(n+h_j) \sum_{Y < p < q < r \leq Z} \sum_{(s,P(q))=1} 1_{n+h_\ell=pqrs}\nu_{\mathcal{H},j,\ell} (n) .
\end{align*}
\end{lemma}
\begin{proof}
By positivity of $\nu_{\mathcal{H},j,\ell} (n)$ it suffices to show that for any $n \in (N+h_\ell,2N+h_\ell]$
\begin{align} \label{sieve}
1_{(n,P(Z))=1} \leq 1_{(n,P(Y)) = 1} - \frac{1}{2} \sum_{Y < p \leq Z} 1_{p|n} 1_{(n,P(Y)) = 1} +\frac{1}{2} \sum_{Y < p < q < r \leq Z} \sum_{(s,P(q))=1} 1_{n=pqrs}.
\end{align}
For $(n,P(Y))>1$ this is obvious, so let $(n,P(Y))=1$ and denote $k=\sum_{Y < p \leq Z} 1_{p|n}.$ If $k=0,$ then both sides of (\ref{sieve}) are equal to one. For $k \geq 1$ the left-hand side is zero. If $k=1,$ then the right-hand side is $1-1/2+0 = 1/2 > 0.$ For $k \geq 2$ the right-hand side is $1-k/2 + (k-2)/2=0$: in the last sum the condition $(s,P(q))=1$ forces $p$ and $q$ to be the two smallest primes dividing $n$, so $p$ and $q$ are fixed and there are $k-2$ ways to choose $r$.
\end{proof}
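The case analysis in the proof reduces to a one-variable inequality in $k$; the following sketch (our own abstraction, with the number of triples $(p,q,r)$ taken to be $\max(k-2,0)$ as counted in the proof) checks it numerically:

```python
def lhs(k):
    # 1_{(n, P(Z)) = 1}: given (n, P(Y)) = 1, this equals 1 iff n has no
    # prime factor in (Y, Z], i.e. iff k = 0
    return 1 if k == 0 else 0

def rhs(k):
    # 1_{(n, P(Y)) = 1} - k/2 + (number of representations n = pqrs)/2,
    # where the proof counts k - 2 choices of r once p and q are fixed
    # (and 0 such representations when k < 2)
    return 1 - k / 2 + max(k - 2, 0) / 2

checks = [(k, lhs(k), rhs(k)) for k in range(20)]
assert all(r >= l for _, l, r in checks)
print(checks[:3])  # → [(0, 1, 1.0), (1, 0, 0.5), (2, 0, 0.0)]
```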
\begin{remark} Note that $\beta < 1/4$ implies that in the sum $S_3$ we have $s \gg N/(pqr) > N^{1/4} > q.$ The above lemma holds also for $\beta \geq 1/4,$ but then we sometimes may have $s=1$ in the sum $S_3.$
\end{remark}
We now proceed to estimate $S_1$, $S_2$ and $S_3$ separately by applying the linear sieve. For this we use similar notations as in \cite[Chapters 11 and 12]{FI} (using the subscript `lin' for clarity): we let $F_{\text{lin}}(s),f_{\text{lin}}(s)$ be the continuous solution to the system of delay-differential equations
\begin{align*}
\begin{cases} (sF_{\text{lin}}(s))' = f_{\text{lin}}(s-1) \\
(sf_{\text{lin}}(s))' = F_{\text{lin}}(s-1)
\end{cases}
\end{align*}
with the condition
\begin{align*}
\begin{cases} sF_{\text{lin}}(s) = 2e^{\gamma}, & \text{if} \, \, 1 \leq s \leq 3 \\
sf_{\text{lin}}(s) = 0, & \text{if} \, \, s\leq 2.
\end{cases}
\end{align*}
Here $\gamma$ is the Euler-Mascheroni constant. We record here that for $2 \leq s \leq 4$
\begin{align*}
f_{\text{lin}}(s) = \frac{2 e^\gamma \log (s-1)}{s}.
\end{align*}
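As a consistency check against the system above: for $2 \leq s \leq 4$ we have $1 \leq s-1 \leq 3$, so $F_{\text{lin}}(s-1) = 2e^{\gamma}/(s-1)$, and indeed

```latex
\begin{align*}
(s f_{\text{lin}}(s))' = \big(2 e^{\gamma} \log (s-1)\big)' = \frac{2 e^{\gamma}}{s-1} = F_{\text{lin}}(s-1),
\end{align*}
```

while $f_{\text{lin}}(2) = 0$ matches the boundary condition $s f_{\text{lin}}(s) = 0$ for $s \leq 2$.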
By \cite[Chapters 11 and 12]{FI} we then have
\begin{lemma}\label{linear} \emph{\textbf{(Linear sieve).}}
Let $(a_n)_{n \geq 1}$ be a sequence of non-negative real numbers. For some fixed $X$ depending only on the sequence $(a_n)_{n \geq 1}$, define $r_d$ for all square-free $d \geq 1$ by
\begin{align*}
\sum_{n \equiv 0 \, (d)} a_n = g(d) X + r_d,
\end{align*}
where $g(d)$ is a multiplicative function, depending only on the sequence $(a_n)_{n \geq 1}$, satisfying $0 \leq g(p) < 1$ for all primes $p.$ Let $D\geq 2$ (the level of distribution), and let $z=D^{1/s}$ for some $s\geq 1.$ Suppose that there exists a constant $L >0$ such that for any $2 \leq w < z$ we have
\begin{align*}
\prod_{w \leq p < z} (1-g(p))^{-1} \leq \frac{\log z}{\log w} \bigg(1+\frac{L}{\log w}\bigg).
\end{align*}
Then
\begin{align*}
\sum_{n} a_n 1_{(n,P(z))=1} &\leq (F_{\text{\emph{lin}}}(s) + \mathcal{O}(\log^{-1/6} D)) X \prod_{p\leq z} (1-g(p)) + \sum_{\substack{d \leq D \\ d \, \, \text{\emph{squarefree}}}} |r_d|, \\
\sum_{n} a_n 1_{(n,P(z))=1} &\geq (f_{\text{\emph{lin}}}(s) - \mathcal{O}(\log^{-1/6} D)) X \prod_{p\leq z} (1-g(p)) - \sum_{\substack{d \leq D \\ d \, \, \text{\emph{squarefree}}}} |r_d|.
\end{align*}
\end{lemma}
We now estimate the sums $S_1,S_2$ and $S_3$ in the following three lemmata.
\begin{lemma} \label{s1} We have
\begin{align*}
S_1 \leq \frac{F_{\text{\emph{lin}}}(1/(2\alpha)) + \mathcal{O}(\delta)}{\alpha e^\gamma} \frac{N}{W} B^{-K}L_K(F).
\end{align*}
\end{lemma}
\begin{proof}
Define $r_d$ by the equation
\begin{align} \label{rd}
\sum_{\substack{N < n \leq 2N \\ n \equiv -h_\ell \, (d)}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)}\nu_{\mathcal{H},j,\ell} (n) = g(d) \sum_{\substack{N < n \leq 2N}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)} \nu_{\mathcal{H},j,\ell} (n) +r_d,
\end{align}
where $g(d)$ is a multiplicative function, supported on square-free integers, defined by
\begin{align*}
g(p) := \begin{cases} \frac{1}{p-1}, & \text{if} \, p \, \nmid W Z_{N^{4\epsilon}} \\
0, & \text{if} \, p \, \mid W Z_{N^{4\epsilon}}.
\end{cases}
\end{align*}
We note that by the same argument as in the proof of \cite[Lemma 4.6]{BFM} (recall that $d_j=d_\ell=1$ in $\nu_{\mathcal{H},j,\ell} (n) $), the sum on the right-hand side in (\ref{rd}) is
\begin{align} \nonumber
\sum_{\substack{N < n \leq 2N}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)} \nu_{\mathcal{H},j,\ell} (n) &= (1+o(1)) \frac{N }{\phi(W) \log N} B^{-K+2}L_K(F) \\ \label{main}
&= (1+o(1)) \frac{N}{W} B^{-K+1}L_K(F).
\end{align}
(To show this, we just expand the square in $\nu_{\mathcal{H},j,\ell} (n)$, swap the order of summation, and use Proposition \ref{bv} with moduli $[d_1,d_1'] \cdots [d_K,d_K']W \leq N^{3\delta}$ similarly as in \cite[Lemma 4.6]{BFM}.) Hence, by the upper bound of the linear sieve (Lemma \ref{linear} with level of distribution $D=N^{1/2-4\delta}$, sifting up to $Y=N^\alpha$) we get
\begin{align*}
S_1 \leq (F_{\text{lin}}(1/(2\alpha)) + \mathcal{O}(\delta))\bigg( \prod_{p \leq Y} (1- g(p)) \bigg) \frac{N}{W} B^{-K+1 }L_K(F) + \sum_{\substack {d \leq N^{1/2-4\delta} \\ d \, \, \text{squarefree}} } |r_d|.
\end{align*}
By Mertens' theorem,
\begin{align*}
\prod_{p \leq Y} (1- g(p)) &= \prod_{\substack{p \leq Y \\ p \, \nmid \, W Z_{N^{4\epsilon}}}}\bigg( 1- \frac{1}{p-1} \bigg) = \prod_{\substack{p \leq Y \\ p \, \nmid \, W Z_{N^{4\epsilon}}}}\bigg( 1- \frac{1+ \mathcal{O}(1/p)}{p} \bigg) \\
& = (1 + o(1))\frac{W}{\phi(W)} \prod_{ p \leq Y}\bigg( 1- \frac{1}{p} \bigg) = (1+o(1))\frac{W}{\phi(W) e^{\gamma} \log Y} ,
\end{align*}
so that
\begin{align*}
S_1 \leq \frac{F_{\text{lin}}(1/(2\alpha)) + \mathcal{O}(\delta)}{\alpha e^\gamma} \frac{N}{W} B^{-K}L_K(F)+ \sum_{\substack {d \leq N^{1/2-4\delta} \\ d \, \, \text{squarefree}} } |r_d|.
\end{align*}
For the error term we expand the square in $\nu_{\mathcal{H},j,\ell} (n) $ and swap the order of summation to get
\begin{align*}
r_d &= \sum_{\substack{N < n \leq 2N \\ n \equiv -h_\ell \, (d)}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)} \nu_{\mathcal{H},j,\ell} (n) - g(d) \sum_{\substack{N < n \leq 2N}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)} \nu_{\mathcal{H},j,\ell} (n) \\
&= \sum_{\substack{d_1, \dots, d_K \\ d_1',\dots, d_K' \\ d_j=d_j'=d_\ell=d_\ell'=1}} \lambda_{d_1,\dots, d_K} \lambda_{d_1',\dots, d_K'} \bigg( \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\n \equiv -h_\ell \, (d) \\ n \equiv -h_i \, ([d_i,d_i'])}} 1_{\mathbb{P}}(n+h_j) - g(d) \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ n \equiv -h_i \, ([d_i,d_i'])}} 1_{\mathbb{P}}(n+h_j) \bigg).
\end{align*}
Similarly as in the proof of \cite[Lemma 4.6]{BFM}, we note that since $h'-h$ is $\epsilon \log N$-smooth for all distinct $h,h' \in \mathcal{H}$ by (\ref{smooth}), and by the support conditions (\ref{l1}), (\ref{l2}) of $\lambda_{d_1,\dots,d_K},$ we may assume that $d,$ $[d_1,d_1'],\dots, [d_K,d_K'],$ $W Z_{N^{4\epsilon}}$ are pairwise coprime. In that case we have $g(d)=1/\phi(d)$,
\begin{align*}
\sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\n \equiv -h_\ell \, (d) \\ n \equiv -h_i \, ([d_i,d_i'])}} 1_{\mathbb{P}}(n+h_j) = \frac{\pi(2N+h_j) - \pi (N+h_j)}{\phi(d) \phi(W) \prod_{i=1}^K \phi([d_i,d_i'])} + \mathcal{O} \bigg( E(N, d [d_1,d_1'] \cdots [d_K, d_K'] W) \bigg),
\end{align*}
and
\begin{align*}
g(d) \hspace{-5pt}\sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ n \equiv -h_i \, ([d_i,d_i'])}} 1_{\mathbb{P}}(n+h_j) = \frac{\pi(2N+h_j) - \pi (N+h_j)}{\phi(d) \phi(W) \prod_{i=1}^K \phi([d_i,d_i'])} + \mathcal{O} \bigg( E(N, [d_1,d_1'] \cdots [d_K, d_K'] W) \bigg)
\end{align*}
where
\begin{align*}
E(N,q) = \max_{(a,q)=1 } \bigg | \pi (2N+h_j; q, a ) - \pi(N+h_j;q,a) - \frac{\pi(2N+h_j) - \pi (N+h_j)}{\phi(q)} \bigg |,
\end{align*}
if $(q, Z_{N^{4\epsilon}}) =1 $ and we set $E(N,q) = 0$ if $(q, Z_{N^{4\epsilon}}) >1.$
Hence, by the triangle inequality
\begin{align*}
\sum_{\substack {d \leq N^{1/2-4\delta} \\ d \,\, \text{squarefree}} } |r_d| \, \ll \sum_{\substack {d \leq N^{1/2-4\delta} \\ d \,\, \text{squarefree} \\ (d,W)=1} } \sum_{\substack{d_1, \dots, d_K \\ d'_1,\dots d_K' \\ d_j=d_j'=d_\ell=d_\ell'=1 }} | \lambda_{d_1,\dots, d_K} \lambda_{d_1',\dots, d_K'} | E(N, d [d_1,d_1'] \cdots [d_K, d_K'] W) \\
+ \sum_{\substack {d \leq N^{1/2-4\delta} \\ d \,\, \text{squarefree} \\ (d,W)=1}} \frac{1}{\phi(d)}\sum_{\substack{d_1, \dots, d_K \\ d'_1,\dots d_K' \\ d_j=d_j'=d_\ell=d_\ell'=1}} | \lambda_{d_1,\dots, d_K} \lambda_{d_1',\dots, d_K'} | E(N, [d_1,d_1'] \cdots [d_K, d_K'] W).
\end{align*}
The second sum on the right-hand side is bounded by $\log N$ times the first sum. We have the trivial bounds $|\lambda_{d_1,\dots, d_K}| \, \ll 1$ and $E(N,q) \ll 1 +N/\phi(q).$ Hence, using the Cauchy--Schwarz inequality and Proposition \ref{bv}, the first sum is bounded by
\begin{align*}
\sum_{\substack{q \leq N^{1/2-2\delta} \\ (q, W Z_{N^{4\epsilon}}) =1 }} & \tau_{3K}(q) E(N, qW) \\
&\leq \bigg( \sum_{\substack{q \leq N^{1/2-2\delta} \\ (q, W Z_{N^{4\epsilon}}) =1 }}\tau_{3K}(q)^2 (1+N /\phi(qW))\bigg)^{1/2}\bigg( \sum_{\substack{q \leq N^{1/2-2\delta} \\ (q, W Z_{N^{4\epsilon}}) =1 }} E(N, qW) \bigg)^{1/2} \\
& \, \ll_{K,C} \frac{N}{W \log^C N},
\end{align*}
which is sufficient.
\end{proof}
\begin{lemma} \label{s2} We have
\begin{align*}
S_2 \geq \frac{1-\mathcal{O}(\delta)}{\alpha e^{\gamma}} \int_\alpha ^\beta f_{\text{\emph{lin}}} \bigg( \frac{1/2 -t}{\alpha} \bigg) \frac{dt}{t} \frac{N}{W} B^{-K}L_K(F).
\end{align*}
\end{lemma}
\begin{proof}
Set
\begin{align*}
S_{2,p} := \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ p | n+h_\ell}} 1_{\mathbb{P}}(n+h_j)1_{(n+h_\ell,P(Y))=1}\nu_{\mathcal{H},j,\ell} (n),
\end{align*}
so that $S_2 = \sum_{Y < p \leq Z} S_{2,p}.$ We will apply the lower bound of the linear sieve to each of the sums $S_{2,p}:$ for $(d,p)=1,$ let $r_{dp}$ be defined by
\begin{align*}
\sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ p | n+h_\ell \\ n \equiv - h_\ell \, (d)}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)}\nu_{\mathcal{H},j,\ell} (n) = \frac{g(d)}{p-1} \sum_{\substack{N < n \leq 2N}} 1_{\mathbb{P}}(n+h_j) 1_{n \equiv b \, (W)} \nu_{\mathcal{H},j,\ell} (n) +r_{dp},
\end{align*}
where $g(d)$ is as in the proof of Lemma \ref{s1}, that is, a multiplicative function, supported on square-free integers, defined by
\begin{align*}
g(q) := \begin{cases} \frac{1}{q-1}, & \text{if} \, q \, \nmid W Z_{N^{4\epsilon}} \\
0, & \text{if} \, q \, \mid W Z_{N^{4\epsilon}}.
\end{cases}
\end{align*}
Applying the lower bound of the linear sieve (Lemma \ref{linear} with level of distribution $D=N^{1/2-4\delta}/p$ and sifting up to $Y=N^\alpha$), and using (\ref{main}) and Mertens' theorem as in the proof of Lemma \ref{s1}, we find that
\begin{align*}
S_{2,p} & \geq \bigg( f_{\text{lin}} \bigg( \frac{\log N^{1/2}/p}{\log Y} \bigg) - \mathcal{O}(\delta)\bigg)\frac{1}{p-1} \bigg( \prod_{q \leq Y} (1- g(q)) \bigg) \frac{N}{W} B^{-K+1}L_K(F) - \sum_{\substack {d \leq N^{1/2-4\delta}/p \\ d \, \, \text{squarefree}} } |r_{dp}| \\
& \geq \frac{1}{\alpha e^\gamma} \bigg( f_{\text{lin}} \bigg( \frac{\log N^{1/2}/p}{\log Y} \bigg) - \mathcal{O}(\delta)\bigg)\frac{1}{ p} \frac{N}{W} B^{-K}L_K(F) - \sum_{\substack {d \leq N^{1/2-4\delta}/p \\ d \, \, \text{squarefree}} } |r_{dp}|.
\end{align*}
Summing over $p$ we get, by a similar argument as in the proof of Lemma \ref{s1}, a sufficient bound for the error term
\begin{align*}
\sum_{Y < p \leq Z} \sum_{\substack {d \leq N^{1/2-4\delta}/p \\ d \, \, \text{squarefree}} } |r_{dp}| \, \ll_{C,K} \frac{N}{W\log^C N}.
\end{align*}
Hence, we have
\begin{align*}
S_{2} & \geq \frac{1- \mathcal{O}(\delta)}{\alpha e^\gamma} \bigg( \sum_{Y< p \leq Z} \frac{1}{p} f_{\text{lin}} \bigg( \frac{\log N^{1/2}/p}{\log Y} \bigg) \bigg) \frac{N}{W} B^{-K}L_K(F) \\
& \geq \frac{1- \mathcal{O}(\delta)}{\alpha e^\gamma} \bigg( \int_{Y < z \leq Z} f_{\text{lin}} \bigg( \frac{\log N^{1/2}/z}{\log Y} \bigg) \frac{dz}{z \log z} \bigg) \frac{N}{W} B^{-K}L_K(F) \\
& \geq \frac{1-\mathcal{O}(\delta)}{\alpha e^{\gamma}} \int_\alpha ^\beta f_{\text{lin}} \bigg( \frac{1/2 -t}{\alpha} \bigg) \frac{dt}{t} \frac{N}{W} B^{-K}L_K(F)
\end{align*}
by the change of variables $z=N^t$.
\end{proof}
For the next lemma we need the Buchstab function $\omega$, defined as the continuous solution to the delay-differential equation
\begin{align*}
\begin{cases} s \omega(s) = 1, & \text{if} \, \, 1 \leq s \leq 2,\\
(s \omega (s))' = \omega(s-1), & \text{if} \, \, s > 2.
\end{cases}
\end{align*}
Then by \cite[Lemma 12.1]{FI} for any $N^\epsilon < z < N$ we have
\begin{align} \label{buchstabfun}
\sum_{N< n \leq 2N} 1_{(n,P(z))=1} = (1+o(1)) \omega(\log N / \log z) \frac{N}{\log z}, \quad \quad N \to \infty.
\end{align}
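As a quick numerical illustration (not needed for the proofs), the delay-differential equation pins down $\omega$ explicitly on $[1,3]$, namely $\omega(s)=1/s$ and then $s\omega(s)=1+\log(s-1)$, and one further integration step gives it on $[3,4]$. A short Python sketch, with the last piece evaluated by Simpson's rule:

```python
import math

def _simpson(f, a, b, n=200):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def buchstab(s):
    """Buchstab function omega(s) for 1 <= s <= 4, from
    s*omega(s) = 1 on [1,2] and (s*omega(s))' = omega(s-1) for s > 2."""
    if 1 <= s <= 2:
        return 1 / s
    if 2 < s <= 3:
        # s*omega(s) = 1 + int_2^s omega(t-1) dt = 1 + log(s-1)
        return (1 + math.log(s - 1)) / s
    if 3 < s <= 4:
        # one further integration step, done numerically
        val = 1 + math.log(2) + _simpson(
            lambda t: (1 + math.log(t - 2)) / (t - 1), 3, s)
        return val / s
    raise ValueError("only implemented for 1 <= s <= 4")

print(buchstab(2.0), buchstab(3.0), buchstab(4.0))
```

For large $s$, $\omega(s)$ tends to $e^{-\gamma}\approx 0.5615$, which the value at $s=4$ already matches to two decimal places.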
\begin{lemma} \label{s3}
We have
\begin{align*}
S_3 \leq (4 + \mathcal{O}(\delta))\int_{\alpha < u_1 < u_2 < u_3 < \beta} \omega \bigg(\frac{1-u_1-u_2-u_3}{u_2} \bigg)\frac{du_1 d u_2 du_3 }{u_1 u_2^2 u_3} \frac{N}{W} B^{-K}L_K(F).
\end{align*}
\end{lemma}
\begin{proof}
Here we apply a switching argument, sieving out the prime divisors of $n+h_j$ rather than those of $n+h_\ell$; define
\begin{align*}
a_n := \sum_{Y < p< q < r \leq Z} \sum_{(s,P(q)) = 1} 1_{n=pqrs}
\end{align*}
so that
\begin{align*}
S_3 = \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} 1_\mathbb{P}(n+h_j) a_{n+h_\ell} \nu_{\mathcal{H},j,\ell} (n).
\end{align*}
We use a similar Selberg upper bound sieve as in \cite[Lemma 4.6]{BFM} (we could just as well use the upper bound of the linear sieve as above, but the argument is slightly simpler this way); let $G:[0, \infty) \to \mathbb{R}$ be a smooth function supported on $[0,1/4-2\delta]$ with $G(0)=1.$ Then
\begin{align*}
S_3 &\leq \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} a_{n+h_\ell} \bigg( \sum_{e \,| n+ h_j}\mu(e) G \bigg( \frac{\log e}{\log N}\bigg)\bigg)^2\nu_{\mathcal{H},j,\ell} (n) \\
&\leq \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W)}} a_{n+h_\ell} \bigg( \sum_{\substack{e \,| n+ h_j\\ (e, Z_{N^{4\epsilon}})=1}}\mu(e) G \bigg( \frac{\log e}{\log N}\bigg)\bigg)^2 \bigg( \sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i \\ d_j=d_\ell=1}} \lambda_{d_1, \dots, d_K} \bigg)^2.
\end{align*}
We then expand the squares and rearrange the sum to get
\begin{align*}
\sum_{\substack{d_1, \dots, d_K \\ d'_1,\dots d_K' \\ d_j=d_j'=d_\ell=d_\ell'=1 }} \lambda_{d_1,\dots, d_K} \lambda_{d_1',\dots, d_K'} \sum_{\substack{e,e' \\ (ee', Z_{N^{4\epsilon}})=1}} \mu(e) \mu(e') G \bigg( \frac{\log e}{\log N}\bigg) G \bigg( \frac{\log e'}{\log N}\bigg) \sum_{\substack{N < n \leq 2N \\ n \equiv b \, (W) \\ [d_i,d_i'] | n+h_i \\ [e,e'] | n+h_j }} a_{n+h_\ell}.
\end{align*}
In the innermost sum, we may again assume that $[d_1,d_1'],\dots, [d_K,d_K'],$ $[e,e'],$ $W Z_{N^{4\epsilon}}$ are pairwise coprime, and insert the estimates (for $d=[d_1,d'_1]\cdots [d_K,d_K'][e,e'] W$)
\begin{align*}
\sum_{\substack{N < n \leq 2N \\ n \equiv a \, (d) }} a_{n+h_\ell} = \frac{1}{\phi(d)} \sum_{\substack{N < n \leq 2N }} a_{n+h_\ell} + \tilde{r}_d.
\end{align*}
By essentially the same argument as in the proof of \cite[Lemma 4.6 (iii)]{BFM}, choosing the function $G$ optimally gives
\begin{align} \label{third}
S_3 &\leq (4+ \mathcal{O}(\delta)) \frac{\log N}{N} \bigg(\sum_{N < n \leq 2N} a_n \bigg) \frac{N}{W} B^{-K}L_K(F) + \mathcal{O}(R),
\end{align}
where
\begin{align*}
R = \sum_{\substack{d_1, \dots, d_K \\ d'_1,\dots d_K' \\ d_j=d_j'=d_\ell=d_\ell'=1 }} | \lambda_{d_1,\dots, d_K} \lambda_{d_1',\dots, d_K'} | \sum_{\substack{e,e' \leq N^{1/4-2\delta} \\ (ee',Z_{N^{4\epsilon}}) = 1}}E_0(N, [d_1,d_1'] \cdots [d_K, d_K'] [e,e'] W)
\end{align*}
with
\begin{align*}
E_0(N,d) := \max_{(a,d)=1} \bigg | \sum_{\substack{N +h_\ell< n \leq 2N+h_\ell \\ n \equiv a \, (d)}} a_n -\frac{1}{\phi(d)} \sum_{\substack{N+h_\ell < n \leq 2N+h_\ell }} a_n \bigg |.
\end{align*}
Note that the condition $e,e' \leq N^{1/4-2\delta}$ comes from the support restriction of the function $G$. Using the Cauchy--Schwarz inequality and the trivial bound $ | \lambda_{d_1,\dots, d_K}| \ll 1$ as in the proof of Lemma \ref{s1}, the error term $R$ has a sufficient bound if we can show that
\begin{align*}
\sum_{\substack{d \leq N^{1/2-2\delta} \\ (d, W Z_{N^{4\epsilon}}) =1 }} |E_0(N,dW)| \, \ll_C \frac{N}{W \log^C N}.
\end{align*}
To show this we use a finer-than-dyadic decomposition to write $a_n 1_{N +h_\ell < n \leq 2N + h_\ell}$ as a sum of terms of the form
\begin{align*}
\sum_{\substack{Y < p < q < r \leq Z \\ p \in I_1, \, \, q \in I_2}}\, \, \sum_{\substack{ (N+h_\ell)/(pqr) < s \leq (2N+h_\ell)/(pqr) \\ (s,P(q))=1}} 1_{n=pqrs},
\end{align*}
where each $I_j$ is of the form $(A_j, \lambda A_j ]$ for $\lambda= 1 + \log^{-2C} N$. We remove the cross-conditions $Y < p < q$; by the triangle inequality, this causes an error bounded by the sum of (\ref{error1}) and (\ref{error2}), which are given by
\begin{align} \label{error1}
\sum_{\substack{d \leq N^{1/2-2\delta} \\ (d, W Z_{N^{4\epsilon}}) =1 }} & \max_{(a,d)=1} \sum_{\substack{Y < p < q < r \leq Z \\ p \in [\lambda^{-2}Y, \lambda^2 Y] \cup [\lambda^{-2}q, \lambda^2 q] \\ (pq,d) =1 }} \, \, \sum_{\substack{s \asymp N/(pqr) \\ (s,(P(q)))=1 \\ rs \equiv a \overline{pq} \, (dW)}} 1 \, \\ \nonumber
& \ll \sum_{\substack{d \leq N^{1/2-2\delta} \\ (d, W Z_{N^{4\epsilon}}) =1 }} \max_{(a,d)=1} \sum_{\substack{Y < p < q \leq Z \\ p \in [\lambda^{-2}Y, \lambda^2 Y] \cup [\lambda^{-2}q, \lambda^2 q] \\ (pq,d) =1 }} \, \, \sum_{\substack{m \asymp N/(pq) \\ m \equiv a \overline{pq} \, (dW)}} 1 \ll_C \frac{N}{W \log^C N}
\end{align}
(since $m=rs \gg N/pq > N^{1/2}$ by using $\beta < 1/4$), and
\begin{align}\label{error2}
\sum_{\substack{d \leq N^{1/2-2\delta} \\ (d, W Z_{N^{4\epsilon}}) =1 }} \frac{1}{\phi(dW)} \sum_{\substack{Y < p < q < r \leq Z \\ p \in [\lambda^{-2}Y, \lambda^2 Y] \cup [\lambda^{-2}q, \lambda^2 q]}} \, \, \sum_{\substack{s \asymp N/(pqr) \\ (s,(P(q)))=1}} 1 \, \ll_C \frac{N}{W \log^C N},
\end{align}
which is sufficient. Similarly, if we replace the condition $N+h_\ell < pqrs \leq 2N+h_\ell$ by $ (N+h_\ell)/(A_1qr) < s \leq (2N+h_\ell)/(A_1qr),$ then we get a sufficient bound for the contribution of the part where $pqrs \notin (N+h_\ell,2N+h_\ell].$ Thus, we can replace $a_n 1_{N < n \leq 2N}$ by a sum of $\mathcal{O}(\log^{4C+2} N)$ functions of the form $(P\ast g)(n), $ where for $Y \ll A_1, A_2 \ll Z$
\begin{align*}
P(m)= 1_\mathbb{P}(m)1_{m \in (A_1,\lambda A_1]} \quad \text{and} \quad
g(n)= \sum_{\substack{ q < r \leq Z \\ q \in (A_2,\lambda A_2] }} \sum_{\substack{ (N+h_\ell)/(A_1qr) < s \leq (2N+h_\ell)/(A_1qr)\\ (s,P(q))=1}} 1_{n=qrs}.
\end{align*}
We can then replace $P(m)$ by $f(m)/\log A_1,$ where $f(m) := P(m)\log m$; this is because for all $m \in (A_1,\lambda A_1]$ we have
\begin{align*}
\log m = \log A_1 + \mathcal{O}\bigg( \log^{-2C} N \bigg),
\end{align*}
so that the error term from this has a sufficient bound by trivial estimates. Finally, writing $f(m) = 1_\mathbb{P}(m)(\log m )1_{m \leq \lambda A_1} - 1_\mathbb{P}(m)(\log m) 1_{m \leq A_1}$ and using the triangle inequality, we obtain by Proposition \ref{bv2} that
\begin{align*}
\sum_{\substack{d \leq N^{1/2-2\delta} \\ (d, W Z_{N^{4\epsilon}}) =1 }} |E_0(N,dW)| \, \ll_C \frac{N}{W \log^C N},
\end{align*}
which suffices by the previous remarks to bound the error term $R$ in (\ref{third}).
To compute the main term in (\ref{third}), we use (\ref{buchstabfun}) to write
\begin{align*}
\sum_{N < n \leq 2N} a_n &= \sum_{Y < p< q < r \leq Z} \, \,\sum_{ \substack{ N/(pqr) < s \leq 2N/(pqr) \\ (s,P(q)) = 1}} 1 \\
& = (1+o(1)) N \sum_{Y < p< q < r \leq Z} \frac{\omega \bigg( \frac{\log(N/(pqr))}{\log q}\bigg)}{pqr \log q} \\
&= ( 1+o(1)) N \int_{Y<z_1 < z_2 < z_3 \leq Z} \omega \bigg( \frac{\log(N/(z_1z_2z_3))}{\log z_2}\bigg)\frac{dz_1 dz_2 dz_3}{z_1 z_2 z_3 (\log z_1)( \log^2 z_2) \log z_3} \\
&= ( 1+o(1)) \frac{N}{\log N} \int_{\alpha < u_1 < u_2 < u_3 < \beta} \omega \bigg(\frac{1-u_1-u_2-u_3}{u_2} \bigg)\frac{du_1 d u_2 du_3 }{u_1 u_2^2 u_3}
\end{align*}
after the change of variables $z_j=N^{u_j}.$
\end{proof}
\emph{Proof of Proposition \ref{pairs}.} Combining Lemmata \ref{chen}, \ref{s1}, \ref{s2} and \ref{s3} we obtain
\begin{align*}
S \leq (\Omega_1 - \Omega_2 + \Omega_3 + \mathcal{O}(\delta)) \frac{N}{W} B^{-K}L_K(F),
\end{align*}
where
\begin{align*}
\Omega_1 &= \frac{F_{\text{lin}}(1/(2\alpha))}{\alpha e^\gamma}, \quad \quad \quad \Omega_2 = \frac{1}{2\alpha e^{\gamma}} \int_\alpha ^\beta f_{\text{lin}} \bigg( \frac{1/2 -t}{\alpha} \bigg) \frac{dt}{t}, \quad \quad \text{and} \\
\Omega_3 &= 2 \int_{\alpha < u_1 < u_2 < u_3 < \beta} \omega \bigg(\frac{1-u_1-u_2-u_3}{u_2} \bigg)\frac{du_1 d u_2 du_3 }{u_1 u_2^2 u_3}.
\end{align*}
We choose $\alpha = 1/7$ and $\beta= 3/14$ (so that $(1/2-t)/\alpha \geq 2$ in the integral defining $\Omega_2$). For this choice we get
\begin{align*}
\Omega_1 = \frac{7F_{\text{lin}}(7/2)}{e^\gamma} = 2\bigg( \frac{3F_{\text{lin}}(3)}{e^\gamma} + \int_3^{7/2} \frac{f_{\text{lin}}(s-1)}{e^\gamma} ds \bigg) \\
= 4+ 4\int_3^{7/2} \frac{\log (s-2)}{s-1} ds \leq 4.19,
\end{align*}
\begin{align*}
\Omega_2 = \frac{7}{2e^{\gamma}} \int_{1/7}^{3/14} f_{\text{lin}} \bigg( 7/2-7t \bigg) \frac{dt}{t} = 7 \int_{1/7}^{3/14}\frac{\log(7/2-7t-1)}{7/2-7t}\frac{dt}{t} \geq 0.279,
\end{align*}
and
\begin{align*}
\Omega_3 = 2 \int_{1/7 < u_1 < u_2 < u_3 < 3/14} \omega \bigg(\frac{1-u_1-u_2-u_3}{u_2} \bigg)\frac{du_1 d u_2 du_3 }{u_1 u_2^2 u_3} \leq 0.076.
\end{align*}
Hence, $\Omega_1 - \Omega_2 + \Omega_3 < 3.99.$ \qed
\begin{remark} The upper bound for the integral in $\Omega_3$ was computed using Python 3.7; the code is available at \url{http://codepad.org/2emT1dHN}. The choice of exponents $\alpha=1/7$ and $\beta=3/14$ has not been optimized since this is not relevant to our application.
\end{remark}
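The remark mentions a Python computation for $\Omega_3$; the one-dimensional quantities $\Omega_1$ and $\Omega_2$ are also easy to cross-check numerically. Below is a small pure-Python sketch (not the authors' script; it uses composite Simpson integration and takes the stated bound $\Omega_3 \leq 0.076$ as given):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Omega_1 = 4 + 4 * int_3^{7/2} log(s-2)/(s-1) ds
omega1 = 4 + 4 * simpson(lambda s: math.log(s - 2) / (s - 1), 3, 3.5)

# Omega_2 = 7 * int_{1/7}^{3/14} log(5/2 - 7t) / ((7/2 - 7t) t) dt
omega2 = 7 * simpson(
    lambda t: math.log(2.5 - 7 * t) / ((3.5 - 7 * t) * t), 1 / 7, 3 / 14)

omega3_bound = 0.076  # computed bound for Omega_3, taken from the text

print(omega1, omega2, omega1 - omega2 + omega3_bound)
```

With these values, $\Omega_1\approx 4.186$ and $\Omega_2\approx 0.280$, so the inequality $\Omega_1-\Omega_2+\Omega_3<3.99$ holds with a margin of roughly $0.007$.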
\section{Modified Maynard-Tao sieve} \label{maysec}
We are now ready to prove the following version of the Maynard-Tao sieve, which is modelled after \cite[Theorem 4.3]{BFM}:
\begin{prop} \label{strong} \emph{\textbf{(Modified Maynard-Tao sieve).}} Let $K$ be a sufficiently large multiple of $4.$ Let $\epsilon > 0$ be sufficiently small. Then for all sufficiently large $N$ the following holds:
Let $Z_{N^{4\epsilon}}$ be as in (\ref{Z}) and define
\begin{align*}
W := \prod_{\substack{p \leq \epsilon \log N \\ p \, \nmid Z_{N^{4\epsilon}}}} p.
\end{align*}
Let $\mathcal{H} = \{h_1, \dots, h_K\} \subseteq [0,N]$ be an admissible $K$-tuple such that
\begin{align*}
P^+ \bigg( \prod_{1 \leq i < j\leq K} (h_j-h_i) \bigg) \leq \epsilon \log N.
\end{align*}
Let $b$ be an integer such that
\begin{align*}
\bigg( \prod_{j=1}^K (b+h_j) , W\bigg) =1.
\end{align*}
Let
\begin{align*}
\mathcal{H} = \mathcal{H}_1 \cup \mathcal{H}_2 \cup \mathcal{H}_3 \cup \mathcal{H}_{4}
\end{align*}
be a partition of $\mathcal{H}$ into four sets of equal size. Then there is an integer $n \in [N,2N]$ with $n \equiv b \, (W)$ such that $n+\mathcal{H}_i$ contains a prime number for at least two distinct indices $i \in \{1,2,3,4\}.$
\end{prop}
To prove the above proposition we will show that it suffices to prove the following seemingly weaker statement.
\begin{prop} \label{weak} Let $a \geq 1$ be an integer and let $K$ be a sufficiently large multiple of $\lceil 3.99a \rceil +1.$ Let $\epsilon > 0$ be sufficiently small. Then for all sufficiently large $N$ the following holds:
Let $Z_{N^{4\epsilon}}$ be as in (\ref{Z}) and define
\begin{align*}
W := \prod_{\substack{p \leq \epsilon \log N \\ p \nmid Z_{N^{4\epsilon}}}} p.
\end{align*}
Let $\mathcal{H} = \{h_1, \dots, h_K\} \subseteq [0,N]$ be an admissible $K$-tuple such that
\begin{align*}
P^+ \bigg( \prod_{1 \leq i < j\leq K} (h_j-h_i) \bigg) \leq \epsilon \log N.
\end{align*}
Let $b$ be an integer such that
\begin{align*}
\bigg( \prod_{j=1}^K (b+h_j) , W\bigg) =1.
\end{align*}
Let
\begin{align*}
\mathcal{H} = \mathcal{H}_1 \cup \mathcal{H}_2 \cup \cdots \cup \mathcal{H}_{\lceil 3.99a \rceil+1}
\end{align*}
be a partition of $\mathcal{H}$ into $\lceil 3.99a \rceil+1$ sets of equal size. Then there is an integer $n \in [N,2N]$ with $n \equiv b \, (W)$ and a set of $a+1$ distinct indices $\{j_1,j_2,\dots, j_{a+1}\} \subseteq \{1,2,\dots,\lceil 3.99a \rceil+1\}$ such that $n+\mathcal{H}_j$ contains a prime number for every $j \in \{j_1,j_2,\dots, j_{a+1}\}.$
\end{prop}
\emph{Proof of Proposition \ref{strong} using Proposition \ref{weak}.}
We take $a=100$ so that $\lceil 3.99a \rceil+1 =4a.$ By taking a larger $K$ if necessary, we may suppose that $K$ is a sufficiently large multiple of $4a$. Given a partition $\mathcal{H} = \mathcal{H}_1 \cup \mathcal{H}_2 \cup \mathcal{H}_3 \cup \mathcal{H}_{4}$ as in Proposition \ref{strong}, we take a further partition
\begin{align*}
\mathcal{H}_i = \mathcal{H}_{i1} \cup \mathcal{H}_{i2} \cup \cdots \cup \mathcal{H}_{ia}
\end{align*}
into sets of equal sizes for all $i \in \{1,2,3,4\}.$ Then by Proposition \ref{weak} there is an integer $n \in [N,2N]$ with $n \equiv b \, (W)$ so that for at least $a+1$ distinct sets $\mathcal{H}_{ij}$ the set $n+ \mathcal{H}_{ij}$ contains a prime number. By the pigeon-hole principle this implies that $n+\mathcal{H}_i$ contains a prime number for at least two distinct indices $i \in \{1,2,3,4\}.$ \qed
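The pigeonhole step above can be checked exhaustively at toy scale: with $a$ subsets $\mathcal{H}_{ij}$ per group $\mathcal{H}_i$, any choice of $a+1$ subsets must meet at least two of the four groups. A brute-force Python sketch (with the illustrative toy value $a=3$ in place of $a=100$):

```python
from itertools import combinations

a = 3  # toy value; the actual proof takes a = 100
# label the 4a subsets H_{ij} by their group index i in {1,2,3,4}
subsets = [(i, j) for i in range(1, 5) for j in range(a)]

# any a+1 of the 4a subsets meet at least two distinct groups,
# since a single group contains only a subsets
for choice in combinations(subsets, a + 1):
    groups = {i for (i, _) in choice}
    assert len(groups) >= 2
print("pigeonhole check passed for a =", a)
```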
\vspace{7pt}
\emph{Proof of Proposition \ref{weak}.}
We use Pintz's refined version of the argument in \cite{BFM} (cf. the proof of \cite[Theorem 3]{Pintz} and especially \cite[Theorem 5.4]{BF}): using the notation of \cite{BF}, let us write $M:=\lceil 3.99a \rceil+1$, and let $\mu, \mu'$ be positive real numbers satisfying (with the convention $\binom{1}{2}=0$)
\begin{align} \label{mu}
\mu' = \max_{v \in \mathbb{N}} \bigg(v- \mu \binom{v}{2} \bigg).
\end{align}
For any integer $n$ consider
\begin{align} \label{quant}
\sum_{j=1}^M \bigg( \sum_{h \in \mathcal{H}_j} 1_\mathbb{P}(n+h) - \mu \sum_{\substack{\{h,h'\} \subseteq \mathcal{H}_j \\ h \neq h'}} 1_\mathbb{P}(n+h)1_\mathbb{P}(n+h') \bigg).
\end{align}
If there are at most $a$ indices $j$ such that $n+\mathcal{H}_j$ contains a prime, then the sum (\ref{quant}) is at most $\mu' a$. Hence, if
\begin{align*}
\sum_{h \in \mathcal{H}} 1_\mathbb{P}(n+h)-\mu' a - \mu \sum_{j=1}^M \sum_{\substack{\{h,h'\} \subseteq \mathcal{H}_j \\ h \neq h'}} 1_\mathbb{P}(n+h)1_\mathbb{P}(n+h') \, > \,0,
\end{align*}
then there are at least $a+1$ distinct indices $j$ such that $n+\mathcal{H}_j$ contains a prime. Therefore, the proposition follows once we show that
\begin{align*}
\sum_{\substack{N<n\leq 2N \\ n \equiv b \, (W)}}\bigg( \sum_{h \in \mathcal{H}} 1_\mathbb{P}(n+h)-\mu' a - \mu \sum_{j=1}^M \sum_{\substack{\{h,h'\} \subseteq \mathcal{H}_j \\ h \neq h'}} 1_\mathbb{P}(n+h)1_\mathbb{P}(n+h')\bigg) \bigg(\sum_{\substack{d_1, \dots, d_K \\ d_i | n+h_i}} \lambda_{d_1, \dots, d_K} \bigg)^2 \, > \,0.
\end{align*}
Let $\Sigma$ denote the above sum. Using \cite[Lemma 4.6 (i),(ii)]{BFM} to evaluate the first two sums, and Proposition \ref{pairs} to bound the third, we obtain that $\Sigma$ is bounded from below by
\begin{align*}
(1+\mathcal{O}(\delta))\frac{N}{WB^K} \bigg( K J_K(F) - \mu' a I_K(F) - 3.99 \mu M \binom{K/M}{2} L_K(F) \bigg),
\end{align*}
where $I_K(F)$, $J_K(F)$ and $L_K(F)$ are the integrals in \cite[Lemma 4.6]{BFM} ($L_K(F)$ is the same as in (\ref{Lint}) above). By \cite[Lemma 4.7]{BFM}, for any given $\rho \in (0,1)$ there is a choice of $F$ such that
\begin{align*}
J_K(F) &\geq (1+ \mathcal{O}(\log^{-1/2} K)) \frac{\rho \delta\log K}{K} I_K(F), \\
L_K(F) & \leq (1+ \mathcal{O}(\log^{-1/2} K)) \bigg(\frac{\rho \delta\log K}{K} \bigg)^2 I_K(F).
\end{align*}
Thus, we have
\begin{align} \label{Sigma}
\Sigma \geq \mathfrak{S}(1+\mathcal{O}(\delta)) N W^{-1}B^{-K} I_K(F),
\end{align}
where
\begin{align*}
\mathfrak{S} := \rho \delta \log K - \mu' a - 3.99 \mu M \binom{K/M}{2}\bigg(\frac{\rho\delta \log K}{K} \bigg)^2,
\end{align*}
if we pick $K$ large enough so that $\log^{-1/2} K < \delta.$ Choosing $\mu=1/L$ for some positive integer $L$, we observe that $\mu' = (1+L)/2,$ the maximum in (\ref{mu}) being attained at $v= L$ and $v=L+1$. Define the quantity $X$ by $XM: = \rho \delta \log K.$ Then, using $3.99 \leq (M-1)/a$, we obtain
\begin{align*}
\mathfrak{S} &= XM - \frac{1+L}{2} a - 3.99 \frac{M}{L}\binom{K/M}{2}\bigg(\frac{XM}{K} \bigg)^2 \\
& \geq XM - \frac{1+L}{2} a - \frac{M-1}{a} \frac{M}{L} \frac{K^2}{2M^2} \bigg(\frac{XM}{K} \bigg)^2 \\
& = XM - \frac{1+L}{2}a - \frac{M-1}{a}\frac{X^2 M}{2L} = \frac{a}{2(M-1)}>0,
\end{align*}
for $X=aL/(M-1)$ and $L=M,$ provided that $K$ is large enough so that $\rho < 1$ for this choice of $X$.
\qed
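The final optimization can be verified numerically. The following sketch (with the illustrative values $a=100$, $M=\lceil 3.99a\rceil+1=400$, $L=M$, $X=aL/(M-1)$, $\mu=1/L$) checks both $\mu'=(1+L)/2$ and the identity $\mathfrak{S}=a/(2(M-1))$ for the lower bound above:

```python
a = 100
M = 400        # = ceil(3.99 * a) + 1 for a = 100
L = M          # the choice mu = 1/L with L = M
mu = 1.0 / L
X = a * L / (M - 1)

# mu' = max_v (v - mu * binom(v, 2)), attained at v = L and v = L + 1
mu_prime = max(v - mu * v * (v - 1) / 2 for v in range(1, 10 * L))
assert abs(mu_prime - (1 + L) / 2) < 1e-6

# lower bound for S after replacing 3.99 by (M-1)/a and
# binom(K/M, 2) by K^2/(2M^2)
S = X * M - (1 + L) / 2 * a - (M - 1) / a * X * X * M / (2 * L)
print(S, a / (2 * (M - 1)))  # both equal a/(2(M-1)) = 100/798
```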
\section{Proof of Theorem \ref{maint}} \label{mainsec}
Theorem \ref{maint} now follows by the same argument as in \cite[Section 6]{BFM}, using our Proposition \ref{strong} in place of \cite[Theorem 4.3]{BFM}; for this we need the modified Erd\H{o}s--Rankin construction given by \cite[Lemma 5.2]{BFM}, which states:
\begin{lemma} \label{erlemma} Let $K \geq 1$ and $\beta_K \geq \beta_{K-1} \geq \cdots \geq \beta_1 \geq 0.$ Then there is a real number $y(\bm{\beta},K)$ such that the following holds:
Let $x,y,z$ be any real numbers such that $x \geq 1$, $y \geq y(\bm{\beta},K)$, and
\begin{align*}
2y(1+(1+\beta_K)x)\leq 2z \leq y (\log_2 y)(\log_3 y)^{-1}.
\end{align*}
Let $\mathcal{Z}$ be any (possibly empty) set of primes such that for any $q \in \mathcal{Z}$ we have
\begin{align*}
\sum_{p\in \mathcal{Z}, \, p \geq q} 1/p \ll 1/q \ll 1/\log z.
\end{align*}
Then there is a set of integers $\{a_p:p\leq y, \, \, p \notin \mathcal{Z}\}$ and an admissible $K$-tuple $\{h_1,h_2,\dots,h_K\}$ such that
\begin{align*}
\{h_1,h_2,\dots,h_K\} &= ((0,z] \cap \mathbb{Z}) \setminus \bigcup_{p \leq y, \, p \notin \mathcal{Z}} \{m: m \equiv a_p \quad (p) \}, \\
P^+ \bigg( \prod_{1 \leq i < j\leq K} (h_j-h_i) \bigg) & \leq y,
\end{align*}
and for all $i=1,2,\dots,K$
\begin{align*}
h_i = \beta_i xy +y + \mathcal{O}\left( y e^{-\log^{1/4} y}\right).
\end{align*}
\end{lemma}
Given $\beta_1 \leq \beta_2 \leq \beta_3 \leq \beta_4$ as in Theorem \ref{maint} and any sufficiently large $N$, we will apply the above lemma with
\begin{align*}
x&:= 1/\epsilon, \quad \quad y:= \epsilon \log N, \quad \quad z:=y (\log_2 y)(2\log_3 y)^{-1}, \\
\bm{\beta}&:= \{\beta_1,\dots,\beta_1, \beta_2,\dots,\beta_2,\beta_3,\dots,\beta_3,\beta_4,\dots,\beta_4\},
\end{align*}
where $\epsilon>0$ is sufficiently small and each $\beta_i$ is repeated $K/4$ times for some sufficiently large $K \equiv 0 \,(4)$; by translation we may assume $\beta_1\geq 0.$ We let $\mathcal{Z}:= \{Z_{N^{4\epsilon}}\}$ if $Z_{N^{4\epsilon}} > 1,$ and $\mathcal{Z}=\emptyset$ otherwise (recall (\ref{Z}) for the definition of $Z_T$). The conditions of Lemma \ref{erlemma} are satisfied, so we get a set of integers $\{a_p: p\leq y, \, \, p \neq Z_{N^{4\epsilon}}\}$ and an admissible $K$-tuple $\mathcal{H}$ such that
\begin{align}
\label{er}&\mathcal{H} = ((0,z] \cap \mathbb{Z}) \setminus \bigcup_{p \leq \epsilon \log N, \, p \neq Z_{N^{4\epsilon}}} \{m: m \equiv a_p \quad (p) \}, \\ \nonumber
& P^+ \bigg( \prod_{1 \leq i < j\leq K} (h_j-h_i) \bigg) \leq \epsilon \log N,
\end{align}
and such that there is a partition $\mathcal{H}=\mathcal{H}_1 \cup \mathcal{H}_2 \cup\mathcal{H}_3 \cup\mathcal{H}_4$ into sets of equal sizes so that for all $i=1,2,3,4$ and for all $h \in \mathcal{H}_i$
\begin{align*}
h = (\beta_i+\epsilon + o(1)) \log N.
\end{align*}
Let $b$ be an integer satisfying
\begin{align*}
b \equiv -a_p \quad (p) \quad \quad \text{for all} \quad \quad p \leq \epsilon \log N, \, p \neq Z_{N^{4\epsilon}}.
\end{align*}
Then the assumptions of Proposition \ref{strong} are satisfied, so that the proposition yields two indices $1\leq i<j \leq 4$ and an integer $n \in [N,2N]$ with $n \equiv b \, (W)$ such that both $n+\mathcal{H}_i$ and $n+\mathcal{H}_j$ contain a prime number. Furthermore, since $n \equiv b \, (W),$ by (\ref{er}) we have
\begin{align*}
\mathbb{P} \cap (n,n+z] \subseteq n + \mathcal{H}.
\end{align*}
Thus, for some $1\leq i<j \leq 4$, there are consecutive primes $p,q \in n+ \mathcal{H}$ such that
\begin{align*}
p =(\beta_i+\epsilon + o(1)) \log N, \quad \quad \text{and} \quad \quad q = (\beta_j+\epsilon + o(1)) \log N.
\end{align*}
Since this holds for all sufficiently large $N$, we obtain that for some $1\leq i<j \leq 4$ we have $\beta_j-\beta_i \in \mathbb{L}$.
\qed
| {
"timestamp": "2020-11-03T02:39:38",
"yymm": "1811",
"arxiv_id": "1811.03008",
"language": "en",
"url": "https://arxiv.org/abs/1811.03008",
"abstract": "We show that at least 1/3 of positive real numbers are in the set of limit points of normalized prime gaps. More precisely, if $p_n$ denotes the $n$th prime and $\\mathbb{L}$ is the set of limit points of the sequence $\\{(p_{n+1}-p_n)/\\log p_n\\}_{n=1}^\\infty,$ then for all $T\\geq 0$ the Lebesgue measure of $\\mathbb{L} \\cap [0,T]$ is at least $T/3.$ This improves the result of Pintz (2015) that the Lebesgue measure of $\\mathbb{L} \\cap [0,T]$ is at least $(1/4-o(1))T,$ which was obtained by a refinement of the previous ideas of Banks, Freiberg, and Maynard (2015). Our improvement comes from using Chen's sieve to give, for a certain sum over prime pairs, a better upper bound than what can be obtained using Selberg's sieve. Even though this improvement is small, a modification of the arguments of Pintz and of Banks, Freiberg, and Maynard shows that this is sufficient. In addition, we show that there exists a constant $C$ such that for all $T \\geq 0$ we have $\\mathbb{L} \\cap [T,T+C] \\neq \\emptyset,$ that is, gaps between limit points are bounded by an absolute constant.",
"subjects": "Number Theory (math.NT)",
"title": "Limit points of normalized prime gaps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9905874098566368,
"lm_q2_score": 0.8080672158638527,
"lm_q1q2_score": 0.8004612103526376
} |
https://arxiv.org/abs/2204.04729 | On dually-CPT and strong-CPT posets | A poset is a containment order of paths in a tree (CPT) if it admits a representation by containment where each element of the poset is represented by a path in a tree and two elements are comparable in the poset if and only if the corresponding paths are related by the inclusion relation. Recently, Alcón, Gudiño and Gutierrez introduced proper subclasses of CPT posets, namely dually-CPT and strongly-CPT posets. A poset $\mathbf{P}$ is dually-CPT if and only if $\mathbf{P}$ and its dual $\mathbf{P}^{d}$ both admit a CPT representation. A poset $\mathbf{P}$ is strongly-CPT if and only if $\mathbf{P}$ and all the posets that share the same underlying comparability graph admit a CPT representation. Whereas the inclusion between dually-CPT and CPT was known to be strict, it was raised as an open question by Alcón, Gudiño and Gutierrez whether strongly-CPT is a strict subclass of dually-CPT. We provide a proof that both classes actually coincide. | \section{Introduction}
A poset is called a containment order of paths in a tree (CPT for short)
if it admits a representation by containment where each element of the poset
corresponds to a path in a tree and for two elements $x$ and $y$, we have
$x < y$ in the poset if and only if the path corresponding to $x$ is properly
contained in the path corresponding to $y$.
Several classes of posets are known to admit specific containment models,
for example, containment orders of circular arcs on a circle~\cite{NiMaNa88, RoUr82},
containment orders of axis-parallel boxes in $\mathbb{R}^d$ \cite{GoSc89}, or containment
orders of disks in the plane \cite{BrWi89, Fi88, Fi89} to cite just a few.
All the aforementioned classes, as well as CPT,
generalize the class {CI} of containment orders of intervals on a line~\cite{DU-MI-41}.
It is well known that this class coincides with the class
of $2$-dimensional posets and are also equivalent to the
transitive orientations of permutation graphs~\cite{Go2004}.
In 1984, Corneil and Golumbic observed that a graph $G$
may be the comparability graph of a CPT poset,
yet a different transitive orientation of $G$ may not necessarily
have a CPT representation (see Golumbic~\cite{GO-84}).
This stands in contrast to poset dimension, interval orders,
unit interval orders, box containment orders, tolerance orders
and others which are comparability invariant.
Golumbic and Scheinerman~\cite{GoSc89}
called such classes \emph{strong containment poset classes}.
Recently, interest in CPT posets has been revived and several
groups of researchers have considered various aspects of this class
\cite{ALC-GUD-GUT-2, AGG-k-tree, GolumbicL21, MajumderMR21}.
Since CPT posets do not form a strong containment class,
Alc\'on, Gudi\~{n}o and Gutierrez~\cite{ALC-GUD-GUT-2}
introduced the study of the subclasses of
dually-CPT and strongly-CPT posets.
A poset ${\mathbf P}$ is called \emph{dually-CPT} if ${\mathbf P}$ and its dual ${\mathbf P}^{d}$ admit
a CPT representation. A poset ${\mathbf P}$ is called \emph{strongly-CPT} if ${\mathbf P}$
and all the posets that share the same underlying comparability
graph admit CPT representations. From the definition
it is clear that the class of strongly-CPT posets is included
in the class of dually-CPT posets. Many families of separating examples
between the class of dually-CPT posets and general CPT posets are now known;
however, concerning the strongly-CPT and dually-CPT classes, it
was left as an open problem for many years to determine whether the
inclusion is strict or whether the two classes coincide.
We present in this paper a solution to this question with
the following main theorem.
\begin{thm}
\label{thm:Main}
A poset ${\mathbf P}$ is strongly-CPT if and only if it is dually-CPT.
\end{thm}
To prove our main result we rely on the link between modular decomposition
of the underlying comparability graph and its transitive orientations.
Our strategy consists of considering a dually-CPT poset and proving
that any poset with the same comparability graph also admits a CPT representation.
At first we consider the representation and perform some
modifications to obtain a representation with particular properties. Once
this is done, we rely on the specific structure of modules in dually-CPT
posets, and we provide a method to obtain the representation of
any poset with the same comparability graph.
~\\
The paper is organized as follows: In Section \ref{sec:Def}, we present
the definitions related to posets, CPT and modular decomposition
and recall some fundamental results that we will use throughout
the paper. In Section \ref{sec:TrivPathsModules}, we prove that
for dually-CPT posets it is possible to obtain a representation
where no element of a strong module is represented by a trivial path. Then,
in Section \ref{sec:ModuleVsTrivPath}, we show how to modify a CPT representation
of a dually-CPT poset so that either the paths of a strong module
do not end on a trivial path or the considered module admits
very specific properties.
Finally, in Section~\ref{sec:Substition}, we show how to use an operation called
substitution to prove our main result.
\section{Definitions and notations}
\label{sec:Def}
A \emph{partially ordered set} or \emph{poset} is a pair
${\mathbf P}=(X,P)$ where $X$ is a finite non-empty set
and $P$ is a reflexive, antisymmetric
and transitive binary relation on $X$. The elements of $X$ are
also called \emph{vertices} of the poset. As usual, we write $x \leq
y$ in ${\mathbf P}$ for $(x,y)\in P$; and $x<y$ in ${\mathbf P}$ when $(x,y)\in P$
and $x\neq y$. If $x<y$ or $y<x$, we say that $x$ and $y$ are
\emph{comparable} in ${\mathbf P}$ and write $x\perp y$. When $x$ and $y$
are not comparable, we say that they are
\emph{incomparable} and write $x\parallel y$.
An element $x$ is \emph{covered} by $y$ in \textbf{P},
denoted by $x<:y$ in \textbf{P}, when $x<y$ and there is no
element $z\in X$ for which $x<z$ and $z<y$.
The \emph{down-set} $\{x\in X: x< z\}$ and the \emph{up-set} $\{x\in
X : z< x\}$ of an element $z$ are denoted by $D(z)$ and $U(z)$, respectively. We let
$D[z]=D(z) \cup \{z\}$ and $U[z]=U(z) \cup \{z\}$.
The \emph{dual} of ${\mathbf P}=(X,P)$ is the poset ${\mathbf P}^d=(X,P^d)$
where $x\leq y$ in ${\mathbf P}^d$ if and only if $y \leq x$ in ${\mathbf P}$.
A \emph{containment representation} $\Model{{\mathbf P}}$ or \emph{model}
of a poset ${\mathbf P}=(X,P)$ maps
each element $x$ of $X$ to a set $\Path{x}$ in such a way that $x <
y$ in ${\mathbf P}$ if and only if $\Path{x}$ is a proper subset of $\Path{y}$.
We identify the containment representation $\Model{{\mathbf P}}$ with the set family
$\{\Path{x}\}_{x\in X}$.
A poset ${\mathbf P}=(X,P)$ is a \emph{containment order of paths in
a tree}, or $CPT$ poset for brevity,
if it admits a containment representation $\Model{{\mathbf P}} = \{\Path{x}\}_{x\in X}$ where
every $\Path{x}$ is a path of a tree $T$, which is called the
\emph{host tree} of the model.
When $T$ is a path, ${\mathbf P}$ is said to be a \emph{containment order
of intervals} or $CI$ poset for short.
(We generally consider a path as the set of vertices that induces it.)
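To make the definition concrete, the following small sketch (ours, not from the paper) checks whether a family of paths of a host tree, given as vertex sets, is a containment representation of a given poset. The function name, the poset, and the model below are illustrative assumptions.

```python
# Hypothetical helper: x < y must hold in the poset exactly when the
# set of host-tree vertices representing x is a proper subset of the
# set representing y.
def is_containment_model(strict_pairs, elements, model):
    for x in elements:
        for y in elements:
            if x != y and (((x, y) in strict_pairs) != (model[x] < model[y])):
                return False
    return True

# Host tree: the path 1-2-3-4-5.  Every path of this tree is an
# interval, so this model also witnesses that a < b < c is CI.
elements = ["a", "b", "c"]
strict = {("a", "b"), ("a", "c"), ("b", "c")}    # the total order a < b < c
model = {"a": frozenset({3}),                     # a trivial path (one vertex)
         "b": frozenset({2, 3, 4}),
         "c": frozenset({1, 2, 3, 4, 5})}
print(is_containment_model(strict, elements, model))  # True
```

Note that the trivial path representing $a$ forces $a$ to be a minimal element, a fact exploited repeatedly in the paper.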
The comparability graph $G_{{\mathbf P}}$ of a poset ${\mathbf P}=(X,P)$ is the
simple graph with vertex set $V(G_{{\mathbf P}})=X$ and edge set
$E(G_{{\mathbf P}})=\{xy: x\perp y\}$. In what follows, a poset ${\mathbf P}$, such
that $G_{{\mathbf P}}$ is complete (resp. without edges), is called a \emph{total order} (resp. an
\emph{empty order}). We say that two posets are
\emph{associated} if their comparability graphs are isomorphic. A
graph $G$ is a \emph{comparability graph} if there exists some
poset ${\mathbf P}$ such that $G=G_{{\mathbf P}}$.
A \emph{transitive orientation}
$\overrightarrow{E}$ of a graph $G$ is an assignment of one
of the two possible directions, $\overrightarrow{xy}$ or
$\overrightarrow{yx}$, to each edge $xy\in E(G)$ in such a way that
if $\overrightarrow{xy}\in \overrightarrow{E}$ and
$\overrightarrow{yz}\in \overrightarrow{E}$ then
$\overrightarrow{xz}\in \overrightarrow{E}$. The graphs whose
edges can be transitively oriented are exactly the comparability
graphs \cite{GA-67, GH-HO-62, Go2004}. Furthermore, given a transitive
orientation $\overrightarrow{E}$ of a graph $G$, we let
${\mathbf P}_{\overrightarrow{E}}$ denote the poset
$(V(G),P_{\overrightarrow{E}})$ where $u<v$ in
${\mathbf P}_{\overrightarrow{E}}$ if and only if $\overrightarrow{uv}\in
\overrightarrow{E}$. The comparability graph of
${\mathbf P}_{\overrightarrow{E}}$ is $G$. Thereby, the transitive
orientations of $G$ are in one-to-one correspondence with the posets whose comparability
graph is $G$.
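As an illustration (ours, not from the paper), transitivity of an orientation is easy to check mechanically; the $P_4$ below shows why a path on four vertices admits essentially only two transitive orientations.

```python
# Hypothetical sketch: an orientation is a set of arcs (x, y), meaning
# the edge xy is directed from x to y.  It is transitive iff every
# composable pair of arcs (x, y), (y, z) is closed by the arc (x, z).
def is_transitive(arcs):
    arcs = set(arcs)
    return all((x, z) in arcs
               for (x, y) in arcs
               for (w, z) in arcs
               if y == w and x != z)

# The path 1-2-3-4 (a P4): orienting 2 and 4 above their neighbors is
# transitive, while orienting all edges from left to right is not,
# since transitivity would require the non-edge (1, 3).
print(is_transitive({(1, 2), (3, 2), (3, 4)}))   # True
print(is_transitive({(1, 2), (2, 3), (3, 4)}))   # False
```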
Let ${\mathbf P}=(X, P)$ be a poset. A set $M\subseteq X$ is a \emph{module}
(\emph{homogeneous set} \cite{GA-67})
if for every $y\in X-M$, either $y\perp x$ for all
$x\in M$, or $y\parallel x$ for all $x \in M$.
The whole set $X $ and
the singleton sets $\left\{x\right\}$, for any $x\in X$,
are modules of ${\mathbf P}$. These modules are called \emph{trivial modules}.
A poset ${\mathbf P}$ is \emph{prime} or \emph{indecomposable} if all its modules are trivial. Otherwise
${\mathbf P}$ is \emph{decomposable} or \emph{degenerate}.
A module $M$ is \emph{strong} if for all modules $M'$
either $M\cap M'={\varnothing}$ or
$M\subseteq M'$ or $M'\subseteq M$.
A module (respectively, strong module) $M\neq X$ is called \emph{maximal}
if there exists no module (respectively, strong module)
$Y$ such that $M\subset Y \subset X$.
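Since being a module only depends on the comparability graph, membership can be tested with a short sketch (ours, not from the paper); the $P_4$ example below also illustrates a prime poset, whose only modules are the trivial ones.

```python
# Hypothetical sketch: M is a module iff every vertex outside M is
# adjacent (in the comparability graph) either to all of M or to none
# of it.
def is_module(M, edges, X):
    M, edges = set(M), {frozenset(e) for e in edges}
    for y in set(X) - M:
        flags = {frozenset({x, y}) in edges for x in M}
        if len(flags) > 1:          # y distinguishes two elements of M
            return False
    return True

# Comparability graph of a P4 on 1-2-3-4: only trivial modules survive,
# so any poset with this comparability graph is prime.
X, edges = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]
print(is_module({2, 3}, edges, X))        # False: vertex 1 distinguishes 2 and 3
print(is_module({1, 2, 3, 4}, edges, X))  # True: the whole set is trivially a module
```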
\begin{thm}(Modular decomposition theorem) \cite{GA-67}
Let ${\mathbf P}=(X, P)$ be a poset with at least two vertices. Then exactly
one of the following three conditions is satisfied:
\begin{enumerate}
\item [(i)] $G_{{\mathbf P}}$ is not connected and the maximal
strong modules of ${\mathbf P}$ are the connected components of $G_{{\mathbf P}}$.
\item [(ii)] $\overline{G_{{\mathbf P}}}$ is not connected and the maximal
strong modules of ${\mathbf P}$ are the connected components of $\overline{G_{{\mathbf P}}}$.
\item [(iii)] $G_{{\mathbf P}}$ and $\overline{G_{{\mathbf P}}}$ are connected. There is some
$Y\subseteq X$ and a unique partition $\mathcal{S}$ of $X$ such
that
\begin{enumerate}
\item [(a)] $|Y|\geq 4$,
\item [(b)] ${\mathbf P}\left[Y\right]$ is the largest prime
subposet of ${\mathbf P}$ (in the sense that it is not contained in any
other prime subposet),
\item [(c)] for every part $S$ of the partition $\mathcal{S}$,
$S$ is a module of ${\mathbf P}$ and $|S\cap Y|=1$.
\end{enumerate}
\end{enumerate}
\end{thm}
The previous theorem defines a partition $\mathcal{M}({\mathbf P}) = \{M_1,\ldots,M_k\}$ of $X$,
which is called the \emph{canonical partition} or
\emph{maximal modular partition} of ${\mathbf P}$.
In the first case, $G_{{\mathbf P}}$ is said to be \emph{parallel} or \emph{stable} and the partition is formed by
the vertex sets
of the connected components of $G_{{\mathbf P}}$. In the second case, $G_{{\mathbf P}}$ is \emph{series} or \emph{clique} and
the partition is formed by the vertex sets of the connected components of $\overline{G_{{\mathbf P}}}$.
And, in the last case,
$G_{{\mathbf P}}$ is \emph{neighborhood} or \emph{prime}, and the partition is $\mathcal{S}$.
The \emph{quotient poset} of ${\mathbf P}$, denoted by ${\mathbf P}/\mathcal{M}({\mathbf P})$, has a vertex $v_i$ for
each part $M_i$ of $\mathcal{M}({\mathbf P})$; and two
vertices $v_i$ and $v_j$ of ${\mathbf P}/\mathcal{M}({\mathbf P})$ are comparable
if and only if for all $x\in M_i$
and for all $y\in M_j$, $x \perp y$ in ${\mathbf P}$.
The quotient poset is \emph{empty} (iff $G_{{\mathbf P}}$ is parallel), a \emph{total order} (iff
$G_{{\mathbf P}}$ is series) or \emph{indecomposable} (iff $G_{{\mathbf P}}$ is neighborhood).
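The quotient construction can be sketched as follows (our illustrative code, not from the paper), using strict order pairs; the example is a series poset whose quotient is a two-element chain.

```python
# Hypothetical sketch: one quotient vertex per part; part i is below
# part j iff every element of M_i is below every element of M_j in P.
def quotient(strict, parts):
    rel = set()
    for i, Mi in enumerate(parts):
        for j, Mj in enumerate(parts):
            if i != j and all((x, y) in strict for x in Mi for y in Mj):
                rel.add((i, j))
    return rel

# A series poset: the antichain {a, b} entirely below the antichain
# {c, d}.  Its maximal modular partition has the two antichains as
# parts, and the quotient is the total order 0 < 1.
strict = {(x, y) for x in "ab" for y in "cd"}
print(quotient(strict, [{"a", "b"}, {"c", "d"}]))  # {(0, 1)}
```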
On some occasions, when referring to a module, we will mean the subposet induced by it. For
instance, we will say that a module $M$ of ${\mathbf P}$ is $CI$, or that it is prime, meaning that ${\mathbf P}(M)$ is.
This will be clear from the context and will cause no confusion.
\begin{thm}\cite{GA-67} \label{t:indecommposable-poset}
Given posets ${\mathbf P}$ and ${\mathbf P}'$, if $G_{{\mathbf P}}=G_{{\mathbf P}'}$ and
${\mathbf P}$ is indecomposable, then ${\mathbf P}'={\mathbf P}$ or ${\mathbf P}'={\mathbf P}^d$.
\end{thm}
\begin{prop}\cite{GA-67} \label{p:associated-posets}
Given posets ${\mathbf P}$ and ${\mathbf P}'$, if $G_{{\mathbf P}}=G_{{\mathbf P}'}$, then ${\mathbf P}$ and
${\mathbf P}'$ have the same strong modules and, consequently,
$\mathcal{M}({\mathbf P})=\mathcal{M}({\mathbf P}')$.
\end{prop}
Given a vertex $v$ of a poset ${\mathbf P}=(X, P)$ and a poset ${\mathbf H}=(X_1, H)$, \emph{substituting} or
\emph{replacing} $v$ by ${\mathbf H}$ in ${\mathbf P}$ results in the poset
${\mathbf P}_{{\mathbf H}\rightarrow v}=\left((X-\{v\})\cup X_1, P_{{\mathbf H}\rightarrow v}\right)$ where
$P_{{\mathbf H}\rightarrow v}=\left(P-\{(x,y) : x=v \vee y=v\}\right) \cup H \cup \{(x,y):x\in X_1 \wedge y\in
U(v)\} \cup \{(x,y):y\in X_1 \wedge x\in D(v)\}$.
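The substitution operation can be sketched as follows (our illustrative code, not from the paper); for readability we state it on the strict order pairs rather than on the reflexive relation $P$ used above.

```python
# Hypothetical sketch of P_{H -> v} on strict order pairs: remove v,
# insert the relation of H, and attach every vertex of H below U(v)
# and above D(v).
def substitute(P, X, v, H, X1):
    up = {y for (a, y) in P if a == v}        # U(v)
    down = {x for (x, a) in P if a == v}      # D(v)
    rel = {(x, y) for (x, y) in P if v not in (x, y)}
    rel |= set(H)
    rel |= {(x, y) for x in X1 for y in up}
    rel |= {(x, y) for x in down for y in X1}
    return (set(X) - {v}) | set(X1), rel

# Replacing the middle of the chain a < b < c by the antichain {p, q}:
X2, rel = substitute({("a", "b"), ("b", "c"), ("a", "c")},
                     {"a", "b", "c"}, "b", set(), {"p", "q"})
print(sorted(rel))  # [('a', 'c'), ('a', 'p'), ('a', 'q'), ('p', 'c'), ('q', 'c')]
```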
\begin{thm} \label{t:orientations}
Let $\mathcal{M}({\mathbf P}) = \{M_1,\ldots,M_k\}$ be the maximal modular partition of a connected poset
${\mathbf P}=(X, P)$ whose quotient is prime, and call ${\mathbf H}$ the
quotient poset ${\mathbf P}/\mathcal{M}({\mathbf P})$. A poset ${\mathbf Q}$ is associated to ${\mathbf P}$ if and only if there
exist posets ${\mathbf Q}_i$ for $1\leq i \leq k$ such that
${\mathbf Q}_i$ is associated to ${\mathbf P}_i={\mathbf P}(M_i)$ for each $i$, and ${\mathbf Q}$ is obtained by replacing each
vertex $v_i$ of ${\mathbf H}$ by the poset ${\mathbf Q}_i$ or replacing each vertex $v_i$ of ${\mathbf H}^d$ by the poset
${\mathbf Q}_i$.
\end{thm}
\begin{thm}\cite{GA-67} \label{t:strong-property}
A poset ${\mathbf P}$ is $CI$ if and only if the quotient poset and all the maximal strong modules of
${\mathbf P}$ are $CI$.
\end{thm}
\begin{lem} \cite{ALC-GUD-GUT-2}
\label{l:neces} If $z$ is a vertex of a $CPT$ poset ${\mathbf P}$ then the subposet
induced by the closed down-set of $z$ is $CI$. In particular, if ${\mathbf P}$ is dually-$CPT$,
then also the subposet induced by the closed up-set of $z$ is $CI$.\end{lem}
\begin{rem}\cite{GA-67}
\label{r:strong} Let ${\mathbf P}$ and ${\mathbf P}'$ be associated posets. Then,
${\mathbf P}$ is a $CI$ poset if and only if ${\mathbf P}'$ is a $CI$ poset. In
particular, ${\mathbf P}$ is a $CI$ poset if and only if ${\mathbf P}^d$ is a $CI$
poset.
\end{rem}
\begin{thm} \label{t:dually-nprime}
Let ${\mathbf P}=(X, P)$ be a connected dually-$CPT$ poset. Then the quotient poset of ${\mathbf P}$ is
dually-$CPT$ and every maximal strong module of ${\mathbf P}$ is $CI$. In particular, if the quotient
poset is $CI$, then ${\mathbf P}$ is $CI$.
\end{thm}
\begin{proof} Let $\mathcal{M}({\mathbf P}) = \{M_1,\ldots,M_k\}$ be the maximal modular partition of
${\mathbf P}$. The quotient poset ${\mathbf H}={\mathbf P}/\mathcal{M}({\mathbf P})$ is a subposet of ${\mathbf P}$, so ${\mathbf H}$ is
dually-$CPT$. We can assume that ${\mathbf P}$ is not empty, and since ${\mathbf P}$ is connected, ${\mathbf H}$ is
connected as well, so every
vertex $v_i$ of ${\mathbf H}$ is in the down-set or in the up-set of some other vertex. This implies
that in ${\mathbf P}$ the whole module $M_i$ is in the up-set or in the down-set of some other vertex.
It follows from Lemma \ref{l:neces} that each ${\mathbf P}_i={\mathbf P}(M_i)$ is $CI$. Therefore, by Theorem
\ref{t:strong-property}, if ${\mathbf H}$ is $CI$, then ${\mathbf P}$ is $CI$.\end{proof}
The converse of Theorem \ref{t:dually-nprime} is not true in general.
For instance, if in the quotient poset ${\mathbf H}$ there exists a vertex $v_i$
such that in any $CPT$ representation of ${\mathbf H}$ the corresponding
path $W_{v_i}$ is reduced
to a vertex, then for ${\mathbf P}$ to be $CPT$ the module $M_i$ has to be a singleton.
In a representation $\Model{{\mathbf P}}$ of a CPT poset ${\mathbf P}$,
a subset $S$ of elements of ${\mathbf P}$
is called \emph{one-sided} if all the paths that represent the elements of $S$
arrive at a vertex $a$ of the host tree
and all these paths, except possibly one trivial path,
pass through a vertex $b$ of $T$
adjacent to $a$. If all the paths arrive at a vertex $a$ and $S$ is not one-sided, then $S$ is called \emph{two-sided}.
Addressing this issue in the proof of the main theorem will require the following lemmas and
properties.
\begin{property}\cite{DU-MI-41, GO-84}
\label{p:compresing-CI-model}
Every $CI$ poset admits a $CI$ representation
where the intersection of all the intervals used in
the representation is a non-trivial interval.
\end{property}
\section{Trivial paths into modules}
\label{sec:TrivPathsModules}
The goal of this section is to prove that for any dually-CPT
poset ${\mathbf P}$, there exists a representation $\Model{{\mathbf P}}$ where all
the elements contained in strong modules are represented
by non-trivial paths.
We first prove that if an element of a module
is represented by a trivial path, then no element of the module
is greater than any element outside the module.
\begin{lem}
Let ${\mathbf P}$ be a poset and let $M$ be a strong module of ${\mathbf P}$. If there
exists a representation $\Model{{\mathbf P}}$ where an element $x$ of $M$ is represented
by a trivial path, then no element of $M$ is greater than any element
of ${\mathbf P}$ not in $M$.
\label{lem:TrivialPathModule}
\end{lem}
\begin{proof}
Suppose for contradiction that some element of $M$ is greater than an element $z \notin M$.
Since $M$ is a strong module, in any transitive orientation all the edges between $z$ and $M$
are oriented in the same direction, so in particular $z < x$. Then in any representation
\Model{{\mathbf P}} we would have $\Path{z} \subset \Path{x}$; but
$\Path{x}$ is a trivial path and cannot properly contain any other path.
\end{proof}
Hence, by the previous lemma, if in a representation \Model{{\mathbf P}} an element
of a module is represented by a trivial path, then no element outside the module lies below any element of the module.
\begin{lem}
\label{lem:ModuleContained}
Let $M$ be a strong module of a connected CPT poset ${\mathbf P}$.
If in a representation $\Model{{\mathbf P}}$ one of its elements
is represented by a trivial path, then there exists an element $x$ not in $M$
such that the path $\Path{x}$ contains all the paths representing the elements of $M$.
\end{lem}
\begin{proof}
By the previous lemma, no element outside $M$ lies below an element of $M$. Since the
poset is connected, there must be at least one element $x$ not in $M$ that is greater
than every element of $M$, and its path $\Path{x}$ therefore contains all the paths
representing the elements of $M$.
\end{proof}
\begin{lem}
\label{lem:PassingThroughA}
Let $M$ be a strong module of a CPT poset ${\mathbf P}$.
If in a representation $\Model{{\mathbf P}}$ one of its elements $z$
is represented by a trivial path, then this path is hosted
at some vertex $a$ of $T$, and if the path $\Path{x}$ of an element $x$ not in $M$
passes through $a$, then $\Path{x}$ has to contain all the paths corresponding
to the elements of $M$.
\end{lem}
\begin{proof}
From the definition of a module, every element not in $M$ is comparable either
to all the elements of $M$ or to none of them. Moreover, in a transitive orientation
of a graph, the edges between such an element and the elements of $M$ are all oriented
in the same direction. Now, since $\Path{z}$ is the trivial path hosted at $a$, any path
$\Path{x}$ of an element $x$ not in $M$ that passes through $a$ properly contains
$\Path{z}$, so $z<x$. Hence $x$ is greater than every element of $M$, and $\Path{x}$
contains all the paths corresponding to the elements of $M$.
\end{proof}
\begin{lem}
\label{lem:non-triv}
Let $M$ be a strong module of a CPT poset ${\mathbf P}$. If in a representation
$\Model{{\mathbf P}}$ one of its elements
is represented by a trivial path and $M$ is a clique module or a prime module,
then there exists at least one element of $M$ represented by a non-trivial path.
\end{lem}
In the case of dually-CPT posets, the next three lemmas consider the presence
of trivial paths in a representation of strong modules and show how
to obtain an equivalent representation where all the elements of the module
are represented by non-trivial paths. For these lemmas, we consider each strong module to
be a CI poset, and the element of the module represented by a trivial
path is denoted by $z$.
\begin{lem}
\label{lem:CliqueModuleTrivialPath}
Let $M$ be a strong CI clique module of a dually-CPT poset ${\mathbf P}$.
If an element $z$ of $M$ is represented
by a trivial path in a representation $\Model{{\mathbf P}}$, then there exists a representation
$\Model[']{{\mathbf P}}$ where $z$ is represented by a non-trivial path.
\end{lem}
\begin{proof}
By Lemma \ref{lem:ModuleContained}, we know that there exists an element $x$
such that all the paths of $M$ are contained in $\Path{x}$ in any CPT representation.
Let us consider three cases.
~
\\(1) Suppose the trivial path of $z$ is not an extremity
of any path that represents the elements of $M$. Let $a$ be the vertex of $T$
that hosts the trivial path of $z$. Since $\Path{z}$ is not an extremity of any path
of $M$, $a$ admits at least one neighbor $b$ in $T$ such that all the paths
of $M$ (except for $z$) pass through $b$ (see Figure \ref{fig:CliqueModule}$(i)$).
Let us subdivide the edge $a,b$ by adding a vertex $c$.
Then it suffices to replace the trivial path of $z$ by a non-trivial path that goes
from $c$ to $a$ in $T$. The containment relations among $M$ are preserved
and no containment relation is created or destroyed with respect to the elements
not in $M$.
~
\\(2) Suppose now that the trivial path of $z$ is a common extremity for all the
elements of $M$ and $M$ is one-sided (see Figure \ref{fig:CliqueModule}$(ii)$).
We proceed as in the previous case: we consider a vertex $b$ of $T$ that is a neighbor
of $a$ and such that all the paths of $M$ except for $z$ pass through $b$.
Since $M$ is a clique, it admits at most one element represented by
a trivial path, so such a vertex $b$ exists. We then subdivide the edge $a,b$ by adding
a vertex $c$, and the path of $z$ now goes from $a$ to $c$.
Note that the technique still works if some paths of $M$ continue after $a$.
~
\\(3) Suppose now that the trivial path of $z$ is the common extremity for
some paths of the module in a two-sided manner (see Figure \ref{fig:CliqueModule}$(iii)$).
Let $b$ and $c$ be two vertices of $T$ that are neighbors of $a$, such that $b$ and $c$ lie
on the path of $x$, $x$ being an element not in $M$ whose path contains all the paths of $M$.
We can partition the elements of $M$ into three sets:
$B$, the elements whose paths arrive at $a$ and pass through $b$;
$C$, defined in a similar way but \emph{w.r.t.} $c$ instead of $b$;
and $A$, the elements whose paths pass through both $b$ and $c$. This time we need
to subdivide the edges $a,b$ and $a,c$ of $T$. We add a vertex $i$ between $a$ and
$b$ and a vertex $j$ between $a$ and $c$. Then it suffices to extend the paths of $B$ until
$j$ and the paths of $C$ until $i$. The path of $z$ now goes from $i$ to $j$. By subdividing
the edges $a,b$ and $a,c$ several times, we can make sure that all
the extremities are distinct.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CliqueModule}
\end{center}
\caption{Representation of clique modules with trivial paths.}
\label{fig:CliqueModule}
\end{figure}
\begin{lem}
\label{lem:StableModuleTrivialPath}
Let $M$ be a strong CI stable module of a dually-CPT poset ${\mathbf P}$.
If an element $z$ of $M$ is represented
by a trivial path in a representation $\Model{{\mathbf P}}$, then there exists a representation
$\Model[']{{\mathbf P}}$ where $z$ is represented by a non-trivial path.
\end{lem}
\begin{proof}
Let us first remark that in a strong stable module, several elements can be
represented by trivial paths in a representation $\Model{{\mathbf P}}$. In addition,
if an element of $M$ is represented by a trivial path, this trivial
path is disjoint from all the other paths representing the elements of $M$.
Let $z$ be such an element.
We will transform $\Model{{\mathbf P}}$ such that all the elements of $M$ represented by trivial paths
in $\Model{{\mathbf P}}$ will be represented by non-trivial paths. Let $a$ be the vertex of $T$
that hosts the path of $z$. Thanks to Lemma \ref{lem:ModuleContained}, we know
that there exists an element $x$ of ${\mathbf P}$
such that in $\Model{{\mathbf P}}$ the paths of the elements of $M$
are contained in the path of $x$. Since $M$ is a non-trivial module, it contains
at least two elements; hence in $\Model{{\mathbf P}}$ there exists a vertex $b$ of $T$, adjacent
to $a$, that is contained in all the paths of the elements not in $M$ that
contain $M$: such a path has to contain $\Path{z}$ as well as the paths of all the other elements of $M$.
Let us denote by $U=\{u_1,u_2,\ldots,u_k\}$ the elements of $M$ that are represented
by trivial paths in $\Model{{\mathbf P}}$.
To obtain an equivalent representation $\Model[']{{\mathbf P}}$, we subdivide $2k-1$ times the edge
$a,b$.
We then rename $a$ as $a_1$, and we
number the newly created vertices $a_2,a_3,\ldots,a_{2k}$ (the transformation
is presented in Figure \ref{fig:StableModule}). In this new
representation each element $u_i$ of $U$ is replaced by a path
that goes from $a_{i}$ to $a_{k+i}$ in $T$.
It remains to prove that this representation is equivalent. First observe that
for any element $x$ comparable to $M$, its path in $\Model{{\mathbf P}}$ contains all the paths of the
elements of $M$. By the choice of the vertex $b$ used to perform the transformation,
any path of such an element $x$ passes through the edge $a,b$ in $\Model{{\mathbf P}}$.
Since we subdivided this edge to obtain $\Model[']{{\mathbf P}}$, this path still passes through
$a$ and $b$,
and through all the vertices introduced by the transformation.
Now, for any element $y$ not comparable to $M$, we know by
Lemma \ref{lem:PassingThroughA} that its path does not pass through $a$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics{StableModule}
\end{center}
\caption{$(i)$ Representation of a stable module with elements represented
by trivial paths; $(ii)$ transformation to eliminate trivial paths
from the representation.}
\label{fig:StableModule}
\end{figure}
\begin{lem}
\label{lem:PrimeModuleTrivialPath}
Let $M$ be a strong CI prime module of a dually-CPT poset ${\mathbf P}$.
If an element $z$ of $M$ is represented
by a trivial path in a representation $\Model{{\mathbf P}}$, then there exists a representation
$\Model[']{{\mathbf P}}$ where $z$ is represented by a non-trivial path.
\end{lem}
\begin{proof}
For this proof, we consider three cases: (1) either the trivial path $\Path{z}$ of $z$ is
properly contained in all the paths of the elements of the module $M$
(\emph{i.e.} $\Path{z}$ is not an extremity of
any path of an element of $M$), or (2)
there exist at least two elements $q$ and $r$ of $M$ such that $\Path{z}$ is the right bound
of $\Path{q}$ and the left bound of $\Path{r}$, or (3) the path $\Path{z}$ is the right
(respectively left) bound for some paths representing elements of $M$,
and is not the left (respectively right) bound of the path of any element of $M$.
These three cases are illustrated in Figure
\ref{fig:PrimeModuleTrivialPath}$(i)-(iii)$.
~
\\ \noindent
{ (1)}
Let $a$ be the vertex of $T$ that hosts $\Path{z}$, the trivial path representing $z$.
By hypothesis, all the paths that represent the elements of $M$ properly contain
$\Path{z}$ and thus pass through vertex $a$. Since it is a proper containment, no
path of elements of $M$ (other than $z$) starts or finishes at $a$. Thus
$a$ admits at least one neighbor $b$ in $T$ such that all the paths
that represent elements of $M$, except for $z$, pass through $b$. To obtain
a new representation $\Model[']{{\mathbf P}}$, we subdivide the edge $a,b$ by adding
a vertex $d$. Then $\Path{z}$ in $\Model[']{{\mathbf P}}$ is replaced by the path $a,d$.
(See Figure \ref{fig:PrimeModuleTrivialPath}$(iv)$).
Since the representation of $\Path{z}$ is the only modification of the
representation, by the previous discussion all the paths that represent the
elements of $M$ pass through $a$ and $b$ and as a consequence pass
through $a$ and $d$ since $d$ is in between $a$ and $b$.
By Lemma \ref{lem:PassingThroughA} we know that all the paths of the elements not in $M$ that
pass through $a$ will also contain all the paths of $M$. Hence the modification of $\Path{z}$
preserves the containment relation of $\Model{{\mathbf P}}$.
~
\\ \noindent
{(2)}
Let us now consider that there exist at least two elements $q$ and $r$ of $M$
such that in $\Model{{\mathbf P}}$, the vertex $a$ is the right bound of
the path $\Path{q}$ and the left bound of the path $\Path{r}$ (see Figure
\ref{fig:PrimeModuleTrivialPath}$(ii)$).
Let us denote by $L$ the set of elements of $M$ for which $a$ is the right bound
in the representation $\Model{{\mathbf P}}$ and similarly let us denote by $R$ the set of
elements of $M$ for which $a$ is the left bound in $\Model{{\mathbf P}}$. Let us remark
that $L\cap R = {\varnothing}$ and that $M \setminus (L\cup R)$ might
be non-empty. Let $b$ be the neighbor of $a$ in $T$ such that the paths of
the elements of $L$ pass through $b$, and similarly let $c$ be the neighbor
of $a$ in $T$ such that the paths of the elements of $R$ pass through $c$.
To obtain a new representation $\Model[']{{\mathbf P}}$, we subdivide the edge $a,b$ $|R|+1$ times,
and the edge $a,c$ $|L|+1$ times. The added vertices are called $ab_{i}$ for the vertices
between $a$ and $b$, and $ac_{j}$ for the vertices between $a$ and $c$.
Let $ab_{1}$ and $ac_{1}$ be the neighbors of $a$
in $\Model[']{{\mathbf P}}$. The path $\Path{z}$ now goes from $ab_{1}$ to $ac_{1}$.
The left bounds
of the paths of the elements of $R$ are moved onto the vertices $ab_{i}$; their positions
are chosen so as to preserve the containment relation. We proceed symmetrically
for the paths of the elements of $L$. It remains to prove that the obtained representation
still corresponds to ${\mathbf P}$. Again, we know by Lemma \ref{lem:PassingThroughA} that no
path of an element not comparable to $M$ passes through $a$; by construction
this remains valid for $a$ and for all the newly introduced vertices. For any other path,
its relation to $\Path{z}$ and to the paths of the elements of $L$ and $R$ is unchanged.
If the path $\Path{s}$ of an element $s$ contained the path $\Path{l}$ of an element
$l$ of $L$ in $\Model{{\mathbf P}}$, it is still the case in $\Model[']{{\mathbf P}}$: the left
bound
of $\Path{l}$ is contained in $\Path{s}$, and the right bound of $\Path{s}$ lies to the
right
of $c$ in $\Model{{\mathbf P}}$, a property preserved in $\Model[']{{\mathbf P}}$. Similarly,
if the paths $\Path{s}$ and $\Path{l}$ overlapped in $\Model{{\mathbf P}}$, they still
overlap in $\Model[']{{\mathbf P}}$.
~
\\ \noindent
{(3)}
Since $\Path{z}$ is the right (\emph{resp.} left) bound of some paths representing some
elements of $M$,
and is not the left (\emph{resp.} right) bound of any other elements of $M$,
there exists a vertex
$b$ in $T$ adjacent to $a$ and such that all the paths representing elements of $M$ that end
at $a$
pass through $b$. To obtain the new representation $\Model[']{{\mathbf P}}$, it suffices
to subdivide this edge once.
Let $c$ be the newly introduced vertex. Then the trivial path $\Path{z}$ of $\Model{{\mathbf P}}$ is
replaced by
a path going from $a$ to $c$. By the transformation, we can observe that all the paths
that contained $\Path{z}$ in $\Model{{\mathbf P}}$ still contain $\Path{z}$
in $\Model[']{{\mathbf P}}$. Indeed, let $s$ be an element of ${\mathbf P}$ such that $\Path{z} \subset \Path{s}$ in
$\Model{{\mathbf P}}$.
Since $\Path{s}$ contained $\Path{z}$, it had to pass through $a$ and $b$; thus, after
subdividing
$a,b$, this path passes through $a$ and $c$, the added
vertex, in $\Model[']{{\mathbf P}}$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics{PrimeModule}
\end{center}
\caption{Representation of prime modules with the element $z$ represented by a
trivial path.}
\label{fig:PrimeModuleTrivialPath}
\end{figure}
\begin{thm}
If ${\mathbf P}$ is dually-CPT and in a representation $\Model{{\mathbf P}}$ some elements of
strong modules are represented by trivial paths, then there exists an equivalent
representation $\Model[']{{\mathbf P}}$ where all the paths representing elements of strong modules
are non-trivial paths.
\end{thm}
\begin{proof}
It is a direct consequence of Lemmas \ref{lem:CliqueModuleTrivialPath},
\ref{lem:StableModuleTrivialPath} and
\ref{lem:PrimeModuleTrivialPath} and the fact that each time a trivial path
is replaced by a non-trivial one, no trivial path is created in $\Model[']{{\mathbf P}}$.
\end{proof}
From the preceding theorem, we know
how to obtain a representation of a dually-CPT poset where
all the elements contained in non-trivial strong modules
are represented by non-trivial paths. In
such a representation, some elements that do not belong
to non-trivial strong modules might still be represented by trivial paths.
\section{Ending of modules onto trivial paths}
\label{sec:ModuleVsTrivPath}
In the previous section we proved that for a dually-CPT poset,
one can always obtain a representation
where no element of a strong module is represented by a trivial path.
It therefore remains to consider
how the paths that represent a strong module $M$ can connect to
an element $z$, not contained in a strong module, where $z$ is represented
by a trivial path in the representation $\Model{{\mathbf P}}$.
Since we need to reconfigure the containment
relation inside the module, this operation could be prevented or constrained if the
trivial path is badly placed. When the trivial path lies in the middle of the paths of
the module, it is easy to reconfigure the containment relation. In the opposite
case, when all the paths representing elements of the module arrive at the trivial path,
we cannot perform the intended operation as planned. In this section, we identify
the problematic situations and show how to overcome them. As in the previous
section, we perform local changes to the representation to suppress the problematic cases.
When the paths that represent elements of a module are connected to
a trivial path in a representation, several configurations can arise.
The most favorable one is when the trivial path is properly contained
in the paths of the module (\emph{i.e.} the trivial path
does not lie on any extremity of a path of the module).
This is in fact the configuration we aim to obtain. The other two
configurations are when all the paths have an extremity that
ends at the trivial path, or when only some of them do.
In most cases we will be able to reconfigure our representation into
one that is favorable to our purpose.
\subsection{Complete ending of a module on a trivial path}
Let us assume that all the paths in $\Model{{\mathbf P}}$ corresponding to elements of $M$
have an extremity at a vertex $a$ of the host tree $T$.
In that case, there are several possibilities:
either all the paths that represent $M$ arrive at $a$ by passing through a vertex
$b$ of $T$ such that $a,b$ is an edge of $T$,
or
there is another vertex $c$, a neighbor of $a$ in $T$ different from $b$,
such that some paths of the module pass through $c$.
In this section, even if it is not explicitly stated, the representation
of the module $M$ will contain the trivial path of $z$ located at the vertex $a$ of $T$.
\begin{rem}
If a strong module of a dually-CPT poset ${\mathbf P}$ is two-sided
in a representation $\Model{{\mathbf P}}$, then the comparability graph induced by the module
is not connected. Hence the strong module is a stable module.
\end{rem}
\begin{lem}
Let $M$ be a strong module of a dually-CPT poset ${\mathbf P}$. If $M$ is one-sided
in a representation \Model{{\mathbf P}} and the poset induced by $M$ is connected,
then $M$ is a clique module.
\end{lem}
\begin{proof}
If the graph induced by $M$ is connected, $M$ is either a clique module
or a prime module.
If $M$ is a clique module, then there is nothing to prove.
If $M$ is a prime module, then the graph induced by $M$
necessarily contains an induced $P_4$. Let us show that it is not
possible to obtain a CPT representation of a $P_4$ where all the paths
end at a same vertex $a$ of the host tree $T$.
Consider the $P_4$ as presented in Figure \ref{fig:P4}$(i)$,
with the containment relation represented in Figure \ref{fig:P4}$(ii)$.
For a contradiction, let us assume that such a representation exists.
Since the paths of $2$ and $4$ have to contain the path of $3$, and all these
paths have to arrive at the vertex $a$ of the host tree, we have a configuration
similar to the one depicted in Figure \ref{fig:P4}$(iii)$, and a part of
the host tree is depicted in Figure \ref{fig:P4}($iv$).
Since $2$ and $4$ are not adjacent, their paths have to diverge in $T$.
Call $x$ the vertex of $T$ where these paths diverge.
It remains to represent the path of $1$. Since $1$ is adjacent to $2$ but not to
$4$, call $y$ the vertex of $T$ where the path of $1$ begins.
The vertex $y$ has to lie in the part of the path of $2$ not shared with the path of $4$
(see Figure \ref{fig:P4}($iv$)), and this path, by hypothesis, has to go all the way to
$a$. But in that case it has to contain the path of $3$, a contradiction.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics{P4}
\end{center}
\caption{$(i)$ a $P_4$, $(ii)$ a CI representation of $P_4$, $(iii)$ tentative representation
with
all the paths arriving at a vertex, $(iv)$ the host tree of the tentative
representation.\label{fig:P4}}
\end{figure}
We have proven that if, in a representation, all the paths of a strong module
arrive at the same vertex of the host tree, then the module is either a clique or a stable module.
We now consider in which cases we can obtain an alternative representation
where the paths do not all arrive at the same vertex of the host tree.
When the modification is possible, we will show how by starting from $\Model{{\mathbf P}}$ one
can obtain an equivalent representation $\Model[']{{\mathbf P}}$, that is,
a containment representation that still corresponds to ${\mathbf P}$.
\begin{lem}
\label{lem:ModNotIncluded}
Let $M$ be a strong module of a dually-CPT poset ${\mathbf P}$ and $\Model{{\mathbf P}}$
a representation of
${\mathbf P}$ where all the paths of $M$ arrive at the same vertex. If there is no
element of ${\mathbf P}\setminus M$ that contains all the elements of $M$, then there exists
an alternative representation $\Model[']{{\mathbf P}}$ of ${\mathbf P}$ where the
paths of $M$ have pairwise distinct endpoints.
\end{lem}
\begin{proof}
Let us assume that all the paths of a strong module $M$ arrive at a vertex
$a$ in the representation $\Model{{\mathbf P}}$. If there is no element of ${\mathbf P}\setminus M$ that
contains all the paths of $M$, then we can add a new branch to the host tree
starting at $a$ and ending at $b$ (see Figure \ref{fig:ModNotIncluded}).
Let us denote by $k$ the cardinality of $M$. In order to guarantee that each path
ends on a dedicated vertex, the new branch needs to have at least $k$ new
vertices. It is easy to make sure that the containment relation inside the module
is not altered in this new representation. Moreover, the previous containments of
${\mathbf P}$ are preserved by this modification, and no new containment is added,
since the branch only contains paths of $M$.
\end{proof}
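The branch-addition argument above can be illustrated with a small executable toy example (our own encoding, not part of the proof): a path in the host tree is recorded by its vertex set, and for two paths in the same tree, proper containment of paths coincides with proper containment of their vertex sets.

```python
# Toy check of the lemma's transformation, on a hypothetical small instance.
def containment_pairs(rep):
    """All pairs (p, q) with the path of p properly contained in that of q."""
    return {(p, q) for p in rep for q in rep if p != q and rep[p] < rep[q]}

# Host tree: the path 0-1-2-3, with a = 3.  The module M = {m1, m2} has both
# of its paths arriving at a, and no path of the representation contains them.
rep = {
    "m1": frozenset({2, 3}),
    "m2": frozenset({1, 2, 3}),
    "z":  frozenset({3}),          # the trivial path located at a
}
before = containment_pairs(rep)

# Add a new branch 3-4-5 with |M| = 2 new vertices, and give each path of M
# its own endpoint on the branch.
rep2 = dict(rep)
rep2["m1"] = rep["m1"] | frozenset({4})
rep2["m2"] = rep["m2"] | frozenset({4, 5})

# The containment relation of the representation is unchanged.
assert containment_pairs(rep2) == before
```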
\begin{figure}
\begin{center}
\includegraphics{ModNotIncluded}
\end{center}
\caption{Example of the modification on a representation of a poset ${\mathbf P}$.}
\label{fig:ModNotIncluded}
\end{figure}
We now consider the case when there is at least one element $x$
not in $M$ that is greater than all
the elements of $M$.
In that case $M$ is either one-sided or two-sided in a representation.
Let us start with the latter case.
\begin{lem}
\label{lem:Stable-Path}
Let $M$ be a strong stable module of a dually-CPT poset ${\mathbf P}$
and let $x$ be an element of
${\mathbf P}\setminus M$ that contains all the elements of $M$.
Let us assume that in a representation $\Model{{\mathbf P}}$ all the
paths of $M$ arrive at a vertex $a$ of $T$. Then there
exists an equivalent representation $\Model[']{{\mathbf P}}$ where all
the endpoints of the paths of $M$ near $a$ are distinct.
\end{lem}
\begin{proof}
By hypothesis, the elements of $M$ are all contained in the element $x$
of ${\mathbf P}$, so in any representation $\Model{{\mathbf P}}$ of ${\mathbf P}$ the
union of the paths of $M$ is contained in the path of $x$ and is therefore itself a path.
If in a representation
$\Model{{\mathbf P}}$ of ${\mathbf P}$ all the paths of $M$ arrive at $a$,
let $b$ and $c$ be the
immediate neighbors of $a$ on $T$ along the path that hosts all the paths of $M$.
%
Since the strong module considered is stable and, in the representation,
every path of $M$ lies under the path of $x$, the module is two-sided at $a$.
Since $M$ is two-sided in the representation, its elements
can be partitioned into two sets $B$ and $C$ as follows:
An element $r$ is in $B$ if its path in $\Model{{\mathbf P}}$ passes by the vertex $b$.
Similarly, an element $s$ is in $C$ if its path in $\Model{{\mathbf P}}$ passes by $c$
(see Figure \ref{fig:Stable2-sided}).
%
To obtain $\Model[']{{\mathbf P}}$ it suffices
to subdivide the edges $a,b$ and $a,c$ of $T$. All
the paths of the elements of $B$ that previously ended at $a$
will now end between $a$ and $c$. Hence it is necessary to add $|B|$ new
vertices between $a$ and $c$. In a symmetric manner, the paths of
the elements of $C$ will be elongated to end on a new vertex between
$a$ and $b$; thus it is necessary to add $|C|$ new vertices between
$a$ and $b$. It is simple to see that the introduced modification
does not alter the containment relation. Any path that contained
all the paths of $M$ still contains all the paths
of $M$, and any path that crossed the section of the tree spanned
by the paths of $M$ without containing them still does not
contain them.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics{Stable2-sided}
\end{center}
\caption{Modification of the representation of a two-sided stable module.}
\label{fig:Stable2-sided}
\end{figure}
\begin{lem}
\label{lem:Clique-Proper}
Let $M$ be a strong clique module of a dually-CPT poset ${\mathbf P}$ and let $x$ be an element of
${\mathbf P}\setminus M$ that contains all the elements of $M$.
If in a representation $\Model{{\mathbf P}}$ all the paths of $M$ arrive
at a vertex $a$, then $M$ does not contain any other strong module.
\end{lem}
\begin{proof}
Because of the element $x$, the union of all the
paths of the elements of $M$ in $\Model{{\mathbf P}}$ is included in
the path of $x$ and hence itself forms a path. Since all these paths
are bounded at $a$, for any pair of elements $p$ and $q$ of $M$,
either the path of $p$ is contained in the path of $q$ or the converse.
Hence the elements of $M$ are pairwise comparable, so $M$ induces a total
order and, as a consequence, it does not contain any other strong module.
\end{proof}
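The nesting observation in this proof can be checked mechanically. The following Python sketch (our own illustration; the encoding of the paths as intervals $[l,a]$ on a line, with $a=0$, is an assumption of the sketch) verifies that subpaths sharing the endpoint $a$ are pairwise comparable under containment, so their comparability graph is always complete.

```python
from itertools import permutations

def comparability_edges(lefts, a=0):
    """Edges of the comparability graph of the intervals [l, a], l in lefts,
    under the containment relation."""
    ivs = [(l, a) for l in lefts]
    edges = set()
    for i in range(len(ivs)):
        for j in range(i + 1, len(ivs)):
            (l1, r1), (l2, r2) = ivs[i], ivs[j]
            if (l1 <= l2 and r2 <= r1) or (l2 <= l1 and r1 <= r2):
                edges.add((i, j))
    return edges

# Intervals sharing the right endpoint a are nested, so every pair is
# comparable: the comparability graph is the complete graph K4, never a P4.
complete = {(i, j) for i in range(4) for j in range(i + 1, 4)}
for lefts in permutations([-4, -3, -2, -1]):
    assert comparability_edges(list(lefts)) == complete
```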
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Clique-Path}
\end{center}
\caption{Configuration of a clique path. The set of paths $C$ represents the paths of a strong clique module $M$.}
\label{fig:Clique-Path}
\end{figure}
Let $M$ be a strong clique module with representation $\Model{{\mathbf P}}$
where all the paths of the elements of $M$ stop at a vertex $a$.
We say that $M$ is \emph{free} in $\Model{{\mathbf P}}$ if there is at least one vertex
$b$ of $T$ such that $a,b$ is an edge of $T$, no path of $M$ passes through
$b$ and all the paths that contain the paths of $M$ pass through $b$.
(See Figure \ref{fig:Clique-Path} $(i)$ and $(ii)$.)
\begin{lem}
\label{lem:Clique-Path}
Let $M$ be a strong clique module of a dually-CPT poset ${\mathbf P}$,
and let $x$ be an element of
${\mathbf P}\setminus M$ that contains all the elements of $M$.
If $M$ is free in a representation $\Model{{\mathbf P}}$ where all the paths of $M$ arrive
at a vertex $a$, then we can find a representation
$\Model[']{{\mathbf P}}$ where all the endpoints of $M$ arrive on different
vertices of $T$.
\end{lem}
\begin{proof}
Since $M$ is free in $\Model{{\mathbf P}}$ we can re-use the technique used in Lemma
\ref{lem:Stable-Path} by subdividing the edge $a,b$ of $T$.
\end{proof}
Thanks to Lemmas \ref{lem:ModNotIncluded}, \ref{lem:Stable-Path},
and \ref{lem:Clique-Path},
we know how to modify a representation
in almost all cases.
However, one case is not covered, namely
when the module is a clique and it is blocked.
We say that a strong clique module bounded at a vertex $a$ in a representation
$\Model{{\mathbf P}}$ is \emph{blocked} if it is not free.
There are two reasons why $M$ may be blocked:
(1) a path that contains all the paths of $M$
also stops at $a$; or
(2) there are two elements $x$ and $y$ that contain
all the elements of $M$ and whose paths in $\Model{{\mathbf P}}$ diverge
at $a$. (See Figure \ref{fig:Clique-Path}$(iii)$.)
\begin{rem}
\label{rem:CliqueBlocked}
Let $M$ be a blocked strong clique module of a dually-CPT poset ${\mathbf P}$.
From Lemma \ref{lem:Clique-Proper}, we know that it does not contain
any other strong module. Hence, a reconfiguration
of this subposet is just a matter of relabelling the elements.
\end{rem}
\subsection{Partial ending of a module on a trivial path}
In the previous section, we proved that whenever a module $M$ is connected to
an element $z$ of ${\mathbf P}$ represented by a trivial path in a representation
$\Model{{\mathbf P}}$, and all the paths that represent the elements of $M$ end at this trivial path, we
can either alter the representation to ensure that the paths
do not end on that trivial path, or the module is a clique that does not
contain any other strong module, in which case relabelling its elements
makes it possible to alter the containment relation inside the module.
If, in the completely opposite situation, a module $M$ is connected to an element
$z$ represented by a trivial path but no path that represents an element of
$M$ ends at this trivial path, then changing the containment relation of the
module creates no problem.
The last case to consider is when $M$ is connected to a trivial
path, but only some paths of $M$ (not all) end at this trivial path.
We will prove that in that case an equivalent representation, where no path
of $M$ ends at this trivial path, can be obtained.
\begin{lem}
\label{lem:PartialEndingModule}
Let $M$ be a strong module of a dually-CPT poset ${\mathbf P}$ connected to an element $z$ ($z\notin M$).
If in a representation $\Model{{\mathbf P}}$ the element $z$ is represented by
a trivial path $\Path{z}$ and the paths of some elements of $M$ end at the path
of $\Path{z}$ and some other paths of elements of $M$
properly contain $\Path{z}$, then there exists an
equivalent representation $\Model[']{{\mathbf P}}$ where no element of $M$ ends
at a trivial path.
\end{lem}
\begin{proof}
Let $I$ denote the set of elements not in $M$ such
that the paths of the elements of $I$ are contained in
the paths of the elements of $M$. In the representation
$\Model{{\mathbf P}}$, all the paths that represent the elements of $I$
are contained in $\bigcap_{m\in M} \Path{m}$.
Since by hypothesis not all the paths of $M$ end at
a trivial path, there are at most two trivial paths at which
paths of elements of $M$ end; call these trivial paths
$\Path{y}$ and $\Path{z}$.
Let us assume that the part common to all the paths of $M$
in $\Model{{\mathbf P}}$ lies on a horizontal line and, \emph{w.l.o.g.},
that $\Path{y}$ is the leftmost
and $\Path{z}$ the rightmost of this common part.
We assume further, in the
representation $\Model{{\mathbf P}}$, that $\Path{z}$ lies on vertex $a$ of $T$
and $\Path{y}$ lies on vertex $b$ of $T$.
We denote by $L$ (resp.~$R$) the set of all elements of $M$
whose paths in $\Model{{\mathbf P}}$ end at $b$ (resp.~at $a$). Note that
there is at most one element of $M$ that belongs
to both $L$ and $R$, since the containment relation is proper.
There are two cases to consider: (1) either there is no element
$x$ such that all the paths of $M$ are contained in the
path of $x$, or (2) such an element $x$ exists.
~\\(1) For the first case, let us assume that such an element
does not exist. Hence there is no path in $\Model{{\mathbf P}}$ that contains
any path of the elements of $M$. In that case, to obtain
an equivalent representation, in $T$ we can add one path
with $|M|$ new vertices connected to $a$ and another path
with $|M|$ new vertices connected to $b$. Since the
poset induced by $M$ is CI, it suffices to represent
this module as a containment of intervals using these
new branches for the endpoints. The transformation
process is presented in Figure \ref{fig:PartialEndingCase1}.
The containment relations between elements of $R$ (resp.~$L$)
and $I$ remain unchanged. Moreover, for any element $q$ not connected
to $M$, since the endpoints of the paths of the elements of $M$
have been relocated in the two new branches, and $\Path{q}$ does not contain any of the new
branches in $\Model[']{{\mathbf P}}$, there is no
containment relation between $\Path{q}$ and the paths
of the elements of $M$.
~\\(2) Let us now consider the case when there is an element $x$
not in $M$ such that in $\Model{{\mathbf P}}$ the path of $x$ contains all
the paths of the elements of $M$. In the host tree $T$ we denote
by $c$ the neighbor of $a$ such that no path of $R$ passes through $c$
but some paths of elements of $M$ do (by our initial hypothesis).
Let $d$ be the neighbor of $b$ in $T$ such that paths of some
elements of $M$ pass through $d$ but no path of $L$ does.
To obtain an alternative representation $\Model[']{{\mathbf P}}$
we subdivide the edge $a,c$ $|R|$ times and subdivide the edge $b,d$ $|L|$ times.
(This transformation
is presented in Figure \ref{fig:PartialEndingCase2}). Then it is just
a matter of extending the paths of the elements in $R$ such
that they end on a vertex located between $a$ and $c$.
For each element of $R$, its new ending vertex is determined
according to the containment relation in $R$.
For the elements of $L$, we proceed in a similar manner.
It remains to prove that the new representation still represents the
poset ${\mathbf P}$. The only paths that are transformed are the paths that
correspond to elements of $R$ and $L$. Without loss of generality, let $l$ be an element of $R$
and let $\Path{l}$ be its path in $\Model[']{{\mathbf P}}$. Since $\Path{l}$ has only been extended,
all the paths in $\Model{{\mathbf P}}$ that were contained in $\Path{l}$ remain contained
in it in $\Model[']{{\mathbf P}}$. In addition, since the extension occurred on the subdivided
edge between $a$ and $c$ (or between $b$ and $d$ for elements of $L$), no new containment
of the form $\Path{k}\subset\Path{l}$ is created.
Conversely, let $k$ be an element of ${\mathbf P}$ with $\Path{l} \subset \Path{k}$ in $\Model{{\mathbf P}}$;
then $\Path{l} \subset \Path{k}$ in $\Model[']{{\mathbf P}}$. Indeed, if $k$ is an element of $R$, the
transformation ensures that the containment relation is preserved; and if $k$ is not an
element of $R$, then in $\Model{{\mathbf P}}$ the path $\Path{k}$ passed through the vertex $c$ of $T$,
and since the extension of $\Path{l}$ does not reach $c$, the path $\Path{l}$ is still contained
in $\Path{k}$ in $\Model[']{{\mathbf P}}$.
Let us now consider an element $q$ such that $\Path{q}$ intersects $\Path{l}$ but there
is no containment relation in $\Model{{\mathbf P}}$. If $\Path{l} \cup \Path{q}$ is not a path in
$\Model{{\mathbf P}}$
then it contains a claw pattern and this pattern will be preserved
in $\Model[']{{\mathbf P}}$.
Let us now consider the case when $\Path{l} \cup \Path{q}$ forms a path in
$\Model{{\mathbf P}}$.
If $\Path{q}$ passes through $a$ in $\Model{{\mathbf P}}$, then one of its endpoints lies between the
endpoints of $\Path{l}$. Thus this endpoint of $\Path{q}$ is to the left of $a$ in $\Model{{\mathbf P}}$,
and the other endpoint is at $c$ or to the right of $c$. Since $\Path{l}$ does not reach $c$ in
$\Model[']{{\mathbf P}}$,
the overlap relation is preserved in the new representation. If both paths were disjoint,
they remain disjoint in $\Model[']{{\mathbf P}}$.
\end{proof}
\begin{figure}[h!]
\begin{center}
\includegraphics{PartialEndingCase1}
\end{center}
\caption{Illustration of case (1) of Lemma \ref{lem:PartialEndingModule}.
Elements $1,2,3$ and $4$ are part of the module. The module is connected
to the elements represented by the paths in the box $B$. Elements $1$ and $3$ belong to $L$
and elements $2$ and $3$ belong to $R$.}
\label{fig:PartialEndingCase1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics{PartialEndingCase2}
\end{center}
\caption{The same example as in Figure \ref{fig:PartialEndingCase1}, but this time
there is an element $x$ not in $M$ that contains all the elements of $M$ and that
prevents performing the modification of case (1).}
\label{fig:PartialEndingCase2}
\end{figure}
From Lemmas \ref{lem:ModNotIncluded}, \ref{lem:Stable-Path}, \ref{lem:Clique-Path} and
Remark \ref{rem:CliqueBlocked}, we can summarize the results of this section
with the following theorem:
\begin{thm}
Let ${\mathbf P}$ be a dually-CPT poset. For each strong module $M$
of ${\mathbf P}$, either there exists a representation $\Model{{\mathbf P}}$
such that no path of $M$ ends on a trivial
path, or $M$ is a blocked clique module.
\label{thm:StandardizedRepresentation}
\end{thm}
We call a representation that fulfills the condition of the previous theorem
a \emph{normalized representation}.
\section{Substitution}
\label{sec:Substition}
The last step to obtain our main result is to prove
that for any dually-CPT poset ${\mathbf P}$ all the posets
$\mathcal{Q}=\{{\mathbf Q}_1,\ldots,{\mathbf Q}_l\}$ that are associated to ${\mathbf P}$
admit a CPT representation. Let us consider one particular poset ${\mathbf Q}$
of this set. If ${\mathbf Q}$ is associated to ${\mathbf P}$ it means by definition
that their underlying comparability graphs are identical. We assume
that ${\mathbf P}$ is not CI, otherwise the result already follows
from Theorem \ref{t:strong-property}.
We thus deduce, by Theorem \ref{t:dually-nprime}, that the quotient poset of
${\mathbf P}$ is not CI, and is therefore prime.
Since ${\mathbf P}$ and ${\mathbf Q}$ are associated,
by Property \ref{p:associated-posets} they admit the same
set of strong modules. The quotient ${\mathbf H}$ of ${\mathbf P}$ is
obtained by keeping one element of each strong maximal
module and the quotient ${\mathbf K}$ of ${\mathbf Q}$ is
either equal to ${\mathbf H}$ or to its dual ${\mathbf H}^d$. Let us assume
that ${\mathbf H}$ is equal to ${\mathbf K}$.
To obtain a representation for ${\mathbf Q}$, we will use the normalized
representation $\Model{{\mathbf P}}$ obtained for ${\mathbf P}$. From
$\Model{{\mathbf P}}$ it is immediate to obtain a representation $\Model{{\mathbf H}}$
for ${\mathbf H}$ as it suffices to keep one path for each strong
module of ${\mathbf P}$. In addition, since it is obtained by
removing paths from a normalized representation,
we can consider that all the paths that correspond
to strong modules which are not clique blocked
modules, do not end on trivial paths of other
elements. We will then show
that for such elements we can replace this path
by the representation of an arbitrary CI poset. Finally, to obtain
a CPT representation for ${\mathbf Q}$, it suffices to replace
each path that is a representative of a strong module
by a representation of the corresponding CI poset in ${\mathbf Q}$. For the
blocked clique modules, as they do not contain
other strong modules, they correspond to total orders;
hence the representation can be preserved, but the labelling
has to be changed to suit the total order in ${\mathbf Q}$.
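To make the decomposition underlying this strategy concrete, the following brute-force Python sketch (our own illustration, feasible only for very small graphs; it is not an efficient modular-decomposition algorithm) enumerates the modules of a comparability graph and keeps the strong ones, i.e., those that overlap no other module.

```python
from itertools import combinations

def modules(n, edges):
    """All modules of the graph on {0,...,n-1}: vertex sets M such that every
    vertex outside M is adjacent either to all of M or to none of M."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mods = []
    for k in range(1, n + 1):
        for m in combinations(range(n), k):
            s = set(m)
            if all(len(adj[v] & s) in (0, len(s))
                   for v in range(n) if v not in s):
                mods.append(frozenset(s))
    return mods

def strong_modules(n, edges):
    """Modules that overlap no other module (two sets overlap when they
    intersect but neither contains the other)."""
    mods = modules(n, edges)
    def overlap(x, y):
        return bool(x & y) and bool(x - y) and bool(y - x)
    return [m for m in mods if not any(overlap(m, o) for o in mods)]

# The path P4 is prime: its only strong modules are the trivial ones.
p4 = [(0, 1), (1, 2), (2, 3)]
assert sorted(map(sorted, strong_modules(4, p4))) == \
    [[0], [0, 1, 2, 3], [1], [2], [3]]
```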
Let ${v_0}$ be an element of ${\mathbf H}$ that is a representative of some maximal
strong module of ${\mathbf P}$ that is not a clique blocked module.
Let $\Path{v_0}=(x_1,x_2,\ldots,x_k)$ be its path in $\Model{{\mathbf H}}$.
We will assume that $k$ is at least $4$. We will show how to replace
$\Path{v_0}$ by a CI poset ${\mathbf N}$. Let $\Model{{\mathbf N}}=\{I_i\}_{1\leq i\leq n}$
be a CI representation of a poset ${\mathbf N}$ whose
elements are $u_1,u_2,\ldots, u_n$.
Assume that the intervals $I_i$ (subpaths of a path $I$) are non-trivial,
no two of them share an end vertex
and there is an edge $cd$ of $I$ contained in
the total intersection of the intervals $I_i$ -- this assumption
is guaranteed by Proposition \ref{p:compresing-CI-model}.
Name $a$ and $b$ the end vertices
of the interval union of the intervals $I_i$. Clearly $[c,d]\subset [a,b]$.
We also assume that
$a$, $b$, $c$ and $d$ are distinct, and that neither $c$ nor $d$ are end vertices of an
interval $I_i$.
\begin{process} \label{p:substitution}
The process of replacing
in the representation $\Model{{\mathbf H}}$ the path $\Path{v_0}$ by the
intervals $\{I_i\}_{1\leq i \leq n}$ of the representation $\Model{{\mathbf N}}$ consists of:
\begin{itemize}
\item[(i)] subdividing the edges $x_1 x_2$ and $x_{k-1} x_k$ of $T$
by adding $n-1$ vertices in each one.
\item[(ii)] subdividing the edge $c d$ of $I$ by adding as many vertices as there are in $T$
between $x_2$ and $x_{k-1}$.
\item[(iii)] removing from $\Model{{\mathbf H}}$ the path $\Path{v_0}$ and embedding in its place the
intervals of $\Model{{\mathbf N}}$ in such a way that the vertices $a$, $c$, $d$, $b$ and all others
between them match
the vertices $x_1$, $x_2$, $x_{k-1}$, $x_k$ and all others between them,
respectively,
as shown in Figure \ref{f:grande}.
\end{itemize}
\end{process}
\begin{figure}
\centering
\includegraphics[width=15cm]{figura_grande.pdf}
\caption{Description of the replacement process.}\label{f:grande}
\end{figure}
\begin{lem}\label{l:substitution-1}
If in $\Model{{\mathbf H}}$ the path $\Path{v_0}$ that represents a module
of a dually-CPT poset ${\mathbf P}$ does not end on trivial paths, then
we can obtain the representation $\Model{{\mathbf H}_{{\mathbf N}\rightarrow v_0}}$
by replacing, in $\Model{{\mathbf H}}$, the path $\Path{v_0}$ by the
intervals $\{I_i\}_{1\leq i \leq n}$ of the representation $\Model{{\mathbf N}}$. If
any of the paths $I_{i}$ contains (resp.~is contained in) a path $\Path{v}$, then all
the paths $I_i$ contain
(resp.~are contained in) $\Path{v}$.
Moreover, a path $\Path{v}$ of $\Model{{\mathbf H}}$ contains (is contained in) $\Path{v_0}$
if and only if $\Path{v}$ contains (is contained in) every one of the intervals $I_i$ in
$\Model{{\mathbf H}_{{\mathbf N}\rightarrow v_0}}$.
\end{lem}
\begin{proof}
This result is a direct consequence of two facts: first, that in
$\Model{{\mathbf H}_{{\mathbf N}\rightarrow v_0}}$
no path $\Path{v}$ of $\Model{{\mathbf H}}$ has an end-vertex between $x_1$ and $x_2$,
nor between $x_{k-1}$ and $x_{k}$, and second, that all the intervals $I_i$ contain
the edge $cd$ of $I$, which is mapped onto the subpath between $x_2$ and $x_{k-1}$.
See Figure \ref{f:grande}.
\end{proof}
\begin{lem}\label{l:substitution-3}
If in $\Model{{\mathbf H}}$ the path $\Path{v_0}$, which represents a blocked clique module
of a dually-CPT poset ${\mathbf P}$, ends on a trivial path, then
we can obtain the representation $\Model{{\mathbf H}_{{\mathbf N}\rightarrow v_0}}$
by replacing $\Path{v_0}$ by a collection of paths that
represents a clique.
\end{lem}
\begin{proof}
Let us assume that $\Path{z}$ is the trivial path that
$\Path{v_0}$ ends on in $\Model{{\mathbf H}}$. Let us denote
by $a$ the vertex of the host tree that hosts $\Path{z}$.
Since the containment relation is proper, we can assume
that $\Path{v_0}$ passes through at least two vertices of the
host tree. One of the extremities of $\Path{v_0}$ is $a$; let us
call the other extremity $b$. Since $\Path{v_0}$ has at least
two vertices, there exists in the host tree a
vertex $c$ that is the immediate neighbor of $b$ on
the path going to $a$ (the vertex $c$ is possibly equal
to $a$). By subdividing an appropriate number of times
the edge $bc$ of the host tree, we can add as many
paths as we need to place a clique module. From this
transformation, it is easy to see that the containment
relation is preserved with respect to the module.
\end{proof}
We restate here our main theorem:
\begin{thm}
\label{thm:Main2}
A poset ${\mathbf P}$ is strongly-CPT if and only if it is dually-CPT.
\end{thm}
\begin{proof}
Let ${\mathbf H}={\mathbf P}/\mathcal{M}({\mathbf P})$ be the quotient poset,
where $\mathcal{M}({\mathbf P})=\left\{M_1, \ldots, M_k\right\}$ is the maximal modular partition
of ${\mathbf P}$.
Since ${\mathbf P}$ is a dually-$CPT$ poset and ${\mathbf H}$ is a
subposet of ${\mathbf P}$, both ${\mathbf H}$ and ${\mathbf H}^d$ admit a normalized $CPT$-representation.
If ${\mathbf H}$ is $CI$, by Remark \ref{r:strong} and Theorem \ref{t:dually-nprime}, ${\mathbf P}$ is $CI$ and
so strongly-$CPT$.
Thus let us assume that ${\mathbf H}$ is a prime dually-$CPT$ poset.
Let ${\mathbf Q}$ be an associated poset of ${\mathbf P}$ and let ${\mathbf K}$ be its quotient
poset.
Since ${\mathbf P}$ and ${\mathbf Q}$ are associated, an immediate
consequence is that ${\mathbf H}$ and ${\mathbf K}$ are associated;
in addition by hypothesis they are both prime,
hence by Theorem \ref{t:indecommposable-poset},
${\mathbf K}$ is either equal to ${\mathbf H}$ or to ${\mathbf H}^{d}$.
Let us assume, \emph{w.l.o.g.}, that ${\mathbf H}={\mathbf K}$.
We will prove that ${\mathbf Q}$ admits a $CPT$ representation.
By Theorem \ref{t:orientations}, we may assume \emph{w.l.o.g.} that ${\mathbf Q}$
is obtained by replacing in ${\mathbf H}$ each vertex $v_i$
of ${\mathbf H}$ by ${\mathbf Q}_i={\mathbf Q}(M_i)$.
By Proposition \ref{p:associated-posets}, ${\mathbf P}$ and
${\mathbf Q}$ possess the same strong modules and
by Theorem \ref{t:dually-nprime} since ${\mathbf P}$ is dually-CPT,
all the strong modules of ${\mathbf P}$ and ${\mathbf Q}$ are CI.
For each ${\mathbf Q}_i$ we have a $CI$ representation.
Let $\Model{{\mathbf H}}$ be a $CPT$ representation of ${\mathbf H}$,
obtained from a normalized representation of ${\mathbf P}$ by keeping
only one path for each maximal strong module of ${\mathbf P}$.
For each path $\Path{v_i}$ that corresponds to a module
$M_i$ of ${\mathbf Q}$, if $\Path{v_i}$ does not end on a trivial
path of $\Model{{\mathbf H}}$ then it corresponds to a module
which is not a blocked clique module, hence
by Lemma \ref{l:substitution-1}, we can replace $\Path{v_i}$
by a CI representation of ${\mathbf Q}_{i}$.
The only remaining case is when $\Path{v_i}$ ends on a trivial
path in $\Model{{\mathbf H}}$. In that case, it means that
it corresponds to a blocked clique module of ${\mathbf P}$ in the representation
$\Model{{\mathbf P}}$. Hence by Lemma \ref{l:substitution-3}, we can
replace $\Path{v_i}$ by a CI representation
of the maximal strong clique module ${\mathbf Q}_i$.
By proceeding in that way for each maximal strong module,
we are able to obtain a CPT representation $\Model{{\mathbf Q}}$ of ${\mathbf Q}$.
\end{proof}
% End of source document: ``On dually-CPT and strong-CPT posets'', arXiv:2204.04729.
\title{On connected components with many edges}
% arXiv:2111.13342
\begin{abstract}
We prove that if $H$ is a subgraph of a complete multipartite graph $G$, then $H$ contains a connected component $H'$ satisfying $|E(H')|\,|E(G)|\geq |E(H)|^2$. We use this to prove that every three-coloring of the edges of a complete graph contains a monochromatic connected subgraph with at least $1/6$ of the edges. We further show that such a coloring has a monochromatic circuit with a fraction $1/6-o(1)$ of the edges. This verifies a conjecture of Conlon and Tyomkyn. Moreover, for general $k$, we show that every $k$-coloring of the edges of $K_n$ contains a monochromatic connected subgraph with at least $\frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}$ edges.
\end{abstract}
\section{Introduction}
A classical observation of Erd\H{o}s and Rado is that in any two-coloring of the edges of the complete graph $K_n$, one of the color classes forms a connected graph. In \cite{gyarfas1977partition}, Gyárfás proves the following generalization of this observation: For any $k\geq 2$, in every $k$-coloring of the edges of $K_n$, there is a monochromatic connected component with at least $n/(k-1)$ vertices. This observation has since been extended in numerous ways, such as by replacing $K_n$ with a graph of high minimum degree \cite{gyarfas2017mindeg} or with a nearly-complete bipartite graph \cite{deblasio2020bipartite}, or adding a constraint on the diameter of the large monochromatic component \cite{Ruszinko2012diameter}.
See \cite{Gyarfas2011Survey} for a survey of earlier work on the subject.
The arguments used in this subject tend to focus on sparse spanning structures like double stars. As such, there is a surprising lack of progress on the corresponding question about edges in a monochromatic connected component. That is, what is the largest value of $M=M(n,k)$
such that every $k$-edge-coloring of $K_n$ has a monochromatic connected component with at least $M$ edges?
This question was raised by Conlon and Tyomkyn in \cite{conlon2021ramsey}, in the context of determining the multicolor Ramsey numbers for trails (see Section~\ref{subsec:trails} for the relevant definitions).
After showing that $M(n,2)=\frac{2}{9}n^2+o(n^2)$, they sketch a simple argument that shows $M(n,k)\geq \frac{1}{16k^2}n^2+O(n)$ for all $k$; with a slightly more careful analysis, their argument in fact yields $M(n,k)\geq \frac{1}{4k^2}n^2+O(n)$. In the other direction, they examine a construction of Gyárfás in \cite{Gyarfas2011Survey} to show that $M(n,k)\leq \frac{1}{2k(k-1)}n^2+O(n)$ for infinitely many values of $k$ (specifically, when $k-1$ is a prime power), conjecturing that this upper bound is tight in the case $k=3$.
In this note, we improve the general lower bound on $M(n,k)$, as well as a corresponding lower bound for the trail Ramsey problem, bringing it to asymptotically within a factor $1-O\left(\frac{1}{k^2}\right)$ of the upper bound for infinitely many values of $k$.
\begin{theorem}
\label{thm:colorlb}
For any $k\geq 2$, in every $k$-coloring of the edges of $K_n$, there is a monochromatic connected component with at least $\frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}$ edges; that is, $M(n,k)\geq \frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}$.
\end{theorem}
By building on the overarching ideas in \cite{Gyarfas2011Survey} and introducing some key new insights, we manage to strengthen this lower bound in the case $k=3$ to prove the tight lower bound conjectured by Conlon and Tyomkyn.
\begin{theorem}
\label{thm:color3}
In every $3$-coloring of the edges of $K_n$, there is a monochromatic component containing at least a sixth of the edges. That is, $M(n,3)\geq \lceil\frac{1}{6}\binom{n}{2}\rceil$. Moreover, for $n$ sufficiently large, this bound is sharp.
\end{theorem}
While equality holds in this bound for sufficiently large $n$, there are, as we will see from the proof, small values of $n$ for which $M(n,3)>\lceil\frac{1}{6}\binom{n}{2}\rceil$. In particular, equality holds for all $n\geq 18$, but not for $n=17$.
The key result we use is a new inequality that may be of independent interest. Given a subgraph $H$ of a complete multipartite graph $G$, it relates the edge counts of $G$ and $H$ to the largest edge count of a connected component $H'$ of $H$.
\begin{theorem}
\label{thm:main}
Let $G$ be a complete $r$-partite graph for some $r\geq 2$, and let $H$ be a subgraph of $G$. Then $H$ contains a connected component $H'$ satisfying
\[
|E(H')|\geq \frac{|E(H)|^2}{|E(G)|}.
\]
\end{theorem}
In effect, this result settles the density analogue of the coloring question of determining $M(n,k)$. Instead of partitioning the edges of a graph $G$ into color classes, we are fixing a subgraph $H$ of $G$ with a given fraction $\delta=\frac{|E(H)|}{|E(G)|}$ of its edges, and asking about the component of $H$ with the most edges. Theorem~\ref{thm:main} can then be restated as: if $|E(H)| = \delta |E(G)|$, then $H$ contains a connected component $H'$ with $|E(H')|\geq \delta^2 |E(G)|$. Equality can be attained asymptotically when $\delta=\frac{1}{k}$ for any positive integer $k\geq 2$: Let $V(G)=V_1\cup\cdots\cup V_r$ be an $r$-partition of $V(G)$, split each $V_i$ into $k$ (roughly) equally sized vertex sets $\{V_{i,j}\}_{j=1}^k$, and for $1\leq j\leq k$, let $H_j$ be the subgraph of $G$ induced on $\bigcup_{i=1}^r V_{i,j}$. Then indeed, $H=\bigcup_{j=1}^k H_j$ is a subgraph of $G$ whose components have edge counts
\[
|E(H_j)|=\frac{1}{k^2}|E(G)|+O(n) = \frac{|E(H)|^2}{|E(G)|} + O(n).
\]
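As an illustrative aside (not part of the original argument), the tightness of this construction can be checked mechanically: when each part size is divisible by $k$, the components $H_j$ attain $|E(H)|^2/|E(G)|$ exactly, with no $O(n)$ error. The parameters below are arbitrary choices.

```python
# Equality check for the tightness construction: an r-partite G with parts of
# size s (s divisible by k); each part is split into k equal groups, and H_j
# is induced on the j-th group of every part. Parameters are arbitrary.
r, s, k = 3, 30, 5
E_G = sum(s * s for i in range(r) for i2 in range(i + 1, r))          # |E(G)|
E_Hj = sum((s // k) ** 2 for i in range(r) for i2 in range(i + 1, r)) # |E(H_j)|
E_H = k * E_Hj                       # H is the disjoint union of the H_j
assert E_Hj == E_H ** 2 // E_G == E_G // k ** 2   # exact, no O(n) error here
```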
The simplest case of Theorem~\ref{thm:main}, when $G=K_n$, already immediately implies an improvement over previously known lower bounds on $M(n,k)$.
\begin{corollary}\label{cor:weaklb}
In every $k$-coloring of the edges of $K_n$, there is a monochromatic connected component with at least $\frac{1}{k^2}\binom{n}{2}$ edges.
\end{corollary}
\begin{proof}
In a $k$-coloring of the edges of $K_n$, one of the color classes has at least $\frac{1}{k}\binom{n}{2}$ edges. Taking $H$ to be this color class in Theorem~\ref{thm:main} with $G=K_n$ yields Corollary~\ref{cor:weaklb}.
\end{proof}
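Corollary~\ref{cor:weaklb} is also easy to test computationally. The sketch below (our own illustration, with the arbitrary parameters $n=10$, $k=3$) checks random colorings of $K_n$ against the bound, computing monochromatic components with a small union-find.

```python
import math
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mono_component_max(n, colors, k):
    """Largest edge count of a monochromatic component, given a k-coloring
    colors[(i, j)] (for i < j) of the edges of K_n."""
    best = 0
    for c in range(k):
        parent = list(range(n))
        cls = [e for e, col in colors.items() if col == c]
        for i, j in cls:
            parent[find(parent, i)] = find(parent, j)
        cnt = {}
        for i, j in cls:
            r = find(parent, i)
            cnt[r] = cnt.get(r, 0) + 1
        best = max(best, max(cnt.values(), default=0))
    return best

random.seed(0)
n, k = 10, 3
for _ in range(200):
    colors = {(i, j): random.randrange(k)
              for i in range(n) for j in range(i + 1, n)}
    # some monochromatic component has at least binom(n,2)/k^2 edges
    assert mono_component_max(n, colors, k) * k * k >= math.comb(n, 2)
```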
The proof of Theorem~\ref{thm:color3} requires a more detailed argument, but similarly follows from this one case of Theorem~\ref{thm:main}, while the proof of Theorem~\ref{thm:colorlb} requires the use of Theorem~\ref{thm:main} in full generality.
In the next section, we give an elementary proof of Theorem~\ref{thm:main}. We then study the coloring version of the problem, as well as the corresponding trail Ramsey problem, in Section~\ref{sec:color}.
\section{\texorpdfstring{Proof of Theorem~\ref{thm:main}}{Proof of Theorem 3}}
\label{sec:mainpf}
In the case $G=K_n$, the proof of Theorem~\ref{thm:main} is very simple, but nevertheless includes a key insight: Instead of taking a component that maximizes the number of edges right away, we consider a component $H'$ with maximum \textbf{average degree} $\bar{d}(H')=\frac{|E(H')|}{\frac{1}{2}|V(H')|}$. Two observations are crucial here: First, by the so-called \emph{generalized mediant inequality}, this highest average degree must be at least the average degree of the whole graph $H$, since
\[
\frac{|E(H)|}{\frac{1}{2}|V(H)|}=\frac{\sum|E(H_i)|}{\frac{1}{2}\sum|V(H_i)|},
\]
where the sums are over connected components $H_i$ of $H$, and the right hand side is a generalized mediant of the average degrees of the individual components.
Second, the number of vertices of $H'$ is at least one more than its maximum degree, so $|V(H')|\geq \bar{d}(H')+1$. Letting $\delta = \frac{|E(H)|}{|E(G)|}$, we then obtain the bound
\begin{align*}
|E(H')|&\geq \frac{1}{2}(\bar{d}(H'))(\bar{d}(H')+1) = \binom{\bar{d}(H')+1}{2}\\
&\geq \binom{\bar{d}(H)+1}{2} = \binom{\frac{2|E(H)|}{n}+1}{2} \\
&= \binom{\delta(n-1)+1}{2} \geq \delta^2 \binom{n}{2} \\
&= \delta^2 |E(G)| = \frac{|E(H)|^2}{|E(G)|},
\end{align*}
as desired.
The general case will use both of these observations in a modified setting. Instead of the average degree, we will work with a slightly different quantity whose denominator is a weighted vertex count; we will then, perhaps counterintuitively, lower bound this weighted vertex count by the modified analogue of the average degree, in order to obtain the bound we seek. The core of our proof is the following general inequality.
\begin{lemma}
\label{lem:weightcs}
If $a_1,\dots,a_r$ and $b_1,\dots,b_r$ are nonnegative real numbers, then
\[
\left(\left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sum_{i=1}^r a_i b_i \right)^2 \geq \left(\left(\sum_{i=1}^r a_i\right)^2 - \sum_{i=1}^r a_i^2 \right)\left(\left(\sum_{i=1}^r b_i\right)^2 - \sum_{i=1}^r b_i^2 \right).
\]
\end{lemma}
\begin{proof}
The Cauchy-Schwarz inequality yields
\begin{align*}
\left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sum_{i=1}^r a_i b_i
\geq \left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sqrt{\left(\sum_{i=1}^r a_i^2\right)\left(\sum_{i=1}^r b_i^2\right)}.
\end{align*}
This last quantity is clearly nonnegative, since $\left(\sum_{i=1}^r a_i\right)^2\geq \sum_{i=1}^r a_i^2$ and $\left(\sum_{i=1}^r b_i\right)^2\geq \sum_{i=1}^r b_i^2$. So,
\begin{align*}
&\left(\left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sum_{i=1}^r a_i b_i \right)^2 \geq \left(\left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sqrt{\left(\sum_{i=1}^r a_i^2\right)\left(\sum_{i=1}^r b_i^2\right)}\right)^2 \\
&= \left(\sum_{i=1}^r a_i\right)^2\left(\sum_{i=1}^r b_i\right)^2 + \left(\sum_{i=1}^r a_i^2\right)\left(\sum_{i=1}^r b_i^2\right) - 2 \left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right)\sqrt{\left(\sum_{i=1}^r a_i^2\right)\left(\sum_{i=1}^r b_i^2\right)} \\
&\geq \left(\sum_{i=1}^r a_i\right)^2\left(\sum_{i=1}^r b_i\right)^2 + \left(\sum_{i=1}^r a_i^2\right)\left(\sum_{i=1}^r b_i^2\right) - \left(\sum_{i=1}^r a_i\right)^2\left(\sum_{i=1}^r b_i^2\right) - \left(\sum_{i=1}^r b_i\right)^2\left(\sum_{i=1}^r a_i^2\right) \\
&= \left(\left(\sum_{i=1}^r a_i\right)^2 - \sum_{i=1}^r a_i^2 \right)\left(\left(\sum_{i=1}^r b_i\right)^2 - \sum_{i=1}^r b_i^2 \right),
\end{align*}
where the last inequality is an application of the AM-GM inequality.
\end{proof}
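Since Lemma~\ref{lem:weightcs} holds in particular for nonnegative integers, it can be spot-checked with exact integer arithmetic. A brief illustrative sketch (random integer tuples, arbitrary ranges):

```python
import random

# Exact integer spot-check of the lemma on random nonnegative tuples.
random.seed(1)
for _ in range(500):
    r = random.randint(1, 8)
    a = [random.randint(0, 20) for _ in range(r)]
    b = [random.randint(0, 20) for _ in range(r)]
    A, B = sum(a), sum(b)
    lhs = (A * B - sum(x * y for x, y in zip(a, b))) ** 2
    rhs = (A * A - sum(x * x for x in a)) * (B * B - sum(y * y for y in b))
    assert lhs >= rhs
```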
Given a graph $G$ and vertex sets $S,T\subseteq V(G)$, let
\[
e_G(S,T)=\#\{(s,t)\in S\times T:\: s\sim_G t\}.
\]
We write $e(S,T)$ for $e_G(S,T)$ when the graph in question is unambiguous. Let $G[S]$ denote the induced subgraph of $G$ on the vertex set $S$. Lemma~\ref{lem:weightcs} immediately implies the following.
\begin{corollary}
\label{cor:weightcs}
Let $G$ be a complete multipartite graph. For any $S, T\subseteq V(G)$, we have
\[
e(S,T)^2\geq 4|E(G[S])||E(G[T])|.
\]
\end{corollary}
\begin{proof}
Suppose that $G$ is $r$-partite. Let $V(G)=V_1\cup \cdots \cup V_r$ be a partition of the vertices of $G$ into $r$ independent sets. Let $a_i=|V_i\cap S|$ and $b_i=|V_i\cap T|$, so
\[
e(S,T)=\sum_{\substack{1\leq i,j\leq r\\ i\neq j}} a_{i}b_{j}=\left(\sum_{i=1}^r a_i\right)\left(\sum_{i=1}^r b_i\right) - \sum_{i=1}^r a_i b_i,
\]
while
\[
|E(G[S])|=\frac{1}{2}\left(\left(\sum_{i=1}^r a_i\right)^2 - \sum_{i=1}^r a_i^2 \right), \qquad |E(G[T])|=\frac{1}{2}\left(\left(\sum_{i=1}^r b_i\right)^2 - \sum_{i=1}^r b_i^2 \right).
\]
Then Lemma~\ref{lem:weightcs} indeed yields $e(S,T)^2\geq 4|E(G[S])||E(G[T])|$, as desired.
\end{proof}
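The corollary can likewise be spot-checked combinatorially, by counting pairs directly in a random complete multipartite graph (our own sketch; the part count and densities are arbitrary). All quantities are integers, so the comparison is exact.

```python
import random

random.seed(2)
for _ in range(200):
    n = random.randint(4, 12)
    part = [random.randrange(3) for _ in range(n)]   # complete multipartite G
    S = [v for v in range(n) if random.random() < 0.5]
    T = [v for v in range(n) if random.random() < 0.5]
    adj = lambda u, v: part[u] != part[v]            # adjacency in G
    eST = sum(1 for s_ in S for t_ in T if adj(s_, t_))       # ordered pairs
    eS = sum(1 for u in S for v in S if u < v and adj(u, v))  # |E(G[S])|
    eT = sum(1 for u in T for v in T if u < v and adj(u, v))  # |E(G[T])|
    assert eST * eST >= 4 * eS * eT
```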
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Let $V=V(G)=V_1\cup\cdots\cup V_r$ be a partition of the vertices of $G$ into $r$ independent sets, let $H_1,\dots, H_k$ be the connected components of $H$, and let $V_{i,\ell}=V_i\cap V(H_\ell)$. We have
\[
|E(H)|=\sum_{\ell=1}^k |E(H_\ell)|, \qquad |V_i|=\sum_{\ell=1}^k |V_{i,\ell}|\: \text{ for all } i.
\]
For any subset $S\subseteq V$, define $f(S)=\frac{1}{2}e_G(S,V)$. Note that
\[
f(S)=\frac{1}{2}\sum_{\substack{1\leq i,j\leq r\\i\neq j}} |V_{i}\cap S||V_{j}|,
\]
so $f(S)$ can be viewed as a weighted vertex count for $S$, with the property that
\[
\sum_{\ell=1}^k f(V(H_\ell))=\frac{1}{2}\sum_{\ell=1}^k e_G(V(H_\ell),V)=\frac{1}{2}e_G(V,V)=|E(G)|.
\]
Then by the generalized mediant inequality, for some $H'=H_\ell$ we have
\[
\frac{|E(H')|}{f(V(H'))}\geq \frac{\sum_{\ell=1}^k |E(H_\ell)|}{\sum_{\ell=1}^k f(V(H_\ell))} = \frac{|E(H)|}{|E(G)|}.
\]
Now, Corollary~\ref{cor:weightcs} applied with $S=V(H')$, $T=V$ yields $f(V(H'))^2\geq |E(G[V(H')])||E(G)|\geq |E(H')||E(G)|$, so that
\[
\frac{|E(H)|}{|E(G)|}\leq \frac{|E(H')|}{f(V(H'))}\leq \sqrt{\frac{|E(H')|}{|E(G)|}},
\]
which rearranges to the desired inequality.
\end{proof}
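Theorem~\ref{thm:main} itself admits a quick random-instance check (an illustration with arbitrary parameters, not a proof): draw a complete multipartite $G$, keep each edge independently to form $H$, and compare the largest component of $H$ against $|E(H)|^2/|E(G)|$ exactly in integers.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

random.seed(3)
for _ in range(100):
    n = random.randint(4, 14)
    part = [random.randrange(4) for _ in range(n)]   # at most 4 parts
    G_edges = [(u, v) for u in range(n) for v in range(u + 1, n)
               if part[u] != part[v]]
    H_edges = [e for e in G_edges if random.random() < 0.4]
    parent = list(range(n))
    for u, v in H_edges:
        parent[find(parent, u)] = find(parent, v)
    cnt = {}
    for u, v in H_edges:
        r = find(parent, u)
        cnt[r] = cnt.get(r, 0) + 1
    best = max(cnt.values(), default=0)
    # |E(H')| * |E(G)| >= |E(H)|^2, compared exactly in integers
    assert best * len(G_edges) >= len(H_edges) ** 2
```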
\section{Coloring problems}
\label{sec:color}
We can now apply Theorem~\ref{thm:main} to the corresponding coloring problems. We have already seen that applying Theorem~\ref{thm:main} with $G=K_n$ readily yields the simple lower bound on $M(n,k)$ given in Corollary~\ref{cor:weaklb}. We now discuss how to improve this lower bound, before turning to a closely related problem on monochromatic trails and circuits.
\subsection{Lower bound on $M(n,k)$ for general $k$}
Our strategy for improving the lower bound on $M(n,k)$ is as follows: First, assuming that no monochromatic component has too many edges, we show that in the color with the highest density (say, red), we can upper bound the number of vertices covered by any set of components (or else we can finish with an average degree argument on the rest of the red components). Through a smoothing argument, this yields an upper bound on the sum of squares of the number of vertices in each red component, and thus a lower bound on the number of edges in the complete multipartite graph $G$ formed by deleting from $K_n$ all edges within the vertex sets of the red components. We finish by applying Theorem~\ref{thm:main} to $G$ to find a non-red component with many edges.
\begin{proof}[Proof of Theorem~\ref{thm:colorlb}]
Fix a $k$-coloring $\chi$ of the edges of $K_n$, and suppose that the largest number of edges in a monochromatic connected component in this coloring is $z\binom{n}{2}$. Without loss of generality, let red be the color with the most edges, so there are at least $\frac{1}{k}\binom{n}{2}$ red edges in our coloring. Let $\mathcal{C}_1=\{R_1,R_2,\dots,R_m\}$ be the set of red components, with
\begin{equation}
\label{eqn:totalvtx}
|V(R_1)|\geq |V(R_2)|\geq \cdots \geq |V(R_m)|, \qquad \sum_{i=1}^m |V(R_i)|=n.
\end{equation}
Let $x=\frac{1}{\binom{n}{2}}\sum_{i=1}^m |E(R_i)|$, so $x\geq \frac{1}{k}$. By assumption, $|E(R_i)|\leq z\binom{n}{2}$ for $1\leq i\leq m$. As in the proof of Corollary~\ref{cor:weaklb}, applying Theorem~\ref{thm:main} with $K_n$ as $G$ and its red color class (i.e. the spanning subgraph formed by its red edges) as $H$ yields a red component with at least $x^2 \binom{n}{2}$ edges, so by assumption we have $z\geq x^2\geq \frac{1}{k^2}$. Define $\delta=k-\frac{1}{\sqrt{z}}\geq 0$, so $z=\frac{1}{(k-\delta)^2}$.
For any $j\in [1,m-1]$, let $G_j$ be the complete graph on $\bigcup_{i=j+1}^m V(R_i)$, with its coloring induced by $\chi$. Applying Theorem~\ref{thm:main} with $G_j$ as $G$ and the red color class of $G_j$ as $H$ yields a red connected component $H'=R_i$ with $|E(H')|\geq \frac{|E(H)|^2}{|E(G_j)|}$. Since
\[
|E(H)|=\sum_{i=j+1}^m |E(R_i)|=x\binom{n}{2}-\sum_{i=1}^{j}|E(R_i)| \geq x\binom{n}{2}-jz\binom{n}{2},
\]
while $|E(G_j)|=\binom{|V(G_j)|}{2}=\binom{n-\sum_{i=1}^{j}|V(R_i)|}{2}$, we have
\[
z\binom{n}{2}\geq |E(H')|\geq \frac{\max(x-jz,0)^2 \binom{n}{2}^2}{\binom{n-\sum_{i=1}^{j}|V(R_i)|}{2}}.
\]
Since $\frac{\max(x-jz,0)}{\sqrt{z}}\leq \frac{x}{\sqrt{z}}\leq 1$, this implies
\[
\binom{n-\sum_{i=1}^{j}|V(R_i)|}{2}\geq \frac{\max(x-jz,0)^2}{z}\binom{n}{2}\geq \binom{\frac{\max(x-jz,0)}{\sqrt{z}}n}{2}.
\]
Since $n-\sum_{i=1}^{j}|V(R_i)|\geq 1$, and the function $f(X)=\binom{X}{2}$ is increasing for $X\geq 1$, this then implies
\begin{equation}
\label{eqn:vtxbounds}
\sum_{i=1}^{j}|V(R_i)|\leq (1-x/\sqrt{z} + j\sqrt{z})n \qquad \text{for all }j\in [1,m-1].
\end{equation}
We can now give an upper bound on $\sum_{i=1}^m \binom{|V(R_i)|}{2}$ by solving the corresponding convex optimization problem. The proof of the following technical lemma will be deferred until the end of the section.
\begin{lemma}
\label{lem:smoothing}
Let $x,z>0$. Subject to the constraints $v_1\geq \cdots \geq v_m \geq 0$, $\sum_{i=1}^m v_i= 1$, and
\[
\sum_{i=1}^j v_i \leq 1-x/\sqrt{z}+j\sqrt{z} \qquad \text{for all }j\in [1,m-1],
\]
the quantity $\sum_{i=1}^m v_i^2$ is maximized when $v_1=1-x/\sqrt{z}+\sqrt{z}$, $v_i=\sqrt{z}$ for $2\leq i\leq \lfloor \frac{x}{z}\rfloor$, $v_{\lfloor \frac{x}{z}\rfloor+1}=x/\sqrt{z}-\lfloor \frac{x}{z}\rfloor \sqrt{z}$, and $v_i=0$ for $i>\lfloor \frac{x}{z}\rfloor+1$.
\end{lemma}
Let $v_i=\frac{|V(R_i)|}{n}$. Since \eqref{eqn:totalvtx} and \eqref{eqn:vtxbounds} hold, we can apply Lemma~\ref{lem:smoothing} to obtain
\[
\sum_{i=1}^m v_i^2\leq (1-x/\sqrt{z}+\sqrt{z})^2 + \left(\left\lfloor \frac{x}{z}\right\rfloor-1\right) z +\left(\left(x/z-\left\lfloor \frac{x}{z}\right\rfloor\right) \sqrt{z}\right)^2 \leq (1-x/\sqrt{z}+\sqrt{z})^2 + (x/z-1)z,
\]
so that we have
\begin{align*}
\sum_{i=1}^m \binom{|V(R_i)|}{2} &= \frac{n^2 \sum_{i=1}^m v_i^2 -n }{2} \leq \frac{n^2 ((1-x/\sqrt{z}+\sqrt{z})^2 + (x/z-1)z) - n}{2}.
\end{align*}
Finally, let $G$ be the complete $m$-partite graph obtained from $K_n$ by removing all edges within each $V(R_i)$, with its coloring induced by $\chi$. There are no red edges in $G$, so by the pigeonhole principle, one of the $k-1$ remaining colors has at least $\frac{1}{k-1}|E(G)|$ edges in $G$. Let $H$ be the spanning subgraph of $G$ induced by the edges in that color. Applying Theorem~\ref{thm:main} then yields a monochromatic connected component $H'$ with at least $\frac{1}{(k-1)^2}|E(G)|$ edges. Then by assumption we have
\begin{align*}
z &\geq \frac{1}{(k-1)^2}\frac{|E(G)|}{\binom{n}{2}} = \frac{1}{(k-1)^2} \left(1- \frac{\sum_{i=1}^m \binom{|V(R_i)|}{2}}{\binom{n}{2}} \right) \\
&\geq \frac{1}{(k-1)^2} \left(1- \frac{n^2((1-x/\sqrt{z}+\sqrt{z})^2 + (x/z-1)z) - n}{n^2-n} \right)\\
&\geq \frac{1}{(k-1)^2} \left(1- (1-x/\sqrt{z}+\sqrt{z})^2 - (x/z-1)z \right),
\end{align*}
which rearranges to give
\[
1-(k-1)^2 z \leq (1-x/\sqrt{z}+\sqrt{z})^2 + (x/z-1)z=\frac{1}{z}x^2 - (1+2/\sqrt{z})x+(1+2\sqrt{z}).
\]
The right hand side is a quadratic in $x$ that is decreasing for $x\leq \frac{z}{2}+\sqrt{z}$. Since by assumption $\sqrt{z}\geq x\geq \frac{1}{k}$, we then have
\[
1-(k-1)^2 z \leq \frac{1}{z k^2} - (1+2/\sqrt{z})\frac{1}{k}+(1+2\sqrt{z}).
\]
Substituting in $z=\frac{1}{(k-\delta)^2}$ yields
\[
1-\frac{(k-1)^2}{(k-\delta)^2}\leq \frac{(k-\delta)^2}{k^2} - \frac{1+2(k-\delta)}{k}+1+\frac{2}{k-\delta},
\]
which upon rearrangement becomes
\begin{align*}
0 &\leq (k-1)^2 k^2 + (k-\delta)^4 - k(k-\delta)^2 - 2(k-\delta)^3 k + 2(k-\delta) k^2 \\
&= -k^3+k^2-k\delta^2+2k^3\delta -2k\delta^3+\delta^4\\
&=(k-\delta^2)^2 - k(k^2-\delta^2)(1-2\delta).
\end{align*}
This implies
\[
1-2\delta \leq \frac{(k-\delta^2)^2}{k(k^2-\delta^2)}<\frac{1}{k},
\]
so that $\delta>\frac{k-1}{2k}$. Thus, the coloring $\chi$ contains a monochromatic connected component with at least $z\binom{n}{2}$ edges, where
\[
z=\frac{1}{(k-\delta)^2}>\frac{1}{(k-\frac{1}{2}+\frac{1}{2k})^2}= \frac{1}{k^2-k+\frac{1}{4}+1-\frac{1}{2k}+\frac{1}{4k^2}} \geq \frac{1}{k^2-k+\frac{5}{4}},
\]
as desired.
\end{proof}
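The polynomial rearrangement used near the end of the proof above can be verified with exact rational arithmetic. Since each of the three displayed expressions has degree at most four in each of $k$ and $\delta$, agreement on a modest grid of rational points already makes the identity very plausible; the sketch below is our own sanity check.

```python
from fractions import Fraction

# Check that the three expressions in the rearrangement agree, evaluating
# exactly at rational points (k, d).
for k in range(2, 8):
    for j in range(0, 8):
        d = Fraction(j, 7)
        A = ((k - 1) ** 2 * k ** 2 + (k - d) ** 4 - k * (k - d) ** 2
             - 2 * (k - d) ** 3 * k + 2 * (k - d) * k ** 2)
        B = -k ** 3 + k ** 2 - k * d ** 2 + 2 * k ** 3 * d - 2 * k * d ** 3 + d ** 4
        C = (k - d ** 2) ** 2 - k * (k ** 2 - d ** 2) * (1 - 2 * d)
        assert A == B == C
```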
\begin{proof}[Proof of Lemma~\ref{lem:smoothing}]
Since the feasible region is compact, there exists a choice of the $v_i$ such that the desired maximum is attained. Fix such a maximizing choice of the $v_i$.
For $j\in [m]$, let $A_j$ denote the given constraint on $\sum_{i=1}^j v_i$, and let $S$ be the set of $j$ for which equality holds in $A_j$. Let $m'=\lfloor \frac{x}{z}\rfloor$. Since $\sum_{i=1}^m v_i=1$, the constraints $A_j$ for $j>m'$ cannot be tight, so $S\subseteq [m']$. For any $i<j$ and any $\varepsilon\in (0,v_j]$, replacing $(v_i,v_j)$ with $(v_i+\varepsilon, v_j-\varepsilon)$ increases the value of $\sum_{i=1}^m v_i^2$, so by maximality, no such ``smoothing'' operation is possible. That is, we can assume the following \textbf{equality condition} for all $(i,j)$ with $i<j$: Either $v_i=v_{i-1}$, or $v_j=v_{j+1}$, or $S\cap [i,j-1]\neq \emptyset$. If $S=[m']$, then we are in exactly the maximizing case described (since by the equality conditions for $(m'+1,i)$ we have $v_i=0$ for all $i\geq m'+2$), so assume otherwise.
First, suppose there is some $i_0\in [m']$ such that $v_{i_0}<\sqrt{z}$, and pick the smallest such index $i_0$. Let $i_1\in [m]$ be the largest index such that $v_{i_1}>0$, and note that $i_1>i_0$ since $\sum_{i=1}^{i_0} v_i<1$. Then we have $v_{i_0}<v_{i_0-1}$ and $v_{i_1}>v_{i_1+1}$. But as $\sum_{i=1}^j v_i\leq (1-x/\sqrt{z}+(j-1)\sqrt{z}) + v_j < (1-x/\sqrt{z}+(j-1)\sqrt{z})+ \sqrt{z}$ for any $j\geq i_0$, we have $S\cap [i_0,i_1]=\emptyset$, contradicting the equality condition for $(i_0,i_1)$. So, we can assume $v_i\geq \sqrt{z}$ for all $i\leq m'$. In particular, if $j\in S$ for some $j\in [m']$, then we recursively obtain $v_i=\sqrt{z}$ for all $i\in [j+1,m']$, so $[j,m']\subseteq S$.
Thus, we can assume $1\notin S$, i.e. $v_1<1-x/\sqrt{z}+\sqrt{z}$. By the equality condition for $(1,2)$, we must then have $v_2=v_3$. Let $j_0\geq 3$ be the smallest index such that $v_{j_0+1}\neq v_{j_0}$. By the equality condition for $(1,j_0)$, there is some $j_1\in S\cap [2,j_0-1]$. Then
\[
1-x/\sqrt{z}+j_1 \sqrt{z} = \sum_{i=1}^{j_1} v_i = v_1+(j_1-1) v_2 < 1-x/\sqrt{z}+\sqrt{z}+(j_1-1)v_2,
\]
which implies $v_2>\sqrt{z}$. But then $\sum_{i=1}^{j_1+1} v_i=1-x/\sqrt{z}+\sqrt{z}+j_1 v_2 > 1-x/\sqrt{z}+(j_1+1) \sqrt{z}$, violating $A_{j_1+1}$. This is a contradiction, so in fact $S=[m']$, and we are in the desired maximizing case.
\end{proof}
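A numeric spot-check of the lemma (our own illustration; the parameters $x=\tfrac13$, $z=\tfrac14$, $m=6$ are arbitrary choices with $x\leq\sqrt{z}$): the claimed maximizer is feasible, and randomly sampled feasible points never exceed its value of $\sum_i v_i^2$.

```python
import random

x, z, m = 1 / 3, 1 / 4, 6
s = z ** 0.5
mp = int(x / z)                                  # m' = floor(x/z)
vmax = ([1 - x / s + s] + [s] * (mp - 1)
        + [x / s - mp * s] + [0.0] * (m - mp - 1))
assert abs(sum(vmax) - 1) < 1e-12                # feasible: sums to 1
target = sum(v * v for v in vmax)                # claimed maximum of sum v_i^2

random.seed(4)
for _ in range(500):
    v = sorted((random.random() for _ in range(m)), reverse=True)
    t = sum(v)
    v = [u / t for u in v]                       # random sorted simplex point
    if all(sum(v[:j]) <= 1 - x / s + j * s + 1e-12 for j in range(1, m)):
        assert sum(u * u for u in v) <= target + 1e-9
```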
We remark that in Gyárfás's construction, after removing all edges contained in the vertex set of each red component, each non-red component is left with approximately $\frac{1}{(k-1)^2} (1-\frac{1}{k-1}) \binom{n}{2} = \frac{k-2}{(k-1)^3} \binom{n}{2}$ edges. Since $\frac{k-2}{(k-1)^3}=\frac{1}{k^2-k+1+\frac{1}{k-2}}$, this suggests that the method used to prove Theorem~\ref{thm:colorlb} cannot show a lower bound on $M(n,k)$ better than $\frac{1}{k^2-k+1}\binom{n}{2}$ without introducing additional ideas.
\subsection{Multicolor Ramsey numbers of trails and circuits}
\label{subsec:trails}
A \emph{trail} is a walk without repeated edges, and a \emph{circuit} is a trail with the same first and last vertex. The ($k$-color) Ramsey problem for trails is the question of finding the largest $m$ such that every $k$-coloring of the edges of $K_n$ contains a monochromatic trail of length $m$.
Answering a question of Osumi \cite{osumi2021ramsey}, Conlon and Tyomkyn \cite{conlon2021ramsey} show that every $2$-coloring of the edges of $K_n$ contains a monochromatic circuit with at least $\frac{2}{9}n^2+O(n^{3/2})$ edges, and this is asymptotically tight. For the case of general $k$, they observe that by deleting a forest in each color class of a $k$-coloring of $K_n$ to make each color class Eulerian (i.e. ensuring every vertex has even degree in each color), one can reduce this Ramsey problem to a variant of the problem of determining $M(n,k)$. Where previously we colored the edges of $K_n$ and found a large monochromatic component, we now apply the same procedure to the graph obtained by deleting at most $kn$ edges from $K_n$. We now prove a lower bound for the general case of this problem, analogous to the bound on $M(n,k)$ given in Theorem~\ref{thm:colorlb}.
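The parity-correction step in this reduction can be sketched as follows (an illustrative implementation of the standard trick, not taken from \cite{conlon2021ramsey}): within each color class, delete a suitable subset of a DFS forest, processing each component bottom-up, so that every remaining degree is even.

```python
from collections import defaultdict

def remove_forest_to_even(vertices, edges):
    """Delete a subset of a DFS forest so every remaining degree is even.
    Each non-root vertex deletes its parent edge iff its current degree is
    odd; the root is then even by the handshake lemma. Returns deletions."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    deg = {v: len(adj[v]) for v in vertices}
    parent, order, seen = {}, [], set()
    for root in vertices:                 # build a DFS forest
        if root in seen:
            continue
        parent[root] = None
        seen.add(root)
        stack = [root]
        while stack:
            u = stack.pop()
            order.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    parent[w] = u
                    stack.append(w)
    deleted = []
    for v in reversed(order):             # children before their parent
        if parent[v] is not None and deg[v] % 2 == 1:
            deleted.append((v, parent[v]))
            deg[v] -= 1
            deg[parent[v]] -= 1
    assert all(d % 2 == 0 for d in deg.values())
    return deleted

# Usage on a random graph: at most n-1 edges (a forest) are deleted.
import random
random.seed(5)
n = 12
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.5]
deleted = remove_forest_to_even(range(n), edges)
assert len(deleted) < n
```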
\begin{corollary}
\label{thm:trailsweak}
Every $k$-coloring of the edges of $K_n$ contains a monochromatic circuit (and hence a monochromatic trail) of length at least $\frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}+O_k(n)$.
\end{corollary}
\begin{proof}
Fix a $k$-coloring of $E(K_n)$. As in \cite{conlon2021ramsey}, we can remove from each color class a forest that meets all odd degree vertices, leaving a coloring where every color class, and hence every monochromatic connected component, is Eulerian. Let $\chi$ be the resulting partial $k$-coloring of $E(K_n)$, where the (at most $kn$) removed edges are left uncolored, and let $z\binom{n}{2}$ be the largest number of edges in a monochromatic component in this coloring. It suffices to show that $z\geq \frac{1}{k^2-k+\frac{5}{4}}+O_k(n^{-1})$, since every monochromatic component is Eulerian, and thus contains an Eulerian circuit. Fix a color (say, red) with $x\binom{n}{2}$ edges, where $x\geq \frac{1}{k} \frac{\binom{n}{2}-nk}{\binom{n}{2}}=\frac{1}{k}-\frac{2}{n-1}$. As before, applying Theorem~\ref{thm:main} with $G=K_n$ immediately yields $z\geq x^2\geq \frac{1}{k^2}-O(n^{-1})$. Note that this means there is a monochromatic circuit with at least $\left(\frac{1}{k^2}-O(n^{-1})\right)\binom{n}{2}=\frac{1}{k^2}\binom{n}{2}+O(n)$ edges.
To improve this lower bound further, we proceed as in the proof of Theorem~\ref{thm:colorlb}. Letting $R_1,\dots,R_m$ be the red components, such that \eqref{eqn:totalvtx} holds, we obtain \eqref{eqn:vtxbounds} in the same manner as before, so we can once again apply Lemma~\ref{lem:smoothing} to obtain the same upper bound on $\sum_{i=1}^m \binom{|V(R_i)|}{2}$ in terms of $x$ and $z$. Let $G$ be the complete $m$-partite graph with parts $V(R_1),\dots,V(R_m)$, with its partial coloring induced by $\chi$. Since $G$ has no red edges, and at most $kn$ edges are uncolored, one of its color classes $H$ has at least $\frac{1}{k-1}(|E(G)|-kn)=\left(\frac{1}{k-1}-O_k(n^{-1})\right)|E(G)|$ edges. Applying Theorem~\ref{thm:main} as before yields
\begin{align*}
z &\geq \left(\frac{1}{k-1}-O_k(n^{-1})\right)^2\frac{|E(G)|}{\binom{n}{2}}
\geq \left(\frac{1}{(k-1)^2}-O_k(n^{-1})\right) \left(1- \frac{\sum_{i=1}^m \binom{|V(R_i)|}{2}}{\binom{n}{2}} \right) \\
&\geq \left(\frac{1}{(k-1)^2+O_k(n^{-1})}\right) \left(1- (1-x/\sqrt{z}+\sqrt{z})^2 - (x/z-1)z \right).
\end{align*}
Letting $z=\frac{1}{(k-\delta)^2}$ and noting that $x\geq \frac{1}{k}-O(n^{-1})$, we can perform the same rearrangements and substitutions as in the proof of Theorem~\ref{thm:colorlb}, simply separating out the $O_k(n^{-1})$ terms at each step, to derive the inequality $1-2\delta < \frac{1}{k} + O_k(n^{-1})$, and thus
\[
z\geq \frac{1}{k^2-k+\frac{5}{4}-O_k(n^{-1})}= \frac{1}{k^2-k+\frac{5}{4}}+O_k(n^{-1}).
\]
Then the coloring $\chi$ contains a monochromatic connected component, and hence a monochromatic Eulerian circuit, with at least $z\binom{n}{2}\geq \frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}+O_k(n)$ edges, as claimed.
\end{proof}
\subsection{Three colors}
In this section, we prove the lower and upper bounds on $M(n,3)$ in separate lemmas in order to establish Theorem~\ref{thm:color3}. We then discuss the behavior of $M(n,3)$ for small values of $n$, and conclude by describing how to adapt our proofs to obtain asymptotically tight bounds for the Ramsey numbers of trails and circuits in three colors.
\begin{lemma}
\label{lem:3lb}
Every $3$-coloring of the edges of $K_n$ contains a monochromatic component with at least $\lceil \frac{1}{6}\binom{n}{2}\rceil$ edges.
\end{lemma}
\begin{proof}
Let $G=K_n$, and call the three colors red, green, and blue.
First, suppose one of the color classes (say, red) is connected. If there are at least $\frac{1}{6}|E(G)|$ red edges, we are done. Otherwise, the other two colors together have at least $\frac{5}{6}|E(G)|$ edges, so one of them has at least $ \frac{5}{12}|E(G)|$ edges. Applying Theorem~\ref{thm:main} with $G=K_n$ to the graph in that color then gives a monochromatic component with at least $(\frac{5}{12})^2 |E(G)| >\frac{1}{6}|E(G)|$ edges, so we are again done.
Thus, we can assume every color has at least two components. Without loss of generality, let red be the color with the most edges, so the red graph $H$ has at least $\frac{1}{3}|E(G)|$ edges. Let $H'$ be the component of $H$ with the highest average degree, so $|V(H')|\geq \bar{d}(H')+1\geq \frac{1}{3}n$. We can assume $|V(H')|\leq \frac{1}{2}n$; otherwise, we would have $|E(H')|\geq \frac{1}{2}|V(H')|\bar{d}(H') > \frac{1}{6}|E(G)|$. Let $V_1=V(H')$ and $V_2=V(G)\setminus V_1$, and let $G'$ be the bipartite graph induced by $G$ between $V_1$ and $V_2$, so $G'$ has at least $|V_1||V_2|\geq \frac{2}{9}n^2>\frac{1}{3}|E(G)|$ edges, all of which must be green or blue.
Fix an edge in $G'$ and consider the monochromatic component $C_1$ of $G'$ containing this edge. Without loss of generality, assume $C_1$ is green. Suppose $C_1$ covers all of $V_1$. Since every green edge in $G'$ intersects $V_1$, this means there is exactly one green component of $G'$ with a nonzero number of edges. Since there are at least two green components in $G$, there is a vertex $v\in V_2$ not in $C_1$. Then all edges between $v$ and $V_1$ must be blue, so all vertices of $V_1$ are in the same blue component in $G'$. This implies that there is also exactly one blue component of $G'$ with a nonzero number of edges. Thus, all edges of $G'$ are in one of two monochromatic components, one of which then has at least $\frac{1}{2}|E(G')|>\frac{1}{6}|E(G)|$ edges, as desired.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{3colorwithkey2.png}
\caption{Lower bound: Exactly two components of each color}
\label{fig:3color}
\end{figure}
We are likewise done if $C_1$ covers all of $V_2$, so we can assume $V_1\setminus C_1$ and $V_2\setminus C_1$ are both nonempty. Let $A_1=V_1\cap C_1$, $B_1=V_2\cap C_1$, $A_2=V_1\setminus A_1$, $B_2=V_2\setminus B_1$. Then all edges between $A_1$ and $B_2$, or between $A_2$ and $B_1$, can only be blue. Then there are at most two blue components, and thus by assumption exactly two. This in turn implies that all edges between $A_1$ and $B_1$, or between $A_2$ and $B_2$, can only be green, so there are exactly two green components $C_1$ and $C_2$. Finally, all edges between $B_1$ and $B_2$ can only be red, so we conclude that there are exactly two red components, $V_1$ and $V_2$; see Figure~\ref{fig:3color}.
Since each of the three colors has exactly two components, one of the components has at least $\frac{1}{6}|E(G)|$ edges, as desired.
\end{proof}
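For a very small case the lemma can even be verified exhaustively (an illustration only): for $n=5$ there are $3^{10}$ colorings, and every one of them has a monochromatic component with at least $\lceil\frac{1}{6}\binom{5}{2}\rceil=2$ edges.

```python
import math
from itertools import product

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

n = 5
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
bound = -(-math.comb(n, 2) // 6)          # ceil(10/6) = 2
for assignment in product(range(3), repeat=len(edges)):
    best = 0
    for c in range(3):
        parent = list(range(n))
        cnt = {}
        for (i, j), col in zip(edges, assignment):
            if col == c:
                parent[find(parent, i)] = find(parent, j)
        for (i, j), col in zip(edges, assignment):
            if col == c:
                r = find(parent, i)
                cnt[r] = cnt.get(r, 0) + 1
        best = max(best, max(cnt.values(), default=0))
    assert best >= bound                  # Lemma holds for every 3-coloring
```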
\begin{lemma}
\label{lem:3ub}
For sufficiently large $n$, there exists a $3$-coloring of the edges of $K_n$ such that every monochromatic component contains at most $\lceil \frac{1}{6} \binom{n}{2}\rceil$ edges.
\end{lemma}
\begin{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{3ubnokey.png}
\caption{Initial construction for the $M(n,3)$ upper bound}
\label{fig:3ub}
\end{figure}
We consider the following modification of a construction by Gyárfás: Let $V=V_1\cup V_2\cup V_3\cup V_4$ be a partition of the vertices of $G=K_n$ into four parts, with
\[
\left\lceil \frac{n}{4}\right \rceil = |V_1| \geq \cdots \geq |V_4| = \left\lfloor \frac{n}{4}\right \rfloor.
\]
Letting $E(U,W)$ denote the set of edges between vertex sets $U$ and $W$, color $E(V_1,V_2)\cup E(V_3,V_4)$ red, $E(V_1,V_3)\cup E(V_2,V_4)$ green, and $E(V_1,V_4)\cup E(V_2,V_3)$ blue. For large enough $n$, $e(V_i,V_j)\leq \lceil \frac{1}{6}\binom{n}{2}\rceil$ for $1\leq i<j\leq 4$, so it remains to extend this partial coloring by coloring the edges within each of the $V_i$ such that each monochromatic component has at most $\lceil \frac{1}{6}\binom{n}{2}\rceil$ edges at the end. It is natural to attempt the simple approach of distributing the edges within each $V_i$ as evenly as possible among the three colors. However, the possibility of a slight difference in size among the $V_i$ can yield a $\Theta(n)$ difference among the numbers of edges in each component if we are not sufficiently careful; a different coloring strategy will turn out to be simpler to analyze.
We call an extension of the above partial coloring \emph{nice} if each green or blue component contains exactly $\lceil\frac{1}{6}\binom{n}{2}\rceil$ edges. At least one nice coloring exists: we can color exactly $\lceil \frac{1}{6}\binom{n}{2}\rceil - e(V_1,V_3)$ of the edges within $V_1$ and $\lceil \frac{1}{6}\binom{n}{2}\rceil - e(V_2,V_4)$ of the edges within $V_2$ green, and exactly $\lceil \frac{1}{6}\binom{n}{2}\rceil - e(V_2,V_3)$ of the edges within $V_3$ and $\lceil \frac{1}{6}\binom{n}{2}\rceil - e(V_1,V_4)$ of the edges within $V_4$ blue (there are enough edges within each $V_i$ to do this when $n$ is sufficiently large), and color all remaining edges red. See Figure~\ref{fig:3ub} for a diagram of this coloring. Fix a nice coloring where the larger of the two red components contains as few edges as possible.
Suppose one of the red components in this coloring, without loss of generality the one on $V_1\cup V_2$, has more than $\lceil\frac{1}{6}\binom{n}{2}\rceil$ edges. Then $V_3\cup V_4$ must have less than $\lceil\frac{1}{6}\binom{n}{2}\rceil$ red edges. Without loss of generality let $V_1$ contain a red edge $e$. If either $V_3$ contains a green edge, or $V_4$ contains a blue edge, we can switch the color of that edge with edge $e$, preserving the sizes of the green and blue components while decreasing the size of the larger red component by one. Otherwise, $V_3$ is entirely blue and red, and $V_4$ is entirely green and red. Since the red component on $V_3\cup V_4$ has less than $\lceil\frac{1}{6}\binom{n}{2}\rceil$ edges, neither $V_3$ nor $V_4$ can be entirely red (for sufficiently large $n$, $\lfloor \frac{n}{4}\rfloor^2 + \binom{\lfloor \frac{n}{4}\rfloor}{2}>\lceil \frac{1}{6}\binom{n}{2}\rceil$). Then if $V_2$ contains a red edge, we can similarly switch two edges to reduce the size of the larger red component by one, so we can assume $V_2$ is entirely green and blue. But then there is one component of each color (including the larger red component) that does not have any edges within $V_2\cup V_3\cup V_4$, which means there are at least $3\lceil\frac{1}{6}\binom{n}{2}\rceil$ edges incident to $V_1$, a contradiction since
\[
\binom{\lceil \frac{n}{4}\rceil}{2}+\left\lceil \frac{n}{4}\right\rceil\left(n-\left\lceil \frac{n}{4}\right\rceil\right) < 3\left\lceil\frac{1}{6}\binom{n}{2}\right\rceil,
\]
for sufficiently large $n$. Thus indeed there is a construction of this form where every monochromatic component has at most $\lceil\frac{1}{6} \binom{n}{2}\rceil$ edges.
\end{proof}
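The construction can be instantiated and verified mechanically. The sketch below (our own, for the arbitrary choice $n=48$ with parts of size $12$, where the explicit nice-coloring recipe happens to balance exactly without the minimization step) checks that every monochromatic component has at most $\lceil\frac{1}{6}\binom{n}{2}\rceil$ edges.

```python
import math
from itertools import combinations

n, m = 48, 12
T = -(-math.comb(n, 2) // 6)                      # ceil bound = 188
parts = [list(range(i * m, (i + 1) * m)) for i in range(4)]
pid = {v: i for i, p in enumerate(parts) for v in p}
RED, GREEN, BLUE = 0, 1, 2
cross = {frozenset({0, 1}): RED, frozenset({2, 3}): RED,
         frozenset({0, 2}): GREEN, frozenset({1, 3}): GREEN,
         frozenset({0, 3}): BLUE, frozenset({1, 2}): BLUE}
color = {}
for u, v in combinations(range(n), 2):            # color all cross edges
    if pid[u] != pid[v]:
        color[(u, v)] = cross[frozenset({pid[u], pid[v]})]
inner = {0: GREEN, 1: GREEN, 2: BLUE, 3: BLUE}    # non-red color inside V_i
for i in range(4):
    quota = T - m * m                             # edges needed to reach T
    for idx, e in enumerate(combinations(parts[i], 2)):
        color[e] = inner[i] if idx < quota else RED

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

best = 0
for c in (RED, GREEN, BLUE):                      # largest component per color
    parent = list(range(n))
    cnt = {}
    for (u, v), col in color.items():
        if col == c:
            parent[find(parent, u)] = find(parent, v)
    for (u, v), col in color.items():
        if col == c:
            r = find(parent, u)
            cnt[r] = cnt.get(r, 0) + 1
    best = max(best, max(cnt.values()))
assert len(color) == math.comb(n, 2) and best <= T
```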
Combining Lemmas~\ref{lem:3lb} and~\ref{lem:3ub} yields Theorem~\ref{thm:color3} as desired.
The construction in the proof of Lemma~\ref{lem:3ub} is well-defined for all $n\geq 46$. A more careful analysis, splitting into cases based on the value of $n$ modulo $4$ and then using a more explicit construction in each case, shows that in fact $M(n,3)=\left\lceil\frac{1}{6}\binom{n}{2}\right\rceil$ for all $n\geq 11$ except $n=13,17$. However, the lower bound from Lemma~\ref{lem:3lb} is not sharp for some of these small values of $n$. Closely inspecting each step of our proof for $n=17$, for example, we can deduce that $M(17,3)=24$, instead of the expected $23$; indeed, all of the bounds before the final step in the proof of Lemma~\ref{lem:3lb} are loose enough to yield a component with at least $24$ edges unless we are in the case depicted in Figure~\ref{fig:3color}, where one of the vertex sets $A_i$ or $B_i$ has size $5$, and the other three sets have size $4$. The set of size $5$ then contains $10$ internal edges, at least $4$ of which share a color, yielding a component with at least $4+20=24$ edges as claimed. This shows a genuine difference in the behavior of $M(n,3)$ for these small values of $n$, caused by integrality constraints in the extremal configurations.
We can adjust the proof of Lemma~\ref{lem:3lb} to give a lower bound on the size of the largest monochromatic circuit in a $3$-coloring of the edges of $K_n$, as follows. First, as before, we can remove a forest in each color and leave each color class Eulerian. The resulting graph $G$ has at least $\binom{n}{2}-3n$ edges, so all but at most $3\sqrt{n}$ of the vertices have degree at least $n-1-2\sqrt{n}$. We then pass to the induced subgraph $G'$ on these $n'\geq n-3\sqrt{n}$ vertices; the minimum degree of $G'$ is at least $n'-1-2\sqrt{n}\geq n'-3\sqrt{n'}$ when $n$ is sufficiently large. The argument then proceeds largely as in the proof of Lemma~\ref{lem:3lb}, except that the condition of every green or blue component intersecting both $V_1$ and $V_2$ is strengthened by requiring every green or blue component to intersect each of $V_1$ and $V_2$ in more than $6\sqrt{n'}$ vertices. The proof then concludes as before, reducing to the case where there are exactly two components in each color. When the edges between $V(G')$ and $V(G)\setminus V(G')$ are added back in, it remains true that there are at most two components in each color with a positive number of edges. So, at least one monochromatic component in $G$ has at least $\frac{1}{6}(|E(G)|-(3\sqrt{n})^2) \geq \frac{1}{12}n^2 - O(n)$ edges. Since this component of $G$ is Eulerian, we have a circuit, and hence a trail, of the desired length, for all sufficiently large $n$. The upper bound from Lemma~\ref{lem:3ub} likewise applies to the size of the longest circuit, showing that the lower bound is asymptotically tight.
\vspace{3mm}
\noindent {\bf Acknowledgments.} I would like to thank my advisor Jacob Fox for introducing me to this problem and for helpful conversations along the way, as well as David Conlon and Mykhaylo Tyomkyn for the insights from our later joint work that served as inspiration for some of the strengthened arguments in the revised version of this paper. In addition, I would like to thank the anonymous referees for their careful reading and helpful comments, including a specific suggestion that led to a significant strengthening in the bound for general $k$ in Theorem~\ref{thm:colorlb}.
\bibliographystyle{acm}
arXiv:2111.13342 (Combinatorics, math.CO), https://arxiv.org/abs/2111.13342
Title: On connected components with many edges
Abstract: We prove that if $H$ is a subgraph of a complete multipartite graph $G$, then $H$ contains a connected component $H'$ satisfying $|E(H')||E(G)|\geq |E(H)|^2$. We use this to prove that every three-coloring of the edges of a complete graph contains a monochromatic connected subgraph with at least $1/6$ of the edges. We further show that such a coloring has a monochromatic circuit with a fraction $1/6-o(1)$ of the edges. This verifies a conjecture of Conlon and Tyomkyn. Moreover, for general $k$, we show that every $k$-coloring of the edges of $K_n$ contains a monochromatic connected subgraph with at least $\frac{1}{k^2-k+\frac{5}{4}}\binom{n}{2}$ edges.
https://arxiv.org/abs/1010.5005
Title: Error Estimates for Generalized Barycentric Interpolation
Abstract: We prove the optimal convergence estimate for first order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions.
\section{Introduction}
While a rich theory of finite element error estimation exists for meshes made of triangular or quadrilateral elements, relatively little attention has been paid to meshes constructed from arbitrary polygonal elements. Many quality-controlled domain meshing schemes could be simplified if polygonal elements were permitted for dealing with problematic areas of a mesh, and finite element methods have already been applied to such meshes~\cite{TS06,WBG07}. Moreover, the theory of Discrete Exterior Calculus has identified the need for and potential usefulness of finite element methods using interpolation methods over polygonal domain meshes (e.g. Voronoi meshes associated to a Delaunay domain mesh)~\cite{GB2010}. Therefore, we seek to develop error estimates for functions interpolated from data at the vertices of a polygon $\Omega$.
Techniques for interpolation over polygons focus on generalizing barycentric coordinates to arbitrary $n$-gons; this keeps the degrees of freedom associated to the vertices of the polygon which is exploited in nodal finite element methods. The seminal work of Wachspress~\cite{W1975} explored this exact idea and has since spawned a field of research on rational finite element bases over polygons. Many alternatives to these `Wachspress coordinates' have been defined as well, including the Harmonic and Sibson interpolants. To our knowledge, however, no careful analysis has been made as to which, if any, of these interpolation functions provide the correct error estimates required for finite element schemes.
We consider first-order interpolation operators arising from generalizations of barycentric coordinates to arbitrary convex polygons. Given a set of barycentric coordinates $\{\lambda_i\}$ for $\Omega$, the associated interpolation operator $I: H^2(\Omega) \rightarrow \text{span} \{\lambda_i\} \subset H^1(\Omega)$ is given by
\begin{equation}
\label{eq:genintop}
Iu:=\sum_iu(\textbf{v}_i)\lambda_i.
\end{equation}
Since barycentric coordinates are unique on triangles (described in Section~\ref{ssec:triangulation}), this is merely the standard linear Lagrange interpolation operator when $\Omega$ is a triangle.
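As an illustration of (\ref{eq:genintop}) in the simplest case (a sketch of ours, not part of the paper; the helper names are hypothetical), the barycentric coordinates of a point in a triangle can be recovered by solving the $3\times 3$ linear system encoding partition of unity and linear precision, and the resulting interpolant reproduces linear functions exactly:

```python
import numpy as np

def triangle_barycentric(verts, x):
    """Barycentric coordinates of x in the triangle with vertices `verts`,
    obtained from the linear system: sum_i lam_i = 1, sum_i lam_i v_i = x."""
    A = np.vstack([np.ones(3), np.asarray(verts, float).T])  # 3x3 system
    b = np.array([1.0, x[0], x[1]])
    return np.linalg.solve(A, b)

def interpolate(verts, values, x):
    """First-order interpolant (I u)(x) = sum_i u(v_i) * lam_i(x)."""
    return float(np.dot(values, triangle_barycentric(verts, x)))

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
u = lambda p: 2.0 * p[0] + 3.0 * p[1] + 1.0   # a linear function
vals = [u(v) for v in verts]
# Linear functions are reproduced exactly: I u = u.
print(interpolate(verts, vals, (0.25, 0.25)))  # 2.25, matching u(0.25, 0.25)
```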
Before stating any error estimates, we fix some notation.
For multi-index ${\bf \alpha} = (\alpha_1, \alpha_2)$ and point $\textbf{x} = (x,y)$, define $\textbf{x}^{\bf \alpha} := x^{\alpha_1} y^{\alpha_2}$, $\alpha ! := \alpha_1!\,\alpha_2!$, $|{\bf \alpha}| := \alpha_1 + \alpha_2$, and $D^{\bf \alpha} u := \partial^{|{\bf \alpha}|} u/\partial x^{\alpha_1}\partial y^{\alpha_2}$.
The Sobolev semi-norms and norms over an open set $\Omega$ are defined by
\begin{align*}
\hpsn{u}{m}{\Omega}^2 &:= \int_\Omega \sum_{|\alpha| = m} |D^\alpha u(\textbf{x})|^2 \,{\rm d} \textbf{x} &{\rm and} & & \hpn{u}{m}{\Omega}^2 &:= \sum_{0\leq k\leq m}\hpsn{u}{k}{\Omega}^2.
\end{align*}
The $H^{0}$-norm is the $L^2$-norm and will be denoted $\lpn{\cdot}{2}{\Omega}$.
Analysis of the finite element method often yields bounds on the solution error in terms of the best possible approximation in the finite-dimensional solution space.
Thus the challenge of bounding the solution error is reduced to a problem of finding a good interpolant.
In many cases Lagrange interpolation can provide a suitable estimate which is asymptotically optimal.
For first-order interpolants that we consider, this \textbf{optimal convergence estimate} has the form
\begin{equation}
\label{eq:hconv}
\hpn{u - I u }{1}{\Omega} \leq C\,\diam(\Omega)\hpsn{u}{2}{\Omega},\quad\forall u\in H^2(\Omega).
\end{equation}
To prove estimate (\ref{eq:hconv}) in our setting, it is sufficient (see Section~\ref{sec:intsobsp}) to restrict the analysis to a class of domains with diameter one and show that $I$ is a bounded operator from $H^2(\Omega)$ into $H^1(\Omega)$, that is
\begin{equation}
\label{eq:iuh1uh2}
\hpn{Iu}{1}{\Omega} \leq C_I \hpn{u}{2}{\Omega},\quad \forall u\in H^2(\Omega).
\end{equation}
We call equation (\ref{eq:iuh1uh2}) the \textbf{$H^1$-interpolant estimate} associated to the barycentric coordinates $\lambda_i$ used to define $I$.
The optimal convergence estimate (\ref{eq:hconv}) does not hold uniformly over all possible domains; a suitable geometric restriction must be selected to produce a uniform bound. Even in the simplest case (Lagrange interpolation on triangles), there is a gap between geometric criteria which are simple to analyze (e.g. the minimum angle condition) and those that encompass the largest possible set of domains (e.g. the maximum angle condition).
This paper is devoted to finding geometric criteria under which the optimal convergence estimate (\ref{eq:hconv}) holds for several types of generalized barycentric coordinates on arbitrary convex polygons. We begin by establishing some notation (shown in Figure \ref{fig:notation}) to describe the specific geometric criteria.
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{Bi}{$\beta_i$}
\psfrag{c}{\textcolor{red}{$\textbf{c}$}}
\psfrag{diam}{\textcolor{blue}{$\diam(\Omega)$}}
\psfrag{Om}{$\Omega$}
\psfrag{r}{\textcolor{red}{$\rho(\Omega)$}}
\includegraphics[width=.3\linewidth]{img/notation.eps}
\end{center}
\caption{Notation used throughout paper. }
\label{fig:notation}
\end{figure}
Let $\Omega$ be a convex polygon with $n$ vertices. Denote the vertices of $\Omega$ by $\textbf{v}_i$ and the interior angle at $\textbf{v}_i$ by $\beta_i$. The largest distance between two points in $\Omega$ (the diameter of $\Omega$) is denoted $\diam(\Omega)$ and the radius of the largest inscribed circle is denoted $\rho(\Omega)$. The center of this circle is denoted $\textbf{c}$ and is selected arbitrarily when no unique circle exists. The \textbf{aspect ratio} (or chunkiness parameter) $\gamma$ is the ratio of the diameter to the radius of the largest inscribed circle, i.e.
\[\gamma := \frac{\diam(\Omega)}{\rho(\Omega)}.\]
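For rectangles both quantities in this ratio are available in closed form, which allows a quick numerical illustration (ours, with a hypothetical helper name). The diameter of a convex polygon is attained between two of its vertices, and the largest inscribed circle of an $a\times b$ rectangle with $a\geq b$ has radius $b/2$:

```python
import math
from itertools import combinations

def aspect_ratio_rect(a, b):
    """Chunkiness parameter gamma = diam(Omega)/rho(Omega) for an a x b
    rectangle (a >= b > 0): the diameter is the diagonal, found here by
    brute force over vertex pairs, and the inradius is b/2."""
    verts = [(0, 0), (a, 0), (a, b), (0, b)]
    diam = max(math.dist(p, q) for p, q in combinations(verts, 2))
    return diam / (b / 2.0)

print(aspect_ratio_rect(1, 1))  # square: sqrt(2)/(1/2) = 2*sqrt(2) ~ 2.828
print(aspect_ratio_rect(4, 1))  # a thin rectangle has much larger gamma
```

Thin rectangles show how condition G1 degenerates as $b/a\to 0$.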
We will consider domains satisfying one or more of the following geometric conditions.
\renewcommand{\labelenumi}{G\arabic{enumi}.}
\begin{enumerate}
\item \textbf{Bounded aspect ratio:} There exists $\gamma^*\in\R$ such that $\gamma < \gamma^*$. \label{g:ratio}
\item \textbf{Minimum edge length: } There exists $d_*\in\R$ such that $|\textbf{v}_i - \textbf{v}_j| > d_* > 0$ for all $i\neq j$. \label{g:minedge}
\item \textbf{Maximum interior angle:} There exists $\beta^*\in\R$ such that $\beta_i < \beta^* < \pi$ for all $i$.\label{g:maxangle}
\end{enumerate}
Using several definitions of generalized barycentric functions from the literature, we show which geometric constraints on $\Omega$ are either necessary or sufficient to ensure the estimate for each definition. The main results of this paper are summarized by the following theorem and Table \ref{tab:conditions}. Primary attention is called to the difference between Wachspress and Sibson coordinates: while G\ref{g:maxangle} is a necessary requirement for Wachspress coordinates, it is demonstrated to be unnecessary for the Sibson coordinates.
\begin{theorem}
In Table \ref{tab:conditions}, any necessary geometric criteria to achieve the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}) are denoted by N. The set of geometric criteria denoted by S in each row are sufficient to guarantee the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}).
\begin{table}[ht]
\centering
\sbox{\strutbox}{\rule{0pt}{0pt}}
\begin{tabular*}{.75\textwidth}{@{\extracolsep{\fill}} ccccc}
&& \begin{tabular}{c} G\ref{g:ratio} \\ (aspect \\ ratio)\end{tabular} & \begin{tabular}{c} G\ref{g:minedge} \\ (min edge \\ length)\end{tabular} &
\begin{tabular}{c} G\ref{g:maxangle} \\ (max interior \\ angle)\end{tabular} \\[2mm]
\hline\hline\\[2mm]
Triangulated & $\displaystyle\lambda^{\rm Tri}$ & - & - & S,N \\[2mm]
\hline\\[2mm]
Harmonic & $\displaystyle\lambda^{\rm Har}$ & S & - & - \\[2mm]
\hline\\[2mm]
Wachspress & $\displaystyle\lambda^{\rm Wach}$ & S & S & S,N \\[2mm]
\hline\\[2mm]
Sibson & $\displaystyle\lambda^{\rm Sibs}$ & S & S & - \\[2mm]
\hline
\end{tabular*}
\caption{`N' indicates a necessary geometric criterion for achieving the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}). The set of criteria denoted `S' in each row, taken together, are sufficient to ensure the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}).}
\label{tab:conditions}
\end{table}
\end{theorem}
In Section \ref{sec:genbary}, we define the various types of generalized barycentric coordinates, compare their properties, and mention prior applications.
In Section \ref{sec:geomcond}, we review some general geometric results needed for subsequent proofs.
In Section \ref{sec:intsobsp}, we give the relevant background on interpolation theory for Sobolev spaces and state some classical results used to motivate our approach.
In Section \ref{sec:trimeth}, we show that the simplest technique of triangulating the polygon achieves the estimate if and only if G\ref{g:maxangle} holds.
In Section \ref{sec:optimal}, we show that if harmonic coordinates are used, G\ref{g:ratio} alone is sufficient.
In Section \ref{sec:wachpress}, we show that Wachspress coordinates require G\ref{g:maxangle} to achieve the estimate but all three criteria are sufficient.
In Section \ref{sec:sibson}, we show that Sibson coordinates achieve the estimate with G\ref{g:ratio} and G\ref{g:minedge} alone.
We discuss the implications of these results and future directions in Section \ref{sec:conc}.
\section{Generalized Barycentric Coordinate Types}
\label{sec:genbary}
Barycentric coordinates on general polygons are any set of functions satisfying certain key properties of the regular barycentric functions for triangles.
\begin{definition}\label{def:barcoor}
Functions $\lambda_i:\Omega\rightarrow\R$, $i=1,\ldots, n$ are \textbf{barycentric coordinates} on $\Omega$ if they satisfy two properties.
\renewcommand{\labelenumi}{B\arabic{enumi}.}
\begin{enumerate}
\item \textbf{Non-negative}: $\lambda_i\geq 0$ on $\Omega$.\label{b:nonneg}
\item \textbf{Linear Completeness}: For any linear function $L:\Omega\rightarrow\R$,
$\displaystyle L=\sum_{i=1}^{n} L(\textbf{v}_i)\lambda_i$.\label{b:lincomp}
\end{enumerate}
\end{definition}
\begin{remark}
Property B\ref{b:lincomp} is the key requirement needed for our interpolation estimates. It ensures that the interpolation operation preserves linear functions, i.e. $IL = L$.
\end{remark}
We will restrict our attention to barycentric coordinates satisfying the following invariance property.
Let $T:\R^2 \rightarrow \R^2$ be a composition of rotation, translation, and uniform scaling transformations and let $\{\lambda^T_i\}$ denote a set of barycentric coordinates on $T\Omega$.
\renewcommand{\labelenumi}{B\arabic{enumi}.}
\begin{enumerate}
\setcounter{enumi}{2}
\item \textbf{Invariance:} $\displaystyle\lambda_i(\textbf{x})=\lambda_i^T(T(\textbf{x}))$.\label{b:invariance}
\end{enumerate}
This assumption will allow estimates over the class of convex sets with diameter one to be immediately extended to generic sizes since translation, rotation and uniform scaling operations can be easily passed through Sobolev norms (see Section~\ref{sec:intsobsp}). At the expense of requiring uniform bounds over a class of diameter-one domains rather than a single reference element, complications associated with handling non-affine mappings between reference and physical elements are avoided \cite{ABF02}.
A set of barycentric coordinates $\{\lambda_i\}$ also satisfies these additional familiar properties:
\renewcommand{\labelenumi}{B\arabic{enumi}.}
\begin{enumerate}
\setcounter{enumi}{3}
\item \textbf{Partition of unity:} $\displaystyle\sum_{i=1}^{n}\lambda_i\equiv 1$. \label{b:partition}
\item \textbf{Linear precision:} $\displaystyle\sum_{i=1}^{n}\textbf{v}_i\lambda_i(\textbf{x})=\textbf{x}$. \label{b:linprec}
\item \textbf{Interpolation:} $\displaystyle\lambda_i(\textbf{v}_j) = \delta_{ij}$. \label{b:interpolation}
\end{enumerate}
The precise relationship between these properties and those defining the barycentric coordinates is given in the following proposition.
\begin{proposition} The properties B\ref{b:nonneg}-B\ref{b:interpolation} are related as follows:
\label{prop:Brelation}
\vspace{-.07in}
\begin{enumerate}[(i)]
\item B\ref{b:lincomp} $\Leftrightarrow$ (B\ref{b:partition} and B\ref{b:linprec})
\item (B\ref{b:nonneg} and B\ref{b:lincomp}) $\Rightarrow$ B\ref{b:interpolation}
\end{enumerate}
\end{proposition}
\begin{proof}
Given B\ref{b:lincomp}, setting $L\equiv 1$ implies B\ref{b:partition} and setting $L(\textbf{x})=\textbf{x}$ yields B\ref{b:linprec}. Conversely, assuming B\ref{b:partition} and B\ref{b:linprec}, let $L(x,y)=ax+by+c$ where $a,b,c\in\R$ are constants. Let $\textbf{v}_i$ have coordinates $(\textbf{v}_i^x,\textbf{v}_i^y)$. Then
\begin{align*}
\sum_{i=1}^{n}L(\textbf{v}_i)\lambda_i(x,y) & =\sum_{i=1}^{n} (a\textbf{v}_i^x+b\textbf{v}_i^y+c)\lambda_i(\textbf{x})\\
& =a\left(\sum_{i=1}^{n}\textbf{v}_i^x\lambda_i(\textbf{x})\right)+b\left(\sum_{i=1}^{n}\textbf{v}_i^y\lambda_i(\textbf{x})\right)+c\left(\sum_{i=1}^{n}\lambda_i(\textbf{x})\right)\\
& = ax+by+c = L(x,y).
\end{align*}
A proof that B\ref{b:nonneg} and B\ref{b:lincomp} imply B\ref{b:interpolation} can be found in \cite[Corollary 2.2]{FHK2006}. \qed
\end{proof}
Thus, while other definitions of barycentric coordinates appear in the literature, requiring only properties B\ref{b:nonneg} and B\ref{b:lincomp} is a minimal definition still achieving all the desired properties.
In the following subsections, we define common barycentric coordinate functions from the literature. Additional comparisons of barycentric functions can be found in the survey papers of Cueto et al. \cite{CSCMCD2003} and Sukumar and Tabarraei \cite{ST2004}.
\subsection{Triangulation Coordinates}
\label{ssec:triangulation}
The simplest method for constructing barycentric coordinates on a polygon is to triangulate the polygon and use the standard barycentric coordinate functions of these triangles. Interpolation properties of this scheme are well known from the standard analysis of the finite element method over triangular meshes, but this construction serves as an important point of comparison with the alternative barycentric coordinates discussed later.
Let ${\mathcal{T}}$ be a triangulation of $\Omega$ formed by adding edges between the $\textbf{v}_j$ in some fashion. Define
\[\lambda^{\rm Tri}_{i,{\mathcal{T}}}:\Omega\rightarrow \R\]
to be the barycentric function associated to $\textbf{v}_i$ on triangles in ${\mathcal{T}}$ containing $\textbf{v}_i$ and identically 0 otherwise. Trivially, these functions define a set of barycentric coordinates on $\Omega$.
Two particular triangulations are of interest. For a fixed $i$, let ${\mathcal{T}}_m$ denote any triangulation with an edge between $\textbf{v}_{i-1}$ and $\textbf{v}_{i+1}$. Let ${\mathcal{T}}_M$ denote the triangulation formed by connecting $\textbf{v}_i$ to all the other $\textbf{v}_j$. Examples are shown in Figure \ref{fig:optimal}.
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{tmin}{${\mathcal{T}}_m$}
\psfrag{tmax}{${\mathcal{T}}_M$}
\[\begin{array}{ccc}
\includegraphics[width=.3\linewidth]{img/triang-min.eps} &
\quad\text{ }\quad &
\includegraphics[width=.3\linewidth]{img/triang-max.eps}
\end{array}\]
\end{center}
\caption{Triangulations ${\mathcal{T}}_m$ and ${\mathcal{T}}_M$ are used to produce the minimum and maximum barycentric functions associated with $\textbf{v}_i$, respectively.}
\label{fig:optimal}
\end{figure}
\begin{proposition}(Floater et al.~\cite{FHK2006})
\label{prop:fhk}
Any barycentric coordinate function $\lambda_i$ according to Definition \ref{def:barcoor} satisfies the bounds
\begin{equation}
\label{eq:fhk}
0\leq \lambda^{\rm Tri}_{i,{\mathcal{T}}_m}(\textbf{x})\leq \lambda_i(\textbf{x}) \leq \lambda^{\rm Tri}_{i,{\mathcal{T}}_M}(\textbf{x})\leq 1,\quad\forall\textbf{x}\in\Omega.
\end{equation}
\end{proposition}
Proposition \ref{prop:fhk} tells us that the triangulation coordinates are, in some sense, the extremal definitions of generalized barycentric coordinates. In any triangulation of $\Omega$, at least one triangle will be of the form $(\textbf{v}_{i-1},\textbf{v}_i,\textbf{v}_{i+1})$, and hence the lower bound in (\ref{eq:fhk}) is always realized by some $\lambda^{\rm Tri}_i$. Thus, the examination of alternative barycentric coordinates can be motivated as an attempt to find non-extremal generalized barycentric coordinates.
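The sandwich (\ref{eq:fhk}) can be checked numerically on the unit square (a sketch of ours, not from the paper). Here the middle coordinate is taken to be the bilinear coordinate $(1-x)(1-y)$ associated with $\textbf{v}_1=(0,0)$, which is non-negative on the square and reproduces linear functions, so it satisfies Definition~\ref{def:barcoor}:

```python
import numpy as np

V = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], float)  # unit square, ccw

def tri_bary(tri, x):
    """Barycentric coords of x in triangle `tri` (3x2); None if x is outside."""
    lam = np.linalg.solve(np.vstack([np.ones(3), tri.T]),
                          np.array([1.0, x[0], x[1]]))
    return lam if np.all(lam >= -1e-12) else None

def lam1_Tm(x):
    """lambda_1 from a triangulation with edge v2--v4: supported on (v1,v2,v4)."""
    lam = tri_bary(V[[0, 1, 3]], x)
    return lam[0] if lam is not None else 0.0

def lam1_TM(x):
    """lambda_1 from the fan triangulation at v1: (v1,v2,v3) and (v1,v3,v4)."""
    for idx in ([0, 1, 2], [0, 2, 3]):
        lam = tri_bary(V[idx], x)
        if lam is not None:
            return lam[0]
    return 0.0

def lam1_bilinear(x):
    """A third set of barycentric coordinates on the square (bilinear)."""
    return (1 - x[0]) * (1 - x[1])

x = np.array([0.3, 0.2])
print(lam1_Tm(x), lam1_bilinear(x), lam1_TM(x))  # 0.5 <= 0.56 <= 0.7
```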
\subsection{Harmonic Coordinates}
A particularly well-behaved set of barycentric coordinates, harmonic coordinates, can be defined as the solutions to certain boundary value problems. Let $g_i:\partial\Omega\rightarrow\R$ be the piecewise linear function satisfying
\[g_i(\textbf{v}_j)=\delta_{ij},\quad g_i \text{ linear on each edge of $\Omega$}.\]
The harmonic coordinate function $\lambda^{\rm Har}_i$ is defined to be the solution of Laplace's equation with $g_i$ as boundary data,
\begin{equation}
\label{eq:optpde}
\displaystyle\left\{\begin{array}{rcll}
\Delta\left(\lambda^{\rm Har}_i\right) & = & 0, & \text{on $\Omega$}, \\
\lambda^{\rm Har}_i & = & g_i, & \text{on $\partial\Omega$}.
\end{array}\right.
\end{equation}
Existence and uniqueness of the solution are well known results~\cite{Ev98,RR04}. Properties B\ref{b:nonneg} and B\ref{b:lincomp} are a consequence of the maximum principle and linearity of Laplace's equation.
These coordinates are optimal in the sense that they minimize the norm of the gradient over all functions satisfying the boundary conditions,
\[
\lambda^{\rm Har}_i = \text{argmin} \left\{\hpsn{\lambda}{1}{\Omega} \, : \, \lambda = g_i \,\text{on $\partial\Omega$} \right\}.
\]
This natural construction extends nicely to polytopes, as well as to a similar definition for barycentric-like (Whitney) vector elements on polygons. Christensen~\cite{C2008} has explored theoretical results along these lines. Numerical approximations of the $\lambda^{\rm Har}_i$ functions have been used to solve Maxwell's equations over polyhedral grids~\cite{E2007} and for finite element simulations for computer graphics~\cite{MKBWG2008,JMRGS07}. While it may seem excessive to solve a PDE just to derive the basis functions for a larger PDE solver, the relatively mild geometric requirements for their use (see Section \ref{sec:optimal}) make these functions a useful reference point for comparison with simpler constructions and a suitable choice in contexts where mesh element quality is hard to control.
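As a rough numerical illustration (ours, not from the paper), $\lambda^{\rm Har}_1$ on the unit square can be approximated by a five-point finite-difference Laplacian with the piecewise-linear boundary data $g_1$ and Jacobi iteration. By symmetry the four harmonic coordinates are equal at the center, so each is $1/4$ there:

```python
import numpy as np

n = 41                        # grid points per side on the unit square
h = np.linspace(0.0, 1.0, n)
lam = np.zeros((n, n))        # lam[i, j] ~ lambda_1 at (x, y) = (h[j], h[i])

# Boundary data g_1: 1 at v1 = (0,0), linear along the two incident edges,
# identically 0 on the two far edges.
lam[0, :] = 1.0 - h           # bottom edge (y = 0)
lam[:, 0] = 1.0 - h           # left edge   (x = 0)
lam[-1, :] = 0.0              # top edge
lam[:, -1] = 0.0              # right edge

for _ in range(5000):         # Jacobi iteration for the 5-point Laplacian
    lam[1:-1, 1:-1] = 0.25 * (lam[:-2, 1:-1] + lam[2:, 1:-1]
                              + lam[1:-1, :-2] + lam[1:-1, 2:])

print(round(lam[n // 2, n // 2], 3))  # ~0.25 at the center of the square
```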
\subsection{Wachspress Coordinates}
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{vip1}{$\textbf{v}_{i+1}$}
\psfrag{vim1}{$\textbf{v}_{i-1}$}
\psfrag{x}{$\textbf{x}$}
\psfrag{Aix}{$A_i(\textbf{x})$}
\psfrag{Aip1x}{$A_{i+1}(\textbf{x})$}
\psfrag{Aim1x}{$A_{i-1}(\textbf{x})$}
\psfrag{wBi}{$B_i$}
\psfrag{dots}{$\cdots$}
\[\begin{array}{ccc}
\includegraphics[width=.3\linewidth]{img/triang-wach1.eps} &
\includegraphics[width=.3\linewidth]{img/triang-wach2.eps}
\end{array}\]
\end{center}
\caption{Left: Notation for $A_i(\textbf{x})$. Right: Notation for $B_i$.}
\label{fig:wach-ntn}
\end{figure}
One of the earliest generalizations of barycentric coordinates was provided by Wachspress \cite{W1975}. The definition of these coordinates relies on the notation shown in Figure \ref{fig:wach-ntn}. Let $\textbf{x}$ denote an interior point of $\Omega$ and let $A_i(\textbf{x})$ denote the area of the triangle with vertices $\textbf{x}$, $\textbf{v}_i$, and $\textbf{v}_{i+1}$ where, by convention, $\textbf{v}_{0} := \textbf{v}_{n}$ and $\textbf{v}_{n+1} := \textbf{v}_{1}$. Let $B_i$ denote the area of the triangle with vertices $\textbf{v}_{i-1}$, $\textbf{v}_i$, and $\textbf{v}_{i+1}$. Define the Wachspress weight function
\[w^{\rm Wach}_i(\textbf{x}) = B_i \prod_{j\not=i,i-1}A_j(\textbf{x}).\]
The Wachspress coordinates are then given by
\begin{equation}
\label{eq:wach}
\lambda^{\rm Wach}_i(\textbf{x})=\frac{w^{\rm Wach}_i(\textbf{x})}{\sum_{j=1}^{n} w^{\rm Wach}_j(\textbf{x})}.
\end{equation}
These coordinates have received extensive attention in the literature since they can be represented as rational functions in Cartesian coordinates. Their use in finite element schemes has been numerically tested in specific application contexts but to our knowledge has not been evaluated in the general Sobolev error estimate context considered here. We note that $\lambda^{\rm Wach}_i\in H^1(\Omega)$ since it is a rational function with strictly positive denominator on $\Omega$.
\begin{remark}
Since $B_i$ does not depend on $\textbf{x}$ and $A_i(\textbf{x})$ is linear in $\textbf{x}$, the Wachspress functions are rational functions of degree $n-2$. By a result of Warren~\cite{W2003}, the Wachspress functions are the unique rational barycentric functions of lowest degree over polygons. For finite element applications, however, the $\lambda_i$ need not be rational.
\end{remark}
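Formula (\ref{eq:wach}) translates directly into a short numerical sketch (ours; zero-based indexing, hypothetical names), which also lets one check the partition of unity, linear precision, and interpolation properties on a sample polygon:

```python
import numpy as np

def wachspress(V, x):
    """Wachspress coordinates at a point x of the convex polygon with
    vertices V (n x 2, counterclockwise): w_i = B_i * prod_{j != i, i-1} A_j,
    normalized so the coordinates sum to one."""
    n = len(V)
    area = lambda p, q, r: 0.5 * ((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))
    A = [area(x, V[i], V[(i + 1) % n]) for i in range(n)]          # A_i(x)
    B = [area(V[i - 1], V[i], V[(i + 1) % n]) for i in range(n)]   # B_i
    w = np.array([B[i] * np.prod([A[j] for j in range(n) if j not in (i, (i - 1) % n)])
                  for i in range(n)])
    return w / w.sum()

V = np.array([(0, 0), (2, 0), (3, 1), (1, 2), (0, 1)], float)  # convex pentagon
x = np.array([1.2, 0.8])
lam = wachspress(V, x)
print(lam.sum())   # ~1.0 (partition of unity)
print(lam @ V)     # ~[1.2, 0.8] (linear precision)
```

Evaluating at a vertex returns the corresponding standard basis vector, matching property B6.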
\subsection{Sibson (Natural Neighbor) Coordinates}
\begin{figure}[ht]
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{Ci}{$C_i$}
\psfrag{x}{$\textbf{x}$}
\psfrag{Dix}{$D(\textbf{x})$}
\psfrag{DixCi}{$D(\textbf{x})\cap C_i$}
\[\begin{array}{ccc}
\includegraphics[width=.3\linewidth]{img/NECexampleA.eps} &
\includegraphics[width=.3\linewidth]{img/NECexampleB.eps} &
\includegraphics[width=.3\linewidth]{img/NECexampleC.eps}
\end{array}\]
\caption{Geometric calculation of a Sibson coordinate. $C_i$ is the area of the Voronoi region associated to vertex $\textbf{v}_i$ inside $\Omega$. $D(\textbf{x})$ is the area of the Voronoi region associated to $\textbf{x}$ if it is added to the vertex list. The quantity $D(\textbf{x})\cap C_i$ is exactly $D(\textbf{x})$ if $\textbf{x}=\textbf{v}_i$ and decays to zero as $\textbf{x}$ moves away from $\textbf{v}_i$, with value identically zero at all vertices besides $\textbf{v}_i$.}
\label{fig:sibson}
\end{figure}
The Sibson coordinates \cite{S1980}, also called the natural neighbor or natural element coordinates, make use of Voronoi diagrams on the vertices $\textbf{v}_i$ of $\Omega$. Let $\textbf{x}$ be a point inside $\Omega$. Let $P$ denote the set of vertices $\{\textbf{v}_i\}$ and define
\[P'=P\cup \{\textbf{x}\} = \{\textbf{v}_1,\ldots,\textbf{v}_{n},\textbf{x}\}.\]
We denote the \textbf{Voronoi cell} associated to a point $\textbf{p}$ in a pointset $Q$ by
\[V_Q(\textbf{p}):= \left\{\textbf{y}\in \Omega \, : \, \vsn{\textbf{y}-\textbf{p}} < \vsn{\textbf{y}-\textbf{q}} \, , \, \forall\textbf{q}\in Q\setminus\{\textbf{p}\} \right\}.\]
Note that these Voronoi cells have been restricted to $\Omega$ and are thus always of finite size. We fix the notation
\[\begin{array}{lcccl}
C_i &:=& |V_{P}(\textbf{v}_i)| &=& \left|\{\textbf{y}\in\Omega \, : \, \vsn{\textbf{y}-\textbf{v}_i} < \vsn{\textbf{y}-\textbf{v}_j}\, , \, \forall j\not=i\}\right| \\
&& &=& \text{area of cell for $\textbf{v}_i$ in Voronoi diagram on the points of $P$,} \\
\\
D(\textbf{x}) &:=& |V_{P'}(\textbf{x})| &=& \left|\{\textbf{y}\in\Omega \, : \, \vsn{\textbf{y}-\textbf{x}} < \vsn{\textbf{y}-\textbf{v}_i}\, , \, \forall i\}\right| \\
&& &=& \text{area of cell for $\textbf{x}$ in Voronoi diagram on the points of $P'$}.
\end{array}
\]
By a slight abuse of notation, we also define
\[ D(\textbf{x})\cap C_i := |V_{P'}(\textbf{x})\cap V_{P}(\textbf{v}_i)|. \]
The notation is shown in Figure \ref{fig:sibson}. The Sibson coordinates are defined to be
\begin{align*}
\lambda^{\rm Sibs}_i(\textbf{x}) &:= \frac{D(\textbf{x})\cap C_i}{D(\textbf{x})} &
\textnormal{or, equivalently,} & &
\lambda^{\rm Sibs}_i(\textbf{x}) &= \frac{D(\textbf{x})\cap C_i}{\sum_{j=1}^{n}D(\textbf{x})\cap C_j}.
\end{align*}
It has been shown that the $\lambda^{\rm Sibs}_i$ are $C^\infty$ on $\Omega$ except at the vertices $\textbf{v}_i$, where they are only $C^0$, and on circumcircles of Delaunay triangles, where they are only $C^1$ \cite{S1980,F1990}. Since the finite set of vertices contains the only points at which the function is not $C^1$, we conclude that $\lambda^{\rm Sibs}_i\in H^1(\Omega)$.
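Because all of the areas above are defined through nearest-neighbor comparisons, Sibson coordinates can be estimated by Monte Carlo sampling (a sketch of ours, not an implementation from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], float)  # unit square

def sibson(x, samples=200_000):
    """Estimate the Sibson coordinates of x: draw y uniformly in the square,
    keep the samples with x as nearest point of P' (these lie in D(x)), and
    bin them by nearest vertex of P (the cells C_i)."""
    Y = rng.random((samples, 2))
    d_vert = np.linalg.norm(Y[:, None, :] - V[None, :, :], axis=2)
    nearest = d_vert.argmin(axis=1)                       # which C_i holds y
    in_D = np.linalg.norm(Y - x, axis=1) < d_vert.min(axis=1)
    counts = np.bincount(nearest[in_D], minlength=len(V))
    return counts / counts.sum()

lam = sibson(np.array([0.5, 0.5]))
print(lam.round(2))  # by symmetry, each coordinate is ~0.25 at the center
```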
To close this section, we compare the intra-element smoothness properties of the coordinate types on the interior of $\Omega$. The triangulation coordinates are $C^0$, the Sibson coordinates are $C^1$, and the Wachspress functions and the harmonic coordinates are both $C^\infty$.
\section{Generalized Shape Regularity Conditions}
\label{sec:geomcond}
The invariance property B\ref{b:invariance} allows estimates on diameter-one polygons to be scaled to polygons of arbitrary size. Several well-known properties of planar convex sets to be used throughout the analysis are given in Proposition~\ref{prop:convexfacts}. Let $|\Omega|$ denote the area of convex polygon $\Omega$ and let $|\partial \Omega|$ denote the perimeter of $\Omega$.
\begin{proposition}\label{prop:convexfacts}
If $\Omega$ is a convex polygon with $\diam(\Omega) = 1$, then
\vspace{-.07in}
\begin{enumerate}[(i)]
\item $|\Omega| < \pi/4$,\label{cf:maxarea}
\item $|\partial \Omega| \leq \pi$, \label{cf:maxperimeter}
\item $\Omega$ is contained in a ball of radius no larger than $1/\sqrt{2}$, and \label{cf:jungs}
\item If convex polygon $\Upsilon$ is contained in $\Omega$, then $|\partial \Upsilon| \leq |\partial \Omega|$. \label{cf:subset}
\end{enumerate}
\end{proposition}
The first three statements are the isodiametric inequality, a corollary to Barbier's theorem, and Jung's theorem, respectively. The last statement is a technical result along the same lines. See \cite{Eg58,YB61,SA00} for more details.
Certain combinations of the geometric restrictions (G1-G3) imply additional useful properties for the analysis. These resulting conditions are listed below.
\renewcommand{\labelenumi}{G\arabic{enumi}.}
\begin{enumerate}
\setcounter{enumi}{3}
\item \textbf{Minimum interior angle:} There exists $\beta_*\in\R$ such that $\beta_i > \beta_* > 0$ for all $i$.\label{g:minangle}
\item \textbf{Maximum vertex count:} There exists $n^*\in\R$ such that $n < n^*$.\label{g:maxdegree}
\end{enumerate}
For triangles, G\ref{g:minangle} and G\ref{g:maxangle} are the only two important geometric restrictions since G\ref{g:maxdegree} holds trivially and G\ref{g:ratio}$\Leftrightarrow$G\ref{g:minangle}$\Rightarrow$G\ref{g:minedge}. For general polygons, the relationships between these conditions are more complicated; for example, a polygon satisfying G\ref{g:ratio} may have vertices which are arbitrarily close to each other and thus might not satisfy G\ref{g:maxdegree}. Proposition~\ref{prop:Grelation} below specifies when the original geometric assumptions (G1-G3) imply G\ref{g:minangle} or G\ref{g:maxdegree}.
\begin{proposition} The following implications hold.
\label{prop:Grelation}
\vspace{-.07in}
\begin{enumerate}[(i)]
\item G\ref{g:ratio} $\Rightarrow$ G\ref{g:minangle} \label{gr:getminangle}
\item (G\ref{g:minedge} or G\ref{g:maxangle}) $\Rightarrow$ G\ref{g:maxdegree}\label{gr:getmaxdegree}
\end{enumerate}
\end{proposition}
\begin{proof}
G\ref{g:ratio} $\Rightarrow$ G\ref{g:minangle}: If $\beta_i$ is an interior angle, then $\rho(\Omega) \leq \sin(\beta_i/2)$ (see Figure~\ref{fig:minIntAngProof}). Thus $\gamma \geq \frac{1}{\sin(\beta_i/2)}$, and since $\gamma < \gamma^*$ we conclude that $\beta_i > 2 \arcsin \frac{1}{\gamma^*}$. Note that $\gamma^* > 2$, so this is well-defined.
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{Bi}{$\beta_i$}
\psfrag{c}{\textcolor{red}{$\textbf{c}$}}
\psfrag{r}{\textcolor{red}{$\rho(\Omega)$}}
\includegraphics[width=.3\linewidth]{img/minIntAngProof.eps}
\end{center}
\caption{Proof that G1 $\Rightarrow$ G4. The upper angle in the triangle is $\leq\beta_i/2\leq \pi/2$ and the hypotenuse is $\leq\diam(\Omega)=1$. Thus $\rho(\Omega)\leq \sin(\beta_i/2)$.}
\label{fig:minIntAngProof}
\end{figure}
G\ref{g:minedge} $\Rightarrow$ G\ref{g:maxdegree}: By Jung's theorem (Proposition~\ref{prop:convexfacts}(\ref{cf:jungs})), there exists $\textbf{x}\in\Omega$ such that $\Omega \subset B(\textbf{x},1/\sqrt{2})$. By G\ref{g:minedge}, $\{B(\textbf{v}_i,d_*/2)\}_{i=1}^{n}$ is a set of disjoint balls. Thus $B(\textbf{x}, 1/\sqrt{2}+d_*/2)$ contains all of these balls. Comparing the areas of $\bigcup_{i=1}^{n} B(\textbf{v}_i,d_*/2)$ and $B(\textbf{x}, 1/\sqrt{2}+d_*/2)$ gives $n\frac{\pi d_*^2}{4} < \pi (\frac 1{\sqrt 2}+d_*/2)^2$, so $n < \frac{(\sqrt{2}+d_*)^2}{d_*^2}$.
G\ref{g:maxangle} $\Rightarrow$ G\ref{g:maxdegree}: Since $\Omega$ is convex, $\sum_{i=1}^{n} \beta_i = \pi(n-2)$. Since each $\beta_i < \beta^*$, we have $n \beta^* > \pi(n-2)$, and thus $n < \frac{2\pi}{\pi-\beta^*}$.
\qed
\end{proof}
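The last bound is sharp in the following sense (a quick check of ours): for the regular $n$-gon every interior angle equals $\pi(n-2)/n$, and substituting this value for $\beta^*$ into $2\pi/(\pi-\beta^*)$ recovers $n$ exactly.

```python
import math

# For the regular n-gon, the interior angle is pi*(n-2)/n; plugging it into
# the vertex-count bound 2*pi/(pi - beta) returns n exactly.
for n in range(3, 12):
    beta = math.pi * (n - 2) / n
    bound = 2 * math.pi / (math.pi - beta)
    print(n, round(bound, 6))
```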
\section{Interpolation in Sobolev Spaces}
\label{sec:intsobsp}
Interpolation error estimates are typically derived from the Bramble-Hilbert lemma which says that Sobolev functions over a certain domain or class of domains can be approximated well by polynomials.
The original lemma \cite{BH70} applied to a fixed domain (typically the ``reference'' element) and did not indicate how the estimate was impacted by domain geometry. Later, a constructive proof based on the averaged Taylor polynomial gave a uniform estimate under the geometric restriction G\ref{g:ratio} \cite{DS80,BS08}.
Recent improvements to this construction have demonstrated that even the condition G\ref{g:ratio} is unnecessary \cite{Ve99,DL04}. This modern version of the Bramble-Hilbert lemma is stated below and has been specialized to our setting, namely, the $H^1$ estimate for diameter $1$, convex domains.
\begin{lemma}[\cite{Ve99,DL04}]\label{lem:bramblehilbert}
Let $\Omega$ be a convex polygon with diameter $1$. For all $u\in H^2(\Omega)$, there exists a first order polynomial $p_u$ such that
$\hpn{u-p_u}{1}{\Omega} \leq C_{BH} \, \hpsn{u}{2}{\Omega}$.
\end{lemma}
We emphasize that the constant $C_{BH}$ is uniform over all convex sets of diameter $1$. The $H^1$-interpolant estimate (\ref{eq:iuh1uh2}) and Lemma~\ref{lem:bramblehilbert} together ensure the desired
optimal convergence estimate (\ref{eq:hconv}).
\begin{theorem}
Let $\Omega$ be a convex polygon with diameter $1$. If the $H^1$-interpolant estimate (\ref{eq:iuh1uh2}) holds, then for all $u\in H^2(\Omega)$,
\[
\hpn{u-Iu}{1}{\Omega} \leq (1 + C_I)\, \sqrt{1+C_{BH}^2} \, \hpsn{u}{2}{\Omega}.
\]
\end{theorem}
\begin{proof}
Let $p_u$ be the approximating polynomial given by Lemma~\ref{lem:bramblehilbert}. Since $p_u$ has degree one, $\hpsn{u-p_u}{2}{\Omega} = \hpsn{u}{2}{\Omega}$, so $\hpn{u-p_u}{2}{\Omega}^2 = \hpn{u-p_u}{1}{\Omega}^2 + \hpsn{u}{2}{\Omega}^2 \leq (1+C_{BH}^2)\,\hpsn{u}{2}{\Omega}^2$. By property B\ref{b:lincomp}, $Ip_u = p_u$, yielding the estimate
\begin{align*}
\hpn{u-Iu}{1}{\Omega} & \leq \hpn{u-p_u}{1}{\Omega} + \hpn{I(u-p_u)}{1}{\Omega}\\
& \leq (1 + C_I)\hpn{u-p_u}{2}{\Omega}
\leq (1 + C_I)\sqrt{1+C_{BH}^2} \hpsn{u}{2}{\Omega}.\qed
\end{align*}
\end{proof}
\begin{corollary}
Let $\diam(\Omega) \leq 1$. If the $H^1$-interpolant estimate (\ref{eq:iuh1uh2}) holds, then for all $u\in H^2(\Omega)$,
\[
\hpn{u-Iu}{1}{\Omega} \leq (1 + C_I)\, \sqrt{1+C_{BH}^2}\, \diam(\Omega) \, \hpsn{u}{2}{\Omega}.
\]
\end{corollary}
\begin{proof}
This follows from the standard scaling properties of Sobolev norms since property B\ref{b:invariance} allows for a change of variables to a unit diameter domain.
Note: the $L^2$-component of the $H^1$-norm satisfies a stronger estimate containing an extra power of $\diam(\Omega)$.
\qed
\end{proof}
Section~\ref{sec:errorest} is an investigation of the geometric conditions under which the $H^1$-interpolant estimate (\ref{eq:iuh1uh2}) holds for the barycentric functions discussed in Section~\ref{sec:genbary}.
Under the geometric restrictions G\ref{g:ratio} and G\ref{g:maxdegree}, one method for verifying (\ref{eq:iuh1uh2}) (utilized in \cite{BS08} for simplicial interpolation) is to bound the $H^1$-norm of the basis functions. In several cases we will use this criterion, which is justified by the following lemma.
\begin{lemma}
\label{lem:basisbd}
Under G\ref{g:ratio} and G\ref{g:maxdegree}, the $H^1$-interpolant estimate (\ref{eq:iuh1uh2}) holds whenever there exists a constant $C_\lambda$ such that
\begin{equation}\label{eq:basisbound}
\hpn{\lambda_i}{1}{\Omega} \leq C_\lambda.
\end{equation}
\end{lemma}
\begin{proof}
This follows almost immediately from the Sobolev embedding theorem; see \cite{Ad03,Le09}:
\[
\hpn{Iu}{1}{\Omega} \leq \sum_{i=1}^{n} |u(\textbf{v}_i)| \hpn{\lambda_i}{1}{\Omega} \leq n^* C_\lambda\, \vn{u}_{C^0(\overline{\Omega})} \leq n^* \,C_\lambda\, C_s\, \hpn{u}{2}{\Omega},
\]
where $C_s$ is the Sobolev embedding; i.e., $\vn{u}_{C^0(\overline{\Omega})} \leq C_s\, \hpn{u}{2}{\Omega}$ for all $u\in H^2(\Omega)$. The constant $C_s$ is independent of the domain $\Omega$ since the boundaries of all polygons satisfying G\ref{g:ratio} are uniformly Lipschitz~\cite{Le09}. \qed
\end{proof}
\section{Error Estimate Requirements}\label{sec:errorest}
\subsection{Estimate Requirements for Triangulation Coordinates}
\label{sec:trimeth}
Interpolation error estimates on triangles are well understood: the optimal convergence estimate (\ref{eq:hconv}) holds as long as the triangle satisfies a maximum angle condition \cite{BA76,Ja76}. In fact, it has been shown that the triangle circumradius controls the error independent of any other geometric criteria \cite{Kr91}. This result can be directly applied to $I^{\rm Tri}$, the interpolation operator associated to coordinates $\lambda^{\rm Tri}_i$. This convention will also be used to define $I^{\rm Opt}$, $I^{\rm Wach}$, and $I^{\rm Sibs}$ as the interpolation operators associated with harmonic, Wachspress, and Sibson coordinates, respectively.
\begin{lemma}
\label{lem:triangulated}
Under G\ref{g:maxangle}, the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}) holds for $I^{\rm Tri}$. Conversely, G\ref{g:maxangle} is a necessary assumption to achieve (\ref{eq:iuh1uh2}) with $I^{\rm Tri}$.
\end{lemma}
\begin{proof}
All angles of all triangles of any triangulation ${\mathcal{T}}$ of $\Omega$ satisfying G\ref{g:maxangle} are less than $\beta^*$. Thus, the sufficiency of G\ref{g:maxangle} follows immediately from the maximum angle condition on simplices~\cite{BA76}. An example from the same paper involving the interpolation of a quadratic function over a triangle also establishes the necessity of the condition.\qed
\end{proof}
\subsection{Estimate Requirements for Harmonic Coordinates}
\label{sec:optimal}
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{vip1}{$\textbf{v}_{i+1}$}
\psfrag{vim1}{$\textbf{v}_{i-1}$}
\psfrag{c}{\textcolor{red}{$\textbf{c}$}}
\psfrag{y}{\textcolor{red}{$\textbf{y}$}}
\psfrag{x}{$\textbf{x}$}
\psfrag{Aix}{$A_i(\textbf{x})$}
\psfrag{wBi}{$B_i$}
\[\begin{array}{cccc}
\includegraphics[height=.2\linewidth]{img/triang-opt.eps} &
\includegraphics[height=.35\linewidth]{img/triang-opt-est1.eps} &
\text{ }\quad\text{ } &
\includegraphics[height=.35\linewidth]{img/triang-opt-est2.eps} \\
\\
\textbf{(a)} &
\textbf{(b)} &&
\textbf{(c)}
\end{array}\]
\end{center}
\caption{\textbf{(a)} Triangulation used in the analysis of harmonic coordinates. \textbf{(b)} Notation for proof of the bound for $\angle \textbf{c}\textbf{v}_i\textbf{v}_{i+1}$ in a case where it is $>\pi/2$. \textbf{(c)} Notation for proof of the bound for $\angle \textbf{v}_i\textbf{c}\textbf{v}_{i+1}$ in a case where it is $>\pi/2$.}
\label{fig:opt-tri}
\end{figure}
Recalling the notation from Figure~\ref{fig:notation}, let $T$ be the triangulation of $\Omega$ formed by connecting $\textbf{c}$ to each of the $\textbf{v}_i$; see Figure~\ref{fig:opt-tri}a.
\begin{proposition}\label{prop:nlatriangulation}
Under G\ref{g:ratio} all angles of all triangles of $T$ are less than $\pi - \arcsin (1/\gamma^*)$.
\end{proposition}
\begin{proof}
Consider the triangle with vertices $\textbf{c}$, $\textbf{v}_i$ and $\textbf{v}_{i+1}$. Without loss of generality, assume that $|\textbf{c} - \textbf{v}_i| < |\textbf{c}- \textbf{v}_{i+1}|$. First we bound $\angle\textbf{c} \textbf{v}_{i+1} \textbf{v}_i$. By the law of sines,
\begin{equation}
\label{eq:lawsinarg}
\frac{\sin(\angle\textbf{c} \textbf{v}_{i+1} \textbf{v}_i)}{\sin(\angle\textbf{c} \textbf{v}_i \textbf{v}_{i+1})}=\frac{|\textbf{c} - \textbf{v}_i|}{|\textbf{c} - \textbf{v}_{i+1}|}<1.
\end{equation}
If $\angle\textbf{c} \textbf{v}_i \textbf{v}_{i+1}>\pi/2$ then $\angle\textbf{c} \textbf{v}_{i+1} \textbf{v}_i<\pi/2$. Otherwise, (\ref{eq:lawsinarg}) implies $\angle\textbf{c}\textbf{v}_{i+1}\textbf{v}_i<\pi/2$.
To bound angle $\angle \textbf{c}\textbf{v}_i\textbf{v}_{i+1}$, it suffices to consider the case when $\angle \textbf{c}\textbf{v}_i\textbf{v}_{i+1}>\pi/2$, as shown in Figure~\ref{fig:opt-tri}b. Define $\textbf{y}$ to be the point on the line through $\textbf{v}_i$ and $\textbf{v}_{i+1}$ which forms a right triangle with $\textbf{v}_i$ and $\textbf{c}$. Since $\angle \textbf{c}\textbf{v}_i\textbf{v}_{i+1}>\pi/2$, $\textbf{y}$ is exterior to $\Omega$, as shown. Observe that
\[\frac{|\textbf{c}-\textbf{v}_i|}{|\textbf{c}-\textbf{y}|}<\frac{|\textbf{c}-\textbf{v}_{i+1}|}{|\textbf{c}-\textbf{y}|}<\frac{\diam(\Omega)}{\rho(\Omega)}=\gamma<\gamma^*.\]
Since $\sin(\pi-\angle \textbf{c}\textbf{v}_i\textbf{v}_{i+1})=\frac {|\textbf{c}-\textbf{y}|}{|\textbf{c}-\textbf{v}_i|}$, the result follows.
For the final case, it suffices to assume $\angle \textbf{v}_i\textbf{c}\textbf{v}_{i+1} > \pi/2$, as shown in Figure~\ref{fig:opt-tri}c. Define $\textbf{y}$ in the same way, but note that in this case $\textbf{y}$ is between $\textbf{v}_i$ and $\textbf{v}_{i+1}$, as shown. Similarly, $\frac{|\textbf{c}-\textbf{v}_{i+1}|}{|\textbf{c}-\textbf{y}|}<\gamma^*$, implying $\angle\textbf{v}_i\textbf{v}_{i+1}\textbf{c} >\arcsin(1/\gamma^*)$. Since $\angle\textbf{v}_i\textbf{c}\textbf{v}_{i+1}<\pi-\angle\textbf{v}_i\textbf{v}_{i+1}\textbf{c}$, the result follows.\qed
\end{proof}
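The angle bound of Proposition~\ref{prop:nlatriangulation} can be checked on a concrete family: for a regular $n$-gon, the center plays the role of $\textbf{c}$, and all fan-triangle angles are explicit. The following sketch (an assumed test family, not part of the argument) verifies the bound:

```python
import math

# Assumed test family: regular n-gons with circumradius 1.  The center plays
# the role of c; check every fan-triangle angle against pi - arcsin(1/gamma).
for n in range(4, 30):
    R = 1.0
    rho = R * math.cos(math.pi / n)                       # inradius
    diam = 2 * R if n % 2 == 0 else 2 * R * math.cos(math.pi / (2 * n))
    gamma = diam / rho                                    # aspect ratio
    apex = 2 * math.pi / n                                # angle at the center c
    base = (math.pi - apex) / 2                           # angles at v_i, v_{i+1}
    bound = math.pi - math.asin(1.0 / gamma)
    assert max(apex, base) < bound
```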
\begin{lemma}\label{lem:optimal}
Under G\ref{g:ratio} the operator $I^{\rm Opt}$ satisfies the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}).
\end{lemma}
\begin{proof}
Since the differential equation (\ref{eq:optpde}) is linear, $I^{\rm Opt} u$ is the solution to the differential equation,
\begin{equation}
\label{eq:ioptpde}
\displaystyle\left\{\begin{array}{rcll}
\Delta\left(I^{\rm Opt} u \right) & = & 0, & \text{on $\Omega$} \\
I^{\rm Opt} u & = & g_u, & \text{on $\partial\Omega$}
\end{array}\right.
\end{equation}
where $g_u$ is the piecewise linear function which equals $u$ at the vertices of $\Omega$. Following the standard approach for handling nonhomogeneous boundary data, we decompose $I^{\rm Opt} u = u_{\rm hom} + u_{\rm non}$ where $u_{\rm non}\in H^1(\Omega)$ is some function satisfying the boundary condition (i.e., $u_{\rm non} = g_u$ on $\partial\Omega$) and $u_{\rm hom}$ solves,
\begin{equation}
\label{eq:uhom}
\displaystyle\left\{\begin{array}{rcll}
\Delta u_{\rm hom} & = & -\Delta u_{\rm non}, & \text{on $\Omega$} \\
u_{\rm hom} & = & 0, & \text{on $\partial\Omega$}.
\end{array}\right.
\end{equation}
Specifically we select $u_{\rm non}$ to be the standard Lagrange interpolant of $u$ over triangulation $T$ (described earlier).
Since Proposition~\ref{prop:nlatriangulation} guarantees that no large angles exist in the triangulation, the standard interpolation error estimate holds,
\begin{equation}
\label{eq:nlainterpolation}
\hpn{u - u_{\rm non}}{1}{\Omega} \leq C_{BA} \, \hpsn{u}{2}{\Omega}
\end{equation}
where $C_{BA}$ only depends upon the aspect ratio bound $\gamma^*$ since $\diam(\Omega) = 1$.
The triangle inequality then implies that $\hpn{u_{\rm non}}{1}{\Omega} \leq \max(1,C_{BA}) \hpn{u}{2}{\Omega}$.
Next a common energy estimate (see \cite{Ev98}) for (\ref{eq:uhom}) implies that $\hpsn{u_{\rm hom}}{1}{\Omega} \leq \hpsn{u_{\rm non}}{1}{\Omega}$. The Poincar\'e inequality (see \cite{Le09}) ensures that $\lpn{u_{\rm hom}}{2}{\Omega} \leq C_P \hpsn{u_{\rm hom}}{1}{\Omega}$ where $C_P$ only depends on the diameter of $\Omega$ which we have fixed to be $1$. The argument is completed by combining the previous estimates:
\begin{align*}
\hpn{I^{\rm Opt} u}{1}{\Omega} & \leq \hpn{u_{\rm hom}}{1}{\Omega} + \hpn{u_{\rm non}}{1}{\Omega} \\
& \leq (1 + C_P) \hpsn{u_{\rm hom}}{1}{\Omega} + \hpn{u_{\rm non}}{1}{\Omega}\\
& \leq (2 + C_P)\hpn{u_{\rm non}}{1}{\Omega} \leq (2 + C_P)\max(1,C_{BA}) \hpn{u}{2}{\Omega}.
\qed
\end{align*}
\end{proof}
\subsection{Estimate Requirements for Wachspress Coordinates}
\label{sec:wachpress}
\begin{figure}[ht]
\begin{center}
\psfrag{v1}{$\textbf{v}_1$}
\[\begin{array}{ccc}
\includegraphics[width=.3\linewidth]{img/wach-cex-eps-pt10.eps} &
\includegraphics[width=.3\linewidth]{img/wach-cex-eps-pt05.eps} &
\includegraphics[width=.3\linewidth]{img/wach-cex-eps-pt025.eps}
\end{array}\]
\end{center}
\caption{Example showing the necessity of condition G\ref{g:maxangle} for attaining the optimal convergence estimate (\ref{eq:hconv}) with the Wachspress coordinates. As the shape approaches a square, the level sets of $\lambda^{\rm Wach}_1$ collect at the top edge, causing a steep gradient and thus preventing a bound on the $H^1$ norm of the error. The figures from left to right correspond to $\epsilon$ values of 0.1, 0.05, and 0.025. }
\label{fig:wach-cex}
\end{figure}
Unlike the harmonic coordinate functions, the Wachspress coordinates can produce unsatisfactory interpolants unless additional geometric conditions are imposed. We present a simple counterexample (observed qualitatively in \cite{FHK2006} and in Figure~\ref{fig:wach-cex}) to show what can go wrong.
Let ${\Omega_\epsilon}$ be the pentagon defined by the vertices
\[
\textbf{v}_1=(0,1+\epsilon),\quad
\textbf{v}_2=(1,1),\quad
\textbf{v}_3=(1,-1),\quad
\textbf{v}_4=(-1,-1),\quad
\textbf{v}_5=(-1,1),
\]
with $\epsilon>0$. As $\epsilon\rightarrow 0$, ${\Omega_\epsilon}$ approaches a square, so G\ref{g:ratio} is not violated.
Consider the interpolant of $u(\textbf{x})=1-x_1^2$ where $\textbf{x} =(x_1,x_2)$. Observe that $u$ has value 1 at $\textbf{v}_1$ and value $0$ at the other vertices of ${\Omega_\epsilon}$. Hence
\[I^{\rm Wach} u=\sum_{i=1}^5 u(\textbf{v}_i)\lambda^{\rm Wach}_i=\lambda^{\rm Wach}_1.\]
Using the fact that $\partial u/\partial y = 0$, we write
\begin{align*}
\hpn{u - I^{\rm Wach} u}{1}{{\Omega_\epsilon}}^2 &= \hpn{u - \lambda^{\rm Wach}_1}{1}{{\Omega_\epsilon}}^2 \\
& = \int_{\Omega_\epsilon} |u - \lambda^{\rm Wach}_1|^2+\left|\frac{\partial (u - \lambda^{\rm Wach}_1)}{\partial x}\right|^2+\left|\frac{\partial \lambda^{\rm Wach}_1}{\partial y}\right|^2.
\end{align*}
The last term in this sum blows up as $\epsilon \rightarrow 0$.
\begin{lemma}
\label{lem:wachcex}
$\displaystyle\lim_{\epsilon\rightarrow 0}\int_{\Omega_\epsilon}\left|\frac{\partial \lambda^{\rm Wach}_1}{\partial y}\right|^2=\infty$.
\end{lemma}
The proof of the lemma is given in the appendix. As a corollary, we observe that $\hpn{u - Iu}{1}{{\Omega_\epsilon}}$ cannot be bounded independently of $\epsilon$. Since $\hpn{u}{2}{{\Omega_\epsilon}}$ is finite, the optimal convergence estimate (\ref{eq:hconv}) cannot hold without additional geometric criteria on the domain $\Omega$. This establishes the necessity of a maximum interior angle condition when Wachspress coordinates are used.
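The blow-up can also be observed numerically by integrating the closed-form derivative of $\lambda^{\rm Wach}_1$ (derived in the appendix) over $\Omega_\epsilon$; the midpoint-rule discretization below is an assumed sketch, not part of the formal argument:

```python
# Integrate |d(lambda_1^Wach)/dy|^2 over the pentagon Omega_eps using the
# closed-form derivative from the appendix; the grid is an assumed choice.
def grad_y_sq_integral(eps, n=400):
    total, dx = 0.0, 2.0 / n
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        y_top = 1.0 + eps * (1.0 - abs(x))       # top boundary of Omega_eps
        m = max(1, int((y_top + 1.0) / dx))
        dy = (y_top + 1.0) / m
        for j in range(m):
            y = -1.0 + (j + 0.5) * dy
            s = 1.0 - x * x
            D = eps**2 * s + 4.0 * eps + 2.0 * (1.0 - y)
            dlam = (4*eps*s + 4*eps**2*s + eps**3*s*s) / D**2
            total += dlam**2 * dx * dy
    return total

vals = [grad_y_sq_integral(e) for e in (0.1, 0.05, 0.025)]
assert vals[0] < vals[1] < vals[2]               # integral grows as eps -> 0
```

The three $\epsilon$ values match those shown in Figure~\ref{fig:wach-cex}.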
Under the three geometric restrictions G\ref{g:ratio}, G\ref{g:minedge}, and G\ref{g:maxangle}, (\ref{eq:hconv}) does hold which will be shown in Lemma~\ref{lm:wach}. We begin with some preliminary estimates.
\begin{proposition}\label{pr:gradarea}
For all $\textbf{x} \in \Omega$, $\vsn{\nabla A_i(\textbf{x})} \leq \frac{1}{2}$.
\end{proposition}
\begin{proof}
In \cite[Equation (17)]{FK10} it is shown that $\vsn{\nabla A_i(\textbf{x})}=\frac{1}{2}\vsn{\textbf{v}_i - \textbf{v}_{i+1}}$. Since $\diam (\Omega) = 1$, the result follows.
\qed
\end{proof}
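The identity underlying Proposition~\ref{pr:gradarea} is easy to confirm by finite differences; a small sketch with assumed sample points:

```python
import math

# Finite-difference check (assumed sample points) of the identity
# |grad A_i(x)| = |v_i - v_{i+1}| / 2, where A_i(x) = area(x, v_i, v_{i+1}).
def area(p, q, r):
    return 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))

vi, vi1 = (0.3, 0.0), (0.1, 0.4)
x, h = (0.55, 0.6), 1e-6
gx = (area((x[0]+h, x[1]), vi, vi1) - area((x[0]-h, x[1]), vi, vi1)) / (2*h)
gy = (area((x[0], x[1]+h), vi, vi1) - area((x[0], x[1]-h), vi, vi1)) / (2*h)
edge = math.hypot(vi[0]-vi1[0], vi[1]-vi1[1])
assert abs(math.hypot(gx, gy) - edge / 2) < 1e-6
```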
Next we show that the triangular areas $B_i$ are uniformly bounded from below given our geometric assumptions.
\begin{proposition}\label{pr:carealower}
Under G\ref{g:ratio}, G\ref{g:minedge}, and G\ref{g:maxangle}, there exists $B_*$ such that $B_i >B_*$.
\end{proposition}
\begin{proof}
By G\ref{g:minedge}, the area of the isosceles triangle with equal sides of length $d_*$ meeting with angle $\beta_i$ at $\textbf{v}_i$ is a lower bound for $B_i$, as shown in Figure~\ref{fig:ajxlower} (left). More precisely, $B_i > (d_*)^2 \sin (\beta_i/2) \cos(\beta_i/2)$. G\ref{g:maxangle} implies that $\cos(\beta_i/2) > \cos(\beta^*/2)$. G\ref{g:minangle} (which follows from G\ref{g:ratio} by Proposition \ref{prop:Grelation}) implies that $\sin (\beta_i/2) > \sin(\beta_*/2)$. Thus $B_i > B_* := (d_*)^2 \sin (\beta_*/2) \cos(\beta^*/2)$.
\qed
\end{proof}
Proposition~\ref{pr:carealower} can be extended to guarantee a uniform lower bound on the sum of the Wachspress weight functions.
\begin{proposition}\label{pr:wachweightlower}
Under G\ref{g:ratio}, G\ref{g:minedge}, and G\ref{g:maxangle}, there exists $w_*$ such that for all $\textbf{x}\in \Omega$, $$\sum_k w^{\rm Wach}_k(\textbf{x}) > w_*.$$
\end{proposition}
\begin{proof}
Let $\textbf{v}_i$ be the nearest vertex to $\textbf{x}$, breaking any tie arbitrarily. We will produce a lower bound on $w_i(\textbf{x})$. Let $j \notin \{i-1,i\}$. G\ref{g:minedge} implies that $\vsn{\textbf{x}-\textbf{v}_j} > d_*/2$ and $\vsn{\textbf{x}-\textbf{v}_{j+1}} > d_*/2$. G\ref{g:maxangle} implies that $\angle \textbf{x}\textbf{v}_j\textbf{v}_{j+1} < \beta^*$ and $\angle \textbf{x}\textbf{v}_{j+1}\textbf{v}_j < \beta^*$. It follows that $A_j(\textbf{x}) > (d_*)^2\sin(\pi-\beta^*)/4$ (see Figure~\ref{fig:ajxlower} (right)). We now use Proposition~\ref{pr:carealower} and property G\ref{g:maxdegree} (which follows from either G\ref{g:minedge} or G\ref{g:maxangle} by Proposition~\ref{prop:Grelation}) to conclude that
\[
\sum_k w^{\rm Wach}_k(\textbf{x}) > w^{\rm Wach}_i(\textbf{x}) = B_i \prod_{j\not=i,i-1}A_j(\textbf{x}) \geq B_* \left[(d_*)^2\sin(\pi-\beta^*)/4\right]^{n^*-2}.
\]
\qed
\end{proof}
\begin{figure}[ht]
\begin{center}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{vj}{$\textbf{v}_j$}
\psfrag{vjp1}{$\textbf{v}_{j+1}$}
\psfrag{vip1}{$\textbf{v}_{i+1}$}
\psfrag{vim1}{$\textbf{v}_{i-1}$}
\psfrag{dstar}{$d_*$}
\psfrag{x}{$\textbf{x}$}
\psfrag{Ajx}{$A_j(\textbf{x})$}
\[
\includegraphics[width=1.8in]{img/bilower.eps}\quad\text{ }\quad
\includegraphics[width=1.5in]{img/ajxlower.eps}
\]
\end{center}
\caption{\textbf{Left:} Justification of claim that $B_i > (d_*)^2 \sin (\beta_i/2) \cos(\beta_i/2)$ in the proof of Proposition~\ref{pr:carealower}. The shaded triangle is isosceles with angle $\beta_i$ and two side lengths equal to $d_*$ as indicated. Computing the area of this triangle using the dashed edge as the base yields the estimate. \textbf{Right:} Justification of claim that $A_j(\textbf{x}) > (d_*)^2\sin(\pi-\beta^*)/4$ in the proof of Proposition~\ref{pr:wachweightlower}. The indicated angle is at least $\pi-\beta^*$ by G\ref{g:maxangle} and $\vsn{\textbf{v}_{j}-\textbf{v}_{j+1}}>d_*$. Computing the area of the triangle using edge $\textbf{v}_{j}\textbf{v}_{j+1}$ as the base yields the estimate.}
\label{fig:ajxlower}
\end{figure}
\begin{lemma}\label{lm:wach}
Under G\ref{g:ratio}, G\ref{g:minedge}, and G\ref{g:maxangle}, (\ref{eq:basisbound}) holds for the Wachspress coordinates.
\end{lemma}
\begin{proof}
The gradient of $\lambda^{\rm Wach}_i(\textbf{x})$ can be bounded using Proposition \ref{pr:wachweightlower}:
\begin{align}
\vsn{\nabla \lambda^{\rm Wach}_i(\textbf{x})} & \leq \frac{\vsn{\nabla w^{\rm Wach}_i(\textbf{x})}}{\sum_j w^{\rm Wach}_j(\textbf{x})} + \frac{w^{\rm Wach}_i(\textbf{x})\sum_k \vsn{\nabla w^{\rm Wach}_k(\textbf{x})}}{\left(\sum_j w^{\rm Wach}_j(\textbf{x})\right)^2}\notag\\
& \leq \frac{\vsn{\nabla w^{\rm Wach}_i(\textbf{x})} + \sum_k \vsn{\nabla w^{\rm Wach}_k(\textbf{x})}}{\sum_j w^{\rm Wach}_j(\textbf{x})}
\leq \frac{2 \sum_k \vsn{\nabla w^{\rm Wach}_k(\textbf{x})}}{w_*}.\label{eq:wachgradbound}
\end{align}
Recalling Proposition~\ref{prop:convexfacts}, $\sum_{j=1}^{n} A_j(\textbf{x}) < \pi/4$ and $B_i<\pi/4$.
Using Proposition~\ref{pr:gradarea} and the arithmetic mean-geometric mean inequality, we derive
\begin{align}
\vsn{\nabla w^{\rm Wach}_i(\textbf{x})} &= \vsn{\sum_{j\neq i-1,i} B_i \nabla A_j(\textbf{x}) \prod_{k\neq i-1,i,j} A_k(\textbf{x})}\notag \\
& \leq \sum_{j\neq i-1,i}\left[ B_i \vsn{\nabla A_j(\textbf{x})} \prod_{k\neq i-1,i,j} A_k(\textbf{x})\right]\notag \\
& \leq \sum_{j\neq i-1,i}\frac{\pi}{8}\left[ \frac{\sum_{k\neq i-1,i,j} A_k(\textbf{x})}{n-3} \right]^{n-3}
\leq \sum_{j\neq i-1,i}\frac{\pi}{8}\left[ \frac{\pi}{4(n-3)} \right]^{n-3} \notag \\
& = \frac{\pi}{8}(n-2)\left[ \frac{\pi}{4(n-3)}\right]^{n-3}. \label{eq:wachgradweight}
\end{align}
By induction, one can show that $n(n-2)\left[ \frac{\pi}{4(n-3)}\right]^{n-3} \leq 2\pi$ for $n\geq 4$. Using this, we substitute (\ref{eq:wachgradweight}) into (\ref{eq:wachgradbound}) to get
\[\vsn{\nabla \lambda^{\rm Wach}_i(\textbf{x})}\leq \frac 2{w_*}\sum_k\vsn{\nabla w^{\rm Wach}_k(\textbf{x})}\leq\frac 2{w_*}n\frac{\pi}{8}(n-2)\left[ \frac{\pi}{4(n-3)}\right]^{n-3}\leq \frac{\pi}{4w_*}2\pi =\frac{\pi^2}{2w_*}. \]
Since $|\Omega|<\pi/4$ by Proposition \ref{prop:convexfacts}, we thus have a uniform bound
\[\hpn{\lambda^{\rm Wach}_i}{1}{\Omega} \leq \sqrt{\left(1 + \frac{\pi^4}{4w_*^2}\right)\frac{\pi}{4}}.\qed \]
\end{proof}
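The elementary bound invoked in the final step of the proof, $n(n-2)\left[\pi/(4(n-3))\right]^{n-3}\leq 2\pi$ for $n\geq 4$, is easy to confirm numerically (the range of $n$ tested is an arbitrary choice):

```python
import math

# Verify n(n-2) * (pi / (4(n-3)))^(n-3) <= 2*pi for n >= 4.
# Equality holds at n = 4; the expression decays rapidly afterwards.
for n in range(4, 200):
    val = n * (n - 2) * (math.pi / (4 * (n - 3))) ** (n - 3)
    assert val <= 2 * math.pi + 1e-12
```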
\subsection{Estimate Requirements for Sibson Coordinates}
\label{sec:sibson}
The interpolation estimate for Sibson coordinates is derived by an approach very similar to that used for the Wachspress coordinates in the previous section. However, in this case the geometric condition G\ref{g:maxangle} is not necessary. We begin with a technical property of domains satisfying conditions G\ref{g:ratio} and G\ref{g:minedge}.
\begin{proposition}\label{prop:ballinvoronoi}
Under G\ref{g:ratio} and G\ref{g:minedge}, there exists $h_* > 0$ such that for all $\textbf{x}\in \Omega$, $B(\textbf{x},h_*)$ does not intersect any three edges or any two non-adjacent edges of $\Omega$.
\end{proposition}
\begin{proof}
Let $\textbf{x} \in \Omega$, $h\in (0,d_*/2)$, and suppose that two disjoint edges of $\Omega$, $e_i$ and $e_j$, intersect $B(\textbf{x}, h)$. Let $L_i$ and $L_j$ be the lines containing $e_i$ and $e_j$ and let $\theta$ be the angle between these lines; see Figure~\ref{fig:ballinset}. We first consider the case where $L_i$ and $L_j$ are not parallel and define $\textbf{z} = L_i \cap L_j$.
\begin{figure}
\centering
\psfrag{x}{$\textbf{x}$}
\psfrag{z}{$\textbf{z}$}
\psfrag{ei}{$e_i$}
\psfrag{ej}{$e_j$}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{vj}{$\textbf{v}_j$}
\psfrag{Li}{$L_i$}
\psfrag{Lj}{$L_j$}
\psfrag{h}{$h$}
\psfrag{W}{$W$}
\psfrag{theta}{$\theta$}
\psfrag{omega}{$\Omega$}
\includegraphics[width=.75\linewidth]{img/ballinset.eps}
\caption{Notation for proof of Proposition~\ref{prop:ballinvoronoi}.}
\label{fig:ballinset}
\end{figure}
Let $\textbf{v}_i$ and $\textbf{v}_j$ be the endpoints of $e_i$ and $e_j$ nearest to $\textbf{z}$. Since $h < d_*/2$, $\textbf{v}_i$ and $\textbf{v}_j$ cannot both lie in $B(\textbf{x}, h)$; without loss of generality assume that $\textbf{v}_i\notin B(\textbf{x}, h)$.
Since $\dist(\textbf{v}_j,L_i) < 2h$,
\begin{equation}\label{eq:thetabound}
\sin\theta < 2h/\vsn{\textbf{z} - \textbf{v}_j}.
\end{equation}
Let $W$ be the sector between $L_i$ and $L_j$ containing $\textbf{x}$. Now $\Omega \subset B(\textbf{v}_j,1)\cap W \subset B(\textbf{z}, 1 + \vsn{\textbf{z}-\textbf{v}_j})\cap W$.
It follows that $\rho(\Omega) \leq (1 + \vsn{\textbf{v}_j - \textbf{z}})\sin\theta $.
Using (\ref{eq:thetabound}) and G\ref{g:ratio},
\[
\frac{1}{\gamma^*} \leq \frac{2h}{\vsn{\textbf{z} - \textbf{v}_j}}(1 + \vsn{\textbf{z} - \textbf{v}_j}) \leq 2h\left(\frac{1}{d_*} + 1 \right)
\]
where the final inequality holds because by G\ref{g:minedge}
$\vsn{\textbf{z} - \textbf{v}_j} \geq \vsn{\textbf{v}_i - \textbf{v}_j} \geq d_*$.
Thus
\begin{equation}\label{eq:hlwrbd}
h > \frac{d_*}{2\gamma^*(1+d_*)}.
\end{equation}
Estimate (\ref{eq:hlwrbd}) also holds in the limiting case where $L_i$ and $L_j$ are parallel: then $\Omega$ must be contained in a strip of width $2h$, which for small $h$ violates the aspect ratio condition G\ref{g:ratio}.
The triangle is the only polygon in which three edges are pairwise adjacent; in any other polygon, a ball intersecting three edges must intersect two non-adjacent ones, a case already handled above.
So it remains to find a suitable $h_*$ so that $B(\textbf{x},h_*)$ cannot intersect all three edges of a triangle.
For a triangle, any ball centered in $\Omega$ that intersects all three edges has radius at least $\rho(\Omega)$.
Since $\rho(\Omega) \geq 1/\gamma^*$ under G\ref{g:ratio}, $B(\textbf{x},\frac{1}{2\gamma^*})$ intersects at most two edges.
Thus $h_* = \frac{d_*}{2\gamma^*(1+d_*)}$ is sufficiently small to satisfy the proposition in all cases.
\qed
\end{proof}
Proposition~\ref{prop:ballinvoronoi} is a useful tool for proving a lower bound on $D(\textbf{x})$, the area of the Voronoi cell of $\textbf{x}$ intersected with $\Omega$.
\begin{proposition}\label{prop:minvoronoiarea}
Under G\ref{g:ratio} and G\ref{g:minedge}, there exists $D_* >0$ such that $D(\textbf{x}) > D_*$.
\end{proposition}
\begin{proof}
Let $h_*$ be the constant in Proposition~\ref{prop:ballinvoronoi}. We consider two cases, based on whether the point $\textbf{x}$ is near any vertex of $\Omega$, as shown in Figure~\ref{fig:sibsproof} (left).
\noindent \underline{Case 1}: There exists $\textbf{v}_i$ such that $\textbf{x} \in B(\textbf{v}_i,h_*/2)$.
\begin{figure}
\centering
\psfrag{x}{$\textbf{x}$}
\psfrag{vi}{$\textbf{v}_i$}
\psfrag{hstar}{$h_*$}
\psfrag{Bi}{$\beta_i$}
\[\begin{array}{ccc}
\includegraphics[width=.3\linewidth]{img/cutCorners.eps} &
\quad\text{ }\quad &
\includegraphics[width=.23\linewidth]{img/nearvertexcase.eps}
\end{array}\]
\caption{The proof of Proposition~\ref{prop:minvoronoiarea} has two cases based on whether $\textbf{x}$ is within $h_*/2$ of some $\textbf{v}_i$ or not. When $\textbf{x}$ is within $h_*/2$ of $\textbf{v}_i$, the shaded sector shown on the right is contained in $V_{P'}(\textbf{x})\cap\Omega$.}
\label{fig:sibsproof}
\end{figure}
Consider the sector of $B(\textbf{x},h_*/2)$ specified by segments which are parallel to the edges of $\Omega$ containing $\textbf{v}_i$, as shown in Figure~\ref{fig:sibsproof} (right). This sector must be contained in $\Omega$ by Proposition~\ref{prop:ballinvoronoi} and in the Voronoi cell of $\textbf{x}$ since $h_* < d_*/2$. Thus by G\ref{g:minangle} (using Proposition~\ref{prop:Grelation}(\ref{gr:getminangle})), $D(\textbf{x}) \geq \beta_* h_*^2/8$.
\noindent \underline{Case 2}: For all $\textbf{v}_i$, $\textbf{x} \notin B(\textbf{v}_i,h_*/2)$.
In this case, $B(\textbf{x}, h_*/4) \cap \Omega \subset V_{P'}(\textbf{x})$. If $B(\textbf{x}, h_*/4)$ intersects zero or one boundary edge of $\Omega$, then $D(\textbf{x}) \geq \pi h_*^2/32$. Otherwise $B(\textbf{x}, h_*/4)$ intersects two adjacent boundary edges. By G\ref{g:minangle}, $D(\textbf{x}) \geq \beta_* h_*^2/32$.
\qed
\end{proof}
General formulas for the gradient of the area of a Voronoi cell are well-known and can be used to bound the gradients of $D(\textbf{x})$ and $D(\textbf{x})\cap C_i$.
\begin{proposition}\label{prop:voronoigrad}
$|\nabla D(\textbf{x})| \leq \pi$ and $|\nabla (D(\textbf{x})\cap C_i)| \leq 1$.
\end{proposition}
\begin{proof}
The gradient of the area of a Voronoi region is known to be
\[
\nabla D(\textbf{x}) = \sum_{j=1}^{n} \frac{\textbf{v}_j - \textbf{x}}{\vsn{\textbf{v}_j - \textbf{x}}} F_j,
\]
where $F_j$ is the length of the segment separating the Voronoi cells of $\textbf{x}$ and $\textbf{v}_j$~\cite{OA91,OBSC00}. Then applying Proposition~\ref{prop:convexfacts} gives
\[
\vsn{\nabla D(\textbf{x})} \leq \sum_{j=1}^{n} F_j \leq |\partial\Omega|\leq \pi.
\]
Similarly,
\[
\nabla (D(\textbf{x})\cap C_i) = \frac{\textbf{v}_i - \textbf{x}}{\vsn{\textbf{v}_i - \textbf{x}}} F_i,
\]
and since $F_i \leq \diam(\Omega)$, $|\nabla (D(\textbf{x})\cap C_i)| \leq 1$.
\qed
\end{proof}
Propositions~\ref{prop:minvoronoiarea} and \ref{prop:voronoigrad} give estimates for the key terms needed in proving (\ref{eq:basisbound}) for the Sibson coordinates $\lambda^{\rm Sibs}$.
\begin{lemma}\label{lm:sibs}
Under G\ref{g:ratio} and G\ref{g:minedge}, (\ref{eq:basisbound}) holds for the Sibson coordinates.
\end{lemma}
\begin{proof}
$\vsn{\nabla \lambda^{\rm Sibs}_i}$ is estimated by applying Propositions~\ref{prop:minvoronoiarea} and \ref{prop:voronoigrad}:
\begin{align*}
\vsn{\nabla \lambda^{\rm Sibs}_i} &\leq \frac{|\nabla (D(\textbf{x})\cap C_i)|}{D(\textbf{x})} + \frac{(D(\textbf{x})\cap C_i) \vsn{\nabla D(\textbf{x})} }{D(\textbf{x})^2}
\leq \frac{|\nabla (D(\textbf{x})\cap C_i)| + |\nabla D(\textbf{x})|}{D(\textbf{x})} \\
&\leq \frac{1 + \pi}{D_*}.
\end{align*}
Integrating this bound over $\Omega$, and using $0 \leq \lambda^{\rm Sibs}_i \leq 1$ for the $L^2$ term, yields (\ref{eq:basisbound}).
\qed
\end{proof}
\begin{corollary}
By Lemmas~\ref{lem:basisbd} and~\ref{lm:sibs}, the $H^1$ interpolant estimate (\ref{eq:iuh1uh2}) holds for the Sibson coordinates under G\ref{g:ratio} and G\ref{g:minedge}.
\end{corollary}
\section{Final Remarks}
\label{sec:conc}
Geometric requirements ensuring optimal interpolation error estimates are essential for guaranteeing the compatibility of polygonal meshes with generalized barycentric interpolation schemes in finite element methods. Moreover, the identification of necessary and unnecessary geometric restrictions provides a tool for comparing various approaches to barycentric interpolation. Specifically, we have demonstrated the necessity of a maximum interior angle restriction for Wachspress coordinates, which was empirically observed in \cite{FHK2006}, and shown that this restriction is unneeded when using Sibson coordinates.
Table \ref{tab:conditions} provides a guideline for how to choose barycentric basis functions given geometric criteria or, conversely, which geometric criteria should be guaranteed given a choice of basis functions. While utilized throughout our analysis, the aspect ratio requirement G\ref{g:ratio} can likely be substantially weakened. Due to a dependence on specific affine transformations, such techniques on triangular domains \cite{BA76,Ja76} (i.e., methods for proving error estimates under the maximum angle condition rather than the minimum angle condition) cannot be naturally extended to polygonal domains. Although aimed at a slightly different setting than the one we have analyzed, challenges in identifying sharp geometric restrictions are apparent from the numerous studies on quadrilateral elements, e.g., \cite{Ja77,ZV95,AD01,MNS08}. A satisfactory generalization of the maximum angle condition to arbitrary polygons is a subject of further investigation.
This paper emphasizes three specific barycentric coordinates (harmonic, Wachspress, and Sibson) but several others have been introduced in the literature. Maximum entropy \cite{Su04}, metric \cite{MLBD02}, and discrete harmonic \cite{PP93} coordinates can all be studied either by specific analysis or by generalizing the arguments given here to wider classes of functions. The mean value coordinates defined by Floater~\cite{F2003} are of particular interest in this regard as they are defined by an explicit formula and appear not to require a maximum angle condition. The formal analysis of these functions, however, is not trivial. Additional generalizations could be considered by dropping certain restrictions on the coordinates, such as non-negativity, or on the mesh elements, such as convexity. Working with non-convex elements, however, would require some non-obvious generalization of the geometric restrictions G1--G5.
\section{Proof of Lemma \ref{lem:wachcex}}
\noindent
\begin{proof}
The explicit formula for the Wachspress weight associated to $\textbf{v}_1$ is
\[w^{\rm Wach}_1(\textbf{x})= B_1A_2A_3A_4 =\epsilon(1-x)(1+x)(1+y) \]
where $\textbf{x}=(x,y)$ is an arbitrary point inside ${\Omega_\epsilon}$. The other weights can be computed similarly, yielding the coordinate function
\[\lambda^{\rm Wach}_1= \frac{w^{\rm Wach}_{1}(\textbf{x})}{\sum_{j=1}^5w^{\rm Wach}_{j}(\textbf{x})} = \frac{\epsilon(1-x)(1+x)(1+y)}{\epsilon^2 (1 - x^2) + 4\epsilon + 2(1 - y)}.\]
The partial derivative with respect to $y$ is
\[\frac{\partial \lambda^{\rm Wach}_1}{\partial y} = \frac{4\epsilon(1-x^2) + 4\epsilon^2(1-x^2) + \epsilon^3(1 - x^2)^2}{(\epsilon^2 (1 - x^2) + 4\epsilon + 2(1 - y))^2}.\]
Define the subregion ${\Omega'_P}\subset{\Omega_\epsilon}$ by
\[{\Omega'_P}=\left\{(x,y)\in{\Omega_\epsilon}:\frac 14\leq x\leq\frac 34,\quad 1\leq y\leq 1+\epsilon\right\}.\]
Observe that $\frac{7}{16}\leq 1 - x^2 \leq \frac{15}{16}$ on ${\Omega'_P}$. Fix $0<\epsilon<1$. We bound the numerator by
\[
4\epsilon(1-x^2) + 4\epsilon^2(1-x^2) + \epsilon^3(1 - x^2)^2 > 4\epsilon\cdot \frac{7}{16} + 4 \epsilon^2 \cdot \frac{7}{16} + \epsilon^3\cdot \frac{49}{256} > \frac{7}{4}\epsilon.
\]
Since $|y-1|<\epsilon$ on ${\Omega'_P}$, we can bound the denominator by
\[
|\epsilon^2 (1 - x^2) + 4\epsilon + 2(1 - y)| \leq |\epsilon^2 (1 - x^2)| + |4\epsilon| + |2(1 - y)| \leq \epsilon^2 + 4\epsilon + 2\epsilon \leq 7\epsilon.
\]
Putting these results together, we have that
\[\left|\frac{\partial \lambda^{\rm Wach}_1}{\partial y}\right|>\frac{\frac{7}{4}\epsilon}{49\epsilon^2} = \frac{1}{28\epsilon }>0.\]
Let $C=\frac {1}{28}$ for ease of notation. Since $|{\Omega'_P}|>\frac 18 \epsilon$,
\[\lim_{\epsilon\rightarrow 0}\int_{\Omega_\epsilon}\left|\frac{\partial \lambda^{\rm Wach}_1}{\partial y}\right|^2\geq \lim_{\epsilon\rightarrow 0}\int_{{\Omega'_P}}\left|\frac{\partial \lambda^{\rm Wach}_1}{\partial y}\right|^2>\lim_{\epsilon\rightarrow 0}\int_{{\Omega'_P}}\frac{C^2}{\epsilon^2}=C^2\lim_{\epsilon\rightarrow 0}\frac{|{\Omega'_P}|}{\epsilon^2}>\frac{C^2}{8}\lim_{\epsilon\rightarrow 0}\frac 1\epsilon=\infty,\]
thereby proving the lemma.
\qed
\end{proof}
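The simplification of the weight sum into $\epsilon^2(1-x^2)+4\epsilon+2(1-y)$ can be double-checked by computing the Wachspress weights directly from signed triangle areas; the sketch below uses assumed sample points (with the clockwise vertex ordering above, each weight is a product of four negative factors and hence positive):

```python
import math
import random

# Cross-check of the algebra above: compute the five Wachspress weights of
# Omega_eps from signed triangle areas and confirm that their sum equals
# eps^2 (1 - x^2) + 4 eps + 2 (1 - y) at sample points.
def tri_area(p, q, r):
    return 0.5 * ((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))

eps = 0.05
V = [(0, 1 + eps), (1, 1), (1, -1), (-1, -1), (-1, 1)]      # v_1, ..., v_5
B = [tri_area(V[i-1], V[i], V[(i+1) % 5]) for i in range(5)]
random.seed(1)
for _ in range(100):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)     # sample points
    A = [tri_area((x, y), V[i], V[(i+1) % 5]) for i in range(5)]
    w = [B[i] * math.prod(A[j] for j in range(5) if j not in (i, (i-1) % 5))
         for i in range(5)]
    assert abs(sum(w) - (eps**2*(1 - x*x) + 4*eps + 2*(1 - y))) < 1e-9
```

Since both sides are polynomials in $(x,y)$, agreement at sample points reflects the identity holding everywhere.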
https://arxiv.org/abs/2105.00247

Extension of tetration to real and complex heights

Abstract: The continuous tetrational function ${^x}r=\tau(r,x)$, the unique solution of equation $\tau(r,x)=r^{\tau(r,x-1)}$ and its differential equation $\tau'(r,x) =q \tau(r,x) \tau'(r,x-1)$, is given explicitly as ${^x}r=\exp_{r}^{\lfloor x \rfloor+1}[\{x\}]_q$, where $x$ is a real variable called height, $r$ is a real constant called base, $\{x\}=x-\lfloor x \rfloor$ is the sawtooth function, $\lfloor x \rfloor$ is the floor function of $x$, and $[\{x\}]_q=(q^{\{x\}}-1)/(q-1)$ is a q-analog of $\{x\}$ with $q=\ln r$, respectively. Though ${^x}r$ is continuous at every point in the real $r-x$ plane, extensions to complex heights and bases have limited domains. The base $r$ can be extended to the complex plane if and only if $x\in \mathbb{Z}$. On the other hand, the height $x$ can be extended to the complex plane at $\Re(x)\notin \mathbb{Z}$. Therefore $r$ and $x$ in ${^x}r$ cannot be complex values simultaneously. Tetrational laws are derived based on the explicit formula of ${^x}r$.

\section{Introduction}
Tetration, i.e., iterated exponentiation, is the fourth hyperoperation after addition, multiplication, and exponentiation \cite{Goodstein1947}. Tetration is defined as
\[^{n}r\colonequals \underset{n}{\underbrace{r^{r^{\udots^{r}}}}},\]
meaning that \(n\) copies of $r$ are combined by exponentiation, evaluated from right to left, or recursively defined as
\[^{n}r\colonequals
\begin{cases}
1 & (n=0)\\
r^{(^{n-1}r)} & (n \geq 1).
\end{cases}\]
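The recursive definition is straightforward to evaluate numerically. A minimal sketch of ours (the function name `tetration` is our choice), which realizes the right-to-left evaluation by iterating \(x\mapsto r^{x}\) starting from 1:

```python
def tetration(r: float, n: int) -> float:
    """Discrete tetration ^n r: returns 1 for n = 0, else r**(^(n-1) r)."""
    result = 1.0
    for _ in range(n):          # apply x -> r**x a total of n times
        result = r ** result
    return result
```

For example, `tetration(2, 4)` evaluates \(2^{2^{2^{2}}}=65536\).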
Obviously \(^{n-1}r=\log_r (^{n}r)\). The parameters \(r\) and \(n\) are referred to as the base and the height, respectively. In this paper, we use the following notations for tetration:
\[^{n}r=\tau(r,n)=\exp_{r}^{n}1,\]
and we write \(\tau(n)\) for simplicity when \(r\) is fixed.\vspace{0.2in}
\noindent
Tetration has been applied in fundamental physics \cite{Scot2006}, the degree of connection in air transportation systems \cite{Sun2017}, signal interpolation \cite{Khetkeeree2019}, data compression \cite{Furuya2019}, etc.
\vspace{0.2in}
\noindent
Tetration is known as a rapidly growing function that may be useful for expressing superexponential growth or huge numbers, but it also shows saturation and oscillation.
\begin{figure}
\includegraphics[scale=0.43]{fig_discrete.pdf}
\centering
\caption{Behaviors of discrete tetration for different \(r\) (left), and cobweb plots (right).}
\label{fig:discrete tetration}
\end{figure}
As shown in Fig. \ref{fig:discrete tetration}, the protean dynamics can be illustrated by cobweb plots. \vspace{0.2in}
\noindent
Euler \cite{Euler1783} proved that the limit of tetration \(^{n}r\), as \(n\) goes to infinity, converges if \(e^{(-e)}\leq r \leq e^{1/e}\). Eisenstein \cite{Eisenstein1844} gave the closed form of \(^{\infty}r\) for complex \(r\), and Corless et al. \cite{Corless1996} expressed it as \(^{\infty}r=W(-\ln r)/(-\ln r)\) by using Lambert's W function. \vspace{0.2in}
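Euler's convergence range is easy to probe numerically: for example, \(r=\sqrt{2}\) lies inside \([e^{-e}, e^{1/e}]\), and iterating \(x\mapsto r^{x}\) settles at the fixed point \(x=r^{x}=2\), in agreement with \(W(-\ln r)/(-\ln r)\). A small sketch of ours (the iteration count is an arbitrary choice):

```python
def infinite_tetration(r: float, iterations: int = 500) -> float:
    """Approximate ^infinity r by iterating x -> r**x starting from x = 1."""
    x = 1.0
    for _ in range(iterations):
        x = r ** x
    return x

# sqrt(2) lies in Euler's convergence range [e^-e, e^(1/e)] ~ [0.066, 1.445]
limit = infinite_tetration(2 ** 0.5)   # approaches 2, since sqrt(2)**2 == 2
```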
\noindent
In contrast to the established extension of the base \(r\) to real or complex values at integer or infinite heights, the extension of the height \(n\) to real or complex values is still under debate. The object of this paper is to rigorously derive the explicit formula of continuous tetration. \vspace{0.2in}
\begin{dfn}
\label{dfn_tau}
Let \(r>0\) be a real constant and \(x>-2\) a real variable. The tetrational function \(\tau (x)\) is a continuous function defined as:
\[\tau(0)\colonequals 1,\hspace{15pt}\tau(x)\colonequals r^{\tau(x-1)}.\]
\end{dfn}
\begin{cly}
Independent of \(r\), every tetrational function \(\tau(x)\) goes through \(\tau(-1)=0\) and \(\tau(0)=1\).
\end{cly}
\begin{cly}
\label{cly_tau}
If a curve segment connecting \(\tau(-1)=0\) and \(\tau(0)=1\) is defined, and is extended to the rest of the domain by exponentiation or logarithm, then the whole curve satisfies Definition~\ref{dfn_tau}.
\end{cly} \vspace{0.2in}
\noindent
Because of Corollary \ref{cly_tau}, \(\tau(x)\) cannot be uniquely determined without an extra requirement; each extra condition yields its own unique solution. Hooshmand \cite{Hooshmand2006} applied a linear segment between \(\tau(-1)=0\) and \(\tau(0)=1\), and proved uniqueness of the function \(\tau(x)=\exp_{r}^{\lfloor x \rfloor+1}\{x\}\), where \(\lfloor x \rfloor\) is the floor function of \(x\) and \(\{x\}=x-\lfloor x \rfloor\) is the sawtooth function. He found that the first derivative exists at the connecting points \(x=n\in\mathbb{N}\) only for the case \(r=e\), and that the second derivative does not exist at the connecting points. Kouznetsov \cite{Kouznetsov2009} gave a numerical solution for \(r>e^{1/e}\) in the complex plane under the requirement that \(\tau(x+iy)\) be holomorphic, and Paulsen and Cowgill \cite{Paulsen2017} proved that such a holomorphic solution is unique.\vspace{0.2in}
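Hooshmand's solution \(\tau(x)=\exp_{r}^{\lfloor x\rfloor+1}\{x\}\) can be sketched and checked directly; the code below (our own, for \(r>1\)) verifies \(\tau(0)=1\), \(\tau(-1)=0\) and the defining recurrence \(\tau(x)=r^{\tau(x-1)}\):

```python
import math

def hooshmand_tau(r: float, x: float) -> float:
    """Hooshmand's tetration: apply exp_r (or log_r, if the count is negative)
    floor(x)+1 times to the sawtooth value {x} = x - floor(x)."""
    n = math.floor(x) + 1
    value = x - math.floor(x)
    for _ in range(abs(n)):
        value = r ** value if n > 0 else math.log(value, r)
    return value
```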
\noindent
In this paper, we analytically derive a unique solution from the minimal requirement of differentiability alone: the delay-differential equation in Lemma \ref{lem_delaydiff}.
\begin{lem}
\label{lem_delaydiff}
Let \(q=\ln{r}\) be a real constant.\\
If a tetrational function is differentiable, the following delay-differential equation holds at every point in \(x>-2\):
\[\frac{d\tau(x)}{dx}=q\tau(x)\frac{d\tau(x-1)}{dx}\]
\end{lem} \vspace{0.2in}
\begin{proof}
Differentiate \(\tau(x)=r^{\tau(x-1)}=e^{q\tau(x-1)}\) in Definition \ref{dfn_tau} with respect to \(x\).
\end{proof}\vspace{0.2in}
\noindent
This paper consists of eight sections. In Section \ref{sec_multiple}, a new function, named the multi-tetrational function, is defined to express the restriction placed by the delay-differential equation. In Section \ref{sec_Taylor}, the tetrational function and multi-tetrational function are expressed as Taylor series with coefficients characteristic of these functions. In Section \ref{sec_trans}, based on the general forms of the coefficients, the Taylor series are transformed into new expressions. In Section \ref{sec_explicit}, explicit formulae of the tetrational and multi-tetrational functions, the main results of this paper, are derived. In Section \ref{sec_complex}, analytical properties as well as the extension to complex heights and bases are discussed. In Section \ref{sec_rule}, calculation rules of tetration are presented. Concluding remarks are given in Section \ref{sec_summary}.
\section{Multi-tetrational function}
\label{sec_multiple}
In this section we define a key function, and by using it, we transform the delay-differential equation into an ordinary differential equation.\vspace{0.2in}
\noindent
Let us consider a continuous function \(\mu(x)\) which goes through the following discrete values:
\[\mu(0)=\tau(0)=1\]
\[\mu(1)=\tau(1)\tau(0)=r\]
\[\mu(2)=\tau(2)\tau(1)\tau(0)=r^{r}r\]
\[\vdots\]
\[\mu(n)=\tau(n)\mu(n-1)\]
It is reasonable to define \(\mu(x)\) by replacing \(n\) with \(x\).\vspace{0.2in}
\noindent
\begin{dfn}
\label{dfn_mu}
A continuous function \(\mu(x)\) is called a multi-tetrational function if it satisfies the following relations:
\[\mu(0)\colonequals\tau(0),\hspace{15pt}\mu(x)\colonequals\tau(x)\mu(x-1)\]
\end{dfn}
\begin{lem}
\label{lem_mu_value}
The equation \(\mu(-1)=\mu(0)=1\) holds independent of \(r\).
\end{lem}
\begin{proof}
From Definition \ref{dfn_tau}, we get \(\mu(0)=\tau(0)=1\). Assigning \(x=0\) to the second equation in Definition~\ref{dfn_mu} gives \(\mu(-1)=\mu(0)/\tau(0)=1\).
\end{proof}\vspace{0.2in}
\noindent
The delay-differential equation in Lemma \ref{lem_delaydiff} is transformed into an ordinary differential equation involving \(\mu(x)\), as in Theorem \ref{thm_dif_tau_mu}, which places a strong restriction on both \(\tau(x)\) and \(\mu(x)\).\vspace{0.2in}
\begin{thm}
\label{thm_dif_tau_mu}
Let \(q=\ln{r}\) be a real constant.\\
Let \(\omega\in\mathbb{C}\) be a \(q\)-dependent constant. \\
The delay-differential equation in Lemma \ref{lem_delaydiff} is equivalent to the differential equation:
\[\frac{d\tau(x)}{dx}=\omega q^{x}\mu(x)\]
\end{thm}
\begin{proof}
By assigning integer heights \(x=n\in\mathbb{N}\) to the equation in Lemma \ref{lem_delaydiff}, we get:
\[\tau'(0)=q\tau(0)\tau'(-1)\]
\[\tau'(1)=q\tau(1)\tau'(0)=q^{2}\tau(1)\tau(0)\tau'(-1)\]
\[\tau'(2)=q\tau(2)\tau'(1)=q^{3}\tau(2)\tau(1)\tau(0)\tau'(-1)\]
\[\vdots\]
\[\tau'(n)=q^{n+1}\mu(n)\tau'(-1).\]
By defining \(\omega\colonequals{q}\tau'(-1)\), we get
\[\tau'(n)={\omega}q^{n}\mu(n).\]
By Definitions \ref{dfn_tau} and \ref{dfn_mu}, the continuous functions \(\tau(x)\), \(\mu(x)\) and \(q^{x}\) go through all of the discrete points \(\tau(n)\), \(\mu(n)\) and \(q^{n}\), respectively. Therefore the equation above is extended to real heights:
\[\tau'(x)={\omega}q^{x}\mu(x).\]
Conversely, if \(\tau'(x)={\omega}q^{x}\mu(x)\) and \(\mu(x)=\tau(x)\mu(x-1)\) are given, then \[\frac{\tau'(x)}{\tau'(x-1)}=\frac{{\omega}q^{x}\mu(x)}{{\omega}q^{x-1}\mu(x-1)}=\frac{q\mu(x)}{\mu(x-1)}=q\tau(x).\]
Therefore the delay-differential equation \(\tau'(x)=q\tau(x)\tau'(x-1)\) is reproduced.\\
\end{proof}
\section{Taylor series}
\label{sec_Taylor}
In this section, we will find general forms of the coefficients in Taylor series of the tetrational function and multi-tetrational function. \vspace{0.2in}
\noindent
First we express \(\tau(x)\) and \(\mu(x)\) as Taylor series.
\begin{dfn}
\label{dfn_taylor}
Let \(a_n, b_n, c_n\) and \(d_n\) be real Taylor coefficients.
Taylor series of tetrational and multi-tetrational functions near the points \(x=0\) or \(x=-1\) are expressed as:
\[\tau(x)\colonequals\sum_{n=0}^{\infty}\frac{1}{n!}{a_n}{x^{n}},\hspace{15pt}\tau(x-1)\colonequals\sum_{n=0}^{\infty}\frac{1}{n!}{b_n}{x^{n}},\]
\[\mu(x)\colonequals\sum_{n=0}^{\infty}\frac{1}{n!}{c_n}{x^{n}},\hspace{15pt}\mu(x-1)\colonequals\sum_{n=0}^{\infty}\frac{1}{n!}{d_n}{x^{n}},\]
\end{dfn}
\begin{cly}
The following coefficients are independent of \(r\):
\[a_0=1,\hspace{15pt}b_0=0,\hspace{15pt}c_0=1,\hspace{15pt}d_0=1\]
\end{cly}
\begin{proof}
Corollary \ref{cly_tau} gives \(a_0=\tau(0)=1\) and \(b_0=\tau(-1)=0\). Lemma \ref{lem_mu_value} gives \(c_0=\mu(0)=1\), and \(d_0=\mu(-1)=1\).
\end{proof}\vspace{0.2in}
\noindent
Next we find the relations among the coefficients \(a_n, b_n, c_n\) and \(d_n\).
\begin{lem}
\label{lem_coefficient_abcd}
Let \(s=\ln{q}\) be a real or complex constant depending on \(q\), taken as the principal value for \(q<0\).\\
Let \(\binom{n}{k}\) be the binomial coefficient.\\
The Taylor coefficients \(a_n\), \(b_n\), \(c_n\), and \(d_n\) for \(n\geq1\) are related to each other in the following ways:
\begin{eqnarray}
a_n&=&\omega\sum_{k=1}^{n}\binom{n-1}{k-1}c_{k-1}s^{n-k},\\
qb_n&=&\omega\sum_{k=1}^{n}\binom{n-1}{k-1}d_{k-1}s^{n-k},\\
c_n&=&\sum_{k=0}^{n}\binom{n}{k}a_{n-k}d_{k}.
\end{eqnarray}
\end{lem}
\begin{proof}
Repeated differentiation of the equation in Theorem \ref{thm_dif_tau_mu} and assigning \(x=0\) and \(x=-1\) give relations (1) and (2). Relation (3) is obtained by assigning \(x=0\) to the equation in Definition \ref{dfn_mu} after repeated differentiation.
\end{proof}
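Relation (3) is the general Leibniz rule applied at \(x=0\) to the product \(\mu(x)=\tau(x)\mu(x-1)\): the Taylor coefficients of a product are the binomial convolution of the factors' coefficients. A generic numeric sketch (the series and names are ours, not from the paper):

```python
from math import comb

def product_coeffs(f, g):
    """Taylor coefficients (derivatives at 0) of f*g from those of f and g:
    (fg)^(n)(0) = sum_k C(n,k) f^(n-k)(0) g^(k)(0)  (general Leibniz rule)."""
    return [sum(comb(n, k) * f[n - k] * g[k] for k in range(n + 1))
            for n in range(min(len(f), len(g)))]

# check against exp(x)*exp(2x) = exp(3x): derivatives at 0 are 1, 2^n and 3^n
f = [1, 1, 1, 1, 1]             # exp(x)
g = [2 ** n for n in range(5)]  # exp(2x)
h = product_coeffs(f, g)        # should be 3^n
```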
\noindent
From Lemma \ref{lem_delaydiff}, another relationship between \(a_{n}\) and \(b_{n}\), similar to those in Lemma \ref{lem_coefficient_abcd}, could be derived; it already follows, however, from the equations in Lemma \ref{lem_coefficient_abcd}, so we omit it.\vspace{0.2in}
\noindent
Finally we show that the general formulae of \(a_n, b_n, c_n\) and \(d_n\) share a common structure built from the Stirling numbers of the second kind \(\stirling{m}{n}\) and constants \(A_{n}\) or \(B_{n}\), e.g.,
\[a_{0}=A_{0},\]
\[a_{1}=A_{0}{\omega},\]
\[a_{2}=A_{0}s{\omega}+A_{1}{\omega}^{2},\]
\[a_{3}=A_{0}s^{2}{\omega}+3A_{1}s{\omega}^{2}+A_{2}{\omega}^{3},\]
\[a_{4}=A_{0}s^{3}{\omega}+7A_{1}s^{2}{\omega}^{2}+6A_{2}s{\omega}^{3}+A_{3}{\omega}^{4}.\]
Generally,
\[a_{0}=A_{0},\hspace{10pt}a_{n}=\sum_{k=1}^{n}\stirling{n}{k}A_{k}s^{n-k}{\omega}^{k}\hspace{10pt}(n\geq{1}).\]
The proof is given in Theorem \ref{thm_coefficient_AB}.
\begin{thm}
\label{thm_coefficient_AB}
Let \(A_{n}\) and \(B_{n}\) be real constants satisfying:
\[A_{0}\colonequals 1,\hspace{10pt}A_{1}\colonequals 1,\hspace{10pt}B_{0}\colonequals 0,\hspace{10pt}B_{1}\colonequals 1,\]
\[A_{n}\colonequals\sum_{k=1}^{n}\binom{n-1}{k-1}A_{n-k}B_{k}\hspace{15pt}(n\geq 2).\]
Let \(\stirling{n}{m}\) be the Stirling numbers of the second kind.\\
Then the coefficients \(a_n\), \(b_n\), \(c_n\), and \(d_n\) for \(n\geq 1\) given by the following formulae satisfy the relations in Lemma \ref{lem_coefficient_abcd}:
\[a_{n}=\sum_{k=1}^{n}\stirling{n}{k}A_{k}s^{n-k}\omega^{k},\hspace{15pt}qb_{n}=\sum_{k=1}^{n}\stirling{n}{k}B_{k}s^{n-k}\omega^{k},\]
\[c_{n}=\sum_{k=1}^{n}\stirling{n}{k}A_{k+1}s^{n-k}\omega^{k},\hspace{10pt}d_{n}=\sum_{k=1}^{n}\stirling{n}{k}B_{k+1}s^{n-k}\omega^{k}.\]
\end{thm}
\begin{proof}
We shall prove this by induction.\\
For \(n=1\), relations (1), (2) and (3) of Lemma \ref{lem_coefficient_abcd} read \(a_{1}=\omega\), \(qb_{1}=\omega\) and \(c_{1}=a_{1}d_{0}+a_{0}d_{1}\), respectively.
Clearly, the formulae \(a_{1}=A_{1}\omega=\omega\) and \(qb_{1}=B_{1}\omega=\omega\) satisfy the first and second relations. Relation (3) of Lemma \ref{lem_coefficient_abcd} also holds, since the left-hand side is \(c_{1}=A_{2}\omega\) and the right-hand side is \(a_{1}d_{0}+a_{0}d_{1}=(A_{1}B_{1}+A_{0}B_{2})\omega=A_{2}\omega\).\vspace{0.1in}
\noindent
Then suppose the given formulae hold for all \(n\leq k\). \\
By using \(c_{n}\) \((n\leq{k})\) and the following relationship \cite{Abramowitz1964}\cite{Graham1994}: \[\stirling{n+1}{k+1}=\sum_{j=k}^{n}\binom{n}{j}\stirling{j}{k},\] relation (1) of Lemma \ref{lem_coefficient_abcd} for \(n=k+1\) is expressed as
\[a_{k+1}=\omega\sum_{j=1}^{k+1}\binom{k}{j-1}c_{j-1}s^{k+1-j}\]
\[={\omega}\left[\binom{k}{0}c_{0}s^{k}+\binom{k}{1}c_{1}s^{k-1}+\cdots+\binom{k}{k}c_{k}\right]\]
\[=\binom{k}{0}\stirling{0}{0}A_{1}s^{k}{\omega}+\binom{k}{1}\stirling{1}{1}A_{2}{\omega}^{2}s^{k-1}+\cdots+\binom{k}{k}\left(\sum_{j=1}^{k}\stirling{k}{j}A_{j+1}s^{k-j}\omega^{j+1}\right)\]
\[=\binom{k}{0}\stirling{0}{0}A_{1}s^{k}{\omega}+\left[\binom{k}{1}\stirling{1}{1}+\binom{k}{2}\stirling{2}{1}\right]A_{2}s^{k-1}{\omega}^{2}+\cdots+\binom{k}{k}\stirling{k}{k}A_{k+1}{\omega}^{k+1}\]
\[=\stirling{k+1}{1}A_{1}s^{k}{\omega}+\stirling{k+1}{2}A_{2}s^{k-1}{\omega}^2+\cdots+\stirling{k+1}{k+1}A_{k+1}{\omega}^{k+1}\]
Therefore, if the relation holds for \(n=k\), it holds for \(n=k+1\).
It follows that the first relation is generally satisfied by the given formulae. \\
Similarly, the proof for relation (2) of Lemma \ref{lem_coefficient_abcd} is given by replacing \(a_{n}\) and \(c_{n}\) with \(qb_{n}\) and \(d_{n}\), respectively; again, if the relation holds for \(n=k\), it holds for \(n=k+1\). \\
Therefore relations (1) and (2) of Lemma \ref{lem_coefficient_abcd} are generally satisfied by the given expressions.
Next we check that the given formulae of \(a_{n}\), \(c_{n}\) and \(d_{n}\) satisfy relation (3) of Lemma \ref{lem_coefficient_abcd} for any \(k\):
\[c_{k}=\sum_{j=0}^{k}\binom{k}{j}a_{k-j}d_{j}\]
\[=\binom{k}{0}a_{k}d_{0}+\binom{k}{1}a_{k-1}d_{1}+\cdots+\binom{k}{k}a_{0}d_{k}\]
By replacing each \(a_{n}\) and \(d_{n}\) by the given formulae, we have
\[c_{k}=\binom{k}{0}\left[\stirling{k}{1}A_{1}s^{k-1}{\omega}+\cdots+\stirling{k}{k}A_{k}s^{0}{\omega}^{k}\right]B_{1}\]
\[+\binom{k}{1}\left[\stirling{k-1}{1}A_{1}s^{k-2}{\omega}+\cdots+\stirling{k-1}{k-1}A_{k-1}s^{0}{\omega}^{k-1}\right]B_{2}\omega+\cdots\]
\[+\binom{k}{k}A_{0}\left[\stirling{k}{1}B_{2}s^{k-1}{\omega}+\cdots+\stirling{k}{k}B_{k+1}s^{0}{\omega}^{k}\right].\]
By rearrangement, we have
\[c_{k}=\left[\binom{k}{0}\stirling{k}{1}A_{1}B_{1}+\binom{k}{k}\stirling{k}{1}A_{0}B_{2}\right]s^{k-1}\omega\]
\[+\left[\binom{k}{0}\stirling{k}{2}A_{2}B_{1}+\sum_{h=0}^{k-2}\binom{k}{1+h}\stirling{k-1-h}{1}\stirling{1+h}{1}A_{1}B_{2}+\binom{k}{k}\stirling{k}{2}A_{0}B_{3}\right]s^{k-2}{\omega}^{2}\]
\[+\cdots\]
\[+\left[\binom{k}{0}\stirling{k}{k}A_{k}B_{1}+\binom{k}{1}\stirling{k-1}{k-1}A_{k-1}B_{2}+\cdots+\binom{k}{k}\stirling{k}{k}A_{0}B_{k+1}\right]{\omega}^{k}.\]
By using the following relation for \(j\leq m\leq n\) \cite{Abramowitz1964}\cite{Graham1994}:
\[\stirling{n}{m}\binom{m}{j}=\sum_{k=0}^{n-m}\binom{n}{j+k}\stirling{n-j-k}{m-j}\stirling{j+k}{j},\]
we have
\[c_{k}=\stirling{k}{1}\left[\binom{1}{0}A_{1}B_{1}+\binom{1}{1}A_{0}B_{2}\right]s^{k-1}\omega\]
\[+\stirling{k}{2}\left[\binom{2}{0}A_{2}B_{1}+\binom{2}{1}A_{1}B_{2}+\binom{2}{2}A_{0}B_{3}\right]s^{k-2}{\omega}^{2}\]
\[+\cdots\]
\[+\stirling{k}{k}\left[\binom{k}{0}A_{k}B_{1}+\binom{k}{1}A_{k-1}B_{2}+\cdots+\binom{k}{k}A_{0}B_{k+1}\right]{\omega}^{k}.\]
By using the relationship \(A_{n}=\sum_{k=1}^{n}\binom{n-1}{k-1}A_{n-k}B_{k}\), we finally get
\[c_{k}=\stirling{k}{1}A_{2}s^{k-1}{\omega}+\stirling{k}{2}A_{3}s^{k-2}{\omega}^{2}+\cdots+\stirling{k}{k-1}A_{k}s{\omega}^{k-1}+\stirling{k}{k}A_{k+1}{\omega}^{k},\]
which is consistent with the expression of \(c_{k}\) given above.\\
Therefore relation (3) of Lemma \ref{lem_coefficient_abcd} is generally satisfied by the given formulae.\\
\end{proof}\vspace{0.2in}
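The statement of Theorem \ref{thm_coefficient_AB} can be spot-checked numerically. The sketch below (ours; the values of \(B_n\), \(s\) and \(\omega\) are arbitrary stand-ins, and we use the recurrence \(A_n=\sum_k\binom{n-1}{k-1}A_{n-k}B_k\) that the inductive proof employs) builds \(A_n\), forms \(a_n\), \(c_n\), \(d_n\) from the stated formulae, and tests relations (1) and (3) of Lemma \ref{lem_coefficient_abcd}:

```python
from math import comb

def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

B = [0, 1, 0.4, -0.2, 0.7, 0.1, -0.5]   # B_0 = 0, B_1 = 1, the rest arbitrary
A = [1, 1]
for n in range(2, len(B)):              # A_n = sum_k C(n-1,k-1) A_{n-k} B_k
    A.append(sum(comb(n - 1, k - 1) * A[n - k] * B[k] for k in range(1, n + 1)))

s, w = 0.7, 1.3                         # numeric stand-ins for s = ln q and omega
a = lambda n: A[0] if n == 0 else sum(stirling2(n, k) * A[k] * s**(n-k) * w**k for k in range(1, n + 1))
c = lambda n: A[1] if n == 0 else sum(stirling2(n, k) * A[k+1] * s**(n-k) * w**k for k in range(1, n + 1))
d = lambda n: B[1] if n == 0 else sum(stirling2(n, k) * B[k+1] * s**(n-k) * w**k for k in range(1, n + 1))

relations_hold = all(
    # relation (1): a_n = w * sum_k C(n-1,k-1) c_{k-1} s^(n-k)
    abs(a(n) - w * sum(comb(n-1, k-1) * c(k-1) * s**(n-k) for k in range(1, n + 1))) < 1e-9
    # relation (3): c_n = sum_k C(n,k) a_{n-k} d_k
    and abs(c(n) - sum(comb(n, k) * a(n-k) * d(k) for k in range(n + 1))) < 1e-9
    for n in range(1, 6)
)
```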
\noindent
Now we have the general form of the Taylor series of the tetrational and multi-tetrational functions near the points \(x=0\) or \(x=-1\):
\[\tau(x)=A_{0}+A_{1}{\omega}x+\frac{1}{2}(A_{1}s{\omega}+A_{2}{\omega}^{2})x^{2}+\frac{1}{3!}(A_{1}s^{2}{\omega}+3A_{2}s{\omega}^2+A_{3}{\omega}^{3})x^{3}+\cdots,\]
\[\mu(x)=A_{1}+A_{2}{\omega}x+\frac{1}{2}(A_{2}s{\omega}+A_{3}{\omega}^{2})x^{2}+\frac{1}{3!}(A_{2}s^{2}{\omega}+3A_{3}s{\omega}^2+A_{4}{\omega}^{3})x^{3}+\cdots,\]
\[\tau(x-1)=\frac{B_{0}}{q}+\frac{B_{1}{\omega}}{q}x+\frac{1}{2}\frac{B_{1}s{\omega}+B_{2}{\omega}^{2}}{q}x^{2}+\frac{1}{3!}\frac{B_{1}s^{2}{\omega}+3B_{2}s{\omega}^2+B_{3}{\omega}^{3}}{q}x^{3}+\cdots,\]
\[\mu(x-1)=B_{1}+B_{2}{\omega}x+\frac{1}{2}(B_{2}s{\omega}+B_{3}{\omega}^{2})x^{2}+\frac{1}{3!}(B_{2}s^{2}{\omega}+3B_{3}s{\omega}^2+B_{4}{\omega}^{3})x^{3}+\cdots.\]
\section{Tetrational series}
\label{sec_trans}
In this section, the Taylor series of the tetrational functions and multi-tetrational functions are transformed into new expressions based on the property of Stirling numbers of the second kind.\vspace{0.2in}
\noindent
First we define a key function to express new series.
\begin{dfn}
\label{dfn_phi}
Let \(\varphi(x)\) be a function defined as
\[\varphi(x)\colonequals\frac{{\omega}(q^{x}-1)}{s}.\]
\end{dfn}
\begin{cly}
\label{cly_diff_Phi}
The following equation holds:
\[\frac{d\varphi(x)}{dx}={\omega}q^{x}.\]
\end{cly}\vspace{0.2in}
\noindent
By using \(\varphi(x)\), the Taylor series are then simplified as in Theorem \ref{thm_tetrational_series}.
\begin{thm}
\label{thm_tetrational_series}
If the Taylor series of the tetrational and multi-tetrational functions near the points \(x=0\) or \(x=-1\) are absolutely convergent, then they can be expressed as power series in \(\varphi(x)\), named tetrational series:
\[\tau(x)=\sum_{n=0}^{\infty}\frac{1}{n!}A_{n}\varphi^{n}(x),\hspace{15pt}\tau(x-1)=\frac{1}{q}\sum_{n=0}^{\infty}\frac{1}{n!}B_{n}\varphi^{n}(x),\]
\[\mu(x)=\sum_{n=0}^{\infty}\frac{1}{n!}A_{n+1}\varphi^{n}(x),\hspace{15pt}\mu(x-1)=\sum_{n=0}^{\infty}\frac{1}{n!}B_{n+1}\varphi^{n}(x).\]
\end{thm}
\begin{proof}
Since the Taylor series of the tetrational function is absolutely convergent, it can be rearranged as follows:
\[\tau(x)=A_{0}+A_{1}{\omega}x+\frac{1}{2}(A_{1}s{\omega}+A_{2}{\omega}^{2})x^{2}+\frac{1}{3!}(A_{1}s^{2}{\omega}+3A_{2}s{\omega}^2+A_{3}{\omega}^{3})x^{3}+\cdots\]
\[=A_{0}+A_{1}{\omega}\left(x+\frac{1}{2}sx^{2}+\frac{1}{3!}s^{2}x^{3}+\cdots\right)+A_{2}{\omega}^{2}\left(\frac{1}{2}x^2+\frac{1}{3!}3sx^{3}+\cdots\right)+\cdots\]
\[=A_{0}+A_{1}\frac{\omega}{s}\left(sx+\frac{1}{2}s^{2}x^{2}+\frac{1}{3!}s^{3}x^{3}+\cdots\right)+A_{2}\frac{\omega^{2}}{s^{2}}\left(\frac{1}{2}s^{2}x^2+\frac{1}{3!}3s^{3}x^{3}+\cdots\right)+\cdots.\]
By using the exponential generating function of the Stirling numbers of the second kind \cite{Abramowitz1964}\cite{Graham1994}:
\[\frac{1}{k!}(e^{x}-1)^{k}=\sum_{n=k}^{\infty}\stirling{n}{k}\frac{x^{n}}{n!},\]
we have
\[\tau(x)=A_{0}+A_{1}\left[\frac{{\omega}(e^{sx}-1)}{s}\right]+\frac{1}{2}A_{2}\left[\frac{{\omega}(e^{sx}-1)}{s}\right]^{2}+\frac{1}{3!}A_{3}\left[\frac{{\omega}(e^{sx}-1)}{s}\right]^{3}+\cdots.\]
Since \(s=\ln{q}\), we can replace \(e^{sx}\) with \(q^{x}\). The Taylor series
are expressed as the power series of \(\varphi(x).\)
Similarly, we can transform \(\tau(x-1)\), \(\mu(x)\) and \(\mu(x-1)\) into the tetrational series:
\begin{align*}
\tau(x)&=A_{0}+A_{1}\varphi(x)+\frac{1}{2}A_{2}\varphi^{2}(x)+\frac{1}{3!}A_{3}\varphi^{3}(x)+\cdots,\\
\mu(x)&=A_{1}+A_{2}\varphi(x)+\frac{1}{2}A_{3}\varphi^{2}(x)+\frac{1}{3!}A_{4}\varphi^{3}(x)+\cdots,\\
\tau(x-1)&=\frac{B_{0}}{q}+\frac{B_{1}}{q}\varphi(x)+\frac{1}{2}\frac{B_{2}}{q}\varphi^{2}(x)+\frac{1}{3!}\frac{B_{3}}{q}\varphi^{3}(x)+\cdots,\\
\mu(x-1)&=B_{1}+B_{2}\varphi(x)+\frac{1}{2}B_{3}\varphi^{2}(x)+\frac{1}{3!}B_{4}\varphi^{3}(x)+\cdots.
\end{align*}
\end{proof}\vspace{0.2in}
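The exponential generating function identity used above can be verified with exact rational arithmetic by expanding \((e^{x}-1)^{k}\) as a truncated power series. A small sketch of ours (the truncation order is arbitrary):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order of the power series

def series_mul(f, g):
    """Cauchy product of two truncated power series (coefficient lists)."""
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

em1 = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]  # e^x - 1

stirling_from_egf = {}
power = [Fraction(1)] + [Fraction(0)] * N       # (e^x - 1)^0 = 1
for k in range(1, 5):
    power = series_mul(power, em1)              # (e^x - 1)^k
    # n!/k! times the coefficient of x^n should equal S(n, k)
    stirling_from_egf[k] = [power[n] * factorial(n) / factorial(k) for n in range(N + 1)]
```

For instance, `stirling_from_egf[2][4]` recovers \(\stirling{4}{2}=7\).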
\begin{cly}
\label{cly_dtau_dphi_mu}
The following equations hold near \(x=0\):
\[\frac{\partial\tau(x)}{\partial\varphi}=\mu(x),\hspace{10pt}\frac{\partial\tau(x-1)}{\partial\varphi}=q^{-1}\mu(x-1).\]
\end{cly}\vspace{0.2in}
\noindent
We check the consistency of the formulae as shown in Lemma \ref{lem_consistency}.
\begin{lem}
\label{lem_consistency}
The formulae in Theorem \ref{thm_tetrational_series} satisfy the differential equation of Theorem \ref{thm_dif_tau_mu}:
\[\frac{d\tau(x)}{dx}={\omega}q^{x}\mu(x).\]
\end{lem}
\begin{proof}
By Corollaries \ref{cly_diff_Phi} and \ref{cly_dtau_dphi_mu} and the chain rule,
\[\frac{d\tau(x)}{dx}=\frac{\partial\tau(x)}{\partial\varphi}\frac{\partial\varphi(x)}{\partial{x}}={\omega}q^{x}\mu(x),\]
\[\frac{d\tau(x-1)}{dx}=\frac{\partial\tau(x-1)}{\partial\varphi}\frac{\partial\varphi(x)}{\partial{x}}={\omega}q^{x-1}\mu(x-1).\]
\end{proof}\vspace{0.2in}
\noindent
We further describe the general case of tetrational series with an integer shift \(n\), as in Lemma \ref{lem_integer_shift}. The formulae above are the special case \(n=0\).
\begin{lem}
\label{lem_integer_shift}
Let \(0\leq{x}<1\) be a real height.\\
Let \(n\) and \(m\) be an integer and a natural number, respectively.\\
Let \(A_{m}^{[n]}\) and \(B_{m}^{[n]}\) be \(n\)-dependent real constants.\\
The tetrational and multi-tetrational functions, having horizontal translation by integers \(n\) or \(n-1\), are expressed as:
\begin{align*}
\tau(x+n)&=q^{n}\sum_{k=0}^{\infty}\frac{x^{k}}{k!}\sum_{j=1}^k\stirling{k}{j}A_{j}^{[n]}s^{k-j}{\omega}^{j}=q^{n}\sum_{k=0}^{\infty}\frac{1}{k!}A_{k}^{[n]}{\varphi}^{k}(x),\\
\mu(x+n)&=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}\sum_{j=1}^k\stirling{k}{j}A_{j+1}^{[n]}s^{k-j}{\omega}^{j}=\sum_{k=0}^{\infty}\frac{1}{k!}A_{k+1}^{[n]}{\varphi}^{k}(x),\\
\tau(x+n-1)&=q^{n-1}\sum_{k=0}^{\infty}\frac{x^{k}}{k!}\sum_{j=1}^k\stirling{k}{j}B_{j}^{[n]}s^{k-j}{\omega}^{j}=q^{n-1}\sum_{k=0}^{\infty}\frac{1}{k!}B_{k}^{[n]}{\varphi}^{k}(x),\\
\mu(x+n-1)&=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}\sum_{j=1}^k\stirling{k}{j}B_{j+1}^{[n]}s^{k-j}{\omega}^{j}=\sum_{k=0}^{\infty}\frac{1}{k!}B_{k+1}^{[n]}{\varphi}^{k}(x).
\end{align*}
The coefficients satisfy the relationships:
\[A_{0}^{[n]}=\tau(n),\hspace{15pt}B_{0}^{[n]}=\tau(n-1),\]
\[A_m^{[n]}=q^{n}\sum_{k=1}^{m}\binom{m-1}{k-1}A_{m-k}^{[n]}B_{k}^{[n]}\hspace{10pt}(m\geq{1}).\]
Generally, \(\tau(x)\) and \(\mu(x)\) translated horizontally by \(n\) satisfy:
\[\frac{\partial \tau(x+n)}{\partial\varphi}=q^{n}\mu(x+n).\]
\end{lem}
\begin{proof}
Obviously the tetrational series satisfy \(\partial\tau(x+n)/\partial\varphi=q^{n}\mu(x+n)\). It follows that these series satisfy the differential equation \(\tau'(x+n)={\omega}q^{x+n}\mu(x+n)\), similarly to Lemma \ref{lem_consistency}. The tetrational series and the Taylor series are converted into each other with the exponential generating function of the Stirling numbers of the second kind, as in Theorem \ref{thm_tetrational_series}.\\
\noindent
The relationships of the coefficients are given by assigning \(x=0\) to the repeated partial derivatives with respect to \(\varphi\) of \(\mu(x+n)=\tau(x+n)\mu(x+n-1)\).
\end{proof}\vspace{0.2in}
\section{Explicit formulae}
\label{sec_explicit}
In this section, we derive the explicit formulae of the tetrational and multi-tetrational functions by determining the coefficients \(A_{n}\) and \(B_{n}\).\vspace{0.2in}
\begin{lem}
\label{lem_mu_piecewise}
Let \(0\leq{x}<1\).
The following equations hold:
\[\mu(x-1)=1,\]
\[\mu(x)=\tau(x),\]
\[\mu(x+n)=\prod_{k=0}^{n}\tau(x+k).\]
\end{lem}
\begin{proof}
We defined \(\mu(x)\) in Definition \ref{dfn_mu} as a continuous function satisfying \[\mu(x)=\tau(x)\mu(x-1),\]
which goes through the following discrete points:
\[\mu(0)=\tau(0)\]
\[\mu(1)=\tau(1)\tau(0)\]
\[\mu(2)=\tau(2)\tau(1)\tau(0)\]
\[\vdots\]
\[\mu(n)=\prod_{k=0}^{n}\tau(k)=\tau(n)\mu(n-1).\]
Similarly, \(\mu(x)\) also goes through the following discrete points shifted by \(0\leq{x}<1\), since the continuous functions \(\mu(x)\) and \(\tau(x)\) go through \(\mu(n)\) and \(\tau(n)\), respectively:
\[\mu(x)=\tau(x)\]
\[\mu(x+1)=\tau(x+1)\tau(x)\]
\[\mu(x+2)=\tau(x+2)\tau(x+1)\tau(x)\]
\[\vdots\]
\[\mu(x+n)=\prod_{k=0}^{n}\tau(x+k)=\tau(x+n)\mu(x+n-1)\]
The relationship \(\mu(x+n)=\tau(x+n)\mu(x+n-1)\) is consistent with Definition \ref{dfn_mu}.\\
\noindent
Since \(\mu(x-1)=\mu(x)/\tau(x)\) by the definition and \(\mu(x)=\tau(x)\) as given above, we have
\[\mu(x-1)=\frac{\tau(x)}{\tau(x)}=1.\]
\end{proof}
\begin{cly}
The tetrational function \(\tau(x)\) and the multi-tetrational function \(\mu(x)\) are piecewise connected functions.
\end{cly}
\begin{proof}
Obviously from Lemma \ref{lem_mu_piecewise}, the multi-tetrational function \(\mu(x)\) is a piecewise connected function. Since the tetrational function satisfies \(\tau'(x)=\omega{q}^x\mu(x)\), \(\tau(x)\) is also a piecewise connected function.
\end{proof}
\begin{lem}
\label{lem_AB_value}
The coefficients \(A_{n}\) and \(B_{n}\) are determined as
\[A_{n}=1,\]
\[B_{n\neq{1}}=0,\hspace{15pt}B_{1}=1.\]
\end{lem}
\begin{proof}
From Theorem \ref{thm_tetrational_series},
\[\tau(x)=\sum_{n=0}^{\infty}\frac{1}{n!}A_{n}\varphi^{n}(x),\hspace{10pt}\mu(x)=\sum_{n=0}^{\infty}\frac{1}{n!}A_{n+1}\varphi^{n}(x).\]
Since \(\partial{\tau(x)}/\partial{\varphi}=\mu(x)\) from Corollary \ref{cly_dtau_dphi_mu} and \(\tau(x)=\mu(x)\) from Lemma \ref{lem_mu_piecewise}, we have
\[1=A_{0}=A_{1}=A_{2}=\cdots.\]
From the coefficient relationship \(A_{n}=\sum_{k=1}^{n}\binom{n-1}{k-1}A_{n-k}B_{k}\), we get \(B_{n\neq{1}}=0\) and \(B_{1}=1\).\\
\end{proof}
\noindent
Similarly, we can obtain the values of \(A_{m}^{[n]}\) and \(B_{m}^{[n]}\) given in Lemma \ref{lem_integer_shift}.\vspace{0.2in}
\begin{lem}
\label{lem_omega_value}
The constant \(\omega\) is determined as
\[\omega=\frac{sq}{q-1}.\]
\end{lem}
\begin{proof}
Since the values of \(B_{n}\) were determined in Lemma \ref{lem_AB_value}, together with the expression for \(\varphi(x)\) in Definition \ref{dfn_phi} we can express \(\tau(x-1)\) for \(0\leq{x}<1\) as
\[\tau(x-1)=\frac{1}{q}\varphi(x)=\frac{\omega(q^{x}-1)}{sq}.\]
Then, by assigning \(x=1\), we get the equation \(\omega(q-1)/(sq)=1\), from which \(\omega\) follows.
\end{proof}\vspace{0.2in}
\noindent
Now, in Theorem \ref{thm_explicit}, we derive the explicit solution of the delay-differential equation in Lemma \ref{lem_delaydiff}. The domains of the tetrational function as well as the multi-tetrational function are naturally extended by applying the sawtooth function.
\begin{thm}
\label{thm_explicit}
Let \(r\) be a real constant.\\
Let \(x\) be a real height in \(x\in\mathbb{R}\setminus\{n:n\in\mathbb{Z}, n\leq{-2}\}.\)\\
Let \(\lfloor{x}\rfloor\) and \(\{x\}\colonequals{x}-\lfloor{x}\rfloor\) be the floor function and the sawtooth function of \(x\), respectively.\\
Let \([x]_q\) be the q-analog of \(x\) defined as
\[[x]_{q}\colonequals\frac{q^{x}-1}{q-1},\]
where \(q=\ln{r}\) is taken as the principal value if \(r<0\).\\
Let \(\exp_{r}^{n}=\log_{r}^{-n}\) be the extended iterated exponential operator defined as
\[\exp_{r}^{n}{f}\colonequals
\begin{cases}
\underset{n}{\underbrace{\exp_r\exp_r\cdots\exp_r}} {f} & \hspace{15pt}(n\geq{0})\\
\hspace{3pt}\underset{n}{\underbrace{\log_r\log_r\cdots\log_r}} {f} & \hspace{15pt}(n<{0}),
\end{cases}
\]
where \(f\) is the operand.\\
Let \(\prod_{k=m}^{n}\) be the extended product operator defined as
\[\prod_{k=m}^{n}f_{k}\colonequals
\begin{dcases}
f_{m}f_{m+1}{\cdots}f_{n-1}f_{n} & (0\leq{m}\leq{n})\\
\frac{1}{f_{n+1}f_{n+2}{\cdots}f_{m-1}} & (n<m),
\end{dcases}
\]
where \(f_{k}\) are the operands.\\
Then the tetrational and multi-tetrational functions are expressed as
\[{^x}r=\tau(x)=\exp_{r}^{\lfloor{x}\rfloor+1}[\{x\}]_q\]
\[\mu(x)=\prod_{k=0}^{\lfloor{x}\rfloor}\exp_{r}^{k+1}[\{x\}]_q=q^{-\lfloor{x}\rfloor-1}\frac{\partial\tau(x)}{\partial[\{x\}]_q}\]
\end{thm}
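The closed form of Theorem \ref{thm_explicit} is easy to evaluate. The sketch below (our own, assuming \(r>1\) and \(r\neq e\) so that \(q=\ln r\notin\{0,1\}\)) evaluates \({^x}r=\exp_{r}^{\lfloor x\rfloor+1}[\{x\}]_q\) and numerically confirms the recurrence \(\tau(x)=r^{\tau(x-1)}\) and the delay-differential equation of Lemma \ref{lem_delaydiff}:

```python
import math

def tau(r: float, x: float) -> float:
    """Continuous tetration ^x r = exp_r^(floor(x)+1) [{x}]_q with q = ln r.
    Assumes r > 1 and r != e so that q is neither 0 nor 1."""
    q = math.log(r)
    frac = x - math.floor(x)                 # sawtooth {x}
    value = (q ** frac - 1) / (q - 1)        # q-analog [{x}]_q
    n = math.floor(x) + 1
    for _ in range(abs(n)):                  # exp_r if n > 0, log_r if n < 0
        value = r ** value if n > 0 else math.log(value, r)
    return value
```

For integer heights this reproduces the discrete values (e.g. \(\tau(3)=16\) for \(r=2\)), it is continuous at integer heights, and a central-difference check of \(\tau'(x)=q\tau(x)\tau'(x-1)\) passes away from integer heights.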
\begin{proof}
Let us consider the case of \(0\leq x<1\).\\
From Definition \ref{dfn_phi} and Lemma \ref{lem_omega_value}, we have
\[\varphi(x)=\frac{q(q^{x}-1)}{q-1}.\]
By using the notation of q-analog of the variable, we can express it as
\[\varphi(x)=q[x]_q.\]
Since \(\tau(x)=\sum_{n=0}^{\infty}{(q[x]_q)}^{n}/n!=\exp_{r}[x]_{q}\) by Theorem \ref{thm_tetrational_series}, and since \(\tau(x+n+1)=\exp_{r}{\tau(x+n)}\) and \(\mu(x+n)=\tau(x+n)\mu(x+n-1)\), the tetrational and multi-tetrational functions for \(n\geq{0}\) are given as follows:
\begin{align*}
\tau(x)=\exp_{r}[x]_q, & \hspace{15pt}\mu(x)=\exp_{r}[x]_q\\
\tau(x+1)=\exp_{r}^{2}[x]_q, & \hspace{15pt}\mu(x+1)=\exp_{r}^{2}[x]_q\exp_{r}[x]_q\\
\tau(x+2)=\exp_{r}^{3}[x]_q, & \hspace{15pt}\mu(x+2)=\exp_{r}^{3}[x]_q\exp_{r}^{2}[x]_q\exp_{r}[x]_q\\
& \hspace{5pt}\vdots\\
\tau(x+n)=\exp_{r}^{n+1}[x]_q, & \hspace{15pt}\mu(x+n)=\prod_{k=0}^{n}\exp_{r}^{k+1}[x]_q
\end{align*}
For the case \(n<0\), the relations \(\tau(x-1)=[x]_q\) and \(\tau(x-n-1)=\log_{r}\tau(x-n)\), as well as \(\mu(x-1)=1\) and \(\mu(x-n)=\mu(x-n+1)/\tau(x-n+1)\), give the following series:
\begin{align*}
\tau(x-1)=[x]_q, & \hspace{15pt}\mu(x-1)=\frac{\tau(x)}{\tau(x)}=1,\\
\tau(x-2)=\log_{r}[x]_q, & \hspace{15pt}\mu(x-2)=\frac{\tau(x)}{\tau(x)\tau(x-1)}=\frac{1}{[x]_q},\\
\tau(x-3)=\log_{r}^{2}[x]_q, & \hspace{15pt}\mu(x-3)=\frac{\tau(x)}{\tau(x)\tau(x-1)\tau(x-2)}=\frac{1}{[x]_q\log_{r}[x]_q},\\
& \hspace{5pt}\vdots\\
\tau(x-n)=\log_{r}^{n-1}[x]_q, & \hspace{15pt}\mu(x-n)=\frac{\prod_{k=0}^{0}\tau(x+k)}{\prod_{k=1}^{n}\tau(x-k+1)}=\prod_{k=0}^{-n}\tau(x+k).
\end{align*}
In the last line, we applied the extended product operator.\\
Therefore the same expression also holds for \(n<0\):
\[\tau(x+n)=\exp_{r}^{n+1}[x]_q,\hspace{15pt}\mu(x+n)=\prod_{k=0}^{n}\exp_{r}^{k+1}[x]_q.\]
Obviously, the following equation holds for both cases of \(n\geq{0}\) and \(n<0\):
\[\mu(x+n)=q^{-n-1}\frac{\partial\tau(x+n)}{\partial[x]_q}.\]
By replacing \(x\) and \(n\) above with \(\{x\}\) and \(\lfloor{x}\rfloor\), respectively, we have the expressions of tetrational and multi-tetrational functions for \(x\in(-\infty, \infty)\):
\[\tau(x)=\exp_{r}^{\lfloor{x}\rfloor+1}[\{x\}]_q,\hspace{10pt}\mu(x)=\prod_{k=0}^{\lfloor{x}\rfloor}\exp_{r}^{k+1}[\{x\}]_q=q^{-\lfloor{x}\rfloor-1}\frac{\partial\tau(x)}{\partial[\{x\}]_q}.\]
\end{proof}
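As a numerical illustration, the explicit formula of Theorem \ref{thm_explicit} can be evaluated directly. The following Python sketch is ours (the function name is hypothetical, and we restrict to real bases \(r>1\) so that every intermediate value stays real); it computes \({^x}r=\exp_{r}^{\lfloor{x}\rfloor+1}[\{x\}]_q\):

```python
import math

def tetra(r, x):
    """Continuous tetration ^x r = exp_r^(floor(x)+1) [ {x} ]_q,
    for a real base r > 1 and a real height x > -2."""
    q = math.log(r)
    n = math.floor(x)
    frac = x - n                       # sawtooth {x}
    # q-analog [ {x} ]_q = (q^{ {x} } - 1)/(q - 1); its q -> 1 limit is {x}
    t = frac if abs(q - 1.0) < 1e-12 else (q**frac - 1.0) / (q - 1.0)
    for _ in range(n + 1):             # apply exp_r (n+1) times when n+1 >= 0,
        t = r**t
    for _ in range(-(n + 1)):          # ... or log_r |n+1| times when n+1 < 0
        t = math.log(t, r)
    return t
```

Integer heights reproduce the usual power tower, e.g. \(\mathtt{tetra(2,3)}\) returns \(^{3}2=2^{2^{2}}=16\), and the piecewise evaluation is continuous at integer heights.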
\begin{cly}
\label{cly_core}
The following equation holds:
\[
^{\{x\}}r=r^{[\{x\}]_q}.
\]
\end{cly}
\begin{proof}
Since \(0\leq{\{x\}}<1\) and \(\lfloor\{x\}\rfloor=0\), the equation is given from Theorem \ref{thm_explicit}.
\end{proof}
\noindent
Let us consider the special case \(r=e\). As \(r\rightarrow{e}\), we have \(q=\ln{r}\rightarrow{1}\), and the q-analog reduces to \([\{x\}]_q\rightarrow{\{x\}}\). So we have \(\tau(x)=\exp_{r}^{\lfloor{x}\rfloor+1}\{x\}.\)
This formula is the same as Hooshmand's ultra exponential function for \(r=e\) \cite{Hooshmand2006}.\vspace{0.2in}
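This reduction can be checked numerically. The following Python sketch is ours (hypothetical names; we take a base slightly off \(e\), since the q-analog formula divides by \(q-1\)), comparing the q-analog form against the directly iterated exponential of the sawtooth:

```python
import math

def uxp(x):
    """Hooshmand's ultra exponential for base e: exp applied (floor(x)+1) times to {x}."""
    n = math.floor(x)
    t = x - n
    for _ in range(n + 1):
        t = math.exp(t)
    for _ in range(-(n + 1)):
        t = math.log(t)
    return t

# q -> 1 limit: as r -> e, [ {x} ]_q -> {x}
r = math.e + 1e-6                 # base slightly off e, so q = ln r != 1
q = math.log(r)
x = 1.75
frac = x - math.floor(x)
t = (q**frac - 1.0) / (q - 1.0)   # [ {x} ]_q
for _ in range(math.floor(x) + 1):
    t = r**t                      # tau(x) for the base r
assert abs(t - uxp(x)) < 1e-4     # loose tolerance: the limit is taken only approximately
```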
\noindent
The curves of the continuous tetrational functions \({^x}r=\tau(x)\) for different bases \(r\) are shown in Fig. \ref{fig:continuous tetration}. Note that the tetrational functions take complex values for \(r<1\).
\begin{figure}
\includegraphics[scale=0.4]{fig_continuous.pdf}
\centering
\caption{Behaviours of the continuous tetrational functions \({^x}r\) for different bases \(r\), where only the real components are shown for \(r=1/e\) and \(r=e^{-e}\) (left). The tetrational functions for \(r=1/e\) and \(r=e^{-e}\) are shown in the complex plane (right).}
\label{fig:continuous tetration}
\end{figure}
\vspace{0.2in}
\noindent
Figure \ref{fig:multiple} shows the behaviours of the multi-tetrational functions \(\mu(x)\) for different bases \(r\). In general, the functions are not differentiable at \(x=n\in\mathbb{Z}\).
\begin{figure}
\includegraphics[scale=0.4]{fig_multiple.pdf}
\centering
\caption{Behaviours of the multi-tetrational functions \(\mu(x)\) for different bases \(r\), where only the real components are shown for \(r=1/e\) and \(r=e^{-e}\) (left). The multi-tetrational functions for \(r=1/e\) and \(r=e^{-e}\) are shown in the complex plane (right).}
\label{fig:multiple}
\end{figure}
\vspace{0.2in}
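The multi-tetrational function can likewise be evaluated from the explicit product formula of Theorem \ref{thm_explicit}. The following Python sketch is ours (hypothetical names; real base \(r>1\), \(r\neq e\)); it implements \(\mu(x)\) with the reflection convention for the extended product, and checks the recursion \(\mu(x)=\tau(x)\mu(x-1)\) as well as the derivative relation \(\tau'(x)=\mu(x)\,q^{x+1}\ln{q}/(q-1)\), which follows from \(\mu(x)=q^{-\lfloor{x}\rfloor-1}\,\partial\tau(x)/\partial[\{x\}]_q\) by the chain rule:

```python
import math

def tau(r, x):
    # ^x r via the explicit formula (real base r > 1, r != e, height x > -2)
    q = math.log(r)
    n = math.floor(x)
    t = (q**(x - n) - 1.0) / (q - 1.0)     # [ {x} ]_q
    for _ in range(n + 1):
        t = r**t
    for _ in range(-(n + 1)):
        t = math.log(t, r)
    return t

def mu(r, x):
    # mu(x) = prod_{k=0}^{floor(x)} tau(r, {x} + k), using the reflection
    # convention prod_{k=m}^{n} = 1/prod_{k=n+1}^{m-1} when n < m
    n = math.floor(x)
    frac = x - n
    if n >= 0:
        out = 1.0
        for k in range(0, n + 1):
            out *= tau(r, frac + k)
        return out
    out = 1.0
    for k in range(n + 1, 0):
        out *= tau(r, frac + k)
    return 1.0 / out

r, x = 2.0, 2.3
assert abs(mu(r, x) - tau(r, x) * mu(r, x - 1)) < 1e-9      # mu(x) = tau(x) mu(x-1)
q, h = math.log(r), 1e-6
diff = (tau(r, x + h) - tau(r, x - h)) / (2.0 * h)          # central difference for tau'(x)
assert abs(diff - mu(r, x) * q**(x + 1) * math.log(q) / (q - 1)) < 1e-4
```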
\section{Analytical properties}
\label{sec_complex}
First, let us study the properties of \({^y}x=\tau(x,y)\) as a multivariable function of the base \(x\) and the height \(y\). From this point on, we choose letters freely to denote bases and heights.
\begin{thm}
\label{thm_total_differential}
Let \(x>0\) be a real base and let \(q=\ln{x}\).\\
Let \(y\) be a real height in \(y\in\mathbb{R}\setminus\{n:n\in\mathbb{Z}, n\leq{-2}\}.\)\\
Then the tetrational function \({^{y}}x=\tau(x,y)=\exp_{x}^{\lfloor{y}\rfloor+1}[\{y\}]_{q}\) is totally differentiable:
\[d\tau(x,y)=\frac{\partial\tau(x,y)}{\partial{x}}dx+\frac{\partial\tau(x,y)}{\partial{y}}dy.\]
\end{thm}
\begin{proof}
We shall prove this by induction.
Let \(-1\leq{y}<0\), then the tetrational function is
\[\tau(x,y)=\frac{q^{y+1}-1}{q-1}=\frac{e^{(\ln{\ln{x}})(y+1)}-1}{\ln{x}-1}.\]
Then the partial derivatives for \(-1\leq{y}<0\) are:
\[\frac{\partial\tau(x,y)}{\partial{x}}=\frac{(y+1)e^{(\ln{\ln{x}})(y+1)}}{x\ln{x}(\ln{x}-1)}-\frac{e^{(\ln{\ln{x}})(y+1)}-1}{x(\ln{x}-1)^{2}}\]
\[\frac{\partial\tau(x,y)}{\partial{y}}=\frac{(\ln{\ln{x}})e^{(\ln{\ln{x}})(y+1)}}{\ln{x}-1}\]
The following equation holds for \(-1\leq{y}<0\).
\[\frac{\partial^{2}\tau(x,y)}{\partial{x}\partial{y}}=\frac{\partial^{2}\tau(x,y)}{\partial{y}\partial{x}}=\frac{(1+(y+1)\ln{\ln{x}})e^{(\ln{\ln{x}})(y+1)}}{x\ln{x}(\ln{x}-1)}-\frac{(\ln{\ln{x}})e^{(\ln{\ln{x}})(y+1)}}{x(\ln{x}-1)^{2}}\]
Therefore
\[d\tau(x,y)=\frac{\partial\tau(x,y)}{\partial{x}}dx+\frac{\partial\tau(x,y)}{\partial{y}}dy.\]
Next, let us consider the general case.\\
From the Definition \ref{dfn_tau}:
\[{^{y+1}}x=\tau(x,y+1)=e^{(\ln{x})\tau(x,y)},\]
the partial derivatives of this general equation are
\[\frac{\partial\tau(x,y+1)}{\partial{x}}=\tau(x,y+1)\left[\frac{\tau(x,y)}{x}+(\ln{x})\frac{\partial\tau(x,y)}{\partial{x}}\right],\]
\[\frac{\partial\tau(x,y+1)}{\partial{y}}=(\ln{x})\tau(x,y+1)\frac{\partial\tau(x,y)}{\partial{y}}.\]
\noindent
Let \(\tau(x,y)\) be defined in \(n\leq{y}<n+1\), and \(\tau(x,y+1)\) be defined in \(n+1\leq{y+1}<n+2\).\\
\noindent
Suppose the following equation holds for \(n\leq{y}<n+1\):
\[\frac{\partial^{2}\tau(x,y)}{\partial{x}\partial{y}}=\frac{\partial^{2}\tau(x,y)}{\partial{y}\partial{x}}\]
Since \(\tau(x,y)\) in Theorem \ref{thm_explicit} is determined so as to be continuous at integer heights \(y\), and differentiation along the base \(x\) is not affected by the segment boundary,
the connection at \(y=n+1\) is continuous as follows:
\[\underset{y\rightarrow{n+1}}\lim\frac{\partial\tau(x,y)}{\partial{x}}=\underset{y\rightarrow{n}}\lim\frac{\partial\tau(x,y+1)}{\partial{x}}=\frac{\partial\tau(x,n+1)}{\partial{x}}\]
\[\underset{y\rightarrow{n+1}}\lim\frac{\partial\tau(x,y)}{\partial{y}}=\underset{y\rightarrow{n}}\lim\frac{\partial\tau(x,y+1)}{\partial{y}}=\frac{\partial\tau(x,n+1)}{\partial{y}}\]
\noindent
The following equation holds:
\[\frac{\partial^{2}\tau(x,y+1)}{\partial{x}\partial{y}}=\frac{\partial^{2}\tau(x,y+1)}{\partial{y}\partial{x}}\]
\[=(\ln{x})\tau(x,y+1)\frac{\partial\tau(x,y)}{\partial{y}}\left[\frac{\tau(x,y)}{x}+(\ln{x})\frac{\partial\tau(x,y)}{\partial{x}}\right]\]
\[+\tau(x,y+1)\left[\frac{1}{x}\frac{\partial\tau(x,y)}{\partial{y}}+(\ln{x})\frac{\partial^{2}\tau(x,y)}{\partial{y}\partial{x}}\right]\]
Hence:
\[d\tau(x,y+1)=\frac{\partial\tau(x,y+1)}{\partial{x}}dx+\frac{\partial\tau(x,y+1)}{\partial{y}}dy.\]
In general, therefore, the tetrational function is totally differentiable.\\
\end{proof}\vspace{0.2in}
\noindent
The property above allows us to define the inverse operations of tetration, the super-logarithm and the super-root, on the range where the function changes monotonically: base \(r\geq{1}\) and height \(h>-2\).\vspace{0.2in}
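As a sketch of how such an inverse can be computed, the following Python snippet is ours (hypothetical name; real base \(r>1\) with \(r\neq e\), and an argument in the range of \(\tau\)); the super-logarithm repeatedly peels \(\exp_{r}\) layers until the value falls into the core segment and then inverts the q-analog:

```python
import math

def slog(r, t):
    """Super-logarithm: the height h with ^h r = t (real base r > 1, r != e)."""
    q = math.log(r)
    n = 0
    while t >= 1.0:          # peel exp_r layers off the top
        t = math.log(t, r)
        n += 1
    while t < 0.0:           # or add one layer back for negative arguments
        t = r**t
        n -= 1
    # now t = [ {h} ]_q with {h} in [0, 1): invert the q-analog
    frac = math.log(1.0 + t * (q - 1.0)) / math.log(q)
    return (n - 1) + frac
```

For instance, \(\mathtt{slog(2,4)}\) returns \(2\), since \(^{2}2=4\).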
\noindent
Next we study the behaviours of the tetrational function \({^{h}x}=\tau(x,h)\) with a variable base \(x\) and a constant height \(h\).\vspace{0.2in}
\begin{lem}
\label{lem_(^n)x:x=0}
Let \(n\) be an integer constant.\\
Let \(x>0\) be a real variable.
\[
\underset{x\rightarrow{0}}\lim(^{n}x)=
\begin{cases}
1 & (n: \mathrm{even})\\
0 & (n: \mathrm{odd})
\end{cases}
\]
\end{lem}
\begin{proof}
We shall prove this by induction.
From Definition \ref{dfn_tau}, the relations \(^{-1}x=0\) and \(^{0}x=1\) hold for all \(x>0\), and in particular as \(x\rightarrow{0}\).\\
Obviously we have \(\underset{x\rightarrow{0}}{\lim}(\ln{x})=-\infty\).\\
If \(\underset{x\rightarrow{0}}{\lim}({^{2n}}x)=1\), then
\[\underset{x\rightarrow{0}}{\lim}[(^{2n}x)\ln{x}]=-\infty.\]
It follows
\[\underset{x\rightarrow{0}}{\lim}({^{2n+1}}x)=\underset{x\rightarrow{0}}{\lim}\left[e^{(^{2n}x)\ln{x}}\right]=0.\]
If \(\underset{x\rightarrow{0}}{\lim}({^{2n+1}}x)=0\), then, writing \(^{2n+1}x=e^{(^{2n}x)\ln{x}}\) with \(^{2n}x\rightarrow{1}\), the exponential factor decays faster than \(\ln{x}\) diverges, so
\[\underset{x\rightarrow{0}}{\lim}[(^{2n+1}x)\ln{x}]=\underset{x\rightarrow{0}}{\lim}\left[e^{(^{2n}x)\ln{x}}\ln{x}\right]=0.\]
It follows
\[\underset{x\rightarrow{0}}{\lim}({^{2n+2}}x)=\underset{x\rightarrow{0}}{\lim}\left[e^{(^{2n+1}x)\ln{x}}\right]=1.\]
Therefore the relation generally holds.\\
\end{proof}\vspace{0.2in}
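The parity behaviour in Lemma \ref{lem_(^n)x:x=0} is easy to observe numerically. The following Python sketch is ours (hypothetical name); it follows the recursion \(^{k+1}x=x^{(^{k}x)}\) used in the proof:

```python
def int_tetra(x, n):
    """^n x for an integer height n >= -1, via the recursion ^{k+1}x = x ** (^k x)."""
    t = 0.0 if n == -1 else 1.0     # ^{-1}x = 0 and ^0 x = 1
    for _ in range(max(n, 0)):
        t = x**t
    return t

x = 1e-6                            # a small positive base
assert int_tetra(x, 2) > 0.99       # even heights tend to 1 as x -> 0
assert int_tetra(x, 3) < 0.01       # odd heights tend to 0 as x -> 0
```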
\noindent
The relation in Lemma \ref{lem_(^n)x:x=0} can be confirmed by the curves shown in Figure \ref{fig:(^n)x_real} (left). The tetrational functions \({^n}x\) for integers \(n\) have real values and tend to 1 for even \(n\) and to 0 for odd \(n\) as \(x\rightarrow{0}\). As shown in the right graph of Figure \ref{fig:(^n)x_real}, in the general case including non-integer \(h\), the functions \({^h}x\) have complex values and non-monotonic behaviour for \(x<1\), while they have real values and change monotonically for \(x\geq{1}\).
\begin{figure} [ht]
\includegraphics[scale=0.35]{fig_n_x_real.pdf}
\centering
\caption{Behaviours of \({^n}x\) for integer heights \(-1\leq{n}\leq{4}\) (left). The tetrational function \({^h}x\) for various heights \(-1\leq{h}\leq{3}\) with an interval of 0.2 (right). In general, the functions have complex values for non-integer heights if \(x<1\); only the real components are shown in the right figure.}
\label{fig:(^n)x_real}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.5]{fig_n_x_complex.pdf}
\centering
\caption{The trajectories of tetrational functions \({^h}x\) in complex plane for non-integer heights \(h\) with an interval of 0.1. (a) \(-1<h<0\), (b) \(0<h<1\), (c) \(1<h<2\), (d) \(2<h<3\). }
\label{fig:(^n)x_complex}
\end{figure}
\noindent
In the right graph of Figure \ref{fig:(^n)x_real}, only the real components are shown, while the trajectories for \(x<1\) in the complex plane are given in Figure \ref{fig:(^n)x_complex}. It is obvious that \({^h}x\) does not always tend to 0 or 1 as \(x\rightarrow{0}\); e.g., the trajectory for \(h=0.5\) circulates around 0 and hence never reaches 0 or 1.\vspace{0.2in}
\noindent
It is also shown in Figure \ref{fig:(^n)x_complex} (a) that the trajectory for \(h=-0.5\) is a semicircle, and that the distorted semicircles for \(h\) and \(-(1+h)\) are axially symmetric to each other, e.g., for \(-0.2\) and \(-0.8\). The relation between \({^h}x\) and \(^{-(1+h)}y\) is given in Lemma \ref{lem_(^n)x_symmetry}, which explains not only the symmetry but also the existence of a pair of functions convertible into each other when \(-1\leq{h}\leq{0}\).
\begin{lem}
\label{lem_(^n)x_symmetry}
Let \(-1\leq{h}\leq{0}\).\\
Let \(x>0\) and \(y>0\).\\
Let \(({^h}x)^{*}\) be the complex conjugate of \({^h}x\).\\
Then the following equation holds:
\[({^h}x)^{*}=1-({^{-(1+h)}}y),\]
or with the q-analog notation:
\[[h+1]_{\ln{x}}=1-[-h]_{\ln{y}},\]
where
\[\ln{y}=\frac{1}{\ln{x}}.\]
\end{lem}
\begin{proof}
Let \(q=\ln{x}\) and \(k>0\).\\
For \(0<x<1\), obviously \(q<0\), and \(q\) can be written as \(q=-k=ke^{i\pi}\). Hence we have
\[q^{h+1}=k^{h+1}[\cos\pi(h+1)+i\sin\pi(h+1)]\]
The tetrational function \({^h}x\) for \(-1\leq{h}\leq{0}\) is then expressed as
\[
{^h}x=[h+1]_q=\frac{q^{h+1}-1}{q-1}=\frac{k^{h+1}\cos\pi(h+1)-1}{-k-1}+i\frac{k^{h+1}\sin\pi(h+1)}{-k-1}.
\]
Therefore
\[
({^h}x)^{*}=\frac{1-k^{h+1}\cos\pi(h+1)}{k+1}+i\frac{k^{h+1}\sin\pi(h+1)}{k+1}.
\]
On the other hand,
\[\ln{y}=\frac{1}{q}=k^{-1}e^{i\pi}.\]
Hence we get
\[
\left(\frac{1}{q}\right)^{-(h+1)}=k^{h+1}[\cos\pi(h+1)-i\sin\pi(h+1)].
\]
Then
\[
1-(^{-(1+h)}y)=1-[-h]_{1/q}=1-\frac{\left(\frac{1}{q}\right)^{-h}-1}{\frac{1}{q}-1}=1-\frac{\left(\frac{1}{q}\right)^{-(h+1)}-q}{1-q}
\]
\[
=1-\frac{k^{h+1}\cos\pi(h+1)+k}{1+k}+i\frac{k^{h+1}\sin\pi(h+1)}{1+k}
\]
\[
=\frac{1-k^{h+1}\cos\pi(h+1)}{k+1}+i\frac{k^{h+1}\sin\pi(h+1)}{k+1}.
\]
Therefore \(({^h}x)^{*}=1-({^{-(1+h)}}y)\) holds.\vspace{0.2in}\\
\noindent
For \(x\geq{1}\), obviously \(q\geq{0}\), and we have
\[({^h}x)^{*}={^h}x=\frac{q^{h+1}-1}{q-1}.\]
On the other hand, we have
\[1-(^{-(1+h)}y)=1-\frac{\left(\frac{1}{q}\right)^{-h}-1}{\frac{1}{q}-1}=\frac{q^{h+1}-1}{q-1}.\]
Therefore generally \(({^h}x)^{*}=1-({^{-(1+h)}}y)\) holds.\\
\end{proof}\vspace{0.2in}
\noindent
Now let us consider the extension of the tetration to complex bases and heights.
\begin{thm}
\label{thm_complex_domains}
Let \(r\) be a real constant.\\
Let \(z\) be a complex variable.\\
Then the following statements hold:
\begin{itemize}
\item \({^z}r\) is holomorphic if and only if \(\Re(z)\notin\mathbb{Z}\).
\item \({^r}z\) is holomorphic if and only if \(r\in\mathbb{Z}\).
\end{itemize}
\end{thm}
\begin{proof}
Let \(z=x+iy\).\\
(1) Proof of the first statement.\\
We shall prove this by induction.\\
Let \(s=\ln{q}=\ln{\ln{r}}\).\\
Then \(q^{z+1}=e^{s(x+1+iy)}=e^{s(x+1)}[\cos(sy)+i\sin(sy)]\).\\
For \(-1\leq{x}<{0}\) we have
\[
{^z}r=\frac{q^{z+1}-1}{q-1}=\frac{e^{s(x+1)}\cos(sy)-1}{q-1}+i\frac{e^{s(x+1)}\sin(sy)}{q-1}
=u_{0}+iv_{0},\]
where \(u_{0}\) and \(v_{0}\) are the real and imaginary components of \({^z}r\), respectively.\\
Obviously the Cauchy-Riemann equations hold:
\[
\frac{\partial{u_{0}}}{\partial{x}}
=\frac{\partial{v_{0}}}{\partial{y}}
=\frac{se^{s(x+1)}\cos(sy)}{q-1},
\]
\[
\frac{\partial{u_{0}}}{\partial{y}}
=-\frac{\partial{v_{0}}}{\partial{x}}
=-\frac{se^{s(x+1)}\sin(sy)}{q-1}.
\]
Therefore \({^z}r\) is holomorphic in \(-1\leq{x}<{0}\).\\
Let \(n\in\mathbb{Z}\). Suppose \(^{(x+iy)}r=u_{n}+iv_{n}\) for \((n-1)\leq{x}<{n}\) is holomorphic.\\
Then
\[
^{(x+1+iy)}r=r^{(u_{n}+iv_{n})}=e^{qu_{n}}\cos(qv_{n})+ie^{qu_{n}}\sin(qv_{n})=u_{n+1}+iv_{n+1}.
\]
By using the chain rule, e.g.,
\[
\frac{\partial{u_{n+1}}}{\partial{x}}
=\frac{\partial{u_{n+1}}}{\partial{u_{n}}}
\frac{\partial{u_{n}}}{\partial{x}}
+\frac{\partial{u_{n+1}}}{\partial{v_{n}}}
\frac{\partial{v_{n}}}{\partial{x}},
\]
we can prove that the Cauchy-Riemann equations hold:
\[
\frac{\partial{u_{n+1}}}{\partial{x}}
=\frac{\partial{v_{n+1}}}{\partial{y}}
=qe^{qu_{n}}\cos(qv_{n})\frac{\partial{u_{n}}}{\partial{x}}-qe^{qu_{n}}\sin(qv_{n})\frac{\partial{v_{n}}}{\partial{x}},
\]
\[
\frac{\partial{v_{n+1}}}{\partial{x}}
=-\frac{\partial{u_{n+1}}}{\partial{y}}
=qe^{qu_{n}}\sin(qv_{n})\frac{\partial{u_{n}}}{\partial{x}}+qe^{qu_{n}}\cos(qv_{n})\frac{\partial{v_{n}}}{\partial{x}}.
\]
Therefore \({^z}r\) is holomorphic in the interior of each segment \((n-1)<x<n\).\\ However, \({^z}r\) fails to be continuous at the segment boundaries \(x=n\), so the Cauchy-Riemann equations do not hold there, as follows:\\
The real and imaginary components of \({^z}r\) in the segment \(-1\leq{x}<{0}\) approach the following values as \(x\rightarrow{0}\):
\[
\underset{x\rightarrow{0}}{\lim}{u_{0}}=\frac{q\cos(sy)-1}{q-1},\hspace{20pt}\underset{x\rightarrow{0}}{\lim}{v_{0}}=\frac{q\sin(sy)}{q-1}.
\]
On the other hand, on the segment \(0\leq{x}<1\), \({^z}r\) approaches the following values as \(x\rightarrow{0}\), since \(u_{1}+iv_{1}=e^{qu_{0}}\cos(qv_{0})+ie^{qu_{0}}\sin(qv_{0})\):
\[
\underset{x\rightarrow{0}}{\lim}{u_{1}}=\exp\left[q\frac{q\cos(sy)-1}{q-1}\right]\cos\left(q\frac{q\sin(sy)}{q-1}\right)
\]
\[
\underset{x\rightarrow{0}}{\lim}{v_{1}}=\exp\left[q\frac{q\cos(sy)-1}{q-1}\right]\sin\left(q\frac{q\sin(sy)}{q-1}\right)
\]
Obviously, in general \({^z}r\) is not continuous at the segment boundary. It is continuous if and only if the height is real, i.e., \(y=0\) and hence \(v_{1}=v_{0}=0\).\\
Similarly,
\[
\underset{x\rightarrow{n}}{\lim}{u_{n+1}}=e^{qu_{n}}\cos(qv_{n}),\hspace{20pt}\underset{x\rightarrow{n}}{\lim}{v_{n+1}}=e^{qu_{n}}\sin(qv_{n}).
\]
Then \(\underset{x\rightarrow{n}}{\lim}(u_{n+1}+iv_{n+1})=\underset{x\rightarrow{n}}{\lim}(u_{n}+iv_{n})\) holds if and only if \(v_{n+1}=v_{n}=0\).\\
Therefore, the first statement holds.\vspace{0.2in}
\noindent
(2) Proof of the second statement.\\
Let \(h,k\in\mathbb{R}\) and let \(\ln{z}=e^{h+ik}=e^{h}\cos{k}+ie^{h}\sin{k}\).\\
Since \(z=x+iy=e^{e^{h}\cos{k}}\cos{(e^{h}\sin{k})}+ie^{e^{h}\cos{k}}\sin{(e^{h}\sin{k})}\), we can confirm that the Cauchy-Riemann equations hold and are expressed as follows:
\[
\frac{\partial{h}}{\partial{x}}=\frac{\partial{k}}{\partial{y}}=\frac{\cos{(e^{h}\sin{k}+k)}}{e^{(e^{h}\cos{k}+h)}},
\]
\[
\frac{\partial{k}}{\partial{x}}=-\frac{\partial{h}}{\partial{y}}=-\frac{\sin{(e^{h}\sin{k}+k)}}{e^{(e^{h}\cos{k}+h)}}.
\]
Let \(-1\leq{r}<0\). Then we have
\[
{^r}z=\frac{(\ln{z})^{r+1}-1}{\ln{z}-1}=\frac{e^{(r+1)h}\cos{k(r+1)}+ie^{(r+1)h}\sin{k(r+1)}-1}{e^{h}\cos{k}+ie^{h}\sin{k}-1}
\]
\[
=\frac{[e^{(r+1)h}\cos{k(r+1)}+ie^{(r+1)h}\sin{k(r+1)}-1][(e^{h}\cos{k}-1)-ie^{h}\sin{k}]}{(e^{h}\cos{k}-1)^{2}+e^{2h}\sin^{2}{k}}
\]
\[
=\frac{e^{(r+2)h}\cos{rk}-e^{(r+1)h}\cos{k(r+1)}-e^{h}\cos{k}+1}{e^{2h}-2e^{h}\cos{k}+1}
\]
\[
+i\frac{e^{(r+2)h}\sin{rk}-e^{(r+1)h}\sin{k(r+1)}+e^{h}\sin{k}}{e^{2h}-2e^{h}\cos{k}+1}
\]
\[=u+iv,\]
where \(u\) and \(v\) are real and imaginary component of \({^r}z\), respectively.\\
The partial derivatives of \(u\) and \(v\) by \(h\) or \(k\) are:
\begin{align*}
\frac{\partial{u}}{\partial{h}}&=\frac{(r+2)e^{(r+2)h}\cos{kr}-(r+1)e^{(r+1)h}\cos{k(r+1)}-e^{h}\cos{k}}{e^{2h}-2e^{h}\cos{k}+1}\\
&-\frac{\left(e^{(r+2)h}\cos{rk}-e^{(r+1)h}\cos{k(r+1)}-e^{h}\cos{k}+1\right)(2e^{2h}-2e^{h}\cos{k})}{(e^{2h}-2e^{h}\cos{k}+1)^{2}},\\
\frac{\partial{u}}{\partial{k}}&=\frac{-re^{(r+2)h}\sin{kr}+(r+1)e^{(r+1)h}\sin{k(r+1)}+e^{h}\sin{k}}{e^{2h}-2e^{h}\cos{k}+1}\\
&-\frac{\left(e^{(r+2)h}\cos{rk}-e^{(r+1)h}\cos{k(r+1)}-e^{h}\cos{k}+1\right)(2e^{h}\sin{k})}{(e^{2h}-2e^{h}\cos{k}+1)^{2}},\\
\end{align*}
\begin{align*}
\frac{\partial{v}}{\partial{h}}&=\frac{(r+2)e^{(r+2)h}\sin{kr}-(r+1)e^{(r+1)h}\sin{k(r+1)}+e^{h}\sin{k}}{e^{2h}-2e^{h}\cos{k}+1}\\
&-\frac{\left(e^{(r+2)h}\sin{rk}-e^{(r+1)h}\sin{k(r+1)}+e^{h}\sin{k}\right)(2e^{2h}-2e^{h}\cos{k})}{(e^{2h}-2e^{h}\cos{k}+1)^{2}},\\
\frac{\partial{v}}{\partial{k}}&=\frac{re^{(r+2)h}\cos{kr}-(r+1)e^{(r+1)h}\cos{k(r+1)}+e^{h}\cos{k}}{e^{2h}-2e^{h}\cos{k}+1}\\
&-\frac{\left(e^{(r+2)h}\sin{rk}-e^{(r+1)h}\sin{k(r+1)}+e^{h}\sin{k}\right)(2e^{h}\sin{k})}{(e^{2h}-2e^{h}\cos{k}+1)^{2}}.
\end{align*}
It is obvious that the Cauchy-Riemann equations do not hold:
\[
\frac{\partial{u}}{\partial{x}}=\frac{\partial{u}}{\partial{h}}\frac{\partial{h}}{\partial{x}}+\frac{\partial{u}}{\partial{k}}\frac{\partial{k}}{\partial{x}}
\neq
\frac{\partial{v}}{\partial{h}}\frac{\partial{h}}{\partial{y}}+\frac{\partial{v}}{\partial{k}}\frac{\partial{k}}{\partial{y}}=\frac{\partial{v}}{\partial{y}},
\]
\[
\frac{\partial{u}}{\partial{y}}=\frac{\partial{u}}{\partial{h}}\frac{\partial{h}}{\partial{y}}+\frac{\partial{u}}{\partial{k}}\frac{\partial{k}}{\partial{y}}
\neq
-\frac{\partial{v}}{\partial{h}}\frac{\partial{h}}{\partial{x}}-\frac{\partial{v}}{\partial{k}}\frac{\partial{k}}{\partial{x}}=-\frac{\partial{v}}{\partial{x}}.
\]
Similarly, the tetrational functions in the other segments, \({^{(r+n)}}z=\exp_{z}^{n}({^r}z)\), are generally not holomorphic for non-integer heights.\\
However, at the segment boundaries \(r=n\), the function is expressed as \({^n}z=\exp_{z}^{n}[1]\) and is holomorphic, as proved by induction as follows.\\
Let \(z=e^{x+iy}\) and \({^{(n+1)}}z=u_{n}+iv_{n}\).\\
For \(n=1\),
\[
{^1}z=e^{x+iy}=e^{x}\cos{y}+ie^{x}\sin{y}=u_{0}+iv_{0}.
\]
The Cauchy-Riemann equations hold:
\[\frac{\partial{u_{0}}}{\partial{x}}=\frac{\partial{v_{0}}}{\partial{y}}=e^{x}\cos{y},\hspace{20pt}\frac{\partial{u_{0}}}{\partial{y}}=-\frac{\partial{v_{0}}}{\partial{x}}=-e^{x}\sin{y}.\]
Hence \({^1}z\) is holomorphic.\\
Suppose \({^n}z=u_{n-1}+iv_{n-1}\) is holomorphic:
\[\frac{\partial{u_{n-1}}}{\partial{x}}=\frac{\partial{v_{n-1}}}{\partial{y}},\hspace{20pt}\frac{\partial{u_{n-1}}}{\partial{y}}=-\frac{\partial{v_{n-1}}}{\partial{x}}.\]
Then \(^{(n+1)}z\) is expressed as:
\[^{(n+1)}z=z^{(^{n}z)}=e^{(x+iy)(u_{n-1}+iv_{n-1})}=e^{(xu_{n-1}-yv_{n-1})+i(xv_{n-1}+yu_{n-1})}\]
\[=e^{(xu_{n-1}-yv_{n-1})}\cos{(xv_{n-1}+yu_{n-1})}+ie^{(xu_{n-1}-yv_{n-1})}\sin{(xv_{n-1}+yu_{n-1})}\]
=u_{n}+iv_{n}.
By using the relations above, it is shown that the Cauchy-Riemann equations hold:
\[
\frac{\partial{u_{n}}}{\partial{x}}=\frac{\partial{v_{n}}}{\partial{y}}
\]
\[
=\left(u_{n-1}+x\frac{\partial{u_{n-1}}}{\partial{x}}-y\frac{\partial{v_{n-1}}}{\partial{x}}\right)e^{(xu_{n-1}-yv_{n-1})}\cos{(xv_{n-1}+yu_{n-1})}
\]
\[
-\left(v_{n-1}+x\frac{\partial{v_{n-1}}}{\partial{x}}+y\frac{\partial{u_{n-1}}}{\partial{x}}\right)e^{(xu_{n-1}-yv_{n-1})}\sin{(xv_{n-1}+yu_{n-1})},
\]
\[
\frac{\partial{u_{n}}}{\partial{y}}=-\frac{\partial{v_{n}}}{\partial{x}}
\]
\[
=-\left(v_{n-1}+x\frac{\partial{v_{n-1}}}{\partial{x}}+y\frac{\partial{u_{n-1}}}{\partial{x}}\right)e^{(xu_{n-1}-yv_{n-1})}\cos{(xv_{n-1}+yu_{n-1})}
\]
\[
-\left(u_{n-1}+x\frac{\partial{u_{n-1}}}{\partial{x}}-y\frac{\partial{v_{n-1}}}{\partial{x}}\right)e^{(xu_{n-1}-yv_{n-1})}\sin{(xv_{n-1}+yu_{n-1})}.
\]
Hence \(^{(n+1)}z\) is holomorphic.\\
Therefore the second statement holds.
\end{proof}
\begin{cly}
If both the base and the height are complex and not integers, \(x,y\in\mathbb{C}\setminus\mathbb{Z}\), then the tetrational function \({^y}x\) is not holomorphic.
\end{cly}
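The failure of continuity at segment boundaries for nonreal heights, which underlies the first statement of Theorem \ref{thm_complex_domains}, can also be observed numerically. The following Python sketch is ours (hypothetical names; base \(r=2\)); it evaluates \({^z}r\) on the two segments adjacent to \(x=0\):

```python
import cmath, math

r = 2.0
q = math.log(r)          # q = ln r
s = math.log(q)          # s = ln q

def seg0(z):
    # ^z r on the segment -1 <= Re(z) < 0: (q^{z+1} - 1)/(q - 1)
    return (cmath.exp(s * (z + 1)) - 1) / (q - 1)

def seg1(z):
    # ^z r on the segment 0 <= Re(z) < 1: r ** (^{z-1} r)
    return cmath.exp(q * seg0(z - 1))

y = 1.0
left = seg0(complex(-1e-9, y))                 # x -> 0 from the left segment
right = seg1(complex(0.0, y))                  # x = 0 on the right segment
assert abs(left - right) > 0.1                 # discontinuous for a nonreal height
assert abs(seg0(-1e-12) - seg1(0.0)) < 1e-9    # but continuous for real heights
```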
\section{Calculation rules of tetration}
\label{sec_rule}
In this section, we study calculation rules of tetration based on the explicit formulae and properties of the functions.\vspace{0.2in}
\noindent
First let us extend the exponentiation operator.
\begin{dfn}
\label{dfn_extended_exp}
Let \(r,x,y\in\mathbb{R}\).\\
Extended exponentiation operator \(\exp_{r}^{x}\) is defined as:
\[\exp_{r}^{x}1\colonequals\exp_{r}^{\lfloor{x}\rfloor}\left(r^{[\{x\}]_q}\right)={^x}r,\]
where the notations in Theorem \ref{thm_explicit} are used.
\end{dfn}
\begin{lem}
\label{lem_e^(x-a)(^a)x}
Let \(r,x,y\in\mathbb{R}\).
The following equation holds:
\[{^x}r=\exp_r^{x-y}\left({{^y}r}\right)\]
\end{lem}
\begin{proof}
Since \(x=\lfloor{x}\rfloor+\{x\}\), by using Corollary \ref{cly_core}, we have the following relations:
\[
{^x}r=\exp_{r}^{x}1=\exp_{r}^{\lfloor{x}\rfloor+\{x\}}1=\exp_{r}^{\lfloor{x}\rfloor}\left({^{\{x\}}r}\right)
.\]
Then for any \(w\in\mathbb{R}\),
\[
\exp_{r}^{x}1=\exp_{r}^{\lfloor{x}\rfloor-w+\{x\}+w}1=\exp_{r}^{\lfloor{x}\rfloor-w}\left({^{\{x\}+w}r}\right)
.\]
By using \(y=\{x\}+w\) and hence \(\lfloor{x}\rfloor-w=\lfloor{x}\rfloor+\{x\}-y=x-y\), we get the relation.
\end{proof}
\begin{lem}
\label{lem_exp_summation}
Let \(r,x,y\in\mathbb{R}\).
The following equation holds:
\[\exp_{r}^{x}\exp_{r}^{y}1=\exp_{r}^{x+y}1={^{x+y}}r.\]
\end{lem}
\begin{proof}
From Lemma \ref{lem_e^(x-a)(^a)x},
\[{^x}r=\exp_{r}^{x-a}\left({^a}r\right)=\exp_{r}^{x-a}\exp_{r}^{a}1.\]
By rewriting \(y=x-a\), we get the relation.
\end{proof}\vspace{0.2in}
\noindent
Next, we distinguish the exponentiation (right associative) from the power (left associative) for clarity.\vspace{0.2in}
\begin{dfn}
\label{dfn_vee_wedge}
Let \(\vee\) and \(\wedge\) be the symbols of the exponentiation (right associative) and the power (left associative), respectively.
\begin{align*}
{^x}r\vee{f} & \colonequals{\exp_r^{x}{f}},\\
f\wedge{{^x}r} & \colonequals{f}^{({^x}r)},
\end{align*}
where \(f\) is the operand.\\
\noindent
The left-hand side of \(\vee\) is not a number but part of the exponentiation operator. The iterated tetration is given as:
\[{^x}({^y}r)\vee{f}\colonequals(\exp_{r}^{y})^{x}f=\exp_{r}^{xy}f.\]
\end{dfn}\vspace{0.2in}
\noindent
The notation \({^x}r\vee{^y}r\) refers to \(\exp_{r}^x({^y}r)\), where the operand \({^y}r\) is a number, while \({^x}r\vee\) is not a number but the extended exponentiation operator; it does not refer to \(\exp_{({^x}r)}\). On the other hand, the notation \({^x}r\wedge{^y}r\) refers to \(({^x}r)^{({^y}r)}=\exp_{({^x}r)}({^y}r)\), where both \({^x}r\) and \({^y}r\) are numbers. Similarly, \({^x}({^y}r)\vee{1}\) is not \({^x}t\) with \(t={^y}r\vee{1}\).\vspace{0.2in}
\noindent
Some calculation rules are given in Theorem \ref{thm_tet_rules}.
\begin{thm}
\label{thm_tet_rules}
Let \(r,t,x,y,z\in\mathbb{R}\).
The following relations hold:
\begin{eqnarray}
\setcounter{equation}{0}
{^x}r\vee{1}&=&\hspace{3pt}{^x}r.\\
-{^x}r&=&{^{-1}}r\vee\left(\frac{1}{^{x+1}r}\right)\\
\frac{1}{{^x}r}&=&r\vee({-^{x-1}r})\\
{^x}r+{^y}r&=&{^{-1}}r\vee({^{x+1}}r\cdot{^{y+1}}r).\\
{^x}r-{^y}r&=&{^{-1}}r\vee\left(\frac{{^{x+1}}r}{{^{y+1}}r}\right).\\
{^{x}}r\wedge{^y}r&=&{^{y+1}}r\wedge{^{x-1}r}.\\
{^x}r\cdot{^y}r&=&{^{-1}r\vee\left({^{x+1}}r\wedge{^y}r\right)}=r\vee\left({^{x-1}}r+{^{y-1}r}\right).\\
\frac{{^x}r}{{^y}r}&=&{^{-1}r}\vee({^{x+1}}r\wedge\frac{1}{{^y}r})=r\vee({^{x-1}}r-{^{y-1}}r).
\end{eqnarray}
\begin{eqnarray}
{^x}r\vee{^y}r&=&{^y}r\vee{^x}r\hspace{3pt}=\hspace{3pt}{^{x+y}}r.\\
{^y}({^x}r)&=&{^x}({^y}r)\hspace{3pt}=\hspace{3pt}{^{xy}}r.\\
{^z}({^x}r\vee{^y}r)&=&{^{z(x+y)}}r.\\
{^{\{x\}}}r&=&r\vee\frac{(\ln{r})^{\{x\}}-1}{\ln{r}-1}.\\
{^x}r&=&{^{\lfloor{x}\rfloor}}r\vee{^{\{x\}}}r\\
{^{\{x\}+\{y\}-1}}r&=&{^{\{x\}-1}}r+(\ln{r})^{\{x\}}\cdot\left({^{\{y\}-1}}r\right),\hspace{10pt}\{x\}+\{y\}<1.\\
{^x}r&=&\left({^{\lfloor{x}\rfloor+1}}r\right)\vee\left(1-{^{-\{x\}}}t\right)^{*},\hspace{10pt}\ln{r}\cdot\ln{t}=1.\\
\frac{\partial({^x}r)}{\partial({^{\{x\}-1}}r)}&=&(\ln{r})^{\lfloor{x}\rfloor+1}\prod_{k=0}^{\lfloor{x}\rfloor}\left({^{k}}r\vee{^{\{x\}}}r\right).
\end{eqnarray}
\end{thm}
\begin{proof}
The proofs are given as follows, respectively:\\
(1) Directly given from Definition \ref{dfn_extended_exp}. This is the relation between operator \({^x}r\vee\) and number \({^x}r\):
\[{^x}r=\exp_{r}^{x}1={^x}r\vee{1}.\]
(2) By Definition \ref{dfn_vee_wedge}, we have \(\log_{r}f={^{-1}}r\vee{f}\).
\[-{^x}r=-\log_{r}[{^{x+1}}r]=\log_{r}\left[\frac{1}{{^{x+1}}r}\right]={^{-1}}r\vee\left[\frac{1}{^{x+1}r}\right].\]
(3) By the exponent rule, we have
\[\frac{1}{{^x}r}=({^x}r)^{-1}=(r^{^{x-1}r})^{(-1)}=r^{-(^{x-1}r)}=r\vee(-^{x-1}r).\]
(4) By the logarithm rule,
\[{^x}r+{^y}r=\log_{r}({^{x+1}}r)+\log_{r}({^{y+1}}r)=\log_{r}({^{x+1}}r\cdot{^{y+1}}r)={^{-1}}r\vee({^{x+1}}r\cdot{^{y+1}}r).\]
(5) By the logarithm rule,
\[{^x}r-{^y}r=\log_{r}({^{x+1}}r)-\log_{r}({^{y+1}}r)=\log_{r}\left(\frac{{^{x+1}}r}{{^{y+1}}r}\right)={^{-1}}r\vee\left(\frac{{^{x+1}}r}{{^{y+1}}r}\right).\]
(6) By the exponent rule,
\[{^x}r\wedge{^y}r=({^x}r)^{({^y}r)}=r^{(^{x-1}r)({^y}r)}=r^{({^y}r)(^{x-1}r)}={^{y+1}}r\wedge{^{x-1}r}.\]
(7) By the logarithm rule,
\[{^x}r\cdot{^y}r={^x}r\cdot\log_{r}({^{y+1}}r)=\log_{r}\left[({^{y+1}}r)^{({^x}r)}\right]={^{-1}r}\vee({^{y+1}r}\wedge{^{x}r})={^{-1}r}\vee({^{x+1}r}\wedge{^{y}r}),\]
where (6) is used in the last step. By the exponent rule,
\[{^x}r\cdot{^y}r=r^{({^{x-1}}r)}\cdot{r}^{({^{y-1}}r)}=r^{({^{x-1}}r+{^{y-1}}r)}=r\vee({^{x-1}}r+{^{y-1}}r).\]
(8) By the logarithm rule,
\[\frac{{^x}r}{{^y}r}=\frac{1}{{^y}r}\log_{r}({^{x+1}}r)=\log_{r}\left[({^{x+1}}r)^{\frac{1}{({^y}r)}}\right]={^{-1}r}\vee({^{x+1}}r\wedge\frac{1}{{^y}r}).\]
By the exponent rule,
\[\frac{{^x}r}{{^y}r}=r^{({^{x-1}}r)}\cdot{r}^{(-{^{y-1}}r)}=r^{({^{x-1}}r-{^{y-1}}r)}=r\vee({^{x-1}}r-{^{y-1}}r).\]
(9) Directly given from Lemmas \ref{lem_e^(x-a)(^a)x} and \ref{lem_exp_summation}.\\
(10) From Definition \ref{dfn_vee_wedge}, we have the relation by assigning \(f=1\).\\
(11) \({^z}({^x}r\vee{^y}r)={^z}({^x}r\vee{^y}r)\vee{1}=(\exp_{r}^{x+y})^{z}1=\exp_{r}^{z(x+y)}1={^{z(x+y)}}r\vee{1}={^{z(x+y)}}r\).\\
(12) Corollary \ref{cly_core} directly gives the relation.\\
(13) We have the relation from Definition \ref{dfn_extended_exp} and (12).\\
(14) This relation is the property of q-numbers:
\[
{^{\{x\}+\{y\}-1}}r=\frac{(\ln{r})^{\{x\}+\{y\}}-1}{\ln{r}-1}=\frac{(\ln{r})^{\{x\}}-1}{\ln{r}-1}+(\ln{r})^{\{x\}}\frac{(\ln{r})^{\{y\}}-1}{\ln{r}-1}
\]
\[={^{\{x\}-1}}r+(\ln{r})^{\{x\}}\cdot\left({^{\{y\}-1}}r\right).\]
(15) From Lemma \ref{lem_(^n)x_symmetry}, we have the following relation.
\[\left({^{\{x\}-1}}r\right)^{*}=1-{^{-\{x\}}}t\]
Then \({^{\{x\}-1}}r=\left(1-{^{-\{x\}}}t\right)^{*}\).\\
By applying the operation of \(\left({^{\lfloor{x}\rfloor+1}}r\vee\right)\), we have
\[{^{\lfloor{x}\rfloor}}r\vee{^{\{x\}}}r=\left({^{\lfloor{x}\rfloor+1}}r\right)\vee\left(1-{^{-\{x\}}}t\right)^{*}.\]
Then we use (13) and get the relation. \\
(16) We get the relation from Theorem \ref{thm_explicit} and Corollary \ref{cly_core}.\\
\end{proof}
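Several of the rules above can be validated numerically through the explicit formula; the following Python sketch is ours (hypothetical names; real base \(r>1\), \(r\neq e\)). It checks the addition rule \({^x}r+{^y}r=\log_{r}({^{x+1}}r\cdot{^{y+1}}r)\), the exchange rule \(({^x}r)^{({^y}r)}=({^{y+1}}r)^{({^{x-1}}r)}\), and the product rule \({^x}r\cdot{^y}r=\log_{r}\big[({^{x+1}}r)^{({^y}r)}\big]\):

```python
import math

def tet(r, x):
    # ^x r via the explicit formula (real base r > 1, r != e, height x > -2)
    q = math.log(r)
    n = math.floor(x)
    t = (q**(x - n) - 1.0) / (q - 1.0)
    for _ in range(n + 1):
        t = r**t
    for _ in range(-(n + 1)):
        t = math.log(t, r)
    return t

r, x, y = 2.0, 0.7, 1.2
log_r = lambda v: math.log(v, r)
# addition rule: ^x r + ^y r = log_r(^{x+1}r * ^{y+1}r)
assert abs(tet(r, x) + tet(r, y) - log_r(tet(r, x + 1) * tet(r, y + 1))) < 1e-9
# exchange rule: (^x r)^(^y r) = (^{y+1} r)^(^{x-1} r)
assert abs(tet(r, x)**tet(r, y) - tet(r, y + 1)**tet(r, x - 1)) < 1e-9
# product rule: ^x r * ^y r = log_r((^{x+1} r)^(^y r))
assert abs(tet(r, x) * tet(r, y) - log_r(tet(r, x + 1)**tet(r, y))) < 1e-9
```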
\section{Concluding remarks}
\label{sec_summary}
We derived simple explicit formulae for the continuous tetrational function as well as the multi-tetrational function, based only on the delay-differential equation. The definition of the multi-tetrational function and the identification of the Stirling numbers of the second kind in the Taylor coefficients are the key steps of our approach. The solution is a piecewise connected function of class \(C^{1}\) at the connecting points for real heights, and the height can be extended to complex values except at the segment boundaries. Our solution has the advantages of a simple expression as well as continuity in the real \(r\)-\(x\) plane. The series for each segment converges absolutely and has an infinite radius of convergence. The solution should be compared with those of the numerical approaches of class \(C^{\infty}\) \cite{Kouznetsov2009}\cite{Paulsen2017}, which have the limited base domain \(r>e^{1/e}\) and a finite radius of convergence of the Taylor expansion. The calculation rules of tetration are given by distinguishing exponentiation from the power operation.
\begin{bibdiv}
\begin{biblist}
\bib{Goodstein1947}{article}{
title={Transfinite ordinals in recursive number theory},
volume={12},
DOI={10.2307/2266486},
journal={J. Symb. Log.},
author={Goodstein, R. L.},
year={1947},
pages={123--129}
}
\bib{Scot2006}{article}{
title={General relativity and quantum mechanics: toward a generalization of the Lambert W function},
volume={17},
DOI={10.1007/s00200-006-0196-1},
journal={Appl. Algebra Eng. Commun. Comput.},
author={Scott, T. C.},
author={Mann, R.},
author={Martinez II, R. E.},
year={2006},
pages={41--47}
}
\bib{Sun2017}{article}{
title={On the topology of air navigation route systems},
volume={170},
DOI={10.1680/jtran.15.00106},
journal={Proc. Inst. Civ. Eng., Transp.},
author={Sun, X.},
author={Wandelt, S.},
author={Linke, F.},
year={2017},
pages={46--59}
}
\bib{Khetkeeree2019}{article}{
title={Signal Reconstruction using Second Order Tetration Polynomial},
DOI={10.1109/ITC-CSCC.2019.8793435},
journal={2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), JeJu, Korea (South)},
author={Khetkeeree, S.},
author={Chansamorn, C.},
year={2019},
pages={1--4}
}
\bib{Furuya2019}{article}{
title={Compaction of Church Numerals},
VOLUME = {12},
DOI={10.3390/a12080159},
journal={Algorithms (Basel)},
author={Furuya, I.},
author={Kida, T.},
year={2019},
pages={159}
}
\bib{Euler1783}{article}{
title={De serie Lambertina Plurimisque eius insignibus proprietatibus},
volume={2},
journal={Acta Acad. Scient. Petropol.},
author={Euler, L.},
year={1783},
pages={29--51}
}
\bib{Eisenstein1844}{article}{
title={Entwicklung von \(\alpha^{\alpha^{\alpha^{\udots}}}\)},
volume={28},
doi = {10.1515/crll.1844.28.49},
journal={J. Reine angew. Math.},
author={Eisenstein, G.},
year={1844},
pages={49--52}
}
\bib{Corless1996}{article}{
title={On the Lambert W function},
volume={5},
doi = {10.1007/BF02124750},
journal={Adv. Comput. Math.},
author={Corless, R. M.},
author={Gonnet, G. H.},
author={Hare, D. E. G.},
author={Jeffrey, D. J.},
author={Knuth, D. E.},
year={1996},
pages={329--359}
}
\bib{Hooshmand2006}{article}{
title={Ultra power and ultra exponential functions},
volume={17},
DOI={10.1080/10652460500422247},
journal={Integral Transforms Spec. Funct.},
author={Hooshmand, M. H.},
year={2006},
pages={549--558}
}
\bib{Kouznetsov2009}{article}{
title={Solution of \(F(z+1) = \exp(F(z))\) in Complex \(z\)-Plane},
volume={78},
DOI={10.1090/S0025-5718-09-02188-7},
journal={Math. Comp.},
author={Kouznetsov, D.},
year={2009},
pages={1647--1670}
}
\bib{Paulsen2017}{article}{
title={Solving \(F(z+1)=b^{F(z)}\) in the complex plane},
volume={43},
DOI={10.1007/s10444-017-9524-1},
journal={Adv. Comput. Math.},
author={Paulsen, W.},
author={Cowgill, S.},
year={2017},
pages={1261--1282}
}
\bib{Abramowitz1964}{book}{
title={Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables},
publisher={National Bureau of Standards},
isbn = {0-486-61272-4},
author={Abramowitz, M.},
author={Stegun, A.},
year={1964},
}
\bib{Graham1994}{book}{
title={Concrete Mathematics: A Foundation for Computer Science},
publisher={Addison-Wesley Longman Publishing Co., Inc.},
isbn={0201558025},
edition={2nd},
author={Graham, Ronald L.},
author={Knuth, Donald E.},
year={1994},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"timestamp": "2021-09-01T02:13:58",
"yymm": "2105",
"arxiv_id": "2105.00247",
"language": "en",
"url": "https://arxiv.org/abs/2105.00247",
"abstract": "The continuous tetrational function ${^x}r=\\tau(r,x)$, the unique solution of equation $\\tau(r,x)=r^{\\tau(r,x-1)}$ and its differential equation $\\tau'(r,x) =q \\tau(r,x) \\tau'(r,x-1)$, is given explicitly as ${^x}r=\\exp_{r}^{\\lfloor x \\rfloor+1}[\\{x\\}]_q$, where $x$ is a real variable called height, $r$ is a real constant called base, $\\{x\\}=x-\\lfloor x \\rfloor$ is the sawtooth function, $\\lfloor x \\rfloor$ is the floor function of $x$, and $[\\{x\\}]_q=(q^{\\{x\\}}-1)/(q-1)$ is a q-analog of $\\{x\\}$ with $q=\\ln r$, respectively. Though ${^x}r$ is continuous at every point in the real $r-x$ plane, extensions to complex heights and bases have limited domains. The base $r$ can be extended to the complex plane if and only if $x\\in \\mathbb{Z}$. On the other hand, the height $x$ can be extended to the complex plane at $\\Re(x)\\notin \\mathbb{Z}$. Therefore $r$ and $x$ in ${^x}r$ cannot be complex values simultaneously. Tetrational laws are derived based on the explicit formula of ${^x}r$.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Extension of tetration to real and complex heights",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683484150417,
"lm_q2_score": 0.8104789063814617,
"lm_q1q2_score": 0.8004033150003693
} |
https://arxiv.org/abs/1812.00610 | Weak discrete maximum principle and $L^\infty$ analysis of the DG method for the Poisson equation on a polygonal domain | We derive several $L^\infty$ error estimates for the symmetric interior penalty (SIP) discontinuous Galerkin (DG) method applied to the Poisson equation in a two-dimensional polygonal domain. Both local and global estimates are examined. The weak maximum principle (WMP) for the discrete harmonic function is also established. We prove our $L^\infty$ estimates using this WMP and several $W^{2,p}$ and $W^{1,1}$ estimates for the Poisson equation. Numerical examples to validate our results are also presented. | \section{Introduction}
The discontinuous Galerkin (DG) method, originally proposed by Reed and Hill \cite{osti_4491151} in 1973, is a powerful method for the numerical solution of a wide range of partial differential equations (PDEs). One works with discontinuous functions that are polynomial on each element and introduces a numerical flux on each element boundary; the DG scheme is then derived by choosing the numerical flux so as to ensure the local conservation law.
For linear elliptic problems, the study of stability and convergence was well developed in the early 2000s; see the standard references
\cite{MR1885715},
\cite{riv08}
and \cite{MR2485457}
for details. However, most of those works are based on the $L^2$ and
DG energy norms, and little has been done in other norms. An exception
is \cite{MR2113680}, where the optimal-order error estimate in the
$L^\infty$ norm was proved using the discrete Green function. However,
the DG scheme in \cite{MR2113680} is defined on an exactly fitted
triangulation of a smooth domain; this restriction is somewhat
unrealistic for practical applications. From the viewpoint of applications, the $L^p$ theory, especially the $L^\infty$ theory, plays an important role in the analysis of nonlinear problems. Therefore, the development of the $L^p$ theory for the DG method is a subject of great importance.
Another important subject for confirming the validity of numerical methods is the discrete maximum principle (DMP).
Nevertheless, only a few works have been devoted to the DMP for the DG
method. Horv\'{a}th and Mincsovics \cite{MR3015392} proved the DMP for the DG
method applied to the one-dimensional Poisson equation. Badia, Bonilla
and Hierro (\cite{MR3315069}, \cite{MR3646366}) proposed nonlinear DG
schemes satisfying the DMP for linear convection-diffusion equations in one and two space dimensions.
However, to the best of our knowledge, no results are known regarding the DMP for the standard DG method in higher space dimensions.
In contrast, the $L^p$ theory and the DMP have been actively studied for the
standard finite element method (FEM).
The pioneering work by Ciarlet and Raviart \cite{MR0375802} studied $L^\infty$ and $W^{1,p}$ error estimates together with the DMP; in particular, those error estimates were proved as a consequence of the DMP. Subsequently, $L^\infty$ error estimates were proved by several methods: Scott \cite{MR0436617} applied the discrete Green function, and
Nitsche \cite{MR568857} utilized the weighted norm technique.
Rannacher and Scott succeeded in deriving
the optimal $W^{1,p}$ error estimate for $1\le p\le \infty$ in
\cite{MR645661}.
Detailed local and global pointwise estimates have been studied by many researchers; see, e.g., \cite{MR1464148}.
Recently, optimal-order $W^{1,\infty}$ and $L^\infty$ stability and
error estimates were established for the Poisson equation defined in a
smooth domain, where the effect of polyhedral approximations of a smooth
domain was precisely examined; see \cite{2018arXiv180400390K} for details.
As is well known, a non-negativity assumption (non-obtuse assumption)
on the triangulation is necessary for the DMP to hold in the standard
FEM. In this connection,
Schatz \cite{MR551291} is noteworthy for deriving a \emph{weak
maximum principle} (WMP) without the non-negativity assumption and applying it to the proof of stability as well as local and global error estimates in the $L^\infty$ norm.
In this paper, motivated by \cite{MR551291}, we extend the results of that study to the symmetric interior penalty (SIP) DG method, which is one of the most popular DG methods for the Poisson equation. Our results are summarized as follows.
Let $u$ be the solution of the Dirichlet boundary value problem for the
Poisson equation \eqref{eq:poisson} defined in a polygonal domain
$\Omega\subset\mathbb{R}^2$, and let $u_h$ be the solution of the
SIPDG method \eqref{eq:dg} in the finite dimensional space $V_h$ defined
as \eqref{eq:vh}. Then, we have the $L^\infty$ interior error estimate
(see Theorem \ref{thm1})
\[
\norm{u-u_h}_{L^\infty(\Omega_0)} \le C \left(\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega_1}+\norm{u-u_h}_{L^2(\Omega_1)}\right),
\]
where $\Omega_0$ and $\Omega_1$ are open subsets such that $\Omega_0
\subset \Omega_1 \subset \Omega$, and $C$ is independent of $h$ and the
choice of $\Omega_0$ and $\Omega_1$. This interior estimate is
valid under Assumption \ref{asm1} below, where $\alpha(h)$ and
$\|\cdot\|_{\alpha(h),\Omega_1}$ are defined.
Using this interior error estimate, we prove
the WMP (see Theorem \ref{thm2})
\[
\norm{u_h}_{L^\infty(\Omega)} \le C \norm{u_h}_{ L^\infty(\partial
\Omega)}
\]
for the discrete harmonic function $u_h\in V_h$.
Finally, under some assumptions on the triangulation (see Assumption
\ref{asm2}), we prove the $L^\infty$ error estimate
(see Theorem \ref{thm3})
\[
\norm{u-u_h}_{L^\infty(\Omega)} \le C \left(\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega} + \norm{u-u_h}_{L^\infty(\partial \Omega)}\right)
\]
for the solution $u$ of the Poisson equation \eqref{eq:poisson} and its
DG approximation $u_h$. As a matter of fact, the WMP is a key point of
the proof of this error estimate.
Moreover, we obtain the $L^\infty$ error estimate of the form (see Corollary \ref{cor2})
\[
\norm{u-u_h}_{L^\infty(\Omega)} \le
C h^{r}\norm{u}_{W^{1+r,\infty}(\Omega)},
\]
where $r$ denotes the degree of the approximating polynomials.
Unfortunately, this error estimate is only sub-optimal, even for the
piecewise linear element ($r=1$). This is because the Dirichlet boundary
condition is imposed ``weakly'' by the variational formulation (Nitsche's
method) in the DG method, whereas it is imposed ``strongly'' by the nodal
interpolation in the FEM. This suggests that the imposition of the Dirichlet boundary condition
in the DG method deserves further investigation. In particular, sharper estimates of
$\alpha(h)$ and $\norm{u-u_h}_{L^\infty(\partial \Omega)}$ are necessary, and will be a focus of our future work.
Our SIPDG scheme and main results, Theorems \ref{thm1}, \ref{thm2} and \ref{thm3}, are
presented in Section \ref{result}.
The main tool of our analysis is several $W^{2,p}$ and $W^{1,1}$
estimates for solutions of the Poisson equation. Several local error
estimates developed in previous works (see \cite{MR0520174},
\cite{MR2113680} and \cite{MR0431753}) are also used.
After having presented those preliminary results in Section \ref{sect2}, we state the proofs of
Theorems \ref{thm1}, \ref{thm2} and \ref{thm3} in Sections
\ref{s:I},
\ref{s:II} and
\ref{s:III}, respectively.
Finally, we report the results of numerical
experiments to confirm the validity of our theoretical results in Section \ref{s:ne}.
Before concluding this introduction, we list the notation used in this paper.
\paragraph{Notation.}
We follow the standard notation, for example, of \cite{MR2424078} as for function spaces and their norms. In particular, for $1\le p \le \infty$ and a positive integer $j$, we use the standard Lebesgue space $L^{p}(\mathcal{O})$ and the Sobolev space $W^{j,p}(\mathcal{O})$. Here and hereinafter, $\mathcal{O}$ denotes a bounded domain in $\mathbb{R}^2$. The semi-norms and norm of $W^{j,p}(\mathcal{O})$
are denoted, respectively, by
\[
\abs{v}_{W^{i,p}(\mathcal{O})} = \left(\sum_{\abs{\alpha} = i}\norm{\pderiv[\alpha]{v}{x}}_{L^p(\mathcal{O})}^p\right)^{{1}/{p}},\quad
\norm{v}_{W^{j,p}(\mathcal{O})} = \left(\sum_{i = 0}^{j} \abs{v}_{W^{i,p}(\mathcal{O})}^p\right)^{{1}/{p}}.
\]
Letting $C_0^\infty(\mathcal{O})$ be the set of all infinitely differentiable functions with compact support in $\mathcal{O}$, we denote by $W^{j,p}_0(\mathcal{O})$ the closure of $C^\infty_0(\mathcal{O})$ in the $W^{j,p}(\mathcal{O})$ norm. The space $W^{1,p}_0(\mathcal{O})$ is characterized as $W^{1,p}_0(\mathcal{O})=\{v\in W^{1,p}(\mathcal{O})\colon v|_{\partial\mathcal{O}}=0\}$ if $\partial\mathcal{O}$ is a Lipschitz boundary.
Let $p'$ be the H\"older conjugate exponent of $p$; $1/p+1/p'=1$. The inner product of $L^2(\mathcal{O})$ is denoted by $(\cdot,\cdot)_{\mathcal{O}}$. The $\mathbb{R}^d$-Lebesgue measure of $\mathcal{O}$ is denoted by $\abs{\mathcal{O}}_d$. We also use the fractional order Sobolev space $W^{s,p}(\mathcal{O})$ for $s>0$. As usual, we write as $H^s(\mathcal{O}) = W^{s,2}(\mathcal{O})$.
For $\Gamma \subset \partial \mathcal{O}$, we define $W^{j,p}(\Gamma)$ and $H^s(\Gamma)$ using a surface measure $ds=ds_\Gamma$.
For $\mathcal{O}_1,\mathcal{O}_2\subset\mathbb{R}^2$, we write
$\mathcal{O}_1\Subset\mathcal{O}_2$, or equivalently
$\mathcal{O}_2\Supset\mathcal{O}_1$, to express
$\overline{\mathcal{O}_1}\subset\mathcal{O}_2$.
Finally, the letter $C$ denotes a generic positive constant depending only on $\Omega$, $\partial\Omega$, the criterion $\sigma_0$ of the penalty parameter and the shape-regularity constant $C_*$ defined in Section \ref{result}.
\section{DG scheme and results}
\label{result}
Throughout this paper, letting $\Omega$ be a bounded polygonal domain in $\mathbb{R}^2$, we consider the Dirichlet boundary value problem for the Poisson equation
\begin{equation}
\left\{
\begin{array}{rcc}
-\Delta u = f & \text{in} & \Omega \\
u = g & \text{on} & \partial \Omega,
\end{array}\right.\label{eq:poisson}
\end{equation}
where $f \in L^2(\Omega)$ and $g \in H^{1/2}(\partial\Omega)$. There exists an extension $\tilde{g}\in H^1(\Omega)$ such that $\tilde{g}=g$ on $\partial\Omega$ and $\|\tilde{g}\|_{H^1(\Omega)}\le C\|g\|_{H^{1/2}(\partial\Omega)}$. The following discussion does not
depend on the choice of the extension.
Then, the weak formulation of \eqref{eq:poisson} is stated as follows.
\smallskip
\textup{(BVP;$f,g$)} Find $u=u_0+\tilde{g}\in H^1(\Omega)$ with $u_0\in H^1_0(\Omega)$ such that
\begin{equation}
\int_\Omega \nabla u_0 \cdot \nabla v ~dx = (f,v)_\Omega-\int_\Omega \nabla \tilde{g} \cdot \nabla v ~dx \quad {}^\forall v \in H^1_0(\Omega). \label{eq:weakhomodiriclet}
\end{equation}
The Lax--Milgram theory guarantees that \textup{(BVP;$f,g$)} admits a unique solution $u\in H^1(\Omega)$. The regularity of $u$ is well studied; see \cite[Theorem 1]{MR0466912} and \cite[Lemma 1.2]{MR551291} for details of the following results.
\begin{prop}\label{prop:poisson}
Let $0 < \alpha < 2\pi$ be the maximum (interior) angle of $\partial\Omega$, and set $\beta = \pi/\alpha$. Letting $g=0$, we suppose that $u \in H^1_0(\Omega)$ is the solution of \textup{(BVP;$f,0$)}.
\begin{enumerate}
\item[\textup{(i)}] If $\Omega$ is convex ($\beta > 1$), then $u$ belongs to $H^2(\Omega) \cap H^1_0(\Omega)$, and we have
\begin{equation}
\abs{u}_{H^2(\Omega)} \le \norm{f}_{L^2(\Omega)}. \label{eq:convexregulality}
\end{equation}
\item[\textup{(ii)}] If $1/2 < \beta < 1$ and $f \in L^p(\Omega)$ for some $1<p<2/(2-\beta)$, then $u$ belongs to $ W^{2,p}(\Omega) \cap H^1_0(\Omega)$, and there exists a positive constant $C$ depending only on $\Omega$ and $p$ such that
\begin{equation}
\norm{u}_{W^{2,p}(\Omega)} \le C \norm{f}_{L^p(\Omega)}. \label{eq:nonconvexregulality}
\end{equation}
\end{enumerate}
\end{prop}
\begin{rem}
Because $\beta > 1/2$, we have $2/(2-\beta) > 4/3$ and, consequently, \eqref{eq:nonconvexregulality} holds for some $4/3 < p \le 2$.
\end{rem}
Let $\{\mathcal{T}_h\}_h$ be a family of shape-regular and quasi-uniform triangulations of $\Omega$ (see \cite[(4.4.15), (4.4.16)]{bs08}). That is, there exists a positive constant $C_*$ satisfying
\begin{equation}
\frac{h_K}{\rho_K} \le C_* ,\, \,\frac{h}{h_K} \le C_*\qquad (K\in\mathcal{T}_h\in\{\mathcal{T}_h\}_h). \label{eq:triangulation}
\end{equation}
Therein, $h_K$ and $\rho_K$ denote the diameters of the circumscribed and inscribed circles of $K$, respectively. Moreover, the granularity parameter $h$ is defined as $h = \displaystyle \max_{K \in \mathcal{T}_h}h_K$.
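For intuition, the shape-regularity ratio $h_K/\rho_K$ in \eqref{eq:triangulation} can be computed directly from the vertex coordinates of a triangle. The following sketch (ours, not part of the analysis) evaluates the ratio of the circumscribed to the inscribed circle diameter; an equilateral triangle attains the minimal value $2$, while nearly degenerate triangles make the ratio blow up, violating any fixed bound $C_*$.

```python
import math

def shape_ratio(p0, p1, p2):
    """h_K / rho_K for a triangle: diameter of the circumscribed circle
    over the diameter of the inscribed circle (2R / 2r = R / r)."""
    a = math.dist(p1, p2)
    b = math.dist(p0, p2)
    c = math.dist(p0, p1)
    s = 0.5 * (a + b + c)
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    R = a * b * c / (4.0 * area)   # circumradius
    r = area / s                   # inradius
    return R / r
```

For instance, `shape_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))` returns $2$ (up to rounding), while the flat triangle $(0,0),(1,0),(0.5,0.01)$ yields a very large ratio.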
We set, for $1\le p\le \infty$,
\begin{equation}
V^p = V^p(\Omega) \coloneqq \{ v \in L^p(\Omega) \colon v|_K \in W^{1,p}(K) , (\nabla v)|_{\partial K} \in L^{p}(\partial K) {}^\forall K \in \mathcal{T}_h\}.
\end{equation}
It is noteworthy that $v$ is a continuous function on each $K \in \mathcal{T}_h$ if $v \in V^\infty$.
For a positive integer $r$, we define the finite element spaces $V_h$ and $\mathring V_h$ as
\begin{align}
V_h &= V_h^r(\Omega) \coloneqq\{ v_h \in L^2(\Omega) \colon v_h|_K
\in\mathcal{P}^r(K) \text{ for each }K \in \mathcal{T}_h\}, \label{eq:vh}\\
\mathring V_h & = \mathring V_h(\Omega) \coloneqq \{v_h \in V_h \colon
\operatorname{supp}{v_h} \subset \Omega\},\label{eq:vho}
\end{align}
where $\mathcal{P}^r(K)$ denotes the set of all polynomials of degree $\le r$.
For $\mathcal{O}\subset\Omega$, we set
$V^p(\mathcal{O})=\{v|_{\mathcal{O}}\colon v\in V^p\}$, $V_h(\mathcal{O})=\{v|_{\mathcal{O}}\colon v\in V_h\}$ and
$\mathring V_h(\mathcal{O}) \coloneqq \{v_h \in V_h(\mathcal{O}) \colon \operatorname{supp}{v_h} \subset \mathcal{O}\}$.
We let $\mathcal{E}_h$ be the set of all edges of $K \in \mathcal{T}_h$, and set
\[
\E^{\partial}_h \coloneqq \{e \in \mathcal{E}_h \colon e \subset \partial \Omega \},\quad \E^{\circ}_h \coloneqq \mathcal{E}_h \setminus \E^{\partial}_h.
\]
For $v \in V^p$ and $e \in \mathcal{E}_h$, we define $\mean{\cdot}$ and $\jump{\cdot}$ as follows.
If $e \in \E^{\circ}_h$, we set
\begin{align}
\mean{v} \coloneqq \frac{1}{2}(v_1 +v_2)\,,\,\,&
\jump{v} \coloneqq v_1n_1 + v_2n_2 \,,\,\\
\mean{\nabla v} \coloneqq \frac{1}{2}(\nabla v_1 + \nabla v_2)\,,\,&
\jump{\nabla v} \coloneqq \nabla v_1 \cdot n_1 + \nabla v_2 \cdot n_2 \,.
\end{align}
If $e \in \E^{\partial}_h$, we set
\begin{align}
\mean{v} \coloneqq v \,,\,\jump{v} \coloneqq v n\,,\,\mean{\nabla v} \coloneqq \nabla v \,,\,\jump{\nabla v} \coloneqq \nabla v \cdot n \,.
\end{align}
Therein, for $e \in \E^{\circ}_h$, $K_1$ and $K_2$ denote the two distinct elements of $\mathcal{T}_h$ satisfying $e \subset \partial K_1 \cap \partial K_2$, we set $v_i = v|_{K_i}$, and $n_i$ denotes the outward unit normal vector to $e$ of $K_i$; finally, $n$ denotes the outward unit normal vector on $\partial \Omega$.
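A one-dimensional analogue may help fix the sign conventions (this toy computation is ours; in 1D an interior ``edge'' is a single point, the normals are $n_1=+1$ and $n_2=-1$, and the jump is a scalar rather than a vector):

```python
# Piecewise-linear v on two 1D elements K1 = (0, 0.5) and K2 = (0.5, 1):
# v|_K1(x) = x and v|_K2(x) = 2x, so v is discontinuous at the interior
# "edge" x = 0.5.  Outward normals there: n1 = +1 (from K1), n2 = -1 (from K2).
v1, v2 = 0.5, 1.0            # one-sided traces at x = 0.5
n1, n2 = 1.0, -1.0
mean_v = 0.5 * (v1 + v2)     # {v} = 0.75
jump_v = v1 * n1 + v2 * n2   # [v] = -0.5 (a scalar in 1D; a vector in 2D)
```

Note that $\jump{v}$ vanishes exactly when the two traces agree, which is why the jump terms in the DG bilinear form penalize inter-element discontinuities.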
We define the norm $\norm{v}_{V^p(\mathcal{O})}$ on $V^p(\mathcal{O})$, where $\mathcal{O}=\Omega$ or $\mathcal{O}\subset\Omega$, as
\[
\norm{v}_{V^p(\mathcal{O})}^p \coloneqq
\sum_{K \in \mathcal{T}_h} \norm{v}_{W^{1,p}(K \cap \mathcal{O})}^p + \sum_{e \in \mathcal{E}_h} h_e^{1-p}\norm{\jump{v}}_{L^p(e \cap \overline{\mathcal{O}})}^p
+ \sum_{e \in \mathcal{E}_h} h_e \norm{\mean{\nabla v}}_{L^p(e \cap \overline{\mathcal{O}})}^p
\]
and
\[
\norm{v}_{V^\infty(\mathcal{O})} \coloneqq \max_{K \in \mathcal{T}_h} \norm{v}_{W^{1,\infty}(K \cap \mathcal{O})}
+ \max_{e \in \mathcal{E}_h}h_e^{-1}\norm{\jump{v}}_{L^\infty(e\cap\overline{\mathcal{O}})}+ \max_{e \in \mathcal{E}_h} \norm{\mean{\nabla v}}_{L^\infty(e\cap\overline{\mathcal{O}})},
\]
where $h_e = (h_{K_1} + h_{K_2})/2$ if $e \in \E^{\circ}_h$ and $h_e = h_K$ if $e \in \E^{\partial}_h$.
Letting $1 \le p ,p'\le \infty$ and $1/p+1/p'=1$, we introduce the DG bilinear form on $V^p\times V^{p'}$ as
\begin{multline}
a(u,v) \coloneqq \sum_{K \in \mathcal{T}_h} \int_K \nabla u \cdot \nabla v ~dx \\
-\sum_{e \in \mathcal{E}_h} \int_e(\mean{\nabla u}\jump{v} + \mean{\nabla v}\jump{u})~ ds
+ \sum_{e \in \mathcal{E}_h}\frac{\sigma}{h_e}\int_e \jump{u}\jump{v} ~ ds \label{eq:bilinear}
\end{multline}
for $u \in V^p$ and $v \in V^{p'}$. Herein, $\sigma$ is a sufficiently large constant (the penalty parameter).
The linear form $F$ on $V^2$ is defined by
\begin{equation}
F(v) \coloneqq \int_\Omega fv ~ dx + \sum_{e \in \E^{\partial}_h}\int_e g\left( \frac{\sigma}{h_e}v - \nabla v \cdot n\right)~ds.
\end{equation}
Now we can state the DG scheme to be addressed in this paper:
\begin{equation}
\textup{(DG;$f,g$)}\quad
\text{Find}\quad u_h \in V_h \quad \text{ s.t. }\quad
a(u_h,\chi) = F(\chi) \quad {}^\forall \chi \in V_h.
\label{eq:dg}
\end{equation}
This scheme is usually called the symmetric interior penalty DG (SIPDG) method, and the $L^2$ theory is well-developed at present (see \cite{MR1885715}). For example, the DG bilinear form $a$ is continuous in the sense that, for any $1 \le p \le \infty$, there exists $C>0$ satisfying
\begin{equation}
a(u,v) \le C \norm{u}_{V^p}\norm{v}_{V^{p'}} \quad {}^\forall u \in V^p, {}^\forall v \in V^{p'}. \label{eq:dgconti}
\end{equation}
Moreover, there exists $\sigma_0>0$ such that, if $\sigma\ge \sigma_0$, then we have
\begin{equation}
a(\chi,\chi) \ge C \norm{\chi}_{V^2}^2 \quad {}^\forall \chi \in V_h. \label{eq:dgcoercive}
\end{equation}
Consequently, \textup{(DG;$f,g$)} with $\sigma\ge \sigma_0$ admits a unique solution $u_h \in V_h$ and, it satisfies
\[
\|u_h\|_{V^2} \le \sup_{\chi\in V_h}\frac{F(\chi)}{\|\chi\|_{V^2}}.
\]
If the solution $u$ of \textup{(BVP;$f,g$)} belongs to $H^s(\Omega)$ for some $s > \frac{3}{2}$, we have
\begin{equation}
a(u,v) = F(v) \quad {}^\forall v \in V^2. \label{eq:dgconsistency}
\end{equation}
As a result, we have the Galerkin orthogonality (consistency)
\begin{equation}
a(u-u_h,\chi) = 0 \quad {}^\forall \chi \in V_h. \label{eq:galerkin}
\end{equation}
Our main theorem below will be formulated using the pair of functions $u\in V^\infty$ and $u_h\in V_h$ satisfying \eqref{eq:galerkin}. More generally, we consider $u\in V^\infty$ and $u_h\in V_h$ satisfying
\begin{equation}
a(u-u_h,\chi) = 0 \quad {}^\forall \chi \in \mathring V_h.
\label{eq:go}
\end{equation}
Below, we always assume that $\sigma\ge \sigma_0$.
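To make the scheme concrete, the following self-contained sketch assembles an analogue of \eqref{eq:dg} in the simplest possible setting: $-u''=f$ on $(0,1)$ with $g=0$, piecewise-linear elements on a uniform mesh, and the illustrative penalty value $\sigma=10$ (our choice; any $\sigma\ge\sigma_0$ works). This is an illustration only, not the two-dimensional method analyzed in this paper.

```python
import numpy as np

def solve_sipdg_1d(f, n_elem, sigma=10.0):
    """SIP DG for -u'' = f on (0,1), u(0) = u(1) = 0, with piecewise-linear
    elements on a uniform mesh; returns the 2*n_elem nodal dof vector."""
    h = 1.0 / n_elem
    ndof = 2 * n_elem                      # two endpoint dofs per element
    A = np.zeros((ndof, ndof))
    b = np.zeros(ndof)
    dphi = np.array([-1.0, 1.0]) / h       # constant derivatives of the local basis
    K_loc = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # int_K phi_a' phi_c' dx
    for i in range(n_elem):
        A[2*i:2*i+2, 2*i:2*i+2] += K_loc
        x0, x1 = i * h, (i + 1) * h
        xm = 0.5 * (x0 + x1)
        for loc, phi in enumerate((lambda x: (x1 - x) / h, lambda x: (x - x0) / h)):
            # Simpson's rule for int_K f*phi dx
            b[2*i+loc] += h / 6.0 * (f(x0)*phi(x0) + 4.0*f(xm)*phi(xm) + f(x1)*phi(x1))
    # interior faces: -({u'}[v] + {v'}[u]) + (sigma/h)[u][v]
    for i in range(n_elem - 1):
        dofs = [2*i, 2*i+1, 2*i+2, 2*i+3]
        jump = np.array([0.0, 1.0, -1.0, 0.0])                       # [v] = v_L - v_R
        mean = np.array([dphi[0], dphi[1], dphi[0], dphi[1]]) / 2.0  # {v'}
        for a, da in enumerate(dofs):
            for c, dc in enumerate(dofs):
                A[da, dc] += (-(mean[a]*jump[c] + mean[c]*jump[a])
                              + sigma / h * jump[a]*jump[c])
    # boundary faces with g = 0: -((n u')v + (n v')u) + (sigma/h) u v
    for dofs, n in (([0, 1], -1.0), ([ndof-2, ndof-1], 1.0)):
        val = np.array([1.0, 0.0]) if n < 0 else np.array([0.0, 1.0])  # traces
        for a in range(2):
            for c in range(2):
                A[dofs[a], dofs[c]] += (-(n*dphi[a]*val[c] + n*dphi[c]*val[a])
                                        + sigma / h * val[a]*val[c])
    return np.linalg.solve(A, b)
```

Solving with $f(x)=\pi^2\sin(\pi x)$, so that $u(x)=\sin(\pi x)$, and halving $h$ roughly quarters the maximal nodal error, consistent with second-order convergence for $r=1$ in this one-dimensional setting.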
We are now in a position to state the main results of this paper; to do so, we need additional notation. Suppose that we are given an open disk $D \Subset \Omega$ with center $x_0$ and radius $R$.
In the disk $D$, we consider an auxiliary Neumann problem:
\begin{equation}
\left\{
\begin{array}{rcc}
-\Delta w + w = \varphi & \text{in} & D \\
\partial_n w = 0 & \text{on} & \partial D,
\end{array}\right.
\label{eq:circle}
\end{equation}
where $\partial_n=n\cdot \nabla$ denotes the outward normal derivative to $\partial D$.
Because $\partial D$ is smooth, for a given $\varphi\in L^2(D)$, there exists a unique solution $w\in H^2(D)$ of \eqref{eq:circle}; this correspondence is denoted by $w=\mathcal{G}_D\varphi$. We recall the $W^{2,p}$ and $W^{1,1}$ regularity results in Section \ref{sect2}. The DG bilinear form $a_D^1$ corresponding to \eqref{eq:circle} is introduced as
\begin{multline}
a_D^1(u,v) \coloneqq \sum_{K \in \mathcal{T}_h} \int_{K \cap D}( \nabla u \cdot \nabla v + uv)~dx \\
-\sum_{e \in \mathcal{E}_h} \int_{e\cap \overline D}(\mean{\nabla u}\jump{v} + \mean{\nabla v}\jump{u})~ds
+ \sum_{e \in \mathcal{E}_h}\frac{\sigma}{h_e}\int_{e\cap \overline D} \jump{u}\jump{v} ~ds. \label{eq:bilinearoncircle}
\end{multline}
We introduce the operator $\Pi^1_h\colon L^2(D)\to V_h(D)$ by
\begin{equation}
a_D^1(\mathcal{G}_D\varphi - \Pi_h^1 \varphi , \chi) = 0 \quad {}^\forall \chi \in V_h(D) \label{eq:circlegalerkin}
\end{equation}
and make the assumption below:
\begin{asm}\label{asm1}
There exist a function $\alpha\colon \mathbb{R}_+=(0,\infty)\to \mathbb{R}_+$ and a constant $C>0$, both independent of $h$, such that
\begin{gather}
\mbox{$\alpha$ is bounded in a neighborhood of $0$;}\label{eq:asm1a}\\
a_D^1(\mathcal{G}_D \varphi -\Pi_h^1 \varphi,v) \le C h\alpha(h) \norm{\varphi}_{L^2(D)}\norm{v}_{V^{\infty}(D)} \quad {}^\forall \varphi \in L^2(D),\ {}^\forall v \in V^{\infty}(D) \label{eq:asm1}
\end{gather}
for a sufficiently small $h$.
\end{asm}
\begin{rem}
In view of \eqref{eq:circleconti} and \eqref{eq:circlel1} of Proposition \ref{prop:D},
we can take at least $\alpha(h) = 1$.
\end{rem}
Using this $\alpha(h)$, we define $\norm{\cdot}_{\alpha(h),\mathcal{O}}$ as
\begin{equation}
\norm{v}_{\alpha(h),\mathcal{O}} \coloneqq \norm{v}_{L^\infty(\mathcal{O})} + \alpha(h)\norm{v}_{V^\infty(\mathcal{O})}
\end{equation}
for $\mathcal{O}\subset \Omega$ or $\mathcal{O}=\Omega$.
Our first result is the following interior error estimate in the $L^\infty$ norm.
\begin{thm}[$L^\infty$ interior error estimate]\label{thm1}
Letting $u \in V^\infty$ and $u_h \in V_h$ satisfy \eqref{eq:go}, and supposing that $\kappa>0$ and open sets $\Omega_0 \subset \Omega_1 \subset \Omega$ satisfy $\operatorname{dist}(\Omega_0,\partial\Omega_1) \ge \kappa h$, we have, under Assumption \ref{asm1},
\begin{equation}
\norm{u-u_h}_{L^\infty(\Omega_0)} \le C \left(\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega_1}+\norm{u-u_h}_{L^2(\Omega_1)}\right) \label{eq:localerror}
\end{equation}
for a sufficiently small $h$.
\end{thm}
The next result is the weak discrete maximum principle.
\begin{thm}[Weak discrete maximum principle]\label{thm2}
Supposing that Assumption \ref{asm1} is satisfied and letting $u_h \in V_h$ be a discrete harmonic function, i.e.,
\begin{equation}
a(u_h,\chi) = 0 \quad{}^\forall \chi \in \mathring V_h, \label{eq:discreteharmonic}
\end{equation}
then we have
\begin{equation}
\norm{u_h}_{L^\infty(\Omega)} \le C \norm{u_h}_{ L^\infty(\partial \Omega)} \label{eq:dmp}
\end{equation}
for a sufficiently small $h$.
\end{thm}
To state the final $L^\infty$ error estimate, we make the following assumption on triangulations.
\begin{asm}\label{asm2}
There exist a convex polygonal domain $\widetilde \Omega \Supset \Omega$ and its triangulation $\widetilde{\mathcal{T}}_h$
such that $\mathcal{T}_h$ is the restriction of $\widetilde{\mathcal{T}}_h$ to $\Omega$, and such that
\eqref{eq:triangulation} holds for any $\widetilde K \in \widetilde{\mathcal{T}}_h\in \{\widetilde{\mathcal{T}}_h\}_h$ with the same constant $C_*$.
\end{asm}
We define $\widetilde{\mathcal{E}}_h$, $V^p(\widetilde \Omega)$ and $V_h(\widetilde \Omega)$ similarly, using $\widetilde{\mathcal{T}}_h$.
\begin{thm}[$L^\infty$ error estimate]\label{thm3}
Letting $u \in V^\infty$ and $u_h \in V_h$ satisfy \eqref{eq:go}, and supposing that Assumptions \ref{asm1} and \ref{asm2} are satisfied, we have
\begin{equation}
\norm{u-u_h}_{L^\infty(\Omega)} \le C \left(\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega} + \norm{u-u_h}_{L^\infty(\partial \Omega)}\right) \label{eq:thm3}
\end{equation}
for a sufficiently small $h$.
\end{thm}
\section{Preliminaries}
\label{sect2}
In this section, we collect some preliminary results.
\subsection{Some local estimates}
\label{sec:l2}
For $x_0 \in \mathbb{R}^2$ and $d>0$, we denote by $B_d(x_0)$ the open disk with center $x_0$ and radius $d$. We define $N_d(\Omega_0) \coloneqq \{ x \in \overline\Omega \colon \operatorname{dist}{(x,\Omega_0)} < d \}$ and $S_d(x_0) \coloneqq N_d(\{x_0\}) = \overline\Omega \cap B_d(x_0)$.
For $\Omega_0 \subset \Omega_1 \subset \Omega$, we define
\[
d(\Omega_0,\Omega_1) \coloneqq \operatorname{dist}{(\Omega_0,\partial \Omega_1)},\quad d_{\Omega}(\Omega_0,\Omega_1) \coloneqq \operatorname{dist}{(\Omega_0,\partial \Omega_1 \setminus \partial\Omega)}.
\]
If $d_{\Omega}(\Omega_0,\Omega_1) \ge d$, then we have $N_d(\Omega_0) \subset \Omega_1$.
First, we recall further regularity results for the solution of \textup{(BVP;$f,0$)}; see \cite[Lemma 1.2, Lemma 1.3]{MR551291} for more details.
\begin{prop}
\label{prop:poisson2}
Under the same setting as in Proposition \ref{prop:poisson}, we have the following.
\begin{enumerate}
\item[\textup{(iii)}] \label{prop:poisson3} Assume that $f \in L^2(\Omega)$ and $\operatorname{supp}{f} \subset S_d(x_0)$ for some $d > 0$ and $x_0 \in \overline \Omega$ with $\operatorname{dist}(x_0,\partial \Omega) \le d$.
Then, we have
\begin{equation}
\abs{u}_{H^1(\Omega)} \le C d \norm{f}_{L^2(S_d(x_0))}. \label{eq:localstability}
\end{equation}
\item[\textup{(iv)}] \label{prop:poisson4} Assume that $\Omega_0 \subset \Omega_1 \subset \Omega$ and $d>0$ with $d_{\Omega}(\Omega_0,\Omega_1) \ge d$.
Then, there exists a positive constant $C$ such that
\begin{equation}
\abs{u}_{W^{2,p}(\Omega_0)} \le C(\norm{f}_{L^p(\Omega_1)}+d^{-1}\abs{u}_{W^{1,p}(\Omega_1)}+d^{-2}\norm{u}_{L^p(\Omega_1)}) \label{interiorstability}
\end{equation}
under the same assumption of \textup{(i)} or \textup{(ii)} in Proposition \ref{prop:poisson}.
\end{enumerate}
\end{prop}
A version of the Poincar\'e inequality is available (see \cite[Lemma 1.1]{MR551291}).
\begin{prop}\label{prop:poincare}
Let $\Omega$ be a simply connected polygonal domain. Then, there exists a positive constant $C$ satisfying
\begin{equation}
\norm{v}_{L^2(S_d(x_0))} \le C d\abs{v}_{H^1(S_d(x_0))} \quad {}^\forall v \in H^1_0(\Omega)\label{eq:poincare}
\end{equation}
for all $x_0 \in \partial \Omega$ and $d>0$.
\end{prop}
\begin{prop}\label{prop:holder}
For a domain $\Omega_0 \subset \Omega$ and $1 \le p < q \le \infty$, we have $V^q(\Omega_0) \subset V^p(\Omega_0)$.
In particular, there exists a positive constant $C$ independent of $h$ and $\Omega_0$ satisfying
\begin{equation}
\norm{v}_{V^p(\Omega_0)} \le C \abs{\Omega_0}_2^{\frac{1}{p}-\frac{1}{q}} \norm{v}_{V^q(\Omega_0)} \label{eq:holder}
\end{equation}
for a sufficiently small $h$.
\end{prop}
\begin{proof}
Using H\"older's inequality, we have
\begin{align*}
\norm{v}_{V^p(\Omega_0)}^p &\le C \abs{\Omega_0}_2^{\frac{q-p}{q}}\left(\sum_{K \in \mathcal{T}_h} \norm{v}_{W^{1,q}(K \cap \Omega_0)}^q\right)^{\frac{p}{q}}
\\ &\quad+ C\left(\sum_{e \in \mathcal{E}_h} h_e \abs{e\cap\overline\Omega_0}_1\right)^{\frac{q-p}{q}}\left(\sum_{e \in \mathcal{E}_h} h_e^{1-q}\norm{\jump{v}}_{L^q(e \cap \overline \Omega_0 )}^q
+ \sum_{e \in \mathcal{E}_h} h_e \norm{\mean{\nabla v}}_{L^q(e \cap \overline \Omega_0)}^q\right)^{\frac{p}{q}}.
\end{align*}
Moreover,
\begin{align*}
\sum_{e \in \mathcal{E}_h} h_e \abs{e\cap\overline\Omega_0}_1 \le C\sum_{K \in \mathcal{T}_h(\Omega_0)}\abs{K}_2 \le C\abs{\Omega_0}_2,
\end{align*}
where $\mathcal{T}_h(\Omega_0) \coloneqq \{K \in \mathcal{T}_h \colon \overline K \cap \overline \Omega_0 \ne \emptyset\}$. Consequently, \eqref{eq:holder} follows.
\end{proof}
For $\mathcal{O}\subset\Omega$, we define the broken Sobolev space $W^{j,p}_h(\mathcal{O})$ as
\[
W^{j,p}_h(\mathcal{O})=W^{j,p}_h(\mathcal{O},\mathcal{T}_h) \coloneqq\{v \in L^p(\mathcal{O}) \colon v|_{K\cap \mathcal{O}} \in W^{j,p}(K\cap \mathcal{O})\ {}^\forall K \in \mathcal{T}_h\}
\]
equipped with the norm
\[
\norm{v}_{W^{j,p}_h(\mathcal{O})} = \left(\sum_{K \in \mathcal{T}_h} \norm{v}_{W^{j,p}( K \cap \mathcal{O})}^p\right)^{{1}/{p}}.
\]
The following results are available; see \cite[Chapter 3]{MR0520174}, \cite[Propositions 2.1 and 2.2]{MR2113680} and \cite[Proposition 2.2]{MR0431753}.
\begin{prop}\label{prop:bestapprx}
Let $1 \le p \le \infty$, and $0 \le i \le 1 \le j \le 1 + r$.
Assume that $\kappa>0$ and open sets $\Omega_0 \subset \Omega_1 \subset \Omega$ satisfy $d_{\Omega}(\Omega_0,\Omega_1)\ge\kappa h$.
Then, there exists a positive constant $C$, independent of $h$, such that for each $v \in W^{j,p}_h(\Omega_1)$ there exists $\chi \in V_h(\Omega_1)$ satisfying
\begin{equation}
\norm{v-\chi}_{W^{i,p}_h(\Omega_0)} \le C h^{j-i}\norm{v}_{W^{j,p}_h(\Omega_1)}. \label{eq:bestapprx}
\end{equation}
\end{prop}
\begin{prop}\label{prop:inverseineq}
Let $1 \le p \le \infty$.
Assume that $\kappa>0$ and open sets $\Omega_0 \subset \Omega_1 \subset \Omega$ satisfy $d_{\Omega}(\Omega_0,\Omega_1)\ge\kappa h$.
Then, there exists a positive constant $C$ independent of $h$ satisfying
\begin{equation}
\norm{v_h}_{V^p(\Omega_0)} \le C h^{-1} \norm{v_h}_{L^p(\Omega_1)} \label{eq:inverseineq}
\end{equation}
for $v_h \in V_h(\Omega_1)$.
\end{prop}
\begin{prop}\label{prop:superapprx}
Let $\Omega_1 \Subset \Omega_2 \Subset \Omega_3 \Subset\Omega_4 \Subset \Omega$ be open sets.
Then, there exists a positive constant $C$, independent of $h$, such that the following property holds for a sufficiently small $h$:
for each $\eta_h \in V_h(\Omega_4)$, there exists $\chi \in \mathring V_h(\Omega_3)$ satisfying $\chi \equiv \eta_h$ on $\Omega_2$ and
\begin{equation}
\norm{\eta_h-\chi}_{V^2(\Omega_3)} \le C \norm{\eta_h}_{V^2(\Omega_4\setminus \Omega_1)}.
\end{equation}
\end{prop}
\subsection{$L^2$ theory for DG method}
\label{sec:l2dg}
The following results, Propositions \ref{prop:energyerror} and \ref{prop:interiorl2}, are well-known (see \cite[\S 4]{MR1885715} and \cite[\S 3 and \S 4]{MR2113680}).
\begin{prop}\label{prop:energyerror}
For $f\in L^2(\Omega)$ and $g\in H^{1/2}(\partial\Omega)$,
there exists a unique solution $u_h \in V_h$ of DG scheme \textup{(DG;$f,g$)}.
In addition, if the solution $u$ of \eqref{eq:poisson} belongs to $H^s(\Omega)$ with $s>\frac{3}{2}$, then, we have
\begin{equation}
\norm{u-u_h}_{V^2(\Omega)} \le C \inf_{\chi \in V_h}\norm{u-\chi}_{V^2(\Omega)}. \label{eq:energyestimate}
\end{equation}
Moreover, if $u \in H^{r+1}(\Omega)$, we have
\begin{equation}
\|u-u_h\|_{L^2(\Omega)}+h\norm{u-u_h}_{V^2(\Omega)} \le C h^{r+1}\abs{u}_{H^{r+1}(\Omega)}.
\label{eq:energyestimate1}
\end{equation}
\end{prop}
\begin{prop}\label{prop:interiorl2}
Assume that $\kappa>0$ and open sets $\Omega_0 \subset \Omega_1 \subset \Omega$ satisfy $d = d_{\Omega}(\Omega_0,\Omega_1) \ge \kappa h$.
We set $S_h = V_h$ or $\mathring V_h$.
If $S_h = V_h$, we also assume that $d(\Omega_0,\Omega_1) >0$.
If $u \in H^1(\Omega)$ and $u_h \in S_h$ satisfy
\[
a(u-u_h,\chi) = 0 \quad {}^\forall \chi \in \mathring V_h
\]
then there exists a positive constant $C_1$ independent of $h$, $u$, and $u_h$ satisfying
\begin{equation}
\norm{u-u_h}_{V^2(\Omega_0)} \le C_1\left( \inf_{\chi \in S_h} \norm{u-\chi}_{V^2(\Omega_1)} + \norm{u-u_h}_{L^2(\Omega_1)}\right). \label{dg:interiorerror1}
\end{equation}
Moreover, if $u \in H^{1+r}(\Omega)$, there exists a positive constant $C_2$ independent of $h$, $u$, $u_h$, and $d$ satisfying
\begin{equation}
\norm{u-u_h}_{V^2(\Omega_0)} \le C_2\left(h^r \norm{u}_{H^{1+r}(\Omega_1)}+d^{-1}\norm{u-u_h}_{L^2(\Omega_1)}\right). \label{eq:interiorerror2}
\end{equation}
\end{prop}
\subsection{Estimates on the disk}
\label{sec:disk}
We here state some estimates for functions defined on the open disk $D$ with center $x_0$ and radius $R$. Recall that we consider the Neumann boundary value problem \eqref{eq:circle} and the corresponding bilinear form $a_D^1$ defined as \eqref{eq:bilinearoncircle}.
For the positive constant $c$, we denote by $cD$ an open disk with center $x_0$ and radius $cR$.
The following property (i) is well-known (see \cite{MR0188615} for example). However, we could find no explicit reference for (ii), so we prove it in essentially the same way as \cite{MR0336050}.
\begin{prop}\label{prop:circleregulality}
\begin{enumerate}
\item[\textup{(i)}] If $f \in L^p(D)$ for some $1 < p < \infty$, then the solution $u \in W^{2,p}(D)$ of \eqref{eq:circle} exists and satisfies
\begin{equation}
\norm{u}_{W^{2,p}(D)} \le C \norm{f}_{L^p(D)}. \label{eq:circleregulality}
\end{equation}
\item[\textup{(ii)}] If $f \in L^1(D)$, then the weak solution $u \in W^{1,1}(D)$ of \eqref{eq:circle} exists and satisfies
\begin{equation}
\norm{u}_{W^{1,1}(D)} \le C \norm{f}_{L^1(D)}. \label{eq:circlel1}
\end{equation}
\end{enumerate}
\end{prop}
\begin{prop}
\label{prop:D}
\textup{(i)} Consistency.
If the solution $u$ of \eqref{eq:circle} belongs to $H^s(D)$ with $s > \frac{3}{2}$, then we have
\begin{equation}
a_D^1(u,v) = (f,v)_D \quad {}^\forall v \in V^2(D). \label{eq:circleconsistency}
\end{equation}
\textup{(ii)} Continuity. For $1 \le p \le \infty$, we have
\begin{equation}
a_D^1(u,v) \le C \norm{u}_{V^p(D)}\norm{v}_{V^{p'}(D)} \quad {}^\forall u \in V^p(D), {}^\forall v \in V^{p'}(D). \label{eq:circleconti}
\end{equation}
\textup{(iii)} Coercivity.
\begin{equation}
a_D^1(\chi,\chi) \ge C \norm{\chi}_{V^2(D)}^2 \quad {}^\forall \chi \in V_h(D). \label{eq:circlecoercive}
\end{equation}
\end{prop}
\begin{lem}\label{lem1}
Assume that $\tilde u \in V^\infty(D)$ satisfies $\operatorname{supp} \tilde u \subset \frac{1}{2}D$.
Let $\tilde u_h \in V_h(D)$ satisfy
\[
a_D^1(\tilde u - \tilde u_h ,\chi) = 0 \quad {}^\forall \chi \in V_h(D).
\]
Then, under Assumption \ref{asm1}, there exists a positive constant $C$ independent of $h$ and $\tilde u$ satisfying
\begin{equation}
\norm{\tilde u - \tilde u_h}_{L^\infty(\frac{1}{4}D)} \le C \norm{\tilde u}_{\alpha(h),D} \label{eq:lem1}
\end{equation}
for a sufficiently small $h$.
\end{lem}
\begin{proof}
We take $x_1 \in \frac{1}{4}D$ such that $\abs{\tilde u(x_1)-\tilde u_h(x_1)} = \norm{\tilde{u}-\tilde{u}_h}_{L^\infty(\frac{1}{4}D)}$.
For a sufficiently large $M$, we denote by $D_h \subset D$ the open disk with center at $x_1$ and radius $Mh$. Then, we have
\begin{align*}
\abs{\tilde u(x_1)-\tilde u_h (x_1)} & \le \abs{\tilde u(x_1)} + \abs{\tilde u_h(x_1)} \\
& \le \abs{\tilde u (x_1)} + Ch^{-1}\norm{\tilde u_h}_{L^2(D_h)} \\
& \le C\norm{\tilde u}_{L^\infty(D_h)} + Ch^{-1}\norm{\tilde u - \tilde u_h}_{L^2(D_h)}.
\end{align*}
For $\phi \in C_0^\infty(D_h)$, we set $v = \mathcal{G}_D \phi$ and $v_h = \Pi_h^1 \phi$.
Then,
\begin{align*}
(\tilde u - \tilde u_h,\phi)_{D_h} &= a_D^1(\tilde u -\tilde u_h , v) = a_D^1(\tilde u - \tilde u_h,v-v_h) =a_D^1(\tilde u ,v-v_h)\\
& \le C h \alpha(h)\norm{\tilde u}_{V^\infty(D)}\norm{\phi}_{L^2(D_h)}.
\end{align*}
Therefore, we deduce $\norm{\tilde u -\tilde u_h}_{L^2(D_h)} \le Ch\alpha(h) \norm{\tilde u}_{V^\infty(D)}$, which implies \eqref{eq:lem1}.
\end{proof}
\begin{lem}\label{lem2}
Assume that $w_h \in V_h(D)$ satisfies
\[
a_D^1(w_h,\chi) = 0 \quad {}^\forall \chi \in \mathring V_h(D).
\]
Then, there exists a positive constant $C$ independent of $h$ and $w_h$ satisfying
\begin{equation}
\abs{w_h(x_0)} \le C \norm{w_h}_{L^2(D)} \label{eq:lem2}
\end{equation}
for a sufficiently small $h$.
\end{lem}
\begin{proof}
In view of Proposition \ref{prop:superapprx}, there exists $\eta_h \in \mathring V_h(\frac{3}{4}D)$ satisfying $\eta_h \equiv w_h$ on $\frac{1}{2}D$ and
$\norm{\eta_h}_{V^2(\frac{3}{4}D)} \le C \norm{w_h}_{V^2(D)}$.
For a sufficiently large $M$, let $D_h \subset D$ be the disk with center at $x_0$ and radius $Mh$.
Then,
\begin{align}
\abs{w_h(x_0)} = \abs{\eta_h(x_0)} \le C h^{-1} \norm{\eta_h}_{L^2(D_h)}. \label{eq:lem2first}
\end{align}
For $\phi \in C_0^\infty(D_h)$, we set $v = \mathcal{G}_D \phi$ and $v_h = \Pi_h^1 \phi$.
According to Proposition \ref{prop:superapprx}, there exists $\chi_h \in \mathring V_h(\frac{1}{2}D)$ satisfying $\chi_h \equiv v_h$ on $\frac{1}{4}D$ and
$\norm{v_h-\chi_h}_{V^2(\frac{3}{4}D)} \le C \norm{v_h}_{V^2(\frac{3}{4}D\setminus\frac{1}{8}D)}$.
We can estimate this as
\begin{align}
(\eta_h,\phi)_{D_h} &= a_D^1(\eta_h,v) = a_D^1(\eta_h,v_h) = a_D^1(\eta_h,v_h-\chi_h) \nonumber \\
& \le C \norm{\eta_h}_{V^2(\frac{3}{4}D)} \norm{v_h-\chi_h}_{V^2(\frac{3}{4}D)} \nonumber \\
& \le C \norm{w_h}_{V^2(D)}\norm{v_h}_{V^2(\frac{3}{4}D\setminus\frac{1}{8}D)}.
\end{align}
Because $a_D^1(v_h,\chi) = (\phi,\chi)_D = 0$ for $\chi \in \mathring V_h(D\setminus D_h)$, using Proposition \ref{prop:interiorl2} and the Sobolev inequality, we obtain
\begin{align}
\norm{v_h}_{V^2(\frac{3}{4}D \setminus \frac{1}{8}D)} &\le C \norm{v_h}_{L^2(D)} \nonumber\\
& \le C\left(\norm{v-v_h}_{W^{1,1}_h(D)} + \norm{v}_{W^{1,1}(D)}\right),
\end{align}
where $\norm{v}_{W^{1,1}_h(D)} \coloneqq \sum_{K \in \mathcal{T}_h} \norm{v}_{W^{1,1}(K \cap D)}$.
By Proposition \ref{prop:energyerror} and estimate \eqref{eq:circleregulality}, we have
\begin{align}
\norm{v-v_h}_{W^{1,1}_h(D)} &\le C\left(\sum_{K \in \mathcal{T}_h}\norm{v-v_h}_{H^1(K\cap D)}^2\right)^{1/2} \nonumber \\
& \le C h \norm{v}_{H^2(D)} \nonumber \\
& \le C h \norm{\phi}_{L^2(D_h)}.
\end{align}
Using Proposition \ref{prop:circleregulality}, we have
\begin{equation}
\norm{v}_{W^{1,1}(D)} \le C \norm{\phi}_{L^1(D_h)} \le Ch\norm{\phi}_{L^2(D_h)}. \label{eq:lem2last}
\end{equation}
From \eqref{eq:lem2first}--\eqref{eq:lem2last}, we obtain \eqref{eq:lem2}.
\end{proof}
\section{Interior error estimates (Proof of Theorem \ref{thm1})}
\label{s:I}
We first consider the homogeneous Neumann boundary value problem in $\Omega$.
Set $a^1(u,v) \coloneqq a(u,v) + (u,v)_\Omega$.
\begin{lem}\label{thm1:coercive}
Assume that $\kappa>0$ and open sets $\Omega_0 \subset \Omega_1 \subset \Omega$ satisfy $d = d(\Omega_0,\Omega_1) \ge \kappa h$.
Let $u \in V^\infty$ and $u_h \in V_h$ satisfy
\[
a^1(u-u_h,\chi) = 0 \quad {}^\forall \chi \in \mathring V_h.
\]
Then, under Assumption \ref{asm1}, there exists a positive constant $C$ independent of $h$, $u$, and $u_h$ satisfying
\begin{equation}
\norm{u-u_h}_{L^\infty(\Omega_0)} \le C \left(\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega_1}+\norm{u-u_h}_{L^2(\Omega_1)}\right) \label{eq:coercivelocalerror}
\end{equation}
for a sufficiently small $h$.
\end{lem}
\begin{proof}
Letting $\chi\in V_h$ be arbitrary, we set $v=u-\chi$ and $v_h=u_h-\chi$.
We take $x_0 \in \Omega_0$ such that $\abs{v(x_0)-v_h(x_0)}= \norm{v-v_h}_{L^\infty(\Omega_0)}= \norm{u-u_h}_{L^\infty(\Omega_0)}$.
Let $D\Subset \Omega_1$ be an open disk with center at $x_0$ and radius $R < d$.
Because $d \ge \kappa h$, we can take $R$ independent of $x_0$.
Letting $\omega \in C^\infty_0(\frac{1}{2}D)$ satisfy $0 \le \omega \le 1 $ and $\omega \equiv 1$ on $\frac{1}{4}D$, we set $\tilde v = \omega v \in V^\infty(D)$. We define $\tilde v_h \in V_h(D)$ as
\[
a_D^1(\tilde v - \tilde v_h ,\xi) = 0 \quad {}^\forall \xi \in V_h(D).
\]
Then, $a_D^1(\tilde v_h - v_h,\eta_h) = a^1(v-v_h,\eta_h) = 0$ for $\eta_h \in \mathring V_h(\frac{1}{4}D)$, because $v=v_h$ in $\operatorname{supp} \eta_h\subset \frac14 D$.
Using Lemmas \ref{lem2} and \ref{lem1}, we have
\begin{align*}
\abs{\tilde v_h(x_0) - v_h(x_0)}
&\le C \norm{\tilde v_h -v_h}_{L^2(\frac{1}{4}D)} \\
& \le C \norm{\tilde v -\tilde v_h}_{L^\infty(\frac{1}{4}D)} + C \norm{v-v_h}_{L^2(\frac{1}{4}D)} \\
& \le C \norm{\tilde v}_{\alpha(h),\frac{1}{2}D} + C \norm{v-v_h}_{L^2(\frac14 D)}\\
& \le C \norm{v}_{\alpha(h),D} + C \norm{v-v_h}_{L^2(\frac14 D)}.
\end{align*}
Therefore,
\begin{align*}
\norm{u-u_h}_{L^\infty(\Omega_0)}&= \abs{v(x_0)-v_h(x_0)} \\
& \le \abs{\tilde v(x_0)-\tilde v_h(x_0)} + \abs{\tilde v_h(x_0) - v_h(x_0)}\\
& \le C \norm{v}_{\alpha(h),D} + C \norm{v-v_h}_{L^2(\frac14 D)}\\
&\le C\norm{u-\chi}_{\alpha(h),\Omega_1} + C\norm{u-u_h}_{L^2(\Omega_1)}.
\end{align*}
\end{proof}
\begin{lem}\label{lem3}
Assume that $D' \Subset D \subset \Omega$ are open disks with the same center.
Let $u \in V^\infty(D)$ and $u_h \in V_h(D)$ satisfy
\[
a_D(u-u_h,\chi)=0\quad{}^\forall \chi \in \mathring V_h(D)
\]
where $a_D(v,w) = a_D^1(v,w)-(v,w)_D$.
Then, under Assumption \ref{asm1}, for $p$ and $q$ satisfying $2 \le q < p \le \infty$ and $\frac{1}{q} < \frac{1}{p} + \frac{1}{2}$, we have
\begin{equation}
\norm{u-u_h}_{L^p(D')} \le C \left(\norm{u}_{\alpha(h),D} + \norm{u-u_h}_{L^q(D)}\right) \label{eq:lem3}
\end{equation}
for a sufficiently small $h$.
\end{lem}
\begin{proof}
Setting $\psi = \mathcal{G}_D(u-u_h)$, $\psi_h = \Pi_h^1(u-u_h)$, we have
\[
a_D^1(u-u_h-\psi_h,\chi) = 0 \quad{}^\forall \chi \in \mathring V_h(D).
\]
We apply Lemma \ref{thm1:coercive} to obtain
\begin{equation*}
\norm{u-u_h-\psi_h}_{L^\infty(D')} \le C\norm{u}_{\alpha(h),D} + C\norm{u-u_h}_{L^2(D)} + C\norm{\psi_h}_{L^2(D)}.
\end{equation*}
On the other hand, using \eqref{eq:circlecoercive}, we have
\begin{equation*}
\norm{\psi_h}_{L^2(D)}^2 \le C a_D^1(\psi_h,\psi_h)= C (u-u_h,\psi_h)_{D} \le C\norm{u-u_h}_{L^2(D)}\norm{\psi_h}_{L^2(D)}
\end{equation*}
and, therefore,
\begin{align}
\norm{u-u_h}_{L^p(D')}& \le \norm{u-u_h-\psi_h}_{L^p(D')} + \norm{\psi_h}_{L^p(D')} \nonumber \\
& \le C \norm{u}_{\alpha(h),D} + C\norm{u-u_h}_{L^2(D)} + \norm{\psi_h}_{L^p(D')}. \label{thm1coercive:second}
\end{align}
Because $\psi$ is smooth, we again apply Lemma \ref{thm1:coercive} to obtain
\begin{align*}
\norm{\psi_h}_{L^\infty(D')} & \le \norm{\psi-\psi_h}_{L^\infty(D')} + \norm{\psi}_{L^\infty(D')} \\
& \le C\norm{\psi}_{\alpha(h),D} + C\norm{\psi-\psi_h}_{L^2(D)} \\
&\le C \norm{\psi}_{\alpha(h),D} \\
&\le C\left(\norm{\psi}_{W^{1,\infty}(D)} + \max_{e \in \mathcal{E}_h,e\cap \overline D \ne \emptyset}\norm{\nabla \psi}_{L^\infty(e\cap\overline D)}\right).
\end{align*}
The Sobolev inequality and elliptic regularity give
\[
\norm{\psi}_{W^{1,\infty}(D)} \le C \norm{\psi}_{W^{2,s}(D)} \le C \norm{u-u_h}_{L^s(D)}
\]
for $s >2$.
Because
\[
\abs{\nabla \psi (x)} \le C \int_D \frac{\abs{u(y)-u_h(y)}}{\abs{x-y}} dy \le C \norm{u-u_h}_{L^s(D)}
\]
for $x \in D$, we have
\begin{equation}
\norm{\psi_h}_{L^\infty(D')} \le C \norm{u-u_h}_{L^s(D)}. \label{thm1:linfty}
\end{equation}
Similarly, we deduce
\begin{align*}
\norm{\psi_h}_{L^2(D')} &\le C \norm{\psi}_{V^2(D)} \\
& \le C\norm{\psi}_{H^1(D)} + C \left( \sum_{e \in \mathcal{E}_h,e\cap \overline D \ne \emptyset} h_e\norm{\nabla \psi}_{L^2(e\cap\overline D)}^2\right)^{1/2}.
\end{align*}
Applying the Young inequality for convolution, we have
\begin{align*}
\abs{\nabla \psi(x)} &\le C \int_D\frac{\abs{u(y)-u_h(y)}}{\abs{x-y}} dy \\
&\le C \norm{u-u_h}_{L^t(D)} \norm{\abs{x-y}^{-1}}_{L^{2t/(3t-2)}(D)} \\
& \le C \norm{u-u_h}_{L^t(D)}
\end{align*}
for $x \in D$ and $1<t<2$.
Therefore,
\begin{align}
\norm{\psi_h}_{L^2(D')} &\le C\norm{\psi}_{W^{2,t}(D)} + C \left( \sum_{e \in \mathcal{E}_h,e\cap \overline D \ne \emptyset}h_e^2\norm{u-u_h}_{L^t(D)}^2\right)^{1/2} \nonumber \\
& \le C \norm{u-u_h}_{L^t(D)}. \label{thm1:l2}
\end{align}
In view of the Riesz--Thorin interpolation theorem, we have
\begin{align}
\norm{\psi_h}_{L^p(D')} \le C \norm{u-u_h}_{L^q(D)}
\end{align}
for $\theta = \frac{2}{p}$, and $\frac{1}{q} = \frac{1-\theta}{s} + \frac{\theta}{t} < \frac{1}{2}+\frac{1}{p}$. This, together with \eqref{thm1coercive:second}, implies \eqref{eq:lem3}.
\end{proof}
We are now in a position to prove Theorem \ref{thm1}.
\begin{proof}[Proof of Theorem \ref{thm1}.]
Letting $\chi\in V_h$ be arbitrary, we set $v=u-\chi$ and $v_h=u_h-\chi$.
Similar to the proof of Lemma \ref{thm1:coercive}, we take $x_0 \in \Omega_0$, $D \Subset \Omega_1$, and $\omega \in C^\infty_0(\frac{1}{2}D)$.
Setting $\tilde v = \omega v \in V^\infty(D)$, we define $\tilde v_h \in V_h(D)$ as
\[
a_D^1(\tilde v - \tilde v_h ,\xi) = 0 \quad {}^\forall \xi \in V_h(D).
\]
Then, Lemma \ref{lem1} gives
\[
\norm{\tilde v -\tilde v_h}_{L^\infty(\frac{1}{4}D)} \le C\norm{\tilde v}_{\alpha(h),D}.
\]
Setting $D' = \varepsilon D$ for $\varepsilon < \frac{1}{4}$, we let $\psi$ be the unique solution of
\[
\left\{ \begin{array}{ccc}
-\Delta \psi = -(\tilde v_h - \tilde v) & \text{in} & D' \\
\psi = 0 & \text{on} & \partial D'
\end{array} \right..
\]
Then, we have
\[
a_D(\psi,w) = -(\tilde v_h - \tilde v,w)_{D'} \quad {}^\forall w \in \{w \in V^2(D') \colon w|_{\partial D'} = 0 \}
\]
and
\[
a_D(\psi-(v_h-\tilde v_h),\xi) = 0 \quad {}^\forall \xi \in \mathring V_h(D').
\]
Applying Lemma \ref{lem3} several times, we obtain
\begin{align*}
\norm{\psi-(v_h-\tilde v_h)}_{L^\infty(\frac{1}{4}D')} & \le C \norm{\psi}_{\alpha(h),D'} + C \norm{\psi-(v_h-\tilde v_h)}_{L^2(D')} \\
& \le C \norm{\psi}_{\alpha(h),D'} + C \norm{v-v_h}_{L^2(D')} + C \norm{\tilde v - \tilde v_h}_{L^2(D')} \\
& \le C \norm{\psi}_{\alpha(h),D'} + C \norm{v-v_h}_{L^2(D')} + C\norm{\tilde v}_{\alpha(h),D}.
\end{align*}
Then, we deduce $\norm{\psi}_{\alpha(h),D'} \le C \norm{\tilde v -\tilde v_h}_{L^\infty(D')} \le C\norm{v}_{\alpha(h),D}$ in a similar way to the proof of Lemma \ref{lem3}.
Using the triangle inequality, we have
\begin{align*}
\norm{v_h-\tilde v_h}_{L^\infty(\frac14D')}\le C\norm{v}_{\alpha(h),D} + C\norm{v-v_h}_{L^2(D)}.
\end{align*}
Therefore,
\begin{align*}
\norm{u-u_h}_{L^\infty(\Omega_0)} &\le \abs{\tilde v(x_0)-\tilde v_h(x_0)} + \abs{\tilde v_h(x_0) - v_h(x_0)} \\
& \le C\norm{v}_{\alpha(h),D} + C\norm{v-v_h}_{L^2(D)} \\
&\le C\norm{u-\chi}_{\alpha(h),\Omega_1} + C\norm{u-u_h}_{L^2(\Omega_1)}.
\end{align*}
\end{proof}
\begin{cor}\label{cor1}
Under the same assumptions as in Theorem \ref{thm1}, we further assume that $d(\Omega_1,\partial\Omega) \ge \kappa h$ and $u \in W^{1+r,\infty}(\Omega)$.
Then, there exists a positive constant $C$ independent of $h$, $u$, $u_h$, and $d$ satisfying
\begin{equation}
\norm{u-u_h}_{L^\infty(\Omega_0)} \le C h^r\left[h+\alpha\left(\frac{h}{d}\right)\right]\norm{u}_{W^{1+r,\infty}(\Omega_1)} + Cd^{-1}\norm{u-u_h}_{L^2(\Omega_1)} \label{eq:cor1}
\end{equation}
for a sufficiently small $h$.
\end{cor}
\section{Weak discrete maximum principle (Proof of Theorem \ref{thm2})}
\label{s:II}
To prove Theorem \ref{thm2}, we follow the method of the proof of Theorem 1 in \cite{MR551291}.
\begin{proof}[Proof of Theorem \ref{thm2}.]
Let $x_0 \in \Omega$ satisfy $\abs{u_h(x_0)} = \norm{u_h}_{L^\infty(\Omega)}$.
Set $d = \operatorname{dist}(x_0,\partial \Omega)$. First, we consider the case $d \ge 2\kappa h$ for some $\kappa>1$.
Then, applying Corollary \ref{cor1} to an open disk with center $x_0$ and $u\equiv 0$, we have
\[
\abs{u_h(x_0)} \le C d^{-1}\norm{u_h}_{L^2(S_{\frac{1}{2}d}(x_0))} \le C d^{-1}\norm{u_h}_{L^2(S_{d}(x_0))}.
\]
Now we assume that $d < 2\kappa h$. Using the inverse inequality, we have
\[
\abs{u_h(x_0)} \le Ch^{-1}\norm{u_h}_{L^2(S_{h}(x_0))}.
\]
Therefore,
\begin{equation}
\norm{u_h}_{L^\infty(\Omega)} \le C \rho^{-1}\norm{u_h}_{L^2(S_\rho(x_0))} \label{eq:thm2step1}
\end{equation}
where $\rho = \max\{d,h\}$.
Let $\phi \in C^\infty_0(S_\rho(x_0))$ satisfy $\norm{\phi}_{L^2(S_\rho(x_0))}=1$.
Let $v \in H^1_0(\Omega)$ be the solution of \eqref{eq:poisson} with $f=\phi$ and $g=0$.
Then, $v \in W^{2,p}(\Omega)$ for some $4/3< p\le 2 $, and $a(v,w) = (\phi,w)_\Omega$ for all $w \in V^2$.
Let $v_h \in \mathring V_h$ satisfy
\[
a(v_h,\chi) = (\phi,\chi) \quad {}^\forall \chi \in \mathring V_h.
\]
In view of the assumption of $u_h$, we get
\begin{align}
\abs{(u_h,\phi)_\Omega} &= \abs{a(u_h,v)} = \abs{a(u_h,v-v_h)} \nonumber \\
& = \abs{a(u_h-\chi,v-v_h)} \label{eq:thm2step2first}
\end{align}
for $\chi \in \mathring V_h$.
We define $\hat u_h \in \mathring V_h$ such that
$\hat u_h = 0$ at nodal points on $\partial \Omega$ and $\hat u_h = u_h$ at interior nodal points. Then, we have
\begin{align*}
\operatorname{supp}(u_h - \hat u_h) & \subset \Lambda_h = \{x \in \overline \Omega \colon \operatorname{dist}(x,\partial \Omega) \le h \},\\
\norm{u_h - \hat u_h}_{L^\infty(\Omega)} & \le C \norm{u_h}_{L^\infty(\partial \Omega)}.
\end{align*}
Taking $\chi = \hat u_h$ in \eqref{eq:thm2step2first} and using the inverse inequality, we have
\begin{align}
\abs{(u_h,\phi)_\Omega} &\le C \norm{u_h-\hat u_h}_{V^\infty} \norm{v-v_h}_{V^1(\Lambda_h)} \nonumber \\
& \le Ch^{-1}\norm{u_h - \hat u_h}_{L^\infty(\Omega)}\norm{v-v_h}_{V^1(\Lambda_h)} \nonumber \\
& \le Ch^{-1}\norm{u_h}_{L^\infty(\partial \Omega)}\norm{v-v_h}_{V^1(\Lambda_h)}. \label{eq:thm2step2last}
\end{align}
Set $R_0 = \operatorname{diam}\Omega$ and $d_j = R_02^{-j}$ for non-negative integers $j$.
We define $A_j$ as
\[
A_j \coloneqq \{x \in \overline \Omega \colon d_{j+1} \le \abs{x-x_0} \le d_j \}.
\]
Then, $\abs{A_j \cap \Lambda_h}_2 \le C d_jh$.
Set $A_j^l =\displaystyle \bigcup_{k=j-l}^{j+l}A_k$ and $J \coloneqq \min \{j \in {\mathbb Z} \colon d_{j+1} \le 8 \rho\}$.
Then, we have
\begin{align}
\norm{v-v_h}_{V^1(\Lambda_h)} & \le \sum_{j=0}^J \norm{v-v_h}_{V^1(\Lambda_h \cap A_j)} + \norm{v-v_h}_{V^1(\Lambda_h \cap S_{8\rho})} \nonumber \\
& \le C\sum_{j=0}^J h^{1/2}d_j^{1/2}\norm{v-v_h}_{V^2(\Lambda_h\cap A_j)} + C\rho^{1/2}h^{1/2} \norm{v-v_h}_{V^2(\Lambda_h \cap S_{8\rho}(x_0))}. \label{eq:thm2step3first}
\end{align}
To estimate the second term of \eqref{eq:thm2step3first}, we apply Propositions \ref{prop:energyerror} and \ref{prop:bestapprx} and get
\begin{align}
\norm{v-v_h}_{V^2(\Lambda_h \cap S_{8\rho}(x_0))} &\le Ch^{2-\frac{2}{p}}\norm{v}_{W^{2,p}(\Omega)} \nonumber \\
& \le Ch^{2-\frac{2}{p}}\norm{\phi}_{L^p(S_\rho(x_0))} \nonumber \\
& \le Ch^{2-\frac{2}{p}}\rho^{\frac{2}{p}-1} \label{eq:thm2step3circle}.
\end{align}
Meanwhile, using Proposition \ref{prop:interiorl2} for $j$ satisfying $\Lambda_h \cap A_j \ne \emptyset$, we have
\begin{align*}
\norm{v-v_h}_{V^2(\Lambda_h \cap A_j)} &\le C\left( \norm{v-v_h}_{V^2(A_j^1)} + d_j^{-1}\norm{v-v_h}_{L^2(A_j^1)} \right) \nonumber \\
& \le Ch^{2-\frac{2}{p}}\norm{v}_{W^{2,p}(A_j^2)} + C d_j^{-1} \norm{v-v_h}_{L^2(A_j^1)}.
\end{align*}
In view of Proposition \ref{prop:poisson4},
\begin{align*}
\norm{v}_{W^{2,p}(A_j^2)} &\le C\left( \norm{\phi}_{L^p(A_j^3)} + d_j^{-1}\abs{v}_{W^{1,p}(A_j^3)} + d_j^{-2}\norm{v}_{L^p(A_j^3)} \right) \\
& \le C d_j^{\frac{2}{p}-1}\left(1 + d_j^{-1}\abs{v}_{H^1(A_j^4)} + d_j^{-2}\norm{v}_{L^2(A_j^4)} \right).
\end{align*}
Because $\operatorname{diam} A_j^4 \le 32 d_j$ and $\operatorname{dist}(A_j^4,\partial \Omega) \le h$, there exists $\overline x_j \in \partial \Omega$ satisfying
\[
A_j^4 \subset S_{64d_j}(\overline x_j), \quad S_\rho(x_0) \subset S_{64d_j}(\overline x_j).
\]
Therefore, we have
\begin{align}
\norm{v}_{W^{2,p}(A_j^2)} \le C d_j^{\frac{2}{p}-1} \label{eq:thm2step3w2p}
\end{align}
by Propositions \ref{prop:poisson3} and \ref{prop:poincare}.
Let $\eta \in C^\infty_0(S_{64d_j}(\overline x_j))$ and let $w \in H^1_0(\Omega)$ be the solution of \eqref{eq:poisson} with $f=\eta$ and $g=0$.
Let $w_h \in \mathring V_h$ satisfy
\[
a(w_h,\chi) = (\eta,\chi)_{\Omega} \quad {}^\forall \chi \in \mathring V_h.
\]
Similar to the above, we deduce
\begin{align*}
\abs{(v-v_h,\eta)_{S_{64d_j}(\overline x_j)}} & = \abs{a(v-v_h,w-w_h)} \nonumber \\
& \le C \norm{v-v_h}_{V^2}\norm{w-w_h}_{V^2} \nonumber \\
& \le Ch^{4-\frac{4}{p}}\norm{\phi}_{L^p(S_\rho(x_0))}\norm{\eta}_{L^p(S_{64d_j}(\overline x_j))} \nonumber \\
& \le Ch^{4-\frac{4}{p}}(\rho d_j)^{\frac{2}{p}-1}\norm{\eta}_{L^2(S_{64d_j}(\overline x_j))}.
\end{align*}
Therefore,
\begin{align}
\norm{v-v_h}_{L^2(A_j^1)} \le Ch^{4-\frac{4}{p}}(\rho d_j)^{\frac{2}{p}-1}. \label{eq:thm2step3l2}
\end{align}
Summing up \eqref{eq:thm2step3first}--\eqref{eq:thm2step3l2} and using $h \le \rho \le C d_j \le R_0$ and $p>\frac{4}{3}$, we obtain
\begin{align}
\norm{v-v_h}_{V^1(\Lambda_h)} & \le C\sum_{j=0}^J h^{1/2}d_j^{1/2}\left(h^{2-\frac{2}{p}}d_j^{\frac{2}{p}-1} + h^{4-\frac{4}{p}}\rho^{\frac{2}{p}-1}d_j^{\frac{2}{p}-1} \right)
+ Ch\rho \left( \frac{h}{\rho} \right)^{\frac{3}{2}-\frac{2}{p}} \nonumber \\
& \le Ch\rho. \label{eq;thm2step3last}
\end{align}
The desired estimate \eqref{eq:dmp} now follows from \eqref{eq:thm2step2last} and \eqref{eq:thm2step1}.
\end{proof}
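The dyadic decomposition used in the proof above is easy to sketch numerically. The following Python fragment is illustrative only: $R_0=1$ and $\rho=0.01$ are sample values chosen by us (any pair with $8\rho<R_0$ works), not quantities from the paper. It computes the radii $d_j=R_02^{-j}$ and the cutoff index $J=\min\{j : d_{j+1}\le 8\rho\}$, and checks the defining property of $J$.

```python
# Illustrative sketch of the dyadic decomposition from the proof of Theorem 2.
# R0 and rho are sample values chosen here (any pair with 8*rho < R0 works).
R0 = 1.0    # plays the role of diam(Omega)
rho = 0.01  # plays the role of max{d, h}

def d(j):
    """Dyadic radius d_j = R0 * 2^{-j}."""
    return R0 * 2.0 ** (-j)

# J = min{ j >= 0 : d_{j+1} <= 8*rho }
J = 0
while d(J + 1) > 8 * rho:
    J += 1

# Defining property of J: d_{J+1} <= 8*rho, and d_J > 8*rho when J >= 1.
assert d(J + 1) <= 8 * rho
if J >= 1:
    assert d(J) > 8 * rho

# The annuli A_j = {d_{j+1} <= |x - x0| <= d_j}, j = 0, ..., J, together with
# the disk S_{8*rho}(x0), cover all distances |x - x0| up to d_0 = R0.
radii = [d(j) for j in range(J + 2)]
assert radii[0] == R0 and radii[-1] <= 8 * rho
```

With these sample values one gets $J=3$; in general only on the order of $\log_2(R_0/\rho)$ annuli enter the sum, which is why the geometric decay of the $d_j$ makes the summation manageable.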
\section{$L^\infty$ error estimate (Proof of Theorem \ref{thm3})}
\label{s:III}
We conclude with the proof of Theorem \ref{thm3}.
\begin{proof}[Proof of Theorem \ref{thm3}.]
Let $\tilde u \in V^\infty(\widetilde \Omega)$ be the extension of $u$ satisfying $\norm{\tilde u}_{\alpha(h), \widetilde \Omega} \le C \norm{u}_{\alpha(h),\Omega}$ and $\tilde u = 0$ on $\partial \widetilde \Omega$.
Moreover, let $\tilde u_h \in \mathring V_h(\widetilde \Omega)$ solve
\[
\tilde a(\tilde u - \tilde u_h ,\xi) = 0 \quad {}^\forall \xi \in \mathring V_h(\widetilde \Omega),
\]
where $\tilde a$ is the bilinear form \eqref{eq:bilinear} with $\mathcal{T}_h$ and $\mathcal{E}_h$ replaced by $\widetilde{\mathcal{T}}_h$ and $\widetilde{\mathcal{E}}_h$, respectively.
For arbitrary $\chi \in V_h$, we define $\tilde \chi \in \mathring V_h(\widetilde \Omega)$ as a zero extension.
Then, in view of Theorem \ref{thm1}, we have
\begin{equation}
\norm{\tilde u -\tilde u_h}_{L^\infty(\Omega)} \le C \norm{\tilde u- \tilde \chi}_{\alpha(h),\widetilde \Omega} + C \norm{\tilde u - \tilde u_h}_{L^2(\widetilde \Omega)}. \label{eq:thm3step1}
\end{equation}
Let $\psi \in H^1(\widetilde \Omega)$ be the solution of
\begin{equation}
\left\{
\begin{array}{ccc}
-\Delta \psi = \tilde u - \tilde u_h & \text{in} & \widetilde \Omega \\
\psi = 0 & \text{on} & \partial \widetilde \Omega .
\end{array}\right.
\end{equation}
Then, $\psi \in H^2(\widetilde \Omega)$ and $\tilde a (\psi,\eta) = (\tilde u - \tilde u_h,\eta)_{\widetilde \Omega}$ for $\eta \in V^2(\widetilde \Omega)$.
Let $\psi_h \in \mathring V_h(\widetilde \Omega)$ solve
\[
\tilde a(\psi-\psi_h,\xi) = 0 \quad {}^\forall \xi \in \mathring V_h(\widetilde \Omega).
\]
Then, by the continuity of $\tilde a$ and elliptic regularity, we have
\begin{align}
\norm{\tilde u -\tilde u_h}_{L^2(\widetilde \Omega)}^2 &= \tilde a(\tilde u-\tilde u_h ,\psi) = \tilde a(\tilde u-\tilde \chi,\psi-\psi_h) \nonumber \\
& \le C\norm{\tilde u-\tilde \chi}_{V^\infty(\widetilde \Omega)}\norm{\psi - \psi_h}_{V^1(\widetilde \Omega)} \nonumber \\
& \le Ch \norm{\tilde u-\tilde \chi}_{V^\infty(\widetilde \Omega)}\norm{\tilde u -\tilde u_h}_{L^2(\widetilde \Omega)}. \label{eq:thm3step2first}
\end{align}
Because $a(u_h - \tilde u_h,\xi) = 0$ for $\xi \in \mathring V_h$, using Theorems \ref{thm2} and \ref{thm1} and \eqref{eq:thm3step2first}, we deduce
\begin{align}
\norm{u_h - \tilde u_h}_{L^\infty(\Omega)} & \le C\norm{u_h-\tilde u_h}_{L^\infty(\partial \Omega)} \nonumber \\
& \le C\norm{\tilde u - \tilde u_h}_{L^\infty(\partial \Omega)} + C\norm{u- u_h}_{L^\infty(\partial \Omega)} \nonumber \\
& \le C\norm{\tilde u-\tilde \chi}_{\alpha(h),\widetilde \Omega} + C\norm{\tilde u -\tilde u_h}_{L^2(\widetilde \Omega)} + C\norm{u- u_h}_{L^\infty(\partial \Omega)} \nonumber \\
& \le C\norm{\tilde u-\tilde \chi}_{\alpha(h),\widetilde \Omega} + C\norm{u- u_h}_{L^\infty(\partial \Omega)}. \label{eq:thm3step3}
\end{align}
Therefore, using the triangle inequality, we obtain
\[
\norm{u-u_h}_{L^\infty(\Omega)} \le C \norm{u-\chi}_{\alpha(h),\Omega} + C\norm{u-u_h}_{L^\infty(\partial \Omega)}.
\]
\end{proof}
\begin{cor}\label{cor2}
In addition to the assumptions of Theorem \ref{thm3}, we assume $u \in W^{1+r,\infty}(\Omega)$.
Then, we have
\begin{equation}
\norm{u-u_h}_{L^\infty(\Omega)} \le
C h^{r}\norm{u}_{W^{1+r,\infty}(\Omega)}.\label{eq:cor2}
\end{equation}
\end{cor}
\begin{proof}
First, in view of the standard interpolation error estimate, we have
\[
\inf_{\chi \in V_h}\norm{u-\chi}_{\alpha(h),\Omega} \le C h^r(h+\alpha(h))\norm{u}_{W^{1+r,\infty}(\Omega)}.
\]
To estimate $\|u-u_h\|_{L^\infty(\partial\Omega)}$,
we let $e \in \E^{\partial}_h$ and $K \in \mathcal{T}_h$ be such that $e \subset \overline K$.
Moreover, let $\chi\in V_h$ be arbitrary. By the inverse inequality, we have
\begin{align*}
\norm{u-u_h}_{L^\infty(e)} & \le
\norm{u-\chi}_{L^\infty(e)} + \norm{u_h-\chi}_{L^\infty(e)} \\
&\le \norm{u-\chi}_{L^\infty(K)} + Ch_e^{-1/2}\norm{u_h-\chi}_{L^2(e)} .
\end{align*}
Using \eqref{eq:dgconti}, \eqref{eq:dgcoercive}, and \eqref{eq:galerkin}, we have
$\|\chi-u_h\|_{V^2}\le C\|\chi-u\|_{V^2}$ and, consequently,
\[
h_e^{-1/2} \norm{\chi-u_h}_{L^2(e)}\le C\|\chi-u\|_{V^2}.
\]
Therefore,
\[
\norm{u-u_h}_{L^\infty(e)}\le \norm{u-\chi}_{L^\infty(K)}+C\|\chi-u\|_{V^2}.
\]
Choosing $\chi$ to be the Lagrange interpolant of $u$, we deduce
\[
\norm{u-u_h}_{L^\infty(e)}\le Ch^{r}(h+\alpha(h))|u|_{W^{r+1,\infty}(K)}+Ch^r|u|_{W^{r+1,\infty}(\Omega)}.
\]
Summing up these estimates, we obtain the desired \eqref{eq:cor2}.
\end{proof}
\section{Numerical examples}
\label{s:ne}
In this section, we examine the weak discrete maximum principle (Theorem \ref{thm2}) and the $L^\infty$ error estimate (Corollary \ref{cor2}) using numerical examples.
We consider a square domain $\Omega$ (see Fig. \ref{fig:squaremesh}) and an L-shaped domain $\Omega$ (see Fig. \ref{fig:lshapemesh}).
\begin{figure}[bt]
\centering
\subfloat[][Square domain]{\includegraphics[scale=0.7]{Fig1a.pdf}\label{fig:squaremesh}}
\subfloat[][L-shape domain]{\includegraphics[scale=0.7]{Fig1b.pdf}\label{fig:lshapemesh}}
\caption{Domain $\Omega$}
\end{figure}
First, we solve (DG;$f,g$) with
$f = 0$ and $g = \cos(\pi x) \cos (\pi y)$; the solution $u_h$ satisfies \eqref{eq:discreteharmonic}.
The minimum and maximum values of $u_h$ on $\Omega$ and $\partial \Omega$ are reported in Tab.~\ref{tb:wdmp}. We see that the extrema on $\Omega$ agree with those on $\partial \Omega$, and we infer that the weak discrete maximum principle \eqref{eq:dmp} actually holds with $C=1$.
\begin{table}[bt]
\caption{Minimum and maximum values of $u_h$ on $\Omega$ and on $\partial \Omega$}
\centering
\begin{tabular}{lc|cc|cc}\hline
Domain & $h$ & $\displaystyle \min_{\Omega}u_h$ & $\displaystyle \min_{\partial \Omega}u_h$ & $\displaystyle \max_{\Omega}u_h$ & $\displaystyle \max_{\partial \Omega}u_h$ \\ \hline\hline
\multirow{2}{*}{Square} & $0.152069063$ & $-1.01415829$ & $-1.01415829$ & $1.01407799$ & $1.01407799$ \\ \cline{2-6}
& $0.0762297934$ & $-1.00438815$&$-1.00438815$&$1.00437510$&$1.00437510$ \\ \hline
\multirow{2}{*}{L-shape} & $0.152069063$ & $-1.01406865$ & $-1.01406865$ & $1.01414407$ & $1.01414407$ \\ \cline{2-6}
& $0.0790226728$ & $-1.00437424$&$-1.00437424$&$1.00437503$&$1.00437503$ \\ \hline
\end{tabular}
\label{tb:wdmp}
\end{table}
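As a quick consistency check on these numbers, the Python sketch below (with the values hardcoded from Tab.~\ref{tb:wdmp}) verifies that the reported extrema over $\Omega$ coincide with those over $\partial\Omega$, consistent with \eqref{eq:dmp} holding with constant $1$ in these runs.

```python
# Extrema of u_h copied from Tab. 1; each row is
# (min on Omega, min on boundary, max on Omega, max on boundary).
rows = [
    (-1.01415829, -1.01415829, 1.01407799, 1.01407799),  # square,  h ~ 0.1521
    (-1.00438815, -1.00438815, 1.00437510, 1.00437510),  # square,  h ~ 0.0762
    (-1.01406865, -1.01406865, 1.01414407, 1.01414407),  # L-shape, h ~ 0.1521
    (-1.00437424, -1.00437424, 1.00437503, 1.00437503),  # L-shape, h ~ 0.0790
]

for min_o, min_b, max_o, max_b in rows:
    # Interior and boundary extrema agree exactly in every reported run ...
    assert min_o == min_b and max_o == max_b
    # ... so the L-infinity norm over Omega is bounded by that over the boundary,
    # i.e., the weak discrete maximum principle holds here with constant 1.
    assert max(abs(min_o), abs(max_o)) <= max(abs(min_b), abs(max_b))
```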
Finally, we consider (BVP;$f,g$) with $f(x,y)=2\pi^2\sin(\pi x)\sin(\pi y)$ and $g(x,y)=\sin(\pi x)\sin(\pi y)$; the exact solution is $u(x,y) = \sin(\pi x)\sin(\pi y)$.
We examine the errors $\|u-u_h\|_{L^\infty(\Omega)}$ for $r=1$ ($\mathcal{P}^1$ element) and $r=2$ ($\mathcal{P}^2$ element). The results are shown in Figs. \ref{fig:p1error} and \ref{fig:p2error}. We observe that the order is almost $O(h^{1+r})$, i.e., the optimal convergence rate is attained. This implies that our $L^\infty$ error estimate, Corollary \ref{cor2}, has room for improvement.
\begin{figure}[bt]
\centering
\subfloat[][$r=1$]{\includegraphics[scale=0.8]{Fig2a.pdf}\label{fig:p1error}}
\subfloat[][$r=2$]{\includegraphics[scale=0.8]{Fig2b.pdf}\label{fig:p2error}}
\caption{$L^\infty$ errors $\|u-u_h\|_{L^\infty(\Omega)}$}
\end{figure}
\section{Conclusion}
We have shown interior error estimates and a weak discrete maximum principle for the DG method applied to the Poisson equation.
These results extend the corresponding results for the standard FEM \cite{MR551291} to the DG method.
Moreover, we have derived an $L^\infty$ error estimate as an application of the weak discrete maximum principle.
Unfortunately, our $L^\infty$ error estimate is only suboptimal;
the optimal rate has been proved in \cite{MR2113680} by a different method.
This suggests that the imposition of the Dirichlet boundary condition in the DG method requires deeper consideration. In particular, we will study more precise estimates of $\alpha(h)$ and $\norm{u-u_h}_{L^\infty(\partial \Omega)}$ in future work.
\section*{Acknowledgment}
The first author was supported by Program for Leading Graduate Schools, MEXT, Japan.
The second author was supported by JST CREST Grant Number JPMJCR15D1,
Japan, and JSPS KAKENHI Grant Number 15H03635, Japan.
\bibliographystyle{spmpsci}
https://arxiv.org/abs/1306.5872 | Geometric properties of inverse polynomial images | Given a polynomial $\T_n$ of degree $n$, consider the inverse image of $\R$ and $[-1,1]$, denoted by $\T_n^{-1}(\R)$ and $\T_n^{-1}([-1,1])$, respectively. It is well known that $\T_n^{-1}(\R)$ consists of $n$ analytic Jordan arcs moving from $\infty$ to $\infty$. In this paper, we give a necessary and sufficient condition such that (1) $\T_n^{-1}([-1,1])$ consists of $\nu$ analytic Jordan arcs and (2) $\T_n^{-1}([-1,1])$ is connected, respectively. | \section{Introduction}
Let $\PP_n$ be the set of all polynomials of degree $n$ with complex coefficients. For a polynomial $\T_n\in\PP_n$, consider the inverse images $\T_n^{-1}(\R)$ and $\T_n^{-1}([-1,1])$, defined by
\begin{equation}
\T_n^{-1}(\R):=\bigl\{z\in\C:\T_n(z)\in\R\bigr\}
\end{equation}
and
\begin{equation}
\T_n^{-1}([-1,1]):=\bigl\{z\in\C:\T_n(z)\in[-1,1]\bigr\},
\end{equation}
respectively. It is well known that $\T_n^{-1}(\R)$ consists of $n$ analytic Jordan arcs moving from $\infty$ to $\infty$ which cross each other at points which are zeros of the derivative $\T_n'$. In \cite{Peh-2003}, Peherstorfer proved that $\T_n^{-1}(\R)$ may be split up into $n$ Jordan arcs (not necessarily analytic) moving from $\infty$ to $\infty$ with the additional property that $\T_n$ is strictly monotone decreasing from $+\infty$ to $-\infty$ on each of the $n$ Jordan arcs. Thus, $\T_n^{-1}([-1,1])$ is the union of $n$ (analytic) Jordan arcs and is obtained from $\T_n^{-1}(\R)$ by cutting off the $n$ arcs of $\T_n^{-1}(\R)$. In \cite[Thm.\,3]{PehSch-2004}, we gave a necessary and sufficient condition such that $\T_n^{-1}([-1,1])$ consists of $2$ Jordan arcs, compare also \cite{Pakovich-1995}, where the proof can easily be extended to the case of $\ell$ arcs, see also \cite[Remark after Corollary\,2.2]{Peh-2003}. In the present paper, we will give a necessary and sufficient condition such that (1)~$\T_n^{-1}([-1,1])$ consists of $\nu$ (but not less than $\nu$) \emph{analytic} Jordan arcs (in Section\,2) and (2)~$\T_n^{-1}([-1,1])$ is connected (in Section\,3), respectively. From a different point of view as in this paper, inverse polynomial images are considered, e.g., in \cite{PehSt}, \cite{Pakovich-1998}, \cite{Pakovich-2007}, and \cite{Pakovich-2008}.
Inverse polynomial images are interesting, for instance, in approximation theory, since each polynomial of degree $n$ (suitably normalized) is the minimal polynomial with respect to the maximum norm on its inverse image, see \cite{KamoBorodin}, \cite{Peh-1996}, \cite{FischerPeherstorfer}, and \cite{Fischer-1992}.
\section{The Number of (Analytic) Jordan Arcs of an Inverse Polynomial Image}
Let us start with a collection of important properties of the inverse images $\T_n^{-1}(\R)$ and $\T_n^{-1}([-1,1])$. Most of them are due to Peherstorfer\,\cite{Peh-2003} or are classical well-known results. Let us point out that $\T_n^{-1}(\R)$ (and also $\T_n^{-1}([-1,1])$) may be characterized, on the one hand, by $n$ analytic Jordan arcs and, on the other hand, by $n$ (not necessarily analytic) Jordan arcs on which $\T_n$ is strictly monotone.
Let $C:=\{\gamma(t):t\in[0,1]\}$ be an analytic Jordan arc in $\C$ and let $\T_n\in\PP_n$ be a polynomial such that $\T_n(\gamma(t))\in\R$ for all $t\in[0,1]$. We call a point $z_0=\gamma(t_0)$ a \emph{saddle point} of $\T_n$ on $C$ if $\T_n'(z_0)=0$ and $z_0$ is no extremum of $\T_n$ on $C$.
\begin{lemma}\label{Lem-1}
Let $\T_n\in\PP_n$ be a polynomial of degree $n$.
\begin{enumerate}
\item $\T_n^{-1}(\R)$ consists of $n$ analytic Jordan arcs, denoted by $\tilde{C}_1,\tilde{C}_2,\dots,\tilde{C}_n$, in the complex plane running from $\infty$ to $\infty$.
\item $\T_n^{-1}(\R)$ consists of $n$ Jordan arcs, denoted by $\tilde{\Gamma}_1,\tilde{\Gamma}_2,\dots,\tilde{\Gamma}_n$, in the complex plane running from $\infty$ to $\infty$, where on each $\tilde{\Gamma}_j$, $j=1,2,\ldots,n$, $\T_n(z)$ is strictly monotone decreasing from $+\infty$ to $-\infty$.
\item A point $z_0\in\T_n^{-1}(\R)$ is a crossing point of exactly $m$, $m\geq2$, analytic Jordan arcs $\tilde{C}_{i_1},\tilde{C}_{i_2},\ldots,\tilde{C}_{i_m}$, $1\leq{i}_1<i_2<\ldots<i_m\leq{n}$, if and only if $z_0$ is a zero of $\T'_n$ with multiplicity $m-1$. In this case, the $m$ arcs cross each other at $z_0$ at successive angles of $\pi/m$. If $m$ is odd then $z_0$ is a saddle point of $\re\{\T_n(z)\}$ on each of the $m$ arcs. If $m$ is even then, on $m/2$ arcs, $z_0$ is a minimum of $\re\{\T_n(z)\}$ and on the other $m/2$ arcs, $z_0$ is a maximum of $\re\{\T_n(z)\}$.
\item A point $z_0\in\T_n^{-1}(\R)$ is a crossing point of exactly $m$, $m\geq2$, Jordan arcs\\ $\tilde{\Gamma}_{i_1},\tilde{\Gamma}_{i_2},\ldots,\tilde{\Gamma}_{i_m}$, $1\leq{i}_1<i_2<\ldots<i_m\leq{n}$, if and only if $z_0$ is a zero of $\T'_n$ with multiplicity $m-1$.
\item $\T_n^{-1}([-1,1])$ consists of $n$ analytic Jordan arcs, denoted by $C_1,C_2,\dots,C_n$, where the $2n$ zeros of $\T_n^2-1$ are the endpoints of the $n$ arcs. If $z_0\in\C$ is a zero of $\T_n^2-1$ of multiplicity $m$ then exactly $m$ analytic Jordan arcs $C_{i_1},C_{i_2},\ldots,C_{i_m}$ of $\T_n^{-1}([-1,1])$, $1\leq{i}_1<i_2<\ldots<i_m\leq{n}$, have $z_0$ as common endpoint.
\item $\T_n^{-1}([-1,1])$ consists of $n$ Jordan arcs, denoted by $\Gamma_1,\Gamma_2,\dots,\Gamma_n$, with $\Gamma_j\subset\tilde{\Gamma}_j$, $j=1,2,\dots,n$, where on each $\Gamma_j$, $\T_n(z)$ is strictly monotone decreasing from $+1$ to $-1$. If $z_0\in\C$ is a zero of $\T_n^2-1$ of multiplicity $m$, then exactly $m$ Jordan arcs $\Gamma_{i_1},\ldots,\Gamma_{i_m}$ of $\T_n^{-1}([-1,1])$, $1\leq{i}_1<i_2<\ldots<i_m\leq{n}$, have $z_0$ as common endpoint.
\item Two arcs $C_j,C_k$, $j\neq k$, cross each other at most once (the same holds for $\Gamma_j,\Gamma_k$).
\item Let $S:=\T_n^{-1}([-1,1])$; then the complement $\C\setminus{S}$ is connected.
\item Let $S:=\T_n^{-1}([-1,1])$; then, for $P_n(z):=\T_n((z-b)/a)$, $a,b\in\C$, $a\neq0$, the inverse image is $P_n^{-1}([-1,1])=aS+b$.
\item $\T_n^{-1}([-1,1])\subseteq\R$ if and only if the coefficients of $\T_n$ are real, $\T_n$ has $n$ simple real zeros and $\min\bigl\{|\T_n(z)|:\T'_n(z)=0\bigr\}\geq1$.
\item $\T_n^{-1}(\R)$ is symmetric with respect to the real line if and only if $\T_{n}(z)$ or $\ii\T_{n}(z)$ has real coefficients only.
\end{enumerate}
\end{lemma}
\begin{proof}
(i), (iii), (iv), and (xi) are well known.\\
For (ii), see \cite[Thm.\,2.2]{Peh-2003}.\\
Concerning the connection between (iii),(iv) and (v),(vi) note that each zero $z_0$ of $Q_{2n}(z)=\T_n^2(z)-1\in\PP_{2n}$ with multiplicity $m$ is a zero of $Q_{2n}'(z)=2\T_n(z)\,\T_n'(z)$ with multiplicity $m-1$, hence a zero of $\T_n'(z)$ with multiplicity $m-1$. Thus, (v) and (vi) follow immediately from (i)\&(iii) and (ii)\&(iv), respectively.\\
(vii) follows immediately from (viii).\\
Concerning (viii), suppose that there exists a bounded, simply connected domain $B$ whose boundary is a subset of $\T_n^{-1}([-1,1])$. Then the harmonic function $v(x,y):=\im\{\T_n(x+\ii{y})\}$ is zero on $\partial{B}$; thus, by the maximum principle, $v(x,y)$ is zero on $B$, which is a contradiction.\\
(ix) follows from the definition of $\T_n^{-1}([-1,1])$.\\
For (x), see \cite[Cor.\,2.3]{Peh-2003}.
\end{proof}
\begin{example}\label{Ex}
Consider the polynomial $\T_n(z):=1+z^2(z-1)^3(z-2)^4$ of degree $n=9$. Fig.\,\ref{Fig_InverseImageGeneral} shows the inverse images $\T_n^{-1}([-1,1])$ (solid line) and $\T_n^{-1}(\R)$ (dotted and solid line). The zeros of $\T_n+1$ and $\T_n-1$ are marked with a circle and a disk, respectively. One can easily identify the $n=9$ analytic Jordan arcs $\tilde{C}_1,\tilde{C}_2,\ldots,\tilde{C}_n$ which $\T_n^{-1}(\R)$ consists of, compare Lemma\,\ref{Lem-1}\,(i), and the $n=9$ analytic Jordan arcs $C_1,C_2,\ldots,C_n$ which $\T_n^{-1}([-1,1])$ consists of, compare Lemma\,\ref{Lem-1}\,(v), where the endpoints of the arcs are exactly the circles and disks, i.e., the zeros of $\T_n^2-1$. Note that $\tilde{C}_1=\R$, $C_1=[-0.215\ldots,0]$ and $C_2=[0,1]$.
\end{example}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1.6]{Fig_InverseImageGeneral}
\caption{\label{Fig_InverseImageGeneral} Inverse images $\T_9^{-1}([-1,1])$ (solid line) and $\T_9^{-1}(\R)$ (dotted and solid line) for the polynomial $\T_9(z):=1+z^2(z-1)^3(z-2)^4$}
\end{center}
\end{figure}
Before we state the result concerning the minimal number of analytic Jordan arcs $\T_n^{-1}([-1,1])$ consists of, let us do some preparations. Let $\T_n\in\PP_n$ and consider the zeros of the polynomial $\T_n^2-1\in\PP_{2n}$. Let $\{a_1,a_2,\ldots,a_{2\ell}\}$ be the set of all zeros of $\T_n^2-1$ with \emph{odd} multiplicity, where $a_1,a_2,\ldots,a_{2\ell}$ are pairwise distinct and each $a_j$ has multiplicity $2\beta_j-1$, $j=1,\ldots,2\ell$. Further, let
\begin{equation}
(b_1,b_2,\ldots,b_{2\nu}):=(\underbrace{a_1,\ldots,a_1}_{(2\beta_1-1)-\text{times}},
\underbrace{a_2,\ldots,a_2}_{(2\beta_2-1)-\text{times}},\ldots,\underbrace{a_{2\ell},\ldots,a_{2\ell}}_{(2\beta_{2\ell}-1)-\text{times}}),
\end{equation}
thus
\begin{equation}
2\nu=\sum_{j=1}^{2\ell}(2\beta_j-1),
\end{equation}
i.e., $b_1,b_2,\ldots,b_{2\nu}$ are the zeros of odd multiplicity \emph{written according to their multiplicity}.
\begin{theorem}\label{Thm-NuArcs}
Let $\T_n\in\PP_n$ be any polynomial of degree $n$. Then, $\T_n^{-1}([-1,1])$ consists of $\nu$ (but not less than $\nu$) analytic Jordan arcs with endpoints $b_1,b_2,\ldots,b_{2\nu}$ if and only if $\T_n^2-1$ has exactly $2\nu$ zeros $b_1,b_2,\ldots,b_{2\nu}$ (written according to their multiplicity) of odd multiplicity.
\end{theorem}
\begin{proof}
By Lemma\,\ref{Lem-1}\,(v), $\T_n^{-1}([-1,1])$ consists of $n$ analytic Jordan arcs $C_1,C_2,\ldots,C_n$, which can be combined into $\nu$ analytic Jordan arcs in the following way. Clearly, two analytic Jordan arcs $C_{i_1}$ and $C_{i_2}$ can be joined together into one analytic Jordan arc if they have the same endpoint, which is a zero of $\T_n^2-1$, and if they lie on the same analytic Jordan arc $\tilde{C}_{i_3}$ of Lemma\,\ref{Lem-1}\,(i). By Lemma\,\ref{Lem-1}\,(iii) and (v), such combinations are possible only at the zeros of $\T_n^2-1$ of \emph{even} multiplicity. More precisely, let $d_1,d_2,\ldots,d_k$ be the zeros of $\T_n^2-1$ with even multiplicities $2\alpha_1,2\alpha_2,\ldots,2\alpha_k$, where, by assumption,
\[
2\alpha_1+2\alpha_2+\ldots+2\alpha_k=2n-2\nu.
\]
By Lemma\,\ref{Lem-1}\,(iii) and (v), at each point $d_j$, the $2\alpha_j$ analytic Jordan arcs of $\T_n^{-1}([-1,1])$ can be combined into $\alpha_j$ analytic arcs, $j=1,2,\ldots,k$. Altogether, the number of such combinations is $\alpha_1+\alpha_2+\ldots+\alpha_k=n-\nu$, thus the total number $n$ of analytic Jordan arcs is reduced by $n-\nu$, hence $\nu$ analytic Jordan arcs remain and the sufficiency part is proved. Since, for each polynomial $\T_n\in\PP_n$, there is a unique $\nu\in\{1,2,\ldots,n\}$ such that $\T_n^2-1$ has exactly $2\nu$ zeros of odd multiplicity (counted with multiplicity), the necessity part follows.
\end{proof}
\begin{example}
For a better understanding of the combination of two analytic Jordan arcs into one analytic Jordan arc, as done in the proof of Theorem\,\ref{Thm-NuArcs}, let us again consider the inverse image of the polynomial of Example\,\ref{Ex}.
\begin{itemize}
\item The point $d_1=0$ is a zero of $\T_n-1$ with multiplicity $2\alpha_1=2$, thus $2$ analytic Jordan arcs, here $C_1$ and $C_2$, have $d_1$ as endpoint, compare Lemma\,\ref{Lem-1}\,(v). Along the arc $\tilde{C}_1$, $d_1$ is a maximum, along the arc $\tilde{C}_2$, $d_1$ is a minimum, compare Lemma\,\ref{Lem-1}\,(iii), thus the $2$ analytic Jordan arcs $C_1$ and $C_2$ can be joined together into one analytic Jordan arc $C_1\cup{C}_2$.
\item The point $d_2=2$ is a zero of $\T_n-1$ with multiplicity $2\alpha_2=4$, thus $4$ analytic Jordan arcs, here $C_6$, $C_7$, $C_8$ and $C_9$, have $d_2$ as endpoint. Along the arc $\tilde{C}_7$ or $\tilde{C}_9$, $d_2$ is a maximum, along the arc $\tilde{C}_8$ or $\tilde{C}_1$, $d_2$ is a minimum, compare Lemma\,\ref{Lem-1}\,(iii). Hence, the analytic Jordan arcs $C_6$ and $C_9$ can be combined into one analytic Jordan arc $C_6\cup{C}_9$, analogously $C_7$ and $C_8$ can be combined into $C_7\cup{C}_8$.
\item The point $a_1=1$ is a zero of $\T_n-1$ with multiplicity $3$, thus $3$ analytic Jordan arcs, here $C_2$, $C_4$ and $C_5$, have $a_1$ as endpoint. Since $a_1$ is a saddle point along each of the three analytic Jordan arcs $\tilde{C}_1,\tilde{C}_4,\tilde{C}_5$, compare Lemma\,\ref{Lem-1}\,(iii), no combination of arcs can be done.
\end{itemize}
Altogether, we get $\alpha_1+\alpha_2=3=n-\nu$ combinations and therefore $\T_n^{-1}([-1,1])$ consists of $\nu=6$ analytic Jordan arcs, which are given by $C_1\cup{C}_2$, $C_3$, $C_4$, $C_5$, $C_6\cup{C}_9$ and $C_7\cup{C}_8$.
\end{example}
\begin{lemma}\label{Lemma-PolEq}
For any polynomial $\T_n(z)=c_nz^n+\ldots\in\PP_n$, $c_n\in\C\setminus\{0\}$, there exists a unique $\ell\in\{1,2,\ldots,n\}$, a unique monic polynomial $\HH_{2\ell}(z)=z^{2\ell}+\ldots\in\PP_{2\ell}$ with pairwise distinct zeros $a_1,a_2,\ldots,a_{2\ell}$, i.e.,
\begin{equation}\label{H}
\HH_{2\ell}(z)=\prod_{j=1}^{2\ell}(z-a_j),
\end{equation}
and a unique polynomial $\U_{n-\ell}(z)=c_nz^{n-\ell}+\ldots\in\PP_{n-\ell}$ with the same leading coefficient $c_n$ such that the polynomial equation
\begin{equation}\label{TU}
\T_n^2(z)-1=\HH_{2\ell}(z)\,\U_{n-\ell}^2(z)
\end{equation}
holds. Note that the points $a_1,a_2,\ldots,a_{2\ell}$ are exactly those zeros of $\T_n^2-1$ which have odd multiplicity.
\end{lemma}
\begin{proof}
The assertion follows immediately by the fundamental theorem of algebra for the polynomial $Q_{2n}(z):=\T_n^2(z)-1=c_n^2z^{2n}+\ldots\in\PP_{2n}$, where $2\ell$ is the number of distinct zeros of $Q_{2n}$ with odd multiplicity. It only remains to show that the case $\ell=0$ is not possible. If $\ell=0$, then all zeros of $Q_{2n}$ are of even multiplicity. Thus there are at least $n$ zeros (counted with multiplicity) of $Q_{2n}'$ which are also zeros of $Q_{2n}$ but not zeros of $\T_n$. Since $Q_{2n}'(z)=2\,\T_n(z)\,\T_n'(z)$, there are at least $n$ zeros (counted with multiplicity) of $\T_n'$, which is a contradiction.
\end{proof}
Let us point out that the polynomial equation \eqref{TU} (sometimes called Pell equation) is the starting point for investigations concerning minimal or orthogonal polynomials on several intervals, see, e.g., \cite{Bogatyrev}, \cite{Peh-1993}, \cite{Peh-1996}, \cite{Peh-2001}, \cite{PehSch-1999}, \cite{SoYu-1995}, and \cite{Totik-2001}.\\
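As a concrete instance of the Pell equation \eqref{TU} with $\ell=1$, the classical Chebyshev polynomial $T_3(x)=4x^3-3x$ satisfies $T_3^2-1=(x^2-1)\,U_2^2$ with $U_2(x)=4x^2-1$ (the second-kind Chebyshev polynomial). The following small sketch (illustrative only; exact integer polynomial arithmetic, coefficients listed lowest degree first) verifies this identity:

```python
# Exact check of the Pell-type equation (TU) in the simplest case l = 1:
# T_3(x)^2 - 1 = (x^2 - 1) * U_2(x)^2  with  T_3 = 4x^3 - 3x, U_2 = 4x^2 - 1.
# Polynomials are lists of integer coefficients, lowest degree first.

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psub(p, q):
    """Subtract polynomial q from p, padding with zeros as needed."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

T3 = [0, -3, 0, 4]   # 4x^3 - 3x
U2 = [-1, 0, 4]      # 4x^2 - 1
H2 = [-1, 0, 1]      # x^2 - 1

lhs = psub(pmul(T3, T3), [1])     # T_3^2 - 1
rhs = pmul(H2, pmul(U2, U2))      # H_2 * U_2^2
assert lhs == rhs                 # the Pell identity holds coefficientwise
```

Both sides expand to $16x^6-24x^4+9x^2-1$, so the identity holds exactly, and by Theorem~2 the inverse image $T_3^{-1}([-1,1])$ is a single arc, namely $[-1,1]$.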
\indent In \cite[Theorem\,3]{PehSch-2004}, we proved that the polynomial equation \eqref{TU} (for $\ell=2$) is equivalent to the fact that $\T_n^{-1}([-1,1])$ consists of $2$ Jordan arcs (not necessarily analytic), compare also \cite{Pakovich-1995}. The condition and the proof can be easily extended to the general case of $\ell$ arcs, compare also \cite[Remark after Corollary\,2.2]{Peh-2003}. In addition, we give an alternative proof similar to that of Theorem\,\ref{Thm-NuArcs}.
\begin{theorem}\label{Thm-EllArcs}
Let $\T_n\in\PP_n$ be any polynomial of degree $n$. Then $\T_n^{-1}([-1,1])$ consists of $\ell$ (but not less than $\ell$) Jordan arcs with endpoints $a_1,a_2,\ldots,a_{2\ell}$ if and only if $\T_n^2-1$ has exactly $2\ell$ pairwise distinct zeros $a_1,a_2,\ldots,a_{2\ell}$, $1\leq\ell\leq{n}$, of odd multiplicity, i.e., if and only if $\T_n$ satisfies a polynomial equation of the form \eqref{TU} with $\HH_{2\ell}$ given in \eqref{H}.
\end{theorem}
\begin{proof}
By Lemma\,\ref{Lem-1}\,(vi), $\T_n^{-1}([-1,1])$ consists of $n$ Jordan arcs $\Gamma_1,\Gamma_2,\dots,\Gamma_n$, which can be combined into $\ell$ Jordan arcs in the following way: Let $d_1,d_2,\ldots,d_k$ be those zeros of $\T_n^2-1$ with \emph{even} multiplicities $2\alpha_1,2\alpha_2,\ldots,2\alpha_k$ and let, as assumed in the Theorem, $a_1,a_2,\ldots,a_{2\ell}$ be those zeros of $\T_n^2-1$ with \emph{odd} multiplicities $2\beta_1-1,2\beta_2-1,\ldots,2\beta_{2\ell}-1$, where
\begin{equation}\label{SumMult}
2\alpha_1+2\alpha_2+\ldots+2\alpha_k+(2\beta_1-1)+(2\beta_2-1)+\ldots+(2\beta_{2\ell}-1)=2n
\end{equation}
holds. By Lemma\,\ref{Lem-1}\,(vi), at each point $d_j$, the $2\alpha_j$ Jordan arcs can be combined into $\alpha_j$ Jordan arcs, $j=1,2,\ldots,k$, and at each point $a_j$, the $2\beta_j-1$ Jordan arcs can be combined into $\beta_j$ Jordan arcs, $j=1,2,\ldots,2\ell$. Altogether, the number of such combinations, using \eqref{SumMult}, is
\[
\alpha_1+\alpha_2+\ldots+\alpha_{k}+(\beta_1-1)+(\beta_2-1)+\ldots+(\beta_{2\ell}-1)=(n+\ell)-2\ell=n-\ell,
\]
i.e., the total number $n$ of Jordan arcs is reduced by $n-\ell$, thus $\ell$ Jordan arcs remain and the sufficiency part is proved. Since, by Lemma\,\ref{Lemma-PolEq}, for each polynomial $\T_n\in\PP_n$ there is a unique $\ell\in\{1,2,\ldots,n\}$ such that $\T_n^2-1$ has exactly $2\ell$ distinct zeros of odd multiplicity, the necessity part is clear.
\end{proof}
\begin{example}
Similarly to the discussion after the proof of Theorem\,\ref{Thm-NuArcs}, let us illustrate the combination of Jordan arcs by the polynomial of Example\,\ref{Ex}. Taking a look at Fig.\,\ref{Fig_InverseImageGeneral}, one can easily identify the $n=9$ Jordan arcs $\Gamma_1,\Gamma_2,\ldots,\Gamma_n$ of $\T_n^{-1}([-1,1])$, where each arc $\Gamma_j$ runs from a disk to a circle. Note that the two arcs, which cross at $z\approx0.3$, may be chosen in two different ways. Now, $\T_n^2-1$ has the zero $d_1=0$ with multiplicity $2\alpha_1=2$, the zero $d_2=2$ with multiplicity $2\alpha_2=4$, and a zero $a_1=1$ with multiplicity $2\beta_1-1=3$, all other zeros $a_j$ have multiplicity $2\beta_j-1=1$, $j=2,3,\ldots,2\ell$. Thus, it is possible to have one combination at $d_1=0$, two combinations at $d_2=2$ and one combination of Jordan arcs at $a_1=1$. Altogether, we obtain $\alpha_1+\alpha_2+(\beta_1-1)=4=n-\ell$ combinations and the number of Jordan arcs is $\ell=5$.
\end{example}
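The arc counts $\nu=6$ and $\ell=5$ obtained in the examples above follow from the multiplicity pattern of the zeros of $\T_9^2-1$ alone. The following sketch checks the counting, assuming (as the figure indicates) that the nine zeros of $\T_9+1$ are simple:

```python
# Arc counting for T_9(z) = 1 + z^2 (z-1)^3 (z-2)^4 from the example.
# T_9 - 1 = z^2 (z-1)^3 (z-2)^4 contributes zeros of multiplicity 2, 3, 4;
# the nine zeros of T_9 + 1 are assumed simple (as the figure indicates).
n = 9
mult = [2, 3, 4] + [1] * 9       # multiplicities of the zeros of T_9^2 - 1

assert sum(mult) == 2 * n        # T_9^2 - 1 has degree 2n

odd = [m for m in mult if m % 2 == 1]
ell = len(odd) // 2              # number of Jordan arcs: 2l distinct odd zeros
nu = sum(odd) // 2               # number of analytic arcs: 2nu = sum of odd mult.
assert (ell, nu) == (5, 6)       # matches the two examples
```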
For the sake of completeness, let us mention two simple special cases, first the case $\ell=1$, see, e.g., \cite[Remark\,4]{PehSch-2004}, and second, the case when all endpoints $a_1,a_2,\ldots,a_{2\ell}$ of the arcs are real, see \cite{Peh-1993}.
\begin{corollary}
Let $\T_n\in\PP_n$.
\begin{enumerate}
\item $\T_n^{-1}([-1,1])$ consists of $\ell=1$ Jordan arc with endpoints $a_1,a_2\in\C$, $a_1\neq{a}_2$, if and only if $\T_n$ is the classical Chebyshev polynomial of the first kind (suitably normalized), i.e., $\T_n(z)=T_n((2z-a_1-a_2)/(a_2-a_1))$, where $T_n(z):=\cos(n\arccos{z})$. In this case, $\T_n^{-1}([-1,1])$ is the complex interval $[a_1,a_2]$.
\item $\T_n^{-1}([-1,1])=[a_1,a_2]\cup[a_3,a_4]\cup\ldots\cup[a_{2\ell-1},a_{2\ell}]$, $a_1,a_2,\ldots,a_{2\ell}\in\R$, $a_1<a_2<\ldots<a_{2\ell}$, if and only if $\T_n$ satisfies the polynomial equation \eqref{TU} with $\HH_{2\ell}$ as in \eqref{H} and $a_1,a_2,\ldots,a_{2\ell}\in\R$, $a_1<a_2<\ldots<a_{2\ell}$.
\end{enumerate}
\end{corollary}
Let us consider the case of $\ell=2$ Jordan arcs in more detail. Given four pairwise distinct points $a_1,a_2,a_3,a_4\in\C$ in the complex plane, define
\begin{equation}\label{H2}
\HH_4(z):=(z-a_1)(z-a_2)(z-a_3)(z-a_4),
\end{equation}
and suppose that $\T_n(z)=c_nz^n+\ldots\in\PP_n$ satisfies a polynomial equation of the form
\begin{equation}\label{TU2}
\T_n^2(z)-1=\HH_4(z)\,\U_{n-2}^2(z)
\end{equation}
with $\U_{n-2}(z)=c_nz^{n-2}+\ldots\in\PP_{n-2}$. Then, by \eqref{TU2}, there exists a $z^*\in\C$ such that the derivative of $\T_n$ is given by
\begin{equation}\label{z*}
\T_n'(z)=n(z-z^*)\,\U_{n-2}(z).
\end{equation}
By Theorem\,\ref{Thm-EllArcs}, $\T_n^{-1}([-1,1])$ consists of two Jordan arcs. Moreover, it is proved in \cite[Theorem\,3]{PehSch-2004} that the two Jordan arcs cross each other if and only if $z^*\in\T_n^{-1}([-1,1])$ (compare also Theorem\,\ref{Theorem-Connectedness}). In this case, $z^*$ is the only crossing point. Interestingly, the minimum number of analytic Jordan arcs is not always two, as the next theorem shows. In order to prove this result, we need the following lemma\,\cite[Lemma\,1]{PehSch-2004}.
\begin{lemma}\label{Lemma-PehSch}
Suppose that $\T_n\in\PP_n$ satisfies a polynomial equation of the form \eqref{TU2}, where $\HH_4$ is given by \eqref{H2}, and let $z^*$ be given by \eqref{z*}.
\begin{enumerate}
\item If $z^*$ is a zero of $\U_{n-2}$ then it is either a double zero of $\U_{n-2}$ or a zero of $\HH_4$.
\item If $z^*$ is a zero of $\HH_4$ then $z^*$ is a simple zero of $\U_{n-2}$.
\item The point $z^*$ is the only possible common zero of $\HH_4$ and $\U_{n-2}$.
\item If $\U_{n-2}$ has a zero $y^*$ of order greater than one then $y^*=z^*$ and $z^*$ is a double zero of $\U_{n-2}$.
\end{enumerate}
\end{lemma}
\begin{theorem}
Suppose that $\T_n\in\PP_n$ satisfies a polynomial equation of the form \eqref{TU2}, where $\HH_4$ is given by \eqref{H2}, and let $z^*$ be given by \eqref{z*}. If $z^*\notin\{a_1,a_2,a_3,a_4\}$ then $\T_n^{-1}([-1,1])$ consists of two analytic Jordan arcs. If $z^*\in\{a_1,a_2,a_3,a_4\}$ then $\T_n^{-1}([-1,1])$ consists of three analytic Jordan arcs, all with one endpoint at $z^*$, and an angle of $2\pi/3$ between two arcs at $z^*$.
\end{theorem}
\begin{proof}
We distinguish two cases:
\begin{enumerate}
\item[1.] $\T_n(z^*)\notin\{-1,1\}$: By Lemma\,\ref{Lemma-PehSch}, $\T_n^2-1$ has 4 simple zeros $\{a_1,a_2,a_3,a_4\}$ and $n-2$ double zeros. Thus, by Theorem\,\ref{Thm-NuArcs}, $\T_n^{-1}([-1,1])$ consists of two analytic Jordan arcs.
\item[2.] $\T_n(z^*)\in\{-1,1\}$:
\begin{enumerate}
\item[2.1] If $z^*\in\{a_1,a_2,a_3,a_4\}$ then, by Lemma\,\ref{Lemma-PehSch}, $\T_n^2-1$ has 3 simple zeros given by $\{a_1,a_2,a_3,a_4\}\setminus\{z^*\}$, $n-3$ double zeros and one zero of multiplicity 3 (that is $z^*$). Thus, by Theorem\,\ref{Thm-NuArcs}, $\T_n^{-1}([-1,1])$ consists of three analytic Jordan arcs.
\item[2.2] If $z^*\notin\{a_1,a_2,a_3,a_4\}$ then, by Lemma\,\ref{Lemma-PehSch}, $z^*$ is a double zero of $\U_{n-2}$. Thus $\T_n^2-1$ has 4 simple zeros $\{a_1,a_2,a_3,a_4\}$, $n-4$ double zeros and one zero of multiplicity 4 (that is $z^*$). Thus, by Theorem\,\ref{Thm-NuArcs}, $\T_n^{-1}([-1,1])$ consists of two analytic Jordan arcs.
\end{enumerate}
\end{enumerate}
The very last statement of the theorem follows immediately by Lemma\,\ref{Lem-1}\,(iii).
\end{proof}
Let us mention that in \cite{PehSch-2004}, see also \cite{Sch-2007a} and \cite{Sch-2009}, necessary and sufficient conditions on four points $a_1,a_2,a_3,a_4\in\C$ are given, with the help of Jacobian elliptic functions, such that there exists a polynomial of degree $n$ whose inverse image consists of two Jordan arcs with the four points as endpoints. Concluding this section, let us give two simple examples of inverse polynomial images.
\begin{example}\hfill{}
\begin{enumerate}
\item Let $a_1=-1$, $a_2=-a$, $a_3=a$ and $a_4=1$ with $0<a<1$ and
\[
\HH_4(z)=(z-a_1)(z-a_2)(z-a_3)(z-a_4)=(z^2-1)(z^2-a^2).
\]
If
\[
\T_2(z):=\frac{2z^2-a^2-1}{1-a^2},\quad\U_0(z):=\frac{2}{1-a^2},
\]
then
\[
\T_2^2(z)-\HH_4(z)\U_0^2(z)=1.
\]
Thus, by Theorem\,\ref{Thm-EllArcs}, $\T_2^{-1}([-1,1])$ consists of two Jordan arcs with endpoints $a_1$, $a_2$, $a_3$, $a_4$, more precisely $\T_2^{-1}([-1,1])=[-1,-a]\cup[a,1]$.
\item Let $a_1=\ii$, $a_2=-\ii$, $a_3=a-\ii$ and $a_4=a+\ii$ with $a>0$ and
\[
\HH_4(z)=(z-a_1)(z-a_2)(z-a_3)(z-a_4)=(z^2+1)((z-a)^2+1).
\]
If
\[
\T_2(z):=\frac{\ii}{a}\bigl(z^2-az+1\bigr),\quad\U_0(z):=\frac{\ii}{a},
\]
then
\[
\T_2^2(z)-\HH_4(z)\U_0^2(z)=1.
\]
Thus, by Theorem\,\ref{Thm-EllArcs}, $\T_2^{-1}([-1,1])$ consists of two Jordan arcs with endpoints $a_1$, $a_2$, $a_3$, $a_4$. More precisely, if $0<a<2$,
\[
\T_2^{-1}([-1,1])=\bigl\{x+\ii{y}\in\C:-\frac{(x-a/2)^2}{1-a^2/4}+\frac{y^2}{1-a^2/4}=1\bigr\},
\]
i.e., $\T_2^{-1}([-1,1])$ is an equilateral hyperbola (not crossing the real line) with center at $z_0=a/2$ and asymptotes $y=\pm(x-a/2)$.\\
If $a=2$, $\T_2^{-1}([-1,1])=[\ii,a-\ii]\cup[-\ii,a+\ii]$, i.e., the union of two complex intervals.\\
If $2<a<\infty$,
\[
\T_2^{-1}([-1,1])=\bigl\{x+\ii{y}\in\C:\frac{(x-a/2)^2}{a^2/4-1}-\frac{y^2}{a^2/4-1}=1\bigr\},
\]
i.e., $\T_2^{-1}([-1,1])$ is an equilateral hyperbola with center at $z_0=a/2$, crossing the real line at $a/2\pm\sqrt{a^2/4-1}$ and asymptotes $y=\pm(x-a/2)$.\\
In Fig.\,\ref{Fig_Rectangle}, the sets $\T_2^{-1}([-1,1])$ including the asymptotes are plotted for the three cases discussed above.
\end{enumerate}
\end{example}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{Fig_Rectangle}
\caption{\label{Fig_Rectangle} The inverse image $\T_2^{-1}([-1,1])$ for $0<a<2$ (left plot), for $a=2$ (middle plot) and for $a>2$ (right plot)}
\end{center}
\end{figure}
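Both Pell-type identities of the example above can be checked mechanically. The sketch below (illustrative only) verifies case (i) exactly over the rationals for $a=1/2$, case (ii) in floating point for $a=1$, and confirms that a point on the hyperbola for $a=1$ is mapped into $[-1,1]$:

```python
from fractions import Fraction

# Example (i), a = 1/2:  T_2(z) = (2z^2 - a^2 - 1)/(1 - a^2),  U_0 = 2/(1 - a^2),
# H_4(z) = (z^2 - 1)(z^2 - a^2).  The identity T_2^2 - H_4 U_0^2 = 1 is polynomial,
# so it must hold at every sample point; we check it exactly with rationals.
a = Fraction(1, 2)
def T2(z): return (2 * z * z - a * a - 1) / (1 - a * a)
def H4(z): return (z * z - 1) * (z * z - a * a)
U0 = 2 / (1 - a * a)
for z in [Fraction(3, 7), Fraction(5, 2), Fraction(-13, 4)]:
    assert T2(z) ** 2 - H4(z) * U0 ** 2 == 1

# Example (ii), a = 1 (written as b to avoid a name clash):
# T_2(z) = (i/a)(z^2 - a z + 1),  U_0 = i/a,  H_4(z) = (z^2 + 1)((z - a)^2 + 1).
b = 1.0
U0c = 1j / b
def S2(z): return U0c * (z * z - b * z + 1)
def G4(z): return (z * z + 1) * ((z - b) ** 2 + 1)
for z in [0.3 + 0.7j, -1.2 + 0.4j]:
    assert abs(S2(z) ** 2 - G4(z) * U0c ** 2 - 1) < 1e-12

# The hyperbola point z = 1/2 + i*sqrt(3)/2 (on the branch for 0 < a < 2)
# is mapped into [-1, 1]; in fact S2(z) = 0 here since z = e^{i pi/3}.
z = 0.5 + 1j * (3 ** 0.5) / 2
w = S2(z)
assert abs(w.imag) < 1e-12 and -1 <= w.real <= 1
```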
\section{The Connectedness of an Inverse Polynomial Image}
In the next theorem, we give a necessary and sufficient condition such that the inverse image is connected.
\begin{theorem}\label{Theorem-Connectedness}
Let $\T_n\in\PP_n$. The inverse image $\T_n^{-1}([-1,1])$ is connected if and only if all zeros of the derivative $\T_n'$ lie in $\T_n^{-1}([-1,1])$.
\end{theorem}
\begin{proof}
Let $\Gamma:=\bigl\{\Gamma_1,\Gamma_2,\dots,\Gamma_n\bigr\}$ denote the set of arcs of $\T_n^{-1}([-1,1])$ as in Lemma\,\ref{Lem-1}\,(vi).
``$\Leftarrow$'': Suppose that all zeros of $\T_n'$ lie in $\Gamma$. Let $A_1\in\Gamma$ be such that it contains at least one zero $z_1$ of $\T'_n$ with multiplicity $m_1\geq1$. By Lemma\,\ref{Lem-1}\,(ii), (iv) and (vi), there are $m_1$ additional arcs $A_2,A_3,\dots,A_{m_1+1}\in\Gamma$ containing $z_1$. By Lemma\,\ref{Lem-1}\,(vii),
\[
A_j\cap{A}_k=\{z_1\}~\text{for}~j,k\in\{1,2,\dots,m_1+1\},~j\neq{k}.
\]
Now assume that there is another zero $z_2$ of $\T'_n$, $z_2\neq{z}_1$, with multiplicity $m_2$, on $A_{j^*}$, $j^*\in\{1,2,\dots,m_1+1\}$. Since no arc $A_j$, $j\in\{1,2,\dots,m_1+1\}\setminus\{j^*\}$ contains $z_2$, there are $m_2$ curves $A_{m_1+1+j}\in\Gamma$, $j=1,2,\dots,m_2$, which cross each other at $z_2$ and for which, by Lemma\,\ref{Lem-1}\,(vii),
\[
\begin{aligned}
A_j\cap{A}_k&=\{z_2\} \quad &&\text{for } &&j,k\in\{m_1+2,\dots,m_1+m_2+1\}, j\neq k,\\
A_j\cap{A}_k&=\emptyset &&\text{for } &&j\in\{1,2,\dots,m_1+1\}\setminus\{j^*\},\\
& && &&k\in\{m_1+2,\dots,m_1+m_2+1\}\\
A_{j^*}\cap{A}_k&=\{z_2\} &&\text{for }
&&k\in\{m_1+2,\dots,m_1+m_2+1\}.
\end{aligned}
\]
If there is another zero $z_3$ of $\T_n'$, $z_3\notin\{z_1,z_2\}$, on $A_{j^{**}}$, $j^{**}\in\{1,2,\dots,m_1+m_2+1\}$, of multiplicity $m_3$, we proceed as before.\\
We proceed like this until we have considered all zeros of $\T'_n$ lying on the constructed set of arcs. Thus, we get a connected set of $k^*+1$ curves
\[
A^*:=A_1\cup{A}_2\cup\ldots\cup{A}_{k^*+1}
\]
with $k^*$ zeros of $\T'_n$, counted with multiplicity, on $A^*$.\\
Next, we claim that $k^*=n-1$. Assume that $k^*<n-1$, then, by assumption, there exists a curve $A_{k^*+2}\in\Gamma$, for which
\[
A_{k^*+2}\cap{A}^*=\{\}
\]
and on which there is another zero of $\T'_n$. By the same procedure as before, we get a set $A^{**}$ of $k^{**}+1$ arcs of $\Gamma$ for which $A^*\cap{A}^{**}=\{\}$ and $k^{**}$ zeros of $\T'_n$, counted with multiplicity. If $k^*+k^{**}=n-1$, then we would get a set of $k^*+k^{**}+2=n+1$ arcs, which is a contradiction to Lemma\,\ref{Lem-1}\,(i). If $k^*+k^{**}<n-1$, we proceed analogously and again, we get too many arcs, i.e., a contradiction to Lemma\,\ref{Lem-1}\,(vi). Thus, $k^*=n-1$ must hold and thus $\Gamma$ is connected.
``$\Rightarrow$'': Suppose that $\Gamma$ is connected. Thus, it is possible to reorder $\Gamma_1,\Gamma_2,\ldots,\Gamma_n$ into $\Gamma_{k_1},\Gamma_{k_2},\ldots,\Gamma_{k_n}$ such that $\Gamma_{k_1}\cup\ldots\cup\Gamma_{k_j}$ is connected for each $j\in\{2,\ldots,n\}$. Now we will count the crossing points (common points) of the arcs in the following way: If there are $m+1$ arcs $A_1,A_2,\ldots,A_{m+1}\in\Gamma$ such that $z_0\in{A}_j$, $j=1,2,\ldots,m+1$, then we will count the crossing point $z_0$ exactly $m$ times, i.e., we say that $A_1,\ldots,A_{m+1}$ have $m$ crossing points. Hence, $\Gamma_{k_1}\cup\Gamma_{k_2}$ has one crossing point, $\Gamma_{k_1}\cup\Gamma_{k_2}\cup\Gamma_{k_3}$ has two crossing points, $\Gamma_{k_1}\cup\Gamma_{k_2}\cup\Gamma_{k_3}\cup\Gamma_{k_4}$ has three crossing points, and so on. Summing up, we arrive at $n-1$ crossing points which are, by Lemma\,\ref{Lem-1}\,(iv), exactly the zeros of $\T_n'$.
\end{proof}
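The criterion of Theorem\,\ref{Theorem-Connectedness} is easy to test at the critical points, since $z$ lies in $\T_n^{-1}([-1,1])$ exactly when $\T_n(z)$ is real and lies in $[-1,1]$. The sketch below (floating point, illustrative only) checks the Chebyshev polynomial $T_3(x)=4x^3-3x$, whose critical values $T_3(\pm 1/2)=\mp 1$ lie in $[-1,1]$ (connected inverse image $[-1,1]$), against the two-interval polynomial $\T_2(z)=(2z^2-a^2-1)/(1-a^2)$ with $a=1/2$, whose critical value $\T_2(0)=-5/3$ does not (inverse image $[-1,-a]\cup[a,1]$, two components):

```python
def in_image(w, tol=1e-12):
    # A critical point z lies in T_n^{-1}([-1,1]) exactly when
    # w = T_n(z) is real and satisfies -1 <= w <= 1.
    return abs(w.imag) < tol and -1 - tol <= w.real <= 1 + tol

# Chebyshev T_3(x) = 4x^3 - 3x: critical points +-1/2, critical values -+1.
crit_T3 = [complex(0.5), complex(-0.5)]
assert all(in_image(4 * z ** 3 - 3 * z) for z in crit_T3)  # connected

# Two-interval example with a = 1/2: the only critical point is 0,
# and T_2(0) = (-a^2 - 1)/(1 - a^2) = -5/3 lies outside [-1, 1].
a = 0.5
T2_0 = complex((2 * 0 - a * a - 1) / (1 - a * a))
assert not in_image(T2_0)   # 0 critical points in the image: k = 2 components
```

The last assertion also illustrates the generalized statement below: $n-k=0$ zeros of $\T_2'$ lie in the inverse image, so it has $k=2$ connected components.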
Theorem\,\ref{Theorem-Connectedness} may be generalized to the question of how many connected components $\T_n^{-1}([-1,1])$ consists of. The proof runs along the same lines as that of Theorem\,\ref{Theorem-Connectedness}.
\begin{theorem}
Let $\T_n\in\PP_n$. The inverse image $\T_n^{-1}([-1,1])$ consists of $k$, $k\in\{1,2,\ldots,n\}$, connected components $B_1,B_2,\ldots,B_k$ with $B_1\cup{B}_2\cup\ldots\cup{B}_k=\T_n^{-1}([-1,1])$ and $B_i\cap{B}_j=\{\}$, $i\neq{j}$, if and only if $n-k$ zeros of the derivative $\T_n'$ lie in $\T_n^{-1}([-1,1])$.
\end{theorem}
\bibliographystyle{amsplain}
| {
"timestamp": "2013-06-26T02:01:40",
"yymm": "1306",
"arxiv_id": "1306.5872",
"language": "en",
"url": "https://arxiv.org/abs/1306.5872",
"abstract": "Given a polynomial $\\T_n$ of degree $n$, consider the inverse image of $\\R$ and $[-1,1]$, denoted by $\\T_n^{-1}(\\R)$ and $\\T_n^{-1}([-1,1])$, respectively. It is well known that $\\T_n^{-1}(\\R)$ consists of $n$ analytic Jordan arcs moving from $\\infty$ to $\\infty$. In this paper, we give a necessary and sufficient condition such that (1) $\\T_n^{-1}([-1,1])$ consists of $\\nu$ analytic Jordan arcs and (2) $\\T_n^{-1}([-1,1])$ is connected, respectively.",
"subjects": "Complex Variables (math.CV)",
"title": "Geometric properties of inverse polynomial images",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9904405986777259,
"lm_q2_score": 0.8080672066194946,
"lm_q1q2_score": 0.8003425678960499
} |
https://arxiv.org/abs/1203.5829 | Ensemble estimators for multivariate entropy estimation | The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size $T$ as the dimension $d$ of the samples increases. In particular, the rate is often glacially slow of order $O(T^{-{\gamma}/{d}})$, where $\gamma>0$ is a rate parameter. Examples of such estimators include kernel density estimators, $k$-nearest neighbor ($k$-NN) density estimators, $k$-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of $O(T^{-1})$. Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. | \section{Introduction}
Non-linear functionals of probability densities $f$ of the form $G(f) = \int g(f(x),x) f(x) dx$ arise in applications of information theory, machine learning, signal processing and statistical estimation. Important examples of such functionals include Shannon $g(f,x)=-\log(f)$ and R\'enyi $g(f,x) = f^{\alpha-1}$ entropy, and the quadratic functional $g(f,x) = f^{2}$. In these applications, the functional of interest often must be estimated empirically from sample realizations of the underlying densities.
Functional estimation has received significant attention in the mathematical statistics community. However, estimators of functionals of multivariate probability densities $f$ suffer from mean square error (MSE) rates which typically decrease with dimension $d$ of the sample as $O(T^{-{\gamma}/{d}})$, where $T$ is the number of samples and $\gamma$ is a positive rate parameter. Examples of such estimators include kernel density estimators~\cite{kde}, $k$-nearest neighbor ($k$-NN) density estimators~\cite{fuk2}, $k$-NN entropy functional estimators~\cite{hero,kks,litt}, intrinsic dimension estimators~\cite{kks}, divergence estimators~\cite{wang}, and mutual information estimators. This slow convergence is due to the curse of dimensionality. In this paper, we introduce a simple affine combination of an ensemble of such slowly convergent estimators and show that the weights in this combination can be chosen to significantly improve the rate of MSE convergence of the weighted estimator. In fact our ensemble averaging method can improve MSE convergence to the parametric rate $O(T^{-1})$.
Specifically, for $d$-dimensional data, it has been observed that the variance of estimators of functionals $G(f)$ decays as $O(T^{-1})$ while the bias decays as $O(T^{-1/(1+d)})$. To accelerate the slow rate of convergence of the bias in high dimensions, we propose a weighted ensemble estimator for ensembles of estimators that satisfy conditions ${\mathscr C}.1$(\ref{BE}) and ${\mathscr C}.2$(\ref{VE}) defined in Sec. II below. Optimal weights, which serve to lower the bias of the ensemble estimator to $O(T^{-1/2})$, can be determined by solving a convex optimization problem. Remarkably, this optimization problem does not involve any density-dependent parameters and can therefore be performed offline. This then ensures MSE convergence of the weighted estimator at the parametric rate of $O(T^{-1})$.
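The bias-cancellation mechanism behind the weighted ensemble can be illustrated with a toy Richardson-style example. The linear bias model, the constant $c$, and the bandwidths below are purely illustrative and are not the conditions ${\mathscr C}.1$/${\mathscr C}.2$ of the paper; the point is only that an affine combination with weights summing to one can annihilate the leading bias term, and that the weights do not depend on the data:

```python
# Toy illustration: two estimates of theta with first-order bias c*h for
# bandwidths h1 != h2.  Affine weights w1 + w2 = 1 are chosen so that
# w1*h1 + w2*h2 = 0, which cancels the O(h) bias term exactly.
theta, c = 2.0, 5.0        # true value and (unknown) bias constant
h1, h2 = 0.1, 0.2

def est(h):
    return theta + c * h   # idealized biased estimator (no variance term)

w1 = h2 / (h2 - h1)        # solves w1 + w2 = 1 and w1*h1 + w2*h2 = 0
w2 = 1 - w1
combined = w1 * est(h1) + w2 * est(h2)

assert abs(w1 + w2 - 1) < 1e-12        # affine combination
assert abs(w1 * h1 + w2 * h2) < 1e-12  # bias-cancelling constraint
assert abs(combined - theta) < 1e-12   # leading bias removed exactly
```

Note that $w_1$ and $w_2$ depend only on the bandwidths, not on the samples or on the unknown density, mirroring the data-independent weights obtained from the convex program in Section~\ref{sec:genmeth}.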
\subsection{Related work}
When the density $f$ is $s > d/4$ times differentiable, certain estimators of functionals of the form $\int g(f(x),x) f(x) dx$, proposed by Birge and Massart~\cite{birge}, Laurent~\cite{laurent} and Gin\'e and Mason~\cite{gini}, can achieve the parametric MSE convergence rate of $O(T^{-1})$. The key ideas in ~\cite{birge,laurent,gini} are: (i) estimation of quadratic functionals $\int f^2(x) dx$ with MSE convergence rate $O(T^{-1})$; (ii) use of kernel density estimators with kernels that satisfy the following symmetry constraints:
\begin{equation}
\int K(x) dx =1, \hspace{0.5in} \int x^r K(x) dx = 0,
\label{eq:symmetrickernels}
\end{equation} for $r=1,\ldots,s$; and finally (iii) truncation of the kernel density estimate so that it is bounded away from $0$. Using these ideas, the estimators proposed in~\cite{birge,laurent,gini} are able to achieve parametric convergence rates.
In contrast, the estimators proposed in this paper require additional higher-order smoothness conditions on the density, i.e., the density must be $s>d$ times differentiable. However, our estimators are much simpler to implement than those of~\cite{birge,laurent,gini}. In particular, the estimators in~\cite{birge,laurent,gini} require separately estimating quadratic functionals of the form $\int f^2(x) dx$ and using truncated kernel density estimators with symmetric kernels (\ref{eq:symmetrickernels}), conditions that are not required in this paper. Our estimator is a simple affine combination of an ensemble of estimators, where the ensemble satisfies conditions ${\mathscr C}.1$ and ${\mathscr C}.2$. Such an ensemble can be trivial to construct: in this paper we show that simple uniform kernel plug-in estimators (\ref{eq:plugin}) satisfy conditions ${\mathscr C}.1$ and ${\mathscr C}.2$.
Ensemble based methods have been previously proposed in the context of classification. For example, in both boosting~\cite{boosting} and multiple kernel learning~\cite{multipleK} algorithms, lower complexity weak learners are combined to produce classifiers with higher accuracy. Our work differs from these methods in several ways. First and foremost, our proposed method performs estimation rather than classification. An important consequence of this is that the weights we use are {\emph {data independent}}, while the weights in boosting and multiple kernel learning must be estimated from training data since they depend on the unknown distribution.
\subsection{Organization}
The remainder of the paper is organized as follows. We formally describe the weighted ensemble estimator for a general ensemble of estimators in Section~\ref{sec:genmeth}, and specify conditions ${\mathscr C}.1$ and ${\mathscr C}.2$ on the ensemble that ensure that the ensemble estimator has a faster rate of MSE convergence. Under the assumption that conditions ${\mathscr C}.1$ and ${\mathscr C}.2$ are satisfied, we provide an MSE-optimal set of weights as the solution to a convex optimization problem~(\ref{convexsoltheory}). Next, we shift the focus to entropy estimation in Section~\ref{sec:entropyest}, propose an ensemble of simple uniform kernel plug-in entropy estimators, and show that this ensemble satisfies conditions ${\mathscr C}.1$ and ${\mathscr C}.2$. Subsequently, we apply the ensemble estimator theory in Section~\ref{sec:genmeth} to the problem of entropy estimation using this ensemble of kernel plug-in estimators. We present simulation results in Section~\ref{sec:exp} that illustrate the superior performance of this ensemble entropy estimator in the context of (i) estimation of the Panter-Dite distortion-rate factor~\cite{riten} and (ii) testing the probability distribution of a random sample. We conclude the paper in Section~\ref{sec:con}.
\subsection*{Notation}
We will use bold face type to indicate random variables and random vectors and regular type face for constants. We denote the statistical expectation operator by the symbol ${{\mathbb{E}}}$ and the conditional expectation given random variable $\mathbf{Z}$ using the notation ${{\mathbb{E}}}_{\mathbf{Z}}$. We also define the variance operator as ${{\mathbb{V}}}[\mathbf{X}] = {{\mathbb{E}}}[(\mathbf{X}-{{\mathbb{E}}}[\mathbf{X}])^2] $ and the covariance operator as $\mathrm{Cov}[\mathbf{X},\mathbf{Y}] = {{\mathbb{E}}}[(\mathbf{X}-{{\mathbb{E}}}[\mathbf{X}])(\mathbf{Y}-{{\mathbb{E}}}[\mathbf{Y}])] $. We denote the bias of an estimator by $\mathbb{B}$.
\section{Ensemble estimators}
\label{sec:problem}
\label{sec:genmeth}
Let $\bar{l} = \{l_1,\ldots,l_L\}$ denote a set of parameter values. For a parameterized ensemble of estimators $\{\hat{\mathbf{E}}_l\}_{l \in \bar{l}}$ of $E$, define the weighted ensemble estimator with respect to weights $w = \{w(l_1),\ldots,w(l_L)\}$ as $$\hat{\mathbf{E}}_w = \sum_{l \in \bar{l}} w(l)\hat{\mathbf{E}}_l$$ where the weights satisfy $\sum_{l \in \bar{l}} w(l) = 1$. This latter sum-to-one condition guarantees that $\hat{\mathbf{E}}_w$ is asymptotically unbiased if the component estimators $\{\hat{\mathbf{E}}_l\}_{l \in \bar{l}}$ are asymptotically unbiased. Let this ensemble of estimators $\{\hat{\mathbf{E}}_l\}_{l \in \bar{l}}$ satisfy the following two conditions:
\begin{itemize}
\item ${\mathscr C}.1$ The bias is given by
\begin{eqnarray}
\label{Bias_Ensemble}
\label{BE}
\mathbb{B}({\hat{\mathbf{E}}}_{l}) &=& \sum_{i \in {\cal I}} c_{i}\psi_i(l)T^{-{i/2d}} + O(1/\sqrt{T}),
\end{eqnarray}
where $c_{i}$ are constants that depend on the underlying density, ${\cal I} = \{i_1,\ldots,i_I\}$ is a finite index set with cardinality $I<L$, $\min({\cal I}) = i_1 > 0$ and $\max({\cal I}) = i_I \leq d$, and $\psi_i(l)$ are basis functions that depend only on the estimator parameter $l$.
\item ${\mathscr C}.2$ The variance is given by
\begin{eqnarray}
\label{Variance_Ensemble}
\label{VE}
\mathbb{V}({\hat{\mathbf{E}}}_{l}) &=& c_{v}\left({\frac{1}{T}}\right) + o\left(\frac{1}{T}\right).
\end{eqnarray}
\end{itemize}
\begin{theorem}
\label{lemma:weightedensemble}
For an ensemble of estimators $\{\hat{\mathbf{E}}_l\}_{l \in \bar{l}}$, assume that the conditions ${\mathscr C}.1$ and ${\mathscr C}.2$ hold. Then, there exists a weight vector $w_o$ such that
$${{\mathbb{E}}}[(\hat{\mathbf{E}}_{w_o} - E)^2] = O(1/T).$$ This weight vector can be found by solving the following convex optimization problem:
\begin{equation}
\begin{aligned}
& \underset{w}{\text{minimize}}
& & ||w||_2 \\
& \text{subject to}
& & \sum_{l \in \bar{l}} w(l) = 1, \\
&&& \gamma_w(i) = \sum_{l \in \bar{l}} w(l)\psi_i(l) = 0, \; i \in {\cal I},
\end{aligned}
\label{convexsoltheory}
\end{equation}
where $\psi_i(l)$ are the basis functions defined in condition ${\mathscr C}.1$~(\ref{BE}).
\end{theorem}
\begin{proof}
The bias of the ensemble estimator is given by
\begin{eqnarray}
\label{Bias_EnsembleEstimate}
\mathbb{B}({\hat{\mathbf{E}}}_{w}) &=& \sum_{i \in {\cal I}} c_{i}\gamma_w(i)T^{-{i/2d}} + O\left(\frac{||w||_1}{\sqrt{T}}\right) \nonumber \\
&=& \sum_{i \in {\cal I}} c_{i}\gamma_w(i)T^{-{i/2d}} + O\left(\frac{\sqrt{L}||w||_2}{\sqrt{T}}\right)
\end{eqnarray}
Denote the covariance matrix of $\{\hat{\mathbf{E}}_l; l \in \bar{l}\}$ by $\Sigma_L$. Let $\bar{\Sigma}_L = \Sigma_LT$. Observe that by (\ref{Variance_Ensemble}) and the Cauchy-Schwarz inequality, the entries of $\bar{\Sigma}_L$ are $O(1)$. The variance of the weighted estimator $\hat{\mathbf{E}}_w$ can then be bounded as follows:
\begin{eqnarray}
\label{eq:var}
{{\mathbb{V}}}(\hat{\mathbf{E}}_w) &=& {{\mathbb{V}}}(\sum_{l \in \bar{l}} w_l\hat{\mathbf{E}}_l) = w' \Sigma_L w = \frac{w' \bar{\Sigma}_L w}{T}\nonumber \\
&\leq& \frac{\lambda_{\max}(\bar{\Sigma}_L)||w||^2_2}{T} \leq \frac{\operatorname{trace}(\bar{\Sigma}_L)||w||^2_2}{T} = O\left(\frac{L||w||^2_2}{T}\right)
\end{eqnarray}
We seek a weight vector $w$ that (i) ensures that the bias of the weighted estimator is $O(T^{-1/2})$ and (ii) has low $\ell_2$ norm $||w||_2$ in order to limit the contribution of the variance and of the higher-order bias terms of the weighted estimator. To this end, let $w_{o}$ be the solution to the convex optimization problem defined in (\ref{convexsoltheory}). Equivalently, $w_o$ solves
\begin{equation}
\begin{aligned}
& \underset{w}{\text{minimize}}
& & ||w||^2_2 \\
& \text{subject to}
& & A_0w = b,
\end{aligned}
\nonumber
\label{convexsol2}
\end{equation}
where $A_0$ and $b$ are defined below. Let $a_0$ be the all-ones row vector $[1,1,\ldots,1]_{1 \times L}$, and for each $i \in {\cal I}$ let $a_{i} = [\psi_i(l_1),\ldots,\psi_i(l_L)]$. Define $A_0 = [a'_0, a'_{i_1}, \ldots, a'_{i_I}]'$, $A_1 = [a'_{i_1}, \ldots, a'_{i_I}]'$ and $b = [1,0,\ldots,0]'_{(I+1) \times 1}$.
Since $L > I$, the system of equations $A_0 w = b$ is guaranteed to have at least one solution (assuming linear independence of the rows $a_i$). The minimum squared norm $\eta_L(d) := ||w_o||_2^2$ is then given by
$$\eta_L(d) = {\frac{\text{det}(A_1A'_1)}{\text{det}(A_0A'_0)}}.$$
Consequently, by (\ref{Bias_EnsembleEstimate}), the bias satisfies $\mathbb{B}[\hat{\mathbf{E}}_{w_o}] = O(\sqrt{L\eta_L(d)}/\sqrt{T})$, and by (\ref{eq:var}), the estimator variance satisfies ${{\mathbb{V}}}[\hat{\mathbf{E}}_{w_o}] = O(L\eta_L(d)/T)$. The overall MSE is therefore also of order $O(L\eta_L(d)/T)$.
For any fixed dimension $d$ and any fixed number of estimators $L>I$ chosen independently of the sample size $T$, the value of $\eta_L(d)$ is independent of $T$, so that $L\eta_L(d) = \Theta(1)$ and the MSE is $O(1/T)$. This concludes the proof.
\end{proof}
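As a concrete illustration, the constrained minimum-norm program above can be solved in closed form with a pseudoinverse, since it is a least-norm problem over the affine set $A_0 w = b$. The sketch below assumes the polynomial basis $\psi_i(l) = l^{i/d}$ used later in Section~\ref{sec:entropyest}; the function name and parameter values are illustrative, not part of the paper.

```python
import numpy as np

def optimal_weights(l_vals, index_set, d):
    """Minimum-norm solution of A0 w = b, i.e. the optimal weights w_o of
    the convex program in Theorem 1, for the basis psi_i(l) = l**(i/d)."""
    l_vals = np.asarray(l_vals, dtype=float)
    L = len(l_vals)
    # First row enforces sum_l w(l) = 1; row i zeroes out gamma_w(i).
    A0 = np.vstack([np.ones(L)] + [l_vals ** (i / d) for i in index_set])
    b = np.zeros(A0.shape[0])
    b[0] = 1.0
    # The pseudoinverse yields the minimum l2-norm solution of A0 w = b.
    return np.linalg.pinv(A0) @ b

d = 4
w = optimal_weights(np.linspace(0.5, 2.0, 10), range(1, d), d)  # L=10 > I=3
```

Here `np.linalg.pinv` returns the least-norm solution directly; its squared norm coincides with the determinant formula for $\eta_L(d)$ above.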
In the next section, we will verify conditions $\mathscr C.1$(\ref{BE}) and $\mathscr C.2$(\ref{VE}) for plug-in estimators $\hat{\mathbf{G}}_{k}(f)$ of entropy-like functionals $G(f) = \int g(f(x),x) f(x) dx$.
\section{Application to estimation of functionals of a density}
\label{sec:entropyest}
Our focus is the estimation of general non-linear functionals $G(f)$ of $d$-dimensional multivariate densities $f$ with known finite support ${\cal S} = [a,b]^d$, where $G(f)$ has the form
\begin{equation}
\label{eq:oracle}
G(f) = \int g(f(x),x) f(x) dx,
\end{equation}
for some smooth function $g(f,x)$. Let ${\cal B}$ denote the boundary of ${\cal S}$. Assume that $T=N+M$ i.i.d.\ realizations $\{\mathbf{X}_1, \ldots, \mathbf{X}_N, \mathbf{X}_{N+1}, \ldots, \mathbf{X}_{N+M}\}$ are available from the density $f$.
\subsection{Plug-in estimators of entropy}
\label{sec:weightedpluginest}
The truncated \emph{uniform kernel} density estimator is defined as follows. For any positive real number $k \leq M$, define the distance $d_k = (k/M)^{1/d}$. Define the truncated kernel region for each $X \in {\cal S}$ as $S_k(X) = \{Y \in {\cal S} : ||X-Y||_\infty \leq d_k/2\}$, and the volume of the truncated uniform kernel as $V_k(X) = \int_{S_k(X)} dz$. Note that when the smallest distance from $X$ to ${\cal B}$ is greater than $d_k/2$, we have $V_k(X) = d_k^d = k/M$. Let $\mathbf{l}_k(X)$ denote the number of samples falling in $S_k(X)$: $\mathbf{l}_k(X) = \sum_{i=1}^{M} 1_{\{\mathbf{X}_i \in S_k(X)\}}$. The truncated {uniform kernel} density estimator is then defined as
\begin{equation}
\hat{\mathbf{f}}_{k}(X) = \frac{\mathbf{l}_k(X)}{MV_k(X)}.
\end{equation}
The plug-in estimator of the density functional is constructed using a data splitting approach as follows. The data is randomly subdivided into two parts $\{\mathbf{X}_1, \ldots, \mathbf{X}_N\}$ and $\{\mathbf{X}_{N+1}, \ldots, \mathbf{X}_{N+M}\}$ of $N$ and $M$ points respectively. In the first stage, we form the kernel density estimate ${\hat{\mathbf{f}}_k}$ at the $N$ points $\{\mathbf{X}_1, \ldots, \mathbf{X}_N\}$ using the $M$ realizations $\{\mathbf{X}_{N+1}, \ldots, \mathbf{X}_{N+M}\}$. Subsequently, we use the $N$ samples $\{\mathbf{X}_1, \ldots, \mathbf{X}_N\}$ to approximate the functional $G(f)$ and obtain the plug-in estimator:
\begin{eqnarray}
\label{eq:plugin}
\hat{\mathbf{G}}_k &=& \frac{1}{N}\sum_{i=1}^N g({\hat{\mathbf{f}}{_k}(\mathbf{X}_i)},\mathbf{X}_i).
\end{eqnarray}
Also define a standard kernel density estimator $\tilde{\mathbf{f}}_k$, which is identical to $\hat{\mathbf{f}}_k$ except that the volume $V_k(X)$ is always set to the untruncated value $V_k(X) = k/M$. Define
\begin{eqnarray}
\label{eq:plugin2}
\tilde{\mathbf{G}}_k &=& \frac{1}{N}\sum_{i=1}^N g({\tilde{\mathbf{f}}{_k}(\mathbf{X}_i)},\mathbf{X}_i).
\end{eqnarray}
The estimator $\tilde{\mathbf{G}}_k$ is identical to the estimator of Gy{\"{o}}rfi and van der Meulen~\cite{gyrofi}. Observe that the implementation of $\tilde{\mathbf{G}}_k$, unlike $\hat{\mathbf{G}}_k$, does not require knowledge about the support of the density.
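To make the construction concrete, the following sketch implements both the truncated uniform kernel density estimator and the data-split plug-in estimator, assuming the support ${\cal S}=[0,1]^d$. The function names, the choice of test density, and the bandwidth are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def f_hat(X, data, k, truncate=True):
    """Uniform kernel density estimate at the point X, built from the M
    rows of `data`, on the assumed support S = [0,1]^d.  truncate=True
    clips the kernel volume to S (the truncated estimator); truncate=False
    fixes the volume at k/M (the standard estimator)."""
    M, d = data.shape
    dk = (k / M) ** (1.0 / d)                 # kernel cube side length
    lo, hi = X - dk / 2.0, X + dk / 2.0
    count = np.all((data >= lo) & (data <= hi), axis=1).sum()
    if truncate:
        # Volume of the kernel cube intersected with the unit cube.
        vol = np.prod(np.minimum(hi, 1.0) - np.maximum(lo, 0.0))
    else:
        vol = k / M
    return count / (M * vol)

def G_hat(g, samples, k, n_split, truncate=True):
    """Data-split plug-in estimator: the first n_split samples evaluate
    the functional, the remaining M build the density estimate."""
    eval_pts, density_pts = samples[:n_split], samples[n_split:]
    return float(np.mean([g(f_hat(x, density_pts, k, truncate), x)
                          for x in eval_pts]))

# Example: Shannon entropy of the uniform density on [0,1]^2 (true value 0).
g_shannon = lambda f, x: -np.log(f) if f > 0 else 1.0
rng = np.random.default_rng(0)
samples = rng.random((2000, 2))
est = G_hat(g_shannon, samples, k=10, n_split=1000)   # close to 0
```

Running the same example with `truncate=False` exhibits the boundary bias of the standard estimator, since the fixed volume $k/M$ undercounts near ${\cal B}$.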
\subsubsection{Assumptions}
\label{sec:assump}
We make a number of technical assumptions that will allow us to obtain tight MSE convergence rates for the kernel density estimators defined above.

$({\cal {A}}.0)$: Assume that $k = k_0M^\beta$ for some rate constant $0<\beta<1$, and assume that $M$, $N$ and $T$ are linearly related through a proportionality constant $\alpha_{frac}$ with $0 < \alpha_{frac} < 1$: $M = \alpha_{frac}T$ and $N = (1-\alpha_{frac})T$.

$({\cal {A}}.1)$: Let the density $f$ be uniformly bounded away from $0$ and upper bounded on the set ${\cal S}$, i.e., there exist constants $\epsilon_0$, $\epsilon_\infty$ such that $0 < \epsilon_0 \leq f(x) \leq \epsilon_\infty < \infty$ for all $x \in {\cal S}$.

$({\cal {A}}.2)$: Assume that the density $f$ has continuous partial derivatives of order $d$ in the interior of the set ${\cal S}$, and that these derivatives are upper bounded.

$({\cal {A}}.3)$: Assume that the function $g(f,x)$ has $\max\{\lambda,d\}$ partial derivatives w.r.t.\ the argument $f$, where $\lambda$ satisfies the condition $\lambda \beta>1$. Denote the $n$-th partial derivative of $g(f,x)$ w.r.t.\ $f$ by $g^{(n)}(f,x)$.

$({\cal {A}}.4)$: Assume that the absolute values of the functional $g(f,x)$ and of its partial derivatives are strictly upper bounded in the range $\epsilon_0 \leq f \leq \epsilon_\infty$ for all $x$.

$({\cal {A}}.5)$: Let $\epsilon \in (0,1)$ and $\delta \in (2/3,1)$, and let ${\cal C}(M)$ be a positive function satisfying ${\cal C}(M) = \Theta(\exp(-M^{\beta(1-\delta)}))$. Define $p_l = (1-\epsilon)\epsilon_0$ and $p_u = (1+\epsilon)\epsilon_\infty$. Assume that the conditions
$$ (i) \sup_{x}|h(0,x)| < G_1 < \infty, $$
$$ (ii) \sup_{f \in (p_l,p_u),x}|h(f,x)| < G_2 < \infty, $$
$$(iii) \sup_{f \in (1/k,p_u),x }|h(f,x)|{\cal C}(M) < G_3 < \infty \hspace{0.15in} \forall M,$$
$$(iv)\sup_{f \in (p_l,2^dM/k),x} |h(f,x)|{\cal C}(M) < G_4 < \infty \hspace{0.15in} \forall M,$$ are satisfied by
$h(f,x) = g(f,x), g^{(3)}(f,x)$ and $g^{(\lambda)}(f,x)$, for some constants $G_1$, $G_2$, $G_3$ and $G_4$.
These assumptions are comparable to those in other rigorous treatments of entropy estimation. Assumption $({\cal {A}}.0)$ is equivalent to choosing the bandwidth of the kernel to be a fractional power of the sample size~\cite{raykar}. The remaining assumptions can be divided into two categories: (i) assumptions on the density $f$, and (ii) assumptions on the functional $g$. The assumptions on the smoothness and on the boundedness away from $0$ and $\infty$ of the density $f$ are similar to the assumptions made by other estimators of entropy~\cite{beir}. The assumptions on the functional $g$ ensure that $g$ is sufficiently smooth and that the estimator is bounded. These assumptions are readily satisfied by the common functionals of interest in the literature: the Shannon entropy $g(f,x) = - \log(f)I(f>0) + I(f=0)$ and the R\'enyi entropy $g(f,x) = f^{\alpha-1}I(f>0) + I(f=0)$, where $I(\cdot)$ is the indicator function, as well as the quadratic functional $g(f,x) = f^{2}$.
\subsubsection{Analysis of MSE}
\label{sec:mseanal}
Under the assumptions stated above, we have shown the following in the Appendix:
\begin{theorem}
\label{knnbiasH}
The biases of the plug-in estimators $\hat{\mathbf{G}}_{k}, \tilde{\mathbf{G}}_{k}$ are given by
\begin{eqnarray}
\label{Biasu}
\mathbb{B}(\hat{\mathbf{G}}_{k}) &=& \sum_{i=1}^{d} c_{1,i}\left({\frac{k}{M}}\right)^{i/d} + \frac{c_{2}}{k} + o\left(\frac{1}{k} + \frac{k}{M}\right), \nonumber \\
\mathbb{B}(\tilde{\mathbf{G}}_{k}) &=& c_{1}\left({\frac{k}{M}}\right)^{1/d} + \frac{c_{2}}{k} + o\left(\frac{1}{k} + \frac{k}{M}\right), \nonumber
\end{eqnarray}
where $c_{1,i}$, $c_1$ and $c_2$ are constants that depend on $g$ and $f$.
\end{theorem}
\begin{theorem}
\label{knnvarH}
The variances of the plug-in estimators $\hat{\mathbf{G}}_{k}, \tilde{\mathbf{G}}_{k}$ are identical up to leading terms, and are given by
\begin{eqnarray}
\label{Variance}
{{\mathbb{V}}}(\hat{\mathbf{G}}_{k}) &=& c_4\left(\frac{1}{N}\right)+ c_5\left(\frac{1}{M}\right) + o\left(\frac{1}{M} + \frac{1}{N}\right) \nonumber \\
{{\mathbb{V}}}(\tilde{\mathbf{G}}_{k}) &=& c_4\left(\frac{1}{N}\right)+ c_5\left(\frac{1}{M}\right) + o\left(\frac{1}{M} + \frac{1}{N}\right), \nonumber
\end{eqnarray}
where $c_4$ and $c_5$ are constants that depend on $g$ and $f$.
\end{theorem}
\subsubsection{Optimal MSE rate}
From Theorem \ref{knnbiasH}, observe that the conditions $k \to \infty$ and $k/M \to 0$ are necessary for the estimators $\hat{\mathbf{G}}_{k}$ and $\tilde{\mathbf{G}}_{k}$ to be asymptotically unbiased. Likewise, from Theorem \ref{knnvarH}, the conditions $N \to \infty$ and $M \to \infty$ are necessary for the variance of the estimators to converge to $0$. Below, we optimize the choice of bandwidth $k$ for minimum MSE, and also show that the optimal MSE rate is invariant to the choice of $\alpha_{frac}$.
\paragraph{Optimal choice of $k$}
Since the leading term of the variance in Theorem~\ref{knnvarH} does not depend on $k$, minimizing the MSE over $k$ is equivalent to minimizing the square of the bias over $k$. The optimal choice of $k$ is given by
\begin{eqnarray}
\label{kopt}
k_{opt} &=& \Theta\left(M^{1/(1+d)}\right),
\end{eqnarray}
and the bias evaluated at $k_{opt}$ is $\Theta(M^{-1/(1+d)})$.
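The rate in (\ref{kopt}) comes from balancing the two leading bias terms $(k/M)^{1/d}$ and $1/k$; a short numerical check, with illustrative values of $M$ and $d$ that are not taken from the paper's experiments, confirms that the two terms coincide at $k = M^{1/(1+d)}$:

```python
# With k = M**(1/(1+d)) the leading bias terms (k/M)**(1/d) and 1/k of
# the plug-in estimator coincide, both equal to M**(-1/(1+d)).
M, d = 10**6, 4
k = M ** (1.0 / (1 + d))
term_kernel = (k / M) ** (1.0 / d)   # (k/M)^{1/d}
term_inv = 1.0 / k                   # 1/k
print(term_kernel, term_inv)         # both ~ M**(-1/(1+d))
```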
\paragraph{Choice of $\alpha_{frac}$}
Observe that the MSEs of $\hat{\mathbf{G}}_{k}$ and $\tilde{\mathbf{G}}_{k}$ are dominated by the squared bias $\Theta(M^{-2/(1+d)})$ rather than by the variance $\Theta(1/N+1/M)$. This implies that the asymptotic MSE rate of convergence is invariant to the choice of the proportionality constant $\alpha_{frac}$.
In view of the two observations above, the optimal MSE for the estimators $\hat{\mathbf{G}}{_k}$ and $\tilde{\mathbf{G}}{_k}$ is achieved for the choice $k=\Theta(M^{1/(1+d)})$, and is given by $\Theta(T^{-2/(1+d)})$. Our goal is to reduce the estimator MSE to $O(T^{-1})$. We do so by applying the method of weighted ensembles described in Section~\ref{sec:genmeth}.
\subsection{Weighted ensemble entropy estimator}
For a positive integer $L > I = d-1$, choose $\bar{l} = \{l_1, \ldots, l_L\}$ to be positive real numbers. Define the mapping $k(l) = l\sqrt{M}$ and let $\bar{k} = \{k(l); l \in \bar{l}\}$. Define the weighted ensemble estimator
\begin{equation}
\hat{\mathbf{G}}_w = \sum_{l \in \bar{l}} w(l)\hat{\mathbf{G}}_{k(l)}.
\label{eq:weightedensemble}
\end{equation}
From Theorems \ref{knnbiasH} and \ref{knnvarH}, we see that the biases of the ensemble of estimators $\{\hat{\mathbf{G}}_{k(l)}; l \in \bar{l}\}$ satisfy ${\mathscr C}.1$(\ref{BE}) when we set $\psi_i(l) = l^{i/d}$ and ${\cal I} = \{1,\ldots,d-1\}$. Furthermore, the general form of the variance of $\hat{\mathbf{G}}_{k(l)}$ follows ${\mathscr C}.2$(\ref{VE}) because $N, M = \Theta(T)$. This implies that we can use the weighted ensemble estimator $\hat{\mathbf{G}}_w$ to estimate entropy at the $O(L\eta_L(d)/T)$ convergence rate by setting $w$ equal to the optimal weight $w_o$ given by (\ref{convexsoltheory}).
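The bias cancellation that drives Theorem~\ref{lemma:weightedensemble} can be checked numerically on synthetic component estimates whose biases follow condition ${\mathscr C}.1$ exactly with $\psi_i(l) = l^{i/d}$. The constants $c_i$, the true value, and the parameter grid below are arbitrary choices for illustration only:

```python
import numpy as np

# Synthetic component "estimates" with biases c_i * l**(i/d) * T**(-i/(2d)),
# exactly the structure of condition C.1; the constants are arbitrary.
d, L, T = 4, 10, 10_000
I_set = list(range(1, d))                 # I = {1, ..., d-1}
l_vals = np.linspace(0.5, 2.0, L)
c = {1: 2.0, 2: -1.5, 3: 0.7}
true_value = 1.0
bias = sum(c[i] * l_vals ** (i / d) * T ** (-i / (2 * d)) for i in I_set)
components = true_value + bias            # visibly biased component estimates

# Minimum-norm weights with sum(w) = 1 and gamma_w(i) = 0 for i in I_set.
A0 = np.vstack([np.ones(L)] + [l_vals ** (i / d) for i in I_set])
b = np.zeros(len(I_set) + 1)
b[0] = 1.0
w = np.linalg.pinv(A0) @ b

ensemble = w @ components                 # the bias terms cancel
```

Because each bias term is a fixed multiple of a basis vector $[\psi_i(l_1),\ldots,\psi_i(l_L)]$, the constraints $\gamma_w(i)=0$ remove it exactly, whatever the unknown constants $c_i$ are.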
\section{Experiments}
\label{sec:exp}
We illustrate the superior performance of the proposed weighted ensemble estimator for two applications: (i) estimation of the Panter-Dite rate distortion factor, and (ii) estimation of entropy to test for randomness of a random sample.
For finite $T$, direct use of Theorem~\ref{lemma:weightedensemble} can lead to excessively high variance. This is because forcing the constraints $\gamma_w(i)=0$ in (\ref{convexsoltheory}) is too strong and, in fact, not necessary. The careful reader may notice that to obtain the $O(T^{-1})$ MSE convergence rate in Theorem~\ref{lemma:weightedensemble} it is sufficient that $\gamma_w(i)$ be of order $O(T^{-1/2+i/2d})$. Therefore, in practice we determine the optimal weights according to the optimization:
\begin{equation}
\begin{split}
\min_w \quad &\epsilon\\
\text{subject to} \quad &\gamma_w(0) = 1,\\
&\lvert \gamma_w(i) T^{1/2 - i/2d} \rvert \leq \epsilon, \quad i \in \mathcal{I},\\
&\lVert w \rVert_2^2 \leq \eta.
\end{split}
\label{convexsol}
\end{equation}
The optimization \eqref{convexsol} is also convex. Note that, in contrast to \eqref{convexsoltheory}, the norm of the weight vector $w$ is bounded instead of minimized. By relaxing the constraints $\gamma_w(i)=0$ in \eqref{convexsoltheory} to the softer constraints in \eqref{convexsol}, the upper bound $\eta$ on $\lVert w \rVert_2^2$ can be reduced below the value $\eta_L(d)$ obtained by solving \eqref{convexsoltheory}. This results in a more favorable trade-off between bias and variance for moderate sample sizes. In our experiments, we find that setting $\eta = 3d$ yields good MSE performance. Note that as $T \to \infty$, we must have $\gamma_w(i) \to 0$ for $i \in \mathcal{I}$ in order to keep $\epsilon$ finite, thus recovering the strict constraints in \eqref{convexsoltheory}.
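A sketch of how \eqref{convexsol} can be solved with an off-the-shelf solver is given below, using SciPy's SLSQP routine and the basis $\psi_i(l)=l^{i/d}$; each absolute-value constraint is split into two smooth linear inequalities. The function name and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def relaxed_weights(l_vals, d, T, eta):
    """Minimize epsilon over (w, epsilon) subject to sum(w) = 1,
    |gamma_w(i)| * T**(1/2 - i/(2d)) <= epsilon for i = 1, ..., d-1,
    and ||w||_2^2 <= eta."""
    l_vals = np.asarray(l_vals, dtype=float)
    L = len(l_vals)
    psi = [l_vals ** (i / d) for i in range(1, d)]
    scale = [T ** (0.5 - i / (2.0 * d)) for i in range(1, d)]

    cons = [{"type": "eq", "fun": lambda z: z[:L].sum() - 1.0},
            {"type": "ineq", "fun": lambda z: eta - z[:L] @ z[:L]}]
    for row, s in zip(psi, scale):
        # |gamma_w(i)| * scale <= eps, written as two linear inequalities.
        cons.append({"type": "ineq",
                     "fun": lambda z, r=row, s=s: z[L] - s * (r @ z[:L])})
        cons.append({"type": "ineq",
                     "fun": lambda z, r=row, s=s: z[L] + s * (r @ z[:L])})

    z0 = np.concatenate([np.full(L, 1.0 / L), [1.0]])  # start from uniform w
    res = minimize(lambda z: z[L], z0, method="SLSQP", constraints=cons)
    return res.x[:L], res.x[L]

w, eps = relaxed_weights(np.linspace(0.3, 3.0, 20), d=6, T=3000, eta=18.0)
```

Here $\eta = 3d = 18$ follows the rule of thumb above; a dedicated conic solver would also handle \eqref{convexsol} directly, since it is a second-order cone program.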
For fixed sample size $T$ and dimension $d$, observe that increasing $L$ increases the number of degrees of freedom in the convex problem \eqref{convexsol}, and will therefore result in a smaller value of $\epsilon$ and, in turn, improved estimator performance. In our simulations, we choose $\bar{l}$ to be $L=50$ equally spaced values between $0.3$ and $3$, i.e., the $l_i$ are uniformly spaced as $$l_i = \frac{x}{a} + \frac{(a-1)ix}{aL}, \quad i=1,\ldots,L,$$ with scale and range parameters $a=10$ and $x=3$ respectively. We limit $L$ to $50$ because we find that the gains beyond $L=50$ are negligible. The reason for this diminishing return is the increasing similarity among the entries in $\bar{l}$, which translates into increasingly similar basis functions $\psi_i(l) = l^{i/d}$.
\subsection{Panter-Dite factor estimation}
\begin{figure}[!t]
\centering
\subfigure[\small{Variation of MSE of Panter-Dite factor estimates as a function of sample size $T$. From the figure, we see that the proposed weighted estimator has the fastest MSE rate of convergence w.r.t.\ sample size $T$ ($d=6$).}]{
\includegraphics[scale=.25]{MSEvsT.png}
\label{a-compare}
}
\subfigure[\small{Variation of MSE of Panter-Dite factor estimates as a function of dimension $d$. From the figure, we see that the MSE of the proposed weighted estimator has the slowest rate of growth with increasing dimension $d$ ($T=3000$).}]{
\includegraphics[scale=.25]{MSEvsd.png}
\label{ad-compare}
}
\caption{Variation of MSE of Panter-Dite factor estimates using the standard kernel plug-in estimator~\cite{gyrofi}, the truncated kernel plug-in estimator~(\ref{eq:plugin}), the histogram plug-in estimator~\cite{gyrofi0}, the $k$-NN estimator~\cite{leo2}, the entropic graph estimator~\cite{heroJ,pal} and the weighted ensemble estimator~(\ref{eq:weightedensemble}).}
\end{figure}
For a $d$-dimensional source with underlying density $f$, the Panter-Dite distortion-rate function~\cite{riten} for a $q$-dimensional vector quantizer with $n$ levels of quantization is given by $ \delta(n)= n^{-2/q} \int f^{q/(q+2)}(x) dx. $ The Panter-Dite factor corresponds to the functional $G(f)$ with $g(f,x) = n^{-2/q} f^{-2/(q+2)}I(f>0) + I(f=0)$. The Panter-Dite factor is directly related to the R\'{e}nyi $\alpha$-entropy, for which several other estimators have been proposed~\cite{gyrofi0,heroJ,pal,leo2}.
In our simulations we compare six functional estimators: the three estimators introduced above, namely (i) the standard kernel plug-in estimator $\tilde{\mathbf{G}}_k$, (ii) the boundary-truncated plug-in estimator $\hat{\mathbf{G}}_k$ and (iii) the weighted estimator $\hat{\mathbf{G}}_w$ with optimal weight $w = w^*$ given by (\ref{convexsol}); and, in addition, the following popular entropy estimators: (iv) the histogram plug-in estimator~\cite{gyrofi0}, (v) the $k$-nearest neighbor ($k$-NN) entropy estimator~\cite{leo2} and (vi) the entropic $k$-NN graph estimator~\cite{heroJ,pal}. For both $\tilde{\mathbf{G}}_k$ and $\hat{\mathbf{G}}_k$, we select the bandwidth parameter $k$ according to the optimal proportionality $k=M^{1/(1+d)}$ with $N=M=T/2$.
We choose $f$ to be the $d$-dimensional mixture density $f(a,b,p,d) = pf_\beta(a,b,d) + (1-p)f_u(d)$, where $d=6$, $f_\beta(a,b,d)$ is a $d$-dimensional Beta density with parameters $a=6,b=6$, $f_u(d)$ is a $d$-dimensional uniform density, and the mixing ratio $p$ is $0.8$. We choose the beta-uniform mixture because it trivially satisfies all the assumptions on the density $f$ listed in Section~\ref{sec:assump}, including the assumptions of finite support and strict boundedness away from $0$ on the support. The true value of the Panter-Dite factor $\delta(n)$ for the beta-uniform mixture is calculated by numerical integration using the Mathematica software (http://www.wolfram.com/mathematica/), since evaluating the entropy of the beta-uniform mixture in closed form is not tractable.
The MSE values for each of the six estimators are calculated by averaging the squared error $[\hat{\delta}_i(n) - \delta(n)]^2$, $i=1,..,m$ over $m = 1000$ Monte-Carlo trials, where each $\hat{\delta}_i(n)$ corresponds to an independent instance of the estimator.
\subsubsection{Variation of MSE with sample size $T$}
The MSE results of the different estimators are shown in Fig.~\ref{a-compare} as a function of sample size $T$, for fixed dimension $d=6$. It is clear from the figure that the proposed ensemble estimator $\hat{\mathbf{G}}_w$ has a significantly faster rate of convergence, while the MSEs of the remaining estimators, including the truncated kernel plug-in estimator, exhibit similar, slow rates of convergence. The proposed optimal ensemble averaging therefore significantly accelerates the MSE convergence rate.
\subsubsection{Variation of MSE with dimension $d$}
For fixed sample size $T$ and fixed number of estimators $L$, it can be seen that $\epsilon$ increases monotonically with $d$. This follows from the fact that the number of constraints in the convex problem \eqref{convexsol} is equal to $d+1$, and each of the basis functions $\psi_i(l) = l^{i/d}$ monotonically approaches $1$ as $d$ grows. This in turn implies that for fixed sample size $T$ and number of estimators $L$, the overall MSE of the ensemble estimator should increase monotonically with the dimension $d$.
The MSE results of the different estimators are shown in Fig.~\ref{ad-compare} as a function of dimension $d$, for fixed sample size $T=3000$. For the standard and truncated kernel plug-in estimators, the MSE increases rapidly with $d$, as expected. The MSEs of the histogram and $k$-NN estimators increase at a similar rate, indicating that these estimators suffer from the curse of dimensionality as well. On the other hand, the MSE of the weighted estimator also increases with dimension, as predicted, but at a slower rate. Also observe that the MSE of the weighted estimator is smaller than that of the other estimators for all dimensions $d>3$.
\subsection{Distribution testing}
\begin{figure}[!t]
\centering
\subfigure[\small{Entropy estimates for random samples corresponding to hypothesis $H_0$ (experiments 1-500) and $H_1$ (experiments 501-1000).}]{
\includegraphics[scale=.30]{responsemod.png}
\label{c-compare}
}
\subfigure[\small{Histogram envelopes of entropy estimates for random samples corresponding to hypothesis $H_0$ (blue) and $H_1$ (red).}]{
\includegraphics[scale=.30]{histmod.png}
\label{d-compare}
}
\caption{Entropy estimates using standard kernel plug-in estimator, truncated kernel plug-in estimator and the weighted estimator, for random samples corresponding to hypothesis $H_0$ and $H_1$. The weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance.}
\end{figure}
In this section, we illustrate the weighted ensemble estimator for non-parametric estimation of Shannon differential entropy. The Shannon differential entropy is given by $G(f)$ where $g(f,x) = - \log(f)I(f>0) + I(f=0)$. The improved accuracy of the weighted ensemble estimator is demonstrated in the context of hypothesis testing using estimated entropy as a statistic to test for the underlying probability distribution of a random sample. Specifically, the samples under the null and alternate hypotheses $H_0$ and $H_1$ are drawn from the probability distribution $f(a,b,p,d)$, described in Section IV.A, with fixed $d=6$, $p=0.75$ and two sets of values of $a,b$ under the null and alternate hypothesis, $H_0: a=a_0,b=b_0$ versus $H_1: a=a_1,b=b_1$.
First, we fix $a_0=b_0=6$ and $a_1=b_1=5$. The density under the null hypothesis $f(6,6,0.75,6)$ has greater curvature relative to $f(5,5,0.75,6)$ and therefore has smaller entropy. Five hundred (500) experiments are performed under each hypothesis with each experiment consisting of 1000 samples drawn from the corresponding distribution. The true entropy and estimates $\tilde{\mathbf{G}}_k$, $\hat{\mathbf{G}}_k$ and $\hat{\mathbf{G}}_w$ obtained from each instance of $10^3$ samples are shown in Fig.~\ref{c-compare} for the 1000 experiments. This figure suggests that the ensemble weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance.
To demonstrate that the weighted estimator provides better discrimination, we plot the histogram envelopes of the entropy estimates obtained using the standard kernel plug-in estimator, the truncated kernel plug-in estimator and the weighted estimator under hypotheses $H_0$ (color coded {blue}) and $H_1$ (color coded {red}) in Fig.~\ref{d-compare}. Furthermore, we quantitatively measure the discriminative ability of the different estimators using the deflection statistic $ds = {|\mu_1-\mu_0|}/{\sqrt{\sigma_0^2+\sigma_1^2}},$ where $\mu_0$ and $\sigma_0$ (respectively $\mu_1$ and $\sigma_1$) are the sample mean and standard deviation of the entropy estimates under $H_0$ (respectively $H_1$). The deflection statistic was found to be $1.49$, $1.60$ and $1.89$ for the standard kernel plug-in estimator, the truncated kernel plug-in estimator and the weighted estimator, respectively. The receiver operating characteristic (ROC) curves for this entropy-based test using the three estimators are shown in Fig.~\ref{b-compare}. The corresponding areas under the ROC curves (AUC) are $0.9271$, $0.9459$ and $0.9619$.
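The deflection statistic used above is straightforward to compute; the sketch below exercises the formula on synthetic Gaussian "entropy estimates" whose means and standard deviations are invented for illustration and are not the paper's values.

```python
import numpy as np

def deflection(est_h0, est_h1):
    """Deflection statistic |mu1 - mu0| / sqrt(sigma0^2 + sigma1^2)
    between two sets of entropy estimates."""
    mu0, mu1 = np.mean(est_h0), np.mean(est_h1)
    s0, s1 = np.std(est_h0, ddof=1), np.std(est_h1, ddof=1)
    return abs(mu1 - mu0) / np.hypot(s0, s1)

# Synthetic example: 500 estimates per hypothesis, separated means.
rng = np.random.default_rng(0)
ds = deflection(rng.normal(1.0, 0.1, 500), rng.normal(1.2, 0.1, 500))
```

A larger deflection indicates better separation between the two populations of estimates, which is the sense in which the weighted estimator improves discrimination.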
\begin{figure}[!t]
\centering
\subfigure[\small{ROC curves corresponding to entropy estimates obtained using standard and truncated kernel plug-in estimators and the weighted estimator. The corresponding AUC are given by $0.9271$, $0.9459$ and $0.9619$.}]{
\includegraphics[scale=.30]{ROC1000.pdf}
\label{b-compare}
}
\subfigure[\small{Variation of AUC curves vs $\delta (= a_0-a_1, b_0-b_1)$ corresponding to Neyman-Pearson omniscient test, entropy estimates using the standard and truncated kernel plug-in estimators and the weighted estimator.}]{
\includegraphics[scale=.30]{auc10000.pdf}
\label{e-compare}
}
\caption{Comparison of performance in terms of ROC for the distribution testing problem. The weighted estimator uniformly outperforms the individual plug-in estimators.}
\end{figure}
In our final experiment, we fix $a_0=b_0=10$ and set $a_1=b_1=10-\delta$, perform 500 experiments each under the null and alternate hypotheses with samples of size 5000, and plot the AUC as $\delta$ varies from $0$ to $1$ in Fig.~\ref{e-compare}. For comparison, we also plot the AUC for the Neyman-Pearson likelihood ratio test. The Neyman-Pearson likelihood ratio test, unlike the Shannon entropy based tests, is an omniscient test that assumes knowledge of both the underlying beta-uniform mixture parametric model of the density and the parameter values $a_0$, $b_0$ and $a_1$, $b_1$ under the null and alternate hypotheses respectively. Fig.~\ref{e-compare} shows that the weighted estimator {\emph {uniformly and significantly}} outperforms the individual plug-in estimators and comes closest to the performance of the omniscient Neyman-Pearson likelihood test. The remaining gap to the Neyman-Pearson test arises because the weighted estimator is a nonparametric estimator with marginally higher variance (proportional to $||w^*||_2^2$), whereas the Neyman-Pearson statistic is the most powerful test under the assumed parametric model.
\section{Conclusions}
\label{sec:con}
We have proposed a new estimator of functionals of a multivariate density based on weighted ensembles of kernel density estimators. For ensembles of estimators that satisfy general conditions on bias and variance as specified by ${\mathscr C}.1$(\ref{BE}) and ${\mathscr C}.2$(\ref{VE}) respectively, the weight optimized ensemble estimator has parametric $O(T^{-1})$ MSE convergence rate that can be much faster than the rate of convergence of any of the individual estimators in the ensemble. The optimal weights are determined as a solution to a convex optimization problem that can be performed offline and does not require training data. We illustrated this estimator for uniform kernel plug-in estimators and demonstrated the superior performance of the weighted ensemble entropy estimator for (i) estimation of the Panter-Dite factor and (ii) non-parametric hypothesis testing.
Several extensions of the framework of this paper are being pursued: (i) using $k$-nearest neighbor ($k$-NN) estimators in place of kernel estimators; (ii) extending the framework to the case where the support ${\cal S}$ {\emph {is not known}}, but for which conditions ${\mathscr C}.1$ and ${\mathscr C}.2$ hold; (iii) using ensemble estimators for estimation of other functionals of probability densities, including divergence, mutual information and intrinsic dimension; and (iv) using an $l_1$ norm $\|w\|_1$ in place of the $l_2$ norm $\|w\|_2$ in the weight optimization algorithm (2.3) so as to introduce sparsity into the weighted ensemble.
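To make the weight optimization concrete: assuming the constraints reduce to linear equalities (the weights sum to one and each lower-order bias term is cancelled across the ensemble), the minimum-$\ell_2$-norm weight vector has a closed-form least-norm solution. The sketch below is a hypothetical illustration of that special case, not the paper's exact convex program; the basis functions $\psi_i(l)=l^{i/d}$ and all names are our own assumptions.

```python
import numpy as np

def min_norm_ensemble_weights(bias_rows):
    """Minimum-l2-norm weight vector w satisfying sum(w) = 1 and
    bias_rows @ w = 0, where each row of bias_rows collects one
    lower-order bias term to be cancelled across the ensemble.
    For a consistent underdetermined linear system, np.linalg.lstsq
    returns exactly this least-norm solution."""
    num_estimators = bias_rows.shape[1]
    A = np.vstack([np.ones(num_estimators), bias_rows])  # constraints
    b = np.zeros(A.shape[0])
    b[0] = 1.0                                           # sum-to-one
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# illustrative ensemble: parameters l = 1..6, bias terms l^(i/d) with d = 2
l = np.arange(1.0, 7.0)
rows = np.vstack([l ** 0.5, l])
w = min_norm_ensemble_weights(rows)
```

The resulting $w$ cancels the targeted bias terms while keeping $\|w\|_2$ (and hence the variance inflation) as small as the constraints allow.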
\section*{Acknowledgement}
This work was partially supported by (i) ARO grant W911NF-12-1-0443 and (ii) NIH grant 2P01CA087634-06A2.
\begin{figure}[h]
\centering
\includegraphics[width=6.2in]{sss.pdf}
\caption{Illustration for the proof of Lemma~\ref{biaslemma}.}
\label{i-compare}
\end{figure}
% Stability results for some fully nonlinear eigenvalue estimates
% arXiv:1302.7252, https://arxiv.org/abs/1302.7252
% Abstract: In this paper, we give some stability estimates for the Faber-Krahn inequality relative to the eigenvalues of Hessian operators.
\section{Introduction}
In this paper we prove some stability estimates for the eigenvalue
$\lambda_k(\Omega)$ of the $k$-Hessian operator, that has the
variational characterization
\begin{equation*}
\lambda_k(\Omega) = \min\left\{\int_\Omega (-u)
S_k(D^2u)\,dx,\; u\in \Phi_k^2(\Omega) \text{ and }
\int_\Omega (-u)^{k+1}\,dx=1 \right\}.
\end{equation*}
Here $\Omega$ is a bounded, strictly convex, open set of $\mathbb R^n$, $n\ge
2$, with $C^2$ boundary, $\Sk_k(D^2u)$, with $1\le k\le n$, is the
$k$-th elementary symmetric function of the eigenvalues of $D^2 u$
with $u\in C^2(\Omega)$, and $\Phi_k^2(\Omega)$ denotes the class of
the admissible functions for $\Sk_k$, the so-called $k$-convex
functions (see Section 2 for the precise definitions). Notice that
$S_1(D^2u)=\Delta u$, the Laplacian operator, while $\Sk_n(D^2u)=\det
D^2 u$, the Monge-Amp\`ere operator.
It is known that, under suitable assumptions on
$\Omega$, for this kind of operators a Faber-Krahn inequality holds,
that is the eigenvalue $\lambda_k(\Omega)$ attains its minimum value
on the ball $\Omega_{k-1}^*$, which preserves an appropriate curvature
measure of $\Omega$, the $(k-1)$-th quermassintegral:
\begin{equation}
\label{introfksk}
\lambda_k(\Omega) \ge \lambda_k(\Omega^*_{k-1}), \quad 1\le k\le n
\end{equation}
(see \cite{bt07,ga09}). Moreover, for $k=n$,
$\lambda_n(\Omega)$ is also bounded from above by
$\lambda_n(\Omega^*_0)$, with $|\Omega^*_0|=|\Omega|$ (see
\cite{bntpoin}). For the sake of completeness, we
recall that in the case of Neumann boundary condition, for $k=1$, the
reverse inequality in \eqref{introfksk} holds (see \cite{w56,sz54},
\cite{chdb12} and \cite{brchtr09,dpg2} for related results).
In \cite{dpg5} we give some stability estimates of
\eqref{introfksk}, proving that
\begin{equation*}
\frac{\lambda_k(\Omega) -
\lambda_k(\Omega^*_{k-1})}{\lambda_k(\Omega)} \le C_{n,k}
\frac{|\Omega^*_k|-|\Omega|}{|\Omega^*_k|}, \quad 1\le k\le n-1,
\end{equation*}
for some constant $C_{n,k}$ depending only on $n$ and $k$, while for
$k=n$,
\begin{equation*}
\frac{\lambda_n(\Omega) -
\lambda_n(\Omega^*_{n-1})}{\lambda_n(\Omega)} \le C_n
\frac{|\Omega^*_{n-1}|-|\Omega|}{|\Omega^*_{n-1}|},
\end{equation*}
where $C_n$ depends only on $n$. Roughly speaking, such
inequalities state that if $\Omega$ is close to a ball with respect
to the $L^1$ norm, then the corresponding eigenvalues are close. Such
a result is in the spirit of a well-known result due to Payne and
Weinberger for the Laplace operator (see \cite{pw61}), extended in
\cite{bnt10} to the $p$-Laplace operator (see also \cite{nitsch12}
for the best constant in the case $p=2$, and \cite{dpgtors} for
the anisotropic case).
Conversely, the aim of this paper is to prove some stability results
which ensure that if $\lambda_k(\Omega)$ is close to
$\lambda_k(\Omega^*_{k-1})$, then, in an appropriate sense, $\Omega$ is
close to a suitable ball of $\mathbb R^n$ (see Section 2 for the precise
statements).
There are several contributions in this direction, for the first
eigenvalue of the Laplacian operator (see \cite{melas92},
\cite{hana94}) or, more generally, for the $p$-Laplacian
(see \cite{bhatta01}, \cite{fmp09}). In such papers, depending on the
assumptions on $\Omega$, suitable notions of the distance between the
set $\Omega$ and a ball are considered. In particular, under the
convexity assumption on the domain, it seems natural to take into
account the Hausdorff distance (see \cite{melas92}), while, in a more
general setting, such notion is replaced by the so-called Fraenkel
asymmetry (see \cite{bhatta01}, \cite{fmp09}). Both arguments are
considered in \cite{hana94}.
Dealing with convex sets, our aim is to prove some stability results
for Hessian operators in the spirit of the results given in
\cite{melas92,hana94}.
In particular, we prove that if a strictly convex, smooth domain
$\Omega$ satisfies
\[
\lambda_k(\Omega) \le \lambda_k(\Omega^*_{k-1})(1+\varepsilon)
\]
for some sufficiently small $\varepsilon>0$, then there exist two balls
$B_{r_\Omega}$ and $B_{R_\Omega}$ such that $B_{r_\Omega}\subseteq
\Omega \subseteq B_{R_\Omega}$, and two suitable asymmetry coefficients
of $\Omega$ with respect to $\Omega^*_{k-1}$ vanish as $\varepsilon$ goes
to zero. This implies that the Hausdorff asymmetry of $\Omega$ is
close to zero (see Section 2.4 for the precise definitions).
The paper is organized as follows. In Section 2, we recall some basic
definitions of convex geometry and the properties of symmetrization
for quermassintegrals. Moreover, we summarize some useful results on
the eigenvalue problem for Hessian operators. In Sections 3 and 4 we
state and prove the main results, distinguishing the case of the
Monge-Amp\`ere operator (Section 3) from that of $\Sk_k$,
$1\le k\le n-1$ (Section 4). Our approach makes use of a quantitative
version of a suitable isoperimetric inequality and of a
symmetrization technique for quermassintegrals.
\section{Notation and preliminaries}
Throughout the paper, we will denote by $\Omega$ a subset of
$\mathbb R^n$, $n\ge 2$, such that
\begin{equation}
\label{ipomega}
\Omega\text{ is a bounded, strictly convex, open set with }C^2
\text{ boundary}.
\end{equation}
By strict convexity of $\Omega$ we mean that the Gauss curvature is
strictly positive at every point of $\partial \Omega$.
Given a function $u \in C^2(\Omega)$, we denote by $\lambda(D^2u)=
(\lambda_1,\lambda_2, \ldots,\lambda_n)$ the vector of the
eigenvalues of $D^2u$. The $k$-Hessian operator $\Sk_k(D^2u)$, with
$k=1,2,\ldots,n$, is
\begin{equation*}
\Sk_k(D^2u)=\sum_{i_1<i_2<\cdots<i_k} \lambda_{i_1} \cdot
\lambda_{i_2} \cdots \lambda_{i_k}.
\end{equation*}
Hence $\Sk_k(D^2u)$ is the sum of all $k \times k$ principal minors of
the matrix $D^2u$.
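This characterization is easy to verify numerically. The sketch below (a hypothetical helper, not part of the paper) computes $S_k$ as a sum of principal minors and cross-checks it against the characteristic polynomial, using that $\det(tI-A)=t^n-e_1t^{n-1}+e_2t^{n-2}-\cdots$ where $e_k$ is the $k$-th elementary symmetric function of the eigenvalues.

```python
import itertools
import numpy as np

def S_k(A, k):
    """k-th elementary symmetric function of the eigenvalues of the
    symmetric matrix A, computed as the sum of all k-by-k principal minors."""
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in itertools.combinations(range(n), k))

# Cross-check: np.poly(A) returns [1, -e_1, e_2, -e_3, ...]
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
coeffs = np.poly(A)
values = [S_k(A, k) for k in range(1, 4)]  # S_1 = trace, S_3 = det
```

In particular $S_1(A)$ reproduces the trace (the Laplacian, when $A=D^2u$) and $S_n(A)$ the determinant (the Monge-Amp\`ere operator).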
The $k$-Hessian operator can be written also in divergence form,
that is
\begin{equation*}
\label{div}
\Sk_k(D^2u)=\frac{1}{k}\sum_{i,j=1}^n (\Sk_k^{ij}u_i)_j,
\end{equation*}
where $S_k^{ij} = \frac{\partial S_k(D^2u)}{\partial u_{ij}}$ (see
for instance \cite{trudi1}, \cite{trudi2}, \cite{wangeigen}).
Well-known examples of $k$-Hessian operators are $\Sk_1(D^2u)=\Delta
u$, the Laplace operator, and $\Sk_n(D^2u)=\det(D^2u)$, the
Monge-Amp\`ere operator.
It is well known that $\Sk_1(D^2u)$ is elliptic. This property is not
true in general for $k>1$. As a matter of fact, the $k$-Hessian operator
is elliptic when it acts on the class of the so-called
$k$-convex functions, defined below.
\begin{definiz}
Let $\Omega$ be as in \eqref{ipomega}. A function $u \in C^2(\Omega)$
is called a $k$-convex function (strictly
$k$-convex) in $\Omega$ if
\begin{equation*}
\Sk_j(D^2u)\geq 0 \text{ }(>0) \quad \text{for }j=1,
\ldots, k.
\end{equation*}
We denote the class of $k$-convex functions in $\Omega$ such
that $u \in C^2(\Omega)\cap C(\bar{\Omega})$ and $u=0$ on $\partial
\Omega$ by $\Phi^2_k(\Omega)$.
\end{definiz}
Clearly, the set $\Phi_n^2(\Omega)$
coincides with the set of the convex and $C^2(\Omega)$ functions
vanishing on $\partial \Omega$.
Denoting by $\Gamma_k$ the following convex open cone,
\begin{equation*}
\Gamma_k=\{ \lambda \in \mathbb R^n : S_1(\lambda)>0, S_2(\lambda)>0,
\ldots, S_k(\lambda)>0\},
\end{equation*}
in \cite{ivo} it is proven that $\Gamma_k$ is the cone of
ellipticity of $\Sk_k$. Hence the $k$-Hessian operator is elliptic
with respect to the $k$-convex functions.
By definition, it follows that $k$-convex functions are
subharmonic in $\Omega$, and hence negative in $\Omega$ if they vanish
on $\partial\Omega$.
We continue by recalling some definitions of convex geometry which
will be widely used in the next sections. Standard references for this
topic are \cite{bz}, \cite{schn}.
\subsection
{Quermassintegrals and the Aleksandrov-Fenchel inequalities}
Let $K$ be a convex body, and let $\rho>0$. We denote by $|K|$ the
Lebesgue measure of $K$, by $P(K)$ the perimeter of $K$ and by
$\omega_n$ the measure of the unit ball in $\mathbb R^n$.
The well-known Steiner formula for the Minkowski sum is
\[
|K+\rho B_1| =\sum_{i=0}^{n} \binom{n}{i} W_i(K) \rho^i.
\]
The coefficient $W_i(K)$, $i=0,\ldots,n$, is known as the $i$-th
quermassintegral of $K$. Some special cases are $W_0(K)=|K|$,
$nW_1(K)=P(K)$, $W_n(K)=\omega_n$. If $K$ has $C^2$ boundary, with
nonvanishing Gaussian curvature, the quermassintegrals can be related
to the principal curvatures of $\partial K$. Indeed, in such a case
\[
W_i(K)=\frac 1 n \int_{\partial K} H_{i-1}\, d \mathcal H^{n-1}, \quad
i=1,\ldots,n.
\]
Here $H_j$ denotes the $j$-th normalized elementary symmetric function
of the principal curvatures $\kappa_1,\ldots,\kappa_{n-1}$ of $\partial K$,
that is $H_0=1$ and
\[
H_j= \binom{n-1}{j}^{-1} \sum_{1\le i_1<\cdots<i_j\le n-1}
\kappa_{i_1}\cdots \kappa_{i_j},\quad j=1,\ldots,n-1.
\]
An immediate computation shows that if $B_R$ is a ball of radius $R$,
then
\begin{equation}
\label{querball}
W_i(B_R)= \omega_n R^{n-i}, \quad i=0,\ldots,n.
\end{equation}
Moreover, the $i$-th quermassintegral, $0\le i \le n$, rescales as
\[
W_{i}(tK)=t^{n-i}W_i(K), \quad t>0.
\]
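For instance, in the plane ($n=2$) the Steiner formula reduces to the familiar expansion
\[
|K+\rho B_1| = |K| + P(K)\,\rho + \pi \rho^2,
\]
in agreement with $W_0(K)=|K|$, $2W_1(K)=P(K)$ and $W_2(K)=\omega_2=\pi$.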
The Aleksandrov-Fenchel inequalities state that
\begin{equation}
\label{afineq}
\left( \frac{W_j(K)}{\omega_n} \right)^{\frac{1}{n-j}} \ge \left(
\frac{W_i(K)}{\omega_n} \right)^{\frac{1}{n-i}}, \quad 0\le i < j
\le n-1,
\end{equation}
where the inequality is replaced by an equality if and only if $K$ is
a ball.
In what follows, we use the Aleksandrov-Fenchel inequalities for
particular values of $i$ and $j$. If $i=1$, and $j=k-1$, we have that
\begin{equation}\label{af-2}
W_{k-1}(K) \ge \omega_n^{\frac{k-2}{n-1}} n^{-\frac{n-k+1}{n-1}}
P(K)^{\frac{n-k+1}{n-1}}, \quad
3\le k \le n.
\end{equation}
When $i=0$ and $j=1$, we have the classical isoperimetric inequality:
\[
P(K) \ge n \omega_n^{\frac 1 n} |K|^{1-\frac 1 n}.
\]
Moreover, if $i=k-1$, and $j=k$, we have
\[
W_k(K) \ge \omega_n^{\frac{1}{n-k+1}} W_{k-1}(K)^{\frac{n-k}{n-k+1}}.
\]
One can also prove a derivation formula for the quermassintegrals of
the level sets of a function $u \in \Phi_k^2(\Omega)$ (see
\cite{reilly}):
\begin{equation}
\label{reillyder}
- \frac{d}{dt} W_{k}(\Omega_t)= \frac{n-k}{n}
\int_{\Sigma_t}H_{k}(\Sigma_t) |Du|^{-1} d \mathcal H^{n-1},
\end{equation}
where $\Sigma_t$ is the boundary of $\Omega_t=\{ -u > t \}$.
Moreover, we recall the following equality (see \cite{reilly} again):
\begin{equation}
\label{introintcurv}
\int_{\Omega_t}\Sk_k (D^2u)dx = \frac{1}{k} \int_{\Sigma_t} H_{k-1} |Du|^k
d\mathcal H^{n-1}.
\end{equation}
\subsection{Rearrangements for quermassintegrals}
Now we recall some basic facts on rearrangements for
quermassintegrals. For an exhaustive treatment of the properties of
such rearrangements we refer the reader, for example, to \cite{ta81},
\cite{tso}, \cite{trudiso}.
Let $1\le k\le n$, and denote by $\Omega^*_{k-1}$ the ball centered at the
origin with the same $W_{k-1}$-measure as $\Omega$, that is,
$W_{k-1}(\Omega^*_{k-1})=W_{k-1}(\Omega)$.
The $(k-1)$-symmetrand of a function $u \in \Phi_{k}^2(\Omega)$,
$k=1,\ldots,n$, is the radially symmetric increasing function
$u^*_{k-1}$, defined in the ball $\Omega^*_{k-1}$, which preserves
the $W_{k-1}$-measure of the level sets of $u$. More precisely, we
have that, for $x\in\Omega^*_{k-1}$,
\begin{equation*}
u_{k-1}^{*}(x)=-\inf \left\{ t \ge 0 \colon W_{k-1}(\Omega_t) \le
\omega_n\left|x\right|^{n-k+1},\; Du \ne 0 \text{ on } \Sigma_t\right\},
\end{equation*}
where $\Sigma_{t} = \partial \Omega_t$ and $\Omega_t=\left\{x \in \Omega\colon
-u(x)>t\right\}$, with $t\ge 0$.
We stress that, for $k=1$, $u^{*}_{0}(x)$ coincides with
the classical Schwarz symmetrand of $u$, while for $k=2$,
$u^*_1(x)$ is the rearrangement of $u$ which preserves the
perimeter of the level sets of $u$.
Denoting by $R$ the radius of $\Omega^*_{k-1}$, the following
statements hold true (see \cite{tso,trudiso}):
\begin{enumerate}
\item writing $u_{k-1}^{*}(x) = \rho(r)$ for
$r=\left|x\right|$, we have $\rho(0)=\min_{\Omega}(u)$ and
$\rho(R)=0$,
\item $\rho(r)$ is a negative and increasing function on
$\left[0,R\right]$,
\item $\rho(r) \in C^{0,1}(\left[0,R\right])$ and moreover $0 \le
\rho'(r)\leq \sup_{\Omega}\left|Du\right|$ almost everywhere.
\end{enumerate}
If the function $u$ has convex level sets, the Aleksandrov-Fenchel
inequalities \eqref{afineq} imply that
$|\{ -u>t \}|\le |\{-u^*_{k-1}>t\}|$ and then, for any $p\ge 1$,
\begin{equation*}
\label{norme}
\left\|u\right\|_{L^p(\Omega)} \le
\left\|u_{k-1}^{*}\right\|_{L^p(\Omega^*_{k-1})},
\end{equation*}
while, by property $(1)$,
\[
\|u\|_{{L^\infty(\Omega)}} =\|u^*_{k-1}\|_{{L^\infty(\Omega^*_{k-1})}}.
\]
Now, it is possible to define the following functional associated to
the $k$-Hessian operator, known as $k$-Hessian integral:
\begin{equation*}
I_k\left[u,\Omega \right]=\int_{\Omega} (-u)\Sk_k(D^2u)
\, dx.
\end{equation*}
In the radial case the Hessian integrals can be defined as
follows:
\begin{equation*}
I_{k}\left[ u^{*}_{k-1}, \Omega^*_{k-1}\right]=
\binom{n}{k} \omega_n \int_0^R f^{k+1}\big(\omega_n
r^{n-k+1}\big) r^{n-k} \, dr
\end{equation*}
where $f\big(\omega_n \left|x\right|^{n-k+1}\big) = \left|\nabla
u^{*}_{k-1}(x)\right|$.
Finally we recall that for the Hessian integrals the following
extension of P\'olya-Szeg\"o principle holds (see
\cite{trudiso,tso}):
\begin{equation}
\label{pol}
I_{k}\left[u, \Omega\right]\geq
I_{k}\left[u^{*}_{k-1}, \Omega^*_{k-1}\right].
\end{equation}
\subsection{Eigenvalue problems for $\Sk_k$}
In this subsection we give a quick review on the main properties of
eigenvalues and eigenfunctions of the $k$-Hessian operators, namely
the couples $(\lambda,u)$ which solve
\begin{equation}
\label{autsk}
\left\{
\begin{array}{ll}
S_k(D^2u)=\lambda (-u)^k &\text{in } \Omega,\\
u=0 &\text{on } \partial\Omega.
\end{array}
\right.
\end{equation}
The following existence result holds (see \cite{lionsma} for $k=n$,
and \cite{wangeigen}, \cite{geng} in the general case):
\begin{theo}
Let $\Omega$ be as in \eqref{ipomega}. Then, there exists a positive
constant $\lambda_k(\Omega)$ depending only on $n,k$, and $\Omega$,
such that problem (\ref{autsk}) admits a
solution $u \in C^2(\Omega)\cap C^{1,1}(\overline{\Omega})$,
negative in $\Omega$, for
$\lambda=\lambda_k(\Omega)$ and $u$ is unique up to positive
scalar multiplication. Moreover, $\lambda_k(\Omega)$ has the following
variational characterization:
\begin{equation*}
\lambda_k(\Omega)=\min_{\substack{u \in
\Phi_k^2(\Omega) \\
u \ne 0 }} \displaystyle \frac{\int_{\Omega}(-u)
S_k(D^2u)\,dx}{\int_{\Omega}(-u)^{k+1}\,dx}.
\end{equation*}
\end{theo}
In fact, if $k<n$ the above theorem holds under a more
general assumption on $\Omega$, namely requiring that $\Omega$ is
strictly $k$-convex (see \cite{wangeigen}, \cite{geng}).
We also observe that for $k=1$ and $k=n$,
$\lambda_k(\Omega)$ coincides, respectively, with the first eigenvalue
of the Laplacian operator and with the eigenvalue of the
Monge-Amp\`ere operator.
A simple but useful property of the eigenvalue $\lambda_k(\Omega)$
is that it rescales as
\begin{equation}
\label{risc_palla}
\lambda_k(t\Omega)=t^{-2k}\lambda_k(\Omega),\quad t>0.
\end{equation}
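This follows at once from the definition: if $u$ solves \eqref{autsk} in $\Omega$ and $u_t(x)=u(x/t)$, then $D^2u_t(x)=t^{-2}(D^2u)(x/t)$, so that
\[
\Sk_k(D^2u_t)(x) = t^{-2k}\,\Sk_k(D^2u)(x/t)
= t^{-2k}\lambda_k(\Omega)\,\big(-u(x/t)\big)^k
= t^{-2k}\lambda_k(\Omega)\,\big(-u_t(x)\big)^k \quad \text{in } t\Omega,
\]
with $u_t\in\Phi_k^2(t\Omega)$ vanishing on $\partial(t\Omega)$.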
If $k=1$, the well-known Faber-Krahn inequality states that
\[
\lambda_1(\Omega) \ge \lambda_1(\Omega^\#),
\]
where $\Omega^\#$ is the ball centered at the origin
with the same Lebesgue measure as $\Omega$. Moreover, equality
holds if and only if $\Omega$ is a ball.
In \cite{bt07}, \cite{ga09} it is proved that if $k=n$ and $\Omega$ is
a bounded strictly convex open set, then
\begin{equation*}
\lambda_n(\Omega) \ge \lambda_n(\Omega_{n-1}^*).
\end{equation*}
In general, in \cite{ga09} it is proven that if $\Omega$ is
a strictly convex set such that the eigenfunctions have convex level
sets, then, for $2\le k \le n$,
\begin{equation}
\label{fksk}
\lambda_k(\Omega) \ge \lambda_k(\Omega^*_{k-1}).
\end{equation}
\subsection{Asymmetry measures and isoperimetric deficit}
A purpose of this paper is to prove that the difference
between the two sides in \eqref{fksk} controls the exterior and interior
deficiencies, defined as follows (see also \cite{hana94} for $k=1$).
Given a bounded nonempty domain $\Omega\subset\mathbb R^n$ and denoting by
$R$ the radius of the ball $\Omega^*_{k-1}$, the exterior and interior
$k$-deficiencies of $\Omega$ are, respectively, the nonnegative numbers
\begin{equation}
\label{defe}
D_{k}(\Omega)= \frac{R_\Omega}{R}-1,\quad d_k(\Omega)= 1-\frac{r_\Omega}{R},
\end{equation}
where $r_\Omega$ and $R_\Omega$ are the inradius and the circumradius
of $\Omega$. Such numbers give a measure of the asymmetry of $\Omega$
with respect to the ball with the same $(k-1)$-quermassintegral as
$\Omega$. Furthermore, the deficiency of $\Omega$ is
\begin{equation*}
\Delta(\Omega)=\frac{R_\Omega}{r_\Omega}-1.
\end{equation*}
In order to have a measure of the asymmetry of $\Omega$ in terms of
the Hausdorff distance $d$, we define the following coefficient:
\begin{equation}
\label{haus}
\delta_H(\Omega) = \inf \{d(\Omega, \Omega^*_{n-1}+x_0),\; x_0\in \mathbb R^n\}.
\end{equation}
We refer to $\delta_H$ as the Hausdorff asymmetry of
$\Omega$.
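As a numerical illustration (our own sketch, not from the paper): for $n=2$ and $k=2$ the comparison ball $\Omega^*_1$ preserves the perimeter, so the deficiencies \eqref{defe} of an ellipse with semi-axes $a\ge b$ (for which $r_\Omega=b$ and $R_\Omega=a$) can be computed from its perimeter via a complete elliptic integral.

```python
import numpy as np
from scipy.special import ellipe

def ellipse_deficiencies(a, b):
    """Exterior and interior deficiencies (D, d) of an ellipse with
    semi-axes a >= b, compared with the disk of the same perimeter:
    D = R_Omega / R - 1 and d = 1 - r_Omega / R, with r_Omega = b,
    R_Omega = a."""
    m = 1.0 - (b / a) ** 2              # squared eccentricity
    perimeter = 4.0 * a * ellipe(m)     # complete elliptic integral of 2nd kind
    R = perimeter / (2.0 * np.pi)       # radius of the equal-perimeter disk
    return a / R - 1.0, 1.0 - b / R
```

For $a=b$ both deficiencies vanish, consistently with the equality case of the Aleksandrov-Fenchel inequalities; an eccentric ellipse yields strictly positive values.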
In the class of convex sets, it is possible to obtain some stability
estimates for the Aleksandrov-Fenchel inequalities \eqref{afineq}.
More precisely, if $s$ denotes the Steiner point of
$\Omega$, then in \cite{grsc} it has been proved that
\begin{equation}
\label{gshaus}
\begin{array}{l}
d(\Omega,\Omega^*_{n-1}+s)^{(n+3)/2} \le \bar C
\frac{P(\Omega)^{(n^2-3)/2}}{|\Omega|^{(n+3)(n-2)/2}} \left[
\left(\frac{P(\Omega)}{n\omega_n}\right)^n -
\left(\frac{|\Omega|}{\omega_n}\right)^{n-1} \right],\\
d(\Omega,\Omega^*_{n-1}+s)^{(n+3)/2}
\le \bar C_1
\frac{W_{n-2}(\Omega)W_{n-1}(\Omega)^{\frac{n-1}{2}}}{W_{k-1}(\Omega)^{n-k}}
\left( \frac{W_k(\Omega)^{n-k+1}}{\omega_n} -W_{k-1}(\Omega)^{n-k}
\right),
\end{array}
\end{equation}
where $\bar C,\bar C_1$ are two constants depending only on $n$, which can be
explicitly determined. These estimates justify the definition of
$\delta_H$.
In fact, in \cite{grsc} it is observed that the inequalities
\eqref{gshaus} can be rewritten as Bonnesen-type inequalities in terms
of the inradius $r_\Omega$ and the circumradius $R_\Omega$ of $\Omega$:
\begin{equation}
\label{gs}
\begin{array}{l}
(R_\Omega-r_\Omega)^{(n+3)/2} \le \tilde C
\frac{P(\Omega)^{(n^2-3)/2}}{|\Omega|^{(n+3)(n-2)/2}}\left[
\left(\frac{P(\Omega)}{n\omega_n}\right)^n -
\left(\frac{|\Omega|}{\omega_n}\right)^{n-1} \right],\\
(R_\Omega-r_\Omega)^{(n+3)/2} \le \tilde C_1
\frac{W_{n-2}(\Omega)W_{n-1}(\Omega)^{\frac{n-1}{2}}}{W_{k-1}(\Omega)^{n-k}}
\left( \frac{W_k(\Omega)^{n-k+1}}{\omega_n} - W_{k-1}(\Omega)^{n-k}
\right).
\end{array}
\end{equation}
\section{The case of the Monge-Amp\`ere operator}
In this section we consider the eigenvalue problem for the
Monge-Amp\`ere operator,
\begin{equation}\label{eig_eq}
\left\{
\begin{array}{ll}
-\det D^2 u= \lambda (-u)^{n} &
\text{in }\Omega, \\ [.2cm]
u = 0 & \text{on }\partial\Omega,
\end{array}
\right.
\end{equation}
and we prove the first stability result, stated below.
\begin{theo}
\label{main}
Let $\Omega \subset \mathbb R^n$ be as in \eqref{ipomega} such that
\begin{equation}
\label{small}
\lambda_n(\Omega) \le (1+\varepsilon)\lambda_n(\Omega_{n-1}^*),
\end{equation}
where $\varepsilon>0$ is sufficiently small and $\Omega_{n-1}^*$ is the
ball such that $W_{n-1}(\Omega)=W_{n-1}(\Omega_{n-1}^*)$. Then, if
$\delta_H(\Omega)$ is the Hausdorff asymmetry \eqref{haus}, it holds that
\begin{equation*}
\label{maindelta}
\delta_H(\Omega)\le C_n \varepsilon^{\frac{1}{(n+1)(n+3)}},
\end{equation*}
where $C_n$ is a constant which depends only on $n$. Moreover,
denoting by $d_n(\Omega)$ and $D_n(\Omega)$, respectively, the
interior and exterior $n$-deficiency of $\Omega$ as in
\eqref{defe}, we have the following:
\begin{enumerate}
\item if $n=2$, then
\begin{equation}
\label{defdet2}
d_2(\Omega) \le C_2
\sqrt[6]\varepsilon, \quad
D_2(\Omega) \le C_2 \sqrt[12]\varepsilon,
\end{equation}
where $C_2$ denotes a positive constant which depends only on
the dimension $n=2$.
\item If $n\ge 3$, then
\begin{equation}
\label{defdetn}
d_n(\Omega) \le C_n \varepsilon^{\frac{1}{2n+2}},
\quad
D_n(\Omega)\le C_n \varepsilon^{\frac{1}{(n+1)(n+3)}},
\end{equation}
where $C_n$ depends only on $n$.
\end{enumerate}
\end{theo}
\begin{rem}
\label{remeq}
The estimates \eqref{defdet2} and \eqref{defdetn} can be read as
\[
\frac{P(\Omega)-P(B_{r_\Omega})}{P(\Omega)} \le C_2 \sqrt[6]\varepsilon,
\qquad\quad
\frac{P(B_{R_\Omega})-P(\Omega)}{P(\Omega)} \le C_2 \sqrt[12]\varepsilon,
\]
and
\begin{equation*}
\frac{W_{n-1}(\Omega) - W_{n-1}(B_{r_\Omega})}{W_{n-1}(\Omega)}
\le C_n \varepsilon^{\frac{1}{2n+2}},
\qquad
\frac{W_{n-1}(B_{R_\Omega}) - W_{n-1}(\Omega)}{W_{n-1}(\Omega)} \le
C_n \varepsilon^{\frac{1}{(n+1)(n+3)}},
\end{equation*}
in the spirit of the stability result contained in \cite{melas92}.
\end{rem}
To prove the Theorem, we need some preliminary lemmas.
For $\delta\ge 0$, we denote
\[
\Omega_\delta = \{x\in \Omega \colon -u > \delta \}.
\]
In the following result we estimate $W_{n-1}(\Omega_{\delta})$ in
terms of $W_{n-1}(\Omega)$.
\begin{lemma}\label{lemma1}
Under the hypotheses of Theorem \ref{main}, if $u$ is the
eigenfunction of \eqref{eig_eq} such that $\|u\|_{L^{n+1}(\Omega)}=1$, then for
any $\delta$ such that $0<\delta<\frac 1 2 |\Omega|^{-\frac{1}{n+1}}$, we have
\[
W_{n-1}(\Omega_\delta)\ge W_{n-1}(\Omega) (1-\max\{\varepsilon, 2\delta
|\Omega|^{\frac{1}{n+1}}\}).
\]
\begin{proof}
For $\delta>0$, we compute the Rayleigh quotient of the function
$\phi=u+\delta$ in $\Omega_\delta$. Then,
\begin{equation}
\label{udelta}
\lambda_n(\Omega_\delta) \le \dfrac{\int_{\Omega_\delta}
(-\phi)\det D^2 \phi\,dx }{ \int_{\Omega_\delta}
(-\phi)^{n+1}\,dx} = \dfrac{\int_{\Omega_\delta}
(-u-\delta)\det D^2 u \,dx }{ \int_{\Omega_\delta}
(-u-\delta)^{n+1}\,dx}.
\end{equation}
Moreover, since $u$ is a solution of \eqref{eig_eq} with
$\lambda=\lambda_n(\Omega)$, we get, by the H\"older inequality and
recalling that $\int_\Omega (-u)^{n+1}\, dx =1$, that
\[
\begin{split}
\int_{\Omega_\delta}(-u-\delta)\det D^2 u \,dx
&= \lambda_n(\Omega) \int_{\Omega_\delta} (-u-\delta)(-u)^n\,dx \\
&\le \lambda_n(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{n+1}\,dx\right)^{\frac{1}{n+1}}
\left(\int_{\Omega_\delta}
(-u)^{n+1}\,dx\right)^{\frac{n}{n+1}} \\
&\le \lambda_n(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{n+1}\,dx\right)^{\frac{1}{n+1}}.
\end{split}
\]
Hence, combining the above estimate with \eqref{udelta} it follows
that
\begin{equation}
\label{udelta2}
\lambda_n(\Omega_\delta) \le \lambda_n(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{n+1}\,dx \right)^{-\frac{n}{n+1}}.
\end{equation}
On the other hand, by the Minkowski inequality and choosing
$\delta<\frac 1 2 |\Omega|^{-\frac{1}{n+1}}$, we obtain
that
\[
\begin{split}
\left(\int_{\Omega_\delta} (-u-\delta)^{n+1}\,dx\right)^{\frac{1}{n+1}}
&\ge \left( \int_{\Omega_\delta} (-u)^{n+1} \,dx \right)^{\frac
{1}{n+1}} - \left( \int_{\Omega_\delta} \delta^{n+1} \,dx \right)^{\frac
{1}{n+1}} \\ &\ge \left(1- \int_{\Omega\setminus\Omega_\delta}
\delta^{n+1} \,dx \right)^{\frac{1}{n+1}}- \delta
|\Omega_\delta|^{\frac{1}{n+1}} \\
&= \left(1-
\delta^{n+1} \big(|\Omega|-|\Omega_\delta|\big)
\right)^{\frac{1}{n+1}}- \delta |\Omega_\delta|^{\frac{1}{n+1}} \\
&\ge 1-\delta\big( |\Omega| - |\Omega_\delta| \big)^{\frac{1}{n+1}}
- \delta |\Omega_\delta|^{\frac{1}{n+1}} \ge 1-2\delta
|\Omega|^{\frac {1}{n+1}}.
\end{split}
\]
Hence, from \eqref{udelta2}, \eqref{small} and Faber-Krahn
inequality it follows that
\[
\lambda_n((\Omega_\delta)^*_{n-1}) \le \lambda_n(\Omega_\delta) \le
(1+\varepsilon) \lambda_n(\Omega^*_{n-1}) \big( 1-2\delta |\Omega|^{\frac
{1}{n+1}} \big)^{-n}
\]
which implies, by \eqref{risc_palla}, that
\begin{equation}
\label{udelta3}
\left(\frac{W_{n-1}(\Omega_\delta)}{W_{n-1}(\Omega)}\right)^{2n}
=
\frac{\lambda_n(\Omega^*_{n-1})}{\lambda_n((\Omega_\delta)^*_{n-1})}
\ge
\frac{\big( 1-2\delta |\Omega|^{\frac
{1}{n+1}} \big)^n}{1+\varepsilon},
\end{equation}
where we used that the balls $\Omega^*_{n-1}$ and
$(\Omega_\delta)^*_{n-1}$ preserve, respectively, the $(n-1)$-th
quermassintegral of $\Omega$ and $\Omega_\delta$. Hence, by
\eqref{udelta3} we get that
\[
\frac{W_{n-1}(\Omega_\delta)}{W_{n-1}(\Omega)}
\ge \left( 1 - \frac{\varepsilon+ 2 \delta |\Omega|^{\frac
{1}{n+1}}}{1+\varepsilon} \right)^{1/2} \ge
1-\max\{\varepsilon, 2\delta |\Omega|^{\frac{1}{n+1}}\},
\]
obtaining the thesis.
\end{proof}
\end{lemma}
The second lemma we need is the following.
\begin{lemma}
Under the hypotheses of Theorem \ref{main}, if $\Omega_t=\{-u>t\}$,
and $u$ is the eigenfunction of \eqref{eig_eq} in $\Omega$ such that
$\|u\|_{L^{n+1}(\Omega)}=1$, then
\begin{equation}
\label{eq:3}
\int_0^{+\infty} t^n \big( W_{n-1}(\Omega_t)^n
- \omega_n^{n-1}|\Omega_t|\big) dt \le \frac{\omega_n^{n-1}}{n+1} \varepsilon.
\end{equation}
\end{lemma}
\begin{proof}
We consider the difference of the eigenvalues related to the
sets $\Omega$ and $\Omega^*_{n-1}$. Choosing $u$ as the normalized
eigenfunction of \eqref{eig_eq} in $\Omega$, using the
P\'olya-Szeg\"o principle \eqref{pol}
\begin{equation}
\label{eq:2}
\begin{split}
\lambda_n(\Omega) - \lambda_n(\Omega^*_{n-1}) & \ge
\int_\Omega (-u)\det D^2 u\,dx -
\frac{\int_{\Omega^*_{n-1}}(-u_{n-1}^* )\det D^2
u_{n-1}^*\,dx }{\int_{\Omega^*_{n-1}}(-u_{n-1}^* )^{n+1} \,dx} \\[.2cm]
& \ge \frac{\int_{\Omega^*_{n-1}}(-u_{n-1}^* )\det D^2
u_{n-1}^*\,dx }{\int_{\Omega^*_{n-1}}(-u_{n-1}^* )^{n+1} \,dx}
\left(\int_{\Omega^*_{n-1}}(-u_{n-1}^* )^{n+1}
\,dx - 1 \right) \\[.2cm]
&\ge \lambda_n(\Omega^*_{n-1})\left(\int_{\Omega^*_{n-1}}(-u_{n-1}^*
)^{n+1} \,dx -1 \right).
\end{split}
\end{equation}
On the other hand, recalling that $u$ has normalized $L^{n+1}$ norm,
the coarea formula and an integration by parts give that
\[
\begin{split}
\int_{\Omega^*_{n-1}}(-u_{n-1}^*)^{n+1} \,dx -1 &=
(n+1)\int_0^{+\infty} t^n \big(
|\{-u^*_{n-1}>t\}|-|\{-u>t\}|\big) dt \\
&= (n+1) \omega_n^{1-n} \int_0^{+\infty} t^n \big(
W_{n-1}(\Omega_t)^n - \omega_n^{n-1}|\Omega_t|\big)\, dt.
\end{split}
\]
Hence, joining \eqref{eq:2} with the above equality, and using
\eqref{small} we obtain that
\[
\int_0^{+\infty} t^n \big( W_{n-1}(\Omega_t)^n
- \omega_n^{n-1}|\Omega_t|\big) dt \le
\frac{\omega_n^{n-1}}{n+1} \varepsilon,
\]
that is the thesis.
\end{proof}
The last lemma plays a key role in proving that the constant $C_n$
involved in (1) and (2) of Theorem \ref{main} is independent of
$\Omega$.
\begin{lemma}
\label{boundmis}
Under the hypotheses of Theorem \ref{main}, it holds that
\begin{equation*}
|\Omega| \ge \tilde C_n \left[W_{n-1}(\Omega)\right]^n,
\end{equation*}
where $\tilde C_n$ denotes a positive constant depending only on $n$.
\end{lemma}
\begin{proof}
Let $u$ be an eigenfunction of \eqref{eig_eq} corresponding to the
eigenvalue $\lambda=\lambda_n(\Omega)$. Then
\begin{equation}
\label{eq}
\det D^2u = \lambda (-u)^n \quad \text{ in }\Omega.
\end{equation}
Integrating both sides of \eqref{eq} over the level set
$\Omega_t=\{-u>t\}$, denoting $\Sigma_t=\partial \Omega_t=\{-u=t\}$,
and using the H\"older inequality, we have
\begin{multline}
\label{sinistra}
\int_{\Omega_t}\det D^2u\,dx = \frac{1}{n} \int_{\Sigma_t} H_{n-1}
|Du|^n d\mathcal H^{n-1}
\ge \\ \ge \frac{1}{n} \frac{\left(\int_{\Sigma_t} H_{n-1} d\mathcal
H^{n-1}\right)^{n+1}}{ \left(\int_{\Sigma_t} H_{n-1} |Du|^{-1} d\mathcal
H^{n-1}\right)^{n}} = \frac{1}{n}
\frac{(n\omega_n)^{n+1}}{\left(-\frac{d}{dt}W_{n-1}(\Omega_t)\right)^n}.
\end{multline}
The last inequality follows from the H\"older inequality and the
properties of quermassintegrals. Moreover, since $|\Omega_t|\le |\Omega|$, we have
\begin{equation}
\label{destra}
\left(\int_{\Omega_t}(-u)^n\,dx \right)^\frac{1}{n} \le
|\Omega|^{\frac 1 n} \|u\|_{L^{\infty}(\Omega)}.
\end{equation}
Putting together \eqref{sinistra} and \eqref{destra}, by \eqref{eq} we
get
\[
-\frac{d}{dt}W_{n-1}(\Omega_t) \ge n \omega_n^{1+\frac{1}{n}}
\lambda^{-\frac{1}{n}}|\Omega|^{-\frac 1 n} \|u\|_{L^\infty(\Omega)}^{-1}
\]
and, integrating between $0$ and $\|u\|_{L^{\infty}(\Omega)}$,
\[
W_{n-1}(\Omega) \ge n \omega_n^{1+\frac{1}{n}}
\lambda^{-\frac{1}{n}}|\Omega|^{-\frac{1}{n}},
\]
that is, since $\lambda=\lambda_n(\Omega)\le
(1+\varepsilon)\lambda_n(\Omega^*_{n-1})$,
\begin{equation}
\label{misbas}
|\Omega|^{\frac 1 n} \ge n \omega_n^{1+\frac 1 n} W_{n-1}(\Omega)^{-1}
\lambda_{n}(\Omega^*_{n-1})^{-\frac{1}{n}} (1+\varepsilon)^{-\frac 1 n}.
\end{equation}
Indeed, since $W_{n-1}(\Omega)=W_{n-1}(\Omega^*_{n-1})$,
properties \eqref{risc_palla} and \eqref{querball} give that
\[
\lambda_n(\Omega^*_{n-1})= \left(
\frac{W_{n-1}(\Omega)}{\omega_n}\right)^{-2n} \lambda_n(B),
\]
where $B=\{|x|<1\}$. Then \eqref{misbas} becomes
\[
|\Omega|^{\frac 1 n} \ge n \omega_n^{\frac 1 n - 1} W_{n-1}(\Omega)
\lambda_n(B)^{-\frac 1 n} (1+\varepsilon)^{-\frac 1 n},
\]
and this concludes the proof.
\end{proof}
We are now in a position to prove the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{main}]
First of all, we observe that the quotient
\[
\frac{W_{n-1}(K)-W_{n-1}(L)}{W_{n-1}(K)}
\]
is scaling invariant; hence we may assume that
$W_{n-1}(\Omega)=1$. Consequently, by Lemma \ref{boundmis} and the
Aleksandrov-Fenchel inequality, there exist two
positive constants $c_1(n)$ and $c_2(n)$, depending only on the
dimension, such that
\begin{equation}
\label{bound}
c_1(n) \le |\Omega| \le c_2(n).
\end{equation}
For $\delta$ as in Lemma \ref{lemma1}, by \eqref{eq:3} we get that
\[
\begin{split}
\inf_{0\le t \le \delta}\big\{ W_{n-1}(\Omega_t)^n
- \omega_n^{n-1}|\Omega_t|\big\} &\le \frac {n+1} {\delta^{n+1}}
\int_0^{\delta} t^n \big( W_{n-1}(\Omega_t)^n
- \omega_n^{n-1}|\Omega_t|\big) dt \\
&\le \omega_{n}^{n-1} \frac {\varepsilon}{\delta^{n+1}} =
\omega_{n}^{n-1} \sqrt\varepsilon,
\end{split}
\]
where we finally choose $\delta^{n+1}=\sqrt{\varepsilon}$. Hence
there exists $\tau \in [0, \delta]$ such that
\begin{equation}
\label{eq:4}
W_{n-1}(\Omega_\tau)^n \le \omega_n^{n-1}|\Omega_\tau| +
\omega_{n}^{n-1} \sqrt\varepsilon.
\end{equation}
\fbox{\bf Case $n=2$.} In such a case, \eqref{eq:4} becomes
\begin{equation*}
\label{eq:7}
\frac{P(\Omega_\tau)^2}{4\pi}- |\Omega_\tau| \le \sqrt \varepsilon.
\end{equation*}
Then, denoting by $r_\tau$ and $R_\tau$ the inradius and the
circumradius of $\Omega_\tau$ respectively, and by $\rho_\tau$ the
radius of
$(\Omega_\tau)^*_1$, using the Bonnesen inequality (see for
example \cite{oss79}, and \cite{afn09,afn11} for some related
questions) we have
\[
(\rho_\tau-r_\tau)^2 \le (R_\tau-r_\tau)^2 \le \sqrt\varepsilon.
\]
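As a hedged numerical aside (not part of the proof; the ellipse and its semi-axes below are illustrative assumptions of ours), the Bonnesen inequality in its classical form, $P(\Omega)^2 - 4\pi|\Omega| \ge \pi^2(R-r)^2$, can be checked on ellipses, for which the inradius and circumradius are simply the semi-axes:

```python
import math

def ellipse_perimeter(a, b, steps=20000):
    # Arc length of t -> (a*cos t, b*sin t) by trapezoidal quadrature.
    total = 0.0
    for i in range(steps):
        t0 = 2 * math.pi * i / steps
        t1 = 2 * math.pi * (i + 1) / steps
        f0 = math.hypot(a * math.sin(t0), b * math.cos(t0))
        f1 = math.hypot(a * math.sin(t1), b * math.cos(t1))
        total += 0.5 * (f0 + f1) * (t1 - t0)
    return total

def bonnesen_slack(a, b):
    # P^2 - 4*pi*A - pi^2*(R - r)^2 for an ellipse with semi-axes a >= b,
    # where R = a (circumradius), r = b (inradius), A = pi*a*b.
    P = ellipse_perimeter(a, b)
    return P * P - 4 * math.pi * (math.pi * a * b) - math.pi ** 2 * (a - b) ** 2
```

The slack is non-negative and vanishes only for the disk, which is the rigidity phenomenon behind the quantitative estimate used here.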
Since $2\pi \rho_\tau = P(\Omega_\tau)$, Lemma
\ref{lemma1} gives, for $\varepsilon$ sufficiently small, that
\[
r_\tau\ge \frac{ P(\Omega_\tau) }{2\pi} -\sqrt[4]{\varepsilon}\ge
\frac{P(\Omega)}{2\pi}\left( 1- 2\sqrt[6]{\varepsilon}|\Omega|^{\frac 1 3}
\right)-\sqrt[4]{\varepsilon}\ge R(1-C_{2}|\Omega|^{\frac
1 3} \sqrt[6]{\varepsilon}),
\]
where $R=\frac{P(\Omega)}{2\pi}$ is the radius of $\Omega^*_1$ and
$C_{2}$ denotes a constant which depends only on the dimension
$n=2$. Since $r_\tau < r_\Omega$, by \eqref{bound} we have that
\begin{equation}
\label{eq:6}
d_2(\Omega) \le 1-
\frac{r_\tau}{R} \le C_{2}|\Omega|^{\frac
1 3} \sqrt[6] \varepsilon\le C_{2} \sqrt[6]{\varepsilon}
\end{equation}
where $B_{r_\tau}$ denotes a ball of radius $r_\tau$ contained in
$\Omega$. Then, by \eqref{eq:6} and since $P(\Omega)=2$, we have
that
\begin{equation}
\label{eq:8}
\left( \frac{P^2(\Omega)}{4\pi} -|\Omega| \right)^{\frac 1 2}\le
\left( \frac{\big(P(B_{r_\Omega})+C_2\sqrt[6]{\varepsilon}\big)^2}{4\pi} -
|B_{r_\Omega}| \right)^{\frac 1 2} \le C_2 \sqrt[12]\varepsilon,
\end{equation}
where the last inequality follows since $P(B_{r_\Omega})\le P(\Omega)=2$.
Then, \eqref{eq:8}, \eqref{bound} and \eqref{gshaus} give
\[
\delta_H(\Omega) \le C_2 \sqrt[15] \varepsilon.
\]
On the other hand, applying to \eqref{eq:8} the well-known Bonnesen
inequality, we get that
\[
D_2(\Omega) \le 2\pi(R_\Omega-r_\Omega)\le C_2 \sqrt[12] \varepsilon.
\]
\fbox{\bf Case $n> 2$.}
From \eqref{eq:4} and the Aleksandrov-Fenchel inequalities
\eqref{af-2} with $k=n$, we have that
\[
\frac{P(\Omega_\tau)^{\frac{n}{n-1}}}{(n^n\omega_n)^{\frac{1}{n-1}}}\le
|\Omega_\tau|+\sqrt \varepsilon.
\]
Hence, by \eqref{bound} and for $\varepsilon$ sufficiently small, an
elementary inequality gives that
\[
\frac{P(\Omega_\tau)^{{n}}}{n^n\omega_n}\le
|\Omega_\tau|^{n-1}+C_n\sqrt \varepsilon.
\]
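As a hedged aside (ours, not part of the proof; the constants and ranges below are illustrative assumptions), the elementary inequality invoked here has the following shape: if $X^{\frac{n}{n-1}} \le y+s$ with $y$ bounded and $0\le s\le 1$, then $X^{n} = \big(X^{\frac{n}{n-1}}\big)^{n-1}\le (y+s)^{n-1} \le y^{n-1}+Cs$, with $C$ depending only on $n$ and the bound on $y$, by convexity of $t\mapsto t^{n-1}$. A quick numerical check:

```python
# Illustrative check (ours, not part of the proof) of the elementary
# inequality: if c1 <= y <= c2 and 0 <= s <= 1, then
#   (y + s)^(n-1) <= y^(n-1) + C*s   with   C = (n-1)*(c2+1)^(n-2),
# which follows from (y+s)^(n-1) - y^(n-1) <= (n-1)*(y+s)^(n-2)*s,
# by convexity of t -> t^(n-1).
def check_elementary_bound(n, c1, c2, samples=1000):
    C = (n - 1) * (c2 + 1) ** (n - 2)
    for i in range(samples + 1):
        y = c1 + (c2 - c1) * i / samples
        for s in (1e-6, 1e-3, 0.5, 1.0):
            assert (y + s) ** (n - 1) <= y ** (n - 1) + C * s + 1e-12
    return True
```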
Then, applying the stability result \eqref{gs}, and using again
\eqref{bound}, it follows that
\begin{equation}
\label{gs2}
(R_\tau-r_\tau)^{(n+3)/2} \le C_n
\frac{P(\Omega)^{(n^2-3)/2}}{|\Omega|^{(n+3)(n-2)/2}} \sqrt\varepsilon\le
C_n\sqrt \varepsilon,
\end{equation}
where, as before, $R_\tau$ and $r_\tau$ are, respectively, the
circumradius and the inradius of $\Omega_\tau$.
Now, let $\rho_\tau$ be the radius of the ball
$(\Omega_\tau)^*_{n-1}$, having the same $W_{n-1}$ measure of
$\Omega_\tau$. As before, since $\rho_\tau <R_\tau$, by
\eqref{gs2}, Lemma \ref{lemma1} and \eqref{bound}, for $\varepsilon$
sufficiently small we have
\begin{multline}
\label{catenella}
r_\tau \ge \rho_\tau - C_n \varepsilon^{\frac{1}{n+3}} =
\frac{W_{n-1}(\Omega_\tau)}{\omega_n} - C_n
\varepsilon^{\frac{1}{n+3}} \ge \\ \ge
\omega_n^{-1}W_{n-1}(\Omega)\left(1-2\varepsilon^{1/(2n+2)}|\Omega|^{\frac{1}{n+1}}
\right) - C_n \varepsilon^{\frac{1}{n+3}} \ge \\ \ge
R\left(1- C_n \varepsilon^{1/(2n+2)}
\right),
\end{multline}
where $R=\omega_n^{-1}W_{n-1}(\Omega)$ is the radius of the ball
$\Omega^*_{n-1}$.
Denoting again with $r_\Omega$ the inradius of $\Omega$, we have that
$r_\tau \le r_\Omega$ and
\begin{equation*}
d_n(\Omega) \le 1- \frac{r_\tau}{R} \le C_n \varepsilon^{1/(2n+2)}.
\end{equation*}
Indeed, by the Aleksandrov-Fenchel inequalities,
\eqref{catenella}, and since $W_{n-1}(\Omega)=1$, it follows that
\begin{equation}\label{catena}
\begin{split}
\left[\left(
\frac{P(\Omega)}{n\omega_n} \right)^n - \left( \frac{ |\Omega|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}}
&\le
\left[
\left( \frac{W_{n-1}(\Omega)^n}{\omega_n^n}
\right)^{n-1} -
\left( \frac{ |\Omega|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} \le \\ & \le
\left[
\left( \frac{W_{n-1}(B_{r_\Omega})+C_n \varepsilon^{\frac{1}{2n+2}}}{\omega_n}
\right)^{n(n-1)}-
\left( \frac{ |B_{r_\Omega}|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} \le \\ & \le
\left[
\left( \frac{W_{n-1}(B_{r_\Omega})}{\omega_n}
\right)^{n(n-1)} + C_n \varepsilon^{\frac{1}{2n+2}} -
\left( \frac{ |B_{r_\Omega}|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} = \\ &=
C_n \varepsilon^{\frac{1}{(n+1)(n+3)}}.
\end{split}
\end{equation}
Hence, applying \eqref{gshaus}, from \eqref{catena}, \eqref{bound} and
since $W_{n-1}(\Omega)=\omega_n R=1$, we get that
\begin{equation*}
\delta_H(\Omega)\le C_n \varepsilon^{\frac{1}{(n+1)(n+3)}},
\end{equation*}
while applying \eqref{gs}, we get
\begin{equation*}
D_n(\Omega) \le \omega_n(R_\Omega-r_\Omega) \le C_n
\varepsilon^{\frac{1}{(n+1)(n+3)}},
\end{equation*}
and this concludes the proof.
\end{proof}
\begin{rem}
\label{remdef}
Under the assumptions of Theorem \ref{main}, \eqref{catena} and
\eqref{catenella} yield an estimate for the deficiency of $\Omega$,
namely
\[
\Delta(\Omega) \le C_{n} \varepsilon^{\frac{1}{(n+1)(n+3)}}.
\]
\end{rem}
\section{The case of the $k$-Hessian operator, $1 \le k\le n-1$}
In this section we consider the eigenvalue problem related to the
$k$-Hessian operators, $1 \le k\le n-1$, namely
\begin{equation*}
\left\{
\begin{array}{ll}
\Sk_k(D^2u)=\lambda (-u)^k &\text{in } \Omega,\\
u=0 &\text{on } \partial\Omega,
\end{array}
\right.
\end{equation*}
obtaining the stability result as follows.
\begin{theo}
\label{main2}
Let $1\le k \le n-1$, and $\Omega \subset \mathbb R^n$ be as in
\eqref{ipomega} such that
\begin{equation}
\label{small2}
\lambda_k(\Omega) \le (1+\varepsilon)\lambda_k(\Omega_{k-1}^*),
\end{equation}
where $\varepsilon>0$ is sufficiently small and $\Omega_{k-1}^*$ is the
ball such that $W_{k-1}(\Omega)=W_{k-1}(\Omega_{k-1}^*)$.
Moreover we suppose that the eigenfunctions related to
$\lambda_k(\Omega)$ have convex level sets. Then,
\[
\delta_H(\Omega) \le C_{n,k}\varepsilon^{\frac{2\alpha}{n+3}},
\]
and
\begin{equation}
\label{tesik}
d_k(\Omega)\le
C_{n,k} \varepsilon^{\alpha},\quad D_k(\Omega) \le
C_{n,k} \varepsilon^{\frac{2\alpha}{n+3}},
\end{equation}
where $\alpha=\max\left\{\frac{1}{k+1}, \frac
{2k}{(k+1)(n+3)}\right\}$, $C_{n,k}$ is a positive constant which
depends only on $n$ and $k$, and $d_k(\Omega)$ and $D_k(\Omega)$
are, respectively, the interior and exterior $k$-deficiency of
$\Omega$ as in \eqref{defe}.
\end{theo}
\begin{rem}
As observed in Section 2.3, the additional hypothesis on the convexity
of the level sets of the eigenfunctions corresponding to
$\lambda_k(\Omega)$ is necessary to have that a Faber-Krahn inequality
holds. On the other hand, this assumption seems natural. Indeed,
for $k=1$ it follows from the Korevaar concavity maximum principle
(see \cite{kor}), while it is trivial for $k=n$. For the $k$-Hessian
operators, at least in the case $n=3$ and $k=2$, it is proved in
\cite{lmx10} and \cite{sa12} that if $\Omega$ is sufficiently smooth, the
eigenfunctions of $\Sk_2$ have convex level sets. To the best of our
knowledge, the general case is an open problem.
\end{rem}
\begin{rem}
Similarly as observed in Remark \ref{remeq}, by the estimates
\eqref{tesik} we can obtain that
\begin{equation*}
\frac{W_{k-1}(\Omega) - W_{k-1}(B_{r_\Omega})}{W_{k-1}(\Omega)} \le
C_{n,k} \varepsilon^{\alpha},\quad \frac{W_{k-1}(B_{R_\Omega}) -
W_{k-1}(\Omega)}{W_{k-1}(\Omega)} \le
C_{n,k} \varepsilon^{\frac{2\alpha}{n+3}}.
\end{equation*}
\end{rem}
As in the case of the Monge-Amp\`ere operator, before giving the
proof of Theorem \ref{main2} we first establish some preliminary
results.
Using the same notation as in Section 3, for $\delta\ge 0$ we denote
\[
\Omega_\delta = \{x\in \Omega \colon -u > \delta \}.
\]
\begin{lemma}
\label{lemma1kkk}
Under the hypotheses of Theorem \ref{main2}, if $u$ is the
eigenfunction of $\Sk_k$ in $\Omega$ such that
$\|u\|_{L^{k+1}(\Omega)}=1$, then for any $\delta$ such that
$0<\delta<\frac 1 2 |\Omega|^{-\frac{1}{k+1}}$, we have
\begin{equation}
\label{lemma1k}
W_{k-1}(\Omega_\delta) \ge
W_{k-1}(\Omega) \left[1-(n-k+1)\max\{\varepsilon, 2\delta
|\Omega|^{\frac{1}{k+1}}\}\right].
\end{equation}
\end{lemma}
\begin{proof}
For $\delta>0$, we compute the Rayleigh quotient of the function
$\phi=u+\delta$ in $\Omega_\delta$. Then,
\begin{equation}
\label{udeltak}
\lambda_k(\Omega_\delta) \le \dfrac{\int_{\Omega_\delta}
(-\phi)\Sk_k (D^2 \phi)\,dx }{ \int_{\Omega_\delta}
(-\phi)^{k+1}\,dx} = \dfrac{\int_{\Omega_\delta}
(-u-\delta)\Sk_k (D^2 u) \,dx }{ \int_{\Omega_\delta}
(-u-\delta)^{k+1}\,dx}.
\end{equation}
Moreover,
\[
\begin{split}
\int_{\Omega_\delta}(-u-\delta)\Sk_k (D^2u) \,dx
&= \lambda_k(\Omega) \int_{\Omega_\delta} (-u-\delta)(-u)^k\,dx \\
&\le \lambda_k(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{k+1}\,dx\right)^{\frac{1}{k+1}}
\left(\int_{\Omega_\delta}
(-u)^{k+1}\,dx\right)^{\frac{k}{k+1}} \\
&\le \lambda_k(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{k+1}\,dx\right)^{\frac{1}{k+1}}.
\end{split}
\]
Hence, combining the above estimate with \eqref{udeltak} it follows
that
\begin{equation}
\label{udelta2k}
\lambda_k(\Omega_\delta) \le \lambda_k(\Omega) \left(\int_{\Omega_\delta}
(-u-\delta)^{k+1}\,dx \right)^{-\frac{k}{k+1}}.
\end{equation}
On the other hand, by the Minkowski inequality, and choosing
$\delta<\frac 1 2 |\Omega|^{-\frac{1}{k+1}}$, we obtain
that
\[
\begin{split}
\left(\int_{\Omega_\delta} (-u-\delta)^{k+1}\,dx\right)^{\frac{1}{k+1}}
&\ge \left( \int_{\Omega_\delta} (-u)^{k+1} \,dx \right)^{\frac
{1}{k+1}} - \left( \int_{\Omega_\delta} \delta^{k+1} \,dx \right)^{\frac
{1}{k+1}} \\ &\ge \left(1- \int_{\Omega\setminus\Omega_\delta}
\delta^{k+1} \,dx \right)^{\frac{1}{k+1}}- \delta
|\Omega_\delta|^{\frac{1}{k+1}} \\
&= \left(1-
\delta^{k+1} \big(|\Omega|-|\Omega_\delta|\big)
\right)^{\frac{1}{k+1}}- \delta |\Omega_\delta|^{\frac{1}{k+1}} \\
&\ge 1-\delta\big( |\Omega| - |\Omega_\delta| \big)^{\frac{1}{k+1}}
- \delta |\Omega_\delta|^{\frac{1}{k+1}} \ge 1-2\delta
|\Omega|^{\frac {1}{k+1}}.
\end{split}
\]
Hence, from \eqref{udelta2k}, \eqref{small2} and the Faber-Krahn
inequality it follows that
\[
\lambda_k((\Omega_\delta)^*_{k-1}) \le \lambda_k(\Omega_\delta) \le
(1+\varepsilon) \lambda_k(\Omega^*_{k-1}) \big( 1-2\delta |\Omega|^{\frac
{1}{k+1}} \big)^{-k}
\]
which implies, by \eqref{risc_palla}, that
\begin{equation}
\label{udelta3k}
\left(\frac{W_{k-1}(\Omega_\delta)}{W_{k-1}(\Omega)}
\right)^{\frac{2k}{n-k+1}}
=
\frac{\lambda_k(\Omega^*_{k-1})}{\lambda_k((\Omega_\delta)^*_{k-1})}
\ge \frac{\big( 1-2\delta |\Omega|^{\frac{1}{k+1}}
\big)^k}{1+\varepsilon},
\end{equation}
where we used that the balls $\Omega^*_{k-1}$ and
$(\Omega_\delta)^*_{k-1}$ preserve, respectively, the $(k-1)$-th
quermassintegral of $\Omega$ and $\Omega_\delta$. Hence, by
\eqref{udelta3k} we get that
\[
\frac{W_{k-1}(\Omega_\delta)}{W_{k-1}(\Omega)}
\ge \left( 1 - \frac{\varepsilon+ 2 \delta |\Omega|^{\frac
{1}{k+1}}}{1+\varepsilon} \right)^{\frac{n-k+1}{2}} \ge
1-(n-k+1)\max\{\varepsilon, 2\delta |\Omega|^{\frac{1}{k+1}}\},
\]
obtaining the thesis.
\end{proof}
\begin{lemma}
Under the hypotheses of Theorem \ref{main2}, if $u$ is the
eigenfunction of $\Sk_k$ in $\Omega$ such that
$\|u\|_{L^{k+1}(\Omega)}=1$, we have that
\begin{equation}
\label{eq:10}
\frac {n(n-k+1)^{k}}{k} \int_0^{\max|u|}
\frac{W_k(\Omega_t)^{k+1}-W_k((\Omega_t)^*_{k-1})^{k+1}}{\big[-\frac
{d}{dt} W_{k-1}(\Omega_t)\big]^k}
dt \le \varepsilon \lambda_k(\Omega_{k-1}^*).
\end{equation}
\end{lemma}
\begin{proof}
The divergence form of $\Sk_k$ and the coarea formula give that
(see also \cite{trudiso})
\[
\lambda_k(\Omega) = \frac 1 k \int_0^{\max(-u)} dt \int_{\Sigma_t}
H_{k-1}(\Sigma_t) |Du|^k d\mathcal H^{n-1},
\]
where $\Sigma_t=\partial \Omega_t$. Then, the H\"older inequality
and the Reilly formula \eqref{reillyder} give that
\begin{multline}
\label{lemmak}
\lambda_k(\Omega) \ge \frac 1 k \int_0^{\max(-u)}
\dfrac{ \left( \int_{\Sigma_t} H_{k-1}(\Sigma_t) d\mathcal
H^{n-1}\right)^{k+1}}
{\left(
\int_{\Sigma_t} \frac{H_{k-1}(\Sigma_t)}{|Du|}d\mathcal H^{n-1}
\right)^{k} } dt = \\ =
\frac {n(n-k+1)^{k}}{k} \int_0^{\max(-u)}
\frac{ W_k(\Omega_t)^{k+1} }
{\big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big]^{k} } dt.
\end{multline}
Moreover, since $\|u^*_{k-1}\|_{k+1} \ge \|u\|_{k+1}=1$, we have
\begin{multline}
\label{lemmakball}
\lambda_k(\Omega^*_{k-1}) \le
\frac{ \int_{\Omega^*_{k-1}} (-u^*_{k-1})\Sk_k(D^2 u^*_{k-1}) dx }
{\int_{\Omega^*_{k-1}} (-u^*_{k-1})^{k+1} dx } \le
\int_{\Omega^*_{k-1}} (-u^*_{k-1})\Sk_k(D^2 u^*_{k-1}) dx = \\ =
\frac {n(n-k+1)^{k}}{k} \int_0^{\max(-u)}
\frac{ W_k((\Omega_t)^*_{k-1})^{k+1}}
{\big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big]^{k} } dt.
\end{multline}
The last equality follows from the symmetry of $u^*_{k-1}$ and from
$W_{k-1}(\Omega_t)= W_{k-1}\big((\Omega_t)^*_{k-1}\big)$. Hence,
subtracting \eqref{lemmakball} from \eqref{lemmak}, we have that
\begin{multline*}
\varepsilon \lambda_k(\Omega^*_{k-1}) \ge \\ \ge \lambda_k(\Omega)
-\lambda_k(\Omega^*_{k-1}) \ge
\frac {n(n-k+1)^{k}}{k} \int_0^{\max(-u)}
\frac{ W_k(\Omega_t)^{k+1}- W_k((\Omega_t)^*_{k-1})^{k+1}}
{\big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big]^{k} } dt,
\end{multline*}
which gives the claim.
\end{proof}
In the next result we prove a lower bound for $|\Omega|$ in terms of $W_{k-1}(\Omega)$.
\begin{lemma}
\label{boundmisk}
Under the hypotheses of Theorem \ref{main2}, it holds that
\begin{equation*}
|\Omega| \ge C_{n,k} W_{k-1}^{\frac{n}{n-k+1}}(\Omega),
\end{equation*}
where $C_{n,k}$ denotes a positive constant depending only on $n$ and $k$.
\end{lemma}
\begin{proof}
Let $u$ be an eigenfunction corresponding to the eigenvalue
$\lambda=\lambda_k(\Omega)$ such that $\|u\|_{L^{k+1}}=1$. Then,
\begin{equation}
\label{eqk}
\Sk_k (D^2u) = \lambda (-u)^k \quad\text{ in }\Omega.
\end{equation}
Arguing as in Lemma \ref{boundmis}, by \eqref{introintcurv},
\eqref{reillyder} and the H\"older inequality we have
\begin{equation}
\label{sinistrak}
\int_{\Omega_t}\Sk_k (D^2u)dx = \frac{1}{k} \int_{\Sigma_t} H_{k-1} |Du|^k
d\mathcal H^{n-1} \ge C_{n,k} \frac{\left(W_k(\Omega_t)\right)^{k+1}}
{\left(-\frac{d}{dt}W_{k-1}(\Omega_t)\right)^k}.
\end{equation}
We divide the proof into three cases.

\fbox{\bf Case $k> \frac n 2$.} By the H\"older inequality we have:
\begin{equation}
\label{destrak}
\left(\int_{\Omega_t}(-u)^kdx\right)^{\frac 1 k} \le
|\Omega|^{\frac 1 k} \|u\|_{L^{\infty}(\Omega)}.
\end{equation}
Putting together \eqref{sinistrak} and \eqref{destrak}, by \eqref{eqk}
we get that
\[
W_k(\Omega_t)^{-\frac{k+1}{k}} \left(
-\frac{d}{dt}W_{k-1}(\Omega_t) \right) \ge C_{n,k} |\Omega|^{-\frac
1 k}
\|u\|_{\infty}^{-1}\lambda^{-\frac 1 k}.
\]
Using the Aleksandrov-Fenchel inequalities \eqref{afineq} with $j=k$
and $i=k-1$, and integrating between
$0$ and $\|u\|_{L^{\infty}(\Omega)}$, since $k>\frac n 2$ we get
\[
W_{k-1}(\Omega)^{\frac{2k-n}{k(n-k+1)}}\ge C_{n,k}
\lambda^{-\frac{1}{k}}|\Omega|^{-\frac{1}{k}}.
\]
Since $\lambda_k(\Omega) \le (1+\varepsilon)\lambda_k(\Omega^*_{k-1})$,
and recalling the properties \eqref{risc_palla} and
\eqref{querball}, we have:
\begin{gather*}
\begin{split}
|\Omega|^{\frac{1}{k}} &
\ge C_{n,k} W_{k-1}(\Omega)^{-\frac{2k-n}{k(n-k+1)}}
{\lambda_{k}(\Omega^*_{k-1})^{-\frac{1}{k}}} {(1+\varepsilon)^{-\frac{1}{k}}}
=
\\ &=
C_{n,k} W_{k-1}(\Omega)^{-\frac{2k-n}{k(n-k+1)}}
W_{k-1}(\Omega^*_{k-1})^{\frac {2}{n-k+1}}
{\lambda_{k}(B_1)^{-\frac{1}{k}}}
{(1+\varepsilon)^{-\frac{1}{k}}}
= \\ &= C_{n,k}
{(1+\varepsilon)^{-\frac{1}{k}}}W_{k-1}(\Omega)^{\frac{n}{k(n-k+1)}},
\end{split}
\end{gather*}
and the first case is completed.
\fbox{\bf Case $k< \frac n 2$.} By the H\"older inequality, and since $\|u\|_{k+1}=1$,
we have:
\begin{equation}
\label{destrak2}
\left(\int_{\Omega_t}(-u)^kdx\right)^{\frac 1 k} \le
|\Omega_t|^{\frac {1}{k(k+1)}} \left( \int_{\Omega_t} (-u)^{k+1}dx
\right)^{\frac {1}{k+1}}\le |\Omega|^{\frac {1}{k(k+1)}}.
\end{equation}
Then, joining \eqref{sinistrak} and \eqref{destrak2}, and using the
Aleksandrov-Fenchel inequalities we get
\[
W_{k-1}(\Omega_t)^{-\frac{(k+1)(n-k)}{k(n-k+1)}} \left(
-\frac{d}{dt}W_{k-1}(\Omega_t) \right) \ge C_{n,k} |\Omega|^{-\frac
{1} {k(k+1)}}\lambda^{-\frac 1 k}.
\]
Integrating between $0$ and $\delta$ sufficiently small, we get that
\[
W_{k-1}(\Omega_\delta)^{-\frac{n-2k}{k(n-k+1)}}-
W_{k-1}(\Omega)^{-\frac{n-2k}{k(n-k+1)}} \ge C_{n,k} |\Omega|^{-\frac
{1} {k(k+1)}}\lambda^{-\frac 1 k} \delta.
\]
Now we apply Lemma \ref{lemma1kkk}. Let $\varepsilon$ and $\delta$ be such that
$\varepsilon<2\delta|\Omega|^{\frac{1}{k+1}}<(n-k+1)^{-1}$. Hence, writing
$\alpha=-\frac{n-2k}{k(n-k+1)}<0$, we get
\begin{equation}
\label{boh1}
W_{k-1}(\Omega)^{\alpha}\left[
(1-2\delta (n-k+1) |\Omega|^{\frac{1}{k+1}}
)^\alpha-1 \right]
\ge C_{n,k} |\Omega|^{-\frac{1} {k(k+1)}}\lambda^{-\frac 1
k}\delta.
\end{equation}
Moreover, if $\delta$ is such that $2\delta
|\Omega|^{\frac{1}{k+1}}(n-k+1)\le 1-{2^{-\frac{1}{1-\alpha}}}$, from
\eqref{boh1} we get
\[
W_{k-1}(\Omega)^{\alpha}\left[
-4\alpha (n-k+1) |\Omega|^{\frac{1}{k+1}}
\right] \delta
\ge C_{n,k} |\Omega|^{-\frac{1} {k(k+1)}}\lambda^{-\frac 1
k}\delta,
\]
that is
\[
|\Omega|^{\frac{1}{k}}
\ge C_{n,k} W_{k-1}(\Omega)^{\frac{n-2k}{k(n-k+1)}} \lambda^{-\frac
1 k}= C_{n,k} (1+\varepsilon)^{-\frac{1}{k}} \lambda_k(B_1)^{-\frac 1 k}
W_{k-1}(\Omega)^{\frac{n}{k(n-k+1)}},
\]
which is the claim.

\fbox{\bf Case $k= \frac n 2$.}
Arguing as before, we get
\[
\log\left(\frac{W_{\frac n 2-1}(\Omega)}
{W_{\frac n 2-1}(\Omega_\delta)}\right) \ge C_{n} |\Omega|^{-\frac
{4} {n (n+2)}}\lambda^{-\frac 2 n} \delta.
\]
By Lemma \ref{lemma1kkk}, it follows that if
$\varepsilon<2\delta|\Omega|^{\frac{2}{n+2}}<\frac{2}{n+2}$,
\[
-\log\left( 1-\delta\left(n +2\right)|\Omega|^{\frac{2}{n+2}} \right)
\ge C_{n} |\Omega|^{-\frac
{4} {n (n+2)}}\lambda^{-\frac 2 n} \delta.
\]
Then, for $\delta$ such that $\delta (n+2)
|\Omega|^{\frac{2}{n+2}}<\frac {1} {2(n+2)}$,
\[
2(n+2)|\Omega|^{\frac{2}{n+2}}\delta \ge
C_{n}|\Omega|^{-\frac{4}{n(n+2)}}\lambda^{-\frac 2 n} \delta.
\]
Then, similarly as before,
\[
|\Omega|^{\frac{2}{n}} \ge C_{n}
W_{\frac n 2 -1}(\Omega)^{\frac{4}{(n+2)}},
\]
and the proof of the lemma is complete.
\end{proof}
Now we can prove the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{main2}]
Without loss of generality, we may suppose that $W_{k-1}(\Omega)=1$.
Indeed, the quotient
\[
\frac{W_{k-1}(K)-W_{k-1}(L)}{W_{k-1}(K)}
\]
is rescaling invariant. Consequently, by Lemma \ref{boundmisk} and the
Aleksandrov-Fenchel inequality, we have that there exist two
positive constants $c_1(n,k)$ and $c_2(n,k)$, which depend only on $n$
and $k$, such that
\begin{equation}
\label{boundk}
c_1(n,k) \le |\Omega| \le c_2(n,k).
\end{equation}
The H\"older inequality gives that
\begin{multline}
\label{eq:holder}
\varepsilon= \delta^{k+1} = \left( \int_0^\delta dt \right)^{k+1} \le
\left(\int_0^\delta \big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big] dt\right)^k
\int_0^\delta \frac{1}{\big[-\frac{d}{dt}
W_{k-1}(\Omega_t)\big]^k}dt = \\ =
\left[W_{k-1}(\Omega)-W_{k-1}(\Omega_\delta)\right]^k
\int_0^\delta \frac{1}{\big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big]^{k}}dt.
\end{multline}
Hence, for $\varepsilon>0$ sufficiently small, $\delta$ satisfies the
hypothesis of Lemma \ref{lemma1kkk}, and the inequalities
\eqref{eq:holder}, \eqref{lemma1k}, \eqref{eq:10} and \eqref{boundk}
imply that
\begin{multline*}
\inf_{t\in [0,\delta]} \left( W_k(\Omega_t)^{k+1}-
W_k((\Omega_t)^*_{k-1})^{k+1}\right) \le \\
\le \frac 1 \varepsilon
\left[W_{k-1}(\Omega)-W_{k-1}(\Omega_\delta)\right]^k
\int_0^{\max(-u)}
\frac{ W_k(\Omega_t)^{k+1}- W_k((\Omega_t)^*_{k-1})^{k+1}}
{\big[-\frac{d}{dt} W_{k-1}(\Omega_t)\big]^{k} } dt \le \\ \le
C_{n,k} \varepsilon^{\frac {k}{k+1}}
\lambda_k(\Omega^*_{k-1}).
\end{multline*}
Hence, since $W_{k-1}(\Omega)=1$, for some $\tau \in [0,\delta]$ we have that
\[
W_k(\Omega_\tau)^{k+1}
\le W_k((\Omega_\tau)^*_{k-1})^{k+1} + C_{n,k} \varepsilon^{\frac{k}{k+1}}.
\]
Moreover, an algebraic inequality and \eqref{boundk} give that
\[
{\omega_n}^{-1} W_{k}(\Omega_\tau)^{n-k+1}
\le W_{k-1}(\Omega_\tau)^{n-k} + C_{n,k} \varepsilon^{\frac{k}{k+1}}.
\]
Applying the estimates \eqref{gs} and \eqref{boundk}, and using the
same notation of the proof of Theorem \ref{main}, we have
\begin{equation*}
(R_\tau-r_\tau)^{(n+3)/2} \le
C_{n,k}\varepsilon^{\frac{k}{k+1}}.
\end{equation*}
Moreover, by \eqref{lemma1k} it follows that
\begin{multline}
\label{katenella}
r_\tau \ge \rho_\tau - C_{n,k} \varepsilon^{\frac{2k}{(k+1)(n+3)}} =
\left(\frac{W_{k-1}(\Omega_\tau)}{\omega_n}\right)^{\frac{1}{n-k+1}} -
C_{n,k}
\varepsilon^{\frac{2k}{(k+1)(n+3)}} \ge \\
\ge
\left[ \frac{ W_{k-1}(\Omega) }{\omega_n}
\left(1-C_{n,k}\varepsilon^{\frac{1}{k+1}}|\Omega|^{\frac{1}{k+1}}
\right) \right]^{\frac{1}{n-k+1}} - C_{n,k}
\varepsilon^{\frac{2k}{(k+1)(n+3)}} \ge \\
\ge
\left[\frac{ W_{k-1}(\Omega) }{\omega_n}\right]^{\frac{1}{n-k+1}}
\left(
1-\tilde C_{n,k}\varepsilon^{\frac{1}{k+1}}
-C_{n,k} \varepsilon^{\frac{2k}{(k+1)(n+3)}}
\right)\ge
R
\left(
1- C_{n,k}\varepsilon^\alpha
\right),
\end{multline}
where $R=\left[
\omega_n^{-1}W_{k-1}(\Omega)\right]^{\frac{1}{n-k+1}}$ is the
radius of $\Omega^*_{k-1}$, and $\alpha=\max\left\{\frac{1}{k+1}, \frac
{2k}{(k+1)(n+3)}\right\}$. Hence, recalling \eqref{querball}, and
since $r_\tau\le r_\Omega$ and $W_{k-1}(\Omega)=1$, we obtain
\begin{equation*}
\label{kkkk}
d_k(\Omega) \le \omega_n^{\frac{1}{n-k+1}}(R-r_\tau)\le
C_{n,k} \varepsilon^{\alpha},
\end{equation*}
that is the first estimate in \eqref{tesik}.
In order to obtain the remaining estimates of the theorem, we use
the Aleksandrov-Fenchel inequalities, \eqref{katenella} and the
normalization $W_{k-1}(\Omega)=1$ to get
\begin{equation}\label{catenak}
\begin{split}
\left[\left(
\frac{P(\Omega)}{n\omega_n} \right)^n - \left( \frac{ |\Omega|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}}
&\le \left[
\left( \frac{W_{k-1}(\Omega)^n}{\omega_n^n}
\right)^{\frac{n-1}{n-k+1}} -
\left( \frac{ |\Omega|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} \le
\\
& \le
\left[
\left( \frac{W_{k-1}(B_{r_\Omega})+C_{n,k}
\varepsilon^{\alpha}}{\omega_n}
\right)^{\frac{n(n-1)}{n-k+1}} -
\left( \frac{ |B_{r_\Omega}|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} \le
\\ & \le
\left[
\left( \frac{W_{k-1}(B_{r_\Omega})}{\omega_n}
\right)^{\frac{n(n-1)}{n-k+1}} + C_{n,k} \varepsilon^{\alpha} -
\left( \frac{ |B_{r_\Omega}|
}{\omega_n}\right)^{n-1}\right]^{\frac{2}{n+3}} =
\\ & =
C_{n,k} \varepsilon^{\frac{2\alpha}{n+3}}.
\end{split}
\end{equation}
Hence, \eqref{catenak} and \eqref{gshaus} imply
\[
\delta_H(\Omega)\le C_{n,k}
\varepsilon^{\frac{2\alpha}{n+3}},
\]
while, from \eqref{gs} we get
\begin{equation*}
D_k(\Omega) \le
\omega_n^{\frac{1}{n-k+1}}(R_\Omega-r_\Omega) \le C_{n,k}
\varepsilon^{\frac{2\alpha}{n+3}},
\end{equation*}
and this concludes the proof.
\end{proof}
\begin{rem}
Similarly as observed in Remark \ref{remdef}, from the proof of
Theorem \ref{main2} it is possible to obtain that
\[
\Delta(\Omega) \le C_{n,k} \varepsilon^{\frac{2\alpha}{n+3}}.
\]
\end{rem}
{
"timestamp": "2013-03-27T01:01:43",
"yymm": "1302",
"arxiv_id": "1302.7252",
"language": "en",
"url": "https://arxiv.org/abs/1302.7252",
"abstract": "In this paper, we give some stability estimates for the Faber-Krahn inequality relative to the eigenvalues of Hessian operators",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Stability results for some fully nonlinear eigenvalue estimates"
}
https://arxiv.org/abs/1709.07290

Comparing the Switch and Curveball Markov Chains for Sampling Binary Matrices with Fixed Marginals

Abstract: The Curveball algorithm is a variation on well-known switch-based Markov chain approaches for uniformly sampling binary matrices with fixed row and column sums. Instead of a switch, the Curveball algorithm performs a so-called binomial trade in every iteration of the algorithm. Intuitively, this could lead to a better convergence rate for reaching the stationary (uniform) distribution in certain cases. Some experimental evidence for this has been given in the literature. In this note we give a spectral gap comparison between two switch-based chains and the Curveball chain. In particular, this comparison allows us to conclude that the Curveball Markov chain is rapidly mixing whenever one of the two switch chains is rapidly mixing. Our analysis directly extends to the case of sampling binary matrices with forbidden entries (under the assumption of irreducibility). This in particular captures the case of sampling simple directed graphs with given degrees. As a by-product of our analysis, we show that the switch Markov chain of the Kannan-Tetali-Vempala conjecture only has non-negative eigenvalues if the sampled binary matrices have at least three columns. This shows that the Markov chain does not have to be made lazy, which is of independent interest. We also obtain an improved bound on the smallest eigenvalue for the switch Markov chain studied by Greenhill for uniformly sampling simple directed regular graphs.

\section{Introduction}
The problem of uniformly sampling binary matrices with fixed row and column sums (marginals) has received a lot of attention, see, e.g., \cite{Rao1996,Kannan1999,Erdos2013,Erdos2015,Erdos2016}.
Equivalent formulations of this problem are the uniform sampling of undirected bipartite graphs, or the uniform sampling of directed graphs with possibly a self-loop at every node (but no parallel edges).
One approach is to define a Markov chain on the space of all binary matrices with the given row and column sums, and to study the random walk on this space induced by making small changes to a matrix according to a given probabilistic procedure (which defines the transition matrix). The idea, roughly speaking, is that after a sufficient amount of time, the so-called \emph{mixing time}, the resulting matrix is almost a sample from the uniform distribution over all binary matrices with the given row and column sums. The most well-known probabilistic procedures for making these small changes use so-called switches, see, e.g., \cite{Rao1996}.
More recently the Curveball algorithm was introduced in some experimental papers
\cite{Verhelst2008,Strona2014}, which is a procedure that intuitively speeds up the mixing time of switch-based chains in many settings.
The goal of this paper is to confirm this intuition by giving a spectral gap comparison for the Markov chains of the classical switch algorithm of Kannan, Tetali and Vempala \cite{Kannan1999} and the Curveball algorithm as formulated by Verhelst \cite{Verhelst2008}.
We will start with an informal description of both algorithms.
For a given initial binary matrix $A$, in every step of the switch algorithm we choose two distinct rows and two distinct columns uniformly at random. If the $2 \times 2$ submatrix corresponding to these rows and columns is a \emph{checkerboard} $C_i$ for $i = 1,2$, where
$$
C_1 = \left(\begin{matrix}
1 & 0 \\ 0 & 1
\end{matrix} \right) \ \ \ \ \text{ and } \ \ \ \
C_2 = \left(\begin{matrix}
0 & 1 \\ 1 & 0
\end{matrix} \right),
$$
then the $2 \times 2$ submatrix is replaced by $C_{i+1}$ for $i$ modulo $2$. That is, if the checkerboard is $C_1$, it is replaced by $C_2$, and vice versa. If the submatrix does not correspond to a checkerboard, nothing is changed. Such an operation is called a \emph{switch}.
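The switch step described above can be sketched as follows (a minimal illustration of ours; the function name is not from the cited papers):

```python
import random

C1 = ((1, 0), (0, 1))  # checkerboard C_1
C2 = ((0, 1), (1, 0))  # checkerboard C_2

def switch_step(A, rng=random):
    """One step of the switch chain on a 0/1 matrix A (list of lists).

    Pick two distinct rows and two distinct columns uniformly at random;
    if the induced 2x2 submatrix is a checkerboard, replace it by the
    other checkerboard, otherwise leave A unchanged."""
    i, j = rng.sample(range(len(A)), 2)
    k, l = rng.sample(range(len(A[0])), 2)
    sub = ((A[i][k], A[i][l]), (A[j][k], A[j][l]))
    if sub == C1:
        new = C2
    elif sub == C2:
        new = C1
    else:
        return A  # not a checkerboard: nothing happens
    A[i][k], A[i][l] = new[0]
    A[j][k], A[j][l] = new[1]
    return A
```

A switch preserves all row and column sums, so repeated calls walk over the set of matrices with the given marginals.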
The Curveball algorithm intuitively speeds up the switch algorithm. In every step of the algorithm, first two rows are chosen uniformly at random from $A$ as in the switch algorithm. Then, a so-called \emph{binomial trade} is performed. In such a trade, we first look at all the columns in the $2 \times n$ submatrix given by the chosen rows, and we identify all the columns for which the column sum, in this submatrix, is one. That is, the column consist of precisely one $1$ and one $0$. For example if the $2 \times 6$ submatrix (i.e., $n = 6$) is given by
$$
\begin{pmatrix}
1 & \mathbf{1} & 0 & \mathbf{0} & \mathbf{0} & \mathbf{1} \\ 1 & \mathbf{0} & 0 & \mathbf{1} & \mathbf{1} & \mathbf{0}
\end{pmatrix} ,
$$
then we consider the (auxiliary) submatrix
$$
\left(\begin{matrix}
1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0
\end{matrix} \right)
$$
given by the second, fourth, fifth and sixth column. Let $u$ and $l$ be, respectively, the number of columns where the $1$ appears in the upper row and in the lower row ($u = l = 2$ here). We now draw, uniformly at random, a $2 \times (u+l)$ matrix with column sums equal to $1$, and row sums equal to $u$ and $l$. Note that there are $\binom{u+l}{u}$ possible choices, hence the name binomial trade. We then replace the (auxiliary) submatrix in $A$ with this new submatrix. Note that such a drawing can be obtained by uniformly choosing $u$ out of the $u+l$ column indices.
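A binomial trade on two chosen rows can be sketched as follows (our illustration; the function name is hypothetical, not from the cited papers):

```python
import random

def binomial_trade(A, i, j, rng=random):
    """Perform one binomial trade between rows i and j of a 0/1 matrix A.

    The columns where the two rows differ are redistributed: if u of them
    carry the 1 in row i and l of them in row j, a new placement of the u
    ones in row i is chosen uniformly among the binomial(u+l, u) options."""
    diff = [c for c in range(len(A[0])) if A[i][c] != A[j][c]]
    u = sum(A[i][c] for c in diff)      # ones currently in row i
    top = set(rng.sample(diff, u))      # columns that get the 1 in row i
    for c in diff:
        A[i][c] = 1 if c in top else 0
        A[j][c] = 1 - A[i][c]
    return A
```

Like a switch, a binomial trade preserves all row and column sums; a single trade can simultaneously perform many switches between the two chosen rows.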
Both algorithms define a Markov chain on the set of all $m \times n$ binary matrices satisfying given row and column sums $r$ and $c$. The main result of this work is a comparison of their relaxation times, or, equivalently, their spectral gaps (see the next section for definitions).
\begin{theorem}[Relaxation time comparison]\label{thm:relax_comparison}
Let $(1 - \lambda_*^c)^{-1}$ and $(1 - \lambda_*^s)^{-1}$, be the relaxation times of the Curveball and switch Markov chains respectively. Then, with $r_{\max}$ the maximum row sum,
$$
\frac{2}{n(n-1)}\cdot (1 - \lambda_*^s)^{-1} \ \leq \ (1 - \lambda_*^c)^{-1} \ \leq \ \min\left\{1, \frac{(2r_{\max} + 1)^2}{2n(n-1)}\right\} \cdot (1 - \lambda_*^s)^{-1}.
$$
\end{theorem}
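To get a feel for the two factors in the theorem above, the following sketch (ours; the ranges of $n$ and $r_{\max}$ are illustrative assumptions) computes them and checks that the lower factor never exceeds the upper one:

```python
# Illustrative computation of the two factors in the relaxation-time
# comparison: lower factor 2/(n*(n-1)) and upper factor
# min{1, (2*r_max + 1)^2 / (2*n*(n-1))}.
def lower_factor(n):
    return 2.0 / (n * (n - 1))

def upper_factor(n, r_max):
    return min(1.0, (2 * r_max + 1) ** 2 / (2.0 * n * (n - 1)))

# For every number of columns n >= 2 and feasible row-sum bound
# 1 <= r_max <= n, the interval [lower, upper] is non-empty.
for n in range(2, 30):
    for r_max in range(1, n + 1):
        assert lower_factor(n) <= upper_factor(n, r_max)
```

For instance, for sparse instances ($r_{\max}$ constant, $n$ large) the upper factor is of order $1/n^2$, so the Curveball chain relaxes much faster than the switch chain in this regime.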
\noindent We present a more general comparison framework inspired by, and based on, the notion of a heat-bath Markov chain as introduced by Dyer, Greenhill and Ullrich \cite{Dyer2014}. We prove Theorem \ref{thm:relax_comparison} as an application of this framework in the more general setting where the binary matrices may also have \emph{forbidden entries} that must be zero. This allows us to also compare the chains for the sampling of simple directed graphs with a given degree sequence, since the adjacency matrix of such a graph can be modeled by a square binary matrix with zeros on the diagonal.
\subsection{Related work}
We first refer the reader to \cite{Erdos2016} for a nice exposition of work related to the switch Markov chain.
Kannan, Tetali and Vempala \cite{Kannan1999} conjectured that the KTV-switch chain is rapidly mixing for all fixed row and column sums.
Mikl\' os, Erd\H{o}s and Soukup \cite{Erdos2013} proved the conjecture for half-regular binary matrices, in which all the row sums are equal (or all column sums), and Erd\H{o}s, Kiss, Mikl\' os and Soukup \cite{Erdos2015} extended this result to almost half-regular marginals. The authors prove this in a slightly more general context where there might be certain forbidden edge sets.
The Curveball algorithm was first described by Verhelst \cite{Verhelst2008} and a slightly different version was later independently formulated by Strona, Nappo, Boccacci, Fattorini and San-Miguel-Ayanz \cite{Strona2014}. The name Curveball algorithm was introduced in \cite{Strona2014}. Theorem \ref{thm:relax_comparison} directly implies that the Curveball Markov chain is also rapidly mixing for (almost) half-regular marginals.
For the uniform sampling of simple directed graphs with a given degree sequence, the most used switch algorithm is the \emph{edge-switch} version,\footnote{We will address this version as well.} see Greenhill \cite{Greenhill2011}, who gives a polynomial upper bound on the mixing time for the case of $d$-regular directed graphs, and Greenhill and Sfragara \cite{Greenhill2017} for some recent results on certain irregular degree sequences.
The latter paper \cite{Greenhill2017} only considers degree sequences for which the edge-switch Markov chain is irreducible for a given degree sequence. The Curveball chain has also been formulated for (un)directed graphs, see Carstens, Berger and Strona \cite{Carstens2016}. A theoretical analysis for the mixing time of the Curveball Markov chain was raised as an open problem there.
All the results regarding rapid mixing mentioned above rely on the multi-commodity flow method developed by Sinclair \cite{Sinclair1992}. In this work we omit multi-commodity flow techniques in order to compare the switch and Curveball Markov chains, but rather take a more elementary approach based on comparing eigenvalues of transition matrices.
One seeming advantage of the eigenvalue comparison is that it allows us to compare the switch and Curveball chains for arbitrary fixed row and column sums.
Our spectral gap comparisons are special cases of the classical comparison framework developed largely by Diaconis and Saloff-Coste, which is based on so-called Dirichlet form comparisons of Markov chains, see, e.g., \cite{Diaconis1993b,Diaconis1993a}, and also Quastel \cite{Quastel1992}. See also the expository paper by Dyer, Goldberg, Jerrum and Martin \cite{Dyer2006}. As the stationary distributions are the same for all our Markov chains, we use a more direct, but equivalent, framework based on positive semidefiniteness. We briefly elaborate on this in Appendix \ref{app:dirichlet} for the interested reader.
The transition matrix of the Curveball Markov chain is a special case of a heat-bath Markov chain, as introduced by Dyer, Greenhill and Ullrich \cite{Dyer2014}. Our work partially builds on \cite{Dyer2014} in the sense that we compare a Markov chain, with a similar decomposition property as in the definition of a heat-bath chain, to its heat-bath variant. We explain these ideas in the next section.
\section{General framework}\label{sec:general}
We consider an ergodic Markov chain $\mathcal{M} = (\Omega,P)$ with stationary distribution $\pi$ satisfying $\pi(x) > 0$ for all $x \in \Omega$, that is of the form\footnote{This description is almost the same as that of a heat-bath chain \cite{Dyer2014}, and is introduced to illustrate the conceptual idea.}
\begin{equation}\label{eq:general_chain}
P = \sum_{a \in \mathcal{L}} \rho(a) \sum_{R \in \mathcal{R}_a} P_{R}
\end{equation}
which is given by a
\begin{enumerate}[i)]
\item finite index set $\mathcal{L}$,
and probability distribution $\rho$ over $\mathcal{L}$,
\item partition $\mathcal{R}_a = \{R_{k,a}\}_k$ of $\Omega$ for every $a \in \mathcal{L}$.
\end{enumerate}
Moreover, the restriction of a matrix $P_R$ to the rows and columns of $R = R_{k,a}$ defines the transition matrix of an ergodic, time-reversible Markov chain on $R$ (and is zero elsewhere), with stationary distribution
$$
\tilde{\pi}_R(x) = \frac{\pi(x)}{\pi(R)}
$$
for $x \in R$. We use $1 = \lambda_0^R \geq \lambda_1^R \geq \dots \geq \lambda_{|R|-1}^R$ to denote its eigenvalues. Note that these are also eigenvalues of $P_R$ and that all other eigenvalues of $P_R$ are zero (as all rows and columns not corresponding to elements in $R$ contain only zeros). We use $\mathcal{R}$ to denote the multi-set $\cup_a \mathcal{R}_a$ indexed by pairs $(k,a)$.
Note that the chain $\mathcal{M}$ proceeds by drawing an index $a$ from $\mathcal{L}$ according to $\rho$, and then performing a transition in the Markov chain on the set $R \in \mathcal{R}_a$ containing the current state.
The \emph{heat-bath variant} $\mathcal{M}_{heat}$ of the chain $\mathcal{M}$ is given by the transition matrix
\begin{equation}\label{eq:heat_chain}
P_{heat} = \sum_{a \in \mathcal{L}} \rho(a) \sum_{R \in \mathcal{R}_a} \mathbf{1}\cdot \sigma_R
\end{equation}
where $\sigma_R$ is the row vector given by $\sigma_R(x) = \tilde{\pi}_R(x)$ if $x \in R$ and zero otherwise, and $\mathbf{1}$ is the all-ones column vector.
It can be shown that $\mathcal{M}_{heat}$ is an ergodic Markov chain as well. It is reversible by construction \cite{Dyer2014}.\footnote{The Curveball chain is the heat-bath variant of the KTV-switch chain as we will later prove.}
\begin{theorem}\label{thm:heat_comparison}
Let $\mathcal{M}$ be a Markov chain as in (\ref{eq:general_chain}), and $\mathcal{M}_{heat}$ its heat-bath variant as in (\ref{eq:heat_chain}). If $\alpha$ and $\beta$ are non-zero constants, with $\alpha\cdot \beta > 0$, such that
\begin{equation}
\label{eq:assumption}
\min_{R \in \mathcal{R}} \ \ \min_{i = 1,\dots,|R|-1} \{ \lambda_i^R, \alpha - \beta(1 - \lambda_i^R) \} \geq 0,
\end{equation}
then
\begin{equation}
\label{eq:alpha_beta}
\frac{1}{\alpha}\frac{1}{1 - \lambda_*^{heat}} \leq \frac{1}{\beta}\frac{1}{1 - \lambda_*},
\end{equation}
where $\lambda_*^{(heat)}$ is the second largest eigenvalue of $P_{(heat)}$. In particular, if $\lambda_{|R|-1}^R \geq 0$ for every $R \in \mathcal{R}$, then
$
(1 - \lambda_*^{heat})^{-1} \leq (1 - \lambda_*)^{-1}
$.
\end{theorem}
The intuition behind Theorem \ref{thm:heat_comparison} is that in order to compare the relaxation times of a Markov chain and its heat-bath variant, it suffices to compare them locally on the sets $R$. Note that $\alpha$ and $\beta$ can both be negative, so that this statement can be used to lower bound the relaxation time of the heat-bath variant in terms of the original relaxation time as well.
We will use the following propositions in the proof of Theorem \ref{thm:heat_comparison}. For $S \subseteq \Omega$, the matrix $I_S$ is defined by $I_S(x,x) = 1$ if $x \in S$ and zero otherwise. Also, a symmetric real-valued matrix $A$ is positive semidefinite if all its eigenvalues are non-negative, and this is denoted by $A \succeq 0$.
\begin{proposition}[\cite{Zhang1999}]\label{prop:eigenvalue_comparison}
Let $X,Y$ be symmetric $l \times l$ matrices. If $X - Y \succeq 0$, then $\lambda_i(X) \geq \lambda_i(Y)$ for $i = 1,\dots, l$, where $\lambda_i(C)$ is the $i$-th largest eigenvalue of $C = X,Y$.
\end{proposition}
\begin{proposition}
\label{prop:eigen_difference}
Let $X$ be the $k \times k$ transition matrix of an ergodic reversible Markov chain with stationary distribution $\pi$, and eigenvalues $1 = \lambda_0 > \lambda_1 \geq \dots \geq \lambda_{k-1}$. Let $X^* = \lim_{t \rightarrow \infty} X^t$ be the matrix containing the row vector $\pi$ on every row. Then the eigenvalues of $\alpha(I - X^*) - \beta(I - X)$ are
$$
\{0\} \cup \{ \alpha - \beta(1 - \lambda_i) \ \big| \ i = 1,\dots,k-1\}
$$
for given constants $\alpha$ and $\beta$.
\end{proposition}
\begin{proof}
As $X$ is the transition matrix of a reversible Markov chain, the matrix $V X V^{-1}$ is symmetric, where $V = \text{diag}(\pi_1^{1/2},\pi_2^{1/2},\dots,\pi_k^{1/2}) = \text{diag}(\sqrt{\pi})$.\footnote{This is the same argument used to show that a reversible Markov chain only has real eigenvalues.} Note that the eigenvalues of $\alpha(I - X^*) - \beta(I - X)$ are the same as those of
$$
V(\alpha(I - X^*) - \beta(I - X))V^{-1} = \alpha(I - \sqrt{\pi}^T\sqrt{\pi}) - \beta(I - VXV^{-1}).
$$
Moreover, with $\mathbf{1} = (1,1,\dots,1)^T$ the all-ones column vector, we have
$$
VXV^{-1}\sqrt{\pi}^T = V X \mathbf{1} = V \mathbf{1} = \sqrt{\pi}^T,
$$
so that $\sqrt{\pi}^T$ is an eigenvector of $VXV^{-1}$ with eigenvalue $1$. It then follows that $\sqrt{\pi}^T$ is an eigenvector of $\alpha(I - \sqrt{\pi}^T\sqrt{\pi}) - \beta(I - VXV^{-1})$ with eigenvalue $0$. Let $\sqrt{\pi}^T = w_0, w_1,\dots,w_{k-1}$ be a basis of orthogonal eigenvectors for $VXV^{-1}$ corresponding to the eigenvalues $1, \lambda_1,\dots,\lambda_{k-1}$ (note that $X$ and $VXV^{-1}$ have the same eigenvalues). It then follows that
$$
[\alpha(I - \sqrt{\pi}^T\sqrt{\pi}) - \beta(I - VXV^{-1})] w_i = [\alpha - \beta(1 - \lambda_i)]\, w_i
$$
for $i = 1,\dots,k-1$, because of orthogonality. This completes the proof.
\end{proof}
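Proposition \ref{prop:eigen_difference} is easy to verify numerically. The following sketch (our code, for the special case of a symmetric chain, where $\pi$ is uniform and $V$ is a multiple of the identity) compares the spectrum of $\alpha(I - X^*) - \beta(I - X)$ with the predicted values:

```python
import numpy as np

k = 5
# Lazy random walk on a 5-cycle: symmetric and doubly stochastic, hence an
# ergodic reversible chain with uniform stationary distribution pi.
C = np.roll(np.eye(k), 1, axis=1)              # cyclic-shift permutation
X = 0.5 * np.eye(k) + 0.25 * (C + C.T)
Xstar = np.full((k, k), 1.0 / k)               # X^*: every row equals pi

alpha, beta = 2.0, 0.7
M = alpha * (np.eye(k) - Xstar) - beta * (np.eye(k) - X)

lam = np.sort(np.linalg.eigvalsh(X))[::-1]     # 1 = lam[0] >= lam[1] >= ...
# Predicted spectrum: {0} together with alpha - beta*(1 - lam_i), i >= 1.
predicted = np.sort(np.r_[0.0, alpha - beta * (1.0 - lam[1:])])
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), predicted)
```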
\begin{proof}[Proof of Theorem \ref{thm:heat_comparison}]
Let $D$ be the $|\Omega| \times |\Omega|$ diagonal matrix with $(D)_{xx} = \sqrt{\pi(x)}$. As the matrices $\mathbf{1} \cdot \sigma_R$ and $P_R$ define reversible Markov chains on $R$, the matrix
$$
Y_R = D^{-1}[\alpha(I_R - \mathbf{1}\cdot \sigma_R) - \beta(I_R - P_{R})]D
$$
is symmetric. Moreover, from the assumption in (\ref{eq:assumption}), together with Proposition \ref{prop:eigen_difference} and the fact that similar\footnote{Two square matrices $A$ and $B$ are \emph{similar} if there exists an invertible matrix $T$ such that $A = T^{-1}BT$.} matrices have the same set of eigenvalues, it follows that $Y_R$ is positive semidefinite. Since any non-negative linear combination of positive semidefinite matrices is again positive semidefinite, the matrix
$$
D^{-1}[\alpha(I - P_{heat}) - \beta(I - P)]D = \sum_{a \in \mathcal{L}} \rho(a) \sum_{R \in \mathcal{R}_a} D^{-1}[\alpha(I_R - \mathbf{1}\cdot \sigma_R) - \beta(I_R - P_{R})]D
$$
is also positive semidefinite. Using Proposition \ref{prop:eigenvalue_comparison}, and again the fact that similar matrices $A,B$ have the same set of eigenvalues, it follows that
$$
\alpha(1 - \lambda_i^{heat}) \geq \beta (1 - \lambda_i)
$$
where $\lambda_i^{(heat)}$ is the $i$-th largest eigenvalue of $P_{(heat)}$. Note that $P$ has non-negative eigenvalues as $D^{-1} P D$ is a non-negative linear combination of positive semidefinite matrices. A similar argument holds for $P_{heat}$ and was shown in \cite{Dyer2014}. In particular, it follows that $\lambda_1^{(heat)}$ is the second-largest eigenvalue of $P_{(heat)}$. This proves (\ref{eq:alpha_beta}).
\end{proof}
\subsection{Markov chain definitions} \label{sec:background}
Let $\mathcal{M} = (\Omega,P)$ be an ergodic, time-reversible Markov chain over state space $\Omega$ with transition matrix $P$ and stationary distribution $\pi$. We write $P_x^t = P^t(x,\cdot)$ for the distribution over $\Omega$ at time step $t$ given that the initial state is $x \in \Omega$. It is well-known that the matrix $P$ only has real eigenvalues $1 = \lambda_0 > \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{N-1} > -1$, where $N = |\Omega|$. Moreover, we define $\lambda_{*} = \max\{\lambda_1,|\lambda_{N-1}|\}$, the second-largest eigenvalue of $P$ in absolute value. The \emph{variation distance} at time $t$ with initial state $x$ is
$$
\Delta_x(t) = \max_{S \subseteq \Omega} \big| P^t(x,S) - \pi(S)\big| = \frac{1}{2}\sum_{y \in \Omega} \big| P^t(x,y) - \pi(y)\big|
$$
and the mixing time $\tau(\epsilon)$ is defined as
$$
\tau(\epsilon) = \max_{x \in \Omega}\left\{ \min\{ t : \Delta_x(t') \leq \epsilon \text{ for all } t' \geq t\}\right\}.
$$
A Markov chain is said to be \emph{rapidly mixing} if the mixing time can be upper bounded by a function polynomial in $\ln(|\Omega|/\epsilon)$. It is well-known, following directly from, e.g., Proposition 1 in \cite{Sinclair1992}, that
\begin{equation}\label{eq:mixing_time}
\frac{1}{2}\frac{\lambda_*}{1 - \lambda_*} \ln(1/(2\epsilon)) \ \leq \ \tau(\epsilon) \
\leq \ \frac{1}{1 - \lambda_*}\cdot (\ln(1/\pi_*) + \ln(1/\epsilon))
\end{equation}
where $\pi_* = \min_{x \in \Omega} \pi(x)$. This roughly implies that the mixing time is determined by the \emph{spectral gap} $(1 - \lambda_*)$, or its inverse, the \emph{relaxation time} $(1 - \lambda_*)^{-1}$.
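As a small numeric illustration of these quantities (our toy example), consider the lazy walk on a 4-cycle:

```python
import numpy as np

k = 4
C = np.roll(np.eye(k), 1, axis=1)
P = 0.5 * np.eye(k) + 0.25 * (C + C.T)   # lazy walk on a 4-cycle
pi = np.full(k, 1.0 / k)                 # symmetric P => uniform pi

ev = np.sort(np.abs(np.linalg.eigvalsh(P)))
lam_star = ev[-2]                        # second largest in absolute value
relax = 1.0 / (1.0 - lam_star)           # relaxation time

def variation_distance(t, x=0):
    """Delta_x(t) = (1/2) * sum_y |P^t(x, y) - pi(y)|."""
    return 0.5 * np.abs(np.linalg.matrix_power(P, t)[x] - pi).sum()

# Convergence is governed by lam_star, consistent with the relaxation-time
# bounds on the mixing time above.
for t in (5, 10, 20):
    assert variation_distance(t) <= lam_star ** t + 1e-12
```

Here $\lambda_* = 1/2$, so the relaxation time is $2$ and the variation distance decays geometrically at rate $\lambda_*$.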
We also introduce some additional notation. We let $G_{\Omega} = (\Omega,A)$ be the state space graph, with an arc $(a,b) \in A$ if and only if $P(a,b) > 0$ for $a,b \in \Omega$ with $a \neq b$. If $P$ is symmetric, we define $H_{\Omega} = (\Omega,E)$ as the undirected counterpart of $G_{\Omega}$ with $\{a,b\} \in E$ if and only if $(a,b),(b,a) \in A$ with $a \neq b$. Moreover, the $\delta$-lazy version of $\mathcal{M}$ is the Markov chain defined by transition matrix $(1 - \delta)I + \delta P$ for $0 < \delta < 1$. Note that this chain is also ergodic, and time-reversible with stationary distribution $\pi$.
\begin{proposition}\label{prop:lazy}
Suppose $0 < \delta < 1$ is such that the transition matrix $(1- \delta)I + \delta P$ of the $\delta$-lazy version of $\mathcal{M}$ only has non-negative eigenvalues. Then
$$
\frac{1}{1 - \lambda_{*,\delta}} \leq \frac{1}{\delta} \frac{1}{1 - \lambda_{*}}
$$
where $\lambda_{*,\delta} = \lambda_{1,\delta} = (1 - \delta) + \delta \lambda_{1}$ is the second-largest eigenvalue of $(1- \delta)I + \delta P$.
\end{proposition}
\begin{proof}
If $\lambda_i$ is an eigenvalue of $P$ then $\lambda_{i,\delta} := (1 - \delta) + \delta \lambda_i$ is an eigenvalue of $(1 - \delta)I + \delta P$. Note that $\lambda_i \leq \lambda_j$ if and only if $\delta \lambda_i \leq \delta \lambda_j$, which is true if and only if
$$
\lambda_{i,\delta} = (1 - \delta) + \delta \lambda_i \leq (1 - \delta) + \delta \lambda_j = \lambda_{j,\delta}.
$$
This in particular shows that $\lambda_{1,\delta} = (1 - \delta) + \delta \lambda_{1}$ is indeed the second-largest eigenvalue of $(1 - \delta)I + \delta P$. Moreover, $\lambda_{i,\delta} = (1 - \delta) + \delta \lambda_i$ is equivalent to
$$
\frac{1}{1 - \lambda_{i,\delta}} = \frac{1}{\delta} \frac{1}{1 - \lambda_{i}}
$$
for $i > 0$. As the eigenvalues of $(1- \delta)I + \delta P$ are all non-negative, we have
$$
\frac{1}{1 - \lambda_{*,\delta}} = \frac{1}{1 - \lambda_{1,\delta}} = \frac{1}{\delta} \frac{1}{1 - \lambda_{1}} \leq \frac{1}{\delta} \frac{1}{1 - \lambda_{*}}
$$
and this completes the proof. Note that the final inequality is true independent of the sign of $\lambda_1$.
\end{proof}
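A small numeric example of Proposition \ref{prop:lazy} (our illustration): the walk on a 3-cycle has negative eigenvalues, which the lazy version with $\delta = 1/2$ removes, and the relaxation-time identity from the proof holds exactly.

```python
import numpy as np

k = 3
C = np.roll(np.eye(k), 1, axis=1)
P = 0.5 * (C + C.T)          # walk on a 3-cycle: spectrum {1, -1/2, -1/2}
delta = 0.5
P_lazy = (1.0 - delta) * np.eye(k) + delta * P

def spec(M):
    return np.sort(np.linalg.eigvalsh(M))

# The lazy spectrum is the affine image of the original spectrum.
assert np.allclose(spec(P_lazy), (1.0 - delta) + delta * spec(P))
assert spec(P_lazy)[0] >= -1e-12   # all eigenvalues non-negative
# 1/(1 - lam_{*,delta}) = (1/delta) * 1/(1 - lam_1), as in the proof.
assert np.isclose(1.0 / (1.0 - spec(P_lazy)[-2]),
                  (1.0 / delta) / (1.0 - spec(P)[-2]))
```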
\subsection{Johnson graphs.}\label{sec:johnson}
One class of graphs of particular interest in this work is the class of so-called Johnson graphs.
For given integers $1 \leq q \leq p$, the undirected Johnson graph $J(p,q)$ contains as nodes all subsets of size $q$ of $\{1,\dots,p\}$, and two subsets $u,v \subseteq \{1,\dots,p\}$ are adjacent if and only if $|u \cap v| = q - 1$. We refer the reader to \cite{Holton1993,Brouwer2011} for the following facts. The Johnson graph $J(p,q)$ is a $q(p-q)$-regular graph and the eigenvalues of its adjacency matrix are given by
$$
(q - i)(p - q -i) - i \ \ \ \ \text{ with multiplicity }\ \ \ \ \binom{p}{i} - \binom{p}{i-1}
$$
for $i = 0,\dots,q$, with the convention that $\binom{p}{-1} = 0$. The following observation is included for ease of reference. It will often be used to lower bound the smallest eigenvalue of a Johnson graph.
\begin{proposition}\label{prop:johnson}
Let $p,q \in \mathbb{N}$ be given. The continuous function $f : \mathbb{R} \rightarrow \mathbb{R}$ defined by
$$
f(x) = [(q - x)(p - q - x) - x] - q(p - q) = x(x - (p+1))
$$
is minimized for $x^* = (p+1)/2$ and $f(x^*) = -(p+1)^2/4$.
\end{proposition}
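These spectral facts are easy to verify numerically; the following sketch (our code) compares the eigenvalues of the adjacency matrix of $J(p,q)$ with the closed form above:

```python
from itertools import combinations
from math import comb
import numpy as np

def johnson_eigs(p, q):
    """Spectrum of J(p, q): from the adjacency matrix and from the formula."""
    nodes = list(combinations(range(p), q))
    A = np.array([[int(len(set(u) & set(v)) == q - 1) for v in nodes]
                  for u in nodes])
    from_matrix = np.sort(np.linalg.eigvalsh(A))
    # (q - i)(p - q - i) - i with multiplicity C(p, i) - C(p, i - 1),
    # using the convention C(p, -1) = 0.
    from_formula = sorted(
        val
        for i in range(q + 1)
        for val in [(q - i) * (p - q - i) - i]
                   * (comb(p, i) - (comb(p, i - 1) if i > 0 else 0)))
    return from_matrix, np.array(from_formula, dtype=float)

a, b = johnson_eigs(6, 2)
assert np.allclose(a, b)    # J(6,2) is 8-regular with spectrum {8, 2, -2}
```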
\section{Binary matrices and the switch chain}
We are given $n,m \in \mathbb{N}$, fixed row sums $r = (r_1,\dots, r_m)$, column sums $c = (c_1,\dots,c_n)$, and a set of forbidden entries $\mathcal{F} \subseteq \{1,\dots,m\} \times \{1,\dots,n\}$. The state space $\Omega = \Omega(r,c,\mathcal{F})$ is the set of all binary $m \times n$-matrices $A$ satisfying these row and column sums, and for which $A(a,b) = 0$ if $(a,b) \in \mathcal{F}$. For $A \in \Omega$, we let $A_{ij}$ be the $2 \times n$-submatrix formed by rows $i$ and $j$, for $1 \leq i < j \leq m$. We define
\begin{equation}\label{eq:forbidden_T}
U_{ij}(A) = \{k \in \{1,\dots,n\} : A(i,k) = 1,\ A(j,k) = 0 \text{ and } (j,k) \notin \mathcal{F}\},
\end{equation}
with $u_{ij}(A) = |U_{ij}(A)|$, and similarly
\begin{equation}\label{eq:forbidden_B}
L_{ij}(A) = \{k \in \{1,\dots,n\} : A(i,k) = 0,\ A(j,k) = 1 \text{ and } (i,k) \notin \mathcal{F}\},
\end{equation}
with $l_{ij}(A) = |L_{ij}(A)|$. Note that $L_{ij} \cup U_{ij}$ is precisely the set of columns $k$ for which $A_{ij}$ has different values on its two rows and for which neither $(i,k)$ nor $(j,k)$ is forbidden.
Matrices $A, B \in \Omega$ are \emph{switch-adjacent for rows $i$ and $j$} if $A = B$ or if $A - B$ contains exactly four non-zero elements that occur on rows $i$ and $j$, and the columns $k$ and $l$ containing these non-zero elements do not have forbidden entries in $A_{ij}$.
Two matrices are switch-adjacent if they are switch-adjacent for some rows $i$ and $j$.
\emph{$\gamma$-Switch chain.} We next introduce the notion of a $\gamma$-switch Markov chain which is done for notational convenience as there are multiple switch-based chains available in the literature. For feasible $\gamma > 0$, the transition matrix of such a chain on state space $\Omega = \Omega(r,c,\mathcal{F})$ is given by
$$
P_{\gamma}(A,B) = \left\{ \begin{array}{ll} \binom{m}{2}^{-1}\cdot \gamma & \ \ \ \ \ \ \text{if } A \neq B \text{ are switch-adjacent}, \\
\binom{m}{2}^{-1} \sum_{1 \leq i < j \leq m} (1 - u_{ij}l_{ij} \cdot \gamma) & \ \ \ \ \ \ \text{if } A = B, \\
0 &\ \ \ \ \ \ \text{otherwise,}
\end{array}\right.
$$
provided $\gamma$ satisfies the following assumption.
\begin{assumption}\label{assump:gamma}
For given $n,m,r,c$ and $\mathcal{F}$, we assume that $\gamma$ is such that
$$
1 - u_{ij}(A)l_{ij}(A)\cdot \gamma > 0
$$
for all $A \in \Omega$ and $1 \leq i < j \leq m$.
\end{assumption}
Note that the transition probability for switch-adjacent matrices is the same everywhere in the state space, and does not depend on the matrices. In particular, the transition matrix $P_{\gamma}$ is symmetric and hence the chain is reversible with respect to the uniform distribution. The factor $2/(m(m-1))$ is included for notational convenience. The chain can roughly be interpreted as follows. We first choose two distinct rows $i$ and $j$ uniformly at random, and then transition to a different matrix switch-adjacent for rows $i$ and $j$, of which there are $u_{ij}l_{ij}$ possibilities, where every matrix has probability $\gamma$ of being chosen; and with probability $1 - u_{ij}l_{ij}\gamma$ we do nothing. Taking $\gamma = 2/(n(n-1))$ we get back the KTV-switch chain \cite{Kannan1999}. We will later show that (a lazy version of) the edge-switch chain in \cite{Greenhill2011,Greenhill2017} also falls within this definition. \medskip
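One transition of the $\gamma$-switch chain can be sketched as follows (our illustration; the argument `forbidden` holds the entries of $\mathcal{F}$):

```python
import random

def gamma_switch_step(A, gamma, forbidden=frozenset(), rng=random):
    """One step of the gamma-switch chain on the 0/1 matrix A (list of lists)."""
    m, n = len(A), len(A[0])
    i, j = sorted(rng.sample(range(m), 2))    # uniform row pair i < j
    U = [k for k in range(n)
         if A[i][k] == 1 and A[j][k] == 0 and (j, k) not in forbidden]
    L = [k for k in range(n)
         if A[i][k] == 0 and A[j][k] == 1 and (i, k) not in forbidden]
    # Each of the u*l switch-adjacent matrices is reached with probability
    # gamma; with the remaining probability 1 - u*l*gamma we hold.
    if rng.random() < len(U) * len(L) * gamma:
        k, l = rng.choice(U), rng.choice(L)
        A[i][k], A[j][k] = 0, 1               # swap the 1s on columns k and l
        A[i][l], A[j][l] = 1, 0
    return A
```

With $\gamma = 2/(n(n-1))$ this is the KTV-switch chain; row and column sums (and forbidden zeros) are invariant under every step.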
\begin{remark}
We always assume that the set $\Omega(r,c,\mathcal{F})$ is non-empty, and that the $\gamma$-switch chain is irreducible (it is clearly always aperiodic and finite).
Irreducibility is in particular guaranteed in the case where there are no forbidden entries \cite{Rao1996}; or in the case $n = m \geq 4$, where $\mathcal{F}$ is the set of diagonal entries and the marginals are regular, i.e., $c_i = r_i = d$ for some given $d \geq 1$ \cite{Greenhill2011}. Note that the condition of irreducibility is independent of the value of $\gamma$.
\end{remark}\medskip
We next explain that the $\gamma$-switch chain is of the form (\ref{eq:general_chain}). The index set
$$
\mathcal{L} = \{(i,j) : 1 \leq i < j \leq m\}
$$
is the set of all pairs of distinct rows, and $\rho$ is the uniform distribution over $\mathcal{L}$, that is, $\rho(a) = \binom{m}{2}^{-1}$ for all $a \in \mathcal{L}$. The partitions $\mathcal{R}_a$ for $a \in \mathcal{L}$ rely on the notion of a binomial neighborhood, which is also defined in \cite{Verhelst2008} to describe the Curveball Markov chain (the decomposition idea given here is novel).
\begin{definition}[Binomial neighborhood]
For a fixed binary matrix $A$ and row-pair $(i,j)$, the $(i,j)$-binomial neighborhood $\mathcal{N}_{ij}(A)$ of $A$ is the set of matrices that can be reached by only applying switches on rows $i$ and $j$. More formally, $\mathcal{N}_{ij}(A)$ contains all binary matrices $B \in \Omega$ for which $A(k,l) = B(k,l)$ whenever $(k,l) \notin \{i,j\} \times (U_{ij}(A) \cup L_{ij}(A))$, which in particular implies that $U_{ij}(A) \cup L_{ij}(A) = U_{ij}(B) \cup L_{ij}(B)$.\footnote{Said differently, $\mathcal{N}_{ij}(A)$ contains all matrices that can be reached by one trade on rows $i$ and $j$ in the Curveball algorithm, as described in the introduction.}
\end{definition}
It should be clear that two matrices $A, B \in \Omega$ can be part of \emph{at most} one common binomial neighborhood, see also \cite{Verhelst2008}. This follows directly from the observation that if $B \in \mathcal{N}_{ij}(A) \setminus \{A\}$, then $A$ and $B$ differ on precisely rows $i$ and $j$, so switches using any other pair of rows $\{k,l\} \neq \{i,j\}$ can never transform $A$ into $B$. Moreover, we have $A \in \mathcal{N}_{ij}(A)$; if $B \in \mathcal{N}_{ij}(A)$, then $A \in \mathcal{N}_{ij}(B)$ \cite{Verhelst2008}; and, if $A \in \mathcal{N}_{ij}(B)$ and $B \in \mathcal{N}_{ij}(C)$, then $A \in \mathcal{N}_{ij}(C)$. That is, the relation $\sim_{ij}$ defined by $a \sim_{ij} b$ if and only if $a \in \mathcal{N}_{ij}(b)$ is an equivalence relation on $\Omega$. The equivalence classes of $\sim_{ij}$ define the set $\mathcal{R}_{(i,j)}$.
Finally, note that $u_{ij}(A) = u_{ij}(B)$ and $l_{ij}(A) = l_{ij}(B)$ if $A$ and $B$ are part of the same binomial neighborhood $\mathcal{N}$. Therefore, these numbers are only neighborhood-dependent, and not element-dependent within a fixed neighborhood. Observe that
$$
|\mathcal{N}| = \binom{u_{ij} + l_{ij}}{u_{ij}}.
$$
Moreover, another important observation is that the undirected state space graph (see Section \ref{sec:background}) $H$ of the $\gamma$-switch chain (which is the same for all $\gamma$), induced on a binomial neighborhood, is isomorphic to the Johnson graph $J(u+l,u)$ whenever $u,l \geq 1$ (see Section \ref{sec:johnson} for notation and definitions). If either $u = 0$ or $l = 0$, the neighborhood consists of a single binary matrix. To see this, note that every element in the $(i,j)$-binomial neighborhood $\mathcal{N}_{ij}(A)$ can be represented by the set of indices of the columns $k$ for which $A(i,k) = 1, A(j,k) = 0$ and $(j,k) \notin \mathcal{F}$, which we denote by $Z(A_{ij})$. The set $\{1,\dots,l_{ij} + u_{ij}\}$ is then the set of indices of \emph{all} columns with precisely one $1$ and one $0$ on rows $i,j$ that do not contain forbidden entries. Indeed, matrices $A \neq B$ are switch-adjacent for rows $i$ and $j$ if and only if $|Z(A_{ij}) \cap Z(B_{ij})| = u_{ij} - 1$.
Informally, the Markov chain resulting from always deterministically choosing rows $i$ and $j$ in the switch algorithm, is the disjoint union of smaller Markov chains each with a state space graph isomorphic to some Johnson graph.
\begin{example}\label{exmp:binom_neighborhood}
Consider the binary matrix
$$
A = \left(\begin{matrix}
0 & 1 & 1 & 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1
\end{matrix} \right)
$$
and the $2 \times 7$-submatrix formed by rows $1$ and $2$, which is
$$
A_{12} = \left(\begin{matrix}
0 & 1 & 1 & 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 1 & 1 & 0 & 1
\end{matrix} \right).
$$
For the sake of simplicity, we (uniquely) describe every element of the $(1,2)$-binomial neighborhood $\mathcal{N}_{12}(A)$ by its first four columns (precisely those with column sums equal to one in the submatrix). The induced subgraph of the undirected state space graph $H$ of the switch chain on the $(1,2)$-binomial neighborhood of $A$, the Johnson graph $J(4,2)$, is given in Figure \ref{fig:johnson}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=3.75]
\coordinate (A1) at (-0.25,0);
\coordinate (A2) at (0.25,0);
\coordinate (M1) at (-0.5,0.375);
\coordinate (M2) at (0.5,0.375);
\coordinate (T1) at (-0.25,0.75);
\coordinate (T2) at (0.25,0.75);
\node at (A1) [circle,scale=0.7,fill=black] {};
\node (a1) [below=0.1cm of A1] {$\left(\begin{matrix}
1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0
\end{matrix} \right)$};
\node at (A2) [circle,scale=0.7,fill=black] {};
\node (a2) [below=0.1cm of A2] {$\left(\begin{matrix}
1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1
\end{matrix} \right)$};
\node at (M1) [circle,scale=0.7,fill=black] {};
\node (m1) [left=0.1cm of M1] {$\left(\begin{matrix}
0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1
\end{matrix} \right)$};
\node at (M2) [circle,scale=0.7,fill=black] {};
\node (m2) [right=0.1cm of M2] {$\left(\begin{matrix}
1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1
\end{matrix} \right)$};
\node at (T1) [circle,scale=0.7,fill=black] {};
\node (t1) [above=0.1cm of T1] {$\left(\begin{matrix}
0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0
\end{matrix} \right)$};
\node at (T2) [circle,scale=0.7,fill=black] {};
\node (t2) [above=0.1cm of T2] {$\left(\begin{matrix}
0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0
\end{matrix} \right)$};
\path[every node/.style={sloped,anchor=south,auto=false}]
(T1) edge[-,very thick] node {} (T2)
(T1) edge[-,very thick] node {} (M2)
(T1) edge[-,very thick] node {} (A1)
(T1) edge[-,very thick] node {} (M1)
(T2) edge[-,very thick] node {} (M1)
(T2) edge[-,very thick] node {} (A1)
(T2) edge[-,very thick] node {} (A2)
(M1) edge[-,very thick] node {} (A2)
(M1) edge[-,very thick] node {} (M2)
(A1) edge[-,very thick] node {} (M2)
(A1) edge[-,very thick] node {} (A2)
(A2) edge[-,very thick] node {} (M2);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale=3]
\coordinate (A1) at (-0.25,0);
\coordinate (A2) at (0.25,0);
\coordinate (M1) at (-0.5,0.375);
\coordinate (M2) at (0.5,0.375);
\coordinate (T1) at (-0.25,0.75);
\coordinate (T2) at (0.25,0.75);
\node at (A1) [circle,scale=0.7,fill=black] {};
\node (a1) [below=0.1cm of A1] {$\{1,4\}$};
\node at (A2) [circle,scale=0.7,fill=black] {};
\node (a2) [below=0.1cm of A2] {$\{1,3\}$};
\node at (M1) [circle,scale=0.7,fill=black] {};
\node (m1) [left=0.1cm of M1] {$\{2,3\}$};
\node at (M2) [circle,scale=0.7,fill=black] {};
\node (m2) [right=0.1cm of M2] {$\{1,2\}$};
\node at (T1) [circle,scale=0.7,fill=black] {};
\node (t1) [above=0.1cm of T1] {$\{2,4\}$};
\node at (T2) [circle,scale=0.7,fill=black] {};
\node (t2) [above=0.1cm of T2] {$\{3,4\}$};
\path[every node/.style={sloped,anchor=south,auto=false}]
(T1) edge[-,very thick] node {} (T2)
(T1) edge[-,very thick] node {} (M2)
(T1) edge[-,very thick] node {} (A1)
(T1) edge[-,very thick] node {} (M1)
(T2) edge[-,very thick] node {} (M1)
(T2) edge[-,very thick] node {} (A1)
(T2) edge[-,very thick] node {} (A2)
(M1) edge[-,very thick] node {} (A2)
(M1) edge[-,very thick] node {} (M2)
(A1) edge[-,very thick] node {} (M2)
(A1) edge[-,very thick] node {} (A2)
(A2) edge[-,very thick] node {} (M2);
\end{tikzpicture}
\caption{The induced subgraph $H$ for the switch chain on the $(1,2)$-binomial neighborhood of $A$. On the left we have indexed the nodes by the submatrices of the first four columns, and on the right by label sets, indicating the positions of the $1$'s on the top row (i.e., row $1$).}
\label{fig:johnson}
\end{figure}
\end{example}
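Example \ref{exmp:binom_neighborhood} is easy to verify by brute force; the following sketch (our code, with 0-indexed columns) enumerates the $(1,2)$-binomial neighborhood and checks that its induced graph is the $4$-regular Johnson graph $J(4,2)$:

```python
from itertools import combinations

# The submatrix A_12 from the example; its trade columns are those with
# exactly one 1, i.e. (0-indexed) columns 0..3, so u = l = 2.
A12 = [[0, 1, 1, 0, 1, 0, 1],
       [1, 0, 0, 1, 1, 0, 1]]
trade = [k for k in range(7) if A12[0][k] != A12[1][k]]
u = sum(A12[0][k] for k in trade)

# A neighbor is identified by the set of trade columns carrying the 1 on the
# top row; switch-adjacency is intersection of size u - 1, as in J(u+l, u).
nodes = list(combinations(trade, u))
deg = {S: sum(len(set(S) & set(T)) == u - 1 for T in nodes) for S in nodes}

assert len(nodes) == 6                      # binom(4, 2) matrices
assert all(d == 4 for d in deg.values())    # J(4, 2) is 2*(4-2) = 4-regular
```

The six label sets produced here are exactly those shown on the right of Figure \ref{fig:johnson}, up to the choice of 0- versus 1-indexing.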
\begin{remark}
A fixed binomial neighborhood is reminiscent of the Bernoulli-Laplace diffusion model, see, e.g., \cite{Diaconis1987,Donnelly1994} for an analysis of this model. Here, there are two bins with $k$ and $n-k$ balls, respectively, and in every transition two randomly chosen balls, one from each bin, are interchanged between the bins. Indeed, the state space graph is then a Johnson graph \cite{Donnelly1994}. The transition probabilities are different, due to the non-zero holding probabilities in the switch algorithm, but the eigenvalues of this Markov chain are related to the eigenvalues of the switch Markov chain on a fixed binomial neighborhood, see also \cite{Diaconis1987,Donnelly1994}.
\end{remark}
\medskip
For a binomial neighborhood $\mathcal{N} = \mathcal{N}_{ij}(A)$ for given $i < j$ and $A \in \Omega$, the undirected graph $H_{\mathcal{N}} = (\Omega, E_{\mathcal{N}})$ is the graph where $E_{\mathcal{N}}$ forms the edge set of the Johnson graph $J(u_{ij} + l_{ij},u_{ij})$ on $\mathcal{N} \subseteq \Omega$, and where all binary matrices $B \in \Omega \setminus \mathcal{N}$ are isolated nodes. We use $M(H_{\mathcal{N}})$ to denote its adjacency matrix. The discussion above leads to the following result, summarizing that the $\gamma$-switch chain is of the form (\ref{eq:general_chain}), and that its heat-bath variant is precisely the Curveball Markov chain as in \cite{Verhelst2008}, defined by the transition matrix
$$
P_c(A,B) = \left\{ \begin{array}{ll} \binom{m}{2}^{-1}\cdot \binom{u_{ij} + l_{ij}}{u_{ij}}^{-1} & \ \ \ \ \ \ \text{if } B \in \mathcal{N}_{ij}(A) \setminus \{A\}, \\
\binom{m}{2}^{-1}\sum_{1 \leq i < j \leq m} \binom{u_{ij} + l_{ij}}{u_{ij}}^{-1} & \ \ \ \ \ \ \text{if } A = B, \\
0 &\ \ \ \ \ \ \text{otherwise.}
\end{array}\right.
$$
Roughly speaking, the Curveball chain is precisely the chain sampling uniformly within a fixed binomial neighborhood. For $S \subseteq \Omega$, the identity matrix $I_{S}$ on $S$ is defined by $I_{S}(x,x) = 1$ if $x \in S$ and zero elsewhere, and the all-ones matrix $J_{S}$ on $S$ is defined by $J_S(x,y) = 1$ if $x,y \in S$ and zero elsewhere.
\begin{theorem}\label{thm:switch_decomposition} The transition matrix $P_\gamma$ of the $\gamma$-switch chain is of the form (\ref{eq:general_chain}), namely
\begin{equation}\label{eq:switch_decomposition}
P_\gamma = \sum_{1 \leq i < j \leq m} \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} \left[(1 - u_{ij}l_{ij}\cdot\gamma)\cdot I_{\mathcal{N}} + \gamma \cdot M(H_\mathcal{N})\right].
\end{equation}
The heat-bath variant of the $\gamma$-switch chain is given by the Curveball chain, and can be written as
\begin{equation}\label{eq:curveball_decomposition}
P_c = \sum_{1 \leq i < j \leq m} \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} \binom{u_{ij} + l_{ij}}{u_{ij}}^{-1} J_{\mathcal{N}}.
\end{equation}
\end{theorem}
\begin{proof}
The decomposition in (\ref{eq:switch_decomposition}) follows from the discussion above, and Assumption \ref{assump:gamma} guarantees that the matrix
$$
(1 - u_{ij}l_{ij}\cdot\gamma)\cdot I_{\mathcal{N}} + \gamma \cdot M(H_\mathcal{N})
$$
indeed defines the transition matrix of a Markov chain for every $\mathcal{N}$. Moreover, remember that the $\gamma$-switch chain has uniform stationary distribution $\pi$ over $\Omega$. Indeed, for a binomial neighborhood $\mathcal{N} = \mathcal{N}_{ij}(A)$ for given $i < j$ and $A \in \Omega$, the vector $\sigma_{\mathcal{N}}$ as used in (\ref{eq:heat_chain}) is then given by
$$
\sigma_{\mathcal{N}}(x) = \frac{\pi(x)}{\pi(\mathcal{N})} = \frac{1}{|\Omega|} \cdot \frac{|\Omega|}{|\mathcal{N}|} = \frac{1}{|\mathcal{N}|} = \binom{u_{ij} + l_{ij}}{u_{ij}}^{-1}
$$
if $x \in \mathcal{N}$, and zero otherwise. This implies that
$$
\mathbf{1}\cdot \sigma_{\mathcal{N}} = \binom{u_{ij} + l_{ij}}{u_{ij}}^{-1} J_{\mathcal{N}},
$$
as desired.
\end{proof}
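The Curveball transition matrix can be checked by brute force on a toy state space (our sanity check, not part of the formal development): for $3 \times 3$ matrices with unit margins, $\Omega$ consists of the six permutation matrices, and the matrix $P_c$ built from binomial neighborhoods is doubly stochastic with non-negative spectrum.

```python
from itertools import combinations, product
import numpy as np

m = 3
# State space: all 3x3 0/1 matrices with unit row and column sums
# (the six permutation matrices); no forbidden entries.
Omega = []
for bits in product([0, 1], repeat=m * m):
    B = np.array(bits).reshape(m, m)
    if (B.sum(0) == 1).all() and (B.sum(1) == 1).all():
        Omega.append(B)
N = len(Omega)

def neighborhood(A, i, j):
    """Indices of matrices agreeing with A outside the trade columns of rows i, j."""
    rest = [k for k in range(m) if k not in (i, j)]
    return [idx for idx, B in enumerate(Omega)
            if np.array_equal(B[rest], A[rest])
            and np.array_equal(B[[i, j]].sum(0), A[[i, j]].sum(0))]

P_c = np.zeros((N, N))
for x, A in enumerate(Omega):
    for (i, j) in combinations(range(m), 2):
        nb = neighborhood(A, i, j)
        for y in nb:                          # uniform within the neighborhood
            P_c[x, y] += (1.0 / 3.0) * (1.0 / len(nb))   # binom(3,2)^{-1} = 1/3

assert N == 6
assert np.allclose(P_c.sum(1), 1.0)             # stochastic
assert np.allclose(P_c, P_c.T)                  # uniform stationary distribution
assert np.linalg.eigvalsh(P_c).min() >= -1e-12  # non-negative spectrum
```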
This completes our description of the $\gamma$-switch chain as a Markov chain of the form (\ref{eq:general_chain}) with heat-bath variant the Curveball chain. We next study two explicit $\gamma$-switch chains.
\subsection{KTV-switch chain}
The switch chain of the Kannan-Tetali-Vempala conjecture, as described in the introduction, can be obtained by setting $\gamma = 2/(n(n-1))$. As the product $u_{ij}(A)l_{ij}(A)$ can be at most $n^2/4$ for any $A \in \Omega$ and $1 \leq i < j \leq m$, we see that $\gamma$ satisfies Assumption \ref{assump:gamma}.
\begin{theorem}\label{thm:curveball_KTV}
Let $P_c$ and $P_{KTV}$ be the transition matrices of the Curveball and KTV-switch Markov chains, respectively, with $n \geq 3$.
Then
$$
\frac{2}{n(n-1)}\cdot (1 - \lambda_*^{KTV})^{-1} \ \leq \ (1 - \lambda_*^c)^{-1} \ \leq \ \min\left\{1, \frac{(2r_{\max} + 1)^2}{2n(n-1)}\right\} \cdot (1 - \lambda_*^{KTV})^{-1},
$$
where $\lambda_*^{(KTV,c)} = \lambda_1^{(KTV,c)} $ is the second largest eigenvalue of $P_{(KTV,c)}$. In particular, $P_{KTV}$ only has non-negative eigenvalues.
\end{theorem}
\begin{proof}
Let $\mathcal{N} = \mathcal{N}_{ij}(A)$ for given $i < j$ and $A \in \Omega$. We apply Theorem \ref{thm:heat_comparison} for various pairs $(\alpha,\beta)$.
\emph{Case 1: $\alpha = \beta = 1$.} From (\ref{eq:assumption}) it follows that it suffices to show that for any binomial neighborhood $\mathcal{N}$ the submatrix of
$$
Y_{\mathcal{N}} = \left[1 - u_{ij}l_{ij}\cdot\binom{n}{2}^{-1}\right] I_{\mathcal{N}} + \binom{n}{2}^{-1}M(H_{\mathcal{N}})
$$
formed by the rows and columns indexed by $\mathcal{N}$ has only non-negative eigenvalues.
For any eigenvalue $\lambda$ of this submatrix, we have
$$
\lambda = 1 + (\mu - u_{ij}l_{ij})\binom{n}{2}^{-1}
$$
where $\mu = \mu(\lambda)$ is an eigenvalue of the Johnson graph $J(u_{ij} +l_{ij}, u_{ij})$ on $\mathcal{N}$. In particular, using Proposition \ref{prop:johnson} with $p = u_{ij} + l_{ij}$ and $q = u_{ij}$, we get
$
(\mu - u_{ij}l_{ij}) \geq -\frac{1}{4}(u_{ij} + l_{ij} + 1)^2 \geq -\frac{1}{4}(n + 1)^2
$
since $0 \leq u_{ij} + l_{ij} \leq n$. Therefore, when $n \geq 5$, we have
$$
\lambda \geq 1 - \frac{1}{2}\frac{(n+1)^2}{n(n-1)} \geq 0.
$$
The cases $n = 3,4$ can be checked with some elementary arguments. This is left to the reader. Note that, in particular, this implies that $P_{KTV}$ only has non-negative eigenvalues when $n \geq 3$.
\emph{Case 2: $\alpha = 1$ and $\beta = (2n(n-1))/((2r_{\max} + 1)^2)$.} Using similar notation as in the previous case, we show that
$$
\lambda = 1 - \beta \left(1 - \left(1 + \left(\mu - u_{ij} \cdot l_{ij}\right)\binom{n}{2}^{-1}\right)\right) = 1 + \beta(\mu - u_{ij} \cdot l_{ij})\binom{n}{2}^{-1} \geq 0
$$
for any $\mu = \mu(\lambda)$ that is an eigenvalue of the Johnson graph $J(u_{ij} + l_{ij}, u_{ij})$. Again, using Proposition \ref{prop:johnson} in order to lower bound the quantity $(\mu -u_{ij} \cdot l_{ij})$, we find
$$
1 + \beta \cdot \left(\mu - u_{ij} \cdot l_{ij}\right)\binom{n}{2}^{-1} \geq 1 - \frac{\beta}{4}(u_{ij} + l_{ij} +1)^2\binom{n}{2}^{-1} \geq 1 - \frac{\beta}{4}(2r_{\max}+1)^2\binom{n}{2}^{-1} \geq 0,
$$
using the fact that $0 \leq u_{ij} + l_{ij} \leq 2r_{\max}$ and the choice of $\beta$.
\emph{Case 3: $\alpha = -1$ and $\beta = -2/(n(n-1))$.} We have to show that
$$
\lambda = \binom{n}{2} \left(1 - \left(1 + \left(\mu - u_{ij} \cdot l_{ij}\right)\binom{n}{2}^{-1} \right)\right) - 1 = u_{ij} \cdot l_{ij} - \mu - 1 \geq 0
$$
for all
$$
\mu = \mu(k) = (u_{ij} - k)(l_{ij} - k) - k
$$
where $k = 1,\dots,u_{ij}$. Note that the eigenvalue $u_{ij} \cdot l_{ij}$ for the case $k = 0$ yields the largest eigenvalue $1 = \lambda_0^{\mathcal{N}}$ of $Y_{\mathcal{N}}$, and does not have to be considered here. The maximum over $k = 1,\dots,u_{ij}$ is then attained for $k = 1$, and we have
$
u_{ij} \cdot l_{ij}- \mu - 1 \geq u_{ij} \cdot l_{ij} - ((u_{ij} - 1)(l_{ij} - 1) - 1) - 1 = u_{ij} + l_{ij} - 1 \geq 0,
$ since $u_{ij}, l_{ij} \geq 1$.
\end{proof}
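All three cases above rely on the spectrum of the Johnson graph via Proposition \ref{prop:johnson} (stated earlier in the paper, outside this excerpt). The closed form $\mu(k) = (u-k)(l-k) - k$ for the eigenvalues of $J(u+l, u)$ can be verified numerically on a small instance; this is a sanity-check sketch, with illustrative function names.

```python
from itertools import combinations
import numpy as np

def johnson_eigs(p, q):
    """Eigenvalues of the Johnson graph J(p, q): vertices are the q-subsets
    of {0,...,p-1}, adjacent when the subsets share exactly q-1 elements."""
    verts = list(combinations(range(p), q))
    M = np.array([[1 if len(set(a) & set(b)) == q - 1 else 0
                   for b in verts] for a in verts])
    return sorted(np.linalg.eigvalsh(M))

# Compare with the closed form mu(k) = (u - k)(l - k) - k for J(u + l, u):
u, l = 2, 2
formula = {(u - k) * (l - k) - k for k in range(u + 1)}       # {4, 0, -2}
computed = {int(round(float(x))) for x in johnson_eigs(u + l, u)}
assert computed == formula
```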
\subsection{Edge-switch chain}
In every step of the edge-switch algorithm, two matrix entries $(i,a)$ and $(j,b)$ from the set $\{(c,d) : A(c,d) = 1\}$ are chosen uniformly at random. We refer to it as the edge-switch algorithm because, under the interpretation of uniformly sampling directed graphs (where every node can have at most one self-loop), this corresponds to choosing two distinct edges uniformly at random. If the $2 \times 2$ submatrix corresponding to rows $i,j$ and columns $a,b$ forms a checkerboard, and if $(i,b)$ and $(j,a)$ are not forbidden entries, the checkerboard is adjusted (similarly to the KTV-switch algorithm described in the introduction). Note that
$$
P_{edge}(A,B) = \binom{\rho}{2}^{-1}
$$
if $A$ and $B$ are switch-adjacent, where $\rho = \sum_{i} r_i$ is the total number of ones in every binary matrix in $\Omega$. Note that
$$
\gamma = \binom{m}{2} \binom{\rho}{2}^{-1}
$$
in this case.
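One step of the edge-switch algorithm, as just described, can be sketched as follows. This is a minimal illustration, not an optimized sampler; the function name is illustrative.

```python
import random

def edge_switch_step(A, forbidden=frozenset()):
    """One edge-switch step: pick two 1-entries (i,a), (j,b) uniformly at
    random; if rows i,j and columns a,b form a checkerboard and the two
    entries to be created are not forbidden, perform the switch."""
    ones = [(r, c) for r, row in enumerate(A) for c, v in enumerate(row) if v]
    (i, a), (j, b) = random.sample(ones, 2)
    checkerboard = (i != j and a != b and A[i][b] == 0 and A[j][a] == 0)
    if checkerboard and (i, b) not in forbidden and (j, a) not in forbidden:
        A[i][a] = A[j][b] = 0
        A[i][b] = A[j][a] = 1
    return A
```

Every step preserves all row and column sums, so the chain stays inside $\Omega$.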
The analysis in the main part of this section implies that we can write
\begin{equation}\label{eq:switch_reformulated}
P_{edge} = \sum_{1 \leq i < j \leq m} \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} \left[1 - u_{ij}l_{ij}\cdot\binom{m}{2}\binom{\rho}{2}^{-1}\right] I_{\mathcal{N}} + \binom{m}{2}\binom{\rho}{2}^{-1}M(H_{\mathcal{N}})
\end{equation}
where $M(H_{\mathcal{N}})$ is the adjacency matrix of a Johnson graph for every $\mathcal{N}$. However, the matrix
\begin{equation}\label{eq:switch_S}
S_{\mathcal{N}} = \left[1 - u_{ij}l_{ij}\cdot\binom{m}{2}\binom{\rho}{2}^{-1}\right] I_{\mathcal{N}} + \binom{m}{2}\binom{\rho}{2}^{-1}M(H_{\mathcal{N}})
\end{equation}
does not necessarily define the transition matrix of a Markov chain on $\mathcal{N}$, as the holding probabilities might be negative.\footnote{In versions (v1,v2) we wrongfully claim that these matrices are stochastic, from which we conclude that $(1 - \lambda_*^c)^{-1} \leq 2 (1 - \lambda_*^{edge})^{-1}$.
We fix this claim in Theorem \ref{thm:comparison_edge} at the cost of a polynomial factor. We can therefore still conclude that the Curveball chain is rapidly mixing whenever the edge-switch chain is rapidly mixing. The results on regular instances, given later on, remain unchanged.}
We circumvent this problem by making the edge-switch chain $\delta$-lazy for $\delta$ sufficiently small. This procedure can be carried out for any $\gamma$ that does not satisfy Assumption \ref{assump:gamma}, provided $\gamma$ is polynomially bounded.
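The $\delta$-lazy trick works because lazification shifts every eigenvalue $\lambda$ of $P$ to $1 - \delta + \delta\lambda$, so the spectral gap scales exactly by $\delta$; this is presumably the content of Proposition \ref{prop:lazy}, which lies outside this excerpt. A quick numerical sketch, on a randomly generated symmetric stochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.random((5, 5))
B = (B + B.T) / 2                      # symmetric non-negative weights
s = B.sum(axis=1).max()
P = B / s + np.diag(1 - B.sum(axis=1) / s)   # symmetric stochastic matrix

delta = 0.3
P_lazy = (1 - delta) * np.eye(5) + delta * P
eigs = np.sort(np.linalg.eigvalsh(P))
eigs_lazy = np.sort(np.linalg.eigvalsh(P_lazy))
# Each eigenvalue lam moves to 1 - delta + delta*lam, hence
# 1 - lam_lazy = delta * (1 - lam): the source of the 1/delta factor.
assert np.allclose(eigs_lazy, 1 - delta + delta * eigs)
```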
\begin{theorem}\label{thm:comparison_edge}
There exists a positive $\delta = \text{poly}(n,m,\rho)^{-1}$
such that
$$
\frac{1}{1 - \lambda_*^c} \leq \frac{1}{\delta}\cdot \frac{1}{1 - \lambda_*^{edge}}
$$
where $\lambda_*^{c, (edge)}$ is the second largest eigenvalue of $P_{c, (edge)}$.
\end{theorem}
\begin{proof}
Note that
\begin{eqnarray}
(1 - \delta)I + \delta P_{edge} &=& \sum_{i < j } \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} (1 - \delta)I_{\mathcal{N}} + \delta \cdot S_{\mathcal{N}} \nonumber \\
& = & \sum_{i < j } \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} \left[1 - \delta \cdot u_{ij}l_{ij}\binom{m}{2}\binom{\rho}{2}^{-1}\right] I_{\mathcal{N}} + \delta\cdot \binom{m}{2}\binom{\rho}{2}^{-1}M(H_{\mathcal{N}}) \nonumber
\end{eqnarray}
so by taking, e.g.,
$$
\delta = \frac{1}{2}\left[ \frac{n^2}{4}\binom{m}{2}\binom{\rho}{2}^{-1}\right]^{-1}
$$
we see that the matrices $(1 - \delta)I_{\mathcal{N}} + \delta \cdot S_{\mathcal{N}}$ are stochastic matrices with non-negative eigenvalues, as all holding probabilities are at least $1/2$. Here we also use the fact that $u_{ij}(A)l_{ij}(A) \leq n^2/4$ for all $A \in \Omega$ and $1 \leq i < j \leq m$. We may conclude that $(1 - \lambda_{*}^c)^{-1} \leq (1 - \lambda_{*,\delta}^{edge})^{-1} \leq (1 - \lambda_{*}^{edge})^{-1}/\delta$ where we use
Proposition \ref{prop:lazy} in the last inequality.
\end{proof}\medskip
For certain instances we can do better than the $\delta$ in the proof of the previous theorem.
\begin{theorem}\label{thm:regular}
Let $\Omega = \Omega(n,d,\mathcal{F})$ be the set of square $n \times n$ binary matrices with row and column sums equal to $d \in \mathbb{N}$, so that $\rho = nd$, and forbidden entries $\mathcal{F}$. Then
$$
(1 - \lambda_*^{c})^{-1} \leq \left(\frac{2d+1}{2d}\right)^2 (1 - \lambda_*^{edge})^{-1}.
$$
\end{theorem}
\begin{proof}With $S_{\mathcal{N}}$ as in (\ref{eq:switch_S}) we have that any eigenvalue $\lambda$ of $S_{\mathcal{N}}$ is of the form
$$
\lambda = 1 + (\mu - u_{ij}l_{ij})\binom{n}{2}\binom{nd}{2}^{-1}
$$
where $\mu = \mu(\lambda)$ is an eigenvalue of the Johnson graph $J(u_{ij} + l_{ij},u_{ij})$. Proposition \ref{prop:johnson} shows that
$
(\mu - u_{ij}l_{ij}) \geq -(u_{ij}+ l_{ij} + 1)^2/4 \geq - (2d + 1)^2/4,
$
using $0 \leq u_{ij} + l_{ij} \leq 2d$ in the last inequality. It then follows that
$$
1 + (\mu - u_{ij}l_{ij})\binom{n}{2}\binom{nd}{2}^{-1} \geq 1 - \frac{1}{4}\frac{(2d+1)^2n(n-1)}{nd(nd-1)} = 1 - \frac{1}{4}\frac{4d^2(n-1)}{d(nd-1)} - \frac{1}{4}\frac{(4d+1)(n-1)}{d(nd-1)}.
$$
Note that $d(n-1) \leq nd - 1$ for all $n, d \geq 1$, from which it follows that
$$
1 + (\mu - u_{ij}l_{ij})\binom{n}{2}\binom{nd}{2}^{-1} \geq - \frac{1}{4}\frac{(4d+1)(n-1)}{d(nd-1)} \geq -\frac{1}{d} - \frac{1}{4d^2}
$$
for all $n \in \mathbb{N}$. This implies that
$(1/d+1/(4d^2))I_{\mathcal{N}} + S_{\mathcal{N}}$ is positive semidefinite. Rescaling, and rewriting, gives that
$$
\left[1 - \left(\frac{2d}{2d+1}\right)^2\right]I_{\mathcal{N}} + \left(\frac{2d}{2d+1}\right)^2 S_{\mathcal{N}}
$$
is a symmetric stochastic transition matrix with only non-negative eigenvalues, i.e., we can take
$$
\delta = \left(\frac{2d}{2d+1}\right)^2.
$$
\end{proof}
\begin{corollary}\label{cor:regular}
Let $\Omega = \Omega(n,d,\mathcal{F})$ be the set of square $n \times n$ binary matrices with row and column sums equal to $d \in \mathbb{N}$, so that $\rho = nd$, and forbidden entries $\mathcal{F}$, and let $\lambda_{|\Omega|-1}^{edge}$ be the smallest eigenvalue of $P_{edge}$. Then
$$
(1 + \lambda_{|\Omega|-1}^{edge})^{-1} \leq \frac{4d^2}{4d^2 - 4d - 1} \leq \frac{5}{2}
$$
if $d \geq 2$.
\end{corollary}
\begin{proof}
In the proof of Theorem \ref{thm:regular} it was shown that
$$
\left( \frac{1}{d} + \frac{1}{4d^2}\right) I + P_{edge} = \sum_{1 \leq i < j \leq m} \binom{m}{2}^{-1} \sum_{\mathcal{N} \in \mathcal{R}_{(i,j)}} \left( \frac{1}{d} + \frac{1}{4d^2}\right)I_{\mathcal{N}} + S_{\mathcal{N}} \succeq 0,
$$
and hence
$
\lambda_{|\Omega|-1}^{edge} \geq -\left( \frac{1}{d} + \frac{1}{4d^2}\right).
$
Rewriting this gives the result.
\end{proof}
With $\mathcal{F}$ the set of diagonal entries, this improves a bound of $(1 + \lambda_{|\Omega|-1}^{edge})^{-1} \leq n^2d^2/4$ of Greenhill \cite{Greenhill2011} for the edge-switch chain for the sampling of simple directed regular graphs.
\section{Parallelism in the Curveball chain}
As a binary matrix is only adjusted on two rows at a time in the Curveball algorithm, one might perform multiple binomial trades in parallel on distinct pairs of rows \cite{Carstens2016}. To be precise, in every step of the so-called \emph{$k$-Curveball algorithm}, we choose a set of $k \leq \lfloor m/2 \rfloor$ disjoint pairs of rows uniformly at random and perform a binomial trade on every pair (see introduction). For $k = \lfloor m/2 \rfloor$ this corresponds to the Global Curveball algorithm described in \cite{Carstens2016}. We show that the induced $k$-Curveball chain is of the form (\ref{eq:general_chain}). The index set $\mathcal{L} = \mathcal{L}_k$ is the collection of all sets containing $k$ pairwise disjoint sets of two rows, i.e.,
$$
\left\{\{(1_a,1_b),(2_a,2_b),\dots,(k_a,k_b)\} \ : \ 1_a,1_b,\dots,k_a,k_b \in [m], \ |\{1_a,1_b,2_a,2_b,\dots,k_a,k_b\}| = 2k \right\},
$$
and $\rho$ is the uniform distribution over $\mathcal{L}$. For a fixed collection $\kappa \in \mathcal{L}_k$, we define the $\kappa$-neighborhood $\mathcal{N}_{\kappa}(A)$ of binary matrix $A \in \Omega$ as the set of binary matrices $B \in \Omega$ that can be obtained from $A$ by binomial trade-operations (see introduction) only involving the row-pairs in $\kappa$. Formally speaking, we have $B \in \mathcal{N}_{\kappa}(A)$ if and only if there exist binary matrices $A_l$ for $l = 0,\dots,k-1$, so that
$$
A_{l+1} \in \mathcal{N}_{(l+1)_a,(l+1)_b}(A_l)
$$
where $A = A_0$ and $B = A_{k}$. Note that the matrices $A_l$ might not all be pairwise distinct, as $A$ and $B$ could already coincide on certain pairs of rows in $\kappa$. Also note that $u_{i_ai_b}(A) = u_{i_ai_b}(B)$ and $l_{i_ai_b}(A) = l_{i_ai_b}(B)$ if $B \in \mathcal{N}_{\kappa}(A)$ for $i = 1,\dots,k$. It is not hard to see that such a neighborhood is isomorphic to a Cartesian product $W_1 \times W_2 \times \dots \times W_k$ of finite sets $W_1,\dots,W_k$ with
$$
|W_i| = \binom{u_{i_ai_b} + l_{i_ai_b}}{u_{i_ai_b}};
$$
that is, the elements of $W_i$ describe a matrix on row-pair $(i_a,i_b)$.
Moreover, the relation $\sim_{\kappa}$ defined by $a \sim_{\kappa} b$ if and only if $b \in \mathcal{N}_{\kappa}(a)$ defines an equivalence relation, and its equivalence classes give the set $\mathcal{R}_{\kappa}$. We now consider the following artificial formulation of the original Curveball chain: we first select $k$ pairs of distinct rows uniformly at random, and then we choose one of those pairs uniformly at random and apply a binomial trade on that pair. It should be clear that this generates the same Markov chain as when we directly select a pair of distinct rows uniformly at random.
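One step of the $k$-Curveball algorithm described above can be sketched as follows. This is a minimal sketch that ignores forbidden entries; the helper names are illustrative only.

```python
import random

def random_assignment(n, u):
    """Uniformly choose which u of the n tradeable columns go to row i."""
    chosen = set(random.sample(range(n), u))
    return [t in chosen for t in range(n)]

def binomial_trade(A, i, j):
    """Binomial trade on rows i, j: redistribute the tradeable ones (the
    columns where exactly one of the two rows has a 1) uniformly at random,
    keeping both row sums and all column sums fixed."""
    cols = [c for c in range(len(A[0])) if A[i][c] != A[j][c]]
    u = sum(A[i][c] for c in cols)              # ones currently in row i
    for c, give_to_i in zip(cols, random_assignment(len(cols), u)):
        A[i][c], A[j][c] = (1, 0) if give_to_i else (0, 1)

def k_curveball_step(A, k):
    """One k-Curveball step: k disjoint row pairs chosen uniformly at
    random, with a binomial trade performed on each pair."""
    rows = random.sample(range(len(A)), 2 * k)
    for t in range(k):
        binomial_trade(A, rows[2 * t], rows[2 * t + 1])
```

As with the switch-based chains, every step preserves all row and column sums.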
For $\mathcal{N}_{\kappa} \in \mathcal{R}_{\kappa}$ the matrix $P_{\mathcal{N}_{\kappa}}$ restricted to the rows and columns in $\mathcal{N}_{\kappa}$ is then the transition matrix of a Markov chain over $W_1 \times \dots \times W_k$, where in every step we choose an index $i \in [k]$ uniformly at random and make a transition in $W_i$ based on the (uniform) transition matrix
$$
Q_{i} = \binom{u_{i_ai_b} + l_{i_ai_b}}{u_{i_ai_b}} ^{-1} J
$$
where $J$ is the all-ones matrix of appropriate size. More formally, the matrix $P_{\mathcal{N}_{\kappa}}$ restricted to the columns and rows in $\mathcal{N}_{\kappa}$ is given by
\begin{equation}\label{eq:trans_product}
\frac{1}{k}\sum_{i = 1}^k \left[ \bigotimes_{j = 1}^{i-1} \mathcal{I}_j\right] \otimes Q_{i} \otimes \left[\bigotimes_{j = i+1}^k \mathcal{I}_j\right],
\end{equation}
forming a transition matrix on $\mathcal{N}_{\kappa}$, and is zero elsewhere. Here $\mathcal{I}_j$ is the identity matrix with the same size as $Q_j$ and $\otimes$ the usual tensor product. The eigenvalues of the matrix in (\ref{eq:trans_product}) are given by
\begin{equation}\label{eq:eigen_product}
\lambda_{\mathcal{N}_{\kappa}} = \left\{\frac{1}{k} \sum_{i = 1}^k \lambda_{{j_i},i} : 0 \leq j_i \leq |W_i| - 1\right\}
\end{equation}
where $1= \lambda_{0,i} \geq \lambda_{1,i} \geq \dots \geq \lambda_{|W_i| - 1,i}$ are the eigenvalues of $Q_i$ for $i = 1,\dots,k$.\footnote{See, e.g., \cite{Erdos2015decomposition} for a similar argument regarding the transition matrix, and eigenvalues, of a Markov chain of this form. These statements follow directly from elementary arguments involving tensor products.} It then follows that
$$
P_{c} = \sum_{\kappa \in \mathcal{L}_k} \frac{1}{|\mathcal{L}_k|} \sum_{\mathcal{N}_{\kappa} \in \mathcal{R}_{\kappa}} P_{\mathcal{N}_{\kappa}}
$$
which is of the form (\ref{eq:general_chain}). For $k = 1$, we get back the description of the previous section. Now, its heat-bath variant is precisely the $k$-Curveball Markov chain
$$
P_{k\text{-Curveball}} = \sum_{\kappa \in \mathcal{L}_k} \frac{1}{|\mathcal{L}_k|} \sum_{\mathcal{N}_{\kappa} \in \mathcal{R}_{\kappa}} \frac{1}{|\mathcal{N}_{\kappa}|} J_{\mathcal{N}_{\kappa}},
$$
where
$$
|\mathcal{N}_{\kappa}| = \prod_{i = 1}^k \binom{u_{i_ai_b} + l_{i_ai_b}}{u_{i_ai_b}}
$$
as, roughly speaking, for a fixed neighborhood $\mathcal{N}_{\kappa}$, the $k$-Curveball chain is precisely the uniform sampler over such a neighborhood.
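The eigenvalue description (\ref{eq:eigen_product}) of the product-chain transition matrix (\ref{eq:trans_product}) is easy to verify numerically for a small instance; the sketch below builds the matrix with Kronecker products and compares its spectrum to the averaged eigenvalues.

```python
from functools import reduce
import numpy as np

def product_chain(Qs):
    """Transition matrix (1/k) * sum_i  I x ... x Q_i x ... x I,
    as in (eq:trans_product)."""
    k = len(Qs)
    dims = [Q.shape[0] for Q in Qs]
    terms = []
    for i in range(k):
        factors = [Qs[j] if j == i else np.eye(dims[j]) for j in range(k)]
        terms.append(reduce(np.kron, factors))
    return sum(terms) / k

# Each Q_i is the uniform (normalized all-ones) matrix: eigenvalues 1 and 0.
Qs = [np.ones((d, d)) / d for d in (2, 3)]
eigs = np.sort(np.linalg.eigvalsh(product_chain(Qs)))
# Spectrum = averages (lam_{j_1,1} + lam_{j_2,2}) / 2 as in (eq:eigen_product).
expected = np.sort([(a + b) / 2 for a in [1, 0] for b in [1, 0, 0]])
assert np.allclose(eigs, expected)
```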
\begin{theorem}\label{thm:global_curveball}
We have
$$
\frac{(1 - \lambda_*^{c})^{-1}}{k} \ \leq \ (1 - \lambda_*^{k,c})^{-1} \ \leq \ (1 - \lambda_*^{c})^{-1}
$$
where $\lambda_*^{k,c}$ is the second-largest eigenvalue of the $k$-Curveball chain, and $\lambda_*^c$ the second-largest eigenvalue of the $1$-Curveball chain.
\end{theorem}
\begin{proof}
The upper bound follows from Theorem \ref{thm:heat_comparison}, with $\alpha = \beta = 1$, as the eigenvalues of all the $Q_i$ are non-negative, and therefore (\ref{eq:eigen_product}) implies that the eigenvalues of the matrix in (\ref{eq:trans_product}) are also non-negative. For the lower bound, we take $\alpha = -1$ and $\beta = -k$. That is, we have to show that
$$
-1 + k(1 - \mu) \geq 0
$$
for all $\mu \in \lambda_{\mathcal{N}_{\kappa}} \setminus \{1\}$ as in (\ref{eq:eigen_product}). It is not hard to see that the second-largest eigenvalue in $\lambda_{\mathcal{N}_{\kappa}}$ is $(k-1)/k$, as the eigenvalues of every fixed $Q_i$ are $1 = \lambda_{0,i} > \lambda_{1,i} = \dots = \lambda_{|W_i| - 1,i} = 0$. This implies that
$$
-1 + k(1 - \mu) \geq -1 + k(1 - (k-1)/k) \geq 0
$$
for all $\mu \in \lambda_{\mathcal{N}_{\kappa}} \setminus \{1\}$.
\end{proof}
In general, the upper bound is tight for certain (degenerate) cases; that is, parallelism in the Curveball chain does not necessarily improve the relaxation time. For example, take column marginals $c_i = 1$ for $i = 1,\dots,n$, row marginals $r_1 = r_2 = n/2$ and $r_3 = r_4 = 0$, and consider $k = 2$.
\section{Conclusion}
We believe similar ideas as in this work can be used to prove that the Curveball chain is rapidly mixing for the sampling of undirected graphs with given degree sequences \cite{Carstens2016}, whenever one of the switch chains is rapidly mixing for those marginals. We leave this for future work, as the proof we have in mind is somewhat more involved, but very similar in nature to the ideas described here.
An interesting direction for future work is to give a better comparison between the edge-switch chain and Curveball chain. It would also be interesting to see if there exist classes of marginals for which one can give a strict improvement over the result in Theorem \ref{thm:global_curveball}.
\subsection*{Acknowledgements}
Pieter Kleer is grateful to Annabell Berger and Catherine Greenhill for some useful discussions and comments regarding this work.
\bibliographystyle{plain}
% Source: "Comparing the Switch and Curveball Markov Chains for Sampling
% Binary Matrices with Fixed Marginals", arXiv:1709.07290 (cs.DM).
% https://arxiv.org/abs/1709.07290
% Source: "Coloring Distance Graphs on the Integers", arXiv:math/9805084.
% https://arxiv.org/abs/math/9805084
% Abstract: Given a set $D$ of positive integers, the associated distance
% graph on the integers is the graph with the integers as vertices and an
% edge between distinct vertices if their difference lies in $D$. We
% investigate the chromatic numbers of distance graphs. We show that, if
% $D = \{d_1,d_2,d_3,\dotsc\}$ with $d_n \mid d_{n+1}$ for all $n$, then the
% distance graph has a proper 4-coloring, and we find the exact chromatic
% numbers of all such distance graphs. We also characterize those distance
% graphs that have periodic proper colorings and show a relationship between
% the chromatic number and the existence of periodic proper colorings.
\section{Introduction} \label{S:intro}
What is the least number of classes into which the integers
can be partitioned, so that no two members of the same class
differ by a square? What if ``square'' is replaced by ``factorial''?
Questions like these can be formulated as graph coloring problems.
Given a set $D$ of positive integers,
the \emph{distance graph} $\gz{D}$ is the graph with the integers
as vertices and an edge between distinct vertices if
their difference lies in $D$;
we call $D$ the \emph{distance set} of this graph.
A \emph{proper coloring} of a graph is an assignment of colors
to the vertices so that no two vertices joined by an edge
receive the same color.
The \emph{chromatic number} of a graph $G$, denoted by $\chi(G)$,
is the least number of colors in a proper coloring.
We abbreviate $\chi\big(\gz{D}\big)$ by $\ensuremath{\chi(D)}$.
We refer to~\cite{BoMu76,WesD96} for
graph-theoretic terminology not defined here.
When $D$ is the set of all positive squares, we call $\gz{D}$
the \emph{square distance graph}.
When $D$ is the set of all factorials, we obtain the
\emph{factorial distance graph}.
The questions at the beginning of this section ask for
the chromatic numbers of these two graphs.
We will study the chromatic numbers of these and other
distance graphs on the integers.
Distance graphs on the integers were introduced by
Eggleton, Erd{\H o}s, and Skilton in~\cite{EES85}.
In \cite{EES85,EES86}, the problem was posed of
characterizing those distance sets $D$,
containing only primes, such that $\ensuremath{\chi(D)}=4$.
This problem was studied in
\cite{EggR88,EES90,VoWa91,VoWa94};
see also \cite{EES88}.
More recently,
\cite{CCH97,DeZh97} have discussed
the chromatic numbers of more general distance graphs with
distance sets having 3 or 4 elements.
In this paper, we are primarily interested in distance
graphs for which the distance set is infinite,
although our results apply to finite distance sets as well.
We begin in Section~\ref{S:easy}
with some easy lemmas on connectedness and bounds on
the chromatic number.
In Section~\ref{S:dc}, we consider distance graphs for which the
distance set is totally ordered by the divisibility relation.
We determine the chromatic numbers of all such graphs; in particular,
we prove that they are all $4$-colorable.
In Section~\ref{S:pc}, we study periodic proper colorings of
distance graphs and their relationship to the chromatic number.
Throughout this paper
we will use standard notation for intervals to denote
sets of consecutive integers.
For example, $\iv{2}{6}$ denotes the set
$\left\{2,3,4,5,6\right\}$.
\section{Basic Results} \label{S:easy}
In this section, we establish some basic facts about the
connectedness and chromatic number of distance graphs.
The results of this section have all been at least partially
stated in earlier works.
Our first result characterizes those
distance sets for which the distance graph is connected.
This result has been partially stated or implicitly
assumed in a number of earlier works;
see~\cite[p.~95]{EES85}.
For $D$ a set of positive integers,
we note that $\ensuremath{\gcd(D)}$ is well defined when $D$ is
infinite.
Given a real number $k$ and a set $D$, we denote by
$k\cdot D$ the set $\left\{\,kd:d\in D\,\right\}$.
\begin{lemma} \label{L:conn}
Let $D$ be a nonempty set of positive integers.
The graph $\gz{D}$ is connected if
and only if $\ensuremath{\gcd(D)}=1$.
Further, each component of $\gz{D}$ is isomorphic to
$\gzmatch{\frac{1}{\ensuremath{\gcd(D)}}\cdot D}$.
\end{lemma}
\begin{proof}
There is a path between vertices $k$ and $k+1$ if and only if
there exist $d_1,\dotsc,d_a,e_1,\dotsc,e_b\in D$ such that
$d_1+\dotsb+d_a-e_1-\dotsb-e_b=1$. This happens
precisely when $\ensuremath{\gcd(D)}=1$, and so the first
statement of the lemma is true.
For the second statement, one isomorphism is the function
$\varphi\colon\ensuremath{\gcd(D)}\cdot\ensuremath{\mathbb{Z}}\to\ensuremath{\mathbb{Z}}$ defined by
$\varphi(k)=\frac{k}{\ensuremath{\gcd(D)}}$.\ggcqed
\end{proof}
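The connectivity criterion of Lemma~\ref{L:conn} can be illustrated computationally: restricting the search to a bounded window of integers, a breadth-first search finds a path from $0$ to $1$ exactly when $\gcd(D)=1$ (for small $D$; a large enough window always suffices in that case, though this sketch fixes one bound). The function name is illustrative only.

```python
from collections import deque
from functools import reduce
from math import gcd

def connected(D, window=200):
    """BFS from 0 inside [-window, window] in the distance graph G(Z, D);
    returns True iff vertex 1 is reached, illustrating Lemma conn."""
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        if v == 1:
            return True
        for d in D:
            for w in (v - d, v + d):
                if abs(w) <= window and w not in seen:
                    seen.add(w)
                    queue.append(w)
    return False

assert reduce(gcd, [6, 10, 15]) == 1 and connected([6, 10, 15])
assert reduce(gcd, [6, 10]) == 2 and not connected([6, 10])
```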
When we determine the chromatic numbers of distance graphs,
Lemma~\ref{L:conn} will often allow us to assume that the GCD of
the distance set is $1$.
Next, we prove a useful upper bound on the chromatic number.
This result is a slight generalization of a result of
Chen, Chang, and Huang \cite[Lemma~2]{CCH97}.
\begin{lemma} \label{L:nomult}
Let $D$ be a nonempty set of positive integers, and
let $k$ be a positive integer.
If $\frac{1}{\ensuremath{\gcd(D)}}\cdot D$ contains no multiple of $k$,
then $\ensuremath{\chi(D)}\le k$.
\end{lemma}
\begin{proof}
Let $D$ and $k$ be as stated.
By Lemma~\ref{L:conn} we may assume that $\ensuremath{\gcd(D)}=1$.
Thus, we assume that $D$ contains no multiple of $k$.
We color the integers with colors $\iv{0}{k-1}$,
assigning to each integer $i$ the color corresponding
to the residue class of $i$ modulo $k$.
Two integers will be assigned the same color precisely when
they differ by a multiple of $k$.
Since no multiple of $k$ occurs in $D$, this is a proper
$k$-coloring of $\gz{D}$.\ggcqed\end{proof}
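The coloring in the proof of Lemma~\ref{L:nomult} is simply $i \mapsto i \bmod k$, and its propriety amounts to the modular condition checked below; a small sketch with an illustrative function name.

```python
from functools import reduce
from math import gcd

def mod_coloring_is_proper(D, k):
    """The coloring i -> i mod k is proper for G(Z, D) iff, after dividing
    out gcd(D) as in Lemma nomult, no element of D is a multiple of k."""
    g = reduce(gcd, D)
    return all((d // g) % k != 0 for d in D)

# D = {3, 5, 7} contains no multiple of 4, so chi(D) <= 4 via i mod 4.
D, k = [3, 5, 7], 4
assert mod_coloring_is_proper(D, k)
# Direct check on a window: endpoints of every edge get different colors.
assert all((i % k) != ((i + d) % k) for i in range(1000) for d in D)
```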
The converse of Lemma~\ref{L:nomult} holds when $k=2$.
This gives us a characterization of bipartite
distance graphs:
$\gz{D}$ is bipartite precisely when
$\frac{1}{\ensuremath{\gcd(D)}}\cdot D$ contains no multiple of $2$,
that is, when all elements of $D$ have the same power
of $2$ in their prime factorizations.
This result has been partially stated in earlier works;
see~\cite[Thms.~8 \&~10]{EES85} and~\cite[Thms.~3 \&~4]{CCH97}.
\begin{proposition} \label{P:chi2}
Let $D$ be a set of positive integers.
The graph $\gz{D}$ is bipartite
if and only if there exists a
non-negative integer $k$ so that
$\frac{1}{2^k}\cdot D$ contains only odd integers.
\end{proposition}
\begin{proof}
We may assume $D\ne\emptyset$.
Since a graph is bipartite if and only if each component is
bipartite, we may also assume,
by Lemma~\ref{L:conn}, that $\ensuremath{\gcd(D)}=1$.
For such $D$ we show that $\gz{D}$ is bipartite
if and only if each element of $D$ is odd.
$(\Longrightarrow)$ Since $\ensuremath{\gcd(D)}=1$, $D$ must have
an odd element $d$.
Suppose that $D$ has an even element $e$.
If we begin at $0$, take $e$ steps
in the positive direction,
each of length $d$, ending at $de$, and then take $d$ steps
in the negative direction,
each of length $e$, ending at $0$, then we have followed a closed
walk of odd length.
Formally, the set
\[\left\{0,d,d\cdot2,\dotsc,d(e-1),de,(d-1)e,(d-2)e,\dotsc,2e,e\right\}\]
is the vertex set of an odd circuit, and so $\gz{D}$ is
not bipartite.
$(\Longleftarrow)$
If every element of $D$ is odd, then $\ensuremath{\chi(D)}\le2$,
by Lemma~\ref{L:nomult}.\ggcqed\end{proof}
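Proposition~\ref{P:chi2} can be checked experimentally on finite windows: BFS/DFS $2$-coloring of the restricted graph succeeds when all elements of $D$ are odd, and detects the odd closed walk from the proof when $D$ mixes an odd and an even element. A restriction can only be "more bipartite" than the full graph, so a failure here certifies non-bipartiteness of $\gz{D}$. The function name is illustrative only.

```python
def truncated_bipartite(D, window=60):
    """Attempt a 2-coloring of the distance graph restricted to
    [0, window]; return False as soon as an odd cycle forces a conflict."""
    color = {}
    for s in range(window + 1):
        if s in color:
            continue
        color[s], stack = 0, [s]
        while stack:
            v = stack.pop()
            for d in D:
                for w in (v - d, v + d):
                    if 0 <= w <= window:
                        if w not in color:
                            color[w] = 1 - color[v]
                            stack.append(w)
                        elif color[w] == color[v]:
                            return False
    return True

assert truncated_bipartite([3, 5])        # all elements odd: bipartite
assert not truncated_bipartite([3, 4])    # odd d, even e: odd closed walk
```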
The converse of Lemma~\ref{L:nomult} does not hold when
$k>2$.
For example, let $k>2$, and let $D=\left\{1,k\right\}$.
Then $\frac{1}{\ensuremath{\gcd(D)}}\cdot D$ contains a multiple of $k$,
and yet $\ensuremath{\chi(D)}\le3\le k$
(this is not hard to show; it will also follow from Lemma~\ref{L:dc3}).
As with general graphs, it appears to be quite difficult to
determine when a distance graph has a proper $k$-coloring, for $k\ge3$.
However, when $D$ is finite, there does exist an algorithm to
determine $\ensuremath{\chi(D)}$.
This was proven for $D$ a finite set of primes
by Eggleton, Erd\H os, and Skilton
\cite[Corollary to Thm.~2]{EES90};
essentially the same proof works for more general sets.
\begin{theorem} \label{T:existsalg}
There exists an algorithm to determine $\ensuremath{\chi(D)}$
for $D$ a finite set of positive integers.\end{theorem}
\begin{proof} (Outline---see~\cite[Thm.~2]{EES90})
Let $q=\max(D)$.
Then $\ensuremath{\chi(D)}\le q+1$, by Lemma~\ref{L:nomult}.
We consider the colorings of the subgraph of $\gz{D}$
induced by $S=\iv{1}{q^q+q}$.
We show that, for $k\le q$, if $S$ has a proper
$k$-coloring, then $\ensuremath{\chi(D)}\le k$;
thus, $\ensuremath{\chi(D)}$ can be determined by a bounded search.
Let $k\le q$, and suppose that $S$ has a proper $k$-coloring.
The number of $k$-colorings of a block of $q$
consecutive integers is at most $q^q$.
Since $S$ contains $q^q+1$ such blocks,
two such blocks contained in $S$
(say $\iv{a}{a+q-1}$ and $\iv{b}{b+q-1}$, with $a<b$)
receive the same pattern of colors.
We extend the coloring of $\iv{a}{b+q-1}$ to a
coloring $f$ of $\ensuremath{\mathbb{Z}}$ using the rule
$f(i+a-b)=f(i)$, for all $i$.
We can show that this is a proper coloring of $\gz{D}$,
and so $\ensuremath{\chi(D)}\le k$.\ggcqed\end{proof}
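The proof shows that a proper coloring yields a \emph{periodic} proper coloring, so for finite $D$ one may search over periodic colorings directly: a coloring $c$ with $c(i) = c(i \bmod P)$ is proper precisely when $c[(i+d) \bmod P] \neq c[i]$ for every residue $i$ and every $d \in D$. A small-instance sketch of this bounded search (with an illustrative bound on the period, far smaller than the $q^q + q$ of the proof, so for large $D$ it only certifies upper bounds):

```python
from itertools import product

def chromatic_number(D, max_period=8):
    """Smallest k admitting a proper coloring of G(Z, D) that is periodic
    with period at most max_period; equals chi(D) when the search bound
    is large enough for D."""
    for k in range(1, max(D) + 2):          # chi(D) <= max(D) + 1
        for P in range(1, max_period + 1):
            for c in product(range(k), repeat=P):
                if all(c[(i + d) % P] != c[i]
                       for i in range(P) for d in D):
                    return k
    return None

assert chromatic_number([1]) == 2
assert chromatic_number([1, 2]) == 3
```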
While an algorithm exists to determine $\ensuremath{\chi(D)}$ for
finite $D$, we do not know whether
there is an efficient algorithm.
For finite graphs, determining whether the chromatic number is
at most $k$ is NP-complete~\cite{GaJo79}.
We conjecture that this is also true for distance graphs
with finite distance sets.
\begin{conjecture} \label{J:3npc}
Let $k\ge3$.
Determining whether $\ensuremath{\chi(D)}\le k$ for finite sets $D$
is NP-complete.\ggcqed\end{conjecture}
\section{Divisibility Chains} \label{S:dc}
We now focus on a particular class of distance graphs:
those in which the distance set is
totally ordered by divisibility.
We show that all such graphs are $4$-colorable,
and we determine their chromatic numbers.
A \emph{divisibility chain} is a set of positive integers that is
totally ordered by the divisibility relation.
When $D$ is a (finite or infinite) divisibility chain we
denote the elements of $D$ by $d_1,d_2,\dotsc$, where
$d_1\mid d_2\mid\dotsb$.
The \emph{ratios} of $D$ are the numbers
$r_i=\frac{d_{i+1}}{d_i}$, for each $i$.
When determining $\ensuremath{\chi(D)}$, we may, by Lemma~\ref{L:conn},
assume that $\ensuremath{\gcd(D)}=d_1=1$.
Thus, $\ensuremath{\chi(D)}$ depends only on the ratios.
We may also assume that all the $d_i$'s are distinct, that is,
that none of the ratios is equal to $1$.
A \emph{string} over $\{1,2\}$ is a finite sequence of $1$'s
and $2$'s, written without spaces or separators.
For example, $\alpha=1211$ is a string of length 4
with $\alpha_1=1$, $\alpha_2=2$, etc.
For $k$ a positive integer, a string $\alpha$
is \emph{$k$-compatible} with a distance set $D$
if there is a proper $k$-coloring of $\gz{D}$ with
colors $\iv{0}{k-1}$ such that the differences, modulo $k$,
between colors of consecutive vertices form
repeated copies of $\alpha$.
Below is part of such a coloring with $k=4$ and $\alpha=1211$.
\[
\begin{array}{r|*{13}{c}}
\text{vertex}     & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
\text{color}      & 0 & 1 & 3 & 0 & 1 & 2 & 0 & 1 & 2 & 3 & 1  & 2  & 3  \\
\text{difference} &   & 1 & 2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 2  & 1  & 1  \\
\end{array}
\]
We see that $1211$ is not $4$-compatible with $\{3\}$,
since, for example, $1$ and $4$ receive the same color;
this is because the sum of three consecutive entries of
the repeated copies of $\alpha$ is divisible
by $4$ (i.e., $2+1+1=4$).
Generally,
a string $\alpha$ is $k$-compatible with $\{d\}$ if the
concatenation of repeated copies of $\alpha$ contains no
$d$ consecutive entries whose sum is a multiple of $k$.
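This condition is easy to check mechanically. Below is a short Python sketch (our own helper, not from the paper) that tests $k$-compatibility of a string with a set of distances by inspecting windows in the infinite repetition; since that repetition is $|\alpha|$-periodic, only $|\alpha|$ starting positions need be examined. (For a general distance set, a string is $k$-compatible precisely when it is $k$-compatible with each singleton $\{d\}$, since the coloring is determined by the string up to the starting color.)

```python
def k_compatible(alpha, distances, k):
    """Return True if the string alpha (a sequence of 1s and 2s) is
    k-compatible with the given distance set: the infinite repetition
    of alpha has no d consecutive entries summing to a multiple of k,
    for any d in distances."""
    n = len(alpha)
    for d in distances:
        # The repetition is n-periodic, so n window starts suffice.
        for start in range(n):
            window_sum = sum(alpha[(start + i) % n] for i in range(d))
            if window_sum % k == 0:
                return False
    return True

# The running example: 1211 is 4-compatible with {1} and {2},
# but not with {3} (the window 2+1+1 sums to 4).
print(k_compatible((1, 2, 1, 1), {1, 2}, 4))  # True
print(k_compatible((1, 2, 1, 1), {3}, 4))     # False
```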
\begin{theorem} \label{T:dc4}
If $D$ is a divisibility chain,
then $\ensuremath{\chi(D)}\le4$.
\end{theorem}
\begin{proof}
We may assume that the $d_i$'s are all distinct, and that $d_1=1$.
We use notation such as $\alpha^n$ to denote a string;
the superscript does not denote exponentiation or concatenation.
\smallskip\noindent\emph{Claim.}
For $n=1,2,3,\dotsc$, there exist strings $\alpha^n$,
$\beta^n$ of length $d_n$ over $\left\{1,2\right\}$ such that
\begin{enumerate}
\item $\alpha^n$, $\beta^n$ differ only in the first entry,
with $\alpha^n_1=1$, and $\beta^n_1=2$, and
\label{I:dc4pf-differ}
\item if $\gamma$ is a string resulting from the
concatenation of any number of copies of $\alpha^n$ and/or
$\beta^n$, in any order, then $\gamma$ is $4$-compatible with
$\left\{d_1,d_2,d_3,\dotsc,d_n\right\}$.
\label{I:dc4pf-compat}
\end{enumerate}
Before we prove the claim, we show that the theorem follows from it.
If the claim holds, then, for each $n$,
$\alpha^n$ is $4$-compatible with
$\left\{d_1,d_2,\dotsc,d_n\right\}$, and so
$\gzmatch{\left\{d_1,d_2,\dotsc,d_n\right\}}$
has a proper 4-coloring.
Since every finite subgraph of $\gz{D}$
is isomorphic to a finite subgraph of
$\gzmatch{\left\{d_1,\dotsc,d_n\right\}}$ for some $n$,
every finite subgraph of $\gz{D}$ is 4-colorable,
and we may conclude that $\ensuremath{\chi(D)}\le4$, by a compactness
argument.
Hence, it suffices to prove the claim.
\smallskip\noindent\emph{Proof of Claim.}
We proceed by induction on $n$.
For $n=1$, we have $d_1=1$ by assumption.
Let $\alpha^1=1$, and let $\beta^1=2$;
these satisfy the claim for $n=1$.
Now suppose that $n\ge1$,
and that the claim holds for $n$.
Define $s$ and $t$ as follows.
\[
s:=\sum_{i=1}^{d_n}\alpha^n_i;\qquad
t:=\sum_{i=1}^{d_n}\beta^n_i=s+1.
\]
We show first that $\iv{r_n\cdot s}{r_n\cdot t}$ contains
integers $w$, $w+1$, neither a multiple of $4$.
If $r_n>2$, then this is true since
$\iv{r_n\cdot s}{r_n\cdot t}$ contains at least $4$ consecutive
integers, and among any $4$ consecutive integers exactly one is a
multiple of $4$, so two consecutive nonmultiples remain.
On the other hand, if $r_n=2$,
then $r_n\cdot s$ and $r_n\cdot t$ are both even.
Exactly one of the two is divisible by four.
If $4\mid\left(r_n\cdot s\right)$, then let $w=r_n\cdot s+1$;
otherwise, let $w=r_n\cdot s$.
Now we choose $a\ge1$, $b\ge0$ so that $a+b=r_n$
and $as+bt=w$:
let $b=w-r_n\cdot s$, and let $a=r_n-b$;
since $r_n\cdot s\le w<r_n\cdot t$, we have $0\le b<r_n$, and so $a\ge1$.
We define $\alpha^{n+1}$ to be the concatenation of
$a$ copies of
$\alpha^n$ followed by $b$ copies of $\beta^n$.
We let
$\beta^{n+1}$ be the concatenation of $\beta^n$
followed by $a-1$ copies of $\alpha^n$ followed by $b$
copies of $\beta^n$;
equivalently,
$\beta^{n+1}$ is $\alpha^{n+1}$ with its first
entry replaced by $2$.
Now, $\alpha^{n+1}$ and $\beta^{n+1}$ both have length $d_{n+1}$,
since $a+b=r_n$, and
$\alpha^{n+1}$ and $\beta^{n+1}$ differ only in the first entry.
Let $\gamma$ be a concatenation of copies of $\alpha^{n+1}$, $\beta^{n+1}$.
Then $\gamma$ is a concatenation of copies of $\alpha^n$ and $\beta^n$,
and so, by the induction hypothesis, $\gamma$ is $4$-compatible
with $\left\{d_1,d_2,\dotsc,d_n\right\}$.
In order to prove that $\alpha^{n+1}$, $\beta^{n+1}$ satisfy the
claim, it remains only to show that $\gamma$ is $4$-compatible
with $\left\{d_{n+1}\right\}$.
This is true if the concatenation of repeated copies of $\gamma$
has no $d_{n+1}$ consecutive entries whose sum is a multiple
of $4$.
Since $\alpha^{n+1}$ and $\beta^{n+1}$ differ in only one entry,
the sum of $d_{n+1}$ consecutive entries of
repeated copies of $\gamma$ is equal either to the sum
of the entries of $\alpha^{n+1}$ or to the sum of the entries
of $\beta^{n+1}$;
that is, it is equal
\begin{align*}
\text{either to}\quad\sum_{i=1}^{d_{n+1}}\alpha^{n+1}_i&=as+bt=w,\\
\text{or to}\quad\sum_{i=1}^{d_{n+1}}\beta^{n+1}_i&=t+(a-1)s+bt=w+1.
\end{align*}
Neither of these is a multiple of $4$.
Thus, the claim is proven.\ggcqed\end{proof}
The bound in Theorem~\ref{T:dc4} is sharp:
if $D=\left\{1,2,6\right\}$,
then the subgraph of $\gz{D}$ induced by
$\iv{1}{7}$ has no proper 3-coloring.
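Sharpness claims of this size can be verified by exhaustive search. The following Python sketch (our own helper, not part of the paper) computes the chromatic number of the subgraph of a distance graph induced by a finite set of integers:

```python
from itertools import product

def induced_chromatic_number(vertices, distances):
    """Chromatic number of the subgraph of the distance graph G(Z, D)
    induced by `vertices`, found by brute force over all colorings."""
    vertices = list(vertices)
    edges = [(u, v) for i, u in enumerate(vertices)
             for v in vertices[i + 1:] if abs(v - u) in distances]
    for k in range(1, len(vertices) + 1):
        for colors in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, colors))
            if all(col[u] != col[v] for u, v in edges):
                return k
    return len(vertices)

print(induced_chromatic_number(range(1, 8), {1, 2, 6}))  # 4
```

For $D=\{1,2,6\}$ the edges of distances $1$ and $2$ force every proper $3$-coloring to repeat with period $3$, so vertices $1$ and $7$ would clash across the distance-$6$ edge; the search confirms that $4$ colors are needed.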
On the other hand, graphs satisfying the hypotheses of
the theorem need not have chromatic number 4, even if
$D$ is infinite.
For example, if $D$ is the set of all powers of $3$, then
every element of $D$ is odd, and so $\ensuremath{\chi(D)}=2$,
by Proposition~\ref{P:chi2}.
\begin{example} \label{G:usedc4}
Let $D=\left\{d_1,d_2,d_3,\dotsc\right\}$, where
$d_i=i!$ for each $i$.
We use the technique of the above proof to produce part of a
proper 4-coloring of $\gz{D}$, the factorial distance graph.
Let $\alpha^1=1$ and $\beta^1=2$.
We find consecutive nonmultiples of 4
in $\iv{2\cdot1}{2\cdot2}=\{2,3,4\}$:
let $w=2$, so that $w+1=3$.
So, $a=2$, and $b=0$.
The string $\alpha^2$ is $2$ copies of $\alpha^1$
followed by $0$ copies of $\beta^1$.
That is, $\alpha^2=11$, and so $\beta^2=21$.
Continuing, we find consecutive nonmultiples of 4
in $\iv{3\cdot2}{3\cdot3}=\{6,7,8,9\}$:
let $w=6$, so that $w+1=7$.
So, $a=3$, and $b=0$.
The string $\alpha^3$ is $3$ copies of $\alpha^2$ followed by
$0$ copies of $\beta^2$.
That is, $\alpha^3=111111$, and so $\beta ^3=211111$.
Once again, we find consecutive nonmultiples of 4
in $\iv{4\cdot6}{4\cdot7}=\iv{24}{28}$:
let $w=25$, so that $w+1=26$.
So, $a=3$, and $b=1$.
The string $\alpha^4$ is $3$ copies of $\alpha^3$ followed
by $1$ copy of $\beta^3$.
That is, $\alpha^4=111111111111111111211111$,
and so $\beta^4=211111111111111111211111$.
The coloring of $\iv{1}{24}$ obtained from $\alpha^4$ is the following.
\[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,0,1,2,3,0.\ggcenddef\]
\end{example}
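The inductive construction in the proof of Theorem~\ref{T:dc4} is effective, and the steps of the example above can be reproduced in a few lines of Python (a sketch with our own naming; the choice of $w$ follows the proof's case analysis):

```python
def build_strings(ratios):
    """Given ratios r_1, r_2, ... of a divisibility chain with d_1 = 1,
    build the strings alpha^n, beta^n from the proof of Theorem dc4."""
    alpha, beta = [1], [2]  # alpha^1, beta^1
    for r in ratios:
        s = sum(alpha)  # sum of entries of alpha^n; sum(beta) == s + 1
        # Find w with w, w+1 in [r*s, r*(s+1)], neither a multiple of 4.
        w = r * s
        while w % 4 == 0 or (w + 1) % 4 == 0:
            w += 1
        b = w - r * s           # number of beta-blocks
        a = r - b               # number of alpha-blocks (a >= 1)
        alpha = alpha * a + beta * b
        beta = [2] + alpha[1:]  # alpha with first entry replaced by 2
    return alpha, beta

alpha4, _ = build_strings([2, 3, 4])  # factorial chain 1, 2, 6, 24
print(''.join(map(str, alpha4)))  # 111111111111111111211111
```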
In \emph{almost} the entire proof of Theorem~\ref{T:dc4},
``4'' can be replaced by ``3'';
that is, we use $3$-compatibility instead of
$4$-compatibility,
we find a $3$-coloring instead of a $4$-coloring,
and we find consecutive nonmultiples of $3$ instead of $4$.
The one place where $4$ is required is the
argument in the proof showing the existence of
two consecutive nonmultiples when $r_n=2$.
Thus, if we require that $r_n\ne 2$ for each $n$,
then we can replace $4$ by $3$ in the proof, and
we have the following result.
\begin{lemma} \label{L:dc3}
Let $D$ be a divisibility chain, with ratios $r_1,r_2,\dotsc$.
If $r_i\ne2$ for all $i$,
then $\ensuremath{\chi(D)}\le3$.\ggcqed\end{lemma}
Again, the bound in this result is sharp:
if $D=\left\{1,4\right\}$, then the subgraph of $\gz{D}$
induced by $\iv{1}{5}$ has no proper 2-coloring.
We now find $\ensuremath{\chi(D)}$ for every divisibility chain
$D$.
\begin{theorem} \label{T:dcchi}
Let $D$ be a divisibility chain, with ratios $r_1,r_2,\dotsc$.
All of the following hold.
\begin{enumerate}
\item $\ensuremath{\chi(D)}\le4$.
\label{I:dcchi4}
\item $\ensuremath{\chi(D)}\le3$ if and only if there do not exist
$i$, $j$ with $i<j$, $r_i=2$, and $3\mid r_j$.
\label{I:dcchi3}
\item $\ensuremath{\chi(D)}\le2$ if and only if $r_i$ is odd,
for each $i$.
\label{I:dcchi2}
\item $\ensuremath{\chi(D)}=1$ if and only if $D=\emptyset$.
\label{I:dcchi1}
\end{enumerate}
\end{theorem}
\begin{proof}
Statement~\ref{I:dcchi4} follows from Theorem~\ref{T:dc4},
statement~\ref{I:dcchi2} follows from Proposition~\ref{P:chi2},
and statement~\ref{I:dcchi1} holds because a graph is 1-colorable
precisely when it has no edges.
It remains to prove statement~\ref{I:dcchi3}.
We may assume that $d_1=1$.
$(\Longrightarrow)$
Suppose that there exist $i$ and $j$ with $i<j$, $r_i=2$, and $3\mid r_j$.
Then $d_{i+1}=2d_i$, and $d_{j+1}$ is divisible by $3d_i$.
Suppose that $\gz{D}$ has a proper $3$-coloring.
Consider the colors assigned to the multiples of $d_i$.
Since $d_i,2d_i\in D$, vertices $0$, $d_i$, and $2d_i$
induce a complete subgraph and so must be assigned 3 different
colors.
Similarly, $d_i$, $2d_i$, and $3d_i$ must receive 3 different
colors, and so $0$ and $3d_i$ have the same color.
Continuing this argument, all multiples of $3d_i$ must receive the
same color, including $0$ and $d_{j+1}$, which is impossible.
$(\Longleftarrow)$
Suppose there do not exist $i$ and $j$ with the properties
specified in statement~\ref{I:dcchi3};
that is, every ratio divisible by $3$ precedes every ratio equal to
$2$ in the sequence $r_1,r_2,\dotsc$.
If there exist infinitely many ratios that are divisible by $3$,
then, by our assumption, there exists no ratio equal to $2$,
and so $\ensuremath{\chi(D)}\le3$, by Lemma~\ref{L:dc3}.
Thus, we may assume that there are only finitely many
ratios that are divisible by $3$.
Let $c$ be the least positive integer such that
$3\nmid r_i$, for all $i\ge c$.
Then none of $r_1,r_2,\dotsc,r_{c-1}$ is equal to $2$.
Thus, by Lemma~\ref{L:dc3}, the graph
$\gzmatch{\left\{d_1,d_2,\dotsc,d_c\right\}}$
has a proper $3$-coloring.
By the proof of Lemma~\ref{L:dc3}---that is, the proof of
Theorem~\ref{T:dc4}, as modified to prove Lemma~\ref{L:dc3}---there
is a string $\alpha^c$ of length $d_c$
over $\left\{1,2\right\}$ such that
$\alpha^c$ is $3$-compatible with
$\left\{d_1,d_2,\dotsc,d_c\right\}$.
We claim that $\alpha^c$ is $3$-compatible with $D$.
To see this, first note that
\[
\sum_{i=1}^{d_c}\alpha^c_i
\]
is not a multiple of $3$, since $\alpha^c$ is $3$-compatible
with $\left\{d_c\right\}$.
Thus, if integers $x$ and $y$ differ by a multiple of $d_c$,
then, in a $3$-coloring whose differences, modulo $3$,
form repeated copies of $\alpha^c$,
$x$ and $y$ receive the same color
precisely when their difference is a multiple of $3d_c$.
Now, no $r_i$ with $i\ge c$ is divisible by $3$;
thus, no $d_i$ with $i\ge c$ is divisible by $3d_c$.
We conclude that, for each $i\ge c$,
no two integers with difference $d_i$ receive the same color,
and so $\alpha^c$ is $3$-compatible with
$\left\{d_c,d_{c+1},d_{c+2},\dotsc\right\}$.
Thus, $\alpha^c$ is $3$-compatible with $D$, and we have
$\ensuremath{\chi(D)}\le3$.\ggcqed\end{proof}
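Theorem~\ref{T:dcchi} gives a complete and easily computable classification. For a finite prefix of the ratio sequence it can be transcribed directly (a Python sketch with our own naming; the chain is assumed nonempty with $d_1=1$, so $\ensuremath{\chi(D)}\ge2$):

```python
def chain_chromatic_number(ratios):
    """Chromatic number of G(Z, D) for the divisibility chain
    D = {1, r_1, r_1*r_2, ...} given by the listed ratios,
    transcribing the classification of Theorem dcchi."""
    if all(r % 2 == 1 for r in ratios):
        return 2  # all ratios odd
    if any(ratios[i] == 2 and ratios[j] % 3 == 0
           for i in range(len(ratios))
           for j in range(i + 1, len(ratios))):
        return 4  # a ratio 2 precedes a ratio divisible by 3
    return 3

print(chain_chromatic_number([2, 3, 4]))  # 4  (factorial chain 1, 2, 6, 24)
print(chain_chromatic_number([3, 2]))     # 3  (D = {1, 3, 6})
print(chain_chromatic_number([3, 9]))     # 2  (D = {1, 3, 27})
```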
By Theorem~\ref{T:dcchi}, the chromatic number of
the factorial distance graph is $4$.
We will have more to say about this graph in the next section.
\section{Periodic Colorings} \label{S:pc}
In this section, we consider periodic proper colorings
of distance graphs.
We characterize those distance graphs that have no periodic
proper coloring,
and we find a relationship between
the chromatic number and the nonexistence of periodic proper
colorings.
Periodic colorings have been previously studied in~\cite{EES90}.
\begin{lemma} \label{L:nomultper}
Let $D$ be a set of positive integers, and
let $k$ be a positive integer.
If $D$ contains no multiple of $k$,
then $\gz{D}$
has a periodic proper $k$-coloring.
\end{lemma}
\begin{proof}
We may assume $D\ne\emptyset$.
The proof of Lemma~\ref{L:nomult}
gives a periodic proper $k$-coloring of each component of
$\gz{D}$;
this results in a periodic proper $k$-coloring
of the graph.\ggcqed\end{proof}
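When no element of $D$ is a multiple of $k$, one concrete periodic proper $k$-coloring (a simple observation of ours, not necessarily the coloring produced in the proof of Lemma~\ref{L:nomult}) is $x\mapsto x\bmod k$: if $k\nmid d$, then $x$ and $x+d$ always land in different residue classes modulo $k$.

```python
def residue_coloring(D, k):
    """Return the k-coloring x -> x mod k, which is a periodic proper
    coloring of G(Z, D) whenever D contains no multiple of k."""
    assert all(d % k != 0 for d in D), "D must contain no multiple of k"
    return lambda x: x % k

# D = {2, 3} contains no multiple of 4, so x mod 4 properly colors G(Z, D).
color = residue_coloring({2, 3}, 4)
print(all(color(x) != color(x + d)
          for x in range(-50, 50) for d in (2, 3)))  # True
```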
We can use Lemma~\ref{L:nomultper} to characterize
those distance graphs that have no periodic proper coloring.
The following result generalizes an observation of
Eggleton~\cite{EggR97}
that the square distance graph has no periodic proper coloring.
\begin{proposition} \label{P:noper}
Let $D$ be a set of positive integers.
The graph $\gz{D}$ has no periodic proper coloring
if and only if $D$ contains a multiple of every positive integer.
\end{proposition}
\begin{proof}
$(\Longrightarrow)$
If there is some positive integer $k$ such
that $D$ contains no multiple of $k$, then,
by Lemma~\ref{L:nomultper},
$\gz{D}$ has a periodic proper coloring.
$(\Longleftarrow)$
Let $D$ contain a multiple of every positive integer.
Let $\gz{D}$ be colored in a periodic manner;
say this coloring has period $k$.
Every pair of vertices whose difference is a multiple
of $k$ will have the same color.
Since $D$ contains some multiple of $k$, this cannot be a
proper coloring.\ggcqed\end{proof}
\begin{remark} \label{R:fact}
It follows from Theorem~\ref{T:dcchi}
that the chromatic number of the factorial distance graph
is $4$.
However, by Proposition~\ref{P:noper}, the factorial
distance graph has no periodic proper coloring.\ggcenddef\end{remark}
Now we examine the effect of the existence of uniquely colorable
subgraphs on proper colorings of distance graphs.
We prove a useful lower bound on the chromatic number
based on uniquely colorable subgraphs and periodic
colorings.
\begin{proposition} \label{P:uniqueper}
Let $D$ be a set of positive integers, and
let $k$ be a positive integer.
If $\gz{D}$ has a finite,
uniquely $k$-colorable subgraph, then
every proper $k$-coloring of $\gz{D}$ is periodic.
\end{proposition}
\begin{proof}
Suppose that $H$ is a uniquely $k$-colorable subgraph of
$\gz{D}$.
We may assume that the least integer that is a vertex of $H$ is $1$.
Let $n$ be the greatest-numbered vertex of $H$.
Since $H$ is uniquely $k$-colorable, every $k$-coloring of
$\iv{1}{n-1}$ that can be extended to a proper $k$-coloring of
$\gz{D}$ has a unique extension to a proper $k$-coloring
of $\iv{1}{n}$.
In short, once we have $k$-colored $\iv{1}{n-1}$, the
color of vertex $n$ is forced.
But, $\iv{2}{n+1}$ also contains a copy of $H$, and so
once we have colored $\iv{2}{n}$, the color of vertex $n+1$
is forced.
By an inductive argument, we can see that $k$-coloring
$\iv{1}{n-1}$ completely determines the coloring of
$\left[1,\infty\right)$.
Essentially the same argument works in the opposite direction:
$k$-coloring $\iv{2}{n}$ forces a certain color to occur
at vertex $1$.
Hence, $k$-coloring any set of $n-1$ consecutive vertices determines
the coloring of all of $\ensuremath{\mathbb{Z}}$.
Now, there are only a finite number of $k$-colorings of $n-1$
consecutive integers.
Since the colorings of blocks of $n-1$ consecutive integers
must eventually repeat, every proper $k$-coloring of the distance
graph is periodic.\ggcqed\end{proof}
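The forcing argument can be watched in action on a small example. With $D=\{1,2\}$, every three consecutive integers induce a $K_3$, which is uniquely $3$-colorable, so each next color is forced and any proper $3$-coloring is periodic. A Python sketch of this demonstration (our own helper names):

```python
def forced_extension(prefix, D, k, length):
    """Extend a proper k-coloring of 0..len(prefix)-1 of G(Z, D),
    one vertex at a time; return None unless every step leaves exactly
    one legal color (i.e. unless the continuation is forced)."""
    colors = list(prefix)
    for v in range(len(colors), length):
        legal = [c for c in range(k)
                 if all(v - d < 0 or colors[v - d] != c for d in D)]
        if len(legal) != 1:
            return None
        colors.append(legal[0])
    return colors

print(forced_extension([0, 1], {1, 2}, 3, 9))  # [0, 1, 2, 0, 1, 2, 0, 1, 2]
```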
Proposition~\ref{P:noper} and Proposition~\ref{P:uniqueper}
have nearly opposite conclusions;
the former concludes that the graph has
no periodic proper coloring,
while
the latter concludes that every proper $k$-coloring of the
distance graph is periodic.
Suppose that a distance graph satisfies the hypothesis of
both propositions, that is,
the distance set contains a multiple of every
positive integer,
and
the graph has a finite, uniquely $k$-colorable subgraph.
Then the conclusions of both propositions must be true:
there is no periodic proper coloring,
and yet
every proper $k$-coloring is periodic.
We can only conclude that the distance graph must have no
proper $k$-coloring at all, and so we have the following result.
\begin{theorem} \label{T:omega+1}
Let $D$ be a set of positive integers, and
let $k$ be a positive integer.
If $D$ contains a multiple of every positive integer,
and $\gz{D}$ has a finite, uniquely $k$-colorable subgraph,
then $\ensuremath{\chi(D)}\ge k+1$.\ggcqed\end{theorem}
We can use Theorem~\ref{T:omega+1} to place a lower bound on
the chromatic number of the square distance graph.
Let $D$ be the set of all positive squares.
Any Pythagorean triple gives a $K_3$ in the square distance graph.
For example, the vertices $0$, $3^2$, $5^2$ induce a $K_3$, since
$3,4,5$ is a Pythagorean triple.
Since $\gz{D}$ has a $K_3$ subgraph,
$\ensuremath{\chi(D)}\ge3$.
Furthermore, $K_3$ is uniquely $3$-colorable, and $D$ contains
a multiple of every positive integer.
Thus, $\ensuremath{\chi(D)}\ge4$, by Theorem~\ref{T:omega+1}.
Eggleton~\cite{EggR97} has found a $K_4$ in the square distance graph:
the vertices are $0$, $672^2$, $680^2$, and $697^2$.
We have $680^2-672^2=104^2$, $697^2-680^2=153^2$, and
$697^2-672^2=185^2$.
Since the square distance graph has a uniquely $4$-colorable
subgraph, we have the following result.
\begin{corollary} \label{C:sqchige5}
The chromatic number of the square distance graph is
at least $5$.\ggcqed\end{corollary}
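Both cliques can be confirmed with elementary arithmetic; the following Python check (ours) verifies that all pairwise differences are perfect squares:

```python
import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

# K_3 from the Pythagorean triple 3, 4, 5: vertices 0, 3^2, 5^2.
triangle = [0, 3**2, 5**2]
# Eggleton's K_4: vertices 0, 672^2, 680^2, 697^2.
clique4 = [0, 672**2, 680**2, 697**2]

for clique in (triangle, clique4):
    print(all(is_square(b - a)
              for i, a in enumerate(clique) for b in clique[i + 1:]))
```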
We do not know whether
the square distance graph contains a $K_5$ or whether its
chromatic number is greater than $5$.
\begin{problem} \label{O:sqchi}
What is the chromatic number of the square distance graph?
Equivalently, what is the least number of classes into
which the integers can be partitioned,
so that no two members of the same class differ by a
square?\ggcqed\end{problem}
It seems likely that no finite number of colors suffices.
We can ask similar questions about the distance graph
resulting when $D$ is the set of all positive $n$th
powers, for $n\ge3$.
We know that these graphs contain no $K_3$
(this is equivalent to ``Fermat's Last Theorem'',
proven by Wiles \cite{WilA95}),
that they do not have periodic proper colorings,
by Proposition~\ref{P:noper},
and that their chromatic numbers are all at least $3$, by
Theorem~\ref{T:omega+1} (or Proposition~\ref{P:chi2}).
It seems likely that these graphs have infinite chromatic
number as well.
As noted in Section~\ref{S:intro},
determining which distance graphs have chromatic number
at most $k$, for a given $k\ge3$, appears to be
difficult.
A similar problem, whose difficulty we cannot estimate
at this time, is the following.
\begin{problem} \label{O:charinf}
Characterize those sets $D$ such that
$\ensuremath{\chi(D)}$ is\break
infinite.\ggcqed
\end{problem}
No coloring requiring an infinite number of colors is periodic.
Thus, by Proposition~\ref{P:noper}, a necessary condition for
such sets $D$ is that they contain a multiple of every positive integer.
However, this condition is not sufficient, by Remark~\ref{R:fact}.
\section*{Acknowledgments} \label{S:ack}
The author is grateful to Professor Roger Eggleton for bringing
this topic to his attention and for helpful discussions.
% Source: ``Coloring Distance Graphs on the Integers,'' arXiv:math/9805084, https://arxiv.org/abs/math/9805084
% Source: https://arxiv.org/abs/1810.02281
\bigskip\noindent
\textbf{A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks.}
We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network (parameterized as $x \mapsto W_N W_{N-1} \cdots W_1 x$) by minimizing the $\ell_2$ loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: \emph{(i)}~dimensions of hidden layers are at least the minimum of the input and output dimensions; \emph{(ii)}~weight matrices at initialization are approximately balanced; and \emph{(iii)}~the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions \emph{(ii)} and \emph{(iii)}) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension~1, \ie~scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, \eg~of deep linear residual networks (Bartlett et al., 2018).
\section{Approximate Balancedness and Deficiency Margin Under Customary Initialization} \label{app:balance_margin_init}
Two assumptions concerning initialization facilitate our main convergence result (Theorem~\ref{theorem:converge}):
\emph{(i)}~the initial weights $W_1(0),\ldots,W_N(0)$ are approximately balanced (see Definition~\ref{def:balance});
and \emph{(ii)}~the initial end-to-end matrix~$W_{1:N}(0)$ has positive deficiency margin with respect to the target~$\Phi$ (see Definition~\ref{def:margin}).
The current appendix studies the likelihood of these assumptions being met under customary initialization of random (layer-wise independent) Gaussian perturbations centered at zero.
For approximate balancedness we have the following claim, which shows that it becomes more and more likely the smaller the standard deviation of initialization is:
\vspace{1mm}
\begin{claim}
\label{claim:balance_init}
Assume all entries in the matrices $W_j\in\R^{d_j\times{d}_{j-1}}$, $j=1,\ldots,N$, are drawn independently at random from a Gaussian distribution with mean zero and standard deviation~$s>0$.
Then, for any~$\delta>0$, the probability of $W_1,\ldots,W_N$ being $\delta$-balanced is at least~$\max\{0,1-10\delta^{-2}Ns^4d_{max}^3\}$, where $d_{max}:=\max\{d_0,\ldots,d_N\}$.
\end{claim}
\vspace{-3mm}
\begin{proof}
See Appendix~\ref{app:proofs:balance_init}.
\end{proof}
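The scaling in Claim~\ref{claim:balance_init} can be sanity-checked numerically. The following self-contained Python sketch (plain lists rather than a linear-algebra library; all names are ours) samples layer-wise independent Gaussian weights and measures the largest balancedness gap $\max_j\|W_{j+1}^\top W_{j+1}-W_jW_j^\top\|_F$, which shrinks on the order of $s^2$ as the standard deviation~$s$ decreases:

```python
import random

def gaussian_matrix(rows, cols, s, rng):
    return [[rng.gauss(0.0, s) for _ in range(cols)] for _ in range(rows)]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def fro_dist(A, B):
    return sum((a - b) ** 2
               for ra, rb in zip(A, B) for a, b in zip(ra, rb)) ** 0.5

def max_balance_gap(dims, s, rng):
    """Largest ||W_{j+1}^T W_{j+1} - W_j W_j^T||_F over consecutive
    layers, for Gaussian initialization with standard deviation s.
    dims = [d_0, d_1, ..., d_N]; W_j has shape d_j x d_{j-1}."""
    Ws = [gaussian_matrix(dims[j + 1], dims[j], s, rng)
          for j in range(len(dims) - 1)]
    return max(fro_dist(matmul(transpose(Ws[j + 1]), Ws[j + 1]),
                        matmul(Ws[j], transpose(Ws[j])))
               for j in range(len(Ws) - 1))

rng = random.Random(0)
print(max_balance_gap([5, 5, 5, 5], 1e-3, rng) < 1e-2)  # True: near-balanced
```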
In terms of deficiency margin, the claim below treats the case of a single output model (scalar regression), and shows that if the standard deviation of initialization is sufficiently small, with probability close to~$0.5$, a deficiency margin will be met.
However, for this deficiency margin to meet a chosen threshold~$c$, the standard deviation need be sufficiently large.
\vspace{1mm}
\begin{claim}
\label{claim:margin_init}
There is a constant $C_1 > 0$ such that the following holds. Consider the case where~$d_N=1$, $d_0\geq20$,\note{
The requirement $d_0\geq20$ is purely technical, designed to simplify expressions in the claim.
} and suppose all entries in the matrices $W_j\in\R^{d_j\times{d}_{j-1}}$, $j=1,\ldots,N$, are drawn independently at random from a Gaussian distribution with mean zero, whose standard deviation~$s>0$ is small with respect to the target, \ie~$s\leq \| \Phi\|_F^{1/N} \big/ (10^5 d_0^3d_1 \cdots d_{N-1} C_1)^{1/(2N)}$.
Then, for any~$c$ with $0<c\leq \norm{\Phi}_{F} \big/ \big(10^{5}d_0^{3}C_1(C_{1}N)^{2N}\big)$, the probability of the end-to-end matrix~$W_{1:N}$ having deficiency margin~$c$ with respect to~$\Phi$ is at least~$0.49$ if:\,\footnote{
The probability $0.49$ can be increased to any $p < 1/2$ by increasing the constant $10^5$ in the upper bounds for~$s$ and~$c$.
}
\footnote{
It is not difficult to see that the latter threshold is never greater than the upper bound for~$s$, thus sought-after standard deviations always exist.
}
$$s\geq{c}^{1/(2N)}\cdot\big(C_{1}N\norm{\Phi}_F^{1/(2N)}/(d_1\cdots{d}_{N-1})^{1/(2N)}\big)
\text{~.}$$
\end{claim}
\vspace{-3mm}
\begin{proof}
See Appendix~\ref{app:proofs:margin_init}.
\end{proof}
\section{Convergence Failures} \label{app:fail}
In this appendix we show that the assumptions on initialization facilitating our main convergence result (Theorem~\ref{theorem:converge})~---~approximate balancedness and deficiency margin~---~are both necessary, by demonstrating cases where violating each of them leads to convergence failure.
This accords with widely observed empirical phenomena, by which successful optimization in deep learning crucially depends on careful initialization (\cf~\citet{sutskever2013importance}).
Claim~\ref{claim:diverge_balance} below shows\note{
For simplicity of presentation, the claim treats the case of even depth and uniform dimension across all layers.
It can easily be extended to account for arbitrary depth and input/output/hidden dimensions.
}
that if one omits from Theorem~\ref{theorem:converge} the assumption of approximate balancedness at initialization, no choice of learning rate can guarantee convergence:
\vspace{1mm}
\begin{claim}
\label{claim:diverge_balance}
Assume gradient descent with some learning rate~$\eta>0$ is applied to a network whose depth~$N$ is even, and whose input, output and hidden dimensions $d_0,\ldots,d_N$ are all equal to some $d\in\N$.
Then, there exist target matrices~$\Phi$ such that the following holds.
For any~$c$ with $0<c< \sigma_{min}(\Phi)$, there are initializations for which the end-to-end matrix~$W_{1:N}(0)$ has deficiency margin~$c$ with respect to~$\Phi$, and yet convergence will fail~---~objective will never go beneath a positive constant.
\end{claim}
\vspace{-3mm}
\begin{proof}
See Appendix~\ref{app:proofs:diverge_balance}.
\end{proof}
In terms of deficiency margin, we provide (by adapting Theorem~4 in~\citet{bartlett2018gradient}) a different, somewhat stronger result~---~there exist settings where initialization violates the assumption of deficiency margin, and despite being perfectly balanced, leads to convergence failure, for any choice of learning rate:\note{
This statement becomes trivial if one allows initialization at a suboptimal stationary point, \eg~$W_j(0)=0,~j=1,\ldots,N$.
Claim~\ref{claim:diverge_margin} rules out such trivialities by considering only non-stationary initializations.
}
\vspace{1mm}
\begin{claim}
\label{claim:diverge_margin}
Consider a network whose depth~$N$ is even, and whose input, output and hidden dimensions $d_0,\ldots,d_N$ are all equal to some $d\in\N$.
Then, there exist target matrices~$\Phi$ for which there are non-stationary initializations $W_1(0),\ldots,W_N(0)$ that are $0$-balanced, and yet lead gradient descent, under any learning rate, to fail~---~objective will never go beneath a positive constant.
\end{claim}
\vspace{-3mm}
\begin{proof}
See Appendix~\ref{app:proofs:diverge_margin}.
\end{proof}
\section{Implementation Details} \label{app:impl}
Below we provide implementation details omitted from our experimental report (Section~\ref{sec:exper}).
The platform used for running the experiments is PyTorch \citep{paszke2017automatic}.
For compliance with our analysis, we applied PCA whitening to the numeric regression dataset from UCI Machine Learning Repository.
That is, all instances in the dataset were preprocessed by an affine operator that ensured zero mean and identity covariance matrix.
Subsequently, we rescaled labels such that the uncentered cross-covariance matrix~$\Lambda_{yx}$ (see Section~\ref{sec:prelim}) has unit Frobenius norm (this has no effect on optimization other than calibrating learning rate and standard deviation of initialization to their conventional ranges).
With the training objective taking the form of Equation~\eqref{eq:loss_whitened}, we then computed~$c$~---~the global optimum~---~in accordance with the formula derived in Appendix~\ref{app:whitened}.
In our experiments with linear neural networks, balanced initialization was implemented with the assignment written in step~\emph{(iii)} of Procedure~\ref{proc:balance_init}.
In the non-linear network experiment, we added, for each $j\in\{1,\ldots,N-1\}$, a random orthogonal matrix to the right of~$W_j$, and its transpose to the left of~$W_{j+1}$~---~this assignment maintains the properties required from balanced initialization (see Footnote~\ref{note:balance_init_props}).
During all experiments, whenever we applied grid search over learning rate, values between $10^{-4}$ and~$1$ (in regular logarithmic intervals) were tried.
\section{Deferred Proofs} \label{app:proofs}
In addition to the notation specified in Section~\ref{sec:prelim}, we introduce some further notation here. We use~$\norm{A}_\sigma$ to denote the spectral norm (largest singular value) of a matrix~$A$, and sometimes~$\norm{\vv}_2$ as an alternative to~$\norm{\vv}$~---~the Euclidean norm of a vector~$\vv$.
Recall that for a matrix $A$, $vec(A)$ is its vectorization in column-first order.
We let $F(\cdot)$ denote the cumulative distribution function of the standard normal distribution, \ie~$F(x) = \int_{-\infty}^x\frac{1}{\sqrt{2\pi}}e^{-\frac12u^2}du$ ($x\in\R$).
To simplify the presentation we will oftentimes use~$W$ as an alternative (shortened) notation for~$W_{1:N}$~---~the end-to-end matrix of a linear neural network.
We will also use~$L(\cdot)$ as shorthand for~$L^1(\cdot)$~---~the loss associated with a (directly parameterized) linear model, \ie~$L(W) := \frac12\norm{W-\Phi}_F^2$.
Therefore, in the context of gradient descent training a linear neural network, the following expressions all represent the loss at iteration $t$:
\[
\ell(t)=L^N(W_1(t),\ldots,W_N(t))=L^1(W_{1:N}(t)) = L^1(W(t)) = L(W(t)) = \frac12\norm{W(t)-\Phi}_F^2\,.
\]
Also, for weights $W_j\in\R^{d_j\times{d}_{j-1}}$, $j=1,\ldots,{N}$ of a linear neural network, we generalize the notation~$W_{1:N}$, and define $W_{j:j'}:=W_{j'}W_{j'-1}\cdots{W}_j$ for every $1\leq{j}\leq{j'}\leq{N}$.
Note that $W_{j:j'}^{\top}=W_{j}^{\top}W_{j+1}^{\top}\cdots{W}_{j'}^{\top}$.
Then, by a simple gradient calculation, the gradient descent updates (\ref{eq:gd}) can be written as
\begin{equation}
\label{eq:wjupdate}
W_j(t+1) = W_j(t) - \eta W_{j+1:N}^\top(t) \cdot \frac{dL}{dW}(W(t)) \cdot W_{1:j-1}^\top(t) \quad , 1\le j \le N\,,
\end{equation}
where we define $W_{1:0}(t) := I_{d_0}$ and $W_{N+1:N}(t) := I_{d_N}$ for completeness.
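As a concrete illustration, for depth $N=2$ the update~\eqref{eq:wjupdate} reads $W_1\leftarrow W_1-\eta\,W_2^\top(W_2W_1-\Phi)$ and $W_2\leftarrow W_2-\eta\,(W_2W_1-\Phi)W_1^\top$. The toy Python sketch below (our own minimal implementation, with $d=2$ and the $0$-balanced initialization $W_1=W_2=0.1\,I$) runs these simultaneous updates and drives the loss $\frac12\norm{W_2W_1-\Phi}_F^2$ to numerical zero:

```python
def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def sub(A, B, eta=1.0):
    return [[a - eta * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def loss(W1, W2, Phi):
    E = sub(matmul(W2, W1), Phi)
    return 0.5 * sum(e * e for row in E for e in row)

Phi = [[1.0, 0.0], [0.0, 1.0]]
W1 = [[0.1, 0.0], [0.0, 0.1]]   # 0-balanced: W2^T W2 == W1 W1^T
W2 = [[0.1, 0.0], [0.0, 0.1]]
eta = 0.1
for _ in range(500):
    E = sub(matmul(W2, W1), Phi)  # dL/dW evaluated at W = W2 W1
    # Simultaneous update of both layers, as in gradient descent.
    W1, W2 = (sub(W1, matmul(transpose(W2), E), eta),
              sub(W2, matmul(E, transpose(W1)), eta))

print(loss(W1, W2, Phi) < 1e-8)  # True
```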
Finally, recall the standard definition of the tensor product of two matrices (also known as the Kronecker product):
for matrices $A \in \BR^{m_A \times n_A}, B \in \BR^{m_B \times n_B}$, their tensor product $A \otimes B \in \BR^{m_A m_B \times n_A n_B}$ is defined as
$$
A \otimes B = \left ( \begin{matrix}
a_{1,1} B& \cdots &a_{1,n_A} B \\
\vdots & \ddots & \vdots \\
a_{m_A,1} B & \cdots & a_{m_A,n_A} B
\end{matrix} \right),
$$
where $a_{i,j}$ is the element in the $i$-th row and $j$-th column of $A$.
\subsection{Proof of Claim~\ref{claim:margin_interp}} \label{app:proofs:margin_interp}
\begin{proof}
Recall that for any matrices $A$ and~$B$ of compatible sizes $\sigma_{min}(A+B)\geq\sigma_{min}(A)-\sigma_{max}(B)$, and that the Frobenius norm of a matrix is always lower bounded by its largest singular value (\citet{horn1990matrix}).
Using these facts, we have:
\beas
&&\sigma_{min}(W')=\sigma_{min}\big(\Phi+(W'-\Phi)\big)\geq\sigma_{min}(\Phi)-\sigma_{max}(W'-\Phi) \\[0.5mm]
&&~\qquad\qquad\geq\sigma_{min}(\Phi)-\norm{W'-\Phi}_F\geq\sigma_{min}(\Phi)-\norm{W-\Phi}_F \\[1mm]
&&~\qquad\qquad\geq\sigma_{min}(\Phi)-(\sigma_{min}(\Phi)-c)=c\,.
\eeas
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:descent}} \label{app:proofs:descent}
To prove Lemma \ref{lemma:descent}, we will in fact prove a stronger result, Lemma \ref{lem:remainsmooth} below, which states that for each iteration $t$, in addition to (\ref{eq:descent}) being satisfied, certain other properties are also satisfied, namely:
\emph{(i)} the weight matrices $W_1(t), \ldots, W_N(t)$ are $2\delta$-balanced, and
\emph{(ii)} $W_1(t), \ldots, W_N(t)$ have bounded spectral norms.
\begin{lemma}
\label{lem:remainsmooth}
Suppose the conditions of Theorem \ref{theorem:converge} are satisfied. Then for all $t \in \BN \cup \{0\}$,
\begin{enumerate}
\item[($\MA(t)$)] For $1 \leq j \leq N-1$, $\| W_{j+1}^\top(t) W_{j+1}(t) - W_j(t) W_j^\top(t) \|_F \leq 2\delta$.
\item[($\MA'(t)$)] If $t \geq 1$, then for $1 \leq j \leq N-1$,
\begin{eqnarray}
\label{eq:sumbalancet}
&&\| W_{j+1}^\top(t) W_{j+1}(t) - W_j(t) W_j^\top(t) \|_F \nonumber\\
&\leq& \| W_{j+1}^\top(t-1) W_{j+1}(t-1) - W_j(t-1) W_j^\top(t-1) \|_F \nonumber \\
&& + \eta^2 \left\| \frac{dL^1}{dW} (W(t-1))\right\|_F \cdot \left\| \frac{dL^1}{dW} (W(t-1)) \right\|_\sigma \cdot 4 \cdot (2 \| \Phi\|_F)^{2(N-1)/N}.
\end{eqnarray}
\item[($\MB(t)$)] If $t = 0$, then $\ell(t) \leq \frac 12 \| \Phi\|_F^2$. If $t \geq 1$, then \begin{equation*}
\ell(t) \leq
\ell(t-1) -\frac \eta 2 \sigma_{min}(W(t-1))^{\frac{2(N-1)}{N}} \left\| \frac{dL^1}{dW} (W(t-1)) \right\|_F^2.\quad
\end{equation*}
\item[($\MC(t)$)] For $1 \leq j \leq N$, $\| W_j(t) \|_\sigma \leq (4 \| \Phi\|_F)^{1/N}$.
\end{enumerate}
\end{lemma}
First we observe that Lemma \ref{lemma:descent} is an immediate consequence of Lemma \ref{lem:remainsmooth}.
\begin{proof}[Proof of Lemma \ref{lemma:descent}]
Notice that condition $\MB(t)$ of Lemma \ref{lem:remainsmooth} for each $t \geq 1$ immediately establishes the conclusion of Lemma \ref{lemma:descent} at time step $t-1$.
\end{proof}
\subsubsection{Preliminary lemmas}
We next prove some preliminary lemmas which will aid us in the proof of Lemma \ref{lem:remainsmooth}. The first is a matrix inequality that follows from Lidskii's theorem. For a matrix $A$, let $\Sing(A)$ denote the rectangular diagonal matrix {of the same size,} whose diagonal elements are the singular values of $A$ arranged in non-increasing order (starting from the $(1,1)$ position).
\begin{lemma}[\cite{bhatia_matrix_1997}, Exercise IV.3.5]
\label{lemma:bhatiaexc}
For any two matrices $A,B$ of the same size, $\| \Sing(A) - \Sing(B) \|_\sigma \leq \| A-B \|_\sigma$ and $\| \Sing(A) - \Sing(B) \|_F \leq \| A-B\|_F$.
\end{lemma}
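As a numerical sanity check of Lemma~\ref{lemma:bhatiaexc} (illustrative only, not part of the argument), the following self-contained Python sketch verifies the Frobenius-norm inequality on random $2\times 2$ matrices, computing singular values in closed form as square roots of the eigenvalues of $A^\top A$:

```python
import math
import random

def singvals_2x2(a):
    # Singular values of a 2x2 matrix, in non-increasing order:
    # square roots of the eigenvalues of A^T A (quadratic formula).
    (p, q), (r, s) = a
    g11 = p * p + r * r            # (A^T A)_{11}
    g22 = q * q + s * s            # (A^T A)_{22}
    g12 = p * q + r * s            # (A^T A)_{12}
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (math.sqrt((tr + disc) / 2), math.sqrt(max((tr - disc) / 2, 0.0)))

def frob_diff(u, v):
    # Frobenius distance between two flat sequences of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

random.seed(0)
for _ in range(1000):
    A = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
    B = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
    lhs = frob_diff(singvals_2x2(A), singvals_2x2(B))
    rhs = frob_diff([x for row in A for x in row], [y for row in B for y in row])
    assert lhs <= rhs + 1e-9       # || Sing(A) - Sing(B) ||_F <= || A - B ||_F
```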
Using Lemma \ref{lemma:bhatiaexc}, we get:
\begin{lemma}
\label{lem:rearrange}
Suppose $D_1, D_2 \in \BR^{d \times d}$ are non-negative diagonal matrices with non-increasing values along the diagonal and $O \in \BR^{d \times d}$ is an orthogonal matrix. Suppose that $\| D_1 - OD_2O^\top\|_F \leq \ep$, for some $\ep > 0$. Then:
\begin{enumerate}
\item $\| D_1 - OD_1O^\top \|_F \leq 2\ep$.
\item $\| D_1 - D_2 \|_F \leq \ep$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $D_1$ and $OD_2O^\top$ are both symmetric positive semi-definite matrices, their singular values are equal to their eigenvalues. Moreover, the singular values of $D_1$ are simply its diagonal elements and the singular values of $OD_2O^\top$ are simply the diagonal elements of $D_2$. Thus by Lemma \ref{lemma:bhatiaexc} we get that $\|D_1 - D_2 \|_F \leq \| D_1 - OD_2O^\top\|_F \leq \ep$. Since the Frobenius norm is unitarily invariant, $\| D_1 - D_2 \|_F = \| OD_1O^\top - OD_2O^\top \|_F$, and by the triangle inequality it follows that
\begin{align*}
\| D_1 - OD_1O^\top\|_F \leq \| OD_1O^\top - OD_2O^\top\|_F + \| D_1 - OD_2O^\top\|_F \leq 2\ep.
\end{align*}
\end{align*}
\end{proof}
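Both conclusions of Lemma~\ref{lem:rearrange} can likewise be probed numerically. The sketch below (again purely illustrative) draws random sorted non-negative diagonals $D_1, D_2$ and a random $2\times 2$ rotation $O$, and checks the two bounds against $\ep = \| D_1 - OD_2O^\top\|_F$:

```python
import math
import random

def rot(t):
    # 2x2 rotation matrix (orthogonal).
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(o, d):
    # O D O^T
    ot = [[o[0][0], o[1][0]], [o[0][1], o[1][1]]]
    return matmul(matmul(o, d), ot)

def fdist(a, b):
    return math.sqrt(sum((a[i][j] - b[i][j]) ** 2
                         for i in range(2) for j in range(2)))

random.seed(1)
for _ in range(1000):
    x, y = sorted((random.random() * 3 for _ in range(2)), reverse=True)
    u, v = sorted((random.random() * 3 for _ in range(2)), reverse=True)
    D1, D2 = [[x, 0.0], [0.0, y]], [[u, 0.0], [0.0, v]]
    O = rot(random.uniform(0, 2 * math.pi))
    ep = fdist(D1, conj(O, D2))
    assert fdist(D1, conj(O, D1)) <= 2 * ep + 1e-9   # conclusion 1
    assert fdist(D1, D2) <= ep + 1e-9                # conclusion 2
```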
Lemma \ref{lem:boundcommute} below states that if $W_1, \ldots, W_N$ are approximately balanced matrices, \ie~$W_{j+1}^\top W_{j+1} - W_j W_j^\top$ has small Frobenius norm for $1 \leq j \leq N-1$, then we can bound the Frobenius distance between $W_{1:j}^\top W_{1:j}$ and $(W_1^\top W_1)^j$ (as well as between $W_{j:N}W_{j:N}^\top$ and $(W_NW_N^\top)^{N-j+1}$).
\begin{lemma}
\label{lem:boundcommute}
Suppose that $d_N \leq d_{N-1}, d_0 \leq d_1$, and that for some $\nu > 0,M>0$, the matrices $W_j \in \BR^{d_j \times d_{j-1}}$, $1 \leq j \leq N$ satisfy, for $1 \leq j \leq N-1$,
\begin{equation}
\label{eq:pcndiff}
\| W_{j+1}^\top W_{j+1} - W_j W_j^\top \|_F \leq \nu,
\end{equation}
and for $1 \leq j \leq N$, $\| W_j\|_\sigma \leq M$. Then, for $1 \leq j \leq N$,
\begin{equation}
\label{eq:wj1w}
\| W_{1:j}^\top W_{1:j} - (W_1^\top W_1)^{j} \|_F \leq \frac 32 \nu \cdot M^{2(j-1)} j^2,
\end{equation}
and
\begin{equation}
\label{eq:wjnw}
\| W_{j:N} W_{j:N}^\top - (W_NW_N^\top)^{N-j+1}\|_F \leq \frac 32 \nu \cdot M^{2(N-j)} (N-j+1)^2.
\end{equation}
Moreover, if $ \sigma_{min}$ denotes the minimum singular value of $W_{1:N}$, $\sigma_{1,min}$ denotes the minimum singular value of $W_1$ and $\sigma_{N,min}$ denotes the minimum singular value of $W_N$, then
\begin{equation}
\label{eq:singvallb}
\sigma_{min}^2 - \frac 32 \nu M^{2(N-1)} N^2 \leq
\begin{cases}
\sigma_{N,min}^{2N} \quad : \quad d_N \leq d_0,\\
\sigma_{1,min}^{2N} \quad : \quad d_N \geq d_0.
\end{cases}
\end{lemma}
\begin{proof}
For $1 \leq j \leq N$, let us write the singular value decomposition of $W_j$ as $W_j = U_j \Sigma_j V_j^\top$, where $U_j\in \BR^{d_j\times d_j}$ and $V_j\in \BR^{d_{j-1}\times d_{j-1}}$ are orthogonal matrices and $\Sigma_j\in \BR^{d_j\times d_{j-1}}$ is diagonal. We may assume without loss of generality that the singular values of $W_j$ are non-increasing along the diagonal of $\Sigma_j$. Then we can write (\ref{eq:pcndiff}) as
$$
\| V_{j+1}\Sigma_{j+1}^\top \Sigma_{j+1} V_{j+1}^\top - U_j \Sigma_j \Sigma_j^\top U_j^\top \|_F \leq \nu.
$$
Since the Frobenius norm is invariant to orthogonal transformations, we get that
$$
\| \Sigma_{j+1}^\top\Sigma_{j+1} - V_{j+1}^\top U_j \Sigma_j \Sigma_j^\top U_j^\top V_{j+1} \|_F \leq \nu.
$$
By Lemma \ref{lem:rearrange}, we have that $\| \Sigma_{j+1}^\top \Sigma_{j+1} - \Sigma_j \Sigma_j^\top \|_F \leq \nu$ and $\| \Sigma_j \Sigma_j^\top - V_{j+1}^\top U_j \Sigma_j \Sigma_j^\top U_j^\top V_{j+1} \|_F \leq 2\nu$. We may rewrite the latter of these two inequalities as
$$
\| [\Sigma_j \Sigma_j^\top ,V_{j+1}^\top U_j] \|_F = \| [\Sigma_j \Sigma_j^\top, V_{j+1}^\top U_j]U_j^\top V_{j+1} \|_F =\| \Sigma_j \Sigma_j^\top- V_{j+1}^\top U_j \Sigma_j \Sigma_j^\top U_j^\top V_{j+1} \|_F \leq 2\nu.
$$
Note that
$$
W_{j:N} W_{j:N}^\top = W_{j+1:N} U_j\Sigma_j\Sigma_j^\top U_j^\top W_{j+1:N}^\top.
$$
For matrices $A,B$, we have that $\| AB\|_F \leq \| A \|_\sigma \cdot \| B \|_F$. Therefore, for $j+1 \leq i \leq N$, we have that
\begin{eqnarray}
&& \| W_{i:N} U_{i-1} (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j} U_{i-1}^\top W_{i:N}^\top - W_{i+1:N} U_{i} (\Sigma_{i}\Sigma_{i}^\top)^{i-j+1} U_{i}^\top W_{i+1:N}^\top\|_F \nonumber\\
& =& \| W_{i+1:N}U_{i} \left(\Sigma_{i}V_{i}^\top U_{i-1} (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j} U_{i-1}^\top V_{i}\Sigma_{i}^\top - (\Sigma_{i}\Sigma_{i}^\top)^{i-j+1} \right) U_{i}^\top W_{i+1:N}^\top \|_F \nonumber\\
&\leq& \| W_{i+1:N} U_{i}\Sigma_{i}\|_\sigma^2 \cdot \| (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j} + [V_{i}^\top U_{i-1}, (\Sigma_{i-1}\Sigma_{i-1}^\top)^{i-j}]U_{i-1}^\top V_{i} - (\Sigma_{i}^\top\Sigma_{i})^{i-j}\|_F\nonumber\\
& \leq & \| W_{i:N}\|_\sigma^2 \left( \| [V_{i}^\top U_{i-1}, (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j}]\|_F + \| (\Sigma_{i-1}\Sigma_{i-1}^\top)^{i-j} - (\Sigma_{i}^\top \Sigma_{i})^{i-j}\|_F\right)\nonumber.
\end{eqnarray}
Next, we have that
\begin{eqnarray}
\| [V_{i}^\top U_{i-1}, (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j}]\|_F & \leq & \sum_{k=0}^{i-j-1} \| (\Sigma_{i-1} \Sigma_{i-1}^\top)^k [V_i^\top U_{i-1}, \Sigma_{i-1} \Sigma_{i-1}^\top]( \Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j-1-k}\|_F \nonumber\\
& \leq & \sum_{k=0}^{i-j-1} \| (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j-1}\|_\sigma \cdot \| [V_i^\top U_{i-1} ,\Sigma_{i-1} \Sigma_{i-1}^\top]\|_F\nonumber\\
& \leq & (i-j) \| W_{i-1} \|_\sigma^{2(i-j-1)} \cdot 2\nu\nonumber.
\end{eqnarray}
We now argue that $\| (\Sigma_{i-1}\Sigma_{i-1}^\top)^k - (\Sigma_i^\top \Sigma_i)^k\|_F \leq \nu \cdot kM^{2(k-1)}$. Note that $\| \Sigma_{i-1} \Sigma_{i-1}^\top - \Sigma_i^\top \Sigma_i\|_F \leq \nu$, verifying the case $k=1$. To see the general case, since square diagonal matrices commute, we have that
\begin{eqnarray}
\| (\Sigma_{i-1}\Sigma_{i-1}^\top)^k - (\Sigma_i^\top \Sigma_i)^k\|_F&=& \left\| (\Sigma_{i-1}\Sigma_{i-1}^\top - \Sigma_i^\top \Sigma_i)\cdot \left( \sum_{\ell=0}^{k-1} (\Sigma_{i-1}\Sigma_{i-1}^\top)^\ell (\Sigma_i^\top \Sigma_i)^{k-1-\ell}\right)\right\|_F\nonumber\\
& \leq & \nu \cdot \sum_{\ell=0}^{k-1} \| W_{i-1}\|_\sigma^{2\ell} \cdot \| W_i\|_\sigma^{2(k-\ell-1)}\nonumber\\
& \leq & \nu k M^{2(k-1)}\nonumber.
\end{eqnarray}
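For diagonal (hence commuting) arguments, the power bound $\| X^k - Y^k\|_F \leq \nu k M^{2(k-1)}$ just established reduces to an entrywise statement that is easy to test numerically. The following sketch (not part of the proof) does so, with \texttt{M2} playing the role of the bound $M^2$ on the spectral norms of $X$ and $Y$:

```python
import math
import random

random.seed(2)
M2 = 2.0   # plays the role of the bound M^2 on the spectral norms
for _ in range(1000):
    k = random.randint(1, 6)
    x = [random.uniform(0, M2) for _ in range(4)]   # diagonal of X
    y = [random.uniform(0, M2) for _ in range(4)]   # diagonal of Y
    nu = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))         # ||X - Y||_F
    lhs = math.sqrt(sum((a ** k - b ** k) ** 2 for a, b in zip(x, y)))
    assert lhs <= nu * k * M2 ** (k - 1) + 1e-9     # ||X^k - Y^k||_F bound
all_ok = True
```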
It then follows that
\begin{eqnarray}
&& \| W_{i:N} U_{i-1} (\Sigma_{i-1} \Sigma_{i-1}^\top)^{i-j} U_{i-1}^\top W_{i:N}^\top - W_{i+1:N} U_{i} (\Sigma_{i}\Sigma_{i}^\top)^{i-j+1} U_{i}^\top W_{i+1:N}^\top\|_F \nonumber\\
& \leq & \| W_{i:N}\|_\sigma^2 \cdot \left((i-j) M^{2(i-j-1)} \cdot 2\nu + \nu(i-j) M^{2(i-j-1)}\right)\nonumber\\
& = & \| W_{i:N}\|_\sigma^2 \cdot 3\nu (i-j) M^{2(i-j-1)}\nonumber.
\end{eqnarray}
By the triangle inequality, we then have that
\begin{eqnarray}
&& \| W_{j:N}W_{j:N}^\top - U_N (\Sigma_N \Sigma_N^\top)^{N-j+1} U_N^\top\|_F\nonumber\\
& \leq & \nu \sum_{i=j+1}^N \| W_{i:N}\|_\sigma^2 \cdot 3(i-j) M^{2(i-j-1)}\nonumber\\
& \leq & 3\nu \sum_{i=j+1}^N (i-j) M^{2(N-i+1)} M^{2(i-j-1)}\nonumber\\
\label{eq:diffN}
&=& 3\nu M^{2(N-j)} \sum_{i=j+1}^N (i-j) \leq \frac 32 \nu \cdot M^{2(N-j)} \cdot (N-j+1)^2.
\end{eqnarray}
By an identical argument (formally, by replacing $W_j$ with $W_{N-j+1}^\top$), we get that
\begin{equation}
\label{eq:diff1}
\| W_{1:j}^\top W_{1:j} - V_1 (\Sigma_1^\top \Sigma_1)^{j} V_1^\top\|_F \leq \frac 32\nu \cdot M^{2(j-1)} \cdot j^2.
\end{equation}
(\ref{eq:diffN}) and (\ref{eq:diff1}) verify (\ref{eq:wjnw}) and (\ref{eq:wj1w}), respectively, so it only remains to verify (\ref{eq:singvallb}).
Letting $j=1$ in (\ref{eq:diffN}), we get
\begin{equation}
\label{eq:w1nundiff}
\| W_{1:N}W_{1:N}^\top - U_N(\Sigma_N \Sigma_N^\top)^{N} U_N^\top\|_F \leq \frac 32\nu \cdot M^{2(N-1)} \cdot N^2.
\end{equation}
Let us write the eigendecomposition of $W_{1:N}W_{1:N}^\top$ with an orthogonal eigenbasis as $W_{1:N}W_{1:N}^\top = U \Sigma U^\top$, where $\Sigma$ is diagonal with its (non-negative) elements arranged in non-increasing order and $U$ is orthogonal. We can write the left hand side of (\ref{eq:w1nundiff}) as $\| U \Sigma U^\top - U_N(\Sigma_N \Sigma_N^\top)^N U_N^\top\|_F = \| \Sigma - U^\top U_N (\Sigma_N \Sigma_N^\top)^N U_N^\top U\|_F$.
By Lemma \ref{lem:rearrange}, we have that
\begin{equation}
\label{eq:ntildediff}
\| \Sigma - (\Sigma_N\Sigma_N^\top)^N \|_F \leq \frac 32 \nu M^{2(N-1)} N^2.
\end{equation}
Recall that $W_{1:N} \in \BR^{d_N \times d_0}$. Suppose first that $d_N \leq d_0$. Let $\sigma_{min}$ denote the minimum singular value of $W_{1:N}$ (so that $\sigma_{min}^2$ is the element in the $(d_N, d_N)$ position of $ \Sigma \in \BR^{d_N \times d_N}$), and $\sigma_{N,min}$ denote the minimum singular value (i.e.~diagonal element) of $\Sigma_N$, which lies in the $(d_N, d_N)$ position of $\Sigma_N$. (Note that the $(d_N, d_N)$ position of $\Sigma_N \in \BR^{d_N \times d_{N-1}}$ exists since $d_{N-1} \geq d_N$ by assumption.) Then, comparing the $(d_N, d_N)$ entries on both sides of (\ref{eq:ntildediff}),
$$
(\sigma_{N,min}^{2N} - \sigma_{min}^2)^2 \leq \left(\frac 32 \nu M^{2(N-1)} N^2\right)^2,
$$
so
$$
\sigma_{N,min}^{2N} \geq \sigma_{min}^2 - {\frac 32 \nu M^{2(N-1)} N^2}.
$$
By an identical argument using (\ref{eq:diff1}), we get that, in the case that $d_0 \leq d_N$, if $\sigma_{1,min}$ denotes the minimum singular value of $\Sigma_1$, then
$$
\sigma_{1,min}^{2N} \geq \sigma_{min}^2 - \frac{3}{2} \nu M^{2(N-1)} N^2.
$$
(Notice that we have used the fact that the nonzero eigenvalues of $W_{1:N}W_{1:N}^\top$ are the same as the nonzero eigenvalues of $W_{1:N}^\top W_{1:N}$.) This completes the proof of (\ref{eq:singvallb}).
\end{proof}
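In the special case of diagonal factors (where all matrices commute and Frobenius norms are plain $\ell_2$ norms of the diagonals), the bound (\ref{eq:wj1w}) can be verified directly. The following illustrative Python sketch does so, with $\nu$ computed as the largest consecutive imbalance:

```python
import math
import random

random.seed(3)
N, d, M = 4, 3, 1.5
for _ in range(300):
    # Diagonal (hence commuting) factors with entries in (0, M].
    W = [[random.uniform(0.5, M) for _ in range(d)] for _ in range(N)]
    # nu = largest consecutive imbalance || W_{j+1}^2 - W_j^2 ||_F.
    nu = max(math.sqrt(sum((W[j + 1][i] ** 2 - W[j][i] ** 2) ** 2
                           for i in range(d)))
             for j in range(N - 1))
    for j in range(1, N + 1):
        prod = [1.0] * d
        for k in range(j):
            prod = [p * W[k][i] for i, p in enumerate(prod)]
        # || W_{1:j}^T W_{1:j} - (W_1^T W_1)^j ||_F in the diagonal case:
        lhs = math.sqrt(sum((prod[i] ** 2 - W[0][i] ** (2 * j)) ** 2
                            for i in range(d)))
        assert lhs <= 1.5 * nu * M ** (2 * (j - 1)) * j * j + 1e-9
all_ok = True
```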
Using Lemma \ref{lem:boundcommute}, we next show in Lemma \ref{lem:boundindiv} that if {$W_1,\ldots,W_N$ are approximately balanced,}
then an upper bound on $\| W_N \cdots W_1\|_\sigma$ implies an upper bound on $\| W_j\|_\sigma$ for $1 \leq j \leq N$.
\begin{lemma}
\label{lem:boundindiv}
Suppose $\nu, C$ are real numbers satisfying $C > 0$ and $0 < \nu \leq \frac{ C^{2/N}}{30 N^2}$. Moreover suppose that the matrices $W_1, \ldots, W_N$ satisfy the following:
\begin{enumerate}
\item For $1 \leq j \leq N-1$, $\| W_{j+1}^\top W_{j+1} - W_j W_j^\top \|_F \leq \nu$.
\item $\| W_N \cdots W_1 \|_\sigma \leq C$.
\end{enumerate}
Then for $1 \leq j \leq N$, $\| W_j\|_\sigma \leq C^{1/N} \cdot 2^{1/(2N)}$.
\end{lemma}
\begin{proof}
For $1 \leq j \leq N$, let us write the singular value decomposition of $W_j$ as $W_j = U_j \Sigma_j V_j^\top$, where the singular values of $W_j$ are decreasing along the main diagonal of $\Sigma_j$. By Lemma \ref{lem:rearrange}, we have that for $1 \leq j \leq N-1$, $\| \Sigma_{j+1}^\top \Sigma_{j+1} - \Sigma_j \Sigma_j^\top \|_F \leq \nu$, which implies
that $\left|\| \Sigma_{j+1}^\top \Sigma_{j+1}\|_\sigma -\| \Sigma_j \Sigma_j^\top \|_\sigma \right| \leq \nu$.
Write $M = \max_{1 \leq j \leq N} \| W_j\|_\sigma = \max_{1 \leq j \leq N} \| \Sigma_j \|_\sigma$. By the above we have that $\| \Sigma_j \Sigma_j^\top\|_\sigma \geq M^2 - N\nu$ for $1 \leq j \leq N$.
Let the singular value decomposition of $W_{1:N}$ be denoted by $W_{1:N} = U\Sigma V^\top$, so that $\| \Sigma \|_\sigma \leq C$. Then by (\ref{eq:wjnw}) of Lemma \ref{lem:boundcommute} and Lemma \ref{lem:rearrange} (see also (\ref{eq:ntildediff}), where the same argument was used), we have that
$$
\| \Sigma\Sigma^\top - (\Sigma_N \Sigma_N^\top)^N \|_F\leq \frac 32 \nu M^{2(N-1)}N^2.
$$
Then
\begin{equation}
\label{eq:sigman1}
\| (\Sigma_N \Sigma_N^\top)^N\|_\sigma \leq \|\Sigma\Sigma^\top\|_\sigma + \frac 32 \nu M^{2(N-1)}N^2 \leq \| \Sigma \Sigma^\top \|_\sigma + \frac 32 \nu \left( \| \Sigma_N \Sigma_N^\top\|_\sigma + \nu N\right)^{N-1} N^2.
\end{equation}
Now recall that $\nu$ is chosen so that $\nu \leq \frac{ C^{2/N}}{30 \cdot N^2}.$ Suppose for the purpose of contradiction that there is some $j$ such that $\| W_jW_j^\top \|_\sigma > 2^{1/N} C^{2/N}$. Then it must be the case that
\begin{equation}
\label{eq:2nabove}
\| \Sigma_N \Sigma_N^\top\|_\sigma > 2^{1/N} C^{2/N} - \nu \cdot N \geq (5/4)^{1/N} C^{2/N} > \nu \cdot 30N^2,
\end{equation}
where we have used that
$$
2^{1/N} - (5/4)^{1/N} \geq \frac{1}{30N}
$$
for all $N \geq 2$; indeed, since $e^x - e^y \geq x - y$ for $x \geq y \geq 0$, we have $2^{1/N} - (5/4)^{1/N} = e^{\ln(2)/N} - e^{\ln(5/4)/N} \geq \frac{\ln(8/5)}{N} > \frac{1}{30N}$.
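This elementary inequality can also be confirmed numerically over a wide range of $N$ (an illustrative check, not a proof):

```python
# Check 2^(1/N) - (5/4)^(1/N) >= 1/(30N) for N = 2, ..., 10^5.
for N in range(2, 100001):
    assert 2 ** (1 / N) - 1.25 ** (1 / N) >= 1 / (30 * N)
checked = True
```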
We now rewrite inequality (\ref{eq:2nabove}) as
\begin{equation}
\label{eq:boundepcontr}
\nu \leq \frac{\| \Sigma_N \Sigma_N^\top\|_\sigma}{30N^2}.
\end{equation}
Next, using (\ref{eq:boundepcontr}) and $(1+1/x)^x \leq e$ for all $x > 0$,
\begin{equation}
\label{eq:sigman2}
\frac 32 \nu \left( \| \Sigma_N \Sigma_N^\top\|_\sigma + \nu N\right)^{N-1} N^2 \leq \frac{e^{1/30}}{20} \cdot \| \Sigma_N \Sigma_N^\top \|_\sigma^{N} < \frac{e}{20} \cdot \| \Sigma_N \Sigma_N^\top \|_\sigma^{N}.
\end{equation}
Since $\| (\Sigma_N \Sigma_N^\top)^N \|_\sigma = \| \Sigma_N \Sigma_N^\top\|_\sigma^N$, we get by combining (\ref{eq:sigman1}) and (\ref{eq:sigman2}) that
$$
\| \Sigma_N \Sigma_N^\top \|_\sigma < (1-e/20)^{-1/N} \cdot \| \Sigma \Sigma^\top\|_\sigma^{1/N} \leq (1-e/20)^{-1/N} \cdot C^{2/N},
$$
and since $1-e/20 > 4/5$, it follows that $\| \Sigma_N \Sigma_N^\top\|_\sigma < (5/4)^{1/N} C^{2/N}$, which contradicts (\ref{eq:2nabove}). It follows that for all $1 \leq j \leq N$, $\| W_j W_j^\top \|_\sigma \leq 2^{1/N} C^{2/N}$. The conclusion of the lemma then follows from the fact that $\| W_j W_j^\top\|_\sigma = \| W_j\|_\sigma^2$.
\end{proof}
\subsubsection{Single-Step Descent}
Lemma~\ref{lem:boundgdupdate} below states that if certain conditions on $W_1(t),\ldots,W_N(t)$ are met, the sought-after descent~---~Equation~\eqref{eq:descent}~---~will take place at iteration~$t$.
We will later show (by induction) that the required conditions indeed hold for every~$t$, thus the descent persists throughout optimization.
The proof of Lemma~\ref{lem:boundgdupdate} is essentially a discrete, single-step analogue of the continuous proof for Lemma~\ref{lemma:descent} (covering the case of gradient flow) given in Section~\ref{sec:converge}.
\begin{lemma}
\label{lem:boundgdupdate}
Assume the conditions of Theorem~\ref{theorem:converge}.
Moreover, suppose that for some $t$, the matrices $W_1(t), \ldots, W_N(t)$ and the end-to-end matrix $W(t) := W_{1:N}(t)$ satisfy the following properties:
\begin{enumerate}
\item $\| W_j(t)\|_\sigma \leq (4 \| \Phi\|_F)^{1/N}$ for $1 \leq j \leq N$.
\item $\| W(t) - \Phi\|_\sigma \leq \| \Phi\|_F$.
\item $\| W_{j+1}^\top(t) W_{j+1}(t) - W_j(t)W_j^\top(t)\|_F \leq 2\delta$ for $1 \leq j \leq N-1$.
\item ${\sigma}_{min}:={\sigma}_{min}(W(t)) \geq c$.
\end{enumerate}
Then, after applying a gradient descent update (\ref{eq:gd}) we have that
$$
L(W(t+1)) - L(W(t)) \leq - \frac \eta 2 \sigma_{min}^{2(N-1)/N} \left\| \frac{dL}{dW}(W(t)) \right\|_F^2.
$$
\end{lemma}
\begin{proof}
For simplicity write $M = (4 \| \Phi\|_F)^{1/N}$ and $B = \| \Phi\|_F$. We first claim that
{\small\begin{equation}
\label{eq:assumeeta}
\eta \leq \min \left\{ \frac{1}{2M^{N-2}BN}, \frac{\sigma_{min}^{2(N-1)/N}}{24 \cdot 2 M^{3N-4} N^2B}, \frac{\sigma_{min}^{2(N-1)/N}}{24N^2 M^{4(N-1)}}, \frac{\sigma_{min}^{2(N-1)/(3N)}}{ \left(24 \cdot 4M^{6N-8}N^4B^2 \right)^{1/3}}\right\}.
\end{equation}}
Since $c \leq \sigma_{min}$, for (\ref{eq:assumeeta}) to hold it suffices to have
\begin{eqnarray}
\eta &\leq& \min \left\{ \frac{1}{8 \| \Phi\|_F^{(2N-2)/N} N}, \frac{c^{2(N-1)/N}}{3 \cdot 2^{11} \| \Phi\|_F^{4(N-1)/N} N^2},\frac{c^{2(N-1)/(3N)}}{3 \cdot 2^6 \left(\| \Phi\|_F^{(8N-8)/N}\right)^{1/3} N^{4/3}}\right\}\nonumber.
\end{eqnarray}
As the minimum singular value of $\Phi$ must be at least $c$, we have $c \leq \| \Phi\|_\sigma$. Since $\frac{c}{\| \Phi\|_F} \leq \frac{c}{\| \Phi\|_\sigma} \leq 1$, it holds that
$$
\frac{c^{2(N-1)/N}}{\| \Phi\|_F^{4(N-1)/N}} \leq \min \left\{ \frac{1}{\| \Phi\|_F^{2(N-1)/N}}, \frac{c^{2(N-1)/(3N)}}{\| \Phi\|_F^{(8N-8)/(3N)}} \right\},
$$
meaning that it suffices to have
$$
\eta \leq \frac{ c^{2(N-1)/N}}{3 \cdot 2^{11} N^2 \| \Phi\|_F^{4(N-1)/N}},
$$
which is guaranteed by (\ref{eq:descent_eta}).
Next, we claim that
\begin{eqnarray}
\label{eq:assumeep}
2\delta &\leq &\min \left\{ \frac{ c^{2(N-1)/N}}{8 \cdot 2^4N^3 \| \Phi\|_F^{2(N-2)/N}}, \frac{c^{2}}{6 \cdot 2^4 N^2 \| \Phi\|_F^{2(N-1)/N}}\right\}\\
& \leq & \min \left\{ \frac{ \sigma_{min}^{2(N-1)/N}}{8N^3M^{2(N-2)}}, \frac{\sigma_{min}^{2}}{6N^2 M^{2(N-1)}}\right\} .\nonumber
\end{eqnarray}
The second inequality above is trivial, and for the first to hold, since $c \le \| \Phi\|_F$, it suffices to take
$$
2\delta \leq \frac{c^2}{128 \cdot N^3 \cdot \| \Phi\|_F^{2(N-1)/N}},
$$
which is guaranteed by the definition of $\delta$ in Theorem \ref{theorem:converge}.
With these bounds on $\eta$ and $\delta$ in place, we proceed with the rest of the proof.
It follows from (\ref{eq:wjupdate}) that\footnote{Here, for matrices $A_1, \ldots, A_K$ such that $A_K A_{K-1} \cdots A_1$ is defined, we write $\prod_{1}^{j=K} A_j := A_K A_{K-1} \cdots A_1$.
}
\begin{eqnarray}
&& W(t+1) - W(t) \nonumber\\
&=& \prod_{1}^{j=N} \left( W_j(t) - \eta W_{j+1:N}^\top(t) \frac{dL}{dW}(W(t)) W_{1:j-1}^\top(t) \right) - W_{1:N}(t)\nonumber\\
\label{eq:wt1wt}
&=& -\eta \left(\sum_{j=1}^N W_{j+1:N}(t)W_{j+1:N}^\top(t) \frac{dL}{dW}(W(t)) W_{1:j-1}^\top(t) W_{1:j-1}(t)\right) + (\star),
\end{eqnarray}
where $(\star)$ denotes higher order terms in $\eta$. We now bound the Frobenius norm of $(\star)$. To do this, note that since $L(W) = \frac 12 \| W - \Phi\|_F^2$, $\frac{dL}{dW}(W(t)) = W(t) - \Phi$. Then
\begin{eqnarray}
\| (\star) \|_F & \leq & \sum_{k=2}^N \eta^k \cdot M^{k(N-1) + N-k} \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_\sigma^{k-1} \cdot {N \choose k}\nonumber\\
& \leq & \eta M^{2N-2}N \left\| \frac{dL}{dW}(W(t)) \right\|_F \sum_{k=2}^N \left(\eta M^{N-2} BN\right)^{k-1}\nonumber\\
\label{eq:starnorm}
& \leq & \eta \cdot (2\eta M^{3N-4} N^2B) \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F,
\end{eqnarray}
where the last inequality uses $\eta M^{N-2} BN \leq 1/2$, which is a consequence of (\ref{eq:assumeeta}). Next, by Lemma \ref{lem:boundcommute} with $\nu = 2\delta$,
{\small\begin{eqnarray}
&& \left\|\sum_{j=1}^N W_{j+1:N}(t)W_{j+1:N}^\top(t) \frac{dL}{dW}(W(t)) W_{1:j-1}^\top(t) W_{1:j-1}(t) \right.\nonumber\\
&&- \left. \sum_{j=1}^N (W_NW_N^\top)^{N-j} \frac{dL}{dW}(W(t)) (W_1^\top W_1)^{j-1} \right\|_F\nonumber\\
& \leq & \left\|\sum_{j=1}^N \left(W_{j+1:N}(t)W_{j+1:N}^\top(t) - (W_NW_N^\top)^{N-j}\right) \frac{dL}{dW}(W(t)) W_{1:j-1}^\top(t) W_{1:j-1}(t) \right\|_F \nonumber\\
& + & \left\| \sum_{j=1}^N (W_NW_N^\top)^{N-j} \frac{dL}{dW}(W(t)) \left(W_{1:j-1}^\top(t) W_{1:j-1}(t) - (W_1^\top W_1)^{j-1}\right)\right\|_F\nonumber\\
& \leq & \left\| \frac{dL}{dW}(W(t)) \right\|_F \cdot \left(\sum_{j=1}^{N-1} \frac 32 2\delta \cdot M^{2(N-j-1)}(N-j)^2 M^{2(j-1)} + \sum_{j=2}^N \frac 32 2\delta \cdot M^{2(j-2)}(j-1)^2 M^{2(N-j)}\right)\nonumber\\
& \leq & \left\| \frac{dL}{dW}(W(t)) \right\|_F \cdot 2\delta N^3 M^{2(N-2)}\nonumber.
\end{eqnarray}}
Next, by the standard identity $vec(AXB) = (B^\top \otimes A)\, vec(X)$ for the Kronecker product, together with the symmetry of $(W_1^\top W_1)^{j-1}$, we have that
\begin{eqnarray}
&& vec\left(\sum_{j=1}^N (W_NW_N^\top)^{N-j} \frac{dL}{dW}(W(t)) (W_1^\top W_1)^{j-1} \right)\nonumber\\
&=& \sum_{j=1}^N \left( (W_1^\top W_1)^{j-1} \otimes (W_NW_N^\top)^{N-j} \right)vec \left( \frac{dL}{dW}(W(t))\right)\nonumber.
\end{eqnarray}
Let us write eigenvalue decompositions $W_1^\top W_1 = UDU^\top, W_NW_N^\top = VEV^\top$. Then
\begin{eqnarray}
&& \sum_{j=1}^N \left( (W_1^\top W_1)^{j-1} \otimes (W_NW_N^\top)^{N-j} \right)\nonumber\\
&=& \sum_{j=1}^N \left( UD^{j-1}U^\top \otimes VE^{N-j} V^\top \right)\nonumber\\
&=& (U \otimes V) \left(\sum_{j=1}^N D^{j-1} \otimes E^{N-j}\right) (U \otimes V)^\top\nonumber\\
&=& O\Lambda O^\top\nonumber,
\end{eqnarray}
with $O = U \otimes V$, and $\Lambda = \sum_{j=1}^N D^{j-1} \otimes E^{N-j}$. Since $W_1 \in \BR^{d_1 \times d_0}$ and $W_N \in \BR^{d_N \times d_{N-1}}$, we have $D \in \BR^{d_0 \times d_0}$ and $E \in \BR^{d_N \times d_N}$, so $\Lambda \in \BR^{d_0d_N \times d_0d_N}$. Moreover note that $\Lambda \succeq D^0 \otimes E^{N-1} + D^{N-1} \otimes E^0 = I_{d_0} \otimes E^{N-1} + D^{N-1} \otimes I_{d_N}$. If $\lambda_D$ denotes the minimum diagonal element of $D$ and $\lambda_E$ denotes the minimum diagonal element of $E$, then the minimum diagonal element of $\Lambda$ is therefore at least $\lambda_D^{N-1} + \lambda_E^{N-1}$. But, it follows from Lemma \ref{lem:boundcommute} (with $\nu = 2\delta$) that
$$
\max\{ \lambda_D^{N}, \lambda_E^{N} \} \geq \sigma_{min}^2 - \frac 32 2\delta M^{2(N-1)} N^2 \geq 3\sigma_{min}^2/4,
$$
where the second inequality follows from (\ref{eq:assumeep}). Hence the minimum diagonal element of $\Lambda$ is at least $( \sigma_{min}^2/(4/3))^{(N-1)/N} \geq \sigma_{min}^{2(N-1)/N}/(4/3)$.
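Since $D$ and $E$ are diagonal, the diagonal of $\Lambda = \sum_{j=1}^N D^{j-1}\otimes E^{N-j}$ consists of the sums $\sum_j d^{j-1} e^{N-j}$ over pairs of diagonal entries $(d, e)$, and keeping only the $j=N$ and $j=1$ terms yields the lower bound $\lambda_D^{N-1} + \lambda_E^{N-1}$. The following illustrative sketch confirms this numerically:

```python
import random

random.seed(4)
for _ in range(500):
    N = random.randint(2, 6)
    D = [random.uniform(0, 2) for _ in range(3)]   # diagonal of D
    E = [random.uniform(0, 2) for _ in range(3)]   # diagonal of E
    # Diagonal entries of Lambda = sum_j D^{j-1} (x) E^{N-j}, indexed by (d, e).
    lam_min = min(sum(de ** (j - 1) * ee ** (N - j) for j in range(1, N + 1))
                  for de in D for ee in E)
    assert lam_min >= min(D) ** (N - 1) + min(E) ** (N - 1) - 1e-12
all_ok = True
```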
It follows as a result of the above inequalities that if we write $E(t) = vec(W(t+1)) - vec(W(t)) + \eta (O\Lambda O^\top) vec\left( \frac{dL}{dW}(W(t)) \right)$, then
\begin{eqnarray}
\| E(t) \|_2 & = &\left\| vec(W(t+1)) - vec(W(t)) + \eta (O\Lambda O^\top) vec\left( \frac{dL}{dW}(W(t)) \right) \right\|_2\nonumber\\
&\leq & \eta \left\| \frac{dL}{dW}(W(t)) \right\|_F \cdot (2\eta M^{3N-4} N^2B + 2\delta N^3 M^{2(N-2)}) \nonumber.
\end{eqnarray}
Then we have
{\small\begin{eqnarray}
&& L(W(t+1)) - L(W(t)) \nonumber\\
& \leq & vec\left( \frac{d}{dW} L(W(t)) \right)^\top vec \left( W(t+1) - W(t) \right) + \frac 12 \| W(t+1) - W(t) \|_F^2\nonumber\\
& = & \eta \left( - vec\left(\frac{d}{dW} L(W(t))\right)^\top (O\Lambda O^\top) vec \left(\frac{d}{dW} L(W(t)) \right) + \frac{1}{\eta}vec\left(\frac{d}{dW} L(W(t))\right)^\top E(t) \right) \nonumber\\
&& + \frac 12 \| W(t+1) - W(t) \|_F^2\nonumber\\
& \leq & \eta \left(-\left\| \frac{d}{dW} L(W(t)) \right\|_F^2 \cdot \frac{ \sigma_{min}^{2(N-1)/N}}{4/3} + \left\| \frac{d}{dW} L(W(t)) \right\|_F^2 \cdot \left(2\eta M^{3N-4} N^2B + 2\delta N^3 M^{2(N-2)}\right) \right) \nonumber\\
&& + \frac 12 \| W(t+1) - W(t) \|_F^2\nonumber,
\end{eqnarray}}
where the first inequality follows since $L(W) = \frac 12 \| W - \Phi\|_F^2$ is $1$-smooth as a function of $W$. Next, by (\ref{eq:wt1wt}) and (\ref{eq:starnorm}),
\begin{eqnarray}
&& \| W(t+1) - W(t) \|_F^2 \nonumber\\
& \leq & 2\eta^2 \cdot \left(N M^{2(N-1)} \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F\right)^2 + 2\eta^2 \cdot (2\eta M^{3N-4} N^2B)^2 \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F^2\nonumber\\
\label{eq:boundwtwt1}
& = &2 \eta^2 \left\| \frac{dL}{dW}(W(t)) \right\|_F^2 \cdot \left( N^2M^{4(N-1)} + (4\eta^2 M^{6N-8} N^4B^2)\right).
\end{eqnarray}
Thus
\begin{eqnarray}
&& L(W(t+1)) - L(W(t))\nonumber\\
& \leq & \eta \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F^2 \cdot \left( -\frac{\sigma_{min}^{2(N-1)/N}}{4/3} + 2\eta M^{3N-4} N^2B + 2\delta N^3 M^{2(N-2)} \right.\nonumber\\
&& \left. + \eta \cdot (N^2M^{4(N-1)} + 4\eta^2 M^{6N-8}N^4B^2) \right)\nonumber.
\end{eqnarray}
By (\ref{eq:assumeeta}, \ref{eq:assumeep}), which bound $\eta, 2\delta$, respectively, we have that
{\small\begin{eqnarray}
&&L(W(t+1)) - L(W(t))\nonumber\\
& \leq& \eta \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_F^2 \cdot \left( -\frac{\sigma_{min}^{2(N-1)/N}}{4/3} + \frac{\sigma_{min}^{2(N-1)/N}}{24} + \frac{\sigma_{min}^{2(N-1)/N}}{8} + \frac{\sigma_{min}^{2(N-1)/N}}{24} + \frac{\sigma_{min}^{2(N-1)/N}}{24} \right)\nonumber\\
\label{eq:descentl}
&=& - \frac 12 \sigma_{min}^{2(N-1)/N} \eta \left\| \frac{dL}{dW}(W(t)) \right\|_F^2.
\end{eqnarray}}
\end{proof}
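In the scalar case $d_0 = \cdots = d_N = 1$ with exactly balanced factors ($\delta = 0$), the update (\ref{eq:gd}) reduces to $w \mapsto w - \eta\, w^{N-1}(w^N - \phi)$, and the descent guarantee of Lemma~\ref{lem:boundgdupdate} can be tracked along the whole trajectory. The following minimal Python sketch (an illustration under these simplifying assumptions, not the general proof) does so for $N = 3$, $\phi = 1$:

```python
# Balanced scalar deep linear network: w_1 = ... = w_N = w, end-to-end W = w^N.
# Assumptions for this illustration: d = 1, delta = 0, phi = 1, N = 3.
N, phi, eta = 3, 1.0, 1e-3
w = 0.5 ** (1.0 / N)            # W(0) = 0.5, deficiency margin c = 0.5
for _ in range(5000):
    W = w ** N
    loss = 0.5 * (W - phi) ** 2
    # Per-factor gradient step; all factors stay equal (balanced throughout).
    w = w - eta * w ** (N - 1) * (W - phi)
    new_loss = 0.5 * (w ** N - phi) ** 2
    # Guaranteed single-step decrease from the lemma:
    bound = -0.5 * eta * abs(W) ** (2 * (N - 1) / N) * (W - phi) ** 2
    assert new_loss - loss <= bound + 1e-12
```

In this balanced scalar setting the actual one-step decrease is roughly $2N$ times the guaranteed amount, so the bound holds with a comfortable margin throughout.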
\subsubsection{Proof of Lemma \ref{lem:remainsmooth}}
\begin{proof}[Proof of Lemma \ref{lem:remainsmooth}]
We use induction on $t$, beginning with the base case $t=0$. Since the weights $W_1(0), \ldots, W_N(0)$ are $\delta$-balanced, we get that $\MA(0)$ holds automatically. To establish $\MB(0)$, note that since $W_{1:N}(0)$ has deficiency margin $c > 0$ with respect to $\Phi$, we must have $\| W_{1:N}(0) - \Phi \|_F \leq \sigma_{min}(\Phi) \leq \|\Phi\|_F$, meaning that $L^1(W_{1:N}(0)) \leq \frac 12 \| \Phi\|_F^2$.
Finally, by $\MB(0)$, which gives $\| W(0) - \Phi\|_F \leq \| \Phi\|_F$, we have that
\begin{equation}
\label{eq:boundprod0}
\| W(0) \|_\sigma \leq \| W(0) \|_F \leq \| W(0) - \Phi\|_F + \| \Phi\|_F \leq 2 \| \Phi\|_F.
\end{equation}
To show that the above implies $\MC(0)$, we use condition $\MA(0)$ and Lemma \ref{lem:boundindiv} with $C = 2 \| \Phi\|_F$ and $\nu = 2\delta$. By the definition of $\delta$ in Theorem \ref{theorem:converge} and since $c \leq \| \Phi\|_F$, we have that
\begin{equation}
\label{eq:epltc}
2\delta \leq \frac{c^2}{128 \cdot N^3 \cdot \| \Phi\|_F^{2(N-1)/N}} = \frac{\| \Phi\|_F^{2/N}}{128N^3} \cdot \frac{c^2}{\| \Phi\|_F^2}< \frac{\| \Phi\|_F^{2/N}}{30 N^2},
\end{equation}
as required by Lemma \ref{lem:boundindiv}.
As $\MA(0)$ and (\ref{eq:boundprod0}) verify the preconditions 1.~and 2., respectively, of Lemma \ref{lem:boundindiv}, it follows that for $1 \leq j \leq N$, $\| W_j(0) \|_\sigma \leq (2 \| \Phi\|_F)^{1/N} \cdot 2^{1/(2N)} < (4 \| \Phi\|_F)^{1/N}$, verifying $\MC(0)$ and completing the proof of the base case.
The remainder of the proof of Lemma \ref{lem:remainsmooth} follows from the following inductive claims.
\begin{enumerate}
\item $\MA(t), \MB(t), \MC(t) \Rightarrow \MB(t+1)$. To prove this, we use Lemma \ref{lem:boundgdupdate}. We verify first that the preconditions hold.
First, $\MC(t)$ immediately gives condition 1.~of Lemma \ref{lem:boundgdupdate}. By $\MB(t)$, we have that $\| W(t) - \Phi\|_\sigma \leq \| W(t) - \Phi\|_F \leq \| \Phi\|_F$, giving condition 2.~of Lemma \ref{lem:boundgdupdate}. $\MA(t)$ immediately gives condition 3.~of Lemma \ref{lem:boundgdupdate}. Finally, by $\MB(t)$, we have that $L^N(W_1(t), \ldots, W_N(t)) \leq L^N(W_1(0), \ldots, W_N(0))$, so $\sigma_{min}(W_{1:N}(t)) \geq c$ by Claim \ref{claim:margin_interp}. This verifies condition 4.~of Lemma \ref{lem:boundgdupdate}. Then Lemma \ref{lem:boundgdupdate} gives that $L^N(W_{1}(t+1), \ldots, W_N(t+1)) \leq L^N(W_1(t), \ldots, W_N(t)) - \frac 12 \sigma_{min}(W(t))^{2(N-1)/N} \eta \left\| \frac{dL}{dW}(W(t)) \right\|_F^2$, establishing $\MB(t+1)$.
\item $\MA(0), \MA'(1), \ldots, \MA'(t), \MA(t), \MB(0), \ldots, \MB(t), \MC(t) \Rightarrow \MA(t+1), \MA'(t+1)$. To prove this, note that for $1 \leq j \leq N-1$,
\begin{eqnarray}
&& W_{j+1}^\top(t+1) W_{j+1}(t+1) - W_j(t+1) W_j^\top(t+1)\nonumber\\
&=& \left( W_{j+1}^\top(t) - \eta W_{1:j}(t) \frac{dL}{dW}(W(t))^\top W_{j+2:N}(t) \right) \nonumber\\
&& \cdot \left( W_{j+1}(t) - \eta W_{j+2:N}^\top(t) \frac{dL}{dW}(W(t)) W_{1:j}^\top(t) \right)\nonumber\\
&&- \left( W_j(t) - \eta W_{j+1:N}^\top(t) \frac{dL}{dW}(W(t)) W_{1:j-1}^\top(t) \right)\nonumber\\
&& \cdot \left( W_j^\top(t) - \eta W_{1:j-1}(t) \frac{dL}{dW}(W(t))^\top W_{j+1:N}(t) \right)\nonumber.
\end{eqnarray}
By $\MB(0), \ldots, \MB(t)$, $\| W_{1:N}(t) - \Phi \|_F \leq \| \Phi\|_F$. By the triangle inequality it then follows that $\| W_{1:N}(t)\|_\sigma \leq 2 \| \Phi\|_F$. Also $\MA(t)$ gives that for $1 \leq j \leq N-1$, $\| W_{j}(t) W_j^\top(t) - W_{j+1}^\top(t) W_{j+1}(t) \|_F \leq 2\delta$. By Lemma \ref{lem:boundindiv} with $C = 2 \| \Phi\|_F, \nu = 2\delta$ (so that (\ref{eq:epltc}) is satisfied),
\begin{eqnarray}
&& \left\| W_{j+1}^\top(t+1) W_{j+1}(t+1) - W_j(t+1) W_j^\top(t+1) \right\|_F\nonumber\\
& \leq & \| W_{j+1}^\top(t) W_{j+1}(t) - W_j(t) W_j^\top(t) \|_F + \eta^2 \left\| \frac{dL}{dW}(W(t)) \right\|_F \cdot \left\| \frac{dL}{dW}(W(t)) \right\|_\sigma \nonumber\\
&& \cdot \left( \| W_{j+2:N}(t)\|_\sigma^2 \| W_{1:j}(t) \|_\sigma^2 + \| W_{1:j-1}(t)\|_\sigma^2 \|W_{j+1:N}(t) \|_\sigma^2\right)\nonumber\\
& \leq & \| W_{j+1}^\top(t) W_{j+1}(t) - W_j(t) W_j^\top(t) \|_F \nonumber\\
\label{eq:wtwt1diverge}
&& + 4\eta^2 \left\| \frac{dL}{dW}(W(t)) \right\|_F \left\| \frac{dL}{dW}(W(t)) \right\|_\sigma (2 \| \Phi\|_F)^{2(N-1)/N}.
\end{eqnarray}
In the first inequality above, we have also used the fact that for matrices $A, B$ such that $AB$ is defined, $\| AB\|_F \leq \| A \|_\sigma \| B \|_F$. (\ref{eq:wtwt1diverge}) gives us $\MA'(t+1)$.
We next establish $\MA(t+1)$. By $\MB(i)$ for $0 \leq i \leq t$, we have that $\left\| \frac{dL}{dW}(W(i)) \right\|_F = \left\| W(i) - \Phi \right\|_F \leq \| \Phi\|_F$. Using $\MA'(i)$ for $0 \leq i \leq t$ and summing over $i$ gives
\begin{eqnarray}
&& \| W_{j+1}^\top(t+1) W_{j+1}(t+1) - W_j(t+1) W_j^\top(t+1) \|_F\nonumber\\
& \leq & \| W_{j+1}^\top(0) W_{j+1}(0) - W_j(0) W_j^\top(0) \|_F \nonumber\\
\label{eq:hybridep}
&& + 4 (2 \| \Phi\|_F)^{2(N-1)/N} \cdot \eta^2 \sum_{i=0}^{t} \left\| \frac{dL}{dW}(W(i)) \right\|_F^2.
\end{eqnarray}
Next, by $\MB(0), \ldots, \MB(t)$, we have that $L(W(i)) \leq L(W(0))$ for $i \leq t$. Since $W(0)$ has deficiency margin of $c$ and by Claim \ref{claim:margin_interp}, it then follows that $\sigma_{min}(W(i)) \geq c$ for all $i \leq t$. Therefore, by summing $\MB(0), \ldots, \MB(t)$,
\begin{eqnarray}
&& \frac{1}{2} c^{2(N-1)/N} \eta\sum_{i=0}^t \left\| \frac{dL}{dW} (W(i)) \right\|_F^2\nonumber\\
& \leq & \frac 12 \eta \sum_{i=0}^t \sigma_{min}(W(i))^{2(N-1)/N} \left\| \frac{dL}{dW} (W(i)) \right\|_F^2\nonumber\\
& \leq & L(W(0)) - L(W(t))\nonumber\\
& \leq & L(W(0)) \leq \frac 12 \| \Phi\|_F^2\nonumber.
\end{eqnarray}
Therefore,
\begin{eqnarray}
&&4 \left( 2 \| \Phi\|_F\right)^{2(N-1)/N} \eta^2 \sum_{i=0}^t \left\| \frac{dL}{dW} (W(i))\right\|_F^2 \nonumber\\
& \leq & 16 \| \Phi\|_F^{2(N-1)/N} \eta \frac{\| \Phi\|_F^2}{c^{2(N-1)/N}}\nonumber\\
\label{eq:sumetagrad}
& \leq & 16 \| \Phi\|_F^{2(N-1)/N} \cdot \frac{1}{3 \cdot 2^{11} \cdot N^3} \cdot \frac{c^{(4N-2)/N}}{\| \Phi\|_F^{(6N-4)/N}} \cdot \frac{\| \Phi\|_F^2}{c^{2(N-1)/N}}\\
& \leq & \frac{c^2}{256 N^3 \| \Phi\|_F^{2(N-1)/N}}\nonumber\\
& = & \delta\nonumber,
\end{eqnarray}
where \eqref{eq:sumetagrad} follows from the definition of $\eta$ in (\ref{eq:descent_eta}), and the last equality follows from definition of $\delta$ in Theorem \ref{theorem:converge}.
By (\ref{eq:hybridep}), it follows that
$$
\| W_{j+1}^\top(t+1) W_{j+1}(t+1) - W_j(t+1) W_j^\top(t+1) \|_F \leq 2\delta,
$$
verifying $\MA(t+1)$.
\item $ \MA(t), \MB(t) \Rightarrow \MC(t)$. We apply Lemma \ref{lem:boundindiv} with $\nu = 2\delta$ and $C = 2 \| \Phi\|_F$. First, the triangle inequality and $\MB(t)$ give
$$
\| W_{1:N}(t) \|_\sigma \leq \| \Phi\|_\sigma + \| \Phi - W_{1:N}(t) \|_\sigma \leq \| \Phi\|_F + \sqrt{2 \cdot L(W_{1:N}(t))} \leq 2 \| \Phi\|_F,
$$
verifying precondition 2.~of Lemma \ref{lem:boundindiv}. $\MA(t)$ verifies condition 1.~of Lemma \ref{lem:boundindiv}, so for $1 \leq j \leq N$, $\| W_j(t) \|_\sigma \leq (4 \| \Phi\|_F)^{1/N}$, giving $\MC(t)$.
\end{enumerate}
The proof of Lemma \ref{lem:remainsmooth} then follows by induction on $t$.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem:converge_balance_init}} \label{app:proofs:converge_balance_init}
Theorem \ref{theorem:converge_balance_init} is proven by combining Lemma \ref{lem:integrate_rad} below, which implies that the balanced initialization is likely to lead to an end-to-end matrix $W_{1:N}(0)$ with sufficiently large deficiency margin, with Theorem~\ref{theorem:converge}, which establishes convergence.
\begin{lemma}
\label{lem:integrate_rad}
Let $d \in \BN, d \geq 20$; $b_2 > b_1 \geq 1$ be real numbers (possibly depending on $d$); and $\Phi \in \BR^d$ be a vector. Suppose that $\mu$ is a rotation-invariant distribution\footnote{Recall that a distribution on vectors $V \in \BR^d$ is {\it rotation-invariant} if the distribution of $V$ is the same as the distribution of $OV$, for any orthogonal $d \times d$ matrix $O$. If $V$ has a well-defined density, this is equivalent to the statement that for any $r > 0$, the distribution of $V$ conditioned on $\| V\|_2 = r$ is uniform over the sphere centered at the origin with radius $r$.} over $\BR^d$ with a well-defined density, such that, for some $0 < \ep < 1$,
$$
\pr_{V \sim \mu} \left[ \frac{\| \Phi\|_2}{\sqrt{b_2d}} \leq \| V \|_2 \leq \frac{\| \Phi\|_2}{\sqrt{b_1d}}\right] \geq 1-\ep.
$$
Then, with probability at least $(1-\ep) \cdot \frac{3 - 4F(2/\sqrt{b_1})}{2}$, $V$ will have deficiency margin $\| \Phi\|_2 / (b_2d)$ with respect to $\Phi$.
\end{lemma}
The proof of Lemma~\ref{lem:integrate_rad} is postponed to Appendix~\ref{app:proofs:margin_init}, where Lemma~\ref{lem:integrate_rad} will be restated as Lemma~\ref{lem:integrate_rad_restate}.
One additional technique is used in the proof of Theorem \ref{theorem:converge_balance_init}, which leads to an improvement in the guaranteed convergence rate. Because the deficiency margin of $W_{1:N}(0)$ is very small, namely $\OO(\| \Phi\|_2/d_0)$ (which is necessary for the theorem to maintain constant probability), at the beginning of optimization, $\ell(t)$ will decrease very slowly. However, after a certain amount of time, the deficiency margin of $W_{1:N}(t)$ will increase to a constant, at which point the decrease of $\ell(t)$ will be much faster. To capture this acceleration, we apply Theorem~\ref{theorem:converge} a second time, using the larger deficiency margin at the new ``initialization.'' From a geometric perspective, we note that the matrices $W_1(0), \ldots, W_N(0)$ are very close to 0, and the point at which $W_j(0) = 0$ for all $j$ is a saddle. Thus, the increase in $\ell(t) - \ell(t+1)$ over time captures the fact that the iterates $(W_1(t), \ldots, W_N(t))$ escape a saddle point.
\begin{proof}[Proof of Theorem \ref{theorem:converge_balance_init}]
Choose some $a \geq 2$, to be specified later.
By assumption, all entries of the end-to-end matrix at time 0, $W_{1:N}(0)$, are distributed as independent Gaussians of mean 0 and standard deviation $s \leq \|\Phi\|_2/\sqrt{ad_0^2}$. We will apply Lemma~\ref{lem:integrate_rad} to the vector $W_{1:N}(0) \in \BR^{d_0}$.
Since its distribution is obviously rotation-invariant, it remains to show that the distribution of the norm $\| W_{1:N}(0) \|_2$ is not too spread out. The following lemma~---~a direct consequence of the Chernoff bound applied to the $\chi^2$ distribution with $d_0$ degrees of freedom~---~will give us the desired result:
\begin{lemma}[\cite{laurent_adaptive_2000}, Lemma 1]
\label{lem:laurent}
Suppose that $d \in \BN$ and $V \in \BR^d$ is a vector whose entries are i.i.d.~Gaussians with mean 0 and standard deviation $s$. Then, for any $k > 0$,
\begin{eqnarray}
\pr\left[\| V \|_2^2 \geq s^2 \left(d + 2k + 2\sqrt{kd} \right) \right] &\leq& \exp(-k)\nonumber\\
\pr\left[\| V \|_2^2 \leq s^2 \left( d - 2\sqrt{kd} \right) \right] & \leq & \exp(-k)\nonumber.
\end{eqnarray}
\end{lemma}
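As a quick numerical sanity check of Lemma \ref{lem:laurent} (a Python sketch, not part of the proof; the values $d = 100$ and $k = d/16$ are illustrative and match the application below):

```python
import numpy as np

# Monte Carlo check of the Laurent-Massart chi-squared tail bounds, at the
# illustrative values d = 100, k = d/16 used in the proof below.
rng = np.random.default_rng(0)
d, s = 100, 0.5
k = d / 16
V = rng.normal(0.0, s, size=(50_000, d))
sq = (V ** 2).sum(axis=1)            # 50k samples of ||V||_2^2

upper = s**2 * (d + 2*k + 2*np.sqrt(k * d))
lower = s**2 * (d - 2*np.sqrt(k * d))
p_hi = (sq >= upper).mean()
p_lo = (sq <= lower).mean()
print(p_hi <= np.exp(-k), p_lo <= np.exp(-k))  # True True
# note: upper <= 2 s^2 d and lower = s^2 d / 2, matching the application below
```

With $k = d/16$ the two thresholds are $1.625\,s^2 d \leq 2 s^2 d$ and $s^2 d/2$, which is exactly how the lemma is invoked in the proof.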
By Lemma \ref{lem:laurent} with $k = d_0/16$, we have that
$$
\pr\left[ \frac{s^2 d_0}{2} \leq \| W_{1:N}(0) \|_2^2 \leq 2s^2 d_0\right] \geq 1 - 2 \exp(-d_0/16).
$$
We next use Lemma \ref{lem:integrate_rad}, with $b_1 = \| \Phi\|_2^2/(2s^2d_0^2), b_2 = 2\| \Phi\|_2^2/(s^2d_0^2)$; note that since $a \geq 2$, $b_1 \geq 1$, as required by the lemma. Lemma \ref{lem:integrate_rad} then implies that with probability at least
\be
\label{eq:expF}
\left(1 - 2\exp(-d_0/16) \right) \frac{3-4F\left(2/\sqrt{a/2}\right)}{2},
\ee
$W_{1:N}(0)$ will have deficiency margin $s^2d_0/(2\| \Phi\|_2)$
with respect to $\Phi$. By the definition of balanced initialization (Procedure~\ref{proc:balance_init}), $W_1(0), \ldots, W_N(0)$ are $0$-balanced. Since $2^4 \cdot 6144 < 10^5$, our assumption on $\eta$ gives
\begin{equation}
\label{eq:eta_single_output}
\eta \leq \frac{(s^2d_0)^{4-2/N}}{2^4 \cdot 6144N^3 \| \Phi\|_2^{10-6/N}},
\end{equation}
so that Equation~(\ref{eq:descent_eta}) holds with $c = \frac{s^2d_0}{2\|\Phi\|_2}$.
The conditions of Theorem \ref{theorem:converge} thus hold with probability at least that given in Equation~(\ref{eq:expF}). On this event, by Theorem \ref{theorem:converge} (and the fact that a positive deficiency margin implies $L^1(W_{1:N}(0))\leq\frac{1}{2}\| \Phi\|_2^2$), if we choose
\begin{equation}
\label{eq:t0lb}
t_0 \geq \eta^{-1} \left( \frac{ 2\| \Phi\|_2}{s^2d_0} \right)^{2-2/N} \ln(4),
\end{equation}
then $L^1(W_{1:N}(t_0)) \leq \frac 18 \| \Phi\|_2^2 $, meaning that $\| W_{1:N}(t_0) - \Phi\|_2 \leq \frac 12 \| \Phi\|_2 = \| \Phi\|_2 - \frac 12 \sigma_{min}(\Phi)$. Moreover, by condition $\MA(t_0)$ of Lemma \ref{lem:remainsmooth} and the definition of $\delta$ in Theorem \ref{theorem:converge}, we have, for $1 \leq j \leq N-1$,
\begin{equation}
\label{eq:balanced_single_output}
\| W_{j+1}^T(t_0) W_{j+1}(t_0) - W_j(t_0) W_j^T(t_0) \|_F \leq
\frac{2s^4d_0^2}{(2\| \Phi\|_2)^2 \cdot 256 N^3 \| \Phi\|_2^{2-2/N}} = \frac{s^4 d_0^2}{512 N^3 \| \Phi\|_2^{4-2/N}}.
\end{equation}
We now apply Theorem \ref{theorem:converge} again, verifying its conditions again, this time with the initialization $(W_1(t_0), \ldots, W_N(t_0))$. First note that the end-to-end matrix $W_{1:N}(t_0)$ has deficiency margin $c = \| \Phi\|_2/2$ as shown above. The learning rate $\eta$, by Equation~(\ref{eq:eta_single_output}), satisfies Equation~(\ref{eq:descent_eta}) with $c = \| \Phi\|_2/2$. Finally, since
$$
\frac{s^4 d_0^2}{512 N^3 \| \Phi\|_2^{4-2/N}} \leq \frac{\|\Phi\|^{2/N}}{(a^2d_0^2) \cdot 512 N^3} \leq \frac{\| \Phi\|^{2/N}(1/2)^2}{256 N^3}
$$
for $d_0 \geq 2$, by Equation~(\ref{eq:balanced_single_output}), the matrices $W_1(t_0), \ldots, W_N(t_0)$ are $\delta$-balanced with $\delta = \frac{\| \Phi\|^{2/N}(1/2)^{2}}{256 N^3}$.
Iteration~$t_0$ thus satisfies the conditions of Theorem~\ref{theorem:converge} with deficiency margin $\| \Phi\|_2/2$, meaning that for
\be
\label{eq:tmt0lb}
T - t_0 \geq \eta^{-1} \cdot 2^{2-2/N} \cdot \| \Phi\|^{2/N-2} \ln\left( \frac{\| \Phi\|_2^2}{8\ep} \right),
\ee
we will have $\ell(T) \leq \ep$. Therefore, by Equations~(\ref{eq:t0lb}) and~(\ref{eq:tmt0lb}), to ensure that $\ell(T) \leq \ep$, we may take
$$
T \geq 4\eta^{-1} \left( \ln(4) \left(\frac{\|\Phi\|_2}{s^2d_0}\right)^{2-2/N} + \| \Phi\|_2^{2/N-2}\ln(\| \Phi\|_2^2/(8\ep)) \right).
$$
Recall that this entire analysis holds only with the probability given in Equation~(\ref{eq:expF}). As $\lim_{d \ra \infty} (1-2\exp(-d/16)) = 1$ and $\lim_{a \ra \infty} (3 - 4F(2\sqrt{2/a}))/2 = 1/2$, for any $0 < p < 1/2$, there exist $a, d_0' > 0$ such that for $d_0 \geq d_0'$, the probability given in Equation~(\ref{eq:expF}) is at least $p$. This completes the proof.
\end{proof}
In the context of the above proof, we remark that the expressions $1- 2\exp(-d_0/16)$ and $(3 - 4F(2\sqrt{2/a}))/2$ converge to their limits of $1$ and $1/2$, respectively, as $d_0,a \ra \infty$ quite quickly. For instance, to obtain a probability of greater than $0.25$ of the initialization conditions being met, we may take $d_0 \geq 100, a \geq 100$.
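The following sketch checks these constants numerically, writing the standard normal CDF as $F(x) = (1+\operatorname{erf}(x/\sqrt{2}))/2$; the values $d_0 = a = 100$ are the ones suggested above:

```python
import math

# Numeric check of the remark above: with d0 = a = 100, the probability in
# Eq. (expF) exceeds 0.25. F is the standard normal CDF.
def F(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d0, a = 100, 100
prob = (1 - 2 * math.exp(-d0 / 16)) * (3 - 4 * F(2 * math.sqrt(2 / a))) / 2
print(prob > 0.25)  # True (prob is roughly 0.276)
```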
\subsection{Proof of Claim~\ref{claim:balance_init}} \label{app:proofs:balance_init}
We first consider the probability of $\delta$-balancedness holding between any two layers:
\begin{lemma}
\label{lem:abvarbound}
Suppose $a,b,d \in \BN$ and $A \in \BR^{a \times d}, B \in \BR^{d \times b}$ are matrices whose entries are distributed as i.i.d.~Gaussians with mean 0 and standard deviation $s$. Then for $k \geq 1$,
\begin{equation}
\label{eq:abd}
\pr\left[ \left\|A^TA - BB^T\right\|_F \geq ks^2 \sqrt{2d(a+b)^2 + d^2(a+b)} \right] \leq 1/k^2.
\end{equation}
\end{lemma}
\begin{proof}
For $1 \leq i,j \leq d$, let $X_{ij}$ denote the random variable $(A^TA - BB^T)_{ij}$, so that $$X_{ij} = (A^TA - BB^T)_{ij} = \sum_{1 \leq \ell \leq a} A_{\ell i} A_{\ell j} - \sum_{1 \leq r \leq b} B_{ir} B_{jr}.$$
If $i \neq j$, then $$\ex[X_{ij}^2] = \sum_{1 \leq \ell \leq a} \ex[A_{\ell i}^2 A_{\ell j}^2] + \sum_{1 \leq r \leq b} \ex[B_{ir}^2 B_{jr}^2]=(a+b)s^4.$$ We next note that for a normal random variable $Y$ of variance $s^2$ and mean 0, $\ex[Y^4] = 3s^4$. Then if $i = j$,
$$
\ex[X_{ii}^2] = s^4 \cdot (3(a+b) + a(a-1) + b(b-1) - 2ab) \leq s^4((a+b)^2 + 2(a+b)).
$$
Thus
\begin{eqnarray}
\ex[\| A^TA - BB^T\|_F^2] &{\leq}& s^4(d((a+b)^2 + 2(a+b)) + d(d-1)(a+b)) \nonumber\\
&\leq & s^4(2d(a+b)^2 + d^2(a+b))\nonumber.
\end{eqnarray}
Then (\ref{eq:abd}) follows from Markov's inequality.
\end{proof}
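The second-moment bound at the heart of Lemma \ref{lem:abvarbound} can be checked by Monte Carlo (a sketch with illustrative small dimensions, not part of the proof):

```python
import numpy as np

# Monte Carlo check of the second-moment bound behind Eq. (abd):
# E||A^T A - B B^T||_F^2 <= s^4 (2d(a+b)^2 + d^2(a+b)).
rng = np.random.default_rng(1)
a, b, d, s, trials = 7, 5, 6, 0.3, 20_000
vals = np.empty(trials)
for t in range(trials):
    A = rng.normal(0, s, (a, d))
    B = rng.normal(0, s, (d, b))
    M = A.T @ A - B @ B.T
    vals[t] = (M ** 2).sum()         # ||A^T A - B B^T||_F^2
bound = s**4 * (2*d*(a + b)**2 + d**2*(a + b))
print(vals.mean() <= bound)  # True: empirical mean is below the bound
```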
Now the proof of Claim \ref{claim:balance_init} follows from a simple union bound:
\begin{proof}[Proof of Claim \ref{claim:balance_init}]
By (\ref{eq:abd}) of Lemma \ref{lem:abvarbound}, for each $1 \leq j \leq N-1$, $k \geq 1$,
$$
\pr\left[ \| W_{j+1}^T W_{j+1} - W_jW_j^T \|_F {\geq} ks^2 \sqrt{10d_{max}^3} \right] \leq 1/k^2.
$$
By the union bound,
$$
\pr \left[ \forall 1 \leq j \leq N-1, \ \ \| W_{j+1}^T W_{j+1} - W_jW_j^T \|_F \leq ks^2 \sqrt{10d_{max}^3} \right] \geq 1-N/k^2,
$$
and the claim follows with $\delta = ks^2 \sqrt{10d_{max}^3}$.
\end{proof}
\subsection{Proof of Claim~\ref{claim:margin_init}} \label{app:proofs:margin_init}
We begin by introducing some notation. Given $d \in \BN$ and $r > 0$, we let $B^d(r)$ denote the open ball of radius $r$ centered at the origin in $\BR^d$. For an open subset $U \subset \BR^d$, let $\partial U := \bar U \backslash U$ be its boundary, where $\bar U$ denotes the closure of $U$. For the special case of $U = B^d(r)$, we will denote by $S^d(r)$ the boundary of such a ball, i.e.~the sphere of radius $r$ centered at the origin in $\BR^d$. Let $S^d := S^d(1)$ and $B^d := B^d(1)$. There is a well-defined uniform (Haar) measure on $S^d(r)$ for all $d, r$, which we denote by $\sigma^{d,r}$; we assume $\sigma^{d,r}$ is normalized so that $\sigma^{d,r}(S^d(r)) = 1$.
Finally, since in the context of this claim we have~$d_N=1$, we allow ourselves to regard the end-to-end matrix $W_{1:N}\in\R^{1\times{d}_0}$ as both a matrix and a vector.
\medskip
To establish Claim \ref{claim:margin_init}, we will use the following low-degree anti-concentration result of \cite{carbery_distributional_2001} (see also \cite{lovett_elementary_2010,meka_anti-concentration_2016}):
\begin{lemma}[\cite{carbery_distributional_2001}]
\label{lem:carbery}
There is an absolute constant $C_{{0}}$ such that the following holds. Suppose that ${h}$ is a multilinear polynomial of $K$ variables $X_1, \ldots, X_K$ and of degree $N$. Suppose that $X_1, \ldots, X_K$ are i.i.d.~Gaussian. Then, for any $\epsilon>0$:
$$
\mathbb{P} \left[|{h}(X_1, \ldots, X_K)| \leq \ep \cdot \sqrt{\Var[{h}(X_1, \ldots, X_K)]}\right] \leq C_{{0}} N\ep^{1/N}.
$$
\end{lemma}
The below lemma characterizes the norm of the end-to-end matrix~$W_{1:N}$ following zero-centered Gaussian initialization:
\begin{lemma}
\label{lem:norm_concentrate}
For any constant $0 < C_2 < 1$, there is an absolute constant $C_1 > 0$ such that the following holds. Let $N, d_0, \ldots, d_{N-1} \in \BN$. Set $d_N = 1$. Suppose that for $1 \leq j \leq N$, $W_j \in \BR^{d_j \times d_{j-1}}$ are matrices whose entries are i.i.d.~Gaussians of standard deviation $s$ and mean 0. Then
$$
\pr\left[ s^{2N} d_1 \cdots d_{N-1} \left( \frac{1}{C_1N} \right)^{2N} \leq \| W_{1:N}\|_2^2 \leq C_1d_0^2 d_1 \cdots d_{N-1} s^{2N} \right] \geq C_2.
$$
\end{lemma}
\begin{proof}
Let $f(W_1, \ldots, W_N) = \| W_{1:N}\|_2^2$, so that $f$ is a polynomial of degree $2N$ in the entries of $W_1, \ldots, W_N$. Notice that
$$
f(W_1, \ldots, W_N) = \sum_{i_0 = 1}^{d_0} \left( \sum_{i_1=1}^{d_1} \cdots \sum_{i_{N-1} = 1}^{d_{N-1}} (W_N)_{1,i_{N-1}} (W_{N-1})_{i_{N-1}, i_{N-2}} \cdots (W_1)_{i_1, i_0} \right)^2.
$$
For $1 \leq i_0 \leq d_0$, set
$$
g_{i_0}(W_1, \ldots, W_N) = \sum_{i_1=1}^{d_1} \cdots \sum_{i_{N-1} = 1}^{d_{N-1}} (W_N)_{1,i_{N-1}} (W_{N-1})_{i_{N-1}, i_{N-2}} \cdots (W_1)_{i_1, i_0},
$$
so that $f = \sum_{i_0 = 1}^{d_0} g_{i_0}^2$. Since each $g_{i_0}$ is a multilinear polynomial in $W_1,\ldots,W_N$, we have that $\ex[g_{i_0}(W_1, \ldots, W_N)] = 0$ for all $1 \leq i_0 \leq d_0$. Also
\begin{eqnarray}
\Var[g_{i_0}(W_1, \ldots, W_N)] &=& \ex[g_{i_0}(W_1, \ldots, W_N)^2]\nonumber\\
&=& \sum_{i_1=1}^{d_1} \cdots \sum_{i_{N-1} = 1}^{d_{N-1}} \ex\left[ (W_N)_{1,i_{N-1}}^2 (W_{N-1})_{i_{N-1}, i_{N-2}}^2 \cdots (W_1)_{i_1, i_0}^2 \right]\nonumber\\
&=& d_1 d_2 \cdots d_{N-1} s^{2N} \nonumber.
\end{eqnarray}
It then follows by Markov's inequality that for any $k \geq 1$, $\pr[g_{i_0}^2 \geq k s^{2N} d_1 \cdots d_{N-1}] \leq 1/k$. For any constant $B_1$ (whose exact value will be specified below), it follows that
\begin{eqnarray}
&&\pr[f(W_1, \ldots, W_N) {\geq} B_1 d_0^2 d_1 d_2 \cdots d_{N-1} s^{2N}]\nonumber\\
&=& \pr\left[ \sum_{i_0=1}^{d_0} g_{i_0}(W_1, \ldots, W_N)^2 {\geq} B_1 d_0^2 d_1 d_2 \cdots d_{N-1} s^{2N} \right]\nonumber\\
& \leq & d_0 \cdot \pr[g_1(W_1, \ldots, W_N)^2 {\geq} B_1 d_0d_1 \cdots d_{N-1} s^{2N}]\nonumber\\
\label{eq:poly_ub}
& \leq & 1/B_1.
\end{eqnarray}
Next, by Lemma~\ref{lem:carbery}, there is an absolute constant $C_0 > 0$ such that for any $\ep > 0$, and any $1 \leq i_0 \leq d_0$,
$$
\pr\left[ |g_{i_0}(W_1, \ldots, W_N)| \leq \ep^N \sqrt{s^{2N} d_1 \cdots d_{N-1}} \right] \leq C_0 N \ep.
$$
Since $f \geq g_{i_0}^2$ for each $i_0$, it follows that
\begin{equation}
\label{eq:poly_lb}
\pr[ f(W_1, \ldots, W_N) \geq \ep^{2N} s^{2N} d_1 \cdots d_{N-1} ] \geq 1 - C_0 N \ep.
\end{equation}
Next, given $0 < C_2 < 1$, choose $\ep = (1-C_2)/(2C_0N)$, and $B_1 = 2/(1-C_2)$. Then by (\ref{eq:poly_ub}) and (\ref{eq:poly_lb}) and a union bound, we have that
$$
\pr\left[\left( \frac{1-C_2}{2C_0N}\right)^{2N} s^{2N} d_1 \cdots d_{N-1} \leq f(W_1, \ldots, W_N) \leq \frac{2}{1-C_2} {s^{2N}}d_0^2 d_1 \cdots d_{N-1} \right] \geq C_2.
$$
The result of the lemma then follows by taking $C_1 = \max\left\{ \frac{2}{1-C_2}, \frac{2C_0}{1-C_2} \right\}$.
\end{proof}
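The variance computation for $g_{i_0}$ in the proof above can be verified numerically (a sketch with illustrative dimensions $N = 3$, $(d_0, d_1, d_2, d_3) = (4, 3, 5, 1)$):

```python
import numpy as np

# Monte Carlo check of the variance computation in the proof of Lemma
# (norm_concentrate): Var[g_{i0}] = d1 * ... * d_{N-1} * s^{2N}.
rng = np.random.default_rng(2)
N, dims, s, trials = 3, [4, 3, 5, 1], 0.8, 50_000  # dims = [d0, d1, d2, d3]
g = np.empty(trials)
for t in range(trials):
    Ws = [rng.normal(0, s, (dims[j + 1], dims[j])) for j in range(N)]
    W = Ws[2] @ Ws[1] @ Ws[0]        # end-to-end matrix, shape (1, d0)
    g[t] = W[0, 0]                   # g_{i0} with i0 = 1
theory = dims[1] * dims[2] * s ** (2 * N)   # d1 * d2 * s^6
print(abs(g.var() / theory - 1) < 0.1)      # True: within 10% of the theory
```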
\begin{lemma}
\label{lem:rotation_inv}
Let $N, d_0, \ldots, d_{N-1} \in \BN$, and set $d_N = 1$. Suppose $W_j \in \BR^{d_j \times d_{j-1}}$ for $1 \leq j \leq N$, are matrices whose entries are i.i.d.~Gaussians with mean~$0$ and standard deviation~$s$. Then, the distribution of $W_{1:N}$ is rotation-invariant.
\end{lemma}
\begin{proof}
First we remark that for any orthogonal matrix $O \in \BR^{d_0 \times d_0}$, the distribution of $W_1$ is the same as that of $W_1O$. To see this, let us denote the rows of $W_1$ by $(W_1)_1, \ldots, (W_1)_{d_1}$, and the columns of $O$ by $O^1, \ldots, O^{d_0}$. Then the $(i_1, i_0)$ entry of $W_1O$, for $1 \leq i_1 \leq d_1, 1 \leq i_0 \leq d_0$ is $\langle (W_1)_{i_1}, O^{i_0} \rangle$, which is a Gaussian with mean~$0$ and standard deviation~$s$, since $\| O^{i_0}\|_2 = 1$. Since $\langle O^{i_0}, O^{i_0'} \rangle = 0$ for $i_0 \neq i_0'$, the covariance between any two distinct entries of $W_1O$ is 0. Therefore, the entries of $W_1O$ are independent Gaussians with mean~$0$ and standard deviation~$s$, just as are the entries of $W_1$.
But now for any matrix $O \in \BR^{d_0 \times d_0}$, the distribution of $W_{1:N}O$ is the distribution of $W_N W_{N-1} \cdots W_2 (W_1O)$, which is the same as the distribution of $W_N W_{N-1} \cdots W_2 W_1 = W_{1:N}$, since $W_1, W_2, \ldots, W_N$ are all independent.
\end{proof}
For a dimension~$d\in\N$, radius~$r>0$, and $0 < h < r$, a {\it $(d,r)$-hyperspherical cap of height~$h$} is a subset $\MC \subset B^d(r)$ of the form $\{ x \in B^d(r) : \langle x, u \rangle \geq r-h\}$, where $u$ is any $d$-dimensional unit vector. We define the {\it area of a $(d,r)$-hyperspherical cap of height $h$}~---~$\MC$~---~to be $\sigma^{d,r}(\partial \MC \cap S^d(r))$.
\begin{lemma}
\label{lem:volcap}
For $d \geq 20$, choose any $0 \leq h \leq 1$. Then, the area of a $(d,1)$-hyperspherical cap of height $h$ is at least
$$
\frac{3 - 4 F((1-h) \sqrt{d-3})}{2}.
$$
\end{lemma}
\begin{proof}
In \cite{chudnov_minimax_1986}, it is shown that the area of a $(d,1)$-hyperspherical cap of height $h$ is given by $\frac{1 - C_{d-2}(h)/C_{d-2}(0)}{2}$, where
$$
C_d(h) := \int_0^{1-h} (1-t^2)^{(d-1)/2} dt.
$$
Next, by the inequality $1-t^2 \geq \exp(-2t^2)$ for $0 \leq t \leq 1/2$,
\begin{eqnarray}
\int_0^1 (1-t^2)^{(d-3)/2} dt & \geq & \int_0^{1/2} \exp\left(2 \cdot \frac{-t^2(d-3)}{2} \right) dt\nonumber\\
& = & \sqrt{\pi/(d-3)} \cdot \frac{2F(\sqrt{(d-3)/2}) - 1}{2}\nonumber\\
\label{eq:cd0}
& \geq & \sqrt{\pi/(d-3)} \cdot \frac{1 - 2 \exp(-(d-3)/4)}{2},
\end{eqnarray}
where the last inequality follows from the standard estimate $F(x) \geq 1 - \exp(-x^2/2)$ for $x \geq 1$.
Also, since $1-t^2 \leq \exp(-t^2)$ for all $t$,
\begin{eqnarray}
\int_0^{1-h} (1-t^2)^{(d-3)/2} dt & \leq & \int_0^{1-h} \exp\left( \frac{-t^2 (d-3)}{2} \right) dt \nonumber\\
\label{eq:cdh}
& = & \sqrt{2\pi/(d-3)} \cdot \frac{2F((1-h) \sqrt{d-3}) - 1}{2}.
\end{eqnarray}
Therefore, for $d \geq 20$, by (\ref{eq:cd0}) and (\ref{eq:cdh}),
\begin{eqnarray}
\frac{1-C_{d-2}(h)/C_{d-2}(0)}{2} & \geq &\frac{1 - \frac{\sqrt{2} \cdot (2F((1-h)\sqrt{d-3}) - 1)}{1 - 2 \exp(-(d-3)/4)}}{2} \nonumber\\
& \geq & \frac{1 - \sqrt{2} \cdot (2F((1-h)\sqrt{d-3}) - 1) \cdot (1 + 4 \exp(-(d-3)/4))}{2}\nonumber\\
& \geq & \frac{3 - 4 F((1-h)\sqrt{d-3})}{2},\nonumber
\end{eqnarray}
where the second inequality has used $1/(1-y) \leq 1 + 2y$ for all $0 < y < 1/2$ (here $y = 2\exp(-(d-3)/4) < 2 \exp(-17/4) < 1/2$), and the final inequality uses $1 + 4\exp(-(d-3)/4) \leq \sqrt{2}$ for $d \geq 20$. The above chain of inequalities gives us the desired result.
\end{proof}
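The lower bound of Lemma \ref{lem:volcap} can be checked against the exact cap-area formula from \cite{chudnov_minimax_1986} by direct numerical integration (a sketch with $d = 20$ and a grid of heights; $F$ is the standard normal CDF):

```python
import math

# Numeric check of Lemma (volcap): the exact cap area (1 - C_{d-2}(h)/C_{d-2}(0))/2
# is at least (3 - 4 F((1-h) sqrt(d-3)))/2, for d = 20 and several heights h.
def F(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def C(d, h, steps=20_000):
    # C_d(h) = integral of (1 - t^2)^{(d-1)/2} over [0, 1-h], midpoint rule
    total, width = 0.0, (1.0 - h) / steps
    for i in range(steps):
        t = (i + 0.5) * width
        total += (1.0 - t * t) ** ((d - 1) / 2)
    return total * width

d = 20
ok = all(
    (1 - C(d - 2, h) / C(d - 2, 0)) / 2 >= (3 - 4 * F((1 - h) * math.sqrt(d - 3))) / 2
    for h in [0.1, 0.3, 0.5, 0.7, 0.9]
)
print(ok)  # True
```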
\begin{lemma}
\label{lemma:haar_def}
Let $d \in \BN, d \geq 20$; $a \geq 1$ be a real number (possibly depending on $d$); and $\Phi \in \BR^d$ be some vector.
Set $r = \| \Phi\|_2 / \sqrt{ad}$, and suppose that $V \in S^d(r)$ is drawn according to the uniform measure.
Then, with probability at least $\frac{3 - 4F(2/\sqrt{a})}{2}$, $V$ will have deficiency margin $\| \Phi\|_2 / (ad)$ with respect to $\Phi$.
\end{lemma}
\begin{proof}
By rescaling, we may assume without loss of generality that $\| \Phi\|_2 = 1$, so that $r = 1/\sqrt{ad}$. Let $\MD$ denote the intersection of $B^{d}(r)$ with the open $d$-ball of radius $1-1/(ad)$ centered at $\Phi$. Let $\MC \subset B^{d}(r)$ denote the $(d,r)$-hyperspherical cap of height $r \cdot \big( 1 - 2/(\sqrt{ad})\big) = r - 2/(ad)$ whose base is orthogonal to the line between $\mathbf{0}$ and $\Phi$ (see Figure \ref{fig:initfig}). Note that ${\sigma}^{d,r}(\partial\MD \cap S^d(r))$, the Haar measure of the portion of $\partial\MD$ intersecting $S^d(r)$, gives the probability that $V$ belongs to the boundary of $\MD$. By Lemma \ref{lem:volcap} above (along with rescaling arguments), since $d \geq 20$, $\sigma^{d,r}(\partial \MC \cap S^d(r)) \geq \frac 12 \cdot (3 - 4F(2/\sqrt{a}))$, and therefore $V \in {\partial}\MC$ with at least this probability.
We next claim that $\MC \subseteq \MD$. To see this, first let $\MT \subset \BR^d$ denote the $(d-1)$-sphere of radius $1-1/(ad)$ centered at $\Phi$ (see Figure \ref{fig:initfig}). Let $P$ be the intersection of $\MT$ with the line from $\mathbf{0}$ to $\Phi$, and $Q$ denote the intersection of this line with the unique hyperplane of codimension 1 containing $\MT \cap \partial B^{d}(r)$~---~we denote this hyperplane by $\MH$. If we can show that $\norm{P-Q}_2 \leq 1/(ad)$, then it follows that $\MC$ lies entirely on the opposite side of $\MH$ from $\mathbf{0}$, which will complete the proof that $\MC \subseteq \MD$.
The calculation of $\norm{P-Q}_2$ is simply an application of the law of cosines: letting $\theta$ be the angle determining the intersection of $\partial B^{d}(r)$ and $\MT$ (see Figure~\ref{fig:initfig}), note that
$$
(1-1/(ad))^2 = r^2 + 1^2 - 2r\cos \theta = 1/(ad) + 1 - 2/\sqrt{ad} \cdot \cos(\theta),
$$
so
$$
\norm{P-Q}_2 = r \cos \theta - 1/(ad) = \frac 12 (1/(ad) - 1/(a^2d^2)) < 1/(ad),
$$
as desired.
Having established that $\MC \subseteq \MD$, we continue with the proof. By the structure of $\MC$ and~$\MD$, the containment $\MC\subseteq \MD$ implies $\partial \MC \cap S^d(r) \subseteq \partial \MD \cap S^d(r)$. Since the probability that $V$ lands in $\partial\MC$ is at least $\frac{3-4F(2/\sqrt{a})}{2}$, the same lower bound applies to $V$ landing in $\partial\MD$. Since all $V \in \partial\MD$ have distance at most $1-1/(ad)$ from $\Phi$, and since $\sigma_{min}(\Phi) = \| \Phi\|_2 = 1$, it follows that for any $V \in \partial\MD$, $\| V - \Phi\|_2 \leq \sigma_{min}(\Phi) - 1/(ad)$. Therefore, with probability at least $\frac{3-4F(2/\sqrt a)}{2}$, $V$ has deficiency margin $\| \Phi\|_2/(ad)$ with respect to~$\Phi$.
\end{proof}
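The law-of-cosines step in the proof above reduces to scalar arithmetic, and can be checked directly (a sketch with illustrative values of $a$ and $d$):

```python
import math

# Numeric check of the law-of-cosines step in Lemma (haar_def): with
# ||Phi|| = 1 and r = 1/sqrt(ad), the points P and Q on the line from 0 to Phi
# satisfy ||P - Q|| = (1/(ad) - 1/(ad)^2)/2 < 1/(ad).
a, d = 3.0, 25.0
r = 1.0 / math.sqrt(a * d)
# cos(theta) from (1 - 1/(ad))^2 = r^2 + 1 - 2 r cos(theta)
cos_theta = (r * r + 1.0 - (1.0 - 1.0 / (a * d)) ** 2) / (2.0 * r)
P = 1.0 / (a * d)        # distance of P from the origin along the line to Phi
Q = r * cos_theta        # distance of Q from the origin along the same line
closed_form = 0.5 * (1.0 / (a * d) - 1.0 / (a * d) ** 2)
print(abs((Q - P) - closed_form) < 1e-12, Q - P < 1.0 / (a * d))  # True True
```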
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw[very thin, pattern = north west lines, pattern color = gray, opacity=0.25] (2.73,-4.2) -- (2.73, 4.2) arc (56.65:-56.65:5cm) -- cycle;
\draw[very thin, pattern = north west lines, pattern color = gray, opacity=0.25] (2.73,-4.2) -- (2.73, 4.2) arc (148:212:8cm) -- cycle;
\draw (0,0) circle (5cm);
\draw (0,0) -- (10,0);
\draw (0,0) -- (2.7, 4.25);
\draw [thick, blue] (2.7,-5) -- (2.7,5);
\node [above] at (2.7,5) {\color{blue} $\MH$};
\node [above] at (10, 0) {$\Phi$};
\node [above] at (0,0) {$\mathbf{0}$};
\node [above left] at (1.5, 0) {$P$};
\node [above right] at (2.7, 0) {$Q$};
\node [above right] at (0.2, 0) {$\theta$};
\node [above] at (1.3, 2.1) {$r$};
\node [right] at (5.5, 6.92) {\color{red} $\MT$};
\draw [thick,red] (5.5,6.92) arc (120:240:8cm);
\filldraw[fill opacity=0.2,fill=green] (3.8,-3.25) -- (3.8,3.25) arc (40.39:-40.39:5cm) -- cycle;
\node [above] at (4.35,-0.9) {\color{darkspringgreen} $\MC$};
\draw[decoration={calligraphic brace,amplitude=5pt}, decorate, line width=1.25pt] (3.85,0.05) node {} -- (4.95,0.05);
\node [above,align=center] at (4.4, 0.1) {\small Height \\ \small of $\MC$};
\end{tikzpicture}
\end{center}
\caption{Figure for proof of Lemma \ref{lemma:haar_def}. The dashed region denotes $\MD$. Not to scale.}
\label{fig:initfig}
\end{figure}
\begin{lemma}[Lemma~\ref{lem:integrate_rad} restated]
\label{lem:integrate_rad_restate}
Let $d \in \BN, d \geq 20$; $b_2 > b_1 \geq 1$ be real numbers (possibly depending on $d$); and $\Phi \in \BR^d$ be a vector. Suppose that $\mu$ is a rotation-invariant distribution over $\BR^d$ with a well-defined density, such that, for some $0 < \ep < 1$,
$$
\pr_{V \sim \mu} \left[ \frac{\| \Phi\|_2}{\sqrt{b_2d}} \leq \| V \|_2 \leq \frac{\| \Phi\|_2}{\sqrt{b_1d}}\right] \geq 1-\ep.
$$
Then, with probability at least $(1-\ep) \cdot \frac{3 - 4F(2/\sqrt{b_1})}{2}$, $V$ will have deficiency margin $\| \Phi\|_2 / (b_2d)$ with respect to $\Phi$.
\end{lemma}
\begin{proof}
By rescaling we may assume that $\| \Phi\|_2 = 1$ without loss of generality. Then the deficiency margin of $V$ is equal to $1 - \| V- \Phi\|_2$. Since $\mu$ has a well-defined density, we may let $\hat{\mu}$ denote the probability density function of $\| V\|_2$. Since $\mu$ is rotation-invariant, we can integrate over spherical coordinates, giving
\begin{eqnarray}
&& \pr[1 - \| V - \Phi\|_2 \geq {1} / (b_2d)] \nonumber\\
&=& \int_0^\infty \pr\big[1 - \| V - \Phi\|_2 \geq {1} / (b_2d) ~\big|~ \| V \|_2 = r \big]\hat \mu(r) dr\nonumber\\
& \geq & \int_{1/(\sqrt{b_2d})}^{1/(\sqrt{b_1d})} \frac{3 - 4F(2r\sqrt{d} )}{2} \hat \mu(r) dr \nonumber\\
& \geq & \frac{3 - 4F(2/\sqrt{b_1})}{2} \cdot \int_{1/(\sqrt{b_2d})}^{1/(\sqrt{b_1d})} \hat\mu(r) dr \nonumber\\
& \geq & \frac{3 - 4F(2/\sqrt{b_1})}{2} \cdot (1-\ep)\nonumber,
\end{eqnarray}
where the first inequality used Lemma \ref{lemma:haar_def} and the fact that the distribution of $V$ conditioned on $\| V\|_2 = r$ is uniform on $S^d(r)$.
\end{proof}
Now we are ready to prove Claim \ref{claim:margin_init}:
\begin{proof}[Proof of Claim \ref{claim:margin_init}]
We let $W \in \BR^{1 \times d_0} \simeq \BR^{d_0}$ denote the random vector $W_{1:N}$; also let $\mu$ denote the distribution of $W$, so that by Lemma \ref{lem:rotation_inv}, $\mu$ is rotation-invariant. Let $C_1$ be the constant from Lemma~\ref{lem:norm_concentrate} for $C_2 = 999/1000$. For some $a \geq 10^5$, the standard deviation of the entries of each $W_{j}$ is given by
\begin{equation}
\label{eq:define_s}
s = \left( \frac{\| \Phi\|_2^2}{ad_0^3d_1 \cdots d_{N-1} C_1} \right)^{1/(2N)}.
\end{equation}
Then by Lemma \ref{lem:norm_concentrate},
$$
\pr \left[ \frac{ \| \Phi\|_2^2}{ad_0^3 C_1} \cdot \left( \frac{1}{C_1N} \right)^{2N} \leq \| W\|_2^2 \leq \frac{\| \Phi\|_2^2}{ad_0} \right] \geq {\frac{999}{1000}}.
$$
Then Lemma~\ref{lem:integrate_rad_restate}, with $d = d_0$, $b_1 = a$ and $b_2 = ad_0^2 C_1 \cdot (C_1N)^{2N}$, implies that with probability at least $\frac{999}{1000} \cdot \frac{3 - 4F(2/\sqrt{a})}{2}$, $W$ has deficiency margin $\| \Phi\|_2 / (ad_0^3 C_1^{2N+1} N^{2N})$ with respect to $\Phi$. But $a \geq 10^5$ implies that this probability is at least $0.49$, and from (\ref{eq:define_s}),
\be
\label{eq:deficiency_margin_final}
\frac{\| \Phi\|_2}{ad_0^3 C_1^{2N+1} N^{2N}} = \frac{s^{2N} d_1 \cdots d_{N-1}}{\| \Phi\|_2 (C_1N)^{2N}}.
\ee
Next, recall the assumption that $s\geq C_1N\left(c\cdot\norm{\Phi}_2/(d_1\cdots d_{N-1})\right)^{1/(2N)}$. Then the deficiency margin in (\ref{eq:deficiency_margin_final}) is at least
$$
\frac{\left(C_1N(c \norm{\Phi}_2 / (d_1 \cdots d_{N-1}))^{1/(2N)}\right)^{2N} d_1 \cdots d_{N-1}}{\| \Phi\|_2 (C_1N)^{2N}} = c,
$$
completing the proof.
\end{proof}
\subsection{Proof of Claim~\ref{claim:diverge_balance}} \label{app:proofs:diverge_balance}
\begin{proof}
The target matrices~$\Phi$ that will be used to prove the claim satisfy $\sigma_{min}(\Phi)=1$.
We may assume without loss of generality that $c \geq 3/4$, since a matrix that has deficiency margin $c$ with respect to $\Phi$ also has deficiency margin $c'$ with respect to $\Phi$ for any $c' < c$.
We first consider the case $d=1$, so that the target and all matrices are simply real numbers; we will make a slight abuse of notation in identifying $1\times1$ matrices with their unique entries. We set $\Phi = 1$. For all choices of $\eta$, we will set the initializations $W_1(0), \ldots, W_N(0)$ so that $W_{1:N}(0) = c$. Then
$$
\| W_{1:N}(0) - \Phi\|_F = |W_{1:N}(0) - \Phi| = 1-c = \sigma_{min}(\Phi) - c,
$$
so the initial end-to-end matrix $W_{1:N}(0) \in \BR^{1 \times 1}$ has deficiency margin $c$. Now fix $\eta$. Choose $A \in \BR$ with
\begin{eqnarray}
\label{eq:define_A}
A &=& \max \left\{\sqrt{\eta N}, \frac{2}{\eta(1-c) c^{(N-1)/N}}, 2000,20/\eta,\left( \frac{20 \cdot 10^{2N-1}}{\eta^{2N}}\right)^{1/(2N-2)} \right\}.
\end{eqnarray}
We will set:
\begin{equation}
\label{eq:define_wj0}
W_j(0) = \begin{cases}
Ac^{1/N} \quad : \quad 1 \leq j \leq N/2 \\
c^{1/N}/A \quad : \quad N/2 < j \leq N,
\end{cases}
\end{equation}
so that $W_{1:N}(0) = c$. Then since $L^N(W_1, \ldots, W_N) = \frac 12 (1 - W_N \cdots W_1)^2$, the gradient descent updates are given by
$$
W_{j}(t+1) = W_j(t) - \eta (W_{1:N}(t) - 1) \cdot W_{1:j-1}(t) W_{j+1:N}(t),
$$
where we view $W_1(t), \ldots, W_N(t)$ as real numbers. This gives
\begin{equation}
W_j(1) = \begin{cases}
c^{1/N}A - \eta (c-1)c^{(N-1)/N}/A \quad : \quad 1 \leq j \leq N/2 \\
c^{1/N}/A - \eta (c-1) c^{(N-1)/N}A \quad : \quad N/2 < j \leq N.
\end{cases}\nonumber
\end{equation}
Since $3/4 \leq c < 1$ and $-\eta(c-1)c^{(N-1)/N}/A \geq 0$, we have that $A/2 \leq 3A/4 \leq W_j(1)$ for $1 \leq j \leq N/2$. Next, since $\frac{1-c}{1-c^{1/N}} \leq N$ for $0 \leq c < 1$, we have that $A^2 \geq \eta N \geq \frac{\eta (1-c)}{1-c^{1/N}}$, which implies that $A^2 \geq c^{1/N}A^2 + \eta (1-c)$, or $c^{1/N} A + \frac{\eta(1-c)}{A} \leq A$. Thus $W_j(1) \leq A$ for $1 \leq j \leq N/2$.
Similarly, using the same bound $3/4 \leq c < 1$ and the fact that $\eta (1-c) c^{(N-1)/N} A \geq 2$ we get $\frac{3}{16} \eta A \leq W_j(1) \leq \eta A$ for $N/2 < j \leq N$. In particular, for all $1 \leq j \leq N$, we have that $\frac{\min\{\eta, 1\}}{10} A \leq W_j(1) \leq \max\{\eta, 1\} A$.
We prove the following lemma by induction:
\begin{lemma}
\label{lemma:weight_explode}
For each $t \geq 1$, the real numbers $W_1(t), \ldots, W_N(t)$ all have the same sign and this sign alternates for each integer $t$. Moreover, there are real numbers $2 \leq B(t) < C(t)$ for $t \geq 1$ such that for $1 \leq j \leq N$, $B(t) \leq |W_j(t)| \leq C(t)$ and $\eta B(t)^{2N-1} \geq 20 C(t)$.
\end{lemma}
\begin{proof}
First we claim that we may take $B(1) = \frac{\min\{\eta, 1\}}{10} A$ and $C(1) = \max\{ \eta, 1\} A$. We have shown above that $B(1) \leq W_j(1) \leq C(1)$ for all $j$. Next we establish that $\eta B(1)^{2N-1} \geq 20 C(1)$. If $\eta \leq 1$, then
$$
\eta B(1)^{2N-1} =\eta^{2N} \cdot (A/10)^{2N-1} \geq 20 A = 20 C(1),
$$
where the inequality follows from $A \geq \left(\frac{20 \cdot 10^{2N-1}}{\eta^{2N}}\right)^{1/(2N-2)}$ by definition of $A$. If $\eta \geq 1$, then
$$
\eta B(1)^{2N-1} = \eta (A/10)^{2N-1} \geq 20 \eta A = 20 C(1),
$$
where the inequality follows from $A \geq 2000 \geq \left(20 \cdot 10^{2N-1}\right)^{1/(2N-2)}$ by definition of $A$.
Now, suppose the statement of Lemma \ref{lemma:weight_explode} holds for some $t$. Suppose first that $W_j(t)$ are all positive for $1 \leq j \leq N$. Then for all $j$, as $B(t) \geq 2$, and $\eta B(t)^{2N-1} \geq 20 C(t)$,
\begin{eqnarray}
W_j(t+1) & \leq & C(t) - \eta \cdot (B(t)^N - 1) \cdot B(t)^{N-1} \nonumber\\
& \leq & C(t) - \frac{\eta}{2} B(t)^{2N-1}\nonumber\\
& \leq & -9 C(t)\nonumber,
\end{eqnarray}
which establishes that $W_j(t+1)$ is negative for all $j$.
Moreover,
\begin{eqnarray}
W_j(t+1) & \geq & -\eta (C(t)^N - 1) \cdot C(t)^{N-1} \nonumber\\
& \geq & -\eta C(t)^{2N-1}. \nonumber
\end{eqnarray}
Now set $B(t+1) = 9C(t)$ and $C(t+1) = \eta C(t)^{2N-1}$. Since $N \geq 2$, we have that
$$
\eta B(t+1)^{2N-1} = \eta (9C(t))^{2N-1} \geq \eta 9^3 C(t)^{2N-1} > 20 \eta C(t)^{2N-1} = 20 C(t+1).
$$
The case that all $W_j(t)$ are negative for $1 \leq j \leq N$ is nearly identical, with the same values for $B(t+1), C(t+1)$ in terms of $B(t), C(t)$, except all $W_{j}(t+1)$ will be positive.
This establishes the inductive step and completes the proof of Lemma \ref{lemma:weight_explode}.
\end{proof}
By Lemma \ref{lemma:weight_explode}, we have that for all $t \geq 1$, $L^N(W_1(t), \ldots, W_N(t)) = \frac 12 (W_{1:N}(t) - 1)^2 \geq \frac 12 (2^N - 1)^2 > 0$, thus completing the proof of Claim \ref{claim:diverge_balance} for the case where all dimensions are equal to 1.
For the general case where $d_0 = d_1 = \cdots = d_N = d$ for some $d \geq 1$, we set $\Phi = I_d$, and given $c, \eta$, we set $W_j(0)$ to be the $d \times d$ diagonal matrix where all diagonal entries except the first one are equal to 1, and where the first diagonal entry is given by Equation~\eqref{eq:define_wj0}, where $A$ is given by Equation~\eqref{eq:define_A}. It is easily verified that all entries of $W_j(t)$, $1 \leq j \leq N$, except for the first diagonal element of each matrix, will remain constant for all $t \geq 0$, and that the first diagonal elements evolve exactly as in the 1-dimensional case presented above. Therefore the loss in the $d$-dimensional case is equal to the loss in the 1-dimensional case, which is always greater than some positive constant.
\end{proof}
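The 1-dimensional construction above is easy to simulate (a sketch; $N = 4$, $c = 3/4$ and $\eta = 0.01$ are illustrative choices, with $A$ as defined in the proof):

```python
import numpy as np

# Simulation of the 1-D construction from the proof of Claim (diverge_balance):
# target Phi = 1, N = 4 layers, and the unbalanced initialization with
# W_{1:N}(0) = c. The loss blows up instead of converging.
N, c, eta = 4, 0.75, 0.01
A = max(np.sqrt(eta * N),
        2 / (eta * (1 - c) * c ** ((N - 1) / N)),
        2000,
        20 / eta,
        (20 * 10 ** (2 * N - 1) / eta ** (2 * N)) ** (1 / (2 * N - 2)))
W = np.array([A * c ** (1 / N)] * (N // 2) + [c ** (1 / N) / A] * (N - N // 2))
assert np.isclose(W.prod(), c)       # W_{1:N}(0) = c, i.e. deficiency margin c

losses = []
for t in range(3):
    prod = W.prod()
    losses.append(0.5 * (prod - 1.0) ** 2)
    W = W - eta * (prod - 1.0) * (prod / W)   # gradient of (1/2)(W_N...W_1 - 1)^2
print(losses[0] < 1.0 < losses[-1])  # True: the loss explodes
```

Consistently with the remark below, the simulated loss grows at a doubly-exponential-looking rate within a handful of iterations.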
We remark that the proof of Claim \ref{claim:diverge_balance} establishes that the loss $\ell(t):=L^N(W_1(t), \ldots, W_N(t))$ grows at least exponentially in $t$ for the chosen initialization. Such behavior, in which gradients and weights explode, indeed takes place in deep learning practice if initialization is not chosen with care.
\subsection{Proof of Claim~\ref{claim:diverge_margin}} \label{app:proofs:diverge_margin}
\begin{proof}
We will show that a target matrix $\Phi\in\R^{d\times{d}}$ which is symmetric with at least one negative eigenvalue, along with identity initialization ($W_j(0) = I_d$, $\forall{j}\in\{1,\ldots,N\}$), satisfy the conditions of the claim.
First, note that non-stationarity of initialization is met, as for any $1 \leq j \leq N$,
$$
\frac{\partial L^N(W_1(0), \ldots, W_N(0))}{\partial W_j(0)} = W_{j+1:N}(0)^\top (W_{1:N}(0) - \Phi) W_{1:j-1}(0)^\top = I_d - \Phi \neq \mathbf{0},
$$
where the last step holds since $\Phi$ has a negative eigenvalue, and hence $I_d - \Phi$ has an eigenvalue greater than $1$.
To analyze gradient descent we use the following result, which was established in \cite{bartlett2018gradient}:
\begin{lemma}[\cite{bartlett2018gradient}, Lemma 6]
\label{lemma:failure_idinit}
If $W_1(0), \ldots, W_N(0)$ are all initialized to identity, $\Phi$ is symmetric, $\Phi = UDU^\top$ is a diagonalization of $\Phi$, and gradient descent is performed with any learning rate, then for each $t \geq 0$ there is a diagonal matrix $\hat D(t)$ such that $W_j(t) = U\hat D(t) U^\top$ for each $1 \leq j \leq N$.
\end{lemma}
By Lemma \ref{lemma:failure_idinit}, for any choice of learning rate $\eta$, the end-to-end matrix at time $t$ is given by $W_{1:N}(t) = U \hat D(t)^N U^\top$. As long as some diagonal element of $D$ is negative, say equal to $-\lambda < 0$, then
$$
\ell(t)=L^N(W_1(t), \ldots, W_N(t)) = \frac 12 \| W_{1:N}(t) - \Phi\|_F^2 = \frac 12 \| \hat D(t)^N - D \|_F^2 \geq \frac 12 \lambda^2 > 0.
$$
\end{proof}
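The stalling behavior established in the proof can be observed numerically. The following sketch (our own illustration, not part of the formal argument) runs gradient descent on a depth-$2$ network with identity initialization and the symmetric target $\Phi=\mathrm{diag}(1,-1)$; the negative eigenvalue keeps the loss bounded below by $\frac{1}{2}\lambda^2=0.5$:

```python
import numpy as np

# Symmetric target with a negative eigenvalue (lambda = 1), depth N = 2.
Phi = np.diag([1.0, -1.0])
W1, W2 = np.eye(2), np.eye(2)   # identity initialization
eta = 0.01

for _ in range(2000):
    E = W2 @ W1 - Phi                # end-to-end residual W_{1:2} - Phi
    g1, g2 = W2.T @ E, E @ W1.T      # layer-wise gradients of L^2
    W1, W2 = W1 - eta * g1, W2 - eta * g2

loss = 0.5 * np.linalg.norm(W2 @ W1 - Phi, "fro") ** 2
# Per the claim, the loss can never drop below (1/2) * lambda^2 = 0.5.
assert loss >= 0.5 - 1e-12
```

Gradient descent converges here to the suboptimal value $0.5$: the second diagonal entry of each layer shrinks to zero, so the end-to-end matrix cannot produce a negative eigenvalue.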
\section{$\ell_2$ Loss over Whitened Data} \label{app:whitened}
Recall the $\ell_2$~loss of a linear predictor~$W\in\R^{d_y\times{d}_x}$ as defined in Section~\ref{sec:prelim}:
$$L(W)=\frac{1}{2m}\|WX-Y\|_F^2
\text{\,,}$$
where~$X\in\R^{d_x\times{m}}$ and~$Y\in\R^{d_y\times{m}}$.
Define $\Lambda_{xx}:=\tfrac{1}{m}XX^\top\in\R^{d_x\times{d}_x}$, $\Lambda_{yy}:=\tfrac{1}{m}YY^\top\in\R^{d_y\times{d}_y}$ and $\Lambda_{yx}:=\tfrac{1}{m}YX^\top\in\R^{d_y\times{d}_x}$.
Using the relation $\norm{A}_F^2=\Tr(AA^\top)$, we have:
\beas
L(W)&=&\tfrac{1}{2m}\Tr\big((WX-Y)(WX-Y)^\top\big) \\[1mm]
&=&\tfrac{1}{2m}\Tr(WXX^\top{W}^\top)-\tfrac{1}{m}\Tr(WXY^\top)+\tfrac{1}{2m}\Tr(YY^\top) \\[1mm]
&=&\tfrac{1}{2}\Tr(W\Lambda_{xx}{W}^\top)-\Tr(W\Lambda_{yx}^\top)+\tfrac{1}{2}\Tr(\Lambda_{yy})
\text{\,.}
\eeas
By definition, when data is whitened, $\Lambda_{xx}$~is equal to identity, yielding:
\beas
L(W)&=&\tfrac{1}{2}\Tr(W{W}^\top)-\Tr(W\Lambda_{yx}^\top)+\tfrac{1}{2}\Tr(\Lambda_{yy}) \\[1mm]
&=&\tfrac{1}{2}\Tr\big((W-\Lambda_{yx})(W-\Lambda_{yx})^\top\big)-\tfrac{1}{2}\Tr(\Lambda_{yx}\Lambda_{yx}^\top)+\tfrac{1}{2}\Tr(\Lambda_{yy}) \\[1mm]
&=&\tfrac{1}{2}\norm{W-\Lambda_{yx}}_F^2+c
\text{\,,}
\eeas
where~$c:=-\tfrac{1}{2}\Tr(\Lambda_{yx}\Lambda_{yx}^\top)+\tfrac{1}{2}\Tr(\Lambda_{yy})$ does not depend on~$W$.
Hence we arrive at Equation~\eqref{eq:loss_whitened}.
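The derivation above can be sanity-checked numerically. In the sketch below, the dimensions and data are arbitrary illustrative choices, and whitening is imposed by constructing $X$ from a matrix with orthonormal columns, so that $\Lambda_{xx}=\tfrac{1}{m}XX^\top=I$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, m = 5, 3, 100

# Whiten: build X with (1/m) X X^T = I via a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((m, d_x)))  # Q has orthonormal columns
X = np.sqrt(m) * Q.T
Y = rng.standard_normal((d_y, m))
W = rng.standard_normal((d_y, d_x))

Lam_yx = Y @ X.T / m
Lam_yy = Y @ Y.T / m

direct = 0.5 / m * np.linalg.norm(W @ X - Y, "fro") ** 2
c = -0.5 * np.trace(Lam_yx @ Lam_yx.T) + 0.5 * np.trace(Lam_yy)
reduced = 0.5 * np.linalg.norm(W - Lam_yx, "fro") ** 2 + c

assert np.allclose(X @ X.T / m, np.eye(d_x))  # data is indeed whitened
assert np.isclose(direct, reduced)            # the two loss expressions agree
```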
\section{Conclusion} \label{sec:conc}
For deep linear neural networks, we have rigorously proven convergence of gradient descent to global minima, at a linear rate, provided that the initial weight matrices are approximately balanced and the initial end-to-end matrix has positive deficiency margin.
The result applies to networks with arbitrary depth, and any configuration of input/output/hidden dimensions that supports full rank, \ie~in which no hidden layer has dimension smaller than both the input and output.
Our assumptions on initialization~---~approximate balancedness and deficiency margin~---~are both necessary, in the sense that violating any one of them may lead to convergence failure, as we demonstrated explicitly.
Moreover, for networks with output dimension~$1$ (scalar regression), we have shown that a balanced initialization, \ie~a random choice of the end-to-end matrix followed by a balanced partition across all layers, ensures that the assumptions are met, and thus that convergence takes place, with constant probability.
Rigorously proving efficient convergence with significant probability under customary layer-wise independent initialization remains an open problem.
The recent work of~\citet{shamir2018exponential} suggests that this may not be possible, as at least in some settings, the number of iterations required for convergence is exponential in depth with overwhelming probability.
This negative result, a theoretical manifestation of the ``vanishing gradient problem'', is circumvented by balanced initialization.
Through simple experiments we have shown that the latter can lead to favorable convergence in deep learning practice, as it does in theory.
Further investigation of balanced initialization, including development of variants for convolutional layers, is regarded as a promising direction for future research.
The analysis in this paper uncovers special properties of the optimization landscape in the vicinity of gradient descent trajectories.
We expect similar ideas to prove useful in further study of gradient descent on non-convex objectives, including training losses of deep non-linear neural networks.
\section{Convergence Analysis} \label{sec:converge}
In this section we establish convergence of gradient descent for deep linear neural networks (Equations~\eqref{eq:gd} and~\eqref{eq:lnn_obj}) by directly analyzing the trajectories taken by the algorithm.
We begin in Subsection~\ref{sec:converge:balance_margin} with a presentation of two concepts central to our analysis: \emph{approximate balancedness} and \emph{deficiency margin}.
These facilitate our main convergence theorem, delivered in Subsection~\ref{sec:converge:theorem}.
We conclude in Subsection~\ref{sec:converge:balance_init} by deriving a convergence guarantee that holds with constant probability over a random initialization.
\subsection{Approximate Balancedness and Deficiency Margin} \label{sec:converge:balance_margin}
In our context, the notion of approximate balancedness is formally defined as follows:
\vspace{1mm}
\begin{definition}
\label{def:balance}
For~$\delta\geq0$, we say that the matrices $W_j\in\R^{d_j\times{d}_{j-1}}$, $j{=}1,\ldots,N$, are $\delta$-\emph{balanced} if:
$$\norm{W_{j+1}^{\top}W_{j+1}-W_{j}W_j^\top}_F\leq\delta\quad,\,\forall{j}\in\{1,\ldots,N-1\}
\text{\,.}$$
\end{definition}
Note that in the case of $0$-balancedness, \ie~$W_{j+1}^{\top}W_{j+1}=W_{j}W_j^\top$, $\forall{j}\in\{1,\ldots,N-1\}$, all matrices~$W_j$ share the same set of non-zero singular values.
Moreover, as shown in the proof of Theorem~1 in~\citet{arora2018optimization}, this set is obtained by taking the $N$-th root of each non-zero singular value in the end-to-end matrix~$W_{1:N}$.
We will establish approximate versions of these facts for $\delta$-balancedness with $\delta>0$, and justify their use by showing that if the weights of a linear neural network are initialized to be approximately balanced, they will remain so throughout the iterations of gradient descent.
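Computationally, the smallest $\delta$ for which a set of weights is $\delta$-balanced is a maximum of Frobenius norms over consecutive layers. The sketch below (function name ours) also illustrates the $0$-balanced case via an SVD-based split of a matrix across two layers:

```python
import numpy as np

def balancedness(Ws):
    """Smallest delta for which the given weights are delta-balanced:
    max_j || W_{j+1}^T W_{j+1} - W_j W_j^T ||_F over consecutive layers."""
    return max(
        np.linalg.norm(W2.T @ W2 - W1 @ W1.T, "fro")
        for W1, W2 in zip(Ws[:-1], Ws[1:])
    )

rng = np.random.default_rng(0)
# A 0-balanced factorization: split a matrix A across two layers via its SVD.
A = rng.standard_normal((3, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
S_half = np.diag(np.sqrt(s))       # singular values to the power 1/N, N = 2
W1, W2 = S_half @ Vt, U @ S_half   # W2 @ W1 reconstructs A
print(balancedness([W1, W2]))      # ~0 (perfect balancedness)
```

Note that the factors share singular values $s^{1/2}$, matching the $N$-th-root property discussed above.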
The condition of approximate balancedness at initialization is trivially met in the special case of linear residual networks ($d_0=\cdots=d_N=d$ and $W_1(0)=\cdots=W_N(0)=I_d$).
Moreover, as Claim~\ref{claim:balance_init} in Appendix~\ref{app:balance_margin_init} shows, for a given~$\delta>0$, the customary initialization via random Gaussian distribution with mean zero leads to approximate balancedness with high probability if the standard deviation is sufficiently small.
\medskip
The second concept we introduce~---~deficiency margin~---~refers to how far a ball around the target is from containing rank-deficient (\ie~low rank) matrices.
\vspace{1mm}
\begin{definition}
\label{def:margin}
Given a target matrix~$\Phi\in\R^{d_N\times{d}_0}$ and a constant $c>0$, we say that a matrix~$W\in\R^{d_N\times{d}_0}$ has \emph{deficiency margin~$c$ with respect to~$\Phi$} if:\note{
Note that deficiency margin~$c>0$ with respect to~$\Phi$ implies~$\sigma_{min}(\Phi)>0$, \ie~$\Phi$ has full rank.
Our analysis can be extended to account for rank-deficient~$\Phi$ by replacing~$\sigma_{min}(\Phi)$ in Equation~\eqref{eq:margin} with the smallest positive singular value of~$\Phi$, and by requiring that the end-to-end matrix~$W_{1:N}$ be initialized such that its left and right null spaces coincide with those of~$\Phi$.
Relaxation of this requirement is a direction for future work.
}
\be
\norm{W-\Phi}_F\leq\sigma_{min}(\Phi)-c
\label{eq:margin}
\text{\,.}
\ee
\end{definition}
The term ``deficiency margin'' alludes to the fact that if Equation~\eqref{eq:margin} holds, every matrix~$W'$ whose distance from~$\Phi$ is no greater than that of~$W$ has all of its singular values bounded away from zero by~$c$:
\vspace{1mm}
\begin{claim}
\label{claim:margin_interp}
Suppose~$W$ has deficiency margin~$c$ with respect to~$\Phi$.
Then, any matrix~$W'$ (of same size as~$\Phi$ and~$W$) for which $\norm{W'-\Phi}_F\leq\norm{W-\Phi}_F$ satisfies~$\sigma_{min}(W')\geq{c}$.
\end{claim}
\vspace{-3mm}
\begin{proof}
Our proof relies on the inequality $\sigma_{min}(A+B)\geq\sigma_{min}(A)-\sigma_{max}(B)$~---~see Appendix~\ref{app:proofs:margin_interp}.
\end{proof}
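Claim~\ref{claim:margin_interp} can be checked numerically. In the sketch below, the construction of $\Phi$ and $W$ is our own illustrative choice: $W$ is placed at distance exactly $\sigma_{min}(\Phi)-c$ from $\Phi$, and random matrices $W'$ at least as close to $\Phi$ are verified to satisfy $\sigma_{min}(W')\geq c$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Build a target with sigma_min(Phi) = 1 via orthogonal factors.
Q1, _ = np.linalg.qr(rng.standard_normal((d, d)))
Q2, _ = np.linalg.qr(rng.standard_normal((d, d)))
Phi = Q1 @ np.diag([3.0, 2.5, 2.0, 1.0]) @ Q2

c = 0.4
D = rng.standard_normal((d, d))
# ||W - Phi||_F = 1 - c = sigma_min(Phi) - c, i.e. deficiency margin exactly c.
W = Phi + (1.0 - c) * D / np.linalg.norm(D, "fro")

# Any W' at least as close to Phi as W must have sigma_min(W') >= c.
for _ in range(100):
    D2 = rng.standard_normal((d, d))
    W_prime = Phi + rng.uniform(0, 1 - c) * D2 / np.linalg.norm(D2, "fro")
    assert np.linalg.svd(W_prime, compute_uv=False)[-1] >= c
```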
\vspace{-2mm}
We will show that if the weights $W_1,\ldots,W_N$ are initialized such that (they are approximately balanced and) the end-to-end matrix~$W_{1:N}$ has deficiency margin~$c>0$ with respect to the target~$\Phi$, convergence of gradient descent to global minimum is guaranteed.\note{
In fact, a deficiency margin implies that all critical points in the respective sublevel set (set of points with smaller loss value) are global minima.
This however is far from sufficient for proving convergence, as sublevel sets are unbounded, and the loss landscape over them is non-convex and non-smooth.
Indeed, we show in Appendix~\ref{app:fail} that deficiency margin alone is not enough to ensure convergence~---~without approximate balancedness, the lack of smoothness can cause divergence.
}
Moreover, convergence will be at least as fast as a particular rate, one that improves as $c$ grows larger.
This suggests that from a theoretical perspective, it is advantageous to initialize a linear neural network such that the end-to-end matrix has a large deficiency margin with respect to the target.
Claim~\ref{claim:margin_init} in Appendix~\ref{app:balance_margin_init} provides information on how likely deficiency margins are in the case of a single output model (scalar regression) subject to customary zero-centered Gaussian initialization.
It shows in particular that if the standard deviation of the initialization is sufficiently small, the probability of a deficiency margin being met is close to~$0.5$; on the other hand, for this deficiency margin to have considerable magnitude, a non-negligible standard deviation is required.
\medskip
Taking into account the need for both approximate balancedness and deficiency margin at initialization, we observe a delicate trade-off under the common setting of Gaussian perturbations around zero:
if the standard deviation is small, the weights are likely to be highly balanced and a deficiency margin is likely to be met;
however, an overly small standard deviation renders a large deficiency margin improbable, and therefore makes fast convergence unlikely;
on the opposite end, a large standard deviation jeopardizes both balancedness and deficiency margin, putting convergence itself at risk.
This trade-off is reminiscent of empirical phenomena in deep learning, by which small initialization can bring forth efficient convergence, while if exceedingly small, rate of convergence may plummet (``vanishing gradient problem''), and if made large, divergence becomes inevitable (``exploding gradient problem'').
The common resolution of residual connections~\citep{he2016deep} is analogous in our context to linear residual networks, which ensure perfect balancedness, and allow large deficiency margin if the target is not too far from identity.
\subsection{Main Theorem} \label{sec:converge:theorem}
Using approximate balancedness (Definition~\ref{def:balance}) and deficiency margin (Definition~\ref{def:margin}), we present our main theorem~---~a guarantee for linear convergence to global minimum:
\vspace{1mm}
\begin{theorem}
\label{theorem:converge}
Assume that gradient descent is initialized such that the end-to-end matrix~$W_{1:N}(0)$ has deficiency margin~$c>0$ with respect to the target~$\Phi$, and the weights $W_1(0),\ldots,W_N(0)$ are $\delta$-balanced with $\delta=c^2\big/\big(256\cdot{N}^3\cdot\norm{\Phi}_{F}^{2(N-1)/N}\big)$.
Suppose also that the learning rate~$\eta$ meets:
\be
\eta\leq\frac{c^{(4N-2)/N}}{6144\cdot{N}^3\cdot\norm{\Phi}_{F}^{(6N-4)/N}}
\label{eq:descent_eta}
\text{~.}
\ee
Then, for any~$\epsilon>0$ and:
\be
T\geq\frac{1}{\eta\cdot{c}^{2(N-1)/N}}\cdot\log\left(\frac{\ell(0)}{\epsilon}\right)
\label{eq:converge_t}
\text{\,,}
\ee
the loss at iteration~$T$ of gradient descent~---~$\ell(T)$~---~is no greater than~$\epsilon$.
\end{theorem}
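For concreteness, the theorem's bounds on $\delta$, $\eta$ and $T$ can be evaluated numerically. The helper below (function names ours) copies the expressions verbatim; the example values for $c$, $N$ and $\|\Phi\|_F$ are arbitrary illustrative choices:

```python
import numpy as np

def theorem_constants(c, N, phi_fro):
    """Balancedness requirement and learning-rate bound from the theorem,
    with expressions copied verbatim."""
    delta = c**2 / (256 * N**3 * phi_fro**(2 * (N - 1) / N))
    eta_max = c**((4 * N - 2) / N) / (6144 * N**3 * phi_fro**((6 * N - 4) / N))
    return delta, eta_max

def iterations_needed(eta, c, N, loss0, eps):
    """Iteration count T guaranteeing loss <= eps."""
    return np.log(loss0 / eps) / (eta * c**(2 * (N - 1) / N))

# Example: depth 3, ||Phi||_F = 2, deficiency margin c = 0.5.
delta, eta_max = theorem_constants(c=0.5, N=3, phi_fro=2.0)
T = iterations_needed(eta_max, c=0.5, N=3, loss0=1.0, eps=1e-5)
```

The constants are deliberately conservative: for these modest values, $\delta\approx 10^{-5}$ and $\eta_{\max}\approx 10^{-8}$, so in practice much larger learning rates typically suffice (as in the experiments of Section~\ref{sec:exper}).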
\subsubsection{On the Assumptions Made}
The assumptions made in Theorem~\ref{theorem:converge}~---~approximate balancedness and deficiency margin at initialization~---~are both necessary, in the sense that violating any one of them may lead to convergence failure.
We demonstrate this in Appendix~\ref{app:fail}.
In the special case of linear residual networks (uniform dimensions and identity initialization), a sufficient condition for the assumptions to be met is that the target matrix have (Frobenius) distance less than~$0.5$ from identity.
This strengthens one of the central results in~\citet{bartlett2018gradient} (see Section~\ref{sec:related}).
For a setting of random near-zero initialization, we present in Subsection~\ref{sec:converge:balance_init} a scheme that, when the output dimension is~$1$ (scalar regression), ensures assumptions are satisfied (and therefore gradient descent efficiently converges to global minimum) with constant probability.
It is an open problem to fully analyze gradient descent under the common initialization scheme of zero-centered Gaussian perturbations applied to each layer independently.
We treat this scenario in Appendix~\ref{app:balance_margin_init}, providing quantitative results concerning the likelihood of each assumption (approximate balancedness or deficiency margin) being met individually.
However the question of how likely it is that both assumptions be met simultaneously, and how that depends on the standard deviation of the Gaussian, is left for future work.
An additional point to make is that Theorem~\ref{theorem:converge} poses a structural limitation on the linear neural network.
Namely, it requires the dimension of each hidden layer~($d_i,~i=1,\ldots,N-1$) to be greater than or equal to the minimum between those of the input~($d_0$) and output~($d_N$).
Indeed, in order for the initial end-to-end matrix~$W_{1:N}(0)$ to have deficiency margin~$c>0$, it must (by Claim~\ref{claim:margin_interp}) have full rank, and this is only possible if there is no intermediate dimension~$d_i$ smaller than~$\min\{d_0,d_N\}$.
We make no other assumptions on network architecture (depth, input/output/hidden dimensions).
\subsubsection{Proof}
The cornerstone upon which Theorem~\ref{theorem:converge} rests is the following lemma, showing non-trivial descent whenever $\sigma_{min}(W_{1:N})$ is bounded away from zero:
\vspace{1mm}
\begin{lemma}
\label{lemma:descent}
Under the conditions of Theorem~\ref{theorem:converge}, we have that for every $t=0,1,2,\ldots$~:\note{
Note that the term~$\frac{dL^1}{dW}(W_{1:N}(t))$ below stands for the gradient of~$L^1(\cdot)$~---~a convex loss over (directly parameterized) linear models (Equation~\eqref{eq:lin_obj})~---~at the point~$W_{1:N}(t)$~---~the end-to-end matrix of the network at iteration~$t$.
It is therefore (see Equation~\eqref{eq:gd_loss}) non-zero anywhere but at a global minimum.
}
\be
\ell(t+1)\leq\ell(t)-\frac{\eta}{2}\cdot\sigma_{min}\big(W_{1:N}(t)\big)^{\frac{2(N-1)}{N}}\cdot\norm{\frac{dL^1}{dW}\big(W_{1:N}(t)\big)}_F^2
\label{eq:descent}
\text{~.}
\ee
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:descent} (in idealized setting; for complete proof see Appendix~\ref{app:proofs:descent})]
\hspace{-0.5mm}We prove the lemma here for the idealized setting of perfect initial balancedness~($\delta=0$):
$$W_{j+1}^{\top}(0)W_{j+1}(0)=W_{j}(0)W_j^\top(0)\quad,~\forall{j}\in\{1,\ldots,N-1\}
\text{~,}$$
and infinitesimally small learning rate~($\eta\to0^+$)~---~\emph{gradient flow}:
$$\dot{W}_j(\tau)=-\frac{\partial{L}^N}{\partial{W}_j}\big(W_1(\tau),\ldots,W_N(\tau)\big)\quad,~j=1,\ldots,N\quad,~\tau\in[0,\infty)
\text{~,}$$
where~$\tau$ is a continuous time index, and the dot symbol (in~$\dot{W}_j(\tau)$) signifies differentiation with respect to time.
The complete proof, for the realistic case of approximate balancedness and discrete updates~($\delta,\eta>0$), is similar but much more involved, and appears in Appendix~\ref{app:proofs:descent}.
Recall that~$\ell(t)$~---~the objective value at iteration~$t$ of gradient descent~---~is equal to~$L^1(W_{1:N}(t))$ (see Equation~\eqref{eq:gd_loss}).
Accordingly, for the idealized setting in consideration, we would like to show:
\be
\frac{d}{d\tau}L^1\left(W_{1:N}(\tau)\right)\leq-\frac{1}{2}\sigma_{min}\big(W_{1:N}(\tau)\big)^{\frac{2(N-1)}{N}}\cdot\norm{\frac{dL^1}{dW}\big(W_{1:N}(\tau)\big)}_F^2
\label{eq:descent_ideal}
\text{\,.}
\ee
We will see that a stronger version of Equation~\eqref{eq:descent_ideal} holds, namely, one without the $1/2$ factor (which only appears due to discretization).
By (Theorem~$1$ and Claim~$1$ in)~\citet{arora2018optimization}, the weights $W_1(\tau),\ldots,W_N(\tau)$ remain balanced throughout the entire optimization, and that implies the end-to-end matrix~$W_{1:N}(\tau)$ moves according to the following differential equation:
\be
vec\left(\dot{W}_{1:N}(\tau)\right)=-{P}_{W_{1:N}(\tau)}\cdot{vec}\left(\frac{dL^1}{dW}\left(W_{1:N}(\tau)\right)\right)
\label{eq:e2e_gf}
\text{\,,}
\ee
where~$vec(A)$, for an arbitrary matrix~$A$, stands for vectorization in column-first order, and $P_{W_{1:N}(\tau)}$ is a positive semidefinite matrix whose eigenvalues are all greater than or equal to $\sigma_{min}(W_{1:N}(\tau))^{2(N-1)/N}$.
Taking the derivative of~$L^1(W_{1:N}(\tau))$ with respect to time, we obtain the sought-after Equation~\eqref{eq:descent_ideal} (with no $1/2$ factor):
\beas
\frac{d}{d\tau}L^1\left(W_{1:N}(\tau)\right)&=&\inprod{vec\left(\frac{dL^1}{dW}\big(W_{1:N}(\tau)\big)\right)}{vec\left(\dot{W}_{1:N}(\tau)\right)} \\
&=&\inprod{vec\left(\frac{dL^1}{dW}\big(W_{1:N}(\tau)\big)\right)}{-{P}_{W_{1:N}(\tau)}\cdot{vec}\left(\frac{dL^1}{dW}\left(W_{1:N}(\tau)\right)\right)} \\
&\leq&-\sigma_{min}\big(W_{1:N}(\tau)\big)^{\frac{2(N-1)}{N}}\cdot\norm{vec\left(\frac{dL^1}{dW}\big(W_{1:N}(\tau)\big)\right)}^2 \\
&=&-\sigma_{min}\big(W_{1:N}(\tau)\big)^{\frac{2(N-1)}{N}}\cdot\norm{\frac{dL^1}{dW}\big(W_{1:N}(\tau)\big)}_F^2
\text{\,.}
\eeas
The first transition here (equality) is an application of the chain rule;
the second (equality) plugs in Equation~\eqref{eq:e2e_gf};
the third (inequality) results from the fact that the eigenvalues of the symmetric matrix $P_{W_{1:N}(\tau)}$ are no smaller than $\sigma_{min}(W_{1:N}(\tau))^{2(N-1)/N}$ (recall that~$\norm{\cdot}$ stands for Euclidean norm);
and the last (equality) is trivial~---~$\norm{A}_F=\norm{vec(A)}$ for any matrix~$A$.
\end{proof}
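The descent inequality of Lemma~\ref{lemma:descent} can also be checked step-by-step in a small discrete setting. The sketch below (our own construction) uses a perfectly balanced initialization whose end-to-end matrix has a deficiency margin, and verifies Equation~\eqref{eq:descent} at every iteration:

```python
import numpy as np

def end_to_end(Ws):
    """Product W_N ... W_1 for weights listed as [W_1, ..., W_N]."""
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P

def layer_grads(Ws, Phi):
    """Gradients of L^N w.r.t. each W_j: W_{j+1:N}^T (W_{1:N} - Phi) W_{1:j-1}^T."""
    N = len(Ws)
    E = end_to_end(Ws) - Phi
    gs = []
    for j in range(N):
        left = end_to_end(Ws[:j]) if j > 0 else np.eye(Phi.shape[1])
        right = end_to_end(Ws[j + 1:]) if j < N - 1 else np.eye(Phi.shape[0])
        gs.append(right.T @ E @ left.T)
    return gs

N, eta = 2, 0.01
Phi = 2.0 * np.eye(2)
Ws = [np.sqrt(1.5) * np.eye(2) for _ in range(N)]  # 0-balanced, with margin

for _ in range(50):
    E = end_to_end(Ws) - Phi                       # equals dL^1/dW at W_{1:N}
    loss = 0.5 * np.linalg.norm(E, "fro") ** 2
    smin = np.linalg.svd(end_to_end(Ws), compute_uv=False)[-1]
    bound = loss - 0.5 * eta * smin ** (2 * (N - 1) / N) \
                 * np.linalg.norm(E, "fro") ** 2
    Ws = [W - eta * g for W, g in zip(Ws, layer_grads(Ws, Phi))]
    new_loss = 0.5 * np.linalg.norm(end_to_end(Ws) - Phi, "fro") ** 2
    assert new_loss <= bound + 1e-12   # per-step descent inequality holds
```

In line with the proof, the observed decrease is roughly twice what the bound requires, reflecting the $1/2$ factor lost to discretization.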
With Lemma~\ref{lemma:descent} established, the proof of Theorem~\ref{theorem:converge} readily follows:
\begin{proof}[Proof of Theorem~\ref{theorem:converge}]
By the definition of~$L^1(\cdot)$ (Equation~\eqref{eq:lin_obj}), for any~$W\in\R^{d_N\times{d}_0}$:
$$\frac{dL^1}{dW}(W)=W-\Phi\quad\implies\quad\norm{\frac{dL^1}{dW}(W)}_F^2=2\cdot{L}^1(W)
\text{~.}$$
Plugging this into Equation~\eqref{eq:descent} while recalling that $\ell(t)=L^1(W_{1:N}(t))$ (Equation~\eqref{eq:gd_loss}), we have (by Lemma~\ref{lemma:descent}) that for every $t=0,1,2,\ldots$~:
$$L^1\big(W_{1:N}(t+1)\big)\leq{L}^1\big(W_{1:N}(t)\big)\cdot\Big(1-\eta\cdot\sigma_{min}\big(W_{1:N}(t)\big)^{\frac{2(N-1)}{N}}\Big)
\text{~.}$$
Since the coefficients $1-\eta\cdot\sigma_{min}(W_{1:N}(t))^{\frac{2(N-1)}{N}}$ are necessarily non-negative (otherwise the non-negativity of $L^1(\cdot)$ would be contradicted), we may unroll the inequalities, obtaining:
\be
L^1\big(W_{1:N}(t+1)\big)\leq{L}^1\big(W_{1:N}(0)\big)\cdot\prod\nolimits_{t'=0}^{t}\Big(1-\eta\cdot\sigma_{min}\big(W_{1:N}(t')\big)^{\frac{2(N-1)}{N}}\Big)
\text{\,.}
\label{eq:descent_concat}
\ee
Now, this in particular means that for every~$t'=0,1,2,\ldots$~:
$$L^1\big(W_{1:N}(t')\big)\leq{L}^1\big(W_{1:N}(0)\big)\quad\implies\quad\norm{W_{1:N}(t')-\Phi}_F\leq\norm{W_{1:N}(0)-\Phi}_F \text{\,.}$$
Deficiency margin~$c$ of~$W_{1:N}(0)$ along with Claim~\ref{claim:margin_interp} thus imply $\sigma_{min}\big(W_{1:N}(t')\big)\geq{c}$, which when inserted back into Equation~\eqref{eq:descent_concat} yields, for every~$t=1,2,3,\ldots$~:
\be
L^1\big(W_{1:N}(t)\big)\leq{L}^1\big(W_{1:N}(0)\big)\cdot\Big(1-\eta\cdot{c}^{\frac{2(N-1)}{N}}\Big)^t
\label{eq:descent_geo}
\text{~.}
\ee
The quantity $\eta\cdot{c}^{\frac{2(N-1)}{N}}$ is clearly non-negative, and it is also no greater than~$1$ (otherwise the non-negativity of $L^1(\cdot)$ would be contradicted).
We may therefore incorporate the inequality $1-\eta\cdot{c}^{2(N-1)/N}\leq\exp\big(-\eta\cdot{c}^{2(N-1)/N}\big)$ into Equation~\eqref{eq:descent_geo}:
$$L^1\big(W_{1:N}(t)\big)\leq{L}^1\big(W_{1:N}(0)\big)\cdot\exp\big(-\eta\cdot{c}^{2(N-1)/N}\cdot{t}\big)
\text{\,,}$$
from which it follows that $L^1(W_{1:N}(t))\leq\epsilon$ if:
$$t\geq\frac{1}{\eta\cdot{c}^{2(N-1)/N}}\cdot\log\left(\frac{L^1(W_{1:N}(0))}{\epsilon}\right)
\text{\,.}$$
Recalling again that $\ell(t)=L^1(W_{1:N}(t))$ (Equation~\eqref{eq:gd_loss}), we conclude the proof.
\end{proof}
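The geometric decay of Equation~\eqref{eq:descent_geo} is easy to observe empirically. The sketch below (our own illustration) trains a depth-$3$ linear network from identity initialization toward $\Phi=1.5\,I_4$, a setting with perfect balancedness and deficiency margin $c=0.5$; the learning rate is chosen larger than the theorem's conservative bound purely for speed of illustration:

```python
import numpy as np

def end_to_end(Ws):
    """Product W_N ... W_1 for weights listed as [W_1, ..., W_N]."""
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P

N, d, eta = 3, 4, 0.05
Phi = 1.5 * np.eye(d)               # ||I - Phi||_F = 1 <= sigma_min(Phi) - 0.5
Ws = [np.eye(d) for _ in range(N)]  # 0-balanced, deficiency margin c = 0.5

losses = []
for _ in range(1000):
    E = end_to_end(Ws) - Phi
    losses.append(0.5 * np.linalg.norm(E, "fro") ** 2)
    gs = []
    for j in range(N):
        left = end_to_end(Ws[:j]) if j > 0 else np.eye(d)
        right = end_to_end(Ws[j + 1:]) if j < N - 1 else np.eye(d)
        gs.append(right.T @ E @ left.T)
    Ws = [W - eta * g for W, g in zip(Ws, gs)]

# Monotone decrease at a linear (geometric) rate, down to global minimum.
assert all(l2 <= l1 + 1e-12 for l1, l2 in zip(losses, losses[1:]))
assert losses[-1] < 1e-9
```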
\subsection{Balanced Initialization} \label{sec:converge:balance_init}
We define the following procedure, \emph{balanced initialization}, which assigns weights randomly while ensuring perfect balancedness:
\vspace{1mm}
\begin{procedure}[Balanced initialization]
\label{proc:balance_init}
Given $d_0, d_1, \ldots, d_N \in \N$ such that $\min\{d_1,\ldots,d_{N-1}\}\geq\min\{d_0,d_N\}$ and a distribution~$\D$ over $d_N\times{d}_0$~matrices, a \emph{balanced initialization} of~$W_j\in\R^{d_j\times{d}_{j-1}}$, $j{=}1,\ldots,N$, assigns these weights as follows:
\vspace{-1.5mm}
\begin{enumerate}[label=(\roman*)]
\item Sample $A\in\R^{d_N\times{d}_0}$ according to~$\D$.
\vspace{-1mm}
\item Take singular value decomposition~$A=U\Sigma{V}^\top$, where $U \in \R^{d_N \times \min\{d_0, d_N\}}$, $V \in \R^{d_0 \times \min\{d_0, d_N\}}$ have orthonormal columns, and $\Sigma \in \R^{\min\{d_0, d_N\} \times \min\{d_0, d_N\}}$ is diagonal and holds the singular values of $A$.
\vspace{-1mm}
\item Set $W_N\simeq{U}\Sigma^{1/N},W_{N-1}\simeq\Sigma^{1/N},\ldots,W_2\simeq\Sigma^{1/N},W_1\simeq\Sigma^{1/N}V^\top$, where the symbol~``$\simeq$'' stands for equality up to zero-valued padding.\note{
These assignments can be accomplished since $\min\{d_1,\ldots,d_{N-1}\}\geq\min\{d_0,d_N\}$.
}
\note{
By design $W_{1:N}=A$ and $W_{j+1}^{\top}W_{j+1}=W_{j}W_j^\top$, $\forall{j}\in\{1,\ldots,N{-}1\}$~---~these properties are actually all we need in Theorem~\ref{theorem:converge_balance_init}, and step~\emph{(iii)} in Procedure~\ref{proc:balance_init} can be replaced by any assignment that meets them.
\label{note:balance_init_props}
}
\end{enumerate}
\end{procedure}
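Procedure~\ref{proc:balance_init} translates directly into code. The sketch below implements steps~(ii) and~(iii); step~(i), sampling $A$ from $\D$, is left to the caller. The function name and the zero-padding helper are our own:

```python
import numpy as np

def pad_to(M, shape):
    """Zero-pad M into the top-left corner of a matrix of the given shape."""
    out = np.zeros(shape)
    out[: M.shape[0], : M.shape[1]] = M
    return out

def balanced_init(dims, A):
    """Steps (ii)-(iii) of balanced initialization, for N >= 2.
    dims = (d_0, ..., d_N); A is the sampled d_N x d_0 end-to-end matrix.
    Requires min hidden dimension >= min(d_0, d_N)."""
    N = len(dims) - 1
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # step (ii)
    S = np.diag(s ** (1.0 / N))                       # Sigma^{1/N}
    Ws = [pad_to(S @ Vt, (dims[1], dims[0]))]         # W_1 ~ Sigma^{1/N} V^T
    for j in range(2, N):
        Ws.append(pad_to(S, (dims[j], dims[j - 1])))  # middle layers ~ Sigma^{1/N}
    Ws.append(pad_to(U @ S, (dims[N], dims[N - 1])))  # W_N ~ U Sigma^{1/N}
    return Ws

rng = np.random.default_rng(0)
dims = (5, 7, 7, 1)                  # scalar regression, N = 3
A = 0.1 * rng.standard_normal((dims[-1], dims[0]))
Ws = balanced_init(dims, A)
```

By construction, the returned weights multiply back to $A$ and are $0$-balanced, which (per the footnote) is all Theorem~\ref{theorem:converge_balance_init} requires of step~(iii).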
The concept of balanced initialization, together with Theorem~\ref{theorem:converge}, leads to a guarantee for linear convergence (applicable to output dimension~$1$~---~scalar regression) that holds with constant probability over the randomness in initialization:
\vspace{1mm}
\begin{theorem}
\label{theorem:converge_balance_init}
For any constant $0<p<1/2$, there are constants $d_0',a>0$ \note{
As shown in the proof of the theorem (Appendix \ref{app:proofs:converge_balance_init}), $d_0',a>0$ can take on any pair of values for which:
\emph{(i)}~$d_0'\geq20$;
and~\emph{(ii)}~$\big(1 - 2\exp(-d_0'/16)\big)\big(3-4F(2/\sqrt{a/2})\big) \geq 2p$, where $F(\cdot)$ stands for the cumulative distribution function of the standard normal distribution.
For example, if~$p=0.25$, it suffices to take any~$d_0'\geq100,a\geq100$.
We note that condition~\emph{(i)} here ($d_0'\geq20$) serves solely for simplification of expressions in the theorem.
}
such that the following holds.
Assume $d_N=1,d_0\geq d_0'$, and that the weights $W_1(0),\ldots,W_N(0)$ are subject to balanced initialization (Procedure~\ref{proc:balance_init}) such that the entries in $W_{1:N}(0)$ are independent zero-centered Gaussian perturbations with standard deviation $s\leq \|\Phi\|_2/\sqrt{ad_0^2}$.
Suppose also that we run gradient descent with learning rate $ \eta \leq (s^2d_0)^{4-2/N}\big/\big(10^5 N^3 \| \Phi\|_2^{10-6/N}\big)$.
Then, with probability at least~$p$ over the random initialization, we have that for every~$\epsilon>0$ and:
$$
T \geq \frac4\eta \left( \ln(4) \left(\frac{\|\Phi\|_2}{s^2d_0}\right)^{2-2/N} + \| \Phi\|_2^{2/N-2}\ln(\| \Phi\|_2^2/(8\epsilon)) \right)
\text{\,,}
$$
the loss at iteration~$T$ of gradient descent~---~$\ell(T)$~---~is no greater than~$\epsilon$.
\end{theorem}
\vspace{-2mm}
\begin{proof}
See Appendix \ref{app:proofs:converge_balance_init}.
\end{proof}
\section{Experiments} \label{sec:exper}
Balanced initialization (Procedure~\ref{proc:balance_init}) possesses theoretical advantages compared with the customary layer-wise independent scheme~---~it allowed us to derive a convergence guarantee that holds with constant probability over the randomness of initialization (Theorem~\ref{theorem:converge_balance_init}).
In this section we present empirical evidence suggesting that initializing with balancedness may be beneficial in practice as well.
For conciseness, some of the details behind our implementation are deferred to Appendix~\ref{app:impl}.
We began by experimenting in the setting covered by our analysis~---~linear neural networks trained via gradient descent minimization of $\ell_2$~loss over whitened data.
The dataset chosen for the experiment was UCI Machine Learning Repository's ``Gas Sensor Array Drift at Different Concentrations'' \citep{vergara2012chemical,rodriguez2014calibration}.
Specifically, we used the dataset's ``Ethanol'' problem~---~a scalar regression task with~$2565$ examples, each comprising~$128$ features (one of the largest numeric regression tasks in the repository).
Starting with the customary initialization of layer-wise independent random Gaussian perturbations centered at zero, we trained a three layer network ($N=3$) with hidden widths ($d_1,d_2$) set to~$32$, and measured the time (number of iterations) it takes to converge (reach training loss within $\epsilon=10^{-5}$ from optimum) under different choices of standard deviation for the initialization.
To account for the possibility of different standard deviations requiring different learning rates (values for~$\eta$), we applied, for each standard deviation independently, a grid search over learning rates, and recorded the one that led to fastest convergence.
The result of this test is presented in Figure~\ref{fig:exper}(a).
As can be seen, there is a range of standard deviations that leads to fast convergence (a few hundred iterations or less), below and above which optimization decelerates by orders of magnitude.
This accords with our discussion at the end of Subsection~\ref{sec:converge:balance_init}, by which overly small initialization ensures approximate balancedness (small~$\delta$; see Definition~\ref{def:balance}) but diminishes deficiency margin (small~$c$; see Definition~\ref{def:margin})~---~``vanishing gradient problem''~---~whereas large initialization hinders both approximate balancedness and deficiency margin~---~``exploding gradient problem''.
In that regard, as a sanity test for the validity of our analysis, in a case where approximate balancedness is met at initialization (small standard deviation), we measured its persistence throughout optimization.
As Figure~\ref{fig:exper}(c) shows, our theoretical findings manifest themselves here~---~trajectories of gradient descent indeed preserve weight balancedness.
In addition to a three layer network, we also evaluated a deeper, eight layer model (with hidden widths identical to the former~---~$N=8$, $d_1=\cdots=d_7=32$).
In particular, using the same experimental protocol as above, we measured convergence time under different choices of standard deviation for the initialization.
Figure~\ref{fig:exper}(a) displays the result of this test alongside that of the three layer model.
As the figure shows, transitioning from three layers to eight aggravated the instability with respect to initialization~---~there is now a narrow band of standard deviations that lead to convergence in reasonable time, and outside of this band convergence is extremely slow, to the point where it does not take place within the duration we allowed ($10^6$~iterations).
From the perspective of our analysis, a possible explanation for the aggravation is as follows: under layer-wise independent initialization, the magnitude of the end-to-end matrix~$W_{1:N}$ depends on the standard deviation in a manner that is exponential in depth.
For large depths, the range of standard deviations that lead to a moderately sized~$W_{1:N}$ (as required for a deficiency margin) is therefore limited, and within this range, there may not be many standard deviations small enough to ensure approximate balancedness.
The procedure of balanced initialization (Procedure~\ref{proc:balance_init}) circumvents these difficulties~---~it assigns~$W_{1:N}$ directly (no exponential dependence on depth), and distributes its content between the individual weights $W_1,\ldots,W_N$ in a perfectly balanced fashion.
Rerunning the experiment of Figure~\ref{fig:exper}(a) with this initialization replacing the customary layer-wise scheme (using same experimental protocol), we obtained the results shown in Figure~\ref{fig:exper}(b)~---~both the original three layer network, and the deeper eight layer model, converged quickly under virtually all standard deviations tried.
As a final experiment, we evaluated the effect of balanced initialization in a setting that involves non-linear activation, softmax-cross-entropy loss and stochastic optimization (factors not accounted for by our analysis).
For this purpose, we turned to the MNIST tutorial built into TensorFlow \citep{abadi2016tensorflow},\note{
\url{https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/mnist}
}
which comprises a fully-connected neural network with two hidden layers (width~$128$ followed by~$32$) and ReLU activation \citep{nair2010rectified}, trained through stochastic gradient descent (over softmax-cross-entropy loss) with batch size~$100$, initialized via customary layer-wise independent Gaussian perturbations centered at zero.
While keeping the learning rate at its default value $0.01$, we varied the standard deviation of initialization, and for each value measured the training loss after $10$~epochs.\note{
As opposed to the dataset used in our experiments with linear networks, measuring the training loss with MNIST is non-trivial computationally (involves passing through $60K$~examples).
Therefore, rather than continuously polling training loss until it reaches a certain threshold, in this experiment we chose to evaluate speed of convergence by measuring the training loss once after a predetermined number of iterations.
}
We then replaced the original (layer-wise independent) initialization with a balanced initialization based on Gaussian perturbations centered at zero (latter was implemented per Procedure~\ref{proc:balance_init}, disregarding non-linear activation), and repeated the process.
The results of this experiment are shown in Figure~\ref{fig:exper}(d).
Although our theoretical analysis does not cover non-linear activation, softmax-cross-entropy loss or stochasticity in optimization, its conclusion of balanced initialization leading to improved (faster and more stable) convergence carried over to such setting.
\begin{figure}
\vspace{-5mm}
\subfloat[]{\includegraphics[scale=0.21]{std_vs_convtime_unbalanced.png}} \
\subfloat[]{\includegraphics[scale=0.21]{std_vs_convtime_balanced.png}} \
\subfloat[]{\includegraphics[scale=0.21]{{unbalanced_shallow_balances}.png}} \
\subfloat[]{\includegraphics[scale=0.21]{mnist_std_train_loss.png}} \
\vspace{-1mm}
\caption{
Experimental results.
\textbf{(a)}~Convergence of gradient descent training deep linear neural networks (depths~$3$ and~$8$) under customary initialization of layer-wise independent Gaussian perturbations with mean~$0$ and standard deviation~$s$.
For each network, the number of iterations required to reach a training loss within $\epsilon=10^{-5}$ of the optimum is plotted as a function of~$s$ (missing values indicate no convergence within $10^6$~iterations).
Dataset in this experiment is a numeric regression task from UCI Machine Learning Repository (details in text).
Notice that fast convergence is attained only in a narrow band of values for~$s$, and that this phenomenon is more extreme with the deeper network.
\textbf{(b)}~Same setup as in~(a), but with layer-wise independent initialization replaced by balanced initialization (Procedure~\ref{proc:balance_init}) based on Gaussian perturbations with mean~$0$ and standard deviation~$s$.
Notice that this change leads to fast convergence, for both networks, under a wide range of values for~$s$.
Notice also that the shallower network converges slightly faster, in line with the results of~\citet{saxe2014exact} and~\citet{arora2018optimization} for $\ell_2$~loss.
\textbf{(c)}~For the run in~(a) of a depth-$3$ network with standard deviation~$s={10^{-3}}$, this plot shows the degree of balancedness (the minimal~$\delta$ satisfying $\|W_{j+1}^{\top}W_{j+1}-W_{j}W_j^\top\|_F\leq\delta~,\,\forall{j}\in\{1,\ldots,N-1\}$) against the magnitude of weights ($\min_{j=1,\ldots,N}\|W_{j}W_j^\top\|_F$) throughout optimization.
Notice that approximate balancedness persists under gradient descent, in line with our theoretical analysis.
\textbf{(d)}~Convergence of stochastic gradient descent training the fully-connected non-linear (ReLU) neural network of the MNIST tutorial built into TensorFlow (details in text).
Customary layer-wise independent and balanced initializations~---~both based on Gaussian perturbations centered at zero~---~are evaluated, with varying standard deviations.
For each configuration~$10$ epochs of optimization are run, followed by measurement of the training loss.
Notice that although our theoretical analysis does not cover non-linear activation, softmax-cross-entropy loss and stochastic optimization, the conclusion of balanced initialization leading to improved convergence carries over to this setting.
}
\label{fig:exper}
\vspace{-3mm}
\end{figure}
\section{Introduction} \label{sec:intro}
Deep learning builds upon the mysterious ability of gradient-based optimization methods to solve related non-convex problems.
Immense efforts are underway to mathematically analyze this phenomenon.
The prominent \emph{landscape} approach focuses on special properties of critical points (\ie~points where the gradient of the objective function vanishes) that will imply convergence to global optimum.
Several papers (\eg~\citet{ge2015escaping,lee2016gradient}) have shown that (given certain smoothness properties) it suffices for critical points to meet the following two conditions:
\emph{(i)}~\emph{no poor local minima}~---~every local minimum is close in its objective value to a global minimum;
and~\emph{(ii)}~\emph{strict saddle property}~---~every critical point that is not a local minimum has a Hessian with at least one negative eigenvalue.
While condition~\emph{(i)} does not always hold (\cf~\citet{safran2018spurious}), it has been established for various simple settings (\eg~\citet{soudry2016no,kawaguchi2016deep}).
Condition~\emph{(ii)} on the other hand seems less plausible, and is in fact provably false for models with three or more layers (\cf~\citet{kawaguchi2016deep}), \ie~for \emph{deep} networks.
It has only been established for problems involving \emph{shallow} (two layer) models, \eg~matrix factorization (\citet{ge2016matrix,du2018algorithmic}).
The landscape approach as currently construed thus suffers from inherent limitations in proving convergence to global minimum for deep networks.
A potential path to circumvent this obstacle lies in realizing that landscape properties matter only in the vicinity of trajectories that can be taken by the optimizer, which may be a negligible portion of the overall parameter space.
Several papers (\eg~\citet{saxe2014exact,arora2018optimization}) have taken this \emph{trajectory-based} approach, primarily in the context of \emph{linear neural networks}~---~fully-connected neural networks with linear activation.
Linear networks are trivial from a representational perspective, but not so in terms of optimization~---~they lead to non-convex training problems with multiple minima and saddle points.
Through a mix of theory and experiments, \citet{arora2018optimization}~argued that such non-convexities may in fact be beneficial for gradient descent, in the sense that sometimes, adding (redundant) linear layers to a classic linear prediction model can accelerate the optimization.
This phenomenon challenges the holistic landscape view, by which convex problems are always preferable to non-convex ones.
Even in the linear network setting, a rigorous proof of efficient convergence to global minimum has proved elusive.
A notable recent advance is the analysis of~\citet{bartlett2018gradient} for \emph{linear residual networks}~---~a particular subclass of linear neural networks in which the input, output and all hidden dimensions are equal, and all layers are initialized to be the identity matrix (\cf~\citet{hardt2016identity}).
Through a trajectory-based analysis of gradient descent minimizing $\ell_2$~loss over a whitened dataset (see Section~\ref{sec:prelim}), \citet{bartlett2018gradient}~show that convergence to global minimum at a \emph{linear rate}~---~loss is less than~$\epsilon>0$ after $\OO(\log\frac{1}{\epsilon})$~iterations~---~takes place if one of the following holds:
\emph{(i)}~the objective value at initialization is sufficiently close to a global minimum;
or \emph{(ii)}~a global minimum is attained when the product of all layers is positive definite.
The current paper carries out a trajectory-based analysis of gradient descent for general deep linear neural networks, covering the residual setting of~\citet{bartlett2018gradient}, as well as many more settings that better match practical deep learning.
Our analysis draws upon the trajectory characterization of~\citet{arora2018optimization} for gradient flow (infinitesimally small learning rate), together with significant new ideas necessitated due to discrete updates.
Ultimately, we show that when minimizing $\ell_2$~loss of a deep linear network over a whitened dataset, gradient descent converges to the global minimum, at a linear rate, provided that the following conditions hold:
\emph{(i)}~the dimensions of hidden layers are greater than or equal to the minimum between those of the input and output;
\emph{(ii)}~layers are initialized to be \emph{approximately balanced} (see Definition~\ref{def:balance})~---~this is met under commonplace near-zero, as well as residual (identity) initializations;
and \emph{(iii)}~the initial loss is smaller than any loss obtainable with rank deficiencies~---~this condition will hold with probability close to~$0.5$ if the output dimension is~$1$ (scalar regression) and standard (random) near-zero initialization is employed.
Our result applies to networks with arbitrary depth and input/output dimensions, as well as any configuration of hidden layer widths that does not force rank deficiency (\ie~that meets condition~\emph{(i)}).
The assumptions on initialization (conditions~\emph{(ii)} and~\emph{(iii)}) are necessary, in the sense that violating any one of them may lead to convergence failure.
Moreover, in the case of scalar regression, they are met with constant probability under a random initialization scheme.
We are not aware of any similarly general analysis for efficient convergence of gradient descent to global minimum in deep learning.
\medskip
The remainder of the paper is organized as follows.
In Section~\ref{sec:prelim} we present the problem of gradient descent training a deep linear neural network by minimizing the $\ell_2$~loss over a whitened dataset.
Section~\ref{sec:converge} formally states our assumptions, and presents our convergence analysis.
Key ideas brought forth by our analysis are demonstrated empirically in Section~\ref{sec:exper}.
Section~\ref{sec:related} gives a review of relevant literature, including a detailed comparison of our results against those of~\citet{bartlett2018gradient}.
Finally, Section~\ref{sec:conc} concludes.
\section{Gradient Descent for Deep Linear Neural Networks} \label{sec:prelim}
We denote by~$\norm{\vv}$ the Euclidean norm of a vector~$\vv$, and by~$\norm{A}_F$ the Frobenius norm of a matrix~$A$.
We are given a training set $\{(\x^{(i)},\y^{(i)})\}_{i=1}^{m}\subset\R^{d_x}\times\R^{d_y}$, and would like to learn a hypothesis (predictor) from a parametric family~$\HH:=\{h_\theta:\R^{d_x}\to\R^{d_y}\,|\,\theta\in\Theta\}$ by minimizing the $\ell_2$~loss:\note{
Much of the analysis in this paper can be extended to loss types other than~$\ell_2$.
In particular, the notion of deficiency margin (Definition~\ref{def:margin}) can be generalized to account for any convex loss, and, so long as the loss is differentiable, a convergence result analogous to Theorem~\ref{theorem:converge} will hold in the idealized setting of perfect initial balancedness and infinitesimally small learning rate (see proof of Lemma~\ref{lemma:descent}).
We leave to future work treatment of approximate balancedness and discrete updates in this general setting.
}
$$\min_{\theta\in\Theta}~L(\theta):=\frac{1}{2m}\sum\nolimits_{i=1}^m\|h_\theta(\x^{(i)})-\y^{(i)}\|^2
\text{\,.}$$
When the parametric family in question is the class of linear predictors, \ie~$\HH=\{\x\mapsto{W}\x\,|\,W\in\R^{d_y\times{d}_x}\}$, the training loss may be written as $L(W)=\frac{1}{2m}\|WX-Y\|_F^2$, where~$X\in\R^{d_x\times{m}}$ and~$Y\in\R^{d_y\times{m}}$ are matrices whose columns hold instances and labels respectively.
Suppose now that the dataset is \emph{whitened}, \ie~has been transformed such that the empirical (uncentered) covariance matrix for instances~---~$\Lambda_{xx}:=\frac{1}{m}XX^\top \in\R^{d_x\times{d}_x}$~---~is equal to identity.
Standard calculations (see Appendix~\ref{app:whitened}) show that in this case:
\be
L(W)=\frac{1}{2}\norm{W-\Lambda_{yx}}_F^2+c
\label{eq:loss_whitened}
\text{\,,}
\ee
where~$\Lambda_{yx}:=\tfrac{1}{m}YX^\top\in\R^{d_y\times{d}_x}$ is the empirical (uncentered) cross-covariance matrix between instances and labels, and $c$ is a constant (that does not depend on $W$).
Denoting~$\Phi:=\Lambda_{yx}$ for brevity, we have that for linear models, minimizing $\ell_2$~loss over whitened data is equivalent to minimizing the squared Frobenius distance from a \emph{target matrix}~$\Phi$:
\be
\min\nolimits_{W\in\R^{d_y\times{d}_x}}~L^1(W):=\frac{1}{2}\norm{W-\Phi}_F^2
\text{\,.}
\label{eq:lin_obj}
\ee
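This reduction is easy to verify numerically. The sketch below (our own; the closed form $c=\frac{1}{2m}\|Y\|_F^2-\frac{1}{2}\|\Lambda_{yx}\|_F^2$ follows from the standard calculation referenced above) whitens a random dataset and checks that the $\ell_2$ loss agrees with the squared Frobenius distance to $\Phi$ up to the constant $c$.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, m = 4, 3, 200
X = rng.standard_normal((d_x, m))
Y = rng.standard_normal((d_y, m))

# Whiten the instances so that the empirical covariance (1/m) X X^T = I.
evals, evecs = np.linalg.eigh(X @ X.T / m)
X = evecs @ np.diag(evals ** -0.5) @ evecs.T @ X

Phi = Y @ X.T / m                                  # cross-covariance  Λ_yx
c = np.linalg.norm(Y) ** 2 / (2 * m) - np.linalg.norm(Phi) ** 2 / 2

W = rng.standard_normal((d_y, d_x))                # arbitrary linear predictor
direct = np.linalg.norm(W @ X - Y) ** 2 / (2 * m)  # L(W), the original l2 loss
reduced = np.linalg.norm(W - Phi) ** 2 / 2 + c     # Frobenius form plus constant
```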
Our interest in this work lies in \emph{linear neural networks}~---~fully-connected neural networks with linear activation.
A depth-$N$ ($N\in\N$) linear neural network with hidden widths $d_1,\ldots,d_{N-1}\in\N$ corresponds to the parametric family of hypotheses $\HH:=\{\x\mapsto{W}_{N}W_{N-1}\cdots{W}_1\x\,|\,W_j\in\R^{d_j\times{d}_{j-1}},\,j=1,\ldots,N\}$, where $d_0:=d_x,\,d_N:=d_y$.
Similarly to the case of a (directly parameterized) linear predictor (Equation~\eqref{eq:lin_obj}), with a linear neural network, minimizing $\ell_2$~loss over whitened data can be cast as squared Frobenius approximation of a target matrix $\Phi$:
\be
\min\nolimits_{W_j\in\R^{d_j\times{d}_{j-1}},\,j=1,\ldots,N}~L^N(W_1,\ldots,W_N):=\frac{1}{2}\norm{W_{N}W_{N-1}\cdots{W}_1-\Phi}_F^2
\label{eq:lnn_obj}
\text{\,.}
\ee
Note that the notation~$L^N(\cdot)$ is consistent with that of Equation~\eqref{eq:lin_obj}, as a network with depth~$N=1$ precisely reduces to a (directly parameterized) linear model.
We focus on studying the process of training a deep linear neural network by \emph{gradient descent}, \ie~of tackling the optimization problem in Equation~\eqref{eq:lnn_obj} by iteratively applying the following updates:
\be
W_j(t+1)\leftarrow{W}_j(t)-\eta\frac{\partial{L}^N}{\partial{W}_j}\big(W_1(t),\ldots,W_N(t)\big)\quad,\,j=1,\ldots,{N}\quad,\,t=0,1,2,\ldots
\label{eq:gd}
\text{~~,}
\ee
where $\eta>0$ is a configurable learning rate.
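For concreteness, the updates above can be sketched in numpy for the residual setting (identity initialization, uniform dimension $d$). This is an illustrative implementation of ours, not the paper's experimental code; it uses the standard closed-form gradient $\frac{\partial L^N}{\partial W_j}=(W_N\cdots W_{j+1})^\top\,(W_{1:N}-\Phi)\,(W_{j-1}\cdots W_1)^\top$, with empty products read as the identity, and illustrative hyperparameter values.

```python
import numpy as np

def prod(mats, d):
    """Ordered product W_b ... W_a of mats = [W_a, ..., W_b]; identity if empty."""
    out = np.eye(d)
    for M in mats:
        out = M @ out
    return out

def gd_deep_linear(Phi, N, eta, steps):
    """Gradient descent on L^N(W_1,...,W_N) = 0.5 ||W_N ... W_1 - Phi||_F^2
    with identity (residual) initialization; square layers only (a sketch)."""
    d = Phi.shape[0]
    Ws = [np.eye(d) for _ in range(N)]
    for _ in range(steps):
        E = prod(Ws, d) - Phi                        # W_{1:N} - Phi
        grads = [prod(Ws[j + 1:], d).T @ E @ prod(Ws[:j], d).T
                 for j in range(N)]                  # dL^N / dW_j
        Ws = [W - eta * g for W, g in zip(Ws, grads)]
    return Ws

rng = np.random.default_rng(0)
d, N = 3, 3
Phi = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # target near identity
Ws = gd_deep_linear(Phi, N, eta=0.05, steps=1000)
loss = 0.5 * np.linalg.norm(prod(Ws, d) - Phi) ** 2
```

With the target close to the identity this run converges quickly, consistent with the residual-network guarantees discussed in Sections~\ref{sec:intro} and~\ref{sec:related}.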
In the case of depth~$N=1$, the training problem in Equation~\eqref{eq:lnn_obj} is smooth and strongly convex, thus it is known (\cf~\citet{boyd2004convex}) that with proper choice of~$\eta$, gradient descent converges to global minimum at a linear rate.
In contrast, for any depth greater than~$1$, Equation~\eqref{eq:lnn_obj} comprises a fundamentally non-convex program, and the convergence properties of gradient descent are highly non-trivial.
Apart from the case~$N=2$ (shallow network), one cannot hope to prove convergence via landscape arguments, as the strict saddle property is provably violated (see Section~\ref{sec:intro}).
We will see in Section~\ref{sec:converge} that a direct analysis of the trajectories taken by gradient descent can succeed in this arena, providing a guarantee for linear rate convergence to global minimum.
\medskip
We close this section by introducing additional notation that will be used in our analysis.
For an arbitrary matrix~$A$, we denote by~$\sigma_{max}(A)$ and~$\sigma_{min}(A)$ its largest and smallest (respectively) singular values.\note{
If~$A\in\R^{d\times{d'}}$, $\sigma_{min}(A)$~stands for the $\min\{d,d'\}$-th largest singular value.
Recall that singular values are always non-negative.
}
For~$d\in\N$, we use~$I_d$ to signify the identity matrix in~$\R^{d\times{d}}$.
Given weights $W_1,\ldots,W_N$ of a linear neural network, we let~$W_{1:N}$ be the direct parameterization of the \emph{end-to-end} linear mapping realized by the network, \ie~$W_{1:N}:=W_{N}W_{N-1}\cdots{W}_1$.
Note that $L^N(W_1,\ldots,W_N)=L^1(W_{1:N})$, meaning the loss associated with a depth-$N$ network is equal to the loss of the corresponding end-to-end linear model.
In the context of gradient descent, we will oftentimes use~$\ell(t)$ as shorthand for the loss at iteration~$t$:
\be
\ell(t):=L^N(W_1(t),\ldots,W_N(t))=L^1(W_{1:N}(t))
\text{\,.}
\label{eq:gd_loss}
\ee
\section{Related Work} \label{sec:related}
\vspace{-2mm}
Theoretical study of gradient-based optimization in deep learning is a highly active area of research.
As discussed in Section~\ref{sec:intro}, a popular approach is to show that the objective landscape admits the properties of no poor local minima and strict saddle, which, by~\citet{ge2015escaping,lee2016gradient,panageas2017gradient}, ensure convergence to global minimum.
Many works, both classic (\eg~\citet{baldi1989neural}) and recent (\eg~\citet{choromanska2015loss,kawaguchi2016deep,hardt2016identity,soudry2016no,haeffele2017global,nguyen2017loss,safran2018spurious,nguyen2018loss,laurent2018deep}), have focused on the validity of these properties in different deep learning settings.
Nonetheless, to our knowledge, the success of landscape-driven analyses in formally proving convergence to global minimum for a gradient-based algorithm, has thus far been limited to shallow (two layer) models only (\eg~\citet{ge2016matrix,du2018power,du2018algorithmic}).
An alternative to the landscape approach is a direct analysis of the trajectories taken by the optimizer.
Various papers (\eg~\citet{brutzkus2017globally,li2017convergence,zhong2017recovery,tian2017analytical,brutzkus2018sgd,li2018algorithmic,du2018gradient,du2018convolutional,liao2018almost}) have recently adopted this strategy, but their analyses only apply to shallow models.
In the context of linear neural networks, deep (three or more layer) models have also been treated~---~\cf~\citet{saxe2014exact} and~\citet{arora2018optimization}, from which we draw certain technical ideas for proving Lemma~\ref{lemma:descent}.
However these treatments all apply to gradient flow (gradient descent with infinitesimally small learning rate), and thus do not formally address the question of computational efficiency.
To our knowledge, \citet{bartlett2018gradient}~is the only existing work rigorously proving convergence to global minimum for a conventional gradient-based algorithm training a deep model.
This work is similar to ours in the sense that it also treats linear neural networks trained via minimization of $\ell_2$~loss over whitened data, and proves linear convergence (to global minimum) for gradient descent.
It is more limited in that it only covers the subclass of linear residual networks, \ie~the specific setting of uniform width across all layers ($d_0=\cdots=d_N$) along with identity initialization.
We on the other hand allow the input, output and hidden dimensions to take on any configuration that avoids ``bottlenecks'' (\ie~admits $\min\{d_1,\ldots,d_{N-1}\}\geq\min\{d_0,d_N\}$), and from initialization require only approximate balancedness (Definition~\ref{def:balance}), supporting many options beyond identity.
In terms of the target matrix~$\Phi$, \citet{bartlett2018gradient}~treats two separate scenarios:\note{
There is actually an additional third scenario being treated~---~$\Phi$~is asymmetric and positive definite~---~but since that requires a dedicated optimization algorithm, it is outside our scope.
}
\emph{(i)}~$\Phi$ is symmetric and positive definite;
and~\emph{(ii)}~$\Phi$ is within distance~$1/10e$ from identity.\note{
$1/10e$~is the optimal (largest) distance that may be obtained (via careful choice of constants) from the proof of Theorem~$1$ in~\citet{bartlett2018gradient}.
}
Our analysis does not fully account for scenario~\emph{(i)}, which seems to be somewhat of a singularity, where all layers are equal to each other throughout optimization (see proof of Theorem~$2$ in~\citet{bartlett2018gradient}).
We do however provide a strict generalization of scenario~\emph{(ii)}~---~our assumption of deficiency margin (Definition~\ref{def:margin}), in the setting of linear residual networks, is met if the distance between target and identity is less than~$0.5$.
{
"timestamp": "2019-10-29T01:06:05",
"arxiv_id": "1810.02281",
"language": "en",
"url": "https://arxiv.org/abs/1810.02281",
"abstract": "We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network (parameterized as $x \\mapsto W_N W_{N-1} \\cdots W_1 x$) by minimizing the $\\ell_2$ loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018).",
"subjects": "Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)",
"title": "A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks"
}
https://arxiv.org/abs/1610.07539

Origami Constructions of Rings of Integers of Imaginary Quadratic Fields

Abstract: In the making of origami, one starts with a piece of paper, and through a series of folds along seed points one constructs complicated three-dimensional shapes. Mathematically, one can think of the complex numbers as representing the piece of paper, and the seed points and folds as a way to generate a subset of the complex numbers. Under certain constraints, this construction can give rise to a ring, which we call an origami ring. We will talk about the basic construction of an origami ring and further extensions and implications of these ideas in algebra and number theory, extending results of Buhler et al. In particular, in this paper we show that it is possible to obtain the ring of integers of an imaginary quadratic field through an origami construction.

\section{Introduction}
In origami, the artist uses intersections of folds as reference points to make new folds. This kind of construction can be extended to points on the complex plane. In \cite{Buhler2010}, the authors define one such mathematical construction. In this construction, one can think of the complex plane as representing the ``paper", and lines representing the ``folds". The question they explored is which points in the plane can be constructed through iterated intersections of lines, starting with a prescribed set of allowable angles and only the points 0 and 1.
First, we say that the set $S=\{0,1\}$ is the set of \emph{seed points}. We fix a set $U$ of angles, or ``directions", determining which lines we can draw through the points in our set. Thus, we can define the ``fold" through the point $p$ with angle $u$ as the line given by $$L_u(p):=\{p+ru:r\in\mathbb R\}.$$
Notice that $U$ can also be viewed as a set of points on the unit circle (thinking of $u\in U$ as defining a direction), i.e.\ the circle group $\mathbb{T}$. Moreover, $u$ and $-u$ define the same line, so we can think of the directions as elements of the quotient group $\mathbb{T}/\{\pm1\}$.
Finally, if $u$ and $v$ in $U$ determine distinct folds, we say that $$I_{u,v}(p,q)=L_u(p)\cap L_v(q)$$ is the unique point of intersection of the lines $L_u(p)$ and $L_v(q)$.
Let $R(U)$ be the set of points obtained by iterated intersections $I_{u,v}(p,q)$, starting with the points in $S$. Alternatively, we define $R(U)$ to be the smallest subset of $\mathbb{C}$ that contains $0$ and $1$ and $I_{u,v}(p,q)$ whenever it contains $p$ and $q$, and $u, v$ determine distinct folds. The main theorem of \cite{Buhler2010} is the following.
\begin{theorem}\label{Buhler}
If $U$ is a subgroup of $\mathbb{T}/\{\pm1\}$, and $|U|\geq3$, then $R(U)$ is a subring of $\mathbb{C}$.
\end{theorem}
Let $U_n$ denote the cyclic group of order $n$ generated by $e^{i\pi/n}\mod\{\pm1\}$. Then Buhler et al.\ obtain the following corollary.
\begin{theorem}
Let $n\geq3$. If $n$ is prime, then $R(U_n)=\mathbb{Z}[\zeta_n]$ is the cyclotomic integer ring. If $n$ is not prime, then $R(U_n)=\mathbb{Z}[\zeta_n,\frac{1}{n}]$.
\end{theorem}
In \cite{nedrenco}, Nedrenco explores whether $R(U)$ is a subring of $\mathbb C$ even if $U$ is not a group; he obtains a negative answer in general, along with some necessary conditions for this to hold. The main result is that given the set of directions $U=\{1,e^{i\alpha},e^{i\beta}\}$, if $\alpha\not\equiv\beta\bmod\pi$ then $R(U)=\mathbb Z+z\mathbb Z$ for some $z\in\mathbb C$. Clearly, this will not always be a ring.
In this paper, we explore the inverse problem, that is, given an ``interesting" subring of $\mathbb C$, can we obtain it via an origami construction? The answer is affirmative in the case of the ring of integers of an imaginary quadratic field.
The next section of this paper delves deeper into the origami construction, in particular the intersection operator. Some properties in this section are crucial for understanding the proofs of our main results. This section also explores an example of an origami construction in more depth, that of the Gaussian integers, since it illustrates the geometric and algebraic approach through a very well known ring. Finally, in Section 3, we state and prove our main result.
\begin{rem}
Even though Nedrenco essentially proved one of our two results in \cite{nedrenco}, this was accomplished completely independently by us. In fact, our proof is different, since we are interested in the reverse of Nedrenco's question. That is, we explore whether a given ring can be an origami ring. Nedrenco explores the conditions for which his origami construction ends up being a ring of algebraic integers. This distinction is subtle, but important, and we want to clarify that all of what follows, unless otherwise indicated as coming from \cite{Buhler2010}, is original work.
\end{rem}
\subsection*{Acknowledgements.} This work was part of the first author's senior Honors Thesis at Bates College, advised by the second author, and we thank the Bates Mathematics Department, in particular Peter Wong, for useful feedback and discussions throughout the thesis process. We would also like to thank the rest of the Honors committee, Pamela Harris and Matthew Cot\'{e}, for the careful reading of the thesis and their insightful questions. Finally, we would like to thank the 2015 Summer Undergraduate Applied Mathematics Institute (SUAMI) at Carnegie Mellon University for inspiring the research problem.
\section{Properties of Origami Rings}
\subsection{The intersection operator}
Let $U\subset \mathbb T$, as before. The operator $I_{u,v}(p,q)$ has several important properties that are integral to the proofs of our main results.
\par Let $u,v\in U$ be two distinct angles. Let $p,q$ be points in $R(U)$. Consider the pair of intersecting lines $L_u(p)$ and $L_v(q)$. In \cite{Buhler2010}, it is shown that we can express $I_{u,v}(p,q)$ as
\begin{equation}\tag{$\ast$}
I_{u,v}(p,q)=\frac{u\bar pv-\bar upv}{u\bar v-\bar uv}+\frac{q\bar vu-\bar qvu}{\bar uv-u\bar v}=\frac{[u,p]}{[u,v]}v+\frac{[v,q]}{[v,u]}u
\label{closedform}
\end{equation}
where $[x,y]=\overline{x}y-x\overline{y}$.
From the algebraic closed form (\ref{closedform}) of the intersection operator, we can see by straightforward computation that the following properties hold for $p,q,u,v\in \mathbb C$.
\begin{description}
\item[Symmetry] $I_{u,v}(p,q)=I_{v,u}(q,p)$
\item[Reduction] $I_{u,v}(p,q)=I_{u,v}(p,0)+I_{v,u}(q,0)$
\item[Linearity] $I_{u,v}(p+q,0)=I_{u,v}(p,0)+I_{u,v}(q,0)$ and $rI_{u,v}(p,0)=I_{u,v}(rp,0)$ where $r\in\mathbb R$.
\item[Projection] $I_{u,v}(p,0)$ is a projection of $p$ on the line $\{rv:r\in\mathbb R\}$ in the $u$ direction.
\item[Rotation] For $w\in \mathbb{T}$, $wI_{u,v}(p,q)=I_{wu,wv}(wp,wq)$.
\end{description}
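These identities are easy to check numerically with complex arithmetic. The following sketch (our own Python, for illustration) evaluates the closed form (\ref{closedform}) and verifies that the result lies on both lines and satisfies the symmetry and reduction properties.

```python
import cmath

def bracket(x, y):
    """[x, y] = conj(x) y - x conj(y); purely imaginary for complex x, y."""
    return x.conjugate() * y - x * y.conjugate()

def intersect(u, v, p, q):
    """Closed form (*) for I_{u,v}(p, q), the intersection of L_u(p) and L_v(q)."""
    return (bracket(u, p) / bracket(u, v)) * v + (bracket(v, q) / bracket(v, u)) * u

u, v = cmath.exp(0.7j), cmath.exp(2.1j)   # two directions, distinct mod pi
p, q = 1 + 2j, -0.5 + 1j
z = intersect(u, v, p, q)
```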
\subsection{An illustrative example}
Let $S=\{0,1\}$ be our set of seed points. Now, let $U=\{1,e^{i\pi/4},i\}$. This is clearly not a group, since $e^{\frac{i\pi}{4}}i=e^{\frac{3i\pi}{4}}\not\in U$. In Figure \ref{fig: gauss}, we show the different stages of the construction, obtained by iterated intersections.
\begin{figure}[!h]
\includegraphics[height=.3\textheight]{Gauss1}
\includegraphics[height=.3\textheight]{Gauss2}
\includegraphics[height=.3\textheight]{Gauss3}
\includegraphics[height=.3\textheight]{Gauss4}
\includegraphics[height=.3\textheight]{Gauss5}
\includegraphics[height=.3\textheight]{Gauss6}
\caption[Generational Expansion]{Each successive graph shows all possible intersections from the previous graph using $U=\{1,e^{i\pi/4},i\}$ as our set of allowable angles.}\label{fig: gauss}
\end{figure}
\begin{rem} All the figures in this document were created by coding the algorithm for iterated intersections into Maple \cite{maple}.
\end{rem}
Notice that a pattern seems to emerge: the points constructed all have the form $a+bi$ where $a,b \in \mathbb Z$. This seems to indicate that this origami construction generates the Gaussian integers, a subring of $\mathbb{C}$. In fact, this is a special case of the main result of \cite{nedrenco}, where $z=i$. We prove in the next section that the ring of algebraic integers of an imaginary quadratic field can always be obtained through an origami construction.
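The iterated construction behind Figure~\ref{fig: gauss} is straightforward to simulate. The following sketch (our own Python, not the Maple code used for the figures) runs two generations of intersections from $S=\{0,1\}$ with $U=\{1,e^{i\pi/4},i\}$ and checks that every constructed point has integer real and imaginary parts, i.e.\ is a Gaussian integer.

```python
import cmath
import itertools

def bracket(x, y):
    return x.conjugate() * y - x * y.conjugate()

def intersect(u, v, p, q):
    # closed form (*) for I_{u,v}(p, q)
    return (bracket(u, p) / bracket(u, v)) * v + (bracket(v, q) / bracket(v, u)) * u

def generation(points, dirs):
    """One round of all pairwise intersections over distinct direction pairs."""
    new = set(points)
    for p, q in itertools.product(points, repeat=2):
        for u, v in itertools.permutations(dirs, 2):
            z = intersect(u, v, p, q)
            new.add(complex(round(z.real, 6), round(z.imag, 6)))  # dedupe floats
    return new

pts = {0 + 0j, 1 + 0j}                                # seed points S = {0, 1}
dirs = [1 + 0j, cmath.exp(1j * cmath.pi / 4), 1j]      # U = {1, e^{i pi/4}, i}
for _ in range(2):
    pts = generation(pts, dirs)
```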
\section{Constructing $\mathcal{O}(\mathbb{Q}(\sqrt{m}))$}
A natural question, related to the previous section is this: Which subrings of $\mathbb C$ can be generated through an origami construction, that is, which subrings are origami rings? We have seen that the cyclotomic integers $\mathbb Z[\zeta_n]$, where $n$ is prime, are origami rings by Theorem \ref{Buhler}.
Let $m<0$ be a square-free integer, so $\mathbb Q(\sqrt{m})$ is an imaginary quadratic field. Denote by $\mathcal{O}(\mathbb Q(\sqrt{m}))$ the ring of algebraic integers in $\mathbb Q(\sqrt{m})$. Recall that a complex number is an algebraic integer if and only if it is the root of some monic polynomial with coefficients in $\mathbb Z$. Then we have the following well-known theorem (for details see, for example, \cite[pg. 15]{marcus}).
\begin{theorem}
The set of algebraic integers in the quadratic field $\mathbb Q (\sqrt{m})$ is
\begin{align*}
\{a+b\sqrt{m}: a,b\in\mathbb Z\} & \text{ if $m\equiv 2$ or $3 \mod 4$}\\
\left\{\frac{a+b\sqrt{m}}{2}: a,b\in\mathbb Z , a\equiv b \mod 2\right\} & \text{ if $m\equiv 1 \mod 4$}
\end{align*}
\end{theorem}
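For instance, in the $m\equiv 1 \bmod 4$ case the element $\frac{1+\sqrt{m}}{2}$ is an algebraic integer because it is a root of the monic polynomial $x^2-x+\frac{1-m}{4}\in\mathbb Z[x]$. A quick numerical illustration of ours, instantiated at $m=-7$:

```python
import cmath

m = -7                                   # squarefree, m = 1 (mod 4)
assert (1 - m) % 4 == 0                  # so x^2 - x + (1-m)/4 has integer coefficients
alpha = (1 + cmath.sqrt(m)) / 2          # candidate algebraic integer
residual = alpha ** 2 - alpha + (1 - m) // 4   # should vanish
```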
And so, we can state our main theorem.
\begin{theorem}\label{ourtheorem}
Let $m<0$ be a squarefree integer, and let $\theta=\arg(1+\sqrt{m})$. Then $\mathcal{O}(\mathbb Q(\sqrt{m}))=R (U)$ where
\begin{enumerate}
\item $U=\{1,i,e^{i\theta}\}$, if $m\equiv 2$ or $3 \mod 4$.
\item $U=\{1, e^{i\theta},e^{i(\pi-\theta)}\}$, if $m\equiv 1 \mod 4$.
\end{enumerate}
\end{theorem}
Notice that the Gaussian integers are a special case of Theorem \ref{ourtheorem}.1.
\subsection{Proof of Theorem \ref{ourtheorem}.1}
Let $m\equiv 2$ or $3 \mod 4$ and $m<0$. Let $U=\{1,i,e^{i\theta}\}$ where $\theta$ is the principal argument of $1+\sqrt{m}$.
\begin{lemma}\label{cases1}
$I_{u,v} (p,q)\in \mathbb Z[\sqrt{m}]$ whenever $u,v\in U$ and $p,q\in \mathbb Z[\sqrt{m}] $.
\end{lemma}
\begin{proof}
Since there are three possible directions, there are $3\cdot 2=6$ ordered pairs of distinct directions to consider. Let $p=a+b\sqrt{m}$ and $q=c+d\sqrt{m}$. Then
\begin{enumerate}
\item $I_{1,i} (p,q)=c+b\sqrt{m}$\\
\item $I_{1,e^{i\theta}} (p,q)=b+c-d+b\sqrt{m}$\\
\item $I_{i,1} (p,q)=a+d\sqrt{m}$\\
\item $I_{i,e^{i\theta}} (p,q)=a+ (a-c+d)\sqrt{m}$\\
\item $I_{e^{i\theta},1} (p,q)=a-b+d+d\sqrt{m}$\\
\item $I_{e^{i\theta},i} (p,q)=c+ (-a+b+c)\sqrt{m}$\\
\end{enumerate}
In other words, if $p,q\in\mathbb Z[\sqrt{m}]$, then so is $I_{u,v} (p,q)$. All of these can be obtained from straightforward computations using equation (\ref{closedform}). For example,
$$I_{1,i} (p,q)=\frac{[1,a+b\sqrt{m}]}{[1,i]}i+\frac{[i,c+d\sqrt{m}]}{[i,1]}=c+b\sqrt{m}.$$
\end{proof}
\par This concludes the proof for the closure of the intersection operator. In other words, as long as our seed set starts with elements in $\mathbb Z[\sqrt{m}]$, then the intersections will also be in $\mathbb Z[\sqrt{m}]$. We can also express this claim as $R (U)\subseteq \mathbb Z[\sqrt{m}]$. It remains to be shown that any element in $\mathbb Z[\sqrt{m}]$ is also an element in $R (U)$.
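The six identities of Lemma \ref{cases1} can also be double-checked numerically. Below is a sketch of our own, instantiated at $m=-5$ (which satisfies $m\equiv 3\bmod 4$) with arbitrary integer coordinates: points are represented as Python complex numbers with $\sqrt m = i\sqrt{|m|}$, and each case is compared against the closed form (\ref{closedform}).

```python
import cmath

def bracket(x, y):
    return x.conjugate() * y - x * y.conjugate()

def intersect(u, v, p, q):
    # closed form (*) for I_{u,v}(p, q)
    return (bracket(u, p) / bracket(u, v)) * v + (bracket(v, q) / bracket(v, u)) * u

m = -5
s = cmath.sqrt(m)                    # sqrt(m) = i * sqrt(5)
e = (1 + s) / abs(1 + s)             # e^{i theta}, theta = arg(1 + sqrt(m))
a, b, c, d = 2, -1, 3, 4             # arbitrary integer coordinates
p, q = a + b * s, c + d * s

cases = {
    (1, 1j): c + b * s,              # case (1)
    (1, e):  (b + c - d) + b * s,    # case (2)
    (1j, 1): a + d * s,              # case (3)
    (1j, e): a + (a - c + d) * s,    # case (4)
    (e, 1):  (a - b + d) + d * s,    # case (5)
    (e, 1j): c + (-a + b + c) * s,   # case (6)
}
```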
\begin{lemma}
$\mathbb Z[\sqrt{m}]\subseteq R (U)$.
\end{lemma}
\begin{proof}
\par Let $a+b\sqrt{m}$ be an element in $\mathbb Z[\sqrt{m}]$. We want to show that it can be constructed starting from $\{0,1\}$ and the given set $U$.
We can reduce the problem by showing that given points $\{n+k\sqrt{m},n+1+k\sqrt{m}\}$ we can construct $$n-1+k\sqrt{m}, n+2+k\sqrt{m}, n+ (k-1)\sqrt{m} \hspace{3pt} \text{and} \hspace{3pt} n+1+ (k+1)\sqrt{m}.$$ In Figure \ref{redux} we give an illustration of this step of the proof. In essence, the following is the induction step to a double induction on the real and imaginary components of an arbitrary integer we are constructing. That is, we prove that for any two adjacent points in the construction, we can construct points that are adjacent in every direction. This, and the fact that our seed is $\{0,1\}$, is enough to show that we can construct all of the integers.
\begin{figure}[!h]
\includegraphics[ height=0.3\textheight]{redux12}
\includegraphics[ height=0.3\textheight]{redux22}
\caption[Reduction]{Given two points next to each other, we want to show that we can generate all the points immediately around them. This results in more points next to each other, upon which we can repeat the process. Notice that there are six adjacent points to an adjacent pair and we only prove this for four, but it is easy to see that this is enough. Think of this as two overlapping ``crosses". }\label{redux}
\end{figure}
We will now construct the desired points using the appropriate reference points.
\begin{description}
\item[Constructing $n+2+k\sqrt{m}$] Consider $$I_{i,1} (I_{1,e^{i\theta}} (I_{e^{i\theta},i} (n+k\sqrt{m},n+1+k\sqrt{m}),n+1+k\sqrt{m}),n+1+k\sqrt{m}).$$ Notice that we can evaluate this expression using the six cases enumerated in Lemma \ref{cases1}. In particular, we apply case (6) first to get $$I_{i,1} (I_{1,e^{i\theta}} (n+1+ (k+1)\sqrt{m},n+1+k\sqrt{m}),n+1+k\sqrt{m}).$$ Next, we apply case (2) to get $$I_{i,1} (n+1+ (k+1)\sqrt{m},n+1+k\sqrt{m}).$$ Finally, we use case (3) to get $$n+2+k\sqrt{m}.$$
\item[Constructing $n-1+k\sqrt{m}$] Consider $$I_{i,1} (I_{1,e^{i\theta}} (I_{i,e^{i\theta}} (n+k\sqrt{m},n+1+k\sqrt{m}),n+k\sqrt{m}),n+k\sqrt{m}).$$ First, we apply case (4) to get $$I_{i,1} (I_{1,e^{i\theta}} (n+ (k-1)\sqrt{m},n+k\sqrt{m}),n+k\sqrt{m}).$$ Next, we apply case (2) to get $$I_{i,1} (n-1+ (k-1)\sqrt{m},n+k\sqrt{m}).$$ Finally, we apply case (3) to get $$n-1+k\sqrt{m}.$$
\item[Constructing $n+ (k+1)\sqrt{m}$ and $n+1+ (k+1)\sqrt{m}$] Consider $$I_{e^{i\theta},i} (n+k\sqrt{m},n+1+k\sqrt{m}).$$ Using case (6) we get $$n+1+ (k+1)\sqrt{m}.$$ Now consider $$I_{i,1} (n+k\sqrt{m},n+1+ (k+1)\sqrt{m}).$$ Using case (3) we get $$n+ (k+1)\sqrt{m}.$$
\item[Constructing $n+ (k-1)\sqrt{m}$ and $n+1+ (k-1)\sqrt{m}$] Consider $$I_{i,e^{i\theta}} (n+k\sqrt{m},n+1+k\sqrt{m}).$$ Using case (4) we get $$n+ (k-1)\sqrt{m}.$$ Consider $$I_{i,1} (n+1+k\sqrt{m},n+ (k-1)\sqrt{m}).$$ Using case (3) we get $$n+1+ (k-1)\sqrt{m}.$$
\end{description}
\end{proof}
With this we have shown that $\mathbb Z[\sqrt{m}]\subseteq R (U)$, completing the proof.
\subsection{Proof of \ref{ourtheorem}.2}
The proof of \ref{ourtheorem}.2 employs the same strategy as that of \ref{ourtheorem}.1, with one subtle difference arising from the slightly different structure of the ring.
Let $m\equiv 1 \mod 4$ and $m<0$. Let $U=\{1, e^{i\theta},e^{i (\pi-\theta)}\}$ where $\theta$ is the principal argument of $1+\sqrt{m}$.
\begin{lemma}
$I_{u,v} (p,q)\in \mathcal{O}(\mathbb Q(\sqrt{m}))$ where $u,v\in U$ and $p,q\in \mathcal{O}(\mathbb Q(\sqrt{m}))$.
\end{lemma}
\begin{proof}
Again, there are six cases to consider. Let $p=\frac{a+b\sqrt{m}}{2}$ and $q=\frac{c+d\sqrt{m}}{2}$, where $a\equiv b\bmod 2$ and $c\equiv d\bmod2$.
\begin{enumerate}
\item $I_{1,e^{i\theta}} (p,q)=b+c-d+b\sqrt{m}$\\
\item $I_{1,e^{i (\pi-\theta)}} (p,q)=c+d-b+b\sqrt{m}$\\
\item $I_{e^{i\theta},1} (p,q)=a-b+d+d\sqrt{m}$\\
\item $I_{e^{i\theta},e^{i (\pi-\theta)}} (p,q)=\frac{ (a-b+c+d)+ (b-a+c+d)\sqrt{m}}{2}$\\
\item $I_{e^{i (\pi-\theta)},1} (p,q)=a+b-d+d\sqrt{m}$\\
\item $I_{e^{i (\pi-\theta)},e^{i\theta}} (p,q)=\frac{ (a+b+c-d)+ (a+b-c+d)\sqrt{m}}{2}$\\
\end{enumerate}
All of these cases can be obtained, again, by straightforward computation using (\ref{closedform}), and are left as an exercise. Notice that (1), (2), (3), and (5) are all clearly in the ring of algebraic integers. The only additional fact to show is that (4) and (6) are as well. But it is easy to see that $$a-b+c+d\equiv b-a+c+d \bmod 2,$$ since $a\equiv b\bmod 2$. And similarly $$a+b+c-d\equiv a+b-c+d \bmod 2,$$ because $c\equiv d\bmod 2$.
\end{proof}
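The two congruences invoked for cases (4) and (6) can also be confirmed exhaustively by machine. The following Python sketch is an illustration added here (the function name is our own choice, and the check is not part of the proof):

```python
def parities_agree(a, b, c, d):
    # Check that the two displayed numerators agree mod 2 when a = b and
    # c = d (mod 2), so the results of cases (4) and (6) lie in the ring.
    return ((a - b + c + d) - (b - a + c + d)) % 2 == 0 \
       and ((a + b + c - d) - (a + b - c + d)) % 2 == 0

# Exhaust all residues mod 2 (representatives in range(4)).
assert all(parities_agree(a, b, c, d)
           for a in range(4) for b in range(4)
           for c in range(4) for d in range(4)
           if (a - b) % 2 == 0 and (c - d) % 2 == 0)
```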
\par This concludes the proof for the closure of the intersection operator. In other words, as long as our seed set starts with elements in $\mathcal{O}(\mathbb Q(\sqrt{m}))$, then the intersections will also be in $\mathcal{O}(\mathbb Q(\sqrt{m}))$. That is, $R (U)\subseteq \mathcal{O}(\mathbb Q(\sqrt{m}))$. It remains to be shown that any element in $\mathcal{O}(\mathbb Q(\sqrt{m}))$ is also an element in $R (U)$.
\begin{lemma}
$\mathcal{O}(\mathbb Q(\sqrt{m}))\subseteq R(U)$.
\end{lemma}
\begin{proof}
\par Let $\dfrac{a+b\sqrt{m}}{2}$ be an element of $\mathcal{O}(\mathbb Q(\sqrt{m}))$. As before, we can reduce the problem to one of double induction. This is done by showing that given points
$$\left\{\frac{n+k\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right\}$$
we can construct $$\dfrac{n-2+k\sqrt{m}}{2}, \dfrac{n+4+k\sqrt{m}}{2}, \dfrac{n+1+ (k-1)\sqrt{m}}{2} \hspace{3pt} \text{and} \hspace{3pt}\dfrac{n+1+ (k+1)\sqrt{m}}{2}.$$ Figure \ref{redux2} is an illustration of the points we want to construct.
\begin{figure}[!h]
\includegraphics[ height=0.2\textheight]{redux12}
\includegraphics[ height=0.2\textheight]{redux32}
\includegraphics[ height=0.2\textheight]{redux312}
\caption[Reduction]{Given two points next to each other, we want to show that we can generate all the points immediately around them. Notice that this results in more points next to each other, upon which we can repeat the process. As before, we do not need to actually make all eight adjacent points, since it suffices to make the two points on either side of where we start, and the two points above and below where we start. The graph on the left illustrates the starting points. The graph in the center shows all points adjacent to the two starting points. The graph on the right shows the points whose construction is sufficient to prove the theorem.}\label{redux2}
\end{figure}
We will now show how to construct the desired points.
\begin{description}
\item[Constructing $\frac{n+1+ (k-1)\sqrt{m}}{2}$] Consider $$I_{e^{i \theta},e^{i (\pi-\theta)}} \left(\frac{n+k\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)$$
By applying case (4) from above, we see that $$I_{e^{i \theta},e^{i (\pi-\theta)}} \left(\frac{n+k\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)=\frac{n+1+ (k-1)\sqrt{m}}{2}$$
\item[Constructing $\frac{n+1+ (k+1)\sqrt{m}}{2}$] Consider $$I_{e^{i (\pi-\theta)},e^{i \theta}} \left(\frac{n+k\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)$$
By applying case (6) from above, we see that $$I_{e^{i (\pi-\theta)},e^{i \theta}} \left(\frac{n+k\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)=\frac{n+1+ (k+1)\sqrt{m}}{2}$$
\item[Constructing $\frac{n-2+k\sqrt{m}}{2}$] Consider $$I_{e^{i\theta},1} \left(I_{1,e^{i (\pi-\theta)}} \left(\frac{n+1+ (k+1)\sqrt{m}}{2},\frac{n+k\sqrt{m}}{2}\right),\frac{n+k\sqrt{m}}{2}\right)$$
By applying case (2) from above, we can reduce the previous expression to $$I_{e^{i\theta},1} \left(\frac{n-1+ (k+1)\sqrt{m}}{2},\frac{n+k\sqrt{m}}{2}\right)$$
We further reduce the expression using case (3) from above. The result is $$I_{e^{i\theta},1} \left(\frac{n-1+ (k+1)\sqrt{m}}{2},\frac{n+k\sqrt{m}}{2}\right)=\frac{n-2+k\sqrt{m}}{2}$$
\item[Constructing $\frac{n+4+k\sqrt{m}}{2}$] Consider $$I_{e^{i (\pi-\theta)},1} \left(I_{1,e^{i\theta}} \left(\frac{n+1+ (k+1)\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right),\frac{n+2+k\sqrt{m}}{2}\right)$$
By applying case (1) from above, we can reduce the previous expression to $$I_{e^{i (\pi-\theta)},1} \left(\frac{n+3+ (k+1)\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)$$
We further reduce the expression using case (5) from above. The result is $$I_{e^{i (\pi-\theta)},1} \left(\frac{n+3+ (k+1)\sqrt{m}}{2},\frac{n+2+k\sqrt{m}}{2}\right)=\frac{n+4+k\sqrt{m}}{2}$$
\end{description}
\end{proof}
With this we have shown that $$\left\{\frac{a+b\sqrt{m}}{2}: a,b\in\mathbb Z , a\equiv b \mod 2\right\}\subseteq R (U)$$ completing the proof.
\subsection{Some final illustrations and remarks}
In Figure \ref{fig: gausspartial} we see that when we use $U=\{1,i,e^{i\theta}\}$ as our angle set, then the origami ring grows into the first and third quadrants, and bleeds into the others. As discussed, $R(U)=\mathbb Z[i]$.
In Figure \ref{fig: partialhalfs} we see that when we use $U=\{1,e^{i\arg(1+\sqrt{-3})},e^{i(\pi-\arg(1+\sqrt{-3}))}\}$ as our angle set, then the origami ring grows along the real line, and slowly bleeds into the rest of the complex plane.
This is an illustration of the second case of Theorem \ref{ourtheorem}. In this case, $R(U)=\mathcal{O}(\mathbb Q(\sqrt{-3}))$.
Of course, $R(U)$ is closed under the intersection operator by construction, so the growth pattern does not matter in an abstract sense. However, computationally, it means that the number of steps it takes to construct a point is not related to that point's modulus. In fact, we get an entirely different measure of distance if we only consider the number of steps it takes to generate a point. One possible additional exploration is, given more general starting angles and points, to describe the dynamics of the iterative process.
These examples also serve to illustrate the progression of the iterative process, which was coded into Maple to produce the graphs.
\begin{figure}[!h]
\includegraphics[ height=.3\textheight]{Gauss5}
\caption[Five Generations of the Easy Case]{This graph depicts the first 5 generations of origami points using $U=\{1, i,e^{i\frac{\pi}{4}}\}$}\label{fig: gausspartial}
\end{figure}
\begin{figure}[!h]
\includegraphics[ height=.3\textheight]{batman21}
\caption[Five Generations of the Hard Case]{This graph depicts the first 5 generations of origami points using $$U=\{1,e^{i\arg(1+\sqrt{-3})},e^{i(\pi-\arg(1+\sqrt{-3}))}\}$$}\label{fig: partialhalfs}
\end{figure}
\pagebreak
% arXiv:1106.5444 --- An Extension of Young's Inequality
% Abstract: Young's inequality is extended to the context of absolutely continuous measures. Several applications are included.
\section{Introduction}
Young's inequality \cite{Y1912} asserts that every strictly increasing
continuous function $f:\left[ 0,\infty\right) \longrightarrow\left[
0,\infty\right) $ with $f\left( 0\right) =0$ and $\underset{x\rightarrow
\infty}{\lim}f\left( x\right) =\infty$ satisfies an inequality of the
following form
\begin{equation}
ab\leq\int_{0}^{a}f\left( x\right) dx+\int_{0}^{b}f^{-1}\left( y\right)
dy, \label{youngineq}
\end{equation}
whenever $a$ and $b\ $are nonnegative real numbers. The equality occurs if and
only if $f\left( a\right) =b$. See \cite{HLP}, \cite{Mit}, \cite{NP2006} and
\cite{RV} for details and significant applications.
Several questions arise naturally in connection with this classical result.
\begin{enumerate}
\item[(Q1):] Is the restriction on strict monotonicity (or on continuity)
really necessary?
\item[(Q2):] Is there any weighted analogue of Young's inequality?
\item[(Q3):] Can Young's inequality be improved?
\end{enumerate}
F. Cunningham Jr. and N. Grossman \cite{CG1971} noticed that the question (Q1)
has a positive answer (correcting the prevalent belief that Young's inequality
is the business of strictly increasing continuous functions). The aim of the
present paper is to extend the entire discussion to the framework of locally
absolutely continuous measures and to prove several improvements.
As is well known, Young's inequality is an illustration of the Legendre duality.
Precisely, the functions
\[
F(a)=\int_{0}^{a}f\left( x\right) dx\text{ and }G(b)=\int_{0}^{b}f^{-1}\left( x\right) dx,
\]
are both continuous and convex on $\left[ 0,\infty\right) $ and
(\ref{youngineq}) can be restated as
\begin{equation}
ab\leq F(a)+G(b)\text{\quad for all }a,b\in\left[ 0,\infty\right) ,
\label{youngineq2}
\end{equation}
with equality if and only if $f\left( a\right) =b.$ Because of the equality
case, the formula (\ref{youngineq2}) leads to the following connection between
the functions $F$ and $G$:
\begin{equation}
F(a)=\sup\left\{ ab-G(b):b\geq0\right\} \label{defF}
\end{equation}
and
\[
G(b)=\sup\left\{ ab-F(a):a\geq0\right\} .
\]
It turns out that each of these formulas produces a convex function (possibly
on a different interval). Some details are in order.
By definition, the \emph{conjugate} of a convex function $F$ defined on a
nondegenerate interval $I$ is the function
\[
F^{\ast}:I^{\ast}\rightarrow\mathbb{R},\text{\quad}F^{\ast}(y)=\sup\left\{
xy-F(x):x\in I\right\} ,
\]
with domain $I^{\ast}=\left\{ y\in\mathbb{R}:F^{\ast}(y)<\infty\right\} $.
Necessarily $I^{\ast}$ is a non-empty interval and $F^{\ast}$ is a convex
function whose level sets $\left\{ y:F^{\ast}(y)\leq\lambda\right\} $ are
closed subsets of $\mathbb{R}$ for each $\lambda\in\mathbb{R}$ (usually such
functions are called \emph{closed} convex functions).
A convex function may not be differentiable, but it admits a good substitute
for differentiability.
The \emph{subdifferential\ }of a real function\emph{\ }$F$ defined on an
interval $I$ is a multivalued function $\partial F:I\rightarrow\mathcal{P}(\mathbb{R})$ defined by
\[
\partial F(x)=\left\{ \lambda\in\mathbb{R}:F(y)\geq F(x)+\lambda(y-x)\text{,
for every}\,\,y\in I\right\} .
\]
Geometrically, the subdifferential gives us the slopes of the supporting lines
for the graph of $F$. The sub\allowbreak differential at a point\emph{\ }is
always a convex set, possibly empty, but the convex functions $F:I\rightarrow
\mathbb{R}$ have the remarkable property that $\partial F(x)\neq\emptyset$ at
all interior points. It is worth noticing that $\partial F(x)=\left\{
F^{\prime}(x)\right\} $ at each point where $F$ is differentiable (so this
formula works for all points of $I$ except for a countable subset). See
\cite{NP2006}, page 30.
\begin{lemma}
\label{Lem1}\noindent\emph{(}Legendre duality, \emph{\cite{NP2006}}, page
\emph{41)}. Let $F:I\rightarrow\mathbb{R}$\ be a closed convex function. Then
its conjugate $F^{\ast}:I^{\ast}\rightarrow\mathbb{R}$ is also convex and
closed and:
$i)$ $xy\leq F(x)+F^{\ast}(y)$ for all $x\in I,$ $y\in I^{\ast};$
$ii)$ $xy=F(x)+F^{\ast}(y)$ if, and only if, $y\in\partial F(x);$
$iii)$ $\partial F^{\ast}=\,\left( \partial F\right) ^{-1}$ \emph{(}as
graphs\emph{)}$;$
$iv)$ $F^{\ast\ast}=F.$
\end{lemma}
Recall that the inverse of a graph $\Gamma$ is the set $\Gamma^{-1}=\left\{
\left( y,x\right) :(x,y)\in\Gamma\right\} .$
How far is Young's inequality from the Legendre duality? Surprisingly, they are pretty close, in the sense that in most cases the Legendre duality can be converted into a Young-like inequality. Indeed, every continuous convex function admits an integral representation.
\begin{lemma}
\label{Lem2}\noindent\emph{(}See \emph{\cite{NP2006}}, page \emph{37)}. Let
$F$\ be a continuous convex function defined on an interval $I$ and let
$\varphi:I\rightarrow\mathbb{R}$ be a function such that $\varphi
(x)\in\partial F(x)$ for every $x\in\,I.$\ Then for every $a<b$ in $I$ we
have
\[
F(b)-F(a)=\int_{a}^{b}\,\varphi(t)\,dt.
\]
\end{lemma}
As a consequence, the heuristic meaning of the formula $i)$ in Lemma
\ref{Lem1} is the following Young-like inequality,
\[
ab\leq\int_{a_{0}}^{a}\varphi\left( x\right) dx+\int_{b_{0}}^{b}\psi\left(
y\right) dy\text{\quad for all }a\in I,\ b\in I^{\ast},
\]
where $\varphi$ and $\psi$ are selection functions for $\partial F$ and
respectively $\left( \partial F\right) ^{-1}$. Now it becomes clear that
Young's inequality should work outside strict monotonicity (as well as outside
continuity). The details are presented in Section 2. Our approach (based on
the geometric meaning of integrals as areas) allows us to extend the framework
of integrability to all positive measures $\rho$ which are locally absolutely
continuous with respect to the planar Lebesgue measure $dxdy$. See Theorem
\ref{ThmYoungNondecr}\ below.
A special case of Young's inequality is
\[
xy\leq\frac{x^{p}}{p}+\frac{y^{q}}{q},
\]
which works for all $x,y\geq0$, and $p,q>1$ with $1/p+1/q=1$. Theorem
\ref{ThmYoungNondecr} yields the following companion to this inequality in the
case of the Gaussian measure $\frac{4}{\pi}e^{-x^{2}-y^{2}}dxdy$ on
$[0,\infty)\times\lbrack0,\infty):$
\[
\operatorname{erf}(x)\operatorname{erf}(y)\leq\frac{2}{\sqrt{\pi}}\int_{0}^{x}\operatorname{erf}\left( s^{p-1}\right) e^{-s^{2}}ds+\frac{2}{\sqrt{\pi}}\int_{0}^{y}\operatorname{erf}\left( t^{q-1}\right) e^{-t^{2}}dt,
\]
where
\begin{equation}
\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-s^{2}}ds \label{erf}
\end{equation}
is the \emph{Gauss error function} (or the erf function).
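The Gaussian companion inequality above can be tested numerically. The sketch below is an illustration of ours (the helper name and the quadrature parameters are our own choices); it checks a few sample points for $p=3$:

```python
import math

def rhs_term(upper, r, n=20_000):
    # (2/sqrt(pi)) * integral_0^upper erf(s**r) * exp(-s**2) ds, midpoint rule.
    h = upper / n
    total = sum(math.erf(((i + 0.5) * h) ** r) * math.exp(-((i + 0.5) * h) ** 2)
                for i in range(n))
    return 2.0 / math.sqrt(math.pi) * h * total

p = 3.0
q = p / (p - 1.0)                        # conjugate exponent, here q = 1.5
for x, y in [(0.5, 0.5), (1.0, 2.0), (2.0, 0.3)]:
    lhs = math.erf(x) * math.erf(y)
    assert lhs <= rhs_term(x, p - 1.0) + rhs_term(y, q - 1.0) + 1e-6
```

Equality would require $y=x^{p-1}$; at the sample points the inequality is strict.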
The precision of our generalization of Young's inequality is the objective of Section 3.
In Section 4 we discuss yet another extension of Young's inequality, based on
recent work done by J. Jak\v{s}eti\'{c} and J. E. Pe\v{c}ari\'{c} \cite{P}.
The paper ends by noticing the connection of our result to the theory of
$c$-convexity (that is, of convexity associated to a cost density function).
Last but not least, all results in this paper can be extended verbatim to
the framework of nondecreasing functions $f:[a_{0},a_{1})\rightarrow\lbrack
A_{0},A_{1})$ such that $a_{0}<a_{1}\leq\infty$ and $A_{0}<A_{1}\leq\infty,$
$f(a_{0})=A_{0}$ and $\lim_{x\rightarrow a_{1}}f(x)=A_{1}.$ In other words,
the interval $[0,\infty)$ plays no special role in Young's inequality.
Besides, there is a straightforward companion of Young's inequality for
nonincreasing functions, but this is outside the scope of the present paper.
\section{Young's inequality for weighted measures}
In what follows $f:\left[ 0,\infty\right) \longrightarrow\left[
0,\infty\right) $ will denote a nondecreasing function such that $f\left(
0\right) =0$ and $\underset{x\rightarrow\infty}{\lim}f\left( x\right)
=\infty.$ Since $f$ is not necessarily injective we will attach to $f$ a
\emph{pseudo-inverse} by the following formula
\[
f_{\sup}^{-1}:\left[ 0,\infty\right) \longrightarrow\left[ 0,\infty\right)
,\quad f_{\sup}^{-1}\left( y\right) =\inf\{x\geq0:f(x)>y\}.
\]
Clearly, $f_{\sup}^{-1}$ is nondecreasing and $f_{\sup}^{-1}\left( f\left(
x\right) \right) \geq x$ for all $x.$ Moreover, with the convention
$f(0-)=0,$
\[
f_{\sup}^{-1}\left( y\right) =\sup\left\{ x:y\in\left[ f\left( x-\right)
,f\left( x+\right) \right] \right\} ;
\]
here $f\left( x-\right) $ and $f\left( x+\right) $ represent the lateral
limits at $x$. When $f$ is also continuous,
\[
f_{\sup}^{-1}(y)=\max\left\{ x\geq0:y=f(x)\right\} .
\]
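As an illustration (not part of the paper's argument), the pseudo-inverse can be approximated numerically for a concrete step function; the helper names and the grid scan below are our own assumptions:

```python
def f(x):
    # A nondecreasing step function with f(0) = 0 that is not injective.
    return 0.0 if x < 1 else (2.0 if x < 3 else float(x))

def f_sup_inv(y, hi=10.0, steps=100_000):
    # f_sup^{-1}(y) = inf{x >= 0 : f(x) > y}, approximated on a grid.
    for i in range(steps + 1):
        x = hi * i / steps
        if f(x) > y:
            return x
    return hi

assert abs(f_sup_inv(0.0) - 1.0) < 1e-3   # f first exceeds 0 at the jump x = 1
assert abs(f_sup_inv(2.0) - 3.0) < 1e-3   # f equals 2 on all of [1, 3)
```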
\begin{remark}
$($F. Cunningham Jr. and N. Grossman \cite{CG1971}$)$. \emph{Since
pseudo-inverses will be used as integrands, it is convenient to enlarge the
concept of pseudo-inverse by referring to any function} $g$ \emph{such that
\[
f_{\inf}^{-1}\leq g\leq f_{\sup}^{-1},
\]
\emph{where} $f_{\inf}^{-1}(y)=\sup\{x\geq0:f(x)<y\}$. \emph{Necessarily,} $g$
\emph{is nondecreasing and any two} \emph{pseudo-inverses agree except on a
countable set (so their integrals will be the same)}.
\end{remark}
Given $0\leq a<b,$ we define the \emph{epigraph} and the \emph{hypograph} of
$f|_{[a,b]}$ respectively by
\[
\operatorname{epi}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[
a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right]
:y\geq f\left( x\right) \right\} ,
\]
and
\[
\operatorname{hyp}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[
a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right]
:y\leq f\left( x\right) \right\} .
\]
Their intersection is the \emph{graph} of $f|_{[a,b]},$
\[
\operatorname*{graph}f|_{[a,b]}=\left\{ \left( x,y\right) \in\left[
a,b\right] \times\left[ f\left( a\right) ,f\left( b\right) \right]
:y=f\left( x\right) \right\} .
\]
Notice that our definitions of epigraph and hypograph are not the standard
ones, but agree with them in the context of monotone functions.
We will next consider a measure $\rho$ on $\left[ 0,\infty\right)
\times\left[ 0,\infty\right) ,$ which is locally absolutely continuous with
respect to the Lebesgue measure $dxdy,$ that is, $\rho$ is of the form
\[
\rho\left( A\right) =\int_{A}K\left( x,y\right) dxdy,
\]
where $K:\left[ 0,\infty\right) \times\left[ 0,\infty\right)
\longrightarrow\lbrack0,\infty)\ $is a Lebesgue locally integrable function,
and $A$ is any compact subset of $\left[ 0,\infty\right) \times\left[
0,\infty\right) $.
Clearly,
\begin{align*}
\rho\left( \operatorname{hyp}f|_{[a,b]}\right) +\rho\left(
\operatorname{epi}f|_{[a,b]}\right) & =\rho\left( \left[ a,b\right]
\times\left[ f\left( a\right) ,f\left( b\right) \right] \right) \\
& =\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left(
x,y\right) dydx.
\end{align*}
Moreover,
\[
\rho\left( \operatorname{hyp}f|_{[a,b]}\right) =\int_{a}^{b}\left(
\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right)
dy\right) dx.
\]
and
\begin{equation}
\rho\left( \operatorname{epi}f|_{[a,b]}\right) =\int_{f\left( a\right)
}^{f\left( b\right) }\left( \int_{a}^{f_{\sup}^{-1}\left( y\right)
}K\left( x,y\right) dx\right) dy.\nonumber
\end{equation}
The discussion above can be summarized as follows:
\begin{lemma}
\label{Lem3}Let $f:\left[ 0,\infty\right) \longrightarrow\left[
0,\infty\right) $ be a nondecreasing function such that $f\left( 0\right)
=0$ and $\underset{x\rightarrow\infty}{\lim}f\left( x\right) =\infty$. Then
for every Lebesgue locally integrable function $K:\left[ 0,\infty\right)
\times\left[ 0,\infty\right) \longrightarrow\lbrack0,\infty)$ and every pair
of nonnegative numbers $a<b,$
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{f\left( b\right)
}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right)
dx\right) dy\\
=\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left(
x,y\right) dydx.
\end{multline*}
\end{lemma}
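Lemma \ref{Lem3} can be verified numerically in a simple instance. The sketch below is an aside of ours (assuming $K\equiv 1$ and $f(x)=x^{2}$ on $[a,b]$, with midpoint-rule quadrature); it checks that the hypograph and epigraph weights add up to the weight of the rectangle:

```python
import math

# Numerical check of the lemma with K = 1 and f(x) = x**2 on [a, b]:
# weight(hypograph) + weight(epigraph) = weight of [a,b] x [f(a),f(b)].
a, b = 0.5, 2.0
f = lambda x: x * x
finv = lambda y: math.sqrt(y)          # pseudo-inverse of f on [0, infinity)

n = 100_000
hyp = sum(f(a + (i + 0.5) * (b - a) / n) - f(a) for i in range(n)) * (b - a) / n
ya, yb = f(a), f(b)
epi = sum(finv(ya + (i + 0.5) * (yb - ya) / n) - a for i in range(n)) * (yb - ya) / n
rect = (b - a) * (yb - ya)
assert abs(hyp + epi - rect) < 1e-6    # 2.25 + 3.375 = 5.625 = 1.5 * 3.75
```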
We can now state the main result of this section:
\begin{theorem}
\label{ThmYoungNondecr}\emph{(}Young's inequality for nondecreasing
functions\textbf{\emph{)}. }Under the assumptions of Lemma \ref{Lem3}, for every pair
of nonnegative numbers $a<b,$ and every number $c\geq f(a)$ we have
\begin{multline*}
\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
\leq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left(
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy.
\end{multline*}
If in addition $K$ is strictly positive almost everywhere, then the equality
occurs if and only if $c\in\left[ f\left( b-\right) ,f\left( b+\right)
\right] .$
\begin{proof}
We start with the case where $f\left( a\right) \leq c\leq f\left(
b-\right) $. See Figure \ref{fig1}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=5.7464cm,width=6.491cm]{fig1.jpg}
\caption{The geometry of Young's inequality when $f\left( a\right) \leq c\leq f\left( b-\right) .$}
\label{fig1}
\end{center}
\end{figure}
In this case,
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
=\int_{a}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
+\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\
=\int_{a}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx+\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\
+\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
\geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx,
\end{multline*}
with equality if and only if $\int_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right) dx=0.$ When $K$ is strictly positive almost everywhere, this means that $c=f\left( b-\right) $.
If $c\geq f\left( b+\right) ,$ then
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
=\int_{a}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\
=\int_{a}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
-\left( \int_{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( a\right) }^{f\left( c\right) }K\left( x,y\right) dy\right) dx-\int_{f\left( b+\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\right) \\
\geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx.
\end{multline*}
Equality holds if and only if $\int_{f\left( b+\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy=0$, that is, when $c=f\left( b+\right) $ (provided that $K$ is strictly positive almost everywhere). See Figure \ref{fig2}.
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=5.3927cm,width=6.6558cm]{fig2.jpg}
\caption{The case $c\geq f\left( b+\right) .$}
\label{fig2}
\end{center}
\end{figure}
If $c\in\left( f\left( b-\right) ,f\left( b+\right) \right) ,$ then
$f_{\sup}^{-1}\left( c\right) =b$ and the inequality in the statement of
Theorem \ref{ThmYoungNondecr} is actually an equality. See Figure \ref{fig3}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=6.1593cm,width=6.9216cm]{fig3.jpg}
\caption{The equality case.}
\label{fig3}
\end{center}
\end{figure}
\end{proof}
\end{theorem}
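The theorem can also be sanity-checked numerically for a genuinely discontinuous $f$. A minimal sketch of ours with $K\equiv1$, $a=0$, and $f(x)=\lfloor x\rfloor$, for which $f_{\sup}^{-1}(y)=\lfloor y\rfloor+1$ on $[0,\infty)$ (all helper names are our own):

```python
import math

# K = 1, f(x) = floor(x) (discontinuous, nondecreasing, f(0) = 0), a = 0;
# then f_sup^{-1}(y) = floor(y) + 1 for y >= 0.
f = math.floor
finv_sup = lambda y: math.floor(y) + 1.0

def integral(g, lo, hi, n=100_000):
    # Midpoint-rule quadrature; exact enough for these step functions.
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

b, c = 2.5, 1.7
lhs = b * c                              # double integral of K = 1 over the box
rhs = integral(f, 0.0, b) + integral(finv_sup, 0.0, c)
assert lhs <= rhs + 1e-9                 # 4.25 <= 2.0 + 2.4
```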
\begin{corollary}
\label{CorContIncr}\emph{(}Young's inequality for continuous increasing
functions\emph{)}\textbf{. }If $f:\left[ 0,\infty\right) \longrightarrow
\left[ 0,\infty\right) $ is also continuous and increasing, then
\begin{multline*}
\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
\leq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left(
\int_{a}^{f^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy
\end{multline*}
for every real number $c\geq f(a)$. Assuming $K$ strictly positive almost
everywhere, the equality occurs if and only if $\ c=f\left( b\right) .$
\end{corollary}
If $K\left( x,y\right) =1$ for every $x,y\in\left[ 0,\infty\right) $, then
Corollary \ref{CorContIncr} asserts that
\[
bc-af\left( a\right) \leq\int_{a}^{b}f\left( x\right) dx+\int_{f\left(
a\right) }^{c}f^{-1}\left( y\right) dy\text{\quad for all }0<a<b\text{ and
}c>f(a);
\]
equality occurs if and only if $c=f\left( b\right) $. In the special case
where $a=f\left( a\right) =0$, this reduces to the classical inequality of Young.
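As a quick numerical illustration of this classical special case (an aside added here, with our own helper name, not part of the original argument):

```python
import random

def young_gap(x, y, p):
    # RHS minus LHS of x*y <= x**p/p + y**q/q with conjugate q = p/(p-1).
    q = p / (p - 1.0)
    return x**p / p + y**q / q - x * y

# Spot-check nonnegativity of the gap on random inputs.
random.seed(0)
for _ in range(1000):
    p = random.uniform(1.1, 5.0)
    x, y = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    assert young_gap(x, y, p) >= -1e-9
```

The gap vanishes exactly when $y=x^{p-1}$, matching the equality case $c=f(b)$.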
\begin{remark}
$($The probabilistic companion of Theorem \ref{ThmYoungNondecr}$)$.
\emph{Suppose there is given a nonnegative random variable} $X:[0,\infty
)\rightarrow\lbrack0,\infty)$ \emph{whose cumulative distribution function}
$F_{X}(x)=P\left( X\leq x\right) $ \emph{admits a density, that is, a
nonnegative Lebesgue-integrable function} $\rho_{X}$ \emph{such} \emph{that}
\[
P\left( x\leq X\leq y\right) =\int_{x}^{y}\rho_{X}(u)du\text{\quad for all
}x\leq y.
\]
\emph{The} quantile function \emph{of the distribution function} $F_{X}$
$($\emph{also known as the} increasing rearrangement \emph{of the random
variable} $X)$ \emph{is defined by}
\[
Q_{X}(x)=\inf\left\{ y:F_{X}(y)\geq x\right\} .
\]
\emph{Thus, a quantile function is nothing but a pseudo-inverse of} $F_{X}$.
\emph{Motivated by Statistics, a number of fast algorithms were developed for
computing the quantile functions with high accuracy. See} \cite{A}.
\emph{Without entering the details, we recall here the remarkable formula}
\emph{(due to} \emph{G.} \emph{Steinbrecher)} \emph{for the quantile function
of the normal distribution}:
\[
\operatorname{erf}^{-1}(z)=\sum_{k=0}^{\infty}\frac{c_{k}\left( \frac{\sqrt{\pi}}{2}z\right) ^{2k+1}}{2k+1},
\]
\emph{where} $c_{0}=1$ \emph{and}
\[
c_{k}=\sum_{m=0}^{k-1}\frac{c_{m}c_{k-m-1}}{\left( m+1\right) \left(
2m+1\right) }\text{\quad\emph{for all} }k\geq1.
\]
\emph{According to Theorem} \ref{ThmYoungNondecr}, \emph{for every pair of
continuous random variables} $Y,Z:[0,\infty)\rightarrow\lbrack0,\infty)$
\emph{with density} $\rho_{Y,Z},$ \emph{and every positive numbers} $b$
\emph{and} $c,$ \emph{the following inequality holds}:
\[
P\left( Y\leq b;Z\leq c\right) \leq\int_{0}^{b}\left( \int_{0}^{F_{X}(x)}\rho_{Y,Z}\left( x,y\right) dy\right) dx+\int_{0}^{c}\left( \int_{0}^{Q_{X}(y)}\rho_{Y,Z}\left( x,y\right) dx\right) dy.
\]
\emph{This can be seen as a principle of uncertainty, since it shows that the
functions}
\[
x\rightarrow\int_{0}^{F_{X}(x)}\rho_{Y,Z}\left( x,y\right) dy\text{ and
}y\rightarrow\int_{0}^{Q_{X}(y)}\rho_{Y,Z}\left( x,y\right) dx
\]
\emph{cannot be made simultaneously small.}
\end{remark}
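Steinbrecher's series is straightforward to implement; the Python sketch below is an illustration of ours (the truncation at 60 terms is an arbitrary choice) checking that $\operatorname{erf}(\operatorname{erf}^{-1}(z))\approx z$:

```python
import math

def erfinv_series(z, terms=60):
    # Steinbrecher's series for erf^{-1}(z): c_0 = 1 and the recurrence
    # c_k = sum_{m=0}^{k-1} c_m c_{k-1-m} / ((m+1)(2m+1)).
    c = [1.0]
    for k in range(1, terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    w = math.sqrt(math.pi) / 2.0 * z
    return sum(c[k] * w ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

for z in (0.1, 0.3, 0.5):
    assert abs(math.erf(erfinv_series(z)) - z) < 1e-6
```

The series converges for $|z|<1$; near $|z|=1$ many more terms would be needed.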
\begin{remark}
$($The higher-dimensional analogue of Theorem \ref{ThmYoungNondecr}$).$ \emph{Consider a locally
absolutely continuous} \emph{kernel} $K:\left[ 0,\infty\right)
\times...\times\left[ 0,\infty\right) \longrightarrow\lbrack0,\infty
),\ K=K\left( s_{1},s_{2},...,s_{n}\right) ,$ \emph{and a family} $\phi
_{1},...,\phi_{n}:[a_{i},b_{i}]\rightarrow\mathbb{R}\ $\emph{of nondecreasing
functions defined on subintervals of} $\left[ 0,\infty\right) .$
\emph{Then}
\begin{multline*}
\int_{\phi_{1}\left( a_{1}\right) }^{\phi_{1}\left( b_{1}\right) }\int_{\phi_{2}\left( a_{2}\right) }^{\phi_{2}\left( b_{2}\right) }\cdots\int_{\phi_{n}\left( a_{n}\right) }^{\phi_{n}\left( b_{n}\right) }K\left( s_{1},s_{2},...,s_{n}\right) ds_{n}...ds_{2}ds_{1}\\
\le
{\displaystyle\sum\limits_{i=1}^{n}}
\int_{\phi_{i}\left( a_{i}\right) }^{\phi_{i}\left( b_{i}\right) }\left(
\int_{\phi_{1}\left( a_{1}\right) }^{\phi_{1}\left( s\right) }\cdots
\int_{\phi_{n}\left( a_{n}\right) }^{\phi_{n}\left( s\right) }K\left(
s_{1},...,s_{n}\right) ds_{n}...ds_{i+1}ds_{i-1}...ds_{1}\right) ds.
\end{multline*}
\emph{The proof is based on mathematical induction (which is left to the reader). The above inequality covers the n-variable generalization of Young's inequality as obtained by Oppenheim \cite{O1927} (as well as the main result in \cite{Pa1992}).}
\end{remark}
The following stronger version of Corollary \ref{CorContIncr} incorporates the
Legendre duality.
\begin{theorem}
\label{extYoung}Let $f:\left[ 0,\infty\right) \longrightarrow\left[
0,\infty\right) $ be a continuous nondecreasing function and $\Phi
:[0,\infty)\rightarrow\mathbb{R}$ a convex function whose conjugate is also
defined on $[0,\infty)$. Then for all $b>a\geq0,$ $c\geq f(a),$ and
$\varepsilon>0$ we have
\begin{multline*}
\int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left(
x\right) }K\left( x,y\right) dy\right) dx+\int_{f(a)}^{c}\Phi^{\ast
}\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right)
}K\left( x,y\right) dx\right) dy\\
\geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right)
dydx-(c-f(a))\Phi\left( \varepsilon\right) -(b-a)\Phi^{\ast}\left(
1/\varepsilon\right) .
\end{multline*}
\end{theorem}
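Before turning to the proof, the inequality can be spot-checked numerically in a simple instance (an aside of ours, assuming $K\equiv1$, $f(x)=x$, $a=0$, and the self-conjugate $\Phi(u)=u^{2}/2$; the quadrature helper is our own):

```python
def integral(g, lo, hi, n=50_000):
    # Midpoint-rule quadrature on [lo, hi].
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

Phi = Phistar = lambda u: u * u / 2.0    # Phi(u) = u**2/2 is its own conjugate
for b, c, eps in [(1.0, 1.0, 1.0), (2.0, 2.0, 1.0), (1.5, 0.5, 0.7)]:
    lhs = (integral(lambda x: Phi(eps * x), 0.0, b)
           + integral(lambda y: Phistar(y / eps), 0.0, c))
    rhs = b * c - c * Phi(eps) - b * Phistar(1.0 / eps)
    assert lhs >= rhs - 1e-9
```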
\begin{proof}
According to the Legendre duality,
\begin{equation}
\Phi(\varepsilon u)+\Phi^{\ast}(v/\varepsilon)\geq uv\text{\quad for all
}u,v,\varepsilon\geq0. \label{fi}
\end{equation}
For $u=\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right)
dy$ and $v=1$ we get
\[
\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) +\Phi^{\ast}\left( 1/\varepsilon\right)
\geq\int_{f\left( a\right) }^{f\left( x\right) }K\left( x,y\right) dy,
\]
and by integrating both sides from $a$ to $b$ we obtain the inequality
\[
\int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left(
x\right) }K\left( x,y\right) dy\right) dx+(b-a)\Phi^{\ast}\left(
1/\varepsilon\right) \geq\int_{a}^{b}\left( \int_{f\left( a\right)
}^{f\left( x\right) }K\left( x,y\right) dy\right) dx.
\]
In a similar manner, starting with $u=1$ and $v=\int_{a}^{f_{\sup}^{-1}\left(
y\right) }K\left( x,y\right) dx,$ we arrive first at the inequality
\[
\Phi\left( \varepsilon\right) +\Phi^{\ast}\left( \frac{1}{\varepsilon}
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right)
\geq\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx,
\]
and then to
\begin{multline*}
(c-f(a))\Phi\left( \varepsilon\right) +\int_{f(a)}^{c}\Phi^{\ast}\left(
\frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left(
x,y\right) dx\right) dy\\
\geq\int_{f\left( a\right) }^{c}\left( \int_{a}^{f_{\sup}^{-1}\left(
y\right) }K\left( x,y\right) dx\right) dy.
\end{multline*}
Therefore,
\begin{multline*}
\int_{a}^{b}\Phi\left( \varepsilon\int_{f\left( a\right) }^{f\left(
x\right) }K\left( x,y\right) dy\right) dx+\int_{f(a)}^{c}\Phi^{\ast
}\left( \frac{1}{\varepsilon}\int_{a}^{f_{\sup}^{-1}\left( y\right)
}K\left( x,y\right) dx\right) dy\\
\geq\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left(
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-(b-a)\Phi^{\ast}\left( 1/\varepsilon\right) -(c-f(a))\Phi\left(
\varepsilon\right) .
\end{multline*}
According to Theorem \ref{ThmYoungNondecr},
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
\geq\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx,
\end{multline*}
and the inequality in the statement of Theorem \ref{extYoung} is now clear.
\end{proof}
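As a quick numerical sanity check of Theorem \ref{extYoung} (our own illustration; all concrete choices below are ours, not part of the text), take $K\equiv1$, $a=0$, $f(x)=x$ (so $f_{\sup}^{-1}(y)=y$) and the self-conjugate $\Phi(x)=x^{2}/2$; the two sides of the inequality can then be compared by simple quadrature:

```python
# Sanity check of the extended Young inequality in the illustrative setting
# K = 1, a = 0, f(x) = x, Phi(x) = x^2/2 (whose conjugate is y^2/2).

def midpoint(g, lo, hi, n=2000):
    """Midpoint-rule approximation of the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def check_ext_young(b, c, eps):
    Phi = lambda x: x * x / 2        # convex and self-conjugate
    PhiStar = Phi
    # With K = 1 and f = id, the inner integrals reduce to x and y.
    lhs = (midpoint(lambda x: Phi(eps * x), 0.0, b)
           + midpoint(lambda y: PhiStar(y / eps), 0.0, c))
    rhs = b * c - c * Phi(eps) - b * PhiStar(1.0 / eps)
    return lhs, rhs

for b, c, eps in [(1.0, 1.0, 0.5), (2.0, 3.0, 1.0), (0.5, 4.0, 2.0)]:
    lhs, rhs = check_ext_young(b, c, eps)
    assert lhs >= rhs - 1e-9, (b, c, eps)
```

In this setting the left side is $\varepsilon^{2}b^{3}/6+c^{3}/(6\varepsilon^{2})$ and the right side is $bc-c\varepsilon^{2}/2-b/(2\varepsilon^{2})$, so the inequality is easy to confirm by hand as well.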
In the special case where $K\left( x,y\right) =1,$ $a=f\left( a\right) =0,$
and $\Phi(x)=x^{p}/p$ (for some $p>1$), Theorem \ref{extYoung} yields
the following inequality:
\[
\int_{0}^{b}f^{p}\left( x\right) dx+\int_{0}^{c}\left( f_{\sup}^{-1}\left(
y\right) \right) ^{p}dy\geq pbc-\left( p-1\right) \left( b+c\right)
,\ \text{for every }b,c\geq0.
\]
This remark extends a result due to W. T. Sulaiman \cite{S}.
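The stated special case can be probed numerically; in the sketch below the test functions $f(x)=x$ and $f(x)=x^{2}$ and the quadrature resolution are our own choices:

```python
# Numerical probe of the displayed special case: for continuous
# nondecreasing f with f(0) = 0 and p > 1,
#   int_0^b f(x)^p dx + int_0^c (f_sup^{-1}(y))^p dy >= p*b*c - (p-1)*(b+c).

def midpoint(g, lo, hi, n=2000):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def sulaiman_case(f, finv, p, b, c):
    lhs = (midpoint(lambda x: f(x) ** p, 0.0, b)
           + midpoint(lambda y: finv(y) ** p, 0.0, c))
    rhs = p * b * c - (p - 1) * (b + c)
    return lhs, rhs

for f, finv in [(lambda x: x, lambda y: y),          # f = id
                (lambda x: x * x, lambda y: y ** 0.5)]:  # f = x^2
    for p, b, c in [(2, 1.0, 1.0), (2, 2.0, 3.0), (3, 0.5, 2.0)]:
        lhs, rhs = sulaiman_case(f, finv, p, b, c)
        assert lhs >= rhs - 1e-9, (p, b, c)
```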
We end this section by noticing the following result that complements Theorem
\ref{ThmYoungNondecr}.
\begin{proposition}
\label{PropMerkle}Under the assumptions of Lemma \ref{Lem3},
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
\leq\max\left\{ \int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right)
}K\left( x,y\right) dydx,\int_{a}^{f_{\sup}^{-1}\left( c\right) }
\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\right\} .
\end{multline*}
Assuming $K$ strictly positive almost everywhere, the equality occurs if and
only if $c=f\left( b\right) .$
\end{proposition}
\begin{proof}
If $c<f\left( b\right) $, then from Lemma \ref{Lem3} we infer that
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
=\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{f\left( b\right)
}\left( \int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right)
dx\right) dy\\
-\int_{c}^{f\left( b\right) }\left( \int_{a}^{f_{\sup}^{-1}\left(
y\right) }K\left( x,y\right) dx\right) dy\\
\leq\int_{a}^{b}\int_{f\left( a\right) }^{f\left( b\right) }K\left(
x,y\right) dydx.
\end{multline*}
The other case, $c\geq f\left( b\right) $, can be treated similarly.
\end{proof}
Proposition \ref{PropMerkle} extends a result due to M. J. Merkle \cite{Me}.
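For a concrete feel of Proposition \ref{PropMerkle}, here is a numerical sketch of the special case $K\equiv1$, $a=0$, with our illustrative choice $f(x)=x^{2}$; in this case the two double integrals on the right reduce to $b\,f(b)$ and $c\,f_{\sup}^{-1}(c)$:

```python
# Probe of the Merkle-type bound with K = 1, a = 0, f(x) = x^2:
#   int_0^b f + int_0^c f^{-1} <= max{ b*f(b), c*f^{-1}(c) },
# with equality exactly when c = f(b).

def midpoint(g, lo, hi, n=4000):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def merkle_check(f, finv, b, c):
    lhs = midpoint(f, 0.0, b) + midpoint(finv, 0.0, c)
    rhs = max(b * f(b), finv(c) * c)
    return lhs, rhs

f, finv = (lambda x: x * x), (lambda y: y ** 0.5)
for b, c in [(1.0, 4.0), (2.0, 1.0), (1.0, 1.0)]:  # (1,1) is the equality case
    lhs, rhs = merkle_check(f, finv, b, c)
    assert lhs <= rhs + 1e-5, (b, c)
```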
\section{The precision in Young's inequality}
The main result of this section is as follows:
\begin{theorem}
\label{ThmPrec}Under the assumptions of Lemma \ref{Lem3}, for all $b\geq
a\geq0$ and $c\geq f(a)$,
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\leq
\left\vert \int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{f\left(
b\right) }K\left( x,y\right) dydx\right\vert .
\end{multline*}
Assuming $K$ strictly positive almost everywhere, the equality occurs if and
only if $c=f\left( b\right) $.
\end{theorem}
\begin{proof}
The case where $f\left( a\right) \leq c\leq f\left( b-\right) $ is
illustrated in Figure \ref{fig4}. The left-hand side of the inequality in the
statement of Theorem \ref{ThmPrec} represents the measure of the cross-hatched
curvilinear trapezium, while the right-hand side is the measure of the
rectangle $ABCD$.
\begin{figure}
[h]
\begin{center}
\includegraphics[
height=5.4696cm,
width=6.9787cm
]{fig4.jpg}
\caption{The geometry of the case $f\left( a\right) \leq c\leq f\left(
b-\right) .$}
\label{fig4}
\end{center}
\end{figure}
Therefore,
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx=\int
_{f_{\sup}^{-1}\left( c\right) }^{b}\left( \int_{c}^{f\left( x\right)
}K\left( x,y\right) dy\right) dx\\
\leq\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{f\left( b\right)
}K\left( x,y\right) dydx.
\end{multline*}
The equality holds if and only if $\int_{f_{\sup}^{-1}\left( c\right)
}^{b}\left( \int_{c}^{f\left( x\right) }K\left( x,y\right) dy\right)
dx=0,$ that is, when $f\left( b-\right) =c.$
The case where $c\geq f\left( b+\right) $ is similar to the preceding one;
here we have
\begin{multline*}
\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right) }K\left(
x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left( \int
_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx=\int
_{b}^{f_{\sup}^{-1}\left( c\right) }\left( \int_{f\left( b\right)
}^{f\left( x\right) }K\left( x,y\right) dy\right) dx\\
\leq\int_{b}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( b\right)
}^{c}K\left( x,y\right) dydx.
\end{multline*}
Equality holds if and only if $\int_{b}^{f_{\sup}^{-1}\left( c\right)
}\int_{f\left( b\right) }^{c}K\left( x,y\right) dydx=0,$ so we must have
$f\left( b+\right) =c$.
The case where $c\in\left[ f\left( b-\right) ,f\left( b+\right) \right]
$ is trivial, both sides of our inequality being equal to zero.
\end{proof}
\begin{corollary}
\label{CorMing}\emph{(E. Minguzzi \cite{M}).} If, moreover,
$K\left( x,y\right) =1$ on $\left[ 0,\infty\right) \times\left[
0,\infty\right) $, and $f$ is continuous and increasing, then
\[
\int_{a}^{b}f\left( x\right) dx+\int_{f\left( a\right) }^{c}f^{-1}\left(
y\right) dy\ -bc+af\left( a\right) \leq\left( f^{-1}\left( c\right)
-b\right) \cdot\left( c-f\left( b\right) \right) .
\]
The equality occurs if and only if $c=f\left( b\right) $.
\end{corollary}
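A numerical sketch of Corollary \ref{CorMing} follows (the choice $f(x)=x^{2}$, $a=0$, and the quadrature are ours); note that for increasing $f$ the two factors on the right-hand side always have the same sign, so their product is nonnegative:

```python
# Probe of the Minguzzi-type bound with the illustrative choice f(x) = x^2:
#   int_a^b f + int_{f(a)}^c f^{-1} - b*c + a*f(a)
#       <= (f^{-1}(c) - b) * (c - f(b)),
# with equality exactly when c = f(b).

def midpoint(g, lo, hi, n=4000):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def minguzzi_gap(f, finv, a, b, c):
    lhs = (midpoint(f, a, b) + midpoint(finv, f(a), c)
           - b * c + a * f(a))
    rhs = (finv(c) - b) * (c - f(b))
    return lhs, rhs

f, finv = (lambda x: x * x), (lambda y: y ** 0.5)
for b, c in [(1.0, 4.0), (2.0, 1.0), (1.5, 2.25)]:  # (1.5, 2.25): c = f(b)
    lhs, rhs = minguzzi_gap(f, finv, 0.0, b, c)
    assert lhs <= rhs + 1e-5, (b, c)
```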
More accurate bounds can be given in the presence of convexity.
\begin{corollary}
\label{CorJP}Let $f$ be a nondecreasing continuous function, which is convex
on the interval $\left[ \min\left\{ f_{\sup}^{-1}\left( c\right)
,b\right\} ,\max\left\{ f_{\sup}^{-1}\left( c\right) ,b\right\} \right]
$. Then
\begin{multline*}
i)~\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left(
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
\leq\int_{f_{\sup}^{-1}\left( c\right) }^{b}\int_{c}^{c+\frac{f(b)-c}
{b-f_{\sup}^{-1}\left( c\right) }(x-f_{\sup}^{-1}\left( c\right)
)}K\left( x,y\right) dydx\text{,\quad for every }c\leq f\left( b\right) ;
\end{multline*}
\begin{multline*}
ii)~\int_{a}^{b}\left( \int_{f\left( a\right) }^{f\left( x\right)
}K\left( x,y\right) dy\right) dx+\int_{f\left( a\right) }^{c}\left(
\int_{a}^{f_{\sup}^{-1}\left( y\right) }K\left( x,y\right) dx\right) dy\\
-\int_{a}^{b}\int_{f\left( a\right) }^{c}K\left( x,y\right) dydx\\
\geq\int_{b}^{f_{\sup}^{-1}\left( c\right) }\int_{f\left( b\right)
}^{f(b)+\frac{c-f(b)}{f_{\sup}^{-1}\left( c\right) -b}(x-b)}K\left(
x,y\right) dydx\text{,\quad for every }c\geq f\left( b\right) .
\end{multline*}
If $f$ is concave on the aforementioned interval, then the inequalities
above are reversed.
Assuming $K$ strictly positive almost everywhere, the equality occurs if and
only if $f$ is an affine function or $f\left( b\right) =c$.
\begin{proof}
We restrict ourselves to the case of convex functions, the argument for
concave functions being similar.
The left-hand side of each of the inequalities in our statement
represents the measure of the cross-hatched surface; see Figures 5 and 6.
\medskip
\raisebox{-0cm}{\parbox[b]{5.9638cm}{\begin{center}
\includegraphics[
height=4.9446cm,
width=5.9638cm
]{fig5.jpg}
\\
Figure 5. The geometry of the case $c\leq f\left( b\right) .$
\end{center}}}
\raisebox{-0cm}{\parbox[b]{5.9594cm}{\begin{center}
\includegraphics[
height=4.9776cm,
width=5.9594cm
]{fig6.jpg}
\\
Figure 6. The geometry of the case $c\geq f\left( b\right) .$
\end{center}}}
\qquad\medskip
As the points of the graph of the convex function $f$ (restricted to the
interval with endpoints $b$ and $f_{\sup}^{-1}\left( c\right) $) lie under the
chord joining $\left( b,f\left( b\right) \right) $ and $\left( f_{\sup
}^{-1}\left( c\right) ,c\right) ,$ it follows that this measure is less
than the measure of the enveloping triangle $MNQ$ when $c\leq f(b).$ This
yields $i)$. The assertion $ii)$ follows in a similar way.
\end{proof}
\end{corollary}
Corollary \ref{CorJP} extends a result due to J. Jak\v{s}eti\'{c} and J. E.
Pe\v{c}ari\'{c} \cite{P}. They considered the special case where
$K\left( x,y\right) =1$ on $\left[ 0,\infty\right) \times\left[
0,\infty\right) $ and $f:\left[ 0,\infty\right) \rightarrow\left[
0,\infty\right) $ is increasing and differentiable, with an increasing
derivative on the interval $\left[ \min\left\{ f^{-1}\left( c\right)
,b\right\} ,\max\left\{ f^{-1}\left( c\right) ,b\right\} \right] $ and
$f(0)=0.$ In this case the conclusion of Corollary \ref{CorJP} reads as
follows:
\begin{align*}
i)\text{ }\int_{0}^{b}f\left( x\right) dx+\int_{0}^{c}f^{-1}\left(
y\right) dy\ -bc & \leq\frac{1}{2}\left( f^{-1}\left( c\right)
-b\right) \left( c-f\left( b\right) \right) \ \text{for }c<f\left(
b\right) ;\\
ii)\text{ }\int_{0}^{b}f\left( x\right) dx+\int_{0}^{c}f^{-1}\left(
y\right) dy\ -bc & \geq\frac{1}{2}\left( f^{-1}\left( c\right)
-b\right) \left( c-f\left( b\right) \right) \ \text{for }c>f\left(
b\right) .
\end{align*}
The equality holds if $f\left( b\right) =c$ or $f$ is an affine function.
The inequality sign should be reversed if $f$ has a decreasing derivative on
the interval
\[
\left[ \min\left\{ f^{-1}\left( c\right) ,b\right\} ,\max\left\{
f^{-1}\left( c\right) ,b\right\} \right] .
\]
\section{The connection with $c$-convexity}
Motivated by mass transportation theory, several authors \cite{D1988},
\cite{EN1974} drew a parallel to the classical theory of convex functions by
extending the Legendre duality. Technically, given two compact metric spaces
$X$ and $Y$ and a \emph{cost density} function $c:X\times Y\rightarrow
\mathbb{R}$ (which is supposed to be continuous), we may consider the
following generalization of the notion of convex function:
\begin{definition}
\label{cConv}A function $F:X\rightarrow\mathbb{R}$ is $c$-convex if there
exists a function $G:Y\rightarrow\mathbb{R}$ such that
\begin{equation}
F(x)=\sup_{y\in Y}\left\{ c(x,y)-G(y)\right\} ,\;\text{for all }x\in X.
\label{c-conv}
\end{equation}
\end{definition}
We abbreviate (\ref{c-conv}) by writing $F=G^{c}$. A useful remark is the
equality
\[
F^{cc}=F,
\]
that is,
\begin{equation}
F(x)=\sup_{y\in Y}\left\{ c(x,y)-F^{c}(y)\right\} ,\;\text{for all }x\in X.
\label{cdual}
\end{equation}
The classical notion of convex function corresponds to the case where $X$ is a
compact interval and $c(x,y)=xy$. The details can be found in \cite{NP2006},
pp.~40--42.
Theorem \ref{ThmYoungNondecr} illustrates the theory of $c$-convex functions
for the spaces $X=[a,\infty]$, $Y=[f(a),\infty]$ (the Alexandrov one-point
compactifications of $[a,\infty)$ and $[f(a),\infty)$, respectively), and the
cost function
\begin{equation}
c(x,y)=\int_{a}^{x}\int_{f(a)}^{y}K\left( s,t\right) dtds\text{.}
\label{cKrelation}
\end{equation}
In fact, under the hypotheses of this theorem, the functions
\[
F(x)=\int_{a}^{x}\left( \int_{f\left( a\right) }^{f\left( s\right)
}K\left( s,t\right) dt\right) ds,\quad x\geq a,
\]
and
\[
G(y)=\int_{f\left( a\right) }^{y}\left( \int_{a}^{f_{\sup}^{-1}\left(
t\right) }K\left( s,t\right) ds\right) dt,\quad y\geq f(a),
\]
verify the relations $F^{c}=G$ and $G^{c}=F$ (due to the equality case as
specified in the statement of Theorem \ref{ThmYoungNondecr}), so they are both
$c$-convex.
On the other hand, a simple argument shows that $F$ and $G$ are also convex in
the usual sense.
Let us call the functions $c$ that admit a representation of the form
(\ref{cKrelation}) with $K\in L^{1}(\mathbb{R}\times\mathbb{R})$
\emph{absolutely continuous in the hyperbolic sense}. With this terminology,
Theorem \ref{ThmYoungNondecr} can be rephrased as follows:
\begin{theorem}
\label{ThmcConv}Suppose that $c:[a,b]\times\lbrack A,B]\rightarrow\mathbb{R}$
is an absolutely continuous function in the hyperbolic sense with mixed
derivative $\frac{\partial^{2}c}{\partial x\partial y}\geq0,$ and
$f:[a,b]\rightarrow\lbrack A,B]$ is a nondecreasing function such that
$f(a)=A.$ Then
\begin{equation}
c(x,y)-c(a,f(a))\leq\int_{a}^{x}\frac{\partial c}{\partial t}(t,f(t))dt+\int
_{f(a)}^{y}\frac{\partial c}{\partial s}(f_{\sup}^{-1}(s),s)ds \label{cyineq}
\end{equation}
for all $(x,y)\in\lbrack a,b]\times\lbrack A,B]$.
If $\frac{\partial^{2}c}{\partial x\partial y}>0$ almost everywhere, then
(\ref{cyineq}) becomes an equality if and only if $y\in\left[
f(x-),f(x+)\right] ;$ here we made the convention $f(a-)=f(a)$ and
$f(b+)=f(b).$
\end{theorem}
Necessarily, a function $c$ that is absolutely continuous in the hyperbolic
sense is continuous. It admits partial derivatives of the first order and a
mixed derivative $\frac{\partial^{2}c}{\partial x\partial y}$ almost everywhere.
Besides, the functions $y\rightarrow\frac{\partial c}{\partial x}(x,y)$ and
$x\rightarrow\frac{\partial c}{\partial y}(x,y)$ are defined everywhere in
their interval of definition and represent absolutely continuous functions;
they are also nondecreasing provided that $\frac{\partial^{2}c}{\partial
x\partial y}\geq0$ almost everywhere.
A special case of Theorem \ref{ThmcConv} was proved by Zs. P\'{a}les
\cite{Pa1990}, \cite{Pa1992} (assuming $c:[a,b]\times\lbrack A,B]\rightarrow
\mathbb{R}$ a continuously differentiable function with nondecreasing
derivatives $y\rightarrow\frac{\partial c}{\partial x}(x,y)$ and
$x\rightarrow\frac{\partial c}{\partial y}(x,y),$ and $f:[a,b]\rightarrow
\lbrack A,B]$ an increasing homeomorphism). An example which escapes his
result but is covered by Theorem \ref{ThmcConv} is offered by the function
\[
c(x,y)=\int_{0}^{x}\left\{ \frac{1}{s}\right\} ds\int_{0}^{y}\left\{
\frac{1}{t}\right\} dt,\,\quad x,y\geq0,
\]
where $\left\{ \frac{1}{s}\right\} $ denotes the fractional part of
$\frac{1}{s}$ if $s>0,$ and $\left\{ \frac{1}{s}\right\} =0$ if $s=0$.
According to Theorem \ref{ThmcConv},
\begin{multline*}
\int_{0}^{x}\left\{ \frac{1}{s}\right\} ds\int_{0}^{y}\left\{ \frac{1}
{t}\right\} dt\\
\leq\int_{0}^{x}\left( \left\{ \frac{1}{s}\right\} \int_{0}^{f(s)}\left\{
\frac{1}{t}\right\} dt\right) ds+\int_{0}^{y}\left( \left\{ \frac{1}
{t}\right\} \int_{0}^{f_{\sup}^{-1}(t)}\left\{ \frac{1}{s}\right\}
ds\right) dt,
\end{multline*}
for every nondecreasing function $f:[0,\infty)\rightarrow\lbrack0,\infty)$
such that $f(0)=0.$
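This example can also be tested numerically; in the sketch below the specific choice $f(s)=s^{2}$ (so $f_{\sup}^{-1}(t)=\sqrt{t}$), the sample points, and the quadrature resolution are all ours:

```python
# Numerical probe of the fractional-part example with f(s) = s^2.

def frac_inv(s):
    """The fractional part {1/s}, with the convention {1/0} = 0."""
    return (1.0 / s) % 1.0 if s > 0 else 0.0

def midpoint(g, lo, hi, n=400):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def F(x):
    """F(x) = int_0^x {1/s} ds, approximated by the midpoint rule."""
    return midpoint(frac_inv, 0.0, x)

def check_frac(x, y):
    lhs = F(x) * F(y)
    rhs = (midpoint(lambda s: frac_inv(s) * F(s * s), 0.0, x)
           + midpoint(lambda t: frac_inv(t) * F(t ** 0.5), 0.0, y))
    return lhs, rhs

# Points chosen away from the equality set y = x^2.
for x, y in [(0.5, 2.0), (2.0, 0.25)]:
    lhs, rhs = check_frac(x, y)
    assert lhs <= rhs + 1e-3, (x, y)
```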
\medskip
\noindent\textbf{Acknowledgement.} The authors were supported by CNCSIS Grant
PN2 ID\_$420.$
% Source metadata: arXiv:1106.5444, "An Extension of Young's Inequality" (math.CA).
% Source: arXiv:2010.00315, "Exact hyperplane covers for subsets of the hypercube".
\begin{abstract}
Alon and F\"{u}redi (1993) showed that the number of hyperplanes required to cover $\{0,1\}^n\setminus \{0\}$ without covering $0$ is $n$. We initiate the study of such exact hyperplane covers of the hypercube for other subsets of the hypercube. In particular, we provide exact solutions for covering $\{0,1\}^n$ while missing up to four points and give asymptotic bounds in the general case. Several interesting questions are left open.
\end{abstract}
\section{Introduction}
A vector $v\in \mathbb{R}^n$ and a scalar $\alpha\in \mathbb{R}$ determine the hyperplane
\[
\{x\in \mathbb{R}^n:\langle v,x\rangle \coloneqq v_1x_1+\dots+v_nx_n=\alpha\}
\]
in $\mathbb{R}^n$. How many hyperplanes are needed to cover $\{0,1\}^n$? Only two are required; for instance, $\{x:x_1=0\}$ and $\{x:x_1=1\}$ will do. What happens, however, if $0\in \mathbb{R}^n$ is not allowed on any of the hyperplanes? We can `exactly' cover $\{0,1\}^n\setminus \{0\}$ with $n$ hyperplanes: for example, the collections $\{\{x:x_i=1\}:i\in [n]\}$ or $\{\{x:\sum_{i=1}^nx_i=j\}:j \in [n]\}$ can be used, where $[n]:=\{1,2,\ldots,n\}$.
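Both cover constructions are easy to verify mechanically for small $n$; in this brute-force sketch, the encoding of a hyperplane as an integer coefficient vector together with a right-hand side is our own:

```python
from itertools import product

def covered(point, hyperplanes):
    """True iff the point lies on at least one hyperplane (coeffs, alpha)."""
    return any(sum(v * x for v, x in zip(coeffs, point)) == alpha
               for coeffs, alpha in hyperplanes)

def exact_cover_of_punctured_cube(n, hyperplanes):
    """True iff the union of the hyperplanes meets {0,1}^n exactly in the
    nonzero points, i.e. every point except the origin is covered."""
    return all(covered(p, hyperplanes) == any(p)
               for p in product((0, 1), repeat=n))

for n in range(1, 6):
    # {x : x_i = 1} for each coordinate i
    coord = [([1 if j == i else 0 for j in range(n)], 1) for i in range(n)]
    # {x : x_1 + ... + x_n = j} for j = 1, ..., n
    layer = [([1] * n, j) for j in range(1, n + 1)]
    assert exact_cover_of_punctured_cube(n, coord)
    assert exact_cover_of_punctured_cube(n, layer)
```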
Alon and F\"{u}redi \cite{AlonFuredi} showed that in fact $n$ hyperplanes are always necessary.
Recently, a variation was studied by Clifton and Huang \cite{CliftonHuang}, in which they require that each point from $\{0,1\}^n\setminus \{0\}$ is covered at least $k$ times for some $k\in \mathbb{N}$ (while $0$ is never covered).
Another natural generalisation is to add more than just $0$ to the set of points we wish to avoid in the cover. For $B\subseteq \{0,1\}^n$, an \emph{exact cover} of $B$ is a set of hyperplanes whose union intersects $\{0,1\}^n$ exactly in~$B$ (so points from $\{0,1\}^n\setminus B$ are not covered). Let $\ec(B)$ denote the \emph{exact cover number} of $B$, i.e., the minimum size of an exact cover of $B$. We will usually write $B$ in the form $\{0,1\}^n \setminus S$ for some subset $S \subseteq \{0,1\}^n$. In particular, the result of Alon and F\"{u}redi \cite{AlonFuredi} states that $\ec(\{0,1\}^n\setminus \{0\})=n$.
We first determine what happens if we remove up to four points.
\begin{theorem}
\label{thm:uptofour}
Let $S\subseteq\{0,1\}^n$.
\begin{itemize}
\item If $|S|\in \{2,3\}$, then $\ec(\{0,1\}^n\setminus S)=n-1$.
\item If $|S|=4$, then $\ec(\{0,1\}^n\setminus S)=n-1$ if there is a hyperplane $Q$ with $|Q\cap S|=3$ and $\ec(\{0,1\}^n\setminus S)=n-2$ otherwise.
\end{itemize}
\end{theorem}
The upper bounds are shown by iteratively reducing the dimension of the problem by one using a single `merge coordinates' hyperplane; this allows us to reduce the question to the case $n\leq 7$, which we can handle exhaustively.
Since the number of required hyperplanes seems to decrease, a natural question is whether this pattern continues. For $n\in \mathbb{N}$ and $k\in [2^n]$, we also introduce the exact cover numbers
\begin{align*}
\ec(n,k)&=\max\{\ec(\{0,1\}^n\setminus S):S\subseteq\{0,1\}^n,~ |S|=k\},\\
\ec(n)&=\max\{\ec(B):B\subseteq\{0,1\}^n\}.
\end{align*}
Our main result concerns the asymptotics of $\ec(n)$ and implies that $\ec(n,k)$ can be much larger than $n$.
\begin{theorem}
\label{thm:ec:arbitrary}
For any positive integer $n$, $2^{n-2}/n^2\leq \ec(n)\leq 2^{n+1}/n$.
\end{theorem}
The lower bound uses a random construction and the upper bound uses the fact that we can efficiently cover the hypercube with Hamming spheres.
We leave open whether $\ec(n,k)\leq n$ when $k$ is sufficiently large with respect to $n$, but can show that $\ec(n,k)$ is always at most a constant (depending on $k$) away from $n$.
\begin{theorem}
\label{thm:ec:fixed_size}
For any positive integer $k$,
\[n-\log_2(k) \leq \ec(n,k) \leq n-2^k+\ec(2^k,k).\]
\end{theorem}
The proof of this theorem uses the same techniques as the proof of Theorem \ref{thm:uptofour}.
The problem of determining the asymptotics of $\ec(n)$ was also suggested by F\"{u}redi at Alon's birthday conference in 2016.
\section{Covering all but up to four points}
\label{sec:up_to_four}
In this section, we determine $\ec(\{0,1\}^n\setminus S)$ for subsets $S$ of size 2, 3 and 4. For the lower bounds, we use the following result of Alon and F\"{u}redi \cite{AlonFuredi}.
\begin{theorem}[Corollary 1 in \cite{AlonFuredi}]
\label{thm:ATcor}
If $n\geq m\geq 1$, then $m$ hyperplanes that do not cover all vertices of $\{0,1\}^n$ miss at least $2^{n-m}$ vertices.
\end{theorem}
For the upper bounds, it suffices to give an explicit construction of a collection of hyperplanes that exactly covers $\{0,1\}^n\setminus S$, for every subset $S$ of size 2, 3 or 4.
We split the proof of Theorem \ref{thm:uptofour} into two cases, the case where $|S| \in \{2, 3\}$ and the case where $|S| = 4$.
\begin{lemma}
\label{lem:cov:23}
Let $n \geq 2$ and $S\subseteq\{0,1\}^n$ with $|S|\in \{2,3\}$. Then $\ec(\{0,1\}^n\setminus S)=n-1$.
\end{lemma}
\begin{proof}
For $n=2$ the statement is true, therefore let $n \geq 3$ and $S\subseteq\{0,1\}^n$ with $|S|\in \{2,3\}$. We first prove the lower bound $\ec(\{0,1\}^n\setminus S) \geq n-1$; this follows from applying the case of $m=n-2$ in Theorem \ref{thm:ATcor}. Indeed, this shows that any $n-2$ hyperplanes that do not cover all of $\{0,1\}^n$ miss at least $4$ vertices, and hence a minimum of $n-1$ hyperplanes are required to miss $2$ or $3$ vertices.
For the upper bound, note that we may assume by vertex transitivity that $(0,\dots,0)\in S$. Consider first the case $|S|=2$. By relabelling the indices, we may assume the second vector $u$ in $S$ satisfies $\{i\in[n]:u_i=1\}=\{1,\dots,\ell\}$ for some $\ell\in \mathbb{N}$. We cover $\{0,1\}^n\setminus S$ by the collection of $n-1$ hyperplanes
\[
\{\{x:x_i=1\}:i\in \{\ell+1,\dots,n\}\}\cup \left\{\left\{x:x_1+\dots+x_\ell=j\right\}:j\in [\ell-1]\right\},
\]
noting that none of these hyperplanes contains an element of $S$.
Now consider the case $|S|=3$. We may assume the second and third vectors in $S$ correspond
to the subsets $\{1,\dots,a+b\}$ and $\{1,\dots,a\}\cup\{a+b+1,\dots,a+b+c\}$ for some $a,b,c\in \mathbb{Z}_{\geq 0}$ with $a+b\geq 1$ and $c\geq 1$. We first add the $n-(a+b+c)$ hyperplanes of the form $\{x:x_i=1\}$ for $i\in \{a+b+c+1,\dots,n\}$.
For $x\in S$, we have
\begin{align*}
&x_1+\dots+x_a\in \{0,a\},\\
&x_{a+1}+\dots+x_{a+b}\in \{0,b\},\\
&x_{a+b+1}+\dots+x_{a+b+c}\in \{0,c\}.
\end{align*}
If $a\geq 1$, we add the $a-1$ hyperplanes $\{x:x_1+\dots+x_a=i\}$ for $i\in [a-1]$. Analogously, we add the $b-1$ hyperplanes $\{x: x_{a+1}+\ldots+x_{a+b}=i\}$ for $i\in[b-1]$ if $b\geq 1$, and the $c-1$ hyperplanes $\{x: x_{a+b+1}+\ldots+x_{a+b+c}=i\}$ for $i\in [c-1]$. The only points of $\{0,1\}^n\setminus S$ that are yet to be covered satisfy the equations above and also satisfy $x_i=0$ for $i>a+b+c$.
Suppose first that $a,b\geq 1$. In this case we have added $n-3$ hyperplanes so far. The problem has effectively been reduced to covering $\{0,1\}^3$ with three missing points $(0,0,0), (1,1,0)\text{ and } (1,0,1)$ using $2$ hyperplanes. Indeed, we may add the following two hyperplanes to our collection in order to exactly cover $\{0,1\}^n\setminus S$:
\begin{align*}
&\left\{x:\frac{x_1+\dots+x_a}a+\frac{x_{a+1}+\dots+x_{a+b}}b+\frac{x_{a+b+1}+\dots+x_{a+b+c}}c=1\right\},\\
&\left\{x:\frac{x_{a+1}+\dots+x_{a+b}}b+\frac{x_{a+b+1}+\dots+x_{a+b+c}}c=2\right\}.
\end{align*}
Suppose now that $a=0$ or $b=0$. Since $a+b\geq 1$ and $c\geq 1$, we have used $n-2$ hyperplanes so far. If $a=0$, we may add the hyperplane
\[
\left\{x:\frac{x_{1}+\dots+x_{b}}b+\frac{x_{b+1}+\dots+x_{b+c}}c=2\right\}
\]
and, if $b=0$, we add
\[
\left\{x:-\frac{x_1+\dots+x_a}a+\frac{x_{a+1}+\dots+x_{a+c}}c=1\right\}.
\]
In either case, the resulting collection covers $\{0,1\}^n\setminus S$ without covering any point in $S$.
\end{proof}
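The explicit construction for $|S|=2$ from the proof above can be checked by brute force; the $0$-based indexing conventions in this sketch are ours:

```python
from itertools import product

def check_two_point_cover(n, ell):
    """Verify the n-1 hyperplanes from the proof for S = {0, u}, where u
    is 1 exactly on the first ell coordinates (1 <= ell <= n)."""
    u = tuple(1 if i < ell else 0 for i in range(n))
    S = {tuple([0] * n), u}
    hyperplanes = (
        # {x : x_i = 1} for i = ell+1, ..., n
        [(tuple(1 if j == i else 0 for j in range(n)), 1)
         for i in range(ell, n)]
        # {x : x_1 + ... + x_ell = j} for j = 1, ..., ell-1
        + [(tuple(1 if j < ell else 0 for j in range(n)), j)
           for j in range(1, ell)])
    assert len(hyperplanes) == n - 1
    for p in product((0, 1), repeat=n):
        hit = any(sum(v * x for v, x in zip(coeffs, p)) == alpha
                  for coeffs, alpha in hyperplanes)
        assert hit == (p not in S), (p, n, ell)
    return True

for n in range(2, 7):
    for ell in range(1, n + 1):
        assert check_two_point_cover(n, ell)
```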
For the case of four missing points, we always need at least $n-2$ hyperplanes by Theorem \ref{thm:ATcor}. For $n=3$, we may need either $1$ or $2$ hyperplanes. For example, we may exactly cover $\{0,1\}^3\setminus (\{0\}\times \{0,1\}^2)$ by the single hyperplane $\{x:x_1=1\}$, but if $S$ does not lie on a hyperplane then we need two hyperplanes.
The set $\{0\}\times \{0,1\}^2$ has the special property that there is no hyperplane that covers three of its points without covering the fourth.
It turns out this condition is exactly what decides how many hyperplanes are required when removing four points.
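This property admits a short verification (a sketch for the face $\{0\}\times\{0,1\}^{2}$; every $2$-dimensional face reduces to this one after translating and permuting coordinates): each point of the face is an affine combination of the other three, so any hyperplane through three of the points must also contain the fourth.

```python
from itertools import combinations, permutations

# The 2-face {0} x {0,1}^2 of the cube {0,1}^3.
face = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]

for T in combinations(face, 3):
    (p4,) = (p for p in face if p not in T)
    # p4 equals t_i + t_j - t_k for some ordering of T; the coefficients
    # 1, 1, -1 sum to 1, so p4 is an affine combination of T.  Hence any
    # hyperplane <v, x> = alpha containing T satisfies <v, p4> = alpha too.
    assert any(tuple(ti + tj - tk for ti, tj, tk in zip(a, b, c)) == p4
               for a, b, c in permutations(T))
```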
\begin{lemma}
\label{lem:cov:4}
Let $S\subseteq\{0,1\}^n$ with $|S|=4$. Then $\ec(\{0,1\}^n\setminus S)=n-1$ if there is a hyperplane $Q$ with $|Q\cap S|=3$ and $\ec(\{0,1\}^n\setminus S)=n-2$ otherwise.
\end{lemma}
\begin{proof}
We know that $\ec(\{0,1\}^n\setminus S)\geq n-2$ from Theorem \ref{thm:ATcor}. If there is a hyperplane $Q$ intersecting $S$ in exactly three points, then $\ec(\{0,1\}^n\setminus S)\geq n-1$. Indeed, by vertex transitivity, we may assume that $0$ is the point of $S$ uncovered by $Q$. Any exact cover of $\{0,1\}^n\setminus S$ can be extended to an exact cover of $\{0,1\}^n\setminus \{0\}$ by adding the hyperplane $Q$ to the collection.
We prove the claimed upper bounds by induction on $n$, handling the case $n\leq 7$ by computer search.
Again, we may assume that $0\in S$. Let $u,v,w$ denote the other three vectors in $S$. For any $i$ with $u_i=v_i=w_i=0$, we can use a hyperplane of the form $\{x:x_i=1\}$ to reduce the covering problem to one of a lower dimension. (Note that dropping the coordinate $i$ in this case does not change whether three points in $S$ can be covered without covering the fourth.) Hence we may assume by induction that no such $i$ exists.
After possibly permuting coordinates, we assume that $u_i=v_i=w_i=1$ on the first $a$ coordinates, $u_i=v_i=1$ and $w_i=0$ on the $b$ coordinates after that, and so on, i.e., sorted by decreasing Hamming weight and lexicographically within the same weight. In other words, our four vectors take the form
\begin{equation}
\label{eq:VennForm}
\begin{pmatrix}
0\\
u\\
v\\
w\\
\end{pmatrix}=
\begin{pmatrix}
0& 0 & 0& 0& 0 & 0& 0\\
1 & 1 & 1& 0 &1 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 &1 & 0\\
1 & 0 & 1 & 1 & 0 & 0 & 1\\
\end{pmatrix}
,\end{equation}
where each column may be replaced with $0$ or more columns of its type. Since $n>7$, by the pigeonhole principle one of the column types must appear at least twice. We will show how to handle the case where this is the first column (i.e.\ $a\geq 2$); the other cases are analogous.
Our collection of hyperplanes will contain the hyperplanes
\begin{equation}
\label{eq:ec:a}
\{\{x:x_1+\dots+x_a=i\}:i\in [a-1]\}.
\end{equation}
The only points $x$ which have yet to be covered have the property that $x_i$ takes the same value in $\{0,1\}$ for all $i\in [a]$.
We now proceed similarly to the proof of Lemma \ref{lem:cov:23}. Informally, we wish to `merge' the first $a$ coordinates and then apply the induction hypothesis. For each $s\in S$, we define $\pi(s)=(s_{a},\dots,s_n)$. Let $\pi(S)=\{\pi(s):s\in S\}$. Then $|S|=|\pi(S)|=4$.
Any hyperplane
\[
P=\{y:v_1y_1+\dots+v_{n-a+1}y_{n-a+1}=\alpha\}
\]
in $\{0,1\}^{n-a+1}$ can be used to define a hyperplane
\[
L(P)= \left\{x:v_1\frac{x_1+\dots+x_a}a+v_2x_{a+1}+\dots +v_{n-a+1} x_n=\alpha\right\}
\]
in $\{0,1\}^n$. For all $x\in \{0,1\}^n$ with $\sum_{i=1}^a x_i\in \{0,a\}$, we find that $\pi(x)\in P$ if and only if $x\in L(P)$. This shows that if $P_1,\dots,P_M$ form an exact cover for $\{0,1\}^{n-a+1}\setminus \pi(S)$, then $L(P_1),\dots,L(P_M)$, together with the hyperplanes from $(\ref{eq:ec:a})$, form an exact cover for $\{0,1\}^n\setminus S$. This proves
\[
\ec(\{0,1\}^n\setminus S)\leq \ec(\{0,1\}^{n-a+1}\setminus \pi(S))+a-1.
\]
Since there is a hyperplane covering three points in $S$ without covering the fourth if and only if this is the case for $\pi(S)$, we find the claimed upper bounds by induction.
Observe that the above reduction also works in the case $n\leq 7$ whenever there are at least two columns of the same type in (\ref{eq:VennForm}). Thus, the computer verification is needed only in the case when each column in (\ref{eq:VennForm}) appears at most once. The code used to check the small cases is attached to the arXiv submission at \url{https://arxiv.org/abs/2010.00315}.
\end{proof}
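To make the merging step concrete, here is a worked instance with data chosen by us (not taken from the text): $n=4$, $a=2$ and $S=\{0000,1111,1110,1101\}$. One hyperplane $x_{1}+x_{2}=1$ of type (\ref{eq:ec:a}) together with two lifted hyperplanes $L(P)$ (with denominators cleared) exactly covers $\{0,1\}^{4}\setminus S$ using $n-1=3$ hyperplanes; this matches the lemma, since the hyperplane $x_{1}=1$ covers three points of $S$ while missing $0$.

```python
from itertools import product

# Illustrative instance with n = 4, a = 2 (our own choices).
S = {(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 0, 1)}

hyperplanes = [
    ((1, 1, 0, 0), 1),    # x1 + x2 = 1, the hyperplane of type (eq:ec:a)
    ((-3, -3, 4, 4), 4),  # lift of 2*y2 + 2*y3 - 3*y1 = 2, scaled by 2
    ((1, 1, 1, 1), 2),    # lift of 2*y1 + y2 + y3 = 2
]

for p in product((0, 1), repeat=4):
    hit = any(sum(v * x for v, x in zip(coeffs, p)) == alpha
              for coeffs, alpha in hyperplanes)
    assert hit == (p not in S), p
```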
Another natural variant on the original Alon-F\"{u}redi problem is to ask for the exact cover number of a single layer of a hypercube without one point. It turns out this can be easily solved by translating it to the original problem.
\begin{proposition}
Let $n\in \mathbb{N}$ and $i\in \{0,\dots,n\}$.
Let $B$ be obtained by removing a single point from the $i$-th layer $\{x\in \{0,1\}^n:x_1+ \ldots + x_n=i\}$. Then $\ec(B)=\min\{i,n-i\}$.
\end{proposition}
\begin{proof}
We may assume that $i\leq n/2$ and that $b=(1,\dots,1,0,\dots,0)$ is the missing point. The upper bound follows by taking the hyperplanes
\[
\{\{x:x_1+\dots+x_i=j\}:j\in \{0,\dots,i-1\}\}.
\]
For the lower bound, we claim that we may find a cube of dimension $i$ within the $i$-th layer for which $b$ plays the role of the origin. Indeed, consider the affine map
\[
\iota:\{0,1\}^i\to \{0,1\}^n: x \mapsto (1-x_1,1-x_2,\dots,1-x_i,0,\dots,0,x_i,x_{i-1},\dots,x_1).
\]
That is, we view the point $b$ as the origin and take the directions of the form $(-1,0,\dots,0,1)$, $(0,-1,0,\dots,0,1,0)$, etcetera, as the axes of the cube. Now $B \cap \iota(\{0,1\}^i) = \iota(\{0,1\}^i\setminus\{0\})$, and hence we may convert any exact cover for $B$ to an exact cover for $\{0,1\}^i\setminus\{0\}$. The lower bound follows from the result of Alon and F\"{u}redi \cite{AlonFuredi}.
\end{proof}
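The upper-bound construction can be checked mechanically on the points of the layer itself (a sketch with our own indexing): the hyperplanes $x_{1}+\dots+x_{i}=j$ for $j=0,\dots,i-1$ hit every point of the $i$-th layer except $b$, and none of them contains $b$.

```python
from itertools import product

def check_layer(n, i):
    """Check that the i hyperplanes x_1 + ... + x_i = j (j = 0, ..., i-1)
    hit every point of the i-th layer except b = (1,...,1,0,...,0)."""
    b = tuple(1 if j < i else 0 for j in range(n))
    for p in product((0, 1), repeat=n):
        if sum(p) != i:
            continue                  # restrict attention to the layer
        hit = sum(p[:i]) < i          # lies on some x_1+...+x_i = j, j < i
        # For a layer point, sum(p[:i]) = i happens exactly when p = b.
        assert hit == (p != b), (p, n, i)
    return True

for n in range(2, 8):
    for i in range(n // 2 + 1):       # i <= n/2, as assumed in the proof
        assert check_layer(n, i)
```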
\section{Asymptotics}
\label{sec:ec:asympt}
We first consider the asymptotics of $\ec(n,k)$ when $k$ is held fixed.
For the upper bound, we prove the following lemma.
\begin{lemma}
\label{lem:ec:k}
For all $k\in \mathbb{N}$ and $n\geq 2^{k-1}$, $\ec(n,k)\leq 1+\ec(n-1,k)$.
\end{lemma}
\begin{proof}
Fix $k\in \mathbb{N}$, $n\geq 2^{k-1}$ and a subset $S\subseteq \{0,1\}^n$ of size $|S|=k$. For any $i\in [n]$, let $S_{-i}\subseteq\{0,1\}^{n-1}$ be obtained from $S$ by deleting coordinate $i$ from each element of $S$.
We claim that there exists an $i\in [n]$ such that $|S_{-i}|=k$ and
\begin{equation}
\label{eq:ec:i}
\ec(\{0,1\}^n\setminus S)\leq 1+\ec(\{0,1\}^{n-1}\setminus S_{-i}).
\end{equation}
The lemma follows immediately from this claim.
By vertex transitivity, we may assume that $0\in S$.
Suppose first that there exists $i\in[n]$ for which $s_i=0$ for all $s\in S$. Then $|S_{-i}|=k$. From an exact cover for $\{0,1\}^{n-1}\setminus S_{-i}$, we may obtain an exact cover for $\{x \in \{0,1\}^n\setminus S:x_i=0\}$. Combining with the hyperplane $\{x:x_i=1\}$, this gives an exact cover for $\{0,1\}^n\setminus S$. This proves (\ref{eq:ec:i}).
We henceforth assume that $0\in S$ and that the remaining $k-1$ elements of~$S$ cannot all be $0$ on the same coordinate. Hence there are at most $2^{k-1}-1$ possible values that $(s_i:s\in S)$ can take for $i\in [n]$. Since $n\geq 2^{k-1}$, by the pigeonhole principle, there must exist coordinates $1\leq i<j\leq n$ with $s_i=s_j$ for all $s\in S$. This implies that $|S_{-i}|=|S|=k$. We now show (\ref{eq:ec:i}) is satisfied. After permuting coordinates, we may assume that $(i,j)=(1,2)$. An exact cover for $\{0,1\}^{n-1}\setminus S_{-1}$ is converted to an exact cover for $\{0,1\}^n\setminus S$ as in the proof of Lemma \ref{lem:cov:4}: any hyperplane of the form
\[
P=\{y:v_1y_1+\dots +v_{n-1}y_{n-1}=\alpha\}
\]
is converted to
\[
L(P)=\left\{x:v_1\frac{x_1+x_2}2+v_2x_3+\dots +v_{n-1}x_n=\alpha\right\},
\]
and we add the hyperplane $\{x:x_1+x_2=1\}$ to the adjusted collection.
\end{proof}
It is now easy to prove that $\ec(n,k)=n+\Theta_k(1)$.
\begin{proof}[Proof of Theorem \ref{thm:ec:fixed_size}]
Let $k\in \mathbb{N}$. We need to prove that for all $n\geq 2^k$,
\[
n-\log_2(k)\leq \ec(n,k)\leq n-2^k+\ec(2^k,k).
\]
The upper bound is vacuous for $n=2^k$ and follows from $n-2^k$ applications of Lemma \ref{lem:ec:k} for $n>2^k$.
The lower bound follows from Theorem \ref{thm:ATcor}: if $n-\ell$ hyperplanes cover all but $k$ vertices, then $k\geq 2^\ell$,
and hence $n-\ell\geq n-\log_2(k)$.
(In fact, this shows $\ec(\{0,1\}^n\setminus S)\geq n-\log_2(k)$ for each subset $S\subseteq\{0,1\}^n$ of size $k$.)
\end{proof}
We now turn to the problem of comparing exact cover numbers for sets $S$ of different sizes. We use two auxiliary lemmas.
For the lower bound, we use a probabilistic argument, for which we need to know the approximate number of intersection patterns of the hypercube.
An \textit{intersection pattern} of $\{0,1\}^n$ is a non-empty subset $P\subseteq \{0,1\}^n$ for which there exists a hyperplane $H$ with $H\cap \{0,1\}^n=P$.
\begin{lemma}
\label{lem:q_n_num_int_patterns}
$\{0,1\}^n$ has at most $2^{n^2}$ possible intersection patterns.
\end{lemma}
\begin{proof}
We will associate each intersection pattern with a unique element from $(\{0,1\}^n)^n$.
Let $P\subseteq \{0,1\}^n$ be an intersection pattern with $P= H\cap \{0,1\}^n$ for $H$ a hyperplane. Then $|P|<2^n$.
Let $x \in P$ be such that $\sum_{i=1}^nx_i2^i$ is minimal.
Let $\oplus$ denote coordinate-wise addition modulo $2$ and write $x\oplus P=\{x\oplus p:p\in P\}\subseteq \{0,1\}^n$.
Note that $0\in x\oplus P$ since $x\in P$, and that $x\oplus P$ is the intersection of a linear subspace of dimension $n-1$ with $\{0,1\}^n$. (The linear subspace can be obtained from $H$ by a series of reflections.)
We greedily find $0\leq k\leq n-1$ linearly independent vectors $v_1,\dots,v_k\in x\oplus P$ whose linear span intersects $\{0,1\}^n$ in $x \oplus P$. We label $P$ with the $n$-tuple $(x,v_1,\dots,v_k,0,\dots,0)$, where we added $n-1-k$ copies of the vector $0$ at the end of the tuple.
This associates each intersection pattern to a unique element from $(\{0,1\}^n)^n$.
\end{proof}
The above proof is rather crude, but in fact not far from the truth: the number of possible intersection patterns is $2^{(1+o(1))n^2}$ (see e.g. \cite[Lemma 4.3]{Baldi}).
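The bound of Lemma \ref{lem:q_n_num_int_patterns} can be sanity-checked by brute force in small dimensions, using the observation that a nonempty $P\subseteq\{0,1\}^n$ is an intersection pattern if and only if its affine hull is a proper affine subspace of $\mathbb{R}^n$ meeting the cube in exactly $P$ (a proper affine subspace extends to a hyperplane avoiding any finite set of points outside it). The following Python sketch (the helper names are ours, not from the paper) enumerates all patterns for $n=2,3$, finding $10$ and $56$ patterns respectively, well below $2^{n^2}$.

```python
import itertools
import numpy as np

def is_intersection_pattern(P, n):
    """P (a set of 0/1 tuples) equals H ∩ {0,1}^n for some hyperplane H iff
    aff(P) is a proper affine subspace of R^n and meets the cube exactly in P."""
    pts = np.array(sorted(P))
    base, diffs = pts[0], pts - pts[0]
    rk = np.linalg.matrix_rank(diffs)
    if rk == n:          # aff(P) is all of R^n: no hyperplane can contain P
        return False
    for q in itertools.product((0, 1), repeat=n):
        if q in P:
            continue
        # q lies in aff(P) iff appending q - base does not increase the rank;
        # in that case every hyperplane containing P also picks up q.
        if np.linalg.matrix_rank(np.vstack([diffs, np.array(q) - base])) == rk:
            return False
    return True

def count_patterns(n):
    cube = list(itertools.product((0, 1), repeat=n))
    return sum(1 for r in range(1, 2 ** n + 1)
                 for P in itertools.combinations(cube, r)
                 if is_intersection_pattern(set(P), n))
```

For $n=3$ the $56$ patterns decompose as $8$ singletons, $28$ pairs, the $8$ punctured-neighbourhood triples $S(v)$, and the $12$ coplanar four-point sets ($6$ faces and $6$ diagonal rectangles).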
We also use an auxiliary result for the upper bound. The \textit{total domination number} of a graph $G$ is the minimum cardinality of a subset $D\subseteq V(G)$ such that each $v\in V(G)$ has a neighbour in $D$.
\begin{lemma}[Theorem 5.2 in \cite{totaldomhypercube}]
\label{lem:total_dom_num}
The total domination number of the hypercube is at most $2^{n+1}/n$ for any $n \geq 1$.
\end{lemma}
Note that this bound must be close to tight since the hypercube is a regular graph of degree $n$, so any total dominating set has cardinality at least $2^n/n$.
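As a small illustration of Lemma \ref{lem:total_dom_num} and of the trivial lower bound $2^n/n$, the total domination number of $Q_n$ can be computed exhaustively for very small $n$; this is only a brute-force sketch (the function name is ours), and it gives the values $2$ and $4$ for $Q_2$ and $Q_3$.

```python
from itertools import combinations, product

def total_domination_number(n):
    """Minimum size of a set D such that every vertex of the hypercube graph
    Q_n has a neighbour in D (exhaustive search; only feasible for small n)."""
    verts = list(product((0, 1), repeat=n))
    # neighbours of v: flip exactly one coordinate
    nbrs = {v: {tuple(b ^ (j == i) for j, b in enumerate(v)) for i in range(n)}
            for v in verts}
    for k in range(1, len(verts) + 1):
        for D in combinations(verts, k):
            Dset = set(D)
            if all(nbrs[v] & Dset for v in verts):
                return k
```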
We are now ready to prove $2^{n-2}/n^2\leq \ec(n) \leq 2^{n+1}/n$.
\begin{proof}[Proof of Theorem \ref{thm:ec:arbitrary}]
For the lower bound, we need to exhibit a set of the form $\{0,1\}^n\setminus S$ that is difficult to cover exactly. We will find a subset $S$ for which all ``large'' intersection patterns have a non-empty intersection with $S$. This means that to cover $\{0,1\}^n\setminus S$, we can only use hyperplanes with ``small'' intersection patterns.
We take a subset $S\subseteq \{0,1\}^n$ at random by including each point independently with probability $1/2$. Note that the lower bound is trivial for $n \leq 8$, so we may assume that $n > 8$.
For any fixed intersection pattern $P$, the probability that it is disjoint from our random set $S$ is $\left(\frac12\right)^{|P|}$. By Lemma \ref{lem:q_n_num_int_patterns}, there are at most $2^{n^2}$ possible intersection patterns.
Hence, by the union bound, the probability that there is an intersection pattern which has at least $2n^2$ elements and does not intersect $S$ is at most $2^{n^2}\left(\frac12\right)^{2n^2}$, which is smaller than $1/2$ for $n \geq 2$. Moreover, with probability at least $1/2$, our random set $S$ has at most $2^{n-1}$ points. Hence, there exists a subset $S$ of size at most $2^{n-1}$ that `hits' all intersection patterns of size at least $2n^2$.
Any exact cover for $\{0,1\}^n \setminus S$ consists entirely of hyperplanes whose intersection pattern has size at most $2n^2$, and hence needs at least $|\{0,1\}^n\setminus S|/(2n^2)\geq 2^{n-2}/n^2$ hyperplanes.
We now prove the upper bound. The Hamming distance on $\{0,1\}^n$ is given by $d(x,y)=\sum_{i=1}^n |x_i-y_i|$. A Hamming sphere of radius 1 around a point $x\in \{0,1\}^n$ is given by $S(x)=\{y\in \{0,1\}^n:d(x,y)=1\}$. We claim that any nonempty subset of a Hamming sphere is an intersection pattern. Since the cube is vertex-transitive, it suffices to prove our claim for $S(0)$. The hyperplane $\{x:\sum_{i=1}^n x_i=1\}$ intersects $\{0,1\}^n$ in $S(0)$. Intersecting that hyperplane with hyperplanes of the form $\{x:x_j=0\}$ gives a lower-dimensional affine subspace, and we can construct such a subspace which intersects $S(0)$ in any subset we desire. In order to turn the affine subspace into a hyperplane with the same intersection pattern, we may add generic directions that do not yield new points in the hypercube (e.g. consider adding $(1,\pi,0,\dots,0)$). This proves that each nonempty subset of a Hamming sphere is an intersection pattern.
The hypercube has total domination number at most $2^{n+1}/n$ by Lemma~\ref{lem:total_dom_num}. Hence, we can find a subset $D$ of the cube such that each vertex has a neighbour in $D$. In particular, there are $M\leq 2^{n+1}/n$ Hamming spheres centered on the vertices in $D$ that cover the cube.
For any nonempty $B\subseteq \{0,1\}^n$, we write $B=B_1\cup \dots \cup B_M$ such that each $B_i$ is contained in one of the Hamming spheres. This means that each nonempty $B_i$ is an intersection pattern, and we may cover $B$ exactly using at most $M$ hyperplanes. This gives the desired exact cover of $B$ with at most $2^{n+1}/n$ hyperplanes.
\end{proof}
Noga Alon pointed out the following improvement on the constant of the lower bound in Theorem~\ref{thm:ec:arbitrary}. There are at most $2^{n^2}$ possible intersection patterns by Lemma~\ref{lem:q_n_num_int_patterns}, so if all possible nonempty $B\subseteq \{0,1\}^n$ can be achieved by taking a union of $x$ of them, then $2^{n^2x}\geq 2^{2^n} -1$. The left-hand side of this inequality is even and the right-hand side is odd, hence $2^{n^2x}\geq 2^{2^n}$ and so $x\geq \frac{2^{n}}{n^2}$.
\section{Conclusion}
\label{sec:ec:concl}
Based on the fact that $\ec(n,k)\leq n$ for $k=1,2,3,4$, one might hope to prove that in fact $\ec(n,k)\leq n+C$ for some constant $C>0$ (independent of~$k$). However, this is not true in general by Theorem \ref{thm:ec:arbitrary}. A~natural question is then whether this will be true for $n$ sufficiently large when $k$ is fixed.
\begin{problem}
Is there a constant $C>0$ such that for any $k\in \mathbb{N}$ there exists an $n_0(k)\in \mathbb{N}$ with $\ec(n,k)\leq n+C$ for all $n\geq n_0(k)$?
\end{problem}
In an earlier version of this paper \cite{AaronsonGGJK20}, we conjectured that for any $S\subseteq \{0,1\}^r$ and $n\in \mathbb{N}$ with $n\geq r$,
\[
\ec(\{0,1\}^{n}\setminus (S\times\{0\}^{n-r}))=\ec(\{0,1\}^r\setminus S) +n-r.
\]
This would have given a negative answer to the problem above, but the counterexample $S = \{1000, 1111,1001,1011,0110,0001,0010,0111\}$ when $n = 6$ was given by Adam Zsolt Wagner \cite{Adam}.
\smallskip
One approach to improving the lower bound in Theorem \ref{thm:ec:arbitrary} is to try to prove that, for some $\varepsilon\in (0,1)$, the number of intersection patterns with at least $n^{1+\varepsilon}$ points is $2^{O(n^{1+\varepsilon})}$. Unfortunately, this is false: there are $2^{(1+o(1))n^2}$ possible intersection patterns of size at least $n^2$. This can be seen by considering intersection patterns of the form $\{0,1\}^{\log_2(n^2)}\times B$ for $B\subseteq\{0,1\}^{n-\log_2(n^2)}$. (If $B$ is a non-empty intersection pattern, then $\{0,1\}^{\log_2(n^2)}\times B$ is an intersection pattern containing at least $n^2$ points.)
On the other hand, by taking every other layer we may intersect each `axis-aligned subcube' of the form $\{0,1\}^a\times \{x\}$, ensuring that no such intersection pattern can be used in a cover. However, there is a more general type of subcube to consider.
We say a subset $A\subseteq \{0,1\}^n$ of size $|A|=2^d$ forms a $d$-dimensional \emph{subcube} if there are vectors $u,w_1,\dots,w_d\in \mathbb{R}^n$ such that
\[
A=\{u+\alpha_1w_1+\dots+\alpha_d w_d: \alpha_1,\dots,\alpha_d \in \{0,1\}\}.
\]
A solution to the following problem might help improve either the upper or lower bound of Theorem \ref{thm:ec:arbitrary}.
\begin{problem}
Fix $n,d \in \mathbb{N}$. What is the smallest cardinality of a subset $S\subseteq\{0,1\}^n$ for which $A\cap S\neq \emptyset$ for all $d$-dimensional subcubes $A \subseteq \{0,1\}^n$?
\end{problem}
This is of a similar flavour to a problem proposed by Alon, Krech and Szab\'{o}~\cite{AlonKrechSzabo}, who asked instead for the asymptotics of the above problem when the cubes have to be axis-aligned. A $d$-dimensional axis-aligned subcube is of the form $\{0,1\}^d\times\{x\}$ after permuting coordinates. Let $g(n,d)$ denote the minimal cardinality of a subset that hits all $d$-dimensional axis-aligned subcubes in $\{0,1\}^n$. The best-known asymptotic bounds for $g(n,d)$ are from~\cite{AlonKrechSzabo}:
\[
\frac{\log_2(d)}{2^{d+2}}\leq \lim_{n\to \infty}\frac{g(n,d)}{2^n} \leq \frac1{d+1}.
\]
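For very small parameters, $g(n,d)$ can be computed by exhaustive search, which may be useful for experimentation with the problem above. The following Python sketch (helper names are ours) confirms, for instance, $g(2,1)=2$, $g(3,1)=4$ (the vertex cover number of $Q_3$) and $g(3,2)=2$.

```python
from itertools import combinations, product

def axis_aligned_subcubes(n, d):
    """All d-dimensional axis-aligned subcubes of {0,1}^n (as frozensets):
    choose d free coordinates and fix the remaining n - d coordinates."""
    cubes = []
    for free in combinations(range(n), d):
        fixed = [i for i in range(n) if i not in free]
        for vals in product((0, 1), repeat=n - d):
            pts = []
            for bits in product((0, 1), repeat=d):
                p = [0] * n
                for i, b in zip(free, bits):
                    p[i] = b
                for i, v in zip(fixed, vals):
                    p[i] = v
                pts.append(tuple(p))
            cubes.append(frozenset(pts))
    return cubes

def g(n, d):
    """Smallest |S| meeting every d-dim axis-aligned subcube (brute force)."""
    verts = list(product((0, 1), repeat=n))
    cubes = axis_aligned_subcubes(n, d)
    for k in range(2 ** n + 1):
        for S in combinations(verts, k):
            if all(c & set(S) for c in cubes):
                return k
```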
Finally, we remark that we have already seen these subcubes come up in Lemma \ref{lem:cov:4} as well: the sets $S\subseteq \{0,1\}^n$ of size 4 with $\ec(\{0,1\}^n\setminus S)=n-2$ are exactly the 2-dimensional subcubes.
\subsection*{Acknowledgements}
We thank Noga Alon for pointing out to us that the problem had also been suggested by F\"{u}redi and for providing the reference~\cite{Baldi}.
We would also like to thank Alex Scott for useful discussions and the Jagiellonian University for their hospitality in hosting us during the time this research was conducted.
\bibliographystyle{abbrv}
| {
"timestamp": "2021-07-02T02:25:57",
"yymm": "2010",
"arxiv_id": "2010.00315",
"language": "en",
"url": "https://arxiv.org/abs/2010.00315",
"abstract": "Alon and Füredi (1993) showed that the number of hyperplanes required to cover $\\{0,1\\}^n\\setminus \\{0\\}$ without covering $0$ is $n$. We initiate the study of such exact hyperplane covers of the hypercube for other subsets of the hypercube. In particular, we provide exact solutions for covering $\\{0,1\\}^n$ while missing up to four points and give asymptotic bounds in the general case. Several interesting questions are left open.",
"subjects": "Combinatorics (math.CO)",
"title": "Exact hyperplane covers for subsets of the hypercube",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9932024678460111,
"lm_q2_score": 0.8056321889812553,
"lm_q1q2_score": 0.8001558782723668
} |
https://arxiv.org/abs/2301.11638 | One-dimensional integral Rellich type inequalities | The motive of this note is twofold. Inspired by the recent development of a new kind of Hardy inequality, here we discuss the corresponding Hardy-Rellich and Rellich inequality versions in the integral form. The obtained sharp Hardy-Rellich type inequality improves the previously known result. Meanwhile, the established sharp Rellich type integral inequality seems new. | \section{Introduction}
In the celebrated paper \cite{hardy}, Godfrey H. Hardy first stated the famous inequality which reads as follows: let $1<p<\infty$ and let $f$ be a $p$-integrable function on $(0, \infty)$ which vanishes at zero; then the function $r\longmapsto \frac{1}{r}\int_{0}^{r} f(t) \:{\rm d}t$ is $p$-integrable over $(0, \infty)$ and there holds
\begin{align}\label{hardy}
\int_{0}^{\infty}\bigg|\frac{1}{r}\int_{0}^{r} f(t)\:{\rm d}t\bigg|^p\:{\rm d}r\leq \bigg(\frac{p}{p-1}\bigg)^p \int_{0}^{\infty}|f(r)|^p\:{\rm d}r.
\end{align}
The constant on the right-hand side of \eqref{hardy} is sharp. The development of the famous Hardy inequality \eqref{hardy} during the period 1906-1928 has its own history and we refer to \cite{kuf} (also, see the preface of \cite{hardy-book-rs}). Recent progress by Frank-Laptev-Weidl \cite{nh} presents a novel one-dimensional inequality with the same sharp constant, which improves the classical Hardy inequality \eqref{hardy}.
This new version looks as follows: let $1<p<\infty$, then for any $f\in L^p(0,\infty)$, which vanishes at zero, one has
\begin{align}\label{new-hardy}
\int_{0}^{\infty}\sup_{0<s<\infty}\bigg|\min\biggr\{\frac{1}{r},\frac{1}{s}\biggr\}\int_{0}^{s}f(t)\:{\rm d}t \bigg|^p\:{\rm d}r\leq \bigg(\frac{p}{p-1}\bigg)^p \int_{0}^{\infty}|f(r)|^p\:{\rm d}r.
\end{align}
Certainly, \eqref{new-hardy} gives an improvement of \eqref{hardy}. Recently, the multidimensional version in the supercritical case and the discrete version of \eqref{new-hardy} have been established in \cite{nhm} and \cite{nhd}, respectively. In the same spirit, one may ask about the possible structure of Hardy-Rellich and Rellich type inequalities. In this short note, we answer this question affirmatively by establishing suitable forms of these two types of inequalities.
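As a numerical illustration of \eqref{hardy} (not part of the original argument), take $p=2$ and $f(r)=e^{-r}$, so that $\int_0^r f(t)\:{\rm d}t = 1-e^{-r}$; the left-hand side then equals $2\ln 2\approx 1.386$ while the right-hand side equals $2$. A quadrature sketch in Python (the helper name is ours):

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# p = 2 and f(r) = e^{-r}, for which int_0^r f(t) dt = 1 - e^{-r}:
lhs = trapezoid(lambda r: ((1.0 - math.exp(-r)) / r) ** 2, 1e-9, 60.0, 300000)
rhs = 4.0 * trapezoid(lambda r: math.exp(-2.0 * r), 0.0, 60.0, 300000)
```

The truncation at $r=60$ discards a tail of roughly $1/60$ from the left-hand side, so the computed value sits slightly below $2\ln 2$.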
Let us recall the one-dimensional Hardy-Rellich inequality. For $f\in C^2(0,\infty)$ with $f^\prime(0)=0$, there holds
\begin{align}\label{hardy-rel}
\int_{0}^{\infty}\frac{|f^\prime(r)|^2}{r^{2}}\:{\rm d}r \leq 4 \int_{0}^{\infty}|f^{\prime\prime}(r)|^2\:{\rm d}r.
\end{align}
Starting from it, there have been several articles in which the authors studied many improvements of inequality \eqref{hardy-rel}. Here we mention only a few of them \cite{bgr-22, coss,cazacu, ya,tz} and references therein.
Now let us write \eqref{hardy-rel} in the integral form. Note that it can be derived from the weighted 1D classical Hardy inequality. This reads as follows: let $f\in C^1(0,\infty)$, then there holds
\begin{align}\label{hardy-rel-int}
\int_{0}^{\infty}\frac{|\int_{0}^{r}f^\prime(t)\:{\rm d}t|^2}{r^{2}}\:{\rm d}r\leq 4\int_{0}^{\infty}|f^{\prime}(r)|^2\:{\rm d}r.
\end{align}
Here the constant $4$ is sharp. We give an improved version of this inequality in Theorem \ref{hardy-rel-th}.
Let us briefly mention another important functional inequality, the so-called Rellich inequality, which was first introduced in \cite{rel}. It is worth recalling its one-dimensional version. The classical (1D-)Rellich inequality states that for $f\in C^2(0,\infty)$ with $f(0)=0$ and $f^\prime(0)=0$, there holds
\begin{align}\label{rel}
\int_{0}^{\infty}\frac{|f(r)|^2}{r^{4}}\:{\rm d}r\leq \frac{16}{9} \int_{0}^{\infty}|f^{\prime\prime}(r)|^2\:{\rm d}r.
\end{align}
Over the past few decades, there has been a constant effort to improve \eqref{rel}. Here are some references \cite{hinz,ozawa, BT, MNSS,bmo,rsadv, cm}. In this short contribution, we also obtain another type of the Rellich inequality (see Theorem \ref{rel-th} with $p=2$).
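As a numerical sanity check of \eqref{rel} (illustrative only, not part of the argument), take $f(r)=r^2e^{-r}$, which satisfies $f(0)=f^\prime(0)=0$. Then $f(r)^2/r^4=e^{-2r}$ integrates to $1/2$, while $f^{\prime\prime}(r)=(r^2-4r+2)e^{-r}$ gives $\int_0^\infty|f^{\prime\prime}|^2\:{\rm d}r=3/4$, so the inequality holds with room to spare. A quadrature sketch (the helper name is ours):

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# f(r) = r^2 e^{-r} satisfies f(0) = f'(0) = 0, with
#   f(r)^2 / r^4 = e^{-2r}   and   f''(r) = (r^2 - 4r + 2) e^{-r}.
lhs = trapezoid(lambda r: math.exp(-2.0 * r), 0.0, 40.0, 100000)
rhs = trapezoid(lambda r: ((r * r - 4.0 * r + 2.0) * math.exp(-r)) ** 2,
                0.0, 40.0, 100000)
```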
To the best of our knowledge, the most recent progress in this direction was made in \cite{bmo}. However, a one-dimensional study is still missing; trying to fill this gap is another motivation for the present paper. Taking inspiration from that work, we obtain the following version of the Rellich inequality, which reads as: for $f\in L^2(0,\infty)$ there holds
\begin{align}\label{n-rel-2}
&\int_{0}^{\infty}\frac{1}{r^{4}}\bigg(\int_{0}^{r}\int_{0}^{\tau}|f(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^2\:{\rm d}r\nonumber\\&\leq\int_{0}^{\infty}\frac{1}{r^{4}}\bigg(\int_{0}^{r}\sup_{0<s<\infty}\min\biggl\{1,\frac{\tau}{s}\biggr\}\int_{0}^{s}|f(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^2\:{\rm d}r\nonumber \\&\leq \frac{16}{9} \int_{0}^{\infty}|f(r)|^2\:{\rm d}r.
\end{align}
Moreover, we will establish that the constant $16/9$ is sharp. Therefore, \eqref{n-rel-2} can be compared with \eqref{rel}. Note that we have stated only the $L^2(0,\infty)$ case here, but we will discuss the result for the general $L^p(0,\infty)$ case.
The plan of this paper is simple. In Section \ref{prelm} we discuss a few preliminaries and then we state the main results. In Section \ref{proof} we complete the proofs of the main theorems. In the final section, we will briefly mention some more related inequalities.
\section{Preliminaries and main results}\label{prelm}
Let us begin this section with basic facts about the \emph{symmetric decreasing rearrangement}. For more details, we refer to \cite[Chapter 3]{lieb}. We denote by $f^*$ the symmetric decreasing rearrangement of $f$. It is well known that $f^*$ is a nonnegative, radially symmetric, and nonincreasing function. Among the many properties of $f^*$, the one most useful in our context is the equimeasurability property, i.e.
\begin{align*}
\text{vol}(\{|f|>\tau\})=\text{vol}(\{f^*>\tau\}) \text{ for all }\tau\geq 0.
\end{align*}
By using the \emph{layer cake representation} and the above property, we have the following helpful identity:
\begin{align}\label{norm-presv}
\int_{0}^\infty|f(t)|^p\:{\rm d}t=\int_{0}^\infty|f^*(t)|^p\:{\rm d}t\:\: \text{ for all }p\geq 1.
\end{align}
Also it is clear that for any $s>0$ there holds
\begin{align}\label{norm-comp}
\int_0^s|f(t)|\:{\rm d}t\leq \int_0^sf^*(t)\:{\rm d}t.
\end{align}
These relations will be crucial in the proofs.
Now, we are ready to state the following important observation.
\begin{lemma}\label{rel-lem-1}
Let $f$ be a locally absolutely continuous function on $(0,\infty)$. Then for a fixed $r>0$ the following identity holds:
\begin{align}\label{rel-eq-1}
\sup_{0<s<\infty}\min\biggl\{1,\frac{r}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t=\int_{0}^{r}f^*(t)\:{\rm d}t,
\end{align}
where $f^*$ is the non-increasing rearrangement of $f$.
\end{lemma}
\begin{proof}
We compute the supremum by exploiting the non-increasing property of $f^*$. For any fixed $r>0$, we consider the following two cases:
{\bf Case 1:} Let $0<s\leq r$. Then we obtain
\begin{align*}
\min\biggl\{1,\frac{r}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t=\int_{0}^{s}f^*(t)\:{\rm d}t\leq \int_{0}^{r}f^*(t)\:{\rm d}t.
\end{align*}
{\bf Case 2:} Let $r\leq s<\infty$. Then, using that $f^*$ is non-increasing together with a change of variables, we have
\begin{align*}
\min\biggl\{1,\frac{r}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t=\frac{r}{s}\int_0^{s}f^*(t)\:{\rm d}t\leq\frac{r}{s}\int_{0}^{s}f^*(tr/s)\:{\rm d}t=\int_{0}^{r}f^*(v)\:{\rm d}v.
\end{align*}
In both cases, we get
\begin{align*}
\min\biggl\{1,\frac{r}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t \leq \int_{0}^{r}f^*(t)\:{\rm d}t.
\end{align*}
Hence the supremum is attained at $s=r$ and we arrive at
\begin{align*}
\sup_{0<s<\infty}\min\biggl\{1,\frac{r}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t=\int_{0}^{r}f^*(t)\:{\rm d}t.
\end{align*}
\end{proof}
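The conclusion of Lemma \ref{rel-lem-1} can be checked numerically for a concrete non-increasing profile, say $f^*(t)=e^{-t}$: scanning $s$ over a fine grid, the supremum is indeed attained at $s=r$. An illustrative sketch (the function name is ours):

```python
import math

def lemma_sup(r, smax=20.0, steps=20000):
    """Numerically scan sup over 0 < s <= smax of min(1, r/s) * int_0^s f*(t) dt
    for the non-increasing choice f*(t) = e^{-t}."""
    F = lambda s: 1.0 - math.exp(-s)        # int_0^s e^{-t} dt
    return max(min(1.0, r / s) * F(s)
               for s in (smax * k / steps for k in range(1, steps + 1)))
```

For each tested $r$ the scan returns $\int_0^r e^{-t}\:{\rm d}t = 1-e^{-r}$, matching the lemma.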
Now we are ready to present an improvement of \eqref{hardy-rel-int}, that is, a natural improvement of the Hardy-Rellich inequality in integral form. Below we describe the corresponding differential form, which improves the original Hardy-Rellich inequality \eqref{hardy-rel}.
\begin{theorem}\label{hardy-rel-th}
Let $g\in C^1(0,\infty)$, then there holds
\begin{align}\label{eqn-hardy-rel-th}
\int_{0}^{\infty}\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{r},\frac{1}{s}\biggr\}\int_{0}^{s}g^\prime(t)\:{\rm d}t\bigg|^2\:{\rm d}r\leq 4 \int_{0}^{\infty}|g^\prime(r)|^2\:{\rm d}r.
\end{align}
Moreover, the constant is sharp.
\end{theorem}
\begin{remark}\label{rem-hardy-rel}
Let $f\in C^2(0,\infty)$, with $f^\prime(0)=0$. Then choosing $g(r)=f^\prime(r)$ in the improved Hardy-Rellich inequality \eqref{eqn-hardy-rel-th}, we obtain
\begin{align}\label{dif-hr-1}
\int_{0}^{\infty}\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{r},\frac{1}{s}\biggr\}f^\prime(s)\bigg|^2\:{\rm d}r\leq 4\int_{0}^{\infty}|f^{\prime\prime}(r)|^2\:{\rm d}r.
\end{align}
Moreover, it is straightforward to check the following identity
\begin{align}\label{dif-hr-2}
\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{r},\frac{1}{s}\biggr\}\;f^\prime(s)\bigg|^2=\max\biggl\{ \sup_{0<s\leq r}\frac{|f^\prime(s)|^2}{r^2}\:,\: \sup_{r\leq s<\infty}\frac{|f^\prime(s)|^2}{s^2}\biggr\}.
\end{align}
Then combining \eqref{dif-hr-1} and \eqref{dif-hr-2} we deduce
\begin{align}\label{hardy-rel-impv}
\int_{0}^{\infty}\frac{|f^\prime(r)|^2}{r^{2}}\:{\rm d}r\leq \int_{0}^{\infty}\max\biggl\{ \sup_{0<s\leq r}\frac{|f^\prime(s)|^2}{r^2}\:,\: \sup_{r\leq s<\infty}\frac{|f^\prime(s)|^2}{s^2}\biggr\}\:{\rm d}r \leq 4 \int_{0}^{\infty}|f^{\prime\prime}(r)|^2\:{\rm d}r.
\end{align}
Therefore, \eqref{hardy-rel-impv} appears as an immediate improvement of \eqref{hardy-rel} in the one-dimensional differential form. A similar discussion gives the improvement of \eqref{hardy-rel-int} as well (cf. \cite[Theorem 1.3]{coss}).
\end{remark}
Now we are going to discuss the second main result of this note. Before presenting the statement, let us first recall the classical one-dimensional $L^{p}$-Rellich inequality. It reads as follows: let $p>1$ and $f\in C^2(0,\infty)$ with $f(0)=0$ and $f^\prime(0)=0$; then there holds
\begin{align}\label{p-rel}
\int_{0}^{\infty}\frac{|f(r)|^p}{r^{2p}}\:{\rm d}r\leq \frac{p^{2p}}{(p-1)^p(2p-1)^p} \int_{0}^{\infty}|f^{\prime\prime}(r)|^p\:{\rm d}r.
\end{align}
Now we are ready to demonstrate the one-dimensional Rellich-type inequality in the following integral form.
\begin{theorem}\label{rel-th}
Let $f\in L^p(0,\infty)$, $p>1$. Then we have
\begin{align}\label{eqn-rel-th}
&\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\int_{0}^{\tau}|f(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\nonumber\\&\leq\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\sup_{0<s<\infty}\min\biggl\{1,\frac{\tau}{s}\biggr\}\int_{0}^{s}|f(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\nonumber \\&\leq \frac{p^{2p}}{(p-1)^p(2p-1)^p} \int_{0}^{\infty}|f(r)|^p\:{\rm d}r.
\end{align}
Moreover, the constant is sharp.
\end{theorem}
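The chain of inequalities \eqref{eqn-rel-th} can be illustrated numerically for $p=2$ and the non-increasing function $f(t)=e^{-t}$ (so $f=f^*$ and, by Lemma \ref{rel-lem-1}, the middle term coincides with the first); here $\int_0^r\int_0^\tau f(t)\:{\rm d}t\:{\rm d}\tau = r-1+e^{-r}$. A quadrature sketch (helper names are ours):

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# p = 2 and f(t) = e^{-t} (non-increasing, so f = f^*), for which
#   int_0^r int_0^tau f(t) dt dtau = r - 1 + e^{-r}.
num = trapezoid(lambda r: (r - 1.0 + math.exp(-r)) ** 2 / r ** 4,
                1e-3, 400.0, 400000)
den = trapezoid(lambda r: math.exp(-2.0 * r), 0.0, 40.0, 40000)
```

The computed quotient stays safely below the sharp constant $16/9$, as the theorem predicts for this particular (non-extremal) $f$.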
\section{Proofs of Theorems \ref{hardy-rel-th} and \ref{rel-th}}\label{proof}
This section is concerned with the proofs of Theorems \ref{hardy-rel-th} and \ref{rel-th}. Before going further let us recall the following lemma.
\begin{lemma}\cite[Lemma 3.1]{nhm}\label{hr-lemma}
Let $1<p<\infty$. Let $g$ be any nonnegative function on $(0,\infty)$. Assume $h$ is a strictly positive non-decreasing function on $(0,\infty)$ such that $s h(r)\leq r h(s)$ for any $r,s\in(0,\infty)$ with $r\leq s$. Let $f$ be a locally absolutely continuous function on $(0,\infty)$. Then we have
\begin{equation*}
\int_{0}^{\infty}g(r)\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{h(r)},\frac{1}{h(s)}\biggr\}\int_{0}^{s}f(t)\:{\rm d}t\bigg|^p\:{\rm d}r \leq \int_{0}^{\infty}g(r)\bigg|\frac{1}{h(r)}\int_{0}^{r}f^*(t)\:{\rm d}t\bigg|^p\:{\rm d}r,
\end{equation*}
where $f^*$ is the non-increasing rearrangement of $f$.
\end{lemma}
Now, as a direct consequence of Lemma \ref{hr-lemma}, we derive Theorem \ref{hardy-rel-th}.
{\bf Proof of Theorem \ref{hardy-rel-th}:}
Take the weight function to be identically $1$ and $h(r)=r$ in Lemma \ref{hr-lemma}. Applying the lemma with $p=2$ to the function $g^\prime$, we have
\begin{equation*}
\int_{0}^{\infty}\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{r},\frac{1}{s}\biggr\}\int_{0}^{s}g^{\prime}(t)\:{\rm d}t\bigg|^2\:{\rm d}r \leq\int_{0}^{\infty}\frac{1}{r^{2}}\bigg|\int_{0}^{r}(g^{\prime})^*(t)\:{\rm d}t\bigg|^2\:{\rm d}r.
\end{equation*}
By using the Hardy-Rellich inequality in the form \eqref{hardy-rel-int} for the function $(g^\prime)^*$, we obtain
\begin{align*}
\int_{0}^{\infty}\sup_{0<s<\infty}\bigg|\min\biggl\{\frac{1}{r},\frac{1}{s}\biggr\}\int_{0}^{s}g^\prime(t)\:{\rm d}t\bigg|^2\:{\rm d}r&\leq 4\int_{0}^{\infty}|{(g^\prime)}^{*}(r)|^2\:{\rm d}r\\&=4\int_{0}^{\infty}|g^\prime(r)|^2\:{\rm d}r.
\end{align*}
In the last step, we have used the norm-preserving property \eqref{norm-presv}. The sharpness follows from the optimality of the constant in \eqref{hardy-rel-int}. This ends the proof.
{\bf Proof of Theorem \ref{rel-th}:}
The first inequality holds because the supremum dominates the choice $s=\tau$. Now, integrating the identity \eqref{rel-eq-1} from $0$ to $r$, we have
\begin{align}\label{rel-eq-2}
\int_{0}^{r}\sup_{0<s<\infty}\min\biggl\{1,\frac{\tau}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t\:{\rm d}\tau=\int_{0}^{r}\int_{0}^{\tau}f^*(t)\:{\rm d}t\:{\rm d}\tau.
\end{align}
Consequently, we estimate
\begin{align*}
&\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\sup_{0<s<\infty}\min\biggl\{1,\frac{\tau}{s}\biggr\}\int_{0}^{s}|f(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\\&\overset{\eqref{norm-comp}}{\leq} \int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\sup_{0<s<\infty}\min\biggl\{1,\frac{\tau}{s}\biggr\}\int_{0}^{s}f^*(t)\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\\&\overset{\eqref{rel-eq-2}}{=}\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\int_{0}^{\tau}f^*(t)\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\\&\overset{\eqref{p-rel}}{\leq}\frac{p^{2p}}{(p-1)^p(2p-1)^p}\int_{0}^{\infty}|f^*(r)|^p\:{\rm d}r\\&\overset{\eqref{norm-presv}}{=}\frac{p^{2p}}{(p-1)^p(2p-1)^p}\int_{0}^{\infty}|f(r)|^p\:{\rm d}r.
\end{align*}
This completes the proof of the new Rellich-type inequality in integral form.
{\bf Optimality:} We set
\begin{align}\label{rel-const}
C_{p}:=\inf_{f\in L^p(0,\infty)\setminus\{0\}}\frac{\int_{0}^{\infty}\frac{1}{r^{2p}}\big(\int_{0}^{r}\int_{0}^{\tau}|f(t)|\:{\rm d}t\:{\rm d}\tau\big)^p\:{\rm d}r}{\int_{0}^{\infty}|f(r)|^p\:{\rm d}r}.
\end{align}
The validity of \eqref{eqn-rel-th} immediately implies
\begin{align*}
C_{p}\leq \frac{p^{2p}}{(p-1)^p(2p-1)^p}.
\end{align*}
So it remains to show the reverse inequality, which will be done by exhibiting a suitable minimizing sequence. We divide the proof into several steps.
{\bf Step 1.} Let us start with a cut-off function $\chi:[0,\infty)\rightarrow \mathbb{R}$ with the following properties:
\begin{itemize}
\item[1.] $\chi(r)\in [0,1]$ for all $r\in [0,\infty)$ and $\chi$ is smooth;
\item[2.] $\chi$ satisfies the following
\begin{equation*}
\chi(r)=
\begin{dcases}
1 & 0\leq r\leq 1, \\
0 & 2\leq r< \infty; \\
\end{dcases}
\end{equation*}
\item[3.] $\chi$ is non-increasing, i.e. $\chi^\prime(r)\leq 0$ for all $r\in [0,\infty)$.
\end{itemize}
Now for a small $\epsilon>0$, let us define the minimizing functions $\{f_\epsilon\}$ as follows:
\begin{align*}
f_\epsilon(r):=r^{\frac{\epsilon-1}{p}}\chi(r).
\end{align*}
{\bf Step 2.} In this step we estimate the denominator of \eqref{rel-const}. We compute
\begin{align}\label{denom}
\int_{0}^{\infty}|f_\epsilon(r)|^p\:{\rm d}r&=\int_0^\infty r^{\epsilon-1}\chi^p(r)\:{\rm d}r\nonumber
\\&=\int_0^1 r^{\epsilon-1}\:{\rm d}r+\int_1^2r^{\epsilon-1}\chi^p(r)\:{\rm d}r\nonumber
\\&=\frac{1}{\epsilon}+O(1).
\end{align}
Therefore, for a fixed positive $\epsilon$, we have $f_\epsilon\in L^p(0,\infty)$.
{\bf Step 3.} In this part we estimate the numerator of \eqref{rel-const}. Integrating by parts twice, we compute
\begin{align}\label{numeo}
&\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\int_{0}^{\tau}|f_\epsilon(t)|\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\nonumber
\\&=\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}\int_{0}^{\tau}t^{\frac{\epsilon-1}{p}}\chi(t)\:{\rm d}t\:{\rm d}\tau\bigg)^p\:{\rm d}r\nonumber
\\&=\bigg(\frac{p}{\epsilon-1+p}\bigg)^p\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg[\int_0^r\chi(\tau)\tau^{\frac{\epsilon-1+p}{p}}\:{\rm d}\tau-\int_{0}^{r}\int_{0}^{\tau}t^{\frac{\epsilon-1+p}{p}}\chi^\prime(t)\:{\rm d}t\:{\rm d}\tau\bigg]^p\:{\rm d}r\nonumber
\\&\geq \bigg(\frac{p}{\epsilon-1+p}\bigg)^p\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg[\int_0^r\chi(\tau)\tau^{\frac{\epsilon-1+p}{p}}\:{\rm d}\tau\bigg]^p\:{\rm d}r\nonumber
\\&=\bigg(\frac{p}{\epsilon-1+p}\bigg)^p\bigg(\frac{p}{\epsilon-1+2p}\bigg)^p\int_{0}^{\infty}\frac{1}{r^{2p}}\bigg[\chi(r)r^{\frac{\epsilon-1+2p}{p}}-\int_{0}^{r}\tau^{\frac{\epsilon-1+2p}{p}}\chi^\prime(\tau)\:{\rm d}\tau\bigg]^p\:{\rm d}r\nonumber
\\&\geq \bigg(\frac{p}{\epsilon-1+p}\bigg)^p\bigg(\frac{p}{\epsilon-1+2p}\bigg)^p\int_{0}^{\infty}r^{\epsilon-1}\chi^p(r)\:{\rm d}r\nonumber
\\&=\bigg(\frac{p}{\epsilon-1+p}\bigg)^p\bigg(\frac{p}{\epsilon-1+2p}\bigg)^p\bigg[\int_0^1 r^{\epsilon-1}\:{\rm d}r+\int_1^2r^{\epsilon-1}\chi^p(r)\:{\rm d}r\bigg]\nonumber
\\&=\frac{1}{\epsilon}\bigg(\frac{p}{\epsilon-1+p}\bigg)^p\bigg(\frac{p}{\epsilon-1+2p}\bigg)^p+O(1).
\end{align}
In the two inequalities above, exploiting $\chi^\prime\leq 0$, we used the simple bound $(a+b)^p\geq a^p$ for nonnegative real numbers $a$ and $b$ and $p>1$.
{\bf Step 4.} Finally, by using \eqref{denom} and \eqref{numeo}, we deduce that the ratio satisfies
\begin{align*}
&\frac{\int_{0}^{\infty}\frac{1}{r^{2p}}\big(\int_{0}^{r}\int_{0}^{\tau}|f_\epsilon(t)|\:{\rm d}t\:{\rm d}\tau\big)^p\:{\rm d}r}{\int_{0}^{\infty}|f_\epsilon(r)|^p\:{\rm d}r}\\
&\geq \frac{\frac{1}{\epsilon}\big(\frac{p}{\epsilon-1+p}\big)^p\big(\frac{p}{\epsilon-1+2p}\big)^p+O(1)}{\frac{1}{\epsilon}+O(1)}\rightarrow \frac{p^{2p}}{(p-1)^p(2p-1)^p}\; \text{ as }\epsilon\rightarrow 0.
\end{align*}
Hence $\{f_\epsilon\}$ is the required minimizing sequence and, in turn, we have $$C_p=\frac{p^{2p}}{(p-1)^p(2p-1)^p}.$$
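The optimality argument can also be probed numerically for $p=2$. For this check we take the sharp cutoff $\chi=\mathbf{1}_{[0,1]}$ instead of the smooth one (an assumption of this sketch; the theorem only requires $f_\epsilon\in L^p$), so the denominator is exactly $1/\epsilon$ and the part of the numerator over $(0,1)$ is known in closed form. The quotient then approaches $16/9$ from below as $\epsilon\to 0$ (helper names are ours):

```python
import math

def trapezoid(g, a, b, n):
    # composite trapezoidal rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

def ratio(eps, R=2000.0, n=200000):
    """Quotient from the optimality argument for p = 2 and the sharp cutoff
    f(r) = r^{(eps-1)/2} on (0,1], f = 0 afterwards."""
    a = 2.0 / (1.0 + eps)            # int_0^1 f
    c = a * 2.0 / (3.0 + eps)        # int_0^1 int_0^tau f
    den = 1.0 / eps                  # int_0^1 r^{eps-1} dr
    num = c * c / eps                # int_0^1 G(r)^2 / r^4 dr, exactly
    # tail over (1, R), where G(r) = c + a (r - 1):
    num += trapezoid(lambda r: (c + a * (r - 1.0)) ** 2 / r ** 4, 1.0, R, n)
    return num / den
```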
\section{Few more remarks}\label{inq}
Here we briefly record some further inequalities. This indicates that rescaling the above inequalities may generate new ones. Let us recall the differential form of the inequality \eqref{new-hardy}.
\begin{lemma}\cite[Theorem 1]{nh}
Let $1 < p < \infty$. Then, for any locally absolutely continuous function $f$ on $(0, \infty)$ with $\liminf_{r\rightarrow 0} |f(r)| = 0$, there holds
\begin{align*}
\int_{0}^{\infty}\max\biggl\{\sup_{0<s\leq r} \frac{|f(s)|^p}{r^p}, \sup_{r\leq s<\infty} \frac{|f(s)|^p}{s^p} \biggr\}\:{\rm d}r\leq \bigg(\frac{p}{p-1}\bigg)^p \int_{0}^{\infty}|f^\prime(r)|^p\:{\rm d}r.
\end{align*}
\end{lemma}
Now, substituting $\frac{1}{r}\int_{0}^{r}f(t)\:{\rm d}t$ and $\int_{0}^{r}f(t)\:{\rm d}t$ in place of $f(r)$, one obtains two immediate corollaries, respectively.
\begin{corollary}
Let $1 < p < \infty$. Then, for any locally absolutely continuous function $f$ on $(0, \infty)$, there holds
\begin{align*}
&\int_{0}^{\infty}\max\biggl\{\sup_{0<s\leq r} \frac{|\frac{1}{s}\int_{0}^{s}f(t)\:{\rm d}t|^p}{r^p}, \sup_{r\leq s<\infty} \frac{|\frac{1}{s}\int_{0}^{s}f(t)\:{\rm d}t|^p}{s^p} \biggr\}\:{\rm d}r\nonumber\\&\leq \bigg(\frac{p}{p-1}\bigg)^p 2^{p-1} \biggl\{\int_{0}^{\infty} \frac{|f(r)|^p}{r^p}\:{\rm d}r + \int_{0}^{\infty}\frac{1}{r^{2p}}\bigg(\int_{0}^{r}|f(t)|^p\:{\rm d}t\bigg)\:{\rm d}r\biggr\}.
\end{align*}
\end{corollary}
\begin{corollary}
Let $1 < p < \infty$. Then, for any locally absolutely continuous function $f$ on $(0, \infty)$, there holds
\begin{align*}
\int_{0}^{\infty}\max\biggl\{\sup_{0<s\leq r} \frac{|\int_{0}^{s}f(t)\:{\rm d}t|^p}{r^p}, \sup_{r\leq s<\infty} \frac{|\int_{0}^{s}f(t)\:{\rm d}t|^p}{s^p} \biggr\}\:{\rm d}r\leq \bigg(\frac{p}{p-1}\bigg)^p \int_{0}^{\infty}|f(r)|^p\:{\rm d}r
\end{align*}
\end{corollary}
\medskip
\section*{Acknowledgments}
This work was partially supported by the NU program 20122022CRP1601. The first author is supported in part by JSPS Kakenhi 18KK0073, 19H00644. The second author is supported in part by National Theoretical Science Research Center Operational Plan V-Mathematics Field (2/5) (Project number 111-2124-M-002-014-G58-01).
\medskip
https://arxiv.org/abs/1602.03445 | Davenport constant for commutative rings | The Davenport constant is one measure for how "large" a finite abelian group is. In particular, the Davenport constant of an abelian group is the smallest $k$ such that any sequence of length $k$ is reducible. This definition extends naturally to commutative semigroups, and has been studied in certain finite commutative rings. In this paper, we give an exact formula for the Davenport constant of a general commutative ring in terms of its unit group.
\section{Introduction}
The Davenport constant is an important concept in additive number theory. In
particular, it measures the length of the longest zero-sum free sequence in an abelian group.
The Davenport constant was introduced by Davenport in 1966 \cite{Davenport}, but
was actually studied prior to that in 1963 by Rogers \cite{Rogers}.
The definition was first extended to abelian semigroups by Geroldinger and Schneider \cite{GHK2006}, as follows:
\begin{definition}
For an additive abelian semigroup $S$, let $d(S)$ denote the smallest $d\in
\mathbb{N}_0\cup \{\infty\}$ with the following property:
For any $m\in \mathbb{N}$ and $s_1, \dots, s_m\in S$ there exists a subset
$J\subseteq [1, m]$ such that $\abs{J}\le d$ and
\[
\sum_{j = 1}^m s_j = \sum_{j\in J} s_j.
\]
\end{definition}
In addition, \cite{GHK2006} showed the following:
\begin{proposition}
\label{infprop}
If $\abs{S} < \infty$, then $d(S) < \infty$.
\end{proposition}
Wang and Guo \cite{WangGuo2008} then gave the definition of the large
Davenport constant in terms of reducible and irreducible sequences, as follows:
\begin{definition}
Let $S$ be a commutative semigroup (not necessarily finite). Let $A$ be a
sequence of elements in $S$. We say that $A$ is \textit{reducible} if there exists a
proper subsequence $B\subsetneq A$ such that the sum of the elements in $A$ is
equal to the sum of the elements in $B$. Otherwise, we say that $A$ is
\textit{irreducible}.
\end{definition}
\begin{definition}
Let $S$ be a finite commutative semigroup. Define the \textit{Davenport constant}
$D(S)$ of $S$ as the smallest $d\in \mathbb{N} \cup \{\infty\}$ such that
every sequence of $d$ elements in $S$ is reducible.
\end{definition}
\begin{remark}
$D$ and $d$ are related by the equation $D(S) = d(S) + 1$.
\end{remark}
Note that if $S$ is an abelian group, being irreducible is equivalent to being
zero-sum free, so the definition of the Davenport constant here is equivalent to
the classical definition of the Davenport constant for abelian groups.
In all following sections, unless otherwise noted:
\begin{itemize}
\item All semigroups are unital and commutative.
Furthermore, we will use multiplication notation for semigroups, as opposed to
the additive convention used in \cite{GHK2006,WangGuo2008,Wang2015124,QuWangZhang}.
\item Similarly, rings are unital and commutative.
\item Sets represented by the capital letter $S$ are semigroups.
\item Sets represented by the capital letter $T$ are ideals in a
semigroup.
\item Sets represented by the capital letters $A, B$ are
sequences of elements in a semigroup. In addition, $\pi(A)$ denotes the
product of the elements in $A$.
\item Sets represented by the capital letter $R$ are commutative rings.
\item By abuse of notation, when we write $D(R)$, we actually mean $D(S_R)$,
where $S_R$ is the semigroup of $R$ {\it under multiplication}.
\item $C_n$ denotes
the cyclic group of order $n$; $\mathbb{Z} / n\mathbb{Z}$ denotes the ring with additive group $C_n$.
\end{itemize}
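For very small semigroups, these definitions can be checked directly by machine. The following Python sketch (the helper name \texttt{davenport} and its encoding are ours, purely for illustration) computes $D(S)$ by enumerating multisets, which suffices since $S$ is commutative; it recovers, for instance, $D = 3$ for the multiplicative semigroup of $\mathbb{Z}/4\mathbb{Z}$, where the sequence $2, 2$ is irreducible.

```python
from itertools import combinations, combinations_with_replacement

def davenport(elements, mul, identity):
    """Brute-force D(S) for a tiny commutative monoid: the smallest k such
    that every length-k sequence has a proper subsequence (possibly the
    empty one, with product = identity) with the same product."""
    def prod(xs):
        p = identity
        for x in xs:
            p = mul(p, x)
        return p

    def reducible(seq):
        total = prod(seq)
        return any(prod(seq[i] for i in ix) == total
                   for r in range(len(seq))
                   for ix in combinations(range(len(seq)), r))

    k = 1
    while True:
        # Commutativity means only the multiset of elements matters.
        if all(reducible(seq) for seq in combinations_with_replacement(elements, k)):
            return k
        k += 1

# Multiplicative semigroup of Z/4Z: (2, 2) is irreducible, so D = 3.
print(davenport(range(4), lambda a, b: (a * b) % 4, 1))  # 3
```

Applied to $\mathbb{Z}/n\mathbb{Z}$ under addition (identity $0$), the same function computes the classical group Davenport constant.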
\section{Previous Results}
So far, this general setting has been studied extensively in the cases where the
semigroup is the semigroup of a (commutative) ring under multiplication.
The general idea in this class of problems is to show that the Davenport
constant of the semigroup of a ring $R$ under multiplication is ``close'' to
the Davenport constant of its group of units $U(R)$. The reason this has been done
is that very little is known about the precise value of $D(G)$ when $G$ is an
abelian group of high rank, and often the unit group of these commutative rings
will be a group of high rank. However, as we will show, if there is a general theorem
for the Davenport constant of an abelian group, then by \Cref{thm1}, we will also have
a general theorem for the Davenport constant of an arbitrary finite commutative ring.
\begin{remark}
Since $U(R)$
is a sub-semigroup of $R$, we clearly have $D(R)\ge D(U(R))$, so we would like to
say something about the difference $D(R) - D(U(R))$.
\end{remark}
In their paper, Wang and Guo showed the following result:
\footnote{\cite{WangGuo2008} actually erroneously claimed that for general
$n_i$, $D(R) = D(U(R)) + \#\{1\le i\le r: 2\| n_i\}$. The author has
corresponded with the authors of \cite{WangGuo2008} on this matter, and they
offer the corrected result stated below.}
\begin{theorem}[Wang, Guo, 2008, \cite{WangGuo2008}]
\label{wangguo}
Let $R = \mathbb{Z}/n_1\mathbb{Z} \times \dots \times \mathbb{Z}/n_r\mathbb{Z}$, where each of the
$n_1, \dots, n_r$ is odd. Then
$D(R) = D(U(R))$.
\end{theorem}
Later, Wang showed the following theorem:
\begin{theorem}[Wang, 2015, \cite{Wang2015124}]
\label{wang}
Suppose $q > 2$ is a prime power.
If $R \neq \mathbb{F}_q[x]$ is a quotient ring of $\mathbb{F}_q[x]$, then $D(R) = D(U(R))$.
\end{theorem}
However, Wang left the case $q = 2$ open and gave an instance for which
$D(R)\neq D(U(R))$. Later, Zhang, Wang, and Qu gave the following bound
when $q = 2$:
\begin{theorem}[Zhang, Wang, Qu, 2015, \cite{QuWangZhang}]
\label{qwz}
If $f\in \mathbb{F}_2[x]$ is nonconstant and $R = \mathbb{F}_2[x] / (f)$,
then $D(U(R))\le D(R) \le D(U(R)) + \delta_f$, where $\delta_f = \deg[\gcd(f,
x(x + 1))]$.
\end{theorem}
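A minimal instance of \Cref{qwz} (this worked example is ours) shows that the bound can be tight.
\begin{example}
Take $f = x^2$, so $R = \mathbb{F}_2[x]/(x^2)$ and $\delta_f = \deg\gcd(x^2, x^2 + x) = 1$. Here $U(R) = \{1, 1 + x\}\simeq C_2$, so the bound reads $2\le D(R)\le 3$, and it is tight: the sequence $x, x$ has product $0$, while its proper subsequences have products $1$ and $x$, so it is irreducible and $D(R) = 3$.
\end{example}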
\section{Summary of New Results}
First, we will show that the converse of \Cref{infprop} holds for rings:
\begin{theorem}
\label{thm4}
If $R$ is a commutative ring and $D(R) < \infty$, then $\abs{R} < \infty$.
\end{theorem}
However, the main result of this paper will be to relate $D(R)$ and $D(U(R))$ for arbitrary finite rings $R$.
As we will see, the reason why $D(R) = D(U(R))$ does not hold in general is closely related to the presence of certain index 2 ideals in $R$, which we can see immediately in the following exact formula for $D(R)$ in terms of the Davenport constants of certain subgroups of $U(R)$:
\begin{theorem}
\label{thm1}
Suppose $R$ is of the form
\[
(\mathbb{Z} / 2\mathbb{Z})^{k_1} \times (\mathbb{Z} / 4\mathbb{Z})^{k_2} \times (\mathbb{Z} / 8\mathbb{Z})^{k_3} \times (\mathbb{F}_2[x] / (x^2))^{k_4
} \times R',
\]
where $R'$ is a product of local rings not isomorphic to $\mathbb{Z} / 2\mathbb{Z}, \mathbb{Z} / 4\mathbb{Z}, \mathbb{Z} / 8\mathbb{Z}, \mathbb{F}_2[x]/(x^2)$.
Then
\[
D(R) = \max_{0\le a\le k_2 + k_4, 0\le b\le k_3}\bigg[D(U(R') \times {C_2}^{k_2 + k_4 + 2k_3 - a -
2b}) + k_1 + 2a + 3b \bigg].
\]
\end{theorem}
\begin{remark}
Note that since $R$ is finite, $R$ is Artinian, so there is a unique decomposition of $R$ as a product of local rings (\cite{atiyahmacdonald}, \S 8). Thus the quantities $k_1, k_2, k_3, k_4$ are well-defined as functions of $R$.
\end{remark}
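For concreteness, here is a small worked instance of the formula (this example is ours; it uses only the standard fact $D(C_2^r) = r + 1$).
\begin{example}
Let $R = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$, so $k_1 = k_2 = 1$, $k_3 = k_4 = 0$, and $R'$ is the empty product (so $U(R')$ is trivial). \Cref{thm1} gives
\[
D(R) = \max_{0\le a\le 1}\Big[D\big(U(R')\times {C_2}^{1 - a}\big) + 1 + 2a\Big] = \max(2 + 1, 1 + 3) = 4.
\]
The lower bound is witnessed by the sequence $(0,1), (1,2), (1,2)$: its product is $(0,0)$, while its proper subsequences have products $(1,1)$, $(0,1)$, $(1,2)$, $(0,2)$, $(1,0)$, so it is irreducible and $D(R)\ge 4$. Note also that $D(U(R)) = D(C_2) = 2$, so $\Delta(R) = 2 = k_1 + k_2$.
\end{example}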
As a corollary, we get the following bound on $D(R) - D(U(R))$:
\begin{corollary}
\label{thm2}
Suppose $R$ is of the form
\[
(\mathbb{Z} / 2\mathbb{Z})^{k_1} \times (\mathbb{Z} / 4\mathbb{Z})^{k_2} \times (\mathbb{Z} / 8\mathbb{Z})^{k_3} \times (\mathbb{F}_2[x] / (x^2))^{k_4
} \times R',
\]
where $R'$ is a product of local rings not isomorphic to $\mathbb{Z} / 2\mathbb{Z}, \mathbb{Z} / 4\mathbb{Z}, \mathbb{Z} / 8\mathbb{Z}, \mathbb{F}_2[x]/(x^2)$.
Then
\[
D(U(R)) + k_1 \le D(R) \le D(U(R)) + k_1 + k_2 + k_3 + k_4.
\]
In addition, equality holds on the right if $\abs{U(R)}$ is a power of $2$ or if
$k_i = 0$ for $i = 2, 3, 4$.
\begin{proof}
For the left hand side, note that we can pick $a = b = 0$ to get
\[
D(R)\ge D(U(R') \times {C_2}^{k_2 + k_4 + 2k_3}) + k_1 = D(U(R)) + k_1
\]
For the right hand side, we use the well-known facts $D(G\times H) \ge D(G) + D(H) - 1$, and $D(C_2) = 2$ to get
\begin{align*}
D(R) &= \max_{0\le a\le k_2 + k_4, 0\le b\le k_3}\bigg[D(U(R') \times {C_2}^{k_2 + k_4 + 2k_3 - a -
2b}) + k_1 + 2a + 3b \bigg] \\ &= \max_{0\le a\le k_2 + k_4, 0\le b\le
k_3}\bigg[D(U(R') \times {C_2}^{k_2 + k_4 + 2k_3 - a - 2b}) + (a + 2b) (D(C_2) - 1) + k_1 + a
+ b \bigg] \\
&\le \max_{0\le a\le k_2 + k_4, 0\le b\le k_3}\bigg[D(U(R') \times
{C_2}^{k_2 + k_4 + 2k_3}) + k_1 + a + b \bigg] \\
&= \max_{0\le a\le k_2 + k_4, 0\le b\le k_3}\bigg[D(U(R)) + k_1 + a + b \bigg] \\
&= D(U(R)) + k_1 + k_2 + k_3 + k_4.
\end{align*}
When $k_2 + k_3 + k_4 = 0$, equality clearly holds as the left hand side and the right hand side are the same.
On the other hand, when $\abs{U(R)}$ is a power of 2,
then $U(R')$ is also a 2-group. However, by \cite{Olson1969}, if $G$ and $H$ are
2-groups, then $D(G\times H) = D(G) + D(H) - 1$. Thus
\begin{align*}
D(R) &\ge D(U(R')) + k_1 + 2k_2 + 3k_3 + 2k_4 \\
&= D(U(R')) + (k_2 + k_4 + 2k_3) (D(C_2) - 1) + k_1 + k_2 + k_3 + k_4 \\
&= D(U(R')\times {C_2}^{k_2 + k_4 + 2k_3}) + k_1 + k_2 + k_3 + k_4 \\
&= D(U(R)) + k_1 + k_2 + k_3 + k_4
\ge D(R),
\end{align*}
where the final inequality is the upper bound proved above, so equality holds in this case as well.
\end{proof}
\end{corollary}
We also have the following more concise bound on $D(R) - D(U(R))$ that does
not depend on writing down a local ring product decomposition.
\begin{corollary}
\label{thm3}
Suppose $R$ is a finite commutative ring, and let $n_2(R)$ be the number of
index two (prime, maximal) ideals of $R$. Then $D(R)\le D(U(R)) + n_2(R)$.
\begin{proof}
Since $R$ is finite (and thus Artinian), $R$ can be expressed as a finite
product of local rings. Thus in the setting of \Cref{thm2}, it suffices to
show that $k_1 + k_2 + k_3 + k_4\le n_2(R)$. Since $R$ is Artinian, we have that $R\simeq
\prod_{i = 1}^n R / \mathfrak{q}_i$, where $\{\sqrt{\mathfrak{q}_i}\}_{i = 1}^n =
\Spec(R)$. In particular, the number of $R / \mathfrak{q}_i$ that can be
isomorphic to $\mathbb{Z} / 2\mathbb{Z}, \mathbb{Z}/4\mathbb{Z}, \mathbb{Z}/8\mathbb{Z}$ or $\mathbb{F}_2[x] / (x^2)$ is at
most the number of index 2 ideals in $\Spec(R)$, which is precisely
$n_2(R)$.
\end{proof}
\end{corollary}
Using \Cref{thm1} and \Cref{thm2,thm3}, we will give generalizations of each of the results in the previous section.
The main line of attack to prove \Cref{thm1} will be to reduce the problem to the case of finite local rings and then talk about what happens when we ``glue'' local rings together via product. However, we will first discuss the gluing mechanism for a more general class of semigroups (``almost unit-stabilized''); this will require the notion of \textit{unit-stabilized pairs} and a \textit{relative Davenport constant}. Afterwards, we will show that finite local rings are almost unit-stabilized, and that more structure holds for all finite local rings other than $\mathbb{Z} / 2\mathbb{Z}, \mathbb{Z} / 4 \mathbb{Z}, \mathbb{Z} / 8\mathbb{Z}$, and $\mathbb{F}_2[x] / (x^2)$.
\section{Davenport constant for semigroups}
Let $S$ be a semigroup and $U(S)$ denote its group of units.
\begin{definition}
We say that a sequence $A$ with elements in
$S$ is \textit{reducible} if there exists a proper subsequence $A'\subsetneq A$ such that
$\pi(A') = \pi(A)$. Otherwise, $A$ is \textit{irreducible}.
\end{definition}
\begin{definition}
The Davenport constant $D(S)$ is the minimum positive integer
$k$ such that any sequence $A$ of size $k$ is reducible.
\end{definition}
\begin{definition}
For $S$ a semigroup, let $\Delta(S) = D(S) - D(U(S))$.
\end{definition}
\begin{remark}
Clearly $\Delta(S) \ge 0$.
\end{remark}
\begin{lemma}
\label{prodlemma}
Suppose $S$ and $S'$ are semigroups. Then
\[
D(S\times S')\ge D(S) + D(S') - 1.
\]
\begin{proof}
Let $\ell = D(S) - 1$ and $\ell' = D(S') - 1$. Then there exists an irreducible sequence $s_1, \dots, s_\ell$ in $S$ of length $\ell$ and an irreducible sequence $s_1', \dots, s_{\ell'}'$ in $S'$ of length $\ell'$. Then the sequence
\[
(s_1, 1_{S'}), \dots, (s_\ell, 1_{S'}),
(1_S, s_1'), \dots, (1_S, s_{\ell'}')
\]
is an irreducible sequence in $S\times S'$ of length $\ell + \ell' = D(S) + D(S') - 2$, which means that $D(S\times S') \ge D(S) + D(S') - 1$.
\end{proof}
\end{lemma}
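The construction in this proof is easy to check by machine on a small case. In the sketch below (the helper \texttt{is\_irreducible} is ours), we take the irreducible sequences $2, 2$ in $\mathbb{Z}/4\mathbb{Z}$ and $0$ in $\mathbb{Z}/3\mathbb{Z}$ (both under multiplication), pad them with identities as in the proof, and confirm that the concatenation is irreducible in the product semigroup.

```python
from itertools import combinations

def is_irreducible(seq, mul, identity):
    """True if no proper subsequence of seq (including the empty one,
    whose product is the identity) has the same product as seq itself."""
    def prod(ix):
        p = identity
        for i in ix:
            p = mul(p, seq[i])
        return p

    total = prod(range(len(seq)))
    return not any(prod(ix) == total
                   for r in range(len(seq))
                   for ix in combinations(range(len(seq)), r))

mul4 = lambda a, b: (a * b) % 4
mul3 = lambda a, b: (a * b) % 3
mul_pair = lambda x, y: (mul4(x[0], y[0]), mul3(x[1], y[1]))

assert is_irreducible([2, 2], mul4, 1)    # length D(Z/4Z) - 1 = 2
assert is_irreducible([0], mul3, 1)       # length D(Z/3Z) - 1 = 1
# identity-padded concatenation, as in the proof of the lemma:
assert is_irreducible([(2, 1), (2, 1), (1, 0)], mul_pair, (1, 1))
print("product construction verified")
```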
\section{Proof of \texorpdfstring{\Cref{thm4}}{Theorem \ref{thm4}}}
We will first handle the case where $R$ is infinite. In particular, we
will show that if $\abs{R} = \infty$, then $D(R) = \infty$.
\begin{remark}
The corresponding theorem for semigroups clearly does not hold: let $S$ be the
semigroup with underlying set $\mathbb{Z}^{\ge 0}$ and operation $a * b = \max(a,
b)$. Then $S$ is infinite but $D(S) = 2$.
\end{remark}
\begin{lemma}
\label{inflemma}
Suppose $G$ is an infinite abelian group. Then $D(G) = \infty$.
\begin{proof}
Let $g_1, \dots, g_n$ be an irreducible sequence and write $s = g_1\cdots g_n$.
The extended sequence $g_1, \dots, g_n, g$ is reducible if and only if
$g = s^{-1} g_{i_1}\cdots g_{i_k}$ for some (possibly empty) subsequence: a
proper subsequence witnessing reducibility cannot contain $g$, for otherwise
the remaining $g_i$'s would form a proper subsequence of $g_1, \dots, g_n$
with product $s$. There are at most $2^n$ such $g$, so in an infinite group we
can always extend an irreducible sequence to a longer irreducible sequence,
which means that $D(G) = \infty$.
\end{proof}
\end{lemma}
\begin{lemma}
Suppose $R$ is a ring and $I\subseteq R$ is an ideal. Then $D(R)\ge D(R / I)$.
\begin{proof}
This follows from the fact that any irreducible sequence in $R / I$ can
be lifted to an irreducible sequence in $R$.
\end{proof}
\end{lemma}
We are now ready to give the proof of \Cref{thm4}:
\begin{theorem*}
If $R$ is a commutative ring and $D(R) < \infty$, then $\abs{R} < \infty$.
\begin{proof}
Suppose $D(R) = k < \infty$.
\begin{itemize}
\item $\abs{R / \mathfrak{m}}$ is finite for every maximal ideal $\mathfrak{m}
\subseteq R$. Otherwise, $U(R / \mathfrak{m})$ is an infinite abelian
group, so $D(R / \mathfrak{m}) = \infty$, which means that $D(R) =
\infty$ as well.
\item $R$ has at most $k - 1$ maximal ideals. Suppose otherwise, that
$R$ has $k$ maximal ideals $\mathfrak{m}_1, \dots, \mathfrak{m}_k$. Then by
the Chinese Remainder theorem, there exists $a_1, \dots, a_k\in
R$ such that $a_i \in \mathfrak{m}_i$ and $a_i - 1 \in \mathfrak{m}_j$ for
$i\neq j$, and $a_1, \dots, a_k$ would be an irreducible
sequence of length $k$, contradiction.
\item Let $J(R)$ denote the Jacobson radical of $R$, the
intersection of all of the maximal ideals of $R$. Since $R$
has a finite number of maximal ideals,
by the Chinese Remainder Theorem, $R / J(R)\simeq \prod_{i}
R / \mathfrak{m}_i$, which means that $R / J(R)$ is finite.
\item On the other hand, $\abs{U(R)} < \infty$, for if $\abs{U(R)} = \infty$, then $D(U(R)) = \infty$, so $D(R) =
\infty$ as well.
\item Finally, $1 + J(R)\subseteq U(R)$, which means that
$\abs{J(R)} < \infty$ as well. Thus $\abs{R} = \abs{R / J(R)}
\cdot \abs{J(R)} < \infty$.
\end{itemize}
\end{proof}
\end{theorem*}
\section{Relative Davenport constant}
In order to prove \Cref{thm1} (and its subsequent corollaries), we need to introduce the notion of the \textit{relative Davenport constant}, which measures the longest irreducible sequence in $S$ with product lying in an ideal $T\subseteq S$.
\begin{definition}
A set $T\subseteq S$ is an ideal if $S\cdot T\subseteq T$. ($T = \emptyset$
is allowed here.)
\end{definition}
Suppose $S_1, S_2$ are semigroups and $T_1\subseteq S_1, T_2\subseteq S_2$ are
ideals. Then $T_1\times T_2\subseteq S_1\times S_2$ is also an ideal.
\begin{definition}
Given a semigroup $S$ and an ideal $T\subseteq S$, $A$ is a \textit{$T$-sequence}
if $\pi(A)\in T$.
\end{definition}
\begin{definition}
Given a semigroup $S$ and a (possibly empty) ideal $T\subseteq S$, define
the \textit{relative Davenport constant} $D(S, T)$ to be the minimum positive integer
$k$ such that any $T$-sequence $A$ of size $k$ is reducible.
\end{definition}
We can again define $d(S, T)$ as the maximum length of an irreducible $T$-sequence, and we once again have the relation $D(S, T) = d(S, T) + 1$.
\begin{remark}
If $T = \emptyset$ is the empty ideal, then $d(S, T)$ is tautologically 0 and so $D(S, T)$ is 1.
\end{remark}
Before we proceed, we first need the following helpful lemma.
\begin{lemma}
\label{rellemma}
Let $S$ be a semigroup and $T$ be an ideal. Suppose $s\in S$ and $A$ is a
sequence in $S$ of length $D(S, T)$ such that $s\cdot \pi(A) \in T$. Then
there exists a proper subsequence $A'\subsetneq A$ such that $s\pi(A') =
s\cdot \pi(A)$.
\begin{proof}
If $\pi(A)\in T$, then we are done by the definition of $D(S, T)$.
Otherwise, consider the sequence $B = A\cup \{s\}$. Since $\abs{B} =
D(S, T) + 1 > D(S, T)$, there exists a proper subsequence $B'\subsetneq
B$ such that $\pi(B') = \pi(B) = s\cdot \pi(A)$. Note that if $s\notin B'$,
then $B'\subseteq A$, which means that $\pi(B')\notin T$ (as
$\pi(A)\notin T$ by definition), whereas $\pi(B) \in T$, contradiction.
Thus $s\in B'$, so if we let $A' = B' \backslash \{s\}$, then $s\cdot \pi(A')
= \pi(B') = \pi(B) = s\cdot \pi(A)$, as desired.
\end{proof}
\end{lemma}
\begin{lemma}
\label{grplem}
Suppose $G$ is an abelian group, $H$ is a subgroup of $G$, $S$ is a
semigroup, and $T\subseteq S$ is an ideal.
Then
\[
D(G \times S, G \times T) \ge D((G / H) \times S, (G / H) \times T) +
D(H) - 1.
\]
\begin{proof}
Let $\ell_1 = D((G / H)\times S, (G / H)\times T) - 1$ and $\ell_2 = D(H) - 1$.
Then there exists an irreducible ($(G/ H)\times T$)-sequence $(g_1,
s_1), \dots, (g_{\ell_1}, s_{\ell_1})$ in $(G / H) \times S$ of length
$\ell_1$, and an irreducible sequence $h_1, \ldots, h_{\ell_2}$ in $H$
of length $\ell_2$. In this case, if we let $g_i'$ be any lift of $g_i$
from $G / H$ to $G$, then the sequence
\[
(g_1', s_1), \ldots, (g_{\ell_1}', s_{\ell_1}), (h_1, 1_S), \ldots,
(h_{\ell_2}, 1_S)
\]
is an irreducible ($G\times T$)-sequence in $G\times S$ of length
\[
\ell_1 + \ell_2 = D((G / H)\times S, (G / H)\times T) + D(H) - 2.
\]
\end{proof}
\end{lemma}
\section{Unit-Stabilized Pairs}
\begin{definition}
An ordered pair $(S,T)$ of a semigroup $S$ and an ideal $T\subseteq S$ is a
\textit{unit-stabilized pair} if for all $a, b\in S$ with $ab\notin T$
and $\Stab_{U(S)}(a) = \Stab_{U(S)}(ab)$, we have $ab = au$ for some $u \in U(S)$.
\end{definition}
\begin{definition}
We say that $S$ is a \textit{unit-stabilized semigroup} if $(S, \emptyset)$ is a unit-stabilized pair.
\end{definition}
\begin{example}
If $R$ is a finite field other than $\mathbb{F}_2$, then [the semigroup of] $R$ [under
multiplication] is unit-stabilized. $(\mathbb{F}_2, 0)$ is a unit-stabilized pair.
\end{example}
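Since the defining condition quantifies over finitely many pairs, it can be verified mechanically for small multiplication tables. The Python sketch below (the function name is ours) does exactly this, and confirms, for instance, that $\mathbb{Z}/4\mathbb{Z}$ is not unit-stabilized while $(\mathbb{Z}/4\mathbb{Z}, \{0\})$ is a unit-stabilized pair.

```python
def is_unit_stabilized_pair(elements, mul, identity, T):
    """Check the defining condition of a unit-stabilized pair (S, T)
    by enumerating all a, b in S.  Feasible only for tiny semigroups."""
    elements = list(elements)
    units = [u for u in elements
             if any(mul(u, v) == identity for v in elements)]

    def stab(a):
        # Stab_{U(S)}(a): the units fixing a
        return frozenset(u for u in units if mul(u, a) == a)

    return all(any(mul(a, u) == mul(a, b) for u in units)
               for a in elements for b in elements
               if mul(a, b) not in T and stab(a) == stab(mul(a, b)))

mul4 = lambda a, b: (a * b) % 4
print(is_unit_stabilized_pair(range(4), mul4, 1, set()))  # False
print(is_unit_stabilized_pair(range(4), mul4, 1, {0}))    # True
```

The same check confirms that the multiplicative semigroup of a small field other than $\mathbb{F}_2$, e.g. $\mathbb{Z}/5\mathbb{Z}$, is unit-stabilized.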
It turns out that most local rings $R$ are unit-stabilized.
Unit-stabilized rings are nice because they satisfy $\Delta(R) = 0$ and are closed
under the operation of taking products. However, if $R$ has residue field $\mathbb{F}_2$, then
$R$ is in fact not unit-stabilized: in this case $U(R) = 1 + \mathfrak{m}$, where $\mathfrak{m}$ is the maximal
ideal of $R$. Thus if $x\in R$ satisfies $\Ann_R(x) = \mathfrak{m}$, then $\Stab_{U(R)}(x) = \Stab_{U(R)}(0) = U(R)$, but clearly $0\neq ux$ for any unit $u$.
As we can see, the unit-stabilized behavior breaks down around 0. However,
even when $R$ is not unit-stabilized, it turns out that $(R, 0)$ will be a unit-stabilized pair.
We have the following formula that helps for working with unit-stabilized pairs:
\begin{theorem}
\label{relthm}
Suppose $(S, T)$ is a unit-stabilized pair and $S'$ is a semigroup and $T'$
is an ideal in $S'$.
Then
\begin{equation*}
D(S\times S', S\times T') = \max[D(U(S)\times S', U(S)\times T'),
D(S\times S', T \times T')].
\end{equation*}
\begin{proof}
First off, it is clear that $D(S\times S', S \times T')\ge
D(U(S)\times S', U(S)\times T')$ and $D(S\times S', S\times T')\ge
D(S\times S', T\times T')$, so it suffices to show the inequality
in the other direction.
Let
\[
\ell = \max[D(U(S)\times S', U(S)\times T'), D(S\times S', T \times T')].
\]
Suppose we have a sequence $A = a_1,\ldots,a_\ell$ in $S\times S'$ with $\pi(A)\in
S\times T'$, and suppose for the sake of contradiction that $A$ is
irreducible.
Let $P_{S}$ and $P_{S'}$ denote the projection maps from $S\times S'$ to
$S$ and $S'$, respectively.
If $P_S(\pi(A))\in T$, then $\pi(A)\in T\times T'$, and by the
definition of $D(S\times S', T \times T')$ we automatically have that
$A$ is not irreducible.
Thus $P_S(\pi(A))\notin T$.
Let $B$ be a minimal subsequence of $A$ such that
\[
\Stab_{U(S)}(P_S(\pi(B))) = \Stab_{U(S)}(P_S(\pi(A))),
\]
and without loss of generality suppose that $B = \{a_1, \ldots, a_k\}$. Let
$p_0 = (1_{S}, 1_{S'})$, and for $1\le i\le \ell$, let $p_i = a_ip_{i - 1}$ and
$K_i = \Stab_{U(S)}(P_S(p_i))$. Then we have
\[
\{1\} = K_0\subseteq K_1\subseteq \dots \subseteq K_\ell.
\]
In addition, for $1\le i\le k$, $K_{i-1}\neq K_i$, so by \Cref{grplem},
\begin{align*}
D(K_i)\ge D(K_{i - 1}) + D(K_i / K_{i - 1}) - 1 \ge D(K_{i - 1}) + 1,
\end{align*}
which means that $D(K_k)\ge k + 1$. Applying \Cref{grplem} again, we have,
\begin{align*}
D((U(S) / K_k) \times S', (U(S) / K_k)\times T') \le
D(U(S) \times S', U(S) \times T') - D(K_k) + 1
\le \ell - k.
\end{align*}
In addition, for $k < i \le \ell$,
\[
\Stab(P_S(p_k)) \subseteq \Stab(P_S(a_ip_k))
\subseteq \Stab(P_S(p_\ell)) = \Stab(P_S(p_k)),
\]
so the inclusions are equalities.
Since $P_S(p_\ell) = P_S(\pi(A))\notin T$ and $(S, T)$ is a unit-stabilized pair,
for each $k < i \le \ell$ there exists a $u_i\in U(S)$ such that
$u_iP_S(p_k) = P_S(a_i)P_S(p_k)$. Let $t_i =
(u_i, P_{S'}(a_i))$. We have $p_kt_i = p_ka_i$ for $k < i\le \ell$.
Since $t_{k+1}, \ldots, t_{\ell}$ is a sequence of length \[
\ell - k\ge D((U(S) / K_k) \times S', (U(S) / K_k)\times T')
\]
and $(1, P_{S'}(p_k))\cdot t_{k+1}\cdots t_{\ell} \in U(S)\times T'$, by \Cref{rellemma},
there exists a
(possibly empty) proper subsequence $t_{i_1}, \ldots, t_{i_m}$ such that $
(1, P_{S'}(p_k))\cdot t_{i_1}\cdots t_{i_m}\cdot (u, 1_{S'}) = (1, P_{S'}(p_k))\cdot t_{k+1}\cdots t_{\ell}$ for
some $u\in K_k$.
Then since $u\in \Stab(P_S(p_k))$, $(u, 1) \cdot p_k = p_k$, so
\begin{align*}
a_1a_2\cdots a_k \cdot a_{i_1}\cdots a_{i_m} &= p_k \cdot a_{i_1}\cdots a_{i_m} \\
&= p_k \cdot t_{i_1}\cdots t_{i_m}\\
&= (1, P_{S'}(p_k))\cdot (P_S(p_k), 1) \cdot t_{i_1}\cdots t_{i_m} \\
&= (1, P_{S'}(p_k))\cdot (P_S(p_k), 1) \cdot (u, 1) \cdot t_{i_1}\cdots t_{i_m} \\
&= (1, P_{S'}(p_k))\cdot (P_S(p_k), 1) \cdot t_{k + 1}\cdots t_{\ell} \\
&= (1, P_{S'}(p_k))\cdot (P_S(p_k), 1) \cdot a_{k + 1}\cdots a_{\ell} \\
&= p_k \cdot a_{k+1}\cdots a_\ell \\
&= a_1\cdots a_\ell = \pi(A).
\end{align*}
This contradicts the assumption that $A$ was irreducible.
Thus $D(S\times S', S \times T')\le \max[D(U(S)\times S', U(S)\times
T'), D(S\times S', T \times T')]$, as desired.
\end{proof}
\end{theorem}
\begin{corollary}
Suppose $(S, T)$ is a unit-stabilized pair. Then $D(S) = \max[D(U(S)), D(S,
T)]$.
\begin{proof}
Apply \Cref{relthm} to the case where $S'$ is the trivial group
and $T' = S'$.
\end{proof}
\end{corollary}
\begin{corollary}
If $S$ is a unit-stabilized semigroup, then $D(S) = D(U(S))$.
\end{corollary}
\section{Almost Unit-Stabilized semigroups}
\begin{lemma}
\label{zerolemma}
If $S$ has a 0 element, then for all semigroups $S'$,
\[
D(S \times S', \{0\}\times T') = D(S', T') + D(S, \{0\}) - 1.
\]
\begin{proof}
First we will show that $D(S\times S', \{0\}\times T')\le D(S', T') +
D(S, \{0\}) - 1$.
Let $\ell = D(S', T') + D(S, \{0\}) - 1$, and suppose we have an
irreducible ($\{0\}\times T'$)-sequence $(s_1, s_1'), \ldots, (s_\ell,
s_\ell')$. Consider the smallest subset such that the first coordinate
of the product is equal to 0; it is an irreducible $\{0\}$-sequence in
$S$, so by the definition of $D(S, \{0\})$ it has size at most $k
= D(S, \{0\}) - 1$. Without loss of
generality suppose $s_1\cdots s_k = 0$, and let $p = s_1'\cdots s_k'$. Then the sequence $s_{k+1}',\dots,
s_\ell'$ has length $D(S', T')$, so by \Cref{rellemma} there is some
proper subsequence such that $ps_{i_1}'\cdots s_{i_r}' = ps_{k +
1}'\cdots s_{\ell}'$. The proper subsequence $(s_1, s_1'), \ldots, (s_k,
s_k'), (s_{i_1}, s_{i_1}'), \ldots, (s_{i_r}, s_{i_r}')$ then has the same
product as the full sequence (both first coordinates are $0$, and the
second coordinates agree by the choice of $i_1, \ldots, i_r$),
contradiction.
Now to show the reverse inequality, it suffices to construct an
irreducible ($\{0\}\times T'$)-sequence of length $D(S', T') + D(S,
\{0\}) - 2$. Let $\ell_1 = D(S', T') - 1$ and $\ell_2 = D(S, \{0\}) -
1$. By definition, there exists an irreducible $T'$-sequence $s_1',
s_2', \ldots, s_{\ell_1}'\in S'$ and an irreducible $\{0\}$-sequence
$s_1, s_2, \ldots, s_{\ell_2}$. Then $(1_{S},
s_1'), \dots, (1_{S}, s_{\ell_1}'), (s_{1}, 1_{S'}), \dots, (s_{\ell_2},
1_{S'})$ is an irreducible $(\{0\}\times T')$-sequence of length $\ell_1
+ \ell_2 = D(S', T') + D(S, \{0\}) - 2$, as desired.
\end{proof}
\end{lemma}
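On small examples the identity can be confirmed by brute force. The sketch below (the helper \texttt{relative\_davenport} is ours) computes $D(S, T)$ by enumerating multisets; taking $S = \mathbb{Z}/4\mathbb{Z}$ and $S' = \mathbb{Z}/2\mathbb{Z}$ under multiplication with $T' = \{0\}$, it recovers $D(S\times S', \{0\}\times\{0\}) = D(S', \{0\}) + D(S, \{0\}) - 1 = 2 + 3 - 1 = 4$.

```python
from itertools import combinations, combinations_with_replacement

def relative_davenport(elements, mul, identity, T):
    """Brute-force D(S, T): the smallest k such that every length-k
    sequence whose product lies in the ideal T is reducible."""
    def prod(xs):
        p = identity
        for x in xs:
            p = mul(p, x)
        return p

    def reducible(seq):
        total = prod(seq)
        return any(prod(seq[i] for i in ix) == total
                   for r in range(len(seq))
                   for ix in combinations(range(len(seq)), r))

    k = 1
    while True:
        if all(reducible(seq)
               for seq in combinations_with_replacement(elements, k)
               if prod(seq) in T):
            return k
        k += 1

# D(Z/2Z, {0}) = 2 and D(Z/4Z, {0}) = 3, so the lemma predicts
# D(Z/4Z x Z/2Z, {0} x {0}) = 2 + 3 - 1 = 4.
mul42 = lambda x, y: ((x[0] * y[0]) % 4, (x[1] * y[1]) % 2)
pairs = [(a, b) for a in range(4) for b in range(2)]
print(relative_davenport(pairs, mul42, (1, 1), {(0, 0)}))  # 4
```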
\begin{definition}
A semigroup $S$ is \textit{almost unit-stabilized} if $S$ has a 0 element
and $(S, \{0\})$ is a unit-stabilized pair.
\end{definition}
\begin{remark}
If $S$ is unit-stabilized and contains a 0 element, then $S$ is also almost unit-stabilized.
\end{remark}
\begin{lemma}
\label{auslem}
Suppose $S$ is almost unit-stabilized, and suppose $S'$ is any other semigroup.
Then
\begin{equation*}
D(S\times S') = \max(D(U(S)\times S'), D(S, \{0\}) + D(S') - 1).
\end{equation*}
\begin{proof}
This follows directly from \Cref{relthm} and
\Cref{zerolemma}.
\end{proof}
\end{lemma}
\begin{corollary}
\label{auscor}
Suppose $S$ is almost unit-stabilized and $\Delta(S) = 0$. Then for any
semigroup $S'$, $D(S\times S') = D(U(S)\times S')$.
\begin{proof}
If $\Delta(S) = 0$, then $D(S, \{0\})\le D(U(S))$. Thus by \Cref{prodlemma},
\begin{align*}
D(S, \{0\}) + D(S') - 1 \le D(U(S)) + D(S') - 1\le D(U(S)\times S'),
\end{align*}
which means that by \Cref{auslem},
\[
D(S\times S') = \max(D(U(S) \times S'), D(S, \{0\}) + D(S') - 1) = D(U(S) \times S').
\]
\end{proof}
\end{corollary}
\begin{theorem}
\label{austhm}
Suppose $S_1, \dots, S_k$ are almost unit-stabilized semigroups. Suppose further that
for $i > \ell$, $\Delta(S_i) = 0$. Then
\begin{equation*}
D(S_1\times\dots\times S_k) = \max_{I\subseteq [1,\ell]}\left[D\left(\prod_{i\in
[1,k]\backslash I} U(S_i)\right) + \sum_{i\in I}
(D(S_i, \{0\}) - 1)\right].
\end{equation*}
\begin{proof}
First, by $\ell$ applications of \Cref{auslem}, we have that for any semigroup $S'$,
\begin{equation*}
D(S_1\times\dots\times S_\ell\times S')
= \max_{I\subseteq [1,\ell]}\left[D\left(\left(\prod_{i\in
[1,\ell]\backslash I} U(S_i)\right)\times S'\right) + \sum_{i\in I} (D(S_i,
\{0\}) - 1)\right].
\end{equation*}
Now if we let $S' = S_{\ell + 1} \times \dots \times S_k$ and apply
\Cref{auscor} $k - \ell$ times, we have
\begin{equation*}
D(S_1\times\dots\times S_k)
= \max_{I\subseteq [1,\ell]}\left[D\left(\prod_{i\in [1,k]\backslash I}
U(S_i)\right) + \sum_{i\in I} (D(S_i, \{0\}) - 1)\right].
\end{equation*}
\end{proof}
\end{theorem}
\begin{corollary}
If $T = S_1 \times\dots\times S_k$ is a product of almost unit-stabilized
semigroups, then
\begin{equation*}
\Delta(T) \le \Delta(S_1)+\dots+\Delta(S_k).
\end{equation*}
\begin{proof}
Applying \Cref{austhm} to the (trivial) case where $\ell = k$, we
have
\begin{equation*}
D(T) = \max_{I\subseteq [1,k]}\left[D\left(\prod_{i\in
[1,k]\backslash I} U(S_i)\right) + \sum_{i\in I} (D(S_i, \{0\}) -
1)\right].
\end{equation*}
However, using the inequality $D(G)\ge D(G / H) + D(H) - 1$ (a special case
of \Cref{grplem} where $S$ is the trivial group), we have that for any
$I\subseteq [1,k]$,
\begin{align*}
D\left(\prod_{i\in [1,k]\backslash I} U(S_i)\right) + \sum_{i\in I}
(D(U(S_i)) - 1) \le D\left(\prod_{i\in [1,k]} U(S_i)\right) =
D(U(T)).
\end{align*}
Thus
\begin{align*}
D(T) &= \max_{I\subseteq [1,k]}\left[D\left(\prod_{i\in
[1,k]\backslash I} U(S_i)\right) + \sum_{i\in I} (D(S_i, \{0\}) -
1)\right] \\ &\le \max_{I\subseteq [1,k]}\left[D(U(T)) + \sum_{i\in I}
(D(S_i, \{0\}) - D(U(S_i)))\right] \\ &= D(U(T)) + \sum_{i = 1}^k \max(0,
D(S_i, \{0\}) - D(U(S_i)))\\
&= D(U(T)) + \sum_{i = 1}^k \Delta(S_i),
\end{align*}
from which the desired result immediately follows.
\end{proof}
\end{corollary}
\begin{remark}
It would be nice if it were true in general that $\Delta(S_1\times S_2) \le
\Delta(S_1) + \Delta(S_2)$. However, this is not true, even when one of
$S_1$ or $S_2$ is almost unit-stabilized! For an example, let $S_1 = \mathbb{Z} /
3\mathbb{Z}$ and let $S_2 = S\times S'$, where $S' = \mathbb{Z}/4\mathbb{Z}$ and $S = \{0, 1, 2,
4\}\subseteq \mathbb{Z}/7\mathbb{Z}$.
We have the following:
\begin{itemize}
\item $U(S_1)\simeq C_2, U(S)\simeq C_3, U(S')\simeq C_2$.
\item $S_1, S, S'$ are unit-stabilized.
\item $D(S_1) = D(U(S_1)) = 2$.
\item $D(S) = D(U(S)) = 3$.
\item $D(S') = D(S', \{0\}) = 3$. $D(U(S')) = 2$.
\item $D(S_2) = D(S\times S') = D(U(S)\times S') = \max(D(U(S)\times
U(S')), D(U(S)) + D(S', \{0\}) - 1) = \max(6, 3 + 3 - 1) = 6$,
$D(U(S_2)) = D(C_6) = 6$.
\end{itemize}
However,
\begin{align*}
D(S_1\times S_2) = D(S_1\times S \times S') &= D(U(S_1)\times U(S)\times S') \\
&= \max(D(C_6\times U(S')), D(C_6) + D(S', \{0\}) - 1) \\
&= \max(7, 6 + 3 - 1) = 8.
\end{align*}
On the other hand,
\[
D(U(S_1\times S_2)) = D(U(S_1)\times U(S)\times U(S')) = D(C_6\times C_2) = 7.
\]
Thus $\Delta(S_1) = \Delta(S_2) = 0$ but $\Delta(S_1\times S_2) = 1$. In
fact, it is possible to construct $S_1, S_2$ such that $\Delta(S_1) =
\Delta(S_2) = 0$ and one of $S_1, S_2$ is almost unit-stabilized, but
$\Delta(S_1\times S_2)$ is arbitrarily large.
\end{remark}
\section{Local Rings}
In this section, we look at the case where $R$ is a (finite) local ring with
maximal ideal $\mathfrak{m}$ and residue field $k$. It turns out local rings are all
either unit-stabilized or almost unit-stabilized.
\begin{lemma}
For all $a\in R$ there exists a positive integer $n$ such that $a\mathfrak{m}^n = 0$.
\begin{proof}
Since $R$ is finite, the chain of ideals
\[
aR\supseteq a\mathfrak{m}\supseteq a\mathfrak{m}^2\supseteq \cdots
\]
must stabilize, so $a\mathfrak{m}^n = a\mathfrak{m}^{n+1}$ for some positive integer $n$. Then by Nakayama's lemma we must have $a\mathfrak{m}^n = 0$.
\end{proof}
\end{lemma}
\begin{lemma}
If $a\neq 0$ and $\Ann_R(a) = \Ann_R(ab)$, then $b\in U(R)$.
\begin{proof}
Suppose instead that $b\notin U(R)$, so $b\in \mathfrak{m}$.
Consider the minimum $n$ such that $a\mathfrak{m}^n = 0$. (Clearly $n > 0$.) Then $ab\mathfrak{m}^{n-1}
\subseteq a\mathfrak{m}^n = 0$ but $a\mathfrak{m}^{n - 1}\neq 0$, which means that
$\mathfrak{m}^{n-1}\not\subseteq \Ann_R(a)$ but $\mathfrak{m}^{n-1}\subseteq
\Ann_R(ab)$, contradiction.
\end{proof}
\end{lemma}
\begin{theorem}
\label{goodrings}
We have:
\begin{enumerate}
\item If $k \not \simeq \mathbb{F}_2$, then $R$ is unit-stabilized.
\item If $k \simeq \mathbb{F}_2$, then $R$ is almost unit-stabilized.
\end{enumerate}
\begin{proof}
Note that for all $a\in R$, $\Stab_{U(R)}(a) = (1 + \Ann_R(a))
\cap U(R)$. In addition, if $a\neq 0$, then $\Ann_R(a)$ is a
proper ideal of $R$, which means that $\Ann_R(a)\subseteq \mathfrak{m}$, so
$1 + \Ann_R(a)\subseteq 1 + \mathfrak{m}\subseteq U(R)$, which means that $\Stab_{U(R)}(a) = 1
+ \Ann_R(a)$. Thus if $ab\neq 0$ and $\Stab_{U(R)}(a) =
\Stab_{U(R)}(ab)$, then $\Ann_R(a) = \Ann_R(ab)$, which by the previous
lemma implies $b\in U(R)$. In particular, $(R, \{0\})$ is always a
unit-stabilized pair, which proves (2).
In the case where $R / \mathfrak{m}\not\simeq \mathbb{F}_2$, we have $1 +
\mathfrak{m}\subsetneq U(R)$, so for all $a \neq 0$, $\Stab_{U(R)}(a)\subseteq
1 + \mathfrak{m} \subsetneq U(R) = \Stab_{U(R)}(0)$, which means that $R$ is
unit-stabilized.
\end{proof}
\end{theorem}
\begin{corollary}
If $k\not \simeq \mathbb{F}_2$, then $\Delta(R) = 0$.
\end{corollary}
\begin{theorem}
\label{badrings}
Suppose $R$ is a finite local ring. Then $\Delta(R) \le 1$, and equality holds if and only
if $R$ is isomorphic to one of the following rings:
\begin{itemize}
\item $\mathbb{Z} / 2\mathbb{Z}$
\item $\mathbb{Z} / 4\mathbb{Z}$
\item $\mathbb{Z} / 8\mathbb{Z}$
\item $\mathbb{F}_2[x] / (x^2)$
\end{itemize}
\begin{proof}
We have already taken care of the case where $k\not\simeq\mathbb{F}_2$.
Suppose $R$ is a local ring with residue field $\mathbb{F}_2$. Then $\abs{R} = 2^n$ for a positive integer $n$.
We will show the following:
\begin{enumerate}
\item $D(R, 0)\le n + 1$.
\item If $D(R, 0) = n + 1$, then $\mathfrak{m}^{n - 1}\neq 0$.
\item $D(U(R))\ge n$.
\item If $D(U(R)) = n$, then $U(R)\simeq C_2^{n-1}$.
\item If $n \ge 3$ and $\mathfrak{m}^{n - 1}\neq 0$ and $U(R)\simeq C_2^{n
- 1}$, then $\mathfrak{m} = (2)$ and $R\simeq \mathbb{Z} / 2^n\mathbb{Z}$.
\item If $n\ge 4$, then $U(\mathbb{Z} / 2^n\mathbb{Z}) \not \simeq C_2^{n -
1}$.
\item If $n\le 2$, then $R\simeq \mathbb{Z} / 2\mathbb{Z}, \mathbb{Z} / 4\mathbb{Z}$, or $\mathbb{F}_2[x]
/ (x^2)$.
\end{enumerate}
To finish the proof, note that since $R$ is almost unit-stabilized,
$D(R) = \max(D(U(R)), D(R, 0))$, so from (1) and (3) we immediately get
$\Delta(R)\le 1$. Now suppose $\Delta(R) = 1$. Since $D(R, 0) \le n + 1$,
$D(U(R))\ge n$, and $\Delta(R) = \max(0, D(R, 0) - D(U(R)))$, equality
must hold in both cases. Thus by (2), $\mathfrak{m}^{n - 1}\neq 0$; by (4),
$U(R)\simeq C_2^{n - 1}$; by (5), this means that either $\abs{R}\le 4$
or $R\simeq \mathbb{Z} / 2^n\mathbb{Z}$. In the first case, by (7), $R$ must be either
$\mathbb{Z} / 2\mathbb{Z}, \mathbb{Z} / 4\mathbb{Z}$, or $\mathbb{F}_2[x] / (x^2)$. Otherwise, by (6), we
must have $n = 3$, in which case $R\simeq \mathbb{Z} / 8\mathbb{Z}$.
Finally, we have the following values of $D(R)$ and $D(U(R))$ for $R = \mathbb{Z} / 2\mathbb{Z}$, $\mathbb{Z} /
4\mathbb{Z}$, $\mathbb{Z} / 8\mathbb{Z}$, and $\mathbb{F}_2[x] / (x^2)$:
\begin{center}
\begin{tabular}{c|c|c}
$R$ & $D(R)$ & $D(U(R))$ \\
\hline
$\mathbb{Z} / 2\mathbb{Z}$ & 2 & 1 \\
$\mathbb{Z} / 4\mathbb{Z}$ & 3 & 2 \\
$\mathbb{Z} / 8\mathbb{Z}$ & 4 & 3 \\
$\mathbb{F}_2[x] / (x^2) $ & 3 & 2 \\
\end{tabular}
\end{center}
In each of these cases, we have $\Delta(R) = 1$.
Here are proofs of the 7 claims:
\begin{enumerate}
\item Suppose $a_1, \dots, a_{n+1}$ is an irreducible sequence in
$R$ with $a_1 \cdots a_{n + 1} = 0$. Then none of $a_1, \dots,
a_{n+1}$ are in $U(R)$, or else they could be omitted. In
addition, $a_1\cdots a_n\neq 0$. Thus if $p_i = a_1\cdots a_i$, we
have $\Stab_{U(R)}(p_i)\subsetneq \Stab_{U(R)}(p_{i+1})$ for
$0\le i< n$. In particular, this means that
$\abs{\Stab_{U(R)}(p_i)}\ge 2^i$. However, when $i = n$ this
gives $2^{n - 1} = \abs{U(R)}\ge \abs{\Stab_{U(R)}(p_n)} \ge
2^n$, a contradiction.
\item Suppose $D(R, 0) = n + 1$, and let $a_1, \dots, a_n$ be an
irreducible sequence in $R$ with $a_1\cdots a_n = 0$. Then none
of the $a_i$ are units, so $a_i\in \mathfrak{m}$. In addition,
$a_1\cdots a_{n-1} \neq 0$, which means that $\mathfrak{m}^{n - 1}\neq
0$.
\item We have that $\abs{U(R)} = 2^{n - 1}$. Thus we can write
\[
U(R) = C_{2^{e_1}}\times \cdots \times C_{2^{e_r}},
\]
where $e_1 +
\dots + e_r = n - 1$. Since $U(R)$ is a 2-group, by
\cite{Olson1969},
\[
D(U(R)) = 1 + \sum_{i = 1}^r (D(C_{2^{e_i}}) - 1) = 1 +
\sum_{i = 1}^r (2^{e_i} - 1)
\ge 1 + \sum_{i = 1}^r e_i = n.
\]
Here we also used the inequality $2^e - 1\ge e$, which holds for all positive integers $e$.
\item In the above equation, equality holds if and only if each of the $e_i$ is
equal to 1, in which case $U(R)\simeq C_2^{n - 1}$.
\item
Since $\mathfrak{m}^{n- 1} \neq 0$, by Nakayama's lemma we have that $\mathfrak{m}^i\subsetneq
\mathfrak{m}^{i - 1}$ for $1\le i\le n$, in which case
$2\abs{\mathfrak{m}^i}\le \abs{\mathfrak{m}^{i - 1}}$. Thus we have
\[
2^{n} = \abs{\mathfrak{m}^0} \ge 2\abs{\mathfrak{m}^1} \ge \dots \ge
2^{n - 1} \abs{\mathfrak{m}^{n - 1}} \ge 2^{n -1 } \cdot 2 = 2^n,
\]
so equality holds in each of the above inequalities. In
particular, we have $\abs{\mathfrak{m} / \mathfrak{m}^2} = 2$, so
\[
\dim_k(\mathfrak{m} / \mathfrak{m}^2) = 1,
\]
which means that $\mathfrak{m}$ is principal.
Suppose $\mathfrak{m} = (t)$. Then $t - 1$ is a unit, and since
$U(R)\simeq C_2^{n - 1}$, we must have $(t - 1)^2 = 1$, or $t^2
= 2t$.
I claim that $2\in \mathfrak{m} \backslash \mathfrak{m}^2$. Suppose otherwise, that $2\in \mathfrak{m}^2$. Then we have $t^2 = 2t\in \mathfrak{m}^3$,
which means that $\mathfrak{m}^2 = (t^2) = \mathfrak{m}^3$. By Nakayama's
lemma, this means that $\mathfrak{m}^2 = 0$, which also means $\mathfrak{m}^{n - 1} = 0$ as $n\ge 3$,
contradiction.
Thus $2\in \mathfrak{m} \backslash \mathfrak{m}^2$, which means that $2
= ut$ for some unit $u$, so $\mathfrak{m} = (t) = (2)$. Finally,
note that
\[
(2^{n - 1}) = \mathfrak{m}^{n - 1} \neq 0,
\] so $2^{n -
1}\neq 0$. Thus in the additive group of $R$, 1 has order $2^n$, which
means that we must have $R\simeq
\mathbb{Z} / 2^n\mathbb{Z}$.
\item If $n\ge 4$, then 3 is a unit in $\mathbb{Z} / 2^n\mathbb{Z}$, but $3^2 = 9 \not
\equiv 1\pmod{2^n}$, so $U(\mathbb{Z} / 2^n\mathbb{Z})\not\simeq C_2^{n - 1}$.
\item When $n = 1$, there is a unique ring with two elements, $\mathbb{Z} /
2\mathbb{Z}$. Now if $n = 2$, then $\abs{\mathfrak{m}} = 2$, so $\mathfrak{m} = \{0,
x\}$ for some $x\neq 0, 1$. Then $R = \{0, 1, x, x + 1\}$. We
have two cases:
\begin{itemize}
\item $2\neq 0$. Then we must have $x = 2$ and $x + 1 =
3$, so $R\simeq \mathbb{Z} / 4\mathbb{Z}$.
\item $2 = 0$. Then $1 = (x + 1)^2 = x^2 + 1$, so $x^2 = 0$,
$(x+1)^2 = 1$, and $x(x+1) = x$. In this case, $R\simeq \mathbb{F}_2[x]
/ (x^2)$.
\end{itemize}
\end{enumerate}
\end{proof}
\end{theorem}
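The table in the proof can be confirmed computationally. The Python sketch below (ours, not part of the paper) brute-forces the Davenport constant of a finite commutative semigroup, with the convention that a sequence is reducible when some proper subsequence, possibly empty with empty product $1$, has the same product as the whole sequence; this convention reproduces the tabulated values of $D(R)$ and $D(U(R))$ for the four ``bad'' rings:

```python
from itertools import combinations, combinations_with_replacement
from math import gcd

def davenport(elems, mul, one):
    """Smallest k such that every length-k sequence over `elems` is
    reducible: some proper subsequence (possibly empty, with empty
    product `one`) has the same product as the whole sequence."""
    def prod(seq):
        r = one
        for x in seq:
            r = mul(r, x)
        return r
    def reducible(seq):
        full = prod(seq)
        return any(prod([seq[i] for i in idx]) == full
                   for r in range(len(seq))
                   for idx in combinations(range(len(seq)), r))
    k = 1
    while True:
        if all(reducible(s) for s in combinations_with_replacement(elems, k)):
            return k
        k += 1

def zn(n):
    """Z/nZ under multiplication, together with its unit group."""
    elems = list(range(n))
    units = [x for x in elems if gcd(x, n) == 1]
    return elems, units, lambda a, b: (a * b) % n

# F_2[x]/(x^2): elements a + b*x encoded as pairs (a, b).
f2x = [(a, b) for a in range(2) for b in range(2)]
f2x_units = [(1, 0), (1, 1)]
f2x_mul = lambda p, q: ((p[0] * q[0]) % 2, (p[0] * q[1] + p[1] * q[0]) % 2)

results = {}
for n in (2, 4, 8):
    elems, units, mul = zn(n)
    results[f"Z/{n}Z"] = (davenport(elems, mul, 1), davenport(units, mul, 1))
results["F2[x]/(x^2)"] = (davenport(f2x, f2x_mul, (1, 0)),
                          davenport(f2x_units, f2x_mul, (1, 0)))
print(results)
```

For example, the sequence $(2,2,2)$ in $\mathbb{Z}/8\mathbb{Z}$ is irreducible (its product is $0$, while every proper subsequence has product $1$, $2$, or $4$), which is why the search returns $D(\mathbb{Z}/8\mathbb{Z}) = 4$.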
\section{Proof of \texorpdfstring{\Cref{thm1}}{Theorem \ref{thm1}}}
Let $R_1 = \mathbb{Z} / 2\mathbb{Z}, R_2 = \mathbb{Z} / 4\mathbb{Z}, R_3 = \mathbb{Z} / 8\mathbb{Z}, R_4 = \mathbb{F}_2[x]/(x^2)$ denote the ``bad'' local rings.
We have that $R$ is of the form
\[
R_1^{k_1} \times R_2^{k_2} \times R_3^{k_3} \times R_4^{k_4
} \times R',
\]
where $R'$ is a product of local rings not isomorphic to $R_1, R_2, R_3$, or $R_4$.
Then by \Cref{goodrings} and \Cref{badrings}, $R'$ is a product of (almost) unit-stabilized local rings, each with $\Delta = 0$. In addition, we have
\begin{itemize}
\item $U(R_1)$ is trivial, $D(R_1) = 1$.
\item $U(R_2)\simeq C_2$, $D(R_2) = 2$.
\item $U(R_3)\simeq C_2 \times C_2$, $D(R_3) = 3$.
\item $U(R_4)\simeq C_2$, $D(R_4) = 2$.
\end{itemize}
Plugging this all into \Cref{austhm}, we have
\begin{align*}
D(R)
&= \max_{n_i\in [0, k_i]} \bigg[
D\left( {C_2}^{(k_2 - n_2) + 2(k_3 - n_3) + (k_4 - n_4)} \times U(R')\right) + n_1 + 2n_2 + 3n_3 + 2n_4
\bigg] \\
&= \max_{n_i\in [0, k_i]} \bigg[
D\left( {C_2}^{k_2 + k_4 + 2k_3 - (n_2 + n_4) - 2n_3} \times U(R')\right) + k_1 + 2(n_2 + n_4) + 3n_3
\bigg] \\
&= \max_{0\le a\le k_2 + k_4, 0\le b\le k_3} \bigg[
D\left( U(R') \times {C_2}^{k_2 + k_4 + 2k_3 - a - 2b} \right) + k_1 + 2a + 3b
\bigg].
\end{align*}
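As a concrete check of this formula (our own example, not from the paper): take $R = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$, so $k_1 = k_2 = 1$, $k_3 = k_4 = 0$, and $R'$ is trivial. The formula gives $D(R) = \max(D(C_2) + 1,\, D(\{1\}) + 3) = \max(3, 4) = 4$, and a brute-force search over the product ring agrees:

```python
from itertools import combinations, combinations_with_replacement

def davenport(elems, mul, one):
    """Brute-force Davenport constant of a finite commutative semigroup;
    a sequence is reducible when some proper subsequence (possibly
    empty, with empty product `one`) has the full product."""
    def prod(seq):
        r = one
        for x in seq:
            r = mul(r, x)
        return r
    def reducible(seq):
        full = prod(seq)
        return any(prod([seq[i] for i in idx]) == full
                   for r in range(len(seq))
                   for idx in combinations(range(len(seq)), r))
    k = 1
    while True:
        if all(reducible(s) for s in combinations_with_replacement(elems, k)):
            return k
        k += 1

# R = Z/2Z x Z/4Z, elementwise multiplication.
elems = [(a, b) for a in range(2) for b in range(4)]
mul = lambda p, q: ((p[0] * q[0]) % 2, (p[1] * q[1]) % 4)
d = davenport(elems, mul, (1, 1))
print(d)  # the formula predicts 4
```

An explicit irreducible sequence of length $3$ witnessing $D(R) \ge 4$ is $((1,2),(1,2),(0,1))$: the full product is $(0,0)$, but no proper subsequence has product $(0,0)$.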
\section{A few more corollaries of \texorpdfstring{\Cref{thm1}}{Theorem \ref{thm1}}}
\begin{theorem}
\label{randomthms}
$D(R) = D(U(R))$ in any of the following scenarios:
\begin{enumerate}
\item $\abs{R}$ is odd.
\item $R$ is an $\mathbb{F}_q$-algebra, where $q$ is a power of 2 other than 2.
\item $R = \mathbb{Z} / n\mathbb{Z}$, where $16\mid n$.
\item $R = \mathbb{F}_2[x] / (f)$, where $v_x(f), v_{x + 1}(f)\notin \{1, 2\}$.
\item $R$ is of the form $\mathbb{Z} / 4\mathbb{Z} \times R'$ or $\mathbb{F}_2[x] / (x^2) \times R'$, where $\abs{U(R')} > 1$ is odd and $\Delta(R') = 0$.
\item $R = \mathbb{F}_2[x] / (x^2g)$, where $g$ is a product of (at least one) distinct irreducibles that are not $x$ or $x + 1$.
\end{enumerate}
\begin{proof}
\
\begin{enumerate}
\item This follows immediately from \Cref{thm3}, as a ring with odd order cannot have any ideals of index 2.
\item Note that any quotient ring of $R$ is also an $\mathbb{F}_q$-algebra, which means that any proper ideal has index at least $q$ in $R$. Thus $n_2(R) = 0$, so by \Cref{thm3}, $\Delta(R) = 0$.
\item In the setting of \Cref{thm2}, it is easy to see that for the ring $R = \mathbb{Z} / n\mathbb{Z}$ with $16\mid n$ we have $k_1 = k_2 = k_3 = k_4 = 0$, which means that $\Delta(R) = 0$ as well.
\item This is a direct consequence of \Cref{polythm} below, which is in turn a consequence of \Cref{thm2}.
\item We first need the following lemma:
\begin{lemma}
Suppose $G$ is a nontrivial group of odd order. Then $D(G\times C_2) > D(G) + 1$.
\begin{proof}
(We will use additive notation for this proof.)
Let $(g_1, \dots, g_\ell)$ be an irreducible
sequence in $G$, where $\ell = D(G) - 1$. Since $G$ has odd order, the map $g\mapsto g + g$ is injective, hence bijective, so multiplication by $\frac12$ is well-defined. Then the sequence
\[
(g_1, 0), \dots, (g_{\ell - 1}, 0),
(\frac12 g_\ell, 1),
(\frac12 g_\ell, 1),
(\frac12 g_\ell, 1)
\]
is an irreducible sequence in $G\times C_2$ of length $\ell + 2 = D(G) + 1$, so $D(G\times C_2) > D(G) + 1$.
\end{proof}
\end{lemma}
Since $\abs{U(R')}$ is odd, the decomposition of $R'$ into a product of local rings cannot have any rings isomorphic to $\mathbb{Z} / 4\mathbb{Z}, \mathbb{Z} / 8\mathbb{Z}$, or $\mathbb{F}_2[x] / (x^2)$. In addition, since $\Delta(R') = 0$, by \Cref{thm2}, this decomposition cannot have any rings isomorphic to $\mathbb{Z} / 2\mathbb{Z}$. Thus the setting of \Cref{thm1} applies; plugging in both $R = \mathbb{Z} / 4\mathbb{Z} \times R'$ and $\mathbb{F}_2[x] / (x^2) \times R'$ gives
\[
D(R) = \max(D(U(R')) + 2, D(U(R') \times C_2)).
\]
However, by the lemma, $D(U(R') \times C_2) > D(U(R')) + 1$, which means that $D(R) = D(U(R') \times C_2) = D(U(R))$.
\item
If $f\in \mathbb{F}_2[x]$ is irreducible, then the unit group of $\mathbb{F}_2[x] / (f)$ has odd order (in particular, it has order $2^{\deg f} - 1$). Thus if $g$ is a product of distinct irreducibles other than $x$ and $x + 1$, then $\abs{U(\mathbb{F}_2[x] / (g))} > 1$ is odd, and by \Cref{thm2}, $\Delta(\mathbb{F}_2[x] / (g)) = 0$. Thus this is a special case of (5), with $R' = \mathbb{F}_2[x] / (g)$ and $R\simeq \mathbb{F}_2[x] / (x^2) \times R'$, so $\Delta(R) = 0$.
\end{enumerate}
\end{proof}
\end{theorem}
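Point (6) rests on $\abs{U(\mathbb{F}_2[x]/(g))}$ being odd when $g$ is a product of distinct irreducibles other than $x$ and $x+1$. For small $g$ this can be checked directly by counting residues coprime to $g$; the sketch below (ours) encodes polynomials over $\mathbb{F}_2$ as bitmasks:

```python
# Polynomials over F_2 encoded as Python ints (bit i = coefficient of x^i).

def pdeg(p):
    return p.bit_length() - 1

def pmod(a, b):
    """Remainder of a divided by b in F_2[x]."""
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def unit_count(f):
    """Number of units in F_2[x]/(f): residues of degree < deg f coprime to f."""
    return sum(1 for g in range(1, 1 << pdeg(f)) if pgcd(g, f) == 1)

# g = x^2 + x + 1 (irreducible): expect 2^2 - 1 = 3 units.
# g = (x^2 + x + 1)(x^3 + x + 1) = x^5 + x^4 + 1: expect 3 * 7 = 21 units.
print(unit_count(0b111), unit_count(0b110001))
```

Both counts are odd, as the argument for (6) requires; replacing $g$ by a non-squarefree polynomial breaks this (e.g. $g = h^2$ contributes a factor of $2^{\deg h}$ to the unit group's order).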
Note that (1) in \Cref{randomthms} is a generalization of \Cref{wangguo} and (2) is a generalization of \Cref{wang}. In addition, we have the following refinement of \Cref{qwz}.
\begin{theorem}
\label{polythm}
Let $f\in \mathbb{F}_2[x]$ be nonconstant and $R = \mathbb{F}_2[x] / (f)$. If $f = x^a (x +
1)^b g$, where $\gcd(g, x(x+1)) = 1$, then
\[
\delta_{a1} + \delta_{b1}\le D(R) - D(U(R)) \le \delta_{a1} + \delta_{b1} +
\delta_{a2} + \delta_{b2},
\]
where $\delta_{ij}$ is the Kronecker delta.
\begin{proof}
Note that if $f$ factors into irreducibles as $f = f_1^{e_1}\cdots f_n^{e_n}$, then the decomposition of $\mathbb{F}_2[x] / (f)$ into a product of local rings is
\[
\mathbb{F}_2[x]/(f) \simeq \mathbb{F}_2[x] / (f_1^{e_1}) \times \dots \times \mathbb{F}_2[x] / (f_n^{e_n}).
\]
In the setting of \Cref{thm2}, we have $k_1 = \delta_{a1} + \delta_{b1}$, $k_2 = k_3 = 0$, and $k_4 = \delta_{a2} + \delta_{b2}$. The desired result immediately follows.
\end{proof}
\end{theorem}
Finally, we give a partial answer to the general problem stated by Wang and Guo in \cite{WangGuo2008}:
\begin{theorem}
If $R = \mathbb{Z}/n_1\mathbb{Z} \times \dots \times \mathbb{Z}/n_r\mathbb{Z}$, then
\[
\#\{1\le i\le r: 2\| n_i\} \le D(R) - D(U(R)) \le \#\{1\le i\le r: 2\mid n_i, 16\nmid n_i\}.
\]
In addition, equality holds on the right when all of the $n_i$ are powers of 2.
\begin{proof}
Again in the setting of \Cref{thm2}, we have $k_4 = 0$, and for $1\le i\le 3$,
\[
k_i = \#\{1\le j\le r: 2^i\| n_j\}.
\]
Thus by \Cref{thm2},
\[
D(R) - D(U(R)) \ge k_1 = \#\{1\le i\le r: 2\| n_i\},
\]
and
\begin{align*}
D(R) - D(U(R))&\le k_1 + k_2 + k_3 + k_4 \\
&= \sum_{i = 1}^3 \#\{1\le j\le r: 2^i\| n_j\}
\\
&= \#\{1\le i\le r: 2\mid n_i, 16\nmid n_i\}.
\end{align*}
In addition, if each of the $n_i$ is a power of 2, $U(R)$ is a 2-group, so by \Cref{thm2}, equality holds on the right.
\end{proof}
\end{theorem}
\section{Further Directions}
Unfortunately, we still do not have a complete classification of the finite rings $R$ for which $D(R) = D(U(R))$, even if we restrict to the simple cases where $R$ is of the form $\mathbb{Z} / n\mathbb{Z}$ or $\mathbb{F}_2[x] / (f)$.
As we can see, these questions depend on the relation between $D(G)$ and $D(G\times C_2^e)$ for an abelian group $G$. This is what we know:
If $G = C_{n_1} \times \dots \times C_{n_r}$, where $n_1 \mid \cdots \mid n_r$, then we can define $M(G) = 1 + \sum_{i = 1}^r (n_i - 1)$. Clearly $D(G)\ge M(G)$. However, we have the following:
\begin{proposition}
\label{gooddelta}
$D(G) = M(G)$ in each of the following cases:
\begin{enumerate}
\item $G$ is a $p$-group. \cite{Olson1969}
\item $G = C_n \times C_{kn}$. \cite{Olson1969}
\item $G = C_{p^ak} \times H$, where $H$ is a $p$-group and $p^a \ge M(H)$. \cite{vebii}
\item $G = C_2 \times C_{2n} \times C_{2kn}$, where $n, k$ are odd and the largest prime dividing $n$ is less than 11. \cite{vebii}
\item $G = C_2^3 \times C_{2n}$, where $n$ is odd. \cite{baayen}
\end{enumerate}
\end{proposition}
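For small groups, instances of \Cref{gooddelta} are easy to confirm by computer. The following brute-force check (our own illustration) verifies $D(G) = M(G)$ for the $2$-group $G = C_2\times C_4$, where $M(G) = 1 + 1 + 3 = 5$:

```python
from itertools import combinations, combinations_with_replacement

# D(G) = smallest k such that every length-k sequence in G has a
# nonempty zero-sum subsequence; here G = C_2 x C_4 (written additively).
MODS = (2, 4)
elems = [(a, b) for a in range(2) for b in range(4)]

def has_zero_sum(seq):
    for r in range(1, len(seq) + 1):
        for idx in combinations(range(len(seq)), r):
            s = [sum(seq[i][j] for i in idx) % MODS[j] for j in range(2)]
            if s == [0, 0]:
                return True
    return False

def davenport():
    k = 1
    while True:
        if all(has_zero_sum(s)
               for s in combinations_with_replacement(elems, k)):
            return k
        k += 1

M = 1 + sum(m - 1 for m in MODS)  # M(C_2 x C_4) = 5
D = davenport()
print(D, M)
```

The zero-sum-free sequence $((1,0),(0,1),(0,1),(0,1))$ of length $4$ witnesses $D(G)\ge 5$; the search confirms that every sequence of length $5$ contains a nonempty zero-sum subsequence.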
(For a more complete list, see \cite{mazur}.) On the other hand, it is not true that $D(G) = M(G)$ for all $G$.
\begin{proposition}
\label{baddelta}
$D(G) > M(G)$ if $G = C_2^e \times C_{2n} \times C_{2nk}$ for $n, k$ odd and $e\ge 3$. \cite{Geroldinger}
\end{proposition}
In what follows, we attempt to compute exact values of $\Delta(R)$ when $R$ is of the form $\mathbb{Z} / n\mathbb{Z}$ or $\mathbb{F}_2[x] / (f)$.
\subsection{\texorpdfstring{$R = \mathbb{Z} / n\mathbb{Z}$}{R = Z / nZ}}
By \Cref{thm3}, we have $\Delta(\mathbb{Z} / n\mathbb{Z}) \le 1$.
\begin{question}
For $n\in\mathbb{Z}^+$, when is $\Delta(\mathbb{Z} / n\mathbb{Z})$ equal to 0 and when is it 1?
\end{question}
Note that we have already resolved this problem for most $n$. In particular, we have the following:
\begin{itemize}
\item If $n$ is odd or $16 \mid n$, $\Delta(R) = 0$.
\item If $n = 2b$ where $b$ is odd, then $\Delta(R) = 1$.
\item If $n = 4b$ where $b$ is odd, then by \Cref{thm1} we have
\[
\Delta(R) = \max(0, D(U(\mathbb{Z} / b\mathbb{Z})) + 2 - D(U(\mathbb{Z} / n\mathbb{Z}))).
\]
However, $U(\mathbb{Z} / n\mathbb{Z}) \simeq U(\mathbb{Z} / b\mathbb{Z}) \times C_2$, which means that
\[
D(U(\mathbb{Z} / n\mathbb{Z})) \ge D(U(\mathbb{Z} / b\mathbb{Z})) + 1.
\]
Thus $\Delta( \mathbb{Z} / n\mathbb{Z}) = 1$ if and only if
\[
D(U(\mathbb{Z} / n\mathbb{Z})) = D(U(\mathbb{Z} / b\mathbb{Z})) + 1.
\]
Using this, we can calculate $\Delta(R)$ in the following special cases using \Cref{gooddelta,baddelta} in conjunction with \Cref{thm1}:
\begin{itemize}
\item If $b$ is a prime power then $\Delta(R) = 1$.
\item If $b$ is a product of distinct primes of the form $2^{2^n} + 1$, then $\Delta(R) = 1$.
\item If $b = p_1^{e_1} p_2^{e_2}$ and $\gcd(\phi(p_1^{e_1}), \phi(p_2^{e_2}))$ has no prime factors greater than 7, then $\Delta(R) = 1$.
\item If $b = p_1^{e_1}p_2^{e_2}p_3^{e_3}$ and $
\phi(p_1^{e_1})/2,
\phi(p_2^{e_2})/2,
\phi(p_3^{e_3})/2
$ are pairwise relatively prime, then $\Delta(R) = 1$.
\item If $b = p_1^{e_1}p_2^{e_2}p_3^{e_3}p_4^{e_4}$, $p_1, p_2, p_3, p_4 \equiv 3\pmod{4}$, and $
\phi(p_1^{e_1})/2,
\phi(p_2^{e_2})/2,
\phi(p_3^{e_3})/2,
\phi(p_4^{e_4})/2
$ are pairwise relatively prime, then $\Delta(R) = 0$.
\end{itemize}
\item If $n = 8b$ where $b$ is odd, then arguing as in the previous case, we have $\Delta(\mathbb{Z} / n\mathbb{Z}) = 1$ if and only if \[
D(U(\mathbb{Z} / n\mathbb{Z})) = D(U(\mathbb{Z} / b\mathbb{Z})) + 2.
\]
Thus we can calculate $\Delta(R)$ in the following special cases:
\begin{itemize}
\item If $b$ is a prime power then $\Delta(R) = 1$.
\item If $b$ is a product of distinct primes of the form $2^{2^n} + 1$, then $\Delta(R) = 1$.
\item If $b = p_1^{e_1} p_2^{e_2}$ and $\gcd(\phi(p_1^{e_1}), \phi(p_2^{e_2})) = 1$, then $\Delta(R) = 1$.
\item If $b = p_1^{e_1}p_2^{e_2}p_3^{e_3}$, $p_1, p_2, p_3\equiv 3\pmod{4}$, and
\[
\gcd[
\phi(p_1^{e_1})/2,
\phi(p_2^{e_2})/2,
\phi(p_3^{e_3})/2] = 2
\]
and for $i\neq j$,
$ \gcd[ \phi(p_i^{e_i})/2, \phi(p_j^{e_j})/2] $ has
no prime factors greater than 7, then $\Delta(R) = 0$.
\item If $b = p_1^{e_1}p_2^{e_2}p_3^{e_3}p_4^{e_4}$, $p_1, p_2, p_3, p_4 \equiv 3\pmod{4}$, and $
\phi(p_1^{e_1})/2,
\phi(p_2^{e_2})/2,
\phi(p_3^{e_3})/2,
\phi(p_4^{e_4})/2
$ are pairwise relatively prime, then $\Delta(R) = 0$.
\end{itemize}
\end{itemize}
However, the general cases $n = 4b, 8b$ remain open.
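For a concrete instance of the $n = 4b$ criterion (our own computation): take $b = 3$. Brute force gives $D(U(\mathbb{Z}/12\mathbb{Z})) = 3$ and $D(U(\mathbb{Z}/3\mathbb{Z})) = 2$, so $D(U(\mathbb{Z}/12\mathbb{Z})) = D(U(\mathbb{Z}/3\mathbb{Z})) + 1$ and hence $\Delta(\mathbb{Z}/12\mathbb{Z}) = 1$, consistent with the prime-power case above:

```python
from itertools import combinations, combinations_with_replacement
from math import gcd

# Davenport constant of the unit group U(Z/nZ) by brute force:
# smallest k such that every length-k sequence of units has a
# nonempty subsequence with product 1 (mod n).
def davenport_units(n):
    units = [x for x in range(1, n) if gcd(x, n) == 1]
    def has_product_one(seq):
        for r in range(1, len(seq) + 1):
            for idx in combinations(range(len(seq)), r):
                p = 1
                for i in idx:
                    p = (p * seq[i]) % n
                if p == 1:
                    return True
        return False
    k = 1
    while True:
        if all(has_product_one(s)
               for s in combinations_with_replacement(units, k)):
            return k
        k += 1

print(davenport_units(12), davenport_units(3))
```

Here $U(\mathbb{Z}/12\mathbb{Z}) = \{1,5,7,11\} \simeq C_2\times C_2$, and the length-$2$ sequence $(5,7)$ has no nonempty subsequence with product $1$, witnessing $D \ge 3$.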
\subsection{\texorpdfstring{$R = \mathbb{F}_2[x]/(f)$}{R = F2[x]/(f)}}
In this case, \Cref{thm3} gives the bound $\Delta(\mathbb{F}_2[x] / (f)) \le 2$.
\begin{question}
For $f\in \mathbb{F}_2[x]$, when is $\Delta(\mathbb{F}_2[x] / (f))$ equal to 0, when is it 1, and when is it 2?
\end{question}
Again, we have already answered this question for most $f\in \mathbb{F}_2[x]$. Let $f = x^a(x+1)^b g$, where $\gcd(g, x(x + 1)) = 1$. Then
\begin{itemize}
\item If $g = 1$, then $\Delta(R) = \delta_{a1} + \delta_{a2} + \delta_{b1} + \delta_{b2}$.
\item If $a, b\neq 2$, then $\Delta(R) = \delta_{a1} + \delta_{b1}$.
\item If $a = 2$ and $b \le 1$ and $\abs{U(\mathbb{F}_2[x] / (g))}$ is odd, then $\Delta(R) = b$.
(Note that this condition corresponds to $g$ being a product of distinct irreducibles.)
\item If $a = b = 2$ and $U(\mathbb{F}_2[x] / (g))$ is cyclic (i.e. product of distinct irreducibles with pairwise relatively prime degree), then $\Delta(R) = 1$.
\item If $a = 2$, $b\in [3, 14]\cup [17, 24] \cup[33, 40]$ and $U(\mathbb{F}_2[x]/(g))$ is cyclic, then $\Delta(R) = 0$.
\end{itemize}
\subsection{Recap}
For general $n\in \mathbb{Z}^+$, the rank of $U(\mathbb{Z} / n\mathbb{Z})$ is roughly the number of distinct prime factors of $n$, which can be arbitrarily large.
Unfortunately, very little is known about $D(G) - M(G)$ for general $G$ of large rank.
In a similar vein, the rank of $U(\mathbb{F}_2[x] / (f))$ for general $f\in \mathbb{F}_2[x]$ can also get arbitrarily large, and even the special case $\Delta(\mathbb{F}_2[x]/(x^2g))$ with $g\in \mathbb{F}_2[x]$ irreducible is tricky to calculate.
For example, \cite{Geroldinger} shows that for any $k > 0$, $D(C_{2k + 1} \times {C_2}^r) =
4k + 1 + r$ for $1\le r\le 4$, but $D(C_{2k + 1} \times {C_2}^5) > 4k + 7$.
If $f$ is an irreducible polynomial in $\mathbb{F}_2[x]$ of degree $d$, one
can check that $U(\mathbb{F}_2[x] / (f^2))\simeq C_{2^d - 1}\times {C_2}^d$.
Thus if $R_d = \mathbb{F}_2[x] / (x^2f_d^2)$, where $f_d$ is an irreducible polynomial of
degree $d$, applying \Cref{thm1} gives
\[
D(R_d) = \max(D(C_{2^d - 1} \times {C_2}^d) + 2, D(C_{2^d - 1} \times
{C_2}^{d+1})),
\]
which gives
$\Delta(R_2) = \Delta(R_3) = 1$ but $\Delta(R_4) = 0$. The general problem of determining $\Delta(\mathbb{F}_2[x] / (f))$ for non-squarefree $f\in \mathbb{F}_2[x]$ is still open for reasons such as this. For more information about the growth of $D(C_{2k + 1} \times {C_2}^d)$, we refer the reader to \cite{mazur}.
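The structure claim $U(\mathbb{F}_2[x]/(f^2))\simeq C_{2^d - 1}\times {C_2}^d$ can be spot-checked computationally (our own check) for $d = 2$ with $f = x^2+x+1$, so $f^2 = x^4+x^2+1$: the unit group should have order $(2^2-1)\cdot 2^2 = 12$ and exactly $2^2 = 4$ solutions of $g^2 = 1$, which distinguishes $C_3\times C_2^2$ from $C_{12}$:

```python
# Polynomials over F_2 as ints (bit i = coefficient of x^i).

def pdeg(p):
    return p.bit_length() - 1

def pmod(a, b):
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def pmul(a, b, m):
    """Product of a and b in F_2[x]/(m): carry-less multiply, then reduce."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return pmod(r, m)

m = 0b10101  # f^2 = x^4 + x^2 + 1, where f = x^2 + x + 1
units = [g for g in range(1, 1 << pdeg(m)) if pgcd(g, m) == 1]
involutions = [g for g in units if pmul(g, g, m) == 1]
print(len(units), len(involutions))  # expect 12 and 4
```

Indeed $g^2 \equiv 1 \pmod{f^2}$ exactly when $f \mid g + 1$, so the involutions are $1 + f\cdot\{0, 1, x, x+1\}$, four elements in total.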
\subsection{Infinite Semigroups}
Finally, we look at the case where $S$ is an infinite commutative semigroup.
By \Cref{thm4} and \Cref{inflemma}, if $S$ is either
\begin{itemize}
\item a group, or
\item the semigroup of a commutative ring $R$ under multiplication,
\end{itemize}
then $D(S)$ being finite implies that $S$ is finite. However, there are semigroups $S$ such that $S$ is infinite but $D(S)$ is finite.
Thus we have the following question:
\begin{question}
For what other families $\mathcal{F}$ of (commutative, unital) semigroups is it true that $S\in \mathcal{F}$ and $\abs{S} = \infty$ together imply $D(S) = \infty$?
\end{question}
\section*{Acknowledgments}
This research was conducted at the University of Minnesota Duluth REU and was
supported by NSF grant 1358695 and NSA grant H98230-13-1-0273. The author thanks
Joe Gallian for suggesting the problem and supervising the research, and Benjamin Gunby for
helpful comments on the manuscript.
\bibliographystyle{plain}
% arXiv:1602.03445 --- ``Davenport constant for commutative rings''
% arXiv:2010.05840 --- ``Cohomology fractals, Cannon--Thurston maps, and the geodesic flow''
\begin{abstract}
Cohomology fractals are images naturally associated to cohomology classes in hyperbolic three-manifolds. We generate these images for cusped, incomplete, and closed hyperbolic three-manifolds in real-time by ray-tracing to a fixed visual radius. We discovered cohomology fractals while attempting to illustrate Cannon--Thurston maps without using vector graphics; we prove a correspondence between these two, when the cohomology class is dual to a fibration. This allows us to verify our implementations by comparing our images of cohomology fractals to existing pictures of Cannon--Thurston maps.
In a sequence of experiments, we explore the limiting behaviour of cohomology fractals as the visual radius increases. Motivated by these experiments, we prove that the values of the cohomology fractals are normally distributed, but with diverging standard deviations. In fact, the cohomology fractals do not converge to a function in the limit. Instead, we show that the limit is a distribution on the sphere at infinity, only depending on the manifold and cohomology class.
\end{abstract}
\section{Introduction}
\begin{figure}[htb]
\centering
\subfloat[Cannon--Thurston map.]{
\label{Fig:CTVector}
\includegraphics[width=0.47\textwidth]{Figures/match_up/two_colour2_rot/two_colour_Cannon-Thurston_match_up_vector_1000}
}
\thinspace
\subfloat[Cohomology fractal.]{
\label{Fig:CTPixelColour}
\includegraphics[width=0.47\textwidth]{Figures/match_up/two_colour2_rot/two_colour_Cannon-Thurston_match_up_colour_1000}
}
\subfloat[\reffig{CTPixelColour} in black and white.]{
\label{Fig:CTPixelBW}
\includegraphics[width=0.47\textwidth]{Figures/match_up/two_colour2_rot/two_colour_Cannon-Thurston_match_up_bw_1000}
}
\thinspace
\subfloat[\reffig{CTVector} overlaid on \reffig{CTPixelBW}.]{
\label{Fig:CTPixelBWandVector}
\includegraphics[width=0.47\textwidth]{Figures/match_up/two_colour2_rot/two_colour_Cannon-Thurston_match_up_bw+vector_1000}
}
\caption{Matching up Cannon--Thurston map images with the cohomology fractal for \texttt{m004}, the figure-eight knot complement. Compare our \reffig{CTPixelBW} with Figure~10.11 of \emph{Indra's Pearls}~\cite[page 335]{IndrasPearls}, which was produced by paint-filling a vector graphics image~\cite{Wright19}.}
\label{Fig:MatchUp}
\end{figure}
Cannon and Thurston discovered that Peano curves arise naturally in hyperbolic geometry~\cite{CannonThurston07}.
They proved that for every closed hyperbolic three-manifold, equipped with a fibration over the circle,
there is a map from the circle to the sphere that is continuous, finite-to-one, and surjective.
Furthermore this \emph{Cannon--Thurston map} is equivariant with respect to the action of the fundamental group.
We review their construction in \refsec{CannonThurston}; \reffig{CTVector} shows an approximation.
In a previous expository paper~\cite{BachmanSchleimerSegerman20}, we introduced \emph{cohomology fractals};
these are images arising from a hyperbolic three-manifold $M$ equipped with a cohomology class $[\omega] \in H^1(M; \mathbb{R})$. See~\reffig{CTPixelColour}.
In that paper we gave an overview of the construction;
we also discussed some of the features of the three-manifold and cohomology class that can be seen in its cohomology fractal.
We have also written an open-source~\cite{github_cohomology_fractals} real-time web application for exploring these fractals.
This is available at \url{https://henryseg.github.io/cohomology_fractals/}.
In the present work we give rigorous definitions of cohomology fractals, we relate them to Cannon--Thurston maps (see \reffig{CTPixelBWandVector}), we give technical details of our implementation, and we discuss their limiting behaviour.
We now outline the contents of each section of the paper. Note that we include a glossary of notation in \refapp{Notation}.
We begin by reviewing the definitions of ideal and material triangulations, and their hyperbolic geometry in \refsec{Triangulations}. In \refsec{CannonThurston} we define Cannon--Thurston maps. In \refsec{Graphics} we discuss the differences between vector and raster graphics. We also recall a vector graphics algorithm (\refalg{CTApprox}) used in previous work to illustrate Cannon--Thurston maps.
In \refsec{CohomologyFractals} we give several equivalent definitions of the cohomology fractal.
It depends on choices beyond the manifold $M$ and the cohomology class $[\omega]$:
there is a choice of viewpoint $p \in M$ and a choice of a visual radius $R$.
The fractal is a function $\Phi^{\omega,p}_{R} \from \UT{p}{M} \to \mathbb{R}$.
Roughly, for each vector $v \in \UT{p}{M}$ we build the geodesic arc $\gamma$ of length $R$ from $p$ in the direction of $v$ and compute $\Phi^{\omega,p}_{R}(v) = \omega(\gamma)$.
(Note that we repeatedly generalise the definition of the cohomology fractal throughout the paper;
the decorations alter to remind the reader of the desired context.)
In \reffig{MatchUp}, we see a cohomology fractal closely matching an approximation of a Cannon--Thurston map, as produced by \refalg{CTApprox}.
In \refsec{Matching} we prove the following.
\begin{restate}{Proposition}{Prop:LightDark}
Cohomology fractals are dual to approximations of the Cannon--Thurston map.
\end{restate}
Thus we have a new representation of Cannon--Thurston maps.
We also compare cohomology fractals with the \emph{lightning curves} of Dicks and various coauthors.
(The name is due to Wright~\cite[page~324]{IndrasPearls}.)
We experimentally observe that the lightning curve corresponds to some of the brightest points of the cohomology fractal.
In \refsec{Implement} we describe the algorithms we use to produce images of cohomology fractals.
Adding the ability to move through the manifold leads us to separate the viewpoint $p$ from a \emph{basepoint}, denoted $b$, of the cohomology fractal.
We still trace rays starting at $p$, but then evaluate $\omega$ on any path in $\cover{M}$ from $b$ to the endpoint of $\gamma$.
We also generalise the above \emph{material} view (with vectors $v$ in $\UT{p}{\cover{M}}$) to the \emph{ideal} and \emph{hyperideal} views (with vectors $v$ being perpendicular to a horosphere or geodesic plane, respectively). Each view is a subset $D \subset \UT{}{\cover{M}}$; our notation for the cohomology fractal becomes $\Phi^{\omega,b,D}_{R} \from D \to \mathbb{R}$.
In \refsec{Cone} we discuss cohomology fractals for incomplete and closed manifolds.
We draw cohomology fractals in the closed case in two ways. First, we deform the cohomology fractal for a surgery parent through Thurston's \emph{Dehn surgery space}. Second, we reimplement our algorithms using material triangulations. We also discuss possible sources of numerical error in our implementations.
In \refsec{Experiments} we give a sequence of experiments exploring the dependence of cohomology fractals on the visual radius $R$. For any fixed $R$, the cohomology fractal is constant on regions with sizes roughly proportional to $\exp(-R)$. As $R$ increases, these regions subdivide, and intricate patterns come into focus. This suggests that there is a limiting object. The following shows that such a limit cannot be a function.
\begin{restate}{Theorem}{Thm:NoPicture}
Suppose that $M$ is a finite volume, oriented hyperbolic three-manifold. Suppose that $F$ is a transversely oriented surface. Then the limit
\[
\lim_{R \to \infty} \Phi_R(v)
\]
does not exist for almost all $v \in \UT{}{\cover{M}}$.
\end{restate}
Indeed, experimentally, increasing $R$ leads to noisy pictures. However, this is due to undersampling. A heuristic argument (see \refrem{Embiggen}) shows that we can avoid noise if we increase the screen resolution as we increase $R$. We simulate this by computing \emph{supersampled} images.
These, and further experiments, indicate that in contrast with \refthm{NoPicture}, the \emph{mean} of the cohomology fractal, taken over a pixel, converges.
Its values appear to be normally distributed with standard deviation growing like $\sqrt{R}$.
Motivated by this, in \refsec{CLT}, we show that the cohomology fractal obeys a central limit theorem.
\begin{restate}{Theorem}{Thm:CLT}
Fix a connected, orientable, finite volume, complete hyperbolic three-manifold $M$ and a closed, non-exact, compactly supported one-form $\omega \in \Omega_c^1(M)$. There is $\sigma > 0$ such that for all basepoints $b$, all views $D$ with area measure $\mu_D$, for all probability measures $\nu_D\ll \mu_D$, and for all $\alpha\in\mathbb{R}$, we have
\[
\lim_{T\to\infty} \nu_D\left[ v\in D : \frac{\Phi_T(v)}{\sqrt{T}} \leq \alpha \right] = \int_{-\infty}^\alpha \frac{1}{\sigma\sqrt{2\pi}} e^{-(s/\sigma)^2/2} \, ds,
\]
where $\Phi_T=\Phi^{\omega,b,D}_{T}$ is the associated cohomology fractal.
\end{restate}
\noindent
That is, if we regard the cohomology fractal across a pixel as a random variable, divide it by $\sqrt{T}$, and take the limit, the result is a normal distribution of mean zero.
The standard deviation of the normal distribution only depends on the manifold and cohomology class.
The proof uses Sinai's central limit theorem for geodesic flows.
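As a purely illustrative toy analogue (ours, not from the paper, with i.i.d.\ $\pm 1$ increments standing in for the merely mixing geodesic flow; the step distribution and trial counts are arbitrary choices): summing $T$ independent steps and normalizing by $\sqrt{T}$ produces approximately normal values, which is the shape of behaviour \refthm{CLT} establishes for $\Phi_T/\sqrt{T}$.

```python
import random
from math import sqrt

# Toy analogue of the normalization in the central limit theorem:
# sums of T i.i.d. +/-1 steps, divided by sqrt(T), are close to N(0, 1).
random.seed(0)
T, trials = 400, 2000
samples = [sum(random.choice((-1, 1)) for _ in range(T)) / sqrt(T)
           for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
print(round(mean, 3), round(var, 3))  # mean near 0, variance near 1
```

In the theorem, of course, the increments along a geodesic are far from independent; the content of the proof is that Sinai's central limit theorem controls these correlations.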
In \refsec{Pixel}, we prove that treating the cohomology fractals as \emph{distributions} gives a well-defined limit.
In this introduction, for simplicity, we focus on the case where $D$ is a material view.
The \emph{pixel theorem} (\refthm{Pixel}) states that the limit
\[
\Phi^{\omega,b,D}(\eta) = \lim_{T\to\infty} \int_{D} \Phi^{\omega,b,D}_{T} \eta
\]
is well-defined for any two-form $\eta \in \Omega^2(D)$.
\refthm{Pixel} also states various transformation laws relating, for example, the distributions corresponding to different views.
Thus there is a view-independent distribution related to the view-dependent distributions via the conformal isomorphism $i_D$ from $D$ to $\bdy_\infty \cover{M}$.
\begin{restate}{Corollary}{Cor:BoundaryDistribution}
Suppose that $M$ is a connected, orientable, finite volume, complete hyperbolic three-manifold.
Fix
a closed, compactly supported one-form $\omega \in \Omega_c^1(M)$ and
a basepoint $b \in \cover{M}$.
Then there is a distribution $\Phi^{\omega,b}$ on $\bdy_\infty \cover{M}$ so that,
for any material view $D$ and for any $\eta \in \Omega^2(D)$, we have
\[
\Phi^{\omega,b,D}(\eta) =
\Phi^{\omega, b}((i_D^{-1})^* \eta).
\]
\end{restate}
The above discussion addresses smooth test functions.
We can also prove convergence for a wider class of test functions;
these include the indicator functions of regions with piecewise smooth boundary.
However, we do not know whether or not the cohomology fractal converges to a measure.
We conclude with a few questions and directions for future work in \refsec{Questions}.
\subsection*{Acknowledgements}
This material is based in part upon work supported by the National Science Foundation under Grant No. DMS-1439786 and the Alfred P. Sloan Foundation award G-2019-11406 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Illustrating Mathematics program.
The fourth author was supported in part by National Science Foundation grant DMS-1708239.
We thank Fran\c{c}ois Gu\'eritaud for suggesting we use ray-tracing to generate cohomology fractals.
We thank Curt McMullen for suggesting that \refthm{Pixel} should be true and also for giving us permission to reproduce \reffig{McMullen}.
We thank Mark Pollicott and Alex Kontorovich for guiding us through the literature on exponential mixing of the geodesic flow.
We thank Ian Melbourne for enlightening conversations on central limit theorems.
\section{Triangulations}
\label{Sec:Triangulations}
We briefly review the notions of material and ideal triangulations of three-manifolds.
\subsection{Combinatorics}
Suppose that $M$ is a compact, connected, oriented three-manifold.
We will consider two cases. Either
\begin{itemize}
\item
the boundary $\bdy M$ is empty; here we call $M$ \emph{closed}, or
\item
the boundary is non-empty, consisting entirely of tori; here we call $M$ \emph{cusped}.
\end{itemize}
Suppose that $\mathcal{T}$ is a \emph{triangulation}: that is, a collection of oriented model tetrahedra together with a collection of orientation-reversing face pairings. We allow distinct faces of a tetrahedron to be glued, but we do not allow a face to be glued to itself. The quotient space, denoted $|\mathcal{T}|$, is thus a CW--complex which is an oriented three-manifold away from its zero-skeleton. We say that $\mathcal{T}$ is a \emph{material} triangulation of $M$ if there is an orientation-preserving homeomorphism from $|\mathcal{T}|$ to $M$. We say that $\mathcal{T}$ is an \emph{ideal} triangulation of $M$ if there is an orientation-preserving homeomorphism from $|\mathcal{T}|$, minus a small open neighbourhood of its vertices, to $M$. Equivalently, $|\mathcal{T}|$ minus its vertices is homeomorphic to $M^\circ$, the interior of $M$.
\begin{example}
Suppose that $M$ is obtained from $S^3$ by removing a small open neighbourhood of the figure-eight knot. See \reffig{FigureEightKnot}. As discussed in~\cite[Chapter 1]{ThurstonNotes}, the knot exterior $M$ has an ideal triangulation with two tetrahedra. See \reffig{FigureEightTriangulation}. Here we have not truncated the model tetrahedra. Instead we draw the vertex link; in this case it is a torus in $M$.
\end{example}
\begin{figure}[htb]
\centering
\subfloat[{Tube around the figure-eight knot.}]{
\label{Fig:FigureEightKnot}
\includegraphics[width = 0.33\textwidth]{Figures/fig_8_knot_triang_2d_pic_3k}
}%
\hfill%
\subfloat[Triangulation of the figure-eight knot complement. The colours and arrows indicate the gluings.]{
\label{Fig:FigureEightTriangulation}
\includegraphics[width = 0.61\textwidth]{Figures/Fig8_knot_complement}
}
\caption{The figure-eight knot complement. This manifold is known as \texttt{m004} in the SnapPy census. The black lines in \reffig{FigureEightKnot} cut the tube into triangles, corresponding to the eight green triangles in \reffig{FigureEightTriangulation}.}
\end{figure}
\subsection{Geometry}
\label{Sec:Geometry}
We deal with the geometry of the two types of triangulations separately.
\subsubsection{Ideal triangulations}
We give each model ideal tetrahedron $t$ a hyperbolic structure.
That is, we realise $t$ as an ideal tetrahedron in $\mathbb{H}^3$ with geodesic faces.
This can be constructed as the convex hull of four points on $\bdy_\infty \mathbb{H}^3$.
We require that the face pairings be orientation reversing isometries.
For these hyperbolic tetrahedra to combine to give a complete hyperbolic structure on the manifold $M^\circ$ requires certain conditions to be satisfied.
Very briefly: consider a loop in the dual one-skeleton of the triangulation.
This visits the tetrahedra in some order. The product of the corresponding sequence of isometries must give the identity if the loop is trivial in the fundamental group. If the loop is peripheral then the product must be a parabolic element.
These conditions reduce to a finite set of algebraic constraints. These are Thurston's \emph{gluing equations}; see~\cite[Section~4.2]{ThurstonNotes} and~\cite[Section~4.2]{PurcellKnotTheory}. Using the upper half-space model of $\mathbb{H}^3$, we define the \emph{shape} of each ideal hyperbolic tetrahedron to be the cross-ratio of its four ideal points.
The gluing equations impose a finite number of polynomial conditions on these shapes.
For our implementation, we also require that the shapes have positive imaginary part. This ensures that the ideal hyperbolic tetrahedra glue together to give a complete, finite volume hyperbolic structure on $M^\circ$. Furthermore, the model orientations of all of the tetrahedra agree with the orientation on $M$. In particular, when a geodesic ray crosses a face, it has a sensible continuation.
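To illustrate the shape parameter concretely, here is a minimal Python sketch (our own illustration, not part of the authors' implementation; the cross-ratio convention, the sample M\"obius map, and the sample points are all our choices):

```python
def cross_ratio(z0, z1, z2, z3):
    """One common convention for the cross-ratio of four distinct
    points of the complex plane, regarded as ideal points of the
    upper half-space model of hyperbolic three-space."""
    return ((z2 - z0) * (z3 - z1)) / ((z2 - z1) * (z3 - z0))

# The shape is well defined because the cross-ratio is invariant
# under Mobius transformations, which include the orientation-
# preserving isometries of the upper half-space model.
def mobius(z):
    # an arbitrary example with nonzero determinant (2*3 - 1*1 = 5)
    return (2 * z + 1) / (z + 3)

vertices = (0 + 0j, 1 + 0j, 2 + 1j, 1j)
shape = cross_ratio(*vertices)
moved_shape = cross_ratio(*(mobius(z) for z in vertices))
# shape and moved_shape agree; a positively oriented tetrahedron
# has a shape with positive imaginary part, as required in the text
```

Requiring that every shape has positive imaginary part, as above, rules out degenerate and negatively oriented tetrahedra.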
\subsubsection{Material triangulations}
\label{Sec:MaterialGeometry}
To find a hyperbolic structure for material triangulations, we replace Thurston's gluing equations with a construction due to Andrew Casson \cite{casson:geo} and Damian Heard \cite{Orb,heardThesis}.
To specify a hyperbolic structure on a material triangulation, it suffices to assign lengths to its edges. This is because the isometry class of an oriented material hyperbolic tetrahedron is determined by its six edge lengths.
There are two conditions that must be satisfied. First, the six edge lengths of each model tetrahedron must satisfy a collection of inequalities, ensuring that a hyperbolic tetrahedron with those edge lengths exists. Second, for each edge of the triangulation, the dihedral angles about it must sum to $2\pi$.
If these inequalities and equalities hold, then we obtain a hyperbolic structure on the three-manifold. See \cite[Section~2]{matthiasVerifyingFinite} for further details.
\section{Cannon--Thurston maps}
\label{Sec:CannonThurston}
\begin{wrapfigure}[15]{r}{0.36\textwidth}
\vspace{-21pt}
\begin{minipage}{0.355\textwidth}
\[
\begin{tikzcd}
\bdy_\infty \cover{F} \arrow[r, "\Psi"] \arrow[d, hook'] & \bdy_\infty \cover{M} \arrow[d, hook'] \\
\closure{F} \arrow[r, "\closure{\alpha}"] & \closure{M} \\
\cover{F} \arrow[r, "\cover{\alpha}"] \arrow[u, hook] \arrow[d] & \cover{M} \arrow[u, hook] \arrow[d] \\
F \arrow[r, "\alpha"] & M
\end{tikzcd}
\]
\end{minipage}
\caption{The various spaces and maps involved in constructing the Cannon--Thurston map $\Psi$.}
\label{Fig:CannonThurston}
\end{wrapfigure}
Here we sketch Cannon and Thurston's construction; see \reffig{CannonThurston} for an overview.
We refer to~\cite{CannonThurston07} for the details. See also~\cite{Mj18}.
Suppose that $F=F^2$ and $M=M^3$ are connected, compact, oriented two- and three-manifolds. Suppose that $F^\circ$ and $M^\circ$ admit complete hyperbolic metrics of finite area and volume respectively. In an abuse of notation, we will conflate $F$ with $F^\circ$, and similarly $M$ with $M^\circ$.
We call a proper embedding $\alpha \from F \to M$ a \emph{fibre} if there is a map $\rho \from M \to S^1$ so that for all $t \in S^1$ the preimage $\rho^{-1}(t)$ is a surface properly isotopic to $\alpha(F)$.
Let $\cover{F}$ and $\cover{M}$ be the universal covers of $F$ and $M$ respectively. Since $F$ and $M$ are hyperbolic, their covers are identified with hyperbolic two- and three-space respectively. Let $\bdy_\infty \cover{F} \homeo \bdy_\infty\mathbb{H}^2 \homeo S^1$ and $\bdy_\infty \cover{M} \homeo \bdy_\infty\mathbb{H}^3 \homeo S^2$ be their ideal boundaries.
We set
\[
\closure{F} = \cover{F} \cup \bdy_\infty \cover{F} \qquad \mbox{and} \qquad \closure{M} = \cover{M} \cup \bdy_\infty \cover{M}
\]
Each union is equipped with the unique topology that makes the group action continuous.
Note that $\closure{F}$ and $\closure{M}$ are homeomorphic to a closed two- and three-ball, respectively.
We compose the covering map from $\cover{F} \to F$ with the embedding $\alpha$ and then lift to obtain an equivariant map $\cover{\alpha} \from \cover{F} \to \cover{M}$.
We call $\cover{\alpha}$ an \emph{elevation} of $F$.
\reffig{Elevation} shows an elevation of the fibre of the figure-eight knot complement.
Cannon and Thurston gave the first proof of the following theorem in the closed case~\cite[page 1319]{CannonThurston07}.
The cusped case follows from work of Bowditch~\cite[Theorem 0.1]{Bowditch07}.
\begin{theorem}
\label{Thm:CannonThurston}
Suppose $M$ is a connected, oriented, finite volume hyperbolic three-manifold.
Suppose that $\alpha \from F \to M$ is a fibre of a surface bundle structure on $M$.
Then there is an extension of $\cover{\alpha}$ to a continuous and equivariant (with respect to the fundamental group of $M$) map $\closure{\alpha} \from \closure{F} \to \closure{M}$.
The restriction of $\closure{\alpha}$ to $\bdy_\infty \cover{F}$ gives a sphere-filling curve. \qed
\end{theorem}
We will use the notation $\Psi \from \bdy_\infty \cover{F} \to \bdy_\infty \cover{M}$ for the restriction of $\closure{\alpha}$ to $\bdy_\infty \cover{F}$.
We call this a \emph{Cannon--Thurston map}.
We now turn to the task of visualising $\Psi$.
\section{Illustrating Cannon--Thurston maps}
\label{Sec:Graphics}
The standard joke (see~\cite[page~373]{Thurston82} and~\cite[page~335]{IndrasPearls}) is that it is straightforward to draw an accurate picture of a Cannon--Thurston map; it is solid black.
\begin{figure}[htb]
\centering
\subfloat[An elevation of the fibre. The fibre is a pleated surface made from two ideal triangles. The three pleating angles are $\pi/3$, $\pi$, and $5\pi/3$.]{
\includegraphics[width = 0.47\textwidth]{Figures/m004_elevation}
\label{Fig:Elevation}
}
\subfloat[An approximation of the Cannon--Thurston map, reproduced from Figure~8 of~\cite{Thurston82}.]
{
\includegraphics[width = 0.47\textwidth]{Figures/bending_the_circle_fig_8_part_6}
\label{Fig:Thurston}
}
\caption{Views in the universal cover of the figure-eight knot complement. }
\end{figure}
The first (more instructive) illustration of a Cannon--Thurston map is due to Thurston. He gives a sequence of approximations to the sphere-filling curve in~\cite[Figure~8]{Thurston82}.
We reproduce the last of these in \reffig{Thurston}. A striking version of this image by Wright also appears in \emph{Indra's Pearls}~\cite[Figure~10.11]{IndrasPearls}.
In this example both $M$ and $F$ are non-compact; $M$ is the complement of the figure-eight knot and $F$ is a Seifert surface.
\subsection{Vector and raster graphics}
In this section, we outline the technique used by Thurston and Wright to generate images of Cannon--Thurston maps,
in order to contrast it with our algorithm.
Our algorithm generates an image by producing a colour for each pixel on a screen. In other words, its output is a map from a grid of pixels in the \emph{image plane} into a space of possible colours. We call such a map a \emph{raster graphics image}.
In contrast, \refalg{CTApprox} (below) produces \emph{vector graphics} -- that is, a description of an image as a collection of various primitive objects in the (euclidean) image plane.
An example of a primitive is a line segment, specified by the coordinates of its end points.
Other primitives include arcs, circles, and so on.
Note that we generally need to convert vector graphics to raster graphics to make a physical representation of an image.
To \emph{rasterise} a vector graphics image, we need to decide which pixels are coloured by which primitives.
For example, a disk colours all of the pixels whose coordinates lie within the given radius of its centre.
Rasterisation is necessary for most output devices, such as screens or printers.
The exceptions include plotters, laser cutters, and cathode-ray oscilloscopes.
There are advantages to deferring rasterisation and saving the vector graphics to a file (in the PDF, PostScript, or SVG format, for example). For example, deferred rasterisation can take the resolution of the output device into account. Design programs such as Inkscape and Adobe Illustrator allow editing the geometric primitives in a vector graphics image. Rasterisation is usually carried out by a black-box general purpose algorithm, the details of which are hidden from the user.
\begin{algorithm}
\label{Alg:CTApprox}
(Approximate a Cannon--Thurston map)
We are given a fibre $F$ of the three-manifold $M$.
We choose an elevation $\cover{F} \subset \cover{M}$ of $F$.
As described in \refsec{CannonThurston}, the map
\[
\Psi \from S^1 \homeo \bdy_\infty \cover{F} \to \bdy_\infty \cover{M} \homeo S^2
\]
is sphere-filling.
To approximate $\Psi$, we first choose a large disk $D \subset \cover{F}$.
Typically, $M$ is described with an ideal triangulation $\mathcal{T}$, with $F$ realised as a surface carried by the two-skeleton $\mathcal{T}^{(2)}$.
Therefore $\cover{F}$ is given as a surface carried by $\cover{\mathcal{T}}^{(2)}$.
The disk $D$ then consists of some finite collection of ideal hyperbolic triangles in $\cover{\mathcal{T}}^{(2)}$.
The boundary of $D$ consists of a loop of geodesics in $\mathbb{H}^3 \cup \bdy_\infty \mathbb{H}^3$.
We now define $\Psi_D$ to be the loop in $\bdy_\infty \mathbb{H}^3$ obtained by projecting each arc of $\bdy D$ to an arc in $\bdy_\infty \mathbb{H}^3$.
\end{algorithm}
Note that the algorithm produces a circularly ordered collection of points in $\bdy_\infty \mathbb{H}^3$ spanning geodesics in $\mathbb{H}^3$.
However, conventional vector graphics require primitives to be in the euclidean plane.
Thus, we must make two choices of projections.
The first projection from $\mathbb{H}^3$ to $\bdy_\infty\mathbb{H}^3$ takes the geodesics to arcs in $\bdy_\infty\mathbb{H}^3$ and the second projection takes these arcs in $\bdy_\infty\mathbb{H}^3$ to arcs in the euclidean image plane.
We draw our pictures in the ``ideal view''.
That is, we use the upper half space model of $\mathbb{H}^3$ and project $\bdy D$ down to $\mathbb{C}$ (viewed as the boundary of $\mathbb{H}^3$).
The arcs between vertices now simply become straight lines in $\mathbb{C}$.
This is also the choice made by Thurston in~\reffig{Thurston}, as well as Wada in his program \emph{OPTi}~\cite{opti}.
Other depictions by McMullen~\cite{McMullenWeb} and Calegari~\cite[Figure~1.14]{Calegari07}
project outward from the origin to the boundary of the Poincar\'e ball model of $\mathbb{H}^3$ (and then to the image plane using a perspective or orthogonal projection).
\begin{remark}
The images in \emph{Indra's Pearls}~\cite{IndrasPearls} have been rasterised using a customised rasteriser that illustrates further features of the Cannon--Thurston map. For example, one side of the polygonal path of line segments is filled, or
the line segments are coloured using some combinatorial condition. See Figures~10.11 and~10.13 of~\cite{IndrasPearls}.
\end{remark}
\subsection{Motivating raster graphics}
Our work here began when we asked whether we could avoid vector graphics in illustrating Cannon--Thurston maps.
This is less natural, but would allow us to take advantage of extremely fast graphics processing unit (GPU) calculation.
We were inspired in part by work of Vladimir Bulatov~\cite{Bulatov18} and also of Roice Nelson and the fourth author~\cite{visualizing_hyperbolic_honeycombs}, using reflection orbihedra. (See also the work of Peter Stampfli~\cite{Stampfli19}.)
They all use raster graphics strategies to draw tilings of $\mathbb{H}^2$ and $\mathbb{H}^3$.
A number of others have also used raster graphics to explore kleinian groups,
outside of the setting of reflection orbihedra.
They include Peter Liepa~\cite{Liepa11}, Jos Leys~\cite{Leys17}, and Abdelaziz Nait Merzouk~\cite{Knighty} (also see~\cite{Christensen}).
\section{Cohomology fractals}
\label{Sec:CohomologyFractals}
We give a sequence of more-or-less equivalent definitions of cohomology fractals, beginning with the conceptually simplest (for us), and moving towards versions that are most convenient for our implementation or our proofs.
Fixing notation, we take $M$ to be a riemannian manifold, $\cover{M}$ its universal cover, and $\pi \from \UT{}{M} \to M$ its unit tangent bundle.
\begin{definition}
\label{Def:Basic}
Suppose that we are given the following data.
\begin{itemize}
\item A connected, complete, oriented riemannian manifold $M^n$,
\item a cocycle $\omega \in Z^1(M; \mathbb{Z})$,
\item a point $p \in M$, and
\item a radius $R \in \mathbb{R}_{>0}$.
\end{itemize}
From these, we define the \emph{cohomology fractal} on the unit sphere $\UT{p}{M} \homeo S^{n-1}$.
This is a function $\Phi_R = \Phi^{\omega,p}_{R} \from \UT{p}{M} \to \mathbb{Z}$ defined as follows.
Suppose that $v \in \UT{p}{M}$ is a unit tangent vector.
Let $\gamma$ be the unique geodesic segment starting at $p$ with initial direction $v$ and of length $R$.
Let $q$ be the endpoint of $\gamma$.
Choose any shortest path $\gamma'$ from $q$ to $p$ (for a full-measure set of vectors $v$ the shortest path is unique).
Thus $\gamma \cup \gamma'$ is a one-cycle.
We define $\Phi_R(v) = \omega(\gamma \cup \gamma')$.
\end{definition}
Our next definition moves in the direction of concrete examples:
\begin{definition}
\label{Def:Dual}
Here we further assume that $M$ is a three-manifold.
Let $F \subset M$ be a properly embedded, transversely oriented surface.
We choose $F$ so that $p \notin F$.
We define $\gamma$ and $q$ as above.
Now take $\gamma'$ to be the shortest path from $q$ to $p$ in the complement of $F$.
We now define $\Phi_R(v) = \Phi^{F,p}_{R}(v)$ to be the algebraic intersection number between $F$ and $\gamma \cup \gamma'$.
\end{definition}
We modify once again to obtain a definition very close to our implementation.
\begin{definition}
\label{Def:UsingTriangulation}
We equip $M$ with a material (or ideal) triangulation $\mathcal{T}$.
We properly homotope the surface $F$ to lie in the two-skeleton $\mathcal{T}^{(2)}$.
For each face $f$ this gives us a weight $\omega(f)$.
This is the signed number of sheets of $F$ running across $f$.
We dispense with $\gamma'$;
we take $\Phi_R(v)$ to be the sum of the weighted intersections between $\gamma$ and the faces of the triangulation.
\end{definition}
To aid in comparing cohomology fractals to Cannon--Thurston maps (in \refsec{Matching}), we lift to the universal cover, $\cover{M}$.
\begin{definition}
\label{Def:UniversalCover}
Since cochains pull back, let $\cover{\omega}$ be the lift of $\omega$.
Let $\cover{p}$ be a fixed lift of the point $p$.
Since $\cover{\omega}$ is a coboundary, it has a primitive, say $W$; we choose $W$ so that $W(\cover{p}) = 0$.
We form $\cover{\gamma}$ as before and let $\cover{q}$ be its endpoint.
We define $\Phi_R(v) = W(\cover{q})$.
\end{definition}
To analyse the behaviour of the cohomology fractal as $R$ tends to infinity, we rephrase our definition in a dynamical setting. Here, the radius $R$ is replaced by a time $T$.
\begin{definition}
\label{Def:OneForm}
Suppose that $\omega \in \Omega^1(M,\mathbb{R})$ is a closed one-form.
Let $\varphi_t \from \UT{}{M} \to \UT{}{M}$ be the geodesic flow for time $t$. We define
\[
\Phi_T(v) = \Phi_{T}^{\omega,p}(v) = \int_0^T \omega(\varphi_t(v))\, dt \qedhere
\]
\end{definition}
\begin{figure}[htb!]
\centering
\subfloat[$R=e^{0.5}$]{
\label{Fig:exp0.5}
\includegraphics[width=0.47\textwidth]{Figures/m004_exp0p5}
}
\subfloat[$R=e^1$]{
\includegraphics[width=0.47\textwidth]{Figures/m004_exp1}
}
\\
\subfloat[$R=e^{1.5}$]{
\includegraphics[width=0.47\textwidth]{Figures/m004_exp1p5}
}
\subfloat[$R=e^{2}$]{
\includegraphics[width=0.47\textwidth]{Figures/m004_exp2}
}
\caption{Cohomology fractals for \texttt{m004}, with various values of $R$.}
\label{Fig:VisSphereRadii}
\end{figure}
In \refsec{Implement} we will discuss how to calculate $\Phi_R$ in practice.
Before giving those details, we show the reader what $\Phi_R$ looks like for a few values of $R$. See \reffig{VisSphereRadii}. The map $\Phi_R$ maps into $\mathbb{R}$; we indicate the value of $\Phi_R(v)$ by brightness. For each value of $R$, we draw the value of $\Phi_R(v)$ for a small square subset of the unit tangent vectors, $\UT{p}{M}$.
Here we are using \refdef{UsingTriangulation}, our manifold $M$ is the complement of the figure-eight knot, and the surface $F$ is a fibre of $M$.
Note that when $R$ is small, as in \reffig{exp0.5}, $\Phi_R$ is constant on large regions of the sphere. As $R$ increases, the value of $\Phi_R$ on nearby rays becomes less correlated, and we see a fractal structure come into focus.
\begin{remark}
This complicated behaviour is a consequence of the hyperbolic geometry of our manifold.
Consider instead the example where $M$ is the three-torus $S^1 \times S^1 \times S^1$ and the surface $F$ is an essential torus embedded in $M$.
Again, $\Phi_R$ counts the number of elevations of $F$ the ray $\gamma$ passes through.
Since the elevations are parallel planes in $\cover{M} \homeo \mathbb{R}^3$, the value of $\Phi_R$ is constant on circles in $\UT{p}{M}$ parallel to these planes.
Here $\Phi_R$ is much simpler.
\end{remark}
\begin{remark}
Some of the geometry and topology of the manifold $M$ can be seen from the cohomology fractal $\Phi_R$.
Our recent expository paper on cohomology fractals~\cite{BachmanSchleimerSegerman20} gives many such examples, including the appearances of cusps, totally geodesic subsurfaces, and loxodromic elements of the fundamental group.
\end{remark}
\section{Matching figures}
\label{Sec:Matching}
In this section we compare our cohomology fractals to Cannon--Thurston maps.
\begin{example}
Suppose that $M$ is the figure-eight knot complement.
Suppose that $F$ is a fibre of the fibration of $M$.
Suppose that $\Psi$ is the associated Cannon--Thurston map.
\reffig{CTVector} shows an approximation $\Psi_D \from S^1 \to S^2 \homeo \bdy_\infty \mathbb{H}^3$ of $\Psi$;
we produced this image using \refalg{CTApprox}. (Note that the vector graphics image in \reffig{CTVector} has been converted to a raster graphics image to save on file size and rendering time.)
\reffig{CTPixelColour} shows a cohomology fractal $\Phi_R$ corresponding to $F$ and looking towards the same part of $\bdy_\infty \mathbb{H}^3$.
\reffig{CTPixelBW} shows $\Phi_R$ again, but with the contrast increased and colour scheme simplified.
Here the colour associated to a vector $v$ is either white or grey, according to whether $\Phi_R(v)$ is negative or not.
\reffig{CTPixelBWandVector} shows \reffig{CTVector} overlaid on \reffig{CTPixelBW}.
We see that the red curve of $\Psi_D$ is almost the common boundary of the white and grey regions of $\Phi_R$.
There are several small areas where $\Psi_D$ does not track the boundary.
These only appear close to fairly large cusps;
they exist because implementations of \refalg{CTApprox} generally have trouble approaching cusps from the side.
In the cohomology fractal $\Phi_R$ we see that there are chains of ``octopus heads'' that reach almost all the way towards each cusp.
\end{example}
This behaviour is generally true for fibrations, as follows. Here we follow \refdef{UniversalCover}.
\begin{proposition}
\label{Prop:LightDark}
Suppose that $M$ is a connected, oriented, finite volume hyperbolic three-manifold.
Suppose that $\alpha \from F \to M$ is a fibre of a surface bundle structure on $M$.
Fix $p$ in $F$, and a lift $\cover{p} \in \cover{M}$.
Fix any $R > 0$ and let $\Phi_R = \Phi^F_R$ be the resulting cohomology fractal for $F$.
Let $Z = \Phi_R^{-1}(\mathbb{R}_{\geq 0})$.
Then there is a disk $D \subset \cover{F}$, containing $\cover{p}$, so that the Cannon--Thurston map approximation $\Psi_D$ is a component of $\bdy Z$ (with error at most $\exp(-R)$).
\end{proposition}
\begin{proof}[Proof sketch]
We assume we are in the setting of \refdef{UsingTriangulation}.
Let $\omega$ be the one-cocycle dual to $F$.
Let $W$ be a primitive for $\cover{\omega}$.
Consider the two regions of $\cover{M}$ where $W$ is negative or, respectively, non-negative.
The common boundary of these is exactly an elevation $\cover{F}$ of the fibre $F$.
Let $B^3_R$ be the ball in $\mathbb{H}^3$ of radius $R$ with centre $\cover{p}$.
Let $S^2_R$ be the boundary of $B^3_R$.
Thus the intersection $\cover{F} \cap S^2_R$ is a collection of curves; these separate the points $\cover{q} \in S^2_R$ where $W$ is negative from those where it is non-negative.
Finally, note that $\pi \circ \varphi_R \from \UT{\cover{p}}{\mathbb{H}^3} \to S^2_R$ is the exponential map.
We deduce that $\Phi_R = W \circ \pi \circ \varphi_R$ is, up to change of domain, the same as $W | S^2_R$.
Recall that $\cover{F}$ is a union of triangles.
Assume that one of these contains $\cover{p}$.
Let $E$ be the collection of triangles of $\cover{F}$ having at least one edge meeting the ball $B^3_R$.
Let $D$ be the connected component of the union of the triangles in $E$ that contains $\cover{p}$.
Let $\rho \from \mathbb{H}^3 - \{\cover{p}\} \to \UT{\cover{p}}{\mathbb{H}^3}$ be the inward central projection.
For any set $K \subset \mathbb{H}^3 - \{\cover{p}\}$ we call the diameter of $\rho(K)$ the \emph{visual diameter} of $K$.
This is measured with respect to the fixed metric on the unit two-sphere $\UT{\cover{p}}{\mathbb{H}^3}$.
Suppose that $e$ is a bi-infinite geodesic in $\mathbb{H}^3$.
If $e$ lies outside of $B^3_R$ then the visual diameter of $e$ is small;
in fact, for large $R$ the visual diameter of $e$ is less than $2\exp(-R)$.
Likewise, if $e$ meets $S^2_R$, then for either component $e'$ of $e - B^3_R$ the visual diameter of $e'$ is less than $\exp(-R)$.
Now suppose that $T$ is a triangle of $E$.
Using the above, we deduce that the visual diameter of each component of $T - B^3_R$ is small.
Thus the inward central projections of $\bdy D$ and $D \cap S^2_R$ have Hausdorff distance bounded by a small multiple of $\exp(-R)$.
So let $\Psi_D$ be the curve in $\bdy_\infty \mathbb{H}^3$ obtained by projecting $\bdy D$ outwards.
By the above, each arc of the resulting polygonal curve has small visual diameter.
Also by the above, the Hausdorff distance between the curves $\rho \circ \Psi_D$ and $\rho \circ (\cover{F} \cap S^2_R)$ is small.
\end{proof}
\begin{remark}
Suppose that $F$ is totally geodesic or, more generally, quasi-fuchsian.
In this case, the image of the Cannon--Thurston map $\Psi$ is a circle or a quasi-circle, respectively.
Note however that different elevations now give distinct Cannon--Thurston maps.
It is natural to take their union and obtain a circle (or quasi-circle) packing.
Now, if $F$ is also Thurston-norm minimising then we still obtain matches.
For example in~\cite[Figure~6]{BachmanSchleimerSegerman20}, we see how, for a totally geodesic surface in the Whitehead link complement, the cohomology fractal matches the associated circle packing.
On the other hand, if $[F]$ is trivial in $H_2(M, \bdy M)$ then the cohomology fractal $\Phi_R$ is bounded and oscillates as $R$ tends to infinity.
\end{remark}
\subsection{Lightning curves}
\label{Sec:Lightning}
\begin{figure}[htb]
\centering
\subfloat[The lightning curve superimposed on the cohomology fractal.]{
\includegraphics[width = 0.47\textwidth]{Figures/match_up_lightning_1000}
\label{Fig:LightningMatchUp}
}
\subfloat[A black and white version of \reffig{CTPixelColour} highlighting only the brightest pixels.]{
\includegraphics[width = 0.47\textwidth]{Figures/match_up_lightning_threshhold_1000}
\label{Fig:LightningThreshhold}
}
\caption{Matching up the lightning curve, reproduced from \cite[Figure~7]{DicksEtAl06}, with the cohomology fractal for \texttt{m004}. }
\end{figure}
Suppose that $M$ is a cusped, fibred three-manifold, with fibre $F$.
Dicks, with various co-authors, defines and studies the \emph{lightning curves}~\cite{AlperinEtAl99, DicksPorti02, DicksEtAl02, DicksEtAl06, DicksEtAl10, DicksEtAl12}; these are certain fractal arcs in the plane.
In more detail: suppose that $c$ and $d$ are distinct cusps of an elevation $\cover{F}$ of $F$. Let $[c, d]^\rcurvearrowup$ be the arc of $\bdy_\infty \cover{F}$ that is between $c$ and $d$ and anti-clockwise of $c$. The Cannon--Thurston map $\Psi$ sends the arc $[c, d]^\rcurvearrowup$ to a union of disks in $\bdy_\infty \cover{M}$ meeting only along points. The boundary of any one of these disks is a \emph{lightning curve}.
Since the lightning curve is defined in terms of the Cannon--Thurston map, it is not too surprising that we can also see something of the lightning curve in the cohomology fractal.
In \reffig{LightningMatchUp} we show a segment of the lightning curve for the figure-eight knot complement generated by Cannon and Dicks~\cite[Figure~7]{DicksEtAl06} overlaid on \reffig{CTPixelColour}.
The lightning curve seems to follow some of the brightest pixels in the cohomology fractal.
\reffig{LightningThreshhold} is a black and white version of \reffig{CTPixelColour}, with a relatively high threshold set for a pixel to be white -- the lightning curve seems to be there, but this is nowhere near as clear as it was for the approximations to the Cannon--Thurston map.
We do not fully understand the correspondence here.
We note that for clarity, \reffig{LightningMatchUp} shows only one segment of the lightning curve.
There is another segment, symmetrical with the shown segment under a 180 degree rotation about the centre of the image.
This second segment seems to follow the darkest pixels of the cohomology fractal.
\section{Implementation}
\label{Sec:Implement}
In this section we give an overview of an implementation of cohomology fractals.
Our implementation is written in JavaScript and GLSL; the code is available at \cite{github_cohomology_fractals}.
We are in the process of making cohomology fractals available in SnapPy.
The second author has already implemented ray-tracing in hyperbolic three-manifolds~\cite{snappy}.
We now follow \refdef{UsingTriangulation}.
Suppose that $M$ is a connected, oriented, finite volume hyperbolic three-manifold.
Let $\mathcal{T}$ be a material or ideal triangulation of $M$.
We are given a weighting $\omega \from \mathcal{T}^{(2)} \to \mathbb{R}$ for the faces of the two-skeleton.
We represent the triangulation $\mathcal{T}$ as a collection $\{t_i\}$ of model hyperbolic tetrahedra.
Each tetrahedron $t_i$ has four faces $f^i_m$ lying in four geodesic planes $P^i_m$ in the hyperboloid model of $\mathbb{H}^3$.
Suppose that $t_j \in \mathcal{T}$ is another model tetrahedron, with faces $f^j_n$.
If the face $f^i_m$ is glued to $f^j_n$, then we have isometries $g^i_m$ and $g^j_n$ realising the gluings.
Note that $g^i_m$ and $g^j_n$ are inverses.
We are given a camera location $p$ in $M$; this is realised as a point (again called $p$) in some tetrahedron $t_i$.
\begin{remark}
The reader familiar with computer graphics will note that we also require a frame at the camera location.
To simplify the exposition, we will mostly suppress this detail.
\end{remark}
We are also given a radius $R$ as well as a maximum allowed step count $S$.
\subsection{Ray-tracing}
For each pixel of the screen, we generate a corresponding unit tangent vector $u$
in the tangent space to the current tetrahedron $t_i$.
We then \emph{ray-trace} through $\mathcal{T}$.
That is, we travel along the geodesic starting at $p$, in the direction $u$, for distance $R$, taking at most $S$ steps. \reffig{DesiredRayPath} shows a toy example, where we replace the three-dimensional hyperbolic triangulation $\mathcal{T}$ of $M$ with a two-dimensional euclidean triangulation of the two-torus.
\begin{figure}[htb]
\centering
\subfloat[\emph{The desired ray path of length $R$, starting from the pair $(p, u)$.}]{
\label{Fig:DesiredRayPath}
\labellist
\small\hair 2pt
\pinlabel $p$ [r] at 25 14
\pinlabel $u$ [b] at 32.5 18
\endlabellist
\includegraphics[height=1.67in]{Figures/ray_trace_schematic_developed}
}
\subfloat[\emph{The implementation of the ray path. The iterations of the loop are labelled with integers. }]{
\label{Fig:ImplementationRayPath}
\labellist
\small\hair 2pt
\pinlabel 1 at 32 20
\pinlabel 3 at 20 27.5
\pinlabel 5 at 23.5 34
\pinlabel 9 at 13.5 10.5
\pinlabel 7 at 43 5.5
\pinlabel \rotatebox{300}{6} at 90 30
\pinlabel \rotatebox{300}{4} at 94 20
\pinlabel \rotatebox{300}{2} at 84 14
\pinlabel \rotatebox{300}{8} at 72 5
\pinlabel \rotatebox{300}{10} at 82 5
\endlabellist
\includegraphics[height=1.67in]{Figures/ray_trace_schematic_implementation}
}
\caption{A toy example of developing a ray through a tiling of a euclidean torus.
Note that the geodesic segments passing through a tile are parallel;
this is only because the geometry is euclidean.
In a hyperbolic tiling the segments are much less ordered. }
\end{figure}
It is perhaps most natural to think of ray-tracing as occurring in $\cover{M}$, the universal cover of the manifold, as shown in \reffig{DesiredRayPath}.
However, the na\"ive floating-point implementation in the hyperboloid model quickly loses precision.
We instead ray-trace in the manifold, as illustrated in \reffig{ImplementationRayPath}.
Thus, all points we calculate lie within our fixed collection of ideal hyperbolic tetrahedra $\{t_i\}$.
For each pixel, we do the following.
\begin{enumerate}
\item
The following initial data are given: an index $i$ of a tetrahedron, a point $p$ in $t_i$, and a tangent vector $u$ at $p$.
Initialise the following variables.
\begin{itemize}
\item
The total distance travelled: $r \gets 0$.
\item
The number of steps taken: $s \gets 0$.
\item
The current tetrahedron index: $j \gets i$.
\item
The current position: $q \gets p$.
\item
The current tangent vector: $v \gets u$.
\end{itemize}
\item
\label{Itm:LoopStart}
Let $\gamma$ be the geodesic ray starting at $q$ in the direction of $v$.
Find the index $n$ so that $\gamma$ exits $t_j$ through the face $f^j_n$.
Let $t_k$ be the other tetrahedron glued to face $f^j_n$.
\item
\label{Itm:LoopHitFace}
Calculate the position $q'$ and tangent vector $v'$ where $\gamma$ intersects $f^j_n$.
Let $r'$ be the distance from $q$ to $q'$.
Set $r \gets r + r'$ and set $s \gets s + 1$.
\item
If $r > R$ or $s > S$ then stop.
\item
Set $q \gets g^j_n(q')$, set $v \gets Dg^j_n(v')$, and then set $j \gets k$.
\item
Go to step \refitm{LoopStart}.
\end{enumerate}
This implements the ray-tracing part of the algorithm.
In our toy example, this is shown in \reffig{ImplementationRayPath}.
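In code, the loop has the following shape. The sketch below is not the paper's implementation; it traces a ray through the toy example of \reffig{DesiredRayPath}, the unit-square tiling of the euclidean two-torus. The walls of the square play the role of the faces $f^j_n$, and each gluing isometry is a translation, so its derivative acts trivially on tangent vectors. (Corner hits, a measure-zero case, are not treated specially here.)

```python
def ray_trace_torus(p, u, R, S):
    """Trace a geodesic on the square torus [0,1)^2, wall to wall.

    Mirrors the loop in the text: find the wall ("face") through
    which the ray exits, move there, accumulate distance r and step
    count s, stop once r > R or s > S, and otherwise apply the
    gluing (here a translation wrapping the exited coordinate).
    Returns (r, s, final position).
    """
    q, v = list(p), list(u)
    r, s = 0.0, 0
    while True:
        # Distance along the ray to each wall it is heading towards;
        # on exiting x_i = 1 we re-enter at x_i = 0, and vice versa.
        hits = []
        for i in (0, 1):
            if v[i] > 0:
                hits.append(((1.0 - q[i]) / v[i], i, 0.0))
            elif v[i] < 0:
                hits.append((q[i] / -v[i], i, 1.0))
        t, i, re_entry = min(hits)
        q = [q[j] + t * v[j] for j in (0, 1)]   # step (3): move to the face
        r, s = r + t, s + 1
        if r > R or s > S:                      # step (4): stop
            return r, s, tuple(q)
        q[i] = re_entry                         # step (5): apply the gluing
```

For example, starting at $(0.1, 0.2)$ with direction $(0.8, 0.6)$ and $R = 1.2$, the ray crosses two walls before its length first exceeds $R$. As in the algorithm of the text, the final segment may overshoot $R$ slightly, since the stopping test runs only at face crossings.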
\subsection{Integrating}
To determine the colour of the pixel, we also track the total signed weight we accumulate along the ray.
For this, we add the following steps to the loop above.
\begin{itemize}
\item[(1b)]
An initial weight $w_0$ is given. Initialise the following.
\begin{itemize}
\item[$\bullet$]
The current weight: $w_c \gets w_0$.
\end{itemize}
\item[(5b)]
Let $f$ be the face through which the ray has just passed, co-oriented in the direction of travel.
Set $w_c \gets w_c + \omega(f)$.
\end{itemize}
At the end of the loop, the value of $w_c$ gives the brightness of the current pixel.
(In fact, we apply a function very similar to the arctan function to remap the possible values of $w_c$ to a bounded interval.
We then apply a gradient that passes through a number of different colours.
This helps the eye see finer differences between values than a direct map to brightness.)
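As a sketch of this post-processing step: the remap and the gradient stops below are illustrative choices, not those of the paper, which specifies only an arctan-like remap followed by a colour gradient.

```python
import math

def pixel_colour(w, contrast=1.0):
    """Map an unbounded accumulated weight w to an RGB triple.

    First remap w to the bounded interval (0, 1) with an arctan-like
    function, then look up a simple two-segment gradient.  The stops
    (blue -> white -> orange) are hypothetical.
    """
    t = 0.5 + math.atan(contrast * w) / math.pi   # (-inf, inf) -> (0, 1)
    if t < 0.5:                                   # blue to white
        a, lo, hi = t / 0.5, (0.1, 0.2, 0.6), (1.0, 1.0, 1.0)
    else:                                         # white to orange
        a, lo, hi = (t - 0.5) / 0.5, (1.0, 1.0, 1.0), (0.9, 0.5, 0.1)
    return tuple(l + a * (h - l) for l, h in zip(lo, hi))
```

Weight zero maps to the middle of the gradient, and large positive and negative weights approach the two end colours without ever clipping.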
\subsection{Moving the camera}
In our applications, we enable the user to fly through the manifold $M$.
Depending on the keys pressed by the user at each time step, we apply an isometry $g$ to $p$.
We also track an orthonormal frame for the user; this determines how tangent vectors correspond to pixels of the screen.
The isometry $g$ is applied to this frame as well.
When the user flies out of a face $f$ of the tetrahedron they are in, we apply the corresponding isometry $g^i_k$ to the position $p$ and the user's frame.
We also add $\omega(f)$ to the initial weight $w_0$.
Without this last step, the overall brightness of the image would change abruptly as the user flies through a face with non-zero weight.
\begin{remark}
\label{Rem:Basepoint}
With this last modification, the cohomology fractal depends on a choice of basepoint $b \in \cover{M}$. The point $p \in M$ must now also be replaced by $p \in \cover{M}$ (abusing notation, we use the same symbol for both points). We add $b$ to the notation, and now write the cohomology fractal as
\[
\Phi_R^{\omega, b, p} \from \UT{p}{\cover{M}} \to \mathbb{R} \qedhere
\]
\end{remark}
\begin{remark}
\label{Rem:DependenceOnb}
The dependence of the cohomology fractal on $b$ is minor: If we change $b$ to $b'$, then the value of $\Phi_R^{\omega, b, p}(v)$ changes by the weight we pick up along any path from $b'$ to $b$.
\end{remark}
\subsection{Material, ideal, and hyperideal views}
\label{Sec:Views}
The above discussion describes the \emph{material view}; the geodesic rays emanate radially from $p$.
To render an image, we place a rectangle in the tangent space at $p$.
For each pixel of the screen, we take the tangent vector $u$ to be the corresponding point of the rectangle.
See \reffig{ViewsMaterial}.
\begin{definition}
The \emph{field of view} of a material image is the angle between the tangent vectors pointing at the midpoints of the left and right sides of the image.
\end{definition}
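For concreteness, here is one way to realise the pixel-to-tangent-vector correspondence for the material view; the conventions (rectangle placed at unit distance, $y$ inverted for screen coordinates) are our assumptions, not taken from the text.

```python
import math

def pixel_to_direction(px, py, width, height, fov_degrees):
    """Direction through the centre of pixel (px, py) in the material view.

    The screen is a rectangle placed at distance 1 in the tangent
    space; its half-width is tan(fov/2), so the directions through the
    midpoints of the left and right edges subtend the field of view.
    """
    half_w = math.tan(math.radians(fov_degrees) / 2.0)
    half_h = half_w * height / width
    x = (2.0 * (px + 0.5) / width - 1.0) * half_w
    y = (1.0 - 2.0 * (py + 0.5) / height) * half_h
    return (x, y, 1.0)   # normalise before use as a unit tangent vector
```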
\begin{remark}
The material view suffers from perspective distortion. This is most noticeable towards the edges of the image, and is worse when the field of view is large.
\end{remark}
To generalise the material view to the ideal and hyperideal, we introduce the following terminology.
We say that a subset $D \subset \UT{}{\cover{M}}$ is a \emph{view} if it is one of the following.
\begin{enumerate}
\item In the \emph{material view}, $D$ is a fibre $\UT{p}{\cover{M}}\homeo S^2$.
\item In the \emph{ideal view}, we take $D$ to be the collection of outward normals to a horosphere $H$.
That is, the vectors point away from $\bdy_\infty H$.
To render an image we place a rectangle in $D$.
For each pixel of the screen we set the initial tangent vector $u$ to be the corresponding point of the rectangle.
The starting point is then $\pi(u) \in \cover{M}$, the basepoint of $u$.
Finally, we set $w_0$ to be the total weight accumulated, along the arc from $b$ to $\pi(u)$, as we pass through faces of the triangulation.
See \reffig{ViewsIdeal}.
\item In the \emph{hyperideal view}, we take $D$ to be the collection of normals to a transversely oriented geodesic plane $P$.
We draw $P$ on the euclidean rectangle of the screen using the Klein model.
The algorithm is otherwise identical to the ideal view case.
See \reffig{ViewsHyperideal}.
\end{enumerate}
\begin{remark}
The ideal view in hyperbolic geometry is the analogue of an orthogonal view in euclidean geometry.
In both cases this is the limit of backing the camera away from the subject while simultaneously zooming in.
\end{remark}
\begin{remark}
The hyperideal view suffers from an ``inverse'' form of perspective distortion. Towards the edges of the image, round circles look like ellipses, with the minor axis along the radial direction.
\end{remark}
\begin{definition}
\label{Def:View}
Let $D \subset \UT{}{\cover{M}}$ be a view, as discussed above.
In the notation for the cohomology fractal, we replace $p$ by $D$:
\[
\Phi_R^{\omega, b, D} \from D \to \mathbb{R} \qedhere
\]
\end{definition}
\begin{figure}[htb]
\centering
\subfloat[Material.]{%
\includegraphics[width = 0.32\textwidth]{Figures/comparison_material}%
\label{Fig:ViewsMaterial}
}
\subfloat[Ideal.]{%
\includegraphics[width = 0.32\textwidth]{Figures/comparison_ideal}%
\label{Fig:ViewsIdeal}
}
\subfloat[Hyperideal.]{%
\includegraphics[width = 0.32\textwidth]{Figures/comparison_hyperideal}%
\label{Fig:ViewsHyperideal}%
}
\caption{Comparison between different views of the cohomology fractal for \texttt{m004}. }
\end{figure}
\subsection{Edges}
We give the user the option to see the edges of the triangulation. The user selects an edge thickness $\varepsilon > 0$. The web application implements this in a lightweight fashion:
In step \refitm{LoopHitFace}, if the distance from the point $q'$ to one of the three edges of the face we have intersected is less than $\varepsilon$, then we exit the loop early.
Depending on user choice, the pixel is either coloured by the weight $w_c$ or by the distance $r$ travelled so far.
See \reffig{Edges}.
In SnapPy, we compute the intersection of the ray with a cylinder about the edge in addition to the intersection with the faces.
\begin{figure}[htb]
\centering
\subfloat[Coloured by cohomology fractal.]{
\includegraphics[width = 0.47\textwidth]{Figures/edges_cohom_fractal_ideal}
}
\subfloat[Coloured by distance.]{
\includegraphics[width = 0.47\textwidth]{Figures/edges_distance_ideal}
}
\caption{Edges of the ideal triangulation of \texttt{m004}, as seen in the material view.}
\label{Fig:Edges}
\end{figure}
\subsection{Elevations}
We also give the user the option to see several elevations of the surface $F$. The user selects a weight $w_{\max} > 0$.
In step (5b), if $w_0 < 0$ but $w_c>0$, then we have crossed the elevation at weight zero. In this case we exit the loop, and colour the pixel by the distance $r$. Similarly, if $w_0 > w_{\max}$ but $w_c < w_{\max}$, then we have crossed the elevation at weight $w_{\max}$, and again we stop and colour by distance. Finally, if $0<w_0<w_{\max}$, then we stop as soon as $w_c$ differs from $w_0$. \reffig{Elevation} shows a single elevation.
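The stopping test of this subsection can be isolated as a small predicate. The behaviour when $w_0$ equals $0$ or $w_{\max}$ exactly is not specified in the text; we return false in those boundary cases.

```python
def crossed_elevation(w0, w_c, w_max):
    """Has the ray crossed an elevation of interest?

    Rays starting below weight 0 stop on crossing 0; rays starting
    above w_max stop on crossing w_max; rays starting strictly
    between 0 and w_max stop as soon as the weight changes at all.
    """
    if w0 < 0:
        return w_c > 0
    if w0 > w_max:
        return w_c < w_max
    if 0 < w0 < w_max:
        return w_c != w0
    return False
```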
\subsection{Triangulations, geometry, and cocycles}
We obtain our triangulations and their hyperbolic shapes from the SnapPy census. We put some effort into choosing good representative cocycles; the choice here makes very little difference to the appearance of the cohomology fractal, but it makes a large difference to the appearance of the elevations. That is, a poor choice of cocycle gives a ``noisy'' elevation. For example, adding the boundary of a tetrahedron to the Poincar\'e dual surface may perform a one-three move to its triangulation. This adds unnecessary ``spikes'' to the elevations.
When our manifold has Betti number one, there is only one cohomology class of interest. Here we searched for taut ideal structures dual to this class~\cite{Lackenby00}. When the SnapPy triangulation did not admit such a taut structure, we randomly searched for one that did. A taut structure gives a Poincar\'e dual surface with the minimum possible Euler characteristic.
When the Betti number is larger than one, we used tnorm~\cite{tnorm20} to find initial simplicial representatives of vertices of the Thurston norm ball~\cite{Thurston86} in $H_2(M,\bdy M)$. We then greedily performed Pachner moves to reduce the complexity of the cocycles. We often, but not always, realised the minimum possible Euler characteristic.
\subsection{Discussion}
Any visualisation of a hyperbolic tiling suffers from the mismatch between the hyperbolic metric of the tiling and the euclidean metric of the image.
The tools for generating more of the tiling involve applying hyperbolic isometries.
The tiles thus shrink exponentially in size while growing exponentially in number.
This makes it difficult for the tiles to cleanly approach $\bdy_\infty \mathbb{H}^2$ or $\bdy_\infty \mathbb{H}^3$.
Approaching a ``parabolic'' point at infinity is even more difficult.
In the vector graphics approach, one must be careful to avoid wasting time generating huge numbers of invisible objects: tiles may be too small or their aspect ratios too large.
The ray-tracing approach (and any similar raster graphics approach) deals with this mismatch directly.
Here we start with the pixel that is to be coloured and then generate only the hyperbolic geometry needed to determine its colour.
A disadvantage of the ray-tracing approach is that we generate the hyperbolic geometry necessary for each pixel independently, meaning that much work is duplicated. However, modern graphics processing units are designed for exactly this kind of massively parallel workload. It often turns out to be faster to duplicate work across many parallel processes than to compute a result once and then transmit it to all processes requiring it.
\section{Incomplete structures and closed manifolds}
\label{Sec:Cone}
Suppose that $M$ is a cusped hyperbolic manifold. Recall that we generate cohomology fractals for $M$ by using an ideal triangulation $\mathcal{T}$. Associated to $\mathcal{T}$ there is the \emph{shape variety}; that is we impose the gluing equations outlined in \refsec{Geometry}, omitting the peripheral ones. This gives us a space of deformations of the complete hyperbolic structure to incomplete hyperbolic structures; see~\cite[Section~4.4]{ThurstonNotes} and~\cite[Section~6.2]{PurcellKnotTheory}.
If we deform correctly, we reach an incomplete structure whose completion has the structure of a hyperbolic manifold. The result is a \emph{hyperbolic Dehn filling} of the original cusped manifold.
\subsection{Incomplete structures}
Suppose that $(M, \mathcal{T})$ is an ideally triangulated manifold. Let $Z^s$ be a path in the shape variety, where $Z^\infty$ is the complete structure and the completion of $Z^1$ is a closed hyperbolic three-manifold obtained by Dehn filling $M$.
Between the two endpoints, we have \emph{incomplete structures} $M_s$ on the manifold $M$.
In an incomplete geometry, there are geodesic segments that cannot be extended indefinitely.
Suppose that, as in our algorithm, we only consider geodesic segments emanating from $p$ of length at most $R$.
The endpoints of the rays that do not extend to distance $R$ form the \emph{incompleteness locus} $\Sigma_s$ in the ball $B^3_R \subset \mathbb{H}^3$.
It follows from work of Thurston that $\Sigma_s$ is a discrete collection of geodesic segments, for generic values of $s$~\cite{Thurston82}.
Suppose that $\omega \from \mathcal{T}^{(2)} \to \mathbb{R}$ is the given weight function dual to a properly embedded surface $F$ in $M$.
We assume that the boundary of $F$ (if any) gives loops that bound disks in the filled manifold.
Thus $F$ also gives a cohomology fractal in the filled manifold.
\begin{remark}
Note that there is no canonical way of transferring a base point $b$ or view $D$ between two different geometric structures $M_s$ and $M_{s'}$. However, we can choose $b$ and $D$ for each $M_s$ in a way that gives us continuously varying pictures.
We do not dwell on the details here.
\end{remark}
\begin{figure}[htb!]
\centering
\subfloat[$s=10$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_10}
}
\subfloat[$s=5$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_5}
}
\subfloat[$s=4$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_4}
}
\\
\subfloat[$s=3$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_3}
}
\subfloat[$s=2$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_2}
}
\subfloat[$s=1.8$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_1p8}
}
\\
\subfloat[$s=1.6$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_1p6}
}
\subfloat[$s=1.4$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_1p4}
}
\subfloat[$s=1.2$]{
\includegraphics[width=0.31\textwidth]{Figures/FiguresMatthiasVer2/m122_4t_-t/cone_1p2}
}
\caption{Cohomology fractals for \texttt{m122($4s, -s$)} as $s$ varies.}
\label{Fig:Bending}
\end{figure}
\reffig{Bending} shows cohomology fractals for various $M_s$.
We see a kind of branch cut in the background to either side of the incompleteness locus $\Sigma_s$.
As we vary $s$, the background appears to bend along the geodesic. Other paths in the shape variety will give shearing as well as (or instead of) bending.
When we reach a Dehn filling, the two sides again match, and we see the structure of the closed filled manifold.
See \reffig{m122_4_-1}.
(The two sides can also match before we reach the Dehn filling due to symmetries of the cusped manifold lining up with the cone structure.)
\begin{figure}[htb]
\centering
\includegraphics[width = 0.65\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/without_evil}
\caption{Cohomology fractal for the Dehn filling \texttt{m122(4,-1)}.
This gives a final image for \reffig{Bending}, with $s = 1$.}
\label{Fig:m122_4_-1}
\end{figure}
\subsection{Numerical instability near the incompleteness locus}
\label{Sec:NumericalInstability}
Our algorithm, given in \refsec{Implement}, does not require completeness. However, a ray from $p$ to $\Sigma_s$ necessarily meets infinitely many tetrahedra.
This is because near $\Sigma_s$ we are far from the thick part of any tetrahedron, and the thin parts of the tetrahedra are almost ``parallel'' to $\Sigma_s$.
Thus the innermost loop of the algorithm will always halt by reaching the maximum step count; it follows that we cannot ``see through'' a neighbourhood of $\Sigma_s$. \reffig{m122_4_-1_with_evil} shows the cohomology fractal drawn with a small maximum step count, making such a neighbourhood visible.
\begin{figure}[htb]
\centering
\includegraphics[width = 0.65\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/with_evil}
\caption{Cohomology fractal for the Dehn filling \texttt{m122(4,-1)} drawn with an incomplete structure on an ideal triangulation.
Here the maximum number of steps $S$ is 55. Compare with \reffig{m122_4_-1}.}
\label{Fig:m122_4_-1_with_evil}
\end{figure}
Increasing the maximum step count shrinks the opaque neighbourhood of $\Sigma_s$. However, as a ray approaches $\Sigma_s$, its segments within the model tetrahedra tend to their ideal vertices. Thus the coordinates blow up; this appears to lead to numerically unstable behaviour. See \reffig{EvilCloseUp}. In the next section we describe a method to eliminate these numerical defects; we use this to produce \reffig{WithoutEvilCloseUp}.
\begin{figure}[htb]
\centering
\subfloat[Numerical instability near the incompleteness locus.]{
\includegraphics[width = 0.47\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/with_evil_detail}
\label{Fig:EvilCloseUp}
}
\subfloat[The same view with a material triangulation implementation.]{
\includegraphics[width = 0.47\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/without_evil_detail}
\label{Fig:WithoutEvilCloseUp}
}
\caption{A view of the cohomology fractal for the manifold \texttt{m122(4,-1)} near the incompleteness locus.
On the left we have taken the maximum number of steps $S$ sufficiently large to ensure that all rays reach distance $R$.}
\end{figure}
Note that numerical instability caused by rays approaching the ideal vertices also occurs for the complete structure on a cusped manifold. It is less noticeable in this case, however, because these errors occur in a small part of the visual sphere for typical positions.
\subsection{Material triangulations}
\label{Sec:MaterialTriangulations}
In order to remove instability around the incompleteness locus, we remove it. That is, we abandon (spun) ideal triangulations in favour of material triangulations. There is no change to the algorithm in \refsec{Implement}; we only alter the input data (the planes $P^i_k$ and face-pairing matrices $g^i_k$):
Given the edge lengths (see \refsec{MaterialGeometry}) for a material triangulation, \cite[Lemma~3.4]{matthiasVerifyingFinite} assigns hyperbolic isometries to the edges of a doubly truncated simplex (also known as permutahedron). These can be used to switch a tetrahedron between different standard positions (as defined in \cite[Definition~3.2]{matthiasVerifyingFinite}) where one of its faces is in the $\mathbb{H}^2\subset\mathbb{H}^3$ plane.
We assume that every tetrahedron is in $(0,1,2,3)$--standard position. Given a face-pairing, we apply the respective isometries to each of the two tetrahedra such that the faces in question line up in the $\mathbb{H}^2\subset\mathbb{H}^3$ plane. The face-pairing matrix $g^i_k$ is now given by composing the inverse of the first isometry with the second isometry. For example, let face 3 of one tetrahedron be paired with face 2 of another tetrahedron via the permutation $(0,1,2,3)\mapsto(0,1,3,2)$. To line up the faces, we need to bring the second tetrahedron from the default $(0,1,2,3)$--standard position into $(0,1,3,2)$--standard position by applying $\gamma_{012}$ from \cite[Lemma~3.4]{matthiasVerifyingFinite} which will thus be the face-pairing matrix, see \cite[Figure~4]{matthiasVerifyingFinite}.
It is left to compute the planes $P^i_k$.
Note that $P^i_3$ (for each $i$) is the canonical copy of $\mathbb{H}^2 \subset \mathbb{H}^3$.
All other $P^i_k$ can be obtained by applying the isometries from \cite[Lemma~3.4]{matthiasVerifyingFinite} again.
\subsection{Cannon--Thurston maps in the closed case}
\begin{figure}[htb]
\centering
\subfloat[McMullen's illustration~\cite{McMullen19}. See also~\cite{McMullenWeb}.]{
\includegraphics[width = 0.47\textwidth]{Figures/McMullen_fiber_refl_1000}
\label{Fig:McMullen}
}
\subfloat[Cohomology fractal, hyperideal view.]{
\includegraphics[width = 0.47\textwidth]{Figures/mcmullenFractal}
\label{Fig:OrbifoldCohomFractal}
}
\caption{Views of \texttt{m004(0,2)}.}
\label{Fig:Orbifold}
\end{figure}
Cannon and Thurston's original proof was in the closed case. Thurston's original images and all subsequent renderings, with one notable exception, are in the cusped case.
With some minor modifications, \refprop{LightDark} applies in the closed case; thus the cohomology fractals again approximate Cannon--Thurston maps.
We are aware of only one previous example in the closed case, due to McMullen~\cite{McMullenWeb}.
In \reffig{Orbifold} we give a rasterisation of his original vector graphics image~\cite{McMullen19}, and our version of the same view.
The filling \texttt{m004(0,2)} of the figure-eight knot complement has an incomplete hyperbolic metric.
The completion is a hyperbolic orbifold $\mathcal{O}$ with angle $\pi$ about the orbifold locus; the universal cover is $\mathbb{H}^3$.
Since the filling is a multiple of the longitude, the orbifold $\mathcal{O}$ is again fibred. An elevation of this fibre to $\mathbb{H}^3$ gives a Cannon--Thurston map. Our image, \reffig{OrbifoldCohomFractal}, is the cohomology fractal for the fibre in $\mathcal{O}$, in the hyperideal view. This is implemented using a material triangulation of an eight-fold cover $M$. Since $M$ with its fibre is commensurable with $\mathcal{O}$ with its fibre, we obtain the same image.
McMullen's image, reproduced in \reffig{McMullen}, was generated using his program \texttt{lim}~\cite{lim}. Briefly, let $\mathcal{O}^\infty$ be the infinite cyclic cover of $\mathcal{O}$. McMullen produces a sequence $\mathcal{O}^n$ of quasi-fuchsian orbifolds that converge in the geometric topology to $\mathcal{O}^\infty$. In each of these the convex core boundary is a pleated surface. The supporting planes of these pleated surfaces give round circles in $\bdy \mathbb{H}^3$. His image is then obtained by taking $n$ fairly large, passing to the universal cover of $\mathcal{O}^n$, and drawing the boundaries of many supporting planes~\cite{McMullen19}.
\subsection{Accumulation of floating point errors}
\label{Sec:Accumulate}
Our implementation uses single-precision floating point numbers. As we saw in \refsec{NumericalInstability}, this can cause problems when rays approach the vertices of ideal tetrahedra. However, floating point errors can accumulate for large values of $R$ whether or not rays approach the vertices. This can therefore also affect material triangulations.
With these problems in mind, we cannot claim that our images are rigorously correct.
However, for small values of $R$ we can be confident that our images are accurate. For very small values the endpoints of our rays all sit within the same tetrahedron, and so all pixels are the same colour. As we increase $R$ (as in \reffig{VisSphereRadii}), we see regions of constant colour, separated by arcs of circles. This is provably correct: (horo-)spheres meet the totally geodesic faces of tetrahedra in circles.
If we zoom in whilst increasing $R$, eventually floating point errors become visible.
\reffig{NumNoise} shows the results of an experiment to determine when this happens, for a material triangulation. At around $R = 11$, the circular arcs separating regions of the same colour become stippled. At around $R = 13$, the regions are no longer distinct.
\begin{remark}
\label{Rem:Quasigeodesic}
Perhaps surprisingly, this accumulation of error does not mean that our pictures are inaccurate. Suppose that the side lengths of our pixels are on a somewhat larger scale than the precision of our floating point numbers. For each pixel, our implementation produces a piecewise geodesic, starting in the direction through the centre of the pixel, but with small angle defect at each vertex.
Due to the nature of hyperbolic geometry, this piecewise geodesic cannot curve away from the true geodesic fast enough to leave the visual cone on the desired pixel. Thus, as long as the pixel size is not too small, each pixel is coloured according to some sample within that pixel.
\end{remark}
\begin{figure}[htb]
\centering
\subfloat[$R=10.5,$ field of view $\sim 0.015^\circ$]{
\includegraphics[width=0.45\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/dist_10_5.png}
}
\thinspace
\subfloat[$R=11.5,$ field of view $\sim 0.005^\circ$]{
\includegraphics[width=0.45\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/dist_11_5.png}
}
\subfloat[$R=12.5,$ field of view $\sim 0.002^\circ$]{
\includegraphics[width=0.45\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/dist_12_5.png}
}
\thinspace
\subfloat[$R=13.5,$ field of view $\sim 0.0007^\circ$]{
\includegraphics[width=0.45\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/dist_13_5.png}
}
\caption{We zoom into the cohomology fractal for \texttt{m122(4,-1)} while increasing $R$.
The field of view of the image is proportional to $e^{-R}$.
Noise due to rounding errors becomes visible at $R\sim 10.5$ and completely dominates the picture when $R\sim13.5$. }
\label{Fig:NumNoise}
\end{figure}
\section{Experiments}
\label{Sec:Experiments}
The sequence of images in \reffig{VisSphereRadii} suggests that some form of fractal object is coming into focus.
When $R$ is small, the function $\Phi_R = \Phi^{F,b,D}_{R}$ is constant on large regions of $D$.
As $R$ increases, these regions subdivide, producing intricate structures.
As we have defined it so far, the cohomology fractal depends on $R$.
A natural question is whether or not there is a limiting object that does not depend on $R$.
In this section we describe a sequence of experiments we undertook to explore this question.
Inspired by these, in Sections~\ref{Sec:CLT} and~\ref{Sec:Pixel} we provide mathematical explanations of our observations.
\subsection{These pictures do not exist}
A na\"ive guess might be that the cohomology fractal converges to a function as $R$ tends to infinity.
However, consider a ray following a closed geodesic $\gamma$ in $M$ that has positive algebraic intersection with the surface $F$.
Choosing $D$ so that it contains a tangent direction $v$ along $\gamma$, we see that $\Phi^{F}_{R}(v)$ diverges to infinity as $R$ tends to infinity.
The issue is not restricted to the measure zero set of rays along closed geodesics.
Suppose that $v$ is a generic vector in a material view $D$.
Recall that the geodesic flow is ergodic~\cite[Hauptsatz~7.1]{Hopf39}.
Thus the ray starting from $v$ hits $F$ infinitely many times.
So $\Phi^{F}_{R}(v)$ again diverges. Thus we have the following theorem.
\begin{theorem}
\label{Thm:NoPicture}
Suppose that $M$ is a finite volume, oriented hyperbolic three-manifold.
Suppose that $p$ is any point of $M$.
Suppose that $F$ is a compact, transversely oriented surface.
Then the limit
\[
\lim_{R \to \infty} \Phi_R(v)
\]
does not exist for almost all $v \in \UT{p}{M}$. \qed
\end{theorem}
\begin{remark}
To generalise \refthm{NoPicture} from finite volume to infinite volume manifolds,
we must replace Hopf's ergodicity theorem by some other dynamical property. For example, Rees~\cite[Theorem~4.7]{Rees81} proves the ergodicity of the geodesic flow on the infinite cyclic cover of a hyperbolic surface bundle. This is generalised to the bounded geometry case by Bishop and Jones~\cite[Corollary~1.4]{BishopJones97}. Both of these works rely in a crucial fashion on Sullivan's equivalent criteria for ergodicity~\cite[page~172]{Sullivan79}.
\end{remark}
One might hope that as $R$ tends to infinity, nearby points diverge in similar ways. If so, we might be able to rescale and have, say, $\Phi_{R}/R$ or $\Phi_{R}/\sqrt{R}$ converge.
However, increasing $R$ in our implementation produces the sequence of images shown in \reffig{VisSphereRadii2}.
We see that, as we increase $R$, the images become noisy as neighbouring pixels appear to decorrelate.
Eventually the fractal structure is washed away. Dividing the cohomology fractal by, say, some power of $R$ only changes the contrast. Depending on this power, the limit is either almost always zero or does not exist.
\begin{figure}[htb!]
\centering
\subfloat[$R=e^{2.5}$]{
\includegraphics[width=0.225\textwidth]{Figures/m004_exp2p5}
}
\subfloat[$R=e^{3}$]{
\includegraphics[width=0.225\textwidth]{Figures/m004_exp3}
}
\subfloat[$R=e^{4}$]{
\includegraphics[width=0.225\textwidth]{Figures/m004_exp4}
}
\subfloat[$R=e^{5}$]{
\includegraphics[width=0.225\textwidth]{Figures/m004_exp5}
}
\caption{Cohomology fractals for \texttt{m004}, with larger values of $R$. Each image here and in \reffig{VisSphereRadii} has $1000\times1000$ pixels. }
\label{Fig:VisSphereRadii2}
\end{figure}
\reffig{VisSphereRadii2} also demonstrates that \refrem{Quasigeodesic}, while valid, is misleading; it is true that for large $R$, every ray ends up somewhere within its pixel, but the colour one obtains is random noise.
This noise is due to undersampling.
In our images each pixel $U$ is coloured using a single ray passing (almost, as we saw in \refsec{Accumulate}) through its centre.
When $R$ is small relative to the side length of $U$, the function $\Phi_{R}|U$ is generally constant; thus any sample is a good representative.
As $R$ becomes larger
the function $\Phi_{R}|U$ varies more and more wildly; thus a single sample does not suffice.
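The standard remedy for undersampling is to average many subsamples per pixel, as in the supersampling experiments later in this section; this approximates integrating $\Phi_R$ over the pixel rather than evaluating it at a single point. In the sketch below, the sampling function and the regular subsample grid are illustrative assumptions.

```python
def supersample(sample, px, py, n):
    """Average an n x n grid of subsamples over pixel (px, py).

    `sample(x, y)` evaluates the image function (here, the cohomology
    fractal) at a point in screen coordinates; averaging approximates
    integrating over the pixel instead of sampling its centre.
    """
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += sample(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)
```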
\subsection{Take a step back and look from afar}
Let $D$ be an ideal view in the sense of \refsec{Views}. We identify $\pi(D)$, isometrically, with the euclidean plane $\mathbb{C}$. Using this identification, we may refer to the vectors of $D$ as $z_D$ for $z \in \mathbb{C}$.
Let $E$ be the ideal view obtained from $D$ by flowing outwards by a distance $d$. Thus, $\varphi_d(D) = E$. We similarly identify $\pi(E)$ with the euclidean plane, in such a way that for each $z \in \mathbb{C}$, we have $\varphi_d(z_D) = (e^d z)_E$. We may now state the following.
\begin{figure}[htbp]
\labellist
\small\hair 2pt
\pinlabel {$\bdy \mathbb{H}^3$} [r] at 0 0
\pinlabel {$D$} [r] at 34 218
\pinlabel {$E$} [r] at 173 74
\pinlabel {$R$} [l] at 340 50
\pinlabel {$R+d$} [l] at 493 122.5
\endlabellist
\includegraphics[width = 0.6\textwidth]{Figures/two_ideal_views}
\caption{Side view of ``screens'' (in red) for two ideal views, drawn in the upper half space model. The outward pointing normals to each horosphere point down in the figure. }
\label{Fig:TwoIdealViews}
\end{figure}
\begin{lemma}
\label{Lem:MovingIsScaling}
Suppose that $D$ is an ideal view and $E = \varphi_d(D)$.
Then the cohomology fractal based at $b$ satisfies
\[
\Phi_{R+d}^{\omega, b,D}\left(z_D\right) = \Phi_{R}^{\omega, b,E}\left((e^d z)_E\right)
\]
\end{lemma}
\begin{proof}
Consider \reffig{TwoIdealViews}.
\end{proof}
Said another way, if we fly backwards a distance $d$ and replace $R$ with $R+d$, we see the exact same image, scaled down by a factor of $e^d$.
As a consequence, in the ideal view we have the following.
\begin{remark}
Each small part of a cohomology fractal with large $R$ is the same as the cohomology fractal for a smaller $R$ with a different view.
\end{remark}
\begin{remark}
\label{Rem:Embiggen}
Since we know that we can make non-noisy images for small enough values of $R$,
we can therefore make a non-noisy image of a cohomology fractal for any value of $R$, as long as we are willing to use a screen with high enough resolution.
\end{remark}
The natural question then is how the \emph{perceived} image changes as we simultaneously increase the resolution and increase $R$.
This convergence question is different from the convergence of the cohomology fractal to a function as in \refthm{NoPicture}: when we look at a very large screen from far away, our eyes average the colours of nearby pixels. Thus, we move away from thinking of the limit as a function evaluated at points, towards thinking of it as a measure evaluated by integrating over a region. As we will see later, in fact the correct limiting object is a distribution.
\subsection{Supersampling}
\begin{figure}[htbp]
\begin{tabular}{c m{0.275\textwidth} m{0.275\textwidth} m{0.275\textwidth}}
& \multicolumn{1}{c}{$1\times 1$} & \multicolumn{1}{c}{$2\times2$} & \multicolumn{1}{c}{$128\times 128$} \\
4 &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0400_0001.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0400_0002.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0400_0128.png} \\
6 &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0600_0001.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0600_0002.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0600_0128.png} \\
8 &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0800_0001.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0800_0002.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_0800_0128.png} \\
10 &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1000_0001.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1000_0002.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1000_0128.png} \\
12 &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1200_0001.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1200_0002.png} &
\includegraphics[width=0.29\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1_sub4/b_subsampling_1200_0128.png} \\
\end{tabular}
\caption{\texttt{m122(4,-1)}. Field of view: $12.8^\circ$. $128\times 128$ pixels. For each image, the visual radius $R$ is given at the start of its row, while the number of samples per pixel is given at the top of its column.}
\label{Fig:RagainstNumSamples}
\end{figure}
To investigate this without requiring ever larger screens to view the results, we sample the cohomology fractal at many vectors in a grid within each pixel and average the results to give a colour for the pixel. That is, we employ supersampling. See \reffig{RagainstNumSamples}.
Here we draw cohomology fractals with $R$ ranging from $4$ to $12$, and with either $1$, $2^2$, or $128^2$ subsamples for each pixel. Each image has resolution $128 \times 128$.
\begin{remark*}
Note that some pdf readers do not show individual pixels with sharp boundaries: they automatically blur the image when zooming in. To combat this blurring and see the pixels clearly, we have scaled each image by a factor of three, so each pixel of our result is represented by nine pixels in these images.
\end{remark*}
With one sample per pixel, as we increase $R$ the fractal structure comes into focus but then is lost to noise. This matches our observations in Figures~\ref{Fig:VisSphereRadii} and~\ref{Fig:VisSphereRadii2}.
Taking subsamples and averaging makes little difference for small $R$: the only advantage is an anti-aliasing effect on the boundaries between regions of constant value. However, subsamples help greatly in reducing noise for larger $R$. With $2\times 2$ subsamples, we see much less noise at $R=10$, though the noise becomes noticeable again at $R=12$. Taking $128\times128$ samples seems to be very stable: there is almost no difference between the images with $R=10$ and $R=12$. This suggests that the perceived images converge.
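The noise-reduction effect of supersampling can be illustrated with a toy model (not our actual renderer): we pretend that, within one pixel at large $R$, each ray returns the true pixel value plus independent noise, and then average a $k \times k$ grid of subsamples. All names and parameters here are hypothetical.

```python
import random
import statistics

def sample_value(mu, sigma, rng):
    # One ray through the pixel: the "true" value mu plus noise,
    # a stand-in for the fluctuation of the cohomology fractal
    # within a pixel at large R.
    return mu + rng.gauss(0.0, sigma)

def pixel_colour(mu, sigma, k, rng):
    # Supersample with a k x k grid of rays and average the results.
    samples = [sample_value(mu, sigma, rng) for _ in range(k * k)]
    return statistics.fmean(samples)

rng = random.Random(1)
mu, sigma = 0.7, 4.0  # hypothetical pixel value and noise level

# Averaging k*k independent samples shrinks the noise in the pixel
# colour by a factor of k.
for k in (1, 2, 8):
    renders = [pixel_colour(mu, sigma, k, rng) for _ in range(2000)]
    print(k, round(statistics.pstdev(renders), 2))
```

In this model the noise in the averaged colour decays like $\sigma/k$, consistent with the columns of \reffig{RagainstNumSamples} becoming progressively more stable as the number of subsamples grows.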
\subsection{Mean and variance within a pixel}
To better understand how subsampling interacts with increasing $R$, in \reffig{SubPixEvolution} we graph the average value within a selection of pixel-sized regions as $R$ increases.
\begin{figure}[htb]
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_0_0}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_0_1}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_0_2}\\
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_1_0}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_1_1}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_1_2}\\
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_2_0}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_2_1}
\includegraphics[width=0.3\textwidth]{Figures/FiguresMatthiasVer2/m122_4_-1/pixel_evolution_series_1_2_2}
\caption{The graphs of the average value of the cohomology fractal for \texttt{m122(4,-1)} over various square regions $U$ with field of view $0.1^\circ$. Thus, these regions are the same size as the pixels of \reffig{RagainstNumSamples}. Each graph is computed by taking $1000\times 1000$ samples. We also show the envelopes of 0.5, 1.0, and 1.5 standard deviations.
}
\label{Fig:SubPixEvolution}
\end{figure}
When $R$ is small, the graphs are more-or-less step functions, as much of the time the pixel $U$ is inside of a constant value region of the cohomology fractal. The graphs are also very similar for small $R$. This is because the pixels are close to each other, so all of their rays initially cross the same sequence of faces of the triangulation.
Around $R=6$, we reach the ``last step'' of the step function; beyond this point, the regions of constant value become smaller than $U$.
For $R \geq 10$, the mean seems to settle down, while the standard deviation appears to grow like $\sqrt{R}$.
Again this suggests that the perceived images converge. However, if the standard deviation continues to increase with $R$, then eventually any number of subsamples within each pixel will succumb to noise.
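The $\sqrt{R}$ growth of the standard deviation is what one expects from a sum of roughly independent contributions. A toy sketch (a plain random walk standing in for the weight $\Phi_T$ along a ray; all parameters are our own, not from the experiments above):

```python
import math
import random
import statistics

def weight(T, rng):
    # Toy stand-in for Phi_T along one ray: T signed crossings of
    # the surface, each contributing +1 or -1.
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(T))

rng = random.Random(0)
rays = 2000  # number of rays sampled within one "pixel"

for T in (100, 400, 900):
    values = [weight(T, rng) for _ in range(rays)]
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    # The mean stays bounded while the standard deviation grows
    # like sqrt(T): the ratio sd / sqrt(T) hovers near a constant.
    print(T, round(mean, 1), round(sd / math.sqrt(T), 2))
```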
\subsection{Histograms}
We have looked at the standard deviation of a sample of values within a pixel. Next, we analyse the distribution of these values in more detail. See \reffig{m122Histogram}.
\begin{figure}[htb]
\centering
\subfloat[Histogram of weights and the normal distribution with the same mean and standard deviation.]{
\includegraphics[width = 6.5cm]{Figures/FiguresMatthiasVer2/m122_4_-1/distribution}
\label{Fig:m122(4,1)Dist}
}
\subfloat[Cohomology fractal.]{
\includegraphics[height = 5cm]{Figures/FiguresMatthiasVer2/m122_4_-1/without_evil}
\label{Fig:m122(4,1)CohomFrac}
}
\caption{Statistics for a cohomology fractal of \texttt{m122(4,-1)} for a square region with field of view $20^\circ$ and $R=e^2$.}
\label{Fig:m122Histogram}
\end{figure}
We fix $R = e^2$. We sample $\Phi_R$ at each point of a $1000 \times 1000$ grid within a square of a material view with field of view $20^\circ$. We chose a relatively large field of view so that we get an ``in focus'' image of the cohomology fractal with a relatively small value of $R$. We are being cautious here to get good data, avoiding the potential problems that our implementation has with large values of $R$, as discussed in \refsec{Accumulate}.
We histogram the resulting data with appropriate choices of bucket widths. In \reffig{m122(4,1)Dist} we show the histogram for our closed example, \texttt{m122(4,-1)}, together with the normal distribution with the same mean and standard deviation; the data seems to fit this well.
In \reffig{m122(4,1)CohomFrac} we show the sample data as a 1000 by 1000 pixel image.
\begin{figure}[htb]
\centering
\subfloat[Histogram of weights and the normal distribution with the same mean and standard deviation.]{
\includegraphics[width = 6.5cm]{Figures/FiguresMatthiasVer2/s789/distribution}
\label{Fig:s789Dist}
}
\subfloat[Cohomology fractal.]{
\includegraphics[height = 5cm]{Figures/FiguresMatthiasVer2/s789/view}
}
\caption{Statistics for a cohomology fractal of \texttt{s789} for a class vanishing on cusp.}
\label{Fig:s789Histogram}
\end{figure}
\begin{figure}[htb]
\centering
\subfloat[Histogram of weights and the normal distribution with the same mean and standard deviation.]{
\includegraphics[width = 6.5cm]{Figures/FiguresMatthiasVer2/s789/distribution2}
}
\subfloat[Cohomology fractal.]{
\includegraphics[height = 5cm]{Figures/FiguresMatthiasVer2/s789/view2}
}
\caption{Statistics for a cohomology fractal of \texttt{s789} for a class not vanishing on cusp.}
\label{Fig:s789Histogram2}
\end{figure}
We repeat this experiment back in the cusped case with \texttt{s789}. See Figures~\ref{Fig:s789Histogram} and~\ref{Fig:s789Histogram2}. Here we show the cohomology fractal for two different cohomology classes $[\omega]\in H^1(M)$. The cohomology class shown in \reffig{s789Histogram} vanishes when restricted to $\bdy M$, while in \reffig{s789Histogram2} it does not. The distribution appears to be normal when the cohomology class vanishes on $\bdy M$.
When $[\omega]$ does not vanish on $\bdy M$, something more complicated appears to be happening. One feature here is that the tails are much too long for a normal distribution. A heuristic explanation for this is that in a neighbourhood of the cusp, a geodesic ray crosses the surface repeatedly in the same direction. This allows it to gain a linear weight in logarithmic distance.
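One standard way to quantify ``tails too long for a normal distribution'' is the excess kurtosis, which vanishes for a normal distribution and is positive for heavy tails. The following toy comparison uses simulated data (not our fractal samples): Gaussian values against a heavy-tailed symmetrised exponential.

```python
import random
import statistics

def excess_kurtosis(xs):
    # Fourth standardised central moment minus 3: zero for normal
    # data, positive when the tails are heavier than normal.
    m = statistics.fmean(xs)
    var = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / var**2 - 3.0

rng = random.Random(0)
n = 50_000

gauss = [rng.gauss(0.0, 1.0) for _ in range(n)]
# Symmetrised exponential (Laplace) samples: a stand-in for a
# heavy-tailed weight distribution; its excess kurtosis is 3.
heavy = [rng.choice((-1, 1)) * rng.expovariate(1.0) for _ in range(n)]

print(round(excess_kurtosis(gauss), 2))
print(round(excess_kurtosis(heavy), 2))
```

An analogous statistic computed from the samples behind \reffig{s789Histogram2} would make the heavy-tail observation quantitative.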
\section{The central limit theorem}
\label{Sec:CLT}
In this section, we prove a central limit theorem for the values of the cohomology fractal $\Phi_T$ across a pixel.
That is, the distribution of the values of the scaled cohomology fractal $R_T=\Phi_T/\sqrt{T}$
converges to a normal distribution with mean zero.
\subsection{Setup}
\label{Sec:formalDefView}
We recall the framework introduced in \refsec{Views} that unifies the material, ideal, and hyperideal views.
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold. We will use the abbreviations $X=\UT{}{M}$ and $\cover{X}=\UT{}{\cover{M}}$.
We call a two-dimensional subset $D \subset \cover{X}$ a \emph{view} if it is one of the following.%
\begin{itemize}
\item For a material view, fix a basepoint $p\in\cover{M}$ and let $D=\UT{p}{\cover{M}}$.
Note that $D$ can be identified isometrically with $S^2$.
\item For the ideal view, fix a horosphere $H\subset\cover{M}$ and let $D$ be the set of outward normals to $H$.
\item For the hyperideal view, fix a hyperbolic plane $H\subset\cover{M}$ and let $D$ be the set of normals to $H$ facing one of the two possible directions.
\end{itemize}
Note that $D\subset \cover{X}$ has a riemannian metric induced from the riemannian metric on $\cover{X}$. This metric also endows $D$ with an area two-form and associated measure denoted by $\zeta=\zeta_D$ and $\mu=\mu_D$. Recall that $\pi\from\cover{X}\to \cover{M}$ is the projection to the base space.
\begin{remark}
\label{Rem:extrinsicCurvCorr}
Note that there is another riemannian metric on $D$ for the ideal and hyperideal views, coming from isometrically identifying the horosphere or hyperbolic plane $H=\pi(D)$ with $\mathbb{E}^2$, respectively $\mathbb{H}^2$. Up to a constant factor, this metric is the same as the above metric. The factor is trivial for the hyperideal view; for the ideal view it is $\sqrt{2}$, arising as $\sqrt{1+K_H^2}$ from the extrinsic curvature $K_H$ of $H$. We have $K_H=1$, so that adding $K_H$ to the ambient curvature $-1$ of $\mathbb{H}^3$ gives the horosphere's intrinsic curvature $0$.
\end{remark}
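For the ideal view, the factor $\sqrt{2}$ can be checked directly in the upper half-space model. The following sketch (our own sanity check, using the Sasaki metric on the unit tangent bundle) shows where $\sqrt{1+K_H^2}$ comes from.

```latex
% Sanity check: the sqrt(2) factor for the ideal view, in the upper
% half-space model with metric ds^2 = (dx^2 + dy^2 + dz^2)/z^2.
Take the horosphere $H = \{z = 1\}$ with unit normal field
$v = z\,\partial_z$, and move along $H$ via $\gamma(s) = (s, 0, 1)$.
Then $\gamma'(s) = \partial_x$ has hyperbolic length $1$, so $s$ is
arc length on $H$.  Using
$\nabla_{\partial_x}\partial_z = -\tfrac{1}{z}\,\partial_x$,
along $\gamma$ we compute
\[
\nabla_{\gamma'}\, v
  = \nabla_{\partial_x}(z\,\partial_z)
  = (\partial_x z)\,\partial_z + z\,\nabla_{\partial_x}\partial_z
  = -\partial_x
\]
The Sasaki metric on the unit tangent bundle assigns the curve
$s \mapsto (\gamma(s), v(s))$ the speed
\[
\sqrt{\|\gamma'\|^2 + \|\nabla_{\gamma'}\, v\|^2}
  = \sqrt{1 + 1} = \sqrt{2}
\]
matching $\sqrt{1 + K_H^2}$ with $K_H = 1$.
```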
In this notation, the definition of the cohomology fractal, for a given closed one-form $\omega\in\Omega^1(M)$ and basepoint $b\in\cover{M}$, becomes the following. For $v\in D$, we have
\begin{equation}
\label{eq:cohomFractView}
\Phi^{\omega, b, D}_{T}(v) = \int_0^T \omega(\varphi_t(v)) dt + \int_{b}^{\pi(v)} \omega
\end{equation}
For the second integral, any path from $b$ to $\pi(v)$ in $\cover{M}$ can be chosen, as $\omega$ is closed. This integral is constant in $v$ for the material view, since $\pi(D)=\{p\}$. Choosing $W\in\Omega^0(\cover{M})$ so that $dW=\cover{\omega}$ and $W(b) = 0$, we can simply write
\[
\Phi_T(v)=\Phi^{\omega,b,D}_{T} (v) = W(\varphi_T(v))
\]
The central limit theorem will apply to probability measures $\nu_D$ that are absolutely continuous with respect to the area measure $\mu_D$ on $D$. We use the usual notation $\nu_D\ll\mu_D$ for absolute continuity.
By the Radon-Nikodym theorem, this is equivalent to saying that the measure $\nu_D$ is given by
\begin{equation}
\label{Eqn:RN}
\nu_D(U) = \int_U h\cdot d\mu_D
\end{equation}
where $h\geq 0$ is measurable with $\int_D h\cdot d\mu_D = 1$.
\begin{remark}
\label{Rem:MeasuresForms}
In \refsec{Pixel}, we will switch from measures $\nu_D$ to forms $\eta_D$, for the following reason.
Here, in \refsec{CLT} we follow the well-established notation of \cite{Sinai60,zweimuellerInfMeasurePreserving2007}.
However, the transformation laws in \refsec{Pixel} are better stated in the language of forms.
In both cases we consider probability measures or two-forms that are ``products'':
namely of a suitable function $h\from D\to\mathbb{R}$ with the area measure $\mu_D$ or two-form $\zeta_D$ respectively.
The function $h$ should be thought of as an indicator (or a kernel) function for a pixel.
\end{remark}
\subsection{The statement of the central limit theorem}
\label{Sec:StatementCLT}
The goal of this section is to prove the following.
\begin{theorem}
\label{Thm:CLT}
Fix a connected, orientable, finite volume, complete hyperbolic three-manifold $M$ and a closed, non-exact, compactly supported one-form $\omega\in\Omega_c^1(M)$. There is $\sigma > 0$ such that for all basepoints $b$, for all views $D$ with area measure $\mu_D$, for all probability measures $\nu_D\ll \mu_D$, and for all $\alpha \in \mathbb{R}$, we have
\[
\lim_{T\to\infty} \nu_D\left[ v\in D : \frac{\Phi_T(v)}{\sqrt{T}} \leq \alpha \right] = \int_{-\infty}^\alpha \frac{1}{\sigma\sqrt{2\pi}} e^{-(s/\sigma)^2/2} ds
\]
where $\Phi_T=\Phi^{\omega,b,D}_{T}$ is the associated cohomology fractal.
\end{theorem}
Let us recall some notions from probability to clarify what this means.
Let $(P, \nu)$ be a probability space.
For each $T \in \mathbb{R}_{>0}$, let $R_T \from P \to \mathbb{R}$ be a measurable function.
For each $T$, the probability measure $\nu$ on $P$ induces a probability measure $\nu \circ R_T^{-1}$ on $\mathbb{R}$ telling us how the values of the random variable $R_T$ are distributed when sampling $P$ with respect to $\nu$.
Let $\psi$ be a probability measure on $\mathbb{R}$.
\begin{definition}
\label{Def:ConvergeInDistribution}
We say that the random variables $R_T$ \emph{converge in distribution} to $\psi$
if the measures $\nu \circ R_T^{-1}$ converge weakly to $\psi$.
This is denoted by $\nu \circ R_T^{-1} \Rightarrow \psi$.
\end{definition}
Here, by the Portmanteau theorem, we can use any of several equivalent definitions of weak convergence of measures.
We are only interested in the case where $\psi$ is absolutely continuous with respect to Lebesgue measure on $\mathbb{R}$;
that is, we can write $\psi(V) = \int_V p(x) dx$ for any measurable $V \subset \mathbb{R}$.
Note that here $p \from \mathbb{R} \to \mathbb{R}_{\geq 0}$ is the \emph{probability density function} for $\psi$.
Convergence in distribution $\nu \circ R_T^{-1} \Rightarrow \psi$ is then equivalent to saying that for all $\alpha$ we have
\[
\lim_{T\to \infty} \nu\left[ x\in P: R_T(x) \leq \alpha \right] = \int_{-\infty}^\alpha p(s) \cdot ds
\]
We define
\[
n_\sigma(s)=\frac{1}{\sigma\sqrt{2\pi}} e^{-(s/\sigma)^2/2} \quad\mbox{and}\quad \psi_\sigma(V) = \int_V n_\sigma(x) dx
\]
The latter is the \emph{normal distribution} with mean zero and standard deviation $\sigma$.
\begin{example}
\newcommand{\mathrm{head}}{\mathrm{head}}
\newcommand{\mathrm{tail}}{\mathrm{tail}}
The process of flipping coins can be modelled as follows. Set $P=\{\mathrm{head}, \mathrm{tail}\}^\mathbb{N}$. We define a measure $\nu_P$ on $P$ as follows.
Given any prefix $v$ of length $n$, the set of all infinite words in $P$ starting with $v$ has measure $2^{-n}$.
Let $S_i\from P\to \mathbb{R}$ be the random variable with $S_i(w) = +1$ if $w_i$ is heads and $S_i(w) = -1$ if $w_i$ is tails. Define $\Sigma_N = S_0 + S_1 + \cdots + S_{N-1}$. The classical central limit theorem states that
\[
\nu_P \circ R_N^{-1} \Rightarrow \psi_1\quad\mbox{where}\quad R_N=\frac{\Sigma_N}{\sqrt{N}}\from P\to\mathbb{R}
\qedhere
\]
\end{example}
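The classical statement in the example above is easy to check numerically; a minimal simulation (parameters our own):

```python
import math
import random
import statistics

def R_N(N, rng):
    # Flip N fair coins, sum the +/-1 outcomes, rescale by sqrt(N).
    total = sum(1 if rng.random() < 0.5 else -1 for _ in range(N))
    return total / math.sqrt(N)

rng = random.Random(0)
samples = [R_N(1000, rng) for _ in range(5000)]

# The distribution of R_N approaches psi_1: mean 0, standard
# deviation 1.
print(round(statistics.fmean(samples), 2))
print(round(statistics.pstdev(samples), 2))

# Empirical CDF at alpha = 1 versus the normal CDF value
# (1 + erf(1/sqrt(2)))/2 = 0.8413...
frac = sum(1 for x in samples if x <= 1.0) / len(samples)
print(round(frac, 2))
```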
We can now restate \refthm{CLT} as
\[
\nu_D \circ R_T^{-1}\Rightarrow\psi_\sigma \quad\mbox{where}\quad R_T=\frac{\Phi_T}{\sqrt{T}}\from D\to\mathbb{R}
\]
\subsection{Sinai's theorem}
\label{Sec:Sinai}
Our proof of \refthm{CLT} starts with Sinai's central limit theorem for geodesic flows~\cite{Sinai60}.
We use the following version of Sinai's theorem which is adopted from \cite[Theorem~VIII.7.1 and subsequent Nota Bene]{FranchiLeJan}.
This applies to functions that are not derivatives in the following sense. Recall that $X = \UT{}{M}$.
\begin{definition}
\label{Def:lieDerivative}
Let $f\from X\to\mathbb{R}$ be a smooth function.
We say that $f$ is a \emph{derivative} if there is a smooth function $F\from X\to\mathbb{R}$ such that
\[
f(v)= \left.\frac{dF(\varphi_t(v))}{d t}\right|_{t=0} \qedhere
\]
\end{definition}
\noindent
Let $\mu_X = \mu_{\operatorname{Haar}}/\mu_{\operatorname{Haar}}(X)$ be the normalised Haar measure.
\begin{theorem}[Sinai-Le Jan's Central Limit Theorem]
\label{Thm:sinai}
Fix a connected, orientable, finite volume, complete hyperbolic three-manifold $M$.
Let $f \from X = \UT{}{M} \to \mathbb{R}$ be a compactly supported, smooth function with $\int_X f \cdot d\mu_X = 0$.
Assume $f$ is not a derivative.
Let
\[
R_T(v) =\frac{\int_0^T f(\varphi_t(v))dt}{\sqrt{T}}
\]
Then there is a $\sigma>0$ such that $\mu_X \circ R_T^{-1} \Rightarrow \psi_\sigma$. \qedhere
\end{theorem}
In fact, the constant $\sigma$ appearing in \refthm{sinai} is the square root of the \emph{variance} of $f$ which Franchi--Le Jan denote by $\mathcal{V}(f)$.
They give a formula for $\mathcal{V}(f)$ in \cite[Theorem~VIII.7.1]{FranchiLeJan} and state that $\mathcal{V}(f)$ vanishes if and only if $f$ is a derivative.
\begin{remark}
To relate \refthm{sinai} to~\cite[Theorem~VIII.7.1]{FranchiLeJan}, note that Franchi--Le Jan think of $f$ as a function on the frame bundle of $\cover{M}$ that is both $\Gamma$ and $\SO(2)$--invariant.
Since the smooth function $f$ is compactly supported, $f$ has bounded and H\"olderian derivatives as required by Franchi--Le Jan.
Note that they also require $f$ to not be a derivative (denoted by $\mathcal{L}_0h$, see \cite[(VIII.1)]{FranchiLeJan}) of a function $h$ but allow $h$ to be a function on the frame bundle.
However, if an $\SO(2)$--invariant $f$ is the derivative of a function $h$ on the frame bundle, it is also the derivative of an $\SO(2)$--invariant function on the frame bundle.
\end{remark}
We deduce \refthm{CLT} from Sinai's theorem in three steps.
\begin{enumerate}
\item
\refthm{sinaiOne} generalises Sinai's theorem to arbitrary probability measures $\nu_X \ll \mu_X$ on the five-dimensional $X = \UT{}{M}$.
\item
\refthm{sinaiTwo} restricts from $X$ to the two-dimensional view $D$ using a measure $\nu_D \ll \mu_D$.
\item
Finally, we show that the term $\int_{b}^{\pi(v)}\omega$ from \eqref{eq:cohomFractView} can be added to obtain $\Phi_T$.
\end{enumerate}
\subsection{Generalising Sinai's Theorem}
We begin with a definition.
\begin{definition}
\label{Def:ConvInProb}
Let $(P, \mu)$ be a finite measure space. For $n \in \mathbb{N}$, let $Q_n\from P\to \mathbb{R}$ be a measurable function.
We say that \emph{$Q_n$ converges to zero in probability} and write $\mu \circ Q_n^{-1}\to 0$ if for all $\varepsilon >0$ we have
\[
\lim_{n\to\infty} (\mu\circ Q_n^{-1}) \left((-\infty, -\varepsilon) \cup (\varepsilon,\infty)\right) = 0 \qedhere
\]
\end{definition}
The following result is called \emph{strong distributional convergence}, see \cite[Proposition~3.4]{zweimuellerInfMeasurePreserving2007}.
\begin{theorem}
\label{Thm:strongErgodicConvergence}
Let $(P, \mu)$ be a finite measure space and $T \from P \to P$ be an ergodic, measure-preserving transformation.
For all $n\in\mathbb{N}$, let $R_n \from P \to \mathbb{R}$ be a measurable function.
Let $Q_n = R_n\circ T-R_n$ and suppose that $\mu\circ Q_n^{-1}\to 0$.
Let $\psi$ be a probability measure on $\mathbb{R}$.
If we have $\nu\circ R_n^{-1}\Rightarrow \psi$ for some probability measure $\nu\ll \mu$, then we have $\nu\circ R_n^{-1}\Rightarrow \psi$ for all probability measures $\nu\ll \mu$. \qed
\end{theorem}
\begin{remark}
We have specialised Zweim\"uller's Proposition~3.4 in \cite{zweimuellerInfMeasurePreserving2007} to finite measure spaces.
To obtain Zweim\"uller's result for $\sigma$--finite measure spaces $(P,\mu)$, we need to replace the requirement $\mu\circ Q_n^{-1}\to 0$ by the following weaker requirement denoted by $Q_n\xrightarrow{\mu} 0$ in \cite[Footnote~3]{zweimuellerInfMeasurePreserving2007}: for all probability measures $\nu\ll \mu$ we have $\nu\circ Q_n^{-1}\to 0$.
To see that $Q_n\xrightarrow{\mu} 0$ is weaker, we can use the following standard result:
for any $\nu \ll \mu$ we have
\[
\sup \{ \nu(A) : \mbox{$A$ measurable with $\mu(A)\leq \varepsilon$} \}
\to 0\quad\mbox{as}\quad\varepsilon\to 0
\]
The requirements $\mu\circ Q_n^{-1}\to 0$ and $Q_n\xrightarrow{\mu} 0$ are equivalent if $\mu$ is finite.
\end{remark}
Using this, we now give our first variant of Sinai's theorem.
\begin{theorem}
\label{Thm:sinaiOne}
With the same hypotheses as in \refthm{sinai}, we have the following. There is a $\sigma>0$ such that for any probability measure $\nu_X \ll \mu_X$ we have
$\nu_X \circ R_T^{-1} \Rightarrow \psi_\sigma$.
\end{theorem}
\begin{proof}
By \refthm{sinai}, there is a $\sigma$ such that $\mu_X \circ R_T^{-1} \Rightarrow \psi_\sigma$ as $T\to\infty$.
Note that the random variables in \refthm{strongErgodicConvergence} are indexed by $n\in\mathbb{N}$ instead of $T\in\mathbb{R}$, but convergence in distribution as $T\to\infty$ is equivalent to convergence in distribution along every sequence $T_n\to\infty$.
In other words, it suffices to show that for any sequence $(T_n)$ with $T_n\to\infty$, the random variables $S_n=R_{T_n}$ satisfy $\nu_X \circ S_n^{-1}\Rightarrow \psi_\sigma$.
In order to apply \refthm{strongErgodicConvergence}, we need the following two claims.
\begin{claim}
The time-one geodesic flow $\varphi_1$ is ergodic.
\end{claim}
\begin{proof}
Set $G=\mathbb{Z}$, with the generator acting by $\varphi_1$. By \cite{PughShub}, the transformation $\varphi_1$ is an \emph{Anosov element} of this action; hence, by \cite[Theorem~1.1]{PughShub}, it is ergodic.
\end{proof}
\begin{claim}
Let $Q_n=S_n\circ \varphi_1-S_n$. Then, $\nu_X \circ Q_n^{-1}\to 0$.
\end{claim}
\begin{proof}
We will prove a stronger statement: $\| Q_n \|_\infty \to 0$.
\begin{align*}
\left| Q_n(v)\right|
& = \left| \frac{\int_1^{T_n+1} f(\varphi_t(v)) dt }{\sqrt{T_n}} - \frac{\int_0^{T_n} f(\varphi_t(v)) dt}{ \sqrt{T_n}}\right| \\
& = \left| \frac{\int_{T_n}^{T_n+1} f(\varphi_t(v)) dt}{\sqrt{T_n}} - \frac{\int_0^1 f(\varphi_t(v)) dt }{ \sqrt{T_n}}\right| \\
& \leq \frac{2 \| f\|_\infty}{\sqrt{T_n}} \qedhere
\end{align*}
\end{proof}
We can now finish the proof of \refthm{sinaiOne}.
We apply \refthm{strongErgodicConvergence} with $R_n$ replaced by $S_n$ and $T$ replaced by $\varphi_1$.
\end{proof}
\subsection{Coordinates}
\label{Sec:coordinatesView}
Given a view $D$, we introduce coordinates for a neighbourhood of $D$ in $\cover{X}=\UT{}{\cover{M}}\homeo\UT{}{\mathbb{H}^3}$ as follows;
it may be helpful to consult \reffig{localCoordinates}.
Fix $r \in \cover{X}$.
If $r$ is close enough to $D$ in $\cover{X}$, then there is an $x_{\sfu} \in D$ such that the rays emanating from $x_{\sfu}$ and $r$ converge to the same ideal point in $\bdy_\infty\cover{M}$.
Consider the set $H = H^{\sf{s}}(x_{\sfu}) \subset \cover{X}$ such that
\begin{itemize}
\item $x_{\sfu} \in H$,
\item $\pi(H)$ is a horosphere, and
\item $H$ is the set of ``inward pointing'' normals to $\pi(H)$.
\end{itemize}
Let $x_{\sfs}$ be the intersection of $H$ with the flow line through $r$.
Let $x_{\sff}$ be the signed distance from $x_{\sfs}$ to $r$ along this line.
The triple
\[
(x_{\sfu}, x_{\sff}, x_{\sfs})
\qquad
\mbox{with $x_{\sfu} \in D$, $x_{\sff} \in \mathbb{R}$, and $x_{\sfs} \in H^{\sf{s}}(x_{\sfu})$}
\]
determines the vector $r \in \cover{X}$ uniquely.
\begin{figure}[htbp]
\centering
\subfloat[Coordinates.]{
\labellist
\small\hair 2pt
\pinlabel {$H$} at 173 210
\pinlabel {$x_{\sfu}$} at 157 174
\pinlabel {$x_{\sfs}$} at 114 162
\pinlabel {$x_{\sff}$} at 76 175
\pinlabel {$r$} at 90 204
\pinlabel {$r_s$} at 113 128
\pinlabel {$D$} at 223 135
\endlabellist
\includegraphics[width = 0.47\textwidth]{Figures/localCoordinates}
\label{Fig:localCoordinates}}
\hspace{-0.035\textwidth}
\subfloat[Flow.]{
\labellist
\small\hair 2pt
\pinlabel {$H$} at 207 190
\pinlabel {$x_{\sfu}$} at 136 142
\endlabellist
\includegraphics[width = 0.47\textwidth]{Figures/stableDirection}
\label{Fig:StableDirection}}
\caption{Coordinates for $\cover{X}=\UT{}{\cover{M}}=\UT{}{\mathbb{H}^3}$ and flow of a box $(x_{\sfu}, (-\varepsilon,\varepsilon), B^{\sf{s}}_\varepsilon(\pi(x_{\sfu})))$.}
\end{figure}
Suppose that $N$ is a submanifold of $\cover{X}$.
Let $d_N(p,q)$ denote the length of the shortest curve in $N$ connecting $p$ and $q$. Given $x_{\sfu}\in D$, let
\[
B^{\sf{s}}_\varepsilon(x_{\sfu}) = \{ x_{\sfs}\in H : d_H(x_{\sfu}, x_{\sfs}) \leq \varepsilon\} \quad\mbox{where}\quad H = H^{\sf{s}}(x_{\sfu})
\]
Let
\[
D_\varepsilon = \{ (x_{\sfu}, x_{\sff}, x_{\sfs}) : x_{\sfu}\in D, x_{\sff} \in (-\varepsilon, \varepsilon), x_{\sfs} \in B^{\sf{s}}_\varepsilon(x_{\sfu}) \}
\]
\begin{remark}
\label{Rem:product}
Note that the subscripts appearing in the coordinates $(x_{\sfu}, x_{\sff}, x_{\sfs})$ refer to the unstable, flow, and stable foliations.
\begin{itemize}
\item The points $(x_{\sfu}, 0, x_{\sfu})$ give a copy of $D$, which is unstable.
\item If we fix $x_{\sfu}$ and $x_{\sfs}$, and vary $x_{\sff}$, then we obtain a geodesic flow line.
\item Also, if we fix $x_{\sfu}$ and $x_{\sff}$, and vary $x_{\sfs}$, then we obtain a stable horosphere.
\end{itemize}
Note that each $H = H^{\sf{s}}(x_{\sfu})$ is isometric to a (pointed) copy of $\mathbb{C}$.
However, the coordinates above do not live in a geometric product $D \times \mathbb{R} \times \mathbb{C}$.
They instead form a smooth fibre bundle over $D$. (In the material case, the view $D$ is a copy of $S^2$.
If we factor away the flow direction from our coordinates, what remains is isomorphic to the non-trivial bundle $\mathrm{T} S^2$.)
Thus we will only locally appeal to a ``product structure'' on these coordinates.
\end{remark}
The following lemma is deduced from the exponential convergence inside of stable leaves.
See \reffig{StableDirection}.
\begin{lemma}
\label{Lem:distFlowStable}
For all $t\geq 0$, all $\varepsilon$ with $0 < \varepsilon < 1$, and all $(x_{\sfu}, x_{\sff}, x_{\sfs}) \in D_\varepsilon$, we have
\[
d_\cover{X}(\varphi_t(x_{\sfu}),\varphi_t(x_{\sfu},x_{\sff},x_{\sfs})) \leq 2 \varepsilon
\]
\end{lemma}
\begin{proof}
This follows from
\begin{align*}
d_\cover{X}((x_{\sfu},t,x_{\sfu}), (x_{\sfu},t,x_{\sfs})) \leq d_{\varphi_t(H)}((x_{\sfu},t,x_{\sfu}), (x_{\sfu},t,x_{\sfs})) & \\
= e^{-t} d_H((x_{\sfu},0,x_{\sfu}), (x_{\sfu},0,x_{\sfs})) &\leq e^{-t} \varepsilon \leq \varepsilon \\
\mbox{and} \quad d_\cover{X}((x_{\sfu},t,x_{\sfs}), (x_{\sfu},t+x_{\sff},x_{\sfs})) &= |x_{\sff}| \leq \varepsilon
\end{align*}
where $H=H^{\sf{s}}(x_{\sfu})$.
\end{proof}
\subsection{Proof of the central limit theorem}
We now use these coordinates to continue with the proof of \refthm{CLT}.
\begin{lemma}
\label{Lem:almostEqualRandomVariables}
Let $(P,\nu)$ be a probability space.
Let $R_T, S_T\from P\to \mathbb{R}$ be two one-parameter families of measurable functions.
Assume that $\nu\circ R_T^{-1}\Rightarrow \psi$, where $\psi$ is a probability measure on $\mathbb{R}$ with bounded probability density function $p\from\mathbb{R}\to\mathbb{R}_{\geq 0}$.
Assume that there is a monotonically growing family $U_T\subset P$ of measurable sets such that $\left\| (R_T - S_T)|U_T\right\|_\infty\to 0$ and $\nu(P-U_T)\to 0$.
Then $\nu\circ S_T^{-1}\Rightarrow \psi$.
\end{lemma}
\begin{proof}
Fix $\alpha$ and let
\[
P_T = \nu \left[x\in P : S_T(x) \leq \alpha \right]\quad\mbox{and}\quad Q_T = \nu \left[x\in P : S_T(x) > \alpha \right]
\]
We need to show that for every $\varepsilon > 0$, there is a $T_0$ such that for all $T\geq T_0$ we have
\[
\int_{-\infty}^{\alpha} p(s) \cdot ds - \varepsilon \leq P_T \leq \int_{-\infty}^{\alpha} p(s) \cdot ds + \varepsilon
\]
We only deal with the second inequality since the first inequality can be derived in an analogous way using $P_T + Q_T = 1$.
We have the following estimate.
\[
P_T \leq \nu\left[ x\in P:R_T(x) \leq \alpha + \left\| (R_T - S_T) |U_T\right\|_\infty \right] + \nu(P-U_T)
\]
Fix $\varepsilon > 0$.
Let $\delta=\varepsilon / (3 \|p\|_\infty)$.
By hypothesis, we have for all large enough $T$
\[
P_T \leq \nu\left[ x\in P:R_T(x) \leq \alpha + \delta \right] + \frac{\varepsilon}{3}
\]
Because $\nu\circ R_T^{-1}\Rightarrow \psi$, we furthermore have for all large enough $T$
\begin{align*}
P_T &\leq \frac{\varepsilon}{3} + \int_{-\infty}^{\alpha + \delta} p(s) \cdot ds + \frac{\varepsilon}{3}\\
& \leq \frac{\varepsilon}{3} + \int_{-\infty}^{\alpha} p(s) \cdot ds + \|p \|_\infty \delta + \frac{\varepsilon}{3} = \int_{-\infty}^{\alpha} p(s) \cdot ds + \varepsilon \qedhere
\end{align*}
\end{proof}
We return to the case of interest where $f$ is given by a one-form $\omega$.
\begin{lemma}
\label{Lem:lieDerivativeNonVanishing}
Let $\omega \in \Omega^1(M)$ be closed but not exact.
Then $\omega\from X\to\mathbb{R}$ is not a derivative in the sense of \refdef{lieDerivative}.
\end{lemma}
\begin{proof}
We prove the contrapositive:
that is, if $\omega$ is a derivative in the sense of \refdef{lieDerivative} then $\omega = d W$ for a function $W \from M \to \mathbb{R}$.
Fix a basepoint $p \in M$.
We define $W(q) = \int_\gamma \omega$. Here $\gamma$ is a path from $p$ to $q$.
All that is left is to show that $W$ is well-defined.
So, suppose that $\gamma'$ is another path from $p$ to $q$.
Thus $z = \gamma - \gamma'$ is a cycle.
Let $z^*$ be the geodesic representative of $z$.
Since $\omega$ is closed we have $\omega(z) = \omega(z^*)$.
Since $\omega$ is a derivative, say of $F$ as in \refdef{lieDerivative}, the integral along $z^*$ telescopes: parametrising $z^*$ by the geodesic flow, with $v$ tangent to $z^*$ and $\ell$ the length of $z^*$, we have $\omega(z^*) = F(\varphi_\ell(v)) - F(v) = 0$, since $\varphi_\ell(v) = v$. We are done.
\end{proof}
\begin{theorem}
\label{Thm:sinaiTwo}
With the same hypotheses as in \refthm{CLT}, we have the following.
There is $\sigma > 0$ such that for all views $D$ with area measure $\mu_D$, for all probability measures $\nu_D\ll \mu_D$, and for all $\alpha\in\mathbb{R}$, we have
\[
\lim_{T\to\infty} \nu_D\left[ v\in D : R_T(v) \leq \alpha \right] = \int_{-\infty}^\alpha \frac{1}{\sigma\sqrt{2\pi}} e^{-(s/\sigma)^2/2} ds
\]
where $R_T= \int_0^T \omega(\varphi_t(v)) dt / \sqrt{T}$.
\end{theorem}
\begin{proof}
The one-form $\omega$ is not a derivative by \reflem{lieDerivativeNonVanishing}.
Taking $f=\omega$, let $\sigma$ be as in \refthm{sinaiOne}.
Fix a probability measure $\nu_D\ll\mu_D$. We define a measure $\nu_\cover{X}$ on $\cover{X}$ using the coordinates $(x_{\sfu}, x_{\sff}, x_{\sfs})$ by taking the product of, in order,
\begin{itemize}
\item $\nu_D$
\item the Lebesgue measure on $\mathbb{R}$ restricted to $[-1,1]$
\item the Lebesgue measure on $\mathbb{C}\cong H^{\sf{s}}(x_{\sfu})$ restricted to the unit disk $B^{{\sf{s}}}_1(x_{\sfu})$
\end{itemize}
We scale $\nu_\cover{X}$ to be a probability measure.
Note that the Lebesgue measure on $H^{\sf{s}}(x_{\sfu})$ does not depend on the isometric identification of $\mathbb{C}$ with $H^{\sf{s}}(x_{\sfu})$.
Thus $\nu_\cover{X}$ is well-defined.
By summing over fundamental domains, the probability measure $\nu_\cover{X}$ descends to a probability measure $\nu_X\ll \mu_X$ on $X$. Given that $R_T\from\cover{X}\to\mathbb{R}$ is $\pi_1(M)$--invariant, \refthm{sinaiOne} yields $\nu_\cover{X}\circ R_T^{-1}\Rightarrow \psi_\sigma$.
Note that $\nu_\cover{X}$ is supported in the closure of $D_1$ (as defined before \refrem{product}). We have a projection $p\from D_1 \to D$ where $p(x_{\sfu},x_{\sff},x_{\sfs})=(x_{\sfu},0,x_{\sfu})$. By construction, we have $\nu_\cover{X}(p^{-1}(U)) = \nu_D(U)$ for any measurable set $U\subset D$.
\begin{claim}
\label{Clm:projConv}
We have
\[
\nu_\cover{X}\circ S_T^{-1}\Rightarrow \psi_\sigma\quad\mbox{where}\quad S_T = R_T\circ p
\]
\end{claim}
\begin{proof}
We take $P=D$ and $U_T=D$ for all $T$. By \reflem{almostEqualRandomVariables}, it remains to show that $\| R_T - S_T \|_\infty \to 0$. Let $W\from\cover{M}\to\mathbb{R}$ be a primitive of $\cover{\omega}$; that is, $dW=\cover{\omega}$. In an abuse of notation, we abbreviate $W\circ \pi$ as $W$. We can now write
\begin{align*}
R_T(x_{\sfu},x_{\sff},x_{\sfs}) & = \frac{W(\varphi_T(x_{\sfu},x_{\sff},x_{\sfs})) - W(x_{\sfu},x_{\sff},x_{\sfs})}{\sqrt{T}} \\
S_T(x_{\sfu},x_{\sff},x_{\sfs}) & = \frac{W(\varphi_T(x_{\sfu},0,x_{\sfu})) - W(x_{\sfu},0,x_{\sfu})}{\sqrt{T}}
\end{align*}
Since $1/\sqrt{T}\to 0$, it is sufficient to show that both of
\begin{align*}
W(\varphi_T(x_{\sfu},x_{\sff},x_{\sfs})) & -W(\varphi_T(x_{\sfu},0,x_{\sfu})) \textrm{ and }\\
W(x_{\sfu},x_{\sff},x_{\sfs}) & -W(x_{\sfu},0,x_{\sfu})
\end{align*}
are bounded by twice the Lipschitz constant of $W$. This follows from \reflem{distFlowStable} when replacing $\varepsilon$ by $1$ and setting $t$ to either $T$ or $0$.
\end{proof}
Fix $\alpha$.
\refthm{sinaiTwo} follows from
\begin{align*}
\nu_D[x\in D: R_T(x) \leq \alpha] & = \nu_D[x\in D: S_T(x) \leq \alpha] \\
&= \nu_\cover{X}(p^{-1}(\{x\in D: S_T(x) \leq \alpha\}))\\
&= \nu_\cover{X}[x\in D_1:S_T(x)\leq \alpha]
\end{align*}
converging to $\int^\alpha_{-\infty} n_\sigma(s)\, ds$ by \refclm{projConv}; here $n_\sigma(s) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(s/\sigma)^2/2}$ is the density appearing in the statement.
\end{proof}
\begin{proof}[Proof of \refthm{CLT}]
Note that \refthm{sinaiTwo} shows convergence for $$R_T(v)=\frac{\int_0^T f(\varphi_t(v))dt}{\sqrt{T}}$$ but we need to show convergence for $Q_T(v)= \Phi_T(v)/\sqrt{T}$, the difference being
\[
\Delta(v) = R_T(v)-Q_T(v) = \frac{\int_b^{\pi(v)} \omega}{\sqrt{T}}=\frac{\int_b^{\pi(u)} \omega + \int_{\pi(u)}^{\pi(v)} \omega}{\sqrt{T}}
\]
where $u\in D$ is a fixed basepoint.
Thus, we need to show that \reflem{almostEqualRandomVariables} applies when taking $P=D$.
Denote the constant $\int_b^{\pi(u)} \omega$ by $C$.
Let $C'$ be a bound on the absolute value of $\omega\from X\to\mathbb{R}$.
It is convenient to let $$U_T = \{ v\in D : d_D(u, v) \leq \sqrt[4]{T} \}$$
Then $\| \Delta|_{U_T} \|_\infty \leq (C + C' \sqrt[4]{T}) /\sqrt{T} \to 0$ as $T\to\infty$.
Since the sets $U_T$ exhaust $D$ and $\nu_D$ is a finite measure, $\nu_D(D-U_T)\to 0$.
\end{proof}
\section{The pixel theorem}
\label{Sec:Pixel}
In this section, we prove that the cohomology fractal gives rise to a distribution at infinity. That is, integrating against the cohomology fractal and then taking the limit as $R$ tends to infinity gives a continuous linear functional on smooth, compactly supported test functions.
\subsection{Motivation}
Throughout the paper, we have drawn many images of cohomology fractals, always depending on a visual radius $R$.
The obvious question is whether there is a limiting image as $R$ tends to infinity.
It turns out that the answer critically hinges on the question of what a pixel is.
As we showed in \refthm{NoPicture}, thinking of a pixel as a sampled point does not work.
After realising this, our next thought was that the cohomology fractal might converge to a signed measure $\mu$.
We managed to prove this for squares (as well as for regions with piecewise smooth boundary).
However, our proof does not generalise to arbitrary measurable sets.
See \refsec{RemarkMeasureHard} for a discussion.
We finally arrived at the notion of thinking of a pixel as a smooth test function; see~\cite{Smith95}.
The cohomology fractal now assigns to a pixel its weighted ``average value'';
in other words, we obtain a well-defined \emph{distribution}.
This distribution satisfies various transformation laws;
these describe how it changes as we alter the chosen cocycle, basepoint, or view.
To prove these we rely heavily on the \emph{exponential mixing} of the geodesic flow.
\subsection{Background and statement}
\label{Sec:StatementPixel}
Before stating the theorem we establish our notation.
We define $\omega$, $b$, $D$, and $T$ as in \refsec{formalDefView}.
However, as mentioned in \refrem{MeasuresForms},
we switch from using the area measure $\mu_D$ to the area two-form $\zeta_D$ and
from a probability measure $\nu_D$ to a compactly supported two-form $\eta_D$.
To obtain $\eta = \eta_D\in\Omega^2_c(D)$, we set $\eta_D = h\zeta_D$;
here $h\in\Omega^0_c(D)$ is compactly supported and smooth.
That is, $h$ is Hodge dual to $\eta$.
The function $h$ should be thought of as the kernel function for a pixel.
The discussion below could be phrased completely in terms of $h$.
However, using $\eta$ allows us to neatly express the transformation laws between different views.
\begin{definition}
\label{Def:Distribution}
For a compactly supported two-form $\eta \in \Omega^2_c(D)$, we define
\[
\Phi^{\omega,b,D}_{T}(\eta) = \int_{D} \Phi^{\omega,b,D}_{T} \eta
\quad \mbox{and} \quad
\Phi^{\omega,b,D}(\eta)= \lim_{T\to\infty} \Phi^{\omega,b,D}_{T}(\eta) \qedhere
\]
\end{definition}
As we shall see, $\Phi^{\omega,b,D}$ is a \emph{distribution}:
a continuous linear functional on $\Omega^2_c(D)$. We recall the topology on $\Omega^2_c(D)$ in the proof of \refthm{Pixel}.
We will use $\int_{D}$ to denote the \emph{canonical distribution} $\eta \mapsto \int_{D} \eta$.
To give a transformation law between views $D$ and $E$, we will need a way to relate one to the other.
Recall that $\cover{M}$ is isometric to $\mathbb{H}^3$; thus we have $\bdy_\infty \cover{M} \homeo \CP^1$.
As $t$ tends to infinity, the flow $\varphi_t$ takes a unit tangent vector $v \in D$ to some point $\varphi_\infty(v) \in \bdy_\infty\cover{M}$.
This induces a conformal embedding $i_D$ of $D$ into $\bdy_\infty \cover{M}$.
We define $i_E$ similarly.
We take $i_{E,D} = i_D^{-1} \circ i_E$ where it is defined.
This is a \emph{conformal isomorphism} from (a subset of) $E$ to (a subset of) $D$.
We can now state the main result of this section.
\begin{theorem}[Pixel theorem]
\label{Thm:Pixel}
Suppose that $M$ is a connected, orientable, finite volume, complete hyperbolic three-manifold.
Fix
a closed, compactly supported one-form $\omega \in \Omega_c^1(M)$,
a basepoint $b \in \cover{M}$, and
a view $D$.
\begin{enumerate}
\item
\label{Itm:wellDefDist}
Then $\Phi^{\omega, b, D}$ is well-defined and is a distribution.
\item
\label{Itm:cocycleIndep}
Given $\omega' \in \Omega_c^1(M)$ with $[\omega]=[\omega']$, there is a constant $C$ so that we have
\[
\Phi^{\omega', b, D} - \Phi^{\omega, b, D}= C \cdot \int_D
\]
\item
\label{Itm:basePointPixelThm}
Given another basepoint $b' \in \cover{M}$, we have
\[
\Phi^{\omega, b', D} - \Phi^{\omega, b, D}= \left[ \int_b^{b'} \omega \right] \cdot \int_D
\]
\item
\label{Itm:viewTransform}
Given another view $E$ and a two-form $\eta \in \Omega^2_c(\operatorname{image}(i_{E,D}))$, we have
\[
\Phi^{\omega,b,D}(\eta) = \Phi^{\omega, b, E}(i_{E,D}^* \, \eta)
\]
\end{enumerate}
\end{theorem}
The last property gives us a distribution at infinity as follows.
\begin{corollary}
\label{Cor:BoundaryDistribution}
Suppose that $M$ is a connected, orientable, finite volume, complete hyperbolic three-manifold.
Fix
a closed, compactly supported one-form $\omega \in \Omega_c^1(M)$ and
a basepoint $b \in \cover{M}$.
Then there is a distribution $\Phi^{\omega,b}$ on $\bdy_\infty \cover{M}$ so that,
for any view $D$ and any $\eta \in \Omega^2_c(D)$, we have
\[
\pushQED{\qed}
\Phi^{\omega,b,D}(\eta) =
\Phi^{\omega, b}((i_D^{-1})^* \eta) \qedhere
\popQED
\]
\end{corollary}
\subsection{Proof of the pixel theorem}
\label{Sec:PixelProof}
We now describe some background necessary to prove the theorem.
Throughout this section, we fix a connected, orientable, finite volume, complete hyperbolic three-manifold $M$.
We use the abbreviations $X=\UT{}{M}$ and $\cover{X}=\UT{}{\cover{M}}$.
Let
\begin{equation}
\label{Eqn:yt}
Y_t = \int_D (\omega \circ \varphi_t) \eta
\end{equation}
Fubini's theorem implies that
\begin{equation}
\Phi^{\omega,b,D}_{T}(\eta) = \Phi^{\omega,b,D}_{0}(\eta) + \int_0^T Y_t dt
\end{equation}
so much of the proof boils down to obtaining exponential decay of $Y_t$.
In this section we will use the Haar measure $\mu_{\operatorname{Haar}}$ for integrals over $X = \UT{}{M}$.
We will use the shorthand $dv = d\mu_{\operatorname{Haar}}(v)$ throughout.
We also need to introduce the Sobolev norm $S_m = S_{m,\infty}$ for smooth functions $f$ on homogeneous spaces.
First consider functions $f \from G \to \mathbb{R}$ where $G$ is a Lie group.
Fix a basis for the Lie algebra of $G$; we think of the elements in this basis as left-invariant vector fields on $G$.
The Sobolev norm $S_{m}(f)$ is the maximum of all $L_\infty$--norms of functions obtained by differentiating $f$ up to $m$ times using these vector fields in any order.
Suppose that $\Gamma$ and $H$ are (respectively) discrete and compact subgroups of $G$.
The Sobolev norm of $f \from \Gamma \backslash G \slash H \to \mathbb{R}$ is the Sobolev norm of the lift of $f$ to $G$.
As usual, we have $M = \Gamma \backslash \mathbb{H}^3 \homeo \Gamma \backslash \PSL(2,\mathbb{C}) \slash \operatorname{PSU}(2)$.
Likewise, we have $X = \UT{}{M} \homeo \Gamma \backslash \PSL(2,\mathbb{C}) \slash \operatorname{PSO}(2)$.
For any of the three views, material, ideal, or hyperideal, we can also express $D$ in this fashion.
For example, in the material view we have $D \homeo S^2 \homeo \operatorname{PSU}(2) \slash \operatorname{PSO}(2)$.
For $\eta \in \Omega^2_c(D)$, define $S_m(\eta) = S_m(h)$ where $h$ is the Hodge dual of $\eta$.
Note that the Sobolev norm depends on our choice of basis;
however, changing the basis changes the resulting norm by a bounded factor and thus only changes the constant $C$ in the following lemma.
\begin{lemma}
\label{Lem:expoDecay}
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold.
There is a constant $m \in \mathbb{N}$ such that the following is true.
Fix a view $D$ in $M$ and a smooth, compactly supported function $f \from X =\UT{}{M} \to \mathbb{R}$ with $\int_X f(v) dv = 0$.
Fix a compact set $K \subset D$.
There are constants $C > 0$ and $c > 0$ such that for all two-forms $\eta \in \Omega_K^2(D)$
supported in $K$ and for all $t\geq 0$, we have
\[
\left| \int_D (f \circ \varphi_t) \eta \right| \leq Ce^{-ct} S_m(\eta)
\]
\end{lemma}
To prove this, we use the exponential decay of correlation coefficients for geodesic flows.
This is a much studied area. We will rely on \cite{KelmerOh}
because they explicitly give the dependence of the decay on the Sobolev norms of the functions involved. For hyperbolic, finite volume three-manifolds, their theorem can be simplified to the following.
\begin{theorem}
\label{Thm:mixing}
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold. Then there exists $m\in\mathbb{N}$, $C>0$ and $c>0$ with the following property. For any smooth functions $f,g\from X=\UT{}{M}\to\mathbb{R}$ with $\int_{X} f(v)dv = 0$ and for all $t\geq 0$, we have
\begin{equation}
\left|\int_{X} f(\varphi_t(v)) g(v) dv\right| \leq C e^{-ct} S_{m}(f) S_{m}(g)
\end{equation}
\end{theorem}
\begin{proof}
The more general \cite[Theorem~3.1]{KelmerOh} relates to \refthm{mixing} as follows.
They integrate over the frame bundle $\Gamma\backslash\PSL(2,\mathbb{C})$ using the Bowen--Margulis--Sullivan measure. However, we can think of a function $X\to\mathbb{R}$ as a $\operatorname{PSO}(2)$--invariant function on the frame bundle, and the BMS measure is simply the Haar measure in the case of a hyperbolic, finite volume three-manifold $\Gamma\backslash \mathbb{H}^3$.
Note that \cite[Theorem~3.1]{KelmerOh} requires the functions $f$ and $g$ to be supported on a unit neighbourhood of the preimage of the convex core of $M$.
However, for finite volume $M$, the convex core is just $M$.
Furthermore, conventions for the Sobolev norm $S_m$ differ in whether to take the sum or maximum of the $L_\infty$--norms of derivatives; however, the resulting norms differ by at most a constant factor, and so are equivalent.
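Explicitly (introducing, just for this remark, $S_m^{\max}$ and $S_m^{\mathrm{sum}}$ for the two conventions, and $N$ for the number of ways of differentiating at most $m$ times using the fixed basis), we have
\[
S_m^{\max}(f) \;\leq\; S_m^{\mathrm{sum}}(f) \;\leq\; N \cdot S_m^{\max}(f)
\]
so the two norms differ by at most the factor $N$.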
\end{proof}
To prove \reflem{expoDecay} using \refthm{mixing}, we construct test functions $h_\varepsilon \from \cover{X}\to\mathbb{R}$ that tend to the given two-form $\eta\in\Omega^2_c(D)$ in the sense that $Y_t$ can be approximated by
\begin{equation}
\label{Eqn:blurredIndicator}
Y_{t,\varepsilon}=\int\limits_\cover{X} \cover{f}(\varphi_t(v)) h_\varepsilon(v) dv
\end{equation}
Note that there are several incompatibilities between $\eta$ and $h_\varepsilon$:
\begin{enumerate}
\item $\eta\in\Omega^2_c(D)$ is a two-form but $h_\varepsilon$ has to be a function.
\item $D\subset\cover{X}$, but the integral in \refthm{mixing} is over $X$.
\item $D$ is two-, not five-dimensional.
\end{enumerate}
The first issue is solved by using the Hodge dual $h\in\Omega^0_c(D)$. That is, $\eta = h\zeta$ where $\zeta=\zeta_D$ is the area form on $D$.
For the second issue, we reformulate \refthm{mixing} as follows:
\begin{theorem}
\label{Thm:mixing2}
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold.
There is a constant $m\in\mathbb{N}$ such that the following is true.
Fix a smooth, compactly supported function $f\from X=\UT{}{M}\to\mathbb{R}$ with $\int_X f(v)dv = 0$ and a compact set $K\subset \cover{X}=\UT{}{\cover{M}}$.
There exists $C>0$ and $c>0$ such that for all smooth functions $g\from\cover{X}\to\mathbb{R}$ supported in $K$ and all $t\geq 0$, we have
\begin{equation}
\left|\int_\cover{X} \cover{f}(\varphi_t(v)) g(v) dv\right| \leq C e^{-ct} S_{m}(g)
\end{equation}
\end{theorem}
\begin{proof}
Note that
$$\int_\cover{X} \cover{f}(\varphi_t(v)) g(v) dv = \int_X f(\varphi_t(v)) g_{\sum}(v) dv$$
where $g_{\sum}(v)$ is the sum of all $g(\cover{v})$ where $\cover{v}\in\cover{X}$ is a preimage of $v\in X$.
Since $K$ meets only finitely many translates of a fundamental domain of $M$, the sum $g_{\sum}(v)$ has a bounded number of terms.
Since $f$ has compact support, $S_m(f)$ is finite and the result follows from \refthm{mixing}.
\end{proof}
To address the third issue, we make the following definition.
\begin{definition}
\label{Def:Bump}
Define an \emph{$\varepsilon$--bump function} by
$$b_\varepsilon(x)= \begin{cases}
\exp\left(\frac{1}{(x/\varepsilon)^2 - 1}\right) & \mbox{if}\quad |x|<\varepsilon, \\
0 & \mbox{otherwise}
\end{cases}$$
and set $B=\left[\int_{-\infty}^{\infty} b_1(x)dx\right] \left[\int_0^{\infty} b_1(r) 2\pi r dr\right]$.
\end{definition}
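To motivate the constant $B$: substituting $x = \varepsilon u$ in the first integral and $r = \varepsilon \rho$ in the second gives
\[
\int_{-\infty}^{\infty} b_\varepsilon(x)\,dx = \varepsilon \int_{-\infty}^{\infty} b_1(u)\,du
\quad\mbox{and}\quad
\int_0^{\infty} b_\varepsilon(r)\, 2\pi r\, dr = \varepsilon^2 \int_0^{\infty} b_1(\rho)\, 2\pi \rho\, d\rho
\]
so the product of the two integrals equals $B\varepsilon^3$. This is why dividing by $B\varepsilon^3$ below normalises the bump in the flow and stable directions.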
We again use the coordinates on $\cover{X}$ already introduced in \refsec{coordinatesView}.
Recall that $H=H^{\sf{s}}(x_{\sfu})$. We define $r_s=d_H(x_{\sfu},x_{\sfs})$. Again, see \reffig{localCoordinates}. We now define
\begin{equation}
\label{Eqn:defHEps}
h_\varepsilon(x_{\sfu},x_{\sff},x_{\sfs})=h(x_{\sfu}) \cdot \frac{b_\varepsilon(x_{\sff})b_\varepsilon(r_s)}{B \varepsilon^3}
\end{equation}
Using Fubini's theorem, we can now write
$$Y_{t,\varepsilon} = \! \int_D \int_\mathbb{R} \int_{H^{\sf{s}}(x_{\sfu})} \cover{f}(\varphi_t(x_{\sfu},x_{\sff},x_{\sfs})) h_\varepsilon(x_{\sfu},x_{\sff},x_{\sfs}) J_D(x_{\sfu}, x_{\sff}, x_{\sfs}) dx_{\sfs} dx_{\sff} dx_{\sfu}$$
where $dx_{\sfu}$ and $dx_{\sfs}$ are using the area measures on $D$ and $H^{\sf{s}}(x_{\sfu})$, respectively, and where $J_D(x_{\sfu},x_{\sff},x_{\sfs})$ is the smooth function such that
\begin{equation}
\label{Eqn:ScaleFactor}
d\mu_X = J_D(x_{\sfu},x_{\sff},x_{\sfs}) dx_{\sfs} dx_{\sff} dx_{\sfu}
\end{equation}
Note that, by construction, $J_D$ is invariant under isometries fixing $D$. In particular, $J_D(x_{\sfu},0,x_{\sfu})$ is a positive constant. We set $J_0 = J_D(x_{\sfu},0,x_{\sfu})$.
Defining
\begin{equation}
\label{Eqn:SliceBlurredIndicator}
Y^{\sf{u}}_{t,\varepsilon}(x_{\sfu}) = \int_\mathbb{R} \int_{H^{\sf{s}}(x_{\sfu})} \cover{f}(\varphi_t(x_{\sfu},x_{\sff},x_{\sfs})) J_D(x_{\sfu}, x_{\sff}, x_{\sfs}) \frac{b_\varepsilon(x_{\sff})b_\varepsilon(r_s)}{B \varepsilon^3} dx_{\sfs} dx_{\sff},
\end{equation}
we can write
$$Y_{t,\varepsilon} = \int_D Y^{\sf{u}}_{t,\varepsilon}(x_{\sfu}) h(x_{\sfu}) dx_{\sfu}$$
\begin{lemma}
\label{Lem:approxStableAndFlow}
For any smooth, compactly supported function $f\from X\to\mathbb{R}$ and any view $D$, there is a $C>0$ such that for all $t\geq 0$, for all $1>\varepsilon>0$, and for all $x_{\sfu}\in D$, we have
$$\left| \cover{f}(\varphi_t(x_{\sfu})) J_0 - Y^{\sf{u}}_{t,\varepsilon}(x_{\sfu}) \right| \leq C\varepsilon$$
\end{lemma}
\begin{proof}
Let $$b_\varepsilon(x_{\sfu}, x_{\sff}, x_{\sfs}) = \frac{b_\varepsilon(x_{\sff}) b_\varepsilon(r_s)}{B\varepsilon^3}$$
Note that the support of $b_\varepsilon(x_{\sfu}, x_{\sff}, x_{\sfs})$ is the closure of $D_\varepsilon$ and that $\iint b_\varepsilon(x_{\sfu},x_{\sff},x_{\sfs}) dx_{\sfs} dx_{\sff} = 1$.
Thus, it suffices to show that there is a $C>0$ such that for all $t\geq 0$, for all $1>\varepsilon>0$, and for all $(x_{\sfu},x_{\sff},x_{\sfs})\in D_\varepsilon$, we have
\begin{equation}
\label{Eq:lemApprox}
\left| \cover{f}(\varphi_t(x_{\sfu})) J_0 - \cover{f}(\varphi_t(x_{\sfu},x_{\sff},x_{\sfs})) J_D(x_{\sfu}, x_{\sff}, x_{\sfs})\right| \leq C \varepsilon
\end{equation}
\begin{claim}
Let $f \from X \to \mathbb{R}$ be a smooth, compactly supported function.
Let $L=L(f)$ be a Lipschitz constant for $f$ and let $\cover{f}\from\cover{X}\to\mathbb{R}$ be its lift.
For all $t \geq 0$, for all $1 > \varepsilon > 0$, and for all $(x_{\sfu}, x_{\sff}, x_{\sfs}) \in D_\varepsilon$, we have
\[
\left| \cover{f}(\varphi_t(x_{\sfu})) - \cover{f}(\varphi_t(x_{\sfu},x_{\sff},x_{\sfs}))\right| \leq 2L\varepsilon \enspace \mbox{and} \enspace \left|\cover{f}(\varphi_t(x_{\sfu}))\right| \leq \| f \|_\infty
\]
\end{claim}
\begin{proof}
The first inequality follows from \reflem{distFlowStable}. The second is by definition.
\end{proof}
\begin{claim}
The function $J_D$ has a finite Lipschitz constant $L'$ when restricted to $D_1$. Thus, for all $1>\varepsilon>0$ and for all $(x_{\sfu},x_{\sff},x_{\sfs})\in D_\varepsilon$, we have
\[
\left| J_0 - J_D(x_{\sfu},x_{\sff},x_{\sfs})\right| \leq 2L'\varepsilon
\]
\end{claim}
\begin{proof}
Since $J_D$ is invariant under isometries preserving $D$, we may assume that $x_{\sfu}$ is fixed. The domain $\{x_{\sfu}\}\times(-1,1)\times B^{{\sf{s}}}_1(x_{\sfu})$ has compact closure, so $J_D$ has a finite Lipschitz constant $L'$ when restricted to it. The claim now follows from \reflem{distFlowStable}.
\end{proof}
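For completeness, here is the algebraic splitting behind the final bound (abbreviating, just for this step, $f = \cover{f}(\varphi_t(x_{\sfu}))$, $f' = \cover{f}(\varphi_t(x_{\sfu},x_{\sff},x_{\sfs}))$, and $J' = J_D(x_{\sfu},x_{\sff},x_{\sfs})$):
\[
f J_0 - f' J' = (f - f')\,J_0 + f\,(J_0 - J') - (f - f')\,(J_0 - J')
\]
The three terms are bounded using the two claims together with $|f| \leq \|f\|_\infty$.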
Using the above two claims, the left-hand side of \eqref{Eq:lemApprox} is bounded by $2L\varepsilon \cdot J_0 +\|f\|_\infty \cdot 2L'\varepsilon + 2L\varepsilon \cdot 2L' \varepsilon$. Thus, since $\varepsilon < 1$, setting $C=2L J_0 + 2L' \|f\|_\infty + 4L L'$ suffices to prove \reflem{approxStableAndFlow}.
\end{proof}
\begin{lemma}
\label{Lem:approxStableFlow}
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold and $f\from X\to\mathbb{R}$ a smooth, compactly supported function. Fix a view $D$ and a compact set $K\subset D$. There is a $C>0$ such that for all smooth $h\from D\to\mathbb{R}$ supported in $K$ and for all $t\geq 0$, $1>\varepsilon>0$, we have
$$\left| Y_t J_0 - Y_{t,\varepsilon} \right| \leq C\varepsilon \| h\|_\infty$$
\end{lemma}
\begin{proof}
We have
$$Y_t J_0 - Y_{t,\varepsilon} = \int_D \left( \cover{f}(\varphi_t(x_{\sfu})) J_0 - Y^{\sf{u}}_{t,\varepsilon}(x_{\sfu})\right) h(x_{\sfu}) dx_{\sfu} $$
which is bounded by
$$C\varepsilon \int_D |h(x_{\sfu})| dx_{\sfu}$$
by \reflem{approxStableAndFlow}. The result now follows since $K$ has finite area.
\end{proof}
\begin{lemma}
\label{Lem:sobNormBound}
Let $M$ be a connected, orientable, finite volume, complete hyperbolic three-manifold. Fix $m\in\mathbb{N}$. Fix a view $D$ and a compact set $K\subset D$. There is a $C>0$ such that for all smooth $h\from D\to\mathbb{R}$ supported in $K$ and all $1>\varepsilon>0$, we have
$$S_m(h_\varepsilon) \leq C \varepsilon^{-(m+3)} S_m(h).$$
\end{lemma}
\begin{proof}
We estimate $S_m(h_\varepsilon)$ by using that $h_\varepsilon$ is separable as defined in \eqref{Eqn:defHEps}. In suitable coordinates, the second factor can be written as
$$g_\varepsilon\from\mathbb{R}^3\to\mathbb{R},\quad
(x, y, z)\mapsto \frac{b_\varepsilon(x) b_\varepsilon(\sqrt{y^2+z^2})}{B\varepsilon^3}$$
We have $g_\varepsilon(u)=g_1(u/\varepsilon) / \varepsilon^{3}$; by the chain rule, each derivative of $g_\varepsilon$ of order $k$ has supremum norm $\varepsilon^{-(k+3)}$ times that of the corresponding derivative of $g_1$, so $S_m(g_\varepsilon) \leq \varepsilon^{-(m+3)} S_m(g_1)$ for $1 > \varepsilon > 0$. Using that all $h_\varepsilon$ are supported in a common compact set, the lemma follows from the following fact about Sobolev norms.
Recall that a Sobolev norm requires a choice of vector fields that pointwise span the tangent space of the manifold (or a bundle over the manifold when using the Sobolev norm defined earlier by lifting a function $f\from \Gamma \backslash G \slash H\to \mathbb{R}$ to $G$). However, any two such choices yield Sobolev norms that differ by a bounded factor when restricting to a small enough neighbourhood or compact set. In particular, up to a bounded factor, we can estimate the Sobolev norm $S_m(h_\varepsilon)$ by Sobolev norms using local coordinates in which $h_\varepsilon$ is separable.
\end{proof}
\begin{proof}[Proof of \reflem{expoDecay}]
Let $m$ be as in \refthm{mixing2}. Fix a smooth, compactly supported function $f$, a view $D$, and a compact $K\subset D$.
\refthm{mixing2} states that there are $C_0>0$ and $c_0>0$ such that for all smooth $h\from D\to\mathbb{R}$ supported in $K$ and all $1 > \varepsilon > 0$, we may set $g=h_\varepsilon$ and obtain
$$|Y_{t,\varepsilon} | = \left| \int_\cover{X} \cover{f}(\varphi_t(v)) h_\varepsilon(v) dv \right| \leq C_0 e^{-c_0t} S_m(h_\varepsilon)$$
Applying \reflem{approxStableFlow} to the left-hand side and \reflem{sobNormBound} to the right-hand side, there are $C_1$ and $C_2$ such that
\[
|Y_t J_0 | \leq C_1 \varepsilon \|h\|_\infty + C_0 e^{-c_0 t} C_2 \varepsilon^{-(m+3)} S_m(h)
\]
Since this holds for all $1 > \varepsilon > 0$, we can set $\varepsilon = e^{-c_0t/(2(m+3))}$, so that $\varepsilon^{-(m+3)} = e^{c_0t/2}$. Using $\|h\|_\infty \leq S_m(h)$ and $e^{-c_0t/2} \leq e^{-c_0t/(2(m+3))}$, we obtain
\[
|Y_t | \leq \frac{C_1 + C_0 C_2}{J_0} e^{-c_0t/(2(m+3))} S_m(h) \qedhere
\]
\end{proof}
\begin{proof}[Proof of \refthm{Pixel}]
Recall that $$\Phi^{\omega,b,D}(\eta)=\int_0^\infty Y_t \, dt + \int_D \left[\int_b^{\pi(v)} \omega\right] \eta$$
We now discuss the topology on $\Omega^2_c(D)$.
Many textbooks on distributions define the topology on $C^\infty_c(\mathbb{R}^n)$.
This suffices in the ideal and hyperideal cases, where the view $D$ is diffeomorphic to $\mathbb{R}^2$.
When $D$ is a two-sphere, we instead rely on the theory of distributions on generic smooth manifolds.
We point the reader to \cite[Section~2.3]{BanCrainic} where such distributions are called ``generalised sections''.
In particular, the definition of $\mathcal{D}(M;E)$ in \cite{BanCrainic} gives the correct topology on $\Omega^0_c(D)\cong C^\infty_c(D)$ and thus on $\Omega^2_c(D)\cong\Omega^0_c(D)$.
Using \cite[Corollary~2.2.1]{BanCrainic} and the definition of the topology on $\Omega^2_c(D)$, it is not hard to see that statement~\refitm{wellDefDist} is equivalent to the following claim.
\begin{claim}
For any compact $K\subset D$, there is a $C>0$ such that for all $\eta\in\Omega^2_K(D)$, the integrals in
$\Phi^{\omega,b,D}(\eta)$ exist and we have $|\Phi^{\omega,b,D}(\eta)| \leq C S_m(\eta)$.
\end{claim}
\begin{proof}
Taking $f=\omega\from X\to\mathbb{R}$, \reflem{expoDecay} implies that there is a $C_0>0$ such that for all $\eta\in\Omega^2_K(D)$, the integral $\int_0^\infty Y_t \, dt$ exists and is bounded by $C_0 S_m(\eta)$. Let $C_1=\int_K \zeta_D$ be the area of $K$ and let $C_2$ be the maximum of $\left|\int_b^{\pi(v)} \omega\right|$ over $v\in K$. Recall that $h\in\Omega^0_K(D)$ denotes the Hodge dual to $\eta=h\zeta_D$. Then,
\[
\left| \Phi^{\omega,b,D}(\eta) \right| \leq C_0 S_m(\eta) + C_1 C_2 \| h\|_\infty \leq (C_0 + C_1 C_2) S_m(\eta) \qedhere
\]
\end{proof}
Let $W\from \cover{M}\to\mathbb{R}$ be a primitive such that $dW=\omega$.
In an abuse of notation, we again abbreviate $W\circ \pi$ as $W$.
We have
\begin{align}
\Phi^{\omega,b,D}(\eta)&=\lim_{T\to\infty} \int_{v\in D} \left(W(\varphi_T(v)) - W(b)\right) \eta(v) \nonumber \\
& = \left[\lim_{T\to\infty} \int_D (W\circ \varphi_T) \eta\right] - W(b) \int_D \eta
\label{eq:splitOffCanonical}
\end{align}
Statement~\refitm{cocycleIndep} is equivalent to the following claim.
\begin{claim}
If $[\omega] = 0$, then there is a constant $C$ such that $\Phi^{\omega,b,D} = C \cdot \int_D$.
\end{claim}
\begin{proof}
Since $\omega$ is a coboundary, the primitive descends to a map $W\from M\to\mathbb{R}$.
Add a constant to $W$ to arrange $\int_M W dv = 0$; here we integrate with respect to the volume measure.
Taking $f=W$, \reflem{expoDecay} now applies to \eqref{eq:splitOffCanonical} and we have
$\Phi^{\omega,b,D} = -W(b) \cdot \int_D$.
\end{proof}
Statement~\refitm{basePointPixelThm} follows from \refrem{DependenceOnb}.
It is left to show Statement~\refitm{viewTransform}.
Recall that we defined maps $i_D\from D\to\partial_\infty\cover{M}$ and $i_E\from E\to\partial_\infty\cover{M}$. Let $U=\operatorname{image}(i_D)\cap \operatorname{image}(i_E) \subset \partial_\infty\cover{M}$. Furthermore, let us define two distributions on $U$ by
\[
\Phi^{\omega,b}_{\leftarrow D}(\delta) = \Phi^{\omega,b,D}(i_D^*(\delta))
\qquad\mbox{and}\qquad
\Phi^{\omega,b}_{\leftarrow E} (\delta)= \Phi^{\omega,b,E}(i_E^*(\delta))
\]
For every $\eta \in \Omega^2_c(\operatorname{image}(i_{E,D}))$, there is a compactly supported two-form $\delta$ on $U$ such that $\eta = i_D^* \delta$ and $i^*_{E,D} \eta = i_E^* \delta$. Thus, it is enough to show that
\begin{equation}
\label{Eqn:limViewSame}
\Phi^{\omega,b}_{\leftarrow D} = \Phi^{\omega,b}_{\leftarrow E}
\end{equation}
\begin{figure}\centering
\labellist
\small\hair 2pt
\pinlabel $s$ [B] at 89 179
\pinlabel $\varphi_T(i_D^{-1}(N))$ [Br] at 48 175
\pinlabel $\varphi_{T+\Delta T}(i_E^{-1}(N))$ [Bl] at 131 175
\pinlabel $i_D^{-1}(s)$ [tr] at 81 80
\pinlabel $i_E^{-1}(s)$ [tl] at 126 72
\pinlabel $\Delta T_s$ [bl] at 116 88
\pinlabel $r_s$ [b] at 104 79
\pinlabel $y$ [b] at 117 116
\pinlabel $H$ [Bl] at 128 115
\endlabellist
\includegraphics[height=8cm]{Figures/viewsConverging}
\caption{The regions $\varphi_T(i_D^{-1}(N))$ and $\varphi_{T+\Delta T}(i_E^{-1}(N))$ for two material views. The distance between corresponding points in the two regions is bounded by a constant for all large enough $T$. The constant can be made arbitrarily small by making the neighbourhood $N$ about $p$ small enough. \label{Fig:convergenceViews}}
\end{figure}
\begin{claim}
\label{Clm:convergenceDistance}
For every $p\in U$ and $\varepsilon>0$, there is a neighbourhood $N\subset U$ of $p$, a number $\Delta T\in\mathbb{R}$ and $T_0\geq 0$ such that for all $s\in N$ and $T\geq T_0$, we have
\[|W(\varphi_T(i_D^{-1}(s))) - W(\varphi_{T+\Delta T}(i_E^{-1}(s)))| \leq \varepsilon\]
\end{claim}
\begin{proof}
It might be helpful to consult \reffig{convergenceViews}.
Let $L$ be a Lipschitz constant of $W\from X\to\mathbb{R}$.
It is enough to show that
\[d_\cover{X}(\varphi_T(i_D^{-1}(s)), \varphi_{T+\Delta T}(i_E^{-1}(s)))\leq \varepsilon / L\]
Given $s\in U\subset S^2$, the rays corresponding to $i_D^{-1}(s)$ and $i_E^{-1}(s)$ both converge to $s$.
There is a horosphere $H$ about $s$ containing the initial point $\pi(i_D^{-1}(s))$ of the first ray.
The other ray intersects $H$ orthogonally as well.
Let us denote this intersection by $y$.
Let $\Delta T_s$ be the signed distance from $\pi(i_E^{-1}(s))$ to $y$ and let $r_s=d_H(y, \pi(i_D^{-1}(s)))$.
Use the given $p\in U\subset S^2$ to set $\Delta T=\Delta T_p$. When flowing the first ray by $T$ and the second by $T+\Delta T$, we obtain the following distance estimate (see \refrem{extrinsicCurvCorr} to explain the factor $\sqrt{2}$)
$$d_{\cover{X}}(\varphi_T(i_D^{-1}(s)), \varphi_{T+\Delta T}(i_E^{-1}(s))) \leq \sqrt{2} r_s e^{-T} + |\Delta T - \Delta T_s|$$
There is a neighbourhood $N$ of $p$ such that for all $s\in N$, we have $|\Delta T - \Delta T_s| \leq \varepsilon / 2L$. Note that $r_s$ is bounded on $N$ so there is a $T_0$ such that $\sqrt{2} r_s e^{-T_0}\leq \varepsilon / 2L.$
\end{proof}
We define a semi-norm on $\Omega^2_c(U)$ as follows. Identify $\partial_\infty \cover{M}$ with $S^2$. Such an identification induces an area form $\zeta_\infty$ on $\partial_\infty \cover{M}$. Given $\delta \in\Omega^2_c(U)$, let $d$ be the Hodge dual, that is $\delta=d\cdot \zeta_\infty$. Let
\[
\| \delta \|_1 = \int_U |d| \cdot \zeta_\infty
\]
Note that $\|\delta\|_1$ does not depend on the identification of $\partial_\infty \cover{M}$ with $S^2$.
\begin{claim}
\label{Clm:convergenceRelatedInts}
For every $p\in U$ and $\varepsilon > 0$, there is a neighbourhood $N \subset U$ of $p$ such that for all $\delta\in \Omega^2_c(N)$, we have
\[
\left| \Phi^{\omega,b}_{\leftarrow D}(\delta) - \Phi^{\omega,b}_{\leftarrow E}(\delta) \right| \leq \varepsilon \|\delta\|_1
\]
\end{claim}
\begin{proof}
Let $N$, $\Delta T$, and $T_0$ be as in \refclm{convergenceDistance}. Add a constant to the primitive $W$ of $\omega$ such that $W(b)=0$. Then the left-hand side can be expressed using $W$ as follows.
\begin{align*}
& \left| \lim_{T\to\infty}\int_U W(\varphi_T(i_D^{-1}(s))) \cdot \delta - \lim_{T\to\infty}\int_U W(\varphi_T(i_E^{-1}(s))) \cdot \delta \right| = \\
& \left| \lim_{T\to\infty}\int_U W(\varphi_T(i_D^{-1}(s))) \cdot \delta - \lim_{T\to\infty}\int_U W(\varphi_{T+\Delta T}(i_E^{-1}(s))) \cdot \delta\right| = \\
& \left| \lim_{T\to\infty}\int_U \left[ W(\varphi_T(i_D^{-1}(s))) - W(\varphi_{T+\Delta T}(i_E^{-1}(s)))\right] \cdot \delta \right|
\end{align*}
\refclm{convergenceDistance} implies that the integral is bounded by $\varepsilon \|\delta\|_1$ for all $T\geq T_0$; thus the limit is bounded by $\varepsilon \|\delta\|_1$.
\end{proof}
We now verify \eqref{Eqn:limViewSame}.
Fix a smooth, compactly supported $\delta\in\Omega^2_c(U)$.
Also fix $\varepsilon>0$.
For each $p\in U$, pick a neighbourhood $N$ as in \refclm{convergenceRelatedInts}.
The support $\supp(\delta)$ can be covered by finitely many of these neighbourhoods.
Consider a partition of unity with respect to this finite cover.
By multiplying $\delta$ by the partition functions, we obtain smooth two-forms $\delta_1,\dots, \delta_n$ with $\sum \delta_i = \delta$ and $\sum \|\delta_i\|_1 = \|\delta\|_1$.
Since $\sum \varepsilon \|\delta_i\|_1 = \varepsilon \|\delta\|_1$, \refclm{convergenceRelatedInts} implies
\[
\left| \Phi^{\omega,b}_{\leftarrow D}(\delta) - \Phi^{\omega,b}_{\leftarrow E}(\delta) \right| \leq \varepsilon \|\delta\|_1
\]
Since $\varepsilon$ was arbitrary, we are done.
\end{proof}
\begin{remark}
McMullen~\cite{McMullen19} suggests another approach to \refthm{Pixel}.
Arrange matters so that $W$, the primitive of $\omega$, is harmonic with respect to the hyperbolic metric.
Prove that $W$ has suitably bounded growth as it approaches $\bdy_\infty \mathbb{H}^3$, in terms of the euclidean metric in the Poincar\'e ball model.
Now prove and then apply an appropriate version (of part (iii) implies part (i)) of~\cite[Theorem~1.1]{Straube84}.
\end{remark}
\subsection{The cohomology fractal as a measure}
\label{Sec:RemarkMeasureHard}
We do not know whether the cohomology fractal converges to a signed measure; that is, whether or not
$$\lim_{T\to\infty} \int_U \Phi^{\omega,b,D}_T \cdot d\mu_D$$
converges for every measurable $U\subset D$. Instead, we have the following partial result.
\begin{theorem}[Square pixel theorem]
\label{Thm:SquarePixel}
Suppose that $M$ is a connected, orientable, finite volume, complete hyperbolic three-manifold.
Fix
a closed, compactly supported one-form $\omega \in \Omega_c^1(M)$,
a basepoint $b \in \cover{M}$, and
a view $D$.
Suppose that $U \subset D$ is bounded and open, and that $\partial U$ consists of finitely many smooth curves. Then the following limit exists
\[
\lim_{T\to\infty} \int_U \Phi^{\omega,b,D}_T \cdot d\mu_D
\]
\end{theorem}
\begin{remark}
Recall that the pictures in \reffig{RagainstNumSamples} were generated by uniformly sampling across a pixel square.
\refthm{SquarePixel} finally proves that this technique (with enough samples) will give accurate images in some non-empty range of visual radii.
Note that the earlier \refthm{Pixel} is not sufficient;
it requires a smooth filter function.
\end{remark}
\begin{remark}
To generalise \refthm{SquarePixel} to measurable subsets seems difficult.
Our proof does not apply, for example, to an open set $U \subset D$ bounded by several Osgood arcs~\cite{Osgood03}.
\end{remark}
Suppose that $h\from D \to \mathbb{R}$ is a bounded measurable function with compact support.
We define $h^{\sf{u}}_\varepsilon\from D \to \mathbb{R}$ by setting
\begin{equation}
\label{Eqn:hEpsUnstable}
h^{\sf{u}}_\varepsilon(x_{\sfu}) =
\frac{1}{B_\varepsilon} \int_D h \cdot b_\varepsilon(d_D(x_{\sfu}, v)) \cdot d\mu_D(v)
\end{equation}
Here $B_\varepsilon$ is a normalisation factor such that
$\int_D h \cdot d\mu_D=\int_D h^{\sf{u}}_\varepsilon \cdot d\mu_D$.
We call $h^{\sf{u}}_\varepsilon$ the \emph{$\varepsilon$--mollification (in the unstable direction)} of $h$.
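The normalisation factor can be made explicit under a mild assumption. If $b_\varepsilon$ is non-negative and the kernel mass $M_\varepsilon(v) := \int_D b_\varepsilon(d_D(x_{\sfu}, v)) \cdot d\mu_D(x_{\sfu})$ is independent of $v$ on $\supp(h)$ (as one expects when $\supp(h)$ lies well inside $D$), then Tonelli's theorem gives
\[
\int_D h^{\sf{u}}_\varepsilon \cdot d\mu_D = \frac{1}{B_\varepsilon} \int_D h(v) \cdot M_\varepsilon(v) \cdot d\mu_D(v)
\]
so in this case one may take $B_\varepsilon$ to be the common value of $M_\varepsilon$.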
\begin{lemma}
\label{Lem:technicalSquarePixel}
Let $h\from D\to\mathbb{R}$ be a bounded measurable function with compact support.
Assume that there are $c>0$ and $C>0$ such that for all $1 > \varepsilon >0$, we have
\[
\| h - h^{\sf{u}}_\varepsilon \|_1 \leq C\varepsilon^c
\]
Then the following limit exists
\[
\lim_{T\to\infty} \int_D \Phi^{\omega,b,D}_T \cdot h \cdot d\mu_D
\]
\end{lemma}
\begin{proof}[Proof sketch]
We show that $Y_t = \int_D (\omega \circ \varphi_t) \cdot h\cdot d\mu_D$ decays exponentially.
Let $\eta_\varepsilon = h^{\sf{u}}_\varepsilon \zeta_D$ and $Y_{t,\varepsilon} = \int_D (\omega \circ \varphi_t) \cdot \eta_\varepsilon$.
Regarding $\omega$ as a function $X\to\mathbb{R}$, we have
\[
|Y_t - Y_{t,\varepsilon}| \leq \| h - h^{\sf{u}}_\varepsilon \|_1 \cdot \| \omega\|_\infty
\leq C\varepsilon^c \|\omega\|_\infty
\]
Since $h$ is bounded, the Sobolev norm of the convolution $h^{\sf{u}}_\varepsilon$ of $h$ can be estimated from the Sobolev norm of the convolution kernel.
This can be done similarly to \reflem{sobNormBound}.
Thus, there is a $C_0>0$ such that for all $1>\varepsilon > 0$, we have $S_m(\eta_\varepsilon) \leq C_0 \varepsilon^{-(m+2)}$.
Applying \reflem{expoDecay} with $f = \omega$ and $\eta = \eta_\varepsilon$, there are $C_1>0$ and $c_1>0$ such that for all $t\geq 0$ and $1>\varepsilon>0$, we have
\[
|Y_{t,\varepsilon}| \leq C_1 e^{-c_1t} \varepsilon^{-(m+2)}
\]
Thus, we have
\[
|Y_t| \leq C \varepsilon^c \| \omega\|_\infty + C_1e^{-c_1 t} \varepsilon^{-(m+2)}
\]
We obtain exponential decay by setting $\varepsilon = e^{-c_1 t / (2(m+2))}$.
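Concretely, with this choice of $\varepsilon$ the two terms become
\[
C \varepsilon^c \|\omega\|_\infty = C \|\omega\|_\infty \, e^{-c c_1 t/(2(m+2))}
\quad \text{and} \quad
C_1 e^{-c_1 t} \varepsilon^{-(m+2)} = C_1 e^{-c_1 t/2}
\]
so $|Y_t| \leq C_2 e^{-c_2 t}$, where $c_2 = \min\{ c c_1/(2(m+2)),\, c_1/2 \}$.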
\end{proof}
\begin{proof}[Proof sketch of \refthm{SquarePixel}]
The theorem follows from \reflem{technicalSquarePixel} by taking $h$ to be the indicator function $\chi_U$ of $U$.
It remains to show that there is a $C$ such that $\| h - h^{\sf{u}}_\varepsilon\|_1 \leq C \varepsilon$.
Note that $|h-h^{\sf{u}}_\varepsilon|$ is bounded by one; thus it is enough to show that the area of the set where $h-h^{\sf{u}}_\varepsilon$ is non-zero is bounded by $C\varepsilon$. This set is contained in the $\varepsilon$--neighbourhood of $\partial U$, whose area is bounded by a constant times $\varepsilon$ times the length of $\partial U$.
\end{proof}
\section{Questions and projects}
\label{Sec:Questions}
\begin{question}
Suppose that $F$ is a surface in a hyperbolic three-manifold $M$.
\refthm{CLT} tells us that the standard deviation $\sigma$ is a topological invariant of the pair $(M,F)$. What are the number theoretic (or other) properties of $\sigma$? Fixing $M$, does $\sigma$ ``see'' the shape of the Thurston norm ball?
\end{question}
\begin{question}
Suppose that $F$ is a fibre of a closed, connected hyperbolic surface bundle $M$. In \refprop{LightDark} we showed that approximations of the Cannon--Thurston map are (components of) level sets of the cohomology fractal. Is there some more precise sense in which the Cannon--Thurston map $\Psi$ is a ``level set'' of the distributional cohomology fractal $\Phi^{F, b}$?
\end{question}
\begin{question}
Can the cohomology class $[\omega]$ be recovered from the distributional cohomology fractal $\Phi^{\omega, b}$?
\end{question}
\begin{question}
\reffig{SubPixEvolution} suggests that in the example of \texttt{m122(4,-1)} the mean has settled down at around $R=10$ for a pixel size of $0.1^\circ$.
We also see this in \reffig{RagainstNumSamples} in that there is hardly any difference between the images at $R = 10$ and $R=12$ with $128\times 128$ samples.
\refthm{SquarePixel} tells us that given enough samples we can produce an accurate picture of the distributional cohomology fractal.
Can one calculate effective bounds that would allow us to produce a provably correct image?
\end{question}
\begin{question}
In \reffig{RagainstNumSamples}, the image with $1 \times 1$ samples and $R = 8$ is very similar to the image with $128 \times 128$ samples and $R = 12$.
However, we do not understand how an image generated with only one sample per pixel can so closely approximate the limiting object.
The manifold \texttt{m122(4,-1)} is small; as a result, perhaps the geodesic flow mixes rapidly enough?
Does this fail in larger manifolds?
\end{question}
\begin{question}[Mark Pollicott]
We consider lowering the dimension of $F$ and $M$ by one. Let $F$ be a non-separating curve in a closed, connected hyperbolic surface $M$. Fix a point $p \in M$. Let $P$ be a ``pixel'' -- that is, a closed arc in $\UT{p}{M} \homeo S^1$ with centre $c_P$ and radius $r_P$. The distributional cohomology fractal $\Phi^F$ exists and in fact gives a ``signed measure'' to each such pixel $P$ (see \refsec{RemarkMeasureHard}). By taking the pair $(c_P, r_P)$ to the measure of $P$, we obtain a function from $S^1 \times (0, \pi]$ to $\mathbb{R}$. What does its graph look like? For example, what happens if we fix $c_P$ and allow the radius to vary?
How does the graph behave as $r_P$ approaches zero?
\end{question}
\begin{question}
The histogram in \reffig{m122(4,1)Dist} is low near the mean. Increasing $R$ (within the range that we trust our experiments, see \refsec{Accumulate}) reduces, but does not remove, this gap. Why is it there? (This does not seem to happen in \reffig{s789Dist}, where the surface is closed but the manifold is not.)
\end{question}
\begin{question}
\label{Que:Cusped}
Consider the experiment shown in \reffig{s789Histogram2}.
Here the support of the cocycle $\omega$ is not compact.
We see that the distribution of the cohomology fractal, over a pixel, appears not to be normal.
Further experiments show that it depends sensitively on the choice of pixel.
Can one verify rigorously that it is not normal?
We suspect that some version of ``subtracting the largest excursion''
(see the remarks immediately before \cite[Theorem~1]{DiamondVaaler86})
will yield a more reasonable distribution.
\end{question}
\begin{question}
\refthm{Pixel} applies to cocycles $\omega$ with compact support.
Consider a cusped manifold and a cocycle $\omega$ such that the pullback of $[\omega]$ to the cusp torus is non-trivial.
In this case, the Sobolev norm of $\omega$ is infinite;
thus \refthm{mixing} does not apply.
Are there modifications, perhaps as indicated in \refque{Cusped},
so that we again obtain a distribution at infinity for the cohomology fractal?
\end{question}
\begin{question}
In the fibred case, what is the relationship between the cohomology fractal and the lightning curve? See \refsec{Lightning}.
\end{question}
We end with some ideas for future software projects.
\begin{project}
One could use material triangulations in the closed case to draw an approximation $\Psi_D$ to the Cannon--Thurston map, following \refalg{CTApprox}. By \refprop{LightDark}, these match the cohomology fractal. Motivated by Figures~\ref{Fig:m122_4_-1} and~\ref{Fig:Orbifold}, we anticipate that $\Psi_D$ will look significantly different from Cannon--Thurston maps in the cusped case; the ``mating dendrites'' that approximate $\Psi$ have bounded branching at all points.
\end{project}
\begin{project}
In \refsec{Cone}, we discussed cohomology fractals for incomplete structures along a line in Dehn surgery space. As discussed in \refsec{NumericalInstability}, all of these suffer from numerical defects along the incompleteness locus $\Sigma_s$. These defects are visible (although small) in \reffig{Bending}.
When the slope $s$ is integral, we use material triangulations (\refsec{MaterialTriangulations}) to remove these defects. For general $s$, material triangulations are not available. Instead, one could \emph{accelerate} through tubes about $\Sigma_s$. That is, we modify the cellulation of the manifold by truncating each tetrahedron, replacing the lost volume with a solid torus ``cell'' around $\Sigma_s$.
\end{project}
| {
"timestamp": "2020-10-13T02:40:56",
"yymm": "2010",
"arxiv_id": "2010.05840",
"language": "en",
"url": "https://arxiv.org/abs/2010.05840",
"abstract": "Cohomology fractals are images naturally associated to cohomology classes in hyperbolic three-manifolds. We generate these images for cusped, incomplete, and closed hyperbolic three-manifolds in real-time by ray-tracing to a fixed visual radius. We discovered cohomology fractals while attempting to illustrate Cannon-Thurston maps without using vector graphics; we prove a correspondence between these two, when the cohomology class is dual to a fibration. This allows us to verify our implementations by comparing our images of cohomology fractals to existing pictures of Cannon-Thurston maps.In a sequence of experiments, we explore the limiting behaviour of cohomology fractals as the visual radius increases. Motivated by these experiments, we prove that the values of the cohomology fractals are normally distributed, but with diverging standard deviations. In fact, the cohomology fractals do not converge to a function in the limit. Instead, we show that the limit is a distribution on the sphere at infinity, only depending on the manifold and cohomology class.",
"subjects": "Geometric Topology (math.GT); Dynamical Systems (math.DS)",
"title": "Cohomology fractals, Cannon-Thurston maps, and the geodesic flow"
} |
https://arxiv.org/abs/2007.01985 | Limits of almost homogeneous spaces and their fundamental groups | We say that a sequence of proper geodesic spaces $X_i$ consists of \textit{almost homogeneous spaces} if there is a sequence of discrete groups of isometries $G_i \leq Iso(X_i)$ such that diam$(X_i/G_i)\to 0$ as $i \to \infty$. We show that if a sequence $X_i$ of almost homogeneous spaces converges in the pointed Gromov--Hausdorff sense to a space $X$, then $X$ is a nilpotent locally compact group equipped with an invariant geodesic metric. Under the above hypotheses, we show that if $X$ is semi-locally-simply-connected, then it is a nilpotent Lie group equipped with an invariant Finsler or sub-Finsler metric, and for large enough $i$, there are subgroups $\Lambda_i \leq \pi_1(X_i)$ with surjective morphisms $\Lambda_i \to \pi_1(X)$. | \section{Introduction}
We say that a sequence of proper geodesic spaces $X_i$ consists of \textit{almost homogeneous spaces} if there is a sequence of discrete groups of isometries $G_i \leq Iso(X_i)$ such that diam$(X_i/G_i)\to 0$ as $i \to \infty$.
\begin{remark}
\rm A sequence of homogeneous spaces $X_i$ does not necessarily consist of almost homogeneous spaces, since the groups $Iso (X_i ) $ are not necessarily discrete.
\end{remark}
\begin{example}
\rm Let $Z_i$ be a sequence of compact geodesic spaces with diam$(Z_i)\to 0$ as $i \to \infty$. If $\tilde{Z}_i \to Z_i$ is a sequence of Galois covers, then the sequence $\tilde{Z}_i$ consists of almost homogeneous spaces.
\end{example}
The goal of this paper is to understand the Gromov--Hausdorff limits of sequences of almost homogeneous spaces. In the case when the sequence consists of blow-downs of a single space, the problem was solved by Mikhail Gromov and Pierre Pansu (\cite{GromovPG}, \cite{Pansu}).
\begin{theorem}
\rm (Gromov--Pansu) Let $(X,x_0)$ be a pointed proper geodesic space, and $G \leq Iso (X)$ a discrete group of isometries with diam$(X/G) < \infty$. If for some sequence of positive numbers $\lambda_i \to \infty$, the sequence
\[ \dfrac{1}{\lambda_i}(X, x_0 ) \to (Y,y_0) \]
converges in the pointed Gromov--Hausdorff sense, then $Y$ is a simply connected nilpotent Lie group equipped with a Carnot--Carath\'eodory metric (a Carnot--Carath\'eodory metric is a special kind of invariant sub-Finsler metric satisfying that for any $\lambda > 0$, the space $\lambda Y$ is isometric to $Y$).
\end{theorem}
When the limit is compact, Alan Turing solved the finite dimensional case \cite{Tur}, and using Turing's result, Tsachik Gelander solved the infinite dimensional case \cite{Gel}.
\begin{theorem}
\rm (Turing) Let $X_i$ be a sequence of almost homogeneous spaces. If the sequence $X_i$ converges in the Gromov--Hausdorff sense to a compact finite dimensional space $X$, then $X$ is a finite dimensional torus equipped with an invariant Finsler metric.
\end{theorem}
\begin{theorem}
\rm (Gelander) Let $X_i$ be a sequence of almost homogeneous spaces. If the sequence $X_i$ converges in the Gromov--Hausdorff sense to a compact space $X$, then $X$ is a (possibly infinite dimensional) torus equipped with an invariant metric.
\end{theorem}
The main result of this paper deals with the case in which the limit is non-compact.
\begin{theorem}\label{MAIN}
\rm Let $X_i$ be a sequence of almost homogeneous spaces. If the sequence $X_i$ converges in the pointed Gromov--Hausdorff sense to a space $X$, then $X$ is a nilpotent group equipped with an invariant metric. Furthermore, if $X$ is semi-locally-simply-connected, then it is a Lie group equipped with a Finsler or sub-Finsler metric, and for large enough $i$ there are subgroups $\Lambda_i \leq \pi_1(X_i)$ with surjective morphisms
\begin{equation}\label{Li}
\Lambda_i \twoheadrightarrow \pi_1(X).
\end{equation}
\end{theorem}
\begin{remark}\label{Montzip}
\rm The hypothesis of $X$ being semi-locally-simply-connected can be replaced by $X$ having finite topological dimension, because of the following result (solution to Hilbert's fifth problem) by Deane Montgomery and Leo Zippin \cite{MZ}.
\end{remark}
\begin{theorem}\label{MZHil}
\rm (Montgomery--Zippin) Let $X$ be a homogeneous proper geodesic space. If $X$ has finite topological dimension, then it is homeomorphic to a topological manifold, and its isometry group is a Lie group.
\end{theorem}
\subsection{Lower semi-continuity of $\pi_1$}
Under the conditions of Theorem \ref{MAIN}, Equation \ref{Li} implies that if the spaces $X_i$ are simply connected, and $X$ is semi-locally-simply-connected, then $X$ is simply connected as well. This is an instance of a more general phenomenon (see \cite{GromovMS}, Section 3E).
\begin{theorem}\label{LSOFFGC}
\rm (Folklore) Let $X_i$ be a sequence of compact geodesic spaces converging to a compact semi-locally-simply-connected space $X$. Then for large enough $i$, there are surjective morphisms
\begin{equation}
\pi_1 (X_i) \twoheadrightarrow \pi_1 (X).
\end{equation}
\end{theorem}
This property is further studied by Christina Sormani and Guofang Wei in \cite{SWHau}, \cite{SWUni}, \cite{SWCov}. This result fails if the limit is not compact (see \cite{SWUni}, Example 1.2), or not semi-locally-simply-connected (see \cite{SWHau}, Example 2.6). The following example shows that the conclusion of Theorem \ref{MAIN} fails if one works with homogeneous spaces instead of almost homogeneous spaces.
\begin{example}\label{BergZ}
\rm Let $Y$ be $\mathbb{S}^1$ with its standard metric of length $2 \pi$ and $Z_i $ be $\mathbb{S}^3$ with the round (bi-invariant) metric of constant curvature $1/i$. Let $X_i $ be the quotient $(Y \times Z_i ) / \mathbb{S}^1$ where $\mathbb{S}^1$ acts on $Y \times Z_i$ as follows:
$$ z(w,q) = (wz ^{-1}, z q): z, w \in \mathbb{S}^1, q \in \mathbb{S}^3. $$
Then $X_i$ is isometric to $\mathbb{S}^3$ equipped with a re-scaled Berger metric. The sequence $X_i$ consists of simply connected homogeneous spaces, but its pointed Gromov--Hausdorff limit is $\mathbb{S}^1 \times \mathbb{R}^2$, which is not simply connected.
\end{example}
Under the hypotheses of Theorem \ref{MAIN}, assuming $X$ is semi-locally-simply-connected, one may wonder whether one could obtain the conclusion of Theorem \ref{LSOFFGC}, which is slightly stronger than Equation \ref{Li}. The following example shows that this is not the case.
\begin{example}\label{DotProduct}
\rm Define a ``dot product'' $\mathbb{R}^4 \times \mathbb{R}^4 \to \mathbb{R}^6$ on $\mathbb{R}^4$ as
\[ (a \cdot x) : = (a_1x_2, a_1x_3, a_1x_4, a_2x_3, a_2x_4, a_3x_4), \text{ } a,x \in \mathbb{R}^4 . \]
With it, define a group structure on $\mathbb{R}^4 \times \mathbb{R}^6$ as
\[ (a,b) \cdot (x,y) : = (a+x , b+y+ (a\cdot x)), \text{ }a,x \in \mathbb{R}^4 , \text{ } b,y \in \mathbb{R}^6 . \]
Let $G$ be the above group equipped with a left invariant Riemannian metric. For each $i$, define the subgroups $K \leq K_i \triangleleft G_i \leq G \cong \mathbb{R}^4 \times \mathbb{R}^6 $ as:
\begin{eqnarray*}
K & : = & \{ 0 \} \times \mathbb{Z}^6 \\
K_i & : = & ( i \mathbb{Z}^4 ) \times \mathbb{Z}^6 \\
G_i & : = & \left( \frac{1}{i} \mathbb{Z}^4 \right) \times \left( \frac{1}{i^2} \mathbb{Z}^6 \right) .
\end{eqnarray*}
The sequence $X_i : = G / K_i$ converges in the pointed Gromov--Hausdorff sense to $X:= G/K$. Since the sequence of finite groups $G_i / K_i$ acts on $X_i$ with diam$(X_i/(G_i/K_i))\to 0$, the sequence $X_i$ consists of almost homogeneous spaces. A direct computation shows that the abelianization of $\pi_1(X_i ) = K_i$ is isomorphic to
\[ \mathbb{Z}^4 \oplus \left( \mathbb{Z} / i^2 \mathbb{Z} \right)^6 . \]
Since $\pi_1(X) = K \cong \mathbb{Z}^6$, for no $i$ is there a surjective morphism
\[ \pi_1(X_i ) \twoheadrightarrow \pi_1(X).\]
\end{example}
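The ``direct computation'' in the example above can be carried out as follows. In $G$, the inverse of $(a,b)$ is $(-a, a \cdot a - b)$, and the subset $\{0\} \times \mathbb{R}^6$ is central, so
\[
[(a,b),(x,y)] = (0, a\cdot x - x \cdot a),
\]
whose six components are the minors $a_jx_k - a_kx_j$ for $1 \leq j < k \leq 4$. For $a,x \in i \mathbb{Z}^4$ these minors generate $i^2\mathbb{Z}$ in each coordinate, so $[K_i,K_i] = \{0\} \times (i^2\mathbb{Z})^6$ and
\[
K_i / [K_i,K_i] \cong (i\mathbb{Z})^4 \oplus \mathbb{Z}^6/(i^2\mathbb{Z})^6 \cong \mathbb{Z}^4 \oplus \left( \mathbb{Z}/i^2\mathbb{Z}\right)^6 .
\]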
\subsection{Existence of the limit}
Given a sequence $X_i$ of almost homogeneous spaces, one may wonder which conditions guarantee the existence of a convergent subsequence. If the spaces $X_i$ are $n$-dimensional Riemannian manifolds with a uniform lower Ricci curvature bound, Gromov compactness criterion implies the existence of a convergent subsequence \cite{GromovMS}. Itai Benjamini, Hilary Finucane and Romain Tessera found another sufficient condition for this partial limit to exist when the spaces $X_i$ are graphs \cite{BFT}.
\begin{theorem}\label{BFTExist}
\rm (Benjamini--Finucane--Tessera) Let $D_i \leq \Delta_i$ be two sequences going to infinity, and let $X_i$ be a sequence of graphs. Assume there is a sequence of discrete groups $G_i \leq Iso (X_i)$ acting transitively on the sets of vertices. If the balls of radius $D_i$ in $X_i$ satisfy
$$\vert B^{X_i}(x,D_i)\vert =O(D_i^q) $$
for some $q>0$, then the sequence of almost homogeneous spaces
\[ \dfrac{1}{\Delta_i} X_i \]
has a subsequence converging in the pointed Gromov--Hausdorff sense to a nilpotent Lie group.
\end{theorem}
\begin{remark}
\rm Recently, Romain Tessera and Matthew Tointon showed that the hypothesis in Theorem \ref{BFTExist} of the groups $G_i$ being discrete can be removed \cite{TT}. Moreover, one could define a sequence of \textit{weakly almost homogeneous spaces} to be a sequence of proper geodesic spaces $X_i$ with groups of isometries $G_i \leq Iso (X_i)$ acting with discrete orbits and such that diam$(X_i / G_i ) \to 0$. Their results imply that Theorem \ref{MAIN} holds under the weaker assumption that the spaces $X_i$ are weakly almost homogeneous.
\end{remark}
\subsection{Further problems}
There are two natural strengthenings of Theorem \ref{MAIN}. One could ask whether a weaker conclusion holds after removing either the hypothesis that the groups $G_i$ act almost transitively, or the hypothesis that the limit $X$ is semi-locally-simply-connected.
\begin{con}
\rm Let $X_i$ be a sequence of simply connected proper geodesic spaces. Assume there is a sequence of discrete groups of isometries $G_i \leq Iso (X_i)$ with diam$(X_i/G_i) \leq C$ for some $C>0$. If $X_i$ converges in the pointed Gromov--Hausdorff sense to a semi-locally-simply-connected space $X$, then $X$ is simply connected.
\end{con}
\begin{con}
\rm Let $X_i$ be a sequence of simply connected almost homogeneous spaces. If $X_i$ converges in the pointed Gromov--Hausdorff sense to a space $X$, then $X$ is simply connected.
\end{con}
\subsection{Summary}
The proof of Theorem \ref{MAIN} consists of three parts. In the first part, we show that $X$ is a nilpotent group equipped with an invariant metric (Theorem \ref{PI}). In the second part, we show that if $X$ is semi-locally-simply-connected, then it is a Lie group (Theorem \ref{PII}). In the third part, we show that if $X$ is a Lie group, then for large enough $i$, there are subgroups $\Lambda_i \leq \pi_1(X_i)$ with surjective morphisms $\Lambda_i \twoheadrightarrow \pi_1 (X)$ (Theorem \ref{PIII}).
In Section \ref{Prelim}, we present the relevant definitions and preliminary results required for the proof of Theorem \ref{MAIN}. In Section \ref{PartI} we prove Theorem \ref{PI} by repeated applications of a Margulis Lemma by Breuillard--Green--Tao \cite{BGT}, Gleason--Yamabe structure theory of locally compact groups (\cite{Gleason}, \cite{Yamabe}), and a result of Berestovskii about groups of isometries of homogeneous spaces \cite{BerBus}. In Section \ref{PartII} we prove Theorem \ref{PII} using elementary Lie theory and algebraic topology techniques.
Sections \ref{Nilsection} to \ref{Rootsection} contain the proof of Theorem \ref{PIII}, finishing the proof of Theorem \ref{MAIN}. In Section \ref{Nilsection}, we reduce Theorem \ref{PIII} to the case when the groups $G_i$ are uniformly almost nilpotent (see Lemma \ref{anilLie}). We use again the Margulis Lemma of Breuillard--Green--Tao \cite{BGT}, paired with a local uniform doubling of $X$ by Nagel--Stein--Wainger \cite{NSW}. After this reduction, in Section \ref{FreeSection}, we use commutator estimates similar to the ones in (\cite{BK}, \cite{GrAF}) to prove that the groups $G_i$ act ``almost by translations'' on the spaces $X_i$.
In Section \ref{Torsection}, we use the escape norm from \cite{BGT} to find small normal subgroups $W_i \triangleleft G_i$ with the property that the spaces $X_i$ and $X_i/ W_i$ are Gromov--Hausdorff close, and the groups $Z_i := G_i/W_i$ contain large subsets $Y_i$ without non-trivial subgroups. In Section \ref{Nilprogsection}, we use the space $X$ as a model (as defined by Hrushovski \cite{Hr}) for the algebraic ultralimit
\[Y := \lim\limits_{i \to \alpha} Y_i.\]
This enables us to find large nice subsets $P_i$ of $Z_i$ (nilprogressions in $C_0$-regular form, as defined by Breuillard--Green--Tao \cite{BGT}).
In Section \ref{Globalsection}, we use the Malcev Embedding Theorem \cite{Malcev} to find groups $\Gamma_{P_i}$ isomorphic to lattices in simply connected Lie groups, with isometric actions
$$ \Phi_i: \Gamma_{P_i} \to Iso ( X_i /W_i).$$
Using elementary algebraic topology, we show that the kernels $Ker(\Phi_i)$ of those actions are isomorphic to quotients of $\pi_1(X_i)$. Finally, in Section \ref{Rootsection}, we find subgroups of $Ker (\Phi_i)$ isomorphic to $\pi_1 (X)$, finishing the proof of Theorem \ref{PIII}, and consequently Theorem \ref{MAIN}.
\subsection{Acknowledgments}
The author would like to thank Vladimir Finkelshtein, Enrico LeDonne, Adriana Ortiz, Anton Petrunin, and Burkhard Wilking for helpful comments and discussions. This research was supported in part by the International Centre for Theoretical Sciences (ICTS) during a visit for participating in the program - Probabilistic Methods in Negative Curvature (Code: ICTS/pmnc 2019/03).
\section{Preliminaries}\label{Prelim}
\subsection{Notation}
For $H,K$ subgroups of a group $G$, we define their \textit{commutator subgroup} $[H,K]$ to be the group generated by the elements $[h,k]:=hkh^{-1}k^{-1}$ with $h \in H, k \in K$. Define $G^{(0)} $ as $G$, and $G^{(j+1)}$ inductively as $G^{(j+1)} := [G^{(j)}, G] $. If $G^{(s)} = \{ e\}$ for some $s \in \mathbb{N}$, we say that $G$ is \textit{nilpotent of step} $\leq s$. For any element $g$ in a group $G$, we denote by $L_g$ the left shift $G \to G$ given by $L_g(h) = gh$. If $G$ is abelian, we may denote $L_g$ by $+g$.
We say that a set $A \subset G$ is \textit{symmetric} if $A=A^{-1} $ and $e \in A$. For subsets $A_1, \ldots, A_k \subset G$, we denote by $A_1 \cdots A_k $ the set of all products
$$ \{ a_1 \cdots a_k \vert a_i \in A_i \} \subset G, $$
and by $A_1 \times \ldots \times A_k $ the set of all sequences
$$ \{ (a_1, \ldots, a_k) \vert a_i \in A_i \} \subset G^k. $$
If $A_i = A $ for $i = 1, \ldots , k$, we will also denote $ A_1 \cdots A_k $ by $A^k$, and $ A_1 \times \ldots \times A_k $ by $A^{ \times k }$.\\
Let $Y$ be a topological space and $\beta, \gamma : [0,1] \to Y$ two curves. We denote by $\overline{\beta} : [0,1] \to Y$ the curve given by $\overline{\beta} (t) = \beta (1-t) $. And if $\beta(1)= \gamma (0)$, we denote by $\beta \ast \gamma : [0,1] \to Y$ the concatenation
\begin{center}
$ \beta \ast \gamma (t) = \begin{cases}
\beta (2t) & \text{ if } t \leq 1/2 \\
\gamma (2t-1) & \text{ if }t \geq 1/2.
\end{cases} $
\end{center}
If $\beta(1)\neq \gamma (0)$, we say that $\beta \ast \gamma$ is undefined. We call $\beta \ast \gamma$ the \textit{concatenation} of $\beta$ and $\gamma$. We will write $\beta \simeq \gamma$ whenever $\beta $ and $\gamma$ are homotopic relative to their endpoints. If $Y$ is a geodesic space, we say that a curve is an $\varepsilon$\textit{-lasso nailed at }$\beta (0)$ if it is of the form $ \beta \ast \gamma \ast \overline{\beta } $, with $\beta (1) = \gamma (0)$, and $\gamma$ a loop contained in a closed ball of radius $\varepsilon$.\\
In a metric space $(Y,d)$, we will denote the closed ball of center $q\in Y$ and radius $r > 0$ as
$$B_{d}^Y(q,r) : = \{ y \in Y \vert d(y, q) \leq r \}. $$
We will sometimes omit $d$ or $Y$ and write $B(q,r)$ if the metric space we are considering is clear from the context.
\subsection{Uniform distance}
\begin{definition}
\rm Let $A, B$ be metric spaces and $f,h : A \to B$ two functions. For a subset $C \subset A $, we define the \textit{uniform distance} between $f$ and $h$ in $C$ as
$$ d_U(f,h; C) : = \sup\limits_{c \in C} d( f(c), h(c) ).$$
\end{definition}
\begin{definition}
\rm Let $A, B$ be two metric spaces and $f: A \to B$ a function. We define the \textit{distortion} of $f$ as
$$ Dis(f) : = \sup\limits_{a_1, a_2 \in A} \vert d(f(a_1), f(a_2)) - d (a_1, a_2) \vert . $$
\end{definition}
The following proposition follows immediately from the triangle inequality in the spaces $B_1$ and $B_2$.
\begin{proposition}\label{UnifProp}
\rm Let $A, B_1, B_2$ be metric spaces, $f, g, h : A \to B_1$, $f_1, g_1 : B_1 \to B_2$, and $C \subset A$. Then
\begin{center}
$ d_U(f,g; C) \leq d_U(f,h;C) + d_U(h,g; C) , $ \\
$ d_U(f_1f ,g_1g; C) \leq d_U(f,g; C) + d_U(f_1,g_1;g(C)) + Dis( f_1 \vert_{f(C) \cup g(C)} ) . $
\end{center}
\end{proposition}
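For instance, the second inequality follows by inserting the intermediate point $f_1g(c)$: for each $c \in C$,
\begin{eqnarray*}
d(f_1f(c), g_1g(c)) & \leq & d(f_1f(c), f_1g(c)) + d(f_1g(c), g_1g(c)) \\
& \leq & d(f(c),g(c)) + Dis( f_1 \vert_{f(C) \cup g(C)} ) + d_U(f_1,g_1; g(C)).
\end{eqnarray*}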
\subsection{Gromov--Hausdorff convergence}
In this subsection we present the basic results about Gromov--Hausdorff convergence. We refer the reader to (\cite{BBI}, Chapter 7) for proofs and further discussion.
\begin{definition}
\rm Let $A,B $ be subsets of a metric space $(Z,d)$. The \textit{Hausdorff distance} between $A$ and $B$ is defined as the infimum of the numbers $r>0$ such that for all $a_0 \in A$, $b_0 \in B$ there are $a_1 \in A$, $ b_1 \in B $ with $d(a_0,b_1)<r$ and $ d(a_1, b_0)<r$.
\end{definition}
\begin{definition}
\rm Let $A,B $ be metric spaces,
and $f: A \to B$. We say that $f$ is an $\varepsilon$\textit{-approximation}
if $Dis (f) \leq \varepsilon$, and the Hausdorff distance between $f(A)$ and
$B$ is at most $\varepsilon$.
\end{definition}
\begin{definition}
\rm Let $X_i$ be a sequence of proper metric spaces. We say that $X_i$ \textit{converges in the Gromov--Hausdorff sense} to a complete metric space $X$ if there is a sequence of $\varepsilon_i$-approximations $f_i : X_i \to X$
with $\varepsilon_i \to 0$ as $i \to \infty$.
\end{definition}
\begin{theorem}\label{gromov--hausdorff--distance}
\rm (Gromov) In the class of proper metric spaces modulo isometry, there is a metric $d_{GH}$ such that for any pair of spaces $X$, $Y$, the following holds:
\begin{itemize}
\item If $d_{GH}(X,Y) \leq \varepsilon $, then there is a $10\varepsilon$-approximation $X \to Y$.
\item If there is an $\varepsilon$-approximation $X \to Y$, then $d_{GH}(X,Y) \leq 10 \varepsilon$.
\end{itemize}
\end{theorem}
\begin{definition}
\rm Let $(X_i, p_i)$ be a sequence of pointed proper metric spaces. We say that $(X_i, p_i)$ converges in the pointed Gromov--Hausdorff sense to a pointed complete metric space $(X,p)$ if there are two sequences of functions $f_i : X_i \to X$ and $h_i : X \to X_i$ with $f_i(p_i) =p$, $h_i (p) = p_i$ and such that for all $R > 0$, we have
\begin{equation}\label{pGH1}
\lim_{i \to \infty} Dis \left( f_i \vert_{B(p_i, R)} \right) = \lim_{i \to \infty} Dis \left( h_i \vert_{B(p, R)} \right) = 0
\end{equation}
and
\begin{equation}\label{pGH2}
\lim_{i \to \infty} d_U \left( h_if_i, Id_{X_i} ; B^{X_i}(p_i,R) \right) = \lim_{i \to \infty} d_U \left( f_ih_i, Id_{X} ; B^{X}(p,R) \right) = 0.
\end{equation}
\end{definition}
\begin{remark}\label{fihi}
\rm Throughout this paper, whenever we have a sequence of proper metric spaces $(X_i, p_i)$ converging in the pointed Gromov--Hausdorff sense to a pointed metric space $(X,p) $, we assume they come equipped with functions $f_i : X_i \to X$ and $h_i : X \to X_i$ with $f_i (p_i)=p$, $h_i(p)=p_i$, and satisfying equations \ref{pGH1} and \ref{pGH2} for all $R > 0$.
\end{remark}
\begin{lemma}
\rm If $X_i $ is a sequence of proper metric spaces converging in the Gromov--Hausdorff sense to a complete space $X$, then for all $p \in X$ there is a sequence $p_i \in X_i $ such that the sequence $(X_i, p_i)$ converges in the pointed Gromov--Hausdorff sense to $(X,p)$.
\end{lemma}
\begin{lemma}\label{proper-geodesic}
\rm If $(X_i, p_i) $ is a sequence of proper geodesic spaces converging in the pointed Gromov--Hausdorff sense to a complete space $(X,p)$, then $X$ is a proper geodesic space.
\end{lemma}
\subsection{Groups of isometries}
Here we summarize the main results about groups acting on metric spaces we will need.
\begin{remark}\label{topology-basis}
\rm Throughout this paper, for a proper metric space $X$, we use the compact-open topology on the group of isometries $Iso (X)$. With this topology, $Iso(X)$ is a locally compact, first countable, Hausdorff group. For any $p\in X$, a basis of compact neighborhoods of the identity with respect to this topology is given by the family of sets
\[ U_{R,\varepsilon}^X : = \{ g \in Iso (X) \vert d_U(g,Id_X; B(p,R))\leq \varepsilon \} . \]
With respect to this topology, a group $\Gamma \leq Iso(X)$ is \textit{discrete} if and only if its action has discrete orbits and finite stabilizers.
\end{remark}
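For example, the subgroup of integer translations
\[ \Gamma := \{ x \mapsto x + v \vert v \in \mathbb{Z}^2 \} \leq Iso (\mathbb{R}^2) \]
is discrete: each orbit $\Gamma x = x + \mathbb{Z}^2$ is discrete, and all stabilizers are trivial. By contrast, the cyclic group generated by a rotation about the origin by an irrational multiple of $\pi$ is not discrete, since its orbits on any circle centered at the origin are dense.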
\begin{lemma}\label{short-generators}
\rm Let $(Y,q)$ be a pointed proper geodesic space, and $G \leq Iso (Y)$ a closed subgroup. If diam$(Y/G) < \infty$, then the set
\[ S : = \{ g \in G \vert d(gq,q ) \leq 3\cdot \text{diam}(Y/G) \} \]
generates $G$.
\end{lemma}
\begin{proof}
Let $g \in G$. Since $Y$ is geodesic, there is a sequence of points $q=q_0, q_1, \ldots , q_k = gq$ in $Y$ satisfying $d(q_j,q_{j-1}) \leq $ diam$(Y/G)$ for each $j$. Choose $g_1, \ldots, g_{k-1} \in G$ such that $d(g_jq,q_j)\leq $ diam$(Y/G)$ for each $j$, and set $g_0 := Id_Y$, $g_k : = g$. By the triangle inequality, $g_{j-1}^{-1}g_j \in S$ for each $j$. Then
\[ g = g_1 ( g_1^{-1}g_2) \cdots (g_{k-1}^{-1}g_k) \in S^k . \]
\end{proof}
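For example, if $Y = \mathbb{R}$ and $G = \mathbb{Z}$ acts by translations, then diam$(Y/G) = 1/2$, and Lemma \ref{short-generators} produces the generating set
\[ S = \{ g \in \mathbb{Z} \vert \text{ } \vert g \vert \leq 3/2 \} = \{ -1, 0, 1 \} . \]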
\begin{lemma}\label{diameter-subgroup}
\rm Let $X$ be a proper geodesic space, and $H \leq G \leq Iso (X)$ be closed subgroups. If diam$(X/G) $ and $[G:H]$ are finite, then
\[ \text{diam}(X/H) \leq 3 [G: H ]\cdot \text{diam}(X/G) . \]
\end{lemma}
\begin{proof}
Let $x \in X$, and define
\[ S : = \{ g \in G \vert d(gx,x) \leq 3 \cdot \text{diam} (X/G) \}, \]
which generates $G$ by Lemma \ref{short-generators}. If we have $S^{k+1} H = S^k H$ for some $k \in \mathbb{N}$, then an inductive argument implies that
\[S^kH = \bigcup_{n \in \mathbb{N}} S^n H = G .\]
Since there are only $[G:H]$ cosets, we have $S^{k+1}H = S^k H$ for $k \geq [G:H]-1$. This implies that $S^{[G:H]-1}$ intersects each $H$-coset. For any $y \in X$, there is $g \in G$ with $d(x,gy)\leq $ diam$(X/G)$, and by the above analysis, there is $u \in S^{[G:H]-1}$ with $u^{-1}g \in H$. This implies that
\begin{eqnarray*}
d(x,u^{-1}gy) & = & d(ux,gy)\\
& \leq & d(ux,x) + d(x,gy)\\
& \leq & 3 ([G:H]-1) \cdot \text{diam}(X/G) + \text{diam}(X/G).
\end{eqnarray*}
Since $x,y \in X$ were arbitrary, the lemma follows.
\end{proof}
\begin{lemma}\label{normal-stabilizers}
\rm Let $(Y,q)$ be a pointed proper geodesic space, $G \leq Iso (Y)$ a closed group acting transitively, and $H \triangleleft G$ a normal subgroup. If for some $\varepsilon > 0 $,
\[ H \subset \{ g \in G \vert d(gq,q) \leq \varepsilon \}, \]
then
\[ H \subset \{ g \in G \vert d(gy,y) \leq \varepsilon \text{ for all }y \in Y \}. \]
In particular, if $H$ is contained in the group $\{ g \in G \vert gq=q \}$, then $H$ is trivial.
\end{lemma}
\begin{proof}
If $ y \in Y$, then there is $g \in G$ with $gq=y$. Since $H$ is normal in $G$, for any $h \in H$ we have $g^{-1}hg \in H$, and
\[ d(hy,y) = d(hgq, gq) = d (g^{-1}h g q,q) \leq \varepsilon. \]
\end{proof}
The following result by Valerii Berestovskii concerns groups that act transitively on geodesic spaces (\cite{BerBus}, Theorem 1).
\begin{theorem}\label{berestovskii-open}
\rm (Berestovskii) Let $X$ be a proper geodesic space, and $G$ a closed subgroup of $Iso (X)$ acting transitively. If $\mathcal{O} \leq G$ is an open subgroup, then $\mathcal{O}$ also acts transitively on $X$.
\end{theorem}
\subsection{Hilbert's fifth problem and locally compact groups}
Hilbert's fifth problem consists of understanding which locally compact Hausdorff groups are Lie groups. One satisfactory answer is Theorem \ref{MZHil}. In a different direction, Andrew Gleason and Hidehiko Yamabe found that any locally compact Hausdorff group is not far from being a Lie group (\cite{Gleason}, \cite{Yamabe}).
\begin{theorem}\label{gleason-yamabe}
\rm (Gleason--Yamabe) Let $G$ be a locally compact Hausdorff group. Then there is an open subgroup $\mathcal{O}\leq G$ with the following property: for any open neighborhood of the identity $U \subset \mathcal{O}$, there is a compact normal subgroup $K \triangleleft \mathcal{O}$ with $K \subset U$ such that $\mathcal{O}/K$ is a connected Lie group.
\end{theorem}
The following result by Victor Glushkov implies that the set of compact normal subgroups with the property that the corresponding quotient is a connected Lie group is closed under finite intersections \cite{Glu}.
\begin{theorem}\label{glushkov}
\rm (Glushkov) Let $\mathcal{O}$ be a locally compact Hausdorff group, and $K_1, K_2$ compact normal subgroups such that both $\mathcal{O}/K_1$ and $\mathcal{O}/K_2$ are connected Lie groups. Then $\mathcal{O}/(K_1 \cap K_2)$ is a connected Lie group.
\end{theorem}
\begin{corollary}\label{glushkov-corollary}
\rm Let $\mathcal{O}$ be a connected, locally compact, first countable, Hausdorff group that is not a Lie group. Then it contains a sequence of compact, normal, non-trivial subgroups $K_1 \geq K_2 \geq \ldots$ such that
\[ \bigcap_{j=1}^{\infty} K_j = \{ e \}, \]
and $\mathcal{O}/K_j$ is a connected Lie group for all $j$.
\end{corollary}
\subsection{Lie groups}
In this section we introduce the tools from Lie theory we will need.
\begin{definition}
\rm Let $\mathfrak{g}$ be a nilpotent Lie algebra. We say that an ordered basis $\{ v_1, \ldots , v_n \} \hookrightarrow \mathfrak{g}$ is a \textit{strong Malcev basis} if for all $k \in \{1, \ldots ,n \} $, the vector subspace $J_k \leq \mathfrak{g}$ generated by $\{ v_{k+1}, \ldots , v_n \}$ is an ideal, and $v_k + J_k$ is in the center of $\mathfrak{g}/J_k$.
\end{definition}
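For example, let $\mathfrak{h}$ be the Heisenberg algebra with basis $\{ X, Y, Z \}$ and only non-trivial bracket $[X,Y]=Z$. The ordered basis $\{ X, Y, Z \}$ is a strong Malcev basis: $J_1 = \text{span}\{Y,Z\}$ and $J_2 = \text{span}\{Z\}$ are ideals, the quotients $\mathfrak{h}/J_1$ and $\mathfrak{h}/J_2$ are abelian, and $Z$ is central. The reversed basis $\{ Z, Y, X \}$ is not a strong Malcev basis, since $\text{span}\{Y,X\}$ is not even a subalgebra.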
The following theorem encompasses the results and discussion in (\cite{Cor}, Section 1.2).
\begin{theorem}\label{simply-connected-nilpotent}
\rm Let $G$ be a simply connected nilpotent Lie group and $\mathfrak{g} $ its Lie algebra. Let $Z$ be the center of $G$ and $\mathfrak{z}$ the center of $\mathfrak{g}$. Then
\begin{itemize}
\item $\exp : \mathfrak{g} \to G$ is a diffeomorphism.
\item $Z= \exp ( \mathfrak{z} )$.
\item Any compact subgroup of $G$ is trivial.
\item If $\{ v_1, \ldots , v_n \} \hookrightarrow \mathfrak{g}$ is a strong Malcev basis, then the map $ \psi : \mathbb{R}^n \to G$ given by
\[ \psi (x_1, \ldots , x_n) := \exp(x_1v_1) \cdots \exp (x_n v_n) \]
is a diffeomorphism.
\item After identifying $\mathfrak{g}$ with $\mathbb{R}^n$ through a strong Malcev basis, the maps $(\log \circ \psi ) : \mathfrak{g} \to \mathfrak{g}$ and $(\psi ^{-1} \circ \exp ) : \mathfrak{g} \to \mathfrak{g}$ are polynomial maps of degree bounded by a number depending only on $n$.
\end{itemize}
\end{theorem}
\begin{corollary}\label{aspherical}
\rm Let $G$ be a connected nilpotent Lie group. Then $G$ is aspherical. That is, $\pi_k(G) =0$ for all $k \geq 2$.
\end{corollary}
\begin{proof}
For smooth manifolds, being aspherical is equivalent to having a contractible universal cover. By Theorem \ref{simply-connected-nilpotent}, the universal cover $\tilde{G} \to G$, being a simply connected nilpotent Lie group, is contractible.
\end{proof}
\begin{lemma}\label{discrete-normal-central}
\rm Let $G$ be a connected Lie group, and $\Lambda \leq G$ a discrete normal subgroup. Then $\Lambda$ is central in $G$.
\end{lemma}
\begin{proof}
For fixed $\lambda \in \Lambda$, the map $ \phi_{\lambda}:G \to \Lambda$ given by $ \phi_{\lambda}(g) := g \lambda g^{-1}$ is a continuous map from a connected space to a discrete space, hence it is constant. Since $\phi_{\lambda}(e) = \lambda$, we have $g \lambda = \lambda g$ for all $g \in G$. Since $\lambda \in \Lambda$ was arbitrary, the lemma follows.
\end{proof}
\begin{lemma}\label{compact-nilpotent-1}
\rm Let $G$ be a connected nilpotent Lie group, and $K \leq G$ a compact subgroup. Then $K$ is central in $G$.
\end{lemma}
\begin{proof}
Let $\rho : \tilde{G} \to G$ be the universal cover of $G$ and $Z\leq \tilde{G}$ be the center of $\tilde{G}$. By Theorem \ref{simply-connected-nilpotent}, $\tilde{G}/Z$ is a simply connected nilpotent group. Since $\rho (Z)$ is contained in the center of $G$, the lemma is a consequence of the following claim.
\begin{center}
\textbf{Claim:} The preimage $\rho^{-1}(K)$ is contained in $Z$.
\end{center}
Let $\Lambda := \ker \rho \leq \tilde{G}$, which is a discrete normal subgroup. By Lemma \ref{discrete-normal-central}, $\Lambda $ is contained in $Z$, and we have a projection from $G= \tilde{G} / \Lambda$ to $\tilde{G}/Z$ sending $K$ to the compact group $\rho^{-1}(K)/(Z\cap \rho ^{-1}(K)) \leq \tilde{G}/Z$. By Theorem \ref{simply-connected-nilpotent}, $\rho^{-1}(K)/(Z\cap \rho ^{-1}(K))$ is trivial and the claim follows.
\end{proof}
\begin{corollary}\label{compact-nilpotent-2}
\rm A connected compact nilpotent Lie group is abelian.
\end{corollary}
\begin{definition}
\rm We say that a continuous map $Y \to Z$ between path connected topological spaces \textit{has no content} or \textit{has trivial content} if the induced map $\pi_1(Y) \to \pi_1 (Z) $ is trivial. Otherwise, we say that the map \textit{has non-trivial content}.
\end{definition}
\begin{lemma}\label{LongHomo}
\rm Let $G$ be a connected nilpotent Lie group and $K$ a compact subgroup such that its identity component $K_0$ is non-trivial. Then the inclusion $K_0 \to G$ has non-trivial content.
\end{lemma}
\begin{proof}
Since $K_0$ is a connected compact nilpotent Lie group, it is homeomorphic to a torus, and $\pi_1(K_0)$ is non-trivial. From the long exact homotopy sequence of the fibration $ G \to G/K_0$, we extract the portion
$$ \pi_2 (G/K_0) \to \pi_1 (K_0) \to \pi_1 (G). $$
By Lemma \ref{compact-nilpotent-1}, $K_0$ is central in $G$. Hence, $G/K_0$ is a connected nilpotent Lie group, and aspherical by Corollary \ref{aspherical}. This implies that $\pi_2(G/K_0)=0$, so the map $\pi_1 (K_0) \to \pi_1 (G) $ is injective and, in particular, non-trivial.
\end{proof}
\begin{lemma}\label{FinalCover}
\rm Let $G$, $\tilde{G}$ be connected Lie groups such that $\tilde{G}$ is a discrete extension of $G$ (i.e. there is a surjective continuous morphism $f: \tilde{G} \to G$ with discrete kernel). Assume $G$ and $\tilde{G}$ are equipped with invariant geodesic metrics for which $f$ is a local isometry. Let $\delta > 0$ be such that $B^{G}(e,\delta)$ contains no non-trivial subgroups, and the inclusion $B^{G}(e, \delta) \to G$ has no content. Then $B^{\tilde{G}}(e,\delta)$ contains no non-trivial subgroups and the inclusion $B^{\tilde{G}}(e, \delta) \to \tilde{G}$ has no content.
\end{lemma}
\begin{proof}
If there is a group $H \subset B^{\tilde{G}}(e, \delta)$, then its image $f(H)$ is a subgroup of $G$ contained in $B^{G}(e, \delta)$, so $f(H) = \{e \} \leq G$. If there were a non-trivial element $h \in H \backslash \{e \}$, we could take a shortest path $\gamma : [0,1] \to \tilde{G}$ from $e$ to $h$. The projection $f \circ \gamma$ would be a loop in $G$ contained in $B^{G}(e, \delta )$ whose lift starting at $e$ ends at $h \neq e$; since $f$ is a covering map, such a loop is non-contractible, contradicting the hypothesis that $B^{G}(e, \delta) \to G$ has no content. Also, we can consider the commutative diagram
\[
\begin{tikzcd}[column sep=3em]
B^{\tilde{G}}(e, \delta) \arrow{r}{ } \arrow{d}{} & \tilde{G} \arrow{d}{f} \\
B^{G}(e,\delta) \arrow{r}{ } & G
\end{tikzcd}
\]
Since $ f$ is a covering map, the right vertical arrow induces an injective map at the level of fundamental groups, and the bottom horizontal arrow has no content by hypothesis. Therefore the top horizontal arrow has trivial content as well.
\end{proof}
\subsection{Lifting curves}
In this section we present some techniques we use to lift curves from one space to another.
\begin{lemma}\label{LipApprox}
\rm Let $Y$ be a proper geodesic space, and $\gamma : [0,1] \to Y$ be a continuous curve. Then for every $\varepsilon > 0$ there is a Lipschitz curve $\beta : [0,1] \to Y$ with $\beta (0)= \gamma (0)$, $\beta (1) = \gamma (1)$, and
$$d_U(\gamma , \beta , [0,1]) \leq \varepsilon .$$
\end{lemma}
\begin{proof}
Let $k \in \mathbb{N}$ be such that $\vert x_1 - x_2 \vert \leq 3/k$ implies $ d (\gamma (x_1), \gamma (x_2)) \leq \varepsilon /10$ for $x_1, x_2 \in [0,1]$; such $k$ exists since $\gamma$ is uniformly continuous. Then choose $\beta : [0,1]\to Y$ such that $\beta \vert_{[j/k, (j+1)/k]}$ is a constant speed minimizing curve between $ \gamma (j/k) $ and $\gamma ( (j+1)/k)$ for $j = 0, 1, \ldots , k-1$. It is easy to see that $\beta$ satisfies the desired properties.
\end{proof}
\begin{lemma}\label{ShortLoopNTI}
\rm Let $f:Y \to Z$ be a continuous map between proper geodesic spaces. Assume that $Y$ is semi-locally-simply-connected, and the composition $B(y,r) \to Y \to Z$ has non-trivial content for some $y \in Y$, $r > 0$. Then there is a loop $\beta : [0,1] \to Y$ based at $y$ of length $\leq 3r $ such that $f \circ \beta$ is non-contractible in $Z$.
\end{lemma}
\begin{proof}
Since $Y$ is semi-locally-simply-connected, there is $\varepsilon >0$ such that any two closed curves $\beta_1, \beta_2 : [0,1] \to B(y,2r)$ with $d_U(\beta_1 , \beta_2, [0,1] ) \leq \varepsilon$ are homotopic to each other in $Y$. By hypothesis, there is a loop $\gamma_1 :[0,1] \to Y $ based at $y$ whose image is contained in $B(y,r)$ and such that $f \circ \gamma_1$ is non-contractible in $Z$. By Lemma \ref{LipApprox}, there is a Lipschitz loop $\gamma : [0,1] \to B(y,(1.1)r) $ based at $y$ such that $f \circ \gamma $ is non-contractible in $Z$. Let $L$ be the Lipschitz constant of $\gamma$, and pick $\ell \in \mathbb{N}$ with $\ell \geq 2L /r$. For each $j = 0, 1, \ldots , \ell$, set $\sigma_{j}$ to be a minimizing path from $y$ to $\gamma (j/\ell)$, and for each $j = 0, 1, \ldots , \ell - 1$, define $\beta_j $ as the concatenation $\overline{\sigma}_{j+1} \ast \gamma \vert_{[j/\ell , (j+1 )/ \ell ]} \ast \sigma_j$. Since $\gamma$ is homotopic to the concatenation $\beta_{\ell-1} \ast \ldots \ast \beta_0$, at least one of the curves $\beta_j$ satisfies that $f \circ \beta_j$ is non-contractible in $Z$, and all of them have length $\leq 3r$.
\end{proof}
\begin{corollary}\label{ShortLoopNT}
\rm Let $Y$ be a proper semi-locally-simply-connected geodesic space. Assume the inclusion $B(y,r) \to Y$ has nontrivial content for some $y \in Y$, $r > 0$. Then there is a non-contractible loop based at $y$ of length $\leq 3r $.
\end{corollary}
\begin{proof}
Apply Lemma \ref{ShortLoopNTI} with $f = Id_Y$.
\end{proof}
\begin{definition}
\rm Let $f : Y_1 \to Y_2$ be a map between proper geodesic spaces. We say that $f$ is a \textit{metric submersion} if for every $y \in Y_1$ and $r > 0$, we have $f(B^{Y_1}(y,r)) = B^{Y_2}(f(y) ,r)$.
\end{definition}
\begin{lemma}\label{Submer}
\rm Let $Y$ be a proper geodesic space, and $\Gamma \leq Iso (Y)$ a closed subgroup. Then the quotient map $f: Y \to Y/ \Gamma$ is a metric submersion.
\end{lemma}
\begin{proof}
Let $y \in Y $, $r > 0 $, and $z \in Y/\Gamma$ with $d_{Y/\Gamma}( f(y), z) \leq r$. Since $\Gamma $ is closed, the orbits are closed, and since $Y$ is proper, there is $z_1 \in f^{-1}(z)$ with $d(y,z_1) = d(f(y), z ) \leq r$. This proves $ B^{Y/ \Gamma}(f(y), r) \subset f(B^Y(y,r)) $. The other containment is immediate from the definition of the metric on $Y/ \Gamma$.
\end{proof}
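For example, if $Y = \mathbb{R}$ and $\Gamma = \mathbb{Z}$ acts by translations, the quotient $Y/\Gamma$ is a circle of length $1$, and the quotient map $f : \mathbb{R} \to \mathbb{R}/\mathbb{Z}$ satisfies $f(B^{\mathbb{R}}(y,r)) = B^{\mathbb{R}/\mathbb{Z}}(f(y),r)$ for all $r > 0$; for $r \geq 1/2$ the image is all of $\mathbb{R}/\mathbb{Z}$.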
\begin{lemma}\label{HoriLift}
\rm Let $f: Y_1 \to Y_2$ be a metric submersion between proper geodesic spaces, $\gamma : [0,1] \to Y_2$ a Lipschitz curve, and $q_0 \in f^{-1}(\gamma (0))$. Then there is a curve $\tilde{\gamma} : [0,1] \to Y_1$ with $f \circ \tilde{ \gamma} = \gamma ,$ $\tilde{\gamma}(0) = q_0$, and length$(\tilde{\gamma}) = $length$ (\gamma )$.
\end{lemma}
\begin{proof}
For each $j \in \mathbb{N}$, let $D_j : = \{ 0, \frac{1}{2^j}, \ldots , \frac{2^j-1}{2^j} , 1 \} $, and define $h_j : D_j \to Y_1$ as follows: Let $h_j(0)=q_0$, and inductively, let $h_j(x + 1/2^j)$ be a point in $f^{-1}( \gamma (x + 1/2^j ) )$, such that
$$ d_{Y_1} ( h_j ( x + 1/2^j), h_j ( x ) ) = d_{Y_2} ( \gamma ( x + 1/2^j ), \gamma (x) ), \text{ for }x \in D_j \backslash \{ 1 \} . $$
Using Cantor's diagonal argument, we can find a subsequence of $h_j$ that converges at every dyadic rational. Since the maps $h_j$ are uniformly Lipschitz, the limit extends to a Lipschitz map $\tilde{ \gamma} : [0,1] \to Y_1$. It is easy to check that $\tilde{\gamma}$ satisfies the desired properties.
\end{proof}
\subsection{Ultralimits}
In this section we discuss the ultrafilter tools we will use during the proof of Theorem \ref{PIII}, including metric ultralimits and algebraic ultraproducts. We refer the reader to (\cite{AKP}, Chapter 4), (\cite{BFT}, Section 2.1), and (\cite{BGT}, Appendix A) for proofs and further discussion.
\begin{definition}
\rm Let $\wp (\mathbb{N})$ denote the power set of the natural numbers and $\alpha : \wp (\mathbb{N}) \to \{ 0,1 \}$ be a function. We say that $\alpha$ is a \textit{non-principal ultrafilter} if it satisfies:
\begin{itemize}
\item $\alpha (\mathbb{N}) = 1$.
\item $\alpha (A \cup B )= \alpha (A) + \alpha (B)$ for all disjoint $A , B \subset \mathbb{N}$.
\item $\alpha (F )=0$ for all finite $F \subset \mathbb{N}$.
\end{itemize}
\end{definition}
Using Zorn's Lemma, it is not hard to show that non-principal ultrafilters exist. We choose one, $\alpha$, and fix it for the rest of this paper. Sets $A \subset \mathbb{N}$ with $\alpha (A) =1$ are called $\alpha$-large. For a property $P : \mathbb{N}\to \{0,1\}$, if $\alpha ( P^{-1} (1) ) =1$ we say that ``$P$ holds for $i$ sufficiently $\alpha$-large'', or ``$P$ holds for $i$ sufficiently close to $\alpha$''.
\begin{definition}
\rm Let $A_i$ be a sequence of sets. In the product
$$A^{\prime} : = \prod _{i=1}^{\infty} A_i,$$
we say that two sequences $\{ a_i \} , \{ a_i^{\prime} \}$ are $\alpha $\textit{-equivalent} if
$$ \alpha \left( \{ i \vert a_i = a_i^{\prime} \} \right) =1. $$
The set $A^{\prime}$ modulo this equivalence relation is called the \textit{algebraic ultraproduct} of the sets $A_i$ and is denoted by
$$A_{\alpha} : = \lim\limits_{i \to \alpha} A_i. $$
\end{definition}
If $A_i = \mathbb{R}$ for each $i$, an element in $A_{\alpha}$ is called a \textit{non-standard real number}. For $x= \{ x_i\}$, $y=\{ y_i \}$ non-standard real numbers, we say that $x \leq y$ if $\alpha \left( \{ i \in \mathbb{N} \vert x_i \leq y_i \} \right) =1$.
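For example, the class $x$ of the sequence $x_i = 1/i$ is a non-standard real number satisfying $x \leq \varepsilon$ for every standard real $\varepsilon > 0$, viewed as a constant sequence: the set $\{ i \in \mathbb{N} \vert 1/i \leq \varepsilon \}$ is cofinite, hence $\alpha$-large. On the other hand, $x \neq 0$, since $x_i \neq 0$ for all $i$.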
Let $ x_i $ be a sequence in a metric space $X$. We say that the sequence \textit{ultraconverges} to a point $x_{\infty} \in X$ if for every $\varepsilon > 0$,
$$ \alpha \left( \{ i \vert d(x_i, x_{\infty}) < \varepsilon \} \right) = 1 .$$
If this is the case, the point $x_{\infty} $ is called the \textit{ultralimit} of the sequence, and we write $x_i \xrightarrow{\alpha} x_{\infty}$ or
$$\lim\limits_{i \to \alpha} x_i = x_{\infty}. $$
It is easy to show that if a sequence has an ultralimit, then it is unique. Furthermore, if $X$ is compact, then any sequence in $X$ ultraconverges.
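For example, the sequence $x_i = (-1)^i$ in $\mathbb{R}$ ultraconverges: exactly one of the sets of even and odd indices is $\alpha$-large, since their $\alpha$-values sum to $\alpha (\mathbb{N}) = 1$, so $\lim_{i \to \alpha} (-1)^i$ equals $1$ or $-1$ accordingly.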
\begin{definition}
\rm Let $ (X_i, p_i) $ be a sequence of pointed metric spaces. Let $X_{\alpha}^{\prime}$ be the set of sequences $ x_i \in X_i$ such that
$$ \sup\limits_{i \in \mathbb{N}} d(x_i, p_i) < \infty.$$
Equip $X_{\alpha}^{\prime}$ with the pseudometric
$$d( \{ x_i \}, \{ y_i \} ) := \lim\limits_{i \to \alpha} d(x_i, y_i).$$
Let $X_{\alpha}$ be the metric space corresponding to the pseudometric space $X_{\alpha}^{\prime}$. The pointed metric space $(X_{\alpha}, p_{\alpha} )$, where $p_{\alpha} $ is the class of the sequence $\{ p_i \} $, is called the \textit{metric ultralimit} of the sequence $ (X_i, p_i)$.
\end{definition}
It is straightforward to show that $X_{\alpha} $ is always a complete metric space.
\begin{theorem}\label{Subseq}
\rm If a sequence of proper metric spaces $(X_i, p_i)$ converges in the pointed Gromov--Hausdorff sense to a space $(X,p)$, then $(X_{\alpha}, p_{\alpha}) $ and $(X,p)$ are isometric. Conversely, if the sequence $(X_i, p_i)$ is precompact in the pointed Gromov--Hausdorff topology, then there is a subsequence that converges to $(X_{\alpha}, p_{\alpha}) $ in the pointed Gromov--Hausdorff sense.
\end{theorem}
\begin{remark}
\rm To define an algebraic ultraproduct $A_{\alpha} = \lim_{i \to \alpha} A_i $ or a metric ultralimit $X_{\alpha} = \lim_{i \to \alpha}X_i$, we don't require the sets $A_i $ or spaces $X_i$ to be defined for all $i$, but only for all $i$ in an $\alpha$-large set.
\end{remark}
\subsection{Approximate isometries, equivariant convergence}
If we have a sequence of pointed proper metric spaces $(X_i, p_i)$ converging in the pointed Gromov--Hausdorff sense to a space $(X,p)$, then any sequence of groups of isometries $G_i \leq Iso (X_i)$ converges in a suitable sense to a group $G_{\alpha} \leq Iso (X)$. In this section we present a precise statement of this result. The proofs follow from the discussion in (\cite{GromovPG}, Section 6).
\begin{definition}
\rm Let $ (X_i, p_i)$ be a sequence of pointed proper metric spaces and $Y_i$ a sequence of proper metric spaces. We say that two sequences of functions $\phi_i, \psi_i : X_i \to Y_i$ are \textit{ultraequivalent} if for every $R>0$,
\begin{center}
$\lim\limits_{i\to \alpha} d_U ( \phi_i , \psi_i , B(p_i, R) ) =0 $.
\end{center}
\end{definition}
\begin{lemma}\label{ALISON}
\rm Let $(X,p)$ be a pointed proper metric space and $\phi_i : X \to X$ a sequence of functions for which the sequence $ d ( \phi_i p, p ) $ is bounded. Assume that for every $R > 0$, $Dis \left( \phi_i \vert_{B(p, R)} \right) \to 0 $ as $i \to \infty$, and consider the map $\phi_{\alpha}:X \to X $ given by
$$ \phi_{\alpha} (x) : = \lim\limits_{i \to \alpha} \phi_i (x). $$
Then the sequence $\phi_i$ is ultraequivalent to the constant sequence $\phi_{\alpha}$, and $\phi_{\alpha}$ is an isometric embedding. We call $\phi_{\alpha}$ the \textit{ultralimit} of the sequence $\phi_i$. Furthermore, if there is a sequence $\psi_i: X \to X$ with $Dis \left( \psi_i \vert_{B(p,R)} \right) \to 0$ as $i \to \infty$ for all $R>0$, and such that the sequence $\phi_i\psi_i$ is ultraequivalent to the identity $Id_X$, then $\phi_{\alpha} $ is a surjective isometry.
\end{lemma}
Let $(X_i, p_i)$ be a sequence of proper geodesic spaces, converging in the pointed Gromov--Hausdorff sense to a space $(X, p)$, and consider a sequence of groups of isometries $G_i \leq Iso (X_i)$. We say that a sequence $g_i \in G_i$ is \textit{stable} if the sequence $ d(g_i(p_i), p_i) $ is bounded. The set $G_{\alpha}$ of stable sequences modulo ultraequivalence is called the \textit{equivariant ultralimit } of the sequence $G_i$. By definition, we have sequences of maps $f_i : X_i \to X$ and $h_i : X \to X_i$ with $f_i(p_i ) = p$, $h_i (p) = p_i$, and satisfying equations \ref{pGH1} and \ref{pGH2} for all $R>0$. We say that a sequence $g_i \in G_i$ \textit{ultraconverges} to a map $g: X \to X$ if the sequence of maps $(f_i \circ g_i \circ h_i ) $ is ultraequivalent to the constant sequence $g$. By Lemma \ref{ALISON}, any stable sequence has a unique ultralimit.
\begin{theorem}\label{equivariant-ultralimit}
\rm (Gromov) Let $(X_i,p_i)$ be a sequence of pointed proper metric spaces converging in the pointed Gromov--Hausdorff sense to a complete space $(X,p)$. Consider a sequence of groups of isometries $G_i \leq Iso (X_i)$ and let $G_{\alpha}$ be their equivariant ultralimit. The map $ \Phi : G_{\alpha} \to Iso (X)$ that sends a sequence to its ultralimit is well defined (that is, the ultralimit doesn't depend on the representative in the equivalence class), is injective (that is, if two stable sequences have the same equivariant ultralimit, then the sequences are ultraequivalent), and has closed image. Furthermore, if diam$(X_i/G_i) \to 0$, then $G_{\alpha} $ acts transitively on $X$.
\end{theorem}
\begin{remark}
\rm The image $\Phi (G_{\alpha})$ may depend on the choice of the functions $f_i, h_i$ used to obtain the ultralimit of a stable sequence $g_i \in G_i$. However, as pointed out in Remark \ref{fihi}, we always assume that the functions $f_i$ and $h_i$ are already given in the hypothesis that the spaces $X_i$ converge to $X$.
\end{remark}
\begin{remark}
\rm The set $G_{\alpha }$ has two equivalent group structures. The one obtained by pulling back the group structure in $Iso(X)$ through $\Phi$:
\begin{center}
$a \ast b := \Phi^{-1} (\Phi (a) \circ \Phi (b)). $
\end{center}
And the one given by the groups $G_i$:
\begin{center}
$\{ g_i \} \ast \{ g_i^{\prime} \} : = \{ g_i \circ g_i ^{\prime} \} $.
\end{center}
\end{remark}
\begin{remark}
\rm The group $G_{\alpha}$ has two equivalent topologies. The one obtained by pulling back the topology of $Iso (X)$ through $\Phi$, yielding the following family of sets as a compact neighborhood basis of the identity:
\begin{center}
$U_{R, \varepsilon}^{\alpha} : = \Phi ^{-1} \left( U_{R, \varepsilon}^X\right) $.
\end{center}
And the one given by the groups $G_i$, yielding the following family of sets as a compact neighborhood basis of the identity:
\begin{center}
$\tilde{U}_{R,\varepsilon}^{\alpha}:= \left\{ \{ g_i \} \in G_{\alpha} \big\vert \text{ } \alpha \left( \left\{ i \vert g_i \in U_{R, \varepsilon} ^{X_i} \right\} \right) =1 \right\} .$
\end{center}
\end{remark}
\subsection{Local groups}
In this section we present the elementary theory of local groups and approximate groups that we will use. We refer the reader to (\cite{BGT}, Appendix B) for proofs and further discussion.
\begin{definition}
\rm Let $(G,e)$ be a pointed topological space. We say that $G$ is a \textit{local group}, if there are continuous maps $(\cdot )^{-1}: G \to G$ and $\cdot : \Omega \to G$ for some $ \Omega \subset G \times G$ such that
\begin{itemize}
\item $\Omega$ is an open set containing $(G \times \{ e\} ) \cup ( \{ e \} \times G ) $.
\item For all $g \in G$, we have $(g,(g)^{-1}), ((g)^{-1},g)\in \Omega$ and $g \cdot e = e \cdot g = g$.
\item For all $g \in G$, we have $g \cdot (g)^{-1} = (g)^{-1} \cdot g = e$.
\item For all $g,h,k \in G$ such that $(g,h) , (gh,k), (h,k), (g, hk) \in \Omega$, we have $g (hk)= (gh)k$.
\end{itemize}
\end{definition}
\begin{definition}
\rm We say that a local group $G$ is a \textit{local Lie group} if it is a smooth manifold, and the maps $(\cdot )^{-1}: G \to G$ and $\cdot : \Omega \to G$ are smooth.
\end{definition}
\begin{definition}
\rm Let $G$ be a local group. We say that a subset $A \subset G$ is \textit{symmetric} if $e \in A$ and $g^{-1} \in A$ for all $g \in A$.
\end{definition}
\begin{definition}
\rm Let $G$ and $H$ be two local groups. We say a function $\phi : G \to H$ is a morphism if the following holds:
\begin{itemize}
\item $\phi (e) = e$.
\item For all $g \in G$, we have $\phi (g^{-1}) = \left[ \phi (g) \right] ^{-1}$.
\item If $g,h \in G$ are such that $g\cdot h$ is defined in $G$, then $\phi (g) \cdot \phi (h) $ is defined in $H$, and $\phi (g\cdot h ) = \phi (g) \phi (h)$.
\end{itemize}
\end{definition}
\begin{definition}
\rm Let $(G,e$) be a local group, and $U\subset G$ a symmetric subset. We say that $U$ is a \textit{restriction} of $G$ if when restricting $(\cdot )^{-1} $ to $U$, and $\cdot : \Omega \to G$ to $\left[ ( \cdot )^{-1}(U)\right] \cap (U \times U) $, we obtain a local group structure on $U$.
\end{definition}
\begin{definition}
\rm Let $G$ be a local group, and $g_1, \ldots , g_m \in G$. We say that the product $g_1 \cdots g_m$ is \textit{well defined in} $G$ if for each $1 \leq j\leq k \leq m$, we can find an element $g_{[j,k]} \in G$ such that
\begin{itemize}
\item For all $j \in \{ 1, \ldots , m \} $, we have $g_{[j,j]}= g_j$.
\item For all $1 \leq j \leq k \leq \ell \leq m$, the pair $\left( g_{[j,k]} , g_{[k+1,\ell ]} \right) $ lies in $\Omega$, and $g_{[j,k]} \cdot g_{[k+1,\ell ]} = g_{[j, \ell ]}$.
\end{itemize}
For sets $A_1, \ldots , A_m \subset G$, we say that the product $A_1\cdots A_m$ is \textit{well defined} if for all choices of $g_j \in A_j$, the product $g_1 \cdots g_m$ is well defined.
\end{definition}
\begin{definition}
\rm Let $G$ be a local group. We say that a subset $A \subset G$ is a \textit{multiplicative set} if it is symmetric, and $A^{200}$ is well defined in $G$.
\end{definition}
\begin{definition}
\rm We say that a local group $G$ is \textit{cancellative} if the following holds:
\begin{itemize}
\item For all $g,h,k \in G$ such that $gh$ and $gk$ are well defined and equal to each other, we have $h=k$.
\item For all $g,h,k \in G$ such that $hg$ and $kg$ are well defined and equal to each other, we have $h=k$.
\item For all $g,h \in G$ such that $gh $ and $h^{-1}g^{-1}$ are well defined, we have $(gh)^{-1} = h^{-1}g^{-1}$.
\end{itemize}
\end{definition}
\begin{definition}
\rm Let $G, G^{\prime}$ be local groups such that $G^{\prime}$ is a restriction of $G$. We say that $G^{\prime}$ is a \textit{sub-local group} of $G$ if there is an open set $V\subset G$ containing $G^{\prime}$ with the property that for all $a,b \in G^{\prime}$ such that $ab$ is well defined in $V$, we have $ab \in G^{\prime}$. If $V$ also satisfies that for all $a \in G^{\prime} $ and $b \in V$ such that $bab^{-1}$ is well defined in $V$, we have $bab^{-1}\in G^{\prime}$, we say that $G^{\prime}$ is a \textit{normal} sub-local group of $G$, and $V$ is called a \textit{normalizing neighborhood} of $G^{\prime}$.
\end{definition}
\begin{lemma}
\rm Let $G$ be a cancellative local group and $H$ be a normal sub-local group with normalizing neighborhood $V$. Let $W \subset G$ be an open symmetric subset such that $W^6 \subset V$. Then there is a cancellative local group $W/H $ equipped with a surjective morphism $\phi : W \to W / H$ such that, for all $g,h \in W$, one has $\phi (g) = \phi (h)$ if and only if $gh^{-1}\in H$, and for any $E \subset W / H$, one has that $E$ is open if and only if $\phi^{-1}(E)$ is open.
\end{lemma}
\subsection{Ultraconvergence of polynomials}
\begin{definition}
\rm Let $ Q_i : \mathbb{R}^{m} \to \mathbb{R}^n $ be a sequence of polynomials of bounded degree. We say that the sequence converges \textit{well} to a polynomial $Q: \mathbb{R}^{m} \to \mathbb{R}^n$ if the sequence $Q_i$ is ultraequivalent to the constant sequence $Q$. Equivalently, the sequence $Q_i$ converges well to $Q$ if the sequences of coefficients of $Q_i$ ultraconverge to the corresponding coefficients of $Q$.
\end{definition}
\begin{lemma}\label{PreWell}
\rm For each $d\in \mathbb{N}$, there is $N_0 \in \mathbb{N}$ such that the following holds. Let $I_{N_0}:= \{ -1, \ldots,\frac{-1}{N_0}, 0, \frac{1}{N_0}, \ldots , 1 \} $, and assume we have polynomials $Q_i, Q: \mathbb{R}^m \to \mathbb{R}^n$ of degree $\leq d$ such that $Q_i(x) \xrightarrow{\alpha} Q(x) $ for all $x\in \left( I_{N_0} \right)^{\times m}$. Then $Q_i$ converges well to $Q$.
\end{lemma}
\begin{proof}
Working on each coordinate, we may assume that $n=1$. We proceed by induction on $m$, the case $m =1$ being elementary Lagrange interpolation. Name the variables $x_1, \ldots , x_m$. Since
$$ \mathbb{R}[x_1, \ldots , x_m] = (\mathbb{R}[x_1])[x_2, \ldots , x_{m }], $$
we can consider the polynomials $Q_i, Q$ as polynomials $\tilde{Q}_i, \tilde{Q}$ in the variables $x_2, \ldots, x_{m }$ with coefficients in $\mathbb{R}[x_1]$.
If $Q_i(x) \xrightarrow{\alpha} Q(x) $ for all $x\in \left( I_{N_0}\right)^{\times m}$, we have $\tilde{Q}_i(q, x^{\prime}) \xrightarrow{\alpha} \tilde{Q}(q,x^{\prime})$ for all $q \in I_{N_0}$ and $x^{\prime} \in ( I_{N_0})^{\times (m -1)}$. By the induction hypothesis, if $N_0$ is large enough, depending on $d$, the coefficients of $\tilde{Q}_i $, which are polynomials in $\mathbb{R}[x_{1}]$, ultraconverge to the coefficients of $\tilde{Q}$ whenever $x_{1}\in I_{N_0}$. By the case $m =1$, if $N_0$ is large enough, the coefficients of $Q_i$ ultraconverge to the coefficients of $Q$.
\end{proof}
\begin{lemma}\label{Well}
\rm Let $Q_i: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ be a sequence of polynomial group structures in $\mathbb{R}^n$ of bounded degree. Assume $Q_i $ converges well to a polynomial group structure $Q: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$. Then the corresponding sequence of Lie algebra structures on $\mathbb{R}^n$ converges well to the Lie algebra structure of $Q$.
\end{lemma}
\begin{proof}
This follows from the fact that the structure coefficients of the Lie algebras depend continuously on the second derivatives of $Q_i$, which by hypothesis, ultraconverge to the corresponding second derivatives of $Q$.
\end{proof}
\begin{definition}\label{quasil}
\rm For $x \in \mathbb{R}^r$, we define its \textit{support} as
$$ \text{supp} ( x_1, \ldots , x_r) := \{ i \in \{ 1, \ldots , r \} \vert x_i \neq 0 \} . $$
For $x,y \in \mathbb{R}^r$, we say that $x \preceq y$ if $i \leq j$ for every $i \in \text{supp} (x) $, $j \in \text{supp} (y) $. We say that a polynomial group structure $Q : \mathbb{R}^r \times \mathbb{R}^r\to \mathbb{R}^r$ is \textit{quasilinear} if
$$Q(x,y) = Q(x, 0) + Q(0,y) = x+y$$
whenever $x \preceq y$.
\end{definition}
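As a concrete illustration (an example added here for exposition, not part of the original argument), consider the coordinates of the second kind $(x_1,x_2,x_3) \mapsto \exp (x_1 X) \exp (x_2 Y) \exp (x_3 Z)$ on the Heisenberg group, where $[X,Y]=Z$ is central. A direct computation with the Baker--Campbell--Hausdorff formula gives the polynomial group structure
$$ Q \left( (x_1,x_2,x_3), (y_1,y_2,y_3) \right) = \left( x_1+y_1 ,\ x_2+y_2 ,\ x_3+y_3 - x_2y_1 \right) . $$
The only non-additive term $-x_2y_1$ vanishes unless $2 \in \text{supp}(x)$ and $1 \in \text{supp}(y)$, which is impossible when $x \preceq y$, so $Q$ is quasilinear of degree $2$.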
Note that for any quasilinear group structure $Q:\mathbb{R}^r \times \mathbb{R}^r \to \mathbb{R}^r$, the coordinate axes are one-parameter subgroups, and the exponential map
$$\exp : T_0 \mathbb{R}^r = \mathbb{R}^r \to \mathbb{R}^r$$
is the identity when restricted to such axes. Moreover, by the Baker--Campbell--Hausdorff formula, for $x= (x_1, \ldots , x_r) \in \mathbb{R}^r$, the expression
\begin{eqnarray*}
\log ( x ) & = & \log \left( x_{ 1} e_1 + \ldots + x_{r}e_r \right) \\
& = & \log \left( ( x_{ 1} e_1 ) \ldots ( x_{r}e_r) \right) \\
& = & \log ( \exp ( \log ( x_{1}e_1 ) ) \ldots \exp( \log ( x_{r}e_r ) ) ) \\
& = & \log ( \exp (x_{1 }e_1) \ldots \exp( x_{r}e_r) )
\end{eqnarray*}
is a polynomial in the variables $x_1, \ldots , x_r$, and its coefficients depend continuously on the structure coefficients of the Lie algebra associated to $(\mathbb{R}^r, Q)$. This, together with Lemma \ref{Well}, implies the following result.
\begin{lemma}\label{Quasilinear Continuity}
\rm Consider quasilinear polynomial nilpotent group structures $Q_i , Q : \mathbb{R}^r\times \mathbb{R}^r \to \mathbb{R}^r$ of bounded degree. Let $\log_i ,\log : \mathbb{R}^r \to \mathbb{R}^r = T_0 \mathbb{R}^r$ denote the logarithm maps for the group structures $Q_i$ and $Q$, respectively. Assume the sequence $Q_i$ converges well to $Q$, and a sequence $x_i \in \mathbb{R}^r$ ultraconverges to a point $x \in \mathbb{R}^r$. Then
\[ \lim\limits_{i \to \alpha} \log_i (x_i) = \log (x). \]
\end{lemma}
\subsection{Constructing covering spaces}\label{CoverConstruction}
In this section we present the constructions of covering spaces we will need.
\begin{definition}
\rm Let $\varepsilon > 0$. We say that a covering map $\tilde{Y} \to Y$ of geodesic spaces is $\varepsilon$\textit{-wide} if for every $y \in Y$, the ball $B^Y(y, \varepsilon ) $ is an evenly covered neighborhood of $y$.
\end{definition}
The following result is obtained via the standard construction of covering spaces (see \cite{Munk}, Theorem 77.1).
\begin{lemma}\label{cover-characterization}
\rm Let $(Y,y_0)$ be a pointed proper geodesic space, $\Delta$ a group, and $\varepsilon > 0$. Then the following are equivalent:
\begin{itemize}
\item There is a Galois $\varepsilon$-wide covering map $\tilde{Y} \to Y$ with Galois group $\Delta$.
\item There is a surjective morphism $\pi _ 1 (Y,y_0) \to \Delta$ whose kernel contains all the classes containing $\varepsilon$-lassos nailed at $y_0$.
\end{itemize}
\end{lemma}
The following result by Sormani and Wei implies that if two geodesic spaces are sufficiently close in the Gromov--Hausdorff sense, then one can transfer Galois covers from one of them to the other (\cite{SWHau}, Theorem 3.4).
\begin{theorem}\label{sormani-cover-copy}
\rm (Sormani--Wei) Let $A, B$ be proper geodesic spaces with
$$d_{GH} (A,B ) \leq \varepsilon/200, $$
and $\tilde{B} \to B$ a Galois $\varepsilon$-wide covering map with Galois group $\Delta$. Then there is a Galois cover $\tilde{A} \to A$ with Galois group $\Delta$.
\end{theorem}
Consider a pointed proper geodesic space $(Z,z_0)$, and a discrete group of isometries $\Gamma \leq Iso (Z)$, with diam$(Z/\Gamma) \leq \rho /10$ for some $\rho > 0$. We define
\begin{center}
$B:= B(z_0, \rho ),$
$ S := \{ g \in \Gamma \vert d(gz_0, z_0) \leq 2 \rho \} = \{ g \in \Gamma \vert gB \cap B \neq \emptyset \} . $
\end{center}
Let $ \tilde{\Gamma } $ be the abstract group generated by $S$, with relations
\begin{center}
$s= s_1 s_2$ in $ \tilde{\Gamma } $, whenever $s, s_1, s_2 \in S$ and $s= s_1s_2$ in $\Gamma$.
\end{center}
If we denote the canonical embedding $S \hookrightarrow \tilde{\Gamma} $ by $s \mapsto s^{\sharp }$, then there is a unique group morphism $\Phi : \tilde{\Gamma } \to \Gamma$ that satisfies $\Phi (s^{\sharp }) = s$ for all $s \in S$. This map is surjective by Lemma \ref{short-generators}. Equip $ \tilde{\Gamma } $ with the discrete topology, and consider the topological space
\begin{center}
$\tilde{Z} := \left( \tilde{\Gamma } \times B \right) / \sim$,
\end{center}
where $\sim$ is the minimal equivalence relation such that
\begin{center}
$ (gs^{\sharp}, x) \sim (g,s x) $ whenever $s \in S$, $x , sx \in B$.
\end{center}
We obtain a continuous map $\Psi : \tilde{Z} \to Z$ given by
\begin{center}
$\Psi (g, x) : = \Phi (g)(x) .$
\end{center}
\begin{theorem}\label{monodromy}
\rm The map $\Psi$ is a Galois $(\rho /3)$-wide covering map with Galois group $Ker (\Phi )$.
\end{theorem}
The proof of Theorem \ref{monodromy} is obtained from a sequence of lemmas.
\begin{lemma}\label{relation-is-good}
\rm Let $(a,x), (b,y ) \in \tilde{\Gamma} \times B$. The following are equivalent:
\begin{itemize}
\item there is $s \in S$ with $b= as^{\sharp}$, $x=sy$.
\item there is $t \in S$ with $a=bt^{\sharp}$, $y=tx$.
\item $(a,x) \sim (b,y)$.
\end{itemize}
\end{lemma}
\begin{proof}
The first two conditions are equivalent by taking $t $ to be $s^{-1}$, and they imply the third one by definition. Using the fact that the first two conditions are equivalent, the third condition implies that there is a sequence $s_1, \ldots , s_k \in S$ such that
\[(a,x) \sim (as_1^{\sharp} , s_1^{-1}x) \sim \ldots \sim (as_1^{\sharp} \cdots s_k^{\sharp} , s_k^{-1} \cdots s_1^{-1} x ) = (b,y). \]
This implies that $ \left( s_1^{-1} \right) , \left( s_2^{-1}s_1^{-1} \right) , \ldots , \left( s_k^{-1}\ldots s_1^{-1} \right) \in S$, allowing us to prove by induction on $j$ that $(s_1\cdots s_j)^{\sharp} s_{j+1}^{\sharp} = (s_1 \cdots s_{j+1} )^{\sharp}$ in $\tilde{\Gamma }$. This implies the first condition by taking $s$ to be $(s_1 \cdots s_k) \in S$.
\end{proof}
We say that a subset $A \subset \tilde{\Gamma } \times B$ is \textit{saturated} if it is a union of equivalence classes of the relation $\sim$. By definition, if $A$ is open and saturated, then its image in $\tilde{Z}$ is open. Let $U \subset Z$ be an open ball of radius $\rho /2$. Since $\Phi$ is surjective and diam$(Z/\Gamma ) \leq \rho / 10$, there is $g_0 \in \tilde{\Gamma}$ such that
\[V : = \Phi(g_0^{-1})(U) \subset B^Z(z_0, 2\rho /3).\]
\begin{lemma}\label{monodromy1}
\rm The preimage of $U$ is given by
\begin{equation}\label{preimage-u}
\Psi ^{-1}(U) = \bigsqcup_{g \in Ker(\Phi)} \left( \bigcup_{s \in S} \left( \{ g_0gs^{\sharp} \} \times \left(( s^{-1}V) \cap B \right) \right) \right) / \sim .
\end{equation}
\end{lemma}
\begin{proof}
By direct evaluation, if $x \in V \cap sB$, then
\[ \Psi (g_0gs^{\sharp}, s^{-1}x) = \Phi (g_0g)x = \Phi(g_0) x \in \Phi (g_0) (V) = U. \]
On the other hand, if $(h,x) \in \tilde{\Gamma} \times B$ is such that $\Phi(h) (x) \in U$, then $\Phi(g_0^{-1}h)(x) \in V$, and $\Phi(h) \in \Phi( g_0) S $, implying that $h = g_0gs^{\sharp}$ for some $g \in Ker (\Phi)$, $s \in S$. Also, $s(x) = \Phi ( g_0^{-1} g_0 s^{\sharp} )(x) = \Phi(g_0^{-1}h)(x) \in V$, proving that the class of $(h,x)$ belongs to the right hand side of Equation \ref{preimage-u}.
\end{proof}
\begin{lemma}\label{monodromy2}
\rm As $g$ ranges through $ Ker (\Phi )$, the sets
\[ W_g : = \bigcup_{s \in S} \left( \{ g_0gs^{\sharp} \} \times \left(( s^{-1}V) \cap B \right) \right) \subset \tilde{\Gamma } \times B \]
are open, disjoint, and saturated.
\end{lemma}
\begin{proof}
The fact that they are open is straightforward, since $(s^{-1}V) \cap B$ is open in $B$ for each $s \in S$. To prove that they are disjoint, assume that
\[ (g_0g_1s_1^{\sharp},x ) \sim (g_0 g_2 s_2 ^{\sharp} , y) \]
for some $g_1, g_2 \in Ker(\Phi),$ $s_1, s_2 \in S , $ $ x\in (s_1^{-1}V) \cap B , y \in (s_2^{-1}V ) \cap B . $ Lemma \ref{relation-is-good} implies that there is $t \in S$ with
\begin{equation}\label{mono-equation}
g_0g_1s_1^{\sharp} t^{\sharp} = g_0g_2s_2^{\sharp} , \text{ } x=ty .
\end{equation}
Applying $\Phi$ to both sides of the first equation, we get $s_2 = s_1t$, and hence $s_2^{\sharp} = s_1^{\sharp}t^{\sharp}$. Canceling this in Equation \ref{mono-equation}, we get $g_1=g_2$, proving that the sets $W_g$ are disjoint.
To prove that they are saturated, assume that for some $(h,x ) \in \tilde{\Gamma} \times B$, $g \in Ker (\Phi) $, $s \in S$, $y \in (s^{-1} V )\cap B$, we have $(h,x) \sim (g_0gs^{\sharp} , y)$. By Lemma \ref{relation-is-good}, there is $t \in S$ with
\[ h = g_0gs^{\sharp}t^{\sharp}, \text{ } y = tx. \]
This implies that $st (x)= s(y) \in V \subset B$, hence $st \in S$ and $(st)^{\sharp}=s^{\sharp} t ^{\sharp} $. Then
\[ (h,x) \in \{ g_0 g (st)^{\sharp} \} \times \left( ((st)^{-1}V) \cap B \right) , \]
proving that $W_g$ is saturated.
\end{proof}
\begin{lemma}\label{monodromy3}
\rm For each $g \in Ker (\Phi )$, the image of $W_g$ in $\tilde{Z}$ is sent homeomorphically via $\Psi$ onto $U$.
\end{lemma}
\begin{proof}
Surjectivity is straightforward, since $\Phi ( g_0 g ) (V) = U$. To check injectivity, assume that for some $s,t \in S$, $x\in (s^{-1}V) \cap B$, $y \in (t^{-1}V) \cap B$, we have
\[ \Phi(g_0gs^{\sharp})(x) = \Phi (g_0gt^{\sharp})(y). \]
Then $x=s^{-1}ty \in B$, implying that $s^{-1}t \in S$, and consequently, $(s^{-1}t)^{\sharp} = (s^{\sharp})^{-1}t^{\sharp}$. Hence
\[ g_0gs^{\sharp} (s^{-1}t)^{\sharp} = g_0gt^{\sharp} , \text{ }x= (s^{-1}t)y, \]
obtaining injectivity. To check that $\Psi \vert_{W_g}$ is open, take $\mathcal{O} \subset W_g$ open and saturated containing the class of
\[(g_0g, x) \in \{g_0g\} \times V . \]
Then $\Psi $ sends $ ( \{g_0g\} \times V ) \cap \mathcal{O} $ to an open neighborhood of $\Phi(g_0)(x)$. Since $(g_0g,x) $ was arbitrary, $\Psi \vert _{W_g} $ is open.
\end{proof}
By Lemmas \ref{monodromy1}, \ref{monodromy2}, and \ref{monodromy3}, $U$ is an evenly covered neighborhood. Since $U$ was arbitrary, Theorem \ref{monodromy} follows.
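To illustrate the construction (a sketch added here, not part of the proof), let $Z = \mathbb{R}/\mathbb{Z}$ be the circle of length $1$ with $z_0 = 0$, let $\Gamma \cong \mathbb{Z}/n$ be generated by the rotation $r$ of angle $1/n$ for some $n > 30$, and set $\rho := 5/n$, so that diam$(Z/\Gamma) = 1/(2n) \leq \rho /10$. Then
$$ S = \{ r^j \ \vert \ \vert j \vert \leq 10 \} ,$$
and the defining relations force $(r^j)^{\sharp} = \left( r^{\sharp} \right)^{j}$ for $\vert j \vert \leq 10$, so $\tilde{\Gamma} \cong \mathbb{Z}$ and $\Phi : \mathbb{Z} \to \mathbb{Z}/n$ is reduction mod $n$. Gluing the copies $\{ g \} \times B$ along $\sim$ unwraps the circle: $\tilde{Z} \cong \mathbb{R}$, and $\Psi$ becomes the universal covering map $\mathbb{R} \to \mathbb{R}/\mathbb{Z}$, a Galois cover with Galois group $Ker (\Phi ) = n \mathbb{Z}$.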
\section{Nilpotent groups of isometries}\label{PartI}
In this section, we begin the proof of Theorem \ref{MAIN} with the following result.
\begin{theorem}\label{PI}
\rm Let $(X_i,p_i)$ be a sequence of almost homogeneous spaces that converges in the pointed Gromov--Hausdorff sense to a space $(X,p)$. Then $X$ is a nilpotent locally compact group equipped with an invariant metric.
\end{theorem}
By hypothesis, there are discrete groups of isometries $G_i \leq Iso (X_i)$ with diam$(X_i/G_i) \to 0$. To prove Theorem \ref{PI} we first reduce it to the case when the groups $G_i$ are almost nilpotent.
\begin{lemma}\label{anil}
\rm Under the hypotheses of Theorem \ref{PI}, there are discrete groups of isometries $\Gamma_i \leq Iso (X_i)$ with diam$(X_i/\Gamma_i)\to 0$, satisfying that for each $\varepsilon > 0$, there is $N = N(\varepsilon ) \in \mathbb{N} $ such that for all $i \geq N$, and $g \in \Gamma_i ^{(N)}$, we have $d(gp_i,p_i ) < \varepsilon $.
\end{lemma}
This lemma is an elementary consequence of the following result, which is one of the strongest versions of the Margulis Lemma \cite{BGT}. It states that if a sufficiently large ball in a Cayley graph can be covered by a controlled number of balls of half its radius, then the corresponding group is virtually nilpotent.
\begin{theorem}\label{Margulis}
\rm (Breuillard--Green--Tao) For $C>0$, there is $N(C) \in \mathbb{N}$ such that the following holds: Let $A$ be a finite symmetric subset of a group $G$, which is in turn generated by a finite symmetric set $S$. If $S^N\subset A$, and $A^2$ can be covered by $C$ left translates of $A$, then there is a subgroup $ G^{\prime} \leq G$ with
\begin{itemize}
\item $[G:G^{\prime}]\leq N$
\item $\left( G^{\prime}\right)^{(N)} \subset A^4 $
\end{itemize}
\end{theorem}
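As a toy illustration of the hypotheses (written additively; an example added here, not taken from \cite{BGT}), take $G = \mathbb{Z}$, $S = \{ -1, 0, 1 \}$, and $A = [-n,n] \cap \mathbb{Z}$. Then $A$ is finite and symmetric, $A^2 = [-2n,2n] \cap \mathbb{Z}$ is covered by the $C = 3$ translates $A-n$, $A$, $A+n$, and $S^{N} \subset A$ whenever $N \leq n$. The conclusion holds with $G^{\prime} = G$: since $\mathbb{Z}$ is abelian, every iterated commutator is trivial and in particular lies in $A^4$. The strength of the theorem is that $N(C)$ depends only on the covering constant $C$, not on $G$.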
\begin{proof}[Proof of Lemma \ref{anil}:] Fix $k \in \mathbb{N}$, and define
\[ A_i : = \{ g \in G_i \vert d(gp_i,p_i) \leq 2/k \} . \]
It is straightforward to verify that $A_i$ is finite and symmetric. Since $X$ is proper, there is $C \in \mathbb{N}$ such that $B(p,4/k)$ can be covered by $C$ balls of radius $1/k$. That is, there are $q_1, \ldots , q_C \in X$ such that
\[ B(p , 4 /k) \subset \bigcup_{j=1}^C B(q_j, 1/k). \]
From the definition of pointed Gromov--Hausdorff convergence, there are sequences of functions $f_i : X_i \to X$, $h_i : X \to X_i$ with $f_i(p_i)=p$, $h_i(p)=p_i$, and satisfying Equations \ref{pGH1} and \ref{pGH2} for all $R>0$. Since diam$(X_i/G_i ) \to 0$, there are group elements $g_{1,i}, \ldots , g_{C,i} \in G_i $ such that $d(g_{j,i}p_i, h_iq_j)\to 0$ as $i \to \infty$ for each $j\in \{1, \ldots, C \}$. This implies that for large enough $i$,
\[ A_i^ 2 \subset \{ g \in G_i \vert d(gp_i,p_i)\leq 4 /k \} \subset \bigcup_{j=1}^C g_{j,i} A_i . \]
One can also define
\[ S_i : = \{ g \in G_i \vert d(gp_i,p_i) \leq 3 \cdot \text{diam}(X_i/G_i) \} , \]
which generates $G_i$ by Lemma \ref{short-generators}. Let $N_k:=N(C)\in \mathbb{N}$ be given by Theorem \ref{Margulis}. Since diam$(X_i/G_i) \to 0$, there is $M_k\in \mathbb{N}$ such that for all $i \geq M_k$, we have
\[ \text{diam}(X_i/G_i) \leq \frac{1}{4kN_k}.\]
This immediately implies $S_i^{N_k} \subset A_i$. Hence there are subgroups $ G_i^{\prime} (k) \leq G_i$ which satisfy, by Lemma \ref{diameter-subgroup},
\begin{itemize}
\item diam$(X_i/(G_i^{\prime}(k)))\leq 3 N_k \cdot $ diam$ (X_i/G_i) $
\item If $i \geq M_k$, then all $g \in \left( G_i^{\prime} (k) \right)^{(N_k)}$ satisfy $d(gp_i,p_i) \leq 8 /k$
\end{itemize}
By inductively repeating this construction with $k \in \mathbb{N}$, we can assume that $N_{k+1} \geq N_k$ and $M_{k+1} \geq M_k$ for all $k$. With this, we define $\Gamma_i : = G_i^{\prime}(k)$, where $k$ is the largest integer such that $i \geq M_k$. To verify the conclusion of the lemma, for any positive $\varepsilon $ we define $N(\varepsilon) : = \max \{N_k,M_k \} $ with $k$ satisfying $\varepsilon k \geq 8$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{PI}:]
Let $\Gamma_i$ be given by Lemma \ref{anil}. By Lemma \ref{proper-geodesic} and Theorem \ref{equivariant-ultralimit}, $X$ is a proper geodesic space and the equivariant ultralimit $\Gamma_{\alpha} \leq Iso (X)$ is a closed subgroup acting transitively. By Lemma \ref{anil}, for any $\varepsilon > 0$, there is $N \in \mathbb{N}$, such that
\[ \Gamma_{\alpha}^{(N)} \subset \{ g \in \Gamma _{\alpha} \vert d(gp,p) \leq \varepsilon \} . \]
Since $\Gamma_{\alpha}$ acts transitively on $X$, Lemma \ref{normal-stabilizers} implies
\begin{equation}\label{nilalpha}
\Gamma_{\alpha}^{(N)} \subset \{ g \in \Gamma_{\alpha} \vert d(g q,q) \leq \varepsilon \text{ for all }q \in X \} .
\end{equation}
By Remark \ref{topology-basis} and Theorem \ref{gleason-yamabe}, there is an open subgroup $\mathcal{O} \leq \Gamma_{\alpha}$ with the property that for any $R , \varepsilon > 0$, there is a compact normal subgroup $K \triangleleft \mathcal{O}$ with $K \subset U_{R, \varepsilon}^{\alpha}$ and such that $\mathcal{O}/K$ is a connected Lie group. Fix $K_0 \triangleleft \mathcal{O}$ a compact normal subgroup satisfying that $ \mathcal{O}/K_0$ is a connected Lie group, and denote by $\pi_{K_0}: \mathcal{O} \to \mathcal{O}/K_0$ the projection. Let $U \subset \mathcal{O}/K_0$ be a small open neighborhood of the identity, such that any subgroup of $\mathcal{O}/K_0$ contained in $U$ is trivial. Since $\pi_{K_0}^{-1}(U) \subset \mathcal{O}$ is an open neighborhood of the identity, there is $\varepsilon > 0 $ such that
\begin{equation}\label{NSSNil}
\{ g \in \mathcal{O} \vert d(gq,q) \leq \varepsilon \text{ for all }q \in X \} \subset \pi_{K_0}^{-1}(U) .
\end{equation}
Let $N(\varepsilon) \in \mathbb{N} $ be given by Lemma \ref{anil}. Then Equations \ref{nilalpha} and \ref{NSSNil} imply that
\[ \left( \mathcal{O}/K_0 \right)^{(N)} = \mathcal{O}^{(N)}/K_0 = \pi_{K_0}\left( \mathcal{O}^{(N)}\right) \subset U , \]
and hence $\mathcal{O}/K_0$ is nilpotent of step $\leq N$. Notice that we also proved that if a compact normal subgroup $K_1 \triangleleft \mathcal{O}$ satisfies that $\mathcal{O} /K_1 $ is a connected Lie group, then $\mathcal{O}/K_1$ is nilpotent.
\begin{center}
\textbf{Claim:} $\mathcal{O}$ is nilpotent of step $\leq N+1$.
\end{center}
By contradiction, assume there is $g \in \mathcal{O}^{(N+1)} $ with $gq \neq q$ for some $q \in X$. We choose $R := 2 d (p,q) +1$, $\varepsilon_0 : = d(gq,q) /2 $, and take a compact normal subgroup $K \triangleleft \mathcal{O}$ with $K \leq U_{R,\varepsilon_0}^{\alpha}$ and such that $\mathcal{O}/K$ is a connected Lie group. By Theorem \ref{glushkov}, $\mathcal{O}/(K \cap K_0)$ is a connected Lie group, and it fits in an exact sequence of connected nilpotent Lie groups
\[ 0 \to K_0/(K \cap K_0) \to \mathcal{O}/ (K \cap K_0 ) \to \mathcal{O} / K_0 \to 1 . \]
The group $K_0/(K\cap K_0)$ is a compact subgroup of the connected nilpotent Lie group $\mathcal{O}/(K \cap K_0)$, so by Lemma \ref{compact-nilpotent-1}, it is central. This implies that
\begin{eqnarray*}
(\mathcal{O}/(K \cap K_0))^{(N+1)} & = & [ (\mathcal{O}/(K \cap K_0) )^{(N)} , \mathcal{O}/(K \cap K_0) ] \\
& \leq & [ K_0/(K \cap K_0 ) , \mathcal{O}/(K \cap K_0) ]\\
& = & 1.
\end{eqnarray*}
This implies that $\mathcal{O}^{(N+1)} \leq K \cap K_0 \leq K \subset U_{R, \varepsilon_0}^{\alpha} $, but by construction, there is $g \in \mathcal{O}^{(N+1)} \backslash U_{R, \varepsilon_0}^{\alpha} $, which is a contradiction, proving the claim.
It remains to prove that $\mathcal{O}$ acts freely and transitively on $X$, showing that $X \cong \mathcal{O}$. The fact that it acts transitively follows from Theorem \ref{berestovskii-open}. To prove that the action is free, for $q \in X$ consider $\mathcal{O}_q : = \{ g \in \mathcal{O} \vert gq=q \}$.
\begin{center}
\textbf{Claim:} $\mathcal{O}_p$ is a central subgroup of $\mathcal{O}$.
\end{center}
Assume there are $g \in \mathcal{O}_p$, $h \in \mathcal{O}$, with $[g,h] \neq e$. Then there are $R, \varepsilon$ such that $[g,h]$ is not in $ U_{R, \varepsilon}^{\alpha}$, and a compact normal subgroup $K \triangleleft \mathcal{O}$ with $K \subset U_{R, \varepsilon }^{\alpha}$ and such that $\mathcal{O}/K$ is a connected Lie group. The image of $\mathcal{O}_p$ in $\mathcal{O}/K$ is a compact subgroup of the connected nilpotent Lie group $\mathcal{O}/K$, hence central by Lemma \ref{compact-nilpotent-1}. This gives $[g,h] \in K \subset U_{R, \varepsilon}^{\alpha}$, contradicting the choice of $R$ and $\varepsilon$, and proving the claim.
By Lemma \ref{normal-stabilizers}, $\mathcal{O}_p$ is trivial. Also, since $\mathcal{O}$ acts transitively on $X$, then for any $q \in X$, there is $h \in \mathcal{O}$ with $hp=q$. This implies that
\[ \mathcal{O}_q = h \mathcal{O}_p h^{-1} = \mathcal{O}_p = \{ Id_X \} . \]
\end{proof}
\section{Semi-locally-simply-connected groups}\label{PartII}
In this section, we prove the second part of Theorem \ref{MAIN}, consisting of the following result.
\begin{theorem}\label{PII}
\rm Let $X_i$ be a sequence of almost homogeneous spaces, converging in the pointed Gromov--Hausdorff sense to a space $X$. If $X$ is semi-locally-simply-connected, then $X$ is a Lie group equipped with an invariant Finsler or sub-Finsler metric.
\end{theorem}
Due to the following result of Valerii Berestovskii (\cite{BerII}, Theorem 3), all we need to show is that $X$ is a Lie group.
\begin{theorem}\label{BerestovskiiFinsler}
\rm (Berestovskii) Let $Y$ be a proper geodesic space whose isometry group acts transitively. If $Y$ is homeomorphic to a topological manifold, then its metric is given by a Finsler or a sub-Finsler structure.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{PII}:]
By Theorem \ref{PI}, $X$ is a connected nilpotent group. By Corollary \ref{glushkov-corollary}, if $X$ is not a Lie group, then it contains a sequence of compact subgroups $K_1 \geq K_2 \geq \ldots $ with
\begin{equation}\label{trivial-intersection}
\bigcap_{j=1}^{\infty} K_j = \{ e \}
\end{equation}
and such that $H_j : = X/K_j$ is a connected nilpotent Lie group.
\begin{center}
\textbf{Case 1:} The connected component of $K_j/K_{j+1}$ is non-trivial for only finitely many $j$.
\end{center}
Let $j_0$ be such that $K_j/K_{j+1} $ is discrete for all $j \geq j_0$. Let $\delta > 0 $ be small enough so that $B^{H_{j_0}} (e, \delta) $ contains no non-trivial subgroups, and the inclusion $B^{H_{j_0}} (e, \delta) \to H_{j_0}$ has no content. By Lemma \ref{FinalCover} and induction, the balls $B^{H_j } (e,\delta)$ contain no non-trivial subgroups for $j \geq j_0$. From Equation \ref{trivial-intersection}, there is $M \in \mathbb{N}$ such that for all $\ell \geq M$, we have $K_{\ell} \subset B^{X}(p, \delta)$, and
\[ K_{\ell} / K_{\ell + 1} \subset B^{H_{\ell + 1} }(e, \delta ) . \]
This implies $K_{\ell} = K _{\ell + 1}$ for $\ell \geq M$, and $X= H_M$.
\begin{center}
\textbf{Case 2:} The connected component of $K_j/K_{j+1}$ is non-trivial for infinitely many $j$.
\end{center}
Since $X$ is semi-locally-simply-connected, there is $\delta > 0$ such that the inclusion $B^X(p , \delta ) \to X $ has no content. By our assumption, there is $j \in \mathbb{N}$ with $ K_j \subset B^X(p, \delta / 12) $, and the connected component of $K_j/K_{j+1} $ is non-trivial. By Lemma \ref{LongHomo} and Corollary \ref{ShortLoopNT}, there is a non-contractible loop $\gamma_0: [0,1] \to H_{j+1}$ based at $e$, with length$(\gamma_0)\leq \delta / 4$. A contradiction will arise by generating from this loop a small non-contractible loop in $X$.
Since $H_{j+1}$ is a Lie group, there is $ 0 < \varepsilon < \delta /4 $ such that if two closed curves in $H_{j+1}$ are at uniform distance less than $\varepsilon$, then they are homotopic to each other. Choose $\ell \in \mathbb{N}$ large enough so that $K_{\ell}\subset B^X(p,\varepsilon / 2)$ and let $f: H_{\ell} \to H_{j+1} $ be the natural map, which by Lemma \ref{Submer} is a metric submersion. Using Lemma \ref{HoriLift}, we get a curve $\tilde{\gamma}_1: [0,1] \to H_{\ell}$ with $\tilde{\gamma}_1(0) = e$, $f \circ \tilde{\gamma}_1 = \gamma_0$, and length$(\tilde{\gamma}_1) \leq \delta / 4$. Since $f(\tilde{\gamma}_1(1) ) = e$, we have $\tilde{\gamma}_1 (1) \in K_{j+1}/K_{\ell} $. For each $m \in \mathbb{N}$, define the curve $\tilde{\gamma}_m : [0,1]\to H_{\ell}$ as
\[\tilde{\gamma}_m(t) : = \left[ \tilde{\gamma}_1(1) \right]^{m-1} \tilde{\gamma}_1(t). \]
Observe that $\tilde{\gamma}_{m}(1) = \tilde{\gamma}_{m+1}(0)$ for each $m$, so we can define the curves $\beta_m : = \tilde{\gamma}_1 \ast \ldots \ast \tilde{ \gamma }_m$.
Let $m_0 $ be the smallest integer such that $\beta_{m_0}(1) $ lies in the connected component of the identity of $ K_{j+1}/K_{\ell}$, and let $\beta : [0,1]\to K_{j+1}/K_{\ell}$ be a (not necessarily rectifiable) curve from $\beta_{m_0} (1) $ to $e$. Then $\beta_{m_0}\ast \beta$ is a curve whose image lies in $B^{H_{\ell}}(e, \delta / 3)$, and the composition $f \circ (\beta_{m_0 } \ast \beta)$ is homotopic to $\gamma_0 \ast \ldots \ast \gamma_0$ ($m_0$ times). Since $\pi_1 (H_{j+1} ) $ has no torsion, $f \circ (\beta_{m_0} \ast \beta)$ is non-contractible in $H_{j+1}$. By Lemma \ref{ShortLoopNTI}, there is a loop $\tilde{\gamma} $ in $H_{\ell}$ based at $e$ with length$(\tilde{\gamma}) \leq \delta $ such that $f \circ \tilde{ \gamma} $ is non-contractible in $H_{j+1}$.
By Lemma \ref{Submer}, the quotient map $\rho : X \to H_{\ell}$ is a metric submersion, and hence by Lemma \ref{HoriLift}, there is a curve $c_1 : [0,1] \to B^X(p, \delta ) $ with $c_1(0)=p$, and $\rho \circ c_1 = \tilde{\gamma}$. Let $c_2: [0,1] \to X$ be a minimizing path from $c_1(1)$ to $p$. Then the curve $c: =c_1 \ast c_2$ is a loop in $ B^X(p, \delta )$ such that
\[ d_U ( (f \circ \rho \circ c ),( f \circ \rho \circ \tilde{c}_1); [0,1] ) \leq \varepsilon , \]
for some suitable reparametrization $\tilde{c}_1 $ of $c_1$. This implies that $f \circ \rho \circ c$ is non-contractible in $H_{j+1} $, and the composition $B^X(p, \delta) \to X \to H_j$ has non-trivial content, which is a contradiction.
\end{proof}
\section{Discrete groups with uniform doubling}\label{Nilsection}
As stated in the Summary, in Sections \ref{Nilsection} to \ref{Rootsection} we prove the following theorem, which is the last part of Theorem \ref{MAIN}.
\begin{theorem}\label{PIII}
\rm Let $(X_i, p_i)$ be a sequence of almost homogeneous spaces, converging in the pointed Gromov--Hausdorff sense to a Lie group $X$. Then for large enough $i$ there are subgroups $\Lambda_i \leq \pi_1(X_i)$ with surjective morphisms
\begin{equation*}
\Lambda _i \to \pi_1 (X).
\end{equation*}
\end{theorem}
The first step is to obtain the following refinement of Lemma \ref{anil}.
\begin{lemma}\label{anilLie}
\rm Under the hypotheses of Theorem \ref{PIII}, there are discrete groups of isometries $\Gamma_i \leq Iso (X_i)$ with diam$(X_i/\Gamma_i)\to 0$, and $N \in \mathbb{N}$, satisfying that for each $\varepsilon > 0$, there is $M = M(\varepsilon ) \in \mathbb{N} $ such that for all $i \geq M$, and $g \in \Gamma_i ^{(N)}$, we have $d(gp_i,p_i ) < \varepsilon $.
\end{lemma}
To prove Lemma \ref{anilLie}, we need a result by Alexander Nagel, Elias Stein, and Stephen Wainger about how small balls behave in manifolds with invariant metrics (\cite{NSW}, Section 3).
\begin{theorem}\label{small-balls}
\rm (Nagel--Stein--Wainger) Let $Y$ be a manifold equipped with a Finsler or sub-Finsler metric, and assume $ Iso (Y)$ acts transitively. Then the Hausdorff dimension of $Y$ is an integer, and the corresponding Hausdorff measure $\mu$ on $Y$ is positive on open sets and finite on compact sets. Furthermore, there is a constant $C>0$ such that for all $q \in Y$, $s \in (0, 1]$, one has
\[ \dfrac{\mu ( B(q, 10s) )}{ \mu ( B(q, s))} \leq C .\]
\end{theorem}
\begin{lemma}\label{CCDoubl}
\rm Let $Y$ be a manifold equipped with a Finsler or sub-Finsler metric, and assume $ Iso (Y)$ acts transitively. Then there is a constant $C > 0$ such that for every $q\in Y$, $r \in (0, 1]$, there are elements $q_1, \ldots , q_m \in Y$, with $m \leq C$ such that
$$ B(q, 4r) \subset \bigcup_{j=1}^m B(q_j , r ). $$
\end{lemma}
\begin{proof}
For $q\in Y$, $r \in (0, 1]$, pick a maximal set $q_1, \ldots , q_m \in Y$ such that
\begin{center}
$q_1, \ldots , q_m \in B(q, 4r)$
\end{center}
and the family of balls
\begin{center}
$ \left\{ B\left( q_j, \frac{r}{2} \right) \right\}_{j=1}^m $
\end{center}
is disjoint. By Theorem \ref{small-balls}, $m \leq C$ for some $C$ independent of $r$. If the balls $\{ B(q_j , r ) \}_{j=1}^m$ do not cover $B(q, 4r) $, then there is $q^{\prime} \in B(q, 4r ) $ with
$$ \min_{j=1, \ldots, m} d(q^{\prime }, q_j) > r, $$
contradicting the maximality of the family $\{q_1, \ldots , q_m \}.$
\end{proof}
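The bound $m \leq C$ used above can be spelled out as follows (a routine packing estimate, included here for convenience). Since $Iso (Y)$ acts transitively, $\mu ( B(q^{\prime}, s))$ is independent of $q^{\prime}$. The balls $B(q_j, r/2)$ are disjoint and contained in $B(q, 9r/2) \subset B \left( q, 10 \cdot \frac{r}{2} \right)$, so Theorem \ref{small-balls} applied with $s = r/2 \in (0,1]$ gives
$$ m \cdot \mu \left( B \left( q, \frac{r}{2} \right) \right) = \sum_{j=1}^{m} \mu \left( B \left( q_j , \frac{r}{2} \right) \right) \leq \mu \left( B \left( q, 10 \cdot \frac{r}{2} \right) \right) \leq C \cdot \mu \left( B \left( q, \frac{r}{2} \right) \right) , $$
and hence $m \leq C$.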
\begin{proof}[Proof of Lemma \ref{anilLie}:]
Just like in the proof of Lemma \ref{anil}, we construct discrete subgroups $G_i^{\prime}(k)\leq Iso (X_i) $, and integers $N_k, M_k$ such that
\begin{itemize}
\item diam$(X_i/(G_i^{\prime}(k)))\leq 3 N_k \cdot $ diam$ (X_i/G_i) $
\item If $i \geq M_k$, then all $g \in \left( G_i^{\prime} (k) \right)^{(N_k)}$ satisfy $d(gp_i,p_i) \leq 8 /k$
\end{itemize}
However, by Theorem \ref{BerestovskiiFinsler} and Lemma \ref{CCDoubl}, $C$ does not depend on $k$, and hence $N : = N_k$ does not depend on $k$ either. We define $\Gamma_i : = G_i^{\prime}(k)$, where $k$ is the largest integer such that $i \geq M_k$. To verify the conclusion of the lemma, for any positive $\varepsilon $ we take $M(\varepsilon) : = M_k $ with $k$ satisfying $\varepsilon k \geq 8$.
\end{proof}
\section{Almost translational behavior}\label{FreeSection}
Let $\Gamma_i$ be the sequence of groups given by Lemma \ref{anilLie}, and let $\Gamma_{\alpha}$ be its equivariant ultralimit, which by Theorem \ref{MZHil}, is a Lie group. With $N \in \mathbb{N}$ given by Lemma \ref{anilLie}, the normal subgroup $\Gamma_{\alpha}^{(N)}$ is contained in the subgroup $\{ g \in \Gamma_{\alpha } \vert gp=p\}$, so by Lemma \ref{normal-stabilizers}, $\Gamma_{\alpha}^{(N)}$ is trivial.
By Theorem \ref{berestovskii-open}, the connected component of the identity $\Gamma_0$ also acts transitively. The group $ \{ g \in \Gamma_0 \vert gp=p \} $ is compact, so by Lemma \ref{compact-nilpotent-1}, it is central in $\Gamma_0$. Thus Lemma \ref{normal-stabilizers} implies that $ \{ g \in \Gamma_0 \vert gp=p \} $ is trivial, and we can identify $\Gamma_0 \cong X $. In this section, we use commutator estimates similar to the ones in (\cite{BK}, \cite{GrAF}) to prove that $\Gamma_{\alpha}$ is connected, and from that we deduce that the groups $\Gamma_i$ act, in some sense, almost by translations.
\begin{proposition}\label{alpha=x}
\rm $\Gamma_{\alpha}$ is connected.
\end{proposition}
\begin{proof}
Since $\Gamma_0$ acts transitively on $X$, each connected component of $\Gamma_{\alpha}$ does too. This implies that $\Gamma_{\alpha}$ is connected if and only if the group
\[ \Gamma_p : = \{ g \in \Gamma_{\alpha} \vert gp=p \} \]
is trivial. To study this group, we require a result by Enrico Le Donne and Alessandro Ottazzi about isometries of nilpotent Lie groups with invariant metrics (\cite{LeDO}, Theorem 1.2).
\begin{theorem}\label{subFInf}
\rm (Le Donne--Ottazzi) Let $G$ be a nilpotent Lie group equipped with an invariant Finsler or sub-Finsler metric, and $f : G \to G$ an isometry. Then $f$ is smooth, and uniquely determined by its first order data at the identity ($f(e)$, $d_ef$).
\end{theorem}
By Theorem \ref{subFInf}, the compact group $\Gamma_p$ consists of diffeomorphisms. This implies that there is a $\Gamma_p$-invariant inner product $\langle \cdot , \cdot \rangle_0$ on $T_pX$. Then for each $g \in \Gamma_p$ there is an $\langle \cdot , \cdot \rangle_0$-orthonormal basis
$$ a_1, b_1, \ldots , a_{k_1} , b_{k_1}, c_1, \ldots , c_{k_2}, d_1 , \ldots , d_{k_3} \in T_pX $$
and angles
$$\theta_1, \ldots , \theta_{k_1} \in \mathbb{S}^1 \backslash \{ 1 \}$$
such that the derivative $d_pg : T_pX \to T_pX$ satisfies:
\begin{eqnarray*}
d_pg (a_j) &=& \cos \theta_j a_j + \sin \theta_j b_j, \\
d_pg (b_j) &= &-\sin \theta_j a_j + \cos \theta_j b_j, \\
d_pg (c_j)& =& -c_j ,\\
d_pg(d_j ) & = & d_j.
\end{eqnarray*}
Assume by contradiction that there exists $g \in \Gamma_p \backslash \{ Id_X \}$. Then, again by Theorem \ref{subFInf}, $d_pg \neq Id_{T_pX}$, so $k_1 + k_2 > 0$. Let $d_0$ denote the left invariant Riemannian metric in $X$ given by $\langle \cdot , \cdot \rangle_0$ and $d_U$ denote the uniform distance with respect to $d_0$. We deal first with the case $k_1 > 0 $. By the Baker--Campbell--Hausdorff formula, for every $\varepsilon > 0 $ there is $\delta > 0$ such that
\begin{equation}\label{Taylor1}
d_U\left( L_{\exp (\delta a_1)} , \exp \circ ( + \delta a_1) \circ \left( \exp^{-1} \right); B_{d_0}(p, 100 N 2^N \delta) \right) < \varepsilon \delta
\end{equation}
and
\begin{equation}\label{Taylor2}
d_U \left( g, \exp \circ \left( d_pg \right) \circ \left( \exp^{-1} \right) ; B_{d_0}( p , 100 N 2^N \delta) \right) < \varepsilon \delta .
\end{equation}
By Equations \ref{Taylor1} and \ref{Taylor2}, and repeated applications of Proposition \ref{UnifProp}, one can find $C^{\prime}>0$ such that the uniform distance between the step $N$ commutator
\[ [\ldots [ L_{\exp(\delta a_1)}, g ], \ldots ], g ] : B_{d_0}^{X}(p, \delta) \to X \]
and
\[ \exp \circ [\ldots [ +\delta a_1, d_pg ], \ldots ], d_p g] \circ \left( \exp^{-1} \right) : B_{d_0}^{X}(p, \delta) \to X \]
is bounded above by $C^{\prime} \varepsilon \delta$. However, by direct computation,
$$d_0 \left( \exp \circ [\ldots [ +\delta a_1, d_pg ], \ldots ], d_pg] \circ \left( \exp^{-1}\right) (p), p \right) = \delta \vert \theta_1 - 1 \vert ^N + o(\delta).$$
So, if $\varepsilon > 0 $ was chosen small enough depending on $C^{\prime}$, as $\delta \to 0$ one obtains
$$[\ldots [ L_{\exp(\delta a_1)}, g ], \ldots ], g](p) \neq p,$$
contradicting the fact that every step $N$ commutator is trivial. The case $k_2 > 0$ is similar, but using $c_1$ instead of $a_1$.
\end{proof}
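The mechanism behind this commutator estimate is transparent in the flat model (an illustrative computation added here, not needed for the proof). Let $X = \mathbb{R}^2$, let $g = R$ be the rotation by an angle $\theta \neq 0$ fixing $p = 0$, and let $L_v$ denote the translation by $v$. Then
$$ [L_v , R ] = L_v R L_{-v} R^{-1} = L_{(I - R)v} , $$
so the step $N$ commutator $[ \ldots [ L_v, R], \ldots , R]$ is the translation by $(I-R)^N v$, which moves the origin by exactly $\vert 1 - e^{i\theta} \vert ^N \vert v \vert \neq 0$. This is the flat analogue of the lower bound $\delta \vert \theta_1 - 1 \vert^N$ appearing in the proof above.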
By Proposition \ref{alpha=x}, $\Gamma_{\alpha} = \Gamma_0 \cong X$. This means that we can assume the action of $\Gamma_{\alpha}$ on $X$ is given by left multiplications. In order to translate this information back into the sequence $\Gamma_i$, we need to consider almost morphisms from the groups $\Gamma_i$ to $X$ and to $Iso(X)$. We define the \textit{holonomy maps} $t : \Gamma_i \to X$ and $\hat{t}: \Gamma _i \to Iso (X)$ as
\[ t(g):= f_i (g( h_i (p))) ,\text{ } \hat{t}(g) : = L_{t(g)}. \]
By the above discussion, for any stable sequence $g_i \in \Gamma_i$, its ultralimit $g_{\alpha} \in \Gamma_{\alpha}$ coincides with
\[ \lim\limits_{i\to \alpha} \left( f_i \circ g_i \circ h_i \right) =L_{\lim\limits_{i\to \alpha} t(g_i)} . \]
Besides that, for any bounded sequence $x_i \in X$, one has
\[ \lim\limits_{i\to \alpha} L_{x_i} =L_{\lim\limits_{i\to \alpha} x_i} . \]
Thus we obtain, using Proposition \ref{UnifProp} repeatedly, the following results.
\begin{proposition}\label{holonomy1}
\rm For any stable sequence $g_i \in \Gamma_i$, the sequence $ \left( f_i \circ g_i \circ h_i \right) $ is ultraequivalent to the sequence $\hat{t}(g_i)$.
\end{proposition}
\begin{proposition}\label{Holohomo}
\rm For any pair of stable sequences $g_i , g_i ^{\prime} \in \Gamma_i$, the sequence $ \hat{t}(g_ig_i^{\prime})$ is ultraequivalent to the sequence $\hat{t}(g_i)\hat{t}(g_i^{\prime}) . $
\end{proposition}
\begin{corollary}\label{holonomy2}
\rm For any pair of stable sequences $g_i, g_i^{\prime} \in \Gamma_i , $ we have,
$$ \lim\limits_{i\to \alpha } \hat{t} (g_ig_i^{\prime}) = \lim\limits_{i\to \alpha } \hat{t} (g_i) \hat{t} (g_i^{\prime}) = \lim\limits_{i\to \alpha } \hat{t} (g_i) \lim\limits_{i\to \alpha } \hat{t} (g_i^{\prime}) . $$
\end{corollary}
\section{Getting rid of the torsion}\label{Torsection}
Now we want to identify and get rid of the torsion elements of $\Gamma_i$. Unfortunately, for $g \in \Gamma_i $, the point $t(g)$ being close to, or even equal to, $p$ may not imply that high powers of $g$ do not ``escape''. This happens because the definition of $t: \Gamma_i \to X$ contains an ``error'' coming from the fact that $f_i$ and $h_i$ are not actual isometries. To deal with the torsion elements, we use the \textit{escape norm} and techniques from \cite{BGT}.
Since $X$ is locally contractible, there is $\varepsilon_0 \in (0,1)$ such that every loop of length $\leq 10^6 \varepsilon_0$ is nullhomotopic. Let $B $ be a small open convex symmetric set in the Lie algebra $\mathfrak{g} $ of $\Gamma_{\alpha}$ such that
\[ \exp(B )\subset B(p, \varepsilon_0) . \]
Fix $R > 10^6\varepsilon_0 ,$ and define the sets
\begin{eqnarray*}
\Theta_i & : = & \{ g \in \Gamma_i \vert d(gp_i , p_i ) \leq R \} \\
\hat{ A} _i & : = & \{ g \in \Theta_i \vert t(g) \in \exp (B) \} ,\\
\hat{S}_i & := & \{ g \in \Theta_i \vert t(g) \in \exp \left( B / 10^5 C^3 \right) \} , \\
A_i & := & \hat{A}_i \cup \hat{A}_i^{-1} ,\\
S_i & : = & \hat{S}_i \cup \hat{S}_i^{-1},
\end{eqnarray*}
where $C \in \mathbb{N}$ was obtained from Lemma \ref{CCDoubl}.
\begin{definition}\label{approxgr}
\rm Let $A$ be a finite symmetric subset of a multiplicative set and $C > 0$. We say that $A$ is a $C$\textit{-approximate group} if $A^2$ can be covered by $\leq C$ left translates of $A$.
\end{definition}
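To fix ideas, here is a standard illustrative example (not taken from the discussion above): in the additive group $\mathbb{Z}$, the symmetric set $A := \{ -N, \ldots , N \}$ is a $2$-approximate group for every $N \in \mathbb{N}$, since
\begin{center}
$ A^2 = \{ -2N, \ldots , 2N \} \subset (A - N) \cup (A + N) ,$
\end{center}
so $A^2$ is covered by two translates of $A$.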
\begin{definition}
\rm Let $A $ be a $C$-approximate group. We say that $A$ is a \textit{strong} $C$-approximate group if there is a symmetric set $S \subset A$ satisfying the following:
\begin{itemize}
\item $\left( \{ asa^{-1} \vert a \in A^4, s \in S\} \right) ^{10^3C^3} \subset A$.
\item If $g, g^2, \ldots , g^{1000} \in A^{100}$, then $g \in A$.
\item If $g, g^2 , \ldots , g^{10^6 C^3} \in A$, then $g \in S$.
\end{itemize}
\end{definition}
By the Baker--Campbell--Hausdorff formula, Proposition \ref{holonomy1}, and Corollary \ref{holonomy2}, we see that if $B$ was chosen small enough, for $i$ sufficiently close to $\alpha$, we have
\begin{center}
$\{ asa^{-1} \vert a \in A_i^4, s \in S_i \} \subset S_i^2,$
\end{center}
and all three conditions in the definition of a strong $C$-approximate group hold.
\begin{lemma}\label{Strong}
\rm For $i$ sufficiently close to $\alpha$, the set $A_i$, together with the set $S_i$, is a strong $C$-approximate group.
\end{lemma}
\begin{definition}
\rm Let $A $ be a subset of a multiplicative set $G$. For $g\in G$, we define the escape norm as
\begin{center}
$\Vert g \Vert_{A} : = \inf \left\{ \dfrac{1}{m+1} \bigg| \text{ } e, g, g^2, \ldots , g^m \in A \right\}$.
\end{center}
\end{definition}
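As a simple illustration (a standard example, independent of the construction above): for $A = \{ -N, \ldots , N \} \subset \mathbb{Z}$ and $k \in \mathbb{Z} \backslash \{ 0 \}$, one has $0, k, 2k, \ldots , mk \in A$ precisely when $m \vert k \vert \leq N$, so
\begin{center}
$\Vert k \Vert_A = \dfrac{1}{\left\lfloor N / \vert k \vert \right\rfloor + 1}$,
\end{center}
which is comparable to $\vert k \vert / N$ when $\vert k \vert \leq N$. In particular, $\Vert k \Vert_A = 0$ only for $k = 0$: the escape norm measures how quickly the powers of an element leave $A$.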
In strong approximate groups, the escape norm satisfies several remarkably useful properties, which Breuillard, Green, and Tao call the ``Gleason lemmas''.
\begin{theorem} \label{Gleason}
\rm (Gleason--Breuillard--Green--Tao) Let $A$ be a strong $C$-approximate group. Then for $g_1, g_2, \ldots , g_n \in A^{10}$, we have
\begin{itemize}
\item $\Vert g_1 \Vert_A = \Vert g_1 ^{-1} \Vert _A$.
\item $\Vert g_1 ^k \Vert _A \leq \vert k \vert \Vert g_1 \Vert_A $.
\item $\Vert g_2g_1g_2^{-1} \Vert_A \leq 10^3 \Vert g_1 \Vert_A$.
\item $\Vert g_1g_2 \cdots g_n \Vert_A \leq O_C\left( \sum_{j=1}^n \Vert g_j \Vert_A \right) $.
\item $\Vert [g_1,g_2]\Vert _A \leq O_C\left( \Vert g_1 \Vert_A \Vert g_2 \Vert_A \right)$.
\end{itemize}
\end{theorem}
\begin{proof}
The first three properties are immediate. We refer the reader to (\cite{BGT}, Section 8) for a proof of the other two properties.
\end{proof}
Lemma \ref{Strong} and Theorem \ref{Gleason} imply that for $i$ sufficiently close to $\alpha$,
$$ W_i := \{ g \in A_i \vert \Vert g \Vert _{A_i}=0 \} $$
is a subgroup of $\Gamma_i$ normalized by $A_i$. By Lemma \ref{short-generators}, for $i$ sufficiently $\alpha$-large, $ W_i $ is a normal subgroup of $\Gamma_i$ such that for any sequence $w_i \in W_i$, one has
$$ \lim\limits_{i \to \alpha} t(w_i ) =p. $$
Therefore, by Lemma \ref{normal-stabilizers}, the quotient map $X_i \to X_i/W_i$ is an $\varepsilon_i$-approximation with $\varepsilon_i \to 0$ as $i \to \alpha$, and
\begin{equation}\label{quotient-gh-close}
\lim\limits_{i \to \alpha} d_{GH} \left( X_i, X_i/W_i \right) =0.
\end{equation}
The quotients $Z_i := \Gamma_i/W_i $ are particularly useful to us since they satisfy the no-small-subgroup property. Let $Y_i := \hat{\pi} (A_i)$, where $\hat{\pi} : \Gamma_i \to Z_i$ is the standard quotient map. For $[g] \in Y_i \backslash \{ e \},$ we have $\Vert g \Vert_{A_i}\neq 0$, and therefore $g^m $ is not in $A_i^2 \supset A_i W_i $ for some $m>0$. This implies that $ \hat{\pi} (g^m)= [g]^m $ does not belong to $Y_i$. In other words, every non-trivial element in $Y_i$ eventually ``escapes'' from $Y_i$. We still have the map
\begin{center}
$\overline{t}:\hat{\pi}(\Theta_i) \to X = \Gamma_{\alpha}$
\end{center} given by
\begin{center}
$\overline{t}\left( [g]\right) := t(g)$.
\end{center}
Of course, to make this map well defined, we have to choose one representative from each class in $ \hat{\pi}(\Theta_i) $. However, different choices of representatives only change the value of $\overline{t}$ by an error which goes to $0$ as $i \to \alpha$. More precisely, if one considers two sequences $g_i, g_i^{\prime} \in \Theta_i $ with $\hat{\pi}(g_i) = \hat{\pi}(g_i^{\prime}) $ for $\alpha$-large enough $i$, then there is a sequence $w_i \in W_i$ satisfying $ g_i = g_i^{\prime} w_i $ for $\alpha$-large enough $i$, and by Corollary \ref{holonomy2},
\[ \lim\limits_{i \to \alpha} \hat{t} ( g_i ) = \lim\limits_{i \to \alpha} \hat{t} ( g_i ^{\prime}) \hat{t} ( w_i) = \lim\limits_{i \to \alpha} \hat{t} ( g_i ^{\prime}). \]
Let $\tilde{X} $ be the universal cover of $X$. By our choice of $B$, for $\alpha$-large enough $i$, and $g \in Y_i^8$,
$$\overline{t}( g ) \in (\exp (B))^{10} \subset B^X(p, 10 \varepsilon_0) \cong B^{\tilde{X}}(e, 10 \varepsilon_0) ,$$
and we can think of $\overline{t} $ as a map from $ Y_i^8 $ to $\tilde{X}$. We will denote this map by $\overline{t}_{\tilde{X}} : Y^8_i \to \tilde{X}$.
\section{The nilprogressions}\label{Nilprogsection}
In this section, we apply a short basis procedure by Breuillard, Green, and Tao to find nice subsets in the local groups $Y_i$.
\begin{definition}
\rm Let $A_i$ be a sequence of finite subsets of multiplicative sets. If there is a $C>0$ such that $A_i$ are local $C$-approximate groups for $i $ sufficiently close to $\alpha$, we say that the algebraic ultraproduct $A= \lim_{i \to \alpha} A_i $ is an \textit{ultra approximate group}. If for $\alpha$-large enough $i$, the approximate groups $A_i$ do not contain non-trivial subgroups, we say that $A$ is an \textit{NSS} (no small subgroups) ultra approximate group.
For subsets $A^{\prime}_i \subset A_i^4$ with the property $(A^{\prime}_i)^4 \subset A_i^4$, we say that the algebraic ultraproduct $A^{\prime}=\lim_{i \to \alpha} A^{\prime}_i $ is a \textit{sub-ultra approximate group} of $A$ if it is an ultra approximate group, and there is a constant $C^{\prime}\in \mathbb{N}$ such that $A_i$ can be covered by $C^{\prime}$ many translates of $A_i^{\prime}$ for $i$ sufficiently close to $\alpha$.
\end{definition}
Let $Y$ be the algebraic ultraproduct $\lim_{i \to \alpha} Y_i$. By the discussion in the previous section, it is an NSS ultra approximate group. Consider the map
\begin{center}
$\tilde{t} : Y^8 \to X$
\end{center}
given by the metric ultralimit
\begin{center}
$\tilde{t}( \{ g_i \} ) := \lim\limits_{i \to \alpha} \overline{t}(g_i)$.
\end{center}
Corollary \ref{holonomy2} implies that $\tilde{t} $ is a homomorphism, but moreover, it is a \textit{good model}, as defined by Ehud Hrushovski \cite{Hr}.
\begin{definition}
\rm Let $A=\lim_{i \to \alpha} A_i$ be an ultra approximate group. A \textit{good Lie model} for $A$ is a connected local Lie group $L$, together with a morphism $\sigma:A^8 \to L $ satisfying:
\begin{itemize}
\item There is an open neighborhood $U_0 \subset L$ of the identity with $U_0 \subset \sigma (A) $ and $\sigma^{-1}(U_0) \subset A$.
\item $\sigma (A)$ is precompact.
\item For $F \subset U \subset U_0$ with $F $ compact and $U$ open, there is an algebraic ultraproduct $A^{\prime}= \lim_{i \to \alpha} A_i^{\prime}$ of finite sets $A_i^{\prime} \subset A_i$ with $\sigma^{-1}(F ) \subset A^{\prime} \subset \sigma^{-1}(U)$.
\end{itemize}
\end{definition}
\begin{definition}
\rm Let $B$ be a local group, $u_1, u_2, \ldots , u_r \in B$, and $N_1, N_2, \ldots , N_r $ $\in \mathbb{R}^+$. The set $P(u_1, \ldots , u_r ; N_1, \ldots , N_r)$ is defined as the set of words in the $u_i$'s and their inverses such that the number of appearances of $u_i$ and $u_i^{-1}$ is not more than $N_i$. We say that $P(u_1, \ldots , u_r ; N_1, \ldots , N_r)$ is well defined if every word in it is well defined in $B$. When that is the case, we call it a \textit{progression of rank} $r$ (a progression of rank $0$ is defined to be the trivial subgroup). We say a progression $P(u_1, \ldots , u_r ; N_1, \ldots , N_r)$ is a \textit{nilprogression in $C_0$-regular form} for some $C_0>0$ if it also satisfies the following properties:
\begin{itemize}
\item For all $1 \leq i \leq j \leq r$, and all choices of signs, we have
\begin{center}
$ [ u_i^{\pm 1} , u_j^{\pm 1} ] \in P \left( u_{j+1} , \ldots , u_r ; \dfrac{C_0N_{j+1}}{N_iN_j} , \ldots , \dfrac{C_0N_r}{N_iN_j} \right). $
\end{center}
\item The expressions $ u_1 ^{n_1} \ldots u_r^{n_r} $ represent distinct elements as $n_1, \ldots , n_r$ range over the integers with $\vert n_1 \vert \leq N_1/C_0 , \ldots , \vert n_r \vert \leq N_r/C_0$.
\end{itemize}
For a nilprogression $P$ in $C_0$-regular form, and $\varepsilon \in (0,1)$, it is easy to see that $P( u_1, \ldots , u_r ; \varepsilon N_1, \ldots , \varepsilon N_r )$ is also a nilprogression in $C_0$-regular form. We denote it by $\varepsilon P$. We define the \textit{thickness} of $P$ as the minimum of $N_1, \ldots , N_r$ and we denote it by thick$(P)$. The set $\{ u_1 ^{n_1 }\ldots u_r^{n_r} \vert \vert n_i \vert \leq N_i/C _0 \}$ is called the \textit{grid part of }$P$, and is denoted by $G(P)$.
Let $P_i$ be a sequence of sets. If for $\alpha$-large enough $i$, $P_i$ is a nilprogression of rank $r$ in $C_0$-regular form for some $r \in \mathbb{N}$, $C_0>0$, independent of $i$, we say that the algebraic ultraproduct $ P = \lim_{i \to \alpha} P_i$ is an \textit{ultra nilprogression of rank $r$ in $C_0$-regular form}. We denote $\lim_{i \to \alpha} \varepsilon P_i$ as $\varepsilon P$. If $(\text{thick}(P_i))_i$ is unbounded, we say that $P$ is a \textit{non-degenerate ultra nilprogression}. The algebraic ultraproduct $G(P) :=\lim_{i \to \alpha} G(P_i)$ is called the \textit{grid part of }$P$.
\end{definition}
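A guiding example (standard, and recorded here only for illustration): let $\Gamma$ be the discrete Heisenberg group generated by $u_1, u_2$, with $u_3 := [u_1, u_2]$ central. Then for every $N \in \mathbb{N}$, the set
\begin{center}
$P(u_1, u_2, u_3 ; N, N, N^2)$
\end{center}
is a nilprogression in $C_0$-regular form for an absolute constant $C_0$: the only non-trivial commutator condition is $[u_1^{\pm 1}, u_2^{\pm 1}] = u_3^{\pm 1} \in P \left( u_3 ; \frac{C_0 N^2}{N \cdot N} \right)$, and the normal form in the Heisenberg group shows that the elements $u_1^{n_1} u_2^{n_2} u_3^{n_3}$ with $\vert n_1 \vert , \vert n_2 \vert \leq N/C_0$ and $\vert n_3 \vert \leq N^2/C_0$ are distinct. Its thickness is $N$, so the corresponding ultraproduct is a non-degenerate ultra nilprogression.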
The main result of this section is the following theorem, which is slightly stronger than (\cite{BGT}, Theorem 9.3).
\begin{theorem}\label{Nilprog}
\rm Let $A=\lim_{i \to \alpha} A_i$ be an NSS ultra approximate group. Assume there is a good Lie model $\sigma: A^8 \to L$. Then $A^4$ contains a non-degenerate ultra nilprogression $P$ of rank $r := \dim(L)$ in $C_0$-regular form, with the property that for all standard $\varepsilon \in (0,1)$, there is an open set $U_{\varepsilon} \subset L$ with $\sigma^{-1}(U_{\varepsilon}) \subset G(\varepsilon P)$.
\end{theorem}
\begin{proof}
The proof proceeds by induction on $r$. In the case $r=0$, $L$ is a trivial group, and necessarily $U_0 = L$. The first property of being a good model implies that $ A^8 = \sigma^{-1} (L)=\sigma^{-1}(U_0) \subset A $. Hence for $\alpha$-large enough $i$, $A_i$ is a group, which is trivial by the NSS property, and there is nothing to show. In order to prove the induction step with $r \geq 1$, we are going to follow the construction performed in (\cite{BGT}, Section 9), and use the results they provide.
Let $\hat{B}$ be a small open convex symmetric set in $\mathfrak{ l}$, the Lie algebra of $L$. Let $A^{\prime \prime \prime} \subset A^{\prime \prime} \subset A^{\prime } \subset A$ be sub ultra approximate groups of $A$ such that
\begin{center}
$ \sigma ^{-1} (\exp (\hat{B} )) \subset A^{\prime} \subset \sigma^{-1} ( \exp ((1.001) \hat{B}) ), $
$ \sigma ^{-1} (\exp (\delta \hat{B})) \subset A^{\prime \prime } \subset \sigma^{-1} ( \exp ((1.001) \delta \hat{B}) ) , $
$ \sigma ^{-1} (\exp ( \delta \hat{B} /10)) \subset A^{\prime \prime \prime } \subset \sigma^{-1} ( \exp ((1.001) \delta \hat{B}/10) ), $
\end{center}
where $\delta \in (0,1)$ will be chosen later. Notice that if $\hat{B}$ was chosen small enough, then $A^{\prime} $, $A^{\prime \prime}$, $A^{\prime \prime\prime} $ are strong ultra approximate groups.
Let $u \in A^{\prime } \backslash \{ e \}$ be an element minimizing $ \Vert u \Vert _{A^{\prime}}$ (in this setting, $\Vert \cdot \Vert_{A^{\prime}}$ is a nonstandard real number). Then, by Theorem \ref{Gleason}, if $\delta$ was chosen small enough, for all $x \in (A^{\prime \prime})^{10}$ we have $\Vert x \Vert_{A^{ \prime}} \leq 100 \delta $, and
\begin{center}
$ \Vert [u, x] \Vert _{A^{\prime}} = O\left( \Vert u \Vert _{A^{\prime}} \Vert x \Vert _{A^{\prime}} \right) < \Vert u \Vert _{A^{\prime}} $.
\end{center}
Since $\Vert u \Vert_{A^{\prime}}$ was minimal, $u $ commutes with every element in $(A^{\prime \prime })^{10}$. Consequently, if we define
\[Z := \{ u^n \vert \vert n \vert \leq 1/ \Vert u \Vert _{A^{\prime}} \} ,\]
then every element of $Z$ will commute with every element of $(A^{\prime \prime })^{10}$. Since $\left( A^{\prime \prime} \right)^6$ is well defined, by Theorem ``local group quotient'' we can form the quotients $A^{\prime \prime} /Z : = \lim_{i \to \alpha}(A^{\prime \prime }_i/Z_i) $ and $A^{\prime \prime \prime } /Z : = \lim_{i \to \alpha}(A^{\prime \prime \prime }_i/Z_i) $.
\begin{proposition}
\rm (Breuillard--Green--Tao) The image $\sigma (Z)$ is of the form $\phi ( [-1,1] )$, with $\phi (t) := \exp (tv)$, for some non-zero $v$ in the center of $\mathfrak{ l}$. By choosing a small open neighborhood of the identity $U\subset L$, we can form the quotient $U/\sigma (Z) $, and by choosing $\delta $ small enough, one can guarantee that
\begin{itemize}
\item $U/\sigma (Z)$ is a connected local Lie group of dimension $r-1$.
\item $A^{\prime\prime} / Z$ and $A^{\prime\prime\prime}/Z$ are NSS ultra-approximate groups.
\item $\overline{\sigma} : \left(A^{\prime\prime }/Z \right)^8 \to U / \sigma (Z)$ is a good model.
\end{itemize}
\end{proposition}
By this proposition, we can apply the induction hypothesis to $A^{\prime \prime \prime}/Z$ and conclude that there is a non-degenerate nilprogression $\overline{P}(\overline{u}_1, \ldots , \overline{ u}_{r-1} ; \overline{N_1}, \ldots , $ $\overline{N}_{r-1} )$ in $\overline{C}$-regular form such that $\overline{P} \subset (A^{\prime \prime \prime}/ Z)^4 \subset A^{\prime \prime}/Z$ with the property that for all standard $\varepsilon > 0$, there is an open set $V_{\varepsilon} \subset U / \sigma (Z)$ with $( \overline{\sigma})^{-1} (V_{\varepsilon}) \subset G(\varepsilon \overline{P})$. Let $\pi: A^{\prime \prime} \to A^{\prime \prime }/Z$ be the natural projection. To properly ``lift'' $\overline{P}$ to $A^{\prime \prime}$ through $\pi$, we need the following result.
\begin{lemma}\label{lifting}
\rm (Breuillard--Green--Tao) For every $w \in A^{\prime \prime }/ Z$, there is $w^{\prime} \in A^{\prime \prime}$ with $\pi (w^{\prime}) =w$, and $\Vert w^{\prime } \Vert _{A^{\prime \prime} } = O( \Vert w \Vert_{A^{\prime \prime}/Z} )$.
\end{lemma}
Construct $P : =P( u_1, \ldots , u_{r};$ $ N_1, \ldots , N_{r})$, where $u_j \in A^{\prime \prime}$ is a lift of $\overline{u}_j$ that minimizes $\Vert u_j \Vert _{A^{\prime \prime }}$ for $j=1, \ldots , r-1$,
\begin{center}
$N_j : = \delta_0 \overline{N}_j$ for $j=1, \ldots , r-1$,
$u_{r} : = u$, $N_{r }: = \delta_0 / \Vert u \Vert _{A^{\prime \prime}}$.
\end{center}
\begin{proposition}
\rm (Breuillard--Green--Tao) If $\delta_0 >0$ is small enough, then $P$ is a non-degenerate ultra-nilprogression in $C_0$-regular form for some $C_0 > 0$.
\end{proposition}
The only thing left to prove is that for all $\varepsilon > 0$, there is an open $U_{\varepsilon} \subset L$ such that $\sigma ^{-1} (U_{\varepsilon}) \subset G( \varepsilon P)$. By contradiction, assume that for some $\varepsilon > 0$, the element $x$ of $A^{\prime \prime }\backslash G( \varepsilon P)$ with minimal norm $\Vert x \Vert _{A^{\prime \prime}}$ satisfies $\sigma (x) = e_L$. If that is the case, $ \overline{\sigma } (x) = e_{L/ \sigma (Z)}$, and by our induction hypothesis, for all standard $\eta >0$, we have $\pi (x) \in G( \eta \overline{P})$. Therefore $x = u_1 ^{n_1} \ldots u_{r} ^{n_r}$, with
\begin{center}
$\vert n_j \vert \leq \eta \overline{N_j} /\overline{C}$ for $j=1, \ldots , r-1$, $\vert n_{r} \vert \leq 1/ \Vert u_{r } \Vert_{A^{\prime }} $.
\end{center}
Also, from Theorem \ref{Gleason}, Lemma \ref{lifting}, and the fact that $\overline{N}_j = O(1/\Vert \overline{u}_j \Vert _{A^{\prime \prime }/Z})$, we get
\begin{eqnarray*}
\Vert u_1^{n_1} \ldots u_{r-1}^{n_{r-1}} \Vert _{A^{\prime \prime}} &= & O \left( \sum_{j=1}^{r-1} \Vert u_j ^{n_j} \Vert_{A^{\prime \prime}} \right)\\
& = & O\left( \sum_{j=1}^{r-1} \vert n_j \vert \Vert u_j \Vert_{A^{\prime \prime}} \right) \\
& = & O \left( \eta \sum_{j=1}^{r-1} \overline{N_j} \Vert \overline{u}_j \Vert_{A^{\prime \prime}/Z} \right) \\
& = & O (\eta).
\end{eqnarray*}
Since $\eta$ was arbitrary, we obtain that $\Vert u_1^{n_1} \ldots u_{r-1}^{n_{r-1}} \Vert _{A^{\prime \prime}}$ is infinitesimal. Using once more Theorem \ref{Gleason},
\begin{center}
$\Vert u_{r} ^{n_{r}} \Vert _{ A^{\prime \prime} } = O( \Vert x \Vert_{A^{\prime \prime}} + \Vert u_1^{n_1} \ldots u_{r-1}^{n_{r-1}} \Vert _{A^{\prime \prime}} ) $.
\end{center}
This implies that $ \Vert u_{r } ^{n_{r}} \Vert _{ A^{\prime \prime} } $ is infinitesimal and $\vert n_{r} \vert = o (N_{r} ) \leq \varepsilon N_{r} / C_0 $. Also, since $\eta$ was arbitrary, $\vert n_j \vert \leq \varepsilon N_j /C_0$ for $j=1, \ldots , r-1$. Therefore $x \in G(\varepsilon P)$, which is a contradiction.
\end{proof}
\begin{remark}\label{strong-basis}
\rm Note that from the proof of Theorem \ref{Nilprog}, the group $L$ is nilpotent and we can assume the basis $\{ l_1, \ldots , l_r \} $ of $ \mathfrak{ l}$ given by
$$ \exp ( l_j ) = \sigma \left( u_j ^{\left\lfloor N_j /C_0 \right\rfloor } \right) \text{ for }j= 1, \ldots , r, $$
is a strong Malcev basis.
\end{remark}
Let $r := \dim (X)$. We conclude this section with the following proposition, obtained by applying Theorem \ref{Nilprog} to the good model $\tilde{t}: Y ^8\to X $.
\begin{proposition}\label{nilprog2}
\rm There is $C_0>0,$ and for each $i \in \mathbb{N}$, elements $u_{1,i}, \ldots , u_{r,i} \in Y_i$, $N_1 (i), \ldots , N_r (i) \in \mathbb{N}$ with $ N_j (i) \xrightarrow{\alpha} \infty $ for each $j = 1, \ldots , r ,$ such that for $i$ close enough to $\alpha$,
\begin{center}
$P_i:=P(u_{1,i} , \ldots , u_{r,i} ; N_1 (i) , \ldots , N_r (i))$
\end{center}
is a nilprogression in $C_0$-regular form contained in $Y_i^4$, and such that for every $\varepsilon , \delta \in (0, 1)$, there are $\varepsilon_0, \delta_0 >0$ with
\[ \{ g \in \hat{\pi}(\Theta_i) \vert d( \overline{t}(g), p ) < \delta _0 \} \subset G(\varepsilon P_i) \]
\[ G(\varepsilon_0 P_i) \subset \{ g \in \hat{\pi}(\Theta_i) \vert d( \overline{t}(g), p ) < \delta \}. \]
\end{proposition}
\section{Malcev theory}\label{Globalsection}
In this section, we use the nilprogressions constructed in Proposition \ref{nilprog2} to build a sequence of simply connected nilpotent Lie groups $H_i$ converging in a suitable sense to $\tilde{X}$. The proof of the following lemma can be found in (\cite{BGT}, Appendix C).
\begin{lemma}\label{DoublingNilprogression}
\rm Let $r\in \mathbb{N}$ and $C_0>1$. Then there exist $M_0, \delta_0 > 0$ such that, for any nilprogression $P$ of rank $r$ in $C_0$-regular form with thick$(P) > M_0$, we have
$$ G(\delta_0 P )^{2} \subset G(P) . $$
\end{lemma}
Let $\mathfrak{g} $ be the Lie algebra of $\tilde{X}$, and let $\delta_0 >0$ be given by Lemma \ref{DoublingNilprogression} with $r$ and $C_0$ from Proposition \ref{nilprog2}. By Remark \ref{strong-basis}, we can assume the basis $\{ v_1, \ldots , v_r \} \subset \mathfrak{g}$ given by
\begin{equation}\label{malcev-basis-construction}
\exp ( v_j) := \lim\limits_{i\to \alpha} \overline{t}_{\tilde{X}} \left( u_{j,i}^{\left\lfloor \frac{\delta_0 N_j(i)}{C_0} \right\rfloor} \right) \text{ for each }j = 1, \ldots , r,
\end{equation}
is a strong Malcev basis, and hence the map $\psi : \mathbb{R}^r \to \tilde{X}$ given by
$$ \psi (x_1, \ldots, x_r ) : = \exp (x_1 v_1) \ldots \exp (x_r v_r) $$
is a diffeomorphism.
\begin{lemma}\label{Yiren}
\rm The group structure $Q: \mathbb{R}^r \times \mathbb{R}^r \to \mathbb{R}^r$ given by
$$ Q(x,y) := \psi^{-1} ( \psi (x) \psi (y) ) $$
is a quasilinear polynomial of degree $\leq d(r)$. We will denote the group $(\mathbb{R}^r, Q)$ by $H$.
\end{lemma}
\begin{proof}
By the Baker--Campbell--Hausdorff formula, after identifying $\mathfrak{g}$ with $\mathbb{R}^r$ via the basis $\{ v_1, \ldots , v_r \}$, the map $\mathbb{R}^r \times \mathbb{R}^r \to \mathbb{R}^r$ given by
$$ (x,y) \to \exp ^{-1} ( \psi (x) \psi (y) ) $$
is a polynomial map of degree $\leq r$. Also, by Theorem \ref{simply-connected-nilpotent}, the map $\mathbb{R}^r \to \mathbb{R}^r$ given by
$$x \to \psi ^{-1} ( \exp (x) )$$
is a polynomial map of degree bounded by a number depending only on $r$. Therefore the composition is also a polynomial map of degree $\leq d(r)$. Quasilinearity is immediate from the definition.
\end{proof}
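For concreteness (an illustrative computation in the simplest non-abelian case): if $\mathfrak{g}$ is the Heisenberg Lie algebra with basis $\{ v_1, v_2, v_3 \}$, where $[v_1, v_2] = v_3$ and $v_3$ is central, then the Baker--Campbell--Hausdorff formula gives
\begin{center}
$\psi(x)\psi(y) = \exp \left( (x_1+y_1) v_1 \right) \exp \left( (x_2+y_2) v_2 \right) \exp \left( (x_3+y_3 - x_2 y_1) v_3 \right)$,
\end{center}
so
\begin{center}
$Q(x,y) = \left( x_1+y_1 ,\ x_2+y_2 ,\ x_3+y_3 - x_2 y_1 \right)$,
\end{center}
a quasilinear polynomial of degree $2$.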
\begin{definition}
\rm Let $P(v_1 , \ldots , v_r ; N_1 , \ldots , N_r)$ be a nilprogression in $C_0$-regular form. Assume $N_j\geq C_0$ for each $j$, and set $\Gamma_P$ to be the abstract group generated by $\gamma_1, \ldots , \gamma_r$ with relations $[\gamma _j , \gamma_k] = \gamma_{k+1}^{\beta_{j, k}^{ k+1}} \ldots \gamma_r^{\beta_{j, k}^r} $ whenever $j < k $, where $[v _j , v_k] = v_{k+1}^{\beta_{j, k}^{ k+1}} \ldots v_r^{\beta_{j, k}^{ r}} $ and $\vert \beta_{j, k}^{l } \vert \leq \frac{C_0N_l}{N_j N_k}$. We say that $P$ is \textit{good} if $\Gamma_P$ is isomorphic to a lattice in a simply connected nilpotent Lie group, and each element of $\Gamma _P$ has a unique expression of the form
\begin{center}
$ \gamma_1^{n_1}\ldots \gamma_r^{n_r}, $ with $n_1, \ldots, n_r \in \mathbb{Z}$.
\end{center}
\end{definition}
The proof of the following result can be found in (\cite{Cle}, Section 4.2).
\begin{theorem}\label{Embed}
\rm (Malcev) Let $r \in \mathbb{N}$, $C_0 >0 $, and let $P(v_1 , \ldots , v_r ;$ $ N_1 , \ldots , N_r)$ be a nilprogression in $C_0$-regular form in a group $\Gamma$. If thick$(P)$ is large enough depending on $r$ and $C_0$, then $P$ is good and the map $v_j \to \gamma_j$ extends to an embedding $ \sharp : G(P) \to \Gamma_P.$ For $A \subset G(P)$, we will denote its image under this embedding by $A^{\sharp}$.
Furthermore, there is a quasilinear polynomial group structure (see Definition \ref{quasil})
$$Q: \mathbb{R}^{r}\times \mathbb{R}^r \to \mathbb{R}^r $$
of degree $\leq d(r)$ such that the multiplication in $\Gamma_P$ is given by
\begin{center}
$ \gamma_1^{n_1}\ldots \gamma_r^{n_r} \gamma_1^{m_1}\ldots \gamma_r^{m_r} = \gamma_1^{(Q(n,m))_1}\ldots \gamma_r^{(Q(n,m))_r} $ for $n,m \in \mathbb{Z}^r.$
\end{center}
$Q$ is called the \textit{Malcev polynomial} of $P$, and $(\mathbb{R}^r, Q)$ the \textit{Malcev Lie group} of $P$. $\Gamma_P$ is isomorphic, via $\gamma_j \to e_j$, to the lattice $(\mathbb{Z}^r, Q\vert_{\mathbb{Z}^r \times \mathbb{Z}^r} )$.
\end{theorem}
By Proposition \ref{nilprog2} and Theorem \ref{Embed}, for $\alpha$-large enough $i$, the nilprogressions $P_i$ are good with Malcev polynomials $\hat{Q}_i$. Let $N_0\in \mathbb{N}$ be given by Lemma \ref{PreWell} with the degree $d (r)$ given by Lemma \ref{Yiren} and Theorem \ref{Embed}, and define $\xi : \mathbb{N}\to \mathbb{N}$ as
$$ \xi (n) : = N_0 \left\lfloor \frac{\delta_0 n}{C_0N_0} \right\rfloor. $$
For $i\in \mathbb{N}$, consider $ \kappa_i : \mathbb{R}^r \to \mathbb{R}^r $ given by
$$ \kappa_i(x_1, \ldots , x_r) : = (x_1 \xi (N_1 (i)), \ldots , x_r \xi (N_r (i))) .$$
Let $H_i$ be the group $(\mathbb{R}^r , Q_i)$, where $Q_i : \mathbb{R}^r \times \mathbb{R}^r \to \mathbb{R}^r $ is the group structure given by
$$ Q_i(x,y) : = \kappa_i^{-1} (\hat{Q}_i ( \kappa_i(x) ,\kappa_i(y)) ) . $$
The following is the main result of this section.
\begin{proposition}\label{Goooooood}
\rm The sequence of quasilinear polynomial group structures $Q_i$ converges well to $Q$.
\end{proposition}
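Continuing the Heisenberg illustration (with hypothetical parameters $N_1(i) = N_2(i) = i$ and $N_3(i) = i^2$, chosen only for concreteness): the Malcev polynomial is $\hat{Q}_i(n,m) = (n_1+m_1, n_2+m_2, n_3+m_3 - n_2 m_1)$, and conjugating by $\kappa_i$ gives
\begin{center}
$Q_i(x,y) = \left( x_1+y_1 ,\ x_2+y_2 ,\ x_3+y_3 - \dfrac{\xi(i)^2}{\xi(i^2)}\, x_2 y_1 \right)$.
\end{center}
Since $\xi(n) = N_0 \left\lfloor \frac{\delta_0 n}{C_0 N_0} \right\rfloor \sim \frac{\delta_0 n}{C_0}$, the coefficient $\xi(i)^2 / \xi(i^2)$ converges to $\delta_0 / C_0$ as $i \to \infty$, so in this example the polynomials $Q_i$ converge coefficientwise to a Heisenberg group structure on $\mathbb{R}^3$.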
We define $\Omega \subset \mathbb{R}^r$ as
$$ \Omega : = \left\{ -1, \ldots,\frac{-1}{N_0}, 0, \frac{1}{N_0}, \ldots , 1 \right\} ^{\times r} . $$
Consider the maps $\omega_{\alpha} : \Omega \to \tilde{X}$ and $\omega_i : \Omega \to \Gamma_{P_i}$ defined as $\omega_{\alpha} : = \psi \vert _{ \Omega }$ and
\begin{center}
$ \omega_i ( x_1, \ldots, x_r) := \gamma_1^{ \xi (x_1 N_1(i)) }\ldots \gamma_r^{ \xi (x_r N_r(i)) } . $
\end{center}
We also define maps $\overline{t}^{\flat } : G(P_i )^{\sharp } \to X$ and $\overline{t}^{\flat }_{\tilde{X}} : G(P_i)^{\sharp} \to \tilde{X}$ as
\begin{center}
$ \overline{t}^{\flat} ( x^{\sharp } ) := \overline{t} (x)$ and $ \overline{t}_{\tilde{X}}^{\flat} ( x^{\sharp } ) := \overline{t}_{\tilde{X}} (x).$
\end{center}
Consider the following diagram.
\begin{displaymath}
\begin{tikzcd}[column sep=3em]
\Omega \times \Omega \arrow{r}{ \omega_i } \arrow{d}{ Id} & (G (\delta_0 P_i )^{\sharp})^{\times 2} \arrow{r}{\ast} \arrow{d}{ \overline{t} _{\tilde{X}} ^{\flat}}& G (P_i )^{\sharp} \arrow{d}{ \overline{t} _{\tilde{X}} ^{\flat} } \arrow{r}{\kappa_i^{-1}} & \mathbb{R}^r \arrow{d}{Id} \\
\Omega \times \Omega \arrow{r}{\omega_{\alpha} } & \tilde{X}\times \tilde{X} \arrow{r}{\ast }& \tilde{X} \arrow{r}{ \psi^{-1} } & \mathbb{R}^r
\end{tikzcd}
\end{displaymath}
The first row of the diagram is the polynomial $Q_i$, while the second row is the polynomial $Q$. Commutativity of the diagram does not hold in general, but it holds in the limit, as the following proposition shows.
\begin{proposition}\label{CD}
\rm For every $x,y \in \Omega$,
\[ \lim\limits_{i \to \alpha} \kappa_i^{-1} \left( \omega_i(x) \omega_i(y) \right) = \psi^{-1} ( \omega_{\alpha}(x)\omega_{\alpha}(y)) .\]
\end{proposition}
\begin{proof}
We will first show that for any $x \in \Omega$, we have
\begin{equation}\label{omegas}
\lim_{ i \to \alpha } \overline{t}_{\tilde{X}}^{\flat }\left( \omega_i (x) \right) = \omega_{\alpha} (x).
\end{equation}
Using Equation \ref{malcev-basis-construction} and Corollary \ref{holonomy2}, we obtain
\begin{eqnarray*}
\omega_{\alpha }(x) & = & \exp (x_1 v_1) \ldots \exp (x_r v_r)\\
& = & \lim_{i \to \alpha} \overline{t}_{\tilde{X}}^{\flat } \left( \gamma_1^{ \xi ( x_1 N_1 (i)) } \right) \ldots \lim_{i \to \alpha} \overline{t}_{\tilde{X}}^{\flat } \left( \gamma_{r}^{ \xi ( x_r N_r (i)) } \right)\\
& = & \lim_{i \to \alpha} \overline{t}_{\tilde{X}}^{\flat } (\omega_i(x)).
\end{eqnarray*}
We then show that for any sequence $x_i^{\sharp} \in G(P_i)^{\sharp}$, we have
\begin{equation}\label{PreCD}
\psi \left( \lim\limits_{i \to \alpha} \kappa_i^{-1}(x_i^{\sharp} ) \right) = \lim\limits_{i\to \alpha} \overline{t}_{\tilde{X}}^{\flat} ( x_i^{\sharp} ) .
\end{equation}
We can decompose the sequence $x_i = u_{1,i}^{p_{1,i}}\ldots u_{r,i}^{p_{r,i}} \in G(P_i)$ as
\begin{center}
$ x_i = x_{1,i}\ldots x_{r,i} , $ with $x_{j,i} = u_{j,i}^{p_{j,i}} \in G(P_i)$.
\end{center}
Then, by Equation \ref{malcev-basis-construction} and Corollary \ref{holonomy2},
\begin{eqnarray*}
\lim\limits_{i \to \alpha}\overline{t}_{\tilde{X}} (x_i ) & = & \lim\limits_{i \to \alpha} \overline{t}_{\tilde{X}} (x_{1,i}) \ldots \lim\limits_{i \to \alpha} \overline{t}_{\tilde{X}} (x_{r,i})\\
& = & \exp \left( \lim\limits_{i \to \alpha} \dfrac{C_0p_{1,i}}{\delta_0N_1(i)} v_1 \right)\ldots \exp \left( \lim\limits_{i \to \alpha} \dfrac{C_0p_{r,i}}{\delta_0N_r(i)} v_r \right)\\
& = & \psi \left( \dfrac{C_0}{\delta_0} \lim\limits_{i \to \alpha} \left( \dfrac{p_{1,i}}{N_1(i)}, \ldots , \dfrac{p_{r,i}}{N_r(i)} \right) \right) \\
& = & \psi \left( \lim\limits_{i \to \alpha} \kappa_i^{-1} ( x_i ) \right) .
\end{eqnarray*}
From equations \ref{omegas} and \ref{PreCD}, we conclude
\begin{eqnarray*}
\omega_{\alpha}(x) \omega_{\alpha}(y) & = & \lim_{i \to \alpha}\overline{t}_{\tilde{X}}^{\flat} ( \omega_i (x) ) \lim_{i \to \alpha}\overline{t}_{\tilde{X}}^{\flat} ( \omega_i (y) ) \\
& = & \lim_{i \to \alpha} \overline{t}_{\tilde{X}}^{\flat} (\omega_i (x) \omega_i (y))\\
& = & \psi \left( \lim\limits_{i \to \alpha} \kappa_i^{-1} \left( \omega_i(x) \omega_i(y) \right) \right).
\end{eqnarray*}
\end{proof}
Proposition \ref{Goooooood} follows from Lemma \ref{PreWell} and Proposition \ref{CD}. By Lemma \ref{Well}, the sequence of Lie algebras of the groups $H_i$ converges well to $\mathfrak{g}$. Consider the morphisms $ \natural : \Gamma_{P_i} \to H_i $ and $\natural : \tilde{X} \to H$ given by $g^{\natural } : = \kappa_i^{-1}(g)$ for $g \in \Gamma_{P_i}$, and $g^{\natural}: = \psi^{-1}(g) $ for $g \in \tilde{X}$. These identifications allow us to realize all these group structures on the same underlying set $\mathbb{R}^r$.
\section{Almost torsion elements}\label{Rootsection}
In this section we finish the proof of Theorem \ref{PIII} by establishing the following result.
\begin{proposition}\label{final}
\rm For $\alpha$-large enough $i$, there are groups $\Lambda_i \leq \pi_1(X_i)$ with surjective morphisms $\Lambda_i \to \pi_1(X)$.
\end{proposition}
Recall that $Z_i := \Gamma_i / W_i$, and define $q_i : = W_i (p_i) \in X_i / W_i $. Let $\eta > 0$ be small enough so that for $\alpha$-large enough $i$,
$$ S_i : = \{ g \in Z_i \vert d(g(q_i) , q_i ) < \eta \} \subset G(\delta_0 P_i). $$
Let $\tilde{Z}_i$ be the abstract group generated by $S_i$, with relations
\begin{center}
$ s= s_1s_2 \in \tilde{Z}_i $ whenever $ s,s_1, s_2 \in S_i$ and $s=s_1s_2$ in $Z_i$.
\end{center}
Notice that for $\alpha$-large enough $i$, we have $\tilde{Z}_i = \Gamma_{P_i}$, and by Theorem \ref{monodromy}, there is a Galois $(\eta /3)$-wide covering map $\tilde{X}_i^{\prime} \to X_i/W_i$ whose Galois group is the kernel of the canonical map $\Phi_i : \Gamma_{P_i} \to Z_i$. Proposition \ref{final} will follow from the following result.
\begin{proposition}\label{final2}
\rm For $\alpha$-large enough $i$, there are subgroups $\Lambda_{i}^{\prime} \leq Ker (\Phi_i)$ isomorphic to $\pi_1(X)$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{final}:]
By Theorem \ref{sormani-cover-copy} and Equation \ref{quotient-gh-close}, for $\alpha$-large enough $i$, there are Galois covering spaces $\tilde{X}_i \to X_i$ with Galois groups $Ker (\Phi_i)$. By Lemma \ref{cover-characterization}, for $\alpha$-large enough $i$, there are surjective morphisms $\rho _i : \pi_1 (X_i) \to Ker (\Phi_i)$. Proposition \ref{final} is obtained by defining $\Lambda_i : = \rho^{-1}_i( \Lambda_i^{\prime} ) \leq \pi_1(X_i)$, where $\Lambda_i^{\prime}$ are given by Proposition \ref{final2}.
\end{proof}
\begin{remark}\label{phi-locally-injective}
\rm Notice that if $i$ is $\alpha$-large enough, then for every $g \in \Gamma_{P_i} $ with
$$d ( \Phi_i (g)( q_i) , q_i ) < \eta ,$$
there is a unique $w \in G( \delta_0 P_i )^{\sharp}$ such that $gw \in Ker (\Phi_i)$. Also, $Ker(\Phi_i ) \cap G(P_i)^{\sharp } = \{ e_{\Gamma_{P_i}} \}$.
\end{remark}
\begin{proof}[Proof of Proposition \ref{final2}:]
Let $\Phi : \tilde{X} \to X$ denote the canonical projection. Then by Lemma \ref{discrete-normal-central}, $Ker (\Phi) \cong \pi_1(X) $ is central, and since $\tilde{X}$ has no torsion, $Ker (\Phi )$ is free abelian. Let $\{ \lambda_1 , \ldots , \lambda_{\ell} \} $ be a basis of $Ker (\Phi )$ as a free abelian group. Pick $M \in \mathbb{N}$ large enough so that the $M$-th roots of the $\lambda_j$'s lie in the ball $B^{\tilde{X}}(e, \eta /2)$. For each $j \in \{ 1, \ldots , \ell \}$ pick a sequence
$$ \lambda _j (i) \in G(P_i)^{\sharp} \subset \Gamma_{P_i} $$
with
$$ \lim\limits_{i \to \alpha} \overline{t}_{\tilde{X}}^{\flat} (\lambda_j (i)) = \dfrac{\lambda_j}{M}. $$
Since $Q_i$ converges well to $Q$, Equation \ref{PreCD} implies that
\begin{equation}\label{well-lambda}
\lim\limits_{i \to \alpha} (\lambda_j (i)^{\natural})^M = \lambda_j^{\natural} .
\end{equation}
Also, for each $j \in \{ 1, \ldots , \ell \}$ we have
\begin{eqnarray*}
\lim\limits_{i \to \alpha} \overline{t} \left( \Phi_i \left( \lambda_j(i) \right) ^M \right) & = & \left( \lim\limits_{i \to \alpha} \overline{t} \left( \Phi_i \left( \lambda_j(i) \right) \right)\right)^M\\
& = & \left( \lim\limits_{i\to \alpha} \overline{t}^{\flat} ( \lambda_j (i) ) \right)^M \\
& = & \Phi \left( \dfrac{\lambda_j}{M} \right)^M\\
& = & e_X.
\end{eqnarray*}
By Remark \ref{phi-locally-injective}, for $\alpha$-large enough $i$, and all $j \in \{ 1 , \ldots , \ell \}$, there are $w_{j,i}\in G( P_i)^{\sharp}$ such that
$$ \lambda_j (i)^M w_{j,i} \in Ker (\Phi_i).$$
Since $Q_i$ converges well to $Q$, we deduce from Equation \ref{well-lambda} that
$$ \lim\limits_{i \to \alpha} \left[ \lambda_j(i) ^M w_{j,i}\right]^{\natural} = \lambda_j^{\natural} , $$
and hence by Lemma \ref{Quasilinear Continuity},
$$ \lim\limits_{i \to \alpha} \log_i \left( \left[ \lambda_j(i) ^M w_{j,i}\right]^{\natural} \right) = \log (\lambda_j^{\natural}) \text{ for each }j \in \{ 1, \ldots , \ell \} , $$
where $\log_i, \log,$ denote the logarithm maps with respect to $H_i$ and $H$, respectively. Therefore for $\alpha$-large enough $i$, the set
$$\left\{ \log_i \left( \left[ \lambda_1(i) ^M w_{1,i}\right]^{\natural} \right) , \ldots , \log_i \left( \left[ \lambda_{\ell}(i) ^M w_{\ell ,i}\right]^{\natural} \right) \right\} $$
is linearly independent. Also, for $j,k \in \{ 1, \ldots , \ell \}$,
\begin{eqnarray*}
\lim\limits_{i \to \alpha} [ \lambda_j (i)^M w_{j,i},\lambda_k(i) ^M w_{k,i} ]^{\natural} &=& \left[ \lim\limits_{i \to \alpha}\lambda_j(i)^Mw_{j,i}, \lim\limits_{i \to \alpha}\lambda_k(i)^Mw_{k,i} \right]^{\natural} \\
& = & \left( \lambda_j\lambda_k\lambda_j^{-1} \lambda_k^{-1} \right)^{\natural} \\
& = & e_{\tilde{X}}^{\natural}.
\end{eqnarray*}
By Remark \ref{phi-locally-injective}, this implies that for $\alpha$-large enough $i$,
$$ [ \lambda_j (i)^M w_{j,i},\lambda_k(i) ^M w_{k,i} ] =e_{\Gamma_{P_i}},$$
and the group
$$ \langle \lambda_1 (i)^M w_{1,i}, \ldots , \lambda_{\ell} (i)^M w_{\ell , i} \rangle \leq Ker \left( \Phi_i \right) $$
is a free abelian group of rank $\ell$, hence isomorphic to $Ker (\Phi)$, and therefore to $\pi_1(X)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{PIII}:] Assume by contradiction that the theorem fails for some sequence $X_i$. After passing to a subsequence, we may assume that for no $i$ is there a subgroup $\Lambda_i \leq \pi_1(X_i)$ with a surjective morphism $\Lambda_i \to \pi_1(X)$. This contradicts Proposition \ref{final} for $\alpha$-large enough $i$.
\end{proof}
| {
"timestamp": "2021-12-01T02:13:58",
"yymm": "2007",
"arxiv_id": "2007.01985",
"language": "en",
"url": "https://arxiv.org/abs/2007.01985",
"abstract": "We say that a sequence of proper geodesic spaces $X_i$ consists of \\textit{almost homogeneous spaces} if there is a sequence of discrete groups of isometries $G_i \\leq Iso(X_i)$ such that diam$(X_i/G_i)\\to 0$ as $i \\to \\infty$.We show that if a sequence $X_i$ of almost homogeneous spaces converges in the pointed Gromov--Hausdorff sense to a space $X$, then $X$ is a nilpotent locally compact group equipped with an invariant geodesic metric.Under the above hypotheses, we show that if $X$ is semi-locally-simply-connected, then it is a nilpotent Lie group equipped with an invariant Finsler or sub-Finsler metric, and for large enough $i$, there are subgroups $\\Lambda_i \\leq \\pi_1(X_i) $ with surjective morphisms $\\Lambda_i\\to \\pi_1(X)$.",
"subjects": "Metric Geometry (math.MG); Group Theory (math.GR)",
"title": "Limits of almost homogeneous spaces and their fundamental groups"
} |
https://arxiv.org/abs/1311.6176 | Inverse questions for the large sieve | Suppose that an infinite set $A$ occupies at most $\frac{1}{2}(p+1)$ residue classes modulo $p$, for every sufficiently large prime $p$. The squares, or more generally the integer values of any quadratic, are an example of such a set. By the large sieve inequality the number of elements of $A$ that are at most $X$ is $O(X^{1/2})$, and the quadratic examples show that this is sharp. The simplest form of the inverse large sieve problem asks whether they are the only examples. We prove a variety of results and formulate various conjectures in connection with this problem, including several improvements of the large sieve bound when the residue classes occupied by $A$ have some additive structure. Unfortunately we cannot solve the problem itself. | \subsection[#1]{\sc #1}}
\newcommand\E{\mathbb{E}}
\newcommand\Z{\mathbb{Z}}
\newcommand\R{\mathbb{R}}
\newcommand\T{\mathbb{T}}
\newcommand\C{\mathbb{C}}
\newcommand\N{\mathbb{N}}
\newcommand\A{\mathscr{A}}
\newcommand\B{\mathscr{B}}
\newcommand\G{\mathbf{G}}
\newcommand\SL{\operatorname{SL}}
\newcommand\Upp{\operatorname{Upp}}
\newcommand\im{\operatorname{im}}
\newcommand\bad{\operatorname{bad}}
\newcommand\bes{\operatorname{bes}}
\newcommand\GL{\operatorname{GL}}
\newcommand\Mat{\operatorname{Mat}}
\newcommand\Alg{\operatorname{Alg}}
\newcommand\Ad{\operatorname{Ad}}
\newcommand\tr{\operatorname{tr}}
\newcommand\dist{\operatorname{dist}}
\newcommand\vol{\operatorname{vol}}
\newcommand\res{\operatorname{res}}
\newcommand\nonres{\operatorname{nonres}}
\newcommand\unif{\operatorname{unif}}
\renewcommand\P{\mathbb{P}}
\newcommand\F{\mathbb{F}}
\newcommand\Q{\mathbb{Q}}
\renewcommand\b{{\bf b}}
\newcommand\g{\mathfrak{g}}
\newcommand\h{\mathfrak{h}}
\newcommand\n{\mathfrak{n}}
\renewcommand\a{\mathfrak{a}}
\newcommand\p{\mathfrak{p}}
\newcommand\q{\mathfrak{q}}
\renewcommand\b{\mathfrak{b}}
\renewcommand\r{\mathfrak{r}}
\newcommand\eps{\varepsilon}
\newcommand\id{{\operatorname{id}}}
\newcommand\st{{\operatorname{st}}}
\newcommand\diam{{\operatorname{diam}}}
\newcommand\proofsymb{{\hfill{\usebox{\proofbox}}\vspace{11pt}}}
\renewcommand{\labelenumi}{(\roman{enumi})}
\newcommand{{{}^*}}{{{}^*}}
\onehalfspace
\parindent 5mm
\parskip 0mm
\begin{document}
\title{Inverse questions for the large sieve}
\author[Green]{Ben Green}
\address{Mathematical Institute\\
Radcliffe Observatory Quarter\\
Woodstock Road\\
Oxford OX2 6GG\\
England }
\email{ben.green@maths.ox.ac.uk}
\author[Harper]{Adam J Harper}
\address{Jesus College \\
Cambridge CB5 8BL\\
England }
\email{A.J.Harper@dpmms.cam.ac.uk}
\begin{abstract}
Suppose that an infinite set $\A$ occupies at most $\frac{1}{2}(p+1)$ residue classes modulo $p$, for every sufficiently large prime $p$. The squares, or more generally the integer values of any quadratic, are an example of such a set. By the large sieve inequality the number of elements of $\A$ that are at most $X$ is $O(X^{1/2})$, and the quadratic examples show that this is sharp. The simplest form of the \emph{inverse large sieve problem} asks whether they are the only examples. We prove a variety of results and formulate various conjectures in connection with this problem, including several improvements of the large sieve bound when the residue classes occupied by $\A$ have some additive structure. Unfortunately we cannot solve the problem itself.
\end{abstract}
\maketitle
\begin{center}\emph{To Roger Heath-Brown on his 60th birthday}\end{center}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
\textsc{Notation.} Most of our notation is quite standard. When dealing with infinite sets $\A$, we write $\A[X]$ for the intersection of $\A$ with the initial segment $[X] := \{1,\dots, X\}$.
Our primary aim in this paper is to study sets $\A$ of integers with the property that the reduction $\A \md{p}$ occupies at most $\frac{1}{2}(p+1)$ residue classes modulo $p$ for all sufficiently large primes $p$. It follows from the large sieve that $|\A[X]| \ll X^{1/2}$ for all $X$ (we will recall the details of this argument below). This is clearly sharp up to the value of the implied constant, as shown by taking $\A$ to be the set of squares or more generally the set of integer values taken by any rational quadratic, that is to say, a quadratic with rational coefficients.
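As a quick numerical sanity check (our own standalone sketch, not part of the argument), one can confirm that for every odd prime $p$ the squares occupy exactly $(p+1)/2$ residue classes modulo $p$ (the $(p-1)/2$ quadratic residues together with $0$), while the number of squares in $[X]$ is $\lfloor X^{1/2} \rfloor$:

```python
import math

def is_prime(n):
    # Trial division; adequate for the small test primes below.
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

for p in [3, 5, 7, 11, 101, 499]:
    assert is_prime(p)
    classes = {x * x % p for x in range(p)}
    # The (p-1)/2 quadratic residues, plus the class of 0.
    assert len(classes) == (p + 1) // 2

# The squares in [X] number exactly floor(sqrt(X)), matching the
# sharpness of the large sieve bound |A[X]| << X^(1/2).
X = 10**6
squares = [n * n for n in range(1, math.isqrt(X) + 1)]
assert len(squares) == math.isqrt(X)
```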
It has been speculated, most particularly by Helfgott and Venkatesh \cite[pp 232--233]{helfgott-venkatesh} and by Walsh \cite{walsh}, that quadratics provide the only examples of sets for which the large sieve bound is essentially sharp. See also \cite[Problem 7.4]{croot-lev}. One might call problems of this type the ``inverse large sieve problem''. Unfortunately, we have not been able to prove any statement of this kind, and our aims here are more modest.
Suppose that $\A \md{p} \subset S_p$ for all sufficiently large primes $p$. Our first set of results consists of improvements to the large sieve bound when $S_p$ looks very much unlike a quadratic set modulo $p$, for example by having some additive structure.
\begin{theorem}\label{thm1.4}
Suppose that for each prime $p \leq X^{1/2}$ one has a set $S_p \subset \Z/p\Z$ of size $(p+1)/2$. Suppose there is some $\delta > 0$ such that, for each $p$, $S_p$ has at least $(\frac{1}{16} + \delta) p^3$ additive quadruples, that is to say quadruples $(s_1,s_2,s_3,s_4)$ with $s_1 + s_2 = s_3 + s_4$. Suppose that $\A \mdlem{p} \subset S_p$ for all $p$. Then $|\A[X]| \ll X^{1/2 - c\delta^2}$, where $c > 0$ is an absolute constant.\end{theorem}
The condition of having at least $(\frac{1}{16} + \delta) p^3$ additive quadruples will {\em not} be satisfied by a generic (e.g. randomly selected) set $S_p \subset \Z/p\Z$ of size $(p+1)/2$, which will have $(\frac{1}{16} + o(1)) p^3$ such quadruples for large $p$. But it is a rather general condition corresponding to $S_p$ being additively structured, and certainly we are not aware of any previous improvements to the large sieve bound under comparably general conditions.
An extreme case of the preceding theorem is that in which $S_p$ is in fact an interval. Here a simple calculation, reproduced later, shows that Theorem \ref{thm1.4} is applicable with the choice $\delta=1/48$, but we can do rather better.
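For the record, the calculation behind the value $\delta = 1/48$ runs as follows: an interval of length $(p+1)/2$ in $\Z/p\Z$ has roughly $\frac{2}{3}\bigl(\frac{p}{2}\bigr)^3 = \frac{1}{12}p^3 = \bigl(\frac{1}{16}+\frac{1}{48}\bigr)p^3$ additive quadruples. A brute-force count (our own illustrative sketch) confirms the lower bound:

```python
def quadruples_mod_p(S, p):
    # Count quadruples (s1,s2,s3,s4) in S^4 with s1+s2 == s3+s4 (mod p),
    # via the representation function r(x) = #{(s1,s2): s1+s2 = x mod p}:
    # the number of quadruples is sum_x r(x)^2.
    r = [0] * p
    for s1 in S:
        for s2 in S:
            r[(s1 + s2) % p] += 1
    return sum(c * c for c in r)

for p in [11, 31, 101]:
    S = list(range((p + 1) // 2))   # an interval of length (p+1)/2
    # Theorem 1.4 then applies with delta = 1/48: at least
    # (1/16 + 1/48) p^3 = p^3/12 additive quadruples.
    assert quadruples_mod_p(S, p) >= p**3 / 12
```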
\begin{theorem}\label{interval-sieve}
Suppose that $\A$ is a set of integers and that, for each prime $p$, the set $\A \mdlem{p}$ lies in some interval $I_p$. Let $\eps > 0$ be arbitrary. Then
\begin{enumerate}
\item If $|I_p| \leq (1 - \eps) p$ for at least a proportion $\eps$ of the primes in each dyadic interval $[Z,2Z]$, then $|\A[X]| \ll_{\eps} (\log \log X)^{C\log(1/\eps)}$, where $C > 0$ is some absolute constant;
\item If $|I_p| \leq \frac{p}{2}$ for all primes, then $|\A[X]| \ll (\log \log X)^{\gamma + o(1)}$, where $\gamma = \frac{\log 18}{\log(3/2)} \approx 7.129$;
\item If $I_p = [\alpha p, \beta p]$ for all primes $p$, for fixed $0 \leq \alpha < \beta < 1$ \textup{(}not depending on $p$\textup{)}, then $|\A| = O_{\beta - \alpha}(1)$;
\item There are examples of infinite sets $\A$ with $|I_p| \leq (\frac{1}{2} + \eps) p$ for all $p$.
\end{enumerate}
\end{theorem}
This improves on the results of an unpublished preprint~\cite{green} by the first author, in which it was shown that one has $|\A[X]| \ll X^{1/3+o(1)}$ under condition (ii).
Theorem \ref{thm1.4} also covers the case in which $S_p$ is an arithmetic progression of length $\frac{1}{2}(p+1)$, where the common difference of this arithmetic progression may depend on $p$. Here again one could apply Theorem \ref{thm1.4} with the choice $\delta=1/48$, but we can also handle this situation with a less restrictive condition on the size of $S_p$.
\begin{theorem}\label{prog-thm}
Let $\eps > 0$. Suppose that $\A$ is a set of integers and that, for each prime $p \leq X^{1/2}$, the set $\A \mdlem{p}$ lies in some arithmetic progression $S_p$ of length $(1-\eps)p$. Then $|\A[X]| \ll_{\eps} X^{1/2 - \eps'}$, where $\eps' > 0$ depends on $\eps$ only.
\end{theorem}
We are not aware of any previous results improving the large sieve bound $X^{1/2}$ when the $S_p$ are arbitrary arithmetic progressions, even for $\eps = 1/2$.
\vspace{11pt}
After proving the foregoing results, we turn to the ``robustness'' of the inverse large sieve problem. The aim of these results is to show that if $|\A \md{p}| \leq \frac{1}{2}(p+1)$ (or if similar conditions hold), if $|\A[X]| \approx X^{1/2}$, and if $\A$ is even vaguely close to quadratic in structure, then it must in fact approximate a quadratic very closely. Our proof methods here lead to some complicated dependencies between parameters, so we do not state and prove the most general result possible, settling instead for a couple of statements that have relatively clean formulations.
The first and main one concerns finite sets. Here, and henceforth in the paper, we say that a rational quadratic $\psi$ has \emph{height at most $H$} if it can be written as $\psi(x) = \frac{1}{d}(ax^2 + bx + c)$ with $a,b,c,d \in \Z$ and $\max(|a|, |b|, |c|, |d|) \leq H$.
\begin{theorem}\label{stability-thm}
Let $X_0 \in \N$, and let $\eps > 0$. Let $X \in \N$ be sufficiently large in terms of $X_0$ and $\eps$, and suppose that $H \leq X^{1/8}$. Suppose that $A,B \subset [X]$ and that $|A \md{p}| + |B \md{p}| \leq p+1$ for all $p \in [X_0, X^{1/4}]$. Then, for some absolute constant $c > 0$, one of the following holds:
\begin{enumerate}
\item \textup{(Better than large sieve)} Either $|A \cap [X^{1/2}]|$ or $|B \cap [X^{1/2}]|$ is $\leq X^{1/4 - c\eps^3}$;
\item \textup{(Behaviour with quadratics)} Given any two rational quadratics $\psi_A,\psi_B$ of height at most $H$, either both $|A \setminus \psi_{A}(\Q)|$ and $|B\setminus \psi_{B}(\Q)|$ are at most $HX^{1/2 - c}$, or else at least one of $|A \cap \psi_{A}(\Q)|$ and $|B \cap \psi_{B}(\Q)|$ is bounded above by $HX^{1/4 + \eps}$.
\end{enumerate}
\end{theorem}
We expect that if the large sieve bound is close to sharp for $A$ and $B$, then there must exist rational quadratics of ``small'' height containing ``many'' points of $A$ and $B$. Together with Theorem \ref{stability-thm}, this provides some motivation for making the following conjecture of the form ``almost equality in the large sieve implies quadratic structure''.
\begin{conjecture}\label{stability-conj}
Let $X_0 \in \N$, and let $\rho > 0$. Let $X \in \N$ be sufficiently large in terms of $X_0$ and $\rho$. Suppose that $A,B \subset [X]$ and that $|A \md{p}| + |B \md{p}| \leq p+1$ for all $p \in [X_0, X^{1/4}]$. Then there exists a constant $c = c(\rho) > 0$ such that one of the following holds:
\begin{enumerate}
\item \textup{(Better than large sieve)} Either $|A \cap [X^{1/2}]|$ or $|B \cap [X^{1/2}]|$ is $\leq X^{1/4 - c}$;
\item \textup{(Quadratic structure)} There are two rational quadratics $\psi_A,\psi_B$ of height at most $X^{\rho}$ such that both $|A \setminus \psi_{A}(\Q)|$ and $|B\setminus \psi_{B}(\Q)|$ are at most $X^{1/2 -c}$.
\end{enumerate}
\end{conjecture}
The contents of Theorem \ref{stability-thm} and of Conjecture \ref{stability-conj} are perhaps a little hard to understand on account of the parameters $H, X, \rho$ and $\eps$. As a corollary we establish the following more elegant statement involving infinite sets.
\begin{theorem}\label{stability-thm-symmetric}
Suppose that $\A$ is a set of positive integers and that $|\A \md{p}| \leq \frac{1}{2}(p+1)$ for all sufficiently large primes $p$. Then one of the following options holds:
\begin{enumerate}
\item \textup{(Quadratic structure)} There is a rational quadratic $\psi$ such that all except finitely many elements of $\A$ are contained in $\psi(\Q)$;
\item \textup{(Better than large sieve)} For each integer $k$ there are arbitrarily large values of $X$ such that $|\A[X]| < \frac{X^{1/2}}{\log^k X}$;
\item \textup{(Far from quadratic structure)} Given any rational quadratic $\psi$, for all $X$ we have $|\A[X] \cap \psi(\Q)| \leq X^{1/4 + o_{\psi}(1)}$.
\end{enumerate}
\end{theorem}
We conjecture that option (iii) is redundant. This is another conjecture of inverse large sieve type, rather cleaner than Conjecture \ref{stability-conj}.
\begin{conjecture}\label{stability-conj-symmetric}
Suppose that $\A$ is a set of positive integers and that $|\A \md{p}| \leq \frac{1}{2}(p+1)$ for all sufficiently large primes $p$. Then one of the following options holds:
\begin{enumerate}
\item \textup{(Quadratic structure)} There is a rational quadratic $\psi$ such that all except finitely many elements of $\A$ are contained in $\psi(\Q)$;
\item \textup{(Better than large sieve)} For each integer $k$ there are arbitrarily large values of $X$ such that $|\A[X]| < \frac{X^{1/2}}{\log^k X}$. In particular, $\liminf_{X \rightarrow \infty} X^{-1/2} |\A[X]| = 0$.
\end{enumerate}
\end{conjecture}
We remark that some very simple properties of rational quadratics are laid down in Appendix \ref{rat-quad-app}. In particular we draw attention to the fact that given a rational quadratic $\psi$ there are further rational quadratics $\psi_1,\psi_2$ such that $\psi_1(\Z) \subset \psi(\Q) \cap \Z \subset \psi_2(\Z)$. In particular, $|\psi(\Q) \cap [X]| \ll_{\psi} X^{1/2}$.
\vspace{11pt}
Our final task in the paper is to show, elaborating on ideas of Elsholtz \cite{elsholtz}, that Conjecture \ref{stability-conj} would resolve the currently unsolved ``inverse Goldbach problem'' of Ostmann \cite[p. 13]{ostmann} (and see also \cite[p. 62]{erdos}). This asks whether the set of primes can be written as a sumset $\A + \B$ with $|\A|, |\B| \geq 2$, except for finitely many mistakes. Evidently the answer should be that it cannot be so written.
\begin{theorem}\label{ils-ostmann}
Assume Conjecture \ref{stability-conj}. Let $\A, \B$ be two sets of positive integers, with $|\A|, |\B| \geq 2$, such that $\A + \B$ contains all sufficiently large primes. Then $\A + \B$ also contains infinitely many composite numbers.
\end{theorem}
We remark that much stronger statements that would imply this should be true, but we do not know how to prove them. For example, it is reasonable to make the following conjecture.
\begin{conjecture}
Let $\delta > 0$. Then if $X$ is sufficiently large in terms of $\delta$, the following is true. Let $A, B \subset [X]$ be any sets with $|A|, |B| \geq X^{\delta}$. Then $A + B$ contains a composite number.
\end{conjecture}
We do not know how to prove this for any $\delta \leq \frac{1}{2}$. If one had it for any $\delta < \frac{1}{2}$, the inverse Goldbach problem would follow.
\vspace{11pt}
The proofs of the above theorems are rather diverse and use the large sieve, Gallagher's ``larger sieve'', and several other tools from harmonic analysis and analytic number theory. The proofs of Theorems \ref{thm1.4} and \ref{prog-thm}, which involve lifting the additive structure of the $S_p$ to additive structure on $\A[X]$, also involve some ideas of additive combinatorial flavour, although we do not need to import many results from additive combinatorics to prove them. With very few exceptions (for example, Lemma \ref{progr-energy} depends on standard Fourier arguments given in detail in Lemma \ref{large-fourier}), Sections 3,4,5,6 and 7 may be read independently of one another.
The situation considered in the majority of this paper, in which $\A \md{p}$ is small for \emph{every} prime $p$ (or at least for every prime $p \leq X^{1/2}$), may seem rather restrictive. It would be possible to adapt our arguments and prove many of our theorems under weaker conditions, and we leave this to the reader who has need of such results. However, it seems possible that any set $\A$ for which $|\A \md{p}| \leq (1 - c)p$ for a decent proportion of primes $p$ and for which $|\A[X]| \geq X^c$ for infinitely many $X$ has at least some ``algebraic structure''. Moreover such statements may well be true in finitary settings, in which $\A$ is restricted to some finite interval $[X]$ and $p$ is only required to range over some (potentially quite small) subinterval of $[X]$. Unfortunately none of our methods come close to establishing such strong results. \vspace{11pt}
\textsc{Acknowledgements.} The authors are very grateful to Jean Bourgain for allowing us to use his ideas in Section \ref{interval-sieve-sec}. The first-named author is supported by ERC Starting Grant 279438 \emph{Approximate Algebraic Structure and Applications}. He is very grateful to the University of Auckland for providing excellent working conditions on a sabbatical during which this work was completed. The second-named author was supported by a Doctoral Prize from the EPSRC when this work was started; by a postdoctoral fellowship from the Centre de Recherches Math\'ematiques, Montr\'eal; and by a research fellowship at Jesus College, Cambridge when the work was completed.
\section{The large sieve and the larger sieve}
\textsc{The large sieve}. Let us begin by briefly recalling a statement of the large sieve bound. The following may be found in Montgomery \cite{montgomery-early}.
\begin{proposition}\label{classicls}
Let $\mathscr{A}$ be a set of positive integers with the property that $\mathscr{A} \mdlem{p} \subset S_p$ for each prime $p$. Then for any $Q,X$ we have the bound
$$ |\mathscr{A}[X]| \leq \frac{X + Q^{2}}{\sum_{q \leq Q} \mu^{2}(q) \prod_{p \mid q} \frac{|S_p^c|}{|S_p|}} \leq \frac{X + Q^2}{\sum_{p \leq Q} \frac{|S_p^c|}{p}}, $$
where $\mu(q)$ denotes the M\"{o}bius function and $S_p^c := (\Z/p\Z) \setminus S_p$.
\end{proposition}
The second bound is a little crude but has the virtue of being simple: we will use it later on. In the particular case that $|S_p| \leq \frac{1}{2}(p+1)$ for all $p$, discussed in the introduction, the first bound implies upon setting $Q := X^{1/2}$ that
$$ |\mathscr{A}[X]| \leq \frac{2X}{\sum_{q \leq X^{1/2}} \mu^{2}(q) \prod_{p \mid q} \frac{p-1}{p+1}} \ll X^{1/2}, $$ as we claimed.
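For concreteness, the following standalone sketch (our own illustration; the parameters $X = 10^4$, $Q = X^{1/2}$ are chosen only for speed) evaluates the first bound of Proposition \ref{classicls} in the case $|S_p| = \frac{1}{2}(p+1)$ and checks that it is indeed of order $X^{1/2}$:

```python
import math

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

X = 10**4
Q = math.isqrt(X)            # Q = X^(1/2)
primes = primes_up_to(Q)

def term(q):
    # mu^2(q) * prod_{p | q} (p-1)/(p+1), i.e. the summand with
    # |S_p^c|/|S_p| = ((p-1)/2) / ((p+1)/2); returns None if mu(q) = 0.
    prod, m = 1.0, q
    for p in primes:
        if p * p > m:
            break
        if m % p == 0:
            m //= p
            if m % p == 0:
                return None  # q is not squarefree
            prod *= (p - 1) / (p + 1)
    if m > 1:                # a single remaining prime factor
        prod *= (m - 1) / (m + 1)
    return prod

denominator = sum(t for q in range(1, Q + 1) if (t := term(q)) is not None)
bound = (X + Q * Q) / denominator
assert bound >= math.isqrt(X)        # consistent with the squares
assert bound <= 20 * math.sqrt(X)    # and of order X^(1/2)
```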
The large sieve may also be profitably applied to ``small sieve'' situations in which $|S_p| = p - O(1)$ (as opposed to ``large sieve'' situations in which $p - |S_p|$ is large). We will need one such result later on, in \S \ref{stab-sec}.
\begin{lemma}\label{small-from-large}
Suppose that $\mathscr{B} \subset \Z$ is a set with the property that $\mathscr{B} \mdlem{p}$ misses $w(p)$ residue classes, for every prime $p$. Suppose that the function $w$ has average value $k$ in the \textup{(}fairly weak\textup{)} sense that $\frac{1}{Z}\sum_{Z \leq p \leq 2Z} w(p) \log p = k + O(\frac{1}{\log^2 Z})$ for all $Z$. Then $|\mathscr{B} \cap [-N, N]| \ll N (\log N)^{-k}$ for all $N$.
\end{lemma}
\begin{proof}
In view of Proposition \ref{classicls}, it would suffice to know that
$$ \sum_{q \leq N^{1/2}} \mu^2(q) \prod_{p | q} \frac{w(p)}{p - w(p)} \gg \log^k N$$
for all large $N$. If we define a multiplicative function $g(n)$, supported on squarefree integers, by $g(p) := w(p)/p$, then it would obviously suffice to know that $\sum_{q \leq N^{1/2}} g(q) \gg \log^k N$ for all large $N$. However there is an extensive theory, dating back to Hal\'{a}sz, Wirsing and others, that gives asymptotics and bounds for sums of multiplicative functions. For example, partial summation and the assumption that $\sum_{Z \leq p \leq 2Z} w(p) \log p = k Z + O(\frac{Z}{\log^2 Z})$ show that $g$ satisfies the conditions of Theorem A.5 of Friedlander and Iwaniec~\cite{opera-cribo}, and therefore
$$ \sum_{q \leq N} g(q) \sim c_{g} \log^k N \;\;\; \text{as} \; N \rightarrow \infty , $$
for a certain constant $c_{g} > 0$.
\end{proof}
\textsc{The larger sieve}. The ``larger sieve'' was introduced by Gallagher \cite{gallagher}. A pleasant discussion of it may be found in chapter 9.7 of Friedlander and Iwaniec \cite{opera-cribo}. We will apply it several times in the paper, and we formulate a version suitable for those applications.
\begin{theorem}\label{larger-sieve}
Let $0 < \delta \leq 1$, and let $Q > 1$. Let $\mathscr{P}$ be a set of primes. For each prime $p \in \mathscr{P}$, suppose that one is given a set $S_p \subset \Z/p\Z$, and write $\sigma_p := |S_p|/p$. Suppose that for each $p \in \mathscr{P}$ there is a set $\A'_p \subset \A$ with $|\A'_p[X]| \geq \delta |\A[X]|$ such that $\A'_p \md{p} \subset S_p$. Then
\[ |\A[X]| \ll \frac{Q}{\delta^2 \sum_{p \in \mathscr{P}, p \leq Q} \frac{\log p}{\sigma_p p} - \log X},\] provided that the denominator is positive.
\end{theorem}
\emph{Remark.} In this paper we will always have $\delta$ at least some absolute constant, not depending on $X$, and very often we will have $\delta \approx 1$.
\begin{proof}
We examine the expression
\[ I := \sum_{\substack{p \in \mathscr{P} \\ p \leq Q}} \sum_{\substack{x,y \in \A[X] \\ x \neq y}} 1_{p | x - y} \log p.\]
On the one hand we have
\[ \sum_{p \in \mathscr{P}} 1_{p | n} \log p \leq \sum_{p} 1_{p | n} \log p \leq \log n,\] and therefore
\[ I \leq |\A[X]|^2 \log X.\]
On the other hand, writing $\A(a,p;X)$ for the number of $x \in \A[X]$ with $x \equiv a \md{p}$, we have
\[ I = \sum_{\substack{p \in \mathscr{P}\\ p \leq Q}} \sum_{a \mdsub{p}} |\A(a,p;X)|^2 \log p - |\A[X]| \sum_{\substack{p \in \mathscr{P}\\ p \leq Q}} \log p.\]
Comparing these facts yields
\begin{equation}\label{to-use} \sum_{\substack{p \in \mathscr{P} \\ p \leq Q}} \sum_{a \mdsub{p}} |\A(a,p;X)|^2 \log p \leq |\A[X]|^2 \log X + O(|\A[X]|Q),\end{equation}
and so of course, since $\A'_p \subset \A$,
\[ \sum_{\substack{p \in \mathscr{P} \\ p \leq Q}} \sum_{a \mdsub{p}} |\A'_p(a,p;X)|^2 \log p \leq |\A[X]|^2 \log X + O(|\A[X]|Q).\]
However by the Cauchy--Schwarz inequality and the fact that $\A'_p \md{p} \subset S_p$ we have
\[ \sum_{a \mdsub{p}} |\A'_p(a,p;X)|^2 \geq \frac{|\A'_p[X]|^2}{\sigma_p p} \geq \delta^2 \frac{|\A[X]|^2}{\sigma_p p}.\]
Summing over $p$ and rearranging, we obtain the result.
\end{proof}
The larger sieve bound can be a little hard to get a feel for, so we give an example. Suppose that $\mathscr{P}$ consists of all primes and that $\sigma_p = \alpha$ for all $p$. Take $\delta = 1$. Then, since $\sum_{p \leq Q} \frac{\log p}{p} = \log Q + O(1)$, the larger sieve bound is essentially
\[ |\A[X]| \ll \frac{Q}{\frac{1}{\alpha} \log Q - \log X}.\]
Taking $Q$ a little larger than $X^{\alpha}$, we obtain the bound $|\A[X]| \ll X^{\alpha + o(1)}$. This beats an application of the large sieve when $\alpha < \frac{1}{2}$, that is to say when we are sieving out a majority of residue classes (hence the terminology ``larger sieve''). However in the type of problems we are generally considering in this paper, where $\alpha = \frac{1}{2}$, we only recover the bound $|\A[X]| \ll X^{1/2 + o(1)}$, as of course we must since nothing has been done to exclude the example where $\A$ is a set of values of a quadratic.
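The trade-off can be seen numerically in the following sketch (our own illustration, with the hypothetical choices $\delta = 1$, $\sigma_p = \alpha = 1/3$ for every prime, and $Q$ a little above $X^{\alpha}$), which checks that the resulting bound beats $X^{1/2}$:

```python
import math

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def larger_sieve_bound(X, Q, alpha, delta=1.0):
    # Q / (delta^2 * sum_{p <= Q} log(p)/(alpha*p) - log X), the larger
    # sieve bound when sigma_p = alpha for every prime p <= Q.
    s = sum(math.log(p) / (alpha * p) for p in primes_up_to(Q))
    denom = delta**2 * s - math.log(X)
    assert denom > 0, "Q is too small for the bound to apply"
    return Q / denom

X, alpha = 10**10, 1 / 3
Q = int(X**0.4)              # slightly larger than X^alpha = X^(1/3)
bound = larger_sieve_bound(X, Q, alpha)
assert bound < math.sqrt(X)  # beats the large sieve bound X^(1/2)
```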
In actual fact one of our three applications of the larger sieve (in the proof of Theorem \ref{thm1.4}) requires an inspection of the above proof, rather than an application of the result itself. This is the observation that when $\sigma_p \approx \frac{1}{2}$ the larger sieve \emph{does} beat the bound $|\A[X]| \ll X^{1/2 + o(1)}$ unless $\A$ satisfies a certain ``uniform fibres'' condition. Recall that if $\A$ is a set of integers then $\A(a,p;X)$ is the number of $x \in \A[X]$ with $x \equiv a \md{p}$.
\begin{lemma}\label{lem1.1}
Let $\kappa > 0$ and $\eta > 0$ be small parameters. Suppose that $\A$ is a set of integers occupying at most $(p+1)/2$ residue classes modulo $p$ for all $p$. Say that $\A[X]$ has {\em $\eta$-uniform fibres above $p$} if
\[ \sum_{a \mdsublem{p}} |\A(a, p;X)|^2 \leq (2 + \eta) |\A[X]|^2/p.\] Let $\mathscr{P}_{\unif}$ be the set of primes above which $\A[X]$ has $\eta$-uniform fibres. Then either $|\A[X]| \leq X^{1/2 - \kappa}$ or else ``most'' fibres are $\eta$-uniform in the sense that
\[ \sum_{\substack{p \leq X^{1/2}, \\ p \notin \mathscr{P}_{\unif}}} \frac{\log p}{p} \leq \frac{3\kappa \log X + O(1)}{\eta}.\]
\end{lemma}
\begin{proof}
Let $\mathscr{P}$ be the set of all primes, and let $Q := X^{1/2 - \kappa}$.
We proceed as in the proof of the larger sieve until \eqref{to-use}, which was the inequality
\[ \sum_{p \leq Q} \sum_{a \mdsub{p}} |\A(a,p;X)|^2 \log p \leq |\A[X]|^2 \log X + O(|\A[X]|Q).\]
Now by the Cauchy--Schwarz inequality we have \[ \sum_{a \mdsub{p}} |\A(a,p;X)|^2 \geq 2|\A[X]|^2/(p+1)\] for all $p$. Using this and the estimate $\sum_{p \leq Q} \log p/p = \log Q + O(1)$, we see that the left-hand side of \eqref{to-use} is at least
\[ 2|\A[X]|^2 (\log Q + O(1)) + \eta |\A[X]|^2 \sum_{\substack{p \leq Q \\ p \notin \mathscr{P}_{\unif}}} \frac{\log p}{p}.\]
Therefore
\[ \eta \sum_{\substack{p \leq Q \\ p \notin \mathscr{P}_{\unif}}} \frac{\log p}{p} \leq \log X - 2 \log Q + O(1) + O(\frac{Q}{|\A[X]|}).\] If $|\A[X]| \leq X^{1/2 - \kappa}$ then we are done; otherwise, the term $O(Q/|\A[X]|)$ may be absorbed into the $O(1)$ term and, after a little rearrangement, we obtain
\[ \sum_{\substack{p \leq Q \\ p \notin \mathscr{P}_{\unif}}} \frac{\log p}{p} \leq \frac{2 \kappa \log X + O(1)}{\eta}.\]
Since
\[ \sum_{Q \leq p \leq X^{1/2}} \frac{\log p}{p} = \kappa \log X + O(1),\] the claimed bound follows.
\end{proof}
\section{Sieving by additively structured sets}
Our aim in this section is to establish Theorem \ref{thm1.4}.
Let $A$ be a finite set of integers. As is standard, we write $E(A,A)$ for the {\em additive energy} of $A$, that is to say the number of quadruples $(a_1,a_2,a_3,a_4) \in A^4$ with $a_1 + a_2 = a_3 + a_4$. If $p$ is a prime, write $E_p(A, A)$ for the number of quadruples with $a_1 + a_2 \equiv a_3 + a_4 \md{p}$. It is easy to see that $E_p(A,A) \geq |A|^4/p$. In situations where this inequality is not tight, we can get a lower bound for the additive energy $E(A,A)$. To do this we will use the {\em analytic large sieve inequality}, which is something like an approximate version of Bessel's inequality (and which leads, in a non-obvious way, to the large sieve bound that we stated as Proposition \ref{classicls}). We cite the following version, which is best possible in various aspects, from Chapter 9.1 of Friedlander and Iwaniec \cite{opera-cribo}.
\begin{proposition}
Let $0 < \delta \leq 1/2$, and suppose that $\theta_{1}, \dots , \theta_{R} \in \R/\Z$ form a $\delta$-spaced set of points, in the sense that $\Vert \theta_{r} - \theta_{s} \Vert \geq \delta$ for all $ r \neq s$ where $\Vert \cdot \Vert$ denotes distance to the nearest integer. Suppose that $(a(x))_{M < x \leq M+X}$ are any complex numbers, where $X$ is a positive integer. Then
\[ \sum_{r=1}^{R} \bigg|\sum_{M < x \leq M+X} a(x) e(\theta_{r}x) \bigg|^{2} \leq (X - 1 + \delta^{-1}) \sum_{M < x \leq M+X} |a(x)|^{2}, \]
where as usual $e(\theta) := \exp\{2\pi i \theta\}$.
\end{proposition}
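For orientation, we record the standard specialization of this inequality to the Farey fractions of order $Q$, which is the route to the arithmetic large sieve mentioned above. Distinct fractions $a/q$, $a'/q'$ in lowest terms with $q, q' \leq Q$ satisfy $\Vert \frac{a}{q} - \frac{a'}{q'} \Vert \geq \frac{1}{qq'} \geq Q^{-2}$, so taking $\delta = Q^{-2}$ gives
\[ \sum_{q \leq Q} \sum_{\substack{a \mdsub{q} \\ (a,q) = 1}} \bigg|\sum_{M < x \leq M+X} a(x) e(ax/q) \bigg|^{2} \leq (X - 1 + Q^{2}) \sum_{M < x \leq M+X} |a(x)|^{2}. \]
In Lemma \ref{lift-lem} below we shall use only the fractions with prime denominators.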
Using the analytic large sieve inequality, we shall prove the following lemma.
\begin{lemma}[Lifting additive energy]\label{lift-lem} Suppose that $A \subset [X]$. Then we have
\[ \sum_{p \leq X^{1/2}} p \big( E_p(A,A) - \frac{|A|^4}{p} \big) \leq 3 X E(A,A).\]
\end{lemma}
\begin{proof} Write $r(x)$ for the number of representations of $x$ as $a_1 + a_2$ with $a_1,a_2 \in A$. Then
\[ E(A,A) = \sum_{x \leq 2X} r(x)^2,\]
whilst
\[ E_p(A, A) = \sum_{\substack{x,x' \leq 2X \\ x \equiv x' \mdsub{p}}} r(x) r(x') = \frac{1}{p}\sum_{a \mdsub{p}} |\sum_{x \leq 2X} r(x) e(ax/p)|^2.\]
It follows that
\begin{align*} \sum_{p \leq X^{1/2}} p E_p(A,A) & = \sum_{p \leq X^{1/2}} \sum_{a \mdsub{p}} |\sum_{x \leq 2X} r(x) e(ax/p)|^2 \\ & = \sum_{p \leq X^{1/2}} \sum_{\substack{a \mdsub{p} \\ a \neq 0}} |\sum_{x \leq 2X} r(x) e(ax/p)|^2 + \sum_{p \leq X^{1/2}} p \frac{|A|^4}{p}.\end{align*}
Now the fractions $a/p$ are $1/X$-spaced, as $a, p$ range over all pairs with $p \leq X^{1/2}$ prime and $1 \leq a \leq p-1$: distinct such fractions satisfy $\Vert \frac{a}{p} - \frac{a'}{p'} \Vert \geq \frac{1}{pp'} \geq \frac{1}{X}$. By the analytic form of the large sieve, applied with $\delta = 1/X$ over the range $x \leq 2X$ (so that the factor $X - 1 + \delta^{-1}$ becomes $2X - 1 + X \leq 3X$), it follows that
\[ \sum_{p \leq X^{1/2}} \sum_{\substack{a \mdsub{p} \\ a \neq 0}} |\sum_{x \leq 2X} r(x) e(ax/p)|^2 \leq 3 X \sum_{x \leq 2X} r(x)^2.\]
Putting all these facts together gives the result.\end{proof}
\begin{corollary}\label{cor1.3}
Let $\eta,\delta > 0$ be small real numbers with $\eta \leq \delta^2$. Suppose that $A \subset [X]$ is a set. Let $\mathscr{P}$ be a set of primes satisfying $36\delta^{-2} \leq p \leq X^{1/2}$, and suppose that the following are true whenever $p \in \mathscr{P}$:
\begin{enumerate}
\item $A \mdlem{p}$ lies in a set $S_p$ of cardinality at most $\frac{1}{2}(p+1)$;
\item $S_p$ has at least $(\frac{1}{16} + \delta) p^3$ additive quadruples;
\item $A$ has $\eta$-uniform fibres mod $p$, in the sense that $\sum_{a \mdsublem{p}} |A(a; p)|^2 \leq (2 + \eta) |A|^2/p$,
where $A(a;p)$ is the number of $x \in A$ with $x \equiv a \mdlem{p}$.
\end{enumerate}
Then $E(A,A) \geq \frac{\delta |A|^4}{3X}|\mathscr{P}|$.
\end{corollary}
\begin{proof}
Suppose that $p \in \mathscr{P}$. We will obtain a lower bound for $E_p(A,A)$ which beats the trivial bound of $E_p(A,A) \geq |A|^4/p$. The corollary will then follow quickly from Lemma \ref{lift-lem}. First of all we apply the variance identity
\[ \sum_{m = 1}^M (t_m - \frac{1}{M}\sum_{i = 1}^M t_i)^2 = \sum_{i=1}^M t_i^2 - \frac{1}{M} (\sum_{i=1}^M t_i)^2\]
with $M := |S_p|$ and the $t_i$ being the $A(a;p)$ with $a \in S_p$. This and the uniform fibres assumption yields
\[ \sum_{a \in S_p} \big(|A(a;p)| - \frac{|A|}{|S_p|}\big)^2 \leq \frac{(2 + \eta) |A|^2}{p} - \frac{2|A|^2}{p+1} \leq \frac{|A|^2}{p}\big(\eta + \frac{2}{p}\big).\]
Write $f : \Z/p\Z \rightarrow \R$ for the function $f(a) := |A(a;p)|$, and $g : \Z/p\Z \rightarrow \R$ for the function which is $|A|/|S_p|$ on $S_p$ and zero elsewhere. We have shown\footnote{The normalisations here are the ones standard in additive combinatorics. Write $\Vert F \Vert_2 = (\frac{1}{p}\sum_{x \in \Z/p\Z} |F(x)|^2)^{1/2}$, but $\Vert \hat{F} \Vert_2 = (\sum_r |\hat{F}(r)|^2)^{1/2}$ on the Fourier side. Then we have the Parseval identity $\Vert F \Vert_2 = \Vert \hat{F} \Vert_2$ and Young's inequality $\Vert \hat{F} \Vert_{\infty} \leq \Vert F \Vert_1$, both of which are used here.} that
\[ \Vert f - g \Vert_2 \leq \frac{|A|}{p} \sqrt{\eta + \frac{2}{p}}.\]
Note also that, by the Cauchy--Schwarz inequality,
\[ \Vert f - g \Vert_1 \leq \Vert f - g \Vert_2 \leq \frac{|A|}{p} \sqrt{\eta + \frac{2}{p}} ,\]
and hence
\[ \Vert \hat{f} - \hat{g} \Vert_4^4 \leq \Vert \hat{f} - \hat{g} \Vert_{\infty}^{2} \Vert \hat{f} - \hat{g} \Vert_2^{2} \leq \Vert f - g \Vert_1^2 \Vert f - g \Vert_2^2 \leq \frac{|A|^4}{p^4}(\eta + \frac{2}{p})^{2},\] which of course implies that
\[ \Vert \hat{f} - \hat{g} \Vert_4 \leq \frac{|A|}{p} (\eta^{1/2} + \sqrt{\frac{2}{p}}),\]
where $\hat{f}(r) := \frac{1}{p}\sum_{a \in \Z/p\Z} f(a)e(-ar/p)$, similarly for $\hat{g}$. It follows that
\[ \Vert \hat{f} \Vert_4 \geq \Vert \hat{g} \Vert_4 - \frac{\eta^{1/2}|A|}{p} - \frac{\sqrt{2}|A|}{p^{3/2}}.\]
Note, however, that
\[ E_p(A,A) = \sum_{a_1 + a_2 = a_3 + a_4} f(a_1)f(a_2) f(a_3) f(a_4) = p^3 \Vert \hat{f} \Vert_4^4,\]
whilst
\[ \Vert \hat{g} \Vert_4^4 = \frac{|A|^4}{|S_p|^4} \frac{E_p(S_p, S_p)}{p^{3}} \geq \frac{|A|^4}{16|S_p|^4} (1 + 16\delta)\] and so
\[ \Vert \hat{g} \Vert_4 \geq \frac{|A|}{2|S_p|}(1 + 2\delta) \geq \frac{|A|}{p+1}(1 + 2\delta).\]
Putting these facts together, and remembering that $\eta \leq \delta^{2}$ and $p \geq 36\delta^{-2}$, yields
\[ \Vert \hat{f} \Vert_4 \geq \frac{|A|}{p}(1 + 2\delta - \eta^{1/2} - 3/p^{1/2}) \geq \frac{|A|}{p}(1 + \delta/2)\] and so $E_p(A,A) \geq \frac{|A|^4}{p}(1 + \delta)$ whenever $p \in \mathscr{P}$.
The result now follows immediately from Lemma \ref{lift-lem}: each $p \in \mathscr{P}$ contributes at least $p \cdot \delta |A|^4/p = \delta |A|^4$ to the left-hand side of that lemma, whence $\delta |A|^4 |\mathscr{P}| \leq 3X E(A,A)$.
\end{proof}
\begin{corollary}\label{cor3.3}
Let $\kappa, \delta > 0$ be small parameters. Suppose that $A \subset [X]$ and that, for every prime $p \leq X^{1/2}$, the set $A \mdlem{p}$ lies in a set $S_p$ of cardinality at most $\frac{1}{2}(p+1)$ and with at least $(\frac{1}{16} + \delta) p^3$ additive quadruples. Then either $|A| \ll X^{1/2 - \kappa}$ or else $E(A,A) \geq \delta X^{-9 \kappa/\delta^2} |A|^3$.
\end{corollary}
\begin{proof}
Suppose that $|A| \geq X^{1/2 - \kappa}$ and that $\kappa \log X$ is large enough. (If $\kappa \log X$ is small then $|A| \ll X^{1/2} \ll X^{1/2 - \kappa}$ by the usual large sieve bound.) Set $\eta := \delta^2$. By Lemma \ref{lem1.1} we either have $|A| \leq X^{1/2 - \kappa}$ or else $A$ has $\eta$-uniform fibres on a set $\mathscr{P} \subset [X^{1/2}]$ of primes satisfying
\[ \sum_{\substack{p \leq X^{1/2} \\ p \notin \mathscr{P}}} \frac{\log p}{p} \leq \frac{4\kappa \log X}{\eta}.\] This implies that $|\mathscr{P}| \geq X^{1/2 - 8\kappa/\eta}$, and so by Corollary \ref{cor1.3} and the fact that $|A| \geq X^{1/2 - \kappa}$ we have $E(A,A) \geq \delta X^{-9\kappa/\eta} |A|^3$, which gives the claimed bound.
\end{proof}
The main task for the rest of this section will be to prove the following.
\begin{proposition}[Differenced larger sieve]\label{diff-largersieve}
Let $X$ be large, and let $A \subset [X]$ be a set with the property that $A \mdlem{p}$ lies in a set $S_p$ of size at most $\frac{1}{2}(p+1)$ for all primes $p \leq X^{1/2}$. Suppose that $E(A,A) \geq |A|^3/K$. Then $|A| \leq K X^{1/2 - c_0}$, where $c_0 > 0$ is an absolute constant.
\end{proposition}
Let us pause to see how this and Corollary \ref{cor3.3} combine to establish Theorem \ref{thm1.4}.
\begin{proof}[Proof of Theorem \ref{thm1.4} given Proposition \ref{diff-largersieve}]
Let $\kappa > 0$ be a parameter to be specified shortly. Suppose that $A \subset [X]$, and that $A \mdlem{p} \subset S_p$ for all primes $p$. Suppose furthermore that $|S_p| = \frac{1}{2}(p+1)$ and that $S_p$ has at least $(\frac{1}{16} + \delta) p^3$ additive quadruples for all $p$. By Corollary \ref{cor3.3} we see that either $|A| \ll X^{1/2 - \kappa}$, or else $E(A,A) \geq \delta X^{-9\kappa/\delta^2} |A|^3$. In this second case it follows from Proposition \ref{diff-largersieve} that $|A| \ll_{\delta} X^{\frac{1}{2} + 9 \kappa/\delta^2 - c_0}$.
Choosing $\kappa := c_0\delta^2/10$, both bounds are $\ll_{\delta} X^{1/2 - c_0 \delta^2/10}$, which gives the result.
\end{proof}
It remains to prove Proposition \ref{diff-largersieve}. As the reader will soon see, the proof might be thought of as a ``differenced larger sieve'' argument, in which the larger sieve is not applied to $A$ directly, but rather to intersections of shifted copies of $A$ (as in Lemma \ref{lem2.7}) and to a set $H$ of pairwise differences of elements of $A$ (as in Lemma \ref{lem2.8}). The assumption that $A$ has large additive energy allows one to recover bounds on $A$ from that information (as in Lemma \ref{lem3.5}).
\emph{Remark.} It is possible to prove Proposition \ref{diff-largersieve} with a quite respectable value of the constant $c_{0}$. Unfortunately the quality of the final bound in Theorem \ref{thm1.4} is not really determined by the value of $c_{0}$, but by the much poorer bounds that we achieved when trying to force the set $A$ to have uniform fibres mod $p$. We believe that by reworking Corollary \ref{cor1.3} a little one could prove Theorem \ref{thm1.4} with an improved bound $|\A[X]| \ll X^{1/2 - c\sqrt{\delta}}$, but this is presumably very far from optimal.
\begin{proof}[Proof of Proposition \ref{diff-largersieve}]
The argument is a little involved, so we begin with a sketch. Suppose that $E(A,A) \approx |A|^3$. Then it is not hard to show that $|A \cap (A + h)| \approx |A|$ for $h \in H$, where $|H| \approx |A|$. Modulo $p$, the set $A \cap (A + h)$ is contained in $S_p \cap (S_p + h)$. If, for some $h \in H$, we have $|S_p \cap (S_p + h)| < (\frac{1}{2} - c)p$ then an application of the larger sieve implies that $|A \cap (A + h)| < X^{1/2 - c'}$, and hence $|A| \lessapprox X^{1/2 - c'}$. The alternative is that $|S_p \cap (S_p + h)| \approx \frac{1}{2}p$ for many $p$, for all $h \in H$. Using this we can show that there is \emph{some} $p$ for which $|S_p \cap (S_p + h)| \approx \frac{1}{2}p$ for many $h$. By a result of Pollard, there is no such set $S_p$.
Let us turn now to the details, formulating a number of lemmas which correspond to the above heuristic discussion. From now on, the assumptions are as in Proposition \ref{diff-largersieve}.
\begin{lemma}\label{lem3.5} There is a set $H \subset [-X, X]$ with $|H| \geq |A|/2K$ such that $|A \cap (A + h)| \geq |A|/2K$ for all $h \in H$.
\end{lemma}
\begin{proof}
This is completely standard additive combinatorics and is a consequence, for example, of the inequalities in \cite[\S 2.6]{tao-vu}. It is no trouble to give a self-contained proof: note that $E(A,A) = \sum_x |A \cap (A + x)|^2$ and that we have the trivial bound $|A \cap (A + x)| \leq |A|$ for all $x$. If $H$ is the maximal set with the stated property then
\[ E(A, A) = \sum_{x \in H} |A \cap (A + x)|^2 + \sum_{x \notin H} |A \cap (A + x)|^2 \leq |H| |A|^2 + \frac{|A|}{2K}\sum_{x} |A \cap (A + x)| = |H| |A|^2 + \frac{|A|^3}{2K},\]
from which the statement follows immediately: since $E(A,A) \geq |A|^3/K$, this rearranges to give $|H| \geq |A|/2K$.
\end{proof}
\begin{lemma}\label{lem2.7} Let $c > 0$ be a small constant. Set $Q := X^{1/2 - c/2}$, and suppose that there is some $h \in H$ such that
\[ \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} < (\frac{1}{2} - c) \log Q.\] Then $|A| \ll K X^{1/2 - c/4}$.
\end{lemma}
\begin{proof}
Note that $A \cap (A + h) \subseteq S_p \cap (S_p + h)$ for all $p$. Write $\sigma_p := |S_p \cap (S_p + h)|/p$ and apply the larger sieve, Theorem \ref{larger-sieve}, with $\delta = 1$ and $A$ replaced by $A \cap (A + h)$. We obtain the bound
\begin{equation}\label{eq77denom} |A \cap (A + h)| \ll \frac{Q}{\sum_{p \leq Q} \frac{\log p}{p\sigma_p} - \log X},\end{equation}
provided that the denominator is positive.
Our assumption is that
\[ \sum_{p \leq Q} \frac{\sigma_p \log p}{p} \leq (\frac{1}{2} - c) \log Q.\] Since $4t + 1/t \geq 4$ for all $t > 0$, it follows that
\[ \sum_{p \leq Q} \frac{\log p}{\sigma_p p} \geq 4\log Q + O(1) - 4(\frac{1}{2} - c)\log Q = (2 + 4c)\log Q + O(1).\]
It is easy to check that the denominator of \eqref{eq77denom} is indeed positive: since $Q = X^{1/2 - c/2}$, it is at least $(2 + 4c)(\frac{1}{2} - \frac{c}{2})\log X - \log X + O(1) = (c - 2c^2)\log X + O(1) > 0$ for $c$ small and $X$ large. We obtain the bound
\[ |A \cap (A + h)| \ll X^{1/2 - c/2 + o(1)} \ll X^{1/2 - c/4}.\]
Since $|A \cap (A + h)| \geq |A|/2K$, the lemma follows.
\end{proof}
Before stating the next lemma, let us isolate a fact which will be needed in the proof. This is basically due to Pollard.
\begin{lemma}[Pollard]\label{add-comb}
Let $\eps > 0$ be small, and let $S \subset \Z/p\Z$ be a non-empty set such that $|S| < (1-2\eps)p$. Then there are at most $4\eps |S| + 1$ values of $h \in \Z/p\Z$ such that $|S \cap (S + h)| \geq (1 - \eps)|S|$.
\end{lemma}
\begin{proof}
This follows quickly from a well-known result of Pollard \cite{pollard}. Writing $N_i$ for the number of $h$ such that $|S \cap (S + h)| \geq i$, Pollard's result in our setting implies that $N_1 + \dots + N_r \geq r(2|S|-r)$ for all $2|S| - p \leq r \leq |S|$. Temporarily write $H$ for the set of all $h \in \Z/p\Z$ such that $|S \cap (S + h)| \geq (1 - \eps)|S|$, and also let $R := |S| - 2\lfloor \eps|S| \rfloor$ and $U := |S| - \lfloor \eps|S| \rfloor$, where $\lfloor \cdot \rfloor$ denotes the floor function. Then
\[ N_{R+1} + \dots + N_{|S|} \geq N_{R+1} + \dots + N_{U} \geq |H|(U - R) = |H| \lfloor \eps|S| \rfloor .\]
Pollard's result tells us that
\[ N_1 + \dots + N_{R} \geq R(2|S|-R) = |S|^{2} - 4\lfloor \eps|S| \rfloor^{2} .\]
On the other hand we trivially have
\[ N_1 + \dots + N_{|S|} = |S|^2.\]
Combining all these facts yields $|S|^{2} \geq |S|^{2} - 4\lfloor \eps|S| \rfloor^{2} + |H| \lfloor \eps|S| \rfloor$, whence $|H| \leq 4\lfloor \eps|S| \rfloor \leq 4\eps|S|$, provided that $|S| \geq 1/\eps$.
Alternatively, if $|S| < 1/\eps$ then $|S \cap (S + h)| \geq (1 - \eps)|S|$ only if $S \cap (S + h) = S$, in which case $S \cap (S + nh) = S$ for every $n$. Since $S$ is a proper subset of $\Z/p\Z$, this can only happen when $h=0$.
\end{proof}
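\emph{Remark.} The bound of Lemma \ref{add-comb} is sharp up to a factor of $2$. Indeed, if $S = \{1, \dots, m\} \subset \Z/p\Z$ with $m \leq (1-2\eps)p$, then for $|h| \leq \lfloor \eps m \rfloor$ there is no wrap-around modulo $p$ and $|S \cap (S+h)| = m - |h| \geq (1-\eps)|S|$, so that at least $2\lfloor \eps m \rfloor + 1$ values of $h$ have the stated property.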
We will also require a simple and standard averaging principle, the proof of which we include here for completeness.
\begin{lemma}\label{simple-avg}
Let $\eps, \eps'$ be real numbers with $0 < \eps \leq \eps'$. Let $X$ be a finite set, let $(\lambda(x))_{x \in X}$ be nonnegative weights, and suppose that $f : X \rightarrow [0,1]$ is a function such that $\sum_{x \in X} \lambda(x) f(x) \geq (1 - \eps) \sum_{x \in X} \lambda(x)$.
Let $X' \subset X$ be the set of all $x \in X$ such that $f(x) \geq 1 - \eps'$. Then $\sum_{x \in X'} \lambda(x) \geq (1 - \frac{\eps}{\eps'}) \sum_{x \in X} \lambda(x)$. In particular if $\sum_{x \in X} f(x) \geq (1 - \eps) |X|$ then there are at least $(1 - \frac{\eps}{\eps'}) |X|$ values of $x$ such that $f(x) \geq 1 - \eps'$.
\end{lemma}
\begin{proof}
We have
\[ (1 - \eps) \sum_{x \in X} \lambda(x) \leq \sum_{x \in X} \lambda(x) f(x) = \sum_{x \in X'} \lambda(x) f(x) + \sum_{x \in X \setminus X'} \lambda(x) f(x) \leq \sum_{x \in X'} \lambda(x) + (1 - \eps') \sum_{x \in X \setminus X'} \lambda(x).\]
Rearranging this inequality gives the first result. The second one follows by taking all the weights $\lambda(x)$ to be 1.
\end{proof}
\begin{lemma}\label{lem2.8}
Let $c > 0$ be a sufficiently small absolute constant. Let $H \subset [-X,X]$ be a set of size at least $X^{1/2 - c/4}$, and let $Q = X^{1/2 - c/2}$. Then there is some $h \in H$ such that \[ \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} < (\frac{1}{2} - c) \log Q.\]
\end{lemma}
\begin{proof}
Suppose not. Then certainly
\[ \sum_{h \in H}\sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} \geq (\frac{1}{2} - c) |H| \log Q \geq (\frac{1}{2} - c) |H| \sum_{p \leq Q} \frac{\log p}{p}.\]
Write $\mathscr{P}$ for the set of primes $p \leq Q$ such that
\begin{equation}\label{averagehyp}
\sum_{h \in H} \frac{|S_p \cap (S_p + h)|}{p} \geq (\frac{1}{2} - c^{1/2}) |H| .
\end{equation}
By Lemma \ref{simple-avg} applied with $X$ the set of primes $p \leq Q$, $\lambda(p) = \frac{\log p}{2p}|H|$, $f(p) = \frac{2}{|H|} \frac{1}{p}\sum_{h \in H} |S_p \cap (S_p + h)|$, $\epsilon = 2c$ and $\epsilon' = 2c^{1/2}$ we have\footnote{The alert reader will observe that our applications of Lemma \ref{simple-avg} are slightly bogus, since we have $f(p) \leq (p+1)/p$ rather than $f(p) \leq 1$, as required in the lemma. This can be corrected by instead setting $\lambda(p) = \frac{\log p}{2p} \frac{p+1}{p} |H|$ and $f(p) = \frac{2}{|H|} \frac{1}{p+1}\sum_{h \in H} |S_p \cap (S_p + h)|$, which makes no essential difference to the conclusions about $\mathscr{P}$ and $H_p$.}
\[ \sum_{p \in \mathscr{P}} \frac{\log p}{p} \geq (1 - c^{1/2} ) \sum_{p \leq Q} \frac{\log p}{p}.\]
Note that $|S_p \cap (S_p + h)| \leq \frac{1}{2}(p+1)$ always. It also follows from Lemma \ref{simple-avg} applied to the inequality (\ref{averagehyp}) that, for all $p \in \mathscr{P}$, there is a set $H'_p \subset H$ with $|H'_p| \geq (1 - c^{1/4}) |H|$ such that $|S_p \cap (S_p + h)| \geq (\frac{1}{2} - c^{1/4})p$ for all $h \in H'_p$. On the other hand, by Lemma \ref{add-comb} it follows that $|H'_p \md{p}| \leq 4 c^{1/4} p + 1$, and so all but $c^{1/4} |H|$ elements of $H$ reduce to lie in a set of size $4 c^{1/4} p + 1 < \frac{1}{3}p$ modulo $p$, for all $p \in \mathscr{P}$; recall that $\mathscr{P}$ satisfies $\sum_{p \in \mathscr{P}} \frac{\log p}{p} \geq (1 - c^{1/2}) (\log Q + O(1))$. We may apply the larger sieve, Theorem \ref{larger-sieve}, to this situation, taking $\delta = 1 - c^{1/4}$ and $\sigma_p = 1/3$ for all $p \in \mathscr{P}$. This gives the bound
\[ |H| \ll \frac{Q}{3(1 - c^{1/4})^{2} (1 - c^{1/2})(\log Q + O(1)) -\log X }\] provided that the denominator is positive. If $c$ is sufficiently small then the denominator will be positive with our choice of $Q$, namely $X^{1/2 - c/2}$, and we get the bound $|H| \ll X^{1/2 - c/2 + o(1)}$. This is contrary to assumption.
\end{proof}
We may now conclude the proof of Proposition \ref{diff-largersieve}. As in the hypothesis of the proposition, let $A \subset [X]$ be a set such that $A \md{p} \subset S_p$ for all $p$, where $|S_p| \leq \frac{1}{2}(p+1)$. Suppose additionally that $E(A,A) \geq |A|^3/K$. By Lemma \ref{lem3.5} there is a set $H \subset [-X,X]$, $|H| \geq |A|/2K$, such that $|A \cap (A + h)| \geq |A|/2K$ for all $h \in H$. If $|H| < X^{1/2 - c/4}$ then the proposition follows, so suppose this is not the case. Then Lemma \ref{lem2.8} applies and we may conclude that there is an $h \in H$ such that
\[ \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} < (\frac{1}{2} - c) \log Q,\] where $Q = X^{1/2 - c/2}$.
Finally, by Lemma \ref{lem2.7}, it follows that $|A| \ll K X^{1/2 - c/4}$, thereby concluding the proof of the proposition.
\end{proof}
\section{Sieving by intervals}\label{interval-sieve-sec}
Our aim in this section is to establish Theorem \ref{interval-sieve}. We begin by recalling the statement of it.
\begin{interval-sieve-repeat}
Suppose that $\A$ is a set of integers and that, for each prime $p$, the set $\A \mdlem{p}$ lies in some interval $I_p$. Let $\eps > 0$ be arbitrary. Then
\begin{enumerate}
\item If $|I_p| \leq (1 - \eps) p$ for at least a proportion $\eps$ of the primes in each dyadic interval $[Z,2Z]$ then $|\A[X]| \ll_{\eps} (\log \log X)^{C\log(1/\eps)}$, where $C > 0$ is some absolute constant;
\item If $|I_p| \leq \frac{p}{2}$ for all primes then $|\A[X]| \ll (\log \log X)^{\gamma + o(1)}$, where $\gamma = \frac{\log 18}{\log(3/2)} \approx 7.129$;
\item If $I_p = [\alpha p, \beta p]$ for all primes $p$ and for fixed $0 \leq \alpha < \beta < 1$ \textup{(}not depending on $p$\textup{)} then $|\A| = O_{\beta - \alpha}(1)$;
\item There are examples of infinite sets $\A$ with $|I_p| \leq (\frac{1}{2} + \eps) p$ for all $p$.
\end{enumerate}
\end{interval-sieve-repeat}
The proof of parts (i) and (ii) relies on the following basic lemma.
\begin{lemma}\label{large-fourier} Suppose that $p$ is a prime, that $I_p \subset \Z/p\Z$ is an interval of length at most $(1 - \eps) p$, and that $A \subset [X]$ is a set with $A \mdlem{p} \subset I_p$. Then there is some integer $k$, $1 \leq k \leq \lceil 2/\eps^2 \rceil$, such that
\[ |\sum_{x \leq X} 1_A(x) e(k x/p) | \geq \eps |A|/32.\] If $|I_p| \leq p/2$ then we have the following more precise conclusion: there is an integer $k \in \{1,2\}$ such that
\[ |\sum_{x \leq X} 1_A(x) e(k x/p)| \geq |A|/3.\]
\end{lemma}
\begin{proof}
We claim that there is a $1$-periodic real-valued function
\[ f(\theta) = 1 + \sum_{0 < |k| \leq \lceil 2/\eps^2 \rceil} c_k e(k\theta)\]
such that $f(\theta) \leq 0$ when $|\theta| \geq \eps/2$.
To construct $f(\theta)$, consider first the convolution $\psi(\theta) := 1_{|\theta| \leq \eps/4} \ast 1_{|\theta| \leq \eps/4} = \int_{\R/\Z} 1_{|\theta - \phi| \leq \eps/4} 1_{|\phi| \leq \eps/4} d\phi$. We have
\[ |\hat{\psi}(k)| = |\widehat{1_{|\phi| \leq \eps/4}}(k)|^{2} = \big|\int_{\R/\Z} 1_{|\phi| \leq \eps/4} e(-k\phi) d\phi\big|^{2} \leq \min(\eps, \frac{1}{\pi |k|})^2.\]
From the Fourier inversion formula it follows that
\[ \frac{8}{\eps^2} \psi(\theta) = 2 + \sum_{k \neq 0} c_k e(k \theta),\] where $|c_k| \leq \min(8, \frac{1}{\eps^2 |k|^2})$.
Furthermore, by construction, $\psi(\theta) = 0$ for $|\theta| \geq \eps/2$. Define
\[ f(\theta) := 1 + \sum_{0 < |k| \leq K} c_k e(k \theta),\] where $K := \lceil 2/\eps^2 \rceil$.
Since
\[ \sum_{|k| > K} |c_k| \leq \sum_{|k| \geq K+1} \frac{1}{\eps^2 |k|^2} \leq \frac{2}{\eps^2 K} \leq 1,\] it follows that $f$ has the required properties. Now there is some $\beta \in [0,1]$ (depending on $I_p$) such that $\Vert \frac{x}{p} + \beta \Vert \geq \eps/2$ whenever $x \in A$, where $\Vert \cdot \Vert$ denotes distance to the nearest integer. This means that $f(\frac{x}{p} + \beta) \leq 0$, and so
\[ 1 + \sum_{0 < |k| \leq \lceil 2/\eps^2 \rceil} c_k e( k (\frac{x}{p} + \beta)) \leq 0.\]
It follows that
\[ |A| \leq -\sum_{0 < |k| \leq \lceil 2/\eps^2 \rceil} c_k \sum_{x \leq X} 1_A(x) e( k (\frac{x}{p} + \beta)).\]
Using the triangle inequality, one obtains
\[ |A| \leq \sum_{0 < |k| \leq \lceil 2/\eps^2 \rceil} |c_k| | \sum_{x \leq X} 1_A(x) e(k x/p) |.\]
To conclude the proof of the lemma, we observe that
\[ \sum_{0 < |k| \leq K} |c_k| \leq \frac{32}{\eps},\] an estimate that follows upon splitting into the ranges $0 < |k| \leq 1/\eps$ and $|k| > 1/\eps$.
For the second statement, simply note that the function $f(\theta) = 1 - 2\cos \theta + \cos 2\theta$ satisfies $f(\theta) \leq 0$ when $|\theta| \leq \pi/2$; rewriting the left-hand side as $2\cos\theta (\cos \theta - 1)$, this becomes clear. The rest of the argument proceeds as before, except that $\beta$ is now chosen so that the interval $I_p$ is centred at zero, that is to say $\Vert \frac{x}{p} + \beta \Vert \leq 1/4$ for all $x \in A$; such a $\beta$ exists since $|I_p| \leq p/2$. As $\sum_{0 < |k| \leq 2} |c_k| = 3$ here, we obtain the stated bound $|A|/3$.\end{proof}
We turn now to the proof of Theorem \ref{interval-sieve} (i). The general scheme of the argument, and in particular the use of Vinogradov's estimate (Proposition \ref{vino-est} below) was suggested to us by Jean Bourgain. We are very grateful to him for allowing us to include it here. The heart of the matter is the proof of the following lemma, from which Theorem \ref{interval-sieve} (i) follows rather easily by an iteration argument (or equivalently induction on $X$).
\begin{lemma}\label{it-step}
Suppose that $A \subset [X]$ and that $A \md{p}$ lies in an interval $I_p$ of length at most $(1-\eps) p$ for at least a proportion $\eps$ of the primes in each dyadic interval. Suppose that $X > X_0(\eps)$. Then there is a subinterval of $[X]$ of length $\exp (\log^{7/10} X)$ containing at least $c \eps^5 |A|$ points of $A$, where $c > 0$ is a small absolute constant.
\end{lemma}
Indeed, before proving this lemma let us explain how it implies Theorem \ref{interval-sieve} (i). We set $X_{0} = X$ and $A_{0} = \A[X]$, and by repeated application of the lemma we construct numbers $X_{i}$ and sets $A_{i}$ such that $A_{i} \subset [X_{i}]$, $A_{i} \md{p}$ lies in an interval $I_p$ of length at most $(1-\eps) p$, for at least $\eps$ of all primes in each dyadic interval, $\log X_{i+1} = \log^{7/10} X_{i}$ and $|A_{i+1}| \geq c \eps^5 |A_{i}|$. This procedure terminates when we first have $X_{i+1} \leq X_{0}(\eps)$, which will happen after $\ll \log\log\log X$ iterations. Consequently we have $|\A[X]| \leq (c^{-1}\eps^{-5})^{O(\log\log\log X)} X_{0}(\eps) \ll_{\eps} (\log \log X)^{C\log(1/\eps)}$, as claimed\footnote{As we have written things, we need to have $X_{0}(\eps) = \exp(C\log^{100}(1/\epsilon))$ (say) in order for the parameter $Y$ in the proof of Lemma \ref{it-step} to be large enough. But we remark that by taking more care of the final iterations in the proof of Theorem \ref{interval-sieve} (i), one could obtain a bound $|\A[X]| \ll (\log \log X)^{C\log(1/\eps)}$ for all $X$, with an absolute implied constant (not depending on $\eps$).} in Theorem \ref{interval-sieve}.
\begin{proof}[Proof of Lemma \ref{it-step}] Suppose that $p$ is a prime such that $A \md{p} \subset I_p$. By Lemma \ref{large-fourier}, there is some $k$, $1 \leq k \leq \lceil 2/\eps^2 \rceil$, such that
\begin{equation}\label{eq997} |\sum_{a \in A} e(k a/p)| \geq \eps |A|/32.\end{equation}
Let $Y$, $1 \ll Y \ll X$, be a parameter to be selected later (we will in fact take $Y = \exp(c\log^{7/10} X)$). We may choose a single $k$ so that the preceding estimate holds for $\gg \eps^2$ of the primes in $[Y, 2Y]$ for which we know that $A \md{p} \subset I_p$, that is for $\gg \eps^3$ of all the primes in $[Y, 2Y]$. Now we use the following fact: there is a weight function $w : [Y, 2Y] \rightarrow \R_{\geq 0}$ such that
\begin{enumerate}
\item $w(p) \geq 1$ for all primes $p \in [Y, 2Y]$;
\item $\sum_{Y \leq n \leq 2Y} w(n) \leq 10 \pi(Y)$;
\item $w(n) = \sum_{d| n: d \leq Y^{1/2}} \lambda_d$, where $\sum_{d \leq Y} \frac{|\lambda_d|}{d} \ll \log^{3} Y$.
\end{enumerate}
Such a function can be constructed in the form $w(n) = (\sum_{d | n} \mu(d) \psi(\frac{\log d}{\log Y}))^2$, where $\psi \in C^{\infty}(\R)$ is supported on $|x| \leq \frac{1}{4}$, is bounded in absolute value by 1, and $\psi(0) = 1$. Property (i) is then clear, whilst bound (ii) can be verified by expanding out and interchanging the order of summation. To check (iii), we note that it is clear that $|\lambda_d| \leq \sum_{[d_1, d_2] = d} 1 \leq \tau_3(d)$, the number of ways of writing $d$ as an ordered product of three positive integers. The claimed bound is then an easy exercise. It follows from \eqref{eq997} and the above properties that
\begin{equation}\label{eq527} \sum_{Y \leq n \leq 2Y} w(n) |\sum_{a \in A} e(ka/n)|^2 \geq c \eps^5 \pi(Y) |A|^2.\end{equation}
Expanding out and applying the triangle inequality yields
\begin{equation}\label{eq37} \sum_{a, a' \in A} |\sum_{Y \leq n \leq 2Y} w(n) e(\frac{k(a - a')}{n}) | \geq c\eps^5 \pi(Y) |A|^2.\end{equation}
We now claim that, if $Y$ is chosen judiciously, the contribution to this from those pairs $a,a'$ with $|a - a'| \geq Y^{10}$ (say) can be ignored. Indeed suppose, on the contrary, that
\begin{equation}\label{eq843} |\sum_{Y \leq n \leq 2Y} w(n) e(\frac{x}{n})| \geq \frac{c}{100}\eps^5 \pi(Y),\end{equation}
for some $x := k(a - a')$ satisfying $Y^{10} \leq x \ll \eps^{-2}X$. By property (iii) of $w(n)$ and the triangle inequality, this implies that
\[ \sum_{d \leq Y^{1/2}} |\lambda_d| |\sum_{Y/d \leq n' \leq 2Y/d} e(\frac{x}{dn'})| \geq \frac{c}{100} \eps^5 \pi(Y).\] By the upper bound (iii) for $\sum |\lambda_d| / d$ it follows that there is some $d \leq Y^{1/2}$ such that
\begin{equation}\label{eq478} |\sum_{Y/d \leq n' \leq 2Y/d} e(\frac{x}{d n'})| \gg \eps^5 \log^{-4} Y \frac{Y}{d}.\end{equation}
At this point we invoke the following powerful estimate of Vinogradov.
\begin{proposition}\label{vino-est}
Let $\delta > 0$ be small and $Y$ be large, and suppose that $x \geq Y^5$. Suppose that $|\sum_{Y \leq n \leq 2Y} e(x/n)| \geq \delta Y$. Then $x \geq \exp( c\log^{3/2} Y/\log^{1/2}(1/\delta))$.
\end{proposition}
\begin{proof} Using e.g. Theorem 8.25 of Iwaniec and Kowalski \cite{iwaniec-kowalski}, one obtains that
\[ |\sum_{Y \leq n \leq 2Y} e(x/n)| \ll Y e^{-c\frac{\log^{3}Y}{\log^{2}x}}. \]
Thus we must have $1/\delta \gg \exp(c\log^{3}Y/\log^{2}x)$, that is to say $\log^{2} x \gg \log^{3} Y/\log(1/\delta)$, from which the conclusion of the proposition quickly follows.
\end{proof}
Applying this Proposition to \eqref{eq478} leads to a contradiction unless
\[ \frac{x}{d} > \exp \bigg(c \frac{\log^{3/2} Y}{(\log \log Y)^{1/2} + (\log(1/\epsilon))^{1/2}}\bigg), \]
which would imply that $X \gg \epsilon^{2} x > \exp (c (\log^{3/2} Y)/((\log \log Y)^{1/2} + (\log(1/\epsilon))^{1/2}))$. With $Y = \exp(c\log^{7/10} X)$ and $X \geq \exp(C\log^{100}(1/\epsilon))$, say, this will not be so. It follows that we were wrong to assume \eqref{eq843}, and so indeed the contribution to \eqref{eq37} of those pairs $a,a'$ with $|a - a'| \geq Y^{10}$ may be ignored. Thus we have
\begin{equation}\label{eq372} \sum_{\substack{a, a' \in A \\ |a - a'| \leq Y^{10}}} \bigg|\sum_{Y \leq n \leq 2Y} w(n) e(\frac{k(a - a')}{n}) \bigg| \geq \frac{c}{2}\eps^5 \pi(Y) |A|^2.\end{equation}
Finally, we may apply the trivial bound to the inner sum, recalling from (ii) above that $\sum_{Y \leq n \leq 2Y} w(n) \leq 10 \pi(Y)$. We obtain
\[ \sum_{\substack{a, a' \in A \\ |a - a'| \leq Y^{10}}} 1 \gg \eps^5 |A|^2,\]
which implies that there is a subinterval of $[X]$ of length $Y^{10}$ containing $\gg \eps^5 |A|$ elements of $A$. This concludes the proof of Lemma \ref{it-step}, and hence of Theorem \ref{interval-sieve} (i). \end{proof}
\emph{Proof of Theorem \ref{interval-sieve} (ii)} (sketch). We proceed as above, with the following changes.
\begin{itemize}
\item Use the second conclusion of Lemma \ref{large-fourier} to conclude that there is some $k \in \{1,2\}$ such that
\[ \sum_{Y \leq p \leq 2Y} | \sum_{a \in A} e(ka/p)|^{2} \geq \frac{1}{18} (\pi(2Y) - \pi(Y)) |A|^{2}.\] This takes the place of \eqref{eq527}.
\item Expand out as in \eqref{eq37} to get
\[ \sum_{a, a' \in A} |\sum_{Y \leq p \leq 2Y} e(\frac{k(a - a')}{p})| \geq \frac{1}{18} (\pi(2Y) - \pi(Y)) |A|^2.\]
\item Choose $Y = \exp(\log^{2/3+o(1)} X)$, and use Jutila \cite[Theorem 2]{jutila} (which is a Vinogradov-type estimate for $\sum_{p \leq P} e(x/p)$) to show that the contribution from those pairs with $|a - a'| \geq Y^{10}$ can be ignored, so we have
\[ \sum_{\substack{a, a' \in A \\ |a - a'| \leq Y^{10}}} |\sum_{Y \leq p \leq 2Y} e(\frac{k(a - a')}{p})| \geq (\frac{1}{18} - o(1)) \pi(Y) |A|^2.\]
\item Conclude that there is some interval of length $\sim Y^{10}$ containing at least $(\frac{1}{18} - o(1))|A|$ points of $A$, and proceed iteratively as before.
\end{itemize}
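To see where the value of $\gamma$ comes from: each iteration retains at least a proportion $\frac{1}{18} - o(1)$ of the set, while replacing $\log \log X_{i}$ by $(\frac{2}{3} + o(1)) \log \log X_{i}$. The iteration therefore terminates after $(1 + o(1)) \log\log\log X/\log(3/2)$ steps, and so
\[ |\A[X]| \ll (18 + o(1))^{(1 + o(1)) \log\log\log X/\log(3/2)} = (\log \log X)^{\frac{\log 18}{\log(3/2)} + o(1)} = (\log\log X)^{\gamma + o(1)}.\]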
\emph{Remark.} We could have used Jutila's bound in the proof of Theorem \ref{interval-sieve} (i) as well, instead of using the weight function $w$. We chose not to do this in the interests of self-containment and of variety. Note that Jutila's paper predates Vaughan's identity \cite{vaughan} for prime number sums, and his argument would be a little more accessible if this device were used. A model for such an argument may be found in the paper of Granville and Ramar\'e \cite{granville-ramare}. \vspace{11pt}
\emph{Proof of Theorem \ref{interval-sieve}} (iii). This is essentially a consequence of Jutila \cite[Corollary, p126]{jutila}. A slight variant of that Corollary shows that the number of $p \in [x^{1/2}, 2x^{1/2}]$ with $\alpha \leq \{ x/p \} \leq \beta$ is $\sim (\beta - \alpha) (\pi(2x^{1/2}) - \pi(x^{1/2}))$, and so all elements of $\A$ are bounded by $O_{\beta - \alpha}(1)$.\vspace{11pt}
\emph{Proof of Theorem \ref{interval-sieve}} (iv). We take $\A$ to consist of the numbers $a_i = \prod_{p \leq X_i} p$, for some extremely rapidly-growing sequence $X_1 < X_2 < \dots$. Given a prime $p$, suppose that $X_i \leq p \leq X_{i+1}$. Then $a_{i+1},a_{i+2},\dots$ all reduce to zero $\md{p}$, and so $\A \md{p} = \{0,a_1,\dots, a_i\}$. By choosing the $X_i$ sufficiently rapidly growing we may ensure that $0 < a_1 < \dots < a_{i-1} < \eps p$. Regardless of the value of $a_i \md{p}$ (which we cannot usefully control) the set $\A \md{p}$ will be contained in some interval of length at most $(\frac{1}{2} + \eps) p$.\vspace{11pt}
\emph{Remark.} With $\A$ as constructed above, $|\A[X]|$ grows like $\log_* X$. Thus there is still a considerable gap between the bound of (i) and the construction given here. We expect, however, that the correct bound in (i) is of $\log_*$ type, which would follow assuming vaguely sensible conjectures on exponential sums $\sum_{n \leq Y} e(x/n)$. If, for example, the conclusion of Proposition \ref{vino-est} were instead that $x \geq \exp(Y^{1/10})$ then we would get a $\log_*$-type bound on $|\A[X]|$ in this case.
\section{Sieving by arithmetic progressions}
In this section we shall prove Theorem \ref{prog-thm}, whose statement was as follows.
\begin{prog-thm-rpt}
Let $\eps > 0$. Suppose that $\A$ is a set of integers and that, for each prime $p \leq X^{1/2}$, the set $\A \mdlem{p}$ lies in some arithmetic progression $S_p$ of length $(1-\eps)p$. Then $|\A[X]| \ll_{\eps} X^{1/2 - \eps'}$, where $\eps' > 0$ depends on $\eps$ only.
\end{prog-thm-rpt}
Suppose to begin with that $|S_{p}|=\frac{1}{2}(p+1)$ for each odd prime $p$. Then (dilating $S_p$ to the interval $\{1,\dots, \frac{1}{2}(p+1)\}$, which preserves the additive energy)
$$ E_{p}(S_{p},S_{p}) = (\frac{p+1}{2})^{2} + 2\sum_{h=1}^{\frac{1}{2}(p-1)} (\frac{p+1}{2} - h)^{2} \geq \frac{p^{3}}{12} . $$
Thus Theorem \ref{thm1.4} is applicable with the choice $\delta = 1/48$, and the result follows in this case.
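The closed form above, and the lower bound $p^3/12$, can be sanity-checked by a brute-force energy count for small primes (an illustrative check, not part of the proof):

```python
from collections import Counter

def energy_mod_p(S, p):
    """Count quadruples (a, b, c, d) in S^4 with a + b = c + d (mod p)."""
    counts = Counter((a + b) % p for a in S for b in S)
    return sum(v * v for v in counts.values())

# For S_p = {1, ..., (p+1)/2} viewed inside Z/pZ, compare the brute-force
# energy with the closed form m^2 + 2 * sum_{h=1}^{m-1} (m - h)^2, where
# m = (p+1)/2, and check the lower bound p^3 / 12.
for p in [5, 7, 11, 13, 101]:
    m = (p + 1) // 2
    S = range(1, m + 1)
    closed_form = m * m + 2 * sum((m - h) ** 2 for h in range(1, m))
    assert energy_mod_p(S, p) == closed_form
    assert closed_form >= p ** 3 / 12
```

(The closed form arises because $|S_p \cap (S_p + h)| = m - \min(h, p-h)$ for every nonzero shift $h$.)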
Now we turn to the proof of Theorem \ref{prog-thm} for arbitrary $\eps > 0$. We begin with a result which should be compared to Corollary \ref{cor1.3}, but which is simpler to state and prove than that result.
\begin{lemma}\label{progr-energy}
Let $\eps > 0$ be small, and suppose that $A \subset [X]$ is a set and $\mathscr{P} \subset [X^{1/2}]$ is a set of primes such that $|\mathscr{P}| \geq 2/\eps^{2}$. If $S_{p} \subset \Z/p\Z$ is an arithmetic progression of length at most $(1-\eps)p$, and if $A \mdlem{p} \subset S_p$ for each prime $p \in \mathscr{P}$, then $E(A,A) \gg \frac{\eps^{4} |A|^{4}}{X} |\mathscr{P}|$,
where the constant implicit in the $\gg$ notation is absolute.
\end{lemma}
\begin{proof}
In view of Lemma \ref{lift-lem}, and our assumption that $|\mathscr{P}| \geq 2/\eps^{2}$, it will suffice to show that
$$ E_{p}(A,A) - \frac{|A|^{4}}{p} \gg \frac{\eps^{4} |A|^{4}}{p} $$
for each prime $p \in \mathscr{P}$ that is greater than $2/\eps^{2}$ (that being a positive proportion of all the primes in $\mathscr{P}$). As in the proof of Corollary \ref{cor1.3} we have
$$ E_{p}(A,A) = p^3 \sum_{r \in \Z/p\Z} \big|\frac{1}{p} \sum_{x \leq X} 1_A(x) e(-xr/p) \big|^{4} . $$
The contribution from the $r=0$ term is evidently equal to $|A|^{4}/p$. By Lemma \ref{large-fourier} (which was stated in the case $S_p$ is an interval, but may easily be adapted to the case $S_p$ a progression by dilation), if $p > \lceil 2/\eps^2 \rceil$ then there is some nonzero $r$ satisfying $|\sum_{x \leq X} 1_A(x) e(rx/p)| \geq \eps |A|/32$. The result follows immediately.
\end{proof}
The other major ingredient that we shall need is an analogue of Lemma \ref{lem2.8} that applies when the sets $S_{p}$ have size at most $(1-\eps)p$, rather than size at most $\frac{1}{2}(p+1)$ as in that lemma. The following result provides this.
\begin{lemma}\label{progr-goodh}
Let $c > 0$ be a sufficiently small absolute constant. Let $H \subset [-X,X]$ be a set of size $X^{1/2 - c/4}$, and let $Q = X^{1/2 - c/2}$. Suppose that for each prime $p \in \mathscr{P}'$ we have a subset $S_p \subset \Z/p\Z$ such that $\frac{1}{10}p \leq |S_p| \leq (1-\eps)p$, where $\mathscr{P}'$ is a subset of the primes $p \leq Q$ satisfying $\sum_{p \in \mathscr{P}'} \frac{\log p}{p} \geq \frac{1}{3}\log Q$.
Then there is some $h \in H$ such that \[ \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} < (1 - c\eps) \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p|}{p} .\]
\end{lemma}
\begin{proof}
The proof is quite close to the proof of Lemma \ref{lem2.8}, so we shall give a fairly brief account. If the conclusion were false then we would have
$$ \sum_{h \in H} \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p \cap (S_p + h)|}{p} \geq (1 - c\eps) |H| \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p|}{p}. $$
In this case, if we let $\mathscr{P}$ denote the subset of primes $p \in \mathscr{P}'$ for which
$$ \sum_{h \in H} \frac{|S_p \cap (S_p + h)|}{p} \geq (1 - c^{1/2}\eps) |H| \frac{|S_p|}{p} , $$
then applying Lemma \ref{simple-avg} with $X = \mathscr{P}'$, $\lambda_{p} = \frac{\log p}{p^2} |H| |S_p|$ and $f(p) = |H|^{-1}\sum_{h \in H} \frac{|S_p \cap (S_p + h)|}{|S_p|}$ yields
$$ \sum_{p \in \mathscr{P}} \frac{\log p}{p} \frac{|S_p|}{p} \geq (1 - c^{1/2}) \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p|}{p} \geq \frac{1}{40}\log Q. $$
As in the proof of Lemma \ref{lem2.8}, it also follows from Lemma \ref{simple-avg} that, for all $p \in \mathscr{P}$, there is a set $H'_p \subset H$ with $|H'_p| \geq (1 - c^{1/4}) |H|$ such that $|S_p \cap (S_p + h)| \geq (1 - c^{1/4}\eps)|S_p|$ for all $h \in H'_p$. Finally, applying Lemma \ref{add-comb} to $S_{p}$ (with $\eps$ replaced by $c^{1/4}\eps$) gives $|H'_p \md{p}| \leq 4 c^{1/4} \eps p + 1$. Thus, for every $p \in \mathscr{P}$, all but $c^{1/4} |H|$ elements of $H$ reduce modulo $p$ to lie in a set of size $4 c^{1/4} \eps p + 1 < \frac{1}{100}p$; recall that $\mathscr{P}$ satisfies $\sum_{p \in \mathscr{P}} \log p/p \geq \frac{1}{40}\log Q$. We may apply the larger sieve, Theorem \ref{larger-sieve}, to this situation, taking $\delta = 1 - c^{1/4}$ and $\sigma_p = 1/100$ for all $p \in \mathscr{P}$. This gives the bound
\[ |H| \ll \frac{Q}{100(1 - c^{1/4})^{2} (1/40)\log Q -\log X }\] provided that the denominator is positive. If $c$ is sufficiently small then this will be so with our choice of $Q$, namely $X^{1/2 - c/2}$, and we get the bound $|H| \ll X^{1/2 - c/2 + o(1)}$. This is contrary to assumption.
\end{proof}
Now we can prove Theorem \ref{prog-thm}, by applying the foregoing lemmas repeatedly to the intersection of $A$ (and of the sets $S_{p}$) with shifted copies of itself. The key point here is that, since any subset of $A$ will lie in the arithmetic progression $S_{p}$ when reduced modulo $p$, we can use Lemma \ref{progr-energy} throughout this ``intersecting process'' to obtain a lower bound on additive energy. In particular, we don't need to keep track of any ``uniformity of fibres'' throughout the process. Eventually we will obtain a subset of $A$ that has cardinality quite close to $|A|$, but lies modulo $p$ in a multiply intersected copy of $S_p$ having size $< (1/2-c)p$ (for most primes $p$). The theorem will then follow immediately from the larger sieve.
\begin{proof}[Proof of Theorem \ref{prog-thm}]
We assume that $\eps$ is small, and that $X$ is sufficiently large in terms of $\eps$. This is certainly permissible for proving the theorem. We will prove that $|A| < X^{1/2-c(\eps)}$, where $c(\eps) = K^{-1/\eps}$ for a large absolute constant $K > 0$. Suppose, for a contradiction, that $|A| \geq X^{1/2-c(\eps)}$. We proceed iteratively, setting $A_0 = A$ and constructing a sequence of sets $A_i \subset A$, $i = 1,2,3,\dots$ such that
\begin{equation}\label{progr-size}
|A_i| \geq X^{1/2-3^{i}K^{-1/\eps}}.
\end{equation}
The sets $A_i$ will satisfy $A_i \md{p} \subset S_p^i$, where $S_p^i \subset S_p$, and where
\begin{equation}\label{progr-decay}
\sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p^i|}{p} < (1-c\eps/2)^{i}(\log Q + O(1)).
\end{equation}
Here $c$ is the absolute constant from Lemma \ref{progr-goodh}, and $Q=X^{1/2-c/2}$. After $O(1/\eps)$ steps we will, in particular, have
$$ \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p^i|}{p} < (1/2 - c)\log Q , $$
and therefore, writing $\sigma^i_p := |S_p^i|/p$, we will have
$$ \sum_{p \leq Q} \frac{\log p}{\sigma_p^i p} \geq 4\log Q + O(1) - 4(\frac{1}{2} - c)\log Q = (2 + 4c)\log Q + O(1) , $$
as argued in the proof of Lemma \ref{lem2.7}. Using the larger sieve, this implies that $|A_i| \ll X^{1/2-c/2}$, which contradicts the lower bound (\ref{progr-size}) provided that $K$ was chosen large enough.
We will show how the set $A_{i+1}$ is obtained from $A_i$, and verify that it satisfies the size bound (\ref{progr-size}) and that its reductions modulo primes $p$ satisfy the bound (\ref{progr-decay}). Firstly, if $A_i \subset A$ satisfies (\ref{progr-size}) then Lemma \ref{progr-energy} implies that
$$ E(A_i,A_i) \gg \frac{\eps^{4} |A_i|}{X^{1/2} \log X} |A_i|^3 \gg \frac{\eps^{4} X^{-3^{i}K^{-1/\eps}}}{\log X} |A_i|^3 \geq 2X^{-2 \cdot 3^{i}K^{-1/\eps}} |A_i|^3 , $$
provided that $X$ is large enough in terms of $\eps$. Using Lemma \ref{lem3.5}, it follows that there is a set $H \subset [-X,X]$ such that $|H| \geq |A_i|X^{-2 \cdot 3^{i}K^{-1/\eps}} \geq X^{1/2-3^{i+1}K^{-1/\eps}}$, and such that
$$ |A_i \cap (A_i + h)| \geq X^{1/2-3^{i+1}K^{-1/\eps}} $$
for all $h \in H$. The set $A_{i+1}$ will be of the form $A_i \cap (A_i + h)$, for suitably chosen $h \in H$. Note that any such choice will indeed satisfy the size bound (\ref{progr-size}). In view of (\ref{progr-decay}), we may assume that the sets $S_p^i$ satisfy
$$ (1/2 - c)\log Q \leq (1-c\eps/2)^{i+1}(\log Q + O(1)) \leq \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p^i|}{p} < (1-c\eps/2)^{i}(\log Q + O(1)), $$
the lower bound holding because if it failed we could simply set $A_{i+1}=A_i$. Now let $\mathscr{P}'$ denote the set of primes $p \leq Q$ for which $|S_p^i| \geq \frac{1}{10}p$. We must have
\[ \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \geq \frac{1}{3}\log Q \qquad \textrm{and} \qquad \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p^i|}{p} \geq \frac{1}{2} \sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p^i|}{p}, \]
say, because otherwise the lower bound we just assumed would be violated. Thus we can apply Lemma \ref{progr-goodh}, deducing that for some $h \in H$ we have
$$ \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p^i \cap (S_p^i + h)|}{p} < (1 - c\eps) \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p^i|}{p} \leq \sum_{p \in \mathscr{P}'} \frac{\log p}{p} \frac{|S_p^i|}{p} - \frac{c\eps}{2}\sum_{p \leq Q} \frac{\log p}{p} \frac{|S_p^i|}{p} . $$
Finally, if we set $A_{i+1} = A_i \cap (A_i + h)$ for this choice of $h$, and correspondingly set $S_p^{i+1} = S_p^i \cap (S_p^i + h)$, then the upper bound (\ref{progr-decay}) will indeed be satisfied.
\end{proof}
\section{Robustness of the inverse large sieve problem}\label{stab-sec}
In this section we prove Theorem \ref{stability-thm}. Let us begin by recalling the statement.
\begin{stability-thm-repeat}
Let $X_0 \in \N$, and let $\eps > 0$. Let $X \in \N$ be sufficiently large in terms of $X_0$ and $\eps$, and suppose that $H \leq X^{1/8}$. Suppose that $A,B \subset [X]$ and that $|A \md{p}| + |B \md{p}| \leq p+1$ for all $p \in [X_0, X^{1/4}]$. Then, for some absolute constant $c > 0$, one of the following holds:
\begin{enumerate}
\item \textup{(Better than large sieve)} Either $|A \cap [X^{1/2}]|$ or $|B \cap [X^{1/2}]|$ is $\leq X^{1/4 - c\eps^3}$;
\item \textup{(Behaviour with quadratics)} Given any two rational quadratics $\psi_A,\psi_B$ of height at most $H$, either both $|A \setminus \psi_{A}(\Q)|$ and $|B\setminus \psi_{B}(\Q)|$ are at most $HX^{1/2 - c}$, or else at least one of $|A \cap \psi_{A}(\Q)|$ and $|B \cap \psi_{B}(\Q)|$ is bounded above by $HX^{1/4 + \eps}$.
\end{enumerate}
\end{stability-thm-repeat}
The proof of Theorem \ref{stability-thm} requires a number of preliminary ingredients, which we assemble now. We start with some results concerning \emph{quasisquares}, that is to say squarefree integers that are quadratic residues modulo ``many'' primes (see, for example, \cite[Section 12.14]{dekoninck-luca}). The first result is not the one actually required later on (which is Lemma \ref{pseudosquares}), but it may be of independent interest and its proof motivates the proof of the result needed later.
\begin{lemma}\label{quasi-square}
Let $\eta > 0$, and suppose that $Y \geq Z > 2$. Suppose that $\mathscr{P} \subset [Z,2Z]$ is a set consisting of at least $\eta Z/\log Z$ of the primes in $[Z,2Z]$. Then the number of squarefree $q \in [1,Y]$ such that $\left( \frac{q}{p} \right) = 1$ for all $p \in \mathscr{P}$ is at most $8 (6 \log Y/\eta)^{3\log Y/\log Z}$.
\end{lemma}
\begin{proof}
First of all, let $\mathscr{Q}$ denote the set of all $q$ that are squarefree, lie in $[1,Y]$ and are a quadratic residue modulo all $p \in \mathscr{P}$. Each $q \in \mathscr{Q}$ is either congruent to $1 \md{4}$, or it is congruent to $2$ or $3 \md{4}$. Let us assume that at least half of the elements of $\mathscr{Q}$ are congruent to $2$ or $3 \md{4}$, and henceforth redefine $\mathscr{Q}$ to consist of those elements only, and aim to show that $|\mathscr{Q}| \leq 4 (6 \log Y/\eta)^{3\log Y/\log Z}$. (The proof when at least half of the elements are congruent to $1 \md{4}$ is essentially the same.)
Let $k \geq 3$ be the smallest integer for which $Z^k > Y^2$. Then if $n \in [Z^k, 2^k Z^k]$ is any product of $k$ distinct primes from $\mathscr{P}$, and if $q \in \mathscr{Q}$, we have that the Jacobi symbol $\left( \frac{q}{n} \right) = \prod_{p \mid n} \left( \frac{q}{p} \right) = 1$.
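The multiplicativity of the Jacobi symbol in its lower argument, used in the last display, can be checked with the classical binary-reciprocity algorithm (a sketch; the function `jacobi` is our own implementation):

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, by binary quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):    # (2|n) = -1 iff n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1

# Multiplicativity in the lower argument: (q|n) = prod over p|n of (q|p).
# Example: q = 6 is a square mod 5 and mod 19, hence (6|95) = 1.
assert jacobi(6, 95) == jacobi(6, 5) * jacobi(6, 19) == 1
for q in [2, 3, 5, 6, 10]:
    for p1, p2 in [(3, 5), (5, 7), (7, 11), (5, 19)]:
        assert jacobi(q, p1 * p2) == jacobi(q, p1) * jacobi(q, p2)
```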
Let $\mathscr{S}$ be the set of all such $n$; then clearly
\begin{equation}\label{s-lower} |\mathscr{S}| = \binom{|\mathscr{P}|}{k} \geq \frac{|\mathscr{P}|^{k}}{k^{k}} \geq \left(\frac{\eta}{k\log Z}\right)^k Z^k . \end{equation}
(Of course this is only true if $|\mathscr{P}| \geq k$, but otherwise the bound we shall derive is trivial anyway.)
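The elementary binomial inequality $\binom{n}{k} \geq (n/k)^k$ underlying \eqref{s-lower} admits a quick numerical confirmation (illustrative only):

```python
from math import comb

# Check binom(n, k) >= (n/k)^k on a small grid of parameters.
for n in [10, 50, 200, 1000]:
    for k in [2, 3, 5, 8]:
        assert comb(n, k) >= (n / k) ** k
```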
Finally, note that if $q$ is squarefree and congruent to $2$ or $3 \md{4}$, and if $n \in \mathscr{S}$ (so, in particular, $n$ is odd), then $\left( \frac{q}{n} \right) = \left( \frac{4q}{n} \right) = \chi_{4q}(n)$, where $\chi_{4q}(n)$ is a primitive character modulo $4q$. (It is the primitive quadratic character corresponding to the fundamental discriminant $4q$.) The multiplicative form of the large sieve \cite[Theorem 7.13]{iwaniec-kowalski} implies that
\[ \sum_{\substack{q \leq Q, \\ q \equiv 2 \text{ or } 3 \md 4, \\ q \; \text{squarefree}}} \left|\sum_{n \in \mathscr{S}} a_{n} \chi_{4q}(n) \right|^2 \leq (16Q^2 + N) \sum_{n \in \mathscr{S}} |a_{n}|^{2} , \]
for any set $\mathscr{S} \subset [N]$ and for any $Q$ and any coefficients $a_{n}$. Applying this with our particular set $\mathscr{S}$, and with $a_{n} = 1$, yields
\[ |\mathscr{Q}||\mathscr{S}|^2 \leq (16Y^2 + 2^k Z^k)|\mathscr{S}| < 2^{k+2} Z^k |\mathscr{S}|,\] and therefore by \eqref{s-lower}
\[ |\mathscr{Q}| \leq \frac{2^{k+2} Z^k}{|\mathscr{S}|} \leq 4 (\frac{2k\log Z}{\eta})^k.\] Noting that $k \leq \frac{2\log Y}{\log Z} + 1 \leq \frac{3\log Y}{\log Z}$, the result follows.
\end{proof}
\emph{Remarks.} The conclusion of Lemma \ref{quasi-square} is nontrivial when $Y$ is any fixed power of $Z$, and even for somewhat larger values of $Y$. It seems to us that the bound obtained here is stronger than could (straightforwardly) be obtained using the real character sum estimate of Heath-Brown \cite{hb}, which comes with an unspecified factor of $(QN)^{\eps}$.
Now we present the result we need later on. Of course more general statements are possible, but we leave their formulation as an exercise to the interested reader.
\begin{lemma}\label{pseudosquares}
Suppose that $Z \geq Y^{1/5}$ are large. The number of squarefree $q \in [1,Y]$ which are squares modulo 95\% of the primes in $[Z, 2Z]$ is $O(\log^{10} Z)$. Furthermore, all of these $q$ apart from $q=1$ are at least $Y^\varrho$, for a certain absolute constant $\varrho > 0$.
\end{lemma}
\begin{proof}
The argument for the first part closely follows the preceding. Write $\mathscr{Q}$ for the set of all squarefree $q \in [1,Y]$ which are squares modulo at least 95\% of the primes $p \in [Z,2Z]$. Write $\mathscr{P}_q$ for the set of these primes; note carefully that $\mathscr{P}_q$ may depend on $q$. Write $\mathscr{S}$ for the set of products of $10$ distinct primes from $[Z,2Z]$, and write $\mathscr{S}_q$ for the set of products of $10$ distinct primes from $\mathscr{P}_q$. Note that $\left(\frac{q}{n} \right) = 1$ whenever $n \in \mathscr{S}_q$. Furthermore,
\[ |\mathscr{S}_q| = \binom{|\mathscr{P}_q|}{10} \geq (1 - o(1)) (1 - \frac{1}{20})^{10} \binom{|\mathscr{P}|}{10} > 0.59\binom{|\mathscr{P}|}{10} = 0.59|\mathscr{S}|\] for every $q \in \mathscr{Q}$ (the key point here is that $0.59 > 0.5$). It follows that for every $q \in \mathscr{Q}$ we have $\sum_{n \in \mathscr{S}} \left( \frac{q}{n} \right) \geq 0.18|\mathscr{S}|$. The proof now concludes as before.
To prove the second part, we use a form of the prime number theorem for the real character $\chi(m) = \left( \frac{q}{m} \right)$ (which, provided $q > 1$ is squarefree, is always a non-principal character of conductor at most $4q$). This tells us (see e.g. Theorem 7 in Gallagher's paper~\cite{gallagherdensity}) that
\[ \sum_{Z \leq m \leq 2Z} \Lambda(m) \chi(m) = O(Z \max ( e^{-c\sqrt{\log Z}}, e^{-c\log Z/\log q} )) \] if $\chi$ has no exceptional zero, and
\[ \sum_{Z \leq m \leq 2Z} \Lambda(m) \chi(m) = \frac{Z^{\beta} - (2Z)^{\beta}}{\beta} + O(Z \max ( e^{-c\sqrt{\log Z}}, e^{-c\log Z/\log q} )),\] if $\chi$ does have an exceptional zero $\beta \in (\frac{1}{2}, 1)$. Either way, we have the 1-sided inequality
\[ \sum_{Z \leq m \leq 2Z} \Lambda(m) \chi(m) \leq O(Z \max ( e^{-c\sqrt{\log Z}}, e^{-c\log Z/\log q} ) ).\]
Since
\[ \sum_{Z \leq m \leq 2Z} \Lambda(m) = Z(1 + o(1))\] by the prime number theorem, it follows that if $q \leq Y^{\varrho}$ with $\varrho$ small enough then fewer than $95$\% of the primes $p$ in $[Z,2Z]$ are such that $(q | p) = \chi(p) = 1$.\end{proof}
\begin{proof}[Proof of Theorem \ref{stability-thm}] A key role will be played by primes $p$ that are close to $X^{1/4}$. It is convenient to introduce some terminology concerning them. Let $C$ be a large absolute constant to be specified later. We say that $p \sim X^{1/4}$ if $X^{1/4 - C\eps} < p < X^{1/4}$. Furthermore, we will say that a certain property holds ``for at least 1\% of primes $p \sim X^{1/4}$'' if the set $\mathscr{P}$ of primes for which this property holds satisfies
\[ \sum_{p \sim X^{1/4} : p \in \mathscr{P}} \frac{\log p}{p} \geq 0.01 \sum_{p \sim X^{1/4}} \frac{\log p}{p}.\] The weighting of $\log p / p$ is included with some later applications of the larger sieve in mind. We begin with some preliminary analysis using the larger sieve, strongly based on the work of Elsholtz \cite{elsholtz}.
\begin{lemma}\label{large-sieve-app} Let $A, B$ be sets such that $|A \md{p}| + |B \md{p}| \leq p+1$ for all primes $p \in [X_0, X^{1/4}]$. Let $\eps > 0$, and suppose that $X$ is sufficiently large in terms of $X_0$ and $\eps$. Then either $A \cap [X^{1/2}]$ or $B \cap [X^{1/2}]$ has size at most $X^{1/4 - c\eps^3}$, or else for at least 99\% of primes $p \sim X^{1/4}$ we have both $|A \md{p}| \leq (\frac{1}{2} + \eps)p$ and $|B\md{p}| \leq (\frac{1}{2} + \eps) p$. We may take $c = 2^{-10}$.
\end{lemma}
\begin{proof}
Write $\alpha_p := |A \md{p}|/p$, $\beta_p := |B \md{p}|/p$. Thus we are assuming that $\alpha_p + \beta_p \leq 1 + \frac{1}{p}$.
Suppose the final statement of the lemma is false. Then
\[ \sum_{p \sim X^{1/4}} \frac{\log p}{p} \big( (1 - 2\alpha_p)^2 + (1 - 2\beta_p)^2\big) \geq 2^{-5}\eps^2 \sum_{p \sim X^{1/4}} \frac{\log p}{p} \geq 2^{-5} \eps^3 \log X.\]
If $c < 2^{-8}$, we can remove the contribution from $X^{1/4 - 2c\eps^3} \leq p \leq X^{1/4}$ trivially to get
\[ \sum_{p < X^{1/4 - 2c\eps^3}} \frac{\log p}{p}\big( (1 - 2\alpha_p)^2 + (1 - 2\beta_p)^2\big) \geq 2^{-6} \eps^3 \log X.\]
We claim that if $a,b$ are positive real numbers with $a + b \leq 1$ then
\[ \frac{1}{a} + \frac{1}{b} \geq 4 + 2(1 - 2a)^2 + 2(1 - 2b)^2.\]
To see this, apply the inequality
\[ \frac{1}{x} + \frac{1}{1 - x} = 4 + \frac{(1 - 2x)^2}{x (1 - x)} \geq 4 + 4(1 - 2x)^2\] with $x = a$ and $x = b$ in turn, and add the results.
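Both the identity and the resulting claim can be confirmed numerically on a grid (an illustrative check only):

```python
# Check 1/x + 1/(1-x) = 4 + (1-2x)^2 / (x(1-x)) and the lower bound
# 4 + 4(1-2x)^2, which follows since x(1-x) <= 1/4 on (0, 1).
for k in range(1, 1000):
    x = k / 1000
    lhs = 1 / x + 1 / (1 - x)
    assert abs(lhs - (4 + (1 - 2 * x) ** 2 / (x * (1 - x)))) < 1e-6
    assert lhs >= 4 + 4 * (1 - 2 * x) ** 2

# Check the claimed consequence: for a, b > 0 with a + b <= 1,
# 1/a + 1/b >= 4 + 2(1-2a)^2 + 2(1-2b)^2.
for i in range(1, 99):
    for j in range(1, 100 - i):
        a, b = i / 100, j / 100
        assert 1 / a + 1 / b >= 4 + 2 * (1 - 2 * a) ** 2 + 2 * (1 - 2 * b) ** 2
```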
Applying this together with the above, we obtain
\[ \sum_{X_0 \leq p \leq X^{1/4 - 2c\eps^3}} \frac{\log p}{p} (\frac{1}{\alpha_p} + \frac{1}{\beta_p}) \geq (1 - 8c\eps^3)\log X + 2^{-5}\eps^3 \log X - O(1) \geq (1- 8c\eps^3 + 2^{-6} \eps^3)\log X .\]
Here we used the fact (an estimate of Mertens) that $\sum_{X_0 \leq p \leq Z} \frac{\log p}{p} = \log Z + O_{X_0}(1)$. Note also that we only have $\alpha_p + \beta_p \leq 1 +\frac{1}{p}$, and not $\alpha_p + \beta_p \leq 1$; the introduction of the $O(1)$ term takes care of this as well, the full justification of which we leave to the reader\footnote{We are working with the condition $|A \md{p}| + |B \md{p}| \leq p+1$, rather than the cleaner condition $|A \md{p}| + |B \md{p}| \leq p$, so that we can formulate Theorem \ref{stability-thm-symmetric} to include the case in which $\A$ is the set of values of a quadratic. Note, however, that in this case Lemma \ref{large-sieve-app} is vacuous anyway. Therefore this small point really can be ignored.}.
Without loss of generality the contribution from the $\alpha_p$ is at least that from the $\beta_p$, so
\[ \sum_{X_0 \leq p \leq X^{1/4 - 2c\eps^3}} \frac{\log p}{p} \frac{1}{\alpha_p} \geq (\frac{1}{2} - 4c\eps^3+ 2^{-7}\eps^3)\log X > (\frac{1}{2} + 2^{-8}\eps^3) \log X\] if $c \leq 2^{-10}$.
Then, however, the larger sieve implies that
\[ |A \cap [X^{1/2}]| \ll \frac{X^{1/4 - 2c\eps^3}}{\eps^3 \log X} < X^{1/4 - c\eps^3},\] and the result follows.
\end{proof}
Suppose now that the hypotheses are as in Theorem \ref{stability-thm}. Replace $\eps$ by $\eps/2C$ (the statement of the theorem does not change). Let $\psi_A,\psi_B$ be rational quadratics of height at most $H$. If option (ii) of the theorem does not hold then at least $HX^{1/4 + 2C\eps}$ elements of $A$ lie in $\psi_{A}(\Q)$ and at least $HX^{1/4 + 2C\eps}$ elements of $B$ lie in $\psi_{B}(\Q)$. Suppose also that option (i) of Theorem \ref{stability-thm} does not hold. Then we may apply Lemma \ref{large-sieve-app} to conclude that both $|A \md{p}|$ and $|B \md{p}|$ are $\leq (\frac{1}{2} + \eps)p$ for at least 99\% of all primes $p \sim X^{1/4}$. (We urge the reader to recall the special meaning of this notation.) Using this information together with the fact that $|A \md{p}| + |B \md{p}| \leq p+1$ for all $p \sim X^{1/4}$, we will deduce that in fact
\begin{equation}\label{weak} |A \setminus \psi_{A}(\Q)|, |B \setminus \psi_{B}(\Q)| \ll HX^{1/2 - c}, \end{equation} this being the other conclusion of Theorem \ref{stability-thm} (ii). It suffices to prove this for $A$, the proof for $B$ being identical. Write $\psi = \psi_{A}$, and suppose that $p \sim X^{1/4}$. Set
\[ T_p := A \md{p} \cap (\psi(\Q) \cap \Z) \md p,\]
\[ U_p := A \md{p} \setminus (\psi(\Q) \cap \Z) \md{p}.\]
We know that $|A \md{p}| \leq (\frac{1}{2} + \eps) p$ for at least 99\% of all primes $p \sim X^{1/4}$. For these primes, then,
\[ |T_p| + |U_p| \leq (\frac{1}{2} + \eps)p.\]
We claim that $|U_p| \leq 2\eps p$ for at least 98\% of all primes $p \sim X^{1/4}$. If this failed, we would have \begin{equation}\label{tp-bound}|T_p| \leq (\frac{1}{2} - \eps) p\end{equation} for at least $1\%$ of the primes $p \sim X^{1/4}$. Write $\psi(x) = \frac{1}{d}(a x^2 + bx + c)$ with $a,b,c,d$ having no common factor and $|a|, |b|, |c|, |d| \leq H$, and note that $\psi^{-1}(\Z) \subset \frac{1}{a}\Z$.
Note furthermore that
\begin{equation}\label{35} \{ x \in \Z : \psi(\frac{x}{a}) \in A \cap \psi(\Q)\} \subset \bigcap_{p \sim X^{1/4}} \{ x \in [-C_1HX^{1/2}, C_1HX^{1/2}] : \psi(\frac{x}{a}) \md{p} \in T_p\} ,\end{equation} where $C_1$ is some absolute constant. If $p \sim X^{1/4}$, the condition that $\psi(\frac{x}{a}) \md{p} \in T_p$ forces $x \md{p}$ to lie in some set $S_p \subset \Z/p\Z$ of residue classes with $|S_p| \leq 2|T_p|$. We now have a large sieve problem to which Proposition \ref{classicls} may be applied. If $p$ is such that \eqref{tp-bound} holds then we have $|S_p^c|/|S_p| \geq \eps$, and so
\[ \sum_{q \leq X^{1/4}} \mu^2(q) \prod_{p | q} \frac{|S_p^c|}{|S_p|} \geq \sum_{p \sim X^{1/4}} \frac{|S_p^c|}{|S_p|} \gg \eps \frac{X^{1/4 -C\eps}}{\log X}.\] Here, $X^{1/4 - C\eps}/\log X$ is a crude lower bound for the size of a set of primes constituting 1\% of all $p \sim X^{1/4}$, this lower bound being attained when all the primes congregate at the bottom of the interval $X^{1/4 - C\eps} \leq p \leq X^{1/4}$.
It follows from \eqref{35} and Proposition \ref{classicls} that $|A\cap \psi(\Q)| < HX^{1/4 + 2C\eps}$ if $X$ is large enough, contrary to assumption.
Now let us write $A= A_{\psi} \cup E$, where $A_{\psi}$ consists of those $x \in A$ for which $x \md{p} \in T_p$ for at least $97\%$ of $p \sim X^{1/4}$, and
$E$ consists of those $x \in A$ such that $x \md{p} \in U_p$ for at least $3\%$ of $p \sim X^{1/4}$. The idea here is that $A_{\psi}$ satisfies a large number of local conditions suggesting that its elements lie in $\psi(\Q)$. We would like to relate $A_{\psi}$ to $A \cap \psi(\Q)$, and show that $E$ is small. With this idea in hand, we can divide the task of proving \eqref{weak} into two subclaims, namely
\begin{equation}\label{claim} |A_{\psi} \setminus \psi(\Q)| \ll H X^{1/2 - c} \qquad \mbox{and} \qquad |E| \ll X^{1/4} .\end{equation}
Of course, we could tolerate a weaker bound for $|E|$, but as it turns out we need not settle for one.
We start with the first claim, which is quite straightforward given results we established earlier. Let $\Delta$ be the discriminant of $\psi$. If $x$ is an integer then so is $4ad x + \Delta d^2$, and furthermore if $x =\psi(n)$ then $4a d x + \Delta d^2 = (2a n + b)^2$. Therefore if $x \in A_{\psi}$ then $4a d x + \Delta d^2$ is an integer which is a square modulo $p$ for at least 97\% of all $p \sim X^{1/4}$. By a simple averaging argument, $4ad x + \Delta d^2$ is a square modulo at least 95\% of all $p\in [Z,2Z]$ for some $Z \sim X^{1/4}$. It follows from Lemma \ref{pseudosquares} that \begin{equation}\label{false-eq}4a d x + \Delta d^2 = n_i s^2,\end{equation} where $s \in \Z$ and $n_i$ is one of at most $O(\log^{10} X)$ squarefree integers, with $n_1 = 1$ and $n_i \geq X^{\varrho}$ if $i \geq 2$, where $\varrho > 0$ is an absolute constant. The number of $x \in [X]$ for which \eqref{false-eq} holds for a given $i$ is $\ll H\sqrt{X/n_i}$, and so the number of $x$ for which this holds for \emph{some} $i \geq 2$ is $\ll H \log^{10} X \cdot X^{(1- \varrho)/2} \ll H X^{1/2-\varrho/4}$. If $i = 1$, so that $n_1 = 1$, then $4a d x + \Delta d^2$ is a square and so $x \in \psi(\Q)$. This concludes the proof of the first bound in \eqref{claim}.
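The algebraic identity driving this step, namely $4ad\,\psi(n) + \Delta d^2 = (2an+b)^2$ with $\Delta d^2 = b^2 - 4ac$ (which appears to be the normalization of the discriminant intended here), can be verified over exact rationals:

```python
from fractions import Fraction

# psi(n) = (a n^2 + b n + c) / d, and Delta * d^2 = b^2 - 4ac (the
# assumed normalization of the discriminant of the rational quadratic).
# If x = psi(n), then 4 a d x + Delta d^2 = (2 a n + b)^2.
for (a, b, c, d) in [(1, 3, 2, 1), (2, -1, 5, 3), (5, 7, -4, 2)]:
    Dd2 = b * b - 4 * a * c               # Delta * d^2, an integer
    for n in range(-5, 6):
        x = Fraction(a * n * n + b * n + c, d)   # psi(n), exact
        assert 4 * a * d * x + Dd2 == (2 * a * n + b) ** 2
```

In particular $4adx + \Delta d^2$ is a perfect square whenever $x \in \psi(\Z)$, which is the fact exploited above.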
We turn now to the proof of the second bound in \eqref{claim}.
Recall first of all that $|U_p| \leq 2\eps p$ for at least 98\% of all $p \sim X^{1/4}$, and also that for every $x \in E$ we have $x \md{p} \in U_p$ for at least 3\% of all $p \sim X^{1/4}$. For every $x \in E$, both of these events occur for at least 1\% of all $p \sim X^{1/4}$.
Write $E_p$ for the subset of $E$ whose elements belong to $U_p$ modulo $p$. By the preceding facts we have
\[ \sum_{\substack{p \sim X^{1/4} \\ |U_p| \leq 2\eps p}} \frac{\log p}{p}|E_p| = \sum_{x \in E} \sum_{\substack{p \sim X^{1/4} \\ |U_p| \leq 2\eps p}} \frac{\log p}{p}1_{x \md{p} \in U_p} \geq \frac{1}{100}|E|\sum_{p \sim X^{1/4}} \frac{\log p}{p}. \]
Writing $\mathscr{P} := \{ p \sim X^{1/4} : |U_p| \leq 2\eps p \; \text{and} \; |E_p| \geq \textstyle\frac{1}{200}|E| \}$, it follows that
\[ \sum_{p \in \mathscr{P}} \frac{\log p}{p} \geq \frac{1}{200} \sum_{p \sim X^{1/4}} \frac{\log p}{p}.\] Applying the larger sieve (that is, Theorem \ref{larger-sieve}) with the choices $\delta = \frac{1}{200}$, $Q = X^{1/4}$ and $\sigma_{p} = 2\eps$, we obtain $|E| \ll X^{1/4}(\frac{1}{2^{20}\eps}\sum_{p \sim X^{1/4}} \frac{\log p}{p} - \log X)^{-1}$, provided that the term in parentheses is positive. That term is $> (2^{-21} C - 1)\log X$, which is positive if $C > 2^{22}$ (say). Thus we get the bound $|E| \ll X^{1/4}$.
This completes the proof of \eqref{claim}, and hence \eqref{weak} and Theorem \ref{stability-thm}.\end{proof}
We turn now to the proof of Theorem \ref{stability-thm-symmetric}, the stability theorem for a single infinite set $\A$. Again, we begin by recalling the statement.
\begin{stability-thm-symmetric-repeat}
Suppose that $\A$ is a set of positive integers and that $|\A \md{p}| \leq \frac{1}{2}(p+1)$ for all sufficiently large primes $p$. Then one of the following options holds:
\begin{enumerate}
\item \textup{(Quadratic structure)} There is a rational quadratic $\psi$ such that all except finitely many elements of $\A$ are contained in $\psi(\Q)$;
\item \textup{(Better than large sieve)} For each integer $k$ there are arbitrarily large values of $X$ such that $|\A[X]| < \frac{X^{1/2}}{\log^k X}$;
\item \textup{(Far from quadratic structure)} Given any rational quadratic $\psi$, for all $X$ we have $|\A[X] \cap \psi(\Q)| \leq X^{1/4 + o_{\psi}(1)}$.
\end{enumerate}
\end{stability-thm-symmetric-repeat}
\begin{proof}
Suppose that neither item (ii) nor item (iii) holds. Then there is an $\eps > 0$ and a rational quadratic $\psi$ such that $|\A[X] \cap \psi(\Q)| > X^{1/4 + \eps}$ for arbitrarily large values of $X$. For any such $X$ we may apply Theorem \ref{stability-thm} with $A = \A[X]$ to conclude that either option (ii) of our present theorem holds, or else for infinitely many $X$ we have $|\A[X] \setminus \psi(\Q)| \ll X^{1/2 - c}$. (We note in passing that Lemma \ref{large-sieve-app} is redundant inside the proof of Theorem \ref{stability-thm} in this setting, being trivially true.) We will deduce from this and further applications of the fact that $|\A \md{p}| \leq \frac{1}{2}(p+1)$ for $p$ sufficiently large that either (i) or (ii) of Theorem \ref{stability-thm-symmetric} holds.
Let $\tilde \psi$ be a rational quadratic satisfying the conclusions of Lemma \ref{tedious-local-lemma}, thus $\psi(\Q) \cap \Z \subset \tilde\psi(\Z)$ and for all sufficiently large primes $p$ the reductions $\md{p}$ of $\psi(\Q) \cap \Z$ and of $\tilde\psi(\Z)$ are the same. Suppose that option (ii) of Theorem \ref{stability-thm-symmetric} does not hold for the infinitely many values of $X$ for which we have $|\A[X] \setminus \psi(\Q)| \ll X^{1/2 - c}$. Then there is some integer $k$ such that (letting $X$ range through these values)
\begin{equation}\label{weak2} \limsup_{X \rightarrow \infty} X^{-1/2}\log^k X \, |\A[X] \cap \psi(\Q)| = \infty.\end{equation} We claim that this implies statement (i) of Theorem \ref{stability-thm-symmetric}, and in fact the stronger conclusion $|\A \setminus \psi(\Q)| \leq k+1$. Suppose this statement is false. Then there are elements $x_1,\dots,x_{k+1}$ in $\A$ but not in $\psi(\Q)$. Since $x$ lies in $\psi(\Q)$ if and only if $4ad x + \Delta d^2$ is the square of a rational number, it follows that none of the numbers $4ad x_i + \Delta d^2$ is a square. Set $m_i := 4ad x_i + \Delta d^2$, and suppose that $p$ is a prime such that $(m_i | p ) = -1$. If $p$ is sufficiently large then $x_i \notin (\psi(\Q) \cap \Z) \md{p}$ and hence $x_i \notin \tilde\psi(\Z) \md{p}$.
For each prime $p$, let $k(p)$ be the number of indices $i \in \{1,\dots, k+1\}$ such that $(m_i | p) = -1$. From the above reasoning and the assumption that $x_i \in \mathscr{A}$ it follows that $\A \cap \tilde\psi(\Z) \md{p}$ must occupy a set of size at most $\frac{1}{2}(p+1) - k(p)$ for all sufficiently large primes $p$. Define a set $\mathscr{B} \subset \Z$ by $\A \cap \tilde\psi(\Z) = \tilde\psi(\mathscr{B})$. Thus $|\tilde\psi(\mathscr{B}) \md{p}| \leq \frac{1}{2}(p+1) - k(p)$ for all sufficiently large primes $p$, which implies that $|\mathscr{B} \md{p}| \leq p - 2k(p) + 1$. Note also that $\A[X] \cap \psi(\Q) \subset \tilde\psi{(\mathscr{B} \cap [c_1 \sqrt{X}, c_2 \sqrt{X}])}$ for some constants $c_1,c_2$ depending only on $\tilde\psi$. We may now apply Lemma \ref{small-from-large} to the set $\mathscr{B}$. In that lemma we may take $w(p) = 2k(p) - 1$, where $k(p)$ is the number of $i$ for which $(m_i | p) = -1$, or equivalently (if $p > 2$) for which $\chi_i(p) = -1$ where $\chi_i(n) = (4m_i | n)$ is a real Dirichlet character and $( \, | \, )$ denotes the Kronecker symbol. Thus $k(p) = \frac{1}{2}(k+1) - \frac{1}{2}\sum_{i=1}^{k+1} \chi_i(p)$, and so $w(p) = k -\sum_{i = 1}^{k+1} \chi_i(p)$. The conditions of Lemma \ref{small-from-large} are easily satisfied by the prime number theorem for characters with a fairly crude error term. It follows from Lemma \ref{small-from-large} and the above discussion that
\[ |\mathscr{A}[X] \cap \psi(\Q)| \leq |\mathscr{B} \cap [c_1 \sqrt{X}, c_2 \sqrt{X}]| \ll X^{1/2}(\log X)^{-k},\] contrary to \eqref{weak2}. \end{proof}
\section{Composite numbers in $\A + \B$}
Recall from the introduction the following conjecture of ``inverse large sieve'' type.
\begin{stability-conj-repeat}
Let $X_0 \in \N$, and let $\rho > 0$. Let $X \in \N$ be sufficiently large in terms of $X_0$ and $\rho$. Suppose that $A,B \subset [X]$ and that $|A \md{p}| + |B \md{p}| \leq p+1$ for all $p \in [X_0, X^{1/4}]$. Then there exists a constant $c = c(\rho) > 0$ such that one of the following holds:
\begin{enumerate}
\item \textup{(Better than large sieve)} Either $|A \cap [X^{1/2}]|$ or $|B \cap [X^{1/2}]|$ is $\leq X^{1/4 - c}$;
\item \textup{(Quadratic structure)} There are two rational quadratics $\psi_A,\psi_B$ of height at most $X^{\rho}$ such that $|A \setminus \psi_{A}(\Q)|$ and $|B\setminus \psi_{B}(\Q)|$ are both at most $X^{1/2 -c}$.
\end{enumerate}
\end{stability-conj-repeat}
Our aim in this section is to prove Theorem \ref{ils-ostmann}, which is the following statement.
\begin{ils-ostmann-repeat}
Assume Conjecture \ref{stability-conj}. Let $\A, \B$ be two sets of positive integers, with $|\A|, |\B| \geq 2$, such that $\A + \B$ contains all sufficiently large primes. Then $\A + \B$ also contains infinitely many composite numbers.
\end{ils-ostmann-repeat}
This follows quite straightforwardly from the following fact.
\begin{lemma}\label{lem6.2}
There is $\rho > 0$ with the following property. Suppose that $\psi_{A},\psi_{B}$ are two rational quadratics of height at most $X^{\rho}$ and that $A \subset \psi_{A}(\Q) \cap [X]$ and $B \subset \psi_{B}(\Q) \cap [X]$ are sets of positive integers. Suppose that $A + B$ contains no composite number. Then at least one of $A$ and $B$ has cardinality $\ll X^{1/3}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{ils-ostmann} given Lemma \ref{lem6.2}]
Let $\A, \B$ be two sets of positive integers with $|\A|, |\B| \geq 2$, and suppose for contradiction that $\A + \B$ contains all sufficiently large primes but only finitely many composite numbers; then there is $X_0$ such that $\A + \B$ coincides with the set of primes on $[X_0,\infty)$.
We claim that there are infinitely many $X$ such that either $|\A[X]|$ or $|\B[X]|$ has cardinality at most $X^{1/2-c}$. This, however, is contrary to a theorem\footnote{The state-of-the-art here is $|\A[X]| \gg X^{1/2}/\log X \log\log X$: see \cite{elsholtz-harper}.} of Elsholtz \cite{elsholtz}, which implies that $|\A[X]|, |\B[X]| \gg X^{1/2} \log^{-5} X$ for all sufficiently large $X$.
It remains to prove the claim. Let $\rho$ be as in Lemma \ref{lem6.2}. For each $X$, write $A := \A \cap (X^{1/4},X]$ and $B := \B \cap (X^{1/4}, X]$. If $p \in [X_0, X^{1/4}]$ then $A + B$ contains no multiple of $p$, since any such number would be a nontrivial multiple of $p$ (and hence composite) and would lie in $\A + \B$. For these primes $p$, then, we have $|A \md{p}| + |B \md{p}| \leq p$, since $B \md{p}$ cannot intersect $(-A) \md{p}$. Assuming Conjecture \ref{stability-conj}, for each $X$ one of the two options (i) or (ii) of that conjecture holds.
If (i) holds for infinitely many $X$ then without loss of generality we have $|A \cap [X^{1/2}]| \ll X^{1/4 - c}$ for infinitely many $X$. By Elsholtz \cite{elsholtz} we have $|\A[X^{1/4}]| \ll X^{1/8 + o(1)}$, and therefore $|\A[X^{1/2}]| \ll X^{1/4 - c}$ for infinitely many $X$, thereby establishing the claim.
Suppose, then, that (ii) holds for all sufficiently large $X$. That is, there are rational quadratics $\psi_{A}, \psi_{B}$ of height at most $X^{\rho}$ such that $|A\setminus \psi_{A}(\Q)|, |B \setminus \psi_{B}(\Q)| \leq X^{1/2-c}$. Write $A' := A \cap \psi_{A}(\Q)$ and $B' := B \cap \psi_{B}(\Q)$. Certainly $A' + B'$ contains no composite numbers. By Lemma \ref{lem6.2}, for all $X$ at least one of $A', B'$ has cardinality $\ll X^{1/3}$, and this means that indeed either $\A[X]$ or $\B[X]$ has size at most $X^{1/2-c}$ for infinitely many $X$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem6.2}] Write $\psi_{A}(x) = \frac{1}{d_A}(a_A x^2 + b_A x + c_A)$, $\psi_{B}(x) = \frac{1}{d_B}(a_B x^2 + b_B x + c_B)$. Here $a_A, a_B, b_A, b_B, c_A, c_B, d_A, d_B $ are integers, all of magnitude at most $H = X^{\rho}$. Set $Y := (H^{2}X)^{1/4}$. If $A + B$ contains no composite number then the set $(A \cap (Y, X]) + (B \cap (Y, X])$ contains no multiple of any prime $p \leq Y$. Note also that $\psi_{A}^{-1}(\Z) \subset \frac{1}{a_A}\Z$ and $\psi_{B}^{-1}(\Z) \subset \frac{1}{a_B} \Z$. Set
\[ S_A := \{ x \in \Z : \psi_{A}(\frac{x}{a_A}) \in A \cap (Y, X] \}, \quad S_B := \{ x \in \Z : \psi_{B}(\frac{x}{a_B}) \in B \cap (Y, X] \},\] and note that $\psi_{A}(\frac{x_A}{a_A}) + \psi_{B}(\frac{x_B}{a_B}) \not\equiv 0 \md{p}$ whenever $x_A \in S_A$, $x_B \in S_B$, and $p \leq Y$ is a prime.
To prove Lemma \ref{lem6.2}, it suffices to show that either $|S_A|$ or $|S_B|$ has size $\ll X^{1/3}$.
Note furthermore (by completing the square) that $S_A,S_B \subset [-4(H^{2}X)^{1/2}, 4(H^{2}X)^{1/2}]$.
We will focus attention only on those primes $p \leq Y$ for which $(-a_A a_B d_A d_B | p) = 1$, that is to say for which $-a_A a_B d_A d_B$ is a square modulo $p$. We look for such primes amongst the $p \leq Y$ with $p \equiv 1 \md{8}$. Since both $-1$ and $2$ are squares modulo such a prime, it certainly suffices to additionally ensure that $(q_1 |p) = 1, \dots, (q_k | p) = 1$, where $q_1,\dots, q_k$ are the distinct odd primes dividing $a_A a_B d_A d_B$. This is equivalent to $p$ lying in a union of $(q_1 - 1) \dots (q_k - 1)/2^k$ congruence classes modulo $q_1 \dots q_k$. Together with the condition $p \equiv 1\md{8}$, we get a union of at least $2^{-k-2} \phi(q)$ congruence classes modulo $q := 8 q_1 \dots q_k$. Since $|a_A|, |a_B|, |d_A|, |d_B| \leq H$ we have $q \leq 8 H^4$. Now we invoke \cite[Corollary 18.8]{iwaniec-kowalski}, a quantitative version of Linnik's theorem on the least prime in an arithmetic progression, which implies that there are at least $\frac{Y}{\phi(q)\sqrt{q}\log Y}$ primes $p \leq Y$ lying in each of these congruence classes, and hence $\gg \frac{2^{-k}Y}{\sqrt{q}\log Y}$ such primes in total. (Here we used the fact that $H = X^{\rho}$ with $\rho$ sufficiently small.) Write $\mathscr{P}$ for the set of such primes; thus $|\mathscr{P}| \gg X^{1/4 - o(1)}H^{-2}$.
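The class count here rests on the identity $\phi(8 q_1 \dots q_k) = 4(q_1-1)\dots(q_k-1)$ for distinct odd primes $q_i$, so that $(q_1-1)\dots(q_k-1)/2^k = 2^{-k-2}\phi(q)$. This is immediate from multiplicativity, and easy to confirm numerically; the snippet below is our illustration (the choice of primes is arbitrary), not part of the argument.

```python
from math import prod

def totient(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

qs = [3, 5, 7, 11]          # distinct odd primes q_1, ..., q_k
k = len(qs)
q = 8 * prod(qs)

# number of residue classes mod q allowed by p = 1 (mod 8) together with
# the k quadratic-residue conditions: (q_i - 1)/2 choices for each q_i
classes = prod(qi - 1 for qi in qs) // 2**k

assert totient(q) == 4 * prod(qi - 1 for qi in qs)
assert classes == totient(q) // 2**(k + 2)
```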
Now suppose that $p$ is such a prime and that $-a_A d_A/a_B d_B \equiv m^2 \md{p}$. Then we have
\begin{align*} \psi_A(\frac{x_A}{a_A}) + \psi_B (\frac{x_B}{a_B}) = \frac{1}{a_A d_A} (x_{A}^{2} + b_{A}x_{A} + a_{A}c_{A}) & + \frac{1}{a_B d_B} (x_{B}^{2} + b_{B}x_{B} + a_{B}c_{B}) \\ & \equiv \frac{1}{a_A d_A} \big(( x_A + c_1)^2 - (mx_B + c_2)^2 + c_3\big) \md{p}, \end{align*} where $c_1,c_2,c_3$ do not depend on $x_A, x_B$. Therefore for each prime $p \in \mathscr{P}$ we have one of the following alternatives.
\begin{enumerate}
\item $c_3 \equiv 0$ modulo $p$. Then whenever $x_A \in S_A \md{p}$ we must have $\frac{1}{m}(x_A + c_1 - c_2) \notin S_B \md{p}$, whence $|S_A \md{p}| + |S_B \md{p}| \leq p$.
\item $c_3 \not\equiv 0$ modulo $p$. Then we have $\psi_{A}(\frac{x_A}{a_A}) + \psi_B(\frac{x_B}{a_B}) \equiv 0 \md{p}$ whenever
\[ (x_A + mx_B + c_1 + c_2)(x_A - mx_B + c_1 - c_2) \equiv -c_3,\]
an equation which has $p-1$ solutions $(x_A, x_B)$. This solution set must be disjoint from $S_A \times S_B$.
Since each $x_A$ corresponds to at most two values of $x_B$, and each $x_B$ to at most two values of $x_A$, this forces at least one of $S_A \md{p}, S_B \md{p}$ to have size $\leq 7p/8$, say.
\end{enumerate}
In both cases at least one of $S_A \md{p}, S_B \md{p}$ has size $\leq 7p/8$. Without loss of generality, $|S_A \md{p}| \leq 7p/8$ holds for at least half the elements of $\mathscr{P}$. Finally the large sieve, as in Proposition \ref{classicls}, tells us that $|S_A| \ll \frac{(H^{2}X)^{1/2}}{|\mathscr{P}|} \ll H^3X^{1/4 + o(1)}$. If $\rho$ is small enough then this is $\ll X^{1/3}$, as required.
\end{proof}
| {
"timestamp": "2013-11-26T02:10:27",
"yymm": "1311",
"arxiv_id": "1311.6176",
"language": "en",
"url": "https://arxiv.org/abs/1311.6176",
"abstract": "Suppose that an infinite set $A$ occupies at most $\\frac{1}{2}(p+1)$ residue classes modulo $p$, for every sufficiently large prime $p$. The squares, or more generally the integer values of any quadratic, are an example of such a set. By the large sieve inequality the number of elements of $A$ that are at most $X$ is $O(X^{1/2})$, and the quadratic examples show that this is sharp. The simplest form of the inverse large sieve problem asks whether they are the only examples. We prove a variety of results and formulate various conjectures in connection with this problem, including several improvements of the large sieve bound when the residue classes occupied by $A$ have some additive structure. Unfortunately we cannot solve the problem itself.",
"subjects": "Number Theory (math.NT)",
"title": "Inverse questions for the large sieve",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363485313248,
"lm_q2_score": 0.8128673178375735,
"lm_q1q2_score": 0.8001348474806889
} |
https://arxiv.org/abs/2106.10388 | Upper bounds for critical probabilities in Bernoulli Percolation models | We consider bond and site Bernoulli Percolation in both the oriented and the non-oriented cases on $\mathbb{Z}^d$ and obtain rigorous upper bounds for the critical points in those models for every dimension $d \geq 3$. | \section{Introduction}
The study of Bernoulli percolation on $\mathbb{Z}^d$ is more than 60 years old and the existence of a non-trivial phase transition for $d\geq 2$ is well established for the model and several of its variants, but the exact value of the critical parameter $p_c$ is seldom known. A celebrated result of Kesten (see \cite{K3}) states that the critical probability for Bernoulli bond percolation on $\mathbb{Z}^2$ is $1/2$. Beyond that, a handful of planar lattices have had their critical probability established, and planarity was always the key factor. On the other hand, for dimensions $d\geq 3$, there is not much hope of finding exact values for the critical probability, and the best we can expect are numerical results via Monte Carlo methods (see the very efficient algorithm in \cite{NZ}), statistical estimates based on a comparison with dependent percolation (see Section 6.2 of \cite{BR} for an overview), or rigorous bounds (see for instance \cite{W,WP}). In this paper, we focus on finding rigorous upper bounds for site and bond Bernoulli Percolation on $\mathbb{Z}^d$ for every $d\geq 3$, in both the oriented and the non-oriented cases.
The primary tools we use are couplings between the models we seek to understand and models where bounds or precise values for the critical probabilities are known. Although those bounds are still far from the values obtained from Monte Carlo simulations, we believe that most of them are the best rigorous upper bounds in the literature.
The remainder of the text is organized as follows: in Section \ref{sec:model} we define the models more precisely and state the main results; in Section \ref{sec:couplings} we establish some dynamical couplings; in Section \ref{sec:proofs} we prove the theorems; and in Section \ref{sec:comments} we give a numerical table with upper bounds for the critical probabilities in homogeneous Bernoulli percolation models for dimensions up to $d=9$.
\section{The models and main results}\label{sec:model}
For $d \geq 1$, the underlying graph for all the models will be the $d$-dimensional hypercubic lattice in which the set of vertices is $\mathbb{Z}^d$ and the set of edges is the set of unordered pairs $E(\mathbb{Z}^d) := \{ \langle v, u \rangle : v, u \in \mathbb{Z}^d ~ \text{and} ~ |v-u| =1\}$. We abuse notation and denote this graph simply by $\mathbb{Z}^d$.
\subsection{Bond percolation}
Given $p \in [0,1]$, consider a family of independent random variables $\{X_e\}_{e \in E(\mathbb{Z}^d)}$, where, for each $e \in E(\mathbb{Z}^d)$, $X_e$ has Ber($p$) distribution. Let $\mu_e$ be the law of $X_e$, and let $\mathbb{P}_p := \prod_{e \in E(\mathbb{Z}^d)} \mu_e$ be the resulting product measure. We declare an edge $e$ to be {\it open} if $X_e=1$ and {\it closed} otherwise.
We first consider the non-oriented case and denote by $\{x \leftrightarrow y\}$ the event where $x, y \in \mathbb{Z}^d$ are connected by an open path, i.e., there exist $x_0, \dots, x_n$ such that $x_0 = x$, $x_n =y$ and each $\langle x_{j-1}, x_j \rangle$ belongs to $ E(\mathbb{Z}^d)$ and is open for $j=1, \dots, n$. Let $\mathcal{C}^b_0 := \{ x \in \mathbb{Z}^d: 0 \leftrightarrow x\}$ be the open cluster of the origin, and $|\mathcal{C}^b_0|$ its size.
We define the percolation probability by
$\theta^{b}_d(p) := \mathbb{P}_p(|\mathcal{C}^b_0|= \infty)$. The critical point for the {\it non-oriented bond} Bernoulli percolation model will be denoted by \[p_c^{b}(d)=\sup\{p \geq 0 : \theta^b_d(p)=0\}.\]
We now consider the oriented case. Let $\{e_1, \dots, e_d\}$ be the set of positive unit vectors of $\mathbb{Z}^d$. We denote by $\{x \rightarrow y\}$ the event where $x, y \in \mathbb{Z}^d$ are connected by an oriented open path, i.e., there exist $x_0, \dots, x_n$ such that $x_0 = x$, $x_n =y$ and for each $j=1, \dots, n$, we have $x_j = x_{j-1} + e$, for some $e \in \{e_1, \dots, e_d\}$ and $\langle x_{j-1}, x_j\rangle$ is open. Let $\vv{\mathcal{C}}^b_0 := \{ x \in \mathbb{Z}^d : 0 \rightarrow x\}$ be the oriented open cluster of the origin, and $|\vv{\mathcal{C}}^b_0|$ its size.
Analogously, we define
$\vv{\theta}^{\, b}_d(p) := \mathbb{P}_p(|\vv{\mathcal{C}}^b_0|= \infty)$ the corresponding oriented percolation probability, and we denote the critical point for the {\it oriented bond} Bernoulli percolation model by \[\vv{p}_c^{\, b}(d)=\sup\{p \geq 0 : \vv{\theta}^{\,b}_d(p)=0\}.\]
Our results are the following:
\begin{theorem}\label{theo:bounds} Consider non-oriented bond Bernoulli percolation and
let $p^{\ast}(d)$ be the unique solution in $(0,1)$ of
\[\prod_{i=0}^2 \left(1-(1-p)^{\lfloor\frac{d+i}{3}\rfloor}\right)=2-\sum_{i=0}^2(1-p)^{\lfloor\frac{d+i}{3}\rfloor}.\]
Then, for every $d\geq 3$, we have $p_c^{b}(d) \leq p^{\ast}(d)$.
\end{theorem}
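The equation defining $p^{\ast}(d)$ is straightforward to solve numerically: the left-hand side minus the right-hand side equals $1$ at $p=0$ and $-1$ at $p=1$, so bisection applies. The sketch below is ours, not the authors' code; for $d=3$ all three exponents equal $1$, the equation reduces to $p^3 = 3p-1$, and the root in $(0,1)$ is $2\sin(\pi/18) \approx 0.3473$, the bond critical point of the triangular lattice.

```python
from math import sin, pi

def p_star(d, tol=1e-12):
    """Bisection solve for p*(d) in (0,1) (illustrative sketch)."""
    exps = [(d + i) // 3 for i in range(3)]

    def f(p):
        lhs = 1.0
        for e in exps:
            lhs *= 1 - (1 - p) ** e
        rhs = 2 - sum((1 - p) ** e for e in exps)
        return lhs - rhs

    lo, hi = 0.0, 1.0          # f(0) = 1 > 0 and f(1) = -1 < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# d = 3 recovers the triangular-lattice value 2*sin(pi/18) = 0.34729...
assert abs(p_star(3) - 2 * sin(pi / 18)) < 1e-9
```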
\begin{theorem}\label{theo:bondO}
Consider oriented bond Bernoulli percolation on $\mathbb{Z}^d$.
1) If $d$ is even, then $ \vv{p}^{\,b}_c(d)\leq 1-(1/3)^{2/d}$;
2) For $d \geq 4$, we have that \[\vv{p}^{\,b}_c(d)\leq \frac{1}{d}+\frac{C_d}{d^2},\]
where \[C_d=1+\frac{8}{d}+\frac{d^{5/2}}{(\sqrt{2\pi})^{d-1}}\left[\frac{d-1}{d-3}\right]e^{\frac{1}{12d}}.\]
3) For any dimension $d\geq 2$, we have that $\vv{p}^{\,b}_c(d+1)\leq f(d)$, where $f(d)$ is the unique solution in $(0,1)$ of
\[ p = \vv{p}^{\,b}_c(d) \left[p + (1-p)^{(d+1)/d}\right]. \]
\end{theorem}
{\bf Remark:} It is known (see \cite{CD}) that $\vv{p}^{\,b}_c(d)\sim 1/d$, hence the third upper bound above is asymptotically sharp.
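Item 3 converts any bound for $\vv{p}^{\,b}_c(d)$ into a bound for $\vv{p}^{\,b}_c(d+1)$ by solving a one-variable fixed-point equation, which can again be done by bisection. The snippet below is an illustrative sketch; the input value $0.6447$ is a commonly quoted numerical estimate for $\vv{p}^{\,b}_c(2)$ (an assumption for illustration, not a rigorous bound from this paper).

```python
def crossover_bound(pc_d, d, tol=1e-12):
    """Solve p = pc_d * (p + (1 - p)**((d + 1)/d)) for p in (0, 1).

    pc_d: the critical point (or an upper bound for it) of oriented bond
    percolation on Z^d; the root is the resulting bound for Z^(d+1).
    Illustrative sketch only.
    """
    def g(p):
        return p - pc_d * (p + (1 - p) ** ((d + 1) / d))

    lo, hi = 0.0, 1.0          # g(0) = -pc_d < 0 and g(1) = 1 - pc_d > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# assumed numerical estimate for d = 2, fed through the recursion once
bound_3 = crossover_bound(0.6447, 2)
```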
\subsection{Site percolation}
Given a parameter $p \in [0,1]$, we consider a family $\{X_v\}_{v \in \mathbb{Z}^d}$ of independent Bernoulli random variables with parameter $p$. As before, $\mathbb{P}_p$ will denote the resulting product measure. A vertex $v \in \mathbb{Z}^d$ is declared to be {\it open} if $X_v = 1$ and {\it closed} otherwise.
Now, a sequence of vertices $(x_0, \dots, x_n)$ is said to be an open path if all the vertices are open and, for each $j = 1, \dots, n$, $\langle x_{j-1}, x_j\rangle \in E(\mathbb{Z}^d)$. If, in addition, for each $j = 1, \dots, n$ we have $x_j = x_{j-1}+ e$ for some $e \in \{e_1, \dots, e_d\}$, the sequence is said to be an oriented open path. With these definitions in hand, we define $\mathcal{C}^s_0$, $\vv{\mathcal{C}}^s_0$, $\theta_d^{\, s}(p)$ and $\vv{\theta}_d^{\, s}(p)$ accordingly.
Finally, the critical points for {\it non-oriented} and {\it oriented} site Bernoulli percolation are respectively given by
\[ p_c^{s}(d)=\sup\{p \geq 0 : \theta^s_d(p)=0\} \quad \quad \text{and} \quad \quad \vv{p}_c^{\, s}(d)=\sup\{p \geq 0 : \vv{\theta}^{\,s}_d(p)=0\} .\]
For site percolation models, our results are the following:
\begin{theorem}\label{theo:siteNO}
Consider non-oriented site Bernoulli percolation on $\mathbb{Z}^d$.
1) If $d$ is even, then $ p_c^{s}(d)\leq 1-(0.32)^{2/d}$;
2) If $d$ is divisible by 3, then $p_c^{s}(d)\leq 1-(0.5)^{3/d}$;
3) For any dimension $d\geq 2$, we have that $p_c^{s}(d+1)\leq g(d)$, where $g(d)$ is the unique solution in $(0,1)$ of
\[ p = p^{\,s}_c(d) \left[p + (1-p)^{2d/(2d-1)}\right]. \]
\end{theorem}
\begin{theorem}\label{theo:siteO}
Consider oriented site Bernoulli percolation on $\mathbb{Z}^d$.
1) If $d$ is even, then $ \vv{p}_c^{\,s}(d) \leq 1-(0.25)^{2/d}$;
2) For any dimension $d \geq 2$, we have that $\vv{p}_c^{\,s}(d+1) \leq h(d)$, where $h(d)$ is the unique solution in $(0,1)$ of
\[p = \vv{p}^{\,s}_c(d) \left[p + (1-p)^{(d+1)/d}\right]. \]
\end{theorem}
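For concreteness, the closed-form bounds in the last two theorems evaluate directly (the constants $0.32$, $0.5$ and $0.25$ are the decimal constants appearing there); the snippet below is merely an illustrative tabulation.

```python
# Illustrative evaluation of the closed-form upper bounds:
#   non-oriented site, d even:      1 - 0.32**(2/d)
#   non-oriented site, d = 0 mod 3: 1 - 0.5**(3/d)
#   oriented site,     d even:      1 - 0.25**(2/d)
for d in (4, 6, 8):
    print(d, round(1 - 0.32 ** (2 / d), 4), round(1 - 0.25 ** (2 / d), 4))
for d in (3, 6, 9):
    print(d, round(1 - 0.5 ** (3 / d), 4))
```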
\section{The Dynamical Couplings}\label{sec:couplings}
Although we are only interested in homogeneous percolation, some of the tools we will use are related to couplings between {\it anisotropic} bond percolation models.
We now define the anisotropic (or inhomogeneous) non-oriented and oriented bond percolation models. For each $i=1, \dots, d$, let $E_i = \{ \langle x, x + e_i \rangle: x \in \mathbb{Z}^d\}$ be the set of edges parallel to $e_i$. Given $p_1, \dots, p_d \in [0,1]$, we consider a family of independent random variables $\{X_e\}_{e \in E(\mathbb{Z}^d)}$, but now, for each $e \in E_i$, $X_e$ has Ber($p_i$) distribution, $i=1, \dots, d$. The open cluster of the origin $\mathcal{C}^b_0$ and the oriented open cluster of the origin $\vv{\mathcal{C}}^b_0$ are defined analogously. The probabilities that $|\mathcal{C}^b_0|$, $|\vv{\mathcal{C}}^b_0|$ are infinite will be denoted by $\theta_d(p_1, \dots, p_d)$ and $\vv{\theta}_d (p_1, \dots, p_d)$ respectively.
The first coupling is the content of Proposition 1 in \cite{GPS2}:
\begin{proposition} \label{prop:coupling}
Consider inhomogeneous non-oriented Bernoulli bond percolation in $\mathbb{Z}^d$. Let $p_1, \dots, p_{d+1} \in [0,1]$ and let $\Tilde{p}_d \in [0,1]$ be such that \[(1-\Tilde{p}_d) = (1-p_d)(1-p_{d+1}).\] Then,
$\theta_{d+1}(p_1, \dots,p_d, p_{d+1}) \geq \theta_{d}(p_1, \dots, p_{d-1}, \Tilde{p}_{d}) .$
\end{proposition}
Now we define bond and site percolation models in the triangular lattice $\mathbf{T}$. This lattice is simply $\mathbb{Z}^2$ with extra edges of the form $\langle v,v+(1,1)\rangle$. That is, $\mathbf{T} = (V_{\mathbf{T}}, E_{\mathbf{T}})$ where $V_{\mathbf{T}} = \mathbb{Z}^2$ and $E_{\mathbf{T}}$ is the set of unordered pairs $\{\langle v, u\rangle : v-u = (1,0), (0,1) \text{ or } (1,1) \}$. We will denote by $\theta^b_{\mathbf{T}}$ and $\theta^s_{\mathbf{T}}$ the corresponding percolation probability for bond and site models, respectively.
We will consider inhomogeneous bond percolation on $\mathbf{T}$, where the corresponding parameters $(p_1,p_2,p_3)$ will refer to edges of the form $\langle v,v+(1,0)\rangle$, $\langle v,v+(0,1)\rangle$ and $\langle v,v+(1,1)\rangle$, respectively.
In the next proposition, we construct a monotonic coupling between inhomogeneous bond percolation on the triangular lattice $\mathbf{T}$ with parameters $(p_1,p_2,p_3)$ and on $\mathbb{Z}^3$ with the same parameters.
\begin{proposition}\label{prop:triangular}
Let $(p_1,p_2,p_3)\in [0,1]$ and consider two inhomogeneous bond Bernoulli percolation processes on the triangular lattice and on $\mathbb{Z}^3$, both with parameters $(p_1,p_2,p_3)$. Then
\[\theta^b_{\mathbf{T}}(p_1,p_2,p_3)\leq\theta^b_3(p_1,p_2,p_3).\]
\end{proposition}
\begin{proof}
We will construct a dynamic coupling between the percolation process on $\mathbb{Z}^{3}$ with parameters $(p_1,p_2, p_3)$ and an infection process over $\mathbf{T}$. We will do it in such a way that the law of infected sites in $\mathbf{T}$ is the same as the law of the open cluster of the origin for anisotropic percolation on $\mathbb{Z}^3$ and also that, if the infection process survives, the open cluster of the origin of the process in $\mathbb{Z}^3$ must be infinite.
The coupling will be built based on a susceptible-infected strategy described as follows. First, we declare the origin of $\mathbf{T}$ as the \textit{initial} infected component. Next, at each time-step, we possibly grow the infected component and associate each new vertex $v$ of the infected component in $\mathbf{T}$ to a vertex $x(v)$ in the open cluster of the origin in $\mathbb{Z}^3$. More precisely, consider a vertex $v$ in the infected component of $\mathbf{T}$ and a neighbor $v+u$ out of the infected component. The vertex $v$, in the infected component in $\mathbf{T}$, will be associated with some vertex $x(v)$ in the open cluster of the origin in $\mathbb{Z}^3$. According as $u = \pm(1,0)$, $\pm(0,1)$ or $\pm(1,1)$, we set $\tau(u) = \pm(1,0,0)$, $\pm(0,1,0)$ or $\pm(0,0,1)$, respectively. If $\langle x(v), x(v)+\tau(u) \rangle$ is open, we infect $v+u$ and write $x(v+u) = x(v) + \tau(u)$.
Let us define the sequence of sets $(I_n, x(I_n), R_n, S_n)_{n \geq 0}$. Here, $I_n$ represents the \textit{infected vertices} in $\mathbf{T}$ and $x(I_n)$ represents the vertices in $\mathbb{Z}^{3}$ associated with the infected vertices. $R_n$ represents the \textit{removed edges} of $\mathbf{T}$. Finally, given $I_n$, $x(I_n)$ and $R_n$, the set of \textit{susceptible edges} is given by
\begin{equation*}
S_{n} := \{ \langle v, u \rangle : v \in I_{n} \: \mbox{and} \: u \notin I_{n} \} \cap R_{n}^C.
\end{equation*}
At time $n=0$, we set
\begin{itemize}
\item $I_0 = \{0\} \subset V_{\mathbf{T}}$;
\item $R_0 = \emptyset \subset E_{\mathbf{T}}$;
\item $x(0) = 0 \in \mathbb{Z}^{3}$;
\item $S_0 := \{ \langle v, u \rangle : v \in I_{0} \: \mbox{and} \: u \notin I_{0} \} \cap R_{0}^C \:\: = \{ \langle 0, \pm (1,0)\rangle, \langle 0, \pm (0,1)\rangle, \langle 0, \pm (1,1)\rangle \}$.
\end{itemize}
This means that, at time $n=0$, only the vertex $0$ is infected, and it can potentially infect any of its neighbours, so all edges containing the origin are susceptible. After that, in each step, an infected vertex tries to infect a non-infected vertex through a susceptible edge (if the latter exists). From now on, we choose an arbitrary, but fixed, ordering of the edges in $\mathbf{T}$. Suppose that $I_n, x(I_n), R_n$ and $S_n$ are already defined. If there is no susceptible edge then the process stops. More specifically, if $S_n = \emptyset$, then for all $k \geq 1$,
\begin{itemize}
\item $I_{n+k} = I_n$;
\item $R_{n+k} = R_n$;
\item $S_{n+k} = S_n$.
\end{itemize}
Otherwise, if there exists some susceptible edge, then the infected vertex in the smallest (in the previously fixed ordering) such edge tries to infect its non-infected neighbour. Let us write it in symbols. Suppose $S_n \neq \emptyset$ and let $g_n$ be the smallest edge in $S_n$.
Since $g_n \in S_n$, it has to be equal to some $\langle v, v+u_n \rangle$, where $v \in I_n$, $v+u_n \notin I_n$, and $u_n \in \{ \pm (0,1), \pm (1,0), \pm (1,1)\}$.
We set $\tau(\pm(1,0))=\pm (1,0,0)$, $\tau(\pm(0,1))=\pm (0,1,0)$ and $\tau(\pm(1,1))=\pm (0,0,1)$.
Then, $v$ \textit{infects} $v+u_n$ if $\langle x(v), x(v)+\tau (u_n) \rangle$ is open in $\mathbb{Z}^{3}$. More precisely, if $\langle x(v), x(v)+\tau (u_n) \rangle$ is open in $\mathbb{Z}^{3}$ then we write
\[ I_{n+1} := I_n \cup \{v+u_n\},\]
and define
\[ x(v+u_n) := x(v) + \tau(u_n). \]
Otherwise, if $\langle x(v), x(v)+ \tau(u_n) \rangle$ is closed in $\mathbb{Z}^{3}$, we set $I_{n+1}:= I_n$.
Now that we have explored $g_n$, we \textit{remove} it and write
\[ R_{n+1} := R_n \cup \{ g_n \}.\]
Next, to conclude our induction step, we set
\begin{equation*}
S_{n+1} := \{ \langle v, u \rangle : v \in I_{n+1} \: \mbox{and} \: u \notin I_{n+1} \} \cap R_{n+1}^C.
\end{equation*}
Observe that the function $x: \cup_n I_n \longrightarrow \mathbb{Z}^{3}$ is injective.
In fact, if $v= (v_1, v_2) \in I_n$, then by construction, we have that $x(v) = (x_1,x_2,x_3)$ satisfies
\[ v_1=x_1+x_3 \mbox{ and }
v_2=x_2+x_3.\]
Now, observe that the image of $x$ is contained in the open cluster of the origin $\mathcal{C}^b_0$ of $\mathbb{Z}^{3}$. Since $x$ is injective, $|\cup_n I_n| \leq |\mathcal{C}^b_0|$. Also, note that $\cup_{n} I_n$ has the same law as $\mathcal{C}^{\mathbf{T}}_0$, where $\mathcal{C}^{\mathbf{T}}_0$ is the open cluster of the origin in $\mathbf{T}$ with parameters $p_1, p_2,p_3$. Therefore,
\begin{equation*}
\theta^b_{\mathbf{T}}(p_1,p_2,p_3) \leq \theta_{3}^b (p_1,p_2,p_3),
\end{equation*}
and the proof of Proposition \ref{prop:triangular} follows.
\end{proof}
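The susceptible-infected exploration in the proof of Proposition \ref{prop:triangular} can be simulated directly. The sketch below is our own simplified illustration (restricted to a finite box so that it terminates, with edge states of $\mathbb{Z}^3$ sampled lazily and memoized); it also checks the injectivity relation $v_1 = x_1 + x_3$, $v_2 = x_2 + x_3$ used at the end of the proof.

```python
import random

def explore(p, n_box=5, seed=0):
    """Susceptible-infected exploration coupling (our simplified sketch):
    grow the infected component of the origin in the triangular lattice T,
    deciding each infection by the state of a lazily sampled, memoized edge
    of Z^3 with parameters p = (p1, p2, p3).  Restricted to the box
    max(|v1|, |v2|) <= n_box so the exploration terminates.
    """
    rng = random.Random(seed)
    p1, p2, p3 = p
    base = {(1, 0): ((1, 0, 0), p1),
            (0, 1): ((0, 1, 0), p2),
            (1, 1): ((0, 0, 1), p3)}
    dirs = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)]

    infected = {(0, 0)}          # I_n
    x = {(0, 0): (0, 0, 0)}      # the map v -> x(v) into Z^3
    removed = set()              # R_n: explored edges of T
    edges3 = {}                  # memoized open/closed edges of Z^3

    def frontier():
        # smallest susceptible edge g_n in a fixed ordering
        for v in sorted(infected):
            for u in dirs:
                w = (v[0] + u[0], v[1] + u[1])
                if w in infected or max(abs(w[0]), abs(w[1])) > n_box:
                    continue
                e = (min(v, w), max(v, w))
                if e not in removed:
                    return v, u, w, e
        return None

    while (s := frontier()) is not None:
        v, u, w, e = s
        t3, prob = base[(abs(u[0]), abs(u[1]))]
        if u[0] < 0 or u[1] < 0:
            t3 = tuple(-c for c in t3)       # tau(-u) = -tau(u)
        a = x[v]
        b = (a[0] + t3[0], a[1] + t3[1], a[2] + t3[2])
        key = (min(a, b), max(a, b))
        if key not in edges3:                # sample each Z^3 edge once
            edges3[key] = rng.random() < prob
        if edges3[key]:                      # the Z^3 edge is open: infect w
            infected.add(w)
            x[w] = b
        removed.add(e)                       # g_n has been explored

    # injectivity of x via the projection v = (x1 + x3, x2 + x3)
    for (v1, v2), (x1, x2, x3) in x.items():
        assert (v1, v2) == (x1 + x3, x2 + x3)
    return infected

# with all edges open the whole box is infected; with none, only the origin
print(len(explore((1.0, 1.0, 1.0))), len(explore((0.0, 0.0, 0.0))))
```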
To conclude this section, we construct two couplings that will be used to prove Item 3 of Theorem \ref{theo:bondO}, along with its site version, which will be used in the proofs of Item 3 of Theorem \ref{theo:siteNO} and Item 2 of Theorem \ref{theo:siteO}. These couplings are reminiscent of the coupling in \cite{GSS} and give an upper bound for the critical probability for any percolation model in $\mathbb{Z}^{d+1}$ as a function of the corresponding critical probability in $\mathbb{Z}^d$. Since our goal is to obtain better upper bounds, we need to improve the coupling.
\begin{proposition}\label{prop:crossover}
Consider oriented bond Bernoulli percolation on $\mathbb{Z}^{d+1}$ and suppose that
\begin{equation} \label{eq:crossbond}
p > \vv{p}^{\,b}_c(d) \left[p + (1-p)^{(d+1)/d}\right].
\end{equation}
Then, $\vv{\theta}^{\,b}_{d+1}(p)>0$.
\end{proposition}
\begin{proof}
Let ${\mathcal{E}} = \{e_1, \dots, e_d\}$ denote the set of positive unit vectors of $\mathbb{Z}^d$. To avoid ambiguities, we denote the set of positive unit vectors of $\mathbb{Z}^{d+1}$ by $\{ u_1, \dots, u_{d+1}\}$. Consider the multigraph $\mathbb{Z}^{d+1}_{\mathcal{E}}$ defined as follows: the set of vertices is $\mathbb{Z}^{d+1}$ and the set of edges is given by $E(\mathbb{Z}^{d+1}_{\mathcal{E}}):= \left( \cup_{i=1}^d E_i \right) \cup E_{\mathcal{E}}$ where $E_i = \{ \langle x, x + u_i \rangle: x \in \mathbb{Z}^{d+1}\}$, $i=1, \dots, d$, and we define $E_{\mathcal{E}} := \{\langle v, v+u_{d+1}\rangle_e : v \in \mathbb{Z}^{d+1} , e \in {\mathcal{E}}\}$. In words, each edge of $\mathbb{Z}^{d+1}$ parallel to $u_{d+1}$ is replaced by $|{\mathcal{E}}|$ parallel edges indexed by ${\mathcal{E}}$ in $\mathbb{Z}^{d+1}_{\mathcal{E}}$, while edges parallel to all other directions remain unmodified.
We prove this proposition in two steps. First, we show that, with suitable parameters, the model on the multigraph $\mathbb{Z}^{d+1}_{\mathcal{E}}$ defined above is equivalent to the homogeneous model on $\mathbb{Z}^{d+1}$ with parameter $p$. To conclude, we show that if $p$ satisfies Inequality \eqref{eq:crossbond}, then this model on $\mathbb{Z}^{d+1}_{\mathcal{E}}$ dominates a supercritical model on $\mathbb{Z}^{d}$.
Consider now inhomogeneous oriented bond Bernoulli percolation on $\mathbb{Z}^{d+1}_{\mathcal{E}}$, where edges in $\cup_{i=1}^d E_i$ are open with probability $p$ and edges in $E_{\mathcal{E}}$ are open with probability $q$, where $q$ is such that $(1-p) = (1-q)^{|{\mathcal{E}}|}$. Clearly, the distribution of the open cluster in this model is the same as in the homogeneous model on $\mathbb{Z}^{d+1}$ with parameter $p$. We will construct a coupling showing that the model on $\mathbb{Z}_{\mathcal{E}}^{d+1}$ with parameters $p$ and $q$ as above, dominates the homogeneous model on $\mathbb{Z}^d$ with parameter $p/[1 - (1-p)q]$.
First, for each $e_i \in \mathcal{E}$, let $ \sigma(e_i):= u_i \in \mathbb{Z}^{d+1}$. Then, for each $v \in \mathbb{Z}^{d+1}$ and each $e \in {\mathcal{E}}$, let $A(v,e)$ be the event where either the edge $\langle v, v+\sigma(e) \rangle$ is open or, for some $k \geq 1$,
\begin{itemize}
\item the edges $\langle v+ iu_{d+1}, v + (i+1)u_{d+1} \rangle_e$, $i = 0, \dots, k-1$, are open and
\item the edges $\langle v + iu_{d+1}, v + iu_{d+1} + \sigma(e) \rangle$, $i = 0, \dots, k-1$ are closed, and
\item $\langle v + ku_{d+1}, v + ku_{d+1} + \sigma(e) \rangle$ is open.
\end{itemize}
In the event $A(v,e)$, we define $u(v,e) = v+\sigma(e)$ if $\langle v, v+\sigma(e) \rangle$ is open, or $u(v,e) = v + k u_{d+1} + \sigma(e)$ if $k \geq 1$ is such that the above three conditions are met. Observe that
\begin{eqnarray*}
\mathbb{P}(A(v,e)) &=&
p\sum_{i = 0}^{\infty} \left[q(1-p)\right]^i
= \frac{p}{1 - (1-p)q}\\
&=& \frac{p}{1-(1-p)(1-(1-p)^{1/d})} \\
&=& \frac{p}{ p + (1-p)^{(d+1)/d} }.
\end{eqnarray*}
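The substitution in the last display, $q = 1-(1-p)^{1/d}$ so that $1-(1-p)q = p + (1-p)^{(d+1)/d}$, can be double-checked numerically; the snippet below is only a sanity check, not part of the argument.

```python
# Sanity check of the substitution q = 1 - (1-p)**(1/d), i.e. (1-p) = (1-q)**d:
# the geometric series p / (1 - (1-p)*q) then equals p / (p + (1-p)**((d+1)/d)).
for d in range(2, 8):
    for p in (0.1, 0.3, 0.6447, 0.9):
        q = 1 - (1 - p) ** (1 / d)
        lhs = p / (1 - (1 - p) * q)
        rhs = p / (p + (1 - p) ** ((d + 1) / d))
        assert abs(lhs - rhs) < 1e-12, (d, p)
print("identity verified")
```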
Similarly to what was done in Proposition \ref{prop:triangular}, we now build the sequence of sets $(I_n, x(I_n), R_n, S_n)_{n \geq 0}$. For $n =0$, we set
\begin{itemize}
\item $I_0 = \{0\} \subset \mathbb{Z}^d$;
\item $R_0 = \emptyset \subset E(\mathbb{Z}^d)$;
\item $x(0) = 0 \in \mathbb{Z}_{\mathcal{E}}^{d+1}.$
\end{itemize}
Suppose that, for some $n \geq 0$, $I_n, x(I_n)$ and $R_n$ are already defined. Then $S_n \subset E(\mathbb{Z}^d)$ is given by
\begin{equation*}
S_{n} := \{ \langle v, u \rangle : v \in I_{n} \: \mbox{and} \: u \notin I_{n} \} \cap R_{n}^C.
\end{equation*}
If $S_n = \emptyset$, our sequence becomes constant, that is,
\begin{equation}\label{eq:stop}
(I_{n+k}, x(I_{n+k}), R_{n+k}, S_{n+k}) = (I_n, x(I_n), R_n, S_n), \quad \forall k \geq 1.
\end{equation}
Otherwise, let $g_n = \langle v , v + e \rangle$ be the smallest (according to a prefixed ordering, as in Proposition~\ref{prop:triangular}) edge of $S_n$, where $v \in I_n$, $e \in {\mathcal{E}}$, and $v+e \notin I_n$. We set $R_{n+1} = R_n \cup \{g_n\}$. We also set
\begin{equation*}
I_{n+1} =
\begin{cases} I_n \cup \{v+e\}, &\text{if } A(x(v),e) \text{ occurs};\\
I_n, & \text{otherwise}.
\end{cases}
\end{equation*}
In case $A(x(v),e)$ occurs, we set $x(v+e) = u(x(v),e)$.
Once our sequence $(I_n, x(I_n), R_n, S_n)_ {n \geq 0}$ is built, note that by construction, the function $x: \cup_n I_n \longrightarrow \mathbb{Z}^{d+1}_{\mathcal{E}}$ is injective. In fact, for each $v \in \cup_n I_n$, the projection of $x(v)$ into $\mathbb{Z}^d$ is equal to $v$. The conclusion of the proof follows in a similar way to that of Proposition \ref{prop:triangular}.
\end{proof}
\begin{proposition}\label{prop:crossoversite}
Consider non-oriented site Bernoulli percolation on $\mathbb{Z}^{d+1}$ and suppose that
\begin{equation}\label{eq:cross1}
p > p^s_c(d) \left[p + (1-p)^{2d/(2d-1)}\right].
\end{equation}
Then $\theta^s_{d+1}(p)>0$.
\end{proposition}
\begin{proof}
The strategy of the proof is similar to that of the previous proposition. As before, let $\{e_1, \dots, e_d\}$ denote the set of positive unit vectors of $\mathbb{Z}^d$ and let $\{ u_1, \dots, u_{d+1}\}$ denote the set of positive unit vectors of $\mathbb{Z}^{d+1}$. We will consider a graph in which each vertex of $\mathbb{Z}^{d+1}$ is split into $2d-1$ vertices.
For each $v \in \mathbb{Z}^{d+1}$, we define the set of split vertices $V_v := \{v^{(1)}, \dots, v^{(2d-1)}\}$. Let $\mathbb{Z}^{d+1}_V$ be the graph with vertex set $\cup_{v \in \mathbb{Z}^{d+1}} V_v$ and edge set
\begin{equation*}
E(\mathbb{Z}^{d+1}_V) := \left\{ \langle x,y \rangle ~:~ x \in V_v \text{ and } y \in V_u, \text{ for some } \langle v,u \rangle \in E(\mathbb{Z}^{d+1}) \right\}.
\end{equation*}
Given $p$ satisfying \eqref{eq:cross1}, let $q$ be such that $(1-p) = (1-q)^{2d-1}$. We will consider the non-oriented site Bernoulli percolation model on $\mathbb{Z}^{d+1}_V$ with parameter $q$. Note that the distribution of the open cluster of the origin in this new model is the same as in the model on $\mathbb{Z}^{d+1}$ with parameter $p$.
To build the coupling between $\mathbb{Z}^{d+1}_V$ with parameter $q$ and $\mathbb{Z}^d$ with parameter $p/[1 - (1-p)q]$, we again construct a sequence of sets $(I_n, x(I_n), R_n, S_n)_{n \geq 0}$. Since we are considering a site model, for each $n \geq 0$, the sets $R_n$ and $S_n$ will be, respectively, the sets of {\it removed} and {\it susceptible} vertices (instead of edges) at step $n$. We start with two infected vertices (otherwise the origin would have to be split into $2d$ vertices instead of $2d-1$). For $n = 0$, we set
\begin{itemize}
\item $I_0 = \{0, e_1\} \subset \mathbb{Z}^d$;
\item $R_0 = \emptyset \subset \mathbb{Z}^d$;
\item $x(0) = 0^{(1)} \in \mathbb{Z}_V^{d+1} \text{ and } x(e_1) = u_1^{(1)} \in \mathbb{Z}_V^{d+1}. $
\end{itemize}
Inductively, if for some $n \geq 0$, the sets $I_n, x(I_n)$ and $R_n$ are already defined, we set
\begin{equation}\label{eq:SnSITE}
S_n = \{u \in \mathbb{Z}^d : \langle v,u \rangle \in E(\mathbb{Z}^d) \text{ for some } v \in I_n\} \cap (I_n \cup R_n)^C.
\end{equation}
If $S_n = \emptyset$, the sequence becomes constant, as in \eqref{eq:stop}. Otherwise, let $a_n$ be the smallest (in a preset ordering) vertex of $S_n$. We can write $a_n = v+ e$, such that $v \in I_n$ and $e \in \{\pm e_1, \dots, \pm e_d\}$. In this case, let $j_n$ denote the number of susceptible neighbors of $v$ (including $a_n$), that is
\begin{equation*}
j_n := \left\vert \{ u \in S_n : \langle v, u \rangle \in E(\mathbb{Z}^d) \} \right\vert .
\end{equation*}
Since we start with two adjacent infected vertices, every $v \in I_n$ has at least one neighbor that is not susceptible, so $j_n \neq 2d$, and therefore $1 \leq j_n \leq 2d-1$.
For each $i \in \{1, \dots, d\}$, we set the notation $\sigma(\pm e_i) = \pm u_i$.
Consider the event $A_n$ in which either the vertex $(x(v)+\sigma(e))^{(\ell)}$ is open for some $\ell \in \{1, \dots, 2d-1 \}$, or, for some $k \geq 1$ and $\ell \in \{1, \dots, 2d-1\}$,
\begin{itemize}
\item the vertices $\left( x(v) + iu_{d+1} \right)^{(j_n)}$, $i = 1, \dots, k$, are open, and
\item the vertices $\left( x(v) + i u_{d+1} + \sigma(e) \right)^{(j)}$ are closed for all $j \in \{1, \dots, 2d-1 \}$ and $i = 0, \dots, k-1$, and
\item the vertex $\left( x(v) + ku_{d+1} + \sigma(e) \right)^{(\ell)}$ is open.
\end{itemize}
Note that $A_n$ has probability
\begin{eqnarray*}
\mathbb{P}(A_n) &=& \left(1 - (1-q)^{2d-1}\right) \sum_{i=0}^{\infty} \left[q(1-q)^{2d-1}\right]^i \\
&=& \frac{p}{1 - (1-p)q}\\
&=& \frac{p}{1-(1-p)\left(1-(1-p)^{1/(2d-1)}\right)} \\
&=& \frac{p}{ p + (1-p)^{2d/(2d-1)} }.
\end{eqnarray*}
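As a sanity check (not part of the proof), the geometric series above can be summed numerically and compared with the closed form; the following Python sketch does this for a few values of $p$ and $d$, using the choice $q = 1-(1-p)^{1/(2d-1)}$ from the construction:

```python
# Numerical check: with (1 - p) = (1 - q)^(2d - 1), the series
# (1 - (1-q)^(2d-1)) * sum_i [q (1-q)^(2d-1)]^i collapses to
# p / (p + (1 - p)^(2d / (2d - 1))).

def prob_An(p: float, d: int, terms: int = 10_000) -> float:
    """Sum the defining series for P(A_n) directly (truncated)."""
    q = 1.0 - (1.0 - p) ** (1.0 / (2 * d - 1))
    ratio = q * (1.0 - q) ** (2 * d - 1)       # equals q * (1 - p) < 1
    return (1.0 - (1.0 - q) ** (2 * d - 1)) * sum(ratio ** i for i in range(terms))

def prob_An_closed(p: float, d: int) -> float:
    """Closed form derived above."""
    return p / (p + (1.0 - p) ** (2 * d / (2 * d - 1)))

for p in (0.1, 0.3, 0.7):
    for d in (2, 3, 5):
        assert abs(prob_An(p, d) - prob_An_closed(p, d)) < 1e-9
```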
To conclude our induction step, we set
\begin{itemize}
\item $R_{n+1} =
\begin{cases} R_n \cup \{v+e\}, &\text{if } A_n \text{ does not occur};\\
R_n, & \text{otherwise};
\end{cases}$
\item $I_{n+1} =
\begin{cases} I_n \cup \{v+e\}, &\text{if } A_n \text{ occurs};\\
I_n, & \text{otherwise}.
\end{cases}$
\end{itemize}
In the event $A_n$, if the vertex $(x(v)+\sigma(e))^{(\ell)}$ is open for some $\ell \in \{1, \dots, 2d-1 \}$, we set $x(v+e) =( x(v)+\sigma(e) )^{(\ell)}$. Otherwise, let $k \geq 1$ and $\ell \in \{1, \dots, 2d-1\}$ be such that the three conditions above are satisfied; in this case, we set $x(v+e) =( x(v)+ k u_{d+1} + \sigma(e) )^{(\ell)}$.
By construction, for each $v \in \cup_n I_n$, the projection of $x(v)$ onto $\mathbb{Z}^d$ is equal to $v$. Therefore, $x: \cup_n I_n \longrightarrow \mathbb{Z}^{d+1}_V$ is again injective. It follows that the site percolation model on $\mathbb{Z}^{d+1}_V$ (with $u_1^{(1)}$ declared open) dominates the infection process $(I_n)_{n \geq 0}$.
Since $|\cup_n I_n|$ has the same distribution as the size of the open cluster of $\{0, e_1\}$ in a supercritical percolation model on $\mathbb{Z}^d$, the proof follows.
\end{proof}
\section{Proof of Theorems}\label{sec:proofs}
In this section, we prove the theorems using the couplings of the previous section.
\subsection{Proof of Theorem \ref{theo:bounds}}
\begin{proof}
Given Propositions \ref{prop:coupling} and \ref{prop:triangular}, the proof of Theorem \ref{theo:bounds} is quite straightforward. Recall that for bond percolation on the triangular lattice with parameters $(p_1,p_2,p_3)$, assuming $p_1,p_2,p_3<1$, we have (see Theorem 11.116 in \cite{Grim})
\begin{equation*}
\theta_{\mathbf{T}}(p_1,p_2,p_3)>0 \Leftrightarrow p_1+p_2+p_3-p_1p_2p_3>1.
\end{equation*}
Consider now the partition
\begin{equation*}
d=\left\lfloor\frac{d}{3}\right\rfloor+\left\lfloor\frac{d+1}{3}\right\rfloor+\left\lfloor\frac{d+2}{3}\right\rfloor.
\end{equation*}
Let $p>p^{\ast}(d)$ and set, for $i=1,2,3$,
\begin{equation*}
p_i=1-(1-p)^{\lfloor \frac{d+i-1}{3}\rfloor}.
\end{equation*}
Then, iteratively applying Proposition \ref{prop:coupling} and finally Proposition \ref{prop:triangular}, we have
\begin{equation*}
\theta_d(p)\geq\theta_3(p_1,p_2,p_3)\geq \theta_{\mathbf{T}}(p_1,p_2,p_3)>0,
\end{equation*}
which concludes the proof.
\end{proof}
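As an illustration (not a proof), the threshold determined by the triangular-lattice condition can be located by bisection; the following Python sketch reproduces the non-oriented bond entries of the table in Section \ref{sec:comments} for small $d$, rounding up to four decimals as an upper bound should be:

```python
import math

# For each d, find the p at which p1 + p2 + p3 - p1*p2*p3 = 1, where
# p_i = 1 - (1-p)^floor((d+i-1)/3); every p above this value is
# supercritical, so the root is an upper bound for p_c^b(d).

def excess(p: float, d: int) -> float:
    p1, p2, p3 = (1.0 - (1.0 - p) ** ((d + i - 1) // 3) for i in (1, 2, 3))
    return p1 + p2 + p3 - p1 * p2 * p3 - 1.0

def bond_bound(d: int, tol: float = 1e-12) -> float:
    lo, hi = 0.0, 1.0                     # excess(0) = -1 < 0 < 1 = excess(1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if excess(mid, d) < 0 else (lo, mid)
    return hi

for d in (3, 4, 5):
    print(d, math.ceil(bond_bound(d) * 1e4) / 1e4)   # 0.3473, 0.2788, 0.2284
```

For $d=3$ the three degrees coincide, and the root is the triangular-lattice critical point $2\sin(\pi/18)\approx 0.3473$.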
\subsection{Proof of Theorem \ref{theo:bondO}}
\begin{proof}
First, we remark that Proposition \ref{prop:coupling} holds {\it mutatis mutandis} for inhomogeneous oriented bond Bernoulli percolation. As a consequence, we have the following lemma, which we state without proof.
\begin{lemma}\label{lem:Cor}
Let $k \in \mathbb{N}$ be such that $d$ is divisible by $k$. Given $p \in [0,1]$, let $\tilde{p}$ be such that
\[(1-\tilde{p}) = (1-p)^{d/k}.\]
Then, $\vv{\theta}^{\, b}_d(p) \geq \vv{\theta}_k^{\, b}(\tilde{p})$.
\end{lemma}
Taking $k=2$ in Lemma \ref{lem:Cor} and using Liggett's upper bound $\vv{p}_c^{\, b}(2) \leq 2/3$ (see \cite{L}), the first item of Theorem \ref{theo:bondO} follows. The third item is equivalent to Proposition \ref{prop:crossover}.
Finally, the second item is implicitly proved in \cite{GPS}.
There, the authors defined a quantity $\lambda(1/d, \dots, 1/d)$ and showed that (see Equation (3.2)), if $p \in [0,1]$ satisfies
\[\phi(d) := \lambda(1/d, \dots, 1/d) \leq dp -1,\]
then $\vv{\theta}^{\, b}_d(p) > 0$. In particular,
\[ \vv{p}_c^{\, b}(d) \leq \frac{1+\phi(d)}{d}.\]
The conclusion follows from the estimate given in Subsection 3.1 of \cite{GPS} (see the equation above (3.5)), that is
\[\phi(d) \leq \frac{1}{d}+\frac{8}{d^2}+\frac{d^{3/2}}{(\sqrt{2\pi})^{d-1}}\left[\frac{d-1}{d-3}\right]e^{\frac{1}{12d}}. \]
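As a numerical companion, the following Python sketch evaluates this estimate together with the bound $\vv{p}_c^{\,b}(d)\le (1+\phi(d))/d$; for $d\ge 6$ it reproduces the oriented bond entries of the table in Section \ref{sec:comments} (rounded up to four decimals):

```python
import math

def phi_upper(d: int) -> float:
    # The quoted estimate for phi(d); the (d - 3) factor requires d > 3.
    return (1 / d + 8 / d ** 2
            + d ** 1.5 / math.sqrt(2 * math.pi) ** (d - 1)
            * (d - 1) / (d - 3) * math.exp(1 / (12 * d)))

def oriented_bond_bound(d: int) -> float:
    # p_c^b(d, oriented) <= (1 + phi(d)) / d, from [GPS].
    return (1 + phi_upper(d)) / d

for d in (6, 7, 8, 9):
    print(d, math.ceil(oriented_bond_bound(d) * 1e4) / 1e4)
    # 6 -> 0.2734, 7 -> 0.2028, 8 -> 0.1627, 9 -> 0.1371
```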
\end{proof}
\subsection{Proof of Theorem \ref{theo:siteNO}}
Inhomogeneous percolation is not well defined for site models, and therefore, we are not able to formulate a version of Proposition \ref{prop:coupling} for site percolation. But note that Lemma \ref{lem:Cor} only involves homogeneous models. In the following, we give its site version.
\begin{lemma}\label{lem:ksite}
Consider non-oriented site Bernoulli percolation on $\mathbb{Z}^{d}$. Let $k \in \mathbb{N}$ be such that $d$ is divisible by $k$. Given $p \in [0,1]$, let $\tilde{p}$ be such that
\[(1-\tilde{p}) = (1-p)^{d/k}.\]
Then, we have ${\theta}^s_d(p) \geq {\theta}^s_k(\tilde{p})$.
\end{lemma}
\begin{proof}
The lemma follows by a coupling between non-oriented site Bernoulli percolation on $\mathbb{Z}^d$, with parameter $p$, and on $\mathbb{Z}^{k}$, with parameter $\tilde{p}$. To establish this coupling, we again construct a sequence of vertex sets $(I_n, x(I_n), R_n, S_n)_{n \geq 0}$.
First, we recall that $\mathcal{E} = \{e_1, \dots, e_d\}$ is the set of positive unit vectors of $\mathbb{Z}^d$, and let $(D_{u_1}, \dots, D_{u_k})$ be a partition of $\mathcal{E}$ into $k$ subsets of equal size $d/k$, indexed by the set $\{u_1, \dots, u_k \}\subset\mathbb{Z}^k$ of positive unit vectors of $\mathbb{Z}^k$. For each $u \in \{u_1, \dots, u_k \}$, let $D_{-u} = -D_u$.
Consider the non-oriented site Bernoulli percolation model on $\mathbb{Z}^d$, with parameter $p$. For each $v \in \mathbb{Z}^d$ and each $u \in \{\pm u_1, \dots, \pm u_k\}$, we define the following event
\begin{equation*}
B(v,u) := \{ v+e \text{ is open, for some } e \in D_u \}.
\end{equation*}
If the event $B(v,u)$ occurs, we set $e(v,u) = v+e$, where $e \in D_u$ is such that the vertex $v+e$ is open, witnessing the occurrence of $B(v,u)$. Note that $\mathbb{P}(B(v,u)) = \tilde{p}.$
For $n = 0$, we define
\begin{itemize}
\item $I_0 = \{0\} \subset \mathbb{Z}^k$;
\item $R_0 = \emptyset \subset \mathbb{Z}^k$;
\item $x(0) = 0 \in \mathbb{Z}^d$.
\end{itemize}
If for some $n \geq 0$, the sets $I_n, x(I_n)$ and $R_n$ are already defined, let $S_n$ be as given in \eqref{eq:SnSITE}. If $S_n = \emptyset$, the sequence becomes constant as in \eqref{eq:stop}. Otherwise, let $v_n$ be the smallest vertex of $S_n$ (according to a preset ordering of $\mathbb{Z}^k$). In this case, we write $v_n = v+u \notin I_n$, where $v \in I_n$ and $u \in \{\pm u_1, \dots, \pm u_k\}$. To conclude the induction step, we define
\begin{itemize}
\item $R_{n+1} =
\begin{cases} R_n \cup \{v+u\}, &\text{if } B(x(v),u) \text{ does not occur};\\
R_n, & \text{otherwise};
\end{cases}$
\item $I_{n+1} =
\begin{cases} I_n \cup \{v+u\}, &\text{if } B(x(v),u) \text{ occurs};\\
I_n, & \text{otherwise}.
\end{cases}$
\end{itemize}
If $B(x(v),u)$ occurs, we define $x(v+u) = e(x(v),u)$.
Note that, by construction, the function $x: \cup_n I_n \longrightarrow \mathbb{Z}^d$ is such that, for each $v = v_{u_1}u_1+ \cdots + v_{u_k}u_k \in \cup_n I_n$, writing $x(v) = x_{e_1} e_1 + \cdots + x_{e_d} e_d$, we have
\begin{equation*}
\sum_{e \in D_u} x_e = v_u, \quad \forall u \in \{u_1, \dots, u_k\}.
\end{equation*}
Therefore, $x: \cup_n I_n \longrightarrow \mathbb{Z}^d$ is injective. The conclusion of the lemma follows as in previous couplings.
\end{proof}
With Lemma \ref{lem:ksite} in hand, we are now able to prove the main result of this section.
\begin{proof}[Proof of Theorem \ref{theo:siteNO}]
The first item of Theorem \ref{theo:siteNO} follows by taking $k=2$ in Lemma \ref{lem:ksite} together with Wierman's upper bound $p_c^s(2)\leq 0.68$ (see \cite{W2}). Taking $k=3$ in Lemma \ref{lem:ksite} and using the upper bound $p_c^s(3) < 1/2$ of Campanino-Russo (see \cite{CR}), the second item follows. The third item follows directly from Proposition \ref{prop:crossoversite}.
\end{proof}
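A hedged numerical companion: by Lemma \ref{lem:ksite}, if $k$ divides $d$ and $1-(1-p)^{d/k}$ exceeds an upper bound for $p^s_c(k)$, then $\theta^s_d(p)>0$, so $p^s_c(d)\le 1-(1-p^s_c(k))^{k/d}$. The Python sketch below evaluates this with the constants quoted above (Wierman's $0.68$ for $k=2$, Campanino-Russo's $1/2$ for $k=3$), rounding up to four decimals as in the table of Section \ref{sec:comments}; the entries for $d=5$ and $d=7$ additionally use Item 3) and are not reproduced here.

```python
import math

def site_bound(d: int, k: int, pc_k: float) -> float:
    # p_c^s(d) <= 1 - (1 - p_c^s(k))^(k/d), valid whenever k divides d.
    assert d % k == 0
    return 1.0 - (1.0 - pc_k) ** (k / d)

# Item 1): k = 2 with Wierman's bound p_c^s(2) <= 0.68.
print(math.ceil(site_bound(4, 2, 0.68) * 1e4) / 1e4)   # 0.4344
print(math.ceil(site_bound(8, 2, 0.68) * 1e4) / 1e4)   # 0.2479
# Item 2): k = 3 with the Campanino-Russo bound p_c^s(3) < 1/2.
print(math.ceil(site_bound(6, 3, 0.5) * 1e4) / 1e4)    # 0.2929
print(math.ceil(site_bound(9, 3, 0.5) * 1e4) / 1e4)    # 0.2063
```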
\subsection{Proof of Theorem \ref{theo:siteO}}
\begin{proof}
First, we remark that Lemma \ref{lem:ksite} and Proposition \ref{prop:crossoversite} hold {\it mutatis mutandis} for oriented site Bernoulli percolation. Therefore, to conclude the proof of Theorem \ref{theo:siteO}, the last ingredient is Liggett's upper bound $\vv{p}^{\,s}_c(2) \leq 3/4$ (see \cite{L}).
\end{proof}
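Similarly, assuming the oriented analogue of Lemma \ref{lem:ksite} with $k=2$ and Liggett's bound $\vv{p}^{\,s}_c(2)\le 3/4$, the even-dimensional entries of the oriented site column of the table in Section \ref{sec:comments} can be recomputed (odd dimensions rely on the crossover result, not reproduced here):

```python
import math

def oriented_site_bound(d: int) -> float:
    # k = 2 and Liggett's bound p_c^s(2, oriented) <= 3/4; requires d even.
    assert d % 2 == 0
    return 1.0 - (1.0 - 0.75) ** (2 / d)

for d in (4, 6, 8):
    print(d, math.ceil(oriented_site_bound(d) * 1e4) / 1e4)
    # 4 -> 0.5, 6 -> 0.3701, 8 -> 0.2929
```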
\section{Explicit bounds}\label{sec:comments}
In this section, we present a table with upper bounds for bond and site Bernoulli percolation on $\mathbb{Z}^d$, up to $d=9$, in both the oriented and the non-oriented cases.
Each column gives numerical upper bounds, rounded up to four decimal places, for the critical probability indicated in its header.
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
\textbf{Dimension} & $p_c^b(d)$ & $\vv{p}_c^{\,b}(d)$ & $p_c^s(d)$ & $\vv{p}_c^{\,s}(d)$\\ \hline
3 & 0.3473 & 0.5680 & 0.5000 & 0.6422 \\ \hline
4 & 0.2788 & 0.4227 & 0.4344 & 0.5000 \\ \hline
5 & 0.2284 & 0.3926 & 0.4156 & 0.4615 \\ \hline
6 & 0.1922 & 0.2734 & 0.2929 & 0.3701 \\ \hline
7 & 0.1682 & 0.2028 & 0.2866 & 0.3533 \\ \hline
8 & 0.1486 & 0.1627 & 0.2479 & 0.2929 \\ \hline
9 & 0.1326 & 0.1371 & 0.2063 & 0.2844 \\ \hline
\end{tabular}
\end{center}
\vspace{0.5cm}
To obtain each bound, we used the following:
\vspace{0.5cm}
\textbf{Non-oriented bond percolation.} All bounds were obtained from Theorem \ref{theo:bounds}.
\textbf{Oriented bond percolation.} All bounds come from Theorem \ref{theo:bondO}. For $d=4$, we used Item 1); for $d=5$, we used the upper bound for $d=4$ together with Item 3); for $d\geq 6$, we used Item 2).
\textbf{Non-oriented site percolation.} The bound for $d=3$ follows from Campanino-Russo (see \cite{CR}). The others follow from Theorem \ref{theo:siteNO}. For $d=4$ and $d=8$, we used Item 1); for $d=6$ and $d=9$, we used Item 2); for $d=5$ and $d=7$, we used Item 3) along with the upper bounds obtained for $d=4$ and $d=6$.
\textbf{Oriented site percolation.} All bounds come from Theorem \ref{theo:siteO}.
For even dimensions, the bounds follow from Item 1); for odd dimensions, they follow from Item 2) using the upper bounds for even dimensions.
\section*{Acknowledgements}
The authors thank Roger Silva for valuable comments on the manuscript. P.A. Gomes has been supported by São Paulo Research Foundation (FAPESP), grant 2020/02636-3 and grant 2017/10555-0.
R. Sanchis has been partially supported by Conselho Nacional de Desenvolvimento
Científico e Tecnológico (CNPq), CAPES and by FAPEMIG (PPM 00600/16).
\thebibliography{}
\bibitem{BR} Bollob\'as, B., and Riordan, O. {\it Percolation}, Cambridge University Press (2006).
\bibitem{CR} Campanino, M., and Russo, L. {\it An upper bound on the critical percolation probability for the three-dimensional cubic lattice}, Ann. Prob. 13 (2), 478-491, (1985).
\bibitem{CD} Cox, J.T., and Durrett, R. {\it Oriented percolation in dimensions $d \geq 4$: bounds and asymptotic formulas.} Math Proc. Camb. Phil. Soc. Vol. 93(1), pp.151-162, (1983).
\bibitem{GPS} Gomes, P.A., Pereira, A., and Sanchis, R. {\it Anisotropic oriented percolation in high dimensions}. ALEA Lat. Am. J. Probab. Math. Stat. 17 (1), 531-543, (2020).
\bibitem{GPS2} Gomes, P.A., Pereira, A., and Sanchis, R. {\it Anisotropic percolation in high dimensions: the non-oriented case}. ArXiv:2106.09083.
\bibitem{GSS} Gomes P.A., Sanchis, R. and Silva, R.W.C. {\it A note on the dimensional crossover critical exponent.} Lett. Math. Phys. 110 (12), 3427–3434, (2020).
\bibitem{Grim} Grimmett, G.R. {\it Percolation.} Second edition. Springer-Verlag, Berlin, (1999).
\bibitem{K3} Kesten, H. {\it The Critical Probability of Bond Percolation
on the Square Lattice Equals 1/2}, Commun. Math. Phys. 74 (1), 41-59, (1980).
\bibitem{L} Liggett, T.M. {\it Survival of discrete time growth models with applications to oriented percolation.} Ann. Appl. Probab. 5 (3), 613-636, (1995).
\bibitem{NZ} Newman, M.E.J., and Ziff, R.M. {\it Fast Monte Carlo algorithm for site or bond percolation}, Phys. Rev. E, 64 (1), 016706, (2001).
\bibitem{W} Wierman, J.C. {\it Bond percolation critical probability bounds for the Kagomé lattice by a substitution method}. In: Disorder in Physical Systems, Grimmett, G. and Welsh, D. J. A. (eds.),
pp.349-360, (1990).
\bibitem{W2} Wierman, J.C. {\it Substitution method critical probability bounds for the square lattice site percolation model}.
Combin. Probab. Comput. 4 (2), 181-188 (1995).
\bibitem{WP} Wierman, J.C., and Parviainen, R. {\it Ordering Bond Percolation Critical Probabilities}, UUDM Report, (2002)
\url{http://www.ams.jhu.edu/~wierman/Papers/Bond-Pc-ordering.pdf}
\vspace{2cm}
\begin{minipage}{0.45\textwidth}
Pablo Almeida Gomes\\
Universidade de S\~ao Paulo, Brasil\\
E-mail: \url{pagomes@usp.br}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{0.45\textwidth}
Alan Pereira\\
Universidade Federal de Alagoas, Brasil\\
E-mail: \url{alan.pereira@im.ufal.br}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{0.5\textwidth}
R\'emy Sanchis\\
Universidade Federal de Minas Gerais, Brasil\\
E-mail: \url{rsanchis@mat.ufmg.br}
\end{minipage}
\end{document}
| {
"timestamp": "2021-06-22T02:04:44",
"yymm": "2106",
"arxiv_id": "2106.10388",
"language": "en",
"url": "https://arxiv.org/abs/2106.10388",
"abstract": "We consider bond and site Bernoulli Percolation in both the oriented and the non-oriented cases on $\\mathbb{Z}^d$ and obtain rigorous upper bounds for the critical points in those models for every dimension $d \\geq 3$.",
"subjects": "Probability (math.PR)",
"title": "Upper bounds for critical probabilities in Bernoulli Percolation models",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363476123224,
"lm_q2_score": 0.8128673155708975,
"lm_q1q2_score": 0.8001348445024903
} |
https://arxiv.org/abs/2103.11443 | Bipartite biregular Moore graphs | A bipartite graph $G=(V,E)$ with $V=V_1\cup V_2$ is biregular if all the vertices of a stable set $V_i$ have the same degree $r_i$ for $i=1,2$. In this paper, we give an improved new Moore bound for an infinite family of such graphs with odd diameter. This problem was introduced in 1983 by Yebra, Fiol, and Fàbrega.\\ Besides, we propose some constructions of bipartite biregular graphs with diameter $d$ and large number of vertices $N(r_1,r_2;d)$, together with their spectra. In some cases of diameters $d=3$, $4$, and $5$, the new graphs attaining the Moore bound are unique up to isomorphism. | \section{Introduction}
The {\emph{degree/diameter problem}} for graphs consists in finding the largest order of a graph with prescribed degree and diameter. We call this number the \emph{Moore bound}, and a graph whose order coincides with this bound is called a {\emph{Moore graph}}.
There is a lot of work related to this topic (see a survey by Miller and \v{S}ir\'a\v{n} \cite{MS16}), as well as several restrictions of the original problem. One of them concerns bipartite Moore graphs; in this case, the goal is to find regular bipartite graphs with maximum order and fixed diameter. In this paper, we study the problem, proposed by Yebra, Fiol, and F\`abrega \cite{YFF83} in 1983, of finding biregular bipartite Moore graphs.
A bipartite graph $G=(V,E)$ with $V=V_1\cup V_2$ is biregular if, for $i=1,2$, all the vertices of a stable set $V_i$ have the same degree. We denote by $[r,s;d]$-bigraph a bipartite biregular graph with degrees $r$ and $s$ and diameter $d$, and by $[r,s;d]$-bimoore graph a bipartite biregular graph of diameter $d$ that attains the Moore bound, which is denoted $M(r,s;d)$.
Notice that constructing these graphs is equivalent to constructing block designs, where one partite set corresponds to the points of the block design, and the other set corresponds to the blocks of the design. Moreover, each point is in a fixed number $s$ of blocks, and the size of each block is equal to $r$. The incidence graph of this block design is an $[r,s;d]$-biregular bipartite graph of diameter $d$.
This type of graph is often used as an alternative to a hypergraph in modeling some interconnection networks. Actually, several applications deal with the study of bipartite graphs such that all vertices of every partite set have the same degree. For instance, in an interconnection network for a
multiprocessor system, where the processing elements communicate through buses, it is useful that each processing element is connected to the same number of buses and also that each bus is connected to the same number of processing elements to have a uniform traffic through the network. These networks can be modeled by hypergraphs (see Bermond, Bond, Paoli, and Peyrat \cite{BeBoPaPe83}), where the vertices indicate the processing elements and the edges indicate the buses of the system. They can also be modeled by bipartite graphs with a set of vertices for the processing elements, another one for the buses, and
edges that represent the connections between processing elements and buses since all vertices of each set have the same degree.
The degree/diameter problem is strongly related to the {\em degree\/}/{\em girth problem\/} (also known as {\em the cage problem\/}) that consists in finding the smallest order of a graph with prescribed degree and girth (see the survey by Exoo and Jajcay \cite{ExooJaj08}). Note that, for an even girth $g=2d$, the lower bound for this value coincides with the Moore bound for bipartite graphs (the largest order of a bipartite regular graph with given diameter $d$).
In the bipartite biregular problem, we have the same situation. In 2019, Filipovski, Ramos-Rivera and Jajcay \cite{FilRamRivJaj19} introduced the concept of bipartite biregular Moore cages and presented lower bounds on the orders of bipartite biregular $(m,n;g)$-graphs. The bounds when $g=2d$ and $d$ even also coincide with the bounds given by Yebra, Fiol, and F\`abrega in \cite{YFF83}. Note that these bounds only coincide when the diameter is even. The cases for odd diameter and girth $g=2d$ are totally different, even for the extreme values.
The contents of the paper are as follows. In the rest of this introductory section, we recall the Moore-like bound $M(r,s;d)$ derived in Yebra, Fiol, and F\`abrega \cite{YFF83} on the order of a bipartite biregular graph with degrees $r$ and $s$, and diameter $d$. Following the same problem of obtaining good bounds, in Section \ref{sec:improvedMoore} we prove that, for some cases of odd diameter, the Moore bound of \cite{YFF83} can be improved (see the new bounds in Tables \ref{tab:d=3} and \ref{tab:d=5}).
In the two following sections, we basically deal with the case of even diameter because known constructions provide optimal (or very good) results. Thus, Section \ref{gen-pols} is devoted to the Moore bipartite biregular graphs associated with generalized polygons.
In Section \ref{gen-cons}, we propose two general graph constructions: the subdivision graphs, giving Moore bipartite biregular graphs with even diameter, and the semi-double graphs that, from a bipartite graph of any given diameter, allow us to obtain another bipartite graph with the same diameter but with a greater number of vertices. For these two constructions, we also give the spectrum of the obtained graphs.
Finally, a numeric construction of bipartite biregular Moore graphs for diameter $d=3$ and degrees $r$ and $3$ is proposed in Section \ref{sec:d=3}.
\subsection{Moore-like bounds}
Let $G=(V,E)$, with $V=V_1\cup V_2$, be a $[r,s;d]$-bigraph, where each vertex of $V_1$ has degree $r$, and each vertex of $V_2$ has degree $s$. Note that, counting in two ways the number of edges of $G$, we have
\begin{equation}
\label{basic=}
r N_1=s N_2,
\end{equation}
where $N_i=|V_i|$, for $i=1,2$.
Moreover, since $G$ is bipartite with diameter $d$, from any vertex $u$ in one stable set we must reach {\bf all} the vertices of the other set within at most $d-1$ steps when $d$ is even, and all the vertices of the same set within at most $d-1$ steps when $d$ is odd.
Suppose first that the diameter is even, say, $d=2m$ (for $m\ge 1$). Then, by simple counting, if $u\in V_1$, we get
\begin{equation}
\label{N2-even}
N_2\le r+r(r-1)(s-1)+\cdots+r[(r-1)(s-1)]^{m-1}= r\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1},
\end{equation}
and, if $u\in V_2$,
\begin{equation}
\label{N1-even}
N_1\le s\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}.
\end{equation}
In the case of equalities in \eqref{N2-even} and \eqref{N1-even}, condition \eqref{basic=} holds,
and the Moore bound is
\begin{equation}
\label{Moore-even}
M(r,s;2m)= (r+s)\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}.
\end{equation}
The Moore bounds for $2\le s\le r\le 10$ and $d=4,6$ are shown in Tables \ref{tab:d=4} and \ref{tab:d=6}, where values in boldface are known to be attainable.
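For illustration, the bound \eqref{Moore-even} is easy to evaluate mechanically; the following Python sketch (a hypothetical helper, assuming $(r-1)(s-1)>1$, i.e., $(r,s)\neq(2,2)$) reproduces some entries of the tables for $d=4$ ($m=2$) and $d=6$ ($m=3$):

```python
# M(r, s; 2m) = (r + s) * ([(r-1)(s-1)]^m - 1) / ((r-1)(s-1) - 1);
# the division is exact, since t - 1 always divides t^m - 1.

def moore_even(r: int, s: int, m: int) -> int:
    t = (r - 1) * (s - 1)
    assert t > 1, "formula degenerates for r = s = 2"
    return (r + s) * (t ** m - 1) // (t - 1)

print(moore_even(3, 2, 2))   # 15
print(moore_even(4, 3, 2))   # 49
print(moore_even(7, 7, 2))   # 518 (the table lists the improved value 516)
print(moore_even(3, 2, 3))   # 35
```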
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
$r\setminus s$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline \hline
2 & $\textbf{8}^{\bullet}$ \\ \cline{1-3}
3 & $\textbf{15}^{* \diamond}$ & \textbf{30} \\ \cline{1-4}
4 & $\textbf{24}^{* \diamond}$ & 49 & \textbf{80} \\ \cline{1-5}
5 & $\textbf{35}^*$ & $\textbf{72}^{\bullet}$ & 117 & \textbf{170} \\ \cline{1-6}
6 & $\textbf{48}^*$ & 99 & 160 & 231 & \textbf{312} \\ \cline{1-7}
7 & $\textbf{63}^*$ & 130 & 209 & 300 & 403 & $\stackrel{(518)}{516}$ \\ \cline{1-8}
8 & $\textbf{80}^*$ & 165 & 264 & 377 & 504 & 645 & \textbf{800} \\ \cline{1-9}
9 & $\textbf{99}^*$ & 204 & 325 & 462 & 615 & 784 & 969 & \textbf{1170} \\ \cline{1-10}
10 & $\textbf{120}^*$ & 247 & $\textbf{292}^{\bullet}$ & 555 & 736 & 935 & 1152 & 1387 & \textbf{1640} \\
\hline
\end{tabular}
\end{center}
\vskip-.25cm
\caption{Moore bounds for diameter $d=4$. The attainable known values are in boldface.
The asterisks correspond to the subdivision graphs $S(K_{r,r})$, see Section \ref{sec:subdiv-graphs}.
The symbol `$\bullet$' indicates the orders of the graphs according to Theorem \ref{bb-Moore} and the diamonds correspond to unique graphs.}
\vskip1cm
\label{tab:d=4}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
$r\setminus s$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline \hline
2 & $\textbf{12}^{\bullet}$ \\ \cline{1-3}
3 & \textbf{35*} & \textbf{126} \\ \cline{1-4}
4 & \textbf{78*} & 301 & \textbf{728} \\ \cline{1-5}
5 & \textbf{147*} & 584 & 1431 & \textbf{2730} \\ \cline{1-6}
6 & \textbf{248*} & 999 & 2410 & 4631 & \textbf{7812} \\ \cline{1-7}
7 & 387 & 1570 & 3773 & 7212 & 12103 & $\stackrel{(18662)}{18660}$ \\ \cline{1-8}
8 & \textbf{570*} & 2321 & 5556 & 10569 & 17654 & 27105 & \textbf{39216} \\ \cline{1-9}
9 & \textbf{803*} & 3276 & $\textbf{7813}^{\bullet}$ &14798 & 24615 & 37648 & 54281 & \textbf{74898} \\ \cline{1-10}
10 & \textbf{1092*} & 4459 & 10598 & 19995 & 33136 & 50507 & 72594 & 99883 & \textbf{132860} \\
\hline
\end{tabular}
\end{center}
\vskip-.25cm
\caption{Moore bounds for diameter $d=6$. The attainable known values are in boldface. The asterisks correspond to the bimoore graphs of Proposition \ref{bb(s=2,d=6)-Moore}. The `$\bullet$' indicates the order of the graph according to Theorem
\ref{bb-Moore}.}
\label{tab:d=6}
\end{table}
Similarly, if the diameter is odd, say, $d=2m+1$ (for $m\ge 1$), and $u\in V_1$, we have
\begin{equation}
\label{N1-odd}
N_1\le 1+r(s-1)+\cdots + r(s-1)[(r-1)(s-1)]^{m-1}=1+r(s-1)\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}=N_1',
\end{equation}
whereas, if $u\in V_2$,
\begin{equation}
\label{N2-odd}
N_2\le 1+s(r-1)\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}=N_2'.
\end{equation}
But, in this case, $N_1'r\neq N_2's$ and, hence, the Moore bound must be smaller than $N_1'+N_2'$. In fact, it was proved in Yebra, Fiol, and F\`abrega \cite{YFF83} that, assuming $r>s$,
\begin{equation}
\label{N1-N2-odd}
N_1\le \left\lfloor \frac{N_2'}{\rho}\right\rfloor \sigma\qquad\mbox{and}
\qquad N_2\le \left\lfloor \frac{N_2'}{\rho}\right\rfloor \rho,
\end{equation}
where $\rho=\frac{r}{\gcd\{r,s\}}$ and $\sigma=\frac{s}{\gcd\{r,s\}}$.
Then, in this case, we take the Moore bound
\begin{equation}
\label{Moore-odd}
M(r,s;2m+1)= \left\lfloor\frac{1+s(r-1)\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}}{\rho}\right\rfloor (\rho+\sigma).
\end{equation}
Two bipartite biregular graphs with diameter three attaining the Moore bound \eqref{Moore-odd} were given in \cite{YFF83}. Namely, in Figure \ref{3unics}$(a)$, with $r=4$ and $s=3$, we would have the unattainable values $(N_1',N_2')=(9,10)$, whereas we get $(N_1,N_2)=(6,8)$, giving $M(4,3;3)=14$. In Figure \ref{3unics}$(b)$, with $r=5$ and $s=3$, $(N_1',N_2')=(11,13)$, and $(N_1,N_2)=(6,10)$, now corresponding to $M(5,3;3)=16$.
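The bound \eqref{Moore-odd} can likewise be evaluated mechanically; the following Python sketch (for $r>s$) recovers the two examples above:

```python
from math import gcd

# M(r, s; 2m+1) = floor(N2' / rho) * (rho + sigma), with
# N2' = 1 + s(r-1) * ([(r-1)(s-1)]^m - 1) / ((r-1)(s-1) - 1).

def moore_odd(r: int, s: int, m: int) -> int:
    assert r > s
    t = (r - 1) * (s - 1)
    n2p = 1 + s * (r - 1) * (t ** m - 1) // (t - 1)   # exact division
    rho, sigma = r // gcd(r, s), s // gcd(r, s)
    return (n2p // rho) * (rho + sigma)

print(moore_odd(4, 3, 1))   # 14
print(moore_odd(5, 3, 1))   # 16
print(moore_odd(6, 3, 1))   # 24 (not attained; the improved bound is 21)
```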
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{3unics.pdf}
\vskip-6.75cm
\caption{$(a)$ The only [4,3;3]-bimoore graph on $14$ vertices; $(b)$ One of the two [5,3;3]-bimoore graphs on $16$ vertices; $(c)$ The only [6,3;3]-bimoore graph on $21$ vertices.}
\label{3unics}
\end{figure}
As shown in Section \ref{sec:improvedMoore}, the bound \eqref{Moore-odd} can be improved for some values of the degree $r$.
So, we display there the tables of the new Moore bounds for small values of $r,s$ and diameters $d=3$ and $d=5$.
Recall that
if $G=(V,E)$ is an $r$-regular graph of diameter $d$, then its \emph{defect} is $\delta=\delta(G)=M(r;d)-|V|$, where $M(r;d)$ stands for the corresponding Moore bound. Thus, in this paper, the defect of a $[r,s;d]$-bigraph $G=(V_1\cup V_2,E)$ is defined as $\delta=M(r,s;d)-|V_1\cup V_2|$.
\section{An improved Moore bound for odd diameter}
\label{sec:improvedMoore}
Let us begin with a simple result concerning the girth of the possible biregular bipartite graphs attaining the bound \eqref{Moore-odd} for odd diameter.
\begin{lemma}
\label{g<=4m}
Every biregular bipartite graph $G$ of odd diameter $d=2m+1$ with order attaining the Moore bound \eqref{Moore-odd} has girth $g\le 4m$.
\end{lemma}
\begin{proof}
Since $G$ is bipartite, we must have $g\le 2d=4m+2$. Consider the trees $T_1$, rooted at $u\in V_1$, and $T_2$, rooted at $v\in V_2$, formed by the vertices at distance at most $2m$ from their roots. If $G$ has girth $g=4m+2$, then all the vertices of $T_1$ must be distinct (otherwise, a cycle of length at most $4m$ would appear), and the same holds for the vertices of $T_2$. Consequently, $T_1$ and $T_2$ have the maximum numbers of vertices $N_1'$ and $N_2'$ given
by \eqref{N1-odd} and \eqref{N2-odd}, respectively.
But this is not possible because the Moore bound \eqref{Moore-odd} is obtained from \eqref{N1-N2-odd} as $\lfloor N_2'/\rho\rfloor(\rho+\sigma) <N_1'+N_2'$ (since $r>s\Rightarrow N_2'>N_1'$). Hence, the girth of $G$ is at most $4m$.
\end{proof}
\subsection{The case of diameter three}
As a consequence of the following result, we prove in Corollary \ref{coro:optimal} that the $[6,3;3]$-graph of Figure \ref{3unics}$(c)$ has the maximum possible order.
\begin{proposition}
\label{nonupperbound}
If $\rho=\frac{r}{\gcd\{r,s\}}$ divides $s-1$, then there is no $[r,s;3]$-graph with order attaining the Moore-like bound in \eqref{Moore-odd}.
Instead, the new improved Moore bound is
\begin{equation}
\label{Moore-odd-improved(d=3)}
M^*(r,s;3)= (1+s(r-1)-\rho)\left(1+\frac{\sigma}{\rho}\right),
\end{equation}
with
\begin{equation}
\label{N1-N2-(d=3)}
N_1\le [1+s(r-1)-\rho]\frac{\sigma}{\rho}
\qquad \mbox{and}\qquad N_2\le 1+s(r-1)-\rho,
\end{equation}
where $\rho=\frac{r}{\gcd\{r,s\}}$ and $\sigma=\frac{s}{\gcd\{r,s\}}$.
\end{proposition}
\begin{proof}
Suppose that, under the hypothesis, there exists a $[r,s;3]$-graph $G=(V_1\cup V_2,E)$ that attains the upper bound in \eqref{Moore-odd}. Then, with $r>s$, $|V_1|$ is the number of vertices of degree $r$, $ |V_2|$ is the number of vertices of degree $s$, and $N_1=|V_1|< |V_2|=N_2$.
Thus, for diameter $d=3$, since $\rho$ divides both $r$ and $s-1$, we have $1+s(r-1)\equiv 1-s\equiv 0 \pmod{\rho}$, and hence
$$
N_2=\left\lfloor \frac{1+s(r-1)}{\rho}\right\rfloor\rho=1+s(r-1)=N_2'.
$$
This means that, from any vertex $v\in V_2$, there is exactly one path of length at most $2$ to each other vertex of $V_2$.
Hence, the girth of $G$ is larger than $4$: otherwise, there would be a cycle
$u_1\sim v_1\sim u_2\sim v_2 (\sim u_1)$, with $u_i\in V_1$ and $v_i\in V_2$ for $i=1,2$, and thus two distinct paths of length $2$ between $v_1$ and $v_2$.
This is a contradiction with Lemma \ref{g<=4m} and, hence, the upper bound \eqref{Moore-odd} cannot be attained.
\end{proof}
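For illustration, the improved bound \eqref{Moore-odd-improved(d=3)} can be evaluated directly; the helper below (a hypothetical sketch, applying only when $\rho$ divides $s-1$, which also makes the bound an integer) reproduces the improved entries of Table \ref{tab:d=3}:

```python
from math import gcd

# M*(r, s; 3) = (1 + s(r-1) - rho) * (1 + sigma/rho); since rho | (s-1)
# and r = rho * gcd(r, s), rho divides 1 + s(r-1) - rho, so the result
# is an integer.

def moore_improved_d3(r: int, s: int) -> int:
    rho, sigma = r // gcd(r, s), s // gcd(r, s)
    assert (s - 1) % rho == 0, "formula applies only when rho | (s - 1)"
    n2 = 1 + s * (r - 1) - rho          # bound on |V_2|
    return n2 + n2 * sigma // rho       # n2 * (1 + sigma/rho)

print(moore_improved_d3(6, 3))    # 21
print(moore_improved_d3(10, 5))   # 66
print(moore_improved_d3(10, 6))   # 80
```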
Otherwise, if $\rho$ does not divide $s-1$, we have the bound \eqref{Moore-odd} with $m=1$, where \eqref{N1-N2-odd} yields
\begin{equation}
\label{N1-N2(d=3)not-divide}
N_1\le \left\lfloor s\cdot \gcd\{r,s\}-\frac{s-1}{\rho}\right\rfloor \sigma\qquad\mbox{and}\qquad N_2\le \left\lfloor s\cdot \gcd\{r,s\}-\frac{s-1}{\rho}\right\rfloor\rho.
\end{equation}
In Table \ref{tab:d=3}, we show the values of the Moore bounds in \eqref{Moore-odd} and \eqref{Moore-odd-improved(d=3)} for $s\le 2\le r\le 11$ and diameter $d=3$.
The values between parentheses correspond to the old bound \eqref{Moore-odd} that was given in \cite{YFF83}. The attainable known values are in boldface. The asterisks indicate the values obtained in this paper (see Proposition \ref{propo:(r,3)}). The $[6,3;3]$-bigraph with $21$ vertices of Figure \ref{3unics}$(c)$ can be shown to be unique, up to isomorphism.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
$r\setminus s$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\
\hline \hline
2 & \textbf{6} \\ \cline{1-3}
3 & \textbf{5} & \textbf{14} \\ \cline{1-4}
4 & \textbf{9} & $\textbf{14}^{* \diamond}$ & \textbf{26} \\ \cline{1-5}
5 & \textbf{7} & \textbf{16*} & 27 & \textbf{42} \\ \cline{1-6}
6 & \textbf{12} & $\stackrel{(24)}{\textbf{21}^{\diamond}}$ & $\stackrel{(35)}{30}$ & 44 & \textbf{62} \\ \cline{1-7}
7 & \textbf{9} & \textbf{20*} & 33 & 48 & 65 & 86 \\ \cline{1-8}
8 & \textbf{15} & \textbf{22*} & 42 & 52 & 70 & 90 & \textbf{114} \\ \cline{1-9}
9 & \textbf{11} & 32 & 39 & 56 & 80 & 96 & 119 & \textbf{146} \\ \cline{1-10}
10 & \textbf{18} & \textbf{26*} & 49 & $\stackrel{(69)}{66}$ & $\stackrel{(88)}{80}$ & 102 & 126 & 152 & \textbf{182} \\ \cline{1-11}
11 & \textbf{13} & \textbf{28*} & 45 & 64 & 85 & 108 & 133 & 160 & 189 & 222 \\
\hline
\end{tabular}
\end{center}
\vskip-.25cm
\caption{Best Moore bounds for diameter $d=3$. The attainable known values are in boldface.
The asterisks correspond to the graphs obtained according to Proposition \ref{propo:(r,3)}, and the diamonds correspond to unique graphs. The values between parenthesis correspond to the old (unattainable) bound \eqref{Moore-odd}.}
\label{tab:d=3}
\vskip1cm
\end{table}
As a consequence of the above results, we get the following Moore bounds when the degree $r$ is a multiple of the degree $s$.
\begin{corollary}
For $s\ge 2$, the best Moore bounds for the orders $N_1$ and $N_2$ of a $[\rho s,s;3]$-graph are as follows:
\begin{itemize}
\item[$(i)$] If $\rho|(s-1)$, then
\begin{equation*}
N_1 \le (s^2-1)-\frac{s-1}{\rho},\qquad\mbox{and}\qquad N_2 \le \rho(s^2-1)-(s-1).
\end{equation*}
\item[$(ii)$] If $\rho\nmid (s-1)$, then
\begin{equation*}
N_1 \le s^2-\left\lceil s/\rho\right\rceil,\qquad\mbox{and}\qquad
N_2 \le \rho(s^2-\left\lceil s/\rho\right\rceil).
\end{equation*}
\end{itemize}
\end{corollary}
\begin{proof}
Note that, under the hypothesis, $\gcd\{r,s\}=s$, $\rho=\frac{r}{s}$, and $\sigma=1$.
Then, $(i)$ follows from \eqref{N1-N2-(d=3)}.
Concerning $(ii)$, the values in \eqref{N1-N2(d=3)not-divide} become
$$
N_1\le \left\lfloor s^2-\frac{s-1}{\rho}\right\rfloor\qquad\mbox{\rm and}\qquad
N_2\le \left\lfloor s^2-\frac{s-1}{\rho}\right\rfloor\rho,
$$
which are expressions equivalent to those given above.
\end{proof}
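As a quick numerical sanity check (not part of the proof; the helper names below are ours), the closed forms of the corollary can be compared with the floor expression $N_1\le \left\lfloor s^2-\frac{s-1}{\rho}\right\rfloor$ used in the proof, and with the $[6,3;3]$ values of Table \ref{tab:d=3}:

```python
import math

def n1_floor(s, rho):
    # Generic bound N1 <= floor(s^2 - (s-1)/rho), valid for r = rho*s,
    # where gcd{r,s} = s and sigma = 1.
    return math.floor(s * s - (s - 1) / rho)

def n1_corollary(s, rho):
    # Closed forms of the corollary, split on whether rho divides s-1.
    if (s - 1) % rho == 0:
        return (s * s - 1) - (s - 1) // rho    # case (i): improved bound
    return s * s - math.ceil(s / rho)          # case (ii)

# Case (ii) agrees with the floor expression whenever rho does not divide s-1.
for s in range(2, 20):
    for rho in range(2, 10):
        if (s - 1) % rho != 0:
            assert n1_corollary(s, rho) == n1_floor(s, rho)

# Case (i) instance: rho = 2, s = 3 (so r = 6) gives N1 <= 7 and N2 <= 14,
# matching the [6,3;3]-bigraph on 21 vertices.
s, rho = 3, 2
print(n1_corollary(s, rho), rho * (s * s - 1) - (s - 1))  # 7 14
```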
Assuming that $\rho=2$ and $s$ is odd, we get the following consequence.
\begin{corollary}
\label{coro:2s-s}
There is no $[2s,s;3]$-bimoore graph for $s$ odd.
\end{corollary}
\begin{proof}
Since $s$ is odd, $\rho=2$ divides $s-1$ and so, by Proposition \ref{nonupperbound}, the Moore bound in \eqref{Moore-odd} is not attained. For instance, for $s=3$, the bound $M(6,3;3)=24$ would give $N_2'=16$ vertices of degree $3$ and $N_1'=8$ vertices of degree $6$. So, the first possible values are $N_2=14$ vertices of degree $3$ and $N_1=7$ vertices of degree $6$, which correspond to the graph depicted in Figure \ref{3unics}$(c)$.
\end{proof}
\subsection{The general case}
Proposition \ref{nonupperbound} can be extended for any odd diameter, as shown in the following result.
\begin{theorem}\label{nonupperbound2m+1}
If $\rho=\frac{r}{\gcd\{r,s\}}$ divides $s\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}-1$, then there is no $[r,s;2m+1]$-graph with order attaining the Moore-like bound in \eqref{Moore-odd}. Instead, the new improved Moore bound is
\begin{equation}
\label{Moore-odd-improved}
M^*(r,s;2m+1)= \left(\frac{1+s(r-1)\frac{[(r-1)(s-1)]^{m}-1}{(r-1)(s-1)-1}}{\rho}-1\right) (\rho+\sigma),
\end{equation}
where, as before, $\rho=\frac{r}{\gcd\{r,s\}}$ and $\sigma=\frac{s}{\gcd\{r,s\}}$.
\end{theorem}
\begin{proof}
The proof that the bound in \eqref{Moore-odd} is not attainable follows the same reasoning as in Proposition \ref{nonupperbound}.
Indeed, from the hypothesis, if there existed an $[r,s;2m+1]$-graph $G=(V_1\cup V_2,E)$ with order attaining the upper bound in \eqref{Moore-odd}, we would have
$N_2=N_2'$, and the girth would be larger than $4m$, a contradiction.
Then, as before, the new possible bounds in \eqref{N1-N2-odd} are
$$
N_1\le \left( \frac{N_2'}{\rho}-1\right) \sigma\qquad\mbox{and}
\qquad N_2\le \left(\frac{N_2'}{\rho}-1\right) \rho,
$$
as claimed.
\end{proof}
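For illustration (a sketch with our own helper names, not part of the proof), the improved bound \eqref{Moore-odd-improved} can be evaluated mechanically; it reproduces, for example, the improved entries $105$ and $246$ of Table \ref{tab:d=5} for the parameters $[4,3;5]$ and $[6,3;5]$:

```python
from math import gcd

def improved_moore_odd(r, s, m):
    # M*(r,s;2m+1): evaluated only when the divisibility hypothesis of the
    # theorem holds (otherwise the old bound applies).
    rho, sigma = r // gcd(r, s), s // gcd(r, s)
    t = (((r - 1) * (s - 1)) ** m - 1) // ((r - 1) * (s - 1) - 1)
    assert (s * t - 1) % rho == 0, "hypothesis of the theorem fails"
    n2_prime = 1 + s * (r - 1) * t          # the quantity N2' of the proof
    return (n2_prime // rho - 1) * (rho + sigma)

# Diameter 5 corresponds to m = 2.
print(improved_moore_odd(4, 3, 2))  # 105 (old bound: 112)
print(improved_moore_odd(6, 3, 2))  # 246 (old bound: 249)
```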
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
$r\setminus s$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline \hline
2 & 10 \\ \cline{1-3}
3 & $\textbf{15}^\diamond$ & 62 \\ \cline{1-4}
4 & 36 & $\stackrel{(112)}{105}$ & 242 \\ \cline{1-5}
5 & 56 & 168 & 369 & 682 \\ \cline{1-6}
6 & 80 & $\stackrel{(249)}{246}$ & $\stackrel{(535)}{530}$ & 957 & 1562 \\ \cline{1-7}
7 & 108 & 330 & 715 & $\stackrel{(1284)}{1272}$ & 2067 & 3110 \\ \cline{1-8}
8 & 140 & 429 & 924 & $\stackrel{(1651)}{1638}$ & 2646 & 3945 & 5602 \\ \cline{1-9}
9 & 176 & 544 & $\stackrel{(1157)}{1144}$ & 2044 & 3280 & 4880 & 6885 & 9362 \\ \cline{1-10}
10 & 216 & 663 & 1407 & $\stackrel{(2499)}{2496}$ & $\stackrel{(3976)}{3968}$ & 5882 & 8289 & 11229 & 14762 \\
\hline
\end{tabular}
\end{center}
\vskip-.25cm
\caption{Best Moore bounds for diameter $d=5$. The diamond corresponds to a unique graph. The values in parentheses correspond to the old (unattainable) bounds in \eqref{Moore-odd}.
}
\label{tab:d=5}
\end{table}
Table \ref{tab:d=5} shows the values of the Moore bounds in \eqref{Moore-odd} and \eqref{Moore-odd-improved} for $2\le s\le r\le 10$ and diameter $d=5$. As before, the values in parentheses correspond to the old (unattainable) bounds in \eqref{Moore-odd}.
\subsection{Computational results}
The enumeration of bigraphs with maximum order can be done by computer whenever the Moore bound is small enough. To this end, given a diameter $d$, we first generate with {\em Nauty} \cite{MP13} all bigraphs with the maximum orders $N_1=|V_1|$ and $N_2=|V_2|$ allowed by the Moore bound $M(r,s;d)$. Second, we filter the generated graphs, keeping those with diameter $d$, using the {\em NetworkX} library for Python. Our computational resources force us to restrict to the cases where $\max\{N_1,N_2\} \leq 24$ and $n=N_1+N_2 \leq 32$. The computational results are shown in Table \ref{tab:comput}.
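The generation step with {\em Nauty} is not reproduced here, but the filtering step is easy to emulate. The following sketch (our own minimal stand-in, using plain breadth-first search instead of {\em NetworkX}) keeps, from a list of candidate graphs, exactly those of the target diameter:

```python
from collections import deque

def diameter(adj):
    # Largest BFS eccentricity; returns infinity for disconnected graphs.
    ecc = 0
    for s0 in adj:
        dist = {s0: 0}
        q = deque([s0])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if len(dist) < len(adj):
            return float("inf")
        ecc = max(ecc, max(dist.values()))
    return ecc

def cycle(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# Filter step: keep only the candidates whose diameter equals the target d.
candidates = {n: cycle(n) for n in (6, 8, 10)}
d = 3
kept = [n for n, g in candidates.items() if diameter(g) == d]
print(kept)  # [6]
```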
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
$[r,s;d]$ & $n$ & $N_1$ & $N_2$ & Generated graphs & Graphs with diameter $d$ \\ \hline\hline
$[4,3;3]$ & ${\bf 14}^\diamond$ & 6 & 8 & 18 & 1 \\ \hline
$[5,3;3]$ & {\bf 16} & 6 & 10 & 45 & 2 \\ \hline
$[6,3;3]$ & 24 & 8 & 16 & 977278 & 0 \\
& ${\bf 21}^\diamond$ & 7 & 14 & 7063 & 1 \\ \hline
$[7,3;3]$ & {\bf 20} & 6 & 14 & 344 & 4 \\ \hline
$[8,3;3]$ & {\bf 22} & 6 & 16 & 950 & 10 \\ \hline
$[9,3;3]$ & 32 & 8 & 24 & $>$122996904 & ? \\
& 28 & 7 & 21 & 2262100 & 1 \\ \hline
$[10,3;3]$ & {\bf 26} & 6 & 20 & 6197 & 19 \\ \hline
$[11,3;3]$ & {\bf 28} & 6 & 22 & 14815 & 16 \\ \hline
$[5,4;3]$ & 27& 12 & 15 & 822558 & 0 \\
& {\bf 18} & 8 & 10 & 3143 & 583 \\ \hline
$[3,2;4]$ & ${\bf 15}^\diamond$ & 6 & 9 & 6 & 1 \\ \hline
$[4,2;4]$ & ${\bf 24}^\diamond$ & 8 & 16 & 204 & 1 \\ \hline
$[3,2;5]$ & 20 & 8 & 12 & 20 & 0 \\
& ${\bf 15}^\diamond$ & 6 & 9 & 6 & 1 \\ \hline
\end{tabular}
\end{center}
\vskip-.25cm
\caption{Complete enumeration of bipartite biregular Moore graphs for some (small) cases of the degrees $r,s$ and diameter $d$.}\label{tab:comput}
\end{table}
Even with the computational limitations on the orders of the sets $V_1$ and $V_2$, some of the optimal values given in Tables \ref{tab:d=4}, \ref{tab:d=3}, and \ref{tab:d=5} have been found with this method. When there is no graph for the largest values of $N_1$ and $N_2$, the next feasible pair of values may decrease dramatically, producing a large number of optimal graphs, as happens in the case $[5,4;3]$. The particular case $[9,3;3]$ is computationally very hard and beyond our (very limited) computational resources, but our guess is that there is no Moore graph in this case. Finally, we point out that these results encourage us to study the case $s=d=3$ with $3 \nmid r$ in more detail (see Section \ref{sec:d=3}), where computational evidence suggests that there is always a Moore graph.
\section{Bipartite biregular Moore graphs from generalized polygons}
\label{gen-pols}
In the first part of this section, we recall the connection between Moore graphs and generalized polygons, which has been extensively studied (see, for instance, Bamberg, Bishnoi, and Royle \cite{BamBisRoy}), since we use it throughout the rest of the paper. In fact, our first result is an immediate consequence of a result proved by Araujo-Pardo, Jajcay, and Ramos in \cite{AraJajRam}, together with the analysis given in the introduction about the coincidence of the bounds for bipartite biregular cages and bipartite biregular Moore graphs when $d$ is even.
\begin{theorem}\cite{AraJajRam}
Whenever a generalized quadrangle, hexagon, or octagon
${\mathcal G}$ of order
$(s,t)$ exists, its point-line incidence graph is an $(s+1,t+1;8)$-,
$(s+1,t+1;12)$- or $(s+1,t+1;16)$-cage, respectively.
Hence, there exist infinite families of bipartite biregular
$(n+1,n^2+1;8)$-, $(n^2+1,n^3+1;8)$-, $(n,n+2;8)$-, $(n+1,n^3+1;12)$- and $(n+1,n^2+1;16)$-cages.
\end{theorem}
Then, we immediately conclude the following result.
\begin{theorem}
\label{bb-Moore}
There exist infinite families of bipartite biregular
$[r^2+1,r+1;4]$-, $[r^3+1,r^2+1;4]$-, $[r+2,r;4]$-, $[r^3+1,r+1;6]$- and $[r^2+1,r+1;8]$-bimoore graphs.
\end{theorem}
In the following, we give some results related to generalized $n$-gons that we use in the rest of the paper.
\begin{lemma}[\cite{VM98}, Lemma 1.3.6]\label{VMLemma}
A geometry $\Gamma = (\cal{P},\cal{L},{\bf I})$ is a (weak) generalized $n$-gon if and
only if the incidence graph of $\Gamma$ is a connected bipartite graph of diameter $d=n$ and
girth $g=2d$, such that each vertex is incident with at least three (at least two) edges.
\end{lemma}
Every generalized polygon $\Gamma$ can be associated with a pair $(r,s)$,
called the {\em order} of $\Gamma$, such that every line is incident with $r+1$
points, and every point is incident with $s+1$ lines (see van Maldeghem \cite{VM98}). This means,
in particular, that the incidence graph of $\Gamma$ is a bipartite biregular graph
with degrees $r+1$ and $s+1$. Besides, the following result
determines the orders of both partite sets.
\begin{theorem}[\cite{VM98}, Corollary 1.5.5]\label{PL}
If there exists a $\Gamma = (\cal{P},\cal{L},{\bf I})$ (weak) generalized $n$-gon of order $(r,s)$ for $n \in \{3,4,6,8\}$, then
\begin{itemize}
\item
For $n=3$, $|{\cal{P}}|= r^2+r+1$\ and\ $|{\cal{L}}|=s^2+s+1$.
\item
For $n=4$, $|{\cal{P}}|=(1+r)(1+rs)$\ and\ $|{\cal{L}}|=(1+s)(1+rs)$.
\item
For $n=6$, $|{\cal{P}}|=(1+r)(1+rs+r^2s^2)$\ and\ $|{\cal{L}}|=(1+s)(1+rs+r^2s^2)$.
\item
For $n=8$, $|{\cal{P}}|=(1+r)(1+rs)(1+r^2s^2)$\ and\ $|{\cal{L}}|=(1+s)(1+rs)(1+r^2s^2)$.
\end{itemize}
\end{theorem}
In 1964, Feit and Higman proved that finite generalized $n$-gons exist only for $n\in\{3,4,6,8\}$. When $n=3$, we have the projective planes; when $n=4$, we have the generalized quadrangles, which are known to exist for the parameter pairs
$(q,q)$, $(q,q^2)$, $(q^2,q)$, $(q^2,q^3)$,
$(q^3,q^2)$, $(q-1,q+1)$, $(q+1,q-1)$; when $n=6$, we have the generalized hexagons, with parameters
$(q,q)$, $(q,q^3)$, $(q^3,q)$, in both cases for $q$ a prime power; and, finally, for $n=8$, we have the generalized octagons, which are only known to exist for the pairs $(q,q^2)$, $(q^2,q)$, where $q$ is an odd power of $2$.
\section{Two general constructions}
\label{gen-cons}
In this section, we construct some infinite families of Moore or large semiregular bipartite graphs derived from two general constructions: the subdivision graphs and the semi-double graphs.
\subsection{The subdivision graphs}
\label{sec:subdiv-graphs}
Given a graph $G=(V,E)$, its {\em subdivision graph} $S(G)$ is obtained by inserting a new vertex in every edge of $G$. So, every edge $e=uv\in E$ becomes two new edges, $ux$ and $xv$, with new vertex $x$ of degree two ($\deg(x)=2$).
In our context, we have the following result.
\begin{proposition}
\label{propo:subdiv}
Let $G=(V_1\cup V_2,E)$ be an $r$-regular bipartite graph with $n$ vertices, $m$ edges,
diameter $D(\ge 2)$, and spectrum
$\spec G=\{\lambda_0^{m_0},\lambda_1^{m_1},\ldots,\lambda_{d-1}^{m_{d-1}},\lambda_d^{m_d}\}$,
where $\lambda_i=-\lambda_{d-i}$ and $m_i=m_{d-i}$ for $i=0,\ldots,\lfloor d/2\rfloor$. Then, its
{\em subdivision graph} $S(G)$ is a bipartite biregular graph with degrees $(r,2)$, $n+m$ vertices, $2m$ edges, diameter $2D$, and spectrum
\begin{equation}
\label{spec-double}
\spec S(G)= \pm\sqrt{\spec G+r} \cup \{0^{m-n}\}.
\end{equation}
\end{proposition}
\begin{proof}
Let $S(G)$ have partite sets $U_1$ and $U_2$, with $U_1=V_1\cup V_2$ (the old vertices of $G$) and $U_2$ consisting of the new vertices of degree $2$. Let $x_{uv}\in U_2$ denote the vertex inserted in the former edge $uv$.
The first statement is obvious. To prove that the diameter of $S(G)$ is $2D$, we consider three cases:
\begin{itemize}
\item[$(i)$]
Since $G$ has diameter $D$, there is a path of length $\ell \le D$ from any vertex $u\in U_1$ to any other vertex $v\in U_1$, say $u_0(=u),u_1,u_2,\ldots,u_{\ell}$. This path clearly induces a path of length $2\ell$ in $S(G)$, namely, $u_0(=u),x_{u_0u_1},u_1,x_{u_1u_2},u_2,\ldots,x_{u_{\ell-1}u_{\ell}},u_{\ell}$.
\item[$(ii)$]
Since $G$ is bipartite, there is a path of length $\ell\le D-1$ between any two vertices in the same partite set (if $D$ is odd) or in different partite sets (if $D$ is even). Thus, from a vertex $x_{u_1u_2}\in U_2$, there is a path of length $2\ell+1\le 2D-1$ to any vertex $v\in U_1$. (This is because either $u_1$ or $u_2$ is at distance $\ell\le D-1$ from $v$.)
\item[$(iii)$]
Finally, a path of length at most $2D$ between vertices $x_{u_1u_2},x_{v_1v_2}\in U_2$ is obtained by considering first the path from $x_{u_1u_2}$ to $v_i$, for some $i\in\{1,2\}$ (which, according to $(ii)$, has length at most $2D-1$), together with the edge $v_ix_{v_1v_2}$.
\end{itemize}
The result about the spectrum of $S(G)$ follows from a result by Cvetkovi\'{c}~\cite{Cv75}, who proved that, if $G$ is an $r$-regular graph with $n$ vertices and $m\big(=\frac{1}{2}nr\big)$ edges, then the characteristic polynomials of $S(G)$ and $G$ satisfy
$\phi_{S(G)}(x) = x^{m-n} \phi_{G}(x^2-r)$.
\end{proof}
For instance, the $[2,r;4]$-Moore graphs proposed by Yebra, Fiol, and F\`abrega \cite{YFF83}, with $N_1=2r$ and $N_2=r^2$, can be obtained as the subdivision graphs $S(K_{r,r})$ (see the values in the column $s=2$ of Table \ref{tab:d=4}).
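This instance is small enough to check directly. The following sketch (our own helper names) subdivides $K_{3,3}$ and verifies, in accordance with Proposition \ref{propo:subdiv}, that $S(K_{3,3})$ has $6+9=15$ vertices, degrees $3$ and $2$, and diameter $2\cdot 2=4$:

```python
from collections import deque

def diameter(adj):
    # Largest BFS eccentricity (graph assumed connected).
    ecc = 0
    for s0 in adj:
        dist = {s0: 0}
        q = deque([s0])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc = max(ecc, max(dist.values()))
    return ecc

def subdivide(adj):
    # Insert a new vertex ('x', u, v) in the middle of every edge {u, v}.
    new = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            if u < v:
                x = ("x", u, v)
                new[x] = [u, v]
                new[u].append(x)
                new[v].append(x)
    return new

# K_{3,3} with parts {0,1,2} and {3,4,5}.
k33 = {u: list(range(3, 6)) for u in range(3)}
k33.update({v: list(range(3)) for v in range(3, 6)})
sub = subdivide(k33)
degs = sorted(len(nb) for nb in sub.values())
print(len(sub), diameter(sub), degs[0], degs[-1])  # 15 4 2 3
```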
For larger diameters, we can use the same construction with the known Moore bipartite graphs, which correspond to the incidence graphs of generalized polygons with $r=s$.
\begin{proposition}
\label{bb(s=2,d=6)-Moore}
For any value of $r\ge 3$, with $r-1$ a prime power, there exist three infinite families of bimoore graphs with corresponding parameters
$[r,2;2m]$ for $m\in\{3,4,6\}$.
\end{proposition}
\begin{proof}
According to \eqref{Moore-even}, a bipartite biregular Moore graph with degrees $r$ and $2$ and diameter $d=2m$ has order $M(r,2;d)=\frac{r+2}{r-2}[(r-1)^m -1]$. Then, from Proposition \ref{propo:subdiv}, these are the parameters obtained when considering the subdivision graph $S(G)$ of a bipartite Moore graph $G$ of degree $r$ and diameter $m$. (For diameter $d=6$, see the values in the column $s=2$ of Table \ref{tab:d=6}.)
\end{proof}
\subsection{The semi-double graphs}
Let $G$ be a bipartite graph with stable sets $V_1$ and $V_2$.
Given $i\in \{1,2\}$, the {\em semi-double(-$V_i$) graph} $G^{2V_i}$ is obtained from $G$ by doubling each vertex of $V_i$, so that each vertex $u\in V_i$ gives rise to another vertex $u'$ with the same neighborhood as $u$, that is, $G(u')=G(u)$. Thus, assuming, without loss of generality, that $i=1$, the graph $G^{2V_1}$ is bipartite with stable sets $V_1\cup V_1'$ and $V_2$, and satisfies the following result.
\begin{theorem}
\label{th:semi-double}
Let $G=(V_1\cup V_2,E)$ be a bipartite graph on $n=n_1+n_2=|V_1|+|V_2|$ vertices, diameter $D(\ge 2)$, and spectrum
$\spec G$.
Then, its semi-double graph $G^{2V_1}$, on $N=2n_1+n_2$ vertices, has the same diameter $D$, and spectrum
\begin{equation}
\label{spec-semi-double}
\spec G^{2V_1}=\sqrt{2}\cdot \spec G \cup \{0^{n_1}\}.
\end{equation}
\end{theorem}
\begin{proof}
Let $p : u_1(=u),u_2,\ldots,u_{\delta-1},u_{\delta}(=v)$ be a shortest path in $G$ between vertices $u$ and $v(\neq u')$, for $2\le \delta\le D$.
If $u,v\in V_1\cup V_2$, then $p$ is also a shortest path in $G^{2V_1}$. Otherwise, the following are also shortest paths in $G^{2V_1}$:
\begin{align*}
& u_1(=u),u_2,\ldots,u_{\delta-1},u_{\delta}'(=v');\\
& u_1'(=u'),u_2,\ldots,u_{\delta-1},u_{\delta}(=v);\\
& u_1'(=u'),u_2,\ldots,u_{\delta-1},u_{\delta}'(=v').
\end{align*}
Finally, the distance from $u$ to $u'$ is clearly two.
\\
To prove \eqref{spec-semi-double}, notice that the adjacency matrices $\mbox{\boldmath $A$}$ and $\mbox{\boldmath $A$}^{[1]}$ of $G$ and $G^{2V_1}$, are
$$
\mbox{\boldmath $A$}=\left(
\begin{array}{c|c}
\mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$} \\
\hline \\
[-.4 cm]
\mbox{\boldmath $N$}^{\top} & \mbox{\normalfont\Large\bfseries 0}
\end{array}
\right)\qquad \mbox{and}\qquad
\mbox{\boldmath $A$}^{[1]}
=\left(
\begin{array}{c|c|c}
\mbox{\normalfont\Large\bfseries 0} & \mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$} \\
\hline \\
[-.4 cm]
\mbox{\normalfont\Large\bfseries 0} & \mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$}\\
\hline \\
[-.4 cm]
\mbox{\boldmath $N$}^{\top} & \mbox{\boldmath $N$}^{\top} & \mbox{\normalfont\Large\bfseries 0}
\end{array}
\right),
$$
respectively, where $\mbox{\boldmath $N$}$ is an $n_1\times n_2$ matrix. Now, we claim that, if $\mbox{\boldmath $v$}=(\mbox{\boldmath $v$}_1 | \mbox{\boldmath $v$}_2)^{\top}$ is a $\lambda$-eigenvector of $\mbox{\boldmath $A$}$, then $\mbox{\boldmath $v$}^{[1]}=(\mbox{\boldmath $v$}_1 | \mbox{\boldmath $v$}_1| \sqrt{2}\mbox{\boldmath $v$} _2)^{\top}$ is a $\sqrt{2}\lambda$-eigenvector of $\mbox{\boldmath $A$}^{[1]}$. Indeed, from
\begin{equation*}
\mbox{\boldmath $A$}\mbox{\boldmath $v$}=\left(
\begin{array}{c|c}
\mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$} \\
\hline \\
[-.4 cm]
\mbox{\boldmath $N$}^{\top} & \mbox{\normalfont\Large\bfseries 0}
\end{array}
\right)
\left(
\begin{array}{c}
\mbox{\boldmath $v$}_1\\
\hline
\mbox{\boldmath $v$}_2
\end{array}
\right)
=\lambda \left(
\begin{array}{c}
\mbox{\boldmath $v$}_1\\
\hline
\mbox{\boldmath $v$}_2
\end{array}
\right)
\qquad \Rightarrow \qquad \mbox{\boldmath $N$}\mbox{\boldmath $v$} _2=\lambda \mbox{\boldmath $v$} _1\quad{\rm and}\quad \mbox{\boldmath $N$}^{\top}\mbox{\boldmath $v$} _1=\lambda \mbox{\boldmath $v$} _2,
\end{equation*}
we have
$$
\mbox{\boldmath $A$}^{[1]}\mbox{\boldmath $v$}^{[1]}=
\left(
\begin{array}{c|c|c}
\mbox{\normalfont\Large\bfseries 0} & \mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$} \\
\hline \\
[-.4 cm]
\mbox{\normalfont\Large\bfseries 0} & \mbox{\normalfont\Large\bfseries 0} & \mbox{\boldmath $N$} \\
\hline \\
[-.4 cm]
\mbox{\boldmath $N$}^{\top} & \mbox{\boldmath $N$}^{\top} & \mbox{\normalfont\Large\bfseries 0}
\end{array}
\right)
\left(
\begin{array}{c}
\mbox{\boldmath $v$}_1\\
\hline
\mbox{\boldmath $v$}_1\\
\hline \\
[-.45 cm]
\sqrt{2}\mbox{\boldmath $v$} _2
\end{array}
\right)=
\left(
\begin{array}{c}
\sqrt{2}\mbox{\boldmath $N$}\mbox{\boldmath $v$} _2\\
\hline
\sqrt{2}\mbox{\boldmath $N$}\mbox{\boldmath $v$} _2\\
\hline \\
[-.45 cm]
2\mbox{\boldmath $N$}^{\top}\mbox{\boldmath $v$} _1
\end{array}
\right)=
\left(
\begin{array}{c}
\sqrt{2}\lambda \mbox{\boldmath $v$} _1\\
\hline
\sqrt{2}\lambda \mbox{\boldmath $v$} _1\\
\hline
2\lambda \mbox{\boldmath $v$} _2
\end{array}
\right)
=\sqrt{2}\lambda \mbox{\boldmath $v$} ^{[1]}.
$$
Moreover, if $\mbox{\boldmath $U$}=\left(
\begin{array}{c}
\mbox{\boldmath $U$}_1\\
\hline
\mbox{\boldmath $U$}_2
\end{array}
\right)$
is a matrix whose columns are the $n$ independent eigenvectors of $\mbox{\boldmath $A$}$, then the $n$ columns of the matrix
$\mbox{\boldmath $U$}'=\left(
\begin{array}{c}
\mbox{\boldmath $U$}_1\\
\hline
\mbox{\boldmath $U$}_1\\
\hline \\
[-.45 cm]
\sqrt{2}\mbox{\boldmath $U$}_2
\end{array}
\right)$
are also linearly independent since, clearly, $\rank \mbox{\boldmath $U$}'=\rank \mbox{\boldmath $U$}=n_1+n_2$.
Consequently, $\spec \mbox{\boldmath $A$}\subset \spec \mbox{\boldmath $A$}^{[1]}$. Finally, each of the remaining $n_1$ eigenvalues $0$ corresponds to an eigenvector with $u$-th component $+1$ and $u'$-th component $-1$, for a given $u\in V_1$, and $0$ elsewhere. This is because the matrix $\mbox{\boldmath $U$}^{[1]}$ obtained by extending $\mbox{\boldmath $U$}'$ with such eigenvectors, that is,
$$
\mbox{\boldmath $U$}^{[1]}=\left(
\begin{array}{c|c}
\mbox{\boldmath $U$}_1 & \mbox{\boldmath $I$} \\
\hline
\mbox{\boldmath $U$}_1 & -\mbox{\boldmath $I$} \\
\hline \\
[-.45 cm]
\sqrt{2}\mbox{\boldmath $U$}_2 & \mbox{\normalfont\Large\bfseries 0}
\end{array}
\right)
$$
has rank $N=2n_1+n_2$, as required.
\end{proof}
In general, we can consider the $k$-tuple graph $G^{kV_i}$, which is defined as expected by replacing each vertex $u\in V_i$ of $G$ with $k$ vertices $u_1,\ldots,u_k$ having the same adjacencies as $u$. Then, a reasoning similar to that in the proof of Theorem \ref{th:semi-double} leads to the following result.
\begin{theorem}
\label{th:k-tuple}
Let $G=(V_1\cup V_2,E)$ be a bipartite graph on $n=n_1+n_2=|V_1|+|V_2|$ vertices, diameter $D(\ge 2)$, and spectrum
$\spec G$.
Then, its $k$-tuple graph $G^{kV_1}$, on $N=kn_1+n_2$ vertices, has the same diameter $D$, and spectrum
\begin{equation}
\label{spe-k-yuble}
\spec G^{kV_1}=\sqrt{k}\cdot \spec G \cup \{0^{(k-1)n_1}\}.
\end{equation}
\end{theorem}
As a consequence of Theorem \ref{th:semi-double}, we introduce a family of $[r,2r;d]$-graphs for $d\in\{3,4,6\}$, using the existence of bipartite Moore graphs of order $M(r;d)$ for these values of $d$ and $r-1$ a prime power, that is, the incidence graphs of the aforementioned generalized polygons.
\begin{theorem}
\label{r2r}
The following are $[r,2r;d]$-biregular bipartite graphs for $r\geq 3$, $r-1$ a prime power, and diameter $d\in \{3,4,6\}$.
\begin{itemize}
\item[$(i)$]
An $[r,2r;3]$-biregular bipartite graph has order $n=3r^2-3r+3$ with defect $\delta=\frac{3}{2}(r-1)$ for odd $r$, and $\delta=\frac{3}{2}r-3$ for even $r$.
\item[$(ii)$]
An $[r,2r;4]$-biregular bipartite graph has order $n=3r^3-6r^2+6r$ with defect $\delta=3r^3-3r^2$.
\item[$(iii)$]
An $[r,2r;6]$-biregular bipartite graph has order $n=3r^5-12r^4+21r^3-18r^2+9r$ with defect $\delta=9r^5-24r^4+24r^3-9r^2$.
\end{itemize}
\end{theorem}
\begin{proof}
$(i)$ Let $G$ be the $[r;3]$-Moore graph, that is, the incidence graph of a projective plane of order $r-1$. As already mentioned, $G$ is bipartite, has $2r^2-2r+2$ vertices, and diameter $3$.
Then, by Theorem \ref{th:semi-double}, the semi-double graph $G^{2V_1}$ (or $G^{2V_2}$) is
a bipartite graph on $3r^2-3r+3$ vertices, biregular with degrees $r$ and $2r$, and of diameter $3$.
In terms of the projective plane, this corresponds to duplicating, for instance, each line, so that each point lies on $2r$ lines, and each line still has $r$ points. Then, $G^{2V_1}$ is the incidence graph of this new incidence geometry ${\cal{I}}_G$.
Moreover, the Moore bound \eqref{Moore-even} is
$M(2r,r;3)=3r^2-\frac{3}{2}(r-1)$ if $r$ is odd, and $M(2r,r;3)=3r^2-\frac{3}{2}r$ if $r$ is even. Thus, a simple calculation gives that the defect of $G^{2V_1}$ is $\delta=\frac{3}{2}(r-1)$ for $r$ odd and $\delta=\frac{3}{2}r-3$ for $r$ even.
Figure \ref{3unics}$(c)$ depicts the only $[6,3;3]$-biregular bipartite graph of order $21$ obtained from the Heawood graph (that is, the incidence graph of the Fano plane).
Notice that, by Corollary \ref{coro:2s-s}, this is a Moore graph since it has the maximum possible number of vertices (see Table \ref{tab:d=3}).
$(ii)$ In this case, we apply Theorem \ref{th:semi-double} to $G$ being the $[r;4]$-Moore graph (that is, the incidence graph of a generalized quadrangle of order $r-1$). Now the semi-double graph $G^{2V_1}$ is $[r,2r;4]$-bipartite biregular
with $3[(r-1)^3+(r-1)^2+(r-1)+1]=3r^3-6r^2+6r$ vertices, whereas the Moore bound in \eqref{Moore-even} is $M(2r,r;4)=6r^3-9r^2+6r$. Hence, the defect is $\delta=3r^3-3r^2$.
For example, the $[3,6;4]$-bigraph of order $45$ obtained from the Tutte--Coxeter graph (that is, the incidence graph of the generalized quadrangle of order $2$) has defect $\delta=54$.
$(iii)$ Finally, to obtain an $[r,2r;6]$-bigraph, we apply Theorem \ref{th:semi-double} to the $[r;6]$-Moore graph (the incidence graph of a generalized hexagon of order $r-1$). Then, we obtain an $[r,2r;6]$-bigraph with
$3r^5-12r^4+21r^3-18r^2+9r$
vertices, whereas the Moore bound in \eqref{Moore-even} is $M(2r,r;6)=12r^5-36r^4+45r^3-27r^2+9r$, yielding a defect $\delta=9r^5-24r^4+24r^3-9r^2$.
\end{proof}
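The case $r=3$ of statement $(i)$ can be verified directly by computer. In the sketch below (our own construction and labels), the Heawood graph is built from its LCF notation $[5,-5]^7$, one partite class is semi-doubled, and the resulting graph is checked to have $21$ vertices, degrees $3$ and $6$, and diameter $3$, as the $[6,3;3]$-bimoore graph should:

```python
from collections import deque

def diameter(adj):
    # Largest BFS eccentricity (graph assumed connected).
    ecc = 0
    for s0 in adj:
        dist = {s0: 0}
        q = deque([s0])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc = max(ecc, max(dist.values()))
    return ecc

# Heawood graph from LCF notation [5,-5]^7: a 14-cycle plus chords i ~ i+5
# for even i. Its partite classes are the even and the odd vertices.
hw = {i: {(i - 1) % 14, (i + 1) % 14} for i in range(14)}
for i in range(0, 14, 2):
    hw[i].add((i + 5) % 14)
    hw[(i + 5) % 14].add(i)

# Semi-double the odd part: each odd u gets a twin ('t', u) with the same
# neighbourhood, so the even vertices end up with degree 6.
sd = {u: set(nb) for u, nb in hw.items()}
for u in range(1, 14, 2):
    sd[("t", u)] = set(hw[u])
    for v in hw[u]:
        sd[v].add(("t", u))

degs = sorted(len(nb) for nb in sd.values())
print(len(sd), diameter(sd), degs[0], degs[-1])  # 21 3 3 6
```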
Notice that, in Theorem \ref{r2r}, we only state results for Moore graphs constructed from regular generalized polygons; clearly, it is possible to do the same starting with the biregular bipartite Moore graphs of diameters $6$ and $8$ given in Theorem \ref{bb-Moore} but, since the bounds are far from tight, we
restricted the details to the best cases.
\section{Bipartite biregular Moore graphs of diameter $3$}
\label{sec:d=3}
We have already seen some examples of large biregular graphs with diameter three, namely, the Moore graphs with degrees $(3,4)$, $(3,5)$, and $(3,6)$ of Figure \ref{3unics} (the last one in Theorem \ref{r2r}$(i)$).
Now, we begin this section with the simple case of bimoore graphs with degrees $r$ (even) and $2$, and diameter $3$. For these values, the Moore bounds in \eqref{Moore-odd} and \eqref{Moore-even} turn out to be $M(r,2;3)=2+r$ when $r(>1)$ is odd, and $M(r,2;3)=3\left(1+\frac{r}{2}\right)$ ($N_1=3$ and $N_2=3r/2$) when $r(>2)$ is even, respectively.
In the first case, the bound is attained by the complete bipartite graph $K_{2,r}$ and, hence, the diameter is, in fact, $2$.
In the second case, the Moore graphs are obtained via Theorem \ref{th:k-tuple}. Let $G$ be a hexagon (or $6$-cycle) with vertex set $V_1\cup V_2=\{u_1,u_3,u_5\}\cup \{u_2,u_4,u_6\}$. Then, the $k$-tuple $G^{kV_1}$ with $k=r/2$ is an $[r,2;3]$-bimoore graph on $3\left(1+\frac{r}{2}\right)$ vertices (see Table \ref{tab:d=3}).
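This $k$-tuple construction is easy to verify by computer. The sketch below (our own labels) builds the $k$-tuple of one part of the hexagon for $k=2$ and checks that the result has $3(1+k)=9$ vertices, degrees $2k=4$ and $2$, and diameter $3$, attaining the Moore bound for degrees $(2k,2)$ and diameter $3$:

```python
from collections import deque

def diameter(adj):
    # Largest BFS eccentricity (graph assumed connected).
    ecc = 0
    for s0 in adj:
        dist = {s0: 0}
        q = deque([s0])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc = max(ecc, max(dist.values()))
    return ecc

# 6-cycle with parts A = {0,2,4} and B = {1,3,5}; k-tuple the part A.
k = 2
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
kt = {v: set() for v in range(1, 6, 2)}      # part B keeps its vertices
for u in range(0, 6, 2):                     # every u in A becomes k copies
    for c in range(k):
        kt[(u, c)] = set(hexagon[u])
        for v in hexagon[u]:
            kt[v].add((u, c))

degs = sorted(len(nb) for nb in kt.values())
print(len(kt), diameter(kt), degs[0], degs[-1])  # 9 3 2 4
```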
In general, and in terms of designs, to guarantee that the diameter is equal to $3$, it is necessary that any pair of points shares a block, and any pair of blocks has a non-empty intersection.
It is well known that this kind of structure exists when $r=s$ and $r-1$ is a prime power, namely, the so-called projective plane of order $r-1$. In this case, any pair of blocks (or lines) intersects in one and only one point and, for any pair of points, they share one and only one block (or line).
For more details about projective planes, see, for instance, Coxeter \cite{Cox93}. In our constructions of block designs giving graphs of diameter $3$, this condition is not necessary: two blocks can intersect in more than one point, and two points can share more than one block. Moreover, we may have the same block appearing more than once.
\subsection{Existence of bipartite biregular Moore graphs of diameter 3 and degrees $r\ge 3$ and $s=3$}
Let us consider the diameter-$3$ case with $r>s=3$. According to Eqs. \eqref{N1-odd} and \eqref{N2-odd}, the maximum numbers of vertices of the partite sets are, in this case, $N_1'=2r+1$ and $N_2'=3r-2$. We recall that the equality $N_1'r=3N_2'$ does not hold when the diameter is odd. Since
$$
\rho=\frac{r}{\gcd\{r,3\}}=\left\{\begin{array}{cl}
r & \textrm{if} \ \ 3 \nmid r \\
\frac{r}{3} & \textrm{otherwise}
\end{array}\right.
\quad \textrm{and} \quad \sigma=\frac{3}{\gcd\{r,3\}}=\left\{\begin{array}{cl}
3 & \textrm{if} \ \ 3 \nmid r \\
1 & \textrm{otherwise}
\end{array}\right.
$$
we obtain the Moore-like bounds (see \eqref{N1-N2-odd}):
\[
N_1 \leq \left\lfloor \frac{3r-2}{\rho}\right\rfloor \sigma = \left\{\begin{array}{cl}
3\omega & \textrm{if} \ \ 3 \nmid r \\
\omega' & \textrm{otherwise}
\end{array}\right.
\quad \textrm{and} \quad N_2\le \left\lfloor \frac{3r-2}{\rho}\right\rfloor \rho = \left\{\begin{array}{cl}
r\omega & \textrm{if} \ \ 3 \nmid r \\
\frac{r\omega'}{3} & \textrm{otherwise}
\end{array}\right.
\]
where $\omega=\left\lfloor \frac{3r-2}{r}\right\rfloor=2$ and $\omega'=\left\lfloor \frac{3r-2}{r/3}\right\rfloor=8$. As a consequence, whenever $3 \nmid r$, we obtain Moore-like bounds $N_1\leq 6$ and $N_2 \leq 2r$. Otherwise, when $3|r$, we have
the bounds $N_1\leq 8$ and $N_2 \leq \frac{8}{3}r$.
The following graphs have orders that either attain or are close to such Moore bounds.
Given an integer $n\ge 6$, let $G_{6+n}=(V_1 \cup V_2, E)$ be the bipartite graph with independent sets $V_1=\{(0,j) \ | \ j \in \mathbb{Z}_6\}$ and $V_2=\{(1,i) \ | \ i \in \mathbb{Z}_{n}\}$, where $(1,i) \sim (0,j)$ if and only if $j \equiv i \pmod{6}$, $j \equiv i+1 \pmod{6}$, or $j \equiv i+3 \pmod{6}$.
Thus, $G_{6+n}$ is a bipartite graph on $n+6$ vertices, where every vertex $v\in V_2$ has degree $3$ and, assuming that $n=6k+\rho$ (that is, $n\equiv \rho \pmod{6}$), each vertex $u\in V_1$ has the degree indicated in Table \ref{tab:degreesV1}.
Indeed, let us consider the case $\rho=2$ (the other cases are analogous), where $n=6k+2$ and $V_2= \{0,1,\ldots,6k+1\}$. According to the adjacency rules,
the vertex $(0,i)\in V_1$, for $i\in \mathbb{Z}_6$, is adjacent to all the vertices $j\in V_2$ with $j\equiv i,i-1,i-3 \pmod{6}$. Then, we have the following cases:
\begin{itemize}
\item
If $i=0$, $V_2$ contains $k+1$ numbers $j\equiv 0 \pmod6$, $k$ numbers $j\equiv-1\equiv 5 \pmod6$, and $k$ numbers $j\equiv-3\equiv 3 \pmod6$. Thus, the degree of $(0,0)$ is $3k+1$.
\item
If $i=1$, $V_2$ contains $k+1$ numbers $j\equiv 1 \pmod6$, $k+1$ numbers $j\equiv0 \pmod6$, and $k$ numbers $j\equiv-2\equiv 4 \pmod6$. Thus, the degree of $(0,1)$ is $3k+2$.
\item
If $i=2$, $V_2$ contains $k$ numbers $j\equiv 2 \pmod6$, $k+1$ numbers $j\equiv1 \pmod6$, and $k$ numbers $j\equiv-1\equiv 5 \pmod6$. Thus, the degree of $(0,2)$ is $3k+1$.
\item[$\vdots$]
\item
If $i=5$, $V_2$ contains $k$ numbers $j\equiv 5 \pmod6$, $k$ numbers $j\equiv4 \pmod6$, and $k$ numbers $j\equiv2 \pmod6$. Thus, the degree of $(0,5)$ is $3k$.
\end{itemize}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$\rho\setminus u$ & (0,0) & (0,1) & (0,2) & (0,3) & (0,4) & (0,5) \\
\hline
0 & $3k$ & $3k$ & $3k$ & $3k$ & $3k$ & $3k$ \\
1 & $3k+1$ & $3k+1$ & $3k$ & $3k+1$ & $3k$ & $3k$ \\
2 & {\boldmath $3k+1$} & {\boldmath $3k+2$} & {\boldmath $3k+1$} & {\boldmath $3k+1$} & {\boldmath $3k+1$} & {\boldmath $3k$} \\
3 & $3k+1$ & $3k+2$ & $3k+2$ & $3k+2$ & $3k+1$ & $3k+1$ \\
4 & {\boldmath $3k+2$} & {\boldmath $3k+2$} & {\boldmath $3k+2$} & {\boldmath $3k+3$} & {\boldmath $3k+2$} & {\boldmath $3k+1$} \\
5 & $3k+2$ & $3k+3$ & $3k+2$ & $3k+3$ & $3k+3$ & $3k+2$ \\
\hline
\end{tabular}
\end{center}
\caption{Degrees of the vertices $u=(0,j)\in V_1$ when $|V_2|=n=6k+\rho$. }
\label{tab:degreesV1}
\end{table}
\begin{proposition}
\label{G(6+n)}
The diameter of the bipartite graph $G_{6+n}=(V_1\cup V_2, E)$, on $n+6$ vertices, is $d=3$.
\end{proposition}
\begin{proof}
It suffices to prove that every pair of different vertices $u,u'\in V_1$, and every pair of different vertices $v,v'\in V_2$, are at distance two.
In the first case, notice that $u=(0,j)\in V_1$ is adjacent to every vertex $v=(1,i)\in V_2$ such that $i\equiv j,j-1, j-3 \pmod6$.
Moreover, vertex $(1,j)\in V_2$ is adjacent to vertices $(0,j+1),(0,j+3)\in V_1$, vertex $(1,j-1)\in V_2$ is adjacent to vertices
$(0,j-1),(0,j+2)\in V_1$, and vertex $(1,j-3)\in V_2$ is adjacent to vertices
$(0,j-3),(0,j-2)\in V_1$. Schematically (with all arithmetic modulo 6),
\begin{align}
(0,j)\quad & \sim \quad (1, j), (1,j-1), (1, j-3) \label{u-u'}\\
& \sim \quad (0,j+1),(0,j+3),(0,j-1),(0,j+2),(0,j-3),(0,j-2) \nonumber,
\end{align}
which are all vertices of $V_1$ different from $(0,j)$.
Similarly, starting from a vertex of $V_2$, we have
\begin{align*}
(1,i)\quad & \sim \quad (0, i), (0,i+1), (0, i+3)\\
& \sim \quad (1,i), (1,i-1),(1,i-3),(1,i+1),(1,i-2),(1,i+3),(1,i+2),
\end{align*}
where the second entries cover all residues modulo $6$ and, hence, all vertices of $V_2$ are reached. This completes the proof.
\end{proof}
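Proposition \ref{G(6+n)} can also be checked by computer for small $n$. The following sketch (our own labels) builds $G_{6+n}$ from the adjacency rules above and verifies that its diameter is $3$ for every $6\le n\le 29$:

```python
from collections import deque

def diameter(adj):
    # Largest BFS eccentricity (graph assumed connected).
    ecc = 0
    for s0 in adj:
        dist = {s0: 0}
        q = deque([s0])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc = max(ecc, max(dist.values()))
    return ecc

def G(n):
    # G_{6+n}: V1 = Z_6, V2 = Z_n, with (1,i) ~ (0,j) iff j = i, i+1 or i+3 (mod 6).
    adj = {("v1", j): set() for j in range(6)}
    for i in range(n):
        adj[("v2", i)] = set()
        for t in (0, 1, 3):
            j = (i + t) % 6
            adj[("v2", i)].add(("v1", j))
            adj[("v1", j)].add(("v2", i))
    return adj

assert all(diameter(G(n)) == 3 for n in range(6, 30))
# Degrees on V1 for n = 14 (k = 2, rho = 2), matching Table tab:degreesV1.
print(sorted(len(G(14)[("v1", j)]) for j in range(6)))  # [6, 7, 7, 7, 7, 8]
```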
The following result proves the existence of an infinite family of Moore bipartite biregular graphs with diameter three.
\begin{proposition}
\label{propo:(r,3)}
For any integer $r\ge 6$ such that $3 \nmid r$, there exists a bipartite graph $G$ on $2r+6$ vertices (the Moore bound for the case of degrees $(r,3)$ and diameter $3$) with degrees $3$, $r$, and $r\pm 1$, and diameter $d=3$.
Moreover, when $r\equiv 2 \pmod 3$, there exists a Moore bipartite biregular graph with degrees $(r,3)$ and diameter $3$.
\end{proposition}
\begin{proof}
From the comments at the beginning of this subsection, if $3 \nmid r$, the graph $G_{6+n}$ of Proposition \ref{G(6+n)} with $n=2r$ has the maximum order for diameter $3$. However, as shown before, not every vertex $u\in V_1$ has degree $r$, as is required for a bipartite biregular Moore graph. More precisely, if $2r=6k+\rho$ (with $\rho=2$ or $\rho=4$, since $3 \nmid r$), from the two rows in boldface of Table \ref{tab:degreesV1}, we have
$$
\deg(u)=\left\{\begin{array}{cl}
r+1 & \textrm{if} \ \ u=(0,\rho-1), \\
r-1 & \textrm{if} \ \ u=(0,5), \\
r & \textrm{otherwise}. \\
\end{array}\right.
$$
This proves the first statement.
When $r\equiv 2 \pmod3$, that is, $\rho=4$, we obtain biregularity by modifying only one adjacency of $G_{6+n}$, as follows. Let $G'_r$ be the graph $G_{6+2r}$ defined above, but where the edge $(0,3)\sim(1,i)$, for some $i\equiv 0 \pmod 3$, is switched to $(0,5)\sim(1,i)$. Then, $G'_r$ is a bipartite biregular Moore graph of diameter $3$ for all $r\ge 5$.
To prove that $G'_r$ has diameter $d=3$, let us check again that every pair of distinct vertices $u,u'\in V_1$, and every pair of distinct vertices $v,v'\in V_2$, are at distance two.
\begin{itemize}
\item
If $u,u'\in V_1$, we only need to consider the case when $u=(0,3)$. Then, the paths are as follows (where $(1,i)$ represents any vertex $(1,i')$ with $i'\equiv i \pmod6$):
\begin{align*}
(0,3)\quad & \sim \quad (1, 3), (1,2), (1, 0)\quad \sim \quad (0,4),(0,0),(0,2),(0,5),(0,1),
\end{align*}
because $(0,3)$ was initially adjacent to more than one vertex of type $(1,i)$ with $i\equiv 0 \pmod 3$. Thus, all vertices of $V_1$ are reached from $(0,3)$.
\item
If $v,v'\in V_2$, assume that $v=(1,i)$ with $i\equiv 0 \pmod6$ (the case where $i\equiv 3 \pmod6$ is similar). Then,
\begin{itemize}
\item
If $v=(1,0)$, we have the paths
\begin{align*}
(1,0)\quad & \sim \quad (0, 0), (0,1), (0,5)
\quad \sim \quad (1,0),(1,3),(1,1),(1,4),(1,2).
\end{align*}
Notice that, in this case, the first step is not $(1,0)\sim (0,3)$ (deleted edge), but $(1,0)\sim (0,5)$. Despite this, we still reach all vertices of $V_2$, as required.
\item
If $v=(1,i)$ with $i\neq 0$, the first step is
\begin{align*}
(1,i)\quad & \sim \quad (0, i), (0,i+1), (0,i+3)
\end{align*}
So, the only problem would arise when one of $i$, $i+1$, $i+3$ equals $3$, since we do not have the adjacency $(0,3)\sim (1,0)$. But, if so, we have the following alternative adjacencies: if $i=0,3$, we have $(0,0)\sim (1,0)$; and if $i=2$, we have $(0,5)\sim (1,0)$.
Again, all vertices of $V_2$ are reached, completing the proof.
\end{itemize}
\end{itemize}
\vskip-.5cm
\end{proof}
| {
"timestamp": "2021-03-23T01:21:11",
"yymm": "2103",
"arxiv_id": "2103.11443",
"language": "en",
"url": "https://arxiv.org/abs/2103.11443",
"abstract": "A bipartite graph $G=(V,E)$ with $V=V_1\\cup V_2$ is biregular if all the vertices of a stable set $V_i$ have the same degree $r_i$ for $i=1,2$. In this paper, we give an improved new Moore bound for an infinite family of such graphs with odd diameter. This problem was introduced in 1983 by Yebra, Fiol, and Fàbrega.\\\\ Besides, we propose some constructions of bipartite biregular graphs with diameter $d$ and large number of vertices $N(r_1,r_2;d)$, together with their spectra. In some cases of diameters $d=3$, $4$, and $5$, the new graphs attaining the Moore bound are unique up to isomorphism.",
"subjects": "Combinatorics (math.CO)",
"title": "Bipartite biregular Moore graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363531263359,
"lm_q2_score": 0.8128673087708699,
"lm_q1q2_score": 0.8001348422911373
} |
\bigskip
\noindent\textbf{Berge cycles in non-uniform hypergraphs} (arXiv:2002.01597)

\noindent\textbf{Abstract.} We consider two extremal problems for set systems without long Berge cycles. First we give Dirac-type minimum degree conditions that force long Berge cycles. Next we give an upper bound for the number of hyperedges in a hypergraph with bounded circumference. Both results are best possible in infinitely many cases.

\section{Introduction}
\subsection{Classical results on longest cycles in graphs}
The {\em circumference} $c(G)$ of a graph $G$ is the length of its longest cycle.
In particular, if a graph has a cycle $C$ which covers all of its vertices, $V(C)=V(G)$, we say it is {\em hamiltonian}.
A classical result of Dirac states that high minimum degree in a graph forces hamiltonicity.
\begin{thm}[Dirac~\cite{D}]\label{th:D} Let $n \geq 3$, and let $G$ be an $n$-vertex graph with minimum degree $\delta(G)$.
If $\delta(G) \geq n/2$, then $G$ contains a hamiltonian cycle.
If $G$ is 2-connected, then $c(G)\geq \min\{n, 2\delta(G)\}$.
\end{thm}
Inspired by this theorem, it is common in extremal combinatorics to refer to results in which a minimum degree condition forces some structure as a {\em Dirac-type condition}.
The second part of Theorem~\ref{th:D} cannot be extended to non 2-connected graphs:
let ${\mathcal F}_{n,k}$ be the family of graphs in which each block (inclusion maximal 2-connected subgraph) of the graph is a copy of $K_{k-1}$.
Every $F \in {\mathcal F}_{n,k}$ has minimum degree $k-2$, but its longest cycle has length $k-1$.
\begin{thm}[Erd\H{o}s, Gallai~\cite{EG}]\label{th:EG} Let $G$ be an $n$-vertex graph with no cycle of length $k$ or longer. Then $e(G) \leq \frac{n-1}{k-2}{k-1 \choose 2}$.
\end{thm}
So the graphs in ${\mathcal F}_{n,k}$ have the maximum number of edges among the $n$-vertex graphs with circumference $k-1$.
They also maximize the number of cliques of any size:
\begin{thm}[Luo~\cite{luo}]\label{cliques} Let $G$ be an $n$-vertex graph with no cycle of length $k$ or longer. Then the number of copies of $K_r$ in $G$ is at most $\frac{n-1}{k-2}{k-1 \choose r}$.
\end{thm}
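As a toy illustration (our own example, not part of the text), one can realize a member of ${\mathcal F}_{n,k}$ as a chain of copies of $K_{k-1}$ glued along cut vertices, and verify by brute force that it attains the Erd\H{o}s--Gallai bound while its circumference is only $k-1$:

```python
from itertools import combinations, permutations

def clique_chain(k, b):
    """A member of F_{n,k}: b blocks, each a copy of K_{k-1}, with
    consecutive blocks sharing one cut vertex; n = b*(k-2) + 1 vertices."""
    E = set()
    for t in range(b):
        block = range(t * (k - 2), t * (k - 2) + k - 1)
        E |= {frozenset(p) for p in combinations(block, 2)}
    return E

def circumference(n, E):
    """Length of the longest cycle, by exhaustive search (tiny n only)."""
    best = 0
    for m in range(3, n + 1):
        for cyc in permutations(range(n), m):
            if cyc[0] == min(cyc):        # fix the rotation
                edges = zip(cyc, cyc[1:] + cyc[:1])
                if all(frozenset(e) in E for e in edges):
                    best = max(best, m)
    return best

k, b = 4, 3
n = b * (k - 2) + 1                       # 7 vertices
E = clique_chain(k, b)
# Erdos-Gallai bound: (n-1)/(k-2) * C(k-1, 2), attained with equality
assert len(E) == (n - 1) // (k - 2) * (k - 1) * (k - 2) // 2
assert circumference(n, E) == k - 1       # no cycle of length k or longer
```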
\subsection{Known results on cycles in hypergraphs}
A hypergraph ${\mathcal H}$ is a set system. We often refer to the ground set as the set of vertices $V({\mathcal H})$ of ${\mathcal H}$ and to the sets as the hyperedges $E({\mathcal H})$ of ${\mathcal H}$. When there is no ambiguity, we may also refer to the hyperedges as edges. In this paper, we prove versions of Theorems~\ref{th:D} and~\ref{th:EG} for hypergraphs with no restriction on edge sizes. Namely, we seek long {\em Berge cycles}.
A {\bf Berge cycle} of length $\ell$ in a hypergraph is a set of $\ell$ distinct vertices $\{v_1, \ldots, v_\ell\}$ and $\ell$ distinct edges $\{e_1, \ldots, e_\ell\}$ such that $\{ v_{i}, v_{i+1} \}\subseteq e_i$ with indices taken modulo $\ell$. The vertices $\{v_1, \ldots, v_\ell\}$ are called {\bf representative vertices} of the Berge cycle.
A {\bf Berge path} of length $\ell$ in a hypergraph is a set of $\ell+1$ distinct vertices $\{v_1, \ldots, v_{\ell+1}\}$ and $\ell$ distinct hyperedges $\{e_1, \ldots, e_{\ell}\}$ such that $\{ v_{i}, v_{i+1} \}\subseteq e_i$ for all $1\leq i\leq \ell$. The vertices $\{v_1, \ldots, v_{\ell+1}\}$ are called {\bf representative vertices} of the Berge path.
For a hypergraph ${\mathcal H}$, the {\bf 2-shadow} of ${\mathcal H}$, denoted $\partial_2 {\mathcal H}$, is the graph on the same vertex set such that $xy \in E(\partial_2{\mathcal H})$ if and only if $\{x,y\}$ is contained in an edge of ${\mathcal H}$.
Note that if we require no conditions on multiplicities of hyperedges, then we can arbitrarily add hyperedges of size $1$ without creating new Berge cycles or Berge paths. From now on, we only consider {\em simple} hypergraphs, i.e., those without multiple edges (unless stated otherwise).
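The definition of a Berge cycle can be exercised by brute force on a toy set system (the hypergraph and the search routine below are our own illustration): one looks for $\ell$ distinct representative vertices together with $\ell$ distinct hyperedges covering the consecutive pairs.

```python
from itertools import permutations

def has_berge_cycle(vertices, edges, ell):
    """Brute-force test for a Berge cycle of length ell: distinct vertices
    v_1..v_ell and distinct hyperedges e_1..e_ell with {v_i, v_{i+1}} in e_i
    (indices mod ell). Exponential; for tiny examples only."""
    for vs in permutations(vertices, ell):
        pairs = [{vs[i], vs[(i + 1) % ell]} for i in range(ell)]
        def extend(i, used):
            # match pair i to an unused hyperedge containing it, backtracking
            if i == ell:
                return True
            for j, e in enumerate(edges):
                if j not in used and pairs[i] <= e:
                    if extend(i + 1, used | {j}):
                        return True
            return False
        if extend(0, frozenset()):
            return True
    return False

H = [{1, 2, 3}, {3, 4}, {1, 4, 5}]
assert has_berge_cycle(range(1, 6), H, 3)       # representatives 1, 3, 4
assert not has_berge_cycle(range(1, 6), H, 4)   # only 3 hyperedges exist
```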
Bermond, Germa, Heydemann, and Sotteau~\cite{bermond} were among the first to prove Dirac-type results for uniform hypergraphs without long Berge cycles: if $k > r$ and ${\mathcal H}$ is an $r$-uniform hypergraph with minimum degree $\delta({\mathcal H}) \geq {k-2 \choose r-1} + (r-1)$, then ${\mathcal H}$ contains a Berge cycle of length at least $k$.
For large $n$, generalizations and results for linear hypergraphs are proved by Jiang and Ma~\cite{JM}.
Ma, Hou, and Gao~\cite{MHG_r3} studied $3$-uniform hypergraphs.
Coulson and Perarnau~\cite{rainbow} proved that if ${\mathcal H}$ is an $r$-uniform hypergraph on $n$ vertices, $r = o(\sqrt{n})$, and ${\mathcal H}$ has minimum degree $\delta({\mathcal H}) > {\lfloor (n-1)/2 \rfloor \choose r-1}$, then ${\mathcal H}$ contains a Berge hamiltonian cycle.
Our new results differ from these in several aspects. We consider non-uniform hypergraphs, prove exact formulas, prove results for every $n$ (or every $n> 14$), and use only classical tools mentioned above and in Section~\ref{ss0}.
\section{New results}
Our first result is a Dirac-type condition that forces hamiltonian Berge cycles.
\begin{thm}\label{diracH}
Let $n \geq 15$
and let ${\mathcal H}$ be an $n$-vertex hypergraph such that $\delta({\mathcal H}) \geq 2^{(n-1)/2} + 1$ if $n$ is odd, or $\delta({\mathcal H}) \geq 2^{n/2 -1} + 2$ if $n$ is even. Then ${\mathcal H}$ contains a Berge hamiltonian cycle.
\end{thm}
The following four constructions show that Theorem~\ref{diracH} is the best possible.
\newline
${}$\enskip --- \enskip Let $n$ be odd. Let ${\mathcal H}$ be the $n$-vertex hypergraph on the ground set $[n]$ with edges $\{A: A \subseteq [(n+1)/2]\} \cup \{B: B \subseteq \{(n+1)/2, \ldots, n\}\}$. Then $\delta({\mathcal H}) = 2^{(n-1)/2}$ and ${\mathcal H}$ has no hamiltonian Berge cycle (because it has a cut vertex).
\newline
${}$\enskip --- \enskip Let $n$ be even. Let ${\mathcal H}$ be the $n$-vertex hypergraph on the ground set $[n]$ with edges $\{A: A \subseteq [n/2]\} \cup \{B: B \subseteq \{n/2 +1, \ldots, n\}\}$ and the set $[n]$. Then $\delta({\mathcal H}) = 2^{n/2 -1}+1$ and ${\mathcal H}$ has no hamiltonian Berge cycle (because it has a cut edge, $[n]$).
\newline
${}$\enskip --- \enskip Let $n$ be odd. Let ${\mathcal H}$ be the $n$-vertex hypergraph on the ground set $[n]$ obtained by taking all hyperedges with at most one vertex in $[(n+1)/2]$. Then $\delta({\mathcal H}) = 2^{(n-1)/2}$, and ${\mathcal H}$ cannot contain a Berge cycle with two consecutive representative vertices in $[(n+1)/2]$.
\newline
${}$\enskip --- \enskip Let $n$ be even. Let ${\mathcal H}$ be the $n$-vertex hypergraph on the ground set $[n]$ obtained by taking all hyperedges with at most one vertex in $[n/2 + 1]$ and the edge $[n]$. Then $\delta({\mathcal H}) = 2^{n/2 - 1} + 1$, and ${\mathcal H}$ cannot contain a Berge cycle with two instances of two consecutive representative vertices in $[n/2 + 1]$ (because only one edge of ${\mathcal H}$ contains multiple vertices in $[n/2+1]$).
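A quick numeric check of the first construction (a sketch; we read its hyperedges as all nonempty subsets of the two halves): for odd $n$, the minimum degree is attained at the non-cut vertices and equals $2^{(n-1)/2}$, one below the threshold of Theorem~\ref{diracH}.

```python
from itertools import combinations

def powerset(s):
    """All nonempty subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

n = 7                                    # any odd n behaves the same way
half1 = range(1, (n + 1) // 2 + 1)       # [(n+1)/2]
half2 = range((n + 1) // 2, n + 1)       # {(n+1)/2, ..., n}
H = set(powerset(half1)) | set(powerset(half2))
deg = {v: sum(v in e for e in H) for v in range(1, n + 1)}
# minimum degree 2^{(n-1)/2}; the cut vertex (n+1)/2 has a larger degree
assert min(deg.values()) == 2 ** ((n - 1) // 2)
```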
Next, we consider hypergraphs without long Berge paths or cycles.
\begin{thm}\label{th:mindeg_path}
Let $k \geq 2$ and let ${\mathcal H}$ be a hypergraph such that $\delta({\mathcal H}) \geq 2^{k-2}+1$.
Then ${\mathcal H}$ contains a Berge path with $k$ representative vertices.
\end{thm}
A vertex disjoint union of complete hypergraphs on $k-1$ vertices shows that this bound is best possible for $n:=|V({\mathcal H})|$ divisible by $(k-1)$.
It would be interesting to find $\max \delta({\mathcal H})$ for other values of $n$, and also for the cases when ${\mathcal H}$ is
connected or 2-connected.
\begin{thm}\label{th:mindeg_cycle}
Let $k \geq 3$ and let ${\mathcal H}$ be a hypergraph such that $\delta({\mathcal H}) \geq 2^{k-2}+2$.
Then ${\mathcal H}$ contains a Berge cycle of length at least $k$.
\end{thm}
The following constructions show that the bound in Theorem~\ref{th:mindeg_cycle} is best possible when $n$ is divisible by $(k-1)$ and also when $n\equiv 1 \mod (k-1)$ for $n>(k-1)(2^{k-2}+1)$.
In the first case, take a vertex disjoint union of complete hypergraphs with $k-1$ vertices and add one more set, namely $[n]$.
In the other case, take $m:=(n-1)/(k-1) \geq 2^{k-2}+1$ disjoint $(k-1)$-sets $A_1, \dots, A_m$ and an element $x$ such that $[n]=(\cup _{1\leq i\leq m}A_i)\cup \{ x\} $. Then define ${\mathcal H}$ as the union of complete hypergraphs on
the sets $A_i$'s together with the hyperedges of the form $A_i \cup \{ x\} $.
If we do not insist on connectedness, then $(2^{k-2}+1)$-regular examples can be constructed for \emph{all} $n\geq k^2 2^{k-2}$.
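For the construction with $n\equiv 1 \pmod{k-1}$ above, the degree computation can be checked directly (a sketch with our own toy parameters $k=4$, $m=5$): every vertex of $A_i$ lies in the $2^{k-2}$ nonempty subsets of $A_i$ containing it plus the edge $A_i\cup\{x\}$, while $x$ has degree $m\ge 2^{k-2}+1$.

```python
from itertools import combinations

k, m = 4, 5                     # construction needs m >= 2**(k-2) + 1
n = m * (k - 1) + 1             # n = 16; vertex n plays the role of x
A = [set(range(i * (k - 1) + 1, (i + 1) * (k - 1) + 1)) for i in range(m)]
# complete hypergraphs on the A_i's ...
H = {frozenset(c) for Ai in A for r in range(1, k) for c in combinations(Ai, r)}
# ... plus the hyperedges A_i + {x}
H |= {frozenset(Ai | {n}) for Ai in A}
deg = {v: sum(v in e for e in H) for v in range(1, n + 1)}
assert min(deg.values()) == 2 ** (k - 2) + 1    # the bound is sharp
assert deg[n] == m                              # degree of the apex x
```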
Finally, we prove a hypergraph version of Theorem~\ref{th:EG}.
\begin{thm}\label{th:EGh}Let $n \geq k \geq 3$ and let ${\mathcal H}$ be an $n$-vertex hypergraph with no Berge cycle of length $k$ or longer. Then \[e({\mathcal H}) \leq
2+ \frac{n-1}{k-2}\left(2^{k-1}-2\right)
.\]
\end{thm}
The bound in Theorem~\ref{th:EGh} is best possible when $n \equiv 1 \mod (k-2)$. Take $m:= (n-1)/(k-2)$ and disjoint sets $A_1 , \ldots, A_m$ of size $k-2$. Let $x$ be a new element, and set $[n] = (\cup _{1\leq i\leq m}A_i)\cup \{ x\}$. Define ${\mathcal H}$ to be the family of all sets $A$ such that there exists an $i$ with $A \setminus \{x\} \subseteq A_i$. Note that the 2-shadow $\partial_2({\mathcal H})$ is in the family ${\mathcal F}_{n,k}$ defined before Theorem~\ref{th:EG}.
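The count behind this sharpness example can be verified mechanically (a sketch following our reading of the bound, under the assumption that ${\mathcal H}$ is a set system that may contain the empty set and singletons, which create no Berge cycles and account for the additive term $2$):

```python
from itertools import combinations

k, m = 4, 3
n = m * (k - 2) + 1                  # n = 7; vertex n plays the role of x
A = [set(range(i * (k - 2) + 1, (i + 1) * (k - 2) + 1)) for i in range(m)]

def subsets(s):
    """All subsets of s (including the empty set), as frozensets."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# all sets A with A \ {x} contained in some A_i
H = {e for Ai in A for e in subsets(Ai | {n})}
assert len(H) == 2 + (n - 1) // (k - 2) * (2 ** (k - 1) - 2)   # = 20 here
```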
There are many exact results concerning the maximum size of uniform hypergraphs avoiding Berge paths and cycles, see the recent results of Ergemlidze et al.~\cite{EGyMSTZ}
or one by the present authors~\cite{FKL}.
\section{Dirac type conditions for hamiltonian hypergraphs}
In this section, we present a proof for Theorem~\ref{diracH}. The proof method relies on reducing the hypergraph to a dense nonhamiltonian graph.
In the next three subsections we collect some results about such graphs.
Subsections~\ref{ss33} and~\ref{ss34} contain the proof for hypergraphs.
\subsection{Classical tools}\label{ss0}
Let $G$ be an $n$-vertex graph. The {\em hamilton-closure} of $G$ is the unique graph $C(G)$ of order $n$ that can be obtained from $G$ by recursively joining nonadjacent vertices with degree-sum at least $n$.
\begin{thm}[Bondy, Chv\'atal~\cite{BC}]\label{BCthm}
If $C(G)$ is hamiltonian, then so is $G$.
\end{thm}
A graph $G$ is called {\em hamiltonian-connected} if for any pair of vertices $x,y \in V(G)$ there is a hamiltonian $(x,y)$-path.
The following corollary can be obtained from Theorem~\ref{BCthm} or from the classical result of P\'osa~\cite{Posa}:
If for every pair of nonadjacent vertices $x,y \in V(G)$ we have $d(x) + d(y) \geq |V(G)| + 1$, then $G$ is hamiltonian-connected.
\begin{cor}\label{cor1}
If $e(G)\geq \binom{n}{2}-2$ and $n\geq 5$, then $G$ is hamiltonian-connected. \hfill$\Box$
\end{cor}
We will need the following result about the structure of matchings in bipartite graphs. It is a well known fact in the theory of transversal matroids (but one can also give a short, direct proof by finding an $M_3\subseteq M_1\cup M_2$).
\begin{thm}\label{matroid}
Let $G[X,Y]$ be a bipartite graph. Suppose that there is a matching $M_1$ in $G$ joining the vertices of $X_1\subseteq X$ and $Y_1\subseteq Y$.
Suppose also that we have another matching $M_2$ with end vertices $X_2\subseteq X$ and $Y_2 \subseteq Y$ such that $Y_2\subseteq Y_1$.
Then there exists a third matching $M_3$ from $X_3 \subseteq X$ to $Y_3 \subseteq Y$ such that
\begin{equation*}
Y_3=Y_1 \quad \text{and} \quad X_3\supseteq X_2.
\end{equation*}
\end{thm}
\begin{thm}[Erd\H{o}s~\cite{Erdos}]\label{Erdos} Let $n, d$ be integers with $1 \leq d \leq \left \lfloor \frac{n-1}{2} \right \rfloor$, and set $h(n,d):={n-d \choose 2} + d^2$.
If $G$ is a nonhamiltonian graph on $n$ vertices with minimum degree $\delta(G) \geq d$, then
\[e(G) \leq \max\left\{ h(n,d),h(n, \left \lfloor \frac{n-1}{2} \right \rfloor)\right\}=:e(n,d).\]
\end{thm}
\subsection{A lemma for nonhamiltonian graphs}\label{ss11}
The lemma below follows from a result of Voss~\cite{Voss} (and from the even more detailed descriptions by Jung~\cite{Jung} and Jung, Nara~\cite{JungNara}). We only state and use a weaker version and for completeness include a short proof. Define five classes of nonhamiltonian graphs.
${}$\enskip --- \enskip Let $n=2k+2$, $V=V_1\cup V_2$, $|V_1|=|V_2|=k+1$, ($V_1\cap V_2=\emptyset$). We say that $G\in {\mathcal G}_1$
if its edge set is the union of two complete graphs with vertex sets $V_1$ and $V_2$ and it contains at most one further edge $e_0$ (joining $V_1$ and $V_2$);
\newline
${}$\enskip --- \enskip Let $n=2k+1$, $V=V_1\cup V_2$, $|V_1|=|V_2|=k+1$, $V_1\cap V_2=\{ x_0\}$. We say that $G\in {\mathcal G}_2$
if its edge set is the union of two complete graphs with vertex sets $V_1$ and $V_2$;
\newline
${}$\enskip --- \enskip Let $n=2k+2$, $V=V_1\cup V_2$, $|V_1|=k+1$, $|V_2|=k+2$, $V_1\cap V_2=\{ x_0\}$. We say that $G\in {\mathcal G}_3$
if its edge set is the union of a complete graph with vertex set $V_1$
and a $2$-connected graph $G_2$ with vertex set $V_2$ such that $\deg_G(v)\geq k$ for every vertex $v\in V$;
\newline
${}$\enskip --- \enskip Let $n=2k+1$, $V=V_1\cup V_2$, $|V_1|=k$, $|V_2|=k+1$, ($V_1\cap V_2=\emptyset$). We say that $G\in {\mathcal G}_4$
if $V_2$ is an independent set, and its edge set contains all edges joining $V_1$ and $V_2$;
\newline
${}$\enskip --- \enskip Let $n=2k+2$, $V=V_1\cup V_2$, $|V_1|=k$, $|V_2|=k+2$, ($V_1\cap V_2=\emptyset$). We say that $G\in {\mathcal G}_5$ if
$V_2$ contains at most one edge $e_0$ and $\deg_G(v)\geq k$ for every vertex $v\in V$ (so its edge set contains all but at most two edges joining $V_1$ and $V_2$).
\begin{lem}\label{5G}Let $k\geq 3$ be an integer, $n \in \{ 2k+1, 2k+2\} $.
Suppose that $G$ is an $n$-vertex nonhamiltonian graph with $\delta(G) \geq k = \lfloor(n-1)/2\rfloor$, $V:= V(G)$.
Then $G\in {\mathcal G}_1\cup \dots \cup {\mathcal G}_5$.
\end{lem}
\begin{proof}
Suppose first that $G$ is not 2-connected.
Then there exist two blocks $B_1, B_2$ of $G$ (i.e., $B_i$ is a maximal 2-connected subgraph or a $K_2$) which are {\em endblocks}, i.e., for $i=1,2$ there is a vertex $v_i\in B_i$ such that $V(B_i)\setminus \{ v_i\}$ does not meet any other block.
Then $\{v\} \cup N(v) \subset V(B_i)$ for all $v\in V(B_i)\setminus \{ v_i\}$, so
an endblock has at least $k+1$ vertices and if $|V(B_i)|=k+1$ then it is a clique.
If $B_1$ and $B_2$ are disjoint then we get $n=2k+2$, and $G\in {\mathcal G}_1$.
If $B_1$ and $B_2$ meet, then $G$ has no other blocks, and $G\in {\mathcal G}_2\cup {\mathcal G}_3$.
Suppose now that $G$ is 2-connected. By the second part of Dirac's theorem (Theorem~\ref{th:D}), the length of a longest cycle $C$ of $G$ is at least $2k$. If $|V(C)| = n-1$, assume $C=v_1 \ldots v_{n-1} v_1$ and $v_n \notin V(C)$.
Then $v_n$ has at least $k$ neighbors in $C$, with no two of them appearing consecutively (otherwise we could extend $C$ to a hamiltonian cycle). Without loss of generality, let $N(v_n) = \{v_1, v_3, \ldots, v_{2k-1}\}$.
If for some $i<j$ such that $v_i, v_j \in N(v_n)$, $v_{i+1}v_{j+1} \in E(G)$, then we obtain the hamiltonian cycle $v_1 v_2 \ldots v_i v_n v_j v_{j-1} \ldots v_{i+1}v_{j+1} v_{j+2} \ldots v_{n-1}v_1$. Therefore the vertices in $C$ of even parity, together with $v_n$, form an independent set. In case of $n=2k+1$ we got $G\in {\mathcal G}_4$.
If $n=2k+2$ then in the same way we get that $\{v_{2k+1}\}\cup \{ v_2, v_4, \dots, v_{2k-2} \}$ together with $v_n$ is also independent, so the set $\{ v_2, ..., v_{2k-2}\}
\cup \{ v_{2k}, v_{2k+1}, v_n \}$ contains only the edge $v_{2k}v_{2k+1}$, $G\in {\mathcal G}_5$.
Finally, consider the case $|V(C)| = n-2$ (i.e., $n=2k+2$), and let $x, y \notin V(C)$.
We claim that $xy\notin E(G)$. Indeed, suppose to the contrary, that $xy \in E(G)$. Without loss of generality,
$A:=\{v_1, v_3, \ldots, v_{2k-3}\} \subseteq N(x)$ or $(A\setminus \{ v_{2k-3}\}) \cup \{ v_{2k-2}\}\subseteq N(x)$. Note that for any $v_i \in N(x)$, $\{v_{i-2}, v_{i-1}, v_{i+1}, v_{i+2}\} \cap N(y) = \emptyset$ (indices are taken modulo $2k$), because we can remove a segment of $C$ with at most 3 vertices and replace it with a segment with at least 4 vertices containing the edge $xy$.
This leads to a contradiction because there is not enough room
on the $2k$-cycle $C$ to distribute the at least $k-1$ vertices of $N(y)\setminus \{x\}$.
If $xy \notin E(G)$ then without loss of generality $N(x) = \{v_1, v_3,\ldots, v_{2k-1}\}$.
Then the set $\{x\} \cup \{v_2, \ldots, v_{2k}\}$ is an independent set.
If $y v_i \in E(G)$ for some $i \in \{2, 4, \ldots, {2k}\}$, then because $y$ has $k$ neighbors in $C$ and no two of them appear consecutively, $N(y) = \{v_2, v_4, \ldots, v_{2k}\}$, and we obtain a hamiltonian cycle by replacing the segment $v_1v_2v_3v_4$ of $C$ with the path $v_1 x v_3 v_2 y v_4$.
Therefore $V_2:=\{v_2, v_4, \ldots, v_{2k}\} \cup \{x, y\}$ is an independent set of size $k+2$, and so $G\in {\mathcal G}_5$.
\end{proof}
\subsection{A maximality property of the graphs in ${\mathcal G}_1\cup\ldots\cup{\mathcal G}_5$}
Let $G\in {\mathcal G}_1\cup \dots \cup {\mathcal G}_5$ be a graph.
Delete a set of edges ${\mathcal A}$ from $E(G)$ where $|{\mathcal A}|\leq 1$ for $G\in {\mathcal G}_2\cup {\mathcal G}_3\cup {\mathcal G}_4$
and $|{\mathcal A}|\leq 2$ for $G\in {\mathcal G}_1\cup {\mathcal G}_5$.
Then add a set of new edges ${\mathcal B}$ as defined below:
\newline
${}$\enskip --- \enskip
For $G\in {\mathcal G}_1$, $|{\mathcal B}|=2$ and it consists of any two disjoint pairs joining $V_1$ and $V_2$;
\newline
${}$\enskip --- \enskip
for $G\in {\mathcal G}_2\cup{\mathcal G}_3$, $|{\mathcal B}|=1$ and it consists of any pair $x_1x_2$ joining $V_1\setminus \{ x_0\}$ and $V_2\setminus \{ x_0\}$ (here $x_i\in V_i$);
\newline
${}$\enskip --- \enskip
for $G\in {\mathcal G}_4$, $|{\mathcal B}|=1$ and it consists of any pair contained in $V_2$;
\newline
${}$\enskip --- \enskip
and for $G\in {\mathcal G}_5$, $|{\mathcal B}|=2$ and it consists of any two distinct pairs contained in $V_2$.
\begin{lem}\label{5Gmodified} If $k\geq 6$,
then the graph $\left(E(G)\setminus {\mathcal A}\right) \cup {\mathcal B}$ defined by the above process is hamiltonian, except if $G\in {\mathcal G}_3$,
$x_0$ has exactly two neighbors $x_2$ and $y_2$ in $V_2$, ${\mathcal A}= \{ x_0y_2\}$, ${\mathcal B}= \{ x_1x_2\}$, and $G[V_2\setminus \{ x_0\}]$ is either a $K_{k+1}$ or misses only the edge $x_2y_2$.
\end{lem}
\begin{proof}
If $G\in {\mathcal G}_1$ and we add two disjoint edges $x_1x_2$ and $y_1y_2$ joining $V_1$ and $V_2$ ($x_1, y_1\in V_1$) then to form a hamiltonian cycle
we need an $x_1x_2$ path $P_1$, and a $y_1y_2$ path $P_2$ of length $k$, $V(P_i)=V_i$ and $E(P_i)\subset E(G)\setminus {\mathcal A}$. Such paths exist because the graph $G[V_i]\setminus {\mathcal A}$ has at least $\binom{k+1}{2}-2$ edges, so it satisfies the condition of Corollary~\ref{cor1}.
If $G\in {\mathcal G}_2\cup {\mathcal G}_3$ and we add an edge $x_1x_2$ joining $V_1\setminus\{x_0\}$ and $V_2\setminus\{x_0\}$ then
we need paths $P_1$, $P_2$ of length $|V_i|-1$ joining $x_i$ to $x_0$, $V(P_i)=V_i$ and $E(P_i)\subset E(G)\setminus {\mathcal A}$.
If $G[V_i]\setminus {\mathcal A}$ satisfies the condition of Corollary~\ref{cor1} then we can find $P_i$.
The only missing case is when $|V_2|=k+2$ (so $G\in {\mathcal G}_3$).
Let $G_2$ be the graph on $|V_2|+1$ vertices obtained from $G[V_2]\setminus {\mathcal A}$ by adding a new vertex $x_2'$ and two edges $x_0x_2'$ and $x_2x_2'$.
If $G_2$ has a hamiltonian cycle $C$ then it must contain $x_0x_2'$ and $x_2x_2'$, so the rest of the edges of $C$ can serve as the path $P_2$ we are looking for. Consider the hamilton-closure $C(G_2)$ and
apply Theorem~\ref{BCthm} to $G_2$. Since the degrees of the vertices of $V_2\setminus \{ x_0\}$ in $G_2$ are at least $k-1$ and $2(k-1)\geq k+3= |V(G_2)|$, $C(G_2)$ contains a complete graph on $V_2\setminus \{ x_0\}$.
So $C(G_2)$ is hamiltonian unless the only neighbors of $x_0$ in $G_2$ are $x_2$ and $x_2'$. Hence $N_G(x_0)\cap V_2= \{ x_2, y_2\}$ and ${\mathcal A}=\{ x_0y_2\}$.
The last case is when $G\in {\mathcal G}_5$, $|{\mathcal A}|=2$, ${\mathcal B}= \{ e_1,e_2\}$ (two distinct edges inside $V_2$). (The proofs of the other cases, especially when $G\in {\mathcal G}_4$ are easier).
We create a graph $H_0$ from $G$ as follows:
Delete the edge $e_0$ (if it exists), delete the edges of ${\mathcal A}$ joining $V_1$ and $V_2$,
add two new vertices $z_1, z_2$ to $V_1$ and join $z_i$ to the endpoints of $e_i$.
We obtain the graph $H$ by adding all possible $\binom{k+2}{2}$ pairs from $V_1\cup \{ z_1,z_2\}$ to $H_0$.
If $H$ is hamiltonian then its hamiltonian cycle must use only edges of $H_0$ (because $V_2$ is an independent set of size $k+2$ in $H$).
If the
graph $H_0$ is hamiltonian then its hamiltonian cycle must use the two edges of the degree $2$ vertex $z_i$, so $\left(G\setminus (\{ e_0\}\cup {\mathcal A})\right) \cup {\mathcal B}$ is hamiltonian as well.
So it is sufficient to show that $H$ has a hamiltonian cycle.
Let $A$ be the graph on $V(H)$ consisting of the edges of ${\mathcal A}$ joining $V_1$ and $V_2$ together with the (at most) two missing pairs $E(K(V_1, V_2))\setminus E(G)$.
We will again apply Theorem~\ref{BCthm} to $H$, so consider the hamilton-closure $C(H)$.
The degree $\deg_H(x)$ of an $x\in V_1$ is $(2k+3)-\deg_A(x)$.
The degree $\deg_H(y)$ of a $y\in V_2$ is at least $|V_1|-\deg_A(y)= k-\deg_A(y)$.
Since $\deg_A(x)+\deg_A(y)\leq |E(A)|+1\leq 5$ we get for $k\geq 6$ that
\[ \deg_H(x)+ \deg_H(y)\geq (3k+3)-\left(\deg_A(x)+\deg_A(y)\right)\geq 3k-2 \geq 2k+4= |V(H)|.
\]
So $C(H)$ contains the complete bipartite graph $K(V_1,V_2)=K_{k,k+2}$.
Then it is really a simple task to find a hamiltonian cycle in $C(H)$
and therefore $\left(E(G)\setminus {\mathcal A}\right) \cup {\mathcal B}$ is hamiltonian.
\end{proof}
\subsection{Proof of Theorem~\ref{diracH}, reducing the hypergraph to a dense graph}\label{ss33}
Fix ${\mathcal H}$ to be an $n$ vertex hypergraph satisfying the minimum degree condition.
We will find a hamiltonian Berge cycle in ${\mathcal H}$.
Recall that $H=\partial_2({\mathcal H})$ denotes the 2-shadow of ${\mathcal H}$, a graph on $V=V({\mathcal H})$.
Define a bipartite graph $B:= B[E({\mathcal H}), E(H)]$ with parts $E({\mathcal H})$ and $E(H)$ and with edges $\{ h, xy\}$ where
a hyperedge $h\in E({\mathcal H})$ is joined to the graph edge $xy\in E(H)$ if $\{ x, y\} \subseteq h$.
In the case of $\{ x,y\} \in {\mathcal H}$ we consider the edge $xy\in E(H)$ and $\{ x,y\} \in E({\mathcal H})$ as two distinct objects of $B$, so $B$ is indeed a bipartite graph (with $|E({\mathcal H})|+| E(H)|$ vertices and no loops).
Let $M$ be a maximum matching of $B$. So
$M$ can be considered as a partial injection of maximum size, i.e., a bijection $\phi$ between two subsets ${\mathcal M}\subseteq E({\mathcal H})$ and ${\mathcal E}\subseteq E(H)$
such that $|{\mathcal M}|=|{\mathcal E}|$, $\phi(m)\subseteq m$ for $m\in {\mathcal M}$ (and $\phi(m_1)\neq \phi(m_2)$ for $m_1\neq m_2$).
Consider the subgraph $G=(V,{\mathcal E})$ of $H$. Then $G$ does not have a hamiltonian cycle, otherwise by replacing the edges of a hamiltonian cycle with their corresponding matched hyperedges in $M$, we obtain a hamiltonian Berge cycle in ${\mathcal H}$ (with representative vertices in the same order).
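The matching step of this reduction is constructive: a maximum matching in $B$ can be computed by standard augmenting paths. A minimal, self-contained sketch (Kuhn's algorithm run on a toy hypergraph of our own choosing; `shadow_matching` is a hypothetical helper name, not the authors' notation):

```python
from itertools import combinations

def shadow_matching(H):
    """Kuhn's augmenting-path algorithm between the hyperedges of H and the
    pairs of the 2-shadow, where hyperedge h may be matched only to a pair
    {x, y} contained in h. Returns {shadow edge: hyperedge index}."""
    pairs = sorted({tuple(sorted(p)) for h in H for p in combinations(h, 2)})
    match = {}

    def augment(i, seen):
        # try to match hyperedge i, rerouting previously matched hyperedges
        for p in pairs:
            if set(p) <= H[i] and p not in seen:
                seen.add(p)
                if p not in match or augment(match[p], seen):
                    match[p] = i
                    return True
        return False

    for i in range(len(H)):
        augment(i, set())
    return match

H = [{1, 2, 3, 4}, {1, 2}, {3, 4, 5}]
M = shadow_matching(H)
assert len(M) == 3                        # all three hyperedges are matched
assert all(set(p) <= H[i] for p, i in M.items())
```

The matched shadow edges play the role of ${\mathcal E}=E(G)$ above, with the bijection $\phi$ given by the dictionary `M`.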
In this subsection we are going to prove that
\begin{equation}\label{eq331}
\delta(G) \geq \lfloor (n-1)/2 \rfloor =: k.
\end{equation}
Since $G$ has no hamiltonian cycle and $k\geq 7$, if~\eqref{eq331} holds, then by Lemma~\ref{5G}, $G\in {\mathcal G}_1\cup \dots\cup {\mathcal G}_5$. We will consider this case and prove the remainder of Theorem~\ref{diracH} in the next subsection.
Let ${\mathcal H}_2:= E({\mathcal H})\cap \partial_2({\mathcal H})$, the set of 2-element edges of ${\mathcal H}$.
We may assume that among all maximum sized matchings of $B$ the matching $M$ maximizes $|{\mathcal M}\cap {\mathcal H}_2|$.
\begin{claim}\label{cl33}
${\mathcal H}_2\subseteq {\mathcal M}$, $\partial_2({\mathcal M})= E(H)$, and every $m\in E({\mathcal H})\setminus {\mathcal M}$ induces a complete graph in $G$.
\end{claim}
\begin{proof}
Suppose that $m\in E({\mathcal H})$ contains an edge $e\in E(H)\setminus E(G)$. If $m\notin {\mathcal M}$, then adding $\{m,e\}$ would enlarge the matching $M$.
Since $M$ is maximum, it cannot be enlarged, so $m\in {\mathcal M}$. This implies the second and the third statements.
We also obtained that if $\{x,y\}\in E({\mathcal H})$ then $xy\in E(G)$, so
$\phi(m)=xy$ for some $m\in {\mathcal M}$. If $|m|>2$, we can replace the pair $\{ m, xy\}$ by the pair $\{ \{ x,y\}, xy\}$ in $M$; the new matching covers more edges from ${\mathcal H}_2$ than $M$ does (in the graph $B$), contradicting the choice of $M$. So $|m|=2$, and all members of ${\mathcal H}_2$ belong to ${\mathcal M}$.
\end{proof}
To continue the proof of Theorem~\ref{diracH}, let $d:= \delta (G)$, and fix a vertex $v\in V$ with $\deg_G(v)=d$; set $D:=N_G(v)$, so $|D|=d$.
Since $G$ is not hamiltonian, Theorem~\ref{th:D} gives $d\leq k$.
Let ${\mathcal H}_v$ denote the set of hyperedges of ${\mathcal H}$ containing the vertex $v$, ($\deg_{\mathcal H}(v)=|{\mathcal H}_v|$),
and split it into two parts, ${\mathcal H}_v= {\mathcal D}\cup {\mathcal L}$ where
${\mathcal D}:= \{ e\in E({\mathcal H}): v\in e \subseteq \{ v\} \cup D\}$ and ${\mathcal L}:= {\mathcal H}_v\setminus {\mathcal D}$.
Split ${\mathcal D}$ further into three parts according to the sizes of its edges,
${\mathcal D}={\mathcal D}^- \cup {\mathcal D}_2\cup {\mathcal D}_3$ where ${\mathcal D}_i:= \{ e\in {\mathcal D}: |e|=i\}$ (for $i=2,3$) and ${\mathcal D}^-:= {\mathcal D}\setminus \left({\mathcal D}_2\cup {\mathcal D}_3\right)$.
Since ${\mathcal D}$ can have at most $2^d$ members and we handle ${\mathcal D}_2$ and ${\mathcal D}_3$ separately we get
\begin{equation}\label{eq333}
|{\mathcal D}| \leq 2^d-d-\binom{d}{2} +|{\mathcal D}_2|+ |{\mathcal D}_3|.
\end{equation}
Recall that the matching $M$ in the bipartite graph $B$ can be considered as a bijection $\phi: {\mathcal M} \to {\mathcal E}$, where ${\mathcal M}\subseteq E({\mathcal H})$ and ${\mathcal E}\subseteq E(H)$.
Define another matching $M_2$ in $B$ by an injection $\phi_2: {\mathcal D}_2\cup {\mathcal D}_3\to E(G)$ as follows.
If $m\in {\mathcal M} \cap ({\mathcal D}_2\cup {\mathcal D}_3)$ then $\phi_2(m):= \phi(m)$. In particular, since ${\mathcal D}_2\subseteq {\mathcal M}$, if $\{ v,x\} \in {\mathcal D}_2$ then $\phi_2(\{ v,x\})= vx$.
If $m= \{ v,x,y\} \in {\mathcal D}_3 \setminus {\mathcal M}$ then let $\phi_2(m):= xy$.
Since $\phi_2({\mathcal D}_2\cup {\mathcal D}_3)\subseteq E(G)$ we can apply Theorem~\ref{matroid} to the matchings $M$ and $M_2$ in $B$ with $X_1:= {\mathcal M}$, $Y_1:= E(G)$, and $X_2:= {\mathcal D}_2\cup {\mathcal D}_3$. So there exists a subfamily ${\mathcal L}_3\subseteq {\mathcal H} \setminus ({\mathcal D}_2\cup {\mathcal D}_3)$ and a bijection
$\phi_3: ({\mathcal D}_2\cup {\mathcal D}_3 \cup {\mathcal L}_3)\to E(G)$. The matching $M'$ defined by $\phi_3$ is also a largest matching of $B$.
Every $m\in {\mathcal L}$ has an element $x\notin D$, so $vx\notin E(G)$. If $m$ is not matched in $M'$, then we add $\{m, vx\}$ to $M'$ to get a larger matching. Hence $m \in {\mathcal L}_3$.
These yield
\begin{equation}\label{eq334}
|{\mathcal L}|\leq |{\mathcal L}_3| = e(G)-|{\mathcal D}_2|-|{\mathcal D}_3|.
\end{equation}
Summing up~\eqref{eq333} and~\eqref{eq334}, then using the lower bound for $|{\mathcal H}_v|$ and the upper bound
of Theorem~\ref{Erdos} for $e(G)$ we obtain
\begin{equation*}
2^k +1 \leq \deg_{\mathcal H}(v) \leq 2^d-\binom{d+1}{2} +e(n,d).
\end{equation*}
The inequality $2^k +1 \leq 2^d-\binom{d+1}{2} +e(n,d)$ does not hold for $n\geq 15$ and $d< k$, e.g., for $(n,k,d) = (16, 7, 6)$, the right hand side is only $64-21+85 = 128$.
This completes the proof of $d=k$.
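The closing arithmetic can be verified mechanically (a sketch assuming the formulas $h(n,d)$ and $e(n,d)$ of Theorem~\ref{Erdos}); it checks that $2^k+1 > 2^d-\binom{d+1}{2}+e(n,d)$ for all $15\le n\le 60$ and $1\le d<k=\lfloor (n-1)/2\rfloor$, and reproduces the quoted value $128$ for $(n,k,d)=(16,7,6)$:

```python
from math import comb

def h(n, d):
    # h(n, d) = C(n-d, 2) + d^2, as in Erdos' theorem
    return comb(n - d, 2) + d * d

def e(n, d):
    # Erdos' bound for nonhamiltonian graphs with minimum degree >= d
    return max(h(n, d), h(n, (n - 1) // 2))

for n in range(15, 61):
    k = (n - 1) // 2
    for d in range(1, k):
        # the displayed inequality fails for every d < k
        assert 2 ** k + 1 > 2 ** d - comb(d + 1, 2) + e(n, d)

# the quoted example (n, k, d) = (16, 7, 6): right-hand side is 128 < 129
assert 2 ** 6 - comb(7, 2) + e(16, 6) == 128
```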
\subsection{Proof of Theorem~\ref{diracH}, the end}\label{ss34}
We may assume that $G\in {\mathcal G}_1\cup \dots\cup {\mathcal G}_5$ by Lemma~\ref{5G}, $\phi$ is a bijection $\phi: {\mathcal M} \to E(G)$
with $\phi(m)\subseteq m$ where ${\mathcal M}\subseteq E({\mathcal H})$, and Claim~\ref{cl33} holds.
Let ${\mathcal L}_v$ denote the set of edges $m\in {\mathcal H}$ containing an edge $vy$ of $E(H)\setminus E(G)$. Note that ${\mathcal L}_v\subseteq {\mathcal M}$.
If $\deg_G(v)=k$, then the family ${\mathcal L}_v$ is non-empty, otherwise $\deg_{{\mathcal H}}(v) \leq 2^k$.
Call a graph $F$ with vertex set $V$ a {\em Berge graph of} ${\mathcal H}$ if $E(F)\subseteq E(H)$, and there exists a subhypergraph ${\mathcal F}\subseteq {\mathcal H}$, and a bijection $\psi:{\mathcal F} \to E(F)$ such that $\psi(m)\subseteq m$ for each $m\in {\mathcal F}$.
We are looking for a Berge graph of ${\mathcal H}$ having a hamiltonian cycle.
In particular, the graph $G$ is a Berge graph of ${\mathcal H}$ and it is almost hamiltonian. We will show that a slight change to $G$ yields a hamiltonian Berge graph of ${\mathcal H}$.
If $G\in {\mathcal G}_2\cup {\mathcal G}_3$ then choose any $v\in V_1\setminus \{ x_0\}$ and let $m\in {\mathcal L}_v$. There exists an edge $vy\in \left(E(H)\setminus E(G)\right)$ contained in $m$. Then $y\in V_2\setminus \{ x_0\}$.
The graph $\left(E(G)\setminus \{ \phi(m) \}\right)\cup \{ vy\} $ is a Berge graph of ${\mathcal H}$ (we map $m$ to the edge $vy$ instead of $\phi(m)$).
According to Lemma~\ref{5Gmodified} (with ${\mathcal A}:= \{ \phi(m)\}$ and ${\mathcal B}:= \{ vy\}$) it is hamiltonian except if we run into the only exceptional case:
$x_0$ has exactly two $G$-neighbors $x_2$ and $y_2$ in $V_2$, $vy=vx_2$, and $\phi(m)=x_0y_2$.
In this case $m$ contains $\{ x_0,v, x_2, y_2\}$ so it can be avoided by choosing $y:= y_2$ instead of $y=x_2$.
If $G\in {\mathcal G}_4$ then we argue in a very similar way.
Choose any $v\in V_2$ and let $m\in {\mathcal L}_v$ containing an edge $vy\in \left(E(H)\setminus E(G)\right)$. Then $y\in V_2$ and
the graph $\left(E(G)\setminus \{ \phi(m) \}\right)\cup \{ vy\} $ is a Berge graph of ${\mathcal H}$ that is hamiltonian by Lemma~\ref{5Gmodified} with ${\mathcal A}:= \{ \phi(m)\}$ and ${\mathcal B}:= \{ vy\}$.
From now on we may suppose that $n=2k+2$ so $|{\mathcal L}_v|\geq 2$ for $\deg_G(v)=k$.
If $G\in {\mathcal G}_1$ then define ${\mathcal M}_{1,2}$ as the set of members of ${\mathcal M}$ meeting both $V_1$ and $V_2$.
The minimum degree condition on ${\mathcal H}$ implies that $|{\mathcal M}_{1,2}|\geq 2$.
Since ${\mathcal M}_{1,2}$ can have at most one member of size $2$, we can choose an $m_1\in {\mathcal M}_{1,2}$ with $|m_1|\geq 3$. By symmetry we may suppose that $|m_1\cap V_1|\geq 2$, and let $x_2\in V_2\cap m_1$.
Choose an element $y\in V_2$, $y\notin e_0$, $y\neq x_2$.
Since $|{\mathcal L}_y|\geq 2$ we can choose an $m_2\in {\mathcal M}_{1,2}$ such that $m_1\neq m_2$ and $y\in m_2$.
Take any pair $\{ y_1,y\}\subseteq m_2$ with $y_1\in V_1$. Then one can choose an $x_1\in m_1\cap V_1$ so that $x_1\neq x_2$. So the pairs $\{ x_1, x_2\}\subseteq m_1$ and $\{ y_1, y\}\subseteq m_2$ are disjoint.
Lemma~\ref{5Gmodified} with ${\mathcal A}:= \{ \phi(m_1), \phi(m_2)\}$ and ${\mathcal B}:= \{ x_1x_2, y_1y\}$ implies that the graph
$\left(E(G)\setminus {\mathcal A} \right)\cup {\mathcal B} $ is a hamiltonian Berge graph of ${\mathcal H}$.
If $G\in {\mathcal G}_5$ then $|{\mathcal L}_v|\geq 2$ for any $v\in V_2\setminus e_0$ and for all members $m$ of ${\mathcal L}_v$ we have $|m\cap V_2|\geq 2$.
Fix $v\in V_2\setminus e_0$ and let $m_1$ be an arbitrary member of ${\mathcal L}_v$. Choose a pair $\{ v,v'\} \subseteq m_1\cap V_2$.
Fix another vertex $u\in V_2\setminus (e_0\cup \{ v,v'\})$ and let $m_2$ be an arbitrary member of ${\mathcal L}_u$. Choose a pair $\{ u,u'\} \subseteq m_2\cap V_2$. Then $u\notin \{ v,v'\}$ so the pairs $\{ u,u'\}$ and $\{ v,v'\} $ are distinct. Again, apply Lemma~\ref{5Gmodified} with ${\mathcal A}:= \{ \phi(m_1), \phi(m_2)\}$ and ${\mathcal B}:= \{ uu', vv'\}$. This completes the proof of Theorem~\ref{diracH}.
\hfill\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else\hfill$\Box$\fi
\begin{rem}
We can also show that all extremal examples are slight modifications of the four types of the sharpness examples described after Theorem~\ref{diracH}.
\end{rem}
\section{Dirac-type conditions for long Berge cycles}\label{secCk}
In this section we prove Theorem~\ref{th:mindeg_path} for Berge paths and Theorem~\ref{th:mindeg_cycle} for Berge cycles. In fact we prove the two statements simultaneously.
\begin{proof}[Proof of Theorems~\ref{th:mindeg_path} and~\ref{th:mindeg_cycle}]
Suppose that $\delta({\mathcal H})\geq 2^{k-2}+1$, $k\geq 3$ and that ${\mathcal H}$ has no Berge cycle of length $k$ or longer.
We will show that it contains a Berge path of length $k-1$ (thus establishing Theorem~\ref{th:mindeg_path}) and then that $\delta({\mathcal H})= 2^{k-2}+1$ (which completes the proof of Theorem~\ref{th:mindeg_cycle}).
Choose a longest Berge path in ${\mathcal H}$ according to the following rules.
We say that a Berge path
with edges $\{e_1,\dots,e_s\}$ is \emph{better} than a Berge path
with edges $\{f_1,\dots,f_t\}$ if
\newline\indent${}$\quad
a) $s>t$ or
\newline\indent${}$\quad
b) $s=t$ and $\sum |e_i|<\sum |f_j|$.
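The rules a)--b) order Berge paths lexicographically: first by number of edges (more is better), then, for equal lengths, by total edge size (less is better). A minimal sketch of this comparison in Python (the names and the edge representation are my own, purely illustrative):

```python
def better_key(path_edges):
    """Comparison key for Berge paths: more edges is better;
    among equal lengths, smaller total edge size is better."""
    return (len(path_edges), -sum(len(e) for e in path_edges))

def is_better(P, Q):
    # P, Q: lists of hyperedges, each a set of vertices.
    return better_key(P) > better_key(Q)

# Longer path wins regardless of edge sizes:
assert is_better([{1, 2}, {2, 3}, {3, 4}], [{1, 2, 5}, {2, 3}])
# Equal length: the path with smaller total edge size wins:
assert is_better([{1, 2}, {2, 3}], [{1, 2, 5}, {2, 3}])
```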
Consider a best Berge path ${\mathcal P}$ in ${\mathcal H}$.
Let the base vertices of the path be $v_1, v_2, \dots, v_p$.
Let $e_1, \dots ,e_{p-1}$ be the edges of the path ($v_{i}, v_{i+1}\in e_i$).
First, we show that $p\geq k-1$.
(In fact, $p\geq k$ follows but that will be proved later).
Indeed, let ${\mathcal H}^{(p)}$ be the hypergraph consisting of the edges of ${\mathcal H}$
that contain $v_p$ and are contained in $\{ v_1, \dots, v_p \}$, together with the edges of the path, i.e.,
\[
E({\mathcal H}^{(p)}):=\{ e\in E({\mathcal H}): v_p\in e\subseteq \{ v_1, \dots, v_p\}\} \cup \{e_1, \dots ,e_{p-1} \}.
\]
Then for $p\leq k-2$ (and $k\geq 3$) we have
\[
|E({\mathcal H}^{(p)})|\leq 2^{p-1}+ (p-1)\leq 2^{k-2}< \delta({\mathcal H})\leq \deg_{\mathcal H}(v_p).
\]
So there exists an edge $f$ in $E({\mathcal H})\setminus E({\mathcal H}^{(p)})$ containing $v_p$.
Then $e_1, \dots ,e_{p-1},f$ form a Berge path longer than ${\mathcal P}$, a contradiction.
Now we have $p\geq k-1$, so we can define $W:=\{ v_1, \dots, v_{k-1}\}$. Let ${\mathcal P}_1$ be the subhypergraph consisting of the first $k-1$ edges of ${\mathcal P}$,
$E({\mathcal P}_1):= \{ e_1, \dots, e_{k-1}\}$
(if $p=k-1$ we take ${\mathcal P}_1:={\mathcal P}$).
Let ${\mathcal H}_1$ be the subhypergraph of ${\mathcal H}$ consisting of the edges incident to $v_1$.
\begin{claim}\label{cl:52}
Every edge $f\in E({\mathcal H}_1)\setminus E({\mathcal P}_1)$ is contained in $W=\{ v_1, \dots, v_{k-1}\}$.
\end{claim}
\begin{proof}
First, we show that every edge $f\in E({\mathcal H}_1)\setminus E({\mathcal P}_1)$ avoids $\{v_k, \dots, v_p\}$.
Otherwise, if there exists an edge $f\in E({\mathcal H}_1)\setminus E({\mathcal P}_1)$ such that
$f\cap \{v_k, \dots, v_p\} \neq \emptyset$, then let
$v_i$ be the vertex of $f$ in $\{v_k, \dots, v_p\}$ with the minimum index $i$ ($k\leq i\leq p$).
Then $e_1, \dots, e_{i-1}$ and $f$ form a Berge cycle of length $i\geq k$, since these hyperedges are all distinct and $v_1,v_i\in f$; this contradicts the assumption that ${\mathcal H}$ has no Berge cycle of length $k$ or longer.
Finally, suppose that there is an edge $f\in E({\mathcal H}_1)\setminus E({\mathcal P}_1)$
such that $v\in f$, $v\notin W$.
Then $v\notin \{ v_1, \dots, v_p\}$, so the Berge path with edges $f, e_1, \dots, e_{p-1}$ (and base vertices $v, v_1, \dots, v_p$) is longer than ${\mathcal P}$, a contradiction.
\end{proof}
Let ${\mathcal K}$ be the family of all $2^{k-2}$ subsets of $W$ that contain $v_1$. We claim there is a one-to-one mapping $\varphi$ from ${\mathcal H}_1 \setminus \{e_{k-1}\}$ to ${\mathcal K}$. The existence of such a $\varphi$ implies
\begin{equation}\label{eq41}
\delta({\mathcal H})\leq \deg_{\mathcal H}(v_1) \leq 2^{k-2} + 1.
\end{equation}
If an edge $e$ of ${\mathcal H}_1$ satisfies $e \subseteq W$, then let $\varphi(e) = e$. For the remaining edges, let ${\mathcal A} \subseteq {\mathcal H}_1$ be the set of edges of ${\mathcal H}_1 \setminus \{e_{k-1}\}$ that contain some vertex outside of $W$ (every edge of ${\mathcal H}_1$ contains $v_1$). By Claim~\ref{cl:52}, each $e \in {\mathcal A}$ must be some edge $e_i$ in ${\mathcal P}_1$. Hence it remains to show that all elements of ${\mathcal A}$ can be mapped to distinct elements of ${\mathcal K}$ that are not edges of ${\mathcal H}$.
Observe that if $e_i\in {\mathcal A}$ then $\{v_i,v_{i+1}\}\notin {\mathcal H}$.
Otherwise, we get a better path by replacing $e_i$ by $\{v_i,v_{i+1}\}$.
Also, for $1\leq i\leq k-2$, $e_i\in {\mathcal A}$ implies $v_1\in e_i$ and $\{v_i,v_{i+1}\}\subset e_i$.
Since $e_i\not \subset W$ we get $|e_i|\geq 4$ for $i\geq 2$.
We also obtain that in case of $i\geq 3$, $e_i\in {\mathcal A}$ we have $\{v_1, v_i,v_{i+1}\}\notin {\mathcal P} $, and moreover $\{v_1,v_i,v_{i+1}\} \notin {\mathcal H}$ since otherwise we get a better path by replacing $e_i$ by $\{v_1, v_i,v_{i+1}\}$.
For $3\leq i\leq k-2$ (and $e_i\in {\mathcal A}$) define $\varphi(e_i)$ as $\{v_1, v_i,v_{i+1}\}$.
If $e_2\in {\mathcal A}$ and $\{ v_1, v_2, v_3 \}\not \in {\mathcal H}$ then
we proceed as above, $\varphi(e_2):=\{ v_1, v_2, v_3 \}$. Otherwise, if $e_2\in {\mathcal A}$ (so $|e_2|\geq 4$) and $\{ v_1, v_2, v_3 \}\in {\mathcal H}$ then
$\{ v_1, v_2, v_3 \}\in {\mathcal P}$ too (otherwise, we get a better path by replacing $e_2$ by $\{v_1, v_2,v_3\}$).
We get $e_1=\{ v_1, v_2, v_3 \}$ (and $e_1 \subset e_2$).
We claim that $\{ v_1, v_3\}\notin {\mathcal H}$.
Otherwise we rearrange the base vertices of the path ${\mathcal P}$ by exchanging $v_1$ and $v_2$ (and get the order $v_2, v_1, v_3, \dots, v_{p}$) and observe that the Berge path $\{ v_2, v_1, v_3\}, \{ v_1, v_3 \}, e_3, \dots, e_{p-1}$ is better than ${\mathcal P}$, a contradiction. So in this case $\varphi(e_2):= \{ v_1, v_3 \}$. Finally, if $e_1\in {\mathcal A}$ then $\varphi(e_1):= \{ v_1, v_2 \}$, and the definition of $\varphi$ is complete.
We have shown that $\deg_{\mathcal H}(v_1) \leq |{\mathcal H}_1 \setminus \{e_{k-1}\}| +1 \leq 2^{k-2} +1$. Since $\delta({\mathcal H})\geq 2^{k-2}+1$, equality holds throughout, so $v_1 \in e_{k-1}$. In particular, $e_{k-1}$ must exist, so ${\mathcal P}$ was a Berge path of length at least $k-1$.
\end{proof}
\medskip
Our method works for multihypergraphs as well.
If the maximum multiplicity of an edge is $\mu$, then the corresponding bounds on the minimum degrees are $\mu 2^{k-2}+1$ and $\mu 2^{k-2}+2$, respectively.
Indeed, suppose that $\delta({\mathcal F})\geq \mu 2^{k-2}+1$, $k\geq 3$ and that ${\mathcal F}$ has no Berge cycle of length $k$ or longer.
Let ${\mathcal H}$ be the simple hypergraph obtained from ${\mathcal F}$ by keeping one copy from the multiple edges. We have $\delta({\mathcal H})\geq 2^{k-2}+1$.
Then Theorem~\ref{th:mindeg_path} implies that ${\mathcal H}$ (and ${\mathcal F}$ as well) contains a Berge path with $k$ base vertices.
As in the proof of Theorem~\ref{th:mindeg_cycle}, consider a best Berge path ${\mathcal P}$ in ${\mathcal H}$ with
base vertices $v_1, v_2, \dots, v_p$ and edges $e_1, \dots ,e_{p-1}$. We have $p\geq k$.
Then~\eqref{eq41} gives $\deg_{\mathcal H}(v_1)= 2^{k-2} + 1$ and we get
$\deg_{\mathcal H}(v_1)= |{\mathcal H}_1 \setminus \{e_{k-1}\}| +1$.
Since we also obtained $\{v_1, v_{k-1}, v_k\} \subset e_{k-1}$, the multiplicity of $e_{k-1}$ could not exceed $1$. So
$\delta({\mathcal F})$ could not exceed $\mu 2^{k-2}+1$.
\section{Maximum number of edges}
\noindent{\em Proof of Theorem~\ref{th:EGh}.}\enskip
Suppose that among all $n$-vertex hypergraphs with $c({\mathcal H})<k$ and $e({\mathcal H})$ edges our ${\mathcal H}$ is chosen so that $\sum_{e \in E({\mathcal H})} |e|$ is minimized.
We claim that ${\mathcal H}$ is a downset, that is, for any $e \in E({\mathcal H})$ and $e' \subset e$, $e' \in E({\mathcal H})$.
Indeed, if there exists a set $e'$ and a hyperedge $e$ with $e' \subset e$ such that $e' \notin E({\mathcal H})$ and $e \in E({\mathcal H})$, then the hypergraph obtained by replacing $e$ with $e'$ also does not contain a Berge cycle of length $k$ or longer. This contradicts the choice of ${\mathcal H}$.
Let $H = \partial_2{\mathcal H}$ be the 2-shadow of ${\mathcal H}$. Suppose that $H$ contains a cycle $C = v_1 v_2 \ldots v_\ell v_1$. Every edge $v_iv_{i+1}$ of $C$ is contained in a hyperedge of ${\mathcal H}$. But since ${\mathcal H}$ is a downset, the hyperedge $\{v_i, v_{i+1}\}$ is also contained in $E({\mathcal H})$. Therefore ${\mathcal H}$ also contains a (Berge) cycle of length $\ell$. Hence the graph $H$ contains no cycles of length at least $k$.
Let $e_r({\mathcal H})$ be the number of hyperedges of ${\mathcal H}$ of size $r$. In $H$, every hyperedge $e$ of ${\mathcal H}$ is represented by a clique of order $|e|$, and so $e_r({\mathcal H})$ is at most the number of cliques of size $r$ in $H$.
Since $c(H)<k$, each hyperedge contains at most $k-1$ vertices.
By Theorem~\ref{cliques},
\[e({\mathcal H}) = e_0({\mathcal H})+ e_1({\mathcal H})+ \sum_{r=2}^{k-1} e_r({\mathcal H}) \leq 1+ n+ \sum_{r=2}^{k-1}\frac{n-1}{k-2}{k-1 \choose r} = 2+ \frac{n-1}{k-2}\left(2^{k-1}-2\right). \hfill\ifhmode\unskip\nobreak\fi\quad\ifmmode\Box\else\hfill$\Box$\fi \]
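The closing identity $1+n+\sum_{r=2}^{k-1}\frac{n-1}{k-2}\binom{k-1}{r} = 2+\frac{n-1}{k-2}\left(2^{k-1}-2\right)$ follows from $\sum_{r=2}^{k-1}\binom{k-1}{r}=2^{k-1}-k$; it can also be checked mechanically. A quick sketch in Python (my own verification, not part of the proof):

```python
from fractions import Fraction
from math import comb

def bound_lhs(n, k):
    # 1 + n + sum_{r=2}^{k-1} (n-1)/(k-2) * C(k-1, r)
    return 1 + n + sum(Fraction(n - 1, k - 2) * comb(k - 1, r)
                       for r in range(2, k))

def bound_rhs(n, k):
    # 2 + (n-1)/(k-2) * (2^{k-1} - 2)
    return 2 + Fraction(n - 1, k - 2) * (2**(k - 1) - 2)

# The two closed forms agree for all n and all k >= 3:
assert all(bound_lhs(n, k) == bound_rhs(n, k)
           for n in range(15, 31) for k in range(3, 12))
```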
| {
"timestamp": "2020-02-06T02:04:47",
"yymm": "2002",
"arxiv_id": "2002.01597",
"language": "en",
"url": "https://arxiv.org/abs/2002.01597",
"abstract": "We consider two extremal problems for set systems without long Berge cycles. First we give Dirac-type minimum degree conditions that force long Berge cycles. Next we give an upper bound for the number of hyperedges in a hypergraph with bounded circumference. Both results are best possible in infinitely many cases.",
"subjects": "Combinatorics (math.CO)",
"title": "Berge cycles in non-uniform hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534376578004,
"lm_q2_score": 0.8152324938410783,
"lm_q1q2_score": 0.8001127335706679
} |
https://arxiv.org/abs/1206.4740 | Leinartas's partial fraction decomposition | These notes describe Leinartas's algorithm for multivariate partial fraction decompositions and employ an implementation thereof in Sage. | \section{Introduction}
In \cite{Lein1978}, Le{\u\i}nartas\ gave an algorithm for decomposing multivariate rational expressions into partial fractions.
In these notes I re-present Le{\u\i}nartas's\ algorithm, because it is not well-known, because its English translation \cite{Lein1978} is difficult to find, and because it is useful e.g. for computing residues of multivariate rational functions; see \cite[Chapter 3]{AiYu1983} and \cite{RaWi2012}.
Along the way I include examples that employ an open-source implementation of Le{\u\i}nartas's\ algorithm that I wrote in Sage \cite{Sage}.
The code can be downloaded from \href{http://www.alexraichev.org/research.html}{my website} and is currently under peer review for incorporation into the Sage codebase.
For a different type of multivariate partial fraction decomposition, one that uses iterated univariate partial fraction decompositions, see \cite{Stou2008}.
\section{Algorithm}
Henceforth let $K$ be a field and $\overline{K}$ its algebraic closure.
We will work in the factorial polynomial rings $K[X]$ and $\overline{K}[X]$, where $X = X_1,\ldots, X_d$ with $d \ge 1$.
Le{\u\i}nartas's\ algorithm is contained in the constructive proof of the following theorem, which is \cite[Theorem 1]{Lein1978}\footnote{
Le{\u\i}nartas\ used $K = \mathbb{C}$, but that is an unnecessary restriction.
By the way, Le{\u\i}nartas's\ article contains typos in equation (c) on the second page, equation (b) on the third page, and the equation immediately after equation (d) on the third page: the right sides of those equations should be multiplied by $P$.
}.
\begin{theorem}[Le{\u\i}nartas\ decomposition]\label{leinartas-decomp}
Let $f = p/q$, where $p, q \in K[X]$.
Let $q = q_1^{e_1} \cdots q_m^{e_m}$ be the unique factorization of $q$ in $K[X]$, and let $V_i = \{x \in \overline{K}^d : q_i(x) = 0 \}$, the algebraic variety of $q_i$ over $\overline{K}$.
The rational expression $f$ can be written in the form
\[
f = \sum_A \frac{p_A}{\prod_{i \in A} q_i^{b_i}},
\]
where the $b_i$ are positive integers (possibly greater than the $e_i$), the $p_A$ are polynomials in $K[X]$ (possibly zero), and the sum is taken over all subsets $A \subseteq \{1,\ldots, m\}$ such that $\cap_{i \in A} V_i \neq \emptyset$ and $\{q_i : i \in A\}$ is algebraically independent (and necessarily $|A| \le d$).
\end{theorem}
Let us call a decomposition of the form above a \hl{Le{\u\i}nartas\ decomposition}.
An immediate consequence of the theorem is the following.
\begin{corollary}
Every rational expression in $d$ variables can be represented as a sum of rational expressions each of whose denominators contains at most $d$ distinct irreducible factors.
\qed
\end{corollary}
Now for a constructive proof of the theorem.
It involves two steps: decomposing $f$ via the Nullstellensatz and then decomposing each resulting summand via algebraic dependence.
We need a few lemmas.
The following lemma is a strengthening of the weak Nullstellensatz and is proved in \cite[Lemma 3.2]{DLMM2008}.
\begin{lemma}[Nullstellensatz certificate]\label{null-cert}
A finite set of polynomials $\{q_1, \ldots, q_m \} \subset K[X]$ has no common zero in $\overline{K}^d$ iff there exist polynomials $h_1, \ldots, h_m \in K[X]$ such that
\[
1 = \sum_{i=1}^m h_i q_i.
\]
Moreover, if $K$ is a computable field, then there is a computable procedure to check whether or not the $q_i$ have a common zero in $\overline{K}^d$ and, if not, return the $h_i$.
\qed
\end{lemma}
Let us call a sequence of polynomials $h_i$ satisfying the equation above a \hl{Nullstellensatz certificate} for the $q_i$.
Note that in contrast to the usual weak Nullstellensatz, here the polynomials $h_i$ are in $K[X]$ and not just in $\overline{K}[X]$.
Some examples of computable fields are finite fields, $\mathbb{Q}$, finite degree extensions of $\mathbb{Q}$, and $\overline{\mathbb{Q}}$.
Applying Lemma~\ref{null-cert} we get the following lemma \cite[Lemma 3]{Lein1978}.
\begin{lemma}[Nullstellensatz decomposition]\label{null-decomp}
Under the hypotheses of Theorem~\ref{leinartas-decomp},
the rational expression $f$ can be written in the form
\[
f = \sum_A \frac{p_A}{\prod_{i \in A} q_i^{e_i}},
\]
where the $p_A$ are polynomials in $K[X]$ (possibly zero) and the sum is taken over all subsets $A \subseteq \{1,\ldots, m\}$ such that $\cap_{i \in A} V_i \neq \emptyset$.
\end{lemma}
\begin{proof}
If $\cap_{i=1}^m V_i \neq \emptyset$, then the result holds.
Suppose now that $\cap_{i=1}^m V_i = \emptyset$.
Then the polynomials $q_i^{e_i}$ have no common zero in $\overline{K}^d$.
So by Lemma~\ref{null-cert}
\[
1 = h_1 q_1^{e_1} + \cdots + h_m q_m^{e_m}
\]
for some polynomials $h_i$ in $K[X]$.
Multiplying both sides of the equation by $p/q$ yields
\begin{align*}
f
&=
\frac{p (h_1 q_1^{e_1} + \cdots + h_m q_m^{e_m})}{q_1^{e_1} \cdots
q_m^{e_m}} \\
&=
\sum_{i=1}^m \frac{p h_i}{q_1^{e_1} \cdots \widehat{q_i^{e_i}} \cdots
q_m^{e_m}}
\end{align*}
Note that $p h_i \in K[X]$.
Next we check each summand $p h_i/(q_1^{e_1} \cdots \widehat{q_i^{e_i}} \cdots q_m^{e_m})$ to see whether $\cap_{j \neq i } V_j \neq \emptyset$.
If so, then stop.
If not, then apply Lemma~\ref{null-cert} to $\{q_1^{e_1}, \ldots, \widehat{q_i^{e_i}}, \ldots, q_m^{e_m}\}$.
Repeating this procedure until it stops yields the desired result.
The procedure must stop, because each $V_i \neq \emptyset$ since each $q_i$ is irreducible in $K[X]$ and hence not a unit in $K[X]$.
\end{proof}
Let us call a decomposition of the form above a \hl{Nullstellensatz decomposition}.
\begin{example}
Consider the rational expression
\[
f := \frac{X^2 Y + X Y^2 + X Y + X + Y}{X Y (X Y + 1)}
\]
in $\mathbb{Q}(X,Y)$.
Let $p$ denote the numerator of $f$.
The irreducible polynomials $X, Y, XY + 1 \in \mathbb{Q}[X, Y]$ in the denominator have no common zero in $\overline{\mathbb{Q}}^2$.
So they have a Nullstellensatz certificate, e.g. $(-Y, 0, 1)$:
\[
1 = (-Y)X + (0)X + (1)(XY + 1).
\]
Applying the algorithm in the proof of Lemma~\ref{null-decomp} gives us a Nullstellensatz decomposition for $f$ in one iteration:
\begin{align*}
f
=& \frac{p(-Y)}{Y(XY + 1)} + \frac{p(1)}{XY} \\
=& \frac{-p}{XY + 1} + \frac{p}{XY} \\
=& -X -Y -1 + \frac{1}{X Y + 1} + X + Y + 1 + \frac{X + Y}{XY} \\
& \text{(after applying the division algorithm)} \\
=& \frac{1}{X Y + 1} + \frac{X + Y}{XY}.
\end{align*}
Notice that
\[
f = \frac{1}{X} + \frac{1}{Y} + \frac{1}{XY + 1}
\]
is also a Nullstellensatz decomposition for $f$.
So Nullstellensatz decompositions are not unique.
\end{example}
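Both decompositions in the example are easy to confirm symbolically; a quick check in SymPy (again, not the Sage package referred to above):

```python
from sympy import symbols, simplify

x, y = symbols('x y')
f = (x**2*y + x*y**2 + x*y + x + y) / (x*y*(x*y + 1))

# Decomposition produced by the algorithm in Lemma null-decomp:
d1 = 1/(x*y + 1) + (x + y)/(x*y)
# A different Nullstellensatz decomposition of the same f:
d2 = 1/x + 1/y + 1/(x*y + 1)

assert simplify(f - d1) == 0
assert simplify(f - d2) == 0
```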
The next lemma is a classic in computational commutative algebra; see e.g. \cite{Kaya2009}.
\begin{lemma}[Algebraic dependence certificate]\label{algdep-cert}
Any set $S$ of polynomials in $K[X]$ of size $> d$ is algebraically dependent.
Moreover, if $K$ is a computable field and $S$ is finite, then there is a computable procedure that checks whether or not $S$ is algebraically dependent and, if so, returns an annihilating polynomial over $K$ for $S$.
\qed
\end{lemma}
The next lemma is \cite[Lemma 1]{Lein1978}.
\begin{lemma}\label{algdep-powers}
A finite set of polynomials $\{q_1, \ldots, q_m\} \subset K[X]$ is algebraically dependent iff for all positive integers $e_1, \ldots, e_m$ the set of polynomials $\{q_1^{e_1}, \ldots, q_m^{e_m}\}$ is algebraically dependent.
\end{lemma}
\begin{proof}
A set of polynomials $\{q_1, \ldots, q_m\} \subset K[X]$ is algebraically independent
iff the $m \times d$ Jacobian matrix $J(q_1, \ldots, q_m) := \left( \frac{\partial q_i}{\partial X_j}\right)$ over the vector space $K(X)^d$ has rank $m$ (by the Jacobian criterion; see e.g. \cite{EhRo1993})
iff for all positive integers $e_i$ the matrix $\left(e_i q_i^{e_i -1} \frac{\partial q_i}{\partial X_j}\right) = J(q_1^{e_1}, \ldots, q_m^{e_m})$ over the vector space $K(X)^d$ has rank $m$ (since we are just taking scalar multiples of rows) iff the set of polynomials $q_1^{e_1}, \ldots, q_m^{e_m}$ is algebraically independent (by the Jacobian criterion).
Moreover, if $\{q_1, \ldots, q_m\}$ is algebraically dependent, then any nonzero member of the (necessarily nontrivial) elimination ideal
\[
\langle Y_1 - q_1, \ldots, Y_m - q_m, Y_1^{e_1} - Z_1, \ldots, Y_m^{e_m} - Z_m \rangle_{K[X,Y,Z]} \cap K[Z_1, \ldots, Z_m],
\]
is an annihilating polynomial for $q_1^{e_1}, \ldots, q_m^{e_m}$.
Moreover a finite basis for the elimination ideal can be computed using Groebner bases; see e.g. \cite[Chapter 3]{CLO2007}.
\end{proof}
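The Jacobian criterion used in the proof is directly computable over $\mathbb{Q}$; a small SymPy sketch (illustrative only):

```python
from sympy import symbols, Matrix

x, y, z = symbols('x y z')

def jacobian_rank(polys, gens):
    """Rank of the Jacobian matrix (d p_i / d X_j) over K(X).
    By the Jacobian criterion (characteristic 0), the polys are
    algebraically independent iff the rank equals len(polys)."""
    return Matrix([[p.diff(g) for g in gens] for p in polys]).rank()

# {x, y, x*y + z} is algebraically independent in Q[x, y, z]:
assert jacobian_rank([x, y, x*y + z], (x, y, z)) == 3
# Four polynomials in three variables are always dependent:
assert jacobian_rank([x, y, z, x*y + z], (x, y, z)) < 4
```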
Applying the previous two lemmas we get our final lemma \cite[Lemma 2]{Lein1978}.
\begin{lemma}[Algebraic dependence decomposition]\label{algdep-decomp}
Under the hypotheses of Theorem~\ref{leinartas-decomp},
the rational expression $f$ can be written in the form
\[
f = \sum_A \frac{p_A}{\prod_{i \in A} q_i^{b_i}},
\]
where the $b_i$ are positive integers (possibly greater than the $e_i$), the $p_A$ are polynomials in $K[X]$ (possibly zero), and the sum is taken over all subsets $A \subseteq \{1,\ldots, m\}$ such that $\{q_i : i \in A\}$ is algebraically independent (and necessarily $|A| \le d$).
\end{lemma}
\begin{proof}
If $\{q_1, \ldots, q_m\}$ is algebraically independent, then the result holds.
Notice that in this case $m \le d$ by Lemma~\ref{algdep-cert}.
Suppose now that $\{q_1, \ldots, q_m\}$ is algebraically dependent.
Then so is $\{q_1^{e_1}, \ldots, q_m^{e_m}\}$ by Lemma~\ref{algdep-powers}.
Let $g = \sum_{\nu \in S} c_\nu Y^\nu \in K[Y_1, \ldots, Y_m]$ be an annihilating polynomial for $\{q_1^{e_1}, \ldots, q_m^{e_m}\}$, where $S \subset \mathbb{N}^m$ is the set of multi-indices such that $c_\nu \neq 0$.
Choose a multi-index $\alpha \in S$ of smallest norm $||\alpha|| = \alpha_1 + \cdots + \alpha_m$.
Then at $Q:= (q_1^{e_1}, \ldots, q_m^{e_m})$ we have
\begin{align*}
g(Q)
&= 0 \\
c_\alpha Q^\alpha
&= -\sum_{\nu \in S \setminus{\{\alpha\}}} c_\nu Q^\nu \\
1
&= \frac{-\sum_{\nu \in S \setminus{\{\alpha\}}} c_\nu Q^\nu}{c_\alpha Q^\alpha}.
\end{align*}
Multiplying both sides of the last equation by $p / q$ yields
\begin{align*}
\frac{p}{q}
&= \sum_{\nu \in S \setminus{\{\alpha\}}}
\frac{-p c_\nu Q^\nu}{c_\alpha Q^{\alpha + 1}} \\
&=
\sum_{\nu \in S \setminus{\{\alpha\}}} \frac{-p c_\nu}{c_\alpha}
\prod_{i=1}^m \frac{q_i^{e_i \nu_i}}{q_i^{e_i(\alpha_i + 1)}} \\
\end{align*}
Since $\alpha$ has the smallest norm in $S$ it follows that for any $\nu \in S \setminus{\{\alpha\}}$ there exists $i$ such that $\alpha_i + 1 \le \nu_i$, so that $e_i(\alpha_i + 1) \le e_i \nu_i$.
So for each $\nu \in S \setminus{\{\alpha\}}$, some polynomial $q_i^{e_i (\alpha_i + 1)}$ in the denominator of the right side of the last equation cancels.
Repeating this procedure yields the desired result.
\end{proof}
Let us call a decomposition of the form above an \hl{algebraic dependence decomposition}.
\begin{example}
Consider the rational expression
\[
f := \frac{(X^2 Y^2 + X^2 Y Z + X Y^2 Z + 2 X Y Z + X Z^2 + Y Z^2)}{X Y Z (X Y + Z)}
\]
in $\mathbb{Q}(X,Y,Z)$.
Let $p$ denote the numerator of $f$.
The irreducible polynomials $X, Y, Z, XY + Z \in \mathbb{Q}[X,Y,Z]$ in the denominator are four in number, which is greater than the number of ring indeterminates, and so they are algebraically dependent.
An annihilating polynomial for them is $g(A,B,C,D) = AB + C - D$.
Applying the algorithm in the proof of Lemma~\ref{algdep-decomp} gives us an algebraic dependence decomposition for $f$ in one iteration:
\begin{align*}
f
=& \sum_{\nu \in S \setminus{\{\alpha\}}}
\frac{-p c_\nu Q^\nu}{c_\alpha Q^{\alpha + 1}} \\
& \text{where $Q = (X,Y,Z,XY + Z)$ and $\alpha = (0,0,0,1)$} \\
=& \frac{pQ^{(1,1,0,0)}}{Q^{(1,1,1,2)}} + \frac{pQ^{(0,0,1,0)}}{Q^{(1,1,1,2)}} \\
=& \frac{p}{Q^{(0,0,1,2)}} + \frac{p}{Q^{(1,1,0,2)}} \\
=& \frac{p}{Z (XY + Z)^2} + \frac{p}{XY(XY + Z)^2}.
\end{align*}
Notice that in this example the exponent 2 of the irreducible factor $XY + Z$ in the denominators of the decomposition is larger than the exponent 1 of $XY + Z$ in the denominator of $f$.
Notice also that
\[
f = \frac{1}{X} + \frac{1}{Y} + \frac{1}{Z} + \frac{1}{XY + Z}
\]
is also an algebraic dependence decomposition for $f$.
So algebraic dependence decompositions are not unique.
\end{example}
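As a sanity check, one can confirm the annihilating relation and both decompositions of this example symbolically (SymPy sketch, illustrative only):

```python
from sympy import symbols, simplify, expand

x, y, z = symbols('x y z')
p = x**2*y**2 + x**2*y*z + x*y**2*z + 2*x*y*z + x*z**2 + y*z**2
f = p / (x*y*z*(x*y + z))

# g(A,B,C,D) = A*B + C - D annihilates (x, y, z, x*y + z):
assert expand(x*y + z - (x*y + z)) == 0

# Both algebraic dependence decompositions equal f:
d1 = p/(z*(x*y + z)**2) + p/(x*y*(x*y + z)**2)
d2 = 1/x + 1/y + 1/z + 1/(x*y + z)
assert simplify(f - d1) == 0
assert simplify(f - d2) == 0
```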
Finally, here is Le{\u\i}nartas's\ algorithm.
\begin{proof}[Proof of Theorem~\ref{leinartas-decomp}]
First find the irreducible factorization of $q$ in $K[X]$.
This is a computable procedure if $K$ is computable.
Then decompose $f$ via Lemma~\ref{null-decomp}.
Finally decompose each summand of the result via Lemma~\ref{algdep-decomp}.
As highlighted above, the last two steps are computable if $K$ is.
\end{proof}
\begin{example}
Consider the rational expression
\[
f := \frac{2X^2 Y + 4X Y^2 + Y^3 - X^2 - 3 X Y - Y^2}{X Y (X + Y) (Y - 1)}
\]
in $\mathbb{Q}(X,Y)$.
Computing a Nullstellensatz decomposition according to the proof of Lemma~\ref{null-decomp} with Nullstellensatz combination $1 = 0(X) + 1(Y) + 0(X + Y) -1(Y - 1)$ yields
\begin{align*}
f =& X - Y + \frac{Y^3 + X^2 - Y^2 + X}{X(Y-1)} +
\frac{X^2 Y - 2X^2 - XY}{(X + Y)(Y - 1)} +\\
& \frac{-2X^3 - Y^3 - 2X^2 + Y^2}{X(X + Y)} +
\frac{2X^2 Y - Y^3 + X^2 + 3X Y + Y^2}{XY(X + Y)}.
\end{align*}
Computing an algebraic dependence decomposition for the last term according to the proof of Lemma~\ref{algdep-decomp} with annihilating polynomial $g(A,B,C) = A + B - C$ for $(X, Y, X + Y)$ yields
\begin{align*}
& \frac{2X^2 Y - Y^3 + X^2 + 3X Y + Y^2}{XY(X + Y)} \\
&= 1 + \frac{2X^2 Y - Y^3 + X^2 + 3X Y + Y^2}{XY^2} +
\frac{-2X^2 Y - XY^2 - X^2 - 3XY - Y^2}{Y^2 (X + Y)}.
\end{align*}
The two equalities taken together give us a Le{\u\i}nartas\ decomposition for $f$.
Notice that
\[
f = \frac{1}{X} + \frac{1}{Y} + \frac{1}{X + Y} + \frac{1}{Y - 1}
\]
is also a Le{\u\i}nartas\ decomposition of $f$.
So Le{\u\i}nartas\ decompositions are not unique.
\end{example}
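A symbolic check of this example (SymPy, illustrative only): the Nullstellensatz combination used in the first step, and the compact Le{\u\i}nartas\ decomposition at the end, are both easily verified.

```python
from sympy import symbols, simplify, expand

x, y = symbols('x y')
f = (2*x**2*y + 4*x*y**2 + y**3 - x**2 - 3*x*y - y**2) \
    / (x*y*(x + y)*(y - 1))

# Nullstellensatz combination: 1 = 0*x + 1*y + 0*(x+y) - 1*(y-1).
assert expand(0*x + 1*y + 0*(x + y) - 1*(y - 1)) == 1

# The compact Leinartas decomposition equals f:
assert simplify(f - (1/x + 1/y + 1/(x + y) + 1/(y - 1))) == 0
```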
\begin{remark}
In case $d=1$, Le{\u\i}nartas\ decompositions are unique once the fractions are written in lowest terms (and one disregards summand order).
To see this, note that a Le{\u\i}nartas\ decomposition of a univariate rational expression $f = p/q$ must have fractions all of the form $p_i/q_i^{e_i}$, where $q = q_1^{e_1} \cdots q_m^{e_m}$ is the unique factorization of $q$ in $K[X]$.
This is because two or more univariate polynomials are algebraically dependent (by Lemma~\ref{algdep-cert}).
Assume without loss of generality here that $\deg(p) < \deg(q)$.
It follows that if we have two Le{\u\i}nartas's\ decompositions of $p/q$, then we can write them in the form $a_1/q' + a_2/q'' = b_1/q' + b_2/q''$, where $q = q'q''$ with $q'$ and $q''$ coprime, $\deg(a_1), \deg(b_1) < \deg(q')$, and $\deg(a_2), \deg(b_2) < \deg(q'')$.
Multiplying the equality by $q$ we get $a_1q'' + a_2q' = b_1q'' + b_2q'$.
So $a_1 \equiv b_1 \pmod{q'}$ and $a_2 \equiv b_2 \pmod{q''}$.
Thus $a_1 = b_1$ and $a_2 = b_2$.
This observation used inductively demonstrates uniqueness.
This argument fails in case $d \ge 2$, because then a Le{\u\i}nartas\ decomposition might not have fractions all of the form $p_i/q_i^{e_i}$.
\end{remark}
\begin{remark}
A rational expression for which $\cap_{i=1}^m V_i \neq \emptyset$ and $\{q_1, \ldots, q_m\}$ is already algebraically independent cannot necessarily be decomposed further into partial fractions.
For example,
\[
f = \frac{1}{X_1 X_2 \cdots X_m} \in K(X_1, X_2, \ldots, X_d),
\]
with $m \le d$ can not equal a sum of rational expressions whose denominators each contain fewer than $m$ of the $X_i$.
Otherwise, multiplying the equation by $X_1 X_2 \cdots X_m$ would yield
\[
1 = \sum_{i\in B} h_i X_i
\]
for some $h_i \in K[X]$ and some nonempty subset $B\subseteq \{1, 2, \ldots, m\}$, a contradiction to Lemma~\ref{null-cert} since $\{X_i : i\in B\}$ have a common zero in $\overline{K}^d$, namely the zero tuple.
\end{remark}
\bibliographystyle{amsalpha}
| {
"timestamp": "2012-06-26T02:07:37",
"yymm": "1206",
"arxiv_id": "1206.4740",
"language": "en",
"url": "https://arxiv.org/abs/1206.4740",
"abstract": "These notes describe Leinartas's algorithm for multivariate partial fraction decompositions and employ an implementation thereof in Sage.",
"subjects": "Commutative Algebra (math.AC); Combinatorics (math.CO)",
"title": "Leinartas's partial fraction decomposition",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534365728416,
"lm_q2_score": 0.8152324938410783,
"lm_q1q2_score": 0.8001127326861742
} |
https://arxiv.org/abs/1001.2949 | A Property of the Frobenius Map of a Polynomial Ring | Let R be a ring of polynomials in a finite number of variables over a perfect field k of characteristic p>0 and let F:R\to R be the Frobenius map of R, i.e. F(r)=r^p. We explicitly describe an R-module isomorphism Hom_R(F_*(M),N)\cong Hom_R(M,F^*(N)) for all R-modules M and N. Some recent and potential applications are discussed. | \section{Introduction}
The main result of this paper is Theorem \ref{main} which is a type of adjointness property for the Frobenius map of a polynomial ring over a perfect field. The interest in this fairly elementary result comes from its striking recent applications (see \cite[Sections 5 and 6]{WZ} and \cite{YZ}) and also from the fact that despite extensive inquiries we have not been able to find it in the published literature.
Especially interesting is the application in \cite{YZ} where a striking new result on local cohomology modules in characteristic $p>0$ is deduced from Theorem \ref{main}. There is no doubt that the same result is true in characteristic 0, but the only currently known (to us) proof is in characteristic $p>0$, based on Theorem \ref{main}. This has motivated what promises to be a very interesting search for some new technique to extend the result in \cite{YZ} to characteristic 0.
We believe the results of this paper hold a potential for further applications and we discuss some of them in the last section.
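The Frobenius map $F(r)=r^p$ from the abstract is additive because $p$ divides $\binom{p}{i}$ for $0<i<p$ (the ``freshman's dream''), and it is clearly multiplicative. A tiny Python check of this divisibility (illustrative only, not used in the proofs below):

```python
from math import comb

def frobenius_is_additive(p):
    # (a + b)^p = a^p + b^p mod p, because p | C(p, i) for 0 < i < p.
    return all(comb(p, i) % p == 0 for i in range(1, p))

# Holds for primes:
assert frobenius_is_additive(5)
assert frobenius_is_additive(7)
# Fails for composite exponents, e.g. C(4, 2) = 6 is not 0 mod 4:
assert not frobenius_is_additive(4)
```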
\section{Preliminaries}
Let $R$ be a regular (but not necessarily local) UFD, let $R_s$ and $R_t$ be two copies of $R$ (the subscripts stand for {\it source} and {\it target}) and let $F:R_s\to R_t$ be a finite ring homomorphism. Let $$F_*:R_t{\rm -mod}\to R_s{\rm -mod}$$ be the restriction of scalars functor (i.e. $F_*(M)$ for every $R_t$-module $M$ is the additive group of $M$ regarded as an $R_s$-module via $F$) and let $$F^!, F^*:R_s{\rm -mod}\to R_t{\rm -mod}$$ be the functors defined by $F^!(N)={\rm Hom}_{R_s}(R_t,N)$ and $F^*(N)=R_t\otimes_{R_s}N$ for every $R_s$-module $N$.
$F^*(N)$ and $F^!(N)$, for an $R_s$-module $N$, have a structure of $R_s$-module via the natural $R_s$-action on $R_t$, while $F_*(M)$, for an $R_t$-module $M$, retains its old $R_t$-module structure (from before the restriction of scalars). Thus $F^*(N), F^!(N)$, and $F_*(M)$ are both $R_s$- and $R_t$-modules.
It is well-known that $F^*$ is left-adjoint to $F_*$, i.e. there is an isomorphism of $R_t$-modules
\begin{alignat}{2}\label{adj4}
{\rm Hom}_{R_t}(F^*(N), M)&\cong &&{\rm Hom}_{R_s}(N,F_*(M))\\
f&\mapsto &&(n\mapsto f(1\otimes n))\notag \\
(r\otimes n\mapsto rf(n))&\leftarrow &&f\notag
\end{alignat}
which is functorial in $M$ and $N$ (see, for example, \cite[II.5, p.110]{Ha}). This holds without any restrictions on $R$.
The main result of this paper is based on a different type of adjointness (see (\ref{adj3}) below) which is certainly not well known (we could not find a reference). Unlike (\ref{adj4}), it is not quite canonical but depends on the choice of a certain isomorphism $\phi$ (described in (\ref{phi}) below). And it requires the conditions we imposed on $R$, namely that $R$ be a regular UFD.
Since $R_t$, being regular, is locally Cohen-Macaulay, it follows from the Auslander-Buchsbaum theorem that the projective dimension of $R_t$ as an $R_s$-module is zero, i.e. $R_t$ is projective, hence locally free, as an $R_s$-module. It is a standard fact that in this case $F^!$ is right adjoint to $F_*$ \cite[Ch. 3, Exercise 6.10]{Ha}, i.e. for every $R_t$-module $M$ and every $R_s$-module $N$ there is an $R_t$-module isomorphism
\begin{alignat}{1}\label{adj1}
{\rm Hom}_{R_s}(F_*(M),N)\cong&{\rm Hom}_{R_t}(M,F^!(N))\\
(m\mapsto \mathfrak f(1))\leftarrow& f\notag
\end{alignat}
where $\mathfrak f=f(m):R_t\to N$; that is, the displayed map sends $f$ to the homomorphism $m\mapsto f(m)(1)$. This isomorphism is functorial both in $M$ and in $N$.
Consider the $R_t$-module $H\stackrel{\rm def}{=}{\rm Hom}_{R_s}(R_t,R_s)$. Since $R_t$ is a locally free $R_s$-module of finite rank, it is a standard fact \cite[Ch.~2, Exercise 5.1(b)]{Ha} that there is an isomorphism of functors
\begin{align}\label{f^!}
F^!\cong H\otimes_{R_s}-,
\end{align}
i.e. for every $R_s$-module $N$ there is an $R_t$-module isomorphism
\begin{align}\label{f^!'}
H\otimes_{R_s}N&\cong {\rm Hom}_{R_s}(R_t,N)\\
\notag h\otimes n&\mapsto (r\mapsto h(r)n)
\end{align}
and this isomorphism is functorial in $N$. Replacing $F^!$ by $H\otimes_{R_s}-$ in (\ref{adj1}) produces an $R_t$-module isomorphism
\begin{align}\label{adj2}
{\rm Hom}_{R_s}(F_*(M),N)\cong {\rm Hom}_{R_t}(M, H\otimes_{R_s}N)
\end{align}
which is functorial in $M$ and $N$.
It follows from \cite[Kor. 5.14]{KM} that locally $H$ is the canonical module of $R_t$. In particular, the rank of $H$ as an $R_t$-module is 1 and therefore $H$ is $R_t$-module isomorphic to some ideal $I$ of $R_t$. According to \cite[Kor. 6.13]{KM} the quotient $R_t/I$ is locally Gorenstein of dimension $\dim R_t-1$, hence $I$ has pure height 1. Since $R_t$ is a UFD, $I$ is principal, i.e. there is an $R_t$-module isomorphism
\begin{align}\label{phi}
\phi:R_t\to H.
\end{align}
Hence $\phi$ induces an isomorphism of functors
\begin{align}\label{f^*}
R_t\otimes_{R_s}-\stackrel{\phi\otimes{\rm id}}{\cong} H\otimes_{R_s}-.
\end{align}
Replacing $H\otimes_{R_s}-$ by $F^*=R_t\otimes_{R_s}-$ in (\ref{adj2}) via (\ref{f^*}) produces an $R_t$-module isomorphism
\begin{align}\label{adj3}
{\rm Hom}_{R_s}(F_*(M),N)\cong {\rm Hom}_{R_t}(M, F^*(N))
\end{align}
which is functorial in $M$ and $N$.
While the isomorphisms (\ref{adj1}), (\ref{f^!}), (\ref{f^!'}) and (\ref{adj2}) are canonical, the isomorphisms (\ref{phi}), (\ref{f^*}) and (\ref{adj3}) depend on a choice of $\phi$. Every $R_t$-module isomorphism $\phi':R_t\to H$ is obtained from a fixed $\phi$ by multiplication by an invertible element of $R_t$, i.e. $\phi'=c\cdot \phi$ where $c\in R_t$ is invertible. Therefore the isomorphisms (\ref{f^*}) and (\ref{adj3}) are defined up to multiplication by an invertible element of $R_t$.
Since the element $1\in R_t$ generates the $R_s$-submodule $F(R_s)$ of $R_t$ and does not belong to $\mathfrak mR_t$ for any maximal ideal $\mathfrak m$ of $R_s$, the $R_s$-module $R_t/F(R_s)$ is projective. Hence applying the functor ${\rm Hom}_{R_s}(-,R_s)$ to the injective map $F:R_s\to R_t$ produces a surjection $H\to {\rm Hom}_{R_s}(R_s, R_s).$ Composing it with the standard $R_s$-module isomorphism ${\rm Hom}_{R_s}(R_s, R_s)\stackrel{\psi\mapsto \psi(1)}{\longrightarrow}R_s$ produces an $R_s$-module surjection $H\to R_s$. Composing this latter map with the isomorphism $\phi$ from (\ref{phi}) produces an $R_s$-module surjection
\begin{align}\label{psi}
\psi:R_t\to R_s.
\end{align}
If $N$ is an $R_s$-module, applying $-\otimes_{R_s}N$ to $\psi$ produces an $R_s$-module surjection
\begin{align}\label{psiN}
\psi_N:R_t\otimes_{R_s}N\to N.
\end{align}
It is not hard to check that the isomorphism (\ref{adj3}) sends $g\in {\rm Hom}_{R_t}(M,F^*(N))$ to $\psi_N\circ g\in {\rm Hom}_{R_s}(F_*(M),N).$
\section{The Main Result}
For the rest of this paper $R$ is a ring of polynomials in a finite number of variables over a perfect field $k$ of characteristic $p>0$ and $F:R_s\to R_t$ is the standard Frobenius map, i.e. $F(r)=r^p$. The main result of this paper (Theorem \ref{main}) is an explicit description of the isomorphism (\ref{adj3}) in terms of polynomial generators of $R$. The recent applications \cite{WZ, YZ} crucially depend on this explicit description. We keep the notation of the preceding section.
Let $x_1,\dots, x_n$ be some polynomial generators of $R$ over the field $k$, i.e. $R=k[x_1,\dots,x_n]$. We denote the multi-index $i_1,\dots, i_n$ by $\bar i$. Since $k$ is perfect, $R_t$ is a free $R_s$-module on the $p^n$ monomials $e_{\bar i}\stackrel{\rm def}{=}x_1^{i_1}\cdots x_n^{i_n}$ where $0\leq i_j<p$ for every $j$. If $i_j<p-1$, then $x_je_{\bar i}=e_{\bar i'}$ where $\bar i'$ is the multi-index $i_1,\dots, i_{j-1},i_j+1, i_{j+1},\dots, i_n$. If $i_j=p-1$, then $x_je_{\bar i}=x_j^pe_{\bar i'}$ where $\bar i'$ is the multi-index $i_1,\dots, i_{j-1},0,i_{j+1},\dots,i_n$.
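The freeness of $R_t$ over $R_s$ can be checked mechanically: over $\mathbb{F}_p$ every polynomial splits uniquely as $\sum_{\bar i} e_{\bar i}\,g_{\bar i}^p$ (using $c^p=c$ for $c\in\mathbb{F}_p$). The following sketch is our illustration, not part of the paper; it represents polynomials as dictionaries from exponent tuples to coefficients mod $p$, with $p=3$ and $n=2$.

```python
P = 3  # the characteristic p; we work over the prime field F_p, where c^p = c

def decompose(f):
    """Write f in F_p[x1,x2] as a sum over basis monomials e_r (0 <= r_j < p)
    of e_r * g_r^p, returning {r: g_r}, with g_r identified with an element of R_s."""
    comps = {}
    for exps, c in f.items():
        r = tuple(e % P for e in exps)   # basis monomial e_r
        q = tuple(e // P for e in exps)  # monomial of g_r (p-th root of x^{p*q})
        g = comps.setdefault(r, {})
        g[q] = (g.get(q, 0) + c) % P
    return comps

def pth_power(g):
    """g -> g^p over F_p: multiply every exponent by p (coefficients are fixed)."""
    return {tuple(P * e for e in exps): c for exps, c in g.items()}

def mono_times(r, g):
    """Multiply a polynomial g by the monomial x^r."""
    return {tuple(a + b for a, b in zip(r, exps)): c for exps, c in g.items()}

def add(f, g):
    """Sum of two polynomials over F_p, dropping zero coefficients."""
    out = dict(f)
    for exps, c in g.items():
        out[exps] = (out.get(exps, 0) + c) % P
    return {e: c for e, c in out.items() if c}
```

Recombining the components via $\sum_{\bar r} x^{\bar r} g_{\bar r}^p$ reproduces the input polynomial exactly, and every basis index has all entries below $p$.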
Let $\{f_{\bar i}\in H|0\leq i_j<p\ {\rm for\ every}\ j\}$ be the dual basis of $H$, i.e. $f_{\bar i}(e_{\bar {i'}})=1$ if $\bar i=\bar {i'}$ and $f_{\bar i}(e_{\bar {i'}})=0$ otherwise. If $i_j>0$, then $x_jf_{\bar i}=f_{\bar i'}$ where $\bar i'$ is the multi-index $i_1,\dots, i_{j-1},i_j-1,i_{j+1},\dots, i_n$. If $i_j=0$, then $x_jf_{\bar i}=x_j^pf_{\bar i'}$ where $\bar i'$ is the multi-index $i_1,\dots,i_{j-1},p-1,i_{j+1},\dots,i_n$.
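The two action rules just stated can also be verified by direct computation. The sketch below (our check, with $n=1$ and $p=3$; not from the paper) evaluates $f_i(h)$ through the decomposition $h=\sum_r x^r g_r^p$ and tests both rules on monomials.

```python
P = 3  # characteristic; n = 1, so multi-indices are single exponents

def f_dual(i, h):
    """f_i(h) for h in F_p[x]: decompose h = sum_r x^r * g_r(x)^p and return g_i,
    identifying R_s with F_p[x] via the p-th root of exponents."""
    out = {}
    for e, c in h.items():
        if e % P == i:
            out[e // P] = (out.get(e // P, 0) + c) % P
    return {e: c for e, c in out.items() if c}

def x_times(h):
    """Multiply h by x in R_t; (x.f_i)(h) is then f_i(x_times(h))."""
    return {e + 1: c for e, c in h.items()}

def sx_times(g):
    """Multiply by the element of R_s corresponding to x (an exponent shift in R_s)."""
    return {e + 1: c for e, c in g.items()}
```

The first rule says $(x\cdot f_i)(h)=f_{i-1}(h)$ for $i>0$; the second says $(x\cdot f_0)(h)$ equals $f_{p-1}(h)$ multiplied by the element of $R_s$ corresponding to $x$.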
Denote the multi-index $p-1,\dots, p-1$ by $\overline{p-1}$ and let $\overline{p-1}-\bar i$ be the multi-index $p-1-i_1,\dots, p-1-i_n$.
\begin{proposition}\label{Fphi} {\rm (cf.\ \cite[Remark 3.11]{BSTZ})}
The $R_s$-linear isomorphism $\phi:R_t\to H$ that sends $e_{\bar i}$ to $f_{\overline{p-1}-\bar i}$ is $R_t$-linear.
\end{proposition}
\emph{Proof.} All we have to show is that $\phi(x_je_{\bar i})=x_j\phi(e_{\bar i})$ for all indices $j$ and multi-indices $\bar i$. This is straightforward from the definition of $\phi$ and the above description of the action of $x_j$ on $e_{\bar i}$ and $f_{\bar i}$.\qed
\smallskip
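As a quick illustration of the proposition (a special case worked out here for concreteness), take $p=2$ and $n=1$, so that $R=k[x]$, $e_0=1$, $e_1=x$, and $\phi(1)=f_1$, $\phi(x)=f_0$. Then
\begin{align*}
\phi(x\cdot e_0)&=\phi(e_1)=f_0=xf_1=x\phi(e_0),\\
\phi(x\cdot e_1)&=\phi(x^2\cdot e_0)=x^2\phi(e_0)=x^2f_1=xf_0=x\phi(e_1),
\end{align*}
where the second line uses the $R_s$-linearity of $\phi$ (as $x^2\in F(R_s)$) and the rule $xf_0=x^2f_1$ for the case $i_j=0$.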
Clearly, $F^*(N)=R_t\otimes_{R_s}N=\oplus_{\bar i}(e_{\bar i}\otimes_{R_s}N)$, as $R_s$-modules. Thus every $R_s$-linear map $g:M\to F^*(N)$ has the form $g=\oplus_{\bar i}(e_{\bar i}\otimes_{R_s}g_{\bar i})$ where $g_{\bar i}:M\to N$ are $R_s$-linear maps (i.e. $g_{\bar i}:F_*(M)\to N$ because $M$ with its $R_s$-module structure is $F_*(M)$).
\begin{lemma} \label{F^*}
An $R_s$-linear map $g:M\to F^*(N)$ as above is $R_t$-linear if and only if $g_{\bar i}(-)=g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-))$ for every $\bar i$ (here $e_{\overline{p-1}-\bar i}\in R_t$ acts on $(-)\in F_*(M)$ via the $R_t$-module structure on $F_*(M)$).
\end{lemma}
\emph{Proof.} Assume $g$ is $R_t$-linear. Then $g$ commutes with multiplication by every element of $R_t$ and in particular with multiplication by $e_{\overline{p-1}-\bar i}$. That is $g(e_{\overline{p-1}-\bar i}(-))=e_{\overline{p-1}-\bar i}g(-)$. Since $e_{\overline{p-1}-\bar i}e_{\bar i}=e_{\overline{p-1}}$, the $\overline{p-1}$-component of $e_{\overline{p-1}-\bar i}g(-)$ is $e_{\overline{p-1}}\otimes_{R_s}g_{\bar i}(-)$ while the $\overline{p-1}$-component of $g(e_{\overline{p-1}-\bar i}(-))$ is $e_{\overline{p-1}}\otimes_{R_s}g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-)).$ Since the two $\overline{p-1}$-components are equal, $g_{\bar i}(-)=g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-))$.
Conversely, assume $g_{\bar i}(-)=g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-))$ for every $\bar i$. To show that $g$ is $R_t$-linear all one has to show is that $g$ commutes with the action of every $x_j\in R_t$, i.e. the $\bar i$-components of $g(x_j(-))$ and $x_jg(-)$ are the same for all $\bar i$. If $i_j>0$, then the $\bar i$-component of $x_jg(-)$ is $e_{\bar i}\otimes_{R_s}g_{\bar i'}(-)$ where $\bar i'$ is the index $i_1,\dots,i_{j-1},i_j-1,i_{j+1},\dots,i_n$ while the $\bar i$-component of $g(x_j(-))$ is $e_{\bar i}\otimes_{R_s}g_{\bar i}(x_j(-))$. But the fact that $g_{\bar i}(-)=g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-))$ for every $\bar i$ implies $g_{\bar i'}(-)=g_{\bar i}(x_j(-))$.
If $i_j=0$, then the $\bar i$-component of $x_jg(-)$ is $e_{\bar i''}{\otimes_{R_s}}(_sx_jg_{\bar i''}(-))$ where $_sx_j$ denotes the element of $R_s$ corresponding to $x_j\in R_t$ (i.e. $F(_sx_j)=x_j^p$) and $\bar i''$ is the index $i_1,\dots, i_{j-1},p-1,i_{j+1},\dots, i_n$ while the $\bar i$-component of $g(x_j(-))$ is $g_{\bar i}(x_j(-))$. But the fact that $g_{\bar i}(-)=g_{\overline{p-1}}(e_{\overline{p-1}-\bar i}(-))$ for every $\bar i$ implies $_sx_jg_{\bar i''}(-)=g_{\bar i}(x_j(-))$.\qed
\smallskip
Finally we are ready for the main result of the paper which is the following explicit description of the isomorphism (\ref{adj3}) for the Frobenius map.
\begin{theorem}\label{main}
For every $R_t$-module $M$ and every $R_s$-module $N$ there is an $R_t$-linear isomorphism
\begin{alignat}{2}
{\rm Hom}_{R_s}(F_*(M),N)&\cong& &{\rm Hom}_{R_t}(M,F^*(N))\notag\\
g_{\overline{p-1}}(-)&\leftarrow &&(g=\oplus_{\bar i}(e_{\bar i}\otimes_{R_s}g_{\bar i}(-)))\\
g&\mapsto &&\oplus_{\bar i}(e_{\bar i}\otimes_{R_s}g(e_{\overline{p-1}-\bar i}(-))).
\end{alignat}
\end{theorem}
\emph{Proof.} As is pointed out at the end of the preceding section, the isomorphism (\ref{adj3}) sends $g\in {\rm Hom}_{R_t}(M,F^*(N))$ to $\psi_N\circ g\in {\rm Hom}_R(F_*(M),N).$ It is straightforward to check that with $\phi$ as in Proposition \ref{Fphi} the map $\psi$ of (\ref{psi}) sends $e_{\overline{p-1}}$ to 1. This implies that $\psi_N\circ g=g_{\overline{p-1}}$ and finishes the proof that formula (11) produces the isomorphism of (\ref{adj3}) in one direction. The fact that the other direction of this isomorphism is according to formula (12) has essentially been proven in Lemma \ref{F^*}. \qed
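To make the statement concrete, consider the smallest case $p=2$, $n=1$ (a worked instance added here for illustration). An $R_t$-linear map $g:M\to F^*(N)$ decomposes as $g=1\otimes g_0+x\otimes g_1$ with $g_0,g_1:F_*(M)\to N$, and the isomorphism of Theorem \ref{main} reads
$$
1\otimes g_0(-)+x\otimes g_1(-)\ \longmapsto\ g_1, \qquad
g\ \longmapsto\ \big(m\mapsto 1\otimes g(xm)+x\otimes g(m)\big).
$$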
\section{Potential Applications}
The notion of $F$-finite modules was introduced in \cite{L}. An $F$-finite module is determined by a generating morphism, i.e. an $R$-module homomorphism $\beta:M\to F^*(M)$ where $M$ is a finite $R$-module. For simplicity assume the $R$-module $M$ has finite length, i.e. the dimension of $M$ as a vector space over $k$, which we denote by $d$, is finite. Then the dimension of $F^*(M)$ as a vector space over $k$ equals $p^n\cdot d$. The number $p^n$ can be huge even for quite modest values of $p$ and $n$. Thus the target of $\beta$ may be a huge-dimensional vector space even if $d, p$ and $n$ are fairly small. But the $R$-module $F_*(M)$ has dimension $d$ as a $k$-vector space, hence the map $\tilde\beta:F_*(M)\to M$ that corresponds to $\beta$ under the isomorphism of Theorem \ref{main}, is a map between two $d$-dimensional vector spaces. Huge-dimensional vector spaces do not appear! This should make the map $\tilde\beta$ easier to manage computationally than the map $\beta$. Of course the isomorphism of Theorem \ref{main} means that many properties of $\beta$ could be detected in $\tilde\beta$; for example, $\beta$ is the zero map if and only if $\tilde\beta$ is. Therein lies the potential for using Theorem \ref{main} to make computations more manageable.
The functors ${\rm Ext}^i_R(-, R)$ commute with both $F^*$ and $F_*$. More precisely, for every finitely generated $R_t$-module $M$ and every finitely generated $R_s$-module $N$ there exist functorial $R_t$-module isomorphisms $$\kappa_i:{\rm Ext}^i_{R_t}(F^*(N),R_t)\cong F^*( {\rm Ext}^i_{R_s}(N,R_s))$$$$\lambda_i:F_*({\rm Ext}^i_{R_t}(M,R_t))\cong {\rm Ext}^i_{R_s}(F_*(M),R_s).$$ Indeed, for $i=0$, $M=R_t$, and $N=R_s$, a straightforward composition of the $R_t$-module isomorphism $R_t\otimes_{R_s}R_s\stackrel{r'\otimes r\mapsto r'r^p}{\longrightarrow}R_t$ and the standard $R$-module isomorphisms ${\rm Hom}_{R}(R,R)\cong R$ for $R=R_t, R_s$ produces $\kappa_0$, while an additional $R_t$-module isomorphism ${\rm Hom}_{R_s}(R_t,R_s)=H\cong R_t$ produces $\lambda_0$. This implies that if $-$ stands for a complex of finite free $R$-modules, then ${\rm Hom}_R(-,R)$ commutes with $F^*$ and $F_*$. Taking now finite free resolutions of $M$ and $N$ in the categories of $R_t$- and $R_s$-modules respectively and considering that $F^*$ and $F_*$, being exact, commute with the operation of taking the (co)homology of complexes, we get $\kappa_i$ and $\lambda_i$ for every $i$.
It is straightforward to check that there is a commutative diagram
$$
\begin{CD}
{\rm Hom}_{R_t}(F^*(N), M)@>>> {\rm Hom}_{R_s}(N,F_*(M))\\
@VVV @VVV\\
{\rm Hom}_{R_t}({\rm Ext}^i_{R_t}(M,R_t),{\rm Ext}^i_{R_t}(F^*(N),R_t))@.{\rm Hom}_{R_s}({\rm Ext}^i_{R_s}(F_*(M), R_s),{\rm Ext}^i_{R_s}(N,R_s))\\
@VVV @VVV\\
{\rm Hom}_{R_t}({\rm Ext}^i_{R_t}(M,R_t),F^*({\rm Ext}^i_{R_s}(N,R_s)))@>>>{\rm Hom}_{R_s}(F_*({\rm Ext}^i_{R_t}(M, R_t)),{\rm Ext}^i_{R_s}(N,R_s))
\end{CD}
$$
where the top horizontal map is the isomorphism (\ref{adj4}), the bottom horizontal map is the isomorphism of Theorem \ref{main}, the bottom vertical maps are induced by $\kappa_i$ and $\lambda_i$ and, finally, the top vertical maps are defined by sending every $f\in {\rm Hom}(L,L')$ to the map ${\rm Ext}^i(L', R)\to {\rm Ext}^i(L,R)$ functorially induced by $f$.
In other words, if a pair of maps $F^*(N)\to M$ and $N\to F_*(M)$ correspond to each other under (\ref{adj4}), then the induced maps ${\rm Ext}^i_{R_t}(M,R_t)\to F^*({\rm Ext}^i_{R_s}(N,R_s))$ and $F_*({\rm Ext}^i_{R_t}(M, R_t)) \to {\rm Ext}^i_{R_s}(N,R_s)$ correspond to each other under the isomorphism of Theorem \ref{main}.
Of course there is a similar diagram with the isomorphism of Theorem \ref{main} in the top row; we are not going to use it in the rest of the paper.
\smallskip
Important examples of $F$-finite modules are the local cohomology modules $H^i_I(R)$ of $R$ with support in an ideal $I\subset R$. A generating morphism of $H^i_I(R)$ is the composition $$f:{\rm Ext}^i_R(R/I, R)\to {\rm Ext}^i_R(F^*(R/I), R)\stackrel{\kappa_i}{\cong}F^*({\rm Ext}^i_R(R/I, R))$$
where the first map is induced by the isomorphism $F^*(R/I)\stackrel{r'\otimes\bar r\mapsto r'\bar r^p}{\cong}R/I^{[p]}$ followed by the natural surjection $R/I^{[p]}\to R/I$. The following proposition holds the potential for simplifying computations involving local cohomology modules.
\begin{proposition}
Let $I\subset R$ be an ideal and let the composition $$f:{\rm Ext}^i_R(R/I, R)\to {\rm Ext}^i_R(F^*(R/I), R)\stackrel{\kappa_i}{\cong}F^*({\rm Ext}^i_R(R/I, R))$$ be as above. The map that corresponds to $f$ under the isomorphism of Theorem \ref{main} is the composition $$g:F_*({\rm Ext}^i_R(R/I, R))\stackrel{\lambda_i}{\cong}{\rm Ext}^i_R(F_*(R/I), R)\to {\rm Ext}^i_R(R/I, R)$$ where the second map in the composition is nothing but the map induced on ${\rm Ext}^i_R(-,R)$ by the natural Frobenius map $R/I\stackrel{r\mapsto r^p}{\longrightarrow} F_*(R/I)$.
\end{proposition}
\emph{Proof.} This is immediate from the above commutative diagram considering that the maps $F^*(R/I)\stackrel{r'\otimes r\mapsto r'r^p}{\longrightarrow} R/I$ and $R/I\stackrel{r\mapsto r^p}{\longrightarrow}F_*(R/I)$ correspond to each other under the isomorphism (\ref{adj4}).\qed
\smallskip
In addition to potential use in computation, the material of this section has already been used in a proof of a theoretical result \cite[Section 5]{WZ}.
\bigskip

\noindent\textit{Improved upper bounds in the moving sofa problem} (https://arxiv.org/abs/1706.06630)

\paragraph{Abstract.} The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape of maximal area that can move around a right-angled corner in a hallway of unit width. It is known that a maximal area shape exists, and that its area is at least $2.2195\ldots$ (the area of an explicit construction found by Gerver in 1992) and at most $2\sqrt{2}=2.82\ldots$, with the lower bound being conjectured as the true value. We prove a new and improved upper bound of 2.37. The method involves a computer-assisted proof scheme that can be used to rigorously derive further improved upper bounds that converge to the correct value.

\section{Introduction}
The \textbf{moving sofa problem} is a well-known unsolved problem in geometry, first posed by Leo Moser in 1966 \cite{unsolved-problems, moser}. It asks:
\begin{quote}
\textit{What is the planar shape of maximal area that can be moved around a right-angled corner in a hallway of unit width?}
\end{quote}
We refer to a connected planar shape that can be moved around a corner in a hallway as described in the problem as a \textbf{moving sofa shape}, or simply a \textbf{moving sofa}.
It is known \cite{gerver} that a moving sofa of maximal area exists. The shape of largest area currently known is an explicit construction found by Joseph Gerver in 1992 \cite{gerver} (see also \cite{romik} for a recent perspective on Gerver's results), known as \textbf{Gerver's sofa} and shown in Figure~\ref{fig:gerver}. Its area is \textbf{Gerver's constant}
$$\mu_\textrm{G} = 2.21953166\ldots,$$ an exotic mathematical constant that is defined in terms of a certain system of transcendental equations but which does not seem to be expressible in closed form.
\begin{figure}
\begin{center}
\scalebox{0.85}{\includegraphics{gerversofa.pdf}}
\caption{Gerver's sofa, conjectured to be the solution to the moving sofa problem. Its boundary is made up of 18 curves, each given by a separate analytic formula; the tick marks show the points of transition between different analytic pieces of the boundary.}
\label{fig:gerver}
\end{center}
\end{figure}
Gerver conjectured that $\mu_\textrm{G}$ is the largest possible area for a moving sofa, a possibility supported heuristically by the local-optimality considerations from which his shape was derived.
Gerver's construction provides a lower bound on the maximal area of a moving sofa. In the opposite direction, it was proved by Hammersley \cite{hammersley} in 1968 that a moving sofa cannot have an area larger than $2\sqrt{2}\approx 2.82$. It is helpful to reformulate these results by denoting
$$ \mu_\textrm{MS} = \max \Big\{ \operatorname{area}(S)\,:\, S \textrm{ is a moving sofa shape} \Big\}, $$
the so-called \textbf{moving sofa constant}.\footnote{See Finch's book \cite[Sec.~8.12]{finch}; note that Finch refers to Gerver's constant $\mu_\textrm{G}$ as the ``moving sofa constant,'' but this terminology currently seems unwarranted in the absence of a proof that the two constants are equal.} The above-mentioned results then translate to the statement that
$$ \mu_\textrm{G} \le \mu_\textrm{MS} \le 2\sqrt{2}. $$
The main goal of this paper is to derive improved upper bounds for $\mu_\textrm{MS}$. We prove the following explicit improvement to Hammersley's upper bound from 1968.
\begin{thm}[New area upper bound in the moving sofa problem] \label{thm:new-upperbound}
We have the bound
\begin{equation} \label{eq:upperbound}
\mu_\textrm{MS} \le 2.37.
\end{equation}
\end{thm}
More importantly than the specific bound $2.37$, our approach to proving Theorem~\ref{thm:new-upperbound} involves the design of a computer-assisted proof scheme that can be used to rigorously derive even sharper upper bounds; in fact, our algorithm can produce a sequence of rigorously-certified bounds that converge to the true value $\mu_\textrm{MS}$ (see Theorems~\ref{thm:conv-moving-sofa} and~\ref{thm:asym-sharpness} below). An implementation of the scheme we coded in \texttt{C++} using exact rational arithmetic certifies $2.37$ as a valid upper bound after running for 480 hours on one core of a 2.3 GHz Intel Xeon E5-2630 processor. Weaker bounds that are still stronger than Hammersley's bound can be proved in much less time---for example, a bound of $2.7$ can be proved using less than one minute of processing time.
Our proof scheme is based on the observation that the moving sofa problem, which is an optimization problem in an infinite-dimensional space of shapes, can be relaxed in many ways to arrive at a family of finite-dimensional optimization problems in certain spaces of polygonal shapes. These finite-dimensional optimization problems are amenable to attack using a computer search.
Another of our results establishes new restrictions on a moving sofa shape of largest area. Gerver \cite{gerver} proved that such a largest area shape must undergo rotation by an angle of at least $\pi/3$ as it moves around the corner, and does not need to rotate by an angle greater than $\pi/2$. As explained in the next section, Gerver's argument actually proves a slightly stronger result with $\pi/3$ replaced by the angle $\beta_0 = \sec^{-1}(\mu_\textrm{G}) \approx 63.22^\circ$. We will prove the following improved bound on the angle of rotation of a moving sofa of maximal area.
\begin{thm}[New rotation lower bound in the moving sofa problem] \label{thm:angle-bound}
Any moving sofa shape of largest area must undergo rotation by an angle of at least $\sin^{-1}(84/85) \approx 81.203^\circ$ as it moves around the corner.
\end{thm}
There is no reason to expect this bound to be sharp; in fact, it is natural to conjecture that any largest area moving sofa shape must undergo rotation by an angle of $\pi/2$. As with the case of the bound \eqref{eq:upperbound}, our techniques make it possible in principle to produce further improvements to the rotation lower bound, albeit at a growing cost in computational resources.
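Both angle constants quoted above are easy to reproduce numerically. The sketch below is our check, not part of the paper, and uses Gerver's constant only to the precision quoted earlier.

```python
import math

MU_G = 2.21953166  # Gerver's constant, to the precision quoted above

# beta_0 = sec^{-1}(mu_G) = arccos(1/mu_G), in degrees: approximately 63.22
beta0_deg = math.degrees(math.acos(1.0 / MU_G))

# the rotation lower bound arcsin(84/85), in degrees: approximately 81.203
rot_deg = math.degrees(math.asin(84.0 / 85.0))
```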
The paper is arranged as follows. Section~\ref{sec:theory} below defines the family of finite-dimensional optimization problems majorizing the moving sofa problem and develops the necessary theoretical ideas that set the ground for the computer-assisted proof scheme. In Section~\ref{sec:algorithm} we build on these results and introduce the main algorithm for deriving and certifying improved bounds, then prove its correctness. Section~\ref{sec:numerical} discusses specific numerical examples illustrating the use of the algorithm, leading to a proof of Theorems~\ref{thm:new-upperbound} and~\ref{thm:angle-bound}. The Appendix describes \texttt{SofaBounds}, a software implementation we developed as a companion software application to this paper \cite{sofabounds}.
\paragraph{Acknowledgements.} Yoav Kallus was supported by an Omidyar Fellowship at the Santa Fe Institute. Dan Romik thanks Greg Kuperberg for a key suggestion that was the seed from which Proposition~\ref{prop:sofa-fg-bounds} eventually grew, and John Sullivan, Joel Hass, Jes\'us De Loera, Maria Trnkova and Jamie Haddock for helpful discussions.
\section{A family of geometric optimization problems}
\label{sec:theory}
In this section we define a family of discrete-geometric optimization problems that we will show are in a sense approximate versions of the moving sofa problem for polygonal regions. Specifically, for each member of the family, the goal of the optimization problem will be to maximize the area of the intersection of translates of a certain finite set of polygonal regions in $\mathbb{R}^2$. It is worth noting that such optimization problems have been considered more generally in the computational geometry literature; see, e.g., \cite{harpeled-roy, mount-silverman}.
We start with a few definitions. Set
\begin{align*}
H &= \mathbb{R} \times [0,1], \\
V &= [0,1] \times \mathbb{R}, \\
L_{\textrm{horiz}} &= (-\infty,1]\times[0,1], \\
L_{\textrm{vert}} &= [0,1]\times (-\infty,1], \\
L_0 &= L_{\textrm{horiz}} \cup L_{\textrm{vert}}.
\end{align*}
For an angle $\alpha \in [0,\pi/2]$ and a vector $\mathbf{u}=(u_1,u_2)\in\mathbb{R}^2$, denote
\begin{align*}
L_\alpha(\mathbf{u}) &=
\Big\{ (x,y)\in\mathbb{R}^2\,:\,
u_1 \le x\cos \alpha + y\sin \alpha \le u_1+1
\\& \qquad\qquad\qquad\qquad\qquad\qquad \textrm{ and \ } -x\sin\alpha + y \cos\alpha \le u_2+1
\Big\}
\\ & \quad \cup
\Big\{ (x,y)\in\mathbb{R}^2\,:\,
x\cos \alpha + y\sin \alpha \le u_1+1
\\ & \qquad\qquad\qquad\qquad\qquad\qquad
\textrm{ and } u_2\le -x\sin\alpha + y \cos\alpha \le u_2+1
\Big\}.
\end{align*}
For angles $\beta_1, \beta_2$, denote
\begin{align*}
B(\beta_1,\beta_2) &=
\Big\{ (x,y)\in\mathbb{R}^2\,:\,
0 \le x\cos \beta_1 + y\sin \beta_1
\\& \qquad\qquad\qquad\qquad\qquad\qquad \textrm{ and \ } x\cos \beta_2 + y\sin \beta_2 \le 1
\Big\}
\\ & \quad \cup
\Big\{ (x,y)\in\mathbb{R}^2\,:\,
x\cos \beta_1 + y\sin \beta_1 \le 1
\\& \qquad\qquad\qquad\qquad\qquad\qquad \textrm{ and \ } 0 \le x\cos \beta_2 + y\sin \beta_2
\Big\}.
\end{align*}
Geometrically, $L_\alpha(\mathbf{u})$ is the $L$-shaped hallway $L_0$ translated by the vector~$\mathbf{u}$ and then rotated around the origin by an angle of $\alpha$; and $B(\beta_1,\beta_2)$, which we nickname a ``butterfly set,'' is a set that contains a rotation of the vertical strip $V$ around the origin by an angle $\beta$ for all $\beta\in[\beta_1,\beta_2]$. See Fig.~\ref{fig:geometric-sets}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\scalebox{0.5}{\includegraphics{rot-trans-L.pdf}}
& \scalebox{0.5}{\includegraphics{butterfly.pdf}}
\\
(a) & (b)
\end{tabular}
\caption{
(a) The rotated and translated $L$-shaped corridor $L_\alpha(u_1,u_2)$. (b) The ``butterfly set'' $B(\beta_1,\beta_2)$.
}
\label{fig:geometric-sets}
\end{center}
\end{figure}
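These definitions can be exercised numerically. The sketch below (our illustration, not the paper's \texttt{SofaBounds} software) clips a long rectangle standing in for the strip $H$ against the degenerate butterfly $B(\beta,\beta)$, which is exactly $V$ rotated by $\beta$, and recovers the area $\sec\beta$ of the resulting parallelogram.

```python
import math

def clip(poly, a, b, c):
    """One Sutherland-Hodgman step: keep the part of a convex polygon
    satisfying a*x + b*y <= c."""
    out = []
    for i in range(len(poly)):
        (px, py), (qx, qy) = poly[i], poly[(i + 1) % len(poly)]
        fp, fq = a * px + b * py - c, a * qx + b * qy - c
        if fp <= 0:
            out.append((px, py))
        if (fp < 0 < fq) or (fq < 0 < fp):
            t = fp / (fp - fq)
            out.append((px + t * (qx - px), py + t * (qy - py)))
    return out

def area(poly):
    """Polygon area by the shoelace formula."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def strip_area(beta, M=100.0):
    """Area of H cap B(beta, beta): clip [-M, M] x [0, 1] (a stand-in for H)
    by the two half-planes 0 <= x cos(beta) + y sin(beta) <= 1."""
    c, s = math.cos(beta), math.sin(beta)
    poly = [(-M, 0.0), (M, 0.0), (M, 1.0), (-M, 1.0)]
    poly = clip(poly, -c, -s, 0.0)  # 0 <= c*x + s*y
    poly = clip(poly, c, s, 1.0)    # c*x + s*y <= 1
    return area(poly)
```

Note that for $\beta_1<\beta_2$ the set $B(\beta_1,\beta_2)$ is a union of two such regions and need not be convex, so a single convex-clipping pass of this kind no longer suffices on its own.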
Next, let $\lambda$ denote the area measure on $\mathbb{R}^2$ and let $\lambda^*(X)$ denote the maximal area of any connected component of $X\subset\mathbb{R}^2$.
Given a vector $\boldsymbol{\alpha} = (\alpha_1,\ldots,\alpha_k)$ of angles $0< \alpha_1<\ldots < \alpha_k < \pi/2$ and two additional angles $\beta_1,\beta_2 \in (\alpha_k,\pi/2]$ with $\beta_1\le\beta_2$, define
\begin{align}
g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}_1,\ldots,\mathbf{u}_k) &= \lambda^*\left(
H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{u}_j) \cap B(\beta_1,\beta_2)
\right) \quad (\mathbf{u}_1,\ldots,\mathbf{u}_k \in \mathbb{R}^2), \label{eq:def-little-g} \\[5pt]
\label{eq:def-G}
G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}
&= \sup \left\{
g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}_1,\ldots,\mathbf{u}_k)
\,:\,
\mathbf{u}_1,\ldots,\mathbf{u}_k \in \mathbb{R}^{2}
\right\}.
\end{align}
An important special case is $G_{\boldsymbol{\alpha}}^{\pi/2,\pi/2}$, which we denote simply as $G_{\boldsymbol{\alpha}}$. Note that $B(\beta_1,\pi/2)\cap H = H$, so in that case the inclusion of $B(\beta_1,\beta_2)$ in \eqref{eq:def-little-g} is superfluous.
The problem of computing $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ is an optimization problem in $\mathbb{R}^{2k}$. The following lemma shows that the optimization can be performed on a compact subset of $\mathbb{R}^{2k}$ instead.
\begin{lem}\label{lem:supmax}
There exists a box $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}=[a_1,b_1]\times\ldots \times [a_{2k},b_{2k}] \subset \mathbb{R}^{2k}$, with the values of $a_i,b_i$ being explicitly computable functions of $\boldsymbol{\alpha}, \beta_1, \beta_2$, such that
\begin{equation} \label{eq:supmax}
G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} =
\max \left\{
g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})
\,:\,
\mathbf{u} \in \Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}
\right\}.
\end{equation}
\end{lem}
\begin{proof}
We will show that any value of $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ attained for some $\mathbf{u}\in\mathbb{R}^{2k}$ is matched by a value attained inside a sufficiently large box. This will establish that $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ is bounded from above; the fact that it attains its maximum follows immediately, since $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ is easily seen to be an upper semicontinuous function.
Start by observing that for every interval $[x_1,x_2]$ and $0<\alpha<\pi/2$, there are intervals $I$ and $J$ such
that if $(u,v)\in \mathbb{R}^2\setminus (I\times J)$, the set $\big([x_1,x_2]\times[0,1]\big)\cap L_\alpha(u,v)$ is either empty or is identical to $\big([x_1,x_2]\times[0,1]\big)\cap L_\alpha(u',v')$
for some $(u',v')\in I\times J$. Indeed, it can be checked that this is correct with the choices \begin{align*}
I&=[x_1 \cos\alpha-1,x_2 \cos\alpha+\sin\alpha], \\
J&=[-x_2 \sin\alpha-1,-x_1 \sin\alpha+\cos\alpha].
\end{align*}
We now divide the analysis into two cases. First, if $\beta_2<\pi/2$, then $H\cap B(\beta_1,\beta_2)\subseteq[-\tan \beta_2,\sec \beta_2]\times[0,1]$.
Therefore, if we define $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}=I_1\times J_1 \times I_2 \times J_2 \times \cdots\times I_k \times J_k$, where for each $1\le i\le k$, $I_i$ and $J_i$ are intervals $I,J$ as described in the above observation as applied to the angle $\alpha=\alpha_i$, then $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(u_1,\ldots,u_{2k})$ is guaranteed to
attain its maximum on $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$, since any value attained outside $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ is either zero or matched by a value attained inside~$\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$.
Second, if $\beta_2=\pi/2$, then $H\cap B(\beta_1,\beta_2) = H$.
The optimization objective function $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}_1,\ldots,\mathbf{u}_k)$ is invariant under translating all the rotated $L$-shaped hallways horizontally by the same amount (which corresponds to translating each variable $\mathbf{u}_j$ in the direction of the vector $(\cos\alpha_j, -\sin\alpha_j)$).
Therefore, fixing an arbitrary $1\le j\le k$, any value of $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}_1,\ldots,\mathbf{u}_k)$ attained on $\mathbb{R}^{2k}$ is also attained at some point satisfying $\mathbf{u}_j=(0,u_{j,2})$.
Furthermore, we can constrain $u_{j,2}$ as follows: first, when $u_{j,2}<-\tan\alpha_j-1$, then $L_{\alpha_j}(0,u_{j,2})\cap H$ is empty.
Second, when $u_{j,2}>\sec\alpha_j$, then $L_{\alpha_j}(0,u_{j,2})\cap H$ is the union of two disconnected components, one of which is a translation
of $\rotmat{\alpha_j}(H)\cap H$ and the other is a translation of $\rotmat{\alpha_j}(V)\cap H$.
Since the largest connected component of $H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{u}_j)$ is contained in one of these two rhombuses, and
since the translation of the rhombus does not affect the maximum attained area, we see that any objective value attained with $\mathbf{u}_j=(0,u_{j,2})$,
where $u_{j,2}>\sec\alpha_j$, can also be attained with $\mathbf{u}_j=(0,\sec\alpha_j)$. So, we may restrict $\mathbf{u}_j\in I_j\times J_j$, where
$I_j = \{0\}$ and $J_j=[-\tan\alpha_j-1,\sec\alpha_j]$.
Finally, when $\mathbf{u}_j\in I_j\times J_j$, we have $H\cap L_{\alpha_j}(\mathbf{u}_j)\subseteq[-\csc\alpha_j,\sec\alpha_j]\times[0,1]$, so we can repeat
a procedure similar to the one used in the case $\beta_2<\pi/2$ above to construct intervals $I_i$ and $J_i$ for all $i\neq j$ to ensure that \eqref{eq:supmax} is satisfied.
\end{proof}
We now wish to show that the function $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ relates to the problem of finding upper bounds in the moving sofa problem. The idea is as follows. Consider a sofa shape $S$ that moves around the corner while rotating continuously and monotonically (in a clockwise direction, in our coordinate system) between the angles $0$ and $\beta \in [0,\pi/2]$. A key fact proved by Gerver \cite[Th.~1]{gerver} is that in the moving sofa problem it is enough to consider shapes being moved in this fashion. By changing our frame of reference to one in which the shape stays fixed and the $L$-shaped hallway $L_0$ is dragged around the shape while being rotated, we see (as discussed in \cite{gerver, romik}) that $S$ must be contained in the intersection
\begin{equation} \label{eq:sx-intersections}
S_{\mathbf{x}} = L_{\textrm{horiz}} \cap \bigcap_{0\le t\le \beta} L_t(\mathbf{x}(t))
\cap \Big(\mathbf{x}(\beta)+\rotmat{\beta}(L_{\textrm{vert}}) \Big),
\end{equation}
where $\mathbf{x}:[0,\beta]\to\mathbb{R}^2$ is a continuous path satisfying $\mathbf{x}(0)=(0,0)$ that encodes the path by which the hallway is dragged as it is being rotated, and where $\rotmat{\beta}(L_{\textrm{vert}})$ denotes $L_{\textrm{vert}}$ rotated by an angle of $\beta$ around $(0,0)$ (more generally, here and below we use the notation $\rotmat{\beta}(\cdot)$ for a rotation operator by an angle of $\beta$ around the origin). We refer to such a path as a \textbf{rotation path}, or a \textbf{$\beta$-rotation path} when we wish to emphasize the dependence on $\beta$. Thus,
the area of $S$ is at most $\lambda^*(S_\mathbf{x})$, the maximal area of a connected component of $S_\mathbf{x}$, and conversely, a maximal area connected component of $S_\mathbf{x}$ is a valid moving sofa shape of area $\lambda^*(S_\mathbf{x})$.
Gerver's result therefore implies that
\begin{equation} \label{eq:sofaconst-characterization}
\mu_\textrm{MS} = \sup \Big\{ \lambda^*(S_\mathbf{x}) \,:\,
\mathbf{x}\textrm{ is a $\beta$-rotation path for some }\beta\in[0,\pi/2]
\Big\}.
\end{equation}
It is also convenient to define
$$
\mu_*(\beta) = \sup \Big\{ \lambda^*(S_\mathbf{x}) \,:\,
\mathbf{x}\textrm{ is a $\beta$-rotation path}
\Big\}
\qquad (0 \le \beta \le \pi/2),
$$
so that we have the relation
\begin{equation} \label{eq:sofaconst-beta}
\mu_\textrm{MS} = \sup_{0 < \beta \le \pi/2} \mu_*(\beta).
\end{equation}
Moreover, as Gerver pointed out in his paper, $\mu_*(\beta)$ is bounded from above by the area of the intersection of the horizontal strip $H$ and the rotation of the vertical strip $V$ by an angle $\beta$, which is equal to $\sec(\beta)$. Since $\mu_\textrm{MS} \ge \mu_\textrm{G}$, and $\sec(\beta)\ge \mu_\textrm{G}$ if and only if $\beta\in[\beta_0,\pi/2]$, where we define $\beta_0 = \sec^{-1}(\mu_\textrm{G}) \approx 63.22^\circ$, we see that in fact
\begin{equation} \label{eq:sofaconst-beta0}
\mu_\textrm{MS} = \sup_{\beta_0 \le \beta \le \pi/2} \mu_*(\beta),
\end{equation}
and furthermore, $\mu_\textrm{MS} > \mu_*(\beta)$ for any $0<\beta<\beta_0$, i.e., any moving sofa of maximal area has to rotate by an angle of at least $\beta_0$.
(Gerver applied this argument to claim a slightly weaker version of this result in which the value of $\beta_0$ is taken as $\pi/3 = \sec^{-1}(2)$; see \cite[p.~271]{gerver}).
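As a quick numerical illustration (not needed for the argument), the value of $\beta_0$ quoted above can be recovered from the truncated value of Gerver's constant:

```python
import math

mu_G = 2.2195                   # Gerver's constant mu_G = 2.2195..., truncated
beta_0 = math.acos(1.0 / mu_G)  # arcsec(x) = arccos(1/x)
print(round(math.degrees(beta_0), 2))  # approximately 63.22
```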
Note that it has not been proved, but seems natural to conjecture, that $\mu_\textrm{MS} = \mu_*(\pi/2)$---an assertion that would follow from Gerver's conjecture that the shape he discovered is the moving sofa shape of largest area, but may well be true even if Gerver's conjecture is false.
The relationship between our family of finite-dimensional optimization problems and the moving sofa problem is made apparent by the following result.
\begin{prop}
\label{prop:sofa-fg-bounds}
(i) For any $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k)$ and $\beta$ with $0<\alpha_1<\ldots<\alpha_k \le \beta\le\pi/2$,
we have
\begin{equation}
\mu_*(\beta) \le G_{\boldsymbol{\alpha}}. \label{eq:sofa-fg-bound1}
\end{equation}
\noindent (ii) For any $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k)$ with $0<\alpha_1<\ldots<\alpha_k \le \beta_0$, we have
\begin{equation}
\mu_\textrm{MS} \le G_{\boldsymbol{\alpha}}. \label{eq:sofa-fg-bound2}
\end{equation}
\noindent (iii)
For any $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k)$ and $\beta_1,\beta_2$ with $0<\alpha_1<\ldots<\alpha_k \le \beta_1 < \beta_2 \le \pi/2$, we have
\begin{equation}
\sup_{\beta\in[\beta_1,\beta_2]} \mu_*(\beta) \le G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}.
\label{eq:sofa-fg-bound3}
\end{equation}
\end{prop}
\begin{proof}
Start by noting that, under the assumption that $0 < \alpha_1 <\ldots < \alpha_k \le \beta \le \pi/2$, if $\mathbf{x}$ is a $\beta$-rotation path then the values $\mathbf{x}(\alpha_1),\ldots,\mathbf{x}(\alpha_k)$ may potentially range over an arbitrary $k$-tuple of vectors in $\mathbb{R}^2$. It then follows that
\begin{align*}
\mu_*(\beta) &= \sup \Big\{ \lambda^*(S_\mathbf{x}) \,:\,
\mathbf{x}\textrm{ is a $\beta$-rotation path}
\Big\}
\nonumber \\ &=
\sup \left\{ \lambda^*\left(L_{\textrm{horiz}} \cap \bigcap_{0\le t\le \beta} L_t(\mathbf{x}(t))
\cap \Big(\mathbf{x}(\beta)+\rotmat{\beta}(L_{\textrm{vert}}) \Big)\right) \,:\,
\right. \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \ \ \left. \mathbf{x}\textrm{ is a $\beta$-rotation path}
\vphantom{\lambda^*\left(L_{\textrm{horiz}} \cap \bigcap_{0\le t\le \beta} L_t(\mathbf{x}(t))
\cap \Big(\mathbf{x}(\beta)+\rotmat{\beta}(L_{\textrm{vert}}) \Big)\right)}
\right\}
\nonumber \\ &\le
\sup \left\{ \lambda^*\left(H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{x}(\alpha_j))
\right) \,:\, \mathbf{x}\textrm{ is a $\beta$-rotation path}
\right\}
\nonumber \\ &=
\sup \left\{ \lambda^*\left(H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{x}_j)
\right) \,:\, \mathbf{x}_1,\ldots,\mathbf{x}_k\in\mathbb{R}^2
\right\}
= G_{\boldsymbol{\alpha}}.
\end{align*}
This proves claim (i) of the Proposition.
If one further assumes that $\alpha_k\le \beta_0$, \eqref{eq:sofa-fg-bound2} also follows immediately using \eqref{eq:sofaconst-beta0}, proving claim (ii).
The proof of claim (iii) follows a variant of the same argument used above; first, note that we may assume that $\beta_2<\pi/2$, since the case $\beta_2=\pi/2$ already follows from part (i) of the Proposition. Next, observe that
\begin{align*}
\mu_*(\beta) &=
\sup \left\{ \lambda^*\left(L_{\textrm{horiz}} \cap \bigcap_{0\le t\le \beta} L_t(\mathbf{x}(t))
\cap \Big(\mathbf{x}(\beta)+\rotmat{\beta}(L_{\textrm{vert}}) \Big)\right) \,:\,
\right. \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \ \ \left. \mathbf{x}\textrm{ is a $\beta$-rotation path}
\vphantom{\lambda^*\left(L_{\textrm{horiz}} \cap \bigcap_{0\le t\le \beta} L_t(\mathbf{x}(t))
\cap \Big(\mathbf{x}(\beta)+\rotmat{\beta}(L_{\textrm{vert}}) \Big)\right)}
\right\}
\\
& \le
\sup_{\mathbf{x}_1,\ldots,\mathbf{x}_{k+1}\in\mathbb{R}^2} \left[ \lambda^*\left(H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{x}_j)
\cap \Big(\mathbf{x}_{k+1}+\rotmat{\beta}(V) \Big)\right) \right]
\\
& =
\sup_{\mathbf{y}_1,\ldots,\mathbf{y}_{k}\in\mathbb{R}^2} \left[ \lambda^*\left(H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{y}_j)
\cap \rotmat{\beta}(V) \right) \right],
\end{align*}
where the last equality follows by expressing $\mathbf{x}_{k+1}$ in the form $\mathbf{x}_{k+1}=a (1,0)+b (-\sin \beta,\cos \beta)$,
making the substitution $\mathbf{x}_j = \mathbf{y}_j+a(1,0)$ ($1\le j\le k$), and using the facts that $H+a(1,0)=H$ and $\rotmat{\beta}(V)+b(-\sin \beta,\cos\beta) = \rotmat{\beta}(V)$. Finally, as noted after the definition of $B(\beta_1,\beta_2)$, this set has the property that if
$\beta \in [\beta_1,\beta_2]$ then
$\rotmat{\beta}(V) \subset B(\beta_1,\beta_2)$. We therefore get for such $\beta$ that
\begin{align*}
\mu_*(\beta) & \le
\sup_{\mathbf{y}_1,\ldots,\mathbf{y}_{k}\in\mathbb{R}^2} \left[ \lambda^*\left(H \cap \bigcap_{j=1}^k L_{\alpha_j}(\mathbf{y}_j)
\cap B(\beta_1,\beta_2) \right) \right] = G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2},
\end{align*}
which finishes the proof.
\end{proof}
\paragraph{Example.} In the case of a vector $\boldsymbol{\alpha}=(\alpha)$ with a single angle $0<\alpha<\pi/2$, a simple calculation, which we omit, shows that
\begin{equation}
\label{eq:g-alpha-explicit}
G_{(\alpha)} = \sec\alpha+\csc\alpha.
\end{equation}
Taking $\alpha=\pi/4$ and using Proposition~\ref{prop:sofa-fg-bounds}(ii), we get the result that $\mu_\textrm{MS}\le 2\sqrt{2}$, which is precisely Hammersley's upper bound for $\mu_\textrm{MS}$ mentioned in the introduction (indeed, this application of the Proposition is essentially Hammersley's proof rewritten in our notation).
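The closed form \eqref{eq:g-alpha-explicit} and Hammersley's bound are easy to check numerically; the following sketch confirms on a fine grid that $\alpha=\pi/4$ minimizes $\sec\alpha+\csc\alpha$ and recovers the value $2\sqrt{2}$:

```python
import math

def G_single(alpha):
    """The closed form G_{(alpha)} = sec(alpha) + csc(alpha)."""
    return 1.0 / math.cos(alpha) + 1.0 / math.sin(alpha)

# Hammersley's bound: the minimum over alpha in (0, pi/2) is attained at
# alpha = pi/4, where sec + csc = 2*sqrt(2) = 2.8284...
grid = [j * math.pi / 2000.0 for j in range(1, 1000)]
assert abs(G_single(math.pi / 4) - 2 * math.sqrt(2)) < 1e-12
assert min(G_single(a) for a in grid) >= G_single(math.pi / 4) - 1e-9
```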
\bigskip
We conclude this section with a result that makes precise the notion that the optimization problems involved in the definition of $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ are finite-dimensional approximations to the (infinite-dimensional) optimization problem that is the moving sofa problem.
\begin{thm}[Convergence to the moving sofa problem]
\label{thm:conv-moving-sofa}
Let
\begin{align*}
\boldsymbol{\alpha}(n,k) &= (\tfrac1n\tfrac\pi2,\tfrac2n\tfrac\pi2,\tfrac3n\tfrac\pi2,\ldots,\tfrac{n-k-1}{n}\tfrac\pi2)\text, \\
\gamma_1(n,k)&=\tfrac{n-k}{n}\tfrac\pi2, \\
\gamma_2(n,k)&=\tfrac{n-k+1}{n}\tfrac\pi2,
\end{align*}
and let
\begin{equation} \label{eq:def-wn}
W_n = \max_{k=1,\ldots,\lceil n/3\rceil} G_{\boldsymbol{\alpha}(n,k)}^{\gamma_1(n,k),\gamma_2(n,k)}\text.
\end{equation}
Then $\lim_{n\to\infty} W_n = \mu_\textrm{MS}$.
\end{thm}
\begin{proof}
For each $n\ge 6$, denote by $k^*_n$ the smallest value of $k$ for which the maximum in the definition of $W_n$ is attained, let $\beta_n' = \gamma_1(n,k_n^*)$,
and let $\beta_n''=\gamma_2(n,k_n^*)$.
There is a subsequence of $W_n$ converging to its limit superior, and along it the values of $\beta_n'$ have an accumulation point.
We may therefore fix indices $n_m$ of a subsequence of $W_n$ converging to its limit superior such that $\beta_{n_m}'$ (and therefore also $\beta_{n_m}''$) converges to some limiting angle $\beta \in [\pi/3,\pi/2]$.
Let
$\mathbf{u}^{(n)}=\left(u_1^{(n)},\ldots,u_{2(n-{k^*_n}-1)}^{(n)}\right)$ denote a point in
$\Omega_{\boldsymbol{\alpha}(n,k^*_n)}^{\beta'_n,\beta''_n}$
where $g_{\boldsymbol{\alpha}(n,{k^*_n})}^{\beta_n',\beta_n''}$ attains its maximum value, which (it is immediate from the definitions) is equal to $G_{\boldsymbol{\alpha}(n,{k^*_n})}^{\beta_n',\beta_n''}=W_n$. Moreover, as more than one point may attain this value, let $\mathbf{u}^{(n)}$ be chosen to be minimal under the coordinatewise partial order with respect to this property.
Let $P_n$ be a largest area connected component of
$$H\cap \bigcap_{j=1}^{n-{k^*_n}-1} L_{j\pi/2n}\left(u^{(n)}_{2j-1},u^{(n)}_{2j}\right)\cap B\left(\beta_n',\beta_n''\right).$$
Again by the definitions, $\lambda(P_n)=W_n$.
Note that the diameter of $P_n$ is bounded from above by a universal constant; this is easy to check and is left as an exercise.
Now, of course $P_n$ is not a moving sofa shape, but we will show that it approximates one along a suitable subsequence. To this end, define functions $U^{(n)}:[0,\beta]\to \mathbb{R}$, $V^{(n)}:[0,\beta]\to \mathbb{R}$ by
\begin{align*}
U^{(n)}(t) &= \max_{(x,y)\in P_n} \left( x\cos t+y\sin t - 1 \right), \\
V^{(n)}(t) &= \max_{(x,y)\in P_n} \left( -x\sin t+y\cos t - 1 \right).
\end{align*}
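Since each $P_n$ is a polygon and the maximized expressions are linear in $(x,y)$, these maxima are attained at vertices of $P_n$; for a polygon given by its vertex list, the two functions reduce to maxima over finitely many points (a sketch with illustrative names):

```python
import math

def U_support(vertices, t):
    # U(t) = max over the polygon of x*cos(t) + y*sin(t) - 1; a linear
    # functional on a polygon attains its maximum at a vertex
    return max(x * math.cos(t) + y * math.sin(t) for x, y in vertices) - 1.0

def V_support(vertices, t):
    # V(t) = max over the polygon of -x*sin(t) + y*cos(t) - 1
    return max(-x * math.sin(t) + y * math.cos(t) for x, y in vertices) - 1.0

# For the unit square [0,1]^2, both values vanish at t = 0, consistent
# with a shape touching the walls of the horizontal hallway
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert abs(U_support(square, 0.0)) < 1e-12
assert abs(V_support(square, 0.0)) < 1e-12
```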
We note that $U^{(n)}(t)$ and $V^{(n)}(t)$ have the following properties: first, they are Lipschitz-continuous with a uniform (independent of $n$) Lipschitz constant; see pp.\ 269--270 of Gerver's paper \cite{gerver} for the proof, which uses the fact that the diameters of $P_n$ are bounded.
Second, the fact that $P_n \subset H$ implies that
$\displaystyle V^{(n)}(0) = \max_{(x,y)\in P_n} (y-1) \le 0$. This in turn implies that
\begin{align}
P_n &\subseteq
(-\infty,U^{(n)}(0)+1] \times [0,V^{(n)}(0)+1]
\nonumber \\ & \subseteq
(-\infty,U^{(n)}(0)+1] \times [V^{(n)}(0),V^{(n)}(0)+1] \nonumber \\&=
(U^{(n)}(0),V^{(n)}(0)) + L_{\textrm{horiz}}.
\label{eq:third-observation}
\end{align}
Third, we have
$$ \left(U^{(n)}(j \pi/2n), V^{(n)}(j \pi/2n) \right) = \left(u^{(n)}_{2j-1}, u^{(n)}_{2j} \right) $$
for all $j=1,\ldots,n-k^*_n-1$; that is, $U^{(n)}(t)$ and $V^{(n)}(t)$ continuously interpolate the odd and even (respectively) coordinates of the vector $\mathbf{u}^{(n)}$. Indeed, the relation $P_n \subset L_{j\pi/2n}\left(u^{(n)}_{2j-1},u^{(n)}_{2j}\right)$ implies trivially that
$$U^{(n)}(j \pi/2n) \le u^{(n)}_{2j-1} \ \ \textrm{ and }\ \
V^{(n)}(j \pi/2n) \le u^{(n)}_{2j}.$$
However, if we had a strict inequality $U^{(n)}(j \pi/2n) < u^{(n)}_{2j-1}$ (respectively, $V^{(n)}(j \pi/2n) < u^{(n)}_{2j}$), that would imply that
replacing $L_{j\pi/2n}\left(u^{(n)}_{2j-1},u^{(n)}_{2j}\right)$ by $L_{j\pi/2n}\left(u^{(n)}_{2j-1}-\epsilon,u^{(n)}_{2j}\right)$ (respectively, $L_{j\pi/2n}\left(u^{(n)}_{2j-1},u^{(n)}_{2j}-\epsilon\right)$) for some small positive $\epsilon$ in the definition of $P_n$ would increase its area slightly, in contradiction to the maximality property defining $\mathbf{u}^{(n)}$.
We now define a smoothed version $T_n$ of the polygon $P_n$ by letting
\begin{align*}
T_n =& P_n\cap\left(L_{\textrm{horiz}}+(U^{(n)}(0),V^{(n)}(0))\right)
\cap\bigcap_{0\le t\le\beta} L_t(U^{(n)}(t),V^{(n)}(t))\\
&\cap \left( \rotmat{\beta}\Big(L_{\textrm{vert}}\Big)+(U^{(n)}(\beta),V^{(n)}(\beta)) \right)
\\ =&
P_n\cap\bigcap_{0\le t\le\beta} L_t(U^{(n)}(t),V^{(n)}(t))\cap \left( \rotmat{\beta}\Big(L_{\textrm{vert}}\Big)+(U^{(n)}(\beta),V^{(n)}(\beta)) \right),
\end{align*}
where the first equality sign is a definition, and the second equality follows from \eqref{eq:third-observation}.
We claim that the Hausdorff distance between the sets $T_{n_m}$ and $P_{n_m}$ goes to $0$ as $m\to\infty$.
To see this, let $(x,y)\in P_{n_m}\setminus T_{n_m}$.
From the fact that $(x,y)\in P_{n_m}$ we have that
\begin{equation}
\label{eq:ineqPnm1}
\begin{aligned}
x\cos t + y\sin t &\ge U^{(n_m)}(t) \ \textrm{ or } \
-x\sin t + y\cos t \ge V^{(n_m)}(t)
\end{aligned}
\end{equation}
for all $t = j\pi/2n_m$, where $j=1,\ldots,n_m-k_{n_m}^*-1$. Moreover, we have $y>V^{(n_m)}(0)$ and
\begin{equation}
\label{eq:ineqPnm2}
x\cos t + y\sin t \ge U^{(n_m)}(t)
\end{equation}
for $t = \beta'_{n_m} = (n_m - k^*_{n_m})\pi/2n_m$.
We want to show that there exists $\delta_m\to 0$ such that
\begin{equation}
\begin{aligned}
\label{eq:ineqTnm1}
x\cos t' + (y+\delta_m)\sin t' &\ge U^{(n_m)}(t') \ \textrm{ or } \\
-x\sin t' + (y+\delta_m)\cos t' &\ge V^{(n_m)}(t')
\end{aligned}
\end{equation}
for all $0<t'<\beta$ and
\begin{equation}
\label{eq:ineqTnm2}
x\cos \beta + (y+\delta_m)\sin \beta \ge U^{(n_m)}(\beta)\text.
\end{equation}
We claim that $\delta_m = C((1/n_m) + |\beta-\beta'_{n_m}|)^{1/2}$ suffices, where $C$ is some constant.
First, if $\beta<\pi/2$, then for $t'\in[(1/n_m)^{1/2},\beta]$, we have \eqref{eq:ineqTnm1}
from the uniform Lipschitz continuity of $U^{(n_m)}(t')$, $V^{(n_m)}(t')$, and the other terms as functions of $t'$
(recall $x$ and $y$ are uniformly bounded), from the fact that we have \eqref{eq:ineqPnm1} for some $t$ with
$|t-t'|<((1/n_m) + |\beta-\beta'_{n_m}|)$, and from $|\sin t'|, |\cos t'| > \tfrac12((1/n_m) + |\beta-\beta'_{n_m}|)^{1/2}$.
For $t'<(1/n_m)^{1/2}$, the fact that $y>V^{(n_m)}(0)$ and Lipschitz continuity suffice to give the second clause of \eqref{eq:ineqTnm1}.
Finally, \eqref{eq:ineqTnm2} is satisfied due to Lipschitz continuity and the inequality \eqref{eq:ineqPnm2}.
The case of $\beta=\pi/2$ can be worked out similarly.
Therefore, for every $(x,y)\in P_{n_m}\setminus T_{n_m}$ we can construct a point $(x,y')\in T_{n_m}$ with $|y'-y|\le \delta_m$, where $\delta_m\to0$; since also $T_{n_m}\subseteq P_{n_m}$ by construction, this proves the claim about the Hausdorff distance.
We now use the fact that the vector-valued function $(U^{(n_m)}(t),V^{(n_m)}(t))$ is uniformly Lipschitz to conclude using the Arzel\`a-Ascoli theorem that
it has a subsequence (which we still denote by $n_m$, to avoid clutter) such that the ``anchored'' version of the function $(U^{(n_m)}(t),V^{(n_m)}(t))$, namely
$$(U^{(n_m)}(t)-U^{(n_m)}(0), V^{(n_m)}(t)-V^{(n_m)}(0))$$
converges in the supremum norm to some limiting function
$\mathbf{x}:[0,\beta]\to\mathbb{R}^2$, with the same Lipschitz constant, which satisfies $\mathbf{x}(0)=(0,0)$; that is, the limiting function $\mathbf{x}$ is a $\beta$-rotation path. Now let
\begin{equation*}
\begin{aligned}
T_\infty =& L_{\textrm{horiz}}
\cap\bigcap_{0\le t\le\beta} L_t(\mathbf{x}(t))
\cap \left(\rotmat{\beta}\left(L_{\textrm{vert}}\right)+\mathbf{x}(\beta)\right).
\end{aligned}
\end{equation*}
Since the Hausdorff distances of $P_{n_m}$ to $T_{n_m}$ and of $T_{n_m}-(U^{(n_m)}(0),V^{(n_m)}(0))$ to $T_\infty\cap \left(T_{n_m}-(U^{(n_m)}(0),V^{(n_m)}(0))\right)$ both approach zero as $m\to\infty$,
we have that the largest connected component of $T_\infty$ has an area at least as large as $\lim_{m\to\infty} \lambda(P_{n_m}) = \limsup_{n\to\infty} W_n$.
On the other hand, $T_\infty$ is of the form \eqref{eq:sx-intersections} for a $\beta$-rotation path $\mathbf{x}(t)$, so, by \eqref{eq:sofaconst-characterization}, its area is bounded from above by $\mu_\textrm{MS}$. Therefore, $\limsup_{n\to\infty} W_n\le\mu_\textrm{MS}$.
We also have that $W_n\ge \mu_*(\beta)$ for all $\pi/3\le\beta\le\pi/2$ from Proposition~\ref{prop:sofa-fg-bounds}(iii), so $W_n \ge \mu_\textrm{MS}$ for all $n$.
We conclude that $\lim_{n\to\infty} W_n=\mu_\textrm{MS}$, as claimed.
\end{proof}
We remark that in view of Theorem~\ref{thm:angle-bound}, it is easy to see that Theorem~\ref{thm:conv-moving-sofa} remains true if we replace the range $1\le k\le \lceil n/3 \rceil$ of values of $k$ in \eqref{eq:def-wn} with the smaller (and therefore computationally more efficient) range $1\le k\le \lceil n/9 \rceil$.
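For concreteness, the angle vectors and windows appearing in Theorem~\ref{thm:conv-moving-sofa} are straightforward to generate; a sketch (angles in radians):

```python
import math

def discretization(n, k):
    """Angle vector alpha(n,k) and window [gamma_1, gamma_2] of the theorem."""
    alpha = [j * math.pi / (2 * n) for j in range(1, n - k)]  # j = 1,...,n-k-1
    gamma_1 = (n - k) * math.pi / (2 * n)
    gamma_2 = (n - k + 1) * math.pi / (2 * n)
    return alpha, gamma_1, gamma_2

alpha, g1, g2 = discretization(6, 1)
assert len(alpha) == 4                     # angles pi/12, pi/6, pi/4, pi/3
assert abs(g1 - 5 * math.pi / 12) < 1e-12
assert abs(g2 - math.pi / 2) < 1e-12
```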
\section{An algorithmic proof scheme for moving sofa area bounds}
\label{sec:algorithm}
The theoretical framework we developed in the previous section reduces the problem of deriving upper bounds for $\mu_\textrm{MS}$ to that of proving upper bounds for the function $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$. Since this function is defined in terms of solutions to a family of optimization problems in finite-dimensional spaces, this is already an important conceptual advance. However, from a practical standpoint it remains to develop and implement an efficient algorithm for solving optimization problems in this class. Our goal in this section is to present such an algorithm and establish its correctness.
Our computational strategy is a variant of the \textbf{geometric branch and bound} optimization technique \cite{ratscheck}.
Recall from Lemma~\ref{lem:supmax} that the maximum of the function $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ is attained in a box (a Cartesian product of intervals) $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} \subset \mathbb{R}^{2k}$.
Our strategy is to break up $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ into sub-boxes.
On each box $E$ being considered, we will compute a quantity $\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E)$, which is an upper bound for $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ that holds uniformly for all $\mathbf{u} \in E$. In many cases this bound will not be an effective one; in such a case the box will be subdivided into two further boxes $E_1$ and $E_2$, which will be inserted into a queue to be considered later. Other boxes lead to effective bounds and need not be considered further. By organizing the computation efficiently, practical bounds can be established in a reasonable time, at least for small values of~$k$.
To make the idea precise, we introduce a few more definitions. Given two intervals $I=[a,b], J=[c,d] \subseteq \mathbb{R}$ and $\alpha \in[0,\pi/2]$, define
$$
\widehat{L}_\alpha(I,J) = \bigcup_{u \in I, v \in J} L_\alpha(u,v).
$$
Note that $\widehat{L}_\alpha(I,J)$ can also be expressed as a Minkowski sum of $L_\alpha(0,0)$ with the rotated rectangle $\rotmat{\alpha}(I\times J)$; in particular, it belongs to the class of planar sets known as \textbf{Nef polygons} (see the Appendix for further discussion of Nef polygons and their relevance to our software implementation of the algorithm).
Now, for a box $E=I_1\times \ldots \times I_{2k} \subset \mathbb{R}^{2k}$, define
\begin{equation}\label{eq:defcalF}
\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E) =
\lambda^*\left(
H \cap \bigcap_{j=1}^k
\widehat{L}_{\alpha_j}(I_{2j-1},I_{2j})
\cap B(\beta_1,\beta_2)
\right)\text.
\end{equation}
Thus, by the definitions we have trivially that
\begin{equation} \label{eq:upperbound-trivially}
\sup_{\mathbf{u}\in E} g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})
\le \Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E).
\end{equation}
Next, given a box $E=I_1\times\ldots\times I_{2k}$ where $I_j=[a_j,b_j]$, let
$$
P_{\textrm{mid}}(E) = \left( \frac{a_1+b_1}{2}, \ldots,\frac{a_{2k}+b_{2k}}{2}\right)
$$
denote its midpoint.
We also assume that some rule is given to associate with each box $E$ a coordinate $i=\operatorname{ind}(E) \in \{1,\ldots,2k\}$, called the \textbf{splitting index} of $E$. This index will be used by the algorithm to split $E$ into two sub-boxes, which we denote by $\operatorname{split}_{i,1}(E)$ and $\operatorname{split}_{i,2}(E)$, and which are defined as
\begin{align*}
\operatorname{split}_{i,1}(E) &= I_1\times \ldots \times I_{i-1} \times \left[a_i,\tfrac12(a_i+b_i)\right] \times I_{i+1}\times \ldots \times I_{2k}, \\
\operatorname{split}_{i,2}(E) &= I_1\times \ldots \times I_{i-1} \times \left[\tfrac12(a_i+b_i),b_i\right] \times I_{i+1}\times \ldots \times I_{2k}.
\end{align*}
We assume that the mapping $E\mapsto \operatorname{ind}(E)$ has the property that, if the mapping $E \mapsto \operatorname{split}_{\operatorname{ind}(E),j}(E)$ is applied iteratively, with arbitrary choices of $j\in\{1,2\}$ at each step and starting from some initial value of $E$, the resulting sequence of splitting indices $i_1,i_2,\ldots$ contains each possible coordinate infinitely many times. A mapping satisfying this assumption is referred to as a \textbf{splitting rule}.
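Representing a box as a list of coordinate intervals, the midpoint and splitting operations take only a few lines (a sketch with 0-based indices, unlike the 1-based indices used in the text):

```python
def midpoint(box):
    """P_mid(E) for a box E given as a list of (a_i, b_i) interval pairs."""
    return tuple((a + b) / 2.0 for a, b in box)

def split(box, i):
    """Halve coordinate i of E, returning split_{i,1}(E) and split_{i,2}(E)."""
    a, b = box[i]
    m = (a + b) / 2.0
    return (box[:i] + [(a, m)] + box[i + 1:],
            box[:i] + [(m, b)] + box[i + 1:])

E = [(0.0, 1.0), (-2.0, 2.0)]
E1, E2 = split(E, 1)
assert midpoint(E) == (0.5, 0.0)
assert E1 == [(0.0, 1.0), (-2.0, 0.0)] and E2 == [(0.0, 1.0), (0.0, 2.0)]
```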
The algorithm is based on the standard data structure of a \textbf{priority queue} \cite{CLRS} used to hold boxes that are still under consideration. Recall that in a priority queue, each element of the queue is associated with a numerical value called its priority, and that the queue realizes operations of pushing a new element into the queue with a given priority, and popping the highest priority element from the queue. In our application, the priority of each box $E$ will be set to a value denoted $\Pi(E)$, where the mapping $E\mapsto \Pi(E)$ is given and is assumed to satisfy
\begin{equation} \label{eq:priority-map-condition}
\Pi(E) \ge \Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E).
\end{equation}
Aside from this requirement, the precise choice of mapping is an implementation decision.
(A key point here is that setting $\Pi(E)$ \emph{equal} to $\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E)$ is conceptually the simplest choice, but from the practical point of view of minimizing programming complexity and running time it may not be optimal; see the Appendix for further discussion of this point.) Note that, since boxes are popped from the queue to be inspected by the algorithm in decreasing order of their priority, this ensures that the algorithm pursues successive improvements to the upper bound it obtains in a greedy fashion.
The algorithm also computes a lower bound on $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ by evaluating $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$
at the midpoint of every box it processes and keeping track of the largest value observed. This lower bound is used to discard boxes in which it is impossible for the maximum to lie. The variable keeping track of the lower bound is initialized to some number $\ell_0$ known to be a lower bound for $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$. In our software implementation we used the value
$$
\ell_0 = \begin{cases} 0 & \textrm{if }\beta_2 < \pi/2, \\ 11/5 & \textrm{if }\beta_2 = \pi/2,
\end{cases}
$$
this being a valid choice thanks to the fact that (by Proposition~\ref{prop:sofa-fg-bounds}(iii)) $G_{\boldsymbol{\alpha}}^{\beta_1,\pi/2} \ge \mu_*(\pi/2) \ge \mu_\textrm{G} = 2.2195\ldots > 2.2=11/5$. Note that simply setting $\ell_0=0$ in all cases would also result in a valid algorithm, but would result in a slight waste of computation time compared to the definition above.
With this setup, we can now describe the algorithm, given in pseudocode in Listing~\ref{alg:branchandbound}.
The next proposition is key to proving the algorithm's correctness.
\begin{listing}
\begin{mdframed}[backgroundcolor=codebgcolor]
\begin{algorithmic}
\State $\varname{box\_queue} \gets $ an empty priority queue of boxes
\State $\varname{initial\_box} \gets $ box representing $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$, computed according to the \State \phantom{$\varname{initial\_box} \gets $} function of $\boldsymbol{\alpha}, \beta_1, \beta_2$ described in Lemma~\ref{lem:supmax}
\smallskip
\State \keyword{push} $\varname{initial\_box}$ into $\varname{box\_queue}$ with priority $\Pi(\varname{initial\_box})$
\smallskip
\State $\varname{best\_lower\_bound\_so\_far} \gets $ the initial lower bound $\ell_0$
\medskip
\While{true}
\medskip
\State \keyword{pop} highest priority element of $\varname{box\_queue}$ into $\varname{current\_box}$
\State $\varname{current\_box\_lower\_bound} \gets g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(P_{\textrm{mid}}(\varname{current\_box}))$
\State $\varname{best\_upper\_bound\_so\_far} \gets \Pi(\varname{current\_box})$
\medskip
\If{$\varname{current\_box\_lower\_bound} > \varname{best\_lower\_bound\_so\_far}$}
\State $\varname{best\_lower\_bound\_so\_far} \gets \varname{current\_box\_lower\_bound}$
\EndIf
\medskip
\State $\varname{i} \gets \operatorname{ind}(\varname{current\_box})$
\smallskip
\smallskip
\For{$\varname{j}=1,2$}
\State $\varname{new\_box} \gets \operatorname{split}_{\varname{i},\varname{j}}(\varname{current\_box})$
\If{$\Pi(\varname{new\_box}) \ge \varname{best\_lower\_bound\_so\_far}$
}
\State \keyword{push} $\varname{new\_box}$ into $\varname{box\_queue}$ with priority
$\Pi(\varname{new\_box})$
\EndIf
\EndFor
\medskip
\State \textbf{Reporting point:} print the values of $\varname{best\_upper\_bound\_so\_far}$
\State \phantom{\textbf{Reporting point:}} and $\varname{best\_lower\_bound\_so\_far}$
\medskip
\EndWhile
\end{algorithmic}
\end{mdframed}
\caption{The algorithm for computing bounds for $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$.}
\label{alg:branchandbound}
\end{listing}
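For readers who prefer running code to pseudocode, the loop of Listing~\ref{alg:branchandbound} can be sketched as follows, with the priority map $\Pi$, the objective $g$, and the splitting rule passed in as abstract inputs. The toy one-dimensional objective and exact interval bound below stand in for $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ and $\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ and are purely illustrative:

```python
import heapq

def branch_and_bound(Pi, g, mid, split, ind, initial_box, ell0, steps):
    """Greedy geometric branch and bound; returns (upper, lower) bounds."""
    # heapq is a min-heap, so priorities are negated to pop the largest first
    queue = [(-Pi(initial_box), initial_box)]
    best_lower, best_upper = ell0, float("inf")
    for _ in range(steps):
        neg_p, box = heapq.heappop(queue)
        best_upper = -neg_p                        # upper bound for the maximum
        best_lower = max(best_lower, g(mid(box)))  # evaluate at the midpoint
        for new_box in split(box, ind(box)):
            if Pi(new_box) >= best_lower:          # otherwise discard the box
                heapq.heappush(queue, (-Pi(new_box), new_box))
    return best_upper, best_lower

# Toy run: maximize g(u) = 1 - u^2 over the box [-1, 1], with Pi the exact
# interval upper bound for g on a box
g = lambda u: 1.0 - u[0] ** 2
Pi = lambda box: 1.0 - (0.0 if box[0][0] <= 0.0 <= box[0][1]
                        else min(box[0][0] ** 2, box[0][1] ** 2))
mid = lambda box: tuple((a + b) / 2.0 for a, b in box)
split = lambda box, i: ([(box[i][0], (box[i][0] + box[i][1]) / 2.0)],
                        [((box[i][0] + box[i][1]) / 2.0, box[i][1])])
upper, lower = branch_and_bound(Pi, g, mid, split, lambda b: 0,
                                [(-1.0, 1.0)], 0.0, 50)
assert lower <= 1.0 <= upper and upper - lower < 1e-6
```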
\begin{prop}\label{prop:lead-box}
Any box $\tilde E$ which is the highest priority box in the queue $\varname{box\_queue}$ at some step satisfies
\begin{equation} \label{eq:lead-box}
\sup\{g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}):\mathbf{u}\in\tilde E\} \le G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} \le \Pi(\tilde E)\text.
\end{equation}
\end{prop}
\begin{proof}
First, the lower inequality holds for \emph{all} boxes in the queue, simply because the value $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ for any $\mathbf{u}\in\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ is a lower bound on its maximum over all $\mathbf{u}\in\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$.
Next, let $Q_n$ denote the collection of boxes in the priority queue after $n$ iterations of the \texttt{while} loop (with $Q_0$ being the initialized queue containing the single box $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$), and let $D_n$ denote the collection of boxes that were discarded (not pushed into the priority queue during the execution of the \texttt{if} clause inside the \texttt{for} loop) during the first $n$ iterations of the \texttt{while} loop. Then we first note that for all $n$, the relation
\begin{equation} \label{eq:confspace-decom}
\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} = \bigcup_{E \in Q_n\cup D_n} E
\end{equation}
holds.
Indeed, this is easily proved by induction on $n$: if we denote by $X$ the highest priority element in $Q_n$, then during the $(n+1)$th iteration of the \texttt{while} loop, $X$ is subdivided into two boxes $X=X_1\cup X_2$, and each of $X_1, X_2$ is either pushed into the priority queue (i.e., becomes an element of $Q_{n+1}$) or discarded (i.e., becomes an element of $D_{n+1}$), so we have that $\Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} = \bigcup_{E \in Q_{n+1}\cup D_{n+1}} E$, completing the inductive step.
Second, note that for any box $X\in D_n$, since $X$ was discarded during the $m$th iteration of the \texttt{while} loop for some $1\le m\le n$, we have that
$\Pi(X)$
is smaller than the value of $\varname{best\_lower\_bound\_so\_far}$ during that iteration. But $\varname{best\_lower\_bound\_so\_far}$ is always assigned a value of the form $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ for some $\mathbf{u}\in \Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$ and is therefore bounded from above by $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$,
so we have established that
\begin{equation} \label{eq:xupperbound-le}
\Pi(X) < G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} \qquad (X \in D_n).
\end{equation}
The relations \eqref{eq:supmax}, \eqref{eq:upperbound-trivially}, \eqref{eq:priority-map-condition}, \eqref{eq:confspace-decom}, and \eqref{eq:xupperbound-le} now imply that
\begin{align*}
G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} &= \max \left\{ g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})
\,:\,
\mathbf{u}\in \Omega_{\boldsymbol{\alpha}}^{\beta_1,\beta_2} \right\} \\ &=
\max_{E \in Q_n\cup D_n} \left(
\sup \left\{ g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})
\,:\,
\mathbf{u}\in E \right\}
\right)
\le
\max_{E \in Q_n\cup D_n} \Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E)
\\ &\le \max_{E \in Q_n\cup D_n} \Pi(E)
=
\max_{E \in Q_n} \Pi(E).
\end{align*}
Finally, $\max_{E \in Q_n} \Pi(E) = \Pi(\tilde E)$, since $\tilde E$ was assumed to be the box with highest priority among the elements of $Q_n$, so we get the upper inequality in \eqref{eq:lead-box}, which finishes the proof.
\end{proof}
We immediately have the correctness of the algorithm as a corollary:
\begin{thm}[Correctness of the algorithm]
Any value of the variable $\varname{best\_upper\_bound\_so\_far}$ reported by the algorithm is an upper bound for $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$.
\end{thm}
Note that the correctness of the algorithm is not dependent on the assumption we made on the splitting index mapping $E\mapsto \operatorname{ind}(E)$ being a splitting rule. The importance of that assumption is explained by the following result, which also explains one sense in which assuming an equality in \eqref{eq:priority-map-condition} rather than an inequality provides a benefit (of a theoretical nature at least).
\begin{thm}[Asymptotic sharpness of the algorithm]
\label{thm:asym-sharpness}
Assume that the priority mapping $E\mapsto \Pi(E)$ is taken to be
\begin{equation} \label{eq:priority-map-sharpness}
\Pi(E) = \Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E).
\end{equation}
Then the upper and lower bounds output by the algorithm both converge to $G_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}$.
\end{thm}
\begin{proof}
As one may easily check, the upper bound used in the calculation under the assumption \eqref{eq:priority-map-sharpness}, $\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E)$, approaches the actual supremum of $g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u})$ over $E$ as the diameter of $E$ approaches zero.
That is, $|\Gamma_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(E) - \sup\{g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}):\mathbf{u}\in E\}|$ is bounded by
a function of the diameter of $E$ that approaches zero as the diameter approaches zero.
The same is true of the variation in each box, $|\sup\{g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}):\mathbf{u}\in E\} - \inf\{g_{\boldsymbol{\alpha}}^{\beta_1,\beta_2}(\mathbf{u}):\mathbf{u}\in E\}|$.
When using a valid splitting rule, the diameter of the leading box approaches zero as $n$ approaches infinity, and Proposition~\ref{prop:lead-box} completes the proof.
\end{proof}
As with the case of the choice of priority mapping and the value of the initial lower bound $\ell_0$, the specific choice of splitting rule to use is an implementation decision, and different choices can lead to algorithms with different performance. A simple choice we tried was to use the index of the coordinate with the largest variation within $E$ (i.e., the ``longest dimension'' of $E$). Another choice, which we found gives superior performance and is the rule currently used in our software implementation \texttt{SofaBounds}, is to let the splitting index be the value of $i$ maximizing
$\lambda(D_i\cap S(E))$, where $S(E)$ is the argument of $\lambda^*$ in \eqref{eq:defcalF},
and
\begin{align*}
D_i = \begin{cases}
\displaystyle \bigcup_{u\in {I_{2j-1}}} \widehat{L}_{\alpha_j}(u,I_{2j}) \setminus \bigcap_{u\in {I_{2j-1}}} \widehat{L}_{\alpha_j}(u,I_{2j})
& \textrm{if }i=2j-1, \\[14pt]
\displaystyle
\bigcup_{u\in {I_{2j}}} \widehat{L}_{\alpha_j}(I_{2j-1},u) \setminus \bigcap_{u\in {I_{2j}}} \widehat{L}_{\alpha_j}(I_{2j-1},u) & \textrm{if }i=2j.
\end{cases}
\end{align*}
\section{Explicit numerical bounds}
\label{sec:numerical}
We report the following explicit numerical bounds obtained by our algorithm, which we will then use to prove Theorems~\ref{thm:new-upperbound} and~\ref{thm:angle-bound}.
\begin{thm}
\label{thm:explicit-bounds}
Define angles
\begin{align*}
\alpha_1 &= \sin^{-1}\tfrac{7}{25} \approx 16.26^\circ, \\
\alpha_2 &= \sin^{-1}\tfrac{33}{65} \approx 30.51^\circ, \\
\alpha_3 &= \sin^{-1}\tfrac{119}{169} \approx 44.76^\circ, \\
\alpha_4 &= \sin^{-1} \tfrac{56}{65} = \pi/2-\alpha_2 \approx 59.49^\circ, \\
\alpha_5 &= \sin^{-1}\tfrac{24}{25} = \pi/2-\alpha_1 \approx 73.74^\circ, \\
\alpha_6 &= \sin^{-1} \tfrac{60}{61} \approx 79.61^\circ, \\
\alpha_7 &= \sin^{-1} \tfrac{84}{85} \approx 81.2^\circ.
\end{align*}
Then we have the inequalities
\begin{align}
G_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5)} &\le
2.37, \label{eq:numerical-bound1} \\[5pt]
G_{(\alpha_1,\alpha_2,\alpha_3)}^{\alpha_4,\alpha_5} &\le
2.21,
\label{eq:numerical-bound2}
\\
G_{(\alpha_1,\alpha_2,\alpha_3)}^{\alpha_5,\alpha_6} &\le 2.21,
\label{eq:numerical-bound3}
\\
G_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4)}^{\alpha_6,\alpha_7} &\le 2.21.
\label{eq:numerical-bound4}
\end{align}
\end{thm}
\begin{proof} Each of the inequalities \eqref{eq:numerical-bound1}--\eqref{eq:numerical-bound4} is certified as correct using the \texttt{SofaBounds} software package by invoking the \texttt{run} command from the command line interface after loading the appropriate parameters. For \eqref{eq:numerical-bound1}, the parameters can be loaded from the saved profile file \texttt{thm9-bound1.txt} included with the package (see the Appendix below for an illustration of the syntax for loading the file and running the computation). Similarly, the inequalities \eqref{eq:numerical-bound2}, \eqref{eq:numerical-bound3}, \eqref{eq:numerical-bound4} are obtained by running the software with the profile files \texttt{thm9-bound2.txt}, \texttt{thm9-bound3.txt}, and \texttt{thm9-bound4.txt}, respectively. Table~\ref{table:benchmarking} in the Appendix shows benchmarking results with running times for each of the computations.
\end{proof}
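As an independent sanity check (not part of the proof), note that each angle $\alpha_i$ above is defined by a Pythagorean triple, so its sine is exactly rational; the triples and the stated degree approximations can be verified directly:

```python
import math

# Illustrative check (not part of the proof): each alpha_i comes from a
# Pythagorean triple (a, b, c) with sin(alpha_i) = a/c, so the sine is
# exactly rational; verify the triples and the stated degree values.
triples = {1: (7, 24, 25), 2: (33, 56, 65), 3: (119, 120, 169),
           4: (56, 33, 65), 5: (24, 7, 25), 6: (60, 11, 61), 7: (84, 13, 85)}
for i, (a, b, c) in sorted(triples.items()):
    assert a * a + b * b == c * c            # exact Pythagorean triple
    print(f"alpha_{i} = {math.degrees(math.asin(a / c)):.2f} deg")
# alpha_1 = 16.26, alpha_2 = 30.51, alpha_3 = 44.76, alpha_4 = 59.49,
# alpha_5 = 73.74, alpha_6 = 79.61, alpha_7 = 81.20 (degrees)
```

The complementary pairs $\alpha_4 = \pi/2 - \alpha_2$ and $\alpha_5 = \pi/2 - \alpha_1$ correspond to swapping the two legs of the triples $(33,56,65)$ and $(7,24,25)$.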
\begin{proof}[Proof of Theorem~\ref{thm:new-upperbound}]
For angles $0\le \beta_1<\beta_2 \le \pi/2$ denote
$$M(\beta_1,\beta_2) =
\sup_{\beta_1\le \beta \le \beta_2} \mu_*(\beta).$$
By \eqref{eq:sofaconst-beta0}, we have
\begin{equation} \label{eq:sofaconst-tworanges}
\mu_\textrm{MS} =
M(\beta_0,\pi/2) =
\max\Big(
M(\beta_0,\alpha_5),
M(\alpha_5,\pi/2)
\Big).
\end{equation}
By Proposition~\ref{prop:sofa-fg-bounds}(iii),
$M(\beta_0,\alpha_5)$
is bounded from above by $G_{(\alpha_1,\alpha_2,\alpha_3)}^{\alpha_4,\alpha_5}$, and by Proposition~\ref{prop:sofa-fg-bounds}(i),
$M(\alpha_5,\pi/2)$
is bounded from above by $G_{(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5)}$. Thus, combining \eqref{eq:sofaconst-tworanges} with the numerical bounds \eqref{eq:numerical-bound1}--\eqref{eq:numerical-bound2} proves \eqref{eq:upperbound}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:angle-bound}]
Using the same notation as in the proof of Theorem~\ref{thm:new-upperbound} above, we note that
$$
M(0,\alpha_7)
= \max\Big(
M(0,\alpha_4), M(\alpha_4,\alpha_5), M(\alpha_5,\alpha_6), M(\alpha_6,\alpha_7)
\Big).
$$
Now, by Gerver's observation mentioned after the relation \eqref{eq:sofaconst-beta}, we have that $M(0,\alpha_4) \le \sec(\alpha_4) < \sec(\pi/3) = 2$. By Proposition~\ref{prop:sofa-fg-bounds}(iii) coupled with the numerical bounds \eqref{eq:numerical-bound2}--\eqref{eq:numerical-bound4}, the remaining three arguments
$M(\alpha_4,\alpha_5)$, $M(\alpha_5,\alpha_6)$, and $M(\alpha_6,\alpha_7)$ in the maximum are all bounded from above by $2.21$, so in particular we get that $M(0,\alpha_7)\le 2.21 < \mu_\textrm{G} \approx 2.2195$. On the other hand, we have that
$$ \mu_\textrm{MS} = M(0,\pi/2) = \max\Big( M(0,\alpha_7), M(\alpha_7,\pi/2) \Big) \ge \mu_\textrm{G}. $$
We conclude that $\mu_\textrm{MS} = M(\alpha_7,\pi/2)$ and that $\mu_*(\beta) \le 2.21 < \mu_\textrm{MS}$ for all $\beta<\alpha_7$. This proves that a moving sofa of maximal area has to undergo rotation by an angle of at least $\alpha_7$, as claimed.
\end{proof}
\section{Concluding remarks}
The results of this paper represent the first progress since Hammersley's 1968 paper \cite{hammersley} on deriving upper bounds for the area of a moving sofa shape. Our techniques also enable us to prove an improved lower bound on the angle of rotation a maximal area moving sofa shape must rotate through. Our improved upper bound of $2.37$ on the moving sofa constant comes much closer than Hammersley's bound to the best known lower bound $\mu_\textrm{G} \approx 2.2195$ arising from Gerver's construction, but clearly there is still considerable room for improvement in narrowing the gap between the lower and upper bounds. In particular, some experimentation with the initial parameters used as input for the \texttt{SofaBounds} software should make it relatively easy to produce further (small) improvements to the value of the upper bound.
More ambitiously, our hope is that a refinement of our methods---in the form of theoretical improvements and/or speedups in the software implementation, for example using parallel computing techniques---may eventually be used to obtain an upper bound that comes very close to Gerver's bound, thereby providing supporting evidence to his conjecture that the shape he found is the solution to the moving sofa problem. Some supporting evidence of this type, albeit derived using a heuristic algorithm, was reported in a recent paper by Gibbs \cite{gibbs}. Alternatively, a failure of our algorithm (or improved versions thereof) to approach Gerver's lower bound may provide clues that his conjecture may in fact be false.
Our methods should also generalize in a fairly straightforward manner to other variants of the moving sofa problem. In particular, in a recent paper \cite{romik}, one of us discovered a shape with a piecewise algebraic boundary that is a plausible candidate for the solution to the so-called \textbf{ambidextrous moving sofa problem}, which asks for the largest shape that can be moved around a right-angled turn \textit{either to the left or to the right} in a hallway of width~1 (Fig.~\ref{fig:romik-sofa}(a)). The shape, shown in Fig.~\ref{fig:romik-sofa}(b), has an area given by the intriguing explicit constant
\begin{align*}
\mu_\textrm{R} &=\sqrt[3]{3+2 \sqrt{2}}+\sqrt[3]{3-2 \sqrt{2}}-1
+\arctan\left[
\frac{1}{2} \left( \sqrt[3]{\sqrt{2}+1}- \sqrt[3]{\sqrt{2}-1}\, \right)
\right]
\nonumber \\ & \qquad\qquad\qquad = 1.64495521\ldots
\end{align*}
As with the case of the original (non-ambidextrous) moving sofa problem, the constant $\mu_\textrm{R}$ provides a lower bound on the maximal area of an ambidextrous moving sofa shape; in the opposite direction, any upper bound for the original problem is also an upper bound for the ambidextrous variant of the problem, which establishes $2.37$ as a valid upper bound for that problem. Once again, the gap between the lower and upper bounds seems like an appealing opportunity for further work, so it would be interesting to extend the techniques of this paper to the setting of the ambidextrous moving sofa problem so as to obtain better upper bounds on the ``ambidextrous moving sofa constant.''
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\scalebox{0.3}{\includegraphics{ambisofa-in-hallway.pdf}} &
\raisebox{10pt}{\scalebox{0.5}{\includegraphics{ambisofa.pdf}}}
\\[3pt]
(a) & (b)
\end{tabular}
\caption{(a) The ambidextrous moving sofa problem involves maximization of the area of moving sofa shapes that can navigate a hallway with right-angled turns going both ways, as shown in the figure; (b) a shape discovered by Romik \cite{romik} that was derived as a possible solution to the ambidextrous moving sofa problem. The boundary of the shape is a piecewise algebraic curve; the tick marks in the figure delineate the transition points between distinct parts of the boundary.}
\label{fig:romik-sofa}
\end{center}
\end{figure}
\section*{Appendix: The \texttt{SofaBounds} software}
We implemented the algorithm described in Section~\ref{sec:algorithm} in the software package \texttt{SofaBounds} we developed, which serves as a companion package to this paper and whose source code is available to download online \cite{sofabounds}. The package is a Unix command line tool written in \texttt{C++} and makes use of the open source computational geometry library \texttt{CGAL} \cite{cgal}. All computations are done in the exact rational arithmetic mode supported by \texttt{CGAL} to ensure that the bounds output by the algorithm are mathematically rigorous. For this reason, the software only works with angles $\gamma$ for which the vector $(\cos\gamma, \sin\gamma)$ has rational coordinates, i.e., is a rational point $(a/c,b/c)$ on the unit circle; clearly such angles are parametrized by Pythagorean triples $(a,b,c)$ such that $a^2+b^2=c^2$, and it is using such triples that the angles are entered into the program as input from the user. For example, to approximate an angle of $45$ degrees, we used the Pythagorean triple $(119,120,169)$, which corresponds to an angle of $\sin^{-1}(119/169) \approx 44.76^\circ$.
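The parametrization by Pythagorean triples can be made concrete. The following sketch (illustrative, not part of the package) uses Euclid's formula for primitive triples to search small parameters for the rational angle closest to $45^\circ$, which recovers the triple $(119,120,169)$ mentioned above (corresponding to $m=12$, $n=5$):

```python
import math
from math import gcd

# Euclid's formula: for m > n >= 1 with gcd(m, n) = 1 and m - n odd,
# (m^2 - n^2, 2mn, m^2 + n^2) is a primitive Pythagorean triple.
# Search small parameters for the rational angle closest to 45 degrees.
best = None
for m in range(2, 16):
    for n in range(1, m):
        if gcd(m, n) != 1 or (m - n) % 2 == 0:
            continue
        a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
        angle = math.degrees(math.asin(min(a, b) / c))
        if best is None or abs(angle - 45) < abs(best[1] - 45):
            best = ((min(a, b), max(a, b), c), angle)
print(best[0], round(best[1], 2))  # (119, 120, 169) 44.76
```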
The software uses the \textbf{Nef polygon} geometric primitive implemented in \texttt{CGAL}; recall that Nef polygons are planar sets that can be obtained from a finite set of half-planes by applying set intersection and complementation operations. It is easy to see that all the planar sets manipulated by the algorithm belong to this family and can be readily calculated using elementary geometry and the features of \texttt{CGAL}'s Nef polygon sub-library \cite{seel}.
Our implementation uses the priority rule
$$
\Pi(E) =
\lambda\left(
H \cap \bigcap_{j=1}^k
\widehat{L}_{\alpha_j}(I_{2j-1},J_{2j})
\cap B(\beta_1,\beta_2)
\right)\text,
$$
i.e., we use the total area of the intersection as the priority instead of the area of the largest connected component as in \eqref{eq:defcalF}; this is slightly less ideal from a theoretical point of view, since Theorem~\ref{thm:asym-sharpness} does not apply, but it simplifies the programming and, in practice, likely improves computational performance.
The software runs our algorithm on a single Unix thread, since the parts of the CGAL library we used are not thread-safe; note however that the nature of our algorithm lends itself fairly well to parallelization, so a multithreading or other parallelized implementation could yield a considerable speedup in performance, making it more practical to continue to improve the bounds in Theorems~\ref{thm:new-upperbound} and \ref{thm:angle-bound}.
To illustrate the use of the software, Listing~\ref{code-listing} shows a sample working session in which the upper bound $2.5$ is derived for $G_{\boldsymbol{\alpha}}$ with
\begin{equation} \label{eq:angles-30-45-60}
\boldsymbol{\alpha}=\left(\sin^{-1}\frac{33}{65},\sin^{-1}\frac{119}{169}, \sin^{-1}\frac{56}{65}\right)
\approx (30.51^\circ, 44.76^\circ, 59.49^\circ).
\end{equation}
The numerical bounds \eqref{eq:numerical-bound1}--\eqref{eq:numerical-bound4} used in the proofs of Theorems~\ref{thm:new-upperbound} and~\ref{thm:angle-bound} were proved using \texttt{SofaBounds}, and required several weeks of computing time on a desktop computer. Table~\ref{table:benchmarking} shows some benchmarking information, which may be useful to anyone wishing to reproduce the computations or to improve upon our results.
\newcommand{\ignore}[1]{{}}
\begin{listing}
\begin{mdframed}[backgroundcolor=codebgcolor]
\begin{alltt}
Users/user/SofaBounds$ \inputline{SofaBounds}
SofaBounds version 1.0
Type "help" for instructions.
> \inputline{load example-30-45-60.txt}
File 'example-30-45-60.txt' loaded successfully.
> \inputline{settings}
Number of corridors: 3
Slope 1: 33 56 65 (angle: 30.5102 deg)
Slope 2: 119 120 169 (angle: 44.7603 deg)
Slope 3: 56 33 65 (angle: 59.4898 deg)
Minimum final slope: 1 0 1 (angle: 90 deg)
Maximum final slope: 1 0 1 (angle: 90 deg)
Reporting progress every: 0.01 decrease in upper bound
> \inputline{run}
<iterations=0>
<iterations=1 | upper bound=3.754 | time=0:00:00>
<iterations=7 | upper bound=3.488 | time=0:00:01>
<iterations=9 | upper bound=3.438 | time=0:00:01> \vspace{7pt}
\ignore{<iterations=13 | upper bound=3.428 | time=0:00:02>
<iterations=14 | upper bound=3.416 | time=0:00:02>
<iterations=16 | upper bound=3.405 | time=0:00:02>
<iterations=18 | upper bound=3.397 | time=0:00:03>
<iterations=19 | upper bound=3.380 | time=0:00:03>
<iterations=24 | upper bound=3.360 | time=0:00:04>
<iterations=25 | upper bound=3.301 | time=0:00:04>
<iterations=30 | upper bound=3.273 | time=0:00:05>
<iterations=35 | upper bound=3.248 | time=0:00:06>
<iterations=36 | upper bound=3.222 | time=0:00:07>
<iterations=37 | upper bound=3.202 | time=0:00:07>
<iterations=41 | upper bound=3.189 | time=0:00:08>
<iterations=45 | upper bound=3.162 | time=0:00:09>
<iterations=46 | upper bound=3.140 | time=0:00:09>
<iterations=47 | upper bound=3.043 | time=0:00:09>
<iterations=51 | upper bound=3.010 | time=0:00:10>
<iterations=53 | upper bound=2.996 | time=0:00:11>
<iterations=56 | upper bound=2.983 | time=0:00:11>
<iterations=64 | upper bound=2.970 | time=0:00:14>
<iterations=67 | upper bound=2.957 | time=0:00:14>
<iterations=73 | upper bound=2.943 | time=0:00:16>
<iterations=79 | upper bound=2.928 | time=0:00:18>
<iterations=87 | upper bound=2.918 | time=0:00:20>
<iterations=92 | upper bound=2.903 | time=0:00:21>
<iterations=95 | upper bound=2.870 | time=0:00:22>
<iterations=102 | upper bound=2.857 | time=0:00:24>
<iterations=111 | upper bound=2.848 | time=0:00:27>
<iterations=116 | upper bound=2.838 | time=0:00:28>
<iterations=118 | upper bound=2.827 | time=0:00:29>
<iterations=127 | upper bound=2.817 | time=0:00:31>
<iterations=134 | upper bound=2.809 | time=0:00:34>
<iterations=139 | upper bound=2.796 | time=0:00:35>
<iterations=143 | upper bound=2.790 | time=0:00:36>
<iterations=151 | upper bound=2.778 | time=0:00:38>
<iterations=165 | upper bound=2.763 | time=0:00:42>
<iterations=173 | upper bound=2.748 | time=0:00:44>
<iterations=181 | upper bound=2.738 | time=0:00:47>
<iterations=204 | upper bound=2.730 | time=0:00:53>
<iterations=220 | upper bound=2.719 | time=0:00:58>
<iterations=240 | upper bound=2.709 | time=0:01:03>
<iterations=278 | upper bound=2.699 | time=0:01:14>
<iterations=300 | upper bound=2.689 | time=0:01:21>
<iterations=326 | upper bound=2.680 | time=0:01:28>
<iterations=361 | upper bound=2.670 | time=0:01:38>
<iterations=413 | upper bound=2.660 | time=0:01:53>
<iterations=462 | upper bound=2.650 | time=0:02:07>
<iterations=548 | upper bound=2.640 | time=0:02:31>
<iterations=619 | upper bound=2.630 | time=0:02:52>
<iterations=724 | upper bound=2.620 | time=0:03:22>
<iterations=812 | upper bound=2.610 | time=0:03:48>
<iterations=945 | upper bound=2.600 | time=0:04:27>
<iterations=1100 | upper bound=2.590 | time=0:05:12>
<iterations=1290 | upper bound=2.580 | time=0:06:07>
<iterations=1513 | upper bound=2.570 | time=0:07:18> } \textnormal{[\textit{... 54 output lines deleted ...}]}\vspace{7pt}
<iterations=1776 | upper bound=2.560 | time=0:08:43>
<iterations=2188 | upper bound=2.550 | time=0:10:48>
<iterations=2711 | upper bound=2.540 | time=0:13:23>
<iterations=3510 | upper bound=2.530 | time=0:18:18>
<iterations=4620 | upper bound=2.520 | time=0:24:54>
<iterations=6250 | upper bound=2.510 | time=0:34:52>
<iterations=8901 | upper bound=2.500 | time=0:50:45>
\end{alltt}
\end{mdframed}
\caption{A sample working session of the \texttt{SofaBounds} software package proving an upper bound for $G_{\boldsymbol{\alpha}}$ with $\boldsymbol{\alpha}$ given by \eqref{eq:angles-30-45-60}. User commands are colored in \inputline{\textrm{blue}}.
The session loads parameters from a saved profile file \texttt{example-30-45-60.txt} (included with the source code download package) and rigorously certifies the number $2.5$ as an upper bound for
$G_{\boldsymbol{\alpha}}$ (and therefore also for $\mu_\textrm{MS}$, by Proposition~\ref{prop:sofa-fg-bounds}(ii)) in about $50$ minutes of computation time on a laptop with a 1.3 GHz Intel Core M processor.
}
\label{code-listing}
\end{listing}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Bound & Saved profile file & Num.\ of iterations & Computation time \\
\hline
\eqref{eq:numerical-bound1} & \texttt{thm9-bound1.txt} & 7,724,162 & 480 hours \\
\eqref{eq:numerical-bound2} & \texttt{thm9-bound2.txt} & \phantom{0,000,}917 & 2 minutes \\
\eqref{eq:numerical-bound3} & \texttt{thm9-bound3.txt} & \phantom{,00}26,576 & 1:05 hours \\
\eqref{eq:numerical-bound4} & \texttt{thm9-bound4.txt} & \phantom{0,}140,467 & 6:23 hours \\
\hline
\end{tabular}
\caption{Benchmarking results for the computations used in the proof of the bounds \eqref{eq:numerical-bound1}--\eqref{eq:numerical-bound4}. The computations for \eqref{eq:numerical-bound1} were performed on a 2.3 GHz Intel Xeon E5-2630 processor, and the computations for \eqref{eq:numerical-bound2}--\eqref{eq:numerical-bound4} were performed on a 3.4 GHz Intel Core i7 processor.}
\label{table:benchmarking}
\end{center}
\end{table}
Additional details on \texttt{SofaBounds} can be found in the documentation included with the package.
\clearpage
| {
"timestamp": "2018-01-08T02:15:09",
"yymm": "1706",
"arxiv_id": "1706.06630",
"language": "en",
"url": "https://arxiv.org/abs/1706.06630",
"abstract": "The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape of maximal area that can move around a right-angled corner in a hallway of unit width. It is known that a maximal area shape exists, and that its area is at least 2.2195... - the area of an explicit construction found by Gerver in 1992 - and at most $2\\sqrt{2}=2.82...$, with the lower bound being conjectured as the true value. We prove a new and improved upper bound of 2.37. The method involves a computer-assisted proof scheme that can be used to rigorously derive further improved upper bounds that converge to the correct value.",
"subjects": "Metric Geometry (math.MG); Optimization and Control (math.OC)",
"title": "Improved upper bounds in the moving sofa problem"
} |
https://arxiv.org/abs/math/0606320 | Remarks on the Cayley Representation of Orthogonal Matrices and on Perturbing the Diagonal of a Matrix to Make it Invertible | This note contains two remarks. The first remark concerns the extension of the well-known Cayley representation of rotation matrices by skew symmetric matrices to rotation matrices admitting -1 as an eigenvalue and then to all orthogonal matrices. We review a method due to Hermann Weyl and another method involving multiplication by a diagonal matrix whose entries are +1 or -1. The second remark has to do with ways of flipping the signs of the entries of a diagonal matrix, C, with nonzero diagonal entries, obtaining a new matrix, E, so that E + A is invertible, where A is any given matrix (invertible or not). |
\section{The Cayley Representation of Orthogonal Matrices}
\label{sec1}
Given any rotation matrix, $R\in \mathbf{SO}(n)$,
if $R$ does not admit $-1$ as an eigenvalue, then
there is a unique skew symmetric matrix, $S$,
($\transpos{S} = -S$) so that
\[
R = (I - S)(I + S)^{-1}.
\]
This is a classical result of Cayley \cite{Cayley} (1846)
and $R$ is called the {\it Cayley transform of $S$\/}.
Among other sources, a proof can be found in
Hermann Weyl's beautiful book {\sl The Classical Groups\/}
\cite{Weyl46}, Chapter II, Section 10, Theorem 2.10.B (page 57).
\medskip
As we can see, this representation misses rotation matrices
admitting the eigenvalue $-1$, and of course,
as $\det((I - S)(I + S)^{-1}) = +1$, it misses
improper orthogonal matrices, i.e., those matrices
$R\in \mathbf{O}(n)$ with $\det(R) = -1$.
\medskip\noindent
{\bf Question 1}.
Is there a way to extend the Cayley representation to all
rotation matrices (matrices in $\mathbf{SO}(n)$)?
\medskip\noindent
{\bf Question 2}.
Is there a way to extend the Cayley representation to all
orthogonal matrices (matrices in $\mathbf{O}(n)$)?
\medskip\noindent
{\bf Answer}: Yes in both cases!
\medskip
An answer to Question 1 is given in Weyl's book \cite{Weyl46},
Chapter II, Section 10, Lemma 2.10.D (page 60):
\begin{prop} (Weyl)
\label{Weyl1}
Every rotation matrix, $R\in \mathbf{SO}(n)$, can be expressed as
a product
\[
R = (I - S_1)(I + S_1)^{-1}(I - S_2)(I + S_2)^{-1},
\]
where $S_1$ and $S_2$ are skew symmetric matrices.
\end{prop}
\medskip
Thus, if we allow two Cayley representation matrices, we can capture
orthogonal matrices having an even number of $-1$ as eigenvalues.
Actually, Proposition \ref{Weyl1} can be sharpened slightly as follows:
\begin{prop}
\label{prop2}
Every rotation matrix, $R\in \mathbf{SO}(n)$, can be expressed as
\[
R = \Bigl((I - S)(I + S)^{-1}\Bigr)^2
\]
where $S$ is a skew symmetric matrix.
\end{prop}
\medskip
Proposition \ref{prop2} can be easily proved using the following
well-known normal form for orthogonal matrices:
\begin{prop}
\label{prop3}
For every orthogonal matrix, $R\in \mathbf{O}(n)$,
there is an orthogonal matrix $P$
and a block diagonal matrix $D$
such that $R = PD\,\transpos{P}$,
where $D$ is of the form
\[
D = \amsdiagmat{D}{p}
\]
such that each block $D_i$ is either $1$, $-1$,
or a two-dimensional matrix of the form
$$D_i = \amsmata{\cos\theta_i}{-\sin\theta_i}{\sin\theta_i}{\cos\theta_i}$$
where $0 < \theta_i < \pi$.
\end{prop}
\medskip
In particular, if $R$ is a rotation matrix ($R\in \mathbf{SO}(n)$), then
it has an even number of eigenvalues $-1$. So, they can be grouped
into two-dimensional rotation matrices of the form
\[
\amsmata{-1}{0}{0}{-1},
\]
i.e., we allow $\theta_i = \pi$ and we may assume that
$D$ does not contain one-dimensional blocks of the form $-1$.
\medskip
A proof of Proposition \ref{prop3} can be found in
Gantmacher \cite{Gantmacher1}, Chapter IX, Section 13 (page 285),
or Berger \cite{Berger90}, or Gallier \cite{Gallbook2},
Chapter 11, Section 11.4 (Theorem 11.4.5).
\medskip
Now, for every two-dimensional rotation matrix
\[
T = \amsmata{\cos\theta}{-\sin\theta}{\sin\theta}{\cos\theta}
\]
with $0 < \theta \leq \pi$, observe that
\[
T^{\frac{1}{2}} =
\amsmata{\cos(\theta/2)}{-\sin(\theta/2)}{\sin(\theta/2)}{\cos(\theta/2)}
\]
does not admit $-1$ as an eigenvalue (since $0 < \theta/2 \leq\pi/2$)
and $T = \left(T^{\frac{1}{2}}\right)^2$. Thus,
if we form the matrix $R^{\frac{1}{2}}$ by replacing each
two-dimensional block $D_i$ in the above normal form
by $D_i^{\frac{1}{2}}$, we obtain a rotation matrix that
does not admit $-1$ as an eigenvalue,
$R = \left(R^{\frac{1}{2}}\right)^2$ and the Cayley transform of
$R^{\frac{1}{2}}$ is well defined. Therefore, we have proved
Proposition \ref{prop2}.
$\mathchoice\sqr76\sqr76\sqr{2.1}3\sqr{1.5}3$
\bigskip
Next, why is the answer to Question 2 also yes?
\medskip
This is because
\begin{prop}
\label{prop4}
For any orthogonal matrix, $R\in \mathbf{O}(n)$, there is
some diagonal matrix, $E$, whose entries are $+1$ or $-1$,
and some skew-symmetric matrix, $S$, so that
\[
R = E(I - S)(I + S)^{-1}.
\]
\end{prop}
\medskip
As such matrices $E$ are orthogonal, all matrices $E(I - S)(I + S)^{-1}$
are orthogonal, so we have a Cayley-style representation of all
orthogonal matrices.
\medskip
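As a quick sanity check (illustrative, not from the note), one can verify in exact rational arithmetic, for a small example, that $E(I-S)(I+S)^{-1}$ is indeed orthogonal; the skew parameter $s$ and the sign matrix $E$ below are arbitrary choices.

```python
from fractions import Fraction as F

# For a 2x2 skew-symmetric S and a diagonal sign matrix E, check that
# R = E (I - S)(I + S)^{-1} satisfies R^T R = I.  Exact rational
# arithmetic (Fraction) avoids any floating-point rounding issues.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):  # inverse of a 2x2 matrix via the adjugate formula
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

s = F(3)                                  # arbitrary skew parameter
S = [[F(0), s], [-s, F(0)]]               # skew symmetric: S^T = -S
I = [[F(1), F(0)], [F(0), F(1)]]
E = [[F(-1), F(0)], [F(0), F(1)]]         # diagonal entries -1, +1
I_minus_S = [[I[i][j] - S[i][j] for j in range(2)] for i in range(2)]
I_plus_S = [[I[i][j] + S[i][j] for j in range(2)] for i in range(2)]
R = mul(E, mul(I_minus_S, inv2(I_plus_S)))
RtR = mul([[R[j][i] for j in range(2)] for i in range(2)], R)
print(RtR == I)  # True; here det(R) = -1, so R is improper orthogonal
```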
I am not sure when Proposition \ref{prop4} was discovered
and originally published.
Since I could not locate this result in Weyl's book \cite{Weyl46},
I assume that it was not known before 1946, but I did stumble on
it as an exercise in Richard Bellman's classic \cite{Bellman}, first
published in 1960,
Chapter 6, Section 4, Exercise 11, page 91-92
(see also, Exercises, 7, 8, 9, and 10).
\medskip
Why does this work?
\medskip\noindent
{\bf Fact E}: Because, for every $n\times n$ matrix, $A$ (invertible
or not), there is some diagonal matrix, $E$, whose entries are $+1$ or $-1$,
so that $I + EA$ is invertible!
\medskip
This is Exercise 10 in Bellman \cite{Bellman} (Chapter 6, Section 4, page 91).
Using Fact E, it is easy to prove Proposition \ref{prop4}.
\medskip\noindent
{\it Proof of Proposition \ref{prop4}\/}.
Let $R\in \mathbf{O}(n)$ be any orthogonal matrix.
By Fact E, we can find a diagonal matrix, $E$ (with diagonal entries
$\pm1$), so that $I + ER$ is invertible. But then, as $E$ is orthogonal,
$ER$ is an orthogonal matrix that does not admit the eigenvalue $-1$ and
so, by the Cayley representation theorem, there is a
skew symmetric matrix, $S$, so that
\[
ER = (I - S)(I + S)^{-1}.
\]
However, notice that $E^2 = I$, so we get
\[
R = E(I - S)(I + S)^{-1},
\]
as claimed.
$\mathchoice\sqr76\sqr76\sqr{2.1}3\sqr{1.5}3$
\medskip
But why does Fact E hold?
\medskip
As we just observed, $E^2 = I$, so by multiplying by $E$,
\[
\hbox{$I + EA$ is invertible iff $E + A$ is.}
\]
\medskip
Thus, we are naturally led to the following problem:
If $A$ is any $n\times n$ matrix, is there a way to perturb
the diagonal entries of $A$, i.e., to add some diagonal
matrix, $C = \mathrm{diag}(c_1, \ldots, c_n)$, to $A$ so that
$C + A$ becomes invertible?
\medskip
Indeed this can be done, and we will show in the next section
that what matters is not the magnitude of the perturbation
but the signs of the entries being added.
\section{Perturbing the Diagonal of a Matrix to Make it Invertible}
\label{sec2}
In this section we prove the following result:
\begin{prop}
\label{prop5}
For every $n\times n$ matrix (invertible or not), $A$,
and any diagonal matrix, $C = \mathrm{diag}(c_1, \ldots, c_n)$,
with $c_i \not= 0$ for $i = 1, \ldots, n$,
there is an assignment of signs, $\epsilon_i = \pm 1$, so
that if $E = \mathrm{diag}(\epsilon_1 c_1, \ldots, \epsilon_n c_n)$, then
$E + A$ is invertible.
\end{prop}
\noindent{\it Proof\/}.\enspace
Let us evaluate the determinant of $C + A$. We see that
$\Delta = \det(C + A)$ is a polynomial of degree $n$ in the variables
$c_1, \ldots, c_n$ and that all the monomials of $\Delta$ consist
of products of distinct variables (i.e., every variable occurring in
a monomial has degree $1$). In particular, $\Delta$ contains the
monomial $c_1 \cdots c_n$.
In order to prove Proposition \ref{prop5}, it will suffice to prove
\begin{prop}
\label{prop6}
Given any polynomial, $P(x_1, \ldots, x_n)$, of degree $n$ (in the
indeterminates $x_1, \ldots, x_n$ and over any integral domain
of characteristic unequal to $2$), if
every monomial in $P$ is a product of distinct variables,
then for every $n$-tuple $(c_1, \ldots, c_n)$ such that
$c_i \not= 0$ for $i = 1, \ldots, n$,
there is an assignment of signs, $\epsilon_i = \pm 1$, so that
\[
P(\epsilon_1 c_1, \ldots, \epsilon_n c_n) \not= 0.
\]
\end{prop}
\medskip
Clearly, any assignment of signs given by Proposition \ref{prop6}
will make $\det(E + A) \not= 0$, proving Proposition \ref{prop5}.
$\mathchoice\sqr76\sqr76\sqr{2.1}3\sqr{1.5}3$
\medskip
It remains to prove Proposition \ref{prop6}.
\bigskip\noindent
{\it Proof of Proposition \ref{prop6}\/}.
We proceed by induction on $n$ (starting with $n = 1$).
For $n = 1$, the polynomial $P(x_1)$ is of the form
$P(x_1) = a + bx_1$, with $b\not= 0$ since $\mathrm{deg}(P) = 1$.
Obviously, for any $c\not= 0$, either $a + bc\not= 0$ or
$a - bc \not= 0$ (otherwise, $2bc = 0$, contradicting
$b\not= 0$, $c\not= 0$ and the ring being an integral domain
of characteristic $\not= 2$).
\medskip
Assume the induction hypothesis holds for any $n\geq 1$ and
let $P(x_1, \ldots, x_{n+1})$ be a polynomial of degree $n+1$
satisfying the conditions of Proposition \ref{prop6}.
Then, $P$ must be of the form
\[
P(x_1, \ldots, x_n, x_{n+1})
= Q(x_1, \ldots, x_n) + S(x_1, \ldots, x_n)x_{n+1},
\]
where both $Q(x_1, \ldots, x_n)$ and $S(x_1, \ldots, x_n)$
are polynomials in $x_1, \ldots, x_n$ and $S(x_1, \ldots, x_n)$
is of degree $n$ and all monomials in
$S$ are products of distinct variables. By the induction
hypothesis, we can find $(\epsilon_1, \ldots, \epsilon_n)$,
with $\epsilon_i = \pm 1$, so that
\[
S(\epsilon_1 c_1, \ldots, \epsilon_n c_n) \not= 0.
\]
But now, we are back to the case $n = 1$ with
the polynomial
\[
Q(\epsilon_1 c_1, \ldots, \epsilon_n c_n) +
S(\epsilon_1 c_1, \ldots, \epsilon_n c_n)x_{n+1},
\]
and we can find $\epsilon_{n+1} = \pm 1$ so that
\[
P(\epsilon_1 c_1, \ldots, \epsilon_n c_n, \epsilon_{n+1}c_{n+1}) =
Q(\epsilon_1 c_1, \ldots, \epsilon_n c_n) +
S(\epsilon_1 c_1, \ldots, \epsilon_n c_n)\epsilon_{n+1}c_{n+1}
\not= 0,
\]
establishing the induction hypothesis.
$\mathchoice\sqr76\sqr76\sqr{2.1}3\sqr{1.5}3$
\medskip
Note that in Proposition \ref{prop5}, the $c_i$ can be made
arbitrarily small or large, as long as they are not zero.
Thus, we see as a corollary that any matrix can be made
invertible by a very small perturbation of its diagonal elements.
What matters is the signs that are assigned to the perturbation.
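To illustrate Proposition \ref{prop5} concretely, here is a small brute-force sketch (illustrative only; the singular matrix $A$ and diagonal entries $c_i$ are arbitrary choices):

```python
import itertools

# Brute-force illustration: for a singular integer matrix A and nonzero
# diagonal entries c_i, some sign choice eps makes E + A invertible,
# where E = diag(eps_1 c_1, ..., eps_n c_n).  det() is an exact
# cofactor expansion along the first row (fine for small n).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

n = 4
A = [[1] * n for _ in range(n)]   # the all-ones matrix: rank 1, so singular
c = [1, 2, 3, 4]                  # arbitrary nonzero diagonal entries
good = [eps for eps in itertools.product((1, -1), repeat=n)
        if det([[A[i][j] + (eps[i] * c[i] if i == j else 0)
                 for j in range(n)] for i in range(n)]) != 0]
print(len(good) > 0)  # True: at least one sign assignment works
```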
\medskip
Another nice proof of Fact E is given in a short note by William Kahan
\cite{Kahan2004}. Due to its elegance, we feel compelled to
sketch Kahan's proof.
This proof uses two facts:
\begin{enumerate}
\item[(1)]
If $A = (A_1, \ldots, A_{n-1}, U)$ and $B = (A_1, \ldots, A_{n-1}, V)$
are two $n\times n$ matrices that differ in their last column,
then
\[
\det(A + B) = 2^{n-1}(\det(A) + \det(B)).
\]
This is because determinants are multilinear (alternating) maps of
their columns. Therefore, if $\det(A) = \det(B) = 0$, then
$\det(A + B) = 0$. Obviously, this fact also holds whenever
$A$ and $B$ differ by just one column (not just the last one).
\item[(2)]
For every $k = 0, \ldots, 2^{n}-1$, write $k$ in binary
as $k = k_n \cdots k_1$, and let $E_k$ be the diagonal matrix
whose $i$th diagonal entry is $-1$ if $k_i = 1$ and $+1$
if $k_i = 0$. For example, $E_0 = I$ and $E_{2^n -1} = -I$.
Observe that $E_{2i}$ and $E_{2i+1}$ differ in exactly one column
(the one corresponding to the lowest binary digit).
Then, it is easy to see that
\[
E_0 + E_1 + \cdots + E_{2^n -1} = 0.
\]
\end{enumerate}
The proof proceeds by contradiction. Assume that
$\det(I + E_kA) = 0$ for $k = 0, \ldots, 2^n -1$.
The crux of the proof is that
\[
\det(I + E_0A + I + E_1A + I + E_2A + \cdots + I + E_{2^n -1}A) = 0.
\]
However, as $E_0 + E_1 + \cdots + E_{2^n -1} = 0$, we see that
\[
I + E_0A + I + E_1A + I + E_2A + \cdots + I + E_{2^n -1}A
= 2^n I,
\]
and so,
\[
0 = \det(I + E_0A + I + E_1A + I + E_2A + \cdots + I + E_{2^n -1}A) =
\det(2^n I) = 2^{n^2} \not= 0,
\]
a contradiction!
\medskip
To prove that
$\det(I + E_0A + I + E_1A + I + E_2A + \cdots + I + E_{2^n -1}A) = 0$, we
observe, using facts (1) and (2), that
\[
\det(I + E_{2i}A + I + E_{2i+1}A) =
2^{n-1}\bigl(\det(I + E_{2i}A) + \det(I + E_{2i+1}A)\bigr) = 0,
\]
for $i = 0, \ldots, 2^{n - 1} -1$; similarly,
\[
\det(I + E_{4i}A + I + E_{4i+1}A + I + E_{4i+2}A + I + E_{4i+3}A) = 0,
\]
for $i = 0, \ldots, 2^{n - 2} -1$; by induction, we get
\[
\det(I + E_0A + I + E_1A + I + E_2A + \cdots + I + E_{2^n -1}A) = 0,
\]
which concludes the proof.
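Both facts are easy to confirm computationally; the following sketch (illustrative, not part of the note) checks fact (1) for random integer matrices with $n = 3$, and fact (2) by direct summation:

```python
import itertools
import random

# Check of facts (1) and (2) for n = 3.
def det3(M):  # determinant of a 3x3 matrix by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

random.seed(0)
n = 3
shared = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n - 1)]
U = [random.randint(-5, 5) for _ in range(n)]
V = [random.randint(-5, 5) for _ in range(n)]
A = [list(row) for row in zip(*(shared + [U]))]  # columns A_1, A_2, U
B = [list(row) for row in zip(*(shared + [V]))]  # columns A_1, A_2, V
AB = [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
print(det3(AB) == 2 ** (n - 1) * (det3(A) + det3(B)))  # True: fact (1)

# Fact (2): the 2^n diagonal sign matrices E_k sum to the zero matrix.
total = [[0] * n for _ in range(n)]
for eps in itertools.product((1, -1), repeat=n):
    for i in range(n):
        total[i][i] += eps[i]
print(total == [[0] * n for _ in range(n)])  # True
```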
\bigskip\noindent
{\bf Final Questions\/}:
\begin{enumerate}
\item[(1)]
When was Fact E first stated and by whom (similarly for
Proposition \ref{prop4})?
\item[(2)]
Can Proposition \ref{prop5} be generalized
to non-diagonal matrices (in an interesting way)?
\end{enumerate}
| {
"timestamp": "2006-06-13T23:47:20",
"yymm": "0606",
"arxiv_id": "math/0606320",
"language": "en",
"url": "https://arxiv.org/abs/math/0606320",
"abstract": "This note contains two remarks. The first remark concerns the extension of the well-known Cayley representation of rotation matrices by skew symmetric matrices to rotation matrices admitting -1 as an eigenvalue and then to all orthogonal matrices. We review a method due to Hermann Weyl and another method involving multiplication by a diagonal matrix whose entries are +1 or -1. The second remark has to do with ways of flipping the signs of the entries of a diagonal matrix, C, with nonzero diagonal entries, obtaining a new matrix, E, so that E + A is invertible, where A is any given matrix (invertible or not).",
"subjects": "Numerical Analysis (math.NA); General Mathematics (math.GM)",
"title": "Remarks on the Cayley Representation of Orthogonal Matrices and on Perturbing the Diagonal of a Matrix to Make it Invertible",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.990140146733106,
"lm_q2_score": 0.8080672181749422,
"lm_q1q2_score": 0.80009979397395
} |
https://arxiv.org/abs/0911.4204 | Maximal independent sets and separating covers | In 1973, Katona raised the problem of determining the maximum number of subsets in a separating cover on n elements. The answer to Katona's question turns out to be the inverse to the answer to a much simpler question: what is the largest integer which is the product of positive integers with sum n? We give a combinatorial explanation for this relationship, via Moon and Moser's answer to a question of Erdos: how many maximal independent sets can a graph on n vertices have? We conclude by showing how Moon and Moser's solution also sheds light on a problem of Mahler and Popken's about the complexity of integers. | \section{Introduction}
We begin with a simply stated problem, which has made numerous appearances in mathematics competitions:%
\footnote{In particular, the 1976 IMO asked for the $n=1976$ case, the 1979 Putnam asked for the $n=1979$ case, and on April 23rd 2002, the 3rd Community College of Philadelphia Colonial Mathematics Challenge asked for the $n=2002$ case.} %
what is the largest number which can be written as the product of positive integers that sum to $n$?
We denote this number by $\ell(n)$. A moment's thought shows that one should use as many $3$s as possible; if $m\ge 5$ appears in the product then it can be replaced by $3(m-3)>m$, and while $2$s and $4$s can occur in the product, the latter can occur at most once since $4\cdot 4<2 \cdot 3 \cdot 3$ and the former at most twice since $2 \cdot 2 \cdot 2<3 \cdot 3$. This shows that for $n\ge 2$,
$$
\ell(n)
=
\left\{
\begin{array}{ll}
3^i
&
\mbox{if $n=3i$,}\\
4 \cdot 3^{i-1}
&
\mbox{if $n=3i+1$,}\\
2 \cdot 3^i
&
\mbox{if $n=3i+2$,}\\
\end{array}
\right.
$$
while $\ell(1)=1$. Note that it follows from the combinatorial definition of $\ell$ that this function is strictly increasing and {\it super-multiplicative\/}, meaning that it satisfies $\ell(n_1)\ell(n_2)\le\ell(n_1+n_2)$.
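The closed form and the two properties are easy to confirm for small $n$; the sketch below (ours, illustrative only) compares the piecewise formula against a brute-force maximization over the first part of the sum.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ell_brute(n):
    # largest product of positive integers summing to n (one part allowed)
    best = n
    for k in range(1, n):
        best = max(best, k * ell_brute(n - k))
    return best

def ell(n):
    # the piecewise closed form
    if n == 1:
        return 1
    q, r = divmod(n, 3)
    return {0: 3 ** q, 1: 4 * 3 ** (q - 1), 2: 2 * 3 ** q}[r]

assert all(ell_brute(n) == ell(n) for n in range(1, 31))
# strictly increasing and super-multiplicative
assert all(ell(n) < ell(n + 1) for n in range(1, 30))
assert all(ell(a) * ell(b) <= ell(a + b)
           for a in range(1, 16) for b in range(1, 16))
print("ell formula verified up to n = 30")
```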
In 1973, G. O. H. Katona~\cite[Problem 8, p. 306]{katona:combinatorial-s:} posed a problem which looks completely unlike the determination of $\ell(n)$. A {\it separating cover\/}%
\footnote{We make this slight deviation from Katona's original formulation so that $s(1)=1$.}
over the ground set $X$ is a collection $\S$ of subsets of $X$ which satisfies two properties:
\begin{itemize}
\item the union of the sets in $\S$ is all of $X$, and
\item for every pair of distinct elements $x,y\in X$ there are disjoint sets $S,T\in\S$ with $x\in S$ and $y\in T$.
\end{itemize}
Katona asked about the function
$$
s(m)=\min\{n : \mbox{there is a separating cover on $m$ elements with $n$ sets}\}.
$$
M.-C. Cai and A. C. C. Yao gave independent solutions several years later.
\begin{theorem}[Cai~\cite{cai:solutions-to-ed:} and Yao~\cite{yao:on-a-problem-of:}, independently]\label{sss}
For all $m\ge 2$,
$$
s(m)
=
\left\{
\begin{array}{ll}
3i
&
\mbox{if $2 \cdot 3^{i-1}<m\le 3^i$,}\\
3i+1
&
\mbox{if $3^i<m\le 4 \cdot 3^{i-1}$,}\\
3i+2
&
\mbox{if $4 \cdot 3^{i-1}<m\le 2 \cdot 3^i$,}\\
\end{array}
\right.
$$
while $s(1)=1$.
\end{theorem}
Thus $s(\ell(n))=n$ for all positive integers $n$ --- in other words, $s$ is a left inverse of $\ell$. Ironically, the question we began with appears at the beginning of R. Honsberger's {\it Mathematical Gems III\/}~\cite{honsberger:mathematical-ge:}, while Katona's problem occurs at the end, where Honsberger describes the proof as ``long and much more complicated than the arguments in the earlier chapters.'' We present a short combinatorial explanation for the equivalence of these two problems.
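Anticipating Proposition~\ref{main}, one can check numerically that the Cai--Yao formula agrees with $\min\{n : \ell(n)\ge m\}$, and in particular that $s(\ell(n)) = n$. A small sketch (ours; the function names are our own, and exact rationals handle the $i=0$ boundary cases):

```python
from fractions import Fraction

def ell(n):
    if n == 1:
        return 1
    q, r = divmod(n, 3)
    return {0: 3 ** q, 1: 4 * 3 ** (q - 1), 2: 2 * 3 ** q}[r]

def s_via_ell(m):
    # s(m) = min { n : ell(n) >= m }
    n = 1
    while ell(n) < m:
        n += 1
    return n

def s_cai_yao(m):
    # the piecewise Cai-Yao formula
    if m == 1:
        return 1
    i = 0
    while True:
        if Fraction(2 * 3 ** i, 3) < m <= 3 ** i:
            return 3 * i
        if 3 ** i < m <= Fraction(4 * 3 ** i, 3):
            return 3 * i + 1
        if Fraction(4 * 3 ** i, 3) < m <= 2 * 3 ** i:
            return 3 * i + 2
        i += 1

assert all(s_cai_yao(m) == s_via_ell(m) for m in range(1, 2 * 3 ** 5 + 1))
assert all(s_cai_yao(ell(n)) == n for n in range(1, 16))
print("s(ell(n)) = n verified for n up to 15")
```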
\section{A Combinatorial Interpretation of ${\ell}$}
In order to give a combinatorial explanation for why $s(\ell(n))=n$, we first need a combinatorial interpretation of $\ell$. We use a graph-theoretic interpretation, although several others are available.%
\footnote{Another --- in terms of integer complexity --- is given later in this note. Additionally, $\ell(n)$ is the order of the largest abelian subgroup of the symmetric group of order $n$; see Bercov and Moser~\cite{bercov:on-abelian-perm:}.} %
Let $G$ be a graph over the vertex set $V(G)$. A subset $I\subseteq V(G)$ is {\it independent\/} if there is no edge between any two vertices of $I$, and it is a {\it maximal independent set (MIS)\/} if it is not properly contained in any other independent set. In the 1960s, P. Erd\H{o}s asked how many MISes a graph on $n$ vertices could have, which we define as
$$
g(n)=\max\{m : \mbox{there is a graph on $n$ vertices with $m$ MISes}\}.
$$
Let us denote by $m(G)$ the number of MISes in the graph $G$. This quantity is particularly easy to compute when $G$ is a disjoint union:
\begin{proposition}\label{mis-product}
The disjoint union of the graphs $G$ and $H$ has $m(G)m(H)$ MISes.
\end{proposition}
\begin{proof}
For any MIS $M$ of this union, $M\cap V(G)$ must be an MIS of $G$ and $M\cap V(H)$ must be an MIS of $H$. Conversely, if $M_G$ and $M_H$ are MISes of $G$ and $H$, respectively, then $M_G\cup M_H$ is an MIS of the disjoint union of $G$ and $H$.
\end{proof}
Because the complete graph on $n$ vertices has $n$ MISes, Proposition~\ref{mis-product} implies that $g(n)\ge\ell(n)$ for all positive integers $n$; we need only take a disjoint union of edges, triangles, and complete graphs on $4$ vertices to achieve this lower bound. In 1965, J. W. Moon and L. Moser proved that this is best possible.
\begin{theorem}[Moon and Moser~\cite{moon:on-cliques-in-g:}]\label{mis}
For all positive integers $n$, $g(n)=\ell(n)$.
\end{theorem}
Indeed, Moon and Moser showed that the only extremal graphs (the graphs with $g(n)$ MISes) are those built by taking disjoint copies of edges, triangles, and complete graphs on $4$ vertices in the quantities suggested by the formula for $\ell$. (In the case $n=3i+1\ge 4$ there are two extremal graphs, one with $i-1$ triangles and two disjoint edges, the other with $i-1$ triangles and a complete graph on $4$ vertices.)
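For very small $n$, Theorem~\ref{mis} can be verified by exhaustive search over all graphs. The brute-force sketch below (ours, illustrative; feasible only up to about $n = 5$) encodes vertex subsets as bitmasks.

```python
from itertools import combinations

def count_mis(n, edges):
    # count maximal independent sets; vertex subsets encoded as bitmasks
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    indep = {m for m in range(1 << n)
             if all(not (adj[v] & m) for v in range(n) if m >> v & 1)}
    # maximal: adding any outside vertex destroys independence
    return sum(all((m | 1 << v) not in indep
                   for v in range(n) if not m >> v & 1)
               for m in indep)

def g_brute(n):
    # maximize over all graphs on n labeled vertices
    pairs = list(combinations(range(n), 2))
    return max(count_mis(n, [pairs[i] for i in range(len(pairs))
                             if emask >> i & 1])
               for emask in range(1 << len(pairs)))

assert [g_brute(n) for n in range(1, 6)] == [1, 2, 3, 4, 6]  # = ell(1..5)
# two disjoint triangles attain ell(6) = 9
triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
assert count_mis(6, triangles) == 9
print("g(n) = ell(n) confirmed for n <= 5")
```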
\section{A Short Proof of Theorem~\ref{mis}}
Before demonstrating the relationship between MISes and separating covers, we pause to present a short proof of Moon and Moser's theorem. First we need a definition: for a set $X\subseteq V(G)$, we denote by $G-X$ the graph obtained by removing the vertices $X$ from $G$ and all edges incident to vertices in $X$. When $X=\{v\}$, we abbreviate this notation to $G-v$. Our proof makes extensive use of the following upper bound.
\begin{proposition}\label{mis-recurse}
For any graph $G$ and vertex $v\in V(G)$, we have
$$
m(G)\le m(G-v)+m(G-N[v]),
$$
where $N[v]$ denotes the {\it closed neighborhood\/} of $v$, i.e., $v$ together with its neighbors.
\end{proposition}
\begin{proof}
The map $M\mapsto M-v$ gives a bijection between MISes of $G$ containing $v$ and MISes of $G-N[v]$. The proof is completed by noting that every MIS of $G$ that does not contain $v$ is also an MIS of $G-v$.
\end{proof}
\newenvironment{mis-proof}{\medskip\noindent {\it Proof of Theorem~\ref{mis}.\/}}{\qed\bigskip}
\begin{mis-proof}
Our proof is by induction on $n$, and we prove the stronger statement which characterizes the extremal graphs. It is easy to check the theorem for graphs with five or fewer vertices, so take $G$ to be a graph on $n\ge 6$ vertices, and assume the theorem holds for graphs with fewer than $n$ vertices.
If $G$ contains a vertex of degree $0$, that is, an isolated vertex, then clearly $m(G)\le g(n-1)=\ell(n-1)<\ell(n)$. If $G$ contains a vertex $v$ of degree $1$ then, letting $w$ denote the sole vertex adjacent to $v$, we have by Proposition~\ref{mis-recurse} that
$$
m(G)
\le
m(G-w)+m(G-N[w])
\le
2\ell(n-2)
=
\left\{\begin{array}{ll}
8\cdot 3^{i-2}&\mbox{if $n=3i$,}\\
4\cdot 3^{i-1}&\mbox{if $n=3i+1$,}\\
2\cdot 3^{i}&\mbox{if $n=3i+2$.}
\end{array}\right.
$$
In all three cases we have an upper bound of at most $\ell(n)$, with equality if and only if $n=3i+1$ and $G$ is a disjoint union of $i-1$ triangles and two edges, or $n=3i+2$ and $G$ is a disjoint union of $i$ triangles and an edge.
If $G$ contains a vertex $v$ of degree $3$ or greater, then we have
$$
m(G)
\le
m(G-v)+m(G-N[v])
\le
\ell(n-1)+\ell(n-4)
=
\left\{\begin{array}{ll}
8\cdot 3^{i-2}&\mbox{if $n=3i$,}\\
4\cdot 3^{i-1}&\mbox{if $n=3i+1$,}\\
16\cdot 3^{i-2}&\mbox{if $n=3i+2$.}
\end{array}\right.
$$
Again, all three cases give an upper bound of at most $\ell(n)$, with equality if and only if $n=3i+1$ and $G$ is a disjoint union of $i-1$ triangles together with a complete graph on $4$ vertices.
This leaves us to consider the case where every vertex of $G$ has degree $2$, which implies that $G$ consists of a disjoint union of cycles. If each of these cycles is a triangle, then $n=3i$ and $G$ is a disjoint union of $i$ triangles, as desired. Thus we may assume that at least one connected component of $G$ is a cycle of length $j\ge 4$, which we denote by $C_j$. Our goal in this case is to show that $G$ is not extremal (i.e., $m(G)<\ell(n)$), and by the super-multiplicativity of $\ell$, it suffices to show that this single cycle of length $j$ is not extremal. It is easy to check that $m(C_4)=2<4=\ell(4)$ and $m(C_5)=5<6=\ell(5)$; it therefore suffices to show that $m(C_j)<\ell(j)$ for $j\ge 6$. (In fact, F\"uredi~\cite{furedi:the-number-of-m:} found $m(C_j)$ exactly --- it is the $j$th Perrin number.) Label the vertices of our cycle on $j\ge 6$ vertices as $u,v,w,\dots$ so that $u$ is adjacent to $v$, which is in turn adjacent to $w$. By applying Proposition~\ref{mis-recurse} twice, we see that for $j\ge 6$,
\begin{eqnarray*}
m(C_j)
&\le&
m(C_j-w)+m(C_j-N[w])\\
&\le&
m(C_j-w-u)+m(C_j-w-N[u])+m(C_j-N[w])\\
&\le&
2\ell(j-3)+\ell(j-4),
\end{eqnarray*}
which is strictly less than $3\ell(j-3)=\ell(j)$, completing the proof.
\end{mis-proof}
\section{A Combinatorial Explanation for ${s(\ell(n))=n}$}
With Moon and Moser's Theorem~\ref{mis} proved, we are now ready to explain the connection to separating covers. Propositions~\ref{mis2sss} and \ref{sss2mis} illuminate the connection between separating covers and MISes, and then Proposition~\ref{main} gives a combinatorial explanation for why $s$ is a left inverse of $\ell=g$.
\begin{proposition}\label{mis2sss}
From a graph on $n$ vertices with $m$ MISes one can construct a separating cover on $m$ elements with at most $n$ sets.
\end{proposition}
\begin{proof}
Take $G$ to be a graph with $n$ vertices and $m$ MISes and let $\mathcal{M}$ denote the collection of MISes in $G$. The separating cover promised consists of the family of sets $\{S_v : v\in V(G)\}$ where
$$
S_v=\{M\in\mathcal{M} : v\in M\}.
$$
Clearly this is a family with $m$ elements (the MISes $\mathcal{M}$) and $n$ (not necessarily distinct) sets (one for each vertex of $G$), and this family covers the set $\mathcal{M}$ because each MIS lies in at least one $S_v$, so it remains to check only that it is separating. Take distinct sets $M, N\in\mathcal{M}$. Because $M$ and $N$ are both maximal, there is some vertex $u\in M\setminus N$. By the maximality of $N$, it must contain a vertex $v$ adjacent to $u$. Therefore $M\in S_u$, $N\in S_v$, and because $u$ and $v$ are adjacent, $S_u\cap S_v=\emptyset$, completing the proof.
\end{proof}
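Proposition~\ref{mis2sss} can be illustrated concretely. Below (our sketch, not part of the paper) we take the disjoint union of a triangle and an edge, list its six MISes explicitly, build the family $\{S_v\}$, and verify the covering and separating properties.

```python
from itertools import combinations

# triangle on {0,1,2} plus the edge {3,4}; its 6 MISes listed explicitly
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
mises = [frozenset({a, b}) for a in (0, 1, 2) for b in (3, 4)]

# S_v = the MISes containing v
S = {v: frozenset(M for M in mises if v in M) for v in adj}

# covering: every MIS lies in some S_v
assert all(any(M in S[v] for v in adj) for M in mises)

# separating: distinct MISes M, N lie in disjoint S_u, S_v (u, v adjacent)
for M, N in combinations(mises, 2):
    assert any(M in S[u] and N in S[v] and not (S[u] & S[v])
               for u in adj for v in adj[u])
print("{S_v} is a separating cover on", len(mises),
      "elements using", len(S), "sets")
```

Note that $5 = s(6)$ here, so this family is in fact optimal.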
\begin{proposition}\label{sss2mis}
From a separating cover on $m$ elements with $n$ sets one can construct a graph on $n$ vertices with at least $m$ MISes.
\end{proposition}
\begin{proof}
Let $\S$ be such a cover over the ground set $X$. We define a graph $G$ on the vertices $\S$ where $S\in\S$ is adjacent to $T\in\S$ if and only if they are disjoint. For each $x\in X$, the set
$$
I_x=\{S\in\S : x\in S\}
$$
is an independent set in $G$. For each $x\in X$, choose an MIS $M_x\supseteq I_x$. We have only to show that these MISes are distinct. Take distinct elements $x,y\in X$. Because $\S$ is separating, there are disjoint sets $S,T\in\S$ with $x\in S$ and $y\in T$. Therefore $S\in M_x$, $T\in M_y$, and since $S$ and $T$ are disjoint they are adjacent in $G$, so $T\notin M_x$, and thus $M_x\neq M_y$.
\end{proof}
\begin{proposition}\label{main}
For all positive integers $m$ and $n$,
\begin{eqnarray*}
s(m)&=&\min\{n : g(n)\ge m\},\\
g(n)&=&\max\{m : s(m)\le n\}.
\end{eqnarray*}
\end{proposition}
\begin{proof}
First observe that $s$ and $g$ are both nondecreasing. The proof then follows from the two claims
\begin{enumerate}
\item[(1)] If $s(m)\le n$ then $g(n)\ge m$, and
\item[(2)] If $g(n)\ge m$ then $s(m)\le n$.
\end{enumerate}
To prove (1), suppose that $s(m)\le n$. Then there is a separating cover with $m$ elements and at most $n$ sets, so by Proposition~\ref{sss2mis}, there is a graph with at most $n$ vertices and at least $m$ MISes. This and the fact that $g$ is nondecreasing establish that $g(n)\ge m$.
Now suppose that $g(n)\ge m$. Then there is a graph with $n$ vertices and at least $m$ MISes, so by Proposition~\ref{mis2sss}, there is a separating cover with at least $m$ elements and at most $n$ sets. Because $s$ is nondecreasing, we conclude that $s(m)\le n$, proving (2).
\end{proof}
\section{Integer Complexity}
We conclude with another appearance of $g$. The {\it complexity\/}, $c(m)$, of the integer $m$ is the least number of $1$s needed to represent it using only $+$s, $\cdot$s, and parentheses. For example, the complexity of $10$ is $7$, and there are essentially three different minimal expressions:
$$
10=(1+1+1)(1+1+1)+1=(1+1)(1+1+1+1+1)=(1+1)((1+1)(1+1)+1).
$$
Figure~\ref{fig-complexity} shows a plot of the complexities of the first $1000$ integers.
\begin{figure}
\begin{center}
\savedata{\plotdata}[{{1,1},{2,2},{3,3},{4,4},{5,5},{6,5},{7,6},{8,6},{9,6},{10,7},{11,8},{12,7},{13,8},{14,8},{15,8},{16,8},{17,9},{18,8},{19,9},{20,9},{21,9},{22,10},{23,11},{24,9},{25,10},{26,10},{27,9},{28,10},{29,11},{30,10},{31,11},{32,10},{33,11},{34,11},{35,11},{36,10},{37,11},{38,11},{39,11},{40,11},{41,12},{42,11},{43,12},{44,12},{45,11},{46,12},{47,13},{48,11},{49,12},{50,12},{51,12},{52,12},{53,13},{54,11},{55,12},{56,12},{57,12},{58,13},{59,14},{60,12},{61,13},{62,13},{63,12},{64,12},{65,13},{66,13},{67,14},{68,13},{69,14},{70,13},{71,14},{72,12},{73,13},{74,13},{75,13},{76,13},{77,14},{78,13},{79,14},{80,13},{81,12},{82,13},{83,14},{84,13},{85,14},{86,14},{87,14},{88,14},{89,15},{90,13},{91,14},{92,14},{93,14},{94,15},{95,14},{96,13},{97,14},{98,14},{99,14},{100,14},{101,15},{102,14},{103,15},{104,14},{105,14},{106,15},{107,16},{108,13},{109,14},{110,14},{111,14},{112,14},{113,15},{114,14},{115,15},{116,15},{117,14},{118,15},{119,15},{120,14},{121,15},{122,15},{123,15},{124,15},{125,15},{126,14},{127,15},{128,14},{129,15},{130,15},{131,16},{132,15},{133,15},{134,16},{135,14},{136,15},{137,16},{138,15},{139,16},{140,15},{141,16},{142,16},{143,16},{144,14},{145,15},{146,15},{147,15},{148,15},{149,16},{150,15},{151,16},{152,15},{153,15},{154,16},{155,16},{156,15},{157,16},{158,16},{159,16},{160,15},{161,16},{162,14},{163,15},{164,15},{165,15},{166,16},{167,17},{168,15},{169,16},{170,16},{171,15},{172,16},{173,17},{174,16},{175,16},{176,16},{177,17},{178,17},{179,18},{180,15},{181,16},{182,16},{183,16},{184,16},{185,16},{186,16},{187,17},{188,17},{189,15},{190,16},{191,17},{192,15},{193,16},{194,16},{195,16},{196,16},{197,17},{198,16},{199,17},{200,16},{201,17},{202,17},{203,17},{204,16},{205,17},{206,17},{207,17},{208,16},{209,17},{210,16},{211,17},{212,17},{213,17},{214,18},{215,17},{216,15},{217,16},{218,16},{219,16},{220,16},{221,17},{222,16},{223,17},{224,16},{225,16},{226,17},{227,18},{228,16},{229,17},{230,17},{231,17},{232,17},{233,18},{234,16},{23
5,17},{236,17},{237,17},{238,17},{239,18},{240,16},{241,17},{242,17},{243,15},{244,16},{245,17},{246,16},{247,17},{248,17},{249,17},{250,17},{251,18},{252,16},{253,17},{254,17},{255,17},{256,16},{257,17},{258,17},{259,17},{260,17},{261,17},{262,18},{263,19},{264,17},{265,18},{266,17},{267,18},{268,18},{269,19},{270,16},{271,17},{272,17},{273,17},{274,18},{275,17},{276,17},{277,18},{278,18},{279,17},{280,17},{281,18},{282,18},{283,19},{284,18},{285,17},{286,18},{287,18},{288,16},{289,17},{290,17},{291,17},{292,17},{293,18},{294,17},{295,18},{296,17},{297,17},{298,18},{299,19},{300,17},{301,18},{302,18},{303,18},{304,17},{305,18},{306,17},{307,18},{308,18},{309,18},{310,18},{311,19},{312,17},{313,18},{314,18},{315,17},{316,18},{317,19},{318,18},{319,19},{320,17},{321,18},{322,18},{323,18},{324,16},{325,17},{326,17},{327,17},{328,17},{329,18},{330,17},{331,18},{332,18},{333,17},{334,18},{335,19},{336,17},{337,18},{338,18},{339,18},{340,18},{341,19},{342,17},{343,18},{344,18},{345,18},{346,19},{347,20},{348,18},{349,19},{350,18},{351,17},{352,18},{353,19},{354,18},{355,19},{356,19},{357,18},{358,19},{359,20},{360,17},{361,18},{362,18},{363,18},{364,18},{365,18},{366,18},{367,19},{368,18},{369,18},{370,18},{371,19},{372,18},{373,19},{374,19},{375,18},{376,19},{377,19},{378,17},{379,18},{380,18},{381,18},{382,19},{383,20},{384,17},{385,18},{386,18},{387,18},{388,18},{389,19},{390,18},{391,19},{392,18},{393,19},{394,19},{395,19},{396,18},{397,19},{398,19},{399,18},{400,18},{401,19},{402,19},{403,19},{404,19},{405,17},{406,18},{407,19},{408,18},{409,19},{410,18},{411,19},{412,19},{413,20},{414,18},{415,19},{416,18},{417,19},{418,19},{419,20},{420,18},{421,19},{422,19},{423,19},{424,19},{425,19},{426,19},{427,19},{428,20},{429,19},{430,19},{431,20},{432,17},{433,18},{434,18},{435,18},{436,18},{437,19},{438,18},{439,19},{440,18},{441,18},{442,19},{443,20},{444,18},{445,19},{446,19},{447,19},{448,18},{449,19},{450,18},{451,19},{452,19},{453,19},{454,20},{455,19},{456,18},{457,
19},{458,19},{459,18},{460,19},{461,20},{462,19},{463,20},{464,19},{465,19},{466,20},{467,21},{468,18},{469,19},{470,19},{471,19},{472,19},{473,20},{474,19},{475,19},{476,19},{477,19},{478,20},{479,21},{480,18},{481,19},{482,19},{483,19},{484,19},{485,19},{486,17},{487,18},{488,18},{489,18},{490,19},{491,20},{492,18},{493,19},{494,19},{495,18},{496,19},{497,20},{498,19},{499,20},{500,19},{501,20},{502,20},{503,21},{504,18},{505,19},{506,19},{507,19},{508,19},{509,20},{510,19},{511,19},{512,18},{513,18},{514,19},{515,20},{516,19},{517,20},{518,19},{519,20},{520,19},{521,20},{522,19},{523,20},{524,20},{525,19},{526,20},{527,20},{528,19},{529,20},{530,20},{531,20},{532,19},{533,20},{534,20},{535,21},{536,20},{537,21},{538,21},{539,20},{540,18},{541,19},{542,19},{543,19},{544,19},{545,19},{546,19},{547,20},{548,20},{549,19},{550,19},{551,20},{552,19},{553,20},{554,20},{555,19},{556,20},{557,21},{558,19},{559,20},{560,19},{561,20},{562,20},{563,21},{564,20},{565,20},{566,21},{567,18},{568,19},{569,20},{570,19},{571,20},{572,20},{573,20},{574,19},{575,20},{576,18},{577,19},{578,19},{579,19},{580,19},{581,20},{582,19},{583,20},{584,19},{585,19},{586,20},{587,21},{588,19},{589,20},{590,20},{591,20},{592,19},{593,20},{594,19},{595,20},{596,20},{597,20},{598,20},{599,21},{600,19},{601,20},{602,20},{603,20},{604,20},{605,20},{606,20},{607,21},{608,19},{609,20},{610,20},{611,21},{612,19},{613,20},{614,20},{615,20},{616,20},{617,21},{618,20},{619,21},{620,20},{621,20},{622,21},{623,21},{624,19},{625,20},{626,20},{627,20},{628,20},{629,20},{630,19},{631,20},{632,20},{633,20},{634,21},{635,20},{636,20},{637,20},{638,21},{639,20},{640,19},{641,20},{642,20},{643,21},{644,20},{645,20},{646,20},{647,21},{648,18},{649,19},{650,19},{651,19},{652,19},{653,20},{654,19},{655,20},{656,19},{657,19},{658,20},{659,21},{660,19},{661,20},{662,20},{663,20},{664,20},{665,20},{666,19},{667,20},{668,20},{669,20},{670,21},{671,21},{672,19},{673,20},{674,20},{675,19},{676,20},{677,21},{678,20},{679,20
},{680,20},{681,21},{682,21},{683,22},{684,19},{685,20},{686,20},{687,20},{688,20},{689,21},{690,20},{691,21},{692,21},{693,20},{694,21},{695,21},{696,20},{697,21},{698,21},{699,21},{700,20},{701,21},{702,19},{703,20},{704,20},{705,20},{706,21},{707,21},{708,20},{709,21},{710,21},{711,20},{712,21},{713,22},{714,20},{715,20},{716,21},{717,21},{718,22},{719,23},{720,19},{721,20},{722,20},{723,20},{724,20},{725,20},{726,20},{727,21},{728,20},{729,18},{730,19},{731,20},{732,19},{733,20},{734,21},{735,20},{736,20},{737,21},{738,19},{739,20},{740,20},{741,20},{742,21},{743,22},{744,20},{745,21},{746,21},{747,20},{748,21},{749,22},{750,20},{751,21},{752,21},{753,21},{754,21},{755,21},{756,19},{757,20},{758,20},{759,20},{760,20},{761,21},{762,20},{763,20},{764,21},{765,20},{766,21},{767,22},{768,19},{769,20},{770,20},{771,20},{772,20},{773,21},{774,20},{775,21},{776,20},{777,20},{778,21},{779,21},{780,20},{781,21},{782,21},{783,20},{784,20},{785,21},{786,21},{787,22},{788,21},{789,22},{790,21},{791,21},{792,20},{793,21},{794,21},{795,21},{796,21},{797,22},{798,20},{799,21},{800,20},{801,21},{802,21},{803,21},{804,21},{805,21},{806,21},{807,22},{808,21},{809,22},{810,19},{811,20},{812,20},{813,20},{814,21},{815,20},{816,20},{817,21},{818,21},{819,20},{820,20},{821,21},{822,21},{823,22},{824,21},{825,20},{826,21},{827,22},{828,20},{829,21},{830,21},{831,21},{832,20},{833,21},{834,21},{835,22},{836,21},{837,20},{838,21},{839,22},{840,20},{841,21},{842,21},{843,21},{844,21},{845,21},{846,21},{847,21},{848,21},{849,22},{850,21},{851,22},{852,21},{853,22},{854,21},{855,20},{856,21},{857,22},{858,21},{859,22},{860,21},{861,21},{862,22},{863,23},{864,19},{865,20},{866,20},{867,20},{868,20},{869,21},{870,20},{871,21},{872,20},{873,20},{874,21},{875,21},{876,20},{877,21},{878,21},{879,21},{880,20},{881,21},{882,20},{883,21},{884,21},{885,21},{886,22},{887,23},{888,20},{889,21},{890,21},{891,20},{892,21},{893,22},{894,21},{895,22},{896,20},{897,21},{898,21},{899,22},{900,20},{901,21},
{902,21},{903,21},{904,21},{905,21},{906,21},{907,22},{908,22},{909,21},{910,21},{911,22},{912,20},{913,21},{914,21},{915,21},{916,21},{917,22},{918,20},{919,21},{920,21},{921,21},{922,22},{923,22},{924,21},{925,21},{926,22},{927,21},{928,21},{929,22},{930,21},{931,21},{932,22},{933,22},{934,23},{935,21},{936,20},{937,21},{938,21},{939,21},{940,21},{941,22},{942,21},{943,22},{944,21},{945,20},{946,21},{947,22},{948,21},{949,21},{950,21},{951,22},{952,21},{953,22},{954,21},{955,22},{956,22},{957,22},{958,23},{959,22},{960,20},{961,21},{962,21},{963,21},{964,21},{965,21},{966,21},{967,22},{968,21},{969,21},{970,21},{971,22},{972,19},{973,20},{974,20},{975,20},{976,20},{977,21},{978,20},{979,21},{980,21},{981,20},{982,21},{983,22},{984,20},{985,21},{986,21},{987,21},{988,21},{989,22},{990,20},{991,21},{992,21},{993,21},{994,22},{995,22},{996,21},{997,22},{998,22},{999,20},{1000,21}}]
\psset{xunit=0.00474525in, yunit=0.0520833in}
\begin{pspicture}(0,-3)(1053.684211,28.80000000)
\psaxes[dy=5,Dy=5,dx=100,Dx=100,showorigin=false](0,0)(999,23)
\rput[c](0,25.4){$c(m)$}
\rput[l](1026.34,0){$m$}
\dataplot[plotstyle=dots,dotstyle=*,dotsize=3\psxunit]{\plotdata}
\end{pspicture}
\end{center}
\caption{The complexities of the first 1000 integers.}\label{fig-complexity}
\end{figure}
This definition was first considered by Mahler and Popken~\cite{mahler:on-a-maximum-pr:}, and while a straightforward recurrence,
$$
c(m)=\min\Bigl(\{c(d)+c(m/d) : d \mid m,\ 1 < d < m\}\cup\{c(i)+c(m-i) : 1\le i\le m-1\}\Bigr),
$$
is easy to verify, several outstanding conjectures and questions remain, for which we refer to R. K. Guy~\cite{guy:unsolved-proble:}. In that article, Guy mentions that J. Selfridge gave an inductive proof of the following result.
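Both the recurrence and Selfridge's proposition are easy to check by a straightforward dynamic program. The sketch below (ours, illustrative) computes $c(m)$ bottom-up for $m \le 1000$, reproducing $c(10)=7$ and confirming that $\ell(n)$ is the greatest integer of complexity $n$ in this range.

```python
# bottom-up DP for integer complexity c(m), m <= N
N = 1000
INF = float("inf")
c = [INF] * (N + 1)
c[1] = 1
for m in range(2, N + 1):
    best = min(c[i] + c[m - i] for i in range(1, m))   # additive splits
    d = 2
    while d * d <= m:                                  # multiplicative splits
        if m % d == 0:
            best = min(best, c[d] + c[m // d])
        d += 1
    c[m] = best

assert c[10] == 7

def ell(n):
    if n == 1:
        return 1
    q, r = divmod(n, 3)
    return {0: 3 ** q, 1: 4 * 3 ** (q - 1), 2: 2 * 3 ** q}[r]

# Selfridge: g(n) = ell(n) is the greatest integer of complexity n
for n in range(1, 20):
    if ell(n) <= N:
        assert c[ell(n)] == n
        assert all(c[m] > n for m in range(ell(n) + 1, N + 1))
print("c(10) =", c[10], "; Selfridge's proposition holds up to", N)
```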
\begin{proposition}[Selfridge~{[unpublished]}]\label{prop-integer-complexity}
The greatest integer of complexity $n$ is $g(n)$.
\end{proposition}
One direction of Selfridge's proposition is clear: the problem we began with shows that $\ell(n)=g(n)$ has complexity at most $n$. In a final demonstration of the surprising versatility of Moon and Moser's Theorem~\ref{mis}, we show how it implies the other direction, via the following construction.
\begin{proposition}\label{comp2mis}
From an expression of the integer $m$ with $n$ $1$s one can construct a graph on $n$ vertices with $m$ MISes.
\end{proposition}
\begin{figure}
\begin{center}
\psset{xunit=0.007in, yunit=0.007in}
\psset{linewidth=0.02in}
\begin{pspicture}(0,0)(120,80)
\pscircle*(0,20){0.04in}
\pscircle*(0,60){0.04in}
\pscircle*(40,20){0.04in}
\pscircle*(40,60){0.04in}
\pscircle*(80,40){0.04in}
\pscircle*(120,20){0.04in}
\pscircle*(120,60){0.04in}
\psline(0,20)(0,60)
\psline(40,20)(40,60)
\psline(40,20)(120,60)
\psline(40,60)(120,20)
\psline(120,20)(120,60)
\end{pspicture}
\end{center}
\caption{The construction described in the proof of Proposition~\ref{prop-integer-complexity}, applied to the expression
$$
10=(1+1)((1+1)(1+1)+1).
$$
There are graphs on $7$ vertices with more MISes than the graph shown because $10$ is not the greatest integer of complexity $7$ ($12$ is).}
\label{prop-integer-complexity-figure}
\end{figure}
\begin{proof}
Before describing our inductive construction we need a definition. Given graphs $G$ and $H$, their {\it join\/} is the graph $G+H$ obtained from their disjoint union $G\cup H$ by adding all edges connecting vertices of $G$ with vertices of $H$. We know already from Proposition~\ref{mis-product} that $m(G\cup H)=m(G)m(H)$, and a similar formula for joins is easy to verify: $m(G+H)=m(G)+m(H)$ because every MIS in $G+H$ is either an MIS of $G$ or an MIS of $H$.
Now suppose we have an expression of the integer $m$ with $n$ $1$s. If $n=1$, then there is only one such expression, $1$, and we associate to this expression the one vertex graph. If $n\ge 2$, then any such expression must decompose as either $e_1+e_2$ or $e_1e_2$, where $e_1$ and $e_2$ are expressions with fewer $1$s. If our expression is $e_1+e_2$ then we associate it to the join of the graphs associated to $e_1$ and $e_2$, and if our expression is $e_1e_2$ then we associate it to the disjoint union of the graphs associated to $e_1$ and $e_2$. Figure~\ref{prop-integer-complexity-figure} shows an example. It follows that the resulting graph has precisely as many vertices as the expression has $1$s, and precisely $m$ MISes.
\end{proof}
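The construction in the proof is easy to mechanize. A small sketch of ours builds the graph for the expression $10=(1+1)((1+1)(1+1)+1)$ of Figure~\ref{prop-integer-complexity-figure} via joins and disjoint unions, then counts its MISes by brute force.

```python
def one():
    return (1, [])          # a single vertex, no edges

def union(G, H):            # disjoint union: multiplies MIS counts
    nG, eG = G
    nH, eH = H
    return (nG + nH, eG + [(u + nG, v + nG) for u, v in eH])

def join(G, H):             # join: adds MIS counts
    nU, eU = union(G, H)
    nG = G[0]
    cross = [(u, v) for u in range(nG) for v in range(nG, nU)]
    return (nU, eU + cross)

def count_mis(G):
    # brute-force MIS count; vertex subsets encoded as bitmasks
    n, edges = G
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    indep = {m for m in range(1 << n)
             if all(not (adj[v] & m) for v in range(n) if m >> v & 1)}
    return sum(all((m | 1 << v) not in indep
                   for v in range(n) if not m >> v & 1)
               for m in indep)

two = join(one(), one())        # 1 + 1
four = union(two, two)          # (1 + 1)(1 + 1)
five = join(four, one())        # (1 + 1)(1 + 1) + 1
ten = union(two, five)          # (1 + 1)((1 + 1)(1 + 1) + 1)
assert ten[0] == 7 and count_mis(ten) == 10
print("7-vertex graph with exactly 10 MISes built from the expression")
```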
\minisec{Acknowledgements}
I am grateful to the referees for their detailed and insightful comments. In particular, the change from ``separating families'' to ``separating covers,'' which simplified many of these results, was suggested by one of the referees.
\bibliographystyle{acm}
| {
"timestamp": "2010-08-30T02:00:22",
"yymm": "0911",
"arxiv_id": "0911.4204",
"language": "en",
"url": "https://arxiv.org/abs/0911.4204",
"abstract": "In 1973, Katona raised the problem of determining the maximum number of subsets in a separating cover on n elements. The answer to Katona's question turns out to be the inverse to the answer to a much simpler question: what is the largest integer which is the product of positive integers with sum n? We give a combinatorial explanation for this relationship, via Moon and Moser's answer to a question of Erdos: how many maximal independent sets can a graph on n vertices have? We conclude by showing how Moon and Moser's solution also sheds light on a problem of Mahler and Popken's about the complexity of integers.",
"subjects": "Combinatorics (math.CO)",
"title": "Maximal independent sets and separating covers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9901401455693092,
"lm_q2_score": 0.8080672135527632,
"lm_q1q2_score": 0.800099788456919
} |
https://arxiv.org/abs/2211.06484 | A regular $n$-gon spiral | We construct a polygonal spiral by arranging a sequence of regular $n$-gons such that each $n$-gon shares a specified side and vertex with the $(n+1)$-gon in the construction. By offering flexibility for determining the size of each $n$-gon in the spiral, we show that a number of different analytical and asymptotic behaviors can be achieved. | \section{Introduction}
Spirals are pervasive in fluid motion, biological structures, and engineering, affording numerous applications of mathematical spirals in modeling real-world phenomena \cite[ch.~11-14]{Thompson} \cite{Davis, Hammer}. Polygonal spirals are loosely defined as spirals that can be constructed using a sequence of polygons obeying a geometric relation. For example, in Fig. \ref{theo_spiral} we show how to join together right triangles in an infinite sequence to obtain the spiral of Theodorus---one of the oldest and most well-studied polygonal spirals \cite{Brink, Davis}.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width=0.45\textwidth]{Theodorus_Spiral2.png}
\caption{Plot of the first 7 triangles in the spiral of Theodorus. The sequence of shared vertex points $\vn{}{n}$ and the interpolation curve proposed by Davis \cite[p.~38]{Davis} are also shown.}
\label{theo_spiral}
\end{centering}
\end{figure}
In Fig. \ref{theo_spiral}, we let $\vn{}{n}$ denote the unique non-origin shared vertex of the triangle with hypotenuse $\sqrt{n}$ and the triangle with hypotenuse $\sqrt{n+1}$. Considerable attention has been given to studying the sequence $\vn{}{n}$ \cite{Brink, Davis, Gautschi, Hlawka}. The \textit{key idea} in the spiral of Theodorus construction is to define a sequence of polygons (i.e., right triangles) such that consecutive polygons share a unique side and at least one unique vertex point. By changing right triangles to similar triangles, or other types of similar polygons, we find that this idea is ubiquitous in the literature on polygonal spirals \cite{Anatriello, Strizhevsky, Yap} \cite[ch.~34]{Hammer}.
In this paper, we make use of this key idea to construct a spiral made out of a sequence of regular $n$-gons. In particular, we start by considering the case where all the $n$-gons are normalized to have perimeter $1$ (see Fig. \ref{lam1}). We begin the construction with an equilateral triangle with side length $\frac{1}{3}$ in the first quadrant of the complex plane. Along the upper right edge of the triangle, we construct a square with side length $\frac{1}{4}$. We continue in this way, following Definition \ref{dcon}, to construct the pentagon, hexagon, etc., leading to the construction shown below.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width= 0.6\textwidth]{spiral_s=1.png}
\caption{Plot of the first few polygons of the Definition \ref{dcon} construction (choosing $l(n) = \frac{1}{n}$). The shared vertex points $\vn{}{n}$ (see Lemma \ref{lamv}) and a smooth interpolation curve defined in equation~(\ref{interp}) are also shown.}
\label{lam1}
\end{centering}
\end{figure}
\noindent We wish to determine the $n \to \infty$ limiting behavior of the sequence $\vn{}{n}$ shown in the Fig. \ref{lam1} construction. In particular, we pose a slightly more general question: if the side length of each $n$-gon in the Fig. \ref{lam1} construction is instead given by $\ell_s(n) = n^{-s}$, for which values of $s \in \mathbb{R}$ does the sequence $\vn{\ell_s}{n}$ converge as $n \to \infty$? The answer to this question is one of the main results of this paper:
\begin{theorem}[Convergence of $\vn{\ell_s}{n}$]
\label{thm1}
\begin{flalign*}
\text{ As } n \to \infty, \hspace{1mm} \vn{\ell_s}{n} & \begin{cases}
\text{converges to a point when } s > 0.\\
\text{converges to a circular orbit when } s=0.\\
\text{diverges when } s < 0.
\end{cases}
\end{flalign*}
\end{theorem}
We conclude the introduction by outlining the rest of the paper. In Section \ref{construct}, we formally define the geometric construction of the $n$-gon spiral (Definition \ref{dcon}) with an arbitrary length function, leading to a formula for the shared polygon vertex points (Lemma \ref{lamv}). In Section \ref{converge}, we use Lemma \ref{lamv} to prove Theorem \ref{thm1}, and then give a corollary to the theorem. In Section \ref{telescoping_spiral}, we introduce a special case of the $n$-gon spiral which admits particularly nice algebraic expressions, including a closed form analytic continuation of the discrete spiral to real values of $n$ (Theorem \ref{thm_analytic}) and an appearance of the golden ratio (Remark \ref{rem1}).
\section{The formal construction}
\label{construct}
\begin{definition}[Construction of the $n$-gon spiral $\pp{l}$]\
\label{dcon}
\begin{enumerate}
\item Starting with $n=2$, the $n$th polygon in the spiral construction is a regular $n$-gon with side length $l(n)$, where $l: \mathbb{N} \to \mathbb{R}$.
\item Consecutive polygons share a side and at least one vertex. Only consecutive polygons may share a side.
\item Shared vertices of each polygon are consecutive vertices, and the side connecting two shared vertices is not a shared side.
\item Consecutive polygons do not overlap (i.e. the shared area between consecutive polygons is 0).
\item \textit{Notation:} let $\pp{l}$ represent the infinite sequence of polygons defined via $1$--$4$ above.
\begin{itemize}
\item In this sequence, $\pn{l}{n}$ represents the $n$-gon in $\pp{l}$.
\item Let $\vv{l}$ denote the infinite sequence of shared vertex points of $\pp{l}$. In this sequence, $\vn{l}{n}$ represents the vertex of $\pn{l}{n}$ and $\pn{l}{n+1}$ that is shared for all choices of $l(n)$.
\item Let $\cc{l}$ denote the infinite sequence of polygon centers of $\pp{l}$. In this sequence, $\cn{l}{n}$ represents the center of $\pn{l}{n}$.
\end{itemize}
\end{enumerate}
\end{definition}
Once the \textit{length function} $l(n)$ is chosen and the coordinates of the $2$-gon and $3$-gon are given, Definition \ref{dcon} fully determines $\pp{l}$. We will see in the next section that there is an algebraically simplest orientation of the construction in the complex plane, which we will adopt.
With the rules for construction laid out, we desire to obtain a formula for the shared vertex point $\vn{l}{n}$.
We begin by following Definition \ref{dcon} to draw an arbitrary $n$-gon and $(n+1)$-gon in the $\pp{l}$ construction. Using that the interior angles of a regular $n$-gon are $\alpha_n := \frac{\pi(n-2)}{n}$ radians, we want to find the angle with respect to the horizontal ($\theta_n$) that must be followed to get from the shared vertex point $\vn{l}{n-1}$ to $\vn{l}{n}$, pictured in Fig. \ref{angles}. In this diagram, we assume $\theta_n, \theta_{n+1} < \pi$. Equivalent results (mod $2\pi$) are obtained by considering the other relevant cases.
\begin{figure}[!htb]
\begin{centering}
\includegraphics[width=0.48\textwidth]{geom_deriv.png}
\caption{$\pp{l}$ angular relationships.}
\label{angles}
\end{centering}
\end{figure}
In the Fig. \ref{angles} diagram, we observe that the horizontal (dotted) lines for defining $\theta_n$ and $\theta_{n+1}$ are cut by a transversal (a side of the $n$-gon). Therefore, the angles $\theta_n$ and $\beta = \alpha_n + \alpha_{n+1} - \theta_{n+1}$ in Fig. \ref{angles} are supplementary, which affords an angular recurrence:
\begin{equation}
\theta_n + \beta = \pi \implies \theta_{n+1} = \theta_{n} + \frac{(n-2)\pi}{n} + \frac{(n-1)\pi}{n+1} - \pi, \hspace{2mm} n > 1. \label{t1_recurrence}
\end{equation}
To solve the recurrence for $\theta_n$, we need a starting value for $\theta_2$---a choice which determines the orientation of $\pp{l}$ in the complex plane. Appealing to simplicity, we find that choosing $\theta_2 = -3 \pi$ ensures that there is no nontrivial constant rotation applied to all elements of $\pp{l}$ (i.e. $\theta_n$ mod $2\pi$ has no constant additive term). This choice affords the orientation shown in Fig. \ref{lam1}.
With $\theta_2$ in hand, we note that (\ref{t1_recurrence}) is a first order linear recurrence, and thus $\theta_{n+1}$ can be rewritten as a sum. Simplification of the resulting sum yields the angular relation for $\pp{l}$:
\begin{equation}
\label{t1}
\theta_n = 2\pi\Big(\frac{n}{2}+\frac{1}{n}-2 H_n\Big), \hspace{2mm} n > 1,
\end{equation}
\noindent where $H_n$ is the $n$th harmonic number.
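As a quick numerical sanity check (the helper names below are ours, not part of the paper), one can iterate the recurrence (\ref{t1_recurrence}) from $\theta_2 = -3\pi$ and compare against the closed form (\ref{t1}):

```python
import math

def harmonic(n):
    # H_n = sum_{k=1}^{n} 1/k
    return sum(1.0 / k for k in range(1, n + 1))

def theta_closed(n):
    # closed form (t1): theta_n = 2*pi*(n/2 + 1/n - 2*H_n)
    return 2 * math.pi * (n / 2 + 1.0 / n - 2 * harmonic(n))

# iterate theta_{n+1} = theta_n + (n-2)*pi/n + (n-1)*pi/(n+1) - pi
theta = -3 * math.pi  # theta_2, fixing the orientation
for n in range(2, 200):
    assert abs(theta - theta_closed(n)) < 1e-9
    theta += (n - 2) * math.pi / n + (n - 1) * math.pi / (n + 1) - math.pi
```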
Using (\ref{t1}), we are now ready to write down analytical expressions for the shared vertex points $\vn{l}{n}$. Examining Fig. \ref{angles}, we find that each shared vertex point $\vn{l}{n}$ can be written as the sum over the integers $3 \leq k \leq n$ of the $k$-gon side length $l(k)$ multiplied by the corresponding rotation, $e^{i \theta_k}$. We define $\vn{l}{2} := 0$ to start the spiral from the origin.
\begin{lem}[Expression for shared vertex points of the $n$-gon spiral]\
\label{lamv}
\begin{equation*}
\vn{l}{n} = \sum_{k=3}^n l(k) \hspace{1mm} e^{i \theta_k} = \sum_{k=3}^n (-1)^k l(k) \hspace{1mm} e^{2 \pi i (\frac{1}{k} - 2H_k)}, \hspace{2mm} n \in \mathbb{N}_{> 1}.
\end{equation*}
\end{lem}
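The two expressions in Lemma \ref{lamv} can be checked against each other numerically; the sketch below (our own code) does so for the length function $l(n) = \frac{1}{n}$ of Fig. \ref{lam1}:

```python
import cmath, math

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def theta(k):
    # closed form (t1)
    return 2 * math.pi * (k / 2 + 1.0 / k - 2 * harmonic(k))

def vertex_lemma(l, n):
    # second expression in Lemma lamv
    return sum((-1) ** k * l(k) * cmath.exp(2j * math.pi * (1.0 / k - 2 * harmonic(k)))
               for k in range(3, n + 1))

def vertex_theta(l, n):
    # first expression: sum of side vectors l(k) e^{i theta_k}
    return sum(l(k) * cmath.exp(1j * theta(k)) for k in range(3, n + 1))

l = lambda k: 1.0 / k  # the perimeter-1 construction
for n in range(3, 60):
    assert abs(vertex_lemma(l, n) - vertex_theta(l, n)) < 1e-9
```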
Since $\pp{l}$ consists of regular polygons, there is a natural relationship between the sequences $\pp{l},\vv{l},$ and $\cc{l}$. We can explicitly solve for $\cn{l}{n}$ by constructing the isosceles triangle with vertices $\vn{l}{n-1}$ and $\vn{l}{n}$ and an apex angle of $\frac{2 \pi}{n}$ radians. This recovers the expression for $\vn{l}{n}$ plus a much simpler function, $Q_l(n)$:
\begin{lem}[Expression for polygon centers of the $n$-gon spiral]\
\label{lamc}
\label{centers}
\begin{equation*}
\cn{l}{n} = \vn{l}{n} + Q_l(n), \hspace{ 2mm} \text{where } Q_l(n) = \frac{(-1)^n l(n) e^{2 \pi i (\frac{1}{n} -2H_n)}}{e^{\frac{2 \pi i}{n}} - 1}.
\end{equation*}
\end{lem}
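A useful consistency check on Lemma \ref{lamc} (again our own illustrative code): the center $\cn{l}{n}$ must lie at the circumradius $\frac{l(n)}{2\sin(\pi/n)}$ from both shared vertices $\vn{l}{n-1}$ and $\vn{l}{n}$ of the $n$-gon:

```python
import cmath, math

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def f(k):
    return cmath.exp(2j * math.pi * (1.0 / k - 2 * harmonic(k)))

def vertex(l, n):
    # Lemma lamv (with V_2 = 0 as the empty sum)
    return sum((-1) ** k * l(k) * f(k) for k in range(3, n + 1))

def center(l, n):
    # Lemma lamc: C_l(n) = V_l(n) + Q_l(n)
    Q = (-1) ** n * l(n) * f(n) / (cmath.exp(2j * math.pi / n) - 1)
    return vertex(l, n) + Q

l = lambda k: 1.0 / k
for n in range(3, 40):
    c = center(l, n)
    R = l(n) / (2 * math.sin(math.pi / n))  # circumradius of the n-gon
    assert abs(abs(c - vertex(l, n)) - R) < 1e-9
    assert abs(abs(c - vertex(l, n - 1)) - R) < 1e-9
```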
Lemma \ref{lamc} makes it clear that understanding the behavior of $\vv{l}$ and $Q_l$ is sufficient for understanding the behavior of $\cc{l}$, and hence of $\pp{l}$. \textit{As such, we will frequently refer to the sequence of shared vertex points $\vv{l}$ as ``the $n$-gon spiral."}
\noindent We use the standard technique to obtain a smooth continuation of $\vn{l}{n}$ to real $n$:
\begin{equation}
\label{interp}
\tilde{V}_l(n) := \sum_{k=3}^{\infty} l(k) \hspace{1mm} e^{i \theta_k} - l(k-2+n) \hspace{1mm} e^{i \theta_{k-2+n}}, \hspace{2mm} n \in \mathbb{R}_{> 1}.
\end{equation}
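Truncating the infinite sum in (\ref{interp}) gives a practical way to evaluate the interpolation; at integer $n$ it must reduce to the partial sums of Lemma \ref{lamv}, up to a truncation error of order $n/K$ for truncation length $K$. A sketch with our own naming:

```python
import cmath, math

K = 200000  # truncation length for the infinite sum in (interp)

# side vectors a_k = (-1)^k l(k) e^{2 pi i (1/k - 2 H_k)} for l(k) = 1/k,
# with the harmonic numbers H_k maintained incrementally
a = {}
H = 1.5  # H_2
for k in range(3, K + 20):
    H += 1.0 / k
    a[k] = (-1) ** k * (1.0 / k) * cmath.exp(2j * math.pi * (1.0 / k - 2 * H))

def v_interp(n):
    # truncated interpolation sum, for integer n > 2
    return sum(a[k] - a[k - 2 + n] for k in range(3, K))

def vertex(n):
    # partial sum of Lemma lamv
    return sum(a[k] for k in range(3, n + 1))

for m in (3, 7, 12):
    assert abs(v_interp(m) - vertex(m)) < 1e-3
```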
\section{Spiral convergence}
\label{converge}
As a step toward understanding the convergence properties of the $n$-gon spiral, we will analyze convergence in the case where the length function is $\ell_s(n) = n^{-s}$, $s \in \mathbb{R}$. For conciseness, we assign $W(s) := \lim_{n \to \infty} \vn{\ell_s}{n}$ to denote the desired limit.
\begin{figure}[!htb]
\begin{subfigure}{0.59\textwidth}
\centering\captionsetup{width=.9\linewidth}
\includegraphics[width=\textwidth]{spiral_s=0.png}
\caption{The first few polygons of the spiral $\pp{\ell_0}$ $(s=0)$ plotted with the sequence of shared vertices $\vv{\ell_0}$ and the interpolation curve given by (\ref{interp}). We observe the convergence of $\vv{\ell_0}$ to a circular orbit (Theorem \ref{thm1}). Using Euler's transform for alternating series, we find numerically that the center of this circular orbit is at $\lim_{s \to 0^+} W(s) = 1.21711960256553... + i \hspace{1mm} 2.68541404871695...$}
\label{n=0_spiral}
\end{subfigure}
\begin{subfigure}{0.37\textwidth}
\centering\captionsetup{width=.94\linewidth}
\includegraphics[width=\textwidth]{parametric_spiral_med2.png}
\caption{The convergence points $W(s)$ for $s \in \mathbb{R}^+$ trace out a curve in the complex plane (black). We also plot ten spirals (from $s=0.0000726$ to $s=1.77$, interpolated by (\ref{interp})) with convergence points that are equally spaced along the curve.}
\label{conv_curve}
\end{subfigure}
\caption{Illustration of Theorem \ref{thm1}.}
\label{spiral_comparison}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm1}]
We aim to characterize the limit behavior of the sequence of shared vertex points,
\begin{equation}
W(s) = \sum_{k=3}^{\infty} \frac{(-1)^k e^{2 \pi i (\frac{1}{k} - 2H_k)}}{k^s}.
\end{equation}
\noindent \underline{Case 1: $s < 0$.} The terms of the series do not approach 0, hence $W(s)$ diverges.
\bigskip
\noindent \underline{Case 2: $s > 1$.} $W(s)$ is an absolutely convergent series.
\bigskip
\noindent \underline{Case 3: $0 < s \leq 1$.} Letting $f(k):=\exp\big(2\pi i\big(\frac{1}{k}-2H_k\big)\big)$, we first accelerate convergence of $\vn{\ell_s}{n}$ by adding consecutive pairs of terms:
\begin{equation}
\label{Saccel}
\vn{\ell_s}{2n} = \sum_{k=3}^{2n} \frac{(-1)^k f(k)}{k^s} = \sum_{j=2}^{n} F(j), \text{ where } F(j) := \frac{(2j-1)^sf(2j)-(2j)^sf(2j-1)}{((2j-1)(2j))^s}
\end{equation}
\noindent We now manipulate $F(j)$ in order to show that the series $\sum_{j=2}^{\infty} F(j)$, whose partial sums are $\vn{\ell_s}{2n}$, is absolutely convergent. To this end, we invoke the identity $H_{2j-1} = H_{2j} - \frac{1}{2j}$ to write $f(2j-1)$ in terms of $f(2j)$:
\begin{equation}
f(2j-1) = \exp\Big(2\pi i\Big(\frac{1}{2j-1}+\frac{1}{2j}\Big)\Big) f(2j). \label{f2j-1}
\end{equation}
\noindent Plugging (\ref{f2j-1}) and $(2j-1)^s = (2j)^s(1-\frac{1}{2j})^s$ into $F(j)$, we multiply the numerator and denominator of $F(j)$ by $2j-1$, which affords
\begin{equation}
\label{Fj}
F(j) = \frac{f(2j)\Big((2j-1)\big(1-\frac{1}{2j}\big)^s - (2j-1) \exp\big(2\pi i\big(\frac{1}{2j-1}+\frac{1}{2j}\big)\big)\Big)}{(2j-1)^{1+s}} =: \frac{T(j)}{(2j-1)^{1+s}}.
\end{equation}
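The algebraic rewriting of $F(j)$ into the form (\ref{Fj}) can be verified numerically against the original definition (\ref{Saccel}); the code below (ours) does so for several values of $s$:

```python
import cmath, math

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def f(k):
    # f(k) = exp(2 pi i (1/k - 2 H_k))
    return cmath.exp(2j * math.pi * (1.0 / k - 2 * harmonic(k)))

def F_def(m, s):
    # F(j) as defined in (Saccel); m plays the role of j
    return ((2*m - 1) ** s * f(2*m) - (2*m) ** s * f(2*m - 1)) / ((2*m - 1) * 2*m) ** s

def F_rew(m, s):
    # F(j) = T(j) / (2j-1)^{1+s} as in (Fj)
    phase = cmath.exp(2j * math.pi * (1.0 / (2*m - 1) + 1.0 / (2*m)))
    T = f(2*m) * ((2*m - 1) * (1 - 1.0 / (2*m)) ** s - (2*m - 1) * phase)
    return T / (2*m - 1) ** (1 + s)

for m in range(2, 60):
    for s in (0.25, 0.5, 1.0):
        assert abs(F_def(m, s) - F_rew(m, s)) < 1e-10
```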
\noindent Since $\sum_{j=2}^{\infty} \frac{1}{(2j-1)^{1+s}}$ is convergent for $s \in (0,1]$, showing $\abs*{T(j)}$ is bounded gives the desired convergence result. By the triangle inequality,
\begin{flalign}
\abs*{T(j)} & \leq \abs*{ (2j-1)\Big(1-\Big(1-\frac{1}{2j}\Big)^s\Big)} + \abs*{(2j-1)\Big(1-\exp\Big(2\pi i \Big(\frac{1}{2j-1} + \frac{1}{2j}\Big)\Big)\Big)} \nonumber\\
& =: A(j,s) + B(j) \leq \sup_{j \geq 2} A(j,s) + \sup_{j \geq 2} B(j).
\label{Tj}
\end{flalign}
\begin{lem}
\label{maxA}
\begin{equation*}
\sup_{j \geq 2} A(j,s) = \lim_{j\to\infty} A(j,s) = s.
\end{equation*}
\end{lem}
\begin{proof}
We obtain the $j \to \infty$ limit by direct calculation. To show that this limit is also an upper bound (and hence the supremum), we make use of the following Bernoulli-type inequality (which can be readily proved from the binomial series):
\begin{equation}
\label{bern}
1-sy < (1+y)^{-s} < 1 \text{ for } y \in (0,1), \hspace{1mm} s \in (0,1].
\end{equation}
Plugging $y := \frac{1}{2j-1}$ into (\ref{bern}), we simplify to obtain:
\begin{equation}
\label{bernsimp}
\frac{s}{2j-1} > 1-\Big(1+\frac{1}{2j-1}\Big)^{-s} > 0.
\end{equation}
\noindent Observing that $(1+\frac{1}{2j-1})^{-s} = (1-\frac{1}{2j})^s$, we multiply (\ref{bernsimp}) by $2j-1$ to afford the desired bound:
\begin{equation}
s > A(j,s) = (2j-1)\Big(1-\Big(1-\frac{1}{2j}\Big)^s\Big) > 0.
\end{equation}
\end{proof}
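Numerically, the bound of Lemma \ref{maxA} is easy to observe (illustrative code, our naming):

```python
import math

def A(m, s):
    # A(j, s) = (2j - 1) * (1 - (1 - 1/(2j))^s); m plays the role of j
    return (2 * m - 1) * (1.0 - (1.0 - 1.0 / (2 * m)) ** s)

for s in (0.1, 0.5, 1.0):
    for m in range(2, 5000):
        assert 0.0 < A(m, s) < s       # the bound established in the proof
    assert abs(A(10 ** 8, s) - s) < 1e-6  # the limit is s
```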
\begin{lem}
\label{lemBj}
\begin{equation*}
\sup_{j \geq 2} B(j) = \lim_{j \to \infty} B(j) = 4\pi.
\end{equation*}
\end{lem}
\begin{proof}
The $j \to \infty$ limit is obtained by direct calculation. Defining $x(j):=\pi \big(\frac{1}{2j-1}+\frac{1}{2j}\big)$, we have that $x(j) \in (0, \pi)$ for all $j \geq 2$ since $x(2) < \pi$ and $x(j)$ is a monotonically decreasing function. From this, we can simplify $B(j)$ to obtain
\begin{equation}
B(j) = \abs*{(2j-1)(1-\exp(2 i x))} = 2(2j-1) \sin(x) \text{ for } j \in [2,\infty).
\end{equation}
Treating $j$ as a continuous variable, calculation of $B'(j)$ reveals
\begin{equation}
B'(j) = \frac{\pi \cos x}{j^2} + 4(\sin x - x \cos x).
\end{equation}
The term $\sin x - x \cos x$ is positive on $(0, \pi)$. For $j \geq 3$ we have $x(j) < \frac{\pi}{2}$, so both terms above are positive; for $j \in [2,3)$, a direct check gives $4(\sin x - x \cos x) > 1 > \frac{\pi \abs*{\cos x}}{j^2}$. Hence $B(j)$ is increasing for all $j \geq 2$, and its supremum is the limit $4\pi$.
\end{proof}
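Likewise, the monotonicity and limit in Lemma \ref{lemBj} can be observed numerically (our own sketch):

```python
import math

def B(m):
    # B(j) = 2 (2j - 1) sin(x(j)),  x(j) = pi (1/(2j-1) + 1/(2j))
    x = math.pi * (1.0 / (2 * m - 1) + 1.0 / (2 * m))
    return 2 * (2 * m - 1) * math.sin(x)

prev = 0.0
for m in range(2, 10000):
    b = B(m)
    assert prev < b < 4 * math.pi   # increasing, bounded by the supremum 4*pi
    prev = b
assert abs(B(10 ** 7) - 4 * math.pi) < 1e-5
```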
\noindent Plugging Lemmas \ref{maxA} and \ref{lemBj} into (\ref{Tj}) and (\ref{Fj}) implies
\begin{equation}
\sum_{j=2}^{\infty} \abs*{F(j)} < (4 \pi + s)\hspace{1mm} 2^{-1-s}\zeta\Big(1+s,\frac{3}{2}\Big),
\end{equation}
hence $\sum_{j=2}^{\infty} F(j)$ is absolutely convergent for $s \in (0,1]$. Since the summand of $\vn{\ell_s}{n}$ vanishes as $n \to \infty$, $\lim_{n \to \infty} \vn{\ell_s}{2n+1} = \lim_{n \to \infty} \vn{\ell_s}{2n} = \sum_{j=2}^{\infty} F(j)$, so $W(s)$ is convergent.
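For $s = 1$ the Hurwitz zeta value is explicit, $\zeta(2, \frac{3}{2}) = \frac{\pi^2}{2} - 4$, so the absolute-convergence bound can be checked without special-function libraries (illustrative code, our naming):

```python
import cmath, math

s = 1.0
# f(k) = exp(2 pi i (1/k - 2 H_k)), with H_k maintained incrementally
fvals = {}
H = 1.5  # H_2
for k in range(3, 40002):
    H += 1.0 / k
    fvals[k] = cmath.exp(2j * math.pi * (1.0 / k - 2 * H))

partial = 0.0
for m in range(2, 20001):
    F = ((2*m - 1) ** s * fvals[2*m] - (2*m) ** s * fvals[2*m - 1]) / ((2*m - 1) * 2*m) ** s
    partial += abs(F)

# for s = 1: (4*pi + s) * 2^{-1-s} * zeta(1+s, 3/2) = (4*pi + 1)(pi^2/8 - 1)
bound = (4 * math.pi + s) * 2 ** (-1 - s) * (math.pi ** 2 / 2 - 4)
assert partial < bound
```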
\bigskip
\noindent \underline{Case 4: $s = 0$}. As in Case 3, we accelerate convergence of $\vn{\ell_0}{n}$ by pairing terms:
\begin{equation}
\vn{\ell_0}{2n} = \sum_{j=2}^{n} \big( f(2j) - f(2j-1) \big) =: U(n), \hspace{1mm}\text{where } f(k) := e^{2\pi i (\frac{1}{k}-2H_k)}. \label{Saccel2}
\end{equation}
Let $r \geq 1$ be a rational constant, and restrict attention to values of $n$ for which $nr \in \mathbb{N}$. We will show that, in the $n \to \infty$ limit, the distance between two arbitrary, evenly indexed spiral points $U(nr)$ and $U(n)$ is a sinusoidal function of $\log r$. To this end, we calculate $D(r) := \lim_{n \to \infty} \abs*{U(nr)-U(n)}$.
Plugging the asymptotic harmonic series expansion $H_n = \gamma + \log(n) + \frac{1}{2n} + \mathcal{O}(n^{-2})$ into $D(r)$, we simplify and apply the Euler-Maclaurin formula to obtain
\begin{equation}
D(r) = \lim_{n \to \infty} \Bigg\lvert \int_{n}^{nr} \Big(e^{-4\pi i \log(2x)} - e^{-4\pi i \log(2x-1)}\Big)dx + \text{error terms} \hspace{1mm}\Bigg\rvert. \label{EMbound}
\end{equation}
Using standard bounds \cite{Lehmer}, we show that the error terms vanish as $n \to \infty$. After evaluating the integral in (\ref{EMbound}), we make use of Euler's formula to separate real and imaginary parts. Evaluating the desired limit in Mathematica affords
\begin{equation}
D(r) = \abs*{\sin(2\pi \log r)}. \label{Blim}
\end{equation}
As claimed, the limiting distance between the arbitrary points $U(nr)$ and $U(n)$ depends sinusoidally on $\log r$ by (\ref{Blim}). Since $U(n) = \vn{\ell_0}{2n}$, equation (\ref{Blim}) shows that all evenly indexed points on the spiral converge to a circle with diameter 1 (Fig. \ref{n=0_spiral}). By explicitly constructing the circle through $\vn{\ell_0}{2n-2}, \vn{\ell_0}{2n},$ and $\vn{\ell_0}{2n+2},$ we can readily show that $\vn{\ell_0}{2n+1}$ lies on this circle in the $n \to \infty$ limit, which confirms that the odd spiral points converge to the same circle.
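The limit (\ref{Blim}) can be observed numerically by summing the paired terms directly with exact harmonic numbers; the sketch below (our code, with a generous tolerance since the convergence rate is roughly $O(1/n)$) compares $\abs*{U(nr)-U(n)}$ at large $n$ against $\abs*{\sin(2\pi\log r)}$:

```python
import cmath, math

def U_diff(n, r):
    # |U(nr) - U(n)| for s = 0: the tail sum of paired terms f(2j) - f(2j-1)
    H = sum(1.0 / k for k in range(1, 2 * n + 1))   # H_{2n}
    total = 0 + 0j
    for m in range(n + 1, round(n * r) + 1):
        H += 1.0 / (2 * m - 1)
        f_odd = cmath.exp(2j * math.pi * (1.0 / (2 * m - 1) - 2 * H))
        H += 1.0 / (2 * m)
        f_even = cmath.exp(2j * math.pi * (1.0 / (2 * m) - 2 * H))
        total += f_even - f_odd
    return abs(total)

n = 50000
for r in (1.5, 2.0, 3.0):
    # limiting value |sin(2 pi log r)| from (Blim)
    assert abs(U_diff(n, r) - abs(math.sin(2 * math.pi * math.log(r)))) < 0.01
```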
\bigskip
\noindent This concludes the proof of Theorem \ref{thm1}.
\end{proof}
Theorem \ref{thm1} provides the limiting behavior of $\vn{\ell_s}{n}$ for $\ell_s(n) = n^{-s}$, which may be used to find the limiting behavior of $\vn{l}{n}$ for geometrically significant length functions $l$ that are asymptotic to a multiple of $\ell_s$. For example, we can consider a Definition \ref{dcon} spiral construction where the length function is determined by each $n$-gon being inscribed inside a circle of radius $n^{-s}$, $s \in \mathbb{R}$.
\begin{corollary}[Corollary of Theorem \ref{thm1}]
Let $\pp{insc}$ and $\pp{circ}$ denote the $n$-gon spiral constructions where each $n$-gon is inscribed in, respectively circumscribed about, a circle of radius $n^{-s}$, $s \in \mathbb{R}$. Then the limits $\lim_{n \to \infty} \vn{insc}{n}$ and $\lim_{n \to \infty} \vn{circ}{n}$ exist for $s > -1$.
\end{corollary}
\begin{proof}
The side lengths of the inscribed and circumscribed $n$-gons are given by $\ell_{insc}(n) = 2n^{-s} \sin(\frac{\pi}{n}) = \frac{2\pi}{n^{1+s}} + \mathcal{O}(\frac{1}{n^{3+s}})$ and $\ell_{circ}(n) = 2n^{-s} \tan(\frac{\pi}{n}) = \frac{2\pi}{n^{1+s}} + \mathcal{O}(\frac{1}{n^{3+s}})$, respectively. Both are asymptotic to $2\pi\,\ell_{1+s}(n)$, and the exponent $1+s$ is positive precisely when $s > -1$; the constant factor and the absolutely summable correction terms do not affect convergence, so the result follows by Theorem \ref{thm1}.
\end{proof}
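The asymptotics used in the proof are easy to confirm numerically (our own sketch; the constants $12$ and $25$ are crude bounds chosen to dominate the leading correction coefficients $\frac{\pi^3}{3}$ and $\frac{2\pi^3}{3}$):

```python
import math

s = 0.5
for n in (10, 100, 1000):
    l_insc = 2 * n ** (-s) * math.sin(math.pi / n)
    l_circ = 2 * n ** (-s) * math.tan(math.pi / n)
    # both equal 2*pi*n^{-(1+s)} up to O(n^{-(3+s)}) corrections
    assert abs(l_insc - 2 * math.pi * n ** (-(1 + s))) < 12 * n ** (-(3 + s))
    assert abs(l_circ - 2 * math.pi * n ** (-(1 + s))) < 25 * n ** (-(3 + s))
```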
\noindent A similar result can be shown for the $n$-gon spiral where each $n$-gon has area $n^{-s}$.
\section{The telescoping spiral}
\label{telescoping_spiral}
Here, we present a special choice of the length function that leads to closed form formulae for the discrete spiral points as well as their analytic continuation.
\begin{theorem}[The telescoping spiral]
\label{thm_analytic}\
\noindent The $n$-gon spiral with length function $L(k) = 2 \cos (\frac{2\pi}{k})$ admits the following closed form analytic continuation for $n \in \mathbb{R}_{>1}$:
\begin{flalign}
\vn{L}{n} & = -1 + (-1)^n e^{-4 \pi i(\gamma + \psi(n+1))}, \nonumber\\
Q_L(n) & = (-1)^n e^{-4 \pi i(\gamma + \psi(n+1))} \bigg(e^{\frac{2 \pi i}{n}} + \frac{e^{\frac{2 \pi i}{n}}+1}{e^{\frac{2 \pi i}{n}}-1}\bigg). \nonumber
\end{flalign}
\end{theorem}
\begin{figure}[!h]
\begin{subfigure}{.585\textwidth}
\centering\captionsetup{width=\linewidth}
\includegraphics[width=\textwidth]{degenerate_spiral_interpolated2.png}
\caption{Plot of the polygon spiral $\pp{L}$ up to the $12$-gon. $\vn{L}{n}$ (see Theorem \ref{thm_analytic}) traces out the unit circle centered at $-1$. The polygon centers are also marked, with their analytic continuation $\cn{L}{n} = \vn{L}{n} + Q_L(n)$ plotted from $n = 1.05$.}
\label{degen}
\end{subfigure}
\begin{subfigure}{.40\textwidth}
\centering\captionsetup{width=0.8\linewidth}
\includegraphics[width=0.9\textwidth]{Qspiral_1-35.png}
\caption{$Q_L(n)$ (see Theorem \ref{thm_analytic}) plotted from $n=1.02$ to $n=35$. In the limit as $n \to 1$, Re$(Q_L(n)) \to 4(1-\frac{\pi^2}{6})$.}
\label{Qspiral}
\end{subfigure}
\caption{The telescoping spiral construction and analytic continuation.}
\label{tele_spiral}
\end{figure}
\begin{proof}
Writing $L(k)$ in exponential form, we plug it into Lemma \ref{lamv} and simplify to obtain a telescoping series!
\begin{equation}
\label{tele1}
\vn{L}{n} = \sum_{k=3}^n (-1)^k \big(e^{-4 \pi i H_{k-1}} + e^{-4 \pi i H_{k}} \big) = -1 + (-1)^n e^{-4 \pi i H_n}.
\end{equation}
The harmonic numbers can be analytically continued to complex values of $n$ via $ H_n = \gamma + \psi(n+1)$, where $\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$ is the digamma function, and $\gamma = 0.5772...$ is Euler's constant. This provides a direct analytic continuation of $\vn{L}{n}$ to $n \in \mathbb{R}_{>1}$, where $n \leq 1$ is excluded because we cannot construct a $1$-gon to obey Definition \ref{dcon} (and $\cn{L}{1}$ is not defined). By plugging $L(n)$ into Lemma \ref{centers}, we also obtain a closed-form analytic continuation of $Q_L(n)$, and hence $\cn{L}{n}$.
\end{proof}
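The telescoping identity (\ref{tele1}) can be checked directly against the sum of Lemma \ref{lamv}, along with the fact that every $\vn{L}{n}$ lies on the unit circle centered at $-1$ (our own verification sketch):

```python
import cmath, math

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def L(k):
    return 2 * math.cos(2 * math.pi / k)

def vertex_direct(n):
    # Lemma lamv applied to l = L
    return sum((-1) ** k * L(k) * cmath.exp(2j * math.pi * (1.0 / k - 2 * harmonic(k)))
               for k in range(3, n + 1))

def vertex_closed(n):
    # the telescoped closed form (tele1)
    return -1 + (-1) ** n * cmath.exp(-4j * math.pi * harmonic(n))

for n in range(3, 80):
    v = vertex_direct(n)
    assert abs(v - vertex_closed(n)) < 1e-9
    assert abs(abs(v + 1) - 1) < 1e-9  # V_L(n) lies on the unit circle centered at -1
```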
From Theorem \ref{thm_analytic}, we see that all the values of $\vn{L}{n}$ lie on the unit circle centered at $-1$ (Fig. \ref{degen}); hence $\vn{L}{n}$ forms a degenerate spiral for $n \in (1, \infty)$. On this interval, $L(n)$ is negative for $n \in (\frac{4}{3}, 4)$, vanishes at $n= \frac{4}{3}$ and $n=4$, and is positive elsewhere, resulting in three regions where the spirals $Q_L(n)$ and $\cn{L}{n}$ exhibit different winding behavior. The $n$-gon centers and vertices coincide at the zeros of $L(n)$ (Fig. \ref{degen}).
\begin{rem}[Golden ratio intersection]
\label{rem1}
In Fig. \ref{degen}, we observe that $\cn{L}{n}$ intersects itself inside the unit disk centered at $-1$. This intersection occurs at $n = \varphi := \frac{\sqrt{5} + 1}{2}$ and $n = \varphi + 1$. At this point, $\cn{L}{\varphi} = -i e^{-\pi i(4(\gamma + \psi(\varphi))+\varphi)} \cot(\pi \varphi) - 1$.
\end{rem}
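The intersection in Remark \ref{rem1} can be confirmed numerically; the sketch below implements the analytic continuation of Theorem \ref{thm_analytic} with a hand-rolled real digamma (our code; \texttt{digamma} uses the standard recurrence plus asymptotic series):

```python
import cmath, math

def digamma(x):
    # real digamma: shift upward with psi(x) = psi(x+1) - 1/x, then asymptotic series
    s = 0.0
    while x < 15:
        s -= 1.0 / x
        x += 1.0
    return s + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4) - 1/(252*x**6)

GAMMA = 0.5772156649015329  # Euler's constant

def center_L(n):
    # analytic continuation of Theorem thm_analytic for real n > 1
    E = cmath.exp(-4j * math.pi * (GAMMA + digamma(n + 1)))  # e^{-4 pi i H_n}
    sign = cmath.exp(1j * math.pi * n)                        # (-1)^n for real n
    w = cmath.exp(2j * math.pi / n)
    V = -1 + sign * E
    Q = sign * E * (w + (w + 1) / (w - 1))
    return V + Q

phi = (math.sqrt(5) + 1) / 2
# the curve meets itself at n = phi and n = phi + 1
assert abs(center_L(phi) - center_L(phi + 1)) < 1e-8
# and the meeting point matches the closed form of Remark rem1
val = -1j * cmath.exp(-1j * math.pi * (4 * (GAMMA + digamma(phi)) + phi)) \
      / math.tan(math.pi * phi) - 1
assert abs(center_L(phi) - val) < 1e-8
```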
\section{Further Directions}
There is much additional work to be done toward understanding the $n$-gon spiral introduced here. We showed that the sequence of shared polygon vertex points in Fig. \ref{lam1} converges, but is it possible to find a closed-form expression for this convergence value? Additionally, Theorem \ref{thm1} only describes the $n \to \infty$ behavior of the $n$-gon spiral for a particular family of length functions. Can one develop a convergence condition that applies to arbitrary length functions?
The closed-form analytic continuation of the telescoping spiral (Theorem \ref{thm_analytic}) makes this spiral a particularly attractive choice for further study. How does the asymptotic behavior of the spirals $\cc{L}$ and $Q_L$ compare with classical spirals in the literature? We identified self-intersections of $Q_L(n)$ and $\cn{L}{n}$ at $n = \frac{4}{3}, 4$ and $n = \varphi, \varphi + 1$, respectively---do these spirals have other self-intersections at algebraic values of $n$? Can we use the analytic continuation of $\vv{L}$ and $\cc{L}$ to define a natural, continuous geometric transformation from a regular $n$-gon to a regular $(n+1)$-gon? If so, what would a ``regular polygon'' with a non-integer number of sides look like?
The notion of arranging polygons with increasing numbers of sides is not limited to the polygonal spiral discussed here. In particular, making changes to each of the rules of Definition \ref{dcon} offers flexibility for discovering a range of intriguing polygonal constructions.
% arXiv:2211.06484 -- "A regular $n$-gon spiral"
% A note on the Waring ranks of reducible cubic forms (arXiv:1305.5394)
\section{Introduction}
\indent
Let $K$ be an algebraically closed field of characteristic zero, let $V$ be an $(n+1)$-dimensional $K$-vector space, and let $F\in S^d V$, namely a homogeneous polynomial of degree $d$ in $n+1$ indeterminates. The {\bf Waring problem for polynomials} asks for the least value $s$ such that there exist linear forms $L_1, \ldots, L_s$ for which $F$ can be written as a sum
\begin{equation}
F=L_1^d+\ldots+L_s^d.
\end{equation}
\noindent
This value $s$ is called the \textit{Waring rank}, or simply the \textit{rank}, of the form $F$, and here it will be denoted by $rk(F)$. The Waring problem for a \textit{general form} $F$ of degree $d$ was solved by Alexander and Hirschowitz, in their celebrated paper \cite {AH}.
\begin{theorem}[{\bf Alexander-Hirschowitz \cite{AH}}]\label{alexander-hirschowitz}
A general form $F$ of degree $d$ in $n+1$ variables is the sum of $\lceil \frac{1}{n+1}\binom{n+d}{d}\rceil$ powers of linear forms, unless
\begin{description}
\item $d=2$, $s=n+1$ instead of $\lceil \frac{n+2}{2}\rceil$;
\item $d=3$, $n=4$ and $s=8$ instead of $7$;
\item $d=4$, $n=2,3,4$ and $s=6,10,15$ instead of $5,9,14$ respectively.
\end{description}
\end{theorem}
\begin{remark}
The assumption on the characteristic is not necessary, see \cite{IK} for more details.
\end{remark}
\indent
The Waring problem in the case of a given homogeneous polynomial is far from being solved. A major development in this direction was made in \cite{CCG}, where the rank of any monomial, and of any sum of pairwise coprime monomials, is computed. \\
The present paper concerns the Waring rank of reducible cubic forms. The main result of this work is the following theorem.
\begin{theorem}\label{theo1}
Let $W_3(n)$ be the set of ranks of reducible cubic forms in $n+1$ variables, then
$$
W_3(n)\subseteq \lbrace 1,\ldots,2n+1\rbrace.
$$
\end{theorem}
\section{The Apolarity}
In this section, we recall basic definitions and facts; see \cite{IK} and \cite{RS} for details.
\indent
Let $K$ be an algebraically closed field of characteristic zero, $S=\bigoplus_{i\geq 0} S_i=K[x_0, \ldots, x_n]$, and let $T=\bigoplus_{i\geq 0} T_i=K[\partial_0, \ldots, \partial_n]$ be the \textit{dual ring} of $S$ (i.e., the ring of differential operators over $K$). The ring $T$ acts on $S$ by differentiation, making $S$ a $T$-module:
$$
\partial^\alpha (x^\beta)=\left\{\begin{array}{rl}
\alpha ! \binom{\beta}{\alpha} x^{\beta-\alpha} &\mbox{ if } \beta \geq \alpha\\
0 &\mbox{ otherwise. } \\
\end{array}\right.
$$
\noindent
where $\alpha$ and $\beta$ are multi-indices. The action of $T$ on $S$ is classically called {\bf apolarity}. Note that $S$ can also act on $T$ with a (dual) differentiation, defined by
$$
x^\beta (\partial^\alpha)=\beta ! \binom{\alpha}{\beta} \partial^{\alpha-\beta},
$$
if $\alpha \geq \beta$ and $0$ otherwise.\\
\noindent
In this way, we have a non-degenerate pairing between the forms of degree $d$ and the homogeneous differential operators of order $d$. Let us recall some basic definitions.
\begin{definition}
Let $F \in S$ be a form and $D\in T$ be a homogeneous differential operator. Then $D$ is \textit{apolar} to $F$ if $D(F)=0$.
\end{definition}
\begin{definition}
For any $F\in S^dV$, we define the ideal $F^\perp=\lbrace D\in T \mid D(F)=0 \rbrace \subset T$, called the \textit{principal system} of $F$. Note that every homogeneous operator $D\in T$ of degree $\geq d+1$ satisfies $D(F)=0$, or equivalently $D\in F^\perp$. The quotient $T/F^\perp$ is an Artinian \textit{Gorenstein} ring.
\end{definition}
\begin{definition}
Given a homogeneous ideal $I\subset T$, the Hilbert function $\mbox{HF}$ of $T/I$ is defined as
$$
\mbox{HF}(T/I,i)=\dim_K T_i -\dim_K I_i.
$$
\noindent The first difference function $\Delta \mbox{HF}$ of the Hilbert function of $T/I$ is defined as
$$
\Delta \mbox{HF}(T/I,i)=\mbox{HF}(T/I,i)-\mbox{HF}(T/I,i-1),
$$
\noindent where $\mbox{HF}(T/I,-1)$ is set to be zero.
\end{definition}
\noindent
Now, we recall the key result of this section.
\begin{lemma}[Apolarity lemma]\label{ApolarityLemma}
A form $F\in S^dV$ can be written as
\begin{equation}
F=\sum_{i=1}^s L_i^d,
\end{equation}
\noindent
where the $L_i$ are pairwise linearly independent linear forms, if and only if there exists an ideal $I \subset F^\perp$ such that $I$ is the ideal of a set of $s$ distinct points in $\mathbb P^n$; these $s$ points correspond to the linear forms $L_i$ in the dual space $\mathbb P^{n*}$.
\end{lemma}
\indent
For a proof of the Apolarity Lemma \ref{ApolarityLemma} see for instance \cite{IK}. We will refer to the $s$ points of this lemma as \textit{decomposition points}.
\section{Classification of Ranks of Reducible Cubic Forms in $\mathbb P^n$}
\indent
In this section we give the classification of the ranks of reducible cubic forms. Since the rank is invariant under projective transformations, we only need to check the projective equivalence classes of cubic forms. Let $W_3(n)$ be the set of values of ranks of reducible cubic forms in $n+1$ variables, namely forms of type $F=LQ$, where $L,Q\in S$ are a linear and a quadratic form, respectively. In order to give a classification, note that $W_3(n-1)\subset W_3(n)$. Indeed, every form in $n$ indeterminates is also a form in the ring of polynomials in $n+1$ indeterminates, and its ranks as a polynomial in $n$ variables and in $n+1$ variables are equal. The subset $W_3(n-1)\subset W_3(n)$ is the set of the ranks of reducible cones in $n+1$ variables. The forms $F=LQ$ which are not cones are, up to projective equivalence, the following.
\begin{itemize}
\item (Type $A$) $Q$ is not a cone and $L$ is not tangent to $Q$.
\begin{center}
\includegraphics[scale=0.1]{TypeA.png}
\end{center}
\item (Type $B$) $Q$ is a cone and $L$ does not pass through any vertex of $Q$.
\begin{center}
\includegraphics[scale=0.08]{TypeB.png}
\end{center}
\item (Type $C$) $Q$ is not a cone and $L$ is tangent to $Q$.
\begin{center}
\includegraphics[scale=0.1]{TypeC.png}
\end{center}
\end{itemize}
\newpage
We have the following result.
\begin{theorem}\label{Theo2}
The ranks of reducible cubic forms $A$, $B$ and $C$ in $n+1$ variables are the following.
\begin{center}
\begin{tabular}{ | l | l |}
\hline
Type & Rank \\ \hline
$A$ & $= 2n$ \\ \hline
$B$ & $= 2n$ \\ \hline
$C$ & $\geq 2n,\leq 2n+1$ \\ \hline
\end{tabular}
\end{center}
\end{theorem}
The ranks of the cubic forms of type $A$ and $B$ are given by \cite[Proposition~7.2]{LT}. B. Segre proved that the cubic surface in $\mathbb P^3$ of type $C$ has rank $7$ \cite{Segre}.
\subsection{Type $C$}
Cubic forms of type $C$ are projectively equivalent to the cubic form
\begin{equation}\label{cubicform}
F=x_0(x_0x_1+x_2x_3+x_4^2+\ldots +x_n^2).
\end{equation}
\begin{Notation}
We denote by $\int G dx_i$ a suitable choice of a primitive of $G$ (that will be specified any time it is needed), namely a form $H$ such that $\partial_i H=G$, where $\partial_i$ denotes the usual partial derivative with respect to the variable $x_i$.
\end{Notation}
First, note that if $n=2$, we have this proposition.
\begin{proposition}\label{prop6}
The cubic form $F=x_0(x_0x_1+x_2^2)$ has rank $\leq 5$.
\begin{proof}
Consider the coordinate system given by the following linear transformation.
\begin{equation}\label{coordinate1}
\left\{\begin{array}{rl}
x_0=y_1\\
x_1=\frac{1}{3}y_1+y_3\\
x_2=y_2\\
\end{array}\right.
\end{equation}
\noindent
By this, we have $F=\frac{1}{3}y_1^3+y_1^2y_3+y_1y_2^2$. Let $K_1=\int \partial_2 F dy_2$ be the primitive of $\partial_2F$ given by $K_1=\frac{1}{6}[(y_1+y_2)^3+(y_1-y_2)^3]=\frac{1}{3}y_1^3+y_1y_2^2$. Thus $F=K_1+y_1^2y_3$, where $y_1^2y_3=\frac{1}{6}[(y_1+y_3)^3-(y_1-y_3)^3-2y_3^3]$. Then $rk(F)\leq 5$, which proves the statement.
\end{proof}
\end{proposition}
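The decomposition used in the proof of Proposition \ref{prop6} is a polynomial identity, which can be confirmed by evaluating both sides at random points (our own sketch; over $\mathbb{C}$ the coefficients $-1$ and $-2$ are absorbed into the linear forms via cube roots, giving the claimed five powers):

```python
import random

def F(y1, y2, y3):
    # F = y1^3/3 + y1^2 y3 + y1 y2^2, the form of Proposition prop6 in the new coordinates
    return y1**3 / 3 + y1**2 * y3 + y1 * y2**2

def decomposition(y1, y2, y3):
    # five cubes of linear forms (up to absorbing constants into the forms)
    return ((y1 + y2)**3 + (y1 - y2)**3) / 6 + ((y1 + y3)**3 - (y1 - y3)**3 - 2 * y3**3) / 6

random.seed(0)
for _ in range(100):
    y = [random.uniform(-2, 2) for _ in range(3)]
    assert abs(F(*y) - decomposition(*y)) < 1e-9
```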
\noindent
It is straightforward to generalize this fact as follows.
\begin{proposition}\label{prop7}
The cubic form $F=x_0(x_0x_1+x_2x_3+x_4^2+\ldots +x_n^2)$ has rank $\leq 2n+1$.
\begin{proof}
We prove it by induction on $n$. The proposition holds for $n=2$ by Proposition \ref{prop6}.
Let us suppose the proposition true for all $i\leq n-1$ and prove the case $i=n$. Introduce the coordinate system given by the following linear transformation.
\begin{equation}\label{coordinate3}
\left\{\begin{array}{rll}
x_0=y_1\\
x_1=y_3\\
x_2=y_0+y_2\\
x_3=y_0-y_2\\
x_4=y_4\\
\vdots \ \ \ \ \ \\
x_n=y_n\\
\end{array}\right.
\end{equation}
\noindent
Then, the cubic becomes $F=y_0^2y_1-y_1y_2^2+y_1^2y_3+y_1y_4^2+\ldots +y_1y_n^2$.
Setting $G=\int \partial_0 F dy_0=y_0^2y_1+\frac{1}{3} y_1^3$, we write $F=G - \frac{1}{3} y_1^3-y_1y_2^2+y_1^2y_3+y_1y_4^2+\ldots+y_1y_n^2$. We have that $rk(G)=2$. Let $H=-\frac{1}{3}y_1^3-y_1y_2^2+y_1^2y_3+y_1y_4^2+\ldots +y_1y_n^2$. Since $H$ is a cubic form in $\mathbb P^{n-1}$ that factors as the product of a smooth quadric $Q$ and a hyperplane $L$ tangent to $Q$ (hence $H$ is of type $C$), the inductive assumption gives $rk (H)\leq 2(n-1)+1$. Thus $rk (F)\leq rk (G) + rk (H)\leq 2+2(n-1)+1=2n+1$. Repeating the argument, one obtains an explicit decomposition of $F$.
\end{proof}
\end{proposition}
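The coordinate change (\ref{coordinate3}) and the splitting $F = G + H$ in the proof of Proposition \ref{prop7} are likewise polynomial identities; here is a numerical spot check for $n = 4$ (our code):

```python
import random

def F_y(y0, y1, y2, y3, y4):
    # the cubic after the change of coordinates (n = 4)
    return y0**2 * y1 - y1 * y2**2 + y1**2 * y3 + y1 * y4**2

def F_x(y0, y1, y2, y3, y4):
    # original form x0 (x0 x1 + x2 x3 + x4^2) under the substitution (coordinate3)
    x0, x1, x2, x3, x4 = y1, y3, y0 + y2, y0 - y2, y4
    return x0 * (x0 * x1 + x2 * x3 + x4**2)

def G(y0, y1, y2, y3, y4):
    # rank-2 part: y0^2 y1 + y1^3/3 = ((y0 + y1)^3 + (y1 - y0)^3)/6
    return ((y0 + y1)**3 + (y1 - y0)**3) / 6

def H(y0, y1, y2, y3, y4):
    # remainder, a type-C cubic in one fewer variable (no y0)
    return -y1**3 / 3 - y1 * y2**2 + y1**2 * y3 + y1 * y4**2

random.seed(1)
for _ in range(100):
    y = [random.uniform(-2, 2) for _ in range(5)]
    assert abs(F_x(*y) - F_y(*y)) < 1e-9
    assert abs(F_y(*y) - (G(*y) + H(*y))) < 1e-9
```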
\begin{remark}
By \cite[Theorem~1.3]{LT}, the rank of the cubic forms of type $C$ is $\geq 2n$.
\end{remark}
\begin{remark}
The ranks of the reducible cubic forms are quite different from the generic rank of cubic forms given by the Alexander-Hirschowitz Theorem \ref{alexander-hirschowitz}: for sufficiently large values of $n$, the ranks of reducible cubics are smaller than the rank of the generic cubic.
\end{remark}
\section{Proof of Theorem \ref{theo1}}
\begin{proof}
We prove it by induction on $n$. If $n=1$, every binary cubic is reducible, and it is well known that binary forms of degree $d$ have rank at most $d$; hence $W_3(1)\subseteq \lbrace 1,2,3\rbrace$. Suppose now that the statement holds for $i\leq n-1$; we show it for $i=n$. The ranks in $ W_3(n)\setminus W_3(n-1)$ come from forms which are not cones, and by Theorem $\ref{Theo2}$ such forms have rank at most $2n+1$. By induction, $W_3(n-1)\subseteq \lbrace 1,\ldots,2n-1\rbrace$, and so $W_3(n)\subseteq \lbrace 1,\ldots,2n+1\rbrace$.
\end{proof}
Motivated by the result of Segre \cite{Segre}, we state the following
\begin{conjecture}\label{conj on cubic form of type C}
The Waring rank of the reducible cubic forms of type $C$ in $n+1$ variables is $2n+1$.
\end{conjecture}
\begin{remark}\label{remark on HF}
The conjecture above states that $F=y_0^2y_1-y_1y_2^2+y_1^2y_3+y_1y_4^2+\ldots +y_1y_n^2$ has rank $\geq 2n+1$. The ideal $F^\perp$ is minimally generated by $\partial_i\partial_3$ (for $i\neq 1$), $\partial_1\partial_3-\partial_i^2$ (for $i\neq 1,2,3$), $\partial_1\partial_3+\partial_2^2$, $\partial_i\partial_j$ (for $i,j\neq 1,3$), $\partial_i^3$ (for $i \neq 3$), $\partial_1^2\partial_i$ (for $i\neq 3$). \\
\indent The degree of a zero-dimensional scheme can be computed using Hilbert functions. Let $\mathbb X$ be a set of decomposition points of $F$ and set $I=I(\mathbb X)\subset F^\perp$. Let us suppose that $\mathbb X$ has no points on $\lbrace \partial_3=0\rbrace$. In this case, $\partial_3$ is not a zero-divisor in $T/I$, which is crucial here. Then the degree of $\mathbb X$ is given by
$$
\deg \mathbb X=\sum_{i\geq 0} \Delta \mbox{HF}(T/I,i)=\sum_{i\geq 0} \mbox{HF}(T/(I+\langle \partial_3\rangle),i)\geq \sum_{i\geq 0} \mbox{HF}(T/(F^\perp+\langle \partial_3\rangle),i)=2n+1,
$$
\noindent
where the Hilbert function of $T/(F^\perp+\langle\partial_3\rangle)$ is the sequence $(1,n,n,0,0,\ldots)$.\\
The case when $\mathbb X$ has points on $\lbrace \partial_3=0\rbrace$ requires a more careful analysis which we show for $n=2$.
\end{remark}
We propose a technique based on apolarity and Hilbert functions that might be generalized to higher dimensions. We will show it dealing with the known case $n=2$.
\begin{example}[$n=2$]
Let us denote $T=\mathbb C[\partial_1,\partial_2,\partial_3]$. In this case, we have $F=y_1(y_1y_3+y_2^2)$. The apolar ideal of $F$ is
$F^\perp=\langle \partial_1\partial_3-\partial_2^2,\partial_2\partial_3,\partial_3^2,\partial_1^3,\partial_1^2\partial_2,\partial_2^3\rangle$. Let $\mathbb X$ be a set of decomposition points of $F$ and let us set $I=I(\mathbb X)$.\\
\indent If $\mathbb X$ has no points on $\lbrace\partial_3=0\rbrace$, then $\partial_3$ is not a zero-divisor of $T/I$. Then
$$\deg \mathbb X=\sum_{i\geq 0} \Delta \mbox{HF}(T/I,i)=\sum_{i\geq 0} \mbox{HF}(T/(I+\langle\partial_3\rangle),i)\geq \sum_{i\geq 0} \mbox{HF}(T/(F^\perp+\langle\partial_3\rangle),i)=5.$$
\noindent
Indeed, $I+\langle\partial_3\rangle\subset F^\perp+\langle\partial_3\rangle=\langle \partial_3,\partial_1^2\partial_2,\partial_2^2,\partial_1^3,\partial_2^3 \rangle$ and the Hilbert function of $T/(F^\perp+\langle\partial_3\rangle)$ is the sequence $(1,2,2,0,0,\ldots)$, as in Remark \ref{remark on HF} above.\\
\indent Let us assume that $\mathbb X$ has some point on $\lbrace\partial_3=0\rbrace$. If $\dim I_2\leq 1$, then the Hilbert function of $T/I$ is the sequence $(1,3,m,\ldots)$ with $m\geq 5$, and hence again $\deg \mathbb X\geq 5$. So let us assume $\dim I_2\geq 2$. Note that $I_2\subset F^\perp_2=\langle \partial_1\partial_3-\partial_2^2,\partial_2\partial_3,\partial_3^2\rangle$.
There exists a two-dimensional subspace of conics $L\subset I_2\subset F^\perp_2$. Either $L$ is the pencil $a\partial_3^2+b\partial_2\partial_3$, whose base locus is $\lbrace\partial_3=0\rbrace$, or $I_2$ contains some irreducible conic of equation $\partial_1\partial_3-\partial_2^2+a\partial_3^2+b\partial_2\partial_3$, whose only intersection with $\lbrace\partial_3=0\rbrace$ is the point $(1:0:0)$. The first case is not possible, since otherwise $\mathbb X\subset \lbrace\partial_3=0\rbrace$, namely $\partial_3 F=0$, which is false. Hence we have $\mathbb X\cap\lbrace\partial_3=0\rbrace=\{(1:0:0)\}.$ This implies that $\mathbb X\cap\lbrace\partial_3=0\rbrace\subset \mathbb X\cap\lbrace\partial_2=0\rbrace$. Then $\partial_3$ does not vanish at any point of $\mathbb X'=\mathbb X\cap\lbrace\partial_2\not=0\rbrace$. Note that $\deg \mathbb X'\leq \deg \mathbb X-1$ because the point $(1:0:0)$ does not belong to $\mathbb X'$. Setting $J=(I:\partial_2)$, the ideal of $\mathbb X'$, we have that $\partial_3$ is not a zero-divisor of $T/J$, so we can compute
$$\deg \mathbb X'=\sum_{i\geq 0} \mbox{HF}(T/(J+\langle\partial_3\rangle),i)\geq \sum_{i\geq 0} \mbox{HF}(T/((F^\perp:\partial_2)+\langle\partial_3\rangle),i)\geq 4,$$
\noindent
since $(F^\perp:\partial_2)+\langle\partial_3\rangle=\langle \partial_3,\partial_1^2,\partial_2^2\rangle$. Finally, $\deg \mathbb X\geq\deg \mathbb X'+1\geq 5$, which, by the Apolarity Lemma \ref{ApolarityLemma}, says that the rank of $F$ is at least $5$.
\end{example}
\begin{Acknowledgement}
This paper is part of the author's Master's thesis at the University of Catania. I would like to thank Riccardo Re for his support and encouragement. I would also like to thank Enrico Carlini and Zach Teitler for insightful discussions.
\end{Acknowledgement}
| {
    "timestamp": "2014-12-18T02:12:34",
    "yymm": "1305",
    "arxiv_id": "1305.5394",
    "language": "en",
    "url": "https://arxiv.org/abs/1305.5394",
    "abstract": "Let $W_3(n)$ be the set of Waring ranks of reducible cubic forms in $n+1$ variables. We prove that $W_3(n)\\subseteq \\lbrace 1,..., 2n+1\\rbrace$.",
    "subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC)",
    "title": "A note on the Waring ranks of reducible cubic forms"
} |
https://arxiv.org/abs/1809.05769 | Differentiation Matrices for Univariate Polynomials | We collect here elementary properties of differentiation matrices for univariate polynomials expressed in various bases, including orthogonal polynomial bases and non-degree-graded bases such as Bernstein bases and Lagrange \& Hermite interpolational bases.
\section{Introduction}
The transformation of the (possibly infinite) vector of coefficients $\mathbf{a}={\{a_k\}}_{k\geq0}$ in the expansion
\begin{equation}
f(x)=\sum_{k\geq0} a_k\phi_k(x)
\end{equation}
\noindent to the vector of coefficients $\mathbf{b} = \{b_k\}_{k\geq0}$ in the expansion
\begin{equation}
f'(x) = \sum_{k\geq0} b_k\phi_k(x)
\end{equation}
\noindent is, of course, linear because the differentiation operator is linear\footnote{We are assuming that $f(x)$ is a differentiable function and that the set $\{\phi_k\}_{k\geq0}$, which we will now sometimes collect in a row vector $\boldsymbol{\phi}$, is a complete basis. However, the bulk of this paper will be about finite $\boldsymbol{\phi}$ representing polynomials of degree at most~$n$. Note that we represent $f'(x)$ and $f(x)$ in the same basis.}. Here the $\phi_k(x)$ are univariate polynomials. The matrix representation of the linear transformation from $\mathbf{a}$ to $\mathbf{b}$, denoted $\mathbf{b}=\mathbf{D}_{\phi}\mathbf{a}$, is
\begin{equation}
\mathbf{D}_{\phi}=[d_{ji}]
\end{equation}
\noindent where $d_{ij}$ is the coefficient of $\phi_j(x)$ in the expansion of $\phi_i'(x)$ (note the transposition); that is, the $d_{ij}$ are from
\begin{equation}
\phi_i'(x)=\sum_{j\geq0}d_{ij}\phi_j(x)\>.
\end{equation}
$\mathbf{D}_{\phi}$ is called a \textsl{differentiation matrix}.
In vector form, $f(x)=\boldsymbol{\phi}(x)\mathbf{a}$ (where $\boldsymbol{\phi}=[\phi_0(x),\phi_1(x),\dots]$) so
\begin{align}
f'(x)=&\boldsymbol\phi'(x)\mathbf{a} \nonumber \\
=&\boldsymbol\phi(x)\mathbf{D}_{\phi}\mathbf{a} \nonumber\\
=&\boldsymbol{\phi}(x)\mathbf{b}\>.
\end{align}
Alternatively, we might work with $f'(x) = \mathbf{b}^T\boldsymbol{\phi}^T(x)$ and in that case use the transpose of $\ensuremath{\mat{D}}$, in $\mathbf{b}^T = \mathbf{a}^T\ensuremath{\mat{D}}^T$.
The most familiar differentiation matrix is of course that of the monomial basis $\phi_k(x)=x^k$. In this basis, the $4\times 4$ differentiation matrix, for polynomials of degree at most 3, is
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{monomial}}} = \left[
\begin{array}{cccc}
0 & 1 & 0& 0 \\
0& 0 & 2 & 0 \\
0 & 0 & 0 & 3\\
0 & 0 & 0 & 0\\
\end{array}
\right].
\end{equation}
This generalizes easily to the degree $n$ case.
This operation is so automatic that it's only rarely realized that it even has a matrix representation. If we are truncating to polynomials of degree at most~$n$ then
the finite
matrix $\mathbf{D}_{\textrm{monomial}}$ is
defined by:
\begin{equation}\label{DM}
\mathbf{D}_{\textrm{monomial}} = \left[
\begin{array}{cccccc}
0 & 1 & 0&\cdots\\
0 & 0 & 2 & 0 &\cdots \\
& & 0 & 3 & 0 &\\
& & & \ddots & \ddots & \\
& & & & & n\\
&&&&& 0\\
\end{array}
\right],
\end{equation}
an $n+1$ by $n+1$ matrix.
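The action of $\mathbf{D}_{\textrm{monomial}}$ is easy to check numerically. The following short NumPy sketch (our own illustration, not part of the original development) builds the matrix of (\ref{DM}) and differentiates a cubic:

```python
import numpy as np

def d_monomial(n):
    """(n+1) x (n+1) differentiation matrix for the basis 1, x, ..., x^n."""
    return np.diag(np.arange(1.0, n + 1), k=1)   # (x^k)' = k x^(k-1)

# p(x) = 1 + 2x + 3x^2 + 4x^3  ->  p'(x) = 2 + 6x + 12x^2
a = np.array([1.0, 2.0, 3.0, 4.0])
b = d_monomial(3) @ a                            # b = [2, 6, 12, 0]
```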
Differentiation matrices in other bases, such as the Chebyshev basis, Lagrange interpolational basis, or the Bernstein basis, are also useful in practice and we will see several explicit examples.
\subsection{Reasons for studying Differentiation Matrices}
Differentiation matrices are used in spectral methods for the numerical solution of ordinary and partial differential equations, going back to their implicit use by Lanczos with the Chebyshev basis\footnote{Actually, Lanczos used the generalized inverse, $\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}^+$ which turns out to be bidiagonal for the Chebyshev basis; this is simpler for hand computation. We will see this later in this section.}. They can be used for quadrature, especially Filon or Levin quadrature, for highly oscillatory integrands. The first serious study seems to be~\cite{Don(1995)}. One of the present authors is working on this now~\cite{CorlessTrivedi:2018}. See also~\cite{Weideman},~\cite{olver2013fast}, and Chapter 11 of~\cite{corless2013graduate}.
In this paper we study differentiation matrices that occur when using various polynomial bases. We confine ourselves to using one fixed basis $\{\phi_k\}_{k\geq0}$ for both $f(x)$ and $f'(x)$, but sometimes there are advantages to using a different basis for $f'$ than for $f$: see~\cite{olver2013fast} for an example. The reasons for using the Chebyshev basis or the Lagrange basis include superior conditioning of expressions for functions in those bases, and sometimes superior convergence. The reasons for studying general, basis-independent properties, as this paper does, include the power of abstraction and the potential to apply the results to other bases perhaps more suited to the problem of your current interest. Another purpose is to see the relationships among the various bases.
It helps exposition to have some example bases in mind, in order to make the general theory intelligible and interesting, so we describe the differentiation matrices for a few polynomial bases in the next section.
\subsection{Example Differentiation Matrices}
Before we give examples, we repeat the following general observation:
The columns of $\mathbf{D_{\phi}}$ are the coefficients of the derivatives $\phi_k'$ expressed in the $ \{\phi_k\}$.
\\
\noindent\textsl{Proof}: If $f(x)=\phi_k(x)$ then $\mathbf{a}=\mathbf{e}_k$ and $\mathbf{D}\mathbf{a}=\mathbf{d}_k$, the $k$-th column of $\mathbf{D}$; but $\phi_k'(x)=\sum_{j=0}^n c_j\phi_j(x)$ for some $c_j$, and so $\mathbf{d}_k=[c_0, c_1, \ldots, c_n]^T$, by definition.
\\
\noindent\textsl{Corollary}: if $\{\phi_k\}$ is degree-graded (\textsl{i.e.} $\mathrm{deg}\phi_k=k$), then $\mathbf{D}$ is strictly upper triangular.
This is not true, of course, if $\{\phi_k\}$ is not degree-graded, \textsl{e.g.} $\phi_k$ is a Bernstein, Lagrange or Hermite interpolational basis.
\subsubsection{Chebyshev Polynomials}
One of the first kinds of differentiation matrices to be studied was for Chebyshev polynomials, \textsl{i.e.} $T_0(x)=1$, $T_1(x) = x$, and $T_{k+1}(x) = 2xT_k(x)-T_{k-1}(x)$; alternatively, $T_k(x) = \cos(k\arccos(x))$ on $-1\leq x \leq1$. See for instance~\cite{olver2013fast} or (more briefly) Chapter 2 of~\cite{corless2013graduate}. For a thorough and modern introduction with application to the Chebfun software project see~\cite{berrut2004barycentric}. The derivative of $T_k(x)$ is explicitly given in terms of $T_0, T_1,\ldots,T_{k-1}$ as a sum in~\cite{olver2013fast}, and as a Maple program in~\cite{corless2013graduate}.
\begin{equation}
\frac{dT_k(x)}{dx}=
\begin{cases}
0 & k=0 \\
-k\left(\dfrac{1+(-1)^{k-1}}{2}\right)T_0(x)+2k\displaystyle\sum_{j=0}^{\floor{\frac{k-1}{2}}} T_{k-1-2j}(x)& k\geq 1 \>.\\
\end{cases}
\end{equation}
(The negative first term compensates for the doubled contribution of $T_0$ in the sum when $k$ is odd.)
Here the notation $\lfloor x\rfloor$ means the floor of $x$, the largest integer not greater than~$x$.
From this formula we may construct the infinite differentiation matrix $\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}$,
defined by
\begin{equation}\label{DC}
\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}} = \left[
\begin{array}{ccccccccc}
0 & 1 & 0 & 3 & 0 & 5 & 0 & 7 & \cdots\\
& 0 & 4 & 0 & 8 & 0 & 12 & 0 & \cdots\\
& & 0 & 6 & 0 & 10 & 0 & 14 &\cdots\\
&&& 0 & 8 & 0& 12 & 0 &\cdots\\
&&&& 0 & 10 & 0 & 14&\cdots \\
&&&& & 0 & 12 & 0 &\cdots\\
&&&&&& 0 & 14 & \ddots \\
&&&&&&& 0 & \ddots \\
&&&&&&& & \ddots
\end{array}
\right].
\end{equation}
As we see, the matrix is strictly upper triangular, just as the monomial basis matrix was; this is because the degree of $T_k'$ is $k-1$. Finite order differentiation matrices for Chebyshev polynomials are merely truncations of this. For a recent application of this matrix to the solution of pantograph equations, see~\cite{YANG2018132}.
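Truncations of $\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}$ can be generated column-by-column, using the observation above that column $k$ holds the coefficients of $T_k'$. A NumPy sketch (our own illustration, relying on \texttt{numpy.polynomial.chebyshev.chebder}):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def d_chebyshev(n):
    """Column k holds the Chebyshev coefficients of T_k'."""
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        e = np.zeros(n + 1)
        e[k] = 1.0
        dk = C.chebder(e)        # coefficients of T_k' in the T_j basis
        D[:len(dk), k] = dk
    return D

D = d_chebyshev(7)
```

The entries reproduce the upper-left corner of (\ref{DC}).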
\medskip
\goodbreak
\textbf{Remark}
Lanczos thought that this was cumbersome, and preferred the more compact \textsl{antiderivative} formulation (see~\cite{corless2013graduate} pp 125-126)
\begin{equation}
\int T_k(x)dx=\frac{1}{2(k+1)} T_{k+1}(x)-\frac{1}{2(k-1)}T_{k-1}(x)+\frac{k\sin(k\pi/2)}{k^2-1}\>,
\end{equation}
(giving a correct value $\frac{(T_2(x)+T_0(x))}{4}$ in the limit as $k\rightarrow1$; also $\int T_0(x)dx=T_1(x)).$
This allows a simpler transformation from the \textsl{derivative}
\begin{equation}
f'(x)=\sum_{k\geq0} b_kT_k(x)
\end{equation}
to its \textsl{antiderivative}
\begin{equation}
f(x)=\sum_{k\geq0} a_kT_k(x)
\end{equation}
\noindent by what we will see is a generalized inverse of $\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}$:
The infinite
tridiagonal matrix $\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}^+$,
derived from the antiderivative formula above, is, except for its first row (set to zero here),
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Chebyshev}}}^{+} = \left[
\begin{array}{cccccc}
0 & 0 & & & &\\
1 & 0 & -1/2 & & &\\
& 1/4 & 0 & -1/4 & &\\
& & 1/6 & 0 & -1/6 & \\
&&& 1/8 & 0 & \ddots\\
&&&& 1/10 &\ddots\\
\end{array}
\right]\>.
\end{equation}
This matrix is tridiagonal (with $0$ on main diagonal). Here the first row is $0$, meaning that an arbitrary constant can be added to the integral. We will see that the first row and the final column of truncations of this matrix will not matter for antiderivatives of degree $n-1$ polynomials.
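The last claim can be checked numerically: in the sketch below (our own illustration), we build the truncated antiderivative matrix from the formula above with its first row zeroed, and verify that differentiation undoes it on polynomials of degree at most $n-1$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
# Differentiation matrix: column k = Chebyshev coefficients of T_k'
D = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    e = np.zeros(n + 1)
    e[k] = 1.0
    D[:n, k] = C.chebder(e)

# Antiderivative matrix from  int T_k = T_{k+1}/(2(k+1)) - T_{k-1}/(2(k-1)),
# with the first row (the arbitrary constant) set to zero.
A = np.zeros((n + 1, n + 1))
A[1, 0] = 1.0                      # int T_0 = T_1
for k in range(1, n + 1):
    if k + 1 <= n:
        A[k + 1, k] = 1.0 / (2 * (k + 1))
    if k >= 2:
        A[k - 1, k] = -1.0 / (2 * (k - 1))

rng = np.random.default_rng(0)
b = rng.standard_normal(n + 1)
b[-1] = 0.0                        # degree at most n-1, so truncation is harmless
```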
\subsubsection{Legendre Polynomials}
The Legendre polynomials $\{P_n\}_{n\geq0}$ satisfy $P_0(x)=1$, $P_1(x)=x$, and
\begin{equation}
\int_{-1}^1P_n(x)P_m(x)dx=\frac{2}{2n+1}[n=m]\>.
\end{equation}
Here $[n=m]$ is $1$ when $n=m$ and $0$ otherwise; this is a combinatorial notation, called Iverson's convention in~\cite{knuth1992two}, for what is elsewhere termed the Kronecker delta function, and a discussion of its merits can be found in that paper. The Legendre polynomials satisfy the three-term recursion relation
\begin{equation}
(n+1)P_{n+1}(x)-(2n+1)xP_n(x)+nP_{n-1}(x)=0\>.
\end{equation}
By inspection, the differentiation matrix for polynomials $p(x)=\sum_{k\geq 0}c_kP_k(x)$ is, if $p'(x)=\sum_{k\geq 0}d_kP_k(x),$
\begin{gather}\label{DL}
\begin{bmatrix} d_0 \\
d_1 \\
d_2 \\
\vdots
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \cdots \\
0 & 0 & 3 & 0 & 3 & 0 & 3 & 0 \cdots \\
0 & 0 & 0 & 5 & 0 & 5 & 0 & 5 \cdots \\
0 & 0 & 0 & 0 & 7 & 0 & 7 & 0 \cdots \\
& & & & \ddots & \ddots & \ddots & \ddots &\\
\end{bmatrix}
\begin{bmatrix}
c_ 0\\
c_1 \\
c_2 \\
\vdots
\end{bmatrix}
\end{gather}
and
\begin{gather}
\begin{bmatrix} K \\
c_1 \\
c_2 \\
c_3\\
\vdots
\end{bmatrix}
=
\begin{bmatrix}
0 & -\frac{1}{3} \\
1 & 0 & -\frac{1}{5} \\
& \frac{1}{3} & 0 & -\frac{1}{7} \\
& & \frac{1}{5} & 0 & -\frac{1}{9} \\
& & & \frac{1}{7 }& 0 & \ddots \\
& & & & \frac{1}{9} & \ddots \\
& & & & & \ddots \\
\end{bmatrix}
\begin{bmatrix}
d_ 0\\
d_1 \\
d_2 \\
\vdots
\end{bmatrix}.
\end{gather}
Like the matrix for the Chebyshev polynomials, the generalized inverse of $\ensuremath{\mathbf{D}_{\mathrm{Legendre}}}$ is tridiagonal. The simplicity of these matrices recommends them.
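As in the Chebyshev case, truncations of $\ensuremath{\mathbf{D}_{\mathrm{Legendre}}}$ can be generated column-by-column; the NumPy sketch below (our own illustration, using \texttt{numpy.polynomial.legendre.legder}) reproduces the banded pattern displayed in (\ref{DL}).

```python
import numpy as np
from numpy.polynomial import legendre as L

def d_legendre(n):
    """Column k holds the Legendre coefficients of P_k'."""
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        e = np.zeros(n + 1)
        e[k] = 1.0
        D[:n, k] = L.legder(e)
    return D

D = d_legendre(7)
```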
\subsubsection{General Differentiation Matrix for Degree-Graded Polynomial Bases}
Real polynomials $\{\phi_n(x)\}_{n=0}^{\infty}$ with $\phi_n(x)$
of degree $n$ which are orthonormal on an interval of the real
line (with respect to some nonnegative weight function)
necessarily satisfy a three-term recurrence relation (see Chapter
10 of \cite{Davis(1963)}, for example). These relations can be written in the form
\begin{equation} \label{eq.rec}
x \phi_j(x)=\alpha_j \phi_{j+1}(x) +\beta_j \phi_j(x) +
\gamma_j\phi_{j-1}(x), \quad \quad j=0,1,\ldots, \end{equation} where the
$\alpha_j,\;\beta_j,\;\gamma_j$ are real, $\alpha_j\ne0$, $\phi_{-1}(x)\equiv 0$, and
$\phi_0(x)\equiv 1$.
Besides orthogonal polynomials, one can easily observe that the standard basis and Newton basis also satisfy (\ref{eq.rec}) with $\alpha_j= 1,\;\beta_j= 0,\;\gamma_j= 0$ and $\alpha_j= 1,\;\beta_j= z_j,\;\gamma_j= 0$, respectively where the $z_j$ are the nodes.
\textbf{Lemma}: $\ensuremath{\mathbf{D}_{\mathrm{Degree-Graded}}}$ has the following structure:
\begin{equation}\label{DG1}
\ensuremath{\mathbf{D}_{\mathrm{Degree-Graded}}} = \left[
\begin{array}{cccc}
0&0&\cdots&0\\
& & & \vdots\\
& \Huge{\mathbf{Q}}&& \\
& & &0
\end{array}
\right]^{T},
\end{equation}
where
\begin{equation}\label{DG}
\mathbf{Q}_{i, j}= \left\{\begin{array}{cl}
\frac{i}{\alpha_{i-1}}, & i=j\\
\frac{1}{\alpha_{i-1}}((\beta_{j-1}-\beta_{i-1})\mathbf{Q}_{i-1, j}+ \alpha_{j-2}\mathbf{Q}_{i-1, j-1}+ \gamma_j \mathbf{Q}_{i-1, j+1}- \gamma_{i-1}\mathbf{Q}_{i-2, j}). & i>j
\end{array}\right.\end{equation}
Any entry of $\mathbf{Q}$ with a negative or zero index is taken to be zero in the above formula.\\
\noindent\textsl{Proof}: We provide the sketch of proof here. The proof itself is straightforward, but time-consuming. Taking the derivative of (\ref{eq.rec}) with respect to $x$, we have
\begin{equation} \label{deq.rec}
x \phi'_j(x)+ \phi_j(x)=\alpha_j \phi'_{j+1}(x) +\beta_j \phi'_j(x) +\gamma_j\phi'_{j-1}(x), \quad \quad j=0,1,\ldots,
\end{equation}
We let $j= 0$ in (\ref{deq.rec}) and simplify to get $$\phi'_1(x)= \frac{1}{\alpha_0}\phi_0(x).$$ We then let $j= 1$ in (\ref{deq.rec}) and simplify using (\ref{eq.rec}) with $j= 0$ and the result from the previous step to get $$\phi'_2(x)= \frac{\beta_0-\beta_1}{\alpha_0\alpha_1}\phi_0(x)+ \frac{2}{\alpha_1}\phi_1(x).$$ If we continue like this, and write the results in a matrix-vector form, the pattern stated in (\ref{DG}) will emerge.
\\
We can now find the matrices that we have for the monomial basis in (\ref{DM}), Chebyshev basis (\ref{DC}) and Legendre basis (\ref{DL}) directly from (\ref{DG1}) simply by plugging in the specific values for the $\alpha_j$, $\beta_j$, and $\gamma_j$ for each of them.
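The lemma can also be cross-checked numerically without using the recurrence (\ref{DG}) itself: build the monomial coefficients of the $\phi_k$ directly from (\ref{eq.rec}), differentiate in the monomial basis, and change basis back. The sketch below (our own illustration) recovers the monomial and Chebyshev matrices this way.

```python
import numpy as np

def dd_matrix(alpha, beta, gamma):
    """Differentiation matrix for the degree-graded basis generated by
    x*phi_j = alpha_j*phi_{j+1} + beta_j*phi_j + gamma_j*phi_{j-1}."""
    n = len(alpha)                        # basis phi_0, ..., phi_n
    P = np.zeros((n + 1, n + 1))          # P[i, j] = coeff of x^i in phi_j
    P[0, 0] = 1.0
    for j in range(n):
        xphi = np.roll(P[:, j], 1)        # multiply phi_j by x (degree < n, so safe)
        prev = P[:, j - 1] if j >= 1 else np.zeros(n + 1)
        P[:, j + 1] = (xphi - beta[j] * P[:, j] - gamma[j] * prev) / alpha[j]
    M = np.diag(np.arange(1.0, n + 1), k=1)   # monomial differentiation
    return np.linalg.solve(P, M @ P)          # D = P^{-1} M P

n = 6
# monomial basis: alpha = 1, beta = gamma = 0
Dm = dd_matrix(np.ones(n), np.zeros(n), np.zeros(n))
# Chebyshev basis: x T_0 = T_1;  x T_j = (T_{j+1} + T_{j-1})/2 for j >= 1
alpha = np.full(n, 0.5); alpha[0] = 1.0
gamma = np.full(n, 0.5); gamma[0] = 0.0
Dc = dd_matrix(alpha, np.zeros(n), gamma)
```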
Another important degree-graded basis of this kind is the Newton basis. In the simplest case, let a polynomial $P(x)$ be specified by the
data $\{ \left(z_j,{P}_j\right) \}_{j=0}^{n}$ where the $z_j$'s
are distinct. The \textsl{Newton polynomials} are then defined by setting
$N_0(x)=1$ and, for $k=1,\cdots,n,$
$$ N_k(x)=\prod_{j=0}^{k-1}(x-z_j)\>. $$
Then we may express \begin{equation} P(x)= \left[
\begin{array}{ccccc}
a_0&a_1& \cdots &a_{n-1}&a_n
\end{array}
\right]\left[
\begin{array}{ccccc}
N_0(x)\\N_1(x)\\\vdots\\N_{n-1}(x)\\N_n(x)
\end{array}
\right]\>.
\end{equation}
For $j=0,\cdots,n$, the $a_j$ can be found by divided differences as follows.
\begin{equation}
a_j=[P_0,P_1,\cdots,P_{j}],
\end{equation}
where we have $[P_j]=P_j$, and
\begin{equation}\label{divdif}
[P_{i},\cdots,P_{i+j}]=\frac{[P_{i+1},\cdots,P_{i+j}]-[P_i,\cdots,P_{i+j-1}]}{z_{i+j}-z_i}.
\end{equation}
A similar expression is possible even if the $z_j$ are not distinct, if we use \textsl{confluent} divided differences. We return to this later, but note that the Newton polynomials are well-defined even when the $z_j$ are not distinct. Indeed, if they are all equal, say $z_j = a$, we recover the Taylor polynomials $N_k(x)=(x-a)^k$.
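For distinct nodes, the divided-difference recursion (\ref{divdif}) admits a short in-place implementation; the sketch below (our own illustration) computes the Newton coefficients and checks them on data from $P(x)=x^2$.

```python
import numpy as np

def newton_coeffs(z, p):
    """Return a_j = [P_0, ..., P_j], the Newton coefficients, via the
    divided-difference recursion; z must contain distinct nodes."""
    a = np.array(p, dtype=float)
    n = len(z)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):   # sweep from the bottom up
            a[i] = (a[i] - a[i - 1]) / (z[i] - z[i - j])
    return a

# data from p(x) = x^2 at z = 0, 1, 2:  x^2 = 0*N_0 + 1*N_1 + 1*N_2
a = newton_coeffs([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```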
If in (\ref{eq.rec}) we let $\alpha_j=1$, $\beta_j=z_j$,
and $\gamma_j=0$, it becomes the recurrence for the Newton basis. For $n= 4$, $\ensuremath{\mathbf{D}_{\mathrm{Newton}}}$, as given by (\ref{DG1}), has the following form.
\begin{small}\begin{equation}\label{Nex}\hspace*{-0.5cm} \ensuremath{\mathbf{D}_{\mathrm{Newton}}}=\left[
\begin{array}{ccccc} 0&0&0&0&0\\1&0&0&0&0 \\z_{0}-z_{1}&2&0&0&0\\ (z_{0}-z_{2})(z_{0}-z_{1})& -2z_{2
}+z_{1}+z_{0}&3&0&0\\ (z_{0}-z_{3})(z_{0}-z_{2})(z_{0}-z_{1})& (z_{1}-z_{3})( z_{1}-2z_{2}+z_{0}) +(z_{0}-z_{2})(z_{0}-z_{1}) &-3z_{3}+z_{2}+z_{1}+z_{{0}}&4&0
\end{array}
\right]^{T}\end{equation}\end{small}
\subsubsection{Lagrange Bases}
\bigbreak
Differentiation matrices for Lagrange bases are particularly useful. See \cite{corless2013graduate}
Chapter 2 for a detailed derivation. We give a summary here to establish notation. We suppose that function values $\rho_k$ are given at distinct nodes $\tau_k$ (that is, $\tau_k=\tau_i \Leftrightarrow i=k,$ for $0\leq k\leq n$). Then the barycentric weights $\beta_k$ are found once and for all from the partial fraction expansion
\begin{equation}
\frac{1}{w(z)}=\frac{1}{\prod_{k=0}^n (z-\tau_k)}=\sum_{k=0}^n \frac{\beta_k}{z-\tau_k},
\end{equation}
giving
\begin{equation}
\beta_k=\prod_{\substack{j=0\\ j\neq k\\}}^n(\tau_k-\tau_j)^{-1}\>.
\end{equation}
These can be computed in a numerically stable fashion \cite{olver2013fast}, and once this has been done, the polynomial interpolant can be stably evaluated either by the first barycentric form
\begin{equation}
\rho(z)=w(z)\sum_{k=0}^n \frac{\beta_k\rho_k}{z-\tau_k}
\end{equation}
or the second,
\begin{equation}
\rho(z)=\frac{\sum_{k=0}^n \frac{\beta_k\rho_k}{z-\tau_k}}{\sum_{k=0}^n \frac{\beta_k}{z-\tau_k}}\>.
\end{equation}
See \cite{berrut2004barycentric} for details. Here we are concerned with the differentiation matrix
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}=[{d}_{ij}]
\end{equation}
(as derived in many places, but for instance see the aforementioned Chapter 11 of \cite{corless2013graduate}).\\ We have that
\begin{equation}
d_{ij}=\frac{\beta_j}{\beta_i(\tau_i-\tau_j)} \text{ for } i\neq j
\end{equation}
and
\begin{equation}
d_{ii}=-\sum_{j\neq i}d_{ij}\>.
\end{equation}
Construction of this matrix is an $O(n^2)$ process, and evaluation of the vector of polynomial derivatives $\mathbf{b}$ by
\begin{equation}
\mathbf{b}=\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}\boldsymbol{\rho}
\end{equation}
is also an $O(n^2)$ process. Once this has been done, then $\rho'(z)$ can be evaluated stably by re-using the previously computed barycentric weights:
\begin{equation}
\rho'(z)=w(z)\sum_{k=0}^n \frac{\beta_kb_k}{z-\tau_k}\>.
\end{equation}
If the derivative is to be evaluated \textsl{very} frequently, it may be cost-effective to modify the weights and throw away one node. This is usually not worth the bother.
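The formulas above translate directly into code. The sketch below (our own illustration) computes the barycentric weights and assembles $\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}$; for the nodes $[-1,-1/2,1/2,1]$ it reproduces the matrix displayed in the example that follows.

```python
import numpy as np

def lagrange_diff_matrix(tau):
    """Lagrange-basis differentiation matrix from the barycentric weights."""
    tau = np.asarray(tau, dtype=float)
    n = len(tau)
    beta = np.array([1.0 / np.prod([tau[k] - tau[j] for j in range(n) if j != k])
                     for k in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = beta[j] / (beta[i] * (tau[i] - tau[j]))
        D[i, i] = -np.sum(D[i])        # rows must sum to zero
    return D

D = lagrange_diff_matrix([-1.0, -0.5, 0.5, 1.0])
```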
\par\medskip\noindent
\textsl{Example} (taken from Chapter 11 in \cite{corless2013graduate}): Note that if $\tau=[-1, -\frac{1}{3},\frac{1}{3}, 1]$ then the differentiation matrix is
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}= \frac{1}{4}\left[
\begin{array}{cccc}
-11 & \phantom{-}18 & -9 & \phantom{-}2\\
-2 & -3 & \phantom{-}6 & -1\\
\phantom{-}1 & -6 & \phantom{-}3 & \phantom{-}2\\
-2 & \phantom{-}9 & -18 & \phantom{-}11\\
\end{array}
\right],
\end{equation}
so,
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}^+= \frac{1}{360}\left[
\begin{array}{cccc}
-81 & -147 & -123 & -9\\
\phantom{-}41 & -53 & -77 & -31\\
\phantom{-}31 & \phantom{-}77 & \phantom{-}53 & -41\\
\phantom{-}9 & \phantom{-}123 & \phantom{-}147 & \phantom{-}81\\
\end{array}
\right]\>.
\end{equation}
If instead $\tau=[-1, -1/2, 1/2, 1]$,
then it follows that
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}= \frac{1}{6}\left[
\begin{array}{cccc}
-19 & 24 & -8 & 3\\
-6 & 2 & 6 & -2\\
2 & -6 & -2 & 6\\
-3 & 8 & -24 & 19\\
\end{array}
\right],
\end{equation}
and
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}^+ = \frac{1}{720}\left[
\begin{array}{cccc}
-94 & -347 & -293 & 14\\
94 & -193 & -247 & -14\\
14 & 247 & 193 & -94\\
-14 & 293 & 347 & 94\\
\end{array}
\right].
\end{equation}
These matrices were displayed explicitly to demonstrate that, unlike the degree-graded case, the differentiation matrices are full, and their properties not very obvious\footnote{The row sums are zero, by design: the constant function has a constant vector representation, and its derivative should be (must be) zero. This is why $d_{ii}$ is the negative sum of all other entries.}.
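These displayed pairs can be sanity-checked numerically. Below (our own check, not in the original) we verify, for the pair on the nodes $[-1,-1/2,1/2,1]$, the generalized-inverse identity $\mathbf{D}\,\mathbf{D}^+\mathbf{D}=\mathbf{D}$, agreement with the Moore--Penrose pseudoinverse, and that $\mathbf{D}^+$ maps the constant derivative data $[1,1,1,1]^T$ to the zero-mean values of $x$ at the nodes.

```python
import numpy as np

# D and D^+ for tau = [-1, -1/2, 1/2, 1], transcribed from the displayed matrices
D = np.array([[-19, 24,  -8,  3],
              [ -6,  2,   6, -2],
              [  2, -6,  -2,  6],
              [ -3,  8, -24, 19]]) / 6.0
Dplus = np.array([[-94, -347, -293,  14],
                  [ 94, -193, -247, -14],
                  [ 14,  247,  193, -94],
                  [-14,  293,  347,  94]]) / 720.0
```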
If $\tau=[1, i, -1, -i]$,
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}} = \frac{1}{2}\left[
\begin{array}{cccc}
3 & -1+i & -1 & -1-i\\
-1+i & -3i & 1+i & i\\
1 & 1+i & -3 & 1-i\\
-1-i & -i & 1-i & 3i\\
\end{array}
\right],
\end{equation}
and
\begin{equation}
\ensuremath{\mathbf{D}_{\mathrm{Lagrange}}}^+ = \frac{1}{24}\left[
\begin{array}{cccc}
11 & 4-3i & 5 & 4+3i\\
-3+4i & 11i & 3+4i & 5i\\
-5 & -4-3i & -11 & -4+3i\\
-3-4i& -5i & 3-4i & -11i\\
\end{array}
\right],
\end{equation}
which again has no obvious pattern (but see Theorem 11.3 of~\cite{corless2013graduate}: at least the singular values are simple).
\subsubsection{Hermite Interpolational Bases}
A Hermite interpolational basis is likely to be a bit less familiar to the reader than the Lagrange basis. They can be derived from Lagrange bases by letting two or more distinct nodes ``flow together'' (from whence the word confluency comes). Many methods to compute Hermite interpolational basis representations of polynomials are known that fit consecutive function values and derivative values (i.e. $f(\tau_i)$, $f'(\tau_i)/1!$, $\ldots$, $f^{(s_i-1)}(\tau_i)/(s_i-1)!$ are consecutive scaled values of the derivatives of $f$ at a particular node $\tau_i$, which is said to have confluency $s_i$, a non-negative integer).
Many people use divided differences to express polynomials that fit confluent data, but this does not result in a Hermite interpolational basis (as pointed out in an earlier section, we would instead call that a Newton basis).
We can solve the Hermite interpolation problem using a Newton basis, which is a degree-graded basis, and its differentiation matrix can be found through equation~\eqref{DG1}.
Let's assume that at each node, $z_j$, we have the value and the derivatives of $P(x)$ up to the $s_j$-th order. The nodes at which the derivatives are given are treated as extra nodes. In fact we pretend that we have $s_j+1$ nodes, $z_j$, at which the value is $P_j$ and remember that $\sum_{i=0}^{k-1}s_i=n+1-k$. As such, the first $s_0+1$ nodes are $z_0$, the next $s_1+1$ nodes are $z_1$ and so on.
Using the divided differences technique, as given by equation~\eqref{divdif}, to find the $a_j$, whenever we get $[P_j,P_j,\cdots,P_j]$ where $P_j$ is repeated $m$ times, we have
\begin{equation}
[P_j,P_j,\cdots,P_j]=\frac{P^{(m-1)}_j}{(m-1)!},
\end{equation}
and all the values $P'_j$ to $P^{(s_j)}_j$ for $j=0,\cdots,k-1$ are given. For more details see e.g.~\cite{LJR(1983)}.
For this confluent Newton basis, like the simple Newton basis, $\alpha_j= 1$, $\beta_j= z_j$, and $\gamma_j= 0$, but some of the $\beta_j$ are repeated. Other than that, the differentiation matrix can be found for this basis in a manner identical to $\ensuremath{\mathbf{D}_{\mathrm{Newton}}}$. This approach was used in~\cite{AFS2017} to find the coefficients of the Birkhoff interpolation.
There are other advantages to solving the Hermite interpolation problem by using divided differences, for low degrees; the derivative is then almost directly available, for instance, and one does not really \textsl{need} a differentiation matrix.
But there are numerical stability disadvantages to the confluent Newton basis. The main one is related to the relatively poor conditioning of the basis itself, for high-degree interpolants. [This does not matter much if the degree is low.] The next most important disadvantage is that the condition number of the polynomial expressed in this basis can be different if a different ordering of the nodes is used (it is usually true that the Leja ordering is good, but even so the condition number can be bad). See~\cite{Corless(2004)} for numerical experiments that confirm this.
Another well-known solution to the Hermite interpolation problem involves constructing a basis that generalizes the Lagrange property, where each basis element is $1$ at one and only one node, and zero at all the others, which allows a direct sum to give the desired interpolant.
One possible such definition (there are many variations) for a Hermite interpolational basis is to define it as a set of polynomials $H_{i,j}(z)$ with the index $i$ corresponding to the node indices, so if the nodes are $\tau_i$ with $0 \le i \le n$ then again for $H_{i,j}(z)$ we would have $0 \le i \le n$. The second index $j$ looks after the confluency at each node: $0 \le j \le s_i - 1$. Importantly, one needs consecutive derivative data at each node (else one has a Birkhoff interpolation problem~\cite{AFS2017,butcher2011polynomial}).
Then we have the property (again written with the Iverson convention)
\begin{equation}
{H_{i,j}^{(k)}(\tau_\ell)} = [i=\ell][j=k];
\end{equation}
that is, unless \textsl{both} the node indices are the same and the derivative indices are the same,
the given derivative of the basis polynomial is zero at the given node; if both the node indices and the derivative indices \textsl{are} the same, then the (scaled) Hermite basis element takes the value $1$. Using this definition, one can write the interpolant as a linear combination of this Hermite interpolational basis: $p(x) = \sum_{i=0}^n\sum_{j=0}^{s_i-1} \rho_{i,j}H_{i,j}(x)$.
But there is a better way, that uses a stable partial fraction decomposition to get a collection of generalized barycentric weights $\ensuremath{\beta}_{i,j}$ that can be used to write down an efficient barycentric formula for evaluation of the polynomial. To be explicit, form the generalized node polynomial
\begin{equation}
w(z) = \prod_{i=0}^n (z-\tau_i)^{s_i}\>,
\end{equation}
which is exactly what you would get from the Lagrange node polynomial on letting each group of $s_i \ge 1$ distinct nodes flow together.
Then the barycentric weights from the partial fraction decomposition
of $1/w(z)$ must now account for the confluency:
\begin{equation}
\frac{1}{w(z)} = \sum_{i=0}^n \sum_{j=0}^{s_i-1} \frac{\beta_{i,j}}{(z-\tau_i)^{j+1}}\>. \label{eq:genparfrac}
\end{equation}
We will speak of the numerical computation of these $\beta_{i,j}$ shortly. Once we have them, we may simply write down barycentric forms of the polynomial that solves the Hermite interpolational problem: the first form is
\begin{equation}
p(z) = w(z) \sum_{i=0}^n \sum_{j=0}^{s_i-1} \sum_{k=0}^j \frac{\beta_{i,j}\rho_{i,k}}{(z-\tau_i)^{j+1-k}}\>.
\end{equation}
This form is simple to evaluate, and, provided the confluencies are not too large, numerically stable.
This form can be manipulated into a second barycentric form by replacing $w(z)$ with the reciprocal of its partial fraction expansion, equation~(\ref{eq:genparfrac}). The second form allows scaling of the generalized barycentric weights, which can prevent overflow.
Incidentally, this allows us to give explicit expressions for the $H_{i,j}$ above:
\begin{equation}
H_{i,j}(z) = \sum_{k=0}^{s_i-1-j} \beta_{i,j+k}\, w(z)(z-\tau_i)^{-k-1}\>.
\end{equation}
[Equivalent expressions are given in the occasional textbook, but not all works on interpolation do so; the formula seems to be rediscovered frequently.]
Given this apparatus, it makes sense to try to directly find the appropriate values of the derivatives at the nodes directly from the given function values and derivative values at the nodes; that is, by finding the differentiation matrix.
Rather than give the derivation (a complete one can be found in chapter 11 of~\cite{corless2013graduate}) we point to both Maple code and Matlab code that implements those formulas, at \url{http://www.nfillion.com/coderepository/Graduate_Introduction_to_Numerical_Methods/} in \texttt{BHIP.mpl} and \texttt{genbarywts.m}, respectively. Evaluation in {\sc Matlab}\ can be done with the code \texttt{hermiteeval.m}. We give an example below.
If a polynomial is known at three points, say $[-1,0,1]$, and the values of $p$, $p'$, and $p''/2$ are known at $-1$, while the values of $p$, $p'$, $p''/2$, and $p'''/6$ are known at $0$, and the values of $p$ and $p'$ are known at $1$, then the differentiation matrix is found to be
\begin{equation}
\left[ \begin {array}{ccccccccc} 0&1&0&0&0&0&0&0&0
\\ \noalign{\medskip}0&0&2&0&0&0&0&0&0\\ \noalign{\medskip}-{\frac{201
}{2}}&-{\frac{177}{4}}&-15&96&-60&24&-12&9/2&-3/4\\ \noalign{\medskip}0
&0&0&0&1&0&0&0&0\\ \noalign{\medskip}0&0&0&0&0&2&0&0&0
\\ \noalign{\medskip}0&0&0&0&0&0&3&0&0\\ \noalign{\medskip}{\frac{83}{
4}}&6&1&-24&12&-12&4&{\frac{13}{4}}&-1/2\\ \noalign{\medskip}0&0&0&0&0
&0&0&0&1\\ \noalign{\medskip}35&11&2&0&48&0&16&-35&11\end {array}
\right] \>.
\end{equation}
Applying this to the vector of values known at the nodes gives us the values of
$[p'(-1)$, $p''(-1)$, $p'''(-1)/2$,
$p'(0)$, $p''(0)$, $p'''(0)/2$ , $p^{(iv)}(0)/6$, $p'(1)$, $p''(1)]^T$, which describe $p'(z)$ on these nodes in the same way that $p(z)$ was described.
Notice that some rows are essentially trivial, and just move known values into their new places. The nontrivial rows will, when multiplied by vectors representing constants (that is, $[c, 0, 0, c, 0, 0, 0, c, 0]^T$), give the zero vector. These nontrivial rows are constructed by recurrence relations from the generalized barycentric weights $\ensuremath{\beta}_{i,j}$, which are themselves merely the coefficients in the partial fraction expansion of the reciprocal of the node polynomial.
There is more than one way to compute the generalized barycentric weights $\beta_{i,j}$. The fastest way that we know is the algorithm of~\cite{schneider1991hermite}, which internally uses a confluent Newton basis. Unfortunately, because it does so, it inherits the poor numerical stability of that approach. The codes referred to above use a direct local Laurent series expansion method instead, as outlined in~\cite{Henrici(1964)} for instance; this method is slower but much more stable. As discussed in~\cite{corless2013graduate}, however, it becomes less stable for higher confluency and cannot be perfectly backward stable even for $s_i \ge 3$.
We will see an example in section~\ref{sec:HermiteExample}.
\subsubsection{Bernstein Polynomials}
The Bernstein differentiation matrix is a tridiagonal matrix. Its entries are as follows:
\begin{equation}
[\mathbf{D_B}]_{i,j} =
\begin{cases}
2i-n & i=j \\
-i & j = i-1\\
n-i & j=i+1
\end{cases}\>.
\end{equation}
Here the row and column indices $i$ and $j$ run from $0$ to $n$.
For polynomials of degree at most $n=4$ expressed in the Bernstein basis, the matrix is explicitly
\begin{equation}
\left[ \begin {array}{ccccc} -4&4&0&0&0\\ \noalign{\medskip}-1&-2&3&0
&0\\ \noalign{\medskip}0&-2&0&2&0\\ \noalign{\medskip}0&0&-3&2&1
\\ \noalign{\medskip}0&0&0&-4&4\end {array} \right] \>.
\end{equation}
This is slightly different to the differentiation formulation seen in the Computer-Aided Geometric Design literature (e.g.~\cite{farin2014curves}), in that we \textsl{preserve the basis} to express the derivative in, even though that derivative is (nominally only) one degree too high. Degrees of polynomials expressed in Bernstein bases can be \textsl{elevated}, however, and when they are too high, they can be \textsl{lowered} or \textsl{reduced}. Indeed finding the \textsl{actual} degree of a polynomial expressed in a Bernstein basis can be, if there is noise in the coefficients, nontrivial. Here we simply keep the basis that we use to express $p(x)$, namely
\begin{equation}
p(x) = \sum_{i=0}^n c_i B_i^n(x)
\end{equation}
where
\begin{equation}
B_i^n(x) = {n \choose i} x^i (1-x)^{n-i}\>.
\end{equation}
By explicit computation, we find that the first column of the differentiation matrix (containing $-n$ in the zeroth row and $-1$ in the first row) correctly expresses the derivative of $B^n_0(x)$:
\begin{align}
-nB^n_0(x) - B^n_1(x) &= -n(1-x)^{n} - n x(1-x)^{n-1} \nonumber\\
&= -n(1-x)^{n-1}(1-x + x) \nonumber \\
&= \frac{d}{dx}B^n_0(x)\>.
\end{align}
Similarly, for $1 \le i \le n-1$,
\begin{align}
(n-i+1)B^n_{i-1} + (2i-n)B^n_i - (i+1)B^n_{i+1} &= x^{i-1}(1-x)^{n-i-1}{n \choose i}\left( i(1-x)^2 + (2i-n)x(1-x) - (n-i)x^2 \right) \nonumber \\
&= x^{i-1}(1-x)^{n-i-1}{n \choose i}\left( i-nx \right) \nonumber \\
&= \frac{d}{dx}B^n_i(x)\>.
\end{align}
By the reflection symmetry of $B^n_n(x)$ with $B^n_0(x)$, the final column is also correct.
\begin{remark}
As with the Lagrange polynomial bases, the pseudo-inverse of the Bernstein basis differentiation matrix is full. Also as with the Lagrange case, because $1 = \sum B^n_k(x)$ (that is, the Bernstein basis forms a partition of unity), application of the Bernstein differentiation matrix to a constant vector must return the zero vector and hence the row sums must be zero.
\end{remark}
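As a sanity check on both the tridiagonal formula and the partition-of-unity remark, the entries above are easy to generate programmatically. Here is a small Python sketch (ours, not part of the code repository cited earlier):

```python
import numpy as np

def bernstein_diff_matrix(n):
    """Tridiagonal differentiation matrix for degree <= n polynomials in the
    Bernstein basis, with the derivative expressed in the same basis."""
    D = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        D[i, i] = 2 * i - n           # diagonal entry: 2i - n
        if i >= 1:
            D[i, i - 1] = -i          # subdiagonal entry: -i
        if i <= n - 1:
            D[i, i + 1] = n - i       # superdiagonal entry: n - i
    return D

D4 = bernstein_diff_matrix(4)
# first row matches the explicit matrix displayed above for n = 4
assert (D4[0] == [-4, 4, 0, 0, 0]).all()
# partition of unity: differentiating a constant gives zero, so rows sum to 0
assert np.allclose(bernstein_diff_matrix(7).sum(axis=1), 0)
```

The row-sum check holds for every $n$, since $(2i-n)+(-i)+(n-i)=0$ on interior rows and the end rows are $\pm(-n,n)$.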
\section{Basic Properties}
\textsl{Definition}: Let $\mathbf{X}_{\phi}^k$ be the vector of coefficients of $x^k$ in the basis $\phi$. That is, if
\begin{equation}
x^k =b_{k,0}\phi_0(x)+b_{k,1}\phi_1(x)+\dots+b_{k,n}\phi_n(x)\>,
\end{equation}
then
\begin{equation}
\mathbf{X}_{\phi}^k=[b_{k,0},b_{k,1},\dots,b_{k,n}]^T\>.
\end{equation}
Set $\mathbf{1}_{\phi}=\mathbf{X}_{\phi}^0$. Let $\mathbf{V}$ be the matrix whose $k$-th column (numbering from zero) is $\frac{1}{k!}\mathbf{X}_{\phi}^k$.
\bigbreak
\subsection{Eigendecomposition of Differentiation Matrices}
Let $\mathbf{D}_{\phi}$ be the differentiation matrix for polynomials of degree at most $n$, expressed in the basis $\{\phi_k\}_{k=0}^n$. Note that if
\begin{equation}
p=\rho_0\phi_0+\rho_1\phi_1+\dots+\rho_n\phi_n
\end{equation}
then
\begin{equation}
p'=b_0\phi_0+b_1\phi_1+\dots+b_n\phi_n
\end{equation}
in the same basis; for a degree-graded basis, $b_n=0$. Then $\mathbf{D}_{\phi}\boldsymbol{\rho}=\mathbf{b}$ where
\begin{align}
\boldsymbol{\rho}=\begin{bmatrix}
\rho_{0} \\
\rho_{1} \\
\vdots \\
\rho_{n}
\end{bmatrix}
\end{align} and
\begin{align}
\mathbf{b}=\begin{bmatrix}
b_{0} \\
b_{1} \\
\vdots \\
b_{n}
\end{bmatrix}.
\end{align}
\textbf{Lemma}: $\mathbf{D}$ is nilpotent.\\
\noindent\textsl{Proof}: The $(n+1)$st derivative of any polynomial of degree at most $n$ is zero; hence $\mathbf{D}^{n+1}\boldsymbol{\rho}=\mathbf{0}$ for every coefficient vector $\boldsymbol{\rho}$, and so $\mathbf{D}^{n+1}=\mathbf{0}$ as required.\\
\noindent\textbf{Remark}: Therefore all eigenvalues of $\mathbf{D}$ are zero.\\
\noindent\textsl{Proposition}
$\mathbf{D}_{\phi}\mathbf{V}=\mathbf{VJ}$, where
\begin{equation}
\mathbf{J}=
\left[
\begin{array}{cccc}
0 & 1 & &\\
0 & 0 & 1 & \\
\vdots&\vdots&\ddots&1\\
0&&&0\\
\end{array}
\right],
\end{equation}
is the Jordan Canonical Form of the differentiation matrix $\mathbf{D}_{\phi}$.
\bigbreak
\noindent\textsl{Proof}: $\mathbf{D}_{\phi}\mathbf{X}_{\phi}^0=\mathbf{0}$ and $\mathbf{D}_{\phi}(\frac{1}{k!}\mathbf{X}_{\phi}^k)=\frac{1}{(k-1)!}\mathbf{X}_{\phi}^{k-1}$ for $k\geq 1$, by construction; this is exactly $\mathbf{D}_{\phi}\mathbf{V}=\mathbf{V}\mathbf{J}$, read column by column. Moreover the columns $\mathbf{X}_{\phi}^k$ are linearly independent, because the monomials $1,x,x^2,\dots,x^n$ are linearly independent and $\phi$ is a polynomial basis. Thus $\mathbf{V}$ is invertible.\\
\noindent\textbf{Remark} The isomorphism of the polynomial representation by coefficient vectors (of the basis $\phi$) is complete for addition, subtraction, differentiation, and scalar multiplication; but the representation of $p\cdot q$ is possible only if $\deg p+\deg q\leq n$. The multiplication rules are interesting as well; we get the usual Cauchy convolution for the monomial basis.
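For the Bernstein basis the proposition is easy to verify numerically. The Python sketch below (our illustration) builds $\mathbf{V}$ from the standard monomial-to-Bernstein expansion $x^k=\sum_{i\geq k}\frac{\binom{i}{k}}{\binom{n}{k}}B_i^n(x)$ and checks $\mathbf{D}_{\phi}\mathbf{V}=\mathbf{V}\mathbf{J}$:

```python
import numpy as np
from math import comb, factorial

n = 4
# Bernstein differentiation matrix: diagonal 2i-n, subdiagonal -i, superdiagonal n-i
D = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    D[i, i] = 2 * i - n
    if i > 0:
        D[i, i - 1] = -i
    if i < n:
        D[i, i + 1] = n - i

# column k of V holds the Bernstein coefficients of x^k / k!,
# using x^k = sum_{i >= k} [C(i,k)/C(n,k)] B_i^n(x)
V = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    for i in range(k, n + 1):
        V[i, k] = comb(i, k) / comb(n, k) / factorial(k)

J = np.diag(np.ones(n), 1)            # the nilpotent Jordan block
assert np.allclose(D @ V, V @ J)      # D_phi V = V J
assert abs(np.linalg.det(V)) > 0      # V is invertible (lower triangular here)
```

Since $V$ is lower triangular with positive diagonal in this basis, its invertibility is immediate.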
\bigbreak
\subsection{Pseudoinverse}
\textsl{Observation}
As long as $\deg p<n$, anti-differentiation works by using the pseudo-inverse; one then adds a constant times $\mathbf{1}_{\phi}$. Call this anti-differentiation matrix $\mathbf{S}$. Then we want $\mathbf{S}\mathbf{1}_{\phi}=\mathbf{X}_{\phi}^1$ and $\mathbf{S}\frac{\mathbf{X}^{k-1}}{(k-1)!}=\frac{\mathbf{X}^k}{k!}$.\\
\bigbreak
\noindent Therefore, writing $\mathbf{V}(1{:}n)$ for the first $n$ columns of $\mathbf{V}$, \begin{equation}\mathbf{S}\,\mathbf{V}(1{:}n)=[\mathbf{X}^1,\frac{\mathbf{X}^2}{2!},\dots,\frac{\mathbf{X}^{n-1}}{(n-1)!},\frac{\mathbf{X}^n}{n!}]\>,\end{equation}
\noindent and thus, writing $\mathbf{v}_n$ for the final column of $\mathbf{V}$,
\begin{equation}
\mathbf{V}\mathbf{J}^+\mathbf{V}^{-1}\mathbf{v}_n=\mathbf{V}\mathbf{J}^+[0, 0, \dots, 0, 1]^T=[0, 0, \dots, 0]^T\>.
\end{equation}
\textbf{Lemma}: \textsl{The Moore-Penrose pseudo-inverse of}
\begin{equation}
\mathbf{J}= \left[
\begin{array}{cccc}
0 & 1 & &\\
0 & 0 & 1 & \\
\vdots&\vdots&\ddots&1\\
0&&&0\\
\end{array}
\right],
\end{equation}
is
\begin{equation}
\mathbf{J}^+=\mathbf{J^T}= \left[
\begin{array}{cccccc}
0 & 0 & & & &\\
1 & 0 & & & & \\
&1 & \ddots & & & \\
&& \ddots && &\\
&&&&1&0\\
\end{array}
\right].
\end{equation}
\textsl{Proof}: We need to verify that $\mathbf{J}\mathbf{J}^T\mathbf{J}=\mathbf{J}$, that $\mathbf{J}^T\mathbf{J}\mathbf{J}^T=\mathbf{J}^T$, and that both $\mathbf{J}^T\mathbf{J}$ and $\mathbf{J}\mathbf{J}^T$ are symmetric. The last two are trivial. Computation shows
\begin{equation}
\mathbf{J}^T\mathbf{J}= \left[
\begin{array}{ccccc}
0 & 0 & 0 &&\\
0 & 1 & 0 & & \\
&0 &1&&\\
&&&\ddots&\\
0&&&&1\\
\end{array}
\right]
\end{equation}
(and $\mathbf{J}\mathbf{J}^T$ is the analogous diagonal matrix with its single zero in the last position rather than the first), so
\begin{equation}
\mathbf{JJ^TJ}= \left[
\begin{array}{cccc}
0 & 1 & &\\
0 & 0 & 1 & \\
\vdots&\vdots&\ddots&1\\
0&&&0\\
\end{array}
\right] = \mathbf{J}\>.
\end{equation}
Similarly $\mathbf{J}^T\mathbf{J}\mathbf{J}^T=\mathbf{J}^T$.\\
\noindent\textbf{Proposition}:
The matrix $\mathbf{D}^+=\mathbf{VJ^+V^{-1}}$ is a generalized inverse of $\mathbf{D}$.
\textsl{Proof}: It suffices to verify only the first two of the Moore-Penrose conditions: $\mathbf{D^+DD^+}=\mathbf{D^+}$ and $\mathbf{DD^+D}=\mathbf{D}$.
These follow immediately. Interestingly $\mathbf{D^+}$ is not (in general) a Moore-Penrose inverse: neither $\mathbf{D^+D}$ nor $\mathbf{DD^+}$ need be Hermitian.
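Continuing the Bernstein example (our illustration, with $\mathbf{V}$ built from the monomial-to-Bernstein expansion), one can confirm numerically that $\mathbf{D}^+=\mathbf{V}\mathbf{J}^+\mathbf{V}^{-1}$ satisfies the first two Moore--Penrose conditions while $\mathbf{D}^+\mathbf{D}$ fails to be symmetric:

```python
import numpy as np
from math import comb, factorial

n = 4
D = np.zeros((n + 1, n + 1))          # Bernstein differentiation matrix
for i in range(n + 1):
    D[i, i] = 2 * i - n
    if i > 0:
        D[i, i - 1] = -i
    if i < n:
        D[i, i + 1] = n - i

V = np.zeros((n + 1, n + 1))          # column k: Bernstein coeffs of x^k / k!
for k in range(n + 1):
    for i in range(k, n + 1):
        V[i, k] = comb(i, k) / comb(n, k) / factorial(k)

J = np.diag(np.ones(n), 1)
Dplus = V @ J.T @ np.linalg.inv(V)    # candidate generalized inverse, J^+ = J^T

assert np.allclose(D @ Dplus @ D, D)              # first Moore-Penrose condition
assert np.allclose(Dplus @ D @ Dplus, Dplus)      # second Moore-Penrose condition
assert not np.allclose(Dplus @ D, (Dplus @ D).T)  # D^+ D is not symmetric here
```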
The matrix $\mathbf{V}$ in a Lagrange basis is
\begin{equation}
\mathbf{V} = \left[
\begin{array}{ccccc}
1 & \tau_0 & \cdots & \cdots & \frac{\tau_0^n}{n!} \\
1 & \tau_1& & & \\
\vdots & \vdots &&&\\
1& \tau_n &&& \frac{\tau_n^n}{n!}
\end{array}
\right]\>.
\end{equation}
This is the product of a Vandermonde matrix and
\begin{equation}
\left[
\begin{array}{ccccc}
1 & & &&\\
& 1 & & & \\
& & \frac{1}{2} &&\\
&&& \ddots &\\
&&&& \frac{1}{n!}
\end{array}
\right].
\end{equation}
The Vandermonde factor is likely to be extraordinarily ill-conditioned. Nonetheless, this factorization gives an explicit JCF for differentiation matrices in Lagrange bases.
\section{Accuracy and Numerical Stability}
There are several questions regarding numerical stability (and, unfortunately, the answers vary with the basis used, and with the degree). For the orthogonal polynomial bases and the Bernstein bases, the differentiation matrices have integer or rational entries, and there are no numerical difficulties in constructing them, only (perhaps) with their use. For the Lagrange and the Hermite interpolational bases, the (generalized) barycentric weights need to be constructed from the nodes, and then the entries of the differentiation matrix constructed from the weights. In floating-point arithmetic, this can be problematic for some sets of nodes (especially equally-spaced nodes); higher-precision construction of the weights, or use of symmetries as with Chebyshev nodes, may be needed. High or variable confluency can also be a difficulty.
Use of higher precision in construction of the barycentric weights and of the differentiation matrix may be worth it, if the matrix is to be used frequently.
For all differentiation matrices, there is the question of accuracy of computation of the polynomial derivative by matrix multiplication. In general, differentiation is infinitely ill-conditioned: the derivative of $f(x) + \varepsilon v(x)$ can be arbitrarily different to the derivative of $f(x)$. However, if both $f$ and the perturbation are restricted to be \textsl{polynomial}, then the ill-conditioning is finite, and the absolute condition number is bounded by the norm of the differentiation matrix $\mathbf{D}$. This is Theorem 11.2 of~\cite{corless2013graduate}, which we state formally below.
\begin{theorem}
If $f(x)$ and $\Delta f(x)$ are both polynomials of degree at most $n$, and are both expressed in a polynomial basis $\phi$, then
\begin{equation}
\| \Delta f'(x) \| \le \| \mathbf{D}_{\phi} \| \| \Delta f (x) \|
\end{equation}
where the norms $\| \Delta f' \|$ and $\| \Delta f \|$ are vector norms of their coefficients in $\phi$ and the norm of the differentiation matrix is the corresponding subordinate matrix norm.
\end{theorem}
One should check the norm $\|\ensuremath{\mat{D}}\|$ whenever one uses a differentiation matrix.
We remark that the norms of powers of $\ensuremath{\mat{D}}$ can grow very large. For instance, for the Bernstein basis of dimension $n+1$ we find\footnote{We have no proof, only experimental evidence; it should be possible to prove this but we have not done so.} that $\| \ensuremath{\mat{D}}^n \|_\infty = 2^n n!$. The next power gives the zero matrix, of course. To give a sense of scale, we have $\| \ensuremath{\mat{D}} \|_\infty = 2n$ and hence this norm to the $n$th power is much larger yet, being $(2n)^n$ so a factor $n^n/n! \approx \exp(n)/\sqrt{2\pi n}$ larger.
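The footnote's experimental observation is easy to reproduce for small $n$ (our Python sketch, not one of the paper's codes); the equalities hold exactly in floating point at these sizes because every intermediate entry is an integer:

```python
import numpy as np
from math import factorial

def bernstein_D(n):
    # tridiagonal Bernstein differentiation matrix: 2i-n, -i, n-i
    D = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        D[i, i] = 2 * i - n
        if i > 0:
            D[i, i - 1] = -i
        if i < n:
            D[i, i + 1] = n - i
    return D

for n in range(2, 9):
    D = bernstein_D(n)
    assert np.linalg.norm(D, np.inf) == 2 * n
    # the n-th power has infinity norm 2^n n!; the next power vanishes
    assert np.linalg.norm(np.linalg.matrix_power(D, n), np.inf) == 2**n * factorial(n)
    assert np.allclose(np.linalg.matrix_power(D, n + 1), 0)
```

Indeed, $\mathbf{D}^n$ applied to Bernstein coefficients extracts the constant $p^{(n)}=n!\,a_n$, and the resulting rows are $\pm n!\binom{n}{i}$ up to sign, whose absolute sum is $2^n n!$; so the experimental value appears to be provable.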
As a corollary, from the results discussed in~\cite{Embree:HLA:2013} the $\varepsilon$-pseudospectral radius of the $n+1$-dimensional Bernstein $\ensuremath{\mat{D}}$ matrix must then at least be $(2^n n!)^{1/(n+1)} \varepsilon^{1/(n+1)} \sim 2n \varepsilon^{1/(n+1)}/e$ as $n \to \infty$, for any $\varepsilon > 0$. This implies that for large enough dimension, matrices very near to $\ensuremath{\mat{D}}$ will have eigenvalues larger than $1$ in magnitude. We believe that similar results hold for other bases, indicating that higher-order derivatives are hard to compute accurately by using repeated application of multiplication by differentiation matrices (as is to be expected).
\subsection{A Hermite interpolational example \label{sec:HermiteExample}}
Consider interpolating the simple polynomial that is identically $1$ on the interval $-1 \le z \le 1$, using nodes with confluency three. That is, at each node we supply the value of the function ($1$), the value of the first derivative ($0$), and the value of the second derivative divided by $2$, which is also in this case just $0$. We consider taking $n+1$ nodes $\tau_j$ for $0 \le j \le n$, which gives us $3(n+1)$ pieces of data and thus a polynomial of degree at most $3n+2$. We then plot the error $p(z)-1$ on this interval. We also compute the differentiation matrix $\ensuremath{\mat{D}}$ on these nodes with this confluency, and multiply $\ensuremath{\mat{D}}$ by the vector containing the data for the constant function $1$. This should give us an identically $0$ vector (call it $\vec{Z}$), but will not, because of rounding error. We compute the infinity norm of $\vec{Z}$ and the infinity norm of the matrix $\ensuremath{\mat{D}}$.
We take two sets of nodes: first the Chebyshev nodes $\tau_j = \cos(\pi(n-j)/n)$, and second the equally-spaced nodes $\tau_j = -1 + 2j/n$. We take $n=3$, $5$, $8$, $\ldots$, $55$ (Fibonacci numbers).
In figure~\ref{fig:Dnorm} we find a log-log plot of the norms of $\ensuremath{\mat{D}}$ for these $n$. Remember that the degree of the interpolant is at most $3n+2$. We see that the norm of $\ensuremath{\mat{D}}$ grows extremely rapidly for equally-spaced nodes (as we would expect). For Chebyshev nodes there is still substantial growth (for confluency $3$; for confluency $2$ there is less growth, and for confluency $4$ there is more), but for $n=55$ and confluency $3$ at all nodes we have $\|\ensuremath{\mat{D}}\| $ approximately $10^{10}$ which still gives some accuracy in $\vec{Z}$.
In figure~\ref{fig:Znorm} we see the corresponding norms of $\vec{Z}$. The behaviour is as predicted.
\par\medskip\noindent
\textbf{Remark}.
The confluency really matters. If we use just simple Lagrange interpolation, that is, confluency $s_i=1$ at each node, then the interpolation on $n=55$ Chebyshev nodes is in error by no more than $3.5\cdot 10^{-12}$. Of course, the nominal degree is much lower than it was in the Hermite case with confluency $3$. When we raise the degree to $165$, the Lagrange error is no more than $1.5\cdot 10^{-11}$. When the confluency is $3$ and $n=55$, which gives a comparable degree, the error is $1.4\cdot 10^{-5}$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Dnorm.png}
\caption{A comparison of norms of the differentiation matrices for Hermite interpolational basis on $n+1$ nodes, of confluency $3$, between equally-spaced nodes (solid boxes) and Chebyshev nodes (circles). We see growth in $n$ for both sets of nodes, but much more rapid growth for equally-spaced nodes.}
\label{fig:Dnorm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Znorm.png}
\caption{A comparison of norms of the vector $\vec{Z} = \ensuremath{\mat{D}} \mathbf{1}$ in Hermite interpolational bases. Equally-spaced nodes (solid box) and Chebyshev nodes (circles). As expected, we have $\|\vec{Z}\| \approx \|\ensuremath{\mat{D}}\|\cdot 10^{-16}$ when working in double precision.}
\label{fig:Znorm}
\end{figure}
\section{Concluding Remarks}
Expressing a polynomial in a particular basis reflects a choice taken by a mathematical modeller. We believe that choice should be respected. Indeed, changing bases can be ill-conditioned, often at least exponentially in the degree. There are exceptions, of course: interpolation on roots of unity with a Lagrange basis can be changed to a monomial basis by using the DFT, and the conversion is perfectly well-conditioned; similarly changing from a Lagrange basis on Chebyshev-Lobatto points to the Chebyshev basis is also perfectly well-conditioned. But, usually, one wants to continue to work in the basis chosen by the modeller. This is particularly true of the Bernstein basis, which has an optimal conditioning property: out of all bases that are nonnegative on the interval $[0,1]$, the Bernstein basis expression has the optimal condition number~\cite{Farouki(1996)}. This property was extended to bases nonnegative on a set of discrete points by~\cite{Corless(2004)}, who proved that Lagrange bases can be better even than Bernstein bases. See also~\cite{carnicer2017optimal}, who independently proved the same.
Differentiation is a fundamental operation, and it is helpful to be able to differentiate polynomials without changing bases. This paper has examined the properties of the matrices for accomplishing this. We found several of the results presented here to be surprising, notably that the Jordan Canonical Form for all the differentiation matrices considered here was the same. Likewise, that there is a uniform formula for a pseudo-inverse of all differentiation matrices of the type considered here was also a surprise.
One can extend this work in several ways. One of the first might be to look at differentiation matrices for \textsl{compact finite differences}. These are no longer always exact, and the matrices arising are no longer nilpotent (though they have null spaces corresponding to the polynomials of low enough degree for which they are exact). There are also further experiments to run on the differentiation matrices we have studied in this paper. For instance, it would be interesting to know theoretically the growth of $\|\ensuremath{\mat{D}}^k\|$ for various dimensions $n$; we found that for the Bernstein basis of dimension $n+1$ we had $\| \ensuremath{\mat{D}}^n \|_\infty = 2^n n!$. Since for the monomial basis we have $\| \ensuremath{\mat{D}}^n \|_\infty = n!$, the natural scale for such a comparison is to divide by $n!$. That seems logical because then, in essence, we are comparing the sizes of Taylor coefficients rather than the sizes of derivatives, and it is the Taylor coefficients that have a geometric interpretation in terms of the location of nearby singularities. We leave the study of the dependence of the norm of $\ensuremath{\mat{D}}^k$ on the basis for future work.
\section*{Acknowledgements}
This work was supported by a Summer Undergraduate NSERC Scholarship for the third author. The second author was supported by an NSERC Discovery Grant.
We also thank ORCCA and the Rotman Institute of Philosophy.
\bibliographystyle{plain}
Higher-order clustering in networks (https://arxiv.org/abs/1704.03913)

Abstract: A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
\section{Derivation of higher-order clustering coefficients}
In this section, we derive our higher-order clustering coefficients and some of their
basic properties.
We first present an alternative interpretation of the classical clustering coefficient
and then show how this novel interpretation seamlessly generalizes to arrive at our
definition of higher-order clustering coefficients.
We then provide some probabilistic interpretations of higher-order clustering coefficients that will be useful
for our subsequent analysis.
\subsection{Alternative interpretation of the classical clustering coefficient}
Here we give an alternative interpretation of the clustering coefficient that
will later allow us to generalize it and quantify clustering of higher-order
network structures (this interpretation is summarized in Fig.~\ref{fig:ccf_def}).
Our interpretation is based on a
notion of clique expansion. First, we consider a $2$-clique $K$ in a graph $G$
(that is, a single edge $K$; see Fig.~\ref{fig:ccf_def}, row $C_2$, column 1).
Next, we \emph{expand} the clique $K$ by considering any edge $e$ adjacent to
$K$, i.e., $e$ and $K$ share exactly one node (Fig.~\ref{fig:ccf_def}, row
$C_2$, column 2).
This expanded subgraph forms a wedge, i.e., a length-$2$ path.
The classical global clustering coefficient $\gccf{}$ of
$G$ (sometimes called the
transitivity of $G$~\cite{boccaletti2006complex}) is then defined as the
fraction of wedges that are \emph{closed}, meaning that the $2$-clique and
adjacent edge induce a $(2 + 1)$-clique, or a triangle (Fig.~\ref{fig:ccf_def},
row $C_2$, column 3)~\cite{barrat2000properties,luce1949method}.
The novelty of our interpretation of the clustering coefficient is considering it
as a form of clique expansion, rather than as the closure of a length-$2$ path,
which is key to our generalizations in the next section.
Formally, the classical global clustering coefficient is
\begin{equation}\label{eq:global_ccf}
\gccf{} = \frac{6 \lvert K_3 \rvert}{\lvert W \rvert},
\end{equation}
where $K_3$ is the set of $3$-cliques (triangles), $W$ is the set of wedges, and
the coefficient $6$ comes from the fact that each $3$-clique closes 6
wedges---the 6 ordered pairs of edges in the triangle.
We can also reinterpret the local clustering coefficient~\cite{watts1998collective}
in this way. In this case, each wedge again consists of a $2$-clique and
adjacent edge (Fig.~\ref{fig:ccf_def}, row $C_2$, column 2), and we call the
unique node in the intersection of the $2$-clique and adjacent edge
the \emph{center} of the wedge. The \emph{local clustering
coefficient} of a node $u$ is the fraction of wedges
centered at $u$ that are closed:
\begin{equation}\label{eq:local_ccf}
\lccf{}{u} =
\dfrac{2\lvert K_3(u) \rvert}{\lvert W(u) \rvert},
\end{equation}
where $K_3(u)$ is the set of $3$-cliques containing $u$ and $W(u)$ is the set of
wedges with center $u$ (if $\lvert W(u) \rvert = 0$, we say that $\lccf{}{u}$ is
undefined). The \emph{average clustering coefficient} $\accf{}$ is the mean of
the local clustering coefficients,
\begin{equation}\label{eq:avg_ccf}
\accf{} = \frac{1}{\lvert \widetilde V \rvert}\sum_{u \in \widetilde V} \lccf{}{u},
\end{equation}
where $\widetilde V$ is the set of nodes in the network where the local
clustering coefficient is defined.
\subsection{Generalizing to higher-order clustering coefficients}\label{sec:generalization}
Our alternative interpretation of the clustering coefficient, described above as
a form of clique expansion, leads to a natural generalization to higher-order
cliques. Instead of expanding $2$-cliques to $3$-cliques, we expand
$\ell$-cliques to $(\ell + 1)$-cliques (Fig.~\ref{fig:ccf_def}, rows $C_3$ and
$C_4$). Formally, we define an $\ell$-wedge to consist of an $\ell$-clique and
an adjacent edge for $\ell \ge 2$. Then we define the global $\ell$th-order clustering
coefficient $\gccf{\ell}$ as the fraction of $\ell$-wedges that are closed,
meaning that they induce an $(\ell + 1)$-clique in the network. We can write
this as
\begin{equation}\label{eq:global_ccf_l}
\gccf{\ell} = \frac{(\ell^2 + \ell)\lvert K_{\ell + 1} \rvert}{\lvert W_{\ell} \rvert},
\end{equation}
where $K_{\ell + 1}$ is the set of $(\ell + 1)$-cliques, and $W_{\ell}$ is the
set of $\ell$-wedges. The coefficient $\ell^2 + \ell$ comes from the fact that
each $(\ell + 1)$-clique closes that many wedges: each $(\ell+1)$-clique
contains $\ell + 1$ $\ell$-cliques, and each $\ell$-clique contains $\ell$ nodes
which may serve as the center of an $\ell$-wedge. Note that the classical
definition of the global clustering coefficient given in Eq.~\ref{eq:global_ccf}
is equivalent to the definition in Eq.~\ref{eq:global_ccf_l} when $\ell = 2$.
We also define higher-order local clustering coefficients:
\begin{equation} \label{eq:def_lccf}
\lccf{\ell}{u} = \frac{\ell \lvert K_{\ell + 1}(u) \rvert}{\lvert W_{\ell}(u) \rvert},
\end{equation}
where $K_{\ell + 1}(u)$ is the set of $(\ell + 1)$-cliques containing node $u$,
$W_{\ell}(u)$ is the set of $\ell$-wedges with center $u$ (where the center
is the unique node in the intersection of the $\ell$-clique and adjacent edge comprising the wedge; see Fig.~\ref{fig:ccf_def}),
and the coefficient $\ell$
comes from the fact that each $(\ell + 1)$-clique
containing $u$ closes that many $\ell$-wedges in $W_{\ell}(u)$.
The $\ell$th-order clustering coefficient of a node is defined for any node that
is the center of at least one $\ell$-wedge, and the average $\ell$th-order
clustering coefficient is the mean of the local clustering coefficients:
\begin{equation}\label{eq:def_accf}
\accf{\ell} = \frac{1}{\lvert \widetilde V_\ell \rvert}\sum_{u \in \widetilde V_\ell} \lccf{\ell}{u},
\end{equation}
where $\widetilde V_{\ell}$ is the set of nodes that are the centers of at least
one $\ell$-wedge.
To understand how to compute higher-order clustering coefficients, we substitute
the following useful identity
\begin{equation}\label{eq:wedge_identity}
\lvert W_{\ell}(u) \rvert = \lvert K_{\ell}(u) \rvert \cdot (d_u - \ell + 1),
\end{equation}
where $d_u$ is the degree of node $u$, into Eq.~\ref{eq:def_lccf} to get
\begin{equation} \label{eq:deriv_lccf}
\lccf{\ell}{u}
= \frac{\ell \cdot \lvert K_{\ell+1}(u) \rvert}{(d_u - \ell + 1) \cdot \lvert K_{\ell}(u) \rvert }.
\end{equation}
From Eq.~\ref{eq:deriv_lccf}, it is easy to see that we can compute all local
$\ell$th-order clustering coefficients by enumerating all $(\ell + 1)$-cliques
and $\ell$-cliques in the graph. The computational complexity of the algorithm
is thus bounded by the time to enumerate $(\ell + 1)$-cliques and
$\ell$-cliques. Using the Chiba and Nishizeki algorithm~\cite{chiba1985arboricity},
the complexity is $O(\ell a^{\ell-2}m)$, where $a$ is the arboricity of the graph,
and $m$ is the number of edges.
The arboricity $a$ may be as large as $\sqrt{m}$, so this algorithm
is only guaranteed to take polynomial time if $\ell$ is a constant.
In general, determining if there exists a single clique with at least $\ell$
nodes is NP-complete~\cite{karp1972reducibility}.
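Eq.~\ref{eq:deriv_lccf} translates directly into code. The brute-force Python sketch below (our illustration; it enumerates node subsets and is suitable only for tiny graphs, unlike the Chiba--Nishizeki approach) computes the local $\ell$th-order clustering coefficient:

```python
from itertools import combinations

def k_cliques_at(adj, u, k):
    """Number of k-cliques containing node u; adj maps node -> set of neighbours."""
    return sum(1 for c in combinations(adj[u], k - 1)
               if all(y in adj[x] for x, y in combinations((u,) + c, 2)))

def local_hocc(adj, u, ell):
    """C_ell(u) = ell * |K_{ell+1}(u)| / ((d_u - ell + 1) * |K_ell(u)|)."""
    d = len(adj[u])
    k_ell = k_cliques_at(adj, u, ell)
    if d - ell + 1 <= 0 or k_ell == 0:
        return None                    # u centres no ell-wedges: undefined
    return ell * k_cliques_at(adj, u, ell + 1) / ((d - ell + 1) * k_ell)

# in the complete graph K4 every ell-wedge is closed, so C_2(u) = C_3(u) = 1
K4 = {v: {w for w in range(4) if w != v} for v in range(4)}
assert local_hocc(K4, 0, 2) == 1.0 and local_hocc(K4, 0, 3) == 1.0
```

For $\ell=2$ this reduces to the classical local clustering coefficient; the centre of a path of three nodes, for instance, gets the value $0$.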
For the global clustering coefficient, note that
\begin{equation}
\lvert W_{\ell} \rvert = \sum_{u \in V} \lvert W_\ell(u) \rvert.
\end{equation}
Thus, it suffices to enumerate $\ell$-cliques (to compute $\lvert W_{\ell} \rvert$ using Eq.~\ref{eq:wedge_identity})
and to count the total number of $(\ell+1)$-cliques. In practice, we use
the Chiba and Nishizeki algorithm to enumerate cliques and simultaneously compute
$\gccf{\ell}$ and $\lccf{\ell}{u}$ for all nodes $u$.
This suffices for our clustering analysis with $\ell = 2, 3, 4$
on networks with over a hundred million edges in Section~\ref{sec:empirical}.
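The global computation can likewise be sketched in a few lines of brute-force Python (our illustration; real computations use clique enumeration as described above), using the wedge identity to obtain $\lvert W_\ell \rvert$:

```python
from itertools import combinations

def count_cliques(adj, k):
    # number of k-cliques in the whole graph (brute force over node subsets)
    return sum(1 for c in combinations(adj, k)
               if all(y in adj[x] for x, y in combinations(c, 2)))

def global_hocc(adj, ell):
    """C_ell = (ell^2 + ell) * |K_{ell+1}| / |W_ell|, with the wedge identity
    |W_ell| = sum_u |K_ell(u)| * (d_u - ell + 1)."""
    W = 0
    for u in adj:
        k_u = sum(1 for c in combinations(adj[u], ell - 1)
                  if all(y in adj[x] for x, y in combinations((u,) + c, 2)))
        W += k_u * (len(adj[u]) - ell + 1)
    return (ell * ell + ell) * count_cliques(adj, ell + 1) / W

K4 = {v: {w for w in range(4) if w != v} for v in range(4)}
assert global_hocc(K4, 2) == 1.0 and global_hocc(K4, 3) == 1.0
```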
\subsection{Probabilistic interpretations of higher-order clustering coefficients}
To facilitate understanding of higher-order clustering coefficients and to
aid our analysis in Section~\ref{sec:theoretical}, we
present a few probabilistic interpretations of the quantities. First, we can
interpret $\lccf{\ell}{u}$ as the probability that a wedge $w$ chosen uniformly
at random from all wedges centered at $u$ is closed:
\begin{equation} \label{eq:prob_interp1}
\lccf{\ell}{u} = \probof{w \in K_{\ell + 1}(u)}.
\end{equation}
The variant of this interpretation for the classical clustering case of $\ell=2$
has been useful for graph algorithm development~\cite{seshadhri2013triadic}.
For the next probabilistic interpretation, it is useful to analyze the structure of the 1-hop
neighborhood graph $\nbr{1}{u}$ of a given node $u$ (not containing node $u$).
The vertex set of $\nbr{1}{u}$ is the set of
all nodes adjacent to $u$, and the edge set consists of all edges between neighbors
of $u$, i.e., $ \{ (v, w) \;\vert\; (u, v), (u, w), (v, w) \in E \}$, where $E$ is the edge set of the graph.
Any $\ell$-clique in $G$ containing node $u$ corresponds to a unique
$(\ell - 1)$-clique in $\nbr{1}{u}$, and specifically for $\ell = 2$, any edge $(u, v)$
corresponds to a node $v$ in $\nbr{1}{u}$. Therefore, each $\ell$-wedge
centered at $u$ corresponds to an $(\ell-1)$-clique $K$ and one of the
$d_u - \ell + 1$ nodes outside $K$ (i.e., in $\nbr{1}{u} \backslash K$).
Thus, Eq.~\ref{eq:deriv_lccf} can be re-written as
\begin{equation}\label{eq:lccf_nbrs}
\frac{\ell \cdot \lvert K_{\ell}(\nbr{1}{u}) \rvert}{(d_u - \ell + 1) \cdot \lvert K_{\ell-1}(\nbr{1}{u}) \rvert },
\end{equation}
where $K_{k}(\nbr{1}{u})$ denotes the set of $k$-cliques in $\nbr{1}{u}$.
If we uniformly at random select an $(\ell - 1)$-clique $K$ from $\nbr{1}{u}$
and then also uniformly at random select a node $v$ from $\nbr{1}{u}$ outside of
this clique, then $\lccf{\ell}{u}$ is the probability that these $\ell$ nodes
form an $\ell$-clique:
\begin{equation} \label{eq:prob_interp2}
\lccf{\ell}{u} = \probof{K \cup \{ v \} \in K_{\ell}(\nbr{1}{u})}.
\end{equation}
Moreover, if we condition on observing an $\ell$-clique from this sampling
procedure, then the $\ell$-clique itself is selected uniformly at random from
all $\ell$-cliques in $\nbr{1}{u}$. Therefore, $\lccf{\ell-1}{u} \cdot
\lccf{\ell}{u}$ is the probability that an $(\ell - 1)$-clique and two nodes
selected uniformly at random from $\nbr{1}{u}$ form an $(\ell + 1)$-clique.
Applying this recursively gives
\begin{equation}
\prod_{j=2}^{\ell}\lccf{j}{u} = \frac{\lvert K_{\ell}(\nbr{1}{u}) \rvert}{{d_u \choose \ell}}.
\end{equation}
In other words, the product of the higher-order local clustering coefficients of
node $u$ up to order $\ell$ is the $\ell$-clique density amongst $u$'s
neighbors.
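The quantities above can be checked directly by brute-force clique enumeration. The following sketch (function and variable names are ours, purely illustrative) computes $\lccf{\ell}{u}$ from neighborhood clique counts and verifies the product identity on a toy graph.

```python
from itertools import combinations
from math import comb

def num_cliques(nodes, adj, k):
    """Count k-cliques in the graph induced on `nodes` (brute force)."""
    if k == 1:
        return len(nodes)
    return sum(
        1 for sub in combinations(nodes, k)
        if all(v in adj[u] for u, v in combinations(sub, 2))
    )

def local_ccf(adj, u, ell):
    """ell-th order local clustering coefficient of u:
    kappa_ell(u) = ell*|K_ell(N(u))| / ((d_u - ell + 1)*|K_{ell-1}(N(u))|)."""
    nbrs = adj[u]
    d = len(nbrs)
    k_lm1 = num_cliques(nbrs, adj, ell - 1)
    if d < ell or k_lm1 == 0:  # no ell-wedge centered at u
        return None
    k_l = num_cliques(nbrs, adj, ell)
    return ell * k_l / ((d - ell + 1) * k_lm1)

# Toy graph: node 0 adjacent to a triangle {1,2,3} plus a pendant neighbor 4.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (2, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# Product identity: prod_{j=2}^{ell} kappa_j(0) = |K_ell(N(0))| / C(d_0, ell).
d0, ell = len(adj[0]), 3
prod = local_ccf(adj, 0, 2) * local_ccf(adj, 0, 3)
density = num_cliques(adj[0], adj, ell) / comb(d0, ell)
assert abs(prod - density) < 1e-12
```

Here both $\lccf{2}{0}$ and $\lccf{3}{0}$ equal $1/2$, and their product matches the $3$-clique density $1/4$ of the neighborhood, as the identity requires.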
\section{Discussion}
We have proposed higher-order clustering coefficients to study
higher-order closure patterns in networks, generalizing the widely used
clustering coefficient that measures triadic closure.
Our work complements other recent developments on the importance of higher-order
information in network navigation~\cite{rosvall2014memory,scholtes2017network}
and on temporal community structure~\cite{sekara2016fundamental}; in contrast,
we examine higher-order clique closure and only implicitly consider time as a
motivation for closure.
Prior efforts in generalizing clustering coefficients have focused on shortest
paths~\cite{fronczak2002higher}, cycle formation~\cite{caldarelli2004structure},
and triangle frequency in $k$-hop
neighborhoods~\cite{andrade2006neighborhood,jiang2004topological}.
Such approaches fail to capture closure patterns of cliques, suffer from
challenging computational issues, and are difficult to analyze theoretically in
random graph models more sophisticated than the Erd\H{o}s-R\'enyi model.
On the other hand, our higher-order clustering coefficients are simple but
effective measurements that are analyzable and easily computable (we rely only
on clique enumeration, a well-studied algorithmic task).
Furthermore, our methodology provides new insights into the clustering behavior
of several real-world networks and random graph models, and our theoretical
analysis provides intuition for the way in which higher-order clustering
coefficients describe local clustering in graphs.
Finally, we focused on higher-order clustering coefficients as a global network
measurement and as a node-level measurement, and in related work we also
show that large higher-order clustering implies the existence of mesoscale clique-dense
community structure~\cite{yin2017local}.
\section{Experimental results on real-world networks}\label{sec:empirical}
We now analyze the higher-order clustering of real-world networks.
We first study how the higher-order global and average clustering coefficients vary
as we increase the order $\ell$ of the clustering coefficient on a collection
of 20 networks from several domains.
After, we concentrate on a few representative networks and
compare the higher-order clustering of real-world networks to null models.
We find that only some networks exhibit higher-order clustering once the traditional
clustering coefficient is controlled.
Finally, we examine the local clustering of real-world networks.
\subsection{Higher-order global and average clustering}
\input{macroscopic-ccfs-table.tex}
We compute and analyze the higher-order clustering for networks from a variety
of domains (Table~\ref{tab:all_ccfs}).
We briefly describe the collection of networks and their categorization below:
\begin{enumerate}
\item Two synthetic networks---a random instance of an $\textnormal{Erd\H{o}s-R\'enyi}$ graph with
$n=1,000$ nodes and edge probability $p=0.2$ and a small-world network with
$n=20,000$ nodes, $k = 10$, and rewiring probability $p=0.1$;
\item Four neural networks---the complete neural systems of the nematode worms
\emph{P.\ pacificus} and \emph{C.\ elegans} as well as the neural connections
of the Drosophila medulla and mouse retina;
\item Four online social networks---two Facebook friendship networks between
students at universities from 2005 (fb-Stanford, fb-Cornell) and two complete online
friendship networks (Pokec and Orkut);
\item Four collaboration networks---two co-authorship networks constructed
from arxiv submission categories (arxiv-AstroPh and arxiv-HepPh),
a co-authorship network constructed from DBLP, and the
co-committee membership network of United States congresspersons (congress-committees);
\item Four human communication networks---two email networks (email-Enron-core,
email-Eu-core), a Facebook-like messaging network from a college (CollegeMsg),
and the edits of user talk pages by other users on Wikipedia (wiki-Talk); and
\item Four technological systems networks---three autonomous systems
(oregon2-010526, as-caida-20071105, as-skitter) and a peer-to-peer connection
network (p2p-Gnutella31).
\end{enumerate}
In all cases, we take the edges as undirected, even if the original network data
is directed.
Table~\ref{tab:all_ccfs} lists the $\ell$th-order global and average clustering
coefficients for $\ell=2,3,4$ as well as the fraction of nodes that are the
center of at least one $\ell$-wedge (recall that the average clustering
coefficient is the mean only over higher-order local clustering coefficients of nodes
participating in at least one $\ell$-wedge; see \citet{kaiser2008mean} for a
discussion on how this can affect network analyses). We highlight some important
trends in the raw clustering coefficients, and in the next section, we focus on
higher-order clustering compared to what one gets in a null model.
Propositions~\ref{prop:ccf_er} and \ref{prop:ccf_sw} say that we should
expect the higher-order global and average clustering coefficients to
decrease as we increase the order $\ell$ for both the $\textnormal{Erd\H{o}s-R\'enyi}$ and small-world
models, and indeed $\accf{2} > \accf{3} > \accf{4}$ for these networks. This
trend also holds for most of the real-world networks (mouse-retina,
congress-committees, and oregon2-010526 are the exceptions). Thus, when
averaging over nodes, higher-order cliques are overall less likely to close in
both the synthetic and real-world networks.
\input{ccfs-table}
The relationship between the higher-order global clustering coefficient
$\gccf{\ell}$ and the order $\ell$ is less uniform over the datasets. For the three
co-authorship networks (arxiv-HepPh, arxiv-AstroPh, and DBLP) and the three
autonomous systems networks (oregon2-010526, as-caida-20071105,
and as-skitter), $\gccf{\ell}$
increases with $\ell$, although the base clustering levels are much higher for
co-authorship networks.
This is not simply due to the presence of cliques---a clique has the same clustering for any order
(Fig.~\ref{fig:ccf_diffs}, left). Instead, these datasets have nodes that serve
as the center of a star and also participate in a clique
(Fig.~\ref{fig:ccf_diffs}, right; see also Proposition~\ref{prop:ccf_bounds}).
On the other hand, $\gccf{\ell}$ decreases with $\ell$ for the two email
networks and the two nematode worm neural networks. Finally, the change in
$\gccf{\ell}$ need not be monotonic in $\ell$. In three of the four online
social networks, $\gccf{3} < \gccf{2}$ but $\gccf{4} > \gccf{3}$.
Overall, the trends in the higher-order clustering coefficients can be different
within one of our dataset categories, but tend to be uniform within
sub-categories: the change of $\accf{\ell}$ and $\gccf{\ell}$ with $\ell$
is the same for the two nematode worms within the neural networks,
the two email networks within the communication networks, and the three
co-authorship networks within the collaboration networks. These trends hold even
if the (classical) second-order clustering coefficients differ substantially in
absolute value.
While the raw clustering values are informative, it is also useful to compare
the clustering to what one expects from null models.
We find in the next section that this reveals additional insights into our data.
\subsection{Comparison against null models}
For one real-world network from each dataset category,
we also measure the higher-order clustering
coefficients with respect to two null models (Table~\ref{tab:data_summary}).
First, we compare against the Configuration Model (CM) that samples uniformly from
simple graphs with the same degree
distribution~\cite{bollobas1980probabilistic,milo2003uniform}.
In real-world networks, $\accf{2}$ is much larger than expected with respect to
the CM null model. We find that the same holds for $\accf{3}$.
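As an illustration of this kind of degree-preserving null comparison, the sketch below uses networkx double-edge swaps, a common approximation to uniform sampling from the configuration model (the cited works sample exactly uniformly; the swap-based version is only a stand-in, and the synthetic input graph stands in for real data).

```python
import networkx as nx

# Stand-in for an empirical network: a small-world graph with high clustering.
G = nx.watts_strogatz_graph(500, 10, 0.05, seed=1)
ccf_real = nx.average_clustering(G)

# Degree-preserving null: randomize edges with double-edge swaps, which
# approximately samples from graphs with the same degree sequence.
null = G.copy()
nx.double_edge_swap(null, nswap=10 * null.number_of_edges(),
                    max_tries=10**6, seed=1)
ccf_null = nx.average_clustering(null)

# Degrees are preserved exactly, yet clustering collapses in the null.
assert sorted(d for _, d in null.degree()) == sorted(d for _, d in G.degree())
assert ccf_real > ccf_null
```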
Second, we use a null model that samples graphs preserving both degree
distribution and $\accf{2}$. Specifically, these are samples from an ensemble
of exponential graphs where the Hamiltonian is the absolute difference between
the average clustering coefficient of the original network and that of the sampled
network~\cite{park2004statistical}. Such samples are referred to as
Maximally Random Clustered Networks (MRCN) and are sampled with a simulated
annealing procedure~\cite{colomer2013deciphering}. Comparing $\accf{3}$ between
the real-world and the null network, we observe different behavior in
higher-order clustering across our datasets. Compared to the MRCN null model, \emph{C. elegans} has
significantly less than expected higher-order clustering (in terms of
$\accf{3}$), the Facebook friendship and autonomous system networks have significantly
more than expected higher-order clustering,
and the co-authorship and email networks have
slightly (but not significantly) more than expected higher-order clustering
(Table~\ref{tab:data_summary}). Put another way, all real-world networks
exhibit clustering in the classical sense of triadic closure. However, the
higher-order clustering coefficients reveal that the friendship and autonomous
systems networks exhibit significant clustering beyond what is given by triadic
closure. These results suggest the need for models that directly account for
closure in node neighborhoods~\cite{bhat2016densification,lambiotte2016structural}.
Our finding about the lack of higher-order clustering in \emph{C. elegans}
agrees with previous results that 4-cliques are under-expressed, while open
3-wedges related to cooperative information propagation are
over-expressed~\cite{benson2016higher,milo2002network,varshney2011structural}.
This also provides credence for the ``3-layer'' model of
\emph{C. elegans}~\cite{varshney2011structural}. The observed clustering in the
friendship network is consistent with prior work showing the relative
infrequency of open $\ell$-wedges in many Facebook network subgraphs with
respect to a null model accounting for triadic
closure~\cite{ugander2013subgraph}. Co-authorship networks and email networks
are both constructed from ``events'' that create multiple edges---a paper with
$k$ authors induces a $k$-clique in the co-authorship graph and an email sent
from one address to $k$ others induces $k$ edges. This event-driven graph
construction creates enough closure structure that the average third-order
clustering coefficient is not much larger than in random graphs where the
classical second-order clustering coefficient and the degree sequence are kept the same.
\begin{figure*}[tb]
\phantomsubfigure{fig:ccfsA}
\phantomsubfigure{fig:ccfsB}
\phantomsubfigure{fig:ccfsC}
\phantomsubfigure{fig:ccfsD}
\phantomsubfigure{fig:ccfsE}
\includegraphics[width=2\columnwidth]{local.eps}
\caption{Top row: Joint distributions of ($\lccf{2}{u}$,
$\lccf{3}{u}$) for (A) \emph{C. elegans} (B) Facebook friendship,
(C) arxiv co-authorship, (D) email, and
(E) autonomous systems networks.
Each blue dot represents a node, and the red curve
tracks the average over logarithmic bins. The upper trend line is the
bound in Eq.~\ref{eq:Bound_Kappa3}, and the lower trend line is
expected Erd\H{o}s-R\'enyi behavior from Proposition~\ref{prop:ccf_er_cond}.
Bottom row: Average higher-order clustering coefficients as a function of degree.
}\label{fig:ccfs}
\end{figure*}
\begin{figure}[tb]
\phantomsubfigure{fig:ccfs_nullA}
\phantomsubfigure{fig:ccfs_nullB}
\includegraphics[width=0.9\columnwidth]{local-null.eps}
\caption{Plots analogous to those in Fig.~\ref{fig:ccfs} for synthetic
(A) $\textnormal{Erd\H{o}s-R\'enyi}$ and (B) small-world networks.
Top row: Joint distributions of ($\lccf{2}{u}$, $\lccf{3}{u}$).
Bottom row: Average higher-order clustering coefficients as a function of degree.
}\label{fig:ccfs_null}
\end{figure}
We emphasize that simple clique counts are not sufficient to obtain these
results. For example, the discrepancy in third-order average clustering between
\emph{C. elegans} and the MRCN null model is not simply due to the
presence of 4-cliques. The original neural network has nearly twice as many
4-cliques (2,010) as the samples from the MRCN model (mean 1,006.2, standard
deviation 73.6), yet the third-order clustering coefficient is larger in MRCN.
The reason is that clustering coefficients normalize clique counts
with respect to opportunities for closure.
Thus far, we have analyzed global and average higher-order clustering,
which both summarize the clustering of the entire network.
In the next section, we look at more localized properties, namely
the distribution of higher-order local clustering coefficients and the higher-order
average clustering coefficient as a function of node degree.
\subsection{Higher-order local clustering coefficients and degree dependencies}
We now examine more localized clustering properties of our networks.
Figure~\ref{fig:ccfs} (top) plots the joint distribution of $\lccf{2}{u}$ and
$\lccf{3}{u}$ for the five networks analyzed in Table~\ref{tab:data_summary},
and Fig.~\ref{fig:ccfs_null} (top) provides the analogous plots for the $\textnormal{Erd\H{o}s-R\'enyi}$
and small-world networks. In these plots, the lower dashed trend line represents the expected
$\textnormal{Erd\H{o}s-R\'enyi}$ behavior, i.e., the expected clustering if the edges in the neighborhood
of a node were configured randomly, as formalized in
Proposition~\ref{prop:ccf_er_cond}. The upper dashed trend line is the maximum
possible value of $\lccf{3}{u}$ given $\lccf{2}{u}$, as given by
Proposition~\ref{prop:ccf_bounds}.
For many nodes in \emph{C. elegans}, local clustering is nearly random
(Fig.~\ref{fig:ccfsA}, top), i.e., resembles the $\textnormal{Erd\H{o}s-R\'enyi}$ joint distribution
(Fig.~\ref{fig:ccfs_nullA}, top). In other words, there are many nodes that lie
on the lower trend line. This provides further evidence that \emph{C. elegans}
lacks higher-order clustering. In the arxiv co-authorship network, there are
many nodes $u$ with a large value of $\lccf{2}{u}$ that have an even larger
value of $\lccf{3}{u}$ near the upper bound of Eq.~\ref{eq:Bound_Kappa3} (see
the inset of Fig.~\ref{fig:ccfsC}, top). This implies that some nodes appear in
both cliques and also as the center of star-like patterns, as in
Fig.~\ref{fig:ccf_diffs}. On the other hand, only a handful of nodes in the
Facebook friendships, Enron email, and Oregon autonomous systems networks are
close to the upper bound (insets of Figs.~\ref{fig:ccfsB},\ref{fig:ccfsD}, and
\ref{fig:ccfsE}, top).
Figures~\ref{fig:ccfs} and \ref{fig:ccfs_null} (bottom) plot higher-order
average clustering as a function of node degree in the real-world and synthetic
networks. In the $\textnormal{Erd\H{o}s-R\'enyi}$, small-world, \emph{C. elegans}, and Enron email
networks, there is a distinct gap between the average higher-order clustering
coefficients for nodes of all degrees. Thus, our previous finding that the
average clustering coefficient $\accf{\ell}$ decreases with $\ell$ in these
networks is independent of degree. In the Facebook friendship network,
$\lccf{2}{u}$ is larger than $\lccf{3}{u}$ and $\lccf{4}{u}$ on average for
nodes of all degrees, but $\lccf{3}{u}$ and $\lccf{4}{u}$ are roughly the same
for nodes of all degrees, which means that 4-cliques and 5-cliques close at
roughly the same rate, independent of degree, albeit at a smaller rate than
traditional triadic closure (Fig.~\ref{fig:ccfsB}, bottom). In the co-authorship
network, nodes $u$ have roughly the same $\lccf{\ell}{u}$ for $\ell = 2$, $3$,
$4$, which means that $\ell$-cliques close at about the same rate, independent
of $\ell$ (Fig.~\ref{fig:ccfsC}, bottom). In the Oregon autonomous systems
network, we see that, on average, $\lccf{4}{u} > \lccf{3}{u} > \lccf{2}{u}$ for
nodes with large degree (Fig.~\ref{fig:ccfsE}, bottom). This explains how the
global clustering coefficient increases with the order, but the average
clustering does not, as observed in Table~\ref{tab:all_ccfs}.
\section{Introduction}
Networks are a fundamental tool for understanding and modeling complex physical,
social, informational, and biological systems~\cite{newman2003structure}.
Although such networks are typically sparse, a recurring trait of networks
throughout all of these domains is the tendency of edges to appear in small
clusters or cliques~\cite{rapoport1953spread,watts1998collective}. In many
cases, such clustering can be explained by local evolutionary processes. For
example, in social networks, clusters appear due to the formation of triangles
where two individuals who share a common friend are more likely to become
friends themselves, a process known as \emph{triadic
closure}~\cite{rapoport1953spread,granovetter1973strength}. Similar triadic
closures occur in other networks: in citation networks, two references appearing
in the same publication are more likely to be on the same topic and hence more
likely to cite each other~\cite{wu2009modeling} and in co-authorship networks,
scientists with a mutual collaborator are more likely to collaborate in the
future~\cite{jin2001structure}. In other cases, local clustering arises from
highly connected functional units operating within a larger system, e.g.,
metabolic networks are organized by densely connected
modules~\cite{ravasz2003hierarchical}.
The \emph{clustering coefficient} quantifies the extent to which edges of a
network cluster in terms of triangles.
The clustering coefficient is defined as the fraction of
length-2 paths, or \emph{wedges}, that are closed with a
triangle~\cite{watts1998collective,barrat2000properties}
(Fig.~\ref{fig:ccf_def}, row $\gccf{2}$). In other words, the clustering
coefficient measures the probability of triadic closure in the network.
However, the clustering coefficient is inherently restrictive as it measures the
closure probability of just one simple structure---the triangle. Moreover, higher-order
structures such as larger cliques are crucial to the structure and function of
complex networks~\cite{benson2016higher,yaverouglu2014revealing,rosvall2014memory}.
For example, 4-cliques reveal community structure in word association and
protein-protein interaction networks~\cite{palla2005uncovering} and cliques of
sizes 5--7 are more frequent than triangles in many real-world networks with
respect to certain null models~\cite{slater2014mid}.
However, the extent of clustering of such higher-order structures has not been
well understood nor quantified.
\begin{figure}[tb]
\begin{tabular}{l c c c}
& 1.~Start with~ & 2.~Find an adjacent edge & 3.~Check for an \\
& an $\ell$-clique & to form an $\ell$-wedge & $(\ell+1)$-clique \\ \\
$\gccf{2}$ & \tpone{intro2_part1} & \tptwo{intro2_part1}{intro2_part2} & \tpthree{intro2_part1}{intro2_part2}{intro2_part3} \\ \\
$\gccf{3}$ & \tpone{intro3_part1} & \tptwo{intro3_part1}{intro3_part2} & \tpthree{intro3_part1}{intro3_part2}{intro3_part3} \\ \\
$\gccf{4}$ & \tpone{intro4_part1} & \tptwo{intro4_part1}{intro4_part2} & \tpthree{intro4_part1}{intro4_part2}{intro4_part3} \\
\end{tabular}
\caption{Overview of higher-order clustering coefficients as clique
expansion probabilities. The $\ell$th-order clustering coefficient
$\gccf{\ell}$ measures the probability that an $\ell$-clique and an adjacent
edge, i.e., an $\ell$-wedge, is closed, meaning that the $\ell - 1$ possible
edges between the $\ell$-clique and the outside node in the adjacent edge
exist to form an $(\ell + 1)$-clique.} \label{fig:ccf_def}
\end{figure}
Here, we provide a framework to quantify higher-order clustering in networks by
measuring the normalized frequency at which higher-order cliques are closed,
which we call \emph{higher-order clustering coefficients}.
We derive our higher-order clustering coefficients by extending a novel
interpretation of the classical clustering coefficient as a form
of clique expansion (Fig.~\ref{fig:ccf_def}).
We then derive several properties about higher-order clustering coefficients
and analyze them under the $G_{n,p}$ and small-world null models.
Using our theoretical analysis as a guide, we analyze the higher-order clustering
behavior of real-world networks from a variety of domains.
We find that each domain of networks has its own higher-order clustering pattern,
which the traditional clustering coefficient does not show on its own.
Conventional wisdom in network science posits that practically all real-world
networks exhibit clustering;
however, we find that not all networks exhibit higher-order clustering.
More specifically, once we control for the clustering as measured by the
classical clustering coefficient, some networks do not show significant
clustering in terms of higher-order cliques.
In addition to the theoretical properties and empirical findings
exhibited in this paper, our related work also demonstrates a connection
between higher-order clustering and community detection~\cite{yin2017local}.
\section{Theoretical analysis and higher-order clustering in random graph models}\label{sec:theoretical}
We now provide some theoretical analysis of our higher-order clustering coefficients.
We first give some extremal bounds on the values that higher-order clustering coefficients
can take given the value of the traditional (second-order) clustering coefficient.
After, we analyze the values of higher-order clustering coefficients in two common
random graph models---the $G_{n,p}$ and small-world models.
The theory from this section will be a useful guide for interpreting the clustering
behavior of real-world networks in Section~\ref{sec:empirical}.
\subsection{Extremal bounds}
\begin{figure}[tb]
\begin{tabular}{l @{\hskip 12pt} c @{\hskip 12pt} c @{\hskip 12pt} c}
& \tpone{ccf_equal} & \tpone{ccf_lower} & \tpone{ccf_upper} \\ \\
$\lccf{2}{u}$ & $1$ & $\dfrac{d}{2(d - 1)} \approx \dfrac{1}{2}$ & $\dfrac{d - 2}{4d - 4} \approx \dfrac{1}{4}$ \\ \\
$\lccf{3}{u}$ & $1$ & $0$ & $\dfrac{d - 4}{2d - 4} \approx \dfrac{1}{2}$ \\ \\
$\lccf{4}{u}$ & $1$ & $0$ & $\dfrac{d - 6}{2d - 6} \approx \dfrac{1}{2}$
\end{tabular}
\caption{Example 1-hop neighborhoods of a node $u$ with degree $d$ with
different higher-order clustering. Left: For cliques, $\lccf{\ell}{u} = 1$ for any $\ell$.
Middle: If $u$'s neighbors form a complete bipartite graph, $\lccf{2}{u}$ is
constant while $\lccf{\ell}{u} = 0$, $\ell \ge 3$. Right: If half of $u$'s
neighbors form a star and half form a clique with $u$, then
$\lccf{\ell}{u} \approx \sqrt{\lccf{2}{u}}$, which is the upper bound in Proposition~\ref{prop:ccf_bounds}.}
\label{fig:ccf_diffs}
\end{figure}
We first analyze the relationships between local higher-order clustering
coefficients of different orders.
Our technical result is Proposition~\ref{prop:ccf_bounds}, which
provides essentially tight lower and upper bounds for higher-order local clustering
coefficients in terms of the traditional local clustering coefficient.
The main ideas of the proof are illustrated in Fig.~\ref{fig:ccf_diffs}.
\begin{proposition}\label{prop:ccf_bounds}
For any fixed $\ell \geq 3$,
\begin{equation} \label{eq:Bound_Kappa3}
0 \leq \lccf{\ell}{u} \leq \sqrt{\lccf{2}{u}}.
\end{equation}
Moreover,
\begin{enumerate}
\item There exists a finite graph $G$ with a node $u$ such that the lower bound is
tight and $\lccf{2}{u}$ is within $\epsilon$ of any prescribed value in $[0, \frac{\ell-2}{\ell-1}]$.
\item There exists a finite graph $G$ with a node $u$ such that $\lccf{\ell}{u}$ is within
$\epsilon$ of the upper bound for any prescribed value of $\lccf{2}{u} \in [0, 1]$.
\end{enumerate}
\end{proposition}
\begin{proof}
Clearly, $0 \leq \lccf{\ell}{u}$ if the local clustering coefficient is well
defined. This bound is tight when $\nbr{1}{u}$ is $(\ell - 1)$-partite, as
in the middle column of Fig.~\ref{fig:ccf_diffs}.
In the $(\ell - 1)$-partite case, $\lccf{2}{u} = \frac{\ell-2}{\ell-1}$.
By removing edges from this extremal case in a sufficiently large graph,
we can make $\lccf{2}{u}$ arbitrarily close to any value in $[0, \frac{\ell-2}{\ell-1}]$.
To derive the upper bound, consider the 1-hop neighborhood $\nbr{1}{u}$, and let
\begin{equation}
\delta_\ell(\nbr{1}{u}) = \frac{\lvert K_{\ell}(\nbr{1}{u}) \rvert}{{d_u \choose \ell}}
\end{equation}
denote the $\ell$-clique density of $\nbr{1}{u}$. The Kruskal-Katona
theorem~\cite{kruskal1963number,katona1966theorem} implies that
\begin{align*}
&\delta_{\ell}(\nbr{1}{u}) \leq [\delta_{\ell - 1}(\nbr{1}{u})]^{\ell / (\ell - 1)} \\
&\delta_{\ell - 1}(\nbr{1}{u}) \leq [\delta_{2}(\nbr{1}{u})]^{(\ell - 1) / 2}.
\end{align*}
Combining this with Eq.~\ref{eq:deriv_lccf} gives
\begin{align*}
\lccf{\ell}{u} &\leq [\delta_{\ell - 1}(\nbr{1}{u})]^{\frac{1}{\ell - 1}}
\leq \sqrt{\delta_{2}(\nbr{1}{u})} = \sqrt{\lccf{2}{u}},
\end{align*}
where the last equality uses the fact that $\lccf{2}{u}$ is the edge density of
$\nbr{1}{u}$.
The upper bound becomes tight when $\nbr{1}{u}$ consists of a clique and isolated nodes
(Fig.~\ref{fig:ccf_diffs}, right) and the neighborhood is sufficiently large.
Specifically, let $\nbr{1}{u}$ consist of a clique of size $c$ and $b$ isolated nodes.
When $\ell = 2$,
\begin{equation*}
\lccf{\ell}{u} = \frac{{c \choose 2}}{{c + b \choose 2}}
= \frac{(c - 1)c}{(c + b - 1)(c + b)} \to \left(\frac{c}{c + b}\right)^2
\end{equation*}
and by Eq.~\ref{eq:lccf_nbrs}, when $3 \le \ell \le c$,
\begin{align*}
\lccf{\ell}{u}
&= \frac{\ell \cdot {c \choose \ell}}{(c + b - \ell + 1) \cdot {c \choose \ell - 1}}
= \frac{c - \ell + 1}{c + b - \ell + 1} \to \frac{c}{c + b}.
\end{align*}
By adjusting the ratio $c / (b + c)$ in $\nbr{1}{u}$, we can construct a family
of graphs such that $\lccf{2}{u}$ takes any value in the interval $[0,1]$ as $d_u \to \infty$
and $\lccf{\ell}{u} \to \sqrt{\lccf{2}{u}}$ as $d_u \to \infty$.
\end{proof}
The second part of the result requires the neighborhoods to be sufficiently large in order
to reach the upper bound. However, we will see later that in some real-world data,
there are nodes $u$ for which $\lccf{3}{u}$ is close to the upper bound $\sqrt{\lccf{2}{u}}$
for several values of $\lccf{2}{u}$.
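The closed-form values in the proof of Proposition~\ref{prop:ccf_bounds} are easy to check numerically. The sketch below (a purely illustrative helper of ours) evaluates $\lccf{2}{u}$ and $\lccf{3}{u}$ for the clique-plus-isolated-nodes neighborhood and confirms that the bound $\lccf{3}{u} \leq \sqrt{\lccf{2}{u}}$ holds and becomes tight as the neighborhood grows.

```python
from math import comb

def kappa_star_plus_clique(c, b, ell):
    """kappa_ell(u) when N_1(u) is a c-clique plus b isolated nodes."""
    d = c + b
    if ell == 2:
        return comb(c, 2) / comb(d, 2)     # edge density of the neighborhood
    return (c - ell + 1) / (d - ell + 1)   # simplification derived in the proof

# Fix the ratio c/(c+b) = 3/5 and grow the neighborhood.
for scale in (10, 100, 1000):
    c, b = 3 * scale, 2 * scale
    k2 = kappa_star_plus_clique(c, b, 2)
    k3 = kappa_star_plus_clique(c, b, 3)
    assert k3 <= k2 ** 0.5 + 1e-12         # upper bound of the proposition
assert k2 ** 0.5 - k3 < 1e-3               # ...and it is asymptotically tight
```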
Next, we analyze higher-order clustering coefficients in two common random graph
models: the Erd\H{o}s-R\'enyi model with edge probability $p$ (i.e.,
the $G_{n,p}$ model~\cite{erdos1959random}) and the small-world
model~\cite{watts1998collective}.
\subsection{Analysis for the $G_{n,p}$ model}
Now, we analyze higher-order clustering coefficients in the classical
Erd\H{o}s-R\'enyi random graph model, where each edge exists
independently with probability $p$ (i.e., the $G_{n,p}$
model~\cite{erdos1959random}).
In the following analysis, we implicitly assume that $\ell$ is small enough that
the graph contains at least one $\ell$-wedge (for large $n$, with high
probability there is no clique of size greater than
$(2 + \epsilon)\log n / \log (1 / p)$ for any
$\epsilon > 0$~\cite{bollobas1976cliques}).
Therefore, the global and local clustering coefficients are well defined.
In the $G_{n, p}$ model, we first observe that any $\ell$-wedge is closed
if and only if the $\ell - 1$ possible edges between the $\ell$-clique and the
outside node in the adjacent edge exist to form an $(\ell+1)$-clique. Each of
the $\ell - 1$ edges exist independently with probability $p$ in the $G_{n, p}$
model, which means that the higher-order clustering coefficients should
scale as $p^{\ell - 1}$. We formalize this in the following proposition.
\begin{proposition}\label{prop:ccf_er}
Let $G$ be a random graph drawn from the $G_{n, p}$ model.
For constant $\ell$,
\begin{enumerate}
\item $\expectover{G}{\gccf{\ell}} = p^{\ell - 1}$
\item $\expectover{G}{\lccf{\ell}{u} \;\vert\; W_{\ell}(u) > 0} = p^{\ell - 1}$ for any node $u$
\item $\expectover{G}{\accf{\ell}} = p^{\ell - 1}$
\end{enumerate}
\end{proposition}
\begin{proof}
We prove the first part by conditioning on the set of $\ell$-wedges, $W_{\ell}$:
\begin{align*}
\expect{\gccf{\ell}}
&=\textstyle \expectover{G}{\expectover{W_{\ell}}{\gccf{\ell} \;\vert\; W_{\ell}}} \\
&=\textstyle \expectover{G}{\expectover{W_{\ell}}{\frac{1}{\lvert W_{\ell}\rvert}\sum_{w \in W_{\ell}}\probof{w \text{ is closed}}}} \\
&=\textstyle \expectover{G}{\expectover{W_{\ell}}{\frac{1}{\lvert W_{\ell}\rvert}\sum_{w \in W_{\ell}}p^{\ell - 1}}} \\
&=\textstyle \expectover{G}{p^{\ell - 1}} \\
&=\textstyle p^{\ell - 1}.
\end{align*}
As noted above, the second equality is well defined (with high probability)
for small $\ell$. The third equality comes from the fact that any
$\ell$-wedge is closed if and only if the $\ell - 1$ possible edges between
the $\ell$-clique and the outside node in the adjacent edge exist to form an
$(\ell+1)$-clique.
The proof of the second part is essentially the same, except we condition over
the set of possible cases where $W_{\ell}(u) > 0$.
Recall that $\widetilde V$ is the set of nodes at the center of at least one
$\ell$-wedge. To prove the third part, we take the conditional expectation
over $\widetilde V$ and use our result from the second part.
\end{proof}
The above results say that the global, local, and average $\ell$th order clustering
coefficients decrease exponentially in $\ell$.
It turns out that if we also condition on the second-order clustering coefficient
having some fixed value, then the higher-order clustering coefficients
still decay exponentially in $\ell$ for the $G_{n,p}$ model.
This will be useful for interpreting the distribution of local clustering coefficients
on real-world networks.
\begin{proposition}\label{prop:ccf_er_cond}
Let $G$ be a random graph drawn from the $G_{n, p}$ model.
Then for constant $\ell$,
\begin{align*}
& \expectover{G}{\lccf{\ell}{u} \;\vert\; \lccf{2}{u}, W_{\ell}(u) > 0} \\
&= \left[\lccf{2}{u} - (1 - \lccf{2}{u}) \cdot O(1 / d_u^2)\right]^{\ell - 1}
\approx (\lccf{2}{u})^{\ell - 1}.
\end{align*}
\end{proposition}
\begin{proof}
Similar to the proof of Proposition~\ref{prop:ccf_er}, we look at the conditional
expectation over $W_{\ell}(u) > 0$:
\begin{align*}
&\textstyle \expectover{G}{\lccf{\ell}{u} \;\vert\; \lccf{2}{u}, W_{\ell}(u) > 0} \\
&=\textstyle \expectover{G}{\expectover{W_{\ell}(u) > 0}{\lccf{\ell}{u} \;\vert\; \lccf{2}{u},\; W_{\ell}(u)}} \\
&=\textstyle \expectover{G}{\expectover{W_{\ell}(u) > 0}{\frac{1}{\lvert W_{\ell}(u)\rvert}\sum_{w \in W_{\ell}(u)}\probof{w \text{ closed} \;\vert\; \lccf{2}{u}}}}.
\end{align*}
Now, note that $\nbr{1}{u}$ has $m = \lccf{2}{u} \cdot {d_u \choose 2}$ edges.
Knowing that $w \in W_{\ell}(u)$ accounts for ${\ell - 1 \choose 2}$ of these
edges. By symmetry, the other $q = m - {\ell - 1 \choose 2}$ edges appear in
any of the remaining $r = {d_u \choose 2} - {\ell - 1 \choose 2}$ pairs of nodes
uniformly at random. There are ${r \choose q}$ ways to place these edges, of
which ${r - \ell + 1 \choose q - \ell + 1}$ would close the wedge $w$. Thus,
\begin{align*}
&\probof{w \text{ is closed} \;\vert\; \lccf{2}{u}} \\
&=\textstyle \frac{{r - \ell + 1 \choose q - \ell + 1}}{{r \choose q}}
= \frac{(r - \ell + 1)!q!}{(q - \ell + 1)!r!}
= \frac{(q - \ell + 2)(q - \ell + 3) \cdots q}{(r - \ell + 2)(r - \ell + 3) \cdots r}.
\end{align*}
Now, for any small nonnegative integer $k$,
\begin{align*}
\frac{q - k}{r - k} &=\textstyle \frac{
\lccf{2}{u} \cdot {d_u \choose 2} - {\ell -1 \choose 2} - k
}{
{d_u \choose 2} - {\ell - 1 \choose 2} - k
} \\
&=\textstyle \lccf{2}{u} - (1 - \lccf{2}{u})\left[\frac{{\ell -1 \choose 2} + k}{{d_u \choose 2} - {\ell - 1 \choose 2} - k}\right] \\
&=\textstyle \lccf{2}{u} - (1 - \lccf{2}{u}) \cdot O(1 / d_u^2).
\end{align*}
(Recall that $\ell$ is constant by assumption, so the big-$O$ notation is
appropriate.) The closure probability is thus a product of $\ell - 1$ such
factors, which gives the claimed expression
$\left[\lccf{2}{u} - (1 - \lccf{2}{u}) \cdot O(1 / d_u^2)\right]^{\ell - 1}$;
it approaches $(\lccf{2}{u})^{\ell - 1}$ when $\lccf{2}{u} \to 1$
as well as when $d_u \to \infty$.
\end{proof}
Proposition~\ref{prop:ccf_er_cond} says that even if the second-order local
clustering coefficient is large, the $\ell$th-order clustering coefficient will
still decay exponentially in $\ell$, at least in the limit as $d_u$ grows large.
By examining higher-order clique closures, this allows us to distinguish
between nodes $u$ whose neighborhoods are ``dense but random''
($\lccf{2}{u}$ is large but $\lccf{\ell}{u} \approx (\lccf{2}{u})^{\ell - 1}$)
and nodes whose neighborhoods are ``dense and structured''
($\lccf{2}{u}$ is large \emph{and} $\lccf{\ell}{u} > (\lccf{2}{u})^{\ell - 1}$).
Only the latter case exhibits higher-order clustering.
We use this in our analysis of real-world networks in Section~\ref{sec:empirical}.
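As an illustrative sketch (our own code, not the paper's released implementation), the quantities in this comparison can be computed by brute-force clique enumeration. Assuming Eq.~\ref{eq:deriv_lccf} has the clique-ratio form visible in the proof of Proposition~\ref{prop:ccf_sw}, $\lccf{\ell}{u} = \ell\,\lvert K_{\ell+1}(u)\rvert \,/\, \big((d_u - \ell + 1)\lvert K_{\ell}(u)\rvert\big)$ where $K_{\ell}(u)$ is the set of $\ell$-cliques containing $u$; for $\ell = 2$ this reduces to the ordinary local clustering coefficient. Function and variable names below are ours:

```python
from itertools import combinations

def cliques_containing(adj, u, ell):
    """All ell-cliques that contain u, found by brute force over
    (ell-1)-subsets of u's neighborhood. adj maps node -> set of neighbors."""
    found = []
    for others in combinations(sorted(adj[u]), ell - 1):
        nodes = (u,) + others
        if all(b in adj[a] for a, b in combinations(nodes, 2)):
            found.append(frozenset(nodes))
    return found

def lccf(adj, u, ell):
    """ell-th order local clustering coefficient, assuming the clique-ratio
    form C_ell(u) = ell * |K_{ell+1}(u)| / ((d_u - ell + 1) * |K_ell(u)|)."""
    k_ell = len(cliques_containing(adj, u, ell))
    k_next = len(cliques_containing(adj, u, ell + 1))
    d_u = len(adj[u])
    denom = (d_u - ell + 1) * k_ell
    return ell * k_next / denom if denom > 0 else 0.0
```

On a complete graph every wedge is closed, so `lccf` returns $1$ at every order, while on a star it returns $0$; the brute-force enumeration is only practical for small degrees.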
\subsection{Analysis for the small-world model}
We also study higher-order clustering in the small-world random graph
model~\cite{watts1998collective}. The model begins with a ring network where
each node connects to its $2k$ nearest neighbors. Then, for each node $u$ and
each of the $k$ edges $(u, v)$ with $v$ following $u$ clockwise in the ring,
the edge is rewired to $(u, w)$ with probability $p$, where $w$ is chosen
uniformly at random.
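For concreteness, the construction just described can be sketched as follows (a minimal generator of our own; the experiments in the figure presumably used a comparable one):

```python
import random

def small_world(n, k, p, seed=0):
    """Watts--Strogatz small-world graph: start from a ring where each node
    is adjacent to its 2k nearest neighbors, then rewire each of the k
    'clockwise' edges (u, u+j) to a random target with probability p."""
    rng = random.Random(seed)
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for j in range(1, k + 1):
            v = (u + j) % n
            if rng.random() < p:
                # redraw until the new endpoint is neither u nor a duplicate
                w = rng.randrange(n)
                while w == u or w in adj[u]:
                    w = rng.randrange(n)
                v = w
            adj[u].add(v)   # edges are undirected, so add both directions
            adj[v].add(u)
    return adj
```

With $p = 0$ this is the ring lattice (every node has degree $2k$ when $2k < n$); as $p \to 1$ the local ring structure is destroyed.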
\begin{figure}[tb]
\begin{centering}
\includegraphics[width=1.0\columnwidth]{smallworld-ccs-updated}
\end{centering}
\caption{Average higher-order clustering coefficient $\accf{\ell}$ as a
function of the rewiring probability $p$ in small-world networks for $\ell = 2, 3, 4$
($n = 20{,}000$, $k = 5$). Proposition~\ref{prop:ccf_sw} predicts that, when $p = 0$,
the clustering should decrease modestly as $\ell$ increases.}
\label{fig:sw_ccfs}
\end{figure}
With no rewiring ($p = 0$) and $k \ll n$,
it is known that $\accf{2} \approx 3/4$~\cite{watts1998collective}.
As $p$ increases, the average clustering coefficient $\accf{2}$ slightly decreases until a phase transition near $p = 0.1$,
where $\accf{2}$ decays to $0$~\cite{watts1998collective} (also see
Fig.~\ref{fig:sw_ccfs}).
Here, we generalize these results for higher-order clustering coefficients.
\begin{proposition}\label{prop:ccf_sw}
In the small-world model without rewiring ($p = 0$),
\begin{equation*}
\accf{\ell} \rightarrow (\ell + 1) / (2\ell)
\end{equation*}
for any constant $\ell \geq 2$ as $k \to \infty$ and $n \to \infty$ while $2k < n$.
\end{proposition}
\begin{proof}
Applying Eq.~\ref{eq:deriv_lccf}, it suffices to show that
\begin{equation} \label{eq:K_SW}
\lvert K_{\ell}(u) \rvert = \frac{\ell}{(\ell - 1)!}\cdot k^{\ell -1 } + O(k^{\ell - 2})
\end{equation}
since then Eq.~\ref{eq:deriv_lccf} gives, up to lower-order terms in $k$,
\begin{align*}
\lccf{\ell}{u} = \frac{\ell \cdot \frac{(\ell + 1) k^{\ell}}{\ell!}}{(2k - \ell + 1) \cdot \frac{\ell k^{\ell -1 }}{(\ell - 1)!} },
\end{align*}
which approaches $\frac{\ell + 1}{2\ell}$ as $k \to \infty$.
Now we give a derivation of Eq.~\ref{eq:K_SW}.
We first label the $2k$ neighbors of $u$ as $1, 2, \ldots, 2k$ by their clockwise ordering in the ring.
Since $2k < n$, these $2k$ neighbors are distinct.
Next, define the
\emph{span} of any $\ell$-clique containing $u$ as the difference between the
largest and smallest label of the $\ell-1$ nodes in the clique other than $u$.
The span $s$ of any $\ell$-clique satisfies $s \leq k-1$, since two neighbors
of $u$ are adjacent only if their label difference is at most $k-1$. Also,
$s \geq \ell-2$, since an $\ell$-clique contains $\ell-1$ nodes other than
$u$, all with distinct labels. For each span $s$, we can find $2k-1-s$ pairs $(i, j)$ such that
$1\leq i < j \leq 2k$ and $j - i = s$. Finally, for every such pair $(i, j)$,
there are ${s - 1 \choose \ell - 3}$ choices of $\ell-3$ nodes between $i$ and
$j$ which will form an $\ell$-clique together with nodes $u$, $i$, and
$j$. Therefore,
\begin{align*}
\lvert \cliqueL{\ell}{u} \rvert
&=\textstyle \sum_{s = \ell-2}^{k-1} (2k-1-s) \cdot {s - 1 \choose \ell - 3} \\
&=\textstyle \sum_{s = \ell-2}^{k-1} (2k-1-s) \cdot \frac{(s-1)(s - 2) \cdots (s - \ell + 3)}{(\ell - 3)!} \\
&=\textstyle \sum_{t = 1}^{k-\ell+2} (2k + 2 -t - \ell) \cdot \frac{t (t + 1) \cdots (t + \ell - 4)}{(\ell - 3)!}.
\end{align*}
If we ignore terms of lower order in $k$ and note that $t = O(k)$, we get
\begin{align*}
\textstyle \lvert \cliqueL{\ell}{u} \rvert
&= \textstyle \sum_{t = 1}^{k} \left[ \frac{(2k - t)t^{\ell - 3}}{(\ell - 3)!} + O(k^{\ell - 3}) \right]
\\&=\textstyle \frac{1}{(\ell - 3)!}\sum_{t = 1}^{k} (2kt^{\ell - 3} - t^{\ell - 2}) + O(k^{\ell - 2})
\\ & = \textstyle \frac{1}{(\ell - 3)!}\left[2k \cdot \frac{k^{\ell - 2}}{\ell - 2} - \frac{k^{\ell - 1}}{\ell - 1} \right] + O(k^{\ell - 2})
\\ & = \textstyle \frac{\ell }{(\ell - 1)!} \cdot k^{\ell -1 } + O(k^{\ell - 2}).
\end{align*}
\end{proof}
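The $\ell = 2$ case of this limit is easy to check numerically: for the ring lattice ($p = 0$) the local clustering coefficient has the standard closed form $3(k-1)/\big(2(2k-1)\big)$, which tends to $3/4 = (\ell+1)/(2\ell)$ for $\ell = 2$. A brute-force sketch (our own code, with illustrative names):

```python
from itertools import combinations

def ring_lattice(n, k):
    """p = 0 small-world ring: u is adjacent to u±1, ..., u±k (mod n)."""
    return {u: {(u + j) % n for j in range(-k, k + 1) if j != 0}
            for u in range(n)}

def local_cc(adj, u):
    """Ordinary (second-order) local clustering coefficient of u:
    fraction of neighbor pairs of u that are themselves adjacent."""
    nbrs = sorted(adj[u])
    d = len(nbrs)
    closed = sum(1 for v, w in combinations(nbrs, 2) if w in adj[v])
    return 2 * closed / (d * (d - 1))
```

For $k = 5$ this gives exactly $2/3$, and as $k$ grows the value approaches $3/4$, consistent with the proposition.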
Proposition~\ref{prop:ccf_sw} shows that, when $p = 0$, $\accf{\ell}$ decreases as $\ell$ increases.
Furthermore, via simulation, we observe the same behavior as for $\accf{2}$ when
adjusting the rewiring probability $p$ (Fig.~\ref{fig:sw_ccfs}). Regardless of
$\ell$, the phase transition happens near $p = 0.1$.
Essentially, once there is enough rewiring, all local clique structure is destroyed, and
clustering vanishes at every order.
This is partly a consequence of Proposition~\ref{prop:ccf_bounds}, which
says that $\lccf{\ell}{u} \to 0$ as $\lccf{2}{u} \to 0$ for any $\ell$.
% ``Higher-order clustering in networks,'' arXiv:1704.03913,
% https://arxiv.org/abs/1704.03913
% Subjects: Social and Information Networks (cs.SI); Statistical Mechanics
% (cond-mat.stat-mech); Physics and Society (physics.soc-ph); Machine Learning (stat.ML)
\title{Unknotting Unknots}
% arXiv:1006.4176, https://arxiv.org/abs/1006.4176
\begin{abstract}
A knot is an embedding of a circle into three-dimensional space. We say that a knot is unknotted if there is an ambient isotopy of the embedding to a standard circle. By representing knots via planar diagrams, we discuss the problem of unknotting a knot diagram when we know that it is unknotted. This problem is surprisingly difficult, since it has been shown that knot diagrams may need to be made more complicated before they may be simplified. We do not yet know, however, how much more complicated they must get. We give an introduction to the work of Dynnikov, who discovered the key use of arc--presentations to solve the problem of finding a way to detect the unknot directly from a diagram of the knot. Using Dynnikov's work, we show how to obtain a quadratic upper bound for the number of crossings that must be introduced into a sequence of unknotting moves. We also apply Dynnikov's results to find an upper bound for the number of moves required in an unknotting sequence.
\end{abstract}
\section{Introduction}
When one first delves into the theory of knots, one learns that knots are typically studied using their diagrams. The first question that arises when considering these knot diagrams is: how can we tell if two knot diagrams represent the same knot? Fortunately, we have a partial answer to this question. Two knot diagrams represent the same knot in $\mathbb{R}^3$ if and only if they can be related by the Reidemeister moves, pictured below. Reidemeister proved this theorem in the 1920s~\cite{Reid}, and
it is the underpinning of much of knot theory. For example, J. W. Alexander based the original definition of his celebrated polynomial on the Reidemeister moves~\cite{Alex}.
\begin{figure}[h]
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=1.5in]{ReidemeisterMoves.eps}
\end{center}
\vspace{-.3in}
\caption{The three Reidemeister moves}
\end{figure}
Now imagine that you are presented with a complicated diagram of an unknot and you would like to use Reidemeister moves to reduce it to the trivial diagram that has no crossings. In considering a problem of this sort, you stumble upon a curious fact. Given a diagram of an unknot to be unknotted, it might be necessary to make the diagram more complicated before it can be simplified. We call such a diagram
a {\em hard unknot diagram} \cite{KL}.
A nice example of this is the Culprit, shown in Figure~\ref{culprit_intro}. If you look closely, you'll find that no simplifying type I or type II Reidemeister moves and no type III moves are available. Yet this is indeed the unknot. In order to unknot it, we need to introduce new crossings with Reidemeister I and II moves. In Figure~\ref{unknotCulprit}, we see that we can unknot the Culprit by making the diagram larger by two crossings (via a type II Reidemeister move) and that it takes a total of ten Reidemeister moves to accomplish the unknotting.
\bigbreak
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.2in, angle=90]{Culprit.eps}
\end{center}
\caption{The Culprit}\label{culprit_intro}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in]{UndoCulprit.eps}
\end{center}
\caption{The Culprit Undone}\label{unknotCulprit}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in]{Smallest.eps}
\end{center}
\caption{The smallest hard unknots}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=1in]{Goeritz.eps}
\end{center}
\caption{The Goeritz unknot}
\end{figure}
In Figures 4 and 5 we give more examples of hard unknot diagrams: Figure 4 shows the smallest possible examples, and Figure 5 shows the very first such example, discovered by Goeritz in 1934~\cite{Goeritz}.
\bigbreak
At this point, we ask ourselves: how much more complicated does a diagram need to become before it can be simplified? Moreover, how many Reidemeister moves do we need to trivialize our picture? In this paper, we give a technique for finding upper bounds for these answers. In particular, we will prove the following theorem.
\bigbreak
\noindent {\bf Theorem 4.}
{\em Suppose $K$ is a diagram (in Morse form) of the unknot with crossing number $cr(K)$ and number of maxima $b(K)$. Let $M=2b(K)+cr(K)$. Then the diagram can be unknotted by a sequence of Reidemeister moves so that no intermediate diagram has more than $(M-2)^2$ crossings.}
\bigbreak
The definition of Morse form for a diagram will be given in the body of the paper.
In the case of our Culprit, we have that $cr(K) = 10$ and $b(K) = 5$. Thus $M = 20$ and
$(M-2)^2 = 18^{2} = 324.$ In actuality we only needed a diagram with $12$ crossings in our unknotting
sequence. The theory of these bounds needs improvement, but it is, in fact, remarkable that there is a
theory at all for such questions. Along with this theorem we will also give bounds on the number of
Reidemeister moves needed for unknotting. We point the reader towards more results related to this question. As a disclaimer, we warn the reader that the difference between the lower bounds and upper bounds that are known is still vast. The quest for a satisfying answer to these questions continues.
\section{Preliminaries}
The method we present to find upper bounds makes use of a powerful result proven by Dynnikov in~\cite{dynnikov} regarding arc--presentations of knots. Here, we provide an overview of the theory of arc--presentations.
\begin{definition}
An \emph{arc--presentation} of a knot is a knot diagram composed of horizontal and vertical line segments such that at each crossing in the diagram, the horizontal arc passes under the vertical arc. Furthermore, we require that no two edges in an arc--diagram are collinear.
Two arc--presentations are \emph{combinatorially equivalent} if they are isotopic in the plane via an ambient isotopy of the form $h(x,y)=(f(x),g(y))$.
The \emph{complexity} $c(L)$ of an arc--presentation is the number of vertical arcs in the diagram.
We say more generally that a link diagram is {\em rectangular} if it has only vertical and horizontal edges.
In Figure~\ref{arc} we give an example of a rectangular diagram that is an arc--presentation and another example of a rectangular diagram that is not an arc--presentation.
\end{definition}
Note that a rectangular diagram can naturally be drawn on a rectangular grid; we have called knots represented this way \emph{mosaic knots} and
used them to define a notion of \emph{quantum knot}. See~\cite{Mosaic} for more about quantum knots.
For now, we focus our attention on arc--presentations.
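Concretely, an arc--presentation of complexity $n$ can be encoded by listing, for each height, the pair of columns its horizontal arc joins; each column's vertical arc then joins the two heights at which that column appears. The sketch below (our own encoding and names, not from~\cite{dynnikov}) traces such a diagram and counts its closed curves, a quick check that an encoding describes a knot rather than a multi-component link:

```python
from collections import defaultdict

def components(harcs):
    """harcs[h] = (c1, c2): the horizontal arc at height h joins columns c1
    and c2. Each column must appear exactly twice; its vertical arc joins the
    two heights at which it appears. Returns the number of closed curves."""
    heights_of = defaultdict(list)
    for h, (a, b) in enumerate(harcs):
        heights_of[a].append(h)
        heights_of[b].append(h)
    assert all(len(hs) == 2 for hs in heights_of.values())
    seen, count = set(), 0
    for start in range(len(harcs)):
        if start in seen:
            continue
        count += 1
        h, col = start, harcs[start][0]
        while h not in seen:
            seen.add(h)
            a, b = harcs[h]           # cross the horizontal arc at height h ...
            col = b if col == a else a
            h1, h2 = heights_of[col]  # ... then follow col's vertical arc
            h = h2 if h == h1 else h1
    return count
```

For example, the complexity-2 unknot `[(0, 1), (0, 1)]` is a single curve, as is the cyclic 5-arc pattern `[(0, 2), (1, 3), (2, 4), (0, 3), (1, 4)]` (a pattern of the shape commonly drawn for the trefoil), while two disjoint 2-arc loops give two components.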
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.5in]{trefoil.eps}\includegraphics[height=1.5in]{notarc.eps}
\end{center}
\vspace{-.3in}
\caption{ \small{The picture on the left is an example of an arc--presentation of a trefoil. The picture on the right is an example that is \emph{not} an arc--presentation (since not all horizontal arcs pass under vertical arcs).}}
\label{arc}
\end{figure}
\begin{proposition}[Dynnikov]
Every knot has an arc--presentation. Any two arc--presentations of the same knot can be related to each other by a finite sequence of \emph{elementary moves}, pictured in Figures~\ref{stab} and~\ref{exch}.
\end{proposition}
The proof of this proposition is elementary, based on the Reidemeister moves. A sketch is provided in~\cite{dynnikov}. We will show how to convert a usual knot diagram to an arc--presentation in the next few paragraphs, making use of the concept of Morse diagrams of knots.
\begin{figure}[h]
\begin{center}
\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=1in]{stab1.eps}
\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=1in]{stab2.eps}
\end{center}
\caption{ \small{Elementary (de)stabilization moves. Stabilization moves increase the complexity of the arc--presentation while destabilization moves decrease the complexity.}}
\label{stab}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{exch2.eps}\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{arrow.eps}\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{exch1.eps}
\end{center}
\begin{center}
\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{exch3.eps}\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{arrow.eps}\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.9in]{exch4.eps}
\end{center}
\caption{ \small{Some examples of exchange moves. Other allowed exchange moves include switching the heights of two horizontal arcs that lie in distinct halves of the diagram.}}
\label{exch}
\end{figure}
\begin{definition} A knot diagram is in \emph{Morse form} if it has
\begin{enumerate}
\item no horizontal lines,
\item no inflection points,
\item a single singularity at each height, and
\item crossings oriented to create a 45 degree angle with the vertical axis.
\end{enumerate}
\end{definition}
We note that converting an arbitrary knot diagram into a diagram in Morse form requires no Reidemeister moves, only ambient isotopies of the plane. More information about Morse diagrams can be found in~\cite{lou}.
\begin{lemma}\label{arclemma}
Suppose a knot (or link) diagram $K$ in Morse form has $cr(K)$ crossings and $b(K)$ maxima. Then there is an arc--presentation $L_K$ of $K$ with complexity $c(L_K)$ at most $2b(K)+cr(K)$ that can be obtained by ambient isotopies of the plane (without the use of Reidemeister moves).
\end{lemma}
\begin{proof}
We begin with a diagram in Morse form and convert this diagram into a piecewise linear diagram composed of lines with slope $\pm1$ with a vertex corresponding to each maximum and minimum (possibly with additional vertices---at most one for each pair of successive extrema). If we rotate this diagram by 45 degrees, we have a diagram composed entirely of horizontal and vertical arcs with complexity at most $2b(K)$.
This diagram may fail to be an arc--presentation of $K$ if any crossing has a horizontal overpass. If more than half of the crossings in $K$ have horizontal overpasses, we rotate the diagram by 90 degrees. Now, at least half of the crossings are in the proper form. Any remaining crossings containing a horizontal overpass may locally be rotated 90 degrees to form our arc--presentation $L_K$, as shown in Figure~\ref{rotation}. For each crossing that requires this move, the complexity of the rectangular diagram increases by at most 2. Thus, the overall complexity of our diagram increases by at most $2(\frac{1}{2}cr(K))=cr(K)$. It follows that $c(L_K)\leq 2b(K)+cr(K)$.
Note that neither converting a Morse diagram into a piecewise linear diagram nor locally rotating a crossing uses Reidemeister moves. These are ambient isotopies of the plane.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[trim = 0mm 20mm 0mm 20mm, clip,height=1in]{morse.eps}\includegraphics[trim = 0mm 20mm 0mm 20mm, clip,height=1in]{notarc.eps}
\end{center}
\caption{ \small{A Morse diagram of a knot and a corresponding rectangular diagram.}}
\label{morse}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[trim = 0mm 30mm 0mm 30mm, clip, height=.7in]{rotation.eps}
\end{center}
\caption{ \small{Rotating a crossing to convert a rectangular diagram into an arc--presentation.}}
\label{rotation}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.3in]{notarc.eps}\includegraphics[height=1.3in]{arrow.eps}\includegraphics[height=1.3in, angle=90]{notarc.eps}\includegraphics[height=1.3in]{arrow.eps}\includegraphics[height=1.3in]{ugly.eps}
\end{center}
\caption{ \small{Converting a rectangular diagram into an arc--presentation by rotating the diagram then rotating a crossing. Note that the resulting diagram can be reduced to the arc--presentation from Figure 1 with an exchange move that doesn't require any Reidemeister moves.}}
\label{convert}
\end{figure}
\section{Bounds on Crossings Needed to Simplify the Unknot}
Our motivation for using Dynnikov's work to find upper bounds for an unknotting Reidemeister sequence began with the following theorem from~\cite{dynnikov}.
\begin{theorem}[Dynnikov]\label{triv}
If $L$ is an arc--presentation of the unknot, then there exists a finite sequence of exchange and destabilization moves $$L\rightarrow L_1\rightarrow L_2\rightarrow
\cdots\rightarrow L_m$$ such that $L_m$ is trivial.\end{theorem}
What is particularly interesting about this result is that the unknot can be simplified \textbf{without increasing the complexity of the arc--presentation}, that is, without the use of stabilization moves.
This gives a useful physical bound on how large a diagram can be. Furthermore, if we apply Dynnikov's
method to a knotted knot, it will stop on a diagram that is not a planar circle. Thus Dynnikov can
detect the unknot.
\bigbreak
The problem of detecting the unknot has been investigated by many people.
For example, the papers by Birman and Hirsch~\cite{Alg} and Birman and Moody~\cite{Obstr} give such methods. More recently it has been shown that Heegaard Floer homology (a generalization of
the Alexander polynomial) not only detects the unknot, but can be used to calculate the least genus of
an orientable spanning surface for any knot. This is an outstanding result and we recommend that the
reader examine the paper by Manolescu, Ozsv\'ath, Szab\'o and Thurston~\cite{Heegard} for more information. In that work, Heegaard Floer homology is expressed via a chain complex that is associated to a rectangular diagram of just the type that Dynnikov uses.
\bigbreak
Returning to the task at hand, we immediately derive a quadratic upper bound on the crossing number of diagrams in an unknotting sequence. A similar result can be found in~\cite{dynnikov}.
\begin{theorem}
Suppose $K$ is a diagram (in Morse form) of the unknot with crossing number $cr(K)$ and number of maxima $b(K)$. Then, for every $i$, the crossing number $cr(K_i)$ is no more than $(M-2)^2$ where $M=2b(K)+cr(K)$ and $K=K_0, K_1, K_2, ..., K_N$ is a sequence of knot diagrams such that $K_{i+1}$ is obtained from $K_i$ by a single Reidemeister move and $K_N$ is a trivial diagram of the unknot.
\end{theorem}
\begin{proof}
To begin, we notice that $K$ can be viewed as an arc--presentation of complexity $M$ by a simple ambient isotopy of the plane, as shown in Lemma~\ref{arclemma}. By Theorem~\ref{triv}, there is a sequence of arc--presentations beginning with $K$ and ending with the trivial arc--presentation each having complexity no more than $M$ such that a diagram and its successor are related by an exchange or a destabilization move. Each destabilization move either preserves or reduces the number of crossings in the diagram. In the case that a destabilization move reduces the number of crossings, it can be viewed as a simplifying Reidemeister I move. Otherwise, it can be viewed as a simple ambient isotopy of the plane.
When an exchange move is performed, on the other hand, its analogous Reidemeister sequence may require type II and type III Reidemeister moves. (See Figure~\ref{factor}.) At most one type II Reidemeister move is required for any given exchange move, so an exchange move factors through a Reidemeister sequence of moves that adds at most two crossings (since type III moves preserve the crossing number). However, it is important to note that a Reidemeister II move is needed if and only if the exchange move itself increases the number of crossings by two in the arc--presentation. Thus, no more crossings are added when factoring an exchange move through a Reidemeister sequence than are added in the exchange move itself.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.3in]{exch2.eps}\includegraphics[height=1.2in]{arrow.eps}\includegraphics[height=1.3in]{exch2a.eps}
\includegraphics[height=1.2in]{arrow.eps}
\vspace{-.3in}
\includegraphics[height=1.3in]{exch2b.eps}\includegraphics[height=1.2in]{arrow.eps}\includegraphics[height=1.2in]{dots.eps}\includegraphics[height=1.2in]{arrow.eps}\includegraphics[height=1.4in]{exch2c.eps}
\vspace{-.2in}
\includegraphics[trim = 0mm 20mm 0mm 0mm, clip,height=1.2in]{arrow.eps}\includegraphics[height=1.3in]{exch1.eps}
\end{center}
\caption{ \small{Factoring an exchange move through a type II and multiple type III Reidemeister moves.}}
\label{factor}
\end{figure}
It is straightforward to show that the maximum number of crossings that may occur in an arc-presentation with complexity less than or equal to $M$ is bounded above by $(M-2)^2$. If we translate an arc--presentation sequence of moves in a canonical fashion into a sequence of Reidemeister moves to unknot our unknot, many knot diagrams in the Reidemeister sequence will be arc--presentations and, as such, will have fewer than $(M-2)^2$ crossings. Furthermore, diagrams in this sequence that are not arc--presentations have no more crossings than their arc--presentation relatives. Thus, there exists a sequence of Reidemeister moves that unknots our original diagram $K$ that does not increase the crossing number to more than $(M-2)^2.$
\end{proof}
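The crossing number of an arc--presentation is easy to compute from a compact encoding in which the horizontal arc at height $h$ joins two columns and each column's vertical arc joins the two heights at which that column appears: a crossing occurs wherever a horizontal arc passes strictly between the columns of a vertical arc whose height range strictly contains $h$. The sketch below is our own illustration (names and encoding are ours), and for small diagrams it lets one check the $(M-2)^2$ bound empirically:

```python
def crossing_number(harcs):
    """harcs[h] = (c1, c2): the horizontal arc at height h joins columns c1
    and c2; each column must appear exactly twice. Counts the crossings of
    the resulting rectangular diagram (vertical arcs always pass over)."""
    heights_of = {}
    for h, (a, b) in enumerate(harcs):
        for c in (a, b):
            heights_of.setdefault(c, []).append(h)
    total = 0
    for h, (a, b) in enumerate(harcs):
        # columns strictly between the endpoints of the horizontal arc
        for c in range(min(a, b) + 1, max(a, b)):
            if c in heights_of:
                lo, hi = sorted(heights_of[c])
                if lo < h < hi:   # vertical arc strictly spans height h
                    total += 1
    return total
```

The trivial complexity-2 presentation has no crossings, and a cyclic 5-arc pattern has 3, comfortably below the $(5-2)^2 = 9$ allowed by the bound.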
\section{Bounds on Reidemeister Moves Needed to Simplify the Unknot}
To find our upper bound on the number of Reidemeister moves, we must first specify an upper bound on the number $m$ of exchange and destabilization moves required to trivialize an arc--presentation. This bound will depend on the complexity $c(L)=n$ of the arc--diagram $L$. We also must provide an upper bound on the number of Reidemeister moves required for a destabilization or exchange move.
In~\cite{dynnikov}, Dynnikov provides the following bounds on the number of combinatorially distinct arc--presentations of complexity $n$.
\begin{proposition} Let $N(n)$ denote the number of combinatorially distinct arc--presentations of complexity $n$. Then the following inequality holds.
$$N(n)\leq \frac{1}{2}n [(n-1)!]^2$$
\end{proposition}
\begin{proof}
Suppose we want to create an arc--presentation on the $n\times n$ integer lattice. Let us choose a starting point in the lattice. There are $\frac{n}{2}=\frac{n^2}{2n}$ ways to choose this point since there are $n^2$ lattice points, $2n$ of which lie on a given diagram. From this point, we create a vertical arc ending at another point in the integer lattice. There are $n-1$ choices for this endpoint. From our new point, we want to create a horizontal arc with endpoint in the lattice. There are $n-1$ choices for this endpoint as well. Next, we make another vertical arc, choosing one of the $n-2$ possible endpoints. (There are only $n-2$ choices since no two arcs in the diagram should be collinear.) Similarly, we have $n-2$ choices for the endpoint of our next horizontal arc. Continuing in this fashion, we see that the number of distinct choices we must make is $[(n-1)!]^2$.
Multiplying this quantity by $\frac{n}{2}$ to account for the initial choice of starting point, we get $\frac{1}{2}n[(n-1)!]^2$.
\end{proof}
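Since the bound grows superexponentially, it is convenient to evaluate it with exact integer arithmetic; the helper below is our own sketch (names are illustrative):

```python
from math import factorial

def arc_presentation_count_bound(n):
    """The bound N(n) <= (1/2) n ((n-1)!)^2 from the proposition above:
    n/2 choices of starting point, then (n-1)! choices of vertical-arc
    endpoints and (n-1)! choices of horizontal-arc endpoints."""
    return n * factorial(n - 1) ** 2 // 2
```

Already at small complexity the count explodes: the bound is $1$ for $n = 2$, $6$ for $n = 3$, and $1440$ for $n = 5$.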
Using this count on the number of distinct arc--presentations of a given size, we can find a bound (albeit a large one) on the number of arc--presentation moves we need. This is simply by virtue of the fact that any reasonable sequence of moves will contain mutually distinct arc--presentations that don't exceed the complexity of the original, and there are a limited number of such diagrams.
\begin{lemma}\label{size}
The number of terms, $m$, in the monotonic simplification of arc--presentation $L$ with $c(L)=n$ is bounded above by $\sum_{i=2}^n\frac{1}{2}i[(i-1)!]^2$.
\end{lemma}
\begin{proof}
Suppose that an arc--presentation $L$ has complexity $n$. Since each $L_k$ from Theorem~\ref{triv} is combinatorially distinct from any other $L_j$ with $k\neq j$, we know that the number $m$ of arc--presentations in the sequence must be at most $\sum_{i=2}^nN(i)$ which is no greater than $\sum_{i=2}^n\frac{1}{2}i[(i-1)!]^2$.
\end{proof}
We should note that, if we start with an arc--presentation of the unknot, every arc--presentation in our simplification sequence must be a diagram of the unknot. As $n$ gets larger, we recognize that far fewer arc--presentations of complexity $n$ are unknots. Thus, in practice, $m$ will be much lower than the upper bound provided here. The authors would be interested to know what the probability is that an arc--presentation of complexity $n$ is the unknot. Using this probability, we could tighten the upper bound we found above.
We return now to our second question: how many Reidemeister moves does it take to make an arc--presentation move?
\begin{lemma}\label{reid}
No more than $n-2$ Reidemeister moves are required to perform an exchange or destabilization move on an arc--presentation $L$ with complexity $c(L)=n$.
\end{lemma}
\begin{proof}
Clearly, a destabilization move requires at most one Reidemeister move, a type I move. Now consider the first exchange move pictured in Figure~\ref{exch}. Let $d$ be the number of vertical strands intersecting both of the horizontal strands to be switched. Then the move requires $d$ type III moves and one type II move. Thus, the exchange move requires $d+1$ Reidemeister moves. We note that $d<a$, where $a$ is the length of the shorter horizontal arc. But $a$ cannot be greater than $n-2$, so the number of Reidemeister moves required is less than or equal to $n-2$. Similarly, the second exchange move pictured above requires $d$ type III moves but no type II moves. Thus, both pictured exchange moves require no more than $n-2$ Reidemeister moves. We note that other versions of the exchange moves (where the horizontal arcs lie in distinct halves of the arc--presentation) require no Reidemeister moves.
\end{proof}
For the finale, we put our two results together.
\begin{theorem}
Suppose $K$ is a diagram (in Morse form) of the unknot with crossing number $cr(K)$ and number of maxima $b(K)$. Let $M=2b(K)+cr(K)$. Then the number of Reidemeister moves required to unknot $K$ is less than or equal to $$\sum_{i=2}^M\frac{1}{2}i[(i-1)!]^2(M-2).$$
\end{theorem}
\begin{proof}
Suppose the arc--presentation $L_K$ of our knot diagram $K$ has complexity $c(L_K)=n$. Then at most $m (n-2)$ Reidemeister moves are required to produce the trivial (complexity 2) arc--presentation, where $m$ is the number of moves in the monotonic simplification of $L_K$. By our lemma, this quantity is bounded above by $$\sum_{i=2}^n\frac{1}{2}i[(i-1)!]^2(n-2).$$ But we showed that $n\leq 2b(K)+cr(K)=M$, thus the number of Reidemeister moves required to unknot $K$ is less than or equal to $$\sum_{i=2}^M\frac{1}{2}i[(i-1)!]^2(M-2).$$
\end{proof}
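The final bound is straightforward to evaluate with exact integer arithmetic; the sketch below is our own (names are illustrative). For the Culprit ($cr = 10$, $b = 5$, so $M = 20$) the value is astronomically large, which illustrates how far these bounds are from being tight.

```python
from math import factorial

def reidemeister_moves_bound(cr, b):
    """Evaluates sum_{i=2}^{M} (1/2) i ((i-1)!)^2 * (M - 2), the upper
    bound from the theorem above, with M = 2b + cr."""
    M = 2 * b + cr
    moves = sum(i * factorial(i - 1) ** 2 // 2 for i in range(2, M + 1))
    return moves * (M - 2)
```

As a sanity check, $M = 3$ gives $(1 + 6) \cdot 1 = 7$ and $M = 4$ gives $(1 + 6 + 72) \cdot 2 = 158$.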
\section{A Detour: Bounds for Untangling Links}\label{links}
To illustrate that similar questions may be extended to families of knots and links beyond the unknot, we take a short detour to the world of non-trivial knots and links. In keeping with our theme, we make use of the work of Dynnikov. He proved two other results regarding the simplification of certain link diagrams~\cite{dynnikov}. In a fashion analogous to the previous section, we may use Dynnikov's results to bound the number of Reidemeister moves and the number of crossings needed to simplify certain types of link diagrams. Before we state these theorems, however, let us clearly define our terms.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.3in]{SplitLink.eps}
\end{center}
\caption{ \small{A split link.}}\label{split}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.5in]{ConnectSum.eps}
\end{center}
\caption{ \small{A composite knot.}}\label{composite}
\end{figure}
\begin{definition}
A link diagram $L$ is said to be \emph{split} if there is a line not intersecting $L$ such that there are components of the diagram lying on both sides of the line. A link (or knot) diagram $L$ is \emph{composite} if it can be viewed as a connected sum of two nontrivial links, i.e., if there is a line intersecting the link at two points such that the tangles on either side of the line are non-trivial. In general, a link is said to be split or composite if there exists a diagram of the link that is split or composite. Figures~\ref{split} and~\ref{composite} give examples illustrating these definitions.
\end{definition}
Let's review the pertinent results from~\cite{dynnikov}.
\begin{theorem}[Dynnikov]
If $L$ is an arc--presentation of a split link, then there exists a finite sequence of exchange and destabilization moves $$L\rightarrow L_1\rightarrow L_2\rightarrow
\cdots\rightarrow L_m$$ such that $L_m$ is split.\end{theorem}
\begin{theorem}[Dynnikov]
If $L$ is an arc--presentation of a non-split composite link, then there exists a finite sequence of exchange and destabilization moves $$L\rightarrow L_1\rightarrow L_2\rightarrow
\cdots\rightarrow L_m$$ such that $L_m$ is composite.\end{theorem}
We note that the statements of Lemmas ~\ref{arclemma},~\ref{size} and~\ref{reid} hold for arbitrary links as well as diagrams of the unknot. Thus, the following result is an immediate consequence of the previous theorems.
\begin{theorem}
Suppose $L$ is a diagram (in Morse form) of a split (resp. non-split composite) link with crossing number $cr(L)$ and number of maxima $b(L)$. Let $M=2b(L)+cr(L)$. Then the number of Reidemeister moves required to transform $L$ into a split (resp. composite) diagram is less than or equal to $$\sum_{i=2}^M\frac{1}{2}i[(i-1)!]^2(M-2).$$
\end{theorem}
Similarly, we have the following extension of our results regarding maximum crossing numbers in a simplifying Reidemeister sequence.
\begin{theorem}
Suppose $L$ is a diagram (in Morse form) of a split (resp. non-split composite) link with crossing number $cr(L)$ and number of maxima $b(L)$. Then for every $i$, the crossing number $cr(L_i)$ is no more than $(M-2)^2$ where $M=2b(L)+cr(L)$ and $L=L_0, L_1, L_2, ..., L_N$ is a sequence of link diagrams such that $L_{i+1}$ is obtained from $L_i$ by a single Reidemeister move, $L_N$ is split (resp. composite).
\end{theorem}
\section{Hard Unknots}
We have provided several upper bounds regarding the complexity of the Reidemeister sequence required to simplify an unknot. The bound that Dynnikov's work helps us obtain for the number of Reidemeister moves required to unknot an unknot is superexponential. Using a different technique, Hass and Lagarias were able to find a bound that is exponential in the crossing number of the diagram~\cite{hl}. They use the same technique to find an exponential bound for the number of crossings required for unknotting. For bounds of this second sort, the one presented here is comparatively sharper.
Regarding lower bounds, it was recently shown in~\cite{hn} that there are unknot diagrams for which the number of Reidemeister moves required for unknotting is quadratic in the crossing number of the initial diagram. In~\cite{hayashi}, similar quadratic lower bounds are given for links. On the other hand, little is known about how many additional crossings an unknot diagram might require in order to become unknotted. While the upper bound on the number of crossings needed in a Reidemeister sequence is merely quadratic in the crossing number of the initial unknot diagram, it nonetheless seems likely that this bound is far from tight.
Let us return to our friend, the Culprit. This famous hard unknot diagram was originally discovered by Ken Millett and introduced in~\cite{Culprit}. Recall that hard unknots are difficult to unknot by virtue of the fact that no simplifying type I or type II Reidemeister moves and no type III moves are available. In Figure~\ref{culprit}, we picture a Morse diagram of the Culprit, its corresponding rectangular diagram, and its arc--presentation, obtained by rotating crossings where the over-strand was horizontal. Note that we need not specify crossing information in the arc--presentation, for it is assumed that all vertical lines pass over horizontal lines.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.5in]{Culprit.eps}\includegraphics[height=1.5in]{CulpritGrid.eps}\includegraphics[height=1.5in]{CulpritArc.eps}
\end{center}
\caption{ \small{The Culprit with its rectangular diagram and arc--presentation.}}\label{culprit}
\end{figure}
We saw that the Culprit may be unknotted with ten Reidemeister moves (see also~\cite{KL}). The maximum crossing number over all diagrams in the given Reidemeister sequence is $12$, two more than the number of crossings in the Culprit. On the other hand, we can compute our upper bound on the number of crossings required for unknotting as follows. Since the crossing number is $cr(K)=10$ and the number of maxima in the diagram is $b(K)=5$, we see that $M = cr(K) + 2b(K)=20$. Thus, our bound is $(M-2)^2=18^2=324$.
We can also use $M$ to find our bound for the number of Reidemeister moves required to unknot the Culprit. $$\sum_{i=2}^M\frac{1}{2}i[(i-1)!]^2(M-2)=9\sum_{i=2}^{20}i[(i-1)!]^2.$$ The largest term in this expression is roughly $10^{35}$, unfortunately quite a bit larger than ten.
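As a sanity check (ours, not part of the paper), the two bounds above can be evaluated mechanically; the script below uses only the formulas already stated in the text:

```python
from math import factorial

# Data for the Culprit diagram, as stated in the text:
cr, b = 10, 5            # crossing number cr(K) and number of maxima b(K)
M = cr + 2 * b           # M = 20

# Bound on the number of crossings appearing while unknotting: (M-2)^2
crossing_bound = (M - 2) ** 2
print(crossing_bound)    # 324

# Bound on the number of Reidemeister moves:
#   sum_{i=2}^{M} (1/2) * i * ((i-1)!)^2 * (M-2)  =  9 * sum_{i=2}^{20} i*((i-1)!)^2
move_bound = (M - 2) * sum(i * factorial(i - 1) ** 2 for i in range(2, M + 1)) // 2
print(move_bound)        # about 2.7 * 10^36
```

The printed move bound dwarfs the ten moves actually needed, as noted above.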
We challenge the reader to find examples where the maximum crossing number is closer to our bound and where the number of needed Reidemeister moves is large in comparison to the number of crossings in the original diagram.
\section{Conclusions}
We have considered the phenomenon that it may be quite hard to unknot a trivial knot, and we have provided several upper and lower bounds on the number of Reidemeister moves and the number of crossings needed to do the job. Known hard unknots like the Culprit and the examples from~\cite{hn} illustrate that unknotting can be tricky, but not nearly as tricky as the known upper bounds would have us believe. To answer the questions we have posed, there is much more to be done.
\section{Acknowledgements}
The authors would like to thank Jeffrey Lagarias and John Sullivan for their valuable comments.
\bibliographystyle{abbrv}
% arXiv:1006.4176, "Unknotting Unknots" (Geometric Topology)

% arXiv:math/0601638, "Upper bounds for edge-antipodal and subequilateral polytopes" (Metric Geometry)
\section{Notation}
Denote the $d$-dimensional real linear space by $\numbersystem{R}^d$, a norm on $\numbersystem{R}^d$ by $\norm{\cdot}$,
its unit ball by $B$, and the ball with centre $x$ and radius $r$ by $B(x,r)$.
Denote the diameter of a set $C\subseteq \numbersystem{R}^d$ by $\diam(C)$, and (if it is measurable) its volume (or $d$-dimensional Lebesgue measure) by $\vol(C)$.
The \emph{dual norm} $\norm{\cdot}^\ast$ is defined by $\norm{x}^\ast:=\sup\{\ipr{x}{y} : \norm{y}\leq 1\}$, where $\ipr{\cdot}{\cdot}$ is the inner product on $\numbersystem{R}^d$.
Denote the number of elements of a finite set $S$ by $\card{S}$.
The \emph{difference body} of a set $S\subseteq\numbersystem{R}^d$ is $S-S:=\{x-y:x,y\in S\}$.
A \emph{polytope} is the convex hull of finitely many points in some $\numbersystem{R}^d$.
A \emph{$d$-polytope} is a polytope of dimension $d$.
A \emph{convex body} $C$ is a compact convex subset of $\numbersystem{R}^d$ with nonempty interior.
The boundary of $C$ is denoted by $\bd C$.
Given any convex body $C$ we define the \emph{relative norm $\norm{\cdot}_C$ determined by $C$} to be the norm with unit ball $C-C$, i.e.,
\[ \norm{x}_C := \inf\{\lambda>0: x\in\lambda(C-C)\}.\]
See \cite{Grunbaum, Ba, MR97f:52001} for background on polytopes, convexity, and finite-dimensional normed spaces.
\section{Introduction}
\subsection{Antipodal and edge-antipodal polytopes}
A $d$-polytope $P$ is \emph{antipodal} if for any two vertices $x$ and $y$ of $P$ there exist two parallel hyperplanes, one through $x$ and one through $y$, such that $P$ is contained in the closed slab bounded by the two hyperplanes.
Klee \cite{Klee} posed the problem of finding an upper bound for the number of vertices of an antipodal $d$-polytope in terms of $d$.
Danzer and Gr\"unbaum \cite{MR25:1488} proved the sharp upper bound of $2^d$.
See \cite{MS} for a recent survey.
A $d$-polytope $P$ is \emph{edge-antipodal} if for any two vertices $x$ and $y$ joined by an edge there exist two parallel hyperplanes, one through $x$ and one through $y$, such that $P$ is contained in the closed slab bounded by the two hyperplanes.
This notion was introduced by Talata \cite{Talata}, who conjectured that the number of vertices of an edge-antipodal $3$-polytope is bounded above by a constant.
Csik\'os \cite{Csikos} proved an upper bound of $12$, and
K.~Bezdek, Bisztriczky and B\"or\"oczky \cite{BBB} gave the sharp upper bound of $8$.
P\'or \cite{Por} proved that the number of vertices of an edge-antipodal $d$-polytope is bounded above by a function of $d$.
However, his proof is existential, with no information on the size of the upper bound.
Our main result is an explicit bound.
\begin{theorem}\label{th1}
Let $d\geq 2$.
Then the number of vertices of an edge-antipodal $d$-polytope is bounded above by $(\frac{d}{2}+1)^d$.
\end{theorem}
In the plane, an edge-antipodal polytope is clearly antipodal, and in this case the above theorem is sharp.
The bound given is not sharp for $d\geq 3$ (since the bound in Theorem~\ref{th2} below is not sharp).
In \cite{BBB} it is stated without proof that all edge-antipodal $3$-polytopes are antipodal.
On the other hand, Talata has an example of an edge-antipodal $d$-polytope that is not antipodal for each $d\geq 4$ (see \cite{Csikos} and Section~\ref{s4} below).
Most likely the largest number of vertices of an edge-antipodal $d$-polytope has an upper bound exponential in $d$, perhaps even $2^d$.
We also mention the paper by Bisztriczky and B{\"o}r{\"o}czky \cite{BB} discussing edge-antipodal $3$-polytopes.
Theorem~\ref{th1} is proved by considering a metric relative of edge-antipodal polytopes, discussed next.
\subsection{Equilateral and subequilateral polytopes}
A polytope $P$ is \emph{equilateral} with respect to a norm $\norm{\cdot}$ on $\numbersystem{R}^d$ if its vertex set is an \emph{equidistant set}, i.e., the distance between any two vertices is a constant.
This notion was first considered by Petty \cite{MR43:1051}, who showed that equilateral polytopes are antipodal, hence have at most $2^d$ vertices.
We now introduce the following natural weakening of this notion, analogous to the weakening from antipodal to edge-antipodal.
We say that a $d$-polytope $P$ is \emph{subequilateral} with respect to a norm $\norm{\cdot}$ on $\numbersystem{R}^d$ if the length of each of its edges equals its diameter.
Although not explicitly given a name, the vertex sets of subequilateral polytopes appear in the study of surface energy minimizing cones by Lawlor and Morgan \cite{MR95i:58051}; see Section~\ref{s4} for a discussion.
It is well-known and easy to prove that an edge-antipodal polytope $P$ is subequilateral with diameter $1$ in the relative norm $\norm{\cdot}_P$ determined by $P$ \cite{Talata, Csikos}.
It is also easy to see that any subequilateral polytope is edge-antipodal.
In order to prove Theorem~\ref{th1} it is therefore sufficient to bound the number of vertices of a subequilateral $d$-polytope.
\begin{theorem}\label{th2}
Let $d\geq 2$.
Then the number of vertices of a subequilateral $d$-polytope with respect to some norm $\norm{\cdot}$ is bounded above by $(\frac{d}{2}+1)^d$.
\end{theorem}
The proof is in Section~\ref{s2}.
In two-dimensional normed spaces subequilateral polytopes are always equilateral.
Therefore, the above theorem is sharp for $d=2$.
By analyzing equality in the proof of Theorem~\ref{th2}, it can be seen that the bound is not sharp for $d\geq 3$.
Since edge-antipodal $3$-polytopes have at most $8$ vertices, with equality only for parallelepipeds \cite{BBB}, it follows that a subequilateral $3$-polytope with respect to any norm has at most $8$ vertices, with equality only if the unit ball of the norm is a parallelepiped homothetic to the polytope.
We finally mention that in Euclidean $d$-space $\mathbb{E}^d$ the only subequilateral polytopes are equilateral simplices, and give a proof.
In the proof we have to consider subequilateral polytopes in spherical spaces, making it possible to formulate a more general theorem for spaces of constant curvature.
Note that if we restrict ourselves to a hemisphere of the $d$-sphere $\mathbb{S}^d$ in $\mathbb{E}^{d+1}$, the notion of a polytope can be defined without ambiguity.
The definition of a subequilateral polytope then still makes sense in a hemisphere of $\mathbb{S}^d$, as well as in hyperbolic $d$-space $\mathbb{H}^d$.
\begin{theorem}\label{th3}
Let $P$ be a subequilateral $d$-polytope in either $\mathbb{E}^d$, $\mathbb{H}^d$, or a hemisphere of $\mathbb{S}^d$.
Then $P$ is an equilateral $d$-simplex.
\end{theorem}
\begin{proof}
The proof is by induction on $d\geq 1$, with $d=1$ trivial and $d=2$ easy.
Suppose now $d\geq 3$.
Let $P$ be a subequilateral $d$-polytope in any of the three spaces.
By induction all facets of $P$ are equilateral simplices.
In particular, $P$ is simplicial.
Since $d\geq 3$, it is sufficient to show that $P$ is simple (see section~4.5 and exercise~4.8.11 of \cite{Grunbaum}).
Consider any vertex $v$ with neighbours $v_1,\dots,v_k$, $k\geq d$.
Then $v_1,\dots,v_k$ are contained in an open hemisphere $S$ of the $(d-1)$-sphere of radius $\diam(P)$ and centre $v$.
(This sphere will be isometric to some sphere in $\mathbb{E}^{d}$, not necessarily of radius $\diam(P)$.)
Consider the $(d-1)$-polytope $P'$ in $S$ generated by $v_1,\dots,v_k$ and any facet of $P'$ with vertex set $F\subset\{v_1,\dots,v_k\}$.
There exists a great sphere $C$ of $S$ passing through $F$ with $P'$ in one of the closed hemispheres determined by $C$.
It follows that the hyperplane $H$ generated by $C$ and $v$ passes through $F\cup\{v\}$, and $P$ is contained in one of the closed half spaces bounded by $H$.
Therefore, $F\cup\{v\}$ is the vertex set of a facet of $P$.
Similarly, it follows that for any vertex set $F$ of a facet of $P$ containing $v$, $F\setminus\{v\}$ is the vertex set of a facet of $P'$.
Therefore, any edge $v_iv_j$ of $P'$ is an edge of $P$, hence of length the diameter of $P$.
It follows that the distance between $v_i$ and $v_j$ in $H$ is the diameter of $P'$ as measured in $H$.
This shows that $P'$ is subequilateral in $H$, and so by induction is an equilateral $(d-1)$-simplex.
Therefore, $k=d$, giving that $P$ is a simple polytope, which finishes the proof.
\end{proof}
\section{A measure of non-equidistance}\label{s2}
The key to the proof of Theorem~\ref{th2} is a lower bound for the distance between two nonadjacent vertices of a subequilateral polytope.
For any finite set of points $V$ we define
\[\lambda(V;\norm{\cdot})=\diam(V)/\min_{x,y\in V, x\neq y}\norm{x-y}.\]
Since $\lambda(V;\norm{\cdot})\geq 1$, with equality if and only if $V$ is equidistant in the norm $\norm{\cdot}$, this functional measures how far $V$ is from being equidistant.
The next lemma generalizes the theorem of Petty \cite{MR43:1051} and Soltan \cite{MR52:4127} that the number of points in an equidistant set is bounded above by $2^d$.
In \cite{MR93d:52009} a proof of the $2^d$-upper bound was given using the isodiametric inequality for finite-dimensional normed spaces due to Busemann (equation~(2.2) on p.~241 of \cite{B}; see also Mel'nikov \cite{MR27:6191}).
However, since the isodiametric inequality has a quick proof using the Brunn-Minkowski inequality \cite{BZ}, it is not surprising that the latter inequality occurs in the following proof.
\begin{lemma}\label{l2}
Let $V$ be a finite set in a $d$-dimensional normed space.
Then $\card{V}\leq(\lambda(V;\norm{\cdot})+1)^d$.
\end{lemma}
\begin{proof}
Let $\lambda=\lambda(V;\norm{\cdot})$.
By scaling we may assume that $\diam(V)=\lambda$.
Then $\norm{x-y}\geq 1$ for all $x,y\in V$, $x\neq y$, hence the balls $B(v,1/2)$, $v\in V$, have disjoint interiors.
Define $C=\bigcup_{v\in V}B(v,1/2)$.
Then $\vol(C)=\card{V}(1/2)^d\vol(B)$ and $\diam(C)\leq 1+\lambda$.
By the Brunn-Minkowski inequality \cite{BZ} we obtain $\vol(C-C)^{1/d}\geq\vol(C)^{1/d}+\vol(-C)^{1/d}=2\vol(C)^{1/d}$, that is, $\vol(C-C)\geq 2^d\vol(C)=\card{V}\vol(B)$.
On the other hand, $C-C\subseteq\diam(C)B\subseteq(1+\lambda)B$, hence $\vol(C-C)\leq(1+\lambda)^d\vol(B)$. Combining the two estimates gives $\card{V}\leq(1+\lambda)^d$.
\end{proof}
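To illustrate Lemma~\ref{l2} (an illustration of ours, not from the paper): for $\lambda=1$ the bound $(\lambda+1)^d=2^d$ is attained by the vertices of the unit cube, which form an equidistant set in the $\ell_\infty$ norm:

```python
from itertools import combinations, product

d = 4
V = list(product((0, 1), repeat=d))      # the 2^d vertices of the unit cube

def linf(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

dists = [linf(u, v) for u, v in combinations(V, 2)]
lam = max(dists) / min(dists)            # lambda(V; l_inf) = 1: V is equidistant
print(len(V), (lam + 1) ** d)            # 16 16.0 -- the bound is attained
```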
In order to find an upper bound on the number of vertices of a subequilateral polytope with vertex set $V$, it remains to bound $\lambda(V;\norm{\cdot})$ from above.
\begin{lemma}\label{l3}
Let $d\geq 2$ and let $V$ be the vertex set of a subequilateral $d$-polytope.
Then $\lambda(V;\norm{\cdot})\leq d/2$.
\end{lemma}
\begin{proof}
Let $P$ be a subequilateral $d$-polytope of diameter $1$, and let $V$ be its vertex set.
We have to show that $\norm{x-y}\geq2/d$ for any distinct $x,y\in V$.
Since this follows from the definition if $xy$ is an edge of $P$,
we assume without loss that $xy$ is not an edge of $P$.
Then $xy$ intersects the convex hull $P'$ of $V\setminus\{x,y\}$ in a (possibly degenerate) segment, say $x'y'$, with $x$, $x'$, $y'$, $y$ in this order on $xy$.
Let $F_x$ and $F_y$ be facets of $P'$ containing $x'$ and $y'$, respectively.
We show that $\norm{x-x'}\geq1/d$.
For each vertex $z$ of $F_x$, $xz$ is an edge of $P$, hence $\norm{x-z}=1$.
By Carath\'eodory's theorem \cite[(2.2)]{Ba}, there exist $d$ vertices $z_1,\dots,z_d$ of the $(d-1)$-polytope $F_x$ and real numbers $\lambda_1,\dots,\lambda_d$ such that
\[ x'=\sum_{i=1}^d\lambda_i z_i,\quad \lambda_i\geq 0,\quad \sum_{i=1}^d\lambda_i=1.\]
Suppose without loss that $\lambda_d=\max_i\lambda_i$.
Then $\lambda_d\geq 1/d$.
By the triangle inequality we obtain
\begin{align*}
\norm{x'-z_d} & = \norm{\sum_{i=1}^{d-1}\lambda_i(z_i-z_d)}\leq \sum_{i=1}^{d-1}\lambda_i\norm{z_i-z_d}\\
&\leq \sum_{i=1}^{d-1}\lambda_i=1-\lambda_d\leq 1-\frac{1}{d},
\end{align*}
and
\begin{align*}
\norm{x-x'} & \geq\norm{x-z_d}-\norm{x'-z_d}\\
& \geq 1-(1-\frac{1}{d})=\frac{1}{d}.
\end{align*}
Similarly, $\norm{y-y'}\geq1/d$, and we obtain $\norm{x-y}\geq 2/d$.
\end{proof}
Lemmas~\ref{l2} and \ref{l3} now imply Theorem~\ref{th2}.\qed
\section{Concluding remarks}\label{s4}
\subsection{Sharpness of Lemma~\ref{l3}}
The following example shows that Lemma~\ref{l3} cannot be improved in general.
Consider the subspace $X=\{(x_1,\dots,x_{d+1}):\sum_{i=1}^dx_i=0\}$ of $\numbersystem{R}^{d+1}$ with the $\ell_1$ norm $\norm{(x_1,\dots,x_{d+1})}_1:=\sum_{i=1}^{d+1}\abs{x_i}$.
Let the standard unit vector basis of $\numbersystem{R}^{d+1}$ be $e_1,\dots,e_{d+1}$.
Let $c=\sum_{i=1}^d e_i$.
Then $V=\{de_i-c:i=1,\dots,d\}\cup\{\pm 2e_{d+1}\}$ is the vertex set of a $d$-polytope $P$ in $X$, with all intervertex distances equal to $2d$, except for the distance between $\pm 2e_{d+1}$, which is $4$.
It follows that $P$ is subequilateral and $\lambda(V;\norm{\cdot})=d/2$.
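The distance computations in this example are easy to verify directly; the following check (ours, for a sample dimension) computes all pairwise $\ell_1$ distances:

```python
from itertools import combinations

d = 5  # any d >= 3 exhibits the two distance values below

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

# d*e_i - c has i-th coordinate d-1, the remaining first-d coordinates -1,
# and last coordinate 0; together with +-2*e_{d+1} these points form V.
def vertex(i):
    return tuple((d - 1 if j == i else -1) if j < d else 0 for j in range(d + 1))

V = [vertex(i) for i in range(d)] + [tuple([0] * d + [2]), tuple([0] * d + [-2])]

dists = sorted({l1(u, v) for u, v in combinations(V, 2)})
print(dists)                     # [4, 2*d]: all distances 2d except one pair
print(max(dists) / min(dists))   # lambda(V; ||.||_1) = d/2
```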
However, the above polytope $P$ is in fact antipodal, and so it is equilateral in $\norm{\cdot}_P$, which gives $\lambda(V;\norm{\cdot}_P)=1$.
It is easy to see that for any polytope $P$ subequilateral with respect to some norm $\norm{\cdot}$, and with vertex set $V$, we have $\lambda(V;\norm{\cdot})\leq\lambda(V;\norm{\cdot}_P)$.
One may therefore hope that for the norm $\norm{\cdot}_P$ the upper bound in Lemma~\ref{l3} may be improved, thus giving a better bound in Theorem~\ref{th1}.
The following example shows that any such improved upper bound will still have to be at least $(d-1)/2$, indicating that essentially new ideas will be needed to improve the upper bounds in Theorems~\ref{th1} and \ref{th2}.
We consider Talata's example \cite{Csikos} of an edge-antipodal polytope that is not antipodal.
Let $d\geq 4$, $e_1,\dots,e_{d}$ be the standard basis of $\numbersystem{R}^d$, $p=\frac{2}{d-1}\sum_{i=1}^{d-1}e_i$, and $\lambda=(d-1)/2-\varepsilon>1$ for some small $\varepsilon>0$.
Then the polytope $P$ with vertex set $V=\{o,e_1,\dots,e_d,p,e_d+\lambda p\}$ is edge-antipodal but not antipodal.
In fact, $\diam(V)\leq 1$ by definition of $\norm{\cdot}_P$, and since $\norm{e_d-o}_P=1$ and $\norm{p-o}_P=1/\lambda$, we obtain $\lambda(V;\norm{\cdot}_P)\geq\lambda$, which is arbitrarily close to $(d-1)/2$.
\subsection{Subequilateral polytopes in the work of Lawlor and Morgan}
Define the \emph{$\norm{\cdot}$-energy} of a hypersurface $S$ in $\numbersystem{R}^d$ to be $\norm{S}:=\int_S\norm{n(x)}dx$, where $n(x)$ is the Euclidean unit normal at $x\in S$.
In \cite{MR95i:58051} a sufficient condition is given to obtain an energy minimizing hypersurface partitioning a convex body.
We restate a special case of the ``General Norms Theorem I'' in \cite[pp.~66--67]{MR95i:58051} in terms of subequilateral polytopes.
(In the notation of \cite{MR95i:58051} we take all the norms $\Phi_{ij}$ to be the same.
Then the points $p_1,\dots,p_m$ in the hypothesis form an equidistant set with respect to the dual norm.
The weakening of the hypothesis in the last sentence of the General Norms Theorem I is easily seen to be equivalent to the requirement that $p_1,\dots,p_m$ is the vertex set of a subequilateral polytope.)
We refer to \cite{MR95i:58051} for the simple and enlightening proof using the divergence theorem.
\begin{lmtheorem}
Let $\norm{\cdot}$ be a norm on $\numbersystem{R}^n$, and let $p_1,\dots,p_m\in\numbersystem{R}^n$ be the vertex set of a subequilateral polytope of $\norm{\cdot}$-diameter $1$.
Let $\Sigma=\bigcup H_{ij}\subset C$ be a hypersurface which partitions some convex body $C$ into regions $R_1,\dots,R_m$ with $R_i$ and $R_j$ separated by a piece $H_{ij}$ of a hyperplane such that the parallel hyperplane passing through $p_i-p_j$ supports the unit ball $B$ at $p_i-p_j$.
Then for any hypersurface $M=\bigcup M_{ij}$ which also separates the $R_i\cap\bd C$ from each other in $C$, with the regions touching $R_i\cap\bd C$ and $R_j\cap\bd C$ facing each other across $M_{ij}$, we have $\norm{\Sigma}^\ast\leq\norm{M}^\ast$, i.e.\ $\Sigma$ minimizes $\norm{\cdot}^\ast$-energy, where $\norm{\cdot}^\ast$ is the norm dual to $\norm{\cdot}$.
\end{lmtheorem}
% arXiv:0912.5031, "On two and three periodic Lyness difference equations"
\section{Introduction and main result}
This paper fully describes the sequences given by the non-autonomous
second order Lyness difference equations
\begin{equation}\label{eq}
x_{n+2}\,=\,\frac{a_n+x_{n+1}}{x_n},
\end{equation}
where $\{a_n\}_n$ is a $k$-periodic sequence taking positive values,
$k=2,3,$ and the initial conditions $x_1,x_2$ are as well positive.
This question is proposed in \cite[Sec. 5.43]{CL}. Recall that
non-autonomous recurrences appear for instance as population models
with a variable structure affected by some seasonality
\cite{ES1,ES2}, where $k$ is the number of seasons. Some dynamical
issues of similar type of equations have been studied in several
recent papers \cite{BHS,CJK,deA,JKN,KN,FJL,GKL}.
Recall that when $k=1,$ that is $a_n=a>0,$ for all $n\in\mathbb{N}$,
then (\ref{eq}) is the famous Lyness recurrence which is well
understood, see for instance \cite{BR,Z}. The cases $k=2,3$ have
been already studied and some partial results are established. For
both cases it is known that the solutions are persistent near a
given $k$-periodic solution, which is stable. This is proved by
using some known invariants, see \cite{KN, FJL, GKL}. Recall that in
our context a solution $\{x_n\}_n$ is said to be persistent if there exist two positive real constants $c$ and $C$, depending on the initial conditions, such that $0<c<x_n<C<\infty$ for all $n\ge 1$.
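As a purely numerical illustration of persistence (ours; the parameter values are arbitrary), one can iterate the $2$-periodic recurrence and observe that the orbit stays in a fixed compact subinterval of $(0,\infty)$:

```python
# x_{n+2} = (a_n + x_{n+1}) / x_n with a_n alternating between a and b
a, b = 2.0, 3.0          # sample 2-periodic coefficients
x, y = 0.1, 7.0          # sample positive initial conditions x_1, x_2

lo, hi = x, x
for n in range(10000):
    x, y = y, ((a, b)[n % 2] + y) / x
    lo, hi = min(lo, y), max(hi, y)

print(lo, hi)   # 0 < lo <= x_n <= hi < infinity along the computed orbit
```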
We prove:
\begin{teo}\label{main} Let $\{x_n\}_n$ be any sequence defined by
(\ref{eq}) and $k\in\{2,3\}$. Then it is persistent. Furthermore, either
\begin{enumerate}
\item [(a)] the sequence $\{x_n\}_n$ is periodic, with period a multiple of $k$; or
\item [(b)] the sequence $\{x_n\}_n$
densely fills one or two (resp. one, two, or three) disjoint intervals of
$\mathbb{R}^+$ when $\{a_n\}_n$ is $2$-periodic (resp. $3$-periodic). Moreover, which of these situations occurs can be determined by algebraic tools.
\end{enumerate}
\end{teo}
Our approach to describing the sequences $\{x_n\}_n$ is based on the
study of the natural dynamical system associated to (\ref{eq}) and
on the results of \cite{CGM}. The main tool that allows us to
distinguish the number of intervals of the adherence of the
sequences $\{x_n\}_n$ is the computation of several resultants, see
Section~\ref{provaA}.
It is worth commenting that Theorem~\ref{main} is an extension of what happens
in the classical case $k=1$. There, the same result holds, but in statement (b)
only one interval appears. Our second main result will prove that there are
other, more significant differences between the case $k=1$ and the cases
$k=2,3$. These differences are related to the lack of monotonicity of certain
rotation number functions associated to the dynamical systems given by the
Lyness recurrences, see Theorem~\ref{teonomonotonia}. The behavior of these
rotation number functions is important for the understanding of the
recurrences, because it determines their possible periods, see
\cite{BR,BC,Z}.
On the other hand, in \cite{deA,GKL} it is proved that, at least for some values
of $\{a_n\}_n$, the behavior of $\{x_n\}_n$ for the case $k=5$ is totally
different. In particular, unbounded positive solutions appear. In the
forthcoming paper \cite{CGM3} we explore in more detail the differences between
the cases $k=1,2,3$ and $k\ge4$.
This paper is organized as follows: Section~\ref{ds} presents the
difference equations that we are studying as discrete dynamical
systems and we state our main results on them, see
Theorems~\ref{rotacions} and \ref{teonomonotonia}. Section~\ref{pt2}
is devoted to the proof of Theorem~\ref{rotacions}. By using it, in
Section~\ref{provaA}, we prove Theorem~\ref{main} and we give some
examples of how to apply it to determine the number of closed
intervals of the adherence of $\{x_n\}_n$. In
Section~\ref{someproperties} we demonstrate
Theorem~\ref{teonomonotonia} and we also present some examples where
we study in more detail the rotation number function of the
dynamical systems associated to \eqref{eq}.
\section{Main results from the dynamical systems point of view}\label{ds}
In this section we reduce the study of the sequence $\{x_n\}_n$ to the study of
some discrete dynamical systems and we state our main results on them.
First we introduce some notations. When $k=2,$ set
\begin{equation}\label{k=2}a_n\,=\,\left\{\begin{array}{lllr}
a&\mbox{for}&n=2\ell+1,\\ b&\mbox{for}&\,n=2\ell,
\end{array}\right.\end{equation}
and when $k=3,$ set
\begin{equation}\label{k=3}a_n\,=\,\left\{\begin{array}{lllr}
a&\mbox{for}&n=3\ell+1,\\
b&\mbox{for}&n=3\ell+2,\\
c&\mbox{for}&n=3\ell,
\end{array}\right.\end{equation}
where $\ell\in\mathbb{N}$ and $a>0,b>0$ and $c>0.$
We also consider the maps $F_\alpha(x,y)$, with $\alpha\in\{a,b,c\},$ as
$$F_{\alpha}(x,y)=\left(y,\frac{\alpha+y}{x}\right),$$
defined on the open invariant set ${Q^+}:=\{(x,y):x>0,y>0\}\subset{\mathbb R}^2.$
Consider for instance $k=2.$ The sequence given by (\ref{eq}),
\begin{equation}\label{seqq}
x_1,x_2,x_3,x_4,x_5,x_6,x_7,\ldots,
\end{equation}
can be seen as
\[
(x_1,x_2)\xrightarrow{F_a}(x_2,x_3)\xrightarrow{F_b}(x_3,x_4)
\xrightarrow{F_a}(x_4,x_5)\xrightarrow{F_b}(x_5,x_6)\xrightarrow{F_a}\cdots.
\]
Hence the behavior of (\ref{seqq}) can be obtained from the study of the
dynamical system defined in ${Q^+}$ by the map:
$$F_{b,a}(x,y):=F_b\circ F_a (x,y)=\left(\frac {a+y}{x},\frac {a+b x+ y}{x
y}\right).
$$
Similarly, for $k=3$ we can consider the map:
$$
F_{c,b,a}(x,y):=F_c \circ F_b\circ F_a(x,y)=\left(\frac {a+bx+y}{xy},\frac
{a+bx+y+cxy}{y \left( a+y \right)} \right).
$$
Notice that both maps have a unique fixed point in ${Q^+}$, which
depends on $a,b$ (and $c$), and which for short we denote by $\bf{p}$.
It is easy to interpret the invariants for (\ref{eq}) and $k=2,3,$ given in
\cite{JKN,KN}, in terms of first integrals of the above maps, see also
Lemma~\ref{elemental}. We have that
$$V_{b,a}(x,y):=
{\frac {ax^2y+bxy^2+bx^2+ay^2+(b^2+a)x+(b+a^2)y+ab}{xy}},
$$
is a first integral for $F_{b,a}$ and
$$V_{c,b,a}(x,y):= {\frac
{c{x}^{2}y+ax{y}^{2}+b{x}^{2}+b{y}^{2}+(a+bc)x+(c+ab)y+ac}{xy}},$$
is a first integral for $F_{c,b,a}$. The topology of the level sets
of these integrals in ${Q^+}$ as well as the dynamics of the maps
restricted to them is described by the following result, that will
be proved in Section~\ref{pt2}.
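The invariance of these functions is a published fact from \cite{JKN,KN}, but it is also easy to confirm by direct computation. The sketch below (ours, not from the paper) iterates $F_{b,a}$ with exact rational arithmetic and checks that $V_{b,a}$ stays constant along the orbit:

```python
from fractions import Fraction as Fr

def F_ba(a, b, x, y):
    # F_{b,a}(x, y) = ((a + y)/x, (a + b*x + y)/(x*y))
    return (a + y) / x, (a + b * x + y) / (x * y)

def V_ba(a, b, x, y):
    return (a*x**2*y + b*x*y**2 + b*x**2 + a*y**2
            + (b**2 + a)*x + (b + a**2)*y + a*b) / (x * y)

a, b = Fr(2), Fr(3)
x, y = Fr(5, 7), Fr(11, 3)       # an arbitrary rational point in Q+
h = V_ba(a, b, x, y)
for _ in range(25):              # exact equality, not just up to rounding
    x, y = F_ba(a, b, x, y)
    assert V_ba(a, b, x, y) == h
print("V_{b,a} is constant along the computed orbit")
```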
\begin{teo}\label{rotacions}
\begin{itemize}
\item[(i)] The level sets of $V_{b,a}$ (resp. $V_{c,b,a}$) in $Q^+\setminus\{\bf{p}\}$ are
diffeomorphic to circles surrounding $\bf p$, which is the unique fixed point
of $F_{b,a}$ (resp. $F_{c,b,a}$).
\item[(ii)] The action of $F_{b,a}$ (resp. $F_{c,b,a}$) on each level set of
$V_{b,a}$ (resp. $V_{c,b,a}$) contained in $Q^+\setminus\{\bf{p}\}$ is
conjugated to a rotation of the circle.
\end{itemize}
\end{teo}
Once a result like the above one is established, the study of the
possible periods of the sequences $\{x_n\}_n$ given by (\ref{eq}) is
quite standard. It suffices first to determine the rotation interval,
that is, the open interval formed by all the rotation numbers given
by the above theorem as the level sets of the first integrals vary;
and then to find the denominators of all
the irreducible rational numbers that belong to this
interval, see \cite{BC,CGM2,Z}.
The study of the rotation number of this kind of rational maps is
not an easy task, see again \cite{BR,BC,CGM2,Z}. In particular, in
\cite{BR} it was proved that the rotation number function, parameterized
by the energy levels of the Lyness map $F_a$, $a\ne1$, is always
monotone, solving a conjecture of Zeeman stated in \cite{Z}; see
also \cite{ER}. As far as we know, in this paper we give the first
simple example for which this rotation number function is neither
constant nor monotone. We prove:
\begin{teo}\label{teonomonotonia}
There are positive values of $a$ and $b$, such that the rotation
number function $\rho_{b,a}(h)$ of $F_{b,a}$ associated to the
closed ovals of $\{V_{b,a}=h\}\subset{Q^+}$ has a local maximum.
\end{teo}
Hence, apart from the known behaviors for the autonomous Lyness maps, that is,
global periodicity or monotonicity of the rotation number function (which
hold trivially for $F_{b,a}$ taking, for instance, $a=b=1$ or $a=b\ne1,$
respectively), more complicated behaviors of the rotation number
function appear.
Our proof of this result relies on the study of lower and upper
bounds for the rotation number of $F_{b,a}$ on a given oval
$\{V_{b,a}(x,y)=V_{b,a}(x_0,y_0)\},$ for some $(a,b)\in(\mathbb{Q}^+)^2$ and
$(x_0,y_0)\in(\mathbb{Q}^+)^2$. This can be done because the map on
this oval is conjugated to a rotation, and it is possible to use an
algebraic manipulator to follow and to order a finite number of
iterates on it, which are also given by points with rational
coordinates. So, only exact arithmetic is used. A similar study
could be done for $F_{c,b,a}$.
\section{Proof of Theorem~\ref{rotacions}}\label{pt2}
{\rec {\it Proof of (i) of Theorem \ref{rotacions}.}} The orbits of
$F_{b,a}$ and $F_{c,b,a}$ lie on the level sets $V_{b,a}=h$ and
$V_{c,b,a}=h$ respectively. These level sets can be seen as the
algebraic curves given by
$$
C_2:=\{c_2(x,y)=ax^2y+bxy^2+bx^2-hxy+ay^2+(b^2+a)x+(b+a^2)y+ab=0\}
$$
and
$$
C_3:=\{c_3(x,y)=cx^2y+axy^2+bx^2-hxy+by^2+(a+bc)x+(c+ab)y+ac=0\},
$$ respectively.
Taking homogeneous coordinates on the projective plane $P{\mathbb R}^2$ both
curves $C_2$ and $C_3$ have the form
$$
C:=\{Sx^2y+Txy^2+Ux^2z+Vxyz+Wy^2z+Lxz^2+Myz^2+Nz^3=0\}.$$
In order to find their branches tending to infinity, we examine the
directions of approach to infinity ($z=0$) in the local charts determined by
$x=1$ and $y=1$, respectively.
In the local chart given by $x=1$, the curve $C$ reads as
$$
Sy+Ty^2+Uz+Vyz+Wy^2z+Lz^2+Myz^2+Nz^3=0$$ and it meets the straight line at
infinity $z=0$ when $y(S+Ty)=0.$ Since for both curves $C_2$ and $C_3$ the
coefficients $S$ and $T$ are positive, the only intersection point that could
give points in ${Q^+}$ is $(y,z)=(0,0).$ The algebraic curve $C$ arrives at
$(y,z)=(0,0)$ tangentially to the line $Sy+Uz=0.$ Since for both curves, $C_2$
and $C_3,$ the coefficients $S$ and $U$ are also positive, we have that the
branches of the level sets tending to infinity are not included in~$Q^+.$
An analogous study can be made in the chart given by $y=1,$ obtaining the same
conclusions.
Moreover, it can be easily checked that in the affine plane both
curves $C_2$ and $C_3$ do not intersect the part of the axes $x=0$
and $y=0$ which is in the boundary of ${Q^+}$.
In summary, there are no branches of the curves $C_2$ and $C_3$ tending to
infinity or crossing the axes $x=0$ and $y=0$ in $Q^+$, and therefore the
connected components of $C_i\cap Q^+$ for $i=2,3$ are bounded. Notice that this
result in particular already implies the persistence of the sequences given by
\eqref{eq}.
Consider $k=2$. We claim the following facts:
\begin{enumerate}[(a)]
\item In ${Q^+}$, the set of fixed points of $F_{b,a}$ and the set of singular
points of $C_2$ coincide, and they only contain the point ${\bf
p}=(\bar x,\bar y)$.
\item The function $V_{b,a}(x,y)$ has a
local minimum at $\bf p$.
\end{enumerate}
We remark that item (b) is already known. We present a new simple proof for the
sake of completeness.
From the above claims and the fact that the connected components of the level
sets of $V_{b,a}$ in ${Q^+}$ are bounded it follows that the level sets of
$V_{b,a}$ in ${Q^+}\setminus\{\bf{p}\}$ are diffeomorphic to circles.
Let us prove the above claims. The fixed points of $F_{b,a}$ are given by
$$
\begin{cases}
x=\frac{a+y}{x}, \\
y=\frac{a+bx+y}{xy},
\end{cases} \Leftrightarrow
\begin{cases}
x^2=a+y, \\
x(y^2-b)=a+y,
\end{cases}
$$
and so $x^2=x(y^2-b).$ Hence in $Q^+,$ we have that $x=y^2-b$ and the above
system is equivalent to
$$
\begin{cases}
x=y^2-b, \\
xy^2-bx-y-a=0,
\end{cases}
\Leftrightarrow \begin{cases}
x=y^2-b, \\
P(y):=y^4-2by^2-y+b^2-a=0.
\end{cases}
$$
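The fixed point can also be located numerically from this system. The following sketch (our illustration, not part of the paper; the sample values $a=3$, $b=2$ are our choice) finds the root of $P(y)$ on $(\sqrt{b},\infty)$ by bisection and checks that the resulting point is indeed fixed by $F_{b,a}$.

```python
from math import sqrt

# Sample parameter values (our choice): a = 3, b = 2.
a, b = 3.0, 2.0

# P(y) = y^4 - 2b y^2 - y + b^2 - a; its unique root in (sqrt(b), oo)
# gives the fixed point p = (y^2 - b, y) of F_{b,a}.
P = lambda y: y**4 - 2*b*y**2 - y + b**2 - a

lo, hi = sqrt(b), 100.0          # P(sqrt(b)) = -sqrt(b) - a < 0 < P(100)
for _ in range(200):             # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
y = (lo + hi) / 2
x = y*y - b

fx, fy = (a + y) / x, (b*x + a + y) / (x*y)   # F_{b,a}(x, y)
print((x, y), abs(fx - x) + abs(fy - y))       # residual ~ 0
```

For these values the fixed point is approximately $(2.25,2.06)$, and the residual is at the level of the floating-point precision.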
It is not difficult to check that the last system of equations is
precisely the one that gives the critical points of the curves
$V_{b,a}=h.$ Moreover, from the first equation it is necessary that
$x=y^2-b>0$ and hence $y>\sqrt{b}.$ Since $P(y)$ has only one real
root in $(\sqrt{b},\infty)$, the uniqueness of the critical point
holds.
Let us prove that this critical point corresponds to a local minimum of
$V_{b,a}.$ We will check the usual sufficient conditions given by the Hessian
of $V_{b,a}$ at $\bf p$.
Firstly,
$$
\frac{\partial^2 }{\partial x^2}V_{b,a}(y^2-b,y)=2\,{\frac { \left(
y+a \right) \left( ay+b \right) }{ \left({y}^{ 2}-b \right)
^{3}y}}>0\,\mbox{ for }y>\sqrt{b}.$$ Secondly, the determinant of
the Hessian matrix at the points $(y^2-b,y)$ is
$$
h(y)=\frac{f(y)}{(b-y^2)^4y^4},
$$
where
$$f(y):=(by^2+a-b^2)\big(-b{y}^{6}+
3\left( a+{b}^{2} \right) {y}^{4}+ 4\left( {a}^{2}+b \right) {y}^{3}+ 3b\left(
2a-{b}^{2} \right) {y}^{2}+b^2({b}^{2}-a )\big).$$ A tedious computation shows that
$ f(y)=q(y)P(y)+r(y), $ with
\begin{align*} r(y)=&\left(4{a}^{2}{b}^{2}+ 4{a}^{3}+4{b}^{3}+6ab
\right) {y}^{3}+
\left(18{a}^{2}b+ 8{b}^{3}a+3{b}^{2} \right) {y}^{2}\\
&+ \left(-4{b}^{3}{a}^{2}+4{a}^{3}b-4{b}^{4} +12 a{b}^{2}+3{a}^{2} \right)
y-8{b}^{4}a+5{a}^{2}{b}^{2}+3{a}^{3}.
\end{align*}
Observe that if $\bar y$ is the root of $P(y)$ in $(\sqrt{b},\infty)$, then
${\rm sign}(h(\bar y ))={\rm sign}(r(\bar y ))$. Taking into account that
$P(\bar y )=0$ implies that $a=\bar y ^4-2b\bar y ^2-\bar y +b^2$ we
have that
$$r(\bar y )= {\bar y}^{2} \left( 4\,{\bar y}^{3}-4\,b\bar y -1 \right) \left(
b\bar y +1-{\bar y}^{3} \right) ^{2} \left( b-{\bar y}^{2} \right)
^{2}.$$ So, $ {\rm sign}\left( 4\,{\bar y}^{3}-4b\bar y
-1\right)={\rm sign}\left(P'(\bar y )\right). $ Since
$P(\sqrt{b})=-\sqrt{b}-a<0,$ $\lim\limits_{y\to\infty} P(y)=+\infty$
and $P(y)$ has only one zero on this interval, which is simple, we
get that $P'(\bar y )>0$ and so $h(\bar y )>0.$
Hence ${\bf p}$ is a local minimum of $V_{b,a}(x,y)$, as we wanted
to prove.
The same kind of arguments works to end the proof for the case $k=3,$ but the
computations are considerably more tedious. We only make some comments.
The fixed points of $F_{c,b,a}$ in $Q^+$ are given by:
\begin{equation*}\label{sistemafcba}
\begin{cases}
P(x):=x^5+cx^4-2x^3-(2c+ab)x^2+(1-a^2-b^2)x+c-ab=0,\, \\
y=Q(x):=\dfrac{bx+a}{(x-1)(x+1)}.
\end{cases}
\end{equation*}
It can be proved again that they coincide with the singular points of
$V_{c,b,a}$ in $Q^+.$ This fact follows from the computation of several
suitable resultants between ${\partial V_{c,b,a}}/{\partial x},$ ${\partial
V_{c,b,a}}/{\partial y}$ and $V_{c,b,a}.$
The uniqueness of the fixed point $\bf p$ in ${Q^+}$ can be shown as follows:
since $Q(x)>0$ implies that $x>1$, we only need to search for solutions of
$P(x)=0$ in $(1,+\infty)$. With the new variable $z=x-1,$
\begin{equation*}\label{quantessolucions}
\widetilde{P}(z):=P(z+1)=z^5+(c+5)z^4+(8+4c)z^3+(4-ab+4c)z^2-(a+b)^2z-(a+b)^2=0.
\end{equation*}
Since $\widetilde{P}(0)<0$, $\lim\limits_{z\to +\infty}
\widetilde{P}(z)=+\infty$, and by Descartes' rule of signs, we know that there is only
one positive solution, as we wanted to see.
Finally, it can be proved that $\bf p$ is a non-degenerate local
minimum of $V_{c,b,a}$. These computations are complicated, and they
have been performed in a very clever way in \cite{KN}, so we skip
them and refer the reader to this last reference.\qed
\subsection{Proof of (ii) of Theorem \ref{rotacions}}
In \cite{CGM} a result is proved that characterizes the dynamics of
integrable diffeomorphisms having a \textsl{Lie symmetry}, that is, a vector
field $X$ such that $ X(F(p))=(DF(p))\,X(p)$. The next theorem states it,
particularized to the case we are interested in.
\begin{teo}[\cite{CGM}]\label{teor}
Let ${\cal{U}}\subset \mathbb{R}^2$ be an open set and let $\Phi:{\cal{U}}\rightarrow {\cal{U}}$ be
a diffeomorphism such that:
\begin{enumerate}
\item[(a)] It has a smooth regular first integral $V:{\cal{U}}\rightarrow {\mathbb R},$ having
its level sets $\Gamma_h:=\{z=(x,y)\in{\cal{U}}\,:\, V(z)=h\}$ as simple
closed curves.
\item[(b)] There exists a smooth function $\mu:{\cal{U}}\rightarrow {\mathbb R}^+$ such
that for any $z\in {\cal{U}},$
\begin{equation*}\label{mu}
\mu(\Phi(z))=\det(D\Phi(z))\,\mu(z).
\end{equation*}
\end{enumerate}
Then the map $\Phi$ restricted to each $\Gamma_h$ is conjugated to a rotation
with rotation number $\tau(h)/T(h)$, where $T(h)$ is the period of $\Gamma_h$
as a periodic orbit of the planar differential equation
\[
\dot z=\mu(z)\left(-\frac{\partial V(z)}{\partial y},\frac{\partial
V(z)}{\partial x}\right)
\]
and $\tau(h)$ is the time needed by the flow of this equation for
going from any $w\in\Gamma_h$ to $\Phi(w)\in\Gamma_h.$
\end{teo}
The next lemma is one of the key points for finding a Lie symmetry for families of
periodic maps, like the $2$- and $3$-periodic Lyness maps.
\begin{lem}\label{mus}
Let $\{G_a\}_{a\in A}$ be a family of diffeomorphisms of ${\cal{U}}\subset
\mathbb{R}^2$. Suppose that there exists a smooth map $\mu:{\cal{U}}\to {\mathbb R}$
such that for any $a\in A$ and any $z\in {\cal{U}},$ the equation $
\mu(G_a(z))=\det(DG_a(z))\,\mu(z)$ is satisfied. Then, for every
choice $a_1,\ldots,a_k\in A,$ we have
$$
\mu(G_{[k]}(z))=\det(DG_{[k]}(z))\,\mu(z),$$ where
$G_{[k]}=G_{a_k}\circ\cdots\circ G_{a_{2}}\circ G_{a_1}.$
\end{lem}
{\rec {\it Proof.}} It is only necessary to prove the result for $k=2$ because the general
case follows easily by induction. Consider $a_1,a_2\in A$; then
\begin{align*} \mu(G_{a_2,a_1}(z))&=\mu(G_{a_2}\circ
G_{a_1}(z))=\det(DG_{a_2}(G_{a_1}(z)))\,\mu(G_{a_1}(z)) \\
&= \det(DG_{a_2}(G_{a_1}(z)))\,\det(DG_{a_1}(z))\,\mu(z)= \det(D
(G_{a_2}\circ G_{a_1})(z))\,\mu(z)\\
&= \det(DG_{a_2,a_1}(z))\,\mu(z),
\end{align*}
and the lemma follows.\qed
\medskip
{\rec {\it Proof of (ii) of Theorem \ref{rotacions}.}} From part (i) of the
theorem we know that the level sets of $V_{b,a}$ and $V_{c,b,a}$ in
${Q^+}\setminus\{{\bf p}\}$ are diffeomorphic to circles. Moreover these
functions are first integrals of $F_{b,a}$ and $F_{c,b,a}$, respectively.
Notice also that for any $a,$ the Lyness map $F_a(x,y)=(y,\frac{a+y}{x})$
satisfies
$$\mu(F_{a}(x,y))=\det(DF_{a}(x,y))\mu(x,y),$$
with $\mu(x,y)=xy.$ Hence, by Lemma \ref{mus},
\[\mu(F_{b,a}(x,y))=\det(DF_{b,a}(x,y))\mu(x,y)\quad\mbox{and}\quad
\mu(F_{c,b,a}(x,y))=\det(DF_{c,b,a}(x,y))\mu(x,y).\] Thus, from
Theorem~\ref{teor}, the result follows.\qed
\medskip
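The relation $\mu(F_a(z))=\det(DF_a(z))\,\mu(z)$ with $\mu(x,y)=xy$ is easy to check numerically; the following sketch (our illustration, with a finite-difference Jacobian and sample points of our choice) compares both sides of the identity.

```python
# Sample parameter value (our choice): a = 2.5.
a = 2.5

def F(x, y):                      # Lyness map F_a(x, y) = (y, (a + y)/x)
    return y, (a + y) / x

def det_DF(x, y, h=1e-6):         # Jacobian determinant, central differences
    fxp, fxm = F(x + h, y), F(x - h, y)
    fyp, fym = F(x, y + h), F(x, y - h)
    j11 = (fxp[0] - fxm[0]) / (2*h); j12 = (fyp[0] - fym[0]) / (2*h)
    j21 = (fxp[1] - fxm[1]) / (2*h); j22 = (fyp[1] - fym[1]) / (2*h)
    return j11*j22 - j12*j21

mu = lambda x, y: x * y

errors = []
for (x, y) in [(1.0, 1.0), (0.7, 2.3), (3.1, 0.4)]:
    lhs = mu(*F(x, y))                       # mu(F_a(z))
    rhs = det_DF(x, y) * mu(x, y)            # det(DF_a(z)) * mu(z)
    errors.append(abs(lhs - rhs))
print(errors)                                # all entries ~ 0
```

The discrepancies are only due to the finite-difference approximation of the Jacobian.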
It is worth commenting that, once part (i) of the theorem is proved,
it is also possible to prove that the dynamics of $F_{b,a}$ (resp.
$F_{c,b,a}$) restricted to the level sets of $V_{b,a}$ (resp.
$V_{c,b,a}$) is conjugated to a rotation by using the fact that these
level sets are cubic curves and that the map is birational, see
\cite{JRV}. We prefer our approach because it provides
a dynamical interpretation of the rotation number together with its
analytic characterization.
\section{Proof of Theorem \ref{main}}\label{provaA}
In order to prove Theorem~\ref{main} we need a preliminary result.
Consider the maps $F_{b,a}$ and $F_{a,b},$ jointly with their
corresponding first integrals $V_{b,a}$ and $V_{a,b}.$ In a similar
way consider $F_{c,b,a}\,,\,F_{a,c,b}$ and $F_{b,a,c}$ with
$V_{c,b,a}\,,\,V_{a,c,b}$ and $V_{b,a,c}.$ Some simple computations
prove the following elementary but useful lemma. Notice that it can
be interpreted as the relation between the first integrals and the
non-autonomous invariants.
\begin{lem}\label{elemental}
With the above notations:
\begin{itemize}
\item[(i)] $V_{b,a}(x,y)=V_{a,b}(F_a(x,y)).$
\item[(ii)] $V_{c,b,a}(x,y)=V_{a,c,b}(F_a(x,y))=V_{b,a,c}(F_b(F_a(x,y))).$
\end{itemize}
\end{lem}
{\rec {\it Proof of Theorem \ref{main}.}} We split the proof into two
steps. For $k=2,3$ we first prove that there are only two types of
behaviors for $\{x_n\}_n$: either this set of points is formed by
$kp$ points for some positive integer $p,$ or it has infinitely many
points whose adherence is given by at most $k$ intervals. Secondly,
in this latter case, we provide an algebraic way of studying the
actual number of intervals.
\noindent {\bf First step:} We start with the case $k=2.$ With the notation
introduced in \eqref{k=2}, it holds that
\[
F_{b,a}(x_{2n-1},x_{2n})=(x_{2n+1},x_{2n+2}), \quad
F_{a,b}(x_{2n},x_{2n+1})=(x_{2n+2},x_{2n+3}),
\]
where $(x_1,x_2)\in{Q^+}$ and $n\ge 1$. So the odd terms of the sequence
$\{x_n\}_n$ are contained in the projection on the $x$-axis of the oval of
$\{V_{b,a}(x,y)=V_{b,a}(x_1,x_2)=h\}$ and the even terms in the corresponding
projection of $\{V_{a,b}(x,y)=V_{a,b}(F_{a}(x_1,x_2))=h\}$, where notice that
we have used Lemma~\ref{elemental}.
Recall that the ovals of $V_{b,a}$ are invariant by $F_{b,a}$ and
the ovals of $V_{a,b}$ are invariant by $F_{a,b}$. Notice also that
the trivial equality $F_a\circ F_{b,a}=F_{a,b}\circ F_a$ implies
that the action of $F_{b,a}$ on $\{V_{b,a}(x,y)=h\}$ is conjugated
to the action of $F_{a,b}$ on $\{V_{a,b}(x,y)=h\}$ via $F_a.$
From Theorem \ref{rotacions} we know that $F_{b,a}$ on the
corresponding oval is conjugated to a rotation of the circle. Hence,
if the corresponding rotation number is rational, then the orbit
starting at $(x_1,x_2)$ is periodic, say of period $q,$ and the
sequence $\{x_n\}_n$ is $2q$-periodic. On the other hand, if the
rotation number is irrational, then the orbit of $(x_1,x_2)$
generated by $F_{b,a}$ densely fills the oval of
$\{V_{b,a}(x,y)=h\}$ in ${Q^+}$, and hence the subsequence of odd terms
also densely fills the projection of $\{V_{b,a}(x,y)=h\}$ on the
$x$-axis. Clearly, the sequence of even terms does the same with the
projection of the oval of $\{V_{a,b}(x,y)=h\}.$
Similarly when $k=3$ the equalities
\begin{align*}
F_{c,b,a}(x_{3n-2},x_{3n-1})&=(x_{3n+1},x_{3n+2}), \\
F_{a,c,b}(x_{3n-1},x_{3n})&=(x_{3n+2},x_{3n+3}),\\
F_{b,a,c}(x_{3n},x_{3n+1})&=(x_{3n+3},x_{3n+4}),
\end{align*}
where $n\ge1$, allow us to conclude that each term $x_m$ of the sequence
$\{x_n\}_n$, where we use the notation (\ref{k=3}), is contained in one of the
projections on the $x$-axis of the ovals $\{V_{c,b,a}(x,y)=
V_{c,b,a}(x_1,x_2)=:h\}$, $\{V_{a,c,b}(x,y)=h\}$ and $\{V_{b,a,c}(x,y)=h\}$,
according to the remainder of $m$ after dividing it by 3. The rest of the
proof in this case follows as in the case $k=2.$ So the first step is done.
\noindent {\bf Second step:} From the above results it is clear that
the problem of knowing the number of connected components of the
adherence of $\{x_n\}_n$ is equivalent to the control of the
projections of several invariant ovals on the $x$-axis. The strategy
for $k=3$, and analogously for $k=2$, is the following. Consider the
ovals contained in the level sets given by $\{V_{c,b,a}(x,y)=h\}$,
$\{V_{a,c,b}(x,y)=h\}$ and $\{V_{b,a,c}(x,y)=h\}$ and denote by
$I=I(a,b,c,h), J=J(a,b,c,h)$ and $K=K(a,b,c,h)$ the corresponding
closed intervals of $(0,\infty)$ given by their projections on the
$x$-axis.
We want to detect the values of $h$ for which two of the intervals
among $I,J$ and $K$ have exactly one common point. First we seek
their boundaries. Since the level sets are given by cubic curves,
which are quadratic with respect to the $y$-variable, these points
correspond to values of $x$ for which the discriminant of the
quadratic equation with respect to $y$ is zero. So, we compute
\begin{align*}
R_1(x,h,a,b,c)&:=\mbox{dis}\,(xyV_{c,b,a}(x,y)-hxy,y)=0,\\
R_2(x,h,a,b,c)&:=\mbox{dis}\,(xyV_{a,c,b}(x,y)-hxy,y)=0,\\
R_3(x,h,a,b,c)&:=\mbox{dis}\,(xyV_{b,a,c}(x,y)-hxy,y)=0.
\end{align*}
Now we have to search for relations among $a,b,c$ and $h$ for which
two of these three functions have some common solution $x.$ These
relations can be obtained by computing some suitable resultants.
Taking the resultants of $R_1$ and $R_2$; $R_2$ and $R_3$; and $R_1$
and $R_3$ with respect to $x$ we obtain three polynomial equations
$R_4(h,a,b,c)=0$, $R_5(h,a,b,c)=0$ and $R_6(h,a,b,c)=0.$ In short,
once $a,b$ and $c$ are fixed we have obtained three polynomials in
$h$ such that a subset of their zeroes give the bifurcation values
which separate the number of intervals of the adherence of
$\{x_n\}_n$. See the results of Proposition~\ref{ex2} and
Example~\ref{ex3} for concrete applications of the method.
Before ending the proof we want to comment that, for most values of
$a$, $b$ and $c,$ all three possibilities appear as $h$ varies,
namely $1$, $2$ or $3$ different intervals. The last case appears
for values of $h$ near $h_c:=V_{c,b,a}({\bf p})$, because the first
coordinates of the three points $\bf p,$ $F_a({\bf p})$
and $F_b(F_a({\bf p}))$ almost never coincide. The other situations
can be obtained by increasing $h.$ \qed
\begin{propo}\label{ex2} Consider the recurrence \eqref{eq} with $k=2$ and
$\{a_n\}_n$ as in \eqref{k=2}
taking the values $a=3$ and $b=1/2.$ Define
$h_c={(12z^3-33z+7)}/{(2(z^2-3))}\simeq 17.0394, $ where $z\simeq
2.1513$ is the biggest positive real root of \, $2z^4-12z^2-2z+17,$
and $h^*\simeq 17.1198,$ is the smallest positive root of
\[p_4(h):=112900h^4-2548088h^3-48390204h^2+564028596h+7613699255.\]
Then,
\begin{enumerate}[(i)]
\item The initial condition $(x_1,x_2)=(z,z^2-3)$ gives a $2$-periodic
sequence $\{x_n\}_n$. Moreover $V_{1/2,3}(z,z^2-3)=h_c.$
\item Let $(x_1,x_2)$ be any positive initial conditions, different
from $(z,z^2-3),$ and set $h=V_{1/2,3}(x_1,x_2).$ Let $\rho(h)$
denote the rotation number of $F_{1/2,3}$ restricted to the oval of
$\{V_{1/2,3}(x,y)=h\}.$ Then
\begin{enumerate}[(I)]
\item If $\rho(h)=p /q\in\mathbb{Q},$ with $\gcd(p,q)=1,$ then the sequence
$\{x_n\}_n$ is $2q$-periodic.
\item If $\rho(h)\not\in\mathbb{Q}$ and $h\in (h_c,h^*)$ then the adherence of
the sequence $\{x_n\}_n$ is formed by two disjoint closed intervals.
\item If $\rho(h)\not\in\mathbb{Q}$ and $h\in [h^*,\infty)$ then the adherence of
the sequence $\{x_n\}_n$ is one closed interval.
\end{enumerate}
\end{enumerate}
\end{propo}
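The constants appearing in the proposition can be cross-checked numerically. The following sketch (our illustration, not part of the paper) recovers $z$, $h_c$ and $h^*$ by bisection, using only the polynomials stated above and the expression of $V_{1/2,3}$ read off the curve $C_2$.

```python
def bisect(f, lo, hi, iters=200):
    # f must change sign on [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

q  = lambda z: 2*z**4 - 12*z**2 - 2*z + 17
p4 = lambda h: (112900*h**4 - 2548088*h**3 - 48390204*h**2
                + 564028596*h + 7613699255)

z  = bisect(q, 2.0, 2.2)                    # q(2) < 0 < q(2.2)
hc = (12*z**3 - 33*z + 7) / (2*(z*z - 3))   # the compact expression

# h_c should equal V_{1/2,3} at the fixed point (z, z^2 - 3)  [a = 3, b = 1/2]
a, b, x, y = 3.0, 0.5, z, z*z - 3
V = (a*x*x*y + b*x*y*y + b*x*x + a*y*y
     + (b*b + a)*x + (b + a*a)*y + a*b) / (x*y)

hstar = bisect(p4, 17.0, 17.2)              # p4(17) > 0 > p4(17.2)
print(z, hc, V, hstar)                      # ~ 2.1513, 17.0394, 17.0394, 17.1198
```

Note that the identity $V_{1/2,3}(z,z^2-3)=h_c$ reduces, after clearing denominators, exactly to $2z^4-12z^2-2z+17=0$, so the two computed values of $h_c$ agree to machine precision.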
We want to remark that, from a computational point of view, case
(I) is almost never detected. Indeed, taking $a$ and $b$ rational
numbers and starting with rational initial conditions, by using
Mazur's theorem it can be seen that the rotation number will almost
never be rational, see the proof of \cite[Prop. 1]{BR}. Therefore,
in numeric simulations only situations (II) and (III) appear, and
the value $h=h^*$ gives the boundary between them. In general, for
$k=2,$ the value $h^*$ is always the root of a polynomial of degree
four, which is constructed from the values of $a$ and $b$.
\noindent{\it Proof of Proposition~\ref{ex2}.} Clearly $(z,z^2-3)$
is the fixed point of $F_{1/2,3}$ in ${Q^+}.$ Some computations give the
compact expression of $h_c:=V_{1/2,3}(z,z^2-3).$ To obtain the value
$h^*$ we proceed as in the proof of Theorem~\ref{main}. In general,
\begin{align*}
R_1(x,h,a,b)&:=\mbox{dis}\,(xyV_{b,a}(x,y)-hxy,y)\\
&\phantom{:}=(ax^2-hx+a^2+b)^2-4(bx+a)(bx^2+b^2x+ax+ab),\\
R_2(x,h,a,b)&:=\mbox{dis}\,(xyV_{a,b}(x,y)-hxy,y)\\
&\phantom{:}=(bx^2-hx+a+b^2)^2-4(ax+b)(ax^2+a^2x+bx+ab).
\end{align*}
Then we have to compute the resultant of the above polynomials with
respect to $x$. It always decomposes as the product of two quartic
polynomials in $h.$ Its expression is very large, so we only give it
when $a=3$ and $b=1/2$. It reads as
\[
\frac{625}{65536}\left(4h^4-1176h^3+308h^2+287380h+1816975\right)p_4(h).
\]
It has four real roots, two for each polynomial. Some further work
proves that the one that interests us is the smallest root of $p_4.$
\qed
We also give an example when $k=3,$ but skipping all the details.
\begin{example}\label{ex3} Consider the recurrence \eqref{eq} with $k=3$ and
$\{a_n\}_n$ as in \eqref{k=3}
taking the values $a=1/2, b=2$ and $c=3.$ Then for any positive
initial conditions $x_1$ and $x_2$, $V_{c,b,a}(x_1,x_2)=h\ge
V_{c,b,a}({\bf p})=h_c\simeq 15.9283.$ Moreover if the rotation
number of $F_{c,b,a}$ associated to the oval $\{V_{c,b,a}(x,y)=h\}$
is irrational then the adherence of $\{x_n\}_n$ is given by:
\begin{itemize}
\item Three intervals when $h\in (h_c,h^*),$ where $h^*\simeq
15.9614;$
\item Two intervals when $h\in [h^*,h^{**}),$ where $h^{**}\simeq
16.0015;$
\item One interval when $h\in[h^{**},\infty).$
\end{itemize}
The values $h^*$ and $h^{**}$ are roots of two polynomials of degree
8 with integer coefficients that can be explicitly given.
\end{example}
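As a sanity check, the value $h_c$ quoted in the example can be reproduced numerically. The sketch below (ours; all names are our own) composes the three Lyness steps to build $F_{c,b,a}$, locates its fixed point on the curve $y=Q(x)$ by bisection, and evaluates $V_{c,b,a}$ there.

```python
a, b, c = 0.5, 2.0, 3.0          # the parameter values of the example

def lyness(s, x, y):             # one Lyness step F_s(x, y) = (y, (s + y)/x)
    return y, (s + y) / x

def G(x, y):                     # F_{c,b,a} = F_c o F_b o F_a
    x, y = lyness(a, x, y)
    x, y = lyness(b, x, y)
    return lyness(c, x, y)

Q = lambda x: (b*x + a) / ((x - 1) * (x + 1))   # curve containing the fixed point
g = lambda x: G(x, Q(x))[1] - Q(x)              # remaining fixed-point condition

lo, hi = 1.5, 2.0                # g changes sign on this bracket
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
x = (lo + hi) / 2
y = Q(x)

num = (c*x*x*y + a*x*y*y + b*x*x + b*y*y
       + (a + b*c)*x + (c + a*b)*y + a*c)
h_c = num / (x * y)              # V_{c,b,a} at the fixed point
print(x, y, h_c)                 # h_c ~ 15.9283
```

The fixed point comes out near $(1.58, 2.45)$, and the resulting level value agrees with the $h_c$ of the example.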
\section{Some properties of the rotation number function}\label{someproperties}
From Theorem~\ref{rotacions} it is natural to introduce the {\it
rotation number function} for $F_{b,a}$:
\[
\rho_{b,a}:[h_c,\infty)\longrightarrow (0,1),
\]
where $h_c:=V_{b,a}({\bf p})$, as the map that associates to each
invariant oval $\{V_{b,a}(x,y)=h\}$, the rotation number
$\rho_{b,a}(h)$ of the function $F_{b,a}$ restricted to it. The
following properties hold:
\begin{enumerate}[(i)]
\item The function $\rho_{b,a}(h)$ is analytic for $a>0,b>0$,
$h>h_c$ and it is continuous at $h=h_c.$ This can be proved from the
tools introduced in \cite[Sec. 4]{CGM2}.
\item The value $\rho_{b,a}(h_c)$ is given by the argument over $2\pi$ of the
eigenvalues (which have modulus one due to the integrability of
$F_{b,a}$) of the differential of $F_{b,a}$ at ${\bf p}$.
\item $\rho_{b,a}(h)=\rho_{a,b}(h).$
\item $\rho_{a,a}(h)=2\rho_a(h) \mod 1,$ where $\rho_a$ is the rotation
number\footnote{Notice that given a map of the circle there is an
ambiguity between $\rho$ and $1-\rho$ when one considers its
rotation number. So, while for us the rotation number of the
classical Lyness map for $a=1$ is $4/5$, in other papers it is
computed as $1/5.$} function associated to the classical Lyness
map. Then, from the results of \cite{BC} we know that
$\rho_{1,1}(h)\equiv3/5,$ and that for positive $a\ne1,$
$\rho_{a,a}(h)$ is monotonous with
$\lim_{h\to\infty}\rho_{a,a}(h)=3/5.$
\end{enumerate}
Note that item (iii) follows because $F_{a,b}$ is conjugated to
$F_{b,a}$ via $\psi=F_a$, which is a diffeomorphism of ${Q^+}$, because
$ \psi^{-1} F_{a,b} \psi=F_a^{-1}F_a F_b F_a=F_b F_a=F_{b,a}. $
Since $\psi$ preserves the orientation, the rotation number
functions of $F_{a,b}$ and $F_{b,a}$ restricted to the corresponding
ovals must coincide.
Similar results to the ones given above hold for $F_{c,b,a}$ and its
corresponding rotation number function. Some obvious differences
are:
\begin{align*}
&\rho_{c,b,a}(h)=\rho_{b,a,c}(h)=\rho_{a,c,b}(h),
&&\rho_{a,a,a}(h)=3\rho_a(h) \mod 1,\\
&\rho_{1,1,1}(h)=2/5, &&\lim_{h\to\infty}\rho_{a,a,a}(h)=2/5.
\end{align*}
We are convinced that, when the involved parameters are positive,
\[\lim_{h\to\infty}\rho_{b,a}(h)=3/5\quad\mbox{and}\quad\lim_{h\to\infty}\rho_{c,b,a}(h)=2/5,
\]
but we have not been able to prove these equalities. If they were
true, combining them with the values of the rotation number
function at $h=h_c$ would give very useful information to
decide whether, apart from the trivial cases $a=b=1$ and $a=b=c=1,$
there are other cases for which the rotation number function is
constant. Notice that in these situations the maps $F_{b,a}$ or
$F_{c,b,a}$ would be globally periodic in ${Q^+}.$ This information,
together with the values at $h_c$, would also be useful to determine
the regions where the corresponding functions could be increasing or
decreasing.
Finally, notice that this rotation number at infinity is not
continuous when we approach $a=0$ or $b=0$, where the recurrence
and the first integral are also well defined on ${Q^+}.$ For instance
$\rho_{0,0}(h)\equiv 2/3$, and the numerical experiments of the next
subsection seem to indicate that for $a>0$ or $b>0,$
\[
\lim_{h\to\infty}\rho_{0,a}(h)=\lim_{h\to\infty}\rho_{b,0}(h)=5/8.
\]
Before proving Theorem~\ref{teonomonotonia} we introduce, with an
example, the algorithm that we will use throughout this section to
compute lower and upper bounds for the rotation number. We have
implemented it in an algebraic manipulator. Notice also that when we
apply it taking rational values of $a$ and $b$ and rational initial
conditions, it can be used as a method to achieve proofs, see the
next example or the proof of Theorem~\ref{teonomonotonia}.
Fix $a=3,$ $b=2$ and $(x_0,y_0)=(1,1).$ Then $h=V_{2,3}(1,1)=34.$
Compute for instance the 27 points of the orbit starting at $(1,1),$
\[
(x_1,y_1)=(4,6),\quad (x_2,y_2)=\left(\frac9 4,
\frac{17}{24}\right),\quad (x_3,y_3)=\left(\frac{89}{54},
\frac{788}{153}\right),\ldots
\]
and consider them as points on the oval $\{V_{2,3}(x,y)=34\},$ see
Figure~\ref{fig1}.
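Since all the data are rational, these iterates can be generated with exact arithmetic. A minimal sketch (ours, not the authors' code; $V_{2,3}$ is read off the curve $C_2$):

```python
from fractions import Fraction as Fr

a, b = Fr(3), Fr(2)                  # F_{2,3}: b = 2, a = 3

def F(x, y):                         # F_{b,a}(x, y) = ((a+y)/x, (bx+a+y)/(xy))
    return (a + y) / x, (b*x + a + y) / (x*y)

def V(x, y):                         # first integral V_{b,a}
    num = (a*x*x*y + b*x*y*y + b*x*x + a*y*y
           + (b*b + a)*x + (b + a*a)*y + a*b)
    return num / (x*y)

orbit = [(Fr(1), Fr(1))]
for _ in range(26):                  # the 27 points of Figure 1
    orbit.append(F(*orbit[-1]))

print(orbit[1], orbit[2], orbit[3])
# (4, 6), (9/4, 17/24), (89/54, 788/153); every point satisfies V = 34
```

Exact invariance of $V_{2,3}$ along the orbit is a useful consistency check on the computation.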
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.35]{fig1}
\end{center}
\caption{Oval of $\{V_{2,3}(x,y)=34\}$ with 27 iterates of
$F_{2,3}$. The label $0$ indicates the initial condition $(1,1)$,
and the label $k, k=1,\ldots,26,$ corresponds with the $k$-th point
of the orbit.}\label{fig1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=5 cm,width=12 cm]{fig2}
\end{center}
\caption{Lower and upper bounds for $\rho_{2,3}(34)$ obtained after
following some points of the orbit starting at
$(1,1)$.}\label{fig2}
\end{figure}
We already know that the restriction of $F_{2,3}$ to the given oval
is conjugated to a rotation, with rotation number
$\rho:=\rho_{2,3}(34)$ that we want to estimate. This can be done by
counting the number of turns made by the points $(x_j,y_j)$, after
fixing an orientation of the closed curve. We orient the curve
counterclockwise. So, for instance, we know that the
second point $(x_2,y_2)$ has made more than one turn and less than
two, giving $1<2\rho<2,$ and hence $\rho\in(1/2,1).$ Doing
the same reasoning with all the points computed we obtain,
\begin{align*}
&4<7\rho<5&\Rightarrow&&& \rho\in\left(\frac47,\frac57\right),\\
&8<14\rho<9&\Rightarrow&&& \rho\in\left(\frac8{14},\frac9{14}\right),\\
&10<19\rho<11&\Rightarrow&&& \rho\in\left(\frac{10}{19},\frac{11}{19}\right),\\
&14<26\rho<15&\Rightarrow&&&
\rho\in\left(\frac{14}{26},\frac{15}{26}\right),
\end{align*}
where we have only written the most relevant information obtained,
which is given by the points of the orbit closest to the initial
condition. So, we have shown that
\[
0.5714\simeq\frac{4}{7}<\rho_{2,3}(34)<\frac{15}{26}\simeq 0.5769.
\]
In Figure~\ref{fig2} we represent several successive lower and upper
approximations obtained while the orbit is turning around the oval.
We plot around six hundred steps, after skipping the first fifty
ones. By taking 1000 points we get
\[
0.5761246\simeq\frac{333}{578}<\rho_{2,3}(34)<\frac{473}{821}\simeq
0.5761267,
\]
and after 3000 points,
\[
0.57612457\simeq\frac{333}{578}<\rho_{2,3}(34)<\frac{1472}{2555}\simeq
0.57612524.
\]
In fact, when we say that
$\rho_{2,3}(34)\in(\rho_{\mathrm{low}},\rho_{\mathrm{upp}})$, the
value $\rho_{\mathrm{low}}$ is the largest lower bound obtained by
following all the considered points of the orbit, and
$\rho_{\mathrm{upp}}$ is the smallest upper bound. Notice that taking
1000 or 3000 points we have obtained the same lower bound for
$\rho_{2,3}(34).$
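The turn-counting procedure just described is easy to automate. The sketch below (our illustration, not the authors' implementation) iterates $F_{2,3}$ exactly, accumulates the winding of the iterates around the fixed point ${\bf p}$ (assuming, as Figure~\ref{fig1} suggests, that the oval is star-shaped with respect to ${\bf p}$), and keeps the best rational bounds obtained from the first 27 points.

```python
from fractions import Fraction as Fr
from math import atan2, floor, pi, sqrt

a, b = 3, 2                               # we bound rho_{2,3}(34)

def F(x, y):                              # F_{b,a}, exact on rationals
    return (a + y) / x, (b*x + a + y) / (x*y)

# Fixed point p = (ybar^2 - b, ybar), where P(ybar) = 0 and ybar > sqrt(b).
P = lambda y: y**4 - 2*b*y**2 - y + b**2 - a
lo, hi = sqrt(b), 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
py = (lo + hi) / 2
px = py*py - b

z = (Fr(1), Fr(1))                        # initial condition, level h = 34
theta = atan2(1 - py, 1 - px)
winding = 0.0
low, upp = Fr(0), Fr(1)
for j in range(1, 27):                    # the 27 points of the orbit
    z = F(*z)
    t = atan2(float(z[1]) - py, float(z[0]) - px)
    winding += (t - theta) % (2*pi)       # counterclockwise advance, < 2*pi
    theta = t
    k = floor(winding / (2*pi))           # completed turns: k < j*rho < k+1
    low = max(low, Fr(k, j))
    upp = min(upp, Fr(k + 1, j))

print(low, upp)                           # 4/7 < rho_{2,3}(34) < 15/26
```

Only the turn counts are computed in floating point; the iterates themselves and the resulting bounds stay rational, in the spirit of the exact-arithmetic approach of the paper.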
Let us prove Theorem~\ref{teonomonotonia} by using the above
approach.
\bigskip
\noindent{\it Proof of Theorem \ref{teonomonotonia}.} Consider
$a=1/2,$ $b=3/2$ and the three points
\[
\mathbf{p}^1=\left(\displaystyle{\frac{149}{100}},\displaystyle{\frac{173}{100}}
\right),\quad\mathbf{p}^2=\left(\displaystyle{\frac{3}{40}},\displaystyle{\frac{173}{100}}
\right),\quad\mathbf{p}^3=\left(\displaystyle{\frac{1}{1000}},\displaystyle{\frac{173}{100}}
\right).
\]
Notice that
\begin{align*}
&h_1:=V_{3/2,1/2}(\mathbf{p}^1)=\frac{10655559}{1288850}\simeq8.27,\\
&h_2:=V_{3/2,1/2}(\mathbf{p}^2)=\frac{9328327}{207600}\simeq44.93,\\
&h_3:=V_{3/2,1/2}(\mathbf{p}^3)=\frac{1056238343}{346000}\simeq3052.71.
\end{align*}
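These three rational values can be verified with exact arithmetic, reading $V_{3/2,1/2}$ off the curve $C_2$; a short sketch (ours):

```python
from fractions import Fraction as Fr

a, b = Fr(1, 2), Fr(3, 2)        # V_{3/2,1/2}: b = 3/2, a = 1/2

def V(x, y):                     # V_{b,a}, read off the curve C_2
    num = (a*x*x*y + b*x*y*y + b*x*x + a*y*y
           + (b*b + a)*x + (b + a*a)*y + a*b)
    return num / (x*y)

p1 = (Fr(149, 100), Fr(173, 100))
p2 = (Fr(3, 40),    Fr(173, 100))
p3 = (Fr(1, 1000),  Fr(173, 100))

h1, h2, h3 = V(*p1), V(*p2), V(*p3)
print(h1, h2, h3)
# 10655559/1288850, 9328327/207600, 1056238343/346000
```

Since the computation is exact, the equality with the fractions displayed above is a genuine proof, not an approximation.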
Hence $h_c<h_1<h_2<h_3.$ By applying the algorithm described above,
using 100 points of each orbit starting at each ${\bf p}^j,
j=1,2,3,$ we obtain that
\[
\rho_{3/2,1/2}(h_1),\rho_{3/2,1/2}(h_3)\in\left(\frac
35,\frac{59}{98}\right)\quad\mbox{and}\quad
\rho_{3/2,1/2}(h_2)\in\left(\frac{56}{93},\frac{53}{88}\right).
\]
Since $59/98<56/93$ we have proved that the function
$\rho_{3/2,1/2}(h)$ has at least one local maximum in $(h_1,h_3).$
From the continuity of the rotation number function with respect
to $a$, $b$ and $h$, this result also holds for all values
of $a$ and $b$ in a neighborhood of $a=1/2$, $b=3/2.$ \qed
\bigskip
We believe that with the same method it can be proved that a result
similar to the one given in Theorem~\ref{teonomonotonia} holds for
some maps $F_{c,b,a},$ but we have decided not to perform this
study.
\subsection{Some numerical explorations for $k=2.$}\label{seccionumeric}
We start by studying in more detail the rotation number function
$\rho_{3/2,1/2}(h)$ that we have considered to prove
Theorem~\ref{teonomonotonia}. In this case the fixed point is ${\bf
p}\simeq(1.493363282,1.730133891)$ and
$h_c=V_{b,a}({\mathbf{p}})=8.267483381.$
Moreover $\rho_{b,a}({h}_c)\simeq0.6006847931$. By applying our
algorithm for approximating the rotation number, with $5000$ points
on each orbit, we obtain the results presented in Table~1. In
Figure~\ref{fig3} we also plot the upper and lower bounds of
$\rho_{3/2,1/2}(h)$ that we have obtained by using a wide range of
values of $h.$
\begin{center}
\vglue 0.2cm
\begin{tabular}{|r|r|r|r|}
\hline Init. cond. $(x,\bar y)$ & Energy level $h$ &
$\rho_{\mathrm{low}}(h)\qquad$ & $\rho_{\mathrm{upp}}(h)
\qquad$\\
\hline
$\bar x$&$h_c\simeq 8.2675$& $\simeq 0.6006848$ & $\simeq 0.6006848$ \\
1.3&$8.3068$& $\frac{173}{288}\simeq 0.6006944$ & $\frac{2938}{4891}\simeq 0.6006951$ \\
0.75&$9.2747$& $\frac{1435}{2388}\simeq 0.6009213$ & $\frac{2087}{3473}\simeq 0.6009214$ \\
0.3&$14.7566$& $\frac{1548}{2573}\simeq 0.6016323$ & $\frac{2285}{3798}\simeq 0.6016324$ \\
0.075&$44.9347$& $\frac{657}{1091}\simeq 0.6021998$ & $\frac{2354}{3909}\simeq 0.6022001$ \\
0.001&$3052.75$& $\frac{2927}{4867}\simeq 0.6013972$ & $\frac{86}{143}\simeq 0.6013986$ \\
$5\cdot 10^{-6}$&$609716.07$& $\frac{1832}{3049}\simeq 0.6008527$ & $\frac{1409}{2345}\simeq 0.6008529$ \\
$5\cdot 10^{-256}$&$6.097\cdot 10^{255}$& $\frac{3}{5}= 0.6$ & $\frac{2999}{4998}\simeq 0.6000400$ \\
\hline
\end{tabular}
\vglue 1 cm
Table 1: Lower and upper bounds of the rotation number
$\rho_{3/2,1/2}(h)$, for some orbits of $F_{3/2,1/2}$ starting at
$(x,\bar y),$ where ${\bf p}=(\bar x,\bar y).$
\end{center}
\begin{figure}[h]
\begin{center}
\includegraphics[height=7 cm,width=14 cm]{fig3}
\end{center}
\caption{Lower and upper bounds for $\rho_{3/2,1/2}(h)$. On the
horizontal axis we represent $-\log_{10}(h)$ and, on the vertical
axis, the value of the rotation number. Notice that for values of
$-\log_{10}(h)$ smaller than 70 both values are indistinguishable in
the figure.}\label{fig3}
\end{figure}
For other values of $a$ and $b$ we obtain different behaviors. All
the experiments are performed by starting at the fixed point ${\bf
p}=(\bar x,\bar y)$ and increasing the energy level by taking
initial conditions of the form $(x,\bar y)$, with $x$ decreasing
to $0$. In this way we take orbits approaching the boundary
of ${Q^+}$, that is, lying on level sets of $V_{b,a}$ with increasing
energy. The step in the decrease of $x$ (and therefore in the
increase of $h$) is not uniform, and it has been manually tuned,
making it smaller in those regions where a possible non-monotonous
behavior could appear.
Consider the set of parameters $\Gamma=\{(a,b)\in [0,\infty)^2\}$,
where notice that we also consider the boundaries $a=0$ and $b=0,$
where the map $F_{b,a}$ is well defined. We already know that
the rotation number function behaves identically at $(a,b)$ and $(b,a).$
Moreover we know its behavior on the diagonal $(a,a)$ perfectly
(when $a<1$ it is monotonous decreasing and when $a>1$ it is
monotonous increasing), and that $\rho_{1,1}(h)\equiv 3/5$ and
$\rho_{0,0}(h)\equiv 2/3.$ Hence a good strategy for a numerical
exploration is to produce sequences of experiments using our
algorithm by fixing some $a\ge0$ and varying $b$. For instance we
obtain:
\begin{itemize}
\item Case $a=1/2.$ For all the values of $b>0$ considered, the rotation number function
seems to tend to $3/5$ as $h$ goes to infinity. Moreover it seems
\begin{itemize}
\item monotone decreasing for $b\in\{1/4,1\};$
\item to have a unique maximum when $b\in\{7/5,3/2\}$;
\item monotone increasing for $b\in\{2,3\}.$
\end{itemize}
\item Case $a=0.$ For all the values of $b>0$ considered, the rotation
number function seems to tend to $5/8$ as $h$ goes to infinity.
Moreover it seems
\begin{itemize}
\item monotone decreasing for $b\in\{1/10,3/10,1/2\}$;
\item to have a unique maximum when $b\in\{7/10,3/4\}$;
\item monotone increasing for $b\in\{1,5\}$.
\end{itemize}
\end{itemize}
The above results, together with some further experiments for other
values of $a$ and $b$ not detailed in this paper, indicate the
existence of a subset of positive measure in $\Gamma$ on which the
corresponding rotation number functions seem to present a unique
maximum. This subset probably separates two other subsets
of~$\Gamma$: one where $\rho_{b,a}(h)$ is monotonically decreasing
to $3/5$, and another one where $\rho_{b,a}(h)$ increases
monotonically to the same value. The ``oscillatory subset'' seems to
shrink to $(a,b)=(1,1)$ as it approaches the line $a=b$, and
seems to end in one interval on each of the borders $\{a=0\}$ and
$\{b=0\}$. Further analysis must be done in this direction in order
to obtain a more accurate picture of the bifurcation diagram
associated with the behavior of $\rho_{b,a}$ on $\Gamma$.
| {
"timestamp": "2009-12-26T18:31:35",
"yymm": "0912",
"arxiv_id": "0912.5031",
"language": "en",
"url": "https://arxiv.org/abs/0912.5031",
"abstract": "We describe the sequences {x_n}_n given by the non-autonomous second order Lyness difference equations x_{n+2}=(a_n+x_{n+1})/x_n, where {a_n}_n is either a 2-periodic or a 3-periodic sequence of positive values and the initial conditions x_1,x_2 are as well positive. We also show an interesting phenomenon of the discrete dynamical systems associated to some of these difference equations: the existence of one oscillation of their associated rotation number functions. This behavior does not appear for the autonomous Lyness difference equations.",
"subjects": "Dynamical Systems (math.DS)",
"title": "On two and three periodic Lyness difference equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513930404762,
"lm_q2_score": 0.8311430499496095,
"lm_q1q2_score": 0.8196328765237175
} |
https://arxiv.org/abs/1107.0326 | The Monty Hall Problem in the Game Theory Class | The basic Monty Hall problem is explored to introduce into the fundamental concepts of the game theory and to give a complete Bayesian and a (noncooperative) game-theoretic analysis of the situation. Simple combinatorial arguments are used to exclude the holding action and to find minimax solutions. | \section{Introduction}
\begin{itemize}
\item[]
{\it Suppose you're on a game show, and you're given the choice of three doors:
Behind one door is a car; behind the others, goats.
You pick a door, say No. 1, and the host, who knows what's behind the doors,
opens another door, say No. 3, which has a goat. He then says to you,
``Do you want to pick door No. 2?" Is it to your advantage to switch your choice?}
\end{itemize}
With these famous words the {\it Parade Magazine} columnist vos Savant opened an exciting chapter in mathematical didactics.
The puzzle, which has become known as the {\it Monty Hall Problem} (MHP), has broken all records of popularity
among the probability paradoxes.
The book by Rosenhouse \cite{Rosenhouse} and the Wikipedia entry on the MHP present the history and variations of the problem.
In the basic version of the MHP the rules of the game specify that the host must always reveal one of the
unchosen doors to show that there is no prize there. Two remaining unrevealed doors
may hide the prize, creating the illusion of symmetry and suggesting that the action does not matter.
However, the symmetry is fallacious, and
switching is a better action, doubling the probability of winning.
There are two main explanations of the paradox. One of them, simplistic, amounts to just counting
the mutually exclusive cases: either you win by switching or by holding to the first choice.
A more sophisticated argument, included in textbooks as an exercise on Bayes' theorem, calculates the conditional
probability of winning in the situation described in vos Savant's wording of the problem.
In \cite{Gill} a critical analysis of the conventional approaches to the MHP was carried out, advocating the viewpoint
that the whole decision-making situation, viewed as a multistep process,
is a challenging instance of mathematical modelling, very much amenable to analysis within the game-theoretic framework.
The textbook by Haggstr\"om \cite{Haggstrom} puts the zero-sum game in matrix form and presents a minimax solution.
Further steps in this direction were taken in \cite{Doors} and \cite{Dominance}, where it was argued that
the game-theoretic concept of dominance allows one to analyse the problem under fairly general assumptions on the prior
information of the decision-maker, including the very interesting case of nonuniform distribution, only
occasionally included in the exercise sections
of probability textbooks \cite{Grinstead, Tijms}.
One elementary new observation we make is that for every strategy the contestant may follow
after the initial door has been chosen, there is always (at least)
one `unlucky' door, the same for every admissible algorithm by which the host reveals the doors.
The prize is never won when it is hidden behind the `unlucky' door, hence
the contestant loses in at least one case out of three.
This leads, rather straightforwardly,
to the worst-case winning probability $2/3$.
We prefer, however, for the sake of instruction and in anticipation of future
generalizations, to give here a full Bayesian analysis, with the host biased in any possible way.
These notes, expanding upon the cited literature, are intended to show that the MHP is indeed
an excellent occasion to expose undergraduate students to the basic ideas of game theory and
to decision-making under various uncertainty scenarios.
Due to its remarkable symmetry features, the zero-sum version of the game will undoubtedly enter the hall of fame of classical games
like {\it matching pennies} and {\it paper-scissors-stone}.
\section{The MHP as a sequential decision process}
The game involves two actors, whom we christen Monte and Conie.
In the basic scenario to follow, the prize is hidden behind one of three doors by Monte,
then
Conie picks a door which is kept unrevealed,
then one of the unchosen doors is revealed as not containing the prize, and then an offer is made to switch the choice
from the initial guess to another unrevealed door. Conie wants the prize
and she wins it if her final choice falls on the door where the prize was hidden.
To state the rules more rigorously and to introduce some notation for the admissible actions we number the doors 1,2,3.
The game consists of four moves:
\begin{itemize}
\item[(i)] Monte
chooses a door number $\theta$ from doors 1,2,3 to hide the prize. He keeps $\theta$ in secret.
\item[(ii)] Conie picks door number $x$ out of 1,2,3. Monte observes Conie's choice.
\item[(iii)] Monte reveals a door distinct from $x$ as not hiding the prize,
and offers to Conie the possibility of choosing between door $x$ and another unrevealed door $y$.
\item[(iv)] Conie finally chooses door $z$ from doors $x$ and $y$. Conie wins the prize if $z=\theta$, otherwise she wins nothing.
\end{itemize}
Conie's ignorance about the location of the prize means in the mathematical language that her actions cannot depend on $\theta$ explicitly.
There is also another, more subtle, indefinite factor of which Conie is ignorant:
the way Monte chooses which of the two doors to reveal
in the case $x=\theta$. These two indefinite factors, which are under the control of her opponent, comprise
Conie's decision-making environment.
We say that there is a match if $\theta=x$, in which case $y$ in step (iii)
can be either of the two numbers distinct from $\theta$; whereas if there is a mismatch,
$\theta\neq x$, the rules force $y=\theta$. In step (iv), we say that Conie takes the action `hold'
(sticking with the initial choice), denoted $\tm$, if $z=x$;
and that she takes the action `switch' (from the initial choice), denoted $\ts$, if $z=y$.
\par To illustrate, a sequence of admissible moves $\theta,x,y,z$ could be 2212, which means that
Monte hides the prize behind door 2, Conie picks door 2 (so there is a match),
then Monte offers to switch to door 1 (by revealing door 3 as not containing the prize), and finally
Conie plays $\tm$ by sticking with her initial choice 2. Since $z=\theta$, Conie wins the prize.
Positions in the game represent all substantial information available for the move to follow.
These are represented by vertices in the game tree in Figure \ref{TheBigGameTree}.
An edge connects a position to another position achievable in one move.
The play starts at the root vertex with Monte's move leading to a position $\theta\in\{1,2,3\}$, then the moves of actors alternate.
A path in the tree from the root to a terminal vertex is determined by the actions of {\it both} actors.
Each path directed away from the root ends in a leaf node, with Conie's winning positions $(\theta,x,y,z)$ being those
with $z=\theta$.
There is one important feature of the game which we indicate by coloring positions in the figure.
Conie does not know the
winning door $\theta$. Conie's information on her second move can be specified by partitioning the collection of relevant positions
into {\it information sets}. For Conie's first move $x$ there is just one information set $\{1,2,3\}$.
On her second move,
Conie cannot distinguish, e.g., between positions 121 and 221, since for $x=2$ and $y=1$
(when the revealed door is 3) the prize can be behind either of the doors 1 and 2; thus $\{121, 221\}$ is one information set,
which we denote $*21$, with $*$ standing for the unknown admissible value of $\theta$ (1 or 2 in this case).
The complete list of information sets for
Conie's second move is $*12, *13, *21, *23, *31, *32$.
Possible moves are labelled by actions $\{\tm,\ts\}$, and the action may depend only on the information set. Thus if the action is $\tm$ from
position $112$, then it must be $\tm$ from position $212$.
The game tree with partition of positions into information sets is sometimes called the {\it Kuhn tree}.
Monte always knows the current position in the game exactly,
so his information sets are singletons.
\begin{figure}
$
{
\resizebox*{10cm}{18cm}
{
\newcommand{\XX}[1]{\Tr{\psframebox{\rule{0pt}{9pt}#1}}}
\psset{angleB=90,angleA=90}
\pstree
[treemode=R]
{\XX{start}}
{
\pstree{{\XX{1~~~~}}}
{
\pstree{{\XX{11~~~}}}{{\pstree{{\XX{\color{red}112~~}}}{{\XX{1121}~{\large\Aries}}\taput{\m}{\XX{1122}}}}\tbput{\s}
\pstree{{\XX{\color{orange}113~~}}} {{\XX{1131}~{\large\Aries}}\taput{\m}{\XX{1133}}}}\tbput{\s}
\pstree{{\XX{12~~~}}}{\pstree{{\XX{\color{yellow}121~~}}}{{\XX{1211}~{\large\Aries}}\taput{\s} {\XX{1212}}}}\tbput{\m}
\pstree{{\XX{13~~~}}}{\pstree{{\XX{\color{green}131~~}}}{{\XX{1311}~{\large\Aries}}\taput{\s}{\XX{1313}}}}\tbput{\m}
}
\pstree{{\XX{2~~~~}}}
{
\pstree{{\XX{21~~~}}}{\pstree{{\XX{\color{red}212~~}}}{{\XX{2121}}\taput{\m}{\XX{2122}~{\large\Aries}}}}\tbput{\s}
\pstree{{\XX{22~~~}}}
{\pstree{{\XX{\color{yellow}221~~}}}{{\XX{2211}}\taput{\s} {\XX{2212}~{\large\Aries}}}\tbput{\m} \pstree{{\XX{\color{blue}223~~}}}
{{\XX{2232}~{\large\Aries}}\taput{\m} {\XX{2233}}}}\tbput{\s}
\pstree{{\XX{23~~~}}}{\pstree{{\XX{\color{violet}232~~}}}{{\XX{2322}~{\large\Aries}}\taput{\s} {\XX{2323}}}}\tbput{\m}
}
\pstree{{\XX{3~~~~ }}}
{
\pstree{{\XX{31~~~}}}{\pstree{{\XX{\color{orange}313~~}}}{{\XX{3131}}\taput{\m} {\XX{3133}~{\large\Aries}}}} \tbput{\s}
\pstree{{\XX{32~~~}}}{\pstree{{\XX{\color{blue}323~~}}}{{\XX{3232}}\taput{\m} {\XX{3233}~{\large\Aries}}}} \tbput{\s}
\pstree{{\XX{33~~~}}}{\pstree{{\XX{\color{green}331~~}}}{{\XX{3311}}\taput{\s} {\XX{3313}~{\large\Aries}}}\tbput{\m}
\pstree{{\XX{\color{violet}332~~}}}{{\XX{3322}}\taput{\s}{\XX{3323}~{\large\Aries}}}}\tbput{\m}
}
}
}
}
$
\caption{The game tree with succession of moves $\theta,x,y,z$, and Conie's winning terminal positions marked \Aries.}
\label{TheBigGameTree}
\end{figure}
\section{Strategies and the payoff matrix}
A {\it strategy} of Monte is a rule which for each position, where Monte is on the move, specifies a follower exactly.
For his first move from the starting position a strategy specifies a value of $\theta$.
For his second move a strategy specifies the value of $y$ for each position $(\theta,x)$,
so we can consider $y$ as a function $y= d_\theta(x)$, where
\begin{eqnarray*}
d_\theta(x)=\theta ~~~{\rm if ~~~} x\neq\theta,\\
d_\theta(x)\neq \theta ~~~{\rm if~~~} \theta=x.
\end{eqnarray*}
Simply put, Monte's strategy can be encoded in a pair of door numbers like $12$, where $\theta=1$ is the door hiding the prize, and
$d_1(1)=2$ is the door to which the switch is offered in the case of match. This indeed determines the function $d_1(\cdot)$ uniquely because
$d_1(2)=d_1(3)=1$. With this notation,
the complete list of Monte's strategies is
$${\mathcal M}= \{12,~ 13,~ 21,~ 22,~ 31,~ 32\}.$$
A {\it strategy} of Conie is a rule which for each position where Conie is on the move
specifies a follower, in a way consistent with partition in information sets.
Thus, her strategy must specify the value of $x$. Furthermore, for each initial choice $x$ and the door offered for switch $y\neq x$
her strategy must specify an action from the set $\{\tm,\ts\}$, which
is a function $a_x(\cdot)$, so that the second Conie's move is
\begin{eqnarray*}
z=x ~~~{\rm if ~~~} a_x(y)=\tm,\\
z=y ~~~{\rm if ~~~} a_x(y)=\ts.
\end{eqnarray*}
When $x$ is fixed $a_x(y)$ must be specified for two possible values of $y$. We can write therefore Conie's strategy as a triple
like $2\ts\tm$ which specifies $x=2$ and $a_2(1)=\ts, a_2(3)=\tm$.
The second entry $\ts$ of $2\ts\tm$ encodes the action for smaller door number $y$, and the third entry $\tm$ for larger.
With similar conventions, the complete list of twelve strategies of Conie is \\
$${\mathcal C}= \{1\ts\ts,~ 1\ts\tm, ~1\tm\ts, ~1\tm\tm, ~2\ts\ts, ~2\ts\tm, ~2\tm\ts, ~2\tm\tm, ~3\ts\ts, ~3\ts\tm, ~3\tm\ts, ~3\tm\tm\}.$$
\noindent
Every strategy of the kind $x\tm\tm$ or $x\ts\ts$ will be called a {\it constant-action} strategy,
and every strategy of the kind $x\ts\ts$ will be called an {\it always-switching} strategy.
Strategies $x\ts\tm$ and $x\tm\ts$ are {\it context-dependent} strategies.
A strategy profile of the actors is a pair consisting of a strategy of Monte and a strategy of Conie. When a strategy profile is fixed,
the course of the game, represented by a path in the game tree, is fully unambiguous.
For instance, when the profile $(12,2\ts\tm)$ is played by the actors, the sequence $\theta,x,y,z$ is
1211, because Monte offers a switch to door 1, and since 1 is the smaller of the two possible values $y\in\{1,3\}$ Conie reacts with $\ts$,
hence
winning the prize. The second entry 2 of Monte's strategy 12 is immaterial for this outcome since there is a mismatch $\theta\neq x$.
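Tracing a profile by hand, as above, is easy to mechanize. The function below is our own illustrative encoding (not from the paper): Monte's pure strategy is the pair $(\theta, d)$ of the prize door and the door offered on a match, and Conie's is her pick $x$ together with her action for each door that may be offered.

```python
def play(monte, conie):
    """Play out one pure-strategy profile; return 1 if Conie wins the prize.
    monte = (theta, d): the prize door and the door offered on a match.
    conie = (x, a): the first pick and the action a[y] ('s' = switch,
    'm' = hold) for each door y != x that may be offered."""
    theta, d = monte
    x, a = conie
    y = d if x == theta else theta   # on a mismatch the rules force y = theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

# the profile (12, 2sm) of the text: Monte plays 12, Conie plays 2\ts\tm
result = play((1, 2), (2, {1: 's', 3: 'm'}))   # -> 1, the winning path 1211
```

The same function reproduces the earlier example 2212, where holding wins on a match.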
We adopt the convention that Conie receives payoff 1 when she wins the prize and 0 otherwise.
The matrix $\CC$ in Figure \ref{ConPayoff}
represents the correspondence between the strategy profiles and Conie's payoffs.
\begin{figure}
\begin{center}
\begin{tabular}{c|cccccc}
$\theta,y=$ & 12 & 13 & 21 & 23 & 31 & 32\\
\hline
1\swsw &0 & 0 & 1 & 1 & 1 &1\\
1\masw &1 & 0 & 0 & 0 & 1 &1\\
1\swma &0 & 1 & 1 & 1 & 0 &0\\
1\mama &1 & 1 & 0 & 0 & 0 &0\\
& & & & & & \\
2\swsw &1 & 1 & 0 & 0 & 1 &1\\
2\masw &0 & 0 & 1 & 0 & 1 &1\\
2\swma &1 & 1 & 0 & 1 & 0 &0\\
2\mama &0 & 0 & 1 & 1 & 0 &0\\
& & & & & & \\
3\swsw &1 & 1 & 1 & 1 & 0 &0\\
3\masw &0 & 0 & 1 & 1 & 1 &0\\
3\swma &1 & 1 & 0 & 0 & 0 &1\\
3\mama &0 & 0 & 0 & 0 & 1 &1\\
\end{tabular}
\end{center}
\caption{Conie's payoff matrix $\CC$}
\label{ConPayoff}
\end{figure}
Unlike the game tree, the simplified matrix representation ignores many substantial
attributes like moves, positions and information sets. The game in this matrix form amounts to a very simple procedure:
Monte picks a column and Conie picks a row of matrix $\CC$. Conie's payoff is then the entry of $\CC$ lying
at the intersection of the selected row and column.
An advantage of the matrix representation is that it unifies and simplifies
the analysis. In particular, we can compare different strategies of Conie under different assumptions on the decision-making
environment, that is, on Monte's behavior. For instance, we can think of two Conies simultaneously
playing, say, 1\masw~ and 3\swsw~ (respectively) against the same strategy of Monte.
Note that `simultaneous' in this context refers to a logical comparison of the
outcomes, rather than to a real play.
A quick inspection of the matrix $\CC$ shows that the problem has a property called {\it weak dominance}:
\begin{itemize}
\item[]
{For every strategy $A$ of Conie which in some situation employs the action $\tm$ (so $A$ is of the kind $x\tm\tm$, $x\tm\ts$ or $x\ts\tm$) there exists
an always-switching strategy $B$ such that if $A$ wins against $S$,
then $B$ wins against $S$ as well, whatever Monte's strategy $S$}.
\end{itemize}
For instance, strategy 1\tm\ts~ which does not switch to door $y=2$, is dominated by always-switching strategy 2\ts\ts.
Amazingly, $1\tm\ts$ is beaten in the situation $\theta=1, y=3$ where the strategy uses the {\it switch} action!
The never-switching strategy 1\tm\tm~ is dominated by both 2\ts\ts~and~ 3\ts\ts.
Dominance is thus a very strong argument for excluding the strategies which may employ the action $\tm$.
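These dominance claims can be verified exhaustively. The sketch below is our own encoding: it rebuilds Conie's payoff rows from the rules of the game, labelled as in the text (`s' = switch, `m' = hold, second symbol for the smaller offered door), and checks that every strategy which ever holds is weakly dominated by an always-switching one.

```python
import itertools

DOORS = (1, 2, 3)

def play(monte, conie):
    theta, d = monte                      # prize door; door offered on a match
    x, a = conie                          # first pick; action per offered door
    y = d if x == theta else theta
    z = x if a[y] == 'm' else y           # 'm' = hold, 's' = switch
    return int(z == theta)

montes = [(t, d) for t in DOORS for d in DOORS if d != t]   # 12,13,21,23,31,32
rows = {}
for x in DOORS:
    lo, hi = sorted(y for y in DOORS if y != x)
    for al, ah in itertools.product('sm', repeat=2):
        rows[f"{x}{al}{ah}"] = [play(m, (x, {lo: al, hi: ah})) for m in montes]

assert rows["1ss"] == [0, 0, 1, 1, 1, 1]            # first row of the figure
# every strategy that ever uses 'm' is weakly dominated by some always-switching row
for label, row in rows.items():
    if 'm' in label:
        assert any(all(b >= v for b, v in zip(rows[f"{u}ss"], row))
                   for u in DOORS)
```

Running this confirms, in particular, that 1ms ($1\tm\ts$) is dominated by 2ss and that 1mm is dominated by both 2ss and 3ss.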
The dominance feature has a nice interpretation in terms of the `unlucky' door.
Suppose for the time being that the prize-hiding is a move of nature (or of a quiz-team), out of Monte's control.
All Monte can do is to choose which door to reveal when he has a choice.
\begin{itemize}
\item[]
{The `unlucky' door theorem says: for every strategy $A$ of Conie there exists a door $u$ (depending on $A$) such that,
under $A$, the final choice $z$ never falls on $u$ when $\theta=u$, whichever way Monte reveals doors when he has a choice.
Strategy $u\ts\ts$~then weakly dominates $A$.}
\end{itemize}
Door $u=u(A)$ is marked in $\CC$ by two zeroes in positions $u*$ of row $A$; for instance $u=2$ for $A=1\tm\ts$, and $u=2$ or $3$ for
$A=1\tm\tm$.
So the existence of the `unlucky' door means that Conie never wins for $\theta=u$, no matter how Monte reveals the doors.
Therefore,
\begin{itemize}
\item[]
{If $\theta$ is chosen uniformly at random Conie's winning probability cannot exceed $2/3$.}
\end{itemize}
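The $2/3$ cap can be checked by brute force over all of Conie's pure strategies and all eight revealing rules of the host (our own encoding; the prize door is averaged uniformly):

```python
import itertools

DOORS = (1, 2, 3)

def play(theta, d, x, a):
    """One play: d maps each prize door to the door offered on a match."""
    y = d[theta] if x == theta else theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

conies = [(x, dict(zip(sorted(set(DOORS) - {x}), acts)))
          for x in DOORS for acts in itertools.product('sm', repeat=2)]
# a revealing rule chooses, for every theta, which door to offer on a match
rules = [dict(zip(DOORS, offer))
         for offer in itertools.product(DOORS, repeat=3)
         if all(offer[t - 1] != t for t in DOORS)]
assert len(rules) == 8

wins = [sum(play(t, d, x, a) for t in DOORS) / 3
        for (x, a) in conies for d in rules]
assert max(wins) == 2 / 3    # the `unlucky' door caps Conie at 2/3
```

Always-switching strategies attain the cap for every revealing rule, in line with the theorem above.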
\section{The Bayesian games}\label{Bayes}
We described above the sets of {\it pure} strategies $\mathcal M$ and $\mathcal C$; selecting a particular
profile from them determines a succession of moves unambiguously.
A {\it mixed} strategy of an actor is an assignment of probability to each pure strategy
of the actor.
Playing mixed strategies means using chance devices to choose from $\mathcal M$ and $\mathcal C$.
A mixed strategy of Monte can be written as a row vector $\QQ$ with six components corresponding to his pure strategies.
A mixed strategy of Conie can be written as a row vector $\PP$ of length twelve.
Conie's generic mixed strategy has 11 parameters, as the probabilities add to one. A general theorem due to Kuhn says that if the actors
do not forget their private history
(perfect recall), as in our game, it is enough to consider the smaller class of {\it behavioral} strategies.
A behavioral strategy specifies a probability distribution on the set of admissible moves for every information set of an actor.
Thus a behavioral strategy of Conie is described by 8 parameters: 2 numbers for the distribution of $x$ and
6 biases for the coins tossed in the six information sets.
This subtle distinction between general and behavioral strategies is only mentioned here for the sake of instruction,
but the distinction becomes important in games where an actor may forget some elements of her private history.
The {\it support} of a mixed strategy is the set of pure strategies with nonzero probability.
For instance, the support of $(\dots,0,1,0,\dots)$ is a single pure strategy,
thus we simply identify pure strategies with
mixed strategies of this kind. This identification allows us to view a mixed strategy
as a convex combination (also called a mixture) of the pure strategies constituting the support.
A mixed strategy $\PP$ (respectively $\QQ$) is called {\it fully supported} if
the support is all of $\cal C$ (respectively $\cal M$).
When strategy profile $(\PP,\QQ)$ is played by the actors, the expected payoff for Conie, equal to her probability of winning the prize,
is computed by the matrix multiplication as $\PP\CC\QQ^T$, where $^T$ denotes transposition.
This way of computation presumes that the actors' choices
of pure strategies are independent random variables with values in $\mathcal M$ and $\mathcal C$ (respectively),
which may be simulated by the actors' private randomization devices.
The independence of individual strategies is a feature required by the idea of {\it noncooperative} game, in which there is no communication of actors
to play a joint strategy.
In the {\it Bayesian setting} of the decision problem Monte is supposed to play some fixed strategy $\QQ$, known to Conie.
The probability textbooks consider the Bayesian setting for the MHP with $\QQ$ the uniform distribution over $\mathcal M$,
which is equivalent to the assumption that Monte picks $\theta$ from $\{1,2,3\}$ by rolling a fair three-sided die,
and in the event of a match picks the door $y$ from the two possibilities by tossing a fair coin.
The general Bayesian formulation, which may be called a game against nature,
models the situation where Conie deals with an unconscious random algorithm
which neither has goals of its own nor can take advantage of Conie's mistakes.
Conie's optimal
behavior is then a {\it Bayesian strategy} $\PP'=\PP'(\QQ)$ which maximizes the probability of winning the prize:
$$\PP'\CC\QQ^T= \max_{\PP} \PP\CC\QQ^T.$$
A Bayesian strategy always exists by
continuity of
$\PP\CC\QQ^T$ as a function of $\PP$, and compactness of the set of mixed strategies.
Moreover, by linearity of $\PP\CC\QQ^T$ and convexity
\begin{itemize}
\item[]
The general
Bayesian strategy is an arbitrary mixture of the pure Bayesian strategies.
\end{itemize}
Now suppose $\QQ$ is fully supported, that is, every pure strategy from $\cal M$ is played with positive probability,
and let $A$ and $B$ be two distinct pure strategies of Conie such that
$B$ weakly dominates $A$. Since all rows of $\CC$ are distinct, the latter means that the collection
of columns in which row $A$ has a 1 is a proper subset of the collection of columns in which $B$ has a 1.
But then the winning probability is strictly larger for $B$ than for $A$. Since every strategy admitting the action $\tm$ is weakly
dominated by some always-switching strategy, we conclude that
\begin{itemize}
\item[] If $\QQ$ is fully supported then only always-switching pure strategies can be Bayesian, hence every
Bayesian strategy is a mixture of always-switching strategies.
\end{itemize}
Thus in the `generic' case the Bayesian principle of optimality excludes strategies which may use $\tm$.
Having retained only always-switching strategies, it
is easy to find all Bayesian strategies explicitly.
\begin{itemize}
\item[]
Suppose that according to fully supported $\QQ$ the probability to hide the prize behind door $\theta\in\{1,2,3\}$ is $\pi_\theta$.
Let $\pi_1\geq\pi_2\geq\pi_3>0$ (otherwise re-label the doors).
Then the only pure Bayesian strategies are
\begin{itemize}
\item[\rm(1)] 3\ts\ts~ if $\pi_1\geq \pi_2>\pi_3$,
\item[\rm(2)] 2\ts\ts~ and 3\ts\ts~ if $\pi_1>\pi_2=\pi_3$,
\item[\rm(3)] 1\ts\ts,~ 2\ts\ts~ and 3\ts\ts~ if $\pi_1=\pi_2=\pi_3=1/3$.
\end{itemize}
\end{itemize}
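The displayed claim can be spot-checked numerically. The sketch below is our own illustration with a hypothetical fully supported $\QQ$: prior $\pi=(0.5,0.3,0.2)$ on the prize door (so door 3 is least likely) and revealing biases $\lambda_\theta\in(0,1)$; it confirms that the unique pure Bayesian strategy is $3\ts\ts$, with winning probability $1-\min\pi$.

```python
import itertools

DOORS = (1, 2, 3)
pi = {1: 0.5, 2: 0.3, 3: 0.2}     # sample prior with pi_1 > pi_2 > pi_3 > 0
lam = {1: 0.6, 2: 0.7, 3: 0.8}    # P(offer the smaller door | match), in (0,1)

def play(monte, conie):
    theta, d = monte
    x, a = conie
    y = d if x == theta else theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

def q(theta, d):
    """Probability that the fully supported Q plays the pure strategy (theta, d)."""
    smaller = min(set(DOORS) - {theta})
    return pi[theta] * (lam[theta] if d == smaller else 1 - lam[theta])

montes = [(t, d) for t in DOORS for d in DOORS if d != t]
value = {}
for x in DOORS:
    lo, hi = sorted(set(DOORS) - {x})
    for al, ah in itertools.product('sm', repeat=2):
        value[f"{x}{al}{ah}"] = sum(
            q(t, d) * play((t, d), (x, {lo: al, hi: ah})) for t, d in montes)

best = max(value.values())
assert abs(best - (1 - min(pi.values()))) < 1e-12   # winning prob 1 - min pi
assert [k for k, v in value.items() if abs(v - best) < 1e-12] == ["3ss"]
```

Changing $\lambda$ leaves the answer unchanged, as predicted: an always-switching strategy's value depends only on $\pi$.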
If $\QQ$ is not fully supported then
the listed always-switching strategies are still Bayesian, but some other strategies may be Bayesian too.
For instance, a never-switching strategy $x\tm\tm$ is Bayesian if (and only if) Monte always hides the prize behind door $x$, although
even then every always-switching strategy $x'\ts\ts$ with $x'\neq x$ is Bayesian also.
More interestingly, suppose $\QQ=(1/3,0,1/3,0,1/3,0)$. This is a model for the `crawl' behavior of the host \cite{Rosenthal}, who is eager
to reveal the door with the higher number whenever he has a choice. In this case the pure Bayesian strategies are
$1\ts\ts,~ 2\ts\ts,~3\ts\ts$ and $1\tm\ts,~ 2\tm\ts,~3\tm\ts$.
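These six best responses can be recovered by enumeration; the sketch below (our own encoding of strategies) evaluates all twelve pure strategies of Conie against the crawl distribution, which puts probability $1/3$ on each of the columns 12, 21 and 31.

```python
import itertools

DOORS = (1, 2, 3)
crawl = {(1, 2): 1/3, (2, 1): 1/3, (3, 1): 1/3}   # Q = (1/3,0,1/3,0,1/3,0)

def play(monte, conie):
    theta, d = monte
    x, a = conie
    y = d if x == theta else theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

value = {}
for x in DOORS:
    lo, hi = sorted(set(DOORS) - {x})
    for al, ah in itertools.product('sm', repeat=2):
        value[f"{x}{al}{ah}"] = sum(p * play(m, (x, {lo: al, hi: ah}))
                                    for m, p in crawl.items())

best = max(value.values())
assert abs(best - 2/3) < 1e-12
best_set = {k for k, v in value.items() if abs(v - best) < 1e-12}
assert best_set == {"1ss", "2ss", "3ss", "1ms", "2ms", "3ms"}
```

Here `xms` is the strategy $x\tm\ts$ (hold when offered the smaller door, switch when offered the larger), matching the list in the text.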
In general, the rule to determine all Bayesian pure strategies by excluding the dominated strategies is the following:
\begin{itemize}
\item[]
If the strategy $\theta,y$ with $y=\min\{1,2,3\}\setminus\{\theta\}$ enters
$\QQ$ with nonzero probability then $\theta\ts\tm$ is not Bayesian.
\item[] If the strategy $\theta,y$ with $y=\max\{1,2,3\}\setminus\{\theta\}$ enters $\QQ$ with nonzero probability then $\theta\tm\ts$ is not Bayesian.
\end{itemize}
For arbitrary $\QQ$, Conie's strategy `point at the door which is least likely to hide the prize, then always switch' is Bayesian,
giving winning probability $1-\min_{\theta} \pi_{\theta}$, where $\pi_{\theta}$ is the probability that the prize is behind door $\theta$.
Remarkably, all that Conie needs to know to play optimally in the Bayesian game is the number of the least likely door.
She points at this door and switches all the time, winning the complementary probability no matter what the biases
for revealing the doors in the case of a match are.
In the special case $\pi_1=\pi_2=\pi_3=1/3$ every always-switching strategy is Bayesian no matter how
$y$ is chosen in the event of a match $x=\theta$; a conclusion usually shown in the literature with the
help of conditional probabilities.
In fact, for an optimal choice of $x$, the {\it conditional} probability of winning by switching in every position
$xy$ is at least $1/2$. This is implied by the overall optimality (in the Bayesian sense), and is a simple instance of
Bellman's general dynamic programming principle.
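The uniform-prior claim is easy to confirm by enumeration for sample biases (our own encoding; the particular values of $\lambda$ below are arbitrary): every always-switching strategy earns exactly $2/3$, and nothing does better.

```python
import itertools

DOORS = (1, 2, 3)
lam = {1: 0.15, 2: 0.5, 3: 0.9}   # arbitrary revealing biases (sample values)

def play(monte, conie):
    theta, d = monte
    x, a = conie
    y = d if x == theta else theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

def q(theta, d):
    """Uniform prior on theta; biases lam govern the offer on a match."""
    smaller = min(set(DOORS) - {theta})
    return (1 / 3) * (lam[theta] if d == smaller else 1 - lam[theta])

montes = [(t, d) for t in DOORS for d in DOORS if d != t]
value = {}
for x in DOORS:
    lo, hi = sorted(set(DOORS) - {x})
    for al, ah in itertools.product('sm', repeat=2):
        value[f"{x}{al}{ah}"] = sum(
            q(t, d) * play((t, d), (x, {lo: al, hi: ah})) for t, d in montes)

assert all(abs(value[f"{x}ss"] - 2 / 3) < 1e-12 for x in DOORS)
assert abs(max(value.values()) - 2 / 3) < 1e-12
```

Replacing `lam` by any other biases in $(0,1)$ leaves both assertions true, which is exactly the statement that every always-switching strategy is Bayesian under the uniform prior.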
\section{The zero-sum game}
The zero-sum game is a model for interaction of actors with antagonistic goals.
Conie wins the prize when Monte loses it.
The essence of the zero-sum game is the {\it worst-case analysis}.
What can Conie guarantee in the game when the behavior of Monte may be arbitrary (but in agreement with the game rules)?
What is the worst behavior of Monte?
We can write Monte's payoffs as another $12\times 6$ matrix $\HH$, but this is not necessary as $\HH=-\CC$, so
$\CC$ contains all information about the payoffs.
In this context, when Monte plays $\QQ$, Conie's Bayesian strategy $\PP'=\PP'(\QQ)$ is called a {\it best response}
to $\QQ$. Reciprocally,
when Conie plays a particular $\PP$, Monte's best response strategy $\QQ'=\QQ'(\PP)$
is the one for which Conie's winning probability is minimal,
$$\PP\CC\QQ'^T= \min_{\QQ} \PP\CC\QQ^T.$$
A profile of actors' mixed strategies $(\PP^*,\QQ^*)$ is said to be a {\it minimax solution} (or saddle point)
if the strategies are the best responses to each other,
$$ \PP^*\CC\QQ^{*T}= \max_{\PP} \PP\CC\QQ^{*T} = \min_{\QQ} \PP^*\CC\QQ^T.$$
Such a solution exists for arbitrary matrix game with finite sets of strategies, by the minimax theorem due to von Neumann.
The number
$V:=\PP^*\CC\QQ^{*T}$ is called the {\it value of the game}, and it does not depend on particular minimax profile due to the fundamental
relation
$$V=\max_{\PP}\min_{\QQ} \PP\CC\QQ^{T} = \min_{\QQ}\max_{\PP} \PP\CC\QQ^T$$
involved in the minimax theorem.
Strategy $\PP^*$ of Conie is minimax if it guarantees winning probability at least $V$ no matter what Monte does.
Strategy $\QQ^*$ of Monte is minimax if it incurs winning probability at most $V$ no matter what Conie does.
Recalling our discussion of dominance, it is easy to see that $V=2/3$.
If Monte chooses $\theta$ uniformly at random, Conie cannot achieve a winning probability higher than $2/3$. On the other hand,
if $x$ is chosen uniformly at random and the strategy $x\ts\ts$ is played, Conie guarantees $2/3$ no matter what Monte does.
So a solution is found.
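Both halves of this argument can be replayed mechanically. The sketch below (our own encoding) rebuilds the full payoff matrix, then checks that the uniform always-switching mixture earns exactly $2/3$ against every pure strategy of Monte, while the uniform-$\theta$ crawl concedes at most $2/3$ against every pure strategy of Conie.

```python
import itertools

DOORS = (1, 2, 3)

def play(monte, conie):
    theta, d = monte
    x, a = conie
    y = d if x == theta else theta
    z = x if a[y] == 'm' else y
    return int(z == theta)

montes = [(t, d) for t in DOORS for d in DOORS if d != t]   # 12,13,21,23,31,32
rows = {}
for x in DOORS:
    lo, hi = sorted(set(DOORS) - {x})
    for al, ah in itertools.product('sm', repeat=2):
        rows[f"{x}{al}{ah}"] = [play(m, (x, {lo: al, hi: ah})) for m in montes]

# Conie: 1/3 on each always-switching strategy; Monte: the crawl
p_star = {k: (1 / 3 if k.endswith("ss") else 0.0) for k in rows}
q_star = dict(zip(montes, (1 / 3, 0, 1 / 3, 0, 1 / 3, 0)))

guarantee = [sum(p_star[k] * rows[k][j] for k in rows) for j in range(6)]
concede = {k: sum(q_star[m] * v for m, v in zip(montes, row))
           for k, row in rows.items()}
assert all(abs(g - 2 / 3) < 1e-12 for g in guarantee)  # equalizing at 2/3
assert max(concede.values()) <= 2 / 3 + 1e-12          # concedes at most 2/3
```

Together the two assertions pin the value of the game at $V=2/3$.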
We wish to approach the solution more formally, by manipulating the payoff matrix.
The principle of elimination of dominated strategies says:
\begin{itemize}
\item[] the value of the game is not altered if the game matrix is (repeatedly) reduced by eliminating a (weakly) dominated row or column.
\end{itemize}
Explicitly, strategy $1\swsw$~ dominates 2\masw~ and 2\mama~
\begin{center}
\begin{tabular}{c|cccccc}
1\swsw &0 & 0 & 1 & 1 & 1 &1\\
2\masw &0 & 0 & 1 & 0 & 1 &1\\
2\mama &0 & 0 & 1 & 1 & 0 &0\\
\end{tabular}
\end{center}
and 3\swsw~ dominates 2\swma~
\begin{center}
\begin{tabular}{c|cccccc}
3\swsw &1 & 1 & 1 & 1 & 0 &0\\
2\swma &1 & 1 & 0 & 1 & 0 &0\\
\end{tabular}
\end{center}
Continuing the elimination we reduce to
\begin{center}
\begin{tabular}{c|cccccc}
1\swsw &0 & 0 & 1 & 1 & 1 &1\\
2\swsw &1 & 1 & 0 & 0 & 1 &1\\
3\swsw &1 & 1 & 1 & 1 & 0 &0\\
\end{tabular}
\end{center}
Finally, discarding repeated columns
the matrix is reduced to the square matrix $\Cc$:
\vskip0.3cm
\begin{center}
\begin{tabular}{c|ccc}
$\theta,y=$ & 12 & 21 & 32 \\
\hline
1\swsw &0 & 1 & 1 \\
2\swsw &1 & 0 & 1 \\
3\swsw &1 & 1 & 0 \\
\end{tabular}
\end{center}
\vskip0.3cm
The reduced game $\Cc$ has a clear interpretation.
Monte and Conie choose door numbers $\theta$ and $x$, respectively, from $\{1,2,3\}$.
If the choices {\it mismatch} ($\theta\neq x$) Conie wins, otherwise there is no payoff.
Let us find an {\it equalizing} strategy $\Pp^*$, for which ${\Pp}^*\Cc{\Qq}^T$ is the same no matter
which counter-strategy $\Qq$ Monte plays. Taking for $\Qq$ the pure strategies we arrive at the system of equations
$$p_2+p_3=p_1+p_2=p_1+p_3,$$
which, taken together with $p_1+p_2+p_3=1$, determines the strategy completely.
Solving the system we see that with $\Pp^*=(1/3,1/3,1/3)$ Conie wins with probability 2/3
no matter what Monte does. Similarly, when Monte plays $\Qq^*=(1/3,1/3,1/3)$ the winning probability is always
2/3 no matter what Conie does. Thus $(\Pp^*,\Qq^*)$ is a solution of the game and the value $V=2/3$ is confirmed.
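The computation with the reduced matrix can be replayed in a few lines of exact arithmetic (a minimal sketch; `Fraction` avoids any rounding):

```python
from fractions import Fraction

# the reduced mismatch game C_c from the text
Cc = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
third = Fraction(1, 3)
p_star = [third, third, third]

# against every pure column, Conie wins exactly 2/3 ...
col = [sum(p_star[i] * Cc[i][j] for i in range(3)) for j in range(3)]
# ... and against every pure row, Monte concedes exactly 2/3
row = [sum(Cc[i][j] * third for j in range(3)) for i in range(3)]
assert col == [Fraction(2, 3)] * 3
assert row == [Fraction(2, 3)] * 3
```

Both vectors are constant, so $(\Pp^*,\Qq^*)$ is a saddle point with value exactly $2/3$.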
Yet another way to arrive at $(\Pp^*,\Qq^*)$ is to use symmetry of matrix $\Cc$ induced by permutations of the door numbers
(see \cite{Ferguson}, Theorem 3.4).
A related game with diagonal matrix
$$\left(
\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1 \\
\end{array}\right)
$$
is obtained by subtracting a constant matrix from $\Cc$. In this diagonal game {\it Monte} wins if the choices match, otherwise there is no payoff.
Going back to the original matrix $\CC$, we conclude
that the profile
\begin{eqnarray*}
\PP^*=\left({1\over 3},0,0,0,{1\over 3},0,0,0,{1\over 3},0,0,0\right),~~
\QQ^*_{1,1,1}=\left({1\over 3},0,{1\over 3},0,{1\over 3},0\right)
\end{eqnarray*}
is a solution to the game. Strategy $\PP^*$ is equalizing, as $\PP^*\CC \QQ^T=2/3$ for all $\QQ$.
According to the solution $(\PP^*, \QQ^*_{1,1,1})$, Monte plays the `crawl' strategy: he
hides the prize uniformly at random,
and he always reveals the door with the higher number when there is freedom in the second action.
Conie selects door $x$ uniformly at random and always plays $\ts$.
The generic Monte's strategy assigning probability $1/3$ to every value of $\theta$ is of the form
$$\QQ^*_{{\lambda_1},{\lambda_2},{\lambda_3}}=\left({\lambda_1\over 3}\,,\,{1-\lambda_1\over 3}\,,\,
{\lambda_2\over 3}\,,\,{1-\lambda_2\over 3}\,,\,{\lambda_3\over 3}\,,\,{1-\lambda_3\over 3}\right)$$
where $0\leq\lambda_\theta\leq 1$, $\theta\in \{1,2,3\}$.
Parameter $\lambda_\theta$ is the conditional
probability that Monte will offer switching to door with smaller number in the case of match.
If Conie plays best response to $\QQ^*_{{\lambda_1},{\lambda_2},{\lambda_3}}$ she wins with probability $2/3$, as
we have seen when considering the Bayesian setting. Therefore each strategy $\QQ^*_{{\lambda_1},{\lambda_2},{\lambda_3}}$ is minimax.
On the other hand, if the values of $\theta$ have probabilities $\pi_\theta$, then a best response of Conie yields winning probability
$1-\min(\pi_1,\pi_2,\pi_3)$, which is
minimized for $\pi_1=\pi_2=\pi_3=1/3$. We see that
\begin{itemize}
\item[] The strategies $\QQ^*_{{\lambda_1},{\lambda_2},{\lambda_3}}$ and only they are the minimax strategies of Monte.
\end{itemize}
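The minimization $1-\min(\pi_1,\pi_2,\pi_3)$ used above can be illustrated numerically. The sketch below takes the formula from the Bayesian analysis as given (it is an illustration, not a derivation) and checks that the uniform prior is the unique minimizer at the game value $2/3$:

```python
import random

random.seed(1)

def best_response_win(prior):
    """Conie's winning probability against the prior (pi1, pi2, pi3),
    using the formula 1 - min(prior) from the Bayesian analysis."""
    return 1 - min(prior)

# The uniform prior yields exactly the game value 2/3 ...
assert abs(best_response_win((1/3, 1/3, 1/3)) - 2/3) < 1e-12

# ... and every prior gives Conie at least 2/3, since min(prior) <= 1/3.
for _ in range(1000):
    x = [random.expovariate(1.0) for _ in range(3)]
    s = sum(x)
    prior = tuple(v / s for v in x)
    assert best_response_win(prior) >= 2/3 - 1e-12
```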
Furthermore, if $\QQ^*_{{\lambda_1},{\lambda_2},{\lambda_3}}$ is fully supported, which is the case when
$\lambda_{\theta}\in (0,1)$ for $\theta\in \{1,2,3\}$,
then the unique best response is $\PP^*$, thus
\begin{itemize}
\item[] Conie's strategy $\PP^*$ of choosing a door uniformly at random, then always switching is the unique minimax strategy.
\end{itemize}
The subclass of Monte's strategies in which the second action is independent of the first, given a match, consists of the strategies with equal probabilities
$\lambda_1=\lambda_2=\lambda_3$; these were discussed, e.g., in \cite{Rosenhouse} (Version Five of the MHP).
We note that the general theory does not preclude
weakly dominated strategies from being minimax
(see \cite{Ferguson}, Section 2.6, Exercise 9). This does not occur in the MHP because there exist fully supported minimax strategies of Monte.
\section{The general-sum games}
The logical way to go beyond the zero-sum game is a general-sum game.
In such a model Monte has his own $12\times 6$ payoff matrix $\HH$, which need not be the negative of $\CC$.
A best response to Monte's strategy $\QQ$ is defined as before, but best response to Conie's strategy $\PP$ is now
a strategy which maximizes the expected payoff $\PP\HH\QQ^T$.
A central solution concept for the general-sum game
is a {\it Nash equilibrium}, defined as
a profile of mixed strategies $(\PP',\QQ')$ which are best responses to each other,
$$\PP'\CC\QQ'^{T}=\max_{\PP} \PP\CC\QQ'^{T} ~~~{\rm and~~~} \PP'\HH\QQ'^{T}=\max_{\QQ} \PP'\HH\QQ^{T}.$$
That is to say, Nash equilibrium is a profile
$(\PP',\QQ')$ such that a unilateral change of the strategy by one of the actors cannot improve the private
payoff of the actor.
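Because the payoffs are bilinear, a mixed strategy is a best response exactly when it puts weight only on pure strategies attaining the maximum, so the equilibrium conditions can be checked against pure deviations. The sketch below does this for the reduced $3\times 3$ zero-sum game; the win matrix $J-I$ is an assumption consistent with the reduced game discussed above:

```python
import numpy as np

C = np.ones((3, 3)) - np.eye(3)   # Conie's payoff (assumed reduced game)
H = -C                            # Monte's payoff in the zero-sum case

P = np.full(3, 1/3)               # candidate equilibrium strategies
Q = np.full(3, 1/3)

# Against a fixed mixed strategy, the best mixed response achieves the
# best pure payoff, so comparing with pure deviations suffices.
conie_payoff = P @ C @ Q
monte_payoff = P @ H @ Q
assert np.isclose(conie_payoff, (C @ Q).max())   # Conie cannot improve
assert np.isclose(monte_payoff, (P @ H).max())   # Monte cannot improve
print(conie_payoff)   # 2/3: Conie's safety level is met with equality
```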
A general theorem due to John Nash ensures that at least one Nash equilibrium (in mixed strategies) exists.
Nash equilibrium is a concept of {\it noncooperative} game theory. The players cannot make binding agreements on a joint choice of strategy unless
the agreements are self-enforced. This is formalized by the independence presumed in the product formulas for computing the payoffs.
In every Nash equilibrium Conie will have winning probability not less than her minimax value $V=2/3$, which is
her {\it safety level}. Higher probability in some Nash equilibria might be possible, since the game is no longer antagonistic.
Our analysis of the Bayesian strategies can be applied to the general-sum games as well. Suppose in a Nash equilibrium $(\PP',\QQ')$
strategy $\QQ'$ is fully supported. Then $\PP'$, being a best response to $\QQ'$, is a mixture of always-switching strategies.
Let $\pi_1\geq \pi_2\geq \pi_3>0$ be probabilities of the values $\theta=1,2,3$ under $\QQ'$.
Then we have
\begin{itemize}
\item[]
{\rm A profile $(\PP',\QQ')$ is a Nash equilibrium with fully supported
$\QQ'$ if and only if there exists a probability vector $(p_1,p_2,p_3)$
such that the mixture with weights $p_1, p_2, p_3$
of rows $1\ts\ts, 2\ts\ts, 3\ts\ts$ of matrix $\HH$ is a row vector with equal entries. There are three
possibilities:
\vskip0.1cm
\noindent
\begin{itemize}
\item[\rm(1)] $p_3=1$, $\pi_2>\pi_3$ and the row $3\ts\ts$ of $\HH$ is a constant row,
\item[\rm(2)] $p_1=0$, $\pi_1> \pi_2=\pi_3$ and a mixture of the rows $2\ts\ts$ and $3\ts\ts$ of $\HH$ is a constant row,
\item[\rm(3)] $p_1p_2p_3>0,~ \pi_1=\pi_2=\pi_3=1/3$, and the arithmetical average of rows
$1\ts\ts$, $2\ts\ts$ and $3\ts\ts$ of $\HH$ is a constant row
\end{itemize}
}
\end{itemize}
In the case (3)
Conie's winning probability is her safety level 2/3 but, unlike the zero-sum game,
the equilibrium strategy $\PP'$ need not give the same probability to every always-switching strategy.
Thus if the equilibrium is nondegenerate, in the sense that every pure strategy of Monte and every always-switching strategy of Conie
is played with positive probability, then for Conie the game brings the same as the game against an antagonistic Monte.
If none of the mixtures of the $x\ts\ts$-rows is constant, there is no fully supported Nash equilibrium.
What could be plausible assumptions on Monte's payoff $\HH$? If Monte is only concerned about the fate of the prize,
and not where and how the prize is won,
his essentially distinct payoff structures are
$\HH=-\CC$ (antagonistic Monte), $\HH=\CC$ (sympathetic Monte), and $\HH={\bf 0}$ (indifferent Monte). The first case is zero-sum,
in the second case every entry 1 of $\CC$ corresponds to a pure Nash equilibrium, and in the third case every pair
$(\PP', \QQ)$ with $\PP'$ a best response to $\QQ$ is a Nash equilibrium.
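The sympathetic case is easy to verify directly in the reduced $3\times 3$ game: with $\HH=\CC$, every cell in which Conie wins is a pure Nash equilibrium, since neither player can gain by a unilateral deviation. The matrix below is the assumed reduced win matrix, not the full $12\times 6$ one:

```python
import numpy as np

C = np.ones((3, 3)) - np.eye(3)   # assumed reduced win matrix (J - I)
H = C                             # sympathetic Monte: payoffs coincide

pure_equilibria = []
for x in range(3):                # Conie's pure strategy (row)
    for t in range(3):            # Monte's pure strategy (column)
        conie_ok = C[x, t] == C[:, t].max()   # no better row deviation
        monte_ok = H[x, t] == H[x, :].max()   # no better column deviation
        if conie_ok and monte_ok:
            pure_equilibria.append((x, t))

# Exactly the six off-diagonal cells (the entries 1 of C) are equilibria.
print(pure_equilibria)
assert all(C[x, t] == 1 for x, t in pure_equilibria)
```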
It is not hard to design further exotic examples of payoffs $\HH$ for which context-dependent strategies enter some Nash equilibrium profile.
If there is a moral in this, it is perhaps the following: context-dependent strategies
{\it may} be a rational kind of behavior under certain intentions of Monte.
The source of this phenomenon is twofold. Firstly,
there is the very idea of equilibrium:
if Conie steps away from the context-dependent equilibrium strategy,
Monte is not forced to keep his old strategy and may change the biases in a way unfavorable for Conie, typically pushing her down to the safety level $2/3$.
Secondly, the domination is only {\it weak}, and hence can have no effect if the support is not full.
However, such context-dependent equilibria are highly unstable, and minor perturbations of $\HH$ will destroy them.
Practically speaking, if Conie has any doubts about Monte's intentions,
it is safe to stay with the always-switching strategies.
As for the famous question, noncooperative game theory gives more weight to vos Savant's solution by adding
\begin{itemize}
\item[]
{\it Yes, you should switch. You knew the rules of the game.
If your decision were to pick door 1 and hold when a switch to door 2 is offered,
then I could beat your strategy by picking door 2 and switching whatever happens.
My strategy will even be strictly better than yours if the prize can ever be hidden behind door 2.}
\end{itemize}
If the game is {\it cooperative}, for instance if Monte and Conie want to drive their new Cadillac happily to Nice,
they could just favor door 1. But that is a completely different story.
% End of ``The Monty Hall Problem in the Game Theory Class'' (arXiv:1107.0326).

% ``Pseudo-random graphs'' (arXiv:math/0503745).

\section{Introduction}
Random graphs have proven to be one of the most important and fruitful
concepts in modern Combinatorics and Theoretical Computer Science.
Besides being a fascinating study subject for their own sake, they
serve as essential instruments in proving an enormous number of
combinatorial statements, making their role quite hard to overestimate.
Their tremendous success serves as a natural motivation for the
following very general and deep informal questions: what are the
essential properties of random graphs? How can one tell when a given
graph behaves like a random graph? How to create deterministically
graphs that look random-like? This leads us to the concept of {\em
pseudo-random graphs}.
Speaking very informally, a pseudo-random graph $G=(V,E)$ is a graph
that behaves like a truly random graph $G(|V|,p)$ of the same edge
density $p=|E|\left/{{|V|}\choose 2}\right.$. Although the last sentence
gives some initial idea about this concept, it is not very informative,
as first of all it does not say in which aspect the pseudo-random graph
behavior is similar to that of the corresponding random graph, and
secondly it does not supply any quantitative measure of this
similarity. There are quite a few possible graph parameters that can
potentially serve for comparing pseudo-random and random graphs (and in
fact quite a few of them are equivalent in certain, very natural sense,
as we will see later), but probably the most important characteristics
of a truly random graph is its {\em edge distribution}. We can thus make
a significant step forward and say that a pseudo-random graph is a
graph with edge distribution resembling the one of a truly random graph
with the same edge density. Still, the quantitative measure of this
resemblance remains to be introduced.
Although the first examples and applications of pseudo-random graphs
appeared a very long
time ago, it was Andrew Thomason who launched systematic research on
this subject with his two papers \cite{Tho87a}, \cite{Tho87b} in the
mid-eighties. Thomason introduced the notion of jumbled graphs, making it
possible to measure in quantitative terms the similarity between the edge
distributions of pseudo-random and truly random graphs. He also supplied
several examples of pseudo-random graphs and discussed many of their
properties. Thomason's papers undoubtedly defined directions of future
research for many years.
Another cornerstone contribution belongs to Chung, Graham and Wilson
\cite{ChuGraWil89} who in 1989 showed that many properties of different
nature are in certain sense equivalent to the notion of
pseudo-randomness, defined using the edge distribution. This fundamental
result opened many new horizons by showing additional facets of
pseudo-randomness.
Recent years have brought many new and striking results on pseudo-randomness
by various researchers. There are two clear trends in recent research on
pseudo-random graphs. The first is to apply very diverse methods from
different fields (algebraic, linear algebraic, combinatorial,
probabilistic etc.) to construct and study pseudo-random graphs. The
second and equally encouraging is to find applications, in many cases
quite surprising, of pseudo-random graphs to problems in Graph Theory,
Computer Science and other disciplines. This mutually enriching
interplay has greatly contributed to significant progress in research on
pseudo-randomness achieved lately.
The aim of this survey is to provide a systematic treatment of the
concept of pseudo-random graphs, probably the first since the two
seminal contributions of Thomason
\cite{Tho87a}, \cite{Tho87b}. Research in pseudo-random graphs has
developed tremendously since then, making it impossible to provide full
coverage of this subject in a single paper. We are thus forced to omit
quite a few directions, approaches, and theorem proofs from our discussion.
Nevertheless we will attempt to provide the reader with a rather
detailed and illustrative account of the current state of research in
pseudo-random graphs.
Although, as we will discuss later, there
are several possible formal approaches
to pseudo-randomness, we will mostly emphasize the approach based on
graph eigenvalues. We find this approach, combining linear algebraic
and combinatorial tools in a very elegant way, probably the most
appealing, convenient and yet quite powerful.
This survey is structured as follows. In the next section we will
discuss various formal definitions of the notion of pseudo-randomness,
from the so called jumbled graphs of Thomason to the $(n,d,\lambda)$-graphs defined by Alon,
where pseudo-randomness is connected to the eigenvalue gap. We then
describe several known constructions of pseudo-random graphs, serving both as
illustrative examples for the notion of pseudo-randomness, and also as
test cases for many of the theorems to be presented afterwards. The
strength of every abstract concept is best tested by the properties it
enables one to derive. Pseudo-random graphs are certainly not an exception
here, so in Section 4 we discuss various properties of pseudo-random
graphs. Section 5, the final section of the paper, is devoted to
concluding remarks.
\section{Definitions of pseudo-random graphs}
Pseudo-random graphs are much more a general concept describing a
graph-theoretic phenomenon than a rigid, well-defined notion -- a
fact reflected already in the plural form of the title of this
section! Here we describe various formal approaches to the concept of
pseudo-randomness. We start by stating known facts on the edge
distribution of random graphs, which will serve later as a benchmark for
all other definitions. Then we discuss the notion of jumbled graphs
introduced by Thomason in the mid-eighties. Then we pass on to the
discussion of graph properties, equivalent in a weak (qualitative) sense
to the pseudo-random edge distribution, as revealed by Chung, Graham and
Wilson in \cite{ChuGraWil89}. Our next item in this section is the
definition of pseudo-randomness based on graph eigenvalues -- the
approach most frequently used in this survey. Finally, we discuss the
related notion of strongly regular graphs, their eigenvalues and their
relation to pseudo-randomness.
\subsection{Random graphs}
As we have already indicated in the Introduction, pseudo-random graphs are
modeled after truly random graphs, and therefore mastering the edge
distribution in random graphs can provide the most useful insight on
what can be expected from pseudo-random graphs. The aim of this
subsection is to state all necessary definitions and results on random
graphs. We certainly do not intend to be comprehensive here, instead
referring the reader to two monographs on random graphs \cite{Bol01},
\cite{JanLucRuc00}, devoted entirely to the subject and presenting a
very detailed picture of the current research in this area.
A {\em random graph} $G(n,p)$ is a probability space of all labeled
graphs on $n$ vertices $\{1,\ldots,n\}$, where for each pair
$1\le i<j\le n$,
$(i,j)$ is an edge of $G(n,p)$ with probability $p=p(n)$,
independently of any other edges. Equivalently, the probability of a
graph $G=(V,E)$ with $V=\{1,\ldots,n\}$ in $G(n,p)$ is
$Pr[G]=p^{|E(G)|}(1-p)^{{n\choose 2}-|E(G)|}$. We will occasionally
mention also the probability space $G_{n,d}$, this is the probability
space of all $d$-regular graphs on $n$ vertices endowed with the uniform
measure, see the survey of Wormald \cite{Wor99} for more background.
We also say that a graph property ${\cal A}$ holds {\em almost surely}, or a.s.\ for
brevity, in $G(n,p)$ (resp.\ $G_{n,d}$) if the probability that $G(n,p)$ (resp.\ $G_{n,d}$)
has ${\cal A}$ tends to one as the number of vertices $n$ tends to
infinity.
From our point of view the most important parameter of the random graph
$G(n,p)$ is its edge distribution. This characteristic can be easily
handled due to the fact that $G(n,p)$ is a product probability
space with independent appearances of different edges. Below we cite
known results on the edge distribution in $G(n,p)$.
\begin{theo}\label{dist1}
Let $p=p(n)\le 0.99$. Then almost surely $G\in G(n,p)$ is such that if
$U$ is any set of $u$ vertices, then
$$
\left|e(U)-p{u\choose 2}\right| =
O\left(u^{3/2}p^{1/2}\log^{1/2}(2n/u)\right)\ .
$$
\end{theo}
\begin{theo}\label{dist2}
Let $p=p(n)\le 0.99$. Then almost surely $G\in G(n,p)$ is such that
if $U,W$ are disjoint sets of vertices satisfying $u=|U|\le w=|W|$, then
$$
\left|e(U,W)-puw\right|=O\left(u^{1/2}wp^{1/2}\log^{1/2}(2n/w)\right)\ .
$$
\end{theo}
The proof of the above two statements is rather straightforward. Notice
that both quantities $e(U)$ and $e(U,W)$ are binomially distributed
random variables with parameters ${u\choose 2}$ and $p$, and $uw$ and
$p$, respectively. Applying standard Chernoff-type estimates on the
tails of the binomial distribution (see, e.g., Appendix A of
\cite{AloSpe00}) and then the union bound, one gets the desired
inequalities.
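The concentration in Theorem \ref{dist1} is easy to observe experimentally. The sketch below samples $G(n,p)$, draws random vertex subsets, and checks the deviation of $e(U)$ against the stated bound; the constant $4$ standing in for the $O(\cdot)$ term is an arbitrary generous choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 0.5

# Sample G(n, p): a symmetric 0/1 adjacency matrix with zero diagonal.
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(int)

for u in (20, 60, 150):
    U = rng.choice(n, size=u, replace=False)
    e_U = A[np.ix_(U, U)].sum() // 2                 # edges inside U
    expected = p * u * (u - 1) / 2
    # The theorem's bound, with an arbitrary generous constant 4.
    bound = 4 * u**1.5 * p**0.5 * np.log(2 * n / u)**0.5
    assert abs(e_U - expected) <= bound
print("all subset deviations within the stated bound")
```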
It is very instructive to notice that we get less and less control over
the edge distribution as the set size becomes smaller.
For example, in the probability
space $G(n,1/2)$ every subset is expected to contain half of its
potential edges. While this is what happens almost surely for large
enough sets due to Theorem \ref{dist1}, there will be almost surely
sets of size about $2\log_2n$ containing all possible edges (i.e.
cliques), and there will be almost surely sets of about the same size,
containing no edges at all (i.e. independent sets).
For future comparison we formulate the above two theorems in the
following unified form:
\begin{coro}\label{dist3}
Let $p=p(n)\le 0.99$. Then almost surely in $G(n,p)$ for every two
(not necessarily disjoint) subsets of vertices $U,W\subset V$ of
cardinalities $|U|=u, |W|=w$, the number $e(U,W)$ of edges of $G$
with one endpoint in $U$ and the other one in $W$ satisfies:
\begin{eqnarray}\label{dist}
|e(U,W)-puw|= O(\sqrt{uwnp})\ .
\end{eqnarray}
\end{coro}
(A notational agreement here and later in the paper: if an edge $e$
belongs to the intersection $U\cap W$, then $e$ is counted twice in
$e(U,W)$.)
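With this convention, $e(U,W)$ is just the quadratic form $\mathbf{1}_U^{T} A\, \mathbf{1}_W$ of the adjacency matrix, which is how the small sketch below computes it; the triangle example is only a sanity check of the double-counting rule:

```python
import numpy as np

def e_UW(A, U, W):
    """Number of edges with one endpoint in U and the other in W;
    an edge inside U ∩ W is counted twice, matching the convention."""
    n = len(A)
    iU = np.zeros(n); iU[list(U)] = 1
    iW = np.zeros(n); iW[list(W)] = 1
    return int(iU @ A @ iW)

# Triangle on vertices {0, 1, 2}.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

print(e_UW(A, {0, 1}, {2}))     # 2: edges 0-2 and 1-2, once each
print(e_UW(A, {0, 1}, {0, 1}))  # 2: the edge 0-1 lies in the
                                #    intersection and is counted twice
```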
Similar bounds for edge distribution hold also in the space $G_{n,d}$
of $d$-regular graphs, although they are significantly harder to derive
there.
Inequality (\ref{dist}) provides us with a quantitative benchmark,
according to which we will later measure the uniformity of the edge
distribution in pseudo-random graphs on $n$ vertices with edge density
$p=|E(G)|\left/{n\choose 2}\right.$.
It is interesting to draw comparisons between research in random graphs
and in pseudo-random graphs. In general, many properties of random
graphs are much easier to study than the corresponding properties of
pseudo-random graphs, mainly due to the fact that along with the almost
uniform edge distribution described in Corollary \ref{dist3}, random
graphs possess as well many other nice features, first and foremost of
them being that they are in fact very simply defined product
probability spaces. Certain graph properties can be easily shown to hold
almost surely in $G(n,p)$ while they are not necessarily valid in
pseudo-random graphs of the same edge density. We will see quite a few
such examples in the next section. A general line of research
appears to be not to use pseudo-random methods to get new results for
random graphs, but rather to try to adapt techniques developed
for random graphs to the case of pseudo-random graphs, or alternatively
to develop original techniques and methods.
\subsection{Thomason's jumbled graphs}
In two fundamental papers \cite{Tho87a}, \cite{Tho87b} published in 1987
Andrew Thomason introduced the first formal quantitative definition of
pseudo-random graphs. It appears quite safe to attribute the launch of
the systematic study of pseudo-randomness to Thomason's papers.
Thomason used the term ``jumbled'' graphs in his papers. A graph
$G=(V,E)$ is said to be $(p,\alpha)$-{\em jumbled}, where $p,\alpha$ are
real numbers satisfying $0<p<1\le \alpha$, if every subset of vertices
$U\subset V$ satisfies:
\begin{eqnarray}\label{jumbled}
\left|e(U)- p{{|U|}\choose 2}\right|\le \alpha |U|\ .
\end{eqnarray}
The parameter $p$ can be thought of as the density of $G$, while
$\alpha$ controls the deviation from the ideal distribution. According
to Thomason, the word ``jumbled'' is intended to convey the fact that the
edges are evenly spread throughout the graph.
The motivation for the above definition can be clearly traced to the
attempt to compare the edge distribution in a graph $G$ to that of a
truly random graph $G(n,p)$. Applying it indeed to $G(n,p)$ and
recalling (\ref{dist}) we conclude that the random graph $G(n,p)$ is
almost surely $O(\sqrt{np})$-jumbled.
Thomason's definition has several trivial yet very nice features.
Observe for example that if $G$ is $(p,\alpha)$-jumbled then the
complement $\bar{G}$ is $(1-p,\alpha)$-jumbled. Also, the definition is
hereditary -- if $G$ is $(p,\alpha)$-jumbled, then so is every induced
subgraph $H$ of $G$.
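Both observations can be verified exhaustively on a small graph. The sketch below computes, for every nonempty subset $U$, the smallest $\alpha$ for which (\ref{jumbled}) holds, and checks that $G$ at density $p$ and $\bar G$ at density $1-p$ give the same value; the equality is in fact subset-by-subset, since $e_{\bar G}(U)={|U|\choose 2}-e_G(U)$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 0.5

upper = np.triu(rng.random((n, n)) < p, k=1).astype(int)
A = upper + upper.T                    # small random graph G
Abar = 1 - A - np.eye(n, dtype=int)    # its complement

def min_alpha(A, p):
    """Smallest alpha such that the graph is (p, alpha)-jumbled,
    found by brute force over all nonempty vertex subsets."""
    n = len(A)
    best = 0.0
    for r in range(1, n + 1):
        for U in itertools.combinations(range(n), r):
            e = A[np.ix_(U, U)].sum() / 2
            dev = abs(e - p * r * (r - 1) / 2)
            best = max(best, dev / r)
    return best

# Deviations of G at density p and of its complement at density 1-p
# coincide subset by subset, so the optimal alphas are equal.
assert np.isclose(min_alpha(A, p), min_alpha(Abar, 1 - p))
```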
Note that being $(p,\Theta(np))$-jumbled for a graph $G$ on $n$
vertices and ${n\choose 2}p$ edges does not say much about the edge
distribution of $G$, as the number of edges in linear sized sets can
deviate by a constant factor from its expected value. However,
as we shall see very soon, if $G$ is known to be $(p,o(np))$-jumbled,
quite a lot can be said about its properties. Of course, the smaller
the value of $\alpha$, the more uniform or jumbled is the edge
distribution of $G$. A natural question is then: how small can the
parameter $\alpha=\alpha(n,p)$ be for a graph $G=(V,E)$ on $|V|=n$ vertices
with edge density $p=|E|\left/{n\choose 2}\right.$? Erd\H os and
Spencer proved in \cite{ErdSpe72} that $\alpha$ satisfies
$\alpha=\Omega(\sqrt{n})$ for constant $p$; their method can be
extended to show $\alpha=\Omega(\sqrt{np})$ for all values of $p=p(n)$.
We thus may think about $(p,O(\sqrt{np}))$-jumbled graphs on $n$
vertices as in a sense best possible pseudo-random graphs.
Although the fact that $G$ is $(p,\alpha)$-jumbled carries a lot
of diverse information about the graph, it says almost nothing (directly at
least) about small subgraphs, i.e. those spanned by subsets $U$ of size
$|U|=o(\alpha/p)$. Therefore in principle a $(p,\alpha)$-jumbled graph
can have subsets of size $|U|=O(\alpha/p)$ spanning a constant factor
fewer or more edges than predicted by the uniform distribution. In many
cases however quite a meaningful local information (such as the presence
of subgraphs of fixed size) can still be salvaged from global
considerations as we will see later.
Condition (\ref{jumbled}) has obviously a global nature as it applies to
{\em all} subsets of $G$, and there are exponentially many of them.
Therefore the following result of Thomason, providing a sufficient
condition for pseudo-randomness based on degrees and co-degrees only,
carries a certain element of surprise in it.
\begin{theo}\label{jumlocal}\cite{Tho87a}
Let $G$ be a graph on $n$ vertices with minimum degree $np$. If no pair
of vertices of $G$ has more than $np^2+l$ common neighbors, then $G$ is
$(p,\sqrt{(p+l)n})$-jumbled.
\end{theo}
The above theorem shows how the pseudo-randomness condition of
(\ref{jumbled}) can be ensured/checked by testing only a polynomial
number of easily accessible conditions. It is very useful for showing
that specific constructions are jumbled. Also, it can find algorithmic
applications, for example, a very similar approach has been used by
Alon, Duke, Lefmann, R\"odl and Yuster in their Algorithmic Regularity
Lemma \cite{AloDukLefRodYus94}.
As observed by Thomason, the minimum degree condition of Theorem
\ref{jumlocal} can be dropped if we require that every pair of vertices
has $(1+o(1))np^2$ common neighbors. One cannot however weaken the
conditions of the
theorem so as to only require that every {\em edge} is in at most
$np^2+l$ triangles.
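As an illustration of Theorem \ref{jumlocal} (our example, not one from the source text): the Paley graph on $13$ vertices is $6$-regular, and any two vertices have at most $3$ common neighbors, so the theorem applies with $p=6/13$ and $l=3-13(6/13)^2=3/13$, giving $\alpha=\sqrt{(p+l)n}=3$. The sketch below checks these numbers:

```python
import numpy as np

q = 13
squares = {(x * x) % q for x in range(1, q)}   # quadratic residues mod 13

# Paley graph: i ~ j iff i - j is a nonzero square mod q (here q = 1 mod 4,
# so -1 is a square and the relation is symmetric).
A = np.array([[1 if (i - j) % q in squares else 0 for j in range(q)]
              for i in range(q)], dtype=int)

degrees = A.sum(axis=1)
assert (degrees == 6).all()                    # 6-regular, so p = 6/13

codeg = A @ A                                  # codeg[i, j] = common nbrs
off_diag = codeg[~np.eye(q, dtype=bool)]
assert off_diag.max() == 3                     # at most np^2 + l = 3

p = 6 / q
l = off_diag.max() - q * p * p                 # l = 3/13
alpha = ((p + l) * q) ** 0.5
print(alpha)   # 3.0: Paley(13) is (6/13, 3)-jumbled by the theorem
```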
Another sufficient condition for pseudo-randomness, this time of global
nature, has also been provided in \cite{Tho87a}, \cite{Tho87b}:
\begin{theo}\label{jumglobal}\cite{Tho87a}
Let $G$ be a graph of order $n$, let $\eta n$ be an integer between 2
and $n-2$, and let $\omega>1$ be a real number. Suppose that each
induced subgraph $H$ of order $\eta n$ satisfies
$|e(H)-p{{\eta n}\choose 2}|\le \eta n\alpha$. Then $G$ is
$(p,7\sqrt{n\alpha/\eta}/(1-\eta))$-jumbled. Moreover $G$ contains a
subset $U\subseteq V(G)$ of size
$|U|\ge \left(1-\frac{380}{n(1-\eta)^2\omega}\right)n$ such that the induced
subgraph $G[U]$ is $(p,\omega\alpha)$-jumbled.
\end{theo}
Thomason also describes in \cite{Tho87a}, \cite{Tho87b} several
properties of jumbled graphs. We will not discuss these results in
detail here, as we will mostly adopt a different approach to
pseudo-randomness. Occasionally, however, we will compare some of the later
results to those obtained by Thomason.
\subsection{Equivalent definitions of weak pseudo-randomness}
Let us go back to the jumbledness condition (\ref{jumbled}) of Thomason.
As we have already noted it becomes non-trivial only when the error
term in (\ref{jumbled}) is $o(n^2p)$. Thus the latter condition can be
considered as the weakest possible condition for pseudo-randomness.
Guided by the above observation we now define the notion of weak
pseudo-randomness as follows. Let $(G_n)$ be a sequence of graphs,
where $G_n$ has $n$ vertices, and let $p=p(n)$ be a parameter ($p(n)$
is a typical density of graphs in the sequence). We say that the
sequence $(G_n)$ is {\em weakly pseudo-random} if the following
condition holds:
\begin{eqnarray}\label{weakpr}
\mbox{For all subsets $U\subseteq V(G_n)$,}\quad\quad
\left|e(U)-p{{|U|}\choose 2}\right|=o(n^2p)\ .
\end{eqnarray}
For notational convenience we will frequently write $G=G_n$, tacitly
assuming that $(G_n)$ is in fact a sequence of graphs.
Notice that the error term in the above condition of weak
pseudo-randomness does not depend on the size of the subset $U$.
Therefore it applies essentially only to subsets $U$ of linear size,
ignoring subsets $U$ of size $o(n)$. Hence (\ref{weakpr}) is potentially
much weaker than Thomason's jumbledness condition (\ref{jumbled}).
Corollary \ref{dist3} supplies us with the first example of weakly
pseudo-random graphs -- a random graph $G(n,p)$ is weakly pseudo-random
as long as $p(n)$ satisfies $np\rightarrow\infty$. We can thus say that
if a graph $G$ on $n$ vertices is weakly pseudo-random for a parameter
$p$, then the edge distribution of $G$ is close to that of $G(n,p)$.
In the previous subsection we have already seen examples of conditions
implying pseudo-randomness. In general one can expect that conditions of
various kinds that hold almost surely in $G(n,p)$ may imply or be
equivalent to weak pseudo-randomness of graphs with edge density $p$.
Let us first consider the case of the constant edge density $p$. This
case has been treated extensively in the celebrated paper of Chung,
Graham and Wilson from 1989 \cite{ChuGraWil89}, where they formulated
several equivalent conditions for weak pseudo-randomness. In order to
state their important result we need to introduce some notation.
Let $G=(V,E)$ be a graph on $n$ vertices.
For a graph $L$ we denote by $N^*_G(L)$ the
number of labeled induced copies of $L$ in $G$, and by $N_G(L)$ the
number of labeled not necessarily induced copies of $L$ in $G$. For a
pair of vertices $x,y\in V(G)$, we set $s(x,y)$ to be the number of
vertices of $G$ joined to $x$ and $y$ the same way: either to both or to
none. Also, $codeg(x,y)$ is the number of common neighbors of $x$ and
$y$ in $G$. Finally, we order the eigenvalues $\lambda_i$ of the
adjacency matrix $A(G)$ so that $|\lambda_1|\ge |\lambda_2|\ge\ldots\ge
|\lambda_n|$.
\begin{theo}\label{CGW}\cite{ChuGraWil89}
Let $p\in (0,1)$ be fixed. For any graph sequence $(G_n)$ the following
properties are equivalent:
\begin{description}
\item[$P_1(l)$:\quad]
For a fixed $l\ge 4$ for all graphs $L$ on $l$ vertices,
$$
N_G^*(L)=(1+o(1))n^lp^{|E(L)|}(1-p)^{{l\choose 2}-|E(L)|}\ .
$$
\item[$P_2(t)$:\quad]
Let $C_t$ denote the cycle of length $t$. Let $t\ge 4$ be even,
$$
e(G_n)=\frac{n^2p}{2}+o(n^2)\quad\mbox{and}\quad
N_G(C_t)\le (np)^t+o(n^t)\ .
$$
\item[$P_3$:\quad]
$
e(G_n)\ge \frac{n^2p}{2}+o(n^2)\quad\mbox{and}\quad
\lambda_1=(1+o(1))np,~~ \lambda_2=o(n)\ .
$
\item[$P_4$:\quad] For each subset $U\subset V(G)$,\quad
$e(U)=\frac{p}{2}|U|^2+o(n^2)$\ .
\item[$P_5$:\quad] For each subset $U\subset V(G)$ with
$|U|=\lfloor\frac{n}{2}\rfloor$,\quad we have\quad
$e(U)=\left(\frac{p}{8}+o(1)\right)n^2\ .$
\item[$P_6$:\quad]
$\sum_{x,y\in V} |s(x,y)-(p^2+(1-p)^2)n|=o(n^3)$\ .
\item[$P_7$:\quad]
$\sum_{x,y\in V} |codeg(x,y)-p^2n|=o(n^3)$\ .
\end{description}
\end{theo}
Note that condition $P_4$ of this remarkable theorem is in fact
identical to our condition (\ref{weakpr}) of weak pseudo-randomness.
Thus according to the theorem all conditions $P_1$--$P_3$, $P_5$--$P_7$ are
in fact equivalent to weak pseudo-randomness!
As noted by Chung et al. probably the most surprising fact
(although possibly less surprising for the reader in view of Theorem
\ref{jumlocal}) is that apparently the weak condition $P_2(4)$ is strong
enough to imply weak pseudo-randomness.
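The eigenvalue condition $P_3$ is easy to observe numerically. The simulation sketch below (an illustration, not a proof) samples $G(n,1/2)$ and checks that the largest adjacency eigenvalue is close to $np$ while all the others are an order of magnitude smaller:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400, 0.5

upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = upper + upper.T

eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]  # |λ1| >= |λ2| >= ...
print(eig[0] / (n * p))   # close to 1, matching λ1 = (1 + o(1))np
print(eig[1] / n)         # small, matching λ2 = o(n);
                          # about 2*sqrt(p*(1-p)/n) for G(n, p)

assert 0.95 < eig[0] / (n * p) < 1.05
assert eig[1] < 0.2 * n   # in fact λ2 = O(sqrt(n)) almost surely
```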
It is quite easy to add another condition to the equivalence list of the
above theorem: for all $U,W\subset V$, $e(U,W)=p|U||W|+o(n^2)$.
A condition of a very different type, related to the celebrated
Szemer\'edi Regularity Lemma has been added to the above list by
Simonovits and S\'os in \cite{SimSos91}. They showed that if a graph $G$
possesses a Szemer\'edi partition in which almost all pairs have density
$p$, then $G$ is weakly pseudo-random, and conversely if $G$ is weakly
pseudo-random then in every Szemer\'edi partition all pairs are regular
with density $p$. An extensive background on the Szemer\'edi Regularity
Lemma, containing in particular the definitions of the above used
notions, can be found in a survey paper of Koml\'os and Simonovits
\cite{KomSim96}.
The reader may have gotten the feeling that basically every property of
random graphs $G(n,p)$ ensures weak pseudo-randomness. This feeling is
quite misleading, and one should be careful when formulating properties
equivalent to pseudo-randomness. Here is an example provided by Chung et
al. Let $G$ be a graph with vertex set $\{1,\ldots,4n\}$ defined as
follows: the subgraph of $G$ spanned by the first $2n$ vertices is a
complete bipartite graph $K_{n,n}$, the subgraph spanned by the last
$2n$ vertices is the complement of $K_{n,n}$, and for every pair
$(i,j),1\le i\le 2n, 2n+1\le j\le 4n$, the edge $(i,j)$ is present in
$G$ independently with probability $0.5$. Then $G$ is almost surely
a graph on $4n$
vertices with edge density $0.5$. One can verify that $G$ has properties
$P_1(3)$ and $P_2(2t+1)$ for every $t\ge 1$, but is obviously very far
from being pseudo-random (contains a clique and an independent set of
one quarter of its size). Hence $P_1(3)$ and $P_2(2t+1)$ are not
pseudo-random properties. This example shows also the real difference
between even and odd cycles in this context -- recall that Property
$P_2(2t)$ does imply pseudo-randomness.
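The counterexample is easy to build and inspect; the small instance below uses $n=8$ (so $32$ vertices), and the clique and independent set are located by construction rather than by search:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
N = 4 * n
A = np.zeros((N, N), dtype=int)

# First 2n vertices: complete bipartite K_{n,n} with parts {0..n-1}, {n..2n-1}.
A[0:n, n:2*n] = 1
# Last 2n vertices: complement of K_{n,n}, i.e. two disjoint n-cliques.
A[2*n:3*n, 2*n:3*n] = 1 - np.eye(n, dtype=int)
A[3*n:4*n, 3*n:4*n] = 1 - np.eye(n, dtype=int)
# Random bipartite edges between the two halves, each with probability 1/2.
A[0:2*n, 2*n:4*n] = (rng.random((2*n, 2*n)) < 0.5).astype(int)
upper = np.triu(A, k=1)
A = upper + upper.T

# Density is close to 1/2, yet the graph is far from pseudo-random:
print(A.sum() / (N * (N - 1)))            # edge density near 0.5
clique = range(2*n, 3*n)                  # a clique on N/4 vertices
assert all(A[i, j] for i in clique for j in clique if i != j)
indep = range(0, n)                       # an independent set, N/4 vertices
assert not any(A[i, j] for i in indep for j in indep if i != j)
```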
A possible explanation of the somewhat disturbing phenomenon described
above has been suggested by Simonovits and S\'os in
\cite{SimSos97}. They noticed that the above discussed properties are
not hereditary in the sense that the fact that the whole graph $G$
possesses one of these properties does not imply that large induced
subgraphs of $G$ also have it. A property is called {\em hereditary} in
this context if it is assumed to hold for all sufficiently large
subgraphs $F$ of our graph $G$ with the same error term as for $G$.
Simonovits and S\'os proved that adding this hereditary condition gives
significant extra strength to many properties making them
pseudo-random.
\begin{theo}\cite{SimSos97}\label{SSher}
Let $L$ be a fixed graph on $l$ vertices, and let $p\in (0,1)$ be fixed.
Let $(G_n)$ be a sequence of graphs. If for every induced subgraph
$H\subseteq G_n$ on $h$ vertices,
$$
N_H(L)=p^{|E(L)|}h^l+o(n^l)\,,
$$
then $(G_n)$ is weakly pseudo-random, i.e. property $P_4$ holds.
\end{theo}
Two main distinctive features of the last result compared to Theorem
\ref{CGW} are: (a) $P_1(3)$ assumed hereditarily implies
pseudo-randomness; and (b) requiring the right number of copies of a
{\em single} graph $L$ on $l$ vertices is enough, compared to Condition
$P_1(l)$ required to hold for {\em all} graphs on $l$ vertices
simultaneously.
Let us switch now to the case of vanishing edge density $p(n)=o(1)$.
This case has been treated in two very recent papers of Chung and Graham
\cite{ChuGra02} and of Kohayakawa, R\"odl and Sissokho
\cite{KohRodSis02}.
Here the picture becomes significantly more complicated compared to the
dense case. In particular, there exist graphs with very balanced edge
distribution not containing a single copy of some fixed subgraphs (see
the Erd\H os-R\'enyi graph and the Alon graph in the next section
(Examples 6, 9, resp.)).
In an attempt to find properties equivalent to weak pseudo-randomness in
the sparse case, Chung and Graham define the following properties in
\cite{ChuGra02} :
\noindent{\bf CIRCUIT($t$):} The number of closed walks
$w_0,w_1,\ldots,w_t=w_0$ of length $t$ in $G$ is $(1+o(1))(np)^t$;
\noindent{\bf CYCLE($t$):} The number of labeled $t$-cycles in $G$ is
$(1+o(1))(np)^t$;
\noindent{\bf EIG:} The eigenvalues $\lambda_i$,
$|\lambda_1|\ge|\lambda_2|\ge\ldots\ge|\lambda_n|$, of the adjacency
matrix of $G$ satisfy:
\begin{eqnarray*}
\lambda_1&=&(1+o(1))np\,,\\
|\lambda_i|&=&o(np), i>1\,.
\end{eqnarray*}
\noindent{\bf DISC:} For all $X,Y\subset V(G)$,
$$
|e(X,Y)-p|X||Y||=o(pn^2)\ .
$$
(DISC here is in fact DISC(1) in \cite{ChuGra02}).
\begin{theo}\label{CG1}\cite{ChuGra02}
Let $(G=G_n: n\rightarrow \infty)$ be a sequence of graphs with
$e(G_n)=(1+o(1))p{n\choose 2}$. Then the following implications hold for
all $t\ge 1$:
$$
CIRCUIT(2t)\Rightarrow EIG\Rightarrow DISC\ .
$$
\end{theo}
\noindent{\bf Proof.\quad}
To prove the first implication, let $A$ be the adjacency matrix of $G$,
and consider the trace $Tr(A^{2t})$. The $(i,i)$-entry of $A^{2t}$ is
equal to the number of closed walks of length $2t$
starting and ending at $i$, and hence $Tr(A^{2t})=(1+o(1))(np)^{2t}$.
On the other hand, since $A$ is symmetric it is similar to the diagonal
matrix $D=diag(\lambda_1,\lambda_2,\ldots,\lambda_n)$, and therefore
$Tr(A^{2t})=\sum_{i=1}^{n}\lambda_i^{2t}$. We obtain:
$$
\sum_{i=1}^n\lambda_i^{2t}=(1+o(1))(np)^{2t}\ .
$$
Since the first eigenvalue of $G$ is easily shown to be as large as its
average degree, it follows that $\lambda_1\ge
2|E(G)|/|V(G)|=(1+o(1))np$. Combining these two facts we derive that
$\lambda_1=(1+o(1))np$ and $|\lambda_i|=o(np)$ as required.
The second implication will be proven in the next subsection. \hfill
$\Box$
\medskip
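The first step of this argument, that $Tr(A^{2t})$ counts the closed walks of length $2t$, is easy to verify computationally on a small example. Here is a short Python sketch of ours (for illustration only), using the 5-cycle:

```python
def matmul(A, B):
    """Multiply two square 0/1 (or integer) matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def closed_walks(A, t):
    """Count closed walks w_0, w_1, ..., w_t = w_0 of length t by brute force."""
    n = len(A)
    def walks_from(v, u, steps):
        if steps == 0:
            return 1 if v == u else 0
        return sum(walks_from(w, u, steps - 1) for w in range(n) if A[v][w])
    return sum(walks_from(v, v, t) for v in range(n))

# adjacency matrix of the 5-cycle C_5
n = 5
A = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)] for i in range(n)]

for t in (2, 4, 6):
    P = A
    for _ in range(t - 1):
        P = matmul(P, A)            # P = A^t
    trace = sum(P[i][i] for i in range(n))
    assert trace == closed_walks(A, t)
```

For $t=2$ both sides equal $2|E(G)|=10$, as every closed walk of length 2 traverses an edge back and forth.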
Both reverse implications are false in general. To see why
$DISC\not\Rightarrow EIG$ take a graph $G_0$ on $n-1$ vertices with all
degrees equal to $(1+o(1))n^{0.1}$ and having property $DISC$ (see next
section for examples of such graphs). Now add to $G_0$ a vertex $v^*$
and connect it to any set of size $n^{0.8}$ in $G_0$, let $G$
be the obtained graph. Since $G$ is obtained from $G_0$ by adding
$o(|E(G_0)|)$ edges, $G$ still satisfies $DISC$. On the other hand, $G$
contains a star $S$ of size $n^{0.8}$ with a center at $v^*$,
and hence $\lambda_1(G)\ge\lambda_1(S)=\sqrt{n^{0.8}-1}\gg |E(G)|/n$
(see, e.g. Chapter 11 of \cite{Lov93} for the relevant proofs). This
solves an open question from \cite{ChuGra02}.
The Erd\H os-R\'enyi graph from the next section is easily seen to
satisfy $EIG$, but fails to satisfy $CIRCUIT(4)$. Chung and Graham
provide an alternative example in \cite{ChuGra02} (Example 1).
The above discussion indicates that one probably needs to impose some
additional condition on the graph $G$ to glue all these pieces together
and to make the above stated properties equivalent. One such condition
has been suggested by Chung and Graham who defined:
\medskip
\noindent{\bf U($t$):} For some absolute constant $c$, all degrees in
$G$ satisfy: $d(v)< cnp$, and for every pair of vertices $x,y\in G$
the number
$e_{t-1}(x,y)$ of walks of length $t-1$ from $x$ to $y$ satisfies:
$e_{t-1}(x,y)\le cn^{t-2}p^{t-1}$.
\medskip
Notice that $U(t)$ can only hold for $p>c'n^{-1+1/(t-1)}$, where $c'$
depends on $c$. Also, every dense graph ($p=\Theta(1)$) satisfies
$U(t)$.
As it turns out, adding property $U(t)$ makes all the above defined
properties equivalent and thus equivalent to the notion of weak
pseudo-randomness (that can be identified with property $DISC$):
\begin{theo}\label{CG2}\cite{ChuGra02}
Suppose for some constant $c>0$, $p(n)>cn^{-1+1/(t-1)}$, where $t\ge 2$.
For any family of graphs $G_n$, $|E(G_n)|=(1+o(1))p{n\choose 2}$,
satisfying $U(t)$, the following properties are all equivalent:
$CIRCUIT(2t), CYCLE(2t), EIG$ and $DISC$.
\end{theo}
Theorem \ref{CG2} can be viewed as a sparse analog of Theorem \ref{CGW}
as it also provides a list of conditions equivalent to weak
pseudo-randomness.
Further properties implying or equivalent to pseudo-randomness,
including local statistics conditions, are given in \cite{KohRodSis02}.
\subsection{Eigenvalues and pseudo-random graphs}
In this subsection we describe an approach to pseudo-randomness based on
graph eigenvalues -- the approach most frequently used in this survey.
Although the eigenvalue-based condition
is not as general as the jumbledness
condition of Thomason or some other properties described in the previous
subsection, its power and convenience are so appealing that they
certainly constitute a good enough reason to prefer this approach. Below
we first provide a necessary background on graph spectra
and then derive quantitative estimates connecting the eigenvalue gap
and edge distribution.
Recall that the {\em adjacency matrix} $A$ of a graph $G=(V,E)$ with vertex
set $V=\{1,\ldots,n\}$ is the $n$-by-$n$ matrix whose entry $a_{ij}$ is
1 if $(i,j)\in E(G)$, and is 0 otherwise. Thus $A$ is a symmetric $0,1$
matrix with zeroes along the main diagonal, and we can apply the
standard machinery of eigenvalues and eigenvectors of real symmetric
matrices. It follows that all eigenvalues of $A$ (usually also
called the eigenvalues of the graph $G$ itself) are real, and we denote
them by $\lambda_1\ge\lambda_2\ge\ldots\ge \lambda_n$. Also, there is
an orthonormal basis $B=\{x_1,\ldots,x_n\}$ of the euclidean space $R^n$
composed of eigenvectors of $A$: $Ax_i=\lambda_i x_i$, $x_i^tx_i=1$,
$i=1,\ldots,n$. The matrix $A$ can be decomposed then as:
$A=\sum_{i=1}^n\lambda_ix_ix_i^t$ -- the so called spectral
decomposition of $A$. (Notice that the product $xx^t$, $x\in R^n$, is an
$n$-by-$n$ matrix of rank 1; if $x,y,z\in R^n$ then
$y^t(xx^t)z=(y^tx)(x^tz)$). Every vector $y\in R^n$ can be easily
represented in basis $B$: $y=\sum_{i=1}^n(y^tx_i)x_i$. Therefore, for
$y,z\in R^n$, $y^tz=\sum_{i=1}^n (y^tx_i)(z^tx_i)$ and
$\|y\|^2=y^ty=\sum_{i=1}^n (y^tx_i)^2$.
All the above applies in fact to all real symmetric matrices.
Since the adjacency matrix $A$ of a graph $G$ is a matrix with
non-negative entries, one can derive some important extra features
of $A$, most notably the Perron-Frobenius Theorem, that reads in
the graph context as follows: if $G$ is connected then the
multiplicity of $\lambda_1$ is one, all coordinates of the first
eigenvector $x_1$ can be assumed to be strictly positive, and
$|\lambda_i|\le \lambda_1$ for all $i\ge 2$. Thus, graph spectrum
lies entirely in the interval $[-\lambda_1,\lambda_1]$.
For the most important special case of regular graphs
Perron-Frobenius implies the following corollary:
\begin{prop}\label{PF}
Let $G$ be a $d$-regular graph on $n$ vertices. Let
$\lambda_1\ge\lambda_2\ge\ldots\ge \lambda_n$ be the eigenvalues
of $G$. Then $\lambda_1=d$ and $-d\le \lambda_i \le d$ for all
$1\le i\le n$. Moreover, if $G$ is connected then the first
eigenvector $x_1$ is proportional to the all one vector
$(1,\ldots,1)^t\in R^n$, and $\lambda_i<d$ for all $i\ge 2$.
\end{prop}
To derive the above claim from the Perron-Frobenius Theorem
observe that $e=(1,\ldots,1)$ is immediately seen to be an
eigenvector of $A(G)$ corresponding to the eigenvalue $d$: $Ae=de$.
The positivity of the coordinates of $e$ implies then that $e$ is not
orthogonal to the first eigenvector, and hence is
in fact proportional to $x_1$ of $A(G)$.
Proposition \ref{PF} can be also proved directly without relying
on the Perron-Frobenius Theorem.
We remark that $\lambda_n=-d$ is possible, in fact it holds if and
only if the graph $G$ is bipartite.
All this background information, presented above in a somewhat
condensed form, can be found in many textbooks in Linear Algebra.
Readers more inclined to consult combinatorial books can find it
for example in a recent monograph of Godsil and Royle on Algebraic
Graph Theory \cite{GodRoy01}.
We now prove a well known theorem (see its variant, e.g., in Chapter 9,
\cite{AloSpe00}) bridging between graph spectra and edge distribution.
\begin{theo}\label{eigen}
Let $G$ be a $d$-regular graph on $n$ vertices. Let
$d=\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_n$ be the eigenvalues of $G$.
Denote
$$
\lambda=\max_{2\le i\le n}|\lambda_i|\,.
$$
Then for every two subsets $U,W\subset V$,
\begin{equation}\label{eig}
\left|e(U,W)-\frac{d|U||W|}{n}\right|\le
\lambda\sqrt{|U||W|\left(1-\frac{|U|}{n}\right)
\left(1-\frac{|W|}{n}\right)}\ .
\end{equation}
\end{theo}
\noindent{\bf Proof.\ } Let $B=\{x_1,\ldots,x_n\}$ be an orthonormal
basis of $R^n$ composed from eigenvectors of $A$: $Ax_i=\lambda_ix_i$,
$1\le i\le n$. We represent $A=\sum_{i=1}^n\lambda_ix_ix_i^t$. Denote
\begin{eqnarray*}
A_1&=&\lambda_1x_1x_1^t\,,\\
{\cal E}&=&\sum_{i=2}^n\lambda_ix_ix_i^t\,,
\end{eqnarray*}
then $A=A_1+{\cal E}$.
Let $u=|U|$, $w=|W|$ be the cardinalities of $U,W$, respectively. We
denote the characteristic vector of $U$ by $\chi_U\in R^n$, i.e.
$\chi_U(i)=1$ if $i\in U$, and $\chi_U(i)=0$ otherwise. Similarly, let
$\chi_W\in R^n$ be the characteristic vector of $W$. We represent
$\chi_U$, $\chi_W$ according to $B$:
\begin{eqnarray*}
\chi_U &=& \sum_{i=1}^n \alpha_ix_i,\quad \alpha_i=\chi_U^tx_i,
\quad \sum_{i=1}^n\alpha_i^2=\|\chi_U\|^2=u\,,\\
\chi_W &=& \sum_{i=1}^n \beta_ix_i, \quad \beta_i=\chi_W^tx_i,
\quad \sum_{i=1}^n\beta_i^2=\|\chi_W\|^2=w\ .
\end{eqnarray*}
It follows easily from the definitions of $A$, $\chi_U$ and $\chi_W$
that the product $\chi_U^tA\chi_W$ counts exactly the number of edges of
$G$ with one endpoint in $U$ and the other one in $W$, i.e.
$$
e(U,W)=\chi_U^tA\chi_W\ =\chi_U^tA_1\chi_W+\chi_U^t{\cal E}\chi_W\ .
$$
Now we estimate the last two summands separately, the first of them
will be the main term for $e(U,W)$, the second one will be the error
term. Substituting the expressions for $\chi_U$, $\chi_W$ and recalling
the orthonormality of $B$, we get:
\begin{equation}\label{fir}
\chi_U^tA_1\chi_W=\left(\sum_{i=1}^n\alpha_ix_i\right)^t
(\lambda_1x_1x_1^t)
\left(\sum_{j=1}^n\beta_jx_j\right)=
\sum_{i=1}^n\sum_{j=1}^n \alpha_i\lambda_1\beta_j
(x_i^tx_1)(x_1^tx_j)=\alpha_1\beta_1\lambda_1\ .
\end{equation}
Similarly,
\begin{equation}\label{rest}
\chi_U^t{\cal E}\chi_W=\left(\sum_{i=1}^n\alpha_ix_i\right)^t
\left(\sum_{j=2}^n\lambda_jx_jx_j^t\right)
\left(\sum_{k=1}^n\beta_kx_k\right)=
\sum_{i=2}^n\alpha_i\beta_i\lambda_i\ .
\end{equation}
Recall now that $G$ is $d$-regular. Then according to Proposition
\ref{PF}, $\lambda_1=d$ and $x_1=\frac{1}{\sqrt{n}}(1,\ldots,1)^t$. We
thus get: $\alpha_1=\chi_U^tx_1=u/\sqrt{n}$ and
$\beta_1=\chi_W^tx_1=w/\sqrt{n}$. Hence it follows from (\ref{fir}) that
$\chi_U^tA_1\chi_W=duw/n$.
Now we estimate the absolute value of the error term
$\chi_U^t{\cal E}\chi_W$. Recalling (\ref{rest}), the definition of
$\lambda$ and the obtained values of $\alpha_1$, $\beta_1$, we
derive, applying the Cauchy-Schwarz inequality:
\begin{eqnarray*}
|\chi_U^t{\cal E}\chi_W|&=&|\sum_{i=2}^n\alpha_i\beta_i\lambda_i|
\le\lambda\sum_{i=2}^n|\alpha_i\beta_i|\le
\lambda\sqrt{\sum_{i=2}^n\alpha_i^2\sum_{i=2}^n\beta_i^2}\\
&=&\lambda\sqrt{(\|\chi_U\|^2-\alpha_1^2)(\|\chi_W\|^2-\beta_1^2)}=
\lambda\sqrt{\left(u-\frac{u^2}{n}\right)\left(w-\frac{w^2}{n}\right)}
\ .
\end{eqnarray*}
The theorem follows.\hfill $\Box$
\bigskip
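To illustrate the strength of the bound (\ref{eig}) on a concrete graph, one can take the Paley graph $P_{13}$, discussed in the next section, whose non-trivial eigenvalues have absolute value at most $(\sqrt{13}+1)/2$, and test the inequality on random pairs of disjoint vertex subsets. A Python sketch of ours (illustrative only; for disjoint $U,W$ the quantity $e(U,W)$ is the plain count of edges between them):

```python
import random
from math import sqrt

q = 13                                  # 13 = 1 mod 4, so -1 is a square mod 13
squares = {(x * x) % q for x in range(1, q)}
adj = [[int(i != j and (i - j) % q in squares) for j in range(q)]
       for i in range(q)]
d = sum(adj[0])                         # P_q is (q-1)/2-regular, here d = 6
lam = (sqrt(q) + 1) / 2                 # second eigenvalue of the Paley graph

rng = random.Random(1)
for _ in range(200):
    verts = list(range(q))
    rng.shuffle(verts)
    u, w = rng.randint(1, 6), rng.randint(1, 6)
    U, W = verts[:u], verts[u:u + w]    # two disjoint vertex subsets
    e_UW = sum(adj[x][y] for x in U for y in W)
    err = abs(e_UW - d * len(U) * len(W) / q)
    bound = lam * sqrt(len(U) * len(W) * (1 - len(U) / q) * (1 - len(W) / q))
    assert err <= bound + 1e-9          # inequality (eig) holds
```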
The above proof can be extended to the irregular (general) case.
Since the obtained quantitative bounds on edge distribution turn out to
be somewhat cumbersome, we will just indicate how they can be obtained.
Let $G=(V,E)$ be a graph on $n$ vertices with {\em average} degree $d$.
Assume that the eigenvalues of $G$ satisfy $\lambda<d$, with $\lambda$
as defined in the theorem. Denote
$$
K=\sum_{v\in V}(d(v)-d)^2\ .
$$
The parameter $K$ is a measure of irregularity of $G$. Clearly $K=0$
if and only if $G$ is $d$-regular. Let
$e=\frac{1}{\sqrt{n}}(1,\ldots,1)^t$. We represent $e$ in the basis
$B=\{x_1,\ldots,x_n\}$ of the eigenvectors of $A(G)$:
$$
e=\sum_{i=1}^n\gamma_ix_i,\quad \gamma_i=e^tx_i,\quad
\sum_{i=1}^n\gamma_i^2=\|e\|^2=1\ .
$$
Denote $z=\frac{1}{\sqrt{n}}(d(v_1)-d,\ldots,d(v_n)-d)^t$, then
$\|z\|^2=K/n$. Notice that
$Ae=\frac{1}{\sqrt{n}}(d(v_1),\ldots,d(v_n))^t=de+z$, and therefore
$z=Ae-de=\sum_{i=1}^n\gamma_i(\lambda_i-d)x_i$. This implies:
\begin{eqnarray*}
\frac{K}{n}&=&\|z\|^2= \sum_{i=1}^n\gamma_i^2(\lambda_i-d)^2\ge
\sum_{i=2}^n\gamma_i^2(\lambda_i-d)^2\\
&\ge& (d-\lambda)^2\sum_{i=2}^n\gamma_i^2\ .
\end{eqnarray*}
Hence $\sum_{i=2}^n\gamma_i^2\le \frac{K}{n(d-\lambda)^2}$. It follows
that $\gamma_1^2=1-\sum_{i=2}^n\gamma_i^2\ge 1-\frac{K}{n(d-\lambda)^2}$
and
$$
\gamma_1\ge \gamma_1^2 \ge 1-\frac{K}{n(d-\lambda)^2}\ .
$$
Now we estimate the distance between the vectors $e$ and $x_1$ and show
that they are close given that the parameter $K$ is small.
\begin{eqnarray*}
\|e-x_1\|^2&=& (e-x_1)^t(e-x_1)=e^te+x_1^tx_1-2e^tx_1=1+1-2\gamma_1
=2-2\gamma_1\\
&\le& \frac{2K}{n(d-\lambda)^2}\ .
\end{eqnarray*}
We now return to expressions (\ref{fir}) and (\ref{rest}) from the proof
of Theorem \ref{eigen}. In order to estimate the main term
$\chi_U^tA_1\chi_W$, we bound the coefficients $\alpha_1$, $\beta_1$
and $\lambda_1$ as follows:
$$
\alpha_1=\chi_U^tx_1=\chi_U^te+\chi_U^t(x_1-e)=
\frac{u}{\sqrt{n}}+\chi_U^t(x_1-e)\ ,
$$
and therefore
\begin{equation}\label{alpha1}
\left|\alpha_1-\frac{u}{\sqrt{n}}\right|=|\chi_U^t(x_1-e)|
\le \|\chi_U\|\cdot\|x_1-e\|\le \frac{\sqrt{\frac{2Ku}{n}}}{d-\lambda}
\ .
\end{equation}
In a similar way one gets:
\begin{equation}\label{beta1}
\left|\beta_1-\frac{w}{\sqrt{n}}\right|\le
\frac{\sqrt{\frac{2Kw}{n}}}{d-\lambda}
\ .
\end{equation}
Finally, to estimate from above the absolute value of the difference
between $\lambda_1$ and $d$ we argue as follows:
$$
\frac{K}{n}=\|z\|^2=\sum_{i=1}^n\gamma_i^2(\lambda_i-d)^2\ge
\gamma_1^2(\lambda_1-d)^2\,,
$$
and therefore
\begin{equation}\label{lambda1}
|\lambda_1-d|\le \frac{1}{\gamma_1}\sqrt{\frac{K}{n}}
\le \frac{n(d-\lambda)^2}{n(d-\lambda)^2-K}
\sqrt{\frac{K}{n}}\ .
\end{equation}
Summarizing, we see from (\ref{alpha1}), (\ref{beta1}) and
(\ref{lambda1}) that the main term in the product $\chi_U^tA_1\chi_W$ is
equal to $\frac{duw}{n}$, just as in the regular case, and the error
term is governed by the parameter $K$.
In order to estimate the error term $\chi_U^t{\cal E} \chi_W$ we use
(\ref{rest}) to get:
\begin{eqnarray*}
\hspace{3.1cm}
|\chi_U^t{\cal E}\chi_W|&=&
\left|\sum_{i=2}^n\alpha_i\beta_i\lambda_i\right|
\le \lambda\sum_{i=2}^n|\alpha_i\beta_i|
\le \lambda \sqrt{\sum_{i=2}^n\alpha_i^2\sum_{i=2}^n\beta_i^2}\\
&\le& \lambda \sqrt{\sum_{i=1}^n\alpha_i^2\sum_{i=1}^n\beta_i^2}
= \lambda \|\chi_U\|\,\|\chi_W\|=\lambda\sqrt{uw}. \hspace{3.1cm} \Box
\end{eqnarray*}
Applying the techniques developed above we can now prove the second
implication of Theorem \ref{CG1}. Let us first prove that $EIG$ implies
$K=o(nd^2)$, where $d=(1+o(1))np$ is as before the average degree of
$G$. Indeed, for every vector $y\in R^n$ we have $\|Ay\|\le
\lambda_1\|y\|$; applying this to the all one vector
$f=(1,\ldots,1)^t$ we get
$$
\lambda_1^2n=\lambda_1^2f^tf\ge (Af)^t(Af)=\sum_{v\in V}d^2(v)\ .
$$
Hence from $EIG$ we get: $\sum_{v\in V}d^2(v)\le (1+o(1))nd^2$.
As $\sum_{v}d(v)=nd$, it follows that:
$$
K=\sum_{v\in V}(d(v)-d)^2=\sum_{v\in V}d^2(v)-2d\sum_{v\in V}d(v)+nd^2
\le (1+o(1))nd^2-2nd^2+nd^2=o(nd^2)\,,
$$
as promised. Substituting this into
estimates (\ref{alpha1}), (\ref{beta1}), (\ref{lambda1}) and using
$\lambda=o(d)$ of $EIG$ we get:
\begin{eqnarray*}
\alpha_1 &=& \frac{u}{\sqrt{n}}+o(\sqrt{u})\,,\\
\beta_1 &=& \frac{w}{\sqrt{n}}+o(\sqrt{w})\,,\\
\lambda_1 &=& (1+o(1))d\,,
\end{eqnarray*}
and therefore
$$
\chi_U^tA_1\chi_W=\frac{duw}{n}+o(dn)\ .
$$
Also, according to $EIG$, $\lambda=o(d)$, which implies:
$$
\chi_U^t{\cal E}\chi_W=o(d\sqrt{uw})=o(dn)\,,
$$
and the claim follows. \hfill $\Box$
Theorem \ref{eigen} is a truly remarkable result. Not only does it
connect two seemingly unrelated graph characteristics -- edge
distribution and spectrum -- it also provides a very good quantitative
handle for the uniformity of edge distribution, based on easily
computable, both theoretically and practically, graph parameters --
graph eigenvalues. According to the bound (\ref{eig}), a polynomial number
of parameters can control quite well the number of edges in
exponentially many subsets of vertices.
The parameter $\lambda$ in the formulation of Theorem \ref{eigen} is
usually called the {\em second eigenvalue} of the $d$-regular graph $G$
(the first and the trivial one being $\lambda_1=d$). There is
certain inaccuracy though in this term, as in fact
$\lambda=\max\{\lambda_2,-\lambda_n\}$. Later we will call, following
Alon, a $d$-regular
graph $G$ on $n$ vertices in which all eigenvalues, but the first one,
are at most $\lambda$ in their absolute values, an {\em
$(n,d,\lambda)$-graph}.
Comparing (\ref{eig}) with the definition of jumbled graphs by Thomason
we see that an $(n,d,\lambda)$-graph $G$ is $(d/n,\lambda)$-jumbled. Hence
the parameter $\lambda$ (or in other words, the so called {\em spectral
gap} -- the difference between $d$ and $\lambda$) is responsible for
pseudo-random properties of such a graph. The smaller the value of
$\lambda$ is compared to $d$, the closer the edge distribution of
$G$ is to the ideal uniform distribution. A natural question is then: how
small can $\lambda$ be? It is easy to see that as long as $d\le
(1-\epsilon)n$, $\lambda=\Omega(\sqrt{d})$. Indeed, the trace of $A^2$
satisfies:
$$
nd=2|E(G)|=Tr(A^2)=\sum_{i=1}^n\lambda_i^2\le d^2+(n-1)\lambda^2
\le (1-\epsilon)nd+(n-1)\lambda^2\,,
$$
and $\lambda=\Omega(\sqrt{d})$ as claimed. More accurate bounds are
known for smaller values of $d$ (see, e.g. \cite{Nil91}). Based on these
estimates we can say that an $(n,d,\lambda)$-graph $G$, for which
$\lambda=\Theta(\sqrt{d})$, is a very good pseudo-random graph. We will
see several examples of such graphs in the next section.
\subsection{Strongly regular graphs}
A {\em strongly regular graph} $srg(n,d,\eta,\mu)$ is a $d$-regular
graph on $n$ vertices in which every pair of adjacent vertices has
exactly $\eta$ common neighbors and every pair of non-adjacent vertices
has exactly $\mu$ common neighbors. (We changed the very standard notation
in the above definition so as to avoid interference with other
notational conventions throughout this paper and to make it more
coherent; usually the parameters are denoted $(v,k,\lambda,\mu)$.) Two simple
examples of strongly regular graphs are the pentagon $C_5$ that has
parameters $(5,2,0,1)$, and the Petersen graph whose parameters are
$(10,3,0,1)$. Strongly regular graphs were introduced by Bose in 1963
\cite{Bos63} who also pointed out their tight connections with finite
geometries. As follows from the definition, strongly regular graphs
are highly
regular structures, and one can safely predict that algebraic methods
are extremely useful in their study. We do not intend to provide any
systematic coverage of this fascinating concept here, addressing the
reader to the vast literature on the subject instead (see, e.g.,
\cite{BrovLi84}). Our aim here is to calculate the eigenvalues of
strongly regular graphs and then to connect them with pseudo-randomness,
relying on results from the previous subsection.
\begin{prop}\label{srg}
Let $G$ be a connected strongly regular graph with parameters
$(n,d,\eta,\mu)$. Then the eigenvalues of $G$ are: $\lambda_1=d$ with
multiplicity $s_1=1$,
$$
\lambda_2=\frac{1}{2}\left(\eta-\mu+\sqrt{(\eta-\mu)^2+4(d-\mu)}\right)
$$
and
$$
\lambda_3=\frac{1}{2}\left(\eta-\mu-\sqrt{(\eta-\mu)^2+4(d-\mu)}\right)
\,,
$$
with multiplicities
$$
s_2=\frac{1}{2}\left(n-1+\frac{(n-1)(\mu-\eta)-2d}
{\sqrt{(\mu-\eta)^2+4(d-\mu)}}\right)
$$
and
$$
s_3=\frac{1}{2}\left(n-1-\frac{(n-1)(\mu-\eta)-2d}
{\sqrt{(\mu-\eta)^2+4(d-\mu)}}\right)\,,
$$
respectively.
\end{prop}
\noindent{\bf Proof.\ } Let $A$ be the adjacency matrix of $G$. By
the definition of $A$ and the fact that $A$ is symmetric with zeroes on
the main diagonal, the $(i,j)$-entry of the square $A^2$ counts the
number of common neighbors of $v_i$ and $v_j$ in $G$ if $i\ne j$, and is
equal to the degree $d(v_i)$ in case $i=j$.
The statement that $G$ is $srg(n,d,\eta,\mu)$ is equivalent then to:
\begin{equation}\label{srg1}
AJ=dJ,\quad\quad A^2=(d-\mu)I+\mu J+(\eta-\mu)A\ ,
\end{equation}
where $J$ is the $n$-by-$n$ all-one matrix and $I$ is the $n$-by-$n$
identity matrix.
Since $G$ is $d$-regular and connected, we obtain from the
Perron-Frobenius Theorem that $\lambda_1=d$ is an eigenvalue of $G$ with
multiplicity 1 and with $e=(1,\ldots,1)^t$ as the corresponding
eigenvector. Let $\lambda\ne d$ be another eigenvalue of $G$, and let
$x\in R^n$ be a corresponding eigenvector. Then $x$ is orthogonal to
$e$, and therefore $Jx=0$. Applying both sides of the second identity in
(\ref{srg1}) to $x$ we get the equation:
$\lambda^2x=(d-\mu)x+(\eta-\mu)\lambda x$, which results in the
following quadratic equation for $\lambda$:
$$
\lambda^2+(\mu-\eta)\lambda+(\mu-d)=0\ .
$$
This equation has two solutions $\lambda_2$ and $\lambda_3$ as defined
in the proposition formulation. If we denote by $s_2$ and $s_3$ the
respective multiplicities of $\lambda_2$ and $\lambda_3$ as eigenvalues
of $A$, we get:
$$
1+s_2+s_3=n,\quad\quad Tr(A)=d+s_2\lambda_2+s_3\lambda_3=0\ .
$$
Solving the above system of linear equations for $s_2$ and $s_3$ we obtain
the assertion of the proposition.\hfill $\Box$
\medskip
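The formulas of Proposition \ref{srg} are easy to check mechanically. The short Python sketch below (ours, for illustration) evaluates them for the Petersen graph $srg(10,3,0,1)$ and the pentagon $srg(5,2,0,1)$, and verifies the two linear constraints used at the end of the proof:

```python
from math import sqrt, isclose

def srg_spectrum(n, d, eta, mu):
    """Non-trivial eigenvalues and their multiplicities of a connected
    srg(n, d, eta, mu), following the formulas of the proposition."""
    disc = sqrt((eta - mu) ** 2 + 4 * (d - mu))
    lam2 = ((eta - mu) + disc) / 2
    lam3 = ((eta - mu) - disc) / 2
    s2 = ((n - 1) + ((n - 1) * (mu - eta) - 2 * d) / disc) / 2
    s3 = ((n - 1) - ((n - 1) * (mu - eta) - 2 * d) / disc) / 2
    return lam2, lam3, s2, s3

# Petersen graph: srg(10, 3, 0, 1)
lam2, lam3, s2, s3 = srg_spectrum(10, 3, 0, 1)
assert (lam2, lam3, s2, s3) == (1.0, -2.0, 5.0, 4.0)
# consistency: multiplicities sum to n - 1, and the trace of A vanishes
assert isclose(1 + s2 + s3, 10) and isclose(3 + s2 * lam2 + s3 * lam3, 0)

# pentagon C_5: srg(5, 2, 0, 1) has eigenvalues (-1 +/- sqrt(5))/2
lam2, lam3, _, _ = srg_spectrum(5, 2, 0, 1)
assert isclose(lam2, (sqrt(5) - 1) / 2) and isclose(lam3, -(sqrt(5) + 1) / 2)
```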
Using the bound (\ref{eig}) we can derive from the above proposition that if
the parameters of a strongly regular graph $G$ satisfy $\eta\approx \mu$
then $G$ has a large eigenvalue gap and is therefore a good
pseudo-random graph. We will exhibit several examples of such graphs in
the next section.
\section{Examples}\label{examples}
Here we present some examples of pseudo-random graphs.
Many of them are well known and already appeared, e.g., in
\cite{Tho87a} and \cite{Tho87b}, but there are also some which have been
discovered only recently. Since in the rest of the paper we will mostly
discuss properties of $(n,d,\lambda)$-graphs,
in our examples we emphasize the spectral properties of the constructed
graphs. We will also use most of these constructions
later to illustrate particular points and to test the strength of
the theorems.
\noindent
{\bf Random graphs}
\begin{enumerate}
\item
Let $G=G(n,p)$ be a random graph with edge probability $p$. If $p$ satisfies
$pn/\log n \rightarrow \infty$ and $(1-p)n\log n \rightarrow \infty$,
then almost surely all the degrees of $G$ are equal to $(1+o(1))np$.
Moreover it was proved by F\"uredi and Koml\'os \cite{FK}
that the largest eigenvalue of $G$ is a.s. $(1+o(1))np$ and that
$\lambda(G) \leq (2+o(1))\sqrt{p(1-p)n}$. They stated this result only
for constant $p$, but their proof shows that $\lambda(G) \leq O(\sqrt{np})$
also when $p\geq \mathrm{poly}\log n/n$.
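The degree concentration just mentioned is easy to observe empirically; a small simulation of ours (illustrative only, with an arbitrary fixed seed):

```python
import random

def gnp_degrees(n, p, seed=0):
    """Sample G(n, p) and return the list of vertex degrees."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

n, p = 2000, 0.05
deg = gnp_degrees(n, p)
# all degrees are (1 + o(1))np: here np = 100, the average degree is very
# close to it, and no degree strays far from it
assert abs(sum(deg) / n - n * p) < 0.05 * n * p
assert all(abs(d - n * p) < 0.6 * n * p for d in deg)
```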
\item
For a positive integer-valued function $d=d(n)$ we define the model
$G_{n,d}$ of random regular graphs consisting of all regular graphs on
$n$ vertices of degree $d$ with the uniform probability distribution.
This definition of a random regular graph is conceptually simple, but it
is not easy to use. Fortunately, for small $d$ there is an efficient
way to generate $G_{n,d}$ which is useful for theoretical studies. This
is the so called {\em configuration model}. For more details about this
model, and random regular graphs in general we
refer the interested reader to two excellent monographs \cite{Bol01}
and \cite{JanLucRuc00}, or to a survey \cite{Wor99}.
As it turns out, sparse random regular graphs have quite different
properties from those of the binomial random graph $G(n,p), p=d/n$.
For example, they are almost surely
connected. The spectrum of $G_{n,d}$ for
a fixed $d$ was studied in \cite{FKS} by
Friedman, Kahn and Szemer\'edi. Friedman \cite{FRI} proved that
for constant $d$ the second largest eigenvalue of a random $d$-regular
graph is $\lambda = (1+o(1))2\sqrt{d-1}$. The approach of
Kahn and Szemer\'edi gives only an $O(\sqrt{d})$ bound on $\lambda$ but
continues to work also when $d$ is a small power of $n$.
The case $d \gg n^{1/2}$ was recently studied by Krivelevich, Sudakov,
Vu and Wormald \cite{KSVW}. They proved that in this case for any two
vertices $u,v \in G_{n,d}$ almost surely
$$\big|codeg(u,v)-d^2/n\big| < Cd^3/n^2 + 6d\sqrt{ \log n}
/\sqrt{n},$$
where $C$ is some constant and $codeg(u,v)$ is the number of
common neighbors of $u,v$. Moreover if $d \geq n/\log n$, then $C$ can
be defined to be zero. Using this it is easy to show that
for $d \gg n^{1/2}$, the second largest eigenvalue of a random $d$-regular
graph is $o(d)$. The true bound for the second largest
eigenvalue of $G_{n,d}$ should be probably $(1+o(1))2\sqrt{d-1}$ for
all values of $d$, but we are still far from proving it.
\noindent
\hspace{-0.95cm}{\bf Strongly regular graphs}
\item
Let $q=p^{\alpha}$ be a prime power which is congruent to $1$ modulo $4$ so
that $-1$ is a square in the finite field $GF(q)$.
Let $P_q$ be the graph whose vertices are all elements of
$GF(q)$ and two vertices are adjacent if and only if their difference is a
quadratic residue in $GF(q)$. This graph is usually called
the {\em Paley graph}. It is easy to see that $P_q$ is $(q-1)/2$-regular.
In addition one can easily compute the number of common neighbors of two
vertices in $P_q$.
Let $\chi$ be the {\em quadratic residue character} on $GF(q)$, i.e.,
$\chi(0)=0$, $\chi(x)=1$ if $x\not= 0$ and is a square in $GF(q)$ and
$\chi(x)=-1$ otherwise. By definition,
$\sum_x\chi(x)=0$ and the number of common neighbors
of two vertices $a$ and $b$ equals
$$\sum_{x\not=a,b}\left(\frac{1+\chi(a-x)}{2}\right)
\left(\frac{1+\chi(b-x)}{2}\right)=
\frac{q-2}{4}-\frac{\chi(a-b)}{2}+\frac{1}{4}\sum_{x\not=a,b}
\chi(a-x)\chi(b-x).$$
Using that for $x \not =b$, $\chi(b-x)=\chi\big((b-x)^{-1}\big)$, the last
term can be rewritten as
$$\sum_{x\not=a,b}\chi(a-x)\chi\big((b-x)^{-1}\big)=
\sum_{x\not=a,b} \chi\Big(\frac{a-x}{b-x}\Big)=
\sum_{x\not=a,b}\chi\Big(1+\frac{a-b}{b-x}\Big)=\sum_{x\not=0,1}\chi(x)=-1.$$
Thus the number of common neighbors of $a$ and $b$ is
$(q-3)/4-\chi(a-b)/2$. This equals $(q-5)/4$ if $a$ and $b$ are adjacent and
$(q-1)/4$ otherwise. This implies that
the Paley graph is a strongly regular graph
with parameters $\big(q, (q-1)/2, (q-5)/4, (q-1)/4\big)$
and therefore its second largest eigenvalue equals
$(\sqrt{q}+1)/2$.
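The computation above can be confirmed by brute force for a small $q$. Here is a Python sketch of ours verifying the strong regularity parameters for $q=13$:

```python
q = 13
squares = {(x * x) % q for x in range(1, q)}
assert (q - 1) % 4 == 0 and q - 1 in squares   # -1 is a square mod 13

# Paley graph: i ~ j iff i - j is a quadratic residue; since -1 is a
# square, (i - j) is a residue iff (j - i) is, so adjacency is symmetric
adj = [[int(i != j and (i - j) % q in squares) for j in range(q)]
       for i in range(q)]

common = lambda a, b: sum(adj[a][x] and adj[b][x] for x in range(q))
assert all(sum(row) == (q - 1) // 2 for row in adj)   # (q-1)/2-regular
for a in range(q):
    for b in range(a + 1, q):
        # (q-5)/4 common neighbors if adjacent, (q-1)/4 otherwise
        expected = (q - 5) // 4 if adj[a][b] else (q - 1) // 4
        assert common(a, b) == expected
```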
\item
For any odd integer $k$ let $H_k$ denote the graph whose $n_k=2^{k-1}-1$
vertices are all binary vectors of length $k$ with an odd number of
ones except the all one vector,
in which two distinct vertices are adjacent
iff the inner product
of the corresponding vectors is $1$ modulo $2$. Using elementary linear
algebra it is easy to check that this graph is $(2^{k-2}-2)$-regular.
Also every two nonadjacent vertices in it have
$2^{k-3}-1$ common neighbors and every two adjacent vertices
have $2^{k-3}-3$ common neighbors. Thus $H_k$ is a strongly regular graph
with parameters $\big(2^{k-1}-1, 2^{k-2}-2, 2^{k-3}-3, 2^{k-3}-1\big)$
and with the second largest eigenvalue
$\lambda(H_k)=1+2^{\frac{k-3}{2}}$.
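These parameters can again be verified by exhaustive computation; a short Python check of ours for $k=5$, where $H_5$ is $srg(15,6,1,3)$:

```python
from itertools import product

k = 5
ones = tuple([1] * k)
# vertices: odd-weight binary vectors of length k, excluding the all one vector
V = [v for v in product((0, 1), repeat=k) if sum(v) % 2 == 1 and v != ones]
assert len(V) == 2 ** (k - 1) - 1

def adjacent(x, y):
    # distinct vertices are adjacent iff their inner product is 1 mod 2
    return x != y and sum(a * b for a, b in zip(x, y)) % 2 == 1

deg = [sum(adjacent(x, y) for y in V) for x in V]
assert all(d == 2 ** (k - 2) - 2 for d in deg)        # 6-regular for k = 5

for i, x in enumerate(V):
    for y in V[i + 1:]:
        cn = sum(adjacent(x, z) and adjacent(y, z) for z in V)
        assert cn == (2 ** (k - 3) - 3 if adjacent(x, y) else 2 ** (k - 3) - 1)
```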
\item
Let $q$ be a prime power and let $V(G)$ be the set of elements of the two
dimensional vector space over $GF(q)$, so $G$ has $q^2$ vertices.
Partition the $q+1$ lines through the origin of the space into two sets
$P$ and $N$, where $|P|=k$. Two vertices $x$ and $y$ of the graph $G$
are adjacent if
$x-y$ is parallel to a line in $P$. This example is due to Delsarte and
Goethals and to Turyn (see \cite {Sei}). It is easy to check that
$G$ is strongly regular with parameters
$\big(q^2,k(q-1),(k-1)(k-2)+q-2,k(k-1)\big)$. Therefore its eigenvalues,
besides the trivial one, are $-k$ and $q-k$. Thus if $k$ is
sufficiently large we obtain that $G$ is a $d$-regular graph with
$d=k(q-1)$ whose second largest eigenvalue is much smaller than $d$.
\noindent
\hspace{-0.95cm}{\bf Graphs arising from finite geometries}
\item
For any integer $t \geq 2$ and for any power $q=p^{\alpha}$ of a prime $p$ let
$PG(q,t)$ denote the projective geometry of dimension $t$ over
the finite field $GF(q)$. The interesting case for our purposes here is
that of large $q$ and fixed $t$. The vertices of $PG(q,t)$
correspond to the equivalence classes of the set of all non-zero
vectors ${\bf x}=(x_0, \ldots, x_t)$ of length $t+1$ over $GF(q)$, where two
vectors are equivalent if one is a multiple of the other by an element
of the field. Let $G$ denote the graph whose vertices are the points of
$PG(q,t)$ and two (not necessarily distinct) vertices ${\bf x}$ and
${\bf y}$ are adjacent if and only if $x_0y_0+\ldots+x_ty_t=0$.
This construction is well known. In particular, in case $t=2$
this graph is often called the Erd\H os-R\'enyi graph and it contains no
cycles
of length $4$. It is easy to see that
the number of vertices of $G$ is
$n_{q,t}=\big(q^{t+1}-1\big)/\big(q-1\big)=\big(1+o(1)\big)q^t$
and that it is $d_{q,t}$-regular for
$d_{q,t}=\big(q^t-1\big)/\big(q-1\big)=\big(1+o(1)\big)q^{t-1}$,
where $o(1)$ tends to zero as $q$ tends to infinity.
It is easy to see that the number of vertices of $G$ with loops is
bounded by $2\big(q^{t}-1\big)/\big(q-1\big)=\big(2+o(1)\big)q^{t-1}$,
since for every possible value of $x_0, \ldots, x_{t-1}$ we have at most two
possible choices of $x_t$. Actually using more complicated computation, which
we omit, one can determine the exact number of vertices with loops.
The eigenvalues of $G$ are easy to compute (see \cite{AK}). Indeed, let
$A$ be the adjacency matrix of $G$. Then, by the properties of $PG(q,t)$,
$A^2=AA^T=\mu J+(d_{q,t}-\mu)I$, where
$\mu=\big(q^{t-1}-1\big)/\big(q-1\big)$,
$J$ is the all one matrix and $I$ is the identity
matrix, both of size $n_{q,t} \times n_{q,t}$. Therefore the largest
eigenvalue of $A$ is $d_{q,t}$ and the absolute
value of all other eigenvalues is $\sqrt{d_{q,t}-\mu}=q^{(t-1)/2}$.
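For $q=t=2$ the graph is small enough to verify the identity $A^2=\mu J+(d_{q,t}-\mu)I$ directly. A Python sketch of ours (over $GF(2)$ every equivalence class of $PG(2,2)$ is a single nonzero vector, giving the seven points of the Fano plane):

```python
from itertools import product

# the seven points of PG(2, 2): nonzero vectors of GF(2)^3
pts = [v for v in product((0, 1), repeat=3) if any(v)]
n = len(pts)
assert n == 7                       # (q^{t+1} - 1)/(q - 1) for q = t = 2

dot = lambda x, y: sum(a * b for a, b in zip(x, y)) % 2
# two (not necessarily distinct) points are adjacent iff their dot product is 0
A = [[int(dot(x, y) == 0) for y in pts] for x in pts]

d, mu = 3, 1                        # (q^t - 1)/(q - 1) and (q^{t-1} - 1)/(q - 1)
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
# A^2 = mu*J + (d - mu)*I, so every eigenvalue besides d has absolute
# value sqrt(d - mu) = q^{(t-1)/2} = sqrt(2)
for i in range(n):
    for j in range(n):
        assert A2[i][j] == mu + (d - mu) * (i == j)
```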
\item
The generalized polygons are incidence structures consisting of points
$\cal P$
and lines $\cal L$. For our purposes we restrict our attention to those
in which every point is incident to $q+1$ lines and every line is incident to
$q+1$ points. A generalized $m$-gon defines a bipartite graph $G$ with
bipartition $({\cal P},{\cal L})$ that
satisfies the following conditions. The diameter of $G$ is $m$ and for
every vertex $v \in G$ there is a vertex $u \in G$ such that the
shortest path from $u$ to $v$ has length $m$. Also for every $r< m$
and for every two vertices $u, v$ at distance $r$ there exists a
unique path of length $r$ connecting them. This immediately implies that
every cycle in $G$ has length at least $2m$.
For $q \geq 2$, it was proved by Feit and Higman \cite{FH} that
$(q+1)$-regular generalized $m$-gons exist only for $m=3,4,6$.
A {\em polarity} of $G$ is a
bijection $\pi: {\cal P} \cup {\cal L} \rightarrow {\cal P} \cup {\cal
L}$ such that $\pi({\cal P})={\cal L}$, $\pi({\cal L})={\cal P}$
and $\pi^2$ is the identity map. Also
for every $p \in {\cal P}, l \in {\cal L}$, $\pi(p)$ is adjacent to
$\pi(l)$ if and only if $p$ and $l$ are adjacent.
Given $\pi$ we define a polarity
graph $G^{\pi}$ to be the graph whose vertices are the points of $\cal P$,
where two (not necessarily distinct) points $p_1, p_2$ are adjacent iff $p_1$
is adjacent to $\pi(p_2)$
in $G$. Some properties of $G^{\pi}$ can be easily deduced from the
corresponding properties of $G$. In particular,
$G^{\pi}$ is $(q+1)$-regular and also contains no
even cycles of length less than $2m$.
For every $q$ which is an odd power of $2$, the incidence graph
of the generalized $4$-gon has a polarity. The corresponding polarity
graph
is a $(q+1)$-regular graph with $q^3+q^2+q+1$ vertices. See \cite{BCN},
\cite{LUW} for more details.
This graph contains no cycle of length $6$ and
it is not difficult to compute its eigenvalues (they can be derived,
for example, from
the eigenvalues of the corresponding bipartite incidence graph, given in
\cite{Ta}).
Indeed, all the eigenvalues, besides the trivial one (which is $q+1$)
are either $0$ or $\sqrt {2q}$ or $-\sqrt {2q}$.
Similarly, for every $q$ which is an odd power of $3$, the incidence
graph
of the generalized $6$-gon has a polarity. The corresponding polarity
graph
is a $(q+1)$-regular graph with $q^5+q^4+ \cdots +q+1$ vertices
(see again \cite{BCN}, \cite{LUW}).
This graph contains no cycle of length $10$ and
its eigenvalues can be derived using the same technique as in case of
the $4$-gon. All these eigenvalues, besides the trivial one,
are either $\sqrt {3q}$ or $-\sqrt{3q}$ or $\sqrt {q}$ or $-\sqrt {q}$.
\noindent
\hspace{-0.95cm}{\bf Cayley graphs}
\item
Let $G$ be a finite group and let $S$ be a set of non-identity elements
of $G$ such that $S=S^{-1}$, i.e., for every $s \in S$, $s^{-1}$ also
belongs to $S$. The {\em Cayley graph} $\Gamma(G,S)$ of this group with
respect to the generating set $S$ is the graph whose set of vertices is $G$ and where
two vertices $g$ and $g'$ are adjacent if and only if $g'g^{-1} \in S$.
Clearly, $\Gamma(G,S)$ is $|S|$-regular and it is connected iff $S$ is a
set of generators of the group.
If $G$ is abelian then the eigenvalues of the Cayley graph can be
computed in terms of the characters of $G$. Indeed, let
$\chi: G \rightarrow C$ be a character of $G$ and let $A$ be the adjacency
matrix of $\Gamma(G,S)$ whose rows and columns are indexed by the elements
of $G$. Consider the vector ${\bf v}$ defined by ${\bf v}(g)=\chi(g)$.
Then it is easy to check that $A{\bf v}=\alpha {\bf v}$ with
$\alpha=\sum_{s\in S}\chi(s)$. In
addition, all eigenvalues can be obtained in this way, since
every abelian group has exactly $|G|$ different characters, which are
orthogonal to each other. Using this fact, one can often give
estimates on the eigenvalues of $\Gamma(G,S)$ for abelian groups.
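This character computation is easy to test numerically. The following small sketch (the group ${\bf Z}_{12}$ and the symmetric set $S=\{1,5,7,11\}$ are our own illustrative choice, not taken from the text) compares the eigenvalues $\sum_{s\in S}\chi_j(s)$ obtained from the characters $\chi_j(g)=e^{2\pi i jg/n}$ with direct diagonalization of the adjacency matrix.

```python
import numpy as np

# Character eigenvalues of an abelian Cayley graph, checked on a small
# example of our own choosing: the cyclic group Z_12 with the symmetric
# generating set S = {1, 5, 7, 11}.
n = 12
S = [1, 5, 7, 11]
assert all((-s) % n in S for s in S)                     # S = -S

A = np.zeros((n, n))
for g in range(n):
    for s in S:
        A[g, (g + s) % n] = 1                            # g ~ g + s

# The character chi_j(g) = exp(2*pi*i*j*g/n) yields the eigenvalue
# sum_{s in S} chi_j(s); for symmetric S these sums are real.
char_eigs = sorted(sum(np.exp(2j * np.pi * j * s / n) for s in S).real
                   for j in range(n))
direct_eigs = sorted(np.linalg.eigvalsh(A))
assert np.allclose(char_eigs, direct_eigs)
```

The trivial character $j=0$ recovers the degree $|S|$, and the remaining sums give all other eigenvalues, exactly as described above.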
One example of a Cayley graph that has already been described earlier
is $P_q$. In that case the group is the additive group of the finite field
$GF(q)$ and $S$ is the set of all quadratic residues modulo $q$. Next we
present a slightly more general construction. Let $q=2kr+1$ be a prime
power and let $\Gamma$ be a Cayley graph whose group
is the additive group of $GF(q)$ and whose generating set is
$S=\big\{y^k~|~y \in GF(q)^*\big\}$, the set of nonzero $k$-th powers.
By definition, $\Gamma$ is $(q-1)/k$-regular; note that $S=-S$, since
$|S|=(q-1)/k=2r$ is even and hence $-1$ is a $k$-th power. On the other hand,
this graph is not strongly regular unless $k=2$, when it is the Paley
graph. Let $\chi$ be a nontrivial additive character of $GF(q)$ and
consider the Gauss sum $\sum_{y \in GF(q)} \chi(y^k)$. Using the
classical bound $|\sum_{y \in GF(q)} \chi(y^k)|\leq (k-1)q^{1/2}$
(see e.g. \cite{LN}) and the above connection between characters and
eigenvalues we can conclude that the second largest eigenvalue of our
graph $\Gamma$ is bounded by $O(q^{1/2})$.
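A small numerical illustration of this bound (the parameters $q=31=2\cdot 3\cdot 5+1$ and $k=3$ are our own choice): the graph of nonzero cubes mod $31$ is $(q-1)/k=10$-regular, and every nontrivial eigenvalue indeed stays below the Gauss-sum bound $(k-1)\sqrt{q}$.

```python
import numpy as np

# The k-th power residue Cayley graph on GF(q), checked for the assumed
# parameters q = 31, k = 3: S is the set of nonzero cubes mod 31, the graph
# is (q-1)/k-regular, and the nontrivial eigenvalues obey (k-1)*sqrt(q).
q, k = 31, 3
S = sorted({pow(y, k, q) for y in range(1, q)})          # nonzero cubes mod 31
assert len(S) == (q - 1) // k
assert all((q - s) % q in S for s in S)                  # -1 is a cube mod 31

A = np.zeros((q, q))
for g in range(q):
    for s in S:
        A[g, (g + s) % q] = 1

eigs = np.linalg.eigvalsh(A)
lam2 = max(abs(e) for e in eigs if not np.isclose(e, len(S)))
assert lam2 <= (k - 1) * np.sqrt(q)                      # here 2*sqrt(31)
```

For $k=2$ this recovers the Paley graph, whose nontrivial eigenvalues are exactly $(-1\pm\sqrt q)/2$.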
\item
Next we present a surprising construction obtained by Alon \cite{A94} of
a very dense pseudo-random graph that on the other hand is
triangle-free. For a positive integer $k$, consider the
finite field $GF(2^k)$, whose elements are represented by binary vectors of
length $k$. If $a, b, c$ are three such vectors, denote by
$(a,b,c)$ the binary vector of length $3k$ whose coordinates are those of
$a$, followed by coordinates of $b$ and then $c$. Suppose that $k$ is not
divisible by $3$. Let $W_0$ be the set of all nonzero
elements $\alpha \in GF(2^k)$ so that the leftmost bit in the binary
representation of $\alpha^7$ is $0$, and let $W_1$ be the set of all
nonzero elements $\alpha \in GF(2^k)$ for which the leftmost bit
of $\alpha^7$ is $1$. Since $3$ does not divide $k$, $7$ does not divide
$2^k-1$ and hence $|W_0|=2^{k-1}-1$ and $|W_1|=2^{k-1}$, as when
$\alpha$ ranges over all nonzero elements of the field so does
$\alpha^7$. Let $G_n$ be the graph whose vertices are all
$n=2^{3k}$ binary vectors of length $3k$, where two vectors
${\bf v}$ and ${\bf v}'$ are adjacent if and only if there exist $w_0\in
W_0$ and
$w_1\in W_1$ so that ${\bf
v}-{\bf v}'=(w_0,w_0^3,w_0^5)+(w_1,w_1^3,w_1^5)$, where the
powers are computed in the field $GF(2^k)$ and the addition is addition
modulo $2$. Note that $G_n$ is the Cayley graph of the additive group
${\bf Z}_2^{3k}$ with respect to the generating set
$S=U_0+U_1$, where $U_0=\big\{(w_0,w_0^3,w_0^5)~|~w_0\in W_0\big\}$
and $U_1$ is defined similarly. A well known fact from Coding
Theory (see e.g., \cite{MS}), which can be proved using the
Vandermonde determinant, is that every set of six distinct vectors in
$U_0 \cup U_1$ is linearly independent over $GF(2)$.
In particular all the vectors in $U_0+U_1$ are distinct,
$|S|=|U_0||U_1|$, and hence $G_n$ is $|S|=2^{k-1}(2^{k-1}-1)$-regular.
The statement that $G_n$ is triangle-free is clearly equivalent to the
fact that the sum modulo $2$ of any three elements of
$S$ is not the zero vector. Let $u_0+u_1, u'_0+u'_1$ and $u''_0+u''_1$
be three distinct elements of $S$, where $u_0,u'_0,u''_0 \in U_0$
and $u_1,u'_1,u''_1 \in U_1$. By the above discussion, if the sum of these
six vectors is zero, then every vector must appear an even number of times
in the sequence $(u_0,u'_0,u''_0,u_1,u'_1,u''_1)$. However, since $U_0$ and
$U_1$ are disjoint, this is clearly impossible. Finally, as we already
mentioned, the eigenvalues of $G_n$ can be computed in terms of characters
of
${\bf Z}_2^{3k}$. Using this fact together with the Carlitz-Uchiyama bound
on the characters of ${\bf Z}_2^{3k}$ it was proved in \cite{A94} that the
second eigenvalue of $G_n$ is bounded by $\lambda \leq 9\cdot
2^k+3\cdot 2^{k/2}+1/4$.
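The combinatorial part of this construction can be verified directly on a small instance. The sketch below uses $k=4$ with $GF(16)$ realized modulo $x^4+x+1$ (our own illustrative choice; the text only requires that $3$ does not divide $k$); it builds the generating set $S$ and checks both $|S|=2^{k-1}(2^{k-1}-1)$ and triangle-freeness.

```python
# Smallest interesting instance of the triangle-free construction: k = 4,
# GF(16) realized modulo x^4 + x + 1 (an assumed, illustrative choice).
K, POLY = 4, 0b10011                     # modulus x^4 + x + 1

def gf_mul(a, b):
    """Multiplication in GF(2^K), reducing modulo POLY after each shift."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> K) & 1:
            a ^= POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# Split the nonzero field elements by the leftmost bit of alpha^7.
W0 = [a for a in range(1, 2 ** K) if not (gf_pow(a, 7) >> (K - 1)) & 1]
W1 = [a for a in range(1, 2 ** K) if (gf_pow(a, 7) >> (K - 1)) & 1]
assert len(W0) == 2 ** (K - 1) - 1 and len(W1) == 2 ** (K - 1)

def pack(w):                             # (w, w^3, w^5) as one 3K-bit vector
    return (w << 2 * K) | (gf_pow(w, 3) << K) | gf_pow(w, 5)

U0 = {pack(w) for w in W0}
U1 = {pack(w) for w in W1}
S = {u0 ^ u1 for u0 in U0 for u1 in U1}
assert len(S) == len(U0) * len(U1)       # all the sums are distinct

# Triangle-freeness: the XOR of two distinct elements of S is never in S.
Slist = sorted(S)
assert all(Slist[i] ^ Slist[j] not in S
           for i in range(len(Slist)) for j in range(i + 1, len(Slist)))
```

The triangle-freeness check is exactly the reformulation used in the proof above: a triangle would force three elements of $S$ to sum to zero over $GF(2)$.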
\item
The construction above can be extended in the obvious way as mentioned in
\cite{ALONK}.
Let $h\geq 1$ and suppose that $k$ is an integer such that
$2^k-1$ is not divisible by $4h+3$.
Let $W_0$ be the set of all nonzero
elements $\alpha \in GF(2^k)$ so that the leftmost bit in the binary
representation of $\alpha^{4h+3}$ is $0$, and let $W_1$ be the set of all
nonzero elements $\alpha \in GF(2^k)$ for which the leftmost bit
of $\alpha^{4h+3}$ is $1$. Since $4h+3$ does not divide
$2^k-1$ we have that $|W_0|=2^{k-1}-1$ and $|W_1|=2^{k-1}$, as when
$\alpha$ ranges over all nonzero elements of the field so does
$\alpha^{4h+3}$.
Define $G$ to be the Cayley graph of the additive group
${\bf Z}_2^{(2h+1)k}$ with respect to the generating set
$S=U_0+U_1$, where $U_0=\big\{(w_0,w_0^3,\ldots,w_0^{4h+1})~|~w_0\in
W_0\big\}$ and $U_1$ is defined similarly. Clearly,
$G$ is a $2^{k-1}(2^{k-1}-1)$-regular graph on $2^{(2h+1)k}$ vertices.
Using methods from \cite{A94}, one can show that
$G$ contains no odd cycle of length $\leq 2h+1$ and that the
second eigenvalue of $G$ is bounded by $O(2^k)$.
\item
Now we describe the celebrated expander graphs constructed by
Lubotzky, Phillips and Sarnak \cite{LPS} and independently by Margulis
\cite{Margulis}.
Let $p$ and $q$ be unequal primes, both congruent to $1$ modulo $4$
and such that $p$ is a quadratic residue modulo $q$. As usual denote by
$PSL(2,q)$ the factor group of the group of two by two matrices over
$GF(q)$ with determinant $1$ modulo its normal subgroup consisting of the
two
scalar matrices $\bigg(\begin{array}{cc}1&0\\0&1\end{array}\bigg)$ and
$\bigg(\begin{array}{cc}-1&0\\0&-1\end{array}\bigg)$. The
graphs we describe are Cayley graphs of
$PSL(2,q)$. A well known theorem of Jacobi asserts that the number of
ways to represent a positive integer $n$ as a sum of $4$ squares is
$8\sum_{d\mid n,\; 4 \nmid d}\, d$. This easily implies that there are precisely
$p+1$
vectors ${\bf a}=(a_0, a_1, a_2, a_3)$, where $a_0$ is an odd positive
integer, $a_1, a_2, a_3$ are even integers and $a_0^2+ a_1^2+a_2^2+a_3^2=p$.
From each such vector construct the matrix $M_a$ in $PSL(2,q)$ where
$M_a=\frac{1}{\sqrt{p}}
\bigg(\begin{array}{cc}a_0+ia_1&a_2+ia_3\\-a_2+ia_3&a_0-ia_1\end{array}\bigg)$
and $i$ is an integer satisfying $i^2 \equiv -1 \pmod{q}$.
Note that, indeed, the determinant of $M_a$ is $1$ and that the square root
of $p$ modulo $q$ does exist.
Let $G^{p,q}$ denote the Cayley graph of $PSL(2,q)$ with respect to these
$p+1$ matrices. In \cite{LPS} it was proved that
if $q >2\sqrt{p}$ then $G^{p,q}$ is a connected $(p+1)$-regular graph on
$n=q(q^2-1)/2$ vertices. Its girth is at least $2\log_p q$ and all the
eigenvalues of its adjacency matrix, besides the trivial one
$\lambda_1=p+1$, are at most $2 \sqrt{p}$ in absolute value.
The bound on the eigenvalues was obtained by applying
deep results of Eichler and Igusa concerning the Ramanujan conjecture.
The graphs $G^{p,q}$ have very good expansion properties and have numerous
applications in Combinatorics and Theoretical Computer Science.
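The count of $p+1$ generators can be sanity-checked by direct enumeration; the primes below (all $\equiv 1 \pmod 4$) are our own test choices.

```python
# Direct check of the p+1 generator count implied by Jacobi's four-square
# theorem, for a few small primes p = 1 (mod 4) of our own choosing.
def lps_generators(p):
    """All (a0,a1,a2,a3): a0 odd positive, a1,a2,a3 even, sum of squares p."""
    r = int(p ** 0.5) + 1
    odds = [a for a in range(1, r + 1) if a % 2 == 1]
    evens = [a for a in range(-r, r + 1) if a % 2 == 0]
    return [(a0, a1, a2, a3)
            for a0 in odds for a1 in evens for a2 in evens for a3 in evens
            if a0 * a0 + a1 * a1 + a2 * a2 + a3 * a3 == p]

for p in (5, 13, 17, 29):
    assert len(lps_generators(p)) == p + 1
```

For $p=5$, for instance, the six vectors are $(1,\pm 2,0,0)$, $(1,0,\pm 2,0)$ and $(1,0,0,\pm 2)$.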
\item
The {\em projective norm graphs} $NG_{p,t}$ have been constructed in
\cite{ARS}, modifying an earlier construction given in \cite{KRS}.
These graphs {\bf are not} Cayley graphs, but as one will immediately see,
their construction has a similar flavor.
The construction is the following. Let $t>2$ be an integer, let $p$
be a prime, let $GF(p)^*$ be the multiplicative group of the
field with $p$ elements
and let $GF(p^{t-1})$ be the field with $p^{t-1}$ elements.
The set of vertices of the graph $NG_{p,t}$ is the set
$V=GF(p^{t-1})\times GF(p)^*$. Two distinct vertices $(X,a)$
and $(Y,b)\in V$ are adjacent if and only if $N(X+Y)=ab$, where the norm
$N$ is understood over $GF(p)$, that is, $N(X)=X^{1+p+\cdots+p^{t-2}}.$
Note that $|V|=p^t-p^{t-1}$. If $(X,a)$ and $(Y,b)$ are adjacent,
then $(X,a)$ and $Y\neq -X$ determine $b$.
Thus $NG_{p,t}$ is a regular graph of degree $p^{t-1}-1$. In addition, it was
proved in \cite{ARS}, that $NG_{p,t}$ contains no complete bipartite graphs
$K_{t,(t-1)!+1}$.
These graphs can also be defined in the same manner starting with a
prime power instead of the prime $p$. It is also not difficult to
compute the eigenvalues of this graph. Indeed, put $q=p^{t-1}$ and let
$A$ be the adjacency matrix of $NG_{p,t}$.
The rows and columns of this matrix are indexed by the ordered pairs
of the set $GF(q) \times GF(p)^*$. Let $\psi$ be a character of the
additive group of $GF(q)$, and let $\chi$ be a character of the
multiplicative group of $GF(p)$. Consider the vector
${\bf v}: GF(q) \times GF(p)^* \mapsto C$ defined by
${\bf v}(X,a)=\psi(X) \chi(a)$. Now one can check (see \cite{AR},
\cite{Sz} for more
details) that the vector
${\bf v}$ is an eigenvector of $A^2$ with eigenvalue
$\big| \sum_{Z\in GF(q), Z\not=0} \psi(Z)\chi(N(Z))\big|^2$ and that all
eigenvalues of $A^2$ have this form.
Set $\chi'(Z)=\chi(N(Z))$ for all nonzero $Z$ in $GF(q)$. Note that as the
norm is multiplicative, $\chi'$ is a multiplicative
character of the large field. Hence the above expression
is a square of the absolute value of the Gauss sum and it is well known (see
e.g. \cite{Da},
\cite{Bol01}) that the value of each such square, besides the trivial one
(that is, when either $\psi$ or $\chi'$ is trivial), is $q$.
This implies that the second largest eigenvalue of $NG_{p,t}$
is $\sqrt{q}=p^{(t-1)/2}$.
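A tiny instance makes the degree and forbidden-subgraph claims concrete. The parameters $p=3$, $t=3$ are our own choice, with $GF(9)$ realized as $GF(3)[i]$, $i^2=-1$, so that $N(X)=X^{1+p}=X^4$; the sketch checks the degree (up to a possible loop) and that no $t=3$ vertices have $(t-1)!+1=3$ common neighbors.

```python
import itertools

# Tiny projective norm graph NG_{p,t} with p = 3, t = 3 (illustrative,
# assumed parameters); GF(9) is built as GF(3)[i] with i^2 = -1.
F9 = [(a, b) for a in range(3) for b in range(3)]        # X = a + b*i

def add(X, Y):
    return ((X[0] + Y[0]) % 3, (X[1] + Y[1]) % 3)

def mul(X, Y):                                           # uses i^2 = -1
    return ((X[0] * Y[0] - X[1] * Y[1]) % 3, (X[0] * Y[1] + X[1] * Y[0]) % 3)

def norm(X):                                             # N(X) = X^4
    Y = X
    for _ in range(3):
        Y = mul(Y, X)
    assert Y[1] == 0                                     # the norm lies in GF(3)
    return Y[0]

V = [(X, a) for X in F9 for a in (1, 2)]                 # p^t - p^{t-1} = 18 vertices
adj = {(u, v) for u in V for v in V
       if u != v and norm(add(u[0], v[0])) == (u[1] * v[1]) % 3}

# Degree p^{t-1} - 1 = 8, up to the possible loss of one loop edge.
assert all(7 <= sum((u, v) in adj for v in V) <= 8 for u in V)

# Any 3 vertices have at most (t-1)! = 2 common neighbours: no K_{3,3}.
assert all(sum(all((u, w) in adj for u in T) for w in V) <= 2
           for T in itertools.combinations(V, 3))
```

The last assertion is exactly the $K_{t,(t-1)!+1}$-freeness proved in \cite{ARS}, specialized to $t=3$.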
\end{enumerate}
\section{Properties of pseudo-random graphs}
We now examine closely properties of pseudo-random graphs, with a
special emphasis on $(n,d,\lambda)$-graphs. The majority of them
are obtained using the estimate (\ref{eig}) of Theorem \ref{eigen},
showing
again the extreme importance and applicability of the latter result. It
is instructive to compare the properties of pseudo-random graphs,
considered below, with the analogous properties of random graphs, usually
shown to hold by completely different methods. The set of properties we
chose to treat here is not meant to be comprehensive or systematic, but
quite a few rather diverse graph parameters will be covered.
\subsection{Connectivity and perfect matchings}
The {\em vertex-connectivity} of a graph $G$ is the minimum number of
vertices that
we need to delete to make $G$ disconnected. We denote this parameter by
$\kappa(G)$. For random graphs it is well known (see, e.g., \cite{Bol01})
that the vertex-connectivity is almost surely the same as the minimum
degree.
Recently it was also proved (see \cite{KSVW} and \cite{CFR}) that
random $d$-regular graphs are $d$-vertex-connected. For
$(n,d,\lambda)$-graphs it
is easy to show the following.
\begin{theo}
\label{connectivity}
Let $G$ be an $(n,d,\lambda)$-graph with $d \leq n/2$.
Then the vertex-connectivity of $G$ satisfies:
$$\kappa(G) \geq d-36\lambda^2/d.$$
\end{theo}
\noindent
{\bf Proof.}\, We can assume that $\lambda \leq d/6$, since otherwise
there is nothing to prove. Suppose that there is a subset $S \subset V$ of
size less than $d-36\lambda^2/d$
such that the induced graph $G[V-S]$ is disconnected. Denote by $U$ the
set of vertices of the smallest connected component of $G[V-S]$ and set
$W=V-(S \cup U)$. Then $|W| \geq (n-d)/2\geq n/4$ and there is no edge
between $U$ and $W$. Also $|U|+|S|>d$, since all the neighbors of
a vertex
from $U$ are contained in $S \cup U$. Therefore $|U|\geq 36\lambda^2/d$.
Since there are no edges between $U$ and $W$, by Theorem \ref{eigen},
we have that $d|U||W|/n <\lambda \sqrt{|U||W|}$. This implies that
$$|U|<\frac{\lambda^2n^2}{d^2|W|}= \frac{\lambda}{d}\frac{n}{|W|}
\frac{\lambda n}{d} \leq \frac{1}{6} \cdot 4 \cdot\frac{\lambda n}{d}<
\frac{\lambda n}{d}.$$
Next note that, by Theorem \ref{eigen}, the number of edges spanned by
$U$ is at most
$$e(U) \leq \frac{d|U|^2}{2n}+\frac{\lambda|U|}{2}<
\frac{\lambda n}{d}\frac{d|U|}{2n}+\frac{\lambda|U|}{2}=
\frac{\lambda|U|}{2}+\frac{\lambda|U|}{2}= \lambda|U|.$$
As the degree of every vertex in $U$ is $d$, it follows that
$$e(U,S)\geq d|U|-2e(U)> (d-2\lambda)|U|\geq 2d|U|/3.$$
On the other hand
using again Theorem \ref{eigen} together with the facts that
$|U|\geq 36\lambda^2/d$, $|S|<d$ and $d \leq n/2$ we conclude that
\begin{eqnarray*}
e(U,S)&\leq& \frac{d|U||S|}{n}+\lambda\sqrt{|U||S|}
<\frac{d}{n}d|U|+\lambda\sqrt{d|U|} \leq \frac{d|U|}{2}+
\frac{\lambda\sqrt{d}|U|}{\sqrt{|U|}}\\
&\leq&\frac{d|U|}{2}+\frac{\lambda\sqrt{d}|U|}{6\lambda/\sqrt{d}}=
\frac{d|U|}{2}+\frac{d|U|}{6}=\frac{2d|U|}{3}.
\end{eqnarray*}
This contradiction completes the proof.\hfill $\Box$
\medskip
\noindent
The constants in this theorem can be easily improved and we make no
attempt to optimize them. Note that, in particular, for an
$(n,d,\lambda)$-graph
$G$ with $\lambda=O(\sqrt{d})$ we have that $\kappa(G)= d-\Theta(1)$.
Next we present an example which shows that the assertion of Theorem
\ref{connectivity} is tight up to a constant factor.
Let $G$ be any $(n,d,\lambda)$-graph with $\lambda=\Theta(\sqrt{d})$.
We already constructed several such graphs in the previous section. For an
integer $k$, consider a new graph $G_k$, which is obtained by
replacing each vertex of $G$ by the complete graph of order $k$ and by
connecting two vertices of $G_k$ by an edge if and only if the
corresponding vertices of $G$ are connected by an edge.
Then it follows immediately from the definition that
$G_k$ has $n'=nk$ vertices and is a $d'$-regular graph with $d'=dk+k-1$.
Let $\lambda'$ be the second eigenvalue of $G_k$. To estimate
$\lambda'$ note that the adjacency matrix of $G_k$ equals
$A_G\otimes J_k+I_n\otimes A_{K_k}$. Here $A_G$ is the adjacency matrix of
$G$, $J_k$ is the all one matrix of size $k\times k$, $I_n$ is the
identity
matrix of size $n \times n$ and $A_{K_k}$ is the adjacency matrix of the
complete graph of order $k$. Also the tensor product of the $m\times n$
dimensional matrix $A=(a_{ij})$ and the $s\times t$-dimensional matrix
$B=(b_{kl})$ is the $ms\times nt$-dimensional matrix $A\otimes B$, whose
entry labeled $((i,k)(j,l))$ is $a_{ij}b_{kl}$. In case $A$ and $B$ are
symmetric matrices with spectra $\{\lambda_1, \ldots , \lambda_n\}$, $\{
\mu_1,\ldots , \mu_t\}$ respectively, it is a simple consequence of the
definition that the spectrum of $A\otimes B$ is $\{ \lambda_i\mu_k: i=1,
\ldots ,n, k=1,\ldots , t\}$ (see, e.g. \cite{Lov93}).
Therefore the second eigenvalue of $A_G\otimes J_k$ is $k\lambda$.
On the other hand $I_n\otimes A_{K_k}$ is the adjacency matrix of the
disjoint union of $k$-cliques and therefore the absolute value of all its
eigenvalues is at most $k-1$.
Using these two facts we conclude that $\lambda'\leq \lambda k+k-1$ and
that $G_k$ is an $(n'=nk,d'=dk+k-1,\lambda'=\lambda k+k-1)$-graph.
Also it is easy to see that the set of vertices
of $G_k$ that corresponds to a vertex in $G$ has exactly $dk$
neighbors outside this set. By deleting these neighbors we can disconnect
the graph $G_k$ and thus
$$\kappa(G_k) \leq dk=d'-(k-1)=d'-\Omega\big((\lambda')^2/d'\big).$$
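The spectral claim for the blow-up is easy to verify numerically. In the sketch below the base graph $C_5$ and $k=3$ are our own toy choice; since $J_kA_{K_k}=A_{K_k}J_k=(k-1)J_k$, the two Kronecker summands commute, and the spectrum of $G_k$ is $\{k\mu+k-1\}$ over the eigenvalues $\mu$ of $G$, together with $-1$ repeated $n(k-1)$ times.

```python
import numpy as np

# Spectrum of the blow-up G_k whose adjacency matrix is
# A ⊗ J_k + I ⊗ A_{K_k}; toy choice (assumed): G = C_5, k = 3.
n, k = 5, 3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1            # the 5-cycle C_5

Jk = np.ones((k, k))
Kk = Jk - np.eye(k)                                      # adjacency of K_k
A_blow = np.kron(A, Jk) + np.kron(np.eye(n), Kk)

mu = np.linalg.eigvalsh(A)
# Predicted spectrum: k*mu + (k-1) on v ⊗ 1, and -1 on v ⊗ w with w ⊥ 1.
predicted = sorted(list(k * mu + (k - 1)) + [-1.0] * (n * (k - 1)))
assert np.allclose(sorted(np.linalg.eigvalsh(A_blow)), predicted)
```

In particular the top eigenvalue is $dk+k-1$ (here $8$), matching the degree $d'$ computed above.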
Sometimes we can improve the result of Theorem \ref{connectivity}
using the information about co-degrees of vertices in our graph.
Such result was used in \cite{KSVW} to determine the vertex-connectivity
of dense random $d$-regular graphs.
\begin{prop}
\label{p42}\cite{KSVW}
Let $G=(V,E)$ be a
$d$-regular graph on $n$ vertices such that $ \sqrt{n}\log n < d \leq
3n/4$ and the number of common neighbors for every two
distinct vertices in $G$ is $(1+o(1))d^2/n$. Then the graph $G$ is
$d$-vertex-connected.
\end{prop}
Similarly to vertex-connectivity, define the {\em edge-connectivity} of
a graph $G$ to be the minimum number of
edges that we need to delete to make $G$ disconnected. We denote this
parameter by $\kappa'(G)$. Clearly the edge-connectivity is always at most
the minimum degree
of a graph. We also say that $G$ has a {\em perfect matching} if there
is a set of disjoint edges that covers all the vertices of $G$.
Next we show that $(n,d,\lambda)$-graphs even with a very weak
spectral gap are $d$-edge-connected and have a perfect matching
(if the number of vertices is even).
\begin{theo}
\label{edge-connectivity}
Let $G$ be an $(n,d,\lambda)$-graph with $d-\lambda\geq 2$.
Then $G$ is $d$-edge-connected. When $n$ is even,
it has a perfect matching.
\end{theo}
\noindent
{\bf Proof.}\,
Let $U$ be a subset of vertices of $G$ of size at most $n/2$. To prove
that $G$ is $d$-edge-connected we need to show that
there are always at least $d$ edges between $U$ and $V(G)-U$.
If $1 \leq |U|\leq d$, then every vertex in $U$ has at least $d-(|U|-1)$
neighbors outside $U$ and therefore $e(U,V(G)-U) \geq |U|\big(d-|U|+1\big)
\geq d$. On the other hand if $d \leq |U|\leq n/2$, then using that
$d-\lambda\geq 2$ together with Theorem
\ref{eigen} we obtain that
\begin{eqnarray*}
e\big(U,V(G)-U\big) &\geq&
\frac{d|U|(n-|U|)}{n}-\lambda\sqrt{|U|(n-|U|)\left(1-\frac{|U|}{n}\right)
\left(1-\frac{n-|U|}{n}\right)}\\
&=&(d-\lambda)\frac{(n-|U|)}{n}|U| \geq
2\cdot\frac{1}{2}\cdot|U|=|U|\geq d,
\end{eqnarray*}
and therefore $\kappa'(G)=d$.
To show that $G$ contains a perfect matching we apply Tutte's celebrated
theorem. Since $n$ is even, we need to prove that for every
nonempty set of vertices
$S$, the
induced graph $G[V-S]$ has at most $|S|$ connected components of odd size.
Since $G$ is $d$-edge-connected we have that there are at least $d$ edges
from every connected component of
$G[V-S]$ to $S$. On the other hand there are at most $d|S|$ edges incident
with vertices in $S$. Therefore $G[V-S]$ has at most $|S|$ connected
components and hence $G$ contains a perfect matching.
\hfill $\Box$
\subsection{Maximum cut}
Let $G=(V,E)$ be a graph and let $S$ be a nonempty proper
subset of $V$. Denote by $(S,V-S)$ the cut of $G$ consisting of all
edges with one end in $S$ and another one in $V-S$. The {\em size}
of the cut is the number of edges in it. The MAX CUT problem is the
problem of finding a cut of maximum size in $G$. Let $f(G)$ be the
size of the maximum cut in $G$.
MAX CUT is one of the most natural combinatorial
optimization problems. It is well known that this problem is
NP-hard \cite{GJ}. Therefore it is useful to have bounds on
$f(G)$ based on other parameters of the graph, that can be computed
efficiently.
Here we describe two such folklore results. First, consider a random partition
$V=V_1\cup V_2$, obtained by assigning each vertex $v\in V$ to $V_1$ or $V_2$
with probability $1/2$ independently.
It is easy to see that each edge of $G$ has probability $1/2$ to cross
between $V_1$ and $V_2$. Therefore the expected
number of edges in the cut $(V_1,V_2)$ is $m/2$, where $m$ is the number of edges
in $G$. This implies that for every graph $f(G)\geq m/2$. The example of
a complete graph shows that this lower bound is asymptotically optimal.
The second result provides an upper bound for $f(G)$, for a regular
graph $G$, in terms of the smallest eigenvalue of its adjacency matrix.
\begin{prop}
\label{max-cut}
Let $G$ be a $d$-regular graph (which may have loops) of order $n$ with
$m=dn/2$ edges and let
$\lambda_1\geq \lambda_2 \geq \ldots \geq \lambda_n$ be the eigenvalues of the
adjacency matrix of $G$. Then
$$f(G) \leq \frac{m}{2}-\frac{\lambda_n n}{4}.$$
In particular if $G$ is an $(n,d,\lambda)$-graph then
$f(G) \leq (d+\lambda)n/4$.
\end{prop}
\noindent
{\bf Proof.}\, Let $A=(a_{ij})$ be the adjacency matrix of
$G=(V,E)$ and let $V=\{1, \ldots, n\}$. Let
${\bf x}=(x_1, \ldots, x_n)$ be any vector with coordinates $\pm 1$.
Since the graph $G$ is $d$-regular we have
$$\sum_{(i,j)\in E} (x_i-x_j)^2=d\sum_{i=1}^n x_i^2-
\sum_{i,j}a_{ij}x_ix_j=dn-{\bf x}^tA{\bf x}.$$
By the variational definition of the eigenvalues of $A$,
for any vector $z \in R^n$, $z^tAz \geq \lambda_n\|z\|^2$.
Therefore
\begin{equation}
\label{a}
\sum_{(i,j)\in E} (x_i-x_j)^2=dn-{\bf x}^tA{\bf x} \leq
dn-\lambda_n\|{\bf x}\|^2=dn-\lambda_nn.
\end{equation}
Let $V=V_1\cup V_2$ be an arbitrary partition of $V$ into two disjoint
subsets and let $e(V_1,V_2)$ be the number of edges in the bipartite
subgraph of $G$ with bipartition $(V_1,V_2)$. For every vertex $v \in
V(G)$ define $x_v=1$ if $v \in V_1$ and $x_v=-1$ if $v \in V_2$.
Note that for every edge $(i,j)$ of $G$,
$(x_i-x_j)^2=4$ if this edge has its ends
in the distinct parts of the above partition and is zero otherwise.
Now using (\ref{a}), we conclude that
$$\hspace{3.2cm}
e(V_1,V_2)=\frac{1}{4} \sum_{(i,j)\in E} (x_i-x_j)^2 \leq
\frac{1}{4}(dn-\lambda_nn)=\frac{m}{2}-\frac{\lambda_nn}{4}.
\hspace{3.2cm}\Box$$
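Proposition \ref{max-cut} can be tested on a small concrete graph. The Petersen graph is our own illustrative choice: $n=10$, $d=3$, $m=15$ and $\lambda_n=-2$, so the bound reads $f(G)\leq 15/2+2\cdot 10/4=12.5$, and the true maximum cut can be found by exhausting all bipartitions.

```python
import numpy as np

# Brute-force check of f(G) <= m/2 - lambda_n*n/4 on the Petersen graph
# (an illustrative, assumed choice of graph).
edges  = [(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]    # inner pentagram
edges += [(i, i + 5) for i in range(5)]                  # spokes
n, m = 10, len(edges)

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
lam_min = min(np.linalg.eigvalsh(A))

f = 0                                                    # maximum cut size
for mask in range(2 ** (n - 1)):                         # vertex 9 fixed on one side
    side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
    f = max(f, sum(side[u] != side[v] for u, v in edges))

# The random-partition lower bound and the eigenvalue upper bound.
assert m / 2 <= f <= m / 2 - lam_min * n / 4
```

Both folklore bounds from this subsection are thus confirmed on one graph: $f$ lies between $m/2=7.5$ and $m/2-\lambda_n n/4=12.5$.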
This upper bound is often used to show that some particular results
about maximum cuts are tight. For example this approach was used in
\cite{ALON} and \cite{ABKS}. In these papers the authors proved that
for every graph $G$ with $m$ edges and girth at least $r\geq 4$,
$f(G) \geq m/2 +\Omega\big(m^{\frac{r}{r+1}}\big)$.
They also show, using Proposition \ref{max-cut} and Examples 9, 6 from Section 3,
that this bound is tight for $r=4,5$.
\subsection{Independent sets and the chromatic number}
The {\em independence number} $\alpha(G)$ of a graph $G$
is the maximum cardinality of a set of vertices of $G$ no two of which are
adjacent. Using Theorem \ref{eigen} we can immediately establish an upper
bound on the size of a maximum independent set of pseudo-random graphs.
\begin{prop}
\label{ind-set}
Let $G$ be an $(n,d,\lambda)$-graph, then
$$\alpha(G)\leq \frac{\lambda n}{d+\lambda}.$$
\end{prop}
\noindent
{\bf Proof.}\, Let $U$ be an independent set in $G$, then
$e(U)=0$ and by Theorem \ref{eigen} we have that
$d|U|^2/n\leq \lambda |U|(1-|U|/n)$. This implies that
$|U|\leq \lambda n/(d+\lambda)$. \hfill $\Box$
\vspace{0.15cm}
\noindent
Note that even when $\lambda=O(\sqrt{d})$ this bound only has order of magnitude
$O(n/\sqrt{d})$. This contrasts sharply
with the behavior of random graphs where it is known
(see \cite{Bol01} and \cite{JanLucRuc00}) that the independence number of random graph
$G(n,p)$ is only $\Theta\big(\frac{n}{d}\log d\big)$ where
$d=(1+o(1))np$. More strikingly there are graphs for which
the bound in Proposition \ref{ind-set} cannot be improved. One such
graph is the
Paley graph $P_q$ with
$q=p^2$ (Example 3 in the previous section).
Indeed it is easy to see that
in this case all elements of the subfield $GF(p)\subset GF(p^2)$ are quadratic residues in
$GF(p^2)$. This implies that for every quadratic non-residue $\beta \in GF(p^2)$
all elements of any multiplicative coset $\beta GF(p)$ form an independent
set of size
$p$. As we already mentioned, $P_q$ is an
$(n,d,\lambda)$-graph with $n=p^2,d=(p^2-1)/2$ and $\lambda=(p+1)/2$.
Hence for this graph we get $\alpha(P_q)=\lambda n/(d+\lambda)$.
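Proposition \ref{ind-set} is also easy to test by brute force on a prime-order Paley graph; the example $P_{13}$ is our own choice, with $d=6$ and $\lambda=(1+\sqrt{13})/2$, so the bound gives $\alpha(P_{13})\leq \lambda n/(d+\lambda)\approx 3.6$.

```python
import itertools

# Brute-force independence number of the Paley graph P_13 (an assumed,
# illustrative example), compared with the bound lambda*n/(d + lambda).
q = 13
residues = {pow(x, 2, q) for x in range(1, q)}           # quadratic residues

def independent(T):
    return all((a - b) % q not in residues
               for a, b in itertools.combinations(T, 2))

# Largest r for which some r-subset of the vertices is independent.
a = max(r for r in range(1, q + 1)
        if any(independent(T) for T in itertools.combinations(range(q), r)))

d, lam = (q - 1) // 2, (1 + q ** 0.5) / 2
assert a <= lam * q / (d + lam)
```

Here the bound is nearly attained, as in the extremal case $q=p^2$ discussed above.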
Next we obtain a lower bound on the independence number
of pseudo-random graphs. We present a slightly
more general result by Alon et al. \cite{AKS} which we will need later.
\begin{prop}
\label{ind-set1}\cite{AKS}
Let $G$ be an $(n,d,\lambda)$-graph such that
$\lambda<d\leq 0.9n$.
Then the induced subgraph $G[U]$ of $G$ on any
subset $U, |U|=m$, contains an independent set of size at least
$$\alpha(G[U]) \geq \frac{n}{2(d-\lambda)}
\ln \left(\frac{m(d-\lambda)}{n(\lambda+1)} +1 \right).$$
In particular,
$$\alpha(G) \geq \frac{n}{2(d-\lambda)}
\ln \left(\frac{(d-\lambda)}{(\lambda+1)} +1 \right).$$
\end{prop}
\noindent
{\bf Sketch of proof.}\,
First using Theorem \ref{eigen} it is easy to show that if
$U$ is a set of $bn$ vertices of $G$, then the minimum degree in the
induced subgraph $G[U]$ is at most $db+\lambda(1-b)=(d-\lambda)b+\lambda$.
Construct an independent set $I$ in the induced subgraph
$G[U]$ of $G$ by the following greedy procedure.
Repeatedly choose a vertex of minimum degree
in $G[U]$, add it to the independent set $I$ and delete it and its
neighbors from $U$, stopping
when the remaining set of vertices is empty.
Let $a_i, i\geq 0$ be the sequence of numbers defined by the
following recurrence formula:
$$a_0=m,~~
a_{i+1}=a_i-\left(d\frac{a_i}{n}+\lambda(1-\frac{a_i}{n})+1\right)=
\left(1-\frac{d-\lambda}{n}\right)a_i-(\lambda+1),~\forall i \geq 0.$$
By the above discussion, it is easy to see
that the size of the remaining set of vertices after $i$ iterations is at
least $a_i$. Therefore
the size of the resulting independent set $I$ is at least the smallest
index $i$ such that $a_i\leq 0$. By solving the recurrence equation we
obtain that this index satisfies:
$$\hspace{5cm}i \geq \frac{n}{2(d-\lambda)}
\ln \left(\frac{m(d-\lambda)}{n(\lambda+1)} +1 \right)\,. \hspace{5cm}
\Box$$
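The greedy procedure from this sketch is short enough to run directly. As a test case (our own choice, not from the text) we use the Paley graph $P_{101}$, an $(n,d,\lambda)$-graph with $n=101$, $d=50$ and $\lambda=(1+\sqrt{101})/2$, and compare the resulting independent set with the lower bound of Proposition \ref{ind-set1}.

```python
import math

# Greedy minimum-degree independent set on the Paley graph P_101
# (assumed illustrative instance), versus the claimed lower bound.
q = 101
residues = {pow(x, 2, q) for x in range(1, q)}
nbrs = {v: {(v + s) % q for s in residues} for v in range(q)}

U, I = set(range(q)), []
while U:
    v = min(U, key=lambda u: len(nbrs[u] & U))           # min degree in G[U]
    I.append(v)
    U -= nbrs[v] | {v}                                   # delete v and its neighbours

assert all(u not in nbrs[v] for u in I for v in I if u != v)   # I is independent

n, d, lam = q, (q - 1) // 2, (1 + math.sqrt(q)) / 2
bound = n / (2 * (d - lam)) * math.log((d - lam) / (lam + 1) + 1)
assert len(I) >= bound
```

Unwinding the recurrence for $a_i$ with these parameters shows the greedy procedure must make at least four picks, comfortably above the logarithmic bound of about $2.3$.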
\vspace{0.15cm}
\noindent
For an $(n,d,\lambda)$-graph $G$ with $\lambda \leq d^{1-\delta},
\delta>0$, this proposition
implies that $\alpha(G) \geq \Omega\big(\frac{n}{d}\log d\big)$. This shows that
the independence number of a
pseudo-random graph with a sufficiently small second eigenvalue is up to a
constant factor
at least as large as $\alpha(G(n,p))$ with $p=d/n$. On the other hand the graph $H_k$
(Example 4, Section 3)
shows that even when $\lambda \leq O(\sqrt{d})$ the independence number
of $(n,d,\lambda)$-graph can be smaller than $\alpha(G(n,p))$ with
$p=d/n$.
This graph has $n=2^{k-1}-1$ vertices, degree $d=(1+o(1))n/2$ and $\lambda=
\Theta(\sqrt{d})$. Also it is easy to see that every independent set in
$H_k$ corresponds to a family of orthogonal vectors in ${\bf Z}_2^k$ and thus has size at
most $k=(1+o(1))\log_2 n$. This is only half of the size of a maximum
independent set in the corresponding random graph $G(n,1/2)$.
A {\em vertex-coloring} of a graph $G$ is an assignment of a color
to each of its vertices. The coloring is {\em proper} if no two
adjacent vertices get the same color. The {\em chromatic number}
$\chi(G)$ of $G$ is the minimum number of colors used in a proper coloring
of $G$. Since every color class in a proper coloring of $G$ forms an independent
set we can immediately obtain that $\chi(G)\geq |V(G)|/\alpha(G)$. This together
with Proposition \ref{ind-set} implies the following result of Hoffman \cite{Ho}.
\begin{coro}
\label{hofman}
Let $G$ be an $(n,d,\lambda)$-graph. Then the chromatic number of $G$ is at least
$1+d/\lambda$.
\end{coro}
On the other hand, using Proposition \ref{ind-set1}, one can obtain the following
upper bound on the chromatic number of pseudo-random graphs.
\begin{theo}
\label{chromatic}\cite{AKS}
Let $G$ be an $(n,d,\lambda)$-graph
such that $\lambda<d\leq 0.9n$.
Then the chromatic number of $G$ satisfies
$$
\chi(G) \leq \frac{6(d-\lambda)}{\ln\big (\frac{d-\lambda}{\lambda+1}
+1\big)}\, .
$$
\end{theo}
\noindent
{\bf Sketch of proof.}\,
Color the graph $G$ as follows. As long as the remaining
set of vertices $U$ contains at least
$n/\ln (\frac{d-\lambda}{\lambda+1}+1)$ vertices, by Proposition
\ref{ind-set1} we can find an
independent
set of vertices in the induced subgraph $G[U]$ of size at least
$$\frac{n}{2(d-\lambda)} \ln \left(
\frac{|U|(d-\lambda)}{n(\lambda+1)}+1\right) \geq
\frac{n}{4(d-\lambda)} \ln \left(
\frac{d-\lambda}{\lambda+1}+1\right).$$
Color all the members of such a set
by a new color, delete them from the graph and
continue.
When this process terminates, the remaining set of vertices $U$ is
of size at most $n/\ln (\frac{d-\lambda}{\lambda+1}+1)$ and we used at
most $4(d-\lambda)/\ln(\frac{d-\lambda}{\lambda+1}+1)$ colors so far.
As we already mentioned above, for every subset $U'\subset U$ the induced
subgraph $G[U']$ contains a vertex of degree at most
$$(d-\lambda)\frac{|U'|}{n}+\lambda\leq
(d-\lambda)\frac{|U|}{n}+\lambda\leq
\frac{d-\lambda}{\ln (\frac{d-\lambda}{\lambda+1}+1)}+\lambda \leq
\frac{2(d-\lambda)}{\ln (\frac{d-\lambda}{\lambda+1}+1)}-1.$$
Thus we
can complete the coloring of $G$ by
coloring $G[U]$ using
at most $2(d-\lambda)/\ln (\frac{d-\lambda}{\lambda+1}+1)$
additional colors.
The total number of colors used is at most
$6(d-\lambda)/\ln (\frac{d-\lambda}{\lambda+1}+1)$. \hfill$\Box$
\vspace{0.15cm}
\noindent
For an $(n,d,\lambda)$-graph $G$ with $\lambda \leq d^{1-\delta}, \delta>0$ this proposition
implies that $\chi(G) \leq O\big(\frac{d}{\log d}\big)$. This shows that
the chromatic number of a
pseudo-random graph with a sufficiently small second eigenvalue is up to a
constant factor
at least as small as $\chi(G(n,p))$ with $p=d/n$.
On the other hand, the
Paley graph $P_q, q=p^2$, shows that sometimes the chromatic number of
a pseudo-random graph can be much smaller than the above bound, even in
the case
$\lambda=\Theta(\sqrt{d})$. Indeed, as we already mentioned above,
all elements of the subfield $GF(p)\subset GF(p^2)$ are quadratic residues in
$GF(p^2)$. This implies that for every quadratic non-residue $\beta \in GF(p^2)$
all elements of a multiplicative coset $\beta GF(p)$ form an independent
set of size
$p$. Also all additive cosets of $\beta GF(p)$ are independent sets in
$P_q$. This implies that $\chi(P_q)\leq \sqrt{q}=p$. In fact $P_q$ contains a clique of size
$p$ (all elements of a subfield $GF(p)$), showing that
$\chi(P_q)=\sqrt{q}\ll q/\log q$. Therefore the bound in Corollary
\ref{hofman} is best possible.
A more complicated quantity related to the chromatic number is the {\em
list-chromatic
number}
$\chi_l(G)$ of $G$, introduced in \cite{ERT} and \cite{Vi}. This is the
minimum integer $k$ such that for every assignment of a set $S(v)$ of
$k$ colors to every vertex $v$ of $G$, there is a proper coloring of
$G$ that assigns to each vertex $v$ a color from $S(v)$. The study of this
parameter received a considerable amount of attention in
recent years, see, e.g., \cite{Al2}, \cite{KTV} for two surveys.
Note that from the definition it follows immediately that $\chi_l(G) \geq
\chi(G)$ and it is known that the gap between these two parameters
can be arbitrarily large. The list-chromatic number of pseudo-random
graphs was
studied by Alon, Krivelevich and Sudakov \cite{AKS} and independently by Vu
\cite{Vu}. In \cite{AKS} and \cite{Vu} the authors mainly considered graphs with all degrees
$(1+o(1))np$ and all co-degrees $(1+o(1))np^2$. Here we use ideas from these two
papers to obtain an upper bound on the list-chromatic number of
an $(n,d,\lambda)$-graph.
This bound has the same order of magnitude as the list chromatic number of
the truly random
graph $G(n,p)$ with $p=d/n$ (for more details see \cite{AKS}, \cite{Vu}).
\begin{theo}
\label{choice}
Suppose that $0<\delta<1$ and let $G$ be an $(n,d,\lambda)$-graph satisfying
$\lambda \leq d^{1-\delta}$, $d\le 0.9n$.
Then the list-chromatic number of $G$ is bounded by
$$\chi_l(G)\leq O\left(\frac{d}{\delta\log d}\right).$$
\end{theo}
\noindent
{\bf Proof.}\, Suppose that $d$ is sufficiently large and consider first the
case when $d \leq n^{1-\delta/4}$. Then by
Theorem \ref{eigen} the neighbors of every vertex in $G$ span
at most $d^3/n+\lambda d \leq O(d^{2-\delta/4})$ edges. Now we can apply the
result of Vu \cite{Vu} which says that if the neighbors of every vertex in
a graph $G$ with maximum degree $d$ span at most $O(d^{2-\delta/4})$
edges then
$\chi_l(G)\leq O\big(d/(\delta\log d)\big).$
Now consider the case when $d \geq n^{1-\delta/4}$. For every
vertex $v \in V$, let $S(v)$ be a list of at least $\frac{7d}{\delta \log n}$
colors.
Our objective is to prove that there is a proper coloring of $G$
assigning to each vertex a color from its list. As long as there is
a set $C$ of at least
$n^{1-\delta/2} $ vertices containing the same color $c$ in their
lists we can, by
Proposition \ref{ind-set1}, find an independent set of at least
$\frac{\delta n}{6d} \log n$ vertices in $C$, color them all by
$c$, omit them from the graph and omit the color $c$ from all
lists. The total number of colors that can be deleted in this
process cannot exceed $\frac{6d}{\delta \log n}$ (since in each
such deletion at least $\frac{\delta n}{6d}\log n$ vertices are
deleted from the graph). When this process terminates, no color
appears in more than $n^{1-\delta/2}$ lists, and each list still
contains at least $\frac{d}{\delta \log n} > n^{1-\delta/2}$
colors. Therefore, by Hall's theorem, we can assign to each of the
remaining vertices
a color from its list so that no color is being assigned to more
than one vertex,
thus completing the coloring and the proof.\hfill $\Box$
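The final step of the proof, assigning to each remaining vertex a distinct color from its list via Hall's theorem, amounts to finding a system of distinct representatives. A minimal sketch (the toy instance is ours, not from the proof) using Kuhn's augmenting-path algorithm for bipartite matching:

```python
# Find a system of distinct representatives: each vertex gets a color
# from its list, and no color is assigned to more than one vertex.

def distinct_representatives(lists):
    """lists[v] = colors allowed at vertex v; returns a dict
    vertex -> color with all chosen colors distinct, or None."""
    owner = {}  # color -> vertex currently holding it

    def try_assign(v, seen):
        for c in lists[v]:
            if c in seen:
                continue
            seen.add(c)
            # take c if it is free, or if its owner can move elsewhere
            if c not in owner or try_assign(owner[c], seen):
                owner[c] = v
                return True
        return False

    for v in range(len(lists)):
        if not try_assign(v, set()):
            return None
    return {v: c for c, v in owner.items()}

# toy instance: 4 vertices, each list of size 3, each color in at most
# 3 lists, so Hall's condition holds and an assignment must exist
lists = [[0, 1, 2], [0, 1, 3], [1, 2, 3], [0, 2, 3]]
assignment = distinct_representatives(lists)
assert assignment is not None
assert len(set(assignment.values())) == 4          # all colors distinct
assert all(assignment[v] in lists[v] for v in assignment)
```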
\subsection{Small subgraphs}
We now examine small subgraphs of pseudo-random graphs.
Let $H$ be a fixed graph of order $s$ with $r$ edges and with automorphism group
$Aut(H)$. Using the
second moment method it is
not difficult to show that for every constant $p$ the random graph $G(n,p)$ contains
$$(1+o(1))p^r(1-p)^{{s\choose 2}-r}\frac{n^s}{|Aut(H)|}$$
induced copies of $H$. Thomason extended
this result to jumbled graphs. He showed in \cite{Tho87a} that if a graph
$G$ is
$(p,\alpha)$-jumbled and $p^sn\gg 42\alpha s^2$ then the number of induced subgraphs of $G$
which are isomorphic to $H$ is
$(1+o(1))p^r(1-p)^{{s\choose 2}-r}n^s/|Aut(H)|$.
Here we present a result of Noga Alon \cite{Alon} that
proves that every large subset of the set of vertices
of an $(n,d,\lambda)$-graph contains the ``correct'' number of copies of any
fixed sparse graph. An additional advantage of this result is that its assertion depends
not on the number of vertices $s$ in $H$ but only on its maximum degree
$\Delta$ which can be smaller than $s$.
Special cases of this result have appeared in various
papers including \cite{AK}, \cite{AP} and probably other papers as well.
The approach here is similar to the one in \cite{AP}.
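Before stating the theorem, here is a quick seeded experiment (our own, not from the survey) illustrating the counting heuristic: for $H=K_3$ ($s=3$, $r=3$, $|Aut(H)|=6$) the number of copies of $H$ in $G(n,p)$ concentrates around $p^r n^s/|Aut(H)|$.

```python
# Count triangles in one seeded sample of G(n, p) and compare with
# the first-moment prediction p^3 * n^3 / 6.
import random

random.seed(0)
n, p = 300, 0.1
adj = [set() for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

# each triangle u < v < w is counted once: for the edge (u, v),
# count common neighbours w with w > v
triangles = 0
for u in range(n):
    for v in adj[u]:
        if v > u:
            triangles += sum(1 for w in adj[u] & adj[v] if w > v)

expected = p**3 * n**3 / 6  # p^r * n^s / |Aut(K_3)| = 4500 here
assert 0.75 * expected < triangles < 1.25 * expected
print(triangles, expected)
```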
\begin{theo}
\label{number-subgraphs} \cite{Alon}
Let $H$ be a fixed graph with $r$ edges, $s$ vertices and maximum degree
$\Delta$, and let $G=(V,E)$ be an $(n,d,\lambda)$-graph, where, say,
$d \leq 0.9n$.
Let $m <n$
satisfy $m \gg \lambda (\frac{n}{d})^{\Delta}$.
Then, for every
subset $V' \subset V$ of cardinality $m$, the number of (not necessarily
induced) copies of $H$ in $V'$ is
$$
\big(1+o(1)\big)\frac{m^s}{|Aut(H)|} \left(\frac{d}{n}\right)^r.
$$
\end{theo}
Note that this implies that a similar result holds for the number
of induced copies of
$H$. Indeed, if $ n \gg d$ and
$m \gg \lambda (\frac{n}{d})^{\Delta+1}$ then the number of copies
of each graph obtained from $H$ by adding to it at least one edge
is, by the above Theorem, negligible compared to the number of
copies of $H$, and hence almost all copies of $H$ in $V'$ are induced. If
$d=\Theta(n)$ then,
by inclusion-exclusion, the number of
induced copies of $H$ in $V'$ as above is also roughly the
``correct'' number. A special case of the above theorem implies that if
$\lambda =O(\sqrt d)$ and $d \gg n^{2/3}$, then any $(n,d, \lambda)$-graph
contains many triangles. As shown in Example 9, Section 3, this is not true
when $d=(\frac{1}{4}+o(1)) n^{2/3}$, showing that the assertion of
the theorem is not far from being best possible.
\noindent
{\bf Proof of Theorem \ref{number-subgraphs}.}\,
To prove the theorem, consider a random one-to-one mapping
of the set of vertices of $H$ into the set of vertices $V'$.
Denote by $A(H)$ the event that
every edge of $H$ is mapped on an edge of $G$. In such a
case we say that the mapping is an embedding of $H$. Note that it
suffices to prove that
\begin{equation}
\label{e1}
Pr(A(H))=(1+o(1))\left(\frac{d}{n}\right)^r.
\end{equation}
We prove (\ref{e1}) by induction on the number of edges $r$.
The base case $(r=0)$ is trivial. Suppose that (\ref{e1}) holds for
all graphs with fewer than $r$ edges, and let
$uv$ be an edge of $H$.
Let $H_{uv}$ be the graph obtained from $H$ by removing the edge $uv$
(and keeping all vertices).
Let $H_u$ and $H_v$ be the induced subgraphs of $H$ on
the sets of vertices $V(H)\setminus \{ v\}$ and $V(H)\setminus
\{ u\}$, respectively, and let $H'$ be the induced subgraph of $H$ on
the set of
vertices $V(H)\setminus
\{ u,v\}$. Let $r'$ be the number of
edges of $H'$ and note that
$r-r' \leq 2(\Delta-1)+1=2\Delta -1. $
Clearly $Pr(A(H_{uv}))=Pr(A(H_{uv})|A(H'))\cdot
Pr(A(H'))$. Thus, by the induction hypothesis applied to
$H_{uv}$ and to $H'$:
\[
Pr(A(H_{uv})|A(H'))=(1+o(1)) \left(\frac{d}{n}\right)^{r-1-r'}.
\]
For an embedding $f'$ of $H'$, let $\nu(u,f')$ be the number
of extensions of $f'$ to an embedding of $H_u$ in $V'$; $\nu(v,f')$
denotes the same for $v$. Clearly, the number of extensions of
$f'$ to an embedding of $H_{uv}$ in $V'$ is at least
$\nu(u,f')\nu(v,f')-\min(\nu(u,f'),\nu(v,f'))$ and at most
$\nu(u,f')\nu(v,f')$. Thus we have
\[
\frac{\nu(u,f')\nu(v,f')-\min(\nu(u,f'),\nu(v,f'))}{(m-s+2)(m-s+1)}\leq
Pr\big(A(H_{uv})|f'\big)\leq
\frac{\nu(u,f')\nu(v,f')}{(m-s+2)(m-s+1)}.
\]
Taking expectation over all embeddings $f'$ the middle term
becomes $Pr(A(H_{uv})|A(H'))$, which is $(1+o(1))
(\frac{d}{n})^{r-1-r'}$. Note that by our choice of the
parameters and the well known fact that $\lambda =\Omega(\sqrt d)$,
the expectation of
the term $\min(\nu(u,f'),\nu(v,f'))$ (which is at most $m$) is negligible and we get
\[
E_{f'}\big(\nu(u,f')\nu(v,f')|\ A(H')\big)=(1+o(1)) m^2 \left(\frac{d}{n}\right)^{r-1-r'}.
\]
Now let $f$ be a random
one-to-one mapping of $V(H)$ into $V'$.
Let $f'$ be a fixed embedding of $H'$. Then
\[
Pr_f\big(A(H)|\ f|_{V(H)\setminus \{ u,v\}}=f'\big)=
\left(\frac{d}{n}\right)\frac{\nu(u,f')\nu(v,f')}{(m-s+2)(m-s+1)}+\delta,
\]
where $|\delta|\leq
\lambda\frac{\sqrt{\nu(u,f')\nu(v,f')}}{(m-s+2)(m-s+1)}$.
This follows from Theorem \ref{eigen}, where we take the possible
images of $u$ as the set $U$ and the possible images of $v$ as
the set $W$. Averaging over embeddings $f'$ we get
$Pr(A(H)|A(H'))$ on the left hand side. On the right hand side
we get $(1+o(1)) (\frac{d}{n})^{r-r'}$ from the first term plus the
expectation of the error term $\delta$. By Jensen's inequality,
the absolute value of this expectation is
bounded by
\[
\lambda\frac{\sqrt{E(\nu(u,f')\nu(v,f'))}}{(m-s+2)(m-s+1)}
=(1+o(1)) \frac{\lambda}{m} \left(\frac{d}{n}\right)^{(r-r'-1)/2}.
\]
Our assumptions on the parameters imply that this is negligible
with respect to the main term. Therefore
$ Pr(A(H))=Pr(A(H)|A(H')) \cdot Pr(A(H'))=(1+o(1)) \left(\frac{d}{n}\right)^r$,
completing the proof of
Theorem \ref{number-subgraphs}.
\hfill $\Box$
If we are only interested in the existence of one copy of $H$ then one can
sometimes improve the conditions on $d$ and $\lambda$ in Theorem
\ref{number-subgraphs}. For example if
$H$ is a complete graph of order $r$ then the following result was proved in
\cite{AK}.
\begin{prop}
\label{cliques}\cite{AK}
Let $G$ be an $(n,d,\lambda)$-graph. Then for every integer $r \geq 2$ every set of vertices
of $G$ of size
more than
$$\frac{(\lambda+1)n}{d}\left(1+\frac{n}{d}+\ldots+\Big(\frac{n}{d}\Big)^{r-2}\right)$$
contains a copy of a complete graph $K_r$.
\end{prop}
\noindent
In particular, when $d=\Omega(n^{2/3})$ and $\lambda \leq O(\sqrt{d})$,
any $(n,d,\lambda)$-graph contains a
triangle, and, as Example 9 in Section 3 shows, this is tight. Unfortunately
we do not know if
this bound is also tight for $r \geq 4$. It would be interesting to construct
examples of
$(n,d,\lambda)$-graphs with $d=\Theta\big(n^{1-1/(2r-3)}\big)$ and $\lambda \leq
O(\sqrt{d})$ which contain no copy of $K_r$.
Finally we present one additional result about the existence of odd
cycles in pseudo-random graphs.
\begin{prop}
\label{cycles}
Let $k\geq 1$ be an integer and let $G$ be an $(n,d,\lambda)$-graph such that
$d^{2k}/n\gg \lambda^{2k-1}$. Then $G$ contains a cycle of length $2k+1$.
\end{prop}
\noindent
{\bf Proof.}\,
Suppose that $G$ contains no cycle of length $2k+1$.
For every two vertices $u,v$ of $G$ denote by $d(u,v)$ the length of a shortest
path from $u$ to $v$. For every $i \geq 1$ let $N_i(v)=\{u~|~d(u,v)=i\}$ be the set of
all vertices in $G$ which are at distance exactly $i$ from $v$. In \cite{EFRS} Erd\H{o}s et
al. proved that if $G$ contains no cycle of length
$2k+1$ then for any $1 \leq i \leq k$ the induced graph $G[N_i(v)]$ contains
an independent set of size $|N_i(v)|/(2k-1)$. This result together with
Proposition \ref{ind-set} implies that for every vertex $v$ and for every $1 \leq i \leq
k$, $|N_i(v)| \leq (2k-1)\lambda n/d$. Since $d^{2k}/n\gg \lambda^{2k-1}$ we have that
$\lambda=o(d)$. Therefore by Theorem \ref{eigen}
$$e\big(N_i(v)\big) \leq \frac{d}{2n}|N_i(v)|^2+\lambda |N_i(v)| \leq
\frac{d}{n} \frac{(2k-1)\lambda n}{2d}|N_i(v)|+\lambda |N_i(v)|
<2k\lambda|N_i(v)|=o\big(d|N_i(v)|\big).$$
Next we prove by induction that for every $1\le i \leq k$,
$\frac{|N_{i+1}(v)|}{|N_i(v)|}\geq (1-o(1))d^2/\lambda^2$.
By the above discussion the number of edges spanned by
$N_1(v)$ is $o(d^2)$ and therefore
$e\big(N_1(v),N_2(v)\big)=d^2-o(d^2)=(1-o(1))d^2$.
On the other hand, by Theorem \ref{eigen}
\begin{eqnarray*}
e\big(N_1(v),N_2(v)\big) &\leq& \frac{d}{n}|N_1(v)||N_2(v)|+\lambda\sqrt{|N_1(v)||N_2(v)|}
\leq \frac{d}{n}\,d\,\frac{(2k-1)\lambda n}{d} +\lambda\sqrt{d|N_2(v)|}\\
&=&\lambda d\sqrt{\frac{|N_2(v)|}{d}}+O(\lambda d)=
\lambda d\sqrt{\frac{|N_2(v)|}{|N_1(v)|}}+o(d^2).
\end{eqnarray*}
Therefore $\frac{|N_2(v)|}{|N_1(v)|}\geq (1-o(1))d^2/\lambda^2$. Now
assume that
$\frac{|N_{i}(v)|}{|N_{i-1}(v)|}\geq (1-o(1))d^2/\lambda^2$. Since the number
of edges spanned by $N_{i}(v)$ is $o\big(d|N_i(v)|\big)$ we obtain
\begin{eqnarray*}
e\big(N_i(v),N_{i+1}(v)\big)&=&d|N_i(v)|-2e\big(N_i(v)\big)-e\big(N_{i-1}(v),N_i(v)\big) \\
&\geq& d|N_i(v)|-o\big(d|N_i(v)|\big)-d|N_{i-1}(v)|\\
&\geq&
(1-o(1))d|N_i(v)|-(1+o(1))d(\lambda^2/d^2)|N_i(v)|\\
&=&(1-o(1))d|N_i(v)|-o\big(d|N_i(v)|\big)=
(1-o(1))d|N_i(v)|.
\end{eqnarray*}
On the other hand, by Theorem \ref{eigen}
\begin{eqnarray*}
e\big(N_i(v),N_{i+1}(v)\big) &\leq&
\frac{d}{n}|N_i(v)||N_{i+1}(v)|+\lambda\sqrt{|N_i(v)||N_{i+1}(v)|}\\
&\leq& \frac{d}{n} \,\frac{(2k-1)\lambda n}{d}\,|N_i(v)|
+\lambda\sqrt{|N_i(v)||N_{i+1}(v)|}\\
&=&O(\lambda |N_i(v)|)+\lambda |N_i(v)|\sqrt{\frac{|N_{i+1}(v)|}{|N_i(v)|}}=
\lambda |N_i(v)|\sqrt{\frac{|N_{i+1}(v)|}{|N_i(v)|}}+o\big(d|N_i(v)|\big).
\end{eqnarray*}
Therefore $\frac{|N_{i+1}(v)|}{|N_i(v)|}\geq (1-o(1))d^2/\lambda^2$ and we proved the
induction step.
Finally note that
$$|N_k(v)| = d \prod_{i=1}^{k-1}\frac{|N_{i+1}(v)|}{|N_i(v)|}\geq
(1-o(1))d\left(\frac{d^2}{\lambda^2}\right)^{k-1}=
(1-o(1)) \frac{d^{2k-1}}{\lambda^{2k-2}}\gg (2k-1)\frac{\lambda n}{d}.$$
This contradiction completes the proof.
\hfill $\Box$
\vspace{0.15cm}
\noindent
This result implies that when $d\gg n^{\frac{2}{2k+1}}$ and $\lambda \leq O(\sqrt{d})$
then any $(n,d,\lambda)$-graph contains a cycle of length $2k+1$. As shown
by Example 10 of the
previous section this result is tight. It is worth mentioning here that
it follows from the result of
Bondy and Simonovits \cite{BS} that any $d$-regular graph with
$d \gg n^{1/k}$ contains a cycle of length $2k$.
Here we do not need to make any assumption about the second eigenvalue $\lambda$.
This bound is known to be tight for $k=2,3,5$ (see Examples 6,7, Section 3).
\subsection{Extremal properties}
Tur\'an's theorem \cite{T} is one of the fundamental results in
Extremal Graph Theory. It states that among $n$-vertex graphs not
containing a clique of size $t$ the complete $(t-1)$-partite
graph with (almost) equal parts has the maximum number of edges.
For two graphs $G$ and $H$ we define the Tur\'an number $ex(G,H)$ of $H$
in $G$ as the largest integer $e$ such that there is an $H$-free subgraph of
$G$ with $e$ edges. Obviously $ex (G,H) \leq |E(G)|$, where $E(G)$ denotes the
edge set of $G$. Tur\'an's theorem, in an asymptotic form, can be restated as
$$ex(K_n, K_t) = \left(\frac{t-2}{t-1}+o(1)\right){n\choose 2},$$
that is, the largest $K_t$-free subgraph of $K_n$
contains approximately $\frac{t-2}{t-1}$-fraction of its edges.
Here we would like to describe an extension of this result to
$(n,d,\lambda)$-graphs.
For an arbitrary graph $G$ on $n$ vertices it is
easy to give a lower bound on $ex (G,K_t)$ following Tur\'an's
construction. One can partition the vertex set of $G$ into $t-1$
parts such that the degree of each vertex within its own part
is at most $\frac{1}{t-1}$-times its degree in $G$. Thus the subgraph
consisting of the edges of $G$ connecting two different parts has at least a
$\frac{t-2}{t-1}$-fraction of the edges of $G$ and
is clearly $K_t$-free. We say that a graph (or rather a family of graphs)
is {\em $t$-Tur\'an} if this
trivial lower bound is essentially an upper bound as well. More precisely,
$G$ is $t$-Tur\'an if $ex(G,K_t) = \big(\frac{t-2}{t-1} +o(1)\big) |E(G)|$.
It has been shown that for any fixed $t$, there is a
number $m(t,n)$ such that almost all graphs on $n$ vertices with
$m \ge m(t,n) $ edges are $t$-Tur\'an
(see \cite{SV}, \cite{KohRodS} for the most recent estimate
for $m(t,n)$). However, these
results are about random graphs and do not provide a
deterministic sufficient condition for a graph to be $t$-Tur\'an.
It appears that such a condition can be obtained by a simple assumption
about the spectrum of the graph. This was proved by
Sudakov, Szab\'o and Vu in \cite{SSV}. They obtained the following result.
\begin{theo}
\label{t1} \cite{SSV}
Let $t\geq 3$ be an integer and let $G=(V,E)$ be an
$(n,d,\lambda)$-graph. If $\lambda=o(d^{t-1}/n^{t-2})$ then
$$ex(G,K_t)=\left(\frac{t-2}{t-1}+o(1)\right)|E(G)|.$$
\end{theo}
Note that this theorem generalizes Tur\'an's theorem, as
the second eigenvalue of the complete graph $K_n$ is 1.
Let us briefly discuss the sharpness of Theorem \ref{t1}.
For $t=3$, one can show that its condition involving $n,d$ and $\lambda$ is
asymptotically tight. Indeed,
in this case the above theorem
states that if $d^2/n\gg \lambda$, then one needs to delete about
half of the edges of $G$ to destroy all the triangles. On
the other hand, by taking the example of Alon
(Section \ref{examples}, Example 9) whose parameters are:
$d=\Theta(n^{2/3})$, $\lambda=\Theta(n^{1/3})$, and blowing it up (which
means replacing each vertex by an independent set of size $k$ and
connecting two vertices in the new graph if and only if the
corresponding vertices of $G$ are connected by an edge) we get a graph
$G(k)$ with the following properties:
\begin{center}
$|V(G(k))|=n_k=nk$;\,\, $G(k)$ is $d_k=dk$-regular;\,\, $G(k)$ is
triangle-free;\,\,
$\lambda(G(k))=k\lambda$\, and \,$\lambda(G(k))=\Omega\big(d_k^2/n_k\big)$.
\end{center}
\noindent
The above bound for the second eigenvalue of $G(k)$ can be obtained by
using well known results on the eigenvalues of the tensor product of
two matrices, see \cite{KriSudSza02} for more details.
This construction implies that for $t=3$ and any
sensible degree $d$ the condition in Theorem \ref{t1} is not far
from being best possible.
\subsection{Factors and fractional factors}
Let $H$ be a fixed graph on $h$ vertices. We say that a graph $G$ on
$n$ vertices has an {\em $H$-factor} if $G$ contains $n/h$ vertex
disjoint copies of $H$. Of course, a trivial necessary condition for the
existence of an $H$-factor in $G$ is that $h$ divides $n$. For example,
if $H$ is just an edge $H=K_2$, then an $H$-factor is a perfect matching
in $G$.
One of the most important classes of graph embedding problems is to find
sufficient conditions for the existence of an $H$-factor in a graph $G$,
usually assuming that $H$ is fixed while the order $n$ of $G$ grows. In
many cases such conditions are formulated in terms of the minimum
degree of $G$. For example, the classical result of Hajnal and
Szemer\'edi \cite{HajSze70} asserts that if the minimum degree
$\delta(G)$ satisfies $\delta(G)\ge (1-\frac{1}{r})n$, then $G$
contains $\lfloor n/r\rfloor$ vertex disjoint copies of $K_r$. The
statement of this theorem is easily seen to be tight.
It turns out that pseudo-randomness allows one in many cases to
significantly weaken sufficient conditions for $H$-factors and to obtain
results which fail to hold for general graphs of the same edge density.
Consider first the case of a constant edge density $p$. In this case the
celebrated Blow-up Lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi
\cite{KomSarSze97} can
be used to show the existence of $H$-factors. In order to formulate the
Blow-up Lemma we need to introduce the notion of a super-regular pair.
Given $\epsilon>0$ and $0<p<1$, a bipartite graph $G$ with bipartition
$(V_1,V_2)$, $|V_1|=|V_2|=n$, is called {\em super
$(p,\epsilon)$-regular} if
\begin{enumerate}
\item For all vertices $v\in V(G)$,
$$
(p-\epsilon)n\le d(v)\le (p+\epsilon)n\ ;
$$
\item For every pair of sets $(U,W)$, $U\subset V_1$, $W\subset V_2$,
$|U|,|W|\ge \epsilon n$,
$$
\left|\frac{e(U,W)}{|U||W|}-\frac{|E(G)|}{n^2}\right|\le \epsilon\ .
$$
\end{enumerate}
\begin{theo}\label{KSS}\cite{KomSarSze97}
For every choice of integers $r$ and $\Delta$ and a real $0<p<1$ there
exist an $\epsilon>0$ and an integer $n_0(\epsilon)$ such that the
following is true. Consider an $r$-partite graph $G$ with all partition
sets $V_1,\ldots,V_r$ of order $n>n_0$ and all ${r\choose 2}$ bipartite
subgraphs $G[V_i,V_j]$ super $(p,\epsilon)$-regular. Then for every
$r$-partite graph $H$ with maximum degree $\Delta(H)\le \Delta$ and all
partition sets $X_1,\ldots,X_r$ of order $n$, there exists an embedding
$f$ of $H$ into $G$ with each set $X_i$ mapped onto $V_i$,
$i=1,\ldots,r$.
\end{theo}
(The above version of the Blow-up Lemma, due to R\"odl and Ruci\'nski
\cite{RodRuc99}, is somewhat different from and yet equivalent to
the original formulation of Koml\'os et al. We use it here as it is
somewhat closer in spirit to the notion of pseudo-randomness.)
The Blow-up Lemma is a very powerful embedding tool. Combined with
another ``big cannon'', the Szemer\'edi Regularity Lemma, it can be used
to obtain approximate versions of many of the most famous embedding
conjectures. We suggest the reader to consult a survey of Koml\'os
\cite{Kom99} for more details and discussions.
It is easy to show that if $G$ is an $(n,d,\lambda)$-graph with
$d=\Theta(n)$ and $\lambda=o(n)$, and $h$ divides $n$, then a random
partition of $V(G)$ into $h$ equal parts $V_1,\ldots,V_h$ produces
almost surely ${h\choose 2}$ super $(d/n,\epsilon)$-regular pairs. Thus
the Blow-up Lemma can be applied to the obtained $h$-partite subgraph of
$G$ and we get:
\begin{coro}\label{Hdense}
Let $G$ be an $(n,d,\lambda)$-graph with $d=\Theta(n)$, $\lambda=o(n)$.
If $h$ divides $n$, then $G$ contains an $H$-factor, for every fixed
graph $H$ on $h$ vertices.
\end{coro}
The case of a vanishing edge density $p=o(1)$ is as usual
significantly more complicated. Here a sufficient condition for the
existence of an $H$-factor should depend heavily on the graph $H$, as
there may exist quite dense pseudo-random graphs without a single copy
of $H$, see, for example, the Alon graph (Example 9 of Section
\ref{examples}). When $H=K_2$, already a very weak pseudo-randomness
condition suffices to guarantee an $H$-factor, or a perfect matching, as
provided by Theorem \ref{edge-connectivity}. We thus consider the case
$H=K_3$, the task here is to guarantee a {\em triangle factor}, i.e. a
collection of $n/3$ vertex disjoint triangles. This problem has been
treated by Krivelevich, Sudakov and Szab\'o \cite{KriSudSza02}
who obtained the following result:
\begin{theo}\cite{KriSudSza02}\label{KrSS}
Let $G$ be an $(n,d,\lambda)$-graph. If $n$ is divisible by 3 and
$$
\lambda=o\left(\frac{d^3}{n^2\log n}\right)\,,
$$
then $G$ has a triangle factor.
\end{theo}
For best pseudo-random graphs with $\lambda=\Theta(\sqrt{d})$ the
condition of the above theorem is fulfilled when $d\gg
n^{4/5}\log^{2/5}n$.
To prove Theorem \ref{KrSS} Krivelevich et al. first partition the
vertex set $V(G)$ into three parts $V_1,V_2,V_3$ of equal cardinality at
random. Then they choose a perfect matching $M$ between $V_1$ and $V_2$
at random and form an auxiliary bipartite graph $\Gamma$ whose parts
are $M$ and $V_3$, and whose edges are formed by connecting $e\in M$ and
$v\in V_3$ if both endpoints of $e$ are connected by edges to $v$ in
$G$. The existence of a perfect matching in $\Gamma$ is equivalent to
the existence of a triangle factor in $G$. The authors of
\cite{KriSudSza02} then proceed to show that if $M$ is chosen at random
then the Hall condition is satisfied for $\Gamma$ with positive
probability.
The result of Theorem \ref{KrSS} is probably not tight. In fact, the
following conjecture is stated in \cite{KriSudSza02}:
\begin{conj}\cite{KriSudSza02}\label{KrSScon}
There exists an absolute constant $c>0$ so that every $d$-regular graph
$G$ on $3n$ vertices, satisfying $\lambda(G)\le cd^2/n$, has a triangle
factor.
\end{conj}
If true the above conjecture would be best possible, up to a constant
multiplicative factor. This is shown by taking the example of Alon
(Section \ref{examples}, Example 9)
and blowing each of its vertices by an independent set of size $k$.
As we already discussed in the previous section
(see also \cite{KriSudSza02}), this
gives a triangle-free $d_k$-regular graph
$G(k)$ on $n_k$ vertices which satisfies
$\lambda(G(k))=\Omega\big(d_k^2/n_k\big)$.
Krivelevich, Sudakov and Szab\'o considered in \cite{KriSudSza02} also
the fractional version of the triangle factor problem.
Given a graph $G=(V,E)$, denote by $T=T(G)$ the set of all triangles
of $G$. A function $f:T\rightarrow {\mathbb R}_+$ is called a
{\em fractional triangle factor} if for every $v\in V(G)$ one has
$\sum_{v\in t}f(t)=1$. If $G$ contains a triangle factor $T_0$, then
assigning values $f(t)=1$ for all $t\in T_0$, and $f(t)=0$ for all other $t\in T$ produces a fractional
triangle factor. This simple argument shows that the existence of a
triangle factor in $G$ implies the existence of a fractional triangle
factor. The converse statement is easily seen to be invalid in general.
The fact that a fractional triangle factor $f$ can take non-integer
values, as opposed to the characteristic vector of a ``usual'' (i.e.
integer) triangle factor, makes it possible to invoke the powerful machinery of
Linear Programming to prove a much better result than Theorem
\ref{KrSS}.
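The gap between the integer and fractional notions can be seen already on $K_4$ (a tiny example of ours, not from \cite{KriSudSza02}): since $3$ does not divide $4$, $K_4$ has no triangle factor, yet assigning $f(t)=1/3$ to each of its four triangles is a fractional triangle factor, because every vertex lies in exactly three triangles.

```python
# Verify the fractional triangle factor constraint on K_4.
from fractions import Fraction
from itertools import combinations

n = 4
vertices = range(n)
triangles = list(combinations(vertices, 3))   # all 4 triangles of K_4
assert len(triangles) == 4

f = {t: Fraction(1, 3) for t in triangles}
for v in vertices:
    # fractional-factor constraint: triangles through v have total weight 1
    assert sum(f[t] for t in triangles if v in t) == 1

assert n % 3 != 0  # so K_4 has no (integer) triangle factor
```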
\begin{theo}\label{KrSSfr}\cite{KriSudSza02}
Let $G=(V,E)$ be an $(n,d,\lambda)$-graph. If $\lambda \le
0.1d^2/n$ then $G$ has a fractional triangle factor.
\end{theo}
This statement is optimal up to a constant factor -- see the discussion
following Conjecture \ref{KrSScon}.
Already for the next case $H=K_4$ analogs of Theorem \ref{KrSS} and
\ref{KrSSfr} are not known. In fact, even an analog of Conjecture
\ref{KrSScon} is not available either, mainly due to the fact that we do not
know the weakest possible spectral condition guaranteeing a single copy
of $K_4$, or $K_r$ in general, for $r\ge 4$.
Finally it would be interesting to show that
for every integer $\Delta$ there exist a real $M$ and an integer $n_0$
so that the following is true. If $n\ge n_0$ and $G$ is an
$(n,d,\lambda)$-graph for which
$\lambda \le d\big(d/n\big)^M,$
then $G$ contains a copy of any graph $H$ on at most $n$ vertices with
maximum degree $\Delta(H)\le \Delta$. This can be considered as a
sparse analog of the Blow-up Lemma.
\subsection{Hamiltonicity}
A {\em Hamilton cycle} in a graph is a cycle passing through all the
vertices of this graph. A graph is called {\em Hamiltonian} if it has at
least one Hamilton cycle. For background information on Hamiltonian
cycles the reader can consult a survey of Chv\'atal \cite{Chv85}.
The notion of Hamilton cycles is one of the most central in modern Graph
Theory, and many efforts have been devoted to obtain sufficient
conditions for Hamiltonicity. The absolute majority of such known
conditions
(for example, the famous theorem of Dirac asserting that a graph on $n$
vertices with minimal degree at least $n/2$ is Hamiltonian) deal with
graphs which are fairly dense. Apparently there are very few sufficient
conditions for the existence of a Hamilton cycle in sparse graphs.
As it turns out spectral properties of graphs can supply rather powerful
sufficient conditions for Hamiltonicity. Here is one such result, quite
general and yet very simple to prove, given our knowledge of properties
of pseudo-random graphs.
\begin{prop}\label{Ham1}
Let $G$ be an $(n,d,\lambda)$-graph. If
$$
d-36\frac{\lambda^2}{d} \ge \frac{\lambda n}{d+\lambda}\,,
$$
then $G$ is Hamiltonian.
\end{prop}
\noindent{\bf Proof.\ } According to Theorem \ref{connectivity} $G$ is
$(d-36\lambda^2/d)$-vertex-connected. Also, $\alpha(G)\le \lambda
n/(d+\lambda)$, as
stated
in Proposition \ref{ind-set}.
Finally, a theorem of Chv\'atal and Erd\H os \cite{ChvErd72}
asserts that if the vertex-connectivity of a graph $G$ is at least as
large as its independence number, then $G$ is Hamiltonian.\hfill $\Box$
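The Chv\'atal--Erd\H os criterion is easy to probe by brute force on small graphs. The following sketch (our own illustration) checks the Petersen graph: its vertex-connectivity is $3$ but its independence number is $4$, so the criterion does not apply, and indeed exhaustive search confirms it has no Hamilton cycle.

```python
from itertools import combinations, permutations

# Petersen graph: outer 5-cycle, inner 5-cycle with step 2, plus spokes
edges = {frozenset(e) for e in
         [(i, (i + 1) % 5) for i in range(5)] +
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +
         [(i, 5 + i) for i in range(5)]}
V = range(10)
adj = {v: {u for u in V if frozenset((u, v)) in edges} for v in V}

def alpha():  # independence number by exhaustive search
    best = 1
    for k in range(2, 11):
        if any(all(frozenset((u, v)) not in edges
                   for u, v in combinations(S, 2))
               for S in combinations(V, k)):
            best = k
    return best

def connected_after_removal(removed):
    rest = [v for v in V if v not in removed]
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        for u in adj[stack.pop()]:
            if u not in removed and u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(rest)

# vertex-connectivity: smallest k whose removal can disconnect the graph
kappa = next(k for k in range(1, 10)
             if any(not connected_after_removal(set(S))
                    for S in combinations(V, k)))

def hamiltonian():  # fix vertex 0 first; try all orderings of the rest
    return any(all(frozenset((c[i], c[(i + 1) % 10])) in edges
                   for i in range(10))
               for c in ((0,) + p for p in permutations(range(1, 10))))

assert (kappa, alpha()) == (3, 4)   # connectivity < independence number
assert not hamiltonian()            # and indeed no Hamilton cycle
```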
\medskip
The Chv\'atal-Erd\H os Theorem has also been used by Thomason in
\cite{Tho87a}, who proved that a $(p,\alpha)$-jumbled graph $G$ with
minimal degree $\delta(G)=\Omega(\alpha/p)$ is Hamiltonian. His proof is
quite similar in spirit to that of the above proposition.
Assuming that $\lambda=o(d)$ and $d\rightarrow\infty$, the condition of
Proposition \ref{Ham1} reads then as: $\lambda\le (1-o(1))d^2/n$. For
best possible pseudo-random graphs, where $\lambda=\Theta(\sqrt{d})$,
this condition starts working when $d=\Omega(n^{2/3})$.
One can however prove a much stronger asymptotical result, using more
sophisticated tools for assuring Hamiltonicity. The authors prove such a
result in \cite{KriSud02}:
\begin{theo}\label{Ham2}\cite{KriSud02}
Let $G$ be an $(n,d,\lambda)$-graph. If $n$ is large enough and
$$
\lambda \leq \frac{(\log\log n)^2}{1000\log n(\log\log\log n)}\,d\,,
$$
then $G$ is Hamiltonian.
\end{theo}
The proof of Theorem \ref{Ham2} is quite involved technically. Its main
instrument is the famous rotation-extension technique of Posa
\cite{Pos76}, or rather a version of it developed by Koml\'os and
Szemer\'edi in \cite{KomSze83} to obtain the exact threshold for the
appearance of a Hamilton cycle in the random graph $G(n,p)$. We omit the
proof details here, referring the reader to \cite{KriSud02}.
For reasonably good pseudo-random graphs, in which
$\lambda\le d^{1-\epsilon}$ for some $\epsilon>0$, Theorem \ref{Ham2}
starts working already when the degree $d$ is only polylogarithmic in
$n$ -- quite a progress compared to the easy Proposition \ref{Ham1}! It
is possible though that an even stronger result is true as given by the
following conjecture:
\begin{conj}\label{cHam}\cite{KriSud02}
There exists a positive constant $C$ such that for large enough $n$, any
$(n,d,\lambda)$-graph that satisfies $d/ \lambda>C$
contains a Hamilton cycle.
\end{conj}
This conjecture is closely related to another well known problem on
Hamiltonicity. The {\em toughness} $t(G)$ of a graph $G$ is the largest
real $t$ so that for every positive integer $x \geq 2$
one should delete at least $tx$ vertices
from $G$ in order to get an induced subgraph of it with at least
$x$ connected components. $G$ is $t$-tough if $t(G) \geq t$.
This parameter was introduced by
Chv\'atal in \cite{Chv73}, where he observed that
Hamiltonian graphs are $1$-tough and conjectured that
$t$-tough graphs are Hamiltonian for large enough $t$.
Alon showed in \cite{Alo95} that if $G$ is an $(n,d,\lambda)$-graph,
then the toughness of $G$ satisfies $t(G)>\Omega(d/\lambda)$. Therefore
the conjecture of Chv\'atal implies the above conjecture.
Krivelevich and Sudakov used Theorem \ref{Ham2} in \cite{KriSud02} to
derive Hamiltonicity of sparse random Cayley graphs. Given a group $G$
of order $n$, choose a set $S$ of $s$ non-identity elements uniformly
at random and form a Cayley graph $X(G,S\cup S^{-1})$ (see
Example 8 in Section 3 for the definition of a Cayley graph). The question
is how large
the value of $s=s(n)$ should be so as to guarantee the almost sure
Hamiltonicity of the random Cayley graph no matter which group $G$ we
started with.
\begin{theo}\label{Ham3}\cite{KriSud02}
Let $G$ be a group of order $n$.
Then for every $c>0$ and large enough $n$
a Cayley graph $X(G,S\cup S^{-1})$, formed by
choosing a set $S$ of $c\log^5 n$ random generators in $G$,
is almost surely Hamiltonian.
\end{theo}
\noindent{\bf Sketch of proof.\ }
Let $\lambda$ be the second largest eigenvalue in absolute value of
$X(G,S\cup S^{-1})$. Note that this Cayley graph is $d$-regular for
$d \geq c\log^5 n$. Therefore to prove its Hamiltonicity,
by Theorem \ref{Ham2} it is enough to show that almost surely
$\lambda/d \leq O(1/\log n)$. This can be done by
applying an approach of Alon and Roichman \cite{AloRoi94} for bounding
the second eigenvalue of a random Cayley graph.\hfill$\Box$
\medskip
We note that a well known conjecture claims that every connected Cayley
graph is Hamiltonian. If true the conjecture would easily imply that
as few as $O(\log n)$ random generators are enough to give almost sure
connectivity and thus Hamiltonicity.
\subsection{Random subgraphs of pseudo-random graphs}
There is a clear tendency in recent years to study random graphs
different from the classical by now model $G(n,p)$ of binomial random
graphs. One of the most natural models for random graphs, directly
generalizing $G(n,p)$, is defined as follows. Let $G=(V,E)$ be a graph
and let $0<p<1$. The {\em random subgraph} $G_p$ is formed by choosing
every edge of $G$ independently and with probability $p$. Thus, when $G$
is the complete graph $K_n$ we get back the probability space $G(n,p)$.
In many cases the obtained random graph $G_p$ has many interesting and
peculiar features, sometimes reminiscent of those of $G(n,p)$, and
sometimes inherited from those of the host graph $G$.
In this subsection we report on various results obtained on random
subgraphs of pseudo-random graphs. While studying this subject, we
study in fact not a single probability space, but rather a family
of probability spaces, having many common features, guaranteed by
those of pseudo-random graphs. Although several results have already
been achieved in this direction, overall it is much less developed than
the study of binomial random graphs $G(n,p)$, and one can certainly
expect many new results on this topic to appear in the future.
We start with Hamiltonicity of random subgraphs of pseudo-random
graphs. As we learned in the previous section, spectral conditions are in
many cases sufficient to guarantee Hamiltonicity. Suppose then that a
host graph $G$ is a Hamiltonian $(n,d,\lambda)$-graph. How small can
the edge probability $p=p(n)$ be chosen so as to guarantee almost sure
Hamiltonicity of the random subgraph $G_p$? This question has been
studied by Frieze and the first author in \cite{FriKri02}. They obtained
the following result.
\begin{theo}\label{FK}\cite{FriKri02}
Let $G$ be an $(n,d,\lambda)$-graph.
Assume that
$\lambda=o\left(\frac{d^{5/2}}{n^{3/2}(\log n)^{3/2}}\right)$. Form a
random subgraph $G_p$ of $G$ by choosing each edge of $G$ independently
with probability $p$. Then for any function $\omega(n)$ tending to infinity
arbitrarily slowly:
\begin{enumerate}
\item if $p(n)=\frac{1}{d}(\log n+\log\log n-\omega(n))$, then
$G_p$ is almost surely not Hamiltonian;
\item if $p(n)=\frac{1}{d}(\log n+\log\log n+\omega(n))$, then
$G_p$ is almost surely Hamiltonian.
\end{enumerate}
\end{theo}
Just as in the case of $G(n,p)$ (see, e.g. \cite{Bol01}) it is quite
easy to predict the critical probability for the appearance of a
Hamilton cycle in $G_p$. An obvious obstacle for its existence is a
vertex of degree at most one. If such a vertex almost surely exists in
$G_p$, then $G_p$ is almost surely non-Hamiltonian. It is a
straightforward exercise to show that the smaller probability in the
statement of Theorem \ref{FK} gives the almost sure existence of such
a vertex. The larger probability can be shown to be sufficient to
eliminate almost surely all vertices of degree at most one in $G_p$.
Proving that this is sufficient for almost sure Hamiltonicity is much
harder. Again as in the case of $G(n,p)$ the rotation-extension
technique of Posa \cite{Pos76} comes to our rescue. We omit technical
details of the proof of Theorem \ref{FK}, referring the reader to
\cite{FriKri02}.
One of the most important events in the study of random graphs was the
discovery of the sudden appearance of the giant component by Erd\H os
and R\'enyi \cite{ErdRen60}. They proved that all connected components
of $G(n,c/n)$ with $0<c<1$ are almost surely trees or unicyclic and
have size $O(\log n)$. On the other hand, if $c>1$, then $G(n,c/n)$
contains almost surely a unique component of size linear in $n$ (the so
called {\em giant component}), while all other components are at most
logarithmic in size. Thus, the random graph $G(n,p)$ experiences the so
called {\em phase transition} at $p=1/n$.
Very recently Frieze, Krivelevich and Martin showed \cite{FriKriMar02}
that a very similar behavior holds for random subgraphs of many
pseudo-random graphs. To formulate their result, for $\alpha>1$ we
define $\bar{\alpha}<1$ to be the unique solution (other than
$\alpha$) of the equation $xe^{-x}=\alpha e^{-\alpha}$.
\begin{theo}\label{FKM}\cite{FriKriMar02}
Let $G$ be an $(n,d,\lambda)$-graph. Assume that $\lambda=o(d)$.
Consider the random subgraph $G_{\alpha/d}$, formed by choosing each
edge of $G$ independently with probability $p=\alpha/d$. Then:
\begin{itemize}
\item[(a)] If $\alpha<1$ then almost surely the maximum component size
is $O(\log n)$.
\item[(b)] If $\alpha>1$ then almost surely there is a unique giant
component of asymptotic size
$\left(1-\frac{\bar{\alpha}}{\alpha}\right)n$
and the remaining components are of size $O(\log n)$.
\end{itemize}
\end{theo}
Let us outline briefly the proof of Theorem \ref{FKM}. First, bound
(\ref{eig}) and known estimates on the number of $k$-vertex trees in
$d$-regular graphs are used to get estimates on the expectation of the
number of connected components of size $k$ in $G_{p}$, for various
values of $k$. Using these estimates it is proved then that almost
surely $G_p$ has no connected components of size between
$(1/\alpha\gamma)\log n$ and $\gamma n$ for a properly chosen
$\gamma=\gamma(\alpha)$. Define $f(\alpha)$ to be 1 for all $\alpha\le
1$, and to be $\bar{\alpha}/\alpha$ for $\alpha>1$. One can show then
that almost surely in $G_{\alpha/d}$
the number of vertices in components of size between 1 and
$d^{1/3}$ is equal to $nf(\alpha)$ up to an error term of
$O(n^{5/6}\log n)$. This is done by first calculating the expectation of
the last quantity, which is asymptotically equal to $nf(\alpha)$, and
then by applying the Azuma-Hoeffding martingale inequality.
Given the above, the proof of Theorem \ref{FKM} is straightforward. For
the case $\alpha<1$ we have $nf(\alpha)=n$ and therefore all but at
most $n^{5/6}\log n$ vertices lie in components of size at most
$(1/\alpha\gamma)\log n$. The remaining vertices should be in components
of size at least $\gamma n$, but there is no room for such components.
If $\alpha>1$, then $(\bar{\alpha}/\alpha)n+O(n^{5/6}\log n)$ vertices
belong to components of size at most $(1/\alpha\gamma)\log n$, and all
remaining vertices are in components of size at least $\gamma n$. These
components are easily shown to merge quickly into one giant component
of linear size. The details can be found in \cite{FriKriMar02}
(see also \cite{ABS} for some related results).
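As an aside, the function $\bar{\alpha}$, and with it the asymptotic size of the giant component in Theorem \ref{FKM}, is easy to compute numerically: the map $x\mapsto xe^{-x}$ is increasing on $(0,1)$, so bisection applies. A minimal Python sketch (the function names are ours, chosen for illustration):

```python
import math

def alpha_bar(alpha, tol=1e-12):
    """Solve x*exp(-x) = alpha*exp(-alpha) for the root x in (0, 1).

    For alpha > 1, the map x -> x*exp(-x) is increasing on (0, 1),
    so bisection finds the unique solution other than alpha itself."""
    target = alpha * math.exp(-alpha)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * math.exp(-mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def giant_fraction(alpha):
    """Asymptotic fraction of vertices in the giant component of G_{alpha/d}."""
    return 1 - alpha_bar(alpha) / alpha
```

For instance, $\alpha=2$ gives a giant component covering roughly $79.7\%$ of the vertices, in agreement with the classical value for $G(n,2/n)$.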
One of the recent most popular subjects in the study of random graphs is
proving sharpness of thresholds for various combinatorial properties.
This direction of research was spurred by a powerful theorem of
Friedgut-Bourgain \cite{Fri99}, providing a sufficient condition for the
sharpness of a threshold. The authors together with Vu apply this
theorem in \cite{KriSudVu02}
to show sharpness of graph connectivity, sometimes also called
{\em network reliability}, in random subgraphs of a wide class of graphs.
Here are the relevant definitions. For a connected graph $G$ and edge
probability $p$ denote by $f(p)=f(G,p)$ the probability that a random
subgraph $G_p$ is connected. The function $f(p)$ can be easily shown to
be strictly monotone. For a fixed
positive constant $x \leq 1$ and a graph $G$, let
$p_{x} $ denote the (unique) value of $p$ where $f(G, p_{x})=
x$. We say that a family $(G_i)_{i=1}^{\infty}$ of graphs
satisfies the {\em sharp threshold} property if for any fixed positive
$\epsilon \le 1/2$
$$
\lim_{i \rightarrow \infty} \frac{ p_{\epsilon} (G_i)} { p_{1-\epsilon}
(G_i)} = 1.
$$
Thus the threshold for connectivity is sharp if the width of the
transition interval is negligible compared to the critical probability.
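For very small graphs the reliability function $f(G,p)$ can be computed exactly by summing over all edge subsets, which makes its strict monotonicity easy to observe. A brute-force Python sketch (our own naming; the enumeration is exponential in the number of edges, so this is for illustration only):

```python
from fractions import Fraction
from itertools import combinations

def connected(n, edges):
    """BFS connectivity check on vertex set {0, ..., n-1}."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def reliability(n, edges, p):
    """f(G, p): probability that the random subgraph G_p is connected.

    Exact computation by summing p^k (1-p)^(m-k) over all connected
    spanning edge subsets; use exact rationals for p to avoid rounding."""
    m = len(edges)
    total = Fraction(0)
    for k in range(m + 1):
        for subset in combinations(edges, k):
            if connected(n, subset):
                total += p**k * (1 - p)**(m - k)
    return total
```

For example, on $K_4$ one finds $f(K_4,0)=0$ and $f(K_4,1)=1$, with strict growth in between.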
Krivelevich, Sudakov and Vu proved in \cite{KriSudVu02} the following theorem.
\begin{theo}\cite{KriSudVu02}\label{KSV}
Let $(G_i)_{i=1}^{\infty}$ be a family of distinct
graphs, where $G_i$ has $n_i$ vertices, maximum degree $d_i$ and
it is $k_i$-edge-connected. If
$$
\lim_{i \rightarrow \infty} \frac{k_i \ln n_i}{d_i}=\infty,
$$
then the family $(G_i)_{i=1}^{\infty}$ has a sharp connectivity
threshold.
\end{theo}
The above theorem extends a celebrated result of Margulis \cite{Mar74}
on network reliability (Margulis' result applies to the case where the
critical probability is a constant).
Since $(n,d,\lambda)$-graphs are $d(1-o(1))$-connected as long as
$\lambda=o(d)$ by Theorem \ref{connectivity}, we immediately get the
following result on the sharpness of the connectivity threshold for
pseudo-random graphs.
\begin{coro}\label{KSVpr}
Let $G$ be an $(n,d,\lambda)$-graph. If $\lambda=o(d)$, then the
threshold for connectivity in the random subgraph $G_p$ is sharp.
\end{coro}
Thus already weak connectivity is sufficient to guarantee sharpness of
the threshold. This result has potential practical applications as
discussed in \cite{KriSudVu02}.
Finally we consider a different probability space created from a graph
$G=(V,E)$. This space is obtained by putting random weights on the edges
of $G$ independently. One can then ask about the behavior of optimal
solutions for various combinatorial optimization problems.
Beveridge, Frieze and McDiarmid treated in \cite{BevFriMcD98} the
problem of estimating the weight of a random minimum length spanning
tree in regular graphs. For each edge $e$ of a connected
graph $G=(V,E)$ define the length $X_e$ of $e$ to be a random variable
uniformly distributed
in the interval $(0,1)$, where all $X_e$ are independent. Let
$mst(G,{\bf X})$ denote the minimum length of a spanning tree in such a
graph, and let $mst(G)$ be the expected value of $mst(G,{\bf X})$. Of
course, the value of $mst(G)$ depends on the connectivity structure of
the graph $G$. Beveridge et al. were able to prove however that if the
graph $G$ is assumed to be almost regular and has a modest edge
expansion, then $mst(G)$ can be calculated asymptotically:
\begin{theo}\cite{BevFriMcD98}\label{BFM}
Let $\alpha=\alpha(d)=O(d^{-1/3})$ and let $\rho(d)$ and $\omega(d)$
tend to infinity with $d$. Suppose that the graph $G=(V,E)$ satisfies
$$
d\le d(v)\le (1+\alpha)d\quad\mbox{for all $v\in V(G)$}\,,
$$
and
$$
\frac{e(S,V\setminus S)}{|S|}\ge \omega d^{2/3}\log d
\quad\mbox{for all $S\subset V$ with $d/2<|S|\le \min\{\rho d,|V|/2\}$}\ .
$$
Then
$$
mst(G)=(1+o(1))\frac{|V|}{d}\zeta(3)\ ,
$$
where the $o(1)$ term tends to 0 as $d\rightarrow\infty$, and
$\zeta(3)=\sum_{i=1}^{\infty}i^{-3}=1.202...$.
\end{theo}
The above theorem extends a celebrated result of Frieze \cite{Fri85},
who proved it in the case of the complete graph $G=K_n$.
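Frieze's $\zeta(3)$ limit for $K_n$ is easy to observe experimentally: run Kruskal's algorithm on $K_n$ with independent $U(0,1)$ edge lengths and compare the average weight with $\zeta(3)\approx 1.202$. A Python sketch (our own code, not taken from \cite{Fri85}):

```python
import random

def mst_weight(n, rng):
    """Weight of the minimum spanning tree of K_n with i.i.d. U(0,1) edge
    lengths, computed by Kruskal's algorithm with a union-find structure."""
    edges = sorted((rng.random(), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    weight, used = 0.0, 0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            weight += w
            used += 1
            if used == n - 1:
                break
    return weight

zeta3 = sum(i**-3 for i in range(1, 10**6))   # zeta(3) = 1.2020569...
```

With $n$ in the low hundreds, the average of a handful of trials already lands close to $\zeta(3)$.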
Pseudo-random graphs supply easily the degree of edge expansion required
by Theorem \ref{BFM}. We thus get:
\begin{coro}\label{BFMpr}
Let $G$ be an $(n,d,\lambda)$-graph. If $\lambda=o(d)$ then
$$
mst(G)=(1+o(1))\frac{n}{d}\zeta(3)\ .
$$
\end{coro}
Beveridge, Frieze and McDiarmid also proved that the random variable
$mst(G,{\bf X})$ is sharply concentrated around its mean given by
Theorem \ref{BFM}.
Comparing between the very well developed research of binomial random
graphs
$G(n,p)$ and few currently available results on random subgraphs of
pseudo-random graphs, we can say that many interesting problems
in the latter subject are yet to be addressed, such as the
asymptotic behavior of the independence number and the chromatic number,
connectivity, existence of matchings and factors, spectral properties,
to mention just a few.
\subsection{Enumerative aspects}
Pseudo-random graphs on $n$ vertices with edge density $p$ are quite
similar in many aspects to the random graph $G(n,p)$. One can thus
expect that counting statistics in pseudo-random graphs will be close to
those in truly random graphs of the same density. As the random graph
$G(n,p)$ is a product probability space in which each edge behaves
independently, computing the expected number of most subgraphs in
$G(n,p)$ is straightforward. Here are just a few examples:
\begin{itemize}
\item The expected number of perfect matchings in $G(n,p)$ is
$\frac{n!}{(n/2)!2^{n/2}}p^{n/2}$ (assuming of course that $n$ is even);
\item The expected number of spanning trees in $G(n,p)$ is
$n^{n-2}p^{n-1}$;
\item The expected number of Hamilton cycles in $G(n,p)$ is
$\frac{(n-1)!}{2}p^n$.
\end{itemize}
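These expected values can be sanity-checked against exact counts: for $G=K_n$ (the case $p=1$) the three formulas specialize to $\frac{n!}{(n/2)!2^{n/2}}$ perfect matchings, $n^{n-2}$ spanning trees (Cayley's formula), and $\frac{(n-1)!}{2}$ Hamilton cycles. A brute-force Python check on small complete graphs (helper names are ours):

```python
from itertools import permutations, combinations
from math import factorial

def perfect_matchings(n):
    """Count perfect matchings of K_n by pairing the least unmatched vertex."""
    def rec(rem):
        if not rem:
            return 1
        v, rest = rem[0], rem[1:]
        return sum(rec(tuple(u for u in rest if u != w)) for w in rest)
    return rec(tuple(range(n)))

def hamilton_cycles(n):
    """Count Hamilton cycles of K_n as distinct edge sets."""
    cycles = set()
    for p in permutations(range(1, n)):
        order = (0,) + p
        edges = frozenset(frozenset((order[i], order[(i + 1) % n]))
                          for i in range(n))
        cycles.add(edges)
    return len(cycles)

def spanning_trees(n):
    """Count spanning trees of K_n by testing all (n-1)-edge subsets."""
    all_edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    count = 0
    for sub in combinations(all_edges, n - 1):
        # an (n-1)-edge subgraph is a tree iff it is connected
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for a, b in sub:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in seen:
                        seen.add(y)
                        stack.append(y)
        count += (len(seen) == n)
    return count
```

For $K_6$ this gives $15$ perfect matchings and $60$ Hamilton cycles, and $K_5$ has $5^3=125$ spanning trees, matching the formulas above at $p=1$.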
In certain cases it is possible to prove that the actual number of
subgraphs in a pseudo-random graph on $n$ vertices with edge density
$p=p(n)$ is close to the corresponding expected value in the binomial
random graph $G(n,p)$.
Frieze in \cite{Fri00} gave estimates on the number of perfect
matchings and Hamilton cycles in what he calls super
$\epsilon$-regular graphs. Let $G=(V,E)$ be a graph on $n$ vertices with
${n\choose 2}p$ edges, where $0<p<1$ is a constant.
Then $G$ is called {\em super $(p,\epsilon)$-regular}, for a constant
$\epsilon>0$, if
\begin{enumerate}
\item For all vertices $v\in V(G)$,
$$
(p-\epsilon)n \le d(v)\le (p+\epsilon)n\,;
$$
\item
For all $U,W\subset V$, $U\cap W=\emptyset$, $|U|, |W|\ge \epsilon n$,
$$
\left|\frac{e(U,W)}{|U||W|}-p\right|\le \epsilon\ .
$$
\end{enumerate}
Thus, a super $(p,\epsilon)$-regular graph $G$ can be considered a
non-bipartite analog of the notion of a super-regular pair defined
above.
In our terminology, $G$ is a weakly pseudo-random graph of constant
density $p$, in which {\em all} degrees are asymptotically equal to
$pn$. Assume that $n=2\nu$ is even. Let $m(G)$ denote the number of
perfect matchings in $G$ and let $h(G)$ denote the number of Hamilton
cycles in $G$,
and let $t(G)$ denote the number of spanning trees in $G$.
\begin{theo}\label{F}\cite{Fri00}
If $\epsilon$ is sufficiently small and $n$ is sufficiently large then
\begin{description}
\item[{\bf (a)}]
$$
(p-2\epsilon)^{\nu}\frac{n!}{\nu!2^{\nu}}\le m(G) \le
(p+2\epsilon)^{\nu}\frac{n!}{\nu!2^{\nu}}\ ;
$$
\item [{\bf (b)}]
$$
(p-2\epsilon)^nn!\le h(G)\le (p+2\epsilon)^nn!\ ;
$$
\end{description}
\end{theo}
Theorem \ref{F} thus implies that the numbers of perfect matchings and
of Hamilton cycles in super $\epsilon$-regular
graphs are quite close asymptotically to the expected values of the
corresponding quantities in the random graph $G(n,p)$. Part (b) of
Theorem \ref{F} improves significantly Corollary 2.9 of Thomason
\cite{Tho87a} which estimates from below the number of Hamilton cycles
in jumbled graphs.
Here is a very brief sketch of the proof of Theorem \ref{F}. To estimate
the number of perfect matchings in $G$, Frieze takes a random partition
of the vertices of $G$ into two equal parts $A$ and $B$ and estimates
the number of perfect matchings in the bipartite subgraph of $G$
between $A$ and $B$. This bipartite graph is almost surely super
$2\epsilon$-regular, which allows one to apply bounds previously
obtained by Alon, R\"odl and Ruci\'nski \cite{AloRodRuc98} for such
graphs.
Since each Hamilton cycle is a union of two perfect matchings, it
follows immediately that $h(G)\le m^2(G)/2$, establishing the desired
upper bound on $h(G)$. In order to prove a lower bound, let $f_k$ be the
number of 2-factors in $G$ containing exactly $k$ cycles, so that
$f_1=h(G)$. Let also $A$ be the number of ordered pairs of edge disjoint
perfect matchings in $G$. Then
\begin{equation}\label{A}
A= \sum_{k=1}^{\lfloor n/3\rfloor}2^kf_k\ .
\end{equation}
For a perfect matching $M$ in $G$ let $a_M$ be the number of perfect
matchings of $G$ disjoint from $M$. Since deleting $M$ disturbs
$\epsilon$-regularity of $G$ only marginally, one can use part (a) of
the theorem to get $a_M\ge (p-2\epsilon)^{\nu}\frac{n!}{\nu!2^{\nu}}$.
Thus
\begin{equation}\label{A1}
A=\sum_{M\in G}a_M\ge \left((p-2\epsilon)^{\nu}\frac{n!}{\nu!2^{\nu}}
\right)^2 \ge (p-2\epsilon)^nn!\cdot \frac{1}{3n^{1/2}}\ .
\end{equation}
Next Frieze shows that the ratio $f_{k+1}/f_k$ can be bounded by a
polynomial in $n$ for all $1\le k\le k_1=O(p^{-2})$,
$f_k\le 5^{-(k-k_0)/2}\max\{f_{k_0+1},f_{k_0}\}$ for all $k\ge k_0+2,
k_0=\Theta(p^{-3}\log n)$ and that the ratio
$(f_{k_1+1}+\ldots+f_{\lfloor n/3\rfloor})/f_{k_1}$ is also bounded by
a polynomial in $n$.
Then from (\ref{A}), $A\le O_p(1)\sum_{k=1}^{k_0+1}f_k$ and thus $A\le
$n^{O(1)}f_1$. Plugging in (\ref{A1}) we get the desired lower bound.
One can also show (see \cite{Noga}) that the number of spanning trees $t(G)$ in super
$(p,\epsilon)$-regular graphs satisfies:
$$
(p-2\epsilon)^{n-1}n^{n-2}\le t(G)\le (p+2\epsilon)^{n-1}n^{n-2}\ ,
$$
for small enough $\epsilon>0$ and large enough $n$.
In order to estimate from below the number of spanning trees in $G$,
consider a random mapping $f:V(G)\rightarrow V(G)$, defined by choosing
for each $v\in V$ a neighbor $f(v)$ uniformly at random. Each such $f$ defines
a digraph $D_f=(V,A_f)$, $A_f=\{(v,f(v)): v\in V\}$. Each component of
$D_f$ consists of a cycle $C$ together with a rooted forest whose roots all lie on
$C$. Suppose that $D_f$ has $k_f$ components. Then a spanning tree
of $G$ can be obtained by deleting the lexicographically first edge of
each cycle in $D_f$, and then extending the $k_f$ components to a
spanning tree. Showing that $D_f$ has typically $O(\sqrt{n})$
components implies that most of the mappings $f$ create a digraph close
to a spanning tree of $G$, and therefore:
$$
t(G)\ge n^{-O(\sqrt{n})}\cdot\big|\{f:V\rightarrow V\}\big|\ge
n^{-O(\sqrt{n})}\big((p-\epsilon)n\big)^n\ .
$$
For the upper bound on $t(G)$ let
$\Omega^*=\{f:V\rightarrow V: (v,f(v))\in E(G)$ for $v\not =1$ and
$f(1)=1 \}$. Then
$$
t(G)\le |\Omega^*| \le \big((p+\epsilon)n\big)^{n-1}\le
(p+2\epsilon)^{n-1}n^{n-2}\ .
$$
To see this consider the following injection from the spanning trees of
$G$ into $\Omega^*$: orient each edge of a tree $T$ towards vertex 1 and
set $f(1)=1$. Note that this proof does not use the fact that the graph is
pseudo-random.
Surprisingly, it shows that all nearly regular connected graphs with the
same density have
approximately the same number of spanning trees.
For sparse pseudo-random graphs one can use Theorem \ref{FK} to estimate
the number of Hamilton cycles. Let $G$ be an $(n,d,\lambda)$-graph
satisfying the conditions of Theorem \ref{FK}. Consider the random
subgraph $G_p$ of $G$, where $p=(\log n+2\log\log n)/d$. Let $X$ be the
random variable counting the number of Hamilton cycles in $G_p$.
According to Theorem \ref{FK}, $G_p$ has almost surely a Hamilton cycle,
and therefore $E[X]\ge 1-o(1)$. On the other hand, the probability that
a given Hamilton cycle of $G$ appears in $G_p$ is exactly $p^n$.
Therefore the linearity of expectation implies $E[X]=h(G)p^n$. Combining
the above two estimates we derive:
$$
h(G)\ge \frac{1-o(1)}{p^n}=\left(\frac{d}{(1+o(1))\log n}\right)^n
\ .
$$
We thus get the following corollary:
\begin{coro}\cite{FriKri02}\label{FKc}
Let $G$ be an $(n,d,\lambda)$-graph with
$\lambda=o(d^{5/2}/(n^{3/2}(\log n)^{3/2}))$.
Then $G$ contains at least $\left(\frac{d}{(1+o(1))\log n}\right)^n$
Hamilton cycles.
\end{coro}
Note that the number of Hamilton cycles in any $d$-regular graph on $n$
vertices obviously does not exceed $d^n$. Thus for graphs satisfying the
conditions of Theorem \ref{FK} the above corollary provides an
asymptotically tight estimate on the exponent of the number of Hamilton
cycles.
\section{Conclusion}
Although we have made an effort to provide a systematic coverage of the
current research in pseudo-random graphs, there are certainly quite a
few subjects that were left outside this survey, due to the limitations
of space and time (and of the authors' energy). Probably the most notable
omission is a discussion of diverse applications of pseudo-random
graphs to questions from other fields, mostly Extremal Graph Theory,
where pseudo-random graphs provide the best known bounds for an amazing
array of problems. We hope to cover this direction in one of our future
papers. Still, we would like to believe that this survey can be helpful
in mastering various results and techniques pertaining to this field.
Undoubtedly many more of them are bound to appear in the future and will
make this fascinating subject even more deep, diverse and appealing.
\medskip
\noindent{\bf Acknowledgment.\ } The authors would like to thank Noga
Alon for many illuminating discussions and for kindly granting us his
permission to present his Theorem \ref{number-subgraphs} here. The
proofs of Theorems \ref{connectivity}, \ref{edge-connectivity} were
obtained in discussions with him.
% [End of arXiv:math/0503745, ``Pseudo-random graphs'' by Krivelevich and Sudakov (math.CO).]
% arXiv:1306.2385 -- ``Linear groups -- Malcev's theorem and Selberg's lemma''.
% Abstract: An account of two fundamental facts concerning finitely generated
% linear groups: Malcev's theorem on residual finiteness, and Selberg's lemma
% on virtual torsion-freeness.
\section*{Introduction}
A group is \textbf{linear} if it is (isomorphic to) a subgroup of $\mathrm{GL}_n(K)$, where $K$ is a field. If we want to specify the field, we say that the group is linear over $K$. The following theorems are fundamental, at least from the perspective of combinatorial group theory.
\begin{thm*}[Malcev 1940] A finitely generated linear group is residually finite.
\end{thm*}
\begin{thm*}[Selberg 1960] A finitely generated linear group over a field of zero characteristic is virtually torsion-free.
\end{thm*}
A group is \textbf{residually finite} if its elements are distinguished by the finite quotients of the group, i.e., if each non-trivial element of the group remains non-trivial in a finite quotient. A group is \textbf{virtually torsion-free} if some finite-index subgroup is torsion-free. As a matter of further terminology, Selberg's theorem is usually referred to as Selberg's lemma, and Malcev is alternatively transliterated as Mal'cev or Maltsev.
\begin{content} The main body of this text has three sections. In the first one we discuss residual finiteness and virtual torsion-freeness, with emphasis on their relation to a third property - roughly speaking, a ``$p$-adic'' refinement of residual finiteness. The main theorem we are actually interested in, due to Platonov (1968), gives such refined residual properties for finitely generated linear groups. Both Malcev's theorem and Selberg's lemma are consequences of this more powerful, but lesser known, theorem of Platonov. The second section is devoted to $\mathrm{SL}_n(\mathbb{Z})$. This example is too important, too interesting, too much fun to receive anything less than a scenic analysis. In the last section we return to a proof of Platonov's theorem. \end{content}
\begin{comments}
Besides the Russian original \cite{P}, the only other source in the literature for Platonov's theorem that I am aware of is the account of Wehrfritz in \cite[Chapter 4]{W}. The proof presented herein is, I think, considerably simpler. It is mainly influenced by the discussion of Malcev's theorem in the lecture notes by Stallings \cite{S}, and it is quite similar to Platonov's own arguments in \cite{P}.
An alternative road to Selberg's lemma is to use valuations. This is the approach taken by Cassels in \emph{Local fields} (Cambridge University Press 1986), and by Ratcliffe in \emph{Foundations of hyperbolic manifolds} (2nd edition, Springer 2006).
I thank, in chronological order, Andy Putman for a useful answer via \url{mathoverflow.net}, Jean-Fran\c{c}ois Planchat for a careful reading and constructive comments, and Vadim Alekseev for a translation of Platonov's article.
\end{comments}
\begin{convention} In this text, rings are commutative and unital.
\end{convention}
\section{Virtual and residual properties of groups}\label{VR}
Virtual torsion-freeness and residual finiteness are instances of the following terminology. Let $\mathcal{P}$ be a group-theoretic property. A group is \textbf{virtually $\mathcal{P}$} if it has a finite-index subgroup enjoying $\mathcal{P}$. A group is \textbf{residually $\mathcal{P}$} if each non-trivial element of the group remains non-trivial in some quotient group enjoying $\mathcal{P}$. The virtually $\mathcal{P}$ groups and the residually $\mathcal{P}$ groups contain the $\mathcal{P}$ groups. It may certainly happen that a property is virtually stable (e.g., finiteness) or residually stable (e.g., torsion-freeness).
Besides virtual torsion-freeness and residual finiteness, we are interested in the hybrid notion of \textbf{virtual residual $p$-finiteness} where $p$ is a prime. This is obtained by residualizing the property of being a finite $p$-group, followed by the virtual extension. The notion of virtual residual $p$-finiteness has, in fact, the leading role in this account for it relates both to residual finiteness and to virtual torsion-freeness.
\begin{lem}[``Going down'']
If $\mathcal{P}$ is inherited by subgroups, then both virtually $\mathcal{P}$ and residually $\mathcal{P}$ are inherited by subgroups. In particular, virtual torsion-freeness, residual finiteness, and virtual residual $p$-finiteness are inherited by subgroups.
\end{lem}
\begin{lem}[``Going up'']\label{going up}
Virtually $\mathcal{P}$ passes to finite-index supergroups. In particular, both virtual torsion-freeness and virtual residual $p$-finiteness pass to finite-index supergroups. Residual finiteness passes to finite-index supergroups.
\end{lem}
Observe that residual $p$-finiteness, just like torsion-freeness, is not virtually stable. Residual finiteness does pass to finite-index supergroups because of the following equivalent description: a group is residually finite if and only if every non-trivial element lies outside of a finite-index subgroup.
Residual $p$-finiteness trivially implies residual finiteness. Going up, we obtain:
\begin{prop}\label{one p suffices}
Virtual residual $p$-finiteness for some prime $p$ implies residual finiteness.
\end{prop}
On the other hand, residual $p$-finiteness imposes torsion restrictions. Namely, in a residually $p$-finite group, the order of a torsion element must be a power of $p$. Hence, if a group is residually $p$-finite and residually $q$-finite for two different primes $p$ and $q$, then it is torsion-free. Virtualizing this statement, we obtain:
\begin{prop}\label{p and q}
Virtual residual $p$-finiteness and virtual residual $q$-finiteness for two different primes $p$ and $q$ imply virtual torsion-freeness.
\end{prop}
In light of Propositions~\ref{one p suffices} and~\ref{p and q}, we see that Malcev's theorem and Selberg's lemma are consequences of the following:
\begin{thm*}[Platonov 1968] Let $G$ be a finitely generated linear group over a field $K$. If $\mathrm{char}\:K=0$, then $G$ is virtually residually $p$-finite for all but finitely many primes $p$. If $\mathrm{char}\:K=p$, then $G$ is virtually residually $p$-finite.
\end{thm*}
\section{Essellennzee}\label{SL}
In this section we examine $\mathrm{SL}_n(\mathbb{Z})$, where $n\geq 2$. We start with a most familiar fact.
\begin{prop}\label{SLn f.g.}
$\mathrm{SL}_n(\mathbb{Z})$ is generated by the elementary matrices $\{1_n+e_{ij}: i\neq j\}$.
\end{prop}
\begin{proof}
In general, if $A$ is a euclidean domain then $\mathrm{SL}_n(A)$ is generated by the elementary matrices $\{1_n+a\cdot e_{ij}: a\in A, i\neq j\}$. The first step is to turn any matrix in $\mathrm{SL}_n(A)$ into a diagonal one via elementary operations. Using division with remainder, and the euclidean map on $A$ as a way of measuring the decrease in complexity, we can ensure that a single non-zero entry, say $a$, remains in the first row. Column swapping brings $a$ into position $(1,1)$, and row reductions using $a$ (a unit, since it divides the determinant $1$) turn all other entries in the first column to $0$. Now repeat this procedure for the remaining $(n-1)\times (n-1)$ block. The second step is to bring a diagonal matrix of determinant $1$ to the identity matrix $1_n$ via elementary operations. This is done using the transition
\begin{align*} \begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} \leadsto
\begin{pmatrix}
a & a \\
0 & b
\end{pmatrix} \leadsto
\begin{pmatrix}
1 & a \\
(a^{-1}-1)b & b
\end{pmatrix} \leadsto
\begin{pmatrix}
1 & a \\
0 & ab
\end{pmatrix} \leadsto
\begin{pmatrix}
1 & 0 \\
0 & ab
\end{pmatrix}.\end{align*}
Finally, if the additive group of $A$ is generated by $\{a_1,\dots, a_k\}$, then the corresponding matrices $\{1_n+a_1\cdot e_{ij},\dots, 1_n+a_k\cdot e_{ij}: i\neq j\}$ generate all the elementary matrices.
\end{proof}
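For $n=2$ the reduction in the proof amounts to the Euclidean algorithm on the first column, followed by a cleanup of the diagonal sign; it is short enough to implement. A Python sketch for $\mathbb{Z}$ only (our own conventions: an operation $(c,i,j)$ means ``add $c$ times row $j$ to row $i$'', i.e., left multiplication by $1_2+c\cdot e_{ij}$):

```python
def apply_row_op(A, c, i, j):
    """Left-multiply A by 1 + c*e_ij, i.e. add c times row j to row i."""
    B = [list(A[0]), list(A[1])]
    B[i] = [B[i][k] + c * B[j][k] for k in range(2)]
    return (tuple(B[0]), tuple(B[1]))

def reduce_sl2(M):
    """Reduce M in SL_2(Z) to the identity by elementary row operations.

    Returns the list of operations (c, i, j) in the order applied to M."""
    assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 1
    A, ops = (tuple(M[0]), tuple(M[1])), []
    def do(c, i, j):
        nonlocal A
        A = apply_row_op(A, c, i, j)
        ops.append((c, i, j))
    # Euclidean algorithm on the first column.
    while A[1][0] != 0:
        if A[0][0] == 0:
            do(1, 0, 1)                      # make the (0,0) entry non-zero
        else:
            do(-(A[1][0] // A[0][0]), 1, 0)  # shrink the (1,0) entry
            if A[1][0] != 0:
                do(-(A[0][0] // A[1][0]), 0, 1)
    # Now A is upper triangular with diagonal (1,1) or (-1,-1).
    if A[0][0] == -1:
        do(A[0][1], 0, 1)                    # clear the (0,1) entry, giving -1
        for c, i, j in [(1, 1, 0), (-2, 0, 1), (1, 1, 0), (-2, 0, 1)]:
            do(c, i, j)                      # turn -1 into the identity
    else:
        do(-A[0][1], 0, 1)                   # clear the (0,1) entry
    assert A == ((1, 0), (0, 1))
    return ops
```

Replaying the returned operations on $M$ reproduces the identity, so $M$ is the product of the inverse elementary matrices in reverse order.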
Let $N$ be a positive integer. Reduction modulo $N$ defines a group homomorphism $\mathrm{SL}_n(\mathbb{Z})\to \mathrm{SL}_n(\mathbb{Z}/N)$, which enjoys the following remarkable property:
\begin{lem}
The congruence homomorphism $\mathrm{SL}_n(\mathbb{Z})\to \mathrm{SL}_n(\mathbb{Z}/N)$ is onto.
\end{lem}
\begin{proof} Since $\mathrm{SL}_n(\mathbb{Z})$ is generated by the elementary matrices, and the elementary matrices are mapped to the elementary matrices by the congruence homomorphism, its surjectivity is equivalent to the elementary generation of $\mathrm{SL}_n(\mathbb{Z}/N)$. The Chinese Remainder Theorem provides a decomposition $\mathbb{Z}/N\simeq \prod \mathbb{Z}/{p_i^{s_i}}$ into local rings, mirroring the decomposition $N=\prod p_i^{s_i}$ into primes. Direct products preserve elementary generation, and we now show that it holds over local rings. This is actually easier than elementary generation for euclidean domains. Let $A$ be a local ring and pick a matrix in $\mathrm{SL}_n(A)$. Some first-row entry is not in $\pi$, the maximal ideal of $A$ (else the determinant would lie in $\pi$), so it is invertible in $A$. Column swapping brings this element into the $(1,1)$-position, and then the first row and the first column can be cleared. The rest goes as in the proof of Proposition~\ref{SLn f.g.}.
\end{proof}
The kernel of the congruence homomorphism
\begin{align*}\Gamma(N):=\ker \big( \mathrm{SL}_n(\mathbb{Z})\to \mathrm{SL}_n(\mathbb{Z}/N)\big)=\big\{X\in \mathrm{SL}_n(\mathbb{Z}): X \equiv 1_n \;\textrm{mod}\: N\big\}\end{align*}
is the \textbf{principal congruence subgroup of level $N$}. In particular, $\Gamma(1)=\mathrm{SL}_n(\mathbb{Z})$.
The principal congruence subgroups are normal, finite-index subgroups of $\mathrm{SL}_n(\mathbb{Z})$. The following lemma provides the formula for their index.
\begin{lem}\label{computing the index} The index of $\Gamma(N)$ is given by
\begin{align*}
[\Gamma(1):\Gamma(N)]=|\mathrm{SL}_n(\mathbb{Z}/N)|=N^{n^2-1}\prod_{p|N}\Big(\prod_{i=2}^{n}(1-p^{-i})\Big).
\end{align*}
\end{lem}
\begin{proof}
First, recall that
\begin{align*}|\mathrm{SL}_n(\mathbb{Z}/p)|=\frac{1}{p-1}|\mathrm{GL}_n(\mathbb{Z}/p)|=\frac{1}{p-1}\prod_{i=0}^{n-1}(p^n-p^i)=p^{n^2-1}\prod_{i=2}^{n}(1-p^{-i}).
\end{align*}
Next, we find the size of $\mathrm{SL}_n(\mathbb{Z}/{p^k})$. Consider again the general case of a local ring $A$ with maximal ideal $\pi$. The congruence map $\mathrm{GL}_n(A)\to \mathrm{GL}_n(A/\pi)$ is onto: \emph{any} lift to $A$ of a matrix in $\mathrm{GL}_n(A/\pi)$ has determinant not in $\pi$, i.e., invertible in $A$. Furthermore, the kernel of the congruence map $\mathrm{GL}_n(A)\to \mathrm{GL}_n(A/\pi)$ is $1_n+\mathrm{M}_n(\pi)$ since a matrix congruent to $1_n$ modulo $\pi$ has determinant in $1+\pi$, hence invertible in $A$. Thus, if $A$ is also finite, then
\begin{align*}|\mathrm{GL}_n(A)|=|\pi|^{n^2}\cdot |\mathrm{GL}_n(A/\pi)|.\end{align*}
Now $|\mathrm{GL}_n|=|\mathrm{GL}_1|\cdot |\mathrm{SL}_n|$ over any ring, and $|\mathrm{GL}_1(A)|=|\pi|\cdot |\mathrm{GL}_1(A/\pi)|$, so
\begin{align*}
|\mathrm{SL}_n(A)|=|\pi|^{n^2-1}\cdot |\mathrm{SL}_n(A/\pi)|.
\end{align*}
Returning to the particular case we are interested in, we obtain
\begin{align*}
|\mathrm{SL}_n(\mathbb{Z}/{p^k})|=p^{(k-1)(n^2-1)} |\mathrm{SL}_n(\mathbb{Z}/{p})|=(p^k)^{n^2-1}\prod_{i=2}^{n}(1-p^{-i}).
\end{align*}
Finally, the size of $\mathrm{SL}_n(\mathbb{Z}/N)$ is obtained by multiplying the above formula over the prime decomposition of $N$.
\end{proof}
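The case $n=2$ of the index formula is small enough to check by exhaustive enumeration. A Python sketch (our own function names; the brute-force count costs $N^4$ operations, so only for small $N$):

```python
from itertools import product

def sl2_size_brute(N):
    """Count 2x2 matrices over Z/N with determinant 1 by enumeration."""
    return sum((a * d - b * c) % N == 1
               for a, b, c, d in product(range(N), repeat=4))

def sl2_size_formula(N):
    """|SL_2(Z/N)| = N^3 * prod over primes p | N of (1 - p^{-2})."""
    size, M = N**3, N
    p = 2
    while p * p <= M:
        if M % p == 0:
            size = size * (p * p - 1) // (p * p)
            while M % p == 0:
                M //= p
        p += 1
    if M > 1:                      # leftover prime factor
        size = size * (M * M - 1) // (M * M)
    return size
```

For example both computations give $|\mathrm{SL}_2(\mathbb{Z}/6)|=144$, which by Lemma~\ref{computing the index} is the index $[\Gamma(1):\Gamma(6)]$ for $n=2$.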
Bass--Lazard--Serre (Bull. Amer. Math. Soc. 1964) and Mennicke (Ann. Math. 1965) have shown that, for $n\geq 3$, every finite-index subgroup of $\mathrm{SL}_n(\mathbb{Z})$ contains some principal congruence subgroup. This Congruence Subgroup Property is definitely not true for $n=2$. The first one to state this failure was Klein (1880), and proofs were subsequently provided by Fricke (1886) and Pick (1886). It is in fact known by now that it is an exceptional feature for a finite-index subgroup of $\mathrm{SL}_2(\mathbb{Z})$ to contain a principal congruence subgroup.
The principal congruence subgroups are organized according to the divisibility of their levels: $\Gamma(M)\supseteq \Gamma(N) \Leftrightarrow M|N$, that is, ``to contain is to divide''. This puts the emphasis on the prime stratum $\{\Gamma(p) : p \textrm{ prime}\}$, and on the descending chains $\{\Gamma(p^k) : k\geq 1\}$ corresponding to each prime $p$. Observe that
$\cap_{p}\: \Gamma(p)=\{1_n\}$, and that $\cap_{k}\: \Gamma(p^k)=\{1_n\}$ for each prime $p$, meaning that the elements of $\mathrm{SL}_n(\mathbb{Z})$ can be distinguished both along the prime stratum, as well as along each descending $p$-chain. Thus:
\begin{prop}
$\mathrm{SL}_n(\mathbb{Z})$ is residually finite.
\end{prop}
Clearly $\mathrm{SL}_n(\mathbb{Z})$ is not torsion-free. For example,
\begin{align*} \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}, \qquad \begin{pmatrix} 0 & -1\\ 1 & 1\end{pmatrix}\end{align*}
are elements of order $4$, respectively $6$, in $\mathrm{SL}_2(\mathbb{Z})$. However, we have:
\begin{prop}
$\mathrm{SL}_n(\mathbb{Z})$ is virtually torsion-free.
\end{prop}
This is an immediate consequence of the following fact, due to Minkowski (1887):
\begin{lem}
$\Gamma(N)$ is torsion-free provided $N\geq 3$.
\end{lem}
\begin{proof} It suffices to show that $\Gamma (4)$ and $\Gamma(p)$, where $p\geq 3$ is a prime, are torsion-free. Let $p$ be any prime, and assume that $X\in \Gamma(p)$ is a non-trivial element having finite order. Up to replacing $X$ by a power of itself, we may assume that $X^q=1_n$ for a prime $q$. Then
\begin{align*}
-q (X-1_n)=\sum_{i=2}^{q} \binom{q}{i} (X-1_n)^i.
\end{align*}
Let $p^s$, where $s\geq 1$, be the highest power of $p$ dividing all the entries of $X-1_n$. The left hand side of the displayed identity is divisible by at most $p^s$ if $q\neq p$, and by at most $p^{s+1}$ if $q=p$. The right hand side is divisible by $p^{2s}$, and even by $p^{2s+1}$ if $q=p\geq 3$. Hence $q=p=2$ and $s=1$. The conclusion that $p=2$ and $s=1$ means that $\Gamma(2)$ is the only one in the prime stratum which harbours torsion, and that $\Gamma(4)$, its successor in the descending $2$-chain, is free of torsion. The conclusion that $q=2$ means that torsion elements in $\Gamma(2)$ have order a power of $2$. As $X^2\in \Gamma(4)$ whenever $X\in \Gamma(2)$, and $\Gamma(4)$ is torsion-free, it follows that non-trivial torsion elements in $\Gamma(2)$ have order $2$.
\end{proof}
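The orders claimed above for the two sample matrices can be confirmed by direct computation with integer matrices (a small Python check):

```python
def mat_mul(A, B):
    """Multiply two 2x2 integer matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def order(A, bound=24):
    """Order of A in SL_2(Z), searched up to `bound`; None if not found."""
    P = A
    for k in range(1, bound + 1):
        if P == [[1, 0], [0, 1]]:
            return k
        P = mat_mul(P, A)
    return None

assert order([[0, -1], [1, 0]]) == 4
assert order([[0, -1], [1, 1]]) == 6
```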
This lemma can be used to control the torsion spectrum - that is, the possible orders of torsion elements - in $\mathrm{SL}_n(\mathbb{Z})$. Let us illustrate the basic idea in the simplest case, when $n=2$. Given a group homomorphism, the torsion spectra of its domain, kernel, and range are trivially related by $\tau(\mathrm{dom})\subseteq \tau(\mathrm{ker})\cdot \tau(\mathrm{ran})$. In the case of a congruence homomorphism, this reads as $\tau(\mathrm{SL}_2(\mathbb{Z}))\subseteq \tau(\Gamma(N))\cdot \tau(\mathrm{SL}_2(\mathbb{Z}/N))$. If $N=3$ then $\tau(\Gamma(3))=\{1\}$, and it can be checked that $\tau(\mathrm{SL}_2(\mathbb{Z}/3))=\{1,2,3,4,6\}$. Somewhat easier, in fact, is to let $N=2$: then $\tau(\Gamma(2))=\{1,2\}$, and it is immediate that $\tau(\mathrm{SL}_2(\mathbb{Z}/2))=\{1,2,3\}$. We conclude that $\tau(\mathrm{SL}_2(\mathbb{Z}))\subseteq \{1,2,3,4,6\}$. Equality holds, actually, since there are elements of order $4$ and $6$ in $\mathrm{SL}_2(\mathbb{Z})$.
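The torsion spectra of the small congruence quotients used above can be verified by enumerating the groups directly (a brute-force Python sketch):

```python
from itertools import product

def torsion_spectrum_sl2_mod(N):
    """Set of element orders in SL_2(Z/N), by brute-force enumeration."""
    G = [(a, b, c, d) for a, b, c, d in product(range(N), repeat=4)
         if (a * d - b * c) % N == 1]
    I = (1 % N, 0, 0, 1 % N)
    def mul(X, Y):
        a, b, c, d = X; e, f, g, h = Y
        return ((a*e + b*g) % N, (a*f + b*h) % N,
                (c*e + d*g) % N, (c*f + d*h) % N)
    orders = set()
    for X in G:
        P, k = X, 1
        while P != I:
            P, k = mul(P, X), k + 1
        orders.add(k)
    return orders

assert torsion_spectrum_sl2_mod(2) == {1, 2, 3}
assert torsion_spectrum_sl2_mod(3) == {1, 2, 3, 4, 6}
```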
The presence of a torsion element with composite order, namely $6$, implies that $\mathrm{SL}_n(\mathbb{Z})$ is not residually $p$-finite for any prime $p$. As with torsion-freeness, this is easily remedied by passing to a finite-index subgroup:
\begin{prop}
$\mathrm{SL}_n(\mathbb{Z})$ is virtually residually $p$-finite for each prime $p$.
\end{prop}
More precisely, we show:
\begin{lem}\label{residual p-finiteness of PCS}
$\Gamma(N)$ is residually $p$-finite for each prime $p$ dividing $N$.
\end{lem}
\begin{proof} It suffices to prove that, for each prime $p$, $\Gamma(p)$ is residually $p$-finite. To that end, we claim that each successive quotient $\Gamma(p^{k})/\Gamma(p^{k+1})$ in the descending chain $\{\Gamma(p^k) : k\geq 1\}$ is a $p$-group. This is seen most directly by observing that each element in $\Gamma(p^{k})/\Gamma(p^{k+1})$ has order dividing $p$: for any matrix $1_n+p^kX\in \Gamma(p^k)$ we have
\begin{align*}(1_n+p^kX)^p=1_n+\sum _{i=1}^p \binom{p}{i} p^{ki}X^i\in \Gamma(p^{k+1}).\end{align*}
Another way is to use the formula for the index (Lemma~\ref{computing the index}), which yields that $\Gamma(p^{k})/\Gamma(p^{k+1})$ has size $p^{n^2-1}$.
A third, more involved argument shows that each successive quotient $\Gamma(p^{k})/\Gamma(p^{k+1})$ is isomorphic to $(\mathbb{Z}/p,+)^{n^2-1}$. Start by noting that, for any matrix $1_n+p^kX\in \Gamma(p^k)$, we have $1=\det (1_n+p^kX)=1+p^k\:\mathrm{tr}(X)\textrm{ mod } p^{2k}$; in particular, $p$ divides $\mathrm{tr} (X)$. Let $\mathfrak{sl}_n(\mathbb{Z}/p)$ denote the additive group of traceless $n\times n$ matrices over $\mathbb{Z}/p$. Then the map
\begin{align*} \phi_k: \Gamma(p^k)\to \mathfrak{sl}_n(\mathbb{Z}/p), \quad \phi_k(1_n+p^kX)= X \textrm{ mod } p\end{align*}
is well-defined. Firstly, $\phi_k$ is a homomorphism: for $1_n+p^kX$ and $1_n+p^kY$ in $\Gamma(p^k)$ we have
$\phi_k\big((1_n+p^kX)(1_n+p^kY)\big)= X+Y+p^kXY \textrm{ mod } p=X+Y \textrm{ mod } p$.
Secondly, the kernel of $\phi_k$ is $\Gamma(p^{k+1})$. Thirdly, we claim that $\phi_k$ is onto. The target group $\mathfrak{sl}_n(\mathbb{Z}/p)$ is generated by the $n^2-n$ off-diagonal matrix units $\{e_{ij}: 1\leq i\neq j \leq n\}$ together with the $n-1$ diagonal differences $\{e_{ii}-e_{i+1\: i+1}: 1\leq i\leq n-1\}$. It is immediate that the off-diagonal matrix units are in the image of $\phi_k$, as $\phi_k(1_n+p^ke_{ij})=e_{ij}$ for $i\neq j$. To obtain the diagonal differences, consider an $n\times n$ matrix having the $2\times 2$-block
\begin{align*}
\begin{pmatrix} 1+p^k& p^k\\-p^k &1-p^k \end{pmatrix}
\end{align*}
on the diagonal, all the other non-zero entries being $1$'s along the remaining diagonal slots. This is a matrix in $\Gamma(p^k)$ which is mapped by $\phi_k$ to $e_{ii}-e_{i+1\: i+1} + e_{i\: i+1}-e_{i+1\: i}$. As $e_{i\: i+1}$ and $e_{i+1\: i}$ are in the image of $\phi_k$, the same is true for $e_{ii}-e_{i+1\: i+1}$. \end{proof}
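Both descriptions of the quotient can be confirmed in the smallest case $n=2$, $p=3$, $k=1$: the kernel of $\mathrm{SL}_2(\mathbb{Z}/9)\to \mathrm{SL}_2(\mathbb{Z}/3)$ has exactly $3^{n^2-1}=27$ elements, and $\phi_1$ maps it onto the $27$ traceless matrices over $\mathbb{Z}/3$ (an illustrative brute-force Python sketch):

```python
from itertools import product

# Elements of SL_2(Z/9) congruent to the identity mod 3, i.e. Gamma(3)/Gamma(9).
kernel = [(a, b, c, d) for a, b, c, d in product(range(9), repeat=4)
          if (a * d - b * c) % 9 == 1
          and (a % 3, b % 3, c % 3, d % 3) == (1, 0, 0, 1)]
assert len(kernel) == 3 ** (2 ** 2 - 1)  # p^(n^2 - 1) = 27

# phi_1 : 1 + 3X  ->  X mod 3; its image is all of sl_2(Z/3).
image = {((a - 1) // 3 % 3, b // 3 % 3, c // 3 % 3, (d - 1) // 3 % 3)
         for a, b, c, d in kernel}
traceless = {(x, y, z, w) for x, y, z, w in product(range(3), repeat=4)
             if (x + w) % 3 == 0}
assert image == traceless  # the 27 traceless matrices over Z/3
```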
\begin{rem}\label{what lurks} Scratch most properties of $\mathrm{SL}_n(\mathbb{Z})$ and you will find a great discrepancy between $\mathrm{SL}_2(\mathbb{Z})$ and $\mathrm{SL}_{n\geq 3}(\mathbb{Z})$ lurking underneath. For the discussion at hand, the difference turns out to be the following: $\mathrm{SL}_2(\mathbb{Z})$ admits finite-index subgroups which are residually $p$-finite for all primes $p$, whereas in $\mathrm{SL}_{n\geq 3}(\mathbb{Z})$ every finite-index subgroup is residually $p$-finite for only finitely many primes $p$. The question which clarifies and sharpens this contrast is whether principal congruence subgroups can be residually $p$-finite for a prime $p$ not dividing the level.
In $\mathrm{SL}_2(\mathbb{Z})$, the answer is that $\Gamma(2)$ is residually $p$-finite for $p=2$ only, but $\Gamma(N)$ with $N\geq 3$ is residually $p$-finite for all primes $p$. The exceptional case is due to the $2$-torsion in $\Gamma(2)$. In the higher level case there is no torsion. Now a torsion-free subgroup of $\mathrm{SL}_2(\mathbb{Z})$ is free, since $\mathrm{SL}_2(\mathbb{Z})$ acts on a tree with finite vertex stabilizers and without inversion (see I.\S 4 of Serre's \emph{Trees}, Springer 1980). Thus $\Gamma(N)$ with $N\geq 3$ is free. We may then use mutual abstract embeddings to conclude that $\Gamma(N)$ with $N\geq 3$, in fact every free group, is residually $p$-finite for all primes $p$.
In $\mathrm{SL}_{n\geq 3}(\mathbb{Z})$, the answer is that $\Gamma(N)$ is residually $p$-finite if and only if $p$ divides $N$. Once we know this, the Congruence Subgroup Property will imply that each finite-index subgroup of $\mathrm{SL}_{n\geq 3}(\mathbb{Z})$ is residually $p$-finite for only finitely many primes $p$. Now let us justify the answer, specifically the forward implication. The proof hinges on computing the abelianization of $\Gamma (N)$, and this is essentially due to Lee and Szczarba (Invent. Math. 1976). As in the proof of Lemma~\ref{residual p-finiteness of PCS}, there is a well-defined homomorphism
\begin{align*} \Gamma(N)\to \mathfrak{sl}_n(\mathbb{Z}/N), \quad 1_n+NX \mapsto X \textrm{ mod } N\end{align*}
which is furthermore onto. Thus $\Gamma(N)/\Gamma(N^2)\simeq \mathfrak{sl}_n(\mathbb{Z}/N)\simeq (\mathbb{Z}/N, +)^{n^2-1}$, and the commutator subgroup $[\Gamma(N),\Gamma(N)]$ is contained in $\Gamma(N^2)$. On the other hand, we have $1_n+N^2e_{ik}=[1_n+Ne_{ij},1_n+Ne_{jk}]\in [\Gamma(N),\Gamma(N)]$ for distinct $i,j, k$. At this point we use the fact that the principal congruence subgroup of level $M$ is normally generated by $\{1_n+Me_{ij}: i\neq j\}$, the $M$-th powers of the elementary matrices. This is what Mennicke actually proved in his approach to the Congruence Subgroup Property. As pointed out soon after by Bass - Milnor - Serre (Publ. Math. IHES 1967), this fact is equivalent to the Congruence Subgroup Property. It follows that $\Gamma(N^2)$ is contained in $[\Gamma(N),\Gamma(N)]$, by the normality of $\Gamma(N)$. Summarizing, $[\Gamma(N),\Gamma(N)]=\Gamma(N^2)$, so that the abelianization of $\Gamma (N)$ is $(\mathbb{Z}/N, +)^{n^2-1}$. Finally, if $\Gamma(N)$ maps onto a non-trivial finite $p$-group then the abelianization of $\Gamma(N)$ maps onto the corresponding abelianization, which is a non-trivial $p$-group, and we conclude that $p$ divides $N$.
\end{rem}
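The commutator relation $[1_n+Ne_{ij},1_n+Ne_{jk}]=1_n+N^2e_{ik}$ invoked in the remark is easy to confirm for $3\times 3$ integer matrices (a Python sketch, with the convention $[A,B]=ABA^{-1}B^{-1}$; note $(1_n+te_{ij})^{-1}=1_n-te_{ij}$ since $e_{ij}^2=0$ for $i\neq j$):

```python
def mul(A, B):
    """3x3 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def elem(i, j, t):
    """Elementary matrix 1_3 + t * e_ij (i != j); inverse is elem(i, j, -t)."""
    E = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
    E[i][j] = t
    return E

N = 5
A, B = elem(0, 1, N), elem(1, 2, N)
comm = mul(mul(A, B), mul(elem(0, 1, -N), elem(1, 2, -N)))
assert comm == elem(0, 2, N * N)  # [1 + N e_12, 1 + N e_23] = 1 + N^2 e_13
```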
\section{Proof of Platonov's theorem}\label{P}
Let $G$ be a finitely generated linear group over a field $K$, say $G\leq \mathrm{GL}_n(K)$. In $K$, consider the subring $A$ generated by the multiplicative identity $1$ and the matrix entries of a finite, symmetric set of generators for $G$. Thus $A$ is a finitely generated domain, and $G\leq \mathrm{GL}_n(A)$. Platonov's theorem is then a consequence of the following:
\begin{thm}\label{what we're actually proving}
Let $A$ be a finitely generated domain. If $\mathrm{char}\:A=0$, then $\mathrm{GL}_n(A)$ is virtually residually $p$-finite for all but finitely many primes $p$. If $\mathrm{char}\:A=p$, then $\mathrm{GL}_n(A)$ is virtually residually $p$-finite.
\end{thm}
The proof of Theorem~\ref{what we're actually proving} is a straightforward variation on the example of $\mathrm{SL}_n(\mathbb{Z})$, as soon as we know the following facts:
\begin{lem}\label{technical lemma}
Let $A$ be a finitely generated domain. Then the following hold:
\begin{itemize}
\item[i.] $A$ is noetherian.
\item[ii.] $\cap_{k}\: I^k =0$ for any ideal $I\neq A$.
\item[iii.] If $A$ is a field, then $A$ is finite.
\item[iv.] The intersection of maximal ideals of $A$ is $0$.
\item[v.] If $\mathrm{char}\:A=0$, then only finitely many primes $p=p\cdot 1$ are invertible in $A$.
\end{itemize}
\end{lem}
Let us postpone the proof of Lemma~\ref{technical lemma} for the moment, and see how to obtain Theorem~\ref{what we're actually proving}. The principal congruence subgroup of $\mathrm{GL}_n(A)$ corresponding to an ideal $I$ of $A$ is defined by
\begin{align*}\Gamma(I)=\ker \big( \mathrm{GL}_n(A)\to \mathrm{GL}_n(A/I)\big).\end{align*}
If $\pi$ is a maximal ideal then $A/\pi$ is a finite field, by \ref{technical lemma} iii, so $\Gamma(\pi)$ has finite index in $\mathrm{GL}_n(A)$. Also $\cap_{\pi}\: \Gamma(\pi)=\{1_n\}$ as $\pi$ runs over the maximal ideals of $A$, by \ref{technical lemma} iv. This shows that $\mathrm{GL}_n(A)$ is residually finite, thereby proving Malcev's theorem.
We claim that $\pi^k/\pi^{k+1}$ is finite for each $k\geq 1$. In general, if $M$ is an $R$-module which is annihilated by an ideal $I$ -- in the sense that $IM=0$ -- then $M$ is also an $R/I$-module in a natural way: namely, define $\overline{r}\cdot m:=r\cdot m$ for $r\in R$ and $m\in M$. Furthermore, if $M$ is finitely generated as an $R$-module then $M$ is finitely generated as an $R/I$-module. In the case at hand, the $A$-module $\pi^k$ is finitely generated since $A$ is noetherian, so the $A$-module $\pi^k/\pi^{k+1}$ is also finitely generated. Therefore $\pi^k/\pi^{k+1}$ is a finitely generated module over $A/\pi$, that is, a finite-dimensional vector space over the field $A/\pi$. As $A/\pi$ is finite, $\pi^k/\pi^{k+1}$ is finite as well.
The ring $A/\pi^k$ is finite for each $k\geq 1$, so each $\Gamma(\pi^k)$ has finite index in $\mathrm{GL}_n(A)$. Furthermore, $\cap_{k}\: \Gamma(\pi^k)=\{1_n\}$ by \ref{technical lemma} ii. (This shows once again that $\mathrm{GL}_n(A)$ is residually finite.) Let $p$ be the characteristic of $A/\pi$, so $p\in \pi$. Then $\Gamma(\pi^{k})/\Gamma(\pi^{k+1})$ is a $p$-group: for $X\in \Gamma(\pi^k)$ we have
\begin{align*}X^p=1_n+\sum _{i=1}^p \binom{p}{i} (X-1_n)^i\in \Gamma(\pi^{k+1}).\end{align*}
To conclude, $\mathrm{GL}_n(A)$ is virtually residually $p$-finite for each prime $p$ not invertible in $A$. By \ref{technical lemma} v, this happens for all but finitely many primes $p$ in the zero characteristic case. In characteristic $p$, there is only one such prime, namely $p$ itself. Theorem~\ref{what we're actually proving} is proved.
We now return to the proof of the lemma.
\begin{proof}[Proof of Lemma~\ref{technical lemma}] The first two points are standard: i) follows from the Hilbert Basis Theorem, and ii) is the Krull Intersection Theorem for domains.
iii) We use the following fact:
\begin{quotation}
Let $F\subseteq F(u)$ be a field extension with $F(u)$ finitely generated as a ring. Then $F\subseteq F(u)$ is a finite extension and $F$ is finitely generated as a ring.
\end{quotation}
Here is how we use this fact. Let $F$ be the prime field of $A$ and let $a_1,\dots,a_N$ be generators of $A$ as a ring. Thus $A=F(a_1,\dots,a_N)$. Going down the chain
\begin{align*}
A=F(a_1,\dots,a_N)\supseteq F(a_1,\dots,a_{N-1})\supseteq \ldots\supseteq F
\end{align*}
we obtain that $F\subseteq A$ is a finite extension, and that $F$ is finitely generated as a ring. Then $F$ is a finite field, as $\mathbb{Q}$ is not finitely generated as a ring, and so $A$ is finite.
Now here is how we prove the fact. Assume that $u$ is transcendental over $F$, i.e., $F(u)$ is the field of rational functions in $u$. Let $P_1/Q_1, \dots, P_N/Q_N$ generate $F(u)$ as a ring, where $P_i, Q_i\in F[u]$. The multiplicative inverse of $1+u\cdot \prod Q_i$ is a polynomial expression in the $P_i/Q_i$'s, which can be written as $R/\prod Q_i^{s_i}$. Therefore $\prod Q_i^{s_i}=(1+u\cdot \prod Q_i)R$ in $F[u]$. But this is impossible: the non-constant polynomial $1+u\cdot \prod Q_i$ is relatively prime to $\prod Q_i^{s_i}$, since any common irreducible factor would divide both $u\cdot \prod Q_i$ and $1+u\cdot \prod Q_i$, hence would divide $1$.
Thus $u$ is algebraic over $F$. Let $X^d+\alpha_1X^{d-1}+\dots+\alpha_d$ be the minimal polynomial of $u$ over $F$. Let also $a_1,\dots,a_N$ be ring generators of $F(u)=F[u]$. We may write each $a_i$ as $\sum_{0\leq m\leq d-1} \beta_{i,m} \: u^m$, with $\beta_{i,m} \in F$. We claim that the $\alpha_j$'s and the $\beta_{i,m}$'s are ring generators of $F$. Let $c\in F$. Then $c$ is a polynomial in $a_1,\dots,a_N$ over $F$, hence a polynomial in $u$ over the subring of $F$ generated by the $\beta_{i,m}$'s, hence a polynomial in $u$ of degree less than $d$ over the subring of $F$ generated by the $\alpha_j$'s and the $\beta_{i,m}$'s. By the linear independence of $\{1,u,\dots,u^{d-1}\}$, the latter polynomial is actually of degree $0$. Hence $c$ ends up in the subring of $F$ generated by the $\alpha_j$'s and the $\beta_{i,m}$'s.
iv) Let $a\neq 0$ in $A$. To find a maximal ideal of $A$ not containing $a$, we rely on the basic observation that maximal ideals contain no invertible elements. Consider the localization $A'=A[1/a]$. Let $\pi'$ be a maximal ideal in $A'$, so $a\notin \pi'$. The restriction $\pi=\pi'\cap A$ is an ideal in $A$, and $a\notin \pi$. We show that $\pi$ is maximal. The embedding $A\hookrightarrow A'$ induces an embedding $A/\pi\hookrightarrow A'/\pi'$. As $A'/\pi'$ is a field which is finitely generated as a ring, it follows from iii) that $A'/\pi'$ is a finite field. Therefore the subring $A/\pi$ is a finite domain, hence a field as well.
v) We shall use Noether's Normalization Theorem, which says the following.
\begin{quotation}
Let $R$ be a finitely generated algebra over a field $F\subseteq R$. Then there are elements $x_1,\dots, x_N\in R$ algebraically independent over $F$ such that $R$ is integral over $F[x_1,\dots,x_N]$.
\end{quotation}
In our case, $\mathbb{Z}$ is a subring of $A$, and $A$ is an integral domain which is finitely generated as a $\mathbb{Z}$-algebra. Extending to rational scalars, we have that $A_\mathbb{Q}=\mathbb{Q}\otimes_\mathbb{Z} A$ is a finitely generated $\mathbb{Q}$-algebra. By the Normalization Theorem, there exist elements $x_1,\dots, x_N$ in $A_\mathbb{Q}$ which are algebraically independent over $\mathbb{Q}$, and such that $A_\mathbb{Q}$ is integral over $\mathbb{Q}[x_1,\dots,x_N]$. Up to replacing each $x_i$ by an integral multiple of itself, we may assume that $x_1,\dots, x_N$ are in $A$. There is some positive $m\in \mathbb{Z}$ such that each ring generator of $A$ is integral over $\mathbb{Z}[1/m][x_1,\dots, x_N]$. Thus $A[1/m]$ is integral over the subring $\mathbb{Z}[1/m][x_1,\dots, x_N]$. If a prime $p$ is invertible in $A$, then it is also invertible in $A[1/m]$ while at the same time $p\in \mathbb{Z}[1/m][x_1,\dots, x_N]$.
Now we use the following general fact. Let $R$ be a ring which is integral over a subring $S$. If $s\in S$ is invertible in $R$, then $s$ is already invertible in $S$. The proof is easy. Let $r\in R$ with $rs=1$. We have $r^d+s_{1}r^{d-1}+\dots+s_{d-1}r+s_d=0$ for some $s_i\in S$, since $r$ is integral over $S$. Multiplying through by $s^{d-1}$ yields $r=-(s_1+s_2s+\dots+s_ds^{d-1})\in S$.
Returning to our proof, we infer that $p$ is invertible in $\mathbb{Z}[1/m][x_1,\dots, x_N]$. By the algebraic independence of $x_1,\dots, x_N$, it follows that $p$ is actually invertible in $\mathbb{Z}[1/m]$. But only finitely many primes have this property, namely the prime factors of $m$.
\end{proof}
\begin{rem} Let $A$ be an infinite, finitely generated domain with $\mathrm{char}\:A=p>0$.
If $n\geq 2$ then the $p$-torsion group $(A,+)$ embeds in $\mathrm{GL}_n(A)$, and this prevents $\mathrm{GL}_n(A)$ from being virtually residually $\ell$-finite for any prime $\ell\neq p$. So we cannot do any better in the positive characteristic case of Theorem~\ref{what we're actually proving}.
Selberg's lemma fails in positive characteristic for a similar reason. The elementary group $\mathrm{E}_n(A)=\langle 1_n+a\cdot e_{ij} : a\in A, i\neq j\rangle$ is linear over the fraction field of $A$, and it fails to be virtually torsion-free since it contains copies of the infinite torsion group $(A,+)$. Furthermore, if $n\geq 3$ then $\mathrm{E}_n(A)$ is finitely generated. This is due to the commutator relations $[1_n+a\cdot e_{ij}, 1_n+b\cdot e_{jk}]=1_n+ab\cdot e_{ik}$ for distinct $i,j,k$, which imply that $\mathrm{E}_n(A)$ is generated by $\{1_n+a_1\cdot e_{ij},\dots, 1_n+a_N\cdot e_{ij} : i\neq j\}$ whenever $a_1, \dots, a_N$ are ring generators for $A$. For a concrete example, take $A$ to be the polynomial ring $\mathbb{F}_p[t]$, in which case $\mathrm{E}_n(\mathbb{F}_p[t])=\mathrm{SL}_n(\mathbb{F}_p[t])$ since $\mathbb{F}_p[t]$ is a euclidean domain.
\end{rem}
\begin{rem}
Among finitely generated groups, we have the following implications:
\begin{align*}
\textrm{linear}\;\Rightarrow\;\textrm{virtually residually $p$-finite for some prime $p$}\;\Rightarrow\;\textrm{residually finite}
\end{align*}
The first implication, a ``$p$-adic'' refinement of Malcev's theorem, is an immediate consequence of Platonov's theorem. The second implication is Proposition~\ref{one p suffices}. Neither implication can be reversed, as witnessed by the following examples.
According to the previous remark, $\mathrm{SL}_n(\mathbb{F}_p[t])$ for $n\geq 3$ is finitely generated and virtually residually $\ell$-finite for $\ell=p$ only. Therefore $\mathrm{SL}_n(\mathbb{F}_{p}[t])\times \mathrm{SL}_n(\mathbb{F}_{q}[t])$, where $p$ and $q$ are different primes, is finitely generated, residually finite but not virtually residually $\ell$-finite for any prime $\ell$.
The automorphism group of the free group on $n$ generators, $\mathrm{Aut} (F_{n})$, is virtually residually $p$-finite for all primes $p$. Indeed, as we have seen in Remark~\ref{what lurks}, free groups are residually $p$-finite for all primes $p$. Now a theorem of Lubotzky (J. Algebra 1980) says that $\mathrm{Aut} (G)$ is virtually residually $p$-finite whenever the finitely generated group $G$ is virtually residually $p$-finite. This is the ``$p$-adic'' analogue of an older, simpler, and better known theorem of G. Baumslag (J. London Math. Soc. 1963) saying that $\mathrm{Aut} (G)$ is residually finite whenever the finitely generated group $G$ is residually finite. On the other hand, Formanek and Procesi (J. Algebra 1992) have shown that $\mathrm{Aut} (F_n)$ is not linear for $n\geq 3$. \end{rem}
| \chapter*{Preface}
These notes are about \textit{ridge functions}. Recent years have
witnessed a flurry of interest in these functions. Ridge functions
appear in various fields and under various guises. They appear in fields as diverse as
partial differential equations (where they are called \textit{plane waves}),
computerized tomography and statistics. These functions are also
the underpinnings of many central models in neural networks.
We are interested in ridge functions from the point of view of approximation theory.
The basic goal in approximation theory is to approximate complicated objects by simpler objects.
Among many classes of multivariate functions, linear combinations of ridge functions are a class of simpler functions.
These notes study some problems of approximation of multivariate functions by linear combinations of ridge functions.
We present here various properties of these functions.
The questions we ask are as follows. When can a multivariate function be expressed as a linear combination of ridge functions from a certain class? When do such linear combinations represent each multivariate function? If a precise representation is not possible, can one approximate arbitrarily well? If arbitrarily good approximation fails, how can one compute or estimate the error of approximation, and how does one know that a best approximation exists? How can one characterize and construct best approximations? If a smooth function is a sum of arbitrarily behaved ridge functions, is it true that it can be expressed as a sum of smooth ridge functions?
We also study properties of generalized ridge functions, which are very much related to linear superpositions and Kolmogorov's famous superposition theorem. These notes end with a few applications of ridge functions to the problem of approximation by single and two hidden layer neural networks with a restricted set of weights.
We hope that these notes will be useful and interesting to both researchers and graduate students.
\newpage
\tableofcontents
\newpage
\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
Recent years have seen a growing interest in the study of special
multivariate functions called ridge functions. A \textit{ridge function}, in
its simplest format, is a multivariate function of the form $g\left( \mathbf{%
a}\cdot \mathbf{x}\right) $, where $g:\mathbb{R}\rightarrow \mathbb{R}$, $%
\mathbf{a}=\left( a_{1},...,a_{d}\right) $ is a fixed vector (direction) in $%
\mathbb{R}^{d}\backslash \left\{ \mathbf{0}\right\} $, $\mathbf{x}=\left(
x_{1},...,x_{d}\right) $ is the variable and $\mathbf{a}\cdot \mathbf{x}$ is
the standard inner product. In other words, a ridge function is a
multivariate function constant on the parallel hyperplanes $\mathbf{a}\cdot
\mathbf{x}=c$, $c\in \mathbb{R}$. These functions arise naturally in various
fields. They arise in computerized tomography (see, e.g., \cite%
{72,73,74,97,106,111}), statistics (see, e.g., \cite{13,14,27,33,42}) and
neural networks (see, e.g., \cite{22,Is,58,94,100,119,123}). These
functions are also used in modern approximation theory as an effective and
convenient tool for approximating complicated multivariate functions (see,
e.g., \cite{38,66,57,Kr,101,114,118,137}).
It should be remarked that long before the appearance of the name
\textquotedblleft ridge\textquotedblright, these functions were used in PDE theory under the name of \textit{plane waves}.
For example, see the book by F. John \cite{69}. In general, sums of ridge functions with
fixed directions occur in the study of hyperbolic constant coefficient partial
differential equations. As an example, assume that $(\alpha _{i},\beta
_{i}),~i=1,...,r,$ are pairwise linearly independent vectors in $\mathbb{R}%
^{2}$. Then the general solution to the homogeneous partial differential
equation
\begin{equation*}
\prod\limits_{i=1}^{r}\left( \alpha {_{i}{\frac{\partial }{\partial {x}}}%
+\beta _{i}{\frac{\partial }{\partial {y}}}}\right) {u}\left( {x,y}\right) =0
\end{equation*}%
consists of all functions of the form
\begin{equation*}
u(x,y)=\sum\limits_{i=1}^{r}g_{i}\left( \beta {_{i}x-\alpha _{i}y}\right)
\end{equation*}%
for arbitrary continuous univariate functions $g_{i}$, $i=1,...,r$. Here the
derivatives are understood in the sense of distributions.
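That each such $u$ is annihilated by the operator can be checked symbolically, here for $r=2$ with arbitrary smooth $g_i$ (an illustrative sympy sketch; for smooth $g_i$ the distributional and classical derivatives agree):

```python
import sympy as sp

x, y = sp.symbols('x y')
a1, b1, a2, b2 = sp.symbols('alpha1 beta1 alpha2 beta2')
g1, g2 = sp.Function('g1'), sp.Function('g2')

# Candidate solution u(x, y) = g1(beta1 x - alpha1 y) + g2(beta2 x - alpha2 y).
u = g1(b1 * x - a1 * y) + g2(b2 * x - a2 * y)

# Apply (alpha1 d/dx + beta1 d/dy)(alpha2 d/dx + beta2 d/dy) to u.
Lu = u
for a, b in [(a2, b2), (a1, b1)]:
    Lu = a * sp.diff(Lu, x) + b * sp.diff(Lu, y)

assert sp.simplify(Lu) == 0  # each factor kills the corresponding ridge term
```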
The term \textquotedblleft ridge function\textquotedblright{} was coined by Logan and Shepp in
their seminal paper \cite{97} devoted to the basic mathematical problem of
computerized tomography. This problem consists of reconstructing a given
multivariate function from values of its integrals along certain straight
lines in the plane. The integrals along parallel lines can be considered as
a ridge function. Thus, the problem is to reconstruct $f$ from some set of
ridge functions generated by the function $f$ itself. In practice, one can
consider only a finite number of directions along which the above integrals
are taken. Obviously, reconstruction from such data needs some additional
conditions to be unique, since there are many functions $g$ having the same
integrals. For uniqueness, Logan and Shepp \cite{97} used the criterion of
minimizing the $L_{2}$ norm of $g$. That is, they found a function $g(x,y)$
with the minimum $L_{2}$ norm among all functions, which has the same
integrals as $f$. More precisely, let $D$ be the unit disk in the plane and
an unknown function $f(x,y)$ be square integrable and supported on $D.$ We
are given projections $P_{f}(t,\theta )$ (integrals of $f$ along the lines $%
x\cos \theta +y\sin \theta =t$) and we look for a function $g=g(x,y)$ of
minimum $L_{2}$ norm, which has the same projections as $f:$ $P_{g}(t,\theta
_{j})=P_{f}(t,\theta _{j}),$ $j=0,1,...,n-1$, where the angles $\theta _{j}$
generate equally spaced directions, i.e. $\theta _{j}=\frac{j\pi }{n},$ $%
j=0,1,...,n-1.$ The authors of \cite{97} showed that this problem of
tomography is equivalent to the problem of $L_{2}$-approximation of the
function $f$ by sums of ridge functions with the equally spaced directions $%
(\cos \theta _{j},\sin \theta _{j})$, $j=0,1,...,n-1.$ They gave a
closed-form expression for the unique function $g(x,y)$ and showed that the
unique polynomial $P(x,y)$ of degree $n-1$ which best approximates $f$ in $%
L_{2}(D)$ is determined from the above $n$ projections of $f$ and can be
represented as a sum of $n$ ridge functions.
Kazantsev \cite{72} solved the above problem of tomography without requiring
that the considered directions are equally spaced. Marr \cite{106}
considered the problem of finding a polynomial of degree $n-2$, whose
projections along lines joining each pair of $n$ equally spaced points on
the circumference of $D$ best matches the given projections of $f$ in the
sense of minimizing the sum of squares of the differences. Thus we see that
the problems of tomography give rise to an independent study of
approximation theoretic properties of the following set of linear
combinations of ridge functions:
\begin{equation*}
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) =\left\{
\sum\limits_{i=1}^{r}g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right)
:g_{i}:\mathbb{R}\rightarrow \mathbb{R},i=1,...,r\right\} ,
\end{equation*}%
where directions $\mathbf{a}^{1},...,\mathbf{a}^{r}$ are fixed and belong to
the $d$-dimensional Euclidean space. Note that the set $\mathcal{R}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ is a linear space.
Ridge function approximation also appears in statistics in \textit{%
Projection Pursuit}. This term was introduced by Friedman and Tukey \cite{32}
to name a technique for the exploratory analysis of large and multivariate
data sets. This technique seeks out ``interesting" linear projections of the
multivariate data onto a line or a plane. Projection Pursuit algorithms
approximate a multivariate function $f$ by sums of ridge functions with
variable directions, that is, by functions from the set
\begin{equation*}
\mathcal{R}_{r}=\left\{ \sum\limits_{i=1}^{r}g_{i}\left( \mathbf{a}^{i}\cdot
\mathbf{x}\right) :\mathbf{a}^{i}\in \mathbb{R}^{d}\setminus \{\mathbf{0}%
\},\ g_{i}:\mathbb{R}\rightarrow \mathbb{R},i=1,...,r\right\} .
\end{equation*}%
Here $r$ is the only fixed parameter, directions $\mathbf{a}^{1},...,\mathbf{%
a}^{r}$ and functions $g_{1},...,g_{r}$ are free to choose. The first method
of such approximation was developed by Friedman and Stuetzle \cite{33}.
Their approximation process called \textit{Projection Pursuit Regression}
(PPR) operates in a stepwise and greedy fashion. The process does not find a
best approximation from $\mathcal{R}_{r}$; instead, it algorithmically constructs
functions $g_{r}\in \mathcal{R}_{r}$ such that $\left\Vert
g_{r}-f\right\Vert _{L_{2}}\rightarrow 0$ as $r\rightarrow \infty $. At
stage $m$, PPR looks for a univariate function $g_{m}$ and direction $%
\mathbf{a}^{m}$ such that the ridge function $g_{m}\left( \mathbf{a}%
^{m}\cdot \mathbf{x}\right) $ best approximates the residual $%
f(x)-\sum\limits_{j=1}^{m-1}g_{j}\left( \mathbf{a}^{j}\cdot \mathbf{x}%
\right) $. Projection pursuit regression has been proposed as an approach to
bypass the curse of dimensionality and is now applied to prediction in the
applied sciences. In \cite{13,14}, Cand\`{e}s developed a new approach based not
on stepwise construction of approximation but on a new transform called the
\textit{ridgelet transform}. The ridgelet transform represents general
functions as integrals of \textit{ridgelets} -- specifically chosen ridge
functions.
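To make the greedy character of PPR concrete, here is a minimal numerical sketch. It is not Friedman and Stuetzle's algorithm itself: as simplifying assumptions, the candidate directions form a small fixed dictionary and each univariate stage is a cubic least-squares fit rather than a nonparametric smoother.

```python
import numpy as np

# A greedy, PPR-style fit: at each stage pick the direction whose best
# univariate (here: cubic polynomial) fit to the current residual gives
# the smallest L2 error, then subtract that ridge term.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))      # sample points in [-1,1]^2
f = X[:, 0] * X[:, 1]                      # target f(x, y) = x*y

# a small fixed dictionary of candidate directions (an assumption of this sketch)
angles = np.linspace(0, np.pi, 8, endpoint=False)
directions = [np.array([np.cos(t), np.sin(t)]) for t in angles]

residual = f.copy()
errors = [np.sqrt(np.mean(residual ** 2))]
for stage in range(4):
    best_err, best_fit = None, None
    for a in directions:
        t = X @ a                          # projections a . x
        fit = np.polyval(np.polyfit(t, residual, 3), t)
        err = np.sqrt(np.mean((residual - fit) ** 2))
        if best_err is None or err < best_err:
            best_err, best_fit = err, fit
    residual -= best_fit                   # subtract the chosen ridge term
    errors.append(best_err)
```

Since $xy=((x+y)^{2}-(x-y)^{2})/4$, two stages choosing the diagonal directions already remove most of the error.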
The significance of approximation by ridge functions is well understood from
its role in the theory of \textit{neural networks}. Ridge functions appear
in the definitions of many central neural network models. It is widely
known that neural networks are being successfully applied across an
extraordinary range of problem domains, in fields as diverse as finance,
medicine, engineering, geology and physics. Generally speaking, neural
networks are being introduced wherever there are problems of
prediction, classification or control. Thus, not surprisingly, there is
great interest in this powerful and very popular area of research (see,
e.g., \cite{119} and the many references therein). An artificial
neural network is a way to perform computations using networks of
interconnected computational units, vaguely analogous to neurons, that loosely
imitate how the brain works. An artificial neuron, which forms the basis for
designing neural networks, is a device with $d$ real inputs and an output.
This output is generally a ridge function of the given inputs. In
mathematical terms, a neuron may be described as
\begin{equation*}
y=\sigma (\mathbf{w\cdot x}-\theta ),
\end{equation*}%
where $\mathbf{x=(x}_{1},...,x_{d})\in \mathbb{R}^{d}$ are the input
signals, $\mathbf{w}=(w_{1},...,w_{d})\in \mathbb{R}^{d}$ are the synaptic weights, $%
\theta \in \mathbb{R}$ is the bias, $\sigma $ is the activation function and
$y$ is the output signal of the neuron. In a layered neural network the
neurons are organized in the form of layers. We have at least two layers: an
input and an output layer. The layers between the input and the output
layers (if any) are called hidden layers, whose computation nodes are
correspondingly called hidden neurons or hidden units. The output signals of
the first layer are used as inputs to the second layer, the output signals
of the second layer are used as inputs to the third layer, and so on for the
rest of the network. A neural network with this kind of architecture is
called a \textit{Multilayer Feedforward Perceptron} (MLP). This is the most
popular model among other neural network models. In this model, a neural
network with a single hidden layer and one output represents a function of
the form
\begin{equation*}
\sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i}).
\end{equation*}
Here the weights $\mathbf{w}^{i}$ are vectors in $\mathbb{R}^{d}$, the
thresholds $\theta _{i}$ and the coefficients $c_{i}$ are real numbers and
the activation function $\sigma $ is a univariate function. We fix only $%
\sigma $ and $r$. Note that the functions $\sigma (\mathbf{w}^{i}\mathbf{%
\cdot x}-\theta _{i})$ are ridge functions. Thus it is not surprising that
some approximation theoretic problems related to neural networks have strong
association with the corresponding problems of approximation by ridge
functions.
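The model above is straightforward to evaluate numerically. The following sketch does so for illustrative (untrained) weights, chosen here only as an assumption for the example; it emphasizes that each hidden unit $\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i})$ is a ridge function of the input.

```python
import numpy as np

# Evaluating  y = sum_i c_i * sigma(w^i . x - theta_i)  for a single
# hidden layer.  The weights below are illustrative only.
def sigma(t):
    # logistic activation function
    return 1.0 / (1.0 + np.exp(-t))

def network(x, W, theta, c):
    # W: r x d array whose rows are the weight vectors w^i
    return c @ sigma(W @ x - theta)

W = np.array([[1.0, -2.0], [0.5, 0.5], [-1.0, 3.0]])  # r = 3 units, d = 2
theta = np.array([0.0, 1.0, -0.5])
c = np.array([2.0, -1.0, 0.5])
x = np.array([0.3, 0.7])
y = network(x, W, theta, c)  # each unit sigma(w^i . x - theta_i) is a ridge function
```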
It is clear that in the special case, linear combinations of ridge functions
turn into sums of univariate functions. This is also the simplest case. The
simplicity of the approximation apparatus itself guarantees its utility in
applications where multivariate functions are a constant obstacle. In
mathematics, this type of approximation has arisen, for example, in
connection with the classical functional equations \cite{11}, the numerical
solution of certain PDE boundary value problems \cite{9}, dimension theory
\cite{132,133}, etc. In computer science, it arises in connection with the
efficient storage of data in computer databases (see, e.g., \cite{140}).
There is an interesting interconnection between the theory of approximation
by univariate functions and problems of equilibrium construction in
economics (see \cite{136}).
Linear combinations of ridge functions with fixed directions allow a natural
generalization to functions of the form $g(\alpha _{1}(x_{1})+\cdots
+\alpha _{d}(x_{d}))$, where $\alpha _{i}(x_{i})$, $i=1,...,d,$
are real univariate functions. Such a generalization has a strong
association with linear superpositions. A linear superposition is a function
expressed as the sum%
\begin{equation*}
\sum\limits_{i=1}^{r}g_{i}(h_{i}(x)), \; x \in X,
\end{equation*}%
where $X$ is any set (in particular, a subset of $\mathbb{R}^{d}$), $%
h_{i}:X\rightarrow {{\mathbb{R}}},~i=1,...,r,$ are arbitrarily fixed
functions, and $g_{i}:\mathbb{R}\rightarrow \mathbb{R},~i=1,...,r.$ Note
that here we deal with a more complicated composition than the composition of
a univariate function with the inner product. A starting point in the study
of linear superpositions was the well known superposition theorem of
Kolmogorov \cite{83} (see also the paper on Kolmogorov's works by Tikhomirov
\cite{139}). This theorem states that for the unit cube $\mathbb{I}^{d},~%
\mathbb{I}=[0,1],~d\geq 2,$ there exist $2d+1$ functions $%
\{s_{q}\}_{q=1}^{2d+1}\subset C(\mathbb{I}^{d})$ of the form
\begin{equation*}
s_{q}(x_{1},...,x_{d})=\sum_{p=1}^{d}\varphi _{pq}(x_{p}),~\varphi _{pq}\in
C(\mathbb{I}),~p=1,...,d,~q=1,...,2d+1
\end{equation*}%
such that each function $f\in C(\mathbb{I}^{d})$ admits the representation
\begin{equation*}
f(x)=\sum_{q=1}^{2d+1}g_{q}(s_{q}(x)),~x=(x_{1},...,x_{d})\in \mathbb{I}%
^{d},~g_{q}\in C({{\mathbb{R)}}}.
\end{equation*}%
Thus, any continuous function on the unit cube can be represented as a
linear superposition with the fixed inner functions $s_{1},...,s_{2d+1}$. In
the literature, these functions are called universal functions or the Kolmogorov
functions. Note that all the functions $g_{q}(s_{q}(x))$ in the Kolmogorov
superposition formula are generalized ridge functions, since each $s_{q}$ is a
sum of univariate functions.
In these notes, we consider some problems of approximation and/or representation of multivariate
functions by linear combinations of ridge functions,
generalized ridge functions and feedforward neural networks. The notes consist
of five chapters.
Chapter 1 is devoted to the approximation from some sets of ridge functions
with arbitrarily fixed directions in $C$ and $L_{2}$ metrics. First, we
study problems of representation of multivariate functions by linear
combinations of ridge functions. Then, in case of two fixed directions and
under suitable conditions, we give complete solutions to three basic
problems of uniform approximation, namely, problems on existence,
characterization, and construction of a best approximation. We also study
problems of well approximation (approximation with arbitrary accuracy)
and representation of continuous multivariate functions by sums
of two continuous ridge functions. The reader will see the main difficulties
and remaining open problems in the uniform approximation by sums of more than
two ridge functions. For $L_{2}$ approximation, the number of summands does
not play as essential a role as it does in the uniform approximation. In
this case, it is known that a best approximation always exists and is unique.
For some special domains in $\mathbb{R}^{d}$, we characterize and then
construct the best approximation. We also give an explicit formula for the
approximation error.
Chapter 2 explores the following open problem raised in Buhmann and
Pinkus \cite{12}, and Pinkus \cite[p. 14]{117}. Assume we are given a
function $f(\mathbf{x})=f(x_{1},...,x_{d})$ of the form
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}f_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),
\end{equation*}%
where the $\mathbf{a}^{i},$ $i=1,...,k,$ are pairwise linearly independent
vectors (directions) in $\mathbb{R}^{d}$, $f_{i}$ are arbitrarily behaved
univariate functions and $\mathbf{a}^{i}\cdot \mathbf{x}$ are standard inner
products. Assume, in addition, that $f$ is of a certain smoothness class,
that is, $f\in C^{s}(\mathbb{R}^{d})$, where $s\geq 0$ (with the convention
that $C^{0}(\mathbb{R}^{d})=C(\mathbb{R}^{d})$). Is it true that there will
always exist $g_{i}\in C^{s}(\mathbb{R})$ such that
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})\text{ ?}%
\end{equation*}
In this chapter, we solve this problem up to some multivariate polynomial.
We find various conditions on the directions $%
\mathbf{a}^{i}$ under which this polynomial can be expressed as a sum of smooth ridge functions with these directions. We also
consider the question of constructing the $g_{i}$ using the information about the known functions $f_{i}$.
Chapter 3 is devoted to the simplest type of ridge functions -- univariate
functions. Note that a ridge function depends only on one variable if its
direction coincides with the coordinate direction. Thus, in case of
coincidence of all given directions with the coordinate directions, the
problem of ridge function approximation turns into the problem of
approximation of multivariate functions by sums of univariate functions. In
this chapter, we first consider the approximation of a bivariate function $%
f(x,y)$ by sums $\varphi (x)+\psi (y)$ on a rectangular domain $R$. We
construct special classes of continuous functions depending on a numerical
parameter and characterize each class in terms of the approximation error
calculation formulas. This parameter will show which points of $R$ the
calculation formula involves. We will also construct a best approximating
sum $\varphi _{0}(x)+\psi _{0}(y)$ to a function from constructed classes.
Then we develop a method for obtaining explicit formulas for the error of
approximation of bivariate functions, defined on a union of rectangles, by
sums of univariate functions. It should be remarked that formulas of such
type were known only for functions defined on a rectangle with sides
parallel to the coordinate axes. Our method, based on a maximization process
over certain objects, called \textquotedblleft closed bolts\textquotedblright, allows the
consideration of functions defined on hexagons, octagons and stairlike
polygons with sides parallel to the coordinate axes. At the end of this
chapter we discuss one important result from Golomb's paper \cite{37}. This
paper, published in 1959, marked the start of a systematic study of
approximation of multivariate functions by various compositions, including
sums of univariate functions. In \cite{37}, along with many other results,
Golomb obtained a duality formula for the error of approximation to a
multivariate function from the set of sums of univariate functions.
Unfortunately, his proof had a gap, which was pointed out 24 years later by
Marshall and O'Farrell \cite{107}. But the question of whether Golomb's formula was
correct remained unsolved. In Chapter 3, we show that Golomb's formula is
correct, and moreover it holds in a stronger form.
Chapter 4 tells us about some problems concerning generalized ridge functions $%
g(\alpha _{1}(x_{1})+\cdots +\alpha _{d}(x_{d}))$ and linear
superpositions. We consider the problem of representation of general
functions by linear superpositions. We show that if some representation by
linear superpositions, in particular by linear combinations of generalized ridge
functions, holds for continuous functions, then it holds for all functions.
This leads us to extensions of many superposition theorems (such as the
well-known Kolmogorov superposition theorem, Ostrand's superposition
theorem, etc.) from continuous to arbitrarily behaved multivariate
functions. Concerning generalized ridge functions, we see that every
multivariate function can be written as a generalized ridge function or as a sum
of finitely many such functions. We also study the uniqueness of representation of
functions by linear superpositions.
Chapter 5 is about neural network approximation. The analysis in this chapter is
based on properties of ordinary and generalized ridge functions.
We consider single and two hidden layer feedforward neural network models with a restricted set of
weights. Such network models are important from the point of view of
practical applications. We study approximation properties of single hidden
layer neural networks with weights varying on a finite set of directions and
straight lines. We give several necessary and sufficient conditions for well
approximation by such networks. For a set of weights consisting of two
directions (and two straight lines), we show that there is a geometrically
explicit solution to the problem. Regarding
two hidden layer feedforward neural networks, we prove that two hidden layer
neural networks with $d$ inputs, $d$ neurons in the first hidden layer, $2d+2
$ neurons in the second hidden layer and with a specifically constructed
sigmoidal, infinitely differentiable and almost monotone activation function can approximate
any continuous multivariate function with arbitrary precision. We show that
for this approximation a finite number of fixed weights (precisely, $d$
fixed weights) suffices.
There are topics related to ridge functions that are not presented here. The
glaring omission is that of interpolation at points and on straight lines by
ridge functions. We also do not address, for example, questions of linear
independence and spanning by linear combinations of ridge monomials in the
spaces of homogeneous and algebraic polynomials of a fixed degree, integral
representations of functions where the kernel is a ridge function, and
approximation algorithms for finding best approximations from spaces of
linear combinations of ridge functions. These and similar topics may be
found in the monograph by Pinkus \cite{117}. The reader may also consult
the survey articles \cite{57,87,118}.
\newpage
\chapter{Properties of linear combinations of ridge functions}
In this chapter, we consider approximation-theoretic problems arising in
ridge function approximation. First we briefly review some results on approximation
by sums of ridge functions with both fixed and variable directions. Then we
analyze the problem of representability of an arbitrary multivariate
function by linear combinations of ridge functions with fixed directions. In
the special case of two fixed directions, we characterize a best uniform
approximation from the set of sums of ridge functions with these directions.
For a class of bivariate functions we use this result to construct
explicitly a best approximation. Questions on existence of a best
approximation are also studied. We also study problems of well approximation
(approximation with arbitrary accuracy) and representation of continuous
multivariate functions by sums of two continuous ridge functions. The reader
will see the main difficulties and remaining open problems in the uniform
approximation by sums of more than two ridge functions. For $L_{2}$
approximation, the number of summands does not play as essential a role as
it does in the uniform approximation. In this case, it is known that a best
approximation always exists and is unique. For some special domains in $%
\mathbb{R}^{d}$, we characterize and then construct the best approximation.
We also give an explicit formula for the approximation error.
\bigskip
\section{A brief excursion into the approximation theory of ridge functions}
In this section we briefly review some results on approximation properties of the sets $%
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ and $\mathcal{R}%
_{r}$. These results are presented without proofs but with discussions and complete references.
We hope this section will whet the reader's appetite for the rest of these notes,
where a more comprehensive study of concrete mathematical problems is provided.
\subsection{$\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ --
ridge functions with fixed directions}
It is clear that well approximation of a multivariate function $f:X\rightarrow \mathbb{R}$ from
some normed space by using elements of the set $\mathcal{R}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r}\right)$ is not always possible. The value of the
approximation error depends not only on the approximated function $f$ but
also on the geometric structure of the given set $X$. This poses challenging research problems on
computing the error of approximation and constructing best approximations from
$\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right)$. Serious difficulties arise
when one attempts to solve these problems in continuous function spaces endowed with
the uniform norm. For example, let us consider the algorithm for finding
best approximations, called the \textit{Diliberto-Straus algorithm} (see \cite{18}). The essence of
this algorithm is as follows. Let $X$ be a compact subset of $\mathbb{R}^{d}$
and $A_{i}$ be a best approximation operator from the space of continuous
functions $C(X)$ to the subspace of ridge functions $G_{i}=\{g_{i}\left(
\mathbf{a}^{i}\cdot \mathbf{x}\right) :~g_{i}\in C(\mathbb{R)},~\mathbf{x}%
\in X\}$, $i=1,...,r.$ That is, for each function $f\in C(X)$, the
function $A_{i}f$ is a best approximation to $f$ from $G_{i}.$ Set
\begin{equation*}
Tf=(I-A_{r})(I-A_{r-1})\cdots (I-A_{1})f,
\end{equation*}%
where $I$ is the identity operator. It is clear that
\begin{equation*}
Tf=f-g_{1}-g_{2}-\cdots -g_{r},
\end{equation*}%
where $g_{k}$ is a best approximation from $G_{k}$ to the function $%
f-g_{1}-g_{2}-\cdots -g_{k-1}$, $k=1,...,r.$ Consider powers of
the operator $T$: $T^{2},T^{3}$ and so on. Is the sequence $%
\{T^{n}f\}_{n=1}^{\infty }$ convergent? In case of an affirmative answer,
which function is the limit of $T^{n}f,$ as $n\rightarrow \infty$? One may
expect that the sequence $\{T^{n}f\}_{n=1}^{\infty }$ converges to $%
f-g^{\ast },$ where $g^{\ast }$ is a best approximation from $\mathcal{R}%
\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ to $f$. This conjecture
was first stated by Diliberto and Straus \cite{26} in 1951 for the
uniform approximation of a multivariate function, defined on the unit cube,
by sums of univariate functions (that is, sums of ridge functions with the
coordinate directions). But later it was shown by Aumann \cite{4} that the sequence generated by this
algorithm may not converge if $r>2$. For $r=2$ and certain convex
compact sets $X$, the sequence $\{\|T^{n}f\|\}_{n=1}^{\infty }$
converges to the approximation error $\|f-g_{0}\|$,
where $g_{0}$ is a best approximation from $\mathcal{R}%
\left( \mathbf{a}^{1},\mathbf{a}^{2}\right)$ (see \cite{61,117}).
However, it is not yet clear whether $\|T^{n}f-(f-g_{0})\|$ converges to zero
as $n\rightarrow \infty$. In the case $r>2$ no efficient algorithm is known for finding a best uniform
approximation from $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{r}\right)$. Note that in the $L_{2}$ metric, the Diliberto-Straus algorithm
converges as desired for an arbitrary number of distinct
directions. This also holds in the $L_{p}$ space setting, provided that $p>1$ and
$\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right)$ is closed (see \cite{118}). But in the $L_{1}$ space setting,
the alternating algorithm does not work even in the case of two directions (see \cite{117}).
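In the discrete $L_{2}$ setting with the two coordinate directions on a product grid, each best-approximation operator $A_{i}$ is simply an average over the other variable, so the alternating scheme is easy to simulate. The following sketch illustrates this benign case (by assumption a finite grid and $r=2$; it is not the uniform-norm setting above, where the difficulties arise).

```python
import numpy as np

# Alternating (Diliberto-Straus-type) scheme in discrete L2 on an n x n grid,
# r = 2 coordinate directions: the best approximation from G_1 (functions of
# x alone) is the row mean of the residual; from G_2, the column mean.
n = 50
x = np.linspace(0.0, 1.0, n)
F = np.sin(3 * x)[:, None] + np.cos(2 * x)[None, :] + 0.3 * np.outer(x, x)

phi = np.zeros(n)
psi = np.zeros(n)
for _ in range(20):                         # converges very quickly here
    phi = (F - psi[None, :]).mean(axis=1)   # update the function of x
    psi = (F - phi[:, None]).mean(axis=0)   # update the function of y

residual = F - phi[:, None] - psi[None, :]
err = np.sqrt(np.mean(residual ** 2))       # L2 distance to {phi(x) + psi(y)}
```

At the limit, the residual has zero mean along every row and every column, which is exactly the $L_{2}$ characterization of a best approximation by sums $\varphi(x)+\psi(y)$.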
One of the basic problems concerning the approximation by sums of ridge
functions with fixed directions is the problem of verifying whether a given
function $f$ belongs to the space $\mathcal{R}\left( \mathbf{a}^{1},...,%
\mathbf{a}^{r}\right) $. This problem has a simple solution if the space
dimension $d=2$ and a given function $f(x,y)$ has partial derivatives up to $%
r$-th order. For the representation of $f(x,y)$ in the form%
\begin{equation*}
f(x,y)=\sum_{i=1}^{r}g_{i}(a_{i}x+b_{i}y),
\end{equation*}%
it is necessary and sufficient that
\begin{equation*}
\prod\limits_{i=1}^{r}\left( b_{i}\frac{\partial }{\partial x}-a_{i}\frac{%
\partial }{\partial y}\right) f=0.\tag{1.1}
\end{equation*}%
This recipe is also valid for continuous bivariate functions provided that
the derivatives are understood in the sense of distributions.
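The characterization (1.1) can be checked symbolically. The sketch below, with the directions $(1,1)$ and $(1,-2)$ chosen purely for illustration, verifies that the product operator annihilates a sum of two ridge functions with these directions but not, say, $x^{2}y$.

```python
import sympy as sp

# Symbolic check of criterion (1.1): with directions (a_1,b_1) = (1, 1) and
# (a_2,b_2) = (1, -2), the operator prod_i (b_i d/dx - a_i d/dy) must
# annihilate f = (x + y)^2 + (x - 2y)^3 but not x^2 * y.
x, y = sp.symbols('x y')
f = (x + y) ** 2 + (x - 2 * y) ** 3

def D(a, b, h):
    # the first-order operator  b * d/dx - a * d/dy  for direction (a, b)
    return b * sp.diff(h, x) - a * sp.diff(h, y)

assert sp.simplify(D(1, -2, D(1, 1, f))) == 0           # f is a sum of two ridge functions
assert sp.simplify(D(1, -2, D(1, 1, x ** 2 * y))) != 0  # x^2 * y is not
```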
Unfortunately such a simple characterization does not carry over to the
case of more than two variables. Below we provide two results
concerning the general case of arbitrarily many variables.
\bigskip
\textbf{Proposition 1.1 }(Diaconis, Shahshahani \cite{25}). \textit{Let $%
\mathbf{a}^{1},...,\mathbf{a}^{r}$ be pairwise linearly independent vectors
in $\mathbb{R}^{d}.$ For $i=1,2,...,r$, let $H^{i}$ denote the hyperplane $%
\{\mathbf{c}\in \mathbb{R}^{d}$: $\mathbf{c\cdot a}^{i}=0\}.$ Then a
function $f\in C^{r}(\mathbb{R}^{d})$ can be represented in the form}
\begin{equation*}
f(\mathbf{x})=\sum\limits_{i=1}^{r}g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}%
\right) +P(\mathbf{x}),
\end{equation*}%
\textit{where $P(\mathbf{x})$ is a polynomial of degree not more than $r$,
if and only if}
\begin{equation*}
\prod\limits_{i=1}^{r}\sum_{s=1}^{d}c_{s}^{i}\frac{\partial f}{\partial x_{s}%
}=0,
\end{equation*}%
\textit{for all vectors $\mathbf{c}^{i}=(c_{1}^{i},c_{2}^{i},...,c_{d}^{i})%
\in H^{i}, $ $i=1,2,...,r.$}
\bigskip
There are examples showing that one cannot simply dispense with the
polynomial $P(\mathbf{x})$ in the above proposition (see \cite{25}). In
fact, a polynomial term appears in the sufficiency part of the proof of this
proposition.
Lin and Pinkus \cite{95} obtained a more general result on the representation
by sums of ridge functions with fixed directions.
We need some notation to present their result. Each
polynomial $p(x_{1},...,x_{d})$ generates the differential operator $p(\frac{%
\partial }{\partial x_{1}},...,\frac{\partial }{\partial x_{d}}).$ Let $P(%
\mathbf{a}^{1},...,\mathbf{a}^{r})$ denote the set of polynomials which
vanish on all the lines $\{\lambda \mathbf{a}^{i},\lambda \in \mathbb{R}\},$
$i=1,...,r.$ Obviously, this is an ideal in the ring of all polynomials. Let
$Q$ be the set of polynomials $q=q(x_{1},...,x_{d})$ such that $p(\frac{%
\partial }{\partial x_{1}},...,\frac{\partial }{\partial x_{d}})q=0$, for
all $p(x_{1},...,x_{d})\in P(\mathbf{a}^{1},...,\mathbf{a}^{r}).$
\bigskip
\textbf{Proposition 1.2 }(Lin, Pinkus \cite{95}). \textit{Let $\mathbf{a}%
^{1},...,\mathbf{a}^{r}$ be pairwise linearly independent vectors in $%
\mathbb{R}^{d}.$ A function $f\in C(\mathbb{R}^{d})$ can be expressed in the
form}
\begin{equation*}
f(\mathbf{x})=\sum\limits_{i=1}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x)},
\end{equation*}%
\textit{if and only if $f$ belongs to the closure of the linear span of $Q.$}
\bigskip
In \cite{120}, A. Pinkus considered the problems of smoothness and uniqueness
in ridge function representation. For a given function $f\in \mathcal{R%
}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $, he posed and answered the
following questions. If $f$ belongs to some smoothness class, what can we
say about the smoothness of the functions $g_{i}$? In how many different ways
can we write $f$ as a linear combination of ridge functions? These and similar problems
will be extensively discussed in Chapter 2.
The above problem of representation of fixed functions by sums of
ridge functions gives rise to the problem of representation of some classes
of functions by such sums. For example, one may consider the following
problem. Let $X$ be a subset of the $d$-dimensional Euclidean space. Let $%
C(X),$ $B(X),$ $T(X)$ denote the sets of continuous, bounded, and all real
functions defined on $X$, respectively. In the first case, we additionally
suppose that $X$ is a compact set. Let $\mathcal{R}_{c}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r}\right) $ and $\mathcal{R}_{b}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r}\right) $ denote the subspaces of $\mathcal{R}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ comprising only sums with
continuous and bounded terms $g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}%
\right) $, $i=1,...,r$, respectively. The following questions naturally
arise: For which sets $X$,
$(1)$ $C(X)=\mathcal{R}_{c}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right)$?
$(2)$ $B(X)=\mathcal{R}_{b}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right)$?
$(3)$ $T(X)=\mathcal{R}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right)$?
The first two questions were answered, in a more
general setting, by Sternfeld \cite{132,131}.
The third question will be answered in the next section. Let us briefly
discuss some results of Sternfeld concerning ridge function representation. These results
have been mostly overlooked in the corresponding ridge function literature,
as they have to do with more general superpositions of functions and do not directly mention ridge functions.
Assume we are given directions $%
\mathbf{a}^{1},...,\mathbf{a}^{r}\in \mathbb{R}^{d}\backslash \{\mathbf{0}\}$
and a set $X\subseteq \mathbb{R}^{d}.$ Following Sternfeld, we say that a family $F=\{\mathbf{a}^{1},...,%
\mathbf{a}^{r}\}$ \textit{uniformly separates points} of $X$ if there exists
a number $0<\lambda \leq 1$ such that for each pair $\{\mathbf{x}%
_{j}\}_{j=1}^{m}$, $\{\mathbf{z}_{j}\}_{j=1}^{m}$ of disjoint finite
sequences in $X$, there exists some direction $\mathbf{a}^{k}\in F$ so that
if from the two sequences $\{\mathbf{a}^{k}\cdot \mathbf{x}_{j}\}_{j=1}^{m}$
and $\{\mathbf{a}^{k}\cdot \mathbf{z}_{j}\}_{j=1}^{m}$ we remove a maximal
number of pairs of points $\mathbf{a}^{k}\cdot \mathbf{x}_{j_{1}}$ and $%
\mathbf{a}^{k}\cdot \mathbf{z}_{j_{2}}$ with $\mathbf{a}^{k}\cdot \mathbf{x}%
_{j_{1}}=\mathbf{a}^{k}\cdot \mathbf{z}_{j_{2}},$ then there remain at
least $\lambda m$ points in each sequence (or, equivalently, at most $%
(1-\lambda )m$ pairs can be removed). Sternfeld \cite{132}, in particular,
proved that a family of directions $F=\{\mathbf{a}^{1},...,\mathbf{a}^{r}\}$
uniformly separates points of $X$ if and only if $\mathcal{R}_{b}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right) =B(X)$. In \cite{132}, he also
obtained a practically convenient sufficient condition for the equality $%
\mathcal{R}_{b}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) =B(X).$ To
describe his condition, define the set functions
\begin{equation*}
\tau _{i}(Z)=\{\mathbf{x}\in Z:~|p_{i}^{-1}(p_{i}(\mathbf{x}))\bigcap Z|\geq
2\},
\end{equation*}%
where $Z\subset X,~p_{i}(\mathbf{x})=\mathbf{a}^{i}\cdot \mathbf{x}$, \ $%
i=1,\ldots ,r,$ and $|Y|$ denotes the cardinality of a set $Y$.
Define $\tau (Z)$ to be $\bigcap_{i=1}^{r}\tau _{i}(Z)$ and define $\tau
^{2}(Z)=\tau (\tau (Z))$, $\tau ^{3}(Z)=\tau (\tau ^{2}(Z))$ and so on
inductively.
\bigskip
\textbf{Proposition 1.3 }(Sternfeld \cite{132}). \textit{If $\tau
^{n}(X)=\emptyset $ for some $n$, then $\mathcal{R}_{b}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r}\right) =B(X)$. If $X$ is a compact subset of $%
\mathbb{R}^{d}$, and $\tau ^{n}(X)=\emptyset $ for some $n$, then $\mathcal{R%
}_{c}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) =C(X)$.}
\bigskip
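For the two coordinate directions in the plane, the set functions $\tau _{i}$ are easy to compute on finite point sets. The sketch below checks Sternfeld's condition on two toy configurations chosen for illustration: three points in an L-shape, where $\tau ^{2}(X)=\emptyset $, and the four vertices of a square, which form a closed bolt.

```python
# Sternfeld's tau for the two coordinate directions in the plane:
# tau_i(Z) keeps the points of Z sharing their i-th coordinate with at
# least one other point of Z; tau(Z) is the intersection over i = 1, 2.
def tau(Z):
    out = set(Z)
    for i in (0, 1):
        out &= {p for p in Z if sum(1 for q in Z if q[i] == p[i]) >= 2}
    return frozenset(out)

def tau_vanishes(X, max_iter=10):
    # does tau^n(X) become empty within max_iter iterations?
    Z = frozenset(X)
    for _ in range(max_iter):
        Z = tau(Z)
        if not Z:
            return True
    return False

# Three points in an L-shape: tau^2(X) is empty.
assert tau_vanishes({(0, 0), (0, 1), (1, 0)})
# Four vertices of a square form a closed bolt: tau(X) = X forever.
assert not tau_vanishes({(0, 0), (0, 1), (1, 0), (1, 1)})
```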
If $r=2$, the sufficient condition \textquotedblleft $\tau ^{n}(X)=\emptyset $ for
some $n$\textquotedblright\ turns out to be also necessary. In this case,
the equality $\mathcal{R}_{b}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right)
=B(X)$ is equivalent to the equality $\mathcal{R}_{c}\left( \mathbf{a}^{1},%
\mathbf{a}^{2}\right) =C(X)$. In another work \cite{131}, Sternfeld
obtained a measure-theoretic necessary and sufficient condition for the
equality $\mathcal{R}_{c}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right)
=C(X)$. Let $p_{i}(\mathbf{x})=\mathbf{a}^{i}\cdot \mathbf{x}$, \ $%
i=1,\ldots ,r$, $X$ be a compact set in $\mathbb{R}^{d}$ and $M(X)$ be a
class of measures defined on some field of subsets of $X$.
Following Sternfeld, we say that a family $F=\{%
\mathbf{a}^{1},...,\mathbf{a}^{r}\}$ \textit{uniformly separates measures}
of the class $M(X)$ if there exists a number $0<\lambda \leq 1$ such that
for each measure $\mu $ in $M(X)$ the equality $\left\Vert \mu \circ
p_{k}^{-1}\right\Vert \geq \lambda \left\Vert \mu \right\Vert $ holds for
some direction $\mathbf{a}^{k}\in F$. Sternfeld \cite{134,131}, in
particular, proved that the equality $\mathcal{R}_{c}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r}\right) =C(X)$ holds if and only if the family of
directions $\{\mathbf{a}^{1},...,\mathbf{a}^{r}\}$ uniformly separates
measures of the class $C(X)^{\ast }$ (that is, the class of regular Borel
measures). In addition, he proved that $\mathcal{R}_{b}\left( \mathbf{a}^{1},...,%
\mathbf{a}^{r}\right) =B(X)$ if and only if the family of directions $\{%
\mathbf{a}^{1},...,\mathbf{a}^{r}\}$ uniformly separates measures of the
class $l_{1}(X)$ (that is, the class of finite measures defined on countable
subsets of $X$). Since $l_{1}(X)\subset C(X)^{\ast },$ the first equality $%
\mathcal{R}_{c}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) =C(X)$
implies the second equality $\mathcal{R}_{b}\left( \mathbf{a}^{1},...,%
\mathbf{a}^{r}\right) =B(X).$ The converse is not true (see \cite{131}).
We emphasize again that the above results of Sternfeld were obtained for
more general functions than linear combinations of ridge functions, namely
for functions of the form $\sum_{i=1}^{r}g_{i}(h_{i}(x))$, where the $h_{i}$
are arbitrarily fixed functions (bounded or continuous) defined on $X.$ Such functions will
be discussed in Chapter 4.
\bigskip
\subsection{$\mathcal{R}_{r}$ -- ridge functions with variable directions}
Obviously, the set $\mathcal{R}_{c}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{r}\right) $ is not dense in $C(\mathbb{R}^{d})$ in the topology of uniform
convergence on compact subsets of $\mathbb{R}^{d}.$ Density here does not
hold because the number of considered directions is finite. If we consider all
possible directions, then the set $\mathcal{R}=\mathrm{span}\{g(\mathbf{a}\cdot
\mathbf{x)}:~g\in C(\mathbb{R)},~\mathbf{a}\in \mathbb{R}^{d}\backslash \{%
\mathbf{0}\}\}$ will certainly be dense in the space $C(\mathbb{R}^{d})$ in
the above mentioned topology. To see this, it is enough to consider
only the functions $e^{\mathbf{a}\cdot \mathbf{x}}\in \mathcal{R}$, the
linear span of which is dense in $C(\mathbb{R}^{d})$ by the
Stone-Weierstrass theorem. In fact, for density it is not necessary to
include all directions. The following result characterizes the direction
sets for which density holds.
\bigskip
\textbf{Proposition 1.4 }(Vostrecov and Kreines \cite{142}, Lin and Pinkus
\cite{95}). \textit{For density of the set}
\begin{equation*}
\mathcal{R(A)}=\mathrm{span}\{g(\mathbf{a}\cdot \mathbf{x)}:~g\in C(\mathbb{R)},~%
\mathbf{a}\in \mathcal{A}\subset \mathbb{R}^{d}\}
\end{equation*}%
\textit{in $C(\mathbb{R}^{d})$ (in the topology of uniform convergence on
compact sets) it is necessary and sufficient that the only homogeneous
polynomial which vanishes identically on $\mathcal{A}$ is the zero
polynomial.}
\bigskip
Since in the definition of $\mathcal{R(A)}$ we vary over all univariate
functions $g,$ allowing one direction $\mathbf{a}$ is equivalent to allowing
all directions $k\mathbf{a}$ for every real $k$. Thus it is sufficient to
consider only the set $\mathcal{A}$ of directions normalized to the unit
sphere $S^{d-1}.$ For example, if $\mathcal{A}$ is a subset of the sphere $%
S^{d-1},$ which contains an interior point (interior point with respect to
the induced topology on $S^{d-1}$), then $\mathcal{R(A)}$ is dense in the
space $C(\mathbb{R}^{d}).$ The proof of Proposition 1.4 highlights the
important fact that the set $\mathcal{R(A)}$ is dense in $C(\mathbb{R}^{d})$
in the topology of uniform convergence on compact subsets if and only if $%
\mathcal{R(A)}$ contains all the polynomials (see \cite{95}).
Representability of polynomials by sums of ridge functions is a building
block for many results. In many works (see, e.g., \cite{119}), the following
fact is fundamental: Every multivariate polynomial $h(\mathbf{x}%
)=h(x_{1},...,x_{d})$ of degree $k $ can be represented in the form
\begin{equation*}
h(\mathbf{x})=\sum\limits_{i=1}^{l}p_{i}(\mathbf{a}^{i}\cdot \mathbf{x),}
\end{equation*}%
where $p_{i}$ is a univariate polynomial, $\mathbf{a}^{i}\in \mathbb{R}^{d}$%
, and $l=$ $\binom{d-1+k}{k}$.
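To make this concrete in the smallest nontrivial case, here is a quick numerical sanity check (our illustration, not from the text): the degree-$2$ monomial $xy$ is a sum of two ridge functions with directions $(1,1)$ and $(1,-1)$, since $xy=\frac{1}{4}(x+y)^{2}-\frac{1}{4}(x-y)^{2}$.

```python
import random

# Ridge representation of h(x, y) = x*y with directions (1, 1) and (1, -1):
#   x*y = g1(x + y) + g2(x - y), where g1(t) = t^2/4 and g2(t) = -t^2/4.
def g1(t):
    return t * t / 4.0

def g2(t):
    return -t * t / 4.0

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(x * y - (g1(x + y) + g2(x - y))) < 1e-9
print("identity x*y = g1(x+y) + g2(x-y) holds on all samples")
```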
For example, to represent a bivariate polynomial of degree $k$, one needs
$k+1$ univariate polynomials and $k+1$ directions (see \cite{97}). The proof
of this fact is arranged so that the directions $\mathbf{a}^{i}$,
$i=1,...,k+1$, are chosen once and for all polynomials of degree $k$. At one
of the seminars at the Technion -- Israel Institute of
Technology in 2007, A. Pinkus posed two problems:
1) Can every multivariate polynomial of degree $k$ be represented by fewer
than $l$ ridge functions?
2) How large is the set of polynomials represented by $l-1,$ $l-2,...$ ridge
functions?
Note that for bivariate polynomials the first problem has a positive
answer; that is, the number $l=k+1$ can be reduced. Indeed, for a bivariate
polynomial $P(x,y)$ of $k$-th degree, there exist many combinations of real
numbers $c_{0},...,c_{k}$ such that
\begin{equation*}
\sum_{i=0}^{k}c_{i}\frac{\partial ^{k}}{\partial x^{i}\partial y^{k-i}}%
P(x,y)=0.
\end{equation*}%
Moreover, the numbers $c_{i}$, $i=0,...,k$, can be chosen so that the
polynomial $\sum_{i=0}^{k}c_{i}t^{i}$ has distinct real zeros. Then it is
not difficult to verify that the differential operator
$\sum_{i=0}^{k}c_{i}\frac{\partial ^{k}}{\partial x^{i}\partial y^{k-i}}$ can
be written in the form
\begin{equation*}
\prod\limits_{i=1}^{k}\left( b_{i}\frac{\partial }{\partial x}-a_{i}\frac{%
\partial }{\partial y}\right) ,
\end{equation*}%
for some pairwise linearly independent vectors $(a_{i},b_{i})$, $i=1,...,k$.
Now from the above criterion (1.1) we obtain that the polynomial $P(x,y)$
can be represented as a sum of $k$ ridge functions. Note that the problem of
representation of a multivariate algebraic polynomial $P(\mathbf{x})$ in the
form $\sum_{i=0}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})$ with minimal $r$
was extensively studied in the monograph by Pinkus \cite{117}.
In connection with the second problem of Pinkus, V. Maiorov \cite{103}
studied certain geometric properties of the manifold $\mathcal{R}_{r}$.
Namely, he estimated the $\varepsilon $-entropy of the compact class formed
by the intersection of the class $\mathcal{R}_{r}$ with the unit ball in the
space of polynomials of degree at most $s$ on $\mathbb{R}^{d}$. Let $E$ be a
Banach space and, for $x\in E$ and $\delta >0,$ let $S(x,\delta )$ denote
the ball of radius $\delta $ centered at $x$. For any positive number
$\varepsilon $, the $\varepsilon $-covering number of a set $F$ in the space
$E$ is the quantity
\begin{equation*}
L_{\varepsilon }(F,E)=\min \left\{ N:~\exists x_{1},...,x_{N}\in F\text{
such that }F\subset \bigcup_{i=1}^{N}S(x_{i},\varepsilon )\text{ }\right\}.
\end{equation*}
The $\varepsilon $-entropy of $F$ is defined as the number $H_{\varepsilon
}(F,E)\overset{def}{=}\log _{2}L_{\varepsilon }(F,E)$. The notion of
$\varepsilon $-entropy was introduced by A.~N.~Kolmogorov (see \cite{84,85})
to classify compact metric sets according to their massivity.
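As a toy illustration of these notions (our example, not from \cite{103}): take $E=\mathbb{R}$ and $F=[0,1]$. A ball of radius $\varepsilon $ is an interval of length $2\varepsilon $, so $L_{\varepsilon }(F,E)=\lceil 1/(2\varepsilon )\rceil $ and $H_{\varepsilon }(F,E)$ grows like $\log _{2}(1/\varepsilon )$:

```python
import math

def covering_number(eps: float) -> int:
    # Minimal number of closed balls of radius eps (intervals of
    # length 2*eps) needed to cover F = [0, 1] inside E = R.
    return math.ceil(1.0 / (2.0 * eps))

def entropy(eps: float) -> float:
    # The epsilon-entropy H_eps(F, E) = log2 of the covering number.
    return math.log2(covering_number(eps))

for eps in (0.5, 0.1, 0.01):
    print(f"eps={eps}: L_eps={covering_number(eps)}, H_eps={entropy(eps):.3f}")
```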
In order to formulate Maiorov's result, let $\mathcal{P}_{s}^{d}$ be the
space of all polynomials of degree at most $s$ on $\mathbb{R}^{d}$, $%
L_{q}=L_{q}(I)$, $1\leq q\leq \infty $, be the space of $q$-integrable
functions on the unit cube $I=[0,1]^{d}$ with the norm $\left\Vert
f\right\Vert _{q}=\left( \int_{I}\left\vert f(x)\right\vert ^{q}dx\right)
^{1/q}$, $BL_{q}$ be the unit ball in the space $L_{q},$ and $B_{q}\mathcal{P%
}_{s}^{d}=$ $BL_{q}\cap $ $\mathcal{P}_{s}^{d}$ be the unit ball in the
space $\mathcal{P}_{s}^{d}$ equipped with the $L_{q}$ metric.
\bigskip
\textbf{Proposition 1.5 }(Maiorov \cite{103}). \textit{Let $r,s\in \mathbb{N}
$, $1\leq q\leq \infty $, $0 < \varepsilon < 1$. The $\varepsilon $-entropy
of the class $B_{q}\mathcal{P}_{s}^{d}\cap \mathcal{R}_{r}$ in the space $%
L_{q}$ satisfies the inequalities}
1)
\begin{equation*}
c_{1}rs\leq \frac{H_{\varepsilon }(B_{q}\mathcal{P}_{s}^{d}\cap \mathcal{R}%
_{r},L_{q})}{\log _{2}\frac{1}{\varepsilon }}\leq c_{2}rs\log _{2}\frac{%
2es^{d-1}}{r},
\end{equation*}
\textit{for $r\leq s^{d-1}.$}
2)
\begin{equation*}
c_{1}^{^{\prime }}s^{d}\leq \frac{H_{\varepsilon }(B_{q}\mathcal{P}%
_{s}^{d}\cap \mathcal{R}_{r},L_{q})}{\log _{2}\frac{1}{\varepsilon }}\leq
c_{2}^{^{\prime }}s^{d},
\end{equation*}
\textit{for $r>s^{d-1}.$ In these inequalities $c_{1},c_{2},c_{1}^{^{\prime
}},c_{2}^{^{\prime }}$ are constants depending only on $d$.}
\bigskip
Let us consider $\mathcal{R}_{r}$ as a subspace of some normed linear space $%
X$ endowed with the norm $\left\Vert \cdot \right\Vert _{X}.$ The error of
approximation of a given function $f\in X$ by functions $g\in \mathcal{R}%
_{r} $ is defined as follows
\begin{equation*}
E(f,\mathcal{R}_{r},X)\overset{def}{=}\underset{g\in \mathcal{R}_{r}}{\inf }%
\left\Vert f-g\right\Vert _{X}.
\end{equation*}
Let $B^{d}$ denote the unit ball in the space $\mathbb{R}^{d}.$ Besides, let
$\mathbb{Z}_{+}^{d}$ denote the lattice of nonnegative multi-integers in $%
\mathbb{R}^{d}.$ For $k=(k_{1},...,k_{d})\in \mathbb{Z}_{+}^{d},$ set $%
\left\vert k\right\vert =k_{1}+\cdots +k_{d}$, $\mathbf{x}^{\mathbf{k}}=x_{1}^{k_{1}}\cdots x_{d}^{k_{d}}$ and
\begin{equation*}
D^{\mathbf{k}}=\frac{\partial ^{\left\vert k\right\vert }}{\partial
x_{1}^{k_{1}}\cdots \partial x_{d}^{k_{d}}}.
\end{equation*}
The Sobolev space $W_{p}^{m}(B^{d})$ is the space of functions defined on $%
B^{d}$ with the norm
\begin{equation*}
\left\Vert f\right\Vert _{m,p}=\left\{
\begin{array}{c}
\left( \sum_{0\leq \left\vert \mathbf{k}\right\vert \leq m}\left\Vert D^{%
\mathbf{k}}f\right\Vert _{p}^{p}\right) ^{1/p},\text{ if }1\leq p<\infty \\
\max_{0\leq \left\vert \mathbf{k}\right\vert \leq m}\left\Vert D^{\mathbf{k}%
}f\right\Vert _{\infty },\text{ if }p=\infty .%
\end{array}%
\right.
\end{equation*}
Here
\begin{equation*}
\left\Vert h(\mathbf{x})\right\Vert _{p}=\left\{
\begin{array}{c}
\left( \int_{B^{d}}\left\vert h(\mathbf{x})\right\vert ^{p}d\mathbf{x}%
\right) ^{1/p},\text{ if }1\leq p<\infty \\
ess\sup_{\mathbf{x}\in B^{d}}\left\vert h(\mathbf{x})\right\vert ,\text{ if }%
p=\infty .%
\end{array}%
\right.
\end{equation*}
Let $S_{p}^{m}(B^{d})$ be the unit ball in $W_{p}^{m}(B^{d})$:
\begin{equation*}
S_{p}^{m}(B^{d})=\{f\in W_{p}^{m}(B^{d}):\left\Vert f\right\Vert _{m,p}\leq
1~\}.
\end{equation*}
In 1999, Maiorov \cite{102} proved the following result.
\bigskip
\textbf{Proposition 1.6 }(Maiorov \cite{102}). \textit{Assume $m\geq 1$ and $%
d\geq 2$. Then for each $r\in \mathbb{N}$ there exists a function $f\in
S_{2}^{m}(B^{d})$ such that}
\begin{equation*}
E(f,\mathcal{R}_{r},L_{2})\geq Cr^{-m/(d-1)},\eqno(1.2)
\end{equation*}
\textit{where $C$ is a constant independent of $f$ and $r.$}
\bigskip
For $d=2,$ this inequality was proved by Oskolkov \cite{114}. In \cite{102},
Maiorov also proved that for each function $f\in S_{2}^{m}(B^{d})$
\begin{equation*}
E(f,\mathcal{R}_{r},L_{2})\leq Cr^{-m/(d-1)}.\eqno(1.3)
\end{equation*}
Thus he established the following order for the error of approximation to
functions in $S_{2}^{m}(B^{d})$ from the class $\mathcal{R}_{r}$:
\begin{equation*}
E(S_{2}^{m}(B^{d}),\mathcal{R}_{r},L_{2})\overset{def}{=}\sup_{f\in
S_{2}^{m}(B^{d})}E(f,\mathcal{R}_{r},L_{2})\asymp r^{-m/(d-1)}.
\end{equation*}
Pinkus \cite{119} showed that the upper bound (1.3) remains valid in the
$L_{p}$ metric ($1\leq p\leq \infty $). In other words, for every function $f\in
S_{p}^{m}(B^{d})$
\begin{equation*}
E(f,\mathcal{R}_{r},L_{p})\leq Cr^{-m/(d-1)}.
\end{equation*}
These inequalities were successfully applied to some problems of
approximation of multivariate functions by neural networks with a single
hidden layer. Recall that such networks are given by the formula $%
\sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i}).$ By $%
\mathcal{M}_{r}(\sigma )$ let us denote the set of all single hidden layer
networks with the activation function $\sigma $. That is,
\begin{equation*}
\mathcal{M}_{r}(\sigma )=\left\{ \sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}%
\mathbf{\cdot x}-\theta _{i}):~c_{i},\theta _{i}\in \mathbb{R},~\mathbf{w}%
^{i}\in \mathbb{R}^{d}\right\} .
\end{equation*}
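In code, an element of $\mathcal{M}_{r}(\sigma )$ is evaluated as follows (a minimal sketch; the weights and the logistic activation are illustrative choices of ours, not taken from the text). Note that each summand depends on $\mathbf{x}$ only through the scalar $\mathbf{w}^{i}\mathbf{\cdot x}$, so each summand is a ridge function with direction $\mathbf{w}^{i}$:

```python
import math

def logistic(t: float) -> float:
    # An illustrative sigmoidal activation function.
    return 1.0 / (1.0 + math.exp(-t))

def network(x, weights, thetas, coeffs, sigma=logistic):
    # Evaluate sum_i c_i * sigma(w^i . x - theta_i); each summand
    # depends on x only through the inner product w^i . x.
    out = 0.0
    for w, theta, c in zip(weights, thetas, coeffs):
        dot = sum(wi * xi for wi, xi in zip(w, x))
        out += c * sigma(dot - theta)
    return out

# Toy instance: r = 2 hidden units, d = 3 variables.
val = network((1.0, 2.0, -1.0),
              weights=[(1.0, 0.0, 1.0), (0.0, 1.0, 0.0)],
              thetas=[0.0, 1.0],
              coeffs=[2.0, -1.0])
# Here w^1 . x = 0 and w^2 . x = 2, so val = 2*sigma(0) - sigma(1) = 1 - sigma(1).
assert abs(val - (1.0 - logistic(1.0))) < 1e-12
print(val)
```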
The above results on ridge approximation from $\mathcal{R}_{r}$ enable us to
estimate the rate with which the approximation error $E(f,\mathcal{M}%
_{r}(\sigma ),L_{2})$ tends to zero. First note that $\mathcal{M}_{r}(\sigma
)\subset \mathcal{R}_{r},$ since each function of the form $\sigma (\mathbf{%
w\cdot x}-\theta )$ is a ridge function with the direction $\mathbf{w}$.
Thus the lower bound (1.2) holds also for the set $\mathcal{M}_{r}(\sigma )$%
: there exists a function $f\in S_{2}^{m}(B^{d})$ for which
\begin{equation*}
E(f,\mathcal{M}_{r}(\sigma ),L_{2})\geq Cr^{-m/(d-1)}.
\end{equation*}
It remains to see whether the upper bound (1.3) is valid for $\mathcal{M}%
_{r}(\sigma )$. Clearly, it cannot be valid if $\sigma $ is an arbitrary
continuous function. The question is whether there exists a
function $\sigma ^{\ast }\in C(\mathbb{R})$ for which
\begin{equation*}
E(f,\mathcal{M}_{r}(\sigma ^{\ast }),L_{2})\leq Cr^{-m/(d-1)}.
\end{equation*}
This question is answered affirmatively by the following result.
\bigskip
\textbf{Proposition 1.7 }(Maiorov, Pinkus \cite{99}). \textit{There exists a
function $\sigma ^{\ast }\in C(\mathbb{R})$ with the following properties}
1) \textit{$\sigma ^{\ast }$ is infinitely differentiable and strictly
increasing;}
2) \textit{$\lim_{t\rightarrow \infty }\sigma^{\ast } (t)=1$ and $\lim_{t\rightarrow
-\infty }\sigma^{\ast } (t)=0;$}
3) \textit{for every $g\in \mathcal{R}_{r}$ and $\varepsilon >0$ there exist
$c _{i},\theta _{i}\in \mathbb{R}$ and $\mathbf{w}^{i}\in \mathbb{R}^{d}$
satisfying}
\begin{equation*}
\sup_{\mathbf{x}\in B^{d}}\left\vert g(\mathbf{x})-\sum_{i=1}^{r+d+1}c_{i}%
\sigma^{\ast } (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i})\right\vert <\varepsilon .
\end{equation*}
\bigskip
Temlyakov \cite{138} considered approximation from a certain subclass
of $\mathcal{R}_{r}$ in the $L_{2}$ metric. More precisely, he considered the
approximation of a function $f\in L_{2}(D),$ where $D$ is the unit disk in $%
\mathbb{R}^{2}$, by functions $\sum\limits_{i=1}^{r}g_{i}(\mathbf{a}%
^{i}\cdot \mathbf{x)}\in \mathcal{R}_{r}\cap L_{2}(D)$, which satisfy the
additional condition $\left\Vert g_{i}(\mathbf{a}^{i}\cdot \mathbf{x)}%
\right\Vert _{2}\leq B\left\Vert f\right\Vert _{2},$ $i=1,...,r$ ($B$ is a
given positive number). Let $\sigma _{r}^{B}(f)$ be the error of this
approximation. For this approximation error, the author of \cite{138}
obtained upper and lower bounds. Let, for $\alpha >0,$ $H^{\alpha }(D)$
denote the set of all functions $f\in L_{2}(D)$, which can be represented in
the form
\begin{equation*}
f=\sum_{n=1}^{\infty }P_{n},
\end{equation*}%
where $P_{n}$ are bivariate algebraic polynomials of total degree $2^{n}-1$
satisfying the inequalities
\begin{equation*}
\left\Vert P_{n}\right\Vert _{2}\leq 2^{-\alpha n},\text{ }n=1,2,...
\end{equation*}
\bigskip
\textbf{Proposition 1.8 }(Temlyakov \cite{138}). \textit{1) For every $f\in
H^{\alpha }(D) $, we have}
\begin{equation*}
\sigma _{r}^{1}(f)\leq C(\alpha )r^{-\alpha }.
\end{equation*}
\textit{2) For any given $\alpha >0$, $B>0$, $r>1$, there exists a function $%
f\in H^{\alpha }(D)$ such that}
\begin{equation*}
\sigma _{r}^{B}(f)\geq C(\alpha ,B)(r\ln r)^{-\alpha }.
\end{equation*}
\bigskip
Petrushev \cite{116} proved the following interesting result: Let $X_{k}$ be
a $k$-dimensional linear space of univariate functions in $L_{2}[-1,1],$
$k=1,2,...$. Besides, let $B^{d}$ and $S^{d-1}$ denote, respectively, the
unit ball and unit sphere in $\mathbb{R}^{d}$. If $X_{k}$ provides
order of approximation $O(k^{-m})$ for univariate functions with $m$
derivatives in$\ L_{2}[-1,1]$ and $\Omega _{k}$ are appropriately chosen
finite sets of directions distributed on $S^{d-1}$, then the space $%
Y_{k}=span\{p_{k}(\mathbf{a}\cdot \mathbf{x}):~p_{k}\in X_{k},~\mathbf{a}\in
\Omega _{k}\}$ will provide approximation of order $O(k^{-m-d/2+1/2})$ for
every function $f\in L_{2}(B^{d})$ with smoothness of order $m+d/2-1/2$.
Thus, Petrushev showed that the above form of ridge approximation has the
same efficiency of approximation as the traditional multivariate polynomial
approximation.
Many other results concerning the approximation of multivariate functions by
functions from the set $\mathcal{R}_{r}$ and their applications in neural
network theory may be found in \cite{43,94,99,119,123}.
\bigskip
\section{Representation of multivariate functions by linear combinations of
ridge functions}
In this section we develop a technique for verifying if a multivariate function can be expressed as a sum of ridge functions with given directions. We also obtain a necessary and sufficient condition for the representation of all multivariate functions on a subset $X$ of $\mathbb{R}^{d}$ by sums of ridge functions with fixed directions.
\subsection{Two representation problems}
Let $X$ be a subset of ${{\mathbb{R}}}^{d}$ and $\{\mathbf{a}%
^{i}\}_{i=1}^{r} $ be arbitrarily fixed nonzero directions (vectors) in ${{%
\mathbb{R}}}^{d}$. Consider the following set of linear combinations of
ridge functions.
\begin{equation*}
\mathcal{R}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)=\left\{
\sum\limits_{i=1}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),~\mathbf{x}\in
X,~g_{i}:\mathbb{R}\rightarrow \mathbb{R},~i=1,...,r\right\}
\end{equation*}%
In this section, we are going to deal with the following two problems:
\bigskip
\textbf{Problem 1.} \textit{What conditions imposed on $f:X\rightarrow
\mathbb{R}$ are necessary and sufficient for the inclusion $f\in \mathcal{R}(%
\mathbf{a}^{1},...,\mathbf{a}^{r};X)$?}
\bigskip
\textbf{Problem 2.} \textit{What conditions imposed on $X$ are necessary and
sufficient that every function defined on $X$ belongs to the space $\mathcal{%
R}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)$?}
\bigskip
As noticed in Section 1.1, Problem 1 was considered for continuous functions
in \cite{95} and a theoretical result was obtained. It was also noticed
there that a similar problem of representation of $f$ in the form $%
\sum_{i=1}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})+P(\mathbf{x})$ with
polynomial $P(\mathbf{x})$ was solved for continuously differentiable
functions in \cite{25}. Problem 2 was solved in \cite{10} for finite subsets
$X$ of ${{\mathbb{R}}}^{d}$ and in \cite{81} for the case when $r=d$ and $%
\mathbf{a}^{i}$ are the coordinate directions.
Here we consider both Problem 1 and Problem 2 without imposing any
conditions on $X$, $f$ and $r$. In fact, we solve these problems for a set
of functions more general than $\mathcal{R}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)$.
Namely, we solve them for the set
\begin{equation*}
\mathcal{B}(X)=\mathcal{B}(h_{1},...,h_{r};X)=\left\{
\sum\limits_{i=1}^{r}g_{i}(h_{i}(x)),~x\in X,~g_{i}:\mathbb{R}\rightarrow
\mathbb{R},~i=1,...,r\right\} ,
\end{equation*}%
where $h_{i}:X\rightarrow {{\mathbb{R}}},~i=1,...,r,$ are arbitrarily fixed
functions. In particular, the functions $h_{i},~i=1,...,r$, may be equal to
scalar products of the variable $\mathbf{x}$ with some vectors $\mathbf{a}%
^{i}$, $i=1,...,r$. Only in this special case, we have $\mathcal{B}%
(h_{1},...,h_{r};X)=\mathcal{R}(\mathbf{a}^{1},...,\mathbf{a}^{r};X).$
\bigskip
\subsection{Cycles}
The main idea leading to solutions of the above problems is the use of new
objects called \textit{cycles} with respect to $r$ functions $%
h_{i}:X\rightarrow \mathbb{R},~i=1,...,r$ (and in particular, with respect
to $r$ directions $\mathbf{a}^{1},...,\mathbf{a}^{r}$). In the sequel, by $%
\delta _{A}$ we will denote the characteristic function of a set $\ A\subset
\mathbb{R}.$ That is,
\begin{equation*}
\delta _{A}(y)=\left\{
\begin{array}{c}
1,~if~y\in A \\
0,~if~y\notin A.%
\end{array}%
\right.
\end{equation*}
\bigskip
\textbf{Definition 1.1.} \textit{Given a subset $X\subset \mathbb{R}^{d}$\
and functions $h_{i}:X\rightarrow \mathbb{R},~i=1,...,r$. A set of points $%
\{x_{1},...,x_{n}\}\subset X$ is called a cycle with respect to the
functions $h_{1},...,h_{r}$ (or, concisely, a cycle if there is no
confusion), if there exists a vector $\lambda =(\lambda _{1},...,\lambda
_{n})$ with the nonzero real coordinates $\lambda _{i},~i=1,...,n,$ such that%
}
\begin{equation*}
\sum_{j=1}^{n}\lambda _{j}\delta _{h_{i}(x_{j})}=0,~i=1,...,r.\eqno(1.4)
\end{equation*}
\bigskip
If $h_{i}=\mathbf{a}^{i}\cdot \mathbf{x}$, $i=1,...,r$, where $\mathbf{a}%
^{1},...,\mathbf{a}^{r}$ are some directions in $\mathbb{R}^{d}$, a cycle,
with respect to the functions $h_{1},...,h_{r}$, is called a cycle with
respect to the directions\textit{\ }$\mathbf{a}^{1},...,\mathbf{a}^{r}.$
For $i=1,...,r,$ let the set $\{h_{i}(x_{j}),~j=1,...,n\}$ have $k_{i}$
distinct values. Then it is not difficult to see that Eq. (1.4) amounts to
a system of $\sum_{i=1}^{r}k_{i}$ homogeneous linear equations in the
unknowns $\lambda _{1},...,\lambda _{n}.$ If this system has a solution with
all components nonzero, then the given set $\{x_{1},...,x_{n}\}$ is a cycle.
In that case, the system also has a solution $m=(m_{1},...,m_{n})$ with
nonzero integer components $m_{i},~i=1,...,n.$ Thus, in Definition 1.1, the
vector $\lambda =(\lambda _{1},...,\lambda _{n})$ can be replaced with a
vector $m=(m_{1},...,m_{n})$ with $m_{i}\in \mathbb{Z}\backslash \{0\}.$
For example, the set $l=\{(0,0,0),~(0,0,1),~(0,1,0),~(1,0,0),~(1,1,1)\}$ is
a cycle in $\mathbb{R}^{3}$ with respect to the functions
$h_{i}(z_{1},z_{2},z_{3})=z_{i},~i=1,2,3.$ The vector $\lambda $ in
Definition 1.1 can be taken as $(2,-1,-1,-1,1).$
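Such checks are mechanical: grouping the coefficients $\lambda _{j}$ by the values $h_{i}(x_{j})$, each group must sum to zero, which forces the signs to alternate as in $(2,-1,-1,-1,1)$. The following sketch (our code, not from the text) verifies condition (1.4) for $l$:

```python
from collections import defaultdict

def is_cycle(points, lam, functions):
    # Eq. (1.4): for each h_i, the lambda_j of points sharing a common
    # value h_i(x_j) must sum to zero.
    for h in functions:
        sums = defaultdict(float)
        for x, l in zip(points, lam):
            sums[h(x)] += l
        if any(s != 0 for s in sums.values()):
            return False
    return True

l = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
coords = [lambda z: z[0], lambda z: z[1], lambda z: z[2]]
print(is_cycle(l, [2, -1, -1, -1, 1], coords))  # an associated vector
print(is_cycle(l, [1, 1, 1, 1, 1], coords))     # not an associated vector
```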
In the case $r=2,$ the picture of cycles becomes clearer. Let, for example,
$h_{1}$ and $h_{2}$ be the coordinate functions on $\mathbb{R}^{2}.$ In this
case, a cycle is the union of some sets $A_{k}$ with the following property:
each $A_{k}$ consists of the vertices of a closed broken line with sides
parallel to the coordinate axes. These objects (the sets $A_{k}$) have been
exploited in practically all works devoted to the approximation of bivariate
functions by univariate functions, although under various names (see ``bolt
of lightning" in Section 1.3). If the functions $h_{1}$ and $h_{2}$ are
arbitrary, the sets $A_{k}$ can be described as the trace of a point
traveling alternately in the level sets of $h_{1}$ and $h_{2}$ and then
returning to its starting position. It should be remarked that in the case
$r>2,$ cycles do not admit such a simple geometric description. We refer the
reader to Braess and Pinkus \cite{10} for the description of cycles when $r=3
$ and $h_{i}(\mathbf{x})=\mathbf{a}^{i}\cdot \mathbf{x},$ $\mathbf{x}\in
\mathbb{R}^{2},~\mathbf{a}^{i}\in \mathbb{R}^{2}\backslash \{\mathbf{0}%
\},~i=1,2,3.$
Let $T(X)$ denote the set of all functions on $X.$ With each pair $%
\left\langle p,\lambda \right\rangle ,$ where $p=\{x_{1},...,x_{n}\}$ is a
cycle in $X$ and $\lambda =(\lambda _{1},...,\lambda _{n})$ is a vector
known from Definition 1.1, we associate the functional
\begin{equation*}
G_{p,\lambda }:T(X)\rightarrow \mathbb{R},~~G_{p,\lambda
}(f)=\sum_{j=1}^{n}\lambda _{j}f(x_{j}).
\end{equation*}%
In the following, such pairs $\left\langle p,\lambda \right\rangle $ will be
called \textit{cycle-vector pairs} of $X.$ It is clear that the functional $%
G_{p,\lambda }$ is linear and $G_{p,\lambda }(g)=0$ for all functions $g\in
\mathcal{B}(h_{1},...,h_{r};X).$
\bigskip
\textbf{Lemma 1.1.} \textit{Let $X$ have cycles and $h_{i}(X)\cap
h_{j}(X)=\varnothing ,$ for all $i,j\in \{1,...,r\},~i\neq j.$ Then a
function $f:X\rightarrow \mathbb{R}$ belongs to the set $\mathcal{B}%
(h_{1},...,h_{r};X)$ if and only if $G_{p,\lambda }(f)=0$ for any
cycle-vector pair $\left\langle p,\lambda \right\rangle $ of $X.$}
\bigskip
\begin{proof} The necessity is obvious, since the functional $G_{p,\lambda }$
annihilates all members of
$\mathcal{B}(h_{1},...,h_{r};X)$. Let us prove the sufficiency. Introduce
the notation
\begin{eqnarray*}
Y_{i} &=&h_{i}(X),~i=1,...,r; \\
\Omega &=&Y_{1}\cup ...\cup Y_{r}.
\end{eqnarray*}
Consider the following set.
\begin{equation*}
\mathcal{L}=\{Y=\{y_{1},...,y_{r}\}:\text{there exists }x\in X\text{ such
that }h_{i}(x)=y_{i},~i=1,...,r\}\eqno(1.5)
\end{equation*}
Note that $\mathcal{L}$ is not a subset of $\Omega $; it is a collection
of certain subsets of $\Omega .$ Each element of $\mathcal{L}$ is a set
$Y=\{y_{1},...,y_{r}\}\subset \Omega $ with the property that there exists $%
x\in X$ such that $h_{i}(x)=y_{i},~i=1,...,r.$
In what follows, all the points $x$ associated with $Y$ by (1.5) will be
called $(\ast )$-points of $Y.$ It is clear that the number of such points
depends on $Y$ as well as on the functions $h_{1},...,h_{r}$, and may be
greater than 1. But note that if any two points $x_{1}$ and $x_{2}$ are $%
(\ast )$-points of $Y$, then the set $\{x_{1}$, $x_{2}\}$ necessarily forms
a cycle with the associated vector $\lambda _{0}=(1;-1).$ Indeed, if $x_{1}$
and $x_{2}$ are $(\ast )$-points of $Y$, then $h_{i}(x_{1})=h_{i}(x_{2})$, $%
i=1,...,r,$ whence
\begin{equation*}
1\cdot \delta _{h_{i}(x_{1})}+(-1)\cdot \delta _{h_{i}(x_{2})}\equiv
0,~i=1,...,r.
\end{equation*}
The last identity means that the set $p_{0}=\{x_{1},$ $x_{2}\}$ forms a
cycle and $\lambda _{0}=(1;-1)$ is an associated vector. Then, by the
sufficiency assumption, $G_{p_{0},\lambda _{0}}(f)=0$, which yields
that $f(x_{1})=f(x_{2})$.
Now let $Y^{\ast }$ be the set of all $(\ast )$-points of $Y.$ Since, as
shown above, $f(Y^{\ast })$ is a single number, we can define the
function
\begin{equation*}
t:\mathcal{L}\rightarrow \mathbb{R},~t(Y)=f(Y^{\ast }).
\end{equation*}%
Or, equivalently, $t(Y)=f(x),$ where $x$ is an arbitrary $(\ast )$-point of $%
Y$.
Consider now a class $\mathcal{S}$ of functions of the form $%
\sum_{j=1}^{k}r_{j}\delta _{D_{j}},$ where $k$ is a positive integer, $r_{j}$
are real numbers and $D_{j}$ are elements of $\mathcal{L},~j=1,...,k.$ We
fix neither the numbers $\ k,~r_{j},$ nor the sets $D_{j}.$ Clearly, $%
\mathcal{S\ }$is a linear space. Over $\mathcal{S}$, we define the functional
\begin{equation*}
F:\mathcal{S}\rightarrow \mathbb{R},~F\left( \sum_{j=1}^{k}r_{j}\delta
_{D_{j}}\right) =\sum_{j=1}^{k}r_{j}t(D_{j}).
\end{equation*}
First of all, we must show that this functional is well defined. That is,
the equality
\begin{equation*}
\sum_{j=1}^{k_{1}}r_{j}^{\prime }\delta _{D_{j}^{\prime
}}=\sum_{j=1}^{k_{2}}r_{j}^{\prime \prime }\delta _{D_{j}^{\prime \prime }}
\end{equation*}%
always implies the equality
\begin{equation*}
\sum_{j=1}^{k_{1}}r_{j}^{\prime }t(D_{j}^{\prime
})=\sum_{j=1}^{k_{2}}r_{j}^{\prime \prime }t(D_{j}^{\prime \prime }).
\end{equation*}%
In fact, this is equivalent to the implication
\begin{equation*}
\sum_{j=1}^{k}r_{j}\delta _{D_{j}}=0\Longrightarrow
\sum_{j=1}^{k}r_{j}t(D_{j})=0,~\text{for all }k\in \mathbb{N}\text{, }%
r_{j}\in \mathbb{R}\text{, }D_{j}\in \mathcal{L}\text{.}\eqno(1.6)
\end{equation*}
Suppose that the left-hand side of the implication (1.6) is satisfied. Each
set $D_{j}$ consists of $r$ real numbers $y_{1}^{j},...,y_{r}^{j}$, $%
j=1,...,k.$ By the hypothesis of the lemma, all these numbers are different.
Therefore,
\begin{equation*}
\delta _{D_{j}}=\sum_{i=1}^{r}\delta _{y_{i}^{j}},~j=1,...,k.\eqno(1.7)
\end{equation*}%
Eq. (1.7) together with the left-hand side of (1.6) gives
\begin{equation*}
\sum_{i=1}^{r}\sum_{j=1}^{k}r_{j}\delta _{y_{i}^{j}}=0.\eqno(1.8)
\end{equation*}%
Since the sets $\{y_{i}^{1},y_{i}^{2},...,y_{i}^{k}\}$, $i=1,...,r,$ are
pairwise disjoint, we obtain from (1.8) that
\begin{equation*}
\sum_{j=1}^{k}r_{j}\delta _{y_{i}^{j}}=0,\text{ }i=1,...,r.\eqno(1.9)
\end{equation*}
Let now $x_{1},...,x_{k}$ be some $(\ast )$-points of the sets $%
D_{1},...,D_{k}$ respectively. Since by (1.5), $y_{i}^{j}=h_{i}(x_{j})$, for
$i=1,...,r$ and $j=1,...,k,$ it follows from (1.9) that the set $%
\{x_{1},...,x_{k}\}$ is a cycle. Then, by the sufficiency assumption, $%
\sum_{j=1}^{k}r_{j}f(x_{j})=0.$ Hence $\sum_{j=1}^{k}r_{j}t(D_{j})=0.$ We
have proved the implication (1.6) and hence the functional $F$ is well
defined. Note that the functional $F$ is linear (this can be easily seen
from its definition).
Consider now the following space:
\begin{equation*}
\mathcal{S}^{\prime }=\left\{ \sum_{j=1}^{k}r_{j}\delta _{\omega
_{j}}\right\} ,
\end{equation*}%
where $k\in \mathbb{N}$, $r_{j}\in \mathbb{R}$, $\omega _{j}\subset \Omega .$
As above, we do not fix the parameters $k$, $r_{j}$ and $\omega _{j}.$
Clearly, the space $\mathcal{S}^{\prime }$ is larger than $\mathcal{S}$. Let
us prove that the functional $F$ can be linearly extended to the space $%
\mathcal{S}^{\prime }$. So, we must prove that there exists a linear
functional $F^{\prime }:\mathcal{S}^{\prime }\rightarrow \mathbb{R}$ such
that $F^{\prime }(x)=F(x)$, for all $x\in \mathcal{S}$. Let $H$ denote the
set of all linear extensions of $F$ to subspaces of $\mathcal{S}^{\prime }$
containing $\mathcal{S}$. The set $H$ is not empty, since it contains the
functional $F.$ For each functional $v\in H$, let $dom(v)$ denote the domain
of $v$. Consider the following partial order in $H$: $v_{1}\leq v_{2}$, if $%
v_{2}$ is a linear extension of $v_{1}$ from the space $dom(v_{1})$ to the
space $dom(v_{2}).$ Let now $P$ be any chain (linearly ordered subset) in $H$%
. Consider the following functional $u$ defined on the union of domains of
all functionals $p\in P$:
\begin{equation*}
u:\bigcup\limits_{p\in P}dom(p)\rightarrow \mathbb{R},~u(x)=p(x),\text{ if }%
x\in dom(p)
\end{equation*}
Obviously, this functional is well defined and linear. Besides, the
functional $u$ provides an upper bound for $P.$ We see that the arbitrarily
chosen chain $P$ has an upper bound. Then by Zorn's lemma, there is a
maximal element $F^{\prime }\in H$. We claim that the functional $F^{\prime
} $ must be defined on the whole space $\mathcal{S}^{\prime }$. Indeed, if
$F^{\prime }$ is defined on a proper subspace $\mathcal{D}\subset \mathcal{S}^{\prime }$,
then it can be linearly extended to a space larger than $\mathcal{D}$ in the
following way: take any point $x\in \mathcal{S}^{\prime }\backslash \mathcal{D}$
and consider the linear space $\mathcal{D}^{\prime }=\{\mathcal{D}+\alpha x\}$,
where $\alpha $ runs through all real numbers.
For an arbitrary point $y+\alpha x\in \mathcal{D}^{\prime }$, set $%
F^{^{\prime \prime }}(y+\alpha x)=F^{\prime }(y)+\alpha b$, where $b$ is any
real number considered as the value of $F^{^{\prime \prime }}$ at $x$. Thus,
we constructed a linear functional $F^{^{\prime \prime }}\in H$ satisfying $%
F^{\prime }\leq F^{^{\prime \prime }}.$ This contradicts the maximality
of $F^{\prime }.$ This means that the functional $F^{\prime }$ is defined on
the whole $\mathcal{S}^{\prime }$ and $F\leq F^{\prime }$ ($F^{\prime }$ is
a linear extension of $F$).
Define the following functions by means of the functional $F^{\prime }$:
\begin{equation*}
g_{i}:Y_{i}\rightarrow \mathbb{R},\text{ }g_{i}(y_{i})\overset{def}{=}%
F^{\prime }(\delta _{y_{i}}),\text{ }i=1,...,r.
\end{equation*}%
Let $x$ be an arbitrary point in $X.$ Obviously, $x$ is a $(\ast )$-point of
some set $Y=\{y_{1},...,y_{r}\}\in \mathcal{L}.$ Thus,
\begin{eqnarray*}
f(x) &=&t(Y)=F(\delta _{Y})=F\left( \sum_{i=1}^{r}\delta _{y_{i}}\right)
=F^{\prime }\left( \sum_{i=1}^{r}\delta _{y_{i}}\right) = \\
\sum_{i=1}^{r}F^{\prime }(\delta _{y_{i}})
&=&\sum_{i=1}^{r}g_{i}(y_{i})=\sum_{i=1}^{r}g_{i}(h_{i}(x)).
\end{eqnarray*}%
\end{proof}
\bigskip
\subsection{Minimal cycles and the main results}
\textbf{Definition 1.2.} \textit{A cycle $p=\{x_{1},...,x_{n}\}$ is said to
be minimal if $p$ does not contain any cycle as its proper subset.}
\bigskip
For example, the set $l=\{(0,0,0),~(0,0,1),~(0,1,0),~(1,0,0),~(1,1,1)\}$
considered above is a minimal cycle with respect to the functions $%
h_{i}(z_{1},z_{2},z_{3})=z_{i},~i=1,2,3.$ Adding the point $(0,1,1)$ to $l$,
we obtain a cycle that is not minimal. The vector $\lambda $ associated with
$l\cup \{(0,1,1)\}$ can be taken as $(3,-1,-1,-2,2,-1).$
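This can again be checked directly against Eq. (1.4); the following sketch (our code, not part of the text) groups the coefficients by each coordinate value:

```python
from collections import defaultdict

# Verify Eq. (1.4) for l U {(0,1,1)} with lambda = (3,-1,-1,-2,2,-1)
# and the coordinate functions h_i(z_1, z_2, z_3) = z_i.
points = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1), (0, 1, 1)]
lam = [3, -1, -1, -2, 2, -1]

ok = True
for i in range(3):
    sums = defaultdict(int)
    for x, l in zip(points, lam):
        sums[x[i]] += l
    ok = ok and all(s == 0 for s in sums.values())
print(ok)  # the six-point set satisfies (1.4), hence is a cycle
```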
A minimal cycle $p=\{x_{1},...,x_{n}\}$ has the following obvious properties:
\begin{description}
\item[(a)] \textit{The vector $\lambda $ associated with $p$ through Eq.
(1.4) is unique up to multiplication by a constant;}
\item[(b)] \textit{If in (1.4), $\sum_{j=1}^{n}\left\vert \lambda
_{j}\right\vert =1,$ then all the numbers $\lambda _{j},~j=1,...,n,$ are
rational.}
\end{description}
Thus, a minimal cycle $p$ uniquely (up to a sign) defines the functional
\begin{equation*}
~G_{p}(f)=\sum_{j=1}^{n}\lambda _{j}f(x_{j}),\text{ \ }\sum_{j=1}^{n}\left%
\vert \lambda _{j}\right\vert =1.
\end{equation*}
\bigskip
\textbf{Lemma 1.2.} \textit{The functional $G_{p,\lambda }$ is a linear
combination of functionals $G_{p_{1}},...,G_{p_{k}},$ where $p_{1},...,p_{k}$
are minimal cycles in $p.$}
\bigskip
\begin{proof} Let $\left\langle p,\lambda \right\rangle $ be a cycle-vector
pair of $X$, where $p=\{x_{1},...,x_{n}\}$ and $\lambda =(\lambda
_{1},...,\lambda _{n})$. Let $p_{1}=$ $\{y_{1}^{1},...,y_{s_{1}}^{1}\},$ $%
s_{1}<n$, be a minimal cycle in $p$ and
\begin{equation*}
G_{p_{1}}(f)=\sum_{j=1}^{s_{1}}\nu _{j}^{1}f(y_{j}^{1}),\text{ }%
\sum_{j=1}^{s_{1}}\left\vert \nu _{j}^{1}\right\vert =1.
\end{equation*}
Without loss of generality, we may assume that $y_{1}^{1}=x_{1}.$ Put
\begin{equation*}
t_{1}=\frac{\lambda _{1}}{\nu _{1}^{1}}.
\end{equation*}%
Then the functional $G_{p,\lambda }-t_{1}G_{p_{1}}$ has the form
\begin{equation*}
G_{p,\lambda }-t_{1}G_{p_{1}}=\sum_{j=1}^{n_{1}}\lambda _{j}^{1}f(x_{j}^{1}),
\end{equation*}%
where $x_{j}^{1}\in p$, $\lambda _{j}^{1}\neq 0$, $j=1,...,n_{1}$. Clearly,
the set $l_{1}=\{x_{1}^{1},...,x_{n_{1}}^{1}\}$ is a cycle in $p$ with the
associated vector $\lambda ^{1}=(\lambda _{1}^{1},...,\lambda _{n_{1}}^{1})$%
. Besides, $x_{1}\notin l_{1}$. Thus, $n_{1}<n$ and $G_{l_{1},\lambda ^{1}}=$
$G_{p,\lambda }-t_{1}G_{p_{1}}$. If $l_{1}$ is minimal, then the proof is
completed. Assume $l_{1}$ is not minimal. Let $p_{2}=$ $%
\{y_{1}^{2},...,y_{s_{2}}^{2}\},$ $s_{2}<n_{1},$ be a minimal cycle in $%
l_{1} $ and
\begin{equation*}
G_{p_{2}}(f)=\sum_{j=1}^{s_{2}}\nu _{j}^{2}f(y_{j}^{2}),\text{ }%
\sum_{j=1}^{s_{2}}\left\vert \nu _{j}^{2}\right\vert =1.
\end{equation*}
Without loss of generality, we may assume that $y_{1}^{2}=x_{1}^{1}.$ Put
\begin{equation*}
t_{2}=\frac{\lambda _{1}^{1}}{\nu _{1}^{2}}.
\end{equation*}%
Then the functional $G_{l_{1},\lambda ^{1}}-t_{2}G_{p_{2}}$ has the form
\begin{equation*}
G_{l_{1},\lambda ^{1}}-t_{2}G_{p_{2}}=\sum_{j=1}^{n_{2}}\lambda
_{j}^{2}f(x_{j}^{2}),
\end{equation*}%
where $x_{j}^{2}\in l_{1}$, $\lambda _{j}^{2}\neq 0$, $j=1,...,n_{2}$.
Clearly, the set $l_{2}=\{x_{1}^{2},...,x_{n_{2}}^{2}\}$ is a cycle in $%
l_{1} $ with the associated vector $\lambda ^{2}=(\lambda
_{1}^{2},...,\lambda _{n_{2}}^{2})$. Besides, $x_{1}^{1}\notin l_{2}$. Thus,
$n_{2}<n_{1}$ and $G_{l_{2},\lambda ^{2}}=$ $G_{l_{1},\lambda
^{1}}-t_{2}G_{p_{2}}.$ If $l_{2}$ is minimal, then the proof is completed.
Assume $l_{2}$ is not minimal. Repeating the above process for $l_{2}$, then
for $l_{3}$, etc., after some $k-1$ steps we will come to a minimal cycle $%
l_{k-1}$ and the functional
\begin{equation*}
G_{l_{k-1},\lambda ^{k-1}}=G_{l_{k-2},\lambda
^{k-2}}-t_{k-1}G_{p_{k-1}}=\sum_{j=1}^{n_{k-1}}\lambda
_{j}^{k-1}f(x_{j}^{k-1}).
\end{equation*}%
Since the cycle $l_{k-1}$ is minimal,
\begin{equation*}
G_{l_{k-1},\lambda ^{k-1}}=t_{k}G_{l_{k-1}},\text{ \ where }%
t_{k}=\sum_{j=1}^{n_{k-1}}\left\vert \lambda _{j}^{k-1}\right\vert .
\end{equation*}%
Now putting $p_{k}=l_{k-1}$ and considering the above chain relations
between the functionals $G_{l_{i},\lambda ^{i}}$, $i=1,...,k-1,$ we obtain
that
\begin{equation*}
G_{p,\lambda }=\sum_{i=1}^{k}t_{i}G_{p_{i}}.
\end{equation*}
\end{proof}
\textbf{Theorem 1.1.} \textit{Assume $X\subset \mathbb{R}^{d}$ and $%
h_{1},...,h_{r}$ are arbitrarily fixed real functions on $X.$ The following
assertions are valid.}
\textit{1) Let $X$ have cycles with respect to the functions $%
h_{1},...,h_{r} $. A function $f:X\rightarrow \mathbb{R}$ belongs to the
space $\mathcal{B}(h_{1},...,h_{r};X)$ if and only if $G_{p}(f)=0$ for any
minimal cycle $p\subset X$.}
\textit{2) Let $X$ have no cycles. Then $\mathcal{B}%
(h_{1},...,h_{r};X)=T(X). $}
\bigskip
\begin{proof} 1) The necessity is clear. Let us prove the sufficiency. On the
strength of Lemma 1.2, it is enough to prove that if $G_{p,\lambda }(f)=0$
for any cycle-vector pair $\left\langle p,\lambda \right\rangle $ of $X$,
then $f\in \mathcal{B}(X).$
Consider a system of intervals $\{(a_{i},b_{i})\subset \mathbb{R}%
\}_{i=1}^{r} $ such that $(a_{i},b_{i})\cap (a_{j},b_{j})=\varnothing $ for
all indices $i,j\in \{1,...,r\}$, $i\neq j.$ For $i=1,...,r$, let $\tau
_{i}$ be a one-to-one mapping of $\mathbb{R}$ onto $(a_{i},b_{i}).$
Introduce the following functions on $X$:
\begin{equation*}
h_{i}^{\prime }(x)=\tau _{i}(h_{i}(x)),\text{ }i=1,...,r.
\end{equation*}
It is clear that any cycle with respect to the functions $h_{1},...,h_{r}$
is also a cycle with respect to the functions $h_{1}^{\prime
},...,h_{r}^{\prime }$, and vice versa. Besides, $h_{i}^{\prime }(X)\cap
h_{j}^{\prime }(X)=\varnothing ,$ for all $i,j\in \{1,...,r\},~i\neq j.$
Then by Lemma 1.1,
\begin{equation*}
f(x)=g_{1}^{\prime }(h_{1}^{\prime }(x))+\cdots +g_{r}^{\prime
}(h_{r}^{\prime }(x)),
\end{equation*}%
where $g_{1}^{\prime },...,g_{r}^{\prime }$ are univariate functions
depending on $f$. From the last equality we obtain that
\begin{equation*}
f(x)=g_{1}^{\prime }(\tau _{1}(h_{1}(x)))+\cdots +g_{r}^{\prime }(\tau
_{r}(h_{r}(x)))=g_{1}(h_{1}(x))+\cdots +g_{r}(h_{r}(x)).
\end{equation*}%
That is, $f\in \mathcal{B}(X)$.
2) Let $f:X\rightarrow \mathbb{R}$ be an arbitrary function. First suppose
that $h_{i}(X)\cap h_{j}(X)=\varnothing ,$ for all $i,j\in \{1,...,r\}$,$%
~i\neq j.$ In this case, the proof is similar to and even simpler than that
of Lemma 1.1. Indeed, the set of all $(\ast )$-points of $Y$ consists of a
single point, since otherwise we would have a cycle with two points, which
contradicts the hypothesis of the 2-nd part of the theorem. Further,
well definition of the functional $F$ becomes obvious, since the left-hand
side of (1.6) also contradicts the nonexistence of cycles. Thus, as in the
proof of Lemma 1.1, we can extend $F$ to the space $\mathcal{S}^{\prime }$
and then obtain the desired representation for the function $f$. Since $f$
is arbitrary, $T(X)=\mathcal{B}(X).$
Using the techniques from the proof of the first part of the theorem, one can
easily generalize the above argument to the case when the functions
$h_{1},...,h_{r}$ have arbitrary ranges.
\end{proof}
\bigskip
\textbf{Theorem 1.2.} \textit{$\mathcal{B}(h_{1},...,h_{r};X)=T(X)$ if and
only if $X$ has no cycles with respect to the functions }$h_{1},...,h_{r}$%
\textit{.}
\bigskip
\begin{proof} The sufficiency immediately follows from Theorem 1.1. To prove
the necessity, assume that $X$ has a cycle $p=\{x_{1},...,x_{n}\}$. Let $%
\lambda =(\lambda _{1},...,\lambda _{n})$ be a vector associated with $p$ by
Eq. (1.4). Consider a function $f_{0}$ on $X$ with the properties: $%
f_{0}(x_{i})=1,$ for indices $i$ such that $\lambda _{i}\,>0$, and $%
f_{0}(x_{i})=-1,$ for indices $i$ such that $\lambda _{i}\,<0$ (the values
of $f_{0}$ at the remaining points of $X$ are irrelevant). For this
function, $G_{p,\lambda }(f_{0})=\sum_{i=1}^{n}\left\vert \lambda
_{i}\right\vert \neq 0$. Then by Theorem 1.1, $f_{0}\notin
\mathcal{B}(X)$. Hence $\mathcal{B}(X)\neq T(X)$. The contradiction shows
that $X$ does not admit cycles.
\end{proof}
\subsection{Corollaries}
From Theorems 1.1 and 1.2 we obtain the following corollaries for the ridge
function representation.
\bigskip
\textbf{Corollary 1.1.} \textit{Assume $X\subset \mathbb{R}^{d}$ and
$\mathbf{a}^{1},...,\mathbf{a}^{r}\in \mathbb{R}^{d}\backslash \{\mathbf{0}%
\}$. The following assertions are valid.}
\textit{1) Let $X$ have cycles with respect to the directions $%
\mathbf{a}^{1},...,\mathbf{a}^{r}$. A function $f:X\rightarrow
\mathbb{R}$ belongs to the space $\mathcal{R}(\mathbf{a}^{1},...,%
\mathbf{a}^{r};X)$ if and only if $G_{p}(f)=0$ for any
minimal cycle $p\subset X$.}
\textit{2) Let $X$ have no cycles. Then every function $%
f:X\rightarrow \mathbb{R}$ belongs to the space $\mathcal{R}(%
\mathbf{a}^{1},...,\mathbf{a}^{r};X)$.}
\bigskip
\textbf{Corollary 1.2.} \textit{$\mathcal{R}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)=T(X)$
if and only if $X$ has no cycles with respect to the
directions $\mathbf{a}^{1},...,\mathbf{a}^{r}$.}
\bigskip
Note that solutions to Problems 1 and 2 are given by Corollaries 1.1 and
1.2, respectively. Since it is not always easy to find all cycles of a
given set $X$, or even to decide whether $X$ possesses a single cycle,
Corollaries 1.1 and 1.2 are of more theoretical than practical character.
Particular cases of Problems 1 and 2 support this view. For example, for
the problem of representation by sums of two ridge functions, the picture of
cycles is completely describable (see the beginning of this section). The
interpretation of cycles with respect to three directions in the plane can
be found in Braess and Pinkus \cite{10}. A geometric description of cycles
with respect to 4 and more directions is quite complicated and requires deep
techniques from geometry and graph theory. This is beyond the scope of our
study.
From the last corollary, it follows that if representation by sums of ridge
functions with fixed directions $\mathbf{a}^{1},...,\mathbf{a}^{r}$ is valid
in the class of continuous functions (or in the class of bounded functions),
then such a representation is valid in the class of all functions. For a
rigorous mathematical formulation of this result, let us introduce the notation:
\begin{equation*}
\mathcal{R}_{c}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)=\left\{
\sum\limits_{i=1}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),~\mathbf{x}\in
X,~g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})\in C(X),~i=1,...,r\right\}
\end{equation*}
and
\begin{equation*}
\mathcal{R}_{b}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)=\left\{
\sum\limits_{i=1}^{r}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),~\mathbf{x}\in
X,~g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})\in B(X),~i=1,...,r\right\} .
\end{equation*}
Here $C(X)$ and $B(X)$ denote the spaces of continuous and bounded functions
defined on $X\subset \mathbb{R}^{d}$, respectively (for the first space,
the set $X$ is supposed to be compact). As we know (see Section 1.1), it
follows from the results of Sternfeld that the equality $\mathcal{R}_{c}(%
\mathbf{a}^{1},...,\mathbf{a}^{r};X)=C(X)$ implies the equality $\mathcal{R}%
_{b}(\mathbf{a}^{1},...,\mathbf{a}^{r};X)=B(X).$ In other words, if every
continuous function is represented by sums of ridge functions (with fixed
directions!), then every bounded function also admits such a representation
(with bounded summands). Corollaries 1.1 and 1.2 allow us to obtain the
following result.
\bigskip
\textbf{Corollary 1.3.} \textit{Let $X$ be a compact subset of $\mathbb{R}%
^{d}$ and $\mathbf{a}^{1},...,\mathbf{a}^{r}$ be given directions in $%
\mathbb{R}^{d}\backslash \{\mathbf{0}\}$. If $\mathcal{R}_{c}(\mathbf{a}%
^{1},...,\mathbf{a}^{r};X)=C(X),$ then $\mathcal{R}(\mathbf{a}^{1},...,%
\mathbf{a}^{r};X)=T(X).$}
\bigskip
\begin{proof} If every continuous function defined on $X\subset \mathbb{R}^{d}$ is
represented by sums of ridge functions with the directions $\mathbf{a}%
^{1},...,\mathbf{a}^{r}$, then it can be shown by applying the same idea (as
in the proof of Theorem 1.2) that the set $X$ has no cycles with respect to
the given directions. Because of continuity, however, Urysohn's lemma must
be taken into account. That is, assuming the existence of a cycle
$p_{0}=\{x_{1},...,x_{n}\}$ with an associated vector
$\lambda _{0}=(\lambda _{1},...,\lambda _{n})$, we can deduce from
Urysohn's lemma the existence of a continuous function
$u:X\rightarrow \mathbb{R}$ satisfying
1) $u(x_{i})=1,$ for indices $i$ such that $\lambda _{i}\,>0$;
2) $u(x_{j})=-1,$ for indices $j$ such that $\lambda _{j}\,<0$;
3) $-1<u(x)<1,$ for all $x\in X\backslash p_{0}.$
These properties mean that $G_{p_{0},\lambda _{0}}(u)\neq
0\Longrightarrow u\notin \mathcal{R}_{c}(\mathbf{a}^{1},...,\mathbf{a}%
^{r};X)\Longrightarrow \mathcal{R}_{c}(\mathbf{a}^{1},...,\mathbf{a}%
^{r};X)\neq C(X).$
But if $X$ has no cycles with respect to the directions $\mathbf{a}^{1},...,%
\mathbf{a}^{r}$, then by Corollary 1.2, $\mathcal{R}(\mathbf{a}^{1},...,%
\mathbf{a}^{r};X)=T(X).$
\end{proof}
Let us now give some examples of sets over which the representation by
linear combinations of ridge functions is possible.
\begin{description}
\item[(1)] Let $r=2$ and $X$ be the union of two parallel lines not
perpendicular to the directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$.
Then $X$ has no cycles with respect to $\{\mathbf{a}^{1},\mathbf{a}^{2}\}$.
Therefore, by Corollary 1.2, $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}%
^{2};X\right) =T(X).$
\item[(2)] Let $r=2,$ $\mathbf{a}^{1}=(1,1)$, $\mathbf{a}^{2}=(1,-1)$ and $X$
be the graph of the function $y=\arcsin (\sin x)$. Then $X$ has no cycles
and hence $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2};X\right) =T(X).$
\item[(3)] Assume now we are given $r$ directions $\{\mathbf{a}^{j}\}_{j=1}^{r}$ and $%
r+1$ points $\{\mathbf{x}^{i}\}_{i=1}^{r+1}\subset \mathbb{R}^{d}$ such that%
\begin{eqnarray*}
\mathbf{a}^{1}\cdot \mathbf{x}^{i} &=&\mathbf{a}^{1}\cdot \mathbf{x}^{j}\neq
\mathbf{a}^{1}\cdot \mathbf{x}^{2}\text{, \ for }1\leq i,j\leq r+1\text{, }%
i,j\neq 2 \\
\mathbf{a}^{2}\cdot \mathbf{x}^{i} &=&\mathbf{a}^{2}\cdot \mathbf{x}^{j}\neq
\mathbf{a}^{2}\cdot \mathbf{x}^{3}\text{, \ for }1\leq i,j\leq r+1\text{, }%
i,j\neq 3 \\
&&\vdots \\
\mathbf{a}^{r}\cdot \mathbf{x}^{i} &=&\mathbf{a}^{r}\cdot \mathbf{x}^{j}\neq
\mathbf{a}^{r}\cdot \mathbf{x}^{r+1}\text{, \ for }1\leq i,j\leq r.
\end{eqnarray*}%
The simplest data realizing these equations are the basis directions in $%
\mathbb{R}^{d}$ and the points $(0,0,...,0)$, $(1,0,...,0)$, $(0,1,...,0)$%
,..., $(0,0,...,1)$. From the first equation we obtain that $\mathbf{x}^{2}$
cannot be a point of any cycle in $X=\{\mathbf{x}^{1},...,\mathbf{x}^{r+1}\}$%
. Sequentially, from the second, third, ..., $r$-th equations it follows
that the points $\mathbf{x}^{3},\mathbf{x}^{4},...,\mathbf{x}^{r+1}$ also
cannot be points of cycles in $X$, respectively. Thus the set $X$ does not
contain cycles at all. By Corollary 1.2, $\mathcal{R}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{r};X\right) =T(X).$
\item[(4)] Assume we are given directions $\{\mathbf{a}^{j}\}_{j=1}^{r}$ and a curve $%
\gamma $ in $\mathbb{R}^{d}$ such that for any $c\in \mathbb{R}$, $\gamma $
has at most one common point with at least one of the hyperplanes $\mathbf{a}%
^{j}\cdot \mathbf{x}=c$, $j=1,...,r.$ Clearly, the curve $\gamma $ has no
cycles and hence $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};\gamma
\right) =T(\gamma ).$
\end{description}
Braess and Pinkus \cite{10} considered the following particular case of Problem 2:
characterize a set of points $\left( \mathbf{x}^{1},...,\mathbf{x}%
^{k}\right) \subset \mathbb{R}^{d}$ such that for any data $\{\alpha
_{1},...,\alpha _{k}\}\subset \mathbb{R}$ there exists a function $g\in
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};\mathbb{R}^{d}\right) $
satisfying $g(\mathbf{x}^{i})=\alpha _{i},$ $i=1,...,k$. In connection with
this problem, they introduced the notion of the \textit{NI}-property (non
interpolation property) and \textit{MNI}-property (minimal non interpolation
property) of a finite set of points as follows:
Given directions $\{\mathbf{a}^{j}\}_{j=1}^{r}\subset \mathbb{R}%
^{d}\backslash \{\mathbf{0}\}$, we say that a set of points $\{\mathbf{x}%
^{i}\}_{i=1}^{k}\subset \mathbb{R}^{d}$ has the \textit{NI}-property with
respect to $\{\mathbf{a}^{j}\}_{j=1}^{r}$, if there exists $\{\alpha
_{i}\}_{i=1}^{k}\subset \mathbb{R}$ such that we cannot find a function $%
g\in \mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};\mathbb{R}%
^{d}\right) $ satisfying $g(\mathbf{x}^{i})=\alpha _{i},$ $i=1,...,k$. We
say that the set $\{\mathbf{x}^{i}\}_{i=1}^{k}\subset \mathbb{R}^{d}$ has
the \textit{MNI}-property with respect to $\{\mathbf{a}^{j}\}_{j=1}^{r}$, if
$\{\mathbf{x}^{i}\}_{i=1}^{k}$ but no proper subset thereof has the \textit{%
NI}-property.
It follows from Corollary 1.2 that a set $\{\mathbf{x}^{i}\}_{i=1}^{k}$ has
the \textit{NI}-property if and only if $\{\mathbf{x}^{i}\}_{i=1}^{k}$
contains a cycle with respect to the functions $h_{i}=\mathbf{a}^{i}\cdot
\mathbf{x},$ $i=1,...,r$ (or, simply, to the directions $\mathbf{a}^{i},$ $%
i=1,...,r$) and the \textit{MNI}-property if and only if the set $\{\mathbf{x%
}^{i}\}_{i=1}^{k}$ itself is a minimal cycle with respect to the given
directions. Taking into account this argument and Definitions 1.1 and 1.2,
we obtain that the set $\{\mathbf{x}^{i}\}_{i=1}^{k}$ has the \textit{NI}%
-property if and only if there is a vector $\mathbf{m}=(m_{1},...,m_{k})\in
\mathbb{Z}^{k}\backslash \{\mathbf{0}\}$ such that
\begin{equation*}
\sum_{j=1}^{k}m_{j}g(\mathbf{a}^{i}\cdot \mathbf{x}^{j})=0,
\end{equation*}%
for $i=1,...,r$ and all functions $g:\mathbb{R}\rightarrow \mathbb{R}$. This set has
the \textit{MNI}-property if and only if the vector $\mathbf{m}$ has the
additional properties: it is unique up to multiplication by a constant and
all its components are different from zero. This special consequence of
Corollary 1.2 was proved in \cite{10}.
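The level-set formulation above lends itself to a direct computational test (a minimal sketch under our own naming; \texttt{has\_cycle} and the exact elimination are illustrative, not from the text): a finite set has a cycle with respect to given directions exactly when the 0/1 incidence matrix of its level sets has a nontrivial null space.

```python
from fractions import Fraction

def has_cycle(points, directions):
    """Decide whether `points` contains a cycle with respect to `directions`:
    a nonzero vector m with sum_j m_j * g(a^i . x^j) = 0 for every direction
    a^i and every univariate g.  Equivalently, m must sum to zero over each
    level set {j : a^i . x^j = const}, so a cycle exists iff the 0/1
    level-set incidence matrix has a nontrivial null space."""
    k = len(points)
    rows = []
    for a in directions:
        levels = {}
        for j, x in enumerate(points):
            t = sum(Fraction(ai) * Fraction(xi) for ai, xi in zip(a, x))
            levels.setdefault(t, []).append(j)
        for idx in levels.values():
            row = [Fraction(0)] * k
            for j in idx:
                row[j] = Fraction(1)
            rows.append(row)
    # Exact Gaussian elimination: compute the rank of the incidence matrix.
    rank = 0
    for col in range(k):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [v - f * u for v, u in zip(rows[i], rows[rank])]
        rank += 1
    return rank < k  # nontrivial null space <=> a cycle exists

# The vertex set of the unit square is the classical cycle for the
# coordinate directions; three of its vertices admit no cycle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
coords = [(1, 0), (0, 1)]
```

Rational arithmetic keeps the rank computation exact, so the test is free of rounding artifacts for rational inputs.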
\bigskip
\section{Characterization of an extremal sum of ridge functions}
The approximation problem considered in this section is to approximate a
continuous multivariate function $f\left( \mathbf{x}\right) =f\left( {%
x_{1},...,x_{d}}\right) $ by sums of two ridge functions in the uniform
norm. We give a necessary and sufficient condition for a sum of two ridge
functions to be a best approximation to $f\left( \mathbf{x}\right) .$ This
main result is next used in a special case to obtain an explicit formula for
the approximation error and to construct a best approximation. The problem
of well approximation by such sums is also considered.
\subsection{Exposition of the problem}
Consider the following set of sums of ridge functions
\begin{equation*}
\mathcal{R}=\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) ={\left\{ {%
g_{1}\left( \mathbf{a}^{1}{\cdot }\mathbf{x}\right) +g_{2}\left( \mathbf{a}^{2}{%
\cdot }\mathbf{x}\right) :g}_{i}{\in C\left( {\mathbb{R}}\right) ,i=1,2}%
\right\} }.
\end{equation*}
That is, we fix directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ and consider linear
combinations of ridge functions with these directions.
Assume $f\left( \mathbf{x}\right) $ is a continuous function on a
compact subset $Q$ of $\mathbb{R}^{d}$. We want to find conditions that are
necessary and sufficient for a function $g_{0}\in \mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) $ to be an extremal element (or a best
approximation) to $f$. In other words, we want to characterize those sums $%
g_{0}\left( \mathbf{x}\right) =g_{1}\left( \mathbf{a}^{1}{\cdot }\mathbf{x}%
\right) +g_{2}\left( \mathbf{a}^{2}{\cdot }\mathbf{x}\right) $ of ridge
functions for which
\begin{equation*}
{\left\Vert {f-g_{0}}\right\Vert }={\max\limits_{{\mathbf{x}\in Q}}}{%
\left\vert {f\left( \mathbf{x}\right) -g}_{{0}}{\left( \mathbf{x}\right) }%
\right\vert }=E\left( {f}\right) ,
\end{equation*}%
where
\begin{equation*}
E\left( {f}\right) =E(f,\mathcal{R})\overset{def}{=}{\inf_{g \in \mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right)}}{\left\Vert {f-g}%
\right\Vert }
\end{equation*}%
is the error in approximating from $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}%
\right) .$ The other related problem is how to construct these sums of ridge
functions. We also want to know if we can approximate well, i.e. for which
compact sets $Q,$ $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ is
dense in $C\left( {Q}\right) $ in the topology of uniform convergence. It
should be remarked that solutions to these problems may be useful in
connection with the study of partial differential equations. For example,
assume that $\left( {a_{1},b_{1}}\right) $ and $\left( {a_{2},b_{2}}\right) $
are linearly independent vectors in $\mathbb{R}^{2}.$ Then the general
solution to the homogeneous partial differential equation
\begin{equation*}
\left( {a_{1}{\frac{\partial }{\partial {x}}}+b_{1}{\frac{\partial }{%
\partial {y}}}}\right) \left( {a_{2}{\frac{\partial }{\partial {x}}}+b_{2}{%
\frac{\partial }{\partial {y}}}}\right) {u}\left( {x,y}\right) =0\eqno(1.10)
\end{equation*}%
consists of all functions of the form
\begin{equation*}
u\left( {x,y}\right) =g_{1}\left( {b_{1}x-a_{1}y}\right) +g_{2}\left( {%
b_{2}x-a_{2}y}\right) \eqno(1.11)
\end{equation*}%
for arbitrary $g_{1}$ and $g_{2}.$ In \cite{36}, Golitschek and Light
described an algorithm that computes the error of approximation of a
continuous function $f\left( {x,y}\right) $ by solutions of
equation (1.10), provided that $a_{1}=b_{2}=1$, $a_{2}=b_{1}=0.$ Using our
result (see Theorem 1.3), one can characterize those solutions (1.11) that are
extremal to a given function $f(x,y)$. For a certain class of functions $f(x,y)
$, one can also easily calculate the approximation error and construct an
extremal solution (see Theorems 1.5 and 1.6 below).
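Each first-order factor in (1.10) annihilates one of the two ridge terms (a short formal check, assuming differentiable $g_{1},g_{2}$): by the chain rule,
\begin{equation*}
\left( {a_{1}{\frac{\partial }{\partial {x}}}+b_{1}{\frac{\partial }{%
\partial {y}}}}\right) g_{1}\left( {b_{1}x-a_{1}y}\right) =\left( {%
a_{1}b_{1}-b_{1}a_{1}}\right) g_{1}^{\prime }\left( {b_{1}x-a_{1}y}\right)
=0,
\end{equation*}%
and similarly the second factor annihilates $g_{2}\left( {b_{2}x-a_{2}y}%
\right) $. Since the two constant-coefficient operators commute, every
function of the form (1.11) solves (1.10).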
The problem of approximating by functions from the set $\mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) $ arises in other contexts too. Buck \cite{11}
studied the classical functional equation: given $\beta (t)\in C[0,1]$, $%
0\leq \beta (t)\leq 1$, for which $u\in C[0,1]$ does there exist $\varphi
\in C[0,1]$ such that
\begin{equation*}
\varphi (t)=\varphi \left( \beta (t)\right) +u(t)?
\end{equation*}%
He proved that the set of all $u$ satisfying this condition is dense in the
set
\begin{equation*}
\{v\in C[0,1]:\ v(t)=0\ \mbox{whenever}\ \beta (t)=t\}
\end{equation*}%
if and only if $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ with the
unit directions $\mathbf{a}^{1}=(1;0)$ and $\mathbf{a}^{2}=(0;1)$ is dense in $C(K)$%
, where $K=\{(x,y):y=x\ \mbox{or}\ y=\beta (x),\ 0\leq x\leq 1\}$.
Although there are enough reasons to consider approximation problems
associated with the set $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $
in an independent way, one may ask why sums of only two ridge functions are
considered instead of sums with an arbitrary number of terms. We will try to
answer this fair question in Section 1.3.4.
\subsection{The characterization theorem}
Let $Q$ be a compact subset of $\mathbb{R}^{d}$ and $\mathbf{a}^{1},\mathbf{a}^{2}%
\in \mathbb{R}^{d}\backslash {\left\{ \mathbf{0}\right\} }.$
\bigskip
\textbf{Definition 1.3.} \textit{A finite or infinite ordered set $p=\left(
\mathbf{p}{_{1},\mathbf{p}_{2},...}\right) \subset Q$ with $\mathbf{p}%
_{i}\neq \mathbf{p}_{i+1},$ and either $\mathbf{a}^{1}\cdot \mathbf{p}_{1}=%
\mathbf{a}^{1}\cdot \mathbf{p}_{2},\mathbf{a}^{2}\cdot \mathbf{p}_{2}=\mathbf{a}^{2}%
\cdot \mathbf{p}_{3},\mathbf{a}^{1}\cdot \mathbf{p}_{3}=\mathbf{a}^{1}\cdot \mathbf{p%
}_{4},...$ or $\mathbf{a}^{2}\cdot \mathbf{p}_{1}=\mathbf{a}^{2}\cdot \mathbf{p}%
_{2},~\mathbf{a}^{1}\cdot \mathbf{p}_{2}=\mathbf{a}^{1}\cdot \mathbf{p}_{3},
\mathbf{a}^{2}\cdot \mathbf{p}_{3}=\mathbf{a}^{2}\cdot \mathbf{p}_{4},...$ is called a path
with respect to the directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$.}
\bigskip
This notion (in the two-dimensional case) was introduced by Braess and Pinkus
\cite{10}. They showed that paths give geometric means of deciding if a set
of points ${\left\{ {\mathbf{x}}^{i}\right\} }_{i=1}^{m}\subset \mathbb{R}%
^{2}$ has the \textit{NI} property (see Section 1.2.4). Ismailov and Pinkus
\cite{63} used these objects to study the problem of interpolation on
straight lines by linear combinations of a finite number of ridge functions
with fixed directions. In \cite{51,53,44} paths were generalized to those
with respect to two functions. The last objects turned out to be useful in
problems of approximation and representation by sums of compositions of
fixed multivariate functions with univariate functions.
If $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ are the coordinate vectors in $\mathbb{R}%
^{2}$, then Definition 1.3 defines a \textit{bolt of lightning}. The idea of
bolts was first introduced in Diliberto and Straus \cite{26}, where these
objects are called \textit{permissible lines}. They appeared further in a number
of papers, although under several different names (see, e.g., \cite%
{29,34,36,55,56,79,78,76,82,93,108,107,113}). Note that the term
\textquotedblleft bolt of lightning" is due to Arnold \cite{3}.
For the sake of brevity, we use the term ``path" instead of the long expression
``path with respect to the directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$".
The length of a path is the number of its points. A single point is a path
of length one. A finite path $\left( \mathbf{p}_{1} ,\mathbf{p}_{2}
,...,\mathbf{p}_{2n} \right)$ is said to be closed if $\left(\mathbf{p}_{1} ,%
\mathbf{p}_{2} ,...,\mathbf{p}_{2n}, \mathbf{p}_{1}\right)$ is a path.
We associate each closed path $p=\left(\mathbf{p}_{1}, \mathbf{p}_{2} ,...,%
\mathbf{p}_{2n} \right) $ with the functional
\begin{equation*}
G_{p} (f)=\frac{1}{2n} \sum\limits_{k=1}^{2n}(-1)^{k+1} f(\mathbf{p}_{k}).
\end{equation*}
This functional has the following obvious properties:
(a) If $g\in \mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right)$, then $G_{p}
(g)=0$.
(b) $\left\| G_{p} \right\| \leq 1$ and if $\mathbf{p}_{i} \neq \mathbf{p}_{j}$ for all $%
i\neq j,$ $1\leq i,j\leq 2n$, then $\left\| G_{p} \right\| =1$.
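Property (a) can be checked numerically on the classical bolt for the coordinate directions $\mathbf{a}^{1}=(1;0)$, $\mathbf{a}^{2}=(0;1)$ (an illustrative sketch with our own names, not from the text): the alternating sum over a closed path telescopes to zero for every ridge sum.

```python
import math

def G(p, f):
    """G_p(f) = (1/2n) * sum_{k=1}^{2n} (-1)^(k+1) f(p_k) for a closed path p."""
    return sum((-1) ** k * f(x) for k, x in enumerate(p)) / len(p)

# Closed path (bolt) for the coordinate directions a1 = (1,0), a2 = (0,1):
# consecutive points agree alternately in the y- and the x-coordinate.
bolt = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# Property (a): G_p vanishes on any ridge sum g1(a1.x) + g2(a2.x),
# here g1 = exp and g2 = sin.
ridge_sum = lambda x: math.exp(x[0]) + math.sin(x[1])
residual = abs(G(bolt, ridge_sum))  # vanishes up to floating-point rounding
```

For a function that is not a ridge sum, such as $f(x,y)=xy$, the same functional is nonzero, which is what makes $G_{p}$ useful as a lower bound in Lemma 1.3.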
\bigskip
\textbf{Lemma 1.3.} \textit{Let a compact set $Q$ have closed paths. Then
\begin{equation*}
\sup\limits_{p\subset Q}\left\vert G_{p}(f)\right\vert \leq E\left( f\right)
,\eqno(1.12)
\end{equation*}%
where the sup is taken over all closed paths. Moreover, inequality (1.12) is
sharp, i.e. there exist functions for which (1.12) turns into equality.}
\bigskip
\begin{proof} Let $p$ be a closed path in $Q$ and $g$ be any function from $%
\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $. By the linearity of
$G_{p}$ and properties (a) and (b),
\begin{equation*}
\left\vert G_{p}(f)\right\vert =\left\vert G_{p}(f-g)\right\vert \leq
\left\Vert f-g\right\Vert .\eqno(1.13)
\end{equation*}%
Since the left-hand and the right-hand sides of (1.13) do not depend on $g$
and $p$ respectively, it follows from (1.13) that
\begin{equation*}
\sup_{p\subset Q}\left\vert G_{p}(f)\right\vert \leq \inf_{g \in \mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) }\left\Vert f-g\right\Vert .\eqno(1.14)
\end{equation*}
Now we prove the sharpness of (1.12). By assumption $Q$ has closed paths.
Then $Q$ has a closed path $p^{\prime }=\left( \mathbf{p}_{1}^{\prime },...,%
\mathbf{p}_{2m}^{\prime }\right)$ with distinct points $\mathbf{p}_{1}^{\prime },...,%
\mathbf{p}_{2m}^{\prime}$. In fact, such a special path can be obtained
from any closed path $p=\left( \mathbf{p}_{1},...,\mathbf{p}_{2n}\right) $
by the following simple algorithm: if the points of the path $p$ are not all
distinct, let $i$ and $k>0$ be the minimal indices such that $\mathbf{p}_{i}=%
\mathbf{p}_{i+2k}$; delete from $p$ the subsequence $\mathbf{p}_{i+1},...,%
\mathbf{p}_{i+2k}$ and call the obtained path again $p$; repeat the above step
until all points of $p$ are distinct; set $p^{\prime }:=p$. On the other
hand there exist continuous functions $h=h(\mathbf{x})$ on $Q$ such that $h(%
\mathbf{p}_{i}^{\prime })=1$, $i=1,3,...,2m-1$, $h(\mathbf{p}_{i}^{\prime
})=-1$, $i=2,4,...,2m$ and $-1<h(\mathbf{x})<1$ elsewhere. For such
functions we have
\begin{equation*}
G_{p^{\prime }}(h)=\Vert h\Vert =1\eqno(1.15)
\end{equation*}%
and
\begin{equation*}
E(h)\leq \Vert h\Vert ,\eqno(1.16)
\end{equation*}%
where the last inequality follows from the fact that $0\in \mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) .$ From (1.14)-(1.16) it follows that
\begin{equation*}
\sup_{p\subset Q}\left\vert G_{p}(h)\right\vert =E\left( h\right) .
\end{equation*}%
\end{proof}
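For a concrete illustration of Lemma 1.3 (our own example, not from the text): take $f(x,y)=xy$ on $Q=[0,1]^{2}$ with the coordinate directions. The bolt $(0,0),(1,0),(1,1),(0,1)$ gives the lower bound $E(f)\geq G_{p}(f)=1/4$, while the ridge sum $g(x,y)=x^{2}/2+y^{2}/2-1/4$ satisfies $f-g=1/4-(x-y)^{2}/2$, hence $\left\Vert f-g\right\Vert =1/4$; so here (1.12) holds with equality.

```python
f = lambda x: x[0] * x[1]                            # target on Q = [0,1]^2
g = lambda x: x[0] ** 2 / 2 + x[1] ** 2 / 2 - 0.25   # candidate ridge sum

# Lower bound: alternating average of f over the bolt (0,0),(1,0),(1,1),(0,1).
bolt = [(0, 0), (1, 0), (1, 1), (0, 1)]
signs = [1, -1, 1, -1]
lower = sum(s * f(p) for s, p in zip(signs, bolt)) / len(bolt)

# Upper bound: sup-norm of f - g, sampled on a uniform grid.
grid = [(i / 100, j / 100) for i in range(101) for j in range(101)]
upper = max(abs(f(x) - g(x)) for x in grid)
```

Both quantities equal $1/4$, confirming that $E(f)=1/4$ and that the bolt above is an extremizing closed path.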
\textbf{Lemma 1.4.} \textit{Let $Q$ be a convex compact subset of $%
\mathbb{R}^{d}$ and $f \in C(Q)$. For a vector $\mathbf{e}\in
\mathbb{R}^{d}\backslash \mathbf{\{0\}}$ and a real number $t$ set
\begin{equation*}
Q_{t}=\left\{ {\mathbf{x}}\in Q:\mathbf{e}\cdot \mathbf{x}=t\right\} ,\ \ \
T_{\mathbf{e}}=\left\{ t\in \mathbb{R}:Q_{t}\neq \emptyset \right\} .
\end{equation*}%
The functions
\begin{equation*}
g_{1} (t)=\max_{\mathbf{x} \in Q_t} f(\mathbf{x} ),\ \ t\in T_{\mathbf{e}}\ \ %
\mbox{and}\ \ g_{2} (t)=\min\limits_{\mathbf{x} \in Q_t} f(\mathbf{x} ),\ \
\ t\in T_{\mathbf{e}}
\end{equation*}
are defined and continuous on $T_{\mathbf{e}}$.}
\bigskip
The proof of this lemma is not difficult and can be obtained by well-known
elementary methods of mathematical analysis.
\bigskip
\textbf{Definition 1.4.} \textit{A finite or infinite path $(\mathbf{p}_{1},%
\mathbf{p}_{2},...)$ is said to be extremal for a function $u \in
C(Q)$ if $u(\mathbf{p}_{i})=(-1)^{i}\left\Vert u\right\Vert ,i=1,2,...$ or $%
u(\mathbf{p}_{i})=(-1)^{i+1}\left\Vert u\right\Vert ,$ $i=1,2,...$}
\bigskip
\textbf{Theorem 1.3.} \textit{Let $Q\subset \mathbb{R}^{d}$ be a convex
compact set satisfying the following condition}
\textit{Condition (A): For any path $q=(\mathbf{q}_{1},\mathbf{q}_{2},...,%
\mathbf{q}_{n})\subset Q$ there exist points $\mathbf{q}_{n+1},\mathbf{q}%
_{n+2},...,\mathbf{q}_{n+s}\in Q$ such that $(\mathbf{q}_{1},\mathbf{q}%
_{2},...,\mathbf{q}_{n+s})$ is a closed path and $s$ is not more than some
positive integer $N_{0}$ independent of $q$.}
\textit{Then a necessary and sufficient condition for a function $g_{0}\in
\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ to be an extremal element
to the given function $f \in C(Q)$ is the existence of a closed or infinite
path $l=(\mathbf{p}_{1},\mathbf{p}_{2},...)$ extremal for the function $%
f_{1}=f-g_{0}$.}
\bigskip
It should be remarked that the above condition (A) strongly
depends on the fixed directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$. For example,
in the familiar case of a square $S\subset \mathbb{R}^2$ there are many
directions which are not allowed. If some corner of $S$ can be reached along
at most one of the two directions orthogonal to $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$,
respectively (we do not differentiate between directions $\mathbf{c}$ and
$-\mathbf{c}$), then the triple $(S,\mathbf{a}^{1}, \mathbf{a}^{2})$ does
not satisfy condition (A) of the theorem. Here are simple examples: Let $%
S=[0;1]^2$, $\mathbf{a}^{1}=(1;0)$, $\mathbf{a}^{2}=(1;1)$. Then the ordered set $%
\{(0;1), (1;0), (1;1)\}$ is a path in $S$ which cannot be made closed. In
this case, $(1;1)$ cannot be reached along the direction orthogonal to $\mathbf{a}^{2}$.
Let now $\mathbf{a}^{1}=\left(1;\frac{1}{2}\right)$, $\mathbf{a}^{2}=(1;1)$.
Then the corner $(1;1)$ can be reached along neither of the directions orthogonal to
$\mathbf{a}^{1}$ and $\mathbf{a}^{2}$, respectively. In this case, for any positive
integer $N_0$ and any point $\mathbf{q}_0$ in $S$ one can choose a point $%
\mathbf{q}_1\in S$ from a sufficiently small neighborhood of the corner $%
(1;1)$ so that any path containing $\mathbf{q}_0$ and $\mathbf{q}_1$ has
length greater than $N_0$. These examples and a little geometry show that if a convex
compact set $Q\subset\mathbb{R}^2$ satisfies condition (A) of Theorem
1.3, then any point in the boundary of $Q$ must be reached with each of the
two directions orthogonal to $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ respectively. If $%
Q\subset \mathbb{R}^d, \mathbf{a}^{1}, \mathbf{a}^{2}\in \mathbb{R}^d\backslash\{%
\mathbf{0}\}$, $d>2$, there are many directions orthogonal to $\mathbf{a}^{1}$
and $\mathbf{a}^{2} $. In this case, condition (A) requires that any point in
the boundary of $Q$ should be reached with at least two directions
orthogonal to $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$, respectively.
\begin{proof} \textit{Necessity.} Let $g_{0}(\mathbf{x}) =g_{1,0} \left( \mathbf{a}^{1}{\cdot
}\mathbf{x}\right) +g_{2,0} \left(\mathbf{a}^{2}{\cdot }\mathbf{x}\right)$ be an
extremal element from $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2} \right)$ to
$f$. We must show that if there is no closed path extremal for $f_{1}$,
then there exists an infinite path extremal for $f_{1}$. Suppose the
contrary: there exists a positive integer $N$ such that the length of every
path extremal for $f_{1}$ is at most $N$. Define the following functions:
\begin{equation*}
f_{n} =f_{n-1} -g_{1,n-1} -g_{2,n-1} ,\ \ n=2, 3, ...,
\end{equation*}
where
\begin{equation*}
g_{1,n-1} =g_{1,n-1} \left(\mathbf{a}^{1}{\cdot }\mathbf{x}\right)=\frac{1}{2}
\left(\max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y%
}=\mathbf{a}^{1}{\cdot }\mathbf{x}}} f_{n-1}(\mathbf{y}) +\min\limits_{\substack{
\mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=\mathbf{a}^{1}{\cdot }\mathbf{x}
}} f_{n-1}(\mathbf{y})\right)
\end{equation*}
\begin{equation*}
g_{2,n-1} =g_{2,n-1} (\mathbf{a}^{2}{\cdot }\mathbf{x})=\frac{1}{2} \left( \max
_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{2}{\cdot }\mathbf{y}=\mathbf{a}^{2}{%
\cdot }\mathbf{x}}} \left( f_{n-1} (\mathbf{y})-g_{1,n-1}(\mathbf{a}^{1}{\cdot }%
\mathbf{y})\right)\right.
\end{equation*}
\begin{equation*}
\left.+\min\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{2}{\cdot }\mathbf{%
y}=\mathbf{a}^{2}{\cdot }\mathbf{x}}} \left( f_{n-1}(\mathbf{y}) -g_{1,n-1}(%
\mathbf{a}^{1}{\cdot }\mathbf{y}) \right) \right).
\end{equation*}
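On a finite point set, the passage from $f_{n-1}$ to $f_{n-1}-g_{1,n-1}$ reduces to an elementary per-level computation, sketched below (an illustrative sketch; \texttt{level\_step} is our name, not from the text). After the step, the maximum and minimum of the values on every level set of $\mathbf{a}^{1}$ are symmetric about zero.

```python
def level_step(points, vals, a):
    """Subtract, on each level set {x : a.x = const}, the midpoint
    (max + min)/2 of the current values -- the discrete analogue of
    passing from f_{n-1} to f_{n-1} - g_{1,n-1}."""
    levels = {}
    for j, x in enumerate(points):
        t = round(sum(ai * xi for ai, xi in zip(a, x)), 12)
        levels.setdefault(t, []).append(j)
    out = list(vals)
    for idx in levels.values():
        group = [vals[j] for j in idx]
        c = (max(group) + min(group)) / 2  # value of g_{1,n-1} on this level
        for j in idx:
            out[j] -= c
    return out

# Example: f(x, y) = x*y on a grid in [0,1]^2, levelled along a1 = (1, 0).
pts = [(i / 10, j / 10) for i in range(11) for j in range(11)]
new_vals = level_step(pts, [x * y for x, y in pts], (1.0, 0.0))
```

Applying the analogous step for $\mathbf{a}^{2}$ to the corrected values mirrors the construction of $g_{2,n-1}$ in the proof.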
By Lemma 1.4, all the functions $f_{n}(\mathbf{x}),$ $n=2,3,...,$ are
continuous on $Q$. By assumption $g_{0}$ is a best approximation to $f$.
Hence $\left\Vert f_{1}\right\Vert =E\left( f\right) $. Now let us show that $%
\left\Vert f_{2}\right\Vert =E\left( f\right) $. Indeed, for any $\mathbf{x}%
\in Q$
\begin{equation*}
f_{1}(\mathbf{x})-g_{1,1}(\mathbf{a}^{1}{\cdot }\mathbf{x})\leq \frac{1}{2}%
\left( \max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{%
y}=\mathbf{a}^{1}{\cdot }\mathbf{x}}}f_{1}(\mathbf{y})-\min\limits_{\substack{
\mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=\mathbf{a}^{1}{\cdot }\mathbf{x}
}}f_{1}(\mathbf{y})\right) \leq E(f)\eqno(1.17)
\end{equation*}%
and
\begin{equation*}
f_{1}(\mathbf{x})-g_{1,1}(\mathbf{a}^{1}{\cdot }\mathbf{x})\geq \frac{1}{2}%
\left( \min\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{%
y}=\mathbf{a}^{1}{\cdot }\mathbf{x}}}f_{1}(\mathbf{y})-\max\limits_{\substack{
\mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=\mathbf{a}^{1}{\cdot }\mathbf{x}
}}f_{1}(\mathbf{y})\right) \geq -E(f).\eqno(1.18)
\end{equation*}
Using the definition of $g_{2,1}(\mathbf{a}^{2}\cdot \mathbf{x})$, for any $%
\mathbf{x}\in Q$ we have
\begin{equation*}
f_{1}(\mathbf{x})-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{x})-g_{2,1}(\mathbf{a}^{2}%
\cdot \mathbf{x})
\end{equation*}
\begin{equation*}
\leq \frac{1}{2}\left( \max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{2}%
\cdot \mathbf{y}=\mathbf{a}^{2}\cdot \mathbf{x}}}\left( f_{1}(\mathbf{y}%
)-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{y})\right) -\min\limits_{\substack{
\mathbf{y}\in Q \\ \mathbf{a}^{2}\cdot \mathbf{y}=\mathbf{a}^{2}\cdot \mathbf{x}}}%
\left( f_{1}(\mathbf{y})-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{y})\right) \right)
\end{equation*}
and
\begin{equation*}
f_{1}(\mathbf{x})-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{x})-g_{2,1}(\mathbf{a}^{2}%
\cdot \mathbf{x})
\end{equation*}
\begin{equation*}
\geq \frac{1}{2}\left( \min\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{2}%
\cdot \mathbf{y}=\mathbf{a}^{2}\cdot \mathbf{x}}}\left( f_{1}(\mathbf{y}%
)-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{y})\right) -\max\limits_{\substack{
\mathbf{y}\in Q \\ \mathbf{a}^{2}\cdot \mathbf{y}=\mathbf{a}^{2}\cdot \mathbf{x}}}%
\left( f_{1}(\mathbf{y})-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{y})\right) \right).
\end{equation*}
Using (1.17) and (1.18) in the last two inequalities, we obtain that for any
$\mathbf{x}\in Q$
\begin{equation*}
-E(f)\leq f_{2}(\mathbf{x})=f_{1}(\mathbf{x})-g_{1,1}(\mathbf{a}^{1}{\cdot }%
\mathbf{x})-g_{2,1}(\mathbf{a}^{2}{\cdot }\mathbf{x})\leq E(f).
\end{equation*}
Therefore,
\begin{equation*}
\left\Vert f_{2}\right\Vert \leq E(f).\eqno(1.19)
\end{equation*}
Since $f_{2}(\mathbf{x})-f(\mathbf{x})$ belongs to $\mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) $, we deduce from (1.19) that
\begin{equation*}
\left\Vert f_{2}\right\Vert =E(f).
\end{equation*}
In the same way, one can show that $\|f_3\|=E(f)$, $\|f_4\|=E(f)$, and so
on. Thus we can write
\begin{equation*}
\left\| f_{n} \right\| =E(f) \quad \mbox{for all}\ n.
\end{equation*}
Let us now prove the implications
\begin{equation*}
f_{1}(\mathbf{p}_{0})<E(f)\Rightarrow f_{2}(\mathbf{p}_{0})<E(f)\eqno(1.20)
\end{equation*}%
and
\begin{equation*}
f_{1}(\mathbf{p}_{0})>-E(f)\Rightarrow f_{2}(\mathbf{p}_{0})>-E(f),\eqno%
(1.21)
\end{equation*}%
where $\mathbf{p}_{0}\in Q$. First, we prove the implication
\begin{equation*}
f_{1}(\mathbf{p}_{0})<E(f)\Rightarrow f_{1}(\mathbf{p}_{0})-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{p}_{0})<E(f).\eqno(1.22)
\end{equation*}
There are two possible cases.
1) $\max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=%
\mathbf{a}^{1}{\cdot }\mathbf{p}_0}} f_{1} (\mathbf{y} )=E(f)$ and $\min\limits
_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=\mathbf{a}^{1}{%
\cdot }\mathbf{p}_0}} f_{1} (\mathbf{y}) = -E(f). $ In this case, $g_{1,1}(%
\mathbf{a}^{1}\cdot \mathbf{p}_0)=0$. Hence
\begin{equation*}
f_1(\mathbf{p}_0)-g_{1,1}(\mathbf{a}^{1}\cdot \mathbf{p}_0)< E(f).
\end{equation*}
2) $\max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=%
\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}}}f_{1}(\mathbf{y})=E(f)-\varepsilon _{1}$
and $\min\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}%
=\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}}}f_{1}(\mathbf{y})=-E(f)+\varepsilon _{2}$,%
\newline
where $\varepsilon _{1}$, $\varepsilon _{2}$ are nonnegative real numbers
with the sum $\varepsilon _{1}+\varepsilon _{2}\not=0$. In this case,
\begin{eqnarray*}
f_{1}(\mathbf{p}_{0})-g_{1,1}(\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}) &\leq
&\max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=%
\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}}}f_{1}(\mathbf{y})-g_{1,1}(\mathbf{a}^{1}{\cdot
}\mathbf{p}_{0}) \\
&=&\frac{1}{2}\left( \max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{%
\cdot }\mathbf{y}=\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}}}f_{1}(\mathbf{y}%
)-\min\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=%
\mathbf{a}^{1}{\cdot }\mathbf{p}_{0}}}f_{1}(\mathbf{y})\right) \\
&=&E(f)-\frac{\varepsilon _{1}+\varepsilon _{2}}{2}<E(f).
\end{eqnarray*}
Thus we have proved (1.22). Using this method, we can also prove that
\begin{equation*}
f_{1}(\mathbf{p}_{0})-g_{1,1}(\mathbf{a}^{1}{\cdot }\mathbf{p}%
_{0})<E(f)\Rightarrow f_{1}(\mathbf{p}_{0})-g_{1,1}(\mathbf{a}^{1}{\cdot }%
\mathbf{p}_{0})-g_{2,1}(\mathbf{a}^{2}{\cdot }\mathbf{p}_{0})<E(f).\eqno(1.23)
\end{equation*}%
Now (1.20) follows from (1.22) and (1.23). In the same way one can prove
(1.21). It follows from implications (1.20) and (1.21) that if $f_{2}(%
\mathbf{p}_{0})=E(f)$, then $f_{1}(\mathbf{p}_{0})=E(f)$, and if $f_{2}(%
\mathbf{p}_{0})=-E(f)$, then $f_{1}(\mathbf{p}_{0})=-E(f)$. This simply
means that each path extremal for $f_{2}$ is also extremal for $f_{1}$.
Now we show that if every path extremal for $f_{1}$ has length at most $N$,
then every path extremal for $f_{2}$ has length at most $N-1$. Suppose the
contrary: there is a path extremal for $f_{2}$ of length $N$. Denote it by $q=(\mathbf{q}_{1},\mathbf{q}_{2},...,\mathbf{q}_{N})$. Without loss of
generality we may assume that $\mathbf{a}^{2}\cdot \mathbf{q}_{N-1}=\mathbf{a}^{2}\cdot \mathbf{q}_{N}$. As shown above, the path $q$ is also extremal
for $f_{1}$. Assume that $f_{1}(\mathbf{q}_{N})=E(f)$. Then there is no $\mathbf{q}_{0}\in Q$ such that $\mathbf{q}_{0}\neq \mathbf{q}_{N}$, $\mathbf{a}^{1}\cdot \mathbf{q}_{0}=\mathbf{a}^{1}\cdot \mathbf{q}_{N}$ and $f_{1}(\mathbf{q}_{0})=-E(f)$.
Indeed, if there were such a $\mathbf{q}_{0}$ with $\mathbf{q}_{0}\not\in q$,
the path $(\mathbf{q}_{1},\mathbf{q}_{2},...,\mathbf{q}_{N},\mathbf{q}_{0})$
would be extremal for $f_{1}$, contradicting our assumption that every path
extremal for $f_{1}$ has length at most $N$. If, on the other hand, there
were such a $\mathbf{q}_{0}$ with $\mathbf{q}_{0}\in q$, we could form a
closed path extremal for $f_{1}$, contradicting our assumption that no
closed path extremal for $f_{1}$ exists.
Hence
\begin{equation*}
\max\limits_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=%
\mathbf{a}^{1}{\cdot }\mathbf{q}_N}} f_{1} (\mathbf{y} )=E(f),\ \ \min\limits
_{\substack{ \mathbf{y}\in Q \\ \mathbf{a}^{1}{\cdot }\mathbf{y}=\mathbf{a}^{1}{%
\cdot }\mathbf{q}_N}} f_{1} (\mathbf{y})>-E(f).
\end{equation*}
Therefore,
\begin{equation*}
\left| f_{1} (\mathbf{q}_{N} )-g_{1,1} (\mathbf{a}^{1}{\cdot }\mathbf{q}%
_N)\right| <E(f).
\end{equation*}
From the last inequality it is easy to obtain that (see the proof of
implications (1.20) and (1.21))
\begin{equation*}
\left\vert f_{2}(\mathbf{q}_{N})\right\vert <E(f).
\end{equation*}%
This means, contrary to our assumption, that the path $(\mathbf{q}%
_{1},\mathbf{q}_{2},...,\mathbf{q}_{N})$ cannot be extremal for $f_{2}$.
Hence every path extremal for $f_{2}$ has length at most $N-1$.
In the same way, it can be shown that every path extremal for $f_{3}$ has
length at most $N-2$, every path extremal for $f_{4}$ has length at most $N-3$, and so on. Finally, we obtain that there is no path
extremal for $f_{N+1}$. Hence there is no point $\mathbf{p}_{0}\in Q$
such that $|f_{N+1}(\mathbf{p}_{0})|=\Vert f_{N+1}\Vert $. But by Lemma 1.4,
all the functions $f_{2}$, $f_{3},...,f_{N+1}$ are continuous on the compact
set $Q$; hence the norm $\Vert f_{N+1}\Vert $ must be attained. This
contradiction shows that there exists an infinite path extremal for $f_{1}$.
\bigskip
\textit{Sufficiency.} Let a path $p=(\mathbf{p}_{1},\mathbf{p}_{2},...,%
\mathbf{p}_{2n})$ be closed and extremal for $f_{1}$. Then
\begin{equation*}
\left\vert G_{p}(f)\right\vert =\left\Vert f-g_{0}\right\Vert .\eqno(1.24)
\end{equation*}
By Lemma 1.3,
\begin{equation*}
\left\vert G_{p}(f)\right\vert \leq E(f).\eqno(1.25)
\end{equation*}
It follows from (1.24) and (1.25) that $g_{0}$ is a best approximation.
Let now a path $p=(\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{n},...)$ be
infinite and extremal for $f_{1}$. Consider the sequence $p_{n}=(\mathbf{p}%
_{1},\mathbf{p}_{2},...,\mathbf{p}_{n})$, $n=1,2,...,$ of finite paths. By
condition (A) of the theorem, for each $p_{n}$
there exists a closed path $p_{n}^{m_{n}}=(\mathbf{p}_{1},\mathbf{p}_{2},...,%
\mathbf{p}_{n},\mathbf{q}_{n+1},...,\mathbf{q}_{n+m_{n}})$, where $m_{n}\leq
N_{0}$. Then for any positive integer $n$,
\begin{equation*}
\left\vert G_{p_{n}^{m_{n}}}(f)\right\vert =\left\vert
G_{p_{n}^{m_{n}}}(f-g_{0})\right\vert \leq \frac{n\left\Vert
f-g_{0}\right\Vert +m_{n}\left\Vert f-g_{0}\right\Vert }{n+m_{n}}=\left\Vert
f-g_{0}\right\Vert
\end{equation*}%
and
\begin{equation*}
\left\vert G_{p_{n}^{m_{n}}}(f)\right\vert \geq \frac{n\left\Vert
f-g_{0}\right\Vert -m_{n}\left\Vert f-g_{0}\right\Vert }{n+m_{n}}=\frac{%
n-m_{n}}{n+m_{n}}\left\Vert f-g_{0}\right\Vert .
\end{equation*}
It follows from the above two inequalities for $\left\vert G_{p_{n}^{m_{n}}}(f)\right\vert$ that
\begin{equation*}
\sup_{p_{n}^{m_{n}}}\left\vert G_{p_{n}^{m_{n}}}(f)\right\vert =\left\Vert
f-g_{0}\right\Vert.
\end{equation*}
This together with Lemma 1.3 gives that
\begin{equation*}
\Vert f-g_{0}\Vert \leq E(f).
\end{equation*}%
Hence $g_{0}$ is a best approximation.
\end{proof}
\bigskip
Theorem 1.3 was proved using only methods of classical analysis. By
employing deeper techniques from functional analysis, we will see below
that condition (A) and the convexity assumption on the compact set $Q$ can be
dropped.
\bigskip
\textbf{Theorem 1.4.}\ \textit{\ Assume $Q$ is a compact subset of $\mathbb{R%
}^{d}$. A function $g_{0}\in \mathcal{R}$ is a best approximation to a
function $f\in C(Q)$ if and only if there exists a closed or infinite path $%
p=(\mathbf{p}_{1},\mathbf{p}_{2},...)$ extremal for the function $f-g_{0}$.}
\bigskip
\begin{proof} \textit{Sufficiency.} There are two possible cases. The first case
occurs when there exists a closed path $(\mathbf{p}_{1},...,\mathbf{p}%
_{2n}) $ extremal for the function $f-g_{0}.$ Let us check that in this
case, $g_{0}$ is a best approximation. Indeed, on the one hand, the
following equalities are valid
\begin{equation*}
\left\vert \sum_{i=1}^{2n}(-1)^{i}f(\mathbf{p}_{i})\right\vert =\left\vert
\sum_{i=1}^{2n}(-1)^{i}\left[ f-g_{0}\right] (\mathbf{p}_{i})\right\vert
=2n\left\Vert f-g_{0}\right\Vert .
\end{equation*}%
On the other hand, for any function $g\in \mathcal{R}$, we have
\begin{equation*}
\left\vert \sum_{i=1}^{2n}(-1)^{i}f(\mathbf{p}_{i})\right\vert =\left\vert
\sum_{i=1}^{2n}(-1)^{i}\left[ f-g\right] (\mathbf{p}_{i})\right\vert \leq
2n\left\Vert f-g\right\Vert .
\end{equation*}%
Therefore, $\left\Vert f-g_{0}\right\Vert \leq \left\Vert f-g\right\Vert $
for any $g\in \mathcal{R}$. That is, $g_{0}$ is a best approximation.
The second case occurs when there is no closed path extremal for $%
f-g_{0}$, but there exists an infinite path $(\mathbf{p}_{1},\mathbf{p}%
_{2},...)$ extremal for $f-g_{0}$. To analyze this case, consider the
following linear functional
\begin{equation*}
L_{q}:C(Q)\rightarrow \mathbb{R}\text{, \ }L_{q}(F)=\frac{1}{n}%
\sum_{i=1}^{n}(-1)^{i}F(\mathbf{q}_{i}),
\end{equation*}%
where $q=\{\mathbf{q}_{1},...,\mathbf{q}_{n}\}$ is a finite path in $Q$. It
is easy to see that the norm $\left\Vert L_{q}\right\Vert \leq 1$, and $%
\left\Vert L_{q}\right\Vert =1$ if and only if the set of points of $q$ with
odd indices, $O=\{\mathbf{q}_{i}\in q:$ $i$ \textit{is odd}$\}$, does
not intersect the set of points of $q$ with even indices, $E=\{\mathbf{q}%
_{i}\in q:$ $i$ \textit{is even}$\}$. Indeed, from the definition
of $L_{q}$ it follows that $\left\vert L_{q}(F)\right\vert \leq \left\Vert
F\right\Vert $ for all functions $F\in C(Q)$, whence $\left\Vert
L_{q}\right\Vert \leq 1.$ If $O\cap E=\varnothing $, then for a function $%
F_{0}$ with the property $F_{0}(\mathbf{q}_{i})=-1$ if $i$ is odd, $F_{0}(%
\mathbf{q}_{i})=1$ if $i$ is even, and $-1<F_{0}(x)<1$ elsewhere on $Q,$ we
have $\left\vert L_{q}(F_{0})\right\vert =\left\Vert F_{0}\right\Vert .$
Hence, $\left\Vert L_{q}\right\Vert =1$. Recall that such a function $F_{0}$
exists by Urysohn's lemma.
Note that if $q$ is a closed path, then $L_{q}$ annihilates all members of
the class $\mathcal{R}$. But in general, when $q$ is not closed, we do not
have the equality $L_{q}(g)=0,$ for all members $g\in \mathcal{R}$.
Nonetheless, this functional has the important property that
\begin{equation*}
\left\vert L_{q}(g_{1}+g_{2})\right\vert \leq \frac{2}{n}(\left\Vert
g_{1}\right\Vert +\left\Vert g_{2}\right\Vert ),\eqno(1.26)
\end{equation*}%
where $g_{1}$ and $g_{2}$ are ridge functions with the directions $\mathbf{a}%
_{1}$ and $\mathbf{a}_{2}$, respectively, that is, $g_{1}=g_{1}(\mathbf{a}%
_{1}\cdot \mathbf{x})$ and $g_{2}=g_{2}(\mathbf{a}_{2}\cdot \mathbf{x}).$
This property is important in the sense that if $n$ is sufficiently large,
then the functional $L_{q}$ is close to an annihilating functional. To prove
(1.26), note that $\left\vert L_{q}(g_{1})\right\vert \leq \frac{2}{n}%
\left\Vert g_{1}\right\Vert $ and $\left\vert L_{q}(g_{2})\right\vert \leq
\frac{2}{n}\left\Vert g_{2}\right\Vert $. These estimates become obvious if
one considers the chain of equalities $g_{1}(\mathbf{a}_{1}\cdot \mathbf{q}%
_{1})=g_{1}(\mathbf{a}_{1}\cdot \mathbf{q}_{2}),$ $g_{1}(\mathbf{a}_{1}\cdot
\mathbf{q}_{3})=g_{1}(\mathbf{a}_{1}\cdot \mathbf{q}_{4}),...$(or $g_{1}(%
\mathbf{a}_{1}\cdot \mathbf{q}_{2})=g_{1}(\mathbf{a}_{1}\cdot \mathbf{q}%
_{3}),$ $g_{1}(\mathbf{a}_{1}\cdot \mathbf{q}_{4})=g_{1}(\mathbf{a}_{1}\cdot
\mathbf{q}_{5}),...$) for $g_{1}(\mathbf{a}_{1}\cdot \mathbf{x})$ and the
corresponding chain of equalities for $g_{2}(\mathbf{a}_{2}\cdot \mathbf{x})$%
.
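The cancellation behind property (1.26) can be checked numerically. The following small sketch (an illustration we add here; the zigzag path and the ridge functions $\sin$, $\cos$ are hypothetical choices, and we take the coordinate directions $\mathbf{a}_{1}=(1,0)$, $\mathbf{a}_{2}=(0,1)$) verifies the bound $\left\vert L_{q}(g_{i})\right\vert \leq \frac{2}{n}\left\Vert g_{i}\right\Vert$:

```python
# Numerical illustration of the telescoping bound |L_q(g)| <= (2/n)||g||
# for a ridge function g along a path q (assumed example, not from the text).
import math

# A path with respect to a1 = (1,0), a2 = (0,1): consecutive points
# alternately share their first or their second coordinate.
q = [(0, 0), (0, 1), (2, 1), (2, 3), (5, 3), (5, 6)]
n = len(q)

def L(F):
    """L_q(F) = (1/n) * sum_i (-1)^i F(q_i), with indices starting at i = 1."""
    return sum((-1) ** i * F(p) for i, p in enumerate(q, start=1)) / n

g1 = lambda p: math.sin(p[0])   # ridge function of a1 . x = x1
g2 = lambda p: math.cos(p[1])   # ridge function of a2 . x = x2

# sup-norms of g1, g2 over the path (enough for the bound along q)
norm1 = max(abs(g1(p)) for p in q)
norm2 = max(abs(g2(p)) for p in q)

print(abs(L(g1)))  # consecutive terms cancel in pairs: exactly 0 here
print(abs(L(g2)))  # only the two boundary terms survive
assert abs(L(g1)) <= 2 * norm1 / n + 1e-12
assert abs(L(g2)) <= 2 * norm2 / n + 1e-12
```

For $g_{1}$ the alternating sum cancels completely along this path; for $g_{2}$ the pairing is shifted by one, so the two boundary terms survive, which is exactly the source of the factor $2/n$.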
Now consider the infinite path $p=(\mathbf{p}_{1},\mathbf{p}_{2},...)$ and
form the finite paths $p_{k}=(\mathbf{p}_{1},...,\mathbf{p}_{k}),$ $%
k=1,2,... $. For ease of notation, let us set $L_{k}=L_{p_{k}}.$ The
sequence $\{L_{_{k}}\}_{k=1}^{\infty }$ is a subset of the unit ball of the
conjugate space $C^{\ast }(Q).$ By the Banach-Alaoglu theorem, the unit ball
is weak$^{\text{*}}$ compact in the weak$^{\text{*}}$ topology of $C^{\ast
}(Q)$ (see \cite[p.68]{122}). It follows from this theorem that
the sequence $\{L_{_{k}}\}_{k=1}^{\infty }$ must have weak$^{\text{*}}$
cluster points. Suppose $L^{\ast }$ denotes one of them. Without loss of
generality we may assume that $L_{k}\overset{weak^{\ast }}{\longrightarrow }%
L^{\ast },$ as $k\rightarrow \infty .$ From (1.26) it follows that $L^{\ast
}(g_{1}+g_{2})=0.$ That is, $L^{\ast }\in \mathcal{R}^{\bot },$ where the
symbol $\mathcal{R}^{\bot }$ stands for the annihilator of $\mathcal{R}$.
Since in addition $\left\Vert L^{\ast }\right\Vert \leq 1,$ we can write that
\begin{equation*}
\left\vert L^{\ast }(f)\right\vert =\left\vert L^{\ast }(f-g)\right\vert
\leq \left\Vert f-g\right\Vert ,\eqno(1.27)
\end{equation*}%
for all functions $g\in \mathcal{R}.$ On the other hand, since the infinite
path $p$ is extremal for $f-g_{0}$
\begin{equation*}
\left\vert L_{k}(f-g_{0})\right\vert =\left\Vert f-g_{0}\right\Vert ,\text{ }%
k=1,2,...
\end{equation*}%
Therefore,
\begin{equation*}
\left\vert L^{\ast }(f)\right\vert =\left\vert L^{\ast }(f-g_{0})\right\vert
=\left\Vert f-g_{0}\right\Vert .\eqno(1.28)
\end{equation*}%
From (1.27) and (1.28) we conclude that
\begin{equation*}
\left\Vert f-g_{0}\right\Vert \leq \left\Vert f-g\right\Vert ,
\end{equation*}%
for all $g\in \mathcal{R}.$ In other words, $g_{0}$ is a best approximation
to $f$. We proved the sufficiency of the theorem.
\textit{Necessity.} The proof of this part is mainly based on the following
result of Singer \cite{S}: Let $X$ be a compact space, $U$
be a linear subspace of $C(X)$, $f\in C(X)\backslash U$ and $u_{0}\in U.$
Then $u_{0}$ is a best approximation to $f$ if and only if there exists a
regular Borel measure $\mu $ on $X$ such that
(1) The total variation $\left\Vert \mu \right\Vert =1$;
(2) $\mu $ is orthogonal to the subspace $U$, that is, $\int_{X}ud\mu =0$
for all $u\in U$;
(3) For the Jordan decomposition $\mu =\mu ^{+}-\mu ^{-}$,
\begin{equation*}
f(x)-u_{0}(x)=\left\{
\begin{array}{c}
\left\Vert f-u_{0}\right\Vert \text{ for }x\in S^{+}, \\
-\left\Vert f-u_{0}\right\Vert \text{ for }x\in S^{-},%
\end{array}%
\right.
\end{equation*}%
where $S^{+}$ and $S^{-}$ are closed supports of the positive measures $\mu
^{+}$ and $\mu ^{-}$, respectively.
Let us show how we use this result in the proof of the necessity part of our
theorem. Assume $g_{0}\in \mathcal{R}$ is a best approximation. For the
subspace $\mathcal{R},$ the existence of a measure $\mu $ satisfying the
conditions (1)-(3) is a direct consequence of Singer's result. Let $\mathbf{x}%
_{0}$ be any point in $S^{+}.$\ Consider the point $y_{0}=\mathbf{a}%
_{1}\cdot \mathbf{x}_{0}$ and a $\delta $-neighborhood of $y_{0}$. That is,
choose an arbitrary $\delta >0$ and consider the set $I_{\delta
}=(y_{0}-\delta ,y_{0}+\delta )\cap \mathbf{a}_{1}\cdot Q.$ Here, $\mathbf{a}%
_{1}\cdot Q=\{\mathbf{a}_{1}\cdot \mathbf{x}:$ $\mathbf{x}\in Q\}.$ For any
subset $E\subset \mathbb{R}$, put
\begin{equation*}
E^{i}=\{\mathbf{x}\in Q:\mathbf{a}_{i}\cdot \mathbf{x}\in E\},\text{ }i=1,2.%
\text{ }
\end{equation*}
Clearly, for some sets $E,$ one or both of the sets $E^{i}$ may be empty. Since
$I_{\delta }^{1}\cap S^{+}$ is not empty (note that $\mathbf{x}_{0}\in
I_{\delta }^{1}$), it follows that $\mu ^{+}(I_{\delta }^{1})>0.$ At the
same time $\mu (I_{\delta }^{1})=0,$ since $\mu $ is orthogonal to all
functions $g_{1}(\mathbf{a}_{1}\cdot \mathbf{x}).$ Therefore, $\mu
^{-}(I_{\delta }^{1})>0.$ We conclude that $I_{\delta }^{1}\cap S^{-}$ is
not empty. Denote this intersection by $A_{\delta }.$ Letting $\delta $ tend to $%
0,$ we obtain a set $A$ which is a subset of $S^{-}$ and has the property
that for each $\mathbf{x}\in A,$ we have $\mathbf{a}_{1}\cdot \mathbf{x}=%
\mathbf{a}_{1}\cdot \mathbf{x}_{0}.$ Fix any point $\mathbf{x}_{1}\in A$.
Changing $\mathbf{a}_{1}$, $\mu ^{+}$, $S^{+}$ to $\mathbf{a}_{2}$, $\mu
^{-} $ and $S^{-}$ correspondingly, repeat the above process with the point $%
y_{1}=\mathbf{a}_{2}\cdot \mathbf{x}_{1}$ and a $\delta $-neighborhood of $%
y_{1}$. Then we obtain a point $\mathbf{x}_{2}\in S^{+}$ such that $\mathbf{a%
}_{2}\cdot \mathbf{x}_{2}=\mathbf{a}_{2}\cdot \mathbf{x}_{1}.$ Continuing
this process, one can construct points $\mathbf{x}_{3}$, $\mathbf{x}_{4}$,
and so on. Note that the set of all constructed points $\mathbf{x}_{i}$, $%
i=0,1,...,$ forms a path. By Singer's result above, this path is extremal for the
function $f-g_{0}$. This proves the necessity and hence Theorem 1.4.
\end{proof}
Theorem 1.4, in a more general setting, was proven by Pinkus \cite[p.99]{117}
under the additional assumption that $Q$ is convex. The convexity assumption
was made to guarantee continuity of the following functions
\begin{equation*}
g_{1,i}(t)=\max_{\substack{ \mathbf{x}\in Q \\ \mathbf{a}_{i}\cdot \mathbf{x%
}=t}}F(\mathbf{x})\ \ \text{and }\ g_{2,i}(t)=\min\limits_{\substack{
\mathbf{x}\in Q \\ \mathbf{a}_{i}\cdot \mathbf{x}=t}}F(\mathbf{x}),\text{ }%
i=1,2,
\end{equation*}%
where $F$ is an arbitrary continuous function on $Q$. Note that in the proof
of Theorem 1.4 we did not need continuity of these functions.
\bigskip
It is well known that characterization theorems of this type are
essential in approximation theory. Chebyshev was the first to prove a
similar result, for polynomial approximation. Khavinson \cite{79}
characterized extremal elements in a special case of the problem
considered here, namely the approximation of a continuous bivariate
function $f\left( {x,y}\right) $ by functions of the form $\varphi \left( {x}%
\right) +\psi \left( {y}\right)$.
\bigskip
\subsection{Construction of an extremal element}
In 1951, Diliberto and Straus \cite{26} established a formula for the error
of approximation of a bivariate function by sums of univariate functions. Their
formula contains the supremum over all closed bolts (see Section 3.3.1).
Although this formula is
valid for all continuous functions, it is not easily computable; it is
therefore of limited use when one seeks the precise
value of the approximation error. After this general result, some authors
sought easily computable formulas for the approximation error by
considering not the whole space of continuous functions, but suitable subsets thereof
(see, for example, \cite{4,7,55,56,79,121}). These subsets were chosen so
as to allow a precise and easy computation of the approximation
error. Since the set of ridge functions contains the univariate functions,
one may ask for explicit formulas for the error of
approximation of a multivariate function by sums of ridge functions.
In this section, we show how Theorem 1.3 (or 1.4) can be used to find the
approximation error and to construct an extremal element in the problem of
approximation by sums of ridge functions. We restrict ourselves to $%
\mathbb{R}^{2}.$ To make the problem precise, let $\Omega $ be a
compact set in $\mathbb{R}^{2},$ $f \in C\left( {%
\Omega }\right)$, and let $\mathbf{a}=\left( {a_{1},a_{2}}\right)$ and $\mathbf{b}%
=\left( {b_{1},b_{2}}\right)$ be linearly independent vectors.
Consider the approximation of $f$ by functions from $\mathcal{R}=\mathcal{R}\left(
\mathbf{a},\mathbf{b}\right)$. We want, under
suitable conditions on $f$ and $\Omega $, to establish a formula for an easy
and direct computation of the approximation error $E\left(f,\mathcal{R}\right)$.
\bigskip
\textbf{Theorem 1.5.} \textit{Let
\begin{equation*}
\Omega =\left\{ \mathbf{x}\in \mathbb{R}^{2}:c_{1}\leq \mathbf{a}\cdot
\mathbf{x}\leq d_{1},\ \ c_{2}\leq \mathbf{b}\cdot \mathbf{x}\leq
d_{2}\right\} ,
\end{equation*}%
where $c_{1}<d_{1}$ and $c_{2}<d_{2}$. Let a function $f(\mathbf{x})\in
C(\Omega )$ have the continuous partial derivatives $\frac{\partial ^{2}f}{%
\partial x_{1}^{2}},\frac{\partial ^{2}f}{\partial x_{1}\partial x_{2}},%
\frac{\partial ^{2}f}{\partial x_{2}^{2}}$ and for any $\mathbf{x}\in \Omega
$
\begin{equation*}
\frac{\partial ^{2}f}{\partial x_{1}\partial x_{2}}\left(
a_{1}b_{2}+a_{2}b_{1}\right) -\frac{\partial ^{2}f}{\partial x_{1}^{2}}%
a_{2}b_{2}-\frac{\partial ^{2}f}{\partial x_{2}^{2}}a_{1}b_{1}\geq 0.
\end{equation*}%
Then
\begin{equation*}
E\left(f,\mathcal{R}\right)=\frac{1}{4}\left(
f_{1}(c_{1},c_{2})+f_{1}(d_{1},d_{2})-f_{1}(c_{1},d_{2})-f_{1}(d_{1},c_{2})%
\right) ,
\end{equation*}%
where
\begin{equation*}
f_{1}(y_{1},y_{2})=f\left( \frac{y_{1}b_{2}-y_{2}a_{2}}{a_{1}b_{2}-a_{2}b_{1}%
},\frac{y_{2}a_{1}-y_{1}b_{1}}{a_{1}b_{2}-a_{2}b_{1}}\right) .\eqno(1.29)
\end{equation*}%
}
\bigskip
\begin{proof} Introduce the new variables
\begin{equation*}
y_{1}=a_{1}x_{1}+a_{2}x_{2},\ \ y_{2}=b_{1}x_{1}+b_{2}x_{2}.\eqno(1.30)
\end{equation*}
Since the vectors $(a_{1},a_{2})$ and $(b_{1},b_{2})$ are linearly
independent, for any $(y_{1},y_{2})\in Y$, where $Y=[c_{1},d_{1}]\times
\lbrack c_{2},d_{2}]$, there exists a unique solution $(x_{1},x_{2})\in
\Omega $ of the system (1.30). The coordinates of this solution are
\begin{equation*}
x_{1}=\frac{y_{1}b_{2}-y_{2}a_{2}}{a_{1}b_{2}-a_{2}b_{1}},\qquad \ x_{2}=%
\frac{y_{2}a_{1}-y_{1}b_{1}}{a_{1}b_{2}-a_{2}b_{1}}.\eqno(1.31)
\end{equation*}
The linear transformation (1.31) transforms the function $f(x_{1},x_{2})$ to
the function $f_{1}(y_{1},y_{2})$. Consider the approximation of $%
f_{1}(y_{1},y_{2})$ from the set
\begin{equation*}
\mathcal{Z}=\left\{ z_{1}(y_{1})+z_{2}(y_{2}):z_{i}\in C(\mathbb{R}),\
i=1,2\right\} .
\end{equation*}
It is easy to see that
\begin{equation*}
E\left( f,\mathcal{R}\right) =E\left( f_{1},\mathcal{Z}\right) .\eqno(1.32)
\end{equation*}
With each rectangle $S=\lbrack u_{1} ,v_{1} ]\times \lbrack u_{2} ,v_{2}
]\subset Y$ we associate the functional
\begin{equation*}
L\left(h, S\right) =\frac{1}{4} \left(h(u_{1} ,u_{2} )+h(v_{1} ,v_{2}
)-h(u_{1} ,v_{2} )-h (v_{1} ,u_{2})\right),\ \ h\in C(Y).
\end{equation*}
This functional has the following obvious properties:
(i) $L(z,S)=0$ for any $z\in \mathcal{Z}$ and $S\subset Y$.
(ii) For any point $(y_{1} ,y_{2} )\in Y$, $L(f_1,
Y)=\sum\limits_{i=1}^{4}L(f_1, S_{i} ) $, where $S_{1} =[c_{1} ,y_{1}]\times
[c_{2} ,y_{2}],$ $S_{2} =[y_{1} ,d_{1}]\times [y_{2} ,d_{2}],$ $S_{3}
=[c_{1} ,y_{1}]\times [y_{2} ,d_{2}],$ $S_{4} =[y_{1} ,d_{1}]\times [c_{2}
,y_{2}]$.
By the conditions of the theorem, it is not difficult to verify that
\begin{equation*}
\frac{\partial ^{2} f_1}{\partial y_{1} \partial y_{2} } \geq 0\ \
\mbox{for
any}\ \ (y_{1} ,y_{2} )\in Y.
\end{equation*}
Integrating both sides of the last inequality over an arbitrary rectangle $%
S=[u_{1},v_{1}]\times \lbrack u_{2},v_{2}]\subset Y$, we obtain that
\begin{equation*}
L\left( f_{1},S\right) \geq 0.\eqno(1.33)
\end{equation*}%
Define the function
\begin{equation*}
f_{2}(y_{1},y_{2})=L\left( f_{1},S_{1}\right) +L\left( f_{1},S_{2}\right)
-L\left( f_{1},S_{3}\right) -L\left( f_{1},S_{4}\right) .\eqno(1.34)
\end{equation*}
It is not difficult to verify that the function $f_{1}-f_{2}$ belongs to $%
\mathcal{Z}$. Hence
\begin{equation*}
E\left( f_{1},\mathcal{Z}\right) =E\left( f_{2},\mathcal{Z}\right) .\eqno%
(1.35)
\end{equation*}
Calculate the norm $\left\Vert f_{2}\right\Vert $. From the property (ii),
it follows that
\begin{equation*}
f_{2}(y_{1},y_{2})=L(f_{1},Y)-2(L(f_{1},S_{3})+L(f_{1},S_{4}))
\end{equation*}%
and
\begin{equation*}
f_{2}(y_{1},y_{2})=2\left( L\left( f_{1},S_{1}\right) +L\left(
f_{1},S_{2}\right) \right) -L\left( f_{1},Y\right) .
\end{equation*}%
From the last equalities and (1.33), we obtain that
\begin{equation*}
\left\vert f_{2}(y_{1},y_{2})\right\vert \leq L\left( f_{1},Y\right) ,\ %
\mbox{for any}\ (y_{1},y_{2})\in Y.
\end{equation*}%
On the other hand, one can check that
\begin{equation*}
f_{2}(c_{1},c_{2})=f_{2}(d_{1},d_{2})=L\left( f_{1},Y\right) \eqno(1.36)
\end{equation*}%
and
\begin{equation*}
f_{2}(c_{1},d_{2})=f_{2}(d_{1},c_{2})=-L\left( f_{1},Y\right) .\eqno(1.37)
\end{equation*}%
Therefore,
\begin{equation*}
\left\Vert f_{2}\right\Vert =L\left( f_{1},Y\right) .\eqno(1.38)
\end{equation*}%
Note that the points $%
(c_{1},c_{2}),(c_{1},d_{2}),(d_{1},d_{2}),(d_{1},c_{2}) $ in the given order
form a closed path with respect to the directions $(0,1)$ and $(1,0)$. We
conclude from (1.36)-(1.38) that this path is extremal for $f_{2}$. By
Theorem 1.3, $z_{0}=0$ is a best approximation to $f_{2}$. Hence
\begin{equation*}
E\left( f_{2},\mathcal{Z}\right) =L\left( f_{1},Y\right) .\eqno(1.39)
\end{equation*}
Now from (1.32),(1.35) and (1.39) we finally conclude that
\begin{equation*}
E\left( f,\mathcal{R}\right) =L\left( f_{1},Y\right) =\frac{1}{4}\left(
f_{1}(c_{1},c_{2})+f_{1}(d_{1},d_{2})-f_{1}(c_{1},d_{2})-f_{1}(d_{1},c_{2})%
\right) ,
\end{equation*}%
which is the desired result. \end{proof}
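To make the formula of Theorem 1.5 concrete, the following sketch (a worked example we add here, not from the text) takes $f(x_{1},x_{2})=x_{1}x_{2}$ on the unit square with the coordinate directions $\mathbf{a}=(1,0)$, $\mathbf{b}=(0,1)$. Then $y_{1}=x_{1}$, $y_{2}=x_{2}$, $f_{1}=f$, and the hypothesis of the theorem reduces to $\partial ^{2}f/\partial x_{1}\partial x_{2}=1\geq 0$:

```python
# Worked instance of Theorem 1.5 (assumed example, not from the text):
# f(x1, x2) = x1*x2 on [0,1]^2, coordinate directions a = (1,0), b = (0,1).
import math

f1 = lambda y1, y2: y1 * y2
c1, d1, c2, d2 = 0.0, 1.0, 0.0, 1.0

# The rectangle functional L(h, Y) from the proof; it annihilates every
# separable function z1(y1) + z2(y2), which is property (i).
def L(h):
    return 0.25 * (h(c1, c2) + h(d1, d2) - h(c1, d2) - h(d1, c2))

z = lambda y1, y2: math.exp(y1) + y2 ** 3   # an element of the class Z
print(L(z))    # ~0 (property (i))
print(L(f1))   # 0.25 = E(f, R) by Theorem 1.5

assert abs(L(z)) < 1e-12
assert abs(L(f1) - 0.25) < 1e-12
```

Thus for this $f$ the approximation error by sums $z_{1}(x_{1})+z_{2}(x_{2})$ equals $\frac{1}{4}$.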
\textbf{Corollary 1.4.}\ \textit{Let all the conditions of Theorem 1.5 hold,
and let $f_{1}(y_{1},y_{2})$ be the function defined in (1.29). Then the
function $g_{0}(y_{1},y_{2})=g_{1,0}(y_{1})+g_{2,0}(y_{2})$, where
\begin{equation*}
g_{1,0}(y_{1})=\frac{1}{2}f_{1}(y_{1},c_{2})+\frac{1}{2}f_{1}(y_{1},d_{2})-%
\frac{1}{4}f_{1}(c_{1},c_{2})-\frac{1}{4}f_{1}(d_{1},d_{2}),
\end{equation*}%
\begin{equation*}
g_{2,0}(y_{2})=\frac{1}{2}f_{1}(c_{1},y_{2})+\frac{1}{2}f_{1}(d_{1},y_{2})-%
\frac{1}{4}f_{1}(c_{1},d_{2})-\frac{1}{4}f_{1}(d_{1},c_{2})
\end{equation*}%
and $y_{1}=a_{1}x_{1}+a_{2}x_{2}$, $y_{2}=b_{1}x_{1}+b_{2}x_{2}$, is a best
approximation from the set $\mathcal{R}(\mathbf{a},\mathbf{b})$ to the function $f$.}
\bigskip
\begin{proof} It is not difficult to verify that the function $%
f_{2}(y_{1},y_{2})$ defined in (1.34) has the form
\begin{equation*}
f_{2}(y_{1},y_{2})=f_{1}(y_{1},y_{2})-g_{1,0}(y_{1})-g_{2,0}(y_{2}).
\end{equation*}
On the other hand, we know from the proof of Theorem 1.5 that
\begin{equation*}
E(f_{1},\mathcal{Z})=\left\Vert f_{2}\right\Vert .
\end{equation*}%
Therefore, the function $g_{1,0}(y_{1})+g_{2,0}(y_{2})$ is a best
approximation to $f_{1}$. Hence the function $g_{1,0}(\mathbf{a}\cdot \mathbf{%
x})+g_{2,0}(\mathbf{b}\cdot \mathbf{x})$ is a best approximation (an extremal element) from $%
\mathcal{R}(\mathbf{a},\mathbf{b})$ to $f(\mathbf{x})$. \end{proof}
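Continuing the $f(x_{1},x_{2})=x_{1}x_{2}$ example above (an illustration we add here, not from the text), one can check numerically that the best approximant of Corollary 1.4 attains the error $E(f,\mathcal{R})=\frac{1}{4}$ computed via Theorem 1.5:

```python
# Grid check (assumed example, not from the text) that the approximant of
# Corollary 1.4 for f(x1,x2) = x1*x2 on [0,1]^2 has sup-error exactly 1/4.
f1 = lambda y1, y2: y1 * y2
c1, d1, c2, d2 = 0.0, 1.0, 0.0, 1.0

def g10(y1):
    return (0.5 * f1(y1, c2) + 0.5 * f1(y1, d2)
            - 0.25 * f1(c1, c2) - 0.25 * f1(d1, d2))   # = y1/2 - 1/4 here

def g20(y2):
    return (0.5 * f1(c1, y2) + 0.5 * f1(d1, y2)
            - 0.25 * f1(c1, d2) - 0.25 * f1(d1, c2))   # = y2/2 here

# f1 - g10 - g20 = (y1 - 1/2)(y2 - 1/2), maximized at the four corners.
N = 200
grid = [i / N for i in range(N + 1)]
err = max(abs(f1(y1, y2) - g10(y1) - g20(y2)) for y1 in grid for y2 in grid)
print(err)  # 0.25

assert abs(err - 0.25) < 1e-12
```

The residual $f_{1}-g_{1,0}-g_{2,0}=(y_{1}-\tfrac12)(y_{2}-\tfrac12)$ equioscillates on the closed path formed by the four corners, in agreement with Theorem 1.3.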
\textbf{Remark 1.1.} Rivlin and Sibner \cite{121}, and Babaev \cite{7}
proved Theorem 1.5 for the case in which $\mathbf{a}${\ and\ }$\mathbf{b}$
are the coordinate vectors. Our proof of Theorem 1.5 is different, short and
elementary. Moreover, it has turned out to be useful in constructing an
extremal element (see the proof of Corollary 1.4).
\bigskip
\subsection{Density of ridge functions and some problems}
Let $\mathbf{a}^{1}${\ and }$\mathbf{a}^{2}$ be nonzero directions in $%
\mathbb{R}^{d}$. One may ask the following question: are there cases in
which the set $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ is
dense in the space of all continuous functions? Undoubtedly, a positive
answer depends on the geometrical structure of compact sets over which all
the considered functions are defined. This problem may be interesting in the
theory of partial differential equations. Take, for example, equation
(1.10). A positive answer to the problem means that for any continuous
function $f$ there exist solutions of the given equation uniformly
converging to $f$.
It should be remarked that our problem is a special case of the problem
considered by Marshall and O'Farrell. In \cite{107}, they obtained a
necessary and sufficient condition for a sum $A_{1}+A_{2}$ of two
subalgebras to be dense in $C(U)$, where $C(U)$ denotes the space of
real-valued continuous functions on a compact Hausdorff space $U$. Below we
describe Marshall and O'Farrell's result for sums of ridge functions.
Let $X$ be a compact subset of $\mathbb{R}^{d}.$ The relation on $X$,
defined by setting $\mathbf{x}\approx \mathbf{y}$ if $\mathbf{x}$ and $%
\mathbf{y}$ belong to some path in $X$, is an equivalence relation. We call
the equivalence classes orbits. \bigskip
\textbf{Theorem 1.6.} \textit{Let $X$ be a compact subset of $\mathbb{R}^{d}$
with all its orbits closed. The set $\mathcal{R}\left( \mathbf{a}^{1},%
\mathbf{a}^{2}\right) $ is dense in $C(X)$ if and only if $X$ contains no
closed path with respect to the directions $\mathbf{a}^{1}${\ and }$\mathbf{a%
}^{2}$.}
\bigskip
The proof follows immediately from Proposition 2 in \cite{108}, established
for the sum of two algebras. Since that proposition was stated without proof,
for completeness of the exposition we give a proof of Theorem 1.6.
\begin{proof}
\textit{Necessity}. If $X$ has closed paths, then $X$ has a closed path $%
p^{\prime }=\left( \mathbf{p}_{1}^{\prime },...,\mathbf{p}_{2m}^{\prime
}\right) $ such that all the points $\mathbf{p}_{1}^{\prime },...,\mathbf{p}%
_{2m}^{\prime }$ are distinct. In fact, such a special path can be obtained
from any closed path $p=\left( \mathbf{p}_{1},...,\mathbf{p}_{2n}\right) $
by the following simple algorithm: if the points of the path $p$ are not all
distinct, let $i$ and $k>0$ be the minimal indices such that $\mathbf{p}_{i}=%
\mathbf{p}_{i+2k}$; delete from $p$ the subsequence $\mathbf{p}_{i+1},...,%
\mathbf{p}_{i+2k}$ and call the obtained path $p$ again; repeat the above step
until all points of $p$ are distinct; set $p^{\prime }:=p$. By Urysohn's
lemma, there exists a continuous function $h=h(\mathbf{x})$ on $X$ such
that $h(\mathbf{p}_{i}^{\prime })=1$, $i=1,3,...,2m-1$, $h(\mathbf{p}%
_{i}^{\prime })=-1$, $i=2,4,...,2m$, and $-1<h(\mathbf{x})<1$ elsewhere.
Consider the measure
\begin{equation*}
\mu _{p^{\prime }}=\frac{1}{2m}\sum_{i=1}^{2m}(-1)^{i-1}\delta _{\mathbf{p}%
_{i}^{\prime }}\text{ ,}
\end{equation*}%
where $\delta _{\mathbf{p}_{i}^{\prime }}$ is a point mass at $\mathbf{p}%
_{i}^{\prime }$. For this measure, $\int\limits_{X}hd\mu _{p^{\prime }}=1$
and $\int\limits_{X}gd\mu _{p^{\prime }}=0$ for all functions $g\in \mathcal{%
R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $. Thus the set $\mathcal{R}%
\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ cannot be dense in\textit{\ }%
$C(X)$.
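The annihilating measure $\mu _{p^{\prime }}$ can be checked on a concrete closed path. The following sketch (an illustration we add here; the path and the test functions are hypothetical choices, with directions $\mathbf{a}^{1}=(1,0)$, $\mathbf{a}^{2}=(0,1)$) verifies that $\mu _{p^{\prime }}$ vanishes on $\mathcal{R}(\mathbf{a}^{1},\mathbf{a}^{2})$ but not on all of $C(X)$:

```python
# Numerical check (assumed example, not from the text) that the measure
# mu_{p'} from the proof annihilates sums of ridge functions.  The closed
# path (0,0),(0,1),(1,1),(1,0) is taken with respect to the directions
# a1 = (1,0) and a2 = (0,1); mu = (1/2m) * sum_i (-1)^(i-1) delta_{p_i}.
import math

path = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
weights = [(-1) ** i / len(path) for i in range(len(path))]  # +, -, +, -

def integrate(F):
    """Integral of F against the discrete measure mu_{p'}."""
    return sum(w * F(p) for w, p in zip(weights, path))

g = lambda p: math.sin(p[0]) + math.exp(p[1])  # a sum of ridge functions
h = lambda p: p[0] * p[1]                      # not a sum of ridge functions

print(integrate(g))  # ~0: mu_{p'} annihilates R(a1, a2)
print(integrate(h))  # 0.25 != 0, so R(a1, a2) is not dense on such an X

assert abs(integrate(g)) < 1e-12
assert abs(integrate(h) - 0.25) < 1e-12
```

The nonzero value on $h$ plays the role of the function $h$ in the proof: a continuous function that $\mu _{p^{\prime }}$ separates from $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $.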
\textit{Sufficiency}. We are going to prove that the only annihilating
regular Borel measure for $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}%
^{2}\right) $ is the zero measure. Suppose, contrary to this assumption,
there exists a nonzero annihilating measure on $X$ for $\mathcal{R}\left(
\mathbf{a}^{1},\mathbf{a}^{2}\right) $. Denote by $S$ the class of such measures with
total variation at most $1$. Clearly, $S$ is weak-*
compact and convex. By the Krein-Milman theorem, there exists an extreme
measure $\mu $ in $S$. Since the orbits are closed, the extreme measure $\mu $ must be supported
on a single orbit; denote this orbit by $T$.
For $i=1,2,$ let $X_{i}$ be the quotient space of $X$ obtained by
identifying the points $\mathbf{y}$ and $\mathbf{z}$ whenever $\mathbf{a}^{i}%
{\cdot }\mathbf{y}=\mathbf{a}^{i}{\cdot }\mathbf{z}$. Let $\pi _{i}$ be the
natural projection of $X$ onto $X_{i}$. For a fixed point $t\in T$ set $T_{1}=\{t\}$, $T_{2}=\pi _{1}^{-1}\left( \pi _{1}T_{1}\right) $, $T_{3}=\pi
_{2}^{-1}\left( \pi _{2}T_{2}\right) $, $T_{4}=\pi _{1}^{-1}\left( \pi
_{1}T_{3}\right) $, $\ldots$ Obviously, $T_{1}\subset T_{2}\subset T_{3}\subset
\cdots$, and the union of the sets $T_{k}$ is the orbit $T$ carrying $\mu$. Therefore, for some $k\in \mathbb{N}$, $\left\vert \mu
\right\vert (T_{2k})>0$, where $\left\vert \mu \right\vert $ is a total
variation measure of $\mu$. Since $\mu $ is orthogonal to every continuous
function of the form ${g\left( \mathbf{a}^{1}{\cdot }\mathbf{x}\right) }$, $%
\mu (T_{2k})=0$. From the Jordan decomposition $\mu (T_{2k})=\mu
^{+}(T_{2k})-\mu ^{-}(T_{2k})$ it follows that $\mu ^{+}(T_{2k})=\mu
^{-}(T_{2k})>0$. Fix a Borel subset $S_{0}\subset T_{2k}$ such that $\mu
^{+}(S_{0})>0$ and $\mu ^{-}(S_{0})=0$. Since $\mu $ is orthogonal to every
continuous function of the form ${g\left( \mathbf{a}^{2}{\cdot }\mathbf{x}%
\right) }$, $\mu (\pi _{2}^{-1}\left( \pi _{2}S_{0}\right) )=0.$ Therefore,
one can choose a Borel set $S_{1}$ such that $S_{1}\subset \pi
_{2}^{-1}\left( \pi _{2}S_{0}\right) \subset T_{2k+1}$, $S_{1}\cap
S_{0}=\varnothing $, $\mu ^{+}(S_{1})=0$, $\mu ^{-}(S_{1})\geqslant \mu
^{+}(S_{0})$. In the same way one can choose a Borel set $S_{2}$ such that $%
S_{2}\subset \pi _{1}^{-1}\left( \pi _{1}S_{1}\right) \subset T_{2k+2}$, $%
S_{2}\cap S_{1}=\varnothing $, $\mu ^{-}(S_{2})=0$, $\mu
^{+}(S_{2})\geqslant \mu ^{-}(S_{1})$, and so on.
The sets $S_{0},S_{1},S_{2},...$ are pairwise disjoint. For otherwise, there
would exist positive integers $n$ and $m,$ with $n<m$ and a path $%
(y_{n},y_{n+1},...,y_{m})$ such that $y_{i}\in S_{i}$ for $i=n,...,m$ and $%
y_{m}\in S_{m}\cap S_{n}$. But then there would exist paths $%
(z_{1},z_{2},...,z_{n-1},y_{n})$ and $(z_{1},z_{2}^{^{\prime
}},...,z_{n-1}^{^{\prime }},y_{m})$ with $z_{i}$ and $z_{i}^{^{\prime }}$ in
$T_{i}$ for $i=2,...,n-1.$ Hence, the set
\begin{equation*}
\{z_{1},z_{2},...,z_{n-1},y_{n},y_{n+1},...,y_{m},z_{n-1}^{^{\prime
}},...,z_{2}^{^{\prime }},z_{1}\}
\end{equation*}%
would contain a closed path. This would contradict our assumption on $X.$
Now, since the sets $S_{0},S_{1},S_{2},...,$ are pairwise disjoint and $%
\left\vert \mu \right\vert (S_{i})\geqslant \mu ^{+}(S_{0})>0$ for each $%
i=1,2,...,$ it follows that the total variation of $\mu $ is infinite. This
contradiction completes the proof.
\end{proof}
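The path-reduction step used at the beginning of the necessity argument is a small combinatorial algorithm. The following Python sketch is our own illustration (not from the text): points are modelled as hashable tuples, and, as in the text, only repetitions at even index separation are handled.

```python
def reduce_to_distinct(path):
    """Given a closed path as a list of points (tuples) of even length,
    repeatedly delete the subsequence p[i+1], ..., p[i+2k] whenever
    p[i] == p[i+2k], until all points are distinct."""
    p = list(path)
    while len(p) != len(set(p)):
        found = False
        for i in range(len(p)):                        # minimal i first
            for k in range(1, (len(p) - i) // 2 + 1):  # then minimal k > 0
                if i + 2 * k < len(p) and p[i] == p[i + 2 * k]:
                    # an even number of points is deleted, so the result
                    # still has even length and remains a closed path
                    del p[i + 1 : i + 2 * k + 1]
                    found = True
                    break
            if found:
                break
        if not found:
            break  # repetitions only at odd separation; not treated here
    return p
```

For instance, a six-point sequence whose second point recurs at even separation reduces to four distinct points in one deletion step.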
The following corollary concerns the problem considered by Golitschek and
Light \cite{36}.
\bigskip \textbf{Corollary 1.5.} \textit{Let $D$ be a compact subset of $%
\mathbb{R}^{2}$ with all its orbits closed. Let $W$ denote the set of all
solutions of the wave equation
\begin{equation*}
\frac{\partial ^{2}w}{\partial s\partial t}(s,t)=0,\;\ \ \ \ (s,t)\in D.
\end{equation*}%
Then
\begin{equation*}
\inf\limits_{w\in W}\left\Vert f-w\right\Vert =0
\end{equation*}%
for any continuous function $f(s,t)$ on $D$ if and only if $D$ contains no
closed bolt of lightning.}
\bigskip
\begin{proof} Let $\pi _{1}$ and $\pi _{2}$ denote the usual
coordinate projections, viz: $\pi _{1}(s,t)=s$ and $\pi _{2}(s,t)=t$, $%
(s,t)\in \mathbb{R}^{2}$. Set $S=\pi _{1}(D)$ and $T=\pi _{2}(D)$. It is
easy to see that
\begin{equation*}
W=\left\{ w\in C(D):w(s,t)=x(s)+y(t),\;\ \ x\in C^{2}(S),\;\ y\in
C^{2}(T)\right\} .
\end{equation*}
Set
\begin{equation*}
\widetilde{W}=\left\{ w\in C(D):w(s,t)=x(s)+y(t),\;\ \ x\in C(S),\;\ y\in
C(T)\right\} .
\end{equation*}
Since the set $W$ is dense in $\widetilde{W},$
\begin{equation*}
\inf\limits_{w\in W}\left\Vert f-w\right\Vert =\inf\limits_{w\in \widetilde{W%
}}\left\Vert f-w\right\Vert.
\end{equation*}
But by Theorem 1.6, the equality
\begin{equation*}
\inf\limits_{w\in \widetilde{W}}\left\Vert f-w\right\Vert =0
\end{equation*}%
holds for any $f\in C(D)$ if and only if $D$ contains no closed bolt of lightning.
\end{proof}
Let us discuss some difficulties that arise when studying sums of more than two ridge functions.
Consider the set
\begin{equation*}
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) =\left\{ \sum\limits_{i=1}^{r}g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) :g_{i}\in C\left( \mathbb{R}\right) ,\ i=1,...,r\right\} ,
\end{equation*}%
where $\mathbf{a}{^{1},...,}\mathbf{a}{^{r}}$ are pairwise linearly
independent vectors in $\mathbb{R}^{d}\backslash \{\mathbf{0}\}$. Let $r\geq
3$. How can one define a path in this general case? Recall that in the case when $r=2$, a path is
an ordered set of points $\left( \mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}%
_{n}\right) $ in $\mathbb{R}^{d}$ with edges $\mathbf{p}_{i}\mathbf{p}_{i+1}$
in alternating hyperplanes. The first, the third, the fifth,... hyperplanes
(also the second, the fourth, the sixth,... hyperplanes) are parallel. If
we do not distinguish between parallel hyperplanes, the path $\left( \mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{n}\right) $ can be viewed as the trace
of a point traveling in two alternating hyperplanes. In this case, if the
point starts and stops at the same location (i.e., if $\mathbf{p}_{n}=%
\mathbf{p}_{1})$ and $n$ is an odd number, then the path functional
\begin{equation*}
G(f)=\frac{1}{n-1}\sum\limits_{i=1}^{n-1}(-1)^{i+1}f(\mathbf{p}_{i}),
\end{equation*}%
annihilates sums of ridge functions with the corresponding two fixed directions. The
picture becomes quite different and more complicated when the number of directions is more than
two. The straightforward generalization of the above-mentioned arguments requires a
point traveling in three or more alternating hyperplanes. But in this case
the appropriate generalization of the functional $G$ does not annihilate
functions from $\mathcal{R}\left( \mathbf{a}{^{1},...,}\mathbf{a}{^{r}}%
\right) $.
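For two fixed directions, the annihilation property of the path functional $G$ is easy to check numerically. The following Python snippet is our own illustration; the directions, the closed path, and the test function are arbitrary choices, not taken from the text.

```python
import math

a1, a2 = (1.0, 0.0), (0.0, 1.0)   # two fixed directions
# a closed path of length 4: consecutive points (cyclically) agree
# alternately on a1.x and a2.x
path = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]

def dot(a, x):
    return a[0] * x[0] + a[1] * x[1]

def G(f, path):
    """Alternating path functional G(f) = (1/2m) sum_i (-1)^(i-1) f(p_i)."""
    return sum((-1) ** i * f(p) for i, p in enumerate(path)) / len(path)

# an arbitrary sum of ridge functions in the directions a1 and a2
def f(x):
    return math.sin(dot(a1, x)) + math.exp(dot(a2, x))

print(abs(G(f, path)))  # zero up to rounding: G annihilates such sums
```

The terms of $G(f)$ cancel in pairs along the alternating edges, which is exactly why the functional annihilates every sum $g_1(\mathbf{a}^1\cdot\mathbf{x})+g_2(\mathbf{a}^2\cdot\mathbf{x})$.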
There were several attempts to fill this gap in the special case when $r=d$
and $\mathbf{a}{^{1},...,}\mathbf{a}{^{r}}$ are the coordinate vectors.
Unfortunately, all these attempts failed (see, for example, the attempts in
\cite{26,37} and the refutations in \cite{4,109}).
At the end of this subsection we want to draw the reader's attention to the following problems.
All these problems are open and cannot be solved by the methods presented here.
Let $Q$ be a compact subset of $\mathbb{R}^{d}$. Consider the approximation
of a continuous function defined on $Q$ by functions from $\mathcal{R}\left(
\mathbf{a}{^{1},...,}\mathbf{a}{^{r}}\right)$. Let $r\ge 3$.
\textbf{Problem 3.} \textit{Characterize those functions from $\mathcal{R}%
\left( \mathbf{a}{^{1},...,}\mathbf{a}{^{r}}\right) $ that are extremal to a
given continuous function.}
\textbf{Problem 4.} \textit{Establish explicit formulas for the error in
approximating from $\mathcal{R}\left( \mathbf{a}{^{1},...,}\mathbf{a}{^{r}}%
\right) $ \ and construct a best approximation.}
\textbf{Problem 5.} \textit{Find necessary and sufficient geometrical
conditions for the set $\mathcal{R}\left( \mathbf{a}{^{1},...,}\mathbf{a}{%
^{r}}\right) $ to be dense in $C(Q)$.}
It should be remarked that in \cite{108}, Problem 5 was set up for the sum
of $r $ subalgebras of $C(Q)$. Lin and Pinkus \cite{95} proved that the set $%
\mathcal{R}\left( \mathbf{a}{^{1},...,}\mathbf{a}{^{r}}\right) $ ($r$ may be
very large) is not dense in $C(\mathbb{R}^{d})$ in the topology of uniform
convergence on compact subsets of $\mathbb{R}^{d}$. That is, there are
compact sets $Q\subset \mathbb{R}^{d}$ such that $\mathcal{R}\left( \mathbf{a%
}{^{1},...,}\mathbf{a}{^{r}}\right) $ is not dense in $C(Q)$. In the case $%
r=2$, Theorem 1.6 complements this result, by describing compact sets $%
Q\subset \mathbb{R}^{2}$, for which $\mathcal{R}\left( \mathbf{a}{^{1},}%
\mathbf{a}{^{2}}\right) $ is dense in $C(Q)$.
\bigskip
\section{Sums of continuous ridge functions}
In this section, we find geometric means of deciding if any continuous
multivariate function can be represented by a sum of two continuous ridge
functions.
\subsection{Exposition of the problem}
In this section, we will consider the following representation problem
associated with the set $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{r}\right) .$
\bigskip
\textbf{Problem 6.}\ \textit{Let $X$ be a compact subset of \ $\mathbb{R}%
^{d}.$ Give geometrical conditions that are necessary and sufficient for
\begin{equation*}
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right)=C\left( X\right)
,
\end{equation*}%
where $C\left( X\right) $ is the space of continuous functions on $X$
furnished with the uniform norm.}
\bigskip
We solve this problem for $r=2$. Problem 6, like Problems 3--5 from the previous section, is open in
the case $r\geq 3$. Geometrical characterization of compact sets $X \subset \mathbb{R}^{d}$ with the property
$\mathcal{R}\left(\mathbf{a}^{1},...,\mathbf{a}^{r}\right)=C\left(X\right)$, $r\geq 3$,
seems to be beyond the scope of the methods discussed herein. Nevertheless, recall that this
problem in a quite abstract form, which involves regular Borel measures on $X$,
was solved by Sternfeld (see Section 1.1.1).
In the sequel, we will use the notation
\begin{equation*}
H_{1}=H_{1}\left( X\right) =\left\{ g_{1}\left( \mathbf{a}^{1}\cdot \mathbf{x%
}\right) :g_{1}\in C\left( \mathbb{R}\right) \right\} ,
\end{equation*}
\begin{equation*}
H_{2}=H_{2}\left( X\right) =\left\{ g_{2}\left( \mathbf{a}^{2}\cdot \mathbf{x%
}\right) :g_{2}\in C\left( \mathbb{R}\right) \right\} .
\end{equation*}
Note that by this notation, $\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}%
^{2}\right) =H_{1}+H_{2}.$
At the end of this section, we generalize the obtained result from $%
H_{1}+H_{2}$ to the set of sums $g_{1}\left( h_{1}\left( \mathbf{x}\right)
\right) +g_{2}\left( h_{2}\left( \mathbf{x}\right) \right)$, where $h_{1},
h_{2}$ are fixed continuous functions on $X$.
\bigskip
\subsection{The representation theorem}
\textbf{Theorem 1.7.} \textit{Let $X$ be a compact subset of $\mathbb{R}%
^{d} $. The equality
\begin{equation*}
H_{1}\left( X\right) +H_{2}\left( X\right) =C\left( X\right)
\end{equation*}%
holds if and only if $X$ contains no closed path and there exists a positive
integer $n_{0}$ such that the lengths of paths in $X$ are bounded by $n_{0}$.%
}
\bigskip
\begin{proof}
\textit{Necessity.} Let $H_{1}+H_{2}=C\left( X\right) $. Consider the linear
operator
\begin{equation*}
A:H_{1}\times H_{2}\rightarrow C\left( X\right), ~~~ A\left[ \left(
g_{1},g_{2}\right) \right] =g_{1}+g_{2},
\end{equation*}%
where $g_{1}\in H_{1},g_{2}\in H_{2}.$ The norm on $H_{1}\times H_{2}$ we
define as
\begin{equation*}
\left\Vert \left( g_{1},g_{2}\right) \right\Vert =\left\Vert
g_{1}\right\Vert +\left\Vert g_{2}\right\Vert .
\end{equation*}%
It is obvious that the operator $A$ is continuous with respect to this norm.
Besides, since $C\left( X\right) =H_{1}+H_{2},$ $A$ is a surjection.
Consider the conjugate operator
\begin{equation*}
A^*:C\left( X\right) ^{\ast }\rightarrow \left[ H_{1}\times H_{2}\right]
^{\ast }, ~~~ A^{\ast }\left[ G\right] =\left( G_{1},G_{2}\right) ,
\end{equation*}%
where the functionals $G_{1}$ and $G_{2}$ are defined as follows
\begin{equation*}
G_{1}\left( g_{1}\right) =G\left( g_{1}\right) ,g_{1}\in H_{1}; ~~~
G_{2}\left( g_{2}\right) =G\left( g_{2}\right) ,g_{2}\in H_{2}.
\end{equation*}
An element $\left( G_{1},G_{2}\right) $ from $\left[ H_{1}\times H_{2}\right]
^{\ast }$ has the norm
\begin{equation*}
\left\Vert \left( G_{1},G_{2}\right) \right\Vert =\max \left\{ \left\Vert
G_{1}\right\Vert ,\left\Vert G_{2}\right\Vert \right\} .\eqno(1.40)
\end{equation*}
Now let $p=\left( p_{1},...,p_{m}\right) $ be any path with distinct
points: $p_{i}\neq p_{j}$ for any $i\neq j$, $1\leq i,~j\leq m$. We
associate with $p$ the following functional over $C\left( X\right) $
\begin{equation*}
L\left[ f\right] =\frac{1}{m}\sum\limits_{i=1}^{m}\left( -1\right)
^{i-1}f\left( p_{i}\right) .
\end{equation*}%
Since $\left\vert L(f)\right\vert \leq \left\Vert f\right\Vert $ and $%
\left\vert L(g)\right\vert =\left\Vert g\right\Vert $ for a continuous
function $g(\mathbf{x})$ such that $g(p_{i})=1,\ $for odd indices $i,\
g(p_{j})=-1,$ for even\ indices$\ j\ $and $-1<g(\mathbf{x})<1$ elsewhere, we
obtain that $\left\Vert L\right\Vert =1$. Let $A^{\ast }\left[ L\right]
=\left( L_{1},L_{2}\right) $. Since consecutive points of the path share, alternately, the same value of $\mathbf{a}^{1}\cdot \mathbf{x}$ or of $\mathbf{a}^{2}\cdot \mathbf{x}$, the terms of $L_{i}(g)$ cancel in pairs, leaving at most two uncancelled values; hence
\begin{equation*}
\left\Vert L_{i}\right\Vert \leq \frac{2}{m},\quad i=1,2.
\end{equation*}%
Therefore, from (1.40) we obtain that
\begin{equation*}
\left\Vert A^{\ast }\left[ L\right] \right\Vert \leq \frac{2}{m}.\eqno(1.41)
\end{equation*}%
Since $A$ is a surjection, by the open mapping theorem there exists $\delta >0$ such that
\begin{equation*}
\left\Vert A^{\ast }\left[ G\right] \right\Vert \geq \delta \left\Vert
G\right\Vert ~~~~\;\mbox{for any functional}\;G\in C\left( X\right) ^{\ast }.
\end{equation*}%
Hence
\begin{equation*}
\left\Vert A^{\ast }\left[ L\right] \right\Vert \geq \delta .\eqno(1.42)
\end{equation*}%
Now from (1.41) and (1.42) we conclude that
\begin{equation*}
m\leq \frac{2}{\delta }.
\end{equation*}
This means that for a path with distinct points, $n_{0}$ can be chosen as $\left[ \frac{2}{\delta }\right] +1$.
Now let $p=\left( p_{1},...,p_{m}\right) $ be a path with at least two
coinciding points. Then we can form a closed path with distinct points
as follows: let $i$ and $j$ be indices such
that $p_{i}=p_{j}$ and $j-i$ takes its minimal value. Note that in this
case all the points $p_{i},p_{i+1},...,p_{j-1}$ are distinct. Now if $j-i$
is an even number, then the path $(p_{i},p_{i+1},...,p_{j-1})$, and if $
j-i$ is an odd number, then the path $(p_{i+1},...,p_{j-1})$ is a closed
path with distinct points. It remains to show that $X$ cannot possess
closed paths with distinct points. Indeed, if $q=\left(
q_{1},...,q_{2k}\right) $ is a path of this type, then the functional $L,$
associated with $q,$ annihilates all functions from $H_{1}+H_{2}$. On the
other hand, $L\left[ f\right] =1$ for a continuous function $f$ on $X$
satisfying the conditions $f\left( t\right) =1$ if $t\in \left\{
q_{1},q_{3},...,q_{2k-1}\right\} ;$ $f\left( t\right) =-1$ if $t\in \left\{
q_{2},q_{4},...,q_{2k}\right\} ;$ $f\left( t\right) \in \left( -1;1\right) $
if $t\in X\backslash q$. This implies, contrary to our assumption,
that $H_{1}+H_{2}\neq C\left( X\right) $. The necessity has been proved.
\textit{Sufficiency.} Let $X$ contain no closed path and let the lengths of all
paths be bounded by some positive integer $n_{0}$. We may suppose that every
path has distinct points; otherwise we could form a closed path,
which contradicts our assumption.
For $i=1,2,$ let $X_{i}$ be the quotient space of $X$ obtained by
identifying the points $a$ and $b$ whenever $g\left( a\right) =g\left(
b\right) $ for each $g$ in $H_{i}$. Let $\pi _{i}$ be the natural projection
of $X$ onto $X_{i}$. For a point $t\in X$ set $T_{1}=\pi _{1}^{-1}\left( \pi
_{1}t\right) ,T_{2}=\pi _{2}^{-1}\left( \pi _{2}T_{1}\right) ,\ldots .$ Denote by $O\left( t\right) $ the orbit of $X$ containing $t.$ Since the length
of any path in $X$ is not more than $n_{0}$, we conclude that $O\left(
t\right) =T_{n_{0}}$. Since $X\ $is compact, the sets $%
T_{1},T_{2},...,T_{n_{0}},\ $hence $O(t),$ are compact. By Theorem 1.6,
$\overline{H_{1}+H_{2}}=C\left( X\right)$.
Now let us show that $H_{1}+H_{2}$ is closed in $C\left( X\right) $. Set
\begin{equation*}
H_{3}=H_{1}\cap H_{2}.
\end{equation*}
Let $X_{3}$ and $\pi _{3}$ be the associated quotient space and projection.
Fix some $a\in X_{3}$. We show, under the conditions of our theorem, that if $t\in
\pi _{3}^{-1}\left( a\right) ,$ then $O\left( t\right) =\pi _{3}^{-1}\left(
a\right) $. The inclusion $O\left( t\right) \subset \pi _{3}^{-1}\left(
a\right) $ is obvious. Suppose that there exists a point $t_{1}\in \pi
_{3}^{-1}\left( a\right) $ such that $t_{1}\notin O\left( t\right)
$. Then $O\left( t\right) \cap O\left( t_{1}\right) =\emptyset $. Denote by $X|O$
the quotient space generated by the orbits of $X$. $X|O$ is a normal
topological space with its natural quotient topology. Hence we can construct a
continuous function $u\in C\left( X|O\right) $ such that $u\left( O\left(
t\right) \right) =0,$ $u\left( O\left( t_{1}\right) \right) =1$. The
function $\upsilon \left( x\right) =u\left( O\left( x\right) \right) ,\;\
x\in X,$ is continuous on $X$ and belongs to $H_{3}$ as a function being
constant on each orbit. But, since $O\left( t\right) \subset \pi
_{3}^{-1}\left( a\right) $ and $O\left( t_{1}\right) \subset \pi
_{3}^{-1}\left( a\right) $, the function $\upsilon \left( x\right) $ can not
take different values on $O\left( t\right) $ and $O\left( t_{1}\right) $.
This contradiction means that there is no point $t_{1}\in \pi
_{3}^{-1}\left( a\right) $ with $t_{1}\notin O\left( t\right) $. Thus,
\begin{equation*}
O\left( t\right) =\pi _{3}^{-1}\left( a\right) \eqno(1.43)
\end{equation*}%
for any $a\in X_{3}$ and $t\in \pi _{3}^{-1}\left( a\right) $.
We now prove that there exists a positive real number $c$ such that
\begin{equation*}
\sup\limits_{z\in X_{3}}\underset{\pi _{3}^{-1}\left( z\right) }{var}f\leq
c\sup\limits_{y\in X_{2}}\underset{\pi _{2}^{-1}\left( y\right) }{var}f\eqno%
(1.44)
\end{equation*}%
for all $f$ in $H_{1}$. Note that for $Y\subset X,\ \;\underset{Y}{var}f$ is
the variation of $f$ on the set $Y.$ That is, $\;$%
\begin{equation*}
\underset{Y}{var}f=\sup\limits_{x,y\in Y}\left\vert f\left( x\right)
-f\left( y\right) \right\vert .
\end{equation*}
Due to (1.43), inequality (1.44) can be written in the following form
\begin{equation*}
\sup_{t\in X}\underset{O\left( t\right) }{var}f\leq c\sup_{t\in X}\underset{%
\pi _{2}^{-1}\left( \pi _{2}\left( t\right) \right) }{var}f\eqno(1.45)
\end{equation*}%
for all $f\in H_{1}$.
Let $t\in X$ and $t_{1},t_{2}$ be arbitrary points of $O\left( t\right) $.
Then there is a path $\left( b_{1},b_{2},...,b_{m}\right) $ with $%
b_{1}=t_{1}$ and $b_{m}=t_{2}$. Moreover, by assumption, $m\leq n_{0}$.
Assume first that $\mathbf{a}^{2}\cdot b_{1}=\mathbf{a}^{2}\cdot b_{2},$ $\mathbf{a}%
^{1}\cdot b_{2}=\mathbf{a}^{1}\cdot b_{3},...,\mathbf{a}^{2}\cdot b_{m-1}=%
\mathbf{a}^{2}\cdot b_{m}$. Then for any function $f\in H_{1}$
\begin{equation*}
\left\vert f\left( t_{1}\right) -f\left( t_{2}\right) \right\vert
=\left\vert f\left( b_{1}\right) -f\left( b_{2}\right) +...-f\left(
b_{m}\right) \right\vert \leq
\end{equation*}%
\begin{equation*}
\leq \left\vert f\left( b_{1}\right) -f\left( b_{2}\right) \right\vert
+...+\left\vert f\left( b_{m-1}\right) -f\left( b_{m}\right) \right\vert
\leq \frac{n_{0}}{2}\sup_{t\in X}\underset{\pi _{2}^{-1}\left( \pi
_{2}\left( t\right) \right) }{var}f.\eqno(1.46)
\end{equation*}
It is not difficult to verify that inequality (1.46) holds in all other
possible cases of the path $\left( b_{1},...,b_{m}\right) $. Now from (1.46)
we obtain (1.45), hence (1.44), where $c=\frac{n_{0}}{2}$.

In \cite{108},
Marshall and O'Farrell proved the following result (see
\cite[Proposition 4]{108}): Let $A_{1}\ $and $A_{2}\ $be closed subalgebras of $C(X)\ $that
contain the constants. Let $(X_{1},\pi _{1}),\ (X_{2},\pi _{2})\ $and $%
(X_{3},\pi _{3})\ $be the quotient spaces and projections associated with
the algebras $A_{1},$ $A_{2}\ $and $A_{3}=A_{1}\cap A_{2}\ $respectively.
Then $A_{1}+A_{2}\ $is closed in $C(X)\ $if and only if there exists a
positive real number $c$ such that
\begin{equation*}
\sup\limits_{z\in X_{3}}\underset{\pi _{3}^{-1}\left( z\right) }{var}f\leq
c\sup\limits_{y\in X_{2}}\underset{\pi _{2}^{-1}\left( y\right) }{var}f
\end{equation*}%
for all $f\ $in $A_{1}.$
By this proposition, (1.44) implies that $H_{1}+H_{2}$ is closed in $C\left(
X\right) $. Thus we finally obtain that $H_{1}+H_{2}=C\left( X\right) $.
\end{proof}
Paths with respect to two directions are explicit objects and give geometric
means of deciding if $H_{1}+H_{2}=C\left( X\right) $. Let us show this in
the example of the bivariate ridge functions $g_{1}=x_{1}+x_{2}\ $and $%
g_{2}=x_{1}-x_{2}.$ If $X$ is the union of two parallel line segments in $%
\mathbb{R}^{2},$ not parallel to any of the lines $x_{1}+x_{2}=0$ and $%
x_{1}-x_{2}=0,\ $then Theorem 1.7 holds. If $X$ is any bounded part of the
graph of the function $x_{2}=\arcsin (\sin x_{1}),$ then Theorem 1.7 also
holds. Let now $X\ $be the set
\begin{equation*}
\begin{array}{c}
\{(0,0),(1,-1),(0,-2),(-\tfrac{3}{2},-\tfrac{1}{2}),(0,1),(\tfrac{3}{4},\tfrac{1}{4}),(0,-\tfrac{1}{2}), \\
(-\tfrac{3}{8},-\tfrac{1}{8}),(0,\tfrac{1}{4}),(\tfrac{3}{16},\tfrac{1}{16}),...\}.%
\end{array}%
\end{equation*}
In this case, there is no positive integer bounding the lengths of all paths,
so Theorem 1.7 fails. Note that since all orbits are closed,
Theorem 1.6 from the previous section shows that $H_{1}+H_{2}$ is dense in $%
C\left( X\right) .$
If $X$ is any set with interior points, then both Theorem 1.6 and Theorem
1.7 fail, since any such set contains the vertices of some parallelogram
with sides parallel to the directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$;
these vertices form a closed path.
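For a finite set $X\subset\mathbb{R}^{2}$, the existence of a closed path with respect to two directions can be tested mechanically: view each point as an edge joining its $\mathbf{a}^{1}$-level to its $\mathbf{a}^{2}$-level; a closed path exists exactly when the resulting bipartite graph contains a cycle. The following Python sketch implements that check (the graph reformulation and the union-find test are our own illustration, not a construction from the text):

```python
def has_closed_path(points, a1, a2, tol=1e-9):
    """Decide whether a finite planar set contains a closed path with
    respect to the directions a1 and a2: each point becomes an edge
    between its a1-level and its a2-level, and a closed path exists
    iff this bipartite graph has a cycle (union-find detection)."""
    def dot(a, x):
        return a[0] * x[0] + a[1] * x[1]

    def level(values, v):
        # index of the (tolerance-merged) projection value v
        for i, u in enumerate(values):
            if abs(u - v) <= tol:
                return i
        values.append(v)
        return len(values) - 1

    parent = {}
    def find(u):
        while parent.get(u, u) != u:
            u = parent[u]
        return u

    lv1, lv2 = [], []
    for p in points:
        e1 = ("a1", level(lv1, dot(a1, p)))
        e2 = ("a2", level(lv2, dot(a2, p)))
        r1, r2 = find(e1), find(e2)
        if r1 == r2:
            return True   # this point closes a cycle, i.e. a closed path
        parent[r1] = r2
    return False

# vertices of a parallelogram, with the coordinate projections as directions
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(has_closed_path(square, (1, 0), (0, 1)))      # True
print(has_closed_path(square[:3], (1, 0), (0, 1)))  # False
```

Since two distinct points in the plane cannot share both projections (the directions are linearly independent), distinct points yield distinct edges, and any cycle uses at least four of them, i.e. a genuine closed path.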
Theorem 1.7 admits a direct generalization to the representation by sums $%
g_{1}\left( h_{1}\left( \mathbf{x}\right) \right) +g_{2}\left( h_{2}\left(
\mathbf{x}\right) \right) $, where $h_{1}\left( \mathbf{x}\right) $ and $%
h_{2}\left( \mathbf{x}\right) $ are fixed continuous functions on $X$. This
generalization needs consideration of new objects -- paths with respect to
two continuous functions.
\bigskip
\textbf{Definition 1.5.} \textit{Let $X$ be a compact set in $\mathbb{R}%
^{d}$ and $h_{1},h_{2}\in C\left( X\right)$. A finite ordered subset $%
\left( p_{1},p_{2},...,p_{m}\right)$ of $X$ with $p_{i}\neq p_{i+1}\left(
i=1,...,m-1\right)$, and either $h_{1}\left( p_{1}\right) =h_{1}\left(
p_{2}\right)$, $h_{2}\left( p_{2}\right) =h_{2}\left( p_{3}\right)$, $%
h_{1}\left( p_{3}\right) =h_{1}\left( p_{4}\right),...,$ or $h_{2}\left(
p_{1}\right) =h_{2}\left( p_{2}\right)$, $h_{1}\left( p_{2}\right)
=h_{1}\left( p_{3}\right)$, $h_{2}\left( p_{3}\right) =h_{2}\left(
p_{4}\right),...,$ is called a path with respect to the functions $h_{1}$
and $h_{2}$ or, shortly, an $h_{1}$-$h_{2}$ path.}
\bigskip
\textbf{Theorem 1.8.} \textit{Let $X$ be a compact subset of $\mathbb{R}^{d}$%
. All functions $f \in C(X)$ admit a representation
\begin{equation*}
f(\mathbf{x})=g_{1}\left( h_{1}\left( \mathbf{x}\right) \right) +g_{2}\left(
h_{2}\left( \mathbf{x}\right) \right) ,~g_{1},g_{2}\in C(\mathit{\mathbb{R}})
\end{equation*}%
if and only if the set $X$ contains no closed $h_{1}$-$h_{2}$ path and there
exists a positive integer $n_{0}$ such that the lengths of $h_{1}$-$h_{2}$
paths in $X$ are bounded by $n_{0}$.}
\bigskip
The proof can be carried out by the same arguments as above.
It should be noted that Theorem 1.8 was first proved by Khavinson in his
monograph \cite{76}. Khavinson's proof (see \cite[p.87]{76}) used theorems
of Sternfeld \cite{132} and Medvedev \cite[Theorem 2.2]{76}, whereas our
proof, which generalizes the ideas of Khavinson, was based on the above
proposition of Marshall and O'Farrell.
\bigskip
\section{On the proximinality of ridge functions}
In this section, using two results of Garkavi, Medvedev and Khavinson \cite%
{35}, we give sufficient conditions for proximinality of sums of two ridge
functions with bounded and continuous summands in the spaces of bounded and
continuous multivariate functions, respectively. In the first case, we give
an example which shows that the corresponding sufficient condition cannot be
made weaker for certain subsets of $\mathbb{R}^{n}$. In the second case, we
obtain also a necessary condition for proximinality. All the results are
furnished with plenty of examples. The results, examples and following
discussions naturally lead us to a conjecture on the proximinality of the
considered class of ridge functions.
\subsection{Problem statement}
Let $E$ be a normed linear space and $F$ be its subspace. We say that $F$ is
proximinal in $E$ if for any element $e\in E$ there exists at least one
element $f_{0}\in F$ such that
\begin{equation*}
\left\Vert e-f_{0}\right\Vert =\inf_{f\in F}\left\Vert e-f\right\Vert .
\end{equation*}
In this case, the element $f_{0}$ is said to be extremal to $e$.
We are interested in the problem of proximinality of the set of linear
combinations of ridge functions in the spaces of bounded and continuous
functions respectively. This problem will be considered in the simplest case
when the class of approximating functions is the set
\begin{equation*}
\mathcal{R}=\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) =\left\{ g_{1}\left( \mathbf{a}^{1}\cdot \mathbf{x}\right) +g_{2}\left( \mathbf{a}^{2}\cdot \mathbf{x}\right) :g_{i}:\mathbb{R}\rightarrow \mathbb{R},\ i=1,2\right\} .
\end{equation*}
Here $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ are fixed directions and we vary
over ${g}_{i}$. It is clear that this is a linear space. Consider the
following three subspaces of $\mathcal{R}$. The first is obtained by taking
only bounded sums ${g_{1}\left( \mathbf{a}^{1}{\cdot }\mathbf{x}\right)
+g_{2}\left( \mathbf{a}^{2}{\cdot }\mathbf{x}\right) }$ over some set $X$ in
$\mathbb{R}^{n}.$ We denote this subspace by $\mathcal{R}_{a}(X)$. The
second and the third are subspaces of $\mathcal{R}$ with bounded and
continuous summands $g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right)
,~i=1,2,$ on $X$ respectively. These subspaces will be denoted by $\mathcal{%
R }_{b}(X)$ and $\mathcal{R}_{c}(X).$ In the case of $\mathcal{R}_{c}(X),$
the set $X$ is considered to be compact.
Let $B(X)$ and $C(X)$ be the spaces of bounded and continuous multivariate
functions over $X$ respectively. What conditions must one impose on $X$ in
order that the sets $\mathcal{R}_{a}(X)$ and $\mathcal{R}_{b}(X)$ be
proximinal in $B(X)$ and the set $\mathcal{R}_{c}(X)$ be proximinal in $C(X)$%
? We are also interested in necessary conditions for proximinality. It
follows from one result of Garkavi, Medvedev and Khavinson (see
\cite[Theorem 1]{35}) that $\mathcal{R}_{a}(X)$ is proximinal in $B(X)$ for all subsets
$X$ of $\mathbb{R}^{n}$. There is also an answer (see \cite[Theorem 2]{35})
for proximinality of $\mathcal{R}_{b}(X)$ in $B(X)$. This will be discussed
in Section 1.5.2. Is the set $\mathcal{R}_{b}(X)$ always proximinal in $B(X)$%
? There is an example of a set $X\subset \mathbb{R}^{n}$ and a bounded
function $f$ on $X$ for which there does not exist an extremal element in $%
\mathcal{R}_{b}(X)$.
In Section 1.5.3, we will obtain sufficient conditions for the existence of
extremal elements from $\mathcal{R}_{c}(X)$ to an arbitrary function $f\in C(X)$. Based on one result of Marshall and O'Farrell \cite{108}, we
will also give a necessary condition for proximinality of $\mathcal{R}_{c}(X)
$ in $C(X)$. All the theorems, examples and subsequent discussions will lead us naturally to a conjecture on the proximinality of the
subspaces $\mathcal{R}_{b}(X)$ and $\mathcal{R}_{c}(X)$ in the spaces $B(X)$
and $C(X)$ respectively.
The reader may also be interested in the more general case with the set $%
\mathcal{R}=\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right)$. In
this case, the corresponding sets $\mathcal{R}_{a}(X)$, $\mathcal{R}_{b}(X)$
and $\mathcal{R}_{c}(X)$ are defined similarly. Using the results of \cite%
{35}, one can obtain sufficient (but not necessary) conditions for
proximinality of these sets. This needs, besides paths, the consideration of
some additional and more complicated relations between points of $X$. Here
we will not consider the case $r\geq 3$, since our main purpose is to draw
the reader's attention to the problems of proximinality that arise in the
simplest case of approximation. For the existing open problems connected
with the set $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $,
where $r\geq 3$, see \cite{53} and \cite{118}.
\bigskip
\subsection{Proximinality of $\mathcal{R}_{b}(X)$ in $B(X)$}
Let $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ be two different directions in $%
\mathbb{R}^{n}$. In the sequel, we will use paths with respect to the
directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$. Recall that the length of a
path is the number of its points and can be equal to $\infty $ if the path
is infinite. A singleton is a path of length one. We say that a path $%
\left( \mathbf{x}^{1},...,\mathbf{x}^{m}\right) $ belonging to some subset $X
$ of $\mathbb{R}^{n}$ is irreducible if there is no other path $\left(
\mathbf{y}^{1},...,\mathbf{y}^{l}\right) \subset X$ with $\mathbf{y}^{1}=%
\mathbf{x}^{1},~\mathbf{y}^{l}=\mathbf{x}^{m}$ and $l<m$.
The following theorem follows from \cite[Theorem 2]{35}.
\bigskip
\textbf{Theorem 1.9.} \textit{Let $X\subset $ $\mathbb{R}^{n}$ and the
lengths of all irreducible paths in $X$ be uniformly bounded by some
positive integer. Then each function in $B(X)$ has an extremal element in $%
\mathcal{R}_{b}(X)$.}
\bigskip
There are a large number of sets in $\mathbb{R}^{n}$ satisfying the
hypothesis of this theorem. For example, if a set $X$ has a cross section
according to one of the directions $\mathbf{a}^{1}$ or $\mathbf{a}^{2}$,
then the set $X$ satisfies the hypothesis of Theorem 1.9. By a cross section
according to the direction $\mathbf{a}^{1}$ we mean any set $X_{\mathbf{a}%
^{1}}=\{x\in X:\ \mathbf{a}^{1}\cdot \mathbf{x}=c\}$, $c\in \mathbb{R}$, with
the property: for any $\mathbf{y}\in X$ there exists a point $\mathbf{y}%
^{1}\in X_{\mathbf{a}^{1}}$ such that $\mathbf{a}^{2}\cdot \mathbf{y}=%
\mathbf{a}^{2}\cdot \mathbf{y}^{1}$. By the similar way, one can define a
cross section according to the direction $\mathbf{a}^{2}$. For more on cross
sections in problems of proximinality of sums of univariate functions see
\cite{34,77}. Regarding Theorem 1.9 one may ask if the condition of the
theorem is necessary for proximinality of $\mathcal{R}_{b}(X)$ in $B(X)$.
While we do not know a complete answer to this question, we are going to
give an example of a set $X $ for which Theorem 1.9 fails. Let $\mathbf{a}%
^{1}=(1;-1),\ \mathbf{a}^{2}=(1;1).$ Consider the set
\begin{eqnarray*}
X &=&\{(2;\frac{2}{3}),(\frac{2}{3};-\frac{2}{3}),(0;0),(1;1),(1+\frac{1}{2}%
;1-\frac{1}{2}),(1+\frac{1}{2}+\frac{1}{4};1-\frac{1}{2}+\frac{1}{4}), \\
&&(1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8};1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8%
}),...\}.
\end{eqnarray*}%
In what follows, the elements of $X$ in the given order will be denoted by $%
\mathbf{x}^{0},\mathbf{x}^{1},\mathbf{x}^{2},...$. It is clear that $X$ is
a path of infinite length and $\mathbf{x}^{n}\rightarrow \mathbf{x}^{0}$
as $n\rightarrow \infty $. Let $\sum_{n=1}^{\infty }c_{n}$ be any
divergent series with the terms $c_{n}>0$ and $c_{n}\rightarrow 0$ as $%
n\rightarrow \infty $. Moreover, let $f_{0}$ be a function vanishing at the
points $\mathbf{x}^{0},\mathbf{x}^{2},\mathbf{x}^{4},...,$ and taking values
$c_{1},c_{2},c_{3},...$ at the points $\mathbf{x}^{1},\mathbf{x}^{3},\mathbf{%
\ x}^{5},...$, respectively. It is obvious that $f_{0}$ is continuous on $X$.
The set $X$ is compact and satisfies all the conditions of Theorem 1.6. By
that theorem, $\overline{\mathcal{R}_{c}(X)}=C(X).$ Therefore, for any
continuous function on $X$, thus for $f_{0}$,
\begin{equation*}
\inf_{g\in \mathcal{R}_{c}(X)}\left\Vert f_{0}-g\right\Vert _{C(X)}=0.\eqno%
(1.47)
\end{equation*}
Since $\mathcal{R}_{c}(X)\subset \mathcal{R}_{b}(X),$ we obtain from (1.47)
that
\begin{equation*}
\inf_{g\in \mathcal{R}_{b}(X)}\left\Vert f_{0}-g\right\Vert _{B(X)}=0.\eqno%
(1.48)
\end{equation*}
Suppose that $f_{0}$ has an extremal element $g_{1}^{0}\left( \mathbf{a}^{1}\cdot \mathbf{x}\right) +g_{2}^{0}\left( \mathbf{a}^{2}\cdot \mathbf{x}\right) $ in $\mathcal{R}_{b}(X).$ By the definition of $\mathcal{R}_{b}(X)$,
the ridge functions $g_{i}^{0}$, $i=1,2$, are bounded on $X.$ From (1.48)
it follows that $f_{0}=g_{1}^{0}\left( \mathbf{a}^{1}\cdot \mathbf{x}\right) +g_{2}^{0}\left( \mathbf{a}^{2}\cdot \mathbf{x}\right) .$ Since $
\mathbf{a}^{1}\cdot \mathbf{x}^{2n}=\mathbf{a}^{1}\cdot \mathbf{x}^{2n+1}$
and $\mathbf{a}^{2}\cdot \mathbf{x}^{2n+1}=\mathbf{a}^{2}\cdot \mathbf{x}%
^{2n+2},$ for $n=0,1,...,$ we can write that
\begin{equation*}
\sum_{n=0}^{k}c_{n+1}=\sum_{n=0}^{k}\left[ f_{0}(\mathbf{x}^{2n+1})-f_{0}(\mathbf{x}^{2n})\right]
\end{equation*}
\begin{equation*}
=\sum_{n=0}^{k}\left[ g_{2}^{0}(\mathbf{a}^{2}\cdot \mathbf{x}^{2n+1})-g_{2}^{0}(\mathbf{a}^{2}\cdot \mathbf{x}^{2n})\right] =g_{2}^{0}(\mathbf{a}^{2}\cdot \mathbf{x}^{2k+1})-g_{2}^{0}(\mathbf{a}^{2}\cdot \mathbf{x}^{0}).\eqno(1.49)
\end{equation*}
Since $\sum_{n=1}^{\infty }c_{n}=\infty ,$ we deduce from (1.49)\ that the
function ${g_{2}^{0}\left( \mathbf{a}^{2}{\cdot }\mathbf{x}\right) }$ is not
bounded on $X.$ This contradiction means that the function $f_{0}$ does not
have an extremal element in $\mathcal{R}_{b}(X).$ Therefore, the space $%
\mathcal{R}_{b}(X)$ is not proximinal in $B(X).$
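As an illustrative aside (our own sketch, not part of the original argument), the mechanism of this counterexample can be checked numerically. The code below rebuilds the first points of $X$ in exact rational arithmetic, verifies that consecutive points lie alternately on common level sets of $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ (so that $X$ is a single path converging to $\mathbf{x}^{0}$), and then, choosing the divergent series $c_{n}=1/n$ for concreteness, reproduces the telescoping identity (1.49): the values forced upon $g_{2}^{0}$ grow like harmonic numbers and are therefore unbounded.

```python
from fractions import Fraction as F

a1, a2 = (F(1), F(-1)), (F(1), F(1))
dot = lambda a, x: a[0] * x[0] + a[1] * x[1]

# the points x^0, x^1, x^2, ... of the set X (first 23 of them)
X = [(F(2), F(2, 3)), (F(2, 3), F(-2, 3)), (F(0), F(0)), (F(1), F(1))]
for k in range(1, 20):
    x1, x2 = X[-1]
    X.append((x1 + F(1, 2 ** k), x2 + F((-1) ** k, 2 ** k)))

# consecutive points lie alternately on level sets of a^1 and a^2,
# so X is a single path, and x^n -> x^0 = (2, 2/3)
for n in range(len(X) - 1):
    a = a1 if n % 2 == 0 else a2
    assert dot(a, X[n]) == dot(a, X[n + 1])
assert abs(float(X[-1][0]) - 2.0) < 1e-4
assert abs(float(X[-1][1]) - 2.0 / 3.0) < 1e-4

# if f_0 = g1(a^1.x) + g2(a^2.x) on X, the telescoping identity (1.49)
# forces g2(a^2.x^{2k+1}) - g2(a^2.x^0) = c_1 + ... + c_{k+1};
# with c_n = 1/n these partial sums are harmonic numbers, hence unbounded
forced = 0.0
for n in range(1, 10 ** 5):
    forced += 1.0 / n
assert forced > 11.0  # already exceeds 11; grows beyond any fixed bound
```

The exact `Fraction` arithmetic is what makes the alternating level-set checks hold with equality rather than up to rounding.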
\bigskip
\subsection{Proximinality of $\mathcal{R}_{c}(X)$ in $C(X)$}
In this section, we give a sufficient condition and also a necessary condition
for proximinality of $\mathcal{R}_{c}(X)$ in $C(X)$.
\bigskip
\textbf{Theorem 1.10.} \textit{Let the system of linearly independent vectors $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ admit a completion to a basis $\{\mathbf{a}^{1},...,\mathbf{a}^{n}\}$ of $\mathbb{R}^{n}$ with the following property: for any
point $\mathbf{x}^{0}\in X$ and any positive real number $\delta $ there
exist a number $\delta _{0}\in (0,\delta ]$ and a point $\mathbf{x}^{\sigma }$ in the set}
\begin{equation*}
\sigma =\{\mathbf{x}\in X:\mathbf{a}^{2}\cdot \mathbf{x}^{0}-\delta _{0}\leq
\mathbf{a}^{2}\cdot \mathbf{x}\leq \mathbf{a}^{2}\cdot \mathbf{x}^{0}+\delta
_{0}\},
\end{equation*}
\textit{such that the system}
\begin{equation*}
\left\{
\begin{array}{c}
\mathbf{a}^{2}\cdot \mathbf{x}^{\prime }=\mathbf{a}^{2}\cdot \mathbf{x}%
^{\sigma } \\
\mathbf{a}^{1}\cdot \mathbf{x}^{\prime }=\mathbf{a}^{1}\cdot \mathbf{x} \\
\sum_{i=3}^{n}\left\vert \mathbf{a}^{i}\cdot \mathbf{x}^{\prime }-\mathbf{a}%
^{i}\cdot \mathbf{x}\right\vert <\delta%
\end{array}%
\right. \eqno(1.50)
\end{equation*}%
\textit{has a solution $\mathbf{x}^{\prime }\in \sigma $ for all points $\mathbf{x}\in \sigma .$ Then the space $\mathcal{R}_{c}(X)$ is proximinal in $C(X).$}
\bigskip
\begin{proof}
Introduce the following mappings and sets:
\begin{equation*}
\pi _{i}:X\rightarrow \mathbb{R}\text{, }\pi _{i}(\mathbf{x)=a}^{i}\cdot
\mathbf{x}\text{, }Y_{i}=\pi _{i}(X\mathbf{)}\text{, }i=1,...,n.
\end{equation*}
Since the system of vectors $\{\mathbf{a}^{1},...,\mathbf{a}^{n}\}$ is
linearly independent, the mapping $\pi =(\pi _{1},...,\pi _{n})$ is an
injection from $X$ into the Cartesian product $Y_{1}\times ...\times Y_{n}$. Besides, $\pi $ is linear and continuous. Since $X$ is compact and $\pi $ is a continuous injection, the
inverse mapping $\pi ^{-1}$ is continuous from $Y=\pi (X)$ onto $X.$ Let $f$
be a continuous function on $X$. Then the composition $f\circ \pi ^{-1}(y_{1},...,y_{n})$ is continuous on $Y,$ where $y_{i}=\pi _{i}(\mathbf{x})$, $i=1,...,n$, are the coordinate functions. Consider the
approximation of the function $f\circ \pi ^{-1}$ by elements from
\begin{equation*}
G_{0}=\{g_{1}(y_{1})+g_{2}(y_{2}):\ g_{i}\in C(Y_{i}),\ i=1,2\}
\end{equation*}
over the compact set $Y$. Then one may observe that the function $f$ has an
extremal element in $\mathcal{R}_{c}(X)$ if and only if the function $f\circ
\pi ^{-1}$ has an extremal element in $G_{0}$. Thus the problem of
proximinality of $\mathcal{R}_{c}(X)$ in $C(X)$ is reduced to the problem of
proximinality of $G_{0}$ in $C(Y).$
Let $T,T_{1},...,T_{m+1}$ be compact metric spaces with $T\subset T_{1}\times ...\times T_{m+1}.$ For $i=1,...,m,$ let $\varphi _{i}$ be a
continuous mapping from $T$ onto $T_{i}.$ In \cite{35}, the authors obtained
sufficient conditions for proximinality of the set
\begin{equation*}
C_{0}=\{\sum_{i=1}^{m}g_{i}\circ \varphi _{i}:\ g_{i}\in C(T_{i}),\
i=1,...,m\}
\end{equation*}%
in the space $C(T)$ of continuous functions on $T.$ Since $Y\subset $ $%
Y_{1}\times Y_{2}\times Z_{3},$ where $Z_{3}=Y_{3}\times ...\times Y_{n},$
we can use this result in our case, for the approximation of the function $%
f\circ \pi ^{-1}$ by elements from $G_{0}$. By this theorem, the set $G_{0}$
is proximinal in $C(Y)$ if for any $y_{2}^{0}\in Y_{2}$ and $\delta >0$
there exists a number $\delta _{0}\in (0,\delta )$ such that the set $\sigma (y_{2}^{0},\delta _{0})=[y_{2}^{0}-\delta _{0},y_{2}^{0}+\delta
_{0}]\cap Y_{2}$ has a $(2,\delta )$ maximal cross section. The latter means
that there exists a point $y_{2}^{\sigma }\in \sigma (y_{2}^{0},\delta _{0})$
with the property: for any point $(y_{1},y_{2},z_{3})\in Y,$ with the second
coordinate $y_{2}$ from the set $\sigma (y_{2}^{0},\delta _{0}),$ there
exists a point $(y_{1}^{\prime },y_{2}^{\sigma },z_{3}^{\prime })\in Y$ such
that $y_{1}=y_{1}^{\prime }$ and $\rho (z_{3},z_{3}^{\prime })<\delta ,$
where $\rho $ is a metric on $Z_{3}.$ Since these conditions are equivalent
to the conditions of Theorem 1.10, the space $G_{0}$ is proximinal in the
space $C(Y).$ Then by the above conclusion, the space $\mathcal{R}_{c}(X)$
is proximinal in $C(X).$ \end{proof}
Let us give some simple examples of compact sets satisfying the hypothesis
of Theorem 1.10. For the sake of brevity, we restrict ourselves to the case $%
n=3.$
\begin{enumerate}
\item[(a)] Assume $X$ is a closed ball in $\mathbb{R}^{3}$ and $\mathbf{a}^{1}$, $\mathbf{a}^{2}$
are orthogonal directions. Then the hypothesis of Theorem 1.10 is satisfied. Note that
in this case we can take $\delta _{0}=\delta $ and choose $\mathbf{a}^{3}$ orthogonal
to both $\mathbf{a}^{1}$ and $\mathbf{a}^{2}.$
\item[(b)] Let $X$ be the unit cube, $\mathbf{a}^{1}=(1;1;0),\ \mathbf{a}^{2}=(1;-1;0).$ Then
the hypothesis of Theorem 1.10 is also satisfied. In this case, we can take $\delta _{0}=\delta $ and
$\mathbf{a}^{3}=(0;0;1).$ Note that the unit cube does not satisfy the hypothesis of
the theorem for many other directions (take, for example, $\mathbf{a}^{1}=(1;2;0)$ and $\mathbf{a}^{2}=(2;-1;0)$).
\end{enumerate}
In the following example, one cannot always choose $\delta _{0}$ equal to
$\delta $.
\begin{enumerate}
\item[(c)] Let $X=\{(x_{1},x_{2},x_{3}):\ (x_{1},x_{2})\in Q,\ 0\leq
x_{3}\leq 1\},$ where $Q$ is the union of two triangles $A_{1}B_{1}C_{1}$
and $A_{2}B_{2}C_{2}$ with the vertices $A_{1}=(0;0),\ B_{1}=(1;2),\
C_{1}=(2;0),\ A_{2}=(1\frac{1}{2};1),\ B_{2}=(2\frac{1}{2};-1),\ C_{2}=(3%
\frac{1}{2};1).$ Let $\mathbf{a}^{1}=(0;1;0)$ and $\mathbf{a}^{2}=(1;0;0).$ Then it is easy to
see that the hypothesis of Theorem 1.10 is satisfied (the vector $\mathbf{a}^{3}$ can be chosen as $(0;0;1)$).
In this case, $\delta _{0}$ cannot always be chosen equal to $\delta $.
Take, for example, $\mathbf{x}^{0}=(1\frac{3}{4};0;0)$ and $\delta =1\frac{3}{4}.$ If $\delta _{0}=\delta ,$ then the second equation of the system
(1.50) has no solution for the point $(1;2;0)$ or the point $(2\frac{1}{2};-1;0).$ But if we take $\delta _{0}$ at most $\frac{1}{4}$, then for $\mathbf{x}^{\sigma }=\mathbf{x}^{0}$ the system has a solution. Note that
the last inequality $\left\vert \mathbf{a}^{3}\cdot \mathbf{x}^{\prime }-%
\mathbf{a}^{3}\cdot \mathbf{x}\right\vert <\delta $ of the system can be
satisfied with the equality $\mathbf{a}^{3}\cdot \mathbf{x}^{\prime }=%
\mathbf{a}^{3}\cdot \mathbf{x}$ if $\mathbf{a}^{3}=(0;0;1).$
\end{enumerate}
It should be remarked that the results of \cite{35} say nothing about
necessary conditions for proximinality of the spaces considered there. To
fill this gap in our case, we give a necessary condition for
proximinality of $\mathcal{R}_{c}(X)$ in $C(X)$. First, let us introduce
some notation. By $\mathcal{R}_{c}^{i},\ i=1,2,$ we will denote the set of
continuous ridge functions $g\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) $
on the given compact set $X\subset \mathbb{R}^{n}.$ Note that $\mathcal{R}%
_{c}=\mathcal{R}_{c}^{1}+\mathcal{R}_{c}^{2}.$ Besides, let $\mathcal{R}%
_{c}^{3}=\mathcal{R}_{c}^{1}\cap \mathcal{R}_{c}^{2}.$ For $i=1,2,3,$ let $%
X_{i}$ be the quotient space obtained by identifying points $y_{1}$ and $%
y_{2}$ in $X$ whenever $f(y_{1})=f(y_{2})$ for each $f$ in $\mathcal{R}%
_{c}^{i}.$ By $\pi _{i}$ denote the natural projection of $X$ onto $X_{i},$ $%
i=1,2,3.$ Note that we have already dealt with the quotient spaces $X_{1}$, $%
X_{2}$ and the projections $\pi _{1},\pi _{2}$ in the previous section.
Recall that the relation on $X$, defined by setting $y_{1}\approx y_{2}$
if $y_{1}$ and $y_{2}$ belong to some path, is an equivalence relation and
the equivalence classes are called orbits. By $O(t)$ denote the orbit of $X$
containing $t.$ For $Y\subset X,$ let $var_{Y}\ f$ be the variation of a
function $f$ on the set $Y.$ That is,
\begin{equation*}
\underset{Y}{var}f=\sup\limits_{x,y\in Y}\left\vert f\left( x\right)
-f\left( y\right) \right\vert .
\end{equation*}
The following theorem is valid.
\bigskip
\textbf{Theorem 1.11.} \textit{Suppose that the space $\mathcal{R}_{c}(X)$
is proximinal in $C(X).$ Then there exists a positive real number $c$ such that}
\begin{equation*}
\sup_{t\in X}\underset{O\left( t\right) }{var}f\leq c\sup_{t\in X}\underset{\pi _{2}^{-1}\left( \pi _{2}\left( t\right) \right) }{var}f\eqno(1.51)
\end{equation*}%
\textit{for all $f$ in $\mathcal{R}_{c}^{1}.$}
\bigskip
\begin{proof}
The proof is based on the
following result of Marshall and O'Farrell (see \cite[Proposition 4]{108}): Let $A_{1}\ $and $A_{2}\
$be closed subalgebras of $C(X)\ $that contain the constants. Let $%
(X_{1},\pi _{1}),\ (X_{2},\pi _{2})\ $and $(X_{3},\pi _{3})\ $be the
quotient spaces and projections associated with the algebras $A_{1},$ $%
A_{2}\ $and $A_{3}=A_{1}\cap A_{2}\ $respectively. Then $A_{1}+A_{2}\ $is
closed in $C(X)\ $if and only if there exists a positive real number $c$
such that
\begin{equation*}
\sup\limits_{z\in X_{3}}\underset{\pi _{3}^{-1}\left( z\right) }{var}f\leq
c\sup\limits_{y\in X_{2}}\underset{\pi _{2}^{-1}\left( y\right) }{var}f\eqno%
(1.52)
\end{equation*}%
for all $f\ $in $A_{1}.$
If $\mathcal{R}_{c}(X)$ is proximinal in $C(X),$ then it is necessarily
closed and therefore, by the above proposition, (1.52) holds for the
algebras $A_{i}=\mathcal{R}_{c}^{i},\ i=1,2,3.$ The right-hand side of
(1.52) is equal to the right-hand side of (1.51). Let $t$ be some point in $X$ and $z=\pi _{3}(t).$ Since each function $f\in \mathcal{R}_{c}^{3}$ is
constant on the orbit of $t$ (note that $f$ is both of the form $g_{1}\left( \mathbf{a}^{1}\cdot \mathbf{x}\right) $ and of the form $g_{2}\left( \mathbf{a}^{2}\cdot \mathbf{x}\right) $), we have $O(t)\subset \pi
_{3}^{-1}(z).$ Hence,
\begin{equation*}
\sup_{t\in X}\underset{O\left( t\right) }{var}f\leq \sup\limits_{z\in X_{3}}\underset{\pi _{3}^{-1}\left( z\right) }{var}f.\eqno(1.53)
\end{equation*}
From (1.52) and (1.53) we obtain (1.51).
\end{proof}
Note that the inequality (1.52) provides a necessary condition that is no
weaker, but less practicable, than the one provided by the inequality (1.51). On
the other hand, there are many cases in which both the inequalities are
equivalent. For example, assume the lengths of irreducible paths of $X$ are
bounded by some positive integer $n_{0}$. In this case, it can be shown that
the inequality (1.52), hence (1.51), holds with the constant $c=\frac{n_{0}}{%
2}$ and moreover $O(t)=\pi _{3}^{-1}(z)$ for all $t\in X$, where $z=\pi
_{3}(t)$ (see the proof of \cite[Theorem 5]{53}). Therefore, the
inequalities (1.51) and (1.52) are equivalent for the considered class of
sets $X.$ The last argument shows that all the compact sets $X\subset $ $%
\mathbb{R}^{n}$ over which $\mathcal{R}_{c}(X)$ is not proximinal in $C(X)$
should be sought in the class of sets having irreducible paths consisting of
sufficiently many points. For example, let $I=[0;1]^{2}$ be the
unit square, $\mathbf{a}^{1}=(1;1)$, $\mathbf{a}^{2}=(1;\frac{1}{2}).$ Consider the path
\begin{equation*}
l_{k}=\{(1;0),(0;1),(\frac{1}{2};0),(0;\frac{1}{2}),(\frac{1}{4};0),...,(0;
\frac{1}{2^{k}})\}.
\end{equation*}
It is clear that $l_{k}$ is an irreducible path of length $2k+2$,
where $k$ may be arbitrarily large. Let $g_{k}$ be a continuous univariate function
on $\mathbb{R}$ satisfying the conditions: $g_{k}(\frac{1}{2^{k-i}} )=i,\
i=0,...,k,$ $g_{k}(t)=0$ if $t<\frac{1}{2^{k}},\ i-1\leq g_{k}(t)\leq i $ if
$t\in (\frac{1}{2^{k-i+1}},\frac{1}{2^{k-i}}),\ i=1,...,k,$ and $g_{k}(t)=k$
if $t>1.$ Then it can be easily verified that
\begin{equation*}
\sup_{t\in I}\underset{\pi _{2}^{-1}\left( \pi _{2}\left( t\right) \right) }{var}\,g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})\leq 1.\eqno(1.54)
\end{equation*}
Since $\max_{\mathbf{x}\in I}g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})=k,$ $\min_{\mathbf{x}\in I}g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})=0$ and $\underset{O\left( t_{1}\right) }{var}\,g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})=k$ for $t_{1}=(1;0),$ we obtain that
\begin{equation*}
\sup_{t\in I}\underset{O\left( t\right) }{var}\,g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})=k.\eqno(1.55)
\end{equation*}
Since $k$ may be arbitrarily large, from (1.54) and (1.55) it follows that the
inequality (1.51) cannot hold for all the functions $g_{k}(\mathbf{a}^{1}{\cdot }\mathbf{x})\in \mathcal{R}_{c}^{1}$ with a single constant $c$. Thus the space $\mathcal{R}_{c}(I)$
with the directions $\mathbf{a}^{1}=(1;1)$ and $\mathbf{a}^{2}=(1;\frac{1}{2})$ is not
proximinal in $C(I)$.
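To make the gap between (1.54) and (1.55) concrete, the following sketch (our own illustration) realizes $g_{k}$ by one admissible choice, $g_{k}(t)=\min \{k,\max \{0,\,k+\log _{2}t\}\}$, which satisfies all the conditions imposed on $g_{k}$ above. It checks that $l_{k}$ is a path whose consecutive points lie alternately on level sets of $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$, that the variation of $g_{k}(\mathbf{a}^{1}\cdot \mathbf{x})$ along $l_{k}$ equals $k$, and, by sampling the fibers of $\pi _{2}$ over the square $I$, that the fiber variation never exceeds $1$: on the level set $x_{1}+x_{2}/2=s$ the argument $x_{1}+x_{2}=s+x_{2}/2$ stays within the single octave $[s,2s]$.

```python
from math import log2

k = 12
a1, a2 = (1.0, 1.0), (1.0, 0.5)
dot = lambda a, p: a[0] * p[0] + a[1] * p[1]

def gk(t):
    # one concrete choice satisfying the conditions on g_k:
    # g_k(1/2^{k-i}) = i, g_k = 0 below 2^{-k}, g_k = k above 1, monotone
    if t <= 2.0 ** -k:
        return 0.0
    return min(float(k), k + log2(t))

# the irreducible path l_k = {(1,0), (0,1), (1/2,0), (0,1/2), ..., (0,2^-k)}
path = []
for j in range(k + 1):
    path.append((2.0 ** -j, 0.0))
    path.append((0.0, 2.0 ** -j))
assert len(path) == 2 * k + 2
for n in range(len(path) - 1):
    a = a1 if n % 2 == 0 else a2          # level sets alternate along l_k
    assert abs(dot(a, path[n]) - dot(a, path[n + 1])) < 1e-12

# variation of g_k(a^1 . x) along the path -- the left side of (1.55) -- is k
vals = [gk(dot(a1, p)) for p in path]
assert max(vals) - min(vals) == k

# sampled variation over the fibers {x in I : a^2 . x = s} -- cf. (1.54)
M = 300
worst = 0.0
for i in range(1, M):
    s = 1.5 * i / M
    fiber = [gk(s + x2 / 2.0) for x2 in (j / M for j in range(M + 1))
             if 0.0 <= s - x2 / 2.0 <= 1.0]
    if fiber:
        worst = max(worst, max(fiber) - min(fiber))
assert worst <= 1.0 + 1e-9
```

The fiber bound reflects the fact that $k+\log _{2}t$ changes by at most $1$ when $t$ moves within one octave, while along the path $t$ sweeps all $k$ octaves from $2^{-k}$ to $1$.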
It should be remarked that if a compact set $X\subset \mathbb{R}^{n}$
satisfies the hypothesis of Theorem 1.10, then the lengths of all irreducible
paths are uniformly bounded (see the proof of Theorem 1.10 and the lemma in \cite{35}). We have already seen that if the last condition does not hold, then
the proximinality of both $\mathcal{R}_{c}(X)$ in $C(X)$ and $\mathcal{R}%
_{b}(X)$ in $B(X)$ fail for some sets $X$. In addition to the examples given above
and in Section 1.5.2, one can easily construct many other examples of such
sets. All these examples, Theorems 1.9--1.11 and the subsequent
remarks justify the statement of the following conjecture:
\bigskip
\textbf{Conjecture. }\textit{Let $X$ be some subset of $\mathbb{R} ^{n}.$
The space $\mathcal{R}_{b}(X)$ is proximinal in $B(X)$ and the space $%
\mathcal{R}_{c}(X)$ is proximinal in $C(X)$ (in this case, $X$ is considered
to be compact) if and only if the lengths of all irreducible paths of $X$
are uniformly bounded.}
\bigskip
\textbf{Remark 1.2.} Medvedev's result (see \cite[p.58]{76}), which later
came to our attention, says, in particular, that the set $\mathcal{R}_{c}(X)$ is closed
in $C(X)$ if and only if the lengths of all irreducible paths of $X$ are
uniformly bounded. Thus, in the case of $C(X)$, the necessity part of the above
conjecture was proved by Medvedev.
\bigskip
\textbf{Remark 1.3.} Note that there are situations in which a continuous
function (a specific function on a specially constructed set) has an
extremal element in $\mathcal{R}_{b}(X)$, but not in $\mathcal{R}_{c}(X)$
(see \cite[p.73]{76}). One subsection of \cite{76} (see p.68 there) was
devoted to the proximinality of sums of two univariate functions with
continuous and bounded summands in the spaces of continuous and bounded
bivariate functions, respectively. If $X\subset \mathbb{R}^{2}$ and $\mathbf{a}^{1},\mathbf{a}^{2}$ are linearly independent directions in $\mathbb{R}^{2}$, then the linear transformation $y_{1}=\mathbf{a}^{1}\cdot \mathbf{x}$, $y_{2}=\mathbf{a}^{2}\cdot \mathbf{x}$ reduces the problems of
proximinality of $\mathcal{R}_{b}(X)$ in $B(X)$ and of $\mathcal{R}_{c}(X)$ in $C(X)$ to the problems considered in that subsection. But in general, when $X\subset \mathbb{R}^{n}$ and $n>2$, they cannot be reduced to those in
\cite{76}.
\bigskip
\section{On the approximation by weighted ridge functions}
In this section, we characterize the best $L_{2}$-approximation to a
multivariate function by linear combinations of ridge functions multiplied
by fixed weight functions. In the special case when the weight
functions are constants, we obtain explicit formulas for both the best
approximation and the approximation error.
\subsection{Problem statement}
Ridge approximation in $L_{2}$ began to be actively studied in the late 1990s by
K.I. Oskolkov \cite{114}, V.E. Maiorov \cite{102}, A. Pinkus \cite{118},
V.N. Temlyakov \cite{138}, P. Petrushev \cite{116} and other researchers.
Let $D$ be the unit disk in $\mathbb{R}^{2}$. In \cite{97}, Logan and Shepp,
among other results, gave a closed-form expression for the best $L_{2}$-approximation
to a function $f\in L_{2}\left( D\right) $ from the set $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $.
Their solution requires that the directions $\mathbf{a}^{1},...,\mathbf{a}%
^{r}$ be equally-spaced and involves finite sums of convolutions with
explicit kernels. In the $n$-dimensional case, we obtained an expression of
simpler form for the best $L_{2}$-approximation to square-integrable
multivariate functions over a certain domain, provided that $r=n$ and the
directions $\mathbf{a}^{1},...,\mathbf{a}^{r}$ are linearly independent (see
\cite{52}).
In this section, we consider the approximation by functions from the following more general set
\begin{equation*}
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};~w_{1},...,w_{r}\right)
=\left\{ \sum\limits_{i=1}^{r}w_{i}(\mathbf{x})g_{i}\left( \mathbf{a}%
^{i}\cdot \mathbf{x}\right) :g_{i}:\mathbb{R}\rightarrow \mathbb{R}%
,~i=1,...,r\right\} ,
\end{equation*}%
where $w_{1},...,w_{r}$ are fixed multivariate functions. We
characterize the best $L_{2}$-approximation from this set in the case $r\leq n.$ Then, in the special case when the weight functions $w_{1},...,w_{r}$ are constants, we prove two theorems giving explicit
formulas for the best approximation and the approximation error,
respectively. At present, we do not know how to approach
these problems in the remaining case $r>n.$
\bigskip
\subsection{Characterization of the best approximation}
Let $X$ be a subset of $\mathbb{R}^{n}$ with finite Lebesgue measure.
Consider the approximation of a function $f\left( \mathbf{x}\right) =f\left(
x_{1},...,x_{n}\right) $ in $L_{2}\left( X\right) $ by functions from the manifold $%
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};~w_{1},...,w_{r}\right) $%
, where $r\leq n.$ We suppose that the functions $w_{i}(\mathbf{x})$ and the
products $w_{i}(\mathbf{x})\cdot g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}%
\right) ,~i=1,...,r$, belong to the space $L_{2}\left( X\right) .$ Besides,
we assume that the vectors $\mathbf{a}^{1},...,\mathbf{a}^{r}$ are linearly
independent. We say that a function $g_{w}^{0}=\sum\limits_{i=1}^{r}w_{i}(%
\mathbf{x})g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) $ in $%
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};~w_{1},...,w_{r}\right) $
is the best approximation (or extremal) to $f$ if
\begin{equation*}
\left\Vert f-g_{w}^{0}\right\Vert _{L_{2}\left( X\right) }=\inf\limits_{g\in
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};~w_{1},...,w_{r}\right)
}\left\Vert f-g\right\Vert _{L_{2}\left( X\right) }.
\end{equation*}
Let the system of vectors $\{\mathbf{a}^{1},...,\mathbf{a}^{r},\mathbf{a%
}^{r+1},...,\mathbf{a}^{n}\}$ be a completion of the system $\{\mathbf{a}%
^{1},...,\mathbf{a}^{r}\}$ to a basis in $\mathbb{R}^{n}.$\ Let $%
J:X\rightarrow \mathbb{R}^{n}$ be the linear transformation given by the
formulas
\begin{equation*}
y_{i}=\mathbf{a}^{i}\cdot \mathbf{x,}\quad \,i=1,...,n.\eqno(1.56)
\end{equation*}%
Since the vectors $\mathbf{a}^{i},$ $i=1,...,n$, are linearly independent,
the transformation $J$ is an injection. The Jacobian $\det J$ of this transformation is a
nonzero constant.
Let the formulas
\begin{equation*}
x_{i}=\mathbf{b}^{i}\cdot \mathbf{y},\;\;i=1,...,n,
\end{equation*}%
stand for the solution of the system of linear equations (1.56) with respect to $x_{i},\;i=1,...,n.$
Introduce the notation
\begin{equation*}
Y=J\left( X\right)
\end{equation*}%
and
\begin{equation*}
Y_{i}=\left\{ y_{i}\in \mathbb{R}:\;\;y_{i}=\mathbf{a}^{i}\cdot \mathbf{x}%
,\;\;\mathbf{x}\in X\right\} ,\,i=1,...,n.
\end{equation*}
For any function $u\in L_{2}\left( X\right) ,$ put
\begin{equation*}
u^{\ast }=u^{\ast }\left( \mathbf{y}\right) \overset{def}{=}u\left( \mathbf{b%
}^{1}\cdot \mathbf{y},...,\mathbf{b}^{n}\cdot \mathbf{y}\right) .
\end{equation*}%
It is obvious that $u^{\ast }\in L_{2}\left( Y\right) .$ Besides,
\begin{equation*}
\int\limits_{Y}u^{\ast }\left( \mathbf{y}\right) d\mathbf{y}=\left\vert \det
J\right\vert \cdot \int\limits_{X}u\left( \mathbf{x}\right) d\mathbf{x}\eqno%
(1.57)
\end{equation*}%
and
\begin{equation*}
\left\Vert u^{\ast }\right\Vert _{L_{2}\left( Y\right) }=\left\vert \det
J\right\vert ^{1/2}\cdot \left\Vert u\right\Vert _{L_{2}\left( X\right) }.%
\eqno(1.58)
\end{equation*}%
Set
\begin{equation*}
L_{2}^{i}=\{w_{i}^{\ast }(\mathbf{y})g\left( y_{i}\right) :\ g:\mathbb{R}\rightarrow \mathbb{R},\ w_{i}^{\ast }(\mathbf{y})g\left( y_{i}\right) \in L_{2}(Y)\},~i=1,...,r.
\end{equation*}
We need the following auxiliary lemmas.
\bigskip
\textbf{Lemma 1.5.} \textit{Let $f\left( \mathbf{x}\right) \in
L_{2}\left( X\right) $. A function $\sum\limits_{i=1}^{r}w_{i}(\mathbf{x}%
)g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) $ is extremal to the
function $f\left( \mathbf{x}\right) $ if and only if \ $\sum%
\limits_{i=1}^{r}w_{i}^{\ast }(\mathbf{y})g_{i}^{0}\left( y_{i}\right) $ is
extremal from the space $L_{2}^{1}\mathit{\oplus }...\oplus L_{2}^{r}$ to
the function $f^{\ast }\left( \mathbf{y}\right) $.}
\bigskip
Due to (1.58) the proof of this lemma is obvious.
\bigskip
\textbf{Lemma 1.6.} \textit{Let $f\left( \mathbf{x}\right) \in
L_{2}\left( X\right) $. A function $\sum\limits_{i=1}^{r}w_{i}(\mathbf{x}%
)g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) $ is extremal to the
function $f\left( \mathbf{x}\right) $ if and only if}
\begin{equation*}
\int\limits_{X}\left( f\left( \mathbf{x}\right) -\sum\limits_{i=1}^{r}w_{i}(%
\mathbf{x})g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) \right)
w_{j}(\mathbf{x})h\left( \mathbf{a}^{j}\cdot \mathbf{x}\right) d\mathbf{x}%
=0\
\end{equation*}%
\textit{for any ridge function $h\left( \mathbf{a}^{j}\cdot \mathbf{x}\right) $ such that $w_{j}(\mathbf{x})h\left( \mathbf{a}^{j}\cdot \mathbf{x}\right) \in L_{2}\left( X\right) $, $j=1,...,r$.}
\bigskip
\textbf{Lemma 1.7.} \textit{The following formula is valid for the error
of approximation to a function $f\left( \mathbf{x}\right) $ in $L_{2}\left(
X\right) $ from $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{r};~w_{1},...,w_{r}\right) $:}
\begin{equation*}
E\left( f\right) =\left( \left\Vert f\left( \mathbf{x}\right) \right\Vert
_{L_{2}\left( X\right) }^{2}-\left\Vert \sum\limits_{i=1}^{r}w_{i}(\mathbf{x}%
)g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) \right\Vert
_{L_{2}\left( X\right) }^{2}\right) ^{\frac{1}{2}},
\end{equation*}%
\textit{where $\sum\limits_{i=1}^{r}w_{i}(\mathbf{x})g_{i}^{0}\left( \mathbf{%
a}^{i}\cdot \mathbf{x}\right) $ is the best approximation to $f\left(
\mathbf{x}\right) $.}
\bigskip
Lemmas 1.6 and 1.7 follow from well-known facts of functional
analysis: the best approximation to an element $x$ of a Hilbert space $H$
from a closed linear subspace $Z\subset H$ is the image of $x$ under the
orthogonal projection onto $Z$, and the sum of the squared norms of pairwise
orthogonal vectors equals the squared norm of their sum.
We say that $Y$ is an \textit{$r$-set} if it can be represented as $Y_{1}\times
...\times Y_{r}\times Y_{0},$ where $Y_{0}$ is some subset of the space $\mathbb{R}^{n-r}.$ In particular, $Y_{0}$ may be equal to $Y_{r+1}\times
...\times Y_{n},$ but this is not required. By $Y^{\left( i\right) }$ we
denote the Cartesian product of the sets $Y_{1},...,Y_{r},Y_{0}$ with $Y_{i}$ omitted, $i=1,...,r$. That is, $Y^{\left( i\right) }=Y_{1}\times ...\times
Y_{i-1}\times Y_{i+1}\times ...\times Y_{r}\times Y_{0},\ i=1,...,r$.
\bigskip
\textbf{Theorem 1.12.} \textit{Let $Y$ be an $r$-set. A function
$\sum\limits_{i=1}^{r}w_{i}(\mathbf{x})g_{i}^{0}\left( \mathbf{a}^{i}\cdot
\mathbf{x}\right) $ is the best approximation to $f(\mathbf{x)}$ if and only
if}
\begin{equation*}
g_{j}^{0}\left( y_{j}\right) =\frac{1}{\int\limits_{Y^{\left( j\right)
}}w_{j}^{\ast 2}(\mathbf{y})d\mathbf{y}^{\left( j\right) }}%
\int\limits_{Y^{\left( j\right) }}\left( f^{\ast }\left( \mathbf{y}\right)
-\sum\limits_{\substack{ i=1 \\ i\neq j}}^{r}w_{i}^{\ast }(\mathbf{y}%
)g_{i}^{0}\left( y_{i}\right) \right) w_{j}^{\ast }(\mathbf{y})d\mathbf{y}%
^{\left( j\right) },\eqno(1.59)
\end{equation*}%
\textit{for $j=1,...,r$.}
\bigskip
\begin{proof} \textit{Necessity.} Let a function $\sum\limits_{i=1}^{r}w_{i}(%
\mathbf{x})g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) $ be
extremal to $f$. Then by Lemma 1.5, the function $\sum%
\limits_{i=1}^{r}w_{i}^{\ast }(\mathbf{y})g_{i}^{0}\left( y_{i}\right) $ in $%
L_{2}^{1}\oplus ...\oplus L_{2}^{r}$ is extremal to $f^{\ast }$. By Lemma
1.6 and equality (1.57),
\begin{equation*}
\int\limits_{Y}f^{\ast }\left( \mathbf{y}\right) w_{j}^{\ast }(\mathbf{y}%
)h\left( y_{j}\right) d\mathbf{y}=\int\limits_{Y}w_{j}^{\ast }(\mathbf{y}%
)h\left( y_{j}\right) \sum\limits_{i=1}^{r}w_{i}^{\ast }(\mathbf{y}%
)g_{i}^{0}\left( y_{i}\right) d\mathbf{y}\eqno(1.60)
\end{equation*}%
for any product $w_{j}^{\ast }(\mathbf{y})h\left( y_{j}\right) $ in $%
L_{2}^{j},\;\;j=1,...,r$. Applying Fubini's theorem to the integrals in
(1.60), we obtain that
\begin{eqnarray*}
&&\int\limits_{Y_{j}}h\left( y_{j}\right) \left[ \int\limits_{Y^{\left(
j\right) }}f^{\ast }\left( \mathbf{y}\right) w_{j}^{\ast }(\mathbf{y})d%
\mathbf{y}^{\left( j\right) }\right] dy_{j} \\
&=&\int\limits_{Y_{j}}h\left( y_{j}\right) \left[ \int\limits_{Y^{\left(
j\right) }}w_{j}^{\ast }(\mathbf{y})\sum\limits_{i=1}^{r}w_{i}^{\ast }(%
\mathbf{y})g_{i}^{0}\left( y_{i}\right) d\mathbf{y}^{\left( j\right) }\right]
dy_{j}.
\end{eqnarray*}%
Since $h\left( y_{j}\right) $ is an arbitrary function such that $%
w_{j}^{\ast }(\mathbf{y})h\left( y_{j}\right) \in L_{2}^{j}$,
\begin{equation*}
\int\limits_{Y^{\left( j\right) }}f^{\ast }\left( \mathbf{y}\right)
w_{j}^{\ast }(\mathbf{y})d\mathbf{y}^{(j)}=\int\limits_{Y^{\left( j\right)
}}w_{j}^{\ast }(\mathbf{y})\sum\limits_{i=1}^{r}w_{i}^{\ast }(\mathbf{y}%
)g_{i}^{0}\left( y_{i}\right) d\mathbf{y}^{\left( j\right) },\;\;j=1,...,r.
\end{equation*}%
Therefore,
\begin{equation*}
\int\limits_{Y^{\left( j\right) }}w_{j}^{\ast 2}(\mathbf{y})g_{j}^{0}\left( {%
y_{j}}\right) d\mathbf{y}^{\left( j\right) }=\int\limits_{Y^{\left( j\right)
}}\left( f^{\ast }\left( \mathbf{y}\right) -\sum\limits_{\substack{ i=1 \\ %
i\neq j}}^{r}w_{i}^{\ast }(\mathbf{y})g_{i}^{0}\left( y_{i}\right) \right)
w_{j}^{\ast }(\mathbf{y})d\mathbf{y}^{\left( j\right) },
\end{equation*}%
for $j=1,...,r.$ Since the variable $y_{j}$ does not occur in $Y^{\left( j\right) }$, the factor $g_{j}^{0}\left( y_{j}\right) $ can be taken out of the integral on the left-hand side, and we obtain (1.59).
\textit{Sufficiency.} Note that all the equalities in the proof of the necessity can
be obtained in the reverse order. Thus, (1.60) can be obtained from (1.59).
Then by (1.57) and Lemma 1.6, we finally conclude that the function $%
\sum\limits_{i=1}^{r}w_{i}(\mathbf{x})g_{i}^{0}\left( \mathbf{a}^{i}\cdot
\mathbf{x}\right) $ is extremal to $f\left( \mathbf{x}\right) $.
\end{proof}
In the following, $\left\vert Q\right\vert $ will denote the Lebesgue
measure of a measurable set $Q.$ The following corollary is obvious.
\bigskip
\textbf{Corollary 1.6.} \textit{Let $Y$ be an }$r$\textit{-set. A
function $\sum\limits_{i=1}^{r}g_{i}^{0}\left( \mathbf{a}^{i}\cdot \mathbf{x}%
\right) $ in $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $
is the best approximation to $f(\mathbf{x)}$ if and only if}
\begin{equation*}
g_{j}^{0}\left( y_{j}\right) =\frac{1}{\left\vert Y^{\left( j\right)
}\right\vert }\int\limits_{Y^{\left( j\right) }}\left( f^{\ast }\left(
\mathbf{y}\right) -\sum\limits_{\substack{ i=1 \\ i\neq j}}%
^{r}g_{i}^{0}\left( y_{i}\right) \right) d\mathbf{y}^{\left( j\right)
},\;\;j=1,...,r.
\end{equation*}%
\bigskip
In \cite{52}, this corollary was proven for the case $r=n.$
\bigskip
\subsection{Formulas for the best approximation and approximation error}
In this section, we establish explicit formulas for both the
best approximation and approximation error, provided that the weight
functions are constants. In this case, since we vary over $g_{i},$ the set $%
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r};~w_{1},...,w_{r}\right) $
coincides with $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) .$
Thus, without loss of generality, we may assume that $w_{i}(\mathbf{x})=1$
for $i=1,...,r.$
For brevity of the further exposition, introduce the notation
\begin{equation*}
A=\int\limits_{Y}f^{\ast }\left( \mathbf{y}\right) d\mathbf{y}\text{ and \ }%
f_{i}^{\ast }=f_{i}^{\ast }(y_{i})=\int\limits_{Y^{\left( i\right) }}f^{\ast
}\left( \mathbf{y}\right) d\mathbf{y}^{\left( i\right) },~i=1,...,r.
\end{equation*}
The following theorem is a generalization of the main result of \cite%
{52} from the case $r=n$ to the cases $r<n.$
\bigskip
\textbf{Theorem 1.13.} \textit{Let $Y$ be an }$r$\textit{-set. Set the
functions}
\begin{equation*}
g_{1}^{0}\left( y_{1}\right) =\frac{1}{\left\vert Y^{\left( 1\right)
}\right\vert }f_{1}^{\ast }-\left( r-1\right) \frac{A}{\left\vert
Y\right\vert }
\end{equation*}%
\textit{and}
\begin{equation*}
g_{j}^{0}\left( y_{j}\right) =\frac{1}{\left\vert Y^{\left( j\right)
}\right\vert }f_{j}^{\ast },\;j=2,...,r.
\end{equation*}%
\textit{Then the function $\sum\limits_{i=1}^{r}g_{i}^{0}\left( \mathbf{a}%
^{i}\cdot \mathbf{x}\right) $ is the best approximation from $\mathcal{R}%
\left( \mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ to $f\left( \mathbf{x}%
\right) $.}
\bigskip
The proof is simple. It suffices to verify that
the functions $g_{j}^{0}\left( y_{j}\right) ,\;j=1,...,r$, satisfy the
conditions of Corollary 1.6. This becomes obvious once one notes that
\begin{equation*}
\sum\limits_{\underset{i\neq j}{i=1}}^{r}\frac{1}{\left\vert Y^{\left(
j\right) }\right\vert }\frac{1}{\left\vert Y^{\left( i\right) }\right\vert }%
\int\limits_{Y^{\left( j\right) }}\left[ \int\limits_{Y^{\left( i\right)
}}f^{\ast }\left( \mathbf{y}\right) d\mathbf{y}^{\left( i\right) }\right] d%
\mathbf{y}^{\left( j\right) }=\left( r-1\right) \frac{1}{\left\vert
Y\right\vert }\int\limits_{Y}f^{\ast }\left( \mathbf{y}\right) d\mathbf{y}
\end{equation*}%
for $j=1,...,r$.
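As a quick sanity check (our own numerical illustration, not part of the original text), take $r=n=2$, $\mathbf{a}^{i}=\mathbf{e}^{i}$, so that $X=Y=[0,1]^{2}$, $\det J=1$, $f^{\ast }=f$, and $\left\vert Y\right\vert =\left\vert Y^{\left( j\right) }\right\vert =1$, with the test function $f(y_{1},y_{2})=y_{1}y_{2}$. The sketch computes the candidate of Theorem 1.13 by midpoint quadrature and verifies the orthogonality criterion of Lemma 1.6 against several test functions $h$.

```python
# midpoint quadrature grid on Y = [0,1]^2
N = 200
h_step = 1.0 / N
pts = [(i + 0.5) * h_step for i in range(N)]
f = lambda y1, y2: y1 * y2                                 # test function

A = sum(f(u, v) for u in pts for v in pts) * h_step ** 2   # A ~ 1/4
f1 = {u: sum(f(u, v) for v in pts) * h_step for u in pts}  # f_1^*(y1) ~ y1/2
f2 = {v: sum(f(u, v) for u in pts) * h_step for v in pts}  # f_2^*(y2) ~ y2/2

# Theorem 1.13 with |Y| = |Y^{(1)}| = |Y^{(2)}| = 1:
g1 = lambda u: f1[u] - (2 - 1) * A
g2 = lambda v: f2[v]

# Lemma 1.6: the residual f - g1 - g2 must be orthogonal to every
# ridge function h(y_j); we test a few polynomial choices of h
for h in (lambda t: 1.0, lambda t: t, lambda t: t * t):
    r1 = sum((f(u, v) - g1(u) - g2(v)) * h(u)
             for u in pts for v in pts) * h_step ** 2
    r2 = sum((f(u, v) - g1(u) - g2(v)) * h(v)
             for u in pts for v in pts) * h_step ** 2
    assert abs(r1) < 1e-6 and abs(r2) < 1e-6
```

Here the residual is $(y_{1}-\frac{1}{2})(y_{2}-\frac{1}{2})$, which integrates to zero against any function of $y_{1}$ alone or of $y_{2}$ alone, exactly as Lemma 1.6 demands.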
\bigskip
\textbf{Theorem 1.14.} \textit{Let $Y$ be an $r$-set. Then the
error of approximation to a function $f(\mathbf{x})$ from the set $\mathcal{R}\left(
\mathbf{a}^{1},...,\mathbf{a}^{r}\right) $ can be calculated by the formula}
\begin{equation*}
E(f)=\left\vert \det J\right\vert ^{-1/2}\left( \left\Vert f^{\ast
}\right\Vert _{L_{2}(Y)}^{2}-\sum_{i=1}^{r}\frac{1}{\left\vert Y^{\left(
i\right) }\right\vert ^{2}}\left\Vert f_{i}^{\ast }\right\Vert
_{L_{2}(Y)}^{2}+(r-1)\frac{A^{2}}{\left\vert Y\right\vert }\right) ^{1/2}.
\end{equation*}
\bigskip
\begin{proof} From (1.58), Lemma 1.7 and Theorem 1.13, it follows that
\begin{equation*}
E(f)=\left\vert \det J\right\vert ^{-1/2}\left( \left\Vert f^{\ast
}\right\Vert _{L_{2}(Y)}^{2}-I\right) ^{1/2},\eqno(1.61)
\end{equation*}%
where
\begin{equation*}
I=\left\Vert \sum_{i=1}^{r}\frac{1}{\left\vert Y^{\left( i\right)
}\right\vert }f_{i}^{\ast }-(r-1)\frac{A}{\left\vert Y\right\vert }%
\right\Vert _{L_{2}(Y)}^{2}.
\end{equation*}%
The integral $I$ can be written as a sum of the following four integrals:
\begin{eqnarray*}
I_{1} &=&\sum_{i=1}^{r}\frac{1}{\left\vert Y^{\left( i\right) }\right\vert
^{2}}\left\Vert f_{i}^{\ast }\right\Vert
_{L_{2}(Y)}^{2},~I_{2}=\sum_{i=1}^{r}\sum\limits_{\substack{ j=1 \\ j\neq i
}}^{r}\frac{1}{\left\vert Y^{\left( i\right) }\right\vert }\frac{1}{%
\left\vert Y^{\left( j\right) }\right\vert }\int\limits_{Y}f_{i}^{\ast
}f_{j}^{\ast }d\mathbf{y,} \\
I_{3} &=&-2(r-1)\frac{1}{\left\vert Y\right\vert }A\sum_{i=1}^{r}\frac{1}{%
\left\vert Y^{\left( i\right) }\right\vert }\int\limits_{Y}f_{i}^{\ast }d%
\mathbf{y,}~I_{4}=(r-1)^{2}\frac{A^{2}}{\left\vert Y\right\vert }.
\end{eqnarray*}%
It is not difficult to verify that
\begin{equation*}
\int\limits_{Y}f_{i}^{\ast }f_{j}^{\ast }d\mathbf{y=}\left\vert Y_{0}\times
\prod\limits_{\substack{ k=1 \\ k\neq i,j}}^{r}Y_{k}\right\vert A^{2},\text{
for }i,j=1,...,r,~i\neq j,\eqno(1.62)
\end{equation*}%
and
\begin{equation*}
\int\limits_{Y}f_{i}^{\ast }d\mathbf{y}=\left\vert Y_{0}\times \prod\limits
_{\substack{ k=1 \\ k\neq i}}^{r}Y_{k}\right\vert A,\text{ for }i=1,...,r.%
\eqno(1.63)
\end{equation*}
Considering (1.62) and (1.63) in the expressions of $I_{2}$ and $I_{3}$
respectively, we obtain that
\begin{equation*}
I_{2}=r(r-1)\frac{A^{2}}{\left\vert Y\right\vert }\text{ and }I_{3}=-2r(r-1)%
\frac{A^{2}}{\left\vert Y\right\vert }.
\end{equation*}%
Therefore, since $r(r-1)-2r(r-1)+(r-1)^{2}=-(r-1)$,
\begin{equation*}
I=I_{1}+I_{2}+I_{3}+I_{4}=\sum_{i=1}^{r}\frac{1}{\left\vert Y^{\left(
i\right) }\right\vert ^{2}}\left\Vert f_{i}^{\ast }\right\Vert
_{L_{2}(Y)}^{2}-(r-1)\frac{A^{2}}{\left\vert Y\right\vert }.
\end{equation*}%
Now the last equality together with (1.61) completes the proof. \end{proof}
\textbf{Example.} Consider the following set
\begin{equation*}
X=\{\mathbf{x}\in \mathbb{R}^{4}:y_{i}=y_{i}(\mathbf{x})\in \lbrack
0;1],~i=1,...,4\},
\end{equation*}%
where
\begin{equation*}
\left\{
\begin{array}{c}
y_{1}=x_{1}+x_{2}+x_{3}-x_{4} \\
y_{2}=x_{1}+x_{2}-x_{3}+x_{4} \\
y_{3}=x_{1}-x_{2}+x_{3}+x_{4} \\
y_{4}=-x_{1}+x_{2}+x_{3}+x_{4}%
\end{array}%
\right. \eqno(1.64)
\end{equation*}%
Let the function
\begin{equation*}
f=8x_{1}x_{2}x_{3}x_{4}-\sum_{i=1}^{4}x_{i}^{4}+2\sum_{i=1}^{3}%
\sum_{j=i+1}^{4}x_{i}^{2}x_{j}^{2}
\end{equation*}%
be given on $X.$ Consider the approximation of this function by functions from $%
\mathcal{R}\left( \mathbf{a}^{1},\mathbf{a}^{2},\mathbf{a}^{3}\right) ,%
$ where $\mathbf{a}^{1}=(1;1;1;-1),~\mathbf{a}^{2}=(1;1;-1;1),~%
\mathbf{a}^{3}=(1;-1;1;1).$ Putting $\mathbf{a}^{4}=(-1;1;1;1),$ we complete
the system of vectors $\mathbf{a}^{1},\mathbf{a}^{2},\mathbf{a}^{3}$ to the
basis $\{\mathbf{a}^{1},\mathbf{a}^{2},\mathbf{a}^{3},\mathbf{a}^{4}\}$ in $%
\mathbb{R}^{4}.$ The linear transformation $J$ defined by (1.64) maps the
set $X$ onto the set $Y=[0;1]^{4}.$ The inverse transformation is given by
the formulas
\begin{equation*}
\left\{
\begin{array}{c}
x_{1}=\frac{1}{4}y_{1}+\frac{1}{4}y_{2}+\frac{1}{4}y_{3}-\frac{1}{4}y_{4} \\
x_{2}=\frac{1}{4}y_{1}+\frac{1}{4}y_{2}-\frac{1}{4}y_{3}+\frac{1}{4}y_{4} \\
x_{3}=\frac{1}{4}y_{1}-\frac{1}{4}y_{2}+\frac{1}{4}y_{3}+\frac{1}{4}y_{4} \\
x_{4}=-\frac{1}{4}y_{1}+\frac{1}{4}y_{2}+\frac{1}{4}y_{3}+\frac{1}{4}y_{4}%
\end{array}%
\right.
\end{equation*}%
It can be easily verified that $f^{\ast }=y_{1}y_{2}y_{3}y_{4}$ and $Y$
is a $3$-set with $Y_{i}=[0;1],$ $i=1,2,3.$ Besides, $Y_{0}=[0;1].$ After
easy calculations we obtain that $A=\allowbreak \frac{1}{16};~$\ $%
f_{i}^{\ast }=\allowbreak \frac{1}{8}y_{i}$ for $i=1,2,3;$ $\det J=-16;$ $%
\left\Vert f^{\ast }\right\Vert _{L_{2}(Y)}^{2}=\frac{1}{81};$ $\left\Vert
f_{i}^{\ast }\right\Vert _{L_{2}(Y)}^{2}=\frac{1}{192},$ $i=1,2,3.$ Now from
Theorems 1.13 and 1.14 it follows that the function $\frac{1}{8}%
\sum_{i=1}^{3}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right) -\allowbreak
\frac{1}{8}$ is the best approximation from $\mathcal{R}\left( \mathbf{a}^{1},%
\mathbf{a}^{2},\mathbf{a}^{3}\right) $ to $f$ and $E(f)=\frac{1}{576}\sqrt{2}%
\sqrt{47}.$
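As a sanity check (not part of the original argument), the computations of this Example can be verified symbolically; the only inputs are $f^{\ast }=y_{1}y_{2}y_{3}y_{4}$ on $Y=[0,1]^{4}$, $r=3$, the best approximation of Theorem 1.13, and $\left\vert \det J\right\vert =16$:

```python
# Symbolic verification of the Example: A = 1/16, f_i* = y_i/8,
# ||f*||^2 = 1/81, and E(f)^2 = 94/576^2, per Theorem 1.14.
import sympy as sp

y = sp.symbols('y1:5')                     # y1, y2, y3, y4
fstar = y[0] * y[1] * y[2] * y[3]

def int_unit(expr, vars_):
    """Integrate expr over [0,1] in each of the given variables."""
    for v in vars_:
        expr = sp.integrate(expr, (v, 0, 1))
    return expr

A = int_unit(fstar, y)                     # |Y| = 1, so A = int_Y f* dy
assert A == sp.Rational(1, 16)
assert int_unit(fstar**2, y) == sp.Rational(1, 81)

r, abs_detJ = 3, 16
norms2 = []
for i in range(r):                         # f_i*: integrate out the other variables
    fi = int_unit(fstar, [v for j, v in enumerate(y) if j != i])
    assert fi == y[i] / 8
    norms2.append(int_unit(fi**2, y))      # each equals 1/192

# Theorem 1.14 (here all |Y^(i)| = |Y| = 1):
E2 = (int_unit(fstar**2, y) - sum(norms2) + (r - 1) * A**2) / abs_detJ
assert E2 == sp.Rational(94, 576**2)       # E(f) = sqrt(94)/576 = (1/576)*sqrt(2)*sqrt(47)

# Cross-check against the best approximation of Theorem 1.13:
g = (y[0] + y[1] + y[2]) / 8 - sp.Rational(1, 8)
assert int_unit((fstar - g)**2, y) / abs_detJ == E2
```

All quantities are exact rationals, so the assertions test the stated values with no rounding.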
\bigskip
\textbf{Remark 1.4.} Most of the material in this chapter is to be found in
\cite{52,53,54,49,50,47,66,64}.
\newpage
\chapter{The smoothness problem in ridge function representation}
This chapter discusses the following open problem raised in Buhmann and
Pinkus \cite{12}, and Pinkus \cite[p. 14]{117}. Assume we are given a
function $f(\mathbf{x})=f(x_{1},...,x_{n})$ of the form
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}f_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),\eqno(2.1)
\end{equation*}%
where the $\mathbf{a}^{i},$ $i=1,...,k,$ are pairwise linearly independent
vectors (directions) in $\mathbb{R}^{n}$, $f_{i}$ are arbitrarily behaved
univariate functions and $\mathbf{a}^{i}\cdot \mathbf{x}$ are standard inner
products. Assume, in addition, that $f$ is of a certain smoothness class,
that is, $f\in C^{s}(\mathbb{R}^{n})$, where $s\geq 0$ (with the convention
that $C^{0}(\mathbb{R}^{n})=C(\mathbb{R}^{n})$). Is it true that there will
always exist $g_{i}\in C^{s}(\mathbb{R})$ such that
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})\text{ ?}%
\eqno(2.2)
\end{equation*}
In this chapter, we solve this problem up to some multivariate polynomial.
In the special case $n=2$, we see that this multivariate polynomial can be
written as a sum of polynomial ridge functions with the given directions $%
\mathbf{a}^{i}$. In addition, we find various conditions on the directions $%
\mathbf{a}^{i}$ guaranteeing a positive solution to the problem. We also
consider the question on constructing $g_{i}$ using the information about
the known functions $f_{i}$.
Most of the material of this chapter may be found in \cite{2,1,A2,A1,120}.
\bigskip
\section{A solution to the problem up to a multivariate polynomial}
In this section, we solve the above problem up to a multivariate polynomial.
That is, we show that if (2.1) holds for $f\in C^{s}(\mathbb{R}^{n})$ and
arbitrarily behaved $f_{i}$, then there exist $g_{i}\in C^{s}(\mathbb{R})$
such that
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})+P(\mathbf{x%
}),
\end{equation*}%
where $P(\mathbf{x})$ is a polynomial of degree at most $k-1$. In the
special case $n=2$, we see that this multivariate polynomial can be written
as a sum of polynomial ridge functions with the given directions $\mathbf{a}%
^{i}$ and thus (2.2) holds with $g_{i}\in C^{s}(\mathbb{R})$.
\subsection{A brief overview of some results}
We start this subsection with the simple observation that for $k=1$ and $k=2$
the smoothness problem is easily solved. Indeed for $k=1$ by choosing $%
\mathbf{c}\in \mathbb{R}^{n}$ satisfying $\mathbf{a}^{1}\cdot \mathbf{c}=1$,
we have that $f_{1}(t)=f(t\mathbf{c)}$ is in $C^{s}(\mathbb{R})$. The same
argument can be carried out for the case $k=2.$ In this case, since the
vectors $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ are linearly independent,
there exists a vector $\mathbf{c}\in \mathbb{R}^{n}$ satisfying $\mathbf{a}%
^{1}\cdot \mathbf{c}=1$ and $\mathbf{a}^{2}\cdot \mathbf{c}=0.$ Therefore,
we obtain that the function $f_{1}(t)=f(t\mathbf{c)}-f_{2}(0)$ is in the
class $C^{s}(\mathbb{R})$. Similarly, one can verify that $f_{2}\in C^{s}(%
\mathbb{R})$.
The above cases with one and two ridge functions in (2.1) show that the
functions $f_{i}$ inherit smoothness properties of the given $f$. The
picture is entirely different when the number of directions $k\geq 3$. Already
for $k=3$, there are infinitely smooth functions which decompose into sums of
very badly behaved ridge functions. This phenomenon stems from the classical
Cauchy Functional Equation (CFE). This equation,%
\begin{equation*}
h(x+y)=h(x)+h(y),\text{ }h:\mathbb{R\rightarrow R},\eqno(2.3)
\end{equation*}%
looks very simple and has a class of simple solutions $h(x)=cx,$ $c\in
\mathbb{R}$. However, it easily follows from Hamel basis theory that CFE
also has a large class of wild solutions. These solutions are called
\textquotedblleft wild" because they are extremely pathological. They are,
for example, not continuous at a point, not monotone on an interval, not
bounded on any set of positive measure (see, e.g., \cite{A}). Let $h_{1}$ be
any wild solution of the equation (2.3). Then the zero function can be
represented as%
\begin{equation*}
0=h_{1}(x)+h_{1}(y)-h_{1}(x+y).\eqno(2.4)
\end{equation*}%
Note that the functions involved in (2.4) are bivariate ridge functions with
the directions $(1,0)$, $(0,1)$ and $(1,1)$, respectively. This example
shows that for $k\geq 3$ the functions $f_{i}$ in (2.1) may not inherit
smoothness properties of the function $f$, which in the case of (2.4) is the
identically zero function. Thus the above problem arises naturally.
However, it was shown by some authors that, additional conditions on $f_{i}$
or the directions $\mathbf{a}^{i}$ guarantee smoothness of the
representation (2.1). It was first proved by Buhmann and Pinkus \cite{12}
that if in (2.1) $f\in C^{s}(\mathbb{R}^{n})$, $s\geq k-1$ and $f_{i}\in
L_{loc}^{1}(\mathbb{R)}$ for each $i$, then $f_{i}\in C^{s}(\mathbb{R)}$ for
$i=1,...,k.$ Later Pinkus \cite{120} found a strong relationship between CFE
and the problem of smoothness in ridge function representation. He
generalized extensively the previous result of Buhmann and Pinkus \cite{12}.
He showed that the solution is quite simple and natural if the functions $%
f_{i}$ are taken from a certain class $\mathcal{B}$ of real-valued functions
defined on $\mathbb{R}$. $\mathcal{B}$ includes, for example, the set of
continuous functions, the set of bounded functions, the set of Lebesgue
measurable functions (for the precise definition of $\mathcal{B}$ see the
next subsection). The result of Pinkus \cite{120} states that if in (2.1) $%
f\in C^{s}(\mathbb{R}^{n})$ and each $f_{i}\in \mathcal{B}$, then
necessarily $f_{i}\in C^{s}(\mathbb{R)}$ for $i=1,...,k$.
Note that severe restrictions on the directions $\mathbf{a}^{i}$ also
guarantee smoothness of the representation (2.1). For example, in (2.1) the
inclusions $f_{i}\in C^{s}(\mathbb{R})$, $i=1,...,k,$ are automatically
valid if the directions $\mathbf{a}^{i}$ are linearly independent; if
these directions are not linearly independent, then there exists $f\in C^{s}(%
\mathbb{R}^{n})$ of the form (2.1) such that the $f_{i}\notin C^{s}(\mathbb{R%
}),$ $i=1,...,k$ (see \cite{86}). Indeed, if the directions $\mathbf{a}^{i}$
are linearly independent, then for each $i=1,...,k,$ we can choose a vector $%
\mathbf{b}^{i}$ such that $\mathbf{b}^{i}\cdot \mathbf{a}^{i}=1,$ but at the
same time $\mathbf{b}^{i}\cdot \mathbf{a}^{j}=0,$ for all $j=1,...,k,$ $%
j\neq i$. Putting $\mathbf{x}=\mathbf{b}^{i}t$ in (2.1) yields that
\begin{equation*}
f(\mathbf{b}^{i}t)=f_{i}(t)+\sum_{j=1,j\neq i}^{k}f_{j}(0),\text{ }i=1,...,k.
\end{equation*}%
This shows that all the functions $f_{i}$ and $f$ belong to the same
smoothness class. If the directions $\mathbf{a}^{i}$ are not linearly
independent, then there exist numbers $\lambda _{1},...,\lambda _{k}$ such
that $\sum_{i=1}^{k}\left\vert \lambda _{i}\right\vert >0$ and $%
\sum_{i=1}^{k}\lambda _{i}\mathbf{a}^{i}=\mathbf{0}$. Let $h$ be any wild
solution of CFE. Then it is not difficult to verify that
\begin{equation*}
0=\sum_{i=1}^{k}h_{i}(\mathbf{a}^{i}\cdot \mathbf{x}),
\end{equation*}%
where $h_{i}(t)=h(\lambda _{i}t),$ $i=1,...,k.$ Indeed, by the additivity of $h$,
$\sum_{i=1}^{k}h(\lambda _{i}\mathbf{a}^{i}\cdot \mathbf{x})=h\left(
\sum_{i=1}^{k}\lambda _{i}\mathbf{a}^{i}\cdot \mathbf{x}\right) =h(0)=0$. Note
that in the last representation, the zero function is infinitely smooth, while
all the functions $h_{i}$ are highly nonsmooth.
The above result of Pinkus was a starting point for further research on
continuous and smooth sums of ridge functions. Much work in this direction
was done by Konyagin and Kuleshov \cite{86,K2}, and Kuleshov \cite{K4}. They
mainly analyze the continuity of $f_{i}$, that is, the question of whether and
when continuity of $f$ guarantees the continuity of $f_{i}$. There are also
other results concerning different properties, rather than continuity, of $%
f_{i}$. Most results in \cite{86,K2,K4} involve certain subsets (convex open
sets, convex bodies, etc.) of $\mathbb{R}^{n}$ instead of only $\mathbb{R}%
^{n}$ itself.
In \cite{1}, Aliev and Ismailov gave a partial solution to the smoothness
problem. Their solution covers the cases in which $s\geq 2$ and $k-1$ of the
given $k$ directions are linearly independent.
Kuleshov \cite{88} generalized Aliev and Ismailov's result \cite[Theorem 2.3]%
{1} to all possible cases of $s$. That is, he proved that if a function $%
f\in C^{s}(\mathbb{R}^{n})$, where $s\geq 0$, is of the form (2.1) and some $%
(k-1)$-tuple of the given $k$ directions $\mathbf{a}^{i}$ forms a
linearly independent system, then there exist $g_{i}\in C^{s}(\mathbb{R})$, $%
i=1,...,k $, such that (2.2) holds (see \cite[Theorem 3]{88}). In Section
2.2 we give a new constructive proof of Kuleshov's result.
\bigskip
\subsection{A result of A. Pinkus}
In \cite{120}, A. Pinkus considered the smoothness problem in ridge function
representation. For a given function $f$ $:\mathbb{R}^{n}\rightarrow \mathbb{%
R}$, he posed and partially answered the following question. If $f$ belongs
to some smoothness class and (2.1) holds, what can we say about the
smoothness of the functions $f_{i}$? He proved that for a large class of
representing functions $f_{i}$, these $f_{i}$ are smooth. That is, if
a priori we assume that in the representation (2.1) the functions $f_{i}$ are
of a certain class of \textquotedblleft reasonably well behaved functions",
then they have the same degree of smoothness as the function $f.$ As the
mentioned class of \textquotedblleft reasonably well behaved functions" one
may take, e.g., the set of functions that are continuous at a point, the set
of Lebesgue measurable functions, etc. All these classes arise from the
class $\mathcal{B}$ considered by Pinkus \cite{120} and the classical theory
of CFE. In \cite{120}, $\mathcal{B}$ denotes any linear space of real-valued
functions $u$ defined on $\mathbb{R}$, closed under translation, such that
if there is a function $v\in C(\mathbb{R)}$ for which $u-v$ satisfies CFE,
then $u-v$ is necessarily linear, i.e. $u(x)-v(x)=cx,$ for some constant $%
c\in \mathbb{R}$. Such a definition of $\mathcal{B}$ is required in the
proof of the following theorem.
\bigskip
\textbf{Theorem 2.1} (Pinkus \cite{120}). \textit{Assume $f\in C^{s}(\mathbb{%
R}^{n})$ is of the form (2.1). Assume, in addition, that each $f_{i}\in
\mathcal{B}$. Then necessarily $f_{i}\in C^{s}(\mathbb{R)}$ for $i=1,...,k.$}
\bigskip
\begin{proof} We prove this theorem by induction on $k.$ The result is
valid when $k=1$. Indeed, taking any direction $\mathbf{c}$ such that $%
\mathbf{a}^{1}\cdot \mathbf{c}=1$ and putting $\mathbf{x}=\mathbf{c}t$ in (2.1), we
obtain that $f_{1}(t)=f(\mathbf{c}t)\in C^{s}(\mathbb{R})$. Assume that the
result is valid for $k-1.$ Let us show that it is valid for $k$.
Choose any vector $\mathbf{e}\in \mathbb{R}^{n}$ satisfying $\mathbf{e\cdot a}%
^{k}=0$ and $\mathbf{e\cdot a}^{i}=b_{i}\neq 0$, for $i=1,...,k-1.$ Clearly,
there exists a vector with this property. The property of $\mathbf{e}$
enables us to write that
\begin{equation*}
f(\mathbf{x+e}t)-f(\mathbf{x)=}\sum_{i=1}^{k-1}f_{i}(\mathbf{a}^{i}\cdot
\mathbf{x}+b_{i}t)-f_{i}(\mathbf{a}^{i}\cdot \mathbf{x}).
\end{equation*}%
Thus%
\begin{equation*}
F(\mathbf{x}):=f(\mathbf{x+e}t)-f(\mathbf{x})=\sum_{i=1}^{k-1}h_{i}(\mathbf{a%
}^{i}\cdot \mathbf{x}),
\end{equation*}%
where%
\begin{equation*}
h_{i}(y)=f_{i}(y+b_{i}t)-f_{i}(y)\text{, }i=1,...,k-1.
\end{equation*}%
Since $f_{i}\in \mathcal{B}$ and $\mathcal{B}$ is translation invariant, $%
h_{i}\in \mathcal{B}$. In addition, since $F\in C^{s}(\mathbb{R}^{n})$, it
follows by our induction assumption that $h_{i}\in C^{s}(\mathbb{R})$. Note
that this inclusion is valid for all $t\in \mathbb{R}$.
In \cite{B1}, de Bruijn proved that if for any $c\in \mathbb{R}$ the
difference $u(y+c)-u(y)$ ($u$ is any real function on $\mathbb{R}$) belongs
to the class $C^{s}(\mathbb{R})$, then $u$ is necessarily of the form $u=v+r$%
, where $v\in $ $C^{s}(\mathbb{R})$ and $r$ satisfies CFE. Thus each
function $f_{i}$ is of the form $f_{i}=v_{i}+r_{i}$, where $v_{i}\in $ $%
C^{s}(\mathbb{R})$ and $r_{i}$ satisfies CFE. By our assumption, each $f_{i}$
is in $\mathcal{B}$, and from the definition of $\mathcal{B}$ it follows
that $r_{i}=f_{i}-v_{i}$ is a linear function. Thus $f_{i}=v_{i}+r_{i}$,
where both $v_{i},r_{i}\in $ $C^{s}(\mathbb{R})$, implying that $f_{i}\in
C^{s}(\mathbb{R})$. This is valid for $i=1,...,k-1$, and hence also for $i=k$.
\end{proof}
\textbf{Remark 2.1. }In de Bruijn \cite{B1,B2}, various classes of real-valued
functions $\mathcal{D}$ are delineated with the property that if $%
\bigtriangleup _{t}f=f(\cdot +t)-f(\cdot )\in \mathcal{D}$ for all $t\in
\mathbb{R}$, then $f-s\in \mathcal{D}$, for some $s$ satisfying CFE (for
such classes see the next subsection). Some translation invariant classes
among them are $C^{\infty }(\mathbb{R})$ functions; analytic functions;
algebraic polynomials; trigonometric polynomials. Theorem 2.1 can be
suitably restated for any of these classes.
\bigskip
\subsection{Polynomial functions of $k$-th order}
Given $h_{1},...,h_{k}\in \mathbb{R}$, we define inductively the difference
operator $\Delta _{h_{1}...h_{k}}$ as follows
\begin{eqnarray*}
\Delta _{h_{1}}f(x) &:&=f(x+h_{1})-f(x), \\
\Delta _{h_{1}...h_{k}}f &:&=\Delta _{h_{k}}(\Delta _{h_{1}...h_{k-1}}f),%
\text{ }f:\mathbb{R\rightarrow R}.
\end{eqnarray*}%
If $h_{1}=\cdots=h_{k}=h,$ then we write briefly $\Delta _{h}^{k}f$ instead
of $\Delta _{\underset{k\text{ times}}{\underbrace{h...h}}}f$. For various
properties of difference operators see \cite[Section 15.1]{Kuc}.
\bigskip
\textbf{Definition 2.1 }(see \cite{Kuc}). \textit{A function $f:\mathbb{R\rightarrow
R}$ is called a polynomial function of order $k$ ($k\in \mathbb{N}$%
) if for every $x\in \mathbb{R}$ and $h\in \mathbb{R}$ we have}
\begin{equation*}
\Delta _{h}^{k+1}f(x)=0.
\end{equation*}
It can be shown that if $\Delta _{h}^{k+1}f=0$ for any $h\in \mathbb{R}$,
then $\Delta _{h_{1}...h_{k+1}}f=0$ for any $h_{1},...,h_{k+1}\in \mathbb{R}$
(see \cite[Theorem 15.3.3]{Kuc}). A polynomial of degree at most $k$ is a
polynomial function of order $k$ (see \cite[Theorem 15.9.4]{Kuc}). The
polynomial functions generalize ordinary polynomials, and reduce to the
latter under mild regularity assumptions. For example, if a polynomial
function is continuous at one point, or bounded on a set of positive
measure, then it is continuous at all points (see \cite{Cies, Kurepa}), and
therefore is a polynomial of degree at most $k$ (see \cite[Theorem 15.9.4]{Kuc}).
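For instance, one can check Definition 2.1 directly on an ordinary polynomial (a small numerical illustration, not from the text; the polynomial and step sizes below are chosen arbitrarily):

```python
# A degree-3 polynomial is a polynomial function of order k = 3:
# Delta_h^{k+1} f = 0 for every step h, while Delta_h^k f is the
# nonzero constant k! * (leading coefficient) * h^k.
def delta(f, h):
    """The difference operator Delta_h."""
    return lambda x: f(x + h) - f(x)

def delta_pow(f, h, k):
    """The iterated difference Delta_h^k."""
    for _ in range(k):
        f = delta(f, h)
    return f

f = lambda x: x**3 - 2 * x + 5           # degree k = 3, integer coefficients
for h in (1, 2, 5):
    for x in range(-3, 4):
        assert delta_pow(f, h, 4)(x) == 0    # Delta_h^{k+1} f = 0
assert delta_pow(f, 1, 3)(0) == 6            # 3! * 1 * 1^3
```

All arithmetic is over exact integers, so the vanishing of the fourth difference is tested exactly.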
Basic results concerning polynomial functions are due to S. Mazur and W. Orlicz
\cite{Maz}, McKiernan \cite{Mc}, and Djokovi\'{c} \cite{Djok}. The following
theorem, which we will use in the sequel, yields implicitly the general
construction of polynomial functions.
\bigskip
\textbf{Theorem 2.2 }(see \cite[Theorems 15.9.1 and 15.9.2]{Kuc}). \textit{A
function $f:\mathbb{R\rightarrow R}$ is a polynomial function of order $k$
if and only if it admits a representation}
\begin{equation*}
f=f_{0}+f_{1}+...+f_{k},
\end{equation*}%
\textit{where $f_{0}$ is a constant and $f_{j}:\mathbb{R\rightarrow R}$, $%
j=1,...,k$, are diagonalizations of $j$-additive symmetric functions $F_{j}:%
\mathbb{R}^{j}\mathbb{\rightarrow R}$, i.e.,}
\begin{equation*}
f_{j}(x)=F_{j}(x,...,x).
\end{equation*}
\bigskip
Note that a function $F_{p}:\mathbb{R}^{p}\mathbb{\rightarrow R}$ is called $%
p$-additive if for every $j,$ $1\leq j\leq p,$ and for every $%
x_{1},...,x_{p},y_{j}\in \mathbb{R}$
\begin{equation*}
F(x_{1},...,x_{j}+y_{j},...,x_{p})=F(x_{1},...,x_{p})+F(x_{1},...,x_{j-1},y_{j},x_{j+1},...,x_{p}),
\end{equation*}%
i.e., $F$ is additive in each of its variables $x_{j}$ (see \cite[p.363]%
{Kuc}). A simple example of a $p$-additive function is given by the product
\begin{equation*}
f_{1}(x_{1})\times\cdots\times f_{p}(x_{p}),
\end{equation*}%
where the univariate functions $f_{j},$ $j=1,...,p$, are additive.
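As a minimal symbolic illustration of this definition (with the additive factors taken linear, $f_{j}(x)=c_{j}x$, an assumption made here only so that the check is computable; wild additive functions cannot be evaluated by a program):

```python
# Verify that F(x1,...,xp) = f1(x1)*...*fp(xp) with additive factors
# f_j(x) = c_j x is p-additive, i.e. additive in each coordinate slot.
import sympy as sp

p = 3
xs = sp.symbols('x1:4')                  # x1, x2, x3
t = sp.symbols('t')
c = sp.symbols('c1:4')                   # c1, c2, c3

def F(*args):
    out = sp.Integer(1)
    for j in range(p):
        out *= c[j] * args[j]
    return out

for j in range(p):
    shifted = list(xs); shifted[j] = xs[j] + t   # x_j replaced by x_j + t
    slot_t = list(xs); slot_t[j] = t             # x_j replaced by t
    # F(..., x_j + t, ...) = F(..., x_j, ...) + F(..., t, ...)
    assert sp.expand(F(*shifted) - F(*xs) - F(*slot_t)) == 0
```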
Following de Bruijn, we say that a class $\mathcal{D}$ of real functions has
the \textit{difference property} if any function $f:\mathbb{R\rightarrow R}$
such that $\bigtriangleup _{h}f\in \mathcal{D}$ for all $h\in \mathbb{R}$,
admits a decomposition $f=g+S$, where $g\in \mathcal{D}$ and $S$ satisfies
the Cauchy Functional Equation (2.3). Several classes with the difference
property are investigated in de Bruijn \cite{B1,B2}. Some of these classes
are:
\smallskip
1) $C(\mathbb{R)}$, continuous functions;
2) $C^{s}(\mathbb{R)}$, functions with continuous derivatives up to order $s$%
;
3) $C^{\infty }(\mathbb{R)}$, infinitely differentiable functions;
4) analytic functions;
5) functions which are absolutely continuous on any finite interval;
6) functions having bounded variation over any finite interval;
7) algebraic polynomials;
8) trigonometric polynomials;
9) Riemann integrable functions.
\smallskip
A natural generalization of classes with the difference property are classes
of functions with the difference property of $k$-th order.
\bigskip
\textbf{Definition 2.2 }(see \cite{Gajda}). \textit{A class $\mathcal{F}$ is said to
have the difference property of $k$-th order if any
function $f:\mathbb{R\rightarrow R}$ such that $\bigtriangleup _{h}^{k}f\in
\mathcal{F}$ for all $h\in \mathbb{R}$, admits a decomposition $f=g+H$,
where $g\in \mathcal{F}$ and $H$ is a polynomial function of $k$-th order.}
\bigskip
It is not difficult to see that the class $\mathcal{F}$ has the difference
property of first order if and only if it has the difference property in de
Bruijn's sense. There arises a natural question: which of the above classes
have difference properties of higher orders? Gajda \cite{Gajda} considered
this question in its general form, for functions defined on a locally
compact Abelian group and showed that for any $k\in \mathbb{N}$, continuous
functions have the difference property of $k$-th order (see \cite[Theorem 4]%
{Gajda}). The proof of this result is based on several lemmas, in
particular, on the following lemma, which we will also use in the sequel.
\bigskip
\textbf{Lemma 2.1.} (see \cite[Lemma 5]{Gajda}). \textit{For each $k\in
\mathbb{N}$ the class of all continuous functions defined on $\mathbb{R}$
has the difference property of $k$-th order.}
\bigskip
In fact, Gajda \cite{Gajda} proved this lemma for Banach space valued
functions, but already the simplest case, with the space $\mathbb{R}$, contains
all the difficulties. Unfortunately, the proof of the lemma has an essential gap.
The author of \cite{Gajda} tried to reduce the proof to $\mod1$ periodic
functions, but made a mistake in proving the continuity of the difference $%
\Delta _{h_{1}...h_{k-1}}(f-f^{\ast })$. Here $f^{\ast }:\mathbb{%
R\rightarrow R}$ is a $\mod1$ periodic function defined on the interval $%
[0,1)$ as $f^{\ast }(x)=f(x)$ and extended to the whole $\mathbb{R}$ with
the period $1$. That is, $f^{\ast }(x)=f(x)$ for $x\in \lbrack 0,1)$ and $%
f^{\ast }(x+1)=f^{\ast }(x)$ for $x\in \mathbb{R}$. In the proof, the author
of \cite{Gajda} takes a point $x\in \lbrack m,m+1)$ and writes that
\begin{equation*}
\Delta _{h_{1}...h_{k-1}}(f-f^{\ast })(x)=\Delta
_{h_{1}...h_{k-1}}(f(x)-f(x-m)),
\end{equation*}%
which is not valid. Even though $f^{\ast }(x)=f(x-m)$ for any $x\in \lbrack
m,m+1)$, the differences $\Delta _{h_{1}...h_{k-1}}f^{\ast }(x)$ and $\Delta
_{h_{1}...h_{k-1}}f(x-m)$ are completely different, since the latter may
involve values of $f$ at points outside $[0,1)$, which have no relationship
with the definition of $f^{\ast }$.
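The failure is easy to exhibit on a concrete function (a hypothetical instance chosen here purely for illustration: $f(x)=x^{2}$, one difference, $m=1$). The periodization $f^{\ast }$ wraps its argument modulo $1$, while the shifted function $f(x-m)$ does not, so the two differences disagree:

```python
# For f(x) = x^2 and x in [m, m+1) with m = 1, Delta_h f*(x) differs
# from Delta_h f(x-1): f*(x+h) wraps x+h modulo 1, f(x+h-1) does not.
def f(x):
    return x * x

def fstar(x):
    return f(x % 1)            # mod-1 periodic extension of f restricted to [0,1)

x, h = 1.5, 0.7                # x in [1, 2), and x + h >= 2 forces a wrap
lhs = fstar(x + h) - fstar(x)  # = f(0.2) - f(0.5), argument wrapped
rhs = f(x + h - 1) - f(x - 1)  # = f(1.2) - f(0.5), no wrapping
assert abs(lhs - rhs) > 1      # the two expressions are genuinely different
```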
In the next section, we give a new proof for Lemma 2.1 (see Theorem 2.3
below). We hope that our proof is free from mathematical errors and thus the
above lemma itself is valid.
\bigskip
\subsection{Some auxiliary results on polynomial functions}
In this section, we do further research on polynomial functions and prove
some auxiliary results.
\bigskip
\textbf{Lemma 2.2.} \textit{If $f:\mathbb{R\rightarrow R}$ is a polynomial
function of order $k$, then for any $p\in $ $\mathbb{N}$ and any fixed $\xi
_{1},...,\xi _{p}\in \mathbb{R}$, the function}%
\begin{equation*}
g(x_{1},...,x_{p})=f(\xi _{1}x_{1}+\cdots +\xi _{p}x_{p}),
\end{equation*}%
\textit{considered on the $p$ dimensional space $\mathbb{Q}^{p}$ of rational
vectors, is an ordinary polynomial of degree at most $k$.}
\bigskip
\begin{proof} By Theorem 2.2,
\begin{equation*}
f=\sum_{m=0}^{k}f_{m},\eqno(2.5)
\end{equation*}%
where $f_{0}$ is a constant and $f_{m}:\mathbb{R\rightarrow R}$, $m=1,...,k$,
are diagonalizations of $m$-additive symmetric functions $F_{m}:\mathbb{R}%
^{m}\mathbb{\rightarrow R}$, i.e.,
\begin{equation*}
f_{m}(x)=F_{m}(x,...,x).
\end{equation*}%
For an $m$-additive function $F_{m}$ the equality
\begin{equation*}
F_{m}(\xi _{1},...,\xi _{i-1},r\xi _{i},\xi _{i+1},...,\xi _{m})=rF_{m}(\xi
_{1},...,\xi _{m})
\end{equation*}%
holds for all $i=1,...,m$ and any $r\in \mathbb{Q}$, $\xi _{i}\in $ $\mathbb{%
R}$, $i=1,...,m$ (see \cite[Theorem 13.4.1]{Kuc}). Using this, it is not
difficult to verify that for any $(x_{1},...,x_{p})\in \mathbb{Q}^{p}$,
\begin{eqnarray*}
f_{m}(\xi _{1}x_{1}+\cdots +\xi _{p}x_{p}) &=&F_{m}(\xi _{1}x_{1}+\cdots
+\xi _{p}x_{p},...,\xi _{1}x_{1}+\cdots +\xi _{p}x_{p}) \\
&=&\sum_{\substack{ 0\leq s_{i}\leq m,~\overline{i=1,p} \\ s_{1}+\cdots
+s_{p}=m}}A_{s_{1}...s_{p}}F_{m}(\underset{s_{1}}{\underbrace{\xi
_{1},...,\xi _{1}}},...,\underset{s_{p}}{\underbrace{\xi _{p},...,\xi _{p}}}%
)x_{1}^{s_{1}}...x_{p}^{s_{p}}.
\end{eqnarray*}%
Here $A_{s_{1}...s_{p}}$ are some coefficients, namely $%
A_{s_{1}...s_{p}}=m!/(s_{1}!...s_{p}!).$ Considering the last formula in
(2.5), we conclude that the function $g(x_{1},...,x_{p})$, restricted to $%
\mathbb{Q}^{p}$, is a polynomial of degree at most $k$.
\end{proof}
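A small symbolic illustration of Lemma 2.2 (the irrational directions $\xi _{1}=\sqrt{2}$, $\xi _{2}=\sqrt{3}$ and $f(t)=t^{k}$ are chosen here as an example; $t^{k}$ is a polynomial function of order $k$):

```python
# For f(t) = t^k and p = 2, g(x1, x2) = f(xi1*x1 + xi2*x2) expands
# into an ordinary polynomial of total degree k; its coefficients are
# multinomial factors times values of the k-additive function, here
# products of powers of xi1 and xi2 (real, not necessarily rational).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xi1, xi2 = sp.sqrt(2), sp.sqrt(3)
k = 3
g = sp.expand((xi1 * x1 + xi2 * x2)**k)
assert sp.Poly(g, x1, x2).total_degree() == k
# e.g. the coefficient of x1^2 * x2 is C(3,2) * xi1^2 * xi2 = 6*sqrt(3)
assert g.coeff(x1, 2).coeff(x2, 1) == 6 * sp.sqrt(3)
```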
\textbf{Lemma 2.3.} \textit{Assume $f$ is a polynomial function of order $k$%
. Then there exists a polynomial function $H$ of order $k+1$ such that $%
H(0)=0$ and}
\begin{equation*}
f(x)=H(x+1)-H(x).\eqno(2.6)
\end{equation*}
\bigskip
\begin{proof} Consider the function
\begin{equation*}
H(x):=xf(x)+\sum_{i=1}^{k}(-1)^{i}\frac{x(x+1)...(x+i)}{(i+1)!}\Delta
_{1}^{i}f(x).\eqno(2.7)
\end{equation*}%
Clearly, $H(0)=0.$ We are going to prove that $H$ is a polynomial function
of order $k+1$ and satisfies (2.6).
Let us first show that for any polynomial function $g$ of order $m$ the
function $G_{1}(x)=xg(x)$ is a polynomial function of order $m+1.$ Indeed,
for any $h_{1},...,h_{m+2}\in \mathbb{R}$ we can write that
\begin{equation*}
\Delta _{h_{1}...h_{m+2}}G_{1}(x)=(x+h_{1}+\cdots +h_{m+2})\Delta
_{h_{1}...h_{m+2}}g(x)
\end{equation*}%
\begin{equation*}
+\sum_{i=1}^{m+2}h_{i}\Delta
_{h_{1}...h_{i-1}h_{i+1...}h_{m+2}}g(x).\eqno(2.8)
\end{equation*}%
The last formula is verified directly by using the known product property of
differences, that is, the equality
\begin{equation*}
\Delta _{h}(g_{1}g_{2})=g_{1}\Delta _{h}g_{2}+g_{2}\Delta _{h}g_{1}+\Delta
_{h}g_{1}\Delta _{h}g_{2}.\eqno(2.9)
\end{equation*}%
Now since $g$ is a polynomial function of order $m$, all summands in (2.8)
are equal to zero; hence we obtain that $G_{1}(x)$ is a polynomial function
of order $m+1$. By induction, we can prove that the function $%
G_{p}(x)=x^{p}g(x)$ is a polynomial function of order $m+p.$ Since $\Delta
_{1}^{i}f(x)$ in (2.7) is a polynomial function of order $k-i$, it follows
that all summands in (2.7) are polynomial functions of order $k+1$.
Therefore, $H(x)$ is a polynomial function of order $k+1$.
Now let us prove (2.6). Considering the property (2.9) in (2.7) we can write
that%
\begin{equation*}
\Delta _{1}H(x)=\left[ f(x)+(x+1)\Delta _{1}f(x)\right]
\end{equation*}%
\begin{equation*}
+\sum_{i=1}^{k}(-1)^{i}\left[ \frac{(x+1)...(x+i+1)}{(i+1)!}\Delta
_{1}^{i+1}f(x)+\Delta _{1}\left( \frac{x(x+1)...(x+i)}{(i+1)!}\right) \Delta
_{1}^{i}f(x)\right] .\eqno(2.10)
\end{equation*}
Note that in (2.10)
\begin{equation*}
\Delta _{1}\left( \frac{x(x+1)...(x+i)}{(i+1)!}\right) =\frac{(x+1)...(x+i)}{%
i!}.
\end{equation*}%
Considering this and the assumption $\Delta _{1}^{k+1}f(x)=0$, it follows
from (2.10) that
\begin{equation*}
\Delta _{1}H(x)=f(x),
\end{equation*}%
that is, (2.6) holds.
\end{proof}
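The construction (2.7) can be tested symbolically on an ordinary polynomial $f$ of degree $k$ (which is in particular a polynomial function of order $k$); this is only a sanity check of the formula, not part of the proof:

```python
# Check formula (2.7): for a polynomial f of degree k, the function
# H(x) = x f(x) + sum_{i=1}^{k} (-1)^i x(x+1)...(x+i)/(i+1)! * Delta_1^i f(x)
# satisfies H(0) = 0 and H(x+1) - H(x) = f(x), as stated in (2.6).
import sympy as sp
from math import factorial

x = sp.symbols('x')

def delta1(expr, i):
    """The i-fold forward difference Delta_1^i in the variable x."""
    for _ in range(i):
        expr = sp.expand(expr.subs(x, x + 1) - expr)
    return expr

k = 3
f = x**3 + 2 * x - 7                     # any polynomial of degree k
H = x * f
for i in range(1, k + 1):
    rising = sp.Integer(1)
    for j in range(i + 1):               # the product x(x+1)...(x+i)
        rising *= (x + j)
    H += (-1)**i * rising * delta1(f, i) / factorial(i + 1)

assert sp.expand(H.subs(x, 0)) == 0                  # H(0) = 0
assert sp.expand(H.subs(x, x + 1) - H - f) == 0      # Delta_1 H = f
```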
\bigskip
The next lemma is due to Gajda \cite{Gajda}.
\bigskip
\textbf{Lemma 2.4 }(see \cite[Corollary 1]{Gajda}). \textit{Let $f:$ $%
\mathbb{R\rightarrow R}$ be a $\mod1$ periodic function such that, for any $%
h_{1},...,h_{k}\in \mathbb{R}$, $\Delta _{h_{1}...h_{k}}f$ is continuous.
Then there exist a continuous function $g:$ $\mathbb{R\rightarrow R}$ and a
polynomial function $H$ of $k$-th order such that $f=g+H$.}
\bigskip
The following theorem generalizes de Bruijn's theorem (see \cite[Theorem 1.1]%
{B1}) on the difference property of continuous functions and shows that
Gajda's above lemma (see Lemma 2.1) is valid. Note that the main result of
\cite{Gajda} also uses this theorem.
\bigskip
\textbf{Theorem 2.3.} \textit{Assume for any $h_{1},...,h_{k}\in \mathbb{R}$%
, the difference $\Delta _{h_{1}...h_{k}}f(x)$ is a continuous function of
the variable $x$. Then there exist a function $g\in C(\mathbb{R})$ and a
polynomial function $H$ of $k$-th order with the property $H(0)=0$ such that}
\begin{equation*}
f=g+H.
\end{equation*}
\bigskip
\begin{proof} We prove this theorem by induction. For $k=1$, the theorem
is the result of de Bruijn: if $f$ is such that, for each $h$, $\Delta
_{h}f(x)$ is a continuous function of $x$, then it can be written in the
form $g+H$, where $g$ is continuous and $H$ is additive (that is, satisfies
the Cauchy Functional Equation). Assume that the theorem is valid for $k-1.$
Let us prove it for $k$. Without loss of generality we may assume that $%
f(0)=f(1)$. Otherwise, we can prove the theorem for $f_{0}(x)=f(x)-\left[
f(1)-f(0)\right] x$ and then automatically obtain its validity for $f$.
Consider the function
\begin{equation*}
F_{1}(x)=f(x+1)-f(x)\text{, }x\in \mathbb{R}.\eqno(2.11)
\end{equation*}%
Since for\ any $h_{1},...,h_{k}\in \mathbb{R}$, $\Delta _{h_{1}...h_{k}}f(x)$
is a continuous function of $x$ and $\Delta _{h_{1}...h_{k-1}}F_{1}=\Delta
_{h_{1}...h_{k-1}1}f$, the difference $\Delta _{h_{1}...h_{k-1}}F_{1}(x)$
will be a continuous function of $x$, as well. By assumption, there exist a
function $g_{1}\in C(\mathbb{R})$ and a polynomial function $H_{1}$ of $%
(k-1) $-th order with the property $H_{1}(0)=0$ such that
\begin{equation*}
F_{1}=g_{1}+H_{1}.\eqno(2.12)
\end{equation*}%
It follows from Lemma 2.3 that there exists a polynomial function $H_{2}$ of
order $k$ such that $H_{2}(0)=0$ and
\begin{equation*}
H_{1}(x)=H_{2}(x+1)-H_{2}(x).\eqno(2.13)
\end{equation*}%
Substituting (2.13) in (2.12) we obtain that
\begin{equation*}
F_{1}(x)=g_{1}(x)+H_{2}(x+1)-H_{2}(x).\eqno(2.14)
\end{equation*}%
It follows from (2.11) and (2.14) that
\begin{equation*}
g_{1}(x)=\left[ f(x+1)-H_{2}(x+1)\right] -\left[ f(x)-H_{2}(x)\right] .\eqno%
(2.15)
\end{equation*}
Consider the function
\begin{equation*}
F_{2}=f-H_{2}.\eqno(2.16)
\end{equation*}%
Since $H_{2}$ is a polynomial function of order $k$ and for any $%
h_{1},...,h_{k}\in \mathbb{R}$ the difference $\Delta _{h_{1}...h_{k}}f(x)$
is a continuous function of $x$, we obtain that $\Delta
_{h_{1}...h_{k}}F_{2}(x)$ is also a continuous function of $x$. In addition,
since $f(0)=f(1)$ and $H_{2}(0)=H_{2}(1)=0$, it follows from (2.16) that $%
F_{2}(0)=F_{2}(1)$. We will use these properties of $F_{2}$ below.
Let us write (2.15) in the form
\begin{equation*}
g_{1}(x)=F_{2}(x+1)-F_{2}(x),\eqno(2.17)
\end{equation*}%
and define the following $\mod1$ periodic function
\begin{eqnarray*}
F^{\ast }(x) &=&F_{2}(x)\text{ for }x\in \lbrack 0,1), \\
F^{\ast }(x+1) &=&F^{\ast }(x)\text{ for }x\in \mathbb{R}.
\end{eqnarray*}
Consider the function
\begin{equation*}
F=F_{2}-F^{\ast }.\eqno(2.18)
\end{equation*}%
Let us show that $F\in C(\mathbb{R})$. Indeed, since $F(x)=0$ for $x\in
\lbrack 0,1)$, $F$ is continuous on $(0,1)$. Consider now the interval $%
[1,2)$. For any $x\in \lbrack 1,2)$, by the definition of $F^{\ast }$ and
(2.17) we can write that
\begin{equation*}
F(x)=F_{2}(x)-F_{2}(x-1)=g_{1}(x-1).\eqno(2.19)
\end{equation*}%
Since $g_{1}\in C(\mathbb{R})$, it follows from (2.19) that $F$ is
continuous on $(1,2)$. Note that by (2.17) and the equality $F_{2}(0)=F_{2}(1)$
we have $g_{1}(0)=0$; hence $F(1)=g_{1}(0)=0$. Since $F\equiv 0$ on $[0,1)$,
$F(1)=0$ and $F$ is continuous on $(1,2),$ we obtain that $F$ is
continuous on $(0,2)$. Consider the interval $[2,3)$.
For any $x\in \lbrack 2,3)$ we can write that
\begin{equation*}
F(x)=F_{2}(x)-F_{2}(x-2)=g_{1}(x-1)+g_{1}(x-2).\eqno(2.20)
\end{equation*}%
Since $g_{1}\in C(\mathbb{R})$, $F$ is continuous on $(2,3)$. Note that by
(2.19) $\lim_{x\rightarrow 2-}F(x)=g_{1}(1)$ and by (2.20) and the equality
$g_{1}(0)=0$, $F(2)=g_{1}(1).$ We obtain from these arguments that $F$ is
continuous on $(0,3)$. In the same way, we can prove that $F$ is continuous
on $(0,m)$ for any $m\in \mathbb{N}$.
Similar arguments can be used to prove the continuity of $F$ on $(-m,0)$ for
any $m\in \mathbb{N}$. We show it for the first interval $[-1,0)$. For any $%
x\in \lbrack -1,0)$ by the definition of $F^{\ast }$ and (2.17) we can write
that
\begin{equation*}
F(x)=F_{2}(x)-F_{2}(x+1)=-g_{1}(x).
\end{equation*}%
Since $g_{1}\in C(\mathbb{R})$, it follows that $F$ is continuous on $(-1,0)$.
Besides, $\lim_{x\rightarrow 0-}F(x)=-g_{1}(0)=0.$ This shows that $F$ is
continuous on $(-1,1)$, since $F\equiv 0$ on $[0,1).$ Combining all the
above arguments, we conclude that $F\in C(\mathbb{R})$.
Since $F\in C(\mathbb{R})$ and $\Delta _{h_{1}...h_{k}}F_{2}(x)$ is a
continuous function of $x$, we obtain from (2.18) that $\Delta
_{h_{1}...h_{k}}F^{\ast }(x)$ is also a continuous function of $x.$ By Lemma
2.4, there exist a function $g_{2}\in C(\mathbb{R})$ and a polynomial
function $H_{3}$ of order $k$ such that
\begin{equation*}
F^{\ast }=g_{2}+H_{3}.\eqno(2.21)
\end{equation*}%
It follows from (2.16), (2.18) and (2.21) that
\begin{equation*}
f=F+g_{2}+H_{2}+H_{3}.\eqno(2.22)
\end{equation*}
Introduce the notation
\begin{eqnarray*}
H(x) &=&H_{2}(x)+H_{3}(x)-H_{3}(0), \\
g(x) &=&F(x)+g_{2}(x)+H_{3}(0).
\end{eqnarray*}%
Obviously, $g\in C(\mathbb{R})$ and $H(0)=0$. It follows from (2.22) and the
above notation that
\begin{equation*}
f=g+H.
\end{equation*}%
This completes the proof of the theorem.
\end{proof}
\bigskip
\subsection{Main results}
We start this subsection with the following lemma.
\bigskip
\textbf{Lemma 2.5.} \textit{Assume we are given pairwise linearly
independent vectors $\mathbf{a}^{i},$ $i=1,...,k,$ and a function $f\in C(%
\mathbb{R}^{n})$ of the form (2.1) with arbitrarily behaved univariate
functions $f_{i}$. Then for any $h_{1},...,h_{k-1}\in \mathbb{R}$, and all
indices $i=1,...,k$, $\Delta _{h_{1}...h_{k-1}}f_{i}\in C(\mathbb{R})$.}
\bigskip
\begin{proof} We prove this lemma for the function $f_{k}.$ It can be
proven for the other functions $f_{i}$ in the same way. Let $%
h_{1},...,h_{k-1}\in \mathbb{R}$ be given. Since the vectors $\mathbf{a}^{i}$
are pairwise linearly independent, for each $j=1,...,k-1,$ there is a vector
$\mathbf{b}^{j}$ such that $\mathbf{b}^{j}\cdot \mathbf{a}^{j}=0$ and $%
\mathbf{b}^{j}\cdot \mathbf{a}^{k}\neq 0$. It is not difficult to see that
for any $\lambda \in \mathbb{R}$, $\Delta _{\lambda \mathbf{b}^{j}}f_{j}(%
\mathbf{a}^{j}\cdot \mathbf{x})=0.$ Therefore, for any $\lambda
_{1},...,\lambda _{k-1}\in \mathbb{R}$, we obtain from (2.1) that
\begin{equation*}
\Delta _{\lambda _{1}\mathbf{b}^{1}...\lambda _{k-1}\mathbf{b}^{k-1}}f(%
\mathbf{x})=\Delta _{\lambda _{1}\mathbf{b}^{1}...\lambda _{k-1}\mathbf{b}%
^{k-1}}f_{k}(\mathbf{a}^{k}\cdot \mathbf{x}).\eqno(2.23)
\end{equation*}%
Note that in the multivariate setting the difference operator $\Delta _{%
\mathbf{h}^{1}...\mathbf{h}^{k}}f(\mathbf{x})$ is defined in the same way as
in the previous section. If in (2.23) we take
\begin{eqnarray*}
\mathbf{x} &\mathbf{=}&\frac{\mathbf{a}^{k}}{\left\Vert \mathbf{a}%
^{k}\right\Vert ^{2}}t\text{, }t\in \mathbb{R}, \\
\lambda _{j} &=&\frac{h_{j}}{\mathbf{a}^{k}\cdot \mathbf{b}^{j}}\text{, }%
j=1,...,k-1,
\end{eqnarray*}%
we will obtain that $\Delta _{h_{1}...h_{k-1}}f_{k}\in C(\mathbb{R})$.
\end{proof}
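The reduction in this proof is easy to carry out numerically. Below is a minimal Python sketch (the directions $\mathbf{a}^{i}$, the ridge profiles $f_{i}$ and the increments are hypothetical choices, not taken from the text, with $n=2$ and $k=3$) verifying that the mixed difference of $f$ along $\lambda _{1}\mathbf{b}^{1},\lambda _{2}\mathbf{b}^{2}$ collapses to a univariate difference of $f_{3}$.

```python
import numpy as np

def delta(f, h, x):
    """First difference of f along the increment h: f(x + h) - f(x)."""
    return f(x + h) - f(x)

# Hypothetical pairwise linearly independent directions in R^2 (k = 3).
a1, a2, a3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
f1, f2, f3 = np.cos, np.sin, lambda t: t ** 3   # arbitrary ridge profiles

def f(x):
    return f1(a1 @ x) + f2(a2 @ x) + f3(a3 @ x)

# Choose b^1, b^2 with b^j . a^j = 0 and b^j . a^3 != 0, as in the proof.
b1, b2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])

h1, h2 = 0.3, -0.7                      # prescribed increments for f3
lam1, lam2 = h1 / (a3 @ b1), h2 / (a3 @ b2)

t = 0.4
x = a3 / (a3 @ a3) * t                  # then a^3 . x = t

# The mixed difference along lam1*b1, lam2*b2 annihilates the ridge terms
# f1 and f2, leaving the second difference of f3 with steps h1, h2 at t.
lhs = delta(lambda y: delta(f, lam1 * b1, y), lam2 * b2, x)
rhs = f3(t + h1 + h2) - f3(t + h1) - f3(t + h2) + f3(t)
```

The equality of `lhs` and `rhs` is exactly the identity (2.23) evaluated at the point and increments chosen in the proof.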
The following theorem is valid.
\bigskip
\textbf{Theorem 2.4.} \textit{Assume a function $f\in C(\mathbb{R}^{n})$ is
of the form (2.1). Then there exist continuous functions $g_{i}:\mathbb{%
R\rightarrow R}$, $i=1,...,k$, and a polynomial $P(\mathbf{x})$ of degree at
most $k-1$ such that}
\begin{equation*}
f(\mathbf{x})=\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})+P(\mathbf{x%
}).\eqno(2.24)
\end{equation*}
\bigskip
\begin{proof} By Lemma 2.5 and Theorem 2.3, for each $i=1,...,k$, there
exists a function $g_{i}\in C(\mathbb{R})$ and a polynomial function $H_{i}$
of $(k-1)$-th order with the property $H_{i}(0)=0$ such that
\begin{equation*}
f_{i}=g_{i}+H_{i}.\eqno(2.25)
\end{equation*}
Consider the function
\begin{equation*}
F(\mathbf{x})=f(\mathbf{x})-\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x%
}).\eqno(2.26)
\end{equation*}%
It follows from (2.1), (2.25) and (2.26) that
\begin{equation*}
F(\mathbf{x})=\sum_{i=1}^{k}H_{i}(\mathbf{a}^{i}\cdot \mathbf{x}).\eqno(2.27)
\end{equation*}
Denote the restrictions of the multivariate functions $H_{i}(\mathbf{a}%
^{i}\cdot \mathbf{x})$ to the space $\mathbb{Q}^{n}$ by $P_{i}(\mathbf{x})$,
respectively. By Lemma 2.2, the functions $P_{i}(\mathbf{x})$ are ordinary
polynomials of degree at most $k-1$. Since the space $\mathbb{Q}^{n}$ is
dense in $\mathbb{R}^{n}$, and the functions $F(\mathbf{x})$, $P_{i}(\mathbf{%
x})$, $i=1,...,k$, are continuous on $\mathbb{R}^{n}$, and the equality
\begin{equation*}
F(\mathbf{x})=\sum_{i=1}^{k}P_{i}(\mathbf{x}),\eqno(2.28)
\end{equation*}%
holds for all $\mathbf{x}\in \mathbb{Q}^{n}$, we obtain that (2.28) holds
also for all $\mathbf{x}\in \mathbb{R}^{n}$. Now (2.24) follows from (2.26)
and (2.28) by putting $P=\sum_{i=1}^{k}P_{i}$.
\end{proof}
Now we generalize Theorem 2.4 from $C(\mathbb{R}^{n})$ to any space $C^{s}(%
\mathbb{R}^{n})$ of $s$-th order continuously differentiable functions.
\bigskip
\textbf{Theorem 2.5.} \textit{Assume $f\in C^{s}(\mathbb{R}^{n})$ is of the
form (2.1). Then there exist functions $g_{i}\in C^{s}(\mathbb{R})$, $%
i=1,...,k$, and a polynomial $P(\mathbf{x})$ of degree at most $k-1$ such
that (2.24) holds.}
\bigskip
The proof is based on Theorems 2.1 and 2.4. On the one hand, it follows from
Theorem 2.4 that the $s$-th order continuously differentiable function $f-P$
can be expressed as $\sum_{i=1}^{k}g_{i}(\mathbf{a}^{i}\cdot \mathbf{x})$
with continuous $g_{i}$. On the other hand, since the class $\mathcal{B}$ in
Theorem 2.1 can, in particular, be taken as $C(\mathbb{R}),$ it follows that
$g_{i}\in C^{s}(\mathbb{R})$.
\bigskip
Note that Theorem 2.5 solves the problem posed in Buhmann and Pinkus \cite%
{12} and Pinkus \cite[p.14]{117} up to a polynomial. The following theorem
shows that in the two dimensional setting $n=2$ it solves the problem
completely.
\bigskip
\textbf{Theorem 2.6.} \textit{Assume a function $f\in C^{s}(\mathbb{R}^{2})$
is of the form}
\begin{equation*}
f(x,y)=\sum_{i=1}^{k}f_{i}(a_{i}x+b_{i}y),
\end{equation*}%
\textit{where $(a_{i},b_{i})$ are pairwise linearly independent vectors in $%
\mathbb{R}^{2}$ and $f_{i}$ are arbitrary univariate functions. Then there
exist functions $g_{i}\in C^{s}(\mathbb{R})$, $i=1,...,k$, such that}
\begin{equation*}
f(x,y)=\sum_{i=1}^{k}g_{i}(a_{i}x+b_{i}y).\eqno(2.29)
\end{equation*}
\bigskip
The proof of this theorem is not difficult. First we apply Theorem 2.5 and
obtain that
\begin{equation*}
f(x,y)=\sum_{i=1}^{k}\overline{g}_{i}(a_{i}x+b_{i}y)+P(x,y),\eqno(2.30)
\end{equation*}%
where $\overline{g}_{i}\in C^{s}(\mathbb{R})$ and $P(x,y)$ is a bivariate
polynomial of degree at most $k-1$. Then we use the known fact that a
bivariate polynomial $P(x,y)$ of degree $k-1$ is decomposed into a sum of
ridge polynomials with any given $k$ pairwise linearly independent
directions $(a_{i},b_{i}),$ $i=1,...,k$ (see e.g. \cite{97}). That is,
\begin{equation*}
P(x,y)=\sum_{i=1}^{k}p_{i}(a_{i}x+b_{i}y),
\end{equation*}%
where $p_{i}$ are univariate polynomials of degree at most $k-1$.
Considering this in (2.30) gives the desired representation (2.29).
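The decomposition of a bivariate polynomial into ridge polynomials used in this proof can be checked computationally. A minimal sketch (with $k=3$, hypothetical directions and a hypothetical target polynomial, not taken from the text): each $p_{i}$ of degree at most $k-1$ contributes $k$ unknown coefficients, and matching monomial coefficients gives a solvable linear system.

```python
import math
import numpy as np

k = 3
dirs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # pairwise linearly independent

# A sample bivariate polynomial of degree <= k-1 = 2, given by its
# monomial coefficients {(m, n): coefficient of x^m y^n}.
P_coef = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): -1.0,
          (1, 1): 3.0, (2, 0): 1.0, (0, 2): -2.0}
P = lambda x, y: sum(c * x**m * y**n for (m, n), c in P_coef.items())

# Unknowns: coefficients c[i*k + d] of p_i(t) = sum_d c[i*k + d] * t**d.
monos = [(m, n) for m in range(k) for n in range(k - m)]
A = np.zeros((len(monos), k * k))
for i, (a, b) in enumerate(dirs):
    for d in range(k):
        for m in range(d + 1):  # expand (a x + b y)^d in monomials
            A[monos.index((m, d - m)), i * k + d] += (
                math.comb(d, m) * a**m * b**(d - m))
rhs = np.array([P_coef.get(mn, 0.0) for mn in monos])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

def ridge_sum(x, y):
    return sum(sum(c[i * k + d] * (a * x + b * y) ** d for d in range(k))
               for i, (a, b) in enumerate(dirs))

pt = (0.7, -1.3)
err = abs(P(*pt) - ridge_sum(*pt))
```

Since the system is consistent for $k$ pairwise linearly independent directions, the least-squares solution solves it exactly up to rounding.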
\bigskip
\textbf{Remark 2.2.} Theorem 2.5 can be restated also for the classes $%
C^{\infty }(\mathbb{R})$ of infinitely differentiable functions and $D(%
\mathbb{R})$ of analytic functions. That is, if under the conditions of
Theorem 2.5, we have $f\in C^{\infty }(\mathbb{R}^{n})$ (or $f\in D(\mathbb{R%
}^{n})$), then this function can be represented also in the form (2.24) with
$g_{i}\in C^{\infty }(\mathbb{R})$ (or $g_{i}\in D(\mathbb{R})$). This
follows, similarly to the case $C^{s}(\mathbb{R}^{n})$ above, from Theorem 2.4
and Remark 2.1. These arguments are also valid for Theorem 2.6.
\bigskip
\section{A solution to the smoothness problem under certain conditions}
Assume we are given a function $f\in C^{s}(\mathbb{R}^{n})$ of the form
(2.1). In this section, we discuss various conditions on the directions $%
\mathbf{a}^{i}$ guaranteeing the validity of (2.2) with $g_{i}\in C^{s}(%
\mathbb{R})$.
\subsection{Directions with only rational components}
The following theorems, in particular, show that if directions of ridge
functions have only rational coordinates then no polynomial term appears in
Theorems 2.4 and 2.5.
\bigskip
\textbf{Theorem 2.7.} \textit{Assume a function $f\in C(\mathbb{R}^{n})$ is
of the form (2.1) and there is a nonsingular linear transformation $T:$ $%
\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ such that $T\mathbf{a}^{i}\in
\mathbb{Q}\mathit{^{n}},$ $i=1,...,k$. Then there exist continuous functions
$g_{i}:\mathbb{R\rightarrow R}$, $i=1,...,k$, such that (2.2) holds.}
\bigskip
\begin{proof} Applying the coordinate change $\mathbf{x\rightarrow y}$,
given by the formula $\mathbf{x}=T\mathbf{y}$, to both sides of (2.1) we
obtain that
\begin{equation*}
\tilde{f}(\mathbf{y})=\sum_{i=1}^{k}f_{i}(\mathbf{b}^{i}\cdot \mathbf{y}),
\end{equation*}%
where $\tilde{f}(\mathbf{y})=f(T\mathbf{y})$ and $\mathbf{b}^{i}=T\mathbf{a}%
^{i},$ $i=1,...,k.$ Let us repeat the proof of Theorem 2.4 for the function $%
\tilde{f}$. Since the vectors $\mathbf{b}^{i}$, $i=1,...,k,$ have rational
coordinates, it is not difficult to see that the restrictions of the
functions $H_{i}$ to $\mathbb{Q}$ are univariate polynomials. Indeed, for
each $\mathbf{b}^{i}$ we can choose a vector $\mathbf{c}^{i}$ with rational
coordinates such that $\mathbf{b}^{i}\cdot \mathbf{c}^{i}=1$. If in the
equality $H_{i}(\mathbf{b}^{i}\cdot \mathbf{x})=P_{i}(\mathbf{x}),$ $\mathbf{%
x}\in \mathbb{Q}^{n}$, we take $\mathbf{x=c}^{i}t$ with $t\in \mathbb{Q}$,
we obtain that $H_{i}(t)=P_{i}(\mathbf{c}^{i}t)$ for all $t\in \mathbb{Q}$.
Now since $P_{i}$ is a multivariate polynomial on $\mathbb{Q}^{n}$, $H_{i}$
is a univariate polynomial on $\mathbb{Q}$. Denote this univariate
polynomial by $L_{i}$. Thus the formula
\begin{equation*}
P_{i}(\mathbf{x})=L_{i}(\mathbf{b}^{i}\cdot \mathbf{x})\eqno(2.31)
\end{equation*}%
holds for each $i=1,...,k$, and all $\mathbf{x}\in \mathbb{Q}^{n}$. Since $%
\mathbb{Q}^{n}$ is dense in $\mathbb{R}^{n}$, we see that (2.31) holds, in
fact, for all $\mathbf{x}\in \mathbb{R}^{n}$. Thus the polynomial $P(\mathbf{%
x})$ in (2.24) can be expressed as $\sum_{i=1}^{k}L_{i}(\mathbf{b}^{i}\cdot
\mathbf{x})$. Considering this in Theorem 2.4, we obtain that
\begin{equation*}
\tilde{f}(\mathbf{y})=\sum_{i=1}^{k}g_{i}(\mathbf{b}^{i}\cdot \mathbf{y}),%
\eqno(2.32)
\end{equation*}%
where $g_{i}$ are continuous functions. Using the inverse transformation $%
\mathbf{y}=T^{-1}\mathbf{x}$ in (2.32) we arrive at (2.2).
\end{proof}
\textbf{Theorem 2.8.} \textit{Assume a function $f\in C^{s}(\mathbb{R}^{n})$
is of the form (2.1) and there is a nonsingular linear transformation $T:$ $%
\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ such that $T\mathbf{a}^{i}\in
\mathbb{Q}\mathit{^{n}},$ $i=1,...,k$. Then there exist functions $g_{i}\in
C^{s}(\mathbb{R})$, $i=1,...,k$, such that (2.2) holds.}
\bigskip
The proof of this theorem easily follows from Theorem 2.7 and Theorem 2.5.
\bigskip
\subsection{Linear independence of $k-1$ directions}
We already know that if the given directions $\mathbf{a}^{i}$ form a
linearly independent set, then the smoothness problem has a positive
solution (see Section 2.1.1). What can we say if all $\mathbf{a}^{i}$ are
not linearly independent? In the sequel, we show that if $k-1$ of the
directions $\mathbf{a}^{i}$, $i=1,...,k,$ are linearly independent, then in
(2.1) $f_{i}$ can be replaced with $g_{i}\in C^{s}(\mathbb{R})$. We will
also estimate the modulus of continuity of $g_{i}$ in terms of the modulus
of continuity of a function generated from $f$ under a linear transformation.
Let $F:\mathbb{R}^{n}\rightarrow \mathbb{R}$, $n\geq 1,$ be any function and
$\Omega \subset \mathbb{R}^{n}$. The function
\begin{equation*}
\omega (F;\delta ;\Omega )=\sup \left\{ \left\vert F(\mathbf{x})-F(\mathbf{y}%
)\right\vert :\mathbf{x},\mathbf{y}\in \Omega ,\text{ }\left\vert \mathbf{x}-%
\mathbf{y}\right\vert \leq \delta \right\} ,\text{ }0\leq \delta \leq
\mathrm{diam}\,\Omega ,
\end{equation*}%
is called the modulus of continuity of the function $F(\mathbf{x}%
)=F(x_{1},...,x_{n})$ on the set $\Omega .$ We will also use the notation $%
\omega _{\mathbb{Q}}(F;\delta ;\Omega )$, which stands for the function $%
\omega (F;\delta ;\Omega \cap \mathbb{Q}^{n})$. Here $\mathbb{Q}$ denotes
the set of rational numbers. Note that $\omega _{\mathbb{Q}}(F;\delta
;\Omega )$ makes sense if the set $\Omega \cap \mathbb{Q}^{n}$ is not empty.
Clearly, $\omega _{\mathbb{Q}}(F;\delta ;\Omega )\leq \omega (F;\delta
;\Omega )$. The equality $\omega _{\mathbb{Q}}(F;\delta ;\Omega )=\omega
(F;\delta ;\Omega )$ holds for continuous $F$ and certain sets $\Omega $.
For example, it holds if for any $\mathbf{x},\mathbf{y}\in \Omega $ with $%
\left\vert \mathbf{x}-\mathbf{y}\right\vert \leq \delta $ there exist
sequences $\left\{ \mathbf{x}_{m}\right\} ,\left\{ \mathbf{y}_{m}\right\}
\subset \Omega \cap \mathbb{Q}^{n}$ such that $\mathbf{x}_{m}\rightarrow
\mathbf{x}$, $\mathbf{y}_{m}\rightarrow \mathbf{y}$ and $\left\vert \mathbf{x%
}_{m}-\mathbf{y}_{m}\right\vert \leq \delta ,$ for all $m$. There are many
sets $\Omega $, which satisfy this property.
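For numerical experiments, both $\omega $ and $\omega _{\mathbb{Q}}$ can be approximated on a finite sample of $\Omega $; the result is a lower estimate of the supremum in the definition. A minimal sketch (the function, the grid and $\delta $ are hypothetical choices), where a sub-sample of the grid stands in for $\Omega \cap \mathbb{Q}^{n}$:

```python
import itertools
import math

def omega(F, delta, pts):
    """Empirical modulus of continuity of F over the finite sample pts;
    a lower estimate of the supremum in the definition."""
    return max((abs(F(x) - F(y))
                for x, y in itertools.combinations(pts, 2)
                if abs(x - y) <= delta), default=0.0)

M, n = 1.0, 201
grid = [-M + 2 * M * j / (n - 1) for j in range(n)]  # sample of [-M, M]
sub = grid[::2]        # a sub-sample, playing the role of [-M, M] cap Q

w_full = omega(math.sin, 0.1, grid)
w_sub = omega(math.sin, 0.1, sub)
```

Since every pair of points of the sub-sample is also a pair of the full sample, the inequality $\omega _{\mathbb{Q}}\leq \omega $ is reproduced: `w_sub <= w_full`; and because $|\sin x-\sin y|\leq |x-y|$, both values stay below $\delta =0.1$.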
\bigskip
The following lemma is valid.
\bigskip
\textbf{Lemma 2.6.} \textit{Assume a function $G\in C(\mathbb{R}^{n})$ has
the form}
\begin{equation*}
G(x_{1},...,x_{n})=\sum_{i=1}^{n}g(x_{i})-g(x_{1}+\cdot \cdot \cdot +x_{n}),%
\eqno(2.33)
\end{equation*}%
\textit{where $g$ is an arbitrarily behaved function. Then the following
inequality holds}
\begin{equation*}
\omega _{\mathbb{Q}}(g;\delta ;[-M,M])\leq 2\delta \left\vert
g(1)-g(0)\right\vert +3\omega \left( G;\delta ;[-M,M]^{n}\right) ,\eqno(2.34)
\end{equation*}%
\textit{where $\delta \in \left( 0,\frac{1}{2}\right) \cap \mathbb{Q}$ and $%
M\geq 1$.}
\bigskip
\begin{proof} Consider the function $f(t)=g(t)-g(0)$ and write (2.33) in
the form
\begin{equation*}
F(x_{1},...,x_{n})=\sum_{i=1}^{n}f(x_{i})-f(x_{1}+\cdot \cdot \cdot +x_{n}),%
\eqno(2.35)
\end{equation*}%
where
\begin{equation*}
F(x_{1},...,x_{n})=G(x_{1},...,x_{n})-(n-1)g(0).
\end{equation*}%
Note that the functions $f$ and $g$, as well as the functions $F$ and $G,$
have a common modulus of continuity. Thus we prove the lemma if we prove it
for the pair $\left\langle F,f\right\rangle .$
Since $f(0)=0,$ it follows from (2.35) that
\begin{equation*}
F(x_{1},0,...,0)=F(0,x_{2},0,...,0)=\cdot \cdot \cdot =F(0,0,...,x_{n})=0.%
\eqno(2.36)
\end{equation*}%
For the sake of brevity, introduce the notation $\mathcal{F}(x_{1},x_{2})=%
F(x_{1},x_{2},0,...,0)$. Obviously, for any real number $x,$
\begin{eqnarray*}
\mathcal{F}(x,x) &=&2f(x)-f(2x); \\
\mathcal{F}(x,2x) &=&f(x)+f(2x)-f(3x); \\
&&\cdot \cdot \cdot \\
\mathcal{F}(x,(k-1)x) &=&f(x)+f((k-1)x)-f(kx).
\end{eqnarray*}
We obtain from the above equalities that
\begin{eqnarray*}
f(2x) &=&2f(x)-\mathcal{F}(x,x), \\
f(3x) &=&3f(x)-\mathcal{F}(x,x)-\mathcal{F}(x,2x), \\
&&\cdot \cdot \cdot \\
f(kx) &=&kf(x)-\mathcal{F}(x,x)-\mathcal{F}(x,2x)-\cdot \cdot \cdot -%
\mathcal{F}(x,(k-1)x).
\end{eqnarray*}%
Thus for any nonnegative integer $k$,
\begin{equation*}
f(x)=\frac{1}{k}f(kx)+\frac{1}{k}\left[ \mathcal{F}(x,x)+\mathcal{F}%
(x,2x)+\cdot \cdot \cdot +\mathcal{F}(x,(k-1)x)\right] .\eqno(2.37)
\end{equation*}
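Identity (2.37) is easy to verify numerically. A minimal sketch, assuming the hypothetical choices $f(x)=\sin x+x^{2}$ (so that $f(0)=0$), $n=2$, and hence $\mathcal{F}(x_{1},x_{2})=f(x_{1})+f(x_{2})-f(x_{1}+x_{2})$:

```python
import math

f = lambda x: math.sin(x) + x * x           # any f with f(0) = 0
F2 = lambda u, v: f(u) + f(v) - f(u + v)    # F(u, v, 0, ..., 0) from (2.35)

x, k = 0.37, 9
lhs = f(x)
# Right-hand side of (2.37): the sum over F2(x, j*x) telescopes.
rhs = f(k * x) / k + sum(F2(x, j * x) for j in range(1, k)) / k
err = abs(lhs - rhs)
```

Indeed, $\sum_{j=1}^{k-1}\mathcal{F}(x,jx)=kf(x)-f(kx)$ after telescoping, so the two sides agree up to rounding.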
Consider now an irreducible fraction $\frac{p}{m}\in (0,\frac{1}{2})$ and set $%
m_{0}=\left[ \frac{m}{p}\right] .$ Here $[r]$ denotes the integer part
of $r$. Clearly, $m_{0}\geq 2$ and the remainder $p_{1}=m-m_{0}p<p.$ Taking $%
x=\frac{p}{m}$ and $k=m_{0}$ in (2.37) gives us the following equality
\begin{equation*}
f\left( \frac{p}{m}\right) =\frac{1}{m_{0}}f\left( 1-\frac{p_{1}}{m}\right)
\end{equation*}%
\begin{equation*}
+\frac{1}{m_{0}}\left[ \mathcal{F}\left( \frac{p}{m},\frac{p}{m}\right) +%
\mathcal{F}\left( \frac{p}{m},\frac{2p}{m}\right) +\cdot \cdot \cdot +%
\mathcal{F}\left( \frac{p}{m},(m_{0}-1)\frac{p}{m}\right) \right] .\eqno%
(2.38)
\end{equation*}%
On the other hand, since
\begin{equation*}
\mathcal{F}\left( \frac{p_{1}}{m},1-\frac{p_{1}}{m}\right) =f\left( \frac{%
p_{1}}{m}\right) +f\left( 1-\frac{p_{1}}{m}\right) -f(1),
\end{equation*}%
it follows from (2.38) that
\begin{equation*}
f\left( \frac{p}{m}\right) =\frac{f(1)}{m_{0}}
\end{equation*}%
\begin{equation*}
+\frac{1}{m_{0}}\left[
\mathcal{F}\left( \frac{p}{m},\frac{p}{m}\right) +\cdot \cdot \cdot +%
\mathcal{F}\left( \frac{p}{m},(m_{0}-1)\frac{p}{m}\right) +\mathcal{F}\left(
\frac{p_{1}}{m},1-\frac{p_{1}}{m}\right) \right]
\end{equation*}
\begin{equation*}
-\frac{1}{m_{0}}f\left( \frac{p_{1}}{m}\right) .\eqno(2.39)
\end{equation*}
Put $m_{1}=\left[ \frac{m}{p_{1}}\right] $, $p_{2}=m-m_{1}p_{1}.$ Clearly, $%
0\leq p_{2}<p_{1}$. Similar to (2.39), we can write that
\begin{equation*}
f\left( \frac{p_{1}}{m}\right) =\frac{f(1)}{m_{1}}
\end{equation*}%
\begin{equation*}
+\frac{1}{m_{1}}\left[
\mathcal{F}\left( \frac{p_{1}}{m},\frac{p_{1}}{m}\right) +\cdot \cdot \cdot +%
\mathcal{F}\left( \frac{p_{1}}{m},(m_{1}-1)\frac{p_{1}}{m}\right) +\mathcal{F%
}\left( \frac{p_{2}}{m},1-\frac{p_{2}}{m}\right) \right]
\end{equation*}
\begin{equation*}
-\frac{1}{m_{1}}f\left( \frac{p_{2}}{m}\right) .\eqno(2.40)
\end{equation*}
Let us agree that (2.39) is the first and (2.40) is the second
formula. One can continue this process by defining the chain of pairs $%
(m_{2},p_{3}),$ $(m_{3},p_{4}),...,$ until the pair $(m_{k-1},p_{k})$ with $%
p_{k}=0$ and writing out the corresponding formula for each pair; the
process terminates, since the remainders $p>p_{1}>p_{2}>\cdot \cdot \cdot $
strictly decrease. For example, the last $k$-th formula will be of the form
\begin{equation*}
f\left( \frac{p_{k-1}}{m}\right) =\frac{f(1)}{m_{k-1}}
\end{equation*}%
\begin{equation*}
+\frac{1}{m_{k-1}}\left[ \mathcal{F}\left( \frac{p_{k-1}}{m},\frac{p_{k-1}}{m%
}\right) +\cdot \cdot \cdot +\mathcal{F}\left( \frac{p_{k-1}}{m},(m_{k-1}-1)%
\frac{p_{k-1}}{m}\right) +\mathcal{F}\left( \frac{p_{k}}{m},1-\frac{p_{k}}{m}%
\right) \right]
\end{equation*}%
\begin{equation*}
-\frac{1}{m_{k-1}}f\left( \frac{p_{k}}{m}\right) .\eqno(2.41)
\end{equation*}%
Note that in (2.41), $f\left( \frac{p_{k}}{m}\right) =0$ and $\mathcal{F}%
\left( \frac{p_{k}}{m},1-\frac{p_{k}}{m}\right) =0$. Considering now the $k$%
-th formula in the $(k-1)$-th formula, then the obtained formula in the $%
(k-2)$-th formula, and so forth, we will finally arrive at the equality
\begin{equation*}
f\left( \frac{p}{m}\right) =f(1)\left[ \frac{1}{m_{0}}-\frac{1}{m_{0}m_{1}}%
+\cdot \cdot \cdot +\frac{(-1)^{k-1}}{m_{0}m_{1}\cdot \cdot \cdot m_{k-1}}%
\right]
\end{equation*}
\begin{equation*}
+\frac{1}{m_{0}}\left[ \mathcal{F}\left( \frac{p}{m},\frac{p}{m}\right)
+\cdot \cdot \cdot +\mathcal{F}\left( \frac{p}{m},(m_{0}-1)\frac{p}{m}%
\right) +\mathcal{F}\left( \frac{p_{1}}{m},1-\frac{p_{1}}{m}\right) \right]
\end{equation*}
\begin{equation*}
-\frac{1}{m_{0}m_{1}}\left[ \mathcal{F}\left( \frac{p_{1}}{m},\frac{p_{1}}{m}%
\right) +\cdot \cdot \cdot +\mathcal{F}\left( \frac{p_{1}}{m},(m_{1}-1)\frac{%
p_{1}}{m}\right) +\mathcal{F}\left( \frac{p_{2}}{m},1-\frac{p_{2}}{m}\right) %
\right]
\end{equation*}
\begin{equation*}
+\cdot \cdot \cdot +
\end{equation*}%
\begin{equation*}
\frac{(-1)^{k-1}}{m_{0}m_{1}\cdot \cdot \cdot m_{k-1}}\left[ \mathcal{F}%
\left( \frac{p_{k-1}}{m},\frac{p_{k-1}}{m}\right) +\cdot \cdot \cdot +%
\mathcal{F}\left( \frac{p_{k-1}}{m},(m_{k-1}-1)\frac{p_{k-1}}{m}\right) %
\right] .\eqno(2.42)
\end{equation*}%
Taking into account (2.36) and the definition of $\mathcal{F}$, for any
point of the form $\left( \frac{p_{i}}{m},c\right) $, $i=0,1,...,k-1,$ $%
p_{0}=p$, $c\in \lbrack 0,1]$, we can write that
\begin{equation*}
\left\vert \mathcal{F}\left( \frac{p_{i}}{m},c\right) \right\vert
=\left\vert \mathcal{F}\left( \frac{p_{i}}{m},c\right) -\mathcal{F}\left(
0,c\right) \right\vert \leq \omega \left( F;\frac{p_{i}}{m};[0,1]^{n}\right)
\leq \omega \left( F;\frac{p}{m};[0,1]^{n}\right) .
\end{equation*}%
Applying this inequality to each term $\mathcal{F}\left( \frac{p_{i}}{m}%
,\cdot \right) $ in (2.42), we obtain that
\begin{equation*}
\left\vert f\left( \frac{p}{m}\right) \right\vert \leq \left[ \frac{1}{m_{0}}%
-\frac{1}{m_{0}m_{1}}+\cdot \cdot \cdot +\frac{(-1)^{k-1}}{m_{0}m_{1}\cdot
\cdot \cdot m_{k-1}}\right] \left\vert f(1)\right\vert
\end{equation*}
\begin{equation*}
+\left[ 1+\frac{1}{m_{0}}+\cdot \cdot \cdot +\frac{1}{m_{0}\cdot \cdot \cdot
m_{k-2}}\right] \omega \left( F;\frac{p}{m};[0,1]^{n}\right) .\eqno(2.43)
\end{equation*}
Since $m_{0}\leq m_{1}\leq \cdot \cdot \cdot \leq m_{k-1},$ it is not
difficult to see that in (2.43)
\begin{equation*}
\frac{1}{m_{0}}-\frac{1}{m_{0}m_{1}}+\cdot \cdot \cdot +\frac{(-1)^{k-1}}{%
m_{0}m_{1}\cdot \cdot \cdot m_{k-1}}\leq \frac{1}{m_{0}}
\end{equation*}%
and
\begin{equation*}
1+\frac{1}{m_{0}}+\cdot \cdot \cdot +\frac{1}{m_{0}\cdot \cdot \cdot m_{k-2}}%
\leq \frac{m_{0}}{m_{0}-1}.
\end{equation*}%
Considering the above two inequalities in (2.43) we obtain that
\begin{equation*}
\left\vert f\left( \frac{p}{m}\right) \right\vert \leq \frac{\left\vert
f(1)\right\vert }{m_{0}}+\frac{m_{0}}{m_{0}-1}\omega \left( F;\frac{p}{m}%
;[0,1]^{n}\right) .\eqno(2.44)
\end{equation*}%
Since $m_{0}=\left[ \frac{m}{p}\right] \geq 2,$ it follows from (2.44) that
\begin{equation*}
\left\vert f\left( \frac{p}{m}\right) \right\vert \leq \frac{2p\left\vert
f(1)\right\vert }{m}+2\omega \left( F;\frac{p}{m};[0,1]^{n}\right) .\eqno%
(2.45)
\end{equation*}
Let now $\delta \in \left( 0,\frac{1}{2}\right) \cap \mathbb{Q}$ be a
rational increment, $M\geq 1$ and $x,x+\delta $ be two points in $\left[ -M,M%
\right] \cap \mathbb{Q}.$ By (2.45) we can write that
\begin{equation*}
\left\vert f(x+\delta )-f(x)\right\vert \leq \left\vert f(\delta
)\right\vert +\left\vert F(x,\delta ,0,...,0)\right\vert \leq 2\delta
\left\vert f(1)\right\vert +3\omega \left( F;\delta ;[-M,M]^{n}\right) .\eqno%
(2.46)
\end{equation*}%
Now (2.34) follows from (2.46) and the definitions of $f$ and $F$.
\end{proof}
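The chain of pairs $(m_{i},p_{i+1})$ constructed in the proof consists precisely of the division steps of the Euclidean algorithm applied to $m$ and $p$. A short sketch (the fraction $7/45$ is a hypothetical example) computing the chain and checking the two properties used above: the remainders strictly decrease to $p_{k}=0$, and $m_{0}\leq m_{1}\leq \cdot \cdot \cdot \leq m_{k-1}$.

```python
def chain(p, m):
    """Pairs (m_i, p_{i+1}) with m_i = floor(m / p_i) and
    p_{i+1} = m - m_i * p_i, starting from p_0 = p, as in Lemma 2.6."""
    out, pi = [], p
    while pi != 0:
        mi = m // pi
        pi_next = m - mi * pi
        out.append((mi, pi_next))
        pi = pi_next
    return out

steps = chain(7, 45)            # the fraction p/m = 7/45 lies in (0, 1/2)
ms = [mi for mi, _ in steps]    # quotients m_0, m_1, ...
ps = [pi for _, pi in steps]    # remainders p_1, p_2, ...
```

For $7/45$ the chain is $(6,3),(15,0)$: two division steps, nondecreasing quotients, and a vanishing final remainder, as required.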
\textbf{Remark 2.3.} The above lemma shows that the restriction of $g$ to
the set of rational numbers $\mathbb{Q}$ is uniformly continuous on any
interval $[-M,M]\cap \mathbb{Q}$.
\bigskip
To prove the main result of this section we need the following lemma.
\bigskip
\textbf{Lemma 2.7.} \textit{Assume a function $G\in C(\mathbb{R}^{n})$ has
the form}
\begin{equation*}
G(x_{1},...,x_{n})=\sum_{i=1}^{n}g(x_{i})-g(x_{1}+\cdot \cdot \cdot +x_{n}),
\end{equation*}%
\textit{where $g$ is an arbitrary function. Then there exists a function $%
F\in C(\mathbb{R})$ such that}
\begin{equation*}
G(x_{1},...,x_{n})=\sum_{i=1}^{n}F(x_{i})-F(x_{1}+\cdot \cdot \cdot +x_{n})%
\eqno(2.47)
\end{equation*}%
\textit{and the following inequality holds}
\begin{equation*}
\omega (F;\delta ;[-M,M])\leq 3\omega \left( G;\delta ;[-M,M]^{n}\right) ,%
\eqno(2.48)
\end{equation*}%
\textit{where $0\leq \delta \leq \frac{1}{2}$ and $M\geq 1$.}
\bigskip
\begin{proof} Consider the function
\begin{equation*}
u(t)=g(t)-\left[ g(1)-g(0)\right] t.
\end{equation*}%
Obviously, $u(1)=u(0)$ and
\begin{equation*}
G(x_{1},...,x_{n})=\sum_{i=1}^{n}u(x_{i})-u(x_{1}+\cdot \cdot \cdot +x_{n}).%
\eqno(2.49)
\end{equation*}%
By Lemma 2.6, the restriction of $u$ to $\mathbb{Q}$ is uniformly
continuous on every set $[-M,M]\cap \mathbb{Q}$. Denote this
restriction by $v$.
Let $y$ be any real number and $\{y_{k}\}_{k=1}^{\infty }$ be any sequence
of rational numbers converging to $y$. We can choose $M>0$ so that $y_{k}\in
\lbrack -M,M]$ for any $k\in \mathbb{N}$. It follows from the uniform
continuity of $v$ on $[-M,M]\cap \mathbb{Q}$ that the sequence $%
\{v(y_{k})\}_{k=1}^{\infty }$ is Cauchy. Thus there exists a finite limit $%
\lim_{k\rightarrow \infty }v(y_{k})$. It is not difficult to see that this
limit does not depend on the choice of $\{y_{k}\}_{k=1}^{\infty }$.
Let $F$ denote the following extension of $v$ to the set of real numbers.
\begin{equation*}
F(y)=\left\{
\begin{array}{c}
v(y),\text{ if }y\in \mathbb{Q}\text{;} \\
\lim_{k\rightarrow \infty }v(y_{k}),\text{ if }y\in \mathbb{R}\backslash
\mathbb{Q}\text{ and }\{y_{k}\}\text{ is a sequence in }\mathbb{Q}\text{
tending to }y.%
\end{array}%
\right.
\end{equation*}%
In view of the above arguments, $F$ is well defined on the whole real line.
Let us prove that for this function (2.47) is valid.
Consider an arbitrary point $(x_{1},...,x_{n})\in \mathbb{R}^{n}$ and
sequences of rational numbers $\{y_{k}^{i}\}_{k=1}^{\infty },i=1,...,n,$
tending to $x_{1},...,x_{n},$ respectively. Taking into account (2.49), we
can write that
\begin{equation*}
G(y_{k}^{1},...,y_{k}^{n})=\sum_{i=1}^{n}v(y_{k}^{i})-v(y_{k}^{1}+\cdot
\cdot \cdot +y_{k}^{n}),\text{ for all }k=1,2,...,\eqno(2.50)
\end{equation*}%
since $v$ is the restriction of $u$ to $\mathbb{Q}$. Letting $k\rightarrow
\infty $ on both sides of (2.50), we obtain (2.47).
Let us now prove that $F\in C(\mathbb{R})$ and (2.48) holds. Since $%
v(1)=v(0) $ we obtain from (2.49) and (2.34) that for $\delta \in \left( 0,%
\frac{1}{2}\right) \cap \mathbb{Q}$, $M\geq 1$ and any numbers $a,b\in
\lbrack -M,M]\cap \mathbb{Q}$, $\left\vert a-b\right\vert \leq \delta ,$ the
following inequality holds
\begin{equation*}
\left\vert v(a)-v(b)\right\vert \leq 3\omega \left( G;\delta
;[-M,M]^{n}\right) .\eqno(2.51)
\end{equation*}%
Consider any real numbers $r_{1}$ and $r_{2}$ satisfying $r_{1},r_{2}\in
\lbrack -M,M]$, $\left\vert r_{1}-r_{2}\right\vert \leq \delta $ and take
sequences $\{a_{k}\}_{k=1}^{\infty }\subset \lbrack -M,M]\cap \mathbb{Q}$, $%
\{b_{k}\}_{k=1}^{\infty }\subset \lbrack -M,M]\cap \mathbb{Q}$ with the
property $\left\vert a_{k}-b_{k}\right\vert \leq \delta ,$ $k=1,2,...,$ and
tending to $r_{1}$ and $r_{2}$, respectively. By (2.51),
\begin{equation*}
\left\vert v(a_{k})-v(b_{k})\right\vert \leq 3\omega \left( G;\delta
;[-M,M]^{n}\right) .
\end{equation*}%
If we take limits\ on both sides of the above inequality, we obtain that
\begin{equation*}
\left\vert F(r_{1})-F(r_{2})\right\vert \leq 3\omega \left( G;\delta
;[-M,M]^{n}\right) ,
\end{equation*}%
which means that $F$ is uniformly continuous on $[-M,M]$ and
\begin{equation*}
\omega \left( F;\delta ;[-M,M]\right) \leq 3\omega \left( G;\delta
;[-M,M]^{n}\right) .
\end{equation*}%
Note that in the last inequality $\delta $ is a rational number from the
interval $\left( 0,\frac{1}{2}\right) .$ It is well known that the modulus
of continuity $\omega (f;\delta ;\Omega )$ of a continuous function $f$ is
continuous from the right for any compact set $\Omega \subset \mathbb{R}^{n}$
and it is continuous from the left for certain compact sets $\Omega $, in
particular for rectangular sets (see \cite{Kol}). It follows immediately
that (2.48) is valid for all $\delta \in \lbrack 0,\frac{1}{2}].$
\end{proof}
The following theorem was first obtained by Kuleshov \cite{88}. Below, we
prove this using completely different ideas. Our proof, which is taken from
\cite{2}, contains a theoretical method for constructing the functions $%
g_{i}\in C^{s}(\mathbb{R})$ in (2.2). Using this method, we will also estimate
the modulus of continuity of $g_{i}$ in terms of the modulus of continuity
of $f$ (see Remark 2.4 below).
\bigskip
\textbf{Theorem 2.9.} \textit{Assume we are given $k$ directions $\mathbf{a}%
^{i}$, $i=1,...,k$, in $\mathbb{R}^{n}\backslash \{\mathbf{0}\}$ and $k-1$
of them are linearly independent.\ Assume that a function $f\in C(\mathbb{R}%
^{n})$ is of the form (2.1). Then $f$ can be represented also in the form
(2.2) with $g_{i}\in C(\mathbb{R})$, $i=1,...,k$.}
\bigskip
\begin{proof} Without loss of generality, we may assume that the first $%
k-1 $ vectors $\mathbf{a}^{1},\mathbf{...},\mathbf{a}^{k-1}$ are linearly
independent. Thus there exist numbers $\lambda _{1},...,\lambda _{k-1}\in
\mathbb{R}$ such that $\mathbf{a}^{k}=\lambda _{1}\mathbf{a}^{1}+\cdot \cdot
\cdot +\lambda _{k-1}\mathbf{a}^{k-1}$. We may also assume that the first $p$
numbers $\lambda _{1},...,\lambda _{p}$, $1\leq p\leq k-1$, are nonzero and
the remaining $\lambda _{j}$s are zero. Indeed, if necessary, we can
rearrange the vectors $\mathbf{a}^{1},\mathbf{...},\mathbf{a}^{k-1}$ so that
this assumption holds. Complete the system $\{\mathbf{a}^{1},...,\mathbf{a}%
^{k-1}\}$ to a basis $\{\mathbf{a}^{1},...,\mathbf{a}^{k-1},\mathbf{b}%
^{k},...,\mathbf{b}^{n}\}$ and consider the linear transformation $\mathbf{y}%
=A\mathbf{x,}$ where $\mathbf{x}=(x_{1},...,x_{n})^{T},$ $\mathbf{y}%
=(y_{1},...,y_{n})^{T}$ and $A$ is the matrix whose rows are formed by
the coordinates of the vectors $\mathbf{a}^{1},...,\mathbf{a}^{k-1},\mathbf{b%
}^{k},...,\mathbf{b}^{n}.$ Using this transformation, we can write (2.1) in
the form
\begin{equation*}
f(A^{-1}\mathbf{y})=f_{1}(y_{1})+\cdot \cdot \cdot
+f_{k-1}(y_{k-1})+f_{k}(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda
_{p}y_{p}).\eqno(2.52)
\end{equation*}
For brevity of exposition, in the sequel we put $l=k-1$ and use the
notation
\begin{equation*}
w=f_{l+1},\text{ }\Phi (y_{1},...,y_{l})=f(A^{-1}\mathbf{y})\text{ and }%
Y_{j}=(y_{1},...,y_{j-1},y_{j+1},...,y_{l}),\text{ }j=1,...,l.
\end{equation*}%
Using this notation, we can write (2.52) in the form
\begin{equation*}
\Phi (y_{1},...,y_{l})=f_{1}(y_{1})+\cdot \cdot \cdot
+f_{l}(y_{l})+w(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda _{p}y_{p}).\eqno%
(2.53)
\end{equation*}%
In (2.53), taking sequentially $Y_{1}=\mathbf{0},$ $Y_{2}=\mathbf{0},$ ..., $%
Y_{l}=\mathbf{0}$, we obtain that
\begin{equation*}
f_{j}(y_{j})=\Phi (y_{1},...,y_{l})|_{Y_{j}=\mathbf{0}}-w(\lambda
_{j}y_{j})-\sum_{\substack{ i=1 \\ i\neq j}}^{l}f_{i}(0),\text{ }j=1,...,l.%
\eqno(2.54)
\end{equation*}%
Substituting (2.54) in (2.53), we obtain the equality
\begin{equation*}
\left.
\begin{array}{c}
w(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda
_{p}y_{p})-\sum_{j=1}^{p}w(\lambda _{j}y_{j})-(l-p)w(0)= \\
=\Phi (y_{1},...,y_{l})-\sum_{j=1}^{l}\Phi (y_{1},...,y_{l})|_{Y_{j}=\mathbf{%
0}}+(l-1)\sum_{j=1}^{l}f_{j}(0).%
\end{array}%
\right. \eqno(2.55)
\end{equation*}%
We see that the right hand side of (2.55) depends only on the variables $%
y_{1},y_{2},...,y_{p}.$ Denote the right hand side of (2.55) by $%
H(y_{1},...,y_{p}).$ That is, set%
\begin{equation*}
H(y_{1},...,y_{p})\overset{def}{=}\Phi (y_{1},...,y_{l})-\sum_{j=1}^{l}\Phi
(y_{1},...,y_{l})|_{Y_{j}=\mathbf{0}}+(l-1)\sum_{j=1}^{l}f_{j}(0).\eqno(2.56)
\end{equation*}%
We will use the following identity, which follows from (2.55) and (2.56)%
\begin{equation*}
H(y_{1},...,y_{p})=w(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda
_{p}y_{p})-\sum_{j=1}^{p}w(\lambda _{j}y_{j})-(l-p)w(0).\eqno(2.57)
\end{equation*}
It follows from (2.56) and the continuity of $f$ that the function $H$ is
continuous on $\mathbb{R}^{p}$. Then, defining the function
\begin{equation*}
G(y_{1},...,y_{p})=H(\frac{y_{1}}{\lambda _{1}},...,\frac{y_{p}}{\lambda _{p}%
})+(l-p)w(0)\eqno(2.58)
\end{equation*}%
and applying Lemma 2.7, we obtain that there exists a function $F\in C(%
\mathbb{R})$ such that
\begin{equation*}
G(y_{1},...,y_{p})=F(y_{1}+\cdot \cdot \cdot +y_{p})-\sum_{j=1}^{p}F(y_{j}).%
\eqno(2.59)
\end{equation*}%
It follows from the formulas (2.57)-(2.59) that
\begin{equation*}
w(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda
_{p}y_{p})-\sum_{j=1}^{p}w(\lambda _{j}y_{j})=F(\lambda _{1}y_{1}+\cdot
\cdot \cdot +\lambda _{p}y_{p})-\sum_{j=1}^{p}F(\lambda _{j}y_{j}).\eqno%
(2.60)
\end{equation*}
Let us introduce the following functions
\begin{equation*}
\left.
\begin{array}{c}
g_{j}(y_{j})=\Phi (y_{1},...,y_{l})|_{Y_{j}=\mathbf{0}}-F(\lambda
_{j}y_{j})-\sum_{\substack{ i=1 \\ i\neq j}}^{l}f_{i}(0),\text{ }j=1,...,p,
\\
g_{j}(y_{j})=\Phi (y_{1},...,y_{l})|_{Y_{j}=\mathbf{0}}-\sum_{\substack{ i=1
\\ i\neq j}}^{l}f_{i}(0)-w(0),\text{ }j=p+1,...,l.%
\end{array}%
\right. \eqno(2.61)
\end{equation*}
Note that $g_{j}\in C(\mathbb{R}),$ $j=1,...,l.$ Considering (2.55), (2.60)
and (2.61) it is not difficult to verify that
\begin{equation*}
\Phi (y_{1},...,y_{l})=g_{1}(y_{1})+\cdot \cdot \cdot
+g_{l}(y_{l})+F(\lambda _{1}y_{1}+\cdot \cdot \cdot +\lambda _{p}y_{p}).\eqno%
(2.62)
\end{equation*}%
In (2.62), denoting $F=g_{k}$, recalling the definition of $\Phi $ and going
back to the variable $\mathbf{x}=(x_{1},...,x_{n})$ by using again the
linear transformation $\mathbf{y}=A\mathbf{x}$, we finally obtain (2.2).
\end{proof}
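The cancellation identity (2.57) used in the above proof can be checked numerically. The following sketch is our own illustration (the concrete choices $l=3$, $p=2$, the coefficients $\lambda _{j}$ and the functions $f_{j}$, $w$ are assumptions for the example, not from the text): it evaluates both sides of (2.57) at a sample point.

```python
import math

# Numerical check (illustrative) of identity (2.57) with l = 3, p = 2.
l, p = 3, 2
lam = [1.5, -2.0]                        # lambda_1, lambda_2
fs = [math.sin, math.exp, abs]           # f_1, f_2, f_3 (arbitrary choices)
w = lambda t: t ** 2 + 1.0               # w = f_{l+1} (arbitrary choice)

def Phi(y):
    # Phi(y_1,...,y_l) as in (2.53)
    inner = sum(lam[j] * y[j] for j in range(p))
    return sum(fs[j](y[j]) for j in range(l)) + w(inner)

def Phi_Yj0(y, j):
    # Phi with all variables except y_j set to zero, i.e. Phi|_{Y_j = 0}
    z = [0.0] * l
    z[j] = y[j]
    return Phi(z)

def H(y):
    # Right-hand side of (2.55), i.e. the definition (2.56)
    return (Phi(y) - sum(Phi_Yj0(y, j) for j in range(l))
            + (l - 1) * sum(f(0.0) for f in fs))

y = [0.7, -1.3, 2.4]
lhs = H(y)
rhs = (w(lam[0] * y[0] + lam[1] * y[1])
       - w(lam[0] * y[0]) - w(lam[1] * y[1]) - (l - p) * w(0.0))
print(abs(lhs - rhs))                    # zero up to rounding
```

Note that the check does not use any smoothness of the $f_{j}$; the identity (2.57) is purely algebraic.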
\textbf{Remark 2.4.} Using Theorem 2.9 and Lemma 2.7, one can estimate the
modulus of continuity of the functions $g_{i}$ in representation (2.2) in
terms of the modulus of continuity of $\Phi $. To show how one can do this,
assume $C>0,$ $M\geq 1,$ $0\leq \delta \leq \frac{1}{2\max \{\left\vert
\lambda _{j}\right\vert \}}$ and introduce the following sets
\begin{equation*}
\mathcal{M}_{j}=\left\{ (x_{1},...,x_{l})\in \mathbb{R}^{l}:x_{j}\in \lbrack
-M/\left\vert \lambda _{j}\right\vert ,M/\left\vert \lambda _{j}\right\vert ]%
\text{, }x_{i}=0\text{ for }i\neq j\right\},
\end{equation*}%
\begin{equation*}
j=1,...,p;
\end{equation*}%
\begin{equation*}
\mathcal{C}_{j}=\left\{ (x_{1},...,x_{l})\in \mathbb{R}^{l}:x_{j}\in \lbrack
-C,C]\text{, }x_{i}=0\text{ for }i\neq j\right\} ,\text{ }j=p+1,...,l.
\end{equation*}%
It can be easily obtained from (2.61) that
\begin{equation*}
\omega (g_{j};\delta ;[-M/\left\vert \lambda _{j}\right\vert ,M/\left\vert
\lambda _{j}\right\vert ])\leq \omega \left( \Phi ;\delta ;\mathcal{M}%
_{j}\right) +\omega (F;\delta _{1};[-M,M]),\text{ }j=1,...,p,\eqno(2.63)
\end{equation*}%
\begin{equation*}
\omega (g_{j};\delta ;[-C,C])\leq \omega \left( \Phi ;\delta ;\mathcal{C}%
_{j}\right) ,\text{ }j=p+1,...,l,\eqno(2.64)
\end{equation*}%
where $\delta _{1}=\delta \cdot \max \{\left\vert \lambda _{j}\right\vert
\}. $ To estimate $\omega (F;\delta _{1};[-M,M])$ in (2.63), we refer to
Lemma 2.7. Applying Lemma 2.7 to the function $G$ in (2.58) we obtain that
in addition to (2.59) the following inequality holds.
\begin{equation*}
\omega (F;\delta _{1};[-M,M])\leq 3\omega \left( G;\delta
_{1};[-M,M]^{p}\right) .\eqno(2.65)
\end{equation*}%
Note that here $0\leq \delta _{1}\leq \frac{1}{2}$ as in Lemma 2.7. It
follows from (2.58) and (2.65) that
\begin{equation*}
\omega (F;\delta _{1};[-M,M])\leq 3\omega \left( H;\delta _{2};\mathcal{K}%
\right) ,\eqno(2.66)
\end{equation*}%
where $\mathcal{K}=[-M/\left\vert \lambda _{1}\right\vert ,M/\left\vert
\lambda _{1}\right\vert ]\times \cdot \cdot \cdot \times \lbrack
-M/\left\vert \lambda _{p}\right\vert ,M/\left\vert \lambda _{p}\right\vert
] $ and \newline
$\delta _{2}=\delta _{1}/\min \{\left\vert \lambda _{j}\right\vert \} $.
Further, (2.66) and (2.56) together yield that
\begin{equation*}
\omega (F;\delta _{1};[-M,M])\leq (3l+3)\omega \left( \Phi ;\delta _{2};%
\mathcal{S}\right) ,\eqno(2.67)
\end{equation*}%
where $\mathcal{S}=\left\{ (x_{1},...,x_{l})\in \mathbb{R}%
^{l}:(x_{1},...,x_{p})\in \mathcal{K}\text{, }x_{i}=0\text{ for }i>p\right\}
$. Now it follows from (2.63) and (2.67) that
\begin{equation*}
\omega (g_{j};\delta ;[-M/\left\vert \lambda _{j}\right\vert ,M/\left\vert
\lambda _{j}\right\vert ])\leq \omega \left( \Phi ;\delta ;\mathcal{M}%
_{j}\right) +(3l+3)\omega \left( \Phi ;\delta _{2};\mathcal{S}\right) ,\text{
}j=1,...,p.\eqno(2.68)
\end{equation*}%
Formulas (2.64), (2.67) and (2.68) provide us with upper estimates for the
modulus of continuity of the functions $g_{j},$ $j=1,...,k,$ in terms of the
modulus of continuity of $\Phi $. Recall that in these estimates $l=k-1$, $%
F=g_{k}$ and $\lambda _{j}$ are coefficients in the expression $%
\mathbf{a}^{k}=\lambda _{1}\mathbf{a}^{1}+\cdot \cdot \cdot +\lambda _{p}%
\mathbf{a}^{p}$.
\bigskip
Theorems 2.1 and 2.9 together give the following result.
\bigskip
\textbf{Theorem 2.10. }\textit{Assume we are given $k$ directions $\mathbf{a}%
^{i}$, $i=1,...,k$, in $\mathbb{R}^{n}\backslash \{\mathbf{0}\}$ and $k-1$
of them are linearly independent.\ Assume that a function $f\in C^{s}(%
\mathbb{R}^{n})$ is of the form (2.1). Then $f$ can also be represented in
the form (2.2), where the functions $g_{i}\in C^{s}(\mathbb{R})$, $i=1,...,k$%
.}
\bigskip
Indeed, on the one hand, it follows from Theorem 2.9 that $f$ can be
expressed as (2.2) with continuous $g_{i}$. On the other hand, since the
class $\mathcal{B}$ in Theorem 2.1 can, in particular, be taken to be $C(%
\mathbb{R}),$ it follows that $g_{i}\in C^{s}(\mathbb{R})$.
\bigskip
\textbf{Remark 2.5.} In addition to the above $C^{s}(\mathbb{R})$, Theorem
2.9 can also be restated for some other subclasses of the space of
continuous functions. These are $C^{\infty }(\mathbb{R})$ functions;
analytic functions; algebraic polynomials; trigonometric polynomials. More
precisely, assume $\mathcal{H}(\mathbb{R})$ is any of these subclasses and $%
\mathcal{H}(\mathbb{R}^{n})$ is the $n$-variable analog of $\mathcal{H}(%
\mathbb{R})$. If, under the conditions of Theorem 2.9, we have $f\in \mathcal{%
H}(\mathbb{R}^{n})$, then this function can be represented in the form (2.2)
with $g_{i}\in \mathcal{H}(\mathbb{R}).$ This follows, similarly to the case
$C^{s}(\mathbb{R})$ above, from Theorem 2.9 and Remark 2.1.
\bigskip
\section{A constructive analysis of the smoothness problem}
Note that Theorems 2.4-2.10 are generally existence results. They assert the
existence of smooth ridge functions $g_{i}$ in the corresponding
representation formula (2.2) or (2.24), but they are uninformative if we
want to construct these functions explicitly.
In this section, we give two theorems which not only address the smoothness
problem, but are also useful in constructing the mentioned $g_{i}$.
\subsection{Bivariate case}
We start with the constructive analysis of the smoothness problem for
bivariate functions. We show that if a bivariate function of a certain
smoothness class is represented by a sum of finitely many arbitrarily
behaved ridge functions, then, under suitable conditions, it can also be
represented by a sum of ridge functions of the same smoothness class, and
these ridge functions can be constructed explicitly.
\bigskip
\textbf{Theorem 2.11.} \textit{Assume $(a_{i},b_{i})$, $i=1,...,n,$ are
pairwise linearly independent vectors in $\mathbb{R}^{2}$. Assume that a
function $f\in C^{s}(\mathbb{R}^{2})$ has the form}
\begin{equation*}
f(x,y)=\sum_{i=1}^{n}f_{i}(a_{i}x+b_{i}y),
\end{equation*}%
\textit{where $f_{i}$ are arbitrary univariate functions and $s\geq n-2.$
Then $f$ can also be represented in the form}
\begin{equation*}
f(x,y)=\sum_{i=1}^{n}g_{i}(a_{i}x+b_{i}y),\eqno(2.69)
\end{equation*}%
\textit{where the functions $g_{i}\in C^{s}(\mathbb{R})$, $i=1,...,n$. In
(2.69), the functions $g_{i}$, $i=1,...,n,$ can be constructed by the
formulas}
\begin{eqnarray*}
g_{p} &=&\varphi _{p,n-p-1},\text{ }p=1,...,n-2; \\
g_{n-1} &=&h_{1,n-1};\text{ }g_{n}=h_{2,n-1}.
\end{eqnarray*}%
\textit{Here all the involved functions $\varphi _{p,n-p-1}$, $%
h_{1,n-1}$ and $h_{2,n-1}$ can be found inductively as follows:}%
\begin{eqnarray*}
h_{1,1}(t) &=&\frac{\partial ^{n-2}}{\partial l_{1}\cdot \cdot \cdot
\partial l_{n-2}}f^{\ast }(t,0),~ \\
h_{2,1}(t) &=&\frac{\partial ^{n-2}}{\partial l_{1}\cdot \cdot \cdot
\partial l_{n-2}}f^{\ast }(0,t)-\frac{\partial ^{n-2}}{\partial l_{1}\cdot
\cdot \cdot \partial l_{n-2}}f^{\ast }(0,0); \\
h_{1,k+1}(t) &=&\frac{1}{e_{1}\cdot l_{k}}\int_{0}^{t}h_{1,k}(z)dz,\text{ }%
k=1,...,n-2; \\
h_{2,k+1}(t) &=&\frac{1}{e_{2}\cdot l_{k}}\int_{0}^{t}h_{2,k}(z)dz,\text{ }%
k=1,...,n-2;
\end{eqnarray*}%
\textit{and}
\begin{equation*}
\varphi _{p,1}(t)=\frac{\partial ^{n-p-2}f^{\ast }}{\partial l_{p+1}\cdot
\cdot \cdot \partial l_{n-2}}\left( \frac{\widetilde{a}_{p}t}{\widetilde{a}%
_{p}^{2}+\widetilde{b}_{p}^{2}},\frac{\widetilde{b}_{p}t}{\widetilde{a}%
_{p}^{2}+\widetilde{b}_{p}^{2}}\right) -h_{1,p+1}\left( \frac{\widetilde{a}%
_{p}t}{\widetilde{a}_{p}^{2}+\widetilde{b}_{p}^{2}}\right)
\end{equation*}%
\begin{equation*}
-h_{2,p+1}\left( \frac{\widetilde{b}_{p}t}{\widetilde{a}_{p}^{2}+\widetilde{b%
}_{p}^{2}}\right)-\sum_{j=1}^{p-1}\varphi _{j,p-j+1}\left( \frac{\widetilde{a%
}_{j}\widetilde{a}_{p}+\widetilde{b}_{j}\widetilde{b}_{p}}{\widetilde{a}%
_{p}^{2}+\widetilde{b}_{p}^{2}}t\right),
\end{equation*}
\begin{equation*}
p=1,...,n-2\left( \text{for }p=n-2\text{, }\frac{\partial ^{n-p-2}f^{\ast }}{%
\partial l_{p+1}\cdot \cdot \cdot \partial l_{n-2}}:=f^{\ast }\right) ;
\end{equation*}
\begin{equation*}
\varphi _{p,k+1}(t)=\frac{1}{(\widetilde{a}_{p},\widetilde{b}_{p})\cdot
l_{k+p}}\int_{0}^{t}\varphi _{p,k}(z)dz,\text{ }p=1,...,n-3,\text{ }%
k=1,...,n-p-2.
\end{equation*}
\textit{In the above formulas}
\begin{equation*}
\widetilde{a}_{p}=\frac{a_{p}b_{n}-a_{n}b_{p}}{a_{n-1}b_{n}-a_{n}b_{n-1}};~%
\widetilde{b}_{p}=\frac{a_{n-1}b_{p}-a_{p}b_{n-1}}{a_{n-1}b_{n}-a_{n}b_{n-1}}%
,~p=1,...,n-2,
\end{equation*}
\begin{equation*}
l_{p}=\left( \frac{\widetilde{b}_{p}}{\sqrt{\widetilde{a}_{p}^{2}+\widetilde{%
b}_{p}^{2}}},\frac{-\widetilde{a}_{p}}{\sqrt{\widetilde{a}_{p}^{2}+%
\widetilde{b}_{p}^{2}}}\right) ,\text{ }p=1,...,n-2.
\end{equation*}
\begin{equation*}
f^{\ast }(x,y)=f\left( \frac{b_{n}x-b_{n-1}y}{a_{n-1}b_{n}-a_{n}b_{n-1}},%
\frac{a_{n}x-a_{n-1}y}{a_{n}b_{n-1}-a_{n-1}b_{n}}\right) .
\end{equation*}
\bigskip
\begin{proof} Since the vectors $(a_{n-1},b_{n-1})$ and $(a_{n},b_{n})$
are linearly independent, there is a nonsingular linear transformation $%
S:(x,y)\rightarrow (x^{^{\prime }},y^{^{\prime }})$ such that $%
S:(a_{n-1},b_{n-1})\rightarrow (1,0)$ and $S:(a_{n},b_{n})\rightarrow (0,1).$
Thus, without loss of generality we may assume that the vectors $%
(a_{n-1},b_{n-1})$ and $(a_{n},b_{n})$ coincide with the coordinate vectors $%
e_{1}=(1,0)$ and $e_{2}=(0,1)$ respectively. Therefore, to prove the first
part of the theorem it is enough to show that if a function $f\in C^{s}(%
\mathbb{R}^{2})$ is expressed in the form
\begin{equation*}
f(x,y)=\sum_{i=1}^{n-2}f_{i}(a_{i}x+b_{i}y)+f_{n-1}(x)+f_{n}(y),
\end{equation*}%
with arbitrary $f_{i}$, then there exist functions $g_{i}$ $\in C^{s}(%
\mathbb{R})$, $i=1,...,n$, such that $f$ is also expressed in the form
\begin{equation*}
f(x,y)=\sum_{i=1}^{n-2}g_{i}(a_{i}x+b_{i}y)+g_{n-1}(x)+g_{n}(y).\eqno(2.70)
\end{equation*}
By $\Delta _{l}^{(\delta )}F$ we denote the increment of a function $F$ in a
direction $l=(l^{\prime },l^{\prime \prime }).$ That is,
\begin{equation*}
\Delta _{l}^{(\delta )}F(x,y)=F(x+l^{\prime }\delta ,y+l^{\prime \prime
}\delta )-F(x,y).
\end{equation*}%
We also use the notation $\frac{\partial F}{\partial l}$ which denotes the
derivative of $F$ in the direction $l$.
It is easy to check that the increment of a ridge function $g(ax+by)$ in a
direction perpendicular to $(a,b)$ is zero. Let $l_{1},...,l_{n-2}$ be unit
vectors perpendicular to the vectors $(a_{1},b_{1}),...,(a_{n-2},b_{n-2})$
correspondingly. Then for any set of numbers $\delta _{1},...,\delta
_{n-2}\in \mathbb{R}$ we have
\begin{equation*}
\Delta _{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot \Delta _{l_{n-2}}^{(\delta
_{n-2})}f(x,y)=\Delta _{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot \Delta
_{l_{n-2}}^{(\delta _{n-2})}\left[ f_{n-1}(x)+f_{n}(y)\right] .\eqno(2.71)
\end{equation*}
Denote the left hand side of (2.71) by $S(x,y).$ That is, set%
\begin{equation*}
S(x,y)\overset{def}{=}\Delta _{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot
\Delta _{l_{n-2}}^{(\delta _{n-2})}f(x,y).
\end{equation*}%
Then from (2.71) it follows that for any real numbers $\delta _{n-1}$ and $%
\delta _{n}$,
\begin{equation*}
\Delta _{e_{1}}^{(\delta _{n-1})}\Delta _{e_{2}}^{(\delta _{n})}S(x,y)=0,
\end{equation*}%
or in expanded form,
\begin{equation*}
S(x+\delta _{n-1},y+\delta _{n})-S(x,y+\delta _{n})-S(x+\delta
_{n-1},y)+S(x,y)=0.
\end{equation*}%
Putting in the last equality $\delta _{n-1}=-x,$ $\delta _{n}=-y$, we obtain
that
\begin{equation*}
S(x,y)=S(x,0)+S(0,y)-S(0,0).
\end{equation*}%
This means that
\begin{equation*}
\Delta _{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot \Delta _{l_{n-2}}^{(\delta
_{n-2})}f(x,y)
\end{equation*}
\begin{equation*}
=\Delta _{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot \Delta
_{l_{n-2}}^{(\delta _{n-2})}f(x,0)+\Delta _{l_{1}}^{(\delta _{1})}\cdot
\cdot \cdot \Delta _{l_{n-2}}^{(\delta _{n-2})}f(0,y)-\Delta
_{l_{1}}^{(\delta _{1})}\cdot \cdot \cdot \Delta _{l_{n-2}}^{(\delta
_{n-2})}f(0,0).
\end{equation*}
By the hypothesis of the theorem, the derivative $\frac{\partial ^{n-2}}{%
\partial l_{1}\cdot \cdot \cdot \partial l_{n-2}}f(x,y)$ exists at any point
$(x,y)\in $ $\mathbb{R}^{2}$. Thus, it follows from the above formula that
\begin{equation*}
\frac{\partial ^{n-2}f}{\partial l_{1}\cdot \cdot \cdot \partial l_{n-2}}%
(x,y)=h_{1,1}(x)+h_{2,1}(y),\eqno(2.72)
\end{equation*}%
where $h_{1,1}(x)=\frac{\partial ^{n-2}}{\partial l_{1}\cdot \cdot \cdot
\partial l_{n-2}}f(x,0)$ and $h_{2,1}(y)=\frac{\partial ^{n-2}}{\partial
l_{1}\cdot \cdot \cdot \partial l_{n-2}}f(0,y)-\frac{\partial ^{n-2}}{%
\partial l_{1}\cdot \cdot \cdot \partial l_{n-2}}f(0,0)$. Note that $h_{1,1}$
and $h_{2,1}$ belong to the class $C^{s-n+2}(\mathbb{R}).$
Denote by $h_{1,2}$ and $h_{2,2}$ the antiderivatives of $h_{1,1}$ and $%
h_{2,1}$ vanishing at zero, multiplied by the numbers $1/(e_{1}\cdot l_{1})$
and $1/(e_{2}\cdot l_{1})$, respectively. That is,
\begin{eqnarray*}
h_{1,2}(x) &=&\frac{1}{e_{1}\cdot l_{1}}\int_{0}^{x}h_{1,1}(z)dz; \\
h_{2,2}(y) &=&\frac{1}{e_{2}\cdot l_{1}}\int_{0}^{y}h_{2,1}(z)dz.
\end{eqnarray*}%
Here $e\cdot l$ denotes the scalar product of the vectors $e$ and $l$.
Obviously, the function
\begin{equation*}
F_{1}(x,y)=h_{1,2}(x)+h_{2,2}(y)
\end{equation*}%
obeys the equality
\begin{equation*}
\frac{\partial F_{1}}{\partial l_{1}}(x,y)=h_{1,1}(x)+h_{2,1}(y).\eqno(2.73)
\end{equation*}%
From (2.72) and (2.73) we obtain that
\begin{equation*}
\frac{\partial }{\partial l_{1}}\left[ \frac{\partial ^{n-3}f}{\partial
l_{2}\cdot \cdot \cdot \partial l_{n-2}}-F_{1}\right] =0.
\end{equation*}%
Hence, for some ridge function $\varphi _{1,1}(a_{1}x+b_{1}y),$
\begin{equation*}
\frac{\partial ^{n-3}f}{\partial l_{2}\cdot \cdot \cdot \partial l_{n-2}}%
(x,y)=h_{1,2}(x)+h_{2,2}(y)+\varphi _{1,1}(a_{1}x+b_{1}y).\eqno(2.74)
\end{equation*}%
Here all the functions $h_{1,2},h_{2,2},\varphi _{1,1}\in C^{s-n+3}(%
\mathbb{R}).$
Define the following functions:
\begin{eqnarray*}
h_{1,3}(x) &=&\frac{1}{e_{1}\cdot l_{2}}\int_{0}^{x}h_{1,2}(z)dz; \\
h_{2,3}(y) &=&\frac{1}{e_{2}\cdot l_{2}}\int_{0}^{y}h_{2,2}(z)dz; \\
\varphi _{1,2}(t) &=&\frac{1}{(a_{1},b_{1})\cdot l_{2}}\int_{0}^{t}\varphi
_{1,1}(z)dz.
\end{eqnarray*}%
Note that the function
\begin{equation*}
F_{2}(x,y)=h_{1,3}(x)+h_{2,3}(y)+\varphi _{1,2}(a_{1}x+b_{1}y)
\end{equation*}%
obeys the equality
\begin{equation*}
\frac{\partial F_{2}}{\partial l_{2}}(x,y)=h_{1,2}(x)+h_{2,2}(y)+\varphi
_{1,1}(a_{1}x+b_{1}y).\eqno(2.75)
\end{equation*}%
From (2.74) and (2.75) it follows that
\begin{equation*}
\frac{\partial }{\partial l_{2}}\left[ \frac{\partial ^{n-4}f}{\partial
l_{3}\cdot \cdot \cdot \partial l_{n-2}}-F_{2}\right] =0.
\end{equation*}%
The last equality means that for some ridge function $\varphi
_{2,1}(a_{2}x+b_{2}y),$
\begin{equation*}
\frac{\partial ^{n-4}f}{\partial l_{3}\cdot \cdot \cdot \partial l_{n-2}}%
(x,y)=h_{1,3}(x)+h_{2,3}(y)+\varphi _{1,2}(a_{1}x+b_{1}y)+\varphi
_{2,1}(a_{2}x+b_{2}y).\eqno(2.76)
\end{equation*}%
Here all the functions $h_{1,3},$ $h_{2,3},$ $\varphi _{1,2},$ $\varphi
_{2,1}\in C^{s-n+4}(\mathbb{R}).$
Note that in the left hand sides of (2.72), (2.74) and (2.76) we have the
mixed directional derivatives of $f$, and the order of these derivatives
decreases by one at each consecutive step. Continuing the above process
until it reaches the function $f$ itself, we obtain the desired
representation (2.70).
The formulas for $g_{i}$ are obtained in the process of the above proof.
These formulas involve certain functions which can be found inductively as
described in the proof. The validity of the formulas for the functions $%
h_{1,k}$ and $h_{2,k}$, $k=1,...,n-1,$ is obvious. The formulas for $\varphi
_{p,1}$ and $\varphi _{p,k+1}$ can be obtained from (2.74), (2.76) and the
subsequent (assumed but not written) equations if we put $x=\widetilde{a}%
_{p}t/(\widetilde{a}_{p}^{2}+\widetilde{b}_{p}^{2})$ and $y=\widetilde{b}%
_{p}t/(\widetilde{a}_{p}^{2}+\widetilde{b}_{p}^{2})$. Note that $(\widetilde{%
a}_{p},\widetilde{b}_{p}),$ $p=1,...,n-2,$ are the images of vectors $%
(a_{p},b_{p})$ under the linear transformation $S$ which takes the vectors $%
(a_{n-1},b_{n-1})$ and $(a_{n},b_{n})$ to the coordinate vectors $e_{1}=(1,0)$ and
$e_{2}=(0,1),$ respectively. Besides, note that for $p=1,...,n-2,$ the
vectors $l_{p}$ are perpendicular to the vectors $(\widetilde{a}_{p},%
\widetilde{b}_{p})$, respectively, and $f^{\ast }$ is the function generated
from $f$ by the above linear transformation.
\end{proof}
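The key step of the above proof, identity (2.71), lends itself to a quick numerical check. The sketch below is our own illustration for $n=3$ (the concrete direction $(a_{1},b_{1})$ and the components $f_{1},f_{2},f_{3}$ are assumptions for the example): a single increment in the direction $l_{1}\perp (a_{1},b_{1})$ removes the ridge term $f_{1}(a_{1}x+b_{1}y)$, and the subsequent mixed increment in the coordinate directions annihilates the remainder $f_{2}(x)+f_{3}(y)$.

```python
import math

# Illustrative check of the annihilation identity (2.71) for n = 3:
# f(x, y) = f1(a1*x + b1*y) + f2(x) + f3(y), with arbitrary (even rough) fi.
a1, b1 = 1.0, 2.0
f1 = lambda t: abs(t)            # deliberately non-smooth
f2 = lambda t: math.exp(t)
f3 = lambda t: t ** 3

def f(x, y):
    return f1(a1 * x + b1 * y) + f2(x) + f3(y)

def incr(F, l, delta):
    """Increment operator Delta_l^(delta) applied to F."""
    lx, ly = l
    return lambda x, y: F(x + lx * delta, y + ly * delta) - F(x, y)

# l1 is a unit vector perpendicular to (a1, b1).
norm = math.hypot(a1, b1)
l1 = (b1 / norm, -a1 / norm)

# S = Delta_{l1} f depends only on f2 and f3, so the mixed increment
# Delta_{e2} Delta_{e1} S vanishes identically, as in the proof.
S = incr(f, l1, 0.7)
T = incr(incr(S, (1.0, 0.0), 0.3), (0.0, 1.0), -0.5)
print(abs(T(0.4, -1.1)))  # zero up to rounding error
```

Note that no smoothness of $f_{1},f_{2},f_{3}$ is needed for the annihilation itself; smoothness enters only when the increments are replaced by directional derivatives.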
Theorem 2.11 can be applied to some higher order partial differential
equations in two variables, e.g., to the following homogeneous equation
\begin{equation*}
\prod\limits_{i=1}^{r}\left( \alpha _{i}\frac{\partial }{\partial x}+\beta
_{i}\frac{\partial }{\partial y}\right) u(x,y)=0,\eqno(2.77)
\end{equation*}%
where $(\alpha _{i},\beta _{i}),~i=1,...,r,$ are pairwise linearly
independent vectors in $\mathbb{R}^{2}$. Clearly, the general solution to
this equation consists of all functions of the form
\begin{equation*}
u(x,y)=\sum\limits_{i=1}^{r}v_{i}(\beta _{i}x-\alpha _{i}y),\eqno(2.78)
\end{equation*}%
where $v_{i}\in C^{r}(\mathbb{R})$, $i=1,...,r$. Based on Theorem 2.11, in
the general solution one can demand smoothness only of the sum $u$ and
dispense with smoothness of the individual summands $v_{i}$. More precisely,
the following corollary is valid.
\bigskip
\textbf{Corollary 2.1.} \textit{Assume a function $u\in C^{r}(\mathbb{R}%
^{2}) $ is of the form (2.78) with arbitrarily behaved $v_{i}$. Then $u$ is
a solution to Equation (2.77).}
\bigskip
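To illustrate Corollary 2.1 on a classical instance (our example, not from the text), take $r=2$, $(\alpha _{1},\beta _{1})=(1,1)$ and $(\alpha _{2},\beta _{2})=(1,-1)$. Then Equation (2.77) is the one-dimensional wave equation, and (2.78) is its d'Alembert-type general solution:

```latex
\left( \frac{\partial }{\partial x}+\frac{\partial }{\partial y}\right)
\left( \frac{\partial }{\partial x}-\frac{\partial }{\partial y}\right)
u=u_{xx}-u_{yy}=0,\qquad u(x,y)=v_{1}(x-y)+v_{2}(-x-y).
```

Corollary 2.1 then says that any $u\in C^{2}(\mathbb{R}^{2})$ of this form, with arbitrarily behaved $v_{1}$ and $v_{2}$, automatically solves the wave equation.

\bigskip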
\textbf{Remark 2.6.} If in Theorem 2.11 $s\geq n-1,$ then the functions $%
g_{i}$, $i=1,...,n,$ can be constructed (up to polynomials) by the method
discussed in Buhmann and Pinkus \cite{12}. This method is based on the fact
that for a direction $\mathbf{c}=(c_{1},...,c_{m})$ and a direction $%
\mathbf{a}\in \mathbb{R}^{m}\backslash \{\mathbf{0}\},$ the
operator
\begin{equation*}
D_{\mathbf{c}}=\sum_{k=1}^{m}c_{k}\frac{\partial }{\partial x_{k}}
\end{equation*}%
acts on $m$-variable ridge functions $g(\mathbf{a}\cdot \mathbf{x})$ as
follows
\begin{equation*}
D_{\mathbf{c}}g(\mathbf{a}\cdot \mathbf{x})=\left( \mathbf{c}\cdot \mathbf{a}%
\right) g^{\prime }(\mathbf{a}\cdot \mathbf{x}).
\end{equation*}%
Thus, if in our case, for fixed $r\in \{1,...,n\},$ the vectors $l_{k},$ $%
k\in \{1,...,n\}$, $k\neq r$, are perpendicular to the vectors $(a_{k},b_{k})
$, then
\begin{equation*}
\prod\limits_{\substack{ k=1 \\ k\neq r}}^{n}D_{l_{k}}f(x,y)=\prod\limits
_{\substack{ k=1 \\ k\neq r}}^{n}D_{l_{k}}\sum_{i=1}^{n}g_{i}(a_{i}x+b_{i}y)
\end{equation*}%
\begin{equation*}
=\sum_{i=1}^{n}\left( \prod\limits _{\substack{ k=1 \\ k\neq r}}^{n}\left(
(a_{i},b_{i})\cdot l_{k}\right) \right)
g_{i}^{(n-1)}(a_{i}x+b_{i}y)=\prod\limits_{\substack{ k=1 \\ k\neq r}}%
^{n}\left( (a_{r},b_{r})\cdot l_{k}\right) g_{r}^{(n-1)}(a_{r}x+b_{r}y).
\end{equation*}%
Now $g_{r}$ can be easily constructed from the above formula (up to a
polynomial of degree at most $n-2$). Note that this method is not feasible
if, in Theorem 2.11, the function $f$ is only of the class $C^{n-2}(\mathbb{R}%
^{2})$.
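The operator identity behind this method is easy to verify numerically. The sketch below is our own illustration for $n=2$ (the concrete directions and the choices $g_{1}=\sin $, $g_{2}=\cos $ are assumptions for the example): the operator $D_{l_{1}}$ with $l_{1}\perp (a_{1},b_{1})$ isolates $g_{2}^{\prime }$ scaled by $(a_{2},b_{2})\cdot l_{1}$.

```python
import math

# Illustrative numerical check of D_c g(a . x) = (c . a) g'(a . x)
# for f(x, y) = g1(a1*x + b1*y) + g2(a2*x + b2*y), g1 = sin, g2 = cos.
a1, b1 = 1.0, 2.0      # first direction
a2, b2 = 3.0, -1.0     # second direction

def grad_f(x, y):
    u1, u2 = a1 * x + b1 * y, a2 * x + b2 * y
    return (a1 * math.cos(u1) - a2 * math.sin(u2),
            b1 * math.cos(u1) - b2 * math.sin(u2))

# l1 is a unit vector perpendicular to (a1, b1), so D_{l1} annihilates the
# g1-term and isolates g2' (here g2' = -sin), scaled by (a2, b2) . l1.
norm = math.hypot(a1, b1)
l1 = (b1 / norm, -a1 / norm)

def D_l1_f(x, y):
    gx, gy = grad_f(x, y)
    return l1[0] * gx + l1[1] * gy

x, y = 0.3, -0.8
scale = a2 * l1[0] + b2 * l1[1]                  # (a2, b2) . l1
expected = scale * (-math.sin(a2 * x + b2 * y))  # ((a2,b2).l1) g2'(u2)
print(abs(D_l1_f(x, y) - expected))              # agrees up to rounding
```

Dividing out the scalar $(a_{2},b_{2})\cdot l_{1}$ and integrating then recovers $g_{2}$ up to a constant, exactly as in the construction of $g_{r}$ above.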
\bigskip
\subsection{Multivariate case}
In this subsection, we generalize ideas from the previous subsection to
prove constructively that if a multivariate function of a certain smoothness
class is represented by a sum of $k$ arbitrarily behaved ridge functions,
then, under suitable conditions, it can be represented by a sum of ridge
functions of the same smoothness class and some polynomial of a certain
degree. The appearance of a polynomial term is mainly related to the fact
that in $\mathbb{R}^{n}$ ($n\geq 3)$ there are many directions orthogonal to
a given direction. Such a result was proved nonconstructively in Section
2.1.5 (see Theorem 2.5), but here under a mild hypothesis on the degree of
smoothness, we give a new proof for this theorem, which will provide us with
a recipe for constructing the functions $g_{i}$ in (2.24).
The following theorem is valid.
\bigskip
\textbf{Theorem 2.12.} \textit{Assume $f\in C^{s}(\mathbb{R}^{n})$ is of the
form (2.1). Let $s\geq k-p+1,$ where $p$ is the number of vectors $\mathbf{a}%
^{i}$ forming a maximal linearly independent system. Then there exist
functions $g_{i}\in C^{s}(\mathbb{R})$ and a polynomial $P(\mathbf{x})$ of
total degree at most $k-p+1$ such that (2.24) holds and $g_{i}$ can be
constructed algorithmically.}
\bigskip
\begin{proof} We start the proof by choosing a maximal linearly
independent system in $\{\mathbf{a}^{1},...,\mathbf{a}^{k}\}$. The case
when the system $\{\mathbf{a}^{1},...,\mathbf{a}^{k}\}$ itself is linearly
independent is obvious (see Section 2.1.1). Thus we omit this special case
here. Without loss of generality we may assume that the first $p$ vectors $%
\mathbf{a}^{1},...,\mathbf{a}^{p}$, $p<k$, are linearly independent. Thus,
the vectors $\mathbf{a}^{j},$ $j=p+1,...,k,$ can be expressed as linear
combinations $\lambda _{1}^{j}\mathbf{a}^{1}+\cdot \cdot \cdot +\lambda
_{p}^{j}\mathbf{a}^{p}$, where $\lambda _{1}^{j},...,\lambda _{p}^{j}$ are
real numbers. In addition, we can always apply a nonsingular linear
transformation $S$ of the coordinates such that $S:\mathbf{a}^{i}\rightarrow
\mathbf{e}_{i},$ $i=1,...,p,$ where $\mathbf{e}_{i}$ denotes the $i$-th unit
vector. This reduces the initial representation (2.1) to the following
simpler form
\begin{equation*}
f(\mathbf{x})=f_{1}(x_{1})+\cdot \cdot \cdot
+f_{p}(x_{p})+\sum_{i=1}^{m}f_{p+i}(\mathbf{a}^{i}\cdot \mathbf{x}).\eqno%
(2.79)
\end{equation*}%
Note that we keep the notation of (2.1), but here $\mathbf{x}%
=(x_{1},...,x_{p}),$ $\mathbf{a}^{i}=(\lambda _{1}^{i},...,\lambda
_{p}^{i})\in \mathbb{R}^{p}$ and $m=k-p.$ Obviously, we prove Theorem 2.12
if we prove it for the representation (2.79). Thus, in the sequel, we prove
that if $f\in C^{s}(\mathbb{R}^{p})$ is of the form (2.79) and $s\geq m+1,$
then there exist functions $g_{i}\in C^{s}(\mathbb{R})$ and a polynomial $P(%
\mathbf{x})$ of total degree at most $m+1$ such that
\begin{equation*}
f(\mathbf{x})=g_{1}(x_{1})+\cdot \cdot \cdot
+g_{p}(x_{p})+\sum_{i=1}^{m}g_{p+i}(\mathbf{a}^{i}\cdot \mathbf{x})+P(%
\mathbf{x}).
\end{equation*}
In the process of the proof, we also see how these $g_{i}$ are constructed.
For each $i=1,...,m,$ let $\{\mathbf{e}_{1}^{(i)},...,\mathbf{e}%
_{p-1}^{(i)}\}$ denote an orthonormal basis in the hyperplane perpendicular
to $\mathbf{a}^{i}.$ By $\Delta _{\mathbf{e}}^{(\delta )}F$ we denote the
increment of a function $F$ in a direction $\mathbf{e}$ of length $\delta .$
That is,
\begin{equation*}
\Delta _{\mathbf{e}}^{(\delta )}F(\mathbf{x})=F(\mathbf{x}+\delta \mathbf{e}%
)-F(\mathbf{x}).
\end{equation*}%
We also use the notation $\frac{\partial F}{\partial \mathbf{e}}$ to denote
the derivative of $F$ in a direction $\mathbf{e}$.
It is easy to check that the increment of a ridge function $g(\mathbf{a}%
\cdot \mathbf{x})$ in any direction perpendicular to $\mathbf{a}$ is zero.
For example,
\begin{equation*}
\Delta _{\mathbf{e}_{j}^{(i)}}^{(\delta )}g(\mathbf{a}^{i}\cdot \mathbf{x})%
=0,
\end{equation*}%
for all $i=1,...,m,$ $j=1,...,p-1.$ Therefore, for any indices $%
i_{1},...,i_{m}\in \{1,...,p-1\}$, $q\in \{1,...,p\}$ and numbers $\delta
_{1},...,\delta _{m},\delta \in \mathbb{R}$ we have the formula
\begin{equation*}
\Delta _{\mathbf{e}_{i_{1}}^{(1)}}^{(\delta _{1})}\Delta _{\mathbf{e}%
_{i_{2}}^{(2)}}^{(\delta _{2})}\cdot \cdot \cdot \Delta _{\mathbf{e}%
_{i_{m}}^{(m)}}^{(\delta _{m})}\Delta _{\mathbf{e}_{q}}^{(\delta )}f(\mathbf{%
x})=\Delta _{\mathbf{e}_{i_{1}}^{(1)}}^{(\delta _{1})}\Delta _{\mathbf{e}%
_{i_{2}}^{(2)}}^{(\delta _{2})}\cdot \cdot \cdot \Delta _{\mathbf{e}%
_{i_{m}}^{(m)}}^{(\delta _{m})}\Delta _{\mathbf{e}_{q}}^{(\delta
)}f_{q}(x_{q}),
\end{equation*}%
where $\mathbf{e}_{q}$ denotes the $q$-th unit vector. This means that for
each $q=1,...,p,$ the mixed directional derivative
\begin{equation*}
\frac{\partial ^{m+1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m}}^{(m)}\partial x_{q}}(\mathbf{x})
\end{equation*}%
depends only on the variable $x_{q}.$ Denote this derivative by $%
h_{i_{1},...,i_{m}}^{0,q}(x_{q})$:
\begin{equation*}
h_{i_{1},...,i_{m}}^{0,q}(x_{q})=\frac{\partial ^{m+1}f}{\partial \mathbf{e}%
_{i_{1}}^{(1)}\cdot \cdot \cdot \partial \mathbf{e}_{i_{m}}^{(m)}\partial
x_{q}}(\mathbf{x}),\text{ }q=1,...,p.\eqno(2.80)
\end{equation*}%
Since $f\in C^{s}(\mathbb{R}^{p}),$ we obtain that $%
h_{i_{1},...,i_{m}}^{0,q}\in C^{s-m-1}(\mathbb{R}).$ It follows from (2.80)
that
\begin{equation*}
d\left( \frac{\partial ^{m}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot
\cdot \partial \mathbf{e}_{i_{m}}^{(m)}}\right)
=h_{i_{1},...,i_{m}}^{0,1}(x_{1})dx_{1}+\cdot \cdot \cdot
+h_{i_{1},...,i_{m}}^{0,p}(x_{p})dx_{p}.\eqno(2.81)
\end{equation*}%
We conclude from (2.81) that
\begin{equation*}
\frac{\partial ^{m}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m}}^{(m)}}(\mathbf{x}%
)=h_{i_{1},...,i_{m}}^{1,1}(x_{1})+\cdot \cdot \cdot
+h_{i_{1},...,i_{m}}^{1,p}(x_{p})+c_{i_{1},...,i_{m}},\eqno(2.82)
\end{equation*}%
where the functions $h_{i_{1},...,i_{m}}^{1,q}(x_{q})$, $q=1,...,p,$ are
antiderivatives of $h_{i_{1},...,i_{m}}^{0,q}(x_{q})$ satisfying the
condition $h_{i_{1},...,i_{m}}^{1,q}(0)=0$ and $c_{i_{1},...,i_{m}}$ is a
constant. Note that $h_{i_{1},...,i_{m}}^{1,q}\in C^{s-m}(\mathbb{R}),$ $%
q=1,...,p.$ Obviously, for any pair $k,t\in \{1,...,p-1\}$,
\begin{equation*}
\frac{\partial ^{m+1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m-1}}^{(m-1)}\partial \mathbf{e}_{k}^{(m)}\partial
\mathbf{e}_{t}^{(m)}}=\frac{\partial ^{m+1}f}{\partial \mathbf{e}%
_{i_{1}}^{(1)}\cdot \cdot \cdot \partial \mathbf{e}_{i_{m-1}}^{(m-1)}%
\partial \mathbf{e}_{t}^{(m)}\partial \mathbf{e}_{k}^{(m)}}\eqno(2.83)
\end{equation*}%
It follows from (2.82) and (2.83) that
\begin{equation*}
(\mathbf{e}_{q}\cdot \mathbf{e}_{k}^{(m)})\left(
h_{i_{1},...,i_{m-1},t}^{1,q}\right) ^{^{\prime }}(x_{q})=(\mathbf{e}%
_{q}\cdot \mathbf{e}_{t}^{(m)})\left( h_{i_{1},...,i_{m-1},k}^{1,q}\right)
^{^{\prime }}(x_{q})+c,
\end{equation*}%
where $c$ is a constant depending on the parameters $i_{1},...,i_{m-1},k,t$
and $q.$ Recall that by construction, $h_{i_{1},...,i_{m}}^{1,q}(0)=0.$ Hence
\begin{equation*}
(\mathbf{e}_{q}\cdot \mathbf{e}%
_{k}^{(m)})h_{i_{1},...,i_{m-1},t}^{1,q}(x_{q})=(\mathbf{e}_{q}\cdot \mathbf{%
e}_{t}^{(m)})h_{i_{1},...,i_{m-1},k}^{1,q}(x_{q})+cx_{q}.\eqno(2.84)
\end{equation*}
Since for each $q=1,...,p,$ the vectors $\mathbf{e}_{q}$ and $\mathbf{a}^{m}$
are linearly independent, there exists an index $i_{m}(q)\in \{1,...,p-1\}$
such that the vector $\mathbf{e}_{i_{m}(q)}^{(m)}$ is not orthogonal to $%
\mathbf{e}_{q}.$ That is, $\mathbf{e}_{q}\cdot \mathbf{e}_{i_{m}(q)}^{(m)}$ $%
\neq 0.$ For each $q=1,...,p,$ fix the index $i_{m}(q)$ and define the
following functions
\begin{equation*}
h_{i_{1},...,i_{m-1}}^{2,q}(x_{q})=\frac{1}{\mathbf{e}_{q}\cdot \mathbf{e}%
_{i_{m}(q)}^{(m)}}\int_{0}^{x_{q}}h_{i_{1},...,i_{m-1},i_{m}(q)}^{1,q}(z)dz.%
\eqno(2.85)
\end{equation*}%
and
\begin{equation*}
F_{i_{1},...,i_{m-1}}(\mathbf{x})=h_{i_{1},...,i_{m-1}}^{2,1}(x_{1})+\cdot
\cdot \cdot +h_{i_{1},...,i_{m-1}}^{2,p}(x_{p}).
\end{equation*}
It is easy to obtain from (2.84) and (2.85) that for any $i_{m}\in
\{1,...,p-1\},$%
\begin{equation*}
\frac{\partial F_{i_{1},...,i_{m-1}}}{\partial \mathbf{e}_{i_{m}}^{(m)}}(%
\mathbf{x})=h_{i_{1},...,i_{m}}^{1,1}(x_{1})+\cdot \cdot \cdot
+h_{i_{1},...,i_{m}}^{1,p}(x_{p})+P_{i_{1},...,i_{m}}^{(1)},\eqno(2.86)
\end{equation*}%
where $P_{i_{1},...,i_{m}}^{(1)}$ is a polynomial of total degree not
greater than $1$. It follows from (2.82) and (2.86) that
\begin{equation*}
\frac{\partial }{\partial \mathbf{e}_{i_{m}}^{(m)}}\left[ \frac{\partial
^{m-1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot \partial \mathbf{%
e}_{i_{m-1}}^{(m-1)}}-F_{i_{1},...,i_{m-1}}\right] (\mathbf{x}%
)=c_{i_{1},...,i_{m}}-P_{i_{1},...,i_{m}}^{(1)}(\mathbf{x}).\eqno(2.87)
\end{equation*}%
Note that the last equality is valid for all vectors $\mathbf{e}%
_{i_{m}}^{(m)},$ which form a basis in the hyperplane orthogonal to $\mathbf{%
a}^{m}$. Thus from (2.87) we conclude that the following expansion is valid
\begin{equation*}
\left.
\begin{array}{c}
\frac{\partial ^{m-1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m-1}}^{(m-1)}}(\mathbf{x}%
)=h_{i_{1},...,i_{m-1}}^{2,1}(x_{1})+\cdot \cdot \cdot
+h_{i_{1},...,i_{m-1}}^{2,p}(x_{p}) \\
+\varphi _{i_{1},...,i_{m-1}}^{2,1}(\mathbf{a}^{m}\cdot \mathbf{x}%
)+P_{i_{1},...,i_{m-1}}^{(2)}(\mathbf{x}).%
\end{array}%
\right. \eqno(2.88)
\end{equation*}%
Here all the functions $%
h_{i_{1},...,i_{m-1}}^{2,1},...,h_{i_{1},...,i_{m-1}}^{2,p},\varphi
_{i_{1},...,i_{m-1}}^{2,1}\in C^{s-m+1}(\mathbb{R})$ and $%
P_{i_{1},...,i_{m-1}}^{(2)}$ is a polynomial of total degree not greater
than $2.$
Since for each $q=1,...,p,$ the vector $\mathbf{e}_{q}$ is not collinear to $%
\mathbf{a}^{m-1},$ there is an index $i_{m-1}(q)\in \{1,...,p-1\}$ such that
$\mathbf{e}_{i_{m-1}(q)}^{(m-1)}$ is not orthogonal to $\mathbf{e}_{q}$.
Similarly, since $\mathbf{a}^{m-1}$ is not collinear to $\mathbf{a}^{m},$
there is an index $i_{m-1}(m)\in \{1,...,p-1\}$ such that $\mathbf{e}%
_{i_{m-1}(m)}^{(m-1)}$ is not orthogonal to $\mathbf{a}^{m}$. Fix the
indices $i_{m-1}(q)$, $i_{m-1}(m)$ and consider the following functions
\begin{equation*}
h_{i_{1},...,i_{m-2}}^{3,q}(x_{q})=\frac{1}{\mathbf{e}_{q}\cdot \mathbf{e}%
_{i_{m-1}(q)}^{(m-1)}}%
\int_{0}^{x_{q}}h_{i_{1},...,i_{m-2},i_{m-1}(q)}^{2,q}(z)dz,\text{ }%
q=1,...,p,\eqno(2.89)
\end{equation*}
\begin{equation*}
\varphi _{i_{1},...,i_{m-2}}^{3,1}(t)=\frac{1}{\mathbf{a}^{m}\cdot \mathbf{e}%
_{i_{m-1}(m)}^{(m-1)}}\int_{0}^{t}\varphi
_{i_{1},...,i_{m-2},i_{m-1}(m)}^{2,1}(z)dz,\eqno(2.90)
\end{equation*}%
and
\begin{equation*}
F_{i_{1},...,i_{m-2}}(\mathbf{x})=h_{i_{1},...,i_{m-2}}^{3,1}(x_{1})+\cdot
\cdot \cdot +h_{i_{1},...,i_{m-2}}^{3,p}(x_{p})+\varphi
_{i_{1},...,i_{m-2}}^{3,1}(\mathbf{a}^{m}\cdot \mathbf{x}).\eqno(2.91)
\end{equation*}
Similar to (2.84), one can easily verify that for any pair $k,t\in
\{1,...,p-1\}$ and for all $q=1,...,p,$ the following equalities are valid.
\begin{equation*}
\left.
\begin{array}{c}
(\mathbf{e}_{q}\cdot \mathbf{e}%
_{k}^{(m-1)})h_{i_{1},...,i_{m-2},t}^{2,q}(x_{q})=(\mathbf{e}_{q}\cdot
\mathbf{e}_{t}^{(m-1)})h_{i_{1},...,i_{m-2},k}^{2,q}(x_{q})+H_{q}(x_{q}), \\
(\mathbf{a}^{m}\cdot \mathbf{e}_{k}^{(m-1)})\varphi
_{i_{1},...,i_{m-2},t}^{2,1}(\mathbf{a}^{m}\cdot \mathbf{x})=(\mathbf{a}%
^{m}\cdot \mathbf{e}_{t}^{(m-1)})\varphi _{i_{1},...,i_{m-2},k}^{2,1}(%
\mathbf{a}^{m}\cdot \mathbf{x})+\Phi (\mathbf{x}),%
\end{array}%
\right. \eqno(2.92)
\end{equation*}%
where $H_{q}$ and $\Phi $ are univariate and $n$-variable polynomials of
degree not greater than $2.$ Indeed, applying the Schwarz formula
\begin{equation*}
\frac{\partial ^{m+1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m-2}}^{(m-2)}\partial \mathbf{e}_{k}^{(m-1)}\partial
\mathbf{e}_{t}^{(m-1)}\partial \mathbf{e}_{i_{m}(q)}^{(m)}}=\frac{\partial
^{m+1}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot \partial \mathbf{%
e}_{i_{m-2}}^{(m-2)}\partial \mathbf{e}_{t}^{(m-1)}\partial \mathbf{e}%
_{k}^{(m-1)}\partial \mathbf{e}_{i_{m}(q)}^{(m)}}
\end{equation*}%
which rests on the symmetry of mixed derivatives, it follows from (2.82) that for any $k,t\in
\{1,...,p-1\}$
\begin{equation*}
(\mathbf{e}_{q}\cdot \mathbf{e}_{k}^{(m-1)})\left(
h_{i_{1},...,i_{m-2},t,i_{m}(q)}^{1,q}\right) ^{^{\prime }}(x_{q})=(\mathbf{e%
}_{q}\cdot \mathbf{e}_{t}^{(m-1)})\left(
h_{i_{1},...,i_{m-2},k,i_{m}(q)}^{1,q}\right) ^{^{\prime }}(x_{q})+d,
\end{equation*}%
where $d$ is a constant depending on the parameters $i_{1},...,i_{m-2},k,t$
and $i_{m}(q).$ Since, by construction, $h_{i_{1},...,i_{m}}^{1,q}(0)=0,$ we
obtain that
\begin{equation*}
(\mathbf{e}_{q}\cdot \mathbf{e}%
_{k}^{(m-1)})h_{i_{1},...,i_{m-2},t,i_{m}(q)}^{1,q}(x_{q})=(\mathbf{e}%
_{q}\cdot \mathbf{e}%
_{t}^{(m-1)})h_{i_{1},...,i_{m-2},k,i_{m}(q)}^{1,q}(x_{q})+dx_{q}.
\end{equation*}%
The last equality, together with (2.85), yields that
\begin{equation*}
(\mathbf{e}_{q}\cdot \mathbf{e}_{k}^{(m-1)})\left(
h_{i_{1},...,i_{m-2},t}^{2,q}\right) ^{^{\prime }}(x_{q})=(\mathbf{e}%
_{q}\cdot \mathbf{e}_{t}^{(m-1)})\left( h_{i_{1},...,i_{m-2},k}^{2,q}\right)
^{^{\prime }}(x_{q})+dx_{q}.
\end{equation*}%
Therefore, the first equality in (2.92) holds. Considering this and applying
the corresponding Schwarz formula to (2.88) we obtain the second equality in
(2.92).
Taking into account the definitions (2.89), (2.90) and the relations (2.92),
we obtain from (2.91) that for any $i_{m-1}\in \{1,...,p-1\},$%
\begin{equation*}
\frac{\partial F_{i_{1},...,i_{m-2}}}{\partial \mathbf{e}_{i_{m-1}}^{(m-1)}}(%
\mathbf{x})
\end{equation*}
\begin{equation*}
=h_{i_{1},...,i_{m-1}}^{2,1}(x_{1})+\cdot \cdot \cdot
+h_{i_{1},...,i_{m-1}}^{2,p}(x_{p})+\varphi _{i_{1},...,i_{m-1}}^{2,1}(%
\mathbf{a}^{m}\cdot \mathbf{x})+\widetilde{P}_{i_{1},...,i_{m-1}}^{(2)}(%
\mathbf{x}),\eqno(2.93)
\end{equation*}%
where $\widetilde{P}_{i_{1},...,i_{m-1}}^{(2)}$ is a polynomial of degree
not greater than $2.$ It follows from (2.88) and (2.93) that
\begin{equation*}
\frac{\partial }{\partial \mathbf{e}_{i_{m-1}}^{(m-1)}}\left[ \frac{\partial
^{m-2}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot \partial \mathbf{%
e}_{i_{m-2}}^{(m-2)}}-F_{i_{1},...,i_{m-2}}\right] (\mathbf{x}%
)=P_{i_{1},...,i_{m-1}}^{(2)}(\mathbf{x})-\widetilde{P}%
_{i_{1},...,i_{m-1}}^{(2)}(\mathbf{x}).\eqno(2.94)
\end{equation*}%
Note that the last equality is valid for all vectors $\mathbf{e}%
_{i_{m-1}}^{(m-1)},$ which form a basis in the hyperplane orthogonal to $%
\mathbf{a}^{m-1}$. Considering this, from (2.94) we derive the following
representation
\begin{equation*}
\left.
\begin{array}{c}
\frac{\partial ^{m-2}f}{\partial \mathbf{e}_{i_{1}}^{(1)}\cdot \cdot \cdot
\partial \mathbf{e}_{i_{m-2}}^{(m-2)}}(\mathbf{x}%
)=h_{i_{1},...,i_{m-2}}^{3,1}(x_{1})+\cdot \cdot \cdot
+h_{i_{1},...,i_{m-2}}^{3,p}(x_{p}) \\
+\varphi _{i_{1},...,i_{m-2}}^{3,1}(\mathbf{a}^{m}\cdot \mathbf{x})+\varphi
_{i_{1},...,i_{m-2}}^{3,2}(\mathbf{a}^{m-1}\cdot \mathbf{x}%
)+P_{i_{1},...,i_{m-2}}^{(3)}(\mathbf{x}).%
\end{array}%
\right. \eqno(2.95)
\end{equation*}%
Here all the functions $h_{i_{1},...,i_{m-2}}^{3,1},\ldots ,h_{i_{1},...,i_{m-2}}^{3,p},$ $\varphi _{i_{1},...,i_{m-2}}^{3,1},$ $\varphi _{i_{1},...,i_{m-2}}^{3,2}\in C^{s-m+2}(\mathbb{R})$, and $P_{i_{1},...,i_{m-2}}^{(3)}$ is a polynomial of total degree not greater than $3.$
Note that in the left hand sides of (2.82), (2.88) and (2.95) we have the
mixed directional derivatives of $f$ and the order of these derivatives is
decreased by one at each consecutive step. Continuing the above process,
until it reaches the function $f$, we obtain the desired representation.
Note that the above proof gives a recipe for constructing the smooth ridge
functions $g_{i}$. Writing out explicit recurrent formulas for $g_{i}$, as
in Theorem 2.11, is technically cumbersome here and hence is avoided.
\end{proof}
\textbf{Remark 2.7.} Note that using Theorem 2.12, the degree of the polynomial $P(\mathbf{x})$ in Theorem 2.5 can be reduced. Indeed, it follows from (2.27)
and (2.28) that the above polynomial $P(\mathbf{x})$ is of the form
(2.1). On the other hand, by Theorem 2.12 there exist functions $g_{i}^{\ast
}\in C^{s}(\mathbb{R})$, $i=1,...,k$, and a polynomial $G(\mathbf{x})$ of
degree at most $k-p+1$ such that
\begin{equation*}
P(\mathbf{x})=\sum_{i=1}^{k}g_{i}^{\ast }(\mathbf{a}^{i}\cdot \mathbf{x})+G(%
\mathbf{x}).
\end{equation*}%
Now considering this in (2.24) we see that our assertion is true.
\bigskip
At the end of this chapter, we want to draw the reader's attention to the
following uniqueness question. Assume we are given pairwise linearly
independent vectors $\mathbf{a}^{i},$ $i=1,...,k,$ in $\mathbb{R}^{n}$ and a
function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ of the form (2.1). In how
many different ways can $f$ be written as a sum of ridge functions with the
directions $\mathbf{a}^{i}$? Clearly, representation (2.1) is not
unique, since we can always add some constants $c_{i}$ to $f_{i}$ without
changing the resulting sum in (2.1) provided that $\sum_{i=1}^{k}c_{i}=0$. It turns out
that under minimal requirements representation (2.1) is unique up to
polynomials of degree at most $k-2$. More precisely, if, in addition to (2.1), $%
f$ also has the form (2.2) and $%
f_{i},g_{i}\in \mathcal{B}$, $i=1,...,k$, then the functions $f_{i}-g_{i}$ are univariate
polynomials of degree at most $k-2$. This result is due to Pinkus
\cite[Theorem 3.1]{117}. It follows immediately from
this result that in Theorems 2.6--2.10 the functions $g_{i}$ are unique up to
a univariate polynomial. This is also valid for $g_{i}$ in Theorems 2.4 and
2.5, but in this case for the proof we must apply a slightly different result of Pinkus
\cite[Corollary 3.2]{117}: Assume a multivariate
polynomial $f$ of degree $m$ is of the form (2.1) and $f_{i}\in \mathcal{B}$
for $i=1,...,k$. Then $f_{i}$ are univariate polynomials of degree at most $%
l=\max \left\{ m,k-2\right\} $.
A different uniqueness problem, in a more general setting, will be analyzed
in Chapter 4. In that problem we will look for sets $Q\subset \mathbb{R}^{n}$
for which representation (2.1), considered on $Q$, is unique.
\newpage
\chapter{Approximation of multivariate functions by sums of univariate
functions}
It is clear that in the special case when the directions of ridge functions
coincide with the coordinate directions, the problem of approximation by
linear combinations of these functions turns into the problem of
approximation by sums of univariate functions. This is also the simplest
case in ridge function approximation. The simplicity of the approximation
guarantees its practicability in application areas, where complicated
multivariate functions are the main obstacle. In mathematics, this type of
approximation has arisen, for example, in connection with the classical
functional equations \cite{11}, the numerical solution of certain PDE
boundary value problems \cite{9}, dimension theory \cite{133,132}, etc. In
this chapter, we obtain some results concerning the problem of best
approximation by sums of univariate functions.
Most of the material of this chapter is taken from \cite{59,55,56,48}.
\bigskip
\section{Characterization of some bivariate function classes by formulas for
the error of approximation}
This section is devoted to calculation formulas for the error of
approximation of bivariate functions by sums of univariate functions.
Certain classes of bivariate functions depending on some numerical parameter
are constructed and characterized in terms of the approximation error
calculation formulas.
\subsection{Exposition of the problem}
The approximation problem considered here is to approximate a continuous and
real-valued function of two variables by sums of two continuous functions of
one variable. To make the problem precise, let $Q$ be a compact set in the $%
xOy$ plane. Consider the approximation of a continuous function $f \in C(Q)$
by functions from the manifold $D=\left\{ \varphi (x)+\psi (y)\right\} ,$
where $\varphi (x),\psi (y)$ are defined and continuous on the projections
of $Q$ into the coordinate axes $x$ and $y$, respectively. The approximation
error is defined as the distance from $f$ to $D:$
\begin{equation*}
E(f)=dist(f,D)=\inf\limits_{D}\left\Vert f-\varphi -\psi \right\Vert _{C(Q)}=
\end{equation*}%
\begin{equation*}
=\inf\limits_{D}\max\limits_{(x,y)\in Q}\left\vert f(x,y)-\varphi (x)-\psi
(y)\right\vert.
\end{equation*}%
A function $\varphi _{0}(x)+\psi _{0}(y)$ from $D$, if it exists, is called
an extremal element or a best approximating sum if
\begin{equation*}
E(f)=\left\Vert f-\varphi _{0}-\psi _{0}\right\Vert _{C(Q)}.
\end{equation*}%
Since $E(f)$ also depends on $Q$, in some cases, to avoid confusion,
we will write $E(f,Q)$ instead of $E(f)$.
In this section we deal with calculation formulas for $E(f)$. In 1951
Diliberto and Straus published a paper \cite{26}, in which along with other
results they established a formula for $E(f,R)$, involving a supremum over
all closed lightning bolts; here and throughout this section, $R$ is a
rectangle with sides parallel to the coordinate axes. Later the same
formula was established by other authors differently, in cases of both
rectangle (see \cite{113}) and more general sets (see \cite{79,107}). Although the formula was valid for all continuous functions, it was not
easily calculable. Some authors started to seek easily calculable formulas
for the approximation error for some subsets of continuous functions. Rivlin
and Sibner \cite{121} proved a result which allows one to find the exact
value of $E(f,R)$ for a function $f(x,y)$ having the continuous and
nonnegative derivative $\frac{\partial ^{2}f}{\partial x\partial y}$. This
result in a more general case (for functions of $n$ variables) was proved by
Flatto \cite{30}. Babaev \cite{6} generalized Rivlin and Sibner's result (as
well as Flatto's result, see \cite{7}). More precisely, he considered the
class $M(R)$ of continuous functions $f(x,y) $ with the property
\begin{equation*}
\Delta
_{h_{1},h_{2}}f=f(x,y)+f(x+h_{1},y+h_{2})-f(x,y+h_{2})-f(x+h_{1},y)\geq 0
\end{equation*}%
for each rectangle $\left[ x,x+h_{1}\right] \times \left[ y,y+h_{2}\right]
\subset R$, and proved that if $f(x,y)$ belongs to $M(R)$, where $R=\left[
a_{1},b_{1}\right] \times \left[ a_{2},b_{2}\right] $, then
\begin{equation*}
E(f,R)=\frac{1}{4}\left[
f(a_{1},a_{2})+f(b_{1},b_{2})-f(a_{1},b_{2})-f(b_{1},a_{2})\right] .
\end{equation*}%
As seen from this formula, to calculate $E(f)$ it is sufficient to find only
values of $f(x,y)$ at the vertices of $R$. One can see that the formula also
gives a sufficient condition for membership in the class $M(R)$, i.e. if
\begin{equation*}
E(f,S)=\frac{1}{4}\left[
f(x_{1},y_{1})+f(x_{2},y_{2})-f(x_{1},y_{2})-f(x_{2},y_{1})\right] ,
\end{equation*}%
for a given $f$ and for each $S=\left[ x_{1},x_{2}\right] \times \left[
y_{1},y_{2}\right] \subset R$, then the function $f(x,y)$ is from $M(R)$.
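Babaev's vertex formula can be probed numerically. The following Python sketch (an informal illustration, not part of the formal development; the test function $f(x,y)=xy$, the grid size and the sweep count are chosen only for this check) estimates $E(f,R)$ on a grid by the classical Diliberto--Straus leveling iteration from \cite{26} and compares the result with the vertex formula:

```python
import numpy as np

def leveling_error(F, sweeps=20):
    """Estimate E(f, R) on a grid by the Diliberto-Straus leveling
    iteration: alternately subtract, from each row and each column,
    the midpoint (max+min)/2 of that row/column. The sup norm of the
    residual converges to the approximation error."""
    G = F.astype(float)
    for _ in range(sweeps):
        G = G - (G.max(axis=1, keepdims=True) + G.min(axis=1, keepdims=True)) / 2
        G = G - (G.max(axis=0, keepdims=True) + G.min(axis=0, keepdims=True)) / 2
    return np.abs(G).max()

# f(x, y) = x*y belongs to M(R) on R = [0,1]^2: its mixed second
# difference equals h1*h2 >= 0, so Babaev's vertex formula applies.
x = np.linspace(0.0, 1.0, 101)
F = np.outer(x, x)                      # F[i, j] = x_i * x_j

vertex = (F[0, 0] + F[-1, -1] - F[0, -1] - F[-1, 0]) / 4   # = 1/4
estimate = leveling_error(F)
assert abs(vertex - 0.25) < 1e-12
assert abs(estimate - 0.25) < 1e-6
```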
Our purpose is to construct new classes of continuous functions, which will
depend on a numerical parameter, and characterize each class in terms of the
approximation error calculation formulas. The mentioned parameter will show
which points of $R$ the calculation formula involves. We will also construct
a best approximating sum $\varphi _{0}+\psi _{0}$ to a function from
constructed classes.
\bigskip
\subsection{Definition of the main classes}
Let throughout this section $R=\left[ a_{1},b_{1}\right] \times \left[
a_{2},b_{2}\right] $ be a rectangle and $c\in (a_{1},b_{1}]$. Denote $R_{1}=%
\left[ a_{1},c\right] \times \left[ a_{2},b_{2}\right] $ and $R_{2}=\left[
c,b_{1}\right] \times \left[ a_{2},b_{2}\right] $. It is clear that $%
R=R_{1}\cup R_{2}$ and if $c=b_{1}$, then $R=R_{1}$.
We associate each rectangle $S=\left[ x_{1},x_{2}\right] \times \left[
y_{1},y_{2}\right] $ lying in $R$ with the following functional:
\begin{equation*}
L(f,S)=\frac{1}{4}\left[
f(x_{1},y_{1})+f(x_{2},y_{2})-f(x_{1},y_{2})-f(x_{2},y_{1})\right] .
\end{equation*}
\bigskip
\textbf{Definition 3.1.} \textit{We say that a continuous function $f(x,y)$
belongs to the class $V_{c}(R)$ if }
\textit{1) $L(f,S)\geq 0$, for each $S\subset R_{1}$; }
\textit{2) $L(f,S)\leq 0$, for each $S\subset R_{2}$; }
\textit{3) $L(f,S)\geq 0$, for each $S=\left[ a_{1},b_{1}\right] \times %
\left[ y_{1},y_{2}\right] ,~\ S\subset R$.}
\bigskip
It can be shown that for any $c\in (a_{1},b_{1}]$ the class $V_{c}(R)$ is
not empty. Indeed, one can easily verify that the function
\begin{equation*}
v_{c}(x,y)=\left\{
\begin{array}{c}
w(x,y)-w(c,y),\;\ (x,y)\in R_{1} \\
w(c,y)-w(x,y),\;\ (x,y)\in R_{2}%
\end{array}%
\right.
\end{equation*}
where $w(x,y)=\left( \frac{x-a_{1}}{b_{1}-a_{1}}\right) ^{\frac{1}{n}}\cdot
y $ and $n\geq \log _{2}\frac{b_{1}-a_{1}}{c-a_{1}}$, satisfies conditions
1)-3) and therefore belongs to $V_{c}(R)$. The class $V_{c}(R)$ has the
following obvious properties:\newline
a) If $f_{1},f_{2}\in V_{c}(R)$ and $\alpha _{1},\alpha _{2}\geq 0$, then $\alpha _{1}f_{1}+\alpha _{2}f_{2}\in V_{c}(R)$; moreover, $V_{c}(R)$ is a closed subset of the space of continuous functions.\newline
b) $V_{b_{1}}(R)=M(R)$.\newline
c) If $f$ is a common element of $V_{c_{1}}(R)$ and $V_{c_{2}}(R)$, where $%
a_{1}<c_{1}<c_{2}\leq b_{1}$, then $f(x,y)=\varphi (x)+\psi (y)$ on the
rectangle $\left[ c_{1},c_{2}\right] \times \left[ a_{2},b_{2}\right] $.
The properties a) and b) are clear. The property c) also becomes clear if
one notes that, according to the definition of the classes $V_{c_{1}}(R)$ and $%
V_{c_{2}}(R)$, for each rectangle
\begin{equation*}
S\subset \left[ c_{1},c_{2}\right] \times \left[ a_{2},b_{2}\right]
\end{equation*}
we have
\begin{equation*}
L(f,S)\leq 0\;\; \mbox{and}\;\; L(f,S)\geq 0,
\end{equation*}
respectively. Hence
\begin{equation*}
L(f,S)=0\;\; \mbox{for each}\;\; S\subset \left[ c_{1},c_{2}\right] \times %
\left[ a_{2},b_{2}\right].
\end{equation*}
Thus it is not difficult to see that $f$ is of the form $\varphi
(x)+\psi (y)$ on the rectangle $\left[ c_{1},c_{2}\right] \times \left[
a_{2},b_{2}\right] $.
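The membership $v_{c}\in V_{c}(R)$ asserted above can also be spot-checked numerically. The Python sketch below (a random spot check, not a proof; the choices $R=[0,1]^{2}$, $c=1/4$, $n=2$ are illustrative) evaluates $L(v_{c},S)$ on sampled subrectangles:

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1, a2, b2, c = 0.0, 1.0, 0.0, 1.0, 0.25
n = 2                                   # n >= log2((b1-a1)/(c-a1)) = 2

def w(x, y):
    return ((x - a1) / (b1 - a1)) ** (1.0 / n) * y

def v(x, y):
    # the function v_c from the text, glued along x = c
    return w(x, y) - w(c, y) if x <= c else w(c, y) - w(x, y)

def L(f, x1, x2, y1, y2):
    return (f(x1, y1) + f(x2, y2) - f(x1, y2) - f(x2, y1)) / 4

for _ in range(1000):
    y1, y2 = sorted(rng.uniform(a2, b2, 2))
    # condition 1): subrectangles of R1 = [a1, c] x [a2, b2]
    x1, x2 = sorted(rng.uniform(a1, c, 2))
    assert L(v, x1, x2, y1, y2) >= -1e-12
    # condition 2): subrectangles of R2 = [c, b1] x [a2, b2]
    x1, x2 = sorted(rng.uniform(c, b1, 2))
    assert L(v, x1, x2, y1, y2) <= 1e-12
    # condition 3): full-width rectangles [a1, b1] x [y1, y2]
    assert L(v, a1, b1, y1, y2) >= -1e-12
```

For these parameters condition 3) in fact holds with equality, which shows that the bound $n\geq \log _{2}\frac{b_{1}-a_{1}}{c-a_{1}}$ is used exactly where the full-width rectangles are tested.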
\bigskip
\textbf{Lemma 3.1.} \textit{Assume a function $f(x,y)$ has the continuous
derivative $\frac{\partial ^{2}f}{\partial x\partial y}$ on the rectangle $R$
and satisfies the following conditions}
\textit{1) $\frac{\partial ^{2}f}{\partial x\partial y}\geq 0$, for all $%
(x,y)\in R_{1}$; }
\textit{2) $\frac{\partial ^{2}f}{\partial x\partial y}\leq 0$, for all $%
(x,y)\in R_{2}$; }
\textit{3) $\frac{df(a_{1},y)}{dy}\leq \frac{df(b_{1},y)}{dy}$, for all $%
y\in \left[ a_{2},b_{2}\right]$.}
\textit{Then $f(x,y)$ belongs to $V_{c}(R)$.}
\bigskip
The proof of this lemma is very simple and can be obtained by integrating
both sides of the inequalities in conditions 1)--3) over the sets $\left[
x_{1},x_{2}\right] \times \left[ y_{1},y_{2}\right] \subset R_{1}$, $\left[
x_{1},x_{2}\right] \times \left[ y_{1},y_{2}\right] \subset R_{2}$ and $%
\left[ y_{1},y_{2}\right] \subset \left[ a_{2},b_{2}\right]$, respectively.
\bigskip
\textbf{Example 3.1.} Consider the function $f(x,y)=y\sin \pi x$ on the unit
square $K=\left[ 0,1\right] \times \left[ 0,1\right] $ and rectangles $K_{1}=%
\left[ 0,\frac{1}{2}\right] \times \left[ 0,1\right] ,K_{2}=\left[ \frac{1}{2%
},1\right] \times \left[ 0,1\right] $. It is not difficult to verify that
this function satisfies all conditions of the lemma and therefore belongs to
$V_{\frac{1}{2}}(K)$.
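The conditions of Lemma 3.1 for this function can also be verified mechanically. The short Python check below (informal; the grid and tolerances are arbitrary) confirms the sign pattern of $\frac{\partial ^{2}f}{\partial x\partial y}=\pi \cos \pi x$ and condition 3):

```python
import numpy as np

# f(x, y) = y*sin(pi*x); its mixed derivative pi*cos(pi*x) is
# >= 0 on K1 = [0, 1/2] x [0, 1] and <= 0 on K2 = [1/2, 1] x [0, 1].
xs = np.linspace(0.0, 1.0, 1001)
fxy = np.pi * np.cos(np.pi * xs)            # d^2 f / dx dy, independent of y
assert (fxy[xs <= 0.5] >= -1e-12).all()     # condition 1) on K1
assert (fxy[xs >= 0.5] <= 1e-12).all()      # condition 2) on K2
# condition 3): d f(0, y)/dy = sin(0) = 0 and d f(1, y)/dy = sin(pi) = 0
assert np.sin(0.0) <= np.sin(np.pi) + 1e-12
```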
\bigskip
\subsection{Construction of an extremal element}
The following theorem is valid.
\bigskip
\textbf{Theorem 3.1.}\ \textit{The approximation error of a function $f(x,y)$
from the class $V_{c}(R)$ can be calculated by the formula
\begin{equation*}
E(f,R)=L(f,R_{1})=\frac{1}{4}\left[
f(a_{1},a_{2})+f(c,b_{2})-f(a_{1},b_{2})-f(c,a_{2})\right] .
\end{equation*}%
Let $y_{0}$ be any solution from $\left[ a_{2},b_{2}\right] $ of the
equation
\begin{equation*}
L(f,Y)=\frac{1}{2}L(f,R_{1}),\;\;Y=\left[ a_{1},c\right] \times \left[
a_{2},y\right] .
\end{equation*}%
Then the function $\varphi _{0}(x)+\psi _{0}(y)$, where
\begin{equation*}
\varphi _{0}(x)=f(x,y_{0}),
\end{equation*}%
\begin{equation*}
\psi _{0}(y)=\frac{1}{2}\left[ f(a_{1},y)+f(c,y)-f(a_{1},y_{0})-f(c,y_{0})%
\right]
\end{equation*}%
is a best approximating sum from the manifold $D$ to $f$.}
\bigskip
To prove this theorem we need the following lemma.
\bigskip
\textbf{Lemma 3.2.}\ \textit{Let $f(x,y)$ be a function from $V_{c}(R)$ and $%
X=\left[ a_{1},x\right] \times \left[ y_{1},y_{2}\right] $ be a rectangle
with fixed $y_{1},y_{2}\in \left[ a_{2},b_{2}\right] $. Then the function $%
h(x)=L(f,X)$ has the properties: }
\textit{1) $h(x)\geq 0$, for any $x\in \left[ a_{1},b_{1}\right] $; }
\textit{2) $\max\limits_{\left[ a_{1},b_{1}\right] }h(x)=h(c)$ and $%
\min\limits_{\left[ a_{1},b_{1}\right] }h(x)=h(a_{1})=0$.}
\bigskip
\textbf{Proof.} If $X\subset R_{1}$, then the validity of $h(x)\geq 0$
follows from the definition of $V_{c}(R)$. If $X$ lies in $R$ but not in
$R_{1}$, then, denoting $X^{\prime }=\left[ x,b_{1}\right] \times \left[
y_{1},y_{2}\right] ,$ $S=X\cup X^{\prime }$, and using the obvious equality
\begin{equation*}
L(f,S)=L(f,X)+L(f,X^{\prime })
\end{equation*}
we deduce from the definition of $V_{c}(R)$ that $h(x)\geq 0$.
To prove the second part of the lemma, it is enough to show that $h(x)$
increases on the interval $\left[ a_{1},c\right] $ and decreases on the
interval $\left[ c,b_{1}\right] $. Indeed, if $a_{1}\leq x_{1}\leq x_{2}\leq
c$, then
\begin{equation*}
h(x_{2})=L(f,X_{2})=L(f,X_{1})+L(f,X_{12}),\eqno(3.1)
\end{equation*}%
where $X_{1}=\left[ a_{1},x_{1}\right] \times \left[ y_{1},y_{2}\right] ,$ $%
X_{2}=\left[ a_{1},x_{2}\right] \times \left[ y_{1},y_{2}\right] ,$ $X_{12}=%
\left[ x_{1},x_{2}\right] \times \left[ y_{1},y_{2}\right] $. Taking into
consideration that $L(f,X_{1})=h(x_{1})$ and $X_{12}$ lies in $R_{1}$ we
obtain from (3.1) that $h(x_{2})\geq h(x_{1})$. If $c\leq x_{1}\leq
x_{2}\leq b_{1}$, then $X_{12}$ lies in $R_{2}$ and we obtain from (3.1)
that $h(x_{2})\leq h(x_{1})$.
\bigskip
\textbf{Proof of Theorem 3.1.} It is obvious that $L(f,R_{1})=L(f-\varphi
-\psi ,R_{1})$ for each sum $\varphi (x)+\psi (y)$. Hence
\begin{equation*}
L(f,R_{1})\leq \left\Vert f-\varphi -\psi \right\Vert _{C(R_{1})}\leq
\left\Vert f-\varphi -\psi \right\Vert _{C(R)}.
\end{equation*}%
Since the sum $\varphi (x)+\psi (y)$ was arbitrary, $L(f,R_{1})\leq E(f,R)$. To
complete the proof it is sufficient to construct a sum $\varphi _{0}(x)+\psi
_{0}(y)$ for which the equality
\begin{equation*}
\left\Vert f-\varphi _{0}-\psi _{0}\right\Vert _{C(R)}=L(f,R_{1})\eqno(3.2)
\end{equation*}%
holds.
Consider the function
\begin{equation*}
g(x,y)=f(x,y)-f(x,a_{2})-f(a_{1},y)+f(a_{1},a_{2}).
\end{equation*}%
This function has the following obvious properties
1) $g(x,a_{2})=g(a_{1},y)=0$;
2) $L(f,R_{1})=L(g,R_{1})=\frac{1}{4}g(c,b_{2})$;
3) $E(f,R)=E(g,R)$;
4) The function of one variable $g(c,y)$ increases on the interval $\left[
a_{2},b_{2}\right] $.
The last property of $g$ allows us to write that
\begin{equation*}
0=g(c,a_{2})\leq \frac{1}{2}g(c,b_{2})\leq g(c,b_{2}).
\end{equation*}%
Since $g(x,y)$ is continuous, there exists at least one solution $y=y_{0}$
of the equation
\begin{equation*}
g(c,y)=\frac{1}{2}g(c,b_{2})
\end{equation*}%
or, in other notation, of the equation
\begin{equation*}
L(f,Y)=\frac{1}{2}L(f,R_{1}),\;\;\text{where}\;\;Y=\left[ a_{1},c\right]
\times \left[ a_{2},y\right] .
\end{equation*}%
Introduce the functions
\begin{equation*}
\varphi _{1}(x)=g(x,y_{0}),
\end{equation*}%
\begin{equation*}
\psi _{1}(y)=\frac{1}{2}\left( g(c,y)-g(c,y_{0})\right) ,
\end{equation*}%
\begin{equation*}
G(x,y)=g(x,y)-\varphi _{1}(x)-\psi _{1}(y).
\end{equation*}%
Calculate the norm of $G(x,y)$ on $R$. Consider the rectangles $R^{\prime }=%
\left[ a_{1},b_{1}\right] \times \left[ y_{0},b_{2}\right] $ and $R^{\prime
\prime }=\left[ a_{1},b_{1}\right] \times \left[ a_{2},y_{0}\right] $. It is
clear that
\begin{equation*}
\left\Vert G\right\Vert _{C(R)}=\max \left\{ \left\Vert G\right\Vert
_{C(R^{\prime })},\left\Vert G\right\Vert _{C(R^{\prime \prime })}\right\} .
\end{equation*}%
First calculate the norm $\left\Vert G\right\Vert _{C(R^{\prime })}$:
\begin{equation*}
\left\Vert G\right\Vert _{C(R^{\prime })}=\max\limits_{(x,y)\in R^{\prime
}}\left\vert G(x,y)\right\vert =\max\limits_{y\in \left[ y_{0},b_{2}\right]
}\max\limits_{x\in \left[ a_{1},b_{1}\right] }\left\vert G(x,y)\right\vert .%
\eqno(3.3)
\end{equation*}%
For a fixed point $y$ (we keep it fixed until (3.6)) from the interval $%
\left[ y_{0},b_{2}\right] $ we can write that
\begin{equation*}
\max\limits_{x\in \left[ a_{1},b_{1}\right] }G(x,y)=\max\limits_{x\in \left[
a_{1},b_{1}\right] }\left( g(x,y)-g(x,y_{0})\right) -\psi _{1}(y)\eqno(3.4)
\end{equation*}%
and
\begin{equation*}
\min\limits_{x\in \left[ a_{1},b_{1}\right] }G(x,y)=\min\limits_{x\in \left[
a_{1},b_{1}\right] }\left( g(x,y)-g(x,y_{0})\right) -\psi _{1}(y).\eqno(3.5)
\end{equation*}%
By Lemma 3.2, the function
\begin{equation*}
h_{1}(x)=4L(f,X)=g(x,y)-g(x,y_{0}),\;\;\mbox{where}\;\;X=\left[ a_{1},x%
\right] \times \left[ y_{0},y\right] ,
\end{equation*}%
attains its maximum at $x=c$ and its minimum at $x=a_{1}$:
\begin{equation*}
\max\limits_{x\in \left[ a_{1},b_{1}\right] }h_{1}(x)=g(c,y)-g(c,y_{0})
\end{equation*}%
\begin{equation*}
\min\limits_{x\in \left[ a_{1},b_{1}\right]
}h_{1}(x)=g(a_{1},y)-g(a_{1},y_{0})=0.
\end{equation*}%
Considering these facts in (3.4) and (3.5) we obtain that
\begin{equation*}
\max\limits_{x\in \left[ a_{1},b_{1}\right] }G(x,y)=g(c,y)-g(c,y_{0})-\psi
_{1}(y)=\frac{1}{2}\left( g(c,y)-g(c,y_{0})\right) ,
\end{equation*}%
\begin{equation*}
\min\limits_{x\in \left[ a_{1},b_{1}\right] }G(x,y)=-\psi _{1}(y)=-\frac{1}{2%
}\left( g(c,y)-g(c,y_{0})\right) .
\end{equation*}%
Consequently,
\begin{equation*}
\max\limits_{x\in \left[ a_{1},b_{1}\right] }\left\vert G(x,y)\right\vert =%
\frac{1}{2}\left( g(c,y)-g(c,y_{0})\right) .\eqno(3.6)
\end{equation*}%
Taking (3.6) and the fourth property of $g$ into account in (3.3) yields
\begin{equation*}
\left\Vert G\right\Vert _{C(R^{\prime })}=\frac{1}{2}\left(
g(c,b_{2})-g(c,y_{0})\right) =\frac{1}{4}g(c,b_{2}).
\end{equation*}%
Similarly it can be shown that
\begin{equation*}
\left\Vert G\right\Vert _{C(R^{\prime \prime })}=\frac{1}{4}g(c,b_{2}).
\end{equation*}%
Hence
\begin{equation*}
\left\Vert G\right\Vert _{C(R)}=\frac{1}{4}g(c,b_{2})=L(f,R_{1}).
\end{equation*}%
But by the definition of $G$,
\begin{equation*}
G(x,y)=g(x,y)-\varphi _{1}(x)-\psi _{1}(y)=f(x,y)-\varphi _{0}(x)-\psi
_{0}(y),
\end{equation*}%
where
\begin{equation*}
\varphi _{0}(x)=\varphi
_{1}(x)+f(x,a_{2})-f(a_{1},a_{2})+f(a_{1},y_{0})=f(x,y_{0}),
\end{equation*}%
\begin{equation*}
\psi _{0}(y)=\psi _{1}(y)+f(a_{1},y)-f(a_{1},y_{0})=
\end{equation*}%
\begin{equation*}
=\frac{1}{2}\left( f(a_{1},y)+f(c,y)-f(a_{1},y_{0})-f(c,y_{0})\right) .
\end{equation*}%
Therefore,
\begin{equation*}
\left\Vert f-\varphi _{0}-\psi _{0}\right\Vert _{C(R)}=L(f,R_{1}).
\end{equation*}%
We proved (3.2) and hence Theorem 3.1. Note that the function $\varphi
_{0}(x)+\psi _{0}(y)$ is a best approximating sum from the manifold ${D}$ to
$f$.
\bigskip
\textbf{Remark 3.1.} In the special case $c=b_{1}$, Theorem 3.1 turns into
Babaev's result from \cite{6}.
\bigskip
\textbf{Corollary 3.1.} \textit{Let a function $f(x,y)$ have the continuous
derivative $\frac{\partial ^{2}f}{\partial x\partial y}$ on the rectangle $R$
and satisfy the following conditions }
\textit{1) $\frac{\partial ^{2}f}{\partial x\partial y}\geq 0$, for all $%
(x,y)\in R_{1}$; }
\textit{2) $\frac{\partial ^{2}f}{\partial x\partial y}\leq 0$, for all $%
(x,y)\in R_{2}$; }
\textit{3) $\frac{df(a_{1},y)}{dy}\leq \frac{df(b_{1},y)}{dy}$, for all $%
y\in \left[ a_{2},b_{2}\right] $. }
\textit{Then}
\begin{equation*}
E(f,R)=L(f,R_{1})=\frac{1}{4}\left[
f(a_{1},a_{2})+f(c,b_{2})-f(a_{1},b_{2})-f(c,a_{2})\right] .
\end{equation*}
The proof of this corollary can be obtained directly from Lemma 3.1 and
Theorem 3.1.
\bigskip
\textbf{Remark 3.2.} Rivlin and Sibner \cite{121} proved Corollary 3.1 in
the special case $c=b_{1}$.
\bigskip
\textbf{Example 3.2.} As we know (see Example 3.1) the function $f=y\sin \pi
x$ belongs to $V_{\frac{1}{2}}(K)$, where $K=\left[ 0,1\right] \times \left[
0,1\right] $. By Theorem 3.1, $E(f,K)=\frac{1}{4}$ and the function $\frac{1%
}{2}\sin \pi x+\frac{1}{2}y-\frac{1}{4}$ is a best approximating sum.
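This value is easy to confirm numerically: the residual $f-\varphi _{0}-\psi _{0}$ factors as $(y-\frac{1}{2})(\sin \pi x-\frac{1}{2})$, whose sup norm on $K$ is $\frac{1}{4}$. A Python spot check (the grid size is arbitrary):

```python
import numpy as np

f   = lambda x, y: y * np.sin(np.pi * x)
phi = lambda x: 0.5 * np.sin(np.pi * x)      # phi_0(x) = f(x, 1/2)
psi = lambda y: 0.5 * y - 0.25               # psi_0 from Theorem 3.1

t = np.linspace(0.0, 1.0, 501)
X, Y = np.meshgrid(t, t)
residual = np.abs(f(X, Y) - phi(X) - psi(Y)).max()
# the sup norm of f - phi_0 - psi_0 should equal E(f, K) = 1/4
assert abs(residual - 0.25) < 1e-4
```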
\bigskip
The following theorem shows that in some cases the approximation error
formula in Theorem 3.1 is valid for more general sets than rectangles with
sides parallel to the coordinate axes.
\bigskip
\textbf{Theorem 3.2.} \textit{Let $f(x,y)$ be a function from $V_{c}(R)$ and
$Q\subset R$ be a compact set which contains all vertices of $R_{1}$ (points
$(a_{1},a_{2}),(a_{1},b_{2}),(c,a_{2}),(c,b_{2})$). Then}
\begin{equation*}
E(f,Q)=L(f,R_{1})=\frac{1}{4}\left[
f(a_{1},a_{2})+f(c,b_{2})-f(a_{1},b_{2})-f(c,a_{2})\right] .
\end{equation*}
\textbf{Proof.} Since $Q\subset R,$ $E(f,Q)\leq E(f,R)$. On the other hand,
by Theorem 3.1, $E(f,R)=L(f,R_{1})$. Hence $E(f,Q)\leq L(f,R_{1})$. As in
the proof of Theorem 3.1, one can show that $L(f,R_{1})\leq E(f,Q)$.
Therefore, $E(f,Q)=L(f,R_{1})$.
\bigskip
\textbf{Example 3.3.} Calculate the approximation error of the function $%
f(x,y)=-(x-2)^{2n}y^{m}$ ($n$ and $m$ are positive integers) on the domain
\begin{equation*}
Q=\left\{ (x,y):0\leq x\leq 2,0\leq y\leq (x-1)^{2}+1\right\} .
\end{equation*}%
It can be easily verified that $f \in V_{2}(R)$, where $R=\left[ 0,4\right]
\times \left[ 0,2\right] $. Besides, $Q$ contains all vertices of $R_{1}=%
\left[ 0,2\right] \times \left[ 0,2\right] $. Consequently, by Theorem 3.2, $%
E(f,Q)=L(f,R_{1})=2^{2(n-1)+m}$.
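The stated value follows from $L(f,R_{1})=\frac{1}{4}\left[ 0+0-(-2^{2n}\cdot 2^{m})-0\right] =2^{2(n-1)+m}$. A quick arithmetic check in Python over small $n,m$ (illustrative only):

```python
def f(x, y, n, m):
    # the function of Example 3.3
    return -((x - 2) ** (2 * n)) * (y ** m)

for n in range(1, 4):
    for m in range(1, 4):
        # R1 = [0, 2] x [0, 2]; L is one quarter of the mixed difference
        L = (f(0, 0, n, m) + f(2, 2, n, m)
             - f(0, 2, n, m) - f(2, 0, n, m)) / 4
        assert L == 2 ** (2 * (n - 1) + m)
```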
\bigskip
\subsection{Characterization of $V_{c}(R)$}
The following theorem characterizes the class $V_{c}(R)$ in terms of the
approximation error calculation formulas.
\bigskip
\textbf{Theorem 3.3.} \textit{The following conditions are necessary and
sufficient for a continuous function $f(x,y)$ to belong to $V_{c}(R):$ }
\textit{1) $E(f,S)=L(f,S)$, for each rectangle $S=\left[ x_{1},x_{2}\right]
\times \left[ y_{1},y_{2}\right] ,S\subset R_{1}$; }
\textit{2) $E(f,S)=-L(f,S)$, for each rectangle $S=\left[ x_{1},x_{2}\right]
\times \left[ y_{1},y_{2}\right] ,S\subset R_{2}$; }
\textit{3) $E(f,S)=L(f,S_{1})$, for each rectangle $S=\left[ a_{1},b_{1}%
\right] \times \left[ y_{1},y_{2}\right] ,S\subset R$ and $S_{1}=\left[
a_{1},c\right] \times \left[ y_{1},y_{2}\right] $.}
\bigskip
\textbf{Proof.} The necessity easily follows from the definition of $V_{c}(R)
$, Babaev's above-mentioned result (see Section 3.1.1) and Theorem 3.1. The
sufficiency is clear if one notes that $E(f,S)\geq 0 $.
\bigskip
\subsection{Classes $V_{c}^{-}(R)$, $U_{c}(R)$ and $U_{c}^{-}(R)$}
By $V_{c}^{-}(R)$ we denote the class of functions $f(x,y)$ such that $-f
\in V_{c}(R)$. It is clear that $E(f,R)=-L(f,R_{1})$ for each $f\in
V_{c}^{-}(R)$.
We define $U_{c}(R)$, $a_{1}\leq c< b_{1}$, as the class of continuous functions $%
f(x,y)$ with the properties
1) $L(f,S)\leq 0$, for each rectangle $S=\left[ x_{1},x_{2}\right] \times %
\left[ y_{1},y_{2}\right] ,\;S\subset R_{1};$
2) $L(f,S)\geq 0$, for each rectangle $S=\left[ x_{1},x_{2}\right] \times %
\left[ y_{1},y_{2}\right] ,\;S\subset R_{2};$
3) $L(f,S)\geq 0$, for each rectangle $S=\left[ a_{1},b_{1}\right] \times %
\left[ y_{1},y_{2}\right] ,\;S\subset R.$
Using the same techniques as in the proof of Theorem 3.1, it can be shown
that the following theorem is valid:
\bigskip
\textbf{Theorem 3.4.} \textit{The approximation error of a function $f(x,y)$
from the class $U_{c}(R)$ can be calculated by the formula
\begin{equation*}
E(f,R)=L(f,R_{2})=\frac{1}{4}\left[
f(c,a_{2})+f(b_{1},b_{2})-f(c,b_{2})-f(b_{1},a_{2})\right] .
\end{equation*}%
Let $y_{0}$ be any solution from $\left[ a_{2},b_{2}\right] $ of the
equation
\begin{equation*}
L(f,Y)=\frac{1}{2}L(f,R_{2}),\qquad Y=\left[ c,b_{1}\right] \times \left[
a_{2},y\right] .
\end{equation*}%
Then the function $\varphi _{0}(x)+\psi _{0}(y)$, where
\begin{equation*}
\varphi _{0}(x)=f(x,y_{0}),\quad \psi _{0}(y)=\frac{1}{2}\left[
f(c,y)+f(b_{1},y)-f(c,y_{0})-f(b_{1},y_{0})\right] ,
\end{equation*}%
is a best approximating sum from the manifold $D$ to $f$. }
\bigskip
By $U_{c}^{-}(R)$ we denote the class of functions $f(x,y)$ such that $-f \in
U_{c}(R)$. It is clear that $E(f,R)=-L(f,R_{2})$ for each $f\in U_{c}^{-}(R)$.
\bigskip
\textbf{Remark 3.3.} The correspondingly modified versions of Theorems 3.2,
3.3 and Corollary 3.1 are valid for the classes $V_{c}^{-}(R),U_{c}(R)$ and $%
U_{c}^{-}(R)$.
\bigskip
\textbf{Example 3.4.} Consider the function $f(x,y)=\left( x-\frac{1}{2}%
\right) ^{2}y$ on the unit square $K=\left[ 0,1\right] \times \left[ 0,1%
\right] $. It can be easily verified that $f\in U_{\frac{1}{2}}(K)$. Hence,
by Theorem 3.4, $E(f,K)=\frac{1}{16}$ and the function $\frac{1}{2}\left( x-%
\frac{1}{2}\right) ^{2}+\frac{1}{8}y-\frac{1}{16}$ is a best approximating
sum.
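Here, too, the residual factors: $f-\varphi _{0}-\psi _{0}=\left( (x-\frac{1}{2})^{2}-\frac{1}{8}\right) (y-\frac{1}{2})$, with sup norm $\frac{1}{16}$ on $K$. A Python spot check (the grid size is arbitrary):

```python
import numpy as np

f   = lambda x, y: (x - 0.5) ** 2 * y
phi = lambda x: 0.5 * (x - 0.5) ** 2         # phi_0(x) = f(x, 1/2)
psi = lambda y: y / 8 - 1 / 16               # psi_0 from Theorem 3.4

t = np.linspace(0.0, 1.0, 501)
X, Y = np.meshgrid(t, t)
residual = np.abs(f(X, Y) - phi(X) - psi(Y)).max()
# the sup norm of f - phi_0 - psi_0 should equal E(f, K) = 1/16
assert abs(residual - 1 / 16) < 1e-4
```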
\bigskip
\section{Approximation by sums of univariate functions on certain domains}
The purpose of this section is to develop a method for obtaining explicit
formulas for the error of approximation of bivariate functions by sums of
univariate functions. It should be remarked that formulas of this type were
known only for functions defined on a rectangle with sides parallel to the
coordinate axes. Our method, based on a maximization process over closed
bolts, allows the consideration of functions defined on hexagons, octagons
and staircase polygons with sides parallel to the coordinate axes.
\subsection{Problem statement}
Let $Q$ be a compact set in $\mathbb{R}^2$. Consider the approximation of a
continuous function $f \in C(Q)$ by functions from the set $D=\left\{
\varphi (x)+\psi (y)\right\} ,$ where $\varphi (x),\psi (y)$ are defined and
continuous on the projections of $Q$ into the coordinate axes $x$ and $y$,
respectively. The approximation error is defined as follows
\begin{equation*}
E(f,Q)=\inf\limits_{\varphi+\psi \in D}\left\Vert f-\varphi -\psi
\right\Vert _{C(Q)}.
\end{equation*}%
Our purpose is to develop a method for obtaining explicit formulas providing
precise and easy computation of $E(f,Q)$ for polygons $Q$ with sides
parallel to the coordinate axes. This method will be based on the herein
developed \textit{closed bolts maximization process} and can be used in
alternative proofs of the known results from \cite{6}, \cite{59} and \cite%
{121}. First, we show efficiency of the method in the example of a hexagon
with sides parallel to the coordinate axes. Then we formulate an analogous
theorem for staircase polygons and two theorems for octagons, which can be
proved in a similar way, and touch some aspects of the question about the
case of an arbitrary polygon with sides parallel to the coordinate axes. The
condition posed on sides of polygons (being parallel to the coordinate axes)
is essential for our method. There are several reasons for this, which become
clear through the proof of Theorem 3.5. Here we can explain one of them:
by \cite[Theorem 3]{34}, a continuous function $f(x,y)$ defined
on a polygon with sides parallel to the coordinate axes has an extremal
element, the existence of which is required in our method. Now let $K$ be a
rectangle (not speaking about polygons) with sides not parallel to the
coordinate axes. Does every function $f\in C(K)$ have an extremal element? This question remains open (see \cite{34}).
In the sequel, all the considered polygons are supposed to have sides
parallel to the coordinate axes.
\bigskip
\subsection{The maximization process}
Let $H$ be a closed hexagon. It is clear that $H$ can be uniquely
represented in the form
\begin{equation*}
H=R_{1}\cup R_{2},\eqno(3.7)
\end{equation*}%
where $R_{1},R_{2}$ are rectangles and there does not exist any rectangle $R$
such that $R_{1}\subset R\subset H$ or $R_{2}\subset R\subset H$.
We associate each closed bolt $p=\left\{ p_{1},p_{2},\cdots p_{2n}\right\} $
with the following functional
\begin{equation*}
l(f,p)=\frac{1}{2n}\sum\limits_{k=1}^{2n}(-1)^{k-1}f(p_{k}).
\end{equation*}
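As a concrete illustration (our own sketch, not part of the formal development), the functional $l(f,p)$ is easy to evaluate numerically; the Python fragment below, with names of our own choosing, computes it for the closed bolt formed by the vertices of a rectangle.

```python
# Illustrative sketch (ours): evaluate the alternating functional l(f, p)
# for a closed bolt p = (p_1, ..., p_{2n}).

def bolt_functional(f, bolt):
    """l(f, p) = (1/2n) * sum_{k=1}^{2n} (-1)^(k-1) f(p_k)."""
    m = len(bolt)  # m = 2n must be even for a closed bolt
    assert m % 2 == 0, "a closed bolt has an even number of points"
    return sum((-1) ** k * f(x, y) for k, (x, y) in enumerate(bolt)) / m

# The closed bolt formed by the vertices of the rectangle [0,1] x [0,2],
# traversed so that consecutive points share a coordinate:
r = [(0, 0), (0, 2), (1, 2), (1, 0)]
f = lambda x, y: x * y
print(bolt_functional(f, r))  # (1/4)(0 - 0 + 2 - 0) = 0.5
```

Note that $l(\varphi(x)+\psi(y),p)=0$ for every closed bolt $p$, since each coordinate value occurs equally often with positive and negative sign; this is why the functional is useful for lower bounds on the approximation error.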
Denote by $M(H)$ the class of bivariate continuous functions $f$ on $H$
satisfying the condition
\begin{equation*}
f(x_{1},y_{1})+f(x_{2},y_{2})-f(x_{1},y_{2})-f(x_{2},y_{1})\geq 0
\end{equation*}
for any rectangle $\left[ x_{1},x_{2}\right] \times \left[ y_{1},y_{2}\right]
\subset H.$\newline
\textbf{Theorem 3.5.}\ \textit{Let $H$ be a hexagon and (3.7) be its
representation. Let $f \in M(H)$. Then
\begin{equation*}
E(f,H)=\max \left\{ \left\vert l(f,h)\right\vert ,\left\vert
l(f,r_{1})\right\vert ,\left\vert l(f,r_{2})\right\vert \right\} ,\eqno(3.8)
\end{equation*}%
where $h,r_{1},r_{2}$ are closed bolts formed by vertices of the polygons $%
H,R_{1},R_{2}$ respectively. }
\bigskip
\begin{proof} Without loss of generality, we may assume that the rectangles $%
R_{1}$ and $R_{2}$ are of the following form
\begin{equation*}
R_{1}=\left[ a_{1},a_{2}\right] \times \left[ b_{1},b_{3}\right] ,~\ R_{2}=%
\left[ a_{1},a_{3}\right] \times \left[ b_{1},b_{2}\right] ,~\
a_{1}<a_{2}<a_{3},\;b_{1}<b_{2}<b_{3}.
\end{equation*}%
Introduce the notation
\begin{equation*}
\begin{array}{c}
f_{11}=f\left( a_{1},b_{1}\right) ,~\ f_{12}=-f\left( a_{1},b_{2}\right)
,\;f_{13}=-f\left( a_{1},b_{3}\right) ; \\
f_{21}=-f\left( a_{2},b_{1}\right) ,\;f_{22}=-f\left( a_{2},b_{2}\right) ,~\
f_{23}=f\left( a_{2},b_{3}\right) ; \\
f_{31}=-f\left( a_{3},b_{1}\right) ,\;f_{32}=f\left( a_{3},b_{2}\right) .%
\end{array}%
\eqno(3.9)
\end{equation*}%
It is clear that
\begin{equation*}
\begin{array}{c}
\left\vert l(f,r_{1})\right\vert =\dfrac{1}{4}\left(
f_{11}+f_{13}+f_{23}+f_{21}\right) , \\
\left\vert l(f,r_{2})\right\vert =\dfrac{1}{4}\left(
f_{11}+f_{12}+f_{32}+f_{31}\right) , \\
\left\vert l(f,h)\right\vert =\dfrac{1}{6}\left(
f_{11}+f_{13}+f_{23}+f_{22}+f_{32}+f_{31}\right) .%
\end{array}%
\eqno(3.10)
\end{equation*}
Let $p=\left\{ p_{1},p_{2},\cdots p_{2n}\right\} $ be any closed bolt. We
group the points $p_{1},p_{2},\cdots p_{2n}$ by putting
\begin{equation*}
p_{+}=\left\{ p_{1},p_{3},\cdots p_{2n-1}\right\} ,\;p_{-}=\left\{
p_{2},p_{4},\cdots p_{2n}\right\}.
\end{equation*}
First, assume that $l(f,p)\geq 0$. We apply the following algorithm, which we call \textit{the maximization process over closed bolts}, to $p$.
\textbf{Step 1.} Consider sequentially the units $p_ip_{i+1}$ $\left(i=\overline{%
1,2n}, p_{2n+1}=p_1\right)$ with the vertices $p_{i}\left(
x_{i},y_{i}\right) ,~\ p_{i+1}\left( x_{i+1},y_{i+1}\right) $ having equal
abscissae: $x_{i}=x_{i+1}$. Four cases are possible.
1) $p_{i}\in p_{+}$ and $y_{i+1}>y_{i}$. In this case, replace the unit $%
p_{i}p_{i+1}$ by a new unit $q_{i}q_{i+1}$ with the vertices $%
q_{i}=(a_{1},y_{i}),\;\ q_{i+1}=(a_{1},y_{i+1})$.
2) $p_{i}\in p_{+}$ and $y_{i+1}<y_{i}$. In this case, replace the unit $%
p_{i}p_{i+1}$ by a new unit $q_{i}q_{i+1}$ with the vertices $q_{i}=\left(
a_{2},y_{i}\right) ,\;q_{i+1}=(a_{2},y_{i+1})$ if $b_{2}<y_{i}\leq b_{3}$ or
with the vertices $q_{i}=\left( a_{3},y_{i}\right),
q_{i+1}=(a_{3},y_{i+1})$ if $b_{1}\leq y_{i}\leq b_{2}$.
3) $p_{i}\in p_{-}$ and $y_{i+1}<y_{i}$. In this case, replace $p_{i}p_{i+1}$
by a new unit $q_{i}q_{i+1}$ with the vertices $%
q_{i}=(a_{1},y_{i}),~q_{i+1}=(a_{1},y_{i+1})$.
4) $p_{i}\in p_{-}$ and $y_{i+1}>y_{i}$. In this case, replace $p_{i}p_{i+1}$
by a new unit $q_{i}q_{i+1}$ with the vertices $%
q_{i}=(a_{2},y_{i}),~q_{i+1}=(a_{2},y_{i+1})$\ if\ $b_{2}<y_{i+1}\leq b_{3}$
or with the vertices $q_{i}=(a_{3},y_{i}),~q_{i+1}=(a_{3},y_{i+1})$ if $%
b_{1}\leq y_{i+1}\leq b_{2}$.
Since $f\in M(H)$, it is not difficult to verify that
\begin{equation*}
\begin{array}{c}
f(p_{i})-f(p_{i+1})\leq f(q_{i})-f(q_{i+1})\ \ \mbox{for cases 1)
and 2)}, \\
-f(p_{i})+f(p_{i+1})\leq -f(q_{i})+f(q_{i+1})\ \ \mbox{for cases 3) and 4)}%
\end{array}%
\eqno(3.11)
\end{equation*}
It is clear that after Step 1 the bolt $p$ will be replaced by the ordered set $%
q=\left\{ q_{1},q_{2},\cdots ,q_{2n}\right\} $. We say an ordered set rather
than a bolt because some successive points $q_{i}, q_{i+1}$ may coincide
(this may happen, for example, if case 1) takes place for both of the units
$p_{i-1}p_{i}$ and $p_{i+1}p_{i+2}$). Excluding from $q$ the points that are
simultaneously successive and coincident, we obtain a closed bolt, which we
denote by $q^{\prime }=\left\{ q_{1}^{\prime },q_{2}^{\prime },\cdots
,q_{2m}^{\prime }\right\}$. It is not difficult to
understand that all points of the bolt $q^{\prime}$ are located on straight
lines $x=a_{1},~x=a_{2},~x=a_{3}$.
From inequalities (3.11) and the fact that $2m\leq 2n,$ we deduce that
\begin{equation*}
l(f,p)\leq l(f,q^{\prime }).\eqno(3.12)
\end{equation*}
\textbf{Step 2.} Consider sequentially units $q_{i}^{\prime }q_{i+1}^{\prime
}\;\left( i=\overline{1,2m},q_{2m+1}^{\prime }=q_{1}^{\prime }\right) $ with
the vertices $q_{i}^{\prime }=\left( x_{i}^{\prime },y_{i}^{\prime }\right)
,~\ q_{i+1}^{\prime }\left( x_{i+1}^{\prime },y_{i+1}^{\prime }\right) $
having equal ordinates: $y_{i}^{\prime }=y_{i+1}^{\prime }$. The following
four cases are possible.
1) $q_{i}^{\prime }\in q_{+}^{\prime }$ and $x_{i+1}^{\prime }>x_{i}^{\prime
}$. In this case, replace the unit $q_{i}^{\prime }q_{i+1}^{\prime }$ by a
new unit $p_{i}^{\prime }p_{i+1}^{\prime }$ with the vertices $p_{i}^{\prime
}=\left( x_{i}^{\prime },b_{1}\right) ,~\ p_{i+1}^{\prime }=\left(
x_{i+1}^{\prime },b_{1}\right) $.
2) $q_{i}^{\prime }\in q_{+}^{\prime }$ and $x_{i+1}^{\prime }<x_{i}^{\prime
}$. In this case, replace the unit $q_{i}^{\prime }q_{i+1}^{\prime }$ by a
new unit $p_{i}^{\prime }p_{i+1}^{\prime }$ with the vertices $p_{i}^{\prime
}=\left( x_{i}^{\prime },b_{2}\right) ,$ $\ p_{i+1}^{\prime }=\left(
x_{i+1}^{\prime },b_{2}\right) $ if $x_{i}^{\prime }=a_{3}$ and with the
vertices $p_{i}^{\prime }=\left( x_{i}^{\prime },b_{3}\right) ,$ $%
p_{i+1}^{\prime }=\left( x_{i+1}^{\prime },b_{3}\right) $ if $x_{i}^{\prime
}=a_{2}$.
3) $q_{i}^{\prime }\in q_{-}^{\prime }$ and $x_{i+1}^{\prime }<x_{i}^{\prime
}$. In this case, replace $q_{i}^{\prime }q_{i+1}^{\prime }$ by a new unit $%
p_{i}^{\prime }p_{i+1}^{\prime }$ with the vertices $p_{i}^{\prime }=\left(
x_{i}^{\prime },b_{1}\right) ,~\ p_{i+1}^{\prime }=\left( x_{i+1}^{\prime
},b_{1}\right)$.
4) $q_{i}^{\prime }\in q_{-}^{\prime }$ and $x_{i+1}^{\prime }>x_{i}^{\prime
}$. In this case, replace $q_{i}^{\prime }q_{i+1}^{\prime }$ by a new unit $%
p_{i}^{\prime }p_{i+1}^{\prime }$ with the vertices $p_{i}^{\prime }=\left(
x_{i}^{\prime },b_{2}\right) ,~\ p_{i+1}^{\prime }=\left( x_{i+1}^{\prime
},b_{2}\right)$ if $x_{i+1}^{\prime }=a_{3}$ and with the vertices $%
p_{i}^{\prime }=\left( x_{i}^{\prime },b_{3}\right) ,~\ p_{i+1}^{\prime
}=\left( x_{i+1}^{\prime },b_{3}\right) $ if $x_{i+1}^{\prime }=a_{2}$.
It is easy to see that after Step 2 the bolt $q^{\prime }$ will be replaced by
the bolt $p^{\prime }=\left\{ p_{1}^{\prime },p_{2}^{\prime },\cdots
p_{2m}^{\prime }\right\} $ and
\begin{equation*}
l(f,q^{\prime })\leq l(f,p^{\prime }).\eqno(3.13)
\end{equation*}
From (3.12) and (3.13) we obtain that
\begin{equation*}
l(f,p)\leq l(f,p^{\prime }).\eqno(3.14)
\end{equation*}
It is clear that each point of the set $p_{+}^{\prime }$ coincides with one
of the points $\left( a_{1},b_{1}\right) ,~\left( a_{2},b_{3}\right) ,$ $%
\left( a_{3},b_{2}\right) $ and each point of the set $p_{-}^{\prime }$
coincides with one of the points $\left( a_{1},b_{2}\right) ,~\left(
a_{1},b_{3}\right) ,$ $~\left( a_{2},b_{1}\right) ,~\left(
a_{2},b_{2}\right) ,~\left( a_{3},b_{1}\right) .$ Denote by $m_{ij}$ the
number of points of the bolt $p^{\prime }$ coinciding with the point $\left(
a_{i},b_{j}\right) ,~i,j=\overline{1,3},~i+j\neq 6$. By (3.9), we can write that
\begin{equation*}
l(f,p^{\prime })=\frac{1}{2m}\sum\limits_{\substack{ i,j=\overline{1,3} \\ %
i+j\leq 5}}m_{ij}f_{ij}.\eqno(3.15)
\end{equation*}
On the straight line $x=a_{i}~\ $or $\ y=b_{i},~i=\overline{1,3}$, the
number of points of the set $p_{+}^{\prime }$ is equal to the number of
points of the set $p_{-}^{\prime }$. Hence
\begin{equation*}
m_{11}=m_{12}+m_{13}=m_{21}+m_{31};\ m_{23}=m_{22}+m_{21}=m_{13};\
m_{32}=m_{31}=m_{12}+m_{22}.
\end{equation*}%
From these equalities we deduce that
\begin{equation*}
m_{11}=m_{12}+m_{21}+m_{22};\ m_{13}=m_{21}+m_{22};\ m_{23}=m_{21}+m_{22};\
m_{31}=m_{12}+m_{22}.\eqno(3.16)
\end{equation*}%
Consequently,
\begin{equation*}
2m=\sum\limits_{\substack{ i,j=\overline{1,3} \\ i+j\leq 5}}%
m_{ij}=4m_{12}+4m_{21}+6m_{22}.\eqno(3.17)
\end{equation*}%
Considering (3.16) and (3.17) in (3.15) and taking (3.10) into account, we
obtain that
\begin{equation*}
l(f,p^{\prime })=\dfrac{4m_{12}\left\vert l(f,r_{2})\right\vert
+4m_{21}\left\vert l(f,r_{1})\right\vert +6m_{22}\left\vert
l(f,h)\right\vert }{4m_{12}+4m_{21}+6m_{22}}
\end{equation*}%
\begin{equation*}
\leq \max \left\{ \left\vert l(f,r_{1})\right\vert ,\left\vert
l(f,r_{2})\right\vert ,\left\vert l(f,h)\right\vert \right\} .
\end{equation*}%
Therefore, due to (3.14),
\begin{equation*}
l(f,p)\leq \max \left\{ \left\vert l(f,r_{1})\right\vert ,\left\vert
l(f,r_{2})\right\vert ,\left\vert l(f,h)\right\vert \right\} .\eqno(3.18)
\end{equation*}
Note that at the beginning of the proof the bolt $p$ was chosen so that
$l(f,p)\geq 0$. Now let $p=\left\{ p_{1},p_{2},\cdots ,p_{2n}\right\} $ be
any closed bolt such that $l(f,p)\leq 0$. Since $l(f,p^{\prime
\prime })=-l(f,p)\geq 0$ for the bolt $p^{\prime \prime }=\left\{
p_{2},p_{3},\cdots ,p_{2n},p_{1}\right\} $, we obtain from (3.18) that
\begin{equation*}
-l(f,p)\leq \max \left\{ \left\vert l(f,r_{1})\right\vert ,\left\vert
l(f,r_{2})\right\vert ,\left\vert l(f,h)\right\vert \right\} .\eqno(3.19)
\end{equation*}%
Since the closed bolt $p$ is arbitrary, we deduce from (3.18) and (3.19)
that
\begin{equation*}
\sup\limits_{p\subset H}\left\{ \left\vert l(f,p)\right\vert \right\} =\max
\left\{ \left\vert l(f,r_{1})\right\vert ,\left\vert l(f,r_{2})\right\vert
,\left\vert l(f,h)\right\vert \right\} ,\eqno(3.20)
\end{equation*}%
where the $\sup$ is taken over all closed bolts of the hexagon $H$.
The hexagon $H$ satisfies the conditions of Theorem 1.10 on the
existence of a best approximation. By \cite[Theorem 2]{79} (see Section 3.3), we obtain that
\begin{equation*}
E(f,H)=\sup\limits_{p\subset H}\left\{ \left\vert l(f,p)\right\vert \right\}
.\eqno(3.21)
\end{equation*}%
From (3.20) and (3.21) we finally conclude that
\begin{equation*}
E(f,H)=\max \left\{ \left\vert l(f,r_{1})\right\vert ,\left\vert
l(f,r_{2})\right\vert ,\left\vert l(f,h)\right\vert \right\} .
\end{equation*}%
\end{proof}
\textbf{Corollary 3.2.}\ \textit{Let a function $f(x,y)$ have the continuous
nonnegative derivative $\dfrac{\partial ^{2}f}{\partial x\partial y}$ on $H$%
. Then the formula (3.8) is valid.}
\bigskip
The proof is very simple and can be obtained by integrating the inequality
\linebreak $\dfrac{\partial ^{2}f}{\partial x\partial y}\!\geq \!0$ over an
arbitrary rectangle $\left[ x_{1},x_{2}\right] \times \left[ y_{1},y_{2}%
\right] \subset H$ and applying Theorem 3.5.
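To illustrate (our own numerical sketch, not from the text), take the hexagon $H=([0,1]\times[0,2])\cup([0,2]\times[0,1])$ and $f(x,y)=xy$, which has $\frac{\partial^2 f}{\partial x\partial y}=1\geq 0$, so Corollary 3.2 applies and formula (3.8) gives the error directly:

```python
# Our own numerical sketch of formula (3.8): H is the union of
# R1 = [0,1] x [0,2] and R2 = [0,2] x [0,1], and f(x, y) = x*y
# has f_xy = 1 >= 0, so Corollary 3.2 applies.

def l(f, bolt):
    # the alternating-sum functional over a closed bolt
    return sum((-1) ** k * f(x, y) for k, (x, y) in enumerate(bolt)) / len(bolt)

f = lambda x, y: x * y
h  = [(0, 0), (0, 2), (1, 2), (1, 1), (2, 1), (2, 0)]  # vertices of H
r1 = [(0, 0), (0, 2), (1, 2), (1, 0)]                  # vertices of R1
r2 = [(0, 0), (0, 1), (2, 1), (2, 0)]                  # vertices of R2

E = max(abs(l(f, b)) for b in (h, r1, r2))
print(E)  # 0.5, i.e. E(f, H) = 1/2
```

Here all three bolt functionals happen to equal $1/2$, so the maximum in (3.8) is $E(f,H)=1/2$.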
The method used in the proof of Theorem 3.5 can be generalized to obtain
similar results for stairlike polygons. For example, let $S$ be a closed
polygon of the following form
\begin{equation*}
S=\bigcup\limits_{i=1}^{N-1}P_{i},
\end{equation*}%
where $N\geq 2,$ $P_{i}=\left[ a_{i},a_{i+1}\right] \times \left[
b_{1},b_{N+1-i}\right] ,$ $i=\overline{1,N-1},$ $a_{1}<a_{2}<\dots <a_{N},$ $%
b_{1}<b_{2}<\dots <b_{N}$. Such polygons will be called \textit{stairlike
polygons} (see \cite{56}).
A closed $2m$-gon $F$ with sides parallel to the coordinate axes is called a
maximal $2m$-gon of the polygon $S$ if $F\subset S$ and there is no other $%
2m$-gon $F^{\prime }$ such that $F\subset F^{\prime }\subset S$. Clearly, if
$F$ is a maximal $2m$-gon of the polygon $S$, then $m\leq N.$ A closed bolt
formed by the vertices of a maximal polygon $F$ is called a maximal bolt of $%
S$. By $S^{B}$ denote the set of all maximal bolts of the stairlike polygon $%
S.$
\bigskip
\textbf{Theorem 3.6.} \textit{Let $S$ be a stairlike polygon. The
approximation error of a function $f \in M(S)$ can be computed by the formula%
}
\begin{equation*}
E\left( f,S\right) =\max \left\{ \left\vert l(f,h)\right\vert ,\;h\in
S^{B}\right\} .
\end{equation*}
\bigskip
For the proof of this theorem see \cite{56}.
\bigskip
\subsection{$E$-bolts}
The main idea in the proof of Theorem 3.5 can be successfully used to
obtain formulas of type (3.8) for functions $f(x,y)$ defined on other
simple polygons. The following two theorems cover the cases of some octagons
and can be proved in a similar way.
\bigskip
\textbf{Theorem 3.7.}\ \textit{Let $a_{1}<a_{2}<a_{3}<a_{4},$ $%
b_{1}<b_{2}<b_{3}$ and $Q$ be an octagon of the following form
\begin{equation*}
Q=\bigcup\limits_{i=1}^{4}R_{i},
\end{equation*}%
where
$R_{1}=\left[ a_{1},a_{2}\right] \times \left[ b_{1},b_{2}\right] ,R_{2}=%
\left[ a_{2},a_{3}\right] \times \left[ b_{1},b_{2}\right] ,R_{3}=\left[
a_{3},a_{4}\right] \times \left[ b_{1},b_{2}\right] ,R_{4}=\left[ a_{2},a_{3}%
\right] \times \left[ b_{2},b_{3}\right] $.} \textit{Let $f\in M(Q)$. Then
the following formula holds
\begin{equation*}
E(f,Q)=\max \left\{ \left| l(f,q)\right| ,\left| l(f,r_{123} )\right|
,\left| l(f,r_{124} )\right| ,\left| l(f,r_{234} )\right| ,\left| l(f,r_{24}
)\right| \right\},
\end{equation*}
where $q,$ $r_{123},$ $r_{124},$ $r_{234},$ $r_{24} $ are closed bolts
formed by the vertices of the polygons $Q,$ $R_{1} \cup R_{2}\cup R_{3}
,R_{1}\cup R_{2}\cup R_{4} ,R_{2}\cup R_{3}\cup R_{4} $ and $R_{2}\cup R_{4}
$, respectively.}
\bigskip
\textbf{Theorem 3.8.}\ \textit{Let $a_{1}<a_{2}<a_{3}<a_{4},\
b_{1}<b_{2}<b_{3}$ and $Q$ be an octagon of the following form
\begin{equation*}
Q=\bigcup_{i=1}^{3}R_{i},
\end{equation*}%
where $R_{1}=\left[ a_{1},a_{4}\right] \times \left[ b_{1},b_{2}\right]
,R_{2}=\left[ a_{1},a_{2}\right] \times \left[ b_{2},b_{3}\right] ,R_{3}=%
\left[ a_{3},a_{4}\right] \times \left[ b_{2},b_{3}\right] $.} \textit{Let $%
f\in M(Q)$. Then
\begin{equation*}
E(f,Q)=\max \left\{ \left| l(f,r)\right| ,\left| l(f,r_{12} )\right| ,\left|
l(f,r_{13} )\right| \right\},
\end{equation*}
where $r,r_{12} ,r_{13} $ are closed bolts formed by the vertices of the
polygons $R=\left[ a_{1} ,a_{4} \right] \times \left[ b_{1} ,b_{3} \right],$
$R_{1}\cup R_{2} ,R_{1}\cup R_{3}$, respectively.}
\bigskip
Although the closed bolts maximization process can be applied to bolts of an
arbitrary polygon, some combinatorial difficulties arise when grouping
values at points of maximized bolts (bolts obtained after the maximization
process, see (3.15)-(3.18)). While we do not know a complete answer to this
problem, we can describe points of a polygon $F$ with which points of
maximized bolts coincide and state a conjecture concerning the approximation
error.
Let $F=A_{1}A_{2}...A_{2n}$ be any polygon with sides parallel to the
coordinate axes. The vertices $A_{1},$ $A_{2},$ $...,$ $A_{2n}$ in the given
order form a closed bolt, which we denote by $r_{F}$. By $\left[ r_{F}\right]
$ denote the length of $r_{F}$. In our case, $\left[ r_{F}\right] =2n$.
\bigskip
\textbf{Definition 3.2.}\ \textit{Let $F$ and $S$ be polygons with sides
parallel to the coordinate axes. We say that the closed bolt $r_{F}$ is an $%
e $-bolt (extended bolt) of $S$ if\ $r_{F}\subset S$\ and there does not
exist any polygon $F^{^{\prime }}$ such that $F\subset F^{^{\prime }},\ \
r_{F^{^{\prime }}}\subset S,\ \ \left[ r_{F^{^{\prime }}}\right] \leq \left[
r_{F}\right] .$}
\bigskip
For example, in Theorem 3.8 the octagon $Q$ has $3$ $e$-bolts. They are $%
r,r_{12}$ and $r_{13}$. In Theorem 3.7, the octagon $Q$ has $5$ $e$-bolts,
which are $q,r_{123},r_{124},r_{234}$ and $r_{24}$. The polygon $%
S_{2n}=\bigcup\limits_{i=1}^{n-1}R_{i}$, where $R_{i}=\left[ a_{i},a_{i+1}%
\right] \times \left[ b_{1},b_{n+1-i}\right] ,i=\overline{1,n-1}%
,a_{1}<a_{2}<...<a_{n},b_{1}<b_{2}<...<b_{n}$ has exactly $2^{n-1}-1$ $e$%
-bolts. It is not difficult to observe that the set of points of a closed
bolt obtained after the maximization process is a subset of the set of
points of all $e$-bolts. This observation and Theorems 3.5--3.8 justify the
statement of the following conjecture:
\textit{Let $S$ be any polygon with sides parallel to the coordinate axes
and $f \in M(S)$. Then
\begin{equation*}
E(f,S)=\max_{h\in S^{E}}\left\{ \left\vert l(f,h)\right\vert \right\},
\end{equation*}%
where $S^{E}$ is a set of all $e$-bolts of the polygon $S$.}
\bigskip
\subsection{Error estimates}
Theorem 3.5 allows us to consider classes wider than $M(H)$ and establish
sharp estimates for the approximation error.
\bigskip
\textbf{Theorem 3.9.}\ \textit{Let $H$ be a hexagon and (3.7) be its
representation. The following sharp estimates are valid for a function $%
f(x,y)$ having the continuous derivative $\dfrac{\partial ^{2}f}{\partial
x\partial y}$ on $H$:
\begin{equation*}
A\leq E(f,H)\leq BC+\frac{3}{2}\left( B\left\vert l(g,h)\right\vert
-\left\vert l(f,h)\right\vert \right) ,\eqno(3.22)
\end{equation*}%
where
\begin{equation*}
B=\max_{(x,y)\in H}\left\vert \frac{\partial ^{2}f(x,y)}{\partial x\partial y%
}\right\vert ,\ \ \ g=g(x,y)=x\cdot y,
\end{equation*}%
\begin{equation*}
A=\max \left\{ \left\vert l(f,h)\right\vert ,\ \left\vert
l(f,r_{1})\right\vert ,\left\vert l(f,r_{2})\right\vert \right\} ,\ C=\max
\left\{ \left\vert l(g,h)\right\vert ,\left\vert l(g,r_{1})\right\vert ,\
\left\vert l(g,r_{2})\right\vert \right\},
\end{equation*}%
where $h,r_{1},r_{2}$ are closed bolts formed by vertices of the polygons $%
H,R_{1}$ and $R_{2}$, respectively.}
\bigskip
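As a hedged numerical illustration of the estimates (3.22) (an example of our own), take the hexagon $H=([0,1]\times[0,2])\cup([0,2]\times[0,1])$ and $f(x,y)=xy+x^{2}+y$; the univariate terms are annihilated by every closed-bolt functional, and both sides of (3.22) collapse to the same value:

```python
# Our own illustration of the estimates (3.22) on the hexagon
# H = ([0,1] x [0,2]) u ([0,2] x [0,1]) for f(x, y) = x*y + x**2 + y,
# a function chosen so that f_xy = 1 on H (hence B = 1).

def l(fun, bolt):
    # the alternating-sum functional over a closed bolt
    return sum((-1) ** k * fun(x, y) for k, (x, y) in enumerate(bolt)) / len(bolt)

h  = [(0, 0), (0, 2), (1, 2), (1, 1), (2, 1), (2, 0)]  # vertices of H
r1 = [(0, 0), (0, 2), (1, 2), (1, 0)]                  # vertices of R1
r2 = [(0, 0), (0, 1), (2, 1), (2, 0)]                  # vertices of R2

f = lambda x, y: x * y + x**2 + y
g = lambda x, y: x * y
B = 1.0  # max of |f_xy| over H

A = max(abs(l(f, b)) for b in (h, r1, r2))
C = max(abs(l(g, b)) for b in (h, r1, r2))
upper = B * C + 1.5 * (B * abs(l(g, h)) - abs(l(f, h)))
print(A, upper)  # 0.5 0.5: lower and upper bounds coincide here
```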
\textbf{Remark 3.4.} Inequalities similar to (3.22) were established in
Babaev \cite{8} for the approximation of a function $f(x)=f(x_{1},...,x_{n})$%
, defined on a parallelepiped with sides parallel to the coordinate axes, by
sums $\sum\limits_{i=1}^{n}\varphi _{i}(x\backslash x_{i})$. For the
approximation of bivariate functions, Babaev's result covers only the
rectangular case.
\bigskip
\textbf{Remark 3.5.} Estimates (3.22) are easily calculable in contrast to
those established in \cite{5} for continuous functions defined on certain
domains, which are different from polygons.
\bigskip
To prove Theorem 3.9 we need the following lemmas.
\bigskip
\textbf{Lemma 3.3.}\ \textit{Let $X$ be a normed space, $F$ be a subspace of
$X$. The following inequality is valid for an element $x=x_{1}+x_{2}$ from $X
$:
\begin{equation*}
\left\vert E(x_{1})-E(x_{2})\right\vert \leq E(x)\leq E(x_{1})+E(x_{2}),
\end{equation*}%
where
\begin{equation*}
E(x)=E(x,F)=\inf_{y\in F}\left\Vert x-y\right\Vert .
\end{equation*}%
} \bigskip
\textbf{Lemma 3.4.}\ \textit{If $f\in M(H)$, then
\begin{equation*}
\left\vert l(f,r_{i})\right\vert \leq \frac{3}{2}\left\vert
l(f,h)\right\vert ,i=1,2.
\end{equation*}%
}
Lemma 3.3 is obvious. To prove Lemma 3.4, note that for any $f\in M(H)$
\begin{equation*}
6\left\vert l(f,h)\right\vert =4\left\vert l(f,r_{i})\right\vert
+4\left\vert l(f,r_{3})\right\vert ,\ \ i=1,2,
\end{equation*}%
where $r_{3}$ is a closed bolt formed by the vertices of the rectangle $%
R_{3}=H\backslash R_{i}.$
Now let us prove Theorem 3.9.
\begin{proof} It is not difficult to verify that if $\frac{\partial ^{2}u}{%
\partial x\partial y}\geq 0$ on $H$ for some $u(x,y)$ with $\frac{\partial
^{2}u(x,y)}{\partial x\partial y}\in C(H)$, then $u\in M(H)$ (see the proof
of Corollary 3.2). Set $f_{1}=f+Bg$. Since $\frac{\partial ^{2}f_{1}}{%
\partial x\partial y}\geq 0$ on $H$, $f_{1}\in M(H)$. By Lemma 3.4,
\begin{equation*}
\left\vert l(f_{1},r_{i})\right\vert \leq \frac{3}{2}\left\vert
l(f_{1},h)\right\vert ,i=1,2.\eqno(3.23)
\end{equation*}
Theorem 3.5 implies that
\begin{equation*}
E(f_{1},H)=\max \left\{ \left\vert l(f_{1},h)\right\vert ,\left\vert
l(f_{1},r_{1})\right\vert ,\left\vert l\left( f_{1},r_{2}\right) \right\vert
\right\} .\eqno(3.24)
\end{equation*}
We deduce from (3.23) and (3.24) that
\begin{equation*}
E(f_{1},H)\leq \frac{3}{2}\left\vert l(f_{1},h)\right\vert .
\end{equation*}
First, let the closed bolt $h$ start at the point $(a_{1},b_{1})$. Then it
is clear that
\begin{equation*}
E(f_{1},H)\leq \frac{3}{2}l(f_{1},h).\eqno(3.25)
\end{equation*}
By Lemma 3.3,
\begin{equation*}
E(f,H)-E(Bg,H)\leq E(f_{1},H).\eqno(3.26)
\end{equation*}
Inequalities (3.25) and (3.26) yield
\begin{equation*}
E(f,H)\leq BE(g,H)+\frac{3}{2}l(f_{1},h).\eqno(3.27)
\end{equation*}
Since the functional $l(f,h)$ is linear,
\begin{equation*}
l(f_{1},h)=l(f,h)+Bl(g,h).
\end{equation*}%
Considering this expression of $l(f_{1},h)$ in (3.27), we obtain that
\begin{equation*}
E(f,H)\leq BE(g,H)+\frac{3}{2}Bl(g,h)+\frac{3}{2}l(f,h).\eqno(3.28)
\end{equation*}
Now consider the function $f_{2}=Bg-f$. Obviously, $\frac{\partial ^{2}f_{2}}{\partial x\partial y}%
\geq 0$ on $H$. It can be shown, in the same way as (3.28) has been
obtained, that
\begin{equation*}
E(f,H)\leq BE(g,H)+\frac{3}{2}Bl(g,h)-\frac{3}{2}l(f,h).\eqno(3.29)
\end{equation*}
From (3.28) and (3.29) it follows that
\begin{equation*}
E(f,H)\leq BE(g,H)+\frac{3}{2}Bl(g,h)-\frac{3}{2}\left\vert
l(f,h)\right\vert .\eqno(3.30)
\end{equation*}%
Since $g\in M(H)$ and $h$ starts at the point $(a_{1},b_{1}),$ we have $%
l(g,h)\geq 0$.
Let now $h$ start at a point such that $l(u,h)\leq 0$ for any $u\in M(H)$.
Then in a similar way as above we can prove that
\begin{equation*}
E(f,H)\leq BE(g,H)-\frac{3}{2}Bl(g,h)-\frac{3}{2}\left\vert
l(f,h)\right\vert ,\eqno(3.31)
\end{equation*}%
where $l(g,h)\leq 0$. From (3.30), (3.31) and the fact that $E(g,H)=C$ (in
view of Theorem 3.5), it follows that
\begin{equation*}
E(f,H)\leq BC+\frac{3}{2}\left( B\left\vert l(g,h)\right\vert -\left\vert
l(f,h)\right\vert \right) .
\end{equation*}%
The upper bound in (3.22) has been established. Note that it is attained
by $f=g=xy$.
The proof of the lower bound in (3.22) is simple. One of the obvious properties
of the functional $l(f,p)$ is that $\left\vert l(f,p)\right\vert \leq E(f,H)$
for any continuous function $f$ on $H$ and a closed bolt $p$. Hence,
\begin{equation*}
A=\max \left\{ \left\vert l(f,h)\right\vert ,\left\vert
l(f,r_{1})\right\vert ,\left\vert l(f,r_{2})\right\vert \right\} \leq E(f,H).
\end{equation*}
Note that by Theorem 3.5 the lower bound in (3.22) is attained by an arbitrary
function from $M(H)$. \end{proof}
\textbf{Remark 3.6.} Using Theorems 3.7 and 3.8 one can obtain sharp
estimates of type (3.22) for bivariate functions defined on the
corresponding simple polygons with sides parallel to the coordinate axes.
\bigskip
\section{On the theorem of M. Golomb}
Let $X_{1},...,X_{n}$ be compact spaces and $X=X_{1}\times \cdots \times
X_{n}.$ Consider the approximation of a function $f\in C(X)$ by sums $%
g_{1}(x_{1})+\cdots +g_{n}(x_{n}),$ where $g_{i}\in C(X_{i}),$ $i=1,...,n.$
In \cite{37}, M. Golomb obtained a formula for the error of this
approximation in terms of measures constructed on special points of $X$,
called ``projection cycles". However, his proof had a gap, which was pointed
out later by Marshall and O'Farrell \cite{107}, and the question of whether
the formula was correct remained open. The purpose of this section is to prove
that Golomb's formula is valid, and moreover that it holds in a stronger form.
\subsection{History of Golomb's formula}
Let $X_{i},i=1,...,n,$ be compact Hausdorff spaces. Consider the
approximation to a continuous function $f$, defined on $X=X_{1}\times \cdots
\times X_{n}$, from the manifold
\begin{equation*}
M=\left\{ \sum_{i=1}^{n}g_{i}(x_{i}):g_{i}\in C(X_{i}),~~i=1,...,n\right\} .
\end{equation*}%
The approximation error is defined as the distance from $f$ to $M$:
\begin{equation*}
E(f)\overset{\mathrm{def}}{=}\mathrm{dist}(f,M)=\underset{g\in M}{\inf }\left\Vert
f-g\right\Vert _{C(X)}.
\end{equation*}
The well-known duality relation says that
\begin{equation*}
E(f)=\underset{\left\Vert \mu \right\Vert \leq 1}{\underset{\mu \in M^{\bot }%
}{\sup }}\left\vert \int\limits_{X}fd\mu \right\vert ,\eqno(3.32)
\end{equation*}%
where $M^{\bot }$ is the space of regular Borel measures annihilating all
functions in $M$ and $\left\Vert \mu \right\Vert $ stands for the total
variation of a measure $\mu $. It should be noted that the $\sup $ in (3.32)
is attained by some measure $\mu ^{\ast }$ with total variation $\left\Vert
\mu ^{\ast }\right\Vert =1.$ We are interested in the problem: is it
possible to replace in (3.32) the class $M^{\bot }$ by some subclass of it
consisting of measures of simple structure? For the case $n=2,$ this problem
was first considered by Diliberto and Straus \cite{26}. They showed that the
measures generated by closed bolts are sufficient for the equality (3.32).
In the case of general topological spaces, a lightning bolt is defined
similarly to the case of $\mathbb{R}^2$. Let $X=X_{1}\times X_{2}$ and $\pi _{i}$ be the
projections of $X$ onto $X_{i},$ $i=1,2.$ A lightning bolt (or, simply, a
bolt) is a finite ordered set $\{a_{1},...,a_{k}\}$ contained in $X$, such
that $a_{i}\neq a_{i+1}$, for $i=1,2,...,k-1$, and either $\pi
_{1}(a_{1})=\pi _{1}(a_{2}),$ $\pi _{2}(a_{2})=\pi _{2}(a_{3})$, $\pi
_{1}(a_{3})=\pi _{1}(a_{4}),...,$ or $\pi _{2}(a_{1})=\pi _{2}(a_{2}),$ $\pi
_{1}(a_{2})=\pi _{1}(a_{3})$, $\pi _{2}(a_{3})=\pi _{2}(a_{4}),...$ A bolt $%
\{a_{1},...,a_{k}\}$ is said to be closed if $k$ is an even number and the
set $\{a_{2},...,a_{k},a_{1}\}$ is also a bolt.
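The alternation condition above is purely combinatorial and can be checked mechanically. The following Python sketch (our own illustration; the function names are ours) tests whether an ordered set of points in $X_{1}\times X_{2}$ is a bolt, and whether it is closed.

```python
# Our own checker for the bolt condition in a product X1 x X2:
# consecutive points are distinct and alternately share the first
# or the second coordinate.

def is_bolt(pts):
    if any(pts[i] == pts[i + 1] for i in range(len(pts) - 1)):
        return False  # successive points must differ
    for start in (0, 1):  # the alternation may begin with either projection
        if all(pts[i][(start + i) % 2] == pts[i + 1][(start + i) % 2]
               for i in range(len(pts) - 1)):
            return True
    return False

def is_closed_bolt(pts):
    # closed: even length, and the cyclically shifted set is also a bolt
    return (len(pts) % 2 == 0 and is_bolt(pts)
            and is_bolt(pts[1:] + pts[:1]))

print(is_closed_bolt([(0, 0), (0, 1), (1, 1), (1, 0)]))  # True
print(is_closed_bolt([(0, 0), (0, 1), (1, 1)]))          # False (odd length)
```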
Let $l=\{a_{1},...,a_{2k}\}$ be a closed bolt. Consider a measure $\mu _{l}$
having atoms $\pm \frac{1}{2k}$ with alternating signs at the vertices of $l$%
. That is,
\begin{equation*}
\mu _{l}=\frac{1}{2k}\sum_{i=1}^{2k}(-1)^{i-1}\delta _{a_{i}}\text{ \ or \ }%
\mu _{l}=\frac{1}{2k}\sum_{i=1}^{2k}(-1)^{i}\delta _{a_{i}},
\end{equation*}%
where $\delta _{a_{i}}$ is a point mass at $a_{i}.$ It is clear that $\mu
_{l}\in M^{\bot }$ and $\left\Vert \mu _{l}\right\Vert \leq 1$. $\left\Vert
\mu _{l}\right\Vert =1$ if and only if the set of vertices of the bolt $l$
having even indices does not intersect with that having odd indices. The
following duality relation was first established by Diliberto and Straus
\cite{26}
\begin{equation*}
E(f)=\underset{l\subset X}{\sup }\left\vert \int\limits_{X}fd\mu
_{l}\right\vert ,\eqno(3.33)
\end{equation*}%
where $X=X_{1}\times X_{2}$ and the $\sup $ is taken over all closed bolts
of $X$. In fact, Diliberto and Straus obtained the formula (3.33) for the
case when $X$ is a rectangle in $\mathbb{R}^{2}$ with sides parallel to the
coordinate axes. The same result was independently proved by Smolyak (see
\cite{113}). Yet another proof of (3.33), in the case when $X$ is a
Cartesian product of two compact Hausdorff spaces, was given by Light and
Cheney \cite{93}. For $X$'s other than a rectangle in $\mathbb{R}^{2}$, the
theorem under some additional assumptions appeared in the works \cite%
{62,79,107}. But we shall not discuss these works here.
Golomb's paper \cite{37} initiated a systematic study of approximation
of multivariate functions by various compositions, including sums of
univariate functions. Golomb generalized the notion of a closed bolt to the $%
n$-dimensional case and obtained the analogue of formula (3.33) for the
error of approximation from the manifold $M$. The objects introduced in \cite%
{37} were called \textit{projection cycles} and they are defined as sets of
the form
\begin{equation*}
p=\{b_{1},...,b_{k};~c_{1},...,c_{k}\}\subset X,\eqno(3.34)
\end{equation*}%
with the property that $b_{i}\neq c_{j}$, $i,j=1,...,k$ and for all $\nu
=1,...,n,$ the group of the $\nu $-th coordinates of $c_{1},...,c_{k}$ is a
permutation of that of the $\nu $-th coordinates of $b_{1},...,b_{k}.$ Some
points in the $b$-part $\left( b_{1},...,b_{k}\right) $ or $c$-part $\left(
c_{1},...,c_{k}\right) $ of $p$ may coincide. The measure associated with $p$
is
\begin{equation*}
\mu _{p}=\frac{1}{2k}\left( \sum_{i=1}^{k}\delta
_{b_{i}}-\sum_{i=1}^{k}\delta _{c_{i}}\right).
\end{equation*}
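As a quick sanity check of the construction (our own illustration, not from \cite{37}), one can verify numerically that $\mu _{p}$ annihilates every sum of univariate functions; the example below uses a small projection cycle in $\{0,1\}^{3}$.

```python
# Our own sanity check: the measure mu_p attached to a projection cycle
# annihilates every sum of univariate functions g1(x1) + g2(x2) + g3(x3).

# A projection cycle in X = {0,1}^3: the b-part and c-part are disjoint,
# and coordinatewise the c-coordinates permute the b-coordinates.
b = [(0, 0, 0), (1, 1, 1)]
c = [(0, 1, 1), (1, 0, 0)]
for i in range(3):
    assert sorted(p[i] for p in b) == sorted(p[i] for p in c)

def integrate(F):
    """Integral of F against mu_p = (1/2k)(sum_i delta_{b_i} - sum_i delta_{c_i})."""
    k = len(b)
    return (sum(F(*p) for p in b) - sum(F(*p) for p in c)) / (2 * k)

g = lambda x1, x2, x3: (3 * x1 - 1) + x2**2 + (2 * x3 + 5)  # an element of M
print(integrate(g))                           # 0.0: mu_p annihilates M
print(integrate(lambda x1, x2, x3: x1 * x2))  # 0.25: but not all of C(X)
```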
It is clear that $\mu _{p}\in M^{\bot }$ and $\left\Vert \mu _{p}\right\Vert
=1.$ Besides, if $n=2,$ then a projection cycle is the union of closed bolts
after some suitable permutation of its points. Golomb's result states that
\begin{equation*}
E(f)=\underset{p\subset X}{\sup }\left\vert \int\limits_{X}fd\mu
_{p}\right\vert ,\eqno(3.35)
\end{equation*}%
where $X=X_{1}\times \cdots \times X_{n}$ and the $\sup $ is taken over all
projection cycles of $X$. It can be proved that in the case $n=2,$ the
formulas (3.33) and (3.35) are equivalent. Unfortunately, the proof of
(3.35) had a gap, which was pointed out many years later by Marshall and
O'Farrell \cite{107}, and the question of whether the formula (3.35) was
correct remained unsolved (see also the monograph by Khavinson \cite{76}%
). Note that Golomb's result was used and cited in the literature, for
example, in the works \cite{75,126}.
In the following subsection, we will construct families of normalized
measures (that is, measures with the total variation equal to $1$) on
projection cycles. Each measure $\mu _{p}$ defined above will be a member of
some family. We will also consider minimal projection cycles and measures
constructed on them. By properties of these measures, we show that Golomb's
formula (3.35) is valid in a stronger form.
\bigskip
\subsection{Measures supported on projection cycles}
Let us give an equivalent definition of a projection cycle. This will be
useful in constructing certain measures that have a simple structure and are
capable of approximating arbitrary measures in $M^{\bot }$.
In the sequel, $\chi _{a}$ will denote the characteristic function of a
single point set $\{a\}\subset \mathbb{R}$.
\bigskip
\textbf{Definition 3.3.} \textit{Let $X=X_{1}\times \cdots \times X_{n}$ and
$\pi _{i}$ be the projections of $X$ onto the sets $X_{i},$ $i=1,...,n.$ We
say that a set $p=\{x_{1},...,x_{m}\}\subset X$ is a projection cycle if
there exists a vector $\lambda =(\lambda _{1},...,\lambda _{m})$ with
nonzero real coordinates such that}
\begin{equation*}
\sum_{j=1}^{m}\lambda _{j}\chi _{\pi _{i}(x_{j})}=0,\text{ \ }i=1,...,n.\eqno%
(3.36)
\end{equation*}
\bigskip
Let us give some explanatory remarks concerning Definition 3.3. Fix the
subscript $i.$ Let the set $\{\pi _{i}(x_{j})$, $j=1,...,m\}$ have $s_{i}$
different values, which we denote by $\gamma _{1}^{i},\gamma
_{2}^{i},...,\gamma _{s_{i}}^{i}.$ Then (3.36) implies that
\begin{equation*}
\sum_{j}\lambda _{j}=0,
\end{equation*}%
where the sum is taken over all $j$ such that $\pi _{i}(x_{j})=\gamma
_{k}^{i},$ $k=1,...,s_{i}.$ Thus for fixed $i$, we have $s_{i}$ homogeneous
linear equations in $\lambda _{1},...,\lambda _{m}.$ The coefficients of
these equations are the integers $0$ and $1.$ By varying $i$, we obtain $%
s=\sum_{i=1}^{n}s_{i}$ such equations. Hence (3.36), in its expanded form,
stands for the system of these equations. One can observe that if this
system has a solution $(\lambda _{1},...,\lambda _{m})$ with nonzero real
components $\lambda _{i},$ then it also has a solution $(n_{1},...,n_{m})$
with nonzero integer components $n_{i},$ $i=1,...,m.$ This means that in
Definition 3.3, we can replace the vector $\lambda $ by the vector $%
n=(n_{1},...,n_{m})\,$, where $n_{i}\in \mathbb{Z}\backslash \{0\},$ $%
i=1,...,m.$ Thus, Definition 3.3 is equivalent to the following definition.
\bigskip
\textbf{Definition 3.4.} \textit{A set $p=\{x_{1},...,x_{m}\}\subset X$ is
called a projection cycle if there exist nonzero integers $n_{1},...,n_{m}$
such that}
\begin{equation*}
\sum_{j=1}^{m}n_{j}\chi _{\pi _{i}(x_{j})}=0,\text{ \ }i=1,...,n.\eqno(3.37)
\end{equation*}
\bigskip
\textbf{Lemma 3.5.} \textit{Definition 3.4 is equivalent to Golomb's
definition of a projection cycle.}
\bigskip
\begin{proof} Let $p=\{x_{1},...,x_{m}\}$ be a projection cycle with respect
to Definition 3.4. Denote by $b$ and $c$ the sets of all points $x_{i}$
whose associated integers $n_{i}$ in (3.37) are positive and negative,
respectively. List each point $x_{i}$ $n_{i}$ times if $n_{i}>0$ and
$-n_{i}$ times if $n_{i}<0.$ Then the set $\{b;c\}$ is a
projection cycle with respect to Golomb's definition. The
converse is also true. Let a set $p_{1}=\{b_{1},...,b_{k};~c_{1},...,c_{k}\}$
be a projection cycle with respect to Golomb's definition. Here, some points
$b_{i}$ or $c_{i}$ may be repeated. Let $p=\{x_{1},...,x_{m}\}$ stand for
the set $p_{1}$, but with no repetition of its points. Let $n_{i}$ denote how
many times $x_{i}$ appears in $p_{1}.$ We take $n_{i}$ positive if $x_{i}$
appears in the $b$-part of $p_{1}$ and negative if it appears in the $c$%
-part of $p_{1}.$ Clearly, the set $\{x_{1},...,x_{m}\}$ is a projection
cycle with respect to Definition 3.4, since the integers $n_{i},$ $%
i=1,...,m, $ satisfy (3.37). \end{proof}
In the sequel, we will use Definition 3.3. A pair $\left\langle p,\lambda
\right\rangle ,$ where $p$ is a projection cycle in $X$ and $\lambda $ is a
vector associated with $p$ by (3.36), will be called a ``projection
cycle-vector pair" of $X.$ To each such pair $\left\langle p,\lambda
\right\rangle $ with $p=\{x_{1},...,x_{m}\}$ and $\lambda =(\lambda
_{1},...,\lambda _{m})$, we correspond the measure
\begin{equation*}
\mu _{p,\lambda }=\frac{1}{\sum_{j=1}^{m}\left\vert \lambda _{j}\right\vert }%
\sum_{j=1}^{m}\lambda _{j}\delta _{x_{j}}.\eqno(3.38)
\end{equation*}
Clearly, $\mu _{p,\lambda }\in M^{\bot }$ and $\left\Vert \mu _{p,\lambda
}\right\Vert =1$. We will also deal with measures supported on certain
subsets of projection cycles called \textit{minimal projection cycles}. A
projection cycle is said to be minimal if it does not contain any projection
cycle as its proper subset. For example, the set
$p=\{(0,0,0),~(0,0,1),~(0,1,0),~(1,0,0),~(1,1,1)\}$ is a minimal projection
cycle in $\mathbb{R}^{3},$ since the vector $\lambda =(2,-1,-1,-1,1)$
satisfies Eq. (3.36) and no such vector exists for any proper subset of $p$.
Adding the point $(0,1,1)$ to $p$, we again obtain a projection cycle, but
not a minimal one. Note that in this case, $\lambda $ can be taken as
$(3,-1,-1,-2,2,-1).$
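These two examples can be checked mechanically. The sketch below (a plain Python illustration, not part of the original argument; the points and coefficient vectors are taken verbatim from the example above) verifies condition (3.36): for each coordinate $i$ and each value attained by $\pi_i$, the coefficients of the points projecting to that value must sum to zero.

```python
from collections import defaultdict

def is_projection_cycle(points, lam):
    """Check condition (3.36): for each coordinate i, the coefficients of
    all points sharing the same i-th projection must sum to zero."""
    n = len(points[0])              # dimension of the product X_1 x ... x X_n
    for i in range(n):
        sums = defaultdict(int)
        for x, l in zip(points, lam):
            sums[x[i]] += l         # group coefficients by the value pi_i(x)
        if any(s != 0 for s in sums.values()):
            return False
    return True

# The minimal projection cycle from the text, with lambda = (2,-1,-1,-1,1).
p = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
assert is_projection_cycle(p, [2, -1, -1, -1, 1])

# Adding (0,1,1) gives a (non-minimal) cycle with lambda = (3,-1,-1,-2,2,-1).
assert is_projection_cycle(p + [(0, 1, 1)], [3, -1, -1, -2, 2, -1])

# Spot-check: the original coefficients fail after deleting a point.
assert not is_projection_cycle(p[:-1], [2, -1, -1, -1])
```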
\bigskip
\textbf{Remark 3.7.} A minimal projection cycle, under the name of a \textit{%
loop}, was introduced and used in the works of Klopotowski, Nadkarni, and Rao
\cite{81,80}.
\bigskip
To prove our main result we need some auxiliary facts.
\bigskip
\textbf{Lemma 3.6.} (1)\textit{\ The vector $\lambda =(\lambda
_{1},...,\lambda _{m})$ associated with a minimal projection cycle $%
p=(x_{1},...,x_{m})$ is unique up to multiplication by a constant.}
(2)\textit{\ If in (1), $\sum_{j=1}^{m}\left\vert \lambda _{j}\right\vert
=1, $ then all the numbers $\lambda _{j}$, $j=1,...,m,$ are rational.}
\bigskip
\begin{proof} Let $\lambda ^{1}=(\lambda _{1}^{1},...,\lambda _{m}^{1})$ and $%
\lambda ^{2}=(\lambda _{1}^{2},...,\lambda _{m}^{2})$ be any two vectors
associated with $p.$ That is,
\begin{equation*}
\sum_{j=1}^{m}\lambda _{j}^{1}\chi _{\pi _{i}(x_{j})}=0\text{ and }%
\sum_{j=1}^{m}\lambda _{j}^{2}\chi _{\pi _{i}(x_{j})}=0,\text{ \ }i=1,...,n.
\end{equation*}%
After multiplying the second equality by $c=\frac{\lambda _{1}^{1}}{\lambda
_{1}^{2}}$ and subtracting from the first, we obtain that
\begin{equation*}
\sum_{j=2}^{m}(\lambda _{j}^{1}-c\lambda _{j}^{2})\chi _{\pi _{i}(x_{j})}=0%
\text{, \ }i=1,...,n.
\end{equation*}%
Now since the cycle $p$ is minimal, $\lambda _{j}^{1}=c\lambda _{j}^{2},$
for all $j=1,...,m.$
The second part of the lemma is a consequence of the first part. Indeed, let
$n=(n_{1},...,n_{m})$ be a vector with nonzero integer coordinates
associated with $p.$ Then the vector $\lambda ^{^{\prime }}=(\lambda
_{1}^{^{\prime }},...,\lambda _{m}^{^{\prime }}),$ where $\lambda
_{j}^{^{\prime }}=\frac{n_{j}}{\sum_{j=1}^{m}\left\vert n_{j}\right\vert },$
$j=1,...,m,$ is also associated with $p.$ All coordinates of $\lambda
^{^{\prime }}$ are rational and therefore by the first part of the lemma, it
is the unique vector satisfying $\sum_{j=1}^{m}\left\vert \lambda
_{j}^{^{\prime }}\right\vert =1.$ \end{proof}
By this lemma, a minimal projection cycle $p$ uniquely (up to a sign)
defines the measure
\begin{equation*}
~\mu _{p}=\sum_{j=1}^{m}\lambda _{j}\delta _{x_{j}},\text{ \ }%
\sum_{j=1}^{m}\left\vert \lambda _{j}\right\vert =1.
\end{equation*}
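As a quick numerical illustration of part (2) of Lemma 3.6, one may normalize the integer vector of the minimal projection cycle exhibited earlier; all entries of the normalized vector come out rational. (A sketch with the example's data; everything else is illustrative.)

```python
from fractions import Fraction

# Integer vector of the minimal projection cycle
# p = {(0,0,0),(0,0,1),(0,1,0),(1,0,0),(1,1,1)} from the example above.
n_vec = [2, -1, -1, -1, 1]

# Normalize so that the absolute values sum to 1, as in Lemma 3.6(2).
total = sum(abs(v) for v in n_vec)          # = 6
lam = [Fraction(v, total) for v in n_vec]   # every entry is rational

assert sum(abs(x) for x in lam) == 1
assert lam == [Fraction(1, 3), Fraction(-1, 6), Fraction(-1, 6),
               Fraction(-1, 6), Fraction(1, 6)]
```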
\bigskip
\textbf{Lemma 3.7}. \textit{Let $\mu $ be a normalized orthogonal measure on
a projection cycle $l\subset X$. Then it is a convex combination of
normalized orthogonal measures on minimal projection cycles of $l$. That is,}
\begin{equation*}
\mu =\sum_{i=1}^{s}t_{i}\mu _{l_{i}},\text{ }\sum_{i=1}^{s}t_{i}=1,~t_{i}>0,
\end{equation*}
\textit{where $l_{i},$ $i=1,...,s,$ are minimal projection cycles in $l.$}
\bigskip
This lemma follows from the result of Navada (see \cite[Theorem 2]{112}):
Let $S\subset X_{1}\times \cdots \times X_{n}$ be a finite set. Then any
extreme point of the convex set of measures $\mu $ on $S$, $\mu \in M^{\bot }
$, $\left\Vert \mu \right\Vert \leq 1$, has its support on a minimal
projection cycle contained in $S$.
\bigskip
\textbf{Remark 3.8.} In the case $n=2$, Lemma 3.7 was proved by Medvedev
(see \cite[p.77]{76}).
\bigskip
\textbf{Lemma 3.8} (see \cite[p.73]{76}). \textit{Let $X=X_{1}\times \cdots
\times X_{n}$ and $\pi _{i}$ be the projections of $X$ onto the sets $X_{i},$
$i=1,...,n.$ In order that a measure $\mu \in C(X)^{\ast }$ be orthogonal to
the subspace $M$, it is necessary and sufficient that}
\begin{equation*}
\mu \circ \pi _{i}^{-1}=0,\text{ }i=1,...,n.
\end{equation*}
\bigskip
\textbf{Lemma 3.9} (see \cite[p.75]{76}). \textit{Let $\mu \in M^{\bot }$
and $\left\Vert \mu \right\Vert =1.$ Then there exists a net of measures $%
\{\mu _{\alpha }\}\subset M^{\bot }$ weak$^{\text{*}}$ convergent in $%
C(X)^{\ast }$ to $\mu $ and satisfying the following properties:}
1) $\left\Vert \mu _{\alpha }\right\Vert =1;$
2) \textit{The closed support of each $\mu _{\alpha }$ is a finite set.}
\bigskip
Our main result is the following theorem.
\bigskip
\textbf{Theorem 3.10.} \textit{The error of approximation from the manifold $%
M$ obeys the equality}
\begin{equation*}
E(f)=\underset{l\subset X}{\sup }\left\vert \int\limits_{X}fd\mu
_{l}\right\vert ,
\end{equation*}%
\textit{where the $\sup $ is taken over all minimal projection cycles of $X.$%
}
\bigskip
\begin{proof} Let $\overset{\sim }{\mu }$ be a measure with finite support $%
\{x_{1},...,x_{m}\}$ and orthogonal to the space $M.$ Put $\lambda _{j}=%
\overset{\sim }{\mu }(x_{j}),$ $j=1,...,m.$ By Lemma 3.8, $\overset{\sim }{%
\mu }(\pi _{i}^{-1}(\pi _{i}(x_{j})))=0,$ for all $i=1,...,n,$ $j=1,...,m.$
Fix the indices $i$ and $j.$ Then we have the equation $\sum_{k}\lambda
_{k}=0,$ where the sum is taken over all indices $k$ such that $\pi
_{i}(x_{k})=\pi _{i}(x_{j}).$ Varying $i$ and $j,$ we obtain a system of
such equations, which concisely can be written as
\begin{equation*}
\sum_{k=1}^{m}\lambda _{k}\chi _{\pi _{i}(x_{k})}=0,\text{ \ }i=1,...,n.
\end{equation*}%
This means that the finite support of $\overset{\sim }{\mu }$ forms a
projection cycle. Therefore, the measures of a net approximating the given
measure $\mu $ as in Lemma 3.9 are all of the form (3.38).
Let now $\mu _{p,\lambda }$ be any measure of the form (3.38). Since $\mu
_{p,\lambda }\in M^{\bot }$ and $\left\Vert \mu _{p,\lambda }\right\Vert =1,$
we can write
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{p,\lambda }\right\vert =\left\vert
\int\limits_{X}(f-g)d\mu _{p,\lambda }\right\vert \leq \left\Vert
f-g\right\Vert ,\eqno(3.39)
\end{equation*}%
where $g$ is an arbitrary function in $M$. It follows from (3.39) that
\begin{equation*}
\underset{\left\langle p,\lambda \right\rangle }{\sup }\left\vert
\int\limits_{X}fd\mu _{p,\lambda }\right\vert \leq E(f),\eqno(3.40)
\end{equation*}%
where the $\sup $ is taken over all projection cycle-vector pairs of $X.$
Consider the general duality relation (3.32). Let $\mu _{0}$ be a measure
attaining the supremum in (3.32) and $\left\{ \mu _{p,\lambda }\right\} $ be
a net of measures of the form (3.38) approximating $\mu _{0}$ in the weak$^{%
\text{*}}$ topology of $C(X)^{\ast }.$ We already know that this is
possible. For any $\varepsilon >0,$ there exists a measure $\mu
_{p_{0},\lambda _{0}}$ in $\left\{ \mu _{p,\lambda }\right\} $ such that
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{0}-\int\limits_{X}fd\mu _{p_{0},\lambda
_{0}}\right\vert <\varepsilon .
\end{equation*}%
From the last inequality we obtain that
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{p_{0},\lambda _{0}}\right\vert >\left\vert
\int\limits_{X}fd\mu _{0}\right\vert -\varepsilon =E(f)-\varepsilon .
\end{equation*}%
Hence,
\begin{equation*}
\underset{\left\langle p,\lambda \right\rangle }{\sup }\left\vert
\int\limits_{X}fd\mu _{p,\lambda }\right\vert \geq E(f).\eqno(3.41)
\end{equation*}%
From (3.40) and (3.41) it follows that
\begin{equation*}
\underset{\left\langle p,\lambda \right\rangle }{\sup }\left\vert
\int\limits_{X}fd\mu _{p,\lambda }\right\vert =E(f).\eqno(3.42)
\end{equation*}
By Lemma 3.7,
\begin{equation*}
\mu _{p,\lambda }=\sum_{i=1}^{s}t_{i}\mu _{l_{i}},
\end{equation*}%
where $l_{i}$, $i=1,...,s,$ are minimal projection cycles in $p$ and $%
\sum_{i=1}^{s}t_{i}=1,~t_{i}>0.$ Let $k$ be an index in the set $\{1,...,s\}
$ such that
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{l_{k}}\right\vert =\max \left\{ \left\vert
\int\limits_{X}fd\mu _{l_{i}}\right\vert ,\text{ }i=1,...,s\right\} .
\end{equation*}%
Then
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{p,\lambda }\right\vert \leq \left\vert
\int\limits_{X}fd\mu _{l_{k}}\right\vert .\eqno(3.43)
\end{equation*}%
Now since
\begin{equation*}
\left\vert \int\limits_{X}fd\mu _{l}\right\vert \leq E(f),
\end{equation*}%
for any minimal cycle $l,$ from (3.42) and (3.43) we obtain the assertion of
the theorem. \end{proof}
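For $n=2$ the theorem can be made concrete. On the four-point set $\{0,1\}^{2}$ with the two coordinate projections, the only minimal projection cycle is the whole set, with normalized vector $\lambda =(1,-1,-1,1)/4$, so Theorem 3.10 gives $E(f)=\left\vert f(0,0)-f(0,1)-f(1,0)+f(1,1)\right\vert /4$. The sketch below (an illustration, not part of the text) checks this against a brute-force grid search for $f(x_{1},x_{2})=x_{1}x_{2}$.

```python
def cycle_functional(f):
    """|integral of f d(mu_p)| for the unique minimal projection cycle of
    {0,1}^2, whose normalized vector is lambda = (1,-1,-1,1)/4."""
    return abs(f(0, 0) - f(0, 1) - f(1, 0) + f(1, 1)) / 4

def brute_force_error(f, step=16):
    """min over g1, g2 of max |f(x) - g1(x1) - g2(x2)| on {0,1}^2, by a grid
    search; g1(0) can be absorbed into g2, so only three values vary."""
    vals = [k / step for k in range(-step, step + 1)]   # grid on [-1, 1]
    best = float("inf")
    for a1 in vals:                 # a1 = g1(1), with g1(0) = 0
        for b0 in vals:             # b0 = g2(0)
            for b1 in vals:         # b1 = g2(1)
                err = max(abs(f(0, 0) - b0), abs(f(0, 1) - b1),
                          abs(f(1, 0) - a1 - b0), abs(f(1, 1) - a1 - b1))
                best = min(best, err)
    return best

f = lambda x1, x2: x1 * x2
assert cycle_functional(f) == 0.25                  # Theorem 3.10: E(f) = 1/4
assert abs(brute_force_error(f) - 0.25) < 1e-12     # attained by (x1+x2)/2 - 1/4
```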
\textbf{Remark 3.9.} Theorem 3.10 not only proves Golomb's formula, but also
improves it. Indeed, based on Lemma 3.5, one can easily observe that the
formula (3.35) is equivalent to the formula
\begin{equation*}
E(f)=\underset{\left\langle p,\lambda \right\rangle }{\sup }\left\vert
\int\limits_{X}fd\mu _{p,\lambda }\right\vert ,
\end{equation*}%
where the $\sup $ is taken over all projection cycle-vector pairs $%
\left\langle p,\lambda \right\rangle $ of $X$ provided that all the numbers $%
\lambda _{i}\diagup \sum_{j=1}^{m}\left\vert \lambda _{j}\right\vert $, $%
i=1,...,m,$ are rational. But by Lemma 3.6, minimal projection cycles enjoy
this property.
\newpage
\chapter{Generalized ridge functions and linear superpositions}
A ridge function $g(\mathbf{a}\cdot \mathbf{x})$ with a direction $\mathbf{a}%
\in \mathbb{R}^{d}\backslash \{\mathbf{0}\}$ admits a natural generalization
to a multivariate function of the form $g(\alpha _{1}(x_{1})+\cdots
+\alpha _{d}(x_{d}))$, where $\alpha _{i}(x_{i})$, $i=\overline{1,d},$
are real, presumably well behaved, fixed univariate functions. We know from
Chapter 1 that finitely many directions $\mathbf{a}^{j}$ are not enough for
sums $\sum g_{j}\left( \mathbf{a}^{j}\cdot \mathbf{x}\right) $ to
approximate multivariate functions. However, we will see in this chapter
that sums of the form $\sum g_{j}(\alpha _{1}^{j}(x_{1})+\cdots
+\alpha _{d}^{j}(x_{d}))$ with finitely many $\alpha _{i}^{j}(x_{i})$ are
capable not only of approximating multivariate functions but also of
representing them precisely. First we study the problem of representation of a
function $f:X\rightarrow \mathbb{R}$, where $X$ is any set, as a linear
superposition $\sum_{j}g_{j}(h_{j}(x))$ with arbitrary but fixed functions $%
h_{j}:X\rightarrow {{\mathbb{R}}}$. Then we apply the obtained result and
the famous Kolmogorov superposition theorem to prove representability of an
arbitrarily behaved multivariate function in the form of a generalized ridge
function $\sum g_{j}(\alpha _{1}^{j}(x_{1})+\cdots +\alpha
_{d}^{j}(x_{d}))$. We also study the uniqueness of representation
of functions by linear superpositions.
The material of this chapter is taken from \cite{49,Ism}.
\bigskip
\section{Representation theorems}
In this section, we study some problems of representation of real functions by
linear superpositions and linear combinations of generalized ridge functions.
\subsection{Problem statement and historical notes}
Let $X$ be any set and $h_{i}:X\rightarrow {{\mathbb{R}}},~i=1,...,r,$ be
arbitrarily fixed functions. Consider the set
\begin{equation*}
\mathcal{B}(X)=\mathcal{B}(h_{1},...,h_{r};X)=\left\{
\sum\limits_{i=1}^{r}g_{i}(h_{i}(x)),~x\in X,~g_{i}:\mathbb{R}\rightarrow
\mathbb{R},~i=1,...,r\right\} \eqno(4.1)
\end{equation*}%
Members of this set will be called linear superpositions with respect to the
functions $h_{1},...,h_{r}$ (see \cite{141}).
For a detailed study of linear superpositions and their approximation-theoretic
properties we refer the reader to the monograph by Khavinson \cite{76}.
Note that sums of generalized ridge functions $\sum g_{j}(\alpha _{1}^{j}(x_{1})+\cdots
+\alpha _{d}^{j}(x_{d}))$ with fixed $\alpha _{i}^{j}(x_{i})$
are a special case of linear superpositions. In Section 1.2,
we considered linear superpositions defined on a subset of the $d$%
-dimensional Euclidean space, while here $X$ is a set of arbitrary nature.
As in Section 1.2, we are interested in the question: what conditions on $X$
guarantee that each function on $X$ will be in the set $\mathcal{B}(X)$? The
simplest case $X\subset \mathbb{R}^{d},~r=d$ and $h_{i}$ are the coordinate
functions was solved in \cite{81}. See also \cite[p.57]{76} for the case $%
r=2.$
Denote by $\mathcal{B}_{c}(X)$ and $\mathcal{B}_{b}(X)$ the right-hand side
of (4.1) with continuous and bounded $g_{i}:\mathbb{R}\rightarrow \mathbb{R}%
,~i=1,...,r,$ respectively. Our starting point is the well-known
superposition theorem of Kolmogorov \cite{83}. It states that for the unit
cube $\mathbb{I}^{d},~\mathbb{I}=[0,1],~d\geq 2,$ there exist $2d+1$
functions $\{s_{q}\}_{q=1}^{2d+1}\subset C(\mathbb{I}^{d})$ of the form
\begin{equation*}
s_{q}(x_{1},...,x_{d})=\sum_{p=1}^{d}\varphi _{pq}(x_{p}),~\varphi _{pq}\in
C(\mathbb{I}),~p=1,...,d,~q=1,...,2d+1\eqno(4.2)
\end{equation*}%
such that each function $f\in C(\mathbb{I}^{d})$ admits the representation
\begin{equation*}
f(x)=\sum_{q=1}^{2d+1}g_{q}(s_{q}(x)),~x=(x_{1},...,x_{d})\in \mathbb{I}%
^{d},~g_{q}\in C({{\mathbb{R)}}}.\eqno(4.3)
\end{equation*}
Note that the functions $g_{q}(s_{q}(x))$ involved in the right-hand side of
(4.3) are generalized ridge functions. In our notation, (4.3) means that $%
\mathcal{B}_{c}(s_{1},...,s_{2d+1};\mathbb{I}^{d})=C(\mathbb{I}^{d}).$ This
surprising and deep result, which solved (negatively) Hilbert's 13th
problem, was improved and generalized in several directions. It was first
observed by Lorentz \cite{98} that the functions $g_{q}$ can be replaced by
a single continuous function $g.$ Sprecher \cite{128} showed that the
theorem can be proven with constant multiples of a single function $\varphi $
and translations. Specifically, $\varphi _{pq}$ in (4.2) can be chosen as $%
\lambda ^{p}\varphi (x_{p}+\varepsilon q),$ where $\varepsilon $ and $%
\lambda $ are some positive constants. Fridman \cite{31} succeeded in
showing that the functions $\varphi _{pq}$ can be constructed to belong to
the class $Lip(1).$ Vitushkin and Henkin \cite{141} showed that $\varphi
_{pq}$ cannot be taken to be continuously differentiable.
Ostrand \cite{115} extended the Kolmogorov theorem to general compact metric
spaces. In particular, he proved that for each compact $d$-dimensional
metric space $X$ there exist continuous real functions $\{\alpha
_{i}\}_{i=1}^{2d+1}\subset C(X)$ such that $\mathcal{B}_{c}(\alpha
_{1},...,\alpha _{2d+1};X)=C(X).$ Sternfeld \cite{130} showed that the
number $2d+1$ cannot be reduced for any $d$-dimensional space $X.$ Thus the
number of terms in the Kolmogorov superposition theorem is the best possible.
Some papers of Sternfeld were devoted to the representation of continuous
and bounded functions by linear superpositions. Let $C(X)$ and $B(X)$ denote
the space of continuous and bounded functions on some set $X$ respectively
(in the first case, $X$ is supposed to be a compact metric space). Let $%
F=\{h\}$ be a family of functions on $X.$ $F$ is called a uniformly
separating family (\textit{u.s.f.}) if there exists a number $0<\lambda \leq
1$ such that for each pair $\{x_{j}\}_{j=1}^{m}$, $\{z_{j}\}_{j=1}^{m}$ of
disjoint finite sequences in $X$, there exists some $h\in F$ so that if from
the two sequences $\{h(x_{j})\}_{j=1}^{m}$ and $\{h(z_{j})\}_{j=1}^{m}$ in $%
h(X)$ we remove a maximal number of pairs of points $h(x_{j_{1}})$ and $%
h(z_{j_{2}})$ with $h(x_{j_{1}})=h(z_{j_{2}}),$ there remain at least $%
\lambda m$ points in each sequence (or, equivalently, at most $(1-\lambda )m
$ pairs can be removed). Sternfeld \cite{132} proved that for a finite
family $F=\{h_{1},...,h_{r}\}$ of functions on $X$, being a \textit{u.s.f.}
is equivalent to the equality $\mathcal{B}_{b}(h_{1},...,h_{r};X)=B(X),$ and
that in the case where $X$ is a compact metric space and the elements of $F$
are continuous functions on $X$, the equality $\mathcal{B}%
_{c}(h_{1},...,h_{r};X)=C(X)$ implies that $F$ is a \textit{u.s.f.} Thus, in
particular, Sternfeld obtained that the formula (4.3) is valid for all
bounded functions, where $g_{q}$ are bounded functions depending on $f$ (see
also \cite[p.21]{76}).
Let $X$ be a compact metric space. The family $F=\{h\}\subset C(X)$ is said
to be a measure separating family (\textit{m.s.f.}) if there exists a number
$0<\lambda \leq 1$ such that for any measure $\mu $ in $\ C(X)^{\ast },$ the
inequality $\left\Vert \mu \circ h^{-1}\right\Vert \geq \lambda \left\Vert
\mu \right\Vert $ holds for some $h\in F.$ Sternfeld \cite{131} proved that $%
\mathcal{B}_{c}(h_{1},...,h_{r};X)=C(X)$ if and only if the family $%
\{h_{1},...,h_{r}\}$ is a \textit{m.s.f.} In \cite{132}, it was also shown
that if $r=2,$ then the properties \textit{u.s.f.} and \textit{m.s.f.} are
equivalent. Therefore, the equality $\mathcal{B}_{b}(h_{1},h_{2};X)=B(X)$ is
equivalent to $\mathcal{B}_{c}(h_{1},h_{2};X)=C(X).$ But for $r\,>2$, these
two properties are no longer equivalent. That is, $\mathcal{B}%
_{b}(h_{1},...,h_{r};X)=B(X)$ does not always imply $\mathcal{B}%
_{c}(h_{1},...,h_{r};X)=C(X)$ (see \cite{131}).
Our purpose is to consider the above mentioned problem of representation by
linear superpositions without involving any topology (that of continuity or
boundedness). We start with a characterization of those sets $X$ for which $%
\mathcal{B}(h_{1},...,h_{r};X)=T(X),$ where $T(X)$ is the space of all
functions on $X.$ As in Section 1.2, this will be done in terms of cycles.
We claim that nonexistence of cycles in $X$ is equivalent to the equality $%
\mathcal{B}(X)=T(X)$ for an arbitrary set $X$. In particular, we show that $%
\mathcal{B}_{c}(X)=C(X)$ always implies $\mathcal{B}(X)=T(X).$ This
implication will enable us to obtain some new results, namely extensions of
the previously known theorems from continuous to discontinuous multivariate
functions. For example, we will prove that the formula (4.3) is valid for
all discontinuous multivariate functions $f$ defined on the unit cube $%
\mathbb{I}^{d},$ where $g_{q}$ are univariate functions depending on $f.$
\bigskip
\subsection{Extension of Kolmogorov's superposition theorem}
In this subsection, we show that if some representation by linear
superpositions holds for continuous functions, then it holds for all
functions. This will lead us to natural extensions of some known
superposition theorems (such as Kolmogorov's superposition theorem,
Ostrand's superposition theorem, etc.) from continuous to discontinuous
functions.
In the sequel, by $\chi _{A}$ we will denote the characteristic function of
a set $\ A\subset \mathbb{R}.$ That is,
\begin{equation*}
\chi _{A}(y)=\left\{
\begin{array}{cl}
1, & \text{if }y\in A, \\
0, & \text{if }y\notin A.%
\end{array}%
\right.
\end{equation*}
The following definition is a generalized version of Definition 1.1 from
Section 1.2, where in connection with ridge functions only subsets of $%
\mathbb{R}^{d}$ were considered.
\bigskip
\textbf{Definition 4.1.} \textit{Given an arbitrary set $X$ and functions $%
h_{i}:X\rightarrow \mathbb{R},~i=1,...,r$. A set of points $%
\{x_{1},...,x_{n}\}\subset X$ is said to be a cycle with respect to the
functions $h_{1},...,h_{r}$ (or, concisely, a cycle if there is no
confusion), if there exists a vector $\lambda =(\lambda _{1},...,\lambda
_{n})$ with nonzero real coordinates $\lambda _{i},~i=1,...,n,$ such
that }
\begin{equation*}
\sum_{j=1}^{n}\lambda _{j}\chi _{h_{i}(x_{j})}=0,~i=1,...,r.\eqno(4.4)
\end{equation*}
\textit{A cycle $p=\{x_{1},...,x_{n}\}$ is said to be minimal if $p$ does
not contain any cycle as its proper subset.}
\bigskip
Note that in this definition the vector $\lambda =(\lambda _{1},\ldots
,\lambda _{n})$ can be chosen so that it has only integer components.
Indeed, for $i=1,...,r,$ let the set $\{h_{i}(x_{j}),~j=1,...,n\}$ have $%
k_{i}$ different values. Then it is not difficult to see that Eq. (4.4)
stands for a system of $\sum_{i=1}^{r}k_{i}$ homogeneous linear equations in
unknowns $\lambda _{1},...,\lambda _{n}.$ This system can be written in the
matrix form $(\lambda _{1},\ldots ,\lambda _{n})\times C=0,$ where $C$ is an
$n$ by $\sum_{i=1}^{r}k_{i}$ matrix. The basic property of this matrix is
that all of its entries are 0's and 1's and no row or column of $C$ is
identically zero. Since Eq. (4.4) has a nontrivial solution $(\lambda
_{1}^{^{\prime }},\ldots ,\lambda _{n}^{^{\prime }})\in \mathbb{R}^{n}$ and
all entries of $C$ are integers, by applying the Gauss elimination method we
can see that there always exists a nontrivial solution $(\lambda _{1},\ldots
,\lambda _{n})$ with integer components $\lambda _{i}$, $i=1,...,n$.
For a number of simple examples, see Section 1.2.
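The passage from a real to an integer solution can also be carried out constructively. The sketch below (illustrative code, not part of the text) builds the coefficient matrix of system (4.4) for given points and functions $h_{i}$, computes a rational null-space vector by Gaussian elimination over $\mathbb{Q}$, and clears denominators; for a minimal cycle every component of the resulting integer vector is nonzero.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def integer_cycle_vector(points, hs):
    """Build system (4.4) for points x_1..x_n and functions h_1..h_r, and
    return a nonzero integer solution (lambda_1..lambda_n), or None if only
    the trivial solution exists (i.e., the points form no cycle)."""
    n = len(points)
    # One homogeneous equation per pair (i, value attained by h_i).
    rows = []
    for h in hs:
        for v in sorted({h(x) for x in points}):
            rows.append([Fraction(int(h(x) == v)) for x in points])
    # Gaussian elimination to reduced row-echelon form over the rationals.
    pivots, r = [], 0
    for c in range(n):
        piv = next((k for k in range(r, len(rows)) if rows[k][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [e / rows[r][c] for e in rows[r]]
        for k in range(len(rows)):
            if k != r and rows[k][c] != 0:
                factor = rows[k][c]
                rows[k] = [a - factor * b for a, b in zip(rows[k], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None
    # Null-space vector with the first free variable set to 1.
    lam = [Fraction(0)] * n
    lam[free[0]] = Fraction(1)
    for row, c in zip(rows, pivots):
        lam[c] = -sum(row[j] * lam[j] for j in free)
    # Clear denominators to obtain integer components.
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (x.denominator for x in lam), 1)
    return [int(x * lcm) for x in lam]

# The four corners of the unit square, with the two coordinate functions,
# form a cycle with integer vector proportional to (1, -1, -1, 1).
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
coords = [lambda x: x[0], lambda x: x[1]]
assert integer_cycle_vector(square, coords) == [1, -1, -1, 1]
```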
Let $T(X)$ denote the set of all functions on $X.$ With each pair $%
\left\langle p,\lambda \right\rangle ,$ where $p=\{x_{1},...,x_{n}\}$ is a
cycle in $X$ and $\lambda =(\lambda _{1},...,\lambda _{n})$ is a vector
known from Definition 4.1, we associate the functional
\begin{equation*}
G_{p,\lambda }:T(X)\rightarrow \mathbb{R},~~G_{p,\lambda
}(f)=\sum_{j=1}^{n}\lambda _{j}f(x_{j}).
\end{equation*}%
In the following, such pairs $\left\langle p,\lambda \right\rangle $ will be
called \textit{cycle-vector pairs} of $X.$ It is clear that the functional $%
G_{p,\lambda }$ is linear. Besides, $G_{p,\lambda }(g)=0$ for all functions $%
g\in \mathcal{B}(h_{1},...,h_{r};X).$ Indeed, assume that (4.4) holds. Given
$i\leq r$, let $z=h_{i}(x_{j})$ for some $j$. Hence, $%
\sum_{j~(h_{i}(x_{j})=z)}\lambda _{j}=0$ and $\sum_{j~(h_{i}(x_{j})=z)}%
\lambda _{j}g_{i}(h_{i}(x_{j}))=0$. A summation yields $G_{p,\lambda
}(g_{i}\circ h_{i})=0$. Since $G_{p,\lambda }$ is linear, we obtain that $%
G_{p,\lambda }(\sum_{i=1}^{r}g_{i}\circ h_{i})=0$.
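The annihilation property just verified is easy to observe numerically. In the sketch below (illustrative; the value tables for $g_{1},g_{2}$ are chosen at random), the functional $G_{p,\lambda }$ vanishes on every superposition $g_{1}(h_{1}(x))+g_{2}(h_{2}(x))$.

```python
import random

def G(points, lam, f):
    """The functional G_{p,lambda}(f) = sum_j lambda_j * f(x_j)."""
    return sum(l * f(x) for x, l in zip(points, lam))

# A cycle in R^2 with respect to the coordinate functions h1, h2.
p   = [(0, 0), (0, 1), (1, 0), (1, 1)]
lam = [1, -1, -1, 1]
hs  = [lambda x: x[0], lambda x: x[1]]

# Random univariate functions g1, g2 given by value tables on {0, 1}.
random.seed(0)
tables = [{0: random.random(), 1: random.random()} for _ in hs]
superposition = lambda x: sum(t[h(x)] for t, h in zip(tables, hs))

# G_{p,lambda} annihilates every member of B(h1, h2; X).
assert abs(G(p, lam, superposition)) < 1e-12
```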
A minimal cycle $p=\{x_{1},...,x_{n}\}$ has the following obvious properties:
\begin{description}
\item[(a)] \textit{The vector $\lambda $ associated with $p$ by Eq. (4.4) is
unique up to multiplication by a constant;}
\item[(b)] \textit{If in (4.4), $\sum_{j=1}^{n}\left\vert \lambda
_{j}\right\vert =1,$ then all the numbers $\lambda _{j},~j=1,...,n,$ are
rational.}
\end{description}
Thus, a minimal cycle $p$ uniquely (up to a sign) defines the functional
\begin{equation*}
~G_{p}(f)=\sum_{j=1}^{n}\lambda _{j}f(x_{j}),\text{ \ }\sum_{j=1}^{n}\left%
\vert \lambda _{j}\right\vert =1.
\end{equation*}
\bigskip
\textbf{Proposition 4.1.} \textit{1) Let $X$ have cycles. A function $%
f:X\rightarrow \mathbb{R}$ belongs to the space $\mathcal{B}%
(h_{1},...,h_{r};X)$ if and only if $G_{p}(f)=0$ for any minimal cycle $%
p\subset X$ with respect to the functions $h_{1},...,h_{r}$.}
\textit{2) Let $X$ have no cycles. Then $\mathcal{B}(h_{1},...,h_{r};X)=T(X).$%
}
\bigskip
\textbf{Proposition 4.2.} \textit{$\mathcal{B}(h_{1},...,h_{r};X)=T(X)$ if
and only if $X$ has no cycles.}
\bigskip
These propositions are proved in the same way as Theorems 1.1 and 1.2. We
use these propositions to obtain our main result (see Theorem 4.1 below).
Whether or not $X$ has cycles depends both on $X$ and on the
functions $h_{1},...,h_{r}$. In the following, we will see that if $%
h_{1},...,h_{r}$ are ``nice" functions (smooth functions with a simple
structure, e.g., ridge functions) and $X\subset \mathbb{R}^{d}$ is a
``rich" set (for example, a set with interior points), then $X$ always has
cycles. Thus representability by linear combinations of univariate
functions with fixed ``nice" multivariate functions requires at least
that $X$ should not possess interior points. The picture is quite different
when the functions $h_{1},...,h_{r}$ are not ``nice". Even in the case when
they are continuous, we will see that many sets in $\mathbb{R}^{d}$ (the
unit cube, any compact subset thereof, or even the whole space $\mathbb{R}%
^{d}$ itself) may have no cycles. If we disregard continuity, there exists
even a single function $h$ such that every multivariate function is
representable as $g\circ h$ over any subset of $\mathbb{R}^{d}$. First, let
us introduce the following definition.
\bigskip
\textbf{Definition 4.2.} \textit{Let $X$ be a set and $h_{i}:X\rightarrow
\mathbb{R}, $ $i=1,...,r,$ be arbitrarily fixed functions. A class $A(X)$ of
functions on $X$ will be called a ``permissible function class" if for any
minimal cycle $p\subset X$ with respect to the functions $h_{1},...,h_{r}$
(if it exists), there is a function $f_{0}$ in $A(X)$ such that $%
G_{p}(f_{0})\neq 0. $}
\bigskip
Clearly, $C(X)$ and $B(X)$ are both permissible function classes (in case of
$C(X),$ $X$ is considered to be a normal topological space).
\bigskip
\textbf{Theorem 4.1.} \textit{Let $A(X)$ be a permissible function class. If
$A(X) \subset \mathcal{B}(h_{1},...,h_{r};X)$, then $\mathcal{B}%
(h_{1},...,h_{r};X)=T(X).$}
\bigskip
The proof is simple and based on Propositions 4.1 and 4.2. Assume for a
moment that $X$ admits a cycle $p$. By Proposition 4.1, the functional $G_{p}
$ annihilates all members of the set $\mathcal{B}(h_{1},...,h_{r};X).$ By Definition
4.2 of permissible function classes, $A(X)\ $contains a function $f_{0}$
such that $G_{p}(f_{0})\neq 0.$ Therefore, $f_{0}\notin \mathcal{B}(h_{1},...,h_{r};X)$%
. We see that the embedding $A(X) \subset \mathcal{B}(h_{1},...,h_{r};X)$ is
impossible if $X$ has a cycle. Thus $X$ has no cycles. Then by Proposition
4.2, $\mathcal{B}(h_{1},...,h_{r};X)=T(X).$
In the ``if part" of Theorem 4.1, instead of $\mathcal{B}(h_{1},...,h_{r};X)$
and $A(X)$ one can take $\mathcal{B}_{c}(h_{1},...,h_{r};X)$ and $C(X)$ (or $%
\mathcal{B}_{b}(h_{1},...,h_{r};X)$ and $B(X)$) respectively. That is, the
following corollaries are valid.
\bigskip
\textbf{Corollary 4.1.} \textit{Let $X$ be a set and $h_{i}:X\rightarrow
\mathbb{R},$ $i=1,...,r,$ be arbitrarily fixed bounded functions\textit{. If
$\mathcal{B}_{b}(h_{1},...,h_{r};X)=B(X)$, then $\mathcal{B}%
(h_{1},...,h_{r};X)=T(X).$}}
\bigskip \textbf{Corollary 4.2.} \textit{Let $X$ be a normal topological
space and $h_{i}:X\rightarrow \mathbb{R},$ $i=1,...,r,$ be arbitrarily fixed
continuous functions\textit{. If $\mathcal{B}_{c}(h_{1},...,h_{r};X)=C(X)$,
then $\mathcal{B}(h_{1},...,h_{r};X)=T(X).$}}
\bigskip
The main advantage of Theorem 4.1 is that we need not check directly if the
set $X$ has no cycles, which in many cases may turn out to be very tedious
task. Using this theorem, we can extend free-of-charge\ the existing
superposition theorems from the classes $B(X)$ or $C(X)$ (or some other
permissible function classes) to all functions defined on $X.$ For example,
this theorem allows us to extend the Kolmogorov superposition theorem from
continuous to all multivariate functions.
\bigskip
\textbf{Theorem 4.2.} \textit{Let $d\geq 2$, $\mathbb{I}=[0,1]$, and $%
~\varphi _{pq}, ~p=1,...,d, ~q=1,...,2d+1$, be the universal continuous
functions in (4.2). Then each multivariate function $f:\mathbb{I}%
^{d}\rightarrow \mathbb{R}$ can be represented in the form}
\begin{equation*}
f(x)=\sum_{q=1}^{2d+1}g_{q}(\sum_{p=1}^{d}\varphi
_{pq}(x_{p})),~x=(x_{1},...,x_{d})\in \mathbb{I}^{d}.
\end{equation*}%
\textit{where $g_{q}$ are univariate functions depending on $f.$}
\bigskip
It should be remarked that Sternfeld \cite{132}, in particular, obtained
that the formula (4.3) is valid for functions $f\in B(\mathbb{I}^{d})$
provided that $g_{q}$ are bounded functions depending on $f$ (see \cite[%
Chapter 1]{76} for more detailed information and interesting discussions).
Let $X$ be a compact metric space and $h_{i}\in C(X)$, $i=1,...,r.$ The
result of Sternfeld (see Section 4.1) and Corollary 4.1 give us the
implications
\begin{equation*}
\mathcal{B}_{c}(h_{1},...,h_{r};X)=C(X)\Rightarrow \mathcal{B}%
_{b}(h_{1},...,h_{r};X)=B(X)
\end{equation*}
\begin{equation*}
\Rightarrow \mathcal{B}(h_{1},...,h_{r};X)=T(X).
\end{equation*}
The first implication is invertible when $r=2$ (see \cite{132}). We want to
show that the second is not invertible even in the case $r=2.$ The following
interesting example is due to Khavinson \cite[p.67]{76}.
Let $X\subset \mathbb{R}^{2}$ consist of a broken line whose sides are
parallel to the coordinate axis and whose vertices are
\begin{equation*}
(0;0),(1;0),(1;1),(1+\frac{1}{2^{2}};1),(1+\frac{1}{2^{2}};1+\frac{1}{2^{2}}
),(1+\frac{1}{2^{2}}+\frac{1}{3^{2}};1+\frac{1}{2^{2}}),...
\end{equation*}
We add to this line the limit point of the vertices $(\frac{\pi ^{2}}{6},%
\frac{\pi ^{2}}{6})$. Let $r=2$ and $h_{1},h_{2}$ be the coordinate
functions. Then the set $X$ has no cycles with respect to $h_{1}$ and $%
h_{2}. $ By Proposition 4.1, every function $f$ on $X$ is of the form $%
g_{1}(x_{1})+g_{2}(x_{2})$, $(x_{1},x_{2})\in X$. Now construct a function $%
f_{0}$ on $X$ as follows. On the link joining $(0;0)$ to $(1;0)$ $%
f_{0}(x_{1},x_{2})$ continuously increases from $0$ to $1$; on the link from
$(1;0)$ to $(1;1)$ it continuously decreases from $1$ to $0$; on the link
from $(1;1)$ to $(1+\frac{1}{2^{2}};1)$ it increases from $0$ to $\frac{1}{2}
$; on the link from $(1+\frac{1}{2^{2}};1)$ to $(1+\frac{1}{2^{2}};1+\frac{1%
}{2^{2}})$ it decreases from $\frac{1}{2}$ to $0$; on the next link it
increases from $0$ to $\frac{1}{3}$, etc. At the point $(\frac{\pi ^{2}}{6},%
\frac{\pi ^{2}}{6})$ set the value of $f_{0}$ equal to $0.$ Obviously, $%
f_{0} $ is a continuous function and by the above argument, $%
f_{0}(x_{1},x_{2})=g_{1}(x_{1})+g_{2}(x_{2}).$ But $g_{1}$ and $g_{2}$
cannot be chosen as continuous functions, since they become unbounded as $x_{1}$
and $x_{2}$ tend to $\frac{\pi ^{2}}{6}$. Thus, $\mathcal{B}%
(h_{1},h_{2};X)=T(X)$, but at the same time $\mathcal{B}_{c}(h_{1},h_{2};X)%
\neq C(X)$ (or, equivalently, $\mathcal{B}_{b}(h_{1},h_{2};X)\neq B(X)$).
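The growth of $g_{1}$ and $g_{2}$ in this example can be made explicit by a short computation (a Python sketch added here for illustration; it is not part of Khavinson's construction). Normalizing $g_{1}(0)=g_{2}(0)=0$ and propagating the equality $f_{0}=g_{1}(x_{1})+g_{2}(x_{2})$ from link to link, the values of $g_{1}$ at the abscissae of the vertices are the partial sums of the harmonic series:

```python
# Illustration of Khavinson's example: propagate g1, g2 along the
# staircase using f0 = g1(x1) + g2(x2) and the fact that exactly one
# coordinate is constant on each link.
def staircase_g1_values(n_steps):
    """Return g1 at the right endpoint of each horizontal link."""
    g1, g2 = 0.0, 0.0          # normalization g1(0) = g2(0) = 0
    values = []
    for k in range(1, n_steps + 1):
        # horizontal link: x2 is constant, f0 rises from 0 to 1/k,
        # so g1 at the new abscissa equals 1/k - g2
        g1 = 1.0 / k - g2
        values.append(g1)
        # vertical link: x1 is constant, f0 drops back to 0,
        # so g2 at the new ordinate equals 0 - g1
        g2 = -g1
    return values

vals = staircase_g1_values(5)
# g1 takes the values 1, 1 + 1/2, 1 + 1/2 + 1/3, ...
```

Since the partial harmonic sums diverge while the vertices converge to $(\frac{\pi ^{2}}{6},\frac{\pi ^{2}}{6})$, neither $g_{1}$ nor $g_{2}$ can stay bounded, let alone continuous, near the limit point.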
\bigskip
\subsection{Some other superposition theorems}
We have seen in the previous subsection that the unit cube in $\mathbb{R}%
^{d} $ has no cycles with respect to some $2d+1$ continuous functions
(namely, the Kolmogorov functions $s_{q}$ (4.2)). From the result of Ostrand
\cite{115} (see Section 4.1) and Corollary 4.2 it follows that compact sets $%
X$ of finite dimension also lack cycles with respect to a certain family of
finitely many continuous functions on $X$. Namely, the following
generalization of Ostrand's theorem is valid.
\bigskip
\textbf{Theorem 4.3.} \textit{For $p=1,2,...,m$ let $X_{p}$ be a compact
metric space of finite dimension $d_{p}$ and let $n=\sum_{p=1}^{m}d_{p}.$
There exist continuous functions $\alpha _{pq}:X_{p}\rightarrow \lbrack
0,1], $ $p=1,...,m,$ $q=1,...,2n+1,$ such that every real function $f$
defined on $\Pi _{p=1}^{m}X_{p}$ is representable in the form}
\begin{equation*}
f(x_{1},...,x_{m})=\sum_{q=1}^{2n+1}g_{q}(\sum_{p=1}^{m}\alpha _{pq}(x_{p})),%
\eqno(4.5)
\end{equation*}%
\textit{where $g_{q}$ are real functions depending on $f$. If $f$ is
continuous, then the functions $g_{q}$ can be chosen continuous.}
\bigskip
Note that Ostrand proved the ``if $f$ is continuous...'' part of Theorem 4.3,
while we prove the validity of (4.5) for discontinuous $f$.
One may ask whether there exists a finite family of functions $\{h_{i}:\mathbb{R}%
^{d}\rightarrow \mathbb{R}\}_{i=1}^{n}$ such that no subset of $\mathbb{R}%
^{d}$ admits cycles with respect to this family. The answer is
positive. This follows from the result of Demko \cite{23}: there exist $2d+1$
continuous functions $\varphi _{1},...,\varphi _{2d+1}$ defined on $\mathbb{R%
}^{d}$ such that every bounded continuous function on $\mathbb{R}^{d}$ is
expressible in the form $\sum_{i=1}^{2d+1}g\circ \varphi _{i}$ for some $%
g\in C(\mathbb{R})$. This theorem together with Corollary 4.1 yield that
every function on $\mathbb{R}^{d}$ is expressible in the form $%
\sum_{i=1}^{2d+1}g_{i}\circ \varphi _{i}$ for some $g_{i}:\mathbb{R}%
\rightarrow \mathbb{R},~i=1,...,2d+1$. We do not yet know whether $g_{i}$ here
can be replaced by a single univariate function. We also do not know whether the
number $2d+1$ can be reduced so that the whole space of $\mathbb{R}^{d}$ (or
any $d$-dimensional compact subset of that, or at least the unit cube $%
\mathbb{I}^{d}$) has no cycles with respect to some continuous functions $%
\varphi _{1},...,\varphi _{k}:\mathbb{R}^{d}\rightarrow \mathbb{R}$, where $%
k<2d+1$. One of the basic results of Sternfeld \cite{130} says that the
dimension of a compact metric space $X$ equals $d$ if and only if there
exist functions $\varphi _{1},...,\varphi _{2d+1}\in C(X)$ such that $%
\mathcal{B}_{c}(\varphi _{1},...,\varphi _{2d+1};X)=C(X)$ and for any family $%
\{\psi _{i}\}_{i=1}^{k}\subset C(X),$ $k<2d+1$, we have $\mathcal{B}%
_{c}(\psi _{1},...,\psi _{k};X)\neq C(X).$ In particular, from this result
it follows that the number of terms in the Kolmogorov superposition theorem
cannot be reduced. But since the equalities $\mathcal{B}_{c}(X)=C(X)$ and $%
\mathcal{B}(X)=T(X)$ are not equivalent, the above question on the
nonexistence of cycles in $\mathbb{R}^{d}$ with respect to less than $2d+1$
continuous functions is far from trivial.
If we disregard continuity, one can construct even a single function $\varphi :%
\mathbb{R}^{d}\rightarrow \mathbb{R}$ such that the whole space $\mathbb{R}%
^{d}$ will not possess cycles with respect to $\varphi $ and therefore,
every function $f:\mathbb{R}^{d}\rightarrow \mathbb{R}$ will admit the
representation $f=g\circ \varphi $ with some univariate $g$ depending on $f$%
. Our argument easily follows from Corollary 4.2 and the result of Sprecher
\cite{127}: for any natural number $d$, $d\geq 2$, there exist functions $%
h_{p}:\mathbb{I}\rightarrow \mathbb{R}$, $p=1,...,d,$ such that every
function $f\in C(\mathbb{I}^{d})$ can be represented in the form
\begin{equation*}
f(x_{1},...,x_{d})=g\left( \sum_{p=1}^{d}h_{p}(x_{p})\right) ,\eqno(4.6)
\end{equation*}%
where $g$ is a univariate (generally discontinuous) function depending on $f$%
.
Note that the function on the right-hand side of (4.6) is a generalized
ridge function. Thus, the result of Sprecher together with our result means
that every multivariate function $f$ is representable as a generalized ridge
function $g\left( \cdot \right)$, and if $f$ is continuous, then $g$ can be
chosen continuous as well.
\bigskip
\textbf{Remark 4.1.} Concerning ordinary ridge functions $g(\mathbf{a}\cdot
\mathbf{x})$, representation of every multivariate function by linear
combinations of such functions may not be possible over many sets in $%
\mathbb{R}^{d}$. For example, this is not possible for sets having interior
points. More precisely, assume we are given finitely many nonzero directions
$\mathbf{a}^{1},...,\mathbf{a}^{r}$ in $\mathbb{R}^{d}$. Then $\mathcal{R}%
\left( \mathbf{a}^{1},...,\mathbf{a}^{r};X\right) \neq T(X)$ for any set $%
X\subset \mathbb{R}^{d}$ with a nonempty interior. Indeed, let $\mathbf{y}$
be a point in the interior of $X$. Consider vectors $\mathbf{b}^{i}$, $%
i=1,...,r,$ with sufficiently small coordinates such that $\mathbf{a}%
^{i}\cdot \mathbf{b}^{i}=0$, $i=1,...,r$. Note that the vectors $\mathbf{b}%
^{i}$, $i=1,...,r,$ can be chosen pairwise linearly independent. With each
vector $\mathbf{\varepsilon }=(\varepsilon _{1},...,\varepsilon _{r})$, $%
\varepsilon _{i}\in \{0,1\}$, $i=1,...,r,$ we associate the point
\begin{equation*}
\mathbf{x}_{\mathbf{\varepsilon }}=\mathbf{y+}\sum_{i=1}^{r}\varepsilon _{i}%
\mathbf{b}^{i}.
\end{equation*}%
Since the coordinates of $\mathbf{b}^{i}$ are sufficiently small, we may
assume that all the points $\mathbf{x}_{\mathbf{\varepsilon }}$ are in the
interior of $X$. We correspond each point $\mathbf{x}_{\mathbf{\varepsilon }%
} $ to the number $(-1)^{\left\vert \mathbf{\varepsilon }\right\vert }$,
where $\left\vert \mathbf{\varepsilon }\right\vert =\varepsilon _{1}+\cdots
+\varepsilon _{r}.$ One may easily verify that the pair $\left\langle \{%
\mathbf{x}_{\mathbf{\varepsilon }}\},\{(-1)^{\left\vert \mathbf{\varepsilon }%
\right\vert }\}\right\rangle $ is a cycle-vector pair of $X$. Therefore, by
Proposition 4.2, $\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{r};X\right) \neq T(X).$
Note that the above method of construction of the set $\{\mathbf{x}_{\mathbf{%
\varepsilon }}\}$ is due to Lin and Pinkus \cite{95}.
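The construction in Remark 4.1 can be checked numerically. The sketch below (Python; the concrete directions, perturbation vectors and test functions are our own illustrative choices) builds the points $\mathbf{x}_{\mathbf{\varepsilon }}$ for $d=r=2$ and verifies that the weights $(-1)^{\left\vert \mathbf{\varepsilon }\right\vert }$ annihilate every sum of ridge functions with directions $\mathbf{a}^{1},\mathbf{a}^{2}$:

```python
# Numerical check that the points x_eps = y + sum_i eps_i * b^i,
# weighted by (-1)^{|eps|}, form a cycle-vector pair: every sum of
# ridge functions with directions a^i is annihilated by the weights.
from itertools import product
import math

a = [(1.0, 0.0), (0.0, 1.0)]          # directions a^1, a^2
b = [(0.0, 0.1), (0.1, 0.0)]          # chosen so that a^i . b^i = 0
y = (0.5, 0.5)                        # an interior point of X

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def cycle_sum(g):
    """Weighted sum of g1(a1.x) + g2(a2.x) over the cycle points."""
    total = 0.0
    for eps in product((0, 1), repeat=2):
        x = (y[0] + eps[0] * b[0][0] + eps[1] * b[1][0],
             y[1] + eps[0] * b[0][1] + eps[1] * b[1][1])
        weight = (-1) ** sum(eps)
        total += weight * (g[0](dot(a[0], x)) + g[1](dot(a[1], x)))
    return total

# Arbitrary (nonlinear) univariate functions g1, g2:
s = cycle_sum((math.sin, math.exp))   # vanishes up to rounding error
```

The cancellation is structural: $\mathbf{a}^{i}\cdot \mathbf{x}_{\mathbf{\varepsilon }}$ does not depend on $\varepsilon _{i}$, so the terms pair off with opposite signs regardless of the choice of $g_{1},g_{2}$.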
\bigskip
\textbf{Remark 4.2.} A different generalization of ridge functions was
considered in Lin and Pinkus \cite{95}. This generalization involves
multivariate functions of the form $g(A\mathbf{x})$, where $\mathbf{x}\in
\mathbb{R}^{d}$ is the variable, $A$ is a fixed $d\times n$ matrix, $1\leq
n<d$, and $g$ is a real-valued function defined on $\mathbb{R}^{n}$. For $%
n=1,$ this reduces to a ridge function.
\bigskip
\section{Uniqueness theorems}
Let $Q$ be a set such that every function on $Q$ can be represented by
linear superpositions. This representation is generally not unique. But for
some sets it may be unique provided that initial values of the representing
functions are prescribed at some point of $Q$. In this section, we are going
to study properties of such sets. All the obtained results are valid, in
particular, for linear combinations of generalized ridge functions.
\subsection{Formulation of the problem}
Assume $X$ is an arbitrary set, $h_{i}:X\rightarrow \mathbb{R}$, $i=1,\ldots
,r$, are fixed functions and $\mathcal{B}(X)$ is the set defined in (4.1).
Let $T(X)$ denote the set of all real functions on $X$. Obviously, $%
\mathcal{B}(X)$ is a linear subspace of $T(X)$. For a set $Q\subset X$,
let $T(Q)$ and $\mathcal{B}(Q)$ denote the restrictions of $T(X)$ and $%
\mathcal{B}(X)$ to $Q$, respectively. Sets $Q$ with the property $\mathcal{B}%
(Q)=T(Q)$ will be called \textit{representation sets}. Recall that
Proposition 4.2 gives a complete characterization of such sets. For a
representation set $Q$, we will also use the notation $Q\in RS.$ Here, $RS$
stands for the set of all representation sets in $X$.
Let $Q\in RS.$ Clearly for a function $f$ defined on $Q$ the representation
\begin{equation*}
f(x)=\sum_{i=1}^{r}g_{i}(h_{i}(x)),~x\in Q\eqno(4.7)
\end{equation*}%
is not unique. We are interested in the uniqueness of such representation
under some reasonable restrictions on the functions $g_{i}\circ h_{i}$.
These restrictions may be various, but in this section, we require that the
values of $g_{i}\circ h_{i}$ are prescribed at some point $x_{0}\in Q$. That
is, we require that
\begin{equation*}
g_{i}(h_{i}(x_{0}))=a_{i},~i=1,...,r-1,\eqno(4.8)
\end{equation*}%
where $a_{i}$ are arbitrarily fixed real numbers. Is representation (4.7)
subject to initial conditions (4.8) always unique? Obviously not. We are
going to identify those representation sets $Q$ for which representation
(4.7) subject to conditions (4.8) is unique for all functions $%
f:Q\rightarrow \mathbb{R}$. In the sequel, such sets $Q$ will be called
\textit{unicity sets}.
\bigskip
\subsection{Complete representation sets}
From Proposition 4.2 it is easy to obtain the following set-theoretic
properties of representation sets:
\bigskip
(1) $Q\in RS$ $\Longleftrightarrow $ $A\in RS$ for every finite set $%
A\subset Q$;
(2) The union of any linearly ordered (under inclusion) system of
representation sets is also a representation set;
(3) For any representation set $Q$ there is a maximal representation set,
that is, a set $M\in RS$ such that $Q\subset M$ and for any $P\supset M$, $%
P\in RS$ we have $P=M$.
(4) If $M\subset X$ is a maximal representation set, then $h_{i}(M)=h_{i}(X)$%
, $i=1,...,r$.
\bigskip
Properties (1) and (2) are obvious, since any cycle is a finite set.
Property (3) follows from (2) and Zorn's lemma. To prove property (4),
note that if $x_{0}\in X$ and $h_{i}(x_{0})\notin h_{i}(M)$ for some $i$,
one can construct the representation set $M\cup \{x_{0}\}$, which is bigger
than $M$. But this is impossible, since $M$ is maximal.
\bigskip
\textbf{Definition 4.3.} \textit{A set $Q\subset X$ is called a complete
representation set if $Q$ itself is a representation set and there is no
other representation set $P$ such that $Q\subset P$ and $h_{i}(P)=h_{i}(Q)$,
$i=1,...,r$.}
\bigskip
The set of all complete representation sets of $X$ will be denoted by $CRS$.
Obviously, every representation set is contained in a complete
representation set. That is, if $A\in RS$, then there exists $B\in CRS$ such
that $h_{i}(B)=h_{i}(A),$ $i=1,...,r.$ It turns out that for the functions $%
h_{1},...,h_{r}$, complete representation sets entirely characterize unicity
sets. To prove this fact we need some auxiliary lemmas.
\bigskip
\textbf{Lemma 4.1.} \textit{Let $Q\subset X$ be a representation set and for
some point $x_{0}\in Q$ the zero function representation}
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ \ }x\in Q,
\end{equation*}%
\textit{is unique, provided that $g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1$.
That is, all the functions $g_{i}\equiv 0$ on the sets $h_{i}(Q)$, $%
i=1,...,r.$ Then $Q\in CRS.$}
\bigskip
\begin{proof} Assume that $Q\notin CRS$. Then there exists a point $p\in X$ such
that $p\notin Q$, $h_{i}(p)\in h_{i}(Q)$, for all $i=1,...,r,$ and $%
Q^{^{\prime }}=Q\cup \{p\}$ is also a representation set. Consider a
function $f_{0}:Q^{^{\prime }}\rightarrow \mathbb{R}$ such that $f_{0}(q)=0$%
, for any $q\in Q$ and $f_{0}(p)=1.$ Since $Q^{^{\prime }}\in RS$,
\begin{equation*}
f_{0}(x)=\sum_{i=1}^{r}s_{i}(h_{i}(x)),\text{ \ }x\in Q^{^{\prime }}.
\end{equation*}%
Then
\begin{equation*}
f_{0}(x)=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ \ }x\in Q^{^{\prime }},\eqno%
(4.9)
\end{equation*}%
where
\begin{equation*}
g_{i}(h_{i}(x))=s_{i}(h_{i}(x))-s_{i}(h_{i}(x_{0})),\text{ }i=1,...,r-1
\end{equation*}%
and
\begin{equation*}
g_{r}(h_{r}(x))=s_{r}(h_{r}(x))+\sum_{i=1}^{r-1}s_{i}(h_{i}(x_{0})).
\end{equation*}%
\qquad
A restriction of representation (4.9) to the set $Q$ gives the equality
\begin{equation*}
\sum_{i=1}^{r}g_{i}(h_{i}(x))=0,\text{ for all }x\in Q.\eqno(4.10)
\end{equation*}%
Note that $g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1.$ It follows from the
hypothesis of the lemma that representation (4.10) is unique. Hence, $%
g_{i}(h_{i}(x))=0,$ for all $x\in Q$ and $i=1,...,r.$ But from (4.9) it
follows that
\begin{equation*}
\sum_{i=1}^{r}g_{i}(h_{i}(p))=f_{0}(p)=1.
\end{equation*}%
Since $h_{i}(p)\in h_{i}(Q)$ for all $i=1,...,r,$ the above relation
contradicts that the functions $g_{i}$ are identically zero on the sets $%
h_{i}(Q)$, $i=1,...,r.$ This means that our assumption is not true and $Q\in
CRS.$
\end{proof}
The following lemma is a strengthened version of Lemma 4.1.
\bigskip
\textbf{Lemma 4.2.} \textit{Let $Q\in RS$ and for some point $x_{0}\in Q$,
numbers $c_{1},c_{2},...,c_{r-1}\in \mathbb{R}$ and a function $v\in T(Q)$
the representation}
\begin{equation*}
v(x)=\sum_{i=1}^{r}v_{i}(h_{i}(x))
\end{equation*}%
\textit{is unique under the initial conditions $v_{i}(h_{i}(x_{0}))=c_{i},$ $%
i=1,...,r-1$. Then for any numbers $b_{1},b_{2}...,b_{r-1}\in \mathbb{R}$
and an arbitrary function $f\in T(Q)$ the representation}
\begin{equation*}
f(x)=\sum_{i=1}^{r}f_{i}(h_{i}(x))
\end{equation*}%
\textit{is also unique, provided that $f_{i}(h_{i}(x_{0}))=b_{i},$ $%
i=1,...,r-1$. Besides, $Q\in CRS.$}
\bigskip
\begin{proof} Assume the contrary. Assume that there is a function $f\in T(Q)$
having two different representations subject to the same initial conditions.
That is,
\begin{equation*}
f(x)=\sum_{i=1}^{r}f_{i}(h_{i}(x))=\sum_{i=1}^{r}f_{i}^{^{\prime }}(h_{i}(x))
\end{equation*}%
with $f_{i}(h_{i}(x_{0}))=f_{i}^{^{\prime }}(h_{i}(x_{0}))=b_{i},$ $%
i=1,...,r-1$ and $f_{i}\neq f_{i}^{^{\prime }}$ for some index $i\in
\{1,...,r\}.$ In this case, the function $v(x)$ will possess the following
two different representations
\begin{equation*}
v(x)=\sum_{i=1}^{r}v_{i}(h_{i}(x))=\sum_{i=1}^{r}\left[
v_{i}(h_{i}(x))+f_{i}(h_{i}(x))-f_{i}^{^{\prime }}(h_{i}(x))\right] ,
\end{equation*}%
both satisfying the initial conditions. This contradiction, together with
Lemma 4.1, completes the proof.
\end{proof}
In the sequel, we will assume that for any points $t_{i}\in h_{i}(X),$ $%
i=1,...,r,$ the system of equations $h_{i}(x)=t_{i}$, $i=1,...,r,$ has at
least one solution.
\bigskip
\textbf{Lemma 4.3.} \textit{Let $Q\in CRS.$ Then for any point $x_{0}\in Q$
the representation}
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ }x\in Q,\eqno(4.11)
\end{equation*}%
\textit{subject to the conditions}
\begin{equation*}
g_{i}(h_{i}(x_{0}))=0,\text{ }i=1,...,r-1,\eqno(4.12)
\end{equation*}%
\textit{is unique. That is, $g_{i}\equiv 0$ on the sets $h_{i}(Q)$, $%
i=1,...,r.$}
\bigskip
\begin{proof} Assume the contrary. Assume that representation (4.11) subject to
(4.12) is not unique, or in other words, not all of $g_{i}$ are identically
zero. Without loss of generality, we may suppose that $g_{r}(h_{r}(y))\neq 0,
$ for some $y\in Q.$ Let $\xi \in X$ be a solution of the system of
equations $h_{i}(x)=h_{i}(x_{0}),$ $i=1,...,r-1,$ and $h_{r}(x)=h_{r}(y)$.
Therefore, $g_{i}(h_{i}(\xi ))=0,$ $i=1,...,r-1,$ and $g_{r}(h_{r}(\xi
))\neq 0.$ Obviously, $\xi \notin Q;$ otherwise we would have $%
g_{r}(h_{r}(\xi ))=0.$
We are going to prove that $Q^{\prime }=Q\cup \{\xi \}$ is a representation
set. For this purpose, consider an arbitrary function $f:Q^{\prime
}\rightarrow \mathbb{R}$. The restriction of $f$ to the set $Q$ admits a
decomposition
\begin{equation*}
f(x)=\sum_{i=1}^{r}t_{i}(h_{i}(x)),\text{ }x\in Q.
\end{equation*}
One is allowed to fix the values $t_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1.$
Note that then $t_{i}(h_{i}(\xi ))=0,$ $i=1,...,r-1.$ Consider now the
functions
\begin{equation*}
v_{i}(h_{i}(x))=t_{i}(h_{i}(x))+\frac{f(\xi )-t_{r}(h_{r}(\xi ))}{%
g_{r}(h_{r}(\xi ))}g_{i}(h_{i}(x)),\text{ }x\in Q^{\prime },\text{ }%
i=1,...,r.
\end{equation*}
It can be easily verified that
\begin{equation*}
f(x)=\sum_{i=1}^{r}v_{i}(h_{i}(x)),\text{ }x\in Q^{\prime }.
\end{equation*}%
Since $f$ is arbitrary, we obtain that $Q^{\prime }\in RS,$ where $Q^{\prime
}\supset Q$ and $h_{i}(Q^{\prime })=h_{i}(Q),$ $i=1,...,r.$ But this
contradicts the hypothesis of the lemma that $Q\in CRS$.
\end{proof}
The following theorem is valid.
\bigskip
\textbf{Theorem 4.4.} \textit{$Q\in CRS$ if and only if for any $x_{0}\in Q,$
any $f\in T(Q)$ and any $a_{1},...,a_{r-1}\in \mathbb{R}$ the representation}
\begin{equation*}
f(x)=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ }x\in Q,
\end{equation*}%
\textit{subject to the conditions $g_{i}(h_{i}(x_{0}))=a_{i},$ $i=1,...,r-1,$
is unique. Equivalently, a set $Q\in CRS$ if and only if it is a unicity set.%
}
\bigskip
Theorem 4.4 is an obvious consequence of Lemmas 4.2 and 4.3.
\bigskip
\textbf{Remark 4.3.} In Theorem 4.4, all the words ``any'' can be replaced
with the word ``some''.
\bigskip
\textbf{Remark 4.4.} For the case $X=X_{1}\times \cdots \times
X_{n}$, the possibility and uniqueness of the representation by sums $%
\sum_{i=1}^{n}u_{i}(x_{i})$, $u_{i}:X_{i}\rightarrow \mathbb{R}$,
$i=1,...,n$, were investigated in \cite{81,80}.
\bigskip
\textbf{Examples.} Let $r=2,$ $X=\mathbb{R}^{2},$ $%
h_{1}(x_{1},x_{2})=x_{1}+x_{2},$ $h_{2}(x_{1},x_{2})=x_{1}-x_{2},$ $Q$ be
the graph of the function $x_{2}=\arcsin (\sin x_{1}).$ The set $Q$ has no
cycles with respect to the functions $h_{1}$ and $h_{2}.$ Therefore, by
Proposition 4.2, $Q\in RS.$ By adding a point $p\notin Q$, we obtain the set
$Q\cup \{p\},$ which contains a cycle and hence is not a representation set.
Thus, $Q\in CRS$ and hence $Q$ is a unicity set.
Let now $r=2,$ $X=\mathbb{R}^{2},$ $h_{1}(x_{1},x_{2})=x_{1},$ $%
h_{2}(x_{1},x_{2})=x_{2},$ and $Q$ be the graph of the function $x_{2}=x_{1}.
$ Clearly, $Q\in RS$ and $Q\notin CRS.$ By the definition of complete
representation sets, there is a set $P\supset Q$ such that $P\in RS$ and for
any $T\supset P$, $T$ is not a representation set. There are many sets $P$
with this property. One of them can be obtained by adding to $Q$ any
straight line $l$ parallel to one of the coordinate axes. Indeed, if $%
y\notin Q\cup l,$ then the set $Q_{1}=Q\cup l\cup \{y\}$ contains a
four-point cycle (with one vertex as $y$, two vertices lying on $l$ and one
vertex lying on $Q$). This means that $Q_{1}\notin RS$ and hence $Q\cup l\in
CRS.$
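The cycle arguments in these examples can be mechanized. The following sketch (Python with NumPy; an added illustration, not part of the text) decides whether a finite planar set carries a cycle-vector pair with respect to the coordinate functions $h_{1}(x)=x_{1}$, $h_{2}(x)=x_{2}$: a cycle vector is precisely a nonzero $\lambda$ in the null space of $A^{T}$, where $A$ is the incidence matrix of points versus level sets of $h_{1}$ and $h_{2}$.

```python
# Cycle test for a finite planar set with respect to the coordinate
# functions h1(x) = x1, h2(x) = x2: the set carries a cycle-vector
# pair iff some nonzero lambda sums to zero over every level set,
# i.e. iff the incidence matrix A has rank below the number of points.
import numpy as np

def has_cycle(points):
    """True iff the finite set has a cycle w.r.t. the coordinate maps."""
    cols = {}                       # one column per distinct h_i-value
    for i in range(2):              # h1, h2
        for p in points:
            cols.setdefault((i, p[i]), len(cols))
    A = np.zeros((len(points), len(cols)))
    for j, p in enumerate(points):
        for i in range(2):
            A[j, cols[(i, p[i])]] = 1.0
    # lambda is a cycle vector iff A^T lambda = 0 and lambda != 0
    return np.linalg.matrix_rank(A) < len(points)

# Four points forming the classical rectangle cycle:
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
# Three of them (no cycle possible):
path = [(0, 0), (0, 1), (1, 1)]
```

For the four-point rectangle the rank drops below the number of points (the alternating vector $\lambda =(1,-1,1,-1)$ is a cycle vector), while the three-point path admits no cycle.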
\bigskip
The following corollary can be easily obtained from Theorem 4.4 and Lemma
4.2.
\bigskip
\textbf{Corollary 4.3.} \textit{$Q\in CRS$ if and only if $Q\in RS$ and in
the representation}
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ }x\in Q,
\end{equation*}%
\textit{all the functions $g_{i},$ $i=1,...,r,$ are constants.}
\bigskip
We have seen that complete representation sets enjoy the unicity property.
Let us study some other properties of these sets. The following properties
are valid.
\bigskip
(a) If $Q_{1},Q_{2}\in CRS,$ $Q_{1}\cap Q_{2}\neq \emptyset $ and $Q_{1}\cup
Q_{2}\in RS$, then $Q_{1}\cup Q_{2}\in CRS.$
(b) Let $\{Q_{\alpha }\},$ $\alpha \in \Phi ,$ be a family of complete
representation sets such that $\cap _{\alpha \in \Phi }Q_{\alpha }\neq
\emptyset $ and $\cup _{\alpha \in \Phi }Q_{\alpha }\in RS.$ Then $\cup
_{\alpha \in \Phi }Q_{\alpha }\in CRS.$
\bigskip
The above two properties follow from Corollary 4.3. Note that (b) is a
generalization of (a). The following property is a consequence of (b) and
property (2) of representation sets.
\bigskip
(c) Let $\{Q_{\alpha }\},$ $\alpha \in \Phi ,$ be a totally ordered (under
inclusion) family of complete representation sets. Then $\cup _{\alpha \in
\Phi }Q_{\alpha }\in CRS.$
\bigskip
We know that every representation set $A$ is contained in a complete
representation set $Q$ such that $h_{i}(A)=h_{i}(Q),$ $i=1,...,r.$ What can
we say about the set $Q\backslash A$? Clearly, $Q\backslash A\in RS.$ But
can we choose $Q$ so that $Q\backslash A\in CRS$? The following theorem
answers this question.
\bigskip
\textbf{Theorem 4.5.} \textit{Let $A\in RS$ and $A\notin CRS.$ Then there
exists a set $B\in CRS$ such that $A\subset B,$ $h_{i}(A)=h_{i}(B),$ $%
i=1,...,r,$ and $B\backslash A\in CRS.$}
\bigskip
\begin{proof} Since the representation set $A$ is not complete, there exists a
point $p\notin A$ such that $h_{i}(p)\in h_{i}(A),$ $i=1,...,r,$ and $%
A^{\prime }=A\cup \{p\}\in RS$. By $\mathcal{M}$ denote the collection of
sets $M$ such that
1) $A\subset M$ and $M\in RS$;
2) $h_{i}(M)=h_{i}(A)$ for all $i=1,...,r$;
3) $M\backslash A\in CRS.$
Obviously, $\mathcal{M}$ is not empty. It contains the above set $A^{\prime
} $. Consider the partial order on $\mathcal{M}$ defined by inclusion. Let $%
\{M_{\beta }\},$ $\beta \in \Gamma $, be any chain in $\mathcal{M}$. The set
$\cup _{\beta \in \Gamma }M_{\beta }$ is an upper bound for this chain. To
see this, let us check that $\cup _{\beta \in \Gamma }M_{\beta }$ belongs to
$\mathcal{M}$. That is, all the above conditions 1)-3) are satisfied. Indeed,
1) $A\subset \cup _{\beta \in \Gamma }M_{\beta }$ and $\cup _{\beta \in
\Gamma }M_{\beta }\in RS.$ This follows from property (2) of representation
sets;
2) $h_{i}(\cup _{\beta \in \Gamma }M_{\beta })=\cup _{\beta \in \Gamma
}h_{i}(M_{\beta })=\cup _{\beta \in \Gamma }h_{i}(A)=h_{i}(A),$ $i=1,...,r$;
3) $\cup _{\beta \in \Gamma }M_{\beta }\backslash A\in CRS$. This follows
from property (c) of complete representation sets and the facts that $%
M_{\beta }\backslash A\in CRS$ for any $\beta \in \Gamma $ and the system $%
\{M_{\beta }\backslash A\}$, $\beta \in \Gamma $, is totally ordered under
inclusion.
Thus we see that any chain in $\mathcal{M}$ has an upper bound. By Zorn's
lemma, there are maximal sets in $\mathcal{M}$. Let $B$ be one of them.
Let us now prove that $B\in CRS$. Assume on the contrary that $B\notin CRS$.
Then by Lemma 4.2, for any point $x_{0}\in B$ the representation
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ }x\in B,\eqno(4.13)
\end{equation*}%
subject to the conditions $g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1,$ is not
unique. That is, there is a point $y\in B$ such that for some index $i,$ $%
g_{i}(h_{i}(y))\neq 0.$ Without loss of generality we may assume that $%
g_{r}(h_{r}(y))\neq 0$. Clearly, $y$ cannot belong to $B\backslash A$, since
$B\backslash A\in CRS$ and over complete representation sets, the zero
function has a trivial representation provided that conditions (4.12) hold.
Thus, $y\in A$. Let $\xi \in X$ be a point such that $h_{i}(\xi
)=h_{i}(x_{0}),$ $i=1,...,r-1$, and $h_{r}(\xi )=h_{r}(y).$ The point $\xi
\notin B,$ otherwise from (4.13) we would obtain that $%
g_{r}(h_{r}(y))=g_{r}(h_{r}(\xi ))=0$. Following the techniques in the proof
of Lemma 4.3, it can be shown that $B_{1}=B\cup \{\xi \}\in RS$.
Now we prove that $B_{1}\backslash A\in CRS$. Consider the representation
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}^{\prime }(h_{i}(x)),\text{ }x\in B_{1}\backslash A,%
\eqno(4.14)
\end{equation*}%
subject to the conditions $g_{i}^{\prime }(h_{i}(x_{0}))=0,$ $i=1,...,r-1,$
where $x_{0}$ is some point in $B\backslash A.$ This representation is
unique on $B\backslash A,$ since $B\backslash A\in CRS$. That is, all the
functions $g_{i}^{\prime }$ are identically zero on $h_{i}(B\backslash A),$ $%
i=1,...,r$. On the other hand, since $g_{i}^{\prime }(h_{i}(\xi
))=g_{i}^{\prime }(h_{i}(x_{0}))=0$, for all $i=1,...,r-1$, we obtain that $%
g_{r}^{\prime }(h_{r}(\xi ))=0.$ This means that representation (4.14)
subject to the conditions $g_{i}^{\prime }(h_{i}(x_{0}))=0,$ $i=1,...,r-1,$
is unique on $B_{1}\backslash A.$ That is, all the functions $g_{i}^{\prime }
$ in (4.14) are zero functions on $h_{i}(B_{1}\backslash A),$ $i=1,...,r.$
Hence by Lemma 4.1, $B_{1}\backslash A\in CRS$. Thus, $B_{1}\in \mathcal{M}$%
. But the set $B$ was chosen as a maximal set in $\mathcal{M}$. We see that
our assumption $B\notin CRS$ leads to the contradiction that there is a set $%
B_{1}\in \mathcal{M}$ bigger than the maximal set $B$. Thus, in fact, $B\in
CRS$.
\end{proof}
\bigskip
\subsection{$C$-orbits and $C$-trips}
Let $A$ be a representation set. The relation on $A$ defined by setting $%
x\sim y$ if there is a finite complete representation subset of $A$
containing both $x$ and $y$, is an equivalence relation. Indeed, it is
reflexive and symmetric. It is transitive by property (a) of complete
representation sets. The equivalence classes we call $C$\textit{-orbits}. In
the case $r=2$, $C$-orbits turn into classical orbits considered by Marshall
and O'Farrell \cite{108,107}, which have a very nice
geometric interpretation in terms of paths (see Section 1.3). A classical
orbit consists of all possible traces of an arbitrary point in it traveling
alternately in the level sets of $h_{1}$ and $h_{2}.$ In the general
setting, a particular case of $C$-orbits was introduced by Klopotowski,
Nadkarni and Rao \cite{80} under the name of \textit{%
related components}. The case considered in \cite{80} requires that
$A\subset X=X_{1}\times \cdots \times
X_{n}$ and that the $h_{i}$ be the canonical projections of $X$ onto $X_{i},$ $%
i=1,...,n,$ respectively.
Finite complete representation sets containing $x$ and $y$ will be called $C$%
\textit{-trips} connecting $x$ and $y$. A $C$-trip of the smallest
cardinality connecting $x$ and $y$ will be called a \textit{minimal }$C$%
\textit{-trip}.
\bigskip
\textbf{Theorem 4.6.} \textit{Let $A$ be a representation set and $x$ and $y$
be any two points of some $C$-orbit in $A$. Then there is only one minimal $C
$-trip connecting them.}
\bigskip
\begin{proof} Assume that $L_{1}$ and $L_{2}$ are two minimal $C$-trips connecting $%
x$ and $y.$ By the definition, $L_{1}$ and $L_{2}$ are complete
representation sets. Note that $L_{1}\cup L_{2}$ is also complete. Let us
prove that the set $L_{1}\cap L_{2}$ is complete. Clearly, $L_{1}\cap
L_{2}\in RS.$ Let $x_{0}\in L_{1}\cap L_{2}$. In particular, $x_{0}$ can be
one of the points $x$ and $y$. Consider the representation
\begin{equation*}
0=\sum_{i=1}^{r}g_{i}(h_{i}(x)),\text{ }x\in L_{1}\cap L_{2},\eqno(4.15)
\end{equation*}%
subject to $g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1$. On the strength of Lemma
4.1, it is enough to prove that this representation is unique. For $i=1,...,r
$, let $g_{i}^{\prime }$ be any extension of $g_{i}$ from the set $%
h_{i}(L_{1}\cap L_{2})$ to the set $h_{i}(L_{1})$. Construct the function
\begin{equation*}
f^{\prime }(x)=\sum_{i=1}^{r}g_{i}^{\prime }(h_{i}(x)),\text{ }x\in L_{1}.%
\eqno(4.16)
\end{equation*}%
Since $f^{\prime }(x)=0$ on $L_{1}\cap L_{2}$, the following function is
well defined
\begin{equation*}
f(x)=\left\{
\begin{array}{c}
f^{\prime }(x),\text{ }x\in L_{1}, \\
0,\text{ }x\in L_{2}.%
\end{array}%
\right.
\end{equation*}%
Since $L_{1}\cup L_{2}\in CRS$, the representation
\begin{equation*}
f(x)=\sum_{i=1}^{r}w_{i}(h_{i}(x)),\text{ }x\in L_{1}\cup L_{2},\eqno(4.17)
\end{equation*}%
subject to
\begin{equation*}
w_{i}(h_{i}(x_{0}))=0,\text{ }i=1,...,r-1,\eqno(4.18)
\end{equation*}%
is unique. Besides, since $L_{1}\in CRS$ and $g_{i}^{\prime
}(h_{i}(x_{0}))=g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1$, representation (4.16)
is unique. This means that for each function $g_{i}$, there is only one
extension $g_{i}^{\prime }$. Note that
\begin{equation*}
f(x)=f^{\prime }(x)=\sum_{i=1}^{r}w_{i}(h_{i}(x)),\text{ }x\in L_{1}.
\end{equation*}%
Now from the uniqueness of representation (4.16) we obtain that
\begin{equation*}
w_{i}(h_{i}(x))=g_{i}^{\prime }(h_{i}(x)),\text{ }i=1,...,r,\text{ }x\in
L_{1}.\eqno(4.19)
\end{equation*}
A restriction of formula (4.17) to the set $L_{2}$ gives
\begin{equation*}
0=\sum_{i=1}^{r}w_{i}(h_{i}(x)),\text{ }x\in L_{2}.\eqno(4.20)
\end{equation*}%
Since $L_{2}\in CRS$, representation (4.20) subject to conditions (4.18) is
unique, whence
\begin{equation*}
w_{i}(h_{i}(x))=0,\text{ }i=1,...,r\text{, \ }x\in L_{2}.\eqno(4.21)
\end{equation*}%
From (4.19) and (4.21) it follows that
\begin{equation*}
g_{i}(h_{i}(x))=g_{i}^{\prime }(h_{i}(x))=0,\text{ }i=1,...,r,\text{ }x\in
L_{1}\cap L_{2}.
\end{equation*}%
Thus, we see that representation (4.15) subject to the conditions $%
g_{i}(h_{i}(x_{0}))=0,$ $i=1,...,r-1$ is unique on the intersection $%
L_{1}\cap L_{2}.$ Therefore by Lemma 4.1, $L_{1}\cap L_{2}\in CRS.$
Let the cardinalities of $L_{1}$ and $L_{2}$ be equal to $n.$ Since $x,y\in
L_{1}\cap L_{2}$ and $L_{1}\cap L_{2}\in CRS$, we obtain from the definition
of minimal $C$-trips that the cardinality of $L_{1}\cap L_{2}$ is also $n.$
Hence, $L_{1}\cap L_{2}=L_{1}=L_{2}.$
\end{proof}
Let $Q$ be a representation set. That is, each function $f:Q\rightarrow
\mathbb{R}$ enjoys representation (4.7). Can we construct the functions $%
g_{i},$ $i=1,...,r,$ for a given $f$? There is a procedure for constructing
one certain collection of $g_{i}$, provided that $Q$ consists of a single $C$%
-orbit, that is, any two points of $Q$ can be connected by a $C$-trip. To
describe this procedure, take a point $x_{0}\in Q$ and fix it. We are going
to find $g_{i}$ from (4.7) and conditions (4.8). Let $y$ be any point in $Q$%
. To find the values of $g_{i}$ at the points $h_{i}(y),$ $i=1,...,r,$
connect $x_{0}$ and $y$ by a minimal $C$-trip $S=\{x_{1},...,x_{n}\},$ where
$x_{1}=x_{0}$ and $x_{n}=y.$ Since $S$ is a complete representation set,
equation (4.7) subject to (4.8) has a unique solution on $S$. That is, we
can find $g_{i}(h_{i}(y)),$ $i=1,...,r,$ by solving the system of linear
equations
\begin{equation*}
\sum_{i=1}^{r}g_{i}(h_{i}(x_{j}))=f(x_{j}),\text{ }j=1,...,n\text{.}
\end{equation*}
We see that each minimal $C$-trip containing $x_{0}$ generates a system of
linear equations, which is uniquely solvable. Since any point in $Q$ can be
connected with $x_{0}$ by such a trip, we can find $g_{i}(t)$ at each point $%
t\in h_{i}(Q),$ $i=1,...,r.$
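For $r=2$ and the coordinate functions $h_{1}(x)=x_{1}$, $h_{2}(x)=x_{2}$, the linear system above can be solved by simple forward propagation along the trip, since consecutive points share exactly one $h_{i}$-value. The following sketch (Python; the trip and the function $f$ are our own hypothetical choices, and the staircase path below is a complete representation set for these $h_{i}$) carries the procedure out:

```python
# Procedure sketch for r = 2, h1(x) = x1, h2(x) = x2: along a
# staircase C-trip x_1, ..., x_n the system
#   g1(h1(x_j)) + g2(h2(x_j)) = f(x_j),  with g1(h1(x_1)) = a1,
# is solved by forward substitution from point to point.
def solve_on_trip(trip, f, a1=0.0):
    """Return dicts g1, g2 with g1[x1] + g2[x2] = f(x) on the trip."""
    g1, g2 = {}, {}
    x = trip[0]
    g1[x[0]] = a1                       # initial condition (4.8)
    g2[x[1]] = f(x) - a1
    for prev, cur in zip(trip, trip[1:]):
        if cur[0] == prev[0]:           # shared h1-value: g1 is known
            g2[cur[1]] = f(cur) - g1[cur[0]]
        else:                           # shared h2-value: g2 is known
            g1[cur[0]] = f(cur) - g2[cur[1]]
    return g1, g2

trip = [(0, 0), (0, 1), (2, 1), (2, 3)]   # a staircase C-trip
f = lambda p: p[0] + 2 * p[1]             # any function on the trip
g1, g2 = solve_on_trip(trip, f)
# Verify the representation at every point of the trip:
ok = all(abs(g1[p[0]] + g2[p[1]] - f(p)) < 1e-12 for p in trip)
```

Each step uses exactly one equation of the system, so the solution is unique once the initial value $a_{1}$ is fixed, in agreement with Theorem 4.4.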
The above procedure can still be effective for some particular
representation sets $Q$ consisting of many $C$-orbits. Let $\{C_{\alpha }\},$
$\alpha \in \Lambda ,$ denote the set of all $C$-orbits of $Q$. Fix some
points $x_{\alpha }\in C_{\alpha },$ $\alpha \in \Lambda $, one in each
orbit. Let $y_{\alpha }$ be any points of $C_{\alpha },$ $\alpha \in \Lambda
,$ respectively. We can apply the above procedure of finding the values of $%
g_{i}$ at each $y_{\alpha }$ if $h_{i}(y_{\alpha })\neq h_{i}(y_{\beta })$
for all $i$ and $\alpha \neq \beta $. For $h_{i}(y_{\alpha })=h_{i}(y_{\beta
}),$ one cannot guarantee that after solving the corresponding systems of
linear equations (associated with $y_{\alpha }$ and $y_{\beta }$), the
solutions $g_{i}(h_{i}(y_{\alpha }))$ and $g_{i}(h_{i}(y_{\beta }))$ will be
equal. That is, for the case $h_{i}(y_{\alpha })=h_{i}(y_{\beta })$, the
constructed functions $g_{i}$ may not be well defined.
\bigskip
\textbf{Remark 4.5.} All the results in this section are valid, in
particular, for linear combinations of generalized ridge functions.
\newpage
\chapter{Applications to neural networks}
Neural networks have increasingly been used in many areas of applied
sciences. Most of the applications employ neural networks to approximate
complex nonlinear functional dependencies on a high dimensional data set.
The theoretical justification for such applications is that any continuous
function can be approximated to arbitrary precision by carefully
selecting parameters in the network. The most commonly used model of neural
networks is the \textit{multilayer feedforward perceptron} (MLP) model. This model
consists of a finite number of successive layers. The first and the last
layers are called the input and the output layers, respectively. The
intermediate layers are called hidden layers. MLP models are usually
classified not by their number of layers, but by their number of hidden
layers. In this chapter, we study approximation properties of the single and
two hidden layer feedforward perceptron models. Our analysis is based on
ridge functions and the Kolmogorov superposition theorem.
The material of this chapter may be found in \cite{39,46,43,65}.
\bigskip
\section{Single hidden layer neural networks}
In this section, we consider single hidden layer neural networks with a set
of weights consisting of a finite number of directions or straight lines. For certain activation functions,
we characterize compact sets $X$ in the $d$-dimensional space such that the corresponding neural
network can approximate any continuous function on $X$.
\subsection{Problem statement}
Approximation capabilities of neural networks have
been investigated in a great many works over the last 30 years (see,
e.g., \cite%
{Alm,An,B,15,17,Ch,19,20,21,24,GI,39,40,41,67,68,71,70,89,104,110,119,123,135}).
In this section, we are interested in questions of density of a single
hidden layer perceptron model. A typical density result shows that this
model can approximate an arbitrary function in a given class with any degree
of accuracy.
\textit{A single hidden layer perceptron model} with $r$ units in the hidden layer
and input $\mathbf{x}=(x_{1},...,x_{d})$ evaluates a function of the form
\begin{equation*}
\sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i}),\eqno%
(5.1)
\end{equation*}%
where the \textit{weights} $\mathbf{w}^{i}$ are vectors in $\mathbb{R}^{d}$, the
\textit{thresholds} $\theta _{i}$ and the coefficients $c_{i}$ are real numbers and \
the \textit{activation function} $\sigma $ is a univariate function, which is
considered to be continuous here. Note that in (5.1) each function $%
\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i})$ is a ridge function with the direction $\mathbf{w}^{i}$.
For various activation functions $\sigma $, it has been proved in a number
of papers that one can approximate arbitrarily well a given continuous
function by functions of the form (5.1) ($r$ is not fixed!) over any compact
subset of $\mathbb{R}^{d}$. In other words, the set
\begin{equation*}
\mathcal{M}(\sigma )=span\text{\ }\{\sigma (\mathbf{w\cdot x}-\theta ):\
\theta \in \mathbb{R}\text{, }\mathbf{w\in }\mathbb{R}^{d}\}
\end{equation*}%
is dense in the space $C(\mathbb{R}^{d})$ in the topology of uniform
convergence on compact sets (see, e.g., \cite{17,21,41,67,68}). The most
general result of this type belongs to Leshno, Lin, Pinkus and Schocken \cite%
{89}. They proved that a necessary and sufficient condition for a
continuous activation function to have the density property is that it not
be a polynomial. This result shows the efficacy of the single hidden layer
perceptron model within all possible choices of the activation function $%
\sigma $, provided that $\sigma $ is continuous. In fact, density of the set
$\mathcal{M}(\sigma )$ also holds for some reasonable sets of weights and
thresholds (see \cite{119}).
Some authors showed that a single hidden layer perceptron with a suitably
restricted set of weights can also have the density property
(or, in neural network terminology, the \textit{universal approximation property}).
For example, White and Stinchcombe \cite{135} proved that a
single layer network with a polygonal, polynomial spline or analytic
activation function and a bounded set of weights has the density property. Ito \cite{68}
investigated this property of networks using monotone sigmoidal functions
(tending to $0$ at minus infinity and $1$ at infinity), with only weights
located on the unit sphere. We see that the weights required for the density
property need not be of arbitrarily large magnitude. But what if they are
too restricted? How can one describe the approximation properties of networks with
an arbitrarily restricted set of weights? This problem is too difficult to be
solved completely in this general formulation, but some cases deserve
special attention. The most interesting case is, of course, that of neural networks
with weights varying on a finite set of directions or lines. To the best of
our knowledge, approximation capabilities of such networks have not been
studied yet. More precisely, let $W$ be a set of weights consisting of a
finite number of vectors (or straight lines) in $\mathbb{R}^{d}$. It is
clear that if $\mathbf{w}$ varies only in $W$, the set $\mathcal{M}(\sigma )$ cannot
be dense in $C(\mathbb{R}^{d})$ in the topology of uniform convergence on compacta (compact sets).
In this case, one may want to determine the boundaries of the model's efficacy. Over
which compact sets $X\subset \mathbb{R}^{d}$ does the model preserve its
general propensity to approximate arbitrarily well every continuous
multivariate function? In Section 5.1.2, we will consider this problem and
give both sufficient and necessary conditions for approximation with
arbitrary precision by
networks with weights from a finite set of directions or lines. For a set $W$
of weights consisting of two vectors, we show that there is a geometrically
explicit solution to the problem. In Section 5.1.3, we discuss some
aspects of the exact representation by neural networks with weights varying on finitely many straight lines.
\bigskip
\subsection{Density results}
In this subsection we give a sufficient condition and a necessary condition for
approximation by neural networks with finitely many weights and with weights
varying on a finite set of straight lines (through the origin).
Let $X$ be a compact subset of $\mathbb{R}^{d}$. Consider the following set
functions
\begin{equation*}
\tau _{i}(Z)=\{\mathbf{x}\in Z:~|p_{i}^{-1}(p_{i}(\mathbf{x}))\bigcap Z|\geq
2\},\quad Z\subset X,~i=1,\ldots ,k,
\end{equation*}%
where $p_{i}(\mathbf{x})=\mathbf{a}^{i}\cdot \mathbf{x}$, $|Y|$ denotes the
cardinality of a considered set $Y$. Define $\tau (Z)$ to be $%
\bigcap_{i=1}^{k}\tau _{i}(Z)$ and define $\tau ^{2}(Z)=\tau (\tau (Z))$, $%
\tau ^{3}(Z)=\tau (\tau ^{2}(Z))$ and so on inductively. These functions
first appeared in the work \cite{132} by Sternfeld, where he investigated
problems of representation by linear superpositions. Clearly, $\tau
(Z)\supseteq \tau ^{2}(Z)\supseteq \tau ^{3}(Z)\supseteq \cdots $ It is possible
that for some $n$, $\tau ^{n}(Z)=\emptyset .$ In this case, one can see that
$Z$ does not contain a cycle. In general, if some set $Z\subset X$ forms a
cycle, then $\tau ^{n}(Z)=Z.$ But the converse is not true. Indeed, let $%
Z=X=\{(0,0,\frac{1}{2}),(0,0,1),(0,1,0),(1,0,1),(1,1,0),(\frac{1}{2},\frac{1%
}{2},0),(\frac{1}{2},\frac{1}{2},\frac{1}{2})\}$, $\mathbf{a}^{i},i=1,2,3,$
are the coordinate directions in $\mathbb{R}^{3}$. It is not difficult to
verify that $X$ does not possess cycles with respect to these directions and
at the same time $\tau (X)=X$ (and so $\tau ^{n}(X)=X$ for every $n)$.
Consider the linear combinations of ridge functions with fixed directions $%
\mathbf{a}^{1},...,\mathbf{a}^{k}$
\begin{equation*}
\mathcal{R}\left( \mathbf{a}^{1},...,\mathbf{a}^{k}\right) =\left\{
\sum\limits_{i=1}^{k}g_{i}\left( \mathbf{a}^{i}\cdot \mathbf{x}\right)
:g_{i}\in C(\mathbb{R)},~i=1,...,k\right\} .\eqno(5.2)
\end{equation*}
Let $K$ be a family of functions defined on $\mathbb{R}^{d}$ and $X$ be a
subset of $\mathbb{R}^{d}.$ By $K_{X}$ we will denote the restriction of
this family to $X.$ Thus $\mathcal{R}_{X}\left( \mathbf{a}^{1},...,\mathbf{a}%
^{k}\right) $ stands for the set of sums of ridge functions in (5.2) defined
on $X$.
The following theorem is a particular case of a general result of
Sproston and Strauss \cite{129} established for the sum of subalgebras of $%
C(X)$.
\bigskip
\textbf{Theorem 5.1. }\textit{Let $X$ be a compact subset of $\mathbb{R}^{d}$%
. If $\cap _{n=1,2,...}\tau ^{n}(X)=\emptyset $, then the set $\mathcal{R}%
_{X}\left( \mathbf{a}^{1},...,\mathbf{a}^{k}\right) $ is dense in $C(X)$.}
\bigskip
In our analysis, we need the following lemma.
\bigskip
\textbf{Lemma 5.1. } \textit{If $\mathcal{R}_{X}\left( \mathbf{a}^{1},...,%
\mathbf{a}^{k}\right) $ is dense in $C(X),$ then the set $X$ does not
contain a cycle with respect to the directions $\mathbf{a}^{1},...,\mathbf{a}%
^{k}$.}
\bigskip
\begin{proof}
Suppose, to the contrary, that the set $X$ contains a cycle. Each cycle $%
l=(x_{1},\ldots ,x_{n})$ and the associated vector $\lambda =(\lambda
_{1},\ldots ,\lambda _{n})$ generate the functional
\begin{equation*}
G_{l,\lambda }(f)=\sum_{j=1}^{n}\lambda _{j}f(x_{j}),\quad f\in C(X).
\end{equation*}
Clearly, $G_{l,\lambda }$ is linear and continuous with the norm $%
\sum_{j=1}^{n}|\lambda _{j}|.$ It is not difficult to verify that $%
G_{l,\lambda }(g)=0$ for all functions $g\in \mathcal{R}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{k}\right) .$ Let $f_{0}$ be a continuous function such
that $f_{0}(x_{j})=1$ if $\lambda _{j}>0$ and $f_{0}(x_{j})=-1$ if $\lambda
_{j}<0$, $j=1,\ldots ,n$. For this function, $G_{l,\lambda }(f_{0})\neq 0$.
Thus, we have constructed a nonzero linear functional which belongs to the
annihilator of the manifold $\mathcal{R}_{X}\left( \mathbf{a}^{1},...,%
\mathbf{a}^{k}\right) $. This means that $\mathcal{R}_{X}\left( \mathbf{a}%
^{1},...,\mathbf{a}^{k}\right) $ is not dense in $C(X)$. The obtained
contradiction proves the lemma.
\end{proof}
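The annihilation argument in this proof is transparent on a concrete cycle. In the sketch below (a toy example of ours, not taken from the text: the vertices of the unit square with respect to the coordinate directions, with the associated vector $\lambda =(1,-1,1,-1)$), the functional $G_{l,\lambda }$ vanishes on every sum $g_{1}(\mathbf{a}^{1}\cdot \mathbf{x})+g_{2}(\mathbf{a}^{2}\cdot \mathbf{x})$ but not on a general continuous function.

```python
import math

# A cycle: the vertices of the unit square, with respect to the
# coordinate directions a1 = (1, 0) and a2 = (0, 1).
cycle = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
lam = [1, -1, 1, -1]

def G(f):
    """The functional G_{l,lambda}(f) = sum_j lambda_j f(x^j)."""
    return sum(l * f(x) for l, x in zip(lam, cycle))

# Arbitrary continuous univariate functions g1, g2.
g1, g2 = math.exp, lambda t: t**3 - 2 * t

ridge_sum = lambda x: g1(x[0]) + g2(x[1])   # g1(a1 . x) + g2(a2 . x)
print(abs(G(ridge_sum)) < 1e-12)            # True: G annihilates R(a1, a2)
print(G(lambda x: x[0] * x[1]))             # nonzero on a generic function
```

The cancellation holds for any choice of $g_{1},g_{2}$, since the projections onto each direction pair up with opposite signs.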
Now we are ready to step forward from ridge function approximation to neural
networks. Let $\sigma \in C(\mathbb{R)}$ be a continuous activation
function. For a subset $W\subset \mathbb{R}^{d},$ let $\mathcal{M}(\sigma ;W,%
\mathbb{R})$ stand for the set of neural networks with weights from $W.$
That is,
\begin{equation*}
\mathcal{M}(\sigma ;W,\mathbb{R})=span\{\sigma (\mathbf{w}\cdot \mathbf{x}%
-\theta ):~\mathbf{w}\in W,~\theta \in \mathbb{R}\}.
\end{equation*}
\bigskip
\textbf{Theorem 5.2.} \textit{Let $\sigma \in C(\mathbb{R})\cap L_{p}(%
\mathbb{R)}$, where $1\leq p<\infty $, or $\sigma $ be a continuous,
bounded, nonconstant function, which has a limit at infinity (or minus
infinity). Let $W=\{\mathbf{a}^{1},...,\mathbf{a}^{k}\} \subset \mathbb{R}^{d}$
be the given set of weights and $X$ be a
compact subset of $\mathbb{R}^{d}$. The following assertions are valid:}
\textit{(1) if $\cap _{n=1,2,...}\tau ^{n}(X)=\emptyset $, then the set $%
\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ is dense in the space of all
continuous functions on $X$.}
\textit{(2) if $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ is dense in $C(X)$,
then the set $X$ does not contain cycles.}
\bigskip
\begin{proof}
Part (1). Let $X$ be a compact subset of $\mathbb{R}^{d}$ for which
$\cap _{n=1,2,...}\tau ^{n}(X)=\emptyset $. By Theorem 5.1, the set $%
\mathcal{R}_{X}\left( \mathbf{a}^{1},...,\mathbf{a}^{k}\right) $ is dense in
$C(X)$. This means that for any positive real number $\varepsilon $ there
exist continuous univariate functions $g_{i},$ $i=1,...,k$ such that
\begin{equation*}
\left\vert f(\mathbf{x})-\sum_{i=1}^{k}{g_{i}\left( \mathbf{a}^{i}{\cdot }%
\mathbf{x}\right) }\right\vert <\frac{\varepsilon }{k+1}\eqno(5.3)
\end{equation*}%
for all $\mathbf{x}\in X$. Since $X$ is compact, the sets $Y_{i}=\{\mathbf{a}%
^{i}{\cdot }\mathbf{x:\ x}\in X\},\ i=1,2,...,k$ are also compact. In 1947,
Schwartz \cite{124} proved that continuous and $p$-th degree Lebesgue integrable
univariate functions or continuous, bounded, nonconstant functions having a
limit at infinity (or minus infinity) are not mean-periodic. Note that a
function $f\in C(\mathbb{R}^{d})$ is called mean-periodic if the set $span$\
$\{f(\mathbf{x}-\mathbf{b}):\ \mathbf{b}\in \mathbb{R}^{d}\}$ is not dense
in $C(\mathbb{R}^{d})$ in the topology of uniform convergence on compacta
(see \cite{124}). Thus, Schwartz proved that the set
\begin{equation*}
span\text{\ }\{\sigma (y-\theta ):\ \theta \in \mathbb{R}\}
\end{equation*}%
is dense in $C(\mathbb{R)}$ in the topology of uniform convergence. We
learned about this result from Pinkus \cite[p.162]{119}. This density result
means that for the given $\varepsilon $ there exist numbers $c_{ij},\theta
_{ij}\in \mathbb{R}$, $i=1,2,...,k$, $j=1,...,m_{i}$ such that%
\begin{equation*}
\left\vert g_{i}(y)-\sum_{j=1}^{m_{i}}c_{ij}\sigma (y-\theta
_{ij})\right\vert \,<\frac{\varepsilon }{k+1}\eqno(5.4)
\end{equation*}%
for all $y\in Y_{i},\ i=1,2,...,k.$ From (5.3) and (5.4) we obtain that
\begin{equation*}
\left\Vert f(\mathbf{x})-\sum_{i=1}^{k}\sum_{j=1}^{m_{i}}c_{ij}\sigma (%
\mathbf{a}^{i}{\cdot }\mathbf{x}-\theta _{ij})\right\Vert
_{C(X)}<\varepsilon .\eqno(5.5)
\end{equation*}%
Hence $\overline{\mathcal{M}_{X}(\sigma ;W,\mathbb{R})}=C(X).$
\bigskip
Part (2). Let $X$ be a compact subset of $\mathbb{R}^{d}$ and the set $%
\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ be dense in $C(X).$ Then for an
arbitrary positive real number $\varepsilon $, inequality (5.5) holds with
some coefficients $c_{ij},\theta _{ij},\ i=1,...,k,\ j=1,...,m_{i}.$ Since for
each $i=1,2,...,k$, the function $\sum_{j=1}^{m_{i}}c_{ij}\sigma (\mathbf{a}%
^{i}{\cdot }\mathbf{x}-\theta _{ij})$ is a function of the form $g_{i}(%
\mathbf{a}^{i}{\cdot }\mathbf{x}),$ the subspace $\mathcal{R}_{X}\left(
\mathbf{a}^{1},...,\mathbf{a}^{k}\right) $ is dense in $C(X)$. Then by Lemma
5.1, the set $X$ contains no cycles.
\end{proof}
The above theorem still holds if the set of weights $W=\{\mathbf{a}^{1},...,\mathbf{a}^{k}\}$
is replaced by the set $W_{1}=\{t_{1}\mathbf{a}^{1},...,t_{k}\mathbf{a}^{k}:\ t_{1},...,t_{k}\in \mathbb{R}\}$.
In fact, for $W_{1}$, the above restrictions on the activation function $\sigma $ may be weakened.
\bigskip
\textbf{Theorem 5.3.} \textit{Assume $\sigma \in C(\mathbb{R})$ is not a polynomial. Let
$W_{1}=\{t_{1}\mathbf{a}^{1},...,t_{k}\mathbf{a}^{k}:\ t_{1},...,t_{k}\in \mathbb{R}\}$ be the
given set of weights and $X$ be a compact subset of $\mathbb{R}^{d}$.
The following assertions are valid:}
\textit{(1) if $\cap _{n=1,2,...}\tau ^{n}(X)=\emptyset $, then the set $%
\mathcal{M}_{X}(\sigma ;W_{1},\mathbb{R})$ is dense in the space of all
continuous functions on $X$.}
\textit{(2) if $\mathcal{M}_{X}(\sigma ;W_{1},\mathbb{R})$ is dense in $C(X)$%
, then the set $X$ does not contain cycles.}
\bigskip
The proof of this theorem is similar to that of Theorem 5.2 and based on the
following result of Leshno, Lin, Pinkus and Schocken \cite{89}: if $\sigma $
is not a polynomial, then the set
\begin{equation*}
span\text{\ }\{\sigma (ty-\theta ):\ t,\theta \in \mathbb{R}\}
\end{equation*}%
is dense in $C(\mathbb{R)}$ in the topology of uniform convergence on compacta.
The above example with the set
\begin{equation*}
\{(0,0,\frac{1}{2}),(0,0,1),(0,1,0),(1,0,1),(1,1,0),(\frac{1}{2},\frac{1}{2}%
,0),(\frac{1}{2},\frac{1}{2},\frac{1}{2})\}
\end{equation*}%
shows that the sufficient condition in part (1) of Theorem 5.2 is not
necessary. The necessary condition in part (2) is, in general, not
sufficient, but this is not easily seen. Here is a nontrivial example
showing that nonexistence of cycles is not sufficient for the density $%
\overline{\mathcal{M}_{X}(\sigma ;W,\mathbb{R})}=C(X).$ For the sake of
simplicity, we restrict ourselves to $\mathbb{R}^{2}.$ Let $\mathbf{a}%
^{1}=(1;1),$ $\mathbf{a}^{2}=(1;-1)$ and the set of weights $W=\{\mathbf{a}%
^{1},\mathbf{a}^{2}\}.$ The set $X$ can be constructed as follows. Let $%
X_{1} $ be the union of the four line segments $[(-3;0),(-1;0)],$ $%
[(-1;2),(1;2)],$ $[(1;0),(3;0)]$ and $[(-1;-2),(1;-2)].$ Rotate one segment
in $X_{1}$ by $90^{\circ }$ about its center and remove the middle one-third
from each line segment. Denote the obtained set by $X_{2}$. In the same way,
one can construct $X_{3},X_{4},$ and so on. It is clear that the set $X_{i}$
has $2^{i+1}$ line segments. Let $X$ be the limit of the sets $X_{i}$, $%
i=1,2,...$. Note that $X$ contains no cycles.
By $S_{i}$, $i=\overline{1,4},$ denote the closed discs of unit radius
centered at the points $(-2;0),$ $(0;2),$ $(2;0)$ and $(0;-2),$
respectively. Consider a continuous function $f_{0}$ such that $f_{0}(%
\mathbf{x})=1$ for $\mathbf{x}\in (S_{1}\cup S_{3})\cap X$, $f_{0}(\mathbf{x}%
)=-1$ for $\mathbf{x}\in (S_{2}\cup S_{4})\cap X$, and $-1<f_{0}(\mathbf{x}%
)<1$ elsewhere on $\mathbb{R}^{2}$. Let $p=(\mathbf{y}^{1},\mathbf{y}%
^{2},...)$ be any infinite path in $X.$ Note that the points $\mathbf{y}%
^{i}, $ $i=1,2,...,$ lie alternately in the sets $(S_{1}\cup S_{3})\cap X$
and $(S_{2}\cup S_{4})\cap X$. Obviously,
\begin{equation*}
E(f_{0},X)\overset{def}{=}\inf_{g\in \mathcal{R}_{X}\left( \mathbf{a}^{1},%
\mathbf{a}^{2}\right) }\left\Vert f_{0}-g\right\Vert _{C(X)}\leq \left\Vert
f_{0}\right\Vert _{C(X)}=1.\eqno(5.6)
\end{equation*}
For each positive integer $k=1,2,...$, set $p_{k}=(\mathbf{y}^{1},...,%
\mathbf{y}^{k})$ and consider the path functionals
\begin{equation*}
G_{p_{k}}(f)=\frac{1}{k}\sum_{i=1}^{k}(-1)^{i-1}f(\mathbf{y}^{i}).
\end{equation*}
$G_{p_{k}}$ is a continuous linear functional obeying the following obvious
properties:
\begin{enumerate}
\item[(1)] $\left\Vert G_{p_{k}}\right\Vert =G_{p_{k}}(f_{0})=1;$
\item[(2)] $G_{p_{k}}(g_{1}+g_{2})\leq \frac{2}{k}(\left\Vert
g_{1}\right\Vert +\left\Vert g_{2}\right\Vert )$ for ridge functions $g_{1}={%
g_{1}\left( \mathbf{a}^{1}{\cdot }\mathbf{x}\right) }$ and $g_{2}={%
g_{2}\left( \mathbf{a}^{2}{\cdot }\mathbf{x}\right) .}$
\end{enumerate}
By property (1), the sequence $\{G_{p_{k}}\}_{k=1}^{\infty }$ has a weak$^{%
\text{*}}$ cluster point. This point will be denoted by $G.$ By property
(2), $G\in \mathcal{R} _{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right)
^{\bot}.$ Therefore,
\begin{equation*}
1=G(f_{0})=G(f_{0}-g)\leq \left\Vert f_{0}-g\right\Vert _{C(X)}\text{ \ for
any }g\in \mathcal{R}_{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) .
\end{equation*}
Taking $\inf$ over $g$ in the right-hand side of the last inequality, we
obtain that $1\leq E(f_{0},X).$ Now it follows from (5.6) that $%
E(f_{0},X)=1. $ Recall that $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})\subset
\mathcal{R}_{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) .$ Thus%
\begin{equation*}
\inf_{h\in \mathcal{M}_{X}(\sigma ;W,\mathbb{R})}\left\Vert f_{0}-h\right\Vert
_{C(X)}\geq 1.
\end{equation*}
The last inequality finally shows that $\overline{\mathcal{M}_{X}(\sigma ;W,%
\mathbb{R})}\neq C(X).$
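Property (2), the key to the above argument, is just a telescoping estimate, and it can be observed numerically. The sketch below is an illustration of ours: for transparency it uses a staircase path with respect to the coordinate directions instead of $\mathbf{a}^{1}=(1;1)$, $\mathbf{a}^{2}=(1;-1)$, and the ridge functions $g_{1},g_{2}$ are arbitrary choices. The value of $G_{p_{k}}$ on a ridge sum stays within the bound $\frac{2}{k}(\left\Vert g_{1}\right\Vert +\left\Vert g_{2}\right\Vert )$ and hence tends to $0$ as $k$ grows.

```python
import math

def zigzag(k):
    """A path y^1, ..., y^k in which consecutive points alternately share
    their first or second coordinate (directions a1 = (1,0), a2 = (0,1))."""
    pts, x, y = [], 0, 0
    for i in range(k):
        pts.append((float(x), float(y)))
        if i % 2 == 0:
            y += 1   # next point shares its first coordinate
        else:
            x += 1   # next point shares its second coordinate
    return pts

def G(path, f):
    """The path functional G_{p_k}(f) = (1/k) sum_i (-1)^(i-1) f(y^i)."""
    k = len(path)
    return sum((-1) ** i * f(p) for i, p in enumerate(path)) / k

g1 = lambda t: math.sin(t)   # ridge function along a1: g1(x)
g2 = lambda t: math.cos(t)   # ridge function along a2: g2(y)

for k in (10, 100, 1000):
    path = zigzag(k)
    val = abs(G(path, lambda p: g1(p[0]) + g2(p[1])))
    bound = (2 / k) * (1.0 + 1.0)   # (2/k)(||g1|| + ||g2||)
    print(val <= bound)             # True for every k
```

Consecutive points with equal projections enter with opposite signs, so all but boundary terms cancel, which is exactly how property (2) arises.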
\bigskip
For neural networks with weights consisting of only two vectors (or
directions) the problem of density becomes more clear. In this case, under
some minor restrictions on $X,$ the necessary condition in part (2) of
Theorem 5.2 (nonexistence of cycles) is also sufficient for the density of $%
\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ in $C(X)$. These restrictions are
imposed on the following equivalence classes of $X$ induced by paths. The
relation $\mathbf{x}\thicksim \mathbf{y}$ when $\mathbf{x}$ and $\mathbf{y}$
belong to some path in a given compact set $X\subset \mathbb{R}^{d}$ defines
an equivalence relation. Recall that the equivalence classes are called
orbits (see Section 1.3.4).
\bigskip
\textbf{Theorem 5.4.} \textit{Let $\sigma \in C(\mathbb{R})\cap L_{p}(%
\mathbb{R)}$, where $1\leq p<\infty $, or $\sigma $ be a continuous,
bounded, nonconstant function, which has a limit at infinity (or minus
infinity). Let $W=\{\mathbf{a}^{1},\mathbf{a}^{2}\} \subset \mathbb{R}^{d}$
be the given set of weights and $X$ be a compact subset of $\mathbb{R}%
^{d}$ with all its orbits closed. Then $\mathcal{M}_{X}(\sigma ;W,\mathbb{R}%
) $ is dense in the space of all continuous functions on $X$ if and only
if $X$ contains no closed paths with respect to the directions
$\mathbf{a}^{1}$ and $\mathbf{a}^{2}$.}
\bigskip
\begin{proof} \textit{Sufficiency.} Let $X$ be a compact subset of $\mathbb{R}%
^{d}$ with all its orbits closed. Besides, let $X$ contain no closed paths.
By Theorem 1.6 (see Section 1.3.4), the set $\mathcal{R}_{X}\left( \mathbf{a}%
^{1},\mathbf{a}^{2}\right) $ is dense in $C(X)$. This means that for any
positive real number $\varepsilon $ there exist continuous univariate
functions $g_{1}$ and $g_{2}$ such that
\begin{equation*}
\left\vert f(\mathbf{x})-{g_{1}\left( \mathbf{a}^{1}{\cdot }\mathbf{x}%
\right) -g_{2}\left( \mathbf{a}^{2}{\cdot }\mathbf{x}\right) }\right\vert <%
\frac{\varepsilon }{3}\eqno(5.7)
\end{equation*}%
for all $\mathbf{x}\in X$. Since $X$ is compact, the sets $Y_{i}=\{\mathbf{a}%
^{i}{\cdot }\mathbf{x:\ x}\in X\},\ i=1,2,$ are also compact. As mentioned
above, Schwartz \cite{124} proved that continuous and $p$-th degree Lebesgue
integrable univariate functions or continuous, bounded, nonconstant
functions having a limit at infinity (or minus infinity) are not
mean-periodic. Thus, the set
\begin{equation*}
span\text{\ }\{\sigma (y-\theta ):\ \theta \in \mathbb{R}\}
\end{equation*}%
is dense in $C(\mathbb{R)}$ in the topology of uniform convergence. This
density result means that for the given $\varepsilon $ there exist numbers $%
c_{ij},\theta _{ij}\in \mathbb{R}$, $i=1,2,$ $j=1,\dots ,m_{i}$ such that%
\begin{equation*}
\left\vert g_{i}(y)-\sum_{j=1}^{m_{i}}c_{ij}\sigma (y-\theta
_{ij})\right\vert \,<\frac{\varepsilon }{3}\eqno(5.8)
\end{equation*}%
for all $y\in Y_{i},\ i=1,2.$ From (5.7) and (5.8) we obtain that
\begin{equation*}
\left\Vert f(\mathbf{x})-\sum_{i=1}^{2}\sum_{j=1}^{m_{i}}c_{ij}\sigma (%
\mathbf{a}^{i}{\cdot }\mathbf{x}-\theta _{ij})\right\Vert
_{C(X)}<\varepsilon .\eqno(5.9)
\end{equation*}%
Hence $\overline{\mathcal{M}_{X}(\sigma ;W,\mathbb{R})}=C(X).$
\bigskip
\textit{Necessity.} Let $X$ be a compact subset of $\mathbb{R}^{d}$ with all
its orbits closed and the set $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ be
dense in $C(X).$ Then for an arbitrary positive real number $\varepsilon $,
inequality (5.9) holds with some coefficients $c_{ij},\theta _{ij},\ i=1,2,\
j=1,\dots ,m_{i}.$ Since for $i=1,2,$ $\sum_{j=1}^{m_{i}}c_{ij}\sigma (%
\mathbf{a}^{i}{\cdot }\mathbf{x}-\theta _{ij})$ is a function of the form $%
g_{i}(\mathbf{a}^{i}{\cdot }\mathbf{x}),$ the subspace $\mathcal{R}%
_{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ is dense in $C(X)$. Then
by Theorem 1.6, the set $X$ contains no closed paths.
\end{proof}
\bigskip
\textbf{Remark 5.1.} It can be shown that the necessity part of the theorem is
valid without any restriction on the orbits of $X$. Indeed, if $X$ contains a
closed path, then it contains a closed path $p=(\mathbf{x}^{1},\dots ,%
\mathbf{x}^{2m})$ with distinct points. The functional $G_{p}(f)=%
\sum_{i=1}^{2m}(-1)^{i-1}f(\mathbf{x}^{i})$ belongs to the annihilator of
the subspace $\mathcal{R}_{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) .$
There exist nontrivial continuous functions $f_{0}$ on $X$ such that $%
G_{p}(f_{0})\neq 0$ (take, for example, any continuous function $f_{0}$
taking values $+1$ at $\{\mathbf{x}^{1},\mathbf{x}^{3},\dots ,\mathbf{x}%
^{2m-1}\}$, $-1$ at $\{\mathbf{x}^{2},\mathbf{x}^{4},\dots ,\mathbf{x}%
^{2m}\} $ and $-1<f_{0}(\mathbf{x})<1$ elsewhere). This shows that the
subspace $\mathcal{R}_{X}\left( \mathbf{a}^{1},\mathbf{a}^{2}\right) $ is
not dense in $C(X)$. But in this case, the set $\mathcal{M}_{X}(\sigma ;W,%
\mathbb{R})$ cannot be dense in $C(X)$. The obtained contradiction means
that our assumption is not true and $X$ contains no closed paths.
\bigskip
Theorem 5.4 remains valid if the set of weights $W=\{\mathbf{a}^{1},%
\mathbf{a}^{2}\}$ is replaced by the set $W_{1}=\{t_{1}\mathbf{a}^{1},t_{2}%
\mathbf{a}^{2}:\ t_{1},t_{2}\in \mathbb{R\}}$. In fact, for the set $W_{1}$,
the required conditions on $\sigma $ may be weakened. As in Theorem 5.3, the
activation function $\sigma $ can be taken only non-polynomial.
\bigskip
\textbf{Theorem 5.5.} \textit{Assume $\sigma \in C(\mathbb{R})$
is not a polynomial. Let $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ be
fixed vectors and $W_{1}=\{t_{1}\mathbf{a}^{1},t_{2}%
\mathbf{a}^{2}:\ t_{1},t_{2}\in \mathbb{R\}}$
be the set of weights. Let $X$ be a compact subset of $\mathbb{R%
}^{d}$ with all its orbits closed. Then $\mathcal{M}_{X}(\sigma ;W_{1},%
\mathbb{R})$ is dense in the space of all continuous functions on $X$ if
and only if $X$ contains no closed paths with respect to the directions
$\mathbf{a}^{1}$ and $\mathbf{a}^{2}$.}
\bigskip
The proof is analogous to that of Theorem 5.4 and based on the above
mentioned result of Leshno, Lin, Pinkus and Schocken \cite{89}.
\bigskip
\textbf{Examples:}
\begin{description}
\item[(a)] Let $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ be two noncollinear
vectors in $\mathbb{R}^{2}.$ Let $B=B_{1}...B_{k}$ be a broken line with the
sides $B_{i}B_{i+1},\ i=1,...,k-1,$ alternately perpendicular to $\mathbf{a%
}^{1}$ and $\mathbf{a}^{2}$. Assume also that $B$ does not contain the vertices of
any parallelogram with sides perpendicular to these vectors. Then the set $%
\mathcal{M}_{B}(\sigma ;\{\mathbf{a}^{1},\mathbf{a}^{2}\},\mathbb{R})$ is
dense in $C(B).$
\item[(b)] Let $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ be two noncollinear
vectors in $\mathbb{R}^{2}.$ If $X$ is the union of two parallel line
segments, not perpendicular to any of the vectors $\mathbf{a}^{1}$ and $%
\mathbf{a}^{2}$, then the set $\mathcal{M}_{X}(\sigma ;\{\mathbf{a}^{1},
\mathbf{a}^{2}\},\mathbb{R})$ is dense in $C(X).$
\item[(c)] Let now $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ be two collinear
vectors in $\mathbb{R}^{2}.$ Note that in this case any path consisting of
two points is automatically closed. Thus the set $\mathcal{M}_{X}(\sigma ;\{%
\mathbf{a}^{1},\mathbf{a}^{2}\},\mathbb{R})$ is dense in $C(X)$ if and only
if $X$ contains no path different from a singleton. A simple example is a
line segment not perpendicular to the given direction.
\item[(d)] Let $X$ be a compact set with an interior point. Then the density
conclusion of Theorem 5.4
fails, since any such set contains the vertices of some parallelogram with sides
perpendicular to the given directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$,
that is, a closed path.
\end{description}
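Example (d) rests on the fact that the four vertices of any parallelogram with sides perpendicular to $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ form a closed path with respect to these directions. A minimal Python sketch (the particular corner point and side lengths are arbitrary choices of ours) verifies this for the directions $\mathbf{a}^{1}=(1;1)$, $\mathbf{a}^{2}=(1;-1)$ used earlier.

```python
a1, a2 = (1.0, 1.0), (1.0, -1.0)   # the two noncollinear directions
b1 = (-a1[1], a1[0])               # b1 is perpendicular to a1
b2 = (-a2[1], a2[0])               # b2 is perpendicular to a2

add = lambda p, q: (p[0] + q[0], p[1] + q[1])
dot = lambda p, q: p[0] * q[0] + p[1] * q[1]

# Parallelogram with sides b1 and b2, starting from an arbitrary corner
# (dyadic coordinates keep the floating-point arithmetic exact).
x1 = (0.25, 0.75)
x2 = add(x1, b1)
x3 = add(x2, b2)
x4 = add(x1, b2)

# Consecutive vertices alternately share their a1- and a2-projections,
# and the last vertex connects back to the first: a closed path.
checks = [dot(a1, x1) == dot(a1, x2), dot(a2, x2) == dot(a2, x3),
          dot(a1, x3) == dot(a1, x4), dot(a2, x4) == dot(a2, x1)]
print(all(checks))  # True
```

Since a set with an interior point contains such a parallelogram for every scale of $b_{1},b_{2}$, it always carries a closed path.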
\bigskip
\subsection{A necessary condition for the representation by neural networks}
In this subsection we give a necessary condition for the representation of
functions by neural networks with weights varying on finitely many straight
lines. Before formulating our result, we introduce new objects, namely \textit{semicycles}
with respect to directions $\mathbf{a}^{1},...,%
\mathbf{a}^{k}\in \mathbb{R}^{d}\backslash \{\mathbf{0}\}$.
\bigskip
\textbf{Definition 5.1.} \textit{A set of points $l=(\mathbf{x}^{1},\ldots ,%
\mathbf{x}^{n})\subset \mathbb{R}^{d}$ is called a semicycle with respect to
the directions $\mathbf{a}^{1},...,\mathbf{a}^{k}$ if there
exists a vector $\lambda =(\lambda _{1},\ldots ,\lambda _{n})\in \mathbf{Z}%
^{n}\setminus \{\mathbf{0}\}$ such that for any $i=1,\ldots ,k,$ we have}
\begin{equation*}
\sum_{j=1}^{n}\lambda _{j}\delta _{\mathbf{a}^{i}\cdot \mathbf{x}%
^{j}}=\sum_{s=1}^{r_{i}}\lambda _{i_{s}}\delta _{\mathbf{a}^{i}\cdot \mathbf{%
x}^{i_{s}}},\quad \mbox{where }r_{i}\leq k.\eqno(5.10)
\end{equation*}%
Here $\delta _{a}$ is the characteristic function of the single point set $%
\{a\}$. Note that for $i=1,\ldots ,k$, the set $\{\lambda
_{i_{s}},~s=1,...,r_{i}\}$ is a subset of the set $\{\lambda
_{j},~j=1,...,n\}$. Thus, Eq. (5.10) means that for each $i$, we actually
have at most $k$ terms in the sum $\sum_{j=1}^{n}\lambda _{j}\delta _{%
\mathbf{a}^{i}\cdot \mathbf{x}^{j}}$.
Recall that if in (5.10) for any $i=1,\ldots ,k$, we have
\begin{equation*}
\sum_{j=1}^{n}\lambda _{j}\delta _{\mathbf{a}^{i}\cdot \mathbf{x}^{j}}=0,
\end{equation*}%
then the set $l=(\mathbf{x}^{1},\ldots ,\mathbf{x}^{n})$ is a cycle with
respect to the directions $\mathbf{a}^{1},...,\mathbf{a}^{k}$
(see Section 1.2). Thus a cycle is a special case of a semicycle.
Let us give a simple example of a semicycle. Assume $k=2$ and $\mathbf{a}^{1}\cdot \mathbf{x}^{1}=\mathbf{a}%
^{1}\cdot \mathbf{x}^{2}$, $\mathbf{a}^{2}\cdot \mathbf{x}^{2}=\mathbf{a}%
^{2}\cdot \mathbf{x}^{3}$, $\mathbf{a}^{1}\cdot \mathbf{x}^{3}=\mathbf{a}%
^{1}\cdot \mathbf{x}^{4}$,..., $\mathbf{a}^{2}\cdot \mathbf{x}^{n-1}=\mathbf{%
a}^{2}\cdot \mathbf{x}^{n}$. Then it is not difficult to see that for a
vector $\lambda =(\lambda _{1},\ldots ,\lambda _{n})$ with the components $\lambda _{j}=(-1)^{j},$
the following equalities hold:
\begin{eqnarray*}
\sum_{j=1}^{n}\lambda _{j}\delta _{\mathbf{a}^{1}\cdot \mathbf{x}^{j}}
&=&\lambda _{n}\delta _{\mathbf{a}^{1}\cdot \mathbf{x}^{n}}, \\
\sum_{j=1}^{n}\lambda _{j}\delta _{\mathbf{a}^{2}\cdot \mathbf{x}^{j}}
&=&\lambda _{1}\delta _{\mathbf{a}^{2}\cdot \mathbf{x}^{1}}.
\end{eqnarray*}%
Thus, by Definition 5.1, the set $l=\{\mathbf{x}^{1},\ldots ,\mathbf{x}%
^{n}\}$ is a semicycle with respect to the directions $\mathbf{a}^{1}$
and $\mathbf{a}^{2}$. Note that this set, in the given order of its points, forms
a path with respect to the directions $\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ (see
Section 1.3). It is not difficult to see that any path with respect to
$\mathbf{a}^{1}$ and $\mathbf{a}^{2}$ is a semicycle with respect
to these directions. But semicycles may also involve some union of paths.
Note that one can construct many semicycles by adding not more than $k$
arbitrary points to a cycle with respect to the directions $\mathbf{a}^{1},%
\mathbf{a}^{2},...,\mathbf{a}^{k}$.
A cycle (or semicycle) $l$ is called a \textit{$q$-cycle} (\textit{$q$%
-semicycle}) if the vector $\lambda $ associated with $l$ can be chosen so
that $\left\vert \lambda _{i}\right\vert \leq q,$ $i=1,...,n,$ and $q$ is
the minimal number with this property.
The semicycle considered above is a $1$-semicycle. If in that example, $%
\mathbf{a}^{2}\cdot \mathbf{x}^{n-1}=\mathbf{a}^{2}\cdot \mathbf{x}^{1}$,
then the set $\{\mathbf{x}^{1},\mathbf{x}^{2},...,\mathbf{x}^{n-1}\}$ is a $1$-cycle. Let us give a
simple example of a $2$-cycle with respect to the directions $\mathbf{a}%
^{1}=(1,0)$ and $\mathbf{a}^{2}=(0,1)$. Consider the union
\begin{equation*}
\{0,1\}^{2}\cup \{0,2\}^{2}=\{(0,0),(1,1),(2,2),(0,1),(1,0),(0,2),(2,0)\}.
\end{equation*}
It is easy to see that this set is a $2$-cycle with the associated vector $%
(2,1,1,-1,-1,-1,-1).$ Similarly, one can construct a $q$-cycle or $q$%
-semicycle for any positive integer $q$.
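The cycle property of this seven-point set is again easy to verify mechanically. The following sketch checks that, with the associated vector $(2,1,1,-1,-1,-1,-1)$, all the sums $\sum_{j}\lambda _{j}\delta _{\mathbf{a}^{i}\cdot \mathbf{x}^{j}}$ vanish for both coordinate directions, so the set is indeed a cycle.

```python
from collections import defaultdict

# The union {0,1}^2 ∪ {0,2}^2 and its associated vector.
points = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (0, 2), (2, 0)]
lam = [2, 1, 1, -1, -1, -1, -1]

def projection_sums(direction):
    """For each value of a . x^j, the sum of lambda_j over points sharing it."""
    sums = defaultdict(int)
    for l, p in zip(lam, points):
        sums[direction[0] * p[0] + direction[1] * p[1]] += l
    return sums

# A cycle: every projection sum vanishes for both directions.
print(all(v == 0 for v in projection_sums((1, 0)).values()))  # True
print(all(v == 0 for v in projection_sums((0, 1)).values()))  # True
```

The component $\lambda _{1}=2$ at the shared point $(0,0)$ is forced, which is why this cycle is a $2$-cycle and not a $1$-cycle.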
\bigskip
\textbf{Theorem 5.6.} \textit{Assume $W=\{t_{1}\mathbf{a}^{1},...,t_{k}\mathbf{a}^{k}:\ t_{1},...,t_{k}\in \mathbb{R}\}$
is the given set of weights. If $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})=C(X)$, then $X$ contains no cycles
and the lengths (number of points) of all $q$-semicycles in $X$ are bounded
by some positive integer.}
\bigskip
\begin{proof} Let $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})=C(X).$ Then
$\mathcal{R}_{1}+\mathcal{R}_{2}+...+\mathcal{R}_{k}=C\left( X\right) $,
where
\begin{equation*}
\mathcal{R}_{i}=\{g_{i}(\mathbf{a}^{i}\cdot \mathbf{x):~}g_{i}\in C(\mathbb{%
R)}\},~i=1,2,...,k.
\end{equation*}
Consider the linear space%
\begin{equation*}
\mathcal{U}=\prod_{i=1}^{k}\mathcal{R}_{i}=\{(g_{1},\ldots ,g_{k}):~g_{i}\in
\mathcal{R}_{i},~i=1,\ldots ,k\}
\end{equation*}%
endowed with the norm
\begin{equation*}
\Vert (g_{1},\ldots ,g_{k})\Vert =\Vert g_{1}\Vert +\cdots +\Vert g_{k}\Vert
.
\end{equation*}
By $\mathcal{U}^{\ast}$ denote the dual space of $\mathcal{U}$. Each functional $%
F\in \mathcal{U}^{\ast }$ can be written as
\begin{equation*}
F=F_{1}+\cdots +F_{k},
\end{equation*}%
where the functionals $F_{i}\in \mathcal{R}_{i}^{\ast }$ and
\begin{equation*}
F_{i}(g_{i})=F[(0,\ldots ,g_{i},\ldots ,0)],\quad i=1,\ldots ,k.
\end{equation*}%
We see that the functional $F$ determines the collection $%
(F_{1},\ldots ,F_{k})$. Conversely, every collection $(F_{1},\ldots ,F_{k})$
of continuous linear functionals $F_{i}\in \mathcal{R}_{i}^{\ast }$, $%
i=1,\ldots ,k$, determines the functional $F_{1}+\cdots +F_{k}$ on $%
\mathcal{U}$. Considering this, in what follows, elements of $\mathcal{U}%
^{\ast }$ will be denoted by $(F_{1},\ldots ,F_{k})$.
It is not difficult to verify that
\begin{equation*}
\Vert (F_{1},\ldots ,F_{k})\Vert =\max \{\Vert F_{1}\Vert ,\ldots ,\Vert
F_{k}\Vert \}.\eqno(5.11)
\end{equation*}
Let $l=(\mathbf{x}^{1},\ldots ,\mathbf{x}^{n})$ be any $q$-semicycle (with
respect to the directions $\mathbf{a}^{1}$,...,$\mathbf{a}%
^{k}$) in $X$ and $\lambda =(\lambda _{1},\ldots ,\lambda _{n})$ be a vector
associated with it. Consider the following functional
\begin{equation*}
G_{l,\lambda }(f)=\sum_{j=1}^{n}\lambda _{j}f(\mathbf{x}^{j}),\quad f\in
C(X).
\end{equation*}
Since $l$ satisfies (5.10), for each function $g_{i}\in \mathcal{R}_{i}$, $%
i=1,\ldots ,k$, we have
\begin{equation*}
G_{l,\lambda }(g_{i})=\sum_{j=1}^{n}\lambda _{j}g_{i}(\mathbf{a}^{i}\cdot
\mathbf{x}^{j})=\sum_{s=1}^{r_{i}}\lambda _{i_{s}}g_{i}(\mathbf{a}^{i}\cdot
\mathbf{x}^{i_{s}}),\eqno(5.12)
\end{equation*}%
where $r_{i}\leq k$. That is, for each set $\mathcal{R}_{i}$, the functional
$G_{l,\lambda }$ can be reduced to a functional defined with the help of at
most $k$ points of the semicycle $l$.
Consider the operator
\begin{equation*}
A:\mathcal{U}\rightarrow C(X),\quad A[(g_{1},\ldots ,g_{k})]=g_{1}+\cdots
+g_{k}.
\end{equation*}%
Clearly, $A$ is a linear continuous operator with the norm $\Vert A\Vert =1$%
. Besides, since $\mathcal{R}_{1}+\mathcal{R}_{2}+...+\mathcal{R}_{k}=C(X)$,
$A$ is a surjection. Consider also the conjugate operator
\begin{equation*}
A^{\ast }:C(X)^{\ast }\rightarrow \mathcal{U}^{\ast },~A^{\ast
}[H]=(F_{1},\ldots ,F_{k}),
\end{equation*}%
where $F_{i}(g_{i})=H(g_{i})$, for any $g_{i}\in \mathcal{R}_{i}$, $%
i=1,\ldots ,k$. Set $A^{\ast }[G_{l,\lambda }]=(G_{1},\ldots ,G_{k})$. From
(5.12) it follows that
\begin{equation*}
|G_{i}(g_{i})|=|G_{l,\lambda }(g_{i})|\leq \Vert g_{i}\Vert
\sum_{s=1}^{r_{i}}|\lambda _{i_{s}}|\leq kq\Vert g_{i}\Vert ,\quad
i=1,\ldots ,k.
\end{equation*}%
Therefore,
\begin{equation*}
\Vert G_{i}\Vert \leq kq,\quad i=1,\ldots ,k.
\end{equation*}%
From (5.11) we obtain that
\begin{equation*}
\Vert A^{\ast }[G_{l,\lambda }]\Vert =\Vert (G_{1},\ldots ,G_{k})\Vert \leq
kq.\eqno(5.13)
\end{equation*}%
Since $A$ is a surjection, there exists a positive real number $\delta $
such that
\begin{equation*}
\Vert A^{\ast }[H]\Vert >\delta \Vert H\Vert
\end{equation*}%
for any functional $H\in C(X)^{\ast }$ (see \cite[p.100]{122}). Taking into account that $%
\Vert G_{l,\lambda }\Vert =\sum_{j=1}^{n}|\lambda _{j}|$, for the functional
$G_{l,\lambda }$ we have
\begin{equation*}
\Vert A^{\ast }[G_{l,\lambda }]\Vert >\delta \sum_{j=1}^{n}|\lambda _{j}|.%
\eqno(5.14)
\end{equation*}%
It follows from (5.13) and (5.14) that
\begin{equation*}
\delta <\frac{kq}{\sum_{j=1}^{n}|\lambda _{j}|}.
\end{equation*}%
The last inequality shows that $n$ (the length of the arbitrarily chosen
$q$-semicycle $l$) cannot be arbitrarily large, since otherwise we would
have $\delta =0$. This means that there is a positive integer bounding the
lengths of all $q$-semicycles in $X$.
It remains to show that there are no cycles in $X$. Indeed, if $l=(\mathbf{x}%
^{1},\ldots ,\mathbf{x}^{n})$ is a cycle in $X$ and $\lambda =(\lambda
_{1},\ldots ,\lambda _{n})$ is a vector associated with it, then the above
functional $G_{l,\lambda }$ annihilates all functions from $\mathcal{R}_{1}+%
\mathcal{R}_{2}+...+\mathcal{R}_{k}$. On the other hand, $G_{l,\lambda
}(f)=\sum_{j=1}^{n}|\lambda _{j}|\neq 0$ for a continuous function $f$ on $X$
satisfying the conditions $f(\mathbf{x}^{j})=1$ if $\lambda _{j}>0$ and $f(%
\mathbf{x}^{j})=-1$ if $\lambda _{j}<0$, $j=1,\ldots ,n$. This implies that $%
\mathcal{R}_{1}+\mathcal{R}_{2}+...+\mathcal{R}_{k}\neq C\left( X\right) $.
Since $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})\subseteq \mathcal{R}_{1}+%
\mathcal{R}_{2}+...+\mathcal{R}_{k}$, we obtain that $\mathcal{M}_{X}(\sigma
;W,\mathbb{R})\neq C\left( X\right) $, contrary to our assumption.
\end{proof}
\bigskip
\textbf{Remark 5.2.} Assume $\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ is dense
in $C(X).$ Is it necessarily closed? Theorem 5.6 may describe cases when it
is not. For example, let $\mathbf{a}^{1}=(1;-1),\ \mathbf{a}^{2}=(1;1),$ $%
W=\{\mathbf{a}^{1},\mathbf{a}^{2}\}$ and $\sigma $ be any continuous,
bounded and nonconstant function, which has a limit at infinity. Consider
the set
\begin{eqnarray*}
X &=&\{(2;\frac{2}{3}),(\frac{2}{3};\frac{2}{3}),(0;0),(1;1),(1+\frac{1}{2}%
;1-\frac{1}{2}),(1+\frac{1}{2}+\frac{1}{4};1-\frac{1}{2}+\frac{1}{4}), \\
&&(1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8};1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8%
}),...\}.
\end{eqnarray*}
It is clear that $X$ is a compact set with all its orbits closed. (In fact,
there is only one orbit, which coincides with $X$). Hence, by Theorem 5.4, $%
\overline{\mathcal{M}_{X}(\sigma ;W,\mathbb{R})}=C(X).$ But by Theorem 5.6, $%
\mathcal{M}_{X}(\sigma ;W,\mathbb{R})\neq C(X).$ Therefore, the set $%
\mathcal{M}_{X}(\sigma ;W,\mathbb{R})$ is not closed in $C(X).$
\bigskip
\section{Two hidden layer neural networks}
A single hidden layer perceptron can approximate given data with any degree
of accuracy. In applications, however, one must decide how many neurons to
place in the hidden layer. The more neurons there are, the more likely the
network is to produce precise results. Unfortunately, practicality decreases
as the number of hidden neurons grows. In other words, single hidden layer
perceptrons are not always effective if the number of neurons in the hidden
layer is prescribed. In this section, we show that this limitation
disappears for perceptrons with two hidden layers. We prove that a two
hidden layer neural network with $d$ inputs, $d$ neurons in the first hidden
layer, $2d+2$ neurons in the second hidden layer and with a specifically
constructed sigmoidal and infinitely differentiable activation function can
approximate any continuous multivariate function with arbitrary accuracy.
\subsection{Relation of the Kolmogorov superposition theorem to two hidden
layer neural networks}
Note that if $r$ is fixed in (5.1), then the set
\begin{equation*}
\mathcal{M}_{r}(\sigma )=\left\{ \sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i}):~c_{i},\theta _{i}\in \mathbb{R},~\mathbf{w}^{i}\in \mathbb{R}^{d}\right\}
\end{equation*}%
is no longer dense in the space $C(\mathbb{R}^{d})$ (in the topology of
uniform convergence on compact sets) for any activation function $\sigma $.
The set $\mathcal{M}_{r}(\sigma )$ is not dense even if we vary $\sigma $
over all univariate continuous functions (see \cite[Theorem 5.1]{95}).
In the following, we will see that this property of single hidden
layer neural networks does not carry over to networks with more than one
hidden layer.
A two hidden layer network is obtained by iterating the single hidden layer
model. The output of a two hidden layer perceptron with $r$ units in the
first layer, $s$ units in the second layer and input $\mathbf{x}=(x_{1},...,x_{d})$ is
\begin{equation*}
\sum_{i=1}^{s}d_{i}\sigma \left( \sum_{j=1}^{r}c_{ij}\sigma (\mathbf{w}%
^{ij}\cdot \mathbf{x-}\theta _{ij})-\gamma _{i}\right) .
\end{equation*}%
Here $d_{i},c_{ij},\theta _{ij},\gamma _{i}$ are real numbers, $\mathbf{w}%
^{ij}$ are vectors of $\mathbb{R}^{d}$ and $\sigma $ is a fixed univariate
function.
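The formula above is straightforward to evaluate directly. The following sketch is our own illustration (it is not part of the text): the logistic function stands in for $\sigma$, and all weights, thresholds and sizes are arbitrary sample values.

```python
import math

def sigma(t):
    """Activation function (the logistic function, as a stand-in)."""
    return 1.0 / (1.0 + math.exp(-t))

def two_layer_output(x, w, theta, c, gamma, d_out):
    """sum_i d_i * sigma( sum_j c_ij * sigma(w_ij . x - theta_ij) - gamma_i )."""
    s = len(d_out)   # units in the second hidden layer
    r = len(c[0])    # units in the first hidden layer
    out = 0.0
    for i in range(s):
        inner = sum(
            c[i][j] * sigma(sum(wk * xk for wk, xk in zip(w[i][j], x)) - theta[i][j])
            for j in range(r)
        )
        out += d_out[i] * sigma(inner - gamma[i])
    return out

# A tiny sample network: d = 2 inputs, r = 2 first-layer units, s = 1 second-layer unit
x = (0.3, 0.7)
w = [[(1.0, -1.0), (0.5, 0.5)]]   # w[i][j] is the weight vector of unit (i, j)
theta = [[0.0, 0.1]]
c = [[1.0, -2.0]]
gamma = [0.25]
d_out = [1.5]
y = two_layer_output(x, w, theta, c, gamma, d_out)

# Since 0 < sigma < 1, the output is strictly between 0 and d_1 = 1.5 here.
assert 0.0 < y < 1.5
```
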
In many applications, it is convenient to take the activation function $%
\sigma $ to be a \textit{sigmoidal function}, that is, a function satisfying
\begin{equation*}
\lim_{t\rightarrow -\infty }\sigma (t)=0\quad \text{ and }\quad
\lim_{t\rightarrow +\infty }\sigma (t)=1.
\end{equation*}%
The literature on neural networks abounds with the use of such functions and
their superpositions. The following are typical examples of sigmoidal
functions:
\begin{align*}
\sigma (t)& =\frac{1}{1+e^{-t}} & & \text{(the squashing function),} \\
\sigma (t)& =%
\begin{cases}
0, & t\leq -1, \\
\dfrac{t+1}{2}, & -1\leq t\leq 1, \\
1, & t\geq 1%
\end{cases}
& & \text{(the piecewise linear function),} \\
\sigma (t)& =\frac{1}{\pi }\arctan t+\frac{1}{2} & & \text{(the arctan
sigmoid function),} \\
\sigma (t)& =\frac{1}{\sqrt{2\pi }}\int\limits_{-\infty }^{t}e^{-x^{2}/2}dx
& & \text{(the Gaussian function).}
\end{align*}
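These four examples can be checked numerically. In the sketch below (our own code, with function names of our choosing), the Gaussian example is expressed in closed form through the error function.

```python
import math

def squashing(t):
    """Logistic (squashing) function: 1 / (1 + e^{-t})."""
    return 1.0 / (1.0 + math.exp(-t))

def piecewise_linear(t):
    """0 for t <= -1, (t + 1)/2 on [-1, 1], 1 for t >= 1."""
    return min(1.0, max(0.0, (t + 1.0) / 2.0))

def arctan_sigmoid(t):
    """(1/pi) * arctan(t) + 1/2."""
    return math.atan(t) / math.pi + 0.5

def gaussian_sigmoid(t):
    """(1/sqrt(2*pi)) * integral_{-inf}^{t} e^{-x^2/2} dx, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# All four are sigmoidal: they tend to 0 at -infinity and to 1 at +infinity,
# and each equals 1/2 at t = 0.  (The arctan sigmoid converges only like 1/t,
# hence the loose tolerance.)
for s in (squashing, piecewise_linear, arctan_sigmoid, gaussian_sigmoid):
    assert abs(s(0.0) - 0.5) < 1e-12
    assert s(-40.0) < 1e-2 and s(40.0) > 1 - 1e-2
```
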
In this section, we prove that there exists a two hidden layer neural
network model with $d$ units in the first layer and $2d+2$ units in the
second layer such that it has the ability to approximate any $d$-variable
continuous function with arbitrary accuracy. As an activation function for
this model we take a specific sigmoidal function. The idea behind the proof
of this result is very much connected to the Kolmogorov superposition
theorem (see Section 4.1). This theorem has been much discussed in neural
network literature (see, e.g., \cite{119}). In our opinion, the most
remarkable application of the Kolmogorov superposition theorem to neural
networks was given by Maiorov and Pinkus \cite{99}. They showed that there
exists a sigmoidal, strictly increasing, analytic activation function, for
which a fixed number of units in both hidden layers are sufficient to
approximate arbitrarily well any continuous multivariate function. Namely,
the authors of \cite{99} proved the following theorem.
\bigskip
\textbf{Theorem 5.7 (Maiorov and Pinkus \cite{99}).} \textit{There exists an
activation function $\sigma $ which is analytic, strictly increasing and
sigmoidal and has the following property: For any $f\in C[0,1]^{d}$ and $%
\varepsilon >0,$ there exist constants $d_{i},$ $c_{ij},$ $\theta _{ij},$ $%
\gamma _{i}$, and vectors $\mathbf{w}^{ij}\in \mathbb{R}^{d}$ for which}
\begin{equation*}
\left\vert f(\mathbf{x})-\sum_{i=1}^{6d+3}d_{i}\sigma \left(
\sum_{j=1}^{3d}c_{ij}\sigma (\mathbf{w}^{ij}\cdot \mathbf{x-}\theta
_{ij})-\gamma _{i}\right) \right\vert <\varepsilon \eqno(5.15)
\end{equation*}%
\textit{for all $\mathbf{x}=(x_{1},...,x_{d})\in \lbrack 0,1]^{d}.$}
\bigskip
This theorem is based on the following version of the Kolmogorov
superposition theorem given by Lorentz \cite{98} and Sprecher \cite{128}.
\bigskip
\textbf{Theorem 5.8 (Kolmogorov's superposition theorem).} \textit{For the
unit cube $\mathbb{I}^{d},~\mathbb{I}=[0,1],~d\geq 2,$ there exist
constants $\lambda _{q}>0,$ $q=1,...,d,$ $\sum_{q=1}^{d}\lambda _{q}=1,$ and
nondecreasing continuous functions $\phi _{p}:[0,1]\rightarrow \lbrack 0,1],$
$p=1,...,2d+1,$ such that every continuous function
$f:\mathbb{I}^{d}\rightarrow \mathbb{R}$ admits the representation}
\begin{equation*}
f(x_{1},...,x_{d})=\sum_{p=1}^{2d+1}g\left( \sum_{q=1}^{d}\lambda _{q}\phi
_{p}(x_{q})\right) \eqno(5.16)
\end{equation*}%
\textit{for some $g\in C[0,1]$ depending on $f.$}
\bigskip
In the next subsection, using the general ideas developed in \cite{99}, we
show that the numbers of units in the hidden layers in (5.15) can be chosen
equal to the bounds in the Kolmogorov superposition theorem. More precisely,
these bounds can be taken as $2d+2$ and $d$ instead of $6d+3$ and $3d$. To
attain this purpose, we relax the ``analyticity" of $\sigma $ to ``infinite
differentiability". In addition, near infinity we assume that $\sigma $ is
``$\lambda $-strictly increasing" instead of being ``strictly increasing".
\bigskip
\subsection{The main result}
\smallskip
We begin this subsection with a definition of a \textit{$\lambda$-monotone
function}. Let $\lambda $ be any nonnegative number. A real function $f$
defined on $(a,b)$ is called \textit{$\lambda $-increasing} (\textit{$\lambda $-decreasing})
if there exists an increasing (decreasing) function $u:(a,b)\rightarrow
\mathbb{R}$ such that $\left\vert f(x)-u(x)\right\vert \leq \lambda ,$ for
all $x\in (a,b)$. If $u$ is strictly increasing (or strictly decreasing),
then the above function $f$ is called a \textit{$\lambda $-strictly increasing} (or \textit{$%
\lambda $-strictly decreasing}) function. Clearly, $0$-monotonicity coincides
with the usual concept of monotonicity and a $\lambda _{1}$-monotone
function is $\lambda _{2}$-monotone if $\lambda _{1}\leq \lambda _{2}$. It
is also clear from the definition that a $\lambda $-monotone function
behaves like a usual monotone function as $\lambda $ gets very small.
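A numeric illustration of the definition (our own example, not taken from the text): with the witness $u(x)=x$, the function $f(x)=x+\lambda\cos(6x)$ is $\lambda$-strictly increasing, yet $f$ itself fails to be monotone once $6\lambda>1$.

```python
import math

lam = 0.5  # the parameter lambda; here 6 * lam = 3 > 1

def u(x):
    """Strictly increasing witness function."""
    return x

def f(x):
    """lambda-strictly increasing, but not monotone itself."""
    return x + lam * math.cos(6.0 * x)

xs = [i / 1000.0 for i in range(-2000, 2001)]

# |f(x) - u(x)| <= lambda everywhere, so f is lambda-strictly increasing.
assert all(abs(f(x) - u(x)) <= lam + 1e-12 for x in xs)

# Yet f decreases somewhere (f'(x) = 1 - 3 sin(6x) takes negative values),
# so f itself is not monotone.
assert any(f(xs[i + 1]) < f(xs[i]) for i in range(len(xs) - 1))
```
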
Our purpose is to prove the following theorem.
\bigskip
\textbf{Theorem 5.9.} \textit{For any positive numbers $\alpha $ and
$\lambda $, there exists a $C^{\infty }(\mathbb{R})$
sigmoidal activation function $\sigma :\mathbb{R}\rightarrow \mathbb{R}$ which is
strictly increasing on $(-\infty ,\alpha )$, $\lambda$-strictly
increasing on $[\alpha ,+\infty )$, and satisfies the following property:
For any $f\in C[0,1]^{d}$ and $\varepsilon >0,$ there exist constants $%
d_{p}, $ $c_{pq},$ $\theta _{pq},$ $\gamma _{p}$, and vectors $\mathbf{w}%
^{pq}\in \mathbb{R}^{d}$ for which}
\begin{equation*}
\left\vert f(\mathbf{x})-\sum_{p=1}^{2d+2}d_{p}\sigma \left(
\sum_{q=1}^{d}c_{pq}\sigma (\mathbf{w}^{pq}\cdot \mathbf{x-}\theta
_{pq})-\gamma _{p}\right) \right\vert <\varepsilon \eqno(5.17)
\end{equation*}%
\textit{for all $\mathbf{x}=(x_{1},...,x_{d})\in \lbrack 0,1]^{d}.$}
\bigskip
\begin{proof} Let $\alpha $ be any positive number. Divide the interval $%
[\alpha ,+\infty )$ into the segments $[\alpha ,2\alpha ],$ $[2\alpha
,3\alpha ],...$. Let $h(t)$ be any strictly increasing, infinitely
differentiable function on $[\alpha ,+\infty )$ with the properties
\bigskip
1) $0<h(t)<1$ for all $t\in \lbrack \alpha ,+\infty )$;
2) $1-h(\alpha )\leq \lambda ;$
3) $h(t)\rightarrow 1,$ as $t\rightarrow +\infty .$
\bigskip
The existence of a strictly increasing smooth function satisfying these
properties is easy to verify. Note that from conditions (1)-(3) it follows
that any function $f(t)$ satisfying the inequality $h(t)<f(t)<1$ for all
$t\in \lbrack \alpha ,+\infty )$ is $\lambda $-strictly increasing and
$f(t)\rightarrow 1$ as $t\rightarrow +\infty .$
We are going to construct $\sigma $ obeying the required properties in
stages. Let $\{u_{n}(t)\}_{n=1}^{\infty }$ be the sequence of all
polynomials with rational coefficients defined on $[0,1].$ First, we define $%
\sigma $ on the closed intervals $[(2m-1)\alpha ,2m\alpha ],$ $m=1,2,...$,
as the function
\begin{equation*}
\sigma (t)=a_{m}+b_{m}u_{m}(\frac{t}{\alpha }-2m+1),\text{ }t\in \lbrack
(2m-1)\alpha ,2m\alpha ],\eqno(5.18)
\end{equation*}%
or equivalently,
\begin{equation*}
\sigma (\alpha t+(2m-1)\alpha )=a_{m}+b_{m}u_{m}(t),\text{ }t\in \lbrack
0,1],\eqno(5.19)
\end{equation*}%
where $a_{m}$ and $b_{m}\neq 0$ are appropriately chosen constants. These
constants are determined from the condition
\begin{equation*}
h(t)<\sigma (t)<1,\eqno(5.20)
\end{equation*}%
for all $t\in \lbrack (2m-1)\alpha ,2m\alpha ].$ There is a simple
procedure for determining a suitable pair of $a_{m}$ and $b_{m}$. Indeed,
let
\begin{equation*}
M=\max h(t)\text{, }A_{1}=\min u_{m}(\frac{t}{\alpha }-2m+1)\text{, }%
A_{2}=\max u_{m}(\frac{t}{\alpha }-2m+1),
\end{equation*}%
where in all the above $\max $ and $\min $, the variable $t$ runs over the
closed interval $[(2m-1)\alpha ,2m\alpha ].$ Note that $M<1$. If $%
A_{1}=A_{2} $ (that is, if the function $u_{m}$ is constant on $[0,1]$),
then we can set $\sigma (t)=(1+M)/2$ and easily find a suitable pair of $%
a_{m}$ and $b_{m}$ from (5.18). Now let $A_{1}\neq A_{2}$ and let $y=a+bx,$
$b\neq 0,$ be a linear function mapping the segment $[A_{1},A_{2}]$ into
$(M,1).$ Then it is enough to take $a_{m}=a$ and $b_{m}=b.$
At the second stage we define $\sigma $ on the intervals $[2m\alpha
,(2m+1)\alpha ],$ $m=1,2,...,$ so that it is in $C^{\infty }(\mathbb{R})$
and satisfies the inequality (5.20). Finally, in all of $(-\infty ,\alpha )$
we define $\sigma $ while maintaining the $C^{\infty }$ strict monotonicity
property, and also in such a way that $\lim_{t\rightarrow -\infty }\sigma
(t)=0.$ We obtain from the properties of $h$ and the condition (5.20) that $%
\sigma (t)$ is a $\lambda $-strictly increasing function on the interval $%
[\alpha ,+\infty )$ and $\sigma (t)\rightarrow 1$, as $t\rightarrow +\infty
. $
From the above construction of $\sigma $, that is, from (5.19), it follows
that for each $m=1,2,...,$ there exist numbers $A_{m}$, $B_{m}$ and $r_{m}$
such that
\begin{equation*}
u_{m}(t)=A_{m}\sigma (\alpha t-r_{m})-B_{m},\eqno(5.21)
\end{equation*}%
where $A_{m}\neq 0.$
Let $f$ be any continuous function on the unit cube $[0,1]^{d}.$ By the
Kolmogorov superposition theorem the expansion (5.16) is valid for $f.$ For
the exterior continuous univariate function $g(t)$ in (5.16) and for any $%
\varepsilon >0$ there exists a polynomial $u_{m}(t)$ of the above form such
that
\begin{equation*}
\left\vert g(t)-u_{m}(t)\right\vert <\frac{\varepsilon }{2(2d+1)},
\end{equation*}%
for all $t\in \lbrack 0,1].$ This together with (5.21) means that
\begin{equation*}
\left\vert g(t)-[a\sigma (\alpha t-r)-b]\right\vert <\frac{\varepsilon }{%
2(2d+1)},\eqno(5.22)
\end{equation*}%
for some $a,b,r\in \mathbb{R}$ and all $t\in \lbrack 0,1].$
Substituting (5.22) in (5.16) we obtain that
\begin{equation*}
\left\vert f(x_{1},...,x_{d})-\sum_{p=1}^{2d+1}\left( a\sigma \left( \alpha
\cdot \sum_{q=1}^{d}\lambda _{q}\phi _{p}(x_{q})-r\right) -b\right)
\right\vert <\frac{\varepsilon }{2}\eqno(5.23)
\end{equation*}%
for all $(x_{1},...,x_{d})\in \lbrack 0,1]^{d}.$
For each $p\in \{1,2,...,2d+1\}$ and $\delta >0$ there exist constants $%
a_{p},b_{p}$ and $r_{p}$ such that
\begin{equation*}
\left\vert \phi _{p}(x_{q})-[a_{p}\sigma (\alpha
x_{q}-r_{p})-b_{p}]\right\vert <\delta ,\eqno(5.24)
\end{equation*}%
for all $x_{q}\in \lbrack 0,1].$ Since $\lambda _{q}>0,$ $q=1,...,d,$ $%
\sum_{q=1}^{d}\lambda _{q}=1,$ it follows from (5.24) that
\begin{equation*}
\left\vert \sum_{q=1}^{d}\lambda _{q}\phi _{p}(x_{q})-\left[
\sum_{q=1}^{d}\lambda _{q}a_{p}\sigma (\alpha x_{q}-r_{p})-b_{p}\right]
\right\vert <\delta ,\eqno(5.25)
\end{equation*}%
for all $(x_{1},...,x_{d})\in \lbrack 0,1]^{d}.$
Now since the function $a\sigma (\alpha t-r)$ is uniformly
continuous on every closed interval, we can choose $\delta $ sufficiently
small and obtain from (5.25) that
\begin{equation*}
\left\vert \sum_{p=1}^{2d+1}a\sigma \left( \alpha
\sum_{q=1}^{d}\lambda _{q}\phi _{p}(x_{q})-r\right)
-\sum_{p=1}^{2d+1}a\sigma \left( \alpha \left[ \sum_{q=1}^{d}\lambda
_{q}a_{p}\sigma (\alpha x_{q}-r_{p})-b_{p}\right] -r\right) \right\vert
<\frac{\varepsilon }{2}.
\end{equation*}
This inequality may be rewritten as
\begin{equation*}
\left\vert \sum_{p=1}^{2d+1}a\sigma \left( \alpha
\sum_{q=1}^{d}\lambda _{q}\phi _{p}(x_{q})-r\right)
-\sum_{p=1}^{2d+1}d_{p}\sigma \left( \sum_{q=1}^{d}c_{pq}\sigma (\mathbf{w}%
^{pq}\cdot \mathbf{x}-\theta _{pq})-\gamma _{p}\right) \right\vert <\frac{%
\varepsilon }{2}.\eqno(5.26)
\end{equation*}%
From (5.23) and (5.26) it follows that
\begin{equation*}
\left\vert f(\mathbf{x})-\left[ \sum_{p=1}^{2d+1}d_{p}\sigma \left(
\sum_{q=1}^{d}c_{pq}\sigma (\mathbf{w}^{pq}\cdot \mathbf{x-}\theta
_{pq})-\gamma _{p}\right) -s\right] \right\vert <\varepsilon ,\eqno(5.27)
\end{equation*}%
where $s=(2d+1)b$. Since the constant $s$ can be written in the form
\begin{equation*}
s=\tilde{d}\,\sigma \left( \sum_{q=1}^{d}c_{q}\sigma (\mathbf{w}^{q}\cdot \mathbf{x}-\theta _{q})-\gamma \right)
\end{equation*}
for a suitable constant $\tilde{d}$, from (5.27) we finally obtain the
validity of (5.17).
\end{proof}
\textbf{Remark 5.3.} It is easily seen from the proof of Theorem 5.9 that all
the weights $\mathbf{w}^{ij}$ can be fixed (see (5.26)). Namely,
$\mathbf{w}^{ij}=\alpha \mathbf{e}^{j},$ for all $i=1,...,2d+2,$ $j=1,...,d,$
where $\mathbf{e}^{j}$ is the $j$-th coordinate vector of the space $\mathbb{R}^{d}$.
\bigskip
The next theorem follows from Theorem 5.9 easily, since the Kolmogorov
superposition theorem is valid for all compact sets of $\mathbb{R}^{d}$.
\bigskip
\textbf{Theorem 5.10.} \textit{Let $Q$ be a compact set in $\mathbb{R}^{d}.$
For any numbers $\alpha \in \mathbb{R}$ and $\lambda >0,$ there exists a
$C^{\infty }(\mathbb{R})$ sigmoidal activation function
$\sigma :\mathbb{R}\rightarrow \mathbb{R}$ which is strictly increasing on
$(-\infty ,\alpha )$, $\lambda$-strictly increasing on $[\alpha ,+\infty )$, and
satisfies the following property: For any $f\in C(Q)$ and $\varepsilon >0$
there exist real numbers $d_{i},$ $c_{ij},$ $\theta _{ij}$, $\gamma _{i},$
and vectors $\mathbf{w}^{ij}\in \mathbb{R}^{d}$ for which}
\begin{equation*}
\left\vert f(\mathbf{x})-\sum_{i=1}^{2d+2}d_{i}\sigma \left(
\sum_{j=1}^{d}c_{ij}\sigma (\mathbf{w}^{ij}\cdot \mathbf{x-}\theta
_{ij})-\gamma _{i}\right) \right\vert <\varepsilon
\end{equation*}%
\textit{for all $\mathbf{x}=(x_{1},...,x_{d})\in Q.$}
\bigskip
\textbf{Remark 5.4.} In some literature, a single hidden layer perceptron is
defined as the function
\begin{equation*}
\sum_{i=1}^{r}c_{i}\sigma (\mathbf{w}^{i}\mathbf{\cdot x}-\theta _{i})-c_{0}.
\end{equation*}%
A two hidden layer network then takes the form
\begin{equation*}
\sum_{i=1}^{s}d_{i}\sigma \left( \sum_{j=1}^{r}c_{ij}\sigma (\mathbf{w}%
^{ij}\cdot \mathbf{x-}\theta _{ij})-\gamma _{i}\right) -d_{0}.\eqno(5.28)
\end{equation*}%
The proof of Theorem 5.9 shows that for networks of type (5.28) the theorem
is valid if we take $2d+1$ neurons in the second hidden layer (instead of $%
2d+2$ neurons as above). That is, there exist networks of type (5.28) having
the universal approximation property and for which the number of units in
the hidden layers is equal to the number of summands in the Kolmogorov
superposition theorem.
\bigskip
\textbf{Remark 5.5.} It is known that the number $2d+1$ of summands in the
Kolmogorov superposition theorem is minimal (see Sternfeld \cite{130}). Thus
it is doubtful that the number of neurons in Theorems 5.9 and 5.10 can be
reduced.
\bigskip
\textbf{Remark 5.6.} Inequality (5.22) shows that single hidden layer neural
networks of the form (5.28) with the activation function $\sigma$ and with
only one neuron in the hidden layer can approximate any continuous function
on the interval $[0,1]$ with arbitrary precision. Since the number $b$ in
(5.22) can always be written as $b=a_1\sigma (0 \cdot t-r_1)$ for some $a_1$
and $r_1$, we see that two neurons in the hidden layer are sufficient for
traditional single hidden layer neural networks with the activation function
$\sigma$ to approximate continuous functions on $[0,1]$. Applying the linear
transformation $x=a+(b-a)t$ it can be proven that the same argument holds
for any interval $[a,b]$.
\bigskip
\section{Construction of a universal sigmoidal function}
In the preceding section, we considered two theorems (Theorem 5.7 of Maiorov
and Pinkus, and Theorem 5.9) on the approximation capabilities of the MLP
model of neural networks with a prescribed number of hidden neurons. Note
that both results are more theoretical than practical, as they indicate only
the existence of the corresponding activation functions.
In this section, we construct algorithmically a smooth, sigmoidal, almost
monotone activation function $\sigma $ providing approximation to an
arbitrary continuous function within any degree of accuracy. This algorithm
is implemented in a computer program, which computes the value of $\sigma $
at any reasonable point of the real axis.
\bigskip
\subsection{A construction algorithm}
In this subsection, we construct algorithmically a sigmoidal function $%
\sigma $ which we use in our results in Section 5.3.3.
To start with the construction of $\sigma$, assume that we are given a
closed interval $[a, b]$ and a sufficiently small real number $\lambda$. We
construct $\sigma$ algorithmically, based on two numbers, namely $\lambda$
and $d := b - a$. The following steps describe the algorithm.
\textit{Step 1.} Introduce the function
\begin{equation*}
h(x) := 1 - \frac{\min\{1/2, \lambda\}}{1 + \log(x - d + 1)}.
\end{equation*}
Note that this function is strictly increasing on $[d, +\infty)$ and
satisfies the following properties:
\begin{enumerate}
\item $0 < h(x) < 1$ for all $x \in [d, +\infty)$;
\item $1 - h(d) \le \lambda$;
\item $h(x) \to 1$, as $x \to +\infty$.
\end{enumerate}
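Properties (1)-(3) of $h$ are easy to confirm numerically. The sketch below checks them for arbitrary sample values of $\lambda$ and $d$ (our own choices); note that $h$ approaches $1$ only logarithmically, so very large arguments are needed to get close to the limit.

```python
import math

def make_h(lam, d):
    """h(x) = 1 - min(1/2, lambda) / (1 + log(x - d + 1)), for x >= d."""
    c = min(0.5, lam)
    def h(x):
        return 1.0 - c / (1.0 + math.log(x - d + 1.0))
    return h

lam, d = 0.25, 2.0          # sample parameters
h = make_h(lam, d)

# (1) 0 < h(x) < 1 on [d, +infinity)
assert all(0.0 < h(d + t) < 1.0 for t in [0.0, 1.0, 10.0, 1e6])

# (2) 1 - h(d) <= lambda (at x = d the log term vanishes)
assert 1.0 - h(d) <= lam + 1e-12

# (3) h(x) -> 1 as x -> +infinity, and h is increasing on [d, +infinity)
assert h(d + 1e300) > 1.0 - 1e-3
assert all(h(d + t) < h(d + t + 1.0) for t in [0.0, 1.0, 10.0, 100.0])
```
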
We want to construct $\sigma $ satisfying the inequalities
\begin{equation*}
h(x)<\sigma (x)<1\eqno(5.29)
\end{equation*}%
for $x\in \lbrack d,+\infty )$. Then our $\sigma $ will tend to $1$ as $x$
tends to $+\infty $ and obey the inequality
\begin{equation*}
|\sigma (x)-h(x)|\leq \lambda ,
\end{equation*}%
i.e., it will be a $\lambda $-increasing function.
\textit{Step 2.} Before proceeding to the construction of $\sigma$, we need
to enumerate the monic polynomials with rational coefficients. Let $q_n$ be
the Calkin--Wilf sequence (see~\cite{CW00}). Then we can enumerate all the
rational numbers by setting
\begin{equation*}
r_0 := 0, \quad r_{2n} := q_n, \quad r_{2n-1} := -q_n, \ n = 1, 2, \dots.
\end{equation*}
Note that each monic polynomial with rational coefficients can uniquely be
written as $r_{k_0} + r_{k_1} x + \ldots + r_{k_{l-1}} x^{l-1} + x^l$, and
each positive rational number determines a unique finite continued fraction
\begin{equation*}
[m_0; m_1, \ldots, m_l] := m_0 + \dfrac1{m_1 + \dfrac1{m_2 + \dfrac1{\ddots
+ \dfrac1{m_l}}}}
\end{equation*}
with $m_0 \ge 0$, $m_1, \ldots, m_{l-1} \ge 1$ and $m_l \ge 2$. We now
construct a bijection between the set of all monic polynomials with rational
coefficients and the set of all positive rational numbers as follows. To the
only zeroth-degree monic polynomial 1 we associate the rational number 1, to
each first-degree monic polynomial of the form $r_{k_0} + x$ we associate
the rational number $k_0 + 2$, to each second-degree monic polynomial of the
form $r_{k_0} + r_{k_1} x + x^2$ we associate the rational number $[k_0; k_1
+ 2] = k_0 + 1 / (k_1 + 2)$, and to each monic polynomial
\begin{equation*}
r_{k_0} + r_{k_1} x + \ldots + r_{k_{l-2}} x^{l-2} + r_{k_{l-1}} x^{l-1} +
x^l
\end{equation*}
of degree $l \ge 3$ we associate the rational number $[k_0; k_1 + 1, \ldots,
k_{l-2} + 1, k_{l-1} + 2]$. In other words, we define $u_1(x) := 1$,
\begin{equation*}
u_n(x) := r_{q_n-2} + x
\end{equation*}
if $q_n \in \mathbb{Z}$,
\begin{equation*}
u_n(x) := r_{m_0} + r_{m_1-2} x + x^2
\end{equation*}
if $q_n = [m_0; m_1]$, and
\begin{equation*}
u_n(x) := r_{m_0} + r_{m_1-1} x + \ldots + r_{m_{l-2}-1} x^{l-2} +
r_{m_{l-1}-2} x^{l-1} + x^l
\end{equation*}
if $q_n = [m_0; m_1, \ldots, m_{l-2}, m_{l-1}]$ with $l \ge 3$. For example,
the first few elements of this sequence are
\begin{equation*}
1, \quad x^2, \quad x, \quad x^2 - x, \quad x^2 - 1, \quad x^3, \quad x - 1,
\quad x^2 + x, \quad \ldots.
\end{equation*}
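Steps 1-2 are concrete enough to implement. The sketch below (our own code) generates the Calkin--Wilf sequence via Newman's recurrence $q_{n+1}=1/(2\lfloor q_n\rfloor+1-q_n)$, builds the enumeration $r_k$ of the rationals, and reproduces the first eight polynomials $u_n$ listed above; coefficients are returned lowest degree first, in exact rational arithmetic.

```python
from fractions import Fraction

def calkin_wilf(n):
    """First n terms q_1, ..., q_n of the Calkin--Wilf sequence (Newman's recurrence)."""
    q = [Fraction(1)]
    while len(q) < n:
        t = q[-1]
        q.append(1 / (2 * (t.numerator // t.denominator) + 1 - t))
    return q

Q = calkin_wilf(64)

def r(k):
    """Enumeration of the rationals: r_0 = 0, r_{2n} = q_n, r_{2n-1} = -q_n."""
    if k == 0:
        return Fraction(0)
    return Q[k // 2 - 1] if k % 2 == 0 else -Q[(k + 1) // 2 - 1]

def contfrac(x):
    """Canonical (Euclidean) continued fraction of a positive rational;
    for non-integers its last term is automatically >= 2."""
    terms = []
    while True:
        a = x.numerator // x.denominator
        terms.append(a)
        if x == a:
            return terms
        x = 1 / (x - a)

def u(n):
    """Coefficients of the monic polynomial u_n, lowest degree first."""
    if n == 1:
        return [Fraction(1)]                      # u_1(x) = 1
    qn = Q[n - 1]
    if qn.denominator == 1:                       # q_n integer: u_n = r_{q_n - 2} + x
        return [r(int(qn) - 2), Fraction(1)]
    m = contfrac(qn)
    l = len(m)                                    # degree of u_n
    if l == 2:                                    # [m_0; m_1]: r_{m_0} + r_{m_1 - 2} x + x^2
        return [r(m[0]), r(m[1] - 2), Fraction(1)]
    return ([r(m[0])] + [r(m[j] - 1) for j in range(1, l - 1)]
            + [r(m[l - 1] - 2), Fraction(1)])

# The first eight polynomials: 1, x^2, x, x^2 - x, x^2 - 1, x^3, x - 1, x^2 + x
expected = [[1], [0, 0, 1], [0, 1], [0, -1, 1],
            [-1, 0, 1], [0, 0, 0, 1], [-1, 1], [0, 1, 1]]
assert [u(n) for n in range(1, 9)] == [[Fraction(c) for c in p] for p in expected]
```
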
\textit{Step 3.} We start with constructing $\sigma$ on the intervals $%
[(2n-1)d, 2nd]$, $n = 1, 2, \ldots$. For each monic polynomial $u_n(x) =
\alpha_0 + \alpha_1 x + \ldots + \alpha_{l-1} x^{l-1} + x^l$, set
\begin{equation*}
B_1 := \alpha_0 + \frac{\alpha_1-|\alpha_1|}{2} + \ldots + \frac{%
\alpha_{l-1} - |\alpha_{l-1}|}{2}
\end{equation*}
and
\begin{equation*}
B_2 := \alpha_0 + \frac{\alpha_1+|\alpha_1|}{2} + \ldots + \frac{%
\alpha_{l-1} + |\alpha_{l-1}|}{2} + 1.
\end{equation*}
Note that the numbers $B_1$ and $B_2$ depend on $n$. To avoid complication
of symbols, we do not indicate this in the notation.
Introduce the sequence
\begin{equation*}
M_n := h((2n+1)d), \qquad n = 1, 2, \ldots.
\end{equation*}
Clearly, this sequence is strictly increasing and converges to $1$.
Now we define $\sigma $ as the function
\begin{equation*}
\sigma (x):=a_{n}+b_{n}u_{n}\left( \frac{x}{d}-2n+1\right) ,\quad x\in
\lbrack (2n-1)d,2nd],\eqno(5.30)
\end{equation*}%
where
\begin{equation*}
a_{1}:=\frac{1}{2},\qquad b_{1}:=\frac{h(3d)}{2},\eqno(5.31)
\end{equation*}%
and
\begin{equation*}
a_{n}:=\frac{(1+2M_{n})B_{2}-(2+M_{n})B_{1}}{3(B_{2}-B_{1})},\qquad b_{n}:=%
\frac{1-M_{n}}{3(B_{2}-B_{1})},\qquad n=2,3,\ldots .\eqno(5.32)
\end{equation*}
It is not difficult to notice that for $n\geq 2$ the numbers $a_{n}$, $b_{n}$
are the coefficients of the linear function $y=a_{n}+b_{n}x$ mapping the
closed interval $[B_{1},B_{2}]$ onto the closed interval $%
[(1+2M_{n})/3,(2+M_{n})/3]$. Besides, for $n=1$, i.e. on the interval $%
[d,2d] $,
\begin{equation*}
\sigma (x)=\frac{1+M_{1}}{2}.
\end{equation*}%
Therefore, we obtain that
\begin{equation*}
h(x)<M_{n}<\frac{1+2M_{n}}{3}\leq \sigma (x)\leq \frac{2+M_{n}}{3}<1,\eqno%
(5.33)
\end{equation*}%
for all $x\in \lbrack (2n-1)d,2nd]$, $n=1$, $2$, $\ldots $.
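To see why (5.33) holds, observe that $B_1\le u_n(x)\le B_2$ on $[0,1]$ (each term $\alpha_j x^j$ lies between $\min(\alpha_j,0)$ and $\max(\alpha_j,0)$, and $x^l\in[0,1]$), while $y=a_n+b_n x$ maps $[B_1,B_2]$ onto $[(1+2M_n)/3,(2+M_n)/3]\subset(M_n,1)$. Below is a quick exact-arithmetic check with a sample polynomial and a sample value of $M_n$, both our own choices.

```python
from fractions import Fraction

# Sample monic polynomial u_n(x) = -1 - x + x^2 (coefficients lowest degree first)
coeffs = [Fraction(-1), Fraction(-1), Fraction(1)]

# B_1 and B_2 as defined in Step 3 (lower/upper bounds for u_n on [0, 1])
alphas = coeffs[:-1]                      # alpha_0, ..., alpha_{l-1}
B1 = alphas[0] + sum(min(a, 0) for a in alphas[1:])
B2 = alphas[0] + sum(max(a, 0) for a in alphas[1:]) + 1

Mn = Fraction(9, 10)                      # a sample value of M_n in (0, 1)
an = ((1 + 2 * Mn) * B2 - (2 + Mn) * B1) / (3 * (B2 - B1))   # (5.32)
bn = (1 - Mn) / (3 * (B2 - B1))

# The affine map sends B1, B2 to the endpoints (1 + 2 Mn)/3 and (2 + Mn)/3 ...
assert an + bn * B1 == (1 + 2 * Mn) / 3
assert an + bn * B2 == (2 + Mn) / 3

# ... hence a_n + b_n * u_n(x) stays strictly between Mn and 1 on [0, 1],
# which is inequality (5.33).
def u(x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

vals = [an + bn * u(Fraction(i, 100)) for i in range(101)]
assert all(Mn < v < 1 for v in vals)
```
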
\textit{Step 4.} In this step, we construct $\sigma $ on the intervals $%
[2nd,(2n+1)d]$, $n=1,2,\ldots $. For this purpose we use the \textit{smooth
transition function }%
\begin{equation*}
\beta _{a,b}(x):=\frac{\widehat{\beta }(b-x)}{\widehat{\beta }(b-x)+\widehat{%
\beta }(x-a)},
\end{equation*}%
where
\begin{equation*}
\widehat{\beta }(x):=%
\begin{cases}
e^{-1/x}, & x>0, \\
0, & x\leq 0.%
\end{cases}%
\end{equation*}%
Obviously, $\beta _{a,b}(x)=1$ for $x\leq a$, $\beta _{a,b}(x)=0$ for $x\geq
b$, and $0<\beta _{a,b}(x)<1$ for $a<x<b$.
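The smooth transition function is easy to implement and test directly; a minimal sketch:

```python
import math

def beta_hat(x):
    """exp(-1/x) for x > 0, and 0 for x <= 0; a C-infinity function on the real line."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def beta(a, b, x):
    """Smooth transition: 1 for x <= a, 0 for x >= b, strictly between on (a, b)."""
    return beta_hat(b - x) / (beta_hat(b - x) + beta_hat(x - a))

a, b = 1.0, 2.0
assert beta(a, b, 0.5) == 1.0 and beta(a, b, 1.0) == 1.0   # identically 1 for x <= a
assert beta(a, b, 2.0) == 0.0 and beta(a, b, 3.0) == 0.0   # identically 0 for x >= b
assert all(0.0 < beta(a, b, a + t * (b - a)) < 1.0 for t in [0.1, 0.5, 0.9])
```
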
Set
\begin{equation*}
K_n := \frac{\sigma(2nd) + \sigma((2n+1)d)}{2}, \qquad n = 1, 2, \ldots.
\end{equation*}
Note that the numbers $\sigma(2nd)$ and $\sigma((2n+1)d)$ have already been
defined in the previous step. Since both the numbers $\sigma(2nd)$ and $%
\sigma((2n+1)d)$ lie in the interval $(M_n, 1)$, it follows that $K_n \in
(M_n, 1)$.
First we extend $\sigma $ smoothly to the interval $[2nd,2nd+d/2]$. Take $%
\varepsilon :=(1-M_{n})/6$ and choose $\delta \leq d/2$ such that
\begin{equation*}
\left\vert a_{n}+b_{n}u_{n}\left( \frac{x}{d}-2n+1\right) -\left(
a_{n}+b_{n}u_{n}(1)\right) \right\vert \leq \varepsilon ,\quad x\in \lbrack
2nd,2nd+\delta ].\eqno(5.34)
\end{equation*}%
One can choose this $\delta $ as
\begin{equation*}
\delta :=\min \left\{ \frac{\varepsilon d}{b_{n}C},\frac{d}{2}\right\} ,
\end{equation*}%
where $C>0$ is a number satisfying $|u_{n}^{\prime }(x)|\leq C$ for $x\in
(1,1.5)$. For example, for $n=1$, $\delta $ can be chosen as $d/2$. Now
define $\sigma $ on the first half of the interval $[2nd,(2n+1)d]$ as the
function
\begin{equation*}
\begin{split}
\sigma(x) :=K_{n}&-\beta _{2nd,2nd+\delta }(x) \\
&\times \left(
K_{n}-a_{n}-b_{n}u_{n}\left( \frac{x}{d}-2n+1\right) \right) ,\quad x\in \left[
2nd,2nd+\frac{d}{2}\right].
\end{split}\eqno(5.35)
\end{equation*}
Let us prove that $\sigma (x)$ satisfies the condition~(5.29). Indeed, if $%
2nd+\delta \leq x\leq 2nd+d/2$, then there is nothing to prove, since $%
\sigma (x)=K_{n}\in (M_{n},1)$. If $2nd\leq x<2nd+\delta $, then $0<\beta
_{2nd,2nd+\delta }(x)\leq 1$ and hence from~(5.35) it follows that for each $%
x\in \lbrack 2nd,2nd+\delta )$, $\sigma (x)$ is between the numbers $K_{n}$
and $A_{n}(x):=a_{n}+b_{n}u_{n}\left( \frac{x}{d}-2n+1\right) $. On the
other hand, from~(5.34) we obtain that
\begin{equation*}
a_{n}+b_{n}u_{n}(1)-\varepsilon \leq A_{n}(x)\leq
a_{n}+b_{n}u_{n}(1)+\varepsilon ,
\end{equation*}%
which together with~(5.30) and~(5.33) yields $A_{n}(x)\in \left[ \frac{%
1+2M_{n}}{3}-\varepsilon ,\frac{2+M_{n}}{3}+\varepsilon \right] $ for $x\in
\lbrack 2nd,2nd+\delta )$. Since $\varepsilon =(1-M_{n})/6$, the inclusion $%
A_{n}(x)\in (M_{n},1)$ is valid. Now since both $K_{n}$ and $A_{n}(x)$
belong to $(M_{n},1)$, we finally conclude that
\begin{equation*}
h(x)<M_{n}<\sigma (x)<1,\quad \text{for }x\in \left[ 2nd,2nd+\frac{d}{2}%
\right] .
\end{equation*}
We define $\sigma $ on the second half of the interval in a similar way:
\begin{equation*}
\begin{split}
\sigma (x)& :=K_{n}-(1-\beta _{(2n+1)d-\overline{\delta },(2n+1)d}(x)) \\
& \times \left( K_{n}-a_{n+1}-b_{n+1}u_{n+1}\left( \frac{x}{d}-2n-1\right)
\right) ,\quad x\in \left[ 2nd+\frac{d}{2},(2n+1)d\right] ,
\end{split}%
\end{equation*}%
where
\begin{equation*}
\overline{\delta }:=\min \left\{ \frac{\overline{\varepsilon }d}{b_{n+1}%
\overline{C}},\frac{d}{2}\right\} ,\qquad \overline{\varepsilon }:=\frac{%
1-M_{n+1}}{6},\qquad \overline{C}\geq \sup_{\lbrack -0.5,0]}|u_{n+1}^{\prime
}(x)|.
\end{equation*}%
One can easily verify, as above, that the constructed $\sigma (x)$ satisfies
the condition~(5.29) on $[2nd+d/2,2nd+d]$ and
\begin{equation*}
\sigma \left( 2nd+\frac{d}{2}\right) =K_{n},\qquad \sigma ^{(i)}\left( 2nd+%
\frac{d}{2}\right) =0,\quad i=1,2,\ldots .
\end{equation*}
Steps 3 and 4 construct $\sigma$ on the interval $[d, +\infty)$.
\textit{Step 5.} On the remaining interval $(-\infty ,d)$, we define $\sigma
$ as
\begin{equation*}
\sigma (x):=\left( 1-\widehat{\beta }(d-x)\right) \frac{1+M_{1}}{2},\quad
x\in (-\infty ,d).
\end{equation*}%
It is not difficult to verify that $\sigma $ is a strictly increasing,
smooth function on $(-\infty ,d)$. Note also that $\sigma (x)\rightarrow
\sigma (d)=(1+M_{1})/2$, as $x$ tends to $d$ from the left and $\sigma
^{(i)}(d)=0$ for $i=1$, $2$, $\ldots $. This final step completes the
construction of $\sigma $ on the whole real line.
\bigskip
\subsection{Properties of the constructed sigmoidal function}
It should be noted that the above algorithm allows one to compute the
constructed $\sigma$ at any point of the real axis instantly. The code of
this algorithm is available at %
\url{http://sites.google.com/site/njguliyev/papers/monic-sigmoidal}. As a
practical example, we give here the graph of $\sigma$ (see Figure~\ref%
{fig:sigma}) and a numerical table (see Table~\ref{tbl:sigma}) containing
several computed values of this function on the interval $[0, 20]$. Figure~%
\ref{fig:sigma100} shows how the graph of the $\lambda$-increasing function $%
\sigma$ changes on the interval $[0,100]$ as the parameter $\lambda$
decreases.
The above $\sigma$ obeys the following properties:
\begin{enumerate}
\item $\sigma$ is sigmoidal;
\item $\sigma \in C^{\infty}(\mathbb{R})$;
\item $\sigma$ is strictly increasing on $(-\infty, d)$ and $\lambda$%
-strictly increasing on $[d, +\infty)$;
\item $\sigma$ is easily computable in practice.
\end{enumerate}
All these properties are easily seen from the above exposition. But the
essential property of our sigmoidal function is its ability to approximate
an arbitrary continuous function using only a fixed number of translations
and scalings of $\sigma$. More precisely, only two translations and scalings
are sufficient. We formulate this important property as a theorem in the
next section.
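The shape of such a two-term approximant is easy to express in code. Below is a minimal Python sketch of the fixed-weight network $\mathcal{N}(x)=c_1\sigma(x-\theta_1)+c_2\sigma(x-\theta_2)$; the Heaviside stand-in activation and the parameter values are illustrative only, not the $\sigma$ constructed above.

```python
def two_neuron_net(sigma, c1, c2, theta1, theta2):
    # Single-hidden-layer network with both input weights fixed to 1:
    # N(x) = c1 * sigma(x - theta1) + c2 * sigma(x - theta2)
    return lambda x: c1 * sigma(x - theta1) + c2 * sigma(x - theta2)

# Toy usage with the Heaviside step as a stand-in activation; the sigma
# constructed in the text is far more elaborate, so this only shows the
# network's form, not its approximation power.
heaviside = lambda x: 1.0 if x >= 0 else 0.0
net = two_neuron_net(heaviside, c1=2.0, c2=-1.0, theta1=0.0, theta2=1.0)
```

The point of the construction in the next section is that, for the right $\sigma$, networks of exactly this shape already suffice for universal approximation on a segment.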
\begin{figure}[tbp]
\includegraphics[width=1.0\textwidth]{monic-0-20}
\caption{The graph of $\protect\sigma$ on $[0, 20]$ ($d = 2$, $\protect%
\lambda = 1/4$)}
\label{fig:sigma}
\end{figure}
\begin{table}[tbp]
\caption{Some computed values of $\protect\sigma$ ($d = 2$, $\protect\lambda %
= 1/4$)}
\label{tbl:sigma}%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$t$ & $\sigma$ & $t$ & $\sigma$ & $t$ & $\sigma$ & $t$ & $\sigma$ & $t$ & $%
\sigma$ \\ \hline
$0.0$ & $0.37462$ & $4.0$ & $0.95210$ & $8.0$ & $0.97394$ & $12.0$ & $%
0.97662 $ & $16.0$ & $0.96739$ \\ \hline
$0.4$ & $0.44248$ & $4.4$ & $0.95146$ & $8.4$ & $0.96359$ & $12.4$ & $%
0.97848 $ & $16.4$ & $0.96309$ \\ \hline
$0.8$ & $0.53832$ & $4.8$ & $0.95003$ & $8.8$ & $0.96359$ & $12.8$ & $%
0.97233 $ & $16.8$ & $0.96309$ \\ \hline
$1.2$ & $0.67932$ & $5.2$ & $0.95003$ & $9.2$ & $0.96314$ & $13.2$ & $%
0.97204 $ & $17.2$ & $0.96307$ \\ \hline
$1.6$ & $0.87394$ & $5.6$ & $0.94924$ & $9.6$ & $0.95312$ & $13.6$ & $%
0.97061 $ & $17.6$ & $0.96067$ \\ \hline
$2.0$ & $0.95210$ & $6.0$ & $0.94787$ & $10.0$ & $0.95325$ & $14.0$ & $%
0.96739$ & $18.0$ & $0.95879$ \\ \hline
$2.4$ & $0.95210$ & $6.4$ & $0.94891$ & $10.4$ & $0.95792$ & $14.4$ & $%
0.96565$ & $18.4$ & $0.95962$ \\ \hline
$2.8$ & $0.95210$ & $6.8$ & $0.95204$ & $10.8$ & $0.96260$ & $14.8$ & $%
0.96478$ & $18.8$ & $0.96209$ \\ \hline
$3.2$ & $0.95210$ & $7.2$ & $0.95725$ & $11.2$ & $0.96727$ & $15.2$ & $%
0.96478$ & $19.2$ & $0.96621$ \\ \hline
$3.6$ & $0.95210$ & $7.6$ & $0.96455$ & $11.6$ & $0.97195$ & $15.6$ & $%
0.96565$ & $19.6$ & $0.97198$ \\ \hline
\end{tabular}%
\end{table}
\begin{figure}[tbp]
\includegraphics[width=0.75\textwidth]{monic-0-100}
\caption{The graph of $\protect\sigma$ on $[0, 100]$ ($d = 2$)}
\label{fig:sigma100}
\end{figure}
\bigskip
\subsection{Theoretical results}
The following theorems are valid.
\bigskip
\textbf{Theorem 5.11.} \textit{Assume that $f$ is a continuous function on a
finite segment $[a,b]$ of $\mathbb{R}$ and $\sigma $ is the sigmoidal
function constructed in Section~5.3.1. Then for any sufficiently small $%
\varepsilon >0 $ there exist constants $c_{1}$, $c_{2}$, $\theta _{1}$ and $%
\theta _{2}$ such that}
\begin{equation*}
|f(x)-c_{1}\sigma (x-\theta _{1})-c_{2}\sigma (x-\theta _{2})|<\varepsilon
\end{equation*}%
\textit{for all $x\in \lbrack a,b]$.}
\bigskip
\begin{proof} Set $d:=b-a$ and divide the interval $[d,+\infty )$ into the segments
$[d,2d]$, $[2d,3d]$, $\ldots $. It follows from~(5.30) that
\begin{equation*}
\sigma (dx+(2n-1)d)=a_{n}+b_{n}u_{n}(x),\quad x\in \lbrack 0,1]\eqno(5.36)
\end{equation*}%
for $n=1$, $2$, $\ldots $. Here $a_{n}$ and $b_{n}$ are computed by~(5.31)
and~(5.32) for $n=1$ and $n>1$, respectively.
From~(5.36) it follows that for each $n=1$, $2$, $\ldots $,
\begin{equation*}
u_{n}(x)=\frac{1}{b_{n}}\sigma (dx+(2n-1)d)-\frac{a_{n}}{b_{n}}.\eqno(5.37)
\end{equation*}
Let now $g$ be any continuous function on the unit interval $[0,1]$. By the
density of polynomials with rational coefficients in the space of continuous
functions on any compact subset of $\mathbb{R}$, for any $\varepsilon >0$
there exists a polynomial $p(x)$ of the above form such that
\begin{equation*}
|g(x)-p(x)|<\varepsilon
\end{equation*}%
for all $x\in \lbrack 0,1]$. Denote by $p_{0}$ the leading coefficient of $p$%
. If $p_{0}\neq 0$ (i.e., $p\not\equiv 0$) then we define $u_{n}$ as $%
u_{n}(x):=p(x)/p_{0}$, otherwise we just set $u_{n}(x):=1$. In both cases
\begin{equation*}
|g(x)-p_{0}u_{n}(x)|<\varepsilon ,\qquad x\in \lbrack 0,1].
\end{equation*}%
This together with~(5.37) means that
\begin{equation*}
|g(x)-c_{1}\sigma (dx-s_{1})-c_{0}|<\varepsilon \eqno(5.38)
\end{equation*}%
for some $c_{0}$, $c_{1}$, $s_{1}\in \mathbb{R}$ and all $x\in \lbrack 0,1]$%
. Namely, $c_{1}=p_{0}/b_{n}$, $s_{1}=d-2nd$ and $c_{0}=p_{0}a_{n}/b_{n}$.
On the other hand, we can write $c_{0}=c_{2}\sigma (dx-s_{2})$, where $%
c_{2}:=2c_{0}/(1+h(3d))$ and $s_{2}:=-d$. Hence,
\begin{equation*}
|g(x)-c_{1}\sigma (dx-s_{1})-c_{2}\sigma (dx-s_{2})|<\varepsilon .\eqno(5.39)
\end{equation*}%
Note that~(5.39) is valid on the unit interval $[0,1]$. Using a linear
transformation, it is not difficult to pass from $[0,1]$ to the interval $[a,b]$%
. Indeed, let $f\in C[a,b]$, $\sigma $ be constructed as above, and $%
\varepsilon $ be an arbitrarily small positive number. The transformed
function $g(x)=f(a+(b-a)x)$ is well defined on $[0,1]$ and we can apply the
inequality~(5.39). Now using the inverse transformation $x=(t-a)/(b-a)$, we
can write
\begin{equation*}
|f(t)-c_{1}\sigma (t-\theta _{1})-c_{2}\sigma (t-\theta _{2})|<\varepsilon
\end{equation*}%
for all $t\in \lbrack a,b]$, where $\theta _{1}=a+s_{1}$ and $\theta
_{2}=a+s_{2}$. The last inequality completes the proof.
\end{proof}
Since any compact subset of the real line is contained in a segment $[a,b]$,
the following generalization of Theorem 5.11 holds.
\bigskip
\textbf{Theorem 5.12.} \textit{Let $Q$ be a compact subset of the real line
and $d$ be its diameter. Let $\lambda $ be any positive number. Then one can
algorithmically construct a computable sigmoidal activation function $\sigma
\colon \mathbb{R}\rightarrow \mathbb{R}$, which is infinitely
differentiable, strictly increasing on $(-\infty ,d)$, $\lambda $-strictly
increasing on $[d,+\infty )$, and satisfies the following property: For any $%
f\in C(Q)$ and $\varepsilon >0$ there exist numbers $c_{1}$, $c_{2}$, $%
\theta _{1}$ and $\theta _{2}$ such that}
\begin{equation*}
|f(x)-c_{1}\sigma (x-\theta _{1})-c_{2}\sigma (x-\theta _{2})|<\varepsilon
\end{equation*}%
\textit{for all $x\in Q$.}
\bigskip
\textbf{Remark 5.7.} Theorems 5.11 and 5.12 show that single hidden layer
neural networks with the constructed sigmoidal activation function $\sigma $
and only two neurons in the hidden layer can approximate any continuous
univariate function. Moreover, in this case, one can fix the weights equal
to $1$. For the approximation of continuous multivariate functions two
hidden layer neural networks with $3d+2$ hidden neurons can be taken.
Namely, Theorem 5.9 (and hence Theorem 5.10) is valid with the activation
function $\sigma$ constructed in Section 5.3.1. Indeed, the proof of that
theorem shows that any activation function with the property (5.22)
suffices, and the activation function constructed in Section 5.3.1
satisfies this property (see (5.38)).
\bigskip
\subsection{Numerical results}
We prove in Theorem 5.11 that any continuous function on $[a,b]$ can be
approximated arbitrarily well by single hidden layer neural networks with
the fixed weight $1$ and with only two neurons in the hidden layer. An
activation function $\sigma $ for such a network is constructed in
Section 5.3.1. We have seen from the proof that our approach is totally
constructive. One can evaluate the value of $\sigma $ at any point of the
real axis and draw its graph instantly, using the programming interface at
the URL shown at the beginning of Section 5.3.2. In the current subsection, we
demonstrate our result in various examples. For different error bounds we
find the parameters $c_{1}$, $c_{2}$, $\theta _{1}$ and $\theta _{2}$ in
Theorem 5.11. All computations were done in SageMath~\cite{Sage}. For
computations, we use the following algorithm, which works well for analytic
functions. Assume $f$ is a function, whose Taylor series around the point $%
(a+b)/2$ converges uniformly to $f$ on $[a,b]$, and $\varepsilon >0$.
\begin{enumerate}
\item Consider the function $g(t) := f(a + (b - a) t)$, which is
well-defined on $[0, 1]$;
\item Find $k$ such that the $k$-th Taylor polynomial
\begin{equation*}
T_k(x) := \sum_{i=0}^k \frac{g^{(i)}(1/2)}{i!} \left( x - \frac{1}{2}
\right)^i
\end{equation*}
satisfies the inequality $|T_k(x) - g(x)| \le \varepsilon / 2$ for all $x
\in [0, 1]$;
\item Find a polynomial $p$ with rational coefficients such that
\begin{equation*}
|p(x) - T_k(x)| \le \frac{\varepsilon}{2}, \qquad x \in [0, 1],
\end{equation*}
and denote by $p_0$ the leading coefficient of this polynomial;
\item If $p_0 \ne 0$, then find $n$ such that $u_n(x) = p(x) / p_0$.
Otherwise, set $n := 1$;
\item For $n=1$ and $n>1$ evaluate $a_{n}$ and $b_{n}$ by~(5.31) and~(5.32),
respectively;
\item Calculate the parameters of the network as
\begin{equation*}
c_1 := \frac{p_0}{b_n}, \qquad c_2 := \frac{2 p_0 a_n}{b_n (1 + h(3d))},
\qquad \theta_1 := b - 2 n (b - a), \qquad \theta_2 := 2 a - b;
\end{equation*}
\item Construct the network $\mathcal{N}=c_{1}\sigma (x-\theta
_{1})+c_{2}\sigma (x-\theta _{2}).$ Then $\mathcal{N}$ gives an $\varepsilon
$-approximation to $f.$
\end{enumerate}
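Steps 1–3 of this algorithm are ordinary polynomial manipulations. The following Python sketch (function names and scope are ours, not from the text) carries out the reparametrization $g(t)=f(a+(b-a)t)$ of step 1 exactly, for a target $f$ given by its coefficient list; for a polynomial target the Taylor polynomial of step 2 is $g$ itself, so steps 1–3 then incur no error at all.

```python
from fractions import Fraction

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[i] is the coefficient of x**i.
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def reparametrize(coeffs, a, b):
    # Coefficients of g(t) = f(a + (b - a) * t), computed exactly by
    # running Horner's scheme with x replaced by the affine polynomial
    # a + (b - a) * t.
    g = [Fraction(0)]
    for c in reversed(coeffs):
        new = [Fraction(0)] * (len(g) + 1)
        for i, gi in enumerate(g):
            new[i] += Fraction(a) * gi          # constant part of the map
            new[i + 1] += Fraction(b - a) * gi  # degree-raising part
        new[0] += Fraction(c)
        g = new
    while len(g) > 1 and g[-1] == 0:  # trim trailing zero coefficients
        g.pop()
    return g

# Step 1 for the first example below: f(x) = x^3 + x^2 - 5x + 3 on [-1, 1].
f = [Fraction(3), Fraction(-5), Fraction(1), Fraction(1)]
g = reparametrize(f, -1, 1)  # g(t) = f(-1 + 2t), well defined on [0, 1]
```

Exact rational arithmetic is used so that the rational-coefficient polynomial required in step 3 is obtained without any rounding.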
In the sequel, we give four practical examples. To be able to make
comparisons between these examples, all the considered functions are given
on the same interval $[-1,1]$. First we select the polynomial function $%
f(x)=x^{3}+x^{2}-5x+3$ as a target function. We investigate the sigmoidal
neural network approximation to $f(x)$. This function was considered in
\cite{HH04} as well. Note that the authors of \cite{HH04} chose the sigmoidal
function as
\begin{equation*}
\sigma (x)=%
\begin{cases}
1, & \text{if }x\geq 0, \\
0, & \text{if }x<0,%
\end{cases}%
\end{equation*}%
and obtained the numerical results (see Table~\ref{tbl:Hahm}) for single
hidden layer neural networks with $8$, $32$, $128$, $512$ neurons in the
hidden layer (see also~\cite{CX10} for an additional constructive result
concerning the error of approximation in this example).
\begin{table}[tbp]
\caption{The Heaviside function as a sigmoidal function}
\label{tbl:Hahm}%
\begin{tabular}{|c|c|c|}
\hline
$N$ & Number of neurons ($2N^2$) & Maximum error \\ \hline
$2$ & $8$ & $0.666016$ \\ \hline
$4$ & $32$ & $0.165262$ \\ \hline
$8$ & $128$ & $0.041331$ \\ \hline
$16$ & $512$ & $0.010333$ \\ \hline
\end{tabular}%
\end{table}
As the table shows, the number of neurons in the hidden layer
increases as the error bound decreases. This phenomenon does not occur
for our sigmoidal function. Using Theorem 5.11, we can construct
explicitly a single hidden layer neural network model with only two neurons
in the hidden layer, which approximates the above polynomial with
arbitrarily given precision. Here by \textit{explicit construction} we mean
that all the network parameters can be computed directly. Namely, the
calculated values of these parameters are as follows: $c_{1}\approx
2059.373597$, $c_{2}\approx -2120.974727$, $\theta _{1}=-467$, and $\theta
_{2}=-3$. It turns out that for the above polynomial we have an exact
representation. That is, on the interval $[-1,1]$ we have the identity
\begin{equation*}
x^{3}+x^{2}-5x+3\equiv c_{1}\sigma (x-\theta _{1})+c_{2}\sigma (x-\theta
_{2}).
\end{equation*}
Let us now consider the other polynomial function
\begin{equation*}
f(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{%
120} + \frac{x^6}{720}.
\end{equation*}
For this function we do not have an exact representation as above.
Nevertheless, one can easily construct an $\varepsilon$-approximating network
with two neurons in the hidden layer for any sufficiently small
approximation error $\varepsilon$. Table~\ref{tbl:polynomial} displays
numerical computations of the network parameters for six different
approximation errors.
\begin{table}[tbp]
\caption{Several $\protect\varepsilon$-approximators of the function $1 + x
+ x^2 / 2 + x^3 / 6 + x^4 / 24 + x^5 / 120 + x^6 / 720$}
\label{tbl:polynomial}%
\begin{tabular}{|c|c|c|l|c|c|}
\hline
Number of & \multicolumn{4}{|c|}{Parameters of the network} & Maximum \\
\cline{2-5}
neurons & $c_1$ & $c_2$ & \multicolumn{1}{c|}{$\theta_1$} & $\theta_2$ &
error \\ \hline
$2$ & $2.0619 \times 10^{2}$ & $2.1131 \times 10^{2}$ & $-1979$ & $-3$ & $%
0.95$ \\ \hline
$2$ & $5.9326 \times 10^{2}$ & $6.1734 \times 10^{2}$ & $-1.4260 \times
10^{8}$ & $-3$ & $0.60$ \\ \hline
$2$ & $1.4853 \times 10^{3}$ & $1.5546 \times 10^{3}$ & $-4.0140 \times
10^{22}$ & $-3$ & $0.35$ \\ \hline
$2$ & $5.1231 \times 10^{2}$ & $5.3283 \times 10^{2}$ & $-3.2505 \times
10^{7}$ & $-3$ & $0.10$ \\ \hline
$2$ & $4.2386 \times 10^{3}$ & $4.4466 \times 10^{3}$ & $-2.0403 \times
10^{65}$ & $-3$ & $0.04$ \\ \hline
$2$ & $2.8744 \times 10^{4}$ & $3.0184 \times 10^{4}$ & $-1.7353 \times
10^{442}$ & $-3$ & $0.01$ \\ \hline
\end{tabular}%
\end{table}
\begin{figure}[tbp]
\includegraphics[width=1.0\textwidth]{monic-f1}
\caption{The graphs of $f(x) = 1 + x + x^2 / 2 + x^3 / 6 + x^4 / 24 + x^5 /
120 + x^6 / 720$ and some of its approximators ($\protect\lambda = 1/4$)}
\label{fig:polynomial}
\end{figure}
Finally, we consider the nonpolynomial functions $f(x) = 4x / (4 + x^2)$
and $f(x) = \sin x - x \cos(x + 1)$. Tables~\ref{tbl:nonpolynomial} and \ref%
{tbl:nonpolynomial2} display all the parameters of the $\varepsilon$%
-approximating neural networks for the above six approximation error bounds.
As the tables show, these error bounds do not alter the number of
hidden neurons. Figures~\ref{fig:polynomial}, \ref{fig:nonpolynomial} and %
\ref{fig:nonpolynomial2} show how graphs of some constructed networks $%
\mathcal{N}$ approximate the corresponding target functions $f$.
\begin{table}[tbp]
\caption{Several $\protect\varepsilon$-approximators of the function $4x /
(4 + x^2)$}
\label{tbl:nonpolynomial}%
\begin{tabular}{|@{\hspace{4pt}}c|@{\hspace{4pt}}c|c|l|c|c|}
\hline
Number of & \multicolumn{4}{|c|}{Parameters of the network} & Maximum \\
\cline{2-5}
neurons & $c_1$ & $c_2$ & \multicolumn{1}{c|}{$\theta_1$} & $\theta_2$ &
error \\ \hline
$2$ & $\phantom{-}1.5965 \times 10^{2}$ & $\phantom{-}1.6454 \times 10^{2}$
& $-283$ & $-3$ & $0.95$ \\ \hline
$2$ & $\phantom{-}1.5965 \times 10^{2}$ & $\phantom{-}1.6454 \times 10^{2}$
& $-283$ & $-3$ & $0.60$ \\ \hline
$2$ & $-1.8579 \times 10^{3}$ & $-1.9428 \times 10^{3}$ & $-6.1840 \times
10^{11}$ & $-3$ & $0.35$ \\ \hline
$2$ & $\phantom{-}1.1293 \times 10^{4}$ & $\phantom{-}1.1842 \times 10^{4}$
& $-4.6730 \times 10^{34}$ & $-3$ & $0.10$ \\ \hline
$2$ & $\phantom{-}2.6746 \times 10^{4}$ & $\phantom{-}2.8074 \times 10^{4}$
& $-6.8296 \times 10^{82}$ & $-3$ & $0.04$ \\ \hline
$2$ & $-3.4218 \times 10^{6}$ & $-3.5939 \times 10^{6}$ & $-2.9305 \times
10^{4885}$ & $-3$ & $0.01$ \\ \hline
\end{tabular}%
\end{table}
\begin{table}[tbp]
\caption{Several $\protect\varepsilon$-approximators of the function $\sin x
- x \cos(x + 1)$}
\label{tbl:nonpolynomial2}%
\begin{tabular}{|c|c|c|l|c|c|}
\hline
Number of & \multicolumn{4}{|c|}{Parameters of the network} & Maximum \\
\cline{2-5}
neurons & $c_1$ & $c_2$ & \multicolumn{1}{c|}{$\theta_1$} & $\theta_2$ &
error \\ \hline
$2$ & $\phantom{-}8.950 \times 10^{3}$ & $\phantom{-}9.390 \times 10^{3}$ & $%
-3.591 \times 10^{53}$ & $-3$ & $0.95$ \\ \hline
$2$ & $\phantom{-}3.145 \times 10^{3}$ & $\phantom{-}3.295 \times 10^{3}$ & $%
-3.397 \times 10^{23}$ & $-3$ & $0.60$ \\ \hline
$2$ & $\phantom{-}1.649 \times 10^{5}$ & $\phantom{-}1.732 \times 10^{5}$ & $%
-9.532 \times 10^{1264}$ & $-3$ & $0.35$ \\ \hline
$2$ & $-4.756 \times 10^{7}$ & $-4.995 \times 10^{7}$ & $-1.308 \times
10^{180281}$ & $-3$ & $0.10$ \\ \hline
$2$ & $-1.241 \times 10^{7}$ & $-1.303 \times 10^{7}$ & $-5.813 \times
10^{61963}$ & $-3$ & $0.04$ \\ \hline
$2$ & $\phantom{-}1.083 \times 10^{9}$ & $\phantom{-}1.138 \times 10^{9}$ & $%
-2.620 \times 10^{5556115}$ & $-3$ & $0.01$ \\ \hline
\end{tabular}%
\end{table}
\begin{figure}[tbp]
\includegraphics[width=1.0\textwidth]{monic-f2}
\caption{The graphs of $f(x) = 4x / (4 + x^2)$ and some of its approximators
($\protect\lambda = 1/4$)}
\label{fig:nonpolynomial}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=1.0\textwidth]{monic-f3}
\caption{The graphs of $f(x)=\sin x-x\cos (x+1)$ and some of its
approximators ($\protect\lambda =1/4$)}
\label{fig:nonpolynomial2}
\end{figure}
\newpage
\addcontentsline{toc}{chapter}{Bibliography}
% arXiv:1805.12119 -- A combinatorial characterization of finite groups of prime exponent
\begin{abstract}
The power graph of a group $G$ is a simple and undirected graph with vertex set $G$ and two distinct vertices are adjacent if one is a power of the other. In this article, we characterize (non-cyclic) finite groups of prime exponent and finite elementary abelian $2$-groups (of rank at least $2$) in terms of their power graphs.
\end{abstract}
\section{Introduction}
Let $G$ be a group. The \emph{power graph} $\mathcal{G}(G)$ of $G$ is a simple and undirected graph whose vertex set is $G$ and distinct vertices $u$ and $v$ are adjacent if $v=u^n$ for some $n \in \mathbb{N}$ or $u=v^m$ for some $m \in \mathbb{N}$. The notion of (directed) power graph of a group was introduced by Kelarev and Quinn \cite{kelarev2000combinatorial, kelarevDirectedSemigr}. Afterwards, Chakrabarty et al. \cite{GhoshSensemigroups} gave the above definition of power graph of a group. Recently, many interesting results on power graphs have been obtained, for instance, see \cite{curtin2014edge,feng2015,ma2018,moghaddamfar2013} and the references therein.
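As a concrete illustration of the definition, the following Python sketch (function names are ours) builds the edge set of the power graph of the additive cyclic group $\mathbb{Z}_n$, where "$u$ is a power of $v$" becomes "$u$ is a multiple of $v$", i.e.\ $u \in \langle v \rangle$.

```python
from math import gcd
from itertools import combinations

def power_graph_edges_cyclic(n):
    # Power graph of the additive cyclic group Z_n: distinct u and v are
    # adjacent iff u lies in the subgroup generated by v, or vice versa.
    # In Z_n, u is in <v> exactly when gcd(v, n) divides u.
    def in_subgroup(u, v):
        return u % gcd(v, n) == 0
    return {frozenset((u, v)) for u, v in combinations(range(n), 2)
            if in_subgroup(u, v) or in_subgroup(v, u)}
```

For $n$ a prime power the result is a complete graph, in line with Lemma 2.1(i) below, while e.g.\ for $n = 6$ the vertices $2$ and $3$ are non-adjacent.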
In the study of graphs constructed from groups, a useful topic is the extent to which structure of a group is reflected by the corresponding graph. By considering it for power graphs, Cameron \cite{Cameron} proved that if two finite groups $G_1$ and $G_2$ have isomorphic power graphs, they have same number of elements of each order (in particular, have the same spectra). Additionally, if $G_1$ and $G_2$ are abelian, it was shown by Cameron and Ghosh \cite{Ghosh} that $G_1$ and $G_2$ are isomorphic. Mirzargar et al. \cite{mirzargar2012power} proved that finite simple groups and finite cyclic groups are uniquely determined by their power graphs. In this article, we characterize non-cyclic finite groups of prime exponent (cf. Theorem \ref{min.pg.all2}) and finite elementary abelian $2$-groups of rank at least $2$ (cf. Theorem \ref{min.vercon}) in terms of connectedness of their power graphs.
The terminology used throughout is standard; for instance, we refer to \cite{Dummit} for groups and \cite{west1996graph} for graphs.
For any graph $\Gamma$, its (vertex) connectivity, edge-connectivity and minimum degree are denoted by $\kappa(\Gamma)$, $\kappa'(\Gamma)$ and $\delta(\Gamma)$, respectively. Let $\Gamma$ be a non-trivial and connected graph. Then it is said to be \emph{minimally connected} if $\kappa(\Gamma-\varepsilon) = \kappa(\Gamma)-1$ for every edge $\varepsilon$ of $\Gamma$. Analogously, $\Gamma$ is said to be \emph{minimally edge-connected} if $\kappa'(\Gamma-\varepsilon)= \kappa'(\Gamma)-1$ for every edge $\varepsilon$ of $\Gamma$.
These graphs have been explored in various problems in extremal and structural graph theory (cf. \cite{bollobas2004graph, kriesell2013minimal}).
We employ these notions on power graphs to study the corresponding finite groups.
Now we state our main results.
\begin{theorem}\label{min.pg.all2}
A finite group $G$ is a non-cyclic group of prime exponent if and only if $\mathcal{G}(G)$ is non-complete and minimally edge-connected.
\end{theorem}
\begin{theorem}\label{min.vercon}
A finite group $G$ is an elementary abelian $2$-group of rank at least $2$ if and only if $\mathcal{G}(G)$ is non-complete and minimally connected.
\end{theorem}
It is trivial to see that a finite group $G$ is a cyclic group of prime exponent $p$ if and only if $\mathcal{G}(G)$ is a complete graph on $p$ vertices. A similar statement can be made about a finite elementary abelian $2$-group of rank $1$.
\section{Preliminaries}
\label{prelim}
Let $G$ be a group with an element $x$. The order of $x$ in $G$ and the degree of $x$ in $\mathcal{G}(G)$ are denoted by $\mathrm{o}(x)$ and $\deg(x)$, respectively. We denote by $[x]$ the set of generators of the cyclic subgroup $\langle x \rangle$. Since an element of $G$ is a vertex of $\mathcal{G}(G)$ and vice versa, we use the terms element and vertex interchangeably. Let $\Gamma$ be a graph with vertex set $V(\Gamma)$ and edge set $E(\Gamma)$. For any subgraph $\Gamma'$ and a set of edges $S$ of $\Gamma$, we denote $\Gamma' - \{S \cap E(\Gamma')\}$ simply by $\Gamma' - S$.
We next recall some necessary results on power graphs of finite groups.
\begin{lemma}[\cite{GhoshSensemigroups,power2017conn,power2017mindeg}]\label{lemma1}
Suppose that $G$ is a finite group.
\begin{enumerate}[\rm(i)]
\item
$\mathcal{G}(G)$ is complete if and only if $G$ is a cyclic group of order one or prime power.
\item
For any induced subgraph $\Gamma$ of $\mathcal{G}(G)$ with identity vertex, $\kappa'(\Gamma)=\delta(\Gamma)$.
\item
Any minimum separating set of $\mathcal{G}(G)$ is of the form $\cup_{i=1}^s [x_i]$ for $x_1,\ldots, x_s$ in $G$.
\end{enumerate}
\end{lemma}
We use Lemma \ref{lemma1}(i) without referring to it explicitly.
Given any group $G$, we denote by $\mathcal{G}^*(G)$ the subgraph of $\mathcal{G}(G)$ obtained by deleting the identity vertex.
\begin{lemma}[{\cite[Theorem 4]{doostabadi2015power}}]\label{p.exponent}
For any finite group $G$, $\mathcal{G}^*(G)$ is regular if and only if $G$ is a cyclic group of prime power order or $G$ is of prime exponent.
\end{lemma}
\begin{lemma}[{\cite[Proposition 3.1]{power2017conn}}]\label{lemma3}
Suppose that $G$ is a finite $p$-group for some prime $p$. If $x$ is an element of order $p$ in $G$, then $x$ is adjacent to all other vertices of the component of $\mathcal{G}^*(G)$ that contains $x$.
\end{lemma}
\section{Proofs of the main results}
\label{proof.main.results}
In this section $G$ is a non-trivial group with identity element $e$. We first prove Theorem \ref{min.pg.all2} and then Theorem \ref{min.vercon}. We begin with an essential lemma.
\begin{lemma}\label{min.necessary}
Let $G$ be a finite group such that $\mathcal{G}(G)$ is minimally edge-connected.
\begin{enumerate}[\rm(i)]
\item If $x \in G$ and $\mathrm{o}(x) > 2$, then $\deg(x) = \delta(\mathcal{G}(G))$.
\item If $\langle y \rangle$ is a maximal cyclic subgroup of $G$ and $\mathrm{o}(y) > 2$, then $\mathrm{o}(y) = \delta(\mathcal{G}(G))+1$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Note that $[x]$ has two or more elements. Let $\varepsilon$ be the edge in $\mathcal{G}(G)$ with endpoints $x$ and $x' \in [x]-\{x\}$. Then by Lemma \ref{lemma1}(ii) and the fact that $\mathcal{G}(G)$ is minimally edge-connected, we have $\delta(\mathcal{G}(G)-\varepsilon) = \delta(\mathcal{G}(G))-1$. Thus at least one endpoint of $\varepsilon$ has the minimum degree in $\mathcal{G}(G)$. Furthermore, as $\deg(x) = \deg(x')$, we get $ \deg(x) = \delta(\mathcal{G}(G))$.
\smallskip
\noindent
(ii) Observe that $\deg(y) = \mathrm{o}(y)-1$. Hence by (i), the proof follows.
\end{proof}
\begin{lemma}\label{no.maximal}
Let $G$ be a finite group with no maximal subgroup of order two. If $\mathcal{G}(G)$ is minimally edge-connected, then $G$ is of prime power order.
\end{lemma}
\begin{proof}
It is known that every element of $G$ lies in some maximal cyclic subgroup of $G$. Thus it follows from Lemma \ref{min.necessary}(ii) that the exponent of $G$ is $\delta(\mathcal{G}(G))+1$. The following arguments are similar to those in the proof of Lemma \ref{p.exponent}.
We denote $d = \delta(\mathcal{G}(G))$. If possible, suppose $G$ is not of prime power order.
Let $p_1 < p_2 < \ldots < p_r$, $r \geq 2$, be the prime factors of $d+1$. Then by Lemma \ref{min.necessary}(ii), $G$ has an element $x$ of order $\frac{d+1}{p_1}$. The degree of $x$ in $\mathcal{G}(G)$ is $\frac{d+1}{p_1}-1+m\phi(d+1)$, where $m$ is the number of maximal cyclic subgroups of $G$ containing $x$ and $\phi$ is Euler's totient function. However, since $\mathrm{o}(x) > 2$, it follows from Lemma \ref{min.necessary}(i) that $\deg(x) = d$. As a result, $\frac{d+1}{p_1}+m\phi(d+1) = d+1$, which yields $m(p_2-1)(p_3-1)\ldots(p_r-1) = p_2p_3\ldots p_r$. Now if $q$ is a prime divisor of $p_2-1$ (note that $p_2-1 \geq 2$, since $p_2 > p_1 \geq 2$), then $q$ divides the right-hand side, so $q=p_i$ for some $2 \leq i \leq r$; but $q \leq p_2-1 < p_2 \leq p_i$, a contradiction. Hence $G$ is a group of prime power order.
\end{proof}
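The final Diophantine step can also be seen by parity: since $p_2,\ldots,p_r$ are odd primes, $(p_2-1)\cdots(p_r-1)$ is even while $p_2\cdots p_r$ is odd, so the former cannot divide the latter. A brute-force sanity check of this (a sketch; the search bounds are arbitrary) finds no solution, as expected.

```python
from itertools import combinations

def primes_upto(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def find_solution(max_prime=60, max_tail=4):
    # Look for odd primes p_2 < ... < p_r and an integer m >= 1 with
    # m * (p_2 - 1) * ... * (p_r - 1) == p_2 * ... * p_r, i.e. with the
    # product of the (p_i - 1) dividing the product of the p_i.
    odd_primes = [p for p in primes_upto(max_prime) if p > 2]
    for r in range(1, max_tail + 1):
        for tail in combinations(odd_primes, r):
            lhs, rhs = 1, 1
            for p in tail:
                lhs *= p - 1
                rhs *= p
            if rhs % lhs == 0:
                return tail
    return None
```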
Now we provide the proof of Theorem \ref{min.pg.all2}.
\begin{proof}[Proof of Theorem \ref{min.pg.all2}]
Let $\mathcal{G}(G)$ be non-complete and minimally edge-connected. If $G$ has a maximal subgroup of order two, then $\kappa'(\mathcal{G}(G)) = 1$, and minimal edge-connectivity then forces every edge of $\mathcal{G}(G)$ to be a bridge; that is, $\mathcal{G}(G)$ is a tree. If some element $x$ of $G$ had order greater than two, then $e$, $x$ and $x^2$ would be three distinct vertices forming a triangle in $\mathcal{G}(G)$, which is impossible in a tree. Consequently, all elements of $G$ have order two, so that $G$ is an elementary abelian $2$-group of rank at least $2$.
Now suppose $G$ has no maximal subgroup of order two. Then by Lemma \ref{no.maximal}, $G$ is a finite $p$-group for some prime $p$. If $p > 2$, then it follows from Lemma \ref{min.necessary}(i) that $\mathcal{G}^*(G)$ is regular; since $\mathcal{G}(G)$ is non-complete, Lemma \ref{p.exponent} then shows that $G$ is a non-cyclic group of prime exponent.
We next take $p=2$. If $\mathcal{G}(G)$ has more than one block, then the only vertex common to any two distinct blocks is $e$. Furthermore, by Lemma \ref{lemma3}, every block of $\mathcal{G}(G)$ has exactly one vertex that has order two in $G$.
Note that for any $x$ in $G$, all elements of $\langle x \rangle$ are vertices in the same block of $\mathcal{G}(G)$. If possible, suppose $\langle x_1 \rangle$ and $\langle x_2 \rangle$ are distinct maximal cyclic subgroups of $G$ whose elements are vertices in the same block, say $\Gamma$, of $\mathcal{G}(G)$. Let $y$ be the vertex in $\Gamma$ having order two in $G$ and $\varepsilon$ be the edge with endpoints $e$ and $y$. Then by assumption, $\langle y \rangle$ is not a maximal subgroup in $G$. Thus $\mathcal{G}(G)-\varepsilon$ and in particular, $\Gamma-\varepsilon$ is connected. We continue to write $d =\delta(\mathcal{G}(G))$. By applying Lemma \ref{lemma3} and Lemma \ref{min.necessary}(i), we have $d \geq 3$. Suppose $S$ is a minimum disconnecting set of $\mathcal{G}(G)-\varepsilon$. In view of Lemma \ref{lemma1}(ii), $|S| = d-1$. If $\Gamma$ is the only block of $\mathcal{G}(G)$, then $\Gamma = \mathcal{G}(G)$. We observe that the minimum degree (hence the edge-connectivity) of any block of $\mathcal{G}(G)$ is $d$. Accordingly, if $\mathcal{G}(G)$ has any block $\Gamma'$ different from $\Gamma$, then $\Gamma'-S$ is connected. Thus we deduce that $(\Gamma-\varepsilon)-S$ is disconnected. Moreover, as $\kappa'(\Gamma-\varepsilon) = d-1$, all elements of $S$ are therefore edges in $\Gamma$.
Since all elements of $\langle x_1 \rangle$ and $\langle x_2 \rangle$ are vertices in $\Gamma$, by applying Lemma \ref{min.necessary}(ii), we get $|V(\Gamma)| \geq d+2$ (in fact, $|V(\Gamma)| \geq d+3$). So considering Lemma \ref{lemma3}, $e$ is connected to $y$ by a path of length two in $(\Gamma-\varepsilon)-S$. Because $(\Gamma-\varepsilon)-S$ is disconnected, there exists a vertex $z$ in $\Gamma$, different from $e$ and $y$, that is not connected to $e$ by any path in $(\Gamma-\varepsilon)-S$. As a result, $S$ contains the edge with endpoints $e$ and $z$ as well as the edge with endpoints $y$ and $z$. Since $\deg(z) = d$ and $|S|=d-1$, there exists a path of length two between $e$ and $z$ in $(\Gamma-\varepsilon)-S$. However, this contradicts the initial assumption about $z$. We thus conclude that every block of $\mathcal{G}(G)$ is a clique induced by a maximal cyclic subgroup of $G$. Consequently, in view of Lemma \ref{min.necessary}(ii), $\mathcal{G}^*(G)$ is regular. Additionally, $\mathcal{G}(G)$ is non-complete. Hence it follows from Lemma \ref{p.exponent} that $G$ is a non-cyclic group of prime exponent.
Conversely, let $G$ be a non-cyclic group of prime exponent $p$. Then two non-identity vertices $u$ and $v$ are adjacent in $\mathcal{G}(G)$ if and only if $\langle u \rangle = \langle v \rangle$. Thus $\mathcal{G}(G)$ is a union of finitely many maximal cliques of size $p$, any two of which have $e$ as their only common vertex. Therefore, $\mathcal{G}(G)$ is non-complete and minimally edge-connected.
\end{proof}
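The picture in the converse direction, finitely many $p$-cliques glued at the identity, can be checked computationally on a small example. The sketch below (a generic helper; names are ours) computes the power graph of the elementary abelian group $(\mathbb{Z}_3)^2$, a non-cyclic group of exponent $3$, and verifies that it consists of four triangles sharing the identity.

```python
from itertools import combinations, product

def power_graph_edges(elems, op, e):
    # Generic power graph: distinct u, v adjacent iff u in <v> or v in <u>.
    def cyclic_subgroup(g):
        sub, x = {e}, g
        while x not in sub:
            sub.add(x)
            x = op(x, g)
        return sub
    gen = {g: cyclic_subgroup(g) for g in elems}
    return {frozenset((u, v)) for u, v in combinations(elems, 2)
            if u in gen[v] or v in gen[u]}

# (Z_3)^2 under componentwise addition mod 3.
elems = list(product(range(3), repeat=2))
add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
edges = power_graph_edges(elems, add, (0, 0))
```

Here the four cyclic subgroups of order $3$ give four triangles through the identity $(0,0)$, so the identity has degree $8$ and every other vertex has degree $2$, matching the clique-union structure in the proof.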
We next give a simple lemma and then prove Theorem \ref{min.vercon}.
\begin{lemma}\label{min.sepset.endpts}
Let $\Gamma$ be a graph with an edge $\varepsilon$ such that $\Gamma-\varepsilon$ is connected. If $\kappa(\Gamma - \varepsilon) = \kappa(\Gamma)-1$, then no minimum separating set of $\Gamma - \varepsilon$ contains endpoints of $\varepsilon$.
\end{lemma}
\begin{proof}
Suppose $u$ and $v$ are the endpoints of $\varepsilon$. If possible, let $S$ be a minimum separating set of $\Gamma - \varepsilon$ containing at least one of $u$ or $v$. We have $(\Gamma - \varepsilon)-S = \Gamma-S$, so that $S$ is a separating set of $\Gamma$. This implies $\kappa(\Gamma) \leq \kappa(\Gamma - \varepsilon)$, which contradicts the given condition. This proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{min.vercon}]
Let $\mathcal{G}(G)$ be non-complete and minimally connected. We notice that $|G| \geq 4$. Let $x$ be a non-identity element of $G$ and, if possible, let there exist $y \neq x$ such that $\langle x \rangle = \langle y \rangle$. Suppose $\varepsilon$ is the edge with endpoints $x$ and $y$. Since $x$ and $y$ are connected by the path $x,e,y$, $\mathcal{G}(G)-\varepsilon$ is connected. By using the fact that $\mathcal{G}(G)$ is minimally connected and $\kappa(\mathcal{G}(G)) \leq |G|-2$, we have $\kappa(\mathcal{G}(G)-\varepsilon) \leq |G|-3$. Let $S$ be a minimum separating set of $\mathcal{G}(G)-\varepsilon$. Since $|S|=\kappa(\mathcal{G}(G)-\varepsilon)<\kappa(\mathcal{G}(G))$, the graph $\mathcal{G}(G)-S$ is connected with three or more vertices, and by Lemma \ref{min.sepset.endpts}, $x,y \notin S$. We thus deduce that every path in $\mathcal{G}(G)-S$ containing both $x$ and $y$ must use the edge $\varepsilon$. Let $z$ be a vertex different from $x$ and $y$ in $\mathcal{G}(G)-S$. Then in $\mathcal{G}(G)-S$, $z$ is connected to exactly one of $x$ and $y$ by a path avoiding $\varepsilon$. Without loss of generality, let $z$ be connected to $x$ by such a path. Then $S \cup \{x\}$ is a separating set of $\mathcal{G}(G)$. Since $\kappa(\mathcal{G}(G)-\varepsilon)=\kappa(\mathcal{G}(G))-1$, we conclude that $S \cup \{x\}$ is a minimum separating set of $\mathcal{G}(G)$. Then by Lemma \ref{lemma1}(iii), $[x] \subseteq S \cup \{x\}$; as $y \in [x]$ and $y \neq x$, this contradicts the fact that $y \notin S$. Accordingly, we get $[x]=\{x\}$. Because $|[x]|=\phi(\mathrm{o}(x))$ and $x \neq e$, we have $\mathrm{o}(x)=2$. Hence every non-identity element of $G$ has order two, that is, $G$ is an elementary abelian $2$-group. Additionally, as $\mathcal{G}(G)$ is non-complete, the rank of $G$ is at least $2$.
For the converse, suppose $G$ is an elementary abelian $2$-group of rank at least $2$. Then $\mathcal{G}(G)$ is a star on three or more vertices. As a result, $\kappa(\mathcal{G}(G))=1$ and $\mathcal{G}(G)-\varepsilon$ is disconnected for every edge $\varepsilon$ of $\mathcal{G}(G)$. Therefore we conclude that $\mathcal{G}(G)$ is non-complete and minimally connected.
\end{proof}
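Both extremal structures characterized above can be confirmed computationally for small groups. The following Python sketch is our own illustration (the helper \texttt{power\_graph} is not from the paper): for the elementary abelian $2$-group of rank $2$ the power graph is a star centered at the identity, and for the non-cyclic group $(\mathbb{Z}/3\mathbb{Z})^2$ of prime exponent $3$ it is a union of $3$-cliques meeting only in $e$.

```python
from itertools import product

def power_graph(elements, op, identity):
    """Edge set of the power graph: distinct u, v are adjacent iff
    one lies in the cyclic subgroup generated by the other."""
    def gen(g):                      # elements of <g>
        seen, x = {identity}, g
        while x not in seen:
            seen.add(x)
            x = op(x, g)
        return seen
    sub = {g: gen(g) for g in elements}
    return {frozenset((u, v)) for u in elements for v in elements
            if u != v and (u in sub[v] or v in sub[u])}

# Elementary abelian 2-group of rank 2: the power graph is a star.
Z2sq = list(product((0, 1), repeat=2))
add2 = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))
E = power_graph(Z2sq, add2, (0, 0))
star = all((0, 0) in edge for edge in E)

# (Z/3Z)^2, exponent 3: 8 edges at e plus the 4 pairs {g, 2g}.
Z3sq = list(product((0, 1, 2), repeat=2))
add3 = lambda a, b: tuple((x + y) % 3 for x, y in zip(a, b))
E3 = power_graph(Z3sq, add3, (0, 0))
```

Here `star` is `True` and `E3` has the $8+4=12$ edges of four triangles glued at the identity.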
\section*{Acknowledgement}
The author is thankful to Dr. K. V. Krishna for his constructive suggestions. This research work was partially supported by the fellowship of IIT Guwahati, India.
% Source: https://arxiv.org/abs/1911.08067
\title{Sets in $\mathbb{R}^d$ determining $k$ taxicab distances}
\begin{abstract}
We address an analog of a problem introduced by Erd\H{o}s and Fishburn, itself an inverse formulation of the famous Erd\H{o}s distance problem, in which the usual Euclidean distance is replaced with the metric induced by the $\ell^1$-norm, commonly referred to as the \textit{taxicab metric}. Specifically, we investigate the following question: given $d,k\in \mathbb{N}$, what is the maximum size of a subset of $\mathbb{R}^d$ that determines at most $k$ distinct taxicab distances, and can all such optimal arrangements be classified? We completely resolve the question in dimension $d=2$, as well as the $k=1$ case in dimension $d=3$, and we also provide a full resolution in the general case under an additional hypothesis.
\end{abstract}
\section{Introduction}
In 1946, Erd\H{o}s \cite{Erdos} asked a now famous question: given $n\in \mathbb{N}$, what is the minimum number of distinct distances determined by $n$ points in a plane? Denoting this minimum by $f(n)$, he proved via an elementary counting argument that $f(n)=\Omega(\sqrt{n})$, and he conjectured that the correct order of growth is $n/\sqrt{\log n}$, as attained by a $\sqrt{n}\times\sqrt{n}$ integer grid. After decades of incremental progress, this conjecture was effectively resolved in a celebrated result of Guth and Katz \cite{GuthKatz}, who established that $f(n)=\Omega(n/\log n)$.
Fifty years after Erd\H{o}s's original paper, Erd\H{o}s and Fishburn \cite{EF} addressed the same question from the inverse perspective, and aspired to precise results in fixed cases rather than general asymptotic results. Specifically, they investigated the following: given $k\in \mathbb{N}$, what is the maximum number of points in a plane that determine at most $k$ distinct distances, and can such optimal arrangements be classified? This question, which we refer to as the \textit{Erd\H{o}s-Fishburn problem}, was fully resolved by Erd\H{o}s and Fishburn for $1\leq k \leq 4$, then by Shinohara \cite{Shin} for $k=5$, and Wei \cite{Wei} for $k=6$, while it remains open for $k\geq 7.$ By convention, in the quoted results and throughout this paper, $0$ is not counted as a distance determined by a set of points.
These questions can also be adapted to higher dimensions, and to alternative notions of distance. Here we focus on a particular, well-known alternative metric.
\begin{definition} For $d\in \mathbb{N}$ and $x=(x_1,\dots,x_d)\in \R^d$, we define the \textit{$\ell^1$-norm} of $x$ by $$\norm{x}_1=|x_1|+\cdots+|x_d|,$$ which in particular satisfies the \textit{triangle inequality} $\norm{x+y}_1\leq \norm{x}_1+\norm{y}_1$. Like every norm, the $\ell^1$-norm induces a metric on $\R^d$ by defining $\norm{x-y}_1$ to be the \textit{$\ell^1$-distance} between $x,y\in \R^d.$
\end{definition}
The metric induced by the $\ell^1$-norm is commonly referred to as the \textit{taxicab metric}, because it measures the length of the shortest path between two points in space, under the restriction that one can only travel in directions parallel to the coordinate axes, as if in a taxicab on a grid of city streets. For example, if two people at city intersections are separated by 3 blocks horizontally and 4 blocks vertically, then, as the crow flies, they are 5 blocks apart by the Pythagorean theorem. However, to actually make the journey without cutting through buildings, they must walk 7 blocks, which is the $\ell^1$-distance.
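The city-block arithmetic above is immediate to verify; the following one-off Python check (our own illustration, with helper names ours) contrasts the two notions of distance for the $3$-by-$4$ example.

```python
def l1(p, q):
    """Taxicab (l^1) distance between two points of R^d."""
    return sum(abs(a - b) for a, b in zip(p, q))

# 3 blocks horizontally and 4 vertically:
# 5 as the crow flies (Pythagoras), but 7 on foot.
p, q = (0, 0), (3, 4)
euclid = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
taxi = l1(p, q)
```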
As noted in Chapters 0 and 1 of \cite{ErdosBook}, one can show that the minimum number of $\ell^1$-distances determined by $n$ points in $\R^d$ is $\Omega(n^{1/d})$, and this order of growth is attained by $\{1,2,3,\dots,\lceil n^{1/d}\rceil\}^d$. Therefore, in the case of the taxicab metric, the big picture asymptotic question is immediately resolved, which raises the question of whether this case can be analyzed more precisely. To begin this journey, we first consider the Erd\H{o}s-Fishburn problem in the plane with $k=1$.
We fix two points $U,V\in \R^2$, say $U=(-1,0)$ and $V=(1,0)$. With the usual notion of distance, if any additional points can be added without determining an additional distance, those points necessarily lie on the circles of radius $2$ centered at $U$ and $V$, respectively. Those two circles intersect in exactly two points, namely $Q=(0,\sqrt{3})$ and $R=(0,-\sqrt{3})$. Since the distance between $Q$ and $R$ is greater than $2$, only one of the two can be added to $\{U,V\}$ while maintaining only a single distance. In summary, a set $P\subseteq \R^2$ determining a single distance satisfies $|P|\leq 3$, and equality holds if and only if $P$ is the set of vertices of an equilateral triangle.
However, even in this simplest case, the taxicab metric case diverges from that of the usual distance. With the taxicab metric, the ``circle" (which we refer to as an \textit{$\ell^1$-circle}) of radius $2$ centered at $U$ is in fact a square, rotated $45^{\circ}$ from axis-parallel, with the four sides connecting the points $(-3,0)$, $(-1,2)$, $(1,0)$, and $(-1,-2)$. Similarly, the $\ell^1$-circle of radius $2$ centered at $V$ is a square with sides connecting $(3,0)$, $(1,-2)$, $(-1,0)$, and $(1,2)$. Like the usual distance case, these two circles intersect in exactly two points, this time $Q=(0,1)$ and $R=(0,-1)$. The difference is that here $Q$ and $R$ are indeed separated by $\ell^1$-distance $2$, and hence the four-point configuration $\{U,V,Q,R\}$ determines a single $\ell^1$-distance.
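The four-point configuration is easy to confirm by brute force; the short Python check below (our own, not from the paper) computes all six pairwise $\ell^1$-distances of $\{U,V,Q,R\}$.

```python
def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

U, V = (-1, 0), (1, 0)
Q, R = (0, 1), (0, -1)   # the two intersections of the l^1-circles
pts = [U, V, Q, R]
dset = {l1(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]}
```

Every pair is at $\ell^1$-distance exactly $2$, so `dset` is the singleton `{2}`.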
\section{Main Definition and Results} \label{mainres}
Inspired by the four-point construction above, as well as additional trial and error, we define the following family of sets, which serve as our candidates for resolving the Erd\H{o}s-Fishburn problem for the taxicab metric.
\begin{definition}For integers $d>0$ and $k\geq 0$, we define $$\Lambda_d(k)=\left\{ n=(n_1,n_2,\dots,n_d)\in \mathbb{Z}^d: \norm{n}_1\leq k, \ n_1+\cdots+n_d \equiv k \ (\text{mod }2)\right\}. $$
\end{definition}
\noindent $\Lambda_d(k)$ is the union of the integer lattice points lying on the $\ell^1$-spheres (which in dimension $d$ are $2^d$-faced polytopes) centered at the origin of \textit{every other} integer radius, starting with either $0$ or $1$ depending on the parity of $k$. The four-point configuration discussed in the introduction is $\Lambda_2(1)$, and some additional examples are pictured below.
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.5\linewidth]{Lambda22.pdf}
\caption{$\Lambda_2(2)$: $9$ points in $\R^2$ determining two $\ell^1$-distances}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.5\linewidth]{Lambda32.pdf}
\caption{$\Lambda_3(2)$: $19$ points in $\R^3$ determining two $\ell^1$-distances}
\label{fig:sub2}
\end{subfigure}
\caption{}\label{fig:test}
\end{figure}
\noindent In Section \ref{propsec}, we establish the following properties of $\Lambda_d(k)$, including the crucial fact that it determines exactly $k$ distinct $\ell^1$-distances, the primary motivation for its definition.
\begin{theorem} \label{prop} The following hold for all $d,k\in \mathbb{N}$: \begin{enumerate}[(i)] \item \label{2k} $\Lambda_d(k)$ determines exactly $k$ distinct $\ell^1$-distances, specifically $2,4,\dots, 2k$ \\ \item \label{start} $|\Lambda_1(k)|=k+1$, $|\Lambda_d(1)|=2d$ \\ \item \label{recur} $\displaystyle{|\Lambda_{d+1}(k)|=|\Lambda_{d}(k)|+2\sum_{j=0}^{k-1} |\Lambda_{d}(j)|}$ \end{enumerate}
\end{theorem}
\noindent Parts (\ref{start}) and (\ref{recur}) of Theorem \ref{prop}, combined with known formulas for sums of powers, allow one to determine explicit formulas for $|\Lambda_d(k)|$ for any fixed $d\in \mathbb{N}$. We include the first few examples in the following table:
\begin{center}
\begin{table}[H]
\caption{Explicit Formulas for $|\Lambda_d(k)|$}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{||c | c ||}
\hline
$d$ & $|\Lambda_d(k)|$ \\
\hline\hline
$2$ & $(k+1)^2$\\
\hline
$3$ & $\frac{2}{3}(k+1)^3+\frac{1}{3}(k+1)$\\
\hline
$4$ & $\frac{1}{3}(k+1)^4+\frac{2}{3}(k+1)^2$ \\
\hline
$5$ & $\frac{2}{15}(k+1)^5+\frac{2}{3}(k+1)^3+\frac{1}{5}(k+1) $ \\
\hline
$6$ & $ \frac{2}{45}(k+1)^6+\frac{4}{9}(k+1)^4+\frac{23}{45}(k+1)^2 $ \\
\hline
$7$ & $ \frac{4}{315}(k+1)^7+\frac{2}{9}(k+1)^5 + \frac{28}{45}(k+1)^3 + \frac{1}{7}(k+1) $\\
\hline
\end{tabular}
\label{one}
\end{table}
\end{center}
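As a sanity check (ours, not part of the paper), the first few rows of Table \ref{one} can be compared against direct enumeration, using exact rational arithmetic to avoid rounding.

```python
from fractions import Fraction as F
from itertools import product

def Lambda_size(d, k):
    """|Lambda_d(k)| by brute-force enumeration."""
    return sum(1 for n in product(range(-k, k + 1), repeat=d)
               if sum(abs(c) for c in n) <= k and sum(n) % 2 == k % 2)

# Rows d = 2..5 of the table, as polynomials in m = k+1.
formulas = {
    2: lambda m: m ** 2,
    3: lambda m: F(2, 3) * m ** 3 + F(1, 3) * m,
    4: lambda m: F(1, 3) * m ** 4 + F(2, 3) * m ** 2,
    5: lambda m: F(2, 15) * m ** 5 + F(2, 3) * m ** 3 + F(1, 5) * m,
}
table_ok = all(formulas[d](k + 1) == Lambda_size(d, k)
               for d in formulas for k in range(4))
```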
\noindent Some of the patterns observed in Table \ref{one} can be generalized using Faulhaber's Formula for sums of powers, as seen in the following formulation, which we also prove in Section \ref{propsec}.
\begin{theorem}\label{ffthm} For each $d\in \mathbb{N}$ and each integer $k\geq 0$, we have the formula \begin{equation*}|\Lambda_d(k)|=\sum_{i=0}^{\lceil d/2 \rceil-1} a_{d,i}(k+1)^{d-2i},\end{equation*} where the coefficients $a_{d,i}$ satisfy the recursive formula \begin{equation*} a_{d,i}=2\sum_{\ell=0}^i \frac{a_{d-1,\ell}}{d-2\ell} {d-2\ell \choose 2(i-\ell)}B_{2(i-\ell)}, \end{equation*} where $B_i$ is the $i$-th Bernoulli number. In particular, we have the explicit formulas $a_{d,0}=2^{d-1}/d! $ for all $d\in \mathbb{N}$ and $a_{d,1}=2^{d-3}/(3(d-3)!)$ for all $d\geq 3$.
\end{theorem}
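The recursion above can be run mechanically. The following Python sketch is our own: it generates Bernoulli numbers via the standard recurrence $\sum_{j\leq m}{m+1 \choose j}B_j=0$, seeds the recursion with $a_{1,0}=1$, and treats out-of-range coefficients as zero (as the proof implicitly does); the result matches the closed forms for $a_{d,0}$ and $a_{d,1}$ and the entries of Table \ref{one}.

```python
from fractions import Fraction as F
from math import comb, factorial

def bernoulli(n):
    """Bernoulli numbers B_0, ..., B_n (only even indices are used below)."""
    B = [F(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli(8)

# a[(d, i)] via the recursion, with a_{1,0} = 1 and missing entries taken as 0.
a = {(1, 0): F(1)}
for d in range(2, 8):
    for i in range((d + 1) // 2):        # i = 0, ..., ceil(d/2) - 1
        a[(d, i)] = 2 * sum(
            a.get((d - 1, l), F(0)) / (d - 2 * l)
            * comb(d - 2 * l, 2 * (i - l)) * B[2 * (i - l)]
            for l in range(i + 1))
```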
\noindent Detailed analysis of $\Lambda_d(k)$ is perhaps of independent interest, but to make headway toward our goal, we need to address the important questions: does $\Lambda_d(k)$ have maximal size amongst subsets of $\R^d$ determining at most $k$ distinct $\ell^1$-distances? If so, is $\Lambda_d(k)$ the only such optimal arrangement? In anticipation of the latter question, we observe that any optimal arrangement can undergo any scaling, or any transformation that preserves the $\ell^1$-norm, and remain optimal, leading to the following definition.
\begin{definition} \label{simdef} For $d\in \mathbb{N}$, we say that two subsets of $\R^d$ are \textit{$\ell^1$-similar} if one can be mapped to the other via a composition of translations, reflections about coordinate hyperplanes, dilations, and coordinate permutations, as these transformations either preserve or uniformly scale collections of $\ell^1$-distances.
\end{definition}
\noindent We note that the list of transformations in Definition \ref{simdef} does not include rotations, because, unlike the usual Euclidean metric, the taxicab metric is \textit{not} invariant under rotation, unless the rotation can alternatively be obtained through reflection about coordinate hyperplanes and permutation of coordinates. This fact rears its head in our exploration of the taxicab metric in higher dimensions, and plays a key role in our discussions in Section \ref{hdsec}. For now, though, the following result established in Section \ref{2dsec} completely resolves the taxicab analog of the Erd\H{o}s-Fishburn problem in the plane.
\begin{theorem} \label{opt} If $k\in \mathbb{N}$ and $P\subseteq \R^2$ determines at most $k$ distinct $\ell^1$-distances, then $|P|\leq (k+1)^2$. Further, $|P|=(k+1)^2$ if and only if $P$ is $\ell^1$-similar to $\Lambda_2(k)$.
\end{theorem}
\noindent As we discuss in Section \ref{2dsec}, the $d=2$ case is simplified by the fact that, for the purposes of analyzing distance sets, the $\ell^1$-norm in $\R^2$ is effectively the same as the $\ell^{\infty}$-norm defined by $\norm{(x,y)}_{\infty}=\max\{|x|,|y|\}$. However, this equivalence does not persist in dimension $d\geq 3$, and for this reason, our proof strategy does not immediately generalize to higher dimensions. (Although, for the interested reader, the proof does generalize to show that if $P\subseteq \R^d$ determines at most $k$ distinct $\ell^{\infty}$-distances, then $|P|\leq (k+1)^d$, and equality holds if and only if $P$ is $\ell^1$-similar to $\{0,1,2,\dots,k\}^d$.)
\noindent With considerable additional effort, we successfully get our foot into the higher-dimensional door in Section \ref{3dsec}, which assures us that the unique optimality of $\Lambda_d(k)$ is not completely dependent on a connection to the $\ell^{\infty}$-norm.
\begin{theorem}\label{singdist} If $P\subseteq \R^3$ determines a single $\ell^1$-distance, then $|P|\leq 6$. Further, $|P|=6$ if and only if $P$ is $\ell^1$-similar to $\Lambda_3(1)$.
\end{theorem}
\noindent \textbf{Remark on previous work for $k=1$.} After the initial posting of this paper to the arXiv, we were alerted to previous work done in the $k=1$ case (referred to as \textit{equilateral sets}) in a variety of metric spaces, including $\R^d$ with the taxicab metric (referred to as \textit{rectilinear space}). Specifically, Theorem \ref{singdist} above follows from Corollary 4.2 of \cite{Band}, due to Bandelt, Chepoi, and Laurent, while Koolen, Laurent, and Schrijver \cite{Koolen} showed that if $P\subseteq \R^4$ determines a single $\ell^1$-distance, then $|P|\leq 8=|\Lambda_4(1)|$. This partially settles a question of Kusner (see Problem 0 in \cite{Guy}), who asked if $|P|\leq 2d=|\Lambda_d(1)|$ holds for subsets of $\R^d$ determining a single $\ell^1$-distance, and this remains open for $d\geq 5$. Conjecture \ref{optcon} below can be thought of as a precise, multi-distance generalization of Kusner's question. While the conclusion of Theorem \ref{singdist} was known previously, we believe our alternative, elementary proof given in Section \ref{3dsec} remains of interest.
\noindent In Section \ref{hdsec}, we explore the question of what additional hypotheses are required to prove the optimality of $\Lambda_3(k)$ for all $k\in \mathbb{N}$, or even $\Lambda_d(k)$ in full generality. We find that the proof of Theorem \ref{opt} can be fully adapted with a seemingly mild additional assumption, leading us to make the following general conjecture.
\begin{conjecture} \label{optcon} If $d,k\in \mathbb{N}$ and $P\subseteq \R^d$ determines at most $k$ distinct $\ell^1$-distances, then $|P|\leq |\Lambda_d(k)|$. Further, $|P|=|\Lambda_d(k)|$ if and only if $P$ is $\ell^1$-similar to $\Lambda_d(k)$.
\end{conjecture}
\section{Properties of $\Lambda_d(k)$: Proof of Theorems \ref{prop} and \ref{ffthm}} \label{propsec}
We begin this section by proving the essential properties of $\Lambda_d(k)$ that make it a worthy candidate for resolving the Erd\H{o}s-Fishburn problem for the taxicab metric.
\subsection{Proof of Theorem \ref{prop}} Fix $k\in \mathbb{N}$. For (\ref{2k}), fix $d\in \mathbb{N}$ and note that by definition of $\Lambda_d(k)$, we have $\norm{n}_1\leq k$ for all $n\in \Lambda_d(k)$. In particular, for any $n,m\in \Lambda_d(k)$, we have by the triangle inequality that $$\norm{n-m}_1 \leq \norm{n}_1+\norm{m}_1 \leq k+k=2k. $$
\noindent Further, $\norm{n-m}_1=|n_1-m_1|+\cdots+|n_d-m_d|$ is certainly an integer, and by definition of $\Lambda_d(k)$, and the fact that an integer is congruent to its absolute value modulo 2, we have \begin{align*} |n_1-m_1|+\cdots+|n_d-m_d| & \equiv n_1-m_1+\cdots +n_d-m_d \\ & \equiv (n_1+\cdots+n_d)-(m_1+\cdots+m_d) \\ & \equiv k-k \\& \equiv 0 \ (\text{mod }2). \end{align*} Therefore, the only possible nonzero values of $\norm{n-m}_1$ are $2,4,\dots, 2k$, and for each $1\leq j \leq k$ the distance $2j$ is attained between the points $(j,0,\dots,0)$ and $(-j,0,\dots,0)$ if $j\equiv k \ (\text{mod }2)$, or between $(j,1,\dots,0)$ and $(-j,1,\dots,0)$ if $j \not \equiv k \ (\text{mod }2)$.
\noindent For (\ref{start}), we first see that $$\Lambda_1(k)=\begin{cases}\{-k,-k+2,\dots,-1,1,\dots,k-2,k\} & \text{ if }k \text{ is odd} \\ \{-k, -k+2,\dots, -2, 0, 2,\dots, k-2, k\} & \text{ if }k \text{ is even}\end{cases}.$$ In particular, $|\Lambda_1(k)|=2\lceil k/2 \rceil=k+1$ if $k$ is odd and $|\Lambda_1(k)|=2(k/2)+1=k+1$ if $k$ is even. Secondly, we see that $\Lambda_d(1)$ is precisely $\{\pm e_i: 1\leq i \leq d\}$, where $\{e_i\}$ is the standard basis for $\R^d$.
\noindent For (\ref{recur}), we see that the possible values of the final coordinate for elements of $\Lambda_{d+1}(k)$ are integers satisfying $-k\leq x_{d+1} \leq k$. Further, for a fixed value $x_{d+1}=c$, the intersection of this hyperplane with $\Lambda_{d+1}(k)$ is $$\left\{(n_1,\dots,n_{d},c)\in \mathbb{Z}^{d+1}: |n_1|+\cdots+|n_{d}|\leq k-|c|, n_1+\cdots+n_{d}\equiv k-c\equiv k-|c| \ (\text{mod }2) \right\}, $$ which is in natural bijection with $\Lambda_{d}(k-|c|)$. Therefore, $$|\Lambda_{d+1}(k)|=\sum_{c=-k}^k |\Lambda_{d}(k-|c|)|=|\Lambda_{d}(k)|+2\sum_{j=0}^{k-1} |\Lambda_{d}(j)|. $$\qed
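The slicing argument behind (\ref{recur}) is also easy to confirm numerically for small parameters; the check below (our own sketch) compares both sides of the recurrence by direct enumeration.

```python
from itertools import product

def Lam(d, k):
    """Lambda_d(k) by brute force."""
    return [n for n in product(range(-k, k + 1), repeat=d)
            if sum(abs(c) for c in n) <= k and sum(n) % 2 == k % 2]

recurrence_ok = all(
    len(Lam(d + 1, k))
    == len(Lam(d, k)) + 2 * sum(len(Lam(d, j)) for j in range(k))
    for d in (1, 2, 3) for k in range(4))
```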
\noindent We continue by establishing a detailed formula for $|\Lambda_d(k)|$, which in particular guarantees that it has the correct order of magnitude $\Omega(k^d)$.
\subsection{Proof of Theorem \ref{ffthm}} We first note that by Theorem \ref{prop}(ii), we have $|\Lambda_1(k)|=k+1$ for all $k\geq 0$. We now fix $d\geq 2$, let $h=\lceil d/2\rceil -1$, and make the inductive hypothesis that \begin{equation} \label{indhyp} |\Lambda_{d-1}(k)|=a_{d-1,0}(k+1)^{d-1}+a_{d-1,1}(k+1)^{d-3}+\cdots+a_{d-1,h}(k+1)^{d-1-2h} \end{equation} for all $k\geq 0$. Faulhaber's formula gives \begin{equation}\label{ff} F_{p}(n)=\sum_{j=1}^n j^p=\frac{n^{p+1}}{p+1}+\frac{n^p}{2}+\frac{1}{p+1}\sum_{i=0}^{p-1}{p+1 \choose i}B_{p+1-i} n^i, \end{equation} for all $n,p\in \mathbb{N}$, where $B_i$ is the $i$-th Bernoulli number. By Theorem \ref{prop}(iii), we have \begin{equation} \label{recur2}|\Lambda_d(k)|=2\sum_{j=0}^{k-1}|\Lambda_{d-1}(j)|+|\Lambda_{d-1}(k)|=2\sum_{j=0}^{k}|\Lambda_{d-1}(j)|-|\Lambda_{d-1}(k)|, \end{equation} which combines with (\ref{indhyp}) to yield \begin{align*} |\Lambda_d(k)| &=2\left(a_{d-1,0}\sum_{j=0}^k (j+1)^{d-1}+\cdots+a_{d-1,h}\sum_{j=0}^k (j+1)^{d-1-2h}\right)-|\Lambda_{d-1}(k)| \\ &=2\left(a_{d-1,0}\sum_{j=1}^{k+1} j^{d-1}+\cdots+a_{d-1,h}\sum_{j=1}^{k+1} j^{d-1-2h}\right)-|\Lambda_{d-1}(k)| \\ &=2\left(a_{d-1,0}F_{d-1}(k+1)+\cdots+a_{d-1,h}F_{d-1-2h}(k+1)\right)-|\Lambda_{d-1}(k)|.\end{align*} This tells us that we can indeed write $|\Lambda_d(k)|$ as a polynomial in $k+1$, but we wish to establish the claimed explicit and recursive formulas for the coefficients, as well as the fact that every other coefficient is zero. First we consider the $(k+1)^d$ coefficient, which only arises from the term $2a_{d-1,0}F_{d-1}(k+1)$. Since the $n^{p+1}$ coefficient of $F_{p}(n)$ is $1/(p+1)$, we have $a_{d,0}=2a_{d-1,0}/d$. Using the base case $a_{1,0}=1$, we have by induction that $a_{d,0}=2^{d-1}/d!$, as claimed.
\noindent Next we consider the $(k+1)^{d-1}$ coefficient, which arises from two sources: the $(k+1)^{d-1}$ coefficients of $2a_{d-1,0}F_{d-1}(k+1)$ and $-|\Lambda_{d-1}(k)|$, respectively. The former is $2a_{d-1,0}(1/2)=a_{d-1,0}$, while the latter is $-a_{d-1,0}$, which means that the $(k+1)^{d-1}$ coefficient of $|\Lambda_d(k)|$ is indeed $0$. More generally, for other coefficients corresponding to terms of the form $(k+1)^{d-1-2i}$, we use the following three facts: the $(k+1)^{d-1-2i}$ coefficient of $2a_{d-1,i}F_{d-1-2i}(k+1)$ is $a_{d-1,i}$ by the same logic as above, the $(k+1)^{d-1-2i}$ coefficient of $-|\Lambda_{d-1}(k)|$ is $-a_{d-1,i}$, and the $(k+1)^{d-1-2i}$ coefficient of $F_{d-1-2\ell}(k+1)$ is $0$ for all $\ell<i$, because $B_n=0$ for all odd $n\geq 3$. Therefore, all $(k+1)^{d-1-2i}$ coefficients of $|\Lambda_d(k)|$ are $0$.
\noindent For the $(k+1)^{d-2}$ coefficient, we begin by noting that a direct calculation using (\ref{ff}) and (\ref{recur2}) yields $|\Lambda_3(k)|=\frac{2}{3}(k+1)^3+\frac{1}{3}(k+1)$, hence $a_{3,1}=1/3$, which serves as the base case for another induction. Fixing $d\geq 4$ and assuming the claimed formula $a_{d-1,1}=2^{d-4}/(3(d-4)!)$ holds, the $(k+1)^{d-2}$ coefficient of $|\Lambda_d(k)|$ is formed by two contributions, from $2a_{d-1,0}F_{d-1}(k+1)$ and $2a_{d-1,1}F_{d-3}(k+1)$, respectively.
\noindent The former is given by $$2a_{d-1,0}\left(\frac{1}{d}\right){d \choose d-2}B_2=2\cdot\frac{2^{d-2}}{(d-1)!}\cdot\frac{1}{d}\cdot\frac{d(d-1)}{2}\cdot\frac{1}{6}=\frac{2^{d-3}}{3(d-2)!},$$ while the latter is given by $$2a_{d-1,1}\cdot\frac{1}{d-2}=2\cdot\frac{2^{d-4}}{3(d-4)!}\cdot\frac{1}{d-2}=\frac{2^{d-3}}{3(d-4)!(d-2)}.$$ Therefore, we have $$a_{d,1}=\frac{2^{d-3}}{3(d-2)!}+\frac{2^{d-3}}{3(d-4)!(d-2)}=\frac{2^{d-3}+(d-3)2^{d-3}}{3(d-2)!}=\frac{2^{d-3}(d-2)}{3(d-2)!}=\frac{2^{d-3}}{3(d-3)!},$$ as claimed.
\noindent More generally, by (\ref{ff}) and (\ref{recur2}), we see that the $(k+1)^{d-2i}$ coefficient of $|\Lambda_d(k)|$ receives a contribution from $2a_{d-1,\ell}F_{d-1-2\ell}(k+1)$ for each $0\leq \ell \leq i$. Specifically, that contribution is $$2a_{d-1,\ell}\cdot \frac{1}{d-2\ell} \cdot {d-2\ell \choose d-2i} \cdot B_{2i-2\ell}=\frac{2a_{d-1,\ell}}{d-2\ell}{d-2\ell \choose 2(i-\ell)} B_{2(i-\ell)}, $$ and the recursive formula for $a_{d,i}$ follows. \qed
\section{Optimality in Two Dimensions: Proof of Theorem \ref{opt}} \label{2dsec}
In this section, we prove the unique optimality of $\Lambda_2(k)$, in that it is the unique subset of $\R^2$, up to $\ell^1$-similarity, of maximal size amongst sets determining at most $k$ distinct $\ell^1$-distances. As referenced in Section \ref{mainres}, the proof is in part enabled by an equivalence between the $\ell^1$-norm and the $\ell^{\infty}$-norm on $\R^2$. We frame our discussion entirely in the context of the $\ell^1$-norm, but the connection is implicit in our proof, particularly the following lemma.
\begin{lemma} \label{linfty} Let $v_1=(1,1)$ and $v_2=(-1,1)$. If $x \in \R^2$ with $x=c_1v_1+c_2v_2$, then $$\norm{x}_1=2\max\{|c_1|, |c_2|\}. $$
\end{lemma}
\begin{proof} With $v_1=(1,1)$ and $v_2=(-1,1)$ as in the statement, fix $x\in \R^2$ and write $x$ uniquely as $x=c_1v_1+c_2v_2=(c_1-c_2,c_1+c_2)$. By potentially reflecting about the vertical coordinate axis, which interchanges $c_1$ and $c_2$, and/or replacing $x$ by $-x$, both of which preserve the $\ell^1$-norm, we can assume without loss of generality that $|c_1|\geq |c_2|$ and $c_1 \geq 0$. In this case, $$\norm{x}_1=|c_1-c_2|+|c_1+c_2|=c_1-c_2+c_1+c_2=2c_1=2\max\{|c_1|,|c_2|\}.$$\end{proof}
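The identity in Lemma \ref{linfty} can also be stress-tested numerically; the snippet below (our own sketch) solves for $c_1,c_2$ from $x=c_1v_1+c_2v_2$, namely $c_1=(x_1+x_2)/2$ and $c_2=(x_2-x_1)/2$, and compares both sides on random points.

```python
from random import seed, uniform

seed(0)
checks = []
for _ in range(1000):
    x = (uniform(-10, 10), uniform(-10, 10))
    # x = c1*(1,1) + c2*(-1,1)  <=>  c1 = (x1+x2)/2, c2 = (x2-x1)/2
    c1, c2 = (x[0] + x[1]) / 2, (x[1] - x[0]) / 2
    lhs = abs(x[0]) + abs(x[1])            # ||x||_1
    rhs = 2 * max(abs(c1), abs(c2))
    checks.append(abs(lhs - rhs) < 1e-9)   # tolerance for float round-off
lemma_ok = all(checks)
```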
\noindent \textbf{Remark.} As an alternative approach to Section 4, one could treat the connection between the $\ell^1$ and $\ell^{\infty}$ norms on $\R^2$ in a more explicit way. Namely, Lemma \ref{linfty} can be reframed as the statement that the map $f: (\R^2,\norm{\cdot}_1)\to (\R^2,\norm{\cdot}_{\infty})$ defined by $f(x,y)=(x+y,x-y)$ is a linear isomorphism satisfying $\norm{(x,y)}_1=\norm{f(x,y)}_{\infty}$. Therefore, any results related to the $\ell^{\infty}$-norm can be immediately transferred to the $\ell^1$-norm via this isomorphism. In particular, the proofs that follow could be rewritten in a slightly cleaner way in the $\ell^{\infty}$ context. However, in order to maintain our hands-on approach with the taxicab metric, we have chosen to leave the proofs in their $\ell^1$ form.\\
\noindent Our main strategy for proving Theorem \ref{opt} is inspired by Erd\H{o}s and Fishburn \cite{EF}. Specifically, we suppose that $P \subseteq \R^2$ determines at most $k$ distinct $\ell^1$-distances, and we seek an upper bound on the number of points we must remove from $P$ in order to eliminate the largest $\ell^1$-distance, hence reducing to the case of $k-1$ distinct $\ell^1$-distances and allowing us to invoke an inductive hypothesis. The following sequence of lemmas formalizes this strategy. Here we define an \textit{$\ell^1$-ball} in the expected way, as the region bounded by an $\ell^1$-sphere, which for $d=2$ is an $\ell^1$-circle.
\
\begin{lemma} \label{SD} Suppose $P\subseteq \R^2$ is finite. If $D$ is the largest $\ell^1$-distance determined by $P$, then $P$ is contained in a closed $\ell^1$-ball of diameter $D$.
\end{lemma}
\begin{proof}
Suppose $P \subseteq \R^2$ is finite. Let $v_1 =(1,1)$ and $v_2 = (-1,1)$.
Since $\{v_1,v_2\}$ forms a basis for $\R^2$, every $x \in P$ can be written uniquely as $x= c_1 v_1 + c_2 v_2$. Choose $x_1,x_2,x_3,x_4\in P$ such that $x_1$ maximizes $c_1$, $x_2$ minimizes $c_1$, $x_3$ maximizes $c_2$, and $x_4$ minimizes $c_2$. Call these values $c_{1,\max}$, $c_{1,\min}$, $c_{2,\max}$, and $c_{2,\min}$, respectively. These choices contain $P$ inside a rectangle $R$, rotated $45^{\circ}$ from axis-parallel, determined by the inequalities $c_{1,\min}\leq c_1 \leq c_{1, \max}$ and $c_{2,\min} \leq c_2 \leq c_{2,\max}$.
\noindent Let $w_1 = c_{1, \max} - c_{1, \min}$ and $w_2 = c_{2, \max} - c_{2, \min}$, and assume without loss of generality that $w_1 \geq w_2$. By Lemma \ref{linfty}, we have that $\norm{x_1-x_2}_1=2w_1$ and $\norm{p_1-p_2}_1 \leq 2w_1$ for all $p_1,p_2\in R$, so $D=2w_1$ is the largest $\ell^1$-distance determined by $P$. Let $c_{2,\text{new}} = c_{2, \max} - w_1\leq c_{2,\min}$, and let $B\supseteq R \supseteq P$ be defined by the inequalities $c_{1,\min}\leq c_1 \leq c_{1, \max}$ and $c_{2,\text{new}} \leq c_2 \leq c_{2,\max}$. $B$ is a square rotated $45^{\circ}$ from axis-parallel, or in other words a closed $\ell^1$-ball, of diameter $D$, as required. \end{proof}
\
\begin{lemma} \label{remove} If $P\subseteq \R^2$ is contained in a closed $\ell^1$-ball $B$ of diameter $D$, then the $\ell^1$-distance $D$ can be eliminated from $P$ by removing the points of $P$ contained in any two adjacent sides of the boundary of $B$.
\end{lemma}
\begin{proof} Suppose $P\subseteq \R^2$ is contained in a closed $\ell^1$-ball $B$ of diameter $D$.
\noindent Let $a_1, a_2$ be the left and right vertices of $B$, respectively, so in particular $\norm{a_1-a_2}_1=D$. Let $U$ denote the closed (including $a_1,a_2$) upper $\ell^1$-semicircle connecting $a_1$ and $a_2$, and let $L$ denote the open (not including $a_1,a_2$) lower $\ell^1$-semicircle connecting $a_1$ and $a_2$. Since the $\ell^1$-norm is invariant under $90^{\circ}$ rotation, it suffices to establish the conclusion of the lemma for removing the points of $P$ lying in $U$. Suppose $x_1,x_2\in P\setminus U$.
\noindent \textbf{Case 1:} At least one of $x_1,x_2$ lies in $B\setminus(U\cup L)$, which is an open $\ell^1$-ball of radius $D/2$.
\noindent Assume without loss of generality that $x_1\in B\setminus(U\cup L)$, and let $c$ be the center of $B$. Therefore, $\norm{x_1-c}_1 < D/2$ and $\norm{x_2-c}_1\leq D/2$. By the triangle inequality, $\norm{x_1-x_2}_1\leq \norm{x_1-c}_1+\norm{c-x_2}_1 < D/2+D/2=D.$
\noindent \textbf{Case 2:} $x_1,x_2\in L$. After possibly reflecting, assume without loss of generality that $x_1$ is to the left of $x_2$ and $\norm{x_1-a_1}_1\leq \norm{x_2-a_2}_1$, so $x_1$ is positioned at least as high as $x_2$. By replacing $x_1$ with $a_1$, we move up and to the left, so both the horizontal and vertical components of the $\ell^1$-distance to $x_2$ get larger, hence $$\norm{x_1-x_2}_1<\norm{a_1-x_2}_1\leq D. $$
\noindent In both cases, all distances amongst points in $P\setminus U$ are strictly less than $D$, and the lemma follows. \end{proof}
\
\begin{lemma}\label{line} If $P\subseteq \R^d$ is contained in a line and determines at most $k$ distinct $\ell^1$-distances, then $|P|\leq k+1$. Further, if $|P|=k+1$, then $P$ is an arithmetic progression, meaning the $\ell^1$-distances are $\lambda,2\lambda,\dots, k\lambda$ for some $\lambda>0$.
\end{lemma}
\begin{proof} Since $\ell^1$-distance along a straight line in $\R^d$ is just a constant multiple, depending on the direction of the line, times the standard Euclidean distance, it suffices to establish the lemma with $d=1$, for which we induct on $k$.
\noindent The base case $k=1$ is trivial, as three points $x_1<x_2<x_3$ in $\R$ automatically determine two distances $x_2-x_1<x_3-x_1$, and any two points form an arithmetic progression.
\noindent Now, fix $k\geq 2$, and assume that if $Q\subseteq \R$ determines at most $k-1$ distances, then $|Q|\leq k$, and further, if $|Q|=k$, then $Q$ is an arithmetic progression. Now suppose $P\subseteq \R$ determines at most $k$ distances.
\noindent Let $P=\{x_1<x_2<\cdots<x_n\}$. The $n-1$ distances $x_2-x_1<x_3-x_1<\cdots<x_n-x_1$ are all distinct, hence $n-1\leq k$, or in other words $n\leq k+1$. Further, suppose $n=k+1$. By removing $x_{k+1}$, we also remove the longest distance $x_{k+1}-x_1$, so the set $Q=\{x_1,\dots, x_k\}$ determines $k-1$ distances. By our inductive hypothesis, $Q$ must be an arithmetic progression, in other words $Q=\{x_1,x_1+\lambda,x_1+2\lambda, \dots x_1+(k-1)\lambda\}$.
\noindent If $x_{k+1}<x_1+k\lambda$, then both $x_{k+1}-x_1>(k-1)\lambda$ and $x_{k+1}-x_k<\lambda$ are new distances not determined by $Q$. If $x_{k+1}>x_1+k\lambda$, then both $x_{k+1}-x_1>k\lambda$ and $x_{k+1}-x_2>(k-1)\lambda$ are new distances not determined by $Q$. In either case, $P$ determines at least $k+1$ distinct distances, contradicting the assumption that it determines at most $k$ distances. Therefore, $x_{k+1}$ must be $x_1+k\lambda$, and the lemma follows.\end{proof}
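Both directions of Lemma \ref{line} are easy to see numerically; the short Python check below (our own illustration) shows that a $6$-term arithmetic progression determines exactly $5$ distances, while a non-AP set of $4$ points already determines $4$.

```python
def dists(pts):
    """Sorted distinct nonzero distances of a finite subset of R."""
    return sorted({abs(p - q) for p in pts for q in pts if p != q})

lam = 0.5
ap = [3 + j * lam for j in range(6)]   # k+1 = 6 points of an AP, k = 5
nonap = [0, 1, 2, 4]                   # 4 points, not an AP
```

Here `dists(ap)` is the five multiples of $\lambda$, and `dists(nonap)` has four elements, exceeding the $k=3$ allowed for a $4$-point extremal set.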
\begin{lemma} \label{semicirc} If $S \subseteq \R^2$ is contained in the union of two adjacent sides of an $\ell^1$-circle and determines at most $k$ distinct $\ell^1$-distances, then $|S|\leq 2k+1$. Further, if $|S|=2k+1$, then the points of $S$ on each side form an arithmetic progression containing the shared vertex.
\end{lemma}
\begin{proof}Suppose $S \subseteq \R^2$ is contained in the union of two adjacent sides of an $\ell^1$-circle and determines at most $k$ distinct $\ell^1$-distances. Assume without loss of generality that the two adjacent sides are the closed upper semicircle. We know from Lemma \ref{line} that there are at most $k+1$ points on each of the two sides.
\noindent Further, if $|S|\geq 2k+1$, then there are exactly $k+1$ points on one side, say the left, and at least $k$ points on the right side. Let $x_1,\dots,x_{k+1}$ denote the points of $S$ on the left side, ordered left to right, and let $y$ be any point of $S$ on the right side. We note that $$\norm{x_1-x_2}_1<\norm{x_1-x_3}_1<\cdots<\norm{x_1-x_{k+1}}_1\leq \norm{x_1-y}_1, $$ and $\norm{x_1-x_{k+1}}_1= \norm{x_1-y}_1$ is only possible if $x_{k+1}$ is the vertex shared by the two sides. In particular, if the shared vertex is not included amongst the $k+1$ points on the left side, then at least $k+1$ distinct $\ell^1$-distances occur from the leftmost point, contradicting our assumption.
\noindent Therefore, if $|S|\geq 2k+1$, it must be the case that there are exactly $k+1$ points on both the left and right sides, including the shared vertex, meaning in fact $|S|=2k+1$. Finally, by Lemma \ref{line}, we know that the $k+1$ points on each side must form an arithmetic progression. \end{proof}
\noindent We are now fully armed to show the unique optimality of $\Lambda_2(k)$.
\begin{proof}[Proof of Theorem \ref{opt}] We induct on $k$. For our base case, consider $k=0$. In order for a set to determine $0$ $\ell^1$-distances (as always, not including $0$), it can contain at most $1=(0+1)^2$ point, and if it contains a point, then it is trivially a translation of $\Lambda_2(0)=\{(0,0)\}$.
\noindent Now, fix $k\in \mathbb{N}$, assume the conclusion of the theorem holds for $k-1$, and suppose $P\subseteq \R^2$ determines at most $k$ distinct $\ell^1$-distances. By Lemma \ref{SD}, $P$ is contained in a closed $\ell^1$-ball $B$ of diameter $D$, where $D$ is the largest $\ell^1$-distance determined by $P$. By Lemma \ref{remove}, we can remove the distance $D$ by removing the points of $P$ that lie on the closed upper $\ell^1$-semicircle $U$ on the boundary of $B$. Since $D$ has been removed as an $\ell^1$-distance, we know that $T=P\setminus U$ determines at most $k-1$ distinct $\ell^1$-distances. By our inductive hypothesis, $|T|\leq k^2$, and if $|T|=k^2$, then $T$ is $\ell^1$-similar to $\Lambda_2(k-1)$.
\noindent Further, by Lemma \ref{semicirc}, we know that $S=P\cap U$ satisfies $|S|\leq 2k+1$, and if $|S|=2k+1$, then $S$ consists of two $(k+1)$-term arithmetic progressions, one on each side of $U$, which meet at the shared vertex. Therefore, $|P|\leq |T|+|S|\leq k^2+2k+1=(k+1)^2$, and $|P|=(k+1)^2$ if and only if $T$ is $\ell^1$-similar to $\Lambda_2(k-1)$ and $S$ is a union of two arithmetic progressions meeting at the shared vertex. Finally, the only way these two sets can be combined without creating additional $\ell^1$-distances is for $S\cup T$ to be $\ell^1$-similar to $\Lambda_2(k)$. \end{proof}
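As a computational companion to Theorem \ref{opt} (not part of the proof), one can confirm for small $k$ that the lattice description of $\Lambda_2(k)$ used later in the paper, namely $\{x\in \mathbb{Z}^2: |x_1|+|x_2|\leq k, \ x_1+x_2\equiv k \ (\text{mod }2)\}$, has $(k+1)^2$ points and determines exactly the $k$ distinct $\ell^1$-distances $2,4,\dots,2k$.

```python
from itertools import combinations, product

def Lambda2(k):
    """Lattice description of Lambda_2(k) from the text:
    {x in Z^2 : |x_1| + |x_2| <= k, x_1 + x_2 = k (mod 2)}."""
    return [x for x in product(range(-k, k + 1), repeat=2)
            if sum(map(abs, x)) <= k and (sum(x) - k) % 2 == 0]

def l1_distances(pts):
    """Set of distinct nonzero l1-distances determined by pts."""
    return {sum(abs(a - b) for a, b in zip(p, q))
            for p, q in combinations(pts, 2)}

for k in range(1, 6):
    P = Lambda2(k)
    assert len(P) == (k + 1) ** 2                   # exactly (k+1)^2 points
    assert l1_distances(P) == {2 * i for i in range(1, k + 1)}  # k distances
```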
\section{Single $\ell^1$-distance in three dimensions: Proof of Theorem \ref{singdist}} \label{3dsec}
Without analogs of Lemmas \ref{linfty} and \ref{SD} in dimension $d\geq 3$, our strategy for proving Theorem \ref{opt} does not naturally generalize to higher dimensions. However, in the case of $k=1$, we make the observation that if $P\subseteq \R^d$ determines a single $\ell^1$-distance, then all but the ``southernmost'' point (the point minimizing the last coordinate) of $P$ lie on a single closed upper $\ell^1$-hemisphere. The following sequence of lemmas provides a detailed investigation into how $\ell^1$-distance behaves between points on a single upper $\ell^1$-hemisphere in $\R^3$, which consists of four flat faces, one for each quadrant determined by the first two coordinates, intersecting at a single northernmost point. The three lemmas correspond to the cases where the points lie on the same face, opposite faces, or neighboring faces, respectively.
\
\begin{lemma} \label{face} Suppose $V,W\in \R^3$ with $V=(x_1,y_1,z_1)$ and $W=(x_2,y_2,z_2)$. If $\norm{V}_1=\norm{W}_1$ and $x_1x_2,y_1y_2,z_1z_2 \geq 0$, then $$\norm{V-W}_1=2\max\{|x_1-x_2|,|y_1-y_2|,|z_1-z_2|\}. $$
\end{lemma}
\
\begin{proof} Suppose $V,W\in \R^3$, $V=(x_1,y_1,z_1)$, $W=(x_2,y_2,z_2)$, $\norm{V}_1=\norm{W}_1=\lambda$, and $x_1x_2,y_1y_2,z_1z_2 \geq 0$. After reflections about coordinate planes, coordinate permutations, and relabeling $V$ and $W$ (which all preserve both sides of the equation in the conclusion of the lemma), we can assume without loss of generality that all coordinates are nonnegative and $x_1-x_2\geq |y_1-y_2| \geq |z_1-z_2|$. Since $$\norm{V}_1=x_1+y_1+z_1=\norm{W}_1=x_2+y_2+z_2=\lambda,$$ we have in particular that $(x_1-x_2)+(y_1-y_2)+(z_1-z_2)=0$. Since the largest coordinate distance is in the $x$-direction, and $x_1\geq x_2$, we must have $y_1\leq y_2$ and $z_1\leq z_2$: if, say, $y_1>y_2$, then $z_2-z_1=(x_1-x_2)+(y_1-y_2)>x_1-x_2$, contradicting the maximality of $x_1-x_2$, and the same argument applies to $z$. Therefore \begin{align*}\norm{V-W}_1&=(x_1-x_2)+(y_2-y_1)+(z_2-z_1) \\ &=x_1-x_2+y_2-y_1+(\lambda-x_2-y_2)-(\lambda-x_1-y_1) \\ &=2(x_1-x_2), \end{align*} and the lemma follows.
\end{proof}
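The identity of Lemma \ref{face} can be checked by brute force over lattice points in the nonnegative octant, which suffices up to the reflections used in the proof; the range bound is an arbitrary choice.

```python
from itertools import combinations, product

# Brute-force check of Lemma "face": for lattice points V, W in the
# nonnegative octant with equal l1-norm,
#   ||V - W||_1 = 2 * max_i |V_i - W_i|.
pts = [p for p in product(range(6), repeat=3)]
for V, W in combinations(pts, 2):
    if sum(V) == sum(W):  # equal l1-norm (all coordinates nonnegative)
        d1 = sum(abs(a - b) for a, b in zip(V, W))
        dmax = max(abs(a - b) for a, b in zip(V, W))
        assert d1 == 2 * dmax
```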
\
\begin{lemma} \label{oppface} Suppose $V,W\in \R^3$ with $V=(x_1,y_1,z_1)$ and $W=(x_2,y_2,z_2)$. If $\norm{V}_1=\norm{W}_1=\lambda$, $x_1x_2\leq 0$, $y_1y_2\leq 0$, and $z_1,z_2\geq 0$, then $$\norm{V-W}_1=2(\lambda-\min\{z_1,z_2\}). $$
\end{lemma}
\begin{proof} Suppose $V,W\in \R^3$ with $V=(x_1,y_1,z_1)$, $W=(x_2,y_2,z_2)$, $\norm{V}_1=\norm{W}_1=\lambda$, $x_1x_2\leq 0$, $y_1y_2\leq 0$, and $z_1,z_2\geq 0$. After reflections about coordinate planes and relabeling $V$ and $W$, we can assume without loss of generality that $x_1,y_1\geq 0$, $x_2,y_2\leq 0$, and $z_1\leq z_2$.
\noindent Therefore, $x_1+y_1=\lambda-z_1$ while $-x_2-y_2=\lambda-z_2$, hence \begin{align*} \norm{V-W}_1&=(x_1-x_2)+(y_1-y_2)+(z_2-z_1) \\ &= \lambda-z_1+\lambda-z_2+z_2-z_1 \\ &=2(\lambda-z_1),
\end{align*} and the lemma follows. \end{proof}
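Lemma \ref{oppface} admits the same kind of lattice check, again independent of the proof, with an arbitrary range bound.

```python
from itertools import product

# Check Lemma "oppface": if ||V||_1 = ||W||_1 = lam, V has nonnegative
# x- and y-coordinates, W has nonpositive x- and y-coordinates, and both
# z-coordinates are nonnegative, then
#   ||V - W||_1 = 2 * (lam - min(z_V, z_W)).
R = range(5)
for (x1, y1, z1), (x2, y2, z2) in product(product(R, repeat=3), repeat=2):
    lam = x1 + y1 + z1
    if lam == x2 + y2 + z2 and lam > 0:
        V, W = (x1, y1, z1), (-x2, -y2, z2)
        d1 = sum(abs(a - b) for a, b in zip(V, W))
        assert d1 == 2 * (lam - min(z1, z2))
```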
\
\begin{lemma} \label{neighbor} Suppose $V,W\in \R^3$ with $V=(x_1,y_1,z_1)$, $W=(-x_2,y_2,z_2)$, $\norm{V}_1=\norm{W}_1=\lambda$, and $x_1x_2, y_1y_2, z_1z_2 \geq 0$. If $\norm{V-W}_1=\lambda$, then $|x_1|\leq \lambda/2$.
\end{lemma}
\begin{proof} Suppose $V,W\in \R^3$ with $V=(x_1,y_1,z_1)$, $W=(-x_2,y_2,z_2)$, $\norm{V}_1=\norm{W}_1=\lambda$, $x_1x_2, y_1y_2, z_1z_2 \geq 0$. After reflecting about coordinate planes and scaling, we can assume $x_1,x_2,y_1,y_2,z_1,z_2\geq 0$, and $\lambda=2$. If $\norm{V-W}_1=2$, then the largest possible value of $y_2+z_2$ is $y_1+z_1+2-(x_1+x_2)$. However, since $\norm{W}_1=2$, we must have $y_2+z_2=2-x_2$, hence $2-x_2\leq y_1+z_1+2-(x_1+x_2)$, which rearranges to $x_1\leq y_1+z_1=2-x_1$, hence $x_1\leq 1$, as required. \end{proof}
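A lattice check of Lemma \ref{neighbor}, with the arbitrary normalization $\lambda=8$ in place of the scaling used in the proof:

```python
from itertools import product

# Lattice check of Lemma "neighbor" with lam = 8: if
# ||V||_1 = ||W||_1 = lam, V = (x1, y1, z1) and W = (-x2, y2, z2) with
# all of x1, x2, y1, y2, z1, z2 >= 0, and ||V - W||_1 = lam, then
# x1 <= lam / 2.
lam = 8
pts = [p for p in product(range(lam + 1), repeat=3) if sum(p) == lam]
for (x1, y1, z1) in pts:
    for (x2, y2, z2) in pts:
        V, W = (x1, y1, z1), (-x2, y2, z2)
        if sum(abs(a - b) for a, b in zip(V, W)) == lam:
            assert 2 * x1 <= lam
```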
\
\noindent We now establish the unique optimality of $\Lambda_3(1)$ by conducting a case analysis based on the concentration of the points of $P$, apart from the southernmost point, on the four faces of a single closed upper $\ell^1$-hemisphere.
\
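Before beginning the case analysis, the extremal configuration itself is easy to verify computationally: $\Lambda_3(1)=\{\pm e_1,\pm e_2,\pm e_3\}$ consists of $6$ points determining the single $\ell^1$-distance $2$.

```python
from itertools import combinations, permutations

# Sanity check: Lambda_3(1) = {+-e_1, +-e_2, +-e_3} has 6 points and
# determines the single l1-distance 2.
P = {p for s in (1, -1) for p in permutations((s, 0, 0))}
assert len(P) == 6
dists = {sum(abs(a - b) for a, b in zip(p, q))
         for p, q in combinations(P, 2)}
assert dists == {2}
```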
\begin{proof}[Proof of Theorem \ref{singdist}] Suppose $P\subseteq \R^3$ determines a single $\ell^1$-distance $\lambda$, and choose a point $c\in P$ that minimizes the $z$-coordinate. By translating and dilating, we can assume without loss of generality that $c=(0,0,0)$ and $\lambda=2$, and hence the remaining elements of $P$ are all contained in the closed upper $\ell^1$-hemisphere $H$ of radius $2$ centered at $(0,0,0)$. We note that the southernmost point of $\Lambda_3(1)$ is $(0,0,-1)$, so our end goal in this proof is to show that $|P|<6$ unless $P$ is $\Lambda_3(1)$ shifted up by $1$.
\noindent We consider the different ways that $P$ can be concentrated on the faces of $H$. To this end, we define $H_{++}=\{(x,y,z)\in H: x,y\geq 0\}$ and $H_{+-}=\{(x,y,z)\in H: x\geq 0, y\leq 0\}$, with analogous definitions for $H_{-+}$ and $H_{--}$. We refer to the pair $H_{++}$, $H_{--}$ as \textit{opposite} faces, and likewise for
$H_{+-}$, $H_{-+}$. The three lemmas proven at the beginning of this section allow us to make the following assertions:
\begin{enumerate}[(i)] \item \label{oppitem} For any pair of distinct points $U=(x_1,y_1,z_1),V=(x_2,y_2,z_2)\in P\cap H$, with $U$ and $V$ lying on opposite faces, we have by Lemma \ref{oppface} that $\min\{z_1,z_2\}=1$. \\
\item \label{sameitem} For any pair of distinct points $U=(x_1,y_1,z_1),V=(x_2,y_2,z_2)\in P\cap H$, with $U$ and $V$ lying on the same face, we have by Lemma \ref{face} that $\max\{ |x_1-x_2|,|y_1-y_2|,|z_1-z_2| \}=1$. \\
\item \label{nitem} For distinct points $U=(x_1,y_1,z_1),V=(x_2,y_2,z_2)\in P\cap H$, with $U\in H_{++}$ and $V\in H_{-+}$, we have by Lemma \ref{neighbor} that $x_1\leq 1$. Similarly, by permuting coordinates, if $U\in H_{++}$ and $V\in H_{+-}$, then $y_1\leq 1$.
\end{enumerate}
\noindent If $|P|\geq 6$, then at least five points of $P$ lie on $H$, and in particular the sizes of the four intersections of $P$ with the respective faces of $H$ must add to at least five. Therefore, every possible arrangement of $P\cap H$ includes either three points on a single face, or two points on one face and a point on the opposite face. Further, the proof is greatly simplified in the case that the ``north pole'' $(0,0,2)\in P$, so we divide the argument into the following three cases:
\begin{itemize}
\item{\textbf{Case 1:}} $(0,0,2)\in P$. \\
\item{\textbf{Case 2:}} $(0,0,2)\notin P$, and $P$ contains three points $U,V,W\in H$ such that $U$ and $V$ lie on the same face, and $W$ lies on the opposite face. \\
\item{\textbf{Case 3:}} $(0,0,2)\notin P$, and there exists a face of $H$ containing at least three points of $P$.
\end{itemize}
\noindent \textbf{Proof for Case 1:} Let $V=(0,0,2)$. For $Q=(x,y,z)\in (P\cap H)\setminus \{V\}$, we have by (\ref{oppitem}) that $z=1$ (note that $V$ lies on all four faces, so $Q$ and $V$ always lie on a pair of opposite faces). In particular, the elements of $P$ other than $(0,0,0)$ and $(0,0,2)$ take the form $(x,y,1)$ with $|x|+|y|=1$, and all pairs are separated by $\ell^1$-distance $2$. By Theorem \ref{opt}, there can be at most four such elements, and the only choice of four that works is $(1,0,1)$, $(-1,0,1)$, $(0,1,1)$, and $(0,-1,1)$. The resulting arrangement is $\Lambda_3(1)$ translated up by $1$, which establishes Theorem \ref{singdist} in this case.
\noindent \textbf{Proof for Case 2:} After reflecting about coordinate planes, we can assume that $U,V\in H_{++}$ and $W\in H_{--}$. Letting $U=(x_0,y_0,z_0)$, $V=(x_1,y_1,z_1)$, and $W=(x_2,y_2,z_2)$, we have by (\ref{oppitem}) and (\ref{sameitem}) that $$\max\{|x_0-x_1|,|y_0-y_1|,|z_0-z_1|\}=\min\{z_0,z_2\}=\min\{z_1,z_2\}=1.$$ In particular, all three $z$-coordinates are at least $1$, and since $(0,0,2)\notin P$, we have $|z_0-z_1|<1$. Therefore, we simultaneously know that $0\leq x_0,x_1,y_0,y_1 \leq 1$ and $\max\{|x_0-x_1|,|y_0-y_1|\}=1$.
\noindent This implies that (after potentially relabeling) either $U=(1,0,1)$ and $V=(0,y,2-y)$ for some $0<y\leq 1$ or $U=(x,0,2-x)$ for some $0<x\leq 1$ and $V=(0,1,1)$. In either case, $U\in H_{++}\cap H_{+-}$, and $V\in H_{++}\cap H_{-+}$, so $P$ contains at least one element on every face of $H$. Therefore, by (\ref{oppitem}), all points of $P$ lying on $H$ have $z$-coordinate at least $1$. Further, by the same reasoning as above, there are at most two points of $P$ on each face, and the only way two points of $P$ can lie on the same face is if they lie on opposite sides of the boundary, as with $U$ and $V$. In particular, at most four points of $P$ lie on $H$, and hence $P$ contains at most five points in total.
\noindent \textbf{Proof for Case 3:} This case gets a bit stickier, because, as some trial and error reveals, there are a variety of possible arrangements of three points on a single face of $H$ that are all separated by $\ell^1$-distance $2$.
\noindent Focusing on $H_{++}$ for the sake of exposition, we see that our desired configuration of $\{(1,0,1), (0,1,1), (0,0,2)\}$ is merely one member of a family of arrangements obtained from the following process:
\begin{itemize} \item Choose $x_0,y_0,z_0\geq 0$ with $x_0+y_0+z_0\leq 1$, and let $\alpha=1-(x_0+y_0+z_0)$ \\ \item Starting from $(x_0,y_0,z_0)$, construct a point by adding $1$ to one coordinate and $\alpha$ to another coordinate (so the coordinates add to $2$), then produce two additional points in a similar way by rotating the original choice of coordinates. For example, the initial choice of $U=(x_0+1,y_0+\alpha,z_0)$ uniquely determines the two additional points $V=(x_0,y_0+1,z_0+\alpha)$ and $W=(x_0+\alpha,y_0,z_0+1)$. All of these points lie on $H_{++}$, and by (\ref{sameitem}) they are all separated by $\ell^1$-distance $2$. In fact, the only other possible set of three points yielded by this process (up to labeling) is $U=(x_0+1,y_0,z_0+\alpha)$, $V=(x_0+\alpha, y_0+1, z_0)$, and $W=(x_0,y_0+\alpha, z_0+1)$. For additional clarity, a specific example is $x_0=0.1$, $y_0=0.3$, $z_0=0.4$, hence $\alpha=0.2$, which could yield the three-point arrangements $\{(1.1,0.5,0.4),(0.1,1.3,0.6), (0.3,0.3,1.4)\}$ or $\{(1.1,0.3,0.6),(0.3,1.3,0.4),(0.1,0.5,1.4)\}$.
\end{itemize}
\noindent We hope to demystify the situation by arguing that the arrangements discussed above are in fact the only possible arrangements. To this end, after reflections, we can assume $P$ contains three points $U,V,W\in H_{++}$, and we settle Case 3 with the following steps:
\begin{itemize} \item \textbf{Step 1:} Show that $U,V,W$ take the form discussed above. In particular, after specifying the minimum values of each coordinate and a single point, the second and third points are uniquely determined, hence there cannot be a fourth point in $P\cap H_{++}$. \\
\item \textbf{Step 2:} Show that $P$ can contain at most one point in $(H_{+-}\cup H_{-+})\setminus H_{++}$ before necessarily reducing to Case 2. This means that any hypothetical fifth point of $P\cap H$ necessarily lies on $H_{--}$, which itself reduces the argument back to Case 2.\\
\end{itemize}
\noindent \textbf{Step 1:} Let $x_0$, $y_0$, and $z_0$ be the minimum $x$, $y$, and $z$-coordinates, respectively, attained by $U$, $V$, and $W$. In what follows, we repeatedly appeal to (\ref{sameitem}), which tells us that for every pair of points in $\{U,V,W\}$, the maximum coordinate distance is exactly $1$. In particular, the maximum $x$, $y$, and $z$-coordinates attained by $U$, $V$, and $W$ are at most $x_0+1$, $y_0+1$, and $z_0+1$, respectively, and we begin by arguing that this inequality must be equality in all three coordinates.
\noindent Suppose that this inequality is strict in at least one coordinate. By permuting coordinates and relabeling points we may assume that $U=(x_0,y,z)$, and neither of $V$ and $W$ has $x$-coordinate $x_0+1$. Therefore, the maximum coordinate distance of $1$ required by (\ref{sameitem}) must occur in either the $y$ or $z$-coordinates, and since $x_0$ is the minimum $x$-coordinate, $V$ and $W$ must both take one of the following forms: $(x_0+\alpha,y-1, z+(1-\alpha))$ for some $0\leq\alpha< 1$, or $(x_0+\beta, y+(1-\beta),z-1)$ for some $0\leq \beta <1$. However, no combination of these choices for $V$ and $W$ has a maximum coordinate distance of $1$ from each other, so this arrangement is impossible. Therefore, all the maxima $x_0+1$, $y_0+1$, and $z_0+1$ are indeed achieved. For the remainder of the proof, we will refer to the respective coordinate values $x_0$, $y_0$, and $z_0$ as \textit{minimum coordinates}, and we will similarly refer to the respective coordinate values $x_0+1$, $y_0+1$, $z_0+1$ as \textit{maximum coordinates}. We complete Step 1 by considering the following three subcases.
\begin{itemize} \item \textbf{Subcase A:} Two maximum coordinates appear simultaneously in a single point. \\
\noindent Since all the points have $\ell^1$-norm $2$, this subcase necessitates that $x_0=y_0=z_0=0$, and we assume without loss of generality that $U=(1,1,0)$. Since the minimum $x$ and $y$ coordinates of $0$ must be attained, $\{V,W\}$ contains points of the form $(x,0,2-x)$ and $(0,y,2-y)$, respectively, for some $0\leq x,y\leq 1$. However, since the maximum $z$-coordinate is $1$, the only admissible choices are $x=y=1$, hence the three points are $(1,1,0)$, $(0,1,1)$, and $(1,0,1)$, which take the required form with $\alpha=1$. \\
\item \textbf{Subcase B:} Two minimum coordinates appear simultaneously in a single point.\\
\noindent Assume without loss of generality that $U=(x_0,y_0,z)$. Since $x_0$ and $y_0$ are minimum coordinates, each of $V$ and $W$ must take the form $(x_0+\alpha, y_0+\beta, z-(\alpha+\beta))$ for some $\alpha,\beta\geq 0$, and by (\ref{sameitem}) we must have $\alpha+\beta=1$. Further, since the $z$-coordinate of both $V$ and $W$ is $z-1$ (which is hence the minimum coordinate $z_0$), the maximum coordinate distance of $1$ must occur in the first two coordinates, meaning that $\{V,W\}=\{(x_0+1,y_0,z_0),(x_0,y_0+1,z_0)\}$. In particular, the arrangement takes the required form with $\alpha=0$. \\
\item \textbf{Subcase C:} Exactly one minimum coordinate and one maximum coordinate occurs in each point.\\
\noindent After permuting coordinates and relabeling points we assume $U=(x_0+1,y_0+\alpha,z_0)$ where $0<\alpha=1-(x_0+y_0+z_0)<1$. In order to meet the subcase conditions, have a maximum coordinate distance of $1$ from $U$, and have $\ell^1$-norm $2$, the options for $V$ and $W$ are $(x_0, y_0+1, z_0+\alpha)$, $(x_0, y_0+\alpha, z_0+1)$, and $(x_0+\alpha, y_0, z_0+1)$. Of these three possibilities, there is only one pair that are separated by $\ell^1$-distance $2$ from each other, hence $\{V,W\}=\{(x_0, y_0+1, z_0+\alpha),(x_0+\alpha, y_0, z_0+1)\}$, as required.
\end{itemize}
\noindent \textbf{Step 2:} Suppose $P$ contains a point $Q\in H_{-+}\setminus H_{++}$ (the argument is completely analogous for $Q\in H_{+-}\setminus H_{++}$). By (\ref{nitem}), in order for $Q$ to be separated from $U=(x_0+1,y_0+\alpha, z_0)$ by $\ell^1$-distance $2$, we must have $x_0+1\leq 1$, and hence $x_0=0$. In particular, $V=(0,y_0+1, z_0+\alpha) \in P \cap (H_{-+}\cap H_{++})$, so $P$ contains at least two points on $H_{-+}$. This means that, in order to avoid reducing to Case 2, $P$ cannot contain any elements of $H_{+-}$.
\noindent If instead $P$ contains a second point $R\in H_{-+}\setminus H_{++}$, hence a third point in $H_{-+}$, then we fall back to our previous analysis of three points on a single face, adapted by taking negatives of all $x$-coordinates. In particular, because $V$ has $x$-coordinate $0$, which minimizes the $x$-coordinate in absolute value among the points in $P\cap H_{-+}$, either $Q$ or $R$ must maximize the $x$-coordinate in absolute value and have $x$-coordinate $-1$. Assuming $Q$ has $x$-coordinate $-1$, in order for $Q$ to be separated from $U=(1,y_0+\alpha,z_0)$ by $\ell^1$-distance $2$, we must have $Q=(-1,y_0+\alpha,z_0)$. In order for $\{V,Q,R\}$ to meet the required form for three points of $P$ on $H_{-+}$ established in Step 1, we must have $R=(-\alpha,y_0,z_0+1)$. However, in this case we see that $\norm{U-R}_1=2+2\alpha$, and since every $\ell^1$-distance determined by $P$ equals $2$, this forces $\alpha=0$, which contradicts the assumption that $R\notin H_{++}$.
\end{proof}
\section{Conditional Results in Higher Dimensions} \label{hdsec}
In the remainder of our discussion, we use the terms \textit{$\ell^1$-sphere} and \textit{$\ell^1$-ball} as before, defined analogously to regular spheres and balls in $\R^d$, with the usual distance replaced by $\ell^1$-distance. In an effort to establish results in higher dimensions, we make the following observations, heavily inspired by our journey thus far:
\begin{enumerate}[(a)] \item \label{capt} As noted at the beginning of Section \ref{3dsec}, our proof of Theorem \ref{opt} does not naturally generalize to higher dimensions, because in dimension $d\geq 3$, it is not necessarily the case that if the largest $\ell^1$-distance determined by a finite set $P\subseteq \R^d$ is $\lambda$, then $P$ is contained in a closed $\ell^1$-ball of diameter $\lambda$. However, the argument in Lemma \ref{remove} does generalize to all dimensions: the distance $\lambda$ can be removed from an $\ell^1$-ball of diameter $\lambda$ by removing the closed upper $\ell^1$-hemisphere. In particular, if we somehow could capture our set inside such a ball, then by mimicking the proof of Theorem \ref{opt}, the problem is reduced to determining maximal configurations of points arranged on single closed upper $\ell^1$-hemisphere, which would then facilitate an induction on the number of distinct distances. \\
\item \label{ballitem} Suppose $P \subseteq \R^d$ is a finite set determining at most $k$ distinct $\ell^1$-distances, with largest $\ell^1$-distance $\lambda$. By translating and scaling, we can assume that $\lambda=2k$ and the ``southernmost point'' of $P$, minimizing the $x_d$-coordinate, is $-ke_d$, where $e_d$ is the $d$-th standard basis vector. An enticing observation, particularly in juxtaposition with (\ref{capt}), is the following: if $ke_d$ is also in $P$, then, since $2k$ is the largest $\ell^1$-distance, $P$ is contained in the intersection of the closed $\ell^1$-ball of radius $2k$ centered at $-ke_d$ and the closed $\ell^1$-ball of radius $2k$ centered at $ke_d$, which is conveniently the closed $\ell^1$-ball of radius $k$ centered at the origin. \\
\item \label{hyper} Inspired by the simplicity of Case 1 in the proof of Theorem \ref{singdist}, we see that if $U$ lies on an upper $\ell^1$-hemisphere $H\subseteq \R^d$, then the $\ell^1$-distance between $U$ and the ``north pole'' of $H$ is determined entirely by the $x_d$-coordinate of $U$. More specifically, if $H$ is the closed upper $\ell^1$-hemisphere of radius $k$ centered at the origin and $U=(x_1,\dots,x_d)\in H$, then $$\norm{U-ke_d}_1=|x_1|+\cdots+|x_{d-1}|+k-x_d=2(k-x_d). $$ In particular, if $ke_d\in P$ and $P$ determines only $k$ distinct $\ell^1$-distances $\{\lambda_i\}_{i=1}^k$, then the points of $(P\cap H)\setminus \{ke_d\}$ are restricted to the hyperplanes $\{x_d=c_i\}$ for $1\leq i \leq k$, where $c_i=k-\lambda_i/2$. Further, the intersection of $H$ with the hyperplane $\{x_d=c_i\}$ is $$\{(x_1,\dots,x_{d-1},c_i): |x_1|+\cdots+|x_{d-1}|=k-c_i\},$$ which is a copy of the $\ell^1$-sphere of radius $k-c_i$ centered at the origin in $\R^{d-1}$. This would allow us to analyze $P\cap H$ by inducting on dimension, analogous to the invocation of Theorem \ref{opt} during Case 1 in the proof of Theorem \ref{singdist}.
\end{enumerate}
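The distance formula in (\ref{hyper}) is also easy to confirm computationally; the following sketch checks $\norm{U-ke_d}_1=2(k-x_d)$ for every lattice point $U$ on an upper $\ell^1$-hemisphere (the parameters $d=3$ and $k=4$ are arbitrary choices).

```python
from itertools import product

# Check the distance formula from observation (c): for every lattice point
# U on the closed upper l1-hemisphere of radius k about the origin,
#   ||U - k*e_d||_1 = 2 * (k - x_d).
d, k = 3, 4  # arbitrary small parameters
H = [u for u in product(range(-k, k + 1), repeat=d)
     if sum(map(abs, u)) == k and u[-1] >= 0]
north = (0,) * (d - 1) + (k,)
for u in H:
    assert sum(abs(a - b) for a, b in zip(u, north)) == 2 * (k - u[-1])
```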
\noindent These three items combine to a clear aspirational reality: given a finite set $P\subseteq \R^d$ that determines at most $k$ distinct $\ell^1$-distances, the largest of which is $\lambda$, letting $H$ denote the closed upper $\ell^1$-hemisphere of radius $\lambda$ centered at the ``southernmost'' point of $P$, we could fully adapt the proof of Theorem \ref{opt} and induct on both $d$ and $k$, if only we could assume that the ``north pole'' of $H$ is also in $P$. If we were considering the usual Euclidean distance, this would be no obstruction at all, as we could rotate our set and assume without loss of generality that the largest distance $\lambda$ occurs parallel to the $x_d$-axis. However, since $\ell^1$-distance is not invariant under rotation, we require an additional assumption to establish a conditional version of Conjecture \ref{optcon}. The following definition, conjecture, and theorem fully formalize this conditional result, after which we conclude our discussion.
\begin{definition} Given $P\subseteq \R^d$ and an $\ell^1$-distance $\lambda>0$, we say that $\lambda$ \textit{occurs in an axis-parallel direction} if there exists $x\in P$ and $1\leq i \leq d$ such that $x+\lambda e_i\in P$, where $e_i$ is the $i$-th standard basis vector. Further, if $P$ is bounded, we say that $P$ is \textit{axis-parallel} if the largest $\ell^1$-distance determined by $P$ occurs in an axis-parallel direction.
\end{definition}
\begin{conjecture} \label{axisp} Suppose $d,k\in \mathbb{N}$. If $P$ is of maximal size amongst subsets of $\R^d$ determining at most $k$ distinct $\ell^1$-distances, then $P$ is axis-parallel. The same holds within the class of sets contained in an $\ell^1$-sphere in $\R^d$.
\end{conjecture}
\begin{theorem}\label{condinc} Conjecture \ref{axisp} implies Conjecture \ref{optcon}.
\end{theorem}
\begin{proof} We proceed via two inductions, one on the dimension $d$ and another on the number of distinct $\ell^1$-distances $k$. We streamline the argument by defining the following propositions for each $d\in \mathbb{N}$ and each nonnegative integer $k$:
\begin{itemize} \item Opt$(d,k)$: $\Lambda_d(k)$ is the unique set, up to $\ell^1$-similarity, of maximal size amongst subsets of $\R^d$ determining at most $k$ distinct $\ell^1$-distances. Conjecture \ref{optcon} is precisely the statement that Opt$(d,k)$ holds for all $d,k\in \mathbb{N}$. \\
\item S-Opt$(d,k)$: $\Lambda_d(k)\setminus \Lambda_d(k-2)$ is the unique set, up to $\ell^1$-similarity, of maximal size amongst sets contained in an $\ell^1$-sphere in $\R^d$ determining at most $k$ distinct $\ell^1$-distances. For $k=0$ or $1$, we take the convention that $\Lambda_d(-1)=\Lambda_d(-2)=\emptyset$.\\
\item H-Opt$(d,k)$: Let $H$ denote the closed upper $\ell^1$-hemisphere of radius $k$ centered at the origin in $\R^d$. If $ke_d\in E\subseteq H$ and $E$ determines at most $k$ distinct $\ell^1$-distances, the largest of which is $2k$, then $|E|\leq |\Lambda_d(k)\cap H|$, and $|E|=|\Lambda_d(k)\cap H|$ if and only if $E=\Lambda_d(k)\cap H$.\\
\end{itemize}
\noindent For the necessary base cases, we note that S-Opt$(1,k)$ and H-Opt$(1,k)$ trivially hold for all $k\in \mathbb{N}$ as $\ell^1$-spheres and $\ell^1$-hemispheres in $\R$ contain just two points and one point, respectively. Also, Opt$(d,0)$ holds trivially for all $d\in \mathbb{N}$ because a set determining no $\ell^1$-distances contains at most a single point. Under the assumption that Conjecture \ref{axisp} holds, we verify Conjecture \ref{optcon} by establishing the following implications:
\begin{enumerate} \item \label{SH} S-Opt$(d-1,k)$ for all $k\in \mathbb{N} \implies$ H-Opt$(d,k)$ and S-Opt$(d,k)$ for all $k\in \mathbb{N}$, so S-Opt$(d,k)$ and H-Opt$(d,k)$ hold for all $d,k\in \mathbb{N}$. \\
\item \label{OI} Opt$(d,k-1)$ and H-Opt$(d,k) \implies$ Opt$(d,k)$, so Opt$(d,k)$ holds for all $d,k\in \mathbb{N}$, as required.\\
\end{enumerate}
\noindent \textbf{Proof of (\ref{OI}):} Fix $d,k\in \mathbb{N}$, and suppose Opt$(d,k-1)$ and H-Opt$(d,k)$ hold. Suppose $P\subseteq \R^d$ determines at most $k$ distinct $\ell^1$-distances, and has maximal size amongst sets with this property. By scaling we can assume that the largest $\ell^1$-distance determined by $P$ is $2k$, which by Conjecture \ref{axisp} we know occurs in an axis-parallel direction. After permuting coordinates and translating, we can assume that $-ke_d,ke_d\in P$. As noted in (\ref{ballitem}), this implies that $P$ is contained in the closed $\ell^1$-ball of radius $k$ centered at the origin. To verify this, suppose $U=(x_1,\dots,x_d)\in P$ with $\norm{U}_1 >k$. If $x_d\geq0$, then $\norm{U-(-ke_d)}_1=\norm{U}_1+k>2k$, while if $x_d\leq 0$, the same holds for the $\ell^1$-distance between $U$ and $ke_d$, contradicting the fact that $2k$ is the largest $\ell^1$-distance determined by $P$.
\noindent As noted in (\ref{capt}), the $\ell^1$-distance $2k$ can be eliminated from $P$ by removing the points in $P\cap H$, where $H$ is the closed upper $\ell^1$-hemisphere of radius $k$ centered at the origin. This is because, within a closed $\ell^1$-ball of radius $k$, the $\ell^1$-distance $2k$ only occurs between pairs of points on opposite faces of the boundary. By H-Opt$(d,k)$, we know that $|P\cap H|\leq |\Lambda_d(k)\cap H|$, and further $|P\cap H|= |\Lambda_d(k)\cap H|$ if and only if $P\cap H=\Lambda_d(k)\cap H$, in which case the $\ell^1$-distances determined by $P$ are $2,4,\dots,2k$.
\noindent Because the $\ell^1$-distance $2k$ does not occur in $P\setminus H$, we have that $P\setminus H$ determines at most $k-1$ distinct $\ell^1$-distances. By Opt$(d,k-1)$, we know that $|P\setminus H|\leq |\Lambda_d(k-1)|$, and further $|P\setminus H|= |\Lambda_d(k-1)|$ if and only if $P\setminus H$ is $\ell^1$-similar to $\Lambda_d(k-1)$. In order for both $P\cap H$ and $P\setminus H$ to attain their maximum possible sizes, the $\ell^1$-distances determined by $P\setminus H$ must be $2,4,\dots,2k-2$. In this case, since an $\ell^1$-similar copy of $\Lambda_d(k-1)$ is uniquely determined by its ``south pole'' and its largest distance, we know that $P\setminus H$ must be $\Lambda_d(k-1)$ shifted down by $1$. In other words, \begin{align*} P\setminus H&= \{(x_1,\dots,x_{d}-1)\in \mathbb{Z}^d: |x_1|+\cdots+|x_d|\leq k-1, \ x_1+\dots+x_d\equiv k-1 \ (\text{mod }2)\} \\ &=\{(x_1,\dots,x_{d})\in \mathbb{Z}^d: |x_1|+\cdots+|x_d+1|\leq k-1, \ x_1+\dots+x_d\equiv k \ (\text{mod }2)\}.
\end{align*} The latter description ensures that $P\setminus H \subseteq \Lambda_d(k) \setminus H$, and conversely, if $U=(x_1,\dots,x_d) \in \Lambda_d(k)\setminus H$, then either $x_d<0$ or $\norm{U}_1\leq k-2$. In either case $|x_1|+\cdots+|x_d+1|\leq k-1$, and hence $U\in P\setminus H$. Bringing everything together, we have that if $P\cap H$ and $P\setminus H$ both attain their maximum possible size, then $P\cap H= \Lambda_d(k)\cap H$ and $P\setminus H=\Lambda_d(k)\setminus H$, hence $P=\Lambda_d(k)$, so Opt$(d,k)$ holds.
\noindent \textbf{Proof of (\ref{SH}):} Fix $d\geq 2$, suppose S-Opt$(d-1,k)$ holds for all $k\in \mathbb{N}$, and fix $k\in \mathbb{N}$. Suppose $P\subseteq \R^d$ is contained in the $\ell^1$-sphere $S$ of radius $k$ centered at the origin, and that $P$ has maximal size amongst all such sets determining at most $k$ distinct $\ell^1$-distances. To establish S-Opt$(d,k)$, we must show that $P=\Lambda_d(k)\setminus \Lambda_d(k-2)$. Thanks to our inductive hypothesis, we can assume $P$ determines exactly $k$ distinct $\ell^1$-distances, not fewer, and we denote those $\ell^1$-distances by $\lambda_1<\cdots<\lambda_k$. By the $\ell^1$-sphere component of Conjecture \ref{axisp}, we know that $\lambda_k$ occurs in an axis-parallel direction. By permuting coordinates, we assume that $\lambda_k$ occurs in the last coordinate direction, in other words $(x_1,\dots,x_d),(x_1,\dots,x_d+\lambda_k)\in P$ for some $x_1,\dots,x_d\in \R$ with $|x_1|+\cdots+|x_d|=|x_1|+\cdots+|x_d+\lambda_k|=k$, which in particular forces $x_d=-\lambda_k/2$ and $|x_1|+\cdots+|x_{d-1}|=k-\lambda_k/2$. This transformation is allowable because our end goal, $\Lambda_d(k)\setminus \Lambda_d(k-2)$, is invariant under coordinate permutation.
\noindent We argue (informally for the moment) that the only reasonable choice is $\lambda_k=2k$ and $x_1=\cdots=x_{d-1}=0$, meaning $ke_d,-ke_d\in P$. This is because, since $\lambda_k$ is the largest $\ell^1$-distance, $P$ is contained in the intersection of the closed $\ell^1$-balls of radius $\lambda_k$ centered at $(x_1,\dots,x_{d-1},-\lambda_k/2)$ and $(x_1,\dots,x_{d-1},\lambda_k/2)$, respectively, and this intersection is the $\ell^1$-ball of radius $\lambda_k/2$ centered at $(x_1,\dots,x_{d-1},0)$. However, $P$ is also contained in $S$, so if it is not the case that $\lambda_k=2k$, then $P$ would in fact be contained in the intersection of an $\ell^1$-sphere with a closed $\ell^1$-ball of a smaller radius, which is at most a closed $\ell^1$-hemisphere. The idea that a maximal subset of an $\ell^1$-sphere determining at most $k$ distinct $\ell^1$-distances could actually be contained in a closed $\ell^1$-hemisphere is intuitively suspect, and we return to this issue near the end of the proof. For now, we assume $-ke_d,ke_d\in P$.
\noindent We let $H$ denote the closed upper $\ell^1$-hemisphere of $S$, and we establish H-Opt$(d,k)$ along the way. As discussed in (\ref{hyper}), all the points of $(P\cap H)\setminus \{ke_d\}$ have $x_d$-coordinates in the list $c_1> c_2 > \cdots > c_k$, where $c_i=k-\lambda_i/2$. For each $c_i$, the points of $H$ with $x_d$-coordinate equal to $c_i$ take the form $(x_1,\dots,x_{d-1},c_i)$ where $|x_1|+\cdots+|x_{d-1}| = k-c_i$, and we refer to the set of such points as $S_i$. With regard to $\ell^1$-distances, $S_i$ is equivalent to an $\ell^1$-sphere in $\R^{d-1}$, centered at the origin with radius $k-c_i$. All $\ell^1$-distances determined by $P\cap S_i$ are at most $2(k-c_i)=\lambda_i$, so $P\cap S_i$ determines at most $i$ distinct $\ell^1$-distances. By our inductive hypothesis, $|P\cap S_i| \leq |\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|$, with equality holding if and only if the projection of $P\cap S_i$ onto the first $d-1$ coordinates is $\ell^1$-similar to $\Lambda_{d-1}(i)\setminus \Lambda_{d-1}(i-2)$, so in particular $\lambda_1,\dots,\lambda_i$ form an arithmetic progression. Since $k-c_i=\lambda_i/2$ and $\lambda_k=2k$, this equality holds for all $1\leq i \leq k$ if and only if $P\cap S_i=\{(x,k-i): x\in \Lambda_{d-1}(i)\setminus \Lambda_{d-1}(i-2)\}$ for all $1\leq i \leq k$. Here we note that if it were not the case that $ke_d,-ke_d\in P$ as previously assumed, then $P\cap S_i$ would be, at most, equivalent to an $\ell^1$-hemisphere in $\R^{d-1}$, in which case our inductive hypothesis would prohibit it from having $|\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|$ elements.
\noindent In summary, \begin{equation}\label{hemiub}|P\cap H| \leq \sum_{i=0}^{k} |\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|,\end{equation} taking $\Lambda_{d-1}(-1)$ and $\Lambda_{d-1}(-2)$ to be empty, and equality holds if and only if $$P\cap H = \bigcup_{i=0}^k\left\{(x,k-i): x\in \Lambda_{d-1}(i)\setminus \Lambda_{d-1}(i-2)\right\}= \Lambda_d(k)\cap H,$$ which establishes H-Opt$(d,k)$.
\noindent Further, $P\cap H$ can contain at most $\sum_{i=0}^{k-1} |\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|$ points with $x_d>0$. Letting $H'$ denote the closed lower $\ell^1$-hemisphere of $S$, we employ the identical reasoning as above to yield the same upper bound (\ref{hemiub}) on $|P\cap H'|$, with equality holding if and only if $$P\cap H' = \bigcup_{i=0}^{k}\{(x,i-k): x\in \Lambda_{d-1}(i)\setminus \Lambda_{d-1}(i-2)\}=\Lambda_d(k)\cap H'.$$ Further, $P\cap H'$ can contain at most $\sum_{i=0}^{k-1} |\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|$ points with $x_d<0$. Putting all this together, we have \begin{align*}|P|&=|P\cap (H\setminus H')|+|P\cap (H' \setminus H)|+|P\cap H\cap H'| \\&\leq 2\left(\sum_{i=0}^{k-1} |\Lambda_{d-1}(i)|-|\Lambda_{d-1}(i-2)|\right)+\left(|\Lambda_{d-1}(k)|-|\Lambda_{d-1}(k-2)|\right), \end{align*} and equality holds if and only if $$ P= \bigcup_{i=-k}^k\left\{(x,i): x\in \Lambda_{d-1}(k-|i|)\setminus \Lambda_{d-1}(k-|i|-2)\right\} =\Lambda_d(k)\setminus \Lambda_d(k-2).$$ Therefore, S-Opt$(d,k)$ holds, and the induction on dimension is complete. \end{proof}
\noindent \textbf{Remark.} If one is specifically interested in dimension $d=3$, then, because we have fully resolved the problem in dimension $d=2$, no inductive hypothesis is needed for dimension, just for the number of $\ell^1$-distances. In other words, if Opt$(3,k-1)$ holds, then $\Lambda_3(k)$ is the unique set, up to $\ell^1$-similarity, of maximal size amongst axis-parallel subsets of $\R^3$ determining at most $k$ distinct $\ell^1$-distances. In particular, because Theorem \ref{singdist} tells us that Opt$(3,1)$ holds, we know that $\Lambda_3(2)$, which contains $19$ points and is pictured in Figure \ref{fig:sub2}, is uniquely optimal amongst axis-parallel sets determining only two $\ell^1$-distances. However, we cannot make the analogous claim for $\Lambda_3(3)$, because we cannot exclude the possibility of a non-axis-parallel set determining two $\ell^1$-distances that contains more than $19$ points (or that contains exactly $19$ points but is not $\ell^1$-similar to $\Lambda_3(2)$). This possibility disables our bridge from two $\ell^1$-distances to three, and exemplifies the need to assume Conjecture \ref{axisp} if we wish to glean additional information in dimension $d\geq 3$.
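The stated count is easy to verify by brute force. The sketch below is our own illustrative Python check, not part of the paper; it assumes that $\Lambda_d(k)$ denotes the set of integer points whose $\ell^1$-norm is at most $k$ and has the same parity as $k$, an assumption consistent with $\Lambda_d(k)\setminus \Lambda_d(k-2)$ lying on the $\ell^1$-sphere of radius $k$ and with the stated count of $19$ points.

```python
from itertools import combinations, product

def Lambda(d, k):
    # Assumed model of the paper's Lambda_d(k): integer points whose
    # l1-norm is at most k and has the same parity as k, so that
    # Lambda_d(k) \ Lambda_d(k-2) is the integer l1-sphere of radius k.
    pts = []
    for p in product(range(-k, k + 1), repeat=d):
        n = sum(abs(c) for c in p)
        if n <= k and (k - n) % 2 == 0:
            pts.append(p)
    return pts

L32 = Lambda(3, 2)
dists = {sum(abs(a - b) for a, b in zip(p, q)) for p, q in combinations(L32, 2)}
print(len(L32), sorted(dists))   # 19 [2, 4]
```

Under this model, $\Lambda_3(2)$ has exactly $19$ points and determines only the two $\ell^1$-distances $2$ and $4$, matching the remark.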
\noindent \textbf{Acknowledgements:} This research was initiated during the Summer 2019 Kinnaird Institute Research Experience at Millsaps College. All authors were supported during the summer by the Kinnaird Endowment, gifted to the Millsaps College Department of Mathematics. At the time of submission, all authors except Alex Rice were Millsaps College undergraduate students. The authors would like to thank Alex Iosevich for his helpful references, and Tomasz Tkocz for alerting us to previous research done in the $k=1$ case. Finally, the authors would like to thank the anonymous referee for their encouraging comments and helpful recommendations, particularly regarding Sections 5 and 6.
| {
"timestamp": "2020-05-26T02:08:51",
"yymm": "1911",
"arxiv_id": "1911.08067",
"language": "en",
"url": "https://arxiv.org/abs/1911.08067",
"abstract": "We address an analog of a problem introduced by Erdős and Fishburn, itself an inverse formulation of the famous Erdős distance problem, in which the usual Euclidean distance is replaced with the metric induced by the $\\ell^1$-norm, commonly referred to as the $\\textit{taxicab metric}$. Specifically, we investigate the following question: given $d,k\\in \\mathbb{N}$, what is the maximum size of a subset of $\\mathbb{R}^d$ that determines at most $k$ distinct taxicab distances, and can all such optimal arrangements be classified? We completely resolve the question in dimension $d=2$, as well as the $k=1$ case in dimension $d=3$, and we also provide a full resolution in the general case under an additional hypothesis.",
"subjects": "Combinatorics (math.CO); Metric Geometry (math.MG)",
"title": "Sets in $\\mathbb{R}^d$ determining $k$ taxicab distances"
} |
https://arxiv.org/abs/1901.06759 | Submultiplicativity of the numerical radius of commuting matrices of order two | Denote by $w(T)$ the numerical radius of a matrix $T$. An elementary proof is given to the fact that $w(AB) \leq w(A)w(B)$ for a pair of commuting matrices of order two, and a characterization is given for the matrix pairs that attain the equality. | \section{Introduction}
Let $M_n$ be the set of $n\times n$ matrices. The numerical range and numerical radius
of $A \in M_n$ are defined by
$$W(A) = \{x^*Ax: x \in {\mathbb C}^n, x^*x = 1\} \qquad \hbox{ and } \qquad
w(A) = \max\{|\mu|: \mu \in W(A)\},$$
respectively.
The numerical range and numerical radius are useful tools in studying matrices and
operators.
There are strong connections between the algebraic properties
of a matrix $A$ and the geometric properties of $W(A)$.
For example, $W(A) = \{\mu\}$ if and only if $A = \mu I$;
$W(A) \subseteq {\mathbb R}$ if and only if $A = A^*$; $W(A) \subseteq [0, \infty)$
if and only if $A$ is positive semi-definite.
The numerical radius is a norm on $M_n$, and
has been used in the analysis of basic iterative solution
methods \cite{Ax}. Researchers have
obtained interesting inequalities related to the numerical radius;
for example, see
\cite{G,H,Hol,HolS,HJ1}. We mention a few of them in the following.
Let $\|A\|$
be the operator norm of $A$.
It is known that
$$w(A) \le \|A\| \le 2w(A).$$
While the spectral norm is submultiplicative,
i.e., $\|AB\| \le \|A\| \|B\|$ for all $A, B \in M_n$,
the numerical radius is not.
In general,
$$w(AB) \le \xi w(A)w(B) \quad \hbox{ for all } A, B \in M_n$$
if and only if $\xi \ge 4$; e.g., see \cite{GW}.
Despite the fact that the numerical radius is not submultiplicative,
$$w(A^m) \le w(A)^m \qquad \hbox{ for all positive integers } m.$$
For a normal matrix $A\in M_n$, we have $w(A) = \|A\|$. Thus,
for a normal matrix $A$ and any $B \in M_n$,
$$w(AB) \le \|AB\| \le \|A\| \|B\| = w(A) \|B\| \le 2 w(A)w(B),$$
and also
$$w(BA) \le \|BA\| \le \|B\|\|A\| = \|B\| w(A) \le 2w(B)w(A).$$
In case $A, B\in M_n$ are normal matrices,
$$w(AB) \le \|AB\| \le \|A\| \|B\| = w(A) w(B).$$
Also, for any pairs of commuting matrices $A, B \in M_n$,
$$w(AB) \le 2w(A)w(B).$$
To see this, we may assume $w(A) = w(B) = 1$, and observe that
\begin{eqnarray*}
4w(AB) &=& w((A+B)^2 - (A-B)^2) \le w((A+B)^2) + w((A-B)^2) \\
&\le& w(A+B)^2 + w(A-B)^2 \le 8.
\end{eqnarray*}
The constant 2 is best (smallest) possible for matrices of order at least 4
because $w(AB) = 2w(A)w(B)$ if $A = E_{12} + E_{34}$ and $B = E_{13} + E_{24}$, where $ E_{ij} \in M_n$ has $1$ at the $(i,j)$ position and $0$ elsewhere; see \cite[Theorem 3.1]{GW}.
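As an illustrative numerical check (our own sketch, not part of the cited argument), one can evaluate the numerical radius through the standard characterization $w(T)=\max_\theta \lambda_{\max}\big(\tfrac12(e^{i\theta}T+e^{-i\theta}T^*)\big)$ and confirm that this commuting pair attains the constant $2$:

```python
import numpy as np

def numrad(T, m=720):
    # w(T) = max over theta of the largest eigenvalue of Re(e^{i theta} T)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * T + np.exp(-1j * t) * T.conj().T) / 2)[-1]
        for t in np.linspace(0, 2 * np.pi, m, endpoint=False)
    )

def E(i, j, n=4):
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

A, B = E(1, 2) + E(3, 4), E(1, 3) + E(2, 4)
assert np.allclose(A @ B, B @ A)           # the pair commutes (AB = BA = E_{14})
for T in (A, B, A @ B):                    # w(A) = w(B) = w(AB) = 1/2,
    assert abs(numrad(T) - 0.5) < 1e-8     # hence w(AB) = 2 w(A) w(B)
```

Here each of $w(A)$, $w(B)$, $w(AB)$ equals $1/2$, so the ratio $w(AB)/(w(A)w(B))$ is exactly $2$.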
In connection to the above discussion,
there has been interest in studying the best (smallest) constant
$\xi > 0$ such that
$$w(AB) \le \xi w(A)w(B)$$
for all commuting matrices $A, B \in M_n$ with $n \le 3$.
For $n = 2$, the best constant $\xi$ is one;
the existing proofs of the $2\times 2$ case depend on deep theory on analytic functions,
von Neumann inequality, and functional calculus
on operators with numerical radius equal to one, etc.; for example, see \cite{Hol,HolS}.
Researchers have been trying to find an elementary proof for this result in view of the fact
that the numerical range of $A \in M_2$ is well understood,
namely, $W(A)$ is an elliptical disk with the eigenvalues
$\lambda_1, \lambda_2$ as
foci and minor axis of length $\sqrt{({\mathrm tr}\, A^*A) - |\lambda_1|^2 - |\lambda_2|^2}$;
for example, see \cite{L,M} and \cite[Theorem 1.3.6]{HJ1}.
The purpose of this note is to provide such a proof.
Our analysis is based on elementary theory in convex analysis, co-ordinate
geometry, and inequalities.
Using our approach, we readily give a characterization of commuting pairs of matrices
$A, B \in M_2$ satisfying $w(AB) = w(A)w(B)$, which was done
in \cite[Theorem 4.1]{GW} using yet another deep result of Ando \cite{An} that a
matrix $A$ has numerical radius bounded by one if and only if $A = (I-Z)^{1/2}C(I+Z)^{1/2}$
for some contractions $C$ and $Z$, where $Z = Z^*$.
Here is our main result.
\begin{theorem} Let $A, B \in M_2$ be nonzero matrices such that $AB = BA$.
Then $w(AB) \le w(A)w(B)$. The equality holds if and only if one of the following holds.
\begin{itemize}
\item[{\rm (a)}] $A$ or $B$ is a scalar matrix, i.e. of the form $ \mu I_2$ for some
$\mu \in{\mathbb C}$.
\item[
{\rm (b)}] There is a unitary $U$ such that $U^*AU = {\rm diag}\,(a_1,a_2)$
and $U^*BU = {\rm diag}\,(b_1, b_2)$ with $|a_1| \ge |a_2|$ and $|b_1| \ge |b_2|$.
\end{itemize}
\end{theorem}
One can associate the conditions (a) and (b) in the theorem with the geometry of the numerical
range of $A$ and $B$ as follows.
Condition (a) means that $W(A)$ or $W(B)$ is a single point; condition (b) means that
$W(A)$, $W(B)$, $W(AB)$ are line segments with three sets of
end points,
$\{a_1, a_2\}, \{b_1, b_2\}, \{a_1b_1, a_2b_2\}$, respectively, such that
$|a_1|\ge |a_2|$ and $|b_1| \ge |b_2|$.
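For a quick empirical illustration of Theorem 1 (a hedged sketch of ours, not the authors' code), one can sample the numerical radius over directions $e^{i\theta}$, draw random $2\times 2$ matrices $A$, and take $B=aI+bA$, a polynomial in $A$ and hence commuting with $A$; the inequality then holds up to a small sampling tolerance:

```python
import numpy as np

def numrad(T, m=720):
    # sampled numerical radius: max over theta of lambda_max(Re(e^{i theta} T))
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * T + np.exp(-1j * t) * T.conj().T) / 2)[-1]
        for t in np.linspace(0, 2 * np.pi, m, endpoint=False)
    )

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    a, b = rng.standard_normal(2)
    B = a * np.eye(2) + b * A          # a polynomial in A, hence AB = BA
    assert numrad(A @ B) <= numrad(A) * numrad(B) * (1 + 1e-3)
```

The multiplicative slack $10^{-3}$ absorbs the discretization error of sampling $\theta$ at $720$ points, which matters near the equality cases of the theorem.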
\section{Proof of Theorem 1}
Let $A, B \in M_2$ be commuting matrices. We may replace
$(A,B)$ by $(A/w(A),B/w(B))$ and assume that $w(A) = w(B) = 1$.
We need to show that $w(AB)\le 1$.
Since $AB = BA$, there is a unitary matrix $U \in M_2$ such that
both $U^*AU$ and $U^*BU$ are in triangular form; for example, see
\cite[Theorem 2.3.3]{HJ2}.
We may replace $(A,B)$ by $(U^*AU,U^*BU)$ and assume that
$A = \begin{pmatrix} a_1&a_3\cr 0 & a_2\cr \end{pmatrix}$,
$B = \begin{pmatrix} b_1&b_3\cr 0 & b_2\cr \end{pmatrix}$ and $w(A) = w(B) = 1$.
The result is clear if $A$ or $B$ is normal. So, we assume that $a_3, b_3 \ne 0$.
Furthermore, comparing the $(1,2)$ entries on both sides of $AB = BA$, we see that
$\displaystyle\frac{a_1-a_2}{a_3}= \displaystyle\frac{b_1-b_2}{b_3}$.
Applying a diagonal unitary similarity to both $A$ and $B$, we may further assume that $\gamma =\displaystyle\frac{a_1-a_2}{a_3}\ge 0$. Let
$r=\displaystyle\frac{1}{\sqrt{\gamma^2+1}}$. We have $0<r\le 1$. Then
$A = z_1 I + s_1C$ and $B = z_2 I + s_2 C$ with
\bigskip
\centerline{
$z_1 = \displaystyle\frac{a_1+a_2}{2}, \quad
z_2 =\displaystyle\frac{b_1+b_2}{2} , \quad s_1=\displaystyle\frac{a_3}{2r}, \quad s_2 =\displaystyle\frac{b_3}{2r}$,
\quad and \quad
$C = \begin{pmatrix} \sqrt{1-r^2} & 2r \cr 0 & -\sqrt{1-r^2}\end{pmatrix}$.}
\medskip\noindent
Note that $W(C)$ is the elliptical disk with boundary
$$\{\cos\theta + i r\sin\theta: \theta \in [0, 2\pi]\};$$
see \cite{L} and \cite[Theorem 1.3.6]{HJ1}.
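This ellipse can be checked empirically. The following Python sketch (an illustration of ours, not part of the proof) samples points $x^*Cx$ of $W(C)$ for random unit vectors $x$ and verifies that they lie in the elliptical disk $\{u+iv: u^2+(v/r)^2\le 1\}$, and that the value $1$ on the real axis is attained.

```python
import numpy as np

rng = np.random.default_rng(2)
r = 0.6
C = np.array([[np.sqrt(1 - r**2), 2 * r],
              [0.0, -np.sqrt(1 - r**2)]])

for _ in range(2000):
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    x /= np.linalg.norm(x)
    z = x.conj() @ C @ x                         # a point of W(C)
    assert z.real**2 + (z.imag / r)**2 <= 1 + 1e-9

# the rightmost point of W(C) is lambda_max of the Hermitian part, here 1
assert np.isclose(np.linalg.eigvalsh((C + C.T) / 2)[-1], 1.0)
```

The check is consistent with the ellipse description: the foci $\pm\sqrt{1-r^2}$ are the eigenvalues of $C$, the semi-minor axis is $r$, and $w(C)=1$.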
Replacing $(A, B)$ with $(e^{it_1} A, e^{it_2 } B)$
for suitable $t_1, t_2\in [0,2\pi ]$, if necessary,
we may assume that
${\rm Re}\,z_1,\ {\rm Re}\,z_2 \ge 0 $ and $s_1,s_2$ are real.
Suppose $z_1 = \alpha_1 + i \alpha_2$ with $\alpha_1\ge 0$ and the boundary of $W(A)$ touches the unit
circle at the point $\cos\phi_1+i\sin\phi_1$ with $\phi_1 \in
[-\pi/2, \pi/2]$.
Then $W(A)$ has boundary
$$\{\alpha_1 +| s_1|\cos\theta + i (\alpha_2 + |s_1|r\sin\theta): \theta \in [0, 2\pi]\} .$$
\noindent
We {\bf claim} that the matrix $A$ is a convex combination of
$A_0 = e^{i\phi_1} I$ and another matrix $A_1$ of the form
$A_1 = i(1-r^2)\sin \phi_1I + \xi C$ for some $\xi \in {\mathbb R}$ such that $w(A_1) \le 1$.
To prove our claim,
we first determine $\theta_1 \in [-\pi/2, \pi/2]$ satisfying
$$\cos\phi_1+i\sin\phi_1=(\alpha_1 + |s_1| \cos\theta_1) + i (\alpha_2
+ |s_1| r\sin\theta_1).$$
Since the boundary of $W(A)$ touches the unit
circle at the point $\cos\phi_1+i\sin\phi_1$, using the parametric equation
\begin{equation}\label{para}
x+iy = (\alpha_1 + |s_1| \cos\theta) + i (\alpha_2 + |s_1| r\sin\theta),
\end{equation}
of the boundary of $W(A)$,
we see that the direction of the tangent
at the intersection point $\cos\phi_1+i\sin\phi_1$
is $- \sin \theta_1 + i r \cos \theta_1$, which agrees with $-\sin\phi_1+i \cos \phi_1 $, the
direction of the tangent line of the unit circle at the same point.
As a result, we have
$$(\cos \theta_1, \sin \theta_1)
= \displaystyle\frac{(\cos \phi_1, r\sin\phi_1)}{\sqrt{\cos^2\phi_1 + r^2\sin^2\phi_1}}.$$
Furthermore, since
$\cos\phi_1+i\sin\phi_1=(\alpha_1 + |s_1| \cos\theta_1)
+ i (\alpha_2 + |s_1| r\sin\theta_1)$,
we have
$$
\alpha_1=\cos\phi_1 - \displaystyle\frac{|s_1|\cos\phi_1}{\sqrt{\cos^2\phi_1 + r^2\sin^2\phi_1}}\ge 0 \quad
\hbox{ and } \quad
\alpha_2=\sin \phi_1-\displaystyle\frac{|s_1|r^2\sin\phi_1}{\sqrt{\cos^2\phi_1 + r^2\sin^2\phi_1}}.
$$
\noindent
\bf Assertion. \it If
$\hat s_1 = \sqrt{\cos^2\phi_1+r^2 \sin^2 \phi_1}$, then
$|s_1|\le \hat s_1$.
\rm
If $\cos \phi_1>0$, then
$\alpha_1=\(1 - \displaystyle\frac{|s_1| }{\hat s_1}\)\cos\phi_1\ge 0$, and hence
$|s_1|\le \hat s_1$.
If $\cos\phi_1=0$, then $\sin\phi_1=\pm 1$, $\hat s_1=r$ and
$(\alpha_1,\alpha_2) = (0,\sin\phi_1 (1-|s_1|r))$
so that the parametric equation of
the boundary of $W(A)$ in (\ref{para}) becomes
$$x+iy = |s_1| \cos\theta
+ i ( \sin\phi_1 (1-|s_1|r)\ + |s_1| r\sin\theta)\,.$$
Since $w(A)=1$ and $\sin\phi_1=\pm 1$, for all $\theta\in [0,2\pi)$, we have
\begin{eqnarray*}
0 &\le& 1 - \[(|s_1| \cos\theta)^2+(\sin\phi_1 (1-|s_1|r)
+ |s_1| r\sin\theta)^2 \]\\
&= &1- \[ |s_1|^2 (1-\sin^2\theta) +(\pm (1-|s_1|r)
+ |s_1| r\sin\theta)^2 \]\\
&= &1- \[ |s_1|^2 (1-(\pm 1\mp(1\mp\sin\theta))^2) +( 1-|s_1|r(1\mp
\sin\theta))^2 \]\\
&= &1- \[ |s_1|^2 ( 2(1\mp\sin\theta) - (1\mp\sin\theta)^2) + 1-2|s_1|r(1\mp
\sin\theta)+ |s_1|^2r^2(1\mp
\sin\theta)^2 \]\\
&= & 2|s_1|(r-|s_1|)(1\mp\sin\theta)+(1-r^2)|s_1|^2(1\mp
\sin\theta)^2 .
\end{eqnarray*}
Therefore, $(r-|s_1|)\ge 0$, which gives $|s_1|\le r=\hat s_1$.
\medskip
Now, we show that our {\bf claim} holds with
\begin{equation}\label{a0a1}
A_0=e^{i\phi_1}I \qquad \hbox{ and } \qquad A_1 = i(1-r^2)\sin\phi_1I+\nu_1 \hat s_1 C,
\end{equation}
where $\nu_1 =1$ if $s_1\ge 0$ and $\nu_1 =-1$ if $s_1< 0$.
Note that
$W(A_1)$ is the elliptical disk with boundary
$\{ \hat s_1 \cos \theta + i((1-r^2) \sin\phi_1+ \hat s_1 r\sin\theta): \theta \in [0, 2\pi)\}$,
and for every $\theta\in [0, 2\pi]$, we have
\begin{eqnarray*}
&&(\hat s_1\cos \theta)^2+((1-r^2)\sin\phi_1+\hat s_1r\sin \theta)^2\\
&=&\hat s_1^2(1-\sin^2 \theta)+(1-r^2)^2\sin^2\phi_1+\hat s_1^2r^2\sin^2 \theta+2\hat s_1r(1-r^2)\sin\phi_1\sin \theta\\
&=&\hat s_1^2 +(1-r^2)^2\sin^2\phi_1+(1-r^2)r^2\sin^2\phi_1 -(1-r^2)\(\hat s_1^2\sin^2 \theta-2\hat s_1r \sin\phi_1\sin \theta+r^2\sin^2\phi_1\)\\
&=&(\cos^2\phi_1+r^2 \sin^2 \phi_1) +(1-r^2)^2\sin^2\phi_1+(1-r^2)r^2\sin^2\phi_1-(1-r^2)(\hat s_1\sin \theta-r\sin\phi_1)^2
\\
&=&1-(1-r^2)(\hat s_1\sin \theta-r\sin\phi_1)^2\\
&\le& 1.
\end{eqnarray*}
Therefore, $w(A_1)\le 1$. By the Assertion, $|s_1| \le \hat s_1$.
Hence
$A=\(1-\displaystyle\frac{|s_1|}{\hat s_1}\)A_0+\displaystyle\frac{|s_1|}{\hat s_1} A_1$
is a convex combination of $ A_0$ and $A_1$.
\medskip
Similarly, if $W(B)$ touches the unit circle at $e^{i\phi_2}$
with $\phi_2\in[-\pi/2,\pi/2]$, then $B$ is a
convex combination of
\begin{equation}\label{b0b1}
B_0=e^{i\phi_2}I \qquad \hbox{ and } \qquad B_1 = i(1-r^2)\sin\phi_2I+\nu_2\hat s_2 C
\end{equation}
with $\hat s_2 = \sqrt{\cos^2\phi_2+r^2 \sin^2 \phi_2}$ and $\nu_2\in\{1,-1\}$.
Let $U = \begin{pmatrix}-r& \sqrt{1-r^2} \cr \sqrt{1-r^2}& r\end{pmatrix}$. Then
$U^*CU=-C$. If $\nu_2 = -1$, we may replace $(A,B)$ by $(U^*AU,U^*BU)$
so that $(\nu_1,\nu_2)$ will change to $(-\nu_1, -\nu_2)$. So, we may further
assume that $\nu_2=1$.
\medskip
By the above analysis,
$AB$ is a convex combination of $A_0B_0, A_0B_1, A_1B_0$ and $A_1B_1$. Since
$w(e^{it}T)=w(T)$ for all $t\in{\mathbb R}$ and $T\in M_n$,
the first three matrices have numerical radius 1.
We will prove that
\begin{equation}
\label{a1b1}
w(A_1 B_1) < 1.
\end{equation}
It will then follow that
$w(AB) \le 1$, where the equality holds only when $A=A_0$ or $B=B_0$.
\medskip
For simplicity of notation,
let $w_1=\sin\phi_1$ and $w_2=\sin\phi_2$. Then
\begin{equation}\label{hats}\hat s_i=\sqrt{1-(1-r^2)w_i^2}\ \mbox{ for }\ i=1,2.
\end{equation}
Recall from (\ref{a0a1}) and (\ref{b0b1}) that $A_1 = i(1-r^2)w_1 I +\nu_1 \hat s_1 C$ and
$B_1 = i(1-r^2)w_2 I + \hat s_2 C$ because $\nu_2=1$. Since $C^2 = (1-r^2)I_2$, we have
$$A_1B_1 = (1-r^2)(u I_2 + iv C),$$
where
$$
u = \nu_1 \hat s_1\hat s_2-w_1w_2(1-r^2)
\quad\hbox{ and } \quad v = w_1 \hat s_2
+\nu_1w_2\hat s_1.
$$
If $r=1$, then $A_1B_1 =0$. Assume that $0<r<1$.
We need to show that
$$
\frac{1}{1-r^2}w(A_1B_1) = w(uI+ivC) < \displaystyle\frac{1}{(1-r^2)}.
$$
Because $W(uI+ivC)$ is an elliptical disk with boundary
$\{u+iv(\cos\theta+ir\sin\theta): \theta \in [0, 2\pi]\}$,
it suffices to show that
$$f(\theta) = |u + iv(\cos\theta + i r\sin \theta)|^2
< \frac{1}{(1-r^2)^2}
\quad \hbox{ for all } \ \theta \in [0, 2\pi].$$
Note that
\begin{eqnarray*}
f(\theta) &=& (u-rv\sin\theta)^2 + (v\cos\theta)^2 \\
&=&u^2-2ruv\sin\theta+r^2v^2\sin^2\theta+v^2(1-\sin^2\theta) \\
&=&\displaystyle\frac{u^2}{1-r^2}+v^2-\(\sqrt{1-r^2}v\sin\theta+\displaystyle\frac{ru}{\sqrt{1-r^2}}\)^2\\
&\le&\displaystyle\frac{u^2}{1-r^2}+v^2\\
&=&\displaystyle\frac{1}{(1-r^2)}\[u^2+(1-r^2)v^2\]\\
&=&\displaystyle\frac{1}{(1-r^2)}\[ (\nu_1 \hat s_1\hat s_2-w_1w_2(1-r^2) )^2+(1-r^2)(w_1 \hat s_2+\nu_1w_2\hat s_1)^2 \]\\
&=&\displaystyle\frac{1}{(1-r^2)}\[ \hat s_1^2\hat s_2^2+w_1^2w_2^2(1-r^2)^2 +(1-r^2)
(w_1^2 \hat s_2^2+ w_2^2\hat s_1^2)\]\quad
\mbox{because }\nu_1=\pm 1 \\
&=&\displaystyle\frac{1}{(1-r^2)}\[ (\hat s_1^2+ (1-r^2) w_1^2)( \hat s_2^2 +(1-r^2) w_2^2)\]\\
&=&\displaystyle\frac{1}{(1-r^2)}\hskip 3.25in \ \mbox{ by (\ref{hats})} \\
&< &\displaystyle\frac{1}{(1-r^2)^2}\hskip 3.25in \ \mbox{because }0<r<1.
\end{eqnarray*}
Consequently, we have $w(A_1B_1)<1$ as asserted in (\ref{a1b1}).
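The key algebraic identity behind the chain above, $u^2+(1-r^2)v^2=\big(\hat s_1^2+(1-r^2)w_1^2\big)\big(\hat s_2^2+(1-r^2)w_2^2\big)=1$, is easy to sanity-check numerically. The sketch below (illustrative only, our own code) does so for random parameters, with $\hat s_i$ as in (\ref{hats}).

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    r = rng.uniform(0.05, 0.95)
    w1, w2 = rng.uniform(-1, 1, size=2)       # w_i = sin(phi_i)
    nu1 = rng.choice([-1.0, 1.0])
    s1h = np.sqrt(1 - (1 - r**2) * w1**2)     # \hat s_1 = sqrt(1-(1-r^2) w_1^2)
    s2h = np.sqrt(1 - (1 - r**2) * w2**2)     # \hat s_2
    u = nu1 * s1h * s2h - w1 * w2 * (1 - r**2)
    v = w1 * s2h + nu1 * w2 * s1h
    assert abs(u**2 + (1 - r**2) * v**2 - 1) < 1e-12
```

The cross terms cancel because $\nu_1^2=1$, exactly as in the displayed computation.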
Moreover, by the comment after (\ref{a1b1}), if
$w(AB) = w(A)w(B)$, then $A = A_0$ or $B = B_0$.
Conversely, if $A = A_0$ or $B = B_0$, then we clearly have $w(AB) = w(A)w(B)$.
The proof of the theorem is complete. \hfill $\Box$\medskip
\bigskip\noindent
{\bf Acknowledgment}
We would like to thank Professor Pei Yuan Wu, Professor
Hwa-Long Gau, and the referee for some helpful comments.
Li is an affiliate member of the Institute
for Quantum Computing, University of Waterloo, and is an
honorary professor of the
Shanghai University. His research was supported by USA
NSF grant DMS 1331021, Simons Foundation Grant 351047,
and NNSF of China Grant 11571220.
| {
"timestamp": "2019-03-01T02:02:57",
"yymm": "1901",
"arxiv_id": "1901.06759",
"language": "en",
"url": "https://arxiv.org/abs/1901.06759",
"abstract": "Denote by $w(T)$ the numerical radius of a matrix $T$. An elementary proof is given to the fact that $w(AB) \\leq w(A)w(B)$ for a pair of commuting matrices of order two, and characterization is given for the matrix pairs that attain the quality.",
"subjects": "Functional Analysis (math.FA)",
"title": "Submultiplicativity of the numerical radius of commuting matrices of order two"
} |
https://arxiv.org/abs/2010.01539 | A discussion on the approximate solutions of first order systems of non-linear ordinary equations | We develop a one step matrix method in order to obtain approximate solutions of first order systems and non-linear ordinary differential equations, reducible to first order systems. We find a sequence of such solutions that converge to the exact solution. We study the precision, in terms of the local error, of the method by applying it to different well known examples. The advantage of the method over others widely used lies on the simplicity of its implementation. | \section{Introduction}
This paper is intended as a contribution to methods for finding approximate solutions of nonlinear first order equations (or systems) with given initial values of the form
\begin{equation}\label{1}
{\mathbf y}'(t) = {\mathbf f}({\mathbf y}(t))\,,\qquad {\mathbf y}(t_0)={\mathbf y}_0\,,
\end{equation}
where the prime represents the first derivative with respect to the variable $t$. As many higher order ordinary differential equations, either linear or non-linear, may be written as a system of the form \eqref{1}, our method to obtain approximate solutions also applies to this kind of equation.
Our approach is based on a generalization of the one-step matrix method developed by Demidovich and other authors \cite{DEM,KOT,HSU} some time ago, valid for systems of the form $\mathbf y'(t)=A(t)\mathbf y(t)$. One clear presentation of this method appears in the textbook by Farkas \cite{FAR}, and it is interesting to compare it with some related procedures; see for instance \cite{NEP,NEP1}. It is quite important to remark that, while the Demidovich matrix method applies to linear equations (with variable coefficients), our matrix method is a generalization to non-linear systems. The advantage that our method may have in comparison with other one-step methods lies in its great algorithmic simplicity. In addition, our solutions have a reasonable level of accuracy in few steps, which is workable on a desktop computer. The method is quite easily programmable and is very suitable for use with the package Mathematica. It may also be seen as an alternative to Runge-Kutta and Taylor methods, due precisely to its simplicity and precision. The objective of the present study is to obtain approximate solutions of all kinds of systems described by non-linear differential equations (although we may also include linear systems with variable coefficients), including those arising in physics.
In the derivation of the present approach, our motivation was rooted in the practice of operational numerical calculus. Nevertheless, we are mainly focused on the mathematical analysis of our method rather than on a detailed analysis of the algorithm or CPU times. As in the case of the one-step Taylor polynomial method, ours shows great conceptual simplicity. As a consequence, our proposal for obtaining approximate solutions of non-linear systems can be implemented very easily.
The method for obtaining the approximate solutions is introduced in Section 2. There, we construct a sequence of approximate solutions, which are defined on a given interval of the real line. This sequence converges uniformly to the exact solution on the given interval, as is proven in Section 3, where we also discuss a question of order. It is important to remark that we do not impose any periodicity conditions, so that our results are valid both for periodic and for non-periodic solutions.
We have applied our method to various two- and three-dimensional examples. It is also necessary to test the applicability and accuracy of the method on widely used equations and/or systems. To this end, we have used the van der Pol \cite{VDP}, Duffing \cite{DUF}, and Lorenz \cite{LOR} equations, a pseudo diffusive equation depending on a parameter, studied in the standard literature \cite{HAL}, an epidemic equation, and the predator-prey Lotka-Volterra \cite{LOT,VOL} equation. We have compared the precision of our solutions with the exact solution, whenever this is known. If not, the comparison is based on Runge-Kutta solutions (for a modern presentation, see \cite{BEL,EGG,KAL}), as well as on the widely used Taylor method.
The analysis of a variety of examples suggests that our method may be more precise for two-dimensional systems than for higher dimensional ones, although this is not always the case: we give two examples of three-dimensional systems (the Lorenz and epidemic equations), for which our method gives different precisions. The method looks particularly efficient in the case of the pseudo diffusive equation mentioned earlier, at least when compared with the standard perturbative method studied in \cite{MIC}.
We usually obtain precisions between those obtained with the Taylor method of third and fourth order (although the precision also depends on the length of the subintervals into which the interval under consideration is divided), which is reasonable when working on a desktop computer. We close this article with concluding remarks and a conjecture on the Li\'enard equation, which is motivated by some of our results and confirmed through numerical experiments.
\section{A matrix method.}
We begin with an initial value problem as given in \eqref{1}. While the function ${\mathbf y}(t)$ is a $\mathbb R^n$ valued function of the real variable $t$, of class $C^1$ on the neighbourhood $|t-t_0|\le a$, $\mathbf f(-)$ is a $\mathbb R^n$ valued function with variable in $\mathbb R^n$, which is continuous on $D\equiv \{ \mathbf y \in \mathbb R^n\;\;/\;\; ||\mathbf y - \mathbf y_0||\le d\}$ and satisfies a Lipschitz condition with respect to $\mathbf y$. Needless to say, $a$ and $d$ are positive constants.
In relation with the identity \eqref{1}, we shall use either the denomination ``equation'' or ``system'' indistinctly. In any case, it is well known that the initial value problem \eqref{1} has a unique solution on the interval $|t-t_0|\le \inf(a,d/M)$ with $M:=\sup_D||\mathbf f||$.
Our objective is to introduce a generalization of a method of solution of \eqref{1} proposed in \cite{FAR}. This generalization is based on an iterative procedure of numerical integration for equations of the type $\mathbf f(\mathbf y) =A(\mathbf y) \cdot \mathbf y $, where $A(-)$ is an $n\times n$ matrix. With this choice for the function $\mathbf f(-)$, the differential equation in \eqref{1} has the form
\begin{equation}\label{2}
\mathbf y'(t) = A(\mathbf y(t))\cdot \mathbf y(t)\,.
\end{equation}
We assume that the entries of $A(\mathbf y)$ are continuous on $D$. Let us define a uniform partition of the interval $[0,t_N]$ into subintervals $I_k\equiv [t_k,t_{k+1}]$, where $t_k=hk$ with $k=0,1,2,\dots,N$, $N$ natural, and $h=t_N/N$, $t_N<a$. We have chosen this form of the interval for simplicity; needless to say, if the original interval is given otherwise, it may always be transformed into $[0,t_N]$ by a translation. Also, the equal spacing of all subintervals is not strictly necessary, although it simplifies our notation. Conventionally, we call the points $\{t_k\}$ {\it nodes}.
We proceed as follows: On each interval $I_k$, we approximate Equation \eqref{2} by
\begin{equation}\label{3}
\mathbf y'_{N,k}(t) = A(\mathbf y^*_{N,k}) \cdot \mathbf y_{N,k}(t)\,.
\end{equation}
At each node, $t_k$, we impose $\mathbf y_{N,k}(t_k) = \mathbf y_{N,k-1}(t_k)$, while $\mathbf y^*_{N,k} \in \mathbb R^n$ is a constant to be determined. This gives the segmentary solution, which has to be of the form $\mathbf y_N(t)\equiv \{\mathbf y_{N,k}(t)\;;\; k=0,1,\dots,N-1\}$. It satisfies
\begin{equation}\label{4}
\mathbf y'_N(t) = A(\mathbf y^*_N) \cdot \mathbf y_N(t)\,,
\end{equation}
where $\mathbf y^*_N$ coincides with $\mathbf y^*_{N,k}$ on each of the $k$-th intervals.
These segmentary solutions give a sequence of functions $\{\mathbf y_N(t)\}$, $t\in [0,t_N]$, that are approximations to the solution of \eqref{2}. Here, we shall give a method to obtain each of the $\mathbf y_N(t)$ and, in the next sections, we shall discuss the properties of the sequence.
Then, we proceed to an iterative integration of \eqref{4} as follows: Take the first interval $I_0$ and fix an initial value $\mathbf y(t_0)$. This initial value gives the solution $\mathbf y_{N,0}(t)$ on $I_0$. Thus, we have the value $\mathbf y_{N,0}(t_1)= \mathbf y_{N,1}(t_1)$, which serves as the initial value for the solution on the interval $I_1$. Then, we repeat the procedure in an obvious manner for $I_2$ and so on. For each interval $I_k$ the matrix $A(\mathbf y^*_{N,k})$, which appears in \eqref{3}, is a constant matrix and, therefore, \eqref{3} is a system with constant coefficients. Therefore, the solution of \eqref{3} has the form
\begin{equation}\label{5}
\mathbf y_{N,k}(t) = \exp \{ A(\mathbf y^*_{N,k})(t-t_k) \} \cdot \mathbf y_{N,k}(t_k)\,,\qquad k=0,1,2,\dots, N-1\,,
\end{equation}
where we have used the notation $\mathbf y_{N,0}(t_0)= \mathbf y(t_0)$. We determine the vectors $\mathbf y^*_{N,k}$ through the following expression:
\begin{equation}\label{6}
\mathbf y^*_{N,k} = \mathbf y_{N,k}\left(t_k+\frac h2 \right) = \exp \left\{ A(\mathbf y_{N,k}(t_k))\,\frac h2 \right\} \cdot \mathbf y_{N,k}(t_k)\,,
\end{equation}
where $k=0,1,2,\dots N-1$.
Then, for $t\in I_k$, the approximate solution $\mathbf y_N(t)$ is given on each of the intervals $I_k$ by:
\begin{equation}\label{7}
\mathbf y_N(t) = \exp \left\{ A(\mathbf y^*_{N,k}) (t-t_k) \right\} \cdot \prod_{j=0}^{k-1} \exp\left\{ A(\mathbf y^*_{N,j})\,h \right\} \cdot \mathbf y_0\,.
\end{equation}
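To make the scheme concrete, here is a minimal Python sketch of the one-step matrix method of equations (5)--(7); the code and its names are our own illustration, not the authors' implementation. It uses a truncated-series matrix exponential (adequate for small matrices and step sizes) and is tested on the scalar case $y'=-y^2$, written as $y'=A(y)y$ with $A(y)=(-y)$, whose exact solution is $y(t)=y_0/(1+y_0t)$.

```python
import numpy as np

def expm(M, terms=25):
    # exp(M) by scaling-and-squaring plus a truncated Taylor series
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, np.inf), 1e-16)))) + 1)
    X = M / 2.0**s
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

def matrix_method(A, y0, t_end, N):
    """Approximate y' = A(y) y, y(0) = y0, with N equal steps:
    on each subinterval the matrix is frozen at the midpoint prediction (6)
    and the constant-coefficient system is solved exactly as in (5)."""
    h = t_end / N
    y = np.array(y0, dtype=float)
    for _ in range(N):
        y_star = expm(A(y) * (h / 2)) @ y   # midpoint prediction, eq. (6)
        y = expm(A(y_star) * h) @ y         # frozen-matrix step, eq. (5)
    return y

# y' = -y^2 as y' = A(y) y with A(y) = (-y); exact value y(1) = 1/2
y1 = matrix_method(lambda y: np.array([[-y[0]]]), [1.0], 1.0, 100)
print(abs(y1[0] - 0.5))   # small: the scheme tracks the exact solution
```

With $N=100$ steps the error at $t=1$ is far below $10^{-3}$, consistent with the second-order behaviour one expects from the midpoint evaluation in (6).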
The determination of the exponential of a matrix may often be rather complicated for large matrices. In that case, we may use the Putzer spectral formula. It establishes that if $A$ is a constant matrix of order $n\times n$ with eigenvalues $\{\lambda_k\}_{k=1}^n$, then its exponential verifies the following expression:
\begin{equation}\label{8}
\exp\{ A\,t \} = \sum_{k=1}^n r_k(t)\,P_{k-1}\,,
\end{equation}
with
\begin{equation}\label{9}
P_0\equiv I\,,\qquad P_k= \prod_{j=1}^k (A-\lambda_j I)\,, \qquad k=1,2,\dots,n-1\,,
\end{equation}
where $I$ is the $n\times n$ identity matrix and the coefficients $r_k(t)$ are to be determined through the following first order system of differential equations:
\begin{eqnarray}\label{10}
r'_1(t) = \lambda_1\,r_1(t)\,, \quad r_1(0)=1\,; \nonumber \\[2ex] r'_k(t) =\lambda_k\,r_k(t) + r_{k-1}(t)\,, \quad r_k(0)=0\,,
\end{eqnarray}
for $k=2,3,\dots,n$.
For simplicity, let us consider the particular case, in which $A(\mathbf y^*_{N,j})$ are matrices of order $2\times 2$. Each of these matrices has two eigenvalues, $\lambda_{1,j}$ and $\lambda_{2,j}$, which may be either different or equal. Let us assume that $\lambda_{1,j} \ne \lambda_{2,j}$. Then,
\begin{eqnarray}\label{11}
\exp\left\{ A(\mathbf y^*_{N,j})\,h \right\} \nonumber\\[2ex]= \frac{1}{\lambda_{1,j} - \lambda_{2,j}} \left\{ (A(\mathbf y^*_{N,j}) - \lambda_{2,j}\,I ) \exp\{\lambda_{1,j}\,h \} - (A(\mathbf y^*_{N,j}) - \lambda_{1,j}\,I ) \exp\{\lambda_{2,j}\,h \} \right\}\,.
\end{eqnarray}
On the other hand, when $\lambda_{1,j}=\lambda_{2,j}=\lambda_j$, we have for the exponential
\begin{equation}\label{12}
\exp\left\{ A(\mathbf y^*_{N,j})\,h \right\} = \exp\{ \lambda_j\,h \}\, \{I + h(A(\mathbf y^*_{N,j}) -\lambda_j\,I)\}\,.
\end{equation}
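For a $2\times 2$ matrix, formulas \eqref{11} and \eqref{12} translate directly into code. The following Python sketch (ours, assuming NumPy) computes $\exp\{Ah\}$ by the Putzer formula:

```python
import numpy as np

def expm2_putzer(A, h):
    """exp{A h} for a real 2x2 matrix A, via eqs. (11)-(12)."""
    l1, l2 = np.linalg.eigvals(A)
    I = np.eye(2)
    if np.isclose(l1, l2):
        # coincident eigenvalues, eq. (12)
        E = np.exp(l1 * h) * (I + h * (A - l1 * I))
    else:
        # distinct eigenvalues, eq. (11)
        E = ((A - l2 * I) * np.exp(l1 * h)
             - (A - l1 * I) * np.exp(l2 * h)) / (l1 - l2)
    return E.real  # imaginary parts cancel for a real A with a conjugate pair
```

Note that \eqref{11} is symmetric under the exchange $\lambda_1\leftrightarrow\lambda_2$, so the ordering of the computed eigenvalues is irrelevant.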
This concludes the description of the method. In the sequel, we show the convergence of the sequence $\{\mathbf y_N(t)\}$ of approximate segmentary solutions and evaluate the precision of the method.
\section{On the convergence of approximate solutions}
In the previous Section, we have obtained a set of approximate solutions of the initial value problem on a compact interval of the real line. The question now is: assuming we have obtained by the previous method a sequence of solutions, does this sequence converge to the exact solution in any reasonable sense as the length $h$ of the subintervals becomes arbitrarily small? Investigating this question is the goal of the present Section. Let us go back to \eqref{4} and rewrite it as
\begin{equation}\label{13}
\mathbf y'_N(t)= A(\mathbf y_N) \cdot \mathbf y_N(t) +\eta_N\,,
\end{equation}
so that,
\begin{equation}\label{14}
\eta_N(t) =(A(\mathbf y^*_N) - A(\mathbf y_N)) \cdot \mathbf y_N(t)\,.
\end{equation}
Let us add and subtract $A(\mathbf y^*_N) \cdot \mathbf y^*_N$ in the right hand side of \eqref{14}. Then, taking the supremum norm on the interval $[t_0,t_N]$ and using the triangle inequality of the norm, we obtain the following inequality:
\begin{equation}\label{15}
||\eta_N|| \le || (A(\mathbf y^*_N) \cdot \mathbf y^*_N -A(\mathbf y_N) \cdot \mathbf y_N)|| + ||A(\mathbf y^*_N)|| \,||\mathbf y^*_N - \mathbf y_N ||\,.
\end{equation}
Next, we apply in \eqref{15} the Lipschitz condition with respect to the variable $\mathbf y$ with constant $K>0$, which gives
\begin{equation}\label{16}
||\eta_N|| \le (K +|| A(\mathbf y_N^*)||) \, ||\mathbf y_N^*-\mathbf y_N||\,.
\end{equation}
On each one of the intervals $I_k$, let us expand $\mathbf y_N(t)$ in Taylor series around $t_k$. We obtain the following inequality:
\begin{equation}\label{17}
||\mathbf y_N^* - \mathbf y_N(t)|| \le \frac h2 \,||\mathbf y'_N(t_k)|| \le \frac h2\,\max_{t_0\le t\le t_N} ||\mathbf y'_N(t)||\,.
\end{equation}
Since $\mathbf y'_N(t)$ is continuous on the interval $t_0 \le t \le t_N$, the maximum in the right hand side of \eqref{17} exists. Furthermore, $A(\mathbf y)$ is continuous with respect to $\mathbf y$ on the neighborhood $||\mathbf y- \mathbf y_0||\le d$, so that there exists a constant $C>0$ such that $||A(\mathbf y)|| \le C$. Taking norms in \eqref{4}, we have that
\begin{equation}\label{18}
||\mathbf y'_N|| \le ||A(\mathbf y^*_N)||\,||\mathbf y_N|| \le C\, ||\mathbf y_N||\,.
\end{equation}
Equation \eqref{7} implies that
\begin{equation}\label{19}
\max_{t_0 \le t \le t_N} ||\mathbf y_N(t)|| \le C' \exp\{C(t_N-t_0)\}\,,
\end{equation}
where $C'>0$ is a constant. After (\ref{18}-\ref{19}), we see that $||\mathbf y'_N(t)||$ is bounded for all $t$ in the interval $[t_0,t_N]$. Consequently, from \eqref{16} and (\ref{18}-\ref{19}), we have that
\begin{equation}\label{20}
||\eta_N|| \le \frac h2 \, (K+C)\, C' \, \exp\{C(t_N-t_0)\} =S\,h\,,
\end{equation}
where the meaning of the constant $S>0$ is obvious.
Next, let us integrate \eqref{13} on the interval $[t_0,t]$. Since for all values of $N$ we use the same initial value $\mathbf y(t_0)$, we have
\begin{equation}\label{21}
\mathbf y_N(t) = \mathbf y(t_0) + \int_{t_0}^t A(\mathbf y_N(s))\cdot \mathbf y_N(s)\,ds + \int_{t_0}^t \eta_N(s)\,ds\,.
\end{equation}
From \eqref{21}, we have that
\begin{equation}\label{22}
||\mathbf y_{N+M}(t) -\mathbf y_N(t)|| \le \int_{t_0}^t ||A(\mathbf y_{N+M}(s))\cdot \mathbf y_{N+M}(s) - A(\mathbf y_N(s)) \cdot \mathbf y_N(s)||\,ds + \delta_N(t)\,,
\end{equation}
with
\begin{equation}\label{23}
\delta_N(t) = \int_{t_0}^t ||\eta_{N+M}(s)-\eta_N(s)||\,ds \le 2Sh (t_N-t_0)\,.
\end{equation}
Using the Lipschitz condition in \eqref{22}, we obtain
\begin{equation}\label{24}
||\mathbf y_{N+M}(t) -\mathbf y_N(t)|| \le K \int_{t_0}^t ||\mathbf y_{N+M}(s) -\mathbf y_N(s)|| \, ds + 2Sh (t_N-t_0)\,.
\end{equation}
At this point, we use the Gronwall lemma, which states the following:
\medskip
{\bf Lemma (Gronwall)}.- Let $f: I\to \mathbb R$ be an integrable function on the compact real interval $I$ such that there exist two positive constants $A$ and $B$ with
\begin{equation}\label{25}
0 \le f(t) \le A + B \int_{t_0}^t f(s)\,ds\,, \qquad t_0 \in I\,
\end{equation}
for all $t \in I$. Then,
\begin{equation}\label{26}
f(t) \le A\, e^{B(t-t_0)}\,.
\end{equation}
\hfill $\blacksquare$
\bigskip
Then, we use the Gronwall lemma with
\begin{equation}\label{27}
f(t)\equiv ||\mathbf y_{N+M}(t) -\mathbf y_N(t)|| \,, \quad A\equiv 2S(t_N-t_0)\,h\,, \quad B\equiv K\,,
\end{equation}
to conclude that
\begin{equation}\label{28}
||\mathbf y_{N+M}(t) -\mathbf y_N(t)|| \le 2S(t_N-t_0)\, e^{K(t-t_0)}\,h \le [2S(t_N-t_0)\,e^{K(t_N-t_0)}]\,h =K'\,h\,,
\end{equation}
with $K'>0$ a constant independent of $N$ and $M$. Therefore,
\begin{equation}\label{29}
||\mathbf y_{N+M}(t) -\mathbf y_N(t)|| \longrightarrow 0\,,
\end{equation}
as $h\to 0$. Since the space $C^0[t_0,t_N]$ is complete\footnote{Note that $t_N$ is fixed and $N$ just denotes the number of intervals in the partition or, equivalently, the length $h$ of the subintervals.}, \eqref{29} implies the existence of a continuous function $\mathbf z: [t_0,t_N] \to \mathbb R^n$, such that
\begin{equation}\label{30}
\mathbf z(t) := \lim_{N\to\infty} \mathbf y_N(t)\,,
\end{equation}
uniformly.
Now, we claim that $\mathbf z(t)$ is differentiable on $(t_0,t_N)$. Furthermore, $\mathbf z(t)$ is a solution of the differential equation \eqref{2}.
The proof goes as follows: The Lipschitz condition applied to our situation implies that
\begin{equation}\label{31}
||A(\mathbf y_{N+M}(t)) \cdot \mathbf y_{N+M}(t) -A(\mathbf y_N(t)) \cdot \mathbf y_N(t)|| \le K\,||\mathbf y_{N+M}(t)-\mathbf y_N(t)||\,,
\end{equation}
so that $A(\mathbf y_N(t)) \cdot \mathbf y_N(t)$ converges uniformly to $A(\mathbf z(t))\cdot \mathbf z(t)$. In addition, after \eqref{20}, we have that
\begin{equation}\label{32}
||\eta_N(t)|| \le S h \le S(t_N-t_0)\,.
\end{equation}
Recall that $t_N$ is fixed for whatever value of $N$. Then, taking limits in \eqref{21}, we have
\begin{eqnarray}\label{33}
\mathbf z(t) = \mathbf y(t_0) + \lim_{N\to\infty} \int_{t_0}^t A(\mathbf y_N(s)) \cdot \mathbf y_N(s)\,ds + \lim_{N\to\infty} \int_{t_0}^t \eta_N(s)\,ds \nonumber\\[2ex] = \mathbf y(t_0) + \int_{t_0}^t [ \lim_{N\to\infty} A(\mathbf y_N(s)) \cdot \mathbf y_N(s)]\,ds + \int_{t_0}^t \lim_{N\to\infty}[\eta_N(s)]\,ds\,.
\end{eqnarray}
In the first integral, we may interchange the limit and the integral due to the uniform convergence of the sequence under the integral to its limit. In the second integral, we have used the Lebesgue dominated convergence theorem, which can be applied here due to \eqref{32}. Since obviously $\lim_{N\to\infty}\eta_N(s)=0$, we finally conclude that
\begin{equation}\label{34}
\mathbf z(t)= \mathbf y(t_0) + \int_{t_0}^t A(\mathbf z(s))\cdot \mathbf z(s) \,ds\,.
\end{equation}
From \eqref{34}, we conclude the following:
\medskip
1.- The function $\mathbf z(t)$ is differentiable in the considered interval.
\smallskip
2.- The function $\mathbf z(t)$ is the solution of equation \eqref{2} with initial value $\mathbf z(t_0)=\mathbf y(t_0)$.
\bigskip
\subsection{A question of order}
The Taylor expansions on a neighborhood of $t_k$ of the solutions of equations \eqref{2} and \eqref{3} have these forms, respectively:
\begin{equation}\label{35}
\mathbf y(t_{k+1}) = \mathbf y(t_k) +A(\mathbf y(t_k))\,\mathbf y(t_k)\,h +\frac 12 (A^2(\mathbf y(t_k))\cdot \mathbf y(t_k))h^2 + \frac 12 \left[\frac{d}{dt}\, A(\mathbf y(t_k))\right] \cdot \mathbf y(t_k)\, h^2 + O(h^3)\,.
\end{equation}
Taking into account that
\begin{equation}\label{36}
\frac{d}{dt}\,A(\mathbf y(t)) = \sum_{j=1}^n \frac{\partial}{\partial\,y_j}\, A(\mathbf y(t)) \,\frac{d}{dt}\, y_j(t)\,,
\end{equation}
where $y_j$ is the $j$-th component of $\mathbf y$, equation \eqref{35} becomes:
\begin{eqnarray}\label{37}
\mathbf y(t_{k+1}) = \mathbf y(t_k) +A(\mathbf y(t_k))\,\mathbf y(t_k)\,h +\frac 12 (A^2(\mathbf y(t_k))\cdot \mathbf y(t_k))h^2 \nonumber\\[2ex] + \frac12 \left( \sum_{j=1}^n \frac{\partial}{\partial\,y_j}\, A(\mathbf y(t_k)) \,\frac{d}{dt}\, y_j(t_k) \right) \cdot \mathbf y(t_k) \,h^2 + O(h^3)\,.
\end{eqnarray}
On the $k$-th interval, equation \eqref{4} takes the form
\begin{equation}\label{38}
\mathbf y_N(t_{k+1}) = \mathbf y_N(t_k) + A(\mathbf y^*_{N,k}) \cdot \mathbf y_N(t_k)\,h +\frac 12\, A^2(\mathbf y^*_{N,k}) \cdot \mathbf y_N(t_k)\,h^2 +O(h^3)\,.
\end{equation}
Then, we may expand into Taylor series the matrix $A(\mathbf y^*_{N,k})$ on a neighborhood of $\mathbf y_N(t_k)$. Taking into account \eqref{5} and after some simple calculations, we obtain:
\begin{equation}\label{39}
A(\mathbf y^*_{N,k}) = A(\mathbf y_N(t_k) ) + \sum_{j=1}^n \frac{\partial}{\partial \,y_j}\, A(\mathbf y_N(t_k) ) \,(y_{N,j}(t_k+h/2)- y_{N,j}(t_k))\,,
\end{equation}
where $y_{N,j}(t)$ is the $j$-th component of $\mathbf y_N(t)$. A first order expansion on the last factor on the right hand side of \eqref{39} gives
\begin{equation}\label{40}
y_{N,j}(t_k+h/2)- y_{N,j}(t_k) = \frac 12\,\frac{d}{dt}\, y_{N,j}(t_k) \, h + o(h^2)\,,
\end{equation}
so that using \eqref{40} in \eqref{39}, we have
\begin{equation}\label{41}
A(\mathbf y^*_{N,k}) = A(\mathbf y_N(t_k) ) + \frac h2 \sum_{j=1}^n \frac{\partial}{\partial \,y_j}\, A(\mathbf y_N(t_k) ) \, \frac{d}{dt} \, y_{N,j}(t_k)\,.
\end{equation}
Then, we substitute \eqref{41} into \eqref{38} and perform some simple manipulations, taking into account that, up to second order in $h$,
\begin{equation}\label{42}
\frac12 \, A^2(\mathbf y^*_{N,k}) \cdot \mathbf y_N(t_k)\,h^2 \approx \frac 12\,A^2(\mathbf y_N(t_k)) \cdot \mathbf y_N(t_k)\, h^2\,,
\end{equation}
we finally obtain that
\begin{eqnarray}\label{43}
\mathbf y_N(t_{k+1})= \mathbf y_N(t_k) + A(\mathbf y_N(t_k)) \cdot \mathbf y_N(t_k)\,h +\frac{h^2}{2}\left(\sum_{j=1}^n \,\frac{\partial}{\partial\,y_j} A(\mathbf y_N(t_k))\,\frac{d}{dt}\, y_{N,j}(t_k)\right) \cdot \mathbf y_N(t_k) \nonumber \\[2ex] + \frac12 \,A^2(\mathbf y_N(t_k)) \cdot \mathbf y_N(t_k)\,h^2 +O(h^3)\,.
\end{eqnarray}
Comparison of \eqref{43} with \eqref{37} shows that both expansions coincide up to terms of second order in $h$, so that the method is accurate to second order.
\subsubsection{Going beyond second order}
In the previous subsection, we have found the solution up to second order in $h$. Could we obtain better precision at third or higher order while keeping the simplicity of the method? First of all, our construction is based on equation \eqref{3}, which is no longer valid if we require an approximation of order higher than two. To fix ideas, let us take ${\mathbf y}(t)=(y_1(t),y_2(t))$ two-dimensional for simplicity, a choice which does not affect our argument. Let us expand $A({\mathbf y}(t))$ around ${\mathbf y}^*(t)=(y_1^*,y_2^*)$ (where we have omitted the sub-indices $N,k$ for simplicity) and take one more term in the Taylor expansion. The result is
\begin{equation}\label{44}
A({\mathbf y}(t)) = A({\mathbf y}^*(t)) + \frac{\partial}{\partial y_1}\, A({\mathbf y}^*(t)) (y_1(t)- y_1^*) + \frac{\partial}{\partial y_2}\, A({\mathbf y}^*(t)) (y_2(t)- y_2^*)\,.
\end{equation}
Then, instead of the approximation \eqref{3}, we have the following, where again we have suppressed the indices $N$ and $k$:
\begin{equation}\label{45}
{\mathbf y}'(t) = \left[ A({\mathbf y}^*(t)) + \frac{\partial}{\partial y_1}\, A({\mathbf y}^*(t)) (y_1(t)- y_1^*) + \frac{\partial}{\partial y_2}\, A({\mathbf y}^*(t)) (y_2(t)- y_2^*) \right] \cdot {\mathbf y}(t)\,.
\end{equation}
The solution to be determined is just ${\mathbf y}(t)=(y_1(t),y_2(t))$, which is now part of the Ansatz \eqref{45}. One possibility to resolve this circularity is to proceed with the following expansion on each of the intervals $I_k$:
\begin{equation}\label{46}
y_1(t)= y_1(t_k) + \frac{\partial y_1}{\partial t}(t_k) \,(t-t_k) +\dots\,,
\end{equation}
and similarly for $y_2(t)$. If we use these expansions in \eqref{3}, we obtain an equation of the type:
\begin{equation}\label{47}
{\mathbf y'_{N,k}}(t) = G_{N,k}(t) \cdot {\mathbf y_{N,k}}(t) \,,
\end{equation}
with
\begin{equation}\label{48}
G_{N,k}(t) = A({\mathbf y^*_{N,k}}(t)) + \frac{\partial }{\partial y_1}\, A({\mathbf y^*_{N,k}}(t)) (y_1(t)-y_1^*) + \frac{\partial }{\partial y_2}\, A({\mathbf y^*_{N,k}}(t)) (y_2(t)-y_2^*) + \dots\,.
\end{equation}
Obviously, system \eqref{47} is non-autonomous. We would have to approximate $G_{N,k}(t)$ by a constant matrix on the interval $I_k$ and start again.
As we see, advancing just one order in accuracy destroys the simplicity of the method, which is one of its most interesting added values. Therefore, going to higher orders is not an advantage: it may slightly improve the precision, but at the price of destroying the efficiency and simplicity of the method.
\subsection{Some examples}
\begin{itemize}
\item{{\bf The van der Pol equation}
The van der Pol equation
\begin{equation}\label{49}
y''(t) +\mu(y^2(t)-1)y'(t) + y(t)=0
\end{equation}
is a particular case of the Li\'enard equation,
\begin{equation}\label{50}
y''(t)+f(y)\,y'(t) +g(y)=0\,.
\end{equation}
which will be discussed in the Appendix. In the van der Pol equation, we have obviously that $f(y)=\mu(y^2(t)-1)$ and $g(y)\equiv y$. This equation can be easily written in the matrix form \eqref{2} by writing $y_1(t)\equiv y(t)$ and $y_2(t)\equiv y'(t)$, as

\begin{equation}\label{51}
\left(\begin{array}{c} y'_1(t)\\[2ex] y'_2(t) \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\[2ex] -1 & \mu(1-y_1^2) \end{array} \right) \left(\begin{array}{c} y_1(t)\\[2ex] y_2(t) \end{array} \right)\,.
\end{equation}
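Applied to \eqref{51}, the matrix method reduces to a short loop. The following Python sketch (ours, assuming SciPy; it is not the source code of Appendix B) integrates the van der Pol system up to $T=20$ with the parameter and initial values quoted below:

```python
import numpy as np
from scipy.linalg import expm

mu = 0.5  # the value of the parameter used in Table 1

def A(y):
    """Matrix of eq. (51), evaluated at the state y = (y1, y2)."""
    return np.array([[0.0, 1.0], [-1.0, mu * (1.0 - y[0] ** 2)]])

h = 1e-2
y = np.array([0.0, 2.0])             # y(0) = 0, y'(0) = 2
for _ in range(int(round(20 / h))):  # integrate up to T = 20
    y_star = expm(A(y) * (h / 2)) @ y   # eq. (6)
    y = expm(A(y_star) * h) @ y         # eq. (5)
```

Since the limit cycle of the van der Pol oscillator for moderate $\mu$ has amplitude close to $2$, the computed trajectory remains bounded.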
Our goal is to compare the precision of our method with a reference solution. No explicit solutions to the van der Pol equation \eqref{49} are known, so we use the Runge-Kutta solution of eighth order, $y_{rk}(t)$, as reference solution (alternatively, one may consider a Taylor solution of eighth order, which has a comparable precision). We compare the precision of our method with the precision of the solutions obtained by a third- or fourth-order Taylor method. As a measure of the error, we use
\begin{equation}\label{52}
e_h:= \frac 1N \sum_{j=0}^{N-1} (y_{rk}(t_j)-y(t_j))^2\,,
\end{equation}
where $y(t)$ is the solution obtained using our method or any other, such as the Taylor method. Then, we need to fix the values of the parameters and the initial conditions. In Table 1 below, we compare the errors produced by our method with the errors given by the third- and fourth-order Taylor methods for different values of the interval width $h$, for $T=20$, $\mu=1/2$ and the initial conditions $y(0)=0$ and $y'(0)=2$.
\vskip1cm
\centerline{$
\begin{array}
[c]{cccc}
h & {\rm Matrix} & {\rm Taylor\, 3^{\rm rd}} & {\rm Taylor\, 4^{\rm th}}\\[2ex]
10^{-4} & 2.63\, 10^{-11} & 8.60\,10^{-7} & 6.40 \,10^{-7} \\
10^{-3} & 2.08\, 10^{-11} & 8.30\, 10^{-9} & 6.45\, 10^{-9} \\
10^{-2} & 1.46\, 10^{-8} & 9.11\, 10^{-9} & 6.60\, 10^{-11} \\
10^{-1} & 2.04\, 10^{-5} & 1.17\, 10^{-4} & 1.67\, 10^{-7} \\
2\, 10^{-1} & 2.30\, 10^{-4} & 2.23\, 10^{-3} & 7.94\, 10^{-6} \\
5\, 10^{-1} & 8.10\, 10^{-3} & 1.49\, 10^{-1} & 2.75\, 10^{-3}
\end{array}
$}
\medskip
TABLE 1.- Values of the error $e_h$ for our matrix method and the Taylor methods of orders three and four, for distinct values of $h$, for the van der Pol equation.
\vskip 1cm
It is clear that our matrix method has a precision in between those of the third- and fourth-order Taylor methods. These results are just an example of those obtained in the multiple numerical experiments we have performed, which show essentially the same behaviour. However, Table 1, as well as other numerical experiments, shows an important tendency: the lower $h$, the better our precision as compared with that of the Taylor method. The reason is clear: the smaller $h$, the bigger the number of operations needed to obtain the approximate solution, and our matrix method requires fewer operations than the Taylor method, so that our precision gets comparatively better as $h$ gets smaller.
In Appendix B, we give our source code when our method is to be applied in the present case.
}
\item{{\bf The Duffing equation}
This is another second order non linear equation, which has the following form:
\begin{equation}\label{53}
y''(t) + y(t) + y^3(t)=0\,,
\end{equation}
where we have omitted the term in the first derivative of the unknown, $y'(t)$. For the Duffing equation, there are explicit solutions in terms of the Jacobi elliptic functions. For instance, given the initial conditions $y(0)=1$ and $y'(0)=0$, we have the following solution:
\begin{equation}\label{54}
y_e(t) = -i \sqrt{1+k} \,{\rm sn}\,(u;m)\,,
\end{equation}
where ${\rm sn}\,(u;m)$ denotes the elliptic sine. The arguments in \eqref{54} denote the following:
\begin{eqnarray}\label{55}
u= \frac 1{\sqrt 2} \, (t^2(1-k) + 2t(c_2-kc_1) + (1-k)c_2^2)\,, \nonumber\\[2ex] m= \frac{1+k}{1-k}\,,\qquad k=\sqrt{1+2c_1}\,.
\end{eqnarray}
The values of the constants included in \eqref{55} are $c_1= 1.5$ and $c_2= 1.1920055$. The Duffing equation may be written in matrix form, setting again $y_1(t)\equiv y(t)$ and $y_2(t)\equiv y'(t)$, as
\begin{equation}\label{56}
\left(\begin{array}{c} y'_1(t)\\[2ex] y'_2(t) \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\[2ex] -(1+y_1^2) & 0 \end{array} \right) \left(\begin{array}{c} y_1(t)\\[2ex] y_2(t) \end{array} \right)\,.
\end{equation}
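As an additional sanity check (ours, not part of the original comparison), the Duffing equation \eqref{53} conserves the energy $E = \tfrac12 y'^2 + \tfrac12 y^2 + \tfrac14 y^4$, which the matrix method should preserve approximately:

```python
import numpy as np
from scipy.linalg import expm

def A(y):
    """Matrix of eq. (56)."""
    return np.array([[0.0, 1.0], [-(1.0 + y[0] ** 2), 0.0]])

def energy(y):
    """Conserved energy of y'' + y + y^3 = 0."""
    return 0.5 * y[1] ** 2 + 0.5 * y[0] ** 2 + 0.25 * y[0] ** 4

h = 1e-3
y = np.array([1.0, 0.0])             # y(0) = 1, y'(0) = 0
E0 = energy(y)
for _ in range(int(round(10 / h))):  # integrate up to T = 10
    y_star = expm(A(y) * (h / 2)) @ y
    y = expm(A(y_star) * h) @ y
drift = abs(energy(y) - E0)          # small, of order h^2
```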
We define the error $e_h$ as in \eqref{52}, where we replace $y_{rk}(t)$ by the exact solution, $y_e(t)$, which does exist in the present case. Here $y(t)$ is the approximate solution whose precision we want to assess: in our case, the solution obtained by our matrix method, or the third- and fourth-order Taylor solutions. The errors produced by each method are given in Table 2 below.
\vskip1cm
\centerline{$
\begin{array}
[c]{cccc}
h & {\rm Matrix} & {\rm Taylor\, 3^{\rm rd}} & {\rm Taylor\, 4^{\rm th}}\\[2ex]
10^{-4} & 1.69\, 10^{-9} & 5.60\,10^{-5} & 4.13 \,10^{-5} \\
10^{-3} & 1.03\, 10^{-10} & 5.06\, 10^{-7} & 4.24\, 10^{-7} \\
10^{-2} & 9.61\, 10^{-9} & 2.47\, 10^{-8} & 4.24\, 10^{-9} \\
10^{-1} & 1.16\, 10^{-5} & 4.92\, 10^{-4} & 4.12\, 10^{-8} \\
2\, 10^{-1} & 1.18\, 10^{-4} & 6.78\, 10^{-3} & 7.56\, 10^{-6} \\
5\, 10^{-1} & 82.00\, 10^{-3} & 1.04\, 10^{-1} & 7.73\, 10^{-3}
\end{array}
$}
\medskip
TABLE 2.- Values of the error $e_h$ for our matrix method and the Taylor methods of orders three and four, for distinct values of $h$, for the Duffing equation.
\vskip1cm
We see that the results are quite similar to those obtained with the van der Pol equation. Similarly, we have made some numerical experiments that confirm these results.
}
\item{ {\bf The Lorenz equation}
The Lorenz equation, a model for the study of chaotic systems, was introduced in the study of atmospheric behaviour \cite{LOR}. This equation arises in many problems of physics where chaoticity is present \cite{HAK,GOR,COU,MIS}. The Lorenz equation is usually written in matrix form and is three-dimensional:
\begin{equation}\label{57}
\left(\begin{array}{c} y'_1(t) \\[2ex] y'_2(t) \\[2ex] y'_3 (t) \end{array} \right) = \left(\begin{array}{ccc} -a & a & 0 \\[2ex] b-y_3(t) & -1 & 0 \\[2ex] y_2(t) & 0 & -c \end{array} \right) \left(\begin{array}{c} y_1(t) \\[2ex] y_2(t) \\[2ex] y_3 (t) \end{array} \right)\,,
\end{equation}
$a$, $b$ and $c$ being positive constants.
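The matrix method applies to system \eqref{57} without changes. The following Python sketch is ours (assuming SciPy, with the parameter values quoted below); since no closed solution is available, it includes a simple self-convergence check comparing the results for $h$ and $h/10$:

```python
import numpy as np
from scipy.linalg import expm

a, b, c = 10.0, 9.996, 8.0 / 3.0    # parameter values used in the text

def A(y):
    """Matrix of eq. (57), evaluated at the state y = (y1, y2, y3)."""
    return np.array([[-a, a, 0.0],
                     [b - y[2], -1.0, 0.0],
                     [y[1], 0.0, -c]])

def solve(y0, h, T):
    y = np.asarray(y0, dtype=float)
    for _ in range(int(round(T / h))):
        y_star = expm(A(y) * (h / 2)) @ y   # eq. (6)
        y = expm(A(y_star) * h) @ y         # eq. (5)
    return y

y_coarse = solve([1.0, 0.0, 1.0], 1e-2, 2.0)
y_fine = solve([1.0, 0.0, 1.0], 1e-3, 2.0)
```

In the attracting regime chosen here, both runs agree closely, consistent with the second-order accuracy of the method.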
This system is very sensitive to the particular choice of the parameters and of the initial values, as many numerical experiments show. Based on these experiments, we have chosen the following values for the parameters: $a=10$, $b=9.996$ and $c=8/3$, for which the fixed point $(y_1,y_2,y_3) =(4.88808,4.88808,8.996)$ is an attractor. We proceed as in the previous cases and compare the errors \eqref{52} resulting from the use of the Taylor methods of orders two to four and of this matrix method, using as reference the solution obtained with the Runge-Kutta method of eighth order. We use $T=10$ and the solution with initial values $(y_1(0),y_2(0),y_3(0))= (1,0,1)$. The errors, in terms of the interval width $h$, are given in Table 3.
\vskip1cm
\centerline{$
\begin{array}
[c]{ccccc}
h & {\rm Matrix} & \text{Taylor second order} & \text{Taylor third order} & \text{Taylor fourth order}\\[2ex]
10^{-2} & 5.2\,10^{-6} & 6.73\,10^{-5} & 3.6\,10^{-8} & 3.1\, 10^{-10} \\
10^{-1} & 5.3\,10^{-2} & 7.1 \, 10^{-1} & 1.8\, 10^{-1} & 5.2\, 10^{-2} \\
2\, 10^{-1} & 3.7 \,10^{-1} & {\rm error} & {\rm error} & {\rm error}
\end{array}
$}
\medskip
\centerline{TABLE 3.- Values of the precision $e_h$ in terms of $h$ for $T=10$, $y_1(0)=1$, $y_2(0)=0$ and $y_3(0)=1$.}
\vskip1cm
The word ``error'' written in some entries of Table 3 means that, when we use the Taylor method with an interval width of the order of $0.2$, the error becomes uncontrollable. We also see that the precision obtained with the matrix method improves on that of the Taylor method more clearly the larger the length of the subintervals.
\item{{\bf Neutral damping equation}.
This equation has been discussed in the literature \cite{HAL,MIC} and reads
\begin{equation}\label{58}
x''(t) +\varepsilon\, ( x'(t))^2 +x(t)=0\,,
\end{equation}
where the prime means derivative with respect to the variable $t$ and $\varepsilon$ is a real parameter. As in previous cases, let us define $y(t):= x'(t)$, so that \eqref{58} can be written in the following form:
\begin{equation}\label{59}
(x+\varepsilon y^2)\,dx + y\,dy=0\,.
\end{equation}
This equation is integrable with integrating factor:
\begin{equation}\label{60}
\mu(x,y)\equiv e^{2\varepsilon x}\,.
\end{equation}
It is readily shown that equation \eqref{59} admits the following first integral:
\begin{equation}\label{61}
f(x,y) = \left[\frac 12 \,y^2 + \frac{1}{4\varepsilon^2}\,(2\varepsilon x -1) \right] e^{2\varepsilon x}\,.
\end{equation}
If we write \eqref{58} in the standard matrix form as
\begin{equation}\label{62}
\left(\begin{array}{c} x' \\[2ex] y' \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\[2ex] -1 & -\varepsilon y \end{array} \right) \left( \begin{array}{c} x\\[2ex] y \end{array} \right)\,,
\end{equation}
we readily observe that the only fixed point is the origin. The origin is also the point at which the minimum of the first integral \eqref{61} lies. All orbits are periodic around the origin.
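A numerical confirmation (ours, assuming SciPy) is immediate: along a trajectory produced by the matrix method, the first integral \eqref{61} stays approximately constant:

```python
import numpy as np
from scipy.linalg import expm

eps = 0.1

def A(v):
    """Matrix of eq. (62), evaluated at the state v = (x, y)."""
    return np.array([[0.0, 1.0], [-1.0, -eps * v[1]]])

def f(v):
    """First integral (61)."""
    x, y = v
    return (0.5 * y ** 2 + (2 * eps * x - 1) / (4 * eps ** 2)) * np.exp(2 * eps * x)

h = 1e-3
v = np.array([1.0, 0.0])                    # x(0) = 1, x'(0) = 0
f0 = f(v)
for _ in range(int(round(2 * np.pi / h))):  # roughly one period
    v_star = expm(A(v) * (h / 2)) @ v
    v = expm(A(v_star) * h) @ v
```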
In Figure 1, we have depicted the periodic solutions in phase space. Note that, although equation \eqref{58} may look like dissipative, it is not as Figure 1 manifests.
Now, let us repeat the comparison between the errors given by our matrix method and those given by the Taylor method at some low orders. As always, we use \eqref{52} as the definition of the error and the numerical Runge-Kutta solution of eighth order as the reference solution. We obtained the results of Table 4, where $h$ is, as always, the distance between two consecutive nodes:
\vskip1cm
\centerline{$
\begin{array}[c]{cccc}
h & {\rm Matrix} & {\rm Taylor\, 3^{\rm rd}} & {\rm Taylor\, 4^{\rm th}}\\[2ex]
10^{-4} & 2.2\, 10^{-14} & 4.2\, 10^{-14} & 3.2 \,10^{-11} \\
10^{-3} & 1.9\,10^{-14} & 4.1\,10^{-11} & 3.1\, 10^{-11} \\
10^{-2} & 9.0\, 10^{-14} & 4.6\, 10^{-13} & 4.0\,10^{-12} \\
10^{-1} & 1.6\, 10^{-9} & 2.7\, 10^{-7} & 8.2\, 10^{-11} \\
2\, 10^{-1} & 2.6\, 10^{-8} & 8.6\, 10^{-6} & 1.0\, 10^{-9} \\
5\, 10^{-1} & 1.1 \, 10^{-6} & 7.5\, 10^{-4} & 6.1\, 10^{-6}
\end{array}
$}
\medskip
TABLE 4.- Values of the error in terms of $h$ for $\varepsilon=0.1$, $T=2\pi$ (see \eqref{52}) and the initial values $x(0)=1$, $x'(0)=0$.
\vskip1cm
We observe that the precision of the matrix method is much higher than the precision of the third- and fourth-order Taylor methods for a distance between nodes $h\le 0.01$. Moreover, we have to underline that our matrix one-step method is much simpler to program than the Taylor method, as the reader can easily verify using any of these examples.
There is another method, based on the theory of perturbations, to obtain approximate solutions to \eqref{58}, which receives the name of Lindstedt-Poincar\'e. It is described in \cite{MIC}. It consists of a series in powers of $\varepsilon$ of the form:
\begin{equation}\label{63}
x(t) = x_0(t) +\varepsilon\, x_1(t) + \varepsilon^2\,x_2(t) +\dots\,.
\end{equation}
The coefficients $x_i(t)$ may be obtained iteratively, once we have fixed the initial values. For instance, for $x(0)=1$ and
$ x'(0)=0$, we obtain:
\begin{eqnarray}\label{64}
x_0(t)= \cos\omega t\,,\qquad x_1(t)=\frac 16 \left(-3+4 \cos \omega t- \cos 2\omega t \right)\,,\nonumber\\[2ex]
x_2(t)= \frac 13 \left(-2 +\frac{61}{24}\, \cos\omega t -\frac 23 \,\cos 2\omega t + \frac 18\, \cos 3\omega t \right)\,, \nonumber\\[2ex]
\omega= 1-\frac 16\,\varepsilon + o(\varepsilon^3)\,.
\end{eqnarray}
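The truncated series \eqref{63}-\eqref{64} is straightforward to evaluate. The following Python sketch (ours) implements it and may be compared with any of the numerical solutions; it reproduces the initial conditions $x(0)=1$, $x'(0)=0$:

```python
import numpy as np

eps = 0.1
omega = 1.0 - eps / 6.0   # frequency, from eq. (64)

def x_lp(t):
    """Second-order Lindstedt-Poincare approximation, eqs. (63)-(64)."""
    w = omega * t
    x0 = np.cos(w)
    x1 = (-3.0 + 4.0 * np.cos(w) - np.cos(2.0 * w)) / 6.0
    x2 = (-2.0 + 61.0 / 24.0 * np.cos(w)
          - 2.0 / 3.0 * np.cos(2.0 * w) + np.cos(3.0 * w) / 8.0) / 3.0
    return x0 + eps * x1 + eps ** 2 * x2
```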
We may also evaluate the error for this perturbative method, which is independent of any division of the interval into subintervals. This error is $1.1\,10^{-4}$, which is obviously higher than those obtained using any of the numerical methods considered.
We finish the present example by proposing another approach to an approximate solution. By either method, matrix or Taylor, we obtain at each of the nodes $\{t_k\}$ a value, $x_k$, of the approximate solution. Let us interpolate between nodes by means of cubic splines, so that we obtain a segmentary approximation by cubic polynomials\footnote{In \cite{LG}, we have already proposed segmentary cubic solutions and studied their properties, although in \cite{LG} they were not necessarily cubic splines.}. Let us call the cubic interpolating solution $s(t)$; then the error of this solution on the interval $[0,T]$, with respect to the exact solution, is given by (recall that cubic splines have continuous first and second derivatives at the nodes)
\begin{equation}\label{65}
e=\frac 1T \int_0^T (s''(t)+\varepsilon\,[s'(t)]^2+s(t))^2\,dt\,.
\end{equation}
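The residual error \eqref{65} is easy to evaluate once the spline is built. A sketch (ours, assuming SciPy's `CubicSpline`; the quadrature is a plain average over a fine uniform grid):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def residual_error(t, x, eps):
    """Residual error (65) for a cubic-spline interpolant of nodal values x."""
    s = CubicSpline(t, x)
    tt = np.linspace(t[0], t[-1], 20 * len(t))   # fine quadrature grid
    r = s(tt, 2) + eps * s(tt, 1) ** 2 + s(tt)   # s'' + eps (s')^2 + s
    return float(np.mean(r ** 2))                # (1/T) * integral of r^2
```

For $\varepsilon=0$ and exact nodal values $x_k=\cos t_k$, the residual is tiny, which checks the implementation.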
The form of the error for the Taylor method is also given by \eqref{65}. The resulting errors appear in the following table (Table 5):
\vskip1cm
\centerline{$
\begin{array}[c]{ccccc}
h & {\rm Spline} & {\rm Taylor\, 2^{\rm nd}} & {\rm Taylor\, 3^{\rm rd}} & {\rm Taylor\, 4^{\rm th}}\\[2ex]
10^{-4} & 7.2\, 10^{-16} & 4.7 \,10^{-16} & 6.7\,10^{-16} & 6.5\,10^{-16}\\
10^{-3} & 1.8\,10^{-14} & 1.8\,10^{-14} & 1.8\,10^{-14} & 1.8\,10^{-14}\\
10^{-2} & 1.9\,10^{-10} & 1.9\,10^{-11} & 1.1\,10^{-10} & 1.1\,10^{-10} \\
10^{-1} & 4.6\,10^{-6} & 6.4\,10^{-6} & 4.6\,10^{-6} & 4.6\,10^{-6} \\
2\, 10^{-1} & 6.1\,10^{-5} & 9.8\,10^{-5} & 6.0\,10^{-5} & 6.0\,10^{-5}\\
5\, 10^{-1} & 4.8\,10^{-3} & 1.2\,10^{-3} & 4.6\,10^{-3} & 4.9\,10^{-3}
\end{array}
$}
\medskip
TABLE 5.- Comparison between errors by the cubic spline and Taylor methods.
\vskip1cm
Finally, for the perturbative method the error obtained is $7.4\,10^{-5}$. Concerning the conservation of the constant of motion, we measure its dispersion by means of the following parameter, $e_f$, defined as
\begin{equation}\label{66}
e_f:= \frac 1T \int_0^T(f(x_0,y_0)-f(x,y))^2\, dt\,,
\end{equation}
where $f(x,y)$ is calculated using the different approximations: Taylor, matrix, and the approximate analytic solution obtained by the perturbative Lindstedt-Poincar\'e method mentioned earlier. The point $(x_0,y_0)$ gives the chosen initial conditions. In the latter case, we have obtained $e_f = 1.1\,10^{-4}$; in all the others, we always got $e_f<10^{-8}$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{StreamPlot}
\caption{\small Periodic orbits around the origin for the neutral damping equation. The horizontal coordinate represents the values of $x$, while the vertical coordinate gives the values of $y=x'$. Note that these periodic orbits are represented in phase space.
\label{Figure1}}
\end{figure}
}
\item{{\bf An epidemic equation}
A model for an epidemic was proposed as early as 1927 \cite{KK,STR,TC}. If $x(t)$, $y(t)$ and $z(t)$ are the numbers, at a given time $t$, of healthy, sick and dead persons, respectively, in some society, the model assumes that these functions satisfy the following non-linear system:
\begin{eqnarray}\label{67}
\dot x(t) &=& -a x(t)\,y(t)\,, \nonumber\\[2ex] \dot y(t) &=& a x(t)\,y(t) -b y(t)\,, \nonumber\\[2ex]
\dot z(t) &=& by(t)\,,
\end{eqnarray}
where $a$ and $b$ are positive constants and the dot means derivation with respect to time $t$. The model assumes infection of healthy persons by sick persons; the latter die after some time. Note that, in the studied time interval, the sole cause of population dynamics is the epidemic. For obvious reasons, we consider only positive solutions. The fixed points of \eqref{67} have the form $(\alpha,0,\beta)$ with $\alpha,\beta>0$.
Let us consider the following vector field, also called the {\it flux} of \eqref{67},
\begin{equation}\label{68}
F(t):=(-a x(t)\,y(t),a x(t)\,y(t) -b y(t),by(t)) \,.
\end{equation}
Note that $F_x<0$. For $x(t)<b/a$, we have $F_y<0$, while for $x>b/a$ we have $F_y>0$. Thus, the fixed point is unstable if $\alpha >b/a$. This is a necessary condition for the existence of the epidemic. If $\alpha <b/a$, then the fixed points are asymptotically stable, as depicted in Figure 2, where we have chosen $a=0.0005$ and $b=0.1$. Different curves in Figure 2 represent different initial conditions.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{FIXEDPOINTS}
\caption{\small Asymptotic stability of fixed points with $a=0.0005$ and $b=0.1$ in \eqref{67}. The horizontal axis represents the scaled number of healthy people, while the vertical axis gives the scaled number of sick people. The arrows indicate the direction of time. Different curves are obtained using different initial conditions. Observe the presence of a maximum at a point which is independent of the initial conditions.
\label{Figure2}}
\end{figure}
Let us write \eqref{67} in matrix form as
\begin{equation}\label{69}
\dot X(t) =A(x,y,z)\cdot X(t)\,,
\end{equation}
where,
\begin{equation}\label{70}
X(t):= \left(\begin{array}{c} x(t)\\[2ex] y(t)\\[2ex] z(t) \end{array} \right)\,, \qquad A(x,y,z):= \left( \begin{array}{ccc} 0 & -ax & 0 \\[2ex] 0 & ax-b & 0 \\[2ex] 0 & b & 0 \end{array} \right)\,.
\end{equation}
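Note that the columns of $A(x,y,z)$ in \eqref{70} sum to zero, so the exponential of $A\,h$ preserves the total population $x+y+z$; the matrix method therefore conserves it to machine precision. The following sketch (ours, assuming SciPy) confirms this:

```python
import numpy as np
from scipy.linalg import expm

a, b = 0.0005, 0.1   # parameter values used in the text

def A(X):
    """Matrix of eq. (70); its columns sum to zero."""
    x = X[0]
    return np.array([[0.0, -a * x, 0.0],
                     [0.0, a * x - b, 0.0],
                     [0.0, b, 0.0]])

h = 0.1
X = np.array([300.0, 20.0, 0.0])     # x(0) = 300 > b/a, y(0) = 20, z(0) = 0
total0 = X.sum()
for _ in range(int(round(100 / h))):
    X_star = expm(A(X) * (h / 2)) @ X
    X = expm(A(X_star) * h) @ X
```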
In Figure 3, we plot the number of infected people as a function of time. After a maximum, the number of sick people decays quickly. We have used the values of the parameters $a=0.0005$ and $b=0.1$ and the initial values
$x(0)=300 >b/a$, $y(0)=20$, $z(0)=0$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{SICK}
\caption{\small Number of infected people in relation with time. Observe the existence of a maximum. After the maximum the curve decreases steeply.
\label{Figure3}}
\end{figure}
The error for the method is defined as in \eqref{52}. In Table 6 below, we compare, for given values of $h$, the errors of our matrix method with those of the second- and third-order Taylor methods.
\vskip1cm
\centerline{$
\begin{array}[c]{cccc}
h & \text{Matrix Method} & {\rm Taylor\, 2^{\rm nd}} & {\rm Taylor\, 3^{\rm rd}} \\[2ex]
0.01 & 2.2\, 10^{-11} & 4.0 \, 10^{-11} & 1.9\, 10^{-11} \\
0.1 & 2.0\, 10^{-8} & 7.3 \, 10^{-8} & 2.0 \, 10^{-11} \\
1.0 & 2.1\, 10^{-4} & 7.2 \, 10^{-4} & 1.7\,10^{-7} \\
2.0 & 3.6\, 10^{-3} & 1.1\, 10^{-2} & 1.2\, 10^{-5}
\end{array}
$}
\medskip
TABLE 6.- Comparison between the errors of the Matrix Method and the Taylor methods in the case of the epidemic model.
\vskip1cm
We see that the level of error is similar, although our method keeps the advantage of requiring far fewer arithmetic operations than any Taylor method.
}
\item{{\bf Lotka-Volterra equation.}
Models in population dynamics, such as predator-prey competition, were independently developed by the American biologist A.J. Lotka \cite{LOT} and the Italian mathematician V. Volterra \cite{VOL}; see \cite{FAR,CI,MUR,LV} for modern references on the Lotka-Volterra equation. In its most general form, the Lotka-Volterra equation reads
\begin{eqnarray}\label{71}
\dot x_1= x_1(\varepsilon_1 -a_{11}\,x_1 - a_{12}\, x_2)\,, \nonumber\\[2ex]
\dot x_2 =x_2(\varepsilon_2 -a_{21}\,x_1 - a_{22}\, x_2)\,,
\end{eqnarray}
where $x_1$ and $x_2$ are functions of time $t$ and $\varepsilon_i$, $a_{ij}$, $i,j=1,2$ are constants. Following \cite{CI}, we consider here a simpler version, yet non-linear, of \eqref{71}, which is
\begin{eqnarray}\label{72}
\dot x= a\,x-b\,xy\,,\nonumber\\[2ex] \dot y = d\,xy - c\,y\,,
\end{eqnarray}
and the initial conditions $x(t_0)=x_0$, $y(t_0)= y_0$.
We may consider the solutions $x=x(t)$, $y=y(t)$ of \eqref{72} as the equations determining a parametric curve in the $x$-$y$ plane. Then, by elimination of $t$ and integration, we obtain:
\begin{equation}\label{73}
C(x,y)= x^cy^a \,\exp\{-(dx+by)\}\,.
\end{equation}
Note that $C(x,y)$ is constant along each solution curve (a constant of motion), and its value on each solution is determined by the initial conditions. Equations \eqref{72} have two fixed points, which are $(0,0)$ and $(c/d,a/b)$. For $x>0$ and $y>0$ all solutions are periodic.
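The constancy of this first integral along trajectories is easy to verify numerically. In the sketch below (ours, not the authors' code; the Runge-Kutta integrator and the tolerance are choices made here) we use the first integral obtained by separating variables in $dy/dx$, namely $x^c y^a e^{-dx-by}$, and check that it barely drifts along a computed orbit:

```python
import math

# Sketch (not from the paper): check numerically that the first integral
# C(x, y) = x**c * y**a * exp(-d*x - b*y), obtained by separating the
# variables in dy/dx for system (72), stays constant along a trajectory.
a, b, c, d = 1.2, 0.6, 0.8, 0.3   # parameter values used later in the text

def f(x, y):
    return a*x - b*x*y, d*x*y - c*y

def C(x, y):
    return x**c * y**a * math.exp(-d*x - b*y)

x, y, h = 3.0, 3.0, 0.001          # initial values used later in the text
C0 = C(x, y)
for _ in range(20000):             # classical RK4 up to T = 20
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
print(f"relative drift of C after T=20: {abs(C(x, y) - C0)/C0:.2e}")
```

The drift is at the level of the integrator's error, consistent with $C$ being a constant of motion.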
To apply the method proposed in the present article to this system, let us write it in matrix form as
\begin{equation}\label{74}
\dot X =A(x,y)\,X\,,
\end{equation}
where,
\begin{equation}\label{75}
X(t)\equiv \left(\begin{array}{c} x(t)\\[2ex] y(t) \end{array} \right)\,,\qquad A\equiv \left(\begin{array}{cc} a-by & 0 \\[2ex] 0 & dx-c \end{array} \right)\,.
\end{equation}
In order to estimate the precision of the method, we need to choose the initial values as well as the parameters $a$, $b$, $c$ and $d$. We have used several choices and obtained similar results with all of them. To show a table comparing the precision of our method with some others, let us choose the parameter values $a=1.2$, $b=0.6$, $c=0.8$ and $d=0.3$, and the initial values $x_0=y_0=3$. In addition, we have to fix an integration time; we took $T=20$, which accounts for approximately three periods.
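For concreteness, here is a minimal sketch (our reading of the scheme, not the authors' code) of the one-step matrix method applied to \eqref{74}: each step advances $X(t+h)=e^{hA(X(t))}X(t)$, and since the matrix $A$ in \eqref{75} is diagonal, its exponential reduces to two scalar exponentials:

```python
import math

# Minimal sketch of the one-step matrix method as we read it (not the
# authors' code): advance X by X(t+h) = exp(h*A(X(t))) . X(t).  For the
# Lotka-Volterra system, A(x, y) = diag(a - b*y, d*x - c) as in eq. (75),
# so the matrix exponential is computed componentwise.
a, b, c, d = 1.2, 0.6, 0.8, 0.3   # parameter values chosen in the text
h = 0.01                           # step size

def step(x, y):
    return x * math.exp(h * (a - b*y)), y * math.exp(h * (d*x - c))

x, y = 3.0, 3.0                    # initial values from the text
for _ in range(2000):              # 2000 steps of size h, i.e. T = 20
    x, y = step(x, y)
print(f"matrix-method approximation at T = 20: x = {x:.4f}, y = {y:.4f}")
```

Because $A$ is diagonal here, no general matrix exponential routine is needed; for a non-diagonal $A$ one would evaluate $e^{hA}$ itself (e.g.\ by a truncated series), as in the general construction of the method.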
The error produced by our matrix method is determined by formula \eqref{65} above. We compare this error with those of the second and third order Taylor methods, taking as reference the numerical solution obtained by a fourth order Runge-Kutta. These errors are shown in Table 7.
\vskip1cm
\centerline{$
\begin{array}[c]{cccc}
h & \text{Matrix Method} & {\rm Taylor\, 2^{\rm nd}} & {\rm Taylor\, 3^{\rm rd}} \\[2ex]
0.01 & 4.1\, 10^{-8} & 3.5 \, 10^{-8} & 8.2\, 10^{-13} \\
0.1 & 3.5\, 10^{-8} & 5.4 \, 10^{-8} & 2.3 \, 10^{-9} \\
0.2 & 2.6\, 10^{-3} & 5.4 \, 10^{-3} & 1.0\,10^{-5}
\end{array}
$}
\medskip
TABLE 7.- Comparison between the errors of the Matrix Method and the Taylor methods in the case of the Lotka-Volterra equation.
\vskip1cm
We see that the precision of our method is equivalent to that of a second order Taylor method, with far fewer arithmetic operations.
}
\item{{\bf On the possibility of extending the method to PDE: The Burgers equation.}
Can we extend the preceding discussion to partial differential equations admitting separation of variables? One possible example would have been the convection diffusion equation in one spatial dimension \cite{HWX}:
\begin{equation}\label{76}
\frac{\partial}{\partial t} u(x,t)= \frac{\partial}{\partial x} \left[ D(u)\,\frac{\partial}{\partial x}\, u(x,t) \right] +Q(x,u)\, \frac{\partial}{\partial x}\,u(x,t) + P(x,u)\,.
\end{equation}
Separation of variables for \eqref{76} is discussed in \cite{HWX}. In general, our method is not applicable here since the resulting equations after separation of variables are not of the form \eqref{1}. Nevertheless, another point of view is possible. Assume we want to obtain approximate solutions of an equation of the type \eqref{76} on the interval $[0,X]$ under the conditions $u(0,t)=0$, $u(X,t)=0$, $u(x,0)=h(x)$, $h(x)$ being a given smooth function and $u(0,0)=u(X,0)=0$. On the interval $[0,X]$, we define a uniform partition of width $h:=X/n$ and nodes $x_k=kh$, $k=0,1,\dots,n$.
One may propose a discretization of the solution of the form $u_k(t):= u(x_k,t)$, $k=0,1,\dots,n$. Then, the second derivative in \eqref{76} can be approximated using finite differences \cite{KIN}:
\begin{equation}\label{77}
\frac{\partial^2}{\partial x^2}\,u(x_k,t) = \frac{u(x_{k-1},t)-2u(x_k,t) + u(x_{k+1},t)}{h^2}\,,
\end{equation}
while for the first spatial derivative, we have
\begin{equation}\label{78}
\frac{\partial}{\partial x}\,u(x_k,t) = \frac{u(x_{k+1},t) - u(x_{k-1},t)}{2h}\,,
\end{equation}
with $k=1,2,\dots,n-1$, so that equation \eqref{76} takes the form
\begin{equation}\label{79}
\frac{d}{dt}\,U(t) =F(U(t))\,,
\end{equation}
where $F$ is a vector-valued function of the $n-1$ components of $U(t)= (u_1(t),u_2(t),\dots,u_{n-1}(t))$, with $u_k(t):= u(x_k,t)$, $k=1,2,\dots,n-1$, and the initial value $U(0)=(u(x_1,0), u(x_2,0),\dots,u(x_{n-1},0))$ is determined by $u(x,0)=h(x)$. If it were possible to write equation \eqref{79} in the form:
\begin{equation}\label{80}
\frac{d}{dt}\,U(t) = A(U(t)) \cdot U(t)\,,
\end{equation}
then we would be able to apply our method to find an approximate solution of \eqref{80} on a given finite interval. This property is not fulfilled by the general convection diffusion equation \eqref{76}. However, it is satisfied by a particular choice of this type of parabolic equations: {\it the non-linear Burgers diffusion equation} \cite{BUR,BA}:
\begin{equation}\label{81}
\frac{\partial}{\partial t}\, u(x,t) = \frac{\partial^2}{\partial x^2}\, u(x,t) - u(x,t)\, \frac{\partial}{\partial x}\, u(x,t)\,.
\end{equation}
We choose $[0,1]$ as the integration interval for the coordinate variable, so that $X=1$ as above. For the time variable, we also use $t\in [0,1]$. This choice is made just for simplicity and as an example to implement our numerical calculations. We use the finite difference method, where the second and first derivatives are replaced as in \eqref{77} and \eqref{78}, respectively.
Using \eqref{77} and \eqref{78} in \eqref{81}, we have for each of the nodes the following recurrence relation:
\begin{eqnarray}\label{82}
\frac{d}{dt}\, u(x_k,t) = \frac{1}{h^2} \left( \left( 1+\frac h2\, u(x_k,t) \right) u(x_{k-1},t) -2 u(x_k,t) + \left( 1 - \frac h2 \, u(x_k,t) \right) u(x_{k+1},t) \right)\,,
\end{eqnarray}
for $k=1,2,\dots,n-1$. When expressions \eqref{82} are written in matrix form, we obtain a matrix equation of the form \eqref{6}, which is then suitable for applying our method.
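As a sketch of this procedure (our reconstruction, not the authors' code), the semi-discrete system \eqref{82} can be written as $\dot U=A(U)\,U$ with a tridiagonal matrix $A(U)$ and advanced by $U(t+h_t)=e^{h_tA(U(t))}U(t)$. The initial profile $\sin(\pi x)$ and the truncated-series evaluation of the matrix exponential are assumptions and implementation choices made here; the text does not fix them:

```python
import math

# Sketch (our reconstruction, not the authors' code): the semi-discrete
# Burgers system (82) written as dU/dt = A(U).U and advanced with the
# one-step matrix method U(t+h_t) = exp(h_t*A(U)).U.  The initial profile
# sin(pi*x) is an assumption made here, as is the truncated Taylor series
# used to apply the matrix exponential.
n = 5                       # spatial subintervals as in the text, hx = 1/n
hx = 1.0 / n
m = 100                     # time steps on [0, 1], h_t = 1/m
ht = 1.0 / m

def A(U):
    # tridiagonal matrix of eq. (82) acting on the interior values
    k = len(U)
    M = [[0.0] * k for _ in range(k)]
    for i in range(k):
        M[i][i] = -2.0 / hx**2
        if i > 0:
            M[i][i-1] = (1.0 + hx/2 * U[i]) / hx**2
        if i < k - 1:
            M[i][i+1] = (1.0 - hx/2 * U[i]) / hx**2
    return M

def expm_times(M, v, terms=20):
    # exp(M).v via a truncated Taylor series (adequate for this small step)
    out, term = list(v), list(v)
    for j in range(1, terms):
        term = [sum(M[r][s] * term[s] for s in range(len(v))) / j
                for r in range(len(v))]
        out = [o + t for o, t in zip(out, term)]
    return out

U = [math.sin(math.pi * (k + 1) * hx) for k in range(n - 1)]
for _ in range(m):
    hA = [[ht * a for a in row] for row in A(U)]
    U = expm_times(hA, U)
print("u at t = 1:", [f"{u:.2e}" for u in U])
```

Consistently with the dissipative character of the equation, the computed interior values at $t=1$ are nearly zero.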
In our numerical realization, we have used a small number of nodes to begin with, say $n=5$. Then, we integrate in the time variable on the interval $t\in [0,1]$, using $h_t:=1/m$, where $m$ is a given integer, as the distance between time nodes. Finally, we compare our solution with the solution of \eqref{81} given by the command NDSolve of the Mathematica software, which we denote by $v(x,t)$.
The error of our solution with respect to $v(x,t)$ is then estimated by
\begin{equation}\label{83}
{\rm error}:= \frac 1m \sum_{j=0}^m \sum_{k=1}^{n-1} (u(x_k,t_j)-v(x_k,t_j))^2\,,
\end{equation}
where $t_j:=jh_t$ for all $j=0,1,2,\dots,m$. To estimate the error, we have to give values to $m$. For $m=10$, the error is $1.07\, 10^{-4}$. A similar result is obtained with $m=20$ or even higher, so that $m=10$ already gives a reasonable approximation. The solution for $t=1$ is nearly zero, as one may have expected taking into account that equation \eqref{81} describes a dissipative model. Thus, our approximate solution may be considered satisfactory also in this case.
A few more words on the comparison between our solution and the solution obtained with the Euler method through NDSolve. First of all, we use the explicit Euler method, for which the local error is $o(h^2_t)$, through the option ``Method $\to$ ``ExplicitEuler'', ``StartingStepSize'' $\to$ $1/100$'', since for $h_t>0.01$ the result is unstable. Once the spatial discretization has been done, we integrate \eqref{82} with respect to time. Such an instability often appears when one uses an explicit method for the spatial and time discretization of a parabolic PDE \cite{CLW}. It originates in the errors due to the arithmetic calculations, which are amplified by the time integration. Then, let us consider $n=5$ spatial nodes, $0\le t \le 1$ and $m=100$, so that the time step becomes $h_t=0.01$. The error \eqref{83} is then $3.40\, 10^{-4}$.
We may improve the time integration using Euler mid-point integration. In this case, one uses the option ``Method $\to$ ``ExplicitMidpoint'', ``StartingStepSize'' $\to$ 1/10''. Choosing $m=10$, so that $h_t=0.1$, we obtain an error of $1.76\, 10^{-4}$. This error is of the order of that given by our method. Recall that the local error of the mid-point Euler method is of the order of $o(h_t^3)$.
In addition, we may compare the precision of our method and of both Euler methods mentioned above with the errors obtained using the third and fourth order Adams methods (see Chapter III in \cite{HNW}), which for $h_t=0.1$ are respectively given by $1.76\, 10^{-4}$ and $1.74\,10^{-4}$. These errors have always been obtained using \eqref{83}, where $u(x,t)$ is our solution and $v(x,t)$ is the solution given by either the Euler, mid-point Euler or Adams methods. Nevertheless, the Adams methods are multi-step methods while ours is a one step method, which makes ours more easily programmable.
}
}
\end{itemize}
\section{Concluding remarks}
In the present paper, we have generalized a one step integration method developed in the 1970s by Demidovich and some other authors \cite{DEM,KOT,HSU}. While Demidovich and others restricted themselves to the search for approximate solutions of linear systems of first order differential equations, albeit with variable coefficients, we propose a way to extend the ideas of the mentioned authors to non-linear systems. The solutions we have obtained have a degree of precision similar to that of the solutions proposed in \cite{DEM,KOT,HSU}. In our method, we obtain a sequence of approximate solutions of ordinary differential equations on a finite interval, and we have proven that this sequence converges uniformly to the exact solution. In order to obtain each approximate solution, we have divided the integration interval into subintervals of length $h$. The sequence of approximate solutions can be obtained after successive refinements of $h$. For each approximate solution, characterized by a value of $h$, we have determined the local error up to $o(h^3)$.
It is certainly true that our one step matrix method does not improve the precision obtained with the fourth order Runge-Kutta method or other equivalent methods. However, the great advantage of the proposed method over any other is that its construction algorithm, through the matrix exponential as described in Section 2, is much simpler than that of its competitors. This simplicity is inherited from its predecessor, the Demidovich method \cite{DEM,KOT,HSU}. This paper is, in a sense, a continuation of previous research by the authors in the same field \cite{GL1,GLP}.
We have added some examples of the application of the method, in which we have performed numerous numerical tests of its precision, which lies between those of the second and third order Taylor methods. We have used the software Mathematica to implement these numerical tests.
\section*{Acknowledgements}
We acknowledge partial financial support from the Spanish MINECO, grant MTM2014-57129-C2-1-P, and the Junta de Castilla y Le\'on, grants VA137G18, BU229P18 and VA057U16. We are grateful to Prof.\ L.M. Nieto (Valladolid) for some useful suggestions.
\section{Appendix A: A conjecture relative to the Li\'enard equation}
As is well known, the Li\'enard equation has the following form:
\begin{equation}\label{84}
y''(t)+f(y)\,y'(t) +g(y)=0\,.
\end{equation}
Let us assume that the function $g(y)$ is the product of some function, which for simplicity we also call $g(y)$, and $y$, so that equation \eqref{84} takes the form:
\begin{equation}\label{85}
y''(t)+f(y)\,y'(t) +g(y)\,y(t)=0\,.
\end{equation}
This second order equation may easily be transformed into a two dimensional first order system (setting $z(t)=y'(t)$):
\begin{equation}\label{86}
\left( \begin{array}{c} y' \\[2ex] z' \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\[2ex] -g(y) & -f(y) \end{array} \right) \left( \begin{array}{c} y \\[2ex] z \end{array} \right) = A \left( \begin{array}{c} y \\[2ex] z \end{array} \right)\,,
\end{equation}
where the meaning of the matrix $A$ is obvious. Its eigenvalues are
\begin{equation}\label{87}
\lambda_\pm(y) = -\frac12 \left(f(y) \pm \sqrt{f^2(y)-4g(y)}\,\right)\,.
\end{equation}
The conjecture is the following: a sufficient condition for the solutions of \eqref{85} to be bounded for $t>0$ is that the following three properties hold simultaneously: i.) the discriminant in \eqref{87} is negative, i.e., $f^2(y)-4g(y)<0$; ii.) the function $f(y)$ is non-negative, $f(y)\ge 0$; and iii.) the function $g(y)$ is positive and smaller than one, $0< g(y)<1$.
This conjecture is based on the form of equations \eqref{11} and \eqref{12}, and we have tested it in several numerical experiments.
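One such numerical experiment can be sketched as follows (our code; the particular functions $f$ and $g$ below are choices made here that satisfy the three hypotheses, not examples taken from the text):

```python
import math

# Numerical probe of the conjecture (our experiment, not the authors'):
# integrate y'' + f(y) y' + g(y) y = 0 with f >= 0, 0 < g < 1 and
# f(y)^2 - 4 g(y) < 0 for all y, and watch whether |(y, z)| stays bounded.
def f(y): return 0.5 * y*y / (1.0 + y*y)   # 0 <= f <= 0.5
def g(y): return 0.9                        # 0 < g < 1, so f^2 - 4g < 0

y, z, h = 2.0, 0.0, 0.001
r_max = 0.0
for _ in range(100_000):                    # integrate up to t = 100
    dy, dz = z, -g(y)*y - f(y)*z
    y, z = y + h*dy, z + h*dz               # explicit Euler, small step
    r_max = max(r_max, math.hypot(y, z))
print(f"largest |(y,z)|: {r_max:.3f}, final |(y,z)|: {math.hypot(y, z):.3f}")
```

In runs of this kind the orbit remains bounded and slowly spirals toward the origin, in agreement with the conjecture and with the attractor remark below.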
Two more comments in relation to the Li\'enard equation:
1.- The vector field associated with the equation is $J\equiv(z,-g(y)y-f(y)z)$, whose divergence is ${\rm div}\, (J)=-f(y) \le 0$ if $f(y)\ge 0$. Therefore, if $f(y)$ is non-negative the divergence is never positive, so that the origin $(0,0)$ is an attractor. We conjecture that this attractor is also global.
2.- If we take $f(y)\equiv 0$, then \eqref{86} represents a Hamiltonian flow with Hamiltonian given by
\begin{equation}\label{88}
H(y,z)=\frac 12 \,z^2+V(y)\,,
\end{equation}
with
\begin{equation}\label{89}
V(y)=\int_0^y ug(u)\,du\,.
\end{equation}
Under the assumption $g(y)>0$, the derivative $V'(y)=yg(y)$ is positive for $y>0$ and negative for $y<0$. Thus the only critical point is $y^*=0$; at this point $V'(0)=0$ and $V''(0)=g(0)>0$, so that all the orbits are closed, hence periodic.
\section{Appendix B: Source code to use our method in the van der Pol equation}
\begin{figure}
\centering
\includegraphics[width=1.4\textwidth]{S_C.jpeg}
\caption{\small Source code used when applying the method to our example with the van der Pol equation.
\label{Figure5}}
\end{figure}
\vfill\eject
| {
"timestamp": "2021-03-12T02:07:33",
"yymm": "2010",
"arxiv_id": "2010.01539",
"language": "en",
"url": "https://arxiv.org/abs/2010.01539",
"abstract": "We develop a one step matrix method in order to obtain approximate solutions of first order systems and non-linear ordinary differential equations, reducible to first order systems. We find a sequence of such solutions that converge to the exact solution. We study the precision, in terms of the local error, of the method by applying it to different well known examples. The advantage of the method over others widely used lies on the simplicity of its implementation.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A discussion on the approximate solutions of first order systems of non-linear ordinary equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9857180677531123,
"lm_q2_score": 0.8311430436757312,
"lm_q1q2_score": 0.8192727150384824
} |
https://arxiv.org/abs/1701.07963 | Negative (and Positive) Circles in Signed Graphs: A Problem Collection | A signed graph is a graph whose edges are labelled positive or negative. The sign of a circle (cycle, circuit) is the product of the signs of its edges. Most of the essential properties of a signed graph depend on the signs of its circles. Here I describe several questions regarding negative circles and their cousins the positive circles. Topics include incidence between signed circles and edges or vertices, characterizing signed graphs with special circle properties, counting negative circles, signed-circle packing and covering, signed circles and eigenvalues, and directed cycles in signed digraphs. A few of the questions come with answers. | \section*{Introduction}\label{intro}
A signed graph is a graph with a \emph{signature} that assigns to each edge a positive or negative sign. To me the most important thing about a signed graph is the signs of its circles,\footnote{A circle is a connected, 2-regular graph. The common name ``cycle'' has too many other meanings.}
which are calculated by multiplying the signs of the edges in the circle. Thus a signature is essentially its list of negative circles, or (of course) its list of positive circles. I will describe some of the uses of and questions about circles of different signs in a signed graph. Both theorems and algorithms will be significant.
The topic of this report is broad. Of necessity, I will be very selective and arbitrarily so, omitting many fine contributions. (Let no one take offense!)
I chose this topic in part because it has many fine open problems, but especially in honor of our dear friend Dr.~B.~Devadas Acharya---``our'' because he was the dear friend of so many. Among Dr.~Acharya's wide combinatorial interests, I believe signed graphs were close to his heart, one of his---and his collaborator and wife's, Prof.~Mukti Acharya's---first and lasting areas of research. Circles (or ``cycles'') in signed graphs exemplify well Dr.~B.~D.~Acharya's approach to mathematics, that new ideas and new problems are its lifeblood. He himself was an enthusiastic and inspiring font of new ideas. I hope some of his spirit will be found in this survey.
\section{Groundwork}\label{ground}
\subsection{Signed graphs}\
A \emph{signed graph} $\S = (\G,\s) = (V,E,\s)$ is defined as an \emph{underlying graph} $\G=(V,E)$, also written $|\S|$, and a signature $\s: E \to \{+,-\}$ (or $\{+1,-1\}$), the sign group. The sets of positive and negative edges are $E^+(\S)$ and $E^-(\S)$.
In the literature $\G$ may be assumed to be simple, or it may not (this is graph theory); I do not assume simplicity. Each circle and indeed each walk $W = e_1e_2\cdots e_l$ has a sign $\s(W) := \s(e_1)\s(e_2)\cdots\s(e_l)$. $\S$ is called \emph{balanced} if every circle is positive.
Two important signatures are the all-positive one, denoted by $+\G=(\G,+)$, and the all-negative one, $-\G=(\G,-)$, where every edge has the same sign. In most ways an unsigned graph behaves like $+\G$, while $-\G$ acts rather like a generalization of a bipartite graph. In particular, in $+\G$ every circle is positive. In $-\G$ the even circles are positive while the odd ones are negative, so $-\G$ is balanced if and only if $\G$ is bipartite.
Signed graphs and balance were introduced by Frank Harary\footnote{Signed graphs, like graphs, have been rediscovered many times; but Harary was certainly the first. K\"onig \cite[Chapter X]{Konig} had an equivalent idea but he missed the idea of labelling edges by the sign group, which leads to major generalizations; cf.\ \cite[Section 5]{BG1}.} in \cite{NB} with this fundamental theorem:
\begin{thm}[Harary's Balance Theorem]\label{T:balance}
A signed graph $\S$ is balanced if and only if there is a bipartition of its vertex set, $V = X \cup Y$, such that every positive edge is induced by $X$ or $Y$ while every negative edge has one endpoint in $X$ and one in $Y$.
Also, if and only if for any two vertices $v,w$, every path between them has the same sign.
\end{thm}
A \emph{bipartition} of a set $V$ is any pair $\{X,Y\}$ of complementary subsets, including the possibility that one subset is empty (in which case the bipartition is not, technically, a partition). I call a bipartition of $V$ as in the Balance Theorem a \emph{Harary bipartition} of $V$. The Harary bipartition is unique if and only if $\S$ is connected; if $\S$ is also all positive (all edges are positive), then $X$ or $Y$ is empty.
Harary later defined $\S$ to be \emph{antibalanced} if every even circle is positive and every odd circle is negative; equivalently, $-\S$ is balanced \cite{SDual}. (The negative of $\S$, $-\S$, has signature $-\s$.)
A basic question about a signed graph is whether it is balanced; in terms of our theme, whether there exists a negative circle. If $\S$ is unbalanced, any negative circle provides a simple verification (a \emph{certificate}) that it is unbalanced, since computing the sign of a circle is easy. The Balance Theorem tells us how to provide a certificate that $\S$ is balanced, if in fact it is; namely, one presents the bipartition $\{X,Y\}$, since any mathematical person can easily verify that a given bipartition is, or is not, a Harary bipartition. What is hard about deciding whether $\S$ is balanced is to find a negative circle out of the (usually) exponential number of circles, or a Harary bipartition out of all $2^{n-1}$ possible bipartitions. Fortunately, there is a powerful technique that enables us to quickly find a certificate for imbalance.
\emph{Switching} $\S$ consists in choosing a function $\zeta: V \to \{+,-\}$ and changing the signature $\s$ to $\s^\zeta$ defined by $\s^\zeta(e_{vw}) := \zeta(v)\s(e_{vw})\zeta(w)$. The resulting switched signed graph is $\S^\zeta := (|\S|,\s^\zeta)$. It is clear that switching does not change the signs of circles. Let us denote by $\cC(\S)$ the set of all circles of a signed graph (and similarly for an unsigned graph) and by $\cC^+(\S)$ or $\cC^-(\S)$ the set of all positive or, respectively, negative circles. Thus, $\cC^+(\S^\zeta) = \cC^+(\S)$. There is a converse due to Zaslavsky \cite{CSG} and, essentially, Soza\'nski \cite{Soz}.
\begin{thm}\label{T:switching}
Let $\S$ and $\S'$ be two signed graphs with the same underlying graph $\G$. Then $\cC^+(\S) = \cC^+(\S')$ if and only if $\S'$ is obtained by switching $\S$. In particular, $\S$ is balanced if and only if it switches to the all-positive signed graph $+\G$.
\end{thm}
\subsubsection*{Algorithmics of balance}\
How do we use this to determine balance or imbalance of $\S$? Assume $\S$ is connected, since we can treat each component separately. Find a spanning tree $T$ and choose a vertex $r$ to be its root. For each vertex $v$ there is a unique path $T_{rv}$ in $T$ from $r$ to $v$. Calculate $\zeta(v) = \s(T_{rv})$ (so, for instance, $\zeta(r)=+$) and switch $\S$ by $\zeta$. In $\S^\zeta$ every tree edge is positive. Every non-tree edge $e$ belongs to a unique circle $C_e$ in $T \cup e$ and $\s(C_e) = \s^\zeta(C_e) = \s^\zeta(e)$. If there is an edge $e$ that is negative in $\S^\zeta$, then there is a circle $C_e$ that is negative in $\S$ and $\S$ is unbalanced. If there is no such edge, then $\{X,Y\}$ with $X = \zeta\inv(+) \subseteq V$ and $Y = \zeta\inv(-)$ is a Harary bipartition of $\S$, confirming that $\S$ is balanced.
Since $T$ can be found quickly by standard algorithms and it is obviously fast to find $\zeta$, this gives us a quick way of determining whether $\S$ is balanced or not. This simple algorithm was first published (in different terminology) independently by Hansen \cite{Hansen} and then by Harary and Kabell \cite{HK}.
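The algorithm just described can be sketched in a few lines (my own code, for illustration; the breadth-first construction of the spanning tree and the edge representation are implementation choices):

```python
# Sketch of the Hansen / Harary--Kabell balance test described above:
# grow a spanning tree by BFS, switch so all tree edges become positive,
# then scan the non-tree edges.  Edges are (u, v, sign), sign in {+1, -1}.
from collections import deque

def balance_certificate(n, edges):
    """Return ('balanced', (X, Y)) or ('unbalanced', witness_edge)."""
    adj = [[] for _ in range(n)]
    for u, v, s in edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    zeta = [0] * n              # zeta[v] = sign of the tree path root -> v
    zeta[0] = 1                 # vertex 0 is the root r
    queue = deque([0])
    while queue:                # assumes the graph is connected
        u = queue.popleft()
        for v, s in adj[u]:
            if zeta[v] == 0:
                zeta[v] = zeta[u] * s
                queue.append(v)
    for u, v, s in edges:       # switched sign of e_uv is zeta(u)*s*zeta(v)
        if zeta[u] * s * zeta[v] < 0:
            return ('unbalanced', (u, v, s))   # e closes a negative circle
    X = [v for v in range(n) if zeta[v] > 0]
    Y = [v for v in range(n) if zeta[v] < 0]
    return ('balanced', (X, Y))

# a triangle with one negative edge is unbalanced; with two it is balanced
print(balance_certificate(3, [(0, 1, 1), (1, 2, 1), (2, 0, -1)]))
print(balance_certificate(3, [(0, 1, -1), (1, 2, 1), (2, 0, -1)]))
```

In the unbalanced case the returned edge closes a negative circle with the tree path between its endpoints; in the balanced case the returned pair $(X,Y)$ is a Harary bipartition.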
\subsubsection*{About circles}\
A \emph{chordless} or \emph{induced circle} is a circle $C$ that is an induced subgraph. Any extra induced edge besides $C$ itself is considered a \emph{chord} of $C$.
An unsigned graph has girth, $g(\G) = \min_C |C|$, minimized over all circles $C$. It also has (though less frequently mentioned) even girth and odd girth, where $C$ varies over circles of even or odd length. These quantities are naturally signed-graphic. A signed graph has, besides its girth $g(\S)=g(\G)$, also \emph{positive girth} and \emph{negative girth}, $g_+(\S)$ and $g_-(\S)$, which are the minimum lengths of positive and negative circles; they reduce to even and odd girth when applied to $\S=-\G$. Girth is not explicit in any of my questions but signed girth may be worth keeping in mind.
\subsubsection*{Contraction}\
Contracting an edge $e=vw$ with two distinct endpoints (a ``link'') in an ordinary graph means shrinking it to a point, i.e., identifying $v$ and $w$ to a single vertex and then deleting the edge $e$. In a signed graph $\S$, first $\S$ must be switched so that $e$ is positive. Then contraction is the same as it is without signs; the remaining edges retain the sign they have after switching.
\subsubsection*{Balancing edges and vertices}\
A \emph{balancing vertex} is a vertex $v$ of an unbalanced signed graph $\S$ that lies in every negative circle; that is, $\S \setm v$ is balanced.
A \emph{balancing edge} is an edge $e$ in an unbalanced signed graph such that $\S \setm e$ is balanced; that is, $e$ is in every negative circle.
An endpoint of a balancing edge is a balancing vertex but there can (easily) be a balancing vertex without there being a balancing edge.
A constructive characterization of balancing vertices is the next proposition. \emph{Contracting} a negative edge $vw$ that is not a loop means switching $w$ (so $vw$ becomes positive) and then identifying $v$ with $w$ and deleting the edge.
\begin{prop}\label{P:balvert}
Let $\S$ be a signed graph and $v$ a vertex in it. The following statements about $v$ are equivalent.
\begin{enumerate}[{\rm (1)}]
\item $v$ is a balancing vertex.
\label{P:balvert:bv}
\item $\S$ is obtained, up to switching, by adding a negative nonloop edge $vw$ to a signed graph with only positive edges and then contracting $vw$ to a vertex, which is the balancing vertex $v$.
\label{P:balvert:contract}
\item $\S$ can be switched so that all edges are positive except those incident with $v$, and at $v$ there is at least one edge of each sign.
\label{P:balvert:vsigns}
\end{enumerate}
\end{prop}
\begin{proof}
The equivalence of \eqref{P:balvert:bv} with \eqref{P:balvert:contract} is from \cite{VLI}.
The result of contraction in \eqref{P:balvert:contract} is precisely the description in \eqref{P:balvert:vsigns}.
\end{proof}
\subsubsection*{Blocks and necklaces}\
A \emph{cutpoint} is a vertex $v$ that has a pair of incident edges such that every path containing those edges passes through $v$. For instance, a vertex that supports a loop is a cutpoint unless the vertex is only incident with that loop and no other edge. A graph is called \emph{inseparable} if it is connected and has no cutpoints. A maximal inseparable subgraph of $\G$ is called a \emph{block of $\G$}; a graph that is inseparable is also called a \emph{block}. A \emph{block of $\S$} means just a block of $|\S|$. Blocks are important to signed graphs because every circle lies entirely within a block.
An \emph{unbalanced necklace of balanced blocks} is an unbalanced signed graph constructed from balanced signed blocks $B_{1}, B_{2}, \ldots, B_{k}$ ($k\geq2$) and distinct vertices $v_i, w_i \in B_i$ by identifying $v_i$ with $w_{i-1}$ for $i=2,\ldots,k$ and $v_1$ with $w_k$. To make the necklace unbalanced, before the last step (identifying $v_1$ and $w_k$) make sure by switching that a path between them in $B_{1}\cup B_{2}\cup \cdots\cup B_{k}$ has negative sign. (All such paths have the same sign by the second half of Theorem \ref{T:balance}, because the union is balanced before the last identification.)
An unbalanced necklace of balanced blocks is an unbalanced block in which each $v_i$ is a balancing vertex and there are no other balancing vertices.
If a $B_i$ has only a single edge, that edge is a balancing edge. In fact, any signed block $\S$ with a nonloop balancing edge $e$ is an unbalanced necklace of balanced blocks: the balancing edge is one of the $B_i$'s, and the others are the blocks of $\S \setm e$.
Unbalanced necklaces of balanced blocks are important in signed graphs; for instance, they require special treatment in matroid structure \cite{BG2}.
If we allow $k=1$ in the definition of a necklace we can say that any signed block with a balancing vertex is an unbalanced necklace of balanced blocks.
\subsection{Parity}\label{parity}\
There is a close connection between negative and positive circles in signed graphs on the one hand, and on the other hand odd and even circles in unsigned graphs---that is, parity of unsigned circles.
First, parity is what one sees when all edges are negative, or (with switching) when the signature is antibalanced. There is considerable literature on parity problems that can be studied for possible generalization to signed graphs; I mention some of it in the following sections. The point of view here is that parity problems about circles are a special case of problems about signed circles.
Some existing work on odd or even circles will generalize easily to negative or positive circles. For example, the computational difficulty of a signed-graph problem cannot be less than that of the specialization to antibalanced signatures---that is, the corresponding parity problem---and this may imply that the two problems have the same level of difficulty.
\subsubsection*{Negative subdivision}\label{negsub}\
Second, there is \emph{negative subdivision}, which means replacing a positive edge by a path of two negative edges. Negatively subdividing every positive edge converts positive circles to even ones and negative circles to odd ones. Many problems on signed circles have the same answer after negative subdivision. The point of view here is that those signed-circle problems are a special case of parity-circle problems.
Negative subdivision most obviously fails when connectivity is involved since the subdivided graph cannot be 3-connected. Another disadvantage is that contraction of edges makes sense only in signed graphs; a solution that involves contraction should be done in the signed framework.
Denote by $\S^\sim$ the all-negative graph that results from negatively subdividing every positive edge. Let $\tilde e$ be the path of length 1 (if $\s(e)=-$) or 2 (if $\s(e)=+$) in $\S^\sim$ that corresponds to the edge $e \in E(\S)$, and for a negative $e$ let $v_e$ be the middle vertex of $\tilde e$; thus, $V(\S^\sim)=V(\S) \cup \{v_e : e \in E^+(\S)\}$.
The essence of negative subdivision is the canonical sign-preserving bijection between the circles of $\S$ and those of $\S^\sim$, induced by mapping $e \in E(\S)$ to $\tilde e$ in $\S^\sim$. (There is such a bijection for every choice of positive edges to subdivide, even if that is not all positive edges.)
\begin{prop}\label{T:negsub-bal}
A signed graph $\S$ is balanced if and only if $|\S^\sim|$ is bipartite.
\end{prop}
\begin{proof}
It follows from the sign-preserving circle bijection that $\S$ is balanced if and only if $\S^\sim$ is balanced. Since $\S^\sim$ is all negative, it is balanced if and only if its underlying graph is bipartite.
\end{proof}
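As a small illustration (my code, with the edge representation an implementation choice), the proposition can be checked mechanically: negatively subdivide the positive edges and test the underlying graph for bipartiteness:

```python
# Sketch: the proposition above in action.  Subdivide each positive edge
# through a new midpoint vertex v_e, then 2-color the underlying graph.
from collections import deque

def subdivide_positive(n, edges):
    out = []
    for u, v, s in edges:       # edges are (u, v, sign), sign in {+1, -1}
        if s < 0:
            out.append((u, v))
        else:
            out.append((u, n)); out.append((n, v)); n += 1  # midpoint v_e
    return n, out

def bipartite(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    color = [None] * n
    for start in range(n):      # handle every component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]; queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# triangle with one negative edge (unbalanced) vs. with two (balanced)
for edges in ([(0, 1, 1), (1, 2, 1), (2, 0, -1)],
              [(0, 1, -1), (1, 2, 1), (2, 0, -1)]):
    print(bipartite(*subdivide_positive(3, edges)))
```

The unbalanced triangle subdivides into an odd circle (not bipartite), while the balanced one subdivides into an even circle, exactly as the sign-preserving circle bijection predicts.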
\subsection{Weirdness}\label{terms}\
\subsubsection*{Groups or no group}\
Any two-element group will do instead of the sign group. Some people prefer to use the additive group $\bbZ_2$ of integers modulo $2$, which is the additive group of the two-element field $\bbF_2$. This is useful when the context favors a vector space over $\bbF_2$.
Another variant notation is to define a signed graph as a pair $(\G,\S)$ where $\S \subseteq E(\G)$; the understanding is that the edges in $\S$ are negative and the others are positive. I do not use this notation.
\subsubsection*{Terminologies}\
Switching has been called ``re-signing'' and other names.
Stranger terminology exists. Several otherwise excellent works redefine the words ``even'', ``odd'', and ``bipartite'' to mean positive, negative, and balanced, all of which empty those words of their standard meanings and invite confusion. I say, ``That way madness lies'' \cite{Lear}.
| {
"timestamp": "2018-01-17T02:01:47",
"yymm": "1701",
"arxiv_id": "1701.07963",
"language": "en",
"url": "https://arxiv.org/abs/1701.07963",
"abstract": "A signed graph is a graph whose edges are labelled positive or negative. The sign of a circle (cycle, circuit) is the product of the signs of its edges. Most of the essential properties of a signed graph depend on the signs of its circles. Here I describe several questions regarding negative circles and their cousins the positive circles. Topics include incidence between signed circles and edges or vertices, characterizing signed graphs with special circle properties, counting negative circles, signed-circle packing and covering, signed circles and eigenvalues, and directed cycles in signed digraphs. A few of the questions come with answers.",
"subjects": "Combinatorics (math.CO)",
"title": "Negative (and Positive) Circles in Signed Graphs: A Problem Collection",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850872288504,
"lm_q2_score": 0.8333246015211009,
"lm_q1q2_score": 0.8192289885763184
} |
https://arxiv.org/abs/1612.07904 | On Garland's vanishing theorem for $\mathrm{SL}_n$ | This is an expository paper on Garland's vanishing theorem specialized to the case when the linear algebraic group is $\mathrm{SL}_n$. Garland's theorem can be stated as a vanishing of the cohomology groups of certain finite simplicial complexes. The method of the proof is quite interesting on its own. It relates the vanishing of cohomology to the assertion that the minimal positive eigenvalue of a certain combinatorial laplacian is sufficiently large. Since the 1970's, this idea has found applications in a variety of problems in representation theory, group theory, and combinatorics, so the paper might be of interest to a wide audience. The paper is intended for non-specialists and graduate students. | \section{Introduction}
\subsection{Statement of the theorem}
This is an expository paper on Howard Garland's work \cite{Garland} specialized to the case when the
linear algebraic group is $\mathrm{SL}_n$. Reading \cite{Garland} requires
knowledge of the theory of buildings. On the other hand, the
ideas in \cite{Garland}, in their essence, are combinatorial.
Since the relevant Bruhat-Tits building in the $\mathrm{SL}_n$ case has a simple
description in terms of lattices in a finite dimensional vector space,
one can give a proof of Garland's vanishing
theorem which requires from the reader only familiarity with linear
algebra and some group theory.
There is already an excellent Bourbaki Expos\'e by Borel
\cite{Borel} on Garland's work. The main difference of our
exposition, besides the more detailed proofs, is the absence of any
references to the theory of buildings, which makes the article
completely self-contained. Since Garland's result applies to quite
general discrete subgroups of $p$-adic groups, to give a full
account of his work one cannot avoid a discourse on the
theory of buildings and representation theory. Other expositions
of Garland's method and its generalizations can be found in \cite{BS} and \cite{ABM};
these papers also give nice applications of Garland's method to group theory and combinatorics,
although they do not give a complete proof of Theorem \ref{thm1.1}.
Let $K$ be a non-archimedean local field with residue field of order $q$;
such a field is either isomorphic to a finite extension of the $p$-adic numbers $\mathbb{Q}_p$, or
to the field of formal Laurent series $\mathbb{F}_q(\!(T)\!)$ over the finite field $\mathbb{F}_q$ with $q$ elements.
Let $\Gamma$ be a discrete subgroup of the topological group $\mathrm{SL}_n(K)$, $n\geq 2$.
There is an infinite contractible $(n-1)$-dimensional simplicial complex $\mathfrak B$, the \textit{Bruhat-Tits
building of $\mathrm{SL}_{n}(K)$}, on which $\Gamma$ naturally acts.
The complex $\mathfrak B$ can be described in terms of lattices in the vector space $K^n$.
One way to formulate the main result of \cite{Garland} for $\mathrm{SL}_n$ is as follows:
\begin{thm}\label{thm1.1} Assume the quotient $\mathfrak B/\Gamma$ is a finite complex.
There is a constant $q(n)$ depending only on $n$ such that if $q\geq q(n)$
then for all $0<i<n-1$ the simplicial cohomology groups $H^i(\mathfrak B/\Gamma, \mathbb{R})$ are zero.
\end{thm}
Such vanishing theorems were originally conjectured by Serre \cite{Serre}.
We will prove Theorem \ref{thm1.1} in Section \ref{sBTB} under a mild assumption on $\Gamma$.
In the same section we also explain how central division algebras
naturally give rise to such
groups.
We should mention that the restriction on $q$ being sufficiently
large in Theorem \ref{thm1.1} was removed by Casselman
\cite{Casselman}, who proved the vanishing of the middle cohomology
groups by a completely different method, using representation theory of $p$-adic groups.
Garland's vanishing theorem plays an important role in some problems
in representation theory;
for example, it puts strict restrictions on the continuous cohomology of
topological groups with coefficients in infinite dimensional representations (cf. \cite{Casselman} and $\S$\ref{ss4.3}).
An application of Garland's theorem in arithmetic geometry arises
in the calculation of the cohomology groups of certain algebraic varieties possessing
rigid-analytic uniformization. More precisely,
$\mathfrak B$ can be realized as the skeleton of Drinfeld's symmetric space $\Omega^n$, so the
cohomology groups of the algebraic variety whose rigid-analytic uniformization is $\Omega^n/\Gamma$

are related to the cohomology groups of $\mathfrak B/\Gamma$ (cf. \cite{SS}).
\subsection{Outline of the paper}
Now we give an outline of the proof of Theorem \ref{thm1.1} and the contents of this paper.
Let $X$
be a finite simplicial complex of dimension $n$. Let $w$ be a ``Riemannian metric'' on $X$, by which we mean,
following \cite{Garland2}, a function from the non-oriented simplices of $X$ to the positive real numbers.
Let $C^i(X)$ denote the $\mathbb{R}$-vector space of $i$-cochains of $X$ with values in $\mathbb{R}$.
Define an inner product on $C^i(X)$:
$$
(f, g)=\sum_\sigma w(\sigma) f(\hat{\sigma}) g(\hat{\sigma}),
$$
where the sum is over all non-oriented $i$-simplices of $X$ and $\hat{\sigma}$ is
an oriented simplex corresponding to $\sigma$. Let $d: C^i(X)\to C^{i+1}(X)$
denote the coboundary operator, and $\delta: C^i(X)\to C^{i-1}(X)$ denote the
adjoint of $d$ with respect to $(\cdot ,\cdot )$. Let $\harm{i}(X)\subset C^i(X)$ be the subspace
of \textit{harmonic cocycles}; by definition, these are the
$i$-cochains annihilated by both $d$ and $\delta$. It is not hard to show that $H^i(X, \mathbb{R})\cong \harm{i}(X)$.
This isomorphism is a consequence of the ``Hodge decomposition'' for $C^i(X)$. In Section \ref{SecGM},
after recalling some standard terminology related to simplicial complexes, we prove this well-known fact.
Thus, to prove $H^i(X, \mathbb{R})=0$, it suffices to prove that there are no non-zero
harmonic cocycles. Motivated by the work of Matsushima \cite{Matsushima},
who reduced the study of harmonic forms on real locally symmetric spaces to the
computation of the minimal eigenvalues of certain curvature transformations, Garland reduces
the study of $\harm{i}(X)$ to estimating the minimal non-zero eigenvalue $m^i(X)$ of
the linear operator $\Delta=\delta d$ acting on $C^i(X)$. Section \ref{sFI} contains a key part of
Garland's argument: it gives the precise relationship
between the vanishing of $\harm{i}(X)$ and lower bounds on $m^i(X)$, and it also
gives a method for estimating $m^i(X)$ inductively.
In Section \ref{sBTB}, we describe the Bruhat-Tits building $\mathfrak B$ of $\mathrm{SL}_n(K)$ as a simplicial complex
and explain how Theorem \ref{thm1.1} follows from the results in Section \ref{sFI}, assuming
a certain lower bound on $m^i(\mathrm{Lk}(v))$ for vertices of $\mathfrak B$, where $\mathrm{Lk}(v)$ denotes the link of the vertex $v$.
We relegate the proof of this lower bound, which is the most technical part of the paper, to
Section \ref{sec3}. At the end of Section \ref{sBTB} we give a brief discussion of some of the
more recent applications of Garland's method to producing examples of groups having Kazhdan's property (T).
Based on numerical calculations, in $\S$\ref{ss4.1} we state a conjecture about the asymptotic behaviour of the eigenvalues of
$\Delta$ acting on $C^i(\mathrm{Lk}(v))$ for vertices of $\mathfrak B$, and in $\S$\ref{ss4.3} we give some evidence for this conjecture.
None of the results of this paper are original, except possibly those in $\S$\ref{ss4.3}.
\section{Simplicial cohomology and harmonic cocycles}\label{SecGM}
In this section we recall the basic definitions from the theory of simplicial cohomology
and prove a combinatorial analogue of the Hodge decomposition theorem. This last
theorem identifies simplicial cohomology groups with spaces of harmonic cocycles.
Its importance for the proof of Theorem \ref{thm1.1} is that, instead of
proving that $H^i(\mathfrak B/\Gamma, \mathbb{R})=0$ directly, we will actually show that there are no non-zero harmonic
$i$-cocycles on $\mathfrak B/\Gamma$.
\subsection{Basic concepts} An (abstract) \textit{simplicial complex} is a collection $X$ of finite nonempty
sets, called simplices, such that if $s$ is an element of $X$, so is every nonempty
subset of $s$. A nonempty subset of a simplex
$s$ is called a \textit{face} of $s$. A \textit{simplex of dimension $i$}, or simply an \textit{$i$-simplex}, is a simplex
with $i+1$ elements. The vertex set $\mathrm{Ver}(X)$ of $X$ is the union of its $0$-simplices.
A subcollection of $X$ that is itself a complex is called a \textit{subcomplex} of $X$.
The \textit{dimension} of $X$ is the largest dimension of one of its simplices, or is infinite if there is no such largest dimension.
Let $s$ be a simplex of $X$. The \textit{star} of $s$ in $X$,
denoted $\mathrm{St}(s)$, is the subcomplex of $X$ consisting of the union
of all simplices of $X$ having $s$ as a face. The \textit{link} of
$s$, denoted $\mathrm{Lk}(s)$, is the subcomplex of $\mathrm{St}(s)$ consisting of
the simplices which are disjoint from $s$. If one thinks of $\mathrm{St}(s)$
as the ``unit ball'' around $s$ in $X$, then $\mathrm{Lk}(s)$ is the ``unit
sphere'' around $s$.
Let $X$ and $Y$ be simplicial complexes. The \textit{join} of $X$
and $Y$ is the simplicial complex $X\ast Y$ such that $s\in X\ast Y$
if either $s\in X$ or $s\in Y$, or $s=x\ast y:=\{x_0,\dots, x_i,
y_0, \dots, y_j\}$, where $x=\{x_0,\dots, x_i\}\in X$ and
$y=\{y_0,\dots, y_j\}\in Y$. It is clear that $X\ast Y$ is a
simplicial complex and $\dim(X\ast Y)=\dim(X)+\dim(Y)+1$. Note that,
as a special case of this construction, $\mathrm{St}(s)=s\ast \mathrm{Lk}(s)$.
A specific ordering of the vertices of $s$, up to an even permutation,
is called an \textit{orientation} of $s$. Each positive dimensional simplex
has two orientations.
Denote the set of $i$-simplices by $\widehat{S}_i(X)$, and the set of
oriented $i$-simplices by $S_i(X)$. Note that $\widehat{S}_0(X)=S_0(X)=\mathrm{Ver}(X)$.
For $s\in S_i(X)$, $\bar{s}\in S_i(X)$ denotes the same simplex but with
opposite orientation. An $\mathbb{R}$-valued \textit{$i$-cochain} on $X$ is
a function $f: S_i(X)\to \mathbb{R}$ which is alternating if $i\geq 1$, i.e., $f(s)=-f(\bar{s})$.
(A $0$-cochain is just a function on $\mathrm{Ver}(X)$.)
The $i$-cochains naturally form an $\mathbb{R}$-vector
space which is denoted $C^i(X)$. If $i<0$ or $i>\dim(X)$, we set
$C^i(X)=0$.
The \textit{coboundary} operator is the linear transformation $d:C^i(X)\to C^{i+1}(X)$ defined by
\begin{equation}\label{eq-d}
df([v_0,\dots,
v_{i+1}])=\sum_{j=0}^{i+1}(-1)^jf([v_0,\dots,\hat{v}_j,\dots,
v_{i+1}]),
\end{equation}
where $[v_0,\dots,v_{i+1}]\in S_{i+1}(X)$ and the symbol $\hat{v}_j$ means that the vertex $v_j$ is to be
deleted from the array.
The kernel of $d:C^i(X)\to C^{i+1}(X)$ is called the subspace of \textit{$i$-cocycles} and denoted $Z^i(X)$.
As one easily verifies $d\circ d = 0$ (cf. \cite[p. 30]{Munkres}), so $dC^{i-1}(X)\subset Z^i(X)$.
The \textit{$i$-th cohomology group of $X$} (with real coefficients) is
$$H^i(X):=Z^i(X)/dC^{i-1}(X).$$
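The coboundary formula \eqref{eq-d} and the identity $d\circ d=0$ can be verified mechanically on small complexes. A Python sketch, under the (illustrative) convention that a cochain is stored on sorted vertex tuples and evaluated on an arbitrary orientation via the sign of the sorting permutation:

```python
def perm_sign(seq):
    """Sign of the permutation that sorts seq (entries assumed distinct)."""
    seq, sign = list(seq), 1
    for a in range(len(seq)):
        for b in range(a + 1, len(seq)):
            if seq[a] > seq[b]:
                seq[a], seq[b] = seq[b], seq[a]
                sign = -sign
    return sign

def eval_cochain(f, oriented):
    """f stores values on sorted tuples; cochains alternate under reorientation."""
    return perm_sign(oriented) * f.get(tuple(sorted(oriented)), 0.0)

def coboundary(f, simplices_up):
    """df on each sorted (i+1)-simplex: the alternating sum of (eq-d)."""
    return {s: sum((-1) ** j * eval_cochain(f, s[:j] + s[j + 1:])
                   for j in range(len(s)))
            for s in simplices_up}
```

On the full $2$-simplex on $\{0,1,2\}$, applying coboundary twice to any $0$-cochain returns the zero $2$-cochain, as it must.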
Let $\mathbf{1}\in C^0(X)$ be the function defined by $\mathbf{1}(v)=1$ for all $v\in \mathrm{Ver}(X)$. The
subspace $\mathbb{R} \mathbf{1}\subset C^0(X)$ spanned by $\mathbf{1}$ is the space of constant functions. It is easy to see that
$\mathbb{R} \mathbf{1} \subset Z^0(X)$. One defines the \textit{reduced $i$-th cohomology group} $\tilde{H}^i(X)$ of
$X$ by setting $\tilde{H}^i(X)=H^i(X)$ if $i\geq 1$, and $\tilde{H}^0(X)=Z^0(X)/\mathbb{R} \mathbf{1}$.
It is easy to show that the ``geometric
realization'' of $X$ is connected if and only if $\tilde{H}^0(X)=0$; see \cite[p. 256]{Munkres}.
Now assume $X$ is finite, i.e., has finitely many vertices. To each simplex $s$ of $X$ assign a positive real number $w(s)=w(\bar{s})$,
which we call the \textit{weight} of $s$. Define an inner-product on $C^i(X)$ by
\begin{equation}\label{eq-pairing}
(f,g)=\sum_{{s} \in \widehat{S}_i(X)} w(s)f(s)g(s),\qquad f,g\in C^i(X),
\end{equation}
where in $w(s)f(s)g(s)$ we choose
some orientation of $s$; this is well-defined since $f(s)g(s)=f(\bar{s})g(\bar{s})$.
Let $s=[v_0,\dots, v_i]\in S_i(X)$ and $v\in \mathrm{Ver}(X)$. If the set
$\{v,v_0,\dots, v_i\}$ is an $(i+1)$-simplex of $X$, then we denote
by $[v, s]\in S_{i+1}(X)$ the oriented simplex $[v,v_0,\dots, v_i]$.
Define a linear transformation $\delta: C^i(X)\to C^{i-1}(X)$ by
\begin{equation}\label{eq-delta}
\delta f(s)=\sum_{\substack{v\in \mathrm{Ver}(X)\\ [v,s]\in S_i(X)}}
\frac{w([v,s])}{w(s)}f([v,s]).
\end{equation}
This operator is the adjoint of $d$ with respect to \eqref{eq-pairing}:
\begin{lem}\label{v-prop1.12}
If $f\in C^i(X)$ and $g\in C^{i+1}(X)$, then $(df,g)=(f,\delta g)$.
\end{lem}
\begin{proof}
\begin{align*}
&(df,g)\\
&=\sum_{\substack{s=[v_0,\dots,v_{i+1}] \in
\widehat{S}_{i+1}(X)}}
w(s)\sum_{j=0}^{i+1}f([v_0,\dots, \hat{v}_j,\dots,
v_{i+1}])g([v_j,v_0,\dots,\hat{v}_j,\dots, v_{i+1}])\\
&=\sum_{\substack{\sigma \in \widehat{S}_{i}(X)}}
w(\sigma)f(\sigma)\sum_{\substack{v\in \mathrm{Ver}(X) \\ [v,\sigma]\in
S_{i+1}(X)}}\frac{w([v,\sigma])}{w(\sigma)}g([v, \sigma])=(f,\delta g).
\end{align*}
\end{proof}
The kernel of $\delta: C^i(X)\to C^{i-1}(X)$ will be denoted by $\sZ^i(X)$.
\subsection{Combinatorial Hodge decomposition}
The intersection $$\harm{i}(X):=\sZ^i(X)\cap Z^i(X)$$ in $C^i(X)$
is the subspace of \textit{harmonic $i$-cocycles}. (This subspace depends on the
choice of the inner-product \eqref{eq-pairing}.)
\begin{rem}
The term \textit{harmonic} comes from the fact that $f\in C^i(X)$ is in $\harm{i}(X)$ if and only if
$f$ is in the kernel of the operator $d\delta+\delta d$, which is a combinatorial analogue
of the Laplacian.
\end{rem}
\begin{thm}\label{lem1.11}
Relative to the inner-product \eqref{eq-pairing}, we have the orthogonal direct sum decompositions
\begin{align}
\label{eq-ci} C^i(X) &=\harm{i}(X)\oplus dC^{i-1}(X)\oplus \delta C^{i+1}(X), \\
\label{eq-zi} Z^i(X) &=\harm{i}(X)\oplus dC^{i-1}(X), \\
\label{eq-hi} \sZ^i(X) &=\harm{i}(X)\oplus \delta C^{i+1}(X).
\end{align}
This implies
\begin{equation}\label{eq-CohomHarm}
H^i(X)=Z^i(X)/dC^{i-1}(X)\cong \sZ^i(X)/\delta C^{i+1}(X)\cong \harm{i}(X).
\end{equation}
\end{thm}
\begin{proof} Let $f\in dC^{i-1}(X)$ and $g\in \delta C^{i+1}(X)$. We can write $f=df'$ and $g=\delta g'$ for some
$f'\in C^{i-1}(X)$ and $g'\in C^{i+1}(X)$. Since $d^2=0$, using Lemma \ref{v-prop1.12} we get
$$
(f, g)=(df', \delta g')=(d^2 f', g')=(0, g')=0.
$$
Hence $dC^{i-1}(X)\perp \delta C^{i+1}(X)$. Now suppose $h\in C^i(X)$ is orthogonal to $dC^{i-1}(X)\oplus \delta C^{i+1}(X)$. Then
$$
0=(h, df')=(\delta h, f')
$$
for all $f'\in C^{i-1}(X)$, which implies $\delta h=0$. Similarly, $0=(h, \delta g')$ implies that $dh=0$.
Therefore, $h\in \harm{i}(X)$. In other words, the orthogonal complement of $dC^{i-1}(X)\oplus \delta C^{i+1}(X)$ in $C^i(X)$
is $\harm{i}(X)$. This proves \eqref{eq-ci}.
A cochain $f\in C^i(X)$ is in the orthogonal complement of $\delta C^{i+1}(X)$ if and only if
$(f, \delta g')=(df, g')=0$ for all $g'\in C^{i+1}(X)$, which holds if and only if $df=0$. Thus
\begin{equation}\label{eq-ci2}
C^i(X) =Z^i(X)\oplus \delta C^{i+1}(X).
\end{equation}
A similar argument shows that the orthogonal complement of $dC^{i-1}(X)$ is $\sZ^i(X)$:
\begin{equation}\label{eq-ci3}
C^i(X) =\sZ^i(X)\oplus dC^{i-1}(X).
\end{equation}
Comparing \eqref{eq-ci2} and \eqref{eq-ci3} with \eqref{eq-ci}, we get \eqref{eq-zi} and \eqref{eq-hi}.
Finally, \eqref{eq-zi} and \eqref{eq-hi} imply \eqref{eq-CohomHarm}.
\end{proof}
\begin{defn}\label{defn1.7} Following \cite{Garland}, we call the linear
transformation $\Delta=\delta d$ on $C^i(X)$ the \textit{curvature transformation}.
(What we denote by $\Delta$ in this paper is denoted by $\Delta^+$ in \cite{Garland} and \cite{Borel}.)
\end{defn}
By Lemma \ref{v-prop1.12}, for any $f, g\in C^i(X)$ we have
$$
(\Delta f, g)= (\delta d f, g)= (d f, dg) = (f, \delta d g)= (f, \Delta g)
$$
and
\begin{equation}\label{eq-see}
(\Delta f, f)= (df, df)\geq 0.
\end{equation}
Hence $\Delta$ is a self-adjoint positive operator on $C^i(X)$, which implies that
$C^i(X)$ has an orthonormal basis consisting of eigenvectors of $\Delta$, and the
eigenvalues of $\Delta$ are nonnegative.
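In a fixed basis of oriented simplices all of this is concrete linear algebra: $d$ becomes a matrix $D$, the inner products \eqref{eq-pairing} become diagonal matrices $W_i$, and the adjoint is $\delta = W_i^{-1}D^{\mathsf T}W_{i+1}$. A NumPy sketch on the hollow triangle (three vertices, three edges, no $2$-simplices; the positive weights below are arbitrary choices of mine):

```python
import numpy as np

# Hollow triangle: vertices 0,1,2; oriented edges (0,1),(0,2),(1,2); no 2-simplices.
edges = [(0, 1), (0, 2), (1, 2)]
D = np.zeros((3, 3))                   # matrix of d : C^0 -> C^1 (rows = edges)
for r, (u, v) in enumerate(edges):
    D[r, u], D[r, v] = -1.0, 1.0       # df([u,v]) = f(v) - f(u)

W0 = np.diag([2.0, 1.0, 3.0])          # arbitrary positive vertex weights
W1 = np.diag([1.0, 2.0, 1.0])          # arbitrary positive edge weights
delta = np.linalg.inv(W0) @ D.T @ W1   # adjoint of d for these inner products

# Adjointness (df, g) = (f, delta g) on random cochains:
rng = np.random.default_rng(0)
f, g = rng.standard_normal(3), rng.standard_normal(3)
adjoint_ok = np.isclose((D @ f) @ W1 @ g, f @ W0 @ (delta @ g))

# Harmonic 1-cochains = ker(delta), since d vanishes on top-dimensional cochains.
harm_dim = 3 - np.linalg.matrix_rank(delta)
```

The computed harmonic space is one-dimensional, matching $H^1$ of a circle, as \eqref{eq-CohomHarm} predicts.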
\begin{lem}\label{lem1.7} Let $i\geq 0$.
\begin{enumerate}
\item The subspace of $C^i(X)$ spanned by the eigenfunctions of $\Delta$ with positive eigenvalues
is $\delta C^{i+1}(X)=Z^i(X)^\perp$.
\item If $H^i(X)=0$, then $\delta C^{i+1}(X)=\sZ^i(X)$.
\end{enumerate}
\end{lem}
\begin{proof}
It is clear from \eqref{eq-see} that if $\Delta f=0$ then $df=0$.
Hence $\ker(\Delta)\subseteq Z^i(X)$. Conversely, if $f\in Z^i(X)$ then $\Delta f=\delta df=\delta 0=0$.
Therefore the subspace of $C^i(X)$ spanned by the eigenfunctions of $\Delta$ with positive eigenvalues
is $Z^i(X)^\perp$. This latter subspace is $\delta C^{i+1}(X)$, as follows from \eqref{eq-ci2}.
The second claim of the lemma follows from \eqref{eq-CohomHarm}.
\end{proof}
\section{Garland's method}\label{sFI}
In this section we discuss what is nowadays called \textit{Garland's method}.
Let $X$ be a finite simplicial complex.
Let $$\lambda_{\min}^i(X):=\min_{v\in \mathrm{Ver}(X)} m^i(\mathrm{Lk}(v)).$$
Garland's method shows that a strong enough lower
bound on $\lambda_{\min}^{i-1}(X)$ implies
that $\harm{i}(X)=0$ (hence also $H^i(X, \mathbb{R})=0$ by the Hodge decomposition
discussed in the previous section). Moreover, the method
gives a lower bound on $m^k(X)$ in terms of $\lambda_{\min}^{k-1}(X)$,
hence allows one to estimate $\lambda_{\min}^{i-1}(X)$ inductively in certain situations.
(We will see an example of such an inductive estimate in Section \ref{sec3}.)
The observation that Garland's ideas from \cite{Garland} apply to any finite simplicial complex
satisfying a certain condition is due to Borel;
in this section we partly follow \cite[$\S$1]{Borel}.
The proof of the main results has two parts: It starts with a decomposition of $(\Delta f, f)$
into a sum $\sum_v (\Delta f_v, f_v)$ over the vertices of $X$, where $f_v$
is the restriction of $f$ to the ``unit ball'' around $v$; this is the content of $\S$\ref{ss3.1}.
Then one bounds $(\Delta f_v, f_v)$ in terms of $m^{i-1}(\mathrm{Lk}(v))$ by studying the local version of the
curvature transformation; this is the content of $\S$\ref{ss3.2}.
We combine these two parts in $\S$\ref{ss3.3} to prove the main results.
One of the subtleties is that to make this strategy work one has to choose
an appropriate metric \eqref{eq-themetric}.
In this section we assume that $X$ is a finite $n$-dimensional
complex which satisfies the following property:
\begin{center}
$(\star)$ each simplex of $X$ is a face of some $n$-simplex.
\end{center}
For $s\in S_i(X)$, let
\begin{equation}\label{eq-themetric}
w(s) = \text{the number of (non-oriented) $n$-simplices containing $s$.}
\end{equation}
Note that, due to $(\star)$, $w(s)\neq 0$.
\subsection{Decomposition of $(\Delta f,f)$ into local factors}\label{ss3.1} We start with a simple lemma:
\begin{lem}\label{lem-w} Let $s\in \widehat{S}_i(X)$ be fixed. Then
$$
\sum_{\substack{\sigma\in \widehat{S}_{i+1}(X)\\ s\subset
\sigma}}w(\sigma)=(n-i)w(s).
$$
\end{lem}
\begin{proof}
Given an $n$-simplex $t$ such that $s\subset t$ there are
exactly $(n-i)$ simplices $\sigma$ of dimension $(i+1)$ such that
$s\subset \sigma\subset t$. Hence in the sum of the lemma we count
every $n$-simplex containing $s$ exactly $(n-i)$ times.
\end{proof}
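Since Lemma \ref{lem-w} is a pure counting statement, it is easy to test by brute force. A Python sketch on the $2$-skeleton of the tetrahedron, which satisfies $(\star)$ with $n=2$ (the encoding is mine):

```python
from itertools import combinations

n = 2
verts = range(4)
tris  = list(combinations(verts, 3))   # the four 2-simplices
edges = list(combinations(verts, 2))   # the six edges
vs    = [(v,) for v in verts]          # the four vertices

def w(s):
    """Weight as in (eq-themetric): the number of n-simplices containing s."""
    return sum(1 for t in tris if set(s) <= set(t))

def lemma_holds(simplices_i, simplices_up, i):
    """Check sum_{sigma > s} w(sigma) == (n - i) * w(s) for every i-simplex s."""
    return all(sum(w(sig) for sig in simplices_up if set(s) <= set(sig))
               == (n - i) * w(s)
               for s in simplices_i)
```

Here every edge lies in two triangles and every vertex in three, so, e.g., for a vertex $s$ the left-hand side is $3\cdot 2=6=(2-0)\cdot w(s)$.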
\begin{comment}
\begin{rem}
The appearance of the weights $w(s)$ in \eqref{eq-pairing} might
seem unmotivated at the moment, but for the arguments in the next
subsection to go through one crucially needs exactly this weighted
version of the pairing.
\end{rem}
\end{comment}
For a fixed $v\in \mathrm{Ver}(X)$ define a linear transformation $\rho_v: C^i(X)\to C^i(X)$ by:
$$
\rho_vf (s)=\left\{
\begin{array}{ll}
f(s) & \hbox{if } v\in s; \\
0 & \hbox{otherwise.}
\end{array}
\right.
$$
It is clear that
\begin{equation}\label{eq-rho^2}
\rho_v\rho_v=\rho_v,
\end{equation}
and for $f,g\in C^i(X)$
\begin{equation}\label{eq-(rho)}
(\rho_vf, g)= (\rho_vf, \rho_v g)=(f, \rho_vg).
\end{equation}
Moreover, since any $i$-simplex has $i+1$ vertices, for $f\in C^i(X)$ we
have the equality
\begin{equation}\label{eq-obv}
\sum_{v\in \mathrm{Ver}(X)}\rho_vf=(i+1)f.
\end{equation}
\begin{comment}
We also have the following obvious lemma:
\begin{lem}\label{v-prop1.13} \hfill
\begin{enumerate}
\item $\rho_v\rho_v=\rho_v$;
\item For $f \in C^i(X)$ and $g\in C^i(X)$, $(\rho_vf, g)=(f,
\rho_vg)$.
\end{enumerate}
\end{lem}
\end{comment}
\begin{lem}\label{lem-borel} For $f\in C^i(X)$, we have
$$
i\cdot (\Delta f, f)=\sum_{v\in \mathrm{Ver}(X)}(\Delta \rho_v f,\rho_v f) - (n-i)(f,f).
$$
\end{lem}
\begin{proof} To simplify the calculations that follow, we first introduce some notation.
Let $\sigma\in S_{i+1}(X)$ and $s\in S_i(X)$ be a face of $\sigma$.
The orientation on $\sigma$ induces an orientation on $s$; we define $[\sigma:s]=\pm 1$
depending on whether this induced orientation is the original orientation of $s$ or its opposite.
With this definition, for $f\in C^{i}(X)$ we have
$$
df(\sigma) = \sum_{\substack{s\in \widehat{S}_i(X)\\ s\subset \sigma}}[\sigma:s]f(s),
$$
where for each face $s$ of $\sigma$ we choose some
orientation. (Note that $[\sigma:s]f(s)$ does not depend on the choice of the orientation of $s$.)
Let $v\in \sigma$ be a fixed vertex and $s_0\in \widehat{S}_i(X)$ be the unique face of $\sigma$
not containing $v$. Then
$$
df(\sigma) = d\rho_v f(\sigma)+[\sigma:s_0]f(s_0).
$$
Hence
$$
df(\sigma)^2 = d\rho_v f(\sigma)^2+2[\sigma:s_0]f(s_0)\sum_{v\in s\subset \sigma}[\sigma:s]f(s)+f(s_0)^2.
$$
Summing both sides over all vertices of $\sigma$ we get
$$
(i+2)df(\sigma)^2=\sum_{v\in \sigma}d\rho_v f(\sigma)^2+2\sum_{\substack{s, s'\subset \sigma\\ s\neq s'}}[\sigma:s][\sigma:s']f(s)f(s')
+\sum_{s\subset \sigma} f(s)^2
$$
$$
=\sum_{v\in \sigma}d\rho_v f(\sigma)^2 + 2\sum_{\substack{s, s'\subset \sigma}}[\sigma:s][\sigma:s']f(s)f(s') -\sum_{s\subset \sigma} f(s)^2
$$
$$
=\sum_{v\in \sigma}d\rho_v f(\sigma)^2 + 2df(\sigma)^2 -\sum_{s\subset \sigma} f(s)^2.
$$
Hence
\begin{equation}\label{Borel-noproof}
i\cdot df(\sigma)^2 = \sum_{v\in \sigma}d\rho_v f(\sigma)^2 -\sum_{s\subset \sigma} f(s)^2.
\end{equation}
Now
\begin{align*}
i\cdot (\Delta f, f) &\overset{\mathrm{Lem.} \ref{v-prop1.12}}{=} i\cdot (df, df) = i\sum_{\sigma\in \widehat{S}_{i+1}(X)} w(\sigma) df(\sigma)^2
\\
&\overset{\eqref{Borel-noproof}}{=}\sum_{\sigma\in \widehat{S}_{i+1}(X)} \sum_{v\in \sigma}w(\sigma) d\rho_v f(\sigma)^2-
\sum_{\sigma\in \widehat{S}_{i+1}(X)}\sum_{s\subset \sigma} w(\sigma)f(s)^2
\\
&=\sum_{v\in \mathrm{Ver}(X)} (d\rho_v f,d\rho_v f) -
\sum_{s\in \widehat{S}_{i}(X)}f(s)^2 \sum_{\substack{\sigma\in \widehat{S}_{i+1}(X)\\ s\subset \sigma}} w(\sigma)
\\
&\overset{\mathrm{Lem.} \ref{lem-w}}{=} \sum_{v\in \mathrm{Ver}(X)} (\Delta\rho_v f,\rho_v f)-(n-i)\sum_{s\in \widehat{S}_{i}(X)}w(s)f(s)^2
\\
&= \sum_{v\in \mathrm{Ver}(X)} (\Delta\rho_v f,\rho_v f)-(n-i)(f,f).
\end{align*}
\end{proof}
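Lemma \ref{lem-borel} can also be sanity-checked numerically. A NumPy sketch on the $2$-skeleton of the tetrahedron with $n=2$, $i=1$, using $(\Delta g, g)=(dg,dg)$ to avoid forming $\delta$ explicitly (the matrix encoding is mine):

```python
import numpy as np
from itertools import combinations

n, i = 2, 1
verts = list(range(4))
edges = [e for e in combinations(verts, 2)]   # six edges, each in 2 triangles
tris  = [t for t in combinations(verts, 3)]   # four triangles
W1 = 2.0 * np.eye(len(edges))                 # weights w(s) as in (eq-themetric)
W2 = 1.0 * np.eye(len(tris))

# Matrix of d : C^1 -> C^2 in the sorted-orientation basis.
D = np.zeros((len(tris), len(edges)))
for r, t in enumerate(tris):
    for j in range(3):
        D[r, edges.index(t[:j] + t[j + 1:])] = (-1.0) ** j

def energy(f):
    """(Delta f, f) = (df, df), by the adjointness lemma."""
    return (D @ f) @ W2 @ (D @ f)

def restrict(f, v):
    """rho_v f: keep only the coordinates of edges containing v."""
    return np.array([f[k] if v in edges[k] else 0.0 for k in range(len(edges))])

rng = np.random.default_rng(1)
f = rng.standard_normal(len(edges))
lhs = i * energy(f)
rhs = sum(energy(restrict(f, v)) for v in verts) - (n - i) * (f @ W1 @ f)
```

For a random $1$-cochain $f$ the two sides agree up to floating-point error, as the lemma asserts.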
\subsection{Local curvature transformations}\label{ss3.2}
\begin{comment}
\begin{cor}\label{cor1.14} Let $f\in C^i(X)$.
Suppose there is a positive real number $\lambda$ such that $(\Delta
\rho_v f, \rho_v f)\geq \lambda\cdot (\rho_v f, f)$ for all $v\in
\mathrm{Ver}(X)$. Then
$$i\cdot (\Delta f, f)\geq \left(\lambda\cdot (i+1)-(n-i)\right)(f,f).$$
Similarly, if there is a positive real number $\Lambda$ such that
$(\Delta \rho_v f, \rho_v f)\leq \Lambda\cdot (\rho_v f, f)$ for all
$v\in \mathrm{Ver}(X)$, then
$$i\cdot (\Delta f, f)\leq \left(\Lambda\cdot (i+1)-(n-i)\right)(f,f).$$
\end{cor}
\begin{proof}
This follows from Lemma \ref{v-prop1.13}, Lemma \ref{lem-borel} and
\eqref{obv}.
\end{proof}
\end{comment}
For
$f,g\in C^i(\mathrm{Lk}(v))$ define their inner-product by
\begin{equation}\label{eq-locp}
(f,g)_v=\sum_{s\in \widehat{S}_i(\mathrm{Lk}(v))}w_v(s)\cdot f(s)\cdot
g(s),
\end{equation}
where $w_v(s)$ is the number of $(n-1)$-simplices in $\mathrm{Lk}(v)$
containing $s$. Note that $\mathrm{Lk}(v)$ is an
$(n-1)$-dimensional complex satisfying $(\star)$.
Another simple observation is that for a simplex $\sigma$ in $\mathrm{Lk}(v)$ there is a one-to-one correspondence between
the $n$-simplices of $X$ containing $[v,\sigma]$ and the
$(n-1)$-simplices of $\mathrm{Lk}(v)$ containing $\sigma$. Hence
\begin{equation}\label{eq-w_vw}
w_v(\sigma)=w([v,\sigma]) \quad \text{for any }\sigma\in \mathrm{Lk}(v).
\end{equation}
Let $$d_v: C^i(\mathrm{Lk}(v))\to C^{i+1}(\mathrm{Lk}(v))$$ be the
coboundary operator acting on the cochains of the finite simplicial complex $\mathrm{Lk}(v)$.
Let $\delta_v$ be the adjoint of $d_v$ with respect to \eqref{eq-locp},
and let $$\Delta_v:=\delta_vd_v.$$
Lemma \ref{lem-borel} essentially decomposes $\Delta$ into a sum of its restrictions $\Delta\rho_v$ to $\mathrm{St}(v)$ over all vertices.
We want to relate $\Delta\rho_v$ to $\Delta_v$, and hence to relate the eigenvalues of $\Delta$
to the eigenvalues of its local version $\Delta_v$.
For this we need to introduce one more linear operator: For $i\geq 1$, define
\begin{align*}
&\tau_v:C^{i}(X)\to C^{i-1}(X),\\
& \tau_vf(s)=\left\{
\begin{array}{ll}
f([v,s]), & \hbox{if $s\in S_{i-1}(\mathrm{Lk}(v))$;}\\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{align*}
Given $f\in C^i(X)$, its restriction to $\mathrm{Lk}(v)$ defines a function in $C^i(\mathrm{Lk}(v))$, which
by slight abuse of notation we denote by the same letter. With this convention, for $f\in C^i(X)$
we can consider $d_v f$ and $\delta_v f$, which are now functions on $\mathrm{Lk}(v)$. Similarly, we
can compute the pairing \eqref{eq-locp} on $C^i(X)$.
\begin{lem}\label{prop7.12} Assume $i\geq 1$.
For $f, g\in C^i(X)$, we have
$$(\tau_vf,\tau_vg)_v=(\rho_vf,\rho_vg).$$
\end{lem}
\begin{proof} We have
\begin{align*}
(\tau_vf,\tau_vg)_v &=\sum_{\sigma\in
\widehat{S}_{i-1}(\mathrm{Lk}(v))}w_v(\sigma)\cdot \tau_vf(\sigma)\cdot
\tau_vg(\sigma)\\
&\overset{\eqref{eq-w_vw}}{=}
\sum_{s\in \widehat{S}_{i}(\mathrm{St}(v))}w(s)\cdot \rho_vf(s)\cdot
\rho_vg(s).
\end{align*}
Since $\rho_v f$ vanishes outside $\mathrm{St}(v)$, the last sum can be extended
to all of $\widehat{S}_{i}(X)$, and the lemma follows.
\end{proof}
\begin{lem} Let $f\in C^i(X)$. We have
\begin{align}
\label{eq-taud} \tau_v d\rho_v f &=-d_v\tau_v f, \quad i\geq 1,\\
\label{eq-taudelta} \tau_v \delta f &=-\delta_v\tau_v f, \quad i\geq 2, \\
\label{eq-tauDelta} \tau_v \Delta \rho_v f &=\Delta_v\tau_v f, \quad i\geq 1.
\end{align}
\end{lem}
\begin{proof}
Let $s\in S_i(\mathrm{Lk}(v))$, $i\geq 1$. We have
$$
\tau_v d\rho_v f(s)=d\rho_vf([v,s])=\rho_vf(s)-\rho_vf([v, ds])=-f([v, ds]).
$$
In the last term, $ds$ denotes the image of $s$ under the boundary
operator, $[v,\cdot\,]$ is extended linearly to $\mathbb{Z}[S_{i-1}(\mathrm{Lk}(v))]$, and $\rho_vf(s)=0$ because $v\notin s$, while $\rho_vf=f$ on simplices containing $v$. Since $d$ restricted to $\mathrm{Lk}(v)$ coincides with $d_v$,
we have $f([v, ds])=d_v\tau_v f(s)$.
This proves \eqref{eq-taud}.
Now assume $i\geq 2$ and let $s\in S_{i-2}(\mathrm{Lk}(v))$. We have
$$
\tau_v \delta f(s)=\delta f([v, s])= \sum_{\substack{x\in \mathrm{Ver}(X)\\ [x,v, s]\in S_{i}(X)}}\frac{w([x,v,s])}{w([v, s])}f([x,v,s])
$$
$$
\overset{\eqref{eq-w_vw}}{=}- \sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\ [x, s]\in S_{i-1}(\mathrm{Lk}(v))}}\frac{w_v([x, s])}{w_v(s)}\tau_v f([x,s])=-\delta_v\tau_v f(s).
$$
This proves \eqref{eq-taudelta}.
Finally,
$$
\tau_v \Delta \rho_v f =\tau_v \delta d \rho_v f\overset{\eqref{eq-taudelta}}{=}-\delta_v \tau_v d\rho_v f
\overset{\eqref{eq-taud}}{=} \delta_v d_v\tau_v f=\Delta_v\tau_v f.
$$
This proves \eqref{eq-tauDelta}.
\end{proof}
\begin{lem}\label{prop7.14} Assume $i\geq 1$. For $f\in C^i(X)$, we have
$$(\Delta \rho_v f,\rho_v f)=(\Delta_v\tau_vf,\tau_vf)_v.$$
\end{lem}
\begin{proof} We have
$$
(\Delta \rho_v f,\rho_vf)\overset{\eqref{eq-(rho)}}{=}(\rho_v\Delta \rho_v
f,\rho_vf)\overset{\mathrm{Lem.} \ref{prop7.12}}{=}(\tau_v\Delta \rho_vf,\tau_vf)_v
\overset{\eqref{eq-tauDelta}}{=}(\Delta_v\tau_vf,\tau_vf)_v.
$$
\begin{comment}
Now it is enough to show $\tau_v\Delta\rho_v f=\Delta_v\tau_vf$. For $s\in S_{i-1}(\mathrm{Lk}(v))$, we have
$$
\tau_v\Delta\rho_v f(s)=\delta d \rho_vf([v,s])=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\
[x,s]\in S_{i}(\mathrm{Lk}(v))}}\frac{w([x,v,s])}{w([v,s])}d\rho_vf([x,v,s])
$$
$$
=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\
[x,s]\in
S_{i}(\mathrm{Lk}(v))}}\frac{w_v([x,s])}{w_v(s)}(\rho_vf([v,s])-\rho_vf([x,s])+\rho_vf([x,v,ds])).
$$
In the last term $ds$ denotes the image of $s$ under the boundary
operator and $[\cdot]$ is extended linearly to $\mathbb{Z}[S_i(\mathrm{Lk}(v))]$.
Continuing our calculation
$$
=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\
[x,s]\in
S_{i}(\mathrm{Lk}(v))}}\frac{w_v([x,s])}{w_v(s)}(f([v,s])-f([v,x,ds]))
$$
$$
=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\
[x,s]\in
S_{i}(\mathrm{Lk}(v))}}\frac{w_v([x,s])}{w_v(s)}(\tau_vf(s)-\tau_vf([x,ds]))
$$
$$
=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\
[x,s]\in
S_{i}(\mathrm{Lk}(v))}}\frac{w_v([x,s])}{w_v(s)}d_v\tau_vf([x,s])=\delta_v
d_v\tau_v f(s)=\Delta_v\tau_vf(s).
$$
\end{comment}
\end{proof}
\begin{notn} Given a simplicial complex $Y$, let
$M^i(Y)$ and $m^i(Y)$ be the maximal and minimal non-zero
eigenvalues of $\Delta$ acting on $C^i(Y)$, respectively. Denote
$$
\lambda^i_{\max}(Y)= \max_{\substack{v\in \mathrm{Ver}(Y)}}M^i(\mathrm{Lk}(v)),
$$
$$
\lambda^i_{\min}(Y)= \min_{\substack{v\in
\mathrm{Ver}(Y)}}m^i(\mathrm{Lk}(v)).
$$
\end{notn}
\begin{lem}\label{lemDec5} Assume $i\geq 1$. For $f\in C^i(X)$, we have
$$
(\Delta_v \tau_v f, \tau_v f)_v\leq M^{i-1}(\mathrm{Lk}(v))\cdot(\tau_v f,
\tau_v f)_v.
$$
If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$, then for $f\in \sZ^i(X)$ we have
$$
(\Delta_v \tau_v f, \tau_v f)_v\geq m^{i-1}(\mathrm{Lk}(v))\cdot(\tau_v f,
\tau_v f)_v.
$$
\end{lem}
\begin{proof}
We can choose an orthonormal basis $\{e_1, \dots, e_h\}$ of
$C^{i-1}(\mathrm{Lk}(v))$ with respect to $(\cdot , \cdot)_v$ which consists
of $\Delta_v$-eigenvectors. Let $\{\kappa_1,\dots, \kappa_h\}$ be
the corresponding eigenvalues. We have $\kappa_j\geq 0$ for all $j$. Write
$\tau_vf=\sum_{j=1}^h a_j e_j$. Then
$$
(\Delta_v \tau_v f, \tau_v f)_v = (\sum_{j=1}^h a_j \kappa_j e_j,
\sum_{j=1}^h a_j e_j)_v=\sum_{j=1}^h a_j^2 \kappa_j
$$
$$
\leq M^{i-1}(\mathrm{Lk}(v)) \sum_{j=1}^h a_j^2 =
M^{i-1}(\mathrm{Lk}(v)) (\tau_v f, \tau_v f)_v.
$$
The second claim will follow from a similar argument if we show
that $\tau_vf$ belongs to the subspace of $C^{i-1}(\mathrm{Lk}(v))$ spanned by $\Delta_v$-eigenfunctions
with \textbf{positive} eigenvalues.
First assume $i\geq 2$. In this case ${H}^{i-1}(\mathrm{Lk}(v))=\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$.
Hence, thanks to Lemma \ref{lem1.7}, it is enough to show that $\tau_vf \in\cH^{i-1}(\mathrm{Lk}(v))$.
\begin{comment}
For $s\in S_{i-2}(\mathrm{Lk}(v))$ we have
$$
\delta_v\tau_v f(s)=\sum_{\substack{x\in \mathrm{Ver}(\mathrm{Lk}(v))\\ [x, s]\in S_{i}(\mathrm{Lk}(v))}}\frac{w_v([x, s])}{w_v(s)}\tau_v f([x,s])
$$
$$
= \sum_{\substack{x\in \mathrm{Ver}(X)\\ [x,v, s]\in S_{i+1}(X)}}\frac{w([x,v,s])}{w([v, s])}f([v, x,s]) = -\delta f([v, s])=-\tau_v \delta f(s).
$$
\end{comment}
Since by assumption $\delta f=0$, from \eqref{eq-taudelta} we get
$\delta_v\tau_v f=-\tau_v \delta f=0$. Thus $\tau_v f\in \sZ^{i-1}(\mathrm{Lk}(v))$.
Now assume $i=1$ and $\tilde{H}^0(\mathrm{Lk}(v))=0$. This last assumption is equivalent to $\mathrm{Lk}(v)$
being connected. In this case $Z^0(\mathrm{Lk}(v))$ is spanned by the function
$\mathbf{1}\in C^0(\mathrm{Lk}(v))$ which assumes value $1$ on all vertices of $\mathrm{Lk}(v)$.
By Lemma \ref{lem1.7}, we need to show that $\mathbf{1}$ is orthogonal to
$\tau_v f$ with respect to the inner product \eqref{eq-locp}. We compute
\begin{align}
\label{eq-1tuaf} (\mathbf{1}, \tau_v f)_v &=\sum_{x\in
\mathrm{Ver}(\mathrm{Lk}(v))}{w}_v(x)\cdot \tau_vf(x) \overset{\eqref{eq-w_vw}}{=}\sum_{x\in
\mathrm{Ver}(\mathrm{Lk}(v))}{w}([v,x]) f([v,x])\\
\nonumber &=-w(v)\delta f(v)=0.
\end{align}
\end{proof}
\begin{comment}
To see this let $[v, v']\in S_1(X)$ and $f\in Z^0(X)$. We have
$$
0=df([v, v'])=f(v')-f(v).
$$
Hence $f$ assumes the same value on adjacent vertices. But $\tilde{H}^0(X)=0$ is equivalent to $X$
being connected, so $f$ must be a constant function.
\end{comment}
\begin{comment}
\begin{lem}\label{cor7.15} Assume $i\geq 1$. Let $f\in C^i(X)$. We have
$$
(\Delta \rho_vf,\rho_vf)\leq \lambda^{i-1}_{\max}(X)\cdot(\rho_vf,f).
$$
If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ and $\delta f(s)=0$ for every
$s\in S_{i-1}(X)$ containing $v$, then
$$
(\Delta \rho_vf,\rho_vf)\geq \lambda^{i-1}_{\min}(X)\cdot(\rho_vf,f).
$$
\end{lem}
\begin{proof} By Lemma \ref{v-prop1.13} and Lemma
\ref{prop7.12},
$$
(\tau_vf,\tau_vf)_v =(\rho_vf,\rho_vf)= (\rho_vf, f).
$$
On the other hand, by Lemma \ref{prop7.14} and Lemma \ref{lemDec5},
$$
(\Delta \rho_vf,\rho_vf)=(\Delta_v\tau_vf,\tau_vf)_v\leq
\lambda^{i-1}_{\max}(X)\cdot (\tau_vf,\tau_vf)_v,
$$
and if $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ and $\delta f(s)=0$ for
every $s\in S_{i-1}(X)$ containing $v$, then
$$
(\Delta \rho_vf,\rho_vf)\geq \lambda^{i-1}_{\min}(X)\cdot
(\tau_vf,\tau_vf)_v.
$$
\end{proof}
\end{comment}
\subsection{Fundamental inequalities}\label{ss3.3} Now we are ready to prove the main results of this section.
\begin{thm}\label{thmFI} Assume $i\geq 1$. For $f\in C^i(X)$ we have
$$
i\cdot (\Delta f, f)\leq \left((i+1)\cdot
\lambda^{i-1}_{\max}(X)-(n-i)\right)(f,f).
$$
If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ for every $v\in \mathrm{Ver}(X)$, then for $f\in \sZ^i(X)$ we have
$$
i\cdot (\Delta f, f)\geq \left((i+1)\cdot
\lambda^{i-1}_{\min}(X)-(n-i)\right)(f,f).
$$
\end{thm}
\begin{proof} Combining Lemma \ref{lem-borel} with Lemma \ref{prop7.14}, we get
\begin{equation}\label{eq-locexp}
i\cdot (\Delta f, f)=\sum_{v\in \mathrm{Ver}(X)}(\Delta_v \tau_v f,\tau_v f)_v - (n-i)(f,f).
\end{equation}
If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ for every $v\in \mathrm{Ver}(X)$ and $f\in \sZ^i(X)$, then
by Lemma \ref{lemDec5}
\begin{align*}
\sum_{v\in \mathrm{Ver}(X)}(\Delta_v \tau_v f,\tau_v f)_v & \geq \sum_{v\in \mathrm{Ver}(X)}m^{i-1}(\mathrm{Lk}(v))(\tau_v f,\tau_v f)_v \\
& \geq \lambda^{i-1}_{\min}(X) \sum_{v\in \mathrm{Ver}(X)}(\tau_v f,\tau_v f)_v.
\end{align*}
On the other hand,
$$
\sum_{v\in \mathrm{Ver}(X)}(\tau_v f,\tau_v f)_v \overset{\mathrm{Lem.} \ref{prop7.12}}{=} \sum_{v\in \mathrm{Ver}(X)}(\rho_v f,\rho_v f)
\overset{\eqref{eq-(rho)}}{=} \sum_{v\in \mathrm{Ver}(X)}(\rho_v f, f)
$$
$$
= (\sum_{v\in \mathrm{Ver}(X)}\rho_v f, f) \overset{\eqref{eq-obv}}{=} (i+1)(f,f).
$$
Hence
\begin{equation}\label{eq-suminequl}
\sum_{v\in \mathrm{Ver}(X)}(\Delta_v \tau_v f,\tau_v f)_v\geq \lambda^{i-1}_{\min}(X) (i+1)(f,f).
\end{equation}
Substituting this inequality into \eqref{eq-locexp} we get the second claim of the theorem. The first claim
follows from a similar argument.
\end{proof}
\begin{cor}\label{CorFI} Assume $i\geq 1$. If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ for every $v\in \mathrm{Ver}(X)$ and
$\lambda^{i-1}_{\min}(X)>\frac{n-i}{i+1}$, then $\harm{i}(X)=0$.
\end{cor}
\begin{proof}
Let $f\in \harm{i}(X)$. Then $df=\delta f=0$,
and hence $(\Delta f, f)=(df, df)=0$.
Under our current assumptions, Theorem \ref{thmFI} then implies $(f,f)\leq 0$,
which forces $f=0$.
\end{proof}
\begin{thm}\label{thmFIeigen} Assume $i\geq 1$.
We have
$$
i\cdot M^i(X)\leq (i+1)\cdot \lambda^{i-1}_{\max}(X)-(n-i),
$$
and
$$
i\cdot m^i(X)\geq (i+1)\cdot \lambda^{i-1}_{\min}(X)-(n-i).
$$
\end{thm}
\begin{proof}
Let $f\in C^i(X)$ be an eigenfunction of $\Delta$ with eigenvalue $c\neq 0$.
Then, by Lemma \ref{lem1.7} (1), $f=\delta g$ for some $g\in C^{i+1}(X)$.
By \eqref{eq-taudelta} $$\tau_v f=\tau_v \delta g=-\delta_v\tau_v g.$$
Hence, again by Lemma \ref{lem1.7} (1), $\tau_v f$ belongs to the subspace of $C^{i-1}(\mathrm{Lk}(v))$ spanned by
the eigenfunctions of $\Delta_v$ with positive eigenvalues. As in the proof of Lemma \ref{lemDec5},
this implies $$(\Delta_v \tau_v f, \tau_v f)_v\geq m^{i-1}(\mathrm{Lk}(v)) (\tau_v f, \tau_v f)_v \geq \lambda^{i-1}_{\min}(X) (\tau_v f, \tau_v f)_v.$$
This inequality, as in the proof of Theorem \ref{thmFI}, implies \eqref{eq-suminequl}.
Combining \eqref{eq-suminequl} with \eqref{eq-locexp} we get
$$
ic(f, f)=i(\Delta f, f)\geq \left((i+1)\cdot \lambda^{i-1}_{\min}(X)-(n-i)\right)(f,f),
$$
which implies the second inequality of the theorem. The first inequality can be proven by a similar argument.
\end{proof}
\begin{rem}
Note that in Theorem \ref{thmFIeigen} we do not assume $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$.
Also, since $\Delta$ is not the zero operator, we must have $M^i(X)>0$ for $i\leq n-1$, which
implies $\lambda_{\max}^{i-1}(X)>\frac{n-i}{i+1}$.
\end{rem}
\begin{notn} For $m\geq 1$, let $I_m$ denote the $m\times m$ identity matrix and let $J_m$
denote the $m\times m$ matrix whose entries are all equal to $1$.
For $m\geq 2$, the minimal polynomial of $J_m$ is $x(x-m)$.
\end{notn}
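As a quick supplementary check of this claim (for $m\geq 2$): every entry of $J_m^2$ is a sum of $m$ ones, so
$$
(J_m^2)_{ij}=\sum_{k=1}^{m}1=m
\quad\Longrightarrow\quad
J_m^2=m\,J_m
\quad\Longrightarrow\quad
J_m(J_m-mI_m)=0.
$$
Since $J_m\neq 0$ and $J_m\neq mI_m$, neither factor of $x(x-m)$ alone annihilates $J_m$; in particular, the eigenvalues of $J_m$ are exactly $0$ and $m$.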
\begin{example}\label{Ex-P} Let $X$ be an $n$-simplex. We claim that the
eigenvalues of $\Delta$ acting on $C^i(X)$ are $0$ and $(n+1)$ for
any $0\leq i \leq n-1$. It is easy to see that $0$ is an eigenvalue,
so we need to show that the only non-zero eigenvalue of $\Delta$ is
$(n+1)$. It is enough to show
that $m^i(X)=M^i(X)=n+1$. First, suppose $i=0$. Since for any
simplex of $X$ there is a unique $n$-simplex containing it, one
easily checks that $\Delta$ acts on $C^0(X)$ as the matrix
$(n+1)I_{n+1}-J_{n+1}$. The only eigenvalues of this matrix are $0$
and $(n+1)$. Now let $i\geq 1$.
The link of any vertex is an $(n-1)$-simplex, so by induction
$\lambda_{\min}^{i-1}(X)=\lambda^{i-1}_{\max}(X)=n$.
Theorem \ref{thmFIeigen} implies
$$
i\cdot M^i(X)\leq (i+1)n-(n-i)=i(n+1)
$$
and
$$
i\cdot m^i(X)\geq (i+1)n-(n-i)=i(n+1).
$$
Hence $(n+1)\leq m^i(X)\leq M^i(X)\leq (n+1)$, which implies the
claim. (Of course, for this simple example it is possible, but not completely trivial, to compute
the eigenvalues of $\Delta$ directly.)
\end{example}
\begin{notn} Let $0\leq j\leq n-1$. Given $s\in \widehat{S}_j(X)$,
its link $\mathrm{Lk}(s)$ in $X$ has dimension $n-(j+1)$ and satisfies $(\star)$. For $0\leq i\leq n-(j+1)$, denote
$$
\lambda^{i,j}_{\max}(X)= \max_{\substack{s\in \widehat{S}_j(X)}}M^i(\mathrm{Lk}(s)),
$$
$$
\lambda^{i,j}_{\min}(X)= \min_{\substack{s\in \widehat{S}_j(X)}}m^i(\mathrm{Lk}(s)).
$$
With our earlier notation, we have $\lambda^{i,0}_{\max}(X)=\lambda^{i}_{\max}(X)$ and $\lambda^{i,0}_{\min}(X)=\lambda^{i}_{\min}(X)$.
\end{notn}
\begin{cor}\label{corFIeigen} Let $0\leq j<i\leq n-1$.
We have
$$
(i-j)\cdot M^i(X)\leq (i+1)\cdot \lambda^{i-(j+1),j}_{\max}(X)-(j+1)(n-i),
$$
and
$$
(i-j)\cdot m^i(X)\geq (i+1)\cdot \lambda^{i-(j+1),j}_{\min}(X)-(j+1)(n-i).
$$
\end{cor}
\begin{proof} We will prove the second inequality. The first inequality can be proven
by a similar argument.
If $j=0$, then the claim is just Theorem \ref{thmFIeigen}. Now, given $j\geq 1$, assume
that we have proved the inequality for $j-1$:
\begin{equation}\label{eqNN1}
(i-(j-1))\cdot m^i(X)\geq (i+1)\cdot \lambda^{i-j,j-1}_{\min}(X)-j(n-i).
\end{equation}
Let $s\in \widehat{S}_{j-1}(X)$. The dimension of $\mathrm{Lk}(s)$ is $n-j$, so by Theorem \ref{thmFIeigen} we have
$$
(i-j)m^{i-j}(\mathrm{Lk}(s))\geq (i-j+1)\lambda^{i-j-1,0}_{\min}(\mathrm{Lk}(s))-((n-j)-(i-j)).
$$
On the other hand, the link of a vertex $v\in \mathrm{Lk}(s)$ in $\mathrm{Lk}(s)$ is the same as the
link of the $j$-simplex $[v, s]$ in $X$. Thus,
$$
\lambda^{i-j-1,0}_{\min}(\mathrm{Lk}(s))\geq \lambda^{i-j-1,j}_{\min}(X),
$$
and
$$
(i-j)m^{i-j}(\mathrm{Lk}(s))\geq (i-j+1)\lambda^{i-j-1,j}_{\min}(X)-(n-i).
$$
Taking the minimum over all $s\in \widehat{S}_{j-1}(X)$, we get
\begin{equation}\label{eqNN2}
(i-j)\lambda^{i-j,j-1}_{\min}(X)\geq (i-j+1)\lambda^{i-j-1,j}_{\min}(X)-(n-i).
\end{equation}
Substituting \eqref{eqNN2} into \eqref{eqNN1} gives the desired inequality for $j$.
\end{proof}
The argument in the previous proof can be easily adapted to prove the following inequality:
$$
(i+1)\lambda^{i-1,0}_{\min}(X)-(n-i)\geq \frac{i}{i-j}\left((i+1)\lambda^{i-j-1,j}_{\min}(X)-(j+1)(n-i)\right).
$$
Now substituting this inequality into the second inequality of Theorem \ref{thmFI}, we get:
\begin{cor}\label{corNN1}
Assume $0\leq j< i\leq n-1$.
If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ for every $v\in \mathrm{Ver}(X)$, then for $f\in \sZ^i(X)$ we have
$$
(i-j)\cdot (\Delta f, f)\geq \left((i+1)\cdot \lambda^{i-j-1,j}_{\min}(X)-(j+1)(n-i)\right)(f,f).
$$
In particular, if $\lambda^{i-j-1,j}_{\min}(X)>\frac{(j+1)(n-i)}{(i+1)}$, then $\harm{i}(X)=0$.
\end{cor}
\begin{rem}\label{remAddedLast} Using \eqref{eqNN2} it is easy to check
that $\lambda^{i-j-1,j}_{\min}(X)>\frac{(j+1)(n-i)}{(i+1)}$ implies
$\lambda^{i-k-1,k}_{\min}(X)>\frac{(k+1)(n-i)}{(i+1)}$ for all $0\leq k\leq j$.
Hence the strongest assumption for Corollary \ref{corNN1} is $\lambda^{0,i-1}_{\min}(X)>\frac{i(n-i)}{(i+1)}$.
The advantage of proving this last inequality, besides the fact that it implies all the others,
is that it reduces the question of the vanishing of $H^i(X)$ to estimating the minimal non-zero eigenvalues of Laplacians on graphs.
\end{rem}
For the purposes of proving Theorem \ref{thm1.1} we will need a variant of Corollary \ref{CorFI}.
Let $X$ be an $n$-dimensional simplicial complex satisfying $(\star)$ but which is
not necessarily finite. Let $\Gamma$ be a group \textit{acting} on $X$.
This means that $\Gamma$ acts on the vertices of $X$ and preserves the
simplicial structure of $X$, i.e., whenever the vertices
$\{v_0,\dots, v_i\}$ of $X$ form an $i$-simplex, $0\leq i\leq
\dim(X)$, then for any $\gamma\in \Gamma$ the vertices $\{\gamma
v_0,\dots,\gamma v_i\}$ also form an $i$-simplex. Consider the
following condition on the action of $\Gamma$:
\begin{center}
$(\dag)\quad \mathrm{St}(v)\cap \mathrm{St}(\gamma v)=\varnothing$ for any $v\in \mathrm{Ver}(X)$ and any
$1\neq \gamma\in \Gamma$.
\end{center}
In particular, this implies that the stabilizer of any simplex is trivial.
\begin{defn}
Let $X/\Gamma$ be the simplicial complex whose vertices $\mathrm{Ver}(X/\Gamma)$ are
the orbits $\mathrm{Ver}(X)/\Gamma$ and a subset $\{\tilde{v}_0,\dots,
\tilde{v}_i\}$ of $\mathrm{Ver}(X/\Gamma)$ forms an $i$-simplex, $0\leq i\leq
\dim(X)$, if we can choose a representative $v_j\in \mathrm{Ver}(X)$ from
the orbit $\tilde{v}_j$ for each $0\leq j\leq i$ so that
$\{v_0, \dots, v_i\}$ form an $i$-simplex in $X$.
\end{defn}
It is obvious that $X/\Gamma$ is a simplicial complex. Moreover, if
$(\dag)$ holds then $S_i(X/\Gamma)$ is in bijection with the orbits
$S_i(X)/\Gamma$ for any $i$. Indeed, as is easy to check, $(\dag)$
implies that if $\{v_0,\dots, v_{i}\}$ and $\{\gamma_0 v_0,\dots,
\gamma_{i} v_{i}\}$ are in $\widehat{S}_i(X)$ for some
$\gamma_0,\dots, \gamma_i\in \Gamma$, then $\gamma_0=\cdots=\gamma_i$.
\begin{cor}\label{CorFII} Let $\Gamma$ be a group acting on
$X$ so that $(\dag)$ is satisfied. Assume $X/\Gamma$ is finite and $i\geq 1$. If $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ for every $v\in \mathrm{Ver}(X)$ and
$\lambda^{i-1}_{\min}(X)>\frac{n-i}{i+1}$, then $H^i(X/\Gamma)=0$.
\end{cor}
\begin{proof} $X/\Gamma$ is a finite $n$-dimensional complex
satisfying $(\star)$. Let $v\in \mathrm{Ver}(X)$ and let $\tilde{v}$ be the
image of $v$ in $X/\Gamma$. Due to $(\dag)$, $\mathrm{Lk}(v)\cong
\mathrm{Lk}(\tilde{v})$. Hence
$\tilde{H}^{i-1}(\mathrm{Lk}(\tilde{v}))=0$ for every $\tilde{v}\in
\mathrm{Ver}(X/\Gamma)$ and $\lambda^{i-1}_{\min}(X/\Gamma)>\frac{n-i}{i+1}$. By Corollary \ref{CorFI}, we have
$\harm{i}(X/\Gamma)=0$. Therefore, by Theorem \ref{lem1.11}, $H^i(X/\Gamma)=0$.
\end{proof}
\section{The Bruhat-Tits building of $\mathrm{SL}_n(K)$}\label{sBTB}
In this section we describe the Bruhat-Tits building of $\mathrm{SL}_n$, and the links of its vertices. Then, assuming a certain lower
bound on the minimal non-zero eigenvalue of the curvature transformation
acting on the links, we prove Theorem \ref{thm1.1}.
This is an important application of the ideas developed in the previous section.
(The required lower bound on the minimal non-zero eigenvalue will be proven in Section \ref{sec3}.)
We conclude the section with a brief discussion of some applications
of Garland's method to produce examples of groups having the so-called property (T).
\subsection{The building} Let $n\geq 0$ be an integer.
Let $K$ be a complete discrete valuation field. Let $\mathcal{O}$ be the
ring of integers in $K$, let $\pi$ be a uniformizer, and let
$\mathcal{O}/\pi \mathcal{O}\cong \mathbb{F}_q$ be the residue field,
where $\mathbb{F}_q$ denotes the finite field with $q$ elements.
Let $\mathcal{V}$ be an $(n+2)$-dimensional vector space over $K$. A
subset $L\subset \mathcal{V}$ which has the structure of a free $\mathcal{O}$-module
of rank $(n+2)$ such that $L\otimes_\mathcal{O} K=\mathcal{V}$ is called a
\textit{lattice}. It is clear that if $L$ is a lattice then
$xL:=\{x\cdot \ell\ |\ \ell\in L\}$, $x\in K^\times$, is also a
lattice. We say that $L$ and $xL$ are \textit{similar}. Similarity
defines an equivalence relation on the set of lattices in $\mathcal{V}$. We
denote the equivalence class of $L$ by $[L]$.
The \textit{Bruhat-Tits building of $\mathrm{SL}_{n+2}(K)$} is the
simplicial complex $\mathfrak B_n$ with set of vertices $\{[L]\ |\ L \text{ is
a lattice in }\mathcal{V}\}$ where $\{[L_0],\dots,[L_i]\}$ form an $i$-simplex
if there is $L'_j\in [L_j]$ for each $j$ with
$$
\pi L_i'\subsetneqq L_0'\subsetneqq L_1'\subsetneqq\cdots\subsetneqq L'_i.
$$
To visualize $\mathfrak B_n$ in some way, fix a basis
$\{e_1,\dots, e_{n+2}\}$ of $\mathcal{V}$. It is easy to see that the classes of
lattices
$$
\mathcal{O}\pi^{a_1} e_1\oplus\cdots\oplus \mathcal{O}\pi^{a_{n+2}} e_{n+2}, \qquad a_1,\dots, a_{n+2}\in \mathbb{Z},
$$
are in bijection with the elements of $\mathbb{Z}^{n+2}/\mathbb{Z}\cdot (1,1,\dots, 1)$. In particular, $\mathfrak B_n$
is infinite. Next, the vertices corresponding to $(a_1, \dots, a_{n+2})$ and $(b_1, \dots, b_{n+2})$
are adjacent in $\mathfrak B_n$ if and only if modulo $\mathbb{Z}\cdot (1,1,\dots, 1)$ we have $a_i\leq b_i\leq a_i+1$ for all $i$.
For example, when $n=0$, these vertices form an infinite line as in Figure \ref{Fig1}.
When $n=1$, these vertices give a triangulation of $\mathbb{R}^2$, part of which looks like Figure \ref{Fig2}.
It is important to stress that the vertices we considered above do not give all vertices of $\mathfrak B_n$,
but only those of a part of the building, called an \textit{apartment}. For example, it is not hard to see
that $\mathfrak B_0$ is an infinite tree in which every vertex is adjacent to exactly $(q+1)$ other vertices.
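The valency $q+1$ can be seen by a direct count (a supplementary sketch): the neighbours of a vertex $[L]$ of $\mathfrak B_0$ correspond to the classes of lattices $L'$ with $\pi L\subsetneqq L'\subsetneqq L$, i.e., to the lines in the $2$-dimensional $\mathbb{F}_q$-space $L/\pi L$, and
$$
\#\{\text{lines in } \mathbb{F}_q^{2}\}=\frac{q^{2}-1}{q-1}=q+1.
$$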
Similarly, $\mathfrak B_n$ is very symmetric
in the sense that the simplicial complexes $\mathrm{St}(v)$, $v\in \mathrm{Ver}(\mathfrak B_n)$, are all isomorphic to each other.
To see what this complex is, take a lattice $\cL$ corresponding to $v$. Then $\cL/\pi\cL\cong
\mathbb{F}_q^{n+2}=:V$, and the vertices of $\mathrm{St}(v)$ are in one-to-one
correspondence with the positive-dimensional linear subspaces of $V$
($v$ itself corresponds to $V$). Let $\{V_0,\dots,V_i\}$ be the
linear subspaces corresponding to the vertices $\{v_0,\dots,v_i\}$
of $\mathrm{St}(v)$. Then $\{v_0,\dots,v_i\}$ form an $i$-simplex if and
only if the linear subspaces $\{V_0,\dots,V_i\}$ fit into an ascending sequence
(possibly after reindexing):
$$
\mathcal{F}: V_0\subset V_1\subset \cdots \subset V_i.
$$
The $i$-simplices of $\mathrm{Lk}(v)$ correspond to those $\mathcal{F}$ for which
$V_i\neq V$. Next, we examine $\mathrm{Lk}(v)$ more carefully as a simplicial complex.
\begin{figure}
\begin{tikzpicture}[scale=1.5, semithick, inner sep=.5mm, vertex/.style={circle, fill=black}]
\node (0) at (0, 0) {(0,0)};
\node (1) at (1, 0) {(1,0)};
\node (2) at (2, 0) {(2,0)};
\node (3) at (3, 0) {};
\node (-1) at (-1, 0) {(0,1)};
\node (-2) at (-2, 0) {(0,2)};
\node (-3) at (-3, 0) {};
\path[]
(0) edge (1)
(1) edge (2)
(2) edge[dashed] (3)
(0) edge (-1)
(-1) edge (-2)
(-2) edge[dashed] (-3);
\end{tikzpicture}
\caption{}\label{Fig1}
\end{figure}
\begin{figure}
\begin{tikzpicture}[scale=1, semithick, inner sep=.5mm, vertex/.style={circle, fill=black}]
\node (00) at (0, 0) {(0,0,0)};
\node (20) at (2, 0) {(1,0,0)};
\node (11) at (1, 1) {(1,1,0)};
\node (-11) at (-1, 1) {(0,1,0)};
\node (-20) at (-2, 0) {(0,1,1)};
\node (-1-1) at (-1, -1) {(0,0,1)};
\node (1-1) at (1, -1) {(1,0,1)};
\path[]
(00) edge (20) (00) edge (11) (00) edge (-11) (00) edge (-20) (00) edge (-1-1) (00) edge (1-1);
\path[]
(20) edge (11) (11) edge (-11) (-11) edge (-20) (-20) edge (-1-1) (-1-1) edge (1-1) (1-1) edge (20);
\end{tikzpicture}
\caption{}\label{Fig2}
\end{figure}
\subsection{Complexes of flags}\label{ssCF}
Fix some $n\geq 0$. Let $V$ be a linear space of
dimension $n+2$ over the finite field $\mathbb{F}_q$. A \textit{flag} in $V$ is an ascending sequence
\begin{equation}\label{eq-flag}
\mathcal{F}: F_0\subset F_1\subset \cdots \subset F_i
\end{equation}
of distinct linear subspaces $F_0,\dots, F_i$ of $V$ such that
$F_0\neq 0$ and $F_i\neq V$. The \textit{length} of $\mathcal{F}$ is
$i$. We will refer to a flag of length $i$ as an $i$-flag.
In particular, the $0$-flags are simply the proper non-zero linear
subspaces of $V$. Given two flags
$$
\mathcal{F}: F_0\subset F_1\subset \cdots \subset F_i\quad \text{and} \quad
\mathcal{G}: G_0 \subset G_1\subset \cdots \subset G_j
$$
we say that $\mathcal{G}$ \textit{refines} $\mathcal{F}$, and write $\mathcal{F}\prec \mathcal{G}$, if for
every $0\leq k\leq i$ there is $0\leq t\leq j$ such that $F_k=G_t$.
It is convenient also to have the empty flag $\varnothing$, which is
the empty sequence of linear subspaces; we put $-1$ for the length of
$\varnothing$. The refinement defines a partial ordering
on the set of flags in $V$; the empty flag is refined by every
other flag.
Let $\mathcal{F}$ be a fixed flag of length $\ell$. Consider the following simplicial
complex $X_\mathcal{F}$ (when $\mathcal{F}=\varnothing$ we will also denote this
complex by $X_\varnothing^n$). The vertices of $X_\mathcal{F}$ are the
$(\ell+1)$-flags refining $\mathcal{F}$. The vertices $v_0,\dots, v_h$ form
an $h$-simplex if the corresponding flags are all refined by a
single $(\ell+h+1)$-flag. It is easy to see that $X_\mathcal{F}$ is indeed a
finite simplicial complex of dimension $n-1-\ell$. Since any flag can be refined into an $n$-flag, $X_\mathcal{F}$ satisfies $(\star)$.
Note that the link of a vertex of the Bruhat-Tits building $\mathfrak B_n$ is isomorphic to $X_\varnothing^n$.
Now assume $\mathcal{F}\neq \varnothing$. Let $\mathcal{F}: F_0\subset F_1\subset
\cdots \subset F_{\ell}$, with $\ell\geq 0$. Consider
the array of integers $(t_0, t_1,\dots, t_{\ell+1})$ defined by
\begin{equation}\label{eq-t}
t_j=\left\{
\begin{array}{ll}
\dim(F_0), & \mbox{if }j=0; \\
\dim(F_j)-\dim(F_{j-1}), & \mbox{if }1\leq j\leq \ell; \\
\dim(V)-\dim(F_\ell), & \mbox{if }j=\ell+1.
\end{array}
\right.
\end{equation}
It is not hard to see that
\begin{equation}\label{eqXFdecomp}
X_\mathcal{F}\cong X_\varnothing^{t_0-2}\ast
X_\varnothing^{t_1-2}\ast\cdots \ast X_\varnothing^{t_{\ell+1}-2},
\end{equation}
where $X_\varnothing^{-1}$ denotes the empty complex.
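As a concrete illustration of \eqref{eqXFdecomp}, take $n=2$, so $\dim(V)=4$, and let $\mathcal{F}$ be the $0$-flag consisting of a single $2$-dimensional subspace $F_0\subset V$. Then \eqref{eq-t} gives
$$
(t_0,t_1)=\bigl(\dim(F_0),\,\dim(V)-\dim(F_0)\bigr)=(2,2),
\qquad\text{so}\qquad
X_\mathcal{F}\cong X_\varnothing^{0}\ast X_\varnothing^{0}.
$$
Here $X_\varnothing^{0}$ consists of the $q+1$ lines in a $2$-dimensional space, viewed as a $0$-dimensional complex, so $X_\mathcal{F}$ is the complete bipartite graph on the $q+1$ lines contained in $F_0$ and the $q+1$ hyperplanes containing $F_0$.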
\begin{lem}\label{lemn2.1}
If $\dim(X_\mathcal{F})>0$ then $X_\mathcal{F}$ is connected.
\end{lem}
\begin{proof} First we show that $X_\varnothing^n$ is connected if $n\geq 1$. Let
$x$ and $y$ be two vertices of $X_\varnothing^n$, and let $W_1$ and
$W_2$ be the corresponding subspaces of $V$. Choose a
$1$-dimensional subspace $L_i$ of $W_i$, $i=1,2$. Consider the
subspace $P:=L_1+L_2$ of $V$. Since $n\geq 1$, $P\neq V$, so
$P$ gives a vertex of $X_\varnothing^n$, which we denote by the same
letter. The vertex $P$ is adjacent to both $L_1$ and $L_2$, $L_1$
is adjacent to $x$, and $L_2$ is adjacent to $y$, so there is a
path from $x$ to $y$.
Now assume $\mathcal{F}\neq \varnothing$ and $X_\mathcal{F}$ is given by \eqref{eqXFdecomp}. If
$\dim(X_\mathcal{F})\geq 1$, then either at least two of the
$X_\varnothing^{t_j-2}$ are non-empty, or at least one
$X_\varnothing^{t_j-2}$ has dimension at least $1$. In either case $X_\mathcal{F}$ is
clearly connected.
\end{proof}
\begin{thm}\label{thm-G} Assume $N:=\dim(X_\mathcal{F})\geq 1$. Then for any $0\leq i\leq N-1$
and $\varepsilon>0$ there is a constant $q(\varepsilon, n)$ depending only on
$\varepsilon$ and $n$ such that $m^i(X_\mathcal{F})\geq N-i-\varepsilon$ once $q>q(\varepsilon, n)$.
\end{thm}
The proof of this theorem is quite complicated and will be given in Section \ref{sec3}.
\begin{cor}\label{cor-vanishing} There is a constant $q(n)$ depending only on $n$
such that if $q>q(n)$, then $\tilde{H}^i(X_\mathcal{F})=0$ for all $0\leq i\leq N-1$.
\end{cor}
\begin{proof}
We use induction on $N$ and $i$.
When $N=1$ or $i=0$, the claim follows from Lemma \ref{lemn2.1}, since
$\tilde{H}^0(X_\mathcal{F})=0$ is equivalent to $X_\mathcal{F}$ being connected. Now assume $N>1$ and $i\geq 1$.
For any $v\in \mathrm{Ver}(X_\mathcal{F})$, we have $\mathrm{Lk}(v)\cong X_\mathcal{G}$ for some $\mathcal{G}\succ \mathcal{F}$. Since $\dim(X_\mathcal{G})=N-1$,
by the induction assumption $\tilde{H}^{i-1}(X_\mathcal{G})=0$ for $q$ large enough.
Next, choosing $\varepsilon$ small enough in Theorem \ref{thm-G},
we can make
$$
\lambda^{i-1}_{\min}(X_\mathcal{F})> \frac{N-i}{i+1}.
$$
Now the assumptions of Corollary \ref{CorFI} are satisfied, so $\tilde{H}^i(X_\mathcal{F})=0$.
\end{proof}
\subsection{Main theorem}
Since the link of every vertex of $\mathfrak B_n$ is isomorphic to the simplicial complex
$X^n_\varnothing$, the complex $\mathfrak B_n$ is $(n+1)$-dimensional and satisfies $(\star)$.
\begin{thm}\label{thm3.1}
Let $\Gamma$ be a group acting on $\mathfrak B_n$ so that $(\dag)$ is satisfied. Assume
$\mathfrak B_n/\Gamma$ is finite. There is a constant $q(n)$ depending only on $n$
such that if $q>q(n)$ then $H^i(\mathfrak B_n/\Gamma)=0$ for $1\leq i\leq n$.
\end{thm}
\begin{proof}
Since $\mathrm{Lk}(v)\cong X_\varnothing^n$ for any $v\in \mathrm{Ver}(\mathfrak B_n)$,
Theorem \ref{thm-G} and Corollary \ref{cor-vanishing} imply that there is a constant $q(n)$
depending only on $n$ such that if $q>q(n)$ then for any $1\leq
i\leq n$ we have $\tilde{H}^{i-1}(\mathrm{Lk}(v))=0$ and
$\lambda_{\min}^{i-1}(\mathfrak B_n)=m^{i-1}(X_\varnothing^n)>\frac{n+1-i}{i+1}$.
Now the claim follows from Corollary \ref{CorFII}.
\end{proof}
\begin{rem}\label{rem-ST} It is known that the
cohomology groups $\tilde{H}^i(X_\mathcal{F})$ vanish for $0\leq i\leq
N-1$, without any assumptions on $q$, by a general result
of Solomon and Tits; see Appendix II in \cite{Garland} for a proof.
Assuming this result, to prove Theorem \ref{thm3.1} we only need
the bound $m^i(X^n_\varnothing)\geq n-i-\varepsilon$. On the other hand, the proof of
Theorem \ref{thm-G} is inductive, and requires proving this bound for all $X_\mathcal{F}$.
Another observation is that to prove Theorem \ref{thm3.1} it is enough to
prove $m^0(X_\mathcal{F})\geq N-\varepsilon$ for all $\mathcal{F}$. Indeed, the link of any simplex in $\mathfrak B_n$
is isomorphic to some $X_\mathcal{F}$, so one can appeal to Corollary \ref{corNN1}
to get the vanishing of the cohomology.
\end{rem}
\begin{rem}\label{rem4.2} In $\S$\ref{ss4.1} we will compute
that $m^0(X^1_\varnothing)>1/2$ and $m^0(X^2_\varnothing)>1$.
Hence for $n=1, 2$ Garland's method proves the vanishing of $H^1(\mathfrak B_n/\Gamma)$ for
all $q$. On the other hand,
$m^1(X^2_\varnothing)=1/3$ when $q=2$. To apply Corollary \ref{CorFII} to show that $H^2(\mathfrak B_2/\Gamma)=0$ we need
$\lambda^1_{\min}(\mathfrak B_2/\Gamma)>1/3$, so
we need to assume $q>2$.
\end{rem}
There is an abundance of groups $\Gamma$ satisfying $(\dag)$. The most
important examples of such groups come from arithmetic. One possible
construction proceeds as follows. Let $F=\mathbb{F}_q(T)$ be the field of rational functions
in indeterminate $T$ with $\mathbb{F}_q$ coefficients. Fix a place $\infty=1/T$ of $F$.
Let $A=\mathbb{F}_q[T]$ be the polynomial ring. Let $K=\mathbb{F}_q(\!(1/T)\!)$ be the completion
of $F$ at $\infty$. Let $D$ be a central division algebra over $F$
of dimension $(n+2)^2$. Assume $D$ is split at $\infty$, i.e.,
$D\otimes_F K\cong \mathrm{Mat}_{n+2}(K)$. Let $\cD$ be a
maximal $A$-order in $D$; see \cite{Reiner} for the definitions. Let $\cD^\times$ be the
group of multiplicative units in $\cD$.
The quotient $\cD^\times/\mathbb{F}_q^\times$ can be identified with
a discrete, cocompact subgroup of $\mathrm{PGL}_{n+2}(K)$. Replacing $\cD^\times/\mathbb{F}_q^\times$
by a subgroup $\Gamma\subset \cD^\times/\mathbb{F}_q^\times$ of finite index if necessary,
we get a group which naturally acts on $\mathfrak B_n$ and satisfies $(\dag)$.
Moreover, the quotient $\mathfrak B_n/\Gamma$ is finite.
For these facts we refer to \cite[p. 140]{Laumon}, \cite{Li}, \cite{LSV}, \cite{Serre}.
Theorem
\ref{thm3.1} implies $H^i(\mathfrak B_n/\Gamma)=0$ for all $1\leq i\leq n$. In
contrast, $H^{n+1}(\mathfrak B_n/\Gamma)$ is usually quite large. Its
dimension approximately equals the volume of $\mathrm{PGL}_{n+2}(K)/\Gamma$
with respect to an appropriately normalized Haar measure on
$\mathrm{PGL}_{n+2}(K)$; see \cite{Serre}. The simplicial complexes $\mathfrak B_n/\Gamma$ are often used
in the construction of Ramanujan complexes; see
\cite{Li}, \cite{LSV}.
\subsection{Property (T)} Garland's method has been applied to prove that
certain groups have Kazhdan's property (T).
Let $\Gamma$ be a group generated by a finite set $S$. Let $\pi:\Gamma\to U(H_\pi)$ be a
unitary representation. We say that $\pi$ almost has invariant vectors if for every $\varepsilon>0$
there exists a non-zero vector $u_\varepsilon$ in the Hilbert space $H_\pi$ such that
$|\!|\pi(s)u_\varepsilon-u_\varepsilon|\!|\leq \varepsilon |\!|u_\varepsilon|\!|$
for every $s\in S$.
The group $\Gamma$ is said to have \textit{property (T)} if
every unitary representation of $\Gamma$ which almost has invariant vectors has a non-zero invariant vector.
Property (T) has important applications to representation theory,
ergodic theory, geometric group theory and the theory of networks.
For example, Margulis used groups with property (T) to give the first explicit examples of expanding graphs
and to solve the Banach-Ruziewicz problem, which asks whether the Lebesgue measure is the only normalized rotationally invariant
finitely additive measure on the $n$-dimensional sphere.
We refer to Lubotzky's book \cite{Lubotzky} for a discussion of property (T) and its applications.
It is known that a group $\Gamma$ has property (T) if and only if for any unitary representation $\pi$ of $\Gamma$, the
first cohomology group $H^1(\Gamma, \pi)$ is zero. This suggests the following line of attack
to prove that $\Gamma$ has property (T).
Suppose that $\Gamma$ is the fundamental group of a finite simplicial complex $X$.
By group cohomology, $H^1(\Gamma, \pi)=H^1(X, E_\pi)$, where $E_\pi$ is a local system on $X$
associated to $\pi$. Then one can try to prove the vanishing of $H^1(X, E_\pi)$ by a
generalization of Garland's method. This approach in the case when $X$ is a $2$-dimensional finite simplicial complex
was pursued independently by Ballmann and {\'S}wiatkowski \cite{BS}, Pansu \cite{Pansu}, and \.Zuk \cite{Zuk}.
For example, in \cite{BS}, the authors prove the following theorem: Assume $X$ is a
$2$-dimensional finite simplicial complex, $\mathrm{Lk}(v)$ is a connected graph for any vertex of $X$, and $\lambda^0_{\min}(X)>1/2$.
Then $\Gamma=\pi_1(X)$ has property (T). Note that these assumptions are the same as in Corollary \ref{CorFI}
for $n=2$. They are fulfilled when $X$ is a finite quotient of a 2-dimensional Bruhat-Tits building.
These results gave new explicit examples of groups with property (T) which
were significantly different from the earlier known examples.
In \cite{DJ1}, \cite{DJ2}, Dymara and Januszkiewicz applied a generalization of Garland's method to groups acting on buildings of arbitrary type
and dimension (e.g., hyperbolic buildings), and produced examples of groups having property (T) not coming from locally symmetric spaces or Euclidean
buildings.
\section{Complexes of flags}\label{sec3}
The main goal of this section is to prove Theorem \ref{thm-G}.
The notation will be the same as in $\S$\ref{ssCF}.
In particular, $V$ is a linear space of dimension $n+2$ over the finite field $\mathbb{F}_q$, and $\mathcal{F}$
is a (possibly empty) flag in $V$ of length $\ell$. We denote
$$
N=\dim(X_\mathcal{F}).
$$
The proof of Theorem \ref{thm-G} proceeds by induction on $N$ and $i$. The base case $N=1$
follows from a direct calculation. We will carry out this calculation in $\S$\ref{ss4.1}.
In the same subsection we give some explicit
examples which provide a sense of the complexity of the eigenvalues of $\Delta$ acting on $C^i(X_\mathcal{F})$.
These examples suggest a remarkable asymptotic behaviour of the eigenvalues of $\Delta$ as $q\to \infty$,
which we state as a conjecture.
The inductive step, discussed in $\S$\ref{ss4.2}, has two parts.
Assuming the claim holds for $i=0$ and all $N$, the proof of the general case
quickly follows from the inequality in Theorem \ref{thmFIeigen}.
On the other hand, the argument which proves the claim for $i=0$ and $N\geq 1$ is fairly intricate.
The outline is approximately the following: We start with a $\Delta$-eigenfunction $f\in C^0(X_\mathcal{F})$
having eigenvalue $c>0$. The machinery developed in $\S$\ref{ss3.2} cannot be applied to this function, since
we cannot apply the operator $\tau_v$ directly to $f$. Instead, we introduce a parameter $R\in \mathbb{R}$,
and multiply the values of $f$ on an appropriate subset of $\mathrm{Ver}(X_\mathcal{F})$ by $R$. The resulting
function $f_\alpha$ is no longer an eigenfunction of $\Delta$, but we get some
flexibility because we can vary $R$. We apply
the machinery of $\S$\ref{ss3.2} to $df_\alpha\in C^1(X_\mathcal{F})$.
Choosing $R$ appropriately forces some miraculous cancellations,
which in the end give the desired bound $c\geq N-\varepsilon$.
In $\S$\ref{ss4.3}, we prove some auxiliary results about the eigenvalues of
curvature transformations. These results are not used elsewhere in the paper, and are given
as some evidence for the conjecture in $\S$\ref{ss4.1}.
\subsection{The base case and explicit examples}\label{ss4.1} For $N=1$ we
need to consider only $\Delta$ acting on $C^0(X_\mathcal{F})$, since $0\leq i\leq N-1$.
\begin{lem}\label{lem-n2.2} If $N=1$, then
$m^0(X_\mathcal{F})$ is equal either to $1$ or $1-\frac{\sqrt{q}}{q+1}$.
\end{lem}
\begin{proof} If $\dim(X_\mathcal{F})=1$ then the length of $\mathcal{F}$ is $\ell=n-2$. Let $(t_0,\dots,
t_{n-1})$ be defined by \eqref{eq-t}. Since $t_i\geq 1$ and $\sum_{i=0}^{n-1} (t_i-1)=2$,
either exactly two $t_i$, $t_j$, $i<j$, are equal to $2$ and all
others are $1$, or exactly one $t_i$ is equal to $3$ and all others
are $1$. In the first case $X_\mathcal{F}\cong X_\varnothing^0\ast
X_\varnothing^0$, in the second case $X_\mathcal{F}\cong X_\varnothing^1$.
In the first case $X_\mathcal{F}$ is a $(q+1)$-regular bipartite graph with $2(q+1)$ vertices.
\begin{comment}More precisely,
$\mathrm{Ver}(X_\mathcal{F})$ is a union of two disjoint subsets $A$ and $B$ of cardinality $m=(q+1)$ each.
A vertex in $A$ is not adjacent to any other vertex in $A$ but is adjacent to
every vertex in $B$, and similarly, every vertex in $B$ is not
adjacent to a vertex in $B$ but is adjacent to every vertex in $A$.
\end{comment}
It is easy to check that $(q+1)\Delta$ acts on $C^0(X_\mathcal{F})$ as the
matrix
$$
(q+1)I_{2(q+1)}-\begin{pmatrix} 0 & J_{q+1} \\ J_{q+1} & 0\end{pmatrix}.
$$
The minimal polynomial of this matrix is $x(x-(q+1))(x-2(q+1))$, so the
eigenvalues of $\Delta$ are $0$, $1$, and $2$.
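As a quick numerical sanity check (not part of the proof), one can verify with a short computation that the displayed matrix annihilates the claimed minimal polynomial; the function name and the sample values of $q$ are ours.

```python
# Quick sanity check (illustrative only): the matrix
# B = (q+1) I - [[0, J],[J, 0]] on 2(q+1) vertices should satisfy
# B (B - (q+1) I)(B - 2(q+1) I) = 0, i.e. Delta has eigenvalues 0, 1, 2.
def bipartite_case_check(q):
    m = q + 1
    n = 2 * m
    # entry is -1 exactly when i and j lie in different parts (bipartite adjacency)
    B = [[(m if i == j else 0) - (1 if (i < m) != (j < m) else 0)
          for j in range(n)] for i in range(n)]

    def shift(M, c):  # M - c*I
        return [[M[i][j] - (c if i == j else 0) for j in range(n)]
                for i in range(n)]

    def mul(A, C):
        return [[sum(A[i][k] * C[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    P = mul(mul(B, shift(B, m)), shift(B, 2 * m))
    return all(x == 0 for row in P for x in row)

print(all(bipartite_case_check(q) for q in (2, 3, 4)))  # -> True
```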
In the second case, $X_\mathcal{F}$ is isomorphic to the graph
whose vertices correspond to $1$ and $2$-dimensional
subspaces of a $3$-dimensional vector space $V$ over $\mathbb{F}_q$, two
vertices being adjacent if one of the corresponding subspaces is
contained in the other. With a slight abuse of terminology, we will
call $1$- and $2$-dimensional subspaces lines and planes,
respectively. The number of lines and planes in $V$ is $m=q^2+q+1$
each. Let $A=(a_{ij})$ be the $m\times m$ matrix whose rows are
enumerated by the lines in $V$ and columns by the planes, and
$a_{ij}=-1$ if the $i$th line lies in the $j$th plane, and is $0$
otherwise. We can choose a basis of $C^0(X_\mathcal{F})$ so that
$(q+1)\Delta$ acts as the matrix
$$
(q+1)I_{2m}+\begin{pmatrix} 0 & A \\ A^t & 0\end{pmatrix},
$$
where $A^t$ denotes the transpose of $A$.
Let $M=\begin{pmatrix} 0 & A \\ A^t & 0\end{pmatrix}$. Since any two
distinct lines lie in a unique plane and any line lies in $(q+1)$
planes, $AA^t=qI_m+J_m$. By a similar argument, $A^tA=qI_m+J_m$.
Hence
$$
M^2= qI_{2m}+\begin{pmatrix} J_m & 0 \\ 0 & J_m\end{pmatrix}.
$$
This implies that $(M^2-qI_{2m})(M^2-(q+1)^2I_{2m})=0$. Since
$(q+1)\Delta - (q+1)I_{2m}=M$, we conclude that $(q+1)\Delta$
satisfies the polynomial equation
$$
x(x-(2q+2))(x^2-(2q+2)x+(q^2+q+1))=0.
$$
It is not hard to see that this is in fact the minimal polynomial of
$(q+1)\Delta$. Hence the eigenvalues of $\Delta$ are $0$, $2$, and
$1\pm\frac{\sqrt{q}}{q+1}$.
\end{proof}
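The relation $AA^t=qI_m+J_m$ used in the proof can be checked directly for a small field; the following sketch (not part of the argument) does this for $q=2$, where nonzero vectors of $\mathbb{F}_2^3$ are in bijection with lines, and nonzero functionals with planes.

```python
# Illustrative check for q = 2: build the line/plane incidence matrix A
# (entries -1, as in the text) for F_2^3 and verify A A^t = q I_m + J_m,
# where m = q^2 + q + 1 = 7.
from itertools import product

q = 2
vecs = [v for v in product(range(q), repeat=3) if any(v)]  # 7 nonzero vectors
# over F_2, nonzero vectors <-> lines and nonzero functionals <-> planes
lines, planes = vecs, vecs
A = [[-1 if sum(x * y for x, y in zip(l, p)) % q == 0 else 0
      for p in planes] for l in lines]
m = len(lines)  # 7
AAt = [[sum(A[i][k] * A[j][k] for k in range(m)) for j in range(m)]
       for i in range(m)]
expected = [[(q if i == j else 0) + 1 for j in range(m)] for i in range(m)]
print(AAt == expected)  # -> True
```

The diagonal entries count the $q+1$ planes through a fixed line, and the off-diagonal entries the unique plane through two distinct lines, matching the argument in the proof.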
Denote by $\mathrm{min.pol}^i_n(x)$ the minimal polynomial of $\Delta$ acting on
$C^i(X^n_\varnothing)$. The proof of Lemma \ref{lem-n2.2} shows that
$$
\mathrm{min.pol}^0_1(x)=x(x-2)\left(x^2-2x+\frac{q^2+q+1}{q^2+2q+1}\right).
$$
Note that $$m^0(X^1_\varnothing)=1- \frac{\sqrt{q}}{q+1}$$
is always strictly larger than $1/2$ and tends to $1$ as
$q\to \infty$. Moreover, the whole polynomial tends coefficientwise to the
polynomial $x(x-2)(x-1)^2$.
Now assume $n=2$. In this case it is considerably harder to compute
the minimal polynomials. With the help of a computer, we deduced that
\begin{align*}
\mathrm{min.pol}^0_2(x)=&x(x-2)\left(x-3\right)\left(x-\frac{2q^2+3q+2}{q^2+q+1}\right)\\
&\times\left(x^2-\frac{4q^2+3q+4}{q^2+q+1}x+\frac{4q^2+4}{q^2+q+1}\right).
\end{align*}
This implies
$$
m^0(X^2_\varnothing)= \frac{1}{2(q^2+q+1)}\left(4q^2+3q+4-\sqrt{8q^3+9q^2+8q}\right)
$$
is at least $1.08$ and tends to $2$ from below as $q\to
\infty$. The whole polynomial tends coefficientwise to the
polynomial $x(x-3)(x-2)^4$ as $q\to \infty$. Next,
\begin{align*}
\mathrm{min.pol}^1_2(x)=&x(x-1)(x-2)(x-3)\\
&\times\left(x^2-2x+\frac{q^2+1}{q^2+2q+1}\right)
\left(x^2-3x+\frac{2q^2+2q+2}{q^2+2q+1}\right)\\
&\times\left(x^2-4x+\frac{4q^2+6q+4}{q^2+2q+1}\right).
\end{align*}
In this case $$m^1(X^2_\varnothing)=1-\frac{\sqrt{2q}}{q+1}.$$
It is easy to see that $1/3\leq m^1(X^2_\varnothing)<1$. Moreover, $m^1(X^2_\varnothing)$ is strictly larger
than $1/3$ for $q>2$ and tends to $1$ as $q\to \infty$; the whole
polynomial tends to $x(x-3)(x-2)^4(x-1)^4$.
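The closed forms above are easy to probe numerically. The following sketch (illustrative only; the function names are ours) evaluates both formulas at a few prime powers and confirms the stated bounds and limits.

```python
# Numerical sanity check of the two closed forms above:
# m^0(X^2) = (4q^2+3q+4 - sqrt(8q^3+9q^2+8q)) / (2(q^2+q+1)), which should be
# >= 1.08 and tend to 2 from below, and m^1(X^2) = 1 - sqrt(2q)/(q+1), a root
# of x^2 - 2x + (q^2+1)/(q+1)^2 that equals 1/3 at q = 2 and tends to 1.
from math import sqrt

def m0(q):
    return (4*q*q + 3*q + 4 - sqrt(8*q**3 + 9*q*q + 8*q)) / (2*(q*q + q + 1))

def m1(q):
    return 1 - sqrt(2 * q) / (q + 1)

qs = (2, 3, 4, 5, 7, 9, 1000)
assert all(1.08 <= m0(q) < 2 for q in qs)
assert abs(m0(10**6) - 2) < 0.01                      # m^0 tends to 2
assert all(abs(m1(q)**2 - 2*m1(q) + (q*q + 1)/(q + 1)**2) < 1e-9 for q in qs)
assert abs(m1(2) - 1/3) < 1e-12 and m1(10**8) > 0.999  # m^1 tends to 1
print("checks passed")
```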
The previous examples, combined with some calculations for $n=3$
which we do not list, suggest a remarkable property of the eigenvalues of $\Delta$
acting on $C^i(X^n_\varnothing)$, $0\leq i\leq n-1$. We state it as a conjecture.
\begin{conj}\label{conj}
\begin{enumerate}
\item The number of distinct eigenvalues of $\Delta$ depends only on $i$, i.e., does not depend on $q$, even though the
eigenvalues themselves and the dimension of $C^i(X^n_\varnothing)$ depend on $q$.
\item The positive eigenvalues of $\Delta$, which in general are neither rational nor integral,
tend to the integers
$$
n-i,\ n-i+1,\ \dots,\ n+1
$$ as $q\to \infty$.
\end{enumerate}
\end{conj}
\subsection{Inductive step}\label{ss4.2} Since we proved Theorem \ref{thm-G}
for $N=1$, we assume $N\geq 2$. Let $1\leq i\leq N-1$ be given.
Assume for the moment that we proved the bound in Theorem \ref{thm-G} for $\Delta$
acting on $C^{i-1}(X_\mathcal{G})$, where $\mathcal{G}$ is any flag with $\dim(X_\mathcal{G})= N-1$.
Since for any $v\in \mathrm{Ver}(X_\mathcal{F})$ its link $\mathrm{Lk}(v)$
is isomorphic to $X_\mathcal{G}$ for some $\mathcal{G}\succ\mathcal{F}$ with $\dim(X_\mathcal{G})=N-1$,
we get
\begin{align*}
\lambda^{i-1}_{\min}(X_\mathcal{F}) &\geq (N-1)-(i-1)-\varepsilon'=N-i-\varepsilon',
\end{align*}
where $\varepsilon'=i\cdot\varepsilon/(i+1)$. Then, by Theorem \ref{thmFIeigen}, we have
\begin{equation}\label{eqNNN2}
m^i(X_\mathcal{F}) \geq \frac{(i+1)\cdot \lambda^{i-1}_{\min}(X_\mathcal{F})-(N-i)}{i} \geq N-i-\varepsilon.
\end{equation}
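The arithmetic behind the last simplification is easy to verify exactly; the sketch below (illustrative only, with our own function name) checks it with rational arithmetic over a grid of parameters.

```python
# Exact-arithmetic check of the displayed bound: with eps' = i*eps/(i+1) and
# lambda^{i-1}_min >= N - i - eps', the quantity ((i+1)*lambda - (N-i))/i
# simplifies to exactly N - i - eps.
from fractions import Fraction

def lower_bound(N, i, eps):
    eps1 = Fraction(i, i + 1) * eps      # eps' = i*eps/(i+1)
    lam = N - i - eps1                   # inductive bound for the links
    return ((i + 1) * lam - (N - i)) / Fraction(i)

eps = Fraction(1, 10)
ok = all(lower_bound(N, i, eps) == N - i - eps
         for N in range(2, 7) for i in range(1, N))
print(ok)  # -> True
```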
Therefore, to complete the proof of Theorem \ref{thm-G} it remains to show
that
\begin{equation}\label{eqNNN}
m^0(X_\mathcal{F})\geq N-\varepsilon.
\end{equation} This will occupy the rest of this subsection.
\begin{rem}
Instead of induction, one can deduce the lower bound \eqref{eqNNN2}
directly from \eqref{eqNNN} using Corollary \ref{corFIeigen}. Indeed, the link of any
$(i-1)$-simplex in $X_\mathcal{F}$ is isomorphic to some $X_\mathcal{G}$ with $\dim(X_\mathcal{G})=N-i$, so
assuming $m^0(X_\mathcal{G})\geq (N-i)-\varepsilon'$, $\varepsilon'=\varepsilon/(i+1)$, Corollary \ref{corFIeigen} gives
$$
m^i(X_\mathcal{F})\geq (i+1)(N-i-\varepsilon')-i(N-i)= N-i-\varepsilon.
$$
On the other hand,
the proof of Corollary \ref{corFIeigen} uses a similar inductive argument to the one above.
\end{rem}
We start by proving some preliminary lemmas. For an integer $m\geq 1$ we put $(m)_q=\prod_{k=1}^m(q^k-1)$, and we put $(0)_q=1$.
The number of $d$-dimensional subspaces in an $m$-dimensional linear space over $\mathbb{F}_q$ is equal to
$$
\gc{m}{d}:=\frac{(m)_q}{(d)_q(m-d)_q}.
$$
With this notation it is easy to give a formula for the number of $n$-flags refining a given flag:
\begin{lem}\label{lem-3.1}
Let $s$ be a simplex in $X_\mathcal{F}$ corresponding to $\mathcal{G}\succ \mathcal{F}$. Let $(r_0,\dots, r_{j})$
be the integers defined for $\mathcal{G}$ by \eqref{eq-t}. The number of $N$-simplices in $X_\mathcal{F}$
containing $s$ is given by the formula
$$
w(s)=\prod_{k=0}^j\prod_{z=1}^{r_k}\gc{z}{1}=\prod_{k=0}^{j}(r_k)_q/(1)^{r_k}_q.
$$
\end{lem}
Let $v\in \mathrm{Ver}(X_\mathcal{F})$ and let $\mathcal{G}$ be the
corresponding $(\ell+1)$-flag. There is a unique subspace $G$ in the
sequence of $\mathcal{G}$ which does not occur in $\mathcal{F}$. Let
$$\mathrm{Type}(v):=\dim(G).$$
Denote the set of types of vertices of $X_\mathcal{F}$ by $\mathfrak T$.
It is easy to see that the vertices of a simplex in $X_\mathcal{F}$
have distinct types. Moreover, $\# \mathfrak T=N+1$.
\begin{lem}\label{eq-yet} Let $v\in \mathrm{Ver}(X_\mathcal{F})$. Assume $\alpha\in \mathfrak T$ is fixed and $\alpha\neq \mathrm{Type}(v)$. Then
$$
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [v, x]\in \widehat{S}_{1}(X_\mathcal{F}) \\ \mathrm{Type}(x)=\alpha}}w([v,x]) = w(v).
$$
\end{lem}
\begin{proof}
Let $\mathcal{G}$ be the flag of length $i:=\ell+1$ in $V$ corresponding
to $v$. Let $(t_0,\dots, t_{i+1})$ be the array \eqref{eq-t} of $\mathcal{G}$. Let $[v, x]\in \widehat{S}_{1}(X_\mathcal{F})$
and $\mathcal{G}'\succ \mathcal{G}$ be the corresponding $(i+1)$-flag. There is a unique $t_a$
such that the array of $\mathcal{G}'$ is $(t_0,\dots,t_a', t_a'',\dots, t_{i+1})$
with $t_a'+t_a''=t_a$. Moreover, the type of $x$ uniquely determines
$a$ and $t_a'$. The number of $[v, x]\in \widehat{S}_{1}(X_\mathcal{F})$ with $\mathrm{Type}(x)=\alpha$ is
equal to $\gc{t_a}{t_a'}$. Using Lemma \ref{lem-3.1}, we compute
$$
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [v, x]\in \widehat{S}_{1}(X_\mathcal{F}) \\ \mathrm{Type}(x)=\alpha}}\frac{w([v,x])}{w(v)}
=\gc{t_a}{t_a'}\frac{(1)_q^{t_a}
(t_a')_q(t_a'')_q}{(t_a)_q(1)_q^{t_a'}(1)_q^{t_a''}}=1.
$$
\end{proof}
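The cancellation at the end of the proof is a purely combinatorial identity in the symbols $(m)_q$, and can be tested exactly; the sketch below (not part of the argument, with our own helper names) does so for several $q$, $t_a$, $t_a'$.

```python
# Illustrative exact check of the cancellation above: with
# (m)_q = prod_{k=1}^m (q^k - 1), the quantity
# gauss(t,t') * (1)_q^t * (t')_q * (t'')_q / ((t)_q * (1)_q^{t'} * (1)_q^{t''})
# equals 1 whenever t' + t'' = t.
from fractions import Fraction
from math import prod

def br(m, q):        # (m)_q, with (0)_q = 1 (empty product)
    return prod(q ** k - 1 for k in range(1, m + 1))

def gauss(m, d, q):  # number of d-dimensional subspaces of an m-dimensional space
    return br(m, q) // (br(d, q) * br(m - d, q))

ok = all(
    Fraction(gauss(t, t1, q) * br(1, q) ** t * br(t1, q) * br(t - t1, q),
             br(t, q) * br(1, q) ** t1 * br(1, q) ** (t - t1)) == 1
    for q in (2, 3, 5) for t in range(1, 6) for t1 in range(t + 1))
print(ok)  # -> True
```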
\begin{rem} Lemma \ref{eq-yet} is a refined version of Lemma \ref{lem-w}. Indeed,
$$
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [v, x]\in \widehat{S}_{1}(X_\mathcal{F}) }}w([v,x]) =
\sum_{\substack{\alpha\in \mathfrak T\\ \alpha\neq \mathrm{Type}(v)}}
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [v, x]\in \widehat{S}_{1}(X_\mathcal{F}) \\ \mathrm{Type}(x)=\alpha}}w([v,x]) =
\sum_{\substack{\alpha\in \mathfrak T\\ \alpha\neq \mathrm{Type}(v)}} w(v) = Nw(v).
$$
\end{rem}
Let $f\in C^0(X_\mathcal{F})$ and let $R\in \mathbb{R}$ be a fixed constant. For each
$\alpha\in \mathfrak T$ define the function $f_\alpha\in C^0(X_\mathcal{F})$ by
$$
f_\alpha(v) =
\begin{cases}
R\cdot f(v), & \text{if $\mathrm{Type}(v)= \alpha$};\\
f(v), & \text{if $\mathrm{Type}(v)\neq \alpha$}.
\end{cases}
$$
Also, for $i\geq 0$ define a linear transformation $\rho_\alpha: C^i(X_\mathcal{F})\to C^i(X_\mathcal{F})$ by
$$
\rho_\alpha=\sum_{\substack{v\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\alpha}}\rho_v.
$$
\begin{lem} We have
\begin{align}
\label{eq-ny3}
\sum_{\alpha\in \mathfrak T}(1-\rho_\alpha)df&=(N-1)df, \\
\label{eq-ny}
\sum_{\substack{v\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\alpha}} (\Delta\rho_v df_\alpha,
\rho_vdf_\alpha) &=(\Delta \rho_\alpha df_\alpha, \rho_\alpha df_\alpha), \\
\label{lem2.3}
(\rho_\alpha df_\alpha, df_\alpha) & =(df_\alpha,
df_\alpha)-((1-\rho_\alpha)df, df),\\
\label{lem2.2}
(\Delta\rho_\alpha df_\alpha, \rho_\alpha df_\alpha) &=((1-\rho_\alpha)df,df).
\end{align}
\end{lem}
\begin{proof} Equation \eqref{eq-ny3} follows from a straightforward calculation:
$$
\sum_{\alpha\in \mathfrak T}(1-\rho_\alpha)df=(N+1)df-\sum_{v\in
\mathrm{Ver}(X_\mathcal{F})}\rho_v df =(N+1)df-2df=(N-1)df.
$$
To prove \eqref{eq-ny}, expand its right-hand side as
$$
(d \rho_\alpha df_\alpha, d\rho_\alpha df_\alpha)=\sum_{\substack{v, v'\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\mathrm{Type}(v')=\alpha}}
(d\rho_v df_\alpha, d\rho_{v'} df_\alpha).
$$
Now let $s=[x,y,z]\in S_2(X_\mathcal{F})$. Since the vertices of the same simplex have distinct types, only one of $x,y,z$ can be of type $\alpha$.
Therefore, if $v\neq v'$ but $\mathrm{Type}(v)=\mathrm{Type}(v')=\alpha$, then $d\rho_v df_\alpha(s)\cdot d\rho_{v'} df_\alpha(s)=0$.
This implies that in the above sum only the terms with $v=v'$ are possibly non-zero, so
$$
\sum_{\substack{v, v'\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\mathrm{Type}(v')=\alpha}} (d\rho_v df_\alpha, d\rho_{v'} df_\alpha) =
\sum_{\substack{v \in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\alpha}} (d\rho_v df_\alpha, d\rho_v df_\alpha).
$$
To prove \eqref{lem2.3}, note that if $s\in S_1(X_\mathcal{F})$ contains a vertex of type
$\alpha$, then $(1-\rho_\alpha)g(s)=0$ for any $g\in C^1(X_\mathcal{F})$.
On the other hand, if $s$ does not contain a vertex of type $\alpha$, then
$(1-\rho_\alpha)df_\alpha(s)= df_\alpha(s)=df(s)=(1-\rho_\alpha)df(s)$. Hence
$$((1-\rho_\alpha)df_\alpha, df_\alpha)=((1-\rho_\alpha)df, df).$$ Now
$$
((1-\rho_\alpha)df, df)=((1-\rho_\alpha)df_\alpha,
df_\alpha)=(df_\alpha, df_\alpha)-(\rho_\alpha df_\alpha,
df_\alpha).
$$
Finally, to prove \eqref{lem2.2}, let $s=[x,y,z]\in S_2(X_\mathcal{F})$. If none of the vertices
of $s$ has type $\alpha$ then $d\rho_\alpha d f_\alpha(s)=0$. If $s$
has a vertex of type $\alpha$, then such a vertex is unique. Without
loss of generality, assume $\mathrm{Type}(x)=\alpha$. Then
$$
d\rho_\alpha d f_\alpha([x,y,z])=f(y)-f(z)=-df([y,z]).
$$
Hence
$$
(d\rho_\alpha d f_\alpha, d\rho_\alpha d f_\alpha)=\sum_{\substack{v\in \mathrm{Ver}(X_\mathcal{F})\\
\mathrm{Type}(v)=\alpha}}\sum_{s\in \widehat{S}_1(\mathrm{Lk}(v))}w([v,s])df(s)^2
$$
$$
=\sum_{s\in \widehat{S}_1(X_\mathcal{F})} (1-\rho_\alpha)df(s)\cdot df(s)
\sum_{\substack{v\in \mathrm{Ver}(\mathrm{Lk}(s))\\
\mathrm{Type}(v)=\alpha}}w([v,s])= ((1-\rho_\alpha)df, df),
$$
where in the last equality we used Lemma \ref{eq-yet}.
\end{proof}
\begin{lem}\label{lemd31}
Let $f\in C^0(X_\mathcal{F})$ and suppose $\Delta f=c\cdot f$. Then
$$
\sum_{\alpha\in \mathfrak T}(\Delta f_\alpha,
f_\alpha)=\left[(N-c)(R-1)^2+c(R^2+N)\right]\cdot (f,f).
$$
\end{lem}
\begin{proof} Fix some type $\alpha$ and
let $g\in C^0(X_\mathcal{F})$ be a function such that $g(v)=0$ if
$\mathrm{Type}(v)\neq \alpha$. Then $(\Delta g, g)=N\cdot (g,g)$. Indeed,
\begin{align*}
(\Delta g, g) &=(dg,dg)=\sum_{[x,v]\in \widehat{S}_1(X_\mathcal{F})}w([x,v])(g(v)-g(x))^2 \\
& =\sum_{\substack{v\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\alpha}}g(v)^2\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [x,v]\in \widehat{S}_1(X_\mathcal{F})}}w([x,v]) \\
&\overset{\mathrm{Lem.} \ref{lem-w}}{=}N\sum_{\substack{v\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(v)=\alpha}}w(v)\cdot g(v)^2=N\cdot
(g,g).
\end{align*}
If we apply this to $g=f_\alpha-f$, then we get
\begin{equation}\label{eq-d31}
(\Delta f_\alpha, f_\alpha)=N\cdot (f_\alpha,
f_\alpha)-2(N-c)(f_\alpha, f)+(N-c)(f,f).
\end{equation}
Since the cardinality of $\mathfrak T$ is $(N+1)$,
$$
\sum_{\alpha\in \mathfrak T}f_\alpha= (N+R)\cdot f\quad \text{and} \quad
\sum_{\alpha\in \mathfrak T}(f_\alpha, f_\alpha)=(N+R^2)\cdot (f,f).
$$
Summing \eqref{eq-d31} over all types and using the previous two
equalities, we get the claim.
\end{proof}
\begin{prop}\label{prop4.15-15} For any $\varepsilon>0$ there is a constant $q(\varepsilon, n)$ depending only on
$\varepsilon$ and $n$, such that if $q>q(\varepsilon, n)$ then $m^0(X_\mathcal{F})\geq
N-\varepsilon$.
\end{prop}
\begin{proof} Since Lemma \ref{lem-n2.2} implies this claim for $N=1$, we can assume
from now on that $N\geq 2$.
Let $f\in C^0(X_\mathcal{F})$ and suppose $\Delta
f=c\cdot f$. If $\mathrm{Type}(v)=\alpha$, then
\begin{align}
\Delta f_\alpha(v) = \sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}(Rf(v)-f(x))
=NRf(v)-C,
\end{align}
where $C$ does not depend on $R$ since $\mathrm{Type}(x)\neq \mathrm{Type}(v)$ if $[x,v]\in S_1(X_\mathcal{F})$.
If we take $R=1$, then $f_\alpha=f$, so $\Delta f_\alpha(v)=c\cdot f(v)$.
We conclude that $C=(N-c)f(v)$, and
$\Delta f_\alpha(v)=(NR-(N-c))f(v)$.
From now on we assume that $R=(N-c)/N$. With this choice of $R$ our calculation implies
\begin{equation}\label{eq-deltaalpha}
\Delta f_\alpha(v)=0\quad \text{if } \mathrm{Type}(v)=\alpha.
\end{equation}
Let $v\in \mathrm{Ver}(X_\mathcal{F})$ be a vertex of type $\alpha$. By Lemma \ref{prop7.14},
$$
(\Delta \rho_v df_\alpha, \rho_vdf_\alpha)= (\Delta_v\tau_v df_\alpha, \tau_v df_\alpha)_v.
$$
Since
$$
(\mathbf{1}, \tau_v df_\alpha)_v \overset{\eqref{eq-1tuaf}}{=} -w(v)\delta df_\alpha(v)= -w(v)\Delta f_\alpha(v) \overset{\eqref{eq-deltaalpha}}{=} 0,
$$
we can use the argument in the proof of Lemma \ref{lemDec5} to conclude
$$
(\Delta_v\tau_v df_\alpha, \tau_v df_\alpha)_v\geq m^{0}(\mathrm{Lk}(v))(\tau_v df_\alpha, \tau_v df_\alpha)_v
\geq \lambda^0_{\min}(X_\mathcal{F}) (\tau_v df_\alpha, \tau_v df_\alpha)_v.
$$
(Note that $\mathrm{Lk}(v)$ is connected since $\mathrm{Lk}(v)\cong X_\mathcal{G}$ for some $\mathcal{G}$ with $\dim(X_\mathcal{G})=N-1\geq 1$.)
Hence
\begin{align*}
(\Delta \rho_v df_\alpha, \rho_vdf_\alpha)\geq \lambda^0_{\min}(X_\mathcal{F}) (\tau_v df_\alpha, \tau_v df_\alpha)_v
&\overset{\mathrm{Lem}. \ref{prop7.12}}{=} \lambda^0_{\min}(X_\mathcal{F}) (\rho_v df_\alpha, \rho_v df_\alpha) \\
& \overset{\eqref{eq-(rho)}}{=}\lambda^0_{\min}(X_\mathcal{F}) (\rho_v df_\alpha, df_\alpha).
\end{align*}
Summing these inequalities over all vertices of type $\alpha$ and using \eqref{eq-ny}, we get
$$
(\Delta \rho_\alpha df_\alpha, \rho_\alpha df_\alpha)\geq
\lambda^0_{\min}(X_\mathcal{F})\cdot (\rho_\alpha df_\alpha, df_\alpha).
$$
Using \eqref{lem2.3} and \eqref{lem2.2}, we can rewrite this inequality as
$$
(1+\lambda^0_{\min}(X_\mathcal{F}))((1-\rho_\alpha)df,df)\geq
\lambda^0_{\min}(X_\mathcal{F})\cdot (df_\alpha, df_\alpha).
$$
Summing these inequalities over all types and using \eqref{eq-ny3}
and Lemma \ref{lemd31}, we get
\begin{equation}\label{eq2.1}
(1+\lambda^0_{\min}(X_\mathcal{F}))(N-1)c\geq \lambda^0_{\min}(X_\mathcal{F})\cdot
\left[(N-c)(R-1)^2+c(R^2+N)\right].
\end{equation}
Suppose $c=m^0(X_\mathcal{F})$. If $c\geq N$, then we are done. On the other hand, if $c<N$, then
$(N-c)(R-1)^2$ is positive, so \eqref{eq2.1} implies
$$
(1+\lambda^0_{\min}(X_\mathcal{F}))(N-1)c \geq \lambda^0_{\min}(X_\mathcal{F})c (R^2+N).
$$
Dividing both sides by $c$ (recall that $c>0$), we get
$$
N-1\geq (1+R^2)\lambda^0_{\min}(X_\mathcal{F}).
$$
By induction on $N$, for any $\varepsilon>0$ there is a constant $q(\varepsilon,
n)$ such that $\lambda^0_{\min}(X_\mathcal{F})\geq N-1-\varepsilon$ if $q\geq q(\varepsilon,
n)$. Thus
$$
\varepsilon\geq R^2(N-1-\varepsilon).
$$
We see that $R^2\to 0$ as $\varepsilon\to 0$. Since $R=(N-c)/N$, this forces $c\to N$.
\end{proof}
\subsection{Auxiliary results about eigenvalues}\label{ss4.3}
In this subsection we prove that $M^i(X_\mathcal{F})\leq N+1$ and $m^0(X_\mathcal{F})\leq N$. This
implies that if we allow $q$ to vary, then the lower bound $m^0(X_\mathcal{F})\geq N-\varepsilon$ in Proposition \ref{prop4.15-15}
is optimal; in other terms, $m^0(X_\mathcal{F})\to N$ as $q\to \infty$, which is consistent with Conjecture \ref{conj}.
We also show that $M^0(X_\mathcal{F})=N+1$ and that its multiplicity is $N$, so it does not depend on $q$.
\begin{prop}\label{prop_ny} For all $0\leq i\leq N-1$ we have
$M^i(X_\mathcal{F})\leq N+1$.
\end{prop}
\begin{proof} The proof is again by induction on $N$. If $N=1$, then the calculations in the proof
of Lemma \ref{lem-n2.2} show that $M^0(X_\mathcal{F})=2$.
Now assume $N\geq 2$ and $i\geq 1$. Assume we proved that $M^{i-1}(X_\mathcal{G})\leq N$
for any $\mathcal{G}$ with $\dim(X_\mathcal{G})=N-1$. Then $\lambda_{\max}^{i-1}(X_\mathcal{F})\leq N$, so Theorem \ref{thmFIeigen}
implies $M^i(X_\mathcal{F})\leq N+1$. It remains to prove that $M^0(X_\mathcal{F})\leq N+1$.
Let $f\in C^0(X_\mathcal{F})$. By an argument very similar to the proof of Proposition \ref{prop4.15-15} we get
\begin{align*}
(\Delta \rho_v df_\alpha, \rho_vdf_\alpha) = (\Delta_v\tau_v df_\alpha, \tau_v df_\alpha)_v
&\leq \lambda^0_{\max}(X_\mathcal{F})(\tau_v df_\alpha, \tau_v df_\alpha)_v \\
& = \lambda^0_{\max}(X_\mathcal{F})\cdot (\rho_v df_\alpha, df_\alpha),
\end{align*}
which leads to
$$
(\Delta \rho_\alpha df_\alpha, \rho_\alpha df_\alpha)\leq \lambda^0_{\max}(X_\mathcal{F})\cdot (\rho_\alpha df_\alpha, df_\alpha).
$$
By induction, $\lambda^0_{\max}(X_\mathcal{F})\leq N$, so using \eqref{lem2.3} and \eqref{lem2.2}, we can rewrite the previous inequality as
$$
(1+N)\cdot ((1-\rho_\alpha)df,df)\leq N\cdot (df_\alpha, df_\alpha).
$$
Assume $f$ is an eigenfunction with $\Delta f=c\cdot f$. Summing the above inequalities over all types and using \eqref{eq-ny3}
and Lemma \ref{lemd31}, we get
$$
(N+1)(N-1)c\leq N\cdot
\left[(N-c)(R-1)^2+c(R^2+N)\right].
$$
If we put $R=(N-c)/N$, then this inequality forces $c\leq
N+1$. In particular, $M^0(X_\mathcal{F})\leq N+1$.
\end{proof}
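The simplification of the last inequality at $R=(N-c)/N$ can be carried out in exact arithmetic; the following sketch (illustrative only, with our own function name) confirms that the two sides differ by exactly $c(N+1-c)$, which is what forces $c\leq N+1$.

```python
# Exact check of the final step: with R = (N-c)/N one computes
#   N[(N-c)(R-1)^2 + c(R^2+N)] - (N+1)(N-1)c = c(N+1-c),
# so the displayed inequality holds precisely when c <= N+1 (for c > 0).
from fractions import Fraction

def gap(N, c):
    R = (N - c) / Fraction(N)
    rhs = N * ((N - c) * (R - 1) ** 2 + c * (R ** 2 + N))
    lhs = (N + 1) * (N - 1) * c
    return rhs - lhs

ok = all(gap(N, Fraction(k, 4)) == Fraction(k, 4) * (N + 1 - Fraction(k, 4))
         for N in range(2, 6) for k in range(1, 8 * N))
print(ok)  # -> True
```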
Let $Y$ be an $N$-simplex. Since $Y$ has a unique simplex of maximal dimension, the weights \eqref{eq-themetric} of
the simplices of $Y$ are all equal to $1$. Then, relative to the inner-product \eqref{eq-pairing}, we have the orthogonal direct sum decomposition
(cf. Lemma \ref{lem1.7})
$$
C^0(Y)=\mathbb{R}\mathbf{1}\oplus \delta C^1(Y),
$$
where $\delta C^1(Y)$ can be explicitly described as the space of functions satisfying $$\sum_{v\in \mathrm{Ver}(Y)}g(v)=0.$$
It is easy to check that $\Delta g = 0$ if and only if $g\in \mathbb{R}\mathbf{1}$, and $\Delta g= (N+1)g$ if and only if $g\in \delta C^1(Y)$.
Hence $0$ and $N+1$ are the only eigenvalues of $\Delta$ acting on $C^0(Y)$, and their multiplicities are $1$ and $N$,
respectively.
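This eigenvalue computation for a single simplex is small enough to verify mechanically; the sketch below (not part of the text, function name ours) checks the corresponding matrix identity.

```python
# Sanity check: on an N-simplex with unit weights the vertex Laplacian is
# L = (N+1) I - J on n = N+1 vertices.  Since J^2 = n J, we have
# L (L - n I) = 0, so the only eigenvalues are 0 (constants, multiplicity 1)
# and N+1 (multiplicity N).
def simplex_laplacian_check(N):
    n = N + 1
    L = [[(n if i == j else 0) - 1 for j in range(n)] for i in range(n)]
    S = [[L[i][j] - (n if i == j else 0) for j in range(n)] for i in range(n)]  # L - n*I
    P = [[sum(L[i][k] * S[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return all(x == 0 for row in P for x in row)

print(all(simplex_laplacian_check(N) for N in range(1, 7)))  # -> True
```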
\begin{defn} We say that $f\in C^0(X_\mathcal{F})$ is \textit{type-constant} if $f(v)=f(v')$ for all $v, v'\in \mathrm{Ver}(X_\mathcal{F})$ of the same type.
We denote the space of type-constant functions by $\mathscr{C}$.
\end{defn}
Label the vertices of $Y$ by the elements of $\mathfrak T$.
Given a function $f\in \mathscr{C}$, define $\tilde{f}\in C^0(Y)$ by $\tilde{f}(x)=c_{\alpha}(f)$, $x\in \mathrm{Ver}(Y)$, where
$\alpha=\mathrm{Type}(x)$ and $c_\alpha(f)$ is the value of $f$ on vertices of type $\alpha\in \mathfrak T$.
It is clear that $\mathscr{C}\to C^0(Y)$, $f\mapsto \tilde{f}$, is an isomorphism of vector spaces which restricts
to an isomorphism $\mathscr{C}_0 \xrightarrow{\sim} \delta C^1(Y)$, where $\mathscr{C}_0\subset \mathscr{C}$ is the subspace of functions $f\in \mathscr{C}$ satisfying
$$
\sum_{\alpha\in \mathfrak T}c_\alpha(f)=0.
$$
\begin{lem}\label{lemNNC}
If $f\in \mathscr{C}$, then $\Delta f\in \mathscr{C}$. Moreover, $\widetilde{\Delta f} = \Delta \tilde{f}$.
This implies that for $f\in \mathscr{C}_0$ we have $\Delta f= (N+1)f$.
\end{lem}
\begin{proof}
For a fixed $v\in \mathrm{Ver}(X_\mathcal{F})$, we have
$$
\Delta f(v)=\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}(f(v)-f(x))
$$
$$
\overset{\mathrm{Lem.} \ref{lem-w}}{=}N f(v)- \sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}f(x)
$$
$$
= N f(v)- \sum_{\substack{\alpha\in \mathfrak T\\ \alpha\neq \mathrm{Type}(v)}}
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(x)=\alpha \\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}f(x).
$$
Now
$$
\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(x)=\alpha \\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}f(x) =
c_\alpha(f)\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(x)
=\alpha \\ [x,v]\in S_1(X_\mathcal{F})}}\frac{w([x,v])}{w(v)}\overset{\mathrm{Lem.}\ref{eq-yet}}{=} c_\alpha(f).
$$
Thus, if we denote $\beta=\mathrm{Type}(v)$,
$$
\Delta f(v) = N c_\beta(f) - \sum_{\substack{\alpha\in \mathfrak T\\ \alpha\neq \beta}} c_\alpha(f).
$$
It is clear from this that $\Delta f\in \mathscr{C}$. Moreover, for any $z\in \mathrm{Ver}(Y)$ we have
$$
\widetilde{\Delta f}(z)=N\tilde{f}(z)-\sum_{\substack{y\in \mathrm{Ver}(Y)\\ y\neq z}}
\tilde{f}(y)=\Delta\tilde{f}(z).
$$
\end{proof}
\begin{lem}\label{lemNNC2}
Let $f\in C^0(X_\mathcal{F})$ be a $\Delta$-eigenfunction with eigenvalue $c$. If $c=0$ or $N+1$, then
$f\in \mathscr{C}$.
\end{lem}
\begin{proof}
Using the fact that $X_\mathcal{F}$ is connected, it is easy to show that $\Delta f=0$ if and only if $f$
is constant. We prove that if $\Delta f=(N+1)f$, then $f$ is type-constant.
First, assume $N=1$. Then either $X_\mathcal{F}=X_\varnothing^0\ast X_\varnothing^0$ or $X_\mathcal{F}=X_\varnothing^1$.
In either case, the matrix of $m\Delta$ has the form
$$
\begin{pmatrix}
m I_m & -A \\
-A^t & m I_m
\end{pmatrix},
$$
where $A$ gives the adjacency relations between vertices of type $\alpha$ and $\beta$ (there are only two types), and
either $m=q+1$ or $m=q^2+q+1$. Let $\mathbf{x}=(x_1, x_2, \dots, x_{2m})^t$ be an eigenvector with eigenvalue $2m$.
Let $\mathbf{x}_\alpha=(x_1, x_2, \dots, x_{m})^t$ and $\mathbf{x}_\beta=(x_{m+1}, x_{m+2}, \dots, x_{2m})^t$.
Then
\begin{align*}
mI_m \mathbf{x}_\alpha - A\mathbf{x}_\beta &= 2m \mathbf{x}_\alpha \\
-A^t \mathbf{x}_\alpha + mI_m \mathbf{x}_\beta &= 2m \mathbf{x}_\beta.
\end{align*}
Hence $m \mathbf{x}_\alpha = -A\mathbf{x}_\beta$ and $m \mathbf{x}_\beta =-A^t \mathbf{x}_\alpha$.
This implies $m^2 \mathbf{x}_\alpha = AA^t \mathbf{x}_\alpha$. In the first case, $AA^t=m J_m$,
so $m \mathbf{x}_\alpha = J_m \mathbf{x}_\alpha$. This implies that $m x_j=\sum_{i=1}^m x_i$ for all $1\leq j\leq m$.
Hence $x_1=x_2=\dots=x_m$. Similarly, one shows that $x_{m+1}=\dots=x_{2m}$. In the second case, $AA^t=qI_m+J_m$, so
$$
m^2 \mathbf{x}_\alpha= q \mathbf{x}_\alpha + J_m \mathbf{x}_\alpha.
$$
Hence $(m^2-q) x_j=\sum_{i=1}^m x_i$ for all $1\leq j\leq m$, which again implies $x_1=x_2=\dots=x_m$.
Similarly, one shows that $x_{m+1}=\dots=x_{2m}$, since $A^t A=qI_m+J_m$.
Now assume $N>1$ and that we proved the claim for all $X_\mathcal{F}$ of dimension less than $N$.
Suppose $f$ is not type-constant. Then there are two vertices $x, y$ of the same type
such that $f(x)\neq f(y)$. We claim that we can choose $x$ and $y$ so that
there is a vertex $v\in \mathrm{Ver}(X_\mathcal{F})$ such that $x, y\in \mathrm{Lk}(v)$. We start with $X_\mathcal{F}=X_\varnothing^n$. In that case $x$
and $y$ correspond to subspaces $W_1$ and $W_2$ of $\mathbb{F}_q^{n+2}$ of the same dimension.
By assumption $n>1$. If $\dim(W_i)=1$, then $v$ corresponding to $W_1+W_2$ is adjacent to both $x$ and $y$.
(Note that $\dim(W_1+W_2)=2<n+2$.) If $r=\dim(W_i)>1$, choose a line $\ell_i\subset W_i$. Let $W_3$ be a subspace
of dimension $r$ which contains $\ell_1+\ell_2$. Let $z$ be the corresponding vertex.
If $f(x)=f(z)$, then we replace $x$ by $z$ and take $v$ corresponding to $\ell_2$. If $f(x)\neq f(z)$, then we replace $y$ by $z$
and take $v$ corresponding to $\ell_1$. Now suppose $X_\mathcal{F}\cong X_\varnothing^{n_1}\ast \cdots \ast X_\varnothing^{n_s}$, $s\geq 2$.
Our vertices are in the same $X_\varnothing^{n_i}$ since they have the same type, but then any vertex in another
$X_\varnothing^{n_j}$ is adjacent to both $x$ and $y$.
Let $x, y\in \mathrm{Lk}(v)$ be as in the previous paragraph. Let $\mathrm{Type}(v)=\alpha$.
Consider the function $\tau_vdf_\alpha$. We have
$$
\tau_vdf_\alpha(x)=df_\alpha([v,x])=f(x)-Rf(v)\neq f(y)-Rf(v) = \tau_vdf_\alpha(y).
$$
Hence $\tau_vdf_\alpha\in C^0(\mathrm{Lk}(v))$ is not type-constant.
By induction, $\tau_vdf_\alpha$ does not lie in the subspace of $C^0(\mathrm{Lk}(v))$ spanned by eigenfunctions
with eigenvalue $N$. This implies (use the orthonormal decomposition of Lemma \ref{lemDec5} and Proposition \ref{prop_ny})
$$
(\Delta_v \tau_vdf_\alpha, \tau_vdf_\alpha)_v< N (\tau_vdf_\alpha, \tau_vdf_\alpha)_v.
$$
This inequality implies, as in the proof of Proposition \ref{prop_ny}, that
$$
(\Delta \rho_\alpha df_\alpha, \rho_\alpha df_\alpha) < N (\rho_\alpha df_\alpha, df_\alpha).
$$
As in the proof of Proposition \ref{prop_ny}, this leads to
$$
(N+1)(N-1)c< N[(N-c)(R-1)^2+(N+1)(R^2+N)],
$$
where $c=N+1$ and $R=(N-c)/N$. But for these $c$ and $R$ both sides
are equal, so the inequality cannot be strict.
\end{proof}
\begin{prop}
A function $f\in C^0(X_\mathcal{F})$ is a $\Delta$-eigenfunction with eigenvalue $0$
if and only if $f$ is constant. A function $f\in C^0(X_\mathcal{F})$ is a $\Delta$-eigenfunction with eigenvalue $N+1$
if and only if $f\in \mathscr{C}_0$. This implies that $M^0(X_\mathcal{F})=N+1$ and its multiplicity
as an eigenvalue of $\Delta$ is $N$.
\end{prop}
\begin{proof} It is easy to check that $\Delta f = 0\Leftrightarrow df=0\Leftrightarrow f$ is constant (since $X_\mathcal{F}$ is connected).
By Lemma \ref{lemNNC}, if $f\in \mathscr{C}_0$, then $\Delta f=(N+1)f$. Conversely, suppose $\Delta f=(N+1)f$.
By Lemma \ref{lemNNC2}, $f$ is type-constant, so by Lemma \ref{lemNNC},
$$
\widetilde{\Delta f} = \widetilde{(N+1) f} = (N+1)\tilde{f}=\Delta \tilde{f}.
$$
This implies $f\in \mathscr{C}_0$.
\end{proof}
\begin{rem}
In \cite{PapMM}, we proved a general result about finite buildings which implies that $M^i(X_\mathcal{F})=N+1$ for all $0\leq i\leq N-1$.
\end{rem}
\begin{prop}\label{thm-last}
$m^0(X_\mathcal{F})\leq N$.
\end{prop}
\begin{proof}
Denote $c:=m^0(X_\mathcal{F})$ and let $f$ be a $\Delta$-eigenfunction with
eigenvalue $c$. First we claim that $c\neq N+1$. Indeed, $\Delta$ is
a semi-simple operator and if $c=N+1$ then by Proposition
\ref{prop_ny} it has only two distinct eigenvalues, namely $0$ and
$N+1$. This implies that $\Delta^2=(N+1)\Delta$.
In $X_\mathcal{F}$ we can find two vertices $x$ and $y$ which are not adjacent
but such that there is another vertex $v$ which is adjacent to both $x$ and $y$.
Let $g\in C^0(X_\mathcal{F})$ be a function such that $g(x)\neq 0$ but $g(x')=0$ if $x'\neq x$.
Now $\Delta g(y)=0$ because this is a sum of the values of $g$ at $y$ and the vertices
adjacent to $y$, and $x$ is not one of them. On the other hand, $\Delta^2 g(y)\neq 0$
since this is a sum which involves $g(x)$ with a non-zero coefficient. This
contradicts the equality $\Delta^2=(N+1)\Delta$.
Define a function $h\in C^0(X_\mathcal{F})$ by
$$h(v)=\sum_{\substack{x\in \mathrm{Ver}(X_\mathcal{F})\\ \mathrm{Type}(x)=\mathrm{Type}(v)}} f(x), \qquad \text{for any }v\in \mathrm{Ver}(X_\mathcal{F}).
$$
It is clear that $h$ is type-constant, and because
$f$ is a $\Delta$-eigenfunction, we have $\Delta h=ch$. Since $c\neq 0, N+1$,
the function $h$ must be identically $0$. Therefore,
$\sum_{\mathrm{Type}(v)=\beta}f(v)=0$ for any
fixed $\beta\in \mathfrak T$. Obviously the same is also true for
$f_\alpha$, i.e., $\sum_{\mathrm{Type}(v)=\beta}f_\alpha(v)=0$. Since $w(v)$
depends only on the type of $v$, we see that $f_\alpha$ is orthogonal
to $\mathbf{1}$ in $C^0(X_\mathcal{F})$ with respect to the pairing \eqref{eq-pairing}.
As in the proof of Lemma
\ref{lemDec5}, this implies that
$$
(\Delta f_\alpha, f_\alpha)\geq c\cdot (f_\alpha, f_\alpha).
$$
Summing over all types, we get
$$
\sum_{\alpha\in \mathfrak T} (\Delta f_\alpha, f_\alpha)\geq c(N+R^2)\cdot (f, f).
$$
Comparing this inequality with the expression in Lemma \ref{lemd31},
we conclude that $(N-c)(R-1)^2\geq 0$. Since $R$ is arbitrary, we
must have $c\leq N$.
\end{proof}
\subsection*{Acknowledgements} The author thanks Ori Parzanchevski and Farbod Shokrieh
for useful comments on an earlier version of the paper.
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
https://arxiv.org/abs/1903.10668 | On the Weakly Prime-Additive Numbers with Length 4 | In 1992, Erdős and Hegyvári showed that for any prime p, there exist infinitely many length 3 weakly prime-additive numbers divisible by p. In 2018, Fang and Chen showed that for any positive integer m, there exist infinitely many length 3 weakly prime-additive numbers divisible by m if and only if 8 does not divide m. Under the assumption (*) of existence of a prime in certain arithmetic progression with prescribed primitive root, which is true under the Generalized Riemann Hypothesis (GRH), we show that for any positive integer m, there exist infinitely many length 4 weakly prime-additive numbers divisible by m. We also present another related result analogous to the length 3 case shown by Fang and Chen. | \section{Introduction}
A number $n$ with at least 2 distinct prime divisors is called \textit{prime-additive} if $n=\sum_{p|n}p^{a_p}$ for some $a_p>0$. If additionally $p^{a_p}<n\leq p^{a_p+1}$ for all $p|n$, then $n$ is called \textit{strongly prime-additive}. In 1992, Erd\H{o}s and Hegyv\'{a}ri \cite{Erdos} gave a few examples and conjectured that there are infinitely many strongly prime-additive numbers. This problem remains far from being solved; indeed, even the infinitude of prime-additive numbers is not known. They therefore introduced the following weaker version of prime-additive numbers.
\begin{defn}
A positive integer $n$ is said to be \textit{weakly prime-additive} if $n$ has at least 2 distinct prime divisors, and there exist distinct prime divisors $p_1,...,p_t$ of $n$ and positive integers $a_1,...,a_t$ such that $n=p_1^{a_1}+\cdots+p_t^{a_t}$. The minimal value of such $t$ is defined to be the \textit{length} of $n$, denoted by $\kappa_n$.
Note that if $n$ is a weakly prime-additive number, then $\kappa_n\geq3$. So we call a weakly prime-additive number with length 3 a \textit{shortest weakly prime-additive} number.
\end{defn}
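As a quick illustration (ours, not part of the paper), the definition can be tested by brute force for small $n$; for instance, $30=2^4+3^2+5=2+3+5^2$ is weakly prime-additive. A minimal Python sketch, with hypothetical helper names:

```python
from itertools import combinations, product

def prime_factors(n):
    """Distinct prime divisors of n by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def weakly_prime_additive_witness(n):
    """Return prime powers (p_1^{a_1}, ..., p_t^{a_t}) summing to n, or None."""
    ps = prime_factors(n)
    if len(ps) < 2:
        return None
    # prime powers p^a < n for each prime divisor p of n
    powers = {p: [p**a for a in range(1, n.bit_length()) if p**a < n] for p in ps}
    for t in range(3, len(ps) + 1):       # the length is always at least 3
        for subset in combinations(ps, t):
            for choice in product(*(powers[p] for p in subset)):
                if sum(choice) == n:
                    return choice
    return None

w = weakly_prime_additive_witness(30)
assert w is not None and sum(w) == 30             # 30 is weakly prime-additive
assert weakly_prime_additive_witness(12) is None  # only two prime divisors
```

The search starts at $t=3$ because, as noted above, $\kappa_n\geq 3$ for every weakly prime-additive number.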
Erd\H{o}s and Hegyv\'{a}ri \cite{Erdos} showed that for any prime $p$, there exist infinitely many weakly prime-additive numbers divisible by $p$. In fact, the numbers constructed in their proof are shortest weakly prime-additive. They also showed that the number of shortest weakly prime-additive numbers up to $N$ is at least $c(\log N)^3$ for some constant $c>0$.
In 2018, Fang and Chen \cite{JHF} showed that for any positive integer $m$, there exist infinitely many shortest weakly prime-additive numbers divisible by $m$ if and only if $8$ does not divide $m$. This is Theorem \ref{shortest} stated in this paper. They also showed that for any positive integer $m$, there exist infinitely many weakly prime-additive numbers with length $\kappa_n\leq 5$ divisible by $m$.
In the same paper, Fang and Chen posed 4 open problems, the first asking whether, for any positive integer $m$, there are infinitely many weakly prime-additive numbers $n$ with $m|n$ and $\kappa_n=4$. In Theorem 1 of this paper, we confirm that this is true under the assumption $(\ast)$ of the existence of a prime in a certain arithmetic progression with a prescribed primitive root; this assumption holds under the Generalized Riemann Hypothesis (GRH).
Finally, it was also shown in \cite{JHF} that for any distinct primes $p,q$, there exist a prime $r$ and infinitely many positive integers $a,b,c$ such that $pqr|p^a+q^b+r^c$. In Theorem 2, we show an analogous result for 4 primes under a mild congruence condition, assuming the same assumption $(\ast)$.
\section{Main Results}
\begin{assump}
Let $1\leq a\leq f$ be positive integers with $(a,f)=1$ and $4|f$. Let $g$ be an odd prime dividing $f$ such that $\left(\frac{g}{a}\right)=-1$ with $\left(\frac{\cdot}{\cdot}\right)$ being the Kronecker symbol. Then there exists a prime $p$ such that $p\equiv a\Mod f$ and $g$ is a primitive root of $p$.
\end{assump}
It is known that ($\ast$) is a consequence of the Generalized Riemann Hypothesis (GRH); see Corollary \ref{moree} in the next section for details. Under the assumption ($\ast$), we have the following.
\begin{thm}\label{main}
Assume ($\ast$). For any positive integer $m$, there exist infinitely many weakly prime-additive numbers $n$ with $m|n$ and $\kappa_n=4$.
\end{thm}
Note that if a positive integer $n=p^a+q^b+r^c+s^d$ for some distinct primes $p,q,r,s$ and positive integers $a,b,c,d$ such that $p,q,r,s|n$, then $p,q,r,s$ are all odd primes. We have the following partial converse and analogue of Theorem 1.4 in \cite{JHF}.
\begin{thm}
Assume ($\ast$). For any distinct odd primes $p,q,r$ with one of them $\equiv 3$ or $5\Mod 8$, there exist infinitely many primes $s$ and infinitely many positive integers $a,b,c,d$ such that $$pqrs|p^a+q^b+r^c+s^d.$$
\end{thm}
\section{Preliminaries}
\begin{lemma}\label{fermat}
(The Fermat-Euler Theorem, Theorem 72, \cite{elementary}) Let $a,n$ be coprime positive integers. Then $$a^{\phi(n)}\equiv1\Mod n,$$
where $\phi$ is the Euler totient function.
\end{lemma}
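As a numerical sanity check (ours), the Fermat-Euler theorem can be verified directly for small moduli; the totient implementation below is a standard trial-division sketch:

```python
from math import gcd

def phi(n):
    """Euler's totient, computed from the prime factorization of n."""
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            result -= result // d     # multiply by (1 - 1/d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        result -= result // n
    return result

# Fermat-Euler: a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1
for n in range(2, 200):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
```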
We will need the following properties of the Kronecker Symbol $(\frac{\cdot}{\cdot})$, a generalization of the Legendre symbol. Whenever we write $(\frac{a}{b})$ for some integers $a,b$, it means the Kronecker symbol.
\begin{lemma}\label{kronecker}
Let $a,b,c$ be nonzero integers, let $p,q$ be odd primes, and let $a'$, $b'$ denote the odd parts of $a$ and $b$ respectively. Then we have:\\
\begin{align*}
1.\ & \left(\frac{ab}{c}\right)=\left(\frac{a}{c}\right)\left(\frac{b}{c}\right) \text{ unless } c=-1\\
2.\ & \left(\frac{a}{b}\right)=(-1)^{\frac{a'-1}{2}\frac{b'-1}{2}}\left(\frac{b}{a}\right)\\
3.\ & \left(\frac{-2}{p}\right)=\begin{cases}1&\text{ if } p\equiv 1,3\Mod 8\\ -1&\text{ if } p\equiv 5,7\Mod 8\end{cases}\\
4.\ & \left(\frac{a}{p}\right)\equiv a^\frac{p-1}{2}\Mod p\\
5.\ & \left(\frac{p}{q}\right)=\left(\frac{q}{p}\right) \text{ unless } p\equiv q\equiv 3\Mod 4,\\
& \text{in which case } \left(\frac{p}{q}\right)=-\left(\frac{q}{p}\right).
\end{align*}
\end{lemma}
\begin{proof}
See \cite{kronecker}, p. 289-290.
\end{proof}
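Property 3 of the lemma, which is the one used repeatedly below, can be checked numerically via Euler's criterion (property 4). A small Python sketch (ours; the sieve returns odd primes only):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def odd_primes_up_to(n):
    """Sieve of Eratosthenes, returning the odd primes <= n."""
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            flags[i*i::i] = [False] * len(flags[i*i::i])
    return [i for i in range(3, n + 1) if flags[i]]

# Property 3: (-2/p) = 1 iff p = 1 or 3 (mod 8)
for p in odd_primes_up_to(5000):
    expected = 1 if p % 8 in (1, 3) else -1
    assert legendre(-2, p) == expected
```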
\begin{thm}\label{dirichlet}
(Dirichlet's Theorem) Let $a,d$ be coprime positive integers, then there are infinitely many primes $p$ such that $p\equiv a\Mod d$.
\end{thm}
\begin{proof}
See \cite{kronecker}, chapter 1.
\end{proof}
Under GRH, we have the following generalization.
\begin{thm}\label{more}
(Theorem 1.3, \cite{PM}) Let $1\leq a\leq f$ be positive integers with $(a,f)=1$. Let $g$ be an integer that is not equal to $-1$ or a square, and let $h\geq1$ be the largest integer such that $g$ is an $h$th power. Write $g=g_1g_2^2$ with $g_1,g_2$ integers and $g_1$ squarefree. Let $$\beta=\frac{g_1}{(g_1,f)} \text{ and } \gamma_1=\begin{cases}(-1)^\frac{\beta-1}{2}(f,g_1) & \text{if }\beta \text{ is odd;}\\1 & \text{otherwise.}\end{cases}$$
Let $\pi_g(x;f,a)$ be the number of primes $p\leq x$ such that $p\equiv a\Mod{f}$ and $g$ is a primitive root $\Mod{p}$. Then, assuming GRH, we have $$\pi_g(x;f,a)=\delta(a,f,g)\frac{x}{\log x}+O_{f,g}\left(\frac{x\log\log x}{\log^2 x}\right)$$
where $$\delta(a,f,g)=\frac{A(a,f,h)}{\phi(f)}\left(1-\left(\frac{\gamma_1}{a}\right)\frac{\mu(|\beta|)}{\prod_{\substack{p|\beta\\ p|h}}(p-1)\prod_{\substack{p|\beta \\ p\not|h}}(p^2-p-1)}\right)$$
if $g_1\equiv1\Mod{4}$ or $(g_1\equiv 2\Mod{4}$ and $8|f)$ or $(g_1\equiv3\Mod{4}$ and $4|f$), and $$\delta(a,f,g)=\frac{A(a,f,h)}{\phi(f)}$$ otherwise.
Here $\mu$ is the M\"{o}bius function, $\left(\frac{\cdot}{\cdot}\right)$ is the Kronecker symbol, and $$A(a,f,h)=\prod_{p|(a-1,f)}\left(1-\frac{1}{p}\right)\prod_{\substack{p\not| f \\ p|h}}\left(1-\frac{1}{p-1}\right)\prod_{\substack{p\not| f\\ p\not| h}}\left(1-\frac{1}{p(p-1)}\right)$$ if $(a-1,f,h)=1$ and $A(a,f,h)=0$ otherwise.
\end{thm}
\begin{cor}\label{moree}
Assume GRH. Let $a,f,g$ be as above with $\left(\frac{g}{a}\right)=-1$. Then there exists a prime $p$ such that $p\equiv a\Mod f$ and $g$ is a primitive root of $p$; i.e., ($\ast$) holds under GRH.
\end{cor}
\begin{proof}
This is a special case of Theorem \ref{more}: under our conditions on $a,f,g$, we have $\beta=h=1$ and $\gamma_1=g$, so
$$\delta(a,f,g)=\frac{2}{\phi(f)}\prod_{p|(a-1,f)}\left(1-\frac{1}{p}\right)\prod_{p\not| f}\left(1-\frac{1}{p(p-1)}\right)>0.$$
\end{proof}
\begin{remark}
This shows that our result also follows from GRH, which is a much stronger assumption than ($\ast$).
\end{remark}
\begin{thm}\label{shortest}
(Corollary 1.1, \cite{JHF}) Let $m$ be a positive integer. Then there exist infinitely many shortest weakly prime-additive numbers $n$ with $m|n$ if and only if 8 does not divide $m$.
\end{thm}
\section{Proof of Theorem 1}
We first prove the following weaker version of Theorem 1.
\begin{thm}
Assume ($\ast$). For any positive integer $m$, there exist infinitely many weakly prime-additive numbers $n$ with $m|n$ and $\kappa_n\leq4$.
\end{thm}
\begin{proof}
Let $m$ be a positive integer. Write $m=2^km_1$ with $m_1$ odd and $k\geq0$ an integer. Without loss of generality we may assume $k\geq3$, since divisibility by $2^3m_1$ implies divisibility by $m$. We will construct a family of distinct primes $p,q,r,s$ and positive integers $a,b,c$ such that $m,p,q,r,s|n:=p^a+q^b+r^c+s$.
Let $p$ be an odd prime such that $(p,m)=1$. By the Chinese Remainder Theorem and Theorem \ref{dirichlet}, there exists an odd prime $q$ such that $$q\equiv 1 \Mod{2^kp}\text{ and }q\equiv-1\Mod{m_1}$$
Again by the same two theorems, there exists an odd prime $r$ such that $$r\equiv 3\Mod{2^k}\text{ and }r\equiv1\Mod{pqm_1}$$
By the Chinese Remainder Theorem, let $s_0$ be the unique integer such that $1\leq s_0\leq pqrm$ and \begin{align*}
&s_0\equiv -5\Mod{2^k}\\
&s_0\equiv -1\Mod{m_1}\\
&s_0\equiv -2\Mod{pqr}
\end{align*}
Then we can see that $(s_0,pqrm)=1$.
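The computation of $s_0$ is a direct Chinese Remainder calculation. The following Python sketch (ours) carries it out for one illustrative instance; the concrete values $m=24$, $p=5$, $q=41$, $r=3691$ are our own choices satisfying the congruences imposed on $q$ and $r$ above, not values from the paper:

```python
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli; x in [1, prod m_i]."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m   # lift x mod M to x mod M*m
        x, M = x + M * t, M * m
    return (x - 1) % M + 1

# Illustrative instance: m = 24 = 2^3 * 3, so k = 3, m1 = 3.
k, m1, p = 3, 3, 5
q = 41      # q = 1 (mod 2^k p = 40) and q = -1 (mod m1)
r = 3691    # r = 3 (mod 2^k)        and r = 1 (mod p*q*m1 = 615)
assert all(map(is_prime, (p, q, r)))
assert q % 40 == 1 and q % m1 == m1 - 1
assert r % 8 == 3 and r % 615 == 1

m = 2**k * m1
s0 = crt([-5 % 2**k, -1 % m1, -2 % (p*q*r)], [2**k, m1, p*q*r])
assert 1 <= s0 <= p*q*r*m
assert s0 % 2**k == 2**k - 5 and s0 % m1 == m1 - 1 and s0 % (p*q*r) == p*q*r - 2
assert gcd(s0, p*q*r*m) == 1    # the coprimality observed in the proof
```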
Now note that since $k\geq3$, the congruences $r\equiv3\Mod{2^k}$ and $s_0\equiv -5\Mod{2^k}$ give $r\equiv 3\Mod 8$ and $s_0\equiv 3\Mod 8$ respectively, so we have by Lemma \ref{kronecker}, \begin{align*}
\left(\frac{r}{s_0}\right)=\left(\frac{s_0}{r}\right)(-1)^{\frac{s_0-1}{2}\frac{r-1}{2}}=-\left(\frac{s_0}{r}\right)=-\left(\frac{-2}{r}\right)=-1
\end{align*}
where we used $s_0\equiv -2\Mod r$ as $s_0\equiv -2\Mod{pqr}$.
Hence applying Corollary \ref{moree} with $a=s_0$, $f=pqrm$ and $g=r$, there exists an odd prime $s$ such that $s\equiv s_0\Mod{pqrm}$ and $r$ is a primitive root of $s$. So $s$ satisfies all of the above congruences satisfied by $s_0$, and, as $r$ is a primitive root of $s$, there exists a positive integer $c_0$ such that $$r^{c_0}\equiv -2 \Mod s$$
Note by construction, $p,q,r,s$ are all distinct odd primes.
Now for any positive integer $c'$, take $$c=(p-1)(q-1)(r-1)\phi(m)c'+c_0$$
and for any positive odd integer $b'$, take $$b=\frac{1}{4}(r-1)(s-1)b'$$
Note that since $r\equiv 3\Mod{2^k}$ and $s\equiv -5\Mod{2^k}$, we have $r,s\equiv 3\Mod 4$. So $b$ is odd.
Finally, for any positive integer $a'$, take $$a=(q-1)(r-1)(s-1)\phi(m)a'$$
where $\phi$ is the Euler totient function, and let $$n=p^a+q^b+r^c+s$$
Then we have the following congruence conditions:\\
1. As $q\equiv r\equiv1\Mod p$, $s\equiv -2\Mod p$, we have $$n\equiv p^a+q^b+r^c+s\equiv0+1+1-2\equiv 0\Mod p$$
2. As $q-1|a$, by Lemma \ref{fermat}, $p^a\equiv1\mod q$. So we have $$n\equiv p^a+q^b+r^c+s\equiv1+0+1-2\equiv0\Mod q$$
3. Similarly, $p^a\equiv1\Mod r$ as $r-1|a$. Since $q\equiv1\Mod{2^kp}$, $q\equiv 1\Mod 8$. By Lemma \ref{kronecker}, with $r\equiv3\Mod 8$ and $r\equiv1\Mod q$, $$q^b\equiv (q^{\frac{1}{2}(r-1)})^{\frac{1}{2}(s-1)b'}\equiv\left(\frac{q}{r}\right)^{\frac{1}{2}(s-1)b'}\equiv\left(\frac{r}{q}\right)\equiv\left(\frac{1}{q}\right)\equiv1\Mod{r}$$
So we have $$n\equiv p^a+q^b+r^c+s\equiv1+1+0-2\equiv0\Mod r$$
4. Similarly, $p^a\equiv q^b\equiv1\Mod s$. As $r^c\equiv-2\Mod s$, we have $$n\equiv p^a+q^b+r^c+s\equiv1+1-2+0\equiv 0\Mod s$$
5. As $\phi(m)|a$, by Lemma \ref{fermat}, $p^a\equiv1\Mod m$. Since $b$ is odd and $q\equiv-1\Mod{m_1}$, we get $q^b\equiv -1\Mod{m_1}$. Together with $r\equiv1\Mod{m_1}$ and $s\equiv -1\Mod{m_1}$, we have $$n\equiv p^a+q^b+r^c+s\equiv1-1+1-1\equiv0\Mod{m_1}$$
6. As $p^a\equiv1\Mod m$, $q\equiv 1\Mod{2^k}$, $r\equiv 3\Mod{2^k}$ and $s\equiv -5\Mod{2^k}$, we have $$n\equiv p^a+q^b+r^c+s\equiv1+1+3-5\equiv0\Mod{2^k}$$
Hence $n=p^a+q^b+r^c+s$ is weakly prime-additive and divisible by $m$. Since $a'$ and $c'$ range over all positive integers, $b'$ ranges over all positive odd integers, and $p$ ranges over all odd primes coprime to $m$, we have constructed infinitely many weakly prime-additive numbers $n$ with length $\leq 4$.
\end{proof}
\begin{remark}
In the above construction, $s$ can be raised to any $d$-th power for any positive integer $d\equiv 1\Mod{\phi(pqrm)}$.
\end{remark}
Together with Theorem \ref{shortest}, we can prove Theorem 1.
\begin{proof}[Proof of Theorem 1]
Let $m$ be a positive integer. By the weaker version just proved, there exist infinitely many weakly prime-additive numbers of length $\leq4$ divisible by $8m$, and hence by $m$. By Theorem \ref{shortest}, as $8|8m$, these numbers cannot be shortest weakly prime-additive, so they all have length exactly 4.
\end{proof}
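The statement of Theorem 1 can also be probed empirically. The brute-force search below (ours; the prime pool and exponent bound are arbitrary, and a search this small may well find nothing) looks for length-4-style witnesses $n=p^a+q^b+r^c+s^d$ with $p,q,r,s\mid n$, and verifies any hits against the definition:

```python
from itertools import combinations, product

def length4_candidates(m, primes, max_exp):
    """Brute-force n = p^a + q^b + r^c + s^d with p, q, r, s | n and m | n."""
    found = []
    for p, q, r, s in combinations(primes, 4):
        mod = p * q * r * s
        for a, b, c, d in product(range(1, max_exp + 1), repeat=4):
            n = p**a + q**b + r**c + s**d
            if n % mod == 0 and n % m == 0:
                found.append((n, (p, a), (q, b), (r, c), (s, d)))
    return found

hits = length4_candidates(1, [3, 5, 7, 11, 13], 8)
for n, *terms in hits:
    assert n == sum(p**a for p, a in terms)        # the claimed representation
    assert all(n % p == 0 for p, _ in terms)       # every base prime divides n
```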
\section{Proof of Theorem 2}
Let $p,q,r$ be distinct odd primes with one of them, say $r$ (without loss of generality), congruent to $3$ or $5\Mod 8$. Let $k\in\N$ be such that $A:=\frac{(p-1)(q-1)}{2^k}$ is odd. Let $f=8Apqr$, and by the Chinese Remainder Theorem let $s_0$ be the unique integer such that $1\leq s_0\leq f$ and \begin{align*}
&s_0\equiv3\Mod{8}\\
&s_0\equiv -2\Mod{Apqr}
\end{align*}
Note that by Lemma \ref{kronecker}, using $r\equiv3$ or $5\Mod 8$, we have \begin{align*}
\left(\frac{r}{s_0}\right)=(-1)^{\frac{r-1}{2}\frac{s_0-1}{2}}\left(\frac{s_0}{r}\right)=(-1)^\frac{r-1}{2}\left(\frac{-2}{r}\right)=-1
\end{align*}
Applying ($\ast$) with $a=s_0$, $f=8Apqr$, and $g=r$, there exists an odd prime $s$ such that $s\equiv s_0\Mod{8Apqr}$ and $r$ is a primitive root of $s$. So there exists $0<c_0<s-1$ such that $$r^{c_0}\equiv -2\Mod{s}.$$
Now note that as $s\equiv3\Mod{8}$, we have $\left(\frac{-2}{s}\right)=1$, so $-2$ is a quadratic residue modulo $s$; since $r$ is a primitive root of $s$, this forces $c_0$ to be even.
Now by the Chinese Remainder Theorem, take any positive integer $c$ such that \begin{align*}
&c\equiv c_0\Mod{\frac{s-1}{2}}\\
&c\equiv 0\Mod{(p-1)(q-1)}
\end{align*}
This is possible as $s\equiv3\Mod{8}$ and $s\equiv -2\Mod{A}$, so $(\frac{s-1}{2},(p-1)(q-1))=1$. Since $c_0$ is even and $\frac{s-1}{2}$ is odd, this makes $c\equiv c_0\Mod{s-1}$. So we have $r^c\equiv -2\Mod{s}$ and $r^c\equiv1\Mod{pq}$.
Finally, for any positive integers $a,b,d$ such that $(q-1)(r-1)(s-1)|a$, $(p-1)(r-1)(s-1)|b$, $d\equiv1\Mod{(p-1)(q-1)(r-1)}$, we have the following:\begin{align*}
p^a+q^b+r^c+s^d&\equiv0+1+1-2\equiv0\Mod p\\
p^a+q^b+r^c+s^d&\equiv1+0+1-2\equiv0\Mod q\\
p^a+q^b+r^c+s^d&\equiv1+1+0-2\equiv0\Mod r\\
p^a+q^b+r^c+s^d&\equiv1+1-2+0\equiv0\Mod s
\end{align*}
So we get for any positive integers $a,b,c,d$ as above, $$pqrs|p^a+q^b+r^c+s^d.$$ \qed
| {
"timestamp": "2019-03-27T01:08:12",
"yymm": "1903",
"arxiv_id": "1903.10668",
"language": "en",
"url": "https://arxiv.org/abs/1903.10668",
"abstract": "In 1992, Erd$ő$s and Hegyv$á$ri showed that for any prime p, there exist infinitely many length 3 weakly prime-additive numbers divisible by p. In 2018, Fang and Chen showed that for any positive integer m, there exists infinitely many length 3 weakly prime-additive numbers divisible by m if and only if 8 does not divide m. Under the assumption (*) of existence of a prime in certain arithmetic progression with prescribed primitive root, which is true under the Generalized Riemann Hypothesis (GRH), we show for any positive integer m, there exists infinitely many length 4 weakly prime-additive numbers divisible by m. We also present another related result analogous to the length 3 case shown by Fang and Chen.",
"subjects": "Number Theory (math.NT)",
"title": "On the Weakly Prime-Additive Numbers with Length 4",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9852713826904225,
"lm_q2_score": 0.8311430520409023,
"lm_q1q2_score": 0.8189014640978776
} |
https://arxiv.org/abs/1906.02781 | Tutte Polynomial Activities | Unlike Whitney's definition of the corank-nullity generating function $T(G;x+1,y+1)$, Tutte's definition of his now eponymous polynomial $T(G;x,y)$ requires a total order on the edges of which the polynomial is a posteriori independent. Tutte presented his definition in terms of internal and external activities of maximal spanning forests. Although Tutte's original definition may appear somewhat ad hoc upon first inspection, subsequent work by various researchers has demonstrated that activity is a deep combinatorial concept. In this survey, we provide an introduction to activities for graphs and matroids. Our primary goal is to survey several notions of activity for graphs which admit expansions of the Tutte polynomial. Additionally, we describe some fundamental structural theorems, and outline connections to the topological notion of shellability as well as several topics in algebraic combinatorics. | \chapter*{}
\chapterauthor{Spencer Backman}{Einstein Institute for Mathematics\\
Edmond J. Safra Campus\\
The Hebrew University of Jerusalem\\
Givat Ram. Jerusalem, 9190401, Israel\\
\texttt{spencer.backman@mail.huji.ac.il}\\}
\chapter{Tutte Polynomial Activities}
\section{Synopsis}
Activities are certain statistics associated to spanning forests and more general objects, which can be used for defining the Tutte polynomial. This chapter is intended to serve as an introduction to activities for graphs and matroids. We describe \index{activity}
\begin{itemize}
\item Tutte's original spanning forest activities
\item Gordon and Traldi's subgraph activities
\item Gessel and Sagan's depth-first search activities
\item Bernardi's embedding activities
\item Gordon and McMahon's generalized subgraph activities
\item Las Vergnas' orientation activities
\item Etienne and Las Vergnas' activity bipartition
\item Crapo's activity interval decomposition
\item Las Vergnas' active orders
\item Shellability and the algebraic combinatorics of activities
\end{itemize}
\section{Introduction}
\index{activity} \index{Tutte polynomial} \index{spanning forest} \index{matroid} \index{cut} \index{cycle}
Unlike Whitney's definition of the corank-nullity generating function \index{corank-nullity generating function} $T(G;x+1,y+1)$, Tutte's definition of his now eponymous polynomial $T(G;x,y)$ \index{Tutte polynomial} requires a total order on the edges of which the polynomial is {\it a posteriori} independent. Tutte presented his definition in terms of \emph{internal and external activities} of maximal spanning forests.
Although Tutte's original definition may appear somewhat ad hoc upon first inspection, subsequent work by various researchers has demonstrated that activity is a deep combinatorial concept. In this chapter, we provide an introduction to activities for graphs and matroids. Our primary goal is to survey several notions of activity for graphs which admit expansions of the Tutte polynomial. Additionally, we describe some fundamental structural theorems, and outline connections to the topological notion of shellability as well as several topics in algebraic combinatorics.
We use the language of graphs except in sections \ref{active} and \ref{shell} where matroid terminology is employed, although sections \ref{spanningforest}, \ref{activebipartition}, \ref{activesubgraph}, and \ref{unified} apply equally well to matroids, and section \ref{orientation} applies to oriented matroids.
\section{Activities for maximal spanning forests}\label{spanningforest}
We recall Whitney's original definition of the Tutte polynomial \cite{MR1562461}. The rank of a subset of edges $A$, written $r(A)$, is the maximum cardinality of a forest contained in $A$. \index{forest}
\begin{definition}\label{EMM:def:ss}
If $G=(V,E)$ is a graph, then the Tutte polynomial of $G$ is
\begin{equation}\label{eq:rank_expansion}
T(G; x, y) = \sum_{A\subseteq E} (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)}.
\end{equation}
\end{definition}
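The rank expansion above can be evaluated directly for small graphs. The Python sketch below (ours) computes ranks with a union-find structure and checks the standard evaluations of $T(K_3;x,y)=x^2+x+y$ on simple graphs:

```python
from itertools import combinations

def rank(vertices, edges):
    """Rank of an edge set: |V| minus the number of components (union-find)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    r = 0
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            r += 1
    return r

def tutte(vertices, edges, x, y):
    """Evaluate T(G; x, y) by the corank-nullity expansion over edge subsets."""
    rE = rank(vertices, edges)
    return sum((x - 1) ** (rE - rank(vertices, A))
               * (y - 1) ** (len(A) - rank(vertices, A))
               for k in range(len(edges) + 1)
               for A in combinations(edges, k))

V, E3 = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]   # the triangle K_3
assert tutte(V, E3, 1, 1) == 3   # number of spanning trees
assert tutte(V, E3, 2, 1) == 7   # number of forests
assert tutte(V, E3, 1, 2) == 4   # number of spanning connected subgraphs
assert tutte(V, E3, 2, 2) == 8   # 2^{|E|}, all subgraphs
```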
Let $G=(V,E)$ be a graph and $F$ be a maximal spanning forest of $G$.
The maximality of $F$ means that if $G$ is connected then $F$ is a spanning tree, and if $G$ is not connected, restricting $F$ to any component of $G$ gives a spanning tree of that component.
\index{activity} \index{Tutte polynomial} \index{spanning forest} \index{cut!fundamental} \index{cycle!fundamental}
\begin{definition}
Let $F$ be a maximal spanning forest of $G$, $f \in F$, and $e \in E \setminus F$. The \emph{fundamental cut associated with $F$ and $f$} \index{cut}\index{cycle} is
\[ U_F(f) = \text{\{the edges of the unique cut in $(E \setminus F) \cup f$\}}. \]
Similarly, the \emph{fundamental cycle associated with $F$ and $e$} is
\[ Z_F(e) =\text{\{the edges of the unique cycle in $F\cup e$\}}. \]
\end{definition}
We describe Tutte's activities using fundamental cuts and cycles.
\begin{definition} Let $G = (V,E)$ be a graph with a total order on $E$, and $F$ be a maximal spanning forest of $G$. We say that an edge $e \in E$ is \index{spanning forest}
\begin{itemize}
\item \emph{internally active} ($e \in \mathrm I(F)$) with respect to $F$ if $e\in F$ and it is the smallest edge in $U_F(e)$,
\item \emph{externally active} ($e \in \mathrm E(F)$) with respect to $F$ if $e\not\in F$ and it is the smallest edge in $Z_F(e)$.
\end{itemize}
We note that all bridges are internally active and all loops are externally active.
\end{definition}
\begin{figure}[h]
\begin{subfigure}
\centering
\includegraphics[scale=1]{activities2}
\caption{(i) A graph with a total order on the edges and a spanning tree $T$ in red. (ii) Edge 1 is the only internally active edge and its fundamental cut is colored blue. (iii) Edge 2 is the only externally active edge and its fundamental cycle is colored green. }
\end{subfigure}
\end{figure}
\index{activity}
We can now give the \emph{spanning tree (maximal spanning forest) activities expansion} of the Tutte polynomial. \index{Tutte polynomial}
\begin{definition}[Tutte \cite{MR0061366}]\label{EMM:d.intext}
If $G=(V,E)$ is a graph with a fixed total order of $E$, then
\begin{equation}\label{EMM:e2}
T(G;x,y)=\sum\limits_{\substack{F }}x^{|\mathrm{I}(F)|} y^{|\mathrm{E}(F)|},
\end{equation}
where the sum is over all maximal spanning forests of $G$.
\end{definition}
Tutte demonstrated that this polynomial is well defined, i.e., it is independent of the total order on the edges.
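For small graphs the activity expansion can be computed and compared with known values. The following self-contained Python sketch (ours) enumerates the spanning trees of $K_3$, derives fundamental cuts and cycles from the rank function, and recovers $T(K_3;x,y)=x^2+x+y$:

```python
from itertools import combinations

V = [0, 1, 2]
E3 = [(0, 1), (1, 2), (0, 2)]        # edge order given by position in the list

def rank(edges):
    """|V| minus the number of components of (V, edges)."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    r = 0
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            r += 1
    return r

rE = rank(E3)
trees = [set(T) for T in combinations(E3, rE) if rank(T) == rE]

def activities(T):
    """(# internally active, # externally active) edges for the tree T."""
    internal = external = 0
    for e in E3:
        if e in T:
            # fundamental cut of e: all edges g with (T - e) + g still spanning
            cut = [g for g in E3 if rank((T - {e}) | {g}) == rE]
            internal += min(cut, key=E3.index) == e
        else:
            # fundamental cycle of e: e plus the tree edges it can replace
            cyc = [e] + [f for f in T if rank((T - {f}) | {e}) == rE]
            external += min(cyc, key=E3.index) == e
    return internal, external

acts = sorted(activities(T) for T in trees)
assert acts == [(0, 1), (1, 0), (2, 0)]          # T(K_3; x, y) = y + x + x^2
assert sum(2**i * 2**x for i, x in acts) == 2**len(E3)   # agrees with T(2,2)
```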
\section{Activity bipartition}\label{activebipartition}
\index{activity}\index{activity!bipartition} \index{Tutte polynomial} \index{spanning forest} \index{matroid}\index{flat}\index{convolution}
Etienne and Las Vergnas showed that the activities of a maximal spanning forest induce a canonical bipartition of the edge set of a graph or more generally a matroid. First we recall the definition of a flat.
\begin{definition} Let $\mathcal{F}\subset E$. If there exists no $e \in E \setminus \mathcal{F}$ such that $e$ is contained in a non-loop cycle in $\mathcal{F} \cup e$, we say that $\mathcal{F}$ is a \emph{flat} of $G$. A flat is \emph{cyclic} if it is a union of cycles. \index{forest}
\end{definition}
\begin{theorem}(Etienne-Las Vergnas \cite{MR1489076}). Given a maximal spanning forest $F$ of $G$, there exists a unique cyclic flat $\mathcal{F} $ of G such that $\mathcal{F} \cap F$ is a maximal spanning forest of $G|_{\mathcal{F}}$ with no internal activity, and $\mathcal{F}^c \cap F$ is a maximal spanning forest of $G/\mathcal{F} $ with no external activity.
\end{theorem}
As an application of this decomposition, one may obtain a convolution formula for the Tutte polynomial which was independently discovered by Kook, Reiner, and Stanton via incidence algebra methods, and has since been substantially refined and generalized \cite{MR1676189, :ab, MR2718681, MR3678586, MR3887405}. \index{Tutte polynomial}
\begin{theorem}[Etienne-Las Vergnas \cite{MR1489076}, Kook-Reiner-Stanton \cite{MR1699230}] Given a graph G, then
$$T(G;x,y) = \sum_{\mathcal{F}}T(G/\mathcal{F};x,0)\,T(G|_{\mathcal{F}};0,y),$$
where the sum is over all cyclic flats of G.
\end{theorem}
\section{Activities for subgraphs}\label{activesubgraph}
\index{subgraph}\index{activity!subgraph}
Gordon and Traldi introduced a notion of activities for arbitrary subgraphs, and used this to provide a 4-variable expansion of the Tutte polynomial which naturally specializes to both Whitney's and Tutte's original expansions.
\index{activity}
\begin{definition} Let $G = (V,E)$ be a graph with a total order on $E$, and $S \subset E$, then we say that an edge $e$ is
\begin{itemize}
\item \emph{internally active present} with respect to $S$ $(e \in I(S)\cap S)$ if $e\in S$ and $e$ is the smallest edge in some cut in $S^c \cup e$,
\item \emph{internally active absent} with respect to $S$ $(e \in I(S)\cap S^c)$ if $e\notin S$ and $e$ is the smallest edge in some cut in $S^c$,
\item \emph{externally active present} with respect to $S$ $(e \in L(S)\cap S)$ if $e\in S$ and $e$ is the smallest edge in some cycle in $S$,
\item \emph{externally active absent} with respect to $S$ $(e \in \mathrm L(S)\cap S^c)$ if $e\notin S$ and $e$ is the smallest edge in some cycle in $S \cup e$.
\end{itemize}
\end{definition}
\begin{theorem}[Gordon-Traldi \cite{MR1080623}]\label{EMM:d.gt} \index{Tutte polynomial}
If $G$ is a graph with a fixed total order of $E$, then
\begin{equation}\label{EMM:e.gt}
T(G;x+w,y +z)=\sum\limits_{\substack{S \subset E}} x^{|{I(S)\cap S}|}w^{| I(S)\cap S^c|} y^{| L(S)\cap S|}z^{|L(S)\cap S^c|}.
\end{equation}
\end{theorem}
By setting $x= 1$ and $z=1$, we recover Whitney's definition, and by setting $w= 0$ and $y=0$, we recover Tutte's definition.
While Gordon and Traldi's expansion is proven recursively via deletion-contraction, from which they obtain more general formulae, the 4-variable expansion is equivalent to an earlier theorem of Crapo. \index{forest}
\begin{theorem}[Crapo \cite{MR0262095}]
Let $P(E)$ be the Boolean lattice of subsets of $E$ ordered by containment. Given a maximal spanning forest $F$, define the interval $[ F \setminus I(F), F \cup E(F)]$ in this lattice. Then
$$P(E) = \bigsqcup_{F} [ F \setminus I(F), F \cup E(F)],$$
where the disjoint union is over all maximal spanning forests $F$.
\end{theorem}
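Crapo's decomposition can be verified exhaustively for a small example. The self-contained Python sketch below (ours) computes the active edge sets for each spanning tree of $K_3$ and checks that the resulting intervals partition the Boolean lattice $P(E)$:

```python
from itertools import combinations

V = [0, 1, 2]
E3 = [(0, 1), (1, 2), (0, 2)]        # edge order given by position in the list

def rank(edges):
    """|V| minus the number of components of (V, edges)."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    r = 0
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            r += 1
    return r

rE = rank(E3)
trees = [set(T) for T in combinations(E3, rE) if rank(T) == rE]

def activity_sets(T):
    """Internally and externally active edge sets of the spanning tree T."""
    I, X = set(), set()
    for e in E3:
        if e in T:
            cut = [g for g in E3 if rank((T - {e}) | {g}) == rE]
            if min(cut, key=E3.index) == e:
                I.add(e)
        else:
            cyc = [e] + [f for f in T if rank((T - {f}) | {e}) == rE]
            if min(cyc, key=E3.index) == e:
                X.add(e)
    return I, X

# Crapo intervals [F \ I(F), F + external(F)], one per spanning tree
intervals = [(T - I, T | X) for T in trees for I, X in [activity_sets(T)]]

# every subset of E lies in exactly one interval
for k in range(len(E3) + 1):
    for S in map(set, combinations(E3, k)):
        assert sum(lo <= S <= hi for lo, hi in intervals) == 1
```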
\begin{figure}[h]
\begin{subfigure}
\centering
\includegraphics[scale=.9]{activities3.pdf}
\caption{A Crapo decomposition of the subgraphs of $K_3$. Each row collects the subgraphs corresponding to the spanning tree on the left.}
\end{subfigure}
\end{figure}
In the following sections \ref{DFS}, \ref{combmaps}, and \ref{unified} we will describe other notions of activity for maximal spanning forests which employ input data different from a total order on the edges, but still allow for expansions of the Tutte polynomial and an analogue of Crapo's interval decomposition.
\index{activity}\index{depth-first search}\index{Crapo interval}\index{spanning tree}\index{activity!depth-first search}
\section{Depth-first search external activity}\label{DFS}
Gessel and Sagan introduced a notion of external activity for maximal spanning forests based on depth-first search. For simplicity's sake, we will assume that our graph is connected and has no parallel edges. In what follows, we view a tree rooted at a vertex $q$ as oriented so that every vertex is reachable from $q$ by a directed path. \index{activity}
\begin{definition}[Gessel-Sagan \cite{MR1392494}]
Let $<$ be a total order on the vertices of $G$, and $F$ be a spanning tree of $G$ rooted at the smallest vertex $q$. Let $e = (u,v)$ be an edge of $G \setminus F$. We say that $e$ is {\emph{depth-first search externally active (DFS externally active)}}, and write $e\in E_{DFS}(F)$, if either $u=v$, or $(u,w)$ is an oriented edge in $F$ belonging to the unique cycle in $F \cup (u,v)$, and $w>v$.
\end{definition}
\begin{figure}[h]
\begin{subfigure}
\centering
\includegraphics[scale=.8]{activities4.pdf}
\caption{$K_4$ with a root $q$ and a total order on its vertices. A spanning tree in red and a DFS externally active edge in blue.}
\end{subfigure}
\end{figure}
The name DFS externally active is justified by the following observation: given a spanning subgraph of $G$, we can produce a spanning forest $F$ by performing a depth-first search that favors larger-labeled vertices. Then $(u,v)$ is DFS externally active if, when we apply this search to the graph $F \cup (u,v)$, we obtain $F$. Gessel and Sagan showed that DFS external activity, when combined with Tutte's notion of internal activity, allows for an expansion of the Tutte polynomial.\index{spanning forest}\index{Tutte polynomial}
\begin{theorem}[Gessel-Sagan \cite{MR1392494}]\index{Tutte polynomial} \index{forest}
If $G$ is a connected graph with a total order of its vertices, then
\begin{equation}\label{GS}
T(G;x,y)=\sum\limits_{\substack{T}} x^{|{I(T)}|}y^{|{E_{DFS}(T)}|},
\end{equation}
where the sum is over all spanning trees $T$ of $G$.
\end{theorem}
In the same article, Gessel and Sagan produced a notion of Nearest Neighbor First activity, which will not be reviewed here.
\index{activity!depth-first search}\index{depth-first search}\index{spanning tree}\index{map}\index{embedding}\index{half-edge}\index{activity!embedding}\index{spanning tree}
\section{Activities via combinatorial maps}\label{combmaps}
\index{activity}
Bernardi proposed a notion of activity induced by a rooted combinatorial map, which is essentially an embedding of a graph $G=(V,E)$ with a distinguished half-edge $h$ into an orientable surface. In what follows, we assume that $G$ is connected and loopless, although these restrictions are not essential.
Informally, given a spanning tree $T$ of $G$, we can use a rooted combinatorial map to tour the edges of $G$ by starting at $h$ and traveling counterclockwise around the outside of $T$. We then declare an edge $e \notin T$ to be externally active if it is the first edge in its associated fundamental cycle which we meet during the tour. We similarly say an edge $e \in T$ is internally active if it is the first edge in its associated fundamental cut which we meet during the tour.
We now describe Cori's maps \cite{MR0404045} and Bernardi's activities more formally.
We define a \emph{half-edge} to be an edge together with an incident vertex, e.g., if $e \in E$ with $e = (u,v)$ such that $u,v \in V$, then $(e,u)$ and $(e,v)$ are the two associated half-edges. Let $\sigma$ be a permutation of the half-edges of $G$ such that $\sigma((e,u))=(e',u)$ is some other half-edge incident to $u$, and such that for any two half-edges $(e,u)$ and $(e',u)$ with the same endpoint, there exists some $k >0$ with $\sigma^k((e,u))=(e',u)$. Let $h$ be a distinguished half-edge. We define a \emph{rooted combinatorial map} to be a triple $(G,\sigma,h)$. Let $\alpha$ be the involution on the set of half-edges such that for all $e = (u,v) \in E$, $\alpha((e,u)) = (e,v)$. Given a half-edge $i$ and a spanning tree $T$, we define the \emph{motion operator} \[
t(i)=
\begin{cases}
\sigma(i) &\text{if } i \notin T\\
\sigma \circ \alpha(i) & \text{if } i \in T
\end{cases}
\]
It is easy to check that the iterated motion operator defines a tour of the half-edges of $G$. This tour induces a total order $<_{T}$ on the edges given by the first time one of its half-edges is visited. We use this total order to define internally and externally active edges. \index{forest}
\begin{figure}[!ht]
\begin{subfigure}
\centering
\includegraphics[scale=.35]{bernardi_tour1.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=.35]{bernardi_tour2.pdf}
\caption{Tours of two different spanning trees induced by the motion operator associated to the same rooted combinatorial map.}
\label{bernardi}
\end{subfigure}
\end{figure}
\index{activity}
\begin{definition}[Bernardi \cite{MR2428901}] Let $(G,\sigma, h)$ be a rooted combinatorial map, $T$ a spanning tree of $G$, and $e$ an edge of $G$.
\begin{itemize}
\item The edge $e$ is \emph{embedding-internally active} $(e \in I_B(T))$ if $e \in T$ and $e$ is the minimum edge with respect to $<_{T}$ in its associated fundamental cut.
\item The edge $e$ is \emph{embedding-externally active} $(e \in E_B(T))$ if $e \notin T$ and $e$ is the minimum edge with respect to $<_{T}$ in its associated fundamental cycle.
\end{itemize}
\end{definition}
Bernardi showed that this definition of activity admits an expansion of the Tutte polynomial.
\begin{theorem}[Bernardi \cite{MR2428901}]\index{Tutte polynomial}
Let $(G,\sigma,h)$ be a rooted combinatorial map, then
\begin{equation}\label{GS}
T(G;x,y)=\sum\limits_{\substack{T}} x^{|{I_B(T)}|}y^{|{E_B(T)}|},
\end{equation}
where the sum is over all spanning trees.
\end{theorem}
We remark that Courtiel recently introduced a different notion of activity via combinatorial maps which he calls the ``blossom activity" \cite{courtiel:hal-01088871}.
\index{Tutte polynomial}
\section{Unified activities for subgraphs via decision trees}\label{unified}
The Gordon-Traldi activities were further generalized by Gordon-McMahon \cite{MR1425948} in a way which also applies to greedoids. The Gordon-McMahon notion of activity was rediscovered by Courtiel \cite{courtiel:hal-01088871} who proved that it allows for a unification of all of the aforementioned notions of activity. We describe these activities following Courtiel and using the language of decision trees.
\index{activity}\index{activity!blossom}\index{activity!unified}\index{decision tree}\index{spanning tree}\index{binary tree}
\begin{definition} \index{activity}
A \emph{decision tree} $D$ for $G$ consists of a perfect binary tree, i.e., a rooted tree in which every non-leaf node has exactly two children, together with a labeling of each node of the tree by elements of $E$ such that the labels along any particular branch give a permutation of $E$.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[scale=.9]{activities6.pdf}
\caption{An example of a decision tree for a graph on 4 edges.}
\end{figure}
Given a decision tree $D$ and a subgraph $S \subset E$, we can use $D$ to partition $S$ into four sets: $I(S)$, $L(S)$, $S_E$, $S_I$. We describe the recursive algorithm for producing this partition informally and refer the reader to \cite{courtiel:hal-01088871} for a pseudocode presentation.
\begin{algorithm}{Recursive generalized activities algorithm}
Initialize with $X = E$, $I(S) = L(S) = S_I = S_E = \emptyset$, and $e$ corresponding to the label of the root of $D$. While $X \neq \emptyset$, do the following:
\begin{itemize}
\item If $e$ is a bridge, add $e$ to $I(S)$, contract $e$ in $X$, and move to the right descendant of $e$ in $D$.
\item If $e \in S $ is neither a bridge nor a loop, add $e$ to $S_I$, contract $e$ in $X$, and move to the right descendant of $e$ in $D$.
\item If $e$ is a loop, add $e$ to $L(S)$, delete $e$ in $X$, and move to the left descendant of $e$ in $D$.
\item If $e \notin S$ is neither a bridge nor a loop, add $e$ to $S_E$, delete $e$ in $X$, and move to the left descendant of $e$ in $D$.
\end{itemize}
After each move to a descendant, update $e$ to be the label of the new node and recurse.
\end{algorithm}
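As a concrete sanity check, here is a Python sketch (our own illustration, not part of the chapter) of the recursion for the simplest decision tree, in which every node at the same depth carries the same label, so the edges are examined in one fixed order on every branch. Bridges and loops are detected in the current minor by tracking contractions with a union-find structure; the graph, subset, and names below are invented for the example.

```python
# Gordon-McMahon/Courtiel activities for a "constant" decision tree:
# edges are examined in a single fixed order on every branch.
# Bridges go to I, loops to L; other edges of S are contracted (S_I),
# other edges outside S are deleted (S_E).

class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def activities(n, edges, order, S):
    contracted = DSU(n)          # vertex components created by contractions
    I, L, S_I, S_E = set(), set(), set(), set()
    for step, e in enumerate(order):
        u, v = edges[e]
        # Bridge test in the current minor: does removing e disconnect its
        # endpoints, given the unexamined edges and the past contractions?
        d = DSU(n)
        d.p = contracted.p[:]
        for f in order[step + 1:]:
            d.union(*edges[f])
        if d.find(u) != d.find(v):                      # bridge
            I.add(e)
            contracted.union(u, v)
        elif contracted.find(u) == contracted.find(v):  # loop
            L.add(e)
        elif e in S:
            S_I.add(e)
            contracted.union(u, v)
        else:
            S_E.add(e)
    return I, L, S_I, S_E

# Triangle 0-1-2 with a pendant edge 2-3; S = {0, 1, 3} is a spanning tree.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(activities(4, edges, [3, 2, 1, 0], {0, 1, 3}))
```

For the decreasing order shown, the spanning tree $S=\{0,1,3\}$ receives $I=\{0,1,3\}$ and $S_E=\{2\}$, which matches its internally and externally active edges in the minimum-edge convention; the increasing order $[0,1,2,3]$ instead yields $I=\{3\}$, $L=\{2\}$, $S_I=\{0,1\}$, a different but equally valid activity partition coming from a different decision tree.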
\index{activity}
\begin{theorem}[Gordon-McMahon \cite{MR1425948}]\index{Tutte polynomial}
Let $G$ be a graph, $D$ a decision tree for $G$, and $I(S)$ and $L(S)$ as above, then the Tutte polynomial has the following expansion
$$T(G;x+w, y+z) = \sum_{S \subset E} x^{|S \cap I(S)|}w^{|S^c \cap I(S)|}y^{|S^c \cap L(S)|}z^{|S \cap L(S)|}.$$
\end{theorem}
\index{activity!orientation}\index{algorithm}\index{algorithm!recursive}\index{Tutte polynomial}\index{orientation}\index{cycle!directed}\index{cut!directed}
\section{Orientation activities}\label{orientation} \index{activity}
A famous result of Stanley states that $T(G;2,0)$ (equivalently, up to sign, the chromatic polynomial evaluated at $-1$) counts the number of acyclic orientations of $G$ \cite{MR0317988}. This result was generalized to hyperplane arrangements by Zaslavsky \cite{MR0357135}, and to oriented matroids by Las Vergnas \cite{MR586435}.
Las Vergnas \cite{MR776814} introduced a notion of orientation activities, parallel to the subset activities, which allows for an orientation expansion of the Tutte polynomial recovering Stanley's result. He later introduced refined orientation activities \cite{Vergnas:aa}, which we now describe. Similar to the way that Tutte's activities are defined in terms of fundamental cuts and cycles, Las Vergnas' orientation activities are defined in terms of directed cuts and cycles.
\begin{definition}
Let $\mathcal{O}$ be an orientation of the edges of $G$. Let $Z$ be a cycle in $G$, and $U = (X,X^c)$ be a cut in $G$. We say that $Z$ is a \emph{directed cycle} if we can walk around the cycle traveling in the direction of the edge orientations. We similarly define $U$ to be a \emph{directed cut} if all of its edges are, without loss of generality, oriented from $X$ to $X^c$.
\end{definition}
We now use directed cuts and directed cycles to introduce notions of orientation activities.
\begin{definition}[Las Vergnas \cite{Vergnas:aa}]\label{oract} \index{activity}\index{orientation}\index{activity!orientation}\index{orientation!reference}
Let $G = (V,E)$ be a graph with a total order on $E$, and $\mathcal{O}_{ref}$ a reference orientation of the edges of $G$. If $\mathcal{O}$ is an orientation of $G$ and $e \in E$, then we say $e$ is
\begin{itemize}
\item \emph{positive cut active} ($e \in I(\mathcal{O})^+$) if $e$ is the smallest edge in some directed cut and is oriented in agreement with $\mathcal{O}_{ref}$,
\item \emph{negative cut active} ($e \in I(\mathcal{O})^-$) if $e$ is the smallest edge in some directed cut and is oriented in disagreement with $\mathcal{O}_{ref}$,
\item \emph{positive cycle active} ($e \in E(\mathcal{O})^+$) if $e$ is the smallest edge in some directed cycle and is oriented in agreement with $\mathcal{O}_{ref}$,
\item \emph{negative cycle active} ($e \in E(\mathcal{O})^-$) if $e$ is the smallest edge in some directed cycle and is oriented in disagreement with $\mathcal{O}_{ref}$.
\end{itemize}
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[scale=1.2]{activities7}
\caption{Left: A total order and reference orientation of the edges of a graph $G$. Right: An orientation $\mathcal{O}$ of $G$ with $I(\mathcal{O})^+ = \emptyset$, $I(\mathcal{O})^- = \{1\}$, $E(\mathcal{O})^+ = \{2\}$, and $E(\mathcal{O})^- = \emptyset$.}
\end{figure}
\begin{theorem}[Las Vergnas \cite{Vergnas:aa}]\label{EMM:d.intext}\index{Tutte polynomial}
Let $G$ be a graph with a fixed total order and reference orientation of $E$, and $I(\mathcal{O})^+, I(\mathcal{O})^-, E(\mathcal{O})^+,$ and $E(\mathcal{O})^-$ as in Definition \ref{oract}, then
\begin{equation}\label{EMM:e2}
T(G;x +w,y +z)=\sum\limits_{\substack{\mathcal{O} }} x^{|I(\mathcal{O})^+|}w^{|I(\mathcal{O})^-|} y^{|E(\mathcal{O})^+|}z^{|E(\mathcal{O})^-|},
\end{equation}
where the sum is over all orientations of $E$.
\end{theorem}
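These orientation activities can be computed directly on small examples. The Python sketch below is our own illustration (the triangle, its reference orientation, and all names are invented for the example); it relies on two reductions we believe are standard: an edge is minimal in some directed cycle iff its head reaches its tail through strictly larger edges, and, via Minty's lemma, minimal in some directed cut iff no such path exists after contracting all strictly smaller edges. Comparing each active edge with the reference orientation then splits the two sets into their positive and negative parts.

```python
from itertools import product

class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def reachable(arcs, s, t):
    """Directed reachability s -> t along the given arcs."""
    if s == t:
        return True
    frontier, seen = [s], {s}
    while frontier:
        x = frontier.pop()
        for a, b in arcs:
            if a == x and b not in seen:
                if b == t:
                    return True
                seen.add(b)
                frontier.append(b)
    return False

def orientation_activities(n, orient):
    """orient[i] = (tail, head) of edge i; the total order is the index
    order. Returns (cut-active edges, cycle-active edges)."""
    cut_act, cyc_act = set(), set()
    for i, (u, v) in enumerate(orient):
        larger = orient[i + 1:]
        # e is minimal in some directed cycle iff its head reaches its
        # tail through strictly larger edges.
        if reachable(larger, v, u):
            cyc_act.add(i)
        # e is minimal in some directed cut iff, after contracting all
        # strictly smaller edges, no such path exists (Minty's lemma).
        d = DSU(n)
        for a, b in orient[:i]:
            d.union(a, b)
        mapped = [(d.find(a), d.find(b)) for a, b in larger]
        if not reachable(mapped, d.find(v), d.find(u)):
            cut_act.add(i)
    return cut_act, cyc_act

# All 8 orientations of a triangle; collecting (|I(O)|, |L(O)|) and
# summing (u/2)^|I| (v/2)^|L| should give T(K_3; u, v) = u^2 + u + v.
ref = [(0, 1), (1, 2), (0, 2)]
profile = []
for flips in product([0, 1], repeat=3):
    orient = [(b, a) if f else (a, b) for f, (a, b) in zip(flips, ref)]
    I, L = orientation_activities(3, orient)
    profile.append((len(I), len(L)))
print(sorted(profile))
```

The eight orientations contribute four terms $(u/2)^2$, two terms $u/2$, and two terms $v/2$, whose sum is indeed $u^2+u+v=T(K_3;u,v)$.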
Las Vergnas' orientation expansion holds for all oriented matroids. By specializing the variables $x=w = u/2$ and $y = z = v/2$, we recover Las Vergnas' earlier expansion, which does not make use of a reference orientation.
\begin{corollary}[Las Vergnas \cite{MR776814}]
Let $G$ be a graph with a fixed total order on $E$, and let $I(\mathcal{O})$ and $L(\mathcal{O})$ be the sets of edges which are minimum in some directed cut or directed cycle, respectively, then
\begin{equation}\label{EMM:e3}
T(G;u,v)=\sum\limits_{\substack{\mathcal{O}}} \left({u \over 2}\right)^{|I(\mathcal{O})|}\left({v\over 2}\right)^{|L(\mathcal{O})|},
\end{equation}
where the sum is over all orientations of $E$.
\end{corollary}
\index{activity}\index{orientation}\index{activity!orientation}\index{active bijection}\index{fourientation}\index{subgraph}\index{Tutte polynomial}\index{active orders}\index{pivoting}\index{matroid}\index{activity!fourientation}
We remark that Berman \cite{MR0469810} was the first to propose an orientation expansion of the Tutte polynomial, although his definition was not correct. There are natural notions of orientation activity classes which parallel Crapo's subset intervals. The refined orientation expansion of the Tutte polynomial also follows as a direct consequence of ``the active bijection" of Gioan and Las Vergnas, a bijection between orientation activity classes and Crapo subset intervals which respects the four different activities \cite{MR2552669, MR3895683}. See Gioan's chapter for further details. \index{activity}
A \emph{fourientation} of a graph is a choice for each edge of the graph whether to orient that edge in either direction, leave it unoriented, or biorient it. One may naturally view fourientations as a mixture of orientations and subgraphs where absent and present edges correspond to unoriented and bioriented edges, respectively. Backman, Hopkins, and Traldi \cite{MR3707217} introduced notions of activities for fourientations which provide a common refinement of the Gordon and Traldi subgraph activities and Las Vergnas' orientation activities.
\section{Active orders}\label{active}
\index{activity}
To deepen our understanding of activities, Las Vergnas introduced three \emph{active orders} on the bases of a matroid. We describe these partial orders here, along with Las Vergnas' key result that they induce lattice structures on the bases.
Let $B_1$ and $B_2$ be bases of a matroid $M$ with a fixed total order on its ground set. We say that $B_1$ is obtained from $B_2$ by an \emph{externally active pivoting} if $B_2 = B_1 \setminus e \cup f$, where $e$ is the minimum element in $Z_{B_1}(f)$, and we write $B_1 \leftarrow_M B_2$. Dually, we say that $B_1$ is obtained from $B_2$ by an \emph{internally active pivoting} if $B_1 = B_2 \setminus e \cup f$, where $e$ is the smallest element in $U_{B_1}(f)$, and we write $B_1 \leftarrow *_M B_2$. We let $<_{Ext}$ and $<_{Int}$ denote the partial orders on the bases obtained by taking the transitive closures of the relations $ \leftarrow_M$ and $ \leftarrow *_M$, respectively. We refer to $<_{Ext}$ as the \emph{external order}, and $<_{Int}$ as the \emph{internal order}.
Las Vergnas also defined the following join of the external and internal orders. Let $B_1$ and $B_2$ be bases, then we say that $B_1 <_{Ext/Int} B_2$ if there exist bases $C_1, \dots , C_k$ such that $B_1 = C_1$, $B_2 = C_k$, and for each $i$ either $C_i \leftarrow_M C_{i+1}$ or $C_i \leftarrow *_M C_{i+1}$.
Recall that a lattice is a poset in which every pair of elements has a unique join and meet.
\begin{theorem}[Las Vergnas \cite{MR1845495}]
Let $\mathcal{B} (M)$ be the set of bases of a matroid $M$ with a total order on its ground set, then the posets \mbox{$(\mathcal{B} (M) \cup \{\mathbf{0}\}, <_{Ext})$}, \mbox{$(\mathcal{B} (M) \cup \{\mathbf{1}\}, <_{Int})$}, and $(\mathcal{B} (M), <_{Ext/Int})$ are lattices.
\end{theorem}
\index{active orders}\index{pivoting}\index{matroid}\index{bases}\index{lattice}\index{lattice!atomistic}\index{lattice!supersolvable}\index{lattice!distributive}\index{shellability}\index{simplicial complex}\index{facet}\index{face}\index{complex!independence}\index{complex!no broken circuit}\index{$f$-polynomial}\index{$h$-polynomial}\index{$f$-vector}\index{$h$-vector}\index{Tutte polynomial}
Las Vergnas observed that the lattice associated to the external order is not distributive, although it is atomistic. This appears to have been remedied in the recent PhD thesis of Gillespie \cite{:aa} where it is shown that by extending the external order to all independent sets, a supersolvable join-distributive lattice is obtained.
\section{Shellability and activity}\label{shell}
\index{activity}
There are important connections between activity and combinatorial topology. We refer the reader to Bj\"orner \cite{MR1165544} for an excellent introduction to this topic. Let $[n]$ denote the finite set $\{1, \dots , n\}$. An \emph{abstract simplicial complex} (often just called a simplicial complex) $\Delta$ on $n$ elements is a collection of subsets (faces) of $[n]$ which is closed under taking subsets.
Informally, a simplicial complex $\Delta$ is \emph{shellable} if there is an ordering of the maximal faces (\emph{facets}) of $\Delta$ so that each facet can be added to the previous ones by gluing along codimension 1 faces.
The \emph{$f$-polynomial} of a simplicial complex $\Delta$ is $f_{\Delta}(x) = \sum_{i=0}^d f_ix^{d-i}$, where $f_i$ is the number of faces of $\Delta$ of size $i$ and $d$ is the largest size of a face. The \emph{$h$-polynomial} of $\Delta$ is $h_{\Delta}(x) = f_{\Delta}(x-1)$. The $f$-vector and $h$-vector of $\Delta$ are the vectors whose entries are the coefficients of the $f$-polynomial and $h$-polynomial, respectively. A shellable complex is homotopy equivalent to a wedge of spheres, and its $h$-vector is nonnegative.
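As a quick illustration (ours, not the chapter's), the following Python sketch computes $f$- and $h$-vectors from a list of facets, using the coefficient identity $h_j=\sum_{i=0}^{j}(-1)^{j-i}\binom{d-i}{j-i}f_i$, which is equivalent to $h_\Delta(x)=f_\Delta(x-1)$ for the convention above.

```python
from itertools import combinations
from math import comb

def f_vector(facets):
    """Count the faces of the complex generated by the given facets,
    by size; f[i] = number of faces with i elements (f[0] = 1 for the
    empty face)."""
    faces = set()
    for F in facets:
        for k in range(len(F) + 1):
            faces.update(frozenset(s) for s in combinations(F, k))
    d = max(len(F) for F in faces)
    f = [0] * (d + 1)
    for F in faces:
        f[len(F)] += 1
    return f

def h_vector(f):
    """Coefficients of h(x) = f(x-1), where f(x) = sum_i f_i x^(d-i)."""
    d = len(f) - 1
    return [sum((-1) ** (j - i) * comb(d - i, j - i) * f[i]
                for i in range(j + 1))
            for j in range(d + 1)]

# The full triangle (2-simplex): f = (1, 3, 3, 1), h = (1, 0, 0, 0).
f = f_vector([(0, 1, 2)])
print(f, h_vector(f))
```

For instance, for the no-broken-circuit complex of $K_3$, whose facets are $\{0,1\}$ and $\{0,2\}$, this gives $f=(1,3,2)$ and $h=(1,1,0)$, matching $T(K_3;x,0)=x^2+x$.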
The following complexes are shellable, and the proofs of shellability are related to activities.
\begin{itemize}
\item The \emph{independence complex} $(IN(M))$ whose faces are the independent sets of a matroid $M$ \cite{MR2627467, MR593648}. Its $h$-polynomial is $T(M;x,1)$. Matroids are characterized by the fact that the complexes $IN(M)$ are precisely the pure simplicial complexes which are lexicographically shellable with respect to any order on the ground set \cite{MR1165544}.
\item The \emph{no broken circuit complex} $(NBC(M))$ whose faces are the independent sets of a matroid with no external activity in the sense of Tutte \cite{MR2627467, MR593648}. Its $h$-polynomial is $T(M;x,0)$.
\item The \emph{external activity complex} defined on $E(M)\times E(M)$, whose facets are given by $(B \cup L(B)) \times (B\cup (B \cup L(B))^c)$, where $B$ ranges over the bases of $M$ \cite{MR3558045}. The shelling makes use of Las Vergnas' order $<_{Ext/Int}$. It has the same $h$-polynomial as $IN(M)$ \cite{MR3558045}.
\item The \emph{order complex} of $IN(M)$ modulo Las Vergnas' external active order $<_{Ext}$ \cite{:aa}.
\item The \emph{order complex of the lattice of flats} \cite{MR570784}. It has Euler characteristic $T(M;1,0)$.
\end{itemize}
\index{Tutte polynomial}
These complexes admit many interesting connections with algebraic and geometric combinatorics. We briefly mention a few. Orlik and Solomon \cite{MR558866} introduced a certain graded algebra which is isomorphic to the cohomology ring of the complement of a complex hyperplane arrangement, and showed that monomials corresponding to faces of $NBC(M)$ give a basis for this algebra; see Falk and Kung's chapter for an introduction to these algebras. \index{activity}\index{shellability}\index{simplicial complex}\index{facet}\index{face}\index{complex!independence}\index{complex!no broken circuit}\index{$f$-polynomial}\index{$h$-polynomial}\index{complex!order}\index{algebra!Orlik-Solomon}\index{tropical geometry}\index{Bergman fan}\index{log-concavity}\index{unimodular}\index{$h$-vector}\index{$f$-vector}\index{$O$-sequence}\index{flat}\index{lattice}\index{monomial}
The external activity complex was introduced by Ardila and Boocher in their investigation of commutative algebraic aspects of the closure of a linear space in a product of projective lines \cite{MR3439307}. The ideals they consider are homogenizations of ones considered earlier by Proudfoot and Speyer \cite{MR2246531} and Terao \cite{MR1899865}. A slight variation of these objects plays an important role in Huh and Wang's proof of the Dowling-Wilson conjecture for realizable matroids \cite{MR3733101}.
In the field of tropical geometry, which is often referred to as a piecewise linear version of algebraic geometry, Bergman fans are certain balanced polyhedral fans which provide a new and exciting framework for studying matroids. Ardila and Klivans \cite{MR2185977} showed that they are unimodularly triangulated by the fan over the order complex of the lattice of flats. This relationship has led authors to uncover interesting connections between algebraic geometry and matroids, most notably the proof of the Heron-Rota-Welsh conjecture that the $f$-vector of $NBC(M)$ is log-concave by Adiprasito, Huh, and Katz \cite{MR3862944}, building on earlier works of Huh \cite{MR2904577} and Huh and Katz \cite{MR2983081}. Recently, Fink, Speyer, and Woo introduced an \emph{extended NBC complex} in order to shed some light on these different manifestations of the $f$-polynomial of $NBC(M)$ \cite{fink}.
Finally, a major open question in combinatorial commutative algebra is Stanley's 1977 conjecture \cite{MR0572989} that the $h$-vector of $IN(M)$ is a pure $O$-sequence. Merino settled this conjecture in the case of cographic matroids by application of chip-firing \cite{MR1888777}; see Merino's chapter for further details.
\section{Acknowledgements}
Many thanks to the anonymous referee for providing very helpful feedback on an earlier draft of this chapter, and to Sam Hopkins for pointing out several typographical errors. Additional thanks to Matt Baker for generously sharing Figure \ref{bernardi}.
\bibliographystyle{plain}
% Source: arXiv:1906.02781, ``Tutte Polynomial Activities'' (math.CO), 2019.
\bigskip
\noindent arXiv:0706.4112 --- \textbf{Induced Ramsey-type theorems.}
We present a unified approach to proving Ramsey-type theorems for graphs with a forbidden induced subgraph which can be used to extend and improve the earlier results of R\"odl, Erd\H{o}s-Hajnal, Pr\"omel-R\"odl, Nikiforov, Chung-Graham, and \L uczak-R\"odl. The proofs are based on a simple lemma (generalizing one by Graham, R\"odl, and Ruci\'nski) that can be used as a replacement for Szemer\'edi's regularity lemma, thereby giving much better bounds. The same approach can also be used to show that pseudo-random graphs have strong induced Ramsey properties. This leads to explicit constructions for upper bounds on various induced Ramsey numbers.

\section{Background and Introduction}
Ramsey theory refers to a large body of deep results in mathematics
concerning partitions of large structures. Its underlying philosophy
is captured succinctly by the statement that ``In a large system,
complete disorder is impossible.'' This is an area in which a great
variety of techniques from many branches of mathematics are used and
whose results are important not only to graph theory and
combinatorics but also to logic, analysis, number theory, and
geometry. Since the publication of the seminal paper of Ramsey
\cite{Ra} in 1930, this subject has grown with increasing vitality,
and is currently among the most active areas in combinatorics.
For a graph $H$, the {\it Ramsey number} $r(H)$ is the least
positive integer $n$ such that every two-coloring of the edges of
the complete graph $K_n$ on $n$ vertices contains a monochromatic copy
of $H$. Ramsey's theorem states that $r(H)$ exists for every graph
$H$. A classical result of Erd\H{o}s and Szekeres~\cite{ErSz}, which
is a quantitative version of Ramsey's theorem, implies that $r(K_k)
\leq 2^{2k}$ for every positive integer $k$. Erd\H{o}s~\cite{Er}
showed using probabilistic arguments that $r(K_k) > 2^{k/2}$ for $k
> 2$. Over the last sixty years, there have been several
improvements on the lower and upper bounds of $r(K_k)$, the most
recent by Conlon \cite{Co}. However, despite efforts by various
researchers, the constant factors in the exponents of these bounds
remain the same.
A subset of vertices of a graph is {\it homogeneous} if it is either
an independent set (empty subgraph) or a clique (complete subgraph).
For a graph $G$, denote by $\hom(G)$ the size of the largest
homogeneous subset of vertices of $G$. A restatement of the
Erd\H{o}s-Szekeres result is that every graph $G$ on $n$ vertices
satisfies $\hom(G) \geq \frac{1}{2}\log n$, while the Erd\H{o}s
result says that for each $n\geq 2$ there is a graph $G$ on $n$
vertices with $\hom(G) \leq 2\log n$. (Here, and throughout the
paper, all logarithms are base $2$.) The only known proofs of the
existence of {\it Ramsey graphs}, i.e., graphs for which
$\hom(G)=O(\log n)$, come from various models of random graphs with
edge density bounded away from $0$ and $1$. This supports the belief
that any graph with small $\hom(G)$ looks `random' in one sense or
another. There are now several results which show that Ramsey graphs
have random-like properties.
A graph $H$ is an {\it induced subgraph} of a graph $G$ if $V(H)
\subset V(G)$ and two vertices of $H$ are adjacent if and only if
they are adjacent in $G$. A graph is {\it $k$-universal} if it
contains all graphs on at most $k$ vertices as induced subgraphs. A
basic property of large random graphs is that they almost surely are
$k$-universal. There is a general belief that graphs which are not
$k$-universal are highly structured. In particular, they should
contain a homogeneous subset which is much larger than that
guaranteed by the Erd\H{o}s-Szekeres bound for general graphs.
In the early 1970's, an important generalization of Ramsey's
theorem, known as the Induced Ramsey Theorem, was discovered
independently by Deuber \cite{De}, Erd\H{o}s, Hajnal, and Posa
\cite{ErHaPo}, and R\"odl \cite{Ro1}. It states that for every graph
$H$ there is a graph $G$ such that in every $2$-edge-coloring of $G$ there is
an induced copy of $H$ whose edges are monochromatic. The least positive integer
$n$ for which there is an $n$-vertex graph with this property is called the {\it induced
Ramsey number} $r_{\textrm{ind}}(H)$. All of the early proofs of the Induced
Ramsey Theorem give enormous upper bounds on $r_{\textrm{ind}}(H)$.
It is still a major open problem to prove good bounds on induced
Ramsey numbers. Ideally, we would like to understand conditions for
a graph $G$ to have the property that in every two-coloring of the
edges of $G$, there is an induced copy of graph $H$ that is
monochromatic.
In this paper, we present a unified approach to proving Ramsey-type
theorems for graphs with a forbidden induced subgraph which can be
used to extend and improve results of various researchers. The same
approach is also used to prove new bounds on induced Ramsey numbers.
In the few subsequent sections we present in full detail our
theorems and compare them with previously obtained results.
\subsection{Ramsey properties of $H$-free graphs}\label{hfree}
As we already mentioned, there are several results (see, e.g.,
\cite{ErSzem,She,AlKrSu,BuSu}) which indicate that Ramsey graphs,
graphs $G$ with relatively small $\hom(G)$, have random-like
properties. The first advance in this area was made by Erd\H{o}s and
Szemer\'edi \cite{ErSzem}, who showed that the Erd\H{o}s-Szekeres
bound $\hom(G) \geq \frac{1}{2}\log n$ can be improved for graphs
which are very sparse or very dense. The edge density of a graph $G$
is the fraction of pairs of distinct vertices of $G$ that are edges.
The Erd\H{o}s-Szemer\'edi theorem states that there is an absolute
positive constant $c$ such that $\hom(G) \geq \frac{c \log
n}{\epsilon \log \frac{1}{\epsilon}}$ for every graph $G$ on $n$
vertices with edge density $\epsilon \in (0,1/2)$. This result shows
that the Erd\H{o}s-Szekeres bound can be significantly improved for
graphs that contain a large subset of vertices that is very sparse
or very dense.
R\"odl \cite{Ro} proved
that if a graph is not $k$-universal with $k$
fixed, then it contains a linear-sized induced subgraph that is very
sparse or very dense. A graph is called {\it $H$-free} if it does not
contain $H$ as an induced subgraph. More precisely, R\"odl's theorem says that for
each graph $H$ and $\epsilon \in (0,1/2)$, there is a positive
constant $\delta(\epsilon,H)$ such that every $H$-free graph on $n$
vertices contains an induced subgraph on at least
$\delta(\epsilon,H)n$ vertices with edge density either at most
$\epsilon$ or at least $1-\epsilon$. Together with the theorem of Erd\H{o}s and Szemer\'edi, it
shows that the Erd\H{o}s-Szekeres bound can be improved by any constant factor
for any family of graphs that have a forbidden induced subgraph.
R\"odl's proof uses Szemer\'edi's regularity lemma \cite{Sz}, a
powerful tool in graph theory, which was introduced by Szemer\'edi
in his celebrated proof of the Erd\H{o}s-Tur\'an conjecture on long arithmetic progressions in
dense subsets of the integers. The regularity lemma roughly says
that every large graph can be partitioned into a small number of
parts such that the bipartite subgraph between almost every pair of
parts is random-like. To properly state the regularity lemma
requires some terminology. The edge density $d(X,Y)$ between two
subsets of vertices of a graph $G$ is the fraction of pairs $(x,y)
\in X \times Y$ that are edges of $G$, i.e.,
$d(X,Y)=\frac{e(X,Y)}{|X||Y|}$, where $e(X,Y)$ is the number of edges with one endpoint in $X$ and
the other in $Y$.
A pair $(X,Y)$ of vertex sets
is called $\epsilon$-regular if for every $X' \subset X$ and $Y'
\subset Y$ with $|X'| > \epsilon |X|$ and $|Y'| > \epsilon |Y|$ we
have $|d(X',Y')-d(X,Y)|<\epsilon$. A partition $V=\bigcup_{i=1}^k
V_i$ is called {\it equitable} if $\big||V_i|-|V_j|\big|\leq 1$ for
all $i,j$.
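Since the definition of an $\epsilon$-regular pair quantifies over all sufficiently large sub-pairs, it can be checked by brute force on toy examples. The following Python sketch is our own illustration (the graphs and names are invented, and the exhaustive check is exponential, so it is only usable for tiny vertex sets); it tests the definition directly.

```python
from itertools import combinations

def density(adj, X, Y):
    """d(X, Y): fraction of pairs (x, y) in X x Y that are edges."""
    return sum(adj[x][y] for x in X for y in Y) / (len(X) * len(Y))

def is_eps_regular(adj, X, Y, eps):
    """Brute-force test of the definition: every X' in X and Y' in Y with
    |X'| > eps|X| and |Y'| > eps|Y| must satisfy |d(X',Y') - d(X,Y)| < eps."""
    d = density(adj, X, Y)
    for kx in range(1, len(X) + 1):
        if kx <= eps * len(X):
            continue
        for ky in range(1, len(Y) + 1):
            if ky <= eps * len(Y):
                continue
            for Xp in combinations(X, kx):
                for Yp in combinations(Y, ky):
                    if abs(density(adj, Xp, Yp) - d) >= eps:
                        return False
    return True

X, Y = [0, 1, 2, 3], [0, 1, 2, 3]
complete = [[1] * 4 for _ in range(4)]  # every sub-pair has density 1
block = [[1 if x < 2 and y < 2 else 0 for y in range(4)] for x in range(4)]
print(is_eps_regular(complete, X, Y, 0.3))  # True
print(is_eps_regular(block, X, Y, 0.3))     # False: X'=Y'={0,1} has density 1
```

The complete bipartite pair is $\epsilon$-regular for every $\epsilon$, while the block example is irregular because the dense $2\times 2$ corner already deviates from the overall density $1/4$ by more than $\epsilon=0.3$.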
Szemer\'edi's regularity lemma \cite{Sz} states that for each
$\epsilon>0$, there is a positive integer $M(\epsilon)$ such that
the vertices of any graph $G$ can be equitably partitioned
$V(G)=\bigcup_{i=1}^k V_i$ into $k$ subsets with $\epsilon^{-1}
\leq k \leq M(\epsilon)$ satisfying that all but at most $\epsilon
k^2$ of the pairs $(V_i,V_j)$ are $\epsilon$-regular. For more
background on the regularity lemma, see the excellent
survey by Koml\'os and Simonovits \cite{KoSi}.
In the regularity lemma, $M(\epsilon)$ can be taken to be a tower of
$2$'s of height proportional to $\epsilon^{-5}$. On the other hand,
Gowers \cite{Go} proved a lower bound on $M(\epsilon)$ which is a
tower of $2$'s of height proportional to $\epsilon^{-\frac{1}{16}}$.
His result demonstrates that $M(\epsilon)$ is inherently large as a
function of $\epsilon^{-1}$. Unfortunately, this implies that the
bounds obtained by applications of the regularity lemma are often
quite poor. In particular, this is a weakness of the bound on
$\delta(\epsilon,H)$ given by R\"odl's proof of his theorem. It is
therefore desirable to find a new proof of R\"odl's theorem that
does not use the regularity lemma. The following theorem does just
that, giving a much better bound on $\delta(\epsilon,H)$. Its proof
works as well in a multicolor setting (see concluding remarks).
\begin{theorem} \label{main}
There is a constant $c$ such that for each $\epsilon \in (0,1/2)$
and graph $H$ on $k \geq 2$ vertices, every $H$-free graph on $n$
vertices contains an induced subgraph on at least $2^{-ck(\log
\frac{1}{\epsilon})^2}n$ vertices with edge density either at most
$\epsilon$ or at least $1-\epsilon$.
\end{theorem}
Nikiforov \cite{Ni} recently strengthened R\"odl's theorem by
proving that for each $\epsilon>0$ and graph $H$ of order $k$, there
are positive constants $\kappa=\kappa(\epsilon,H)$ and
$C=C(\epsilon,H)$ such that for every graph $G=(V,E)$ that
contains at most $\kappa|V|^{k}$ induced copies of $H$, there is an
equitable partition $V=\bigcup_{i=1}^C V_i$ of the vertex set such
that the edge density in each $V_i$ ($i \geq 1$) is at most
$\epsilon$ or at least $1-\epsilon$. Using the same technique as the
proof of Theorem \ref{main}, we give a new proof of this result
without using the regularity lemma, thereby solving the main open problem
posed in \cite{Ni}.
Erd\H{o}s and Hajnal \cite{ErHa} gave a significant improvement on
the Erd\H{o}s-Szekeres bound on the size of the largest homogeneous
set in $H$-free graphs. They proved that for every graph $H$ there
is a positive constant $c(H)$ such that $\hom(G) \geq
2^{c(H)\sqrt{\log n}}$ for all $H$-free graphs $G$ on $n$ vertices.
Erd\H{o}s and Hajnal further conjectured that every such $G$
contains a complete or empty subgraph of order $n^{c(H)}$. This
beautiful problem has received increasing attention by various
researchers, and was also featured by Gowers \cite{Go1} in his list
of problems at the turn of the century. For various partial results
on the Erd\H{o}s-Hajnal conjecture see, e.g., \cite{AlPaSo, ErHaPa,
FoSu, AlPaPiRaSh, FoPaTo2, LaMaPaTo, ChSa} and their references.
Recall that a graph is $k$-universal if it contains all graphs on at
most $k$ vertices as induced subgraphs. Note that the Erd\H{o}s-Hajnal
bound, in particular, implies that, for every fixed $k$,
sufficiently large Ramsey graphs are $k$-universal. This was
extended further by Pr\"omel and R\"odl \cite{PrRo}, who obtained an
asymptotically best possible result. They proved that if $\hom(G)
\leq c_1 \log n$ then $G$ is $c_2\log n$-universal for some constant
$c_2$ which depends on $c_1$.
Let $\hom(n,k)$ be the largest positive integer such that every
graph $G$ on $n$ vertices is $k$-universal or satisfies $\hom(G)
\geq \hom(n,k)$. The Erd\H{o}s-Hajnal theorem and the Pr\"omel-R\"odl
theorem both say that $\hom(n,k)$ is large for fixed or slowly
growing $k$. Indeed, from the first theorem it follows that for
fixed $k$ there is $c(k)>0$ such that $\hom(n,k) \geq
2^{c(k)\sqrt{\log n}}$, while the second theorem says that for each
$c_1$ there is $c_2>0$ such that $\hom(n,c_2\log n) \geq c_1 \log
n$. One would naturally like to have a general lower bound on
$\hom(n,k)$ that implies both the Erd\H{o}s-Hajnal and Promel-R\"odl
results. This is done in the following theorem.
\begin{theorem}\label{combined}
There are positive constants $c_3$ and $c_4$ such that for all
$n,k$, every graph on $n$ vertices is $k$-universal or satisfies
$\hom(G) \geq c_32^{c_4\sqrt{\frac{\log n}{k}}}\log n.$
\end{theorem}
Theorem \ref{main} can be also used to answer a question
of Chung and Graham \cite{ChGr}, which was motivated by the study of quasirandom graphs.
Given a fixed graph $H$, it is well known that a typical graph
on $n$ vertices contains many induced copies of $H$ as $n$ becomes large.
Therefore if a large graph $G$ contains no induced copy of $H$, its edge distribution should deviate from
``typical" in a rather strong way. This intuition was made rigorous in \cite{ChGr}, where the authors
proved that if a graph $G$ on $n$ vertices is not $k$-universal,
then there is a subset $S$ of $\lfloor \frac{n}{2} \rfloor$ vertices
of $G$ such that $|e(S)-\frac{1}{16}n^2|>2^{-2(k^2+27)}n^2$. For
positive integers $k$ and $n$, let $D(k,n)$ denote the largest
integer such that every graph $G$ on $n$ vertices that is not
$k$-universal contains a subset $S$ of vertices of size $\lfloor
\frac{n}{2} \rfloor$ with $|e(S)-\frac{1}{16}n^2|>D(k,n)$. Chung and
Graham asked whether their lower bound on $D(k,n)$ can be
substantially improved, e.g., replaced by $c^{-k}n^2$.
Using Theorem \ref{main} this can be easily done as follows.
A lemma of Erd\H{o}s, Goldberg, Pach, and Spencer \cite{ErGoPaSp}
implies that if a graph on $n$ vertices has a subset $R$ that
deviates by $D$ edges from having edge density $1/2$, then there is
a subset $S$ of size $\lfloor n/2 \rfloor$ that deviates by at least a
constant times $D$ edges from having edge density $1/2$. By Theorem
\ref{main} with $\epsilon=1/4$, there is a positive constant $C$
such that every graph on $n$ vertices that is not $k$-universal has
a subset $R$ of size at least $C^{-k}n$ with edge density at most
$1/4$ or at least $3/4$. This $R$
deviates from having edge density $1/2$ by at least
$$\frac{1}{4}{|R| \choose 2}\geq \frac{1}{16}|R|^2 \geq
\frac{1}{16}C^{-2k}n^2$$ edges. Thus, the above mentioned lemma from
\cite{ErGoPaSp} implies that there is an absolute constant $c$ such
that every graph $G$ on $n$ vertices which is not $k$-universal
contains a subset $S$ of size $\lfloor n/2 \rfloor$ with
$|e(S)-\frac{n^2}{16}|>c^{-k}n^2$. Chung and Graham also asked for
non-trivial upper bounds on $D(k,n)$. In this direction, we show
that there are $K_k$-free graphs $G$ on $n$ vertices for which
$|e(S)-\frac{1}{16}n^2|=O(2^{-k/4}n^2)$ holds for every subset $S$
of $\lfloor \frac{n}{2} \rfloor$ vertices of $G$. Together with the
lower bound it determines the asymptotic behavior of $D(k,n)$ and
shows that there are constants $c_1,c_2>1$ such that
$c_1^{-k}n^2<D(k,n)<c_2^{-k}n^2$ holds for all positive integers $k$
and $n$. This completely answers the questions of Chung and Graham.
Moreover, we can obtain a more precise result about the relation
between the number of induced copies of a fixed graph $H$ in a large graph $G$
and the edge distribution of $G$.
In their celebrated paper, Chung, Graham, and Wilson
\cite{ChGrWi} introduced a large collection of equivalent graph
properties shared by almost all graphs which are called {\it
quasirandom}. For a graph $G=(V,E)$ on $n$ vertices, two of these
properties are
\vspace{0.1cm}
${\bf P}_1$: For each subset $S \subset V$,
$$e(S)=\frac{1}{4}|S|^2+o(n^2).$$
${\bf P}_2$: For every fixed graph $H$ with $k$ vertices, the number
of labeled induced copies of $H$ in $G$ is
$$(1+o(1))n^{k}2^{-{k \choose 2}}.$$
So one can naturally ask: by how much does a graph deviate from ${\bf
P}_1$ assuming a deviation from ${\bf P}_2$? The following theorem
answers this question.
\begin{theorem}\label{dev} Let $H$ be a graph with $k$ vertices and $G=(V,E)$ be a
graph with $n$ vertices and at most $(1-\epsilon)2^{-{k \choose
2}}n^k$ labeled induced copies of $H$. Then there is a subset $S
\subset V$ with $|S|=\lfloor n/2 \rfloor$ and $|e(S)-\frac{n^2}{16}|
\geq \epsilon c^{-k} n^2$, where $c$ is an absolute constant.
\end{theorem}
The proof of Theorem \ref{dev} can be easily adjusted if we
replace the ``at most'' with ``at least'' and the $(1-\epsilon)$
factor by $(1+\epsilon)$. Note that this theorem answers the
original question of Chung and Graham in a very strong sense.
\subsection{Induced Ramsey numbers}\label{inducedsubsection}
Recall that the induced Ramsey number $r_{\textrm{ind}}(H)$ is the
minimum $n$ for which there is a graph $G$ with $n$ vertices such
that for every $2$-edge-coloring of $G$, one can find an induced
copy of $H$ in $G$ whose edges are monochromatic. One of the
fundamental results in graph Ramsey theory (see chapter 9.3 of
\cite{Di}), the Induced Ramsey Theorem, says that
$r_{\textrm{ind}}(H)$ exists for every graph $H$. R\"odl \cite{Ro}
noted that a short proof of the theorem follows from a simple
application of his result discussed in the previous section.
However, all of the early proofs of the Induced Ramsey Theorem give
poor upper bounds on $r_{\textrm{ind}}(H)$.
Since these early proofs, there has been a considerable amount of
research on induced Ramsey numbers. Erd\H{o}s \cite{Er2} conjectured
that there is a constant $c$ such that every graph $H$ on $k$
vertices satisfies $r_{\textrm{ind}}(H) \leq 2^{ck}$. Erd\H{o}s and
Hajnal \cite{Er1} proved that $r_{\textrm{ind}}(H) \leq
2^{2^{k^{1+o(1)}}}$ holds for every graph $H$ on $k$ vertices.
Kohayakawa, Pr\"omel, and R\"odl \cite{KoPrRo} improved this bound
substantially and showed that if a graph $H$ has $k$ vertices and
chromatic number $\chi$, then $r_{\textrm{ind}}(H) \leq k^{ck(\log
\chi)},$ where $c$ is a universal constant. In particular, their result
implies an upper bound of $2^{ck (\log k)^2}$ on the induced Ramsey
number of any graph on $k$ vertices. In their proof, the graph $G$
which gives this bound is randomly constructed using projective
planes.
There are several known results that provide upper bounds on induced
Ramsey numbers for sparse graphs. For example, Beck \cite{Be}
studied the case when $H$ is a tree; Haxell, Kohayakawa, and \L
uczak \cite{HaKoLu} proved that the cycle of length $k$ has induced
Ramsey number linear in $k$; and, settling a conjecture of Trotter,
\L uczak and R\"odl \cite{LuRo} showed that the induced Ramsey
number of a graph with bounded degree is at most polynomial in the
number of its vertices. More precisely, they proved that for every
integer $d$, there is a constant $c_d$ such that every graph $H$ on
$k$ vertices and maximum degree at most $d$ satisfies
$r_{\textrm{ind}}(H) \leq k^{c_d}$. Their proof, which also uses
random graphs, gives an upper bound on $c_d$ that is a tower of
$2$'s of height proportional to $d^2$.
As noted by Schaefer and Shah \cite{SchSh}, all known proofs of the
Induced Ramsey Theorem either rely on taking $G$ to be an
appropriately chosen random graph or give a poor upper bound on
$r_{\textrm{ind}}(H)$. However, often in combinatorics, explicit
constructions are desirable in addition to existence proofs given by
the probabilistic method. For example, one of the most famous such
problems was posed by Erd\H{o}s \cite{AlSp}, who asked for the
explicit construction of a graph on $n$ vertices without a complete
or empty subgraph of order $ c\log n$. Over the years, this
intriguing problem and its bipartite variant have drawn a great deal
of attention from various researchers (see, e.g.,
\cite{FrWi,Al2,BaKiShSuWi,Bo,BaRaShWi}), but, despite these efforts,
it is still open. Similarly, one would like to have an explicit
construction for the Induced Ramsey Theorem. We obtain such a
construction using pseudo-random graphs.
The {\it random graph}
$G(n,p)$ is the probability space of all labeled graphs on $n$ vertices, where
every edge appears randomly and independently with probability $p$. An
important property of $G(n,p)$ is that, with high
probability, between any two large subsets of vertices $A$ and $B$, the edge
density $d(A,B)=\frac{e(A,B)}{|A||B|}$ is approximately $p$. This observation is one of the motivations for the following useful
definition. A graph $G=(V,E)$ is {\it $(p,\lambda)$-pseudo-random}
if the following inequality holds for all subsets $A,B \subset V$:
$$|d(A,B)-p| \leq \frac{\lambda}{\sqrt{|A||B|}}.$$
It is easy to show that if $p<0.99$, then with high probability, the
random graph $G(n,p)$ is $(p,\lambda)$-pseudo-random with
$\lambda=O(\sqrt{pn})$. Moreover, there are also many explicit
constructions of pseudo-random graphs which can be obtained using
the following fact. Let $\lambda_1 \geq \lambda_2 \geq \ldots \geq
\lambda_n$ be the eigenvalues of the adjacency matrix of a graph
$G$. An {\it $(n,d,\lambda)$-graph} is a $d$-regular graph on $n$
vertices with $\lambda = \max_{i \geq 2} |\lambda_i|$. It was proved
by Alon (see, e.g., \cite{AlSp}, \cite{KrSu}) that every
$(n,d,\lambda)$-graph is in fact
$(\frac{d}{n},\lambda)$-pseudo-random. Therefore to construct good
pseudo-random graphs we need regular graphs with $\lambda \ll d$.
For more details on pseudo-random graphs, including many
constructions, we refer the interested reader to the recent survey
\cite{KrSu}.
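Alon's fact -- that an $(n,d,\lambda)$-graph is $(\frac{d}{n},\lambda)$-pseudo-random -- can be checked exhaustively on a tiny example. The following Python sketch (illustrative only, not part of any proof) uses the $5$-cycle $C_5$, whose adjacency eigenvalues $2\cos(2\pi k/5)$ give $\lambda=(1+\sqrt 5)/2$, and brute-forces the defining inequality over all pairs of nonempty vertex subsets; here $e(A,B)$ counts ordered pairs, so overlapping $A$ and $B$ are allowed:

```python
from itertools import product
from math import sqrt

# C_5: a 2-regular (n, d, lambda)-graph with n = 5, d = 2, and
# lambda = max_{i >= 2} |lambda_i| = (1 + sqrt(5))/2 (eigenvalues 2cos(2*pi*k/5)).
n, d = 5, 2
lam = (1 + sqrt(5)) / 2
p = d / n
nbr = [((v - 1) % n, (v + 1) % n) for v in range(n)]

def e(A, B):
    # number of ordered pairs (a, b) with a in A, b in B, and ab an edge
    return sum(b in B for a in A for b in nbr[a])

# Pseudo-randomness: |d(A,B) - p| <= lam / sqrt(|A||B|), equivalently
# |e(A,B) - p*|A|*|B|| <= lam * sqrt(|A|*|B|), for ALL nonempty subsets A, B.
for mA, mB in product(range(1, 1 << n), repeat=2):
    A = {v for v in range(n) if mA >> v & 1}
    B = {v for v in range(n) if mB >> v & 1}
    assert abs(e(A, B) - p * len(A) * len(B)) <= lam * sqrt(len(A) * len(B)) + 1e-9
```

The same brute-force check works for any small $(n,d,\lambda)$-graph once $\lambda$ is known.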
A graph is {\it $d$-degenerate} if every subgraph of it has a vertex
of degree at most $d$. The degeneracy number of a graph $H$ is the
smallest $d$ such that $H$ is $d$-degenerate. This quantity, which
is always bounded by the maximum degree of the graph, is a natural
measure of its sparseness. In particular, in a $d$-degenerate graph
every subset $X$ spans at most $d|X|$ edges. The chromatic number
$\chi(H)$ of graph $H$ is the minimum number of colors needed to
color vertices of $H$ such that adjacent vertices get different
colors. Using a greedy coloring, it is easy to show that
$d$-degenerate graphs have chromatic number at most $d+1$. The
following theorem, which is a special case of a more general result
that we prove in Section 4, shows that any sufficiently
pseudo-random graph of appropriate density has strong induced Ramsey
properties.
\begin{theorem}\label{quasicor1}
There is an absolute constant $c$ such that for all integers $k,d,\chi \geq
2$, every $(\frac{1}{k},n^{0.9})$-pseudo-random graph $G$ on $n \geq
k^{cd\log \chi}$ vertices satisfies that every $d$-degenerate graph on
$k$ vertices with chromatic number at most $\chi$ occurs as an
induced monochromatic copy in all $2$-edge-colorings of $G$.
Moreover, all of these induced monochromatic copies can be found in
the same color.
\end{theorem}
This theorem implies that, with high probability, $G(n,p)$ with
$p=1/k$ and $n \geq k^{cd\log \chi}$ satisfies that every
$d$-degenerate graph on $k$ vertices with chromatic number at most
$\chi$ occurs as an induced monochromatic copy in all $2$-edge-colorings
of $G$. It gives the first polynomial upper bound on the induced
Ramsey numbers of $d$-degenerate graphs. In particular, for bounded
degree graphs this is a significant improvement of the above
mentioned \L uczak-R\"odl result. It shows that the exponent of the
polynomial in their theorem can be taken to be $O(d\log d)$, instead
of the previous bound of a tower of $2$'s of height proportional to
$d^2$.
\begin{corollary}\label{corollaryobvious}
There is an absolute constant $c$ such that every $d$-degenerate
graph $H$ on $k$ vertices with chromatic number $\chi \geq 2$ has
induced Ramsey number $r_{\textrm{ind}}(H) \leq k^{cd \log \chi}$.
\end{corollary}
A significant additional benefit of Theorem \ref{quasicor1} is that
it leads to explicit constructions for induced Ramsey numbers. One
such example can be obtained from a construction of Delsarte and
Goethals and also of Turyn (see \cite{KrSu}). Let $r$ be a prime
power and let $G$ be a graph whose vertices are the elements of the
two dimensional vector space over finite field $\mathbb{F}_r$, so
$G$ has $r^2$ vertices. Partition the $r+1$ lines through the origin
of the space into two sets $P$ and $N$, where $|P|=t$. Two vertices
$x$ and $y$ of the graph $G$ are adjacent if $x-y$ is parallel to a
line in $P$. This graph is known to be $t(r-1)$-regular with
eigenvalues, besides the largest one, being either $-t$ or $r-t$.
Taking $t= \frac{r^2}{k(r-1)}$, we obtain an $(n,d,\lambda)$-graph
with $n=r^2$, $d = n/k$, and $\lambda=r-t\leq r \leq n^{1/2}$. This
gives a $(p,\lambda)$-pseudo-random graph with $p =d/n=1/k$ and
$\lambda \leq n^{1/2}$ which satisfies the assertion of Theorem
\ref{quasicor1}.
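The spectral claims about this construction can be confirmed computationally. The following Python sketch (standard library only) builds the Cayley graph for the illustrative parameters $r=5$, $t=3$ -- chosen here for readability rather than from the formula $t=\frac{r^2}{k(r-1)}$ -- and computes its eigenvalues via additive characters of $\mathbb{F}_5^2$:

```python
import cmath

r, t = 5, 3                       # F_r^2 has r + 1 lines through the origin
dirs = [(1, a) for a in range(r)] + [(0, 1)]   # one direction per line
P = dirs[:t]                      # the t "positive" lines
# Connection set: all nonzero multiples of directions in P.
S = {((c * d0) % r, (c * d1) % r) for (d0, d1) in P for c in range(1, r)}
assert len(S) == t * (r - 1)      # so the Cayley graph is t(r-1)-regular

# Cayley graphs on (Z_r)^2 have one eigenvalue per character (u, v):
#   lambda_{(u,v)} = sum over s in S of omega^{u*s0 + v*s1},  omega = e^{2*pi*i/r}.
w = cmath.exp(2j * cmath.pi / r)
eigs = [round(sum((w ** ((u * s0 + v * s1) % r)).real for (s0, s1) in S))
        for u in range(r) for v in range(r)]

assert max(eigs) == t * (r - 1)               # degree 12, from (u, v) = (0, 0)
assert set(eigs) == {t * (r - 1), r - t, -t}  # all others are r - t = 2 or -t = -3
```

The character sum evaluates to $r-1$ on the one direction orthogonal to $(u,v)$ and to $-1$ on every other direction, which is exactly why the nontrivial eigenvalues are $r-t$ or $-t$.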
Another well-known explicit construction is the Paley graph
$P_n$. Let $n$ be a prime power which is congruent to 1 modulo 4 so that $-1$ is a square in
the finite field $\mathbb{F}_n$. The {\it Paley graph} $P_n$ has vertex
set $\mathbb{F}_n$ and distinct elements $x,y \in \mathbb{F}_n$ are
adjacent if $x-y$ is a square. It is well known and not difficult to prove that
the Paley graph $P_n$ is $(1/2,\lambda)$-pseudo-random with $\lambda=\sqrt{n}$.
This can be used together with the generalization of Theorem \ref{quasicor1}, which we discuss in
Section 5, to prove the following result.
\begin{corollary}\label{payley}
There is an absolute constant $c$ such that for prime $n \geq
2^{ck\log^2 k}$, every graph on $k$ vertices occurs as an
induced monochromatic copy in all $2$-edge-colorings of the Paley graph $P_n$.
\end{corollary}
This explicit construction matches the best known upper bound on
induced Ramsey numbers of graphs on $k$ vertices obtained by
Kohayakawa, Pr\"omel, and R\"odl \cite{KoPrRo}. Similarly, we can
prove that there is a constant $c$ such that, with high probability,
$G(n,1/2)$ with $n \geq 2^{ck\log^2 k}$ satisfies that every graph
on $k$ vertices occurs as an induced monochromatic copy in all
$2$-edge-colorings of $G$.
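The Paley pseudo-randomness claim admits a direct check for a small prime. For $q\equiv 1\pmod 4$, $P_q$ is a Cayley graph on $\mathbb{Z}_q$ whose eigenvalues are character sums over the quadratic residues; the classical Gauss-sum evaluation gives nontrivial eigenvalues $(-1\pm\sqrt q)/2$, hence $\lambda\le\sqrt q$. A Python sketch for $q=13$:

```python
import cmath
from math import sqrt, isclose

q = 13                                          # prime, q = 1 (mod 4)
squares = {(x * x) % q for x in range(1, q)}    # quadratic residues mod q
assert len(squares) == (q - 1) // 2
assert (q - 1) in squares                       # -1 is a square, so P_q is a graph

# Eigenvalues of the Cayley graph P_q: one per additive character u of Z_q.
w = cmath.exp(2j * cmath.pi / q)
eigs = [sum((w ** ((u * s) % q)).real for s in squares) for u in range(q)]

assert isclose(eigs[0], (q - 1) / 2)            # the degree, here 6
lam = max(abs(ev) for ev in eigs[1:])
assert isclose(lam, (1 + sqrt(q)) / 2, abs_tol=1e-9)   # = max |(-1 +- sqrt(q))/2|
assert lam <= sqrt(q)                           # the (1/2, sqrt(n)) claim above
```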
Very little is known about lower bounds for induced Ramsey numbers
beyond the fact that an induced Ramsey number is at least its
corresponding Ramsey number. A well-known conjecture of Burr and
Erd\H{o}s \cite{BuEr} from 1973 states that for each positive
integer $d$ there is a constant $c(d)$ such that the Ramsey number
$r(H)$ is at most $c(d)k$ for every $d$-degenerate graph $H$ on $k$
vertices. As mentioned earlier, Haxell et al. \cite{HaKoLu} proved
that the induced Ramsey number for the cycle on $k$
vertices is linear in $k$. This implies that the induced
Ramsey number for the path on $k$ vertices is also linear in $k$.
Also, using a star with $2k-1$ edges, it is trivial to see that the induced Ramsey number of a star
with $k$ edges is $2k$. It is natural to ask whether the
Burr-Erd\H{o}s conjecture extends to induced Ramsey numbers. The
following result shows that this fails already for trees, which are
$1$-degenerate graphs.
\begin{theorem} \label{tree} For every $c>0$ and sufficiently large integer $k$
there is a tree $T$ on $k$ vertices such that
$r_{\textrm{ind}}(T) \geq ck$.
\end{theorem}
The tree $T$ in the above theorem can be taken to be any
sufficiently large tree that contains a matching of linear size and
a star of linear size as subgraphs. It is interesting that the
induced Ramsey number for a path on $k$ vertices or a star on $k$
vertices is linear in $k$, but the induced Ramsey number for a tree
which contains both a path on $k$ vertices and a star on $k$
vertices is superlinear in $k$.
\vspace{0.3cm}
\noindent
{\bf Organization of the paper.}\,\,
In the next section we give short proofs of Theorem
\ref{main} and Theorem \ref{combined} which illustrate our methods.
Section \ref{section12} contains the key lemma that is used as a
replacement for Szemer\'edi's regularity lemma in the proofs of
several results. We answer questions of Chung-Graham and Nikiforov
on the edge distribution in graphs with a forbidden induced
subgraph in Section \ref{sectionmulticolor}. In Section \ref{moreoninduced} we show that any sufficiently
pseudo-random graph of appropriate density has strong induced Ramsey properties.
Combined with known examples of pseudo-random graphs, this leads to explicit
constructions which match and improve the best known estimates for induced Ramsey
numbers. The proof of the result that there are trees whose
induced Ramsey number is superlinear in the number of vertices is in Section \ref{superlinear}. The last section of this
paper contains some concluding remarks together with a discussion of a few conjectures and open problems.
Throughout the paper, for the sake of clarity of presentation, we
systematically omit floor and ceiling signs whenever they are not
crucial. We also do not make any serious attempt
to optimize absolute constants in our statements and proofs.
\section{Ramsey-type results for $H$-free graphs}\label{section2}
In this section, we prove Theorems \ref{main} and \ref{combined}.
While we obtain more general results later in the paper, the purpose
of this section is to illustrate on simple examples the main ideas and techniques
that we will use in our proofs. Our theorems
strengthen and generalize results from \cite{Ro} and \cite{PrRo} and the proofs we present
here are shorter and simpler than the original ones.
We start with the proof of Theorem \ref{main}, which uses the following lemma of Erd\H{o}s and
Hajnal \cite{ErHa}. We prove a generalization of this lemma
in Section \ref{sectionmulticolor}.
\begin{lemma}\label{lemmaerdoshajnal}
For each $\epsilon \in (0,1/2)$, graph $H$ on $k$ vertices, and
$H$-free graph $G=(V,E)$ on $n \geq 2$ vertices, there are disjoint
subsets $A$ and $B$ of $V$ with $|A|, |B| \geq
\epsilon^{k-1}\frac{n}{k}$ such that either every vertex in $A$ has
at most $\epsilon|B|$ neighbors in $B$, or every vertex in $A$ has
at least $(1-\epsilon)|B|$ neighbors in $B$.
\end{lemma}
Actually, the statement of the lemma in \cite{ErHa} is a bit weaker
than that of Lemma \ref{lemmaerdoshajnal}, but it is easy to obtain the above statement
by analyzing the proof of Erd\H{o}s and Hajnal more carefully.
Lemma \ref{lemmaerdoshajnal} roughly says that every $H$-free graph
contains two large disjoint vertex subsets such that the edge
density between them is either very small or very large. However, to prove
Theorem \ref{main}, we need to find a large induced subgraph
with such edge density. Our next lemma shows
how one can iterate the bipartite density result of Lemma
\ref{lemmaerdoshajnal} in order to establish the complete density
result of Theorem \ref{main}.
For $\epsilon_1,\epsilon_2 \in (0,1)$ and a graph $H$, define
$\delta(\epsilon_1,\epsilon_2,H)$ to be the largest $\delta$ (which
may be 0) such that for each $H$-free graph on $n$ vertices, there
is an induced subgraph on at least $\delta n$ vertices with edge
density at most $\epsilon_1$ or at least $1-\epsilon_2$. Notice that
for $2 \leq n_0 \leq n_1$, the edge-density of a graph on $n_1$
vertices is the average of the edge-densities of the induced
subgraphs on $n_0$ vertices. Therefore, from the definition of $\delta$,
it follows that for every $2 \leq n_0 \leq
\delta(\epsilon_1,\epsilon_2,H) n$ and $H$-free graph $G$ on $n$
vertices, $G$ contains an induced subgraph on exactly $n_0$ vertices
with edge density at most $\epsilon_1$ or at least $1-\epsilon_2$.
Recall that the edge-density $d(A)$ of a subset $A$ of $G$ equals
$e(A)/{|A| \choose 2}$, where $e(A)$ is the number of edges spanned
by $A$.
\begin{lemma}\label{useful}
Suppose $\epsilon_1,\epsilon_2 \in (0,1)$ with
$\epsilon_1+\epsilon_2<1$ and $H$ is a graph on $k \geq 2$ vertices.
Let $\epsilon=\min(\epsilon_1,\epsilon_2)$. We have
$$\delta(\epsilon_1,\epsilon_2,H) \geq
(\epsilon/4)^{k}k^{-1}\min\Big(\delta\big(3\epsilon_1/2,\epsilon_2,H\big),
\delta\big(\epsilon_1, 3\epsilon_2/2,H\big)\Big).$$
\end{lemma}
\begin{proof}
Let $G$ be an $H$-free graph on $n \geq 2$ vertices. If $n<k$ then we
may consider any two-vertex induced subgraph of $G$, which always has
edge density either 0 or 1. Therefore, for $G$ of order less than $k$ we
can take $\delta=2/k$, which is clearly larger than the right hand
side of the inequality in the assertion of the lemma. Thus we can
assume that $n \geq k$. Applying Lemma \ref{lemmaerdoshajnal} to $G$
with $\epsilon/4$ in place of $\epsilon$, we find two subsets $A$
and $B$ with $|A|,|B| \geq (\epsilon/4)^{k-1} n/k $, such that
either every vertex in $A$ is adjacent to at most
$\frac{\epsilon}{4}|B|$ vertices of $B$ or every vertex of $A$ is
adjacent to at least $(1-\frac{\epsilon}{4})|B|$ vertices of $B$.
Consider the first case in which every vertex in $A$ is adjacent to
at most $\frac{\epsilon}{4}|B|$ vertices of $B$ (the other case can
be treated similarly) and let $G[A]$ be the subgraph of $G$ induced
by the set $A$. By the definition of the function $\delta$, $G[A]$ contains
a subset $A'$ with
$$|A'|= \delta(3\epsilon_1/2,\epsilon_2,H)\Big(\frac{\epsilon}{4}\Big)^{k}\frac{n}{k} \leq
\delta(3\epsilon_1/2,\epsilon_2,H)|A|,$$
such that the subgraph induced by $A'$ has edge density at most $\frac{3}{2}\epsilon_1$ or at least
$1-\epsilon_2$. If $A'$ has edge density at least $1-\epsilon_2$ we are done, since
$G[A']$ is an induced subgraph of $G$ with at least
$(\epsilon/4)^{k}k^{-1}\delta(3\epsilon_1/2,\epsilon_2,H)n$
vertices and edge density at least $1-\epsilon_2$. So we may assume
that the edge density in $A'$ is at most $\frac{3}{2}\epsilon_1$.
Let $B_1 \subset B$ be those vertices of $B$ that have at most
$\frac{\epsilon}{2}|A'|$ neighbors in $A'$. Since $A' \subset A$, each vertex of
$A'$ has at most $\frac{\epsilon}{4}|B|$ neighbors in $B$ and the number of edges
$e(A',B) \leq \frac{\epsilon}{4}|A'||B|$. Since every vertex of $B \setminus B_1$
has more than $\frac{\epsilon}{2}|A'|$ neighbors in $A'$, it follows that
$|B \setminus B_1| \leq |B|/2$, so $B_1$ has at least $|B|/2$ vertices.
Then, by definition of $\delta$, $B_1$ contains a
subset $B'$ with
$$|B'| = \delta(3\epsilon_1/2,\epsilon_2,H)\Big(\frac{\epsilon}{4}\Big)^{k}\frac{n}{k}
\leq \delta(3\epsilon_1/2,\epsilon_2,H)|B_1|,$$
such that the induced subgraph $G[B']$ has edge density at most
$\frac{3}{2}\epsilon_1$ or at least $1-\epsilon_2$. If it has edge
density at least $1-\epsilon_2$ we are done, so we may assume
that the edge density $d(B')$ is at most $\frac{3}{2}\epsilon_1$.
Finally to complete the proof note that, since $|A'|=|B'|$, $|A' \cup B'|=2|A'|$,
$d(A'),d(B') \leq \frac{3}{2}\epsilon_1$, and
$d(A',B') \leq \frac{\epsilon_1}{2}$, we have that
\begin{eqnarray*}
e(A' \cup B')&=&e(A')+e(B')+e(A',B') \leq \frac{3}{2}\epsilon_1 {|A'| \choose 2}+
\frac{3}{2}\epsilon_1 {|B'| \choose 2}+
\frac{\epsilon_1}{2}|A'||B'|\\
&=&2\epsilon_1|A'|^2-3\epsilon_1 |A'|/2 \leq \epsilon_1{2|A'| \choose 2}.
\end{eqnarray*}
Therefore, $d(A' \cup B') \leq \epsilon_1$.
\end{proof}
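The final estimate in this proof reduces, after the factor $\epsilon_1$ cancels, to the elementary inequality $3\binom a2+\frac{a^2}2\le\binom{2a}2$ for $a=|A'|=|B'|$, i.e., $2a^2-\frac32a\le 2a^2-a$. A quick numeric confirmation (illustrative only):

```python
from math import comb

# After dividing by eps_1 and doubling to stay in integers:
#   2*(3*C(a,2)) + a^2 = 4a^2 - 3a  <=  2*C(2a,2) = 4a^2 - 2a  for all a >= 1.
for a in range(1, 2001):
    assert 2 * (3 * comb(a, 2)) + a * a <= 2 * comb(2 * a, 2)
```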
From this lemma, the proof of our first result, that every $H$-free
graph on $n$ vertices contains a subset of at least $2^{-ck(\log
\frac{1}{\epsilon})^2}n$ vertices with edge density either $ \leq
\epsilon$ or $\geq 1-\epsilon$, follows in a few lines.
\noindent {\bf Proof of Theorem \ref{main}:} Notice that if
$\epsilon_1+\epsilon_2 \geq 1$, then trivially
$\delta(\epsilon_1,\epsilon_2,H) =1$. In particular, if
$\epsilon_1\epsilon_2 \geq \frac{1}{4}$, then $\epsilon_1+\epsilon_2
\geq 1$ and $\delta(\epsilon_1,\epsilon_2,H) =1$. Therefore, by
iterating Lemma \ref{useful} for $t=\log \frac{1}{\epsilon^2}/\log
\frac{3}{2}$ iterations and using that $\epsilon \leq 1/2$, we
obtain
$$\delta(\epsilon,\epsilon,H) \geq
\left(\frac{\epsilon^{k}}{4^{k}k}\right)^t \geq 2^{-\frac{2}{\log
3/2}\left(k(\log 1/\epsilon)^2+\left(2k+\log k \right)\log
1/\epsilon\right)} \geq 2^{-15k (\log 1/\epsilon)^2},$$
which, by definition of $\delta$, completes the proof of the theorem. \hfill$\Box$
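The arithmetic in the last display can be double-checked numerically. Writing $L=\log_2\frac1\epsilon\ge 1$ (since $\epsilon\le\frac12$) and $t=2L/\log_2\frac32$, the claim is $t\,(kL+2k+\log_2 k)\le 15kL^2$; a sketch over a grid of parameters:

```python
from math import log2

# Check t*(k*L + 2*k + log2(k)) <= 15*k*L^2 with t = 2L/log2(3/2),
# for all k >= 2 and L = log2(1/eps) >= 1 on a fine grid.
for k in range(2, 200):
    for tenths in range(10, 400):            # L runs over 1.0, 1.1, ..., 39.9
        L = tenths / 10
        t = 2 * L / log2(1.5)
        assert t * (k * L + 2 * k + log2(k)) <= 15 * k * L * L
```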
Recall the Erd\H{o}s-Szemer\'edi theorem, which states that there is
an absolute constant $c$ such that every graph $G$ on $n$ vertices
with edge density $\epsilon \in (0,1/2)$ has a homogeneous set of
size at least $\frac{c \log n}{\epsilon \log \frac{1}{\epsilon}}$.
Theorem \ref{combined} follows from a simple application of Theorem
\ref{main} and the Erd\H{o}s-Szemer\'edi theorem.
\noindent {\bf Proof of Theorem \ref{combined}:} Let $G$ be a graph
on $n$ vertices which is not $k$-universal, i.e., it is $H$-free for
some fixed graph $H$ on $k$ vertices. Fix
$\epsilon=2^{-\frac{1}{5}\sqrt{\frac{\log n}{k}}}$ and apply Theorem
\ref{main} to $G$. It implies that $G$ contains a subset $W \subset
V(G)$ of size at least $2^{-15k(\log
\frac{1}{\epsilon})^2}n=n^{2/5}$ such that the subgraph induced by
$W$ has edge density at most $\epsilon$ or at least $1-\epsilon$.
Applying the Erd\H{o}s-Szemer\'edi theorem to the induced subgraph
$G[W]$ or its complement and using that $\epsilon \log 1/\epsilon
\leq 4\epsilon^{1/2}$ for all $\epsilon \leq 1$, we obtain a
homogeneous subset $W' \subset W$ with $$|W'| \geq \frac{c \log
n^{2/5}}{\epsilon \log \frac{1}{\epsilon}} \geq \frac{c\log
n}{10\epsilon^{1/2}} \geq
\frac{c}{10}2^{\frac{1}{10}\sqrt{\frac{\log n}{k}}}\log n,$$ which
completes the proof of Theorem \ref{combined}. \hfill$\Box$
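Two small computations underlie this proof and are easy to verify: with $\epsilon=2^{-\frac15\sqrt{\log n/k}}$ the size bound $2^{-15k(\log\frac1\epsilon)^2}n$ equals $n^{2/5}$ exactly, and the estimate $\epsilon\log\frac1\epsilon\le4\epsilon^{1/2}$ is the statement $x2^{-x/2}\le4$ for $x=\log_2\frac1\epsilon>0$ (the maximum of $x2^{-x/2}$ is about $1.06$). A sketch:

```python
from math import isclose

# (1) With (log2(1/eps))^2 = (log2 n)/(25k), the exponent 15k*(log2(1/eps))^2
#     equals (3/5)*log2(n), so 2^{-15k(...)^2} * n = n^{2/5} exactly.
for m in range(10, 200):                     # m = log2 n
    for k in range(2, 50):
        assert isclose(15 * k * (m / (25 * k)), 3 * m / 5)

# (2) eps * log2(1/eps) <= 4*sqrt(eps): with x = log2(1/eps) > 0, x * 2^{-x/2} <= 4.
assert max((x / 100) * 2 ** (-x / 200) for x in range(1, 10000)) <= 4
```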
\section{Key Lemma}\label{section12}
In this section we present our key lemma. We use it as a
replacement for Szemer\'edi's regularity lemma in the proofs of several Ramsey-type results,
thereby giving much better estimates. A very special case of this statement was
essentially proved in Lemma \ref{useful} in the previous section.
Our key lemma generalizes the result of Graham, R\"odl, and
Ruci\'nski \cite{GrRoRu} and has a simpler proof
than the one in \cite{GrRoRu}.
Roughly, our result says that if $(G_1,\ldots,G_r)$ is a
sequence of graphs on the same vertex set $V$ with the property that
every large subset of $V$ contains a pair of large disjoint sets
with small edge density between them in at least one of the graphs $G_i$,
then every large subset of $V$ contains a large set with
small edge density in one of the $G_i$. To formalize this concept,
we need a couple of definitions.
For a graph $G=(V,E)$ and disjoint subsets $W_1,\ldots,W_t \subset V$, the {\it density} $d_{G}(W_1,\ldots,W_t)$
between the $t \geq 2$ vertex subsets $W_1,\ldots,W_t$ is defined by
$$d_G(W_1,\ldots,W_t)=\frac{\sum_{ i < j}
e(W_i,W_j)}{\sum_{i < j } |W_i||W_j|}.$$ If
$|W_1|=\ldots=|W_t|$, then
$$d_G(W_1,\ldots,W_t)={t \choose 2}^{-1}\sum_{ i < j }
d_G(W_i,W_j).$$
Also, in this section if $t=1$ we define the density to be zero.
\begin{definition} \label{d31}
For $\alpha,\rho,\epsilon \in [0,1]$ and positive integer $t$, a
sequence $(G_1,\ldots,G_r)$ of graphs on the same vertex set $V$ is
{\bf $(\alpha,\rho,\epsilon,t)$-sparse} if for all subsets $U
\subset V$ with $|U| \geq \alpha |V|$, there are positive integers
$t_1,\ldots,t_r$ such that $\prod_{i=1}^r t_i \geq t$ and for each
$i \in [r]=\{1, \ldots, r\}$ there are disjoint subsets
$W_{i,1},\ldots,W_{i,t_i} \subset U$ with $|W_{i,1}|=\ldots
=|W_{i,t_i}|=\lceil \rho |U|\rceil$ and
$d_{G_i}(W_{i,1},\ldots,W_{i,t_i})\leq \epsilon$.
\end{definition}
We call a graph $(\alpha,\rho,\epsilon,t)$-sparse if the one-term
sequence consisting of that graph is
$(\alpha,\rho,\epsilon,t)$-sparse. By averaging, if $\alpha'\geq
\alpha$, $\rho' \leq \rho$, $\epsilon'\geq \epsilon$, $t' \leq t$,
and $(G_1,\ldots,G_r)$ is $(\alpha,\rho,\epsilon,t)$-sparse, then
$(G_1,\ldots,G_r)$ is also $(\alpha',\rho',\epsilon',t')$-sparse.
The following is our main result in this section.
\begin{lemma}
\label{l32}
If a sequence of graphs $(G_1,\ldots,G_r)$ with common vertex set $V$ is
$(\frac{1}{2}\alpha\rho,\rho',\epsilon,t)$-sparse
and $(\alpha,\rho,\epsilon/4,2)$-sparse,
then $(G_1,\ldots,G_r)$ is also $(\alpha,\frac{1}{2}\rho\rho',\epsilon,2t)$-sparse.
\end{lemma}
\begin{proof}
Since $(G_1,\ldots,G_r)$ is $(\alpha,\rho,\epsilon/4,2)$-sparse,
then for each $U \subset V$ with $|U| \geq \alpha |V|$, there is $i
\in [r]$ and disjoint subsets $X,Y \subset U$ with $|X|=|Y| =
\rho|U|$ and $d_{G_{i}}(X,Y) \leq \epsilon/4$. Let $X_1$ be the set of vertices in
$X$ that have at most $\frac{\epsilon}{2}|Y|$ neighbors in $Y$ in graph $G_i$. Then
$e_{G_i}(X\setminus X_1,Y) \geq \epsilon|X\setminus X_1||Y|/2$ and we also have $e_{G_i}(X,Y) \leq \epsilon|X||Y|/4$.
Therefore $|X_1| \geq |X|/2 \geq \frac{1}{2}\rho|U|$ and by removing extra vertices we
assume that $|X_1|= \frac{1}{2}\rho|U|$.
Since $(G_1,\ldots,G_r)$ is
$(\frac{1}{2}\alpha\rho,\rho',\epsilon,t)$-sparse, then there are
positive integers $t_{1},\ldots,t_{r}$ such that $\prod_{j=1}^r
t_{j} \geq t$ and for each $j \in [r]$ there are disjoint subsets
$X_{j,1},\ldots,X_{j,t_j} \subset X_1$ of size
$|X_{j,1}|=\ldots=|X_{j,t_j}|=\rho' |X_1|$ with density
$d_{G_j}(X_{j,1},\ldots,X_{j,t_j})\leq \epsilon$. Let $Y_1$ be the set
of vertices in $Y$ that have at most $\epsilon|X_{i,1} \cup \ldots
\cup X_{i,t_i}|$ neighbors in $X_{i,1} \cup \ldots \cup X_{i,t_i}$
in graph $G_i$. Since every vertex of $X_1$ is adjacent to at most
$\frac{\epsilon}{2}|Y|$ vertices of $Y$ and since $X_{i,1} \cup
\ldots \cup X_{i,t_i}\subset X_1$ we have that $d_{G_i}(X_{i,1} \cup
\ldots \cup X_{i,t_i},Y)\leq \epsilon/2$. On the other hand,
$d_{G_i}(X_{i,1} \cup \ldots \cup X_{i,t_i},Y\setminus Y_1)\geq
\epsilon$. Therefore $|Y_1|\geq |Y|/2$, so again we can assume that
$|Y_1|=\frac{1}{2}\rho|U|=|X_1|$. Since $(G_1,\ldots,G_r)$ is
$(\frac{1}{2}\alpha\rho,\rho',\epsilon,t)$-sparse, then there are
positive integers $s_{1},\ldots,s_{r}$ such that $\prod_{j=1}^r
s_{j} \geq t$ and for each $j \in [r]$ there are disjoint subsets
$Y_{j,1},\ldots,Y_{j,s_j} \subset Y_1$ with
$d_{G_j}(Y_{j,1},\ldots,Y_{j,s_j})\leq \epsilon$ and
$|Y_{j,1}|=\ldots=|Y_{j,s_j}|=\rho' |Y_1|$.
By the above construction, the edge density between
$X_{i,1}\cup \ldots \cup X_{i,t_i}$ and $Y_{i,1}\cup \ldots \cup Y_{i,s_i}$
is bounded from above by $\epsilon$. We also have that both
$d_{G_i}(X_{i,1},\ldots,X_{i,t_i})$ and $d_{G_i}(Y_{i,1},\ldots,Y_{i,s_i})$ are at most $\epsilon$,
and all of the sets $X_{i,1},\ldots,X_{i,t_i},Y_{i,1},\ldots,Y_{i,s_i}$ have the same size
$\rho'|X_1|=\rho'|Y_1|$.
Therefore $d_{G_i}(X_{i,1},\ldots,X_{i,t_i},Y_{i,1},\ldots,Y_{i,s_i}) \leq \epsilon$, implying
that $(G_1,\ldots,G_r)$ is
$\left(\alpha,\frac{1}{2}\rho\rho',\epsilon,u\right)$-sparse with
$u=(t_i+s_i)\prod_{j \in [r]\setminus \{i\}}\max(t_j,s_j)$ for some $i$.
By the arithmetic mean-geometric mean inequality, we have
$$t^2 \leq \prod_{j=1}^r t_j \prod_{j=1}^r s_j \leq
t_is_i\left(\prod_{j \in [r]\setminus \{i\}} \max(t_j,s_j)\right)^2
=\frac{t_is_i}{(t_i+s_i)^2}u^2 \leq \frac{u^2}{4}.$$
Thus $u \geq 2t$. Altogether this shows that $(G_1,\ldots,G_r)$ is
$\left(\alpha,\frac{1}{2}\rho\rho',\epsilon,2t\right)$-sparse, completing the proof.
\end{proof}
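The two elementary ingredients of the last step -- the AM--GM bound $t_is_i\le\frac{(t_i+s_i)^2}4$ and $t_js_j\le\max(t_j,s_j)^2$ -- can be confirmed on a grid (illustrative only):

```python
# u = (t_i + s_i) * prod_{j != i} max(t_j, s_j) dominates because
# 4ab <= (a + b)^2 (AM-GM) and ab <= max(a, b)^2 for positive integers.
for a in range(1, 100):
    for b in range(1, 100):
        assert 4 * a * b <= (a + b) ** 2
        assert a * b <= max(a, b) ** 2
```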
Rather than using this lemma directly, in applications we usually need the following two
corollaries. The first one is obtained by simply applying Lemma \ref{l32} $h-1$ times.
\begin{corollary}\label{secondcor}
If $(G_1,\ldots,G_r)$ is $(\alpha,\rho,\epsilon/4,2)$-sparse and $h$
is a positive integer, then $(G_1,\ldots,G_r)$ is also
$\left((\frac{2}{\rho})^{h-1}\alpha,2^{1-h}\rho^{h},\epsilon,2^h\right)$-sparse.
\end{corollary}
If we use the last statement with $h=r\log \frac{1}{\epsilon}$ and $\alpha=(\frac{\rho}{2})^{h-1}$,
then we get that there is an index $i \in [r]$ and disjoint
subsets $W_1,\ldots,W_{t} \subset V$ with $t \geq 2^{h/r}=
\frac{1}{\epsilon}$, $|W_1|=\ldots=|W_t|= 2^{1-h}\rho^{h}|V|$,
and $d_{G_i}(W_1,\ldots,W_t) \leq \epsilon$. Since
${|W_1| \choose 2} \leq \frac{\epsilon}{t}{t|W_1| \choose 2}$, even if every $W_i$ has edge density one, still the edge density in the
set $W_1 \cup \ldots \cup W_t$ is at most $2\epsilon$. Therefore, (using $\epsilon/2$ instead of $\epsilon$) we
have the following corollary.
\begin{corollary}\label{corbip}
If $(G_1,\ldots,G_r)$ is
$((\frac{\rho}{2})^{h-1},\rho,\epsilon/8,2)$-sparse where $h=r\log \frac{2}{\epsilon}$,
then there is $i \in [r]$ and an
induced subgraph $G'$ of $G_i$ on
$2\epsilon^{-1}2^{1-h}\rho^{h}|V|$ vertices that has edge density at
most $\epsilon$.
\end{corollary}
The key lemma in the paper of Graham, R\"odl, and Ruci\'nski
\cite{GrRoRu} on the Ramsey number of graphs (their Lemma 1) is
essentially the $r=1$ case of Corollary \ref{corbip}.
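The density bookkeeping behind Corollary \ref{corbip} -- that $t$ parts of size $w$, each possibly complete inside, contribute within-part density $t\binom w2/\binom{tw}2=\frac{w-1}{tw-1}\le\frac1t$, so $t\ge\frac1\epsilon$ leaves total density at most $2\epsilon$ -- can be checked exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

# Within-part density of the union W_1 U ... U W_t (t parts, each of size w):
# t*C(w,2) / C(t*w,2) = (w-1)/(t*w-1) <= 1/t, so t >= 1/eps bounds the
# total density by eps (within parts) + eps (across parts) = 2*eps.
for t in range(1, 60):
    for w in range(2, 60):
        within = Fraction(t * comb(w, 2), comb(t * w, 2))
        assert within == Fraction(w - 1, t * w - 1)
        assert within <= Fraction(1, t)
```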
\section{Edge distribution in $H$-free graphs}\label{sectionmulticolor}
In this section, we obtain several results on the edge distribution of
graphs with a forbidden induced subgraph which answer open questions by
Nikiforov and Chung-Graham. We first prove a strengthening of R\"odl's
theorem (mentioned in the introduction) without using the regularity lemma.
Then we present a proof of Theorem \ref{dev} on the
dependence of error terms in quasirandom properties. We
conclude this section with an upper bound on the maximum edge discrepancy
in subgraphs of $H$-free graphs.
To obtain these results we need the following generalization of Lemma
\ref{lemmaerdoshajnal}.
\begin{lemma}\label{lemmaerdoshajnal4}
Let $H$ be a $k$-vertex graph and let $G$ be a graph on $n \geq k^2$ vertices
that contains fewer than $n^{k}(1-\frac{k^2}{2n})\prod_{i=1}^{k-1} (1-\delta_i)\epsilon_i^{k-i}$
labeled induced copies of $H$, where $\epsilon_0=1$ and $\epsilon_i,\delta_{i} \in (0,1)$ for all
$1 \leq i \leq k-1$. Then there is an index $i\leq k-1$ and disjoint subsets $A$ and $B$ of $G$ with
$|A|\geq \frac{\delta_in}{k(k-i)}\prod_{j<i} \epsilon_j$ and $|B|
\geq \frac{n}{k}\prod_{j < i} \epsilon_j$ such that either every vertex of $A$
is adjacent to at most $\epsilon_i|B|$ vertices of $B$ or every
vertex of $A$ is adjacent to at least $(1-\epsilon_i)|B|$ vertices
of $B$.
\end{lemma}
\begin{proof}
Let $M$ denote the number of labeled induced copies of $H$ in $G$, which by our assumption satisfies
\begin{eqnarray}
\label{M-bound}
M < n^{k}\left(1-\frac{k^2}{2n}\right)\prod_{i=1}^{k-1} (1-\delta_i)\epsilon_i^{k-i}.
\end{eqnarray}
We may assume that the vertex set of $H$ is $[k]$. Consider a random
partition $V_1 \cup \ldots \cup V_k$ of the vertices of $G$ such that each $V_i$ has
cardinality $n/k$. Note that for any such partition there are $(n/k)^k$ ordered
$k$-tuples of vertices of $G$ with the property that the $i$-th vertex of the $k$-tuple is in $V_i$ for all $i
\in [k]$.
On the other hand the total number of ordered $k$-tuples of vertices is $n(n-1)\cdots(n-k+1)$ and each of these
$k$-tuples
has the above property with equal probability. This implies that for any given
$k$-tuple the probability that its $i$-th vertex is in $V_i$ for all $i \in [k]$ equals
$\prod_{i=1}^k \frac{n/k}{n-i+1}$. In particular, by linearity of expectation,
the expected number of labeled induced copies of $H$ in $G$
for which the image of every vertex $i \in [k]$ is in $V_i$ is at most $M\cdot \prod_{i=1}^k \frac{n/k}{n-i+1}$.
Using that $\prod (1-x_i) \geq 1-\sum x_i$ for any $0 \leq x_i \leq 1$ and that $n \geq k^2$, we obtain
\begin{eqnarray*}
\prod_{i=1}^k
\frac{n/k}{n-i+1} &=& k^{-k}\prod_{i=0}^{k-1}(1-i/n)^{-1} \leq
k^{-k}\left(1-\sum_{i=0}^{k-1} i/n\right)^{-1} = k^{-k}\left(1-{k
\choose 2}/n\right)^{-1}\\ &<&
\left(1-\frac{k^2}{2n}\right)^{-1}k^{-k}.
\end{eqnarray*}
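As a quick numerical sanity check (an illustration only, not part of the proof; the helper names are ours), the displayed bound $\prod_{i=1}^k \frac{n/k}{n-i+1} < \big(1-\frac{k^2}{2n}\big)^{-1}k^{-k}$ can be verified for small parameters:

```python
def partition_probability(n, k):
    # prod_{i=1}^{k} (n/k) / (n - i + 1): the probability that a fixed
    # ordered k-tuple has its i-th vertex in V_i for all i in [k]
    p = 1.0
    for i in range(1, k + 1):
        p *= (n / k) / (n - i + 1)
    return p

def derived_bound(n, k):
    # (1 - k^2/(2n))^{-1} * k^{-k}, the upper bound obtained above
    return k ** (-k) / (1 - k * k / (2 * n))

# the strict inequality holds whenever n >= k^2
for k in range(2, 9):
    for n in (k * k, 2 * k * k, 10 * k * k):
        assert partition_probability(n, k) < derived_bound(n, k)
```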
This, together with (\ref{M-bound}), shows that there is a partition $V_1 \cup \ldots \cup V_k$ of $G$ into sets
of cardinality $n/k$ such that the total number of labeled induced copies of $H$ in $G$
for which the image of every vertex $i\in [k]$ is in $V_i$ is less than
\begin{equation}
\label{upbound} M\left(1-\frac{k^2}{2n}\right)^{-1}k^{-k}<k^{-k}n^{k}\prod_{i=1}^{k-1}
(1-\delta_i)\epsilon_i^{k-i}.
\end{equation}
We use this estimate to construct sets $A$ and $B$ which satisfy the assertion of the lemma.
For a vertex $v \in V$, the {\it neighborhood} $N(v)$ is the set of
vertices of $G$ that are adjacent to $v$. For $v \in V_i$ and a
subset $S \subset V_j$ with $i \not = j$, let $\tilde{N}(v,S)=N(v) \cap S$
if $(i,j)$ is an edge of $H$ and $\tilde{N}(v,S)=S \setminus N(v)$
otherwise. We will try iteratively to build many induced copies of $H$.
After $i$ steps, we will
have vertices $v_1,\ldots,v_{i}$ with $v_j \in
V_j$ for $j \leq i$ and subsets
$V_{i+1,i}, V_{i+2,i},\ldots, V_{k,i}$ such that
\begin{enumerate}
\item $V_{\ell,i}$ is a subset of $V_{\ell}$ of size $|V_{\ell,i}| \geq \frac{n}{k}\prod_{j=1}^i\epsilon_j$ for all $i+1 \leq
\ell \leq k$,
\item for $1 \leq j<\ell \leq i$, $(v_j,v_{\ell})$ is an edge of $G$ if and
only if $(j,\ell)$ is an edge of $H$,
\item and if $j \leq i<\ell$ and $w \in V_{\ell,i}$, then
$(v_j,w)$ is an edge of $G$ if and only if $(j,\ell)$ is an edge of
$H$.
\end{enumerate}
In the first step, we call a vertex $v\in V_1$ {\it good} if
$|\tilde{N}(v,V_j)| \geq \epsilon_1|V_j|$ for each $j >1$. If less than a
fraction $1-\delta_1$ of the vertices in $V_1$ are good, then, by
the pigeonhole principle, there is a subset $A \subset V_1$ with
$|A|\geq \frac{\delta_1}{k-1}|V_1|=\frac{\delta_1}{k(k-1)}n$ and an index $j > 1$ such that
$|\tilde{N}(v,V_j)| < \epsilon_1|V_j|$ for each $v \in A$. Letting $B=V_j$, one can easily check that
$A$ and $B$ satisfy the assertion of the lemma. Hence, we may assume that at least a
fraction $1-\delta_1$ of the vertices $v_1 \in V_1$ are good, choose any good $v_1$ and
define $V_{j,1}=\tilde{N}(v_1,V_j)$ for $j>1$, completing the first step.
Suppose that after step $i$ the properties 1-3 are satisfied. Then,
in step $i+1$, we again call a vertex $v \in V_{i+1,i}$ {\it good} if
$|\tilde{N}(v,V_{j,i})| \geq \epsilon_{i+1}|V_{j,i}|$ for each $j>i+1$. If
less than a fraction $1-\delta_{i+1}$ of the vertices of $V_{i+1,i}$ are
good, then, by the pigeonhole principle, there is a subset $A
\subset V_{i+1,i}$ with
$|A|\geq \frac{\delta_{i+1}}{k-i-1}|V_{i+1,i}|$
and index $j >i+1$ such that
$|\tilde{N}(v,V_{j,i})|<\epsilon_{i+1}|V_{j,i}|$ for each $v \in A$. Letting
$B=V_{j,i}$, one can check, using properties 1-3, that
$A$ and $B$ satisfy the assertion of the lemma. Hence, we may assume that at least a
fraction $1-\delta_{i+1}$ of the vertices $v_{i+1} \in V_{i+1,i}$
are good, choose any good $v_{i+1}$, and define $V_{j,i+1}=\tilde{N}(v_{i+1},V_{j,i})$ for $j>i+1$,
completing step $i+1$. Notice that after step $i+1$, we have
$|V_{j,i+1}|\geq \epsilon_{i+1}|V_{j,i}|$ for $j>i+1$, which guarantees that
property 1 is satisfied. The remaining properties (2 and 3) follow from
our construction of sets $V_{j,i+1}$.
Thus, if our process fails in one of the first $k-1$ steps, we obtain the desired sets $A$ and $B$.
Suppose now that we successfully performed $k-1$ steps. Note that in step $i+1$, we had at least $(1-\delta_{i+1})|V_{i+1,i}| \geq
\frac{n}{k}(1-\delta_{i+1})\prod_{j=1}^i \epsilon_j$ vertices to choose for vertex $v_{i+1}$. Also note that, by property 3, after step $k-1$
we can choose any vertex in the set $V_{k,k-1}$ to be $v_k$. Moreover, by property 2, every choice of the vertices $v_1, \ldots, v_k$ forms a
labeled induced copy of $H$. Altogether, this gives at least
\begin{eqnarray*}
|V_{k,k-1}| \cdot \prod_{i=1}^{k-1}\bigg( \frac{n}{k}(1-\delta_{i})\prod_{0 \leq j<i}
\epsilon_j\bigg) &\geq& \frac{n}{k}\prod_{j=1}^{k-1}\epsilon_j \, \cdot \,
\prod_{i=1}^{k-1}\bigg( \frac{n}{k}(1-\delta_{i})\prod_{0 \leq j<i}
\epsilon_j\bigg)\\
&=&(n/k)^k\prod_{i=1}^{k-1}(1-\delta_i)\epsilon_i^{k-i}
\end{eqnarray*}
labeled induced copies of $H$ for which the image of every vertex $i \in [k]$ is in $V_i$.
This contradicts (\ref{upbound}) and completes the proof.
\end{proof}
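The iterative procedure in the proof above is a simple dichotomy: at each step either a good vertex exists (and the embedding advances) or a witness pair $(A,B)$ appears. The following sketch (all names are ours, and it simplifies by taking $\delta_i=0$, i.e., it only asks whether \emph{some} good vertex exists at each step) illustrates this:

```python
def embed_or_witness(adj, H_edges, parts, eps):
    """Greedy dichotomy from the proof: either embed an induced copy of H
    (one vertex per part) or return a sparse witness pair (A, B).
    adj: dict vertex -> set of neighbours in G.
    H_edges: set of pairs (i, j) with i < j, on H-vertices 0..k-1.
    parts: list of k disjoint vertex lists; part i hosts H-vertex i.
    """
    k = len(parts)

    def tilde_N(v, S, i, j):
        # N(v) intersect S if (i, j) is an H-edge, S \ N(v) otherwise
        want = (min(i, j), max(i, j)) in H_edges
        return {s for s in S if (s in adj[v]) == want}

    live = [set(p) for p in parts]   # live[j] plays the role of V_{j,i}
    chosen = []
    for i in range(k):
        good = None
        for v in sorted(live[i]):
            if all(len(tilde_N(v, live[j], i, j)) >= eps * len(live[j])
                   for j in range(i + 1, k)):
                good = v
                break
        if good is None:
            # every v in live[i] fails for some j; report a witness pair
            for j in range(i + 1, k):
                A = {v for v in live[i]
                     if len(tilde_N(v, live[j], i, j)) < eps * len(live[j])}
                if A:
                    return ('witness', A, set(live[j]), i, j)
            raise ValueError('ran out of vertices; enlarge the parts')
        chosen.append(good)
        for j in range(i + 1, k):
            live[j] = tilde_N(good, live[j], i, j)
    return ('copy', chosen)

# On the complete graph K_6 with H = K_3 the procedure embeds a triangle;
# on the empty graph it produces a witness pair instead.
triangle = {(0, 1), (0, 2), (1, 2)}
parts = [[0, 1], [2, 3], [4, 5]]
on_complete = embed_or_witness({v: set(range(6)) - {v} for v in range(6)},
                               triangle, parts, 0.5)
on_empty = embed_or_witness({v: set() for v in range(6)},
                            triangle, parts, 0.5)
assert on_complete[0] == 'copy' and on_empty[0] == 'witness'
```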
Notice that the number of induced copies of $H$ in any induced
subgraph of $G$ is at most the number of induced copies of $H$ in
$G$. Let $\epsilon_i=\epsilon\leq 1/2$ and $\delta_i=\frac{1}{2}$
for $1 \leq i \leq k-1$ and let $\alpha \geq k^2/n$. Applying Lemma
\ref{lemmaerdoshajnal4} with these $\epsilon_i, \delta_i$ to subsets
of $G$ of size $\alpha n$, and using that $\frac{\epsilon^{i-1}}{k-i}
\geq \epsilon^{k-1}$ and $1-\frac{k^2}{2\alpha n}\geq 1/2$, we obtain the
following corollary.
\begin{corollary}\label{last6}
Let $H$ be a graph with $k$ vertices, $\alpha \geq k^2/n$, $\epsilon
\leq 1/2$, and $G$ be a graph with at most $2^{-k}\epsilon^{k
\choose 2}(\alpha n)^{k}$ induced copies of $H$. Then the pair $(G,
\bar G)$ is $(\alpha,\frac{\epsilon^{k-1}}{2k},\epsilon,2)$-sparse.
\end{corollary}
The next statement strengthens Theorem
\ref{main} by allowing for many induced copies of $H$.
It follows from Corollary \ref{corbip} with $r=2, h=2\log (2/\epsilon),
\rho=\frac{\epsilon^{k-1}}{2k}$,
combined with Corollary \ref{last6}, in which we set $\alpha=(\rho/2)^{h-1}$.
\begin{corollary}\label{maingeneralized7}
There is a constant $c$ such that for each $\epsilon \in (0,1/2)$
and graph $H$ on $k$ vertices, every graph $G$ on $n$ vertices with less than
$2^{-c(k \log \frac{1}{\epsilon})^2}n^k$ induced copies of
$H$ contains an induced subgraph of size at least $2^{-ck (\log
\frac{1}{\epsilon})^2}n$ with edge density at most $\epsilon$ or at
least $1-\epsilon$.
\end{corollary}
This result demonstrates that for each $\epsilon \in (0,1/2)$ and
graph $H$, there exist positive constants
$\delta^*=\delta^*(\epsilon,H)$ and $\kappa^*=\kappa^*(\epsilon,H)$
such that every graph $G=(V,E)$ on $n$ vertices with less than
$\kappa^* \,n^k$ induced copies of $H$ contains a subset $W \subset
V$ of size at least $\delta^*\,n$ such that the edge density of $W$
is at most $\epsilon$ or at least $1-\epsilon$. Furthermore, there
is a constant $c$ such that we can take $\delta^*(\epsilon,H)=2^{-ck
(\log \frac{1}{\epsilon})^2}$ and $\kappa^*(\epsilon,H)=2^{-c(k\log
\frac{1}{\epsilon})^2}$. Applying Corollary \ref{maingeneralized7}
recursively one can obtain an equitable partition of $G$ into a
small number of subsets each with low or high density.
\begin{theorem}\label{weakversion}
For each $\epsilon \in (0,1/2)$ and graph $H$ on $k$ vertices, there
are positive constants $\kappa=\kappa(\epsilon,H)$ and
$C=C(\epsilon,H)$ such that for every graph $G=(V,E)$ on $n$ vertices
with less than $\kappa\, n^{k}$ induced copies of $H$, there is an
equitable partition $V=\bigcup_{i=1}^{\ell} V_i$ such that $\ell
\leq C$ and the edge density in each $V_i$ is at most $\epsilon$ or
at least $1-\epsilon$.
\end{theorem}
\noindent This extension of R\"odl's theorem was proved by Nikiforov
\cite{Ni} using the regularity lemma, and therefore it had quite poor
(tower-type) dependence of $\kappa$ and $C$ on $\epsilon$ and $k$.
Obtaining a proof that avoids the regularity lemma was the main
open problem raised in \cite{Ni}.
\vspace{0.2cm}
\noindent
{\bf Proof of Theorem \ref{weakversion}.}\,
Let
$\kappa(\epsilon,H)=(\frac{\epsilon}{4})^k\kappa^*(\frac{\epsilon}{4},H)$
and $C(\epsilon,H)=4/(\epsilon \delta^*(\frac{\epsilon}{4},H))$, where
$\kappa^*$ and $\delta^*$ were defined above.
Take a subset $W_1 \subset V$ of size $
\delta^*(\frac{\epsilon}{4},H)\frac{\epsilon}{4}n$ whose edge
density is at most $\frac{\epsilon}{4}$ or at least
$1-\frac{\epsilon}{4}$, and set $U_1 =V\setminus W_1$. For $j \geq
1$, if $|U_j| \geq \frac{\epsilon}{4} n$, then by definition of $\kappa$ the number of induced copies of
$H$ in $U_j$ is at most the number of such copies in $G$, which is less than $\kappa\,n^k=
(\frac{\epsilon}{4})^k\kappa^*\,n^k \leq \kappa^*|U_j|^k$.
Therefore by definition of $\kappa^*$ and $\delta^*$ we can
find a subset $W_{j+1} \subset U_j$ of size
$\delta^*\frac{\epsilon}{4}n \leq \delta^*|U_j|$ whose edge
density is at most $\frac{\epsilon}{4}$ or at least
$1-\frac{\epsilon}{4}$, and set $U_{j+1}=U_j \setminus W_{j+1}$.
Once this process stops, we have disjoint sets
$W_1,\ldots,W_{\ell}$, each with the same cardinality, and a subset
$U_{\ell}$ of cardinality at most $\frac{\epsilon}{4}n$. The number
$\ell$ is at most $$n/|W_1| \leq 4/(\epsilon
\delta^*(\frac{\epsilon}{4},H)).$$
Partition the set $U_\ell$ into $\ell$ equal parts $T_1, \ldots, T_\ell$
and let $V_j=W_j \cup T_j$ for $1 \leq j \leq \ell$.
Notice that $V=V_1 \cup \ldots \cup V_{\ell}$ is
an equitable partition of $V$. By definition,
$|T_j|=|U_\ell|/\ell \leq \frac{\epsilon}{4}n/\ell$.
On the other hand $|W_j|=(n-|U_\ell|)/\ell \geq (1-\epsilon/4)n/\ell$.
Since $1-\epsilon/4 >7/8$, this implies that
$$|T_j|\leq \frac{\epsilon}{4}n/\ell \leq \frac{\epsilon}{4}\big(1-\epsilon/4\big)^{-1}|W_j| \leq \frac{2\epsilon}{7}|W_j|.$$
We next look at the edge density in $V_j$. If the edge density in $W_j$ is at most $\epsilon/4$, then
using the above bound on $|T_j|$, it is easy to check that the number of edges in $V_j$ is at most
$${|T_j| \choose 2}+|T_j||W_j|+\frac{\epsilon}{4}{|W_j| \choose 2}
\leq \epsilon{|W_j| \choose 2} \leq \epsilon{|V_j| \choose 2}.$$
Hence, the edge density in each such $V_j$ is at most $\epsilon$.
Similarly, if the edge density in $W_j$ is at least
$1-\frac{\epsilon}{4}$, then the edge density in $V_j$ is at least
$1-\epsilon$. This completes the proof.
\hfill $\Box$
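The edge count described as easy to check in the proof above uses only the bound $|T_j|\le\frac{2\epsilon}{7}|W_j|$; a quick numerical check (our helper names; a sanity check rather than a proof) confirms it over a range of parameters:

```python
def binom2(m):
    # number of pairs from an m-element set
    return m * (m - 1) / 2

def density_step_holds(w, eps):
    # with t = |T_j| <= (2 eps / 7) w, verify
    # C(t,2) + t*w + (eps/4) C(w,2) <= eps C(w,2)
    t = int(2 * eps * w / 7)
    return binom2(t) + t * w + (eps / 4) * binom2(w) <= eps * binom2(w)

assert all(density_step_holds(w, eps)
           for w in range(8, 400)
           for eps in (0.05, 0.1, 0.25, 0.5))
```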
\vspace{0.2cm}
We next use Lemma \ref{lemmaerdoshajnal4} to prove that there is a constant $c>0$ such that
every graph $G$ on $n$ vertices which contains at most
$(1-\epsilon)2^{-{k \choose 2}}n^k$ labeled
induced copies of some fixed $k$-vertex graph $H$ has a subset
$S$ of size $|S|=\lfloor n/2 \rfloor$ with $|e(S)-\frac{n^2}{16}| \geq \epsilon
c^{-k} n^2$.
\vspace{0.25cm}
\noindent
{\bf Proof of Theorem \ref{dev}.}\, For $1 \leq i \leq k-1$, let
$\epsilon_i=\frac{1}{2}(1-2^{i-k-2}\epsilon)$ and
$\delta_i=2^{i-k-2}\epsilon$. Notice that for all $i \leq k-1$
\begin{eqnarray}
\label{eq3}
\prod_{j<i}\epsilon_j=2^{-i+1}\prod_{j<i}\big(1-2^{j-k-2}\epsilon\big)\geq
2^{-i+1}\bigg(1-\epsilon \sum_{j<k-1}2^{j-k-2}\bigg) \geq
2^{-i+1}(1-\epsilon/8)>2^{-i}
\end{eqnarray}
and also that
\begin{eqnarray*}
\prod_{i=1}^{k-1}(1-\delta_i)\epsilon_i^{k-i}&=&
2^{-{k \choose 2}}\prod_{i=1}^{k-1}\big(1-2^{i-k-2}\epsilon \big)^{k-i+1}=
2^{-{k \choose
2}}\prod_{j=2}^{k} \big(1-\epsilon2^{-j-1}\big)^{j}\\ &\geq&
2^{-{k \choose 2}}\bigg(1-\epsilon\sum_{j=2}^k \frac{j}{2^{j+1}}\bigg)
>
\left(1-\frac{3\epsilon}{4}\right)2^{-{k \choose 2}},
\end{eqnarray*}
where in the last step we used that $\sum_{j=2}^k \frac{j}{2^{j+1}} < \frac{3}{4}$.
We may assume that $\epsilon \geq 2k^2/n$ since otherwise
by choosing the constant $c$ large enough we get that $\epsilon c^{-k}n^2<1$ and the conclusion of the
theorem follows easily. Therefore
$$ \left(1-\frac{k^2}{2n}\right)\prod_{i=1}^{k-1} (1-\delta_i)\epsilon_i^{k-i} \geq
\left(1-\frac{\epsilon}{4}\right)\left(1-\frac{3\epsilon}{4}\right)2^{-{k \choose 2}} >
(1-\epsilon)2^{-{k \choose 2}},$$ and we can apply Lemma
\ref{lemmaerdoshajnal4} with $\epsilon_i$ and $\delta_i$ as above to
our graph $G$ since it contains at most $(1-\epsilon)2^{-{k \choose
2}}n^k$ labeled induced copies of $H$. This lemma, together with
(\ref{eq3}), implies that there is an index $i \leq k-1$ and
disjoint subsets $A$ and $B$ with
$$|A|\geq \frac{\delta_in}{k(k-i)}\prod_{j<i} \epsilon_j \geq k^{-2}2^{-k-2}n,$$ $$|B|
\geq \frac{n}{k}\prod_{j < i} \epsilon_j \geq 2^{-i}k^{-1}n,$$ and
every element of $A$ is adjacent to at most $\epsilon_i|B|$ elements
of $B$ or every element of $A$ is adjacent to at least
$(1-\epsilon_i)|B|$ elements of $B$. In either case, we have
$$\left|e(A,B)-\frac{1}{2}|A||B|\right| \geq
\left(\frac{1}{2}-\epsilon_i\right)|A||B|=2^{i-k-3}\epsilon|A||B| \geq k^{-3}2^{-2k-5}\epsilon n^2.$$
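The bookkeeping with the $\epsilon_i$ and $\delta_i$ above can be confirmed numerically; note that the partial sums $\sum_{j=2}^{k} j/2^{j+1}$ stay strictly below the limit $3/4$, which is what drives the product bound (a sanity check in our notation, not part of the proof):

```python
import math

def check_products(k, eps):
    # eps_i = (1 - 2^{i-k-2} eps) / 2 and delta_i = 2^{i-k-2} eps, 1 <= i <= k-1
    e = [0.5 * (1 - 2 ** (i - k - 2) * eps) for i in range(1, k)]
    d = [2 ** (i - k - 2) * eps for i in range(1, k)]
    # inequality (eq3): prod_{j < i} eps_j > 2^{-i} for all i <= k-1 (eps_0 = 1)
    for i in range(1, k):
        assert math.prod(e[:i - 1]) > 2 ** (-i)
    # product bound: prod (1 - delta_i) eps_i^{k-i} > (1 - 3 eps / 4) 2^{-C(k,2)}
    lhs = math.prod((1 - d[i]) * e[i] ** (k - 1 - i) for i in range(k - 1))
    assert lhs > (1 - 0.75 * eps) * 2 ** (-k * (k - 1) / 2)
    return lhs

for k in range(2, 13):
    for eps in (0.1, 0.5, 0.9):
        check_products(k, eps)
```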
Note that
$$e(A,B)-\frac{1}{2}|A||B|=\left(e(A \cup B)-\frac{1}{2}{|A \cup B|
\choose 2}\right)-\left(e(A)-\frac{1}{2}{|A| \choose
2}\right)-\left(e(B)-\frac{1}{2}{|B| \choose 2}\right).$$ It follows
from the triangle inequality that there is some subset of vertices
$R \in \{A,B, A \cup B\}$ such that
\begin{equation}
\label{subset}
\left|e(R)-\frac{1}{2}{|R| \choose 2}\right| \geq
\frac{1}{3} k^{-3}2^{-2k-5}\epsilon n^2,
\end{equation}
i.e., it deviates by at least $\epsilon k^{-3}2^{-2k-5}n^2/3$ edges from having edge density $1/2$.
To finish the proof we will use the lemma of Erd\H{o}s et al. \cite{ErGoPaSp}, mentioned in the introduction.
This lemma says that if a graph $G$ on $n$ vertices with edge density $\eta$ has a subset that
deviates by $D$ edges from having edge density $\eta$, then it also
has a subset of size $n/2$ that deviates by at least $D/5$ edges from having edge density $\eta$.
Note that if the number of edges of our graph $G$ deviates from
$\frac{1}{2}{n \choose 2}$ by more than $\epsilon k^{-3}2^{-2k-5}n^2/30$,
then by averaging over all subsets of size $n/2$ we will find a subset $S$ satisfying our assertion.
Otherwise, the edge density $\eta$ of $G$ satisfies $\big|e(G)-\frac{1}{2}{n \choose 2}\big| \leq \epsilon k^{-3}2^{-2k-5}n^2/30$, and then
the subset $R$ from (\ref{subset}) deviates by at least
$\epsilon k^{-3}2^{-2k-5}n^2/3- \epsilon k^{-3}2^{-2k-5}n^2/30 \geq \epsilon k^{-3}2^{-2k-5}n^2/4$ edges from
having edge density $\eta$. Then, by the lemma of Erd\H{o}s et al., $G$ has a subset $S$ of cardinality $n/2$
that deviates by at least $\epsilon k^{-3}2^{-2k-5}n^2/20$ edges from having edge density $\eta$.
This $S$ satisfies
$$\left|e(S)-\frac{1}{4}|S|^2\right| \geq \epsilon k^{-3}2^{-2k-5}n^2/20-\epsilon k^{-3}2^{-2k-5}n^2/30=
\Omega\left(\epsilon
k^{-3}2^{-2k}n^2\right),$$ completing the proof. \hfill $\Box$
\vspace{0.1cm}
For positive integers $k$ and $n$, recall that $D(k,n)$ denotes the largest
integer such that every graph $G$ on $n$ vertices that is $H$-free for some $k$-vertex graph
$H$ contains a subset $S$ of size $n/2$
with $|e(S)-\frac{1}{16}n^2|>D(k,n)$.
We end this section by proving the upper bound on $D(k,n)$.
\begin{proposition}
There is a constant $c>0$ such that for all positive integers $k$ and
$n \geq 2^{k/2}$, there is a $K_k$-free graph $G$ on $n$ vertices such that for
every subset $S$ of $n/2$ vertices of $G$,
$$\Big|e(S)-\frac{1}{16}n^2\Big|<c2^{-k/4}n^2.$$
\end{proposition}
\begin{proof}
Consider the random graph $G(\ell,1/2)$ with $\ell=2^{ k /2}$. For
every subset of vertices $X$ in this graph the number of edges in
$X$ is a binomially distributed random variable with expectation
$\frac{|X|(|X|-1)}{4}$. Therefore by Chernoff's bound (see, e.g.,
Appendix A in \cite{AlSp}), the probability that it deviates from
this value by $t$ is at most $2e^{-t^2/|X|^2}$. Thus choosing
$t=1.5\ell^{3/2}$ we obtain that the probability that there is a
subset of vertices $X$ such that
$\big|e(X)-\frac{|X|(|X|-1)}{4}\big|>t$ is at most $2^\ell \cdot
2e^{-t^2/\ell^2} \ll 1$. Also, the expected number of copies of $K_k$ in this random graph is
${\ell \choose k}2^{-{k \choose 2}} \leq (e\ell/k)^k2^{-k(k-1)/2}=\big(e\sqrt{2}/k\big)^k \ll 1$.
This implies that there is a $K_k$-free graph $\Gamma$
on $\ell$ vertices such that every subset $X$ of $\Gamma$ satisfies
\begin{equation}
\label{random}
\Big|e(X)-\frac{1}{4}|X|^2\Big| \leq 2\ell^{3/2}.
\end{equation}
Let $G$ be the graph obtained by replacing every
vertex $u$ of $\Gamma$ with an independent set $I_u$, of size $n/\ell$,
and by replacing every edge $(u,v)$ of $\Gamma$ with a complete bipartite graph, whose
partition classes are independent sets $I_u$ and $I_v$.
Since $\Gamma$ does not contain $K_k$, neither does $G$ (a clique in $G$ meets each independent set $I_u$ in at most one vertex).
We claim that the graph $G$ satisfies the assertion of the proposition.
Suppose for contradiction that there is a subset $S$ of $n/2$ vertices of $G$ satisfying
$$e(S)-\frac{1}{16}n^2 >4\ell^{3/2}(n/\ell)^2=4\ell^{-1/2}n^2=2^{-k/4+2}n^2,$$
(the other case when $e(S)-n^2/16<-4\ell^{-1/2}n^2$ can be treated similarly).
For every vertex $u \in \Gamma$ let the size of $S \cap I_u$ be $a_un/\ell$.
By definition, $0 \leq a_u \leq 1$ and since $S$ has size $n/2$ we have that $\sum_u a_u=\ell/2$.
We also have that
$$e(S)=\sum_{(u,v)\in E(\Gamma)} a_u a_v \cdot (n/\ell)^2>\frac{1}{16}n^2+4\ell^{3/2}(n/\ell)^2,$$
and therefore
$$\sum_{(u,v)\in E(\Gamma)} a_u a_v >\ell^2/16+4\ell^{3/2}=\frac{1}{4}\Big(\sum_u a_u\Big)^2 +4\ell^{3/2}.$$
Consider a random subset $Y$ of $\Gamma$ obtained by choosing every
vertex $u$ randomly and independently with probability $a_u$. Since
all choices were independent we have that
$$\mathbb{E}\big[|Y|^2\big]= \sum_u a_u +\sum_{u\not =v}a_ua_v \leq
\big(\sum_u a_u\big)^2+\ell/2.$$
We also have that the
expected number of edges spanned by $Y$ is
$\mathbb{E}\big[e(Y)\big]=\sum_{(u,v)\in E(\Gamma)} a_u a_v$. Then, by the above discussion,
$\mathbb{E}\big[e(Y)-|Y|^2/4\big] >3\ell^{3/2}$. In
particular, there is a subset $Y$ of $\Gamma$ with this property,
which contradicts
(\ref{random}). This shows that every subset $S$ of $n/2$
vertices of $G$ satisfies
$$\Big|e(S)-\frac{1}{16}n^2\Big| \leq 2^{-k/4+2}n^2$$
and completes the proof.
\end{proof}
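The blow-up identity $e(S)=\sum_{(u,v)\in E(\Gamma)}a_ua_v\,(n/\ell)^2$ used in the proof is easy to test on a toy example; the choice $\Gamma=C_5$, the part size, and the intersection counts below are ours:

```python
from itertools import product

def blowup(gamma_edges, m):
    # replace each vertex u of Gamma by an independent set {(u,0),...,(u,m-1)}
    # and each edge (u,v) by a complete bipartite graph between the two sets
    return {((u, x), (v, y))
            for (u, v) in gamma_edges
            for x, y in product(range(m), range(m))}

gamma_edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}   # Gamma = C_5
m = 3
edges = blowup(gamma_edges, m)
assert len(edges) == len(gamma_edges) * m * m            # 45 blown-up edges

counts = [3, 2, 1, 0, 3]             # size of S inside each independent set
S = {(u, x) for u in range(5) for x in range(counts[u])}
e_S = sum(1 for (a, b) in edges if a in S and b in S)
# edges inside S come only from Gamma-edges, counts[u]*counts[v] apiece
assert e_S == sum(counts[u] * counts[v] for (u, v) in gamma_edges)
```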
\section{Induced Ramsey Numbers and Pseudorandom Graphs} \label{moreoninduced}
The main result in this section is Theorem \ref{quasirandominduced},
which shows that any sufficiently pseudo-random graph of appropriate
density has strong induced Ramsey properties. It generalizes Theorem
\ref{quasicor1} and Corollary \ref{payley} from the introduction.
Combined with known examples of pseudo-random graphs, this theorem
gives various explicit constructions which match and improve the
best known estimates for induced Ramsey numbers.
The idea of the proof of Theorem \ref{quasirandominduced} is rather
simple. We have a sufficiently large, pseudo-random graph $G$ that
is not too sparse or dense. We also have $d$-degenerate graphs $H_1$
and $H_2$ each with vertex set $[k]$ and chromatic number at most
$q$. We suppose for contradiction that
there is a red-blue edge-coloring of $G$ without an induced red
copy of $H_1$ and without an induced blue copy
of $H_2$. We may view the red-blue coloring of $G$ as a
red-blue-green edge-coloring of the complete graph $K_{|G|}$, in
which the edges of $G$ have their original color, and the edges of
the complement $\bar G$ are colored green. The fact that in $G$
there is no induced red copy of $H_1$ means that the red-blue-green
coloring of $K_{|G|}$ does not contain a particular red-green
coloring of the complete graph $K_k$. Then we prove, similarly to
Lemma \ref{lemmaerdoshajnal} of Erd\H{o}s and Hajnal, that any
large subset of vertices of $G$ contains two large disjoint subsets
for which the edge density in color red between them is small. By
using the key lemma from Section 3, we find $k$ large disjoint
vertex subsets $V_1,\ldots,V_k$ of $G$ for which the edge density in
color red is small between any pair $(V_i,V_j)$ for which $(i,j)$ is an
edge of $H_2$.
Next we try to find an induced blue copy of $H_2$ with vertex $i$ in
$V_i$ for all $i \in [k]$. Since the edge density between $V_i$ and
$V_j$ in color red is sufficiently small for every edge $(i,j)$ of
$H_2$, we can build an induced blue copy of $H_2$ one vertex at a
time. At each step of this process we use pseudo-randomness of $G$
to make sure that the existing possible subsets for not yet embedded
vertices of $H_2$ are sufficiently large and that the density of red
edges does not increase a lot between any pair of subsets
corresponding to adjacent vertices of $H_2$. This last part of the
proof, embedding an induced blue copy of $H_2$, is the most
technically involved and handled by Lemma \ref{densitylemma}.
Recall that $[i]=\{1, \ldots, i\}$ and that a graph is $d$-degenerate if every subgraph has a
vertex of degree at most $d$. For an edge-coloring $\Psi:E(K_{k})\rightarrow [r]$, we say that
another edge-coloring $\Phi:E(K_{n})\rightarrow [s]$ is {\it
$\Psi$-free} if, for every subset $W$ of size $k$ of the complete graph $K_n$,
the restriction of $\Phi$ to $W$ is not isomorphic to
$\Psi$. In the following lemma, we have a coloring $\Psi$ of the
edges of the complete graph $K_k$ with colors $1$ and $2$ such that
the graph of color $2$ is $d$-degenerate. We also have a $\Psi$-free coloring
$\Phi$ of the edges of the complete graph $K_n$ such that between any two large subsets of vertices
there are sufficiently many edges of color 1. With these
assumptions, we show that there are two large subsets of $K_n$
which in coloring $\Phi$ have few edges of color $2$ between them. A graph $G$ is {\it
bi-$(\epsilon,\delta)$-dense} if $d(A,B)>\epsilon$ holds for all
disjoint subsets $A,B \subset V(G)$ with $|A|,|B| \geq \delta
|V(G)|$.
\begin{lemma}\label{lemmaerdoshajnal2}
Let $d$ and $k$ be positive integers and $\Psi:E(K_k) \rightarrow
[2]$ be a $2$-coloring of the edges of $K_k$ such that the graph of
color $2$ is $d$-degenerate. Suppose that $q,\epsilon \in (0,1)$ and
$\Phi:E(K_n) \rightarrow [s]$ is a $\Psi$-free edge-coloring such
that the graph of color $1$ is
bi-$(q,\epsilon^dq^{k}k^{-2})$-dense. Then there are disjoint
subsets $A$ and $B$ of $K_n$ with $|A|,|B| \geq
\epsilon^{d}q^{k} k^{-2}n$ such that every vertex of $A$ is connected to at most
$\epsilon|B|$ vertices in $B$ by edges of color 2.
\end{lemma}
\begin{proof}
Note that, by definition, the vertices of every $d$-degenerate
graph can be labeled $1,2, \ldots$ such that for every vertex $\ell$
the number of vertices $j<\ell$ adjacent to it is at most $d$.
(Indeed, remove from the graph a vertex of minimum degree, place it
at the end of the list and repeat this process in the remaining
subgraph.) Therefore we may assume that the labeling $1, \ldots, k$
of vertices of $K_k$ has the property that for every $\ell \in [k]$
there are at most $d$ vertices $j <\ell$ such that the color
$\Psi(j,\ell)=2$. Partition the vertices of $K_n$ into sets $V_1
\cup \ldots \cup V_{k}$ each of size $\frac{n}{k}$. For $w \in V_i$
and a subset $S \subset V_j$ with $j \not = i$, let $N(w,S)=\{s \in
S ~|~ \Phi(w,s)=\Psi(i,j)\}.$ For $i<\ell $, let $D(\ell,i)$ denote
the number of vertices $j \leq i$ such that the color
$\Psi(j,\ell)=2$. By the above assumption, $D(\ell,i) \leq d$ for $1
\leq i < \ell \leq k$.
We will try iteratively to build a copy of $K_{k}$ with coloring
$\Psi$. After $i$ steps, we either find two disjoint subsets of vertices $A, B$ which satisfy the assertion of the lemma or we
will have vertices $v_1,\ldots,v_{i}$ and
subsets $V_{i+1,i},V_{i+2,i},\ldots,V_{k,i}$ such that
\begin{enumerate}
\item $V_{\ell,i}$ is a subset of $V_\ell$ of size $|V_{\ell,i}| \geq \epsilon^{D(\ell,i)}q^{i-D(\ell,i)}|V_{\ell}|$ for all $i+1 \leq
\ell \leq k$,
\item $\Phi(v_j,v_{\ell})=\Psi(j,\ell)$ for $1 \leq j<\ell \leq i$,
\item and if $j \leq i< \ell$ and $w \in V_{\ell,i}$, then
$\Phi(v_j,w)=\Psi(j,\ell)$.
\end{enumerate}
In the first step, we call a vertex $w\in V_1$ {\it good} if
$|N(w,V_j)| \geq \epsilon|V_j|$ for all $j>1$ with $\Psi(1,j) =2$
and $|N(w,V_j)| \geq q|V_j|$ for all $j>1$ with $\Psi(1,j)=1$. If
there is no good vertex in $V_1$, then there is a subset $A \subset
V_1$ with $|A|\geq \frac{1}{k-1}|V_1|$ and index $j >1$ such that
either $\Psi(1,j)=1$ and every vertex $w \in A$ has fewer than
$q|V_j|$ edges of color $1$ to $V_j$ or $\Psi(1,j)=2$ and
every vertex $w \in A$ is connected to fewer than $\epsilon|V_j|$
vertices in $V_j$ by edges of color 2. Letting $B=V_j$, we conclude
that the first case is impossible since the graph of color $1$ is
bi-$(q,\epsilon^dq^{k}k^{-2})$-dense, while in the second case we
would be done, since $A$ and $B$ would satisfy the assertion of the
lemma. Therefore, we may assume that there is a good vertex $v_1 \in
V_1$, and we define $V_{j,1}=N(v_1,V_j)$ for $j>1$.
Suppose that after step $i$ the properties 1-3 are still satisfied.
Then, in step $i+1$, a vertex $w \in V_{i+1,i}$ is called {\it
good} if $|N(w,V_{j,i})| \geq \epsilon|V_{j,i}|$ for each $j> i+1$
with $\Psi(i+1,j) =2$ and $|N(w,V_{j,i})| \geq q|V_{j,i}|$ for each
$j >i+1$ with $\Psi(i+1,j) =1$. If there is no good vertex in
$V_{i+1,i}$, then there is a subset $A \subset V_{i+1,i}$ with
$|A|\geq \frac{1}{k-i-1}|V_{i+1,i}|$ and $j >i+1$ such that either
$\Psi(i+1,j)=1$ and every vertex $w \in A$ has fewer than
$q|V_{j,i}|$ edges of color $1$ to $V_{j,i}$ or
$\Psi(i+1,j)=2$ and every vertex $w \in A$ is connected to fewer than
$\epsilon|V_{j,i}|$ vertices in $V_{j,i}$ by edges of color
2. Note that even in the last step when $i+1=k$ the size of $A$ is
still at least $|V_{k,k-1}|/k\geq \epsilon^dq^k|V_k|/k\geq
\epsilon^dq^kk^{-2}n$. Therefore, letting $B=V_{j,i}$, we conclude
that as before the first case is impossible since the graph of color
$1$ is bi-$(q,\epsilon^dq^{k}k^{-2})$-dense, while the second case
would complete the proof, since $A$ and $B$ would satisfy the
assertion of the lemma. Hence, we may assume that there is a good
vertex $v_{i+1} \in V_{i+1,i}$, and we define
$V_{j,i+1}=N(v_{i+1},V_{j,i})$ for $j >i+1$.
Note that $|V_{j,i+1}| \geq q|V_{j,i}|$ if $\Psi(i+1,j) =1$
and $|V_{j,i+1}| \geq \epsilon |V_{j,i}|$ if $\Psi(i+1,j) =2$.
This implies that after step $i+1$ we have
that $|V_{\ell,i+1}| \geq
\epsilon^{D(\ell,i+1)}q^{i+1-D(\ell,i+1)}|V_{\ell}|$ for all $i+2
\leq \ell \leq k$.
The iterative process must stop at one of the steps $j \leq k-1$,
since otherwise the coloring $\Phi$ would not be $\Psi$-free.
As we already explained above, when this happens we have two disjoint subsets $A$ and $B$
that satisfy the assertion of the lemma.
\end{proof}
Notice that if a coloring $\Phi:E(K_n) \rightarrow [s]$ is $\Psi$-free,
then so is $\Phi$ restricted to any subset of $K_n$ of size $\alpha n$. Therefore,
Lemma \ref{lemmaerdoshajnal2} has the following corollary.
\begin{corollary}\label{last}
Let $d$ and $k$ be positive integers and $\Psi:E(K_k) \rightarrow
[2]$ be a $2$-coloring of the edges of $K_k$ such that the graph of
color $2$ is $d$-degenerate. If $q,\alpha,\epsilon \in (0,1)$ and
$\Phi:E(K_n) \rightarrow [s]$ is a $\Psi$-free edge-coloring such
that the graph of color $1$ is bi-$(q,\alpha\rho)$-dense with
$\rho=\epsilon^{d}q^{k} k^{-2}$, then the graph of color $2$ is
$(\alpha,\rho,\epsilon,2)$-sparse.
\end{corollary}
\noindent
The next statement follows immediately from
Corollary \ref{last} (with $\epsilon/4$ instead of $\epsilon $) and Corollary \ref{secondcor}.
\begin{corollary}\label{last2}
Let $d$, $k$, and $h$ be positive integers and $\Psi:E(K_k)
\rightarrow [2]$ be a $2$-coloring of the edges of $K_k$ such that
the graph of color $2$ is $d$-degenerate. Suppose that
$q,\alpha,\epsilon \in (0,1)$ and $\Phi:E(K_n) \rightarrow [s]$ is
a $\Psi$-free edge-coloring such that the graph of color $1$ is
bi-$(q,\alpha\rho)$-dense with
$\rho=(\epsilon/4)^dq^{k}k^{-2}$. Then the graph of color $2$ is
$((\frac{2}{\rho})^{h-1}\alpha,2^{1-h}\rho^h,\epsilon,2^h)$-sparse.
\end{corollary}
\noindent
Pending one additional lemma, we are now ready to prove the main result of this
section, showing that pseudo-random graphs have strong induced
Ramsey properties.
\begin{theorem}\label{quasirandominduced}
Let $\chi \geq 2$ and $G$ be a $(p,\lambda)$-pseudo-random graph with
$0 <p \leq 3/4$ and $\lambda \leq ((\frac{p}{10k})^d2^{-pk})^{20\log
\chi}n$. Then every $d$-degenerate graph on $k$ vertices with chromatic
number at most $\chi$ occurs as an induced monochromatic copy in every
$2$-coloring of the edges of $G$. Moreover, all of these induced
monochromatic copies can be found in the same color.
\end{theorem}
Taking $p=1/k$, $n=k^{cd\log \chi}$ and constant
$c$ sufficiently large so that $((\frac{p}{10k})^d2^{-pk})^{20\log \chi}>n^{-0.1}$
one can easily see that this result implies Theorem \ref{quasicor1}.
To obtain Corollary \ref{payley}, recall that for a prime power $n$, the Paley graph $P_n$ has vertex
set $\mathbb{F}_n$ and distinct vertices $x,y \in \mathbb{F}_n$ are
adjacent if $x-y$ is a square. This graph is $(1/2,\lambda)$-pseudo-random with $\lambda=\sqrt{n}$
(see e.g., \cite{KrSu}).
Therefore, for sufficiently large constant $c$, the above theorem with $n=2^{ck\log^2 k}$, $p=1/2$ and $d=\chi=k$
implies that every graph on $k$ vertices occurs as an
induced monochromatic copy in all $2$-edge-colorings of the Paley graph.
Similarly, one can prove that there is a constant $c$ such that, with high probability,
the random graph $G(n,1/2)$ with $n \geq 2^{ck\log^2 k}$ satisfies that every graph
on $k$ vertices occurs as an induced monochromatic copy in all
$2$-edge-colorings of $G$.
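For a concrete feel for the Paley construction, one can verify that $P_{13}$ is $\frac{q-1}{2}$-regular and strongly regular with parameters $\lambda=\frac{q-5}{4}$ and $\mu=\frac{q-1}{4}$ (standard facts about Paley graphs, which underlie their pseudo-randomness; the code is only an illustration):

```python
q = 13                                # prime with q = 1 (mod 4)
squares = {(x * x) % q for x in range(1, q)}

# x ~ y iff x - y is a nonzero square; symmetric since -1 is a square mod 13
adj = {x: {y for y in range(q) if y != x and (x - y) % q in squares}
       for x in range(q)}

# (q-1)/2-regularity
assert all(len(adj[x]) == (q - 1) // 2 for x in range(q))

# strongly regular: adjacent pairs share (q-5)/4 = 2 common neighbours,
# non-adjacent pairs share (q-1)/4 = 3
for x in range(q):
    for y in range(x + 1, q):
        common = len(adj[x] & adj[y])
        assert common == (2 if y in adj[x] else 3)
```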
\vspace{0.1cm} \noindent {\bf Proof of Theorem
\ref{quasirandominduced}.}\, Suppose for contradiction that there is
an edge-coloring $\Phi_0$ of $G$ with colors red and blue, and
$d$-degenerate graphs $H_1$ and $H_2$ each having $k$ vertices and
chromatic number at most $\chi$ such that there is no induced red
copy of $H_1$ and no induced blue copy of $H_2$. Since $H_1, H_2$
are $d$-degenerate graphs on $k$ vertices we may suppose that their
vertex set is $[k]$ and every vertex $i$ has at most $d$ neighbors
less than $i$ in both $H_1$ and $H_2$.
Consider the red-blue-green edge-coloring $\Phi$ of the complete graph $K_n$, in which the
edges of $G$ have their original coloring $\Phi_0$,
and the edges of the complement $\bar G$ are colored green.
Let $\Psi$ be the edge-coloring of the complete graph $K_k$ where the red edges form a
copy of $H_1$ and the remaining edges are green. By assumption, the
coloring $\Phi$ is $\Psi$-free.
Since $G$ is $(p,\lambda)$-pseudo-random, we have that the density of edges
in $\bar G$ between any two disjoint sets $A, B$ of size at least $6p^{-1} \lambda$ is at least
$$d_{\bar G}(A,B)=1-d_G(A,B)\geq 1-\Big(p+\frac{\lambda}{\sqrt{|A||B|}}\Big) \geq 1-\frac{7}{6}p.$$
Therefore the green graph in coloring $\Phi$ is
bi-$(q,6p^{-1}\frac{\lambda}{n})$-dense for $q=1-7p/6$.
Let $\epsilon= \frac{p}{1000k^6}$,
$\rho=(\epsilon/4)^dq^{k}k^{-2}$, $h=\log \chi $, and
$\alpha=(\rho/2)^{h-1} $. Using that
$q=1-7p/6$ and $\lambda/n \leq ((\frac{p}{10k})^d2^{-pk})^{20\log \chi}$ it is straightforward to check that
$6p^{-1}\frac{\lambda}{n} \leq
2^{1-h}\rho^{h}=\alpha \rho$. By Corollary \ref{last2} and Definition \ref{d31}, there are
$2^h=\chi$ subsets $W_1,\ldots,W_{\chi}$ of $K_n$ with $|W_1|=\ldots=|W_\chi| \geq
2^{1-h}\rho^{h}n$, such that the sum of densities of red edges
between all pairs $W_i$ and $W_j$ is at most ${\chi \choose 2}\epsilon$.
Hence, the density of red edges between $W_i$
and $W_j$ is also at most $\chi^2\epsilon$ for all $1 \leq i < j \leq \chi$.
Partition every set $W_i$ into $k$ subsets
each of size $|W_i|/k \geq \frac{1}{k}2^{1-h}\rho^{h}n$.
Since the chromatic number of $H_2$ is at most $\chi$ and it has $k$ vertices, we can
choose for every vertex $i$ of $H_2$ one of these subsets, which we call $V_i$, such that all subsets
corresponding to vertices of $H_2$ in the same color class (of a proper $\chi$-coloring)
come from the same set $W_\ell$.
In particular, for every edge $(i,j)$ of $H_2$, the corresponding sets
$V_i$ and $V_j$ lie in two different sets $W_\ell$. Since each $V_i$ is
smaller than the sets $W_\ell$ by a factor of $k$, the density of red edges between
$V_i$ and $V_j$ corresponding to an edge of $H_2$ is at most $k^2\chi^2\epsilon \leq
\frac{p}{1000k^2}$ (the density can increase by a factor of at most $k^2$
compared to the density between the sets $W_\ell$).
Notice that the subgraph $G' \subset G$ induced by $V_1 \cup \ldots \cup
V_{k}$ has $n' \geq 2^{1-h}\rho^{h}n$ vertices and is also $(p,\lambda)$-pseudo-random.
By the definitions of $\rho$ and $h$, and our assumption on $\lambda$, we have that
$$\lambda/n'\leq 2^{h-1}\rho^{-h}\lambda/n \leq 2^{h-1}\rho^{-h} \left(\Big(\frac{p}{10k}\Big)^d2^{-pk}\right)^{20\log \chi}
\leq \left(\Big(\frac{p}{10k}\Big)^d2^{-pk}\right)^{10\log \chi}.$$
Applying Lemma \ref{densitylemma} below with $H=H_2$ to the coloring
$\Phi_0$ of graph $G'$ with partition $V_1 \cup \ldots \cup V_{k}$, we find an induced blue copy of $H_2$, completing
the proof. \hfill $\Box$
\begin{lemma}\label{densitylemma}
Let $H$ be a $d$-degenerate graph with vertex set $[k]$ such that
each vertex $i$ has at most $d$ neighbors less than $i$. Let
$G=(V,E)$ be a $(p,\lambda)$-pseudo-random graph on $n$ vertices
with $0<p \leq 3/4$, $\lambda \leq ((\frac{p}{10k})^d2^{-pk})^{10}n$
and let $V=V_1 \cup \ldots \cup V_k$ be a partition of its vertices
such that each $V_i$ has size $n/k$. Suppose that the edges of $G$
are $2$-colored, red and blue, such that for every edge $(j,\ell)$
of $H$, the density of red edges between the pair $(V_j,V_{\ell})$
is at most $\beta = \frac{p}{1000k^2}$. Then there is an induced
blue copy of $H$ in $G$ for which the image of every vertex $i \in
[k]$ lies in $V_i$.
\end{lemma}
\begin{proof}
For $i<j$, let $D(i,j)$ denote the number of neighbors of $j$ in $H$ that are at most $i$. Let $\epsilon_1=\frac{1}{k}$,
$\epsilon_2=\frac{p}{10k}$, and $\delta=(1-p)^kp^d$. Since $p \leq 3/4$, notice that $\delta
\geq 2^{-3pk}p^d$ and
\begin{eqnarray}
\label{eq2}
\lambda \leq \left(\Big(\frac{p}{10k}\Big)^d2^{-pk}\right)^{10}n \leq \frac{p^8}{(10k)^{10}}\delta^2n.
\end{eqnarray}
We construct an induced blue
copy of $H$ one vertex at a time. At the end of step $i$, we will have vertices
$v_1,\ldots,v_i$ and subsets $V_{j,i} \subset V_j$ for $j >i $ such that the following four conditions hold
\begin{enumerate}
\item for $j, \ell \leq i$, if $(j,{\ell})$ is an
edge of $H$, then $(v_j,v_{\ell})$ is a blue edge of $G$, otherwise
$v_j$ and $v_{\ell}$ are not adjacent in $G$,
\item for $ j \leq i < \ell$, if $(j,{\ell})$ is an
edge of $H$, then $v_j$ is adjacent to all vertices in $V_{\ell,i}$
by blue edges, otherwise there are no edges of $G$ from $v_j$ to
$V_{\ell,i}$,
\item for $i < j$, we have $|V_{j,i}| \geq (1-p-\epsilon_2)^{i-D(i,j)}(p-\epsilon_2)^{D(i,j)}|V_j|$,
\item and for $j, \ell>i$ if $(j,\ell)$ is an edge of $H$, then the density of red edges between $V_{j,i}$ and $V_{\ell,i}$ is
at most $(1+\epsilon_1)^i \beta$.
\end{enumerate}
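The vertex-by-vertex embedding strategy above can be illustrated in code. The following sketch is our own illustration, not the proof's procedure: it greedily embeds an induced blue path $P_k$ (a $1$-degenerate graph) into a randomly $2$-colored $G(n,p)$, enforcing only condition~1 of the invariant, and all parameter values and the retry loop over random seeds are arbitrary choices.

```python
import random

def try_embed(n, k, p, beta, seed):
    """Greedy search for an induced blue copy of the path P_k inside a
    randomly 2-colored G(n, p), choosing the image of vertex j from the
    j-th candidate class (condition 1 of the invariant above)."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    blue = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u][v] = adj[v][u] = True
                if rng.random() > beta:      # most edges are colored blue
                    blue[u][v] = blue[v][u] = True
    classes = [range(i * (n // k), (i + 1) * (n // k)) for i in range(k)]
    emb = []
    for j in range(k):
        for w in classes[j]:
            # blue edge to the previous path vertex, no edge to other images
            if all(blue[w][emb[i]] if i == j - 1 else not adj[w][emb[i]]
                   for i in range(j)):
                emb.append(w)
                break
        else:
            return None                      # no good vertex in this class
    return emb

emb = next(e for e in (try_embed(400, 5, 0.5, 0.01, s) for s in range(50)) if e)
print("induced blue path:", emb)
```

With these parameters a single attempt already succeeds with high probability; the loop over seeds only guards against an unlucky sample.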
Clearly, at the end of the first $k$ steps of this process we obtain the
required copy of $H$. For $i=0$ and $j \in [k]$, define
$V_{j,0}=V_j$. Notice that the above four properties are satisfied
for $i=0$ (the first two properties being vacuously satisfied). We
now assume that the above four properties are satisfied at the end
of step $i$, and show how to complete step $i+1$ by finding a vertex
$v_{i+1} \in V_{i+1,i}$ and subsets $V_{j,i+1} \subset V_{j,i}$ for $j>i+1$ such
that the conditions 1-4 still hold.
We need to introduce some notation. For a vertex $w \in V_j$ and a
subset $S \subset V_{\ell}$ with $j \not =\ell$, let
\begin{itemize}
\item $N(w,S)$ denote the set of vertices $s \in S$ such that $(s,w)$ is an edge of
$G$,
\item $B(w,S)$ denote the set of vertices $s \in S$ such that $(s,w)$ is a
blue edge of $G$,
\item $R(w,S)$ denote the set of vertices $s \in S$ such that $(s,w)$ is a
red edge of $G$,
\item $\tilde{N}(w,S)=N(w,S)$ if $(j, \ell)$ is an edge of $H$ and $\tilde{N}(w,S)=S
\setminus N(w,S)$ otherwise,
\item $\tilde{B}(w,S)=B(w,S)$ if $(j, \ell)$ is an edge of $H$ and $\tilde{B}(w,S):=S
\setminus N(w,S)$ otherwise, and
\item
$p_{j,\ell}=p$ if $(j,\ell)$ is an edge of $H$ and
$p_{j,\ell}=1-p$ if $(j,\ell)$ is not an edge of $H$.
\end{itemize}
Note that since the graph $G$ is pseudo-random with edge density $p$, by the above definitions,
for every large subset $S \subset V_{\ell}$ and for most vertices $w \in V_j$ we expect
the size of $\tilde{N}(w,S)$ to be roughly $p_{j,\ell}|S|$. We also have for
all $S \subset V_{\ell}$ and $w \in V_j$ that
$\tilde{B}(w,S)=\tilde{N}(w,S) \setminus R(w,S)$.
Call a vertex $w \in V_{i+1,i}$ {\it good} if for all $j >i+1$,
$|\tilde{B}(w,V_{j,i})|\geq (p_{i+1,j}-\epsilon_2)|V_{j,i}|$ and
for every edge $(j,\ell)$ of $H$ with $j, \ell>i+1$,
the density of red edges between $\tilde{B}(w,V_{j,i})$ and $\tilde{B}(w,V_{\ell,i})$ is at most
$(1+\epsilon_1)^{i+1}\beta$. If we find a good vertex $w \in V_{i+1,i}$, then we simply let
$v_{i+1}=w$ and $V_{j,i+1}=\tilde{B}(w,V_{j,i})$ for $j > i+1$,
completing step $i+1$. It therefore suffices to show that there is a
good vertex in $V_{i+1,i}$.
We first throw out some vertices of $V_{i+1,i}$ ensuring that the
remaining vertices satisfy the first of the two properties of good
vertices. For $j >i+1$ and an edge $(i+1,j)$ of $H$, let
$R_{j}$ consist of those $w \in V_{i+1,i}$ for which the
number of red edges $(w,w_j)$ with $w_j \in V_{j,i}$ is at least
\frac{\epsilon_2}{2}|V_{j,i}|$. Since the density of red edges between
$V_{i+1,i}$ and $V_{j,i}$ is at most $(1+\epsilon_1)^i\beta$,
$$|R_{j}| \leq \frac{(1+\epsilon_1)^i\beta|V_{i+1,i}||V_{j,i}|}{\frac{\epsilon_2}{2}|V_{j,i}|}=2(1+\epsilon_1)^i\epsilon_2^{-1}\beta|V_{i+1,i}|.$$
Let $V'$ be the set of vertices in $V_{i+1,i}$ that are not in
any of the $R_j$. Using that $\epsilon_1=1/k, \epsilon_2=\frac{p}{10k}$ and $
\beta=\frac{p}{1000k^2}$ we obtain
\begin{eqnarray*}
|V'| &\geq& |V_{i+1,i}|-\sum_{j>i+1}|R_j| \geq |V_{i+1,i}|-k\Big(2(1+\epsilon_1)^i\epsilon_2^{-1}\beta|V_{i+1,i}|\Big)\\
&\geq&
\Big(1-2k(1+\epsilon_1)^k\epsilon_2^{-1}\beta\Big)|V_{i+1,i}|
\geq \frac{1}{2}|V_{i+1,i}|.
\end{eqnarray*}
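The last inequality in the display above can be checked numerically. In the sketch below (our own check, not part of the proof), note that $\beta/\epsilon_2 = 1/(100k)$, so the discarded fraction $2k(1+\epsilon_1)^k\epsilon_2^{-1}\beta$ equals $(1+1/k)^k/50 \leq e/50$, independently of $p$.

```python
# The fraction of V_{i+1,i} discarded above is at most
# 2k(1 + eps1)^k * beta / eps2 with eps1 = 1/k, eps2 = p/(10k),
# beta = p/(1000 k^2); since beta/eps2 = 1/(100k), this is
# (1 + 1/k)^k / 50 <= e/50 < 1/2 for every k and every p.
worst = max(2 * k * (1 + 1 / k) ** k / (100 * k) for k in range(1, 5001))
print("largest discarded fraction over k <= 5000:", worst)
```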
For $j >i+1$, let $S_j$ consist of those $w \in V'$
for which $|\tilde{N}(w,V_{j,i})| < (p_{i+1,j}-\frac{\epsilon_2}{2})|V_{j,i}|$.
Then the density of edges of $G$ between
$S_j$ and $V_{j,i}$ deviates from $p$ by at least $\frac{\epsilon_2}{2}$. Since
graph $G$ is $(p,\lambda)$-pseudo-random, we obtain that $\frac{\epsilon_2}{2} \leq
\frac{\lambda}{\sqrt{|V_{j,i}||S_j|}}$ and hence $|S_j| \leq \frac{4\lambda^2}{\epsilon_2^2|V_{j,i}|}$.
Also using that $p \leq 3/4$ we have
$1-p-\epsilon_2=1-p-\frac{p}{10k} \geq (1-\frac{1}{3k})(1-p)$.
Therefore, our third condition, combined with $\delta=(1-p)^kp^d$ and the inequality $(1-x)^t \geq 1-xt$ for all $0 \leq x \leq 1$,
implies that for $j \geq i+1$
\begin{eqnarray}
\label{eq1}
|V_{j,i}| &\geq& (1-p-\epsilon_2)^{i-D(i,j)}(p-\epsilon_2)^{D(i,j)}|V_j| \geq
(1-p-\epsilon_2)^k(p-\epsilon_2)^d|V_j| \nonumber\\
&\geq& \left(\Big(1-\frac{1}{3k}\Big)(1-p)\right)^k
\left(p-\frac{p}{10k}\right)^d|V_j| \nonumber \\
&\geq& \left(1-\frac{1}{3k}\right)^k\left(1-\frac{1}{10k}\right)^k(1-p)^kp^d|V_j| \nonumber\\
&\geq&
\frac{1}{2}(1-p)^kp^d|V_j|= \frac{\delta n}{2k}.
\end{eqnarray}
Since $\lambda \leq \frac{p\delta}{100k^3}n$ (see (\ref{eq2}))
and $\epsilon_2=\frac{p}{10k}$, we therefore have $|S_j| \leq
\frac{4\lambda^2}{\epsilon_2^2|V_{j,i}|} \leq
\frac{1}{4k}|V_{i+1,i}|$. Let $V''$ be the set of vertices in $V'$
that are not in any of the sets $S_j$. The cardinality of $V''$ is at least
$$|V''|\geq |V'|-\sum_{j>i+1}|S_j| \geq |V'|-k\cdot\Big(\frac{1}{4k}|V_{i+1,i}|\Big) \geq
|V'|-\frac{1}{4}|V_{i+1,i}| \geq \frac{1}{4}|V_{i+1,i}|.$$
Moreover, by definition, for every $j>i+1$ and every vertex
$w \in V''$ there are at most $\frac{\epsilon_2}{2}|V_{j,i}|$ red edges from
$w$ to $V_{j,i}$ if $(i+1,j)$ is an edge of $H$, and $|\tilde{N}(w,V_{j,i})| \geq (p_{i+1,j}-\frac{\epsilon_2}{2})|V_{j,i}|$.
This implies that
$$|\tilde{B}(w,V_{j,i})|= |\tilde{N}(w,V_{j,i}) \setminus R(w,V_{j,i})| \geq |\tilde{N}(w,V_{j,i})|-\frac{\epsilon_2}{2}|V_{j,i}| \geq
(p_{i+1,j}-\epsilon_2)|V_{j,i}|$$
and therefore the vertices of $V''$ satisfy the first
of the two properties of good vertices.
We have reduced our goal to showing that there is an element of
$V''$ that has the second property of good vertices. For $i+1<j<\ell
\leq k$ and $(j,\ell)$ an edge of $H$, let $T_{j,\ell}$ denote the
set of $w \in V''$ such that the density of red edges between
$\tilde{B}(w,V_{j,i})$ and $\tilde{B}(w,V_{\ell,i})$ is more than
$(1+\epsilon_1)^{i+1}\beta$. Notice that any vertex of $V''$ not in
any of the sets $T_{j,\ell}$ is good. Therefore, if we show that
$|T_{j,\ell}| <\frac{|V''|}{k^2}$ for each $T_{j,\ell}$, then there is
a good vertex in $V''$ and the proof is complete. To do so we
will assume without loss of generality that $p_{i+1,j}$ and
$p_{i+1,\ell}$ are both $p$ (the other three cases can be treated
similarly using the fact that $\bar G$ is
$(1-p,\lambda)$-pseudo-random). Since by (\ref{eq1}) we have that
$|V_{\ell,i}|, |V_{j,i}| \geq \frac{\delta n}{2k}$ and
$\frac{|V''|}{k^2} \geq \frac{1}{4k^2}|V_{i+1,i}| \geq \frac{\delta
n}{8k^3}$, the result follows from the following claim.
\begin{claim}
Let $X,Y$ and $Z$ be three disjoint vertex subsets of our
$(p,\lambda)$-pseudo-random graph $G$ such that $|X| \geq
\frac{\delta n}{8k^3}$ and $|Y|,|Z| \geq \frac{\delta n}{2k}$. For
every $w \in X$ let $B_1(w), B_2(w)$ be the set of vertices in $Y$
and $Z$ respectively connected to $w$ by a blue edge and suppose
that $|B_1(w)|\geq (p-\frac{p}{10k})|Y|$ and $|B_2(w)| \geq
(p-\frac{p}{10k})|Z|$. Also suppose that the density of red edges
between $Y$ and $Z$ is at most $\eta$ for some $\eta \geq
\frac{p}{1000k^2}$. Then there is a vertex $w\in X$ such that the
density of red edges between $B_1(w)$ and $B_2(w)$ is at most
$\frac{k+1}{k}\eta$.
\end{claim}
\noindent {\bf Proof.} Let $m$ denote the number of triangles
$(x,y,z)$ with $x \in X, y \in Y, z \in Z$, such that the edge
$(y,z)$ is red. We need an upper bound on $m$. Let $U$ be the set of
vertices in $Y$ that have fewer than $p^3\delta^3(10k)^{-10}n$ red
edges to $Z$. Then the number $m_1$ of triangles $(x,y,z)$ which have
$y\in U$ and edge $(y,z)$ red clearly satisfies $m_1\leq
p^3\delta^3(10k)^{-10}n^3$. Let $W_1, W_2$ denote the subsets of
vertices in $Y$ whose number of neighbors in $X$ is at least
$(p+\frac{p}{20k})|X|$ or respectively at most
$(p-\frac{p}{20k})|X|$. Since the density of edges between $W_i$ and
$X$ deviates from $p$ by more than $\frac{p}{20k}$, using
$(p,\lambda)$-pseudo-randomness of $G$, we have $\frac{p}{20k} \leq
\frac{\lambda}{\sqrt{|X||W_i|}}$, or equivalently, $|X||W_i| \leq
400k^2p^{-2}\lambda^2.$ Therefore, using the upper bound $\lambda
\leq \frac{p^8}{(10k)^{10}}\delta^2n$ from (\ref{eq2}), the number
$m_2$ of triangles $(x,y,z)$ with $y \in W=W_1 \cup W_2$ and edge
$(y,z)$ red is at most
$$m_2 \leq |X||W|n \leq 800k^2p^{-2}\lambda^2n \leq (10k)^{-10}p^4\delta^4n^3.$$
For $y \in Y \setminus (U \cup W)$, the number of neighbors
of $y$ in $X$ satisfies $\big|\frac{|N(y, X)|}{|X|}-p\big|\leq
\frac{p}{20k}$ and the number of red edges from $y$ to $Z$ is at
least $p^3\delta^3(10k)^{-10}n$. Recall that $R(y,Z)$ denotes the
set of vertices in $Z$ connected to $y$ by red edges, hence we have
that $|R(y,Z)| \geq p^3\delta^3(10k)^{-10}n$ for every $y \in Y
\setminus (U \cup W)$. We also have that $|N(y, X)| \geq p|X|/2 \geq
\frac{p\delta n}{16k^3}$. Since $G$ is $(p,\lambda)$-pseudo-random,
we can bound the number of edges between $N(y, X)$ and $R(y,Z)$ by
$p|N(y, X)||R(y,Z)|+\lambda\sqrt{|N(y, X)||R(y,Z)|}$. Using the
above lower bounds on $|N(y, X)|$ and $|R(y,Z)|$, and the upper
bound (\ref{eq2}) for $\lambda$, one can easily check that
$$\frac{\lambda}{\sqrt{|N(y, X)||R(y,Z)|}}
\leq\frac{\lambda}{\sqrt{\big(p\delta n/(16k^3)\big)
\big(p^3\delta^3(10k)^{-10}n\big)}} \leq \frac{p}{20k}.$$ Hence the
number of edges between $N(y, X)$ and $R(y,Z)$ is at most
$(p+\frac{p}{20k})|N(y, X)||R(y,Z)|$. Recall that for all
$y \in Y \setminus (U \cup W)$ we have that
$|N(y, X)| \leq \big(p+\frac{p}{20k}\big)|X|$. Also, since the density
of red edges between $Y$ and $Z$ is at most $\eta$, we have that
$\sum_y|R(y,Z)| \leq \eta|Y||Z|$. Therefore, the
number $m_3$ of triangles $(x,y,z)$ with $y \in Y \setminus (U \cup
W), x\in X, z\in Z$ such that the edge $(y,z)$ is red is at most
$$m_3 \leq \Big(p+\frac{p}{20k}\Big)\sum_{y \in Y \setminus (U \cup W)}
|N(y, X)||R(y,Z)|\leq
\Big(p+\frac{p}{20k}\Big)^2|X|\sum_y|R(y,Z)| \leq
\Big(p+\frac{p}{20k}\Big)^2\eta|X||Y||Z|.$$
Using the lower bounds on $|X|, |Y|, |Z|, \eta$ from the assertion of the claim we have that
$$p^2\eta|X||Y||Z| \geq \frac{p^3\delta^3}{(10k)^7}n^3 \geq (10k)^3 \max\big(m_1, m_2\big).$$
This implies that the total number of
triangles $(x,y,z)$ with $x \in X, y \in Y, z \in Z$, such that
the edge $(y,z)$ is red is at most
\begin{eqnarray*}
m &=& m_1+m_2+m_3 \leq
2\frac{p^2\eta|X||Y||Z|}{(10k)^3}+\Big(p+\frac{p}{20k}\Big)^2\eta|X||Y||Z|\\
&\leq& \big(1+1/(8k)\big)p^2\eta|X||Y||Z|.\end{eqnarray*}
Therefore, there is a vertex $w \in X$ such that the number of these
triangles through $w$ is at most $(1+1/(8k))p^2\eta|Y||Z|$. Since $B_1(w)
\subset N(w,Y)$ and $B_2(w) \subset N(w,Z)$, the number of red
edges between $B_1(w)$ and $B_2(w)$ is at most
$(1+1/(8k))p^2\eta|Y||Z|$. Since $|B_1(w)|\geq
(p-\frac{p}{10k})|Y|$ and $|B_2(w)| \geq (p-\frac{p}{10k})|Z|$, the
density of red edges between $B_1(w)$ and $B_2(w)$ can be at most
$$\frac{(1+1/(8k))p^2\eta|Y||Z|}{|B_1(w)||B_2(w)|} \leq \frac{(1+1/(8k))p^2\eta} {(p-\frac{p}{10k})^2} \leq
\frac{k+1}{k}\eta,$$ completing the proof.
\end{proof}
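The closing estimate of the claim reduces, after cancelling $p^2$ and $\eta$, to the elementary inequality $(1+1/(8k))/(1-1/(10k))^2 \leq (k+1)/k$, which the short check below (our own, not part of the proof) verifies numerically.

```python
# After cancelling p^2 and eta, the claim's final bound reduces to
# (1 + 1/(8k)) / (1 - 1/(10k))^2 <= (k + 1)/k, checked here directly.
for k in range(1, 10 ** 5 + 1):
    lhs = (1 + 1 / (8 * k)) / (1 - 1 / (10 * k)) ** 2
    assert lhs <= 1 + 1 / k, k
print("density bound verified for all k up to 10^5")
```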
\section{Trees with superlinear induced Ramsey numbers}
\label{superlinear}
In this section we prove Theorem \ref{tree}, that there are trees whose
induced Ramsey number is superlinear in the number of vertices. The proof
uses Szemer\'edi's regularity lemma, which we mentioned in the introduction.
A red-blue edge-coloring of the edges of a graph partitions the
graph into two monochromatic subgraphs, the {\it red graph}, which
contains all vertices and all red edges, and the {\it blue graph},
which contains all vertices and all blue edges. The weak induced
Ramsey number $r_{\textrm{weak ind}}(H_1,H_2)$, introduced by Gorgol
and \L uczak \cite{GoLu}, is the least positive integer $n$ such
that there is a graph $G$ on $n$ vertices such that for every
red-blue coloring of the edges of $G$, either the
red graph contains $H_1$ as an induced subgraph or the blue graph
contains $H_2$ as an induced subgraph. Note that this definition is a relaxation of the
induced Ramsey numbers, since we allow blue edges between the vertices of the
red copy of $H_1$ and red edges between the vertices of the blue copy of $H_2$.
Therefore a weak induced Ramsey number lies between the
usual Ramsey number and the induced Ramsey number. Using this new notion we
can strengthen Theorem \ref{tree} as follows. Recall that $K_{1,k}$ denotes a star with $k$ edges.
\begin{theorem}\label{weak}
For each $\alpha \in (0,1)$, there is a constant $k(\alpha)$ such that if $H$ is a graph on
$k \geq k(\alpha)$ vertices with maximum independent set of size less than
$(1-\alpha)k$, then $r_{\textrm{weak ind}}(H,K_{1,k}) \geq
\frac{k}{\alpha}$.
\end{theorem}
Let $T$ be the tree formed by the union of a path of length $k/2$ and a
star with $k/2$ edges, where an endpoint of the path is the center
of the star. Since $T$ contains the path $P_{k/2}$ and the star
$K_{1,k/2}$ as induced subgraphs, we have $r_{\textrm{ind}}(T) \geq
r_{\textrm{weak ind}}(P_{k/2},K_{1,k/2})$. By using the above theorem with
$k/2$ instead of $k$, $H=P_{k/2}$, and sufficiently small
$\alpha$, we obtain that $r_{\textrm{ind}}(T)/k \rightarrow \infty$.
Moreover, the same holds for every sufficiently large tree which
contains a star and a matching of linear size as subgraphs. We
deduce Theorem \ref{weak} from the following lemma.
\begin{lemma}\label{twocoloring} For each $\delta>0$ there is a constant $c_{\delta}>0$ such that if $G=(V,E)$ is a graph
on $n$ vertices, then there is a $2$-coloring of the edges of $G$
with colors red and blue such that the red graph has maximum degree
less than $\delta n$ and for every subset $W \subset V$, either there are
at least $c_{\delta}n^2$ blue edges in the subgraph induced by $W$
or there is an independent set in $W$ in the blue graph of
cardinality at least $|W|-\delta n$.
\end{lemma}
\begin{proof} Let $\epsilon=\frac{\delta^2}{100}$. By Szemer\'edi's
regularity lemma, there is a positive integer $M(\epsilon)$ together with
an equitable partition $V=\bigcup_{i=1}^k V_i$ of vertices of the graph $G=(V,E)$ into $k$ parts with
$\frac{1}{\epsilon}<k<M(\epsilon)$ such that all but at most $\epsilon
k^2$ of the pairs $(V_i,V_j)$ are $\epsilon$-regular. Recall that a partition is equitable if
$\big||V_i|-|V_j|\big| \leq 1$ for all $i,j$, and
a pair $(V_i,V_j)$ is called $\epsilon$-regular if for every $X \subset V_i$ and $Y
\subset V_j$ with $|X| > \epsilon |V_i|$ and $|Y| > \epsilon |V_j|$, we
have $|d(X,Y)-d(V_i,V_j)|<\epsilon$.
Let $c_{\delta}=\epsilon^3 M(\epsilon)^{-2}$. Notice that to prove Lemma
\ref{twocoloring}, it suffices to prove it under the assumption that
$n $ is sufficiently large. So we may assume that $n \geq
\epsilon^{-1}M(\epsilon)$.
If a pair $(V_i,V_j)$ is $\epsilon$-regular with density
$d(V_i,V_j)$ at least $2\epsilon$, then color the edges between
$V_i$ and $V_j$ blue. Let $G'$ be the subgraph of $G$ formed by
deleting the edges of $G$ that are already colored blue. Let $V'$ be
the vertices of $G'$ of degree at least $\delta n$. Color blue any
edge of $G'$ with a vertex in $V'$. The remaining edges are colored
red. First notice that every vertex has red degree less than $\delta
n$.
We next show that $|V'|$ is small by showing that $G'$ has few
edges. There are at most
$$\sum_{i=1}^k {|V_i| \choose 2} \leq \frac{n^2}{k} \leq
\epsilon n^2$$ edges $(v,w)$ of $G$ with $v$ and $w$ both in the
same set $V_i$. Since at most $\epsilon k^2$ of the pairs
$(V_i,V_j)$ are not $\epsilon$-regular, then there are at most
$\epsilon n^2$ edges in such pairs. The $\epsilon$-regular pairs
$(V_i,V_j)$ with density less than $2\epsilon$ contain at most a
fraction $2\epsilon$ of all possible edges on $n$ vertices. So there
are less than $\epsilon n^2$ edges of this type. Therefore the
number of edges of $G'$ is at most $3\epsilon n^2$. Therefore
the number of vertices of degree at least $\delta n$ in $G'$ satisfies
$|V'| \leq 2\frac{e(G')}{\delta n} \leq 6\epsilon
\delta^{-1}n<\frac{\delta n}{10}$.
Let $W \subset V$. Let $W'=W \setminus V'$, so $W'$ has
cardinality at least $|W|-\frac{\delta n}{10}$. Let $W_i =V_i \cap
W'$. Let $W''=\bigcup_{|W_i| \geq \epsilon |V_i|} W_i$. Notice that
for any $i \in [k]$ there are at most $\epsilon \frac{n}{k}$
vertices in $(W' \setminus W'') \cap V_i$, so there are at most
$k(\epsilon \frac{n}{k}) =\epsilon n = \frac{\delta^2 n}{100}$
vertices in $W'\setminus W''$. Therefore, $W''$ has at least
$|W|-\delta n$ vertices. If there are $i \not =j$ such that
$|W_i|,|W_j| \geq \epsilon\frac{n}{k}$ and the
pair $(V_i,V_j)$ is $\epsilon$-regular with density at least
$2\epsilon$, then there are at least
$$\epsilon|W_i||W_j| \geq \frac{\epsilon^3}{k^2}n^2 \geq \epsilon^3 M(\epsilon)^{-2}n^2=c_{\delta}n^2$$ blue edges between $W_i$ and $W_j$.
In this case the blue subgraph induced by $W$
has at least $c_{\delta}n^2$ edges. Otherwise, all the edges in
$W''$ are red, and $W''$ is an independent set in the blue graph of
cardinality at least $|W|-\delta n$.
\end{proof}
\vspace{0.15cm} \noindent {\bf Proof of Theorem \ref{weak}.}\, Let
$H$ be a graph on $k$ vertices with maximum independent set of size
less than $(1-\alpha)k$. Take $\delta=\alpha^2$ and $c_{\delta}$ to
be as in Lemma \ref{twocoloring}. Let $G=(V,E)$ be any graph on $n$
vertices, where $n \leq \frac{k}{\alpha}$. If $H$ has at least
$c_{\delta}k^2$ edges, consider a random red-blue coloring of the
edges of $G$ such that the probability of an edge being red is
$\frac{\alpha}{2}$. The expected degree of a vertex in the red graph
is at most $\alpha n/2$. Therefore by the standard Chernoff bound
for the Binomial distribution it is easy to see that with
probability $1-o(1)$ the degree of every vertex in the red graph is
less than $\alpha n\leq k$, i.e., it contains no $K_{1,k}$. On the
other hand, for $k$ sufficiently large, the probability that the
blue graph contains a copy of $H$ is at most
$$ n^k (1-\alpha/2)^{e(H)} \leq n^k e^{-\alpha c_{\delta}k^2/2} \leq
e^{-\alpha c_{\delta}k^2/2+k\log (k/\alpha)}=o(1).$$
Thus with high probability this coloring has no blue copy of $H$ as well.
This implies that we can assume that the number of edges in $H$ is less than $c_{\delta}k^2$.
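The $o(1)$ bound in the display above can be sanity-checked numerically. In the sketch below (our own check), the values of $\alpha$ and $c_\delta$ are arbitrary illustrations, not derived from Lemma \ref{twocoloring}.

```python
import math

# With sample values alpha = 0.1 and c_delta = 0.01 (arbitrary), the
# exponent -alpha*c*k^2/2 + k*log(k/alpha) is eventually very negative,
# so n^k (1 - alpha/2)^{c_delta k^2} <= e^{exponent} = o(1) as k grows.
alpha, c = 0.1, 0.01

def exponent(k):
    return -alpha * c * k * k / 2 + k * math.log(k / alpha)

for k in (10 ** 4, 10 ** 5, 10 ** 6):
    print(k, exponent(k))
```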
By Lemma \ref{twocoloring}, there is a red-blue coloring of the
edges of $G$ such that the red graph has maximum degree less than
$\delta n$ and every subset $W \subset V$ contains either an independent
set in the blue graph of size at least $|W|-\delta n$ or at
least $c_{\delta}n^2$ blue edges. Since $\delta n=\alpha^2 n \leq \alpha k<k$, the red
graph does not contain $K_{1,k}$ as a subgraph.
Suppose for contradiction that there is an induced copy of $H$ in
the blue graph, and let $W$ be the vertex set of this copy.
The blue graph induced by $W$ has
$e(H)<c_{\delta}k^2\leq c_{\delta}n^2$ edges.
Therefore it contains an independent set of size at least $|W|-\delta n \geq
|W|-\alpha k=(1-\alpha)k$, contradicting the fact that $H$ has no
independent set of size $(1-\alpha)k$. Therefore, there are no
induced copies of $H$ in the blue graph. \hfill $\Box$
\section{Concluding Remarks}
\begin{itemize}
\item
All of the results in this paper concerning induced subgraphs can be extended to
many colors. One such multicolor result
was already proved in Section \ref{moreoninduced} (see Lemma \ref{lemmaerdoshajnal2}),
and we use here the notation from that section. For example, one can obtain the following
generalization of Theorem \ref{main}. For $k \geq 2$, let $\Psi:E(K_{k}) \rightarrow [r]$ be
an edge-coloring of the complete graph $K_{k}$ and $\Phi:E(K_n) \rightarrow [s]$ be
a $\Psi$-free edge-coloring of the complete graph $K_n$. Then there is a constant $c$ so that for
every $\epsilon \in (0,1/2)$, there is a subset $W \subset K_n$ of size at least $2^{-crk (\log
\frac{1}{\epsilon})^2}n$ and a color $i \in [r]$ such that the edge
density of color $i$ in $W$ is at most $\epsilon$.
Since the proofs of this statement and other generalizations can be obtained using our key lemma in essentially the same way as
the proofs of the results that we already presented (which correspond to the two color case), we
do not include them here.
\item
It would be very interesting to get a better estimate in Theorem \ref{main}. This will immediately
give an improvement of the best known result for Erd\H{o}s-Hajnal conjecture on the size of the maximum
homogeneous set in $H$-free graphs. We believe that our bound can be strengthened as follows.
\begin{conjecture}\label{mainconjecture}
For each graph $H$, there is a constant $c(H)$ such that if
$\epsilon \in (0,1/2)$ and $G$ is an $H$-free graph on $n$ vertices,
then there is an induced subgraph of $G$ on at least
$\epsilon^{c(H)}n$ vertices that has edge density either at most
$\epsilon$ or at least $1-\epsilon$.
\end{conjecture}
\noindent
This conjecture, if true, would imply the Erd\H{o}s-Hajnal
conjecture. Indeed, take $\epsilon=n^{-\frac{1}{c(H)+1}}$. Then
every $H$-free graph $G$ on $n$ vertices contains an induced
subgraph on at least $\epsilon^{c(H)}n=n^{\frac{1}{c(H)+1}}$
vertices that has edge density at most $\epsilon$ or at least
$1-\epsilon$. Note that this induced subgraph or its
complement has average degree at most $1$, which implies that it
contains a clique or independent set of size at least
$\frac{1}{2}n^{\frac{1}{c(H)+1}}$.
\item
One of the main remaining open problems on induced Ramsey numbers is
a beautiful conjecture of Erd\H{o}s which states that there exists a
positive constant $c$ such that $r_{\textrm{ind}}(H) \leq 2^{ck}$
for every graph $H$ on $k$ vertices. This, if true, would show that
induced Ramsey numbers in the worst case have the same order of
magnitude as ordinary Ramsey numbers. Our results here suggest that
one can attack this problem by studying 2-edge-colorings of a random
graph with edge probability $1/2$. It looks very plausible that for a
sufficiently large constant $c$, with high probability the random graph
$G(n,1/2)$ with $n\geq 2^{ck}$ has the property that any of its
2-edge-colorings contains every graph on $k$ vertices as an induced
monochromatic subgraph. Moreover, maybe this is even true for every
sufficiently pseudo-random graph with edge density $1/2$.
\item The results on induced Ramsey numbers of sparse graphs naturally
lead to the following questions. What is the asymptotic
behavior of the maximum of induced Ramsey numbers over all trees on
$k$ vertices? We have proved that $r_{\textrm{ind}}(T)$ is superlinear in
$k$ for some trees $T$. On the other hand, Beck
\cite{Be} proved that $r_{\textrm{ind}}(T)=O\left( k^{2}\log^2
k\right)$ for all trees $T$ on $k$ vertices.
For induced Ramsey numbers of bounded degree graphs, we proved a
polynomial upper bound with exponent which is nearly linear in the
maximum degree. Can this be improved further, e.g., is it true that
the induced Ramsey number of every $n$-vertex graph with maximum
degree $d$ is at most a polynomial in $n$ with exponent independent
of $d$? It is known that the usual Ramsey numbers of bounded degree
graphs are linear in the number of vertices.
\end{itemize}
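The arithmetic behind the deduction of the Erd\H{o}s-Hajnal conjecture from Conjecture \ref{mainconjecture} in the second remark can be checked numerically; in the sketch below (our own check) the values $c = 4$ and $n = 10^{10}$ are arbitrary samples.

```python
# Arithmetic of the reduction: with eps = n^{-1/(c+1)}, the guaranteed
# induced subgraph has m = eps^c * n = n^{1/(c+1)} vertices and edge
# density at most eps, hence average degree at most eps * m = 1.
# The values c = 4 and n = 10^10 are arbitrary samples.
c, n = 4, 10 ** 10
eps = n ** (-1 / (c + 1))
m = eps ** c * n
print("subgraph size:", m, " n^(1/(c+1)):", n ** (1 / (c + 1)), " avg degree bound:", eps * m)
```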
\vspace{0.2cm} \noindent {\bf Acknowledgment.}\, We would like to thank
J\'anos Pach and Csaba T\'oth for helpful comments on an early stage
of this project, and Steve Butler and Philipp Zumstein for carefully
reading this manuscript.
| {
"timestamp": "2007-12-27T22:18:10",
"yymm": "0706",
"arxiv_id": "0706.4112",
"language": "en",
"url": "https://arxiv.org/abs/0706.4112",
"abstract": "We present a unified approach to proving Ramsey-type theorems for graphs with a forbidden induced subgraph which can be used to extend and improve the earlier results of Rodl, Erdos-Hajnal, Promel-Rodl, Nikiforov, Chung-Graham, and Luczak-Rodl. The proofs are based on a simple lemma (generalizing one by Graham, Rodl, and Rucinski) that can be used as a replacement for Szemeredi's regularity lemma, thereby giving much better bounds. The same approach can be also used to show that pseudo-random graphs have strong induced Ramsey properties. This leads to explicit constructions for upper bounds on various induced Ramsey numbers.",
"subjects": "Combinatorics (math.CO)",
"title": "Induced Ramsey-type theorems"
} |
https://arxiv.org/abs/2109.03749 | Lifting methods in mass partition problems | Many results in mass partitions are proved by lifting $\mathbb{R}^d$ to a higher-dimensional space and dividing the higher-dimensional space into pieces. We extend such methods to use lifting arguments to polyhedral surfaces. Among other results, we prove the existence of equipartitions of $d+1$ measures in $\mathbb{R}^d$ by parallel hyperplanes and of $d+2$ measures in $\mathbb{R}^d$ by concentric spheres. For measures whose supports are sufficiently well separated, we prove results where one can cut a fixed (possibly different) fraction of each measure either by parallel hyperplanes, concentric spheres, convex polyhedral surfaces of few facets, or convex polytopes with few vertices. |
\section{Introduction}
In a standard mass partition problem, we are given measures or finite families of points in a Euclidean space and we seek to partition the ambient space into regions that meet certain conditions. Some conditions determine how we split the measures and the sets of points. For instance, in an equipartition, we ask that each part has the same size in each measure or contains the same number of points of each set. Some conditions restrict the types of partitions which are allowed, such as partition by a single hyperplane. Determining whether such partitions always exist leads to a rich family of problems. Solutions to these problems often require topological methods and can have computational applications \cites{matousek2003using, Zivaljevic2017, RoldanPensado2021}. The quintessential mass partition result is the ham sandwich theorem, conjectured by Steinhaus and proved by Banach \cite{Steinhaus1938}.
\begin{theorem}[Ham sandwich theorem]
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_d$ be finite measures on $\mathds{R}^d$. Then, there exists a hyperplane $H$ in $\mathds{R}^d$ so that its two closed half-spaces $H^+$ and $H^-$ satisfy
\begin{align*}
\mu_i (H^+) & \ge \frac{1}{2}\mu_i (\mathds{R}^d), \\
\mu_i (H^-) & \ge \frac{1}{2}\mu_i (\mathds{R}^d) \qquad \mbox{for $i=1,\ldots,d$.}
\end{align*}
\end{theorem}
If we further ask that $\mu_i(H') = 0$ for each hyperplane $H'$ and every $i=1,\ldots, d$, the inequalities above are equalities. Stone and Tukey proved the ham sandwich theorem for general measures \cite{Stone:1942hu}. They also proved the polynomial ham sandwich theorem, which states that \textit{any $\binom{d+k}{k}-1$ measures in $\mathds{R}^d$ can be halved with a polynomial in $d$ variables of degree at most $k$.} Even though this is a far-reaching generalization of the ham sandwich theorem, its proof relies on a simple trick: we lift $\mathds{R}^d$ to $\mathds{R}^{\binom{d+k}{k}-1}$ by the Veronese map and apply the ham sandwich theorem in the higher-dimensional space.
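The lifting trick is easy to make concrete in code. The sketch below is our own illustration of the degree-$2$ Veronese map for $d=2$: planar points are lifted to $\mathds{R}^{\binom{4}{2}-1}=\mathds{R}^5$, where a hyperplane cut pulls back to a conic cut of the original points.

```python
from math import comb

def veronese2(p):
    """Degree-2 Veronese lift of a planar point (x, y) into R^5."""
    x, y = p
    return (x, y, x * x, x * y, y * y)

# Dimension check: binom(d + k, k) - 1 = binom(4, 2) - 1 = 5 for d = k = 2.
assert len(veronese2((1.0, 2.0))) == comb(4, 2) - 1

# A half-space {z in R^5 : <w, z> <= t} pulls back to the planar region
# w1*x + w2*y + w3*x^2 + w4*x*y + w5*y^2 <= t.  With w = (0, 0, 1, 0, 1)
# and t = 1 this is the closed unit disk, so a linear cut of the lifted
# points is a conic cut of the original ones.
w, t = (0, 0, 1, 0, 1), 1
inside = lambda q: sum(wi * zi for wi, zi in zip(w, veronese2(q))) <= t
print(inside((0.5, 0.5)), inside((2.0, 0.0)))
```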
In this paper, we prove several mass partition results by lifting $\mathds{R}^d$ to higher-dimensional spaces, particularly $\mathds{R}^{d+1}$, in new ways. In \cref{sec:sines-and-spheres}, we revisit a known result about equipartitions of measures with spheres and prove a new result about equipartitions of three measures in $\mathds{R}^2$ using a sinusoidal curve of fixed period. Then, instead of lifting to higher-dimensional spaces via smooth maps, such as the Veronese maps, we lift to polyhedral surfaces in $\mathds{R}^{d+1}$. This forces us to use the ham sandwich theorem for general measures, which is interesting on its own.
One of our main results is the following ham sandwich theorem for parallel hyperplanes.
\begin{theorem}\label{thm:parallel-hyperplanes}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_{d+1}$ be $d+1$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure. Then, there exist two parallel hyperplanes $H_1, H_2$ so that the region between them contains exactly half of each measure.
\end{theorem}
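A one-dimensional analogue of \cref{thm:parallel-hyperplanes} (for $d=1$, two parallel hyperplanes are just two points) can be verified numerically. The sketch below is our own illustration with two Gaussian measures; the monotonicity exploited by the outer bisection holds for this particular pair of measures but is not claimed in general.

```python
import math

# For two absolutely continuous measures on R, search for an interval
# [a, b] containing exactly half of each.  Here mu_1 ~ N(0, 1) and
# mu_2 ~ N(2, 1) are an arbitrary illustration.
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
mu1 = lambda a, b: Phi(b) - Phi(a)            # mass of N(0, 1) on [a, b]
mu2 = lambda a, b: Phi(b - 2) - Phi(a - 2)    # mass of N(2, 1) on [a, b]

def b_of(a, lo=-50.0, hi=50.0):
    """Right endpoint b with mu_1([a, b]) = 1/2, found by bisection."""
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mu1(a, mid) < 0.5 else (lo, mid)
    return (lo + hi) / 2

# mu_2([a, b(a)]) is increasing in a for this pair, so bisect on a.
lo, hi = -8.0, 0.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mu2(mid, b_of(mid)) < 0.5 else (lo, mid)
a = (lo + hi) / 2
b = b_of(a)
print(round(a, 3), round(b, 3))   # the interval is symmetric about 1
```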
If there is a hyperplane $H_1$ that halves all measures, we can consider $H_2$ to be at infinity. If $H_1$ and $H_2$ are not required to be parallel, we present a simple proof (\cref{thm:simple-wedge}) that \textit{for any $d+1$ finite measures in $\mathds{R}^d$ there exist two closed half-spaces whose intersection contains exactly half of each measure}. The intersection of two half-spaces is called a wedge. The fact that $d+1$ measures in $\mathds{R}^d$ can be halved by a wedge was first proved by B\'ar\'any and Matou\v{s}ek in dimension two \cite{Barany:2001fs} and later generalized to $\mathds{R}^d$ by Schnider \cite{Schnider:2019ua}. For $\mathds{R}^2$, Bereg presented algorithmic approaches for the discrete version which show that more conditions can be imposed on the wedge \cite{Bereg:2005voa}.
The proof requires a new Borsuk--Ulam type theorem about direct products of spheres and Stiefel manifolds, \cref{thm:new-topological-result}, which we describe in \cref{sec:polyhedral-lift}. As a corollary, we combine \cref{thm:parallel-hyperplanes} with known lifting techniques to obtain the following result. We nickname it the ``bagel ham sandwich theorem'', due to how it looks in $\mathds{R}^2$.
\begin{corollary}[Bagel ham sandwich theorem]\label{cor:Bagel}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_{d+2}$ be $d+2$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure. Then, we can find two concentric spheres in $\mathds{R}^d$ so that the closed region between them has exactly half of each measure.
\end{corollary}
\cref{thm:parallel-hyperplanes} is optimal, as the region between the two hyperplanes is convex. One can simply take $d+1$ measures, each concentrated around a vertex of a simplex, and a final measure concentrated around the barycenter of the simplex to show that the result is impossible with $d+2$ measures. The problem of cutting the same fraction for a family of measures with a single convex set has been studied before \cites{Akopyan:2013jt, Blagojevic:2007ij}, which we revisit in \cref{sec:remarks}. \cref{thm:parallel-hyperplanes} is also related to the problem of halving measures in $\mathds{R}^d$ using hyperplane arrangements. Langerman conjectured that \textit{any $dn$ measures in $\mathds{R}^d$ can be simultaneously halved by a chessboard coloring induced by $n$ hyperplanes} \cites{Barba:2019to, Hubard2020}. For $n=2$, this has been confirmed for $2d-O(\log d)$ measures \cite{Blagojevic2018}. If the hyperplanes are required to be parallel, this reduces the dimension of the space of possible partitions from $2d$ to $d+1$, matching the number of measures in \cref{thm:parallel-hyperplanes}.
General mass partition results like the ham sandwich theorem can halve many measures simultaneously. If we want to cut a fixed (but possibly different) fraction of each measure, conditions need to be imposed. For example, if two measures coincide, it is impossible to find a half-space that contains exactly half of one and one third of the other.
The first result with arbitrary sizes for each measure was proved by Hugo Steinhaus in dimensions two and three \cite{Steinhaus1945}. He required the support of the measures to be well separated, meaning that the supports of any set of measures could be separated from the supports of the rest by a hyperplane. This condition was sufficient to guarantee the existence of a half-space cutting a fixed fraction of several measures. This result was rediscovered and extended to high dimensions independently by B\'ar\'any, Hubard, and Jer\'onimo and by Breuer \cites{Barany:2008vv, Breuer2010}.
\begin{theorem}[B\'ar\'any, Hubard, Jer\'onimo 2008; Breuer 2010]\label{thm:BHJ}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_d$ be $d$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure so that their supports $K_1, \ldots, K_d$ are well separated. Let $\alpha_1, \ldots, \alpha_d$ be real numbers in $(0,1)$. Then, there exists a half-space $H$ so that
\[
\mu_i(H) = \alpha_i \cdot \mu_i (\mathds{R}^d) \qquad \mbox{for }i=1,\ldots, d.
\]
\end{theorem}
The proof of B\'ar\'any, Hubard, and Jer\'onimo uses Brouwer's fixed point theorem. Breuer's proof uses the Poincar\'e--Miranda theorem, which is equivalent to Brouwer's fixed point theorem but has a significantly different formulation. Steinhaus' proof is quite different and uses the Jordan curve theorem. In \cref{sec:well separated} we present a new proof of \cref{thm:BHJ} that uses a degree argument. We extend \cref{thm:parallel-hyperplanes} in a similar way for well separated measures.
\begin{theorem}\label{thm:parallel-hyperplanes-separated}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_{d+1}$ be $d+1$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure. Suppose that the supports $K_1, \ldots, K_{d+1}$ of $\mu_1, \ldots, \mu_{d+1}$ are well separated. Let $\alpha_1, \ldots, \alpha_{d+1}$ be real numbers in $(0,1)$. Then, there exist two parallel hyperplanes $H_1, H_2$ in $\mathds{R}^d$ so that the region $A$ between them satisfies
\[
\mu_i(A) = \alpha_i \cdot \mu_i(\mathds{R}^d) \qquad \mbox{for all }i=1,\ldots, d+1.
\]
\end{theorem}
We also combine these results with those on partitions by few hyperplanes and on partitions using a single convex set. We exhibit conditions for measures in $\mathds{R}^d$ that guarantee the existence of a (possibly unbounded) convex polyhedron of few facets which contains a fixed fraction of each measure, or the existence of a convex polytope with few vertices that contains a fixed fraction of each measure. This is done in \cref{sec:polyhedral}. These results work with an arbitrary number of measures in $\mathds{R}^d$.
Finally, we revisit a mass partition result by Akopyan and Karasev that uses a lifting argument in its proof. Akopyan and Karasev proved that for any positive integer $n$ and any $d+1$ measures in $\mathds{R}^d$, there exists a convex set $K$ that contains exactly a $1/n$ fraction of each measure. We extend the methods from \cref{sec:polyhedral-lift} to bound the complexity of $K$ by writing it as the intersection of few half-spaces.
\begin{theorem}\label{thm:same-fraction}
Let $n,d $ be positive integers and $\mu_1, \ldots, \mu_{d+1}$ be $d+1$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure. There exists a convex set $K$, such that $K$ is the intersection of $\sum_{j=1}^r k_j(p_j-1)p_j$ half-spaces and
\[
\mu_i(K) = \frac{1}{n}\mu_i(\mathds{R}^d) \qquad \mbox{for all }i=1,\ldots, d+1,
\]
where $n=p_1^{k_1}\ldots p_r^{k_r}$ is the prime factorization of $n$.
\end{theorem}
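The number of half-spaces in the theorem depends only on the prime factorization of $n$. A small illustrative computation of the bound $\sum_{j=1}^r k_j(p_j-1)p_j$ (the function name is ours):

```python
def halfspace_bound(n: int) -> int:
    """Evaluate sum_j k_j * (p_j - 1) * p_j over the prime factorization
    n = p_1^{k_1} * ... * p_r^{k_r} by trial division."""
    total, p, m = 0, 2, n
    while p * p <= m:
        k = 0
        while m % p == 0:
            m //= p
            k += 1
        total += k * (p - 1) * p
        p += 1
    if m > 1:  # one prime factor larger than sqrt(m) may remain
        total += (m - 1) * m
    return total

# n = 12 = 2^2 * 3 gives 2*(2-1)*2 + 1*(3-1)*3 = 10 half-spaces.
assert halfspace_bound(12) == 10
```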
We conclude in \cref{sec:remarks} with remarks and open problems.
\section{Equipartition with spheres and sine curves}\label{sec:sines-and-spheres}
While the traditional ham sandwich theorem simultaneously halves $d$ measures in $\mathds{R}^d$ by a hyperplane, we can simultaneously halve $d+1$ or more measures in $\mathds{R}^d$ if we increase the complexity of the cut. The following theorem is a consequence of Stone and Tukey's polynomial ham sandwich theorem \cite{Stone:1942hu} and was one of Stone and Tukey's first examples of their main results. It was also proved in dimension two by Hugo Steinhaus in 1945 \cite{Steinhaus1945} using a particular parametrization of the space of circles in $\mathds{R}^2$. We present a new proof with a stereographic projection.
\begin{theorem}\label{thm:circular-sandwich}
Let $d$ be a positive integer and let $\mu_1, \ldots, \mu_{d+1}$ be $d+1$ finite measures in $\mathds{R}^d$, each absolutely continuous with respect to the Lebesgue measure. Then, there exists either a sphere or a hyperplane that simultaneously splits each measure by half.
\end{theorem}
\begin{proof}
We first embed $\mathds{R}^d$ into $\mathds{R}^{d+1}$ by appending a coordinate $1$ to each point, so $x \mapsto (x,1) \in \mathds{R}^{d+1}$. Then, we apply the inversion $r: \mathds{R}^{d+1}\setminus\{0\} \to \mathds{R}^{d+1}\setminus\{0\}$ centered at $0$ with radius $1$, given by $r(x) = x/\|x\|^2$. This transformation sends spheres containing the origin to hyperplanes and hyperplanes to spheres containing the origin. Hyperplanes containing the origin are fixed set-wise by the inversion, and we consider them as degenerate spheres as well. Restricted to the embedding of $\mathds{R}^d$, the inversion is a stereographic projection to the sphere $S$ of radius $1/2$ centered at $(0,\ldots,0,1/2)$ by rays through the origin. We also know that $r \circ r$ is the identity.
When we lift the measures $\mu_1, \ldots, \mu_{d+1}$ to $\mathds{R}^{d+1}$ and apply $r$, we get measures $\sigma_1, \ldots, \sigma_{d+1}$ on $S$. By the ham sandwich theorem in $\mathds{R}^{d+1}$, there exists a hyperplane $H$ halving each of $\sigma_1, \ldots, \sigma_{d+1}$. Since $r(H)$ is a sphere in $\mathds{R}^{d+1}$, it intersects the embedding of $\mathds{R}^d$ in a $(d-1)$-dimensional sphere halving each of $\mu_1, \ldots, \mu_{d+1}$, as we wanted. The only exceptional case is if $H$ contains the origin, in which case $r(H) = H$, which gives us an equipartition by a hyperplane in $\mathds{R}^d$.
\end{proof}
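The key geometric fact in this proof, that the inversion $r(x)=x/\|x\|^2$ maps the lifted copy of $\mathds{R}^d$ onto the sphere of radius $1/2$ centered at $(0,\ldots,0,1/2)$, can be checked numerically. The following sketch is only an illustration of that computation, not part of the argument.

```python
import random

def invert(point):
    """Inversion r(x) = x / ||x||^2 in R^{d+1}, centered at the origin."""
    s = sum(c * c for c in point)
    return tuple(c / s for c in point)

def max_deviation(d=3, trials=100, seed=0):
    """Lift random points of R^d to (x, 1), invert, and measure how far the
    images are from the sphere of radius 1/2 centered at (0, ..., 0, 1/2)."""
    rng = random.Random(seed)
    center = (0.0,) * d + (0.5,)
    worst = 0.0
    for _ in range(trials):
        x = [rng.uniform(-5, 5) for _ in range(d)]
        image = invert(x + [1.0])  # lift x to (x, 1), then invert
        dist = sum((a - b) ** 2 for a, b in zip(image, center)) ** 0.5
        worst = max(worst, abs(dist - 0.5))
    return worst
```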
Similarly, the idea of lifting allows for a visual and intuitive proof of the existence of equipartitions of three measures in $\mathds{R}^2$ by a sine wave. By a sine wave of period $\alpha$ we mean the graph of a function of the form $y = r + A \sin (2\pi x / \alpha + s)$, for real numbers $A,r,s$.
\begin{theorem}\label{thm:cylinder}
Let $\alpha > 0$ be a real number. Given three finite measures $\mu_1, \mu_2, \mu_3$ in $\mathds{R}^2$, each absolutely continuous with respect to the Lebesgue measure, there exists a sine wave of period $\alpha$ halving each measure.
\end{theorem}
We allow ``degenerate'' sine waves of period $\alpha$. A degenerate sine wave of period $\alpha$ is formed by taking two vertical lines intersecting the $x$-axis in the interval $[0,\alpha)$ and making a translated copy in each interval of the form $n\alpha + [0,\alpha)$. This set induces a chessboard coloring of the plane into two regions. We can think of this as the limit of a sequence of sine waves of period $\alpha$ of increasing amplitude.
\begin{proof}
We prove the result for $\alpha = 2\pi$, as the general case follows by a scaling argument. We wrap $\mathds{R}^2$ around the cylinder $C$ in $\mathds{R}^3$ with equation $x^2 + z^2 = 1$ via the function
\begin{align*}
f: \mathds{R}^2 & \to C \\
(x,y) & \mapsto \left(\cos\left(x\right), y, \sin\left(x\right)\right)
\end{align*}
Let $\sigma_1, \sigma_2, \sigma_3$ be the measures that $\mu_1, \mu_2, \mu_3$ induce on $C$ by this lifting, respectively. We apply the ham sandwich theorem to these three measures in $\mathds{R}^3$. Therefore, we can find a plane $H=\{(x,y,z): ax + by + cz = d\}$ that halves each of $\sigma_1, \sigma_2, \sigma_3$. When we pull $H \cap C$ back to $\mathds{R}^2$, we get the set of points $(x,y)$ that satisfy $a \cos(x) + b y + c\sin(x) = d$. Since a linear combination of the sine and cosine functions is a sinusoid with the same period but possibly different amplitude and phase shift, we have $a \cos(x) + c \sin (x) = A\sin(x + s)$ for some $A$ and $s$. Hence, when $b \neq 0$, the pullback is the graph of $y = d/b - (A/b)\sin(x + s)$, a sine wave of period $2\pi$.
\begin{figure}[ht!]
\centering
\includegraphics[width=.7\textwidth]{cylinder.png}
\caption{An example of the cylinder $C$ for $\alpha=2\pi$. The lift $\mathds{R}^2 \to C$ is not an injective function, but this does not cause a problem.}
\label{fig:cylinder}
\end{figure}
\end{proof}
The degenerate cases appear when $b$, the coefficient of $y$, is zero. One can prove a high-dimensional version of \cref{thm:cylinder} by wrapping $\mathds{R}^d$ around $S^{d-1}\times \mathds{R}$ to find ``sinusoidal surfaces of fixed period'' that halve $d+1$ measures in $\mathds{R}^d$.
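The amplitude--phase rewriting used in the proof, $a\cos x + c\sin x = A\sin(x+s)$ with $A=\sqrt{a^2+c^2}$ and $s$ chosen so that $A\sin s = a$ and $A\cos s = c$, can be verified numerically. The sketch below is our own illustration of that identity.

```python
import math
import random

def amplitude_phase(a: float, c: float):
    """Rewrite a*cos(x) + c*sin(x) as A*sin(x + s)."""
    # A*sin(x + s) = A*cos(s)*sin(x) + A*sin(s)*cos(x),
    # so we need A*cos(s) = c and A*sin(s) = a.
    return math.hypot(a, c), math.atan2(a, c)

def check(a, c, trials=50, seed=1):
    """Return the worst deviation between the two sides of the identity
    at random sample points."""
    rng = random.Random(seed)
    A, s = amplitude_phase(a, c)
    return max(abs(a * math.cos(x) + c * math.sin(x) - A * math.sin(x + s))
               for x in (rng.uniform(-10, 10) for _ in range(trials)))
```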
\section{Equipartitions with wedges and parallel hyperplanes}\label{sec:polyhedral-lift}
In this section, we prove results regarding equipartitions of $d+1$ mass distributions in $\mathds{R}^d$ by a wedge.
A \textit{wedge} in $\mathds{R}^d$ is the intersection of two closed half-spaces. Note that a single closed half-space is also considered a wedge.
We say that a measure $\mu$ with support $K$ in $\mathds{R}^d$ is \textit{absolutely continuous} if it is absolutely continuous with respect to the Lebesgue measure, the interior of $K$ is connected and not empty, and for every open set $U \subset K$ we have $\mu(U) > 0$. This guarantees that there is a unique halving hyperplane in each direction for $\mu$ and that the halving hyperplane varies continuously as we change the direction of the cut. We first establish a lemma about halving hyperplanes. We only use the lemma below with $n=d+1$, but it works in general.
\begin{lemma}\label{lem:separating-fixed-direction}
Let $n$ be a positive integer, $\mu_1, \ldots, \mu_{n}$ be finite absolutely continuous measures in $\mathds{R}^d$, and $v$ be a unit vector in $\mathds{R}^d$. There either exists a hyperplane $H$ orthogonal to $v$ that halves each of the $n$ measures or there exists a hyperplane $H$ such that its two closed half-spaces satisfy
\begin{align*}
\mu_i (H^+) & < \frac{1}{2}\mu_i(\mathds{R}^d) \qquad \mbox{for some $i \in [n]$ and}\\
\mu_{i'} (H^-) & < \frac{1}{2}\mu_{i'}(\mathds{R}^d) \qquad \mbox{for some $i' \in [n]$.}
\end{align*}
\end{lemma}
\begin{proof}
For each $i$, let $H_i$ be the halving hyperplane for $\mu_i$ orthogonal to $v$. If all these hyperplanes coincide, we are done. Otherwise, we can order these hyperplanes along the direction $v$, and any hyperplane $H$ strictly between the first $H_i$ and the last $H_{i'}$ satisfies the conditions we want.
\end{proof}
In the situation above, we always take the hyperplane $H$ exactly half-way between the first $H_i$ and the last $H_{i'}$. This makes the choice of hyperplane continuous as $v$ varies, and invariant if we replace $v$ by $-v$.
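For intuition, here is a discrete analogue of this canonical choice (our own illustration, with finite point sets standing in for the measures): the halving hyperplane of a set orthogonal to $v$ sits at the median of its projections onto $v$, and we take the offset halfway between the extreme medians. Flipping $v$ to $-v$ negates the offset, matching the invariance just described.

```python
import statistics

def canonical_offset(point_sets, v):
    """Discrete analogue of the canonical hyperplane: project each point set
    onto v, take its median offset (its halving hyperplane <x, v> = m), and
    return the offset halfway between the smallest and largest medians."""
    medians = [statistics.median(sum(p_i * v_i for p_i, v_i in zip(p, v))
                                 for p in pts)
               for pts in point_sets]
    return (min(medians) + max(medians)) / 2
```

For example, with the two sets below, the medians along $v=(0,1)$ are $0$ and $5$, so the canonical offset is $2.5$; replacing $v$ by $-v$ gives $-2.5$.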
Through the rest of the manuscript we denote the canonical basis of $\mathds{R}^d$ by $e_1, \ldots, e_d$.
\begin{theorem}\label{thm:simple-wedge}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_{d+1}$ be $d+1$ finite absolutely continuous measures in $\mathds{R}^d$. Then, there exists a wedge that contains exactly half of each measure.
\end{theorem}
\begin{proof}
Let $v$ be a unit vector in $\mathds{R}^d$. If there is a hyperplane $H$ orthogonal to $v$ halving each measure we are done. Otherwise, by \cref{lem:separating-fixed-direction}, we can find a hyperplane $H$ such that each side contains less than half of some measure.
Consider the lifting of $\mathds{R}^d$ to $\mathds{R}^{d+1}$ where we append an additional coordinate to every point $x \in \mathds{R}^d$. Formally, we lift via the map $x \mapsto (x, \operatorname{dist}(x,H))$.
We denote by $S(H)$ the image of $\mathds{R}^{d}$ in this embedding. Note that the function $x \mapsto \operatorname{dist}(x,H)$ is affine on each side of $H$, so $S(H)$ is contained in the union of two hyperplanes that contain $\{(x,0): x \in H\}$. See \cref{fig:S(H)} for an illustration of the case $d=2$.
We lift each measure $\mu_i$ in $\mathds{R}^d$ to a measure $\sigma_i$ in $\mathds{R}^{d+1}$. The measures $\sigma_1, \ldots, \sigma_{d+1}$ are no longer absolutely continuous. We now apply the ham sandwich theorem for general measures in $\mathds{R}^{d+1}$. Therefore, we can find a hyperplane $H'$ in $\mathds{R}^{d+1}$ so that its two closed half-spaces $(H')^+, (H')^-$ satisfy $\sigma_i((H')^+) \ge \frac{1}{2}\sigma_i(\mathds{R}^{d+1})$ and $\sigma_i((H')^-) \ge \frac{1}{2}\sigma_i(\mathds{R}^{d+1})$ for all $i =1,\ldots, d+1$.
By construction, each side of $H$ has strictly less than half of one of the measures $\mu_i$. If the hyperplane $H'$ coincides with one of the two hyperplanes whose union contains $S(H)$, the half-space bounded by $H'$ that contains infinite rays in the direction $-e_{d+1}$ would have less than half the corresponding $\sigma_i$. Therefore $H'$ is not one of the two hyperplanes forming $S(H)$.
As the two components of $S(H)$ were the only hyperplanes with non-zero measure for each $\sigma_i$, we conclude that $H'$ halves each of the measures in $\mathds{R}^{d+1}$. As a final observation, $H'\cap S(H)$ projects back to $\mathds{R}^d$ as the boundary of a wedge that halves all measures.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Vshape.pdf}
\caption{An example of lifting when given three sets of points in $\mathds{R}^2$. Points on the $xy$-plane are sent to points on this surface. The hyperplane $H$ in this case is the $x$-axis.}
\label{fig:S(H)}
\end{figure}
In the proof of \cref{thm:simple-wedge} we could choose the direction $v$ arbitrarily. We now use this degree of freedom to strengthen the result. Even though \cref{thm:parallel-hyperplanes} implies \cref{thm:simple-wedge}, we state it separately as the proof requires more technical tools. In particular, a simple application of the ham sandwich theorem is insufficient. We require some additional topological tools in lieu of the Borsuk--Ulam theorem.
Let $V_k(\mathds{R}^d)$ be the Stiefel manifold of orthonormal $k$-frames in $\mathds{R}^d$. Formally,
\[
V_k(\mathds{R}^d) = \{(v_1,\ldots, v_k) : v_1, \ldots, v_k \in \mathds{R}^d \mbox{ are orthonormal}\}.
\]
The space $V_k(\mathds{R}^d)$ has a free action of the group $(\mathds{Z}_2)^{k}$, where we consider $\mathds{Z}_2 = \{+1,-1\}$ with multiplication. Given $(v_1,\ldots, v_k) \in V_k(\mathds{R}^d)$ and $(\lambda_1, \ldots, \lambda_k) \in (\mathds{Z}_2)^k$, we define
\[
(\lambda_1, \ldots, \lambda_k)\cdot (v_1,\ldots, v_k) = (\lambda_1 v_1, \ldots, \lambda_k v_k) \in V_k(\mathds{R}^d).
\]
A similar action of $(\mathds{Z}_2)^k$ can be defined in $\mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k}$, as the direct product of the actions of $\mathds{Z}_2$ on each $\mathds{R}^{d-i}$. A recent result of Chan, Chen, Frick, and Hull describes properties of $(\mathds{Z}_2)^k$-equivariant maps between these two spaces.
\begin{theorem}[Chan, Chen, Frick, Hull 2020 \cite{Chan2020}]\label{thm:topo1}
Let $k, d$ be positive integers. Every continuous $(\mathds{Z}_2)^k$-equivariant map $f:V_k(\mathds{R}^d) \to \mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k}$ has a zero.
\end{theorem}
Manta and Sober\'on recently found an elementary proof of \cref{thm:topo1} \cite{Manta2021}. We use the result above in \cref{sec:well separated}. For this section, we need a slight modification. We use the product of the actions of $\mathds{Z}_2$ on the $d$-dimensional sphere $S^d$ and of $(\mathds{Z}_2)^k$ on $V_k(\mathds{R}^d)$ to define a free action of $(\mathds{Z}_2)^{k+1}$ on $S^d \times V_k(\mathds{R}^d)$.
\begin{theorem}\label{thm:new-topological-result}
Let $k, d$ be positive integers. Every continuous $(\mathds{Z}_2)^{k+1}$-equivariant map $f:S^d \times V_k(\mathds{R}^d) \to \mathds{R}^d \times \mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k}$ has a zero.
\end{theorem}
There are several ways to prove the result above. The dimension of the image and the domain are the same and the action of $(\mathds{Z}_2)^{k+1}$ is free on $S^d \times V_k(\mathds{R}^d)$. Therefore, \cref{thm:new-topological-result} is a consequence of the general Borsuk--Ulam type results of Musin \cite{Mus12}. We simply need to find an equivariant function between these two spaces that has an odd number of orbits of zeroes. Such functions are known for $\mathds{Z}_2$-equivariant $f_1:S^d \to \mathds{R}^d$ and for $(\mathds{Z}_2)^{k}$-equivariant $f_2: V_k(\mathds{R}^d) \to \mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k}$, so we can simply take $f_0= f_1 \times f_2$. Alternatively, one can use the methods of Chan et al. \cite{Chan2020} to prove \cref{thm:new-topological-result}. It suffices to note that $S^d \times V_k(\mathds{R}^d)$ is a space in which their topological invariants can be applied, and the particular function $f_0$ is all that is needed to replace \cite{Chan2020}*{second proof of Lemma 3.2}. We only use \cref{thm:new-topological-result} for $k=d-1$.
We present a short proof using the existing computations of the Fadell--Husseini index of these spaces in $\mathds{Z}_2$ cohomology \cite{Fadell:1988tm}. Given spaces $X$ and $Y$ with actions of $(\mathds{Z}_2)^{k+1}$, their indices $\ind^{(\mathds{Z}_2)^{k+1}}(X)$, $\ind^{(\mathds{Z}_2)^{k+1}}(Y)$ are ideals in the polynomial ring $\mathds{Z}_2[t_0,t_1,\ldots,t_k]$. Moreover, if there exists a continuous $(\mathds{Z}_2)^{k+1}$-equivariant map $f:X \to Y$, we must have $\ind^{(\mathds{Z}_2)^{k+1}}(Y) \subset \ind^{(\mathds{Z}_2)^{k+1}}(X)$. More details on this index and its computation for spaces and group actions common in discrete geometry can be found in recent work of Blagojevi\'c, L\"uck, and Ziegler \cite{Blagojevic2015}.
\begin{proof}[Proof of \cref{thm:new-topological-result}]
The result is equivalent to showing that there exists no continuous $(\mathds{Z}_2)^{k+1}$-equivariant map $f:S^d \times V_k(\mathds{R}^d) \to (\mathds{R}^d \times \mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k}) \setminus \{0\}$. The space $(\mathds{R}^{d}\times \ldots \times \mathds{R}^{d-k}) \setminus \{0\}$ is homotopy equivalent to the join of spheres $S^{d-1}*\ldots * S^{d-k-1}$ and we know $\ind^{(\mathds{Z}_2)^{k+1}} ( S^{d-1}*\ldots * S^{d-k-1} ) \subset \mathds{Z}_2 [t_0, t_1,\ldots, t_k]$ is the ideal generated by the single monomial $t_0^dt_1^{d-1}\ldots t_{k}^{d-k}$.
On the other hand, $\ind^{(\mathds{Z}_2)^{k+1}} (S^d \times V_k(\mathds{R}^d))\subset \mathds{Z}_2[t_0,\ldots, t_k]$ is the ideal generated by the polynomials $t_0^{d+1}, f_1,\ldots, f_k$ where $f_1,\ldots, f_k \in \mathds{Z}_2[t_1,\ldots, t_k]$ generate $\ind^{(\mathds{Z}_2)^k}(V_k(\mathds{R}^d))$. These polynomials were described completely by Fadell and Husseini \cite{Fadell:1988tm}*{Thm. 3.16}. Notably
\[
f_i = t_i^{d-i+1}+ w_{i,d-i}t_i^{d-i}+ \ldots + w_{i,0},
\]
where $w_{i,j} \in \mathds{Z}_2[t_1,\ldots, t_{i-1}]$ and the degree of $w_{i,j}t_i^j$ is $d-i+1$. In particular,
\[
t_0^dt_1^{d-1}\ldots t_{k}^{d-k} \not\in \ind^{(\mathds{Z}_2)^{k+1}}(S^d \times V_k(\mathds{R}^d)),
\]
which shows that no continuous $(\mathds{Z}_2)^{k+1}$-equivariant map $f:S^d \times V_k(\mathds{R}^d) \to (\mathds{R}^d \times \mathds{R}^{d-1}\times \ldots \times \mathds{R}^{d-k})\setminus\{0\}$ exists.
\end{proof}
The second tool we require is a minor modification of the lift from $\mathds{R}^d$ to $\mathds{R}^{d+1}$. In the previous proof, given a hyperplane $H \subset \mathds{R}^d$ we lifted $\mathds{R}^d$ directly to $S(H) \subset \mathds{R}^{d+1}$. This has the inconvenience that the lift of an absolutely continuous measure is no longer absolutely continuous in $\mathds{R}^{d+1}$. We do not require such a strong condition, but we do require the lifted measures to assign mass zero to any hyperplane. To avoid this problem, we lift each measure $\mu_i$ to $S(H)^{\varepsilon}$, the region between $S(H)-\varepsilon \cdot e_{d+1}$ and $S(H)+\varepsilon \cdot e_{d+1}$, which we formalize below.
We say that a measure $\mu$ in $\mathds{R}^d$ is smooth if it is the integral of a continuous positive function $f:\mathds{R}^d \to \mathds{R}$ (i.e., $\mu(A) = \int_A f$ for any measurable set $A$). We ``lift'' $f$ to a function
\begin{align*}
\tilde{f} : \mathds{R}^d \times \mathds{R} & \to \mathds{R} \\
(x,t) & \mapsto \begin{cases}
\left(\frac{1}{2\varepsilon}\right)f(x) & \mbox{ if } |\operatorname{dist}(x,H) - t| \le \varepsilon \\
0 & \mbox{ otherwise.}
\end{cases}
\end{align*}
We say that the measure $\sigma^{\varepsilon}$ defined as the integral of $\tilde{f}$ in $\mathds{R}^{d+1}$ is the lift of $\mu$ to $S(H)^{\varepsilon}$. Notice that as $\varepsilon \to 0$, the measure $\sigma^{\varepsilon}$ converges weakly to the lift of $\mu$ to $S(H)$. For $\varepsilon>0$, the measure $\sigma^{\varepsilon}$ is not absolutely continuous, but it assigns measure zero to each hyperplane.
\begin{proof}[Proof of \cref{thm:parallel-hyperplanes}]
We first assume that no hyperplane simultaneously halves all measures, or we are done. Since the set of smooth measures is dense in the set of absolutely continuous measures, we may assume without loss of generality that the measures $\mu_1, \ldots, \mu_{d+1}$ are smooth. Let $\varepsilon>0$. For $v \in S^d$ and $(v_1, \ldots, v_{d-1}) \in V_{d-1}(\mathds{R}^d)$, consider the element $(v,v_1, \ldots, v_{d-1}) \in S^d \times V_{d-1} (\mathds{R}^d)$.
Let $H$ be the translate of the hyperplane $T=\operatorname{span}\{v_1,\ldots, v_{d-1}\}$ chosen from \cref{lem:separating-fixed-direction}. Let $\sigma^{\varepsilon}_1, \ldots, \sigma^{\varepsilon}_{d+1}$ be the lifts of $\mu_1, \ldots, \mu_{d+1}$ to $S(H)^{\varepsilon}$, respectively.
Let $\lambda$ be the value so that the half-spaces
\begin{align*}
A & = \{x \in \mathds{R}^{d+1}: \langle x, v \rangle \ge \lambda \} \quad \mbox{ and } \\
B & = \{x \in \mathds{R}^{d+1}: \langle x, v \rangle \le \lambda \}
\end{align*}
have the same $\sigma_{d+1}^{\varepsilon}$-measure. Now we are ready to define a map
\begin{align*}
f : S^d \times V_{d-1}(\mathds{R}^d) & \to \mathds{R}^d \times \mathds{R}^{d-1} \times \ldots \times \mathds{R}^1 \\
(v,v_1,\ldots, v_{d-1}) & \mapsto (x_d,\ldots, x_1)
\end{align*}
where $x_i \in \mathds{R}^{i}$ for $i=1,\ldots, d$. First, we consider
\[
x_d = \begin{bmatrix}
\sigma_1^{\varepsilon}(A)-\sigma_1^{\varepsilon}(B) \\
\vdots \\
\sigma_d^{\varepsilon}(A)-\sigma_d^{\varepsilon}(B)
\end{bmatrix}.
\]
For $i=1,\ldots, d-1$, the first coordinate of $x_i$ is $\langle v, e_{d+1}\rangle \langle v, (v_{d-i},0) \rangle$ and the rest are zero.
This function is continuous by the construction of the lift of the measures. It is also $(\mathds{Z}_2)^{d}$-equivariant (this is the case $k=d-1$ of \cref{thm:new-topological-result}): if we flip the sign of $v$, only $x_d$ changes sign, and if we flip the sign of $v_i$, only $x_{d-i}$ changes sign for $i=1,\ldots, d-1$.
By \cref{thm:new-topological-result} the function $f$ has a zero. The condition $x_d = 0$ tells us that $A$ and $B$ each contain exactly half of each $\sigma^{\varepsilon}_i$ for $i=1,\ldots, d+1$. The condition that the remaining $x_i$ are zero vectors means that either $v$ is orthogonal to $e_{d+1}$ or $v$ is orthogonal to each $(v_i,0)$ for $i=1,\ldots, d-1$.
If the first condition happens, then when we project $\mathds{R}^{d+1}$ to the hyperplane $e_{d+1} = 0$, each $\sigma^{\varepsilon}_i$ projects to $\mu_i$ and $A$, $B$ project onto two half-spaces of $\mathds{R}^d$. This would mean we have a hyperplane halving each of the original measures, contradicting our initial assumption. Therefore $v$ is orthogonal to $(v_i,0)$ for all $i$.
We take a sequence of positive real numbers $\varepsilon_{k} \to 0$. For each of them, we find a zero of the function induced above. As $S^d \times V_{d-1}(\mathds{R}^d)$ is compact, the zeros must have a converging subsequence. In the limit, we obtain two complementary half-spaces $A, B$ so that each contains at least half of $\sigma_i$ for $i=1,\ldots, d+1$ on $S(H)$ (where the direction of $H$ is determined by the limit of $(v_1, \ldots, v_{d-1})$). Let $H'$ be the hyperplane at the boundary of $A$ and $B$.
In the limit, the vector $v$ normal to $H'$ is orthogonal to the subspace $T_0=\operatorname{span}\{(v_1,0), \ldots, (v_{d-1},0)\}$. By the construction of $H$, the hyperplane $H'$ cannot be one of the two hyperplane components of $S(H)$. The orthogonality mentioned before implies that $H' \cap S(H)$ must be two $(d-1)$-dimensional affine spaces parallel to $T_0$. When we project back to $\mathds{R}^d$, these parallel intersections form $H_1$ and $H_2$ and the region between them has exactly half of each $\mu_i$.
\end{proof}
\begin{proof}[Proof of \cref{cor:Bagel}]
We lift $\mathds{R}^d$ to the paraboloid
\[
\mathcal{P} = \left\{\left(x_1,\ldots, x_d, \sum_{i=1}^d x_i^2\right)\in \mathds{R}^{d+1} : (x_1,\ldots, x_d)\in \mathds{R}^d\right\}.
\]
We now have $d+2$ measures in $\mathds{R}^{d+1}$, and every hyperplane has measure zero in each of them. We can apply \cref{thm:parallel-hyperplanes} and find two parallel hyperplanes $H_1, H_2$ so that the region between them contains exactly half of each measure. Notice that $H_1 \cap \mathcal{P}$ and $H_2 \cap \mathcal{P}$ project onto concentric spheres in $\mathds{R}^d$.
\end{proof}
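The projection step can be made concrete: a non-vertical hyperplane $x_{d+1} = \langle a, x\rangle + b$ meets $\mathcal{P}$ exactly where $\|x - a/2\|^2 = b + \|a\|^2/4$, a sphere centered at $a/2$. Two parallel hyperplanes share the vector $a$, so they project to concentric spheres. A numerical sketch of this computation (our own illustration):

```python
import math
import random

def sphere_center_radius(a, b):
    """The hyperplane x_{d+1} = <a, x> + b meets the paraboloid
    x_{d+1} = ||x||^2 along ||x - a/2||^2 = b + ||a||^2 / 4."""
    center = tuple(ai / 2 for ai in a)
    return center, math.sqrt(b + sum(ai * ai for ai in a) / 4)

def verify(a, b, trials=200, seed=2):
    """Sample points on the predicted sphere and check they solve the
    hyperplane/paraboloid equation; return the worst deviation."""
    rng = random.Random(seed)
    center, radius = sphere_center_radius(a, b)
    worst = 0.0
    for _ in range(trials):
        u = [rng.gauss(0, 1) for _ in a]  # random direction
        norm = math.sqrt(sum(c * c for c in u))
        x = [ci + radius * ui / norm for ci, ui in zip(center, u)]
        # Check ||x||^2 = <a, x> + b at the sampled point.
        lhs = sum(c * c for c in x)
        rhs = sum(ai * xi for ai, xi in zip(a, x)) + b
        worst = max(worst, abs(lhs - rhs))
    return worst
```

Note that the center $a/2$ does not depend on $b$, which is exactly why parallel hyperplanes (same $a$, different $b$) yield concentric spheres.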
Of course, similar results can be obtained by applying general Veronese maps as Stone and Tukey did to prove the polynomial ham sandwich theorem \cite{Stone:1942hu}.
\begin{corollary}
Let $d,k$ be positive integers. For any set of $\binom{d+k}{k}$ finite absolutely continuous measures in $\mathds{R}^d$, there exists a polynomial $P$ on $d$ variables and constants $\lambda_1, \lambda_2$ so that $\{x \in \mathds{R}^d : \lambda_1 \le P(x) \le \lambda_2\}$ has exactly half of each measure.
\end{corollary}
The polynomial ham sandwich theorem splits $\binom{d+k}{k}-1$ measures using a polynomial of degree at most $k$. If we want to restrict which monomials are used in the splitting polynomial, we just have to reduce the number of measures accordingly.
\section{Fixed size partitions for well separated measures}\label{sec:well separated}
As noted by B\'ar\'any et al. \cite{Barany:2008vv}, it is known that for well separated convex subsets $K_1, \ldots, K_d$ in $\mathds{R}^d$, there are $2^d$ hyperplanes tangent to all of them. If none of the hyperplanes is vertical (i.e., contains the direction $e_d$), the tangent hyperplanes are in one-to-one correspondence with the subsets of $\{K_1, \ldots, K_d\}$ lying below the hyperplane. This was also proved by Klee, Lewis, and Hohenbalken \cite{Klee1997}. We use this fact in our proof of \cref{thm:BHJ}.
\begin{proof}[Proof of \cref{thm:BHJ}]
Consider the hypercube $Q = [0,1]^d$. Each vertex of $Q$ corresponds uniquely to a subset $I \subset [d]$. We denote
\[
v_I = (p_1,\ldots, p_d) \qquad \mbox{where } \quad p_i = \begin{cases}
1 & \mbox{ if } i\in I \\
0 & \mbox{ if } i\not\in I.
\end{cases}
\]
For a point $q = (q_1, \ldots, q_d) \in Q$ and $I \subset [d]$ we consider the coefficients
\begin{align*}
\lambda_I(q) = \prod_{i \in I}q_i \prod_{i \not\in I} (1-q_i).
\end{align*}
The coefficients $\lambda_I(q)$ are the coefficients of a convex combination, as they are non-negative and their sum is $1$. Suppose we have a function $f: \{0,1\}^d \to \mathds{R}^d$. We can extend it to a function $\tilde{f}:Q \to \mathds{R}^d$ by mapping
\[
q \mapsto \sum_{I \subset [d]} \lambda_I(q) f(v_I).
\]
Notice that if $\sigma$ is a face of $Q$, then $\tilde{f}(\sigma) \subset \conv(\{f(v_I): v_I \in \sigma\})$. In particular, $\tilde{f}(v_I) = f(v_I)$.
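This multilinear extension can be sketched concretely (our own illustration; \texttt{sample\_f} below is an arbitrary test function): the coefficients $\lambda_I(q)$ sum to $1$, and the extension agrees with $f$ at the vertices of the cube.

```python
from math import prod
from itertools import chain, combinations

def subsets(d):
    """All subsets I of {0, ..., d-1}, as tuples."""
    return chain.from_iterable(combinations(range(d), r) for r in range(d + 1))

def lam(I, q):
    """lambda_I(q) = prod_{i in I} q_i * prod_{i not in I} (1 - q_i)."""
    return prod(q[i] if i in I else 1 - q[i] for i in range(len(q)))

def extend(f, q):
    """Multilinear extension of f : {0,1}^d -> R^m to the cube [0,1]^d."""
    d = len(q)
    terms = [(lam(set(I), q), f(tuple(1 if i in I else 0 for i in range(d))))
             for I in subsets(d)]
    m = len(terms[0][1])
    return tuple(sum(c * v[j] for c, v in terms) for j in range(m))

def sample_f(v):
    """An arbitrary test function defined on the vertices of the cube."""
    return (float(v[0] + 2 * v[1]), float(v[0] * v[1]))
```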
Now, suppose we are given $d$ well separated convex sets $K_1, \ldots, K_d$ in $\mathds{R}^d$ and measures $\mu_1, \ldots, \mu_d$ so that the support of $\mu_i$ is $K_i$. After rescaling, we may assume that each $\mu_i$ is a probability measure. We may also assume without loss of generality that there is no vertical hyperplane tangent to each of $K_1, \ldots, K_d$.
Each non-vertical hyperplane $H$ can be written as
\[
\{(x_1, \ldots, x_d) : x_d = \alpha_1 x_1 + \ldots + \alpha_{d-1} x_{d-1} + \alpha_d \}
\]
for some constants $\alpha_1, \ldots, \alpha_d$. We assign the vector $r(H)=(\alpha_1, \ldots, \alpha_d)$ to the hyperplane $H$. We say that a point is above $H$ if $x_d \ge \alpha_1 x_1 + \ldots + \alpha_{d-1} x_{d-1} + \alpha_d$ and below $H$ if $x_d \le \alpha_1 x_1 + \ldots + \alpha_{d-1} x_{d-1} + \alpha_d$. Notice that if a point $x$ is below a set of hyperplanes $H_1, \ldots, H_k$, then it is also below the hyperplane $r^{-1}(y)$ for any $y \in \conv (r(H_1), \ldots, r(H_k))$.
We know that for each subset $I \subset [d]$, there is a unique hyperplane $H_I$ that is tangent to each $K_i$ and so that $K_i$ is below $H_I$ if and only if $i \in I$. This defines a function $f:\{0,1\}^d \to \mathds{R}^d$ by simply taking $f(v_I) = r(H_I)$. We extend $f$ to a function $\tilde{f}:Q \to \mathds{R}^d$ as described above. For $q \in Q$, let $H(q)$ be the set of points below $r^{-1}(\tilde{f}(q))$.
We define a final function
\begin{align*}
g: Q & \to Q \\
q & \mapsto (\mu_1 (H(q)), \ldots, \mu_d (H(q)))
\end{align*}
The function $g$ is continuous. Notice, for example, that $g(v_I) = v_I$ for each $I \subset [d]$. Moreover, using the properties of $\tilde{f}$, we have $g(\sigma) \subset \sigma$ for every face $\sigma \subset Q$. Hence the restriction of $g$ to the boundary of $Q$ is homotopic to the identity, so $g$ has degree one and is therefore surjective. In particular, there is a point $q_0 \in Q$ such that $g(q_0) = (\alpha_1, \ldots, \alpha_d)$. Therefore, the half-space $H(q_0)$ is the one we were looking for.
\end{proof}
B\'ar\'any, Hubard, and Jer\'onimo also showed that under simple conditions the half-space $H$ from \cref{thm:BHJ} is unique. It suffices that
\begin{itemize}
\item each measure $\mu_i$ assigns a positive value to each open set in its support $K_i$,
\item the interior of each $K_i$ is connected and not empty,
\item no vertical hyperplane is tangent to $K_1, \ldots, K_d$, and
\item the half-space $H$ contains infinite rays in direction $-e_d$.
\end{itemize}
\begin{proof}[Proof of \cref{thm:parallel-hyperplanes-separated}]
We follow a process similar to the proof of \cref{thm:parallel-hyperplanes}. First we need an additional observation about our construction of $S(H)$. In $\mathds{R}^d$, there is no hyperplane $H$ intersecting each of $K_1, \ldots, K_{d+1}$. Otherwise, we can take a point $p_i \in K_i \cap H$. This gives us $d+1$ points in a $(d-1)$-dimensional space, so by Radon's lemma we can find a partition of them into two subsets $A, B$ whose convex hulls intersect. This implies that $\{K_i : p_i \in A\}$ cannot be separated from $\{K_i : p_i \in B\}$, contradicting the hypothesis.
Therefore, when we are given a vector $v \in \mathds{R}^d$ and we construct $S(H)$ for $H \perp v$, each side of $H$ must have measure zero for some $\mu_i$. This is much stronger than simply having less than half of some measure.
The main idea will be to lift each measure to a surface $S(H)$ for an appropriate $H$ and use \cref{thm:BHJ}. We show that by choosing $H$ carefully, we can deduce the existence of the two parallel hyperplanes we seek.
Consider the Stiefel manifold $V_{d-1}(\mathds{R}^d)$. Given $(v_1, \ldots, v_{d-1}) \in V_{d-1}(\mathds{R}^d)$, we lift $\mathds{R}^d$ to $S(H) \subset \mathds{R}^{d+1}$ as in \cref{lem:separating-fixed-direction}, where $H$ is parallel to $\operatorname{span}\{v_1,\ldots, v_{d-1}\}$. This lifts each measure $\mu_i$ in $\mathds{R}^d$ to a measure $\sigma_i$ in $S(H)$. Every hyperplane in $\mathds{R}^d$ separating the supports of two sets of the measures $\mu_{i}$ can be extended vertically in $\mathds{R}^{d+1}$ to separate the corresponding measures $\sigma_i$. A small tilting can ensure that the separating hyperplane is not vertical. Therefore, the measures $\sigma_i$ are well separated.
The measures $\sigma_i$ do not satisfy the requirements of \cref{thm:BHJ}, so an additional step is necessary. For an $\varepsilon>0$, we lift the measures to $S(H)^{\varepsilon}$ as in the proof of \cref{thm:parallel-hyperplanes}, apply \cref{thm:BHJ}, and then take $\varepsilon \to 0$.
This ensures that we get a half-space $H^+$ that has infinite rays in the direction $-e_{d+1}$ and such that $\sigma_i(H^+) \ge \alpha_i \cdot \sigma_i(\mathds{R}^{d+1})$ and $\sigma_i(H^-) \ge (1-\alpha_i) \cdot \sigma_i(\mathds{R}^{d+1})$, where $H^-$ is the complementary closed half-space of $H^+$. Since $\sigma_i(H^+)>0$ for all $i$, we know that the boundary of $H^+$ cannot be one of the two hyperplane components of $S(H)$. Therefore the measure $\sigma_i$ of the boundary of $H^+$ is zero for all $i$, and $\sigma_i(H^+) = \alpha_i \cdot \sigma_i(\mathds{R}^{d+1})$. The same arguments that B\'ar\'any, Hubard, and Jer\'onimo used to show the uniqueness in their theorem can be applied to show that $H^+$ is uniquely defined. The uniqueness also implies that $H^+$ changes continuously as we modify $(v_1, \ldots, v_{d-1})$.
Let $n \in S^d \subset \mathds{R}^{d+1}$ be the normal vector to the boundary of $H^+$ that points in the direction of $H^+$. We can use this to construct a function
\begin{align*}
g: V_{d-1} (\mathds{R}^{d}) & \to \mathds{R}^{d-1} \times \ldots \times \mathds{R}^{1} \\
(v_1, \ldots, v_{d-1}) & \mapsto (x_1, \ldots, x_{d-1}).
\end{align*}
For each $i=1,\ldots, d-1$, the first coordinate of $x_i \in \mathds{R}^{d-i}$ is $\langle n, (v_i,0) \rangle$ and the rest are zero. This function is well defined and continuous. If we flip the sign of $v_i$, the surface $S(H)$ does not change. The vector $n \in S^d$ is not affected by this change, so only the sign of $x_i$ changes. Therefore, the function $g$ is $(\mathds{Z}_2)^{d-1}$-equivariant. By \cref{thm:topo1}, the function $g$ must have a zero. This implies that the projection of $H^+ \cap S(H)$ onto $\mathds{R}^d$ is the region between two hyperplanes parallel to $H$.
\end{proof}
The construction of the function $g$ only uses $d-1$ out of the $d(d-1)/2$ coordinates that \cref{thm:topo1} makes available. It would be interesting to know if much stronger conditions can be imposed on $H$.
We also have consequences similar to \cref{cor:Bagel}. We say that a family of sets $K_1, \ldots, K_{d+2}$ in $\mathds{R}^d$ is \textit{well separated by spheres} if for any way to split them into two families $I, J$, there is a sphere that separates $I$ and $J$, i.e., it encloses the union of the sets in one family and excludes the union of the sets in the other.
\begin{corollary}
Let $d$ be a positive integer and $\mu_1, \ldots, \mu_{d+2}$ be measures in $\mathds{R}^d$ absolutely continuous with respect to the Lebesgue measure. Suppose that the supports $K_1, \ldots, K_{d+2}$ of $\mu_1, \ldots, \mu_{d+2}$ are well separated by spheres.
Let $\alpha_1, \ldots, \alpha_{d+2}$ be real numbers in $(0,1)$. Then, there exist two concentric spheres $S_1, S_2$ in $\mathds{R}^d$ such that the region $A$ between them satisfies
\[
\mu_i(A) = \alpha_i \cdot \mu_i(\mathds{R}^d) \qquad \mbox{for all }i=1,\ldots, d+2.
\]
\end{corollary}
\begin{proof}
We lift $\mathds{R}^d$ to the paraboloid
\[
\mathcal{P} = \left\{\left(x_1,\ldots, x_d, \sum_{i=1}^d x_i^2\right)\in \mathds{R}^{d+1} : (x_1, \ldots, x_d) \in \mathds{R}^d\right\}.
\]
A sphere in $\mathds{R}^d$ separating two families $I, J$ of measure supports translates to a hyperplane in $\mathds{R}^{d+1}$ separating the lifts of those supports. Even though the lifted measures do not satisfy the conditions of \cref{thm:parallel-hyperplanes-separated} exactly, a standard approximation argument fixes this problem. We apply \cref{thm:parallel-hyperplanes-separated} to the family of measures induced on $\mathcal{P}$ and we are done.
\end{proof}
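The paraboloid lift used in this proof turns sphere membership into a linear condition on the lifted coordinates, since $\|x-c\|^2 \le r^2$ rewrites as $\|x\|^2 - 2\langle c, x\rangle + \|c\|^2 - r^2 \le 0$, which is affine in $(x, \|x\|^2)$. A pure-Python sanity check of this equivalence (illustrative; the helper names are ours):

```python
import random

def lift(x):
    """Lift a point of R^d to the paraboloid P in R^{d+1}."""
    return x + [sum(xi * xi for xi in x)]

def inside_sphere(x, c, r):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) <= r * r

def below_lifted_hyperplane(y, c, r):
    # ||x - c||^2 <= r^2  <=>  ||x||^2 - 2<c, x> + ||c||^2 - r^2 <= 0,
    # which is linear in the lifted coordinates y = (x, ||x||^2)
    d = len(c)
    return (y[d] - 2 * sum(ci * yi for ci, yi in zip(c, y[:d]))
            + sum(ci * ci for ci in c) - r * r) <= 0

random.seed(1)
c, r = [0.3, -0.2, 0.5], 0.9
for _ in range(1000):
    x = [random.uniform(-2, 2) for _ in range(3)]
    assert inside_sphere(x, c, r) == below_lifted_hyperplane(lift(x), c, r)
```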
\section{Equipartitions with polytopes and polyhedral surfaces of bounded complexity}\label{sec:polyhedral}
In previous sections, the number of measures to be partitioned was constrained by the dimension of the ambient space, while the boundaries of the partition were relatively simple. In this section we consider mass partitions of a family of $n$ measures in $\mathds{R}^d$, where $n$ can be much larger than $d$. We do so by increasing the complexity of the boundary of the partition. We focus on partitions by polyhedral surfaces.
\begin{definition}\label{def:nicely-separated}
Let $\mathcal{F}=\{\mu_1,\ldots, \mu_n\}$ be a family of finite absolutely continuous measures in $\mathds{R}^d$ with support $K_i$ for each $1\leq i\leq n$. The supports are called \textit{nicely separated} if for each $1\leq i\leq n$, there exists a hyperplane $H_i$ such that $K_i\cap H_i^+=\emptyset$ and $K_j\cap H_i^-=\emptyset$ for all $j\neq i$.
\end{definition}
The maximum number of well separated measures is $d+1$, due to Radon's theorem. For nicely separated measures we only require that each measure can be separated from the union of the other $n-1$, not that any two subsets can be separated. An example of nicely separated measures is given by $n$ measures, each concentrated near a distinct vertex of a convex polytope.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{nicely-separated.pdf}
\caption{(a) An example of four nicely separated measures in $\mathds{R}^2$. (b) An example of five nicely separated and concentrated measures in $\mathds{R}^2$. Notice that if we take $q_1$ instead of $p_1$ to form the convex hull, the resulting polygon contains all of $K_1$.}
\label{fig:nicely-separated}
\end{figure}
We define a polyhedron in $\mathds{R}^d$ to be a finite intersection of closed half-spaces. A facet of a polyhedron is a $(d-1)$-dimensional face, and a vertex of a polyhedron is a zero dimensional face.
\begin{theorem}\label{thm:n-faces}
Let $\mathcal{F}=\{\mu_1,\ldots, \mu_n\}$ be a family of finite absolutely continuous measures in $\mathds{R}^d$ with nicely separated supports $K_i$ for all $1\leq i\leq n$, and let $\alpha_1,\ldots,\alpha_{n}$ be real numbers in $(0,1)$. Then, there exists a polyhedron $P$ with at most $n$ facets such that $\mu_i(P)=\alpha_i\cdot \mu_i(\mathds{R}^d)$ for every $1\leq i\leq n$.
\end{theorem}
\begin{proof}
Because the supports are nicely separated, for each $1\leq i \leq n$, we can fix a hyperplane $H_i$ with $K_i\cap H_i^+ = \emptyset$ and $K_j\cap H_i^-=\emptyset$ for all $j\neq i$. Notice that a polyhedron $P = \bigcap_{ i=1}^n H_i^+$ has the property $\mu_i(P) = 0$ for every $1\leq i \leq n$.
Now, consider $\mu_1$. Let $v$ be the normal vector to the hyperplane $H_1$ pointing in the direction of $H_1^-$. We can move $H_1$ in the direction of $v$ until we capture the desired portion of the measure $\mu_1$; that is, we can fix $H_1'\parallel H_1$ with $\mu_1(H_1'^+) = \alpha_1 \cdot \mu_1(\mathds{R}^d)$. By letting $P' = \left(\bigcap_{i=2}^n H_i^+\right) \cap H_1'^+ $, we have $\mu_1(P') = \alpha_1 \cdot \mu_1(\mathds{R}^d)$ because $\mu_1\big(\bigcap_{i=2}^{n}H_i^+\big) = \mu_1(\mathds{R}^d)$.
Moreover, because $K_j\cap H_1^-=\emptyset$ for each $j\neq 1$, moving $H_1$ in the direction of $H_1^-$ does not interfere with the rest of the measures $\mu_2,\ldots,\mu_n$. We can repeat the same process for $\mu_2,\ldots,\mu_n$ to find a convex polyhedron with at most $n$ facets with the desired property.
\end{proof}
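The hyperplane-sliding step above can be sketched numerically: for an empirical (sampled) measure, the slid offset is simply an upper quantile of the projections onto the normal, and the separation hypothesis guarantees that the other measures are unaffected. A pure-Python illustration with three measures concentrated near the vertices of a triangle (the data, normals, and function names are our own, assumed to satisfy the separation set-up of the theorem):

```python
import random

def slide_to_fraction(samples, normal, alpha):
    """Offset t such that {x : <normal, x> >= t} captures an alpha-fraction
    of the empirical measure given by the samples."""
    vals = sorted((sum(n * xi for n, xi in zip(normal, x)) for x in samples),
                  reverse=True)
    k = max(1, round(alpha * len(vals)))
    return vals[k - 1]

random.seed(2)
# three measures concentrated near the vertices of a triangle; each normal
# points from K_i toward the other two supports, so H_i^+ contains them
centers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
normals = [(1.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
measures = [[(cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5))
             for _ in range(2000)] for cx, cy in centers]
alphas = [0.3, 0.5, 0.7]

# slide each hyperplane independently; the other supports stay strictly inside
cuts = [slide_to_fraction(mu, nv, a) for mu, nv, a in zip(measures, normals, alphas)]

def in_P(x):
    """Membership in the polyhedron cut out by the slid half-spaces."""
    return all(sum(n * xi for n, xi in zip(nv, x)) >= t
               for nv, t in zip(normals, cuts))

# each captured fraction matches the prescribed alpha_i
fracs = [sum(in_P(x) for x in mu) / len(mu) for mu in measures]
```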
While \cref{thm:n-faces} yields a mass partition with a polyhedron of at most $n$ facets, we can also quantify the complexity of a compact polyhedron by its number of vertices. \cref{thm:n-vertices} establishes a mass partition with a polytope of $n$ vertices, but this time for $n$ measures satisfying a slightly stronger separation condition. We will use an idea similar to the proof of \cref{thm:BHJ}.
Let $\mu_1, \ldots, \mu_n$ be a family of nicely separated measures in $\mathds{R}^d$. Let $H_i$ be the hyperplane separating $K_i$ from the rest of the supports, as in \cref{def:nicely-separated}. For $n \ge d+1$, by Helly's theorem we know that $P=\bigcap_{i=1}^n H^+_i \neq \emptyset$. We say that the measures are \textit{concentrated} if there exist a point $p \in P$ and points $p_i, q_i$ for $i = 1,\ldots, n$ such that the following holds.
\begin{itemize}
\item For each $i=1,\ldots,n$, $p_i \in H_i \cap P$. We denote $K_0=\conv\{p_1,\ldots, p_n\}$.
\item We have $p \in K_0$.
\item For each $i=1,\ldots, n$, $q_i$ lies on the ray $pp_i$ and in $\bigcap_{i' \neq i}H^+_{i'}$.
\item For each $i=1,\ldots, n$, we have $K_i \subset \conv (\{q_i\}\cup K_0)$.
\end{itemize}
An example is illustrated in \cref{fig:nicely-separated}(b).
\begin{theorem}\label{thm:n-vertices}
Let $n,d$ be positive integers. Let $\mathcal{F}=\{\mu_1,\ldots, \mu_n\}$ be a family of nicely separated and concentrated measures in $\mathds{R}^d$, each absolutely continuous. Let $ \alpha_1,\ldots,\alpha_{n}$ be real numbers in $(0,1)$. Then, there exists a polytope $K$ with $n$ vertices such that $\mu_i(K)=\alpha_i\cdot \mu_i(\mathds{R}^d)$ for every $1\leq i\leq n$.
\end{theorem}
Note that the intuitive idea we used to prove \cref{thm:n-faces} would suggest sliding each $p_i$ towards $q_i$ until we capture the desired measure. The issue with this is that the values of the other measures no longer stay fixed.
\begin{proof}
Consider the hypercube $Q=[0,1]^n$. For $x=(x_1,\ldots, x_n) \in Q$, and $i=1,\ldots, n$, let $y_i = (1-x_i)p_i+x_i q_i$. We define
\[
K(x) = \conv \{y_1,\ldots, y_n\}.
\]
This convex set allows us to construct a function
\begin{align*}
f: Q & \to Q \\
x & \mapsto \left( \frac{\mu_1(K(x))}{\mu_1(\mathds{R}^d)}, \ldots, \frac{\mu_n(K(x))}{\mu_n(\mathds{R}^d)}\right).
\end{align*}
The function is continuous. From the conditions on the measures, we can see that for every vertex $v$ of $Q$, we have $f(v) = v$. In fact, a stronger condition holds: for every face $\sigma \subset Q$, we have $f(\sigma) \subset \sigma$. This is because if a coordinate $x_i$ of $x$ equals zero, then $K(x) \subset H_i^+$, so $\mu_i(K(x)) = 0$. If $x_i=1$, then $K(x) \supset \conv(\{q_i\}\cup K_0)$, so $\mu_i(K(x)) = \mu_i (\mathds{R}^d)$. Therefore $f$ has degree one on the boundary and must be surjective. There is a point $x \in Q$ such that $f(x) = (\alpha_1,\ldots,\alpha_n)$, which implies that $K(x)$ is the polytope we were looking for.
\end{proof}
\section{Remarks and open problems}\label{sec:remarks}
To prove \cref{thm:same-fraction}, we need to strengthen \cref{lem:separating-fixed-direction}.
\begin{lemma}\label{lem:separating-fixed-direction-strong}
Let $m, n$ be positive integers, $\mu_1, \ldots, \mu_{n}$ be $n$ finite absolutely continuous measures in $\mathds{R}^d$, and $v$ be a unit vector in $\mathds{R}^d$. Then either there exist $m-1$ hyperplanes orthogonal to $v$ that divide $\mathds{R}^d$ into $m$ regions $R_1,\ldots, R_m$ of equal measure for each $\mu_i$ simultaneously, or there exist $m-1$ hyperplanes orthogonal to $v$ that divide $\mathds{R}^d$ into $m$ regions $R_1, \ldots, R_m$ such that for every $j=1,\ldots, m$ there exists an $i$ such that
\begin{align*}
\mu_i (R_j) & < \frac{1}{m}\mu_i(\mathds{R}^d).
\end{align*}
\end{lemma}
\begin{proof}
Given parallel hyperplanes $H_1, \ldots, H_{m-1}$ in this order, we denote by $R_1, \ldots, R_m$ the regions into which they divide $\mathds{R}^d$, so that $R_j$ is bounded by $H_{j-1}$ and $H_j$. The unbounded regions $R_1$ and $R_m$ are bounded only by $H_1$ and $H_{m-1}$, respectively.
We can find $m-1$ hyperplanes such that $\mu_1(R_j) = (1/m)\mu_1(\mathds{R}^d)$ for every $j$. If these regions also form an equipartition for every other $\mu_i$, we are done. Otherwise, there are $i$ and $j$ such that $\mu_i(R_j) < (1/m)\mu_i(\mathds{R}^d)$. We can widen $R_j$ by moving $H_{j-1}$ and $H_j$ slightly apart so that we still have $\mu_i(R_j) < (1/m)\mu_i(\mathds{R}^d)$.
Then, $\mu_1(R_{j-1}) < (1/m) \mu_1(\mathds{R}^d)$ and $\mu_1(R_{j+1}) < (1/m) \mu_1(\mathds{R}^d)$. We can translate $H_{j-2}$ and $H_{j+1}$ away from $H_{j-1}$ and $H_j$, respectively, so that these inequalities are preserved. This makes $\mu_1(R_{j-2})$ and $\mu_1(R_{j+2})$ strictly smaller. We continue this way until we are done.
\end{proof}
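For a single measure, the $m-1$ hyperplanes orthogonal to $v$ in the first alternative are determined by empirical quantiles of the projections onto $v$. A pure-Python sketch on sampled data (illustrative only; the sample and function names are ours):

```python
import random

def equipartition_cuts(samples, v, m):
    """Offsets lambda_1 < ... < lambda_{m-1} such that the hyperplanes
    {x : <v, x> = lambda_j} split the empirical measure into m equal parts."""
    vals = sorted(sum(vi * xi for vi, xi in zip(v, x)) for x in samples)
    n = len(vals)
    return [vals[(j * n) // m] for j in range(1, m)]

random.seed(3)
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6000)]
v, m = (0.6, 0.8), 4
cuts = equipartition_cuts(pts, v, m)

proj = [0.6 * x + 0.8 * y for x, y in pts]
# count the sample points falling in each of the m slabs
counts = [sum(1 for t in proj
              if (j == 0 or t >= cuts[j - 1]) and (j == m - 1 or t < cuts[j]))
          for j in range(m)]
# each count equals 6000 / m = 1500: an equipartition of the empirical measure
```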
Now, given finite absolutely continuous measures $\mu_1, \ldots, \mu_{d+1}$ in $\mathds{R}^d$, we construct a surface in $\mathds{R}^{d+1}$. We take $v=e_{d}$ and find the $m-1$ hyperplanes $H_1, \ldots, H_{m-1}$ such that
\[
H_j = \{(x_1,\ldots, x_d) \in \mathds{R}^d: x_d = \lambda_j\}.
\]
for some $\lambda_1 < \ldots < \lambda_{m-1}$. We define $\lambda_0 =-\infty$ and $\lambda_m = \infty$. Let $h:\mathds{R} \to \mathds{R}$ be a convex function that is linear between $\lambda_j$ and $\lambda_{j+1}$ for each $j=0,\ldots, {m-1}$, but not between $\lambda_j$ and $\lambda_{j+2}$ for each $j=0,\ldots, m-2$.
Let $V$ be the surface in $\mathds{R}^{d+1}$ defined by the equation $x_{d+1}=h(x_d)$. The set of points on or above $V$ is the intersection of $m$ closed half-spaces. To prove \cref{thm:same-fraction} we repeat the proof of Akopyan and Karasev but we lift $\mathds{R}^d$ to $V$ instead of a paraboloid.
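One concrete choice of such an $h$ (our own, for illustration) is $h(t)=\sum_j |t-\lambda_j|$: it is convex, linear exactly between consecutive breakpoints, and its epigraph is the intersection of $m$ half-planes, one supporting line per piece. A pure-Python check of these properties:

```python
import random

def h(t, lambdas):
    """A convex piecewise-linear function with kinks exactly at the lambdas:
    linear on each interval [lambda_j, lambda_{j+1}] but on no larger one."""
    return sum(abs(t - l) for l in lambdas)

lambdas = [-1.0, 0.5, 2.0]            # m - 1 = 3 breakpoints => m = 4 linear pieces
pieces = [(-5, -1), (-1, 0.5), (0.5, 2), (2, 5)]

def slope(a, b):
    return (h(b, lambdas) - h(a, lambdas)) / (b - a)

# strictly increasing slopes on consecutive pieces certify convexity and the kinks
slopes = [slope(a, b) for a, b in pieces]
assert slopes == sorted(slopes) and len(set(slopes)) == len(slopes)

# the region on or above the graph is the intersection of m = 4 half-planes,
# given by one supporting line per piece (anchored at a point of that piece)
anchors = [-1.0, -1.0, 0.5, 2.0]
def supporting_line(k, t):
    return h(anchors[k], lambdas) + slopes[k] * (t - anchors[k])

random.seed(4)
for _ in range(200):
    t = random.uniform(-5, 5)
    assert abs(h(t, lambdas) - max(supporting_line(k, t) for k in range(4))) < 1e-9
```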
\begin{proof}[Proof of \cref{thm:same-fraction}]
By a subdivision argument, it suffices to prove the result when $n=p$ is a prime number. We apply \cref{lem:separating-fixed-direction-strong} with $m=p$. If there are $p-1$ parallel hyperplanes that form an equipartition of the measures, we are done. Otherwise, we lift $\mathds{R}^d$ to $\mathds{R}^{d+1}$ via the surface $V$ defined above. Let $\sigma_1, \ldots, \sigma_{d+1}$ be the measures induced by $\mu_1,\ldots, \mu_{d+1}$ on $V$. It is known that we can split $\mathds{R}^{d+1}$ into $p$ convex sets $C_1, \ldots, C_p$ that form an equipartition of $\sigma_1, \ldots, \sigma_{d+1}$. Since each of the regions $R_1, \ldots, R_p$ we constructed in $\mathds{R}^{d}$ has less than a $(1/p)$-fraction of some $\mu_i$ and $V$ is the boundary of a convex set, none of the boundaries between the sets $C_{j}$ can coincide with the hyperplanes defining $V$.
Moreover, the sets $C_1, \ldots, C_p$ are induced by a generalized Voronoi diagram \cites{Karasev:2014gi, Blagojevic:2014ey}. In other words, there are points (called sites) $s_1, \ldots, s_p$ in $\mathds{R}^{d+1}$ and real numbers $\beta_1, \ldots, \beta_p$ such that the $p$ convex regions
\[
C_j = \{x \in \mathds{R}^{d+1}: ||x-s_j||^2-\beta_j \le ||x - s_{j'}||^2-\beta_{j'} \mbox{ for } j' =1,\ldots, p\}
\]
form an equipartition of $\sigma_1, \ldots, \sigma_{d+1}$. Since the set of points above $V$ is convex, if we take the region $C_j$ whose site $s_j$ has minimal $(d+1)$-th coordinate, then projecting $C_j \cap V$ back to $\mathds{R}^d$ yields a convex set. This is the set $K$ we are looking for. The boundary of this $C_j$ is the union of at most $p-1$ hyperplanes (the ones dividing it from the other regions $C_{j'}$). Each of those $p-1$ hyperplanes can intersect each of the $p$ hyperplanes defining $V$, forming at most $p(p-1)$ linear components of the boundary of $C_j \cap V$. This gives us the bound on the number of half-spaces whose intersection is $K$.
\end{proof}
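The cells of such a generalized Voronoi (power) diagram are convex because each pairwise comparison $\|x-s_j\|^2-\beta_j \le \|x-s_{j'}\|^2-\beta_{j'}$ is linear in $x$ once the $\|x\|^2$ terms cancel. A quick pure-Python check (sites, weights, and names are our own):

```python
import random

def power_cell_index(x, sites, betas):
    """Index j minimizing ||x - s_j||^2 - beta_j (a generalized Voronoi diagram)."""
    def key(j):
        return sum((xi - si) ** 2 for xi, si in zip(x, sites[j])) - betas[j]
    return min(range(len(sites)), key=key)

def in_cell_halfspaces(x, j, sites, betas):
    """Membership in cell j rewritten as an intersection of linear half-spaces:
    ||x-s_j||^2 - b_j <= ||x-s_k||^2 - b_k
      <=>  2<x, s_k - s_j> <= |s_k|^2 - b_k - |s_j|^2 + b_j."""
    sq = [sum(si * si for si in s) for s in sites]
    return all(
        2 * sum(xi * (sk - sj) for xi, sj, sk in zip(x, sites[j], sites[k]))
        <= sq[k] - betas[k] - sq[j] + betas[j] + 1e-9
        for k in range(len(sites)) if k != j
    )

random.seed(5)
sites = [[random.uniform(0, 1) for _ in range(3)] for _ in range(4)]
betas = [random.uniform(0, 0.2) for _ in range(4)]
for _ in range(500):
    x = [random.uniform(0, 1) for _ in range(3)]
    j = power_cell_index(x, sites, betas)
    assert in_cell_halfspaces(x, j, sites, betas)  # each cell is a convex polyhedron
```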
Earlier proofs of the equipartition result we used do not guarantee that the partition $C_1, \ldots, C_p$ comes directly from a generalized Voronoi diagram \cite{Soberon:2012kp}, which is important in this proof. When $n$ is a prime power, the number of half-spaces we used grows logarithmically with $n$. We wonder if this holds in general.
\begin{question}
Let $d$ be a fixed integer. Determine if for every positive integer $n$ and any $d+1$ finite absolutely continuous measures $\mu_1, \ldots, \mu_{d+1}$ in $\mathds{R}^d$ there exists a convex set $K \subset \mathds{R}^d$ formed by the intersection of $O(\log n)$ half-spaces that contains exactly a $(1/n)$-fraction of each $\mu_i$.
\end{question}
We nicknamed \cref{cor:Bagel} the bagel ham sandwich theorem due to its drawing in $\mathds{R}^2$. However, since the set used is the region between two concentric spheres, it certainly does not look like a bagel in $\mathds{R}^3$. We define a \textit{regular torus} in $\mathds{R}^3$ to be any set of the form $\{x \in \mathds{R}^3: \operatorname{dist}(x,S)\le \alpha\}$ where $S$ is a flat circle in $\mathds{R}^3$ and $\alpha$ is a positive real number.
\begin{question}[Three-dimensional bagels]
Is it true that for any five absolutely continuous finite measures in $\mathds{R}^3$ there exists a regular torus containing exactly half of each measure?
\end{question}
With four measures the result holds, since when $S$ degenerates to a point the regular torus is a sphere.
One of the questions that motivated the work on this manuscript was inspired by a conjecture of Mikio Kano. Kano conjectured that for any $n$ smooth measures in $\mathds{R}^2$ there exists a path, formed only by horizontal and vertical segments and taking at most $n-1$ turns, that simultaneously halves each measure. The conjecture is only known for $n=1,2$ or if the path is allowed to go through infinity \cites{Uno:2009wk, Karasev:2016cn}. We wonder if the following way to mix Kano's conjecture with \cref{thm:circular-sandwich} holds.
\begin{question}[Existence of square sandwiches]\label{question:square}
Is it true that for any three finite absolutely continuous measures in $\mathds{R}^2$ there exists a square that contains exactly half of each measure?
\end{question}
\cref{thm:parallel-hyperplanes} shows that we have a positive answer for rectangles (if the supports of the measures are compact, we can cut the two lines given by \cref{thm:parallel-hyperplanes} by perpendicular segments sufficiently far away; otherwise we have degenerate rectangles). However, it is still possible that the answer to \cref{question:square} is affirmative for squares.
\begin{bibdiv}
\begin{biblist}
\bib{Akopyan:2013jt}{article}{
author={Akopyan, Arseniy},
author={Karasev, Roman~N.},
title={{Cutting the Same Fraction of Several Measures}},
date={2013},
journal={Discrete Comput. Geom.},
volume={49},
number={2},
pages={402\ndash 410},
}
\bib{Blagojevic:2007ij}{article}{
author={Blagojević, Pavle V.~M.},
author={Blagojević, Aleksandra~Dimitrijević},
title={{Using equivariant obstruction theory in combinatorial
geometry}},
date={2007},
journal={Topology Appl.},
volume={154},
number={14},
pages={2635\ndash 2655},
}
\bib{Blagojevic2018}{article}{
author={Blagojevi{\'c}, Pavle V.~M.},
author={Blagojevi{\'c}, Aleksandra~Dimitrijevi{\'c}},
author={Karasev, Roman},
author={Kliem, Jonathan},
title={{More bisections by hyperplane arrangements}},
date={2018},
journal={arXiv preprint arXiv:1809.05364},
volume={math.MG},
}
\bib{Bereg:2005voa}{article}{
author={Bereg, Sergey},
title={{Equipartitions of Measures by 2-Fans}},
date={2005},
journal={Discrete Comput. Geom.},
volume={34},
number={1},
pages={87\ndash 96},
}
\bib{Barany:2008vv}{article}{
author={B{\'a}r{\'a}ny, Imre},
author={Hubard, Alfredo},
author={Jerónimo, Jesús},
title={{Slicing Convex Sets and Measures by a Hyperplane}},
date={2008},
ISSN={0179-5376},
journal={Discrete Comput. Geom.},
volume={39},
number={1-3},
pages={67\ndash 75},
}
\bib{Blagojevic2015}{article}{
author={Blagojevi\'{c}, Pavle V.~M.},
author={L\"{u}ck, Wolfgang},
author={Ziegler, G\"{u}nter~M.},
title={Equivariant topology of configuration spaces},
date={2015},
ISSN={1753-8416},
journal={J. Topol.},
volume={8},
number={2},
pages={414\ndash 456},
url={https://doi.org/10.1112/jtopol/jtv002},
review={\MR{3356767}},
}
\bib{Barany:2001fs}{article}{
author={B{\'a}r{\'a}ny, Imre},
author={Matou\v{s}ek, Ji\v{r}\'i},
title={{Simultaneous partitions of measures by K-fans}},
date={2001},
journal={Discrete Comput. Geom.},
volume={25},
number={3},
pages={317\ndash 334},
}
\bib{Barba:2019to}{article}{
author={Barba, Luis},
author={Pilz, Alexander},
author={Schnider, Patrick},
title={{Sharing a pizza: bisecting masses with two cuts}},
date={2019},
journal={arXiv preprint arXiv:1904.02502},
volume={cs.CG},
}
\bib{Breuer2010}{article}{
author={Breuer, Felix},
title={{Uneven Splitting of Ham Sandwiches}},
date={2010},
ISSN={0179-5376},
journal={Discrete Comput. Geom.},
volume={43},
number={4},
pages={876\ndash 892},
}
\bib{Blagojevic:2014ey}{article}{
author={Blagojević, Pavle V.~M.},
author={Ziegler, G\"unter~M.},
title={{Convex equipartitions via Equivariant Obstruction Theory}},
date={2014},
journal={Israel J. Math.},
volume={200},
number={1},
pages={49\ndash 77},
}
\bib{Chan2020}{article}{
author={Chan, Yu~Hin},
author={Chen, Shujian},
author={Frick, Florian},
author={Hull, J.~Tristan},
title={{Borsuk-Ulam theorems for products of spheres and Stiefel
manifolds revisited}},
date={2020},
journal={Topol. Methods Nonlinear Anal.},
volume={55},
number={2},
pages={553\ndash 564},
}
\bib{Fadell:1988tm}{article}{
author={Fadell, Edward},
author={Husseini, Sufian},
title={{An ideal-valued cohomological index theory with applications to
Borsuk—Ulam and Bourgin—Yang theorems}},
date={1988},
journal={Ergodic Theory Dynam. Systems},
volume={8},
pages={73\ndash 85},
}
\bib{Hubard2020}{article}{
author={Hubard, Alfredo},
author={Karasev, Roman},
title={Bisecting measures with hyperplane arrangements},
date={2020},
journal={Math. Proc. Cambridge Philos. Soc.},
volume={169},
number={3},
pages={639\ndash 647},
}
\bib{Karasev:2014gi}{article}{
author={Karasev, Roman~N.},
author={Hubard, Alfredo},
author={Aronov, Boris},
title={{Convex equipartitions: the spicy chicken theorem}},
date={2014},
journal={Geom. Dedicata},
volume={170},
number={1},
pages={263\ndash 279},
}
\bib{Klee1997}{article}{
author={Klee, Victor},
author={Lewis, Ted},
author={Von~Hohenbalken, Balder},
title={Appollonius revisited: supporting spheres for sundered systems},
date={1997},
ISSN={0179-5376},
journal={Discrete Comput. Geom.},
volume={18},
number={4},
pages={385\ndash 395},
url={https://doi.org/10.1007/PL00009324},
}
\bib{Karasev:2016cn}{article}{
author={Karasev, Roman~N.},
author={Roldán-Pensado, Edgardo},
author={Soberón, Pablo},
title={{Measure partitions using hyperplanes with fixed directions}},
date={2016},
journal={Israel J. Math.},
volume={212},
number={2},
pages={705\ndash 728},
}
\bib{matousek2003using}{book}{
author={Matou\v{s}ek, Ji\v{r}\'{\i}},
title={Using the {B}orsuk-{U}lam theorem: Lectures on topological
methods in combinatorics and geometry},
series={Universitext},
publisher={Springer-Verlag, Berlin},
date={2003},
ISBN={3-540-00362-2},
}
\bib{Manta2021}{article}{
author={Manta, Michael~N.},
author={Sober{\'o}n, Pablo},
title={{Generalizations of the Yao--Yao partition theorem and the
central transversal theorem}},
date={2021},
journal={arXiv preprint arXiv:2107.06233},
volume={math.CO},
}
\bib{Mus12}{article}{
author={Musin, Oleg},
title={{Borsuk--Ulam type theorems for manifolds}},
date={2012},
journal={Proc. Amer. Math. Soc.},
volume={140},
number={7},
pages={2551\ndash 2560},
}
\bib{RoldanPensado2021}{article}{
author={R{old{\'a}n-Pensado}, Edgardo},
author={Sober{\'o}n, Pablo},
title={A survey of mass partitions},
date={2021},
journal={Bull. Amer. Math. Soc.},
note={Electronically published on February 24, 2021, DOI:
https://doi.org/10.1090/bull/1725 (to appear in print).},
}
\bib{Schnider:2019ua}{article}{
author={Schnider, Patrick},
title={{Equipartitions with Wedges and Cones}},
date={2019},
journal={arXiv preprint arXiv:1910.13352},
volume={cs.CG},
}
\bib{Soberon:2012kp}{article}{
author={Sober{\'o}n, Pablo},
title={{Balanced Convex Partitions of Measures in $R^d$}},
date={2012},
journal={Mathematika},
volume={58},
number={01},
pages={71\ndash 76},
}
\bib{Stone:1942hu}{article}{
author={Stone, Arthur~H.},
author={Tukey, John~W.},
title={{Generalized “sandwich” theorems}},
date={1942},
ISSN={0012-7094},
journal={Duke Math. J.},
volume={9},
number={2},
pages={356\ndash 359},
}
\bib{Steinhaus1938}{article}{
author={Steinhaus, Hugo},
title={A note on the ham sandwich theorem},
date={1938},
journal={Mathesis Polska},
volume={9},
pages={26\ndash 28},
}
\bib{Steinhaus1945}{article}{
author={Steinhaus, Hugo},
title={Sur la division des ensembles de l'espace par les plans et des
ensembles plans par les cercles},
date={1945},
journal={Fund. Math.},
volume={33},
number={1},
pages={245\ndash 263},
}
\bib{Uno:2009wk}{article}{
author={Uno, Miyuki},
author={Kawano, Tomoharu},
author={Kano, Mikio},
title={{Bisections of two sets of points in the plane lattice}},
date={2009},
journal={IEICE Transactions on Fundamentals of Electronics, Communications
and Computer Sciences},
volume={92},
number={2},
pages={502\ndash 507},
}
\bib{Zivaljevic2017}{incollection}{
author={{\v{Z}}ivaljevi{\'c}, Rade~T.},
title={Topological methods in discrete geometry},
date={2017},
booktitle={{Handbook of Discrete and Computational Geometry}},
edition={Third},
publisher={CRC Press},
pages={551\ndash 580},
}
\end{biblist}
\end{bibdiv}
\end{document}
% arXiv:1812.10850
% Title: Decomposition of Gaussian processes, and factorization of positive definite kernels
\begin{abstract}
We establish a duality for two factorization questions, one for general positive definite (p.d.) kernels $K$, and the other for Gaussian processes, say $V$. The latter notion, for Gaussian processes, is stated via Ito-integration. Our approach to factorization for p.d. kernels is intuitively motivated by matrix factorizations, but in infinite dimensions, subtle measure theoretic issues must be addressed. Consider a given p.d. kernel $K$, presented as a covariance kernel for a Gaussian process $V$. We then give an explicit duality for these two seemingly different notions of factorization, for the p.d. kernel $K$ vs.\ for the Gaussian process $V$. Our result is in the form of an explicit correspondence. It states that the analytic data which determine the variety of factorizations for $K$ is the exact same as that which yield factorizations for $V$. Examples and applications are included: point-processes, sampling schemes, constructive discretization, graph-Laplacians, and boundary-value problems.
\end{abstract}
\section{Introduction}
We give an integrated approach to positive definite (p.d.) kernels
and Gaussian processes, with an emphasis on factorizations, and their
applications. Positive definite kernels serve as powerful tools in
such diverse areas as Fourier analysis, probability theory, stochastic
processes, boundary theory, potential theory, approximation theory,
interpolation, signal/image analysis, operator theory, spectral theory,
mathematical physics, representation theory, complex function-theory,
moment problems, integral equations, numerical analysis, boundary-value
problems for partial differential equations, machine learning, geometric
embedding problems, and information theory. While there is no single
book which covers all these applications, the reference \cite{MR3526117}
goes some of the way. As for the use of RKHS analysis in machine learning,
we refer to \cite{MR2327597} and \cite{MR3236858}.
Here, we give a new and explicit duality for positive definite functions
(kernels) on the one hand, and Gaussian processes on the other. A
covariance kernel for a general stochastic process is positive definite.
In general, the stochastic process in question is not determined by
its covariance kernel. But in the special case when the process is
Gaussian, it is. In fact (\thmref{C1}), every p.d. kernel $K$ is
indeed the covariance kernel of a Gaussian process. The construction
is natural; starting with the p.d. kernel $K$, there is a canonical
inductive limit construction leading to the Gaussian process for this
problem, following a realization of Gaussian processes dating back
to Kolmogorov. The interplay between analytic properties of p.d. kernels
and their associated Gaussian processes is the focus of our present
study.
We formulate two different factorization questions, one for general
p.d. kernels $K$, and the other for Gaussian processes, say $V$.
The latter notion, for Gaussian processes, is a subordination approach.
Our approach to factorization for p.d. kernels is directly motivated
by matrix factorizations, but in infinite dimensions, there are subtle
measure theoretic issues involved. If the given p.d. kernel $K$ is
already presented as a covariance kernel for a Gaussian process $V$,
we then give an explicit duality for these two seemingly different
notions of factorization. Our main result, \thmref{E1}, states that
the analytic data which determine the variety of factorizations for
$K$ is the exact same as that which yield factorizations for $V$.
\section{\label{sec:pdk}Positive definite kernels}
The notion of a positive definite (p.d.) kernel has come to serve
as a versatile tool in a host of problems in pure and applied mathematics.
The abstract notion of a p.d. kernel is in fact a generalization of
that of a positive definite function, or a positive-definite matrix.
Indeed, the matrix point of view lends itself naturally to the particular
factorization question which we shall address in \secref{fac} below.
The general idea of p.d. kernels first arose in various special cases
in the first half of the 20th century: it occurs in the work of J. Mercer
on solving integral operator equations; in the work
of G. Szeg\H{o} and S. Bergman in the study of harmonic analysis
and the theory of complex domains; and in the work of N. Aronszajn
on boundary value problems for PDEs. It was Aronszajn who introduced
the natural notion of reproducing kernel Hilbert space (RKHS) which
will play a central role here; see especially (\ref{eq:A4}) below.
References covering the areas mentioned above include: \cite{MR3687240,MR0051437,MR562914,IM65,jorgensen2018harmonic,MR0277027,MR3882025},
and \cite{MR3721329}.
Right up to the present, p.d. kernels have arisen as powerful tools
in many and diverse areas of mathematics; a partial list includes
the areas mentioned above. For an important newer area of application
of RKHS theory, see \cite{MR1120274,MR1200633,MR1473250,MR1821907,MR1873434,MR1986785,MR2223568,MR2373103}.
\subsection*{Positive definite kernels and their reproducing kernel Hilbert spaces}
Let $X$ be a set and let $K$ be a complex valued function on $X\times X$.
We say that $K$ is \emph{positive definite} (p.d.) iff, by definition, for
all finite subsets $F\subset X$ and all complex numbers $\left(\xi_{x}\right)_{x\in F}$,
we have:
\begin{equation}
\sum_{x\in F}\sum_{y\in F}\overline{\xi}_{x}\xi_{y}K\left(x,y\right)\geq0.\label{eq:A1}
\end{equation}
In other words, the $\left|F\right|\times\left|F\right|$ matrix $\left(K\left(x,y\right)\right)_{F\times F}$
is positive semidefinite in the usual sense of linear algebra. We refer
to the rich literature regarding theory and applications of p.d. functions
\cite{MR2966130,MR3507188,HKL14,RAKK05,MR3290453,MR3046303,MR2982692}.
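The defining inequality (\ref{eq:A1}) is easy to test numerically. The following minimal sketch (the Gaussian kernel $e^{-(x-y)^{2}}$ and the random trial scheme are our own illustrative choices, not taken from the text) checks that quadratic forms built from a standard p.d. kernel, with random finite subsets $F$ and random complex coefficients, are nonnegative:

```python
# Numerical check of positive definiteness, eq. (A1), for a sample kernel.
# The Gaussian kernel K(x,y) = exp(-(x-y)^2) is a standard p.d. example,
# chosen here for illustration only.
import math
import random

def K(x, y):
    return math.exp(-(x - y) ** 2)

random.seed(1)
for trial in range(100):
    F = [random.uniform(-5, 5) for _ in range(8)]          # finite subset of X = R
    xi = [complex(random.gauss(0, 1), random.gauss(0, 1))  # complex coefficients
          for _ in F]
    q = sum(xi[i].conjugate() * xi[j] * K(F[i], F[j])
            for i in range(len(F)) for j in range(len(F)))
    # the quadratic form is real and nonnegative, up to float roundoff
    assert q.real >= -1e-9 and abs(q.imag) < 1e-9
print("all quadratic forms nonnegative")
```

Any kernel failing such a randomized test cannot be positive definite; passing it is, of course, only evidence, not proof.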
We shall also need the Aronszajn \cite{MR0051437} reproducing kernel
Hilbert spaces (R.K.H.S.), denoted $\mathscr{H}\left(K\right)$: It
is the Hilbert completion of all functions
\begin{equation}
\sum_{x\in F}\xi_{x}K\left(\cdot,x\right)\label{eq:A2}
\end{equation}
where $F$ and $\left(\xi_{x}\right)_{x\in F}$ are as above.
If $F$ (finite) is fixed, and $\left(\xi_{x}\right)_{x\in F}$, $\left(\eta_{x}\right)_{x\in F}$
are vectors in $\mathbb{C}^{\left|F\right|}$, we set
\begin{equation}
\left\langle \sum\nolimits _{x\in F}\xi_{x}K\left(\cdot,x\right),\sum\nolimits _{y\in F}\eta_{y}K\left(\cdot,y\right)\right\rangle _{\mathscr{H}\left(K\right)}:=\sum\sum\nolimits _{F\times F}\overline{\xi}_{x}\eta_{y}K\left(x,y\right).\label{eq:A3}
\end{equation}
With the definition of the R.K.H.S. $\mathscr{H}\left(K\right)$,
we get directly that the functions $\left\{ K\left(\cdot,x\right)\right\} _{x\in X}$
are automatically in $\mathscr{H}\left(K\right)$; and that, for all
$h\in\mathscr{H}\left(K\right)$, we have
\begin{equation}
\left\langle K\left(\cdot,x\right),h\right\rangle _{\mathscr{H}\left(K\right)}=h\left(x\right);\label{eq:A4}
\end{equation}
i.e., the \emph{reproducing} property holds.
Further recall (see e.g. \cite{MR3526117}) that, given $K$, then
the R.K.H.S. $\mathscr{H}\left(K\right)$ is determined uniquely,
up to isometric isomorphism in Hilbert space.
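On a finite index set the inner product (\ref{eq:A3}) and the reproducing property (\ref{eq:A4}) reduce to matrix identities, which the following sketch verifies (the four-point set $X$ and the kernel $e^{-\left|x-y\right|}$ are illustrative assumptions of ours):

```python
# Finite-set sanity check of the inner product (A3) and the reproducing
# property (A4).  X and the kernel exp(-|x-y|) are illustrative choices.
import math
import random

X = [0.0, 0.3, 1.1, 2.5]
n = len(X)
K = [[math.exp(-abs(x - y)) for y in X] for x in X]   # Gram matrix of the kernel

random.seed(2)
eta = [random.gauss(0, 1) for _ in range(n)]
# h = sum_y eta_y K(., y), an element of the span (A2)
h = [sum(eta[j] * K[i][j] for j in range(n)) for i in range(n)]

# reproducing property (A4): <K(., x_i), h> = sum_j eta_j K(x_i, x_j) = h(x_i)
for i in range(n):
    assert abs(sum(eta[j] * K[i][j] for j in range(n)) - h[i]) < 1e-12

# consistency of (A3) with (A4): ||h||^2 = eta^T K eta = sum_i eta_i h(x_i)
norm_A3 = sum(eta[i] * eta[j] * K[i][j] for i in range(n) for j in range(n))
norm_A4 = sum(eta[i] * h[i] for i in range(n))
assert abs(norm_A3 - norm_A4) < 1e-12
print("reproducing property and norm identity verified")
```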
\begin{lem}
\label{lem:B1}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be a
p.d. kernel, and let $\mathscr{H}\left(K\right)$ be the corresponding
RKHS (see (\ref{eq:A3})-(\ref{eq:A4})). Let $h$ be a function defined
on $X$; then TFAE:
\begin{enumerate}
\item \label{enu:LemA2-1}$h\in\mathscr{H}\left(K\right)$;
\item \label{enu:LemA2-2}there is a constant $C=C_{h}<\infty$ such that,
for all finite subsets $F\subset X$ and all $\left(\xi_{x}\right)_{x\in F}$,
$\xi_{x}\in\mathbb{C}$, the following \uline{a priori} estimate
holds:
\begin{equation}
\left|\sum\nolimits _{x\in F}\xi_{x}h\left(x\right)\right|^{2}\leq C_{h}\sum\nolimits _{x\in F}\sum\nolimits _{y\in F}\overline{\xi}_{x}\xi_{y}K\left(x,y\right).\label{eq:B5}
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
The implication (\ref{enu:LemA2-1})$\Rightarrow$(\ref{enu:LemA2-2})
is immediate, and in this case, we may take $C_{h}=\left\Vert h\right\Vert _{\mathscr{H}\left(K\right)}^{2}$.
Now for the converse, assume (\ref{enu:LemA2-2}) holds for some finite
constant. On the $\mathscr{H}\left(K\right)$-dense span in (\ref{eq:A2}),
define a linear functional
\begin{equation}
L_{h}\left(\sum\nolimits _{x\in F}\xi_{x}K\left(\cdot,x\right)\right):=\sum\nolimits _{x\in F}\xi_{x}h\left(x\right).\label{eq:B6}
\end{equation}
From the assumption (\ref{eq:B5}) in (\ref{enu:LemA2-2}), we conclude
that $L_{h}$ (in (\ref{eq:B6})) is a well defined bounded linear
functional on $\mathscr{H}\left(K\right)$. Initially, $L_{h}$ is
only defined on the span (\ref{eq:A2}), but by (\ref{eq:B5}), it
is bounded, and so extends uniquely by $\mathscr{H}\left(K\right)$-norm
limits. We may therefore apply Riesz' lemma to the Hilbert space $\mathscr{H}\left(K\right)$,
and conclude that there is a unique $H\in\mathscr{H}\left(K\right)$
such that
\begin{equation}
L_{h}\left(\psi\right)=\left\langle \psi,H\right\rangle _{\mathscr{H}\left(K\right)}\label{eq:B7}
\end{equation}
for all $\psi\in\mathscr{H}\left(K\right)$. Now, setting $\psi\left(\cdot\right):=K\left(\cdot,x\right)$,
for $x\in X$, we conclude from (\ref{eq:B7}) that $h\left(x\right)=H\left(x\right)$;
and so $h\in\mathscr{H}\left(K\right)$, proving (\ref{enu:LemA2-1}).
\end{proof}
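The a priori estimate (\ref{eq:B5}) is an instance of Cauchy-Schwarz in $\mathscr{H}\left(K\right)$, with sharp constant $C_{h}=\left\Vert h\right\Vert _{\mathscr{H}\left(K\right)}^{2}$. The following sketch tests it on a finite model (kernel, points, and real coefficients are illustrative choices of ours):

```python
# Numerical check of the a priori estimate (B5) in Lemma B1 with the sharp
# constant C_h = ||h||^2_{H(K)}.  Kernel and points are illustrative.
import math
import random

X = [0.0, 0.5, 1.0, 2.0, 3.5]
n = len(X)
K = [[math.exp(-abs(x - y)) for y in X] for x in X]

random.seed(3)
eta = [random.gauss(0, 1) for _ in range(n)]
h = [sum(eta[j] * K[i][j] for j in range(n)) for i in range(n)]   # h in H(K)
C_h = sum(eta[i] * eta[j] * K[i][j] for i in range(n) for j in range(n))
assert C_h >= 0

for trial in range(200):
    xi = [random.gauss(0, 1) for _ in range(n)]
    lhs = sum(xi[i] * h[i] for i in range(n)) ** 2
    rhs = C_h * sum(xi[i] * xi[j] * K[i][j] for i in range(n) for j in range(n))
    assert lhs <= rhs + 1e-9          # Cauchy-Schwarz in the K-inner product
print("a priori estimate holds in 200 random trials")
```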
\section{\label{sec:gp}Gaussian processes}
The interest in positive definite (p.d.) functions has at least three
roots: (i) Fourier analysis, and harmonic analysis more generally;
(ii) Optimization and approximation problems, involving for example
spline approximations as envisioned by I. J. Schoenberg; and (iii)
Stochastic processes. See \cite{MR0004644,MR709376}.
Below, we sketch a few details regarding (iii). A \emph{stochastic
process} is an indexed family of random variables based on a fixed
probability space. In some cases, the processes will be indexed by
some group $G$, or by a subset of $G$. For example, $G=\mathbb{R}$,
or $G=\mathbb{Z}$, correspond to processes indexed by real time,
respectively discrete time. A main tool in the analysis of stochastic
processes is an associated \emph{covariance function}.
A process $\{X_{g}\mid g\in G\}$ is called \emph{Gaussian} if all its
finite joint distributions are Gaussian; in particular, each random
variable $X_{g}$ is then Gaussian.
For Gaussian processes, we need only the first two moments. So if we normalize,
setting the mean equal to $0$, then the process is determined by
its covariance function. In general, the covariance function is a
function on $G\times G$, or on a subset, but if the process is \emph{stationary},
the covariance function will in fact be a p.d. function defined on
$G$, or a subset of $G$. For a systematic study of positive definite
functions on groups $G$, on subsets of groups, and the variety of
the extensions to p.d. functions on $G$, see e.g. \cite{MR3559001}.
By a theorem of Kolmogorov \cite{MR735967}, every Hilbert space may
be realized as a (Gaussian) reproducing kernel Hilbert space (RKHS),
see \thmref{C1} below, and also \cite{PaSc75,IM65,NF10}.
Now every positive definite kernel is also the covariance kernel of
a Gaussian process; a fact which is a point of departure in our present
analysis: Given a positive definite kernel, we shall explore its use
in the analysis of the associated Gaussian process; and vice versa.
This point of view is especially fruitful when one is dealing with
problems from stochastic analysis. Even restricting to stochastic
analysis, we have the exciting area of applications to statistical
learning theory \cite{MR2327597,MR3236858}.\\
Let $\left(\Omega,\mathscr{F},\mathbb{P}\right)$ be a \emph{probability
space}, i.e., $\Omega$ is a fixed set (sample space), $\mathscr{F}$
is a specified sigma-algebra (events) of subsets in $\Omega$, and
$\mathbb{P}$ is a probability measure on $\mathscr{F}$.
A Gaussian random variable is a function $V:\Omega\rightarrow\mathbb{R}$
(in the real case), or $V:\Omega\rightarrow\mathbb{C}$, such that
$V$ is measurable with respect to the sigma-algebra $\mathscr{F}$
on $\Omega$, and the corresponding sigma-algebra of Borel subsets
in $\mathbb{R}$ (or in $\mathbb{C}$). Let $\mathbb{E}$ denote the
expectation defined from $\mathbb{P}$, i.e.,
\begin{equation}
\mathbb{E}\left(\cdots\right)=\int_{\Omega}\left(\cdots\right)d\mathbb{P}.\label{eq:A5}
\end{equation}
The requirement on $V$ is that its distribution is Gaussian. If $g$
denotes a Gaussian measure on $\mathbb{R}$ (or on $\mathbb{C}$), the requirement
is that
\begin{equation}
\mathbb{E}\left(f\circ V\right)=\int_{\mathbb{R}\left(\text{or \ensuremath{\mathbb{C}}}\right)}f\,dg;\label{eq:A6}
\end{equation}
or equivalently
\begin{equation}
\mathbb{P}\left(V\in B\right)=\int_{B}dg=g\left(B\right)\label{eq:A7}
\end{equation}
for all Borel sets $B$; see \figref{G1}.
\begin{figure}
\includegraphics[width=0.35\textwidth]{gr}
\caption{\label{fig:G1}A Gaussian random variable and its distribution, see
(\ref{eq:A7}).}
\end{figure}
If $N\in\mathbb{N}$, and $V_{1},\cdots,V_{N}$ are random variables,
the Gaussian requirement is (see \figref{G2}) that the joint distribution
of $\left(V_{1},\cdots,V_{N}\right)$ is an $N$-dimensional Gaussian,
say $g_{N}$, so if $B\subset\mathbb{R}^{N}$ then
\begin{equation}
\mathbb{P}\left(\left(V_{1},\cdots,V_{N}\right)\in B\right)=g_{N}\left(B\right).\label{eq:A8}
\end{equation}
\begin{figure}
\includegraphics[width=0.35\textwidth]{grn}
\caption{\label{fig:G2} A Gaussian system and its joint distribution, see
(\ref{eq:A8}).}
\end{figure}
For our present purpose we may restrict to the case where the mean
(of the respective Gaussians) is assumed zero. In that case, a finite
joint distribution is determined by its covariance matrix. In the
$\mathbb{R}^{N}$ case (the extension to $\mathbb{C}^{N}$ is immediate),
the covariance matrix $\left(G_{N}\left(j_{1},j_{2}\right)\right)_{j_{1},j_{2}=1}^{N}$ is specified as follows:
\begin{equation}
G_{N}\left(j_{1},j_{2}\right)=\int_{\mathbb{R}^{N}}x_{j_{1}}x_{j_{2}}g_{N}\left(x_{1},\cdots,x_{N}\right)dx_{1}\cdots dx_{N}\label{eq:A9}
\end{equation}
where $dx_{1}\cdots dx_{N}=\lambda_{N}$ denotes the standard Lebesgue
measure on $\mathbb{R}^{N}$, and $g_{N}$ is here identified with its density.
The following is known:
\begin{thm}[Kolmogorov \cite{MR0133175}, see also \cite{MR562914,MR1176778}]
\label{thm:C1}A kernel $K:X\times X\rightarrow\mathbb{C}$ is positive
definite if and only if there is a (mean zero) Gaussian process $\left(V_{x}\right)_{x\in X}$
indexed by $X$ such that
\begin{equation}
\mathbb{E}\left(\overline{V}_{x}V_{y}\right)=K\left(x,y\right)\label{eq:A10}
\end{equation}
where $\overline{V}_{x}$ denotes the complex conjugate of $V_{x}$.
Moreover (see Hida \cite{MR0301806,MR1176778}), the process in (\ref{eq:A10})
is uniquely determined by the kernel $K$ in question. If $F\subset X$
is finite, then the covariance kernel for $\left(V_{x}\right)_{x\in F}$
is $K_{F}$ given by
\begin{equation}
K_{F}\left(x,y\right)=G_{F}\left(x,y\right),\label{eq:A11}
\end{equation}
for all $x,y\in F$, see (\ref{eq:A9}) above.
\end{thm}
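On a finite index set, the construction behind \thmref{C1} can be sketched concretely: a Cholesky factorization $K=BB^{T}$ of the Gram matrix turns i.i.d. standard Gaussians $Z$ into a mean-zero Gaussian vector $V=BZ$ with covariance $K$, since $\mathbb{E}\left(V_{x}V_{y}\right)=(BB^{T})_{xy}$. The kernel below is an illustrative choice of ours, and the check of $K=BB^{T}$ is deterministic:

```python
# Sketch of the construction in Theorem C1 on a finite set: Cholesky gives
# K = B B^T, and V = B Z (Z i.i.d. standard Gaussian) has covariance K.
import math

X = [0.0, 0.4, 1.0, 1.7]                  # illustrative finite index set
n = len(X)
K = [[math.exp(-abs(x - y)) for y in X] for x in X]

# plain Cholesky factorization (K is strictly p.d. for distinct points)
B = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = K[i][j] - sum(B[i][k] * B[j][k] for k in range(j))
        B[i][j] = math.sqrt(s) if i == j else s / B[j][j]

# deterministic check: E(V_x V_y) = (B B^T)(x, y) = K(x, y)
for i in range(n):
    for j in range(n):
        recon = sum(B[i][k] * B[j][k] for k in range(n))
        assert abs(recon - K[i][j]) < 1e-10
print("K = B B^T recovered; V = B Z has covariance K")
```

In infinite dimensions, the inductive-limit construction of the theorem plays the role of this finite matrix factorization.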
In the subsequent sections, we shall address a number of properties
of Gaussian processes important for their stochastic calculus. Our
analysis deals with both the general case, and particular examples
from applications. We begin in \secref{sms} with certain Wiener processes
which are indexed by sigma-finite measures. For this class, the corresponding
p.d. kernel has a special form; see (\ref{eq:C1}) in \defref{D1}.
(The case of fractal measures is part of \secref{exa} below.) In
\secref{fac}, we address the general case: We prove our duality result
for factorization, \thmref{E1}. The remaining sections are devoted
to examples and applications.
\section{\label{sec:sms}Sigma-finite measure spaces and Gaussian processes}
We shall consider a $\sigma$-finite measure space $\left(M,\mathscr{F}_{M},\mu\right)$,
where $M$ is a set, $\mathscr{F}_{M}$ a $\sigma$-algebra of subsets
of $M$, and $\mu$ is a positive measure defined on $\mathscr{F}_{M}$.
It is further assumed that there is a countable family $\left(A_{i}\right)_{i\in\mathbb{N}}$
in $\mathscr{F}_{M}$ such that $0<\mu\left(A_{i}\right)<\infty$ and $M=\cup_{i}A_{i}$; and further
that the measure space $\left(M,\mathscr{F}_{M},\mu\right)$ is complete,
so that the Radon-Nikodym theorem holds. We shall also restrict to the
case when $\mu$ is assumed non-atomic. The case when $\mu$ is atomic
is different, and is addressed in \secref{atomic} below.
\begin{defn}
\label{def:D1}Set
\[
\mathscr{F}_{fin}=\left\{ A\in\mathscr{F}_{M}\mid0<\mu\left(A\right)<\infty\right\} .
\]
\end{defn}
Note that the kernel
\begin{equation}
K^{\left(\mu\right)}\left(A,B\right)=\mu\left(A\cap B\right),\;A,B\in\mathscr{F}_{fin}\label{eq:C1}
\end{equation}
is positive definite. The corresponding Gaussian process $(W_{A}^{\left(\mu\right)})_{A\in\mathscr{F}_{fin}}$
is called the Wiener process \cite{MR0301806,MR1176778}. In particular,
we have
\begin{equation}
\mathbb{E}\left(W_{A}^{\left(\mu\right)}W_{B}^{\left(\mu\right)}\right)=\mu\left(A\cap B\right),\label{eq:C2}
\end{equation}
and
\begin{equation}
\lim_{\left(A_{i}\right)}\sum_{i}\left(W_{A_{i}}^{\left(\mu\right)}\right)^{2}=\mu\left(A\right).\label{eq:C3}
\end{equation}
The precise sense of the limit in (\ref{eq:C3}), the quadratic variation, is as follows:
Given $\mu$ as above, and $A\in\mathscr{F}_{fin}$, we take the
limit over the filter of all partitions of $A$ (see (\ref{eq:C4})),
ordered by the standard notion of refinement:
\begin{equation}
A=\cup_{i}A_{i},\;A_{i}\cap A_{j}=\emptyset\;\text{if \ensuremath{i\neq j}},\;\text{and}\;\lim\mu\left(A_{i}\right)=0.\label{eq:C4}
\end{equation}
Details: Let $\left(\Omega,Cyl,\mathbb{P}\right)$, $\mathbb{P}=\mathbb{P}^{\left(\mu\right)}$
be the probability space which realizes $W^{\left(\mu\right)}$ as
a Gaussian process (or generalized Wiener process), i.e., s.t. (\ref{eq:C2})
holds for all pairs in $\mathscr{F}_{fin}$. In particular, we have
that $W_{A}^{\left(\mu\right)}\underset{\left(\text{dist}\right)}{\sim}N\left(0,\mu\left(A\right)\right)$,
i.e., mean zero, Gaussian, and variance = $\mu\left(A\right)$. Then:
\begin{lem}[see e.g., \cite{MR3687240}]
With the assumptions as above, we have
\begin{equation}
\lim_{\left(A_{i}\right)}\mathbb{E}\left(\Big|\mu\left(A\right)\mathbbm{1}-\sum\nolimits _{i}(W_{A_{i}}^{\left(\mu\right)})^{2}\Big|^{2}\right)=0\label{eq:C4-1}
\end{equation}
where (in (\ref{eq:C4-1})) the limit is taken over the filter of
all partitions $\left(A_{i}\right)$ of $A$, and $\mathbbm{1}$ denotes
the constant function ``one'' on $\Omega$.
\end{lem}
As a result, we get the following Ito-integral
\begin{equation}
W^{\left(\mu\right)}\left(f\right):=\int_{M}f\left(s\right)\,dW_{s}^{\left(\mu\right)},\label{eq:C6}
\end{equation}
defined for all $f\in L^{2}\left(M,\mathscr{F}_{M},\mu\right)$, and
\begin{equation}
\mathbb{E}\left(\left|\int\nolimits _{M}f\left(s\right)dW_{s}^{\left(\mu\right)}\right|^{2}\right)=\int_{M}\left|f\left(s\right)\right|^{2}d\mu\left(s\right).\label{eq:C5}
\end{equation}
We note that the following operator,
\begin{equation}
L^{2}\left(M,\mu\right)\ni f\longmapsto W^{\left(\mu\right)}\left(f\right)\in L^{2}\left(\Omega,\mathbb{P}\right)\label{eq:C7}
\end{equation}
is isometric.
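Both the quadratic variation (\ref{eq:C3}) and the Ito-isometry (\ref{eq:C5}) can be illustrated by Monte Carlo for $\mu=$ Lebesgue measure on $M=\left[0,1\right]$. In the sketch below, increments $W_{A_{i}}^{\left(\mu\right)}$ over a uniform partition are simulated as independent $N\left(0,\mu\left(A_{i}\right)\right)$ variables; the partition sizes, the integrand $\sin\left(2\pi s\right)$, and the tolerances are choices made for this illustration:

```python
# Monte Carlo sketch of (C3)-(C5) with mu = Lebesgue measure on [0,1]:
# sum of squared increments -> mu([0,1]) = 1, and E|int f dW|^2 = int f^2 dmu.
import math
import random

random.seed(4)
n = 100_000                                 # partition of [0,1] into n cells
qv = sum(random.gauss(0.0, math.sqrt(1.0 / n)) ** 2 for _ in range(n))
assert abs(qv - 1.0) < 0.05                 # quadratic variation ~ mu([0,1]) = 1

f = lambda s: math.sin(2 * math.pi * s)     # integrand in L^2([0,1])
m, trials = 400, 2000
second_moment = 0.0
for _ in range(trials):
    ito = sum(f((i + 0.5) / m) * random.gauss(0.0, math.sqrt(1.0 / m))
              for i in range(m))            # discretized Ito-integral
    second_moment += ito ** 2 / trials
exact = 0.5                                 # int_0^1 sin^2(2 pi s) ds = 1/2
assert abs(second_moment - exact) < 0.1     # Ito isometry, up to MC error
print("quadratic variation ~ 1; E|W(f)|^2 ~", round(second_moment, 3))
```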
In our subsequent considerations, we shall need the following precise
formula (see \lemref{D3}) for the RKHS associated with the p.d. kernel
\begin{equation}
K^{\left(\mu\right)}\left(A,B\right):=\mu\left(A\cap B\right),\label{eq:C8}
\end{equation}
defined on $\mathscr{F}_{fin}\times\mathscr{F}_{fin}$. We denote
the RKHS by $\mathscr{H}(K^{\left(\mu\right)})$.
\begin{lem}
\label{lem:D3}Let $\mu$ be as above, and let $K^{\left(\mu\right)}$
be the p.d. kernel on $\mathscr{F}_{fin}$ defined in (\ref{eq:C8}).
Then the corresponding RKHS $\mathscr{H}(K^{\left(\mu\right)})$ is
as follows: A function $\Phi$ on $\mathscr{F}_{fin}$ is in $\mathscr{H}(K^{\left(\mu\right)})$
if and only if there is a $\varphi\in L^{2}\left(M,\mathscr{F}_{M},\mu\right)\left(=:L^{2}\left(\mu\right)\right)$
such that
\begin{equation}
\Phi\left(A\right)=\int_{A}\varphi\,d\mu,\label{eq:C10}
\end{equation}
for all $A\in\mathscr{F}_{fin}$. Then
\begin{equation}
\left\Vert \Phi\right\Vert _{\mathscr{H}(K^{\left(\mu\right)})}=\left\Vert \varphi\right\Vert _{L^{2}\left(\mu\right)}.\label{eq:D11-1}
\end{equation}
\end{lem}
\begin{proof}
To show that $\Phi$ in (\ref{eq:C10}) is in $\mathscr{H}(K^{\left(\mu\right)})$,
we must choose a finite constant $C_{\Phi}$ such that, for all finite
families $\left(A_{i}\right)_{i=1}^{N}$, $A_{i}\in\mathscr{F}_{fin}$, and
$\left\{ \xi_{i}\right\} _{i=1}^{N}$, $\xi_{i}\in\mathbb{R}$, we
get the following \emph{a priori} estimate:
\begin{equation}
\left|\sum\nolimits _{i=1}^{N}\xi_{i}\Phi\left(A_{i}\right)\right|^{2}\leq C_{\Phi}\sum\nolimits _{i}\sum\nolimits _{j}\xi_{i}\xi_{j}K^{\left(\mu\right)}\left(A_{i},A_{j}\right).\label{eq:C11}
\end{equation}
But a direct application of the Cauchy-Schwarz inequality in $L^{2}\left(\mu\right)$ shows
that (\ref{eq:C11}) holds with the finite constant
$C_{\Phi}=\left\Vert \varphi\right\Vert _{L^{2}\left(\mu\right)}^{2}$,
where $\varphi$ is the $L^{2}\left(\mu\right)$-function in (\ref{eq:C10}).
The desired conclusion now follows from an application of \lemref{B1}.
We have proved one implication of the lemma: functions
$\Phi$ on $\mathscr{F}_{fin}$ of the form (\ref{eq:C10}) are
in the RKHS $\mathscr{H}\left(K^{\left(\mu\right)}\right)$, and the
norm $\left\Vert \cdot\right\Vert _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}$
is as stated in (\ref{eq:D11-1}). In what follows, we shall denote these
elements of $\mathscr{H}\left(K^{\left(\mu\right)}\right)$ as pairs
$\left(\Phi,\varphi\right)$. We shall also restrict attention to
the case of real valued functions.
For the converse implication, let $H$ be a function on $\mathscr{F}_{fin}$,
and assume $H\in\mathscr{H}\left(K^{\left(\mu\right)}\right)$. Then
by Schwarz applied to $\left\langle \cdot,\cdot\right\rangle _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}$
we get
\begin{equation}
\left|\left\langle H,\Phi\right\rangle _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}\right|\leq\left\Vert H\right\Vert _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}\left\Vert \varphi\right\Vert _{L^{2}\left(\mu\right)},\label{eq:D11-2}
\end{equation}
where we used (\ref{eq:D11-1}). Hence $\varphi\mapsto\left\langle H,\Phi\right\rangle _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}$
is a bounded linear functional on $L^{2}\left(\mu\right)$, and Riesz'
lemma yields a unique $h\in L^{2}\left(\mu\right)$
such that
\begin{equation}
\left\langle H,\Phi\right\rangle _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}=\int_{M}h\,\varphi\,d\mu\label{eq:D11-3}
\end{equation}
for all $\left(\Phi,\varphi\right)$ as in (\ref{eq:C10}). Now specialize
to $\varphi=\chi_{A}$, $A\in\mathscr{F}_{fin}$, in (\ref{eq:D11-3})
and we conclude that
\begin{equation}
H\left(A\right)=\int_{A}h\,d\mu;
\end{equation}
which translates into the assertion that the pair $\left(H,h\right)$
has the desired form (\ref{eq:C10}). And hence by (\ref{eq:D11-1})
we have $\left\Vert H\right\Vert _{\mathscr{H}\left(K^{\left(\mu\right)}\right)}=\left\Vert h\right\Vert _{L^{2}\left(\mu\right)}$
as stated. This concludes the proof of the converse inclusion.
\end{proof}
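The a priori estimate (\ref{eq:C11}) can be tested on a small discrete model. We stress the hedge: the lemma itself assumes $\mu$ non-atomic, while the sketch below uses a finite atomic $\mu$ purely as a stand-in to illustrate the inequality, with weights, $\varphi$, and the family of sets chosen by us:

```python
# Discrete sanity check of the a priori estimate (C11) behind Lemma D3.
# M = {0,...,4} with an atomic measure mu -- a stand-in for illustration
# only, since the lemma assumes mu non-atomic.
import random

random.seed(5)
M = range(5)
mu = {i: random.uniform(0.1, 2.0) for i in M}          # weights mu({i})
phi = {i: random.gauss(0, 1) for i in M}               # phi in L^2(mu)
C_Phi = sum(phi[i] ** 2 * mu[i] for i in M)            # = ||phi||^2_{L^2(mu)}

def measure(A):                 # mu(A)
    return sum(mu[i] for i in A)

def Phi(A):                     # Phi(A) = int_A phi dmu, eq. (C10)
    return sum(phi[i] * mu[i] for i in A)

sets = [frozenset(s) for s in [{0}, {0, 1}, {1, 2, 3}, {2, 4}, {0, 3, 4}]]
for trial in range(200):
    xi = [random.gauss(0, 1) for _ in sets]
    lhs = sum(x * Phi(A) for x, A in zip(xi, sets)) ** 2
    rhs = C_Phi * sum(xi[i] * xi[j] * measure(sets[i] & sets[j])
                      for i in range(len(sets)) for j in range(len(sets)))
    assert lhs <= rhs + 1e-9    # estimate (C11) with C = ||phi||^2
print("estimate (C11) holds with C_Phi = ||phi||^2")
```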
\section{\label{sec:fac}Factorizations and stochastic integrals}
In Sections \ref{sec:pdk} and \ref{sec:gp}, we introduced the related
notions of positive definite (p.d.) functions (kernels) on the one
hand, and Gaussian processes on the other. One notes the immediate
fact that a covariance kernel for a general stochastic process is
positive definite. In general, the stochastic process in question
is not determined by its covariance kernel. But in the special case
when the process is Gaussian, it is.
In \thmref{C1}, we stated that every p.d. kernel $K$ is indeed the
covariance kernel of a Gaussian process. The construction is natural;
starting with the p.d. kernel $K$, there is a canonical inductive
limit construction leading to the Gaussian process for this problem.
The basic idea for this particular construction of Gaussian processes
dates back to pioneering work by Kolmogorov \cite{MR735967,MR562914}.
In the present section, we formulate two different factorization questions,
one for general p.d. kernels $K$, and the other for Gaussian processes,
say $V$. For details, see the respective definitions in (\ref{eq:D2})
and (\ref{eq:D3}) below. If $K$ is indeed the covariance kernel
for a Gaussian process $V$, it is natural to try to relate these
two seemingly different notions of factorization. (In the case of
Gaussian processes, a better name is perhaps \textquotedblleft subordination\textquotedblright{}
(see (\ref{eq:D6}) below), but our theorem justifies the use of factorization
in both of these contexts.) Our main result, \thmref{E1}, states
that the data determining factorizations for $K$ are exactly the same
as those yielding factorizations for $V$.\\
Let $K$ be a positive definite kernel $X\times X\xrightarrow{\;K\;}\mathbb{C}$;
and let $V=V_{K}$ be the corresponding Gaussian (mean zero) process,
indexed by $X$, i.e., $V_{x}\in L^{2}\left(\Omega,\mathbb{P}\right)$,
$\forall x\in X$, and
\begin{equation}
\mathbb{E}\left(\overline{V}_{x}V_{y}\right)=K\left(x,y\right),\;\forall\left(x,y\right)\in X\times X.\label{eq:D1}
\end{equation}
We set
\begin{align}
\mathscr{F}\left(K\right) & :=\Big\{\left(\left(k_{x}\right)_{x\in X},\mu\right)\mid\left(M,\mathscr{F}_{M},\mu\right)\text{ a measure space, }k_{x}\in L^{2}\left(M,\mu\right),\label{eq:D2}\\
 & \qquad\text{and }K\left(\cdot,x\right)\longmapsto k_{x}\text{ extends to an isometry, i.e., }\nonumber \\
 & \qquad K\left(x,y\right)=\int_{M}\overline{k_{x}\left(s\right)}k_{y}\left(s\right)d\mu\left(s\right)=\left\langle k_{x},k_{y}\right\rangle _{L^{2}\left(\mu\right)},\;\forall x,y\in X\Big\}.\nonumber
\end{align}
Further, if $V$ is the Gaussian process (from (\ref{eq:D1})), we
set
\begin{align}
\mathscr{M}\left(V\right) & :=\Big\{\left(\left(k_{x}\right)_{x\in X},\mu\right)\mid\left\{ k_{x}\right\} _{x\in X}\text{ is an indexed system in }L^{2}\left(M,\mu\right)\nonumber \\
 & \qquad\text{such that \ensuremath{V} admits the Ito-integral representation}\label{eq:D3}\\
 & \qquad V_{x}=\int_{M}k_{x}\left(s\right)dW_{s}^{\left(\mu\right)},\;\forall x\in X\Big\}.\nonumber
\end{align}
Following parallel terminology from measure theory, we say that a
Gaussian process $V$ admits a disintegration, via suitable Ito-integrals,
when there is a measure space with measure $\mu$ such that the corresponding
Wiener process $W^{\left(\mu\right)}$ satisfies (\ref{eq:D3}). Our
theorem below (\thmref{E1}) shows that this disintegration question
may be decided instead by the answer to an equivalent spectral decomposition
question; the latter of course formulated for the covariance kernel
for $V$. As is shown in the examples/applications below, given a
Gaussian process, it is not at all clear what disintegrations hold;
see for example \corref{F7}.
\begin{thm}
\label{thm:E1}Let $K:X\times X\rightarrow\mathbb{C}$ be given positive
definite, and let $\left\{ V_{x}\right\} _{x\in X}$ be the corresponding
Gaussian (mean zero) process, then
\begin{equation}
\mathscr{F}\left(K\right)=\mathscr{M}\left(V\right).\label{eq:D4}
\end{equation}
\end{thm}
\begin{proof}
We shall need the following:
\end{proof}
\begin{lem}
\label{lem:F2}From the definition of $\mathscr{F}\left(K\right)$,
with $K$ fixed and assumed p.d., we obtain, for every $\left(\left(k_{x}\right)_{x\in X},\mu\right)\in\mathscr{F}\left(K\right)$,
a natural isometry $T_{\mu}:\mathscr{H}\left(K\right)\longrightarrow L^{2}\left(M,\mu\right)$.
It is given by
\begin{equation}
T_{\mu}(\underset{\in\mathscr{H}\left(K\right)}{\underbrace{K\left(\cdot,x\right)}}):=k_{x}\in L^{2}\left(\mu\right);\label{eq:D4-1}
\end{equation}
and the adjoint operator $T_{\mu}^{*}:L^{2}\left(M,\mu\right)\longrightarrow\mathscr{H}\left(K\right)$
is as follows: For all $f\in L^{2}\left(M,\mu\right)$ we have
\begin{equation}
\left(T_{\mu}^{*}f\right)\left(x\right)=\int_{M}f\left(s\right)\overline{k_{x}\left(s\right)}d\mu\left(s\right).\label{eq:D4-2}
\end{equation}
Moreover, we also have
\begin{equation}
T_{\mu}^{*}\left(k_{x}\right)=K\left(\cdot,x\right),\;\text{for all \ensuremath{x\in X}.}\label{eq:E7-1}
\end{equation}
\end{lem}
\begin{proof}
Since $\left(k_{x},\mu\right)\in\mathscr{F}\left(K\right)$, we have
the factorization property (\ref{eq:D2}), and so it follows from
(\ref{eq:D4-1}) that this extends by linearity and norm-completion
to an isometry $\mathscr{H}\left(K\right)\xrightarrow{\;T_{\mu}\;}L^{2}\left(\mu\right)$
as stated.
By the definition of the adjoint operator $L^{2}\left(\mu\right)\xrightarrow{\;T_{\mu}^{*}\;}\mathscr{H}\left(K\right)$,
we have for $f\in L^{2}\left(\mu\right)$:
\[
\left(T_{\mu}^{*}f\right)\left(x\right)=\left\langle K\left(\cdot,x\right),T_{\mu}^{*}f\right\rangle _{\mathscr{H}\left(K\right)}=\left\langle k_{x},f\right\rangle _{L^{2}\left(\mu\right)}=\int_{M}f\left(s\right)\overline{k_{x}\left(s\right)}d\mu\left(s\right),
\]
which is the assertion in the lemma.
From the properties of $\mathscr{H}\left(K\right)$ (see \secref{pdk}),
it follows that (\ref{eq:E7-1}) holds iff
\begin{equation}
\left\langle K\left(\cdot,y\right),T_{\mu}^{*}\left(k_{x}\right)\right\rangle _{\mathscr{H}\left(K\right)}=\left\langle K\left(\cdot,y\right),K\left(\cdot,x\right)\right\rangle _{\mathscr{H}\left(K\right)}\label{eq:E7-2}
\end{equation}
for all $y\in X$. But we may compute both sides in eq. (\ref{eq:E7-2})
as follows:
\begin{eqnarray*}
\text{LHS}_{\left(\ref{eq:E7-2}\right)} & = & \left\langle T_{\mu}K\left(\cdot,y\right),k_{x}\right\rangle _{L^{2}\left(\mu\right)}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:D4-1}\right)}}}{=} & \left\langle k_{y},k_{x}\right\rangle _{L^{2}\left(\mu\right)}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:D2}\right)}}}{=} & K\left(y,x\right)\\
& \underset{\text{by \ensuremath{\left(\ref{eq:A3}\right)}}}{=} & \text{RHS}_{\left(\ref{eq:E7-2}\right)}.
\end{eqnarray*}
\end{proof}
\begin{proof}[Proof of \thmref{E1} continued]
The proof is divided into two parts, one for each of the inclusions
$\subseteq$ and $\supseteq$ in (\ref{eq:D4}).
\textbf{Part 1 ``$\subseteq$''.} Assume a pair $\left(\left(k_{x}\right)_{x\in X},\mu\right)$
is in $\mathscr{F}\left(K\right)$; see (\ref{eq:D2}). Then by definition,
the factorization (\ref{eq:D2}) holds on $X\times X$. Now let $W^{\left(\mu\right)}$
denote the Wiener process associated with $\mu$, i.e., $W^{\left(\mu\right)}$
is a Gaussian process indexed by $\mathscr{F}_{fin}$, and
\begin{equation}
\mathbb{E}\left(W_{A}^{\left(\mu\right)}W_{B}^{\left(\mu\right)}\right)=\mu\left(A\cap B\right),\label{eq:D5}
\end{equation}
for all $A,B\in\mathscr{F}_{fin}$; see (\ref{eq:C1}) above. Now
form the Ito-integral
\begin{equation}
V_{x}:=\int_{M}k_{x}\left(s\right)dW_{s}^{\left(\mu\right)},\;x\in X.\label{eq:D6}
\end{equation}
We stress that then $V_{x}$, as defined by (\ref{eq:D6}), is a Gaussian
process indexed by $X$. To see this, use the general theory of Ito-integration,
see also \cite{MR3882025,MR3701541,MR3574374,MR3721329,MR3670916,MR0301806,MR562914}.
The approximation in (\ref{eq:D6}) is over the filter of all \emph{partitions
}
\begin{equation}
\left\{ A_{i}\right\} _{i\in\mathbb{N}}\text{ s.t. }A_{i}\cap A_{j}=\emptyset,\:i\neq j,\;\cup_{i\in\mathbb{N}}A_{i}=M,\;\text{and}\;0<\mu\left(A_{i}\right)<\infty;\label{eq:D7}
\end{equation}
see (\ref{eq:C4}). From the property of $W_{A_{i}}^{\left(\mu\right)}$,
$i\in\mathbb{N}$, we conclude that, for all $s_{i}\in A_{i}$, we
have that
\begin{equation}
\sum_{i\in\mathbb{N}}k_{x}\left(s_{i}\right)W_{A_{i}}^{\left(\mu\right)}\label{eq:D8}
\end{equation}
is Gaussian (mean zero) with
\begin{align}
\mathbb{E}\left|\sum\nolimits _{i}k_{x}\left(s_{i}\right)W_{A_{i}}^{\left(\mu\right)}\right|^{2} & =\sum\nolimits _{i}\sum\nolimits _{j}\overline{k_{x}\left(s_{i}\right)}k_{x}\left(s_{j}\right)\mu\left(A_{i}\cap A_{j}\right)\nonumber \\
& =\sum\nolimits _{i}\left|k_{x}\left(s_{i}\right)\right|^{2}\mu\left(A_{i}\right);\label{eq:D9}
\end{align}
where we used (\ref{eq:D7}). Passing to the limit over the filter
of all partitions of $M$ (as in (\ref{eq:D7})), we then get
\[
\mathbb{E}\left(\int_{M}\overline{k_{x}\left(s\right)}dW_{s}^{\left(\mu\right)}\int_{M}k_{y}\left(t\right)dW_{t}^{\left(\mu\right)}\right)=\int_{M}\overline{k_{x}\left(s\right)}k_{y}\left(s\right)d\mu\left(s\right);
\]
and with definition (\ref{eq:D6}), therefore:
\begin{equation}
\mathbb{E}\left(\overline{V}_{x}V_{y}\right)=\left\langle k_{x},k_{y}\right\rangle _{L^{2}\left(\mu\right)}=K\left(x,y\right),\;\forall\left(x,y\right)\in X\times X,\label{eq:D10}
\end{equation}
where the last step in the derivation (\ref{eq:D10}) uses the assumption
that $\left(\left(k_{x}\right)_{x\in X},\mu\right)\in\mathscr{F}\left(K\right)$;
see (\ref{eq:D2}).
\textbf{Part 2 ``$\supseteq$''.} Assume now that some pair $\left(\left(k_{x}\right)_{x\in X},\mu\right)$
is in $\mathscr{M}\left(V\right)$, where $K$ is given and assumed p.d.;
and where $\left(V_{x}\right)_{x\in X}$ is ``the'' associated (mean
zero) Gaussian process; i.e., with $K$ as its covariance kernel;
see (\ref{eq:D1}).
We claim that $\left(\left(k_{x}\right)_{x\in X},\mu\right)$ must
then be in $\mathscr{F}\left(K\right)$, i.e., that the factorization
(\ref{eq:D2}) holds. This in turn follows from the following chain
of identities:
\begin{alignat}{2}
\left\langle k_{x},k_{y}\right\rangle _{L^{2}\left(\mu\right)} & =\mathbb{E}\left(\overline{V}_{x}V_{y}\right) & \quad & \big(\text{since \ensuremath{V_{x}=\int_{M}k_{x}\left(s\right)dW_{s}^{\left(\mu\right)}}}\big)\nonumber \\
& =K\left(x,y\right) & & \big(\text{since \ensuremath{K} is the covariance kernel}\label{eq:D11}\\
& & & \text{ of the Gaussian process \ensuremath{\left(V_{x}\right)_{x\in X}}}\big)\nonumber
\end{alignat}
valid for all $\left(x,y\right)\in X\times X$, and the conclusion
follows. Note that the first step in the derivation of (\ref{eq:D11})
uses the Ito-isometry. Hence, initially $K$ may possibly be the covariance
kernel for a mean zero Gaussian process, say $\left(V'_{x}\right)$,
different from $V_{x}:=\int_{M}k_{x}\left(s\right)dW_{s}^{\left(\mu\right)}$.
But we proved that the two Gaussian processes $V_{x}$ and $V_{x}'$
have the same covariance kernel. It then follows that the two processes
must be equivalent. This is by general theory; see e.g. \cite{MR0277027,MR2053326,MR3687240}.
This last uniqueness holds only because the processes under consideration are Gaussian;
other stochastic processes are typically not determined uniquely by
their respective covariance kernels.
\end{proof}
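The duality of \thmref{E1} can be made concrete on a finite model, with $X=M=\left\{ 0,\ldots,3\right\} $ and $\mu$ the counting measure (all choices below are illustrative): a Cholesky factorization $K=BB^{T}$ yields $k_{x}=B\left[x\right]\in L^{2}\left(M,\mu\right)$, exhibiting membership in $\mathscr{F}\left(K\right)$; the \emph{same} data defines $V_{x}=\sum_{s}k_{x}\left(s\right)W_{s}$ with $W_{s}$ i.i.d. $N\left(0,1\right)$ (a discrete Wiener process), whose covariance is $K$, exhibiting membership in $\mathscr{M}\left(V\right)$:

```python
# Finite-model sketch of the duality F(K) = M(V) in Theorem E1.
import math
import random

n = 4
K = [[math.exp(-abs(x - y)) for y in range(n)] for x in range(n)]

B = [[0.0] * n for _ in range(n)]          # Cholesky: K = B B^T
for i in range(n):
    for j in range(i + 1):
        s = K[i][j] - sum(B[i][k] * B[j][k] for k in range(j))
        B[i][j] = math.sqrt(s) if i == j else s / B[j][j]

# membership in F(K): <k_x, k_y>_{L^2(mu)} = K(x,y), deterministically
for x in range(n):
    for y in range(n):
        assert abs(sum(B[x][s] * B[y][s] for s in range(n)) - K[x][y]) < 1e-10

# membership in M(V): empirical covariance of V = B W approximates K
random.seed(6)
trials = 20_000
cov01 = 0.0
for _ in range(trials):
    W = [random.gauss(0, 1) for _ in range(n)]          # discrete Wiener process
    V = [sum(B[x][s] * W[s] for s in range(n)) for x in range(n)]
    cov01 += V[0] * V[1] / trials
assert abs(cov01 - K[0][1]) < 0.05          # Monte Carlo, loose tolerance
print("factorization data for K and for V coincide on this model")
```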
\begin{rem}
In the statement of \thmref{E1} there are two isometries: Starting
with $\left(\left(k_{x}\right)_{x\in X},\mu\right)\in\mathscr{F}\left(K\right)$
we get the canonical isometry $T_{\mu}:\mathscr{H}\left(K\right)\rightarrow L^{2}\left(\mu\right)$
given by
\begin{equation}
T_{\mu}\left(K\left(\cdot,x\right)\right)=k_{x};
\end{equation}
see (\ref{eq:D4-1}) of \lemref{F2}. But with $\mu$, we then also
get the Wiener process $W^{\left(\mu\right)}$ and the Ito-integral
\begin{equation}
L^{2}\left(M,\mu\right)\ni f\longmapsto\int_{M}f\,dW^{\left(\mu\right)}\in L^{2}\left(\Omega,Cyl,\mathbb{P}\right)
\end{equation}
as an isometry. Here $\left(\Omega,Cyl,\mathbb{P}\right)$ denotes
the standard probability space, with $Cyl$ abbreviation for the cylinder
sigma-algebra of subsets of $\Omega:=\mathbb{R}^{M}$. For finite
subsets $\left(s_{1},s_{2},\cdots,s_{k}\right)$ in $M$, and Borel
subsets $B_{k}$ in $\mathbb{R}^{k}$, the corresponding cylinder
set is
\[
Cyl\left(\left(s_{i}\right)_{i=1}^{k}\right):=\left\{ \omega\in\mathbb{R}^{M}\mathrel{;}\left(\omega\left(s_{1}\right),\cdots,\omega\left(s_{k}\right)\right)\in B_{k}\right\} .
\]
In summary, we get the following diagram of isometries, corresponding
to a fixed $\left(\left(k_{x}\right)_{x\in X},\mu\right)\in\mathscr{F}\left(K\right)$,
where $K$ is a fixed p.d. function on $X\times X$:
\begin{figure}[H]
\[
\xymatrix{\mathscr{H}\left(K\right)\ar@/^{1.5pc}/[rr]^{T_{\mu}}\ar@/_{1.2pc}/[dr]_{\text{composition}} & & L^{2}\left(M,\mu\right)\ar@/^{1.2pc}/[dl]^{\text{Ito-isometry for \ensuremath{W^{\left(\mu\right)}}}}\\
& L^{2}\left(\Omega,\mathbb{P}\right)
}
\]
\caption{The two isometries. Factorizations by isometries.}
\end{figure}
\end{rem}
\section{\label{sec:exa}Examples and applications}
Below we present four examples in order to illustrate the technical
points in \thmref{E1}. In the first example $X=\left[0,1\right]$,
the unit interval, and in the next two examples $X=\mathbb{D}=\left\{ z\in\mathbb{C}\mathrel{;}\left|z\right|<1\right\} $
the open complex disk. In the fourth example, the Drury-Arveson kernel,
we have $X=\mathbb{C}^{k}$.
We begin with a note on identifications: For $t\in\left[0,1\right]$,
we set
\[
e\left(t\right):=e^{i2\pi t}.
\]
We write $\lambda_{1}$ for the Lebesgue measure restricted to $\left[0,1\right]$;
and we make the identification:
\begin{equation}
\left[0,1\right]\cong\mathbb{R}/\mathbb{Z}\cong\mathbb{T}^{1}=\left\{ z\in\mathbb{C}\mathrel{;}\left|z\right|=1\right\} .\label{F1}
\end{equation}
Hence, for $L^{2}\left(\left[0,1\right],\lambda_{1}\right)$ we have
the familiar Fourier expansion: With
\begin{equation}
f\in L^{2}\left(\lambda_{1}\right),\;\text{and}\;c_{n}:=\int_{\left[0,1\right]}\overline{e\left(nt\right)}f\left(t\right)d\lambda_{1}\left(t\right),\;n\in\mathbb{Z},
\end{equation}
\begin{equation}
f\left(t\right)=\sum_{n\in\mathbb{Z}}c_{n}e\left(nt\right),\;\text{and}\;\int_{0}^{1}\left|f\left(t\right)\right|^{2}d\lambda_{1}\left(t\right)=\sum_{n\in\mathbb{Z}}\left|c_{n}\right|^{2}.
\end{equation}
On $\left[0,1\right]$, we shall also consider the Cantor measure
$\mu_{4}$ with support equal to the Cantor set
\[
C_{4}=\left\{ x\mathrel{;}x=\sum\nolimits _{k=1}^{\infty}b_{k}/4^{k},\;b_{k}\in\left\{ 0,2\right\} \right\} \subseteq\left[0,1\right];
\]
see \figref{F1} and \cite{MR1655831,jorgensen2018harmonic}.
\begin{figure}
\includegraphics[width=0.35\textwidth]{c4}
\caption{\label{fig:F1} The $4$-Cantor set with double gaps: an iterated-function-system
construction of the Cantor set and its measure; see (\ref{eq:F4})
below.}
\end{figure}
It is known that $\mu_{4}$ is the unique probability measure s.t.
\begin{equation}
\frac{1}{2}\int_{0}^{1}\left(f\left(\frac{x}{4}\right)+f\left(\frac{x+2}{4}\right)\right)d\mu_{4}\left(x\right)=\int_{0}^{1}f\,d\mu_{4}.\label{eq:F4}
\end{equation}
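The invariance property (\ref{eq:F4}) lends itself to a direct numerical check: sampling $x\sim\mu_{4}$ via random base-$4$ digits in $\left\{ 0,2\right\} $, the two sides of (\ref{eq:F4}) should agree up to Monte Carlo error. A minimal Python sketch (function names, sample size, and truncation depth are our illustrative choices):

```python
import math
import random

def sample_mu4(rng, depth=20):
    # x = sum_{k>=1} b_k / 4^k with i.i.d. digits b_k uniform in {0, 2};
    # this is the law mu_4, up to a truncation error of order 4^(-depth)
    return sum(rng.choice((0, 2)) / 4**k for k in range(1, depth + 1))

def expectation(f, n=100_000, seed=1):
    # Monte Carlo estimate of the mu_4-integral of f
    rng = random.Random(seed)
    return sum(f(sample_mu4(rng)) for _ in range(n)) / n

# Self-similarity (eq. F4): (1/2) E[f(x/4) + f((x+2)/4)] = E[f(x)] for x ~ mu_4
f = math.cos
lhs = 0.5 * expectation(lambda x: f(x / 4) + f((x + 2) / 4))
rhs = expectation(f)
```

Any bounded continuous test function $f$ may be used in place of $\cos$; the two estimates agree to within the Monte Carlo standard error.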
For the Fourier transform $\widehat{\mu}_{4}$ we have
\begin{equation}
\widehat{\mu}_{4}\left(t\right)=\prod_{k=1}^{\infty}\frac{1}{2}\left(1+e^{i\pi t/4^{k}}\right),\;t\in\mathbb{R}.
\end{equation}
In \tabref{F1}, we summarize the three examples with the data from
\thmref{E1}. We now turn to the details of the respective examples:
\renewcommand{\arraystretch}{1.5}
\begin{table}
\begin{tabular}{|c|c|c|>{\centering}p{0.35\columnwidth}|c|}
\hline
& $X$ & $K$ & $k_{x},M=\left[0,1\right]\cong\mathbb{T}^{1}$, $\mathscr{F}\left(K\right)=\left\{ \left(k_{x},\mu\right)\right\} $ & \multicolumn{1}{c|}{$\mu$}\tabularnewline
\hline
Ex 1 & $\left[0,1\right]$ & $x\wedge y$ & $k_{x}\left(s\right)=\chi_{\left[0,x\right]}\left(s\right)$ & $\lambda_{1}$\tabularnewline
\hline
Ex 2 & $\mathbb{D}$ & ${\displaystyle \frac{1}{1-z\overline{w}}}$ & ${\displaystyle k_{z}\left(t\right)=\frac{1}{1-z\overline{e\left(t\right)}}}$ & $\lambda_{1}$ on $\mathbb{T}^{1}$\tabularnewline
\hline
Ex 3 & $\mathbb{D}$ & ${\displaystyle \prod_{n=0}^{\infty}\left(1+z^{4^{n}}\overline{w}^{4^{n}}\right)}$ & ${\displaystyle k_{z}\left(t\right)=\prod_{n=0}^{\infty}\left(1+z^{4^{n}}\overline{e\left(4^{n}t\right)}\right)}$ & $\mu_{4}$\tabularnewline
\hline
\end{tabular}
\caption{\label{tab:F1} Three p.d. kernels and their respective Gaussian realizations.}
\end{table}
\renewcommand{\arraystretch}{1}
\begin{example}
\label{exa:F1}If $K\left(x,y\right):=x\wedge y$ is considered a
kernel on $\left[0,1\right]\times\left[0,1\right]$, then the corresponding
RKHS $\mathscr{H}\left(K\right)$ is the Hilbert space of functions
$f$ on $\left[0,1\right]$ such that the distribution derivative
$f'=df/dx$ is in $L^{2}\left(\left[0,1\right],\lambda_{1}\right)$,
$\lambda_{1}=dx$, $f\left(0\right)=0$, and
\begin{equation}
\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}:=\int_{0}^{1}\left|f'\left(x\right)\right|^{2}dx;\label{eq:F6}
\end{equation}
and it is immediate that $\left(k_{x},\lambda_{1}\right)\in\mathscr{F}\left(K\right)$
where $k_{x}\left(s\right):=\chi_{\left[0,x\right]}\left(s\right)$,
the indicator function; see \figref{F2}.
\begin{figure}[H]
\begin{tabular}{c}
\includegraphics[width=0.35\textwidth]{ind1}\tabularnewline
\includegraphics[width=0.35\textwidth]{ind2}\tabularnewline
\tabularnewline
\end{tabular}
\caption{\label{fig:F2} The generators of the Cameron-Martin RKHS. See \exaref{F1}.}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.65\textwidth]{bm}
\caption{\label{fig:F3}Brownian motion on $\left[0,1\right]$. Sample-paths
by Monte Carlo. See \exaref{F1}.}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.65\textwidth]{bm2}
\caption{\label{fig:F4}A Wiener process with holding patterns in the gaps
of the Cantor set $C_{4}$ in \figref{F1}: The $W^{\left(\mu_{4}\right)}$
process on $\left[0,1\right]$. Sample-paths by Monte Carlo.}
\end{figure}
The process $W^{\left(\lambda_{1}\right)}$ is of course the standard
Brownian motion on $\left[0,1\right]$, pinned at $x=0$; see \figref{F3},
and compare with the $W^{\left(\mu_{4}\right)}$-process in \figref{F4}.
For Monte Carlo simulation, see e.g. \cite{KDB,MR3884672}.
The Hilbert space characterized by (\ref{eq:F6}) is called the Cameron-Martin
space, see e.g., \cite{MR562914}. Moreover, to see that (\ref{eq:F6})
is indeed the precise characterization of the RKHS for this kernel,
one again applies \lemref{B1}.
\end{example}
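As a complement to the Monte Carlo pictures in \figref{F3}, one can check that simulated sample paths of $W^{\left(\lambda_{1}\right)}$ reproduce the covariance kernel $x\wedge y$ of \exaref{F1}. A minimal Python sketch (grid size and path count are arbitrary illustrative choices):

```python
import random

def brownian_paths(n_paths=10_000, n_steps=128, seed=0):
    # Standard Brownian motion on [0,1], pinned at W_0 = 0, simulated by
    # cumulative sums of independent N(0, dt) increments on a uniform grid
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    paths = []
    for _ in range(n_paths):
        w, path = 0.0, [0.0]
        for _ in range(n_steps):
            w += rng.gauss(0.0, dt ** 0.5)
            path.append(w)
        paths.append(path)
    return paths

paths = brownian_paths()
x_idx, y_idx = 32, 96          # grid points x = 0.25, y = 0.75
cov = sum(p[x_idx] * p[y_idx] for p in paths) / len(paths)
# E(W_x W_y) = x ∧ y, so cov should be close to 0.25
```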
It follows immediately from \thmref{E1} that the Gaussian processes
corresponding to the data in \tabref{F1} are as follows:
\begin{example}
$z\in\mathbb{D}$:
\begin{equation}
V_{z}=\int_{0}^{1}\frac{1}{1-z\overline{e\left(t\right)}}dW_{t}^{\left(\lambda_{1}\right)}
\end{equation}
realized as an Ito-integral.
As an application of \thmref{E1}, we get:
\[
\mathbb{E}\left(\overline{V}_{z}V_{w}\right)=\frac{1}{1-z\overline{w}},\;\forall\left(z,w\right)\in\mathbb{D}\times\mathbb{D}.
\]
\end{example}
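By the Ito isometry, the covariance identity of this example reduces to the $L^{2}\left(\lambda_{1}\right)$-inner product of the generators $k_{z}$. For real $z,w\in\left(-1,1\right)$ (which avoids any ambiguity about conjugation conventions), this inner product can be checked by quadrature; a minimal Python sketch using the periodic trapezoid rule (grid size is an arbitrary choice):

```python
import cmath

def e(t):
    # e(t) = exp(i 2 pi t), as in the identification [0,1] ~ T^1
    return cmath.exp(2j * cmath.pi * t)

def k(z, t):
    # generator k_z(t) = 1 / (1 - z * conj(e(t)))
    return 1.0 / (1.0 - z * e(t).conjugate())

def inner(z, w, n=4096):
    # <k_z, k_w> in L^2([0,1], lambda_1), by the periodic trapezoid rule,
    # which converges geometrically fast for these smooth periodic integrands
    return sum(k(z, j / n).conjugate() * k(w, j / n) for j in range(n)) / n

z, w = 0.3, 0.5
val = inner(z, w)
# for real z, w this should match the kernel value 1 / (1 - z w)
```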
\begin{example}
\label{exa:F3}$z\in\mathbb{D}$:
\begin{equation}
V_{z}=\int_{0}^{1}\prod_{n=0}^{\infty}\left(1+z^{4^{n}}\overline{e\left(4^{n}t\right)}\right)dW_{t}^{\left(\mu_{4}\right)}
\end{equation}
where the $W^{\left(\mu_{4}\right)}$-Ito integral is supported on
the Cantor set $C_{4}\subset\left[0,1\right]$; see \figref{F1}.
As an application of \thmref{E1}, we get:
\[
\mathbb{E}\left(\overline{V}_{z}V_{w}\right)=\prod_{n=0}^{\infty}\left(1+\left(z\overline{w}\right)^{4^{n}}\right).
\]
\end{example}
The reasoning of \exaref{F3} is based on a theorem of the paper \cite{MR1655831}
(see also \cite{jorgensen2018harmonic}). Set
\begin{align}
\Lambda_{4} & =\left\{ 0,1,4,5,16,17,20,21,64,65,\cdots\right\} \nonumber \\
 & =\left\{ \sum\nolimits _{k=0}^{\text{finite}}\alpha_{k}4^{k}\mid\alpha_{k}\in\left\{ 0,1\right\} ,\;\text{finite sums}\right\} \label{eq:F9}
\end{align}
then the Fourier functions $\left\{ e\left(\lambda t\right)\mathrel{;}\lambda\in\Lambda_{4}\right\} $
form an orthonormal basis in $L^{2}\left(C_{4},\mu_{4}\right)$,
i.e., every $f\in L^{2}\left(C_{4},\mu_{4}\right)$ has its Fourier
expansion
\begin{align*}
\widehat{f}\left(\lambda\right) & =\int_{C_{4}}\overline{e\left(\lambda t\right)}f\left(t\right)d\mu_{4}\left(t\right);\\
f\left(t\right) & =\sum_{\lambda\in\Lambda_{4}}\widehat{f}\left(\lambda\right)e\left(\lambda t\right);
\end{align*}
and
\[
\int_{C_{4}}\left|f\right|^{2}d\mu_{4}=\sum_{\lambda\in\Lambda_{4}}\left|\widehat{f}\left(\lambda\right)\right|^{2}.
\]
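The orthonormality of $\left\{ e\left(\lambda t\right)\right\} _{\lambda\in\Lambda_{4}}$ in $L^{2}\left(\mu_{4}\right)$ can also be tested numerically. From the digit expansion $t=\sum_{k\geq1}b_{k}/4^{k}$, $b_{k}$ uniform in $\left\{ 0,2\right\} $, one computes $\int e^{i2\pi mt}\,d\mu_{4}\left(t\right)=\prod_{k\geq1}\frac{1}{2}\big(1+e^{i4\pi m/4^{k}}\big)$ for integers $m$; and for $m=\lambda-\lambda'$ with $\lambda\neq\lambda'$ in $\Lambda_{4}$, one factor vanishes. A minimal Python sketch (truncation depth is an arbitrary choice):

```python
import cmath

def mu4_char(m, depth=25):
    # ∫ e^{i 2π m t} dμ4(t) = Π_{k≥1} (1 + e^{i 4π m / 4^k}) / 2,
    # from the digit expansion t = Σ b_k/4^k with b_k uniform in {0, 2}
    prod = 1.0 + 0j
    for k in range(1, depth + 1):
        prod *= (1 + cmath.exp(4j * cmath.pi * m / 4**k)) / 2
    return prod

lams = [0, 1, 4, 5, 16, 17, 20, 21]
# Gram matrix of {e(λt)} in L²(μ4): entry (λ, λ') equals mu4_char(λ - λ')
gram = [[mu4_char(a - b) for b in lams] for a in lams]
```

The diagonal entries are $1$, and every off-diagonal entry vanishes (to machine precision), confirming the Parseval statement above.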
\begin{lem}
\label{lem:F4}Consider the set $\Lambda_{4}$ in (\ref{eq:F9}),
and, for $s\in\mathbb{D}$, let
\begin{equation}
F\left(s\right):=\sum_{\lambda\in\Lambda_{4}}s^{\lambda}
\end{equation}
be the corresponding generating function. Then we have the following
infinite-product representation
\begin{equation}
F\left(s\right)=\prod_{n=0}^{\infty}\left(1+s^{4^{n}}\right).\label{eq:F11}
\end{equation}
\end{lem}
\begin{proof}
From (\ref{eq:F9}) we have the following self-similarity for $\Lambda_{4}$:
It is the following identity of sets
\begin{equation}
\Lambda_{4}=\left\{ 0,1\right\} +4\Lambda_{4}.\label{eq:F12}
\end{equation}
Note that (\ref{eq:F12}) is an algorithm for generating points in
$\Lambda_{4}$. Hence,
\begin{align*}
F\left(s\right) & =\sum_{\lambda\in\Lambda_{4}}s^{\lambda}=\sum_{\left\{ 0,1\right\} +4\Lambda_{4}}s^{\lambda}\\
& =\sum_{4\Lambda_{4}}s^{\lambda}+s\sum_{4\Lambda_{4}}s^{\lambda}\\
& =\left(1+s\right)F\left(s^{4}\right)\\
& =\big(1+s\big)\big(1+s^{4}\big)\cdots\big(1+s^{4^{n-1}}\big)F\big(s^{4^{n}}\big)
\end{align*}
by induction. Hence, if $s\in\mathbb{D}$, then $s^{4^{n}}\rightarrow0$,
so $F\big(s^{4^{n}}\big)\rightarrow F\left(0\right)=1$; the infinite
product is absolutely convergent, and the desired product formula
(\ref{eq:F11}) follows.
\end{proof}
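The identity (\ref{eq:F11}) can also be confirmed computationally: with matching truncations (digits $k<N$ on the left, factors $n<N$ on the right), the sum and the product agree exactly, each subset of $\left\{ 4^{0},\dots,4^{N-1}\right\} $ contributing one monomial. A minimal Python sketch:

```python
def lambda4(max_power):
    # Λ4 truncated: all finite sums Σ α_k 4^k with α_k in {0,1}, k < max_power;
    # built from the self-similarity Λ4 = {0,1} + 4·Λ4 (eq. F12)
    vals = {0}
    for k in range(max_power):
        vals |= {v + 4**k for v in vals}
    return sorted(vals)

def F_sum(s, max_power=8):
    # truncated generating function F(s) = Σ_{λ in Λ4} s^λ
    return sum(s**lam for lam in lambda4(max_power))

def F_prod(s, n_terms=8):
    # truncated infinite product Π_{n=0}^{n_terms-1} (1 + s^{4^n})
    prod = 1.0
    for n in range(n_terms):
        prod *= 1 + s ** (4 ** n)
    return prod
```

For $\left|s\right|<1$ both truncations converge to $F\left(s\right)$ as the truncation level grows.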
\begin{rem}
Note that, in combination with the theorem from \cite{MR1655831}
(see also \cite{jorgensen2018harmonic}), this property of the generating
function $F=F_{\Lambda_{4}}$ from \lemref{F4} is used in the derivation
of the assertions made about the factorization properties in \exaref{F3};
this includes the two formulas (Ex 3) as stated in \tabref{F1}; as
well as of the verification that $\left(k_{z},\mu_{4}\right)\in\mathscr{F}\left(K\right)$,
where $k_{z}$, $\mu_{4}$, and $K$ are as stated.
A direct computation of the two cases, \exaref{F1} and \exaref{F3},
is of interest. Our result, \lemref{D3}, is useful in the construction:
When computing the two Wiener processes $W^{\left(\lambda_{1}\right)}$
and $W^{\left(\mu_{4}\right)}$, one notes that the variances computed
on the intervals $\left[0,x\right]$, for $0<x<1$, are as follows:
\begin{align}
\mathbb{E}\left(\left(W_{\left[0,x\right]}^{\left(\lambda_{1}\right)}\right)^{2}\right) & =\lambda_{1}\left(\left[0,x\right]\right)=x,\;\text{and}\label{eq:F13}\\
\mathbb{E}\left(\left(W_{\left[0,x\right]}^{\left(\mu_{4}\right)}\right)^{2}\right) & =\mu_{4}\left(\left[0,x\right]\right).\label{eq:F14}
\end{align}
So the two functions have the representations as in \figref{F5}.
\begin{figure}[H]
\begin{tabular}{>{\centering}p{0.45\columnwidth}>{\centering}p{0.45\columnwidth}}
\includegraphics[width=0.3\textwidth]{var1} & \includegraphics[width=0.3\textwidth]{var2}\tabularnewline
The variance formula in (\ref{eq:F13}). & The Devil's staircase. The variance formula in (\ref{eq:F14}).\tabularnewline
\end{tabular}
\caption{\label{fig:F5}The two cumulative distributions.}
\end{figure}
\end{rem}
\begin{example}
The following example illustrates the need for a distinction between
$X$ and the families of choices of $M$ in \thmref{E1}. \emph{A priori},
one might expect that if $X\times X\xrightarrow{\;K\;}\mathbb{C}$
is given and p.d., it would be natural to try to equip $X$ with a
$\sigma$-algebra $\mathscr{F}_{X}$ of subsets, and a measure $\mu$
such that the condition in (\ref{eq:D2}) holds for $\left(X,\mathscr{F}_{X},\mu\right)$,
i.e.,
\begin{equation}
K\left(x,y\right)=\int_{X}\overline{k}_{x}k_{y}d\mu,\;\left(x,y\right)\in X\times X
\end{equation}
with $\left\{ k_{x}\right\} _{x\in X}$ a system in $L^{2}\left(X,\mathscr{F}_{X},\mu\right)$.
It turns out that there are interesting examples where this is known
to \emph{not }be feasible. The best known such example is perhaps
the Drury-Arveson kernel; see \cite{MR1668582} and \cite{MR2419381,MR2648865}.
Specifically, consider $\mathbb{C}^{k}$ for $k\geq2$, and let $B_{k}\subset\mathbb{C}^{k}$
be the open complex ball defined, for $z=\left(z_{1},\cdots,z_{k}\right)\in\mathbb{C}^{k}$, by
\begin{equation}
B_{k}:=\Big\{ z\in\mathbb{C}^{k}\mathrel{;}\underset{\left\Vert z\right\Vert _{2}^{2}}{\underbrace{\sum\nolimits _{j=1}^{k}\left|z_{j}\right|^{2}}}<1\Big\}.
\end{equation}
For $z,w\in\mathbb{C}^{k}$, set
\begin{align}
\left\langle z,w\right\rangle & :=\sum_{j=1}^{k}z_{j}\overline{w}_{j},\;\text{and}\nonumber \\
K_{DA}\left(z,w\right) & :=\frac{1}{1-\left\langle z,w\right\rangle },\;\left(z,w\right)\in B_{k}\times B_{k}.\label{eq:F17}
\end{align}
\end{example}
\begin{cor}[{Arveson \cite[Coroll 2]{MR1668582}}]
\label{cor:F7}Let $k\geq2$, and let $\mathscr{H}\left(K_{DA}\right)$
be the RKHS of the D-A kernel in (\ref{eq:F17}). Then there is \uline{no}
Borel measure $\mu$ on $\mathbb{C}^{k}$ such that $\left(\mathbb{C}^{k},\mathscr{B}_{k},\mu\right)\in\mathscr{F}\left(K_{DA}\right)$;
i.e., there is \uline{no} solution to the formula
\[
\left\Vert f\right\Vert _{\mathscr{H}\left(K_{DA}\right)}^{2}=\int_{\mathbb{C}^{k}}\left|f\left(z\right)\right|^{2}d\mu\left(z\right),
\]
valid for all polynomials $f\left(z\right)$ in the $k$ complex variables $z=\left(z_{1},\cdots,z_{k}\right)$.
\end{cor}
\begin{rem}
It is natural to ask about disintegration properties for the Gaussian
process $V_{DA}$ corresponding to the Drury-Arveson kernel (\ref{eq:F17}).
Combining our \thmref{E1} above with \corref{F7}, we conclude that,
in two or more complex dimensions $k$, the question of finding the
admissible disintegrations of this Gaussian process $V_{DA}$
is subtle. It must necessarily involve measure spaces going beyond
$\mathbb{C}^{k}$.
\end{rem}
\section{\label{sec:atomic}The case of $\left(k_{x},\mu\right)\in\mathscr{F}\left(K\right)$
when $\mu$ is atomic}
Below we present a case where $\mu$ from pairs in $\mathscr{F}\left(K\right)$
may be chosen to be atomic. The construction is general, but for the
sake of simplicity we shall assume that a given p.d. $K$ is such
that the RKHS $\mathscr{H}\left(K\right)$ is separable, i.e., that
it has an orthonormal basis (ONB) indexed by $\mathbb{N}$.
\begin{defn}
\label{def:G1}Let $\mathscr{H}$ be a Hilbert space (separable),
and let $\left\{ g_{n}\right\} _{n\in\mathbb{N}}$ be a system of
vectors in $\mathscr{H}$ such that
\begin{equation}
\sum_{n\in\mathbb{N}}\left|\left\langle \psi,g_{n}\right\rangle _{\mathscr{H}}\right|^{2}=\left\Vert \psi\right\Vert _{\mathscr{H}}^{2}\label{eq:F1}
\end{equation}
holds for all $\psi\in\mathscr{H}$. We then say that $\left\{ g_{n}\right\} _{n\in\mathbb{N}}$
is a Parseval frame for $\mathscr{H}$. (Also see \defref{J1}.)
An equivalent assumption is that the mapping
\begin{equation}
\mathscr{H}\ni\psi\xmapsto{\quad T\quad}\left(\left\langle \psi,g_{n}\right\rangle _{\mathscr{H}}\right)\in l^{2}\left(\mathbb{N}\right)
\end{equation}
is isometric. One checks that then the adjoint $T^{*}:l^{2}\rightarrow\mathscr{H}$
is:
\[
T^{*}\left(\left(\xi_{n}\right)\right)=\sum_{n\in\mathbb{N}}\xi_{n}g_{n}\in\mathscr{H}.
\]
\end{defn}
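A concrete finite-dimensional illustration of (\ref{eq:F1}): in $\mathscr{H}=\mathbb{R}^{2}$, the three equiangular vectors of norm $\sqrt{2/3}$ (the so-called Mercedes-Benz frame; our choice of example, not from the discussion above) form a Parseval frame which is not an ONB. A minimal Python check:

```python
import math

# Mercedes-Benz frame: three vectors at angles 2πk/3, each scaled to norm sqrt(2/3)
frame = [(math.sqrt(2 / 3) * math.cos(2 * math.pi * k / 3),
          math.sqrt(2 / 3) * math.sin(2 * math.pi * k / 3)) for k in range(3)]

def frame_energy(psi):
    # left-hand side of (eq. F1): the sum of squared frame coefficients
    return sum((psi[0] * g[0] + psi[1] * g[1]) ** 2 for g in frame)

# Parseval identity: frame_energy(ψ) equals ||ψ||² for every ψ in R²
```

Note that the three frame vectors are linearly dependent, so this Parseval frame is not a basis; the isometry $T$ of \defref{G1} maps $\mathbb{R}^{2}$ onto a proper subspace of $l^{2}\left(\left\{ 1,2,3\right\} \right)$.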
For general background references on frames in Hilbert space, we refer
to \cite{MR2367342,MR2538596,MR3118429,MR3005286,MR3009685,MR3204026,MR3121682,MR3275625,MR3574374},
and also see \cite{MR3005286,MR3526434,MR3688637,MR3700114,MR3800275}.
\begin{lem}
\label{lem:G2}Let $K$ be given p.d. on $X\times X$, and assume
that $\left\{ g_{n}\right\} _{n\in\mathbb{N}}$ is a Parseval frame
in $\mathscr{H}\left(K\right)$; then
\begin{equation}
K\left(x,y\right)=\sum_{n\in\mathbb{N}}g_{n}\left(x\right)\overline{g_{n}\left(y\right)}\label{eq:G3}
\end{equation}
with the sum on the RHS in (\ref{eq:G3}) absolutely convergent.
\end{lem}
\begin{proof}
By the reproducing property of $\mathscr{H}\left(K\right)$, see \secref{pdk},
we get, for all $\left(x,y\right)\in X\times X$:
\begin{eqnarray*}
K\left(x,y\right) & = & \left\langle K\left(\cdot,x\right),K\left(\cdot,y\right)\right\rangle _{\mathscr{H}\left(K\right)}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:F1}\right)}}}{=} & \sum_{n\in\mathbb{N}}\left\langle K\left(\cdot,x\right),g_{n}\right\rangle _{\mathscr{H}\left(K\right)}\left\langle g_{n},K\left(\cdot,y\right)\right\rangle _{\mathscr{H}\left(K\right)}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:A4}\right)}}}{=} & \sum_{n\in\mathbb{N}}g_{n}\left(x\right)\overline{g_{n}\left(y\right)}.
\end{eqnarray*}
\end{proof}
Now a direct application of the argument in the proof of \thmref{E1}
yields the following:
\begin{cor}
Let $K$ be given p.d. on $X\times X$ such that $\mathscr{H}\left(K\right)$
is separable, and let $\left\{ g_{n}\right\} _{n\in\mathbb{N}}$ be
a Parseval frame, for example an ONB in $\mathscr{H}\left(K\right)$.
Let $\left\{ \zeta_{n}\right\} _{n\in\mathbb{N}}$ be a chosen system
of i.i.d. (independent identically distributed) standard Gaussian
random variables, i.e., each with $N\left(0,1\right)$-distribution of density $\nicefrac{1}{\sqrt{2\pi}}e^{\nicefrac{-s^{2}}{2}}$,
$s\in\mathbb{R}$. Then the following sum defines a Gaussian process,
\begin{equation}
V_{x}\left(\cdot\right):=\sum_{n\in\mathbb{N}}g_{n}\left(x\right)\zeta_{n}\left(\cdot\right),
\end{equation}
i.e., $\left\{ V_{x}\right\} _{x\in X}$ is well-defined in $L^{2}\left(\Omega,Cyl,\mathbb{P}\right)$,
as stated, where $\Omega=\mathbb{R}^{\mathbb{N}}$ as a realization
in an infinite Cartesian product with the usual cylinder $\sigma$-algebra,
and $\left\{ V_{x}\right\} _{x\in X}$ has $K$ as covariance kernel,
i.e.,
\[
\mathbb{E}\left(\overline{V}_{x}V_{y}\right)=K\left(x,y\right),\;\forall\left(x,y\right)\in X\times X;
\]
see (\ref{eq:D11}).
\end{cor}
\begin{proof}
This is a direct application of \lemref{G2}, and we leave the remaining
verifications to the reader.
\end{proof}
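For the kernel $K\left(x,y\right)=x\wedge y$ of \exaref{F1}, the expansion (\ref{eq:G3}) can be checked numerically with an explicit ONB of the Cameron-Martin space: our choice here is the standard Karhunen-Loève system $g_{n}\left(x\right)=\sqrt{2}\,\sin\big(\big(n-\frac{1}{2}\big)\pi x\big)/\big(\big(n-\frac{1}{2}\big)\pi\big)$, whose derivatives $\sqrt{2}\cos\big(\big(n-\frac{1}{2}\big)\pi x\big)$ form an ONB of $L^{2}\left[0,1\right]$. A minimal Python sketch of the partial sums of (\ref{eq:G3}):

```python
import math

def g(n, x):
    # ONB of the Cameron-Martin space for K(x,y) = x ∧ y:
    # g_n(x) = sqrt(2) sin((n - 1/2) π x) / ((n - 1/2) π), n = 1, 2, ...
    w = (n - 0.5) * math.pi
    return math.sqrt(2.0) * math.sin(w * x) / w

def kernel_partial_sum(x, y, n_terms=2000):
    # partial sums of (eq. G3): Σ_n g_n(x) g_n(y), converging to x ∧ y
    return sum(g(n, x) * g(n, y) for n in range(1, n_terms + 1))
```

The tail of the series is bounded by $\sum_{n>N}2/\left(\left(n-\frac{1}{2}\right)\pi\right)^{2}=O\left(1/N\right)$, so a few thousand terms already reproduce $x\wedge y$ to three or four digits.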
\section{Point processes: The case when $\left\{ \delta_{x}\right\} \subset\mathscr{H}\left(K\right)$}
Let $X\times X\xrightarrow{\;K\;}\mathbb{R}$ be a fixed positive
definite kernel. We know that the RKHS $\mathscr{H}\left(K\right)$
consists of functions $h$ on $X$ subject to the \emph{a priori}
estimate in \lemref{B1}. For recent work on point-processes over
infinite networks \cite{MR3860446,MR3246982,MR3843552,MR3670916,MR3390972,MR3861681,MR3848251,MR3881668,MR3854505},
the case when the Dirac measures $\delta_{x}$ are in $\mathscr{H}\left(K\right)$
is of special significance. In this case there is an abstract Laplace
operator $\Delta$, defined as follows:
\begin{equation}
\left(\Delta h\right)\left(x\right)=\left\langle \delta_{x},h\right\rangle _{\mathscr{H}\left(K\right)},\;\forall h\in\mathscr{H}\left(K\right).\label{eq:H1}
\end{equation}
For the $\left\Vert \cdot\right\Vert _{\mathscr{H}\left(K\right)}$-norm
of $\delta_{x}$, we have
\begin{equation}
\left(\Delta\delta_{x}\right)\left(x\right)=\left\Vert \delta_{x}\right\Vert _{\mathscr{H}\left(K\right)}^{2};\label{eq:H2}
\end{equation}
immediate from (\ref{eq:H1}).
For every finite subset $F\subset X$, we consider the induced $\left|F\right|\times\left|F\right|$
matrix
\begin{equation}
K_{F}=\left(K\left(x,y\right)\right)_{x,y\in F}.
\end{equation}
Note that $K_{F}$ is a positive definite square matrix. Its spectrum
consists of eigenvalues $\lambda_{s}\left(F\right)$.
If $\left(K,X\right)$ is as described, i.e., $X\times X\xrightarrow{\;K\;}\mathbb{R}\left(\text{or }\mathbb{C}\right)$
p.d., and if
\begin{equation}
\left\{ \delta_{x}\right\} _{x\in X}\subset\mathscr{H}\left(K\right),\label{eq:H4-1}
\end{equation}
we shall see that $X$ must then be discrete. (In interesting cases,
also countable.) If (\ref{eq:H4-1}) holds, we shall say that $\left(K,X\right)$
is a \emph{point process}. We shall further show that point processes
arise by restriction as follows:
Let $\left(K,X\right)$ be given with $K$ a p.d. kernel. If a countable
subset $S\subset X$ is such that $K^{\left(S\right)}:=K\big|_{S\times S}$
has
\begin{equation}
\left\{ \delta_{x}\right\} _{x\in S}\subset\mathscr{H}(K^{\left(S\right)}),
\end{equation}
then we shall say that $\left(K^{\left(S\right)},S\right)$ is an
\emph{induced point process}.
\subsection{Nets of finite submatrices, and their limits}
Let $(K,X)$ be as above, with $K$ p.d. and defined on $X\times X$.
The finite submatrices in the subsection header are then indexed by
the net of all finite subsets $F$ of $X$ as follows: Given $F$,
then the corresponding $\left|F\right|\times\left|F\right|$ square
matrix $K_{F}$ is simply the restriction of $K$ to $F\times F$.
Of course, each matrix $K_{F}$ is positive definite, and so it has
a finite list of eigenvalues. These eigenvalue lists figure in the
discussion below.
\begin{lem}
\label{lem:H1}Let $K$, $F$, and $K_{F}$ be as above, with $\lambda_{s}\left(F\right)$
denoting the numbers in the list of eigenvalues for the matrix $K_{F}$.
Then
\begin{equation}
1\leq\lambda_{s}\left(F\right)\sum_{x\in F}\left\Vert \delta_{x}\right\Vert _{\mathscr{H}\left(K\right)}^{2}.\label{eq:H4}
\end{equation}
\end{lem}
\begin{proof}
Consider the eigenvalue equation
\begin{equation}
\left(\xi_{x}\right)_{x\in F},\quad\sum_{x\in F}\left|\xi_{x}\right|^{2}=\left\Vert \xi\right\Vert _{2}^{2}=1,\quad K_{F}\xi=\lambda_{s}\left(F\right)\xi.
\end{equation}
From \lemref{B1} and for $x\in F$, we then get
\begin{align}
\left|\xi_{x}\right|^{2} & \leq\left\Vert \delta_{x}\right\Vert _{\mathscr{H}\left(K\right)}^{2}\left\langle \xi,K_{F}\xi\right\rangle _{l^{2}\left(F\right)}\nonumber \\
& =\left\Vert \delta_{x}\right\Vert _{\mathscr{H}\left(K\right)}^{2}\lambda_{s}\left(F\right).\label{eq:H6}
\end{align}
Now apply $\sum_{x\in F}$ to both sides in (\ref{eq:H6}), and the
desired conclusion (\ref{eq:H4}) follows.
\end{proof}
\begin{rem}
A consequence of the lemma is that the matrices $K_{F}^{-1}$ and
$K_{F}^{-1/2}$ automatically are well defined (by the spectral theorem)
with associated spectral bounds.
\end{rem}
\begin{defn}
Let $K$, $F$, and $K_{F}$ be as above; and with the condition $\delta_{x}\in\mathscr{H}\left(K\right)$
in force. Set
\begin{equation}
\mathscr{H}_{K}\left(F\right):=\operatorname{span}_{x\in F}\left\{ K\left(\cdot,x\right)\right\} .
\end{equation}
\end{defn}
It is a finite-dimensional (and therefore closed) subspace in $\mathscr{H}\left(K\right)$.
The orthogonal projection onto $\mathscr{H}_{K}\left(F\right)$ will
be denoted $P_{F}:\mathscr{H}\left(K\right)\rightarrow\mathscr{H}_{K}\left(F\right)$.
\begin{lem}
\label{lem:H4}Let $K$, $F$, $K_{F}$, and $\mathscr{H}_{K}\left(F\right)$
be as above. Then the orthogonal projection $P_{F}$ is as follows:
For $h\in\mathscr{H}\left(K\right)$, set $h_{F}=h\big|_{F}$, restriction:
\begin{equation}
\left(P_{F}h\right)\left(\cdot\right)=\sum_{y\in F}\left(K_{F}^{-1}h_{F}\right)\left(y\right)K\left(\cdot,y\right).\label{eq:H8}
\end{equation}
\end{lem}
\begin{proof}
It is immediate from the definition that $P_{F}h$ has the form
\begin{equation}
P_{F}h=\sum_{y\in F}\xi_{y}K\left(\cdot,y\right)
\end{equation}
with $\left(\xi_{y}\right)_{y\in F}\in\mathbb{C}^{\left|F\right|}$.
Since $P_{F}$ is the orthogonal projection,
\begin{equation}
\left(h-P_{F}h\right)\perp_{\mathscr{H}\left(K\right)}\left\{ K\left(\cdot,y\right)\right\} _{y\in F}
\end{equation}
(orthogonality in the $\mathscr{H}\left(K\right)$-inner product)
which yields:
\[
h\left(x\right)=\left(K_{F}\xi\right)\left(x\right)\left(=\sum\nolimits _{y\in F}K\left(x,y\right)\xi_{y}\right),\;\forall x\in F;
\]
and therefore, $\xi=K_{F}^{-1}h_{F}$, which is the desired formula
(\ref{eq:H8}).
\end{proof}
\begin{cor}
\label{cor:H5}Let $X$, $K$, $\mathscr{H}\left(K\right)$ be as
above, and assume $\delta_{x}\in\mathscr{H}\left(K\right)$ for some
$x\in X$. Then a function $h$ on $X$ is in $\mathscr{H}\left(K\right)$
if and only if
\begin{equation}
\sup_{F}\left\Vert K_{F}^{-1/2}h_{F}\right\Vert _{l^{2}\left(F\right)}<\infty,\label{eq:H11}
\end{equation}
where the supremum is over all finite subsets $F$ of $X$. If $h$
satisfies (\ref{eq:H11}), i.e., has finite energy, then
\begin{equation}
\left\Vert h\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\sup_{F}\left\Vert K_{F}^{-1/2}h_{F}\right\Vert _{l^{2}\left(F\right)}^{2}.\label{eq:H12}
\end{equation}
\end{cor}
\begin{proof}
The proof follows from an application of Hilbert-space geometry in
the RKHS $\mathscr{H}\left(K\right)$, applied to the family of orthogonal
projections $P_{F}$ indexed by the finite subsets $F$ of $X$. With
the standard lattice operations, applied to projections, we have $\sup_{F}P_{F}=I_{\mathscr{H}\left(K\right)}$.
The conclusions (\ref{eq:H11})-(\ref{eq:H12}) follow from this since,
by the lemma,
\begin{eqnarray}
\left\Vert P_{F}h\right\Vert _{\mathscr{H}\left(K\right)}^{2} & \underset{\text{by \ensuremath{\left(\ref{eq:A3}\right)}}}{=} & \left\langle K_{F}^{-1}h_{F},K_{F}K_{F}^{-1}h_{F}\right\rangle _{l^{2}\left(F\right)}\nonumber \\
& = & \left\langle h_{F},K_{F}^{-1}h_{F}\right\rangle _{l^{2}\left(F\right)}=\left\Vert K_{F}^{-1/2}h_{F}\right\Vert _{l^{2}\left(F\right)}^{2}.\label{eq:H13}
\end{eqnarray}
\end{proof}
\begin{rem}
The advantage with the use of this system of orthogonal projections
$P_{F}$, indexed by the finite subsets $F$ of $X$, is that we may
then take advantage of the known lattice operations for orthogonal
projections in Hilbert space. But it is important that we get approximation
with respect to the canonical norm in the RKHS $\mathscr{H}\left(K\right)$.
This works because, by our construction, the orthogonality properties
for the projections $P_{F}$ refer precisely to the inner product
in $\mathscr{H}\left(K\right)$. Naturally we get the best $\mathscr{H}\left(K\right)$-approximation
properties when $X$ is further assumed countable. But the formula
for the $\mathscr{H}\left(K\right)$-norm holds in general.
\end{rem}
\begin{cor}
\label{cor:H7}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be fixed,
assumed p.d., and let $\mathscr{H}\left(K\right)$ be the corresponding
RKHS. Let $x\in X$ be given. Then $\delta_{x}\in\mathscr{H}\left(K\right)$
if and only if
\begin{equation}
\sup_{F\subset X,\:F\text{ finite, \ensuremath{x\in F}}}\left(K_{F}^{-1}\right)_{x,x}<\infty.\label{eq:H13-1}
\end{equation}
In this case, we have:
\[
\left\Vert \delta_{x}\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\text{the supremum in }\left(\ref{eq:H13-1}\right).
\]
\end{cor}
\begin{proof}
The result is immediate from \corref{H5} applied to $h:=\delta_{x}$,
where $x$ is fixed. Here the terms in (\ref{eq:H12}) are, for $F$
finite, $x\in F$:
\begin{equation}
\left\langle \delta_{x}\big|_{F},K_{F}^{-1}\left(\delta_{x}\big|_{F}\right)\right\rangle _{l^{2}\left(F\right)}=\left(K_{F}^{-1}\right)_{x,x},
\end{equation}
and the stated conclusion is now immediate.
\end{proof}
\begin{cor}
\label{cor:H8}Let $X$, $K$, and $\mathscr{H}\left(K\right)$ be
as above, but assume now that $X$ is countable, with a monotone net
of finite sets:
\begin{equation}
F_{1}\subset F_{2}\subset F_{3}\cdots,\;\text{and}\quad X=\cup_{i\in\mathbb{N}}F_{i};
\end{equation}
then a function $h$ on $X$ is in $\mathscr{H}\left(K\right)$ iff
$\sup_{i}\left\Vert K_{F_{i}}^{-1/2}h\big|_{F_{i}}\right\Vert _{l^{2}\left(F_{i}\right)}<\infty$.
Moreover,
\begin{equation}
\left\Vert h\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\lim_{i\rightarrow\infty}\left\Vert K_{F_{i}}^{-1/2}h\big|_{F_{i}}\right\Vert _{l^{2}\left(F_{i}\right)}^{2},\label{eq:H15}
\end{equation}
where the convergence in (\ref{eq:H15}) is monotone.
\end{cor}
\begin{proof}
From the definition of the order of orthogonal projections, we have
\begin{equation}
P_{F_{1}}\leq P_{F_{2}}\leq P_{F_{3}}\leq\cdots,
\end{equation}
and therefore,
\begin{equation}
\left\Vert P_{F_{1}}h\right\Vert _{\mathscr{H}\left(K\right)}^{2}\leq\left\Vert P_{F_{2}}h\right\Vert _{\mathscr{H}\left(K\right)}^{2}\leq\left\Vert P_{F_{3}}h\right\Vert _{\mathscr{H}\left(K\right)}^{2}\leq\cdots,\label{eq:H17}
\end{equation}
with $\lim_{i\rightarrow\infty}\left\Vert P_{F_{i}}h\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\left\Vert h\right\Vert _{\mathscr{H}\left(K\right)}^{2}$.
But by (\ref{eq:H13}) and the proof of \corref{H5}, we have
\[
\left\Vert K_{F_{i}}^{-1/2}h\big|_{F_{i}}\right\Vert _{l^{2}\left(F_{i}\right)}^{2}=\left\Vert P_{F_{i}}h\right\Vert _{\mathscr{H}\left(K\right)}^{2}
\]
and, so, by (\ref{eq:H17}), we get:
\[
\left\Vert K_{F_{1}}^{-1/2}h\big|_{F_{1}}\right\Vert _{l^{2}\left(F_{1}\right)}^{2}\leq\left\Vert K_{F_{2}}^{-1/2}h\big|_{F_{2}}\right\Vert _{l^{2}\left(F_{2}\right)}^{2}\leq\left\Vert K_{F_{3}}^{-1/2}h\big|_{F_{3}}\right\Vert _{l^{2}\left(F_{3}\right)}^{2}\leq\cdots.
\]
The conclusion now follows.
\end{proof}
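The monotone convergence in (\ref{eq:H15}) can be illustrated with the $\mathbb{Z}$-kernel treated in \exaref{H8} below: for $h=K\left(\cdot,5\right)$, i.e., $h\left(m\right)=m\wedge5$ on $F_{N}=\left\{ 1,\dots,N\right\} $, the quadratic forms $\big\langle h_{F_{N}},K_{F_{N}}^{-1}h_{F_{N}}\big\rangle _{l^{2}\left(F_{N}\right)}$ increase to $\left\Vert h\right\Vert ^{2}=K\left(5,5\right)=5$. A minimal Python sketch, using the explicit tridiagonal inverse of the matrix $\left(n\wedge m\right)$ displayed in that example:

```python
def tridiag_inverse(N):
    # inverse of the N x N matrix (n ∧ m): 2 on the diagonal (1 in the
    # lower-right corner), -1 on the two off-diagonals, 0 elsewhere
    T = [[0] * N for _ in range(N)]
    for i in range(N):
        T[i][i] = 2
        if i > 0:
            T[i][i - 1] = T[i - 1][i] = -1
    T[N - 1][N - 1] = 1
    return T

def quad_form(N, x=5):
    # <h_F, K_F^{-1} h_F> for h = K(., x), i.e. h(m) = m ∧ x, on F_N = {1,...,N}
    h = [min(m, x) for m in range(1, N + 1)]
    T = tridiag_inverse(N)
    return sum(h[i] * T[i][j] * h[j] for i in range(N) for j in range(N))

values = [quad_form(N) for N in range(1, 9)]
# monotone: 1, 2, 3, 4, 5, 5, 5, 5 -> the limit ||K(., 5)||² = K(5, 5) = 5
```

The sequence is strictly increasing until $5\in F_{N}$, and constant thereafter, since then $K\left(\cdot,5\right)\in\mathscr{H}_{K}\left(F_{N}\right)$ and $P_{F_{N}}$ fixes it.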
\subsection{Restrictions of p.d. kernels}
Below we shall be considering pairs $(K,X)$ with $K$ a fixed p.d.
kernel defined on $X\times X$, and, as before, we denote by $\mathscr{H}\left(K\right)$
the corresponding RKHS with its canonical inner product. In general,
$X$ is an arbitrary set, typically of large cardinality, in particular
uncountable: It may be a complex domain, a generalized boundary, or
it may be a manifold arising from problems in physics, in signal processing,
or in machine learning models. Moreover, for such general pairs $(K,X)$,
with $K$ a fixed p.d. kernel, the Dirac functions $\delta_{x}$ are
typically not in $\mathscr{H}\left(K\right)$.
Here we shall turn to induced systems, indexed by suitable countable
discrete subsets $S$ of $X$. Indeed, for a number of sampling or
interpolation problems, it is possible to identify countable discrete
subsets $S$ of $X$, such that when $K$ is restricted to $S\times S$,
i.e., $K^{\left(S\right)}:=K\big|_{S\times S}$, then for $x\in S$,
the Dirac functions $\delta_{x}$ will be in $\mathscr{H}\left(K^{\left(S\right)}\right)$;
i.e., we get induced point processes indexed by $S$. In fact, with
\corref{H8}, we will be able to identify a variety of such subsets
$S$.
Moreover, each such choice of subset $S$ yields a point process,
an induced graph, and a graph Laplacian; see (\ref{eq:H1})-(\ref{eq:H2}).
These issues will be taken up in detail in the two subsequent sections.
In the following \exaref{H8}, for illustration, we identify a particular
instance of this, when $X=\mathbb{R}$ (the reals), and $S=\mathbb{Z}$
(the integers), and where $K$ is the covariance kernel of standard
Brownian motion on $\mathbb{R}$.
\begin{example}[\textbf{Discretizing the covariance function for Brownian motion on
$\mathbb{R}$}]
\label{exa:H8}The present example is a variant of \exaref{F1},
but with $X=\mathbb{R}$ (instead of the interval $\left[0,1\right]$).
We now set
\begin{equation}
K\left(x,y\right):=\begin{cases}
\left|x\right|\wedge\left|y\right| & \left(x,y\right)\in\mathbb{R}\times\mathbb{R},\;xy\geq0;\\
0 & xy<0.
\end{cases}\label{eq:H22}
\end{equation}
It is immediate that (\ref{eq:F6}) in \exaref{F1} carries over,
but now with $\mathbb{R}$ in place of $\left[0,1\right]$. The normalization
$f\left(0\right)=0$ is carried over. We get that a function $f$
on $\mathbb{R}$ is in $\mathscr{H}\left(K\right)$ iff its distribution
derivative $f'=df/dx$ is in $L^{2}\left(\mathbb{R}\right)$; see (\ref{eq:H19}).
As before, we conclude that the $\mathscr{H}\left(K\right)$-norm
is:
\begin{equation}
\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\int_{\mathbb{R}}\left|f'\right|^{2}dx;\label{eq:H19}
\end{equation}
see also \lemref{D3}.
Set
\begin{equation}
K^{\left(\mathbb{Z}\right)}=K\big|_{\mathbb{Z}\times\mathbb{Z}},
\end{equation}
and consider the corresponding RKHS $\mathscr{H}\left(K^{\left(\mathbb{Z}\right)}\right)$.
Using \cite{MR3450534,MR3507188}, we conclude that functions $\Phi$
on $\mathbb{Z}$ are in $\mathscr{H}\left(K^{\left(\mathbb{Z}\right)}\right)$
iff $\Phi\left(0\right)=0$, and
\[
\sum_{n\in\mathbb{Z}}\left|\Phi\left(n\right)-\Phi\left(n+1\right)\right|^{2}<\infty.
\]
In that case,
\begin{equation}
\left\Vert \Phi\right\Vert _{\mathscr{H}\left(K^{\left(\mathbb{Z}\right)}\right)}^{2}=\sum_{n\in\mathbb{Z}}\left|\Phi\left(n\right)-\Phi\left(n+1\right)\right|^{2}.\label{eq:H24-1}
\end{equation}
For the $\mathbb{Z}$-kernel, we have $\left\{ \delta_{n}\right\} _{n\in\mathbb{Z}\setminus\left\{ 0\right\} }\subset\mathscr{H}\left(K^{\left(\mathbb{Z}\right)}\right)$
(note that $\delta_{0}\notin\mathscr{H}(K^{\left(\mathbb{Z}\right)})$,
because of the normalization $\Phi\left(0\right)=0$), and
\begin{equation}
\delta_{n}\left(\cdot\right)=2K\left(\cdot,n\right)-K\left(\cdot,n+1\right)-K\left(\cdot,n-1\right),\;\forall n\in\mathbb{Z}\setminus\left\{ 0\right\} .\label{eq:H26-1}
\end{equation}
Moreover, the corresponding Laplacian $\Delta$ from (\ref{eq:H1})
is
\begin{equation}
\left(\Delta\Phi\right)\left(n\right)=2\Phi\left(n\right)-\Phi\left(n+1\right)-\Phi\left(n-1\right),
\end{equation}
i.e., the standard discretized Laplacian.
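The identity (\ref{eq:H26-1}) can be checked pointwise by direct evaluation of the kernel (\ref{eq:H22}); a minimal Python sketch:

```python
def K(x, y):
    # the kernel (eq. H22): |x| ∧ |y| when x y >= 0, and 0 otherwise
    return min(abs(x), abs(y)) if x * y >= 0 else 0

def rhs(n, m):
    # right-hand side of (eq. H26-1), evaluated at the integer point m
    return 2 * K(m, n) - K(m, n + 1) - K(m, n - 1)

# for n ≠ 0, rhs(n, m) reproduces the Dirac function: 1 if m == n, else 0
```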
For the matrices $K_{F}^{\left(\mathbb{Z}\right)}$, $F\subset\mathbb{Z}$,
we have the following, illustrated with $F=F_{N}=\left\{ 1,2,\cdots,N\right\} $:
\begin{align}
K_{F_{N}}^{\left(\mathbb{Z}\right)} & =\begin{bmatrix}1 & 1 & 1 & 1 & \cdots & \cdots & 1\\
1 & 2 & 2 & 2 & \cdots & \cdots & 2\\
1 & 2 & 3 & 3 & \cdots & \cdots & 3\\
1 & 2 & 3 & 4 & \cdots & \cdots & 4\\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\
\vdots & \vdots & \vdots & \vdots & \ddots & N-1 & N-1\\
1 & 2 & 3 & 4 & \cdots & N-1 & N
\end{bmatrix},\\
\intertext{and}\left(K_{F_{N}}^{\left(\mathbb{Z}\right)}\right)^{-1} & =\begin{bmatrix}2 & -1 & 0 & 0 & 0 & \cdots & 0\\
-1 & 2 & -1 & 0 & 0 & \cdots & 0\\
0 & -1 & 2 & -1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots\\
\vdots & & \ddots & -1 & 2 & -1 & 0\\
0 & 0 & \cdots & 0 & -1 & 2 & -1\\
0 & 0 & \cdots & 0 & 0 & -1 & 1
\end{bmatrix}.
\end{align}
In particular, we have for $n,m\in\mathbb{Z}$:
\[
\left\langle \delta_{n},\delta_{m}\right\rangle _{\mathscr{H}\left(K^{\left(\mathbb{Z}\right)}\right)}=\begin{cases}
2 & \text{if \ensuremath{n=m}}\\
-1 & \text{if \ensuremath{\left|n-m\right|}=1}\\
0 & \text{otherwise}.
\end{cases}
\]
\end{example}
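The identities above are easy to check numerically. The following sketch (ours, in Python with NumPy; not part of the original text) builds the finite section $K_{F_{N}}$ with entries $n\wedge m$ and confirms that its inverse is exactly the tridiagonal matrix displayed above; in the interior, these entries agree with the Gram values $\left\langle \delta_{n},\delta_{m}\right\rangle$.

```python
import numpy as np

# K_{F_N}(n, m) = n ∧ m for F_N = {1, ..., N}
N = 6
idx = np.arange(1, N + 1)
K = np.minimum.outer(idx, idx).astype(float)

# Tridiagonal matrix from the display above: 2 on the diagonal
# (except the last entry, which is 1) and -1 on the off-diagonals
M = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
M[-1, -1] = 1.0

# The inverse of the finite section is exactly this tridiagonal matrix
print(np.allclose(np.linalg.inv(K), M))
```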
\begin{rem}
The determinant of $K_{F_{N}}^{\left(\mathbb{Z}\right)}$ is $1$ for
all $N$. \emph{Proof}. Eliminating the first column, and then the
first row, reduces $\det(K_{F_{N}}^{\left(\mathbb{Z}\right)})$ to
$\det(K_{F_{N-1}}^{\left(\mathbb{Z}\right)})$; so by induction, the
determinant is $1$.
Note that
\[
\sum_{k\in\mathbb{Z}}\chi_{\left[1,n\right]}\left(k\right)\chi_{\left[1,m\right]}\left(k\right)=n\wedge m
\]
which yields the factorization
\begin{equation}
K_{F_{N}}^{\left(\mathbb{Z}\right)}=A_{N}A_{N}^{*},\label{eq:H29-1}
\end{equation}
i.e.,
\[
K_{F_{N}}^{\left(\mathbb{Z}\right)}\left(n,m\right)=\left(A_{N}A_{N}^{*}\right)_{n,m}=\sum_{k=1}^{N}A_{N}\left(n,k\right)A_{N}^{*}\left(k,m\right),
\]
where $A_{N}$ is the $N\times N$ lower triangular matrix given by
\[
A_{N}=\begin{bmatrix}1 & 0 & \cdots & \cdots & 0\\
1 & 1 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
\vdots & \vdots & \vdots & \ddots & 0\\
1 & 1 & \cdots & \cdots & 1
\end{bmatrix}.
\]
In particular, we get that $\det(K_{F_{N}}^{\left(\mathbb{Z}\right)})=1$
immediately. This is a special case of \thmref{E1}.
For the general case, let $F_{N}=\left\{ x_{j}\right\} _{j=1}^{N}$
be a finite subset of $\left(0,\infty\right)$, assuming $x_{1}<x_{2}<\cdots<x_{N}$.
Then the factorization (\ref{eq:H29-1}) holds with
\begin{equation}
A_{N}=\begin{bmatrix}\sqrt{x_{1}} & 0 & 0 & \cdots & 0\\
\sqrt{x_{1}} & \sqrt{x_{2}-x_{1}} & 0 & \cdots & \vdots\\
\sqrt{x_{1}} & \sqrt{x_{2}-x_{1}} & \sqrt{x_{3}-x_{2}} & \ddots & \vdots\\
\vdots & \vdots & \vdots & \ddots & 0\\
\sqrt{x_{1}} & \sqrt{x_{2}-x_{1}} & \sqrt{x_{3}-x_{2}} & \cdots & \sqrt{x_{N}-x_{N-1}}
\end{bmatrix}.\label{eq:H31}
\end{equation}
Thus,
\begin{equation}
\det(K_{F_{N}})=x_{1}\left(x_{2}-x_{1}\right)\cdots\left(x_{N}-x_{N-1}\right).
\end{equation}
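As a numerical sanity check (a Python/NumPy sketch of ours; the sample points are an arbitrary choice), one can verify both the factorization $K_{F_{N}}=A_{N}A_{N}^{*}$ with $A_{N}$ as in (\ref{eq:H31}), and the product formula for the determinant:

```python
import numpy as np

# Sample points 0 < x_1 < ... < x_N
x = np.array([0.5, 1.2, 2.0, 3.5, 3.9])
inc = np.diff(x, prepend=0.0)      # increments x_j - x_{j-1}, with x_0 := 0

# Lower triangular A_N as in (H31): row n holds the square roots
# of the first n increments
A = np.tril(np.ones((len(x), len(x)))) * np.sqrt(inc)

K = np.minimum.outer(x, x)         # K(x_n, x_m) = x_n ∧ x_m
print(np.allclose(A @ A.T, K))     # the factorization K = A A^*
print(np.isclose(np.linalg.det(K), np.prod(inc)))  # det = product of increments
```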
In the setting of \secref{fac} (finite sums of standard Gaussians),
we have the following: Let $\left\{ x_{i}\right\} _{i=1}^{N}$ be
as in (\ref{eq:H31}), and let $1\leq n,m\leq N$. Let $\left\{ Z_{i}\right\} _{i=1}^{N}$
be a system of i.i.d. (independent, identically distributed) standard
Gaussian random variables $N\left(0,1\right)$. Set
\begin{equation}
V_{n}=Z_{1}\sqrt{x_{1}}+Z_{2}\sqrt{x_{2}-x_{1}}+\cdots+Z_{n}\sqrt{x_{n}-x_{n-1}}.
\end{equation}
Then one checks that
\begin{equation}
\mathbb{E}\left(V_{n}V_{m}\right)=x_{n}\wedge x_{m}=K\left(x_{n},x_{m}\right)
\end{equation}
which is the desired Gaussian realization of $K$.
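A short Monte Carlo sketch (ours; the sample points, sample size, and seed are arbitrary choices) illustrates this realization: drawing i.i.d. standard Gaussians and forming the partial sums $V_{n}$, the empirical covariances approximate $x_{n}\wedge x_{m}$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 1.2, 2.0, 3.5])
inc = np.diff(x, prepend=0.0)

# V_n = Z_1 sqrt(x_1) + Z_2 sqrt(x_2 - x_1) + ... + Z_n sqrt(x_n - x_{n-1}),
# computed as a cumulative sum across the weighted Gaussian draws
m = 200_000
Z = rng.standard_normal((m, len(x)))
V = np.cumsum(Z * np.sqrt(inc), axis=1)

# Empirical covariance matrix vs. K(x_n, x_m) = x_n ∧ x_m
cov = V.T @ V / m
print(np.max(np.abs(cov - np.minimum.outer(x, x))))  # close to zero
```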
Alternatively, $K_{F_{N}}^{\left(\mathbb{Z}\right)}$ admits the
following factorization via non-square matrices: Assume $F_{N}\subset\mathbb{Z}_{+}$;
then
\begin{equation}
K_{F_{N}}^{\left(\mathbb{Z}\right)}=AA^{*},
\end{equation}
where $A$ is the $N\times x_{N}$ matrix such that
\[
A_{n,k}=\begin{cases}
1 & \text{if \ensuremath{1\leq k\leq x_{n}}}\\
0 & \text{otherwise}
\end{cases}.
\]
That is, $A$ takes the form:
\begin{equation}
A=\begin{bmatrix}\tikzmark{1L}1 & \cdots & 1\tikzmark{1R} & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0\\
\\
\\
\tikzmark{2L}1 & \cdots & \cdots & \cdots & 1\tikzmark{2R} & 0 & \cdots & \cdots & \cdots & 0\\
\\
\\
\tikzmark{3L}1 & \cdots & \cdots & \cdots & \cdots & \cdots & 1\tikzmark{3R} & 0 & \cdots & 0\\
\\
\\
\vdots & \vdots & & & \vdots & \vdots & & \vdots & \vdots & \vdots\\
\vdots & \vdots & & & \vdots & \vdots & & \vdots & \vdots & 0\\
\tikzmark{NL}1 & 1 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & 1 & 1\tikzmark{NR}
\end{bmatrix}.
\end{equation}
\tikz[overlay, remember picture, decoration={brace, amplitude=3pt}] {
\draw[decorate,thick] (1R.south) -- (1L.south) node [midway,below=5pt] {$x_{1}$};
\draw[decorate,thick] (2R.south) -- (2L.south) node [midway,below=5pt] {$x_{2}$};
\draw[decorate,thick] (3R.south) -- (3L.south) node [midway,below=5pt] {$x_{3}$};
\draw[decorate,thick] (NR.south) -- (NL.south) node [midway,below=5pt] {$x_{N}$};
}
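This non-square factorization is also easy to verify numerically (a Python/NumPy sketch of ours, with an arbitrary choice of $F_{N}\subset\mathbb{Z}_{+}$):

```python
import numpy as np

x = np.array([1, 3, 4, 7])                # F_N ⊂ Z_+, increasing
width = x[-1]                             # number of columns of A, i.e. x_N

# A[n, k] = 1 if 1 <= k <= x_n, else 0  (columns indexed 1..x_N)
A = (np.arange(1, width + 1) <= x[:, None]).astype(float)

K = np.minimum.outer(x, x).astype(float)  # K(x_n, x_m) = x_n ∧ x_m
print(np.allclose(A @ A.T, K))            # K = A A^* via the indicator rows
```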
\end{rem}
\vspace{2em}
\begin{rem}[Spectrum of the matrices $K_{F}$; see also \cite{MR3034493}]
It is known that the factorization as in (\ref{eq:H29-1}) can be
used to obtain the spectrum of positive definite matrices. The algorithm
is as follows: Let $K$ be a given p.d. matrix.

Initialization: $B:=K$;

Iteration: for $k=1,2,\cdots$,
\begin{enumerate}
\item factor $B=AA^{*}$, where $A$ is the lower triangular matrix in the
Cholesky decomposition of $B$ (see (\ref{eq:H29-1}));
\item set $B:=A^{*}A$.
\end{enumerate}
Then, as the number of iterations tends to infinity, $B$ converges
to a diagonal matrix whose diagonal entries are the eigenvalues of
$K$.
\end{rem}
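The iteration can be sketched in a few lines (ours, in Python/NumPy; the test matrix and the iteration count are illustrative choices). Each step replaces $B=AA^{*}$ by $A^{*}A=A^{-1}BA$, a similarity transformation, so the spectrum is preserved while the off-diagonal entries decay:

```python
import numpy as np

def cholesky_spectrum(K, iters=200):
    """Iterate B -> A^* A, where B = A A^* is the Cholesky factorization."""
    B = np.array(K, dtype=float)
    for _ in range(iters):
        A = np.linalg.cholesky(B)   # lower triangular, B = A A^T
        B = A.T @ A                 # similar to B, hence same spectrum
    return np.diag(B)

idx = np.arange(1, 5)
K = np.minimum.outer(idx, idx).astype(float)
approx = np.sort(cholesky_spectrum(K))
print(np.allclose(approx, np.sort(np.linalg.eigvalsh(K))))
```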
We now resume consideration of the general case of p.d. kernels $K$
on $X\times X$ and their restrictions: a setting for harmonic functions.
\begin{rem}
\label{rem:H10}In the general case of (\ref{eq:H2}) and \lemref{H1},
we still have a Laplace operator $\Delta$. It is a densely defined
symmetric operator on $\mathscr{H}\left(K\right)$. Moreover (general
case),
\begin{equation}
\Delta_{\cdot}K\left(\cdot,x\right)=\delta_{x}\left(\cdot\right),\;\forall x\in X\label{eq:H23}
\end{equation}
(assuming that $\delta_{x}\in\mathscr{H}\left(K\right)$). The dot
``$\cdot$'' in (\ref{eq:H23}) refers to the action variable for
the operator $\Delta$. In other words, $K\left(\cdot,\cdot\right)$
is a generalized Green's kernel.
\end{rem}
\begin{defn}
Let $X\times X\xrightarrow{\;K\:}\mathbb{C}$ be given p.d., and assume
\begin{equation}
\left\{ \delta_{x}\right\} _{x\in X}\subset\mathscr{H}\left(K\right).\label{eq:H24}
\end{equation}
Let $\Delta$ denote the induced Laplace operator. A function $h\in\mathscr{H}\left(K\right)$
is said to be \emph{harmonic} iff $\Delta h=0$.
\end{defn}
\begin{cor}
Let $\left(X,K,\mathscr{H}\left(K\right)\right)$ be as above. Assume
(\ref{eq:H24}), and let $\Delta$ be the induced Laplace operator.
Then we have the following orthogonal decomposition for $\mathscr{H}\left(K\right)$:
\begin{equation}
\mathscr{H}\left(K\right)=\left\{ h\mathrel{;}\Delta h=0\right\} \oplus clospan^{\mathscr{H}\left(K\right)}\left(\left\{ \delta_{x}\right\} _{x\in X}\right)\label{eq:H25}
\end{equation}
where ``clospan'' in (\ref{eq:H25}) refers to the norm in $\mathscr{H}\left(K\right)$.
\end{cor}
\begin{proof}
It is immediate from (\ref{eq:H1}) that
\begin{equation}
\left\{ h\in\mathscr{H}\left(K\right)\mathrel{;}\Delta h=0\right\} =\left(\left\{ \delta_{x}\right\} _{x\in X}\right)^{\perp}\label{eq:H26}
\end{equation}
where the orthogonality ``$\perp$'' in (\ref{eq:H26}) refers to
the inner product $\left\langle \cdot,\cdot\right\rangle _{\mathscr{H}\left(K\right)}$.
Since, by Hilbert space geometry, $\left(\left\{ \delta_{x}\right\} _{x\in X}\right)^{\perp\perp}=clospan^{\mathscr{H}\left(K\right)}\left(\left\{ \delta_{x}\right\} _{x\in X}\right)$,
we only need to observe that $\left\{ h\in\mathscr{H}\left(K\right)\mathrel{;}\Delta h=0\right\} $
is closed in $\mathscr{H}\left(K\right)$. But this is immediate from
(\ref{eq:H1}).
\end{proof}
\begin{cor}[Duality]
\label{cor:H14} Let $X\times X\xrightarrow{\;K\;}\mathbb{R}$ be
given, assumed p.d., and let $S\subset X$ be a countable subset such
that
\begin{equation}
\mathscr{D}\left(S\right):=\left\{ \delta_{x}\right\} _{x\in S}\subset\mathscr{H}(K^{\left(S\right)}).
\end{equation}
\begin{enumerate}
\item Then the following duality holds for the two induced kernels:
\begin{align}
K^{\left(S\right)} & :=K\big|_{S\times S},\;\text{and}\\
D^{\left(S\right)}\left(x,y\right) & :=\left\langle \delta_{x},\delta_{y}\right\rangle _{\mathscr{H}\left(K^{\left(S\right)}\right)},\;\forall\left(x,y\right)\in S\times S;
\end{align}
both being p.d. kernels on $S\times S$.
For every pair $x,y\in S$, we have the following matrix-inversion
formula:
\begin{equation}
\sum_{z\in S}D^{\left(S\right)}\left(x,z\right)K^{\left(S\right)}\left(z,y\right)=\delta_{x,y},\label{eq:H32}
\end{equation}
where the summation on the LHS in (\ref{eq:H32}) is a limit over
a nested sequence of finite subsets $\left\{ F_{i}\right\} _{i\in\mathbb{N}}$,
$F_{1}\subset F_{2}\subset\cdots$, s.t. $\cup_{i}F_{i}=S$; and the
result is independent of the choice of such a sequence.
\item \label{enu:corH14-2}We get an \uline{induced graph} with $S$
as the set of vertices, and edge set $E$ as follows: $E\subset\left(S\times S\right)\backslash\left(\text{diagonal}\right)$.
An edge is a pair $\left(x,y\right)\in\left(S\times S\right)\backslash\left(\text{diagonal}\right)$
such that
\[
\left\langle \delta_{x},\delta_{y}\right\rangle _{\mathscr{H}\left(K^{\left(S\right)}\right)}\neq0.
\]
\end{enumerate}
\end{cor}
\begin{proof}
The result follows from an application of Corollaries \ref{cor:H7}
and \ref{cor:H8}, and \remref{H10}.
\end{proof}
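As a finite-dimensional illustration of part (\ref{enu:corH14-2}) (a Python/NumPy sketch of ours, using a finite section of the Brownian-motion kernel from \exaref{H8}), the matrix $K_{F}^{-1}$ is tridiagonal, so the induced graph on $F=\left\{ 1,\dots,N\right\}$ is the path graph, with edges only between nearest neighbors:

```python
import numpy as np

idx = np.arange(1, 8)
K = np.minimum.outer(idx, idx).astype(float)
D = np.linalg.inv(K)     # pairwise inner products of the deltas on this section

# Edges: unordered pairs (n, m), n != m, with D[n, m] != 0
n_pts = len(idx)
edges = {(n, m) for n in range(n_pts) for m in range(n + 1, n_pts)
         if abs(D[n, m]) > 1e-9}
print(edges == {(n, n + 1) for n in range(n_pts - 1)})   # the path graph
```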
Let $X$, $K$, and $S$ be as stated, with $S$ countably infinite, and
with the assumptions of the previous two results in force. We showed
that the subset $S$ then acquires the structure of the vertex set of
an induced infinite graph (\corref{H14} (\ref{enu:corH14-2})). If
$\Delta$ denotes the corresponding graph Laplacian, then the following
boundary value problem is of great interest: make precise the boundary
conditions at \textquotedblleft infinity\textquotedblright{} for this
graph Laplacian $\Delta$. An answer requires an identification of the
Hilbert space, and of the limit at \textquotedblleft infinity.\textquotedblright{}
The result below is such an answer; the notion of limit will be the
limit over the filter of all finite subsets of $S$; see \corref{H7}.
Another key tool in the arguments below will again be the net of orthogonal
projections $\left\{ P_{F}\right\} $ from \lemref{H4}, and the convergence
results from Corollaries \ref{cor:H5} and \ref{cor:H7}.
\begin{cor}
Let $X\times X\xrightarrow{\;K\;}\mathbb{R}$, and $S\subset X$ be
as in the statement of \corref{H14}. Let $\mathscr{F}_{fin}\left(S\right)$
denote the filter of finite subsets $F\subset S$. Let $\Delta=\Delta_{S}$
be the graph Laplacian defined in (\ref{eq:H2}), i.e.,
\[
\left(\Delta h\right)\left(x\right):=\left\langle \delta_{x},h\right\rangle _{\mathscr{H}(K^{\left(S\right)})},
\]
for all $x\in S$, $h\in\mathscr{H}(K^{\left(S\right)})$. Then the
following equivalent conditions hold:
\begin{enumerate}
\item For all $h\in\mathscr{H}(K^{\left(S\right)})$,
\begin{align}
\left\Vert h\right\Vert _{\mathscr{H}(K^{\left(S\right)})}^{2} & =\sup_{F\in\mathscr{F}_{fin}\left(S\right)}\left\langle h\big|_{F},\Delta P_{F}h\right\rangle _{l^{2}\left(F\right)}\\
& =\sup_{F\in\mathscr{F}_{fin}\left(S\right)}\left\langle h\big|_{F},K_{F}^{-1}\left(h\big|_{F}\right)\right\rangle _{l^{2}\left(F\right)}.\nonumber
\end{align}
\item For all $F\in\mathscr{F}_{fin}\left(S\right)$, $x\in F$, $h\in\mathscr{H}(K^{\left(S\right)})$,
\begin{equation}
\left(\Delta\left(P_{F}h\right)\right)\left(x\right)=\left(K_{F}^{-1}\left(h\big|_{F}\right)\right)\left(x\right).\label{eq:H46}
\end{equation}
\item $K_{F}\Delta P_{F}h=h\big|_{F}$.
\end{enumerate}
\end{cor}
\begin{proof}
On account of \corref{H8}, we only need to verify (\ref{eq:H46}).
Let $F\in\mathscr{F}_{fin}\left(S\right)$, $h\in\mathscr{H}(K^{\left(S\right)})$,
then we proved that
\begin{align}
\left(P_{F}h\right)\left(\cdot\right) & =\sum_{y\in F}\xi_{y}K\left(\cdot,y\right)\;\text{with}\label{eq:H47}\\
\xi_{y} & =\left(K_{F}^{-1}\left(h\big|_{F}\right)\right)\left(y\right).
\end{align}
Now apply $\left\langle \delta_{x},\cdot\right\rangle _{\mathscr{H}(K^{\left(S\right)})}$
to both sides in (\ref{eq:H47}); and we get
\begin{equation}
\left(\Delta\left(P_{F}h\right)\right)\left(x\right)=\xi_{x}\label{eq:H49}
\end{equation}
where we used $\left\langle \delta_{x},K\left(\cdot,y\right)\right\rangle _{\mathscr{H}(K^{\left(S\right)})}=\delta_{x,y}$.
The desired conclusion (\ref{eq:H46}) now follows from (\ref{eq:H49}).
Also note that $\left(\Delta\left(P_{F}h\right)\right)\left(x\right)=0$
if $x\in X\backslash F$.
\end{proof}
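Statement (1) can be illustrated numerically for the Brownian-motion kernel $K\left(x,y\right)=x\wedge y$ (a Python sketch of ours; the choice $h=K\left(\cdot,3\right)$ is arbitrary). Since $\left\Vert K\left(\cdot,y\right)\right\Vert ^{2}=K\left(y,y\right)=y$, the quadratic forms $\left\langle h\big|_{F},K_{F}^{-1}\left(h\big|_{F}\right)\right\rangle$ increase with $F$ and stabilize at $y$ as soon as $y\in F$:

```python
import numpy as np

def quad_form(F, h):
    """<h|_F, K_F^{-1}(h|_F)> for K(x, y) = min(x, y) on a finite set F."""
    F = np.asarray(F, dtype=float)
    K_F = np.minimum.outer(F, F)
    hF = np.array([h(t) for t in F])
    return hF @ np.linalg.solve(K_F, hF)

y = 3.0
h = lambda t: min(t, y)        # h = K(., y), so ||h||^2 = K(y, y) = y

print(quad_form([1, 2], h))            # strictly below y
print(quad_form([1, 2, 3], h))         # y is in F: the supremum y is attained
print(quad_form([1, 2, 3, 4, 5], h))   # no further increase
```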
\subsection{Canonical isometries computed from point processes}
Below we consider p.d. kernels $K$ defined initially on $X\times X$.
Our present aim is to consider restrictions to $S\times S$ when $S$
is a suitable subset of $X$. Our first observation is the identification
of a canonical isometry $T_{S}$ between the respective reproducing
kernel Hilbert spaces; $T_{S}$ identifying $\mathscr{H}(K^{\left(S\right)})$
as an isometric subspace inside $\mathscr{H}(K)$. This isometry $T_{S}$
exists in general. However, we shall show that, when the subset $S$
is further restricted, the respective RKHSs and the isometry $T_{S}$
admit explicit characterizations. For example, if $S$ is countable,
and the Dirac functions $\delta_{s}$, $s\in S$, are in $\mathscr{H}(K^{\left(S\right)})$,
we shall show that this setting leads to a point process. In this
case, we further identify an induced (infinite) graph with the set
$S$ as vertices, and with edges defined by an induced $\delta_{s}$-kernel.
\begin{thm}
\label{thm:H14}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be a
p.d. kernel, and let $S\subset X$ be a subset. Set $K^{\left(S\right)}:=K\big|_{S\times S}$.
Let $\mathscr{H}\left(K\right)$, and $\mathscr{H}(K^{\left(S\right)})$,
be the respective RKHSs.
\begin{enumerate}
\item Then there is a canonical isometric embedding
\[
\mathscr{H}(K^{\left(S\right)})\xrightarrow{\;T\;}\mathscr{H}\left(K\right),
\]
given by the following formula: For $s\in S$, set
\begin{equation}
T(K^{\left(S\right)}\left(\cdot,s\right))=K\left(\cdot,s\right).\label{eq:H35}
\end{equation}
(Note that $K^{\left(S\right)}\left(\cdot,s\right)$ on the LHS in
(\ref{eq:H35}) is a function on $S$, while $K\left(\cdot,s\right)$
on the RHS is a function on $X$.)
\item \label{enu:H14-2}The adjoint operator $T^{*}$,
\begin{equation}
\mathscr{H}\left(K\right)\xrightarrow{\;T^{*}\;}\mathscr{H}(K^{\left(S\right)})
\end{equation}
is given by restriction, i.e., if $f\in\mathscr{H}(K)$, and $s\in S$,
then $\left(T^{*}f\right)\left(s\right)=f\left(s\right)$; or equivalently,
for all $f\in\mathscr{H}(K)$,
\begin{equation}
T^{*}f=f\big|_{S}.\label{eq:H37}
\end{equation}
\end{enumerate}
\end{thm}
\begin{proof}
To show that $T$ in (\ref{eq:H35}) is isometric, proceed as follows:
Let $\left\{ s_{i}\right\} _{i=1}^{N}$ be a finite subset of $S$,
and $\left\{ \xi_{i}\right\} _{i=1}^{N}\in\mathbb{C}^{N}$, then
\begin{eqnarray*}
\left\Vert T(\sum\nolimits _{i}\xi_{i}K^{\left(S\right)}\left(\cdot,s_{i}\right))\right\Vert _{\mathscr{H}\left(K\right)}^{2} & = & \left\Vert \sum\nolimits _{i}\xi_{i}T(K^{\left(S\right)}\left(\cdot,s_{i}\right))\right\Vert _{\mathscr{H}\left(K\right)}^{2}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:H35}\right)}}}{=} & \left\Vert \sum\nolimits _{i}\xi_{i}K\left(\cdot,s_{i}\right)\right\Vert _{\mathscr{H}\left(K\right)}^{2}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:A3}\right)}}}{=} & \sum\nolimits _{i}\sum\nolimits _{j}\overline{\xi}_{i}\xi_{j}K\left(s_{i},s_{j}\right)\\
& = & \sum\nolimits _{i}\sum\nolimits _{j}\overline{\xi}_{i}\xi_{j}K^{\left(S\right)}\left(s_{i},s_{j}\right)\\
& = & \left\Vert \sum\nolimits _{i}\xi_{i}K^{\left(S\right)}\left(\cdot,s_{i}\right)\right\Vert _{\mathscr{H}\left(K^{\left(S\right)}\right)}^{2}
\end{eqnarray*}
which is the desired isometric property.
We now turn to (\ref{eq:H37}), the restriction formula: Let $s\in S$,
and $f\in\mathscr{H}\left(K\right)$, then
\begin{eqnarray}
\left\langle T(K^{\left(S\right)}\left(\cdot,s\right)),f\right\rangle _{\mathscr{H}\left(K\right)} & = & \left\langle K^{\left(S\right)}\left(\cdot,s\right),T^{*}f\right\rangle _{\mathscr{H}\left(K^{\left(S\right)}\right)}\label{eq:H38}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:A4}\right)}}}{=} & \left(T^{*}f\right)\left(s\right).\nonumber
\end{eqnarray}
But, for the LHS in (\ref{eq:H38}), we have
\[
\left\langle T(K^{\left(S\right)}\left(\cdot,s\right)),f\right\rangle _{\mathscr{H}\left(K\right)}\underset{\text{by \ensuremath{\left(\ref{eq:H35}\right)}}}{=}\left\langle K\left(\cdot,s\right),f\right\rangle _{\mathscr{H}\left(K\right)}\underset{\text{by \ensuremath{\left(\ref{eq:A4}\right)}}}{=}f\left(s\right);
\]
and so the desired formula (\ref{eq:H37}) follows.
\end{proof}
\begin{rem}
\label{rem:H8}\textbf{The canonical isometry for \exaref{H8} ($\mathbb{Z}$-discretization
of the covariance function for Brownian motion on $\mathbb{R}$).}
From \thmref{H14}, we know that the canonical isometry $T$ maps
$\mathscr{H}(K^{\left(\mathbb{Z}\right)})$ into $\mathscr{H}\left(K\right)$;
see (\ref{eq:H22}). But (\ref{eq:H19}) and (\ref{eq:H24-1}) in
the Example offer exact characterization of these two Hilbert spaces.
So, in the special case of \exaref{H8}, the canonical isometry $T$
maps from functions $\Phi$ on $\mathbb{Z}$ into functions on $\mathbb{R}$.
In view of (\ref{eq:H19}), this assignment turns out to be precisely
the (piecewise linear) spline interpolation of the sequences $\Phi$.
Below we present an explicit formula, and graphics, for the spline
realizations. By (\ref{eq:H26-1}), the embedding of $\delta_{n}$
from $\mathscr{H}(K^{\left(\mathbb{Z}\right)})$ into $\mathscr{H}\left(K\right)$
is given by
\[
\left(T\delta_{n}\right)\left(x\right)=2K\left(x,n\right)-K\left(x,n+1\right)-K\left(x,n-1\right),\;\forall x\in\mathbb{R}.
\]
See \figref{H1}. Therefore, for all $h\in\mathscr{H}\left(K\right)$,
we get
\begin{align*}
\left(T^{*}h\right)\left(m\right) & =\sum_{n\in\mathbb{Z}}h\left(n\right)\delta_{n}\left(m\right),\;m\in\mathbb{Z},\;\text{and}\\
\left(TT^{*}h\right)\left(x\right) & =\sum_{n\in\mathbb{Z}}h\left(n\right)\left(2K\left(x,n\right)-K\left(x,n+1\right)-K\left(x,n-1\right)\right),\;x\in\mathbb{R}
\end{align*}
which is the spline interpolation.
\end{rem}
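The embedded Dirac mass is the familiar ``hat'' function, which can be checked directly (a Python sketch of ours, using the kernel (\ref{eq:H22})): $\left(T\delta_{3}\right)\left(x\right)$ vanishes off $\left[2,4\right]$, equals $1$ at $x=3$, and is linear in between.

```python
# Two-sided Brownian motion kernel (eq. H22): |x| ∧ |y| if xy >= 0, else 0
def K(x, y):
    return min(abs(x), abs(y)) if x * y >= 0 else 0.0

def T_delta(n, x):
    """(T delta_n)(x) = 2 K(x, n) - K(x, n+1) - K(x, n-1)."""
    return 2 * K(x, n) - K(x, n + 1) - K(x, n - 1)

print([T_delta(3, x) for x in (2.0, 2.5, 3.0, 3.5, 4.0, 5.0)])
# -> [0.0, 0.5, 1.0, 0.5, 0.0, 0.0]
```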
\begin{figure}
\includegraphics[width=0.6\columnwidth]{sp1}
\caption{\label{fig:H1}Isometric extrapolation from functions on $\mathbb{Z}$
to functions on $\mathbb{R}$. An illustration of the isometric embedding
of $\delta_{n}$ from $\mathscr{H}(K^{\left(\mathbb{Z}\right)})$
into $\mathscr{H}\left(K\right)$, with $n=3$.}
\end{figure}
\begin{cor}
Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be a p.d. kernel, and
let $S\subset X$ be a subset. Assume further that $\left\{ \delta_{s}\right\} _{s\in S}\subset\mathscr{H}(K^{\left(S\right)})$.
Then every finitely supported function $h$ on $S$ is in $\mathscr{H}(K^{\left(S\right)})$,
and we have the following generalized spline interpolation; i.e.,
isometrically extending $h$ from $S$ to $X$:
\begin{equation}
\widetilde{h}\left(x\right)=\sup\nolimits _{F\supset F_{0}}\sum\nolimits _{y\in F}\left(K_{F}^{-1}h_{F}\right)\left(y\right)K\left(y,x\right),\;x\in X,
\end{equation}
where $F_{0}=suppt\left(h\right)$, and the sup is taken over the
filter of all finite subsets of $X$ containing $F_{0}$.
\end{cor}
\begin{proof}
Assume $h\in\mathscr{H}(K^{\left(S\right)})$, supported on a finite
subset $F_{0}\subset S$. Then,
\begin{align*}
\widetilde{h}\left(x\right):=Th\left(x\right) & =T\left(\sum\nolimits _{s\in F_{0}}h\left(s\right)\delta_{s}\right)\left(x\right)\\
& =\sum\nolimits _{s\in F_{0}}h\left(s\right)\left(T\delta_{s}\right)\left(x\right)\\
& =\sum\nolimits _{s\in F_{0}}h\left(s\right)\sup\nolimits _{F\supset F_{0}}\left(P_{F}\delta_{s}\right)\left(x\right)\\
& =\sup\nolimits _{F\supset F_{0}}P_{F}\left(\sum\nolimits _{s\in F_{0}}h\left(s\right)\delta_{s}\right)\left(x\right)\\
& =\sup\nolimits _{F\supset F_{0}}\left(P_{F}h_{F_{0}}\right)\left(x\right)\\
& =\sup\nolimits _{F\supset F_{0}}\sum\nolimits _{y\in F}\left(K_{F}^{-1}h_{F}\right)\left(y\right)K\left(x,y\right),
\end{align*}
where the last step follows from (\ref{eq:H8}), and $P_{F}$ is the
orthogonal projection from $\mathscr{H}\left(K\right)$ onto the subspace
$\mathscr{H}_{K}\left(F\right)$.
\end{proof}
\begin{cor}
\label{cor:H15}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$, p.d.,
be given, and let $S\subset X$ be a subset. Let $T=T_{S}$, $\mathscr{H}(K^{\left(S\right)})\xrightarrow{\;T\;}\mathscr{H}\left(K\right)$,
be the canonical isometry. Then a function $f$ in $\mathscr{H}\left(K\right)$
satisfies $\left\langle f,T(\mathscr{H}(K^{\left(S\right)}))\right\rangle _{\mathscr{H}\left(K\right)}=0$
if and only if
\begin{equation}
f\left(s\right)=0\;\text{for all \ensuremath{s\in S}.}\label{eq:H39}
\end{equation}
\end{cor}
\begin{proof}
Immediate from part (\ref{enu:H14-2}) in \thmref{H14}.
\end{proof}
\begin{rem}
Let $\left(X,K,S\right)$ be as in \corref{H15}, and let $T_{S}$
be the canonical isometry. Let $P_{S}:=T_{S}T_{S}^{*}$ be the corresponding
projection. Then $I_{\mathscr{H}\left(K\right)}-P_{S}$ is the projection
onto the subspace given in (\ref{eq:H39}).
\end{rem}
\begin{cor}
Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be given p.d.; and let
$S\subset X$ be a subset with induced kernel
\begin{equation}
K^{\left(S\right)}:=K\big|_{S\times S}.\label{eq:H40}
\end{equation}
Consider the two sets $\mathscr{F}\left(S\right)$ and $\mathscr{F}(K^{\left(S\right)})$
from (\ref{eq:D2}) and \thmref{E1}. Let $T_{S}:\mathscr{H}(K^{\left(S\right)})\rightarrow\mathscr{H}\left(K\right)$
be the canonical isometry (\ref{eq:H35}) in \thmref{H14}. Then the
following implication holds:
\begin{eqnarray}
\left(\left\{ k_{x}\right\} _{x\in X},\mu\right) & \in & \mathscr{F}\left(K\right)\label{eq:H41}\\
& \Downarrow\nonumber \\
\left(\left\{ k_{s}\right\} _{s\in S},\mu\right) & \in & \mathscr{F}(K^{\left(S\right)})\label{eq:H42}
\end{eqnarray}
\end{cor}
\begin{proof}
Assuming (\ref{eq:H41}), we get the representation (\ref{eq:D2}):
\begin{equation}
K\left(x,y\right)=\int_{M}\overline{k}_{x}k_{y}d\mu,\;\forall\left(x,y\right)\in X\times X.
\end{equation}
But then, for all $\left(s_{1},s_{2}\right)\in S\times S$, we
have
\begin{eqnarray*}
K^{\left(S\right)}\left(s_{1},s_{2}\right) & = & \left\langle T_{S}(K^{\left(S\right)}\left(\cdot,s_{1}\right)),T_{S}(K^{\left(S\right)}\left(\cdot,s_{2}\right))\right\rangle _{\mathscr{H}\left(K\right)}\\
& \underset{\text{by \ensuremath{\left(\ref{eq:H40}\right)}}}{=} & K\left(s_{1},s_{2}\right)\\
& = & \int_{M}\overline{k}_{s_{1}}k_{s_{2}}d\mu,
\end{eqnarray*}
which is the desired conclusion.
\end{proof}
\section{Boundary value problems}
Our setting in the present section is the discrete case, i.e., RKHSs
of functions defined on a prescribed countable infinite discrete set
$S$. We are concerned with a characterization of those RKHSs $\mathscr{H}$
which contain the Dirac masses $\delta_{x}$ for all points $x\in S$.
Of the examples and applications where this question plays an important
role, we emphasize two: (i) discrete Brownian motion-Hilbert spaces,
i.e., discrete versions of the Cameron-Martin Hilbert space; (ii)
energy-Hilbert spaces corresponding to graph-Laplacians.
The problems addressed here are motivated in part by applications
to analysis on infinite weighted graphs, to stochastic processes,
and to numerical analysis (discrete approximations), and to applications
of RKHSs to machine learning. Readers are referred to the following
papers, and the references cited there, for details regarding this:
\cite{MR3231624,MR2966130,MR2793121,MR3286496,MR3246982,MR2862151,MR3096457,MR3049934,MR2579912,MR741527,MR3024465}.
The discrete case can be understood as restrictions of analogous PDE-models.
In traditional numerical analysis, one builds discrete and algorithmic
models (finite element methods), each aiming at finding approximate
solutions to PDE-boundary value problems. They typically use multiresolution-subdivision
schemes, applied to the continuous domain, subdividing into simpler
discretized parts, called finite elements. And with variational methods,
one then minimize various error-functions. In this paper, we turn
the tables: our object of study are the discrete models, and analysis
of suitable continuous PDE boundary problems serve as a tool for solutions
in the discrete world.
\begin{defn}
\label{def:dmp}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be a
given p.d. kernel on $X$. The RKHS $\mathscr{H}=\mathscr{H}\left(K\right)$
is said to have the \emph{discrete mass} property ($\mathscr{H}$
is called a \emph{discrete RKHS}), if $\delta_{x}\in\mathscr{H}$,
for all $x\in X$.
\end{defn}
In fact, it is known (\cite{MR3507188}) that every fundamental solution
for a Dirichlet boundary value problem on a bounded open domain $\Omega$
in $\mathbb{R}^{\nu}$, allows for discrete restrictions (i.e., vertices
sampled in $\Omega$), which have the desired ``discrete mass''
property.
We recall the following result to stress the distinction between the
discrete models and their continuous counterparts.
Let $\Omega$ be a bounded, open, and connected domain in $\mathbb{R}^{\nu}$
with smooth boundary $\partial\Omega$. Let $K:\Omega\times\Omega\rightarrow\mathbb{R}$
be continuous, p.d., and given as the Green's function of $\Delta_{0}$,
where
\begin{equation}
\begin{split} & \Delta_{0}:=-\sum_{j=1}^{\nu}\left(\frac{\partial}{\partial x_{j}}\right)^{2},\\
& dom\left(\Delta_{0}\right)=\left\{ f\in L^{2}\left(\Omega\right)\:\big|\:\Delta f\in L^{2}\left(\Omega\right),\;\mbox{and }f\big|_{\partial\Omega}\equiv0\right\} .
\end{split}
\label{eq:e4}
\end{equation}
corresponding to the Dirichlet boundary condition. Thus, $\Delta_{0}$ is positive
and selfadjoint, and
\begin{align}
& \Delta_{0}K=\delta\left(x-y\right)\text{ on \ensuremath{\Omega\times\Omega}}\label{eq:m1}\\
& K\left(x,\cdot\right)\big|_{\partial\Omega}\equiv0.\label{eq:m2}
\end{align}
Let $\mathscr{H}_{CM}\left(\Omega\right)$ be the corresponding Cameron-Martin
RKHS.
For $\nu=1$, $\Omega=\left(0,1\right)$, take
\begin{equation}
\begin{split}\mathscr{H}_{CM}\left(0,1\right)= & \Big\{ f\:\big|\:f'\in L^{2}\left(0,1\right),\;f\left(0\right)=f\left(1\right)=0,\\
& \left\Vert f\right\Vert _{CM}^{2}:=\int_{0}^{1}\left|f'\right|^{2}dx<\infty\Big\}
\end{split}
\label{eq:m3}
\end{equation}
For $\nu>1$, let
\begin{equation}
\begin{split}\mathscr{H}_{CM}\left(\Omega\right)= & \left\{ f\:\big|\:\nabla f\in L^{2}\left(\Omega\right),\:f\big|_{\partial\Omega}\equiv0,\:\left\Vert f\right\Vert _{CM}^{2}:=\int_{\Omega}\left|\nabla f\right|^{2}dx<\infty\right\} ,\\
& \text{ where }\nabla=\left(\frac{\partial}{\partial x_{1}},\frac{\partial}{\partial x_{2}},\cdots,\frac{\partial}{\partial x_{\nu}}\right).
\end{split}
\label{eq:m4}
\end{equation}
\begin{thm}
\label{thm:main}Let $\Omega$, and $S\subset\Omega$, be given. Then
\begin{enumerate}
\item Discrete case: Fix $S\subset\Omega$, $\#S=\aleph_{0}$, where $S=\left\{ x_{j}\right\} _{j=1}^{\infty}$,
$x_{j}\in\Omega$. Assume $\exists\varepsilon>0$ s.t. $\left\Vert x_{i}-x_{j}\right\Vert \geq\varepsilon$,
$\forall i,j$, $i\neq j$. Let
\[
\mathscr{H}\left(S\right)=\text{RKHS of \ensuremath{K^{\left(S\right)}:=K\big|_{S\times S}}};
\]
then $\delta_{x_{j}}\in\mathscr{H}\left(S\right)$.
\item Continuous case, by contrast: $K_{x}=K\left(\cdot,x\right)\in\mathscr{H}_{CM}\left(\Omega\right)$,
but $\delta_{x}\notin\mathscr{H}_{CM}\left(\Omega\right)$, $x\in\Omega$.
\end{enumerate}
\end{thm}
\begin{proof}
The result follows from an application of Corollaries \ref{cor:H7}
and \ref{cor:H8}. It extends earlier results \cite{MR3450534,MR3507188}
by the co-authors.
\end{proof}
\section{Sampling in $\mathscr{H}\left(K\right)$}
In the present section, we study classes of reproducing kernels $K$
on general domains with the property that there are non-trivial restrictions
to countable discrete sample subsets $S$ such that every function
in $\mathscr{H}\left(K\right)$ has an $S$-sample representation.
In this general framework, we study properties of positive definite
kernels $K$ with respect to sampling from ``small\textquotedblright{}
subsets, applied to all functions in the associated Hilbert space
$\mathscr{H}\left(K\right)$.
We are motivated by concrete kernels which are used in a number of
applications, for example, on one extreme, the Shannon kernel for
band-limited functions, which admits many sampling realizations; and
on the other, the covariance kernel of Brownian motion, which has
no non-trivial countable discrete sample subsets.
\begin{defn}
\label{def:J1}Let $X\times X\xrightarrow{\;K\;}\mathbb{C}$ be a
p.d. kernel, and $\mathscr{H}\left(K\right)$ be the associated RKHS.
We say that $K$ has the \emph{non-trivial sampling property} if there
exist a countable subset $S\subset X$ and $a,b\in\mathbb{R}_{+}$ such
that
\begin{equation}
a\sum_{s\in S}\left|f\left(s\right)\right|^{2}\leq\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}\leq b\sum_{s\in S}\left|f\left(s\right)\right|^{2},\quad\forall f\in\mathscr{H}\left(K\right).\label{eq:sp1}
\end{equation}
If equality holds in (\ref{eq:sp1}) with $a=b=1$, then we say that
$\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ is a Parseval frame.
(Also see \defref{G1}.)
\end{defn}
It follows that sampling holds in the form
\[
f\left(x\right)=\sum_{s\in S}f\left(s\right)K\left(x,s\right),\quad\forall f\in\mathscr{H}\left(K\right),\:\forall x\in X
\]
if and only if $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$
is a Parseval frame.
\begin{lem}
\label{lem:fr}Suppose $K$, $X$, $a$, $b$, and $S$ satisfy the
condition in (\ref{eq:sp1}). Then the linear span of $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$
is dense in $\mathscr{H}\left(K\right)$. Moreover, there is a positive
operator $B$ in $\mathscr{H}\left(K\right)$ with bounded inverse
such that
\[
f\left(\cdot\right)=\sum_{s\in S}\left(Bf\right)\left(s\right)K\left(\cdot,s\right)
\]
is a convergent interpolation formula valid for all $f\in\mathscr{H}\left(K\right)$.
Equivalently,
\[
f\left(x\right)=\sum_{s\in S}f\left(s\right)B\left(K\left(\cdot,s\right)\right)\left(x\right),\;\text{for all \ensuremath{x\in X}.}
\]
\end{lem}
\begin{proof}
Define $A:\mathscr{H}\left(K\right)\rightarrow l^{2}\left(S\right)$
by $\left(Af\right)\left(s\right)=f\left(s\right)$, $s\in S$. Then
the adjoint operator $A^{*}:l^{2}\left(S\right)\rightarrow\mathscr{H}\left(K\right)$
is given by $A^{*}\xi=\sum_{s\in S}\xi_{s}K\left(\cdot,s\right)$,
$\forall\xi\in l^{2}\left(S\right)$, and
\[
A^{*}Af=\sum_{s\in S}f\left(s\right)K\left(\cdot,s\right)
\]
holds in $\mathscr{H}\left(K\right)$, with $\mathscr{H}\left(K\right)$-norm
convergence. Now set $B=\left(A^{*}A\right)^{-1}$. The upper bound
in (\ref{eq:sp1}) gives $A^{*}A\geq b^{-1}I$ on $\mathscr{H}\left(K\right)$,
so $B$ is bounded, with $\left\Vert B\right\Vert _{\mathscr{H}\left(K\right)\rightarrow\mathscr{H}\left(K\right)}\leq b$.
\end{proof}
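In a finite toy model the operators in the proof become explicit matrices (a Python/NumPy sketch of ours, with $X=S=\left\{ 1,\dots,5\right\}$ and the min-kernel): functions are identified with their value vectors, $A^{*}A$ acts as multiplication by the matrix $K$, so $B=K^{-1}$, and the interpolation formula reconstructs $f$ exactly. (In finite dimensions the reconstruction is exact by construction; the point is to see the roles of $A^{*}A$ and $B$.)

```python
import numpy as np

# Finite toy model: X = S = {1,...,5}; a function f in H(K) is identified
# with its value vector, and K(., s) with column s of the matrix K
idx = np.arange(1, 6)
K = np.minimum.outer(idx, idx).astype(float)

rng = np.random.default_rng(1)
f = rng.standard_normal(5)

# A*A acts on value vectors as multiplication by K, hence B = K^{-1}
Bf = np.linalg.solve(K, f)

# Interpolation formula: f(.) = sum_s (Bf)(s) K(., s)
recon = K @ Bf
print(np.allclose(recon, f))
```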
\begin{thm}
\label{thm:ps}Let $K:X\times X\rightarrow\mathbb{R}$ be a p.d. kernel,
and let $S\subset X$ be a countable discrete subset. For all $s\in S$,
set $K_{s}\left(\cdot\right)=K\left(\cdot,s\right)$. Then TFAE:
\begin{enumerate}
\item \label{enu:ps1}The family $\left\{ K_{s}\right\} _{s\in S}$ is a
Parseval frame in $\mathscr{H}\left(K\right)$;
\item \label{enu:ps2}
\[
\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\sum_{s\in S}\left|f\left(s\right)\right|^{2},\;\forall f\in\mathscr{H}\left(K\right);
\]
\item \label{enu:ps3}
\[
K\left(x,x\right)=\sum_{s\in S}\left|K\left(x,s\right)\right|^{2},\;\forall x\in X;
\]
\item \label{enu:ps4}
\[
f\left(x\right)=\sum_{s\in S}f\left(s\right)K\left(x,s\right),\;\forall f\in\mathscr{H}\left(K\right),\:\forall x\in X,
\]
where the sum converges in the norm of $\mathscr{H}\left(K\right)$.
\end{enumerate}
\end{thm}
\begin{proof}
The proof is simple, and follows the steps in the proof of \lemref{G2}.
Details are left to the reader.
\end{proof}
We now turn to a dichotomy: the existence vs.\ non-existence of countable
discrete sampling sets.
\begin{example}
\label{exa:shan}Let $X=\mathbb{R}$, and let $K:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$
be the Shannon kernel, where
\begin{align}
K\left(x,y\right) & :=\text{sinc}\,\pi\left(x-y\right)\nonumber \\
& =\frac{\sin\pi\left(x-y\right)}{\pi\left(x-y\right)},\quad\forall x,y\in\mathbb{R}.\label{eq:sp5}
\end{align}
We may choose $S=\mathbb{Z}$, and then $\left\{ K\left(\cdot,n\right)\right\} _{n\in\mathbb{Z}}$
is even an orthonormal basis (ONB) in $\mathscr{H}\left(K\right)$,
but there are many other examples of countable discrete subsets $S\subset\mathbb{R}$
such that (\ref{eq:sp1}) holds for finite $a,b\in\mathbb{R}_{+}$.
The RKHS $\mathscr{H}\left(K\right)$ in (\ref{eq:sp5}) is the Paley--Wiener
space, i.e., the subspace of $L^{2}\left(\mathbb{R}\right)$ consisting of all $f\in L^{2}\left(\mathbb{R}\right)$
such that $\operatorname{supp}(\hat{f})\subset\left[-\pi,\pi\right]$, where $\hat{f}$
denotes the Fourier transform of $f$. Note $\mathscr{H}\left(K\right)$
consists of functions on $\mathbb{R}$ which have entire analytic
extensions to $\mathbb{C}$. Using the above observations, we get
\begin{align*}
f\left(x\right) & =\sum_{n\in\mathbb{Z}}f\left(n\right)K\left(x,n\right)\\
& =\sum_{n\in\mathbb{Z}}f\left(n\right)\text{sinc}\,\pi\left(x-n\right),\quad\forall x\in\mathbb{R},\:\forall f\in\mathscr{H}\left(K\right).
\end{align*}
\end{example}
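Condition (\ref{enu:ps3}) for the Shannon kernel is the classical identity $\sum_{n\in\mathbb{Z}}\operatorname{sinc}^{2}\,\pi\left(x-n\right)=1$, and (\ref{enu:ps4}) is the sampling formula displayed above. A quick numerical check (our illustration, not part of the text; note that NumPy's `np.sinc(t)` equals $\sin\left(\pi t\right)/\left(\pi t\right)$, matching (\ref{eq:sp5})):

```python
import numpy as np

# Numerical illustration of the Parseval condition and the sampling formula
# for the Shannon kernel K(x, y) = sinc pi(x - y).
n = np.arange(-2000, 2001)                    # truncation of Z (tail ~ 1/N)
x = np.array([0.0, 0.25, 0.5, 1.9])

# K(x, x) = 1 = sum_n |K(x, n)|^2, up to the truncation error:
parseval = (np.sinc(np.subtract.outer(x, n)) ** 2).sum(axis=1)
assert np.allclose(parseval, 1.0, atol=1e-3)

# Reconstruct the bandlimited f(t) = sinc(t - 0.3) from its integer samples:
f = lambda t: np.sinc(t - 0.3)
reconstruction = np.sinc(np.subtract.outer(x, n)) @ f(n)
assert np.allclose(reconstruction, f(x), atol=1e-2)
```

The tolerances account only for truncating $\mathbb{Z}$ to a finite window; both identities are exact in the limit.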
\begin{example}
Let $K$ be the covariance kernel of standard Brownian motion, with
$X:=[0,\infty)$ or $[0,1)$, and
\begin{equation}
K\left(x,y\right):=x\wedge y=\min\left(x,y\right),\;\forall\left(x,y\right)\in X\times X.\label{eq:sp6}
\end{equation}
\end{example}
\begin{thm}
\label{thm:bm}Let $K$, $X$ be as in (\ref{eq:sp6}); then there
is no countable discrete subset $S\subset X$ such that $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$
is dense in $\mathscr{H}\left(K\right)$.
\end{thm}
\begin{proof}
Suppose $S=\left\{ x_{n}\right\} $, where
\begin{equation}
0<x_{1}<x_{2}<\cdots<x_{n}<x_{n+1}<\cdots;\label{eq:sp7}
\end{equation}
then consider the following function
\begin{equation}
\raisebox{-6mm}{\includegraphics[width=0.7\textwidth]{wave1.pdf}}\label{eq:sp9}
\end{equation}
On the respective intervals $\left[x_{n},x_{n+1}\right]$, the function
$f$ is as follows:
\[
f\left(x\right)=\begin{cases}
c_{n}\left(x-x_{n}\right) & \text{if }x_{n}\leq x\leq\frac{x_{n}+x_{n+1}}{2}\\
c_{n}\left(x_{n+1}-x\right) & \text{if }\frac{x_{n}+x_{n+1}}{2}<x\leq x_{n+1}.
\end{cases}
\]
In particular, $f\left(x_{n}\right)=f\left(x_{n+1}\right)=0$, and
on the midpoints:
\[
f\left(\frac{x_{n}+x_{n+1}}{2}\right)=c_{n}\frac{x_{n+1}-x_{n}}{2},
\]
see \figref{stooth}.
\begin{figure}[H]
\includegraphics[width=0.35\textwidth]{stooth}
\caption{\label{fig:stooth}The saw-tooth function.}
\end{figure}
Choose $\left\{ c_{n}\right\} _{n\in\mathbb{N}}$ such that
\begin{equation}
\sum_{n\in\mathbb{N}}\left|c_{n}\right|^{2}\left(x_{n+1}-x_{n}\right)<\infty.\label{eq:sp11}
\end{equation}
Admissible choices for the slope-values $c_{n}$ include
\[
c_{n}=\frac{1}{n\sqrt{x_{n+1}-x_{n}}},\;n\in\mathbb{N}.
\]
We will now show that $f\in\mathscr{H}\left(K\right)$. For the distribution
derivative computed from (\ref{eq:sp9}), we get
\begin{equation}
\raisebox{-12mm}{\includegraphics[width=0.7\textwidth]{wave2.pdf}}\label{eq:sp9b}
\end{equation}
\[
\int_{0}^{\infty}\left|f'\left(x\right)\right|^{2}dx=\sum_{n\in\mathbb{N}}\left|c_{n}\right|^{2}\left(x_{n+1}-x_{n}\right)<\infty
\]
by (\ref{eq:sp11}), so $f\in\mathscr{H}\left(K\right)$. Since $\left\langle f,K\left(\cdot,s\right)\right\rangle _{\mathscr{H}\left(K\right)}=f\left(s\right)=0$
for every $s\in S$, the nonzero function $f$ is orthogonal to the
closed span of $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$,
which therefore is not dense in $\mathscr{H}\left(K\right)$.
\end{proof}
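With the admissible slopes $c_{n}=1/\left(n\sqrt{x_{n+1}-x_{n}}\right)$, the energy in the proof reduces to $\sum_{n}1/n^{2}=\pi^{2}/6$. A small numerical sketch (our illustration; the choice $x_{n}=n$ is arbitrary) confirms that the saw-tooth function has finite energy while vanishing on all of $S$:

```python
import numpy as np

# Saw-tooth energy check (illustration, with the arbitrary choice x_n = n):
#     int |f'|^2 = sum_n c_n^2 (x_{n+1} - x_n) = sum_n 1/n^2 < infinity,
# while f(x_n) = 0 for every n but f is not the zero function.
N = 200_000
xn = np.arange(1, N + 2, dtype=float)         # x_1 < x_2 < ... (here x_n = n)
dx = np.diff(xn)                              # x_{n+1} - x_n
cn = 1.0 / (np.arange(1, N + 1) * np.sqrt(dx))

energy = (cn ** 2 * dx).sum()                 # partial sum of sum 1/n^2
assert energy < np.pi ** 2 / 6 < energy + 1e-4

# f vanishes on S by construction, yet its midpoint values are positive:
midpoint_values = cn * dx / 2                 # f((x_n + x_{n+1}) / 2)
assert midpoint_values.max() > 0
```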
\begin{cor}
For the kernel $K\left(x,y\right)=x\wedge y$ in (\ref{eq:sp6}),
$X=[0,\infty)$, the following holds:
Given an increasing sequence $\left\{ x_{j}\right\} _{j\in\mathbb{N}}\subset\mathbb{R}_{+}$
and $\left\{ y_{j}\right\} _{j\in\mathbb{N}}\subset\mathbb{R}$, the
interpolation problem
\begin{equation}
f\left(x_{j}\right)=y_{j},\;f\in\mathscr{H}\left(K\right)\label{eq:ip1}
\end{equation}
is solvable if
\begin{equation}
\sum_{j\in\mathbb{N}}\left(y_{j+1}-y_{j}\right)^{2}/\left(x_{j+1}-x_{j}\right)<\infty.\label{eq:sp2}
\end{equation}
\end{cor}
\begin{proof}
Let $f$ be the piecewise linear spline for the
problem (\ref{eq:ip1}), see \figref{ip}; then the $\mathscr{H}\left(K\right)$-norm
is as follows:
\[
\int_{0}^{\infty}\left|f'\left(x\right)\right|^{2}dx=\sum_{j\in\mathbb{N}}\left(\frac{y_{j+1}-y_{j}}{x_{j+1}-x_{j}}\right)^{2}\left(x_{j+1}-x_{j}\right)<\infty
\]
when (\ref{eq:sp2}) holds.
\end{proof}
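The spline energy in the proof is elementary to evaluate; the sketch below (our illustration, with hypothetical data $\left(x_{j},y_{j}\right)$) computes $\sum_{j}\left(y_{j+1}-y_{j}\right)^{2}/\left(x_{j+1}-x_{j}\right)$ and checks it against the slope form of the integral:

```python
import numpy as np

# Piecewise linear spline energy (illustration with hypothetical data):
#     int |f'|^2 = sum_j ((y_{j+1} - y_j) / (x_{j+1} - x_j))^2 (x_{j+1} - x_j)
#                = sum_j (y_{j+1} - y_j)^2 / (x_{j+1} - x_j),
# so condition (eq:sp2) makes the spline an H(K)-solution of (eq:ip1).
x = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
y = np.array([1.0, -0.5, 0.25, 0.0, 2.0])

slopes = np.diff(y) / np.diff(x)
energy = (slopes ** 2 * np.diff(x)).sum()     # finite => a solution exists

assert np.isclose(energy, (np.diff(y) ** 2 / np.diff(x)).sum())
```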
\begin{figure}[H]
\includegraphics[width=0.35\textwidth]{spline}
\caption{\label{fig:ip}Piecewise linear spline.}
\end{figure}
\begin{rem}
Let $K$ be as in (\ref{eq:sp6}), $X=[0,\infty)$. For all $0\leq x_{j}<x_{j+1}<\infty$,
let
\begin{align*}
f_{j}\left(x\right): & =\frac{2}{x_{j+1}-x_{j}}\left(K\left(x-x_{j},\frac{x_{j+1}-x_{j}}{2}\right)-K\left(x-\frac{x_{j}+x_{j+1}}{2},\frac{x_{j+1}-x_{j}}{2}\right)\right)\\
& =\raisebox{-5mm}{\includegraphics[width=0.4\textwidth]{tmp.pdf}}
\end{align*}
Assuming (\ref{eq:sp11}) holds, then
\[
f\left(x\right)=\sum_{j}c_{j}f_{j}\left(x\right)\in\mathscr{H}\left(K\right).
\]
\end{rem}
\begin{thm}
Let $X$ be a set of the cardinality of the continuum, and let $K:X\times X\rightarrow\mathbb{R}$
be a positive definite kernel. Let $S=\left\{ x_{j}\right\} _{j\in\mathbb{N}}$
be a discrete subset of $X$. Suppose there are weights $\left\{ w_{j}\right\} _{j\in\mathbb{N}}$,
$w_{j}\in\mathbb{R}_{+}$, such that
\begin{equation}
\left(f\left(x_{j}\right)\right)\in l^{2}\left(\mathbb{N},w\right)\label{eq:c1}
\end{equation}
for all $f\in\mathscr{H}\left(K\right)$. Suppose further that there
is a point $t_{0}\in X\backslash S$, a $y_{0}\in\mathbb{R}\backslash\left\{ 0\right\} $,
and $\alpha\in\mathbb{R}_{+}$ such that the infimum
\begin{equation}
\inf_{f\in\mathscr{H}\left(K\right)}\left\{ \sum\nolimits _{j}w_{j}\left|f\left(x_{j}\right)\right|^{2}+\left|f\left(t_{0}\right)-y_{0}\right|^{2}+\alpha\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}\right\} \label{eq:c2}
\end{equation}
is strictly positive.
Then $S$ is \uline{not} an interpolation set for $\left(K,X\right)$.
\end{thm}
\begin{proof}
This result follows from \lemref{fr} and \thmref{ps} above. We
also refer readers to \cite{MR3670916}.
\end{proof}
\begin{acknowledgement*}
The co-authors thank the following colleagues for helpful and enlightening
discussions: Professors Daniel Alpay, Sergii Bezuglyi, Ilwoo Cho,
Myung-Sin Song, Wayne Polyzou, and members in the Math Physics seminar
at The University of Iowa.
\end{acknowledgement*}
\bibliographystyle{amsalpha}
| {
"timestamp": "2018-12-31T02:13:18",
"yymm": "1812",
"arxiv_id": "1812.10850",
"language": "en",
"url": "https://arxiv.org/abs/1812.10850",
"abstract": "We establish a duality for two factorization questions, one for general positive definite (p.d) kernels $K$, and the other for Gaussian processes, say $V$. The latter notion, for Gaussian processes is stated via Ito-integration. Our approach to factorization for p.d. kernels is intuitively motivated by matrix factorizations, but in infinite dimensions, subtle measure theoretic issues must be addressed. Consider a given p.d. kernel $K$, presented as a covariance kernel for a Gaussian process $V$. We then give an explicit duality for these two seemingly different notions of factorization, for p.d. kernel $K$, vs for Gaussian process $V$. Our result is in the form of an explicit correspondence. It states that the analytic data which determine the variety of factorizations for $K$ is the exact same as that which yield factorizations for $V$. Examples and applications are included: point-processes, sampling schemes, constructive discretization, graph-Laplacians, and boundary-value problems.",
"subjects": "Functional Analysis (math.FA); Probability (math.PR)",
    "title": "Decomposition of Gaussian processes, and factorization of positive definite kernels"
} |
https://arxiv.org/abs/2007.09958 | Monodromy of general hypersurfaces | Let $X$ be a general complex projective hypersurface in $\mathbb{P}^{n+1}$ of degree $d>1$. A point $P$ not in $X$ is called uniform if the monodromy group of the projection of $X$ from $P$ is isomorphic to the symmetric group. We prove that all the points in $\mathbb{P}^{n+1}$ are uniform for $X$, generalizing a result of Cukierman on general plane curves. |
\section{Introduction}
The monodromy group of linear projections of irreducible complex projective varieties has been intensively studied. Fix an irreducible and reduced projective hypersurface $X \subset \mathbb{P}^{n+1}$, and consider its linear projections from a point $P \in \mathbb{P}^{n+1}$. We want to look at these maps from a topological point of view: in particular, we aim to classify the centres of projection through their monodromy group. We recall that there is also an algebraic description: indeed, the monodromy group is isomorphic to the Galois group for finite dominant morphisms between irreducible complex varieties \cite[Section I]{H}.
We will say that a point $P$ is \emph{uniform} for $X$ if the monodromy group of the projection from $P$ is the symmetric group, \emph{non uniform} otherwise.
A direct consequence of Castelnuovo's uniform position principle, in the formulation of Harris \cite{JHCurves}, is that a general projection always has symmetric monodromy group. In 2005 Pirola and Schlesinger \cite{PS} improved this result by showing that an irreducible and reduced plane curve admits at most a finite number of non uniform points. Moreover, in \cite{CMS} it is proved that smooth surfaces in $\mathbb{P}^3$ admit at most a finite number of non uniform points. More recently the author, Cuzzucoli and Moschetti \cite{CCM} studied the case of hypersurfaces of higher dimension, proving that, except for special configurations, the non uniform locus is contained in linear subspaces of codimension two. In particular, we proved that smooth hypersurfaces admit at most a finite number of non uniform points \cite[Theorem 1.3]{CCM}.
Examples of smooth hypersurfaces admitting at least a non uniform point are known (see for instance \cite{Miura1} \cite{MY1} for plane curves or \cite{Yoshiara} for hypersurfaces).
One may ask whether every smooth hypersurface admits non uniform points, but the answer is negative.
In 1999 Fernando Cukierman \cite{Cuk} proved that for general plane curves all the outer points are uniform. In this work we generalize this result by proving the following
\begin{thm}
Let $X \subset \mathbb{P}^{n+1}$ be a general hypersurface of degree $d>1$. Then all the points $P \in \mathbb{P}^{n+1}$ are uniform.
\end{thm}
The result was already known for a special class of non uniform points, the Galois points (\cite[Theorem 1]{Yoshiara}). We remark also that the Theorem extends the result of Cukierman to inner points of general plane curves.
The proof combines inner and outer projections and is based on an induction argument on the degree of the variety: we degenerate the hypersurface $X$ to a limit hypersurface given by the union of a general hypersurface $Y$ of degree $d-1$ and a hyperplane.
The base case of the induction ($d=3$, Theorem \ref{d3}) is a consequence of a result of Matsumura and Monsky \cite{MM} saying that a general hypersurface has trivial automorphism group. More generally, the induction step is based on the study of the behaviour of the monodromy group of the projection $\pi_P$ of $X$ from $P$ under degenerations (Lemma \ref{induction}).
\begin{lem}
Let $P \in \mathbb{P}^{n+1}$ and let $\pi_0$ be the map $\pi_P$ restricted to $Y$. Then, the monodromy group $M(\pi_0)$ is contained in the monodromy group $M(\pi_P)$.
\end{lem}
This Lemma is based on some classical topological results on homotopy of fibrations (Proposition \ref{lem:Noriprelim}), reported in Section \ref{sec3}. In particular, we consider the case of a family of dominant maps $F: \mathcal{X} \to Y \times \mathbb{P}^1$ parametrised by $\mathbb{P}^1$, where $\mathcal{X}$ is a flat family of projective varieties of dimension $m$ in $\mathbb{P}^N$ and $Y$ is a smooth projective variety. For a general $s \in \mathbb{P}^1$, the fibre over $s$ is a smooth projective variety $X_s$ of dimension $m$ in $\mathbb{P}^N$, together with a finite dominant morphism $f_s: X_s \to Y$ of degree $d$. We deduce a result on monodromy groups (Proposition \ref{lemmagenerale}):
\begin{prop}
If $X_s$ is reduced for every $s \in \mathbb{P}^1$, then $$M(F)\cong M(f_s).$$
\end{prop}
More generally, if there is a non reduced fibre $X_0$ with a reduced component $Z$, we have (Proposition \ref{redcomponent})
\begin{prop}
The monodromy group of $f_0$ restricted to $Z$ is contained in the monodromy group $M(f_s)$ for a general $s$ in a neighbourhood of $0$.
\end{prop}
To conclude the proof of the main Theorem, we use results on multiply transitive permutation groups (see Section \ref{permutazioni}).
\medskip
\textbf{Notations.}
All the varieties are assumed to be complex and projective. Let $\mathscr{F}$ be a family of objects parametrised by a scheme $V$. We say the general element of $\mathscr{F}$ satisfies a certain property if this property holds for every element in a Zariski dense open subset of $V$. Moreover, we will always use the Zariski topology, unless stated otherwise.
\section{Preliminaries} \label{sec:preliminaries}
\subsection{Monodromy and Galois group}
Let $f:X \to Y$ be a finite dominant morphism of degree $d$ between complex irreducible reduced varieties of the same dimension. Let $U \subset Y$ be a Zariski open set over which $f$ is \'etale, and let $y$ denote a point in $U$. We have a well defined map
$$\mu: \pi_1(U,y) \to \operatorname{Aut}\big(f^{-1}(y)\big) \simeq S_d.$$
The image $M(f):=\mu\left(\pi_1(U,y)\right)$ is called \emph{monodromy group} of the map $f$; it is a transitive subgroup of the symmetric group.
We can also describe this group by means of Galois extensions: let $K$ be the Galois closure of the extension $\mathbb{C}(X)/\mathbb{C}(Y)$, where $\mathbb{C}(X),\mathbb{C}(Y)$ define the fields of rational functions of $X$ and $Y$ respectively. Define the \emph{Galois group} $G(f)$ of the map $f$ to be the Galois group of the field extension $K/\mathbb{C}(Y)$.
It turns out that $G(f)$ is isomorphic to $M(f)$, see \cite[Section I]{H}. We recall also that the Galois group of a field extension $K/\mathbb{C}(Y)$ is defined as the group of automorphisms of $K$ fixing $\mathbb{C}(Y)$.
\subsection{Automorphisms of general hypersurfaces}
Let $V$ be a projective variety; we denote by $\operatorname{Aut}(V)$ the group of automorphisms of $V$. We will use the following result of Matsumura and Monsky \cite{MM}:
\begin{thm}\label{autom}
Let $X$ be a general hypersurface in $\mathbb{P}^{n+1}$, with $ n\geq 2$ and $d\geq 3$. Then $\operatorname{Aut}(X)$ is trivial.
\end{thm}
\subsection{Permutation groups} \label{permutazioni}
We recall some definitions and results that we will use in the following.
A group $G$ acting on a set $\Omega=\{1,\ldots,d\}$ is \emph{$k$-transitive}, with $k \leq d$, if, given two ordered $k$-tuples $(m_1,\ldots,m_k)$ and $(t_1,\ldots,t_k)$ of distinct points in $\Omega$, there is an element $g \in G$ such that $g \cdot m_i=t_i$ for every $i=1,\ldots,k$. If $k=1$ we say that $G$ is transitive.
See for instance \cite[Chapter 8]{Isaacs} for a more complete treatment of transitive permutation groups.
\begin{lem}\label{lemma1}
Let $G$ be a group acting transitively on $\Omega$, let $i \in \Omega$ and $k \leq d-1$.
The group $G$ is $k$-transitive on $\Omega$ if and only if the stabilizer of $i$ in $G$ is $(k-1)$-transitive on $\Omega \setminus \{i\}$.
\end{lem}
We recall that a \emph{block} is a non-empty subset $B \subset \Omega$ such that either $g\cdot B=B$ or $(g\cdot B) \cap B = \emptyset$ for all $g \in G$. We say that $G$ is \emph{imprimitive} if its action preserves non-trivial blocks and \emph{primitive} otherwise. A 2-transitive permutation group is primitive, but the converse is not always true.
\begin{lem}\label{block}
Let $G$ be a group acting transitively on $\Omega$ and let $B$ be a block. Then $|B|$ divides $d=|\Omega|$ and in $\Omega$ there are exactly ${|\Omega|}/{|B|}$ disjoint blocks, all with the same cardinality.
\end{lem}
\begin{lem}\label{lemma2}
Let $G$ be a primitive group on $\Omega$, and let $A \subset \Omega$ be such that $0 < |A| \leq d-2$ and the stabilizer of $A$ is transitive on $\Omega \setminus A$.
Then $G$ is $2$-transitive on $\Omega$.
\end{lem}
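For small degrees these notions can be verified by brute force. The Python sketch below (our illustration, not part of the paper; groups are encoded as lists of image tuples) checks $k$-transitivity directly from the definition, and in particular that the cyclic group $A_{3}$ is transitive but not $2$-transitive on three points, while $S_{3}$ is $2$-transitive:

```python
from itertools import permutations

# Brute-force k-transitivity check, straight from the definition:
# G is k-transitive iff every ordered k-tuple of distinct points in Omega
# can be mapped to every other one by some g in G (g acts by i -> g[i]).
def is_k_transitive(G, omega, k):
    tuples = list(permutations(omega, k))
    images = {t: {tuple(g[i] for i in t) for g in G} for t in tuples}
    return all(images[t] == set(tuples) for t in tuples)

S3 = [tuple(p) for p in permutations(range(3))]   # full symmetric group S_3
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]            # cyclic group of order 3

assert is_k_transitive(S3, range(3), 2)       # S_3 is 2-transitive
assert is_k_transitive(A3, range(3), 1)       # A_3 is transitive...
assert not is_k_transitive(A3, range(3), 2)   # ...but not 2-transitive
```

The gap between $A_{3}$ and $S_{3}$ is precisely the alternative arising for projections of cubics in Theorem \ref{d3} below.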
We recall that the monodromy group of $\pi_P$ is imprimitive if and only if the projection is decomposable (\cite[Remark 2.2]{PS}).
\section{Topology of finite morphisms}\label{sec3}
We introduce the following definition of what will be for us a \emph{fibration}. For a more complete treatment see \cite[Chapter III sect. 8]{BHPVdV}.
\begin{defn}
A \emph{fibration} is a proper surjective morphism $f: X \to Y$ with connected fibres from a smooth complex variety to a smooth quasi-projective curve.
Let $y \in Y$ be a point. A fibre of $f$ is $F:=f^*(y)=\sum n_iY_i$, where the $Y_i$'s are irreducible components and $n_i \geq 1$ are their multiplicities. A fibre $F$ is called \emph{multiple} if $\gcd\{n_i\}:=m>1$; we will write $F=mE$, where $E=\sum t_iY_i$ with $\gcd\{t_i\}=1$.
\end{defn}
We remark that $f$ is flat since $Y$ is smooth.
We will use the following classical result on homotopy of fibrations.
\begin{prop}\label{lem:Noriprelim}
Let $f: X \to Y$ be a fibration. If $f$ does not have multiple fibres, then the following sequence is exact
\begin{equation*}
\pi_1(F) \to \pi_1(X)\to \pi_1(Y)\to 1
\end{equation*}
where $F$ is a general fibre of $f$.
\end{prop}
The proof is based on a combination of the techniques in \cite[Lemma 1.5]{Nori} that proved the result in the case where every fibre has at least a reduced component, and in \cite{Serrano} that proved the exactness of the sequence for homology groups.
\subsection{Monodromy group of families of maps} \label{sec:limit}
We refer the reader to \cite[Chapter III.9]{Hartshorne} and \cite[Chapter 4.6.7]{Sernesi} for background material on families of algebraic varieties.
Let $\mathcal{X} \to \mathbb{P}^1$ be a (flat) family of projective varieties of dimension $m$ in $\mathbb{P}^N$ parametrized by $\mathbb{P}^1$ and let $Y$ be a smooth projective variety. Consider the following diagram
\begin{equation*}
\xymatrix{
\mathcal{X} \ar[r]^F \ar[d]^p & Y \times \mathbb{P}^1 \ar[dl]^q \\
\mathbb{P}^1 & }
\end{equation*}
Let $p$ and $q$ be proper surjective maps with connected fibres.
A general fibre $ X_s$ of $p$, with $s \in \mathbb{P}^1$, is a smooth projective variety of dimension $m$ in $\mathbb{P}^N$, together with a finite dominant morphism $f_s: X_s \to Y$ of degree $d$.
Denote by $B_s \subset Y$ the branch divisor of $f_s$ and $R_s \subset X_s$ its ramification divisor. Let $M(f_s)$ be the monodromy group of $f_s$.
Let $\mathcal{R}\subset \mathcal{X}$ be the ramification divisor of $F$, i.e. $\mathcal{X} \setminus \mathcal{R}=\lbrace X_s \setminus R_s\ |\ s \in \mathbb{P}^1 \rbrace$. Let moreover $\mathcal{B}$ be the branch divisor of $F$, i.e. $(Y \times \mathbb{P}^1) \setminus \mathcal{B}=:\mathcal{V}=\lbrace (Y \setminus B_s) \times \{s\}\ |\ s \in \mathbb{P}^1 \rbrace$, open inside $Y \times \mathbb{P}^1$. Let $M(F)$ be the monodromy group of $F$.
If all the varieties $X_s$ are reduced we can deduce the following property of monodromy groups.
\begin{prop}\label{lemmagenerale}
In the above setting, assume that every fibre of $p$ is reduced; then
$$M(F)\cong M(f_s)$$ for a general $s \in \mathbb{P}^1$.
\end{prop}
\begin{proof}
By assumption, there is no $s \in \mathbb{P}^1$ such that $Y \times \{s\} \subset \mathcal{B}$. Therefore, the map $q': (Y \times \mathbb{P}^1)\setminus \mathcal{B} \to \mathbb{P}^1$ is a fibration with reduced and connected fibres. Thanks to Proposition \ref{lem:Noriprelim}, the following sequence is exact for a general $s \in \mathbb{P}^1$
\begin{equation*}
\xymatrix{
\pi_1(Y\setminus B_s) \ar@{->>}[r] & \pi_1((Y \times \mathbb{P}^1) \setminus \mathcal{B}) \ar[r] & \pi_1(\mathbb{P}^1)=1. }
\end{equation*}
Combining this together with the monodromy map $\mu$, we have
\begin{equation*}
\xymatrix{
\pi_1(Y \setminus B_s) \ar@{->>}[r] \ar@{->>}[d]^\mu & \pi_1((Y \times \mathbb{P}^1) \setminus \mathcal{B}) \ar@{->>}[d]^\mu\\
M(f_s) \ar@{->>}[r] & M(F)}
\end{equation*}
Moreover, if we have a subvariety $Z \subset Y\times \mathbb{P}^1$ that is not contained in $\mathcal{B}$, then
\begin{equation*}\label{iniettiva}
\pi_1(Z) \hookrightarrow \pi_1((Y \times \mathbb{P}^1) \setminus \mathcal{B}).
\end{equation*}
Taking $Z$ to be a general fibre of $q'$, we also obtain that the map between the monodromy groups is injective. Hence $M(F) \cong M(f_s)$.
\end{proof}
\begin{cor}
In the above assumptions, let $X_0$ be a fibre of $p$ and $f_0: X_0 \to Y$ its dominant morphism. Then $M(f_0) \subseteq M(f_s)$.
\end{cor}
More generally, assume that the fibration $p:\mathcal{X} \to \mathbb{P}^1$ has a singular fibre $F_0=\sum n_i Z_i$ with at least one reduced component $Z_i$.
Let $g_0$ be the map $f_0$ restricted to $Z_i$, i.e. $g_0 = (f_0)_{|Z_i}:Z_i \to Y$, a dominant morphism of degree strictly lower than $d=\deg(f_0)$.
\begin{prop}\label{redcomponent}
For a general $s$ in a neighbourhood of $0$,
$$M(g_0) \subset M(f_s).$$
\end{prop}
\begin{proof}
Let $Z:=Z_i$ and, by abuse of notation, we will still denote by $B_0$ the branch divisor of $g_0$ and $R_0$ its ramification divisor. Let $\sigma \in M(g_0)$. Then there exists $[\gamma] \in \pi_1(Y \setminus B_0,y)$ such that $\mu (\gamma)=\sigma$. Let $\gamma$ be a representative of $[\gamma]$ and let $\Tilde{\gamma}$ be its lifting to $\Tilde{Z}:=Z\setminus R_0$.
The path $\Tilde{\gamma}$ is the image of $[0,1]$ inside $\Tilde{Z}$, such that $\Tilde{\gamma}(0)=z_0$ and $\Tilde{\gamma}(1)=z_1$, where $z_0,z_1$ are two distinct points in the fibre $g_0^{-1}(y)$. We can also assume that we avoid the points in which $Z$ meets the other components $Z_j$ of $F_0$.
The path is compact; consider a tubular neighbourhood $U$ of it. Then, by assumption, the fibration $p$ restricted to $U$ is a locally trivial fibration by Ehresmann's Theorem (\cite[Lemma 4.2]{Catanese}, \cite[Sec 4]{Massey} for manifolds with boundary). Hence, the path $\Tilde{\gamma}$ can be moved in $U$ to a path $\Tilde{\gamma}_s$ in a fibre $X_s,\ s \neq 0$.
Therefore, $M(f_s)\owns\mu\left(p(\Tilde{\gamma}_s)\right)= \sigma$.
\end{proof}
\section{Projections of general hypersurfaces}\label{sec:main}
We now want to apply the previous construction to the following situation. Let $\mathcal{X} \to \Delta$ be a pencil of hypersurfaces in $\mathbb{P}^{n+1}$ parametrised by a disc $\Delta$, a small neighbourhood of $0$.
Its general element is a general hypersurface $X$, and the hypersurface $X_0$ is given by the union of a general hypersurface $Y$ of degree $d-1$ and a hyperplane $H$.
Let $P \in \mathbb{P}^{n+1}$ be a point and $\mathbb{P}^n$ a hyperplane not containing $P$, and consider
\begin{equation*}
\xymatrix{
\widetilde{\mathbb{P}^{n+1}} \ar[d]^\nu \ar[dr]^{\widetilde{\pi_P}} & \\
\mathbb{P}^{n+1} \ar@{-->}[r]^{\pi_P} & \mathbb{P}^n
}
\end{equation*}
where $\nu$ is the blow up of the projective space at $P$ and $\pi_P$ is the projection of $\mathbb{P}^{n+1}$ from $P$.
Consider the linear projection $\pi_s:= (\pi_P)_{|X_s}: X_s \dashrightarrow \mathbb{P}^n$ of a general element $X_s$ in $\mathcal{X}$ with $s \in \Delta$.
Degenerating the hypersurface $X$ to $X_0$ as $s$ goes to $0$, the point $P$ degenerates onto a point $P_0 \in \mathbb{P}^{n+1}$. Note that, if the point $P$ is in $X$, the point $P_0$ is in $X_0$. After a change of coordinates, we can think of the point $P$ as fixed. We have the following diagram
\begin{equation*}
\xymatrix{
\widetilde{\mathcal{X}} \ar[r]^{\widetilde{\pi_P}} \ar[d]^p & \Delta \times \mathbb{P}^n \ar[dl]^q \\
\Delta & }
\end{equation*}
where $\widetilde{\mathcal{X}}$ is the family of the strict transforms $\widetilde{X_s} \subset \widetilde{\mathbb{P}^{n+1}}$ for every $X_s \subset \mathcal{X}$ and $\widetilde{\pi_s}: \widetilde{X_s} \to \mathbb{P}^n $ is a dominant morphism of degree $d$. We recall that, if $P \notin X_s$, then $\widetilde{\pi_s}=\pi_s$ and moreover, the monodromy group does not change when we blow up a smooth point of $X_s$.
\begin{lem}\label{induction}
Let $P \in \mathbb{P}^{n+1}$ and let $\pi_0$ be the map $\pi_P$ restricted to $Y$. Then, the monodromy group $M(\pi_0)$ is contained in the monodromy group $M(\pi_s)$ for a general $s \in \Delta$.
\end{lem}
\begin{proof}
If in the limit $P \notin Y \cap H$, the singular locus of $X_0$, then all the varieties in $\widetilde{\mathcal{X}}$ are reduced. We can apply Proposition \ref{lemmagenerale} and have that $M(\widetilde{\pi_s}) \cong M(\widetilde{\pi_P})$. Moreover, we have that $$M(\pi_0) \subset M(\widetilde{\pi}:\widetilde{X_0} \to \mathbb{P}^n) \subset M(\widetilde{\pi_s})=M(\pi_s).$$
If $P \in Y \cap H$, then we have that $\widetilde{Y}$ is a reduced component of $\widetilde{X_0}$ and $P$ is a smooth point of $Y$. By Proposition \ref{redcomponent}, the monodromy group of $\pi_0$ is still contained in the monodromy group of a general fibre $M(\pi_s)$.
\end{proof}
\medskip
\subsection{Monodromy of general hypersurfaces}
Let $X \subset \mathbb{P}^{n+1}$ be a general hypersurface of degree $d >1$ and let $P \in \mathbb{P}^{n+1}$ be a point. Let $\pi_P$ be the linear projection of $X$ from $P$ and let $M(\pi_P)$ be its corresponding monodromy group. We recall that if $P \in X$, then $M(\pi_P) \subseteq S_{d-1}$, while if $P \notin X$, then $M(\pi_P)\subseteq S_d$.
\begin{rmk}\label{d2}
Every point $P$ is uniform if $d=2$. Indeed, there is no proper transitive subgroup of $S_2$.
\end{rmk}
As a consequence of Theorem \ref{autom} we get the following.
\begin{thm}\label{d3}
Let $d=3$. Then every point is uniform.
\end{thm}
\begin{proof}
If $P \in X$, the degree of $\pi_P: X \dashrightarrow \mathbb{P}^n$ is two and so $M(\pi_P)=S_2$.
Let now $P \notin X$ and assume by contradiction that $P$ is non uniform. Then $M(\pi_P)=A_3$, and so $X$ has a non trivial automorphism. This contradicts Theorem \ref{autom}. Hence $M(\pi_P)=S_3$ for every $P \notin X$.
\end{proof}
We are now ready to prove the main result of the paper.
\begin{thm}\label{interni}
Let $X \subset \mathbb{P}^{n+1}$ be a general hypersurface of degree $d>1$. Then all the points $P \in \mathbb{P}^{n+1}$ are uniform.
\end{thm}
\begin{proof}
The result has already been proved for $d \leq 3$ (Remark \ref{d2} and Theorem \ref{d3}).
We work by induction on $d=\deg(X)$. Assume that every point is uniform for a general hypersurface of degree $d-1$. Let $P \in \mathbb{P}^{n+1}$ be a point and degenerate $X$ onto $X_0=Y \cup H$ as in Lemma \ref{induction}. Recall that the hypersurface $Y$ is general of degree $d-1$.
\medskip
Assume that $P \in X$, hence $P \in X_0$ by degeneration. If $P \in Y$, by induction $M((\pi_0)_{|Y})=S_{d-2}$. Moreover, $S_{d-2} \subseteq M(\pi_P)$ by Lemma \ref{induction}. Therefore, $M(\pi_P)$ is a transitive group acting on a general fibre of $\pi_P$ and, by construction, it contains a subgroup that is $(d-2)$-transitive on $d-2$ points of the fibre. Therefore $M(\pi_P)$ is $(d-1)$-transitive by Lemma \ref{lemma1}, and so $P$ is uniform. If $P \in H$, then $M((\pi_0)_{|Y})=S_{d-1}$. Therefore, applying Lemma \ref{induction} we conclude that $P$ is uniform for $X$.
\medskip
Assume now that $P \notin X$. We recall that, in the degeneration, the point $P$ may be in $X_0$. If $P \notin Y$, then $M((\pi_0)_{|Y})=S_{d-1}$. Moreover, by Lemma \ref{induction}, it is contained in $M(\pi_P)$. Hence $M(\pi_P)$ is a group acting transitively on a general fibre of $\pi_P$ and containing a subgroup that is $(d-1)$-transitive on $d-1$ points of the fibre. Therefore, by Lemma \ref{lemma1}, $M(\pi_P)$ is $d$-transitive, i.e. the point $P$ is uniform.
If $P \in Y$, then $S_{d-2}= M((\pi_0)_{|Y})$. By Lemma \ref{induction} we have that $M(\pi_P)$ contains a subgroup that is $(d-2)$-transitive on $d-2$ points of a general fibre. If moreover $M(\pi_P)$ is primitive, then by Lemma \ref{lemma2} it is $2$-transitive. Applying again Lemma \ref{lemma1}, we get that it is $d$-transitive on a general fibre, i.e. $P$ is uniform.
We are then left to prove that the action of $M(\pi_P)$ is primitive. If $d \geq 5$, the action of $M(\pi_P)$ is clearly primitive, since $d-2$ does not divide $d$ (see Lemma \ref{block}). If $d=4$, assume by contradiction that the map $\pi_P$ is decomposable. The only possibility is that it factors via two maps of degree two $X \stackrel{2:1}{\to} W \stackrel{2:1}{\to} \mathbb{P}^n.$ The first map can be seen as an involution of the general quartic, hence a non trivial automorphism of $X$. This contradicts Theorem \ref{autom}.
Therefore, every point is uniform.
\end{proof}
\section*{Acknowledgements}
The author is supported by MIUR: Dipartimenti di Eccellenza Program (2018-2022) - Dept. of Math. Univ. of Pavia and by PRIN 2017 "Moduli spaces and Lie Theory" code 2017YRA3LK\_003.
I would like to thank Gian Pietro Pirola for introducing me to the problem and for all the help he gave during the preparation of this paper. I also thank Ciro Ciliberto, Riccardo Moschetti, Lidia Stoppino and Thomas Dedieu for helpful discussions and suggestions.
\bibliographystyle{alpha}
| {
"timestamp": "2020-07-21T02:29:47",
"yymm": "2007",
"arxiv_id": "2007.09958",
"language": "en",
"url": "https://arxiv.org/abs/2007.09958",
"abstract": "Let $X$ be a general complex projective hypersurface in $\\mathbb{P}^{n+1}$ of degree $d>1$. A point $P$ not in $X$ is called uniform if the monodromy group of the projection of $X$ from $P$ is isomorphic to the symmetric group. We prove that all the points in $\\mathbb{P}^{n+1}$ are uniform for $X$, generalizing a result of Cukierman on general plane curves.",
"subjects": "Algebraic Geometry (math.AG)",
    "title": "Monodromy of general hypersurfaces"
} |